1 Introduction

Over the past few decades, the study of the dynamical behavior of neural networks (NNs) has attracted widespread attention in domains such as computer vision, object detection, image recognition, fixed-point computations, pattern classification, and quantum communication [1, 3,4,5]. As far as the structure of NNs is concerned, varying the connectivity and learning algorithms produces distinct dynamic behaviors. Moreover, in the context of NNs, the learning process is commonly viewed as the task of modifying the network architecture and connection weights to achieve efficient dynamic behaviors. In general, there are three main learning paradigms for NNs, namely supervised, unsupervised, and hybrid. Among these, unsupervised learning has found effective real-life applications, since it does not require an output associated with each input pattern to train the data set. Instead, an unsupervised learning algorithm explores the inherent structure or correlations among patterns within the data and categorizes them based on these correlations. Bearing this fact in mind, the authors in [6] incorporated a competitive architecture, a type of unsupervised learning scheme, to update the neuron states in NNs and thereby developed competitive NNs (CNNs). The key idea in the design of competitive networks is that of excitatory and inhibitory influences on an artificial neuron. In most existing NNs, neurons with activation functions receive inputs (\(x_i\)) and generate outputs (\(y_i\), \(i\in {\mathbb {N}}_n\)) accordingly, as shown in Fig. 1. By contrast, neurons within CNNs compete with one another to attain activation. In CNNs, the output layer is designed with lateral connections, where each neuron is fed back to itself in a self-excitatory manner and to the other neurons in an excitatory or inhibitory manner, as shown in Fig. 2. As a consequence, the neurons in the output layer compete with one another, owing to the presence of both feed-forward and feedback connection layers. In this idealization, the synaptic weights are time-varying and hence can be modified by external stimulus. Consequently, a network operating within this framework encompasses both the dynamics of neural activity levels, referred to as short-term memory (STM), and the dynamics of synaptic modifications, referred to as long-term memory (LTM). This unique structure allows the network to incorporate the dynamics of both neural activity levels and synaptic modification, making it different from conventional NNs. In consequence, numerous reports on the dynamics of CNNs have been published recently [7,8,9,10,11,12,13,14,15,16].

Fig. 1

Learning Paradigm of NNs

Fig. 2

Learning Paradigm of CNNs

From a parallel perspective, NNs are typically modeled and analyzed using continuous-time differential equations, and it is common practice to approximate these networks by discrete-time difference equations for practical implementation. These approximations provide solutions at discrete time points that are expected to represent samples of the solutions of the original differential equations. Such approximations are commonly used in numerical integration techniques, like the Euler and Runge–Kutta methods, for simulating continuous-time networks on computers. Recently, in [17, 18], the authors discretized continuous-time NNs owing to their easier modeling and practical implementation. Additionally, there are two major benefits to analyzing NNs in the discrete-time context. First, a digital controller can be implemented directly by using the proper technique rather than an analog controller. Second, the synthesized network can be executed directly on a digital processor. These two advantages make discrete-time NNs easy to implement in reality [19, 21]. Even though numerous techniques (such as the Runge–Kutta and Euler schemes) exist for discretizing continuous-time dynamical systems, these numerical schemes can produce fictitious steady-state responses. Indeed, the dynamics resulting from the numerical discretization of differential equations can exhibit misleading steady-state solutions and asymptotic behavior, and these artifacts are not inherent in the original differential equations. To overcome these limitations, the authors in [19] and [21] recently adopted the semi-discretization technique to determine the discrete counterparts of impulsive Cohen–Grossberg and quaternion-valued NNs, respectively, and showed that the resulting discrete-time networks preserve the dynamics of their continuous-time versions. Inspired by these compelling facts, and recognizing that all the previously mentioned studies rely on continuous-time CNNs and that no results have been reported yet on discrete-time CNNs (DT-CNNs), here we discretize the CNNs based on the semi-discretization technique.

Meanwhile, it has been recognized that the qualitative behavior of NNs is usually characterized through an input–output relation and a storage function. The evaluation of the input–output correlation of the examined network is accomplished using an energy-like function (referred to as the storage function) and input-power-like functions (referred to as the supply rate) [20]. The concept of dissipativity is theoretically delineated by these storage functions and supply rates, signifying that the growth of stored energy remains bounded by the quantity of energy introduced by the external environment. In essence, dissipative systems can only dissipate, not generate, energy. Consequently, dissipativity has gained recognition as a fundamental tool for analyzing and stabilizing large-scale systems, including NNs [22,23,24]. However, when there is insufficient information about the external disturbance, the supply rate of the network becomes unpredictable and no longer yields satisfactory performance. To deal with such problems, various robust performance measures, like \(H_\infty \) and \(L_2-L_\infty \), were introduced [25, 26]. As a generalization of all the above robust performance measures, the authors of [27, 28] introduced a novel performance index called extended dissipativity, which contains passivity, dissipativity, \(H_\infty \), and \(L_2-L_\infty \) performance as special cases. In the existing literature, robust performance measures like passivity and dissipativity have been analyzed for CNNs owing to their numerous applications [29,30,31]. It is noteworthy that only a few results are available on the robust performance of continuous-time CNNs, and there are no such results for DT-CNNs. In this paper, the generalized robust performance measure, extended dissipativity, is newly introduced for DT-CNNs, which increases the novelty of this study.

Based on the aforementioned discussions, and given that a numerical analog is essential for performing computational tasks, in this paper the semi-discretization technique is used to study the dynamics of DT-CNNs. Moreover, as mentioned earlier, a generalized performance index is an effective tool for achieving less conservative results. With this motivation in mind, this paper establishes the extended dissipativity performance for delayed DT-CNNs. In deriving the stability criterion, the arduous problem is to construct an appropriate Lyapunov–Krasovskii functional (LKF) for the delayed DT-CNNs and to reach tighter upper bounds on the summation terms in the difference of the LKF. In this paper, an appropriate system-dependent LKF is considered so that it contains more system information. In particular, the LKF is constructed based on both the STM and LTM state vectors, the time-varying delay, and the activation function of the DT-CNNs. In addition, the augmented LKF is constructed based on the state vectors and delay-bound components of the discretized CNNs, which provide various cross-terms of the augmented vectors. Consequently, to obtain tighter bounds, relaxed summation inequalities, obtained by combining tight summation inequalities with matrix bounding techniques, have recently attracted considerable research interest [32, 33]. For instance, in [32], a relaxed summation-based inequality was derived by combining the Wirtinger-based summation inequality and the reciprocally convex matrix inequality (RCMI). Similarly, this combination technique is used in [33], where the auxiliary function-based summation inequality (AFSI) is combined with the extended RCMI (ERCMI) [34]; this relaxed AFSI is adopted here to analyze the extended dissipativity of DT-CNNs. It is noteworthy that the AFSI and ERCMI are generalizations of the Wirtinger-based and Jensen summation inequalities.

This paper aims to provide more insight into investigating the extended dissipativity criteria for DT-CNNs through relaxed AFSI. The key contributions of this article are summarized as follows:

  1. (i)

    The discrete-time analog of the continuous-time CNNs is formulated using the semi-discretization technique, where the corresponding system parameters of the DT-CNNs are obtained accordingly from its continuous-time counterpart. It demonstrates the appropriateness of the developed discrete-time counterparts of CNNs as mathematical models.

  2. (ii)

    The concept of extended dissipativity has been introduced for the discretized CNNs with time-varying delay. Accordingly, the established criteria ensure not only that the discretized network is stable but also that it satisfies common input–output energy performances such as passivity, dissipativity, \(H_{\infty }\), and \(L_2-L_\infty \).

  3. (iii)

    Finally, the effectiveness of the DT-CNNs is investigated through simulation results.

The remaining part of this paper is organized as follows. In Sect. 2, the discrete-time analog of the CNNs is formulated using the semi-discretization technique, where the corresponding system parameters of the DT-CNNs are obtained from their continuous-time counterparts. Some preliminaries are given in Sect. 3. In Sect. 4, sufficient conditions ensuring extended dissipativity are obtained in terms of linear matrix inequalities (LMIs). Section 5 demonstrates the effectiveness of the proposed results with two numerical examples. Section 6 concludes the paper.

Notations: The notations used in this paper are listed below:

\({\mathcal {A}}^{T}\) (\({\mathcal {A}}^{-1}\)): Transpose (inverse) of the matrix \({\mathcal {A}}\)

\(vec_n\{y_i(k)\}\): \([y_{1}(k), y_{2}(k),\ldots , y_{n}(k)]^{T}\)

\({\mathbb {S}}^n\) (\({\mathbb {S}}_+^n\)): Collection of \(n\times n\) symmetric (symmetric positive definite) matrices

\({\mathbb {N}}_{n}\): Finite set of natural numbers \(\{1,2,\ldots ,n\}\)

\({\mathbb {R}}^{m \times n}\): Collection of all \(m \times n\) real matrices

\({\mathbb {R}}^{n}\): Collection of all \(n\)-dimensional real vectors

\({\mathbb {Z}}^+\): Collection of all non-negative integers

\(L_2[0,\infty )\): Space of square integrable vector functions over \([0,\infty )\)

\(diag\{\dots \}\): Block-diagonal matrix

\({\mathcal {A}}>0\,(\ge 0)\): The matrix \({\mathcal {A}}\) is positive definite (positive semi-definite)

\({\mathcal {A}}<0\,(\le 0)\): The matrix \({\mathcal {A}}\) is negative definite (negative semi-definite)

0 and \(I\): Zero and identity matrices of appropriate dimensions, respectively

\(*\): Symmetric entry in a symmetric matrix

\({\mathbb {Z}}[a_{1},a_{2}]\): \(\{a_{1}, a_{1}+1,\ldots , a_{2}\}\) for integers \(a_{1}\) and \(a_{2}\) with \(a_{1}<a_{2}\)

\({\mathcal {C}}({\mathbb {Z}}[a,b],{\mathbb {R}}^n)\): Banach space of all functions mapping \({\mathbb {Z}}[a,b]\) into \({\mathbb {R}}^n\)

2 System formulation

In this section, the discrete-time analog of the continuous-time CNNs is formulated by adopting the semi-discretization technique. Let us consider the following CNNs formulated by continuous-time differential equations:

$$\begin{aligned} \left\{ \begin{array}{ll} \epsilon \dfrac{dx_{S_i}(t)}{dt}&{}=-a_ix_{S_i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_j(x_{S_j}(t)) +\sum \limits _{j=1}^{n}c_{ij}f_j(x_{S_j}(t-\rho (t)))+\sum \limits _{j=1}^{n}D_{ij}x_{L_i}(t),\\ \dfrac{dx_{L_i}(t)}{dt}&{}=-\alpha _ix_{L_i}(t)+\sum \limits _{j=1}^{n}\beta _{ij} f_j(x_{S_j}(t)), \ i\in {\mathbb {N}}_n, \end{array}\right. \end{aligned}$$
(1)

where the first system is called STM and the latter is called LTM; n indicates the number of neurons; \(\epsilon >0\) is the time scale of STM state; \(x_{S_i}(t)\) is the neuron current activity level in the system; \(f_j(x_{S_j}(t))\) and \(f_j(x_{S_j}(t-\rho (t)))\) denote the activation and delayed activation functions of the j-th neuron, respectively; \(x_{L_i}(t)\) is the synaptic efficiency, \(\rho (t)\) is the time-varying delay satisfying \(0\le \rho (t)\le \rho \); \(a_i>0\) is the self-feedback constant; \(b_{ij}\) and \(c_{ij}\) denote the synaptic weights of activation functions; \(D_{ij}\) denotes the strength of the external stimulus; \(\alpha _i>0\) and \(\beta _{ij}\) are scaling constants.

To obtain the discrete-time analog of the considered continuous-time CNNs (1) via the semi-discretization method, we begin by reformulating the network in terms of the step size, obtaining the following for \(t\in \left( \left[ \dfrac{t}{r}\right] r,\left[ \dfrac{t}{r}\right] r+r\right) \)

$$\begin{aligned} \left\{ \begin{array}{ll} \epsilon \dfrac{dx_{S_i}(t)}{dt}=&{}-a_ix_{S_i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_j\left( x_{S_j} \left( \left[ \dfrac{t}{r}\right] r\right) \right) +\sum \limits _{j=1}^{n}c_{ij}f_j \left( x_{S_j}\left( \left[ \dfrac{t}{r}\right] r-\rho \left( \left[ \dfrac{t}{r}\right] r\right) \right) \right) \\ {} &{}+\sum \limits _{j=1}^{n}D_{ij}x_{L_i}\left( \left[ \dfrac{t}{r}\right] r\right) ,\\ \dfrac{dx_{L_i}(t)}{dt}=&{}-\alpha _ix_{L_i}(t)+\sum \limits _{j=1}^{n}\beta _{ij}f_j\left( x_{S_j}\left( \left[ \dfrac{t}{r}\right] r\right) \right) , \ i\in {\mathbb {N}}_n, \end{array}\right. \end{aligned}$$

where \(r>0\) is a constant denoting the uniform discretization step size and \(\left[ \dfrac{t}{r}\right] \) represents the integer part of \(\dfrac{t}{r}\). For notational simplicity, let us write \(\left[ \dfrac{t}{r}\right] =k,\) \(k=0,1,2,\dots \) and \(x_i\left( \left[ \dfrac{t}{r}\right] r\right) =x_i(k r)\triangleq x_i(k)\). Thus, the above system can be reformulated as

$$\begin{aligned} \left\{ \begin{array}{ll} &{} \epsilon \dfrac{dx_{S_i}(t)}{dt}=-a_ix_{S_i}(t)+\sum \limits _{j=1}^{n}b_{ij}f_j \left( x_{S_j}(k)\right) +\sum \limits _{j=1}^{n}c_{ij}f_j\left( x_{S_j} \left( k-\rho (k)\right) \right) +\sum \limits _{j=1}^{n}D_{ij}x_{L_i}(k),\\ &{} \dfrac{dx_{L_i}(t)}{dt}=-\alpha _ix_{L_i}(t)+\sum \limits _{j=1}^{n}\beta _{ij}f_j\left( x_{S_j}(k)\right) , \ \ i\in {\mathbb {N}}_n, \end{array}\right. \end{aligned}$$

for \( t\in \left( kr,(k+1)r\right) \), \(k=0,1,2,\dots .\) The aforementioned system can be restated as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} &{} \dfrac{d}{dt}\left[ x_{S_i}(t)e^{\frac{a_i}{\epsilon }t}\right] =\dfrac{e^ {\frac{a_i}{\epsilon }t}}{\epsilon }\left[ \sum \limits _{j=1}^{n}b_{ij}f_j \left( x_{S_j}(k)\right) +\sum \limits _{j=1}^{n}c_{ij}f_j\left( x_{S_j} \left( k-\rho (k)\right) \right) +\sum \limits _{j=1}^{n}D_{ij}x_{L_i}(k)\right] ,\\ &{} \dfrac{d}{dt}\left[ x_{L_i}(t)e^{\alpha _it}\right] =e^{\alpha _it} \left[ \sum \limits _{j=1}^{n}\beta _{ij}f_j\left( x_{S_j}(k)\right) \right] , \ i\in {\mathbb {N}}_n, \ t\in \left( kr,(k+1)r\right) , \ k=0,1,2,\dots . \end{array}\right. \end{aligned}$$

Now, by integrating the above over \([kr,t)\), where \(t<(k+1)r\), we have

$$\begin{aligned} \left\{ \begin{array}{ll} x_{S_i}(t)e^{\frac{a_i}{\epsilon }t}-x_{S_i}(k)e^{\frac{a_i}{\epsilon }kr}=&{}\dfrac{e^ {\frac{a_i}{\epsilon }t}-e^{\frac{a_i}{\epsilon }kr}}{a_i} \left[ \sum \limits _{j=1}^{n}b_{ij}f_j\left( x_{S_j}(k)\right) +\sum \limits _{j=1}^{n} c_{ij}f_j\left( x_{S_j}\left( k-\rho (k)\right) \right) \right. \\ &{} \left. +\sum \limits _{j=1}^{n}D_{ij}x_{L_i}(k)\right] ,\\ x_{L_i}(t)e^{\alpha _it}-x_{L_i}(k)e^{\alpha _ikr}=&{}\dfrac{e^{\alpha _it}-e^{\alpha _ikr}}{\alpha _i}\left[ \sum \limits _{j=1}^{n}\beta _{ij}f_j\left( x_{S_j}(k)\right) \right] , \ i\in {\mathbb {N}}_n. \end{array}\right. \end{aligned}$$

By using the continuity of \(x_{S_i}(t)\) and \(x_{L_i}(t)\), letting \(t\rightarrow (k+1)r\), and after some simple mathematical calculations, the above equations can be written as follows:

$$\begin{aligned} \left\{ \begin{array}{ll} x_{S_i}(k+1)=&{}x_{S_i}(k)e^{\frac{-a_i}{\epsilon }r}+\phi _i(r) \left[ \sum \limits _{j=1}^{n}b_{ij}f_j\left( x_{S_j}(k)\right) +\sum \limits _{j=1}^{n}c_{ij}f_j\left( x_{S_j}\left( k-\rho (k)\right) \right) \right. \\ &{} \left. +\sum \limits _{j=1}^{n}D_{ij}x_{L_i}(k)\right] ,\\ x_{L_i}(k+1)=&{}x_{L_i}(k)e^{-\alpha _ir}+\varphi _i(r) \left[ \sum \limits _{j=1}^{n}\beta _{ij}f_j\left( x_{S_j}(k)\right) \right] . \end{array}\right. \end{aligned}$$

For convenience, we have let \(\phi _i(r)=\dfrac{1-e^{\frac{-a_i}{\epsilon }r}}{a_i}\) and \(\varphi _i(r)=\dfrac{1-e^{-\alpha _ir}}{\alpha _i}\); since \(a_i>0\), \(\alpha _i>0\), and \(r>0\), we have \(\phi _i(r)>0\) and \(\varphi _i(r)>0\). Moreover, the above model converges to the continuous-time CNNs (1) as \(r\rightarrow 0^{+}\): indeed, for small \(r\), \(e^{\frac{-a_i}{\epsilon }r}\approx 1-\dfrac{a_i}{\epsilon }r\) and \(\phi _i(r)\approx \dfrac{r}{\epsilon }\), so the difference quotient \(\left( x_{S_i}(k+1)-x_{S_i}(k)\right) /r\) recovers the right-hand side of the STM equation in (1), and similarly for the LTM equation.

We assume that the external disturbances in the STM state \(x_S(k)\) and the LTM state \(x_L(k)\) are \(\omega _S(k)\) and \(\omega _L(k)\in L_2[0,\infty )\), respectively. Thus, the vector form of the above model together with external disturbances can be written as follows:

$$\begin{aligned} \left\{ \begin{array}{ll}STM : x_S(k+1)&{}=\bar{A}x_S(k)+\bar{B}f\left( x_S(k)\right) +\bar{C}f \left( x_S\left( k-\rho (k)\right) \right) +\bar{D}x_L(k)+\omega _S(k),\\ \hspace{1.6cm} y_S(k)&{}=Ex_S(k), \\ LTM: {x}_L(k+1)&{}=\bar{{\mathcal {A}}}x_L(k) +\bar{{\mathcal {B}} }f\left( x_S(k)\right) +\omega _L(k),\\ \hspace{1.6cm} y_L(k)&{}=Fx_L(k), \end{array}\right. \end{aligned}$$
(2)

where \(y_S(k)= vec_n\{y_{S_i}(k)\}\in {\mathbb {R}}^n\) and \(y_L(k)= vec_n\{y_{L_i}(k)\}\in {\mathbb {R}}^n\) are the measured output vectors; \(E\) and \(F\in {\mathbb {R}}^{n\times n}\) are real constant matrices; \( x_S(k) = vec_n\{x_{S_i}(k)\}\in {\mathbb {R}}^n\) and \( x_L(k) = vec_n\{x_{L_i}(k)\}\in {\mathbb {R}}^n\) are the STM and LTM state vectors of the DT-CNNs (2); \(f(x_{S}(k))= vec_n\{f_{i}(x_{S_{i}}(k))\} \in {\mathbb {R}}^n \) denotes the (delay-free) activation functions and \(f(x_{S}(k-\rho (k)))= vec_n\{f_{i}(x_{S_{i}}(k-\rho (k)))\} \in {\mathbb {R}}^n \) denotes the delayed activation functions at the \(k^{th}\) instant; \(\bar{A}=diag\{\bar{a}_1,\dots ,\bar{a}_n\}\in {\mathbb {R}}^{n\times n}\) is the state feedback connection weight matrix with \(\bar{a}_i=e^{\frac{-a_i}{\epsilon }r}\); \(\bar{B}\in {\mathbb {R}}^{n\times n}\) and \(\bar{C}\in {\mathbb {R}}^{n\times n}\) with \(\bar{b}_{ij}=\phi _i(r)b_{ij}\) and \(\bar{c}_{ij}=\phi _i(r)c_{ij}\) are the connection and delayed connection weight matrices, respectively; \(\bar{D}\in {\mathbb {R}}^{n\times n}\) represents the strength of the external stimulus with \(\bar{D}_{ij}=\phi _i(r)D_{ij}\); \(\bar{{\mathcal {A}}}=diag\{\bar{\alpha }_1,\dots ,\bar{\alpha }_n\}\in {\mathbb {R}}^{n \times n}\) with \(\bar{\alpha }_i=e^{-\alpha _ir}\); and \(\bar{{\mathcal {B}}}\in {\mathbb {R}}^{n \times n}\) is the matrix with \(\bar{\beta }_{ij}=\varphi _i(r)\beta _{ij}\). The time-varying delay \(\rho (k)\) is assumed to be bounded as \( \rho _1\le \rho (k)\le \rho _2\). Further, the initial conditions of the discretized CNNs (2) are given by \(x_S(\wp )=\xi _S(\wp )\) and \(x_L(\wp )=\xi _L(\wp )\), where \( \xi _S\) and \(\xi _L\in {\mathcal {C}}({\mathbb {Z}}[-\rho _2,0],{\mathbb {R}}^n)\).

Remark 2.1

Contemporary consensus holds that our world is characterized by a “continuous” flow of time and a seamlessly interconnected spatial arrangement. Consequently, the evolutionary mechanics of CNNs are often described using non-linear differential equations in Euclidean space. Although this portrayal is conceptually clear, obtaining the exact dynamical state behavior of continuous-time CNNs in a simulation context is very hard. To surmount this hurdle, one can employ the time-discretization technique, where sequences of points (referred to as trajectories) are generated through iterative mappings. Obtaining discrete-time systems through the semi-discretization technique involves the following steps: (i) Divide the time domain into discrete intervals using a uniform step size, often denoted as ‘r’. This step size determines the time resolution at which the continuous system is approximated. (ii) From the original continuous-time CNNs, derive corresponding difference equations that approximate the system’s behavior within each discrete interval; as shown above, this involves holding the coupling and delayed terms constant over each interval and integrating the remaining linear dynamics exactly. It should be noted that the choice of the time step ‘r’ is crucial; smaller steps generally lead to more accurate approximations.
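To make steps (i) and (ii) concrete, the following minimal numpy sketch maps the continuous-time parameters of (1) to the discrete-time parameters of (2), mirroring the derivation above. The function name `semi_discretize` and its array-based interface are our illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def semi_discretize(a, alpha, B, C, D, Beta, eps, r):
    """Map the continuous-time CNN parameters of (1) to the discrete-time
    parameters of (2) via semi-discretization with step size r > 0."""
    a, alpha = np.asarray(a, float), np.asarray(alpha, float)
    A_bar = np.diag(np.exp(-a * r / eps))        # bar{a}_i = e^{-a_i r / epsilon}
    calA_bar = np.diag(np.exp(-alpha * r))       # bar{alpha}_i = e^{-alpha_i r}
    phi = (1.0 - np.exp(-a * r / eps)) / a       # phi_i(r) > 0
    varphi = (1.0 - np.exp(-alpha * r)) / alpha  # varphi_i(r) > 0
    # Row i of B, C, D (resp. Beta) is scaled by phi_i(r) (resp. varphi_i(r)).
    return (A_bar, phi[:, None] * B, phi[:, None] * C,
            phi[:, None] * D, calA_bar, varphi[:, None] * Beta)
```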

3 Preliminaries

This section presents the assumption, lemma, and definition needed for proving the main results.

Assumption 3.1

For \(l\in {\mathbb {N}}_n\), the activation function \(f_l(\cdot )\) in (2) is assumed to be bounded and to satisfy

$$\begin{aligned} F_l^-\le \dfrac{f_{l}(e_{1})-f_{l}(e_{2})}{e_{1}-e_{2}}\le F_l^+, \ \forall \ e_{1}, e_{2} \in {\mathbb {R}}, \ e_{1}\ne e_{2}, \end{aligned}$$

where \(F_l^-\) and \(F_l^{+}\) are the known constants. For presentation convenience, in the following, we denote \(F_1=diag\{F_1^-F_1^+,\dots ,F_n^-F_n^+\}\) and \(F_2=diag\left\{ \dfrac{F_1^{-}+F_1^{+}}{2},\dots ,\dfrac{F_n^{-}+F_n^{+}}{2}\right\} . \)
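For illustration (this snippet is ours, not from the paper), \(F_1\) and \(F_2\) can be assembled directly from the sector bounds \(F_l^-\) and \(F_l^+\); for instance, \(f_l(s)=\tanh (s)\) has slope in \([0,1]\), so \(F_l^-=0\) and \(F_l^+=1\), matching the bounds used in Example 5.2.

```python
import numpy as np

def sector_matrices(F_minus, F_plus):
    """F_1 = diag{F_l^- F_l^+} and F_2 = diag{(F_l^- + F_l^+)/2}
    from the sector bounds of Assumption 3.1."""
    F_minus, F_plus = np.asarray(F_minus, float), np.asarray(F_plus, float)
    return np.diag(F_minus * F_plus), np.diag((F_minus + F_plus) / 2.0)

# f_l(s) = tanh(s): slope in [0, 1], hence F_1 = 0 and F_2 = 0.5 I
F1, F2 = sector_matrices([0.0, 0.0], [1.0, 1.0])
```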

Lemma 3.2

[33] For given scalars \(b>a>0\), a delay function \(d(k)\) with \(a\le d(k)\le b\), a matrix \(R>0\), any matrix \(S\), and a vector function \(\psi (\cdot ):{\mathbb {Z}}[a,b] \rightarrow {\mathbb {R}}^{n}\), the following inequality holds:

$$\begin{aligned}l\sum \limits _{i=k-b}^{k-a-1} \psi ^{T}(i)R\psi (i)\ge \varepsilon ^{T}(k)\left[ \begin{array}{l}\Lambda _{1}\\ \Lambda _{2} \end{array}\right] ^TR_S\left[ \begin{array}{l}\Lambda _{1}\\ \Lambda _{2} \end{array}\right] \varepsilon (k), \end{aligned}$$

where \(l=b-a\), \(R_S\,{=}\,\left[ \begin{array}{cc}\widetilde{R}+(1-\alpha ){\mathcal {S}}_{1} &{} S\\ *&{} \widetilde{R}+\alpha {\mathcal {S}}_{2} \end{array}\right] \), \({\mathcal {S}}_{1}=\widetilde{R}-S\widetilde{R}^{-1}S^{T}\), \({\mathcal {S}}_{2}=\widetilde{R}-S^{T}\widetilde{R}^{-1}S\), \(\widetilde{R}=diag\{R,3R,5R\}\), \(\alpha =\dfrac{\alpha _{1}}{l}\), \(1-\alpha =\dfrac{\alpha _{2}}{l}\), \(\alpha _{1}=b-d(k)\), \(\alpha _{2}=d(k)-a\), \(\varepsilon ^{T}(k)=\left[ \widehat{\varepsilon }^{T}_{1}(k)\ \widehat{\varepsilon }^{T}_{2}(k)\ \widehat{\varepsilon }^{T}_{3}(k)\ \widehat{\varepsilon }^{T}_{4}(k)\ \widetilde{\varepsilon }^{T}_{1}(k)\ \widetilde{\varepsilon }^{T}_{2}(k)\ \widetilde{\varepsilon }^{T}_{3}(k)\ \widetilde{\varepsilon }^{T}_{4}(k) \right] \) in which \(\widehat{\varepsilon }_{1}(k)=\sum \limits _{i=k-b}^{k-d(k)-1}\psi (i), \ \widehat{\varepsilon }_{2}(k)=\sum \limits _{j=-b}^{-d(k)-1}\sum \limits _{i=k+j}^{k-d(k)-1} \psi (i), \ \widehat{\varepsilon }_{3}(k)= \sum \limits _{j=-b}^{-d(k)-1}\sum \limits _{i=k-b}^{k+j}\psi (i), \ \widehat{\varepsilon }_{4}(k)=\sum \limits _{m=-b}^{-d(k)-1}\sum \limits _{j=-b}^{m}\sum \limits _{i=k-b}^{k+j} \psi (i), \ \widetilde{\varepsilon }_{1}(k)=\sum \limits _{i=k-d(k)}^{k-a-1}\psi (i), \ \widetilde{\varepsilon }_{2}(k)=\sum \limits _{j=-d(k)}^{-a-1}\sum \limits _{i=k+j}^{k-a-1} \psi (i), \ \widetilde{\varepsilon }_{3}(k)=\sum \limits _{j=-d(k)}^{-a-1}\sum \limits _{i=k-d(k)}^{k+j} \psi (i), \ \widetilde{\varepsilon }_{4}(k)=\sum \limits _{m=-d(k)}^{-a-1}\sum \limits _{j=-d(k)}^{m}\sum \limits _{i=k-d(k)}^{k+j} \psi (i)\), \(\Lambda _{1}^T=\Big [\imath _{1}\ \imath _{1}-\dfrac{2}{l_{1}+1}\imath _{2}\ \imath _{1}-\dfrac{6}{l_{1}+1}\imath _{3}+\dfrac{12}{(l_{1}+1)(l_{1}+2)}\imath _{4} \Big ]\), and \(\Lambda _{2}^T=\Big [\imath _{5}\ \imath _{5}-\dfrac{2}{l_{2}+1}\imath _{6}\ \imath _{5}-\dfrac{6}{l_{2}+1}\imath _{7}+\dfrac{12}{(l_{2}+1)(l_{2}+2)}\imath _{8}\Big ]\), for \(\imath _{i}=[0_{n\times (i-1)n}\ I_{n}\ 0_{n\times (8-i)n}]^T\), \(l_{1}=b-d(k)\), and \(l_{2}=d(k)-a\).

Definition 3.3

[35] Let \(\Gamma _i\in {\mathbb {S}}^{2n}\) (\(i\in {\mathbb {N}}_4\)) with \(\Gamma _i=diag\{\Gamma _{i1},\Gamma _{i2}\}\) satisfy the following assumption:

$$\begin{aligned}&\left\{ \begin{array}{lll} \Gamma _1\le 0, \ \ \Gamma _3>0, \Gamma _4\ge 0, \\ \left( ||\Gamma _{1}||+||\Gamma _{2}||\right) ||\Gamma _{4}||=0. \end{array} \right. \end{aligned}$$
(3)

Denote \({\mathcal {J}}(k)=Y^T(k)\Gamma _{1}Y(k)+2Y^T(k)\Gamma _{2}W(k)+W^T(k)\Gamma _{3}W(k).\) For prescribed symmetric matrices \(\Gamma _{i}\), \(i\in {\mathbb {N}}_4\) satisfying the assumption in (3), the discretized system (2) is said to attain the extended dissipative performance under zero initial state, if \(\sum \limits _{i=0}^{d} {\mathcal {J}}(i)\ge \sup \limits _{0\le k\le d}Y^T(k)\Gamma _{4}Y(k)\) holds for any \(d\ge 0\), \(Y(\cdot )=\left[ \begin{array}{l} y_S(\cdot )\\ y_L(\cdot ) \end{array}\right] \) and \(W(\cdot )=\left[ \begin{array}{l} \omega _S(\cdot )\\ \omega _L(\cdot ) \end{array}\right] \).
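For intuition, the inequality of Definition 3.3 can be probed numerically on simulated trajectories. The sketch below is ours and only checks the inequality on given data (it proves nothing by itself): it evaluates the supply rate \({\mathcal {J}}(k)\) row-wise and compares both sides.

```python
import numpy as np

def extended_dissipativity_holds(Y, W, G1, G2, G3, G4, d):
    """Check sum_{i=0}^{d} J(i) >= sup_{0<=k<=d} Y(k)^T G4 Y(k) with
    J(k) = Y^T G1 Y + 2 Y^T G2 W + W^T G3 W, for trajectories stored
    row-wise: Y and W have shape (d+1, 2n)."""
    quad = lambda U, M, V: np.einsum('ki,ij,kj->k', U, M, V)
    J = quad(Y, G1, Y) + 2.0 * quad(Y, G2, W) + quad(W, G3, W)
    return J[:d + 1].sum() >= quad(Y, G4, Y)[:d + 1].max()
```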

4 Extended dissipativity of delayed DT-CNNs

In this section, based on the augmented LKF, the sufficient condition ensuring the extended dissipativity performance for the discretized CNNs (2) is obtained in terms of LMIs. The notations used for the matrices involved in this section are presented in Appendix A.

Theorem 4.1

Let Assumption 3.1 hold. For given scalars \(\rho _1\) and \(\rho _2\), the semi-discretized CNNs (2) attain the extended dissipativity performance if there exist positive definite real matrices \({P}_{i} \in {\mathbb {R}}^{2n\times 2n}\), \(\widetilde{Q}_{i}\), \(\widehat{Q}_{i}\), \({R}_{i}\), \(S\), and \({T}_{i}\) \(\in {\mathbb {R}}^{n\times n}\) \((i\in {\mathbb {N}}_2)\), positive diagonal matrices \(\Lambda _{1}\) and \(\Lambda _{2}\), non-singular matrices \(Z_S\) and \(Z_L\), and any matrices \(V\) and \(X\) of appropriate dimensions, such that the following LMIs hold:

$$\begin{aligned}&\left[ \begin{array}{ccc} \widehat{\Theta }_{1} &{} \widehat{\Xi }_{S2}V^T &{} \widehat{\Xi }_{L2}X^T\\ *&{} -\widetilde{T}_{1} &{} 0\\ *&{} *&{} -\widetilde{T}_{2} \end{array}\right]<0, \quad \left[ \begin{array}{ccc} \widehat{\Theta }_{2} &{} \widehat{\Xi }_{S1}V &{} \widehat{\Xi }_{L1}X\\ *&{} -\widetilde{T}_{1} &{} 0\\ *&{} *&{} -\widetilde{T}_{2} \end{array}\right] <0, \end{aligned}$$
(4)
$$\begin{aligned}&eP_1e^T-E^T\Gamma _{41}E>0, \ eP_2e^T-F^T\Gamma _{42}F>0, \end{aligned}$$
(5)

where \(\widehat{\Theta }_{1}=\widehat{\Theta }-\widehat{\Xi }_{S}T_S\widehat{\Xi }^T_{S}-\widehat{\Xi }_{S2}\widetilde{T}_1\widehat{\Xi }^T_{S2}-\widehat{\Xi }_{L}T_L\widehat{\Xi }^T_{L}-\widehat{\Xi }_{L2}\widetilde{T}_2\widehat{\Xi }^T_{L2}\), \(\widehat{\Theta }_{2}=\widehat{\Theta }-\widetilde{\Xi }_{S}T_S\widetilde{\Xi }^T_{S}-\widetilde{\Xi }_{S1}\widetilde{T}_1\widetilde{\Xi }^T_{S1}-\widetilde{\Xi }_{L}T_L\widetilde{\Xi }^T_{L}-\widetilde{\Xi }_{L1}\widetilde{T}_2\widetilde{\Xi }^T_{L1}\) in which \(\widehat{\Theta }=\Xi _{1}\widehat{P}_1\Xi _{1}^T+\Xi _{2}\widehat{P}_2\Xi _{2}^T+\Xi _{3}\widehat{Q}\Xi _{3}^T+\Xi _{4}\widehat{R}\Xi _{4}^T+\Xi _{5}\widehat{S}\Xi _{5}^T+\iota _{5}(\rho _{12}^{2}T_1)\iota _{5}^T+\iota _{17}(\rho _{12}^{2}T_2)\iota _{17}^T-\Xi _{6}\widehat{F}_{1}\Xi _{6}^{T}-\Xi _{7}\widehat{F}_{2}\Xi _{7}^{T}+ \Xi _{8}(Z_S)\Pi _S+(\Xi _{8}(Z_S)\Pi _S)^T+\Xi _{9}(Z_L)\Pi _L+(\Xi _{9}(Z_L)\Pi _L)^T-\iota _{1}(E^T\Gamma _{11}E)\iota _{1}^T-\iota _{13}(F^T\Gamma _{12}F)\iota _{13}^T-2\iota _{1}(E^T\Gamma _{21})\iota _{12}^T-2\iota _{13}(F^T\Gamma _{22})\iota _{24}^T-\iota _{12}\Gamma _{31} \iota _{12}^T-\iota _{24}\Gamma _{32}\iota _{24}^T\).

Proof

To prove the main result based on Lyapunov stability theory, let us consider the LKF \(V(k)=\sum \limits _{l=1}^{5}V_{l}(k)\) for the semi-discretized CNNs (2), where

$$\begin{aligned} V_{1}(k)=&\beth _{S}^{T}(k){P}_1\beth _{S}(k)+\beth _{L}^{T}(k){P}_2\beth _{L}(k),\\ V_{2}(k)=&\sum \limits _{l=1}^{2}\sum \limits _{i=k-\rho _l}^{k-1}\chi ^{T}(i)Q\chi (i),\\ V_{3}(k)=&\sum \limits _{i=k-\rho (k)}^{k-1}\chi ^{T}(i)R\chi (i) +\sum \limits _{i=-\rho _{2}+1}^{-\rho _{1}}\sum \limits _{j=k+i}^{k-1}\chi ^{T}(j)R\chi (j),\\ V_{4}(k)=&\sum \limits _{i=k-\rho (k)}^{k-1}f^{T}(x_S(i))Sf(x_S(i)) +\sum \limits _{i=-\rho _{2}+1}^{-\rho _{1}}\sum \limits _{j=k+i}^{k-1}f^{T}(x_S(j))Sf(x_S(j)),\\ V_{5}(k)=&\rho _{12}\sum \limits _{i=-\rho _{2}}^{-\rho _{1}-1}\sum \limits _{j=k+i}^{k-1}\left[ \begin{matrix} \eta _S(j)\\ \eta _L(j) \end{matrix}\right] ^T\left[ \begin{matrix} T_1 &{} 0\\ 0 &{} T_2 \end{matrix}\right] \left[ \begin{matrix} \eta _S(j)\\ \eta _L(j) \end{matrix}\right] , \end{aligned}$$

in which \(\beth _S ^T(k)= \left[ \begin{matrix} x_S^T(k)&\sum \limits _{i=k-\rho _{2}}^{k-\rho _1-1}x_S^T(i) \end{matrix}\right] \), \(\beth _L^T(k)= \left[ \begin{matrix} x_L^T(k)&\sum \limits _{i=k-\rho _{2}}^{k-\rho _1-1}x_L^T(i) \end{matrix}\right] \), \(\chi ^T(k)= \left[ \begin{matrix} x_S^T(k)&x_L^T(k) \end{matrix}\right] \), \({R}=diag\{{R}_1,{R}_2\}\), \(Q=diag\{\widehat{Q}_l,\widetilde{Q}_l\}\), \(\eta _S(k)=x_S(k+1)-x_S(k)\), \(\eta _L(k)=x_L(k+1)-x_L(k)\), and \(\rho _{12}=\rho _{2}-\rho _{1}\).

Now, we calculate the finite difference of V(k) along the solution trajectories \(x_S(k)\) and \(x_L(k)\) of (2) as \(\Delta V(k)=\sum \limits _{l=1}^{5}\Delta V_{l}(k)\), where the finite difference of each \(V_{l}(k)\) \((l\in {\mathbb {N}}_5)\) can be computed as follows:

$$\begin{aligned} \Delta V_{1}(k)=&\beth _{S}^{T}(k+1){P}_1\beth _{S}(k+1)+\beth _{L}^{T}(k+1){P}_2\beth _{L}(k+1)\nonumber \\&-\beth _{S}^{T}(k){P}_1\beth _{S}(k)-\beth _{L}^{T}(k){P}_2\beth _{L}(k)\nonumber \\ =&\Upsilon ^{T}(k)\left\{ \Xi _{1}\widehat{P}_1\Xi _{1}^T+\Xi _{2}\widehat{P}_2\Xi _{2}^T\right\} \Upsilon (k). \end{aligned}$$
(6)

Next, we can determine the forward difference of \( V_{2}(k)\) as follows:

$$\begin{aligned} \Delta V_{2}(k)\! =&x_S^T(k)(\widehat{Q}_{1}+\widehat{Q}_{2})x_S(k)\!+x_L^T(k)(\widetilde{Q}_{1}+\widetilde{Q}_{2})x_L(k)\!-\sum \limits _{l=1}^{2}\left( \!x_S^{T}(k-\rho _{l})\widehat{Q}_{l}x_S(k-\rho _{l})\right) \nonumber \\ {}&-\sum \limits _{l=1}^{2}\left( \!x_L^{T}(k-\rho _{l})\widetilde{Q}_{l}x_L(k-\rho _{l})\right) \nonumber \\ =&\Upsilon ^{T}(k)\left\{ \Xi _{3}\widehat{Q}\Xi _{3}^T\right\} \Upsilon (k). \end{aligned}$$
(7)

The finite difference of \(V_{3}(k)\) can be computed as follows:

$$\begin{aligned} \Delta V_{3}(k) \le&\chi ^{T}(k)(\rho _{12}+1)R\chi (k)-\chi ^{T}(k-\rho (k))R\chi (k-\rho (k))\nonumber \\ =&\Upsilon ^{T}(k)\big \lbrace \Xi _{4}\widehat{R}\Xi _{4}^T\big \rbrace \Upsilon (k). \end{aligned}$$
(8)

Now, we can compute \(\Delta V_{4}(k)\) as

$$\begin{aligned} \Delta V_{4}(k) \le&f^T(x_S(k))(\rho _{12}+1)Sf(x_S(k))-f^T(x_S(k-\rho (k)))Sf(x_S(k-\rho (k)))\nonumber \\ =&\Upsilon ^{T}(k)\big \lbrace \Xi _{5}\widehat{S}\Xi _{5}^T\big \rbrace \Upsilon (k). \end{aligned}$$
(9)

Now, \(\Delta V_{5}(k)\) can be computed as

$$\begin{aligned} \Delta V_{5}(k) =&\eta _S^{T}(k)({\rho }_{12}^{2}T_{1})\eta _S(k)+\eta _L^{T}(k)({\rho }_{12}^{2}T_{2})\eta _L(k)-{\rho }_{12}\sum \limits _{i=k-\rho _2}^{k-\rho _1-1}\eta _S^{T}(i)T_{1}\eta _S(i)\nonumber \\&-{\rho }_{12}\sum \limits _{i=k-\rho _2}^{k-\rho _1-1}\eta _L^{T}(i)T_{2}\eta _L(i). \end{aligned}$$
(10)

By utilizing Lemma 3.2, the first summation term of \(\Delta V_{5}(k)\) can be bounded for any matrix V as follows:

$$\begin{aligned}&{\rho }_{12}\sum \limits _{i=k-\rho _2}^{k-\rho _1-1}\eta _S^{T}(i)T_{1}\eta _S(i)\ge \left[ \begin{array}{l} \widehat{\eta }_{S1}(k)\\ \widetilde{\eta }_{S2}(k) \end{array}\right] ^{T}\left[ \begin{array}{cc}\widetilde{T}_{1}+\rho _{1}(k){U}_{1}&{}{V}\\ *&{}\widetilde{T}_{1}+\rho _{2}(k){U}_{2} \end{array}\right] \left[ \begin{array}{l} \widehat{\eta }_{S1}(k)\\ \widetilde{\eta }_{S2}(k) \end{array}\right] , \end{aligned}$$
(11)

where \(\widetilde{T}_{1}=diag\{T_1,3T_1,5T_1\}\), \({U}_{1}=\widetilde{T}_{1}-V\widetilde{T}_{1}^{-1}V^{T}\), \({U}_{2}=\widetilde{T}_{1}-V^{T}\widetilde{T}_{1}^{-1}V\), \(\rho _{1}(k)=\dfrac{\widehat{\rho }_{1}(k)}{\rho _{12}}\), \(\rho _{2}(k)=\dfrac{\widehat{\rho }_{2}(k)}{\rho _{12}}\), \(\widehat{\eta }_{S1}^T(k)=\left[ \begin{array}{lll}\widehat{\varepsilon }^T_{S1}&\widehat{\varepsilon }^T_{S1}\! -\!\frac{2}{\widehat{\rho }_{2}(k)+1}\widehat{\varepsilon }^T_{S2}&\widehat{\varepsilon }^T_{S1}\! -\!\frac{6}{\widehat{\rho }_{2}(k)+1}\widehat{\varepsilon }^T_{S3} +\frac{12}{(\widehat{\rho }_{2}(k)+1)(\widehat{\rho }_{2}(k)+2)}\widehat{\varepsilon }^T_{S4} \end{array}\right] \), \(\widetilde{\eta }_{S2}^T(k)=\left[ \begin{array}{lll}\widetilde{\varepsilon }^T_{S1}&\widetilde{\varepsilon }^T_{S1} -\frac{2}{\widehat{\rho }_{1}(k)+1}\widetilde{\varepsilon }^T_{S2}&\widetilde{\varepsilon }^T_{S1} -\frac{6}{\widehat{\rho }_{1}(k)+1}\widetilde{\varepsilon }^T_{S3} +\frac{12}{(\widehat{\rho }_{1}(k)+1)(\widehat{\rho }_{1}(k)+2)}\widetilde{\varepsilon }^T_{S4} \end{array}\right] \) in which \(\widehat{\rho }_{2}(k)=\rho _{2}-\rho (k),\ \widehat{\rho }_{1}(k)=\rho (k)-\rho _{1}, \ \widehat{\varepsilon }_{S1}=\sum \limits _{i=k-\rho _{2}}^{k-\rho (k)-1}\eta _S(i)\), \(\widehat{\varepsilon }_{S2}=\sum \limits _{i=-\rho _{2}}^{-\rho (k)-1}\sum \limits _{j=k+i}^{k-\rho (k)-1}\eta _S(j)\), \(\widehat{\varepsilon }_{S3}=\sum \limits _{i=-\rho _{2}}^{-\rho (k)-1}\sum \limits _{j=k-\rho _{2}}^{k+i}\eta _S(j)\), \(\widehat{\varepsilon }_{S4}=\sum \limits _{i=-\rho _{2}}^{-\rho (k)-1}\sum \limits _{j=-\rho _{2}}^{i}\sum \limits _{l=k-\rho _{2}}^{k+j}\eta _S(l)\), \(\widetilde{\varepsilon }_{S1}=\sum \limits _{i=k-\rho (k)}^{k-\rho _{1}-1}\eta _S(i)\), \(\widetilde{\varepsilon }_{S2}=\sum \limits _{i=-\rho (k)}^{-\rho _{1}-1}\sum \limits _{j=k+i}^{k-\rho _1-1}\eta _S(j),\) \(\widetilde{\varepsilon }_{S3}=\sum \limits _{i=-\rho (k)}^{-\rho _{1}-1}\sum \limits _{j=k-\rho (k)}^{k+i}\eta _S(j)\), and \(\widetilde{\varepsilon }_{S4}=\sum \limits _{i=-\rho (k)}^{-\rho _{1}-1}\sum \limits _{j=-\rho (k)}^{i}\sum \limits _{l=k-\rho (k)}^{k+j}\eta _S(l)\).

Now, using a simple mathematical calculation, (11) can be rewritten as follows:

$$\begin{aligned} \rho _{12}\sum \limits _{i=k-\rho _{2}}^{k-\rho _{1}-1}\!\!\!\eta _S^{T}(i)T_{1}\eta _S(i)\ge {\Upsilon }^T(k)\left\{ \Xi _S(k){T}_S\Xi _S^T(k)+\sum \limits _{i=1}^{2}\rho _{i}(k)\Xi _{Si}(k)U_{i}\Xi _{Si}^T(k)\right\} {\Upsilon }(k), \end{aligned}$$
(12)

where \(\Xi _S(k)=\left[ \begin{array}{ll} {\Xi }_{S1}(k)&{\Xi }_{S2}(k) \end{array}\right] \).

By following similar steps as above and utilizing Lemma 3.2, for any matrix X, a lower bound for the second summation term of \(\Delta V_5(k)\) can be computed as follows:

$$\begin{aligned} \rho _{12}\sum \limits _{i=k-\rho _{2}}^{k-\rho _{1}-1}\!\!\!\eta _L^{T}(i)T_{2}\eta _L(i)\ge {\Upsilon }^T(k)\left\{ \Xi _L(k){T}_L\Xi _L^T(k)+\sum \limits _{i=1}^{2}\rho _{i}(k)\Xi _{Li}(k)Y_{i}\Xi _{Li}^T(k)\right\} {\Upsilon }(k), \end{aligned}$$
(13)

where \(\widetilde{T}_{2}=diag\{T_2,3T_2,5T_2\}\), \(\Xi _L(k)=\left[ \begin{array}{ll} {\Xi }_{L1}(k)&{\Xi }_{L2}(k) \end{array}\right] \), in which \({Y}_{1}=\widetilde{T}_{2}-X\widetilde{T}_{2}^{-1}X^{T}\), \({Y}_{2}=\widetilde{T}_{2}-X^{T}\widetilde{T}_{2}^{-1}X\).

By substituting (13) and (12) in (10), the following upper bound for \(\Delta V_{5}(k)\) can be obtained:

$$\begin{aligned} \Delta V_{5}(k)\le&{\Upsilon }^{T}(k)\left\{ \iota _{5}(\rho _{12}^{2}T_1)\iota _{5}^T+\iota _{17}(\rho _{12}^{2}T_2)\iota _{17}^T-\Xi _S(k)T_S\Xi _S^T(k)\right. \nonumber \\&-\sum \limits _{i=1}^{2}\rho _{i}(k)\left( \Xi _{Si}(k)U_{i}\Xi _{Si}^T(k)+\Xi _{Li}(k)Y_{i}\Xi _{Li}^T(k)\right) \nonumber \\&\left. -\Xi _L(k)T_L\Xi _L^T(k) \right\} {\Upsilon }(k). \end{aligned}$$
(14)

In view of Assumption 3.1, there exist diagonal matrices \(\Lambda _{1}>0\) and \(\Lambda _{2}>0\) such that the following inequalities hold:

$$\begin{aligned} \left[ \begin{array}{c} x_S(k)\\ f(x_S(k)) \end{array}\right] ^T \left[ \begin{array}{cc} F_1\Lambda _1 &{} -F_2\Lambda _1\\ -F_2\Lambda _1 &{} \Lambda _1 \end{array}\right] \left[ \begin{array}{c} x_S(k)\\ f(x_S(k)) \end{array}\right] \le 0,\end{aligned}$$
(15)
$$\begin{aligned} \left[ \begin{array}{l} x_S(k-\rho (k))\\ f(x_S(k-\rho (k))) \end{array}\right] ^T \left[ \begin{array}{cc} F_1\Lambda _2 &{} -F_2\Lambda _2\\ -F_2\Lambda _2 &{} \Lambda _2 \end{array}\right] \left[ \begin{matrix} ~~~x_S(k-\rho (k))\\ f(x_S(k-\rho (k))) \end{matrix}\right] \le 0. \end{aligned}$$
(16)

Adding the above inequalities (15) and (16), we get

$$\begin{aligned} {\Upsilon }^{T}(k)\big \lbrace \Xi _{6}\widehat{F}_{1}\Xi _{6}^{T}+\Xi _{7}\widehat{F}_{2}\Xi _{7}^{T}\big \rbrace {\Upsilon }(k)\le 0. \end{aligned}$$
(17)

In addition to this, for any non-singular matrices \(Z_S\) and \(Z_L\), the subsequent zero equalities hold from (2):

$$\begin{aligned}&\left( x_S(k+1)\right) ^T2Z_S\Big (\bar{A}x_S(k)+\bar{B}f\left( x_S(k)\right) +\bar{C}f\left( x_S\left( k-\rho (k)\right) \right) +\bar{D}x_L(k)\\&\qquad +\omega _S(k)-x_S(k+1)\Big )=0,\\&\left( x_L(k+1)\right) ^T2Z_L\Big (\bar{{\mathcal {A}}}x_L(k)+\bar{{\mathcal {B}} }f\left( x_S(k)\right) +\omega _L(k)-x_L(k+1)\Big )=0. \end{aligned}$$

The above equations can be equivalently represented as

$$\begin{aligned} {\Upsilon }^{T}(k)\Big \lbrace \Xi _{8}(2Z_S)\Pi _S+\Xi _{9}(2Z_L)\Pi _L\Big \rbrace {\Upsilon }(k)=0, \end{aligned}$$
(18)

where \(\Pi _S=[ \bar{A}-I_{n} ~ 0_{n\times 3n} ~ -I_{n} ~ 0_{n\times 6n} ~ I_n ~ \bar{D} ~ 0_{n\times 11n} \ \bar{B}~ \bar{C} ]\) and \(\Pi _L=[ 0_{n\times 12n} \ \bar{{\mathcal {A}}}\!-\!I_n \ 0_{n\times 3n} \ -I_n \ 0_{n\times 6n} \ I_n\ \bar{{\mathcal {B}} } \ 0_{n\times n}]\).

For given real symmetric matrices \(\Gamma _i\) (\(i\in {\mathbb {N}}_3\)), the supply rate \({\mathcal {J}}(k)\) can be expressed as follows:

$$\begin{aligned} {\mathcal {J}}(k)=&Y^T(k)\Gamma _{1}Y(k)+2Y^T(k)\Gamma _{2}W(k)+W^T(k)\Gamma _{3}W(k)\nonumber \\ =&{\Upsilon }^{T}(k)\Big \lbrace \big (\iota _{1}(E^T\Gamma _{11}E)\iota _{1}^T+\iota _{13}(F^T\Gamma _{12}F)\iota _{13}^T+2\iota _{1}(E^T\Gamma _{21})\iota _{12}^T+2\iota _{13}(F^T\Gamma _{22})\iota _{24}^T\nonumber \\ {}&+\iota _{12}\Gamma _{31} \iota _{12}^T+\iota _{24}\Gamma _{32}\iota _{24}^T\big )\Big \rbrace {\Upsilon }(k). \end{aligned}$$
(19)

Adding up the finite differences of \(V_{l}(k)\) \((l\in {\mathbb {N}}_5)\) derived in (6)–(9) and (14), together with inequality (17) and equations (18) and (19), and performing some elementary algebraic manipulations, we get

$$\begin{aligned} \Delta V(k)-{\mathcal {J}}(k)\le {\Upsilon }^{T}(k)\{ {\Theta }(k)\} {\Upsilon }(k), \end{aligned}$$
(20)

where \({\Theta }(k)=\widehat{\Theta }-\Xi _S(k)T_S\Xi _S^T(k)-\sum \limits _{i=1}^{2}\rho _{i}(k)\Xi _{Si}(k)U_{i}\Xi _{Si}^T(k)-\Xi _L(k)T_L\Xi _L^T(k)-\sum \limits _{i=1}^{2}\rho _{i}(k)\Xi _{Li}(k)Y_{i}\Xi _{Li}^T(k)\).

Assume the LMIs in (4) hold. Then, the following inequalities are obtained by utilizing the Schur complement lemma:

$$\begin{aligned} \widehat{\Theta }_{1}+\widehat{\Xi }_{S2}(V^{T}\widetilde{T}_{1}^{-1}V)\widehat{\Xi }^T_{S2}+\widehat{\Xi }_{L2}(X^{T}\widetilde{T}_{2}^{-1}X)\widehat{\Xi }_{L2}^{T}<{0}, \text { for } \rho (k)=\rho _{1}&,\end{aligned}$$
(21)
$$\begin{aligned} \widehat{\Theta }_{2}+\widetilde{\Xi }_{S1}(V\widetilde{T}_{1}^{-1}V^T)\widetilde{\Xi }^{T}_{S1}+\widetilde{\Xi }_{L1}(X\widetilde{T}_{2}^{-1}X^T)\widetilde{\Xi }^{T}_{L1}<{0}, \text { for } \rho (k)=\rho _{2}. \end{aligned}$$
(22)

In accordance with the convex combination method [36], the inequality \(\Theta (k)<0\) holds for any \(\rho _{1}\le \rho (k)\le \rho _{2}\) if and only if the two vertex inequalities (21) and (22) are satisfied. Hence, from inequality (20) and \(\Theta (k)<0\), we conclude that

$$\begin{aligned} \Delta V(k)-{\mathcal {J}}(k)<0, \ k=0,1,2\dots . \end{aligned}$$
(23)

On the other hand, to establish the extended dissipativity criterion for the discretized CNNs (2), we sum (23) over \(i=0,1,\dots ,k-1\) under the zero initial condition (so that \(V(0)=0\)), which yields

$$\begin{aligned} V(k)<\sum \limits _{i=0}^{k-1}{\mathcal {J}}(i). \end{aligned}$$
(24)

To complete the proof, it remains to verify the inequality in Definition 3.3. From the LMI conditions (5), one can ensure that

$$\begin{aligned} Y^T(k)\Gamma _4Y(k)=&x_S^T(k)E^T\Gamma _{41}Ex_S(k)+x_L^T(k)F^T\Gamma _{42}Fx_L(k)\nonumber \\\le&x^T_S(k)(eP_1e^T)x_S(k)+x^T_L(k)(eP_2e^T)x_L(k)\le V(k). \end{aligned}$$
(25)

Combining the inequalities (24) and (25), we have the following

$$\begin{aligned} Y^T(k)\Gamma _4Y(k)<\sum \limits _{i=0}^{k-1}{\mathcal {J}}(i). \end{aligned}$$
(26)

If \(\Gamma _4=0\), the inequality in Definition 3.3 holds since \(\sum \limits _{i=0}^{d}{\mathcal {J}}(i)>0, \ d\ge 0.\) If \(\Gamma _4\ne 0\), it is evident from assumption (3) that \(\Gamma _{1}=0\), \(\Gamma _{2}=0\) and \(\Gamma _{3}>0\), which leads to \({\mathcal {J}}(k)=W^T(k)\Gamma _{3}W(k)\ge 0\). As a result, for any integer \(d\ge 0\) with \(0\le k\le d\), we have the following inequality:

$$\begin{aligned} \sum \limits _{i=0}^{d}{\mathcal {J}}(i)\ge \sum \limits _{i=0}^{k-1}{\mathcal {J}}(i)\ge V(k)\ge Y^T(k)\Gamma _4Y(k). \end{aligned}$$

By taking the supremum over \(0\le k\le d\), we obtain the extended dissipativity condition. This completes the proof.

Remark 4.2

Theorem 4.1 provides an intuitive criterion for determining the robust performance of the considered DT-CNNs (2). Here, to reduce the conservatism of the stability criterion, an augmented LKF is constructed, which provides valuable structural insight into the DT-CNNs. One of the key challenges in estimating the forward difference of the constructed LKF is bounding the summation terms through summation-based inequalities. On the other hand, matrix-based inequalities, such as the RCMI technique, have been widely used to handle the time-varying delay terms appearing in the finite difference of the LKF. The combination of summation-based and matrix-based inequalities provides a powerful tool for the stability analysis of dynamical systems [32]. For this reason, the relaxed AFSI [33], which generalizes the well-known Jensen and Wirtinger-based summation inequalities, is used in this paper. As a result, the LMIs (4) and (5) obtained from Theorem 4.1 essentially reduce the conservatism of the proposed result.

Remark 4.3

It is worth pointing out that the analyzed extended dissipativity criterion is a composite index comprising several other robust performance measures. By tuning the weighting matrices in Definition 3.3, some well-known performance indices can be recovered as follows (a small construction sketch is given after this list):

  1. (i)

    When \(\Gamma _1=-I_{2n}\), \(\Gamma _2=0_{2n}\), \(\Gamma _3=\gamma ^2 I_{2n}\), and \(\Gamma _4=0_{2n},\) the extended dissipativity in Definition 3.3 reduces to \(H_{\infty }\) performance.

  2. (ii)

    When \(\Gamma _1=0_{2n}\), \(\Gamma _2=I_{2n}\), \(\Gamma _3=\gamma I_{2n}\), and \(\Gamma _4=0_{2n}\), we obtain passivity performance.

  3. (iii)

    The strict \(\left( \Gamma _1,\Gamma _2,\Gamma _3\right) -\) dissipativity can be obtained by setting \(\Gamma _4=0_{2n}\). Moreover, \(\left( \Gamma _1,\Gamma _2,R\right) -\) dissipativity is obtained by setting \(\Gamma _3=R-\gamma I_{2n}\) and \(\Gamma _4=0_{2n}\).

  4. (iv)

    When \(\Gamma _1=0_{2n}\), \(\Gamma _2=0_{2n}\), \(\Gamma _3=\gamma ^2 I_{2n}\), and \(\Gamma _4=I_{2n}\), the \(L_2-L_{\infty }\) performance is deduced from Definition 3.3.
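The following minimal sketch (ours, not from the paper) assembles the four weighting-matrix choices above for a network with \(n\) neurons, so that each block matches the \(2n\)-dimensional augmented output \(Y(k)\); the function name `gamma_matrices` and its interface are illustrative assumptions.

```python
import numpy as np

def gamma_matrices(index, n, gamma, G1=None, G2=None, R=None):
    """Weighting matrices (Gamma_1, ..., Gamma_4) of Definition 3.3 for the
    special cases of Remark 4.3; every block is 2n x 2n."""
    I, Z = np.eye(2 * n), np.zeros((2 * n, 2 * n))
    if index == 'Hinf':           # case (i)
        return -I, Z, gamma ** 2 * I, Z
    if index == 'passivity':      # case (ii)
        return Z, I, gamma * I, Z
    if index == 'dissipativity':  # case (iii): (Gamma_1, Gamma_2, R)-dissipativity
        return G1, G2, R - gamma * I, Z
    if index == 'L2Linf':         # case (iv)
        return Z, Z, gamma ** 2 * I, I
    raise ValueError(index)

# e.g. the H-infinity setting for the 2-neuron examples below (illustrative level):
G1, G2, G3, G4 = gamma_matrices('Hinf', 2, gamma=0.8)
```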

5 Illustrative example

In this section, two numerical examples are provided to illustrate the validity and merits of the theoretical findings. In the existing literature, almost all results concerning the stability of CNNs deal with the continuous-time analog. Generally, in the discretization process, the derived discrete-time analog (2) of the CNNs should faithfully preserve the convergence characteristics of the continuous-time CNNs (1). To demonstrate this, in the first example we adopt the same system parameters as the continuous-time CNNs in [15], and in the second example we adopt the parameters of [14]. The simulation results show that the semi-discretized CNNs exhibit dynamical behavior similar to that of their continuous-time counterparts.

Example 5.1

In this example, we consider the same system parameters as the continuous-time CNNs in [15]. The corresponding system parameters of the DT-CNNs (2) are obtained by the semi-discretization technique with step size \(r=0.7\) as follows:

\(\bar{A}=\left[ \begin{matrix} 0.4317 &{} 0\\ 0 &{} 0.4025 \end{matrix}\right] ,\) \(\bar{B}=\left[ \begin{matrix} 0.5209 &{} -0.0568\\ -2.2980 &{} 1.0111 \end{matrix}\right] ,\) \(\bar{C}=\left[ \begin{matrix} -0.7104 &{} 0.9471\\ -1.3788 &{} -1.1490 \end{matrix}\right] ,\)

\(\bar{D}=\left[ \begin{matrix} 0.0947 &{} 0\\ 0 &{} 0.1149 \end{matrix}\right] , \) \(\bar{{\mathcal {A}}}=\left[ \begin{matrix} 0.6126 &{} 0\\ 0 &{} 0.7047 \end{matrix}\right] ,\) \(\bar{{\mathcal {B}}}=\left[ \begin{matrix} -0.6641 &{} 0\\ 0 &{} 0.6497 \end{matrix}\right] ,\) \(E=G=0.5I\), \(F=H=0.4I.\) Let the activation function be \(f_i(s)=\tanh (0.1 s)\) for \(i\in {\mathbb {N}}_2\), for which Assumption 3.1 yields the bounds \(F_1=diag\{ 0,0\}\) and \(F_2=diag\{ 0.1,0.1\}.\) The time-varying delay \(\rho (k)\) and the disturbance inputs \(\omega _S(k)\) and \(\omega _L(k)\) are taken as \(\rho (k)=3+\cos (\frac{k \pi }{2})\), \(\omega _S(k)=\left[ \frac{0.01}{1+k^2}\ \frac{0.01}{1+k^2} \right] ^T\), and \(\omega _L(k)=\left[ \frac{0.04}{1+k^2}\ \frac{0.04}{1+k^2} \right] ^T\), respectively. Accordingly, the bounds of the time-varying delay can be taken as \(\rho _1=2\) and \(\rho _2=4\); a simulation sketch of recursion (2) with these parameters is given below.
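The trajectories in Figs. 3 and 4 can be reproduced, up to the choice of initial data, with the following minimal sketch of recursion (2). The constant initial history \(\xi _S(\wp )\equiv x_S(0)\) and the specific initial vectors are our illustrative assumptions.

```python
import numpy as np

A  = np.diag([0.4317, 0.4025])
B  = np.array([[ 0.5209, -0.0568], [-2.2980,  1.0111]])
C  = np.array([[-0.7104,  0.9471], [-1.3788, -1.1490]])
D  = np.diag([0.0947, 0.1149])
cA = np.diag([0.6126, 0.7047])                 # \bar{cal A}
cB = np.diag([-0.6641, 0.6497])                # \bar{cal B}
f   = lambda x: np.tanh(0.1 * x)
rho = lambda k: int(round(3 + np.cos(k * np.pi / 2)))   # values in {2, 3, 4}

K = 60
xS, xL = np.zeros((K + 1, 2)), np.zeros((K + 1, 2))
xS[0], xL[0] = [1.5, -1.0], [-0.5, 1.0]        # sample initial conditions (ours)
for k in range(K):
    wS = np.full(2, 0.01 / (1 + k ** 2))       # omega_S(k)
    wL = np.full(2, 0.04 / (1 + k ** 2))       # omega_L(k)
    xS_del = xS[max(k - rho(k), 0)]            # constant pre-history assumption
    xS[k + 1] = A @ xS[k] + B @ f(xS[k]) + C @ f(xS_del) + D @ xL[k] + wS
    xL[k + 1] = cA @ xL[k] + cB @ f(xS[k]) + wL
```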

Fig. 3

Simulation results for the discretized CNNs (2) for Example 1

Fig. 4

Simulation results for the CNNs (2) under different initial conditions

Based on the above system parameters, the state trajectories of the STM \(x_{S_i}(k)\) and LTM \(x_{L_i}(k), \ i\in {\mathbb {N}}_2\), of the discretized CNNs (2) are shown in Fig. 3. From the simulation results, it is evident that the discretized CNNs converge in the presence of the external disturbances \(\omega _S(k)\) and \(\omega _L(k)\in L_2[0,\infty )\). Moreover, to confirm the validity of the theoretical results, simulations for distinct initial conditions from the interval \([-2, \ 2]\) are reported in Fig. 4.

Based on the above parameters and by choosing the appropriate values of the weighting matrices \(\Gamma _i\), \(i\in {\mathbb {N}}_4\), we obtain the following dynamic performance:

  1. (i)

    \(H_\infty \) performance: By choosing the weighting matrices \(\Gamma _1=-I_{4}\), \(\Gamma _2=0_{4}\), \(\Gamma _3=\gamma ^2I_{4}\), and \(\Gamma _4=0_{4}\), the extended dissipativity criterion reduces to the \(H_\infty \) performance. For different values of the upper delay bound \(\rho _2\) and fixed \(\rho _1\), Table 1 lists the minimum \(H_\infty \) level \(\gamma \).

  2. (ii)

    Passivity performance: The extended dissipativity index reduces to the passivity performance by choosing \(\Gamma _1=0_{4}\), \(\Gamma _2=I_{4}\), \(\Gamma _3=\gamma I_{4}\), and \(\Gamma _4=0_{4}\). The minimum passivity performance level is listed in Table 1 for different values of \(\rho _2\) and fixed \(\rho _1\).

  3. (iii)

    \(L_2-L_{\infty }\) performance: By letting \(\Gamma _1=0_{4}\), \(\Gamma _2=0_{4}\), \(\Gamma _3=\gamma ^2 I_{4}\), and \(\Gamma _4=I_{4}\), we obtain the \(L_2-L_{\infty }\) performance. Table 1 displays the minimum \(L_2-L_{\infty }\) performance level \(\gamma \) for various \(\rho _2\) and fixed \(\rho _1\).

  4. (iv)

    \(\left( \Gamma _1,\Gamma _2,R\right) -\) dissipativity: We obtain the dissipativity performance by choosing the matrices \(\Gamma _1=-I_{4}\), \(\Gamma _2=0.1I_{4}\), \(\Gamma _3=0.5 I_{4}-\gamma I_4 \), and \(\Gamma _4=0_4\) (so that assumption (3) is satisfied). Further, the maximum performance level \(\gamma \) of \(\left( \Gamma _1,\Gamma _2,R\right) -\) dissipativity is reported in Table 2 for different upper delay bounds \(\rho _2\) and fixed lower delay bound \(\rho _1\).

Table 1 Minimum robust performance level \(\gamma \) for \(\rho _{1}=2\) and various \(\rho _{2}\)
Table 2 Maximum dissipativity level \(\gamma \) for \(\rho _{1}=2\) and various \(\rho _{2}\)

Hence, from Table 1, one can observe that the larger the value of \(\rho _2\), the larger the obtained performance index \(\gamma \). In parallel, for the same \(\rho _2\), the performance level \(\gamma \) obtained using Theorem 4.1 is smaller. Furthermore, from Table 2, it is clear that the obtained dissipativity level \(\gamma \) is inversely proportional to the upper delay bound \(\rho _2\), which verifies the advantage of the presented method.

Example 5.2

Consider the continuous-time CNNs (1) with the same parameters as in the numerical example of [14]. By employing the semi-discretization technique demonstrated in Sect. 2, we obtain the DT-CNNs (2) corresponding to the continuous-time CNNs in [14], where the discretized coefficient matrices for step size \(r=0.9\) are as follows:

\(\bar{A}=\left[ \begin{matrix} 0.7634 &{} 0\\ 0 &{} 0.8353 \end{matrix}\right] ,\) \(\bar{B}=\left[ \begin{matrix} 1.9718 &{} -0.0789\\ -0.1235 &{} 2.8828 \end{matrix}\right] ,\) \(\bar{C}=\left[ \begin{matrix} -1.5775 &{} -0.3944\\ -0.2471 &{} -1.6473 \end{matrix}\right] ,\) \(\bar{D}=\left[ \begin{matrix} 1.8930 &{} -0.2366\\ 0.3706 &{} 0.4942 \end{matrix}\right] , \) \(\bar{{\mathcal {A}}}=\left[ \begin{matrix} 2.4596 &{} 0\\ 0 &{} 2.4596 \end{matrix}\right] ,\) \(\bar{{\mathcal {B}}}=\left[ \begin{matrix} -0.2335 &{} 0.0018\\ 0.0073 &{} -0.1095 \end{matrix}\right] ,\) \(E=G=I\), \(F=H=I.\) Here, the activation function is chosen as \(f_i(s)=\tanh (s)\) for \(i\in {\mathbb {N}}_n\), and based on Assumption 3.1 the corresponding lower and upper bounds are taken as \(F_1=diag\{ 0,0\}\) and \(F_2=diag\{ 0.5,0.5\},\) respectively. The time-varying delay \(\rho (k)\) and the disturbance inputs \(\omega _S(k)\) and \(\omega _L(k)\) are the same as in Example 5.1. Further, by solving the LMI conditions (4) and (5) in Theorem 4.1 using the MATLAB YALMIP toolbox, the required feasibility matrices can be obtained; a simplified flavor of such a feasibility check is sketched below.
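The LMIs (4)–(5) involve the Appendix A block matrices and are therefore not reproduced here; the paper solves them with YALMIP in MATLAB. Purely as a flavor of how such semidefinite feasibility problems are posed, the following cvxpy sketch (our choice of toolchain, not the paper's) checks a basic discrete-time Lyapunov LMI for the linear part of the LTM subsystem of Example 5.1, whose \(\bar{{\mathcal {A}}}\) has spectral radius less than one.

```python
import cvxpy as cp
import numpy as np

calA = np.diag([0.6126, 0.7047])      # \bar{cal A} from Example 5.1
n = calA.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-6
constraints = [P >> eps * np.eye(n),                          # P > 0
               calA.T @ P @ calA - P << -eps * np.eye(n)]     # A^T P A - P < 0
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status)                    # 'optimal' signals feasibility
```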

Fig. 5

Simulation results for the discretized CNNs (2) for Example 2

Based on the above parameters, the simulation results for the state responses of the discretized CNNs (2) are shown in Fig. 5 for several initial conditions from the interval \([-1, \ 1]\). This demonstrates the practical applicability of the DT-CNNs (2) as numerical analogs of the existing continuous-time CNNs. Further, the simulation results show that the constructed discretized CNNs (2) exhibit dynamical behavior similar to that of the original continuous-time CNNs, which illustrates the validity of the proposed results.

Remark 5.3

When discretizing continuous-time NNs, two critical concerns come to the forefront: (i) formulating discrete-time analogs of continuous-time NNs by discretizing the time variable, and (ii) ensuring that these discrete-time analogs accurately capture the dynamics of their continuous-time counterparts. Regarding the first concern, according to numerical analysis theory, there exist numerous methods to derive discrete-time analogs from the continuous-time dynamical CNNs (1). However, as emphasized in the literature [21], discretization might fail to preserve the continuous-time dynamics, even with a small sampling period. To overcome this limitation, the semi-discretization technique is utilized in this paper to achieve an accurate representation of the dynamic behavior exhibited by the continuous-time CNNs; a small numerical check of this property is sketched below. To address the second concern, in the aforementioned examples we employed the identical system parameters of the continuous-time CNNs. As a result, from the figures, we can guarantee that the dynamics of the continuous-time CNNs and the DT-CNNs are consistent.
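The exactness of the semi-discrete map within one step can be checked directly: freezing the coupling term over \([kr,(k+1)r)\), the LTM update of Sect. 2 coincides with the exact flow of the frozen equation, which a fine Euler integration approximates. The scalar parameters below are our illustrative values, not those of [14, 15].

```python
import numpy as np

alpha, beta, f0, r = 0.8, 0.3, 0.25, 0.9   # illustrative values (ours)
xL0 = 1.0

# semi-discrete LTM update: exact for the input frozen over the step
xL_semi = xL0 * np.exp(-alpha * r) + (1 - np.exp(-alpha * r)) / alpha * beta * f0

# fine Euler reference for dxL/dt = -alpha xL + beta f0 on [0, r]
x, m = xL0, 10000
for _ in range(m):
    x += (r / m) * (-alpha * x + beta * f0)

print(abs(xL_semi - x))   # ~1e-5: the semi-discrete map matches the frozen flow
```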

6 Conclusion

In this paper, the discrete-time analog of continuous-time CNNs has been formulated by using the semi-discretization technique. The extended dissipativity performance has been examined for the semi-discretized CNNs. In addition, a novel LKF and the relaxed AFSI have been utilized to obtain tighter upper bounds on the summation terms in the forward difference of the LKF. Further, to ensure that the constructed DT-CNNs retain the dynamics of their continuous-time analogs, the numerical simulations adopted the same parameters as the continuous-time CNNs of [14, 15], verifying the effectiveness of the theoretical results. It should be noted that fractional-order models are recognized for representing natural phenomena more accurately than integer-order models. As a result, it would be interesting to extend our findings to discrete-time fractional-order NNs.