Abstract
In this article, we investigate exponential lag synchronization results for Cohen–Grossberg neural networks with discrete and distributed delays on an arbitrary time domain by applying feedback control. We formulate the problem using the time scales theory so that the results apply to any uniform or nonuniform time domain. We also provide a comparison showing that the obtained results unify and generalize the existing results. Our main tools are the unified matrix-measure theory and the Halanay inequality. In the last section, we provide two simulated examples for different time domains to show the effectiveness and generality of the obtained analytical results.
1 Introduction
Since the 1980s, neural networks (NNs), including recurrent NNs, Hopfield NNs, cellular NNs, and bidirectional associative NNs, have been a subject of intense study because of their large number of potential applications in many fields, such as pattern classification, signal and image processing, optimization problems, associative memory, parallel computing, and so on. In 1983, Cohen and Grossberg [1] introduced the CGNNs, which are recognized as one of the most important and typical NNs because some other well-known NNs, for example, recurrent NNs, cellular NNs, and Hopfield NNs, are special cases of CGNNs. As a result, these types of networks have attracted considerable research attention and have been extensively studied in terms of their dynamical properties such as state estimation [2], periodicity [3], stability [4, 5], boundedness [6], and synchronization [7, 8]. Furthermore, due to the importance of discrete-time CGNNs as discussed in [9], the dynamics of discrete-time CGNNs have become a popular research topic; see, for example, [10,11,12,13].
Synchronization is one of the most important qualitative properties of dynamic systems; it means that two or more dynamic systems converge to a common dynamical behaviour through some coupling or external forcing. The concept of synchronization in drive–response systems was first introduced by Pecora and Carroll [14], and since then, it has captured increasing attention from both a fundamental and an application-driven perspective. Potential applications of synchronization can be found in many areas of applied science, such as harmonic oscillation generation, information science, human heartbeat regulation, chemical and biological systems, and secure communication [15,16,17]. In the last few years, various types of synchronization phenomena have been discovered and investigated, such as exponential synchronization [18, 19], complete synchronization [20], finite-time synchronization [21, 22], lag synchronization [23], adaptive synchronization [24, 25], and projective synchronization [19, 26]. Among them, lag synchronization has been extensively studied [27,28,29,30] due to its relevance in connected electronic networks, where constant time shifts between drive and response systems can make complete synchronization difficult to implement effectively.
In practical applications, both discrete and continuous dynamic systems play a significant role, but results for them are often studied separately. In 1988, Hilger [31] introduced the so-called time scale theory (or measure chain theory), which unifies the separate analyses of discrete and continuous dynamic systems into a single comprehensive analysis. However, the study of dynamic systems is not limited to just discrete and continuous-time domains. In fact, there are many other time domains which can be useful to study the dynamic behaviour of dynamic systems more accurately. For example, to model the growth process of some species like Magicicada Septendecim, Magicicada Cassini, and Pharaoh Cicada, we need a time domain of the form \({\mathbb {T}}=\cup _{k=0}^\infty [k(a+b), k(a+b)+b], \ a,b \in (0,\infty )\). Further, there exist neurons in the brain that follow a pattern of being active during the day and inactive at night. Intuitively, the dynamic behaviour of these neurons can be observed in the time domain \({\mathbb {T}} = \bigcup _{l=0}^\infty [24l, 24l+d_l],\) where \(d_l\) denotes the number of active hours of the neurons on each day; see Fig. 1.
Another example is an RLC circuit (see Fig. 2): if the capacitor discharges over small time units \(\delta >0\) at periodic intervals of l time units, the dynamics of such a model can be described on the time scale \({\mathbb {T}} = \bigcup _{l=0}^\infty [l, l+1-\delta ].\)
These examples require a time domain which is neither purely discrete nor purely continuous. However, the time scale theory can overcome such difficulties, as it gives the freedom to work on a general domain; i.e., the results obtained by using time scales remain valid for uniform and nonuniform time domains such as nonoverlapping closed intervals, a mixture of closed intervals and discrete points, and even a discrete nonuniform time domain. Thus, we can summarize the above by stating that “unification and extension” are the two main features of the time scale theory. Therefore, it is worthwhile to investigate dynamic equations on time scales. For more studies on time scales, one can refer to the monograph [32].
In the last few years, the study of dynamic equations on time scales has drawn a tremendous amount of attention across the world, and many researchers have found applications in many fields, such as epidemiology, economics, and control theory [33, 34]. Recently, many authors have also established different types of qualitative behaviours of dynamic systems on time scales, for example, the existence of solutions, stability analysis, stabilization, and synchronization [35,36,37,38,39]. Also, a few authors have established the existence of periodic, anti-periodic, and almost-periodic solutions and their stability for CGNNs [40,41,42,43,44,45]. In [43], the authors studied the existence of an anti-periodic solution and exponential stability for CGNNs with time-varying delays on time scales. In [44], the authors established the existence and global exponential stability of almost periodic solutions for CGNNs with distributed delays on time scales, while in [45], the authors considered impulsive CGNNs with distributed delays on time scales and studied the existence and exponential stability of periodic solutions by using Lyapunov functions, M-matrix theory, and coincidence degree theory.
Despite the growing interest in the study of dynamic equations on time scales, the synchronization problem of CGNNs on time scales has not been studied so far, to the best of our knowledge. Therefore, to fill this gap, in this work, we establish exponential lag synchronization results for CGNNs with discrete and distributed time delays on time scales by using feedback control, a novel unified matrix-measure technique, and the Halanay inequality. In short, the main focus and benefits of this manuscript can be summarized as follows:
- The CGNNs with discrete and distributed delays on arbitrary time domains are considered to study exponential lag synchronization.
- The problem is formulated by using the time scales theory, and the results are derived based on a unified matrix-measure theory and the Halanay inequality.
- Results for different special cases are given, which show that the obtained results unify and generalize the existing results.
- A simulated example for different time scales, including continuous, discrete, and nonoverlapping closed intervals, is given to verify the obtained analytical outcomes.
The remaining part of the manuscript is organized as follows: In Sect. 2, we recall basic concepts from matrix theory and time scales that are essential for the subsequent sections. In Sect. 3, we formulate our statement of the problem. In Sect. 4, the main results are discussed. Finally, in Sect. 5, two numerical examples with simulation are given to verify the obtained results.
2 Preliminaries
Throughout this paper, the notations \({\mathbb {R}}, {\mathbb {Z}}\) and \({\mathbb {N}}\) denote the sets of all real numbers, integers, and natural numbers, respectively; \({\mathbb {T}}\) denotes a time scale; \(\emptyset \) denotes the empty set; \({\mathbb {R}}^n\) and \({\mathbb {R}}^{n \times m}\) denote the n-dimensional Euclidean space and the set of all \(n \times m\) real matrices, respectively; \({{\,\textrm{diag}\,}}\{\ldots \}\) denotes a diagonal matrix; the superscript \(*\) denotes the matrix transpose; \({{\,\textrm{Id}\,}}\) and \({{\,\textrm{O}\,}}\) denote the identity and zero matrices of appropriate dimensions, respectively; \([a,b]_{\mathbb {T}} = [a,b] \cap {\mathbb {T}}\) denotes a time scale interval. For any \(a, b \in {\mathbb {R}}\), \(C([a,b],{\mathbb {R}}^n)\) denotes the set of continuous functions from [a, b] into \({\mathbb {R}}^n\); \(\Vert \cdot \Vert _p \ (p =1,2,\infty )\) denotes the \(p\)-norm of a vector or a matrix.
Next, we recall some basic definitions and results about time scale calculus.
A time scale is an arbitrary nonempty closed subset of the real numbers \({\mathbb {R}}\) with the topology and ordering inherited from \({\mathbb {R}}\). \(h{\mathbb {Z}} \ (h>0)\), \({\mathbb {R}},\) \({\mathbb {P}}_{a,b} = \cup _{k=0}^\infty [k(a+b), k(a+b) + a]\) for \(a,b \in (0,\infty )\), and any discrete set are some examples of time scales. The forward and backward jump operators \(\sigma , \rho : {\mathbb {T}} \rightarrow {\mathbb {T}}\) are defined by \(\sigma (t) = \inf \{ s \in {\mathbb {T}}: s>t\}\) and \(\rho (t) = \sup \{ s \in {\mathbb {T}}: s<t \},\) respectively, with the conventions \(\inf \emptyset = \sup {\mathbb {T}}\) and \(\sup \emptyset = \inf {\mathbb {T}}\). Also, the graininess function \(\mu :{\mathbb {T}} \rightarrow [0,\infty )\) is given by \(\mu (t) = \sigma (t) - t\). A point \(t\in {\mathbb {T}}\) is called right-dense if \(t<\sup {\mathbb {T}}\) and \(\sigma (t) = t\), left-dense if \(t>\inf {\mathbb {T}}\) and \(\rho (t) = t\), right-scattered if \(\sigma (t)>t\), and left-scattered if \( \rho (t) <t\). If \({\mathbb {T}}\) has a left-scattered maximum M, then we set \({\mathbb {T}}^k = {\mathbb {T}}\setminus \{M\}\), otherwise \({\mathbb {T}}^k = {\mathbb {T}}\).
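For illustration, the jump operator and graininess of the example time scale \({\mathbb {P}}_{a,b}\) can be computed explicitly. The following is a minimal sketch (the function name and the handling of interval endpoints are our own; the input t is assumed to lie in the time scale):

```python
import math

def sigma_mu_P(t, a, b):
    """Forward jump sigma(t) and graininess mu(t) on the time scale
    P_{a,b} = U_k [k(a+b), k(a+b) + a], with a, b > 0 and t assumed in P_{a,b}.
    """
    period = a + b
    k = math.floor(t / period)
    s = t - k * period           # position of t inside the k-th period
    if s < a:                    # interior point of [k(a+b), k(a+b)+a]
        return t, 0.0            # right-dense: sigma(t) = t, mu(t) = 0
    # t is the right endpoint k(a+b)+a: right-scattered, jump over the gap
    return (k + 1) * period, b

# a = 1, b = 2: t = 0.5 is right-dense, t = 1.0 jumps to the next interval
print(sigma_mu_P(0.5, 1.0, 2.0))   # (0.5, 0.0)
print(sigma_mu_P(1.0, 1.0, 2.0))   # (3.0, 2.0)
```

Note how \(\mu (t)\) switches between 0 (inside the closed intervals) and b (at the right-scattered endpoints), which is exactly the case distinction used repeatedly in the proofs below.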
Definition 1
[ [35], Def. 1] Let \(f: {\mathbb {T}} \rightarrow {\mathbb {R}}\) be a function. Then the delta derivative of f at a point \(t \in {\mathbb {T}}^k\) is defined as the number \(f^\Delta (t)\) (provided it exists) such that for each \(\epsilon > 0\) there exists a neighborhood U of t satisfying \(\big | [f(\sigma (t)) - f(s)] - f^\Delta (t)[\sigma (t) - s] \big | \le \epsilon |\sigma (t) - s| \ \text {for all } s \in U.\)
Further, if the neighborhood U is replaced by a right-sided neighborhood \(U^+\), then the resulting derivative is called the upper right Dini delta-derivative and is denoted by \(D_\Delta ^+f(t)\).
Remark 1
In the above Definition 1, if \(\mu (t) = 0\), then the delta derivative \(f^\Delta (t)\) becomes the ordinary derivative \(f^\prime (t)\), and the upper right Dini delta-derivative \(D_\Delta ^+f(t)\) becomes the ordinary upper right Dini derivative \(D^+f(t)\). Further, if \({\mathbb {T}} = h {\mathbb {Z}}, \ h>0\), then the delta derivative \(f^\Delta (t)\) becomes the h-difference operator, i.e., \(f^\Delta (t) = \frac{f(t+h)-f(t)}{h}\).
Remark 2
If \(f: {\mathbb {T}} \rightarrow {\mathbb {R}}\) is differentiable at \(t\in {\mathbb {T}}^k\), then the forward jump operator \(\sigma \) and the delta derivative of f are related by the formula \(f(\sigma (t)) = f(t) + \mu (t) f^\Delta (t)\).
A function \(f:{\mathbb {T}}\rightarrow {\mathbb {R}}\) is called regressive (respectively, positive regressive) if \(1+\mu (t)f(t) \ne 0\) (respectively, \(>0\)) for all \(t\in {\mathbb {T}}\). Also, f is called regulated provided its right-sided limits exist (finite) at all right-dense points of \({\mathbb {T}}\) and its left-sided limits exist (finite) at all left-dense points of \({\mathbb {T}}\). Furthermore, f is called an rd-continuous function if it is regulated and continuous at all right-dense points of \({\mathbb {T}}\). The collections of all rd-continuous functions and of all rd-continuous regressive (respectively, rd-continuous positive regressive) functions from \({\mathbb {T}}\) to \({\mathbb {R}}\) are denoted by \(C_{rd}({\mathbb {T}},{\mathbb {R}})\) and \({\mathcal {R}}\) (respectively, \({\mathcal {R}}^+\)).
Definition 2
[ [37], Def. 2.6] For any \(p \in {\mathcal {R}}\) and \(t \in {\mathbb {T}}^k\), we define \(\ominus p\) by \((\ominus p)(t) = -\frac{p(t)}{1+\mu (t)p(t)}.\)
Remark 3
If \(p \in {\mathcal {R}}\), then \(\ominus p \in {\mathcal {R}}\).
Next, we define the time scales version of the exponential function.
Definition 3
[ [32], Def. 2.30] Let \(p\in {\mathcal {R}}\); then we define the exponential function on time scales by \(e_p(t,s) = \exp \left( \int _s^t \xi _{\mu (\tau )}(p(\tau )) \, \Delta \tau \right) \ \text {for } s,t \in {\mathbb {T}},\) with the cylinder transformation \(\xi _h(z) = \frac{{{\,\textrm{Log}\,}}(1+hz)}{h}\) for \(h \ne 0\) and \(\xi _0(z) = z\).
Next, we define the deltaintegral on time scales.
Definition 4
[ [32], Def. 1.71] Let \(f: {\mathbb {T}} \rightarrow {\mathbb {R}}\) be a regulated function; then a function \(F: {\mathbb {T}} \rightarrow {\mathbb {R}}\) is called an antiderivative of f if \(F^\Delta (t) = f(t)\) holds for all \(t \in {\mathbb {T}}^k\). Also, we define the Cauchy integral by \(\int _s^t f(\tau ) \, \Delta \tau = F(t) - F(s) \ \text {for all } s,t \in {\mathbb {T}}.\)
Remark 4
For any \(a, b \in {\mathbb {T}}\) and \( f \in C_{rd}({\mathbb {T}},{\mathbb {R}})\), if we set \(\mathbb {T=R}\), then we have \(\int _a^b f(t) \, \Delta t = \int _a^b f(t) \, dt.\)
Further, if \([a, b)_{\mathbb {T}}\) consists of only isolated points, then we have \(\int _a^b f(t) \, \Delta t = \sum _{t \in [a,b)_{\mathbb {T}}} \mu (t) f(t).\)
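As a quick sanity check of the isolated-point case (our own illustration, not from the paper): on \(h{\mathbb {Z}}\) the delta integral \(\int _a^b f(t)\,\Delta t\) reduces to the finite sum \(\sum _{t \in [a,b)_{\mathbb {T}}} \mu (t) f(t)\), since \(\mu (t) = h\) at every point.

```python
# On T = 0.5*Z, the delta integral of f(t) = t^2 over [0, 2)_T reduces to
# the finite sum over t in {0, 0.5, 1.0, 1.5} with constant graininess 0.5.
f = lambda t: t * t
h = 0.5
delta_integral = sum(h * f(k * h) for k in range(4))
print(delta_integral)  # 1.75
```

As \(h \rightarrow 0\) this Riemann-type sum approaches the classical integral \(\int _0^2 t^2\,dt = 8/3\), consistent with the continuous case of Remark 4.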
Next, we recall some basics from matrixmeasure theory.
Definition 5
( [39], Def. 1) The generalized matrix-measure and the classical matrix-measure of a real square matrix \(W = (w_{kl})_{n \times n}\) with respect to the \(p\)-norm \((p=1, 2\) or \(\infty )\) are defined by \(M_p(W,h) = \frac{\Vert {{\,\textrm{Id}\,}} + hW\Vert _p - 1}{h}\) and \(\mu _p(W) = \lim _{h \rightarrow 0^+} \frac{\Vert {{\,\textrm{Id}\,}} + hW\Vert _p - 1}{h},\) respectively, where \(h>0\). The matrix norms and the corresponding classical matrix-measures are given in Table 1.
Definition 6
( [39], Def. 2 ) Let \(W \in {\mathbb {R}}^{n\times n}\) be a real matrix and let \({\mathbb {T}}\) be an arbitrary time scale. Then the unified matrix-measure on \({\mathbb {T}}\) with respect to the \(p\)-norm \((p=1, 2\) or \(\infty )\) is defined as \(M_p(W,{\mathbb {T}}) = \mu _p(W)\) if \(\mu (t) = 0\), and \(M_p(W,{\mathbb {T}}) = \frac{\Vert {{\,\textrm{Id}\,}} + \mu (t)W\Vert _p - 1}{\mu (t)}\) if \(\mu (t) > 0\).
Note that for \({\mathbb {T}}= {\mathbb {R}}\) and \({\mathbb {T}} = h{\mathbb {Z}}\), \(h>0\), Definition 6 reduces to Definition 5.
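Both measures are straightforward to evaluate numerically. The sketch below uses the standard closed-form expressions for the classical measures \(\mu _1, \mu _2, \mu _\infty \) (cf. Table 1); the function names are ours, not from the paper:

```python
import numpy as np

def classical_measure(W, p):
    """Classical matrix-measure mu_p(W) = lim_{h->0+} (||Id + hW||_p - 1)/h
    via the standard closed forms for p = 1, 2, inf."""
    W = np.asarray(W, dtype=float)
    n = W.shape[0]
    if p == 1:    # max over columns: diagonal entry + off-diagonal column sum
        return max(W[j, j] + sum(abs(W[i, j]) for i in range(n) if i != j)
                   for j in range(n))
    if p == 2:    # largest eigenvalue of the symmetric part (W + W^*)/2
        return float(np.max(np.linalg.eigvalsh((W + W.T) / 2.0)))
    # p = inf: max over rows: diagonal entry + off-diagonal row sum
    return max(W[i, i] + sum(abs(W[i, j]) for j in range(n) if j != i)
               for i in range(n))

def unified_measure(W, mu_t, p):
    """Unified matrix-measure of Definition 6: the generalized measure
    (||Id + mu(t)W||_p - 1)/mu(t) when mu(t) > 0, and the classical
    measure mu_p(W) when mu(t) = 0."""
    W = np.asarray(W, dtype=float)
    if mu_t == 0:
        return classical_measure(W, p)
    ord_p = {1: 1, 2: 2}.get(p, np.inf)
    Id = np.eye(W.shape[0])
    return (np.linalg.norm(Id + mu_t * W, ord_p) - 1.0) / mu_t
```

For example, `unified_measure(W, 0, p)` corresponds to \({\mathbb {T}} = {\mathbb {R}}\), while `unified_measure(W, h, p)` corresponds to \({\mathbb {T}} = h{\mathbb {Z}}\), matching the reduction noted above.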
3 Statement of Problem
We consider a class of CGNNs with discrete and distributed delays on time scales of the following form:
where \(y(t) = [y_1(t),y_2(t),\ldots ,y_n(t)]^* \in {\mathbb {R}}^n\) is the state vector; \( R= (r_{ij})_{n \times n} \in {\mathbb {R}}^{n \times n}, S= (s_{ij})_{n \times n} \in {\mathbb {R}}^{n \times n}\) and \( T= (t_{ij})_{n \times n} \in {\mathbb {R}}^{n \times n}\) are the connection, discrete-delay connection, and distributed-delay connection strength matrices, respectively; \(\eta _1 \ (>0)\) and \(\eta _2 \ (>0)\) are the discrete and distributed delays, respectively, such that \(t-\eta _1 \in {\mathbb {T}}\) and \(t-\eta _2 \in {\mathbb {T}}\); \(\eta = \max \{\eta _1,\eta _2\}\); \(\Gamma (y(t)) = {{\,\textrm{diag}\,}}\{\Gamma _1(y(t)), \Gamma _2(y(t)), \ldots , \Gamma _n(y(t))\} \in {\mathbb {R}}^{n \times n}\) is the state-dependent amplification function; \(\Upsilon (y(t)) = [\Upsilon _1(y(t)),\Upsilon _2(y(t)),\ldots ,\Upsilon _n(y(t))]^* \in {\mathbb {R}}^n\) is the appropriate behaviour function; \({\mathcal {F}}(y(\cdot )) = [{\mathcal {F}}_1(y(\cdot )),{\mathcal {F}}_2(y(\cdot )),\ldots ,{\mathcal {F}}_n(y(\cdot ))]^* \in {\mathbb {R}}^n\) denotes the activation function; I is the external bias term; and \(\phi \in C_{rd}([-\eta ,0]_{\mathbb {T}}, {\mathbb {R}}^n)\) is the initial function.
In this paper, we shall establish synchronization results by using the drive–response technique. Therefore, we consider system (1) as the drive system and, correspondingly, we consider a response system described as follows:
where \(z(t) \in {\mathbb {R}}^n\); \(\psi \in C_{rd}([-\eta ,0]_{\mathbb {T}}, {\mathbb {R}}^n)\); and \(u(t)\) is the control function defined as
where K is the feedback gain matrix and \(\beta \) is the transmittal delay such that \(t-\beta \in {\mathbb {T}}\).
Remark 5
The considered class of CGNNs is defined on a general time domain, and hence, it contains the usual continuous-time CGNNs, discrete-time CGNNs, and many more. For example, if we consider the continuous-time domain, i.e., \({\mathbb {T}} = {\mathbb {R}}\), then (see Remark 1) the drive system (1) becomes
and the response system (2) becomes
where \(t \in [0,\infty )\), and the rest of the parameters are the same as defined previously. Also, if we choose the \(h\)-difference discrete-time domain, i.e., \({\mathbb {T}}=h{\mathbb {Z}}\), \(h>0\), then (see Remark 1 and Remark 4) the drive system (1) is converted to
and the response system (2) is converted to
where \(t \in [0,\infty )_{h {\mathbb {Z}}}\). Furthermore, by applying the above-mentioned cases to the nonoverlapping time domain \({\mathbb {T}} =\cup _{i=0}^\infty [i, i+h], \ 0<h<1,\) the concrete expression of the drive system (1) can be derived as
and the response system (2) can be derived as
The main idea of synchronization is that the response system (2) utilizes a feasible controller to synchronize itself with the drive system (1). Mathematically, this is captured by the following definition.
Definition 7
The drive system (1) and the response system (2) are said to be exponentially lag-synchronized in the time-scale sense under the control protocol (3) if there exist two constants \(C >0\) and \(\nu > 0\) such that the following inequality holds: \(\Vert z(t) - y(t-\beta ) \Vert _p \le C e_{\ominus \nu }(t,0), \ t \in [0,\infty )_{\mathbb {T}}.\)
Remark 6
In the above Definition 7, if \(\beta = 0\), then the drive system (1) and the response system (2) are called exponentially synchronized.
Now, to prove the synchronization results, we define the error between the drive system (1) and the response system (2) by \(\zeta (t) = z(t) - y(t-\beta )\); then the error dynamics can be written as
where \(\zeta (t)\in {\mathbb {R}}^n\) and
From the definition of \(\zeta (t)\), it is clear that if the error system (10) is exponentially stable, then the drive system (1) and the response system (2) are exponentially lag-synchronized. Therefore, our goal is to show the exponential stability of the error system (10).
To deal with the lag delay, we set \(y(s) = \phi (-\eta )\) for all \(s \in [-\eta -\beta , -\eta ]_{\mathbb {T}}\) and
then we can define the initial condition for the error system (10) as follows:
In order to prove the main results, we need the following assumption.
Assumption 1
[ [25], Ass. A1,A2] The functions \(\Gamma , \Upsilon \) and \({\mathcal {F}}\) are Lipschitz continuous and bounded. In particular, for any \(y,z\in {\mathbb {R}}^n\), there exist positive constants \(L_\Gamma , L_\Upsilon , L_{\mathcal {F}}\) such that
Also, there exist positive constants \(M_\Gamma , M_\Upsilon , M_{\mathcal {F}}\) such that
We note that typical choices of activation functions, like \(\tanh \) or the sigmoid, fulfil this assumption. Moreover, anticipating the nature of the estimates in the following section, we can state the following relaxation of Assumption 1.
Remark 7
If the states can be confined a priori to a bounded set \(\Omega \subset {\mathbb {R}}^n\), then the Lipschitz and boundedness conditions need only be established on \(\Omega \).
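Returning to Assumption 1, here is a quick numerical illustration (ours, not part of the proofs) that the common choice \(\tanh \) satisfies both conditions, with Lipschitz constant and bound equal to 1:

```python
import numpy as np

# Sanity check on a dense grid: difference quotients of tanh never exceed 1
# (Lipschitz condition) and |tanh| stays below 1 (boundedness condition).
x = np.linspace(-10.0, 10.0, 2001)
lipschitz_estimate = np.max(np.abs(np.diff(np.tanh(x)) / np.diff(x)))
bound_estimate = np.max(np.abs(np.tanh(x)))
print(lipschitz_estimate, bound_estimate)  # both are below 1
```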
4 Exponential Lag Synchronization Results
In this section, we provide the main results of this manuscript. First, we state an important lemma that is useful for establishing these results.
Lemma 1
( [39], Lemma 2) For any real scalars c and d such that \(c>d>0\) and \(c \in {\mathcal {R}}^+\), let x(t) be a nonnegative right-dense continuous function satisfying
where \(D^+_\Delta x(t)\) is the upper right Dini delta-derivative of x at t. Then the inequality
holds, where \(\lambda >0\) is a solution of the inequality \(\lambda + d \exp (\lambda \eta )<c\).
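Since \(g(\lambda ) = \lambda + d \exp (\lambda \eta )\) is strictly increasing with \(g(0) = d < c\), the largest admissible decay rate is the unique root of \(g(\lambda ) = c\) and can be located by bisection. A minimal sketch (the function name is ours):

```python
import math

def halanay_rate(c, d, eta, tol=1e-10):
    """Largest lambda > 0 with lambda + d*exp(lambda*eta) <= c.

    g(lambda) = lambda + d*exp(lambda*eta) is strictly increasing and
    g(0) = d < c, so the admissible rates form an interval (0, lambda*];
    we locate lambda* by bisection.
    """
    if not c > d > 0:
        raise ValueError("the lemma requires c > d > 0")
    g = lambda lam: lam + d * math.exp(lam * eta)
    lo, hi = 0.0, 1.0
    while g(hi) < c:          # grow the bracket until g(hi) >= c
        hi *= 2.0
    while hi - lo > tol:      # bisection on the increasing function g
        mid = 0.5 * (lo + hi)
        if g(mid) <= c:
            lo = mid
        else:
            hi = mid
    return lo
```

This is how the "maximum rates of convergence" quoted in the examples of Sect. 5 can be computed from the constants appearing in the theorems.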
Now, we are ready to give the first main result of this article in the following theorem.
Theorem 1
Let Assumption 1 hold. If, for some \(p \in \{1,2,\infty \}\), there exist a nonsingular matrix Z and a control gain matrix K such that \({\mathcal {M}}_1^p - {\mathcal {M}}_2^p >0 \) and \({\mathcal {M}}_1^p \in {\mathcal {R}}^+\), where
and \(M_p(\cdot ,{\mathbb {T}})\) denotes the unified matrix-measure as defined in Definition 6, then the drive system (1) and the response system (2) are exponentially lag-synchronized.
Proof
For any nonsingular matrix Z, we define
Now, for any arbitrary point \(t \in {\mathbb {T}}\), from the definition of \(\mu (t)\), we have either \(\mu (t) =0\) or \(\mu (t)>0\). Therefore, we split the proof into the following two steps:
Step 1: When \(\mu (t)>0\), for any such \(t \in {\mathbb {T}}\) we have
Now, from the definition of \(\tilde{\Gamma }, \tilde{\Upsilon }, \tilde{\mathcal {F}}\) and Assumption 1, we have
Similarly, one can obtain
and
Now, from the inequalities (11), (12), (13), (14), (15) and (16), we get
Hence, using Definition 1, we get
Step 2: When \(\mu (t)=0\), the delta derivative coincides with the classical derivative; therefore, by using the expansion \(y(t+h) = y(t) + y'(t)h + o(h)\) with \(\lim _{h\rightarrow 0} \frac{\Vert o(h)\Vert _p}{h} = 0\), we can calculate
Hence, using Definition 1 again, we get the same inequality as (17).
Thus, from the above two steps, for any \(t \in {\mathbb {T}}\), we have
Therefore, from Lemma 1, we get
where \(\lambda \) is the solution of \(\lambda + {\mathcal {M}}_2^p \exp (\lambda \eta ) \le {\mathcal {M}}_1^p\). Further, it is clear that
where \(C= \Vert Z\Vert _p \Vert Z^{-1}\Vert _p \sup _{s \in [t-\eta ,t]_{\mathbb {T}}} \Vert \zeta (s)\Vert >0\). Hence, from Definition 7, the error
system (10) is exponentially stable, and hence, the drive system (1) and the response system (2) are exponentially lag-synchronized. \(\square \)
Remark 8
By choosing \(Z={{\,\textrm{Id}\,}}\), the constants \({\mathcal {M}}_1^p\) and \({\mathcal {M}}_2^p\) of Theorem 1 become
Next, we consider a particular case of the considered problem by setting \(\Gamma (y(t)) = {{\,\textrm{Id}\,}}\) and \(\Upsilon (y(t)) = Qy(t)\), where \(Q={{\,\textrm{diag}\,}}\{q_1,q_2,\ldots ,q_n \} \in {\mathbb {R}}^{n \times n}\) with \(q_i>0, \ i=1,2,\ldots ,n\). Then the drive system (1) and the response system (2) become
and
respectively. Also, the error system (10) becomes
where \(\hat{\mathcal {F}}(\zeta (\cdot )) = {\mathcal {F}}(z(\cdot )) - {\mathcal {F}}(y(\cdot -\beta ))\).
Remark 9
A remark similar to Remark 5 holds for the drive system (18) and the response system (19).
Now, we give sufficient conditions for the exponential lag synchronization of the systems (18)–(19).
Theorem 2
Let \({\mathcal {F}}\) satisfy the Lipschitz and boundedness conditions stated in Assumption 1. If, for some \(p \in \{1,2,\infty \}\), there exist a nonsingular matrix Z and a control gain matrix K such that \({\mathcal {M}}_3^p - {\mathcal {M}}_4^p >0 \) and \({\mathcal {M}}_3^p \in {\mathcal {R}}^+\), where
then the drive system (18) and response system (19) are exponentially lagsynchronized.
Proof
For any nonsingular matrix Z, we define
Similar to the proof of Theorem 1, we consider the following two steps:

Step 1: When \(\mu (t)>0\), for any such \(t \in {\mathbb {T}}\) we have
Hence, from Definition 1, we get
Step 2: When \(\mu (t)=0\), then, proceeding as in Step 2 of the proof of Theorem 1, we get
Hence, using Definition 1 again, we get the same inequality as (21).
Thus, from the above two steps, for any \(t \in {\mathbb {T}}\), we have
Therefore, from Lemma 1, we get \( V(\zeta (t)) \le \sup _{s \in [t-\eta ,t]_{\mathbb {T}}} V(\zeta (s)) e_{\ominus \lambda } (t,0), \) where \(\lambda \) is the solution of \(\lambda + {\mathcal {M}}_4^p \exp (\lambda \eta ) \le {\mathcal {M}}_3^p\). Further, it is clear that \(\Vert \zeta (t) \Vert _p = \Vert Z^{-1} Z \zeta (t) \Vert _p \le C e_{\ominus \lambda }(t,0), \) where \(C= \Vert Z\Vert _p \Vert Z^{-1}\Vert _p \sup _{s \in [t-\eta ,t]_{\mathbb {T}}} \Vert \zeta (s)\Vert >0\). Hence, from Definition 7, the error
system is exponentially stable, and hence, the drive system (18) and the response system (19) are exponentially lag-synchronized. \(\square \)
Remark 10
Similar to Remark 8, by choosing \(Z={{\,\textrm{Id}\,}}\), the constants \({\mathcal {M}}_3^p\) and \({\mathcal {M}}_4^p\) of Theorem 2 become
Remark 11
In the case when there is no distributed time-delay in the systems (1)–(2) (or (18)–(19)), i.e., when \(\eta _2 = 0\), one can establish all the above results by setting the corresponding terms to zero in the computation of the constants \({\mathcal {M}}_1^p\) and \({\mathcal {M}}_2^p\) (or \({\mathcal {M}}_3^p\) and \({\mathcal {M}}_4^p\)).
Remark 12
The results of Theorem 1 and Theorem 2 cover the problem in full generality; therefore, one can obtain results for particular time domains, such as the continuous-time domain (when \({\mathbb {T}}={\mathbb {R}}\)) and the discrete-time domain (when \({\mathbb {T}}= {\mathbb {Z}}\)), by evaluating the matrix-measures involved in the constants \({\mathcal {M}}_1^p,{\mathcal {M}}_2^p,{\mathcal {M}}_3^p\) and \({\mathcal {M}}_4^p\) according to Definition 5.
Remark 13
For the continuous-time domain, a few authors have reported synchronization results for CGNNs with mixed delays [21, 23, 24, 28]. In particular, in [23], the authors considered a class of CGNNs with mixed delays and studied exponential lag synchronization via periodically intermittent control and a mathematical induction technique. In [21], the authors studied finite-time synchronization of CGNNs with mixed delays by using the Lyapunov–Krasovskii functional approach. Furthermore, only a few authors have studied the synchronization problem of discrete-time CGNNs [11, 13]. In particular, the authors in [13] studied exponential synchronization results for an array of coupled discrete-time CGNNs with time-dependent delay by applying the Lyapunov–Krasovskii functional approach, while in [11], the authors investigated the existence of a bounded unique solution, exponential stability, and synchronization by using fixed point and inequality techniques.
Remark 14
All the results obtained on continuous-time [21, 23, 24, 28] and discrete-time [11, 13] CGNNs were studied separately. The continuous-time or discrete-time CGNN results cannot be directly applied, nor easily extended, to the case of CGNNs on arbitrary time domains. Moreover, there is no manuscript on the continuous-time or discrete-time domain that discusses exponential lag synchronization results for CGNNs with mixed delays by using the matrix-measure approach and the Halanay inequality; therefore, the results of this manuscript are completely new even for the continuous case (\({\mathbb {T}} = {\mathbb {R}}\)) and the discrete case (\({\mathbb {T}} = {\mathbb {Z}}\)).
5 Illustrated Examples
In this section, we provide two examples to illustrate the obtained results for different time domains. Whereas the first example is tailored to best illustrate the potential of our theoretical results with respect to arbitrary time domains, the second example is borrowed from [46] to show the general applicability of our methods.
Example 1
Consider the drive system (1) and response system (2) with the following coefficients
One can confirm that for Example 1, \(\Gamma , \Upsilon \), and \({\mathcal {F}}\) satisfy Assumption 1 with \(L_\Gamma = L_\Upsilon = 0.2, L_{\mathcal {F}}=M_{\mathcal {F}}= 0.8, M_\Gamma =0.6, M_\Upsilon = 0.5\). Now, we consider the following three different time domains.
Case 1. \({\mathbb {T}} = {\mathbb {R}}\). Let \(\eta _1 = 0.5, \eta _2=0.8\) and \(\beta = 0.4\). Here, \(\eta =0.8\) and the graininess function \(\mu (t)=0\) for all \( t \in {\mathbb {R}}\). The state trajectories and the error trajectories of the systems (1)–(2) without feedback control are shown in Figs. 3 and 4, respectively. Clearly, from Figs. 3 and 4, the drive system (1) and the response system (2) are not synchronized.
However, for the control gain matrix
we can calculate
and
Hence,
Therefore, we can see that \({\mathcal {M}}_1^1 - {\mathcal {M}}_2^1 = -0.1672 <0,\) \({\mathcal {M}}_1^2 - {\mathcal {M}}_2^2 = 2.2700 >0,\) and \({\mathcal {M}}_1^\infty - {\mathcal {M}}_2^\infty = 0.0728 >0\). Also, \({\mathcal {M}}_1^2, {\mathcal {M}}_1^\infty \in {\mathcal {R}}^+\). Hence, for \(p=2, \infty \), all the conditions of Theorem 1 hold, and thus, the systems (1)–(2) with feedback control (3) are exponentially lag-synchronized, with maximum rates of convergence 1.0366 and 0.0394 for \(p = 2\) and \(p=\infty \), respectively. The synchronized curves and the synchronization error curves with feedback control are shown in Figs. 5 and 6, respectively.
Case 2. \({\mathbb {T}} = 0.5 {\mathbb {Z}}\). Let \(\eta _1=\eta _2=\beta = 0.5\). Here, \(\eta =0.5\) and the graininess function \(\mu (t)=0.5\) for all \( t \in 0.5{\mathbb {Z}}\). The state trajectories and the error trajectories of the systems (1)–(2) without feedback control are shown in Figs. 7 and 8, respectively, which are clearly not synchronized.
However, for the control gain matrix
we can calculate
and
Hence,
Therefore, we can see that \({\mathcal {M}}_1^1 - {\mathcal {M}}_2^1 = -0.1560 <0,\) \({\mathcal {M}}_1^2 - {\mathcal {M}}_2^2 = 0.0812 >0,\) and \({\mathcal {M}}_1^\infty - {\mathcal {M}}_2^\infty = 0.0840 >0\). Also, \({\mathcal {M}}_1^2, {\mathcal {M}}_1^\infty \in {\mathcal {R}}^+\). Hence, for \(p=2, \infty \), all the conditions of Theorem 1 hold, and thus, the systems (1)–(2) with feedback control (3) are exponentially lag-synchronized, with maximum rates of convergence 0.0583 and 0.0590 for \(p = 2\) and \(p=\infty \), respectively. The synchronized curves and the synchronization error curves with feedback control are shown in Figs. 9 and 10, respectively.
Case 3. \({\mathbb {T}} = {\mathcal {P}} = [-1,0] \cup _{i=0}^\infty [i, i+0.7]\). Let \(\eta _1=\eta _2=\beta = 1\). Here, \(\eta =1\) and the graininess function \(\mu (t)\) is given by
The state trajectories and the error trajectories of the systems (1)–(2) without feedback control are shown in Figs. 11 and 12, respectively, which are clearly not synchronized.
However, for the control gain matrix
we can calculate
and
Hence,
Therefore, we can see that \({\mathcal {M}}_1^1 - {\mathcal {M}}_2^1 = -0.1080 <0,\) \({\mathcal {M}}_1^2 - {\mathcal {M}}_2^2 = 0.1292 >0,\) and \({\mathcal {M}}_1^\infty - {\mathcal {M}}_2^\infty = 0.1320 >0\). Also, \({\mathcal {M}}_1^2, {\mathcal {M}}_1^\infty \in {\mathcal {R}}^+\). Hence, for \(p=2, \infty \), all the conditions of Theorem 1 hold, and thus, the systems (1)–(2) with feedback control (3) are exponentially lag-synchronized, with maximum rates of convergence 0.0602 and 0.0599 for \(p = 2\) and \(p=\infty \), respectively. The synchronized curves and the synchronization error curves with feedback control are shown in Figs. 13 and 14, respectively.
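To illustrate how trajectories are advanced on such a hybrid domain, the following sketch integrates a linear error system \(\zeta ^\Delta = A\zeta \) on \(\cup _{i}[i, i+0.7]\): ordinary Euler steps inside each closed interval, and the jump formula \(\zeta (\sigma (t)) = \zeta (t) + \mu (t)\zeta ^\Delta (t)\) of Remark 2 at the right-scattered endpoints, where \(\mu = 0.3\). The matrix A below is a hypothetical stable matrix chosen for illustration, not the coefficients of Example 1.

```python
import numpy as np

# Hypothetical stable error dynamics zeta^Delta = A zeta (illustration only).
A = np.array([[-2.0, 0.3], [0.2, -2.5]])
zeta = np.array([1.0, -1.0])
dt = 0.01
norms = [np.linalg.norm(zeta)]

for i in range(5):                     # the intervals [i, i + 0.7]
    for _ in range(int(0.7 / dt)):     # right-dense part: Euler steps
        zeta = zeta + dt * A @ zeta
    # right-scattered endpoint t = i + 0.7 with graininess mu(t) = 0.3:
    # zeta(sigma(t)) = zeta(t) + mu(t) * zeta^Delta(t)
    zeta = zeta + 0.3 * A @ zeta
    norms.append(np.linalg.norm(zeta))

print(norms[0], norms[-1])  # the error norm decays along the time scale
```

The scattered-point update is exactly the discrete contraction controlled by the generalized matrix-measure \((\Vert {{\,\textrm{Id}\,}} + \mu (t)A\Vert _p - 1)/\mu (t)\), while the dense part is controlled by the classical measure, mirroring the two steps in the proof of Theorem 1.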
Next, we provide another example to illustrate our main Theorem 2.
Example 2
Consider the continuous-time case of the drive and response systems (18)–(19) with the following coefficients, as in [46, Ex. 2]:
One can confirm that for Example 2, \({\mathcal {F}}\) satisfies the Lipschitz condition with \( L_{\mathcal {F}}=1\). The state trajectories and the error trajectories of the systems (18)–(19) without feedback control are shown in Figs. 15 and 16, respectively. Clearly, from Figs. 15 and 16, the drive system (18) and the response system (19) are not synchronized.
However, for the control gain matrix
we can calculate
and
Hence,
Therefore, we see that \({\mathcal {M}}_3^1 - {\mathcal {M}}_4^1 = -5.520 <0,\) \({\mathcal {M}}_3^2 - {\mathcal {M}}_4^2 = 0.4561 >0,\) and \({\mathcal {M}}_3^\infty - {\mathcal {M}}_4^\infty = -6.460 <0\). Also, \({\mathcal {M}}_3^2 \in {\mathcal {R}}^+\). Hence, for \(p=2\), all the conditions of Theorem 2 hold, and thus, the systems (18)–(19) with feedback control (3) are exponentially synchronized with a maximum rate of convergence of 0.1533. The synchronized curves and the synchronization error curves with feedback control are shown in Figs. 17 and 18, respectively.
Comparing the results quantitatively, we note that our approach provides a faster error convergence rate of 0.1533, compared to the rate of 0.01 reported in [46].
Remark 15
Previous works, such as [11, 13, 19, 21, 23,24,25, 28, 46], have considered similar types of examples on either continuous or discrete-time domains. To the best of our knowledge, there is currently no other example in the literature that addresses lag synchronization of CGNNs on hybrid-type time domains (as presented in Case 3 of Example 1).
6 Conclusion
We have established exponential lag synchronization results for a new class of CGNNs with discrete and distributed time delays on arbitrary time domains by using the theory of time scales and a feedback control law. We have also studied some special cases of the considered problem. We mainly used the unified matrix-measure theory and a Halanay inequality to establish these results. The obtained results are verified by simulated examples on different time domains, including the continuous-time domain (case 1 of Example 1 and Example 2), the discrete-time domain (case 2 of Example 1), and a non-overlapping time domain (case 3 of Example 1). Possible future research could concern extending the results to non-smooth but still bounded functions. Another potential direction is to investigate stability and synchronization results for CGNNs with delays and impulsive conditions on arbitrary time domains, including the effects of other types of delays, such as time-varying delays, on the synchronization of CGNNs. Additionally, it would be interesting to investigate the robustness and reliability of the synchronization results for CGNNs with delays and stochastic effects on time scales, that is, the effects of random disturbances or noise on synchronization and how the proposed approach can be modified to handle them.
References
Cohen MA, Grossberg S (1983) Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans Syst Man Cybern 13(5):815–826
Wang L, Wang Z, Wei G, Alsaadi FE (2018) Finite-time state estimation for recurrent delayed neural networks with component-based event-triggering protocol. IEEE Trans Neural Netw Learn Syst 29(4):1046–1057
Bohner M, Stamov GT, Stamova IM (2020) Almost periodic solutions of Cohen–Grossberg neural networks with time-varying delay and variable impulsive perturbations. Commun Nonlinear Sci Numer Simul 80:104952
Wang L (2005) Stability of Cohen–Grossberg neural networks with distributed delays. Appl Math Comput 160(1):93–110
Zhang Z, Zhang X, Yu T (2022) Global exponential stability of neutral-type Cohen–Grossberg neural networks with multiple time-varying neutral and discrete delays. Neurocomputing 490:124–131
Jiang M, Shen Y, Liao X (2006) Boundedness and global exponential stability for generalized Cohen–Grossberg neural networks with variable delay. Appl Math Comput 172(1):379–393
Li CH, Yang SY (2009) Synchronization in delayed Cohen–Grossberg neural networks with bounded external inputs. IMA J Appl Math 74(2):178–200
Liu M, Jiang H, Hu C (2019) New results for exponential synchronization of memristive Cohen–Grossberg neural networks with timevarying delays. Neural Process Lett 49(1):79–102
Xiong W, Cao J (2005) Global exponential stability of discretetime Cohen–Grossberg neural networks. Neurocomputing 64:433–446
Dong Z, Wang X, Zhang X (2020) A nonsingular M-matrix-based global exponential stability analysis of higher-order delayed discrete-time Cohen–Grossberg neural networks. Appl Math Comput 385:125401
Rao S, Zhang T, Xu L (2022) Exponential stability and synchronisation of fuzzy Mittag-Leffler discrete-time Cohen–Grossberg neural networks with time delays. Int J Syst Sci 53(11):2318–2340
Ramasamy S, Nagamani G, Zhu Q (2016) Robust dissipativity and passivity analysis for discrete-time stochastic T-S fuzzy Cohen–Grossberg Markovian jump neural networks with mixed time delays. Nonlinear Dyn 85(4):2777–2799
Li T, Song A, Fei S (2010) Synchronization control for arrays of coupled discrete-time delayed Cohen–Grossberg neural networks. Neurocomputing 74(1–3):197–204
Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64(8):821
Chen J, Jiao L, Wu J, Wang X (2010) Projective synchronization with different scale factors in a driven-response complex network and its application in image encryption. Nonlinear Anal Real World Appl 11(4):3045–3058
Xie Q, Chen G, Bollt EM (2002) Hybrid chaos synchronization and its application in information processing. Math Comput Model 35(1–2):145–163
Lu J, Wu X, Lü J (2002) Synchronization of a unified chaotic system and the application in secure communication. Phys Lett A 305(6):365–370
Liang K, Wanli L (2019) Exponential synchronization in inertial Cohen–Grossberg neural networks with time delays. J Frankl Inst 356(18):11285–11304
Kumar R, Das S (2020) Weak, modified and function projective synchronization of Cohen–Grossberg neural networks with mixed time-varying delays and parameter mismatch via matrix measure approach. Neural Comput Appl 32:7321–7332
Chen L, Chen Y, Zhang N (2021) Synchronization control for chaotic neural networks with mixed delays under input saturations. Neural Process Lett 53(5):3735–3755
Peng D, Li X, Aouiti C, Miaadi F (2018) Finite-time synchronization for Cohen–Grossberg neural networks with mixed time-delays. Neurocomputing 294:39–47
He H, Liu X, Cao J, Jiang N (2021) Finite/fixed-time synchronization of delayed inertial memristive neural networks with discontinuous activations and disturbances. Neural Process Lett 53(5):3525–3544
Abdurahman A, Jiang H, Teng Z (2017) Lag synchronization for Cohen–Grossberg neural networks with mixed time-delays via periodically intermittent control. Int J Comput Math 94(2):275–295
Gan Q (2012) Adaptive synchronization of Cohen–Grossberg neural networks with unknown parameters and mixed time-varying delays. Commun Nonlinear Sci Numer Simul 17(7):3040–3049
Aouiti C, Assali EA (2019) Nonlinear Lipschitz measure and adaptive control for stability and synchronization in delayed inertial Cohen–Grossberg-type neural networks. Int J Adapt Control Signal Process 33(10):1457–1477
Li M, Yang X, Song Q, Chen X (2022) Robust asymptotic stability and projective synchronization of time-varying delayed fractional neural networks under parametric uncertainty. Neural Process Lett 54(6):4661–4680
Liu Q, Zhang S (2012) Adaptive lag synchronization of chaotic Cohen–Grossberg neural networks with discrete delays. Chaos 22(3):033123
Hu C, Yu J, Jiang H, Teng Z (2010) Exponential lag synchronization for neural networks with mixed delays via periodically intermittent control. Chaos 20(2):023108
Wen S, Zeng Z, Huang T, Meng Q, Yao W (2015) Lag synchronization of switched neural networks via neural activation function and applications in image encryption. IEEE Trans Neural Netw Learn Syst 26(7):1493–1502
Huang J, Li C, Huang T, He X (2014) Finite-time lag synchronization of delayed neural networks. Neurocomputing 139:145–149
Hilger S (1988) Ein Maßkettenkalkül mit Anwendung auf Zentrumsmannigfaltigkeiten. Ph.D. thesis, Univ. Würzburg
Bohner M, Peterson A (2001) Dynamic equations on time scales: an introduction with applications. Birkhäuser, Boston, MA
Atici FM, Biles DC, Lebedinsky A (2006) An application of time scales to economics. Math Comput Model 43(7–8):718–726
Naidu D (2002) Singular perturbations and time scales in control theory and applications: an overview. Dyn Contin Discrete Impuls Syst B Appl Algorithms 9:233–278
Wang L, Huang T, Xiao Q (2018) Global exponential synchronization of nonautonomous recurrent neural networks with time delays on time scales. Appl Math Comput 328:263–275
Kumar V, Djemai M, Defoort M, Malik M (2021) Finite-time stability and stabilization results for switched impulsive dynamical systems on time scales. J Frankl Inst 358(1):674–698
Syed Ali M, Yogambigai J (2019) Synchronization criterion of complex dynamical networks with both leakage delay and coupling delay on time scales. Neural Process Lett 49(2):453–466
Huang Z, Cao J, Li J, Bin H (2019) Quasi-synchronization of neural networks with parameter mismatches and delayed impulsive controller on time scales. Nonlinear Anal Hybrid Syst 33:104–115
Xiao Q, Huang T (2020) Stability of delayed inertial neural networks on time scales: a unified matrixmeasure approach. Neural Netw 130:33–38
Wang C, Li Y (2013) Almost periodic solutions to Cohen–Grossberg neural networks on time scales. Dyn Contin Discrete Impuls Syst B: Appl Algorithms 20(3):359–377
Li Y, Chen X, Zhao L (2009) Stability and existence of periodic solutions to delayed Cohen–Grossberg BAM neural networks with impulses on time scales. Neurocomputing 72(7–9):1621–1630
Zhang Z, Peng G, Zhou D (2011) Periodic solution to Cohen–Grossberg BAM neural networks with delays on time scales. J Frankl Inst 348(10):2759–2781
Li Y, Yang L, Wu W (2011) Anti-periodic solutions for a class of Cohen–Grossberg neural networks with time-varying delays on time scales. Int J Syst Sci 42(7):1127–1132
Liang T, Yang Y, Liu Y, Li L (2014) Existence and global exponential stability of almost periodic solutions to Cohen–Grossberg neural networks with distributed delays on time scales. Neurocomputing 123:207–215
Li Y, Zhao L, Zhang T (2011) Global exponential stability and existence of periodic solution of impulsive Cohen–Grossberg neural networks with distributed delays on time scales. Neural Process Lett 33(1):61–81
Li T, Fei SM, Zhang KJ (2008) Synchronization control of recurrent neural networks with distributed delays. Phys A Stat Mech Appl 387(4):982–996
Acknowledgements
We would like to express our gratitude to the associate editor and anonymous reviewers for their constructive comments and suggestions to improve the quality of the paper.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Contributions
All the authors contributed equally to finalize the manuscript.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Kumar, V., Heiland, J. & Benner, P. Exponential Lag Synchronization of Cohen–Grossberg Neural Networks with Discrete and Distributed Delays on Time Scales. Neural Process Lett 55, 9907–9929 (2023). https://doi.org/10.1007/s11063-023-11231-2