1 Introduction

As time goes by, technological innovations have become indispensable in our lives. People constantly make new discoveries or achieve better results by improving on what others have done. Such technological advances are generally aimed at making human life easier, both in education and in daily life, and neural networks fall into this category. Many modern problems have become so complex that they cannot be solved using conventional algorithms. Neural networks, which can provide solutions to complex problems and are used in many multidisciplinary fields, have become increasingly common. Features that distinguish neural networks from classical problem-solving methods include parallel (simultaneous) operation and nonlinearity, which make solving complex problems easier. In general, similar inputs produce meaningful, similar responses, and fault tolerance is built in. Unlike the computational or programming methods used in traditional computers, neural networks have a training (learning) mechanism based on examples.

In such models, the existence-uniqueness status and the stability analysis of the equilibrium point vary depending on the field and the application. For example, if an artificial neural network (ANN) is to be used to solve optimization problems, it must have a single equilibrium point that is globally asymptotically stable. Stability is an important factor in a network model, and the stability features of ANNs vary depending on the application type. According to its general definition for ANNs, a stable system is one that "tends toward equilibrium or is in a state of equilibrium." Although external factors do not affect the stability of the system, they may cause it to take longer to reach equilibrium. Theoretically, a system that has become stable will not become unstable. For a system to be stable, it must have at least one equilibrium point; the number of equilibrium points and the type of stability may vary depending on the characteristics of the system or application. Stability means that a certain equilibrium point of a dynamic system is stable and that other orbits remain in a certain neighborhood of this point. Asymptotic stability means that, in addition to stability, the equilibrium point attracts the other orbits in its neighborhood as time tends to infinity. If an equilibrium point of the system is globally asymptotically stable, it is the only equilibrium point of the system; therefore, systems with more than one equilibrium point cannot be globally asymptotically stable.

The most important factor affecting the existence-uniqueness status and the stability analysis of the equilibrium point of ANNs is system delay. In theory, signal transmission between the neurons forming an ANN is assumed to be flawless, but in real-world applications various delays appear in the signal transmission between neurons. When examining the stability of ANNs, taking these delay situations into account yields results that are closer to reality. Signal transmission in neural networks is the counterpart of the anti-periodic problem in the applied sciences. The existence and stability of anti-periodic solutions play an important role in characterizing the behavior of nonlinear differential equations, and the importance of stability analysis becomes evident when these problems involve delay.
Recently, anti-periodic problems of neural networks have been addressed and discussed by many researchers. The negative feedback form of a neural network system can be interpreted as a forgetting delay. Neutral-type time delays, as in elastic rods, arise routinely in automatic control operations, and delay-dependent population dynamics and vibrating-mass models have recently attracted considerable attention. Very few studies have focused on the stability and existence of an anti-periodic solution for neutral bidirectional associative memory (BAM) neural networks with time-varying delays; therefore, the existence and stability of solutions of anti-periodic problems deserve emphasis. The first study on the generalized single-layer case of BAM neural networks was conducted by Kosko [1]. Chen and Huang [2] investigated the interaction between two neural networks. Models using periodicity and related concepts, as in Ammar et al. [3], are of interest: periodicity has been used to represent repeating complex states, and the dynamics and biological mechanisms of time-delayed periodic systems were discussed by Yang [4]. Wang et al. [5] studied the one-way neutral type. The existence and stability of periodic and almost periodic solutions have been observed to play a crucial role in such characterizations [6,7,8,9], and models based on global and exponential stability have also been studied [10,11,12]. Recently, with the development of the theory and applications of fractional differential equations, studies investigating the complex behavior of fractional neural networks have intensified. Fractional models have been observed to capture nonlinear dynamic systems more accurately than classical integer-order models; their advantages are especially evident in describing memory and hereditary features, and fractional calculus has been integrated into neural networks to reveal the dynamic properties of ANNs accurately. Many real-world processes, including viscoelastic systems, quantitative finance, diffusion waves, acoustics, mechanics, and electromagnetism, are fractional-order systems. Studies on fractional-order neural networks have therefore gained importance [13,14,15,16,17]. Kaslik and Sivasundaram [18] obtained interesting results in this field, and Alofi et al. [19] studied the finite-time stability of fractional-order networks with distributed delay. Hopfield's [20] integer-order bidirectional associative memory model was developed further by Liu and colleagues [21]; this type of neural network is of great importance for applications in pattern recognition and automatic control, and there are recent studies on such networks [22,23,24,25,26]. In this study, the asymptotic behavior of time-delayed BAM neural networks is investigated. The stability of delayed BAM neural networks has applications in many fields, including image and signal processing, pattern recognition, optimization, and automatic control, and has become an important topic in scientific studies. The model is of the form

$$ \begin{aligned} x^{\prime}_{i} \left( t \right) = & - a_{i} x_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{n} b_{ij} f_{j} \left( {x_{j} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{n} c_{ij} f_{j} \left( {x_{j} \left( {t - \tau_{ij} } \right)} \right) + \mathop \sum \limits_{j = 1}^{n} d_{ij} \left( {x_{j} \left( {t - \zeta_{ij} } \right)} \right) + I_{i} \\ x^{\prime}_{i} \left( t \right) = & - a_{i} x_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{n} b_{ij} f_{j} \left( {x_{j} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{n} c_{ij} f_{j} \left( {x_{j} \left( {t - \tau_{ij} } \right)} \right) + \mathop \sum \limits_{j = 1}^{n} d_{ij} \left( {x_{j} \left( {t - \zeta_{j} } \right)} \right) + I_{i} \\ x^{\prime}_{i} \left( t \right) = & - a_{i} x_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{n} b_{ij} f_{j} \left( {x_{j} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{n} c_{ij} f_{j} \left( {x_{j} \left( {t - \tau_{j} } \right)} \right) + \mathop \sum \limits_{j = 1}^{n} d_{ij} \left( {x_{j} \left( {t - \zeta_{j} } \right)} \right) + I_{i} \\ x^{\prime}_{i} \left( t \right) = & - a_{i} x_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{n} b_{ij} f_{j} \left( {x_{j} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{n} c_{ij} f_{j} \left( {x_{j} \left( {t - \tau } \right)} \right) + \mathop \sum \limits_{j = 1}^{n} d_{ij} \left( {x_{j} \left( {t - \zeta } \right)} \right) + I_{i} \\ \end{aligned} $$

In the first model, both delays in the equation are two-dimensional (\({\tau }_{ij}\), \({\upzeta }_{ij}\)): the delay between each pair of neurons may take a different value, so the delays can be expressed as \(n\times n\) matrices. Models in which the delay is accepted in this manner are the closest to reality, because in real ANN applications the delay value between each pair of neurons is probably different. At the same time, such delays make the stability analysis of the system very difficult, because they lead to problems that are mathematically hard to solve. In the second model there is again a two-dimensional delay (\({\tau }_{ij}\)), while the other delay (\({\upzeta }_{j}\)) is one-dimensional; the mathematical analysis of this model is still quite difficult. It is known in the literature that the mathematical analysis of models with two-dimensional delays, even when they are not of neutral type, involves difficult and complex operations. The delays in the third model are one-dimensional (\({\tau }_{j}\), \({\upzeta }_{j}\)), added under the assumption that the delay between any given neuron and all other neurons is constant; the delay in this model is not a single number but a vector with n elements, one for each neuron. The delays in the final model are fixed numbers (τ, \(\upzeta \)), added under the assumption that the delay between all neurons is equal. This model is easier to analyze mathematically than the others, is one of the most discussed models in the literature, and has been examined in many studies [27,28,29,30,31,32]. The model considered in this study is a neutral system whose delays vary with time. Both delays in the equation are two-dimensional and different from each other; therefore its mathematical analysis is more difficult and complex than for the other delay structures, but it is the model type closest to reality. Its advantage, once its stability is demonstrated, is that each processing unit weights its inputs with a set of weights, transforms the result nonlinearly, and produces an output value. In traditional processors, a single central processing unit performs each action sequentially, whereas ANNs consist of many simple processing units, each dealing with a part of a larger problem; the power of neural computing comes from the dense connection structure between processing units that share the total processing load. In these systems, more robust learning is provided by the backpropagation method. In general, such systems, which belong to the class of delayed neural networks, have characteristics different from other delayed systems. The fact that neutral-type ANNs have delays on both sides of the differential equation complicates the analysis of these models, and advanced mathematical tools must be used to perform stability analyses. Although this may seem like a disadvantage, neutral-type ANNs can be used to solve more complex problems than classical ANN models. To the best of our knowledge, there are no published papers on anti-periodic solutions of fractional-order BAM neural networks with delays of this type. This article addresses this interesting, important, and unresolved problem. The main contributions of this article are as follows (a short sketch of the delay representations appears after the list):

  1. The first attempt is made to deal with uniform stability and global asymptotic stability in time-delayed fractional BAM neural networks.

  2. The conditions for the existence of the anti-periodic solution are established for the proposed model.

  3. The effect of delay on such neural networks is revealed in depth, and stability performance is demonstrated.
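Returning to the four delay structures discussed above: a short representational sketch (ours, with arbitrary values, assuming NumPy is available) of how the delay data can be organized:

```python
import numpy as np

n = 4  # number of neurons

# models 1-2: a delay for every ordered neuron pair -> an n x n matrix
tau_pairwise = np.random.uniform(0.1, 0.5, size=(n, n))
# model 3: one delay per neuron -> a vector with n entries
tau_per_neuron = np.random.uniform(0.1, 0.5, size=n)
# model 4: a single common delay -> a scalar
tau_common = 0.2

# the vector and scalar forms are special cases of the matrix form:
as_matrix_vec = np.tile(tau_per_neuron, (n, 1))  # identical rows
as_matrix_scl = np.full((n, n), tau_common)      # all entries equal
```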

The remainder of this article is organized as follows. Section 2 provides preliminary information about fractional derivatives, including the Caputo derivative, the equilibrium of the fractional system, and the stability analysis method for the linearized delayed system. In Sect. 3, the main conclusions regarding the existence and global stability of the anti-periodic solution of delayed BAM neural networks are established; the signal transmission process of neural networks can generally be described as an anti-periodic process, and in the proofs the Lyapunov function and basic function sequences based on the solutions of the networks are used. An example and numerical simulations are provided to illustrate the theoretical results. In this study, the global stability and effects of an anti-periodic solution for fractional-order BAM neural networks with time-varying delays are investigated. Fractional calculus has been shown to be a more effective tool than integer calculus for describing the objective world. Time delay is an important phenomenon in the transmission of a signal or effect through a neural network. Depending on the field and the application, the existence-uniqueness and stability analyses of the equilibrium point of the neural network model differ: if the network is used to solve optimization problems, it must have a single equilibrium point that is globally asymptotically stable, whereas if the network has more than one equilibrium point, as in an associative memory design, more information can be stored by ensuring its full stability. Although the results obtained cannot be directly applied to every application, they extend neural networks, including some previously known networks, to a certain extent; the results are therefore complementary to previous studies. We first describe sufficient conditions for existence and stability, with some demonstrations and preliminary results, and in Sect. 3 we show the uniform stability and global stability of the anti-periodic solution for fractional-order BAM neural networks.

2 Preliminaries

In this section, we give some basic definitions of fractional calculus that will be used to prove our main results, and we state the properties of the anti-periodic problem of a neutral BAM neural network. The system of equations considered in this study is the following:

$$ D ^{{{\upalpha } }} x_{i} \left( t \right) = - a_{i} x_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{m} b_{ij} \left( t \right)f_{j} \left( {x_{j} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{m} c_{ij} \left( t \right)v_{j} \left( {y_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right) + I_{i} \left( t \right), $$
$$ D ^{{{\upalpha } }} y_{j} \left( t \right) = - d_{j} y_{j} \left( t \right) + \mathop \sum \limits_{i = 1}^{n} e_{ij} \left( t \right)g_{i} \left( {y_{i} \left( t \right)} \right) - \mathop \sum \limits_{i = 1}^{n} \kappa_{ij} \left( t \right)s_{i} \left( {y_{i} \left( {t - \xi_{ij} \left( t \right)} \right)} \right) + J_{j} \left( t \right), $$
(2.1)

where \(i = 1, 2, \ldots, n\), \(j = 1, 2, \ldots, m\); \(x_i(t)\) and \(y_j(t)\) denote the potentials of the \(i\)th and \(j\)th neurons at time \(t\); and \(\alpha\) is the order of the fractional derivative, \(0 < \alpha < 1\). The constants \(a_i, d_j, b_{ij}, c_{ij}, e_{ij}, \kappa_{ij}\) are the connection weight parameters of the neural networks; \(g_i, f_j\) denote the activation functions of the \(i\)th and \(j\)th neurons, respectively; \(I_i, J_j\) are the \(i\)th and \(j\)th components of the external inputs introduced from outside the networks to cells \(i\) and \(j\); and \(\xi_{ij}(t) > 0\) and \(\tau_{ij}(t) > 0\) correspond to the leakage and transmission delays, respectively.

The initial conditions associated with system (2.1) are of the following form:

$$ x_{i}(k) = \delta_{i}(k), \quad k \in [-\tau, 0], \; i = 1, 2, \ldots, n, $$
$$ y_{j}(k) = \phi_{j}(k), \quad k \in [-\xi, 0], \; j = 1, 2, \ldots, m, $$
(2.2)

where \(\zeta = \max\{\tau_{ij}, \xi_{ij}\}\), and \(\delta_i\) and \(\phi_j\) are continuous real-valued functions.

Let \(x_i(t): R \to R\) be continuous in \(t\). Then \(x_i(t)\) is said to be \(T\)-anti-periodic on \(R\) if

\(x_i(t + T) = -x_i(t)\) for all \(t \in R\), \(i = 1, 2, \ldots, n\).
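For example, \(\sin t\) is \(\pi\)-anti-periodic, since \(\sin(t + \pi) = -\sin t\). A minimal numerical check of this property on sampled data (our own illustration; the helper name and tolerance are hypothetical) could look as follows:

```python
import numpy as np

def is_anti_periodic(x, t, T, tol=1e-6):
    """Check x(t + T) == -x(t) on a sampled grid (hypothetical helper)."""
    x_shift = np.interp(t + T, t, x)      # x evaluated at t + T by interpolation
    mask = t + T <= t[-1]                 # ignore points that leave the grid
    return np.allclose(x_shift[mask], -x[mask], atol=tol)

t = np.linspace(0, 4 * np.pi, 4001)
print(is_anti_periodic(np.sin(t), t, np.pi))      # True:  sin(t + pi) = -sin(t)
print(is_anti_periodic(np.cos(t) + 1, t, np.pi))  # False: the offset breaks it
```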

(H1) \(a_i, d_j, I_i, J_j, b_{ij}, c_{ij}, e_{ij}, \kappa_{ij}: R \to R\) and the delays \(\tau_{ij}, \xi_{ij}: [0, \infty) \to R\) are continuous and satisfy

$$ \begin{aligned} a_{i}(t + T) = & \; a_{i}(t), \quad d_{j}(t + T) = d_{j}(t), \\ b_{ij}(t + T) f_{j}(x) = & -b_{ij}(t) f_{j}(-x), \; \forall t, x \in R, \\ c_{ij}(t + T) v_{j}(x) = & -c_{ij}(t) v_{j}(-x), \; \forall t, x \in R, \\ e_{ij}(t + T) g_{i}(x) = & -e_{ij}(t) g_{i}(-x), \; \forall t, x \in R, \\ \kappa_{ij}(t + T) s_{i}(x) = & -\kappa_{ij}(t) s_{i}(-x), \; \forall t, x \in R, \\ I_{i}(t + T) = & -I_{i}(t), \; \forall t \in R, \\ J_{j}(t + T) = & -J_{j}(t), \; \forall t \in R, \\ \end{aligned} $$

(H2) \(f_j, v_j, g_i, s_i\) are locally Lipschitz continuous; that is, there exist constants \(f_j > 0\), \(v_j > 0\), \(g_i > 0\), \(s_i > 0\) such that

$$ \begin{gathered} |f_j(x) - f_j(y)| \le f_j |x - y|, \hfill \\ |v_j(x) - v_j(y)| \le v_j |x - y|, \hfill \\ |g_i(x) - g_i(y)| \le g_i |x - y|, \hfill \\ |s_i(x) - s_i(y)| \le s_i |x - y|, \hfill \\ \end{gathered} $$

where \(x, y \in R\), \(i = 1, 2, \ldots, n\), \(j = 1, 2, \ldots, m\).

Definition 2.1 [15]

The fractional integral of order \(\alpha > 0\) of a function \(y: (a, b] \to R\) is given by

$$ I_{{a^{ + } }}^{\alpha } y\left( t \right) = \frac{1}{{\Gamma \left( \alpha \right)}}\int\limits_{a}^{t} {\left( {t - \tau } \right)^{{\alpha - 1}} } y\left( \tau \right)d\tau ,t \in \left( {a,b} \right] $$
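As an aside (our illustration, not part of the original text), \(I^{\alpha}\) can be approximated on a uniform grid by integrating the kernel \((t-\tau)^{\alpha-1}\) exactly over each subinterval while treating \(y\) as piecewise constant; a sketch, assuming NumPy:

```python
import math
import numpy as np

def frac_integral(y, h, alpha):
    """Left-rectangle product quadrature for the fractional integral I^alpha."""
    n = len(y)
    k = np.arange(n)
    # exact kernel integrals give weights ((k+1)^a - k^a) * h^a / Gamma(a+1)
    w = ((k + 1) ** alpha - k ** alpha) * h ** alpha / math.gamma(alpha + 1)
    out = np.zeros(n)
    for i in range(1, n):
        out[i] = np.dot(w[:i][::-1], y[:i])
    return out

# sanity check against the closed form I^0.5 t = t^1.5 / Gamma(2.5)
h, alpha = 0.01, 0.5
t = np.arange(0, 1 + h, h)
err = frac_integral(t, h, alpha) - t ** 1.5 / math.gamma(2.5)
print(np.max(np.abs(err)))  # O(h) discretization error
```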

Definition 2.2 [14]

The Riemann–Liouville fractional derivative of order \(\alpha > 0\) of a continuous function \(y: (a, b] \to R\) is defined by

$$ D_{{a^{ + } }}^{\alpha } y\left( t \right) = \frac{1}{{\Gamma \left( {n - \alpha } \right)}}\left( {\frac{d}{{dt}}} \right)^{n} \int\limits_{a}^{t} {\frac{{y\left( \tau \right)}}{{\left( {t - \tau } \right)^{{\alpha - n + 1}} }}} d\tau ,n = \left[ \alpha \right] + 1. $$

Definition 2.3 [14]

The Caputo fractional derivative of order \(\alpha > 0\) of a function \(y\) on \((a, b]\) is expressed through the Riemann–Liouville derivative described above as follows:

$$ \left( {}^{c}D_{a^{+}}^{\alpha} y \right)(t) = \left( D_{a^{+}}^{\alpha} \left[ y(t) - \sum_{k=0}^{n-1} \frac{y^{(k)}(a)}{k!} (t - a)^{k} \right] \right)(t), \quad t \in (a, b]. $$
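For \(0 < \alpha < 1\), the case used throughout this paper, the Caputo derivative can be approximated by the classical L1 scheme, which treats \(y'\) as piecewise constant on each subinterval; a hedged sketch, assuming NumPy:

```python
import math
import numpy as np

def caputo_l1(y, h, alpha):
    """L1 discretization of the Caputo derivative, 0 < alpha < 1."""
    n = len(y)
    k = np.arange(n)
    b = ((k + 1) ** (1 - alpha) - k ** (1 - alpha)) / math.gamma(2 - alpha)
    dy = np.diff(y) / h                     # piecewise-constant slope of y
    out = np.zeros(n)
    for i in range(1, n):
        out[i] = h ** (1 - alpha) * np.dot(b[:i][::-1], dy[:i])
    return out

# check: D^alpha t = t^(1-alpha) / Gamma(2-alpha); the L1 scheme reproduces
# this exactly (up to rounding), because y(t) = t has constant slope
h, alpha = 0.01, 0.5
t = np.arange(0, 1 + h, h)
err = caputo_l1(t, h, alpha) - t ** (1 - alpha) / math.gamma(2 - alpha)
print(np.max(np.abs(err)))
```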

Definition 2.4

The solution of system (2.1) is said to be uniformly stable if, for any \(\varepsilon > 0\), there exists \(\delta = \delta(\varepsilon) > 0\) such that for any two solutions \((x(t), y(t))\) and \((\bar{x}(t), \bar{y}(t))\) of system (2.1) with initial functions \((\delta(t), \phi(t))\) and \((\bar{\delta}(t), \bar{\phi}(t))\), respectively, it holds that

\( \left\| \bar{x}(t) - x(t) \right\| < \varepsilon, \; \left\| \bar{y}(t) - y(t) \right\| < \varepsilon \),

for \( t \ge 0 \), whenever

\( \left\| \bar{\delta}(k) - \delta(k) \right\| < \delta, \; \left\| \bar{\phi}(k) - \phi(k) \right\| < \delta \),

where

$$ \begin{aligned} \|\bar{\delta}(k) - \delta(k)\| = & \sum_{i=1}^{n} \sup \left\{ e^{-k} |\bar{\delta}_i(k) - \delta_i(k)| \right\}, \\ \|\bar{\phi}(k) - \phi(k)\| = & \sum_{j=1}^{m} \sup \left\{ e^{-k} |\bar{\phi}_j(k) - \phi_j(k)| \right\}, \\ \|\bar{x}(t) - x(t)\| = & \sum_{i=1}^{n} \sup \left\{ e^{-t} |\bar{x}_i(t) - x_i(t)| \right\}, \\ \|\bar{y}(t) - y(t)\| = & \sum_{j=1}^{m} \sup \left\{ e^{-t} |\bar{y}_j(t) - y_j(t)| \right\}. \\ \end{aligned} $$

Lemma 2.5

Let \(n\) be a positive integer satisfying \(n - 1 < \alpha < n\). If

\(y(t) \in C^{n}[a, b]\), then

$$ I^{{\upalpha }} D ^{{{\upalpha } }} y\left( t \right) = y\left( t \right) - \mathop \sum \limits_{k = 0}^{n - 1} \frac{{y^{\left( k \right)} \left( a \right)}}{k!} \left( {t - a} \right)^{k} , $$

if 0 \(<{\alpha }\le \) 1, then

$$ I^{{\upalpha }} D ^{{{\upalpha } }} y\left( t \right) = y\left( t \right) - y\left( a \right). $$
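As a quick check of the second identity (an illustration we add here), take \(y(t) = t\), \(a = 0\), and \(0 < \alpha \le 1\); using \(I^{\alpha} t^{\mu} = \frac{\Gamma(\mu+1)}{\Gamma(\mu+1+\alpha)} t^{\mu+\alpha}\),

$$ D^{\alpha} t = \frac{t^{1-\alpha}}{\Gamma(2-\alpha)}, \qquad I^{\alpha} D^{\alpha} t = \frac{1}{\Gamma(2-\alpha)} \cdot \frac{\Gamma(2-\alpha)}{\Gamma(2)} \, t = t = y(t) - y(0), $$

in agreement with the lemma.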


Lemma 2.6 [27]

Suppose that \(\varpi_1, \varpi_2: R \to R\) are nondecreasing functions such that \(\varpi_1(\theta)\) and \(\varpi_2(\theta)\) are positive for \(\theta > 0\), and that \(F: R \times R^n \to R\) is a continuously differentiable function satisfying \(\varpi_1(\|y\|) \le F(\theta, y) \le \varpi_2(\|y\|)\) for \(y \in R^n\). If the solution \(y(\theta)\) of the Caputo system satisfies \(D^{\alpha} F(\theta, y(\theta)) \le 0\) whenever \(\sup_{-\tau \le s \le 0} F(\theta + s, y(\theta + s)) = F(\theta, y(\theta))\), then the Caputo system is uniformly stable.

Lemma 2.7 [27]

Suppose that all conditions of Lemma 2.6 hold and that there exist two constants \(\omega, \dot{\omega} > 0\) with \(\omega < \dot{\omega}\) such that \(D^{\alpha} F(\theta, y(\theta)) \le -\dot{\omega} F(\theta, y(\theta)) + \omega \sup_{-\tau \le s \le 0} F(\theta + s, y(\theta + s))\). Then the Caputo system is globally uniformly asymptotically stable.

Lemma 2.8

Let \(0 < \alpha < 1\). If \(y(t) \in C^{1}[a, \infty)\), then

\(D^{\alpha} |y(t)| \le \operatorname{sgn}(y(t)) \, D_{a^{+}}^{\alpha} y(t),\) where

$$ D_{a^{+}}^{\alpha} y(t) = \frac{1}{\Gamma(1 - \alpha)} \int_{a}^{t} \frac{y'(\tau)}{(t - \tau)^{\alpha}} \, d\tau. $$

3 Main Results

In this section, we investigate the existence and global uniform asymptotic stability of the anti-periodic solution of system (2.1).

Theorem 3.1

Suppose that assumptions (H1)–(H2) are met and that \(L_1 > 0\), \(C_1 > 0\), and \(L_1 C_1 > L_2 C_2\) hold, where \(L_1, C_1, L_2, C_2\) are defined below. Then system (2.1) is uniformly stable.

$$ L_{1} = {1 } - \mathop {\max }\limits_{{1 \le {\text{i}} \le {\text{n }}}} \left\{ {a_{i} } \right\}{ }, C_{1} = 1 - \mathop {\max }\limits_{{1 \le {\text{j}} \le {\text{m }}}} \left\{ {d_{j} } \right\}{,} $$
$$ L_{2} = \sum_{i=1}^{n} \max_{1 \le j \le m} \left\{ |b_{ij}| f_{j} \right\} + \sum_{i=1}^{n} \max_{1 \le j \le m} \left\{ |c_{ij}| v_{j} \right\} e^{-\tau}, $$
$$ C_{2} = \sum_{j=1}^{m} \max_{1 \le i \le n} \left\{ |e_{ij}| g_{i} \right\} + \sum_{j=1}^{m} \max_{1 \le i \le n} \left\{ |\kappa_{ij}| s_{i} \right\} e^{-\xi}. $$

Proof

Assume that \((x(t), y(t))^T = (x_1(t), \ldots, x_n(t), y_1(t), \ldots, y_m(t))^T\) and \((\bar{x}(t), \bar{y}(t))^T = (\bar{x}_1(t), \ldots, \bar{x}_n(t), \bar{y}_1(t), \ldots, \bar{y}_m(t))^T\) are two solutions of system (2.1) satisfying condition (2.2). Then

$$ \begin{aligned} D^{\alpha}(\bar{x}_i(t) - x_i(t)) = & -a_i (\bar{x}_i(t) - x_i(t)) + \sum_{j=1}^{m} b_{ij} \left( f_j(\bar{y}_j(t)) - f_j(y_j(t)) \right) \\ & + \sum_{j=1}^{m} c_{ij} \left( v_j(\bar{y}_j(t - \tau_{ij})) - v_j(y_j(t - \tau_{ij})) \right), \\ D^{\alpha}(\bar{y}_j(t) - y_j(t)) = & -d_j (\bar{y}_j(t) - y_j(t)) + \sum_{i=1}^{n} e_{ij} \left( g_i(\bar{x}_i(t)) - g_i(x_i(t)) \right) \\ & + \sum_{i=1}^{n} \kappa_{ij} \left( s_i(\bar{x}_i(t - \xi_{ij})) - s_i(x_i(t - \xi_{ij})) \right), \quad i = 1, 2, \ldots, n, \; j = 1, 2, \ldots, m. \\ \end{aligned} $$

From Lemma 2.5, we obtain

$$ \begin{aligned} \bar{x}_i(t) - x_i(t) = & \; \bar{\delta}_i(0) - \delta_i(0) + I^{\alpha} \Big( -a_i (\bar{x}_i(t) - x_i(t)) + \sum_{j=1}^{m} b_{ij} \left( f_j(\bar{y}_j(t)) - f_j(y_j(t)) \right) \\ & + \sum_{j=1}^{m} c_{ij} \left( v_j(\bar{y}_j(t - \tau_{ij})) - v_j(y_j(t - \tau_{ij})) \right) \Big), \\ \bar{y}_j(t) - y_j(t) = & \; \bar{\phi}_j(0) - \phi_j(0) + I^{\alpha} \Big( -d_j (\bar{y}_j(t) - y_j(t)) + \sum_{i=1}^{n} e_{ij} \left( g_i(\bar{x}_i(t)) - g_i(x_i(t)) \right) \\ & + \sum_{i=1}^{n} \kappa_{ij} \left( s_i(\bar{x}_i(t - \xi_{ij})) - s_i(x_i(t - \xi_{ij})) \right) \Big), \end{aligned} $$
$$ \begin{aligned} e^{-t} |\bar{x}_i(t) - x_i(t)| \le & \; e^{-t} |\bar{\delta}_i(0) - \delta_i(0)| + \frac{1}{\Gamma(\alpha)} e^{-t} \int_0^t (t - k)^{\alpha-1} \Big| -a_i (\bar{x}_i(k) - x_i(k)) \\ & + \sum_{j=1}^{m} b_{ij} \left( f_j(\bar{y}_j(k)) - f_j(y_j(k)) \right) + \sum_{j=1}^{m} c_{ij} \left( v_j(\bar{y}_j(k - \tau_{ij})) - v_j(y_j(k - \tau_{ij})) \right) \Big| \, dk, \\ e^{-t} |\bar{y}_j(t) - y_j(t)| \le & \; e^{-t} |\bar{\phi}_j(0) - \phi_j(0)| + \frac{1}{\Gamma(\alpha)} e^{-t} \int_0^t (t - k)^{\alpha-1} \Big| -d_j (\bar{y}_j(k) - y_j(k)) \\ & + \sum_{i=1}^{n} e_{ij} \left( g_i(\bar{x}_i(k)) - g_i(x_i(k)) \right) + \sum_{i=1}^{n} \kappa_{ij} \left( s_i(\bar{x}_i(k - \xi_{ij})) - s_i(x_i(k - \xi_{ij})) \right) \Big| \, dk. \end{aligned} $$
(3.1)

From assumption (H2) and inequality (3.1), it follows that

$$ \begin{aligned} e^{-t} |\bar{x}_i(t) - x_i(t)| \le & \; e^{-t} |\bar{\delta}_i(0) - \delta_i(0)| + a_i \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k)} e^{-k} |\bar{x}_i(k) - x_i(k)| \, dk \\ & + \sum_{j=1}^{m} |b_{ij}| \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k)} e^{-k} \left| f_j(\bar{y}_j(k)) - f_j(y_j(k)) \right| dk \\ & + \sum_{j=1}^{m} |c_{ij}| \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k+\tau)} e^{-(k-\tau)} \left| v_j(\bar{y}_j(k - \tau_{ij})) - v_j(y_j(k - \tau_{ij})) \right| dk \\ \le & \; e^{-t} |\bar{\delta}_i(0) - \delta_i(0)| + a_i \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k)} e^{-k} |\bar{x}_i(k) - x_i(k)| \, dk \\ & + \sum_{j=1}^{m} |b_{ij}| f_j \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k)} e^{-k} |\bar{y}_j(k) - y_j(k)| \, dk \\ & + \sum_{j=1}^{m} |c_{ij}| v_j \frac{1}{\Gamma(\alpha)} \int_0^t (t - k)^{\alpha-1} e^{-(t-k+\tau)} e^{-(k-\tau)} |\bar{y}_j(k - \tau_{ij}) - y_j(k - \tau_{ij})| \, dk. \end{aligned} $$

Splitting the integrals over the history interval (where the solutions coincide with the initial functions \(\bar{\delta}_i, \delta_i, \bar{\phi}_j, \phi_j\)), substituting variables in each integral, and using \(\frac{1}{\Gamma(\alpha)} \int_0^t \psi^{\alpha-1} e^{-\psi} \, d\psi \le 1\), we arrive at

$$ \begin{aligned} e^{-t} |\bar{x}_i(t) - x_i(t)| \le & \; \sup \{ e^{-t} |\bar{\delta}_i(t) - \delta_i(t)| \} + a_i \sup \{ e^{-t} |\bar{\delta}_i(t) - \delta_i(t)| \} + a_i \sup \{ e^{-t} |\bar{x}_i(t) - x_i(t)| \} \\ & + \max_{1 \le j \le m} \{ |b_{ij}| f_j \} \sum_{j=1}^{m} \sup \{ e^{-t} |\bar{y}_j(t) - y_j(t)| \} + \max_{1 \le j \le m} \{ |c_{ij}| v_j \} \sum_{j=1}^{m} \sup \{ e^{-t} |\bar{\phi}_j(t) - \phi_j(t)| \} \\ & + \max_{1 \le j \le m} \{ |c_{ij}| v_j \} \sum_{j=1}^{m} \sup \{ e^{-t} |\bar{y}_j(t) - y_j(t)| \}, \end{aligned} $$
$$ \begin{aligned} \|\bar{x}(t) - x(t)\| = & \sum_{i=1}^{n} \sup \{ e^{-t} |\bar{x}_i(t) - x_i(t)| \} \\ \le & \; \|\bar{\delta}(t) - \delta(t)\| + \max_{1 \le i \le n} a_i \, \|\bar{\delta}(t) - \delta(t)\| + \max_{1 \le i \le n} a_i \, \|\bar{x}(t) - x(t)\| \\ & + \sum_{i=1}^{n} \max_{1 \le j \le m} \{ |b_{ij}| f_j \} \, \|\bar{y}(t) - y(t)\| + \sum_{i=1}^{n} \max_{1 \le j \le m} \{ |c_{ij}| v_j \} \, \|\bar{\phi}(t) - \phi(t)\| \\ & + \sum_{i=1}^{n} \max_{1 \le j \le m} \{ |c_{ij}| v_j \} \, \|\bar{y}(t) - y(t)\|. \end{aligned} $$

In addition,

$$ \begin{aligned} \|\bar{x}(t) - x(t)\| \le & \; \frac{\sum_{i=1}^{n} \max_{1 \le j \le m} \{|b_{ij}| f_j\} + \sum_{i=1}^{n} \max_{1 \le j \le m} \{|c_{ij}| v_j\}}{1 - \max_{1 \le i \le n} a_i} \, \|\bar{y}(t) - y(t)\| \\ & + \frac{1 + \max_{1 \le i \le n} a_i}{1 - \max_{1 \le i \le n} a_i} \, \|\bar{\delta}(t) - \delta(t)\| + \frac{\sum_{i=1}^{n} \max_{1 \le j \le m} \{|c_{ij}| v_j\}}{1 - \max_{1 \le i \le n} a_i} \, \|\bar{\phi}(t) - \phi(t)\| \\ = & \; \frac{S_2}{S_1} \|\bar{y}(t) - y(t)\| + \frac{S_3}{S_1} \|\bar{\delta}(t) - \delta(t)\| + \frac{S_4}{S_1} \|\bar{\phi}(t) - \phi(t)\|, \end{aligned} $$

where \(S_1 = 1 - \max_{1 \le i \le n} a_i\) and \(S_2, S_3, S_4\) denote the corresponding numerators; consequently,
$$ \left\| {\overline{x}\left( t \right) - x\left( t \right)} \right\| \le \varepsilon_{1} . $$
(3.2)

By performing operations similar to those above, the following result is obtained:

$$ \left\| {\overline{y}\left( t \right) - y\left( t \right)} \right\| \le \varepsilon_{2} . $$
(3.3)

From (3.2) and (3.3), taking \(\varepsilon = \max\{\varepsilon_1, \varepsilon_2\} > 0\), we have

\(\|\bar{x}(t) - x(t)\| < \varepsilon\), \(\|\bar{y}(t) - y(t)\| < \varepsilon\) whenever \(\|\bar{\delta}(t) - \delta(t)\| < \delta\) and \(\|\bar{\phi}(t) - \phi(t)\| < \delta\). This shows that the solution of system (2.1) is uniformly stable, which completes the proof of the theorem.
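Before moving on, the hypotheses of Theorem 3.1 are easy to test numerically for given parameter values. The sketch below (our illustration, with hypothetical parameter values; the row- and column-wise maxima mirror the formulas for \(L_2\) and \(C_2\) above) checks \(L_1 > 0\), \(C_1 > 0\), and \(L_1 C_1 > L_2 C_2\):

```python
import numpy as np

# hypothetical parameters, chosen only to illustrate the check
a = np.array([0.3, 0.4]);  d = np.array([0.2, 0.5])
b = np.array([[0.10, 0.05], [0.05, 0.10]]); c = np.full((2, 2), 0.1)
e = np.array([[0.10, 0.05], [0.05, 0.10]]); kappa = np.full((2, 2), 0.1)
Lf = Lv = Lg = Ls = 0.5        # Lipschitz constants of f_j, v_j, g_i, s_i
tau, xi = 0.2, 0.3

L1 = 1 - a.max()
C1 = 1 - d.max()
L2 = (np.abs(b) * Lf).max(axis=1).sum() + (np.abs(c) * Lv).max(axis=1).sum() * np.exp(-tau)
C2 = (np.abs(e) * Lg).max(axis=0).sum() + (np.abs(kappa) * Ls).max(axis=0).sum() * np.exp(-xi)
print(L1 > 0 and C1 > 0 and L1 * C1 > L2 * C2)   # True for these values
```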

Theorem 3.2

Suppose that all conditions of Lemmas 2.6–2.8 and Theorem 3.1 are satisfied and that

$$ \min \left\{ \min_{1 \le i \le n} \Big( a_i - \sum_{j=1}^{m} |b_{ij}| |I_i| \Big), \; \min_{1 \le j \le m} \Big( d_j - \sum_{i=1}^{n} |e_{ij}| |J_j| \Big) \right\} > \max_{1 \le i \le n, \, 1 \le j \le m} \big( |c_{ij}| + |\kappa_{ij}| |J_j| \big) > 0. $$
(3.4)

Then system (2.1) is globally uniformly asymptotically stable.

Proof.

To simplify the proof, we use the translations \(X_i(t) = x_i(t) - x_i^{*}\) and \(\Omega_j(t) = y_j(t) - y_j^{*}\), and consider the function

$$ \Psi(t) = \sum_{i=1}^{n} |X_i(t)| + \sum_{j=1}^{m} |\Omega_j(t)|. $$

The translations transform system (2.1) into

$$ D ^{{{\upalpha } }} {\rm X}_{i} \left( t \right) = - a_{i} {\rm X}_{i} \left( t \right) + \mathop \sum \limits_{j = 1}^{m} b_{ij} \left( t \right)\overline{{f_{j} }} \left( {{\rm X}_{i} \left( t \right)} \right) + \mathop \sum \limits_{j = 1}^{m} c_{ij} \left( t \right)v_{j} \left( {\Omega_{j} \left( {t - \tau_{ij} \left( t \right)} \right)} \right) + I_{i} \left( t \right), $$
$$ D ^{{{\upalpha } }} \Omega_{j} \left( t \right) = - d_{j} \Omega_{j} \left( t \right) + \mathop \sum \limits_{i = 1}^{n} e_{ij} \left( t \right)\overline{{g_{i} }} \left( {\Omega_{j} \left( t \right)} \right) - \mathop \sum \limits_{i = 1}^{n} \kappa_{ij} \left( t \right)s_{i} \left( {\Omega_{i} \left( {t - \xi_{ij} \left( t \right)} \right)} \right) + J_{j} \left( t \right), $$
(3.5)

By Lemma 2.8, we have

$$ \begin{aligned} D^{\alpha} \Psi(t) \le & \sum_{i=1}^{n} \operatorname{sgn}(X_i(t)) \, D^{\alpha} X_i(t) + \sum_{j=1}^{m} \operatorname{sgn}(\Omega_j(t)) \, D^{\alpha} \Omega_j(t) \\ \le & \sum_{i=1}^{n} \Big( -a_i |X_i(t)| + \sum_{j=1}^{m} |b_{ij}| \, |\overline{f_j}(X_i(t))| + \sum_{j=1}^{m} |c_{ij}| \, |v_j(\Omega_j(t - \tau_{ij}(t)))| \Big) \\ & + \sum_{j=1}^{m} \Big( -d_j |\Omega_j(t)| + \sum_{i=1}^{n} |e_{ij}| \, |\overline{g_i}(\Omega_j(t))| + \sum_{i=1}^{n} |\kappa_{ij}| \, |s_i(\Omega_j(t - \tau_{ij}(t)))| \Big) \\ = & \sum_{i=1}^{n} |X_i(t)| (-a_i) + \sum_{j=1}^{m} |\Omega_j(t)| (-d_j) + \sum_{j=1}^{m} \sum_{i=1}^{n} |b_{ij}| \, |\overline{f_j}(X_i(t))| \\ & + \sum_{j=1}^{m} \sum_{i=1}^{n} |c_{ij}| \, |v_j(\Omega_j(t - \tau_{ij}(t)))| + \sum_{j=1}^{m} \sum_{i=1}^{n} |e_{ij}| \, |\overline{g_i}(\Omega_j(t))| \\ & + \sum_{j=1}^{m} \sum_{i=1}^{n} |\kappa_{ij}| \, |s_i(\Omega_j(t - \tau_{ij}(t)))|. \end{aligned} $$

Under Assumption (H2) we get

$$ \begin{aligned} \le & \sum_{i=1}^{n} |X_i(t)| (-a_i) + \sum_{j=1}^{m} |\Omega_j(t)| (-d_j) + \sum_{j=1}^{m} \sum_{i=1}^{n} |b_{ij}| |I_i| \, |X_i(t)| \\ & + \sum_{j=1}^{m} \sum_{i=1}^{n} |c_{ij}| \, |\Omega_j(t - \tau_{ij}(t))| + \sum_{j=1}^{m} \sum_{i=1}^{n} |e_{ij}| |J_j| \, |\Omega_j(t)| \\ & + \sum_{j=1}^{m} \sum_{i=1}^{n} |\kappa_{ij}| |J_j| \, |\Omega_j(t - \tau_{ij}(t))| \end{aligned} $$
$$ \begin{aligned} \le & \sum_{i=1}^{n} \Big( -a_i + \sum_{j=1}^{m} |b_{ij}| |I_i| \Big) |X_i(t)| + \sum_{j=1}^{m} \Big( -d_j + \sum_{i=1}^{n} |e_{ij}| |J_j| \Big) |\Omega_j(t)| \\ & + \sum_{j=1}^{m} \sum_{i=1}^{n} \big( |c_{ij}| + |\kappa_{ij}| |J_j| \big) |\Omega_j(t - \tau_{ij}(t))| \\ \le & -B \sum_{i=1}^{n} |X_i(t)| - \dot{B} \sum_{j=1}^{m} |\Omega_j(t)| + A_{max} \sum_{j=1}^{m} |\Omega_j(t - \tau_{ij}(t))| \\ \le & -\dot{\omega} \Psi(t) + \omega \Psi(t - \tau_{ij}) \\ \le & -\dot{\omega} \Psi(t) + \omega \sup_{-\tau \le \theta \le 0} \Psi(t + \theta), \end{aligned} $$

where

$$ B = \min_{1 \le i \le n} \Big( a_i - \sum_{j=1}^{m} |b_{ij}| |I_i| \Big), \qquad \dot{B} = \min_{1 \le j \le m} \Big( d_j - \sum_{i=1}^{n} |e_{ij}| |J_j| \Big), $$

$$ A_{max} = \max_{1 \le i \le n, \, 1 \le j \le m} \big( |c_{ij}| + |\kappa_{ij}| |J_j| \big), $$

and \(\dot{\omega} = \min\{B, \dot{B}\} > 0\), \(\omega = A_{max} > 0\).

With the inequality (3.4), we get that

$$ \dot{\omega } > \omega > 0, $$
(3.6)

and

$$ D^{\alpha} \Psi(t) \le -\dot{\omega} \Psi(t) + \omega \sup_{-\tau \le \theta \le 0} \Psi(t + \theta). $$
(3.7)

According to Lemma 2.7 together with (3.6) and (3.7), the solution of system (3.5) is globally uniformly asymptotically stable. Therefore, the equilibrium \((x^{*}, y^{*})\) of system (2.1) is globally uniformly asymptotically stable.

Finally, under assumption (H2) and the conditions of Theorem 3.1, system (2.1) has a unique equilibrium point that is globally asymptotically stable, provided that \(\overline{G_1} < \min_{1 \le j \le m} \{d_j\}\) and \(\overline{G_2} < \min_{1 \le i \le n} \{a_i\}\) hold, where

$$ \begin{aligned} \overline{G_1} = & \sum_{i=1}^{n} \max_{1 \le j \le m} \left\{ |b_{ij}| f_j \right\} + \sum_{i=1}^{n} \max_{1 \le j \le m} \left\{ |c_{ij}| v_j \right\}, \\ \overline{G_2} = & \sum_{j=1}^{m} \max_{1 \le i \le n} \left\{ |e_{ij}| g_i \right\} + \sum_{j=1}^{m} \max_{1 \le i \le n} \left\{ |\kappa_{ij}| s_i \right\}. \\ \end{aligned} $$

Thus, we obtain that the model given by (2.1) is globally asymptotically stable and has a unique equilibrium point.
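This uniqueness condition can be screened numerically in the same spirit as the Theorem 3.1 check (again with hypothetical values; \(\overline{G_1}\) and \(\overline{G_2}\) follow the formulas above):

```python
import numpy as np

b = np.array([[0.1, 0.2], [0.2, 0.1]]); c = np.full((2, 2), 0.1)
e = np.full((2, 2), 0.1);               kappa = np.array([[0.2, 0.1], [0.1, 0.2]])
Lf = Lv = Lg = Ls = 0.5                 # common Lipschitz constant (assumed)
a = np.array([1.5, 2.0]); d = np.array([1.2, 1.8])

G1 = (np.abs(b) * Lf).max(axis=1).sum() + (np.abs(c) * Lv).max(axis=1).sum()
G2 = (np.abs(e) * Lg).max(axis=0).sum() + (np.abs(kappa) * Ls).max(axis=0).sum()
print(G1 < d.min() and G2 < a.min())    # uniqueness condition holds here
```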

4 An Example

Consider the following system,

$$ \begin{aligned} D_t^{\alpha} x_1(t) = & -a_1 x_1(t) + b_{11} f(x_1(t)) + b_{12} f(x_2(t)) + c_{11} v_1(y_1(t - \tau_{11})) + c_{12} v_2(y_2(t - \tau_{12})) + I_1, \\ D_t^{\alpha} x_2(t) = & -a_2 x_2(t) + b_{21} f(x_1(t)) + b_{22} f(x_2(t)) + c_{21} v_1(y_1(t - \tau_{21})) + c_{22} v_2(y_2(t - \tau_{22})) + I_2, \\ D_t^{\alpha} y_1(t) = & -d_1 y_1(t) + e_{11} g(y_1(t)) + e_{12} g(y_2(t)) + \kappa_{11} s_1(y_1(t - \xi_{11})) + \kappa_{12} s_2(y_2(t - \xi_{12})) + J_1, \\ D_t^{\alpha} y_2(t) = & -d_2 y_2(t) + e_{21} g(y_1(t)) + e_{22} g(y_2(t)) + \kappa_{21} s_1(y_1(t - \xi_{21})) + \kappa_{22} s_2(y_2(t - \xi_{22})) + J_2, \\ \end{aligned} $$

where

$$ \begin{aligned} f_j(x) = & \; g_i(x) = v_j(x) = \frac{|x + 1| - |x - 1|}{2}, \quad i, j = 1, 2, \\ a_1(t) = & \; 2 + \sin t, \quad a_2(t) = 2 + \cos t, \quad d_1(t) = 1 + \cos t, \quad d_2(t) = 1 + \sin t, \\ c_{11} = & \; c_{12} = 1, \quad c_{21} = c_{22} = 1, \quad \kappa_{11} = \kappa_{12} = 1, \quad \kappa_{21} = \kappa_{22} = 2, \\ b_{11} = & \; e_{11} = \frac{1}{4}, \quad b_{12} = e_{21} = \frac{1}{8}, \quad b_{21} = e_{12} = \frac{1}{2}, \quad b_{22} = e_{22} = \frac{1}{4}, \\ s_1 = & \; \frac{1}{4}, \quad s_2 = \frac{1}{2}, \\ \end{aligned} $$

\(\tau = 0.2\), \(\xi = 0.3\), \(\alpha = 0.5\), and

$$ \begin{aligned} I_{1} = & \frac{5 }{{8 }}{\text{cos t}} + { }\frac{11}{8}{\text{ sin t,}}\; \, I_{2} = \frac{1 }{{4 }}{\text{cos t}} + { }\frac{2}{5}{\text{ sin t}}{.} \\ J_{1} = & \frac{3}{4 }{\text{cos t}} + { }\frac{7}{4}{\text{ sin t, }}\;J_{2} = \frac{1 }{{2 }}{\text{cos t}} + { }\frac{3 }{5}{\text{ sin t}}. \\ \end{aligned} $$

We have \(A_{ij} = C_{ij} = B_{ij} = L_{ij} = 1\). It follows that the system satisfies all the conditions of Theorem 3.1; hence it has a \(T\)-anti-periodic solution, and this anti-periodic solution of the fractional-order system is globally asymptotically stable. The results are shown in the following figures: Fig. 1 shows the \((x_1, y_1)\) trajectories of the system in Eq. (2.1), and Fig. 2 shows the \((x_2, y_2)\) trajectories.

Fig. 1. Solutions \((x_1, y_1)\) of the system in Eq. (2.1)

Fig. 2. Solutions \((x_2, y_2)\) of the system in Eq. (2.1)
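A minimal simulation sketch of this example (our illustration, not the authors' code): we use an explicit Grünwald–Letnikov scheme for the Caputo derivative, read the decay coefficients as \(a_1(t) = 2 + \sin t\), \(a_2(t) = 2 + \cos t\), \(d_1(t) = 1 + \cos t\), \(d_2(t) = 1 + \sin t\), take a common activation \(f(u) = (|u+1| - |u-1|)/2\) for all nonlinearities, and assume a constant pre-history; accuracy is first order only.

```python
import numpy as np

alpha, h, T = 0.5, 0.01, 20.0
tau, xi = 0.2, 0.3
ntau, nxi = round(tau / h), round(xi / h)
N = int(T / h)

f = lambda u: (np.abs(u + 1) - np.abs(u - 1)) / 2      # common activation (assumed)

b = np.array([[1/4, 1/8], [1/2, 1/4]]); e = np.array([[1/4, 1/2], [1/8, 1/4]])
c = np.ones((2, 2)); kappa = np.array([[1.0, 1.0], [2.0, 2.0]])

I = lambda t: np.array([5/8*np.cos(t) + 11/8*np.sin(t), 1/4*np.cos(t) + 2/5*np.sin(t)])
J = lambda t: np.array([3/4*np.cos(t) + 7/4*np.sin(t), 1/2*np.cos(t) + 3/5*np.sin(t)])

# Grunwald-Letnikov weights g_k = (-1)^k binom(alpha, k) via the usual recurrence
g = np.ones(N + 1)
for k in range(1, N + 1):
    g[k] = g[k - 1] * (1 - (alpha + 1) / k)

x = np.zeros((N + 1, 2)); y = np.zeros((N + 1, 2))
x[0] = [0.5, -0.5]; y[0] = [0.3, -0.3]                 # constant history (assumed)

for n in range(1, N + 1):
    t = (n - 1) * h
    y_tau = y[max(n - 1 - ntau, 0)]                    # y(t - tau)
    y_xi = y[max(n - 1 - nxi, 0)]                      # y(t - xi)
    a_t = np.array([2 + np.sin(t), 2 + np.cos(t)])
    d_t = np.array([1 + np.cos(t), 1 + np.sin(t)])
    Fx = -a_t * x[n - 1] + b @ f(x[n - 1]) + c @ f(y_tau) + I(t)
    Fy = -d_t * y[n - 1] + e @ f(y[n - 1]) + kappa @ f(y_xi) + J(t)
    # explicit GL step: D^alpha u(t_n) ~ h^-alpha * sum_k g_k (u_{n-k} - u_0)
    x[n] = x[0] - (g[1:n + 1, None] * (x[n - 1::-1] - x[0])).sum(axis=0) + h**alpha * Fx
    y[n] = y[0] - (g[1:n + 1, None] * (y[n - 1::-1] - y[0])).sum(axis=0) + h**alpha * Fy
# x and y now hold approximate trajectories of the kind plotted in Figs. 1 and 2
```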

5 Conclusion

This study investigated the existence, global stability, and effects of an anti-periodic solution for bidirectional fractional-order BAM neural networks with time-varying delays. We first described sufficient conditions for existence and stability, along with some demonstrations and preliminary results; the stability and global stability of the anti-periodic solution for fractional-order BAM neural networks were then demonstrated using the Lyapunov functional method.

Due to the finite switching speed of neurons, chaos, oscillation, and instability may occur in the signal transmission between neurons, and time-varying delays in network activation are possible. These effects manifest themselves in dynamics whose behavior influences the stability of the system, so it is very important to determine the stability of ANNs designed for such applications. In this type of ANN, a single globally asymptotically stable equilibrium corresponds to each input vector given to the system from outside. Time delay is decisive in the transmission of a signal or effect through neural networks, and the signal transmission process of neural networks generally corresponds to an anti-periodic problem. Depending on the field and the application of the neural network model, the existence-uniqueness and stability analysis of the equilibrium point must be examined carefully: if the network is used to solve optimization problems, it must have a single equilibrium point that is globally asymptotically stable, whereas in an associative memory design a network with more than one equilibrium point may store more information, provided its full stability is ensured. Otherwise, introducing too many equilibrium points for such a problem will cause the system to produce incorrect results. By showing that the network in the model considered here is globally asymptotically stable, we make it possible to store more information for such a pattern when an associative memory design is intended. In addition, although the use of a symmetric connection matrix in proofs for integer-order equations ensures the global stability of the system, it does not indicate whether the equilibrium point is unique or multiple; to draw that conclusion, the characteristics of the activation functions and the values of the connection coefficients between neurons are decisive.

The fact that the model is fractional and delayed leads to significant changes in its dynamic behavior. In this delayed model, each neuron can be represented by a circuit formed by an operational amplifier with its connected resistance and capacitance elements, which removes ambiguity in the implementation. Our study indicates that fractional calculus is a more effective tool than integer calculus for expressing the objective world, because it captures the memory and hereditary properties of various types of dynamical processes. The model and method used answer the questions of how to control the stability region and how to set the delay, and they have a significant impact on the design of neural networks. Moreover, the delay of the system under negative feedback conditions has a significant impact on system performance. Although the results obtained cannot be directly applied to every application, they extend some previously known neural networks to a certain extent; these results are therefore important as a complement to previous studies. Dynamic neural network models can be successfully applied in tasks such as pattern classification, optimization, and associative memory.

Examining the global asymptotic stability of the equation can have important implications in different fields, such as neural networks and synchronization in secure communications. This work can serve as a reference for new studies, since the analysis extends to networks with more general activation functions and distributed delays and to circuit realizations of the proposed system.