1 Introduction

The differential dynamics model is one of the basic tools in the characterization of natural and engineering processes [7, 14, 19, 22, 40, 41], and it is also a basic building block of complicated neural networks [18, 65]. A simplified mathematical description of natural neural networks is known as the artificial neural network (ANN) model. Since the 1940s, owing to their applicability, network models have been used in pattern recognition, optimization, classification, parallel computation, signal and image processing, associative memory, system identification and control, sequence recognition, medical diagnosis, data mining, and visualization [2, 13, 17, 21, 23, 30, 35, 46, 47, 60, 61]. Over the past few years, several types of neural networks (NNs) have been applied in many areas; in 1984 Hopfield proposed the following differential equations model [10]:

$$ \textstyle\begin{cases} \dot{z}_{i}(t) =-d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}(z _{j}(t))+I_{i}(t), \\ z_{i}(t) =\eta _{i}(t), \quad -\infty < t\leq 0. \end{cases} $$

Here \(z_{i}(t)\) denotes the state of the ith neuron at time t, for \(t\geq 0\), \(i,j=1,2,3,\ldots,n\); \(\eta _{i}(t)\) represents the initial value; \(d_{i}>0\); the \(\alpha _{ij}\) are positive constants; the activation function is denoted by \(f_{j}\) and the external input is \(I_{i}(t)\). At the same time, the dynamical nature of bifurcation, attractors, oscillation, chaos, almost periodic solutions, periodic solutions, stability, synchronisation and instability of various types of differential equation models has attracted the attention of many researchers [4, 12, 15, 49, 52,53,54, 58, 64].
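
For intuition, a minimal numerical sketch of this model is given below. The forward Euler discretization, the tanh activations and all parameter values (d, alpha, I, the step size) are illustrative assumptions and are not taken from the cited works.

```python
import numpy as np

# Minimal forward-Euler sketch of the Hopfield model above.
# All parameters (d, alpha, I, dt) are illustrative assumptions.
n, dt, steps = 3, 0.01, 2000
rng = np.random.default_rng(0)
d = np.array([1.0, 1.2, 0.8])               # d_i > 0
alpha = 0.5 * rng.standard_normal((n, n))   # connection weights alpha_ij
I = np.zeros(n)                             # external input I_i(t)
f = np.tanh                                 # activation f_j

z = rng.standard_normal(n)                  # initial value eta_i(0)
for _ in range(steps):
    z = z + dt * (-d * z + alpha @ f(z) + I)
print(z)
```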

To date, time delays in dynamical systems have also received much interest from researchers [5, 6, 8, 11, 15, 16, 20, 24,25,26,27,28, 42, 51, 55, 56, 63]. From a practical perspective, it is well known that delays appear in natural NNs as well as in ANNs because of information processing. Such delays frequently originate inside the network system and lead to instability, oscillation and chaos. Because of the finite switching speed of amplifiers, delays occur in the transmission and response of the neurons, particularly in electronic implementations of analog NNs. Recently, the dynamical behavior of RNNs with discrete time-varying delays has been extensively studied [3, 34, 38].

Huang, Cao and Wang [28] investigated the following RNN with constant time delays:

$$ \textstyle\begin{cases} \dot{z}_{i}(t) =-d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}(z _{j}(t))+\sum_{j=1}^{n}\beta _{ij}g_{j}(z_{j}(t-\tau _{i}))+I _{i}(t), \\ z_{i}(t) =\eta _{i}(t), \quad -\infty < t\leq 0. \end{cases} $$

Here, for \(i=1,2,3,\ldots,n\) and \(t\geq 0\), \(z_{i}(t)\) indicates the state of the ith neuron, \(d_{i}, \alpha _{ij},\beta _{ij}>0\); the activation functions are represented by \(f_{j}\), \(g_{j}\), the constant delay is \(\tau _{i}>0\) and the external input is \(I_{i}(t)\).
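
Numerically, a constant transmission delay only requires keeping a short history of past states. The sketch below illustrates this with a ring buffer; the single common delay, the weights and the step size are illustrative assumptions rather than data from [28].

```python
import numpy as np
from collections import deque

# Forward-Euler sketch of an RNN with one constant delay tau (assumed common to all neurons).
n, dt, tau, steps = 2, 0.01, 0.5, 3000
lag = int(tau / dt)                        # number of stored past states
d = np.array([1.0, 1.5])
alpha = np.array([[0.2, -0.3], [0.1, 0.4]])
beta = np.array([[0.3, 0.1], [-0.2, 0.2]])
f = g = np.tanh

z = np.array([0.3, -0.5])                  # constant initial history eta_i
history = deque([z.copy()] * (lag + 1), maxlen=lag + 1)
for _ in range(steps):
    z_delayed = history[0]                 # approximately z(t - tau)
    z = z + dt * (-d * z + alpha @ f(z) + beta @ g(z_delayed))
    history.append(z.copy())
print(z)
```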

In addition, NNs have a spatial nature because their parallel pathways contain axons of various sizes and lengths, which entails that signal propagation is no longer instantaneous but is distributed over a certain period of time. Although signal propagation distributed over a duration of time can be modeled by distributed delays, it may also be essentially instantaneous at some moments, so both features should be incorporated. It is natural to introduce an infinite distributed delay in such a way that the current behavior of the state depends on the distant past with progressively less influence. Several research works are related to the stability of systems with mixed delays; for instance, see [1, 8, 34, 37, 39, 43, 49, 58, 62] for the case of continuously distributed delays. In 2009 Xiang and Cao [58] discussed the RNN with continuously distributed delays,

$$ \textstyle\begin{cases} \dot{z}_{i}(t)= -d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}(z _{j}(t))+\sum_{j=1}^{n}\beta _{ij}g_{j}(z_{j}(t-\tau _{i}(t))) \\ \hphantom{\dot{z}_{i}(t)={}}{} +\sum_{j=1}^{n}\gamma _{ij}\int _{-\infty }^{t} \kappa _{ij}(t-s)h_{j}(z_{j}(s))\,ds, \\ z_{i}(t)= \eta _{i}(t),\quad -\infty < t\leq 0. \end{cases} $$

Here, for \(i=1,2,3,\ldots,n\); and \(t\geq 0\), \(d_{i}\), \(\alpha _{ij}\), \(\beta _{ij}\), \(\gamma _{ij}\) are all positive constants; the state response is denoted by \(z_{i}(t)\), the activation functions are represented by \(f_{j}\), \(g_{j}\), \(h_{j}\), \(\tau _{i}(t)>0\) denotes the time-varying delays and \(\kappa _{ij}\) indicates the delay kernels.
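
For the exponential kernel (the choice \(\kappa _{ij}(u)=e^{-u}\) that is also used in Example 6.1 below and that satisfies \(\int _{0}^{\infty }\kappa _{ij}(u)\,du=1\)), the infinite-memory term can be rewritten as an auxiliary ordinary differential equation: setting

$$ y_{ij}(t)= \int _{-\infty }^{t}e^{-(t-s)}h_{j}\bigl(z_{j}(s)\bigr)\,ds $$

and differentiating gives

$$ \dot{y}_{ij}(t)=-y_{ij}(t)+h_{j}\bigl(z_{j}(t)\bigr), $$

so the distributed delay adds one linear ODE per kernel, driven by the present state; the auxiliary variable \(y_{ij}\) is our own notation.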

Furthermore, the release of neurotransmitters and other probabilistic causes make synaptic transmission a noisy process driven by random fluctuations. In [45] Mao demonstrated the stabilization and destabilization of NN systems by stochastic inputs. In such works, NN models with external random perturbations are viewed as nonlinear dynamical systems with white noise (perturbation by Brownian motion). In 2007, Sun and Cao [51] investigated the following stochastic recurrent neural networks (SRNNs) with discrete and distributed time-varying delays:

$$ \textstyle\begin{cases} dz_{i}(t)= (-d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}(z _{j}(t))+\sum_{j=1}^{n}\beta _{ij}g_{j}(z_{j}(t-\tau _{i}(t)))+I _{i}(t))\,dt \\ \hphantom{dz_{i}(t)={}}{} +\sigma _{ij}(t,z_{j}(t),z_{j}(t-\tau _{i}(t)))\,d\omega _{j}(t), \\ z_{i}(t)= \eta _{i}(t),\quad -\infty < t\leq 0, \end{cases} $$

where \(d_{i}>0\) and \(\alpha _{ij}\), \(\beta _{ij}\) are positive constants; the activation functions are represented by \(f_{j}\), \(g_{j}\); \(\tau _{i}(t)\) denotes the discrete time-varying transmission delays, \(\omega _{j}\) is the stochastic noise (Brownian motion) and \(I_{i}(t)\) is the external input.
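
Numerically, such stochastic models are typically integrated with the Euler–Maruyama scheme. A one-step sketch (our own illustrative discretization, not one taken from [51], with step size Δ and independent Gaussian increments \(\Delta \omega _{j}\sim \mathcal{N}(0,\Delta )\)) reads

$$ z_{i}(t+\Delta )\approx z_{i}(t)+ \Biggl(-d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}\bigl(z_{j}(t)\bigr)+\sum_{j=1}^{n}\beta _{ij}g_{j}\bigl(z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr)+I_{i}(t) \Biggr)\Delta +\sigma _{ij}\bigl(t,z_{j}(t),z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr)\Delta \omega _{j}, $$

which makes explicit that the diffusion term enters with the Brownian increment \(\Delta \omega _{j}\), of order \(\sqrt{\Delta }\), rather than with Δ itself.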

Moreover, the states of neurons may change instantaneously or abruptly at particular instants of time; such effects on the dynamical system are known as impulsive effects. Owing to their applications in fields like electronics, biology, economics, medicine, and telecommunication, impulsive effects in NNs have received much attention [1, 29, 31, 59]. In 2016, Tang and Wu studied the following impulsive RNNs:

$$ \textstyle\begin{cases} dz_{i}(t)= (-d_{i}z_{i}(t)+\sum_{j=1}^{n}\alpha _{ij}f_{j}(z _{j}(t))+\sum_{j=1}^{n}\beta _{ij}g_{j}(z_{j}(t-\tau _{i}(t))))\,dt \\ \hphantom{dz_{i}(t)={}}{} +\sigma _{ij}(t,z_{j}(t),z_{j}(t-\tau _{i}(t)))\,d\omega _{j}(t), \\ \Delta z_{i}(t_{k})= z_{i}(t_{k}^{+})-z_{i}(t_{k}^{-})=P_{ik}(z_{i}(t _{k})),\quad k=1,2,3,\ldots, \\ z_{i}(t)= \eta _{i}(t),\quad -\infty < t\leq 0. \end{cases} $$

Here, for \(i,j=1,2,3,\ldots,n\), the state vector is denoted by \(z_{i}(t)\); \(d_{i}>0\) and \(\alpha _{ij}\), \(\beta _{ij}\), \(\gamma _{ij}\) are positive constants. The activation functions are denoted by \(f_{j}\), \(g_{j}\), \(h _{j}\). The instantaneous change of the state at the impulsive moment \(t_{k}\), \(k=1,2,3,\ldots\), is represented by the impulsive function \(P_{ik}(z_{i}(t_{k}))\).
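
As a simple worked instance of such an impulsive function (our own illustration, not taken from the cited model), the linear map \(P_{ik}(z)=-\rho _{k}z\) with \(0<\rho _{k}<1\) yields

$$ z_{i}\bigl(t_{k}^{+}\bigr)=z_{i}\bigl(t_{k}^{-}\bigr)+P_{ik}\bigl(z_{i}\bigl(t_{k}^{-}\bigr)\bigr)=(1-\rho _{k})z_{i}\bigl(t_{k}^{-}\bigr), $$

that is, the state is contracted by the factor \(1-\rho _{k}\) at every impulsive moment, while between impulses the continuous dynamics run unchanged.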

In recent years, the exponential stability of neural networks with time-varying delays, stochastic effects and impulsive effects has been investigated by many researchers, using a variety of approaches. For instance, in [29, 45] Raja et al. investigate the exponential stability of neural networks by using Lyapunov functionals and linear matrix inequalities; in [59] Congcong et al. study exponential stability by using Lyapunov and impulsive delay differential inequality techniques; Li et al. [31] investigate exponential stability by using Razumikhin techniques, the Lyapunov functional approach and stochastic analysis. In [1] the fixed point theorem, the generalized Gronwall–Bellman inequality and differential inequalities are used, and in [36] Li et al. investigate exponential stability of neural network systems by utilizing the \(\mathcal{L}\)-operator delay differential inequality with impulses together with stochastic analysis techniques. Moreover, the stability of neural networks has been applied in various areas [44] like image encryption [57] and character recognition, forecasting, marketing, retail and sales, banking and finance, and medicine.

It is well known that the Lyapunov method plays a major role in stability theory, but the construction of a suitable Lyapunov function for large scale systems is quite hard. In order to overcome this problem, a novel technique based on Kirchhoff’s matrix tree theorem and the Lyapunov function was originated by Li et al. [33]. More specifically, a mathematical representation of the NNs is viewed as a directed graph, with each single neuron as a vertex system and the interactions among neurons through synaptic connections as directed arcs. The main advantage of this work is that it constructs a global Lyapunov function for the large scale system which is closely related to the topological structure of the network. The benefit of this approach is that it avoids constructing a particular Lyapunov function for a particular system directly. Building on these pioneering works, a few researchers have applied this approach. For instance, in [48], exponential synchronization of stochastic reaction–diffusion Cohen–Grossberg neural networks with time-varying delays was studied by using graph theory and the Lyapunov functional method; in [9], global exponential stability for multi-group neutral delayed systems was studied based on the Razumikhin method and graph theory; in [50] exponential stability of BAM neural networks with delays and reaction–diffusion was studied with the help of a graph-theoretic approach. To the best of the authors’ knowledge, however, although this approach has been used frequently elsewhere, only one or two works have applied it to NNs.

Compared with the existing research, in this work we concentrate on the pth moment exponential stability of ISRNNs with continuously distributed delay through the novel graph-theoretic approach. We proceed by applying some inequality techniques and by constructing, in a systematic way, a global Lyapunov function for ISRNNs using the combination of graph theory and the Lyapunov method. The main contributions of the proposed work are as follows:

  • To the best of the authors’ knowledge, the problem of exponential stability of RNNs with continuously distributed delay and impulsive effects via a graph-theoretic approach is still open. Hence we aim to solve this problem.

  • In the considered RNN model, impulses, discrete time delays, continuously distributed delays and white noise are taken into account simultaneously in order to establish pth moment exponential stability.

  • By using results from graph theory, we construct a suitable Lyapunov function from the vertex systems, which avoids the complications of constructing a Lyapunov function for the whole network directly.

  • Using the combination of graph theory and the Lyapunov method, together with some inequality techniques, novel sufficient conditions are provided in the form of Lyapunov-type and coefficient-type theorems, respectively.

  • To illustrate the effectiveness of our proposed work, we provide a numerical example and some simulations.

The remainder of this work is organized as follows: In the upcoming sections we present the mathematical description of the impulsive stochastic recurrent neural networks with mixed delays (ISRNNMDs), the preliminaries corresponding to this work, and some assumptions and basic notation. The main results are presented in Sect. 5, which yields sufficient conditions ensuring the exponential stability of the given system. Finally, an example and numerical simulations are given to illustrate the effectiveness of our study.

2 Mathematical model of impulsive stochastic recurrent neural networks

Inspired by the above analysis, we consider the following system of RNNs with mixed time-varying delays, stochastic perturbations and impulsive effects:

$$ \begin{aligned} &\begin{aligned} dz_{i}(t)= {}&\Biggl[-d_{i}z_{i}(t)+\sum _{j=1}^{n}\alpha _{ij}f_{j} \bigl(z _{j}(t)\bigr)+\sum_{j=1}^{n} \beta _{ij}g_{j}\bigl(z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr) \\ &{}+\sum_{j=1}^{n}\gamma _{ij} \int _{-\infty }^{t} \kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\Biggr]\,dt \\ &{}+\sigma _{ij}\biggl(t,z_{j}(t),z_{j} \bigl(t-\tau _{i}(t)\bigr), \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\biggr)\,d\omega _{j};\quad t\neq t_{k}, \end{aligned} \\ &\Delta z_{i}(t_{k})= z_{i} \bigl(t_{k}^{+}\bigr)-z_{i} \bigl(t_{k}^{-}\bigr)=P_{ik} \bigl(z_{i}(t _{k})\bigr),\quad k=1,2,3,\ldots, \end{aligned} $$
(1)

for \(i,j=1,2,3,\ldots,n\), in which \(n\geq 2\) indicates the number of components in the NNs; at time \(t>0\) the state of the ith component is denoted by \(z_{i}(t)\in \mathbb{R}\). We have the positive constants \(d_{i}>0\); the positive weight matrices \(A=(\alpha _{ij})_{n \times n}\), \(B=(\beta _{ij})_{n\times n}\), \(C=(\gamma _{ij})_{n\times n}\) denote the connection strengths and delayed connection strengths of the jth neuron to the ith neuron, respectively. The neuronal activation functions of the jth neuron are represented by \(f_{j}:\mathbb{R}\rightarrow \mathbb{R}\), \(g_{j}:\mathbb{R}\rightarrow \mathbb{R}\), \(h_{j}: \mathbb{R}\rightarrow \mathbb{R}\). The time-varying transmission delays \(\tau _{i}(t)>0\) satisfy \(0<\tau _{i}(t)<\tau \), and the non-negative continuous real-valued delay kernels \(\kappa _{ij}(\cdot)>0\) are defined on \([0,\infty )\). Moreover, the Borel measurable function \(\sigma _{ij}:\mathbb{R}\times \mathbb{R}^{n}\times \mathbb{R}^{n}\rightarrow \mathbb{R}^{m\times n}\) indicates the diffusion coefficient of the stochastic effects; \(\omega _{j}(t)\) denotes Brownian motion on a complete probability space \((\varOmega , \mathfrak{f},\mathbb{P})\) with natural filtration \(\{\mathfrak{f}_{t}\}_{t \geq 0}\). In addition, the second part of (1) is its discrete part, where \(P_{ik}(z_{i}(t_{k}))\) represents the impulsive perturbation, i.e., the sudden change of the state \(z_{i}\) at the impulsive moment \(t_{k}\); the discrete set of impulsive moments satisfies \(0=t_{0}< t_{1}< t_{2}<\cdots<t_{k}<\cdots\) and \(\lim_{k\rightarrow \infty }t _{k}=\infty \). The left- and right-hand limits at the moment \(t_{k}\) are denoted by \(z_{i}(t_{k}^{-})\) and \(z_{i}(t_{k}^{+})\), respectively. For the system (1) the initial conditions are given in the form

$$ z_{i}(t)=\varpi _{i}(t),\quad -\infty < t\leq 0. $$
(2)

Remark 2.1

If \(g_{j}=h_{j}=\sigma _{ij}=0\) (\(i,j=1,2,3,\ldots,n\)), then (1) reduces to

$$ \dot{z}_{i}(t)=-d_{i} z_{i}(t)+\sum _{j=1}^{n}\alpha _{ij}f_{j} \bigl(z _{j}(t)\bigr),\quad i=1,2,3,\ldots,n. $$
(3)

Hence, the results of our study strictly generalize those in [39].

3 Preliminaries

In this work we investigate the exponential stability of ISRNNMDs. Throughout, we use the following notation: \(\mathbb{D}=\{1,2,\ldots,n\}\); \(\mathbb{R}=(-\infty ,+\infty )\) represents the set of real numbers; \(\mathbb{R}^{+}=(0,+ \infty )\); \(\mathbb{N}\) indicates the set of natural numbers; \(\mathbb{R}^{n}\) is the n-dimensional Euclidean space and \(\mathbb{R}^{n\times m}\) is the set of all \(n\times m\) real matrices. We write \(\mathbb{E}(\cdot)\) for the mathematical expectation with respect to the probability measure \(\mathbb{P}\); the Euclidean norm of a vector z and the trace norm of a matrix A are denoted by \(|z|\) and \(\sqrt{ \operatorname{trace}(A^{T}A)}\), respectively. The n-dimensional Brownian motion \(\omega (t)=(\omega _{1}(t),\omega _{2}(t),\ldots,\omega _{n}(t))^{T}\), \(t\geq 0\), is defined on the complete probability space \((\varOmega ,\mathfrak{f},\mathbb{P})\) with the natural filtration \(\{\mathfrak{f}_{t}\}_{t\geq 0}\).

Graph theory

  • A non-empty directed graph (digraph) \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) consists of a set of vertices or nodes \(\mathcal{V}=\{1,2,\ldots,n\}\) and a set of edges or arcs \(\mathcal{E}\), where the arc \((i,j)\) is directed from node i to node j.

  • If we allocate a positive weight \(w_{ij}\) to every arc \((i,j)\), then the digraph is said to be a weighted digraph. The weight \(\mathcal{W}(\mathcal{H})\) of a subgraph \(\mathcal{H}\) is the product of the weights on all of its arcs.

  • A dipath \(\mathcal{P}\) is a subgraph of \(\mathcal{G}\) consisting of a sequence of distinct nodes \(h_{1},h_{2},\ldots,h_{m}\) together with the arcs \((h_{i},h_{i+1})\), all oriented in the same direction. Moreover, the directed path \(\mathcal{P}\) is said to be a dicycle \(\mathfrak{C}\) if its initial and terminal nodes coincide, that is, \(h_{1}=h_{m}\).

  • A subgraph \(\mathfrak{T}\) of \(\mathcal{G}\) is said to be a tree if it is connected and contains no directed cycle. A tree \(\mathfrak{T}\) is said to be rooted at node i, called the root, if i is not a terminal node of any arc and each of the remaining nodes is a terminal node of exactly one arc. A subgraph \(\mathfrak{U}\) of \(\mathcal{G}\) is unicyclic if it is a disjoint union of rooted trees whose roots form a directed cycle.

  • The Laplacian matrix of \(\mathcal{G}\) is defined as

    $$ L_{p}= \textstyle\begin{cases} -w_{ij}, & \mbox{if }i\neq j, \\ \sum_{h\neq j}w_{ih}, & i=j. \end{cases} $$
  • A digraph \(\mathcal{G}\) is called strongly connected if for any pair of distinct nodes i, j there exists a directed path from i to j and a directed path from j to i. (A small computational sketch of \(L_{p}\) and its cofactors is given after this list.)
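
To make the Laplacian \(L_{p}\) and the cofactors \(c_{i}\) used below (Lemma 4.4 and Theorem 5.1) concrete, the following sketch computes them for a hypothetical three-node weighted digraph; the weights are illustrative assumptions.

```python
import numpy as np

# Hypothetical weighted digraph on 3 nodes with arcs 1->2, 2->3, 3->1.
# W[i, j] is the weight w_ij of the arc from node i+1 to node j+1.
W = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [3.0, 0.0, 0.0]])
n = W.shape[0]

# Laplacian as defined above: -w_ij off the diagonal, row sums on the diagonal.
Lp = np.diag(W.sum(axis=1)) - W

# c_i = cofactor of the ith diagonal element of Lp
# (determinant of Lp with row i and column i deleted).
c = np.array([np.linalg.det(np.delete(np.delete(Lp, i, 0), i, 1))
              for i in range(n)])
print(c)  # approximately [6, 3, 2]: all positive, as this digraph is strongly connected
```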

For a Lyapunov function \(v_{i}(t,z_{i})\in \mathbf{C}^{1,2}( \mathbb{R}^{+}\times \mathbb{R}^{n};\mathbb{R}^{+})\), \(i\in \mathbb{D}\), that is, continuously differentiable in t and twice continuously differentiable in \(z_{i}\), we define the differential operator \(\mathfrak{L}v_{i}(t,z _{i})\) associated with the ith vertex of (1) by

$$\begin{aligned} \mathfrak{L}v_{i}\bigl(t,z_{i}(t)\bigr)={} & \frac{\partial v_{i}(t,z_{i}(t))}{ \partial t}+\frac{\partial v_{i}(t,z_{i}(t))}{\partial z_{i}(t)} \Biggl[-d_{i}z_{i}(t)+ \sum_{j=1}^{n}\alpha _{ij}f_{j} \bigl(z_{j}(t)\bigr) \\ &{}+\sum_{j=1}^{n}\beta _{ij}g_{j}\bigl(z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr)+ \sum_{j=1}^{n} \gamma _{ij} \int _{-\infty }^{t}\kappa _{ij}(t-s)h _{j}\bigl(z_{j}(s)\bigr)\,ds \Biggr] \\ &{}+\frac{1}{2}\operatorname{Tr} \biggl[\sigma _{ij}^{T} \biggl(t,z_{j}(t),z_{j}\bigl(t-\tau _{i}(t) \bigr), \int _{-\infty }^{t}\kappa _{ij}(t-s) \\ &{}\times h_{j}\bigl(z_{j}(s)\bigr)\,ds\biggr) \frac{\partial ^{2}v_{i}(t,z_{i}(t))}{ \partial z_{i}^{2}(t)} \\ &{}\times \sigma _{ij}\biggl(t,z_{j}(t),z_{j} \bigl(t-\tau _{i}(t)\bigr) \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\biggr) \biggr]. \end{aligned}$$
(4)

Here

$$\begin{aligned} &\frac{\partial v_{i}(t,z_{i}(t))}{\partial z_{i}(t)}= \biggl(\frac{ \partial v_{i}(t,z_{i}(t))}{\partial z^{(1)}_{i}(t)},\frac{\partial v _{i}(t,z_{i}(t))}{\partial z^{(2)}_{i}(t)}, \ldots,\frac{\partial v_{i}(t,z _{i}(t))}{\partial z^{(n)}_{i}(t)} \biggr), \\ &\frac{\partial ^{2}v_{i}(t,z_{i}(t))}{\partial z_{i}^{2}(t)}= \biggl(\frac{ \partial ^{2}v_{i}(t,z_{i}(t))}{\partial z^{(k)}_{i}(t)\partial z^{(h)} _{i}(t)} \biggr)_{n \times n}. \end{aligned}$$

4 Basic definition and lemmas in graph theory

Definition 4.1

([26])

If for any given \(\epsilon >0\) there exist positive constants δ and \(a>0\) such that

$$ \mathbb{E} \bigl\vert z(t) \bigr\vert ^{p}\leq \epsilon e^{-at}\quad \mbox{whenever the initial data satisfies } \mathbb{E} \Vert \eta \Vert ^{p}< \delta , $$

for all \(t\geq 0\), then the trivial solution of the given system (1) is said to be exponentially stable in the pth moment. If \(p=2\), the system is exponentially stable in the mean square sense.

Definition 4.2

The function \(v_{i}(t,z_{i}(t))\in \mathbf{C}^{1,2}(\mathbb{R}^{+} \times \mathbb{R}^{n};\mathbb{R}^{+})\), \(i\in \mathbb{D}\), is said to be a vertex-Lyapunov function for system (1) if the following conditions are satisfied:

\((D_{1})\):

There exist positive constants \(a_{i}\), \(b_{i}\) such that

$$ a_{i} \vert z_{i} \vert ^{p}\leq v_{i}\bigl(z_{i}(t),t\bigr)\leq b_{i} \vert z_{i} \vert ^{p}. $$
(5)
\((D_{2})\):

There exist positive scalars \(\sigma _{i}\), \(\lambda _{i}\) and \(\eta _{i}\), a matrix \((a_{ij})_{n\times n}\) with \(a_{ij}>0\), and functions \(F_{ij}(z_{i}(t),z_{j}(t),t)\) such that, for every i, j,

$$\begin{aligned} \mathfrak{L}v_{i}\bigl(z_{i}(t),t\bigr)\leq {}&{-}\sigma _{i}v_{i}\bigl(z_{i}(t),t\bigr)+ \lambda _{i}v_{i}\bigl(z_{i}\bigl(t-\tau _{i}(t)\bigr),t\bigr)+\sum_{j=1}^{n}a_{ij}F_{ij} \bigl(z _{i}(t),z_{j}(t),t\bigr) \\ &{}+\eta _{i} \int _{-\infty }^{t}\mathcal{K}_{ij}(t-s)v_{i} \bigl(s,z _{i}(s)\bigr)\,ds,\quad \mbox{for }t\neq t_{k}. \end{aligned}$$
(6)
\((D_{3})\):

Along each directed cycle \(\mathfrak{C}\) of the weighted digraph \((\mathcal{G}, \mathcal{A})\),

$$ \sum_{(k,h)\in \mathcal{E}(\mathfrak{C})}F_{kh}\bigl(z_{k}(t),z_{h}(t),t \bigr) \leq 0. $$
(7)

Lemma 4.3

(Young’s inequality)

Let \(s,t\geq 0\) and \(m\geq n\geq 0\) with \(m>0\). Then

$$ s^{m-n}t^{n}\leq \frac{(m-n)s^{m}+nt^{m}}{m}. $$
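
In the estimates of Sect. 5 this lemma is applied with \(m=p\); for instance, taking \(n=1\), \(s=|z_{i}(t)|\) and \(t=|z_{j}(t)|\) gives

$$ \bigl\vert z_{i}(t) \bigr\vert ^{p-1} \bigl\vert z_{j}(t) \bigr\vert \leq \frac{(p-1) \vert z_{i}(t) \vert ^{p}+ \vert z_{j}(t) \vert ^{p}}{p}, $$

which is exactly the splitting used in (14)–(16); the case \(n=2\) is used in (17).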

Lemma 4.4

([32])

Let \((\mathcal{G},\mathcal{A})\) be a weighted digraph with \(n \geq 2\) vertices and \(\mathcal{A}=(a_{ij})_{n \times n}\). Let \(\mathfrak{U}\) be the set of all spanning unicyclic graphs of \((\mathcal{G},\mathcal{A})\), and let \(c_{i}\) denote the cofactor of the ith diagonal element of \(L_{p}\). Then the following identity holds:

$$ \sum_{i,j=1}^{n}c_{i}a_{ij}F_{ij} \bigl(t,z_{i}(t),z_{j}(t)\bigr)= \sum _{\mathcal{U}\in \mathfrak{U}}\mathcal{W}(\mathcal{U}) \sum _{(k,h)\in \mathcal{E}(\mathfrak{C}_{\mathcal{U}})}F_{kh}\bigl(t,z _{k}(t),z_{h}(t) \bigr). $$

Here, for \(k,h\in \mathbb{D}\), \(F_{kh}(t,z_{k}(t),z_{h}(t))\) is an arbitrary function, \(\mathfrak{U}\) is the set of all spanning unicyclic graphs, and \(\mathcal{W}(\mathcal{U})\) and \(\mathfrak{C}_{\mathcal{U}}\) denote, respectively, the weight and the dicycle of \(\mathcal{U}\). Additionally, \(c_{i}>0\) for \(i=1,2,\ldots,n\) if \((\mathcal{G},\mathcal{A})\) is strongly connected.
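
As a minimal worked instance (for \(n=2\) with \(a_{12},a_{21}>0\), writing \(F_{kh}\) for \(F_{kh}(t,z_{k}(t),z_{h}(t))\)), the cofactors of \(L_{p}\) are \(c_{1}=a_{21}\) and \(c_{2}=a_{12}\), the only spanning unicyclic graph \(\mathcal{U}\) is the 2-cycle with weight \(\mathcal{W}(\mathcal{U})=a_{12}a_{21}\), and the identity reduces to

$$ c_{1}a_{12}F_{12}+c_{2}a_{21}F_{21}=a_{12}a_{21}(F_{12}+F_{21}), $$

so condition \((D_{3})\), which requires \(F_{12}+F_{21}\leq 0\) along the cycle, makes the whole weighted sum non-positive.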

5 Main results

Throughout this work, in order to obtain our main results for the given ISRNNMD (1), we impose the following standard assumptions:

\((P_{1})\):

For every \(j\in \mathbb{D}\) the functions \(f_{j}(\cdot)\), \(g _{j}(\cdot)\) and \(h_{j}(\cdot)\) are Lipschitz continuous with Lipschitz constants \(L_{j}\), \(M_{j}\) and \(N_{j}\), respectively.

\((P_{2})\):

There exist positive constants \(\nu _{i}\) such that \(V(t_{k},z+P_{ik}(z))\leq \nu _{i}V(t_{k}^{-},z)\) for \(t=t_{k}\).

\((P_{3})\):

There exist non-negative constants \(\mu _{i}\), \(\phi _{i}\), \(\psi _{i}\) (\(i\in \mathbb{D}\)) such that

$$ \operatorname{tr} \bigl(\sigma _{ij}^{T}(t,u_{j},v_{j},w_{j}) \sigma _{ij}(t,u_{j},v_{j},w _{j}) \bigr) \leq \mu _{i} \vert u_{j} \vert ^{2}+\phi _{i} \vert v_{j} \vert ^{2}+\psi _{i} \vert w_{j} \vert ^{2}. $$
\((P_{4})\):

The delay kernels \(\kappa _{ij}\) are bounded by non-negative constants \(\chi _{ij}\) (a worked instance for the exponential kernel is given after this list of assumptions):

$$ \bigl\vert \kappa _{ij}(t) \bigr\vert \leq \chi _{ij},\quad t\in [0,\infty )\ (i,j\in \mathbb{D}). $$
\((P_{5})\):
$$ \int _{0}^{\infty }\kappa _{ij}(t)\,dt=1 \quad \mbox{and}\quad \int _{0}^{\infty }e^{rt}\kappa _{ij}(t)\,dt=\mathfrak{K}_{ij}< \infty . $$
\((P_{6})\):

\(f_{j}(0)=g_{j}(0)=h_{j}(0)=0\) and \(\sigma _{ij}(t,0,0,0)=0\).

\((P_{7})\):

For every \(k\in \mathbb{N}\) there exist scalars \(\upsilon _{i}>0\) such that

$$ \bigl\vert P_{ik}\bigl(z_{i}\bigl(t_{k}^{-} \bigr)\bigr) \bigr\vert \leq \upsilon _{i} \bigl\vert z_{i}\bigl(t_{k}^{-}\bigr) \bigr\vert . $$
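
As a concrete check of \((P_{4})\) and \((P_{5})\), consider the exponential kernel \(\kappa _{ij}(t)=e^{-t}\) used in Example 6.1 and any \(0<r<1\):

$$ \bigl\vert \kappa _{ij}(t) \bigr\vert =e^{-t}\leq 1=\chi _{ij},\qquad \int _{0}^{\infty }e^{-t}\,dt=1,\qquad \int _{0}^{\infty }e^{rt}e^{-t}\,dt=\frac{1}{1-r}=\mathfrak{K}_{ij}< \infty . $$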

Theorem 5.1

Let \(\zeta =\inf_{k\in \mathbb{N}}\{t_{k}-t_{k-1}\}\) be finite, let \((\mathcal{G},\mathcal{A})\) be strongly connected, and suppose there exist constants σ, λ, ν, η with \(1<\nu <e^{(\sigma -\lambda \nu )\zeta }\). Suppose further that the system (1) admits the vertex-Lyapunov functions \(v_{i}(t,z_{i}(t))\) and that assumption \((P_{2})\) holds. Then the trivial solution of the system (1) is exponentially stable in the pth moment.

Proof

For any \(\varepsilon >0\) there exists a positive constant \(\delta (\varepsilon )>0\) such that \(b_{i}\delta <\nu _{i}a_{i}\varepsilon \). Let \(z(t)=z(t,t_{0},\xi )\) denote the solution of (1) with initial data \((t_{0},\xi )\), for some \(t_{0}\geq 0\) and \(z(t_{0})=z_{t_{0}}= \xi \in PC_{\mathcal{F}_{0}(\delta )}^{b}\). We will show that

$$ \mathbb{E} \bigl\vert z_{i}(t) \bigr\vert ^{p} < \varepsilon , \quad \forall t\geq t_{0}. $$

Let us consider the global Lyapunov function

$$ V\bigl(t,z(t)\bigr)=\sum_{i=1}^{n} c_{i} v_{i}\bigl(t,z_{i}(t)\bigr). $$

Here \(c_{i}\) denotes the cofactor of the ith diagonal element of \(L_{p}\) of the digraph \((\mathcal{G},\mathcal{A})\); since the digraph is strongly connected, by Lemma 4.4 we have \(c_{i}>0\) for any \(i\in \mathbb{D}\). Choosing an arbitrary constant θ, we set

$$ W\bigl(t,z(t)\bigr)=e^{\theta t} V\bigl(t,z(t)\bigr). $$

When \(t\neq t_{k}\), we use Itô’s formula,

$$\begin{aligned} dW\bigl(t,z(t)\bigr) =&\mathcal{L}W\bigl(t,z(t)\bigr)\,dt \\ &{}+\frac{\partial W(t,z(t))}{\partial z(t)} \sigma \biggl(t,z(t),z\bigl(t-\tau (t)\bigr), \int _{-\infty }^{t}\kappa (t-s)h\bigl(z(s)\bigr)\,ds \biggr)\,d\omega (t), \end{aligned}$$

where \(t\in [t_{k-1},t_{k})\), \(k\in \mathbb{N}\). Integrating the above expression from t to \(t+\Delta t\) for small enough \(\Delta t>0\) and taking the mathematical expectation, we obtain, for \(t, t+ \Delta t\in [t_{k},t_{k+1})\),

$$ \mathbb{E}W(t+\Delta t)= \mathbb{E}W(t)+\mathbb{E} \int _{t}^{t+\Delta t}\mathcal{L}W\bigl(s,z(s)\bigr)\,ds, $$

which implies that

$$ \mathbb{E}D^{+}W\bigl(t,z(t)\bigr)=\mathbb{E}\mathcal{L}W \bigl(t,z(t)\bigr), \quad t\in [t_{k-1},t_{k}), k\in \mathbb{N}. $$

Here, \(D^{+}W(t,z(t))\) denotes the upper-right Dini derivative of \(W(t,z(t))\) defined by

$$ D^{+}W\bigl(t,z(t)\bigr)= \limsup_{h\rightarrow 0^{+}} \frac{\mathbb{E}W(t+h,z(t+h))-\mathbb{E}W(t,z(t))}{h} . $$

Also

$$ \mathbb{E}\mathcal{L}W\bigl(t,z(t)\bigr)=\theta e^{\theta t}\mathbb{E}V \bigl(t,z(t)\bigr)+e ^{\theta t}\mathbb{E}\mathcal{L}V\bigl(t,z(t)\bigr) $$

and we can choose \(\delta >0\) for any given \(\varepsilon > 0\) such that \(b_{i}\delta <\nu _{i}a_{i}\varepsilon \). Now, we assume \(\mathbb{E}\|\xi \|^{p}<\delta \). By (5) we obtain

$$ \mathbb{E}W\bigl(t,z(t)\bigr)\leq b_{i}\mathbb{E} \Vert \xi \Vert ^{p}< b_{i}\delta < \nu _{i}a_{i} \varepsilon ,\quad t\in [t_{0}-\tau ,t_{0}]. $$

We first prove that

$$ \mathbb{E}W\bigl(t,z(t)\bigr)\leq a_{i}\varepsilon ,\quad t\in (t_{0},t_{1}). $$
(8)

Suppose, on the contrary, that we have \(s\in (t_{0},t_{1})\) such that

$$ \mathbb{E}W\bigl(t,z(t)\bigr)> a_{i}\varepsilon . $$

Set

$$\begin{aligned}& s_{1}= \inf \bigl\{ t\in (t_{0},t_{1}): \mathbb{E}W\bigl(t,z(t)\bigr)>a_{i}\varepsilon \bigr\} ,\quad \mbox{then } s_{1}\in (t_{0},t_{1}) \\& s_{2}= \sup \biggl\{ t\in [t_{0},s_{1}): \mathbb{E}W\bigl(t,z(t)\bigr)< \frac{1}{\nu _{i}} a_{i}\varepsilon \biggr\} . \end{aligned}$$

In this way, writing \(z(t- \tau (t))= (z_{1}(t-\tau _{1}(t)),z_{2}(t-\tau _{2}(t)),\ldots,z_{n}(t- \tau _{n}(t)) )\) and \(\tau =\max_{1\leq j\leq n}\{\tau _{j}\}\), it is obvious that, for \(t\in [s_{2},s_{1}]\),

$$ \mathbb{E}W\bigl(t,z\bigl(t-\tau (t)\bigr)\bigr)\leq a_{i} \varepsilon = \frac{1}{\nu _{i}}\nu _{i} a_{i} \varepsilon \leq \nu _{i} \mathbb{E}W\bigl(t,z(t)\bigr), \quad \forall - \tau \leq \theta \leq 0. $$

Hence,

$$ \mathbb{E}V\bigl(t,z\bigl(t-\tau (t)\bigr)\bigr)\leq \nu _{i} e^{\theta \tau }\mathbb{E}V\bigl(t,z(t)\bigr), \quad \forall t\in [s_{2},s_{1}]. $$

Therefore, from (6), for \(t\in [s_{2},s_{1}]\) we deduce that

$$\begin{aligned} D^{+}\mathbb{E}W\bigl(t,z(t)\bigr)\leq{} & \mathcal{L}\mathbb{E}W \bigl(t,z(t)\bigr) \\ \leq{} &\sum_{i=1}^{n}c_{i}e^{\theta t} \mathbb{E} \Biggl[-\sigma _{i} v_{i} \bigl(t,z_{i}(t)\bigr)+\lambda _{i} v_{i} \bigl(t,z_{i}\bigl(t-\tau (t)\bigr)\bigr) \\ &{}+\sum_{j=1}^{n}a_{ij}F_{ij} \bigl(t,z_{i}(t),z_{j}(t)\bigr) \\ &{}+\eta _{i} \int _{-\infty }^{t}\kappa _{ij}(t-s)v_{i} \bigl(s,z_{i}(s)\bigr)\,ds \Biggr]+ \sum_{i=1}^{n} \theta c_{i} e^{\theta t}\mathbb{E}v_{i}\bigl(t,z _{i}(t)\bigr) \\ \leq{} & \bigl[-\sigma _{i} +\lambda _{i} \nu _{i} e^{\theta \tau }+\eta _{i}+ \theta \bigr] \mathbb{E}W\bigl(t,z(t)\bigr). \end{aligned}$$

Integrating the above inequality over \([s_{2},s_{1}]\), we get

$$\begin{aligned}& \int _{s_{2}}^{s_{1}} \frac{D^{+}\mathbb{E}W(t,z(t))}{\mathbb{E}W(t,z(t))}\leq \mathbb{E} \int _{s_{2}}^{s_{1}}\bigl[-\sigma _{i} + \lambda _{i} \nu _{i} e^{ \theta \tau }+\eta _{i}+\theta \bigr]\,dt, \\& \begin{aligned} \mathbb{E}W\bigl(s_{1},z(s_{1})\bigr)&\leq \mathbb{E}W \bigl(s_{2},z(s_{2})\bigr)e^{(- \sigma _{i} +\theta +\eta _{i}+\lambda _{i} \nu _{i} e^{\theta \tau }) \zeta } \\ &\leq \nu _{i} a_{i}\varepsilon e^{(-\sigma _{i} +\theta +\eta _{i}+ \lambda _{i} \nu _{i} e^{\theta \tau })\zeta } \\ &< a_{i}\varepsilon . \end{aligned} \end{aligned}$$

Hence,

$$ \mathbb{E}W\bigl(s_{1},z(s_{1})\bigr)< a_{i} \varepsilon $$

which contradicts the definition of \(s_{1}\), namely \(\mathbb{E}W(s_{1},z(s_{1}))\geq a_{i}\varepsilon \). Hence (8) holds.

Next, assume that, for \(m=1,2,3,\ldots,k\) and \(k\in \mathbb{N}\),

$$ \mathbb{E}W\bigl(t,z(t)\bigr)\leq a_{i}\varepsilon , \quad \forall t\in [t_{m-1},t_{m}) $$
(9)

We want to prove that

$$ \mathbb{E}W\bigl(t,z(t)\bigr)\leq a_{i}\varepsilon ,\quad \forall t\in [t_{k},t_{k+1}). $$
(10)

On the contrary, there exist some \(t\in [t_{k},t_{k+1})\) such that

$$ \mathbb{E}W\bigl(t,z(t)\bigr)>a_{i}\varepsilon . $$

By using assumption \((P_{2})\) and (9), we obtain

$$\begin{aligned} \mathbb{E}W\bigl(t_{k},z(t_{k})\bigr) = &\mathbb{E} \bigl(e^{\theta t_{k}}V\bigl(t_{k},z(t _{k})\bigr) \bigr) \\ \leq &\mathbb{E}\bigl(e^{\theta t_{k}}V\bigl(t_{k},z+P_{ik}(z) \bigr)\bigr) \\ \leq &\nu _{i} \mathbb{E}W\bigl(t_{k}^{-},z \bigl(t_{k}^{-}\bigr)\bigr) \\ \leq &a_{i}\nu _{i}\varepsilon . \end{aligned}$$

Now, set

$$ s_{1}=\inf \bigl\{ t\in (t_{k},t_{k+1}): \mathbb{E}W\bigl(t,z(t)\bigr)>a_{i}\varepsilon \bigr\} ,\quad \mbox{then } s_{1}\in (t_{k},t_{k+1}). $$

Let

$$ s_{2}=\sup \bigl\{ t\in [t_{k},s_{1}): \mathbb{E}W\bigl(t,z(t)\bigr)< \nu _{i}a_{i} \varepsilon \bigr\} . $$

For \(t\in (s_{2},s_{1})\), we get

$$ \mathbb{E}W\bigl(t,z\bigl(t-\tau (t)\bigr)\bigr)\leq a_{i} \varepsilon = \frac{1}{\nu _{i}}\nu _{i}a_{i} \varepsilon \leq \frac{1}{\nu _{i}}\mathbb{E}W\bigl(t,z(t)\bigr),\quad - \tau \leq \theta \leq 0. $$

Hence,

$$ \mathbb{E}W\bigl(t,z\bigl(t-\tau (t)\bigr)\bigr)\leq \frac{e^{a_{i}\tau }}{\nu _{i}} \mathbb{E}V\bigl(t,z(t)\bigr). $$

Similarly, we can derive \(\mathbb{E}W(s_{1},z(s_{1}))< a_{i}\varepsilon \), which contradicts the definition of \(s_{1}\). Hence (10) holds for \(t\in [t_{k},t_{k+1})\).

By mathematical induction \(\mathbb{E}W(t,z(t))\leq a_{i} \varepsilon \) for \(t\geq t_{0}\). Hence,

$$ \mathbb{E} \bigl\vert z(t) \bigr\vert ^{p}\leq e^{-\theta t} \varepsilon ,\quad t\geq t_{0}. $$

 □

Remark 5.2

In the study of the stability of ISRNNMDs, constructing a Lyapunov function is a formidable task. However, Theorem 5.1 offers a technique for constructing a Lyapunov function for (1) systematically, by using a Lyapunov function \(v_{i}(t,z_{i}(t))=v_{i1}(t,z_{i}(t))+v_{i2}(t,z_{i}(t))+v_{i3}(t,z_{i}(t))\) for each vertex system, which avoids the difficulty of finding a Lyapunov function for the ISRNNMD directly. In the final section, an example is presented to show the validity of the technique.

Remark 5.3

It should be noticed that Theorem 5.1 requires \(c_{i}>0\), that is, the graph \((\mathcal{G},\mathcal{A})\) must be strongly connected, which means that the exponential stability of RNNs is closely related to the topological properties of the network. On this basis, we can obtain the following more explicit result.

Theorem 5.4

Assume that assumptions \((P_{1})\)–\((P_{7})\) hold; then the considered system (1) is exponentially stable in the pth moment.

Proof

Let us define the following Lyapunov–Krasovskii functional for (1):

$$\begin{aligned} v_{i}\bigl(t,z_{i}(t)\bigr)={} &e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\sum _{j=1}^{n}e^{r \tau } \int _{t-\tau _{i}(t)}^{t} e^{rs} \bigl\vert z_{i}(s) \bigr\vert ^{p-2}g_{j}^{2} \bigl(z_{j}(s)\bigr)\,ds \\ &{}+\sum_{j=1}^{n}\sum _{l=1}^{n}m_{l} \int _{0}^{ \infty }\kappa _{lj}(\theta ) \\ &{}\times \int _{t-\theta }^{t} e^{r(s+\theta )} \bigl\vert z_{i}(s) \bigr\vert ^{p-2}h _{j}^{2} \bigl(z_{j}(s)\bigr)\,ds\,d\theta . \end{aligned}$$
(11)

Now we compute \(\mathcal{L}v_{i}(t,z_{i}(t))\) for \(t\neq t_{k}\). By using Itô’s formula along the trajectories of model (1), we obtain

$$ \mathcal{L}v_{i}\bigl(t,z_{i}(t)\bigr)= \mathcal{L}v_{i1}\bigl(t,z_{i}(t)\bigr)+ \mathcal{L}v_{i2}\bigl(t,z_{i}(t)\bigr)+ \mathcal{L}v_{i3}\bigl(t,z_{i}(t)\bigr), $$
(12)

where

$$\begin{aligned}& \mathcal{L}v_{i1}\bigl(t,z_{i}(t)\bigr) \\& \quad \leq r e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}+pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t) \Biggl[-d _{i}z_{i}(t)+ \sum_{j=1}^{n}\alpha _{ij}f_{j} \bigl(z_{j}(t)\bigr)+\sum_{j=1}^{n} \beta _{ij} \\& \qquad {} \times g_{j}\bigl(z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr)+\sum_{j=1}^{n} \gamma _{ij} \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds \Biggr]+ \frac{p(p-1)}{2}e^{rt} \\& \qquad {} \times \bigl\vert z_{i}(t) \bigr\vert ^{p-2}\sum _{j=1}^{n}\sigma _{ij}^{2} \biggl(t,z_{i}(t),z_{i}\bigl(t-\tau _{i}(t)\bigr), \int _{-\infty }^{t}\kappa _{ij}(t-s)h _{j}\bigl(z_{j}(s)\bigr)\,ds \biggr) \\& \quad \leq re^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}-d_{i}pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z^{2}_{i}(t)+pe ^{rt}\sum_{j=1}^{n}\alpha _{ij} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t)f_{j}\bigl(z _{j}(t)\bigr) \\& \qquad {} +pe^{rt}\sum_{j=1}^{n} \beta _{ij} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t)g _{j}\bigl(z_{j} \bigl(t-\tau _{i}(t)\bigr)\bigr) \\& \qquad {} +\sum_{j=1}^{n}\gamma _{ij}pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t) \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds \\& \qquad {} +\frac{p(p-1)}{2}e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2} \\& \qquad {} \times \sigma _{ij}^{2} \biggl(t,z_{i}(t),z_{i} \bigl(t-\tau _{i}(t)\bigr), \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds \biggr). \end{aligned}$$
(13)

By using Lemma 4.3, we obtain

$$\begin{aligned}& pe^{rt}\sum_{j=1}^{n} \alpha _{ij} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t)f_{j}\bigl(z _{j}(t)\bigr) \\& \quad \leq \sum_{j=1}^{n} \vert \alpha _{ij} \vert pe^{rt}L_{j} \bigl[ \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z _{i}(t) \bigl\vert z_{j}(t) \bigr\vert \bigr] \\& \quad \leq \sum_{j=1}^{n} \vert \alpha _{ij} \vert pe^{rt}L_{j} \biggl[ \frac{p-1}{p} \bigl\vert z _{i}(t) \bigr\vert ^{p}+\frac{1}{p} \bigl\vert z_{j}(t) \bigr\vert ^{p} \biggr] \\& \quad = \sum_{j=1}^{n} \vert \alpha _{ij} \vert e^{rt}L_{j} \bigl[(p-1) \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \bigl\vert z _{j}(t) \bigr\vert ^{p} \bigr]. \end{aligned}$$
(14)

Similarly,

$$\begin{aligned}& pe^{rt}\sum_{j=1}^{n}\beta _{ij} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t)g_{j}\bigl(z _{j} \bigl(t-\tau _{i}(t)\bigr)\bigr) \\& \quad \leq pe^{rt}\sum_{j=1}^{n} \vert \beta _{ij} \vert \bigl\vert z_{i}(t) \bigr\vert ^{p-1}M_{j} \bigl\vert z _{j} \bigl(t-\tau _{i}(t)\bigr) \bigr\vert \\& \quad \leq pe^{rt}\sum_{j=1}^{n} \vert \beta _{ij} \vert M_{j} \biggl[ \frac{p-1}{p} \bigl\vert z _{i}(t) \bigr\vert ^{p}+\frac{1}{p} \bigl\vert z_{j}\bigl(t- \tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr] \\& \quad \leq \sum_{j=1}^{n} \vert \beta _{ij} \vert e^{rt}M_{j} \bigl[(p-1) \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \bigl\vert z _{j}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \bigr] \end{aligned}$$
(15)

and

$$\begin{aligned}& \sum_{j=1}^{n}\gamma _{ij}pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}z_{i}(t) \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds \\& \quad \leq \sum_{j=1}^{n} \vert \gamma _{ij} \vert pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-1} \int _{-\infty }^{t}\kappa _{ij}(t-s)N_{j} \bigl\vert z_{j}(s) \bigr\vert \,ds \\& \quad \leq \sum_{j=1}^{n} \vert \gamma _{ij} \vert pe^{rt}N_{j} \int _{-\infty } ^{t}\kappa _{ij}(t-s) \bigl\vert z_{i}(t) \bigr\vert ^{p-1} \bigl\vert z_{j}(s) \bigr\vert \,ds \\& \quad \leq \sum_{j=1}^{n}(p-1) \vert \gamma _{ij} \vert e^{rt}N_{j} \bigl\vert z_{i}(t) \bigr\vert ^{p} \int _{-\infty }^{t}\kappa _{ij}(t-s)\,ds \\& \qquad {} +\sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j} \int _{-\infty } ^{t}\kappa _{ij}(t-s) \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds \\& \quad \leq \sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j}(p-1) \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \sum _{j=1}^{n} \vert \gamma _{ij} \vert \\& \qquad {} \times e^{rt}N_{j} \int _{-\infty }^{t}\kappa _{ij}(t-s) \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds. \end{aligned}$$
(16)

Using assumption \((P_{3})\), Lemma 4.3 and the well-known Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}& \frac{p(p-1)}{2}e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}\sigma _{ij}^{2} \biggl(t,z_{i}(t),z _{i}\bigl(t-\tau _{i}(t)\bigr), \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds \biggr) \\& \quad \leq \frac{p(p-1)}{2}e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2} \biggl[\mu _{i}\bigl(z _{i}(t)\bigr)^{2}+\phi _{i} \biggl[ \frac{p-2}{p} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\frac{2}{p} \bigl\vert z _{i}\bigl(t- \tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr] \\& \qquad {} +\psi _{i}\biggl( \int _{-\infty }^{t}\kappa _{ij}(t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\biggr)^{2} \biggr]. \end{aligned}$$
(17)

Substituting (14)–(17) into (13), we get

$$\begin{aligned}& \mathcal{L}v_{i1}\bigl(t,z_{i}(t)\bigr) \\& \quad \leq re^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}-d_{i}pe^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\sum _{j=1}^{n} \vert \alpha _{ij} \vert e^{rt}L_{j}(p-1) \bigl\vert z_{i}(t) \bigr\vert ^{p}+\sum _{j=1} ^{n} \vert \alpha _{ij} \vert e^{rt}L_{j} \bigl\vert z_{j}(t) \bigr\vert ^{p} \\& \qquad {} +\sum_{j=1}^{n} \vert \beta _{ij} \vert (p-1)e^{rt}M_{j} \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \sum _{j=1}^{n} \vert \beta _{ij} \vert e^{rt}M_{j} \bigl\vert z_{j}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p}+ \sum _{j=1}^{n}(p-1) \vert \gamma _{ij} \vert \\& \qquad {} \times e^{rt}N_{j} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j} \int _{-\infty }^{t}\kappa _{ij}(t-s) \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds+ \frac{p(p-1)}{2}e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2} \\& \qquad {} \times \biggl[\mu _{i}\bigl(z_{i}(t) \bigr)^{2}+\phi _{i} \biggl[\frac{p-2}{p} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\frac{2}{p} \bigl\vert z_{i}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr]+\psi _{i}\biggl( \int _{-\infty }^{t}\kappa _{ij}(t-s) \\& \qquad {} \times h_{j}\bigl(z_{j}(s)\bigr)\,ds \biggr)^{2} \biggr]. \end{aligned}$$
(18)

Next,

$$\begin{aligned}& \mathcal{L}v_{i2}\bigl(t,z_{i}(t)\bigr) \\& \quad \leq \sum _{j=1}^{n}e^{r\tau } \bigl[ e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}g_{j}^{2}\bigl(z_{j}(t) \bigr)-e^{r(t-\tau _{i}(t))} \bigl\vert z _{i}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p-2}g_{j}^{2} \bigl(z_{j}\bigl(t-\tau _{i}(t)\bigr)\bigr) \bigr] \\& \quad \leq \sum_{j=1}^{n}M_{j}^{2}e^{r(t+\tau )} \biggl[\frac{p-2}{p} \bigl\vert z _{i}(t) \bigr\vert ^{p}+\frac{2}{p} \bigl\vert z_{j}(t) \bigr\vert ^{p} \biggr]-\sum_{j=1}^{n}M _{j}^{2}e^{rt} \biggl[\frac{p-2}{p} \bigl\vert z_{i}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \\& \qquad {} +\frac{2}{p} \bigl\vert z_{j}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr]. \end{aligned}$$
(19)

And

$$\begin{aligned}& \mathcal{L}v_{i3}\bigl(t,z_{i}(t)\bigr) \\& \quad = \sum _{j=1}^{n}\sum _{l=1} ^{n}m_{l} \int _{0}^{\infty }\kappa _{lj}(\theta )e^{r(t+\theta )} \bigl\vert z_{i}(t) \bigr\vert ^{p-2}h _{j}^{2}\bigl(z_{j}(t) \bigr)\,d\theta -\sum_{j=1}^{n}\sum _{l=1}^{n}m _{l} \int _{0}^{\infty }\kappa _{lj}(\theta )e^{rt} \\& \qquad {} \times \bigl\vert z_{i}(t-\theta ) \bigr\vert ^{p-2}h_{j}^{2}\bigl(z_{j}(t- \theta )\bigr)\,d\theta \\& \quad \leq \sum_{j=1}^{n}\sum _{l=1}^{n}m_{l}N_{j}^{2} \biggl[\frac{p-2}{p} \bigl\vert z _{i}(t) \bigr\vert ^{p}+\frac{2}{p} \bigl\vert z_{j}(t) \bigr\vert ^{p} \biggr] \int _{0}^{\infty } \kappa _{lj}(\theta )e^{r\theta }\,d\theta -\sum_{j=1}^{n} \sum_{l=1}^{n}m_{l}e^{rt} \\& \qquad {} \times \biggl( \int ^{t}_{-\infty }\kappa _{lj}(t-s) \bigl\vert z_{i}(s) \bigr\vert ^{p-2}h _{j}^{2} \bigl(z_{j}(s)\bigr)\,ds \biggr)^{2}. \end{aligned}$$
(20)

Substituting (18)–(20) into (12), we get

$$\begin{aligned}& \mathcal{L}v_{i}\bigl(t,z_{i}(t)\bigr) \\& \quad \leq re^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p}-d_{i}pe^{rt} \bigl\vert z _{i}(t) \bigr\vert ^{p}+\sum _{j=1}^{n} \vert \alpha _{ij} \vert e^{rt}L_{j}(p-1) \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \sum _{j=1}^{n} \vert \alpha _{ij} \vert e^{rt}L_{j} \bigl\vert z_{j}(t) \bigr\vert ^{p} \\& \qquad {}+\sum_{j=1}^{n} \vert \beta _{ij} \vert (p-1)e^{rt}M_{j} \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \sum _{j=1}^{n} \vert \beta _{ij} \vert e^{rt}M_{j} \bigl\vert z_{j}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p}+ \sum _{j=1}^{n}(p-1) \vert \gamma _{ij} \vert \\& \qquad {} \times e^{rt}N_{j} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j} \int _{-\infty }^{t}\kappa _{ij}(t-s) \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds+ \frac{p(p-1)}{2}e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p-2} \\& \qquad {} \times \biggl[\mu _{i}\bigl(z_{i}(t) \bigr)^{2}+\phi _{i} \biggl[\frac{p-2}{p} \bigl\vert z_{i}(t) \bigr\vert ^{p}+\frac{2}{p} \bigl\vert z_{i}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr]+\psi _{i}\biggl( \int _{-\infty }^{t}\kappa _{ij}(t-s) \\& \qquad {} \times h_{j}\bigl(z_{j}(s)\bigr)\,ds \biggr)^{2} \biggr]+\sum_{j=1}^{n}M _{j}^{2}e^{r(t+\tau )} \biggl[\frac{p-2}{p} \bigl\vert z_{i}(t) \bigr\vert ^{p}+ \frac{2}{p} \bigl\vert z _{j}(t) \bigr\vert ^{p} \biggr]-\sum_{j=1}^{n}M_{j}^{2}e^{rt} \\& \qquad {} \times \biggl[\frac{p-2}{p} \bigl\vert z_{i}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p}+ \frac{2}{p} \bigl\vert z_{j}\bigl(t-\tau _{i}(t)\bigr) \bigr\vert ^{p} \biggr]+\sum_{j=1}^{n} \sum_{l=1}^{n}m_{l}N_{j}^{2} \biggl[\frac{p-2}{p} \bigl\vert z_{i}(t) \bigr\vert ^{p} \\& \qquad {} +\frac{2}{p} \bigl\vert z_{j}(t) \bigr\vert ^{p} \biggr] \int _{0}^{\infty }\kappa _{lj}( \theta )e^{r\theta }\,d\theta -\sum_{j=1}^{n} \sum_{l=1} ^{n}m_{l}e^{rt} \biggl( \int ^{t}_{-\infty }\kappa _{lj}(t-s) \bigl\vert z_{i}(s) \bigr\vert ^{p-2} \\& \qquad {} \times h_{j}^{2}\bigl(z_{j}(s)\bigr)\,ds \biggr)^{2} \end{aligned}$$
(21)
$$\begin{aligned}& \quad \leq e^{rt} \bigl\vert z_{i}(t) \bigr\vert ^{p} \Biggl[r+ \vert d_{i} \vert p+\sum _{j=1}^{n} \vert \alpha _{ij} \vert L_{j}(p-1)+\sum_{j=1}^{n} \vert \beta _{ij} \vert (p-1)M_{j}+ \sum _{j=1}^{n}(p-1) \vert \gamma _{ij} \vert N_{j} \\& \qquad {} +\frac{p(p-1)}{2} \vert \mu _{i} \vert + \frac{(p-1)(p-2)}{2}\phi _{i}+ \sum_{j=1}^{n} \frac{p-2}{p}e^{r\tau }M_{j}^{2}+ \frac{p-2}{p} \sum_{j=1}^{n}\sum _{l=1}^{n}m_{l}N_{j}^{2} \mathcal{K} \Biggr] \\& \qquad {} +e^{rt} \bigl\vert z_{j}(t) \bigr\vert ^{p} \Biggl[\sum_{j=1}^{n} \vert \alpha _{ij} \vert L _{j}+e^{r\tau } \sum_{j=1}^{n}\frac{2}{p}M_{j}^{2}+ \sum_{j=1}^{n}\sum _{l=1}^{n}m_{l}N_{j}^{2} \frac{2}{p}\mathcal{K} \Biggr]+ \bigl\vert z_{j}\bigl(t- \tau _{i}(t)\bigr) \bigr\vert ^{p}e^{rt} \\& \qquad {} \times \Biggl[\sum_{j=1}^{n} \vert \beta _{ij} \vert M_{j}-\frac{2n}{p}\Biggr]+ \Biggl[e ^{rt}\sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j} \Biggr] \int _{-\infty } ^{t}\kappa _{ij}(t-s) \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds \\& \quad \leq -T_{i1}v_{i}\bigl(t,z_{i}(t)\bigr)+ \sum_{j=1}^{n}a_{ij}F_{ij} \bigl(z_{i}(t),z _{j}(t),t\bigr)+T_{i2}v_{i} \bigl(t,z_{j} \bigl(t-\tau _{i}(t)\bigr)\bigr)+T_{i3} \int _{-\infty } ^{t}\kappa _{ij}(t-s) \\& \qquad {} \times \bigl\vert z_{j}(s) \bigr\vert ^{p}\,ds, \quad \mbox{for } t\neq t_{k}. \end{aligned}$$
(22)

On the other hand, for \(t=t_{k}\),

$$\begin{aligned}& v_{i}\bigl(t_{k},z_{i} \bigl(t_{k}^{-}\bigr)+P_{ik}\bigl(z_{i} \bigl(t_{k}^{-}\bigr)\bigr)\bigr) \\& \quad = e^{rt_{k}} \bigl\vert z _{i}\bigl(t_{k}^{-} \bigr)+P_{ik}\bigl(z_{i}\bigl(t_{k}^{-} \bigr)\bigr) \bigr\vert ^{p}+\sum_{j=1}^{n}e ^{r\tau } \int _{t_{k}-\tau _{i}(t_{k})}^{t_{k}} e^{rs} \bigl\vert z_{i}(s) \bigr\vert ^{p-2}g _{j}^{2} \bigl(z_{j}(s)\bigr)\,ds \\& \qquad {} +\sum_{j=1}^{n}\sum _{l=1}^{n}m_{l} \int _{0}^{ \infty }\kappa _{lj}(\theta ) \int _{t_{k}-\theta }^{t_{k}} e^{r(s+ \theta )} \bigl\vert z_{i}(s) \bigr\vert ^{p-2} h_{j}^{2} \bigl(z_{j}(s)\bigr)\,ds\,d\theta \\& \quad \leq e^{rt_{k}} \bigl\vert z_{i}\bigl(t_{k}^{-} \bigr) \bigr\vert ^{p}+\sum_{j=1}^{n}e^{r \tau } \int _{t_{k}-\tau _{i}(t_{k})}^{t_{k}} e^{rs} \bigl\vert z_{i}(s) \bigr\vert ^{p-2}g _{j}^{2} \bigl(z_{j}(s)\bigr)\,ds+\sum_{j=1}^{n} \sum_{l=1}^{n}m_{l} \\& \qquad {} \times \int _{0}^{\infty }\kappa _{lj}(\theta ) \int _{t_{k}- \theta }^{t_{k}} e^{r(s+\theta )} \bigl\vert z_{i}(s) \bigr\vert ^{p-2}h_{j}^{2} \bigl(z_{j}(s)\bigr)\,ds\,d\theta + \bigl\vert P_{ik} \bigl(z_{i}\bigl(t_{k}^{-}\bigr)\bigr) \bigr\vert ^{p} \\& \quad \leq V_{i}\bigl(t_{k}^{-},z_{i} \bigl(t_{k}^{-}\bigr)\bigr)+\upsilon _{i} \bigl\vert z_{i}\bigl(t_{k}^{-}\bigr) \bigr\vert ^{p} \\& \quad \leq \varUpsilon _{i}V_{i}\bigl(t_{k}^{-},z_{i} \bigl(t_{k}^{-}\bigr)\bigr). \end{aligned}$$

Now, define \(v_{i1}(t,z_{i}(t))=e^{rt}|z_{i}(t)|^{p}\) and choose suitable constants \(T_{i1}\), \(T_{i2}\), \(T_{i3}\) such that condition (6) is satisfied; then by Theorem 5.1 we obtain the pth moment exponential stability of (1). Here

$$\begin{aligned}& \begin{aligned} T_{i1}= {}& r+ \vert d_{i} \vert p+\sum _{j=1}^{n} \vert \alpha _{ij} \vert L_{j}(p-1)+ \sum_{j=1}^{n} \vert \beta _{ij} \vert (p-1)M_{j}+\sum _{j=1}^{n}(p-1) \vert \gamma _{ij} \vert N_{j} \\ &{} +\frac{p(p-1)}{2} \vert \mu _{i} \vert + \frac{(p-1)(p-2)}{2}\phi _{i}+ \sum_{j=1}^{n} \frac{p-2}{p}e^{r\tau }M_{j}^{2}+ \frac{p-2}{p} \sum_{j=1}^{n}\sum _{l=1}^{n}m_{l} \\ &{} \times N_{j}^{2}\mathcal{K}+\sum _{i=1}^{n} \vert \alpha _{ij} \vert L _{i}+e^{r\tau }\sum_{i=1}^{n} \frac{2}{p}M_{i}^{2}+\sum _{i=1}^{n}\sum_{l=1}^{n}m_{l}N_{i}^{2} \frac{2}{p}\mathcal{k}, \end{aligned} \\& T_{i2}= \sum_{j=1}^{n} \vert \beta _{ij} \vert M_{j}-\frac{2n}{p}, \\& T_{i3}= e^{rt}\sum_{j=1}^{n} \vert \gamma _{ij} \vert e^{rt}N_{j}, \\& \sum_{j=1}^{n}F_{ij} \bigl(z_{i}(t),z_{j}(t),t\bigr) \\& \quad = e^{rt} \Biggl[ \bigl\vert z_{j}(t) \bigr\vert ^{p} \Biggl[ \sum_{j=1}^{n} \vert \alpha _{ij} \vert L_{j}+e^{r\tau }\sum _{j=1} ^{n}\frac{2}{p}M_{j}^{2}+ \sum_{j=1}^{n}\sum _{l=1}^{n}m_{l}N_{j}^{2} \frac{2}{p}\mathcal{k} \Biggr] \Biggr] \\& \qquad {} -e^{rt} \Biggl[ \bigl\vert z_{i}(t) \bigr\vert ^{p} \Biggl[\sum_{i=1}^{n} \vert \alpha _{ij} \vert L_{i}+e^{r\tau }\sum _{i=1}^{n}\frac{2}{p}M_{i}^{2}+ \sum_{i=1}^{n}\sum _{l=1}^{n}m_{l}N_{i}^{2} \frac{2}{p}\mathcal{K} \Biggr] \Biggr]. \end{aligned}$$

 □

6 Example

To show the efficiency of our work we provide an example and some numerical simulations in this section.

Example 6.1

Consider the following two-dimensional RNN of two neurons with impulsive effects, stochastic perturbations and mixed delays:

$$ \begin{aligned} &\begin{aligned} dz_{i}(t)={} & \biggl(-Dz_{i}(t)+Af_{i} \bigl(z(t)\bigr)+Bg\bigl(z_{i}\bigl(t-\tau (t)\bigr)\bigr)+C \int ^{t}_{-\infty }\kappa _{ij}(t-s)h \bigl(z_{i}(s)\bigr)\,ds \biggr) \\ &{} +\sigma _{ij}\biggl(z_{i}(t),z_{i} \bigl(t-\tau (t)\bigr), \int ^{t}_{-\infty } \kappa _{ij}(t-s)\phi _{i}\bigl(z_{i}(s)\bigr)\,ds\biggr)\,d\omega _{i}(t), \end{aligned} \\ &\Delta z_{i}(t_{k})= P_{ik} \bigl(z_{i}(t_{k})\bigr), \quad k=1,2,3,\ldots, \end{aligned} $$
(23)

where the state vector is \(z=(z_{1},z_{2})^{T}\), \((d_{1},d_{2})=(2.5,0.2)\), and

$$\begin{aligned}& A=(\alpha _{kh})_{2\times 2} = \begin{pmatrix} 1.03& 1.5 \\ 1.03& 1.7 \end{pmatrix} ,\qquad B=(\beta _{kh})_{2\times 2}= \begin{pmatrix} 0.5& 1.02 \\ 0.1& 1.3 \end{pmatrix} , \\& C=(\gamma _{kh})_{2\times 2} = \begin{pmatrix} 0.38& 0.2 \\ 0.3& 0.25 \end{pmatrix} . \end{aligned}$$

Here

$$\begin{aligned}& f_{1}\bigl(z_{1}(t)\bigr)=g_{1} \bigl(z_{1}(t)\bigr)= h_{1}\bigl(z_{1}(t) \bigr)=\tanh z_{1}(t), \\& f_{2}\bigl(z _{2}(t) \bigr)=g_{2}\bigl(z_{2}(t)\bigr)=h_{2} \bigl(z_{2}(t)\bigr)=\tanh z_{2}(t), \end{aligned}$$

\(\kappa _{ij}(t)=e^{-t}\) for \(i,j=1,2\) and

$$ \sigma (t,x,y,z)= \begin{pmatrix} \sigma _{11}(t,x,y,z) & \sigma _{12}(t,x,y,z) \\ \sigma _{21}(t,x,y,z) & \sigma _{22}(t,x,y,z) \end{pmatrix} = \begin{pmatrix} 3.3 & 0.25 & 1.04 \\ 1.5 & 0.03 & 1.2 \\ 0.002 & 1.5 & 2.75 \\ 2.75 & 0.0004 & 2.05 \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix} . $$

The impulsive function is

$$\begin{aligned}& z_{1}(t_{k})= e^{0.005}\sin z_{1}(t_{k}), \\& z_{2}(t_{k})= 0.34\tanh z_{2}(t_{k}). \end{aligned}$$

Then one can check that the Lipschitz constants are \(L_{j}=M_{j}=N_{j}=1\) and that all the given assumptions \((P_{1})\)–\((P_{7})\) are verified. Let us consider \(v_{i}(t,z_{i}(t))=|z_{i}(t)|^{2}\). It is easy to check that condition \((D_{1})\) holds with the values \(a_{1}=0.25\), \(a_{2}=0.1\), \(b_{1}=2.5\), \(b_{2}=4\). Let us compute \(\mathfrak{L}v_{i}(t,z_{i}(t))\) as follows:

$$\begin{aligned} \mathfrak{L} v_{i}\bigl(t,z_{i}(t)\bigr)={} &2z_{i}(t)\biggl[-Dz_{i}(t)+Af\bigl(z_{j}(t) \bigr)+Bg\bigl(z _{j}\bigl(t-\tau _{j}(t)\bigr)\bigr)+C \int _{-\infty }^{t}\kappa (t-s)h\bigl(z_{j}(s) \bigr)\,ds\biggr] \\ &{}+ \biggl\vert \sigma \biggl(t,z_{j}(t),z_{j} \bigl(t-\tau _{j}(t)\bigr), \int _{-\infty }^{t} \kappa (t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\biggr) \biggr\vert ^{2} \\ \leq{} &\bigl(-2D+2 \vert A \vert + \vert B \vert + \vert C \vert \bigr)z_{i}^{2}(t)+ \vert B \vert z^{2}_{j}\bigl(t-\tau _{j}(t)\bigr)+ \vert C \vert \int _{-\infty }^{t}e^{-(t-s)}z_{j}^{2}(s)\,ds \\ &{}+\sum_{j=1}^{2}a_{ij} \bigl(z_{j}^{2}-z_{i}^{2}(t) \bigr)+ \biggl\vert \sigma \biggl(t,z _{j}(t),z_{j} \bigl(t-\tau _{j}(t)\bigr), \int _{-\infty }^{t}\kappa (t-s)h_{j} \bigl(z_{j}(s)\bigr)\,ds\biggr) \biggr\vert ^{2}. \end{aligned}$$

If \(\max (-2D+2|A|+|B|+|C|)<0\), then we have

$$ \mathfrak{L} v_{i}\bigl(t,z_{i}(t)\bigr)\leq -a_{i} \bigl\vert z_{i}(t) \bigr\vert ^{2}+b_{i} \vert z_{j} \vert ^{2}+c _{i} \int _{-\infty }^{t}e^{-(t-s)} \vert z_{j} \vert ^{2}(s)\,ds+\sum _{j=1}^{2}a_{ij}\bigl(z _{j}^{2}-z_{i}^{2}(t)\bigr). $$

Here

$$ a_{i}=\min \bigl\{ \bigl(-2D+2 \vert A \vert + \vert B \vert + \vert C \vert \bigr), \vert B \vert , \vert C \vert \bigr\} $$

and

$$ F_{ij}\bigl(t,z_{i}(t),z_{j}(t)\bigr)= \bigl(z_{j}^{2}-z_{i}^{2}(t) \bigr). $$

Hence the conditions \((D_{1})\)–\((D_{3})\) are satisfied. Therefore, by Theorem 5.1, the given system (23) is exponentially stable.
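
A numerical sketch of how trajectories such as those in Figs. 1–3 can be generated is given below. It uses an Euler–Maruyama discretization, treats the exponential kernel through the auxiliary variable \(y_{i}(t)=\int _{-\infty }^{t}e^{-(t-s)}h_{i}(z_{i}(s))\,ds\) (so that \(\dot{y}_{i}=-y_{i}+h_{i}(z_{i})\)), and assumes, for illustration only, a constant delay \(\tau (t)=0.5\), evenly spaced impulse moments, a simplified diagonal multiplicative noise and the interpretation of the impulsive map as the post-jump state; it is not the exact scheme behind the figures.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-3, 10.0
steps = int(T / dt)

D = np.array([2.5, 0.2])
A = np.array([[1.03, 1.5], [1.03, 1.7]])
B = np.array([[0.5, 1.02], [0.1, 1.3]])
C = np.array([[0.38, 0.2], [0.3, 0.25]])
f = g = h = np.tanh

tau = 0.5                                  # assumed constant delay tau(t)
lag = int(tau / dt)
sigma0 = 0.1                               # assumed (simplified) diagonal noise intensity
imp_every = int(0.5 / dt)                  # assumed evenly spaced impulse moments t_k

z = np.array([0.3, 0.5])                   # initial condition eta_1, eta_2
y = h(z)                                   # y_i(0) for a constant history equals h_i(z_i(0))
hist = [z.copy()] * (lag + 1)              # constant initial history on (-inf, 0]

traj = np.empty((steps, 2))
for k in range(steps):
    z_del = hist[0]                        # approximately z(t - tau)
    drift = -D * z + A @ f(z) + B @ g(z_del) + C @ y
    dw = np.sqrt(dt) * rng.standard_normal(2)   # Brownian increments
    z = z + drift * dt + sigma0 * z * dw        # multiplicative noise, vanishing at the origin
    y = y + dt * (-y + h(z))                    # auxiliary ODE for the exponential kernel
    if (k + 1) % imp_every == 0:
        # impulsive map, interpreted here as giving the post-jump state
        z = np.array([np.exp(0.005) * np.sin(z[0]), 0.34 * np.tanh(z[1])])
    hist.append(z.copy())
    hist.pop(0)
    traj[k] = z
print(traj[-1])                            # final state; traj can be plotted against time
```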

Figure 1

For the initial condition \(\eta _{1}(t)=0.3\), \(\eta _{2}=0.5\), the trajectory of \(z_{1}(t)\) in (23) with infinite delay, stochastic and impulsive effects

Figure 2

For the initial condition \(\eta _{1}(t)=0.3\), \(\eta _{2}=0.5\), the trajectory of \(z_{2}(t)\) in (23) with infinite delay, stochastic and impulsive effects

Figure 3

Trajectory of the second moment of (23)

7 Conclusions

This paper has focused on a graph-theory-based approach to investigating the pth moment exponential stability of RNNs with impulses, discrete and continuously distributed time-varying delays and stochastic disturbances. We have obtained two new criteria which guarantee the pth moment exponential stability of such RNNs. Furthermore, it is pointed out that the method and techniques presented here refine previous approaches such as linear matrix inequalities and the method of variation of parameters. The graph-theoretic approach could be extended to many kinds of neural networks, such as complex neural networks, competitive neural networks and bidirectional associative memory neural networks, with Markovian jumping, impulses and infinite delays, in either continuous or discrete time. Such problems, arising in real-world applications, can be addressed with this new approach in the near future.