1 Introduction

Neural networks have long attracted much attention owing to their wide range of applications [1,2,3,4,5,6,7]. Cohen–Grossberg neural networks (CGNNs) were originally proposed in 1983 [8]. Since then, CGNNs have gained increasing research attention [9,10,11,12,13,14] due to their extensive applications, such as pattern recognition, image and signal processing, quadratic optimization, and artificial intelligence. All of these applications rely on a key prerequisite: the system must possess some form of stability. Hence stability analysis of various neural networks has become a hot research topic [15,16,17,18,19,20,21,22,23,24,25]. In practice, neural networks are often disturbed by environmental noise. Such noise may affect the stability of the equilibrium and cause abrupt variations in some structural parameters, variations that are usually modeled by Markov processes. During the recent decade, neural networks with Markovian jumping parameters have been extensively studied because systems with Markovian jumping parameters are useful in modeling abrupt phenomena, such as random failures, changes in the interconnections of subsystems, and operation at different points of a nonlinear plant. Moreover, Markovian jump dynamics have been applied to various complex systems, such as dissipative fault-tolerant control for nonlinear singularly perturbed systems with Markov jumping parameters based on slow state feedback, slow-state-variable feedback stabilization for semi-Markov jump systems with singular perturbations, and finite-time nonfragile \(l^{2}\)–\(l^{\infty}\) control for jumping stochastic systems subject to input constraints via an event-triggered mechanism ([26,27,28,29] and the references therein).

High-order Cohen–Grossberg neural networks, an important class of dynamical systems, have been the object of intensive analysis by many authors in both theory and applications. Most earlier researchers focused on low-order neural networks and did not consider high-order terms. However, high-order neural networks have been shown to possess a stronger approximation property, impressive computational, storage, and learning capabilities, greater storage capacity, higher fault tolerance, and a faster convergence rate than traditional low-order neural networks. There is a large literature on the stability analysis of high-order neural networks [30,31,32,33,34,35,36]. In practical engineering the diffusion phenomenon cannot be avoided in neural network models when electrons move in an asymmetric electromagnetic field, so various reaction–diffusion models have been considered [13, 14, 31, 37, 38]. For example, in [31] the following reaction–diffusion high-order Hopfield neural network was investigated:

$$\begin{aligned} \frac{\partial u_{i}(t,x) }{\partial t} ={}& \sum_{k=1}^{m} \frac{\partial }{\partial x_{k}} \biggl(D_{ik}(t,x,u)\frac{\partial u_{i}(t,x)}{\partial x_{k}} \biggr) -a_{ri}u_{i}(t,x)+\sum_{j=1}^{n}W_{rij}f_{j} \bigl(u_{j}(t,x)\bigr) \\ &{}+\sum_{j=1}^{n}T_{rij}g_{j} \bigl(u_{j}\bigl(t-\tau(t),x\bigr)\bigr) \\ &{}+\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}g_{j} \bigl(u_{j}\bigl(t-\tau(t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)+v_{i}. \end{aligned}$$
(1.1)

In addition, p-Laplacian reaction–diffusion models were recently studied in [37] and [14]. For example, in [37] the following p-Laplacian reaction–diffusion system was investigated:

$$\begin{aligned} \frac{\partial u}{\partial t}-a\bigl(l(u)\bigr)\Delta_{p}u=f(u)+h(t). \end{aligned}$$
(1.2)
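Here \(\Delta_{p}\) denotes the p-Laplacian,

$$\Delta_{p}u=\operatorname{div} \bigl( \vert \nabla u \vert ^{p-2}\nabla u \bigr),\quad p>1, $$

which reduces to the classical Laplace operator Δ when \(p=2\).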

Note that little of the literature addresses both the boundedness and the robust stability of high-order neural networks, which inspires our current work. In this paper, we present a sufficient condition for the boundedness and robust stability of the reaction–diffusion high-order Markovian jump Cohen–Grossberg neural network with nonlinear Laplacian diffusion. The existence and uniqueness of the equilibrium solution of the system are first established by employing the M-matrix theory and the topological degree technique.

For convenience, we introduce some standard notation.

  • \(Q=(q_{ij})_{n\times n}>0\ (<0)\): a positive (negative) definite matrix, that is, \(y^{T}Qy>0\ (<0)\) for all \(0\neq y\in R^{n}\).

  • \(Q=(q_{ij})_{n\times n}\geqslant0\ (\leqslant0)\): a semipositive (seminegative) definite matrix, that is, \(y^{T}Qy\geqslant0\) \((\leqslant 0)\) for all \(y\in R^{n}\).

  • \(Q_{1}\geqslant Q_{2}\ (Q_{1}\leqslant Q_{2})\): \(Q_{1}-Q_{2}\) is a semipositive (seminegative) definite matrix.

  • \(Q_{1}\succcurlyeq Q_{2} \ (Q_{1}\preccurlyeq Q_{2})\): \(Q_{1}-Q_{2}\) is a nonnegative (nonpositive) matrix.

  • \(Q_{1}> Q_{2}\ (Q_{1}< Q_{2})\): \(Q_{1}-Q_{2}\) is a positive (negative) definite matrix.

  • \(\lambda_{\max}(\varPhi)\) and \(\lambda_{\min}(\varPhi)\) denote the largest and smallest eigenvalues of a matrix Φ, respectively.

  • \(|C|=(|c_{ij}|)_{n\times n}\) for any matrix \(C=(c_{ij})_{n\times n}\); \(|u(t,x)|=(|u_{1}(t,x)|,|u_{2}(t,x)|, \ldots,|u_{n}(t,x)|)^{T}\) for any \(u(t,x)=(u_{1}(t,x),u_{2}(t,x),\ldots,u_{n}(t,x))^{T}\).

  • I: the identity matrix of compatible dimension.

  • The symmetric terms in a symmetric matrix are denoted by ∗.

Motivated by some methods and results of the related literature [30,31,32,33,34, 39,40,41], we present the existence and uniqueness of the equilibrium solution of the system and study the boundedness and robustness of dynamics of reaction–diffusion high-order Markovian jump Cohen–Grossberg neural networks (CGNNs) with p-Laplacian diffusion, including the common reaction–diffusion CGNNs.

2 Model description and preparation

Consider the following high-order Markovian jump Cohen–Grossberg neural network with nonlinear Laplacian diffusion:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{i}(t,x) }{\partial t} = \sum_{k=1}^{n} \frac{\partial }{\partial x_{k}} (D_{i} \vert \nabla u_{i}(t,x) \vert ^{p-2}\frac{\partial u_{i}(t,x)}{\partial x_{k}} )\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-a_{i}(u_{i}(t,x)) [b_{i}(u_{i}(t,x))-\sum_{j=1}^{n}W_{rij}(t)f_{j}(u_{j}(t,x))\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{n}T_{rij}(t)g_{j}(u_{j}(t-\tau(t),x)) \\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{n}\sum_{l=1}^{n}T_{rijl}(t)g_{j}(u_{j}(t-\tau (t),x))g_{l}(u_{l}(t-\tau(t),x))+\alpha_{i} ],\\ \quad \forall r\in S, i=1,2,\ldots,n,\\ \frac{\partial u_{i}(t,x)}{\partial x_{j}}=0,\quad i,j=1,2,\ldots,n, (t,x)\in [0,+\infty)\times\partial\varOmega, \\ u_{i}(s,x)=\xi_{i}(s,x), \quad x\in\varOmega, -\tau\leqslant s\leqslant0, \tau (t)\in[0,\tau], \end{cases}\displaystyle \end{aligned}$$
(2.1)

where \(x\in\varOmega\), and Ω is a bounded domain in \(R^{n}\) with smooth boundary ∂Ω of class \(\mathcal{C}^{2}\). The initial value function \(\xi_{i}(s,x)\) is bounded and continuous on \([-\tau, 0]\times\varOmega\), \(\alpha_{i}\) is the input from outside of the networks, \(u_{i}(t,x)\) is the state variable of the ith neuron at time t and space variable x, \(a_{i}(u_{i}(t,x))\) represents an amplification function, whereas \(b_{i}(u_{i}(t,x))\) denotes an appropriate behavior function, \(D_{i}=D_{i}(t,x)\geqslant0\) is the diffusion coefficient, \(f_{j}\) and \(g_{j}\) are activation functions, and \(T_{rij}\) and \(T_{rijl}\) are the first- and second-order synaptic weights of system (2.1) (see, e.g., [42]). \((\breve{\varOmega}, \varUpsilon, \mathbb{P})\) is the given probability space, where Ω̆ is the sample space, ϒ is a σ-algebra of subsets of the sample space, and \(\mathbb{P}\) is the probability measure defined on ϒ. Let \(S=\{1, 2, \ldots, n_{0}\}\), and let \(\{ r(t):[0, +\infty)\to S\}\) be a homogeneous, finite-state right-continuous Markovian process with generator \(\varPi=(\gamma_{ij})_{n_{0}\times n_{0}}\) and the transition probability from mode \(i\in S\) at time t to mode \(j\in S\) at time \(t+\delta\)

$$\mathbb{P }\bigl(r(t+\delta)=j\mid r(t)=i\bigr)= \textstyle\begin{cases} \gamma_{ij}\delta+o(\delta),& j\neq i, \\ 1+\gamma_{ij}\delta+o(\delta),& j=i, \end{cases} $$

where \(\gamma_{ij}\geqslant0\) is the transition probability rate from i to \(j\ (j\neq i)\), \(\gamma_{ii}=-\sum_{j=1, j\neq i}^{n_{0}}\gamma_{ij}, \delta>0\), and \(\lim_{\delta\to0}o(\delta)/\delta=0\).
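For intuition, the generator Π completely determines the sample paths of the mode process: the chain holds in mode i for an exponentially distributed time with rate \(-\gamma_{ii}\) and then jumps to a mode \(j\neq i\) with probability \(\gamma_{ij}/(-\gamma_{ii})\). The following minimal Python sketch (the generator entries are hypothetical) simulates such a path:

```python
import numpy as np

# Minimal sketch: simulate the mode process r(t) from a generator
# Pi = (gamma_ij). The holding time in mode i is exponential with rate
# -gamma_ii, and the next mode j != i is drawn with probability
# gamma_ij / (-gamma_ii).
def simulate_modes(Pi, r0, T, rng=np.random.default_rng(0)):
    n0 = Pi.shape[0]
    t, r = 0.0, r0
    path = [(t, r)]
    while t < T:
        rate = -Pi[r, r]
        if rate <= 0:                      # absorbing mode
            break
        t += rng.exponential(1.0 / rate)   # holding time
        probs = Pi[r].copy()
        probs[r] = 0.0
        r = int(rng.choice(n0, p=probs / rate))
        path.append((t, r))
    return path

Pi = np.array([[-0.5, 0.5],                # hypothetical two-mode generator
               [0.3, -0.3]])
print(simulate_modes(Pi, r0=0, T=10.0))
```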

Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots,u_{n}^{*})^{T}\in R^{n}\) be a constant vector. Then it is not difficult to deduce the following fact:

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl} \bigl[g_{j} \bigl(u_{j}\bigl(t-\tau (t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)-g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr) \bigr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}\varsigma_{l} \bigl(g_{j}-g_{j}\bigl(u_{j}^{*}\bigr)\bigr)+ \sum _{j=1}^{n}\sum_{l=1}^{n}T_{rijl} \varsigma_{j}\bigl(g_{l}-g_{l} \bigl(u_{l}^{*}\bigr)\bigr), \end{aligned}$$
(2.2)

where \(g_{j}=g_{j}(u_{j}(t-\tau(t),x))\), \(g_{l}=g_{l}(u_{l}(t-\tau(t),x))\), and

$$\begin{aligned} \varsigma_{l}=\frac{g_{l}(u_{l}(t-\tau(t),x))+g_{l}(u_{l}^{*})}{2}. \end{aligned}$$
(2.3)

Indeed,

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl} \bigl[g_{j} \bigl(u_{j}\bigl(t-\tau (t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)-g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr) \bigr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl} \biggl[\frac{g_{l}(u_{l}(t-\tau (t),x))+g_{l}(u_{l}^{*})}{2} \bigl(g_{j}\bigl(u_{j}\bigl(t-\tau(t),x\bigr) \bigr)-g_{j}\bigl(u_{j}^{*}\bigr)\bigr) \\ &\qquad{}+ \frac{g_{j}(u_{j}(t-\tau(t),x))+g_{j}(u_{j}^{*})}{2} \bigl(g_{l}\bigl(u_{l}\bigl(t-\tau (t),x\bigr)\bigr)-g_{l}\bigl(u_{l}^{*}\bigr)\bigr) \biggr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}\varsigma_{l} \bigl(g_{j}\bigl(u_{j}\bigl(t-\tau (t),x\bigr) \bigr)-g_{j}\bigl(u_{j}^{*}\bigr)\bigr)\\ &\qquad{}+ \sum _{j=1}^{n}\sum_{l=1}^{n}T_{rijl} \varsigma _{j}\bigl(g_{l}\bigl(u_{l}\bigl(t- \tau(t),x\bigr)\bigr)-g_{l}\bigl(u_{l}^{*}\bigr)\bigr), \end{aligned}$$

which proves (2.2).

In (2.2), the indices j and l are symmetric, which implies that

$$\begin{aligned} \sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}\varsigma_{j} \bigl(g_{l}-g_{l}\bigl(u_{l}^{*}\bigr)\bigr) =\sum _{j=1}^{n}\sum_{l=1}^{n}T_{rilj} \varsigma_{l}\bigl(g_{j}-g_{j} \bigl(u_{j}^{*}\bigr)\bigr), \end{aligned}$$
(2.4)

and hence

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl} \bigl[g_{j} \bigl(u_{j}\bigl(t-\tau (t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)-g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr) \bigr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}(T_{rijl}+T_{rilj})\varsigma _{l}\bigl(g_{j}-g_{j}\bigl(u_{j}^{*} \bigr)\bigr). \end{aligned}$$
(2.5)
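Identity (2.5) is elementary but easy to get wrong, so a short numeric check may be helpful; the following Python sketch (with random data standing in for the activation values) confirms it:

```python
import numpy as np

# Numeric check of identity (2.5): for fixed r, i the weights (T_{rijl})
# form a matrix T over (j, l); g holds g_j(u_j(t - tau(t), x)) and gs
# holds g_j(u_j^*).
rng = np.random.default_rng(1)
n = 4
T = rng.standard_normal((n, n))
g = rng.standard_normal(n)
gs = rng.standard_normal(n)
sigma = (g + gs) / 2.0                     # varsigma_l from (2.3)

lhs = np.einsum("jl,j,l->", T, g, g) - np.einsum("jl,j,l->", T, gs, gs)
rhs = ((T + T.T) @ sigma) @ (g - gs)       # right-hand side of (2.5)
print(np.isclose(lhs, rhs))                # True
```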

Taking \(g_{l}(u_{l}^{*})=0=g_{j}(u_{j}^{*})\) in (2.5), system (2.1) can be rewritten as follows:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u(t,x) }{\partial t} = \nabla \bigl(D\nabla_{p}u(t,x) \bigr) -A(u(t,x))\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}\times [B(u(t,x))-W_{r}(t)f(u(t,x))-T_{r}(t)g(u(t-\tau(t),x))\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}-\bigl(\varsigma_{0}^{T}\ \varsigma_{0}^{T}\ \cdots\ \varsigma_{0}^{T}\bigr)_{n\times n^{2}} \begin{pmatrix} T_{r1}(t)+T_{r1}^{T}(t)\\ T_{r2}(t)+T_{r2}^{T}(t)\\ \vdots\\ T_{rn}(t)+T_{rn}^{T}(t) \end{pmatrix}_{n^{2}\times n}\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}\times g(u(t-\tau(t),x))+\alpha ],\\ \frac{\partial u(t,x)}{\partial\nu}=0,\quad (t,x)\in[0,+\infty)\times\partial\varOmega, \\ u(s,x)=\xi(s,x),\quad x\in\varOmega, -\tau\leqslant s\leqslant0, \tau(t)\in[0,\tau], \end{cases}\displaystyle \end{aligned}$$
(2.6)

where \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{n})^{T}\), \(f(u)=(f_{1}(u_{1}), f_{2}(u_{2}),\ldots,f_{n}(u_{n}))^{T}\), \(g(u)=(g_{1}(u_{1}), g_{2}(u_{2}), \ldots, g_{n}(u_{n}))^{T}\) for \(u=(u_{1},u_{2},\ldots,u_{n})^{T}\), the Neumann boundary value \(\frac{\partial u(t,x)}{\partial\nu}=(\frac {\partial u_{1}(t,x)}{\partial\nu},\frac{\partial u_{2}(t,x)}{\partial\nu }, \ldots, \frac{\partial u_{n}(t,x)}{\partial\nu})\) with \(\frac{\partial u_{i}(t,x)}{\partial\nu}=(\frac{\partial u_{i}(t,x)}{\partial x_{1}}, \frac{\partial u_{i}(t,x)}{\partial x_{2}}, \ldots, \frac{\partial u_{i}(t,x)}{\partial x_{n}})^{T}\), the matrix \(D=(D_{i}(t,x))_{n\times m}\), and the vector function \(\varsigma_{0}\) is defined as

$$\varsigma_{0}=\frac{1}{2} \begin{pmatrix} g_{1}(u_{1}(t-\tau(t),x))\\ g_{2}(u_{2}(t-\tau(t),x))\\ \vdots\\ g_{n}(u_{n}(t-\tau(t),x)) \end{pmatrix}. $$
(2.7)

Throughout this paper, we assume the following hypotheses:

  1. (H1)

    There is a positive constant vector \(M=(M_{1},M_{2},\ldots,M_{n})^{T}\in R^{n}\) such that \(|g_{j}(\cdot)|\leqslant M_{j}\), \(j=1,2,\ldots,n\).

  2. (H2)

    There is a real number matrix \(\mathbb{B}=\operatorname{diag}(\tilde{b}_{1},\tilde {b}_{2},\ldots,\tilde{b}_{n})\) such that \(b_{i}(0)=0\) and

    $$\frac{b_{i}(s)-b_{i}(r)}{s-r}\geqslant\tilde{b}_{i}>0, \quad \forall s,r\in R, s\neq r, i=1,2,\ldots,n. $$
  3. (H3)

    There exist real number diagonal matrices \(\overline{A}=\operatorname{diag}(\bar {a}_{1},\bar{a}_{2},\ldots,\bar{a}_{n})\) and \(\underline{A}=\operatorname{diag}(\underline {a}_{1},\underline{a}_{2}, \ldots,\underline{a}_{n})\) such that

    $$0< \underline{A}\leqslant A(s)\leqslant\overline{A}. $$
  4. (H4)

    There are real matrices \(F_{1}=\operatorname{diag}(F_{11},F_{12},\ldots,F_{1n}), G_{1}=\operatorname{diag}(G_{11},G_{12},\ldots,G_{1n})\), \(F_{2}=\operatorname{diag}(F_{21},F_{22},\ldots,F_{2n})\), and \(G_{2}=\operatorname{diag}(G_{21},G_{22},\ldots,G_{2n})\) such that

    $$F_{1j}\leqslant\frac{f_{j}(s)-f_{j}(t)}{s-t}\leqslant F_{2j},\qquad G_{1j}\leqslant\frac{g_{j}(s)-g_{j}(t)}{s-t}\leqslant G_{2j} $$

    with

    $$\vert F_{1j} \vert \leqslant F_{2j},\qquad \vert G_{1j} \vert \leqslant G_{2j},\quad \forall j=1,2,\ldots,n. $$

Remark 1

\(F_{1j}\) and \(G_{1j}\) may be negative numbers, which makes these conditions weaker than the corresponding conditions of [43].

Denote

$$\varGamma=\bigl(M^{T}\ M^{T}\ \cdots\ M^{T}\bigr)_{n\times n^{2}},\qquad \widetilde{T}_{r}(t)= \begin{pmatrix} T_{r1}(t)+T_{r1}^{T}(t)\\ T_{r2}(t)+T_{r2}^{T}(t)\\ \vdots\\ T_{rn}(t)+T_{rn}^{T}(t) \end{pmatrix}_{n^{2}\times n},\qquad \Delta\widetilde{T}_{r}(t)= \begin{pmatrix} \Delta T_{r1}(t)+\Delta T_{r1}^{T}(t)\\ \Delta T_{r2}(t)+\Delta T_{r2}^{T}(t)\\ \vdots\\ \Delta T_{rn}(t)+\Delta T_{rn}^{T}(t) \end{pmatrix}_{n^{2}\times n},$$
(2.8)
$$\widetilde{T}_{r}= \begin{pmatrix} T_{r1}+T_{r1}^{T}\\ T_{r2}+T_{r2}^{T}\\ \vdots\\ T_{rn}+T_{rn}^{T} \end{pmatrix}_{n^{2}\times n},\qquad T_{\varsigma}=\bigl(\varsigma_{0}^{T}\ \varsigma_{0}^{T}\ \cdots\ \varsigma_{0}^{T}\bigr)_{n\times n^{2}},\qquad \vert T_{\varsigma} \vert =\bigl( \vert \varsigma_{0} \vert ^{T}\ \vert \varsigma_{0} \vert ^{T}\ \cdots\ \vert \varsigma_{0} \vert ^{T}\bigr)_{n\times n^{2}}. $$
(2.9)

For any mode \(r(t)=i\in S\), we assume that \(W_{r}, T_{r}\), and \(\tilde {T}_{r}\) are real constant matrices of appropriate dimensions, and \(\Delta W_{r}(t), \Delta T_{r}(t)\), and \(\Delta\tilde{T}_{r}(t)\) are real-valued matrix functions representing time-varying parameter uncertainties satisfying

$$\begin{aligned} W_{r}(t)=W_{r}+\Delta W_{r}(t),\qquad T_{r}(t)=T_{r}+\Delta T_{r}(t),\qquad \widetilde{T}_{r}(t)=\widetilde{T}_{r}+\Delta \widetilde{T}_{r}(t) \end{aligned}$$
(2.10)

with

$$\begin{aligned} \bigl(\Delta W_{r}(t), \Delta T_{r}(t), \varGamma\Delta \widetilde{T}_{r}(t) \bigr)=E_{r}K(t) (N_{r}, N_{r0}, \widetilde{N}_{r} ), \end{aligned}$$
(2.11)

where \(E_{r}, N_{r}, N_{r0}\), and \(\widetilde{N}_{r}\) are real matrices, and \(|K(t)|\leqslant I\) with the identity matrix I.
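A minimal sketch of how such an admissible uncertainty can be generated numerically (the dimensions, the data, and the choice of \(K(t)\) are purely illustrative):

```python
import numpy as np

# Sketch of the norm-bounded uncertainty (2.10)-(2.11):
# W_r(t) = W_r + E_r K(t) N_r with |K(t)| <= I.
rng = np.random.default_rng(2)
n = 3
W_r = rng.standard_normal((n, n))          # nominal weights
E_r = rng.standard_normal((n, n))
N_r = rng.standard_normal((n, n))

def W_r_t(t):
    # Any K(t) with entries bounded by 1 in absolute value is admissible;
    # a diagonal sinusoid is one simple choice.
    K = np.diag(np.sin(t * np.arange(1, n + 1)))
    return W_r + E_r @ K @ N_r

print(np.round(W_r_t(0.7), 3))
```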

In the case \(p=2\), system (2.1) becomes the following common reaction–diffusion high-order Markovian jump Cohen–Grossberg neural network:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{i}(t,x) }{\partial t} = \sum_{k=1}^{n} \frac{\partial }{\partial x_{k}} (D_{i}\frac{\partial u_{i}(t,x)}{\partial x_{k}} ) -a_{i}(u_{i}(t,x)) [b_{i}(u_{i}(t,x))\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{n}W_{rij}(t)f_{j}(u_{j}(t,x))-\sum_{j=1}^{n}T_{rij}(t)g_{j}(u_{j}(t-\tau(t),x))\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{n}\sum_{l=1}^{n}T_{rijl}(t)g_{j}(u_{j}(t-\tau (t),x))g_{l}(u_{l}(t-\tau(t),x))+\alpha_{i} ],\\ \quad r\in S, i=1,2,\ldots,n,\\ \frac{\partial u_{i}(t,x)}{\partial x_{j}}=0,\quad i,j=1,2,\ldots,n, (t,x)\in [0,+\infty)\times\partial\varOmega, \\ u_{i}(s,x)=\xi_{i}(s,x), x\in\varOmega,\quad {-}\tau\leqslant s\leqslant0, \tau (t)\in[0,\tau], \end{cases}\displaystyle \end{aligned}$$
(2.12)

or

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u(t,x) }{\partial t} = \nabla \bigl(D\nabla u(t,x) \bigr) -A(u(t,x))\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}\times [B(u(t,x))-W_{r}(t)f(u(t,x))-T_{r}(t)g(u(t-\tau(t),x))\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}-\bigl(\varsigma_{0}^{T}\ \varsigma_{0}^{T}\ \cdots\ \varsigma_{0}^{T}\bigr)_{n\times n^{2}} \begin{pmatrix} T_{r1}(t)+T_{r1}^{T}(t)\\ T_{r2}(t)+T_{r2}^{T}(t)\\ \vdots\\ T_{rn}(t)+T_{rn}^{T}(t) \end{pmatrix}_{n^{2}\times n}\\ \phantom{\frac{\partial u(t,x) }{\partial t} =}{}\times g(u(t-\tau(t),x))+\alpha ],\\ \frac{\partial u(t,x)}{\partial\nu}=0,\quad (t,x)\in[0,+\infty)\times\partial\varOmega, \\ u(s,x)=\xi(s,x),\quad x\in\varOmega, -\tau\leqslant s\leqslant0, \tau(t)\in[0,\tau]. \end{cases}\displaystyle \end{aligned}$$
(2.13)

Lemma 2.1

([44])

Let \(\varepsilon>0\) be any given scalar, and let \(\mathcal{M},\mathfrak{E}\), and \(\mathcal{K}\) be matrices of appropriate dimensions. If \(\mathcal{K}^{T}\mathcal{K}\leqslant I\), then we have

$$\mathcal{M}\mathcal{K}\mathfrak{E}+\mathfrak{E}^{T} \mathcal{K}^{T}\mathcal {M}^{T}\leqslant\varepsilon^{-1} \mathcal{M}\mathcal{M}^{T}+\varepsilon \mathfrak{E}^{T} \mathfrak{E}. $$
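A quick numeric sanity check of Lemma 2.1 (an illustration, not a proof): for a random contraction \(\mathcal{K}\) the gap \(\varepsilon^{-1}\mathcal{M}\mathcal{M}^{T}+\varepsilon\mathfrak{E}^{T}\mathfrak{E}-(\mathcal{M}\mathcal{K}\mathfrak{E}+\mathfrak{E}^{T}\mathcal{K}^{T}\mathcal{M}^{T})\) should be positive semidefinite:

```python
import numpy as np

# Numeric check of Lemma 2.1 with a random contraction K (K^T K <= I).
rng = np.random.default_rng(3)
n, eps = 4, 0.8
M = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
K = 0.9 * Q                                # K^T K = 0.81 I <= I

gap = (1 / eps) * M @ M.T + eps * E.T @ E - (M @ K @ E + E.T @ K.T @ M.T)
print(np.linalg.eigvalsh(gap).min() >= -1e-10)   # True: gap is PSD
```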

Lemma 2.2

(Schur complement [45])

Given matrices \(\mathcal{Q}(t)\), \(\mathcal{S}(t)\), and \(\mathcal{R}(t)\) of appropriate dimensions, where \(\mathcal{Q}(t)=\mathcal{Q}(t)^{T}\) and \(\mathcal{R}(t)=\mathcal{R}(t)^{T}\), we have

$$\begin{pmatrix} \mathcal{Q}(t) & \mathcal{S}(t)\\ \mathcal{S}^{T}(t) & \mathcal{R}(t) \end{pmatrix}>0 $$

if and only if

$$\mathcal{R}(t)>0,\quad \mathcal{Q}(t)-\mathcal{S}(t)\mathcal {R}^{-1}(t) \mathcal{S}^{T}(t)>0, $$

or

$$\mathcal{Q}(t)>0, \quad \mathcal{R}(t)-\mathcal{S}^{T}(t)\mathcal {Q}^{-1}(t)\mathcal{S}(t)>0, $$

where \(\mathcal{Q}(t)\), \(\mathcal{S}(t)\), and \(\mathcal{R}(t)\) are dependent on t.
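The equivalence in Lemma 2.2 is likewise easy to verify numerically on random data; a minimal sketch:

```python
import numpy as np

# Numeric illustration of the Schur complement (Lemma 2.2) on a random
# positive definite block matrix.
rng = np.random.default_rng(4)
n = 3
A = rng.standard_normal((2 * n, 2 * n))
Mfull = A @ A.T + 1e-2 * np.eye(2 * n)     # symmetric positive definite
Q, S, R = Mfull[:n, :n], Mfull[:n, n:], Mfull[n:, n:]

def is_pd(X):
    return np.linalg.eigvalsh((X + X.T) / 2).min() > 0

# Block matrix > 0  <=>  R > 0 and Q - S R^{-1} S^T > 0.
print(is_pd(Mfull), is_pd(R), is_pd(Q - S @ np.linalg.solve(R, S.T)))
```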

Lemma 2.3

(Poincaré integral inequality (see [46]))

Let Ω be a bounded domain of \(R^{n}\) with a smooth boundary ∂Ω of class \(\mathcal{C}^{2}\), and let \(h(x)\) be a real-valued function belonging to \(H_{0}^{1}(\varOmega)\) such that \(\frac {\partial h(x)}{\partial\nu}|_{\partial\varOmega}=0\). Then

$$\begin{aligned} \int_{\varOmega}\bigl\vert \nabla h(x) \bigr\vert ^{2} \,dx\geqslant\lambda_{1} \int_{\varOmega}\bigl\vert h(x) \bigr\vert ^{2} \,dx, \end{aligned}$$
(2.14)

where \(\lambda_{1}\) is the smallest positive eigenvalue of the Neumann boundary problem

$$\begin{aligned} \textstyle\begin{cases} -\Delta h(x)=\lambda h(x),\quad x\in\varOmega,\\ \frac{\partial h(x)}{\partial x_{j}}=0,\quad x\in\partial\varOmega. \end{cases}\displaystyle \end{aligned}$$
(2.15)
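For example, on the one-dimensional interval \(\varOmega=(0,\ell)\) problem (2.15) can be solved explicitly: the eigenfunctions are \(h_{k}(x)=\cos(k\pi x/\ell)\) with eigenvalues \(\lambda_{k}=(k\pi/\ell)^{2}\), \(k=0,1,2,\ldots\), so the constant in (2.14) is

$$\lambda_{1}= \biggl(\frac{\pi}{\ell} \biggr)^{2}. $$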

3 Main results

Before giving the main results of this paper, we need to present two necessary technical lemmas.

Lemma 3.1

Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots, u_{n}^{*})^{T}\) be an equilibrium point of system (2.6), and let \(\tilde{u}=u-u^{*}\), where \(u=u(t,x)\) is any solution of system (2.6). Then

$$\begin{aligned} \sum_{k=1}^{n}\frac{\partial}{\partial x_{k}} \biggl(D_{i} \vert \nabla u_{i} \vert ^{p-2} \frac{\partial u_{i}}{\partial x_{k}}\biggr)=\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla\tilde{u}_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr) \end{aligned}$$
(3.1)

and

$$\begin{aligned} & \int_{\varOmega}\sum_{i=1}^{n}q_{i} \tilde{u}_{i}\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla\tilde{u}_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr)\,dx \\ &\quad=-\sum _{k=1}^{n} \sum_{i=1}^{n} \int_{\varOmega}q_{i} D_{i} \vert \nabla \tilde{u}_{i} \vert ^{p-2}\biggl(\frac{\partial\tilde{u}_{i}}{\partial x_{k}} \biggr)^{2}\,dx, \end{aligned}$$
(3.2)

where \(q_{i}>0\) for all i.

Proof

Indeed, since \(u_{i}^{*}\) is a real number,

$$\tilde{u}_{i}\triangleq u_{i}-u_{i}^{*} \Rightarrow\frac{\partial u_{i}}{\partial x_{k}}=\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\Rightarrow\nabla u_{i}=\nabla\tilde{u}_{i}, $$

and hence

$$\begin{aligned} \sum_{k=1}^{n}\frac{\partial}{\partial x_{k}} \biggl(D_{i} \vert \nabla u_{i} \vert ^{p-2} \frac{\partial u_{i}}{\partial x_{k}}\biggr)&=\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla u_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr)\\ &=\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla\tilde{u}_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr). \end{aligned}$$

On the other hand, the Neumann zero boundary condition yields

$$\begin{aligned} \int_{\varOmega}\sum_{i=1}^{n}q_{i} \tilde{u}_{i}\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla\tilde{u}_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr)\,dx =&-\sum _{k=1}^{n} \sum_{i=1}^{n} \int_{\varOmega}q_{i} D_{i} \vert \nabla \tilde{u}_{i} \vert ^{p-2}\biggl(\frac{\partial\tilde{u}_{i}}{\partial x_{k}} \biggr)^{2}\,dx. \end{aligned}$$

 □

In addition, from (H4) and the Weber theorem on one-variable quadratic equations it is not difficult to get the following conclusion.

Lemma 3.2

Let \(u^{*}=(u_{1}^{*},u_{2}^{*},\ldots, u_{n}^{*})^{T}\) be an equilibrium point of system (2.6), and let \(\tilde{u}=u-u^{*}\), \(\overline{f}(\tilde{u})=f(u)-f(u^{*})\), and \(\overline{g}(\tilde{u})=g(u)-g(u^{*})\). Then there are two positive definite diagonal matrices \(K_{0}\) and \(K_{1}\) such that

$$\begin{aligned} &2 \bigl\vert \overline{f}\bigl(\tilde{u}(t,x)\bigr) \bigr\vert ^{T}K_{0} \bigl\vert \overline{f}\bigl(\tilde {u}(t,x) \bigr) \bigr\vert -2 \bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}K_{0}(F_{1}+F_{2}) \bigl\vert \overline{f}\bigl(\tilde {u}(t,x)\bigr) \bigr\vert \\ &\quad{}+2 \bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}F_{1}K_{0}F_{2} \bigl\vert \tilde{u}(t,x) \bigr\vert \leqslant0 \end{aligned}$$
(3.3)

and

$$\begin{aligned} &2 \bigl\vert \overline{g}\bigl(\tilde{u}\bigl(t-\tau(t),x\bigr)\bigr) \bigr\vert ^{T}K_{1} \bigl\vert \overline {g}\bigl(\tilde{u} \bigl(t-\tau(t),x\bigr)\bigr) \bigr\vert \\ &\quad{}-2 \bigl\vert \tilde{u}\bigl(t-\tau (t),x\bigr) \bigr\vert ^{T}K_{1}(G_{1}+G_{2}) \bigl\vert \overline{g}\bigl(\tilde{u}\bigl(t-\tau(t),x\bigr)\bigr) \bigr\vert \\ &\quad{}+2 \bigl\vert \tilde{u}\bigl(t-\tau(t),x\bigr) \bigr\vert ^{T}G_{1}K_{1}G_{2} \bigl\vert \tilde{u}\bigl(t-\tau(t),x\bigr) \bigr\vert \leqslant 0, \end{aligned}$$
(3.4)

where \(u=u(t,x)\) is any solution of system (2.6).

We now give the main result of this paper.

Theorem 3.3

Assume that

$$\begin{aligned} &\bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}-\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)^{-1} \succcurlyeq0, \\ &\quad r\in S. \end{aligned}$$
(3.5)

Then:

  1. (a)

    System (2.1) or system (2.6) has a unique equilibrium point.

  2. (b)

    All the solutions of system (2.1) and system (2.6) are bounded.

  3. (c)

    If there are positive definite diagonal matrices \(P_{r}\) \((r\in S)\), \(K_{0}\), and \(K_{1}\) such that

    $$\begin{aligned} \widetilde{\varPhi}_{r}< 0\quad \forall r\in S, \end{aligned}$$
    (3.6)

    where

    $$\widetilde{\varPhi}_{r}= \begin{pmatrix} \widetilde{\mathcal{A}}_{r} & 0 & P_{r}\overline{A} \vert W_{r} \vert +K_{0}(F_{1}+F_{2}) & P_{r}\overline{A}( \vert T_{r} \vert +\varGamma \vert \widetilde{T}_{r} \vert ) & P_{r}\overline{A} \vert E_{r} \vert & 0 \\ \ast & -2G_{1}K_{1}G_{2} & 0 & K_{1}(G_{1}+G_{2}) & 0 & 0 \\ \ast & \ast & -2K_{0} & 0 & 0 & \vert N_{r} \vert ^{T} \\ \ast & \ast & \ast & -2K_{1} & 0 & \vert N_{r0} \vert ^{T}+ \vert \widetilde{N}_{r} \vert ^{T} \\ \ast & \ast & \ast & \ast & -I & 0 \\ \ast & \ast & \ast & \ast & \ast & -I \end{pmatrix}$$

with \(\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}\) and

$$\begin{aligned} \mathcal{A}_{r}= -2P_{r}\underline{A}\mathbb{B}+\sum _{j=1}^{n_{0}} \gamma _{rj}P_{j},\quad r\in S, \end{aligned}$$
(3.7)

then the unique equilibrium point of system (2.1) or system (2.6) is globally asymptotically stochastic robust stable.

Proof

System (2.1) is equivalent to system (2.6). We divide the proof of the theorem into four steps.

Step 1. We first prove that there is at least one equilibrium point for system (2.6).

If \(u^{*}\in R^{n}\) is an equilibrium point of (2.1), then by (2.1) we get

$$h_{ri}\bigl(u_{i}^{*}\bigr)=0,\quad \forall r\in S, i=1,2, \ldots,n, $$

where

$$\begin{aligned} h_{ri}\bigl(u_{i}^{*}\bigr)={}& b_{i} \bigl(u_{i}^{*}\bigr)-\sum_{j=1}^{n}W_{rij}(t)f_{j} \bigl(u_{j}^{*}\bigr)-\sum_{j=1}^{n}T_{rij}(t)g_{j} \bigl(u_{j}^{*}\bigr) \\ &{}-\sum_{j=1}^{n} \sum_{l=1}^{n}T_{rijl}(t)g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr)- \alpha_{i}. \end{aligned}$$

Since the indices j and l are symmetric, exchanging j and l results in

$$\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t)\frac{g_{l}(u_{l}^{*})}{2} g_{j}\bigl(u_{j}^{*}\bigr)=\sum_{j=1}^{n} \sum_{l=1}^{n}T_{rilj}(t) \frac{g_{j}(u_{j}^{*})}{2}g_{l}\bigl(u_{l}^{*}\bigr) =\sum _{j=1}^{n}\sum_{l=1}^{n}T_{rilj}(t) \frac{g_{l}(u_{l}^{*})}{2}g_{j}\bigl(u_{j}^{*}\bigr), $$

and hence

$$\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t)g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr)=\sum _{j=1}^{n}\sum_{l=1}^{n} \frac{g_{l}(u_{l}^{*})}{2}\bigl(T_{rijl}(t)+T_{rilj}(t)\bigr) g_{j}\bigl(u_{j}^{*}\bigr). $$

Next, taking

$$H_{ri}\bigl(u_{i}^{*},\lambda\bigr)=\lambda h_{ri} \bigl(u_{i}^{*}\bigr)+(1-\lambda)u_{i}^{*},\quad \lambda\in[0,1], $$

by (H2) and (H4) we get

$$\begin{aligned} \bigl\vert H_{ri}\bigl(u_{i}^{*},\lambda\bigr) \bigr\vert \geqslant{}&(1-\lambda) \bigl\vert u_{i}^{*} \bigr\vert +\lambda \bigl\vert b_{i}\bigl(u_{i}^{*}\bigr) \bigr\vert - \lambda\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert \bigl\vert f_{j} \bigl(u_{j}^{*}\bigr) \bigr\vert -\lambda\sum _{j=1}^{n} \bigl\vert T_{rij}(t) \bigr\vert \bigl\vert g_{j}\bigl(u_{j}^{*}\bigr) \bigr\vert \\ &{}- \lambda\frac{1}{2}\sum_{j=1}^{n} \sum_{l=1}^{n} \bigl\vert g_{l} \bigl(u_{l}^{*}\bigr) \bigr\vert \bigl( \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert \bigr) \bigl\vert g_{j}\bigl(u_{j}^{*}\bigr) \bigr\vert -\lambda \vert \alpha_{i} \vert \\ \geqslant{}&\bigl[1+\lambda(\tilde{b}_{i}-1)\bigr] \bigl\vert u_{i}^{*} \bigr\vert -\lambda\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert F_{2j} \bigl\vert u_{j}^{*} \bigr\vert -\lambda\sum_{j=1}^{n} \bigl\vert T_{rij}(t) \bigr\vert G_{2j} \bigl\vert u_{j}^{*} \bigr\vert \\ &{}- \lambda\frac{1}{2}\sum_{j=1}^{n} \sum_{l=1}^{n}M_{l}\bigl( \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert \bigr) G_{2j} \bigl\vert u_{j}^{*} \bigr\vert -\lambda \vert \alpha_{i} \vert -\lambda\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert \bigl\vert f_{j}(0) \bigr\vert \\ &{}-\lambda\sum_{j=1}^{n} \bigl\vert T_{rij}(t) \bigr\vert \bigl\vert g_{j}(0) \bigr\vert - \lambda\frac{1}{2}\sum_{j=1}^{n}\sum _{l=1}^{n}M_{l}\bigl( \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert \bigr) \bigl\vert g_{j}(0) \bigr\vert . \end{aligned}$$
(3.8)

Moreover, we can rewrite (3.8) in the matrix and vector form:

$$\begin{aligned} & \bigl\vert H_{r}\bigl(u^{*},\lambda\bigr) \bigr\vert \\ &\quad \geqslant (1-\lambda) \bigl\vert u^{*} \bigr\vert +\lambda \biggl(\mathbb{B}- \bigl\vert W_{r}(t) \bigr\vert F_{2} - \bigl\vert T_{r}(t) \bigr\vert G_{2}-\frac{1}{2}\varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert G_{2} \biggr) \bigl\vert u^{*} \bigr\vert \\ &\qquad{}-\lambda \biggl[ \vert \alpha \vert + \bigl\vert W_{r}(t) \bigr\vert \bigl\vert f(0) \bigr\vert + \bigl\vert T_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert +\frac{1}{2} \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert \biggr] \\ &\quad \geqslant(1-\lambda) \bigl\vert u^{*} \bigr\vert +\lambda \bigl(\mathbb{B}- \bigl\vert W_{r}(t) \bigr\vert F_{2} - \bigl\vert T_{r}(t) \bigr\vert G_{2}- \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert G_{2} \bigr) \bigl\vert u^{*} \bigr\vert \\ &\qquad{}-\lambda \biggl[ \vert \alpha \vert + \bigl\vert W_{r}(t) \bigr\vert \bigl\vert f(0) \bigr\vert + \bigl\vert T_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert +\frac{1}{2} \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert \biggr] \\ &\quad\geqslant(1-\lambda) \bigl\vert u^{*} \bigr\vert +\lambda \bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}\\ &\qquad{}-\bigl(\varGamma \vert \widetilde {T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr) \biggl[ \bigl\vert u^{*} \bigr\vert - \bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}\\ &\qquad{}-\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)^{-1} \biggl( \vert \alpha \vert + \bigl\vert W_{r}(t) \bigr\vert \bigl\vert f(0) \bigr\vert \\ &\qquad{}+ \bigl\vert T_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert +\frac{1}{2} \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert \biggr) \biggr]. \end{aligned}$$

Let

$$\begin{aligned} \widetilde{\varOmega}={}& \biggl\{ u\in R^{n}, \vert u \vert \leqslant \mathfrak{R}+ \bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}- \bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)^{-1} \\ &{}\times\biggl( \vert \alpha \vert + \bigl\vert W_{r}(t) \bigr\vert \bigl\vert f(0) \bigr\vert + \bigl\vert T_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert +\frac{1}{2} \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \bigl\vert g(0) \bigr\vert \biggr) \biggr\} , \end{aligned}$$

where \(\mathfrak{R}\in R^{n}\) is a positive vector such that \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma |\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\mathfrak{R}>0\), since \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma|\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\) is an M-matrix.
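Numerically, the M-matrix property required here is straightforward to test: the matrix must have nonpositive off-diagonal entries and an entrywise nonnegative inverse, and a positive vector \(\mathfrak{R}\) can then be obtained by solving a linear system. A minimal Python sketch with hypothetical two-neuron data:

```python
import numpy as np

# Check that M_check = B - (|W|+|E||N|)F2 - (|T|+|E||N0|)G2 - (...)G2
# is a nonsingular M-matrix; the penalty matrix lumps the subtracted
# terms and its numbers are purely illustrative.
Bb = np.diag([5.0, 6.0])
penalty = np.array([[1.0, 0.5],
                    [0.4, 1.2]])
M_check = Bb - penalty

off_diag_ok = np.all(M_check - np.diag(np.diag(M_check)) <= 0)
inv_nonneg = np.all(np.linalg.inv(M_check) >= 0)
print(off_diag_ok and inv_nonneg)          # True -> nonsingular M-matrix

R = np.linalg.solve(M_check, np.ones(2))   # M_check @ R = (1,...,1)^T > 0
print(R > 0)                               # R is a positive vector
```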

Then Ω̃ is not empty since \(0\in\widetilde{\varOmega}\), and for any \(u\in\partial\widetilde{\varOmega}\),

$$\begin{aligned} \bigl\vert H_{r}(u,\lambda) \bigr\vert \geqslant{}& (1-\lambda) \vert u \vert +\lambda \bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}\\ &{}-\bigl(\varGamma \vert \widetilde {T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)\mathfrak{R}>0. \end{aligned}$$

So

$$H_{r}(u,\lambda)\neq0 \quad \forall u\in\partial\widetilde{\varOmega}, \lambda \in [0,1], r\in S. $$

Now the homotopy invariance theorem yields

$$\operatorname{deg}(h, \widetilde{\varOmega}, 0)=\operatorname{deg}\bigl(H(u,1),\widetilde{\varOmega },0\bigr)=\operatorname{deg} \bigl(H(u,0),\widetilde{\varOmega},0\bigr)=1, $$

where \(\operatorname{deg}(h, \widetilde{\varOmega}, 0)\) denotes topological degree. Moreover, topological degree theory tells us that there is at least one solution for \(h(u)=0\) in Ω̃, which implies that there exists at least one equilibrium point \(u^{*}\) for system (2.1).

Step 2. We prove that \(u^{*}\) is the unique equilibrium point of system (2.1).

Indeed, if \(v^{*}\) is another equilibrium point of (2.1), then

$$\begin{aligned} 0={}& {-}b_{i}\bigl(u_{i}^{*}\bigr)+\sum _{j=1}^{n}W_{rij}(t)f_{j} \bigl(u_{j}^{*}\bigr)+\sum_{j=1}^{n}T_{rij}(t)g_{j} \bigl(u_{j}^{*}\bigr) \\ &{}+\sum_{j=1}^{n} \sum_{l=1}^{n}T_{rijl}(t)g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr)+ \alpha_{i}, \\ 0={}& -b_{i}\bigl(v_{i}^{*}\bigr)+\sum _{j=1}^{n}W_{rij}(t)f_{j} \bigl(v_{j}^{*}\bigr)+\sum_{j=1}^{n}T_{rij}(t)g_{j} \bigl(v_{j}^{*}\bigr) \\ &{}+\sum_{j=1}^{n} \sum_{l=1}^{n}T_{rijl}(t)g_{j} \bigl(v_{j}^{*}\bigr)g_{l}\bigl(v_{l}^{*}\bigr)+ \alpha_{i}. \end{aligned}$$

Since

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t) \bigl(g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*} \bigr)-g_{j}\bigl(v_{j}^{*}\bigr)g_{l} \bigl(v_{l}^{*}\bigr) \bigr) \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}\frac{g_{l}(u_{l}^{*})+g_{l}(v_{l}^{*})}{2}\bigl(T_{rijl}(t)+T_{rilj}(t) \bigr) \bigl(g_{j}\bigl(u_{j}^{*}\bigr)-g_{j} \bigl(v_{j}^{*}\bigr) \bigr), \end{aligned}$$

we have

$$\begin{aligned} \tilde{b}_{i} \bigl\vert u_{i}^{*}-v_{i}^{*} \bigr\vert \leqslant{}& \bigl\vert b_{i}\bigl(u_{i}^{*} \bigr)-b_{i}\bigl(v_{i}^{*}\bigr) \bigr\vert \\ \leqslant{}&\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert F_{2j} \bigl\vert u_{j}^{*}-v_{j}^{*} \bigr\vert +\sum _{j=1}^{n} \bigl\vert T_{rij}(t) \bigr\vert G_{2j} \bigl\vert u_{j}^{*}-v_{j}^{*} \bigr\vert \\ &{}+\sum_{j=1}^{n}\sum _{l=1}^{n}M_{l} \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert G_{2j} \bigl\vert u_{j}^{*}-v_{j}^{*} \bigr\vert \end{aligned}$$

or

$$\begin{aligned} &\mathbb{B} \bigl\vert u^{*}-v^{*} \bigr\vert \\ &\quad\leqslant \bigl\vert W_{r}(t) \bigr\vert F_{2} \bigl\vert u^{*}-v^{*} \bigr\vert + \bigl\vert T_{r}(t) \bigr\vert G_{2} \bigl\vert u^{*}-v^{*} \bigr\vert +\varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert G_{2} \bigl\vert u^{*}-v^{*} \bigr\vert \\ &\quad\leqslant \bigl[\bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2}+ \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr)G_{2} +\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr)G_{2} \bigr] \bigl\vert u^{*}-v^{*} \bigr\vert , \end{aligned}$$

that is,

$$\bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}-\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr) \bigl\vert u^{*}-v^{*} \bigr\vert \leqslant0. $$

Since

$$\bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}-\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)^{-1} \succcurlyeq0, $$

we get \(|u^{*}-v^{*}|\leqslant0\), and hence \(u^{*}=v^{*}\).

Thus we have proved the existence of the unique equilibrium point of system (2.6), and so conclusion (a) is proved.

Step 3. Next, we prove the boundedness of all the solutions of system (2.1).

First, we note the following fact:

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t) \bigl[g_{j} \bigl(u_{j}\bigl(t-\tau (t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)-g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr) \bigr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t)\varsigma_{l} \bigl(g_{j}-g_{j}\bigl(u_{j}^{*}\bigr)\bigr)+ \sum _{j=1}^{n}\sum_{l=1}^{n}T_{rijl}(t) \varsigma_{j}\bigl(g_{l}-g_{l} \bigl(u_{l}^{*}\bigr)\bigr), \end{aligned}$$
(3.9)

where \(g_{j}=g_{j}(u_{j}(t-\tau(t),x))\), \(g_{l}=g_{l}(u_{l}(t-\tau(t),x))\), and

$$\begin{aligned} \varsigma_{l}=\frac{g_{l}(u_{l}(t-\tau(t),x))+g_{l}(u_{l}^{*})}{2}. \end{aligned}$$
(3.10)

Moreover, since the indices j and l are symmetric, exchanging j and l results in

$$\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t)\varsigma_{j} \bigl(g_{l}(u_{l})-g_{l}\bigl(u_{l}^{*} \bigr)\bigr) =\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rilj}(t)\varsigma_{l} \bigl(g_{j}(u_{j})-g_{j}\bigl(u_{j}^{*} \bigr)\bigr). $$

So we get

$$\begin{aligned} &\sum_{j=1}^{n}\sum _{l=1}^{n}T_{rijl}(t) \bigl[g_{j} \bigl(u_{j}\bigl(t-\tau (t),x\bigr)\bigr)g_{l} \bigl(u_{l}\bigl(t-\tau(t),x\bigr)\bigr)-g_{j} \bigl(u_{j}^{*}\bigr)g_{l}\bigl(u_{l}^{*}\bigr) \bigr] \\ &\quad =\sum_{j=1}^{n}\sum _{l=1}^{n}\bigl(T_{rijl}(t)+T_{rilj}(t) \bigr)\varsigma _{l}\bigl(g_{j}-g_{j} \bigl(u_{j}^{*}\bigr)\bigr). \end{aligned}$$

Moreover, by Lemma 3.1 we may rewrite system (2.1) in the following equivalent form:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial \tilde{u}_{i}(t,x)}{\partial t} = \sum_{k=1}^{n} \frac {\partial}{\partial x_{k}} (D_{i} \vert \nabla\tilde{u}_{i}(t,x) \vert ^{p-2}\frac {\partial\tilde{u}_{i}(t,x)}{\partial x_{k}} ) -a_{i}(\tilde{u}_{i}(t,x)+u_{i}^{*}) [\tilde{b}_{i}(\tilde{u}_{i}(t,x))\\ \phantom{\frac{\partial \tilde{u}_{i}(t,x)}{\partial t} =}{}-\sum_{j=1}^{n}W_{rij}(t)\overline{f}_{j}(\tilde{u}_{j}(t,x))-\sum_{j=1}^{n}T_{rij}(t)\overline{g}_{j}(\tilde{u}_{j}(t-\tau(t),x))\\ \phantom{\frac{\partial \tilde{u}_{i}(t,x)}{\partial t} =}{} -\sum_{j=1}^{n}\sum_{l=1}^{n}(T_{rijl}(t)+T_{rilj}(t))\varsigma _{l}\overline{g}_{j}(\tilde{u}_{j}(t-\tau(t),x)) ],\\ \frac{\partial\tilde{u}_{i}(t,x)}{\partial x_{j}}=0, \quad i,j=1,2,\ldots,n, (t,x)\in[0,+\infty)\times\partial\varOmega, \\ \tilde{u}_{i}(s,x)=\xi_{i}(s,x)-u_{i}^{*},\quad x\in\varOmega, -\tau\leqslant s\leqslant0, \tau(t)\in[0,\tau], \end{cases}\displaystyle \end{aligned}$$
(3.11)

where \(\tilde{u}=u-u^{*}\), \(\overline{f}_{j}(\tilde {u}_{j}(t,x))=f_{j}(u_{j}(t,x))-f_{j}(u_{j}^{*})\), and \(\overline{g}_{j}(\tilde{u}_{j}(t,x))=g_{j}(u_{j}(t,x))-g_{j}(u_{j}^{*})\).

Let

$$\bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert ^{2}= \int_{\varOmega}\bigl\vert u_{i}-u_{i}^{*} \bigr\vert ^{2}\,dx. $$

From Lemma 3.1 and (3.11) we can derive

$$\begin{aligned} &\frac{d}{dt} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert ^{2}\\ &\quad=2 \int_{\varOmega}\bigl(u_{i}-u_{i}^{*}\bigr) \frac{\partial (u_{i}-u_{i}^{*})}{\partial t}\,dx \\ &\quad \leqslant 2 \int_{\varOmega}\Biggl[-\underline{a}_{i}\tilde {b}_{i} \bigl\vert u_{i}(t,x)-u_{i}^{*} \bigr\vert ^{2}+\bar{a}_{i}\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert \bigl\vert u_{i}(t,x)-u_{i}^{*} \bigr\vert F_{2j} \bigl\vert u_{j}(t,x)-u_{j}^{*} \bigr\vert \\ &\qquad{}+\bar{a}_{i}\sum_{j=1}^{n} \bigl\vert u_{i}(t,x)-u_{i}^{*} \bigr\vert \Biggl( \bigl\vert T_{rij}(t) \bigr\vert + \sum_{l=1}^{n} \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert M_{l} \Biggr)\\ &\qquad{}\times G_{2j} \bigl\vert u_{j}\bigl(t- \tau (t),x\bigr)-u_{j}^{*} \bigr\vert \Biggr]\,dx, \end{aligned}$$

which, together with the Hölder inequality, implies

$$\begin{aligned} & \int_{\varOmega}\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert \bigl\vert u_{i}(t,x)-u_{i}^{*} \bigr\vert F_{2j} \bigl\vert u_{j}(t,x)-u_{j}^{*} \bigr\vert \,dx \\ &\quad \leqslant\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert F_{2j} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \end{aligned}$$

and

$$\begin{aligned} &2 \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \frac{d}{dt} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \\ &\quad=\frac{d}{dt} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert ^{2}\\ &\quad=2 \int_{\varOmega}\bigl(u_{i}-u_{i}^{*}\bigr) \frac{\partial (u_{i}-u_{i}^{*})}{\partial t}\,dx \\ &\quad \leqslant 2 \Biggl\{ -\underline{a}_{i}\tilde{b}_{i} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert ^{2}+\bar {a}_{i}\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &\qquad{}+\bar{a}_{i}\sum_{j=1}^{n} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert \Biggl( \bigl\vert T_{rij}(t) \bigr\vert + \sum_{l=1}^{n} \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert M_{l} \Biggr)\\ &\qquad{}\times G_{2j} \bigl\Vert u_{j}\bigl(t- \tau (t),x\bigr)-u_{j}^{*} \bigr\Vert \Biggr\} , \end{aligned}$$

which in turn implies that

$$\begin{aligned} &\frac{d}{dt} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \\ &\quad \leqslant -\underline{a}_{i}\tilde{b}_{i} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert + \bar{a}_{i}\sum_{j=1}^{n} \bigl\vert W_{rij}(t) \bigr\vert F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &\qquad{}+\bar{a}_{i}\sum_{j=1}^{n} \Biggl( \bigl\vert T_{rij}(t) \bigr\vert + \sum _{l=1}^{n} \bigl\vert T_{rijl}(t)+T_{rilj}(t) \bigr\vert M_{l} \Biggr)G_{2j} \bigl\Vert u_{j}\bigl(t-\tau (t),x\bigr)-u_{j}^{*} \bigr\Vert . \end{aligned}$$

Note that

$$\begin{aligned} &\bigl\vert W_{r}(t) \bigr\vert \leqslant \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert ,\qquad \bigl\vert T_{r}(t) \bigr\vert \leqslant \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \quad \text{and}\\ & \varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \leqslant \varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert . \end{aligned}$$

Define the matrix \(\hat{W}_{r}=(\hat{W}_{rij})_{n\times n}\) as

$$\hat{W}_{r}= \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert , $$

the matrix \(\hat{T}_{r}=(\hat{T}_{rij})_{n\times n}\) as

$$\hat{T}_{r}= \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert , $$

and the matrix \(\check{T}_{r}=(\check{T}_{rij})_{n\times n}\) as

$$\check{T}_{r}=\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert . $$

Then we have

$$\begin{aligned} \frac{d}{dt} \bigl\Vert u_{i}-u_{i}^{*} \bigr\Vert \leqslant{}& {-}\underline{a}_{i}\tilde{b}_{i} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert +\bar{a}_{i}\sum _{j=1}^{n} \hat{W}_{rij} F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &{}+\bar{a}_{i}\sum_{j=1}^{n} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(t-\tau(t),x\bigr)-u_{j}^{*} \bigr\Vert . \end{aligned}$$

Since \((\mathbb{B}- (|W_{r}|+|E_{r}||N_{r}|)F_{2} - (|T_{r}|+|E_{r}||N_{r0}|) G_{2}-(\varGamma|\widetilde{T}_{r}|+|E_{r}||\widetilde{N}_{r}|) G_{2} )\) is an M-matrix, there is a positive vector \(Q=(q_{1},q_{2},\ldots,q_{n})^{T}>0\) such that

$$\bar{a}_{i} \Biggl[q_{i}\tilde{b}_{i} -\sum _{j=1}^{n}q_{j} \hat{W}_{rij} F_{2j} - \sum_{j=1}^{n}q_{j} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \Biggr]>0\quad \forall i, r. $$

Let

$$\begin{aligned} \kappa_{i}={}&\frac{\bar{a}_{i}\sum_{j=1}^{n}q_{j} \hat{W}_{rij} F_{2j} +\bar{a}_{i} \sum_{j=1}^{n}q_{j} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j}}{q_{i}\underline{a}_{i}\tilde{b}_{i}} \\ ={}& \frac{\bar{a}_{i}\sum_{j=1}^{n}q_{j} \hat{W}_{rij} F_{2j} +\bar{a}_{i} \sum_{j=1}^{n}q_{j} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j}}{q_{i}\bar{a}_{i}\tilde{b}_{i}}\frac{\bar{a}_{i}}{\underline{a}_{i}}< \frac {\bar{a}_{i}}{\underline{a}_{i}}\quad \forall i, r. \end{aligned}$$

Let \(a_{*}=\max_{i}\frac{\bar{a}_{i}}{\underline{a}_{i}}\). Then \(a_{*}\geqslant 1\) and

$$\kappa_{i}< a_{*} \quad \forall i, r. $$

Since

$$\begin{aligned} \frac{d}{dt} \bigl\Vert u_{i}-u_{i}^{*} \bigr\Vert \leqslant{}& {-}\underline{a}_{i}\tilde{b}_{i} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert +\bar{a}_{i}\sum _{j=1}^{n} \hat{W}_{rij} F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &{}+\bar{a}_{i}\sum_{j=1}^{n} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(t-\tau(t),x\bigr)-u_{j}^{*} \bigr\Vert , \end{aligned}$$

we get

$$\begin{aligned} &e^{\underline{a}_{i}\tilde{b}_{i}t}\frac{d}{dt} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \\ &\quad \leqslant -e^{\underline{a}_{i}\tilde{b}_{i}t}\underline{a}_{i} \tilde{b}_{i} \bigl\Vert u_{i}(t,x)-u_{i}^{*} \bigr\Vert +e^{\underline{a}_{i}\tilde{b}_{i}t}\bar{a}_{i}\sum _{j=1}^{n} \hat{W}_{rij} F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &\qquad{}+e^{\underline{a}_{i}\tilde{b}_{i}t}\bar{a}_{i}\sum_{j=1}^{n} ( \hat {T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(t-\tau(t),x\bigr)-u_{j}^{*} \bigr\Vert , \end{aligned}$$

or

$$\begin{aligned} \frac{d}{dt} \bigl( e^{\underline{a}_{i}\tilde{b}_{i}t} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \bigr)\leqslant{}& e^{\underline{a}_{i}\tilde{b}_{i}t}\bar{a}_{i}\sum _{j=1}^{n} \hat {W}_{rij} F_{2j} \bigl\Vert u_{j}(t,x)-u_{j}^{*} \bigr\Vert \\ &{}+e^{\underline{a}_{i}\tilde {b}_{i}t}\bar{a}_{i}\sum_{j=1}^{n} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(t-\tau(t),x\bigr)-u_{j}^{*} \bigr\Vert . \end{aligned}$$

So we have

$$\begin{aligned} &\int_{0}^{t}\frac{d}{ds} \bigl( e^{\underline{a}_{i}\tilde{b}_{i}s} \bigl\Vert u_{i}(s)-u_{i}^{*} \bigr\Vert \bigr)\,ds\\ &\quad \leqslant \int_{0}^{t} \Biggl[ e^{\underline{a}_{i}\tilde {b}_{i}s} \bar{a}_{i}\sum_{j=1}^{n} \hat{W}_{rij} F_{2j} \bigl\Vert u_{j}(s,x)-u_{j}^{*} \bigr\Vert \\ &\qquad{}+e^{\underline{a}_{i}\tilde{b}_{i}s}\bar{a}_{i}\sum_{j=1}^{n} ( \hat {T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(s-\tau(s),x\bigr)-u_{j}^{*} \bigr\Vert \Biggr]\,ds, \end{aligned}$$

or

$$\begin{aligned} &\frac{1}{q_{i}} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \\ &\quad \leqslant \frac{1}{q_{i}} e^{-\underline {a}_{i}\tilde{b}_{i}t} \bigl\Vert \xi_{i}(0)-u_{i}^{*} \bigr\Vert + \frac{1}{q_{i}} \int_{0}^{t} \Biggl[ e^{-\underline{a}_{i}\tilde{b}_{i}(t-s)} \bar{a}_{i}\sum_{j=1}^{n} \hat {W}_{rij} F_{2j} \bigl\Vert u_{j}(s,x)-u_{j}^{*} \bigr\Vert \\ & \qquad{}+ e^{-\underline{a}_{i}\tilde{b}_{i}(t-s)}\bar{a}_{i}\sum_{j=1}^{n} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(s-\tau(s),x\bigr)-u_{j}^{*} \bigr\Vert \Biggr]\,ds. \end{aligned}$$
(3.12)

The boundedness of \(\xi(\cdot)\) yields that there is \(\delta_{0}>0\) such that

$$e^{-\underline{a}_{i}\tilde{b}_{i}t} \bigl\Vert \xi_{i}(0)-u_{i}^{*} \bigr\Vert < \min\biggl\{ 1-\frac{\kappa _{i}}{a_{*}},\biggl(1-\frac{\kappa_{i}}{a_{*}} \biggr)q_{i}\biggr\} ,\quad t\geqslant\delta_{0}. $$

Let

$$k_{0}=1+\max_{1\leqslant i\leqslant n}\frac{1}{q_{i}} \Bigl(\sup _{s\in[-\tau,\delta_{0}]} \bigl\Vert u_{i}(s)-u_{i}^{*} \bigr\Vert \Bigr). $$

We will prove that

$$\begin{aligned} \bigl\Vert u_{i}(t)-u_{i}^{*} \bigr\Vert \leqslant q_{i}k_{0}\leqslant\max_{1\leqslant i\leqslant n} \{q_{i}k_{0}\}. \end{aligned}$$
(3.13)

We will prove this by contradiction. Assume that (3.13) does not hold. Then there must exist \(i\in\{1,2,\ldots,n\}\) and \(t_{*}>\delta_{0}\) such that

$$\bigl\Vert u_{i}(t_{*})-u_{i}^{*} \bigr\Vert = q_{i}k_{0}, \qquad \bigl\Vert u_{j}(t)-u_{j}^{*} \bigr\Vert \leqslant q_{j}k_{0},\quad j\in\{ 1,2,\ldots,n\}, t \leqslant t_{*}. $$

On the other hand, (3.12) and the definition of \(\kappa_{i}\) result in

$$\begin{aligned} &\frac{1}{q_{i}} \bigl\Vert u_{i}(t_{*})-u_{i}^{*} \bigr\Vert \\ &\quad \leqslant \frac{1}{q_{i}} e^{-\underline {a}_{i}\tilde{b}_{i}t_{*}} \bigl\Vert \xi_{i}(0)-u_{i}^{*} \bigr\Vert + \frac{1}{q_{i}} \int_{0}^{t_{*}} \Biggl[ e^{-\underline{a}_{i}\tilde{b}_{i}(t_{*}-s)} \bar{a}_{i}\sum_{j=1}^{n} \hat {W}_{rij} F_{2j} \bigl\Vert u_{j}(s,x)-u_{j}^{*} \bigr\Vert \\ &\qquad{} + e^{-\underline{a}_{i}\tilde{b}_{i}(t_{*}-s)}\bar{a}_{i}\sum_{j=1}^{n} ( \hat{T}_{rij} + \check{T}_{rij} )G_{2j} \bigl\Vert u_{j}\bigl(s-\tau(s),x\bigr)-u_{j}^{*} \bigr\Vert \Biggr]\,ds \\ &\quad < \biggl(1-\frac{\kappa_{i}}{a_{*}}\biggr)+ \kappa_{i} k_{0} \leqslant k_{0}\biggl(1-\frac {\kappa_{i}}{a_{*}}\biggr)+ \kappa_{i} k_{0}\leqslant k_{0}+ \kappa_{i} k_{0}\biggl(1-\frac{1}{a_{*}}\biggr)\leqslant k_{0}, \end{aligned}$$
(3.14)

which is contradictory to \(\|u_{i}(t_{*})-u_{i}^{*}\|=q_{i}k_{0}\).

So we have proved the boundedness of all the solutions of system (2.1) and thus obtained conclusion (b).

Step 4. We will prove that the equilibrium point \(u^{*}\) is globally robustly asymptotically stochastically stable.

From (H4) we have

$$\bigl\vert f(u)-f(v) \bigr\vert \leqslant F_{2} \vert u-v \vert ,\qquad \bigl\vert g(u)-g(v) \bigr\vert \leqslant G_{2} \vert u-v \vert $$

and

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial\tilde{u}(t,x) }{\partial t} = \nabla \bigl(D\nabla_{p}\tilde{u}(t,x) \bigr) -A(\tilde{u}(t,x)+u^{*})\\ \phantom{\frac{\partial\tilde{u}(t,x) }{\partial t} =}{}\times [\tilde{B}(\tilde{u}(t,x))-W_{r}(t)\overline{f}(\tilde{u}(t,x))-T_{r}(t)\overline{g}(\tilde{u}(t-\tau(t),x))\\ \phantom{\frac{\partial\tilde{u}(t,x) }{\partial t} =}{}-\bigl(\varsigma^{T}\ \varsigma^{T}\ \cdots\ \varsigma^{T}\bigr)_{n\times n^{2}} \begin{pmatrix} T_{r1}(t)+T_{r1}^{T}(t)\\ T_{r2}(t)+T_{r2}^{T}(t)\\ \vdots\\ T_{rn}(t)+T_{rn}^{T}(t) \end{pmatrix}_{n^{2}\times n} \overline{g}(\tilde{u}(t-\tau(t),x)) ],\\ \tilde{u}(s,x)=\xi(s,x)-u^{*},\quad x\in\varOmega, -\tau\leqslant s\leqslant0, \tau(t)\in[0,\tau], \end{cases}\displaystyle \end{aligned}$$
(3.15)

with the Neumann boundary value condition

$$\frac{\partial\tilde{u}(t,x)}{\partial\nu}=0, \quad (t,x)\in[0,+\infty )\times\partial\varOmega, $$

where \(T_{ri}=(T_{rijl})_{n\times n}\), \(T_{ri}^{T}=(T_{rilj})_{n\times n}\), and

$$\begin{aligned} &\varsigma_{l}=\frac{g_{l}(u_{l}(t-\tau(t),x))+g_{l}(u_{l}^{*})}{2}, \\ &\varsigma=(\varsigma_{1},\varsigma_{2},\ldots, \varsigma_{n})^{T},\qquad \varsigma ^{T}=( \varsigma_{1},\varsigma_{2},\ldots,\varsigma_{n}), \\ &\tilde{B}(\tilde{u})=B(u)-B\bigl(u^{*}\bigr),\qquad \overline{f}(\tilde{u})=f(u)-f \bigl(u^{*}\bigr),\qquad \overline{g}(\tilde{u})=g(u)-g\bigl(u^{*}\bigr). \end{aligned}$$

Remark 2

If system (3.15) is under the Dirichlet boundary value condition

$$\tilde{u}(t,x)=0, \quad (t,x)\in[0,+\infty)\times\partial\varOmega, $$

then we can still derive a formula similar to (3.2):

$$\begin{aligned} & \int_{\varOmega}\sum_{i=1}^{n}q_{i} \tilde{u}_{i}\sum_{k=1}^{n} \frac{\partial}{\partial x_{k}}\biggl(D_{i} \vert \nabla\tilde{u}_{i} \vert ^{p-2}\frac{\partial\tilde{u}_{i}}{\partial x_{k}}\biggr)\,dx \\ &\quad =-\sum _{k=1}^{n} \sum_{i=1}^{n} \int_{\varOmega}q_{i} D_{i} \vert \nabla \tilde{u}_{i} \vert ^{p-2}\biggl(\frac{\partial\tilde{u}_{i}}{\partial x_{k}} \biggr)^{2}\,dx \\ &\quad \leqslant-\tilde{\lambda}_{1}\sum _{i=1}^{n}q_{i}D_{i} \int_{\varOmega} \vert \tilde{u}_{i} \vert ^{p}\,dx, \end{aligned}$$
(3.2*)

where

$$\tilde{\lambda}_{1}=\inf_{0\neq\varphi\in W_{0}^{1,p}(\varOmega)}\frac{\int _{\varOmega} \vert \nabla\varphi \vert ^{p}\,dx}{\int_{\varOmega} \vert \varphi \vert ^{p}\,dx}>0. $$

If the Neumann boundary condition \(\frac{\partial u_{i}(t,x)}{\partial x_{j}}=0\) were replaced by the Dirichlet boundary condition \(u_{i}(t,x)|_{\partial\varOmega}=0\) in system (2.1), we could not derive a formula similar to (3.2), since system (2.1) involves \(\alpha_{i}\) (the input from outside the networks), so that \(\tilde{u}_{i}= u_{i}-u_{i}^{*}=-u_{i}^{*}\) on ∂Ω, that is, the equilibrium solution \(u_{i}^{*}\) is not necessarily zero. Of course, we could still deal with the Dirichlet boundary problem by employing the Ekeland variational principle and the Lyapunov–Krasovskii functional method [47]. However, system (3.15) does not involve the input \(\alpha_{i}\), which implies that the Dirichlet boundary value problem gives the same results as the Neumann boundary problem.

Obviously, the null solution of system (3.15) is globally asymptotically stable if and only if the unique equilibrium point \(u^{*}\) of system (2.6) is globally asymptotically stable.

Consider the Lyapunov–Krasovskii functional

$$\mathbb{V}(t,r)= \int_{\varOmega}\tilde{u}^{T}(t,x)P_{r} \tilde{u}(t,x)\,dx= \int _{\varOmega}\bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}P_{r} \bigl\vert \tilde{u}(t,x) \bigr\vert \,dx,\quad r \in S. $$

Moreover, (H2) and (H3) yield

$$\begin{aligned} \int_{\varOmega}\tilde{u}^{T}(t,x) \bigl(A(u)\tilde{B}( \tilde {u})\bigr)\,dx\geqslant{}& \int_{\varOmega}\tilde{u}^{T}(t,x)\underline{A}\mathbb{B} \tilde {u}(t,x)\,dx\\ ={}& \int_{\varOmega}\bigl\vert \tilde{u}^{T}(t,x) \bigr\vert \underline{A}\mathbb{B} \bigl\vert \tilde {u}(t,x) \bigr\vert \,dx. \end{aligned}$$

Since the matrices \(P_{j}\) are diagonal, we have

$$\begin{aligned} \int_{\varOmega}\tilde{u}^{T}(t,x)\sum _{j=1}^{n_{0}} \gamma _{rj}P_{j} \tilde{u}(t,x)\,dx =& \int_{\varOmega}\bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}\sum_{j=1}^{n_{0}} \gamma_{rj}P_{j} \bigl\vert \tilde {u}(t,x) \bigr\vert \,dx. \end{aligned}$$

Moreover, (3.15) and Lemma 3.1 yield

$$\begin{aligned} \mathcal{L}\mathbb{V}(t,r)\leqslant{}&0-2 \int_{\varOmega}\bigl\vert \tilde {u}(t,x) \bigr\vert ^{T}(\underline{A}\mathbb{B}P_{r}) \bigl\vert \tilde{u}(t,x) \bigr\vert \,dx \\ &{} + \int_{\varOmega}\bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}\sum_{j=1}^{n_{0}} \gamma_{rj}P_{j} \bigl\vert \tilde {u}(t,x) \bigr\vert \,dx \\ &{}+ \int_{\varOmega}\bigl[ \bigl\vert \overline{f}\bigl(\tilde{u}(t,x) \bigr) \bigr\vert ^{T} \bigl\vert W_{r}(t) \bigr\vert ^{T}\overline {A}P_{r} \bigl\vert \tilde{u}(t,x) \bigr\vert \\ &{} + \bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}P_{r} \overline {A} \bigl\vert W_{r}(t) \bigr\vert \bigl\vert \overline{f}\bigl(\tilde{u}(t,x)\bigr) \bigr\vert \bigr]\,dx \\ &{}+ \int_{\varOmega}\bigl[ \bigl\vert \overline{g}\bigl(\tilde{u} \bigl(t-\tau (t),x\bigr)\bigr) \bigr\vert ^{T} \bigl\vert T_{r}(t) \bigr\vert ^{T}\overline{A}P_{r} \bigl\vert \tilde{u}(t,x) \bigr\vert \\ &{}+ \bigl\vert \tilde {u}(t,x) \bigr\vert ^{T}P_{r}\overline{A} \bigl\vert T_{r}(t) \bigr\vert \bigl\vert \overline{g}\bigl(\tilde{u}\bigl(t-\tau (t),x\bigr)\bigr) \bigr\vert \bigr]\,dx \\ &{}+ \int_{\varOmega}\bigl[ \bigl\vert \overline{g}\bigl(\tilde{u} \bigl(t-\tau(t),x\bigr)\bigr) \bigr\vert ^{T} \bigl\vert \widetilde {T}_{r}(t) \bigr\vert ^{T}\varGamma^{T}\overline{A} P_{r} \bigl\vert \tilde{u}(t,x) \bigr\vert \\ &{}+ \bigl\vert \tilde{u}(t,x) \bigr\vert ^{T}P_{r} \overline{A}\varGamma \bigl\vert \widetilde{T}_{r}(t) \bigr\vert \bigl\vert \overline {g}\bigl(\tilde{u}\bigl(t-\tau(t),x\bigr)\bigr) \bigr\vert \bigr]\,dx, \end{aligned}$$
(3.16)

where \(\mathcal{L}\) is the weak infinitesimal operator, and Γ and \(\widetilde{T}_{r}(t)\) are the matrices defined in (2.8).

Letting

$$\chi\triangleq \begin{pmatrix} \vert \tilde{u}(t,x) \vert \\ \vert \tilde{u}(t-\tau(t),x) \vert \\ \vert \overline{f}(\tilde{u}(t,x)) \vert \\ \vert \overline{g}(\tilde{u}(t-\tau(t),x)) \vert \end{pmatrix}, $$

by (3.3)–(3.4) and (3.16) we can get

$$\begin{aligned} \mathcal{L}\mathbb{V}(t,r)\leqslant{}&\chi^{T} \begin{pmatrix} \widetilde{\mathcal{A}}_{r} & 0 & P_{r}\overline{A} \vert W_{r}(t) \vert +K_{0}(F_{1}+F_{2}) & P_{r}\overline{A}( \vert T_{r}(t) \vert +\varGamma \vert \widetilde{T}_{r}(t) \vert ) \\ \ast & -2G_{1}K_{1}G_{2} & 0 & K_{1}(G_{1}+G_{2}) \\ \ast & \ast & -2K_{0} & 0 \\ \ast & \ast & \ast & -2K_{1} \end{pmatrix}\chi \\ \triangleq{}&\chi^{T}\varTheta_{r}\chi, \end{aligned}$$
(3.17)

where \(\widetilde{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}\),

$$\mathcal{A}_{r}= -2P_{r}\underline{A}\mathbb{B} +\sum _{j=1}^{n_{0}} \gamma_{rj}P_{j}, $$

and

$$\varTheta_{r}= \begin{pmatrix} \mathcal{A}_{r}-2F_{1}K_{0}F_{2} & 0 & P_{r}\overline{A}( \vert W_{r} \vert + \vert E_{r} \vert \vert K(t) \vert \vert N_{r} \vert )+K_{0}(F_{1}+F_{2}) & \widetilde{P}_{r} \\ \ast & -2G_{1}K_{1}G_{2} & 0 & K_{1}(G_{1}+G_{2}) \\ \ast & \ast & -2K_{0} & 0 \\ \ast & \ast & \ast & -2K_{1} \end{pmatrix}=\varPhi_{r}+\Delta\varPhi_{r}$$
(3.18)

with \(\widetilde{P}_{r}=P_{r}\overline{A}(|T_{r}|+\varGamma|\widetilde {T}_{r}|+|E_{r}||K(t)|(|N_{r0}|+|\widetilde{N}_{r}|))\),

$$\varPhi_{r}= \begin{pmatrix} \mathcal{A}_{r}-2F_{1}K_{0}F_{2} & 0 & P_{r}\overline{A} \vert W_{r} \vert +K_{0}(F_{1}+F_{2}) & P_{r}\overline{A}( \vert T_{r} \vert +\varGamma \vert \widetilde{T}_{r} \vert ) \\ \ast & -2G_{1}K_{1}G_{2} & 0 & K_{1}(G_{1}+G_{2}) \\ \ast & \ast & -2K_{0} & 0 \\ \ast & \ast & \ast & -2K_{1} \end{pmatrix}$$

and

$$\Delta\varPhi_{r}= \begin{pmatrix} 0 & 0 & P_{r}\overline{A} \vert E_{r} \vert \vert K(t) \vert \vert N_{r} \vert & P_{r}\overline{A} \vert E_{r} \vert \vert K(t) \vert ( \vert N_{r0} \vert + \vert \widetilde{N}_{r} \vert ) \\ \ast & 0 & 0 & 0 \\ \ast & \ast & 0 & 0 \\ \ast & \ast & \ast & 0 \end{pmatrix}. $$

Denote

$$\mathcal{M}= \begin{pmatrix} P_{r}\overline{A} \vert E_{r} \vert \\ 0 \\ 0 \\ 0 \end{pmatrix},\qquad \mathfrak{E}^{T}= \begin{pmatrix} 0 \\ 0 \\ \vert N_{r} \vert ^{T} \\ \vert N_{r0} \vert ^{T}+ \vert \widetilde{N}_{r} \vert ^{T} \end{pmatrix}. $$

Applying the Schur complement theorem twice yields

$$\widetilde{\varPhi}_{r}= \begin{pmatrix} \varPhi_{r} & \mathcal{M} & \mathfrak{E}^{T} \\ \ast & -I & 0 \\ \ast & \ast & -I \end{pmatrix}< 0 \quad\Longleftrightarrow\quad \varPhi_{r}+\mathcal{M}\mathcal{M}^{T}+\mathfrak{E}^{T}\mathfrak{E}< 0. $$

Moreover, Lemma 2.1 yields

$$\begin{aligned} \varTheta_{r}=\varPhi_{r}+\Delta\varPhi_{r} \leqslant \varPhi_{r}+\mathcal{M}\mathcal{M}^{T}+\mathfrak{E}^{T} \mathfrak{E}< 0. \end{aligned}$$
(3.19)

Combining (3.19) and (3.17) results in

$$\mathcal{L}\mathbb{V}(t,r) \leqslant\chi^{T}\varTheta_{r}\chi \leqslant0. $$

It follows by the standard Lyapunov functional theory that the null solution of system (3.15), and hence the equilibrium point of system (2.1), is globally robustly asymptotically stochastically stable. Thus conclusion (c) is proved. □
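In practice, condition (3.6) can be tested numerically once the network data are fixed: assemble \(\widetilde{\varPhi}_{r}\) for every mode \(r\in S\) (e.g., with numpy.block) and check negative definiteness via the largest eigenvalue. A small helper sketch (illustrative only; the block assembly is left to the data at hand):

```python
import numpy as np

# Helper for testing the matrix inequality (3.6): condition (3.6) holds
# iff every assembled Phi_tilde_r is negative definite.
def is_negative_definite(M):
    Msym = (M + M.T) / 2.0        # fill in the "*" (symmetric) entries
    return np.linalg.eigvalsh(Msym).max() < 0

def check_condition_3_6(Phi_list):
    # Phi_list: one assembled Phi_tilde_r (NumPy array) per mode r in S.
    return all(is_negative_definite(Phi) for Phi in Phi_list)
```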

Remark 3

This is the first time that the boundedness of p-Laplacian reaction–diffusion high-order neural networks has been investigated and a robust stability criterion has been derived for such complex systems. In the case \(p=2\), we can further derive a better corollary.

4 Applications and analysis

In the case of \(p=2\), system (2.1) reduces to the common reaction–diffusion Cohen–Grossberg neural networks (2.12). Applying Theorem 3.3 and Lemma 2.3 to the Sobolev space \(W_{0}^{1,2}(\varOmega)\) results in the following corollary.

Corollary 4.1

Assume that

$$\bigl(\mathbb{B}- \bigl( \vert W_{r} \vert + \vert E_{r} \vert \vert N_{r} \vert \bigr)F_{2} - \bigl( \vert T_{r} \vert + \vert E_{r} \vert \vert N_{r0} \vert \bigr) G_{2}-\bigl(\varGamma \vert \widetilde{T}_{r} \vert + \vert E_{r} \vert \vert \widetilde{N}_{r} \vert \bigr) G_{2} \bigr)^{-1} \succcurlyeq0,\quad r\in S. $$

Then:

  1. (a)

    System (2.12) or System (2.13) has a unique equilibrium point.

  2. (b)

    All the solutions of system (2.12) and system (2.13) are bounded.

    Suppose, in addition, that there are positive definite diagonal matrices \(P_{r}\) \((r\in S)\), \(K_{0}\), and \(K_{1}\) such that the following condition holds for all \(r\in S\):

    $$\begin{pmatrix} \overline{\mathcal{A}}_{r} & 0 & P_{r}\overline{A} \vert W_{r} \vert +K_{0}(F_{1}+F_{2}) & P_{r}\overline{A}( \vert T_{r} \vert +\varGamma \vert \widetilde{T}_{r} \vert ) & P_{r}\overline{A} \vert E_{r} \vert & 0 \\ \ast & -2G_{1}K_{1}G_{2} & 0 & K_{1}(G_{1}+G_{2}) & 0 & 0 \\ \ast & \ast & -2K_{0} & 0 & 0 & \vert N_{r} \vert ^{T} \\ \ast & \ast & \ast & -2K_{1} & 0 & \vert N_{r0} \vert ^{T}+ \vert \widetilde{N}_{r} \vert ^{T} \\ \ast & \ast & \ast & \ast & -I & 0 \\ \ast & \ast & \ast & \ast & \ast & -I \end{pmatrix}< 0$$

    with \(\overline{\mathcal{A}}_{r}=\mathcal{A}_{r}-2F_{1}K_{0}F_{2}-2\lambda_{1}DP_{r}\). Then:

  3. (c)

    The unique equilibrium point of system (2.12) or system (2.13) is globally robustly asymptotically stochastically stable, where \(\lambda_{1}\) is the smallest positive eigenvalue of the Neumann boundary problem (2.15).

Remark 4

Corollary 4.1 illustrates that the diffusion term plays a role in the stability criterion of high-order reaction–diffusion systems, whereas the influence of the diffusion term was ignored in [31, Thm. 1].

Remark 5

The Weber theorem on one-variable quadratic equations was flexibly applied in the LMI approach to the robust stability criterion for reaction–diffusion neural networks. To the best of our knowledge, little of the literature related to reaction–diffusion neural networks involves such a technique.

Remark 6

This is the first time that both a boundedness result and a robust stability criterion for reaction–diffusion high-order neural networks have been derived.

If the stochastic factor, the input variable, and the parameter uncertainty are neglected, then (2.12) becomes a deterministic system. Furthermore, letting \(a_{i}(s)\equiv1\), \(b_{i}(s)=\bar{b}_{i}s\), and \(T_{ijl}(t)=0\), we reduce system (2.12) to the following reaction–diffusion cellular neural network:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{i}(t,x) }{\partial t} = \sum_{k=1}^{n} \frac{\partial }{\partial x_{k}} (D_{i}\frac{\partial u_{i}(t,x)}{\partial x_{k}} ) - [\bar{b}_{i} u_{i}(t,x) -\sum_{j=1}^{n}W_{ij} f_{j}(u_{j}(t,x))\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{n}T_{ij} f_{j}(u_{j}(t-\tau(t),x)) ],\quad i=1,2,\ldots,n,\\ \frac{\partial u_{i}(t,x)}{\partial x_{j}}=0,\quad i,j=1,2,\ldots,n, (t,x)\in [0,+\infty)\times\partial\varOmega, \\ u_{i}(s,x)=\xi_{i}(s,x), x\in\varOmega,\quad -\tau\leqslant s\leqslant0, \tau (t)\in[0,\tau]. \end{cases}\displaystyle \end{aligned}$$
(4.1)
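Before analyzing (4.1), it may help to see how such a delayed reaction–diffusion system can be simulated. The following Matlab sketch uses a method-of-lines discretization in one space dimension, with mirrored ghost points enforcing the zero-flux (Neumann) boundary condition and a constant delay; all parameter values and the activation f are hypothetical illustrations, not the data of this paper.

```matlab
% Illustrative method-of-lines simulation of a system of form (4.1) in one
% space dimension; hypothetical parameters, constant delay tau.
n = 2; Nx = 21; h = 1/(Nx-1); dt = 1e-3; Tend = 5; tau = 0.5;
D = [0.016; 0.019]; bbar = [2; 1.9];            % diffusion and decay rates
W = [0.1 0.01; 0.01 0.1]; T = [0.05 0.01; 0 0.05];
f = @(u) tanh(u);                               % Lipschitz activation, f(0)=0
lag = round(tau/dt);                            % delay measured in time steps
x = linspace(0, 1, Nx);
u = repmat(cos(pi*x), n, 1);                    % initial state, n-by-Nx
buf = repmat(u, [1 1 lag]);                     % constant initial history
for k = 1:round(Tend/dt)
    ud  = buf(:,:,1);                           % delayed state u(t-tau, x)
    lap = ([u(:,2:end) u(:,end)] - 2*u + [u(:,1) u(:,1:end-1)]) / h^2;
    u   = u + dt*( D.*lap - (bbar.*u - W*f(u) - T*f(ud)) );
    buf = cat(3, buf(:,:,2:end), u);            % advance the delay buffer
end
max(abs(u(:)))                                  % sup norm of the state at Tend
```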

Definition 4.2

For \(T\in(0,\infty]\), a function \(u=(u_{1}(t,x),u_{2}(t,x),\ldots,u_{n}(t,x))\) on \([0,T]\times\varOmega\) is called a mild solution of (4.1) if for any \(i\in\mathcal{N}\triangleq \{1,2,\ldots,n\}\), \(u_{i}(t,\cdot)\in\mathcal{C}([0,T]; L^{2}(\varOmega))\) and the following integral equations hold for \(t\in[0,T]\) and \(x\in\varOmega\):

$$\begin{aligned} u_{i}(t,x)={}&e^{D_{i}t\Delta}\xi_{i}(0,x)- \int_{0}^{t}e^{D_{i}(t-\theta)\Delta} \Biggl[ \bar{b}_{i}u_{i}(\theta,x)- \sum _{j=1}^{n}W_{ij}f_{j} \bigl(u_{j}(\theta,x)\bigr) \\ &{}-\sum_{j=1}^{n}T_{ij}f_{j} \bigl(u_{j}\bigl(\theta-\tau(\theta),x\bigr)\bigr) \Biggr]\,d\theta, \end{aligned}$$

and

$$\begin{aligned} &u_{i}(s,x)=\xi_{i}(s,x) \quad\forall(s,x)\in[-\tau,0]\times \varOmega, \\ & \partial_{\nu}u_{i}(t,x)=0\quad \forall(t,x)\in[0, +\infty) \times\partial \varOmega. \end{aligned}$$

Moreover, if the diffusion phenomenon is ignored, (4.1)  degenerates into the following cellular neural network:

$$\begin{aligned} \textstyle\begin{cases} \frac{dx_{i}(t)}{dt} = - \bar{b}_{i}x_{i}(t)+\sum_{j=1}^{n}W_{ij}f_{j}(x_{j}(t))+\sum_{j=1}^{n}T_{ij}f_{j}(x_{j}(t-\tau(t))),\quad t\geqslant0, i\in\mathcal{N},\\ x_{i}(s)=\xi_{i}(s),\quad s\in[-\tau,0]. \end{cases}\displaystyle \end{aligned}$$
(4.2)
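Although the stability of (4.2) is established analytically below (Theorem 4.3), a direct simulation may help build intuition. The following Matlab sketch integrates (4.2) by explicit Euler with a constant delay and crudely estimates the exponential decay rate; all parameter values and the activation f are hypothetical.

```matlab
% Sketch illustrating the behavior of the delayed ODE (4.2): explicit Euler
% plus a rough estimate of the exponential decay rate; hypothetical data.
n = 2; dt = 1e-3; tau = 0.5; lag = round(tau/dt); K = 20000;
bbar = [2; 1.9]; W = [0.1 0.01; 0.01 0.1]; T = [0.05 0; 0 0.05];
f = @(v) tanh(v);                          % Lipschitz with f(0) = 0
X = zeros(n, lag + K); X(:, 1:lag) = 1;    % constant initial history
for k = lag:lag+K-1
    xd = X(:, k-lag+1);                    % x(t - tau)
    X(:,k+1) = X(:,k) + dt*( -bbar.*X(:,k) + W*f(X(:,k)) + T*f(xd) );
end
nrm  = vecnorm(X(:, lag:lag+K-1));         % |x(t)| along the trajectory
rate = polyfit(dt*(0:K-1), log(nrm), 1);   % slope of log|x(t)| versus t
fprintf('estimated decay exponent: %.3f\n', rate(1));  % negative if stable
```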

For system (4.2), we obtain the following concise conclusion in the sense of Definition 4.2.

Theorem 4.3

If \(f_{i}\) is Lipschitz continuous with Lipschitz constants \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\), then system (4.2) is globally exponentially mean-square stable.

To prove Theorem 4.3, we utilize [6, Thm. 5] to derive the following lemma.

Lemma 4.4

Let \(f_{i}\) and \(\sigma_{i}\) be Lipschitz continuous with Lipschitz constants \(L_{i}>0\) and \(T_{i}>0\) for \(i\in\mathcal{N}\), respectively. Let, in addition, \(f_{i}(0)=0=\sigma _{i}(0)\) for \(i\in\mathcal{N}\). Then the following time-delay stochastic differential equations are globally exponentially mean-square stable:

$$\begin{aligned} \textstyle\begin{cases} dx_{i}(t) = - [a_{i}x_{i}(t)- \sum_{j=1}^{n}b_{ij}f_{j}(x_{j}(t))-\sum_{j=1}^{n}c_{ij}f_{j}(x_{j}(t-\tau(t)))\\ \phantom{dx_{i}(t) =}{}-\sum_{j=1}^{n} h_{ij}\int_{t-\rho(t)}^{t}f_{j}(x_{j}(s))\,ds ]\,dt +\sigma _{i}(x_{i}(t)) \,dw_{i}(t), \quad t\geqslant0, i\in\mathcal{N},\\ x_{i}(s)=\zeta_{i}(s),\quad s\in[-\tau,0]. \end{cases}\displaystyle \end{aligned}$$
(4.3)

Proof

Rao and Zhong [6] utilized the Banach fixed point theorem, the Hölder inequality, the Burkholder–Davis–Gundy inequality, and the continuous semigroup of the Laplace operator to derive the global exponential stability in mean square of the following impulsive stochastic reaction–diffusion cellular neural network with distributed delay:

$$\begin{aligned} \textstyle\begin{cases} du_{i}(t,x) = q_{i} \operatorname{div} \nabla u_{i}(t,x)\,dt\\ \phantom{du_{i}(t,x) = }{}- [a_{i}u_{i}(t,x)- \sum_{j=1}^{n}b_{ij}f_{j}(u_{j}(t,x))-\sum_{j=1}^{n}c_{ij}f_{j}(u_{j}(t-\tau(t),x))\\ \phantom{du_{i}(t,x) = }{}-\sum_{j=1}^{n} h_{ij}\int_{t-\rho(t)}^{t}f_{j}(u_{j}(s,x))\,ds ]\,dt +\sigma _{i}(u_{i}(t,x)) \,dw_{i}(t),\\ \quad t\neq t_{k}, x\in\varUpsilon, i\in\mathcal{N},\\ u(t_{k}^{+},x)=u(t_{k}^{-},x)+g(u(t_{k},x)),\quad x\in\varUpsilon, k=1,2,\ldots,\\ u_{i}(s,x)=\zeta_{i}(s,x), \quad (s,x)\in[-\tau,0]\times\varUpsilon,\\ u(t,x)=0, \quad (t,x)\in[0, +\infty)\times\partial\varUpsilon. \end{cases}\displaystyle \end{aligned}$$
(4.4)

Letting \(q_{i}=0\) in [6, Thm. 5] removes the diffusion term. Furthermore, if the impulse phenomenon is neglected, then the partial differential equations (4.4) degenerate into the ordinary differential equations (4.3). In [6, (H1)] with \(q_{i}=0\), the constant \(\gamma>0\) can be taken arbitrarily large if Ω is selected suitably. In [6, (6)] we have \(G_{i}=0\) (the impulse phenomenon being ignored), so by [6, (6)] we get

$$\kappa\triangleq6M^{2} \Biggl[\frac{1}{\gamma^{2}} \Bigl(\max_{i\in\mathcal{N}} a_{i}^{2}\Bigr)+n \frac{1}{\gamma^{2}}\max _{i\in\mathcal{N}} \Biggl(\sum_{j=1}^{n} \bigl( \vert b_{ij} \vert ^{2}+ \vert c_{ij} \vert ^{2}\bigr)L_{j}^{2} \Biggr)+ \frac{n\tau^{2}}{\gamma^{2}} + \frac{2}{\gamma} \Bigl(\max_{i\in\mathcal {N}}T_{i}^{2} \Bigr) \Biggr]. $$

Obviously, \(\kappa\in(0,1)\) if γ is large enough. Due to [6, Thm. 5], we complete the proof. □
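To see concretely how κ behaves, the following Matlab sketch evaluates the expression above for hypothetical constants; since each bracketed term carries at least one factor \(1/\gamma\), κ decreases to 0 as γ grows.

```matlab
% Sketch: evaluate the contraction constant kappa of Lemma 4.4 for
% hypothetical data (here sigma_i = 0, i.e., T_i = 0, as in Theorem 4.3).
M = 1; gamma = 10; n = 2; tau = 0.5;
a = [2; 1.9]; L = [0.2; 0.2]; Tsig = [0; 0];
b = [0.1 0.01; 0.01 0.1]; c = [0.05 0; 0 0.05];
rowmax = max((abs(b).^2 + abs(c).^2) * (L.^2));   % max_i sum_j (...)L_j^2
kappa  = 6*M^2*( max(a.^2)/gamma^2 + n*rowmax/gamma^2 ...
               + n*tau^2/gamma^2 + 2*max(Tsig.^2)/gamma )
% every term carries at least a factor 1/gamma, so kappa < 1 for large gamma
```

With these placeholder values the sketch returns kappa ≈ 0.27 < 1, consistent with the argument above.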

Proof of Theorem 4.3

Now, letting \(h_{ij}=0\) and \(\sigma_{i}(\cdot)=0\) in Lemma 4.4, we can deduce Theorem 4.3 immediately. □

Next, we discuss the boundedness of all the mild solutions of system (4.1). We further assume that the initial value \(\xi_{i}(s,x)\) is bounded for all \((s,x)\in[-\tau,0]\times\varOmega\).

Definition 4.5

Model (4.1) is said to be uniformly bounded in \(L^{\infty}\) if for any given \(\tau_{1}>0\) there exists a constant \(C>0\) such that, for all \(t\in[\tau_{1},T]\) with \(T\in(0,\infty)\), we have

$$\bigl\Vert u_{i}(t,\cdot) \bigr\Vert _{L^{\infty}(\varOmega)}\leqslant C\quad \forall i\in\mathcal{N}. $$

Lemma 4.6

([48, 49])

Let \(\varOmega\subset \mathbb{R}^{N}(N\in\mathbb{N})\) be a bounded domain with smooth boundary, and let Δ denote the Laplacian in \(L^{s}(\varOmega)\) with domain

$$\bigl\{ z\in W^{2,s}(\varOmega) : \nabla z\cdot\nu=0 \textit{ on } \partial\varOmega \bigr\} $$

for \(s\in(1,\infty)\). Then the operator \(-\Delta+1\) is sectorial and possesses closed fractional powers \((-\Delta+1)^{\eta}\), \(\eta\in(0, 1)\), with dense domain \(D((-\Delta+1)^{\eta})\). Moreover, the following properties hold.

(i) If \(m\in\{0,1\}\), \(p\in[1,\infty]\), and \(q\in(1,\infty)\) with \(m-\frac{N}{p}<2\eta-\frac{N}{q}\), then there exists a constant \(C_{1}>0\) such that, for all \(z\in D((-\Delta+1)^{\eta})\),

$$\Vert z \Vert _{W^{m,p}(\varOmega)}\leqslant C_{1} \bigl\Vert (- \Delta+1)^{\eta}z \bigr\Vert _{L^{q}(\varOmega)}. $$

(ii) Suppose \(p\in[1,\infty)\). Then the associated heat semigroup \((e^{t\Delta})_{t\geqslant0}\) maps \(L^{p}(\varOmega)\) into \(D((-\Delta +1)^{\eta})\), and there exist constants \(C_{2}>0\) and \(\lambda_{2}>0\) such that

$$\bigl\Vert (-\Delta+1)^{\eta}e^{t(\Delta-1)}z \bigr\Vert _{L^{p}(\varOmega)}\leqslant C_{2}t^{-\eta }e^{-\lambda_{2}t} \Vert z \Vert _{L^{p}(\varOmega)} $$

for all \(z\in L^{p}(\varOmega)\) and \(t>0\).


Theorem 4.7

If \(f_{i}\) is Lipschitz continuous with Lipschitz constant \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\) and \(\|e^{D_{i}t\Delta}\| \leqslant Me^{-\gamma t}\), where \(M>0\) and \(\gamma>0\) are constants, then model (4.1) is uniformly bounded in \(L^{\infty}\).

Proof

Employing the variation-of-constants formula for \(u_{i}\), we derive that, for any \(\tau_{1}>0\),

$$\begin{aligned} u_{i}(t,x)={}&e^{D_{i}t(\Delta-1)}u_{i}(\tau_{1},x)- \int_{\tau _{1}}^{t}e^{D_{i}(t-s)(\Delta-1)} \Biggl\{ \Biggl[ \bar{b}_{i}u_{i}(s,x)- \sum_{j=1}^{n}W_{ij}f_{j} \bigl(u_{j}(s,x)\bigr) \\ &{}-\sum_{j=1}^{n}T_{ij}f_{j} \bigl(u_{j}\bigl(s-\tau(s),x\bigr)\bigr) \Biggr]-D_{i}u_{i}(s,x) \Biggr\} \,ds,\quad t\geqslant\tau_{1}. \end{aligned}$$
(4.5)

Letting \(q=2\) and \(\eta\in(\frac{1}{2},\frac{2}{3})\) in Lemma 4.6, we see that

$$\begin{aligned} \bigl\Vert e^{D_{i}t(\Delta-1)}u_{i}(\tau_{1},\cdot) \bigr\Vert _{L^{\infty}(\varOmega )}\leqslant{}& \bigl(C_{1} \bigl\Vert (- \Delta+1)^{\eta}e^{D_{i}t(\Delta-1)}u_{i}(\tau _{1}, \cdot) \bigr\Vert _{L^{2}(\varOmega)} \bigr) \\ \leqslant{}& \bigl(C_{1}C_{2}t^{-\eta}e^{-D_{i}\lambda_{2} t} \bigl\Vert u_{i}(\tau_{1},\cdot) \bigr\Vert _{L^{2}(\varOmega)} \bigr)\leqslant C_{3}. \end{aligned}$$
(4.6)

Here \(\lambda_{2}>0\) is the first positive eigenvalue of the Neumann boundary problem

$$\textstyle\begin{cases} (-\Delta+1) \varphi=\lambda\varphi \quad\text{in } \varOmega,\\ \partial_{\nu}\varphi=0 \quad\text{on } \partial\varOmega, \end{cases} $$

where \(\partial_{\nu}\) denotes differentiation with respect to the outward normal of ∂Ω.

In view of Lemma 4.6, we can similarly derive

$$\begin{aligned} & \Biggl\Vert \int_{\tau_{1}}^{t}e^{D_{i}(t-s)(\Delta-1)} \sum _{j=1}^{n}T_{ij}f_{j} \bigl(u_{j}\bigl(s-\tau(s),x\bigr)\bigr) \,ds \Biggr\Vert _{L^{\infty}(\varOmega)} \\ &\quad \leqslant \sum_{j=1}^{n}C_{1}C_{2} \vert T_{ij} \vert L_{j} \int_{\tau_{1}}^{t} (t-s)^{-\eta } e^{-D_{i}\lambda_{2}(t-s)} \bigl\Vert u_{j}\bigl(s-\tau(s),x\bigr) \bigr\Vert _{L^{2}(\varOmega)} \,ds \leqslant C_{4}. \end{aligned}$$

Similarly, we can utilize the triangle inequality to prove

$$\begin{aligned} & \Biggl\Vert \int_{\tau_{1}}^{t}e^{D_{i}(t-s)(\Delta-1)} \Biggl\{ \Biggl[\bar {b}_{i}u_{i}(s,x)- \sum_{j=1}^{n}W_{ij}f_{j} \bigl(u_{j}(s,x)\bigr) \\ &\quad -\sum_{j=1}^{n}T_{ij}f_{j} \bigl(u_{j}\bigl(s-\tau(s),x\bigr)\bigr) \Biggr]-D_{i}u_{i}(s,x) \Biggr\} \,ds \Biggr\Vert _{L^{\infty}(\varOmega)}\leqslant C_{5}. \end{aligned}$$
(4.7)

Combining (4.5)–(4.7) results in

$$\bigl\Vert u_{i}(t,\cdot) \bigr\Vert _{L^{\infty}(\varOmega)}\leqslant C,\quad \forall i\in\mathcal{N}, $$

which completes the proof. □

By employing a method similar to that of the proof of Lemma 4.4, we get the following corollary from Theorem 4.7.

Corollary 4.8

If \(f_{i}\) is Lipschitz continuous with Lipschitz constant \(L_{i}>0\) and \(f_{i}(0)=0\) for all \(i=1,2,\ldots,n\), then model (4.2) is uniformly bounded in \(L^{\infty}\).

5 Numerical example

Example 5.1

Consider system (2.1) or (2.6) with the following data. Let \(n=2\), \(S=\{1,2\}\), and rewrite system (2.1) as follows:

$$\begin{aligned} \textstyle\begin{cases} \frac{\partial u_{i}(t,x) }{\partial t} = \sum_{k=1}^{2} \frac{\partial }{\partial x_{k}} (D_{i} \vert \nabla u_{i}(t,x) \vert ^{p-2}\frac{\partial u_{i}(t,x)}{\partial x_{k}} )\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-a_{i}(u_{i}(t,x)) [b_{i}(u_{i}(t,x))-\sum_{j=1}^{2}W_{rij}(t)f_{j}(u_{j}(t,x))\\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{2}T_{rij}(t)g_{j}(u_{j}(t-\tau(t),x)) \\ \phantom{\frac{\partial u_{i}(t,x) }{\partial t} =}{}-\sum_{j=1}^{2}\sum_{l=1}^{2}T_{rijl}(t)g_{j}(u_{j}(t-\tau (t),x))g_{l}(u_{l}(t-\tau(t),x))+\alpha_{i} ],\\ \quad \forall r\in S=\{1,2\}, i=1,2,\\ \frac{\partial u_{i}(t,x)}{\partial x_{j}}=0,\quad i,j=1,2, (t,x)\in[0,+\infty )\times\partial\varOmega, \\ u_{i}(s,x)=\xi_{i}(s,x)=\cos^{200}(s^{2}+x^{3})+\sin(sx^{3}),\\ \quad x\in\varOmega, -\tau \leqslant s\leqslant0, \tau(t)\in[0,\tau], i=1,2, \end{cases}\displaystyle \end{aligned}$$
(5.1)

where \(\varOmega=[0,1]\times[0,1]\subset R^{2}\), \(p=2.116\), and

$$\begin{aligned} &a_{1}(s)=0.8+0.05\bigl(1+\cos^{2}s\bigr),\qquad a_{2}(s)=0.8+0.08\bigl(1+\cos^{2}s^{2}\bigr),\quad s\in R; \\ &b_{1}(s)=3s+\sin s, \qquad b_{2}(s)=2.9s-\sin s, \quad s\in R; \\ &f_{1}(s)=0.2s+0.01\sin s,\qquad f_{2}(s)=0.2s+\sin(0.01 s), \quad s\in R; \\ &g_{1}(s)=0.11\sin s,\qquad g_{2}(s)=\sin(0.12 s),\quad s\in R. \end{aligned}$$

Remark 7

Here, we only verify that \(b_{1}(\cdot)\) satisfies (H2). Other functions can be similarly verified to satisfy the corresponding conditions. Obviously, \(b_{1}(0)=0\), and the Lagrange mean value theorem yields

$$b_{1}(s)-b_{1}(r)=(s-r) (3+\cos\eta)\Rightarrow \frac {b_{1}(s)-b_{1}(r)}{s-r}=3+\cos\eta\geqslant2,\quad s,r\in R, s\neq r. $$

This verifies that \(b_{1}(\cdot)\) satisfies condition (H2).
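As a quick numerical complement to this verification, one can sample difference quotients of \(b_{1}\) at random points; the following Matlab sketch is illustrative only.

```matlab
% Illustrative spot-check of Remark 7: difference quotients of
% b_1(s) = 3s + sin(s) stay at or above the lower slope 2.
b1 = @(s) 3*s + sin(s);
s = -50 + 100*rand(1, 1e5);  r = -50 + 100*rand(1, 1e5);
keep = abs(s - r) > 1e-8;                      % avoid s = r
q = (b1(s(keep)) - b1(r(keep))) ./ (s(keep) - r(keep));
fprintf('smallest sampled quotient: %.4f\n', min(q));  % expected >= 2
```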

Next, we propose the following data for system (2.1) or (2.6):

$$\begin{aligned} &\mathbb{B}= \begin{pmatrix} 2 & 0 \\ 0 & 1.9 \end{pmatrix},\qquad F_{1}= \begin{pmatrix} 0.13 & 0 \\ 0 & 0.15 \end{pmatrix},\qquad F_{2}= \begin{pmatrix} 0.21 & 0 \\ 0 & 0.22 \end{pmatrix},\\ &G_{1}= \begin{pmatrix} 0.131 & 0 \\ 0 & 0.151 \end{pmatrix},\qquad G_{2}= \begin{pmatrix} 0.18 & 0 \\ 0 & 0.19 \end{pmatrix},\qquad D= \begin{pmatrix} 0.016 & 0 \\ 0 & 0.019 \end{pmatrix},\\ &T_{1}= \begin{pmatrix} 0.11 & 0.01 \\ 0 & 0.119 \end{pmatrix},\qquad T_{2}= \begin{pmatrix} 0.101 & 0.01 \\ 0.01 & 0.119 \end{pmatrix},\qquad M= \begin{pmatrix} 0.11 & 0.12 \end{pmatrix},\\ &\varGamma= \begin{pmatrix} 0.11 & 0.12 & 0 & 0 \\ 0 & 0 & 0.11 & 0.12 \end{pmatrix},\qquad T_{11}= \begin{pmatrix} 0.11 & 0.01 \\ 0.12 & 0.12 \end{pmatrix},\qquad T_{12}= \begin{pmatrix} 0.12 & 0.011 \\ 0.13 & 0.112 \end{pmatrix},\\ &\widetilde{T}_{1}= \begin{pmatrix} 0.22 & 0.013 \\ 0.13 & 0.24 \\ 0.24 & 0.141 \\ 0.141 & 0.224 \end{pmatrix},\qquad T_{21}= \begin{pmatrix} 0.117 & 0.011 \\ 0.12 & 0.121 \end{pmatrix},\qquad T_{22}= \begin{pmatrix} 0.112 & 0.011 \\ 0.113 & 0.118 \end{pmatrix},\\ &\widetilde{T}_{2}= \begin{pmatrix} 0.234 & 0.131 \\ 0.131 & 0.242 \\ 0.224 & 0.124 \\ 0.124 & 0.236 \end{pmatrix},\qquad W_{1}= \begin{pmatrix} 0.101 & 0.011 \\ 0.0112 & 0.103 \end{pmatrix},\qquad W_{2}= \begin{pmatrix} 0.133 & 0.021 \\ 0.0112 & 0.11265 \end{pmatrix},\\ &E_{1}= \begin{pmatrix} 0.118 & 0.011 \\ 0.012 & 0.111 \end{pmatrix},\qquad E_{2}= \begin{pmatrix} 0.113 & 0.01 \\ 0.02 & 0.115 \end{pmatrix},\qquad N_{1}= \begin{pmatrix} 0.111 & 0.0011 \\ 0 & 0.1002 \end{pmatrix},\\ &N_{2}= \begin{pmatrix} 0.103 & 0.011 \\ 0.012 & 0.11795 \end{pmatrix},\qquad \widetilde{N}_{1}= \begin{pmatrix} 0.117 & 0.012 \\ 0.011 & 0.1101 \end{pmatrix},\qquad \widetilde{N}_{2}= \begin{pmatrix} 0.103 & 0.001 \\ 0.012 & 0.1215 \end{pmatrix},\\ &N_{10}= \begin{pmatrix} 0.1011 & 0.001 \\ 0.01 & 0.1003 \end{pmatrix},\qquad N_{20}= \begin{pmatrix} 0.1143 & 0.011 \\ 0.012 & 0.11555 \end{pmatrix},\\ &\underline{A}= \begin{pmatrix} 0.833 & 0 \\ 0 & 0.8101 \end{pmatrix},\qquad \overline{A}= \begin{pmatrix} 1.001 & 0 \\ 0 & 0.8993 \end{pmatrix},\\ &\varPi= \begin{pmatrix} \gamma_{11} & \gamma_{12} \\ \gamma_{21} & \gamma_{22} \end{pmatrix}= \begin{pmatrix} -0.3896 & 0.3896 \\ 0.5788 & -0.5788 \end{pmatrix}. \end{aligned}$$

Let \(\alpha_{i}=\frac{1}{2^{i+1}}\) and \(\xi_{i}(s,x)= \cos(s^{2}+x^{2}+i)\) for \(i=1,2\), and let \(\tau(t)=9.878\cos^{2}t\), so that \(\tau=9.878\). Now we can compute by Matlab that

$$\bigl(\mathbb{B}- \bigl( \vert W_{1} \vert + \vert E_{1} \vert \vert N_{1} \vert \bigr)F_{2}- \bigl( \vert T_{1} \vert + \vert E_{1} \vert \vert N_{10} \vert \bigr)G_{2}-\bigl(\varGamma \vert \widetilde{T}_{1} \vert + \vert E_{1} \vert \vert \widetilde{N}_{1} \vert \bigr)G_{2} \bigr)^{-1} = \begin{pmatrix} 1.9444 & -0.0111 \\ -0.0113 & 1.8398 \end{pmatrix}^{-1} = \begin{pmatrix} 0.5143 & 0.0031 \\ 0.0032 & 0.5436 \end{pmatrix}\succcurlyeq0 $$

and

$$\bigl(\mathbb{B}- \bigl( \vert W_{2} \vert + \vert E_{2} \vert \vert N_{2} \vert \bigr)F_{2}- \bigl( \vert T_{2} \vert + \vert E_{2} \vert \vert N_{20} \vert \bigr)G_{2}-\bigl(\varGamma \vert \widetilde{T}_{2} \vert + \vert E_{2} \vert \vert \widetilde{N}_{2} \vert \bigr)G_{2} \bigr)^{-1} = \begin{pmatrix} 1.9395 & -0.0160 \\ -0.0133 & 1.8364 \end{pmatrix}^{-1} = \begin{pmatrix} 0.5156 & 0.0045 \\ 0.0037 & 0.5446 \end{pmatrix}\succcurlyeq0, $$

so that (3.5) is satisfied.
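For reproducibility, this computation can be repeated in a few lines of plain Matlab; the following sketch checks mode \(r=1\) with exactly the data listed above, and mode \(r=2\) is analogous.

```matlab
% Reproduce the verification of (3.5) for mode r = 1 with the data of
% Example 5.1 (mode r = 2 is checked analogously).
B   = diag([2 1.9]);   F2 = diag([0.21 0.22]);   G2 = diag([0.18 0.19]);
W1  = [0.101 0.011; 0.0112 0.103];   T1  = [0.11 0.01; 0 0.119];
E1  = [0.118 0.011; 0.012 0.111];    N1  = [0.111 0.0011; 0 0.1002];
N10 = [0.1011 0.001; 0.01 0.1003];   Nt1 = [0.117 0.012; 0.011 0.1101];
Gam = [0.11 0.12 0 0; 0 0 0.11 0.12];
Tt1 = [0.22 0.013; 0.13 0.24; 0.24 0.141; 0.141 0.224];
M1  = B - (abs(W1) + abs(E1)*abs(N1))*F2 ...
        - (abs(T1) + abs(E1)*abs(N10))*G2 ...
        - (Gam*abs(Tt1) + abs(E1)*abs(Nt1))*G2;
inv(M1)                           % approx [0.5143 0.0031; 0.0032 0.5436]
all(all(inv(M1) >= 0))            % entrywise nonnegative, so (3.5) holds
```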

Moreover, running the Matlab LMI toolbox on LMI condition (3.6) results in

$$P_{1}= \begin{pmatrix} 2.4653 & 0 \\ 0 & 1.5081 \end{pmatrix},\qquad P_{2}= \begin{pmatrix} 11.5294 & 0 \\ 0 & 7.4705 \end{pmatrix},\qquad K_{0}= \begin{pmatrix} 522.2023 & 0 \\ 0 & 455.3373 \end{pmatrix},\qquad K_{1}= \begin{pmatrix} 0.0479 & 0 \\ 0 & 0.0422 \end{pmatrix}. $$

Therefore, according to Theorem 3.3, there exists a unique equilibrium point for system (5.1), which is globally robustly asymptotically stochastically stable, and all the solutions of system (5.1) are bounded (see Figs. 1 and 2).

Figure 1. Computer simulations of the state \(u_{1}(t, x)\)

Figure 2. Computer simulations of the state \(u_{2}(t, x)\)

Remark 8

In [31, Thm. 1], the equilibrium point of system (1.1) is globally exponentially stable in the norm \(\|\cdot\|_{2}\) in mean square for time-varying delays \(\tau(t)\) satisfying \(\dot{\tau}(t)\leqslant\eta<1\), whereas this condition is not necessary in our Theorem 3.3; for instance, in Example 5.1 we take \(\tau(t)=9.878\cos^{2}t\), whose derivative is not bounded by 1. Owing to the employment of our Lemma 3.2, we obtain both robust stability criteria and boundedness results in Theorem 3.3 and Corollary 4.1, whereas no such results appear in [31, Thm. 1].

In [50], the stability of periodic solutions for reaction–diffusion high-order Hopfield neural networks with time-varying delays was derived, which partly inspired the present work.

Remark 9

In comparison with [50, Thms. 3.1–3.3], our Theorem 3.3 and Corollary 4.1 give criteria in LMI form, which can be checked directly with the Matlab LMI toolbox; hence Theorem 3.3 and Corollary 4.1 are more convenient in practice than [50, Thms. 3.1–3.3]. In addition, boundedness is not considered in [50], whereas in this paper we present boundedness results.

6 Conclusions and further considerations

To the best of our knowledge, this is the first time that the boundedness of p-Laplacian reaction–diffusion Markovian jump high-order neural networks has been obtained. Moreover, the given robust stability criteria are formulated as LMIs and thus can be checked with the Matlab LMI toolbox, which makes them suitable for computations in practical complex engineering. Finally, a numerical example demonstrates the effectiveness of the proposed methods.

Under the Lipschitz condition on the activation functions, Theorem 4.3 and Corollary 4.8 present stability and boundedness results for system (4.2). So we would like to know whether the following system is bounded and stable under similarly concise conditions:

$$\textstyle\begin{cases} \frac{dx_{i}(t) }{d t} = -a_{i}(x_{i}(t)) [b_{i}(x_{i}(t))-\sum_{j=1}^{n}W_{ij}(t)f_{j}(x_{j}(t))-\sum_{j=1}^{n}T_{ij}(t)g_{j}(x_{j}(t-\tau(t)))\\ \phantom{\frac{dx_{i}(t) }{d t} =} {}-\sum_{j=1}^{n}\sum_{l=1}^{n}T_{ijl}(t)g_{j}(x_{j}(t-\tau (t)))g_{l}(x_{l}(t-\tau(t)))+\alpha_{i} ],\quad i=1,2,\ldots,n,\\ x_{i}(s)=\xi_{i}(s),\quad -\tau\leqslant s\leqslant0, \tau(t)\in [0,\tau]. \end{cases} $$

This is an interesting problem.