1 Introduction

Over the past two decades, fractional calculus has attracted considerable attention from researchers, and numerous works have been published in this context [1,2,3,4,5,6,7,8,9,10,11]. For instance, in [1] the phase dynamics of an inline Josephson junction in the voltage state is discussed; the phase difference between the wave functions is analyzed via fractional calculus, and a finite element scheme is used to simulate the governing equations. In addition, the authors of [5] replaced the integer first-order time derivative with the Caputo fractional derivative and investigated a numerical approach to chaotic pattern formation in a diffusive predator-prey system. As for novel methods, a new approach to the solution of fractional diffusion problems with the conformable derivative was elaborated by the authors of [8].

Fractional differential equations have recently proved to be valuable tools for modeling many phenomena in various application domains, such as biology [12,13,14], diffusion [8], control theory [15,16,17,18,19,20,21] and viscoelasticity [22]. Regarding biological applications, the authors of [12] modeled a fractional-order system for COVID-19 pandemic transmission. Regarding control theory, the authors of [17] presented a novel fractional sliding mode controller based on a nonlinear fractional-order Proportional Integral (PI) derivative controller. In addition, for systems with time-varying delays, a new Finite-Time Stability (FTS) analysis of singular fractional differential equations has been investigated. Furthermore, the finite-time stability of fuzzy neural networks is studied in [16].

For some basic results in the theory of fractional partial differential equations (FPDEs), the reader is referred to [23,24,25,26,27,28,29,30]. For example, the authors of [27] presented a Lyapunov-type inequality for the Darboux problem for FPDEs; this inequality is then used to study the existence of nontrivial solutions of FPDEs.

In the literature, the existence and uniqueness of solutions for Darboux fractional partial differential equations with time delay were investigated in [29]. Compared with that work, the existence and uniqueness of solutions is established here without any restriction on the Lipschitz constant. Furthermore, many works treat the FTS of time-delay fractional-order systems when the solution depends on one variable (see [15, 16, 21, 31]), whereas the present work treats the case where the solution depends on two variables.

Based on the above discussion, the contributions of this work are summarized as follows:

  • The existence and uniqueness of the global solution of Darboux fractional partial differential equations with time delay is proved.

  • An estimation of the solutions is given.

  • The FTS results of such systems are given and the theoretical contributions are validated by two numerical examples.

The paper is organized as follows. In Sect. 2, some basic results related to fractional calculus are given. In Sect. 3, the existence and uniqueness of the global solution, an estimation of the solutions, and the FTS results are investigated. In Sect. 4, a numerical scheme for the considered systems is described, and in Sect. 5 we present numerical examples which illustrate the efficiency of the main results.

2 Basic results

Definition 1

The Riemann–Liouville fractional integral of order \(\gamma =\left( \gamma _{1},\gamma _{2}\right) \) of w is defined by:

$$\begin{aligned} I_{a^{+}}^{\gamma }w\left( \xi ,\zeta \right) =\big [\Gamma \left( \gamma _{1}\right) \Gamma \left( \gamma _{2}\right) \big ]^{-1}\int _{a_{1}^{+}}^{\xi } \int _{a_{2}^{+}}^{\zeta }(\xi -s)^{\gamma _{1}-1}(\zeta -t)^{\gamma _{2}-1}w(s,t) dtds, \end{aligned}$$

where \(a=(a_1, a_2) \in \mathbb {R}^2\) and \(\gamma _{1}, \, \gamma _{2}\) are strictly positive.
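As an illustration (our own, not part of the original development), this mixed integral can be approximated by freezing w at the lower-left corner of each grid cell and integrating the singular kernel exactly over each cell, in the spirit of the product rule used later in Sect. 4. The Python sketch below assumes \(a=(0,0)\); the function name and grid size are working choices.

```python
import math

def rl_integral(g1, g2, xi, zeta, w, n=300):
    # Mixed Riemann-Liouville integral of Definition 1 with a = (0, 0):
    # w is frozen at the lower-left corner of each cell while the kernel
    # (xi - s)^(g1-1) (zeta - t)^(g2-1) is integrated exactly per cell.
    h1, h2 = xi / n, zeta / n
    total = 0.0
    for k in range(n):
        bk = ((n - k) ** g1 - (n - k - 1) ** g1) * h1 ** g1
        for l in range(n):
            cl = ((n - l) ** g2 - (n - l - 1) ** g2) * h2 ** g2
            total += bk * cl * w(k * h1, l * h2)
    # exact kernel integration contributes the extra factor 1/(g1*g2),
    # turning Gamma(g) into Gamma(g+1) in the prefactor
    return total / (math.gamma(g1 + 1) * math.gamma(g2 + 1))
```

For \(w\equiv 1\) the rule is exact, since the cell weights telescope to \(\xi ^{\gamma _1}\zeta ^{\gamma _2}\), reproducing \(I_{0^{+}}^{\gamma }1=\xi ^{\gamma _1}\zeta ^{\gamma _2}/\big (\Gamma (\gamma _1+1)\Gamma (\gamma _2+1)\big )\).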

Definition 2

The Riemann–Liouville fractional derivative of order \(\gamma =\left( \gamma _{1},\gamma _{2}\right) \) of w is defined by:

$$\begin{aligned} D_{a^{+}}^{\gamma }w\left( \xi ,\zeta \right)= & {} D_{\xi ,\zeta }^{2}I_{a^{+}}^{1-\gamma }w\left( \xi ,\zeta \right) , \\= & {} \big [\Gamma \left( 1-\gamma _{1}\right) \Gamma \left( 1-\gamma _{2}\right) \big ]^{-1}D_{\xi ,\zeta }^{2}\int _{a_{1}^{+}}^{\xi }\int _{a_{2}^{+}}^{\zeta } (\xi -s)^{-\gamma _{1}}(\zeta -t)^{-\gamma _{2}}w( s,t)\, dtds, \end{aligned}$$

where \(a=(a_1, a_2) \in \mathbb {R}^2\), \(\left( \gamma _{1},\gamma _{2}\right) \in \left( 0,1\right) ^{2}\) and \(D_{\xi ,\zeta }^{2}=\frac{\partial ^{2}}{\partial \xi \partial \zeta }\).

Definition 3

The Caputo fractional derivative of order \(\gamma =\left( \gamma _{1},\gamma _{2}\right) \) of w is defined by:

$$\begin{aligned} ^{C}D_{a^{+}}^{\gamma }w\left( \xi ,\zeta \right)= & {} D_{a^{+}}^{\gamma }\big [ w\left( \xi ,\zeta \right) -w\left( \xi ,a_{2}\right) -w\left( a_{1},\zeta \right) +w\left( a_{1},a_{2}\right) \big ], \\= & {} \big [\Gamma \left( 1-\gamma _{1}\right) \Gamma \left( 1-\gamma _{2}\right) \big ]^{-1}D_{\xi ,\zeta }^{2}\int _{a_{1}^{+}}^{\xi }\int _{a_{2}^{+}}^{\zeta } (\xi -s)^{-\gamma _{1}}(\zeta -t)^{-\gamma _{2}} \\&\times \left[ w\left( s,t\right) -w\left( s,a_{2}\right) -w\left( a_{1},t\right) +w\left( a_{1},a_{2}\right) \right] dtds, \end{aligned}$$

where \(a=(a_1, a_2) \in \mathbb {R}^2\), \(\left( \gamma _{1},\gamma _{2}\right) \in \left( 0,1\right) ^{2}\) and \(D_{\xi ,\zeta }^{2}=\frac{\partial ^{2}}{\partial \xi \partial \zeta }\).

Definition 4

The Mittag-Leffler function is defined by:

$$\begin{aligned} E_{\xi }(\varrho )=\displaystyle \sum _{m=0}^{\infty } \frac{\varrho ^m}{\Gamma (m \xi +1)}, \end{aligned}$$

where \(\xi >0\), \(\varrho \in \mathbb {C}\).
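For the moderate arguments appearing later in the paper, the series of Definition 4 can be evaluated directly by truncation. The Python sketch below is our own illustration; the function name and the truncation length are working choices.

```python
import math

def mittag_leffler(xi, rho, terms=80):
    # Truncated series of Definition 4; 80 terms is a working choice,
    # sufficient for the moderate |rho| used throughout this paper.
    return sum(rho ** m / math.gamma(m * xi + 1) for m in range(terms))
```

Since \(E_1(\varrho )=e^{\varrho }\), the exponential provides a convenient sanity check of the implementation.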

Remark 1

Let \(\varepsilon \) be a nonzero real. The function \(\nu (s)=E_{\tau }\big (\varepsilon (s-p)^\tau \big )\) satisfies:

$$\begin{aligned} \frac{1}{\Gamma (\tau )} \int _{p}^s \;(s-t)^{\tau -1} \nu (t) dt=\frac{1}{\varepsilon }\big [\nu (s)-1\big ], \end{aligned}$$

where \(s, \,p \in \mathbb {R}\), \(p \le s\).
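Remark 1 can be verified numerically. In the sketch below (our own check, with sample values \(\tau =0.7\), \(\varepsilon =0.8\), \(p=0\), \(s=1\)), the substitution \(z=(s-t)^{\tau }\) removes the kernel singularity before a midpoint rule is applied.

```python
import math

def ml(xi, z, terms=60):
    # truncated Mittag-Leffler series of Definition 4
    return sum(z ** m / math.gamma(m * xi + 1) for m in range(terms))

tau, eps, p, s = 0.7, 0.8, 0.0, 1.0
nu = lambda t: ml(tau, eps * (t - p) ** tau)

# substituting z = (s - t)^tau turns the left-hand side into
# (1/(tau*Gamma(tau))) * int_0^{(s-p)^tau} nu(s - z^(1/tau)) dz,
# whose integrand is no longer singular
n = 2000
up = (s - p) ** tau
h = up / n
lhs = sum(nu(s - ((k + 0.5) * h) ** (1.0 / tau)) for k in range(n)) \
      * h / (tau * math.gamma(tau))
rhs = (nu(s) - 1.0) / eps
```

With these sample values the two sides agree up to the midpoint-rule and series-truncation errors.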

Definition 5

A mapping \(\varpi : \Upsilon \times \Upsilon \rightarrow [0,\infty ]\) is called a generalized metric on a nonempty set \(\Upsilon \), if:

(i):

\(\varpi (\gamma _1,\gamma _2)=0\) if and only if \(\gamma _1=\gamma _2\),

(ii):

\(\varpi (\gamma _1,\gamma _2)=\varpi (\gamma _2,\gamma _1)\), \(\forall \) \(\gamma _1,\gamma _2\in \Upsilon \),

(iii):

\(\varpi (\gamma _1,\gamma _3)\le \varpi (\gamma _1,\gamma _2)+\varpi (\gamma _2,\gamma _3)\), \(\forall \) \(\gamma _1,\gamma _2,\gamma _3\in \Upsilon \).

The following theorem describes a main result of the fixed point theory.

Theorem 1

[32] Suppose that \((\Upsilon ,\varpi )\) is a generalized complete metric space and let \(\Psi : \Upsilon \rightarrow \Upsilon \) be a strictly contractive operator with Lipschitz constant \(C<1\). If one can find a nonnegative integer \(j_0\) such that \(\varpi (\Psi ^{j_0+1}y_0,\Psi ^{j_0}y_0)<\infty \) for some \(y_0\in \Upsilon \), then:

(i):

\(\Psi ^n y_0\) converges to a fixed point \(y_1\) of \(\Psi \),

(ii):

\(y_1\) is the unique fixed point of \(\Psi \) in \(\Upsilon ^*:=\{y_2\in \Upsilon : \varpi (\Psi ^{j_0} y_0,y_2)<\infty \}\),

(iii):

If \(y_2\in \Upsilon ^*\), then \(\varpi (y_2,y_1)\le \frac{1}{1-C} \varpi (\Psi y_2,y_2)\).
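The following toy illustration of Theorem 1 (our own example) works on \((\mathbb {R}, |\cdot |)\) with the affine contraction \(\Psi (y)=y/2+1\), whose unique fixed point is \(y_1=2\); items (i) and (iii) can then be checked directly.

```python
C = 0.5
psi = lambda y: C * y + 1.0   # contraction on (R, |.|) with constant C = 1/2

# (i): the Picard iterates converge to the fixed point y1 = 2
y = 10.0
for _ in range(60):
    y = psi(y)

# (iii): for any y2, |y2 - y1| <= |psi(y2) - y2| / (1 - C)
y2 = 7.0
bound = abs(psi(y2) - y2) / (1.0 - C)
```

For this affine map the a priori bound (iii) is attained with equality, which makes the check tight.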

3 Main results

Throughout the paper, we use the following notations:

$$\begin{aligned}&\Sigma ^{np}= {\mathbb {R}}^n \times {\mathbb {R}}^n \times {\mathbb {R}}^p, \qquad I=[0, T_1]\times [0, T_2],\\&\mathbf{t}=(t, s) \in {\mathbb {R}}_+^2,\qquad \mathbf{\tau }(\mathbf{t})=(\tau _1(t), \tau _2(s)), \\&\mathbf{r}=(u,v) \in {\mathbb {R}}_+^2, \qquad \mathbf{\tau }(\mathbf{r})=(\tau _1(u), \tau _2(v)), \end{aligned}$$

where \(\tau _1\), \(\tau _2\) are two functions which will be specified later.

We consider the fractional-order system, with the variable \(\mathbf{t}\), as follows:

$$\begin{aligned} ^CD_{0}^{\alpha } x(\mathbf{t})= & {} A x(\mathbf{t})+B x(\mathbf{t}-\mathbf{\tau }(\mathbf{t}))+Cd(\mathbf{t}) \nonumber \\&+\, F(\mathbf{t},x(\mathbf{t}),x(\mathbf{t}-\mathbf{\tau }(\mathbf{t})),d(\mathbf{t})), \quad \text{ for } \text{ all } \ \mathbf{t} \in I, \end{aligned}$$
(1)

with the initial condition:

$$\begin{aligned} x(\mathbf{t})= \Phi (\mathbf{t}), \quad \text{ for } \text{ all } \ \mathbf{t} \in \tilde{J}, \end{aligned}$$

where \(\alpha =( \alpha _1,\alpha _2)\), \(0<\alpha _1,\alpha _2<1\). The function \(\mathbf{\tau }\) is continuous on I and \(\tau _1\), \(\tau _2\) are positive. The matrices \(A, \, B\in \mathbb {R}^{n\times n}\), \(C\in \mathbb {R}^{n\times p}\) and the function \(\Phi \in C(\tilde{J}, \mathbb {R}^n)\). Here, the domain \(\tilde{J}\) is defined by:

$$\begin{aligned} \tilde{J}= J \backslash {(0, T_1]\times (0, T_2]} \quad \text{ and } \quad J=[-r_1, T_1]\times [-r_2, T_2] \end{aligned}$$

where the constants \(r_1,\,r_2\) are given by:

$$\begin{aligned} r_1=\underset{t\in [0, T_1]}{\sup }(\tau _1(t)) \quad \text{ and } \quad r_2=\underset{t\in [0, T_2]}{\sup }(\tau _2(t)). \end{aligned}$$

The source term \(F \in C(\mathbb {R}_+^2\times \Sigma ^{np}, \mathbb {R}^n)\) in Eq. (1) satisfies:

$$\begin{aligned} \Vert F(\mathbf{t},\mathbf{u})-F(\mathbf{t},\mathbf{v})\Vert \le \kappa (\mathbf{t}) \sum _{i=1}^{3} \Vert u_i-v_i\Vert \quad \text{ and }\quad F(\mathbf{t},\mathbf{0})=0, \end{aligned}$$
(2)

for all \(\mathbf{t}\in \mathbb {R}_+^2\) and for all \(\mathbf{u}=(u_1,u_2,u_3)\), \(\mathbf{v}=(v_1,v_2,v_3)\) \(\in \Sigma ^{np} \), where \(\kappa \) is a continuous function on \(\mathbb {R}_+^2\) and \(\Vert \cdot \Vert \) denotes the Euclidean norm.

The function \(d \in C(\mathbb {R}_+^2, \mathbb {R}^p)\) represents the disturbance and satisfies:

$$\begin{aligned} \exists \rho >0: \qquad d^T(\mathbf{t}) d(\mathbf{t})\le \rho ^2. \end{aligned}$$
(3)

Let us introduce the following constants \(a_0, \, a_1, \, a_2\) which are defined by:

$$\begin{aligned} a_0 = \Vert A\Vert + \displaystyle \max _{\mathbf{t}\in I}(\kappa (\mathbf{t})), \quad a_1= \Vert B\Vert + \displaystyle \max _{\mathbf{t}\in I}(\kappa (\mathbf{t})), \quad a_2= \Vert C\Vert + \displaystyle \max _{\mathbf{t}\in I}(\kappa (\mathbf{t})), \end{aligned}$$

where the function \(\kappa \) is given in relation (2).

Definition 6

The system (1) is robustly FTS with respect to \(\{\varepsilon , \sigma , \rho ,T_1, T_2\}\), \(\varepsilon < \sigma \), if the following relation is satisfied:

$$\begin{aligned} \Vert \Phi \Vert \le \varepsilon \Longrightarrow \Vert x(\mathbf{t})\Vert \le \sigma ,\ \forall \mathbf{t} \in I, \end{aligned}$$

for every disturbance \(d\) satisfying (3).

Recall that the solution of the system (1) is defined on the extended domain \(J=\tilde{J} \cup I\) as follows:

$$\begin{aligned} x(\mathbf{t})= \left\{ \begin{array}{ll} \Phi (\mathbf{t}),&{} \text{ for } \text{ all } \ \mathbf{t} \in \tilde{J}, \\ \\ \theta (t, s) + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \pi _{x}(\mathbf{r})\,dvdu , &{} \forall \ (t, s) \in I, \end{array} \right. \nonumber \\ \end{aligned}$$
(4)

where the functions \(\pi _{x}, \theta \) are defined by:

$$\begin{aligned} \theta (t, s)= & {} \Phi (t, 0)+ \Phi (0, s) - \Phi (0, 0), \end{aligned}$$
(5)
$$\begin{aligned} \pi _{x}(\mathbf{r})= & {} Ax(\mathbf{r}) + Bx(\mathbf{r}-\tau (\mathbf{r}))+ Cd(\mathbf{r})+ F(\mathbf{r},x(\mathbf{r}),x(\mathbf{r}-\tau (\mathbf{r})),d(\mathbf{r})).\nonumber \\ \end{aligned}$$
(6)

The first main result is given by the following theorem.

Theorem 2

Let \(\eta _1, \,\, \eta _2 >0\) be such that \(1-\frac{a_0+a_1}{\eta _1\eta _2} >0\), and assume that hypothesis (2) is satisfied. Then, Eq. (1) has a unique solution \(y_0\) on I. In addition, the following inequality holds:

$$\begin{aligned} \Vert y_0(\mathbf{t})\Vert\le & {} 3\Big [ 1+(a_0+a_1)M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}) \Big ] \Vert \Phi \Vert \nonumber \\&+ a_2M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}) \Vert d\Vert , \end{aligned}$$
(7)

where \(M_0(\eta _1, \eta _2)\) is given by:

$$\begin{aligned} M_0(\eta _1, \eta _2)=\frac{T_1^{\alpha _1}T_2^{\alpha _2}}{[1-\frac{a_0+a_1}{\eta _1\eta _2}]\Gamma (\alpha _1+1)\Gamma (\alpha _2+1)}. \end{aligned}$$
(8)

The proof of Theorem 2 will be established later.

Let us consider the generalized complete metric space \((E, \delta )\) defined as follows:

$$\begin{aligned} E= C(J, \mathbb {R}^n) \quad \text{ and } \quad \delta (\zeta _1, \zeta _2)=\inf \Big \{M>0, \frac{\Vert \zeta _1(\mathbf{t})-\zeta _2(\mathbf{t})\Vert }{h(\mathbf{t})} \le M, \forall \mathbf{t}\in J\Big \}, \end{aligned}$$

where the function \(h \in C(J, \mathbb {R}_+)\) and is defined by:

$$\begin{aligned} h(\mathbf{t}) = \left\{ \begin{array}{ll} E_{\alpha _1}(\eta _1t^{\alpha _1}) E_{\alpha _2}(\eta _2s^{\alpha _2}),&{} \text{ for } \ \mathbf{t}=(t, s) \in I, \\ 1, &{} \text{ for } \ \mathbf{t} \in [-r_1, 0]\times [-r_2, 0], \\ E_{\alpha _2}(\eta _2s^{\alpha _2}), &{} \text{ for } \ \mathbf{t} \in [-r_1, 0]\times [0, T_2], \\ E_{\alpha _1}(\eta _1t^{\alpha _1}), &{} \text{ for } \ \mathbf{t} \in [0, T_1]\times [-r_2, 0]. \end{array} \right. \end{aligned}$$
(9)

Let \(\Phi \in C(\tilde{J}, \mathbb {R}^n)\). We consider the operator \(\mathcal {A}: E \rightarrow E\) defined by:

$$\begin{aligned} (\mathcal{{A}}y)(\mathbf{t})= \left\{ \begin{array}{ll} \Phi (\mathbf{t}),\quad \text{ for } \text{ all } \ \mathbf{t} \in \tilde{J}, \\ \\ \Phi (t, 0)+ \Phi (0, s) - \Phi (0, 0) \\ \quad + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \pi _{y}(\mathbf{r})\,dvdu , &{} \forall \, (t, s) \in I, \end{array} \right. \nonumber \\ \end{aligned}$$
(10)

where the function \(\pi _{y}\) is given by:

$$\begin{aligned} \pi _{y}(\mathbf{r}) = Ay(\mathbf{r}) + By(\mathbf{r}-\tau (\mathbf{r}))+ Cd(\mathbf{r})+ F(\mathbf{r},y(\mathbf{r}),y(\mathbf{r}-\tau (\mathbf{r})),d(\mathbf{r})). \end{aligned}$$

We first establish the following proposition.

Proposition 1

The operator \(\mathcal{{A}}: E \rightarrow E\) is contractive.

Proof

Recall that \(\mathbf{r}=(u,v) \in \mathbb {R}_+^2\). Let \(y_1, \, y_2 \in E\). From (10), we deduce that:

$$\begin{aligned} (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})=0, \quad \forall \mathbf{t} \in \tilde{J}. \end{aligned}$$

On the other hand, for \(\mathbf{t}=(t,s) \in I\) we have:

$$\begin{aligned}&\Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \\&\quad =\Big \Vert \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \Big [ A\big (y_1(\mathbf{r})-y_2(\mathbf{r})\big ) \\&\qquad +B\big (y_1(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))- y_2(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))\big ) + \Big (F\big (\mathbf{r},y_1(\mathbf{r}),y_1(\mathbf{r}-\mathbf{\tau }(\mathbf{r})),d(\mathbf{r})\big ) \\&\qquad -F\big (\mathbf{r},y_2(\mathbf{r}),y_2(\mathbf{r}-\mathbf{\tau }(\mathbf{r})),d(\mathbf{r})\big )\Big ) \Big ]\,dvdu \Big \Vert . \end{aligned}$$

Now, by using relation (2), we obtain:

$$\begin{aligned}&\Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \\&\le \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \Big [ \big (\kappa (\mathbf{r})+\Vert A\Vert \big ) \Vert y_1(\mathbf{r})-y_2(\mathbf{r})\Vert \\&+ \big (\kappa (\mathbf{r})+\Vert B\Vert \big ) \Vert y_1(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))- y_2(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))\Vert \Big ]\,dvdu. \end{aligned}$$

From the definition of the constants \(a_0, \,a_1, \,a_2\), we deduce that:

$$\begin{aligned}&\Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \le \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)} \\&\times \Big \{ a_0 \int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \Vert y_1(\mathbf{r})-y_2(\mathbf{r})\Vert \,dvdu \\&+\, a_1 \int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \Vert y_1(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))- y_2(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))\Vert \,dvdu \Big \}. \end{aligned}$$

Or, equivalently:

$$\begin{aligned}&\Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \le \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)} \\&\times \Big \{ a_0\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \frac{\Vert y_1(\mathbf{r})-y_2(\mathbf{r})\Vert }{h(\mathbf{r})}h(\mathbf{r}) \,dvdu \\&+\, a_1\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \frac{\Vert y_1(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))- y_2(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))\Vert }{h(\mathbf{r}-\mathbf{\tau }(\mathbf{r}))}h(\mathbf{r}-\mathbf{\tau }(\mathbf{r})) \,dvdu \Big \}. \end{aligned}$$

Finally, from the definition of the metric space E, we get:

$$\begin{aligned} \Vert (\mathcal{{A}}y_1)(\mathbf{t})- & {} (\mathcal{{A}}y_2)(\mathbf{t})\Vert \nonumber \\&\le {a_0\delta (y_1,y_2)}\int _0^t\int _0^s\frac{(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1}}{{\Gamma (\alpha _1)\Gamma (\alpha _2)}}h(\mathbf{r}) \,dvdu \nonumber \\&+\, {a_1\delta (y_1,y_2)}\int _0^t\int _0^s\frac{(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1}}{{\Gamma (\alpha _1)\Gamma (\alpha _2)}} h(\mathbf{r}-\mathbf{\tau }(\mathbf{r})) \,dvdu. \nonumber \\ \end{aligned}$$
(11)

Let us mention that we have:

$$\begin{aligned} h(\mathbf{r}-\mathbf{\tau }(\mathbf{r})) \le h(\mathbf{r}), \quad \text{ for } \text{ all } \ \mathbf{r}\in I. \end{aligned}$$
(12)

Then, we can deduce from relations (11) and (12) that:

$$\begin{aligned}&\Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \\&\quad \le {(a_0+a_1)\delta (y_1,y_2)}\int _0^t\int _0^s\frac{(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1}}{{\Gamma (\alpha _1)\Gamma (\alpha _2)}}h(\mathbf{r}) \,dvdu. \end{aligned}$$

Using Remark 1, we get:

$$\begin{aligned} \Vert (\mathcal{{A}}y_1)(\mathbf{t}) - (\mathcal{{A}}y_2)(\mathbf{t})\Vert \le {\frac{a_0+a_1}{\eta _1\eta _2}\delta (y_1,y_2)}h(\mathbf{t}). \end{aligned}$$

Thus,

$$\begin{aligned} \delta (\mathcal{{A}}y_1, \mathcal{{A}}y_2) \le \frac{a_{0}+a_{1}}{\eta _{1}\eta _{2}}\delta (y_{1}, y_{2}). \end{aligned}$$

Therefore, \(\mathcal{{A}}\) is contractive. The proof is complete. \(\square \)

In the following, we establish the proof of Theorem 2.

Proof

(Theorem 2). Let \(\Phi \in C(\tilde{J}, \mathbb {R}^n)\). We consider the function \(\mu \) defined as follows:

$$\begin{aligned} \mu (\mathbf{t})= \left\{ \begin{array}{ll} \Phi (\mathbf{t}),&{} \forall \ \mathbf{t} \in \tilde{J}, \\ \\ \Phi (t, 0)+ \Phi (0, s) - \Phi (0, 0), &{} \forall \ (t, s) \in I. \end{array} \right. \end{aligned}$$
(13)

It is easy to see that the following estimate holds:

$$\begin{aligned} \Vert \mu (\mathbf{t})\Vert \le 3\Vert \Phi \Vert , \quad \forall \, \mathbf{t} \in J. \end{aligned}$$
(14)

From the definition of the operator \(\mathcal {A}\), see (10), and the definition of the function \(\mu \), see (13), we get:

$$\begin{aligned} (\mathcal{{A}}\mu )(\mathbf{t}) - \mu (\mathbf{t})=0, \quad \forall \, \mathbf{t} \in \tilde{J}. \end{aligned}$$

For all \(\mathbf{t}=(t, s) \in I\), we obtain:

$$\begin{aligned}&\Vert (\mathcal{{A}}\mu )(\mathbf{t}) - \mu (\mathbf{t})\Vert \\&\quad = \Big \Vert \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \pi _{\mu }(\mathbf{r})\,dvdu \Big \Vert , \\&\quad \le \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \Big [ 3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert \Big ]\,dvdu, \\&\quad \le \Big (\frac{3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert }{\Gamma (\alpha _1+1)\Gamma (\alpha _2+1)}\Big )t^{\alpha _1}s^{\alpha _2}, \end{aligned}$$

where the function \(\pi _{\mu }\) is given as in Eq. (6). Then, we deduce that:

$$\begin{aligned} \frac{\Vert (\mathcal{{A}}\mu )(\mathbf{t}) - \mu (\mathbf{t})\Vert }{h(\mathbf{t})} \le \Big (\frac{3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert }{\Gamma (\alpha _1+1)\Gamma (\alpha _2+1)}\Big )T_1^{\alpha _1}T_2^{\alpha _2}. \end{aligned}$$

Hence, we deduce that:

$$\begin{aligned} \delta (\mathcal{{A}}\mu , \mu ) \le \Big (\frac{3(a_{0}+a_{1})\Vert \Phi \Vert +a_2\Vert d\Vert }{\Gamma (\alpha _{1}+1)\Gamma (\alpha _{2}+1)}\Big )T_{1}^{\alpha _{1}}T_{2}^{\alpha _{2}}. \end{aligned}$$

By using Theorem 1 and Proposition 1, there exists a unique solution \(y_0\) to the problem (1) such that:

$$\begin{aligned} y_0(\mathbf{t})=\Phi (\mathbf{t}), \quad \forall \ \mathbf{t} \in \tilde{J}, \end{aligned}$$

and we have the following estimation:

$$\begin{aligned} \delta (y_0, \mu ) \le \frac{1}{1-\frac{a_0+a_1}{\eta _1\eta _2}} \Big (\frac{3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert }{\Gamma (\alpha _1+1)\Gamma (\alpha _2+1)}\Big )T_1^{\alpha _1}T_2^{\alpha _2}, \end{aligned}$$

or, equivalently

$$\begin{aligned} \delta (y_0, \mu ) \le \Big (3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert \Big )M_0(\eta _1, \eta _2), \end{aligned}$$

where \(M_0(\eta _1, \eta _2)\) is given by Eq. (8). So, for all \(\mathbf{t} \in I\) we have

$$\begin{aligned} \Vert y_0(\mathbf{t})- \mu (\mathbf{t})\Vert \le \Big (3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert \Big )M_0(\eta _1, \eta _2)E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}).\nonumber \\ \end{aligned}$$
(15)

By the triangle inequality, we have:

$$\begin{aligned} \Vert y_0(\mathbf{t})\Vert \le \Vert \mu (\mathbf{t})\Vert + \Vert y_0(\mathbf{t})- \mu (\mathbf{t})\Vert ,\quad \mathbf{t} \in I. \end{aligned}$$

Consequently, using (14) and (15), we can establish that:

$$\begin{aligned} \Vert y_0(\mathbf{t})\Vert\le & {} 3\Vert \Phi \Vert + \Big (3(a_0+a_1)\Vert \Phi \Vert +a_2\Vert d\Vert \Big )M_0(\eta _1, \eta _2)E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}), \\\le & {} 3\Big [ 1+(a_0+a_1)M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}) \Big ] \Vert \Phi \Vert \\&+\, a_2M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}) \Vert d\Vert . \end{aligned}$$

The proof is complete. \(\square \)

The second main result of this paper is given by the following theorem.

Theorem 3

Assume that there exist \(\eta _1, \,\eta _2>0\) such that \(a_0+a_1<\eta _1\eta _2\) and the following inequality holds:

$$\begin{aligned}&3\Big [ 1 + (a_0+a_1)M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2}) \Big ] \varepsilon \nonumber \\&\quad + \Big [a_2M_0(\eta _1, \eta _2) E_{\alpha _1}(\eta _1T_1^{\alpha _1})E_{\alpha _2}(\eta _2T_2^{\alpha _2})\Big ] \rho \le \sigma . \end{aligned}$$
(16)

Then the system (1) is FTS with respect to \(\{\varepsilon , \sigma , \rho ,T_1, T_2\}\).

Proof

It follows from (7) and (16) that (1) is FTS. \(\square \)

4 Numerical simulation

Recall that the solution of the system (1) is given by the relation (4) as follows:

$$\begin{aligned} x(t, s)= \theta (t, s) + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^t\int _0^s(t-u)^{\alpha _1-1} (s-v)^{\alpha _2-1} \pi _{x}(u, v)\,dvdu,\nonumber \\ \end{aligned}$$
(17)

for all \((t, s) \in [0, T_1]\times [0, T_2]\), where the functions \(\pi _{x}, \theta \) are given by relations (5) and (6). In this section, we study the system (1) with \(x, \, \theta ,\,\pi _{x} \in \mathbb {R}^2\); we thus suppose that the solution vector x is of the form:

$$\begin{aligned} x(t, s)=\big (x_1(t, s), x_2(t, s)\big )^T. \end{aligned}$$

We consider a uniform grid on the extended domain \([-r_1, T_1]\times [-r_2, T_2]\). Let \(\lambda =\frac{T_1}{N}=\frac{r_1}{q}\) and \(\beta =\frac{T_2}{M}=\frac{r_2}{p}\), where \(N,\,M,\,p,\,q \in \mathbb {N}\). We build two sequences \((t_i)_i\), \((s_j)_j\) as follows:

$$\begin{aligned} t_i= & {} i\lambda , \qquad \forall i=-q,-q+1,-q+2,\ldots ,-1,0, \ldots , N, \\ s_j= & {} j\beta , \qquad \forall j=-p,-p+1,-p+2,\ldots ,-1,0, \ldots , M. \end{aligned}$$

At the point \((t_i, s_j)\), we have:

$$\begin{aligned} x(t_i, s_j)= \theta (t_i, s_j) + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^{t_i}\int _0^{s_j}(t_i-u)^{\alpha _1-1} (s_j-v)^{\alpha _2-1} \pi _{x}(u, v)\,dvdu,\nonumber \\ \end{aligned}$$
(18)

where \(\theta (t_i, s_j)= \Phi (0, s_j) + \Phi (t_i, 0) - \Phi (0, 0)\). Now, we consider the following approximations:

$$\begin{aligned} x(t_i, s_j) \approx x_{ij}, \,\, \theta (t_i, s_j) \approx \theta _{ij}, \,\, \Phi (0, s_j)\approx \Phi _{0j} , \,\, \Phi (t_i, 0)\approx \Phi _{i0}, \,\, \Phi (0, 0)\approx \Phi _{00}.\nonumber \\ \end{aligned}$$
(19)

Then Eq. (18) becomes:

$$\begin{aligned} x_{ij}= \theta _{ij} + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\int _0^{t_i}\int _0^{s_j}(t_i-u)^{\alpha _1-1} (s_j-v)^{\alpha _2-1} \pi _{x}(u, v)\,dvdu.\nonumber \\ \end{aligned}$$
(20)

We deduce from (19) and (20) that:

$$\begin{aligned} x_{0j}=\theta _{0j}=\Phi _{0j} \quad \text{ and } \quad x_{i0}=\theta _{i0}=\Phi _{i0}. \end{aligned}$$

Equation (20) can be rewritten as follows:

$$\begin{aligned} x_{ij}= \theta _{ij} + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\sum _{k=0}^{i-1}\sum _{l=0}^{j-1} \int _{t_k}^{t_{k+1}}\int _{s_l}^{s_{l+1}}(t_i-u)^{\alpha _1-1} (s_j-v)^{\alpha _2-1} \pi _{x}(u, v)\,dvdu. \end{aligned}$$

Now, we use the approximation proposed in [26]:

$$\begin{aligned} x_{ij}= & {} \theta _{ij} + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\sum _{k=0}^{i-1}\sum _{l=0}^{j-1} \int _{t_k}^{t_{k+1}}\int _{s_l}^{s_{l+1}}(t_i-u)^{\alpha _1-1} (s_j-v)^{\alpha _2-1} \pi _{x}(t_k, s_l)\,dvdu, \nonumber \\= & {} \theta _{ij} + \frac{1}{\Gamma (\alpha _1)\Gamma (\alpha _2)}\sum _{k=0}^{i-1}\sum _{l=0}^{j-1}\pi _{x}^{kl} \int _{t_k}^{t_{k+1}}\int _{s_l}^{s_{l+1}}(t_i-u)^{\alpha _1-1} (s_j-v)^{\alpha _2-1} \,dvdu,\nonumber \\ \end{aligned}$$
(21)

where we have the approximation \(\pi _{x}(t_k, s_l) \approx \pi _{x}^{kl}\) and:

$$\begin{aligned} \pi _{x}(t_k, s_l)= & {} Ax(t_k, s_l) + Bx(t_k-r_1, s_l-r_2)+ Cd(t_k, s_l) \\&+\, F(t_k, s_l,x(t_k, s_l),x(t_k-r_1, s_l-r_2),d(t_k, s_l)), \end{aligned}$$

and the term

$$\begin{aligned} x(t_k-r_1, s_l-r_2)= & {} x(k\lambda -q\lambda , l\beta -p\beta ) \\= & {} x((k-q)\lambda , (l-p)\beta )=x(t_{k-q}, s_{l-p})\approx x_{k-q,l-p}, \end{aligned}$$

so, we deduce that:

$$\begin{aligned} \pi _{x}^{kl}= A x_{kl}+B x_{k-q,l-p}+Cd_{kl}+ F(t_k,s_l,x_{kl},x_{k-q,l-p},d_{kl}). \end{aligned}$$

By computing the integral on the right-hand side of Eq. (21) and simplifying, we obtain:

$$\begin{aligned} x_{ij}= \theta _{ij} + \frac{\lambda ^{\alpha _1}\beta ^{\alpha _2}}{\Gamma (\alpha _1+1) \Gamma (\alpha _2+1)}\sum _{k=0}^{i-1}\sum _{l=0}^{j-1} b_{ik}c_{lj} \pi _{x}^{kl}, \end{aligned}$$
(22)

where \(b_{ik}, \,c_{lj}\) are given by:

$$\begin{aligned} b_{ik}= (i-k-1)^{\alpha _1}-(i-k)^{\alpha _1}, \qquad c_{lj}= (j-l-1)^{\alpha _2}-(j-l)^{\alpha _2}. \end{aligned}$$

The convergence of the method follows from [26], and the error satisfies:

$$\begin{aligned} \Vert x(t_i, s_j)-x_{ij}\Vert = O(\lambda ^{\alpha _1} + \beta ^{\alpha _2}), \quad \text{ as } \ \lambda \rightarrow 0, \quad \beta \rightarrow 0. \end{aligned}$$
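To make the scheme concrete, the following Python sketch implements relation (22) (our own implementation; names, grid sizes and sample data are illustrative). The signs of \(b_{ik}\) and \(c_{lj}\) are folded together so that both weight factors are written as positive increments, and the delayed state is read off the grid through the offsets \(q, \, p\).

```python
import math

def matvec(Mat, v):
    # dense matrix-vector product on plain Python lists
    return [sum(Mat[i][j] * v[j] for j in range(len(v))) for i in range(len(Mat))]

def solve_darboux(a1, a2, T1, T2, N, M, q, p, A, B, C, F, d, Phi):
    # Scheme (22): x_ij = theta_ij + lam^a1 bet^a2 / (Gamma(a1+1) Gamma(a2+1))
    #              * sum_{k<i} sum_{l<j} b_ik c_lj pi^kl,
    # with b_ik c_lj = ((i-k)^a1 - (i-k-1)^a1) * ((j-l)^a2 - (j-l-1)^a2).
    lam, bet = T1 / N, T2 / M                   # grid steps; r1 = q*lam, r2 = p*bet
    n = len(A)
    x = [[None] * (M + p + 1) for _ in range(N + q + 1)]  # x[i+q][j+p] ~ x(t_i, s_j)
    for i in range(-q, N + 1):
        for j in range(-p, M + 1):
            if i <= 0 or j <= 0:                # initial data Phi on the set J~
                x[i + q][j + p] = Phi(i * lam, j * bet)
    coef = lam ** a1 * bet ** a2 / (math.gamma(a1 + 1) * math.gamma(a2 + 1))
    pi = {}                                     # cache of relation (6) at grid nodes
    for i in range(1, N + 1):
        for j in range(1, M + 1):
            theta = [Phi(i * lam, 0.0)[m] + Phi(0.0, j * bet)[m] - Phi(0.0, 0.0)[m]
                     for m in range(n)]
            acc = [0.0] * n
            for k in range(i):
                b = (i - k) ** a1 - (i - k - 1) ** a1
                for l in range(j):
                    if (k, l) not in pi:
                        xkl, xdel = x[k + q][l + p], x[k][l]  # x[k][l] = x(t_k - r1, s_l - r2)
                        dkl = d(k * lam, l * bet)
                        val = matvec(A, xkl)
                        for m, v in enumerate(matvec(B, xdel)):
                            val[m] += v
                        for m, v in enumerate(matvec(C, dkl)):
                            val[m] += v
                        fv = F(k * lam, l * bet, xkl, xdel, dkl)
                        pi[(k, l)] = [val[m] + fv[m] for m in range(n)]
                    c = (j - l) ** a2 - (j - l - 1) ** a2
                    for m in range(n):
                        acc[m] += b * c * pi[(k, l)][m]
            x[i + q][j + p] = [theta[m] + coef * acc[m] for m in range(n)]
    return x
```

A convenient sanity check: when \(A=B=C=0\), \(F=0\) and \(d=0\), the double sum vanishes and the scheme returns \(x_{ij}=\theta _{ij}\) exactly. In our runs with data of the size used in Example 1 below, the computed grid values remain well below the threshold \(\sigma =10\).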

5 Numerical examples

In the following numerical examples, we verify that the solution of the system (1) satisfies Definition 6. Indeed, we show that for \(\varepsilon >0\) and \(\sigma >0\) such that \(\varepsilon <\sigma \), we have

$$\begin{aligned} \Vert \Phi \Vert \le \varepsilon \Rightarrow \Vert x(t, s)\Vert \le \sigma , \quad \forall (t, s) \in [0, T_1]\times [0, T_2]. \end{aligned}$$
(23)

Recall that the system (1) is given as follows:

$$\begin{aligned} ^CD_{0}^{(\alpha _1,\alpha _2)} x(t, s)= & {} A x(t, s)+B x(t-r_1, s-r_2) +Cd(t, s)\nonumber \\&+\, F(t, s,x(t, s),x(t-r_1, s-r_2),d(t, s)), \end{aligned}$$
(24)

for all \((t, s) \in [0, T_1]\times [0, T_2]\), and the initial condition is defined by:

$$\begin{aligned} x(t, s)= \Phi (t, s), \quad \forall (t, s) \in [-r_1, 0]\times [-r_2, 0], \end{aligned}$$

where the solution is \(x(t,s)=(x_1(t,s), x_2(t,s))^T\).

Example 1

We have taken the following data:

$$\begin{aligned}&A=10^{-2}\left( \begin{array}{cc} 1 &{}\quad 2 \\ -3 &{}\quad -4 \\ \end{array} \right) , \ B=10^{-2}\left( \begin{array}{cc} 2 &{}\quad -3 \\ 5 &{}\quad -4 \\ \end{array} \right) ,\\&C=10^{-2}\left( \begin{array}{cc} 1 &{}\quad 2 \\ -1 &{}\quad -1 \\ \end{array} \right) , \qquad d(t,s)=10^{-3}\left( \begin{array}{c} 5 \\ 4 \\ \end{array} \right) . \end{aligned}$$

The source term is of the form:

$$\begin{aligned} F(t,s,x(t,s),x(t-r_1,s-r_2),d(t,s))=0.02 \left( \begin{array}{c} \sin (x_2(t,s)) \\ \sin (x_1(t-r_1,s-r_2)) \\ \end{array} \right) , \end{aligned}$$

where \((r_1, r_2)= (0.1, 0.2)\). The initial condition is:

$$\begin{aligned} \Phi (t,s)= \left( \begin{array}{c} 0.07\cos (10\pi ts) \\ 0.07\cos (9\pi ts) \\ \end{array} \right) , \end{aligned}$$

for all \((t, s) \in [-0.1, 0]\times [-0.2, 0]\). Moreover, we have taken: \(N=70\), \(M=60\), \(\eta _1=\eta _2=1\), \(\varepsilon =0.1\), \(\sigma =10\) and \(\rho =0.01\). In the following, we plot the solution for different values of \(\alpha =(\alpha _1,\alpha _2)\). Note that \(\Vert \Phi \Vert \approx 0.09899<\varepsilon \). Moreover, the stability relation (23) is satisfied, since \(\Vert x(t,s)\Vert < 10\); the norm of the solution \(x(t, s)\) is reported in each of Figures 1, 2 and 3. Hence, the fractional-order system (24) is FTS with respect to \(\{\varepsilon , \sigma ,\rho , T_1, T_2 \}\).

Fig. 1
figure 1

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.8]\times [0, 1.52]\) and \((\alpha _1,\alpha _2)=(0.4,0.9)\), with norm \(\Vert x(t,s)\Vert =6.4160\)

Fig. 2
figure 2

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.8]\times [0, 1.8]\) and \((\alpha _1,\alpha _2)=(0.9,0.8)\), with norm \(\Vert x(t,s)\Vert =6.4178\)

Fig. 3
figure 3

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.8]\times [0, 1.2]\) and \((\alpha _1,\alpha _2)=(0.8,0.3)\), with norm \(\Vert x(t,s)\Vert =6.4190\)

In the experiment illustrated by Fig. 4, we take the same data as in Fig. 1, but with a larger perturbation \(d(t,s)=(2,3)^T\), so that \(\Vert d(t,s)\Vert \approx 3.605\) and \(\Vert x(t,s)\Vert =9.9441\). The stabilization is clearly slower than in Fig. 1 (where \(d(t,s)=10^{-3}(5,4)^T\), \(\Vert d(t,s)\Vert \approx 0.0064\) and \(\Vert x(t,s)\Vert =6.4160\)), which is in agreement with the theory.

Fig. 4
figure 4

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.8]\times [0, 1.52]\), \((\alpha _1,\alpha _2)=(0.4,0.9)\) and a perturbation \(d(t,s)=(2,3)^T\), with norm \(\Vert x(t,s)\Vert =9.9441\)

Example 2

Now, we consider the same fractional-order system given in (24), but we take the following data:

$$\begin{aligned}&A=10^{-3}\left( \begin{array}{cc} 1 &{}\quad -3 \\ -2 &{}\quad 1 \\ \end{array} \right) , \ B=10^{-3}\left( \begin{array}{cc} -1 &{} \quad -2 \\ 0.5 &{}\quad 3 \\ \end{array} \right) , \\&C=10^{-3}\left( \begin{array}{cc} -3 &{}\quad -1 \\ 2 &{}\quad 0.5 \\ \end{array} \right) , d(t,s)=10^{-2}\left( \begin{array}{c} 1 \\ 0.5 \\ \end{array} \right) . \end{aligned}$$

The source term is of the form:

$$\begin{aligned} F(t,s,x(t,s),x(t-r_1,s-r_2),d(t,s))=10^{-3} \left( \begin{array}{c} \sin (\pi x_1(t-r_1,s-r_2)) \\ \sin (\pi x_2(t,s)) \\ \end{array} \right) , \end{aligned}$$

where \((r_1,r_2)= (0.1, 0.1)\). The initial condition is:

$$\begin{aligned} \Phi (t,s)= \left( \begin{array}{c} 0.01\cos (\pi ts) \\ 0.01\sin (\pi ts) \\ \end{array} \right) , \end{aligned}$$

for all \((t, s) \in [-0.1, 0]\times [-0.1, 0]\). Moreover, we have taken: \(N=80\), \(M=70\), \(\eta _1=\eta _2=1\), \(\varepsilon =0.1\), \(\sigma =1\) and \(\rho =0.02\). Note that \(\Vert \Phi \Vert \approx 0.01414<\varepsilon \). Moreover, the stability relation (23) is satisfied, since \(\Vert x(t,s)\Vert < 1\); the norm of the solution \(x(t, s)\) is reported in each of Figures 5, 6 and 7. In this case as well, the fractional-order system (24) is FTS with respect to \(\{\varepsilon , \sigma ,\rho , T_1, T_2 \}\).

Fig. 5
figure 5

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.4]\times [0, 1.3]\) and \((\alpha _1,\alpha _2)=(0.2,0.4)\), with norm \(\Vert x(t,s)\Vert =0.7480\)

Fig. 6
figure 6

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.99]\times [0, 1.8]\) and \((\alpha _1,\alpha _2)=(0.8,0.7)\), with norm \(\Vert x(t,s)\Vert =0.7481\)

Fig. 7
figure 7

The numerical solution \(x(t, s)\) for \((t,s) \in [0, 1.64]\times [0, 1.8]\) and \((\alpha _1,\alpha _2)=(0.9,0.3)\), with norm \(\Vert x(t,s)\Vert =0.7481\)

6 Conclusion

In this work, several goals have been achieved. We proved the existence and uniqueness of a global solution of the considered Darboux fractional partial differential systems with time delay, using an approach based on fixed-point theory. Moreover, a new sufficient condition for the FTS of such systems was obtained. Finally, illustrative examples were presented to validate our results.