1 Introduction

Recently, research on fractional differential equations has grown rapidly owing to their practical applications in various fields such as physics, engineering, control theory, and economics. Fractional differential models often describe phenomena more accurately, and make the physical meaning of parameters more explicit, than their integer-order counterparts. Consequently, many monographs and papers on fractional calculus and fractional differential equations have appeared, see [1,2,3,4,5,6].

It is well known that the p-Laplace operator has a deep background in mechanics, chemical physics, dynamical systems, etc. In the last ten years, fractional boundary value problems with the p-Laplace operator have been widely studied, and excellent results have been obtained on the existence, nonexistence, uniqueness, and multiplicity of solutions and positive solutions; we refer the reader to [7,8,9,10,11,12,13,14] and the references therein.

Meanwhile, boundary value problems with integral boundary conditions arise in many applied models [15,16,17], and some scholars have studied BVPs with Riemann–Stieltjes integral boundary conditions, see [18,19,20]. In particular, multi-strip integral boundary value problems have attracted the attention of many scholars and have been used extensively in semiconductor problems, blood flow, hydrodynamics, etc., see [21,22,23,24,25].

In [23], Ahmad et al. investigated the following fractional differential equation:

$$ {}^{c}D^{q}x(t)=f \bigl(t,x(t),{}^{c}D^{\beta }x(t) \bigr),\quad t\in [0,1], $$
(1.1)

supplemented with the boundary conditions of the form

$$ \textstyle\begin{cases} {ax(0)+bx(1)=\sum_{i=1}^{m-2} \alpha _{i} x (\sigma _{i} )+ \sum_{j=1}^{p-2} r_{j} \int _{\xi _{j}}^{\eta _{j}} x(s) \,ds}, \\ {c x^{\prime }(0)+d x^{\prime }(1)=\sum_{i=1}^{m-2} \delta _{i} x^{ \prime } (\sigma _{i} )+\sum_{j=1}^{p-2} \gamma _{j} \int _{\xi _{j}}^{\eta _{j}} x^{\prime }(s) \,ds}, \\ 0< \sigma _{1}< \cdots < \sigma _{m-2}< \cdots < \xi _{1}< \eta _{1}< \cdots < \xi _{p-2}< \eta _{p-2}< 1, \end{cases} $$
(1.2)

where \({}^{c}D^{q}\), \({}^{c}D^{\beta }\) denote the Caputo fractional derivatives of order q and β, respectively, f is a given continuous function, a, b, c, d are real constants, and \(\alpha _{i}\), \(\delta _{i}\) (\(i=1,2,\ldots ,m-2\)), \(r_{j}\), \(\gamma _{j}\) (\(j=1,2,\ldots ,p-2\)) are positive real constants. Several existence and uniqueness results were established by applying tools of fixed-point theory.

Furthermore, n-dimensional differential systems are far-reaching generalizations of differential equations, with broad application prospects and practical significance. However, n-dimensional differential systems have not been fully studied, and only a few results are available (see [26,27,28,29] for instance); studies of boundary value problems for n-dimensional fractional differential systems are even scarcer, see [29].

In [27], Feng et al. considered the following fourth-order n-dimensional m-Laplace system:

$$ \textstyle\begin{cases} \phi _{m}(\boldsymbol{x}''(t))''=\boldsymbol{\varPsi }(t)\boldsymbol{f}(t, \boldsymbol{x}(t)),\quad 0< t< 1,\\ \boldsymbol{x}(0)=\boldsymbol{x}(1)=\int _{0}^{1} \boldsymbol{g}(s)\boldsymbol{x}(s)\,ds,\\ \phi _{m}(\boldsymbol{x}''(0))= \phi _{m}(\boldsymbol{x}''(1))=\int _{0}^{1}\boldsymbol{h}(s)\phi _{m}( \boldsymbol{x}''(s))\,ds, \end{cases} $$
(1.3)

where the vector-valued function x is defined by \(\boldsymbol{x}=[x_{1},x_{2},\ldots ,x_{n}]^{T}\). The authors investigated the existence, multiplicity, and nonexistence of symmetric positive solutions by the fixed point theorem in a cone and the inequality technique.

Inspired by the above achievements, we consider the following n-dimensional p-Laplace system of fractional order \(\alpha _{1}+\alpha _{2}\):

$$ \textstyle\begin{cases} D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}\boldsymbol{u}(t)))= \kappa \boldsymbol{f}(t,\boldsymbol{u}(t),D_{0+}^{\alpha _{1}} \boldsymbol{u}(t)),\quad t\in (0,1),\\ \boldsymbol{u}(0)=\boldsymbol{0},\qquad \boldsymbol{u}(1)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}} \boldsymbol{u}(s)\,d\boldsymbol{A}(s),\\ D_{0+}^{\alpha _{1}} \boldsymbol{u}(0)=\boldsymbol{0},\qquad \varPhi _{p}(D_{0+}^{\alpha _{1}}\boldsymbol{u}(1))=\lambda \varPhi _{p}(D_{0+} ^{\alpha _{1}}\boldsymbol{u}(\eta )), \end{cases} $$
(1.4)

where \(1<\alpha _{k}\leq 2\), \(D_{0+}^{\alpha _{k}}\) is the standard Riemann–Liouville fractional derivative of order \(\alpha _{k}\) for \(k=1,2\); \(\varPhi _{p}(s)= \vert s \vert ^{p-2}s\), \(p>1\); \(\kappa >0\); \(0<\xi _{i}< 1\), \(b_{i}\geq 0\), \(\int _{0}^{\xi _{i}}\boldsymbol{u}(s)\,d \boldsymbol{A}(s)\) denotes a Riemann–Stieltjes integral, and \(\boldsymbol{A}(s)\) is a diagonal matrix of functions of bounded variation for \(i=1,2,\ldots ,m\); \(\lambda >0\); \(0<\eta <1\); and

$$\begin{aligned} &\boldsymbol{u}(t)=\bigl(u_{1}(t),u_{2}(t), \ldots ,u_{n}(t)\bigr)^{T}, \\ & \boldsymbol{f}\bigl(t,\boldsymbol{u},D_{0+}^{\alpha _{1}} \boldsymbol{u}\bigr)= \bigl(f_{1}\bigl(t,\boldsymbol{u},D_{0+}^{\alpha _{1}} \boldsymbol{u}\bigr),f_{2}\bigl(t, \boldsymbol{u},D_{0+}^{\alpha _{1}} \boldsymbol{u}\bigr),\ldots ,f_{n}\bigl(t, \boldsymbol{u},D_{0+}^{\alpha _{1}} \boldsymbol{u}\bigr) \bigr)^{T}, \\ &\varPhi _{p}\bigl(D_{0+}^{\alpha _{1}} \boldsymbol{u}(t)\bigr)= \bigl(\varPhi _{p}\bigl(D_{0+}^{ \alpha _{1}}u_{1}(t) \bigr),\varPhi _{p}\bigl(D_{0+}^{\alpha _{1}}u_{2}(t) \bigr),\ldots , \varPhi _{p}\bigl(D_{0+}^{\alpha _{1}}u_{n}(t) \bigr) \bigr)^{T}, \\ &\boldsymbol{A}(s)=\operatorname{diag}\bigl[A _{1}(s),A_{2}(s),\ldots ,A_{n}(s)\bigr]. \end{aligned}$$

Here, we should understand that \(f_{j}(t,\boldsymbol{u},D_{0+}^{\alpha _{1}}\boldsymbol{u})\) means \(f_{j}(t,u_{1},u_{2},\ldots ,u_{n},D_{0+} ^{\alpha _{1}}u_{1},D_{0+}^{\alpha _{1}}u_{2},\ldots , D_{0+}^{\alpha _{1}}u_{n})\) for \(j=1,2,\ldots ,n\).
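Before proceeding, note that \(\varPhi _{p}\) is invertible with inverse \(\varPhi _{q}\), where \(\frac{1}{p}+\frac{1}{q}=1\); this fact is used repeatedly in Sect. 2. The following minimal Python sketch (an illustration only; the value \(p=3\) and the test points are our arbitrary choices, not from the paper) checks the identity \(\varPhi _{q}(\varPhi _{p}(s))=s\) numerically:

```python
import math

def phi(p, s):
    """Phi_p(s) = |s|^(p-2) * s, the one-dimensional p-Laplacian map (p > 1)."""
    return abs(s) ** (p - 2) * s if s != 0 else 0.0

p = 3.0
q = p / (p - 1)          # conjugate exponent: 1/p + 1/q = 1

# Phi_q is the inverse of Phi_p: Phi_q(Phi_p(s)) = s for every real s.
for s in (-2.0, -0.5, 0.0, 1.7):
    assert math.isclose(phi(q, phi(p, s)), s, abs_tol=1e-12)
```

The guard at \(s=0\) avoids evaluating \(0^{p-2}\) with a negative exponent when \(1<p<2\).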

Written componentwise, system (1.4) reads

$$ \textstyle\begin{cases} \begin{pmatrix} D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{1}(t))) \\ D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{2}(t))) \\ \vdots \\ D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{n}(t))) \end{pmatrix} =\kappa \begin{pmatrix} f_{1}(t,\boldsymbol{u},D_{0+}^{\alpha _{1}}\boldsymbol{u}) \\ f_{2}(t,\boldsymbol{u},D_{0+}^{\alpha _{1}}\boldsymbol{u}) \\ \vdots \\ f_{n}(t,\boldsymbol{u},D_{0+}^{\alpha _{1}}\boldsymbol{u}) \end{pmatrix}, \\ \begin{pmatrix} u_{1}(0) \\ u_{2}(0) \\ \vdots \\ u_{n}(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\qquad \begin{pmatrix} u_{1}(1) \\ u_{2}(1) \\ \vdots \\ u_{n}(1) \end{pmatrix} =\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}} \begin{pmatrix} u_{1}(s) \\ u_{2}(s) \\ \vdots \\ u_{n}(s) \end{pmatrix} \,d \begin{pmatrix} A_{1}(s) & 0 & \cdots & 0 \\ 0 & A_{2}(s) & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{n}(s) \end{pmatrix}, \\ \begin{pmatrix} D_{0+}^{\alpha _{1}}u_{1}(0) \\ D_{0+}^{\alpha _{1}}u_{2}(0) \\ \vdots \\ D_{0+}^{\alpha _{1}}u_{n}(0) \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix},\qquad \begin{pmatrix} \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{1}(1)) \\ \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{2}(1)) \\ \vdots \\ \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{n}(1)) \end{pmatrix} =\lambda \begin{pmatrix} \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{1}(\eta )) \\ \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{2}(\eta )) \\ \vdots \\ \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{n}(\eta )) \end{pmatrix}. \end{cases} $$
(1.5)

Then (1.5) yields, respectively,

$$\begin{aligned} &\textstyle\begin{cases} D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{1}(t)))=\kappa f _{1}(t,u_{1},u_{2},\ldots ,u_{n},D_{0+}^{\alpha _{1}}u_{1},D_{0+}^{ \alpha _{1}}u_{2},\ldots ,D_{0+}^{\alpha _{1}}u_{n}), \\ D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{2}(t)))=\kappa f _{2}(t,u_{1},u_{2},\ldots ,u_{n},D_{0+}^{\alpha _{1}}u_{1},D_{0+}^{ \alpha _{1}}u_{2},\ldots ,D_{0+}^{\alpha _{1}}u_{n}), \\ \vdots \\ D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{n}(t)))=\kappa f _{n}(t,u_{1},u_{2},\ldots ,u_{n},D_{0+}^{\alpha _{1}}u_{1},D_{0+}^{ \alpha _{1}}u_{2},\ldots ,D_{0+}^{\alpha _{1}}u_{n}), \end{cases}\displaystyle \end{aligned}$$
(1.6)
$$\begin{aligned} &\textstyle\begin{cases} u_{1}(0)=0, \qquad u_{1}(1)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}u_{1}(s)\,dA_{1}(s), \\ u_{2}(0)=0, \qquad u_{2}(1)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}u_{2}(s)\,dA_{2}(s), \\ \vdots \\ u_{n}(0)=0,\qquad u_{n}(1)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}u_{n}(s)\,dA_{n}(s), \end{cases}\displaystyle \end{aligned}$$
(1.7)
$$\begin{aligned} &\textstyle\begin{cases} D_{0+}^{\alpha _{1}}u_{1}(0)=0,\qquad \varPhi _{p}(D_{0+}^{\alpha _{1}}u _{1}(1))=\lambda \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{1}(\eta )), \\ D_{0+}^{\alpha _{1}}u_{2}(0)=0,\qquad \varPhi _{p}(D_{0+}^{\alpha _{1}}u _{2}(1))=\lambda \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{2}(\eta )), \\ \vdots \\ D_{0+}^{\alpha _{1}}u_{n}(0)=0,\qquad \varPhi _{p}(D_{0+}^{\alpha _{1}}u _{n}(1))=\lambda \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{n}(\eta )). \end{cases}\displaystyle \end{aligned}$$
(1.8)

Our model has the following features. Firstly, the equations involve fractional derivatives; if \(\alpha _{1}\) and \(\alpha _{2}\) both equal 2, our equations degenerate into the model in [27]. Secondly, the nonlinear terms depend not only on the vector-valued function but also on its fractional derivative. Thirdly, the boundary conditions are mixed multi-point and multi-strip boundary conditions.

In addition, we make the following assumptions throughout:

  1. (F1)

    \(f_{j}:[0,1]\times \mathbb{R}_{+}^{n}\times \mathbb{R}_{-}^{n}\rightarrow \mathbb{R}_{+}\) is continuous for \(j=1,2,\ldots ,n\);

  2. (F2)

    \(A_{j}(s)\) is a monotone nondecreasing function for \(j=1,2,\ldots ,n\);

    Let \(\Delta _{j}=1-\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}s ^{\alpha _{1}-1} \,dA_{j}(s)\); we assume \(0<\Delta _{j}<1\) for \(j=1,2,\ldots ,n\);

  3. (F3)

    \(\lambda >0\) with \(\lambda \eta ^{\alpha _{2}-1}<1\).

The structure of this paper is as follows. In Sect. 2, we give some necessary preliminaries, which will be used in the main proof. In Sect. 3, we establish the existence results of positive solutions by using the Leggett–Williams fixed point theorem. In Sect. 4, we investigate the nonexistence results of positive solutions. In Sect. 5, we illustrate two examples to demonstrate the main results.

2 Preliminaries

In this section, we consider the n-dimensional fractional-order system (1.4) and recall some indispensable definitions and lemmas.

Definition 2.1

The Riemann–Liouville fractional integral of order \(\alpha >0\) of a function \(f:(0,\infty )\rightarrow \mathbb{R}\) is given by

$$ I_{0+}^{\alpha }f(t)=\frac{1}{\varGamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}f(s)\,ds, $$

provided the right-hand side is pointwise defined on \((0,\infty )\), where \(\varGamma (\alpha )\) is the Euler gamma function defined by \(\varGamma (\alpha )=\int _{0}^{\infty }t^{\alpha -1}e^{-t}\,dt\) for \(\alpha >0\).
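As a numerical illustration of Definition 2.1 (not part of the original argument), the Riemann–Liouville integral can be approximated by a midpoint rule and compared with a known closed form, namely \(I_{0+}^{\alpha }\) applied to \(f(s)=s\) gives \(t^{\alpha +1}/\varGamma (\alpha +2)\). The sample values \(\alpha =1.5\), \(t=0.8\) and the step count are our arbitrary choices:

```python
import math

def rl_integral(alpha, f, t, n=20000):
    """Midpoint-rule approximation of the Riemann-Liouville integral I_{0+}^alpha f(t).

    The midpoint rule never evaluates the kernel at s = t, where (t - s)^(alpha - 1)
    may be singular for alpha < 1.
    """
    h = t / n
    acc = sum((t - (k + 0.5) * h) ** (alpha - 1) * f((k + 0.5) * h) for k in range(n))
    return acc * h / math.gamma(alpha)

# Closed form to compare against: I^alpha applied to f(s) = s is t^(alpha+1)/Gamma(alpha+2).
alpha, t = 1.5, 0.8
exact = t ** (alpha + 1) / math.gamma(alpha + 2)
assert math.isclose(rl_integral(alpha, lambda s: s, t), exact, rel_tol=1e-3)
```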

Definition 2.2

The Riemann–Liouville fractional derivative of order \(\alpha >0\) for a function \(f:(0,\infty )\rightarrow \mathbb{R}\) is given by

$$ D_{0+}^{\alpha }f(t)=\frac{1}{\varGamma (n-\alpha )} \biggl( \frac{d}{dt} \biggr)^{n} \int _{0}^{t}(t-s)^{n-\alpha -1}f(s)\,ds, $$

where \(n=[\alpha ]+1\), \([\alpha ]\) stands for the largest integer not greater than α.

According to the definition of the Riemann–Liouville derivative, the following lemma can be obtained.

Lemma 2.1

For \(\alpha >0\), if we assume that \(u\in C[0,\infty ) \cap L^{1}(0,1)\), then we have

$$ I_{0+}^{\alpha }\bigl(D_{0+}^{\alpha }u(t) \bigr)=u(t)+m_{1}t^{\alpha -1}+m_{2}t ^{\alpha -2}+\cdots +m_{n}t^{\alpha -n} $$

for some \(m_{i}\in \mathbb{R}\), \(i=1,2,\ldots,n\), where n is the smallest integer greater than or equal to α.

Definition 2.3

Let E be a real Banach space. A nonempty, closed, and convex set \(K\subset E\) is a cone if the following two conditions are satisfied:

  1. (1)

    if \(x\in K\) and \(\mu \geq 0\), then \(\mu x\in K\);

  2. (2)

    if \(x\in K\) and \(-x\in K\), then \(x=0\).

Every cone \(K\subset E\) induces the ordering in E given by \(x_{1}\leq x_{2}\) if and only if \(x_{2}-x_{1}\in K\).

Definition 2.4

The map γ is said to be a continuous nonnegative convex (concave) function on a cone K of a real Banach space E provided that \(\gamma :K\rightarrow [0,\infty )\) is continuous and

$$ \gamma \bigl(tx+(1-t)y\bigr)\leq (\geq )t\gamma (x)+(1-t)\gamma (y),\quad x,y \in K,t \in [0,1]. $$

For \(h_{j}(t)\in C(0,1)\cap L^{1}(0,1)\), \(j=1,2,\ldots ,n\), we consider one component of the linear auxiliary problem corresponding to (1.6)–(1.8):

$$ \textstyle\begin{cases} D_{0+}^{\alpha _{2}}(\varPhi _{p}(D_{0+}^{\alpha _{1}}u_{j}(t)))=h_{j}(t),\quad t\in (0,1),\\ u_{j}(0)=0, \qquad u_{j}(1)=\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(s)\,dA_{j}(s),\\ D_{0+}^{\alpha _{1}}u_{j}(0)=0,\qquad \varPhi _{p}(D_{0+}^{\alpha _{1}}u_{j}(1))=\lambda \varPhi _{p}(D_{0+} ^{\alpha _{1}}u_{j}(\eta )). \end{cases} $$
(2.1)

By means of the transformation

$$ \varPhi _{p}\bigl(D_{0+}^{\alpha _{1}}u_{j}(t) \bigr)=-v_{j}(t), $$
(2.2)

we can convert equation (2.1) into

$$ \textstyle\begin{cases} D_{0+}^{\alpha _{1}}u_{j}(t)=-\varPhi _{q}(v_{j}(t)),\quad t\in (0,1),\\ u_{j}(0)=0, \qquad u_{j}(1)=\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(s)\,dA_{j}(s) \end{cases} $$
(2.3)

and

$$ \textstyle\begin{cases} D_{0+}^{\alpha _{2}}v_{j}(t)=-h_{j}(t),\quad t\in (0,1),\\ v_{j}(0)=0,\qquad v_{j}(1)=\lambda v_{j}(\eta ), \end{cases} $$
(2.4)

where \(\varPhi _{q}=\varPhi _{p}^{-1}\), \(\frac{1}{p}+\frac{1}{q}=1\).

For \(k=1,2\), define the Green’s function as follows:

$$ G_{k}(t,s)=\frac{1}{\varGamma (\alpha _{k})} \textstyle\begin{cases} t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1}-(t-s)^{\alpha _{k}-1},\quad 0\leq s \leq t \leq 1, \\ t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1},\quad 0\leq t\leq s\leq 1. \end{cases} $$
(2.5)

Lemma 2.2

Boundary value problem (2.3) has a unique solution

$$ u_{j}(t)= \int _{0}^{1}H_{j}(t,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds, $$
(2.6)

where

$$ H_{j}(t,s)=G_{1}(t,s)+\frac{t^{\alpha _{1}-1}}{\Delta _{j}} \sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}G_{1}(\tau ,s) \,dA_{j}(\tau ), $$
(2.7)

and \(G_{1}(t,s)\) is given by (2.5) with \(k=1\).

Proof

From Lemma 2.1, we can reduce \(D_{0+}^{\alpha _{1}}u_{j}(t)=-\varPhi _{q}(v_{j}(t))\) to the following equivalent equation:

$$ u_{j}(t) =-\frac{1}{\varGamma (\alpha _{1})} \int _{0}^{t}(t-s)^{\alpha _{1}-1} \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+c_{1}t^{\alpha _{1}-1}+c_{2}t^{\alpha _{1}-2}, $$
(2.8)

where \(c_{1}\) and \(c_{2}\) are arbitrary real constants.

According to \(u_{j}(0)=0\), we have \(c_{2}=0\), thus

$$ u_{j}(1)=-\frac{1}{\varGamma (\alpha _{1})} \int _{0}^{1}(1-s)^{\alpha _{1}-1} \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+c_{1}, $$

with \(u_{j}(1)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}u_{j}(s)\,dA _{j}(s)\), we get

$$ c_{1}=\frac{1}{\varGamma (\alpha _{1})} \int _{0}^{1}(1-s)^{\alpha _{1}-1} \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+\sum _{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u _{j}(s) \,dA_{j}(s) $$

and

$$\begin{aligned} u_{j}(t)={}&{-}\frac{1}{\varGamma (\alpha _{1})} \int _{0}^{t}(t-s)^{\alpha _{1}-1} \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds \\ &{}+\frac{1}{\varGamma (\alpha _{1})} \int _{0}^{1}t^{ \alpha _{1}-1}(1-s)^{\alpha _{1}-1} \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds \\ &{} +t^{\alpha _{1}-1}\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(s)\,dA _{j}(s) \\ ={}& \int _{0}^{1}G_{1}(t,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+t^{\alpha _{1}-1}\sum _{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(s) \,dA_{j}(s), \end{aligned}$$
(2.9)

where \(G_{1}(t,s)\) is given by (2.5). Because of

$$\begin{aligned} &\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(\tau ) \,dA_{j}( \tau ) \\ &\quad=\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}} \Biggl[ \int _{0}^{1}G_{1}( \tau ,s) \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+\tau ^{\alpha _{1}-1}\sum_{i=1}^{m}b _{i} \int _{0}^{\xi _{i}}u_{j}(s) \,dA_{j}(s) \Biggr]\,dA_{j}(\tau ) \\ &\quad=\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}} \int _{0}^{1}G_{1}( \tau ,s) \varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds \,dA_{j}(\tau )\\ &\qquad{}+\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}\tau ^{\alpha _{1}-1} \,dA_{j}(\tau )\sum_{i=1} ^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(s) \,dA_{j}(s), \end{aligned}$$

we get

$$\begin{aligned} &\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}u_{j}(\tau ) \,dA_{j}( \tau ) \\ &\quad=\frac{1}{\Delta _{j}}\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}} \int _{0}^{1}G_{1}(\tau ,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds \,dA_{j}(\tau ). \end{aligned}$$
(2.10)

Thus, we have

$$\begin{aligned} u_{j}(t) &= \int _{0}^{1}G_{1}(t,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds+\frac{t^{\alpha _{1}-1}}{\Delta _{j}}\sum _{i=1}^{m}b_{i} \int _{0}^{\xi _{i}} \int _{0}^{1}G_{1}(\tau ,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds\,dA_{j}(\tau ) \\ &= \int _{0}^{1}H_{j}(t,s)\varPhi _{q}\bigl(v_{j}(s)\bigr)\,ds, \end{aligned}$$
(2.11)

where \(H_{j}(t,s)\) is given by (2.7).

This completes the proof of the lemma. □
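The kernel \(H_{j}(t,s)\) of Lemma 2.2 can be checked numerically: it inherits the boundary conditions of (2.3) pointwise in s, i.e., \(H_{j}(0,s)=0\) and \(H_{j}(1,s)=\sum_{i=1}^{m}b_{i}\int _{0}^{\xi _{i}}H_{j}(\tau ,s)\,dA_{j}(\tau )\). The sketch below is an illustration under assumed sample data (\(A_{j}(s)=s\), \(\alpha _{1}=1.6\), and the values of \(b_{i}\), \(\xi _{i}\), s are our choices, not from the paper):

```python
import math

def G1(alpha, t, s):
    """Green's function (2.5) with k = 1."""
    if s <= t:
        return (t ** (alpha - 1) * (1 - s) ** (alpha - 1)
                - (t - s) ** (alpha - 1)) / math.gamma(alpha)
    return t ** (alpha - 1) * (1 - s) ** (alpha - 1) / math.gamma(alpha)

def strip_int(f, a, n=4000):
    """Midpoint-rule approximation of int_0^a f(tau) dtau."""
    h = a / n
    return sum(f((k + 0.5) * h) for k in range(n)) * h

alpha1 = 1.6                      # sample order, 1 < alpha1 <= 2
b, xi = [0.5, 0.3], [0.3, 0.6]    # sample multi-strip data (illustrative)
# With A_j(s) = s we have dA_j(s) = ds, so
# Delta_j = 1 - sum_i b_i * int_0^{xi_i} s^(alpha1-1) ds = 1 - sum_i b_i * xi_i^alpha1 / alpha1.
Delta = 1 - sum(bi * x ** alpha1 / alpha1 for bi, x in zip(b, xi))
assert 0 < Delta < 1              # assumption (F2)

s = 0.45                          # fixed second argument of the kernel
c = sum(bi * strip_int(lambda tau: G1(alpha1, tau, s), x) for bi, x in zip(b, xi))

def Hj(t):
    """Kernel H_j(t, s) of (2.7) for the fixed s above; c does not depend on t."""
    return G1(alpha1, t, s) + t ** (alpha1 - 1) / Delta * c

# H_j inherits the boundary conditions of (2.3):
assert Hj(0.0) == 0.0
rhs = sum(bi * strip_int(Hj, x) for bi, x in zip(b, xi))
assert math.isclose(Hj(1.0), rhs, rel_tol=1e-3)
```

Since \(G_{1}(1,s)=0\), the left-hand value reduces to \(c/\Delta _{j}\), which matches the right-hand sum up to quadrature error.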

Lemma 2.3

Boundary value problem (2.4) has a unique solution

$$ v_{j}(t)= \int _{0}^{1}H(t,s)h_{j}(s)\,ds, $$
(2.12)

where

$$ H(t,s)=G_{2}(t,s)+\frac{\lambda t^{\alpha _{2}-1}}{1-\lambda \eta ^{\alpha _{2}-1}}G_{2}( \eta ,s), $$
(2.13)

and \(G_{2}(t,s)\) is given by (2.5) with \(k=2\).

Proof

From Lemma 2.1, we can reduce \(D_{0+}^{\alpha _{2}}v_{j}(t)=-h_{j}(t)\) to

$$ v_{j}(t) =-\frac{1}{\varGamma (\alpha _{2})} \int _{0}^{t}(t-s)^{\alpha _{2}-1}h_{j}(s) \,ds+d_{1}t^{\alpha _{2}-1}+d_{2}t^{\alpha _{2}-2}, $$
(2.14)

where \(d_{1}\) and \(d_{2}\) are arbitrary real constants.

According to \(v_{j}(0)=0\), we have \(d_{2}=0\). Thus

$$ v_{j}(1)=-\frac{1}{\varGamma (\alpha _{2})} \int _{0}^{1}(1-s)^{\alpha _{2}-1}h _{j}(s)\,ds+d_{1}, $$

with \(v_{j}(1)=\lambda v_{j}(\eta )\), we get

$$ d_{1}=\frac{1}{\varGamma (\alpha _{2})} \int _{0}^{1}(1-s)^{\alpha _{2}-1}h _{j}(s)\,ds+\lambda v_{j}(\eta ) $$

and

$$\begin{aligned} v_{j}(t)& =-\frac{1}{\varGamma (\alpha _{2})} \int _{0}^{t}(t-s)^{\alpha _{2}-1}h _{j}(s)\,ds+\frac{1}{\varGamma (\alpha _{2})} \int _{0}^{1}t^{\alpha _{2}-1}(1-s)^{ \alpha _{2}-1}h_{j}(s) \,ds+\lambda t^{\alpha _{2}-1}v_{j}(\eta ) \\ &= \int _{0}^{1}G_{2}(t,s)h_{j}(s) \,ds+\lambda t^{\alpha _{2}-1}v_{j}( \eta ), \end{aligned}$$
(2.15)

where \(G_{2}(t,s)\) is given by (2.5).

From \(v_{j}(\eta )=\frac{1}{1-\lambda \eta ^{\alpha _{2}-1}}\int _{0} ^{1}G_{2}(\eta ,s)h_{j}(s)\,ds\), we get

$$\begin{aligned} v_{j}(t) &= \int _{0}^{1}G_{2}(t,s)h_{j}(s) \,ds+\frac{\lambda t^{\alpha _{2}-1}}{1-\lambda \eta ^{\alpha _{2}-1}} \int _{0}^{1}G_{2}(\eta ,s)h _{j}(s)\,ds \\ &= \int _{0}^{1}H(t,s)h_{j}(s)\,ds, \end{aligned}$$
(2.16)

where \(H(t,s)\) is given by (2.13).

This completes the proof of the lemma. □
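Similarly, the kernel \(H(t,s)\) of Lemma 2.3 carries the three-point condition of (2.4) pointwise in s: \(H(0,s)=0\) and \(H(1,s)=\lambda H(\eta ,s)\), so any \(v_{j}\) of the form (2.12) satisfies \(v_{j}(0)=0\) and \(v_{j}(1)=\lambda v_{j}(\eta )\). A numerical check under assumed sample values (\(\alpha _{2}=1.8\), \(\lambda =0.6\), \(\eta =0.4\) are our illustrative choices):

```python
import math

def G2(alpha, t, s):
    """Green's function (2.5) with k = 2."""
    if s <= t:
        return (t ** (alpha - 1) * (1 - s) ** (alpha - 1)
                - (t - s) ** (alpha - 1)) / math.gamma(alpha)
    return t ** (alpha - 1) * (1 - s) ** (alpha - 1) / math.gamma(alpha)

alpha2, lam, eta = 1.8, 0.6, 0.4          # sample values (illustrative)
assert 0 < lam * eta ** (alpha2 - 1) < 1  # assumption (F3)

def H(t, s):
    """Kernel H(t, s) of (2.13)."""
    return (G2(alpha2, t, s)
            + lam * t ** (alpha2 - 1) / (1 - lam * eta ** (alpha2 - 1)) * G2(alpha2, eta, s))

# H(0, s) = 0 and H(1, s) = lambda * H(eta, s) for every s in [0, 1].
for j in range(51):
    s = j / 50
    assert abs(H(0.0, s)) < 1e-15
    assert math.isclose(H(1.0, s), lam * H(eta, s), abs_tol=1e-12)
```

The identity \(H(1,s)=\lambda H(\eta ,s)\) follows because \(G_{2}(1,s)=0\), so both sides equal \(\lambda G_{2}(\eta ,s)/(1-\lambda \eta ^{\alpha _{2}-1})\).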

Combining the above, equation (2.1) has the unique solution

$$ u_{j}(t)= \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )h _{j}(\tau )\,d\tau \biggr)\,ds, \quad j=1,2,\ldots ,n. $$
(2.17)

Next, we present some properties of \(G_{1}(t,s)\), \(G_{2}(t,s)\), \(H_{j}(t,s)\), and \(H(t,s)\).

Lemma 2.4

Suppose that θ is a positive constant satisfying \(0<\theta < \frac{1}{2}<1-\theta <1\). Then \(G_{1}(t,s)\), \(G_{2}(t,s)\), \(H_{j}(t,s)\), and \(H(t,s)\) satisfy the following properties:

  1. (a)

    For \(t,s\in [0,1]\), \(k=1,2\), \(0\leq G_{k}(t,s)\leq \frac{1}{ \varGamma (\alpha _{k})}(1-s)^{\alpha _{k}-1}\);

  2. (b)

    For \(t,s\in [\theta ,1-\theta ]\), \(k=1,2\),

    $$ \frac{1}{\varGamma (\alpha _{k}-1)}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1}(1-t)s \leq G_{k}(t,s)\leq \frac{1}{\varGamma (\alpha _{k})}(1-s)^{\alpha _{k}-1}; $$
  3. (c)

    For \(t,s\in [\theta ,1-\theta ]\), \(j=1,2,\ldots ,n\),

    $$\begin{aligned} &\rho _{j}\frac{M_{j}}{\varGamma (\alpha _{1})} (1-s)^{\alpha _{1}-1}\leq H _{j}(t,s)\leq \frac{M_{j}}{\varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1}, \\ &\frac{1}{\varGamma (\alpha _{2}-1)}t^{\alpha _{2}-1}(1-s)^{\alpha _{2}-1}(1-t)s \leq H(t,s) \leq \frac{M}{\varGamma (\alpha _{2})}(1-s)^{\alpha _{2}-1}, \end{aligned}$$

    where

    $$\begin{aligned} &M=1+\frac{\lambda }{1-\lambda \eta ^{\alpha _{2}-1}},\qquad \rho _{j}=\frac{\alpha _{1}-1}{M_{j}}\theta ^{2}(1-\theta ), \\ &M_{j}=1+\frac{1}{\Delta _{j}}\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}\,dA_{j}(s),\quad j=1,2, \ldots ,n. \end{aligned}$$

Proof

(a) For \(0\leq s\leq t\leq 1\), we have

$$\begin{aligned} G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}\bigl[t^{\alpha _{k}-1}(1-s)^{ \alpha _{k}-1}-(t-s)^{\alpha _{k}-1} \bigr] \\ &=\frac{1}{\varGamma (\alpha _{k})}\bigl[(t-ts)^{\alpha _{k}-1}-(t-s)^{\alpha _{k}-1} \bigr] \\ &\geq 0; \\ G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}\bigl[t^{\alpha _{k}-1}(1-s)^{ \alpha _{k}-1}-(t-s)^{\alpha _{k}-1} \bigr] \\ &\leq \frac{1}{\varGamma (\alpha _{k})}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1} \\ &\leq \frac{1}{\varGamma (\alpha _{k})}(1-s)^{\alpha _{k}-1}. \end{aligned}$$

For \(0\leq t\leq s \leq 1\), we have

$$\begin{aligned} G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1} \geq 0 ; \\ G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1} \\ &\leq \frac{1}{\varGamma (\alpha _{k})}(1-s)^{\alpha _{k}-1}. \end{aligned}$$

(b) For \(\theta \leq s\leq t\leq 1-\theta \), we have

$$\begin{aligned} G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}\bigl[t^{\alpha _{k}-1}(1-s)^{ \alpha _{k}-1}-(t-s)^{\alpha _{k}-1} \bigr] \\ &=\frac{\alpha _{k}-1}{\varGamma (\alpha _{k})} \int _{t-s}^{t(1-s)}x^{ \alpha _{k}-2}\,dx \\ &\geq \frac{1}{\varGamma (\alpha _{k}-1)}\bigl[t(1-s)\bigr]^{\alpha _{k}-2}\bigl[t(1-s)-(t-s) \bigr] \\ &\geq \frac{1}{\varGamma (\alpha _{k}-1)}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1}(1-t)s. \end{aligned}$$

For \(\theta \leq t\leq s\leq 1-\theta \), we have

$$\begin{aligned} G_{k}(t,s) &=\frac{1}{\varGamma (\alpha _{k})}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1} \\ &\geq \frac{1}{\varGamma (\alpha _{k})}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1}(1-t)s \\ &\geq \frac{1}{\varGamma (\alpha _{k}-1)}t^{\alpha _{k}-1}(1-s)^{\alpha _{k}-1}(1-t)s. \end{aligned}$$

(c) For \(t,s\in [0,1]\), we have

$$\begin{aligned} H_{j}(t,s) &=G_{1}(t,s)+ \frac{t^{\alpha _{1}-1}}{\Delta _{j}}\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}G_{1}(\tau ,s) \,dA_{j}(\tau ) \\ &\leq \frac{1}{\varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1}+\frac{1}{ \Delta _{j}}\sum _{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}\frac{1}{ \varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1} \,dA_{j}(\tau ) \\ &=\frac{M_{j}}{\varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1}, \end{aligned}$$
(2.18)

for \(t,s\in [\theta ,1-\theta ]\), we have

$$\begin{aligned} H_{j}(t,s) &=G_{1}(t,s)+ \frac{t^{\alpha _{1}-1}}{\Delta _{j}}\sum_{i=1}^{m}b_{i} \int _{0}^{\xi _{i}}G_{1}(\tau ,s) \,dA_{j}(\tau ) \\ &\geq G_{1}(t,s)\geq \frac{\alpha _{1}-1}{\varGamma (\alpha _{1})}t^{ \alpha _{1}-1}(1-s)^{\alpha _{1}-1}(1-t)s \\ &\geq \frac{\alpha _{1}-1}{M_{j}}t(1-t)s\frac{M_{j}}{\varGamma (\alpha _{1})} (1-s)^{\alpha _{1}-1} \\ &\geq \frac{\alpha _{1}-1}{M_{j}}\theta ^{2}(1-\theta ) \frac{M_{j}}{ \varGamma (\alpha _{1})} (1-s)^{\alpha _{1}-1} \\ &=\frac{\rho _{j} M_{j}}{\varGamma (\alpha _{1})} (1-s)^{\alpha _{1}-1}. \end{aligned}$$
(2.19)

And for \(t,s\in [0,1]\), we have

$$\begin{aligned} H(t,s) &=G_{2}(t,s)+\frac{\lambda t^{\alpha _{2}-1}}{1-\lambda \eta ^{\alpha _{2}-1}}G_{2}(\eta ,s) \\ &\leq \frac{1}{\varGamma (\alpha _{2})}(1-s)^{\alpha _{2}-1}+\frac{\lambda }{1-\lambda \eta ^{\alpha _{2}-1}} \frac{1}{\varGamma (\alpha _{2})}(1-s)^{ \alpha _{2}-1} \\ &\leq \frac{M}{\varGamma (\alpha _{2})}(1-s)^{\alpha _{2}-1}, \end{aligned}$$

for \(t,s\in [\theta ,1-\theta ]\), we have

$$\begin{aligned} H(t,s) &=G_{2}(t,s)+\frac{\lambda t^{\alpha _{2}-1}}{1-\lambda \eta ^{\alpha _{2}-1}}G_{2}(\eta ,s) \\ &\geq G_{2}(t,s)\geq \frac{1}{\varGamma (\alpha _{2}-1)}t^{\alpha _{2}-1}(1-s)^{ \alpha _{2}-1}(1-t)s. \end{aligned}$$

Then the proof is completed. □
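Property (a) of Lemma 2.4 can also be verified numerically on a grid. The sketch below (sample orders in \((1,2]\) and grid size are our arbitrary choices) checks the two-sided bound \(0\leq G_{k}(t,s)\leq \frac{1}{\varGamma (\alpha _{k})}(1-s)^{\alpha _{k}-1}\) on \([0,1]^{2}\):

```python
import math

def G(alpha, t, s):
    """Green's function (2.5)."""
    if s <= t:
        return (t ** (alpha - 1) * (1 - s) ** (alpha - 1)
                - (t - s) ** (alpha - 1)) / math.gamma(alpha)
    return t ** (alpha - 1) * (1 - s) ** (alpha - 1) / math.gamma(alpha)

# Property (a): 0 <= G_k(t, s) <= (1 - s)^(alpha_k - 1) / Gamma(alpha_k) on [0, 1]^2.
for alpha in (1.2, 1.5, 2.0):     # sample orders in (1, 2]
    for i in range(41):
        for j in range(41):
            t, s = i / 40, j / 40
            g = G(alpha, t, s)
            assert -1e-12 <= g <= (1 - s) ** (alpha - 1) / math.gamma(alpha) + 1e-12
```

For \(\alpha _{k}=2\) the kernel reduces to the classical second-order Green's function \(G(t,s)=s(1-t)\) for \(s\leq t\) (and \(t(1-s)\) for \(t\leq s\)), which is a quick sanity check of the formula.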

Let \(J=[0,1]\), \(I=[\theta ,1-\theta ]\), \(E= \{u(t) \vert u(t)\in C[0,1]\) and \(D_{0^{+}}^{\alpha _{1}}u(t)\in C[0,1] \}\), and \(U=\underbrace{E\times E\times \cdots \times E} _{n}\). For \(\boldsymbol{u}=(u_{1},u_{2},\ldots ,u_{n})^{T} \in U\), define the norms as follows:

$$ \Vert \boldsymbol{u} \Vert _{1}=\sum _{j=1}^{n}\sup_{t \in J} \bigl\vert u_{j}(t) \bigr\vert , \qquad \Vert \boldsymbol{u} \Vert _{2}=\sum_{j=1}^{n}\sup _{t \in J} \bigl\vert D_{0^{+}}^{\alpha _{1}}u_{j}(t) \bigr\vert ,\qquad \Vert \boldsymbol{u} \Vert =\max \bigl\{ \Vert \boldsymbol{u} \Vert _{1}, \Vert \boldsymbol{u} \Vert _{2} \bigr\} . $$

Then \((U,\Vert \cdot \Vert )\) is a real Banach space.

Define set K in U by

$$ K= \Biggl\{ \boldsymbol{u}\in U:u_{j}(t)\geq 0,D_{0^{+}}^{\alpha _{1}}u _{j}(t)\leq 0, \min_{t\in I}\sum _{j=1}^{n}u_{j}(t)\geq \rho \Vert \boldsymbol{u} \Vert _{1}, j=1,2,\ldots ,n \Biggr\} , $$
(2.20)

where

$$ \rho _{j}=\frac{\alpha _{1}-1}{M_{j}}\theta ^{2}(1-\theta ),\qquad \rho = \min_{1\leq j\leq n}\rho _{j}. $$

From \(M_{j}\geq 1\), we can get \(0<\rho <1\).

For \(\boldsymbol{u},\boldsymbol{v}\in K\) and \(m_{1},m_{2}\geq 0\), it is not difficult to see that

$$ m_{1}\boldsymbol{u}(t)+m_{2}\boldsymbol{v}(t)\geq 0,\qquad D_{0^{+}} ^{\alpha _{1}}\bigl(m_{1}\boldsymbol{u}(t)+m_{2} \boldsymbol{v}(t)\bigr)=m_{1} D _{0^{+}}^{\alpha _{1}} \boldsymbol{u}(t)+m_{2} D_{0^{+}}^{\alpha _{1}} \boldsymbol{v}(t)\leq 0, $$

and

$$\begin{aligned} &\min_{t\in I} \Biggl\{ \sum _{j=1}^{n} m_{1} u_{j}(t)+ \sum_{j=1}^{n} m_{2} v_{j}(t) \Biggr\} \\ &\quad\geq\min_{t\in I} \Biggl\{ \sum_{j=1}^{n} m_{1} u_{j}(t) \Biggr\} +\min_{t\in I} \Biggl\{ \sum_{j=1}^{n} m_{2} v_{j}(t) \Biggr\} \\ &\quad\geq\rho m_{1} \Vert \boldsymbol{u} \Vert _{1}+ \rho m_{2} \Vert \boldsymbol{v} \Vert _{1}=\rho \bigl(m_{1} \Vert \boldsymbol{u} \Vert _{1}+m _{2} \Vert \boldsymbol{v} \Vert _{1}\bigr) \\ &\quad\geq\rho \Vert m_{1}\boldsymbol{u}+m_{2} \boldsymbol{v} \Vert _{1}. \end{aligned}$$
(2.21)

Thus, for \(\boldsymbol{u},\boldsymbol{v}\in K\) and \(m_{1},m_{2} \geq 0\), we have \(m_{1}\boldsymbol{u}+m_{2}\boldsymbol{v}\in K\). Moreover, if \(\boldsymbol{u}\in K\) and \(\boldsymbol{u}\neq 0\), it is easy to see that \(-\boldsymbol{u}\notin K\). Therefore, K is a cone in U.

Let \(\boldsymbol{T}:K\rightarrow U\) be the map with components \(T_{1},T_{2},\ldots ,T_{n}\); that is, \(\boldsymbol{Tu}=(T_{1}\boldsymbol{u},T_{2}\boldsymbol{u},\ldots ,T_{n}\boldsymbol{u})^{T}\), where

$$ (T_{j}\boldsymbol{u}) (t)= \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0} ^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D_{0^{+}}^{ \alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds. $$
(2.22)

From Lemma 2.2 and Lemma 2.3, we have the following remark.

Remark 2.1

From (2.22), we know that \(\boldsymbol{u}\in U\) is a solution of system (1.4) if and only if u is a fixed point of the map T.

Lemma 2.5

\(\boldsymbol{T}:K\rightarrow K\)is completely continuous.

Proof

For all \(\boldsymbol{u}\in K\), by the continuity and nonnegativity of \(f_{j}(t,\boldsymbol{u}(t),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u}(t))\), \(H_{j}(t,s)\) and \(H(t,s)\), T is continuous and \((\boldsymbol{T}\boldsymbol{u})(t)\geq 0\), \(D_{0^{+}}^{\alpha _{1}}( \boldsymbol{T}\boldsymbol{u})(t)\leq 0\). Furthermore, from (2.18) and (2.19), we have

$$\begin{aligned} &\min_{t\in I}\sum_{j=1}^{n}(T_{j} \boldsymbol{u}) (t) \\ &\quad=\min_{t\in I}\sum_{j=1}^{n} \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}( \tau ),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &\quad\geq\sum_{j=1}^{n} \int _{0}^{1}\min_{t\in I}H_{j}(t,s) \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}( \tau ),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &\quad=\sum_{j=1}^{n} \int _{0}^{1}\frac{\rho _{j} M_{j}}{\varGamma ( \alpha _{1})}(1-s)^{\alpha _{1}-1} \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau ) \kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &\quad\geq\sum_{j=1}^{n} \int _{0}^{1}\rho \sup_{t\in J}H_{j}(t,s) \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}( \tau ),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &\quad\geq\rho \sum_{j=1}^{n}\sup _{t\in J} \bigl\vert T_{j} \boldsymbol{u}(t) \bigr\vert =\rho \Vert \boldsymbol{T}\boldsymbol{u} \Vert _{1}. \end{aligned}$$
(2.23)

Hence \(\boldsymbol{T}\boldsymbol{u}\in K\) for every \(\boldsymbol{u}\in K\), that is, \(\boldsymbol{T}(K)\subseteq K\).

Next, to show that T is uniformly bounded, we show that each \(T_{j}\) is uniformly bounded. Let D be a bounded closed convex set in K; that is, there exists a positive constant l such that \(\Vert \boldsymbol{u}\Vert \leq l\) for all \(\boldsymbol{u}\in D\). Let \(M_{0}^{j}=\sup_{t\in J} \{f_{j}(t,\boldsymbol{u}(t),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u}(t)) \vert \boldsymbol{u}\in U,\Vert \boldsymbol{u}\Vert \leq l \}>0\). For any sequence \((\boldsymbol{u}_{m})_{m\in \mathbb{N}} \subset D\), we have

$$\begin{aligned} &\bigl\vert (T_{j}\boldsymbol{u}_{m}) (t) \bigr\vert \\ &\quad= \biggl\vert \int _{0}^{1}H _{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau , \boldsymbol{u}_{m}(\tau ),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u}_{m}( \tau ) \bigr)\,d\tau \biggr)\,ds \biggr\vert \\ &\quad\leq \int _{0}^{1} \bigl\vert H_{j}(t,s) \bigr\vert \varPhi _{q} \biggl( \int _{0}^{1} \bigl\vert H(s,\tau ) \bigr\vert \bigl\vert \kappa f_{j}\bigl( \tau ,\boldsymbol{u}_{m}( \tau ),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u} _{m}(\tau ) \bigr) \bigr\vert \,d\tau \biggr)\,ds \\ &\quad\leq \int _{0}^{1}\frac{M_{j}}{\varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1} \varPhi _{q} \biggl( \int _{0}^{1}\frac{M}{\varGamma (\alpha _{2})}(1-\tau )^{ \alpha _{2}-1}\kappa M_{0}^{j} \,d\tau \biggr)\,ds \\ &\quad\leq \frac{M_{j}}{\varGamma (\alpha _{1})} \biggl(\frac{\kappa M M_{0}^{j}}{ \varGamma (\alpha _{2})} \biggr)^{q-1} \int _{0}^{1}(1-s)^{\alpha _{1}-1}\varPhi _{q} \biggl( \int _{0}^{1}(1-\tau )^{\alpha _{2}-1}\,d\tau \biggr)\,ds \\ &\quad=\frac{M_{j}}{\varGamma (\alpha _{1}+1)} \biggl(\frac{\kappa M M_{0}^{j}}{ \varGamma (\alpha _{2}+1)} \biggr)^{q-1}:=N_{1}^{j}. \end{aligned}$$

Furthermore,

$$\begin{aligned} &\bigl\vert D_{0^{+}}^{\alpha _{1}}(T_{j} \boldsymbol{u}_{m}) (t) \bigr\vert \\ &\quad= \biggl\vert -\varPhi _{q} \biggl( \int _{0}^{1}H(t,s)\kappa f_{j} \bigl(s, \boldsymbol{u}_{m}(s),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}_{m}(s)\bigr)\,ds \biggr) \biggr\vert \\ &\quad=\varPhi _{q} \biggl( \int _{0}^{1}H(t,s)\kappa f_{j} \bigl(s,\boldsymbol{u}_{m}(s),D _{0^{+}}^{\alpha _{1}} \boldsymbol{u}_{m}(s)\bigr)\,ds \biggr) \\ &\quad\leq \varPhi _{q} \biggl( \int _{0}^{1}\frac{M}{\varGamma (\alpha _{2})}(1-s)^{ \alpha _{2}-1} \kappa M_{0}^{j} \,ds \biggr) \\ &\quad= \biggl(\frac{\kappa M M_{0}^{j}}{\varGamma (\alpha _{2}+1)} \biggr)^{q-1}:=N _{2}^{j}. \end{aligned}$$

Thus, \(\Vert T_{j}\boldsymbol{u}_{m}\Vert \leq \max \{N_{1}^{j},N_{2} ^{j}\}\), which implies that \(T_{j}(D)\) is uniformly bounded.

Next we show that \(((T_{j}\boldsymbol{u}_{m})(t))_{m\in \mathbb{N}}\) is equicontinuous. Because \(H_{j}(t,s)\) is continuous on \(J\times J\), it is uniformly continuous on \(J\times J\), so for any \(\varepsilon _{1}>0\) there exists \(\delta _{1}>0\) such that, for \(t_{1},t_{2}\in J\) with \(\vert t_{1}-t_{2}\vert <\delta _{1}\), \(\vert H_{j}(t_{1},s)-H_{j}(t_{2},s)\vert < \varepsilon _{1} [\frac{ \kappa M M_{0}^{j}}{\varGamma (\alpha _{2}+1)} ]^{1-q}\). We can infer that

$$\begin{aligned} & \bigl\vert (T_{j}\boldsymbol{u}_{m}) (t_{2})-(T_{j}\boldsymbol{u}_{m}) (t _{1}) \bigr\vert \\ &\quad\leq \int _{0}^{1} \bigl\vert H_{j}(t_{2},s)-H_{j}(t_{1},s) \bigr\vert \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau , \boldsymbol{u}_{m}(\tau ),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u}_{m}( \tau )\bigr)\,d\tau \biggr)\,ds \\ &\quad\leq \biggl[\frac{\kappa M M_{0}^{j}}{\varGamma (\alpha _{2}+1)} \biggr]^{q-1} \int _{0}^{1} \bigl\vert H_{j}(t_{2},s)-H_{j}(t_{1},s) \bigr\vert \,ds \\ &\quad < \varepsilon _{1}. \end{aligned}$$

On the other hand, since \(H(t,s)\) is continuous on \(J\times J\), it is uniformly continuous on \(J\times J\); then for any \(\delta _{3}>0\) there exists \(\delta _{2}>0\) such that, for any \(t_{1},t_{2} \in J\) with \(\vert t_{1}-t_{2}\vert < \delta _{2}\), we have \(\vert H(t_{1},s)-H(t_{2},s)\vert < \delta _{3}(\kappa M_{0}^{j})^{-1}\). Hence,

$$\begin{aligned} & \biggl\vert \int _{0}^{1}H(t_{2},s)\kappa f_{j}\bigl(s,\boldsymbol{u}_{m}(s),D _{0^{+}}^{\alpha _{1}}\boldsymbol{u}_{m}(s)\bigr)\,ds- \int _{0}^{1}H(t_{1},s) \kappa f_{j}\bigl(s,\boldsymbol{u}_{m}(s),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}_{m}(s)\bigr)\,ds \biggr\vert \\ &\quad\leq \int _{0}^{1} \bigl\vert H(t_{2},s)-H(t_{1},s) \bigr\vert \kappa f_{j}\bigl(s,\boldsymbol{u}_{m}(s),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u} _{m}(s)\bigr)\,ds \\ &\quad\leq \delta _{3}. \end{aligned}$$

Because \(\varPhi _{q}(s)\) is continuous, for any \(\varepsilon _{2}>0\) there exists \(\delta _{3}>0\) such that \(\vert \varPhi _{q}(s_{2})-\varPhi _{q}(s_{1})\vert < \varepsilon _{2}\) whenever \(\vert s_{2}-s_{1}\vert < \delta _{3}\); thus,

$$\begin{aligned} & \bigl\vert D_{0^{+}}^{\alpha _{1}}(T_{j} \boldsymbol{u}_{m}) (t_{2})-D_{0^{+}} ^{\alpha _{1}}(T_{j}\boldsymbol{u}_{m}) (t_{1}) \bigr\vert \\ &\quad= \biggl\vert -\varPhi _{q} \biggl( \int _{0}^{1}H(t_{2},s)\kappa f_{j}\bigl(s, \boldsymbol{u}_{m}(s),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}_{m}(s)\bigr)\,ds \biggr)\\ &\qquad{}+\varPhi _{q} \biggl( \int _{0}^{1}H(t_{1},s)\kappa f_{j}\bigl(s, \boldsymbol{u}_{m}(s),D_{0^{+}}^{\alpha _{1}} \boldsymbol{u}_{m}(s)\bigr)\,ds \biggr) \biggr\vert \\ &\quad< \varepsilon _{2}. \end{aligned}$$

Therefore, it follows from the Arzelà–Ascoli theorem that \((T _{j}\boldsymbol{u}_{m})_{m\in \mathbb{N}}\) is relatively compact on J.

Finally, we prove the continuity of \(T_{j}\). Let \(( \boldsymbol{u}_{m})_{m\in \mathbb{N}}\) be any sequence in K converging to \(\boldsymbol{u}\in K\), and let \(S>0\) be such that \(\Vert \boldsymbol{u}_{m}\Vert \leq S\) for all \(m\in \mathbb{N}\). Note that \(f_{j}(t,\boldsymbol{u},D_{0^{+}}^{\alpha _{1}}\boldsymbol{u})\) is continuous on \(J\times K_{S}\), so the dominated convergence theorem guarantees that

$$ \lim_{m\rightarrow \infty }(T_{j}\boldsymbol{u}_{m}) (t)=(T_{j} \boldsymbol{u}) (t) $$
(2.24)

and

$$ \lim_{m\rightarrow \infty }D_{0^{+}}^{\alpha _{1}}(T_{j} \boldsymbol{u} _{m}) (t)=D_{0^{+}}^{\alpha _{1}}(T_{j} \boldsymbol{u}) (t), $$
(2.25)

for each \(t\in J\). Moreover, the compactness of \(T_{j}\) implies that \((T_{j}\boldsymbol{u}_{m})(t)\) converges uniformly to \((T_{j} \boldsymbol{u})(t)\) on J. If not, then there exist \(\varepsilon _{0}>0\) and a subsequence \((\boldsymbol{u}_{m_{k}})_{k\in \mathbb{N}}\) of \((\boldsymbol{u}_{m})_{m\in \mathbb{N}}\) such that

$$ \sup_{t\in J} \bigl\vert (T_{j} \boldsymbol{u}_{m_{k}}) (t)-(T_{j} \boldsymbol{u}) (t) \bigr\vert \geq \varepsilon _{0}, \quad k\in \mathbb{N}. $$
(2.26)

Now, it follows from the compactness of \(T_{j}\) that there exists a subsequence of \((\boldsymbol{u}_{m_{k}})\) (without loss of generality, still denoted by \((\boldsymbol{u}_{m_{k}})\)) such that \(T_{j}\boldsymbol{u}_{m_{k}}\) converges uniformly to some \(y_{0}\in C[0,1]\). Thus, we easily see that

$$ \sup_{t\in J} \bigl\vert y_{0}(t)-(T_{j} \boldsymbol{u}) (t) \bigr\vert \geq \varepsilon _{0}. $$
(2.27)

On the other hand, from the pointwise convergence (2.24) we obtain

$$ y_{0}(t)=(T_{j}\boldsymbol{u}) (t),\quad t\in J. $$

This contradicts (2.27). Similarly, we can show that \(D_{0^{+}}^{\alpha _{1}}(T_{j}\boldsymbol{u}_{m})(t)\) converges uniformly to \(D_{0^{+}}^{\alpha _{1}}(T_{j}\boldsymbol{u})(t)\). Therefore, \(T_{j}\) is continuous.

Thus, we assert that \(T_{j}:K\rightarrow K\) is completely continuous for \(j=1,2,\ldots ,n\). This completes the proof of Lemma 2.5. □

3 Existence results

In this section, by using Lemmas 2.1–2.5, we show the existence of at least three positive solutions for system (1.4).

Before stating the main results, we recall the Leggett–Williams fixed point theorem.

Let γ and μ be nonnegative continuous convex functions on K, ω be a nonnegative continuous concave function on K, and ψ be a nonnegative continuous function on K. For \(a,b,c,d>0\), we define the following convex sets:

$$\begin{aligned} &K(\gamma ;d)= \bigl\{ \boldsymbol{u}\in K:\gamma (\boldsymbol{u})< d \bigr\} , \\ &K(\gamma ,\omega ;b,d)= \bigl\{ \boldsymbol{u}\in K:b\leq \omega ( \boldsymbol{u}), \gamma (\boldsymbol{u})\leq d\bigr\} , \\ &K(\gamma ,\mu ,\omega ;b,c,d)= \bigl\{ \boldsymbol{u}\in K:b\leq \omega ( \boldsymbol{u}),\mu (\boldsymbol{u})\leq c; \gamma (\boldsymbol{u}) \leq d\bigr\} , \end{aligned}$$

and a closed set

$$\begin{aligned} K(\gamma ,\psi ;a,d)= \bigl\{ \boldsymbol{u}\in K:a\leq \psi ( \boldsymbol{u}), \gamma (\boldsymbol{u})\leq d\bigr\} . \end{aligned}$$

Lemma 3.1

(Leggett–Williams fixed point theorem [30])

Let K be a cone in a real Banach space E. Let γ and μ be nonnegative continuous convex functions on K, ω be a nonnegative continuous concave function on K, and ψ be a nonnegative continuous function on K satisfying \(\psi (\zeta x)\leq \zeta \psi (x)\) for \(0\leq \zeta \leq 1\), such that, for some positive numbers L and d,

$$ \omega (x)\leq \psi (x),\qquad \Vert x \Vert \leq L\gamma (x) $$
(3.1)

for all \(x\in \overline{K(\gamma ;d)}\). Suppose that

$$ T:\overline{K(\gamma ;d)}\rightarrow \overline{K(\gamma ;d)} $$

is completely continuous, and there exist positive numbers a, b, and c with \(a< b\) such that

  1. (H1)

    \(\{x\in K(\gamma ,\mu ,\omega ;b,c,d):\omega (x)>b\} \neq \varnothing \), and \(\omega (Tx)>b\) for \(x\in K(\gamma ,\mu ,\omega ;b,c,d)\);

  2. (H2)

    \(\omega (Tx)>b\) for \(x\in K(\gamma ,\omega ;b,d)\) with \(\mu (Tx)>c\);

  3. (H3)

    \(0\notin K(\gamma ,\psi ;a,d)\) and \(\psi (Tx)< a\) for \(x \in K(\gamma ,\psi ;a,d)\) with \(\psi (x)=a\).

Then T has at least three fixed points \(x_{1},x_{2},x_{3}\in \overline{K( \gamma ;d)}\) such that

$$ \gamma (x_{i})\leq d,\quad i=1,2,3;\qquad \omega (x_{1})>b,\qquad a< \omega (x_{2}), \qquad\psi (x_{2})< b,\qquad \psi (x_{3})< a. $$

Define the positive constants

$$\begin{aligned} &J_{1}=\sum_{j=1}^{n} \frac{M_{j}}{\varGamma (\alpha _{1}+1)} \biggl[\frac{M}{ \varGamma (\alpha _{2}+1)} \biggr]^{q-1},\qquad J_{2}=\sum_{j=1}^{n} \biggl[\frac{M}{ \varGamma (\alpha _{2}+1)} \biggr]^{q-1},\\ & J_{3}=\sum _{j=1}^{n}\frac{M_{j}B(q,q+1)}{ \varGamma (\alpha _{1})\varGamma (\alpha _{2}-1)^{q-1}6^{q}}, \end{aligned}$$

where \(B(q,q+1)\) is the beta function defined by \(B(P,Q)=\int _{0}^{1}x ^{P-1}(1-x)^{Q-1}\,dx\).
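As a sanity check (not part of the argument), the identity \(B(P,Q)=\varGamma (P)\varGamma (Q)/\varGamma (P+Q)\) can be compared numerically with the defining integral; the sketch below takes, for illustration, \(q=\frac{9}{4}\), the value that arises in the examples of Sect. 5.

```python
from math import gamma

def beta_integral(P, Q, n=200_000):
    """Midpoint-rule approximation of B(P,Q) = int_0^1 x^(P-1)(1-x)^(Q-1) dx."""
    h = 1.0 / n
    return h * sum(((k + 0.5) * h) ** (P - 1) * (1.0 - (k + 0.5) * h) ** (Q - 1)
                   for k in range(n))

q = 9 / 4                                            # illustrative value: q = p/(p-1) for p = 9/5
exact = gamma(q) * gamma(q + 1) / gamma(2 * q + 1)   # B(q, q+1) via the Gamma identity
approx = beta_integral(q, q + 1)
```

Both evaluations agree to several decimal places, which is how \(J_{3}\) is evaluated in the example below.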

Define the functions as follows:

$$ \gamma (\boldsymbol{u})= \Vert \boldsymbol{u} \Vert ,\qquad \mu (\boldsymbol{u})= \psi (\boldsymbol{u})= \Vert \boldsymbol{u} \Vert _{1},\qquad \omega ( \boldsymbol{u})=\min_{t\in I}\sum_{j=1}^{n} u_{j}(t), $$

then γ and μ are continuous nonnegative convex functions, ω is a continuous nonnegative concave function, ψ is a continuous nonnegative function, and

$$ \rho \mu (\boldsymbol{u})\leq \omega (\boldsymbol{u})\leq \mu ( \boldsymbol{u})= \psi (\boldsymbol{u}), \qquad \Vert \boldsymbol{u} \Vert \leq L\gamma ( \boldsymbol{u}), $$

where \(L=1\). Therefore, condition (3.1) in Lemma 3.1 is satisfied.
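For intuition only, these functionals can be evaluated on a discretized pair \((u_{1},u_{2})\); the sketch below (the grid and sample values are hypothetical) illustrates the inequality \(\omega (\boldsymbol{u})\leq \psi (\boldsymbol{u})\) for nonnegative \(\boldsymbol{u}\), as required by (3.1).

```python
import random

random.seed(0)
grid = [i / 100 for i in range(101)]                           # discretization of J = [0,1]
I_idx = [i for i, t in enumerate(grid) if 0.25 <= t <= 0.75]   # I = [theta, 1-theta], theta = 1/4 (assumed)

# hypothetical nonnegative samples for u = (u_1, u_2)
u = [[random.uniform(0.0, 1.0) for _ in grid] for _ in range(2)]

psi = sum(max(comp) for comp in u)                    # psi(u) = ||u||_1 = sum_j sup u_j
omega = min(u[0][i] + u[1][i] for i in I_idx)         # omega(u) = min_{t in I} sum_j u_j(t)
```

The minimum of the sum over the subinterval I can never exceed the sum of the componentwise suprema, so \(\omega (\boldsymbol{u})\leq \psi (\boldsymbol{u})\) holds for any nonnegative sample.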

Theorem 3.2

Suppose that (F1)–(F3) hold, and there exist positive constants \(a,b,d\) with \(a< b<\rho d \min \{\frac{J_{3}}{J_{1}},\frac{J_{3}}{J _{2}}\}\) and \(c=\frac{b}{\rho }\), for \(j=1,2,\ldots ,n\), such that

  1. (L1)

    \(f_{j}(t,\boldsymbol{u},\boldsymbol{w})\leq \frac{1}{\kappa } \min \{\varPhi _{p}(\frac{d}{J_{1}}),\varPhi _{p}(\frac{d}{J_{2}})\}\)for \((t,\boldsymbol{u},\boldsymbol{w})\in J\times [0,d]^{n}\times [-d,0]^{n}\);

  2. (L2)

    \(f_{j}(t,\boldsymbol{u},\boldsymbol{w})>\frac{1}{\kappa }\varPhi _{p}(\frac{b}{\rho _{j} J_{3}})\)for \((t,\boldsymbol{u},\boldsymbol{w}) \in I\times [b,\frac{b}{\rho }]^{n}\times [-d,0]^{n}\);

  3. (L3)

    \(f_{j}(t,\boldsymbol{u},\boldsymbol{w})<\frac{1}{\kappa }\varPhi _{p}(\frac{a}{J_{1}})\)for \((t,\boldsymbol{u},\boldsymbol{w})\in J \times [0,a]^{n}\times [-d,0]^{n}\).

Then system (1.4) has at least three positive solutions \(\boldsymbol{u}^{1}\), \(\boldsymbol{u}^{2}\), \(\boldsymbol{u}^{3}\) satisfying

$$\begin{aligned} &\bigl\Vert \boldsymbol{u}^{i} \bigr\Vert \leq d\quad (i=1,2,3), \end{aligned}$$
(3.2)
$$\begin{aligned} \begin{aligned}&\min_{t\in I} \Biggl\vert \sum_{j=1}^{n} u_{j}^{1}(t) \Biggr\vert >b,\qquad a< \min _{t\in I} \Biggl\vert \sum_{j=1}^{n} u_{j}^{2}(t) \Biggr\vert ,\\ & \sum _{j=1}^{n}\sup_{t\in J} \bigl\vert u_{j}^{2}(t) \bigr\vert < b, \qquad\sum _{j=1}^{n}\sup_{t\in J} \bigl\vert u_{j}^{3}(t) \bigr\vert < a. \end{aligned} \end{aligned}$$
(3.3)

Proof

For \(\boldsymbol{u}\in \overline{K(\gamma ;d)}\), we have

$$ \gamma (\boldsymbol{u})= \Vert \boldsymbol{u} \Vert \leq d, $$

this implies

$$ \sum_{j=1}^{n}\sup _{t\in J} \bigl\vert u_{j}(t) \bigr\vert \leq d,\qquad \sum_{j=1}^{n}\sup_{t\in J} \bigl\vert D_{0^{+}}^{\alpha _{1}}u_{j}(t) \bigr\vert \leq d, $$

then, for \(t\in J\), we have

$$ 0\leq \sum_{j=1}^{n} u_{j}(t)\leq d,\qquad -d\leq \sum_{j=1}^{n} D_{0^{+}}^{\alpha _{1}}u_{j}(t)\leq 0. $$

By (L1), we have

$$\begin{aligned} \Vert \boldsymbol{T}\boldsymbol{u} \Vert _{1}= {}& \sum_{j=1}^{n}\sup_{t \in J} \bigl\vert (T_{j}\boldsymbol{u}) (t) \bigr\vert \\ = {}&\sum_{j=1}^{n}\sup _{t\in J} \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D_{0^{+}} ^{\alpha _{1}}\boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ \leq {}&\sum_{j=1}^{n} \int _{0}^{1}\sup_{t\in J}H_{j}(t,s) \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D _{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ \leq {}&\sum_{j=1}^{n} \int _{0}^{1}\frac{M_{j}}{\varGamma (\alpha _{1})}(1-s)^{ \alpha _{1}-1} \varPhi _{q} \biggl( \int _{0}^{1}\frac{M}{\varGamma (\alpha _{2})}(1- \tau )^{\alpha _{2}-1}\varPhi _{p} \biggl(\frac{d}{J_{1}} \biggr) \,d\tau \biggr)\,ds \\ = {}&\frac{d}{J_{1}}\sum_{j=1}^{n} \frac{M_{j}}{\varGamma (\alpha _{1}+1)} \biggl[\frac{M}{\varGamma (\alpha _{2}+1)} \biggr]^{q-1}=d, \end{aligned}$$
(3.4)

and

$$\begin{aligned} \Vert \boldsymbol{T}\boldsymbol{u} \Vert _{2}= {}& \sum_{j=1}^{n}\sup_{t \in J} \bigl\vert D_{0^{+}}^{\alpha _{1}}(T_{j} \boldsymbol{u}) (t) \bigr\vert \\ = {}&\sum_{j=1}^{n}\sup _{t\in J}\varPhi _{q} \biggl( \int _{0}^{1}H(t,s)\kappa f _{j} \bigl(s,\boldsymbol{u}(s),D_{0^{+}}^{\alpha _{1}}\boldsymbol{u}(s) \bigr)\,ds \biggr) \\ \leq {}&\sum_{j=1}^{n}\varPhi _{q} \biggl( \int _{0}^{1}\frac{M}{\varGamma (\alpha _{2})}(1-s)^{\alpha _{2}-1} \varPhi _{p} \biggl(\frac{d}{J_{2}} \biggr)\,ds \biggr) \\ ={} &\frac{d}{J_{2}}\sum_{j=1}^{n} \biggl[\frac{M}{\varGamma (\alpha _{2}+1)} \biggr]^{q-1}=d. \end{aligned}$$
(3.5)

So,

$$ \gamma (\boldsymbol{T}\boldsymbol{u})= \Vert \boldsymbol{T} \boldsymbol{u} \Vert =\max \bigl\{ \Vert \boldsymbol{T}\boldsymbol{u} \Vert _{1}, \Vert \boldsymbol{T}\boldsymbol{u} \Vert _{2} \bigr\} \leq d. $$

Therefore \(\boldsymbol{T}: \overline{K(\gamma ;d)}\rightarrow \overline{K( \gamma ;d)}\).

Let \(u_{j}(t)=\frac{b}{n\rho }\), for \(j=1,2,\ldots ,n\). Then \(\boldsymbol{u}(t)\in K(\gamma ,\mu ,\omega ;b,c,d)\) and \(\sum_{j=1}^{n} u_{j}(t)=\frac{b}{\rho }>b\), which implies that

$$ \bigl\{ \boldsymbol{u}\in K(\gamma ,\mu ,\omega ;b,c,d):\omega ( \boldsymbol{u})>b \bigr\} \neq \varnothing . $$

For \(\boldsymbol{u}\in K(\gamma ,\mu ,\omega ;b,c,d)\), we know that \(b\leq \sum_{j=1}^{n} u_{j}(t)\leq c=\frac{b}{\rho }\) for \(t\in I\) and \(-d\leq \sum_{j=1}^{n} D_{0^{+}}^{\alpha _{1}}u_{j}(t)\leq 0\).

In view of (L2),

$$\begin{aligned} \omega (\boldsymbol{T}\boldsymbol{u})={}&\min_{t\in I} \sum_{j=1}^{n}(T _{j} \boldsymbol{u}) (t) \\ ={}&\min_{t\in I}\sum_{j=1}^{n} \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D_{0^{+}} ^{\alpha _{1}}\boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ \geq{}& \sum_{j=1}^{n} \int _{0}^{1}\min_{t\in I}H_{j}(t,s) \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D _{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ \geq{} &\sum_{j=1}^{n} \int _{0}^{1}\frac{\rho _{j} M_{j}}{\varGamma (\alpha _{1})}(1-s)^{\alpha _{1}-1} \\ &{}\times \varPhi _{q} \biggl( \int _{0}^{1}\frac{1}{\varGamma ( \alpha _{2}-1)}s^{\alpha _{2}-1}(1- \tau )^{\alpha _{2}-1}(1-s)\tau \varPhi _{p} \biggl( \frac{b}{\rho _{j} J_{3}} \biggr)\,d\tau \biggr)\,ds \\ \geq{}& \sum_{j=1}^{n} \int _{0}^{1}\frac{\rho _{j} M_{j}}{\varGamma (\alpha _{1})}(1-s)\varPhi _{q} \biggl( \int _{0}^{1}\frac{1}{\varGamma (\alpha _{2}-1)}s(1- \tau ) (1-s)\tau \varPhi _{p} \biggl(\frac{b}{\rho _{j} J_{3}} \biggr)\,d\tau \biggr)\,ds \\ ={}&\sum_{j=1}^{n}\frac{\rho _{j} M_{j}}{\varGamma (\alpha _{1})} \biggl[\frac{1}{ \varGamma (\alpha _{2}-1)} \biggr]^{q-1}\frac{b}{\rho _{j} J_{3}} \int _{0}^{1}(1-s)^{q}s ^{q-1}\,ds\varPhi _{q} \biggl( \int _{0}^{1}(1-\tau )\tau \,d\tau \biggr) \\ >{}&\sum_{j=1}^{n}\frac{M_{j} b}{\varGamma (\alpha _{1})\varGamma (\alpha _{2}-1)^{q-1}J _{3}}B(q,q+1)6^{-q}=b. \end{aligned}$$
(3.6)

So \(\omega (\boldsymbol{T}\boldsymbol{u})>b\) for all \(\boldsymbol{u} \in K(\gamma ,\mu ,\omega ;b,c,d)\). Hence, condition (H1) in Lemma 3.1 is satisfied.

For all \(\boldsymbol{u}\in K(\gamma ,\omega ;b,d)\) with \(\mu ( \boldsymbol{T}\boldsymbol{u})>c=\frac{b}{\rho }\), from (2.23) we have

$$ \omega (\boldsymbol{T}\boldsymbol{u})\geq \rho \mu (\boldsymbol{T} \boldsymbol{u})>\rho c=\rho \frac{b}{\rho }=b. $$

Thus, condition (H2) of Lemma 3.1 holds.

Since \(\psi (\boldsymbol{0})=0< a\), we have \(\boldsymbol{0}\notin K(\gamma ,\psi ;a,d)\). For \(\boldsymbol{u}\in K(\gamma ,\psi ;a,d)\) with \(\psi (\boldsymbol{u})=a\), we have \(\sum_{j=1}^{n}\sup_{t\in J}u_{j}(t)=a\), and from \(\gamma (\boldsymbol{u})\leq d\) we get \(-d\leq \sum_{j=1}^{n} \sup_{t\in J}D_{0^{+}}^{\alpha _{1}}u_{j}(t)\leq 0\).

From (L3), we can obtain

$$\begin{aligned} \psi (\boldsymbol{T}\boldsymbol{u}) &=\sum _{j=1}^{n}\sup_{t\in J} \bigl\vert (T_{j}\boldsymbol{u}) (t) \bigr\vert \\ &=\sum_{j=1}^{n}\sup _{t\in J} \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D_{0^{+}} ^{\alpha _{1}}\boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &\leq \sum_{j=1}^{n} \int _{0}^{1}\sup_{t\in J}H_{j}(t,s) \varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl(\tau ,\boldsymbol{u}(\tau ),D _{0^{+}}^{\alpha _{1}} \boldsymbol{u}(\tau )\bigr)\,d\tau \biggr)\,ds \\ &< \sum_{j=1}^{n} \int _{0}^{1}\frac{M_{j}}{\varGamma (\alpha _{1})}(1-s)^{ \alpha _{1}-1} \varPhi _{q} \biggl( \int _{0}^{1}\frac{M}{\varGamma (\alpha _{2})}(1- \tau )^{\alpha _{2}-1}\varPhi _{p} \biggl(\frac{a}{J_{1}} \biggr) \,d\tau \biggr)\,ds \\ &=\frac{a}{J_{1}}\sum_{j=1}^{n} \frac{M_{j}}{\varGamma (\alpha _{1}+1)} \biggl[\frac{M}{\varGamma (\alpha _{2}+1)} \biggr]^{q-1}=a. \end{aligned}$$
(3.7)

Therefore, condition (H3) of Lemma 3.1 is satisfied.

To sum up, the conditions of Lemma 3.1 are all verified. Hence, system (1.4) has at least three positive solutions \(\boldsymbol{u}^{1}\), \(\boldsymbol{u}^{2}\), \(\boldsymbol{u}^{3}\) satisfying (3.2) and (3.3).

The proof is completed. □

4 Nonexistence results

In this section, we focus on the nonexistence results of positive solutions for system (1.4).

We introduce some notations in advance for \(j=1,2,\ldots ,n\):

$$\begin{aligned} &f_{j}^{0}= \liminf_{ \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1} \rightarrow 0}\min _{t\in I}\frac{f_{j}(t,\boldsymbol{u}, \boldsymbol{w})}{\varPhi _{p}( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1})}, \\ &f_{j}^{\infty }= \liminf_{ \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1} \rightarrow \infty }\min _{t\in I}\frac{f_{j}(t,\boldsymbol{u}, \boldsymbol{w})}{\varPhi _{p}( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1})}. \end{aligned}$$

Then we have the following nonexistence results of positive solutions.

Theorem 4.1

If \(f_{j}^{0}>0\) and \(f_{j}^{\infty }>0\) for \(j=1,2,\ldots ,n\), then there exists \(\kappa _{0}>0\) such that, for all \(\kappa >\kappa _{0}\), system (1.4) has no positive solutions.

Proof

Since \(f_{j}^{0}>0\) and \(f_{j}^{\infty }>0\), there exist positive constants \(h_{1}\), \(h_{2}\), \(r_{1}\), \(r_{2}\), \(r_{3}\), \(r_{4}\) such that \(r_{1}< r_{3}\), \(r_{2}< r_{4}\) and

$$\begin{aligned} &f_{j}(t,\boldsymbol{u},\boldsymbol{w})\geq h_{1} \varPhi _{p}\bigl( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1}\bigr),\quad \text{for }(t, \boldsymbol{u}, \boldsymbol{w})\in I\times [0,r_{1}]^{n}\times [-r_{2},0]^{n}, \\ &f_{j}(t,\boldsymbol{u},\boldsymbol{w})\geq h_{2} \varPhi _{p}\bigl( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1}\bigr),\quad \text{for }(t, \boldsymbol{u}, \boldsymbol{w})\in I\times [r_{3},\infty )^{n} \times (- \infty ,-r_{4}]^{n}. \end{aligned}$$

Let

$$ h_{3}=\min \biggl\{ h_{1},h_{2}, \inf _{(\boldsymbol{u},\boldsymbol{w})\in (r_{1},r_{3})^{n}\times (-r _{4},-r_{2})^{n}}\frac{\min_{t\in I}f_{j}(t,\boldsymbol{u}, \boldsymbol{w})}{\varPhi _{p}( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1})} \biggr\} >0, $$
(4.1)

then we have

$$ f_{j}(t,\boldsymbol{u},\boldsymbol{w})\geq h_{3} \varPhi _{p}\bigl( \Vert \boldsymbol{u} \Vert _{1}+ \Vert \boldsymbol{w} \Vert _{1}\bigr),\quad \text{for }(t, \boldsymbol{u}, \boldsymbol{w})\in I\times [0,\infty )^{n}\times (- \infty ,0 ]^{n}. $$
(4.2)

Suppose, to the contrary, that \(\widetilde{\boldsymbol{u}}\) is a positive solution of system (1.4). Let

$$ \kappa _{0}=\frac{\varGamma (\alpha _{1})\varGamma (\alpha _{2}-1)^{q-1}6^{q}}{ \rho _{j} M_{j} B(q,q+1)}, $$
(4.3)

then, for all \(t\in I\), we get

$$\begin{aligned} \Vert \widetilde{\boldsymbol{u}} \Vert _{1}={}& \Vert \boldsymbol{T} \widetilde{\boldsymbol{u}} \Vert _{1} \geq \sum_{j=1}^{n}\sup_{t\in I} \int _{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau )\kappa f_{j}\bigl( \tau ,\widetilde{\boldsymbol{u}}(\tau ),D_{0^{+}}^{\alpha _{1}} \widetilde{\boldsymbol{u}}(\tau )\bigr)\,d \tau \biggr)\,ds \\ \geq{}& \int_{0}^{1}H_{j}(t,s)\varPhi _{q} \biggl( \int _{0}^{1}H(s,\tau ) \kappa f_{j}\bigl(\tau ,\widetilde{\boldsymbol{u}}(\tau ),D_{0^{+}}^{\alpha _{1}}\widetilde{\boldsymbol{u}}(\tau )\bigr)\,d \tau \biggr)\,ds \\ \geq{}& \int _{0}^{1}\frac{\rho _{j} M_{j}}{\varGamma (\alpha _{1})}(1-s)^{ \alpha _{1}-1} \\ &{}\times \varPhi _{q} \biggl( \int _{0}^{1}\frac{s^{\alpha _{2}-1}(1-\tau )^{ \alpha _{2}-1}(1-s)\tau }{\varGamma (\alpha _{2}-1)}\kappa h_{3}\varPhi _{p}\bigl( \Vert \widetilde{ \boldsymbol{u}} \Vert _{1}+ \bigl\Vert D_{0^{+}}^{\alpha _{1}} \widetilde{\boldsymbol{u}} \bigr\Vert _{1}\bigr)\,d\tau \biggr) \,ds \\ >{}&\frac{\kappa _{0}\rho _{j} M_{j}}{\varGamma (\alpha _{1})\varGamma (\alpha _{2}-1)^{q-1}}B(q,q+1)6^{-q}\bigl( \Vert \widetilde{ \boldsymbol{u}} \Vert _{1}+ \bigl\Vert D_{0^{+}}^{\alpha _{1}} \widetilde{\boldsymbol{u}} \bigr\Vert _{1}\bigr) \\ \geq{}& \Vert \widetilde{\boldsymbol{u}} \Vert _{1}, \end{aligned}$$
(4.4)

which is a contradiction. Therefore, system (1.4) has no positive solution. □

5 Example

In this section, we give two examples to illustrate the main results.

Example 5.1

Consider the following system with \(n=2\), \(p=\frac{9}{5}\), \(\kappa =1\), \(m=2\):

$$ \textstyle\begin{cases} D_{0+}^{\frac{19}{10}}(\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}} \boldsymbol{u}(t)))=\boldsymbol{f}(t,\boldsymbol{u}(t),D_{0+}^{ \frac{17}{9}}\boldsymbol{u}(t)),\quad t\in (0,1),\\ \boldsymbol{u}(0)=\boldsymbol{0}, \qquad \boldsymbol{u}(1)= \frac{1}{8}\int _{0}^{\frac{3}{5}}\boldsymbol{u}(s)\,d\boldsymbol{A}(s)+ \frac{1}{10}\int _{0}^{\frac{4}{5}}\boldsymbol{u}(s)\,d\boldsymbol{A}(s), \\ D_{0+}^{\frac{17}{9}}\boldsymbol{u}(0)=\boldsymbol{0},\qquad \varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}\boldsymbol{u}(1))= \frac{1}{20}\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}\boldsymbol{u}( \frac{1}{10})), \end{cases} $$
(5.1)

where \(\alpha _{1}=\frac{17}{9}\), \(\alpha _{2}=\frac{19}{10}\), \(b_{1}=\frac{1}{8}\), \(b_{2}=\frac{1}{10}\), \(\xi _{1}=\frac{3}{5}\), \(\xi _{2}=\frac{4}{5}\), \(\lambda =\frac{1}{20}\), \(\eta =\frac{1}{10}\), and

$$\begin{aligned} &\boldsymbol{A}(s) = \begin{pmatrix} A_{1}(s)&0\\ 0&A_{2}(s) \end{pmatrix}, \\ &A_{1}(s) = \textstyle\begin{cases} 0,& s\in [0,\frac{1}{2}),\\ 1,& s\in [\frac{1}{2}, \frac{3}{4}),\\ \frac{1}{2}, & s\in [\frac{3}{4},1), \end{cases}\displaystyle \qquad A_{2}(s) = \textstyle\begin{cases} 0,& s\in [0,\frac{1}{3}),\\ \frac{3}{2},& s\in [\frac{1}{3}, \frac{2}{3}),\\ \frac{1}{2},& s\in [\frac{2}{3},1). \end{cases}\displaystyle \end{aligned}$$

For \(t\in J\), \(\boldsymbol{w}\in \mathbb{R}^{2}\), set

(f1(t,u,w)f2(t,u,w))={((u1+u2)tw1tw1+w2(u1+u2)),u[0,0.1]2,((0.4t3020830)(110u1)+0.2t(0.4w1tw1+w26sin(w2t+0.15))(110u2)+0.2w1tw1+w2),u(0.1,0.15)2,(u1+u24+4153sin(w2t+u1)+434),u[0.15,3000]2.

Thus system (5.1) is equivalent to the following problem:

$$ \textstyle\begin{cases} D_{0+}^{\frac{19}{10}}(\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}u_{1}(t)))=f _{1}(t,\boldsymbol{u}(t),D_{0+}^{\frac{17}{9}}\boldsymbol{u}(t)),\quad t\in (0,1),\\ D_{0+}^{\frac{19}{10}}(\varPhi _{\frac{9}{5}}(D_{0+}^{ \frac{17}{9}}u_{2}(t)))=f_{2}(t,\boldsymbol{u}(t),D_{0+}^{ \frac{17}{9}}\boldsymbol{u}(t)),\quad t\in (0,1),\\ u_{1}(0)=0, \qquad u_{1}(1)=\frac{1}{8} (u_{1}( \frac{1}{2}) )+\frac{1}{10} (u_{1}(\frac{1}{2})-\frac{1}{2}u _{1}(\frac{3}{4}) ),\\ u_{2}(0)=0, \qquad u_{2}(1)=\frac{1}{8} (\frac{3}{2}u_{2}(\frac{1}{3}) )+\frac{1}{10} (\frac{3}{2}u _{2}(\frac{1}{3})-u_{2}(\frac{2}{3}) ),\\ D_{0+}^{\frac{17}{9}}u _{1}(0)=0, \qquad \varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}u_{1}(1))= \frac{1}{20}\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}u_{1}( \frac{1}{10})),\\ D_{0+}^{\frac{17}{9}}u_{2}(0)=0,\qquad \varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}u_{2}(1))=\frac{1}{20} \varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}u_{2}(\frac{1}{10})). \end{cases} $$
(5.2)
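Conditions (L1)–(L3) below pass repeatedly between \(\varPhi _{p}\) and \(\varPhi _{q}\). A small sketch (test points chosen arbitrarily) confirms that, with \(p=\frac{9}{5}\) and \(q=\frac{9}{4}\), these maps are mutually inverse, since \((p-1)(q-1)=1\):

```python
p = 9 / 5
q = p / (p - 1)                          # q = 9/4, so (p-1)*(q-1) == 1

def phi(r, x):
    """The homeomorphism Phi_r(x) = |x|^(r-2) * x (with Phi_r(0) = 0)."""
    return abs(x) ** (r - 2) * x if x != 0 else 0.0

for x in (-2.0, -0.5, 0.0, 0.5, 2.0):    # arbitrary test points
    assert abs(phi(q, phi(p, x)) - x) < 1e-12
```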

Choose \(a=0.1\), \(b=0.15\), \(d=3000\), \(\theta =\frac{1}{4}\). By calculation, we obtain

$$\begin{aligned} &\Delta _{1}=0.9172, \qquad\Delta _{2}=0.9426, \\ &\rho _{1}= 0.0350,\qquad \rho _{2}= 0.0333, \\ &M_{1}=1.1908,\qquad M_{2}=1.2520,\qquad M=1.0503, \\ &J_{1}=0.6756,\qquad J_{2}= 1.0009,\qquad J_{3}= 0.0023. \end{aligned}$$
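The constants \(J_{1}\), \(J_{2}\), \(J_{3}\) can be recomputed directly from their definitions in Sect. 3 using the stated values of \(M_{1}\), \(M_{2}\), M; the sketch below reproduces them up to rounding of the inputs.

```python
from math import gamma

p, a1, a2 = 9 / 5, 17 / 9, 19 / 10       # p, alpha_1, alpha_2
q = p / (p - 1)                          # q = 9/4
M1, M2, M = 1.1908, 1.2520, 1.0503       # values stated above

def beta(P, Q):                          # B(P,Q) = Gamma(P)Gamma(Q)/Gamma(P+Q)
    return gamma(P) * gamma(Q) / gamma(P + Q)

J1 = (M1 + M2) / gamma(a1 + 1) * (M / gamma(a2 + 1)) ** (q - 1)
J2 = 2 * (M / gamma(a2 + 1)) ** (q - 1)
J3 = (M1 + M2) * beta(q, q + 1) / (gamma(a1) * gamma(a2 - 1) ** (q - 1) * 6 ** q)
```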

Then we can check that the \(f_{j}(t,\boldsymbol{u},\boldsymbol{w})\) satisfy the following conditions (for \(j=1,2\)):

  1. (L1)

    \(f_{1}(t,\boldsymbol{u},\boldsymbol{w})\leq 434.3649\), \(f_{2}(t, \boldsymbol{u},\boldsymbol{w})\leq 437\), and \(f_{j}(t,\boldsymbol{u}, \boldsymbol{w})\leq \min \{\varPhi _{\frac{9}{5}} ( \frac{d}{J_{1}} ),\varPhi _{\frac{9}{5}} (\frac{d}{J_{2}} ) \}=437.0214\) for \((t,\boldsymbol{u},\boldsymbol{w})\in [0,1] \times [0,3000]^{2} \times [-3000,0]^{2}\);

  2. (L2)

    \(f_{1}(t,\boldsymbol{u},\boldsymbol{w})\geq 415.7506>\varPhi _{\frac{9}{5}} (\frac{b}{ \rho _{1} J_{3}} )=413.5925\) and \(f_{2}(t,\boldsymbol{u},\boldsymbol{w}) \geq 431>\varPhi _{\frac{9}{5}} (\frac{b}{\rho _{2} J_{3}} )=430.5003\) for \((t,\boldsymbol{u},\boldsymbol{w})\in [\frac{1}{4}, \frac{3}{4} ]\times [0.15,4.5070]^{2}\times [-3000,0]^{2}\);

  3. (L3)

    \(f_{1}(t,\boldsymbol{u},\boldsymbol{w})\leq 0.2<\varPhi _{\frac{9}{5}} (\frac{a}{J_{1}} )=0.2169\) and \(f_{2}(t,\boldsymbol{u}, \boldsymbol{w})\leq 0.2<\varPhi _{\frac{9}{5}} (\frac{a}{J_{1}} )=0.2169\) for \((t,\boldsymbol{u},\boldsymbol{w})\in [0,1]\times [0,0.1]^{2} \times [-3000,0]^{2}\).

Thus all conditions in Theorem 3.2 are satisfied. System (5.1) has at least three positive solutions \(\boldsymbol{u}^{1}\), \(\boldsymbol{u} ^{2}\), \(\boldsymbol{u}^{3}\) satisfying

$$\begin{aligned} &\bigl\Vert \boldsymbol{u}^{i} \bigr\Vert \leq 3000\quad (i=1,2,3), \\ &\min_{t\in I} \Biggl\vert \sum_{j=1}^{n} u_{j}^{1}(t) \Biggr\vert >0.15,\qquad 0.1< \min _{t\in I} \Biggl\vert \sum_{j=1}^{n} u_{j}^{2}(t) \Biggr\vert ,\\ & \sum _{j=1}^{n}\sup_{t\in J} \bigl\vert u_{j}^{2}(t) \bigr\vert < 0.15,\qquad \sum _{j=1}^{n}\sup_{t\in J} \bigl\vert u_{j}^{3}(t) \bigr\vert < 0.1. \end{aligned}$$

Example 5.2

Consider the following system with \(n=2\), \(p=\frac{9}{5}\), \(\kappa =26\text{,}000\), \(m=2\):

$$ \textstyle\begin{cases} D_{0+}^{\frac{19}{10}}(\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}} \boldsymbol{u}(t)))=26\text{,}000\boldsymbol{f}(t,\boldsymbol{u}(t),D_{0+} ^{\frac{17}{9}}\boldsymbol{u}(t)),\quad t\in (0,1),\\ \boldsymbol{u}(0)=\boldsymbol{0},\qquad \boldsymbol{u}(1)= \frac{1}{8}\int _{0}^{\frac{3}{5}}\boldsymbol{u}(s)\,d\boldsymbol{A}(s)+ \frac{1}{10}\int _{0}^{\frac{4}{5}}\boldsymbol{u}(s)\,d\boldsymbol{A}(s), \\ D_{0+}^{\frac{17}{9}}\boldsymbol{u}(0)=\boldsymbol{0},\qquad \varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}\boldsymbol{u}(1))= \frac{1}{20}\varPhi _{\frac{9}{5}}(D_{0+}^{\frac{17}{9}}\boldsymbol{u}( \frac{1}{10})), \end{cases} $$
(5.3)

where \(\alpha _{1}=\frac{17}{9}\), \(\alpha _{2}=\frac{19}{10}\), \(b_{1}=\frac{1}{8}\), \(b_{2}=\frac{1}{10}\), \(\xi _{1}=\frac{3}{5}\), \(\xi _{2}=\frac{4}{5}\), \(\lambda =\frac{1}{20}\), \(\eta =\frac{1}{10}\), \(\theta =\frac{1}{4}\),

$$ f_{j}(t,\boldsymbol{u},\boldsymbol{w})= \bigl( \vert u_{1} \vert + \vert u _{2} \vert + \vert w_{1} \vert + \vert w_{2} \vert \bigr)^{2},\quad j=1,2. $$

One can check that all conditions of Theorem 4.1 are satisfied. Since \(\kappa _{0}=25\text{,}507<\kappa \), system (5.3) has no positive solutions.
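The threshold \(\kappa _{0}\) in (4.3) can likewise be evaluated numerically; the sketch below takes \(j=1\) with the constants of Example 5.1 and reproduces \(\kappa _{0}\approx 25\text{,}507\) up to rounding of the inputs.

```python
from math import gamma

p, a1, a2 = 9 / 5, 17 / 9, 19 / 10
q = p / (p - 1)                                    # q = 9/4
rho1, M1 = 0.0350, 1.1908                          # rho_1, M_1 from Example 5.1

B = gamma(q) * gamma(q + 1) / gamma(2 * q + 1)     # B(q, q+1)
kappa0 = gamma(a1) * gamma(a2 - 1) ** (q - 1) * 6 ** q / (rho1 * M1 * B)
```

Any \(\kappa \) exceeding this value, in particular \(\kappa =26\text{,}000\), rules out positive solutions by Theorem 4.1.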