1 Introduction

Fractional differential equations describe phenomena in various fields of engineering and science, such as control theory, chemistry, physics, biology, economics, mechanics and electromagnetics. In recent years especially, a large number of papers have dealt with the existence of positive solutions of boundary value problems for nonlinear differential equations of fractional order; for details, see [1–6]. In addition, the existence of positive solutions to fractional differential equations and their systems, especially coupled systems, has been well studied by many authors; for details, see [7–10].

In [10], Su studied the existence of solutions for a coupled system of fractional differential equations

$$ \textstyle\begin{cases} D_{0^{+}}^{\alpha}u(t)=f(t,v(t),D_{0^{+}}^{\gamma}v(t)), \quad 0< t< 1 , \\ D_{0^{+}}^{\beta}v(t)=g(t,u(t),D_{0^{+}}^{\eta}u(t)),\quad 0< t< 1 , \\ u(0)=u(1)=v(0)=v(1)=0, \end{cases} $$

where \(1<\alpha, \beta<2\), \(\gamma, \eta>0\), \(\alpha-\eta\geq1\), \(\beta-\gamma\geq1\), \(f, g:[0,1]\times \mathbb{R}^{2}\rightarrow \mathbb{R}\) are given functions and \(D_{0^{+}}\) is the standard Riemann-Liouville fractional derivative.

In [11], Dunninger and Wang considered the existence and multiplicity of positive radial solutions for elliptic systems of the form

$$ \textstyle\begin{cases} \triangle u+\lambda k_{1}(|x|)f(u,v)=0, \\ \triangle v+\mu k_{2}(|x|)g(u,v)=0, \\ u|_{\partial\Omega}=v|_{\partial\Omega}=0, \end{cases} $$

where \((u,v)\in C^{2}(\overline{\Omega})\times C^{2}(\overline {\Omega})\), with \(\Omega=\{x\in\mathbb{R}^{N}:R_{1}<|x|<R_{2}\}\) (\(R_{1}, R_{2}>0\)) an annulus with boundary \(\partial\Omega\).

Motivated by [10] and [11], in this paper, we consider the system of fractional differential equations with parameters

$$ \textstyle\begin{cases} D^{p}x(t)+\lambda_{1}w(t,x(t),y(t))=0,\quad t\in J=[0, 1], 3< p\leq4, \\ D^{q}y(t)+\lambda_{2}h(t,x(t),y(t))=0,\quad t\in J, 3< q\leq4, \\ D^{q_{1}}x(0)=D^{p_{1}}x(0)=D^{\gamma_{1}}x(0)=0,\qquad x(1)=\alpha _{1}x(\eta), \\ D^{q_{2}}y(0)=D^{p_{2}}y(0)=D^{\gamma_{2}}y(0)=0,\qquad y(1)=\alpha _{2}y(\xi), \end{cases} $$
(1)

where \(D^{p}\) is the standard Riemann-Liouville derivative. Moreover, in the rest of this paper we always suppose that the following assumptions hold.

  1. (A1)
    1. (i)

      \(w, h:[0,1]\times[0,+\infty)\times[0,+\infty)\rightarrow [0,+\infty)\) are continuous;

    2. (ii)

      \(\lambda_{1}\) and \(\lambda_{2}\) are positive parameters;

    3. (iii)

\(q_{i}\in(0,1)\), \(p_{i}\in(1,2)\), \(\gamma_{i}\in(2,3)\), \(\eta, \xi \in(0,1)\) (\(i=1,2\)), \(0<\alpha_{1}\eta^{p-1}<1\), \(0<\alpha_{2}\xi^{q-1}<1\).

  2. (A2)

    \(w(t,x,y), h(t,x,y)>0\) for \(x, y>0\), \(t\in J\).

By applying the Krasnosel’skii fixed point theorem for a cone map, we obtain the existence of at least one and of at least two positive solutions for the system (1).

2 Preliminaries

For the sake of convenience, we introduce the following notation:

$$\begin{aligned}& w_{0}=\lim_{(x,y)\rightarrow0}\max_{t\in[0,1]} \frac{w(t,x,y)}{x+y}, \\& w_{\infty}=\lim_{(x,y)\rightarrow\infty}\min _{t\in[0,1]}\frac {w(t,x,y)}{x+y}, \\& h_{0}=\lim_{(x,y)\rightarrow0}\max_{t\in[0,1]} \frac{h(t,x,y)}{x+y}, \\& h_{\infty}=\lim_{(x,y)\rightarrow\infty}\min _{t\in[0,1]}\frac {h(t,x,y)}{x+y}. \end{aligned}$$
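For instance, for the nonlinearity \(w(t,x,y)=(x+y)^{4}\) of Example 1 below, the ratio \(\frac{w(t,x,y)}{x+y}=(x+y)^{3}\) does not depend on t, so

$$w_{0}=\lim_{(x,y)\rightarrow0}(x+y)^{3}=0, \qquad w_{\infty}=\lim_{(x,y)\rightarrow\infty}(x+y)^{3}=\infty. $$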

Theorem A

[12]

Let X be a Banach space, and let \(K \subset X\) be a cone. Assume \(\Omega_{1}\), \(\Omega_{2}\) are two open bounded subsets of X with \(0\in\Omega_{1}\), \(\overline{\Omega}_{1}\subset \Omega_{2}\), and let \(T: K\cap(\overline{\Omega}_{2}\setminus\Omega_{1}) \rightarrow K\) be a completely continuous operator such that

  1. (i)

    \(\|Tx\|\leq \|x\|\), \(x\in K\cap\partial\Omega_{1}\), and \(\|Tx\|\geq\|x\|\), \(x\in K\cap\partial\Omega_{2}\);

or

  1. (ii)

    \(\|Tx\|\geq\|x\|\), \(x\in K\cap\partial\Omega_{1}\), and \(\|Tx\| \leq\|x\|\), \(x\in K\cap\partial\Omega_{2}\).

Then T has a fixed point in \(K\cap(\overline{\Omega }_{2}\setminus \Omega_{1})\).

Definition 1

[13]

The Riemann-Liouville fractional derivative of order \(p>0\) of a function w is defined by \(D^{p}w(t)=\frac{1}{\Gamma (m-p)}(\frac{d}{dt})^{m}\int_{0}^{t}\frac{w(s)}{(t-s)^{p-m+1}}\,ds\), where \(m=[p]+1\) and \([p]\) denotes the integer part of the number p.

Definition 2

[13]

The Riemann-Liouville fractional integral of order \(p>0\) of a function w is defined by \(I^{p}w(t)=\frac{1}{\Gamma (p)}\int_{0}^{t}(t-s)^{p-1}w(s)\,ds\), \(t>0\).

Lemma 1

[13]

Let \(p>0\). Then, for some constants \(C_{i}\in \mathbb{R}\), \(i=1, 2, \ldots, m\), \(m=[p]+1\), we have

$$I^{p}D^{p}x(t)=x(t)+C_{1}t^{p-1}+C_{2}t^{p-2}+ \cdots+C_{m}t^{p-m}. $$

Lemma 2

Suppose that \(\varphi\in C(J)\) and (A1) holds. Then the unique solution of the linear boundary value problem

$$ \textstyle\begin{cases} D^{p}x(t)+\varphi(t)=0,\quad t\in J, 3< p\leq4, \\ D^{q_{1}}x(0)=D^{p_{1}}x(0)=D^{\gamma_{1}}x(0)=0,\qquad x(1)=\alpha _{1}x(\eta), \end{cases} $$
(2)

is given by

$$x(t)= \int_{0}^{1}G_{1}(t,s)\varphi(s)\,ds, $$

where \(G_{1}(t,s)\) is the Green’s function defined by

$$ G_{1}(t,s)= \textstyle\begin{cases} \frac{t^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)}[(1-s)^{p-1}-\alpha_{1}(\eta-s)^{p-1}]-\frac{(t-s)^{p-1}}{\Gamma(p)} ,& 0\leq s\leq t\leq\eta\leq1, \\ \frac{t^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)}[(1-s)^{p-1}-\alpha_{1}(\eta-s)^{p-1}] ,& 0\leq t\leq s\leq\eta\leq1, \\ \frac{t^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)}(1-s)^{p-1}-\frac {(t-s)^{p-1}}{\Gamma(p)} ,& 0\leq\eta\leq s\leq t\leq1, \\ \frac{t^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)}(1-s)^{p-1} ,& 0\leq\eta\leq t\leq s\leq1. \end{cases} $$
(3)
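As an illustrative numerical check (not part of the argument), the following Python sketch evaluates \(G_{1}\) from (3) — with the four cases merged via indicator functions, which also covers the configuration \(s\leq\eta\leq t\) — and verifies the conclusion of Lemma 2 for the constant forcing \(\varphi\equiv1\), for which Lemma 1 gives the closed form \(x(t)=[t^{p-1}(1-\alpha_{1}\eta^{p})/(1-\alpha_{1}\eta^{p-1})-t^{p}]/\Gamma(p+1)\). The parameter values are illustrative choices satisfying (A1).

```python
from math import gamma

# Illustrative parameter values satisfying (A1); not the only admissible choice.
p, alpha1, eta = 10 / 3, 0.5, 0.5

def G1(t, s):
    # Green's function (3); the two bracketed terms are switched on by the
    # indicators s <= eta and s <= t, which reproduces the four listed cases
    # and also covers the configuration s <= eta <= t.
    c = 1 - alpha1 * eta ** (p - 1)
    val = t ** (p - 1) * (1 - s) ** (p - 1)
    if s <= eta:
        val -= t ** (p - 1) * alpha1 * (eta - s) ** (p - 1)
    if s <= t:
        val -= c * (t - s) ** (p - 1)
    return val / (c * gamma(p))

def x_quad(t, n=4000):
    # Midpoint-rule approximation of x(t) = int_0^1 G1(t, s) phi(s) ds, phi = 1.
    h = 1.0 / n
    return sum(G1(t, (j + 0.5) * h) for j in range(n)) * h

def x_exact(t):
    # Closed-form solution of (2) for phi = 1, obtained from Lemma 1.
    c = 1 - alpha1 * eta ** (p - 1)
    return (t ** (p - 1) * (1 - alpha1 * eta ** p) / c - t ** p) / gamma(p + 1)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    assert abs(x_quad(t) - x_exact(t)) < 1e-5
# Boundary conditions of (2): x(0) = 0 and x(1) = alpha1 * x(eta).
assert x_exact(0.0) == 0.0
assert abs(x_exact(1.0) - alpha1 * x_exact(eta)) < 1e-12
```

Both the boundary conditions and the agreement with the closed form are confirmed to quadrature accuracy.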

Proof

Let \(x(t)=-I^{p}\varphi (t)+C_{1}t^{p-1}+C_{2}t^{p-2}+C_{3}t^{p-3}+C_{4}t^{p-4}\), then

$$\begin{aligned}& D^{q_{1}}x(t)=-I^{p-q_{1}}\varphi(t)+C_{1} \frac{\Gamma (p)t^{p-q_{1}-1}}{\Gamma(p-q_{1})}+C_{2}\frac{\Gamma (p-1)t^{p-q_{1}-2}}{\Gamma(p-q_{1}-1)} \\& \hphantom{D^{q_{1}}x(t)={}}{}+C_{3} \frac{\Gamma (p-2)t^{p-q_{1}-3}}{\Gamma(p-q_{1}-2)}+C_{4}\frac{\Gamma (p-3)t^{p-q_{1}-4}}{\Gamma(p-q_{1}-3)}, \\& D^{p_{1}}x(t)=-I^{p-p_{1}}\varphi(t)+C_{1} \frac{\Gamma (p)t^{p-p_{1}-1}}{\Gamma(p-p_{1})}+C_{2}\frac{\Gamma (p-1)t^{p-p_{1}-2}}{\Gamma(p-p_{1}-1)} \\& \hphantom{D^{p_{1}}x(t)={}}{}+C_{3} \frac{\Gamma (p-2)t^{p-p_{1}-3}}{\Gamma(p-p_{1}-2)}+C_{4}\frac{\Gamma (p-3)t^{p-p_{1}-4}}{\Gamma(p-p_{1}-3)}, \\& D^{\gamma_{1}}x(t)=-I^{p-\gamma_{1}}\varphi(t)+C_{1} \frac{\Gamma (p)t^{p-\gamma_{1}-1}}{\Gamma(p-\gamma_{1})}+C_{2}\frac{\Gamma (p-1)t^{p-\gamma_{1}-2}}{\Gamma(p-\gamma_{1}-1)} \\& \hphantom{D^{\gamma_{1}}x(t)={}}{}+C_{3} \frac{\Gamma (p-2)t^{p-\gamma_{1}-3}}{\Gamma(p-\gamma_{1}-2)}+C_{4}\frac{\Gamma (p-3)t^{p-\gamma_{1}-4}}{\Gamma(p-\gamma_{1}-3)}, \end{aligned}$$

\(D^{q_{1}}x(0)=0\) implies that \(C_{4}=0\): indeed, at \(t=0\) the terms \(t^{p-q_{1}-1}\), \(t^{p-q_{1}-2}\), \(t^{p-q_{1}-3}\) vanish, whereas \(t^{p-q_{1}-4}=\frac{1}{t^{4+q_{1}-p}}\) is unbounded at \(t=0\), so its coefficient must vanish. Similarly, \(D^{p_{1}}x(0)=0\) implies that \(C_{3}=0\) and \(D^{\gamma_{1}}x(0)=0\) implies that \(C_{2}=0\). Thus, \(x(t)=-I^{p}\varphi(t)+C_{1}t^{p-1}\). Now, by using the boundary condition \(x(1)=\alpha_{1}x(\eta)\), we get \(C_{1}=\frac{1}{1-\alpha_{1}\eta^{p-1}}(I^{p}\varphi(1)-\alpha _{1}I^{p}\varphi(\eta))\). Hence, we get the solution as follows:

$$\begin{aligned} x(t) =&-I^{p}\varphi(t)+\frac{t^{p-1}}{1-\alpha _{1}\eta^{p-1}}\bigl(I^{p} \varphi(1)-\alpha_{1}I^{p}\varphi(\eta)\bigr) \\ = & \frac{t^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \biggl[ \int _{0}^{1}(1-s)^{p-1}\varphi(s)\,ds- \alpha_{1} \int_{0}^{\eta}(\eta -s)^{p-1}\varphi(s)\,ds \biggr] \\ &{}-\frac{1}{\Gamma(p)} \int _{0}^{t}(t-s)^{p-1}\varphi(s)\,ds \\ = & \int_{0}^{1}G_{1}(t,s)\varphi(s)\,ds. \end{aligned}$$

Then \(G_{1}(t,s)\) can easily be obtained. □

Notation 1

Similarly, we can get

$$\begin{aligned} y(t) = & \frac{t^{q-1}}{(1-\alpha_{2}\xi ^{q-1})\Gamma(q)} \biggl[ \int_{0}^{1}(1-s)^{q-1}\varphi(s)\,ds- \alpha _{2} \int_{0}^{\xi}(\xi-s)^{q-1}\varphi(s)\,ds \biggr] \\ &{}-\frac{1}{\Gamma (q)} \int_{0}^{t}(t-s)^{q-1}\varphi(s)\,ds \\ = & \int_{0}^{1}G_{2}(t,s)\varphi(s)\,ds \end{aligned}$$

and

$$ G_{2}(t,s)= \textstyle\begin{cases} \frac{t^{q-1}}{(1-\alpha_{2}\xi^{q-1})\Gamma (q)}[(1-s)^{q-1}-\alpha_{2}(\xi-s)^{q-1}]-\frac{(t-s)^{q-1}}{\Gamma(q)} ,& 0\leq s\leq t\leq\xi\leq1, \\ \frac{t^{q-1}}{(1-\alpha_{2}\xi^{q-1})\Gamma(q)}[(1-s)^{q-1}-\alpha _{2}(\xi-s)^{q-1}] ,& 0\leq t\leq s\leq\xi\leq1, \\ \frac{t^{q-1}}{(1-\alpha_{2}\xi^{q-1})\Gamma(q)}(1-s)^{q-1}-\frac {(t-s)^{q-1}}{\Gamma(q)} ,& 0\leq\xi\leq s\leq t\leq1, \\ \frac{t^{q-1}}{(1-\alpha_{2}\xi^{q-1})\Gamma(q)}(1-s)^{q-1} ,& 0\leq\xi\leq t\leq s\leq1. \end{cases} $$

In view of Lemma 2, the system (1) is equivalent to the system of Fredholm integral equations

$$\textstyle\begin{cases}x(t)=\lambda_{1}\int _{0}^{1}G_{1}(t,s)w(s,x(s),y(s))\,ds, \\ y(t)=\lambda_{2}\int_{0}^{1}G_{2}(t,s)h(s,x(s),y(s))\,ds, \end{cases} $$

where \(G_{1}(t,s)\) and \(G_{2}(t,s)\) are the Green’s functions given in Lemma 2 and Notation 1, respectively.

Define \(X=\{x(t)\mid x\in C(J)\}\), endowed with the norm \(\|x\|=\max_{t\in J}|x(t)|\); the product space \(X\times X\) is endowed with the norm \(\|(x,y)\|=\|x\|+\|y\|\). Obviously, \((X,\|\cdot\|)\) is a Banach space. We define the cone \(K\subset X\times X\) by

$$\begin{aligned}& K=\Bigl\{ (x,y)\in X\times X: x,y\geq0, \min_{t\in J} \bigl[x(t)+y(t)\bigr]\geq\theta \bigl\Vert (x,y) \bigr\Vert \Bigr\} , \\& \quad \theta= \min\bigl\{ \theta_{1}=\delta^{p-1},\theta _{2}= \delta^{q-1}\bigr\} . \end{aligned}$$

Define an operator \(T:X\times X\rightarrow X\times X\) as

$$\begin{aligned} T(x,y) (t) =& \biggl(\lambda_{1} \int _{0}^{1}G_{1}(t,s)w \bigl(s,x(s),y(s)\bigr)\,ds, \lambda_{2} \int _{0}^{1}G_{2}(t,s)h\bigl(s,x(s),y(s) \bigr)\,ds \biggr) \\ = &\bigl(T_{1}(x,y), T_{2}(x,y)\bigr). \end{aligned}$$

The solutions of the system (1) coincide with the fixed points of the operator T.

Lemma 3

The Green’s functions \(G_{i}(t,s)\) (\(i=1, 2\)) are continuous on \(J\times J\) and satisfy the following properties:

  1. (1)

    \(G_{i}(t,s)\in C(J\times J)\) and \(G_{i}(t,s)\geq0\), \(\forall t, s\in J\);

  2. (2)

    \(\max_{t\in J}G_{i}(t,s)=G_{i}(1,s)\);

  3. (3)

\(\frac{\min_{t\in[\delta,1-\delta]}G_{i}(t,s)}{G_{i}(1,s)}\geq \theta_{i}\), \(\delta\in(0,1)\).

Proof

(1) If \(0\leq s\leq t\leq\eta\leq1\), then

$$\begin{aligned} G_{1}(t,s) =&\frac{t^{p-1}}{(1-\alpha_{1}\eta ^{p-1})\Gamma(p)}\bigl[(1-s)^{p-1}- \alpha_{1}(\eta-s)^{p-1}\bigr]-\frac {(t-s)^{p-1}}{\Gamma(p)} \\ \geq& \frac{t^{p-1}[(1-s)^{p-1}-\alpha_{1}\eta^{p-1}(1-\frac {s}{\eta})^{p-1}]-(1-\alpha_{1}\eta^{p-1})t^{p-1}(1-\frac {s}{t})^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \geq&\frac{t^{p-1}[(1-s)^{p-1}-\alpha_{1}\eta ^{p-1}(1-s)^{p-1}]-(1-\alpha_{1}\eta^{p-1})t^{p-1}(1-\frac {s}{t})^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ =&\frac{t^{p-1}[(1-s)^{p-1}-(1-\frac{s}{t})^{p-1}]}{\Gamma(p)}. \end{aligned}$$

Since \(s\leq t\), \(G_{1}(t,s)\geq0\).

If \(0\leq t\leq s\leq\eta\leq1\), then

$$\begin{aligned} G_{1}(t,s) =&\frac{t^{p-1}}{(1-\alpha_{1}\eta ^{p-1})\Gamma(p)}\bigl[(1-s)^{p-1}- \alpha_{1}(\eta-s)^{p-1}\bigr] \\ \geq& \frac{t^{p-1}[(1-s)^{p-1}-\alpha_{1}\eta^{p-1}(1-\frac {s}{\eta})^{p-1}]}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \geq&\frac{t^{p-1}(1-\alpha_{1}\eta^{p-1})(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} \\ =&\frac{t^{p-1}(1-s)^{p-1}}{\Gamma(p)}. \end{aligned}$$

Hence \(G_{1}(t,s)\geq0\).

If \(0\leq\eta\leq s\leq t\leq1\), then

$$\begin{aligned} G_{1}(t,s) =&\frac{t^{p-1}(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)}-\frac{(t-s)^{p-1}}{\Gamma(p)} \\ =& \frac{t^{p-1}(1-s)^{p-1}-(1-\alpha_{1}\eta ^{p-1})(t-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \geq&\frac{t^{p-1}(1-s)^{p-1}\alpha_{1}\eta^{p-1} }{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} \\ \geq&0. \end{aligned}$$

If \(0\leq\eta\leq t\leq s\leq1\), then

$$G_{1}(t,s)=\frac{t^{p-1}(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)}\geq0. $$

Similarly, we can obtain \(G_{2}(t,s)\geq0\). Thus, \(G_{i}(t,s)\geq0\) for every \(t, s\in J\).

(2) If \(0\leq s\leq t\leq\eta\leq1\), then

$$\begin{aligned} \begin{aligned} G_{1}(1,s)&=\frac{(1-s)^{p-1}-\alpha_{1}\eta ^{p-1}(1-\frac{s}{\eta})^{p-1}-(1-\alpha_{1}\eta ^{p-1})(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ &\geq \frac{(1-s)^{p-1}-\alpha_{1}\eta^{p-1}(1-\frac{s}{\eta })^{p-1}-(1-\alpha_{1}\eta^{p-1})(1-\frac{s}{\eta })^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ &\geq\frac{(1-s)^{p-1}-(1-\frac{s}{\eta})^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} \end{aligned} \end{aligned}$$

and

$$\begin{aligned} G_{1}(t,s) =&\frac{t^{p-1}[(1-s)^{p-1}-\alpha _{1}\eta^{p-1}(1-\frac{s}{\eta})^{p-1}]-(1-\alpha_{1}\eta ^{p-1})(1-\frac{s}{t})^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \leq& \frac{(1-\alpha_{1}\eta^{p-1})[(1-\frac{s}{\eta })^{p-1}-(1-\frac{s}{t})^{p-1}]}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)} \\ \leq& \frac{(1-\alpha_{1}\eta^{p-1})[(1-s)^{p-1}-(1-\frac {s}{t})^{p-1}]}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)}. \end{aligned}$$

Therefore, \(G_{1}(t,s)\leq G_{1}(1,s)\).

If \(0\leq t\leq s\leq\eta\leq1\), then

$$\begin{aligned} G_{1}(1,s) =&\frac{(1-s)^{p-1}-\alpha_{1}(\eta -s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \geq& \frac{t^{p-1}[(1-s)^{p-1}-\alpha_{1}(\eta -s)^{p-1}]}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ =&G_{1}(t,s). \end{aligned}$$

If \(0\leq\eta\leq s\leq t\leq1\), then

$$\begin{aligned} G_{1}(1,s) =&\frac{(1-s)^{p-1}-(1-\alpha_{1}\eta ^{p-1})(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ =&\frac{\alpha_{1}\eta^{p-1}(1-s)^{p-1}}{(1-\alpha_{1}\eta ^{p-1})\Gamma(p)} \end{aligned}$$

and

$$\begin{aligned} G_{1}(t,s) =&\frac{t^{p-1}(1-s)^{p-1}-t^{p-1}(1-\frac {s}{t})^{p-1}(1-\alpha_{1}\eta^{p-1})}{(1-\alpha_{1}\eta ^{p-1})\Gamma(p)} \\ \leq& \frac{(1-s)^{p-1}-(1-\frac{s}{t})^{p-1}(1-\alpha_{1}\eta ^{p-1})}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \leq& \frac{(1-\frac{s}{t})^{p-1}\alpha_{1}\eta^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} \\ \leq& \frac{(1-s)^{p-1}\alpha_{1}\eta^{p-1}}{(1-\alpha_{1}\eta ^{p-1})\Gamma(p)} \\ =& G_{1}(1,s). \end{aligned}$$

If \(0\leq\eta\leq t\leq s\leq1\), then

$$G_{1}(1,s)=\frac{(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)}\geq\frac{t^{p-1}(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma (p)}=G_{1}(t,s). $$

Similarly, we can obtain \(G_{2}(t,s)\leq G_{2}(1,s)\).

(3) If \(0\leq s\leq t\leq\eta\leq1\), then

$$ G_{1}(1,s) \leq \frac{(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} $$

and

$$\begin{aligned} G_{1}(t,s) \geq&\frac{t^{p-1}[(1-s)^{p-1}-\alpha _{1}(\eta-s)^{p-1}]-t^{p-1}(1-\alpha_{1}\eta^{p-1})(1-\frac {s}{t})^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ \geq& \frac{t^{p-1}[(1-s)^{p-1}-\alpha_{1}\eta^{p-1}(1-\frac {s}{t})^{p-1}-(1-\alpha_{1}\eta^{p-1})(1-\frac {s}{t})^{p-1}]}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)} \\ =&\frac{t^{p-1}[(1-s)^{p-1}-(1-\frac{s}{t})^{p-1}]}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)}. \end{aligned}$$

Let \(\sigma_{1}\) be a positive number such that \(\min_{t\in[\delta ,1-\delta]}G_{1}(t,s)\geq\sigma_{1}G_{1}(1,s)\). Then we can obtain

$$\sigma_{1}\leq\frac {t^{p-1}(1-s)^{p-1}-(t-s)^{p-1}}{(1-s)^{p-1}}=t^{p-1}-\biggl( \frac {t-s}{1-s}\biggr)^{p-1}\leq t^{p-1}. $$

If \(0\leq t\leq s\leq\eta\leq1\), then

$$ G_{1}(1,s) \leq \frac{(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} $$

and

$$G_{1}(t,s)\geq\frac{t^{p-1}(1-s)^{p-1}}{\Gamma(p)}. $$

Let \(\sigma_{2}\) be a positive number such that \(\min_{t\in[\delta ,1-\delta]}G_{1}(t,s)\geq\sigma_{2}G_{1}(1,s)\). Then we can obtain \(\sigma_{2}\leq t^{p-1}(1-\alpha_{1}\eta^{p-1})\).

If \(0\leq\eta\leq s\leq t\leq1\), then

$$ G_{1}(1,s) \leq \frac{(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} $$

and

$$ G_{1}(t,s) \geq \frac{t^{p-1}(1-s)^{p-1}\alpha_{1}\eta^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)}. $$

Let \(\sigma_{3}\) be a positive number such that \(\min_{t\in[\delta ,1-\delta]}G_{1}(t,s)\geq\sigma_{3}G_{1}(1,s)\). Then we can obtain \(\sigma_{3}\leq t^{p-1}\alpha_{1}\eta^{p-1}\).

If \(0\leq\eta\leq t\leq s\leq1\), then

$$ G_{1}(1,s) \leq \frac{(1-s)^{p-1}}{(1-\alpha _{1}\eta^{p-1})\Gamma(p)} $$

and

$$G_{1}(t,s)=\frac{t^{p-1}(1-s)^{p-1}}{(1-\alpha_{1}\eta^{p-1})\Gamma(p)}. $$

Let \(\sigma_{4}\) be a positive number such that \(\min_{t\in[\delta ,1-\delta]}G_{1}(t,s)\geq\sigma_{4}G_{1}(1,s)\). Then we can obtain \(\sigma_{4}\leq t^{p-1}\).

Define \(\theta_{1}=\min\{\sigma_{1}, \sigma_{2}, \sigma_{3}, \sigma_{4}\}\). Then \(\frac{\min_{t\in[\delta,1-\delta ]}G_{1}(t,s)}{G_{1}(1,s)}\geq\theta_{1}\), \(\delta\in(0,1)\). Similarly, we can prove \(\frac{\min_{t\in[\delta,1-\delta ]}G_{2}(t,s)}{G_{2}(1,s)}\geq\theta_{2}\), \(\delta\in(0,1)\). □
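As a complement to the case analysis above, the following Python sketch spot-checks the nonnegativity in property (1) and the pointwise lower bound \(G_{1}(t,s)\geq t^{p-1}G_{1}(1,s)\), which yields property (3) with \(\theta_{1}=\delta^{p-1}\), on a grid, for one illustrative set of parameters satisfying (A1).

```python
from math import gamma

# Illustrative parameter values satisfying (A1).
p, alpha1, eta = 10 / 3, 0.5, 0.5

def G1(t, s):
    # Green's function (3), with the four cases merged via indicator functions.
    c = 1 - alpha1 * eta ** (p - 1)
    val = t ** (p - 1) * (1 - s) ** (p - 1)
    if s <= eta:
        val -= t ** (p - 1) * alpha1 * (eta - s) ** (p - 1)
    if s <= t:
        val -= c * (t - s) ** (p - 1)
    return val / (c * gamma(p))

grid = [k / 200 for k in range(201)]
for t in grid:
    for s in grid:
        g = G1(t, s)
        assert g >= -1e-12                             # property (1): G1 >= 0
        assert g >= t ** (p - 1) * G1(1.0, s) - 1e-12  # gives (3) with theta1 = delta**(p-1)
```

The grid resolution is an ad hoc choice; the check is a sanity test, not a proof.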

Lemma 4

If (A1) holds, then \(T(K)\subset K\) and \(T:K\rightarrow K\) is a completely continuous operator.

Proof

The continuity of T is obvious. To prove \(T(K)\subset K\), let us choose \((x,y)\in K\). Since \(G_{i}(t,s)\leq G_{i}(1,s)\) for \(0\leq s\leq1\), and \(G_{i}(t,s)\geq\theta G_{i}(1,s)\) for \(\delta \leq t\leq1-\delta\), we have

$$\begin{aligned} \min_{t\in[\delta,1-\delta]}T_{1}(x,y) (t) \geq & \lambda_{1}\delta^{p-1} \int_{0}^{1}G_{1}(1,s)w\bigl(s,x(s),y(s) \bigr)\,ds \\ \geq&\delta^{p-1} \bigl\Vert T_{1}(x,y) \bigr\Vert . \end{aligned}$$

Similarly,

$$\min_{t\in[\delta,1-\delta]}T_{2}(x,y) (t)\geq\delta^{q-1} \bigl\Vert T_{2}(x,y) \bigr\Vert . $$

Thus,

$$\begin{aligned} \min_{t\in[\delta,1-\delta]}\bigl(T_{1}(x,y) (t)+T_{2}(x,y) (t)\bigr) \geq&\min_{t\in[\delta,1-\delta]}T_{1}(x,y) (t)+\min _{t\in[\delta,1-\delta ]}T_{2}(x,y) (t) \\ \geq&\theta \bigl\Vert \bigl(T_{1}(x,y), T_{2}(x,y)\bigr) \bigr\Vert . \end{aligned}$$

Since \(G_{i}(t,s)\geq0\), \(\forall t, s\in J\) and (A1) holds, we conclude that \(T(K)\subset K\). It is not difficult to show that T maps bounded sets into uniformly bounded and equicontinuous sets. Combining this with the Arzelà-Ascoli theorem, we see that \(T:K\rightarrow K\) is a completely continuous operator. □

3 Main results and proofs

Theorem 1

Assume that (A1) holds. Then, for all \(\lambda _{i}>0\), \(i=1, 2\), the system (1) has at least one positive solution in each of the following cases:

  1. (a)

    \(w_{0}=h_{0}=0\), and either \(w_{\infty}=\infty\) or \(h_{\infty }=\infty\) (superlinear).

  2. (b)

    \(w_{\infty}=h_{\infty}=0\), and either \(w_{0}=\infty\) or \(h_{0}=\infty\) (sublinear).

Proof

(a) Since \(w_{0}=h_{0}=0\), we may choose \(H_{1}>0\) such that \(w(t,x,y)\leq\varepsilon(x+y)\) and \(h(t,x,y)\leq\varepsilon (x+y)\) for \(0< x+y\leq H_{1}\), \(t\in J\), where the constant \(\varepsilon >0\) satisfies

$$2\varepsilon\lambda_{1} \int_{0}^{1}G_{1}(1,s)\,ds\leq1,\qquad 2 \varepsilon \lambda_{2} \int_{0}^{1}G_{2}(1,s)\,ds\leq1. $$

Set \(\Omega_{1}=\{(x,y):(x,y)\in X\times X, \|(x,y)\|< H_{1}\}\). If \((x,y)\in K\cap\partial\Omega_{1}\), \(\|(x,y)\|=H_{1}\), we have

$$\begin{aligned} T_{1}(x,y) (t) \leq&\lambda_{1} \int _{0}^{1}G_{1}(t,s)w \bigl(s,x(s),y(s)\bigr)\,ds \\ \leq& \varepsilon\lambda_{1} \int_{0}^{1}G_{1}(t,s) \bigl(x(s)+y(s) \bigr)\,ds \\ \leq&\varepsilon\lambda_{1}\bigl( \Vert x \Vert + \Vert y \Vert \bigr) \int_{0}^{1}G_{1}(1,s)\,ds \\ \leq&\frac{ \Vert (x,y) \Vert }{2}. \end{aligned}$$

Similarly, \(T_{2}(x,y)(t)\leq\frac{\|(x,y)\|}{2}\). Hence,

$$\bigl\Vert T(x,y) \bigr\Vert = \bigl\Vert \bigl(T_{1}(x,y), T_{2}(x,y)\bigr) \bigr\Vert = \bigl\Vert T_{1}(x,y) \bigr\Vert + \bigl\Vert T_{2}(x,y) \bigr\Vert \leq \bigl\Vert (x,y) \bigr\Vert $$

for \((x,y)\in K\cap\partial\Omega_{1}\). If we further assume that \(w_{\infty}=\infty\), then there exists \(\widehat{H}>0\) such that \(w(t,x,y)\geq\beta(x+y)\) for \((x+y)\geq\widehat{H}\), \(t\in J\), where \(\beta>0\) is chosen so that \(\lambda_{1}\beta\int _{0}^{1}G_{1}(1,s)\,ds\geq1\). Let \(H_{2}=\max\{2H_{1}, \delta ^{1-p}\widehat{H}\}\) and set \(\Omega_{2}=\{(x,y):(x,y)\in X\times X, \|(x,y)\|< H_{2}\}\). If \((x,y)\in K\cap\partial\Omega_{2}\), we have \(\min_{t\in[\delta,1-\delta]}(x(t)+y(t))\geq\delta^{p-1}\|(x,y)\| \geq\widehat{H}\), and for all \(t\in[\delta,1-\delta]\),

$$\begin{aligned} \min_{t\in[\delta,1-\delta]}T_{1}(x,y) (t) \geq &\min _{t\in[\delta,1-\delta]}\lambda_{1} \int _{0}^{1}G_{1}(t,s)w \bigl(s,x(s),y(s)\bigr)\,ds \\ \geq& \lambda_{1}\beta \int_{\delta}^{1-\delta }G_{1}(t,s) \bigl(x(s)+y(s) \bigr)\,ds \\ \geq&\lambda_{1}\beta\delta^{p-1} \bigl\Vert (x,y) \bigr\Vert \int _{0}^{1}G_{1}(1,s)\,ds \\ \geq& \bigl\Vert (x,y) \bigr\Vert . \end{aligned}$$

Therefore, \(\|T(x,y)\|=\|T_{1}(x,y)\|+\|T_{2}(x,y)\|\geq\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega_{2}\). An analogous estimate holds when \(h_{\infty}=\infty\).

Now by Theorem A, T has a fixed point \((x,y)\in K\cap (\overline{\Omega}_{2}\setminus\Omega_{1})\) such that \(H_{1}\leq\| (x,y)\|\leq H_{2}\), and the system (1) has a positive solution.

(b) If \(w_{0}=\infty\), we choose \(H_{1}>0\) so that \(w(t,x,y)\geq \widetilde{\beta}(x+y)\) for \(0< x+y\leq H_{1}\), \(t\in J\), where β̃ satisfies \(\lambda_{1}\widetilde{\beta}\delta ^{p-1}\int_{0}^{1}G_{1}(1,s)\,ds\geq1\). Let \(\Omega_{1}=\{ (x,y):(x,y)\in X\times X, \|(x,y)\|< H_{1}\}\). If \((x,y)\in K\cap \partial\Omega_{1}\), then \(\|(x,y)\|=H_{1}\), and for all \(t\in[\delta ,1-\delta]\),

$$\begin{aligned} \min_{t\in[\delta,1-\delta]}T_{1}(x,y) (t) \geq &\min _{t\in[\delta,1-\delta]}\lambda_{1} \int _{0}^{1}G_{1}(t,s)w \bigl(s,x(s),y(s)\bigr)\,ds \\ \geq& \lambda_{1}\widetilde{\beta} \int_{\delta}^{1-\delta }G_{1}(t,s) \bigl(x(s)+y(s) \bigr)\,ds \\ \geq&\lambda_{1}\widetilde{\beta}\delta^{p-1} \bigl\Vert (x,y) \bigr\Vert \int _{0}^{1}G_{1}(1,s)\,ds \\ \geq& \bigl\Vert (x,y) \bigr\Vert . \end{aligned}$$

Therefore, \(\|T(x,y)\|=\|T_{1}(x,y)\|+\|T_{2}(x,y)\|\geq\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega_{1}\). An analogous estimate holds when \(h_{0}=\infty\).

Set \(w^{\ast}(r)=\max_{t\in J, 0\leq x+y\leq r}w(t,x,y)\) and \(h^{\ast }(r)=\max_{t\in J, 0\leq x+y\leq r}h(t,x,y)\). Then \(w^{\ast}\) and \(h^{\ast}\) are nondecreasing. Moreover, from \(w_{\infty}=h_{\infty}=0\), we see that \(\lim_{r\rightarrow\infty }\frac{w^{\ast}(r)}{r}=0\) and \(\lim_{r\rightarrow\infty}\frac{h^{\ast }(r)}{r}=0\). Therefore, there is an \(H_{2}>2H_{1}\) such that \(w^{\ast }(r)\leq\varepsilon r\), \(h^{\ast}(r)\leq\varepsilon r\) for \(r\geq H_{2}\), where the constant \(\varepsilon>0\) satisfies

$$2\varepsilon\lambda_{1} \int_{0}^{1}G_{1}(1,s)\,ds\leq1,\qquad 2 \varepsilon \lambda_{2} \int_{0}^{1}G_{2}(1,s)\,ds\leq1. $$

Set \(\Omega_{2}=\{(x,y):(x,y)\in X\times X, \|(x,y)\|< H_{2}\}\). If \((x,y)\in K\cap\partial\Omega_{2}\), \(\|(x,y)\|=H_{2}\), we have

$$\begin{aligned} T_{1}(x,y) (t) \leq&\lambda_{1} \int _{0}^{1}G_{1}(t,s)w \bigl(s,x(s),y(s)\bigr)\,ds \\ \leq& \lambda_{1} \int_{0}^{1}G_{1}(t,s)w^{\ast}(H_{2}) \,ds \\ \leq&\varepsilon\lambda_{1}H_{2} \int_{0}^{1}G_{1}(1,s)\,ds \\ \leq&\frac{ \Vert (x,y) \Vert }{2}. \end{aligned}$$

Similarly, \(T_{2}(x,y)(t)\leq\frac{\|(x,y)\|}{2}\). Hence,

$$\bigl\Vert T(x,y) \bigr\Vert = \bigl\Vert \bigl(T_{1}(x,y), T_{2}(x,y)\bigr) \bigr\Vert = \bigl\Vert T_{1}(x,y) \bigr\Vert + \bigl\Vert T_{2}(x,y) \bigr\Vert \leq \bigl\Vert (x,y) \bigr\Vert $$

for \((x,y)\in K\cap\partial\Omega_{2}\).

Applying Theorem A, we conclude that the system (1) has a positive solution \((x,y)\in K\cap(\overline{\Omega}_{2}\setminus\Omega_{1})\). □

Theorem 2

Assume that (A1) and (A2) hold.

  1. (a)

    If \(w_{0}=h_{0}=w_{\infty}=h_{\infty}=0\), then there is a positive constant \(\sigma_{1}\) such that (1) has at least two positive solutions for all \(\lambda_{1}, \lambda_{2}\geq\sigma_{1}\).

  2. (b)

If \(w_{0}=\infty\) or \(h_{0}=\infty\), and either \(w_{\infty}=\infty\) or \(h_{\infty}=\infty\), then there is a positive constant \(\sigma_{2}\) such that the system (1) has at least two positive solutions for all \(0<\lambda_{1}, \lambda_{2}\leq\sigma_{2}\).

Proof

(a) For \((x,y)\in K\) with \(\|(x,y)\|=l\), let

$$m(l)=\min \biggl\{ \theta_{1} \int_{\delta}^{1-\delta }G_{1}(1,s)w\bigl(s,x(s),y(s) \bigr)\,ds, \theta_{2} \int_{\delta}^{1-\delta }G_{2}(1,s)h\bigl(s,x(s),y(s) \bigr)\,ds \biggr\} . $$

By (A2), \(m(l)>0\) for \(l>0\). Choose two numbers \(0< H_{3}< H_{4}\), and let

$$\begin{aligned} \begin{aligned} &\sigma_{1}=\max\biggl\{ \frac{H_{3}}{2m(H_{3})}, \frac{H_{4}}{2m(H_{4})}\biggr\} , \\ &\Omega_{i}=\bigl\{ (x,y):(x,y)\in X\times X, \mbox{and } \bigl\Vert (x,y) \bigr\Vert < H_{i}\bigr\} \quad (i=3, 4). \end{aligned} \end{aligned}$$

Then, for \(\lambda_{1}, \lambda_{2}\geq\sigma_{1}\), \((x,y)\in K\cap \partial\Omega_{i}\) (\(i=3, 4\)), and \(\|(x,y)\|=H_{i}\), Lemma 3(3) gives

$$\min_{t\in[\delta,1-\delta]}T_{1}(x,y) (t)\geq\lambda _{1}\theta_{1} \int_{\delta}^{1-\delta }G_{1}(1,s)w\bigl(s,x(s),y(s) \bigr)\,ds\geq\lambda_{1}m(H_{i})\geq\frac {H_{i}}{2}\quad (i=3,4). $$

Similarly, \(\min_{t\in[\delta,1-\delta]}T_{2}(x,y)(t)\geq\frac {H_{i}}{2}\) (\(i=3, 4\)). This implies that \(\|T(x,y)\|\geq H_{i}=\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega_{i}\) (\(i=3, 4\)). Since \(w_{0}=h_{0}=w_{\infty}=h_{\infty}=0\), it follows from the proofs of Theorem 1(a) and (b), respectively, that we can choose \(H_{1}<\frac {H_{3}}{2}\) and \(H_{2}>2H_{4}\) such that \(\|T(x,y)\|\leq\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega_{i}\) (\(i=1, 2\)), where \(\Omega_{i}=\{ (x,y):(x,y)\in X\times X, \|(x,y)\|< H_{i}\}\) (\(i=1, 2\)).

Applying Theorem A to \(\Omega_{1}\), \(\Omega_{3}\) and \(\Omega_{2}\), \(\Omega_{4}\), we get a positive solution \((x_{1},y_{1})\) such that \(H_{1}\leq\|(x_{1},y_{1})\|\leq H_{3}\) and another positive solution \((x_{2},y_{2})\) such that \(H_{4}\leq\|(x_{2},y_{2})\|\leq H_{2}\). Since \(H_{3}< H_{4}\), these two solutions are distinct.

(b) For \((x,y)\in K\) and \(\|(x,y)\|=L\), let

$$M(L)=\max \biggl\{ \int_{0}^{1}G_{1}(1,s)w\bigl(s,x(s),y(s) \bigr)\,ds, \int_{0}^{1}G_{2}(1,s)h\bigl(s,x(s),y(s) \bigr)\,ds \biggr\} . $$

Then \(M(L)>0\) for \(L>0\). Choose two numbers \(0< H_{3}< H_{4}\), let \(\sigma_{2}=\min\{\frac{H_{3}}{2M(H_{3})}, \frac{H_{4}}{2M(H_{4})}\}\), and set \(\Omega_{i}=\{(x,y):(x,y)\in X\times X, \|(x,y)\|< H_{i}\}\) (\(i=3, 4\)). Then, for \(\lambda_{1}, \lambda_{2}\leq\sigma_{2}\) and \((x,y)\in K\cap\partial\Omega_{i}\) (\(i=3, 4\)), \(\|(x,y)\|=H_{i}\), we have \(T_{1}(x,y)(t)\leq\lambda_{1}M(H_{i})\leq\frac{H_{i}}{2}\) and \(T_{2}(x,y)(t)\leq\frac{H_{i}}{2}\) (\(i=3, 4\)), which implies \(\| T(x,y)\|\leq H_{i}=\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega _{i}\) (\(i=3, 4\)). Since either \(w_{0}=\infty\) or \(h_{0}=\infty\), and either \(w_{\infty}=\infty\) or \(h_{\infty}=\infty\), it follows from the proofs of Theorem 1(a) and (b) that we can choose \(H_{1}<\frac {H_{3}}{2}\) and \(H_{2}>2H_{4}\) such that \(\|T(x,y)\|\geq\|(x,y)\|\) for \((x,y)\in K\cap\partial\Omega_{i}\) (\(i=1,2\)), where \(\Omega_{i}=\{ (x,y):(x,y)\in X\times X, \|(x,y)\|< H_{i}\}\) (\(i=1, 2\)).

Once again, we conclude that there exist two distinct positive solutions. □

Theorem 3

Assume (A1) and (A2) hold.

  1. (a)

    If \(w_{0}=h_{0}=0\) or \(w_{\infty}=h_{\infty}=0\), then there is a positive constant \(\sigma_{3}\) such that (1) has at least two positive solutions for all \(\lambda_{1}, \lambda_{2}\geq\sigma_{3}\).

  2. (b)

    If \(w_{0}=\infty\) or \(h_{0}=\infty\), or if \(w_{\infty}=\infty\) or \(h_{\infty}=\infty\), then there is a positive constant \(\sigma _{4}\) such that the system (1) has at least two positive solutions for all \(\lambda_{1}, \lambda_{2}\leq\sigma_{4}\).

Example 1

Consider the system of fractional differential equations given by

$$ \textstyle\begin{cases} D^{\frac{10}{3}}x(t)+ [x(t)+y(t)]^{4}=0,\quad t\in[0,1], \\ D^{\frac{13}{4}}y(t)+ 2[x(t)+y(t)]^{6}=0,\quad t\in[0,1], \\ D^{\frac{1}{2}}x(0)=D^{\frac{4}{3}}x(0)=D^{\frac{9}{4}}x(0)=0,\qquad x(1)=\frac{1}{2}x(\frac{1}{2}), \\ D^{\frac{2}{3}}y(0)=D^{\frac{3}{2}}y(0)=D^{\frac{5}{2}}y(0)=0,\qquad y(1)=\frac{1}{3}y(\frac{1}{3}), \end{cases} $$
(4)

where for all \(x, y>0\), \(w(t,x(t),y(t))=[x(t)+y(t)]^{4}>0\), \(h(t,x(t),y(t))=[x(t)+y(t)]^{6}>0\), \(q_{1}=\frac{1}{2}\), \(q_{2}=\frac {2}{3}\in(0,1)\); \(p_{1}=\frac{4}{3}\), \(p_{2}=\frac{3}{2}\in(1,2)\); \(\gamma_{1}=\frac{9}{4}\), \(\gamma_{2}=\frac{5}{2}\in(2,3)\); \(\lambda _{1}=1\), \(\lambda_{2}=2\), \(\alpha_{1}=\frac{1}{2}\), \(\alpha_{2}=\frac {1}{3}\), \(\eta=\frac{1}{2}\), \(\xi=\frac{1}{3}\), and \(\alpha_{1}\eta ^{p-1}=\frac{1}{2^{p}}\), \(\alpha_{2}\xi^{q-1}=\frac{1}{3^{q}}\in (0,1)\). By direct calculation we obtain \(w_{0}=h_{0}=0\) and \(w_{\infty }=h_{\infty}=\infty\). Then, by Theorem 1(a), the system (4) has at least one positive solution.

Example 2

Consider the system of fractional differential equations given by

$$ \textstyle\begin{cases} D^{\frac{7}{2}}x(t)+ \sqrt[4]{x(t)+y(t)}=0,\quad t\in[0,1], \\ D^{\frac{10}{3}}y(t)+ \sqrt[3]{x(t)+y(t)}=0,\quad t\in[0,1], \\ D^{\frac{1}{3}}x(0)=D^{\frac{3}{2}}x(0)=D^{\frac{9}{4}}x(0)=0,\qquad x(1)=x(\frac{1}{2}), \\ D^{\frac{1}{3}}y(0)=D^{\frac{3}{2}}y(0)=D^{\frac{9}{4}}y(0)=0,\qquad y(1)=y(\frac{1}{2}), \end{cases} $$
(5)

where for all \(x, y>0\), \(w(t,x(t),y(t))=\sqrt[4]{x(t)+y(t)}>0\), \(h(t,x(t),y(t))=\sqrt[3]{x(t)+y(t)}>0\), \(q_{1}=q_{2}=\frac{1}{3}\in (0,1)\); \(p_{1}=p_{2}=\frac{3}{2}\in(1,2)\); \(\gamma_{1}=\gamma_{2}=\frac {9}{4}\in(2,3)\); \(\lambda_{1}=\lambda_{2}=1\), \(\alpha_{1}=\alpha _{2}=1\), \(\eta=\xi=\frac{1}{2}\), and \(\alpha_{1}\eta^{p-1}=\frac{1}{2^{p-1}}\), \(\alpha _{2}\xi^{q-1}=\frac{1}{2^{q-1}}\in(0,1)\). By direct calculation we see that \(w_{0}=h_{0}=\infty\) and \(w_{\infty}=h_{\infty}=0\). Then, by Theorem 1(b), the system (5) has at least one positive solution.
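As a purely numerical illustration of this example (not part of the proof), the following Python sketch discretizes the equivalent integral system from Section 2 and runs a simple successive-approximation (Picard) iteration for the system (5) with \(\lambda_{1}=\lambda_{2}=1\). The grid size, initial guess, and iteration count are ad hoc choices, and the iteration is not claimed to converge in general.

```python
from math import gamma

# Parameters of Example 2; grid size, initial guess and iteration count are ad hoc.
p, q = 3.5, 10 / 3
a1, a2, eta, xi = 1.0, 1.0, 0.5, 0.5

def green(t, s, p, a, eta):
    # Green's function of Lemma 2 / Notation 1, cases merged via indicators.
    c = 1 - a * eta ** (p - 1)
    val = t ** (p - 1) * (1 - s) ** (p - 1)
    if s <= eta:
        val -= t ** (p - 1) * a * (eta - s) ** (p - 1)
    if s <= t:
        val -= c * (t - s) ** (p - 1)
    return val / (c * gamma(p))

N = 80
ts = [k / N for k in range(N + 1)]
wts = [0.5 / N if k in (0, N) else 1.0 / N for k in range(N + 1)]  # trapezoid rule

x = [1.0] * (N + 1)
y = [1.0] * (N + 1)
for _ in range(40):
    # One Picard step: (x, y) <- (T1(x, y), T2(x, y)) evaluated on the grid.
    fx = [(x[j] + y[j]) ** 0.25 for j in range(N + 1)]     # w(s, x, y) = (x + y)^{1/4}
    fy = [(x[j] + y[j]) ** (1 / 3) for j in range(N + 1)]  # h(s, x, y) = (x + y)^{1/3}
    xn = [sum(wts[j] * green(t, ts[j], p, a1, eta) * fx[j] for j in range(N + 1)) for t in ts]
    yn = [sum(wts[j] * green(t, ts[j], q, a2, xi) * fy[j] for j in range(N + 1)) for t in ts]
    diff = max(abs(xn[i] - x[i]) + abs(yn[i] - y[i]) for i in range(N + 1))
    x, y = xn, yn

assert all(v >= 0 for v in x + y)          # the approximate solution stays nonnegative
assert abs(x[N] - a1 * x[N // 2]) < 1e-9   # boundary condition x(1) = x(1/2)
assert diff < 1e-6                         # successive approximations have settled
```

For this sublinear nonlinearity the iterates settle quickly, and every iterate satisfies the multi-point boundary conditions automatically, since they are built into the Green's functions.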

4 Conclusions

In this paper, by using the Krasnosel’skii fixed point theorem for a cone map, we studied the existence and multiplicity of positive solutions for a class of systems of fractional differential equations with parameters, and we obtained conditions for the existence of at least one and of at least two positive solutions for the considered system.