1 Introduction

Fractional differential equations, which provide a natural description of the memory and hereditary properties of various materials and processes, are regarded as an important mathematical tool for a better understanding of many real world problems in the applied sciences, such as physics, chemistry, aerodynamics, Bode’s analysis of feedback amplifiers, capacitor theory, and electrical circuits. This is the main advantage of fractional differential equations in comparison with classical integer-order models. For applications and explanations of fractional differential equations, we refer the reader to the texts [1–4]. In particular, many authors have shown great interest in fractional-order boundary value problems (BVPs), and many excellent results for BVPs equipped with different kinds of boundary conditions have been obtained; for more details and examples, see [5–23] and the references cited therein.

Moreover, Alsulami, Ntouyas, Agarwal, Ahmad and Alsaedi [24] pointed out that fractional differential systems constitute an important and interesting field of investigation because of their applications in many real world problems such as anomalous diffusion [25], disease models [26–28], ecological models [29], synchronization of chaotic systems [30–32] and so forth. For some theoretical work on fractional differential systems, we refer the reader to [33–41]. However, there are only a few results on n-dimensional fractional differential systems (see, for example, [42, 43]), especially on n-dimensional higher-order singular fractional differential systems.

At the same time, we notice that a class of boundary value problem with integral boundary conditions have attracted the attention of Boucherif [44], Zhang et al. [45, 46], Hao et al. [47], Jiang et al. [48, 49], Kong [50], Feng et al. [51], Ahmad et al. [52], Mao et al. [53], and Liu et al. [54, 55].

To the best of our knowledge, there are no articles in the literature on denumerably many positive solutions for n-dimensional higher-order singular fractional differential systems with integral boundary conditions. More precisely, the case where \(g_{i}\in L^{p}[0,1]\) for some \(p\geq1\), \(i=1,2,\ldots, n\), with infinitely many singularities in \([0,\frac{1}{2})\), is still open for the following n-dimensional higher-order singular fractional differential system:

$$ \mathbf{D}_{0^{+}}^{\alpha}\mathbf{x}(t)+\lambda\mathbf{g}(t) \mathbf {f} \bigl(t,\mathbf{x}(t) \bigr)=0, \quad0< t< 1 $$
(1.1)

with the following boundary conditions:

$$ \textstyle\begin{cases} \mathbf{x}(0)=\mathbf{x}^{\prime}(0)=\cdots=\mathbf{x}^{(n-2)}(0)=0,\\ a\mathbf{x}(1)+b\mathbf{x}'(1)=\int_{0}^{1}\mathbf{h}(t)\mathbf{x}(t)\,dt, \end{cases} $$
(1.2)

where \(\mathbf{D}_{0+}^{\alpha}\) is the standard Riemann–Liouville fractional derivative of order \(n-1<\alpha\leq n\), \(n\geq3\), λ is a positive parameter, \(a>0\), \(b\geq0\) and \(a>(\alpha-1)b\). In addition,

$$\begin{aligned}& \mathbf{x}=[x_{1},x_{2},\dots,x_{n}]^{\top}, \\& \mathbf{g}(t)=\operatorname{diag} \bigl[g_{1}(t), g_{2}(t), \ldots, g_{n}(t) \bigr], \\& \mathbf{f}(t,\mathbf{x})= \bigl[f_{1}(t, \mathbf{x}),f_{2}(t,\mathbf{x}),\ldots ,f_{i}(t,\mathbf{x}), \ldots,f_{n}(t,\mathbf{x}) \bigr]^{\top}, \\& \mathbf{h}(t)=\operatorname{diag} \bigl[h_{1}(t), h_{2}(t), \ldots, h_{n}(t) \bigr], \end{aligned}$$

where

$$f_{i}(t,\mathbf{x})=f_{i}(t,x_{1},x_{2}, \ldots,x_{i},\ldots, x_{n}). $$

Therefore, system (1.1) means that

$$ \textstyle\begin{cases} -\mathbf{D}_{0^{+}}^{\alpha}x_{1}(t)=\lambda g_{1}(t)f_{1}(t,x_{1}(t),x_{2}(t),\ldots,x_{n}(t)),& 0< t< 1,\\ -\mathbf{D}_{0^{+}}^{\alpha}x_{2}(t)=\lambda g_{2}(t)f_{2}(t,x_{1}(t),x_{2}(t),\ldots,x_{n}(t)),& 0< t< 1,\\ \cdots& \cdots,\\ -\mathbf{D}_{0^{+}}^{\alpha}x_{n}(t)=\lambda g_{n}(t)f_{n}(t,x_{1}(t),x_{2}(t),\ldots,x_{n}(t)),& 0< t< 1, \end{cases} $$
(1.3)

(1.2) means that

$$ \textstyle\begin{cases} x_{1}(0)=x_{1}^{\prime}(0)=\cdots=x_{1}^{(n-2)}(0)=0,\\ ax_{1}(1)+bx_{1}^{\prime}(1)=\int_{0}^{1}h_{1}(t)x_{1}(t)\,dt,\\ x_{2}(0)=x_{2}^{\prime}(0)=\cdots=x_{2}^{(n-2)}(0)=0,\\ ax_{2}(1)+bx_{2}^{\prime}(1)=\int_{0}^{1}h_{2}(t)x_{2}(t)\,dt,\\ \cdots,\\ x_{n}(0)=x_{n}^{\prime}(0)=\cdots=x_{n}^{(n-2)}(0)=0,\\ ax_{n}(1)+bx_{n}^{\prime}(1)=\int_{0}^{1}h_{n}(t)x_{n}(t)\,dt. \end{cases} $$
(1.4)

Here we emphasize that our problem is new in the sense of the n-dimensional higher-order singular fractional differential systems introduced here. To the best of our knowledge, the existence of single or multiple positive solutions for the n-dimensional higher-order singular fractional differential system (1.1)–(1.2) has not yet been studied, especially the existence of denumerably many positive solutions and of two infinite families of positive solutions for system (1.1)–(1.2). Consequently, the main results of the present work are a useful contribution to the existing literature on n-dimensional higher-order singular fractional differential systems. The existence of denumerably many positive solutions and of two infinite families of positive solutions for the given problem is new, though it is proved by applying the well-known method based on fixed point theory in cones and the partially ordered structure of Banach spaces.

Throughout this paper, we use \(i=1,2,\dots,n\), unless otherwise stated.

Let the components of g and f satisfy the following conditions:

\((H_{1})\) :

\(g_{i}\in L^{p}[0,1]\) for some \(p\in[1,+\infty)\), and there exists \(N_{i}>0\) such that \(g_{i}(t)\geq N_{i}\) a.e. on \(J=[0,1]\);

\((H_{2})\) :

there exists a sequence \(\{t_{j}^{\prime}\} _{j=1}^{\infty} \) such that \(t_{1}^{\prime}<\frac{1}{2}\), \(t_{j}^{\prime } \downarrow t_{0}^{\prime}>0 \) and \(\lim_{t\rightarrow t_{j}^{\prime}} g_{i}(t) =+\infty\) for all \(j=1, 2,\ldots\) ;

\((H_{3})\) :

\(f_{i}\in C(J\times R_{+}^{n}, R_{+})\), where \(R_{+}=[0,+\infty)\) and \(R_{+}^{n}=\prod_{i=1}^{n}R_{+}\);

\((H_{4})\) :

\(h_{i}\in L^{1}[0,1]\) is nonnegative with \(\mu_{i}\in [0,a-(\alpha-1)b)\), where \(\mu_{i}\) is defined in (2.18).
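Hypotheses \((H_{1})\)–\((H_{2})\) are not vacuous. The following Python sketch checks a concrete example (our own illustrative construction, not taken from the paper): take \(t_{j}^{\prime}=\frac{1}{4}+2^{-(j+2)}\), so that \(t_{1}^{\prime}=\frac{3}{8}<\frac{1}{2}\) and \(t_{j}^{\prime}\downarrow t_{0}^{\prime}=\frac{1}{4}\), and put \(g(t)=1+\sum_{j}2^{-j}|t-t_{j}^{\prime}|^{-1/2}\), which blows up at every \(t_{j}^{\prime}\), is bounded below by \(N_{i}=1\), and lies in \(L^{1}[0,1]\) because the series of the terms' \(L^{1}\) norms converges:

```python
import math

# Illustrative g_i satisfying (H1)-(H2); the series is truncated at 40 terms.
tj = [0.25 + 2.0 ** (-(j + 2)) for j in range(1, 41)]  # t_j' decreasing to 1/4

def g(t):
    # g(t) = 1 + sum_j 2^{-j} |t - t_j'|^{-1/2}
    return 1.0 + sum(2.0 ** (-j) / math.sqrt(abs(t - c))
                     for j, c in enumerate(tj, start=1))

# Midpoint rule with a prime number of cells, so no node hits a dyadic t_j'.
N = 9973
gvals = [g((k + 0.5) / N) for k in range(N)]
approx_L1 = sum(gvals) / N   # estimate of the L^1 norm of g
gmin = min(gvals)

assert approx_L1 < 10.0   # finite L^1 norm despite denumerably many singularities
assert gmin >= 1.0        # the uniform lower bound N_i = 1 from (H1)
```

The truncation at 40 terms only illustrates the construction; the full series behaves in the same way since its tail is uniformly small in \(L^{1}\).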

The plan of this paper is as follows. We shall introduce some basic definitions and lemmas of fractional calculus in the rest of this section. In Sect. 2, we give the expression and new properties of Green’s function associated with system (1.1)–(1.2). In Sect. 3, we present some characteristics of the integral operator associated with system (1.1)–(1.2) and state two fixed point theorems in cones. In Sect. 4, we discuss the existence of denumerably many positive solutions of system (1.1)–(1.2). In Sect. 5, we will prove the existence of two infinite families of positive solutions of system (1.1)–(1.2). In Sect. 6, we give some interesting comments and remarks associated with system (1.1)–(1.2).

In the rest of this section, we introduce some basic definitions and lemmas of fractional calculus.

Definition 1.1

([2])

The integral

$$I_{0+}^{\alpha}f(x)=\frac{1}{\Gamma(\alpha)} \int_{0}^{x} \frac{f(t)}{(x-t)^{1-\alpha}}\,dt, \quad x>0, $$

where \(\alpha>0\), is called the Riemann–Liouville fractional integral of order α.

Definition 1.2

([2])

For a function \(f(x)\) given in the interval \([0,1)\), the expression

$$D_{0+}^{\alpha}f(x)=\frac{1}{\Gamma(n-\alpha)} \biggl(\frac{d}{dx} \biggr)^{n} \int_{0}^{x} \frac{f(t)}{(x-t)^{\alpha-n+1}}\,dt, $$

where \(n=[\alpha]+1\), \([\alpha]\) denotes the integer part of number α, is called the Riemann–Liouville fractional derivative of order α.

Lemma 1.1

([7])

Assume that \(u\in C(0,1)\cap L(0,1)\) has a fractional derivative of order \(\alpha>0\) that belongs to \(C(0, 1)\cap L(0,1)\). Then

$$I_{0+}^{\alpha}D_{0+}^{\alpha}u(t)=u(t)+C_{1}t^{\alpha-1}+C_{2}t^{\alpha-2} +\cdots+C_{N}t^{\alpha-N}, $$

for some \(C_{i}\in R\), \(i=1,2,\ldots, N\), where N is the smallest integer greater than or equal to α.
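Definition 1.1 can be sanity-checked against the classical identity \(I_{0+}^{\alpha}t^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\alpha+1)}t^{\mu+\alpha}\). The sketch below (the choices \(\alpha=1/2\), \(x=0.7\), \(f(t)=t\) and the quadrature scheme are ours, purely for illustration) evaluates the defining integral by the midpoint rule:

```python
import math

alpha, x = 0.5, 0.7          # illustrative choices
N = 200_000
h = x / N
approx = 0.0
for k in range(N):
    t = (k + 0.5) * h        # midpoints avoid the integrable singularity at t = x
    approx += (x - t) ** (alpha - 1) * t
approx *= h / math.gamma(alpha)

# Classical value: I^alpha t = Gamma(2)/Gamma(2+alpha) * x^(1+alpha)
exact = math.gamma(2.0) / math.gamma(2.0 + alpha) * x ** (1.0 + alpha)
assert abs(approx - exact) < 1e-2
```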

2 Green’s function associated with system (1.1)–(1.2)

In this section, we discuss the expression and properties of the Green’s function associated with system (1.1)–(1.2).

Let \(\mathbf{y}=[y_{1},y_{2},\ldots,y_{n}]^{\top}\). Consider the fractional differential system

$$ \mathbf{D}_{0^{+}}^{\alpha}\mathbf{x}(t)+\mathbf{y}(t)=0,\quad 0< t< 1, $$
(2.1)

with the boundary conditions (1.2). Equation (2.1) means that

$$ \textstyle\begin{cases} -\mathbf{D}_{0^{+}}^{\alpha}x_{1}(t)=y_{1}(t),& 0< t< 1,\\ -\mathbf{D}_{0^{+}}^{\alpha}x_{2}(t)=y_{2}(t),& 0< t< 1,\\ \cdots& \cdots,\\ -\mathbf{D}_{0^{+}}^{\alpha}x_{n}(t)=y_{n}(t),& 0< t< 1. \end{cases} $$
(2.2)

Lemma 2.1

If \(\int_{0}^{1}h_{i}(t)t^{\alpha-1}\,dt \neq a-(\alpha-1)b\) and \(y_{i}\in C[0,1]\), \(i=1,2,\ldots,n\), then system (2.1)–(1.2) has a unique solution \(\mathbf{x}=[x_{1},x_{2},\dots,x_{n}]^{\top}\) in which \(x_{i}(t)\) is given by

$$ x_{i}(t) = \int_{0}^{1}G_{i}(t,s)y_{i}(s) \,ds, $$
(2.3)

where

$$\begin{aligned}& G_{i}(t,s)=G_{1}(t,s)+G_{2i}(t,s), \end{aligned}$$
(2.4)
$$\begin{aligned}& G_{1}(t,s)=\frac{1}{\Gamma(\alpha)(a+(\alpha-1)b)} \textstyle\begin{cases} at^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-2}\\ \quad{}-(a+(\alpha-1)b)(t-s)^{\alpha-1}, \quad 0\leq s \leq t\leq1,\\ at^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha-1}(1-s)^{\alpha -2},\\ \hphantom{\quad{}-(a+(\alpha-1)b)(t-s)^{\alpha-1}, \quad{}} 0\leq t \leq s \leq1, \end{cases}\displaystyle \end{aligned}$$
(2.5)
$$\begin{aligned}& G_{2i}(t,s)=\frac{t^{\alpha-1}}{(a-(\alpha-1)b)-\int _{0}^{1}h_{i}(v)v^{\alpha-1}\,dv} \int_{0}^{1}h_{i}(v)G_{1}(v,s) \,dv. \end{aligned}$$
(2.6)

Proof

System (2.1)–(1.2) is equivalent to system (2.2)–(1.4). Therefore system (2.1)–(1.2) has a unique solution x if and only if, for each \(i=1,2,\ldots,n\), the system

$$ \textstyle\begin{cases} \mathbf{D}_{0+}^{\alpha}x_{i}(t)+y_{i}(t)=0, \quad 0< t < 1, \\ x_{i}(0)=x_{i}^{\prime}(0)=\cdots=x_{i}^{(n-2)}(0)=0,\\ ax_{i}(1)+bx_{i}^{\prime}(1)=\int_{0}^{1}h_{i}(t)x_{i}(t)\,dt \end{cases} $$
(2.7)

has a unique solution \(x_{i}\), which is given by (2.3).

Next, by a proof which is similar to that of Lemma 2.1 in [8], we can show that (2.3) holds. This finishes the proof of Lemma 2.1. □
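The formulas (2.4)–(2.6) are straightforward to evaluate numerically. The following sketch (the sample data \(\alpha=5/2\), \(a=1\), \(b=1/2\) and the hypothetical weight \(h_{i}(t)=t/2\) are our own choices, satisfying \(a>(\alpha-1)b\) and \((H_{4})\)) builds \(G_{i}=G_{1}+G_{2i}\) and checks the sign condition asserted later in Theorem 2.1(i):

```python
import math

alpha, a, b = 2.5, 1.0, 0.5       # sample data with alpha > 2 and a > (alpha-1)b
h = lambda t: t / 2.0              # hypothetical weight h_i

def G1(t, s):
    # Green's function (2.5)
    c = math.gamma(alpha) * (a + (alpha - 1) * b)
    val = (a * t**(alpha-1) * (1-s)**(alpha-1)
           + b * (alpha-1) * t**(alpha-1) * (1-s)**(alpha-2))
    if s <= t:
        val -= (a + (alpha-1) * b) * (t - s)**(alpha-1)
    return val / c

# mu_i from (2.18) and G_{2i} from (2.6), via the midpoint rule
N = 2000
nodes = [(k + 0.5) / N for k in range(N)]
mu = sum(h(v) * v**(alpha-1) for v in nodes) / N
assert 0 <= mu < a - (alpha - 1) * b          # hypothesis (H4)

def G2(t, s):
    integral = sum(h(v) * G1(v, s) for v in nodes) / N
    return t**(alpha-1) * integral / ((a - (alpha-1) * b) - mu)

# G_i = G_1 + G_{2i} is nonnegative on J x J (up to rounding error)
grid = [k / 20 for k in range(21)]
min_G = min(G1(t, s) + G2(t, s) for t in grid for s in grid)
assert min_G >= -1e-12
```

For this choice of \(h_{i}\), \(\mu_{i}=\int_{0}^{1}\frac{t}{2}t^{\alpha-1}\,dt=\frac{1}{7}\approx0.143<a-(\alpha-1)b=0.25\), so the hypothesis of Lemma 2.1 holds.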

From (2.4), (2.5) and (2.6), we can prove that \(G_{i}(t,s)\), \(G_{1}(t,s)\) and \(G_{2i}(t,s)\) have the following properties.

Proposition 2.1

The function \(G_{1}(t,s)\) defined by (2.5) satisfies

  1. (i)

    \(G_{1}(t,s)\geq0\) is continuous for all \(t,s \in J\), \(G_{1}(t,s)>0\), \(\forall t, s\in(0,1)\);

  2. (ii)

    For all \(t\in J\), \(s\in(0,1)\), we have

    $$ \begin{aligned}[b]G_{1}(t,s)&\leq G_{1} \bigl( \tau(s),s \bigr) \\ &= \bigl(a \bigl(\tau(s) \bigr)^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha -1) \bigl(\tau(s) \bigr)^{\alpha-1}(1-s)^{\alpha-2} \\ &\quad {} - \bigl(a+b(\alpha-1) \bigr) \bigl(\tau(s)-s \bigr)^{\alpha-1} \bigr) / \bigl(\Gamma(\alpha) \bigl(a+b(\alpha-1) \bigr) \bigr), \end{aligned} $$
    (2.8)

    where

    $$ \tau(s)=\frac{s}{1-e(s)(1-s)^{\frac{\alpha-1}{\alpha-2}}}, \qquad e(s)= \biggl[\frac{a+\frac{b(\alpha-1)}{1-s}}{a+b(\alpha-1)} \biggr]^{\frac{1}{\alpha -2}}. $$
    (2.9)

Proof

(i) It is obvious that \(G_{1}(t,s)\) is continuous on \(J\times J\) and \(G_{1}(t,s)\geq0\) when \(s\geq t\).

For \(0\leq s< t\leq1\), we have

$$\begin{aligned}[b] &at^{\alpha-1}(1-s)^{\alpha-1}+b( \alpha-1)t^{\alpha-1}(1-s)^{\alpha -2}- \bigl(a+(\alpha-1)b \bigr) (t-s)^{\alpha-1} \\ &\quad =(1-s)^{\alpha-1} \biggl[ \bigl(a+b(\alpha-1) (1-s)^{-1} \bigr)t^{\alpha-1}- \bigl(a+b(\alpha -1) \bigr) \biggl(\frac{t-s}{1-s} \biggr)^{\alpha-1} \biggr] \\ &\quad \geq0. \end{aligned} $$

So, by (2.5), we have

$$G_{1}(t,s)\geq0, \quad\forall t, s \in J. $$

Similarly, for \(t,s\in(0,1)\), we have \(G_{1}(t,s)>0\).

(ii) Since \(n-1<\alpha\leq n\), \(n\geq3\), it is clear that \(G_{1}(t,s)\) is increasing with respect to t for \(0\leq t\leq s\leq1\).

On the other hand, from the definition of \(G_{1}(t,s)\), for given \(s\in(0,1)\), \(s< t\leq1\), we have

$$\begin{aligned}[b] \frac{\partial G_{1}(t,s)}{\partial t}&=\frac{\alpha-1}{\Gamma(\alpha)(a+b(\alpha-1))} \bigl\{ at^{\alpha -2}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha-2}(1-s)^{\alpha-2} \\ &\quad{}- \bigl[a+b(\alpha-1) \bigr](t-s)^{\alpha-2} \bigr\} . \end{aligned} $$

Let

$$\frac{\partial G_{1}(t,s)}{\partial t}=0. $$

Then we have

$$at^{\alpha-2}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha-2}(1-s)^{\alpha -2}= \bigl[a+b(\alpha-1) \bigr](t-s)^{\alpha-2}, $$

and so

$$ \biggl(a+\frac{b(\alpha-1)}{1-s} \biggr) (1-s)^{\alpha-1}= \bigl[a+b(\alpha-1) \bigr] \biggl(1-\frac {s}{t} \biggr)^{\alpha-2}. $$
(2.10)

Noticing \(\alpha>2\), from (2.10), we have

$$t=\frac{s}{1-e(s)(1-s)^{\frac{\alpha-1}{\alpha-2}}}=:\tau(s), \qquad e(s)= \biggl[\frac{a+\frac{b(\alpha-1)}{1-s}}{a+b(\alpha-1)} \biggr]^{\frac{1}{\alpha-2}}. $$

Then, for given \(s\in(0,1)\), \(G_{1}(\cdot,s)\) attains its maximum at \(t=\tau(s)\) when \(s< t\). Combining this with the fact that \(G_{1}(t,s)\) is increasing in t for \(t\leq s\), we see that (2.8) holds. □
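The maximizing role of \(\tau(s)\) is easy to verify numerically. The sketch below uses the sample values \(\alpha=5/2\), \(a=1\), \(b=0\) of Figure 1 (the parameter and grid choices are ours) to check (2.8) and the comparison of Remark 2.1:

```python
import math

alpha, a, b = 2.5, 1.0, 0.0   # the sample parameters of Figure 1

def G1(t, s):
    # Green's function (2.5)
    c = math.gamma(alpha) * (a + (alpha - 1) * b)
    val = (a * t**(alpha-1) * (1-s)**(alpha-1)
           + b * (alpha-1) * t**(alpha-1) * (1-s)**(alpha-2))
    if s <= t:
        val -= (a + (alpha-1) * b) * (t - s)**(alpha-1)
    return val / c

def tau(s):
    # the maximizer (2.9)
    e = ((a + b * (alpha-1) / (1-s)) / (a + b * (alpha-1))) ** (1 / (alpha-2))
    return s / (1 - e * (1-s) ** ((alpha-1) / (alpha-2)))

for s in [0.1, 0.3, 0.5, 0.7, 0.9]:
    ts = tau(s)
    assert s < ts <= 1
    gmax = G1(ts, s)
    # (2.8): G1(t, s) <= G1(tau(s), s) for all t in J
    assert all(G1(t, s) <= gmax + 1e-12 for t in [k / 200 for k in range(201)])
    assert gmax >= G1(s, s)   # cf. Remark 2.1 for alpha > 2
```

With \(b=0\) the formula simplifies to \(\tau(s)=s/(1-(1-s)^{3})\) for \(\alpha=5/2\), consistent with the monotone curve in Figure 2.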

Remark 2.1

From Fig. 1, we can see \(G_{1}(s,s)\leq G_{1}(\tau(s),s)\) for \(\alpha>2\). If \(1<\alpha\leq2\), then

$$G_{1}(t,s)\leq G_{1}(s,s)=\frac{as^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)s^{\alpha -1}(1-s)^{\alpha-2}}{\Gamma(\alpha)(a+b(\alpha-1))}. $$
Figure 1

Graph of function \(G_{1}(\tau(s),s)\), \(G_{1}(s,s)\) for \(\alpha=5/2\), \(a=1\), \(b=0\)

Remark 2.2

From Fig. 2, we can see that \(\tau (s)\) is increasing with respect to s.

Figure 2

Graph of function \(\tau(s)\), for \(\alpha=5/2\), \(a=1\), \(b=0\)

Remark 2.3

From Fig. 3, we can see that \(G_{1}(\tau(s),s)>0\) for \(s\in J_{\theta}=[\theta,1-\theta]\), where \(\theta\in (0,\frac{1}{2})\).

Figure 3

Graph of function \(G_{1}(\tau(s),s)\), for \(\alpha=5/2\), \(a=1\), \(b=0\)

Remark 2.4

Let \(\bar{G}_{1}(\tau(s),s)=a(\tau(s))^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)(\tau (s))^{\alpha-1}(1-s)^{\alpha-2} -(a+b(\alpha-1))(\tau(s)-s)^{\alpha-1}\). From (2.8), for \(s\in (0,1)\), we have

$$ \begin{aligned}[b] \frac{d \bar{G}_{1}(\tau(s),s)}{d s}&=-a(\alpha-1) (1-s)^{\alpha-2} \bigl(\tau(s) \bigr)^{\alpha-1} \\ &\quad{} +a(\alpha-1) (1-s)^{\alpha-1} \bigl(\tau(s) \bigr)^{\alpha-2} \bigl(\tau'(s) \bigr) \\ &\quad{} -b(\alpha-1) (\alpha-2) (1-s)^{\alpha-3} \bigl(\tau(s) \bigr)^{\alpha-1} \\ &\quad{} +b(\alpha-1)^{2}(1-s)^{\alpha-2} \bigl(\tau(s) \bigr)^{\alpha-2} \bigl(\tau '(s) \bigr) \\ &\quad{} -(\alpha-1) \bigl(a+b(\alpha-1) \bigr) \bigl(\tau(s)-s \bigr)^{\alpha -2} \bigl(\tau'(s)-1 \bigr). \end{aligned} $$
(2.11)

Remark 2.5

From (2.11), we have

$$\begin{aligned}[b] \lim_{s\rightarrow0}\frac{d \bar{G}_{1}(\tau(s),s)}{d s}&=( \alpha-1) \biggl[- \bigl(a+b(\alpha-2) \bigr) \biggl(\frac{\alpha-2}{\alpha-1} \biggr)^{\alpha-1}+ \bigl(a+b(\alpha-1) \bigr) \biggl(\frac{\alpha-2}{\alpha-1} \biggr)^{\alpha-2} \biggr] \\ & = \biggl(\frac{\alpha-2}{\alpha-1} \biggr)^{\alpha-2} \bigl(a+b(2\alpha-3) \bigr) \\ & :=f(\alpha). \end{aligned} $$

Remark 2.6

Noticing that \(\alpha>2\), it follows from Remark 2.5 that \(f(\alpha)>0\).

Remark 2.7

It is interesting to point out that \(f(\alpha)\) is not necessarily decreasing with respect to α in the case \(a>0\) and \(b>0\). If \(a>0\) and \(b=0\), then \(f(\alpha)\) is decreasing with respect to α.
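The closed form of \(f(\alpha)\) in Remark 2.5 is easy to probe numerically. The sketch below (the parameter values are our own illustrative choices) confirms Remarks 2.6 and 2.7:

```python
def f(alpha, a, b):
    # f(alpha) = ((alpha-2)/(alpha-1))^(alpha-2) * (a + b(2 alpha - 3)), Remark 2.5
    return ((alpha - 2) / (alpha - 1)) ** (alpha - 2) * (a + b * (2 * alpha - 3))

alphas = [2.1, 2.5, 3.0, 4.0, 5.0]

# Remark 2.6: f(alpha) > 0 for alpha > 2
assert all(f(al, 1.0, 0.5) > 0 for al in alphas)

# Remark 2.7: with b = 0, f is decreasing in alpha ...
vals_b0 = [f(al, 1.0, 0.0) for al in alphas]
assert all(x > y for x, y in zip(vals_b0, vals_b0[1:]))

# ... while for a, b > 0 it need not be decreasing
vals_b1 = [f(al, 1.0, 1.0) for al in alphas]
assert not all(x > y for x, y in zip(vals_b1, vals_b1[1:]))
```

For \(b=0\) one has \(f(\alpha)=a\bigl(\frac{\alpha-2}{\alpha-1}\bigr)^{\alpha-2}\), which decreases toward \(a/e\) as \(\alpha\to\infty\); the factor \(a+b(2\alpha-3)\) grows linearly, which is why monotonicity can fail for \(b>0\).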

Proposition 2.2

There exists \(\gamma>0\) such that

$$ \min_{t\in[\theta,1-\theta]}G_{1}(t,s)\geq\gamma G_{1} \bigl(\tau(s),s \bigr), \quad\forall s\in J. $$
(2.12)

Proof

For \(t\in J_{\theta}\), we divide the proof into the following three cases for \(s\in J\).

Case 1. If \(s\in J_{\theta}\), then from (i) of Proposition 2.1 and Remark 2.3, we have

$$G_{1}(t,s)>0, \qquad G_{1} \bigl(\tau(s),s \bigr)>0,\quad \forall t, s \in J_{\theta}. $$

It is obvious that \(G_{1}(t,s)\) and \(G_{1}(\tau(s),s)\) are bounded on \(J_{\theta}\). So, there exists a constant \(\gamma_{1}>0\) such that

$$ G_{1}(t,s)\geq\gamma_{1}G_{1} \bigl(\tau(s),s \bigr), \quad\forall t, s \in J_{\theta}. $$
(2.13)

Case 2. If \(s\in[1-\theta,1]\), then from (2.5), we have

$$G_{1}(t,s)=\frac{at^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha -1}(1-s)^{\alpha-2}}{\Gamma(\alpha)(a+b(\alpha-1))}. $$

On the other hand, from the definition of \(\tau(s)\), we see that \(\tau(s)\) takes its maximum 1 at \(s=1\). So

$$ \begin{aligned}[b] G_{1} \bigl(\tau(s),s \bigr)&= \bigl(a \bigl(\tau(s) \bigr)^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha -1) \bigl(\tau(s) \bigr)^{\alpha-1}(1-s)^{\alpha-2} \\ &\quad {}- \bigl(a+b(\alpha-1) \bigr) \bigl(\tau(s)-s \bigr)^{\alpha-1} \bigr)/ \bigl(\Gamma(\alpha) \bigl(a+b(\alpha-1) \bigr) \bigr) \\ &\leq\frac{a(\tau(s))^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)(\tau (s))^{\alpha-1}(1-s)^{\alpha-2}}{\Gamma(\alpha)(a+b(\alpha-1))} \\ &=\frac{(\tau(s))^{\alpha-1}}{t^{\alpha-1}}\frac{a(1-s)^{\alpha-1} t^{\alpha-1}+b(\alpha-1)(1-s)^{\alpha-2}t^{\alpha-1}}{\Gamma(\alpha )(a+b(\alpha-1))} \\ &\leq\frac{1}{\theta^{\alpha-1}}G_{1}(t,s). \end{aligned} $$
(2.14)

Therefore, \(G_{1}(t,s)\geq\theta^{\alpha-1}G_{1}(\tau(s),s)\). Letting \(\theta^{\alpha-1}=\gamma_{2}\), we have

$$ G_{1}(t,s)\geq \gamma_{2}G_{1} \bigl(\tau(s),s \bigr). $$
(2.15)

Case 3. If \(s\in[0,\theta]\), from (i) of Proposition 2.1, it is clear that

$$G_{1}(t,s)>0, \qquad G_{1} \bigl(\tau(s),s \bigr)>0, \quad \forall t\in J_{\theta}, s\in(0,\theta]. $$

In view of Remarks 2.4–2.6, we have

$$ \begin{aligned}[b] &\lim_{s\rightarrow0}\frac{G_{1}(t,s)}{G_{1}(\tau(s),s)} \\ &\quad =\lim_{s\rightarrow 0}\frac{at^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)t^{\alpha -1}(1-s)^{\alpha-2}-(a+b(\alpha-1))(t-s)^{\alpha-1}}{ a(\tau(s))^{\alpha-1}(1-s)^{\alpha-1}+b(\alpha-1)(\tau(s))^{\alpha -1}(1-s)^{\alpha-2}-(a+b(\alpha-1))(\tau(s)-s)^{\alpha-1}} \\ &\quad =\lim_{s\rightarrow 0}\biggl(\bigl(-a(\alpha-1)t^{\alpha-1}(1-s)^{\alpha-2}-b( \alpha-1) (\alpha -2)t^{\alpha-1}(1-s)^{\alpha-3} \\ &\qquad {}+(\alpha-1) \bigl(a+b(\alpha-1)\bigr) (t-s)^{\alpha-2}\bigr)\Big/ \biggl( \frac{d \bar{G}_{1}(\tau(s),s)}{ds}\biggr)\biggr) \\ &\quad >0. \end{aligned} $$
(2.16)

From (2.16), there exists a constant \(\gamma_{3}\) such that

$$ G_{1}(t,s)\geq\gamma_{3}G_{1} \bigl(\tau(s),s \bigr). $$
(2.17)

Letting \(\gamma=\min\{\gamma_{1},\gamma_{2},\gamma_{3}\}\) and using (2.13), (2.15) and (2.17), it follows that (2.12) holds. This completes the proof. □
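As a numerical sanity check (not part of the proof), the constant γ in (2.12) can be estimated for concrete data; the sketch below uses the sample parameters of Figures 1–3 together with \(\theta=1/4\) (all choices are ours) and confirms that the ratio in (2.12) stays bounded away from zero:

```python
import math

alpha, a, b, theta = 2.5, 1.0, 0.0, 0.25   # illustrative parameters

def G1(t, s):
    # Green's function (2.5)
    c = math.gamma(alpha) * (a + (alpha - 1) * b)
    val = (a * t**(alpha-1) * (1-s)**(alpha-1)
           + b * (alpha-1) * t**(alpha-1) * (1-s)**(alpha-2))
    if s <= t:
        val -= (a + (alpha-1) * b) * (t - s)**(alpha-1)
    return val / c

def tau(s):
    # the maximizer (2.9)
    e = ((a + b * (alpha-1) / (1-s)) / (a + b * (alpha-1))) ** (1 / (alpha-2))
    return s / (1 - e * (1-s) ** ((alpha-1) / (alpha-2)))

# min over t in [theta, 1-theta] of G1(t,s) / G1(tau(s),s), minimized over s
tgrid = [theta + k * (1 - 2 * theta) / 50 for k in range(51)]
gamma_est = min(
    min(G1(t, s) for t in tgrid) / G1(tau(s), s)
    for s in [k / 100 for k in range(1, 100)]
)
assert 0 < gamma_est < 1   # a positive gamma as claimed in (2.12)
```

For these data the estimate is close to \(\theta^{\alpha-1}=0.25^{3/2}=0.125\), which matches the bound obtained in Case 2.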

Let

$$ \mu_{i}= \int_{0}^{1}h_{i}(t)t^{\alpha-1} \,dt. $$
(2.18)

Proposition 2.3

If \(\mu_{i}\in[0,a-b(\alpha-1))\), then we have

  1. (i)

    \(G_{2i}(t,s)\geq0\) is continuous for all \(t,s \in J\), \(G_{2i}(t,s)>0\), \(\forall t, s\in(0,1)\);

  2. (ii)

    \(G_{2i}(t,s)\leq\frac{1}{(a-b(\alpha-1))-\mu_{i}} \int_{0}^{1}h_{i}(t)G_{1}(t,s)\,dt\), \(\forall t\in J\), \(s\in(0,1)\).

Proof

Using the properties of \(G_{1}(t,s)\) and the definition of \(G_{2i}(t,s)\), it can easily be shown that (i) and (ii) hold. □

Theorem 2.1

If \(\mu_{i}\in[0,a-b(\alpha-1))\), the function \(G_{i}(t,s)\) defined by (2.4) satisfies

  1. (i)

    \(G_{i}(t,s)\geq0\) is continuous for all \(t,s \in J\), \(G_{i}(t,s)>0\), \(\forall t, s\in(0,1)\);

  2. (ii)

    \(G_{i}(t,s)\leq G_{i}(s)\) for each \(t, s \in J\), and

    $$ \min_{t\in[\theta,1-\theta]}G_{i}(t,s)\geq\gamma^{*} G_{i}(s),\quad \forall s\in J, $$
    (2.19)

    where

    $$ \gamma^{*}=\min \bigl\{ \gamma, \theta^{\alpha-1} \bigr\} , \qquad G_{i}(s)=G_{1} \bigl(\tau (s),s \bigr)+G_{2i}(1,s), $$
    (2.20)

    \(\tau(s)\) is defined by (2.9), γ is defined in Proposition 2.2.

Proof

(i) From Proposition 2.1 and Proposition 2.3, we see that \(G_{i}(t,s)\geq0\) is continuous for all \(t,s \in J\), and \(G_{i}(t,s)>0\), \(\forall t, s\in(0,1)\).

(ii) From (ii) of Proposition 2.1 and (ii) of Proposition 2.3, we have \(G_{i}(t,s)\leq G_{i}(s)\) for each \(t, s \in J\).

Now, we show that (2.19) holds.

In fact, from Proposition 2.2, we have

$$\begin{aligned}[b] \min_{t\in J_{\theta}}G_{i}(t,s) &\geq\gamma G_{1} \bigl(\tau(s),s \bigr)+\frac{\theta^{\alpha-1}}{(a-b(\alpha -1))-\mu_{i}} \int_{0}^{1}h_{i}(t)G_{1}(t,s) \,dt \\ & \geq\gamma^{*} \biggl[G_{1} \bigl(\tau(s),s \bigr) + \frac{1}{(a-b(\alpha-1))-\mu_{i}} \int_{0}^{1}h_{i}(t)G_{1}(t,s) \,dt \biggr] \\ & =\gamma^{*} G_{i}(s),\quad \forall s\in J. \end{aligned} $$

Then the proof of Theorem 2.1 is completed. □

Remark 2.8

From the definition of \(\gamma^{*}\), it is clear that \(0<\gamma^{*}<1\).

3 Preliminaries

Let \(E=C[0,1]\), \(\mathbf{X}=\underbrace{E\times\cdots\times E}_{n}\), and, for any \(\mathbf{x}=[x_{1},x_{2},\dots,x_{n}]^{\top}\in\mathbf{X}\),

$$ \Vert \mathbf{x} \Vert =\sum_{i=1}^{n} \sup_{t\in J} \bigl\vert x_{i}(t) \bigr\vert . $$
(3.1)

Then \((\mathbf{X}, \Vert \cdot \Vert )\) is a real Banach space.

To establish the existence of positive solutions to system (1.1)–(1.2), for a fixed \(\theta\in(t_{0}^{\prime},\frac {1}{2})\), we construct the cone \(\mathbf{K}_{\theta}\) in X by

$$ \begin{aligned}[b] \mathbf{K}_{\theta}&= \Biggl\{ \mathbf{x}=(x_{1},x_{2},\dots,x_{n}) \in \mathbf{X}: x_{i}(t) \geq0, i=1,2,\ldots,n, t\in J, \\ &\quad {}\min_{t\in[\theta,1-\theta]}\sum_{i=1}^{n}x_{i}(t) \geq\gamma^{*} \Vert \mathbf{x} \Vert \Biggr\} , \end{aligned} $$
(3.2)

where \(\gamma^{*}\) is defined in (2.20), and it is easy to see \(\mathbf{K}_{\theta}\) is a closed convex cone of X.

Let \(\{\theta_{j}\}_{j=1}^{\infty}\) be such that \(t_{j+1}^{\prime }<\theta_{j}<t_{j}^{\prime}\), \(j=1,2,\ldots\) . So we get \(0<\cdots <t_{j+1}^{\prime}<\theta_{j}<t_{j}^{\prime}<\cdots<t_{3}^{\prime}<\theta _{2}<t_{2}^{\prime}<\theta_{1}<t_{1}^{\prime}<\frac{1}{2}<1\). Then, for any \(j\in\textrm{N}\), we define the cone \(\mathbf{K}_{\theta_{j}}\) by

$$ \mathbf{K}_{\theta_{j}}= \Biggl\{ \mathbf{x} \in\mathbf{X}: x_{i}(t) \geq0, t\in J, i=1,2,\ldots,n, \min_{t\in[\theta_{j},1-\theta_{j}]}\sum _{i=1}^{n}x_{i}(t)\geq \gamma_{j}^{*} \Vert \mathbf{x} \Vert \Biggr\} , $$
(3.3)

where

$$ \gamma_{j}^{*}=\min \bigl\{ \gamma_{j}, \theta_{j}^{\alpha-1} \bigr\} , $$
(3.4)

here \(\gamma_{j}=\min\{\gamma_{1j},\gamma_{2j},\gamma_{3j}\}\), where \(\gamma_{1j}\), \(\gamma_{2j}\) and \(\gamma_{3j}\) are defined similarly to \(\gamma_{1}\), \(\gamma_{2}\) and \(\gamma_{3}\) in (2.13), (2.14) and (2.17), respectively. It is easy to see that \(\mathbf{K}_{\theta_{j}}\) is a closed convex cone of X.

Also, for a positive number τ, define \(\mathbf{K}_{\tau\theta _{j}}\) by

$$\mathbf{K}_{\tau\theta_{j}}= \bigl\{ \mathbf{x}\in\mathbf{K}_{\theta _{j}}: \Vert \mathbf{x} \Vert < \tau \bigr\} . $$

Let \(\mathbf{T}_{\lambda}: \mathbf{K}_{\theta_{j}} \to\mathbf{X}\) be a map with components \((T_{\lambda}^{1},T_{\lambda}^{2},\ldots,T_{\lambda }^{i},\ldots,T_{\lambda}^{n})\). We understand that \(\mathbf{T}_{\lambda }\mathbf{x}=(T_{\lambda}^{1}\mathbf{x},T_{\lambda}^{2}\mathbf{x},\ldots ,T_{\lambda}^{i}\mathbf{x},\ldots, T_{\lambda}^{n}\mathbf{x})^{\top}\), where

$$ \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t) =\lambda \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds,\quad i=1,2,\ldots,n. $$
(3.5)

Remark 3.1

It follows from Lemma 2.1 and the definition of \(\mathbf{T}_{\lambda}\) that

$$\mathbf{x}=[x_{1},x_{2},\dots,x_{n}]^{\top} \in\mathbf{X} $$

is a solution of system (1.1)–(1.2) if and only if \(\mathbf {x}=[x_{1},x_{2},\dots,x_{n}]^{\top}\) is a fixed point of the operator \(\mathbf {T}_{\lambda}\), that is, if and only if each \(x_{i}\) (\(i=1,2,\ldots,n\)) is a fixed point of the operator \(T_{\lambda}^{i}\).

Lemma 3.1

Assume that \((H_{1})\)–\((H_{4})\) hold. Then \(\mathbf{T}_{\lambda}(\mathbf{K}_{\theta_{j}})\subset\mathbf{K}_{\theta _{j}} \) and \(\mathbf{T}_{\lambda}: \mathbf{K}_{\theta_{j}} \to\mathbf {K}_{\theta_{j}}\) is completely continuous.

Proof

Since \(\mathbf{T}_{\lambda}\) acts componentwise, to prove that \(\mathbf{T}_{\lambda}(\mathbf{K}_{\theta_{j}})\subset\mathbf{K}_{\theta _{j}} \) and that \(\mathbf{T}_{\lambda}: \mathbf{K}_{\theta_{j}} \to\mathbf {K}_{\theta_{j}}\) is completely continuous, it suffices to show, for \(i=1,2,\ldots, n\), that \(T_{\lambda}^{i}(\mathbf{K}_{\theta_{j}})\subset \mathbf{K}_{\theta_{j}} \) and that \(T_{\lambda}^{i}: \mathbf{K}_{\theta_{j}} \to \mathbf{K}_{\theta_{j}}\) is completely continuous.

Firstly, we prove that \(T_{\lambda}^{i}(\mathbf{K}_{\theta_{j}})\subset \mathbf{K}_{\theta_{j}}\). For \(t\in J\), it follows from (ii) of Theorem 2.1 and (3.5) that

$$ \begin{aligned}[b] \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t) &=\lambda \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ & \leq \lambda \int_{0}^{1}G_{i}(s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds. \end{aligned} $$
(3.6)

On the other hand, it follows from (2.19) and (3.5) that

$$ \begin{aligned}[b] \min_{t\in[\theta_{j},1-\theta_{j}]} \bigl(T_{\lambda}^{i} \mathbf{x} \bigr) (t) &=\min_{t\in[\theta_{j},1-\theta_{j}]}\lambda \int_{0}^{1}G_{i}(t,s) g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ & \geq\gamma_{j}^{*} \lambda \int_{0}^{1}G_{i}(s) g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*} \bigl\Vert T_{\lambda}^{i} \mathbf{x} \bigr\Vert . \end{aligned} $$
(3.7)

This shows that \(T_{\lambda}^{i}(\mathbf{K}_{\theta_{j}})\subset\mathbf{K}_{\theta_{j}}\).

Next, by using similar arguments of Lemma 3.1 in [7] one can prove that the operator \(T_{\lambda}^{i}: \mathbf{K}_{\theta_{j}} \to\mathbf {K}_{\theta_{j}}\) is completely continuous. So the proof of Lemma 3.1 is complete. □

To obtain some of the norm inequalities in our main results, we employ the famous Hölder inequality.

Lemma 3.2

(Hölder)

Let \(e\in L^{p}[a,b]\) with \(p>1\), \(h\in L^{q}[a,b]\) with \(q>1\) and \(\frac{1}{p}+\frac{1}{q}=1\). Then \(eh\in L^{1}[a,b]\) and

$$\Vert eh \Vert _{1}\le \Vert e \Vert _{p} \Vert h \Vert _{q}. $$

Let \(e\in L^{1}[a,b]\), \(h\in L^{\infty}[a,b]\). Then \(eh\in L^{1}[a,b]\) and

$$\Vert eh \Vert _{1}\le \Vert e \Vert _{1} \Vert h \Vert _{\infty}. $$
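As a quick numerical illustration of the first inequality (with our own toy data, not taken from the paper), take \(e(t)=t\) and \(h(t)=1-t\) on \([0,1]\) with \(p=q=2\):

```python
# Midpoint-rule check of Hoelder's inequality for e(t)=t, h(t)=1-t, p=q=2.
N = 100_000
nodes = [(k + 0.5) / N for k in range(N)]

norm_eh1 = sum(t * (1 - t) for t in nodes) / N              # ~ 1/6
norm_e2 = (sum(t * t for t in nodes) / N) ** 0.5            # ~ 1/sqrt(3)
norm_h2 = (sum((1 - t) ** 2 for t in nodes) / N) ** 0.5     # ~ 1/sqrt(3)

# ||eh||_1 <= ||e||_p ||h||_q  (here 1/6 <= 1/3)
assert norm_eh1 <= norm_e2 * norm_h2 + 1e-9
```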

Finally, we state the well-known fixed point theorems, which can be found in [56].

Lemma 3.3

Let E be a Banach space and let K be a cone in E. Assume that \(\Omega _{1}\), \(\Omega_{2}\) are bounded open subsets of E with \(\theta\in\Omega_{1}\) and \(\bar{\Omega}_{1}\subset\Omega_{2}\), where θ denotes the zero element of E. Suppose that \(A : K \cap(\bar{\Omega}_{2} \backslash\Omega_{1})\rightarrow K \) is completely continuous and that either

  1. (i)

    \(\Vert Ax \Vert\leq\Vert x \Vert\), \(\forall x\in K\cap \partial\Omega_{1}\); \(\Vert Ax \Vert\geq\Vert x \Vert\), \(\forall x\in K\cap\partial\Omega_{2} \);

  2. (ii)

    \(\Vert Ax \Vert\leq\Vert x \Vert\), \(\forall x\in K\cap \partial\Omega_{2}\); \(\Vert Ax \Vert\geq\Vert x \Vert\), \(\forall x\in K\cap\partial\Omega_{1} \).

Then A has a fixed point in \(K\cap(\bar{\Omega}_{2} \backslash\Omega_{1})\).

Lemma 3.4

Let E be a real Banach space and let K be a cone in E. For \(r>0\), define \(K_{r}= \{x\in K: \Vert x \Vert < r \}\). Assume that \(T:\bar{K}_{r}\rightarrow K\) is completely continuous and \(Tx\neq x\) for \(x\in\partial K_{r}= \{x\in K: \Vert x \Vert =r \}\).

  1. (i)

    If \(\Vert Tx \Vert\geq \Vert x \Vert\) for \(x\in\partial K_{r}\), then \(\mathbf{i}(T,K_{r},K)=0\).

  2. (ii)

    If \(\Vert Tx \Vert\leq \Vert x \Vert\) for \(x\in \partial K_{r}\), then \(\mathbf{i}(T,K_{r},K)=1\).

Remark 3.2

It is well known that Lemma 3.3 and Lemma 3.4 have been instrumental in proving the existence of positive solutions to various boundary value problems for integer-order or fractional-order differential equations; for details, see Sect. 6.

4 The existence of denumerably many positive solutions

In this section, we establish the existence of denumerably many positive solutions for system (1.1)–(1.2). We give our main results for the cases \(g_{i}\in L^{p}[0,1]\) with \(p>1\), \(p=1\) and \(p=\infty\).

Firstly, we consider the case \(p>1\).

Theorem 4.1

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) be such that

$$R_{j+1}< \gamma_{j}^{*}r_{j}< r_{j}< R_{j},\quad j=1,2,\ldots. $$

For each natural number j, we assume that f satisfies

\((H_{5})\) :

For any \(t\in J\), \(\Vert\mathbf{x} \Vert\in [0,R_{j}]\), \(f_{i}(t,\mathbf{x})\leq LR_{j}\), where

$$ \begin{aligned}[b] &0< L\leq\min \biggl\{ \frac{1}{n\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}}, \frac{1}{n\lambda \Vert G_{i} \Vert _{1} \Vert g_{i} \Vert _{\infty}},\frac {1}{n\lambda M_{i} \Vert g_{i} \Vert _{1}} \biggr\} ,\\ &\quad M_{i}=\max_{s\in J}G_{i}(s); \end{aligned} $$
(4.1)
\((H_{6})\) :

for any \(t\in[\theta_{j},1-\theta_{j}]\), \(\Vert \mathbf{x} \Vert\in[\gamma_{j}^{*}r_{j},r_{j}]\), \(f_{i}(t,\mathbf {x})\geq lr_{j}\), where \(l>0\).

Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has denumerably many positive solutions \(\{\mathbf{x}_{j}\}_{j=1}^{\infty}\) such that

$$r_{j}\leq \Vert \mathbf{x}_{j} \Vert \leq R_{j}, \quad j=1,2,\ldots. $$

Proof

Let \(\lambda_{0}=\sup\{\lambda_{j}\}\), \(\lambda_{j}=\frac{1}{\gamma _{j}^{*} N_{i}l\int_{\theta_{j}}^{1-\theta_{j}}G_{i}(s)\,ds}\), \(i=1,2,\ldots,n\), \(j=1,2,\ldots\) . Then, for any \(\lambda>\lambda_{0}\), (3.5) and Lemma 3.1 imply that \(\mathbf{T}_{\lambda}\) and \(T_{\lambda}^{i}\) (\(i=1,2,\ldots,n\)) are all completely continuous.

We consider the open subset sequences \(\{\Omega_{1,j}\}_{j=1}^{\infty}\) and \(\{\Omega_{2,j}\}_{j=1}^{\infty}\) of X

$$\begin{aligned}& \Omega_{1,j}= \bigl\{ \mathbf{x}\in \mathbf{X}: \Vert \mathbf{x} \Vert < R_{j} \bigr\} ; \\& \Omega_{2,j}= \bigl\{ \mathbf{x}\in\mathbf{X}: \Vert \mathbf{x} \Vert < r_{j} \bigr\} ,\quad j=1,2,\ldots. \end{aligned}$$

Let \(\{\theta_{j}\}_{j=1}^{\infty}\) be as in Sect. 3 and note that \(0< t_{j+1}^{\prime}<\theta_{j}<t_{j}^{\prime}<\frac{1}{2}\), \(j=1,2,\ldots\) .

For fixed j, we assume that \(\mathbf{x}\in K_{\theta_{j}}\cap\partial \Omega_{2,j}\), then for any \(t\in J\)

$$r_{j}= \Vert \mathbf{x} \Vert =\sum_{i=1}^{n} \sup_{t\in J} \bigl\vert x_{i}(t) \bigr\vert \geq \min_{t\in[\theta_{j},1-\theta _{j}]}\sum_{i=1}^{n}x_{i}(t) \geq\gamma_{j}^{*} \Vert \mathbf{x} \Vert = \gamma_{j}^{*}r_{j}. $$

Noticing (2.19) and (3.5), for all \(\mathbf{x}\in K_{\theta_{j}}\cap\partial\Omega_{2,j}\), by \((H_{1})\) and \((H_{6})\), we have

$$\begin{aligned}[b] \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert &\geq\sup_{t\in J} \bigl\vert \bigl(T_{\lambda}^{i} \mathbf{x} \bigr) (t) \bigr\vert \\ &=\lambda\sup_{t\in J} \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf {x}(s) \bigr)\,ds \\ &\geq\min_{t\in[\theta_{j},1-\theta_{j}]} \lambda \int _{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\lambda N_{i} \int_{0}^{1}G_{i}(s)f_{i} \bigl(s,\mathbf {x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\lambda N_{i} \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\lambda N_{i}l \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)\,ds r_{j} \\ &>\lambda_{0}\gamma_{j}^{*} N_{i}l \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)\,ds r_{j} \\ &=r_{j}= \Vert \mathbf{x} \Vert , \end{aligned} $$

which shows that

$$ \Vert T_{\lambda}\mathbf{x} \Vert \geq \Vert \mathbf {x} \Vert , \quad\forall\mathbf{x}\in K_{\theta_{j}}\cap\partial \Omega_{2,j}. $$
(4.2)

On the other hand, for all \(t\in J\), \(\mathbf{x}\in K_{\theta_{j}}\cap \partial\Omega_{1,j}\), then \(\Vert\mathbf{x} \Vert=R_{j}\).

Noticing Theorem 2.1 and (3.5), for all \(t\in J\), \(\mathbf{x}\in K_{\theta_{j}}\cap\partial\Omega_{1,j}\), by \((H_{5})\), we have

$$\begin{aligned}[b] \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t)&=\lambda \int _{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}G_{i}(s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda L R_{j} \int_{0}^{1}G_{i}(s)g_{i}(s)\,ds \\ &\leq\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}L R_{j} \\ &\leq\frac{1}{n}R_{j}= \frac{1}{n} \Vert \mathbf{x} \Vert , \end{aligned} $$

which shows that

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert =\sum _{i=1}^{n}\sup_{t\in J} \bigl\vert \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t) \bigr\vert \leq \Vert \mathbf{x} \Vert , \quad\forall\mathbf {x}\in K_{\theta_{j}}\cap \partial\Omega_{1,j}. $$
(4.3)

Applying Lemma 3.3 to (4.2) and (4.3) yields that the operator \(\mathbf{T}_{\lambda}\) has a fixed point \(\mathbf {x}_{j}\in K_{\theta_{j}}\cap(\bar{\Omega}_{2,j}\setminus \Omega_{1,j})\) with \(r_{j}\leq \Vert\mathbf{x}_{j} \Vert\leq R_{j} \). Then it follows from Remark 3.2 that system (1.1)–(1.2) has a solution \(\mathbf{x}_{j}=(x_{1j},x_{2j},\ldots, x_{nj})^{\top}\). Since \(j\in\mathcal{N}\) was arbitrary, the proof is complete. □

The following theorem deals with the case \(p=\infty\).

Theorem 4.2

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) be such that

$$R_{j+1}< \gamma_{j}^{*}r_{j}< r_{j}< R_{j},\quad j=1,2,\ldots. $$

For each natural number j, we assume that f satisfies \((H_{5})\) and \((H_{6})\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has denumerably many positive solutions \(\{\mathbf{x}_{j}(t)\}_{j=1}^{\infty}\) such that

$$r_{j}\leq \Vert \mathbf{x}_{j} \Vert \leq R_{j}, \quad j=1,2,\ldots. $$

Proof

Replace \(\Vert G_{i} \Vert_{q} \Vert g_{i} \Vert_{p}\) by \(\Vert G_{i} \Vert_{1} \Vert g_{i} \Vert_{\infty}\) and repeat the argument of Theorem 4.1. □

Finally, we consider the case of \(p=1\).

Theorem 4.3

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) be such that

$$R_{j+1}< \gamma_{j}^{*}r_{j}< r_{j}< R_{j},\quad j=1,2,\ldots. $$

For each natural number j, we assume that f satisfies \((H_{5})\) and \((H_{6})\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has denumerably many positive solutions \(\{\mathbf{x}_{j}(t)\}_{j=1}^{\infty}\) such that

$$r_{j}\leq \Vert \mathbf{x}_{j} \Vert \leq R_{j}, \quad j=1,2,\ldots. $$

Proof

Replace \(\Vert G_{i} \Vert_{q} \Vert g_{i} \Vert_{p}\) by \(M_{i} \Vert g_{i} \Vert_{1}\) and repeat the argument of Theorem 4.1. □

Corollary 4.1

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) be such that

$$R_{j+1}< \gamma_{j}^{*}r_{j}< r_{j}< R_{j},\quad j=1,2,\ldots. $$

For each natural number j, we assume that f satisfies:

\((H_{5}^{\prime})\) :

for any \(t\in J\), \(\Vert\mathbf{x} \Vert\in [0,r_{j}]\), \(f(t,\mathbf{x})\leq Lr_{j}\), where L is defined in (4.1);

\((H_{6}^{\prime})\) :

for any \(t\in[\theta_{j},1-\theta_{j}]\), \(\Vert\mathbf{x} \Vert\in[\gamma_{j}^{*}R_{j},R_{j}]\), \(f(t,\mathbf {x})\geq lR_{j}\), where \(l>0\).

Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has denumerably many positive solutions \(\{\mathbf{x}_{j}(t)\}_{j=1}^{\infty}\) such that

$$r_{j}\leq \Vert \mathbf{x}_{j} \Vert \leq R_{j}, \quad j=1,2,\ldots. $$

5 The existence of two infinite families of positive solutions

In this section, we use Lemma 3.4 to establish the existence of two infinite families of positive solutions for system (1.1)–(1.2).

For ease of expression, we introduce the following notations:

$$\begin{aligned}& \bigl(f_{0}^{\tau} \bigr)^{i}=\max \biggl\{ \max _{t\in J}\frac{f_{i}(t,\mathbf {x})}{\tau}, 0\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} , \qquad F_{0}^{\tau}=\max_{1\leq i\leq n} \bigl(f_{0}^{\tau} \bigr)^{i}, \\& \bigl(f_{\gamma_{j}^{*}\tau}^{\tau} \bigr)^{i}=\min \biggl\{ \min _{t\in[\theta _{j},1-\theta_{j}]}\frac{f_{i}(t,\mathbf{x})}{\tau}, \gamma _{j}^{*} \tau\leq \Vert \mathbf{x} \Vert \leq \tau \biggr\} , \qquad F_{\gamma_{j}^{*}\tau}^{\tau}=\min_{1\leq i\leq n} \bigl(f_{\gamma _{j}^{*}\tau}^{\tau} \bigr)^{i}, \end{aligned}$$

where \(i=1,2,\ldots,n\), \(j=1,2,\ldots\) .

In this section, we also consider the following three cases for \(g_{i}\in L^{p}[0,1]\): \(p>1\), \(p=1\) and \(p=\infty\). The case \(p>1\) is treated in the following theorem.

Theorem 5.1

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\), \(\{\eta_{j}\}_{j=1}^{\infty}\) and \(\{ R_{j}\}_{j=1}^{\infty}\) be such that

$$ R_{j+1}< \sigma_{j}r_{j}< r_{j}< \sigma_{j}\eta_{j}< \eta_{j} < R_{j}, \quad j=1,2,\ldots. $$
(5.1)

Furthermore, for each natural number j, we assume that f satisfies

\((H_{7})\) :

\(F_{0}^{r_{j}}\leq L\) and \(F_{0}^{R_{j}}\leq L\), where L is defined in (4.1);

\((H_{8})\) :

\(F_{\gamma_{j}^{*}\eta_{j}}^{\eta_{j}}\geq l\), where \(l>0\).

Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\mathbf{x}_{j}^{(1)}(t)\}_{j=1}^{\infty}\), \(\{\mathbf {x}_{j}^{(2)}(t)\}_{j=1}^{\infty}\) and \(\Vert\mathbf{x}_{j}^{(1)} \Vert>\gamma_{j}^{*}\eta_{j}\).

Proof

Let \(\lambda_{0}\) be defined as in Theorem 4.1. Then, for any \(\lambda>\lambda_{0}\), (3.5) and Lemma 3.1 imply that \(\mathbf{T}_{\lambda}\) and \(T_{\lambda}^{i}\) (\(i=1,2,\ldots,n\)) are all completely continuous.

Let \(t\in J\), \(\mathbf{x}\in\partial\mathbf{K}_{r_{j}\theta_{j}}\). Then \(\Vert\mathbf{x} \Vert= r_{j}\).

Therefore, for any \(\mathbf{x}\in\partial\mathbf{K}_{r_{j}\theta_{j}}\), it follows from \((H_{7})\) that

$$\begin{aligned}[b] \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t)&=\lambda \int _{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}G_{i}(s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1} \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p} \int_{0}^{1}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}L r_{j} \\ &\leq\frac{1}{n}r_{j}= \frac{1}{n} \Vert \mathbf{x} \Vert , \end{aligned} $$

which shows that

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert =\sum _{i=1}^{n}\sup_{t\in J} \bigl\vert \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t) \bigr\vert \leq \Vert \mathbf{x} \Vert , \quad\forall\mathbf {x}\in\partial \mathbf{K}_{r_{j}\theta_{j}}. $$
(5.2)

And then, by Lemma 3.4, we get

$$ \mathbf{i}(\mathbf{T}_{\lambda}, \mathbf{K}_{r_{j}\theta_{j}},\mathbf {K}_{\theta_{j}})=1. $$
(5.3)

Similarly, for \(\mathbf{x}\in\partial\mathbf{K}_{R_{j}\theta_{j}}\), we have \(\Vert\mathbf{T}_{\lambda}\mathbf{x} \Vert\leq\Vert\mathbf{x} \Vert\), and it follows from Lemma 3.4 that

$$ \mathbf{i}(\mathbf{T}_{\lambda}, \mathbf{K}_{R_{j}\theta_{j}}, \mathbf{K}_{\theta_{j}})=1. $$
(5.4)

On the other hand, letting

$$\mathbf{x}\in\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta_{j}}^{\eta _{j}}= \Biggl\{ \mathbf{x} \in\mathbf{K}_{\theta_{j}}: \Vert \mathbf{x} \Vert \leq\eta_{j}, \min_{t\in[\theta_{j},1-\theta_{j}]}\sum_{i=1}^{n}x_{i}(t) \geq\gamma_{j}^{*}\eta_{j} \Biggr\} , $$

then \(\Vert\mathbf{x} \Vert\leq\eta_{j}\). Hence, similar to the proof of (5.2), we have

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert \leq\eta _{j}. $$
(5.5)

Furthermore, for \(\mathbf{x}\in\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta _{j}\theta_{j}}^{\eta_{j}}\), we have

$$\Vert \mathbf{x} \Vert \leq\eta_{j}, \quad t\in J, \min _{t\in [\theta_{j},1-\theta_{j}]}\sum_{i=1}^{n}x_{i}(t) \geq\gamma_{j}^{*}\eta_{j}, $$

and then it follows from \((H_{8})\) that

$$ \begin{aligned}[b] \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert &\geq\sup _{t\in J} \bigl\vert \bigl(T_{\lambda}^{i} \mathbf{x} \bigr) (t) \bigr\vert \\ &=\lambda\sup_{t\in J} \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\min_{t\in[\theta_{j},1-\theta_{j}]} \lambda \int _{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\lambda N_{i} \int_{0}^{1}G_{i}(s)f_{i} \bigl(s,\mathbf {x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\lambda N_{i} \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\geq\gamma_{j}^{*}\eta_{j}\lambda N_{i}l \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)\,ds \\ &>\lambda_{0}\gamma_{j}^{*}\eta_{j} N_{i}l \int_{\theta_{j}}^{1-\theta _{j}}G_{i}(s)\,ds \\ &=\eta_{j}= \Vert \mathbf{x} \Vert . \end{aligned} $$
(5.6)

Letting \(\mathbf{x}_{0}=(x_{0}^{1},x_{0}^{2},\ldots,x_{0}^{i},\ldots ,x_{0}^{n})\) and \(\mathbf{F}(t,\mathbf{x})=(1-t)\mathbf{T}_{\lambda }\mathbf{x}+t\mathbf{x}_{0}\), where \(x_{0}^{i}\equiv\frac{\gamma_{j}^{*}\eta _{j}+\eta_{j}}{2}\), \(i=1,2,\ldots,n\), then \(\mathbf{F}: J\times\bar{\mathbf{K}}_{\gamma _{j}^{*}\eta_{j}\theta_{j}}^{\eta_{j}}\rightarrow\mathbf{K}_{\theta_{j}}\) is completely continuous, and from the analysis above we obtain, for \((t,\mathbf{x})\in J\times\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta _{j}}^{\eta_{j}}\),

$$ \mathbf{F}(t,\mathbf{x})\in\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta _{j}}^{\eta_{j}}. $$
(5.7)

Therefore, for \(t\in J\), \(\mathbf{x}\in\partial\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta _{j}\theta_{j}}^{\eta_{j}}\), we have \(\mathbf{F}(t,\mathbf{x})\neq\mathbf {x}\). Hence, by the normality property and the homotopy invariance property of the fixed point index, we obtain

$$ \mathbf{i} \bigl(\mathbf{T}_{\lambda}, \mathbf{K}_{\gamma_{j}^{*}\eta _{j}\theta_{j}}^{\eta_{j}}, \mathbf{K}_{\theta_{j}} \bigr)=\mathbf{i} \bigl(\mathbf{x}_{0}, \mathbf{K}_{\gamma_{j}^{*}\eta_{j}\theta_{j}}^{\eta_{j}}, \mathbf{K}_{\theta _{j}} \bigr)=1. $$
(5.8)

Consequently, by the solution property of the fixed point index, \(\mathbf{T}_{\lambda}\) has a fixed point \(\mathbf{x}_{j}^{(1)}\) and \(\mathbf{x}_{j}^{(1)}\in\bar{\mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta _{j}}^{\eta_{j}}\). By Remark 3.1, it follows that \(\mathbf {x}_{j}^{(1)}\) is a solution to system (1.1)–(1.2), and

$$\bigl\Vert \mathbf{x}_{j}^{(1)} \bigr\Vert \geq\min _{t\in[\theta _{j},1-\theta_{j}]}\sum_{i=1}^{n}x_{ij}^{(1)}(t)>\gamma_{j}^{*}\eta_{j}. $$

On the other hand, from (5.3), (5.4) and (5.8) together with the additivity of the fixed point index, we get

$$ \begin{aligned}[b] & \mathbf{i} \bigl(\mathbf{T}_{\lambda}, \mathbf{K}_{R_{j}\theta_{j}}\setminus \bigl(\bar{\mathbf {K}}_{r_{j}\theta_{j}}\cup\bar{ \mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta _{j}}^{\eta_{j}} \bigr),\mathbf{K}_{\theta_{j}} \bigr) \\ &\quad=\mathbf{i}(\mathbf{T}_{\lambda},\mathbf{K}_{R_{j}\theta_{j}},\mathbf {K}_{\theta_{j}})-\mathbf{i} \bigl(\mathbf{T}_{\lambda}, \bar{\mathbf {K}}_{\gamma_{j}^{*}\eta_{j}\theta_{j}}^{\eta_{j}},\mathbf{K}_{\theta _{j}} \bigr)-\mathbf{i}( \mathbf{T}_{\lambda},\bar{\mathbf{K}}_{r_{j}\theta _{j}},\mathbf{K}_{\theta_{j}}) \\ &\quad=1-1-1=-1. \end{aligned} $$
(5.9)

Hence, by the solution property of the fixed point index, \(\mathbf {T}_{\lambda}\) has a fixed point \(\mathbf{x}_{j}^{(2)}\in\mathbf{K}_{R_{j}\theta_{j}}\setminus(\bar{\mathbf{K}}_{r_{j}\theta_{j}}\cup\bar {\mathbf{K}}_{\gamma_{j}^{*}\eta_{j}\theta_{j}}^{\eta_{j}})\). Since \(j\in\mathcal{N}\) was arbitrary, the proof is complete. □

The following corollary deals with the case \(p=\infty\).

Corollary 5.1

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\), \(\{\eta_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) satisfy (5.1). Furthermore, for each natural number j, we assume that f satisfies \((H_{7})\) and \((H_{8})\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\mathbf{x}_{j}^{(1)}(t)\}_{j=1}^{\infty}\), \(\{\mathbf {x}_{j}^{(2)}(t)\}_{j=1}^{\infty}\) and \(\Vert\mathbf{x}_{j}^{(1)} \Vert>\gamma_{j}^{*}\eta_{j}\).

Proof

Replace \(\Vert G_{i} \Vert_{q} \Vert g_{i} \Vert_{p}\) by \(\Vert G_{i} \Vert_{1} \Vert g_{i} \Vert_{\infty}\) and repeat the argument of Theorem 5.1. □

Finally, we consider the case of \(p=1\).

Corollary 5.2

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\), \(\{\eta_{j}\}_{j=1}^{\infty}\) and \(\{ R_{j}\}_{j=1}^{\infty}\) satisfy (5.1). Furthermore, for each natural number j, we assume that f satisfies \((H_{7})\) and \((H_{8})\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has two infinite families of positive solutions \(\{\mathbf{x}_{j}^{(1)}(t)\}_{j=1}^{\infty}\), \(\{\mathbf {x}_{j}^{(2)}(t)\}_{j=1}^{\infty}\) and \(\Vert\mathbf{x}_{j}^{(1)} \Vert>\gamma_{j}^{*}\eta_{j}\).

Proof

Replace \(\Vert G_{i} \Vert_{q} \Vert g_{i} \Vert_{p}\) by \(M_{i} \Vert g_{i} \Vert_{1}\) and repeat the argument of Theorem 5.1. □

Corollary 5.3

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\), \(\{\eta_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) satisfy (5.1). Furthermore, for each natural number j, we assume that f satisfies \((H_{8})\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda >\lambda_{0}\), system (1.1)–(1.2) has one infinite family of positive solutions.

If we replace \((H_{7})\) by the following condition:

\((H_{7}^{\prime})\) :

\(F_{0}^{r_{j}}\leq L\) or \(F_{0}^{R_{j}}\leq L\), where L is defined in (4.1), then we have the following corollary.

Corollary 5.4

Assume that \((H_{1})\)–\((H_{4})\) hold. Let \(\{r_{j}\}_{j=1}^{\infty}\), \(\{\eta_{j}\}_{j=1}^{\infty}\) and \(\{R_{j}\}_{j=1}^{\infty}\) satisfy (5.1). Furthermore, for each natural number j, we assume that f satisfies \((H_{7}^{\prime})\). Then, for all \(\lambda>0\), system (1.1)–(1.2) has one infinite family of nonnegative solutions.

6 Comments and remarks

In this section, we give some comments and remarks associated with system (1.1)–(1.2).

It is well known that Lemma 3.3 and Lemma 3.4 have been instrumental in proving the existence of positive solutions to various boundary value problems for integer-order differential equations. See for instance [4446, 51] and the references therein. Several authors have investigated boundary value problems of fractional differential equations; see for instance [8, 11, 13].

However, it is not difficult to see that if \((H_{1})\)–\((H_{4})\) hold and f satisfies \((H_{6})\), then Lemma 3.3 alone cannot yield the results of Corollary 5.3.

At the same time, it is not difficult to see that \((H_{2})\) plays an important role in the proof that system (1.1)–(1.2) has two infinite families of positive solutions. If condition \((H_{2})\) does not hold, however, then we can only obtain existence results for one or two positive solutions of system (1.1)–(1.2).

Theorem 6.1

Assume that \((H_{1})\), \((H_{3})\) and \((H_{4})\) hold. Furthermore, we assume that f satisfies:

\((H_{5}^{\prime})\) :

for any \(t\in J\), \(\Vert\mathbf{x} \Vert\in [0,R]\), \(f_{i}(t,\mathbf{x})\leq LR\), where L is defined in (4.1);

\((H_{6}^{\prime})\) :

for any \(t\in[\theta,1-\theta]\), \(\Vert \mathbf{x} \Vert\in[\gamma^{*}r,r]\), \(f_{i}(t,\mathbf{x})\geq lr\), where \(l>0\).

Let \(0< r< R\). Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has at least one positive solution \(\mathbf{x}\) with

$$r\leq \Vert \mathbf{x} \Vert \leq R. $$

Theorem 6.2

Assume that \((H_{1})\), \((H_{3})\), \((H_{4})\) hold and \(0< r<\eta<R\). Furthermore, we assume that f satisfies

\((H_{7}^{\prime\prime})\) :

\(F_{0}^{r}\leq L\) and \(F_{0}^{R}\leq L\), where L is defined in (4.1);

\((H_{8}^{\prime})\) :

\(F_{\sigma\eta}^{\eta}\geq l\), where \(l>0\).

Then there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (1.1)–(1.2) has at least two positive solutions \(\mathbf{x}^{(1)}\), \(\mathbf{x}^{(2)}\) with \(\sum_{i=1}^{n}x_{i}^{(1)}(t)>\gamma^{*}\eta\), \(\forall t\in[\theta,1-\theta]\).

Remark 6.1

If \((H_{2})\) does not hold, we can also consider the other cases.

To this aim, we begin by introducing the notations

$$\begin{aligned}& f_{i}^{0}=\limsup_{ \Vert \mathbf{y} \Vert \rightarrow0}\max _{t\in J} \frac {f_{i}(t,\mathbf{y})}{ \Vert \mathbf{y} \Vert }, \qquad f_{i}^{\infty}= \limsup_{ \Vert \mathbf{y} \Vert \rightarrow\infty}\max_{t\in J} \frac{f_{i}(t,\mathbf{y})}{ \Vert \mathbf{y} \Vert }, \\& \mathbf{K}_{r,R}= \bigl\{ \mathbf{x} \vert\mathbf{x}\in \mathbf{K}_{\theta}, r< \Vert \mathbf{x} \Vert < R \bigr\} . \end{aligned}$$

Theorem 6.3

Assume that \((H_{1})\), \((H_{3})\) and \((H_{4})\) hold. Furthermore, assume that the following two conditions hold:

  1. (i)

    \(f_{i}^{0}=0\) or \(f_{i}^{\infty}=0\);

  2. (ii)

    there exist \(\rho>0\), \(\delta>0\), such that \(f_{i}(t,\mathbf{x})\geq\delta\) for \(\Vert\mathbf{x} \Vert \geq\rho\), \(t \in J\).

Then there exists \(\lambda_{0}>0\) such that, for all \(\lambda>\lambda _{0}\), system (1.1)–(1.2) has at least one positive solution \(\mathbf{x}^{*}\).

Proof

Since \(f_{i}^{0}=0\), there exists \(0< r<\rho\) such that \(f_{i}(t,\mathbf{x})\leq\varepsilon_{1}r\) for \(0\leq \Vert\mathbf{x} \Vert\leq r\), \(t \in J\), where \(\varepsilon_{1}>0\) satisfies \(\lambda \varepsilon_{1} \Vert G_{i} \Vert_{q} \Vert g_{i} \Vert_{p}\leq\frac{1}{n}\).

So, for \(\mathbf{x}\in\partial K_{r\theta}\), by (3.5) we have

$$\begin{aligned} \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t)&=\lambda \int _{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1}G_{i}(s)g_{i}(s)f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \int_{0}^{1} \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p} \int_{0}^{1}f_{i} \bigl(s,\mathbf{x}(s) \bigr)\,ds \\ &\leq\lambda \Vert G_{i} \Vert _{q} \Vert g_{i} \Vert _{p}\varepsilon_{1}r \\* &\leq\frac{1}{n}r= \frac{1}{n} \Vert \mathbf{x} \Vert , \end{aligned}$$

which shows that

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert =\sum _{i=1}^{n}\sup_{t\in J} \bigl\vert \bigl(T_{\lambda}^{i}\mathbf{x} \bigr) (t) \bigr\vert \leq \Vert \mathbf{x} \Vert , \quad\forall\mathbf {x}\in\partial K_{r\theta}. $$
(6.1)

If \(f_{i}^{\infty}=0\), similar to the proof of (6.1), there exists \(R>\rho\) such that \(f_{i}(t,\mathbf{x})\leq\varepsilon_{2} \Vert\mathbf{x} \Vert\) for \(\Vert\mathbf{x} \Vert\geq R\), \(t \in J\), where \(\varepsilon_{2}>0\) satisfies \(\lambda\varepsilon_{2} \Vert G_{i} \Vert_{q} \Vert g_{i} \Vert _{p}\leq \frac{1}{n}\), and, for \(\mathbf{x} \in\partial K_{R\theta}\), we have

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert \leq \Vert \mathbf{x} \Vert . $$
(6.2)

On the other hand, by (ii), once \(\rho>0\) is fixed we have \(f_{i}(t,\mathbf{x})\geq\delta\) for \(\mathbf{x} \in\partial K_{\rho\theta}\), and there exists \(\lambda_{0}=\rho [\gamma^{*}\delta N_{i}\int_{\theta}^{1-\theta }G_{i}(s)\,ds ]^{-1}>0\) such that \(\lambda\gamma^{*}\delta N_{i}\int_{\theta}^{1-\theta}G_{i}(s)\,ds>\rho\) for \(\lambda>\lambda_{0}\).

Therefore, for \(\mathbf{x} \in\partial K_{\rho\theta}\), \(t \in J\), we have

$$\begin{aligned}[b] \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert &\geq\sup_{t\in J} \bigl\vert \bigl(T_{\lambda}^{i} \mathbf{x} \bigr) (t) \bigr\vert \\ &=\lambda\sup_{t\in J} \int_{0}^{1}G_{i}(t,s)g_{i}(s)f_{i} \bigl(s,\mathbf {x}(s) \bigr)\,ds \\ &\geq\min_{t\in[\theta,1-\theta]} \lambda\delta \int _{0}^{1}G_{i}(t,s)g_{i}(s) \,ds \\ &\geq\gamma^{*}\lambda\delta N_{i} \int_{0}^{1}G_{i}(s)\,ds \\ &\geq\gamma^{*}\lambda\delta N_{i} \int_{\theta}^{1-\theta}G_{i}(s)\,ds \\ &>\rho= \Vert \mathbf{x} \Vert . \end{aligned} $$

Consequently, for \(\mathbf{x} \in\partial K_{\rho\theta}\), we have

$$ \Vert \mathbf{T}_{\lambda}\mathbf{x} \Vert > \Vert \mathbf{x} \Vert . $$
(6.3)

By Lemma 3.3, for all \(\lambda>\lambda_{0}\), (6.1) and (6.3), or (6.2) and (6.3), respectively, show that \(\mathbf{T}_{\lambda}\) has a fixed point \(\mathbf{x}^{*}\in\bar{\mathbf{K}}_{r,\rho}\), \(r \leq \Vert\mathbf {x}^{*} \Vert < \rho\) and \(\sum_{i=1}^{n}x_{i}^{*}(t)\geq\gamma\Vert\mathbf{x}^{*} \Vert\), \(t\in[\theta,1-\theta]\), or \(\mathbf{x}^{*}\in\bar{\mathbf{K}}_{\rho,R}\), \(\rho< \Vert\mathbf {x}^{*} \Vert \leq R\) and \(\sum_{i=1}^{n}x_{i}^{*}(t)\geq\gamma \Vert \mathbf{x}^{*} \Vert>0\), \(t\in [\theta,1-\theta]\). Thus it follows that system (1.1)–(1.2) has at least one positive solution \(\mathbf{x}^{*}\) for all \(\lambda>\lambda_{0}\). □

Theorem 6.4

Assume that \((H_{1})\) and \((H_{3})\) hold. Furthermore, assume that the following two conditions hold:

  1. (i)

    \(f_{i}^{0}=0\) and \(f_{i}^{\infty}=0\);

  2. (ii)

    there exist \(\rho>0\), \(\delta>0\), such that \(f_{i}(t,\mathbf{x})\geq\delta\) for \(\Vert\mathbf{x} \Vert \geq\rho\), \(t \in J\).

Then there exists \(\lambda_{0}>0\) such that, for all \(\lambda>\lambda _{0}\), system (1.1)–(1.2) has at least two positive solutions \(\mathbf{x}^{*}\) and \(\mathbf{x}^{**}\).

Proof

The proof is similar to that of Theorem 6.3. By Lemma 3.3, (6.1)–(6.3) show that \(\mathbf{T}_{\lambda}\) has at least two fixed points \(\mathbf{x}^{*}\), \(\mathbf{x}^{**}\), where \(\mathbf{x}^{*}\in\bar{\mathbf{K}}_{r,\rho}\), \(r \leq \Vert\mathbf {x}^{*} \Vert < \rho\) and \(\sum_{i=1}^{n}x_{i}^{*}(t)\geq\gamma \Vert\mathbf{x}^{*} \Vert >0\), \(t\in[\theta,1-\theta]\), and \(\mathbf{x}^{**}\in\bar{\mathbf {K}}_{\rho,R}\), \(\rho< \Vert\mathbf{x}^{**} \Vert \leq R\) and \(\sum_{i=1}^{n}x_{i}^{**}(t)\geq\gamma \Vert\mathbf{x}^{**} \Vert \), \(t\in[\theta,1-\theta]\). Thus it follows that system (1.1)–(1.2) has at least two positive solutions \(\mathbf{x}^{*}\), \(\mathbf{x}^{**}\) for all \(\lambda>\lambda_{0}\). □

Remark 6.2

Results similar to Theorems 6.3–6.4 have been established by Wang [57] for other types of n-dimensional systems.

Remark 6.3

Boundary value problems with infinitely many singularities have been studied by Kaufmann, Kosmatov, Wang and Feng; for more details on this line of study, we refer the reader to [58, 59], which treat second-order two-point boundary value problems.

7 An example

In Example 7.1, we select \(n=3\), \(\alpha=\frac{5}{2}\), \(a=2\), \(b=1\).

Example 7.1

Consider the following system:

$$ \textstyle\begin{cases} \mathbf{D}_{0^{+}}^{\alpha}\mathbf{x}(t)+\lambda\mathbf{g}(t)\mathbf {f}(t,\mathbf{x}(t))=0,\quad 0< t< 1,\\ \mathbf{x}(0)={\mathbf{x}'}(0)=0,\\ a\mathbf{x}(1)+b{\mathbf{x}'}(1)=\int_{0}^{1}\mathbf{h}(t)\mathbf{x}(t)\,dt, \end{cases} $$
(7.1)

where

$$\begin{aligned}& \mathbf{g}(t)=\left ( \begin{matrix} g_{1}(t) & 0 & 0\\ 0 & g_{2}(t) & 0 \\ 0 & 0 & g_{3}(t) \end{matrix} \right ),\qquad \mathbf{h}(t)=\left ( \begin{matrix} 5 & 0 & 0\\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{matrix} \right ), \\& \mathbf{f}(t,\mathbf{x})=\left ( \begin{matrix} (\frac{1}{6}+\frac{1}{6} t)Lx_{1}+\frac{1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \\ (\frac{1}{6}+\frac{1}{6} t)Lx_{1}+\frac{1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \\ (\frac{1}{6}+\frac{1}{6} t)Lx_{1}+\frac{1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \end{matrix} \right ), \end{aligned}$$

where L is defined in (4.1), \(h_{i}(t)\equiv5\) on J, \(f_{i}(t, \mathbf{x})=f_{i}(t,x_{1},x_{2},x_{3})=(\frac{1}{6}+\frac{1}{6} t)Lx_{1}+\frac{1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \) for \(t\in J\), and \(g_{i}(t)\) (\(i=1,2,3\)) are singular at \(t_{j}^{\prime}\), \(j=1,2,\ldots\) , and

$$ t_{j}^{\prime}=\frac{2}{5}-\frac{1}{10}\sum _{i=1}^{j}\frac {1}{(2i-1)^{4}}, \quad j=1,2, \ldots. $$
(7.2)

It follows from (7.2) that

$$\begin{aligned}& t_{1}^{\prime}=\frac{2}{5}-\frac{1}{10}= \frac{3}{10}, \\& t_{j}^{\prime}-t_{j+1}^{\prime}= \frac{1}{10(2j+1)^{4}}, \quad j=1,2,\ldots, \end{aligned}$$

and, since \(\sum_{j=1}^{\infty}\frac{1}{(2j-1)^{4}}=\frac{\pi^{4}}{96}\), we have

$$t_{0}^{\prime}=\lim_{j\rightarrow\infty}t_{j}^{\prime}= \frac {2}{5}-\frac{1}{10}\sum_{j=1}^{\infty} \frac{1}{(2j-1)^{4}} =\frac{2}{5}-\frac{1}{10}\cdot\frac{\pi^{4}}{96}= \frac{2}{5}-\frac{\pi ^{4}}{960}>\frac{1}{10}. $$
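Since the construction hinges on the concrete values of \(t_{j}^{\prime}\), the computations above can be checked numerically. The following sketch is purely a sanity check, not part of the argument; the helper name `t_prime` is ours:

```python
import math

def t_prime(j):
    """Partial sum t'_j = 2/5 - (1/10) * sum_{i=1}^{j} 1/(2i-1)^4, from (7.2)."""
    return 2/5 - sum(1/(2*i - 1)**4 for i in range(1, j + 1)) / 10

# t'_1 = 2/5 - 1/10 = 3/10, and t'_j - t'_{j+1} = 1/(10(2j+1)^4)
assert math.isclose(t_prime(1), 3/10)
assert math.isclose(t_prime(1) - t_prime(2), 1 / (10 * 3**4))

# the limit t'_0 = 2/5 - pi^4/960 ~ 0.2985, which indeed exceeds 1/10
t0 = 2/5 - math.pi**4 / 960
assert t0 > 1/10
assert abs(t_prime(100000) - t0) < 1e-10
```

In particular, all singular points \(t_{j}^{\prime}\) lie in \((\frac{1}{10},\frac{3}{10}]\subset[0,\frac{1}{2})\).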

Consider the function

Consider the functions

$$g_{1}(t)=\sum_{j=1}^{\infty}g_{j}^{(1)}(t), \qquad g_{2}(t)=\sum_{j=1}^{\infty}g_{j}^{(2)}(t), \qquad g_{3}(t)=\sum_{j=1}^{\infty }g_{j}^{(3)}(t), \quad t\in J, $$

where

$$\begin{aligned} &g_{j}^{(1)}(t)= \textstyle\begin{cases} \frac{j+2}{(j+1)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in[0,\frac {t_{j}^{\prime}+t_{j+1}^{\prime}}{2}),\\ \frac{1}{\sqrt{t_{j}^{\prime}-t}}, & t\in[\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime}),\\ \frac{1}{\sqrt{t-t_{j}^{\prime}}}, & t\in(t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2}],\\ \frac{j+2}{(j+1)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in(\frac {t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1], \end{cases}\displaystyle \\ &g_{j}^{(2)}(t)= \textstyle\begin{cases} \frac{2}{(2j-2)!(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in[0,\frac {t_{j}^{\prime}+t_{j+1}^{\prime}}{2}),\\ \frac{1}{\sqrt{t_{j}^{\prime}-t}}, & t\in[\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime}),\\ \frac{1}{\sqrt{t-t_{j}^{\prime}}}, & t\in(t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2}],\\ \frac{2}{(2j-2)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in(\frac {t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1], \end{cases}\displaystyle \\ &g_{j}^{(3)}(t)= \textstyle\begin{cases} \frac{j}{2^{j}(t_{j}^{\prime}+t_{j+1}^{\prime})},& t\in[0,\frac {t_{j}^{\prime}+t_{j+1}^{\prime}}{2}),\\ \frac{1}{\sqrt{t_{j}^{\prime}-t}}, & t\in[\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2},t_{j}^{\prime}),\\ \frac{1}{\sqrt{t-t_{j}^{\prime}}}, & t\in(t_{j}^{\prime},\frac{t_{j}^{\prime}+t_{j-1}^{\prime}}{2}],\\ \frac{j}{2^{j}(2-t_{j}^{\prime}-t_{j-1}^{\prime})},& t\in(\frac {t_{j}^{\prime}+t_{j-1}^{\prime}}{2},1]. \end{cases}\displaystyle \end{aligned}$$

From \(\sum_{j=1}^{\infty}\frac{j+2}{(j+1)!}=2e-3\), \(\sum_{j=1}^{\infty }\frac{2}{(2j-2)!}=e+e^{-1}\), \(\sum_{j=1}^{\infty}\frac{j}{2^{j}}=2\) and \(\sum_{j=1}^{\infty}\frac{1}{(2j-1)^{2}}=\frac{\pi^{2}}{8}\), we get

$$\begin{aligned}& \begin{aligned}[b] \sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(1)}(t) \,dt&= \sum_{j=1}^{\infty} \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac {j+2}{(j+1)!(t_{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac {j+2}{(j+1)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})}\,dt\hspace{-20pt} \\ &\quad{} + \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime }}{2}}^{t_{j}'}\frac{1}{\sqrt{t_{j}^{\prime}-t}}\,dt+ \int_{t_{j}'}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{\sqrt {t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty}\frac{j+2}{(j+1)!} + \sqrt{2}\sum_{j=1}^{\infty} \Bigl(\sqrt {t_{j}^{\prime}-t_{j+1}^{\prime}} +\sqrt {t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=2e-3+\frac{\sqrt{2}}{\sqrt{10}}\sum_{j=1}^{\infty} \biggl(\frac{1}{(2j+1)^{2}} +\frac{1}{(2j-1)^{2}} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{8}-1+\frac{\pi ^{2}}{8} \biggr) \\ &=2e-3+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr), \end{aligned} \end{aligned}$$
(7.3)
$$\begin{aligned}& \begin{aligned}[b] \sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(2)}(t) \,dt&= \sum_{j=1}^{\infty} \biggl\{ \int_{0}^{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}\frac {2}{(2j-2)!(t_{j}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}^{1}\frac {2}{(2j-2)!(2-t_{j}^{\prime}-t_{j-1}^{\prime})}\,dt\hspace{-30pt} \\ &\quad{}+ \int_{\frac{t_{j}^{\prime}+t_{j+1}^{\prime}}{2}}^{t_{j}'}\frac {1}{\sqrt{t_{j}^{\prime}-t}}\,dt+ \int_{t_{j}'}^{\frac{t_{j-1}^{\prime}+t_{j}^{\prime}}{2}}\frac{1}{\sqrt {t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty}\frac{2}{(2j-2)!} + \sqrt{2}\sum_{j=1}^{\infty} \Bigl(\sqrt {t_{j}^{\prime}-t_{j+1}^{\prime}} +\sqrt {t_{j-1}^{\prime}-t_{j}^{\prime}} \Bigr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\sqrt{10}}\sum_{j=1}^{\infty} \biggl(\frac{1}{(2j+1)^{2}} +\frac{1}{(2j-1)^{2}} \biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{8}-1+ \frac{\pi ^{2}}{8} \biggr) \\ &=e+e^{-1}+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr), \end{aligned} \end{aligned}$$
(7.4)
$$\begin{aligned}& \begin{aligned}[b] \sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(3)}(t) \,dt&= \sum_{j=1}^{\infty} \biggl\{ \int_{0}^{(t_{j}^{\prime}+t_{j+1}^{\prime})/2}\frac {j}{2^{j}(t_{n}^{\prime}+t_{j+1}^{\prime})}\,dt+ \int_{(t_{j-1}^{\prime}+t_{j}^{\prime})/2}^{1}\frac {j}{2^{j}(2-t_{j}^{\prime}-t_{j-1}^{\prime})}\,dt \\ &\quad{}+ \int_{(t_{j}^{\prime}+t_{j+1}^{\prime})/2}^{t_{j}'}\frac {1}{\sqrt{t_{j}^{\prime}-t}}\,dt+ \int_{t_{j}'}^{(t_{j-1}^{\prime}+t_{j}^{\prime})/2}\frac{1}{\sqrt {t-t_{j}^{\prime}}}\,dt \biggr\} \\ &=\sum_{j=1}^{\infty}\frac{j}{2^{j}}+ \sqrt{2}\sum_{j=1}^{\infty} \Bigl(\sqrt {t_{j}^{\prime}-t_{j+1}^{\prime}}+\sqrt {t_{j-1}^{\prime}-t_{j}^{\prime }} \Bigr) \\ &=2+\frac{\sqrt{2}}{\sqrt{10}}\sum_{j=1}^{\infty} \biggl(\frac {1}{(2j+1)^{2}}+\frac{1}{(2j-1)^{2}} \biggr) \\ &=2+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{8}-1+\frac{\pi^{2}}{8} \biggr) \\ &=2+\frac{\sqrt{2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr). \end{aligned} \end{aligned}$$
(7.5)

Thus, from (7.3), (7.4) and (7.5), it is easy to see that

$$\begin{aligned}& \int_{0}^{1}g_{1}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty}g_{j}^{(1)}(t) \,dt =\sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(1)}(t) \,dt=2e-3+ \frac{\sqrt {2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr)< +\infty, \\& \int_{0}^{1}g_{2}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty}g_{j}^{(2)}(t) \,dt =\sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(2)}(t) \,dt=e+e^{-1}+ \frac{\sqrt {2}}{\sqrt{10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr)< + \infty, \\& \int_{0}^{1}g_{3}(t)\,dt= \int_{0}^{1}\sum_{j=1}^{\infty}g_{j}^{(3)}(t) \,dt =\sum_{j=1}^{\infty} \int_{0}^{1}g_{j}^{(3)}(t) \,dt=2+ \frac{\sqrt{2}}{\sqrt {10}} \biggl(\frac{\pi^{2}}{4}-1 \biggr)< +\infty. \end{aligned}$$

Therefore \(g_{i}(t)\in L^{1}[0,1]\), \(i=1,2,3\), which shows that conditions \((H_{1})\) and \((H_{2})\) hold.
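The closed-form series values \(2e-3\), \(e+e^{-1}\), 2 and \(\frac{\pi^{2}}{8}\) used in (7.3)–(7.5) are classical; the following sketch (again only an illustrative check, with variable names of our choosing) confirms them against partial sums:

```python
import math

# partial sums of the four classical series quoted in the text
s1 = sum((j + 2) / math.factorial(j + 1) for j in range(1, 60))   # limit 2e - 3
s2 = sum(2 / math.factorial(2*j - 2) for j in range(1, 60))       # limit e + 1/e
s3 = sum(j / 2**j for j in range(1, 200))                         # limit 2
s4 = sum(1 / (2*j - 1)**2 for j in range(1, 10**6))               # limit pi^2/8

assert math.isclose(s1, 2*math.e - 3)
assert math.isclose(s2, math.e + 1/math.e)
assert math.isclose(s3, 2.0)
assert abs(s4 - math.pi**2 / 8) < 1e-6
```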

On the other hand, it is not difficult to see from the definition of \(\mathbf{f}(t, \mathbf{x})\) that condition \((H_{3})\) holds. At the same time, it follows from \(a=2\), \(b=1\), \(\alpha=\frac{5}{2}\) and \(h_{i}(t)\equiv5\) that

$$\mu_{i}= \int_{0}^{1}h_{i}(t)t^{\alpha-1} \,dt=2< a+b( \alpha-1)=\frac {7}{2},\quad i=1,2,3, $$

which shows that condition \((H_{4})\) holds.
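The value \(\mu_{i}=2\) comes from \(\int_{0}^{1}t^{3/2}\,dt=\frac{2}{5}\); a simple midpoint-rule quadrature (a sanity check only, not part of the verification of \((H_{4})\)) reproduces it:

```python
# mu_i = int_0^1 h_i(t) t^(alpha-1) dt with h_i(t) = 5 and alpha = 5/2,
# so the integrand is 5*t**1.5 and the exact value is 5*(2/5) = 2.
n = 10**5
h = 1.0 / n
# composite midpoint rule on [0, 1] (the integrand is smooth there)
mu = h * sum(5 * ((k + 0.5) * h) ** 1.5 for k in range(n))

assert abs(mu - 2.0) < 1e-6
assert mu < 2 + 1 * (5/2 - 1)   # condition (H_4): mu_i < a + b*(alpha - 1) = 7/2
```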

Next, we show that conditions \((H_{5})\) and \((H_{6})\) of Theorem 4.1 hold. In fact, for any \(t\in J\), \(\Vert\mathbf{x} \Vert\in [0,R_{j}]\), we get

$$f_{i}(t,\mathbf{x})= \biggl(\frac{1}{6}+\frac{1}{6} t \biggr)Lx_{1}+\frac {1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \leq L R_{j}; $$

for any \(t\in[\theta_{j},1-\theta_{j}]\), \(\Vert\mathbf{x} \Vert\in [\gamma_{j}^{*}r_{j},r_{j}]\), we get

$$f_{i}(t,\mathbf{x})= \biggl(\frac{1}{6}+\frac{1}{6} t \biggr)Lx_{1}+\frac {1}{3}Lx_{2}+\frac{1}{3}Lx_{3} \geq l r_{j},\quad \mbox{where }l= \biggl(\frac {1}{3}L \theta_{j} \gamma_{j}^{*}+\frac{1}{3}L+ \frac{1}{3}L \biggr). $$

Therefore, it follows from Theorem 4.1 that there exists \(\lambda_{0}>0\) such that, for \(\lambda>\lambda_{0}\), system (7.1) has denumerably many positive solutions \(\{\mathbf{x}_{j}\}_{j=1}^{\infty}\) satisfying

$$r_{j}\leq \Vert \mathbf{x}_{j} \Vert \leq R_{j}, \quad j=1,2,\ldots. $$