1 Introduction

In this paper we consider the following system of Caputo fractional difference boundary value problems (FBVP):

$$\begin{aligned}& \Delta _{{\mathrm{C}}}^{\nu_{j}}y_{j}(t)=-\lambda_{j}f_{j} \bigl(y_{1}(t+\nu_{1}-1),\ldots,y_{n}(t+ \nu_{n}-1)\bigr), \end{aligned}$$
(1.1)
$$\begin{aligned}& y_{j}(\nu_{j}-3)=\Delta y_{j}( \nu_{j}+b)=\Delta ^{2} y_{j}(\nu _{j}-3)=0, \end{aligned}$$
(1.2)

where \(t\in[0,b+1]_{{\mathbb{N}}_{0}}:=\{0,1,\ldots,b+1\}\), \(b>3\), \(\lambda_{j}>0\), \(2<\nu_{j}\leqslant3\), and \(f_{j}:[0,+\infty)\times\cdots\times[0,+\infty)\rightarrow [0,+\infty)\) is continuous for each j (\(j=1,2,\ldots, n\)). Here \(\Delta _{{\mathrm{C}}}^{\nu}y(t)\) denotes the standard Caputo fractional difference.

Fractional difference equations have attracted great interest recently, driven by the intensive development of the theory of discrete fractional calculus itself; see [1–19] and the references therein. Abdeljawad [1] defined left and right Caputo fractional sums and differences and studied some of their properties. Holm [2] introduced the fractional sum and difference operators and developed a complete and precise theory for composing fractional sums and differences. Atici and Sengül [3] provided some analysis of discrete fractional variational problems; their paper also made some initial attempts at using the discrete fractional calculus to model biological processes. Abdeljawad and Baleanu [4] defined the right fractional sum and difference operators, obtained many of their properties, and used those properties to derive a summation-by-parts formula analogous to the one in the usual fractional calculus. In [5] the authors studied the stability of discrete nonautonomous systems within the frame of the Caputo fractional difference by using the Lyapunov direct method; they discussed conditions for uniform stability, uniform asymptotic stability, and uniform global stability. Mohammadi and Rezapour [6] discussed the existence and uniqueness of solutions of some nonlinear fractional differential equations subject to boundary value problems by using fixed point results on ordered complete gauge spaces. Recently, Wu and Baleanu introduced some applications of the Caputo fractional difference to discrete chaotic maps in [7, 8].

In particular, the authors of [9–19] developed some of the basic theory of fractional difference equations, for both IVPs and BVPs, with the delta derivative on the time scale \(\mathbb{Z}\). In [10], we obtained results on the existence of one or more positive solutions of Caputo fractional boundary value problems by means of cone-theoretic fixed point theorems. Thus, fractional difference equations have recently attracted increasing attention from a growing number of researchers. However, results on systems of discrete fractional boundary value problems remain limited (see [14–19]). Among them, Atici and Eloe [14] studied a linear system of fractional nabla difference equations with constant coefficients of the form

$$\nabla_{0}^{\nu}y(t)=Ay(t)+f(t),\quad t=1,2,\ldots, $$

where \(0<\nu<1\), A is an \(n\times n\) matrix with constant entries, and f is an n-vector valued function. The operator \(\nabla_{0}^{\nu}\) is a Riemann-Liouville fractional difference. They constructed the fundamental matrix for the homogeneous system and the causal Green’s function for the nonhomogeneous system.

In [15], the authors investigated the existence of solutions for a k-dimensional system of fractional finite difference equations:

$$\begin{aligned}& \Delta^{\nu_{1}}y_{1}(t)+f_{1}\bigl(y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1),\ldots, y_{k}(t+\nu_{k}-1)\bigr)=0, \\& \Delta^{\nu_{2}}y_{2}(t)+f_{2}\bigl(y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1),\ldots, y_{k}(t+\nu_{k}-1)\bigr)=0, \\& \ldots, \\& \Delta^{\nu_{k}}y_{k}(t)+f_{k}\bigl(y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1),\ldots, y_{k}(t+\nu_{k}-1)\bigr)=0, \\& y_{1}(\nu_{1}-2)=\Delta y_{1}( \nu_{1}+b)=0, \\& y_{2}(\nu_{2}-2)=\Delta y_{2}( \nu_{2}+b)=0, \\& \ldots, \\& y_{k}(\nu_{k}-2)=\Delta y_{k}( \nu_{k}+b)=0, \end{aligned}$$

where \(b\in{{\mathbb{N}}_{0}}\), \(1<\nu_{i}\leqslant2\), and \(f_{i}:{{\mathbb{R}}^{k}}\rightarrow{\mathbb{R}}\) are continuous functions for \(i = 1, 2,\ldots,k \). They investigated the existence of solutions for this k-dimensional system of fractional finite difference equations by using the Krasnosel’skiĭ fixed point theorem.

In [16], Goodrich studied the following pair of discrete fractional boundary value problems:

$$\begin{aligned}& -\Delta^{\nu_{1}}y_{1}(t)=\lambda_{1}a_{1}(t+ \nu_{1}-1)f_{1}\bigl(y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1)\bigr), \\& -\Delta^{\nu_{2}}y_{2}(t)=\lambda_{2}a_{2}(t+ \nu_{2}-1)f_{2}\bigl(y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1)\bigr), \\& y_{1}(\nu_{1}-2)=\psi_{1}(y_{1}), \qquad y_{2}(\nu_{2}-2)=\psi_{2}(y_{2}), \\& y_{1}(\nu_{1}+b)=\phi_{1}(y_{1}), \qquad y_{2}(\nu_{2}+b)=\phi_{2}(y_{2}), \end{aligned}$$

where \(t\in[0,b]_{{\mathbb{N}}_{0}}:=\{0,1,\ldots,b\}\), \(\lambda_{1},\lambda_{2}>0\), \(\nu_{1},\nu_{2}\in(1,2]\). Goodrich obtained the existence of at least one positive solution to this problem by means of the Krasnosel’skiĭ theorem for cones.

In [18], the authors considered the existence of at least one positive solution to the discrete fractional system:

$$\left \{ \begin{array}{l} -\Delta ^{\nu_{1}}y_{1}(t)=\lambda_{1}f_{1} (t+\nu _{1}-1,y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1) ), \quad t\in[1,b+1], \\ -\Delta ^{\nu_{2}}y_{2}(t)=\lambda_{2}f_{2} (t+\nu _{2}-1,y_{1}(t+\nu_{1}-1),y_{2}(t+\nu_{2}-1) ), \quad t\in[1,b+1], \\ y_{1}(\nu_{1}-2)=y_{1}(\nu_{1}+b+1)=0, \\ y_{2}(\nu_{2}-2)=y_{2}(\nu_{2}+b+1)=0, \end{array} \right . $$

where \(\nu_{1}, \nu_{2}\in(1,2]\).

Following this trend, in [19], we discussed the boundary value problems of fractional difference system of the form

$$\begin{aligned}& -\Delta^{\nu_{j}}y_{j}(t)=\lambda_{j}f_{j} \bigl(y_{1}(t+\nu_{1}-1), \ldots,y_{n}(t+ \nu_{n}-1)\bigr), \\& y_{j}(\nu_{j}-2)=\psi_{j}(y_{j}), \qquad y_{j}(\nu_{j}+b)=\phi_{j}(y_{j}), \end{aligned}$$

where \(t\in[0,b]_{{\mathbb{N}}_{0}}:=\{0,1,\ldots,b\}\), \(\lambda_{j}>0\), \(1<\nu_{j}\leqslant2\), and \(f_{j}:[0,+\infty)\times\cdots\times[0,+\infty)\rightarrow [0,+\infty)\) are continuous functions. For each j, \(\psi_{j}, \phi_{j}:{\mathbb{R}}^{b+3}\rightarrow\mathbb{R}\) (\(j=1,2,\ldots,n\)) are given functions. We obtained sufficient conditions for the existence of two positive solutions to that boundary value problem. In this paper we continue our studies in this field. Based on the Krasnosel’skiĭ theorem, we establish conditions on the parameters \(\lambda_{j}\) that guarantee that FBVP (1.1)-(1.2) has at least two positive solutions or at least one positive solution, respectively.

This paper is organized as follows. In Section 2, we provide basic definitions and demonstrate some lemmas in order to prove our main results. In Section 3, we establish some results for the existence of at least two positive solutions to FBVP (1.1)-(1.2), and we conclude with an example explicating our main result.

2 Preliminaries

In this section, we present some basic definitions in the discrete fractional calculus and establish some lemmas.

Definition 2.1

[1]

We define

$$ t^{\underline{\nu}}=\frac{\Gamma(t+1)}{\Gamma(t+1-\nu)} $$

for any t and ν for which the right-hand side is defined. We also appeal to the convention that if \(t+1-\nu\) is a pole of the gamma function and \(t+1\) is not a pole, then \(t^{\underline{\nu}}=0\).
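As a quick numerical sanity check, the definition above can be implemented directly. The following Python sketch is our own illustration (the helper name `falling` is not from the paper); it evaluates the falling factorial with the stated pole convention:

```python
import math

def falling(t, nu):
    """Falling factorial t^(nu) = Gamma(t+1)/Gamma(t+1-nu).

    Convention: if t+1-nu is a pole of the gamma function (a nonpositive
    integer) while t+1 is not, the value is taken to be 0.
    """
    top, bot = t + 1.0, t + 1.0 - nu
    if bot <= 0 and bot.is_integer() and not (top <= 0 and top.is_integer()):
        return 0.0
    return math.gamma(top) / math.gamma(bot)

# e.g. falling(3, 2) = Gamma(4)/Gamma(2) = 6, while falling(1, 3) = 0
# because Gamma(-1) is a pole but Gamma(2) is not.
```

For integer ν the value agrees with the ordinary falling factorial \(t(t-1)\cdots(t-\nu+1)\).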

Definition 2.2

[1]

The νth fractional sum of a function f is defined by

$$\Delta ^{-\nu}f(t)=\frac{1}{\Gamma(\nu)}\sum_{s=a}^{t-\nu }(t-s-1)^{\underline{\nu-1}}f(s) $$

for \(\nu>0\) and \(t\in \{a+\nu,a+\nu+1,\ldots \}={\mathbb{N}}_{a+\nu}\). We also define the νth Caputo fractional difference for \(\nu>0\) by

$$\Delta ^{\nu}_{\mathrm{C}}\, f(t)=\Delta ^{-(n-\nu)} \Delta ^{n}f(t)=\frac{1}{\Gamma(n-\nu)}\sum_{s=a}^{t-(n-\nu)}(t-s-1)^{\underline{n-\nu-1}} \Delta ^{n}f(s), $$

where \(n-1<\nu\leqslant n \).
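The fractional sum above can be checked against two known special cases: for \(\nu=1\) it reduces to the ordinary sum, and for \(f\equiv1\) with \(a=0\) the discrete power rule gives \(\Delta^{-\nu}1(t)=t^{\underline{\nu}}/\Gamma(\nu+1)\). A minimal Python sketch of our own (helper names are not from the paper):

```python
import math

def falling(t, nu):
    # t^(nu) = Gamma(t+1)/Gamma(t+1-nu), with the pole convention t^(nu) = 0
    top, bot = t + 1.0, t + 1.0 - nu
    if bot <= 0 and bot.is_integer() and not (top <= 0 and top.is_integer()):
        return 0.0
    return math.gamma(top) / math.gamma(bot)

def frac_sum(f, nu, a, t):
    """nu-th fractional sum (1/Gamma(nu)) * sum_{s=a}^{t-nu} (t-s-1)^(nu-1) f(s).

    Here t must lie in N_{a+nu} = {a+nu, a+nu+1, ...}, so t - nu is an integer.
    """
    return sum(falling(t - s - 1, nu - 1) * f(s)
               for s in range(a, round(t - nu) + 1)) / math.gamma(nu)

# nu = 1 recovers the ordinary sum: sum_{s=0}^{3} (s+1) = 10
print(frac_sum(lambda s: s + 1, 1, 0, 4))

# power rule with f = 1: Delta^{-nu} 1 (t) = t^(nu) / Gamma(nu+1)
nu, t = 2.5, 5.5                    # t = nu + 3 lies in N_{nu}
print(frac_sum(lambda s: 1.0, nu, 0, t), falling(t, nu) / math.gamma(nu + 1))
```

The Caputo difference is then obtained by applying `frac_sum` with order \(n-\nu\) to the n-th ordinary difference of f.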

Lemma 2.3

[1, 2]

Assume that \(\nu>0\) and f is defined on \({\mathbb{N}}_{a}\). Then

$$\Delta _{a+(n-\nu)}^{-\nu} \Delta ^{\nu}_{\mathrm{C}}\, f(t)=f(t)-\sum_{k=0}^{n-1}c_{k}(t-a)^{\underline{k}}, $$

where \(c_{k}\in{\mathbb{R}}\), \(k=0,1,\ldots,n-1\), and \(n-1<\nu\leqslant n\).

Lemma 2.4

[2]

Let \(f:{\mathbb{N}}_{a+\nu}\times{\mathbb{N}}_{a}\to{\mathbb{R}}\) be given. Then

$$\Delta \Biggl(\sum_{s=a}^{t-\nu}f(t,s) \Biggr)=\sum_{s=a}^{t-\nu }\Delta _{t}\, f(t,s)+f(t+1,t+1- \nu) \quad \textit{for } t\in{\mathbb{N}}_{a+\nu}. $$
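Lemma 2.4 is a discrete Leibniz rule for differencing a sum whose upper limit moves with t. For an integer ν (chosen here only so the index arithmetic stays exact) it can be verified directly; a small Python check with an arbitrary test function of our own choosing:

```python
a, nu = 0, 2                 # integer nu keeps t - nu on the integer grid

def F(t, s):                 # an arbitrary function on N_{a+nu} x N_a
    return (t - s - 1) ** 2 + 3 * s

def S(t):                    # S(t) = sum_{s=a}^{t-nu} F(t, s)
    return sum(F(t, s) for s in range(a, t - nu + 1))

t = 7
lhs = S(t + 1) - S(t)                                        # Delta S(t)
rhs = (sum(F(t + 1, s) - F(t, s) for s in range(a, t - nu + 1))
       + F(t + 1, t + 1 - nu))                               # Lemma 2.4
print(lhs == rhs)
```

The extra boundary term \(f(t+1,t+1-\nu)\) accounts for the new summand that enters when the upper limit advances.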

In order to get our main results, we now state an important lemma. This lemma gives a representation for the solution of (1.1)-(1.2), provided that the solution exists.

Lemma 2.5

[10]

Let \(2<\nu\leqslant3\) and \(g:[\nu-2, \nu-1, \ldots, \nu+b]_{{\mathbb{N}}_{\nu-2}}\to{\mathbb{R}}\) be given. Then the solution of the FBVP

$$\begin{aligned}& \Delta _{{\mathrm{C}}}^{\nu}y(t)=-g(t+\nu-1), \end{aligned}$$
(2.1)
$$\begin{aligned}& y(\nu-3)=\Delta y(\nu+b)=\Delta ^{2}y(\nu-3)=0 \end{aligned}$$
(2.2)

is given by

$$y(t)=\sum_{s=0}^{b+1}G(t,s)g(s+\nu-1), $$

where the Green’s function \(G:[\nu-2,\nu-1,\ldots,\nu+b]_{{\mathbb{N}}_{\nu-2}}\times[0,b+1]_{{\mathbb{N}}_{0}}\rightarrow{\mathbb{R}}\) is defined by

$$G(t,s)=\frac{1}{\Gamma(\nu)}\left \{ \begin{array}{l@{\quad}l} (\nu-1)(t-\nu+3)(\nu+b-s-1)^{\underline {\nu-2}} \\ \quad {}-(t-s-1)^{\underline{\nu-1}} ,& 0\leqslant s< t-\nu+1\leqslant{b+1}, \\ (\nu-1)(t-\nu+3)(\nu+b-s-1)^{\underline{\nu-2}} , &0\leqslant t-\nu+1\leqslant s\leqslant b+1. \end{array} \right . $$

Remark

Notice that \(G(\nu-3,s)=0\) and \(G(t,b+2)=0\). Thus G could be extended to \([\nu-3,\nu+b]_{{\mathbb{N}}_{\nu-3}}\times [0,b+2]_{{\mathbb{N}}_{0}}\), so we restrict our discussion to \((t,s)\in [\nu-2,\nu+b]_{{\mathbb{N}}_{\nu-2}}\times[0,b+1]_{{\mathbb{N}}_{0}}\).

Lemma 2.6

[10]

The Green’s function G satisfies the following conditions:

  1. (i)

    \(G(t,s)>0\), \((t,s)\in[\nu-2,\nu+b]_{{\mathbb{N}}_{\nu-2}}\times[0,b+1]_{{\mathbb{N}}_{0}}\).

  2. (ii)

    \(\max_{t\in[\nu-2,\nu+b]_{{\mathbb{N}}_{\nu-2}}}G(t,s)=G(\nu+b, s)\), \(s\in[0, b+1]_{{\mathbb{N}}_{0}}\).

  3. (iii)

    \(\min_{\frac{\nu+b}{4}\leqslant t\leqslant\frac{3(\nu+b)}{4}} G(t,s)\geqslant \frac{1}{4} \max_{t\in[\nu-2,\nu+b]_{{\mathbb{N}}_{\nu-2}}}G(t,s)=\frac{1}{4}G(\nu+b, s)\), \(s\in[0, b+1]_{{\mathbb{N}}_{0}}\).

The proofs of Lemma 2.5 and Lemma 2.6 can be found in [10], so we omit them here.
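Since the proofs are omitted, properties (i)-(iii) of Lemma 2.6 can at least be spot-checked numerically for sample values of ν and b. The following Python sketch is our own; it assumes the Green's function is implemented exactly as displayed in Lemma 2.5 (all helper names and the choice \(\nu=2.5\), \(b=8\) are ours):

```python
import math

def falling(t, nu):
    top, bot = t + 1.0, t + 1.0 - nu
    if bot <= 0 and bot.is_integer() and not (top <= 0 and top.is_integer()):
        return 0.0
    return math.gamma(top) / math.gamma(bot)

def G(t, s, nu, b):
    # Green's function of Lemma 2.5; t in {nu-2, ..., nu+b}, s in {0, ..., b+1}
    val = (nu - 1) * (t - nu + 3) * falling(nu + b - s - 1, nu - 2)
    if s < t - nu + 1:                          # first branch of the definition
        val -= falling(t - s - 1, nu - 1)
    return val / math.gamma(nu)

nu, b = 2.5, 8                                  # sample data with 2 < nu <= 3, b > 3
ts = [nu - 2 + k for k in range(b + 3)]         # the grid nu-2, ..., nu+b
for s in range(b + 2):
    col = [G(t, s, nu, b) for t in ts]
    assert min(col) > 0                         # (i) positivity
    assert max(col) == G(nu + b, s, nu, b)      # (ii) maximum at t = nu + b
    window = [G(t, s, nu, b) for t in ts
              if (nu + b) / 4 <= t <= 3 * (nu + b) / 4]
    assert min(window) >= max(col) / 4 - 1e-12  # (iii) lower Harnack-type bound
```

Such a check is of course no substitute for the proofs in [10], but it guards against transcription errors in the formula for G.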

Let

$$\mathcal{B}_{j}:=\bigl\{ y_{j}:[\nu_{j}-3, \nu_{j}+b]_{{\mathbb{N}}_{\nu _{j}-3}}\to{\mathbb{R}}, y_{j}( \nu_{j}-3)=\Delta y_{j}(\nu_{j}+b)= \Delta ^{2}y_{j}(\nu_{j}-3)=0\bigr\} $$

be equipped with the usual maximum norm \(\|\cdot\|\). It is easy to verify that each \(\mathcal{B}_{j}\) is a Banach space. Then we put \(\mathcal{K}:={\mathcal{B}}_{1}\times {\mathcal{B}}_{2}\times\cdots\times{\mathcal{B}}_{n}\). By equipping \(\mathcal{K}\) with the norm

$$ \bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert = \|y_{1}\|+\cdots+\|y_{n}\|, $$

it follows that \((\mathcal{K},\|\cdot\|)\) is a Banach space.

Now consider the operator \(T:\mathcal{K}\rightarrow\mathcal{K}\) defined by

$$ T(y_{1},\ldots,y_{n}) (t_{1}, \ldots,t_{n})=\bigl(T_{1}(y_{1},\ldots ,y_{n}) (t_{1}),\ldots,T_{n}(y_{1}, \ldots,y_{n}) (t_{n})\bigr), $$
(2.3)

where we define \(T_{j}:\mathcal{K}\rightarrow{\mathcal{B}}_{j}\) by

$$ T_{j}(y_{1},\ldots,y_{n}) (t_{j}) = \lambda_{j}\sum_{s=0}^{b+1}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots,y_{n}(s+ \nu_{n}-1)\bigr). $$
(2.4)

Let \(I:=[\frac{\nu_{1}+b}{4},\frac{3(\nu_{1}+b)}{4}]\times\cdots\times[\frac{\nu_{n}+b}{4},\frac{3(\nu_{n}+b)}{4}]\). In the sequel, we shall also make use of the cone

$$\begin{aligned} \Lambda :=&\biggl\{ (y_{1},\ldots,y_{n})\in\mathcal{K}: y_{1},\ldots,y_{n}\geqslant0, \\ &\min_{(t_{1},\ldots,t_{n})\in I}\bigl[y_{1}(t_{1})+ \cdots+y_{n}(t_{n})\bigr] \geqslant \frac{1}{4} \bigl\| (y_{1},\ldots,y_{n})\bigr\| \biggr\} . \end{aligned}$$

Lemma 2.7

Let T be the operator defined as in (2.3). Then \(T: \Lambda\rightarrow\Lambda\).

Proof

First, for \((y_{1},\ldots,y_{n})\in\mathcal{K}\), by the definition of \(T_{j}\) (\(j=1,2,\ldots,n\)), it is clear that

$$T_{j}(y_{1},\ldots,y_{n}) (t_{j}) \geqslant0,\quad j=1,2,\ldots, n. $$

On the other hand, we show that

$$\min_{(t_{1},\ldots,t_{n})\in I}\bigl[T_{1}(y_{1},\ldots ,y_{n}) (t_{1})+\cdots+T_{n}(y_{1}, \ldots, y_{n}) (t_{n})\bigr]\geqslant\frac{1}{4}\bigl\Vert T(y_{1},\ldots,y_{n})\bigr\Vert $$

for \((y_{1},\ldots,y_{n})\in\mathcal{K}\). In fact, by Lemma 2.6(iii), we have

$$\begin{aligned}& \min_{t_{j}\in[\frac{\nu_{j}+b}{4},\frac{3(\nu _{j}+b)}{4}]}T_{j}(y_{1}, \ldots,y_{n}) (t_{j}) \\& \quad \geqslant \min_{t_{j}\in[\frac{b+\nu_{j}}{4},\frac{3(\nu_{j}+b)}{4}]} \lambda_{j}\sum _{s=0}^{b+1}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots ,y_{n}(s+ \nu_{n}-1)\bigr) \\& \quad \geqslant\lambda_{j}\sum_{s=0}^{b+1} \frac{1}{4}G_{j}(\nu_{j}+b,s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots ,y_{n}(s+ \nu_{n}-1)\bigr) \\& \quad \geqslant \frac{1}{4}\max_{t_{j}\in[\nu_{j}-2,\nu _{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}} \lambda_{j}\sum_{s=0}^{b+1}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots,y_{n}(s+\nu_{n}-1) \bigr) \\& \quad = \frac{1}{4}\bigl\Vert T_{j}(y_{1}, \ldots, y_{n}) \bigr\Vert \end{aligned}$$

for \((y_{1},\ldots,y_{n})\in\mathcal{K}\) and \(j=1,2,\ldots,n\). Then we obtain

$$\begin{aligned}& \min_{(t_{1},\ldots,t_{n})\in I}\bigl[T_{1}(y_{1}, \ldots,y_{n}) (t_{1})+\cdots+T_{n}(y_{1}, \ldots, y_{n}) (t_{n})\bigr] \\& \quad \geqslant\min_{(t_{1},\ldots,t_{n})\in I}T_{1}(y_{1}, \ldots,y_{n}) (t_{1})+\cdots +\min_{(t_{1},\ldots,t_{n})\in I}T_{n}(y_{1}, \ldots, y_{n}) (t_{n}) \\& \quad \geqslant \frac{1}{4}\bigl\Vert T_{1}(y_{1}, \ldots, y_{n}) \bigr\Vert +\cdots+ \frac{1}{4}\bigl\Vert T_{n}(y_{1}, \ldots, y_{n}) \bigr\Vert \\& \quad \geqslant \frac{1}{4}\bigl\{ \bigl\Vert T_{1}(y_{1}, \ldots, y_{n}) \bigr\Vert +\cdots +\bigl\Vert T_{n}(y_{1}, \ldots, y_{n}) \bigr\Vert \bigr\} \\& \quad = \frac{1}{4}\bigl\Vert T(y_{1}, \ldots, y_{n}) \bigr\Vert \end{aligned}$$

for \((y_{1},\ldots,y_{n})\in\mathcal{K}\). So, we conclude that \(T: \Lambda\rightarrow\Lambda\). This completes the proof. □

Theorem 2.8

Let \(f_{j}:[0,+\infty)\times\cdots\times[0,+\infty )\rightarrow[0,+\infty)\) be given for \(j=1,\ldots,n\). If \((y_{1},\ldots ,y_{n})\in\mathcal{K}\) is a fixed point of T, then \((y_{1},\ldots,y_{n})\in\mathcal{K}\) is a solution of FBVP (1.1)-(1.2).

Proof

Suppose that the operator T has a fixed point, say \((y_{1},\ldots,y_{n})\in\mathcal{K}\). Let \((t_{1},\ldots,t_{n})\in\mathbb{N}_{{\nu_{1}-2}}\times\cdots \times\mathbb{N}_{{\nu_{n}-2}}\). Then we have

$$ y_{j}(t_{j})=T_{j}(y_{1}, \ldots, y_{n}) (t_{j}),\quad j=1,2,\ldots,n, $$

where \(T_{j}\) is defined as in (2.4). It is easy to check that

$$T_{j}(y_{1},\ldots,y_{n}) ( \nu_{j}-3)= 0 $$

and

$$\begin{aligned}& \Delta T_{j}(y_{1},\ldots,y_{n}) ( \nu_{j}+b) \\& \quad = T_{j}(y_{1},\ldots,y_{n}) ( \nu_{j}+b+1)-T_{j}(y_{1},\ldots,y_{n}) (\nu_{j}+b) \\& \quad = \lambda_{j}\sum_{s=0}^{b+1}G_{j}( \nu_{j}+b+1,s)f_{j}\bigl(y_{1}(s+ \nu_{1}-1),\ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \qquad {} -\lambda_{j}\sum_{s=0}^{b+1}G_{j}( \nu_{j}+b,s)f_{j}\bigl(y_{1}(s+ \nu_{1}-1),\ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad = \lambda_{j}\sum_{s=0}^{b+1} \bigl[G_{j}(\nu_{j}+b+1,s)- G_{j}( \nu_{j}+b,s) \bigr] f_{j}\bigl(y_{1}(s+ \nu_{1}-1),\ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad = \frac{\lambda_{j}}{\Gamma(\nu_{j})}\sum_{s=0}^{b+1} \bigl[(\nu_{j}-1) (\nu _{j}+b+1-\nu_{j}+3) ( \nu_{j}+b-s-1)^{\underline{\nu_{j}-2}}-(\nu _{j}+b+1-s-1)^{\underline{\nu_{j}-1}} \\& \qquad{} - (\nu_{j}-1) (\nu_{j}+b-\nu_{j}+3) ( \nu_{j}+b-s-1)^{\underline{\nu_{j}-2}}+(\nu _{j}+b-s-1)^{\underline{\nu_{j}-1}} \bigr] \\& \qquad {}\times f_{j}\bigl(y_{1}(s+\nu_{1}-1), \ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad = \frac{\lambda_{j}}{\Gamma(\nu_{j})}\sum_{s=0}^{b+1} \biggl\{ (\nu_{j}-1) (\nu_{j}+b-s-1)^{\underline{\nu_{j}-2}} \\& \qquad {}- \biggl[ \frac{(\nu_{j}+b-s)\Gamma(\nu_{j}+b-s)}{(b-s+1) \Gamma(b-s+1)}-\frac{\Gamma(\nu_{j}+b-s)}{\Gamma(b-s+1)} \biggr] \biggr\} \\& \qquad {}\times f_{j}\bigl(y_{1}(s+\nu_{1}-1), \ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad = \frac{\lambda_{j}}{\Gamma(\nu_{j})}\sum_{s=0}^{b+1} \biggl[(\nu_{j}-1) (\nu_{j}+b-s-1)^{\underline{\nu_{j}-2}}- \frac{(\nu_{j}-1)\Gamma(\nu_{j}+b-s)}{(b-s+1) \Gamma(b-s+1)} \biggr] \\& \qquad {} \times f_{j}\bigl(y_{1}(s+\nu_{1}-1), \ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad = 0. \end{aligned}$$

Finally, when \(0\leqslant t_{j}-\nu_{j}+1\leqslant s\leqslant b+1\),

$$G_{j}(t_{j},s)=\frac{1}{\Gamma(\nu_{j})} (\nu_{j}-1) (t_{j}-\nu_{j}+3) (\nu_{j}+b-s-1)^{\underline{\nu_{j}-2}} , $$

then

$$\Delta ^{2}_{t_{j}}G_{j}(t_{j},s)=0. $$

Therefore, we can get

$$\Delta ^{2}T_{j}(y_{1},\ldots,y_{n}) (\nu_{j}-3)=0. $$

So the boundary conditions are satisfied, which completes the proof. □

Finally, to prove our main results, we need some facts from cone theory. In particular, we require the following well-known fixed point theorem for cones from [20].

Theorem 2.9

[20]

Let \(\mathcal{B}\) be a Banach space and let \(\mathcal{K}\subseteq\mathcal{B}\) be a cone. Assume that \(\Omega_{1}\) and \(\Omega_{2}\) are bounded open sets contained in \(\mathcal{B}\) such that \(0\in\Omega_{1}\) and \(\overline{\Omega}_{1}\subseteq\Omega_{2}\). Assume further that \(T:{\mathcal{K}}\cap (\overline{\Omega}_{2}\backslash\Omega_{1})\to\mathcal{K}\) is a completely continuous operator. If either

  1. (i)

    \(\|Ty\|\leqslant\|y\|\) for \(y\in{\mathcal{K}}\cap{\partial \Omega_{1}}\) and \(\|Ty\|\geqslant\|y\|\) for \(y\in{\mathcal{K}}\cap {\partial\Omega_{2}}\), or

  2. (ii)

    \(\|Ty\|\geqslant\|y\|\) for \(y\in{\mathcal{K}}\cap{\partial \Omega_{1}}\) and \(\|Ty\|\leqslant\|y\|\) for \(y\in{\mathcal{K}}\cap {\partial\Omega_{2}}\),

then the operator T has at least one fixed point in \({\mathcal{K}}\cap(\overline{\Omega}_{2}\backslash\Omega_{1})\).

3 Main results

In this section, we state and prove the existence of at least two positive solutions of FBVP (1.1)-(1.2). We then conclude the section with examples illustrating our main results. First, denote

$$\begin{aligned}& \alpha_{j}= \sum_{s=0}^{b+1}G_{j}( \nu_{j}+b,s), \qquad \beta_{j}=\sum _{s=[\frac{\nu_{j}+b}{4}-\nu_{j}+1]}^{[\frac{3(\nu _{j}+b)}{4}-\nu_{j}+1]}G_{j} \biggl( \biggl[ \frac{b-\nu_{j}}{2} \biggr]+\nu _{j},s \biggr), \\& \Omega_{\xi}=\bigl\{ (y_{1},\ldots,y_{n})\in \Lambda: \bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert < \xi \bigr\} , \\& \partial\Omega_{\xi}=\bigl\{ (y_{1},\ldots,y_{n}) \in\Lambda: \bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert = \xi\bigr\} . \end{aligned}$$
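For concreteness, the constants \(\alpha_{j}\) and \(\beta_{j}\) can be computed numerically, reading \([\cdot]\) as the integer part. A Python sketch of our own for the sample values \(\nu=2.5\), \(b=8\), reusing the Green's function of Lemma 2.5 (helper names are ours):

```python
import math

def falling(t, nu):
    top, bot = t + 1.0, t + 1.0 - nu
    if bot <= 0 and bot.is_integer() and not (top <= 0 and top.is_integer()):
        return 0.0
    return math.gamma(top) / math.gamma(bot)

def G(t, s, nu, b):
    # Green's function of Lemma 2.5
    val = (nu - 1) * (t - nu + 3) * falling(nu + b - s - 1, nu - 2)
    if s < t - nu + 1:
        val -= falling(t - s - 1, nu - 1)
    return val / math.gamma(nu)

nu, b = 2.5, 8                                   # sample data
alpha = sum(G(nu + b, s, nu, b) for s in range(b + 2))

lo = math.floor((nu + b) / 4 - nu + 1)           # lower summation limit
hi = math.floor(3 * (nu + b) / 4 - nu + 1)       # upper summation limit
t_star = math.floor((b - nu) / 2) + nu           # evaluation point [(b-nu)/2]+nu
beta = sum(G(t_star, s, nu, b) for s in range(lo, hi + 1))

# beta sums fewer, pointwise smaller terms than alpha, so 0 < beta < alpha
print(alpha, beta)
```

By Lemma 2.6(ii) each summand of \(\beta_{j}\) is dominated by the corresponding summand of \(\alpha_{j}\), so \(0<\beta_{j}<\alpha_{j}\) always holds.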

For convenience, we now present the conditions that we presume in the sequel.

(H1):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow0^{+}} \frac{f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=\infty\), \({t_{j}\in[\nu _{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

(H2):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow\infty} \frac{f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=\infty\), \({t_{j}\in[\nu _{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

(H3):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow0^{+}} \frac{f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=0\), \({t_{j}\in[\nu_{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

(H4):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow\infty} \frac{f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=0\), \({t_{j}\in[\nu_{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

(H5):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow0^{+}} \frac{ f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=l_{j}\), \({t_{j}\in[\nu _{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

(H6):

\(\lim_{(y_{1}+\cdots+y_{n})\rightarrow\infty} \frac {f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=L_{j}\), \({t_{j}\in[\nu _{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2,\ldots,n\),

where \(0< l_{j}, L_{j}<+\infty\).

Theorem 3.1

Suppose that there exist two different positive numbers \(r_{1}\) and \(r_{2}\) (\(r_{1}< r_{2}\)) such that

$$\begin{aligned}& \max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant r_{1}}f_{j}(y_{1}, \ldots,y_{n})\leqslant\frac{r_{1}}{n\lambda_{j}\alpha_{j}}, \\& \min_{\frac{1}{4} r_{2}\leqslant y_{1}+\cdots+y_{n}\leqslant r_{2}}f_{j}(y_{1}, \ldots,y_{n})\geqslant\frac{r_{2}}{n\lambda_{j}\beta_{j}}. \end{aligned}$$

Then the operator T has a fixed point \((\overline{y}_{1},\ldots,\overline{y}_{n})\in\Lambda\) such that

$$ r_{1}\leqslant\bigl\Vert (\overline{y}_{1},\ldots, \overline{y}_{n})\bigr\Vert \leqslant r_{2}. $$

Proof

For any \((y_{1},\ldots,y_{n})\in\partial\Omega_{r_{1}}\), i.e., \(\|(y_{1},\ldots,y_{n})\|=r_{1}\), we have

$$\begin{aligned}& \bigl\Vert T_{j}(y_{1},\ldots,y_{n})\bigr\Vert \\& \quad = \max_{t_{j}\in[\nu_{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\Biggl\vert \lambda_{j}\sum _{s=0}^{b+1}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots,y_{n}(s+ \nu_{n}-1)\bigr)\Biggr\vert \\& \quad \leqslant \lambda_{j}\sum_{s=0}^{b+1} \max_{t_{j}\in[\nu_{j}-2,\nu _{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots,y_{n}(s+ \nu_{n}-1)\bigr) \\& \quad \leqslant \lambda_{j}\sum_{s=0}^{b+1}G_{j}( \nu_{j}+b,s) \frac{ r_{1}}{n\lambda_{j}\alpha_{j}} \\& \quad = \frac{r_{1}}{n} \\& \quad = \frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n}) \bigr\Vert , \end{aligned}$$

\(j=1,\ldots,n\). That is,

$$\begin{aligned}& \bigl\Vert T(y_{1},\ldots,y_{n}) (t_{1}, \ldots,t_{n})\bigr\Vert \\& \quad =\bigl\Vert \bigl(T_{1}(y_{1},\ldots,y_{n}) (t_{1}),\ldots,T_{n}(y_{1}, \ldots,y_{n}) (t_{n})\bigr)\bigr\Vert \\& \quad =\bigl\Vert T_{1}(y_{1},\ldots,y_{n}) \bigr\Vert +\cdots+\bigl\Vert T_{n}(y_{1}, \ldots,y_{n})\bigr\Vert \\& \quad \leqslant \frac{1}{n}\bigl\Vert (y_{1}, \ldots,y_{n})\bigr\Vert +\cdots +\frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \\& \quad =\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \end{aligned}$$

for \((y_{1},\ldots,y_{n})\in\partial\Omega_{r_{1}}\).

On the other hand, for any \((y_{1},\ldots,y_{n})\in\partial\Omega_{r_{2}}\) and \(\frac{\nu_{j}+b}{4}\leqslant t_{j}\leqslant\frac{3(\nu_{j}+b)}{4}\), noting that \({[\frac{b-\nu_{j}}{2}]+\nu_{j}}\in[\frac{\nu_{j}+b}{4}, \frac{3(\nu_{j}+b)}{4}]\), we have

$$\begin{aligned}& \bigl(T_{j}(y_{1},\ldots,y_{n})\bigr) \biggl( \biggl[ \frac{b-\nu _{j}}{2} \biggr]+\nu_{j} \biggr) \\& \quad = \lambda_{j}\sum_{s=0}^{b+1}G_{j} \biggl( \biggl[ \frac {b-\nu_{j}}{2} \biggr]+\nu_{j},s \biggr)f_{j}\bigl(y_{1}(s+\nu_{1}-1), \ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad \geqslant\lambda_{j}\sum_{s=[\frac{\nu_{j}+b}{4}-\nu _{j}+1]}^{[\frac{3(\nu_{j}+b)}{4}-\nu_{j}+1]}G_{j} \biggl( \biggl[ \frac{b-\nu_{j}}{2} \biggr]+\nu_{j},s \biggr)f_{j}\bigl(y_{1}(s+\nu _{1}-1),\ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad \geqslant\lambda_{j}\sum_{s=[\frac{\nu_{j}+b}{4}-\nu _{j}+1]}^{[\frac{3(\nu_{j}+b)}{4}-\nu_{j}+1]}G_{j} \biggl( \biggl[ \frac{b-\nu_{j}}{2} \biggr]+\nu_{j},s \biggr) \frac {r_{2}}{n\lambda_{j} \beta_{j}} \\& \quad = \frac{r_{2}}{n}. \end{aligned}$$

Then

$$\bigl\Vert T_{j}(y_{1},\ldots,y_{n})\bigr\Vert \geqslant \frac{r_{2}}{n}=\frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert , \quad j=1,\ldots,n. $$

That is,

$$\begin{aligned}& \bigl\Vert T(y_{1},\ldots,y_{n}) (t_{1}, \ldots,t_{n})\bigr\Vert \\& \quad = \bigl\Vert \bigl(T_{1}(y_{1},\ldots,y_{n}) (t_{1}),\ldots,T_{n}(y_{1}, \ldots,y_{n}) (t_{n})\bigr)\bigr\Vert \\& \quad =\bigl\Vert T_{1}(y_{1},\ldots,y_{n}) \bigr\Vert +\cdots+\bigl\Vert T_{n}(y_{1}, \ldots,y_{n})\bigr\Vert \\& \quad \geqslant \frac{1}{n}\bigl\Vert (y_{1}, \ldots,y_{n})\bigr\Vert +\cdots +\frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \\& \quad = \bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \end{aligned}$$

for \((y_{1},\ldots,y_{n})\in\partial\Omega_{r_{2}}\).

By Theorem 2.9, there exists \((\overline{y}_{1},\ldots,\overline{y}_{n})\in\Lambda\) such that \(T(\overline{y}_{1},\ldots,\overline{y}_{n}) =(\overline{y}_{1},\ldots,\overline{y}_{n})\). The proof is complete. □

Theorem 3.2

Suppose that conditions (H1) and (H2) hold. Then, for every \(\lambda_{j}\in(0,\lambda_{j}^{*})\), FBVP (1.1)-(1.2) has at least two positive solutions, where

$$ \lambda^{*}_{j}=\frac{1}{n\alpha_{j}}\sup _{r>0}\frac{r}{\max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant r}f_{j}(y_{1},\ldots,y_{n})}. $$

Proof

Define the function

$$ \varphi_{j}(r)=\frac{r}{n\alpha_{j}\max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant r}f_{j}(y_{1},\ldots,y_{n})},\quad j=1,\ldots,n. $$

It is easy to see that \(\varphi_{j}:(0,+\infty)\to(0,+\infty)\) is a continuous function. From (H1), we see that \(\lim_{r\rightarrow0}\frac{r}{f_{j}(r)}=0\), that is, \(\lim_{r\rightarrow0}\frac{r}{n\alpha_{j}f_{j}(r)}=0\), and

$$ 0< \varphi_{j}(r)=\frac{r}{n\alpha_{j}\max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant r}f_{j}(y_{1},\ldots,y_{n})}\leqslant\frac{r}{n\alpha_{j}f_{j}(r)}, $$

so \(\lim_{r\rightarrow0}\varphi_{j}(r)=0\).

From (H2), we see further that \(\lim_{r\rightarrow \infty}\varphi_{j}(r)=0\). Then there exists \(r_{0}>0\) such that \(\varphi_{j}(r_{0})=\max_{r>0}\varphi_{j}(r)=\lambda^{*}_{j}\), \(j=1,\ldots,n\). For any \(\lambda_{j}\in(0,\lambda^{*}_{j})\), by the intermediate value theorem, there exist two points \(d_{1}\in(0,r_{0})\), \(d_{2}\in(r_{0},\infty)\) such that \(\varphi_{j}(d_{1})=\varphi_{j}(d_{2})=\lambda_{j}\). Thus, we have

$$\begin{aligned}& f_{j}(y_{1},\ldots,y_{n})\leqslant \frac{d_{1}}{n\lambda_{j}\alpha _{j}},\quad y_{1}+\cdots+y_{n} \in[0,d_{1}], \\& f_{j}(y_{1},\ldots,y_{n})\leqslant \frac{d_{2}}{n\lambda_{j}\alpha _{j}},\quad y_{1}+\cdots+y_{n} \in[0,d_{2}]. \end{aligned}$$

On the other hand, since (H1) and (H2) hold, there exist \(e_{1}\in(0,d_{1})\), \(e_{2}\in(d_{2},\infty)\) such that

$$ \frac{f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}\geqslant\frac{4}{n\lambda_{j} \beta_{j}},\quad y_{1}+ \cdots+y_{n}\in (0,e_{1}]\cup\biggl[\frac{1}{4}e_{2}, \infty\biggr). $$

Thus

$$\begin{aligned}& f_{j}(y_{1},\ldots,y_{n})\geqslant \frac{e_{1}}{n\lambda_{j} \beta_{j}}, \quad y_{1}+\cdots+y_{n}\in\biggl[ \frac{1}{4} e_{1},e_{1}\biggr], \\& f_{j}(y_{1},\ldots,y_{n})\geqslant \frac{e_{2}}{n\lambda_{j} \beta_{j}},\quad y_{1}+\cdots+y_{n}\in\biggl[ \frac{1}{4} e_{2},e_{2}\biggr]. \end{aligned}$$

Application of Theorem 3.1 and Theorem 2.8 leads to two distinct positive solutions of FBVP (1.1)-(1.2) which satisfy

$$ e_{1} \leqslant\bigl\Vert (\overline{y}_{1},\ldots, \overline{y}_{n})\bigr\Vert \leqslant d_{1},\qquad d_{2}\leqslant\bigl\Vert \bigl(\overline{y}^{\prime}_{1}, \ldots,\overline{y}^{\prime}_{n}\bigr)\bigr\Vert \leqslant e_{2}. $$

The proof is complete. □

By the proof of Theorem 3.2, we obtain the following.

Corollary 3.3

If one of conditions (H1) and (H2) is satisfied, then for every \(0<\lambda_{j}<\lambda_{j}^{*}\), FBVP (1.1)-(1.2) has at least one positive solution.
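To make Corollary 3.3 concrete, consider the scalar case \(n=1\) with \(f(y)=1+y^{2}\), which satisfies both (H1) and (H2); then \(\sup_{r>0}r/\max_{0\leqslant y\leqslant r}f(y)=\sup_{r>0}r/(1+r^{2})=1/2\), so \(\lambda^{*}=1/(2\alpha)\). For a λ below this threshold a positive solution can even be approximated by Picard iteration of the operator T, which is a contraction near the origin for small λ. A Python sketch of our own, with all numerical choices ours:

```python
import math

def falling(t, nu):
    top, bot = t + 1.0, t + 1.0 - nu
    if bot <= 0 and bot.is_integer() and not (top <= 0 and top.is_integer()):
        return 0.0
    return math.gamma(top) / math.gamma(bot)

def G(t, s, nu, b):
    # Green's function of Lemma 2.5
    val = (nu - 1) * (t - nu + 3) * falling(nu + b - s - 1, nu - 2)
    if s < t - nu + 1:
        val -= falling(t - s - 1, nu - 1)
    return val / math.gamma(nu)

nu, b = 2.5, 8
f = lambda y: 1.0 + y * y                       # satisfies (H1) and (H2)
alpha = sum(G(nu + b, s, nu, b) for s in range(b + 2))
lam = 0.5 / (2.0 * alpha)                       # lambda = lambda*/2 < lambda*

ts = [nu - 2 + k for k in range(b + 3)]         # grid nu-2, ..., nu+b
y = [0.0] * len(ts)                             # Picard iteration y <- T y;
for _ in range(100):                            # note y(s + nu - 1) = y[s + 1]
    y_new = [lam * sum(G(t, s, nu, b) * f(y[s + 1]) for s in range(b + 2))
             for t in ts]
    resid = max(abs(u - v) for u, v in zip(y_new, y))
    y = y_new
    if resid < 1e-12:
        break

print(resid, min(y))   # small residual; the iterate is strictly positive
```

The limit is a fixed point of T and hence, by Theorem 2.8, a positive solution of the scalar FBVP; Theorem 3.2 guarantees a second, larger solution that this simple iteration does not find.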

Theorem 3.4

Suppose that (H3), (H4) hold. Then, for any \(\lambda_{j}\geqslant\lambda_{j}^{**}\), FBVP (1.1)-(1.2) has at least two positive solutions, where

$$ \lambda^{**}_{j}=\frac{1}{n\beta_{j}}\inf _{r>0}\frac{r}{\min_{\frac{1}{4} r\leqslant y_{1}+\cdots+y_{n}\leqslant r}f_{j}(y_{1},\ldots,y_{n})}. $$

Proof

Define the function

$$ \psi_{j}(r)=\frac{r}{n\beta_{j}\min_{r/4\leqslant y_{1}+\cdots+y_{n}\leqslant r}f_{j}(y_{1},\ldots,y_{n})},\quad j=1,\ldots,n. $$

We know that \(\psi_{j}: (0,+\infty)\to (0,+\infty)\) is a continuous function. For \(\lambda_{j}>\lambda_{j}^{**}\), there exists \(0< e_{3}<+\infty \) such that

$$ f_{j}(y_{1},\ldots,y_{n})\geqslant \frac{e_{3}}{n\lambda_{j} \beta_{j}}, \quad y_{1}+\cdots+y_{n}\in\biggl[ \frac{1}{4} e_{3},e_{3}\biggr]. $$

By condition (H3), there exists \(0< d_{3}<e_{3}\) such that

$$ f_{j}(y_{1},\ldots,y_{n})\leqslant \frac{d_{3}}{n\lambda_{j}\alpha _{j}},\quad y_{1}+\cdots+y_{n} \in[0,d_{3}]. $$

From condition (H4), there exists \(e_{3}< d_{0}<+\infty\) such that

$$ \frac{f_{j}(y_{1}, \ldots,y_{n})}{y_{1}+\cdots+y_{n}}\leqslant\frac{1}{n\lambda_{j} \alpha_{j}}, \quad y_{1}+ \cdots+y_{n}\in[d_{0},+\infty). $$

Let \(M_{j}=\max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant d_{0}}f_{j}(y_{1}, \ldots,y_{n})\). Choose \(d_{4}>d_{0}\) such that \(d_{4}\geqslant n\lambda_{j}M_{j}\alpha_{j}\). Then

$$ f_{j}(y_{1},\ldots,y_{n})\leqslant \frac{d_{4}}{n\lambda_{j}\alpha _{j}},\quad y_{1}+\cdots+y_{n} \in[0,d_{4}]. $$

By Theorem 3.1 and Theorem 2.8, the proof is complete. □

From the proof of Theorem 3.4, we get the following.

Corollary 3.5

Suppose that one of conditions (H3) and (H4) holds. Then, for every \(\lambda_{j}>\lambda_{j}^{**}\), FBVP (1.1)-(1.2) has at least one positive solution.

Theorem 3.6

Suppose that one of the following cases is satisfied:

  1. (1)

    (H1), (H6) hold, and \(0<\lambda_{j}< \frac{1}{n\alpha_{j} L_{j}}\);

  2. (2)

    (H2), (H5) hold, and \(0<\lambda_{j}< \frac {1}{n\alpha_{j} l_{j}}\).

Then FBVP (1.1)-(1.2) has at least one positive solution.

Proof

(1) From (H6) (namely \(\lim_{(y_{1}+\cdots+y_{n})\rightarrow\infty} \frac{ f_{j}(y_{1},\ldots,y_{n})}{y_{1}+\cdots+y_{n}}=L_{j}\), \({t_{j}\in[\nu _{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\), \(j=1,2, \ldots,n\)), for any \(\epsilon _{j}>0\) there exists a number \(R_{0}>0\) such that, for \(y_{1}+\cdots+y_{n}\in(R_{0},+\infty)\), we have

$$f_{j}(y_{1},\ldots,y_{n})< (L_{j}+ \epsilon _{j}) (y_{1}+\cdots+y_{n}). $$

Let \(M_{j}=\max_{0\leqslant y_{1}+\cdots+y_{n}\leqslant R_{0}}f_{j}(y_{1},\ldots,y_{n})\) and choose \(R>\max\{R_{0}, \frac{M_{j}}{L_{j}+\epsilon _{j}}\}\). Then

$$f_{j}(y_{1},\ldots,y_{n})< (L_{j}+ \epsilon _{j})R $$

for \(y_{1}+\cdots+y_{n}\in[0,R]\). Since \(\epsilon _{j}\) is arbitrary, \(f_{j}(y_{1},\ldots,y_{n})\leqslant L_{j}R\). Note that \(0<\lambda_{j}< \frac{1}{n\alpha_{j} L_{j}}\); then

$$ \lambda^{*}_{j}=\frac{1}{n\alpha_{j}} \sup _{R>0}\frac{R}{\max_{0\leqslant y_{1}+\cdots+y_{n} \leqslant R}f_{j}(y_{1},\ldots,y_{n})}\geqslant\frac{1}{n\alpha_{j}}\frac{R}{L_{j}R}= \frac{1}{n\alpha _{j}L_{j}}>\lambda_{j}>0. $$

Namely \(0<\lambda_{j}< \lambda_{j}^{*}\). By means of Corollary 3.3, FBVP (1.1)-(1.2) has at least one positive solution.

The proof of case (2) is similar and hence omitted. □

Similarly, we have the following.

Theorem 3.7

Suppose that one of the following cases is satisfied:

  1. (1)

    (H3), (H6) hold, and \(\frac{4}{nL_{j}\beta_{j}}<\lambda_{j}<+\infty\);

  2. (2)

    (H4), (H5) hold, and \(\frac{4}{nl_{j}\beta _{j}}<\lambda_{j}<+\infty\).

Then FBVP (1.1)-(1.2) has at least one positive solution.

Theorem 3.8

Suppose that conditions (H5) and (H6) hold. If \(\lambda_{j}\) satisfies \(\frac{4}{n\beta_{j}L_{j}}<\lambda_{j}<\frac{1}{n\alpha_{j}l_{j}}\) or \(\frac{4}{n\beta_{j}l_{j}}<\lambda_{j}<\frac{1}{n\alpha_{j}L_{j}}\), then FBVP (1.1)-(1.2) has at least one positive solution.

Proof

Suppose that \(\frac{4}{n\beta _{j}L_{j}}<\lambda_{j}<\frac{1}{n\alpha_{j}l_{j}}\) holds. Choose \(\epsilon_{j}>0\) such that \(\frac{4}{n\beta_{j}(L_{j}-\epsilon_{j})}\leqslant \lambda_{j}\leqslant\frac{1}{n\alpha_{j}(l_{j}+\epsilon_{j})}\). By condition (H5), there exists \(\tau>0\) such that \(f_{j}(y_{1},\ldots ,y_{n})\leqslant(l_{j}+\epsilon_{j})(y_{1}+\cdots+y_{n}) \) for \(y_{1}+\cdots+y_{n}\in(0,\tau)\). Thus, for \((y_{1},\ldots,y_{n})\in\partial \Omega_{\tau}\),

$$\begin{aligned}& \bigl\Vert T_{j}(y_{1},\ldots,y_{n})\bigr\Vert \\& \quad = \max_{t_{j}\in[\nu_{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}\Biggl\vert \lambda_{j}\sum _{s=0}^{b+1}G_{j}(t_{j},s)f_{j} \bigl(y_{1}(s+\nu_{1}-1),\ldots,y_{n}(s+ \nu_{n}-1)\bigr)\Biggr\vert \\& \quad \leqslant \lambda_{j}(l_{j}+\epsilon_{j}) (y_{1}+\cdots+y_{n})\sum_{s=0}^{b+1} \max_{t_{j}\in[\nu_{j}-2,\nu_{j}+b]_{{\mathbb{N}}_{\nu_{j}-2}}}G_{j}(t_{j},s) \\& \quad = \lambda_{j}(l_{j}+\epsilon_{j}) (y_{1}+\cdots+y_{n})\sum_{s=0}^{b+1}G_{j}( \nu_{j}+b,s) \\& \quad \leqslant\lambda_{j}\alpha_{j} \frac{\tau}{n\lambda_{j}\alpha_{j}} \\& \quad = \frac{\tau}{n} \\& \quad = \frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n}) \bigr\Vert , \end{aligned}$$

\(j=1,\ldots,n\). That is,

$$\begin{aligned}& \bigl\Vert T(y_{1},\ldots,y_{n}) (t_{1}, \ldots,t_{n})\bigr\Vert \\& \quad =\bigl\Vert \bigl(T_{1}(y_{1},\ldots,y_{n}) (t_{1}),\ldots,T_{n}(y_{1}, \ldots,y_{n}) (t_{n})\bigr)\bigr\Vert \\& \quad =\bigl\Vert T_{1}(y_{1},\ldots,y_{n}) \bigr\Vert +\cdots+\bigl\Vert T_{n}(y_{1}, \ldots,y_{n})\bigr\Vert \\& \quad \leqslant \frac{1}{n}\bigl\Vert (y_{1}, \ldots,y_{n})\bigr\Vert +\cdots+\frac {1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \\& \quad =\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \end{aligned}$$

for \((y_{1},\ldots,y_{n})\in \partial\Omega_{\tau}\).

By condition (H6), there exists \(R_{1}>0\) such that \(f_{j}(y_{1},\ldots,y_{n})\geqslant(L_{j}-\epsilon_{j})(y_{1}+\cdots+y_{n})\) for \(y_{1}+\cdots+y_{n}\geqslant\frac{1}{4}R_{1}\).

Let \(R_{2}=\max\{2\tau, R_{1}\}\). For \((y_{1},\ldots,y_{n})\in\partial\Omega_{R_{2}}\), we get

$$\begin{aligned}& \bigl(T_{j}(y_{1},\ldots,y_{n})\bigr) \biggl( \biggl[\frac{b-\nu_{j}}{2} \biggr]+\nu _{j} \biggr) \\& \quad =\lambda_{j}\sum_{s=0}^{b+1}G_{j} \biggl( \biggl[\frac{b-\nu _{j}}{2} \biggr]+\nu_{j},s \biggr)f_{j}\bigl(y_{1}(s+\nu_{1}-1), \ldots,y_{n}(s+\nu_{n}-1)\bigr) \\& \quad \geqslant \lambda_{j}(L_{j}-\epsilon_{j}) \sum_{s=[\frac{\nu_{j}+b}{4}-\nu _{j}+1]}^{[\frac{3(\nu_{j}+b)}{4}-\nu_{j}+1]}G_{j} \biggl( \biggl[\frac {b-\nu_{j}}{2} \biggr]+\nu_{j},s \biggr) (y_{1}+ \cdots+y_{n}) \\& \quad \geqslant \lambda_{j} \frac{1}{4}(L_{j}- \epsilon_{j})R_{2}\sum_{s=[\frac{\nu_{j}+b}{4}-\nu_{j}+1]}^{[\frac{3(\nu_{j}+b)}{4}-\nu_{j}+1]} G_{j} \biggl( \biggl[\frac{b-\nu_{j}}{2} \biggr]+\nu_{j},s \biggr) \\& \quad \geqslant \frac{R_{2}}{n}. \end{aligned}$$

Then

$$\bigl\Vert T_{j}(y_{1},\ldots,y_{n})\bigr\Vert \geqslant \frac{R_{2}}{n}=\frac{1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert ,\quad j=1,\ldots,n. $$

That is,

$$\begin{aligned}& \bigl\Vert T(y_{1},\ldots,y_{n}) (t_{1}, \ldots,t_{n})\bigr\Vert \\& \quad = \bigl\Vert \bigl(T_{1}(y_{1},\ldots,y_{n}) (t_{1}),\ldots,T_{n}(y_{1}, \ldots,y_{n}) (t_{n})\bigr)\bigr\Vert \\& \quad = \bigl\Vert T_{1}(y_{1},\ldots,y_{n}) \bigr\Vert +\cdots+\bigl\Vert T_{n}(y_{1}, \ldots,y_{n})\bigr\Vert \\& \quad \geqslant \frac{1}{n}\bigl\Vert (y_{1}, \ldots,y_{n})\bigr\Vert +\cdots+\frac {1}{n}\bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \\& \quad = \bigl\Vert (y_{1},\ldots,y_{n})\bigr\Vert \end{aligned}$$

for \((y_{1}, \ldots, y_{n})\in\partial\Omega_{R_{2}}\).

By using Theorem 2.9, we obtain the conclusion.

A similar proof holds when \(\frac{4}{n\beta_{j}l_{j}}<\lambda_{j}<\frac{1}{n\alpha_{j}L_{j}}\). The proof is complete. □

We now present an example illustrating the sorts of boundary conditions that can be treated by Theorem 3.2.

Example 3.1

Consider the following boundary value problem:

$$ \left \{ \textstyle\begin{array}{l} \Delta _{{\mathrm{C}}}^{\frac{17}{8}}y_{1}(t)=-\lambda_{1}f_{1} (y_{1} (t+ \frac{9}{8} ),y_{2} (t+ \frac{17}{16} ) ), \\ \Delta _{{\mathrm{C}}}^{\frac{33}{16}}y_{2}(t)=-\lambda_{2}f_{2} (y_{1} (t+ \frac{9}{8} ),y_{2} (t+ \frac {17}{16} ) ), \\ y_{1} (- \frac{7}{8} )=\Delta y_{1} ( \frac{169}{8} )=\Delta ^{2}y_{1} (- \frac{7}{8} )=0, \\ y_{2} (- \frac{15}{16} )=\Delta y_{2} ( \frac{337}{16} )=\Delta ^{2}y_{2} (- \frac{15}{16} )=0, \end{array} \right . $$
(3.1)

where \(b=19\), \(\nu_{1}= \frac{17}{8}\), \(\nu_{2}=\frac{33}{16}\), and we take

$$f_{1}(y_{1},y_{2})=\frac{1}{64\sqrt{2}}(y_{1}+y_{2})^{\frac {1}{2}}+(y_{1}+y_{2})^{2}, \qquad f_{2}(y_{1},y_{2})=\frac{1}{25}(y_{1}+y_{2})^{\frac{1}{2}}+ \frac {1}{64}(y_{1}+y_{2})^{\frac{3}{2}}, $$

\(f_{1}, f_{2}:[0,+\infty)\times[0,+\infty)\rightarrow[0,+\infty)\), \(y_{1}\) is defined on the time scale \(\{- \frac{7}{8},\frac{1}{8}, \ldots, \frac{169}{8}\}\), and \(y_{2}\) is defined on the time scale \(\{- \frac{15}{16},\frac{1}{16}, \ldots, \frac{337}{16}\}\). The functions \(f_{1}\) and \(f_{2}\) satisfy the conditions of Theorem 3.2. A computation shows that \(\lambda_{1}^{\ast}\approx0.01456\) and \(\lambda_{2}^{\ast}\approx0.032845\); hence, for every \(\lambda_{j}\in(0,\lambda_{j}^{*})\) (\(j=1,2\)), problem (3.1) has at least two positive solutions.
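The boundary points and argument shifts in (3.1) follow directly from (1.1)-(1.2) with \(b=19\), \(\nu_{1}=\frac{17}{8}\), \(\nu_{2}=\frac{33}{16}\). The exact-arithmetic check below is a sketch added for verification, not part of the original computation of \(\lambda_{j}^{\ast}\):

```python
from fractions import Fraction

b = 19
nu1, nu2 = Fraction(17, 8), Fraction(33, 16)

# Boundary points nu_j - 3 and nu_j + b from conditions (1.2):
assert nu1 - 3 == Fraction(-7, 8)    # y1(-7/8) = 0 and Delta^2 y1(-7/8) = 0
assert nu1 + b == Fraction(169, 8)   # Delta y1(169/8) = 0
assert nu2 - 3 == Fraction(-15, 16)  # y2(-15/16) = 0 and Delta^2 y2(-15/16) = 0
assert nu2 + b == Fraction(337, 16)  # Delta y2(337/16) = 0

# Argument shifts nu_j - 1 appearing on the right-hand sides of (3.1):
assert nu1 - 1 == Fraction(9, 8)     # y1(t + 9/8)
assert nu2 - 1 == Fraction(17, 16)   # y2(t + 17/16)

print("all endpoints of (3.1) are consistent with (1.1)-(1.2)")
```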