Introduction

A fractional differential equation is a generalization of an ordinary differential equation to arbitrary non-integer orders. Fractional differential equations therefore find numerous applications in different branches of physics, chemistry and the biological sciences, such as visco-elasticity, feedback amplifiers, electrical circuits, electro-analytical chemistry, fractional multipoles and neuron modelling. The reader may refer to the books and monographs [1–3] for fractional calculus and for developments on fractional differential and fractional integro-differential equations with applications.

On the other hand, the theory of impulsive differential equations describes processes which experience a sudden change of state at certain moments. Processes with such characteristics arise naturally and often, for example in phenomena studied in physics, chemical technology, population dynamics, biotechnology and economics. For an introduction to the basic theory of impulsive differential equations, we refer the reader to [4].

The solvability of boundary value problems for higher order ordinary differential equations has been investigated by many authors. For example, in [5–16], the following \((n,n-k)\) type problems were studied:

$$\begin{aligned} \left\{ \begin{array}{ll} (-1)^{n-k}y^{(n)}=f(t,y),&\quad t\in (0,1),\\ y^{(i)}(0)=0,&\quad i\in \mathrm{I}\!\mathrm{N}_0^{k-1},\\ y^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-k-1}. \end{array}\right. \end{aligned}$$
(1.1)

In [17, 18], the following more general boundary value problems were studied:

$$\begin{aligned} \left\{ \begin{array}{ll} (-1)^{n-k}y^{(n)}=f(t,y),&\quad t\in (0,1),\\ y^{(i)}(0)=0,&\quad i\in \mathrm{I}\!\mathrm{N}_0^{k-1},\\ y^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_q^{n+q-k-1} \end{array}\right. \end{aligned}$$
(1.2)

where \(k\in \mathrm{I}\!\mathrm{N}_1^{n-1}\) and \(q\in \mathrm{I}\!\mathrm{N}_0^{k}\). In [6, 19, 20], the authors studied the existence of solutions of the following problems:

$$\begin{aligned} \left\{ \begin{array}{ll} (-1)^{n-p}y^{(n)}=f(t,y,y',\ldots ,y^{(p-1)}),&\quad t\in (0,1),\\ y^{(i)}(0)=0,&\quad i\in \mathrm{I}\!\mathrm{N}_0^{p-1},\\ y^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_p^{n-1}. \end{array}\right. \end{aligned}$$
(1.3)

It is natural to generalize the results on boundary value problems for higher order ordinary differential equations obtained in the papers mentioned above. In [21], the authors studied the existence of solutions of the following boundary value problem for a higher order fractional differential equation:

$$\begin{aligned} \left\{ \begin{array}{ll} D_{0^+}^\alpha u(t) + \lambda f(t, u(t)) = 0, &\quad 0< t <b, \lambda > 0,\alpha \in [n,n+1),\\ u^{(j)}(0) = 0,&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},\, u^{(n-1)}(b) = 0. \end{array} \right. \end{aligned}$$
(1.4)

In [22], solutions of the following problem were presented:

$$\begin{aligned} \left\{ \begin{array}{ll} D_{0^+}^\alpha u(t) +p(t)u(t) = 0, &\quad 0< t < 1,\\ u^{(j)}(0) = 0,&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-2},\quad u(1) = 0. \end{array} \right. \end{aligned}$$
(1.5)

On the other hand, higher order fractional differential equations have applications such as fractional order elastic beam equations (see [23]), the fractional order viscoelastic material model (see [24]), fractional viscoelastic models (see [25–27]) and so on.

There have been no papers concerned with the solvability of boundary value problems for higher order impulsive fractional differential equations, since it is difficult to convert an impulsive fractional differential equation into an equivalent integral equation.

To fill this gap, in this paper we discuss the following two boundary value problems for nonlinear impulsive singular fractional differential equations:

$$\begin{aligned} \left\{ \begin{array}{ll} {}^cD_{0^+}^{\alpha }u(t)=f(t,u(t)),&\quad t\in (t_s,t_{s+1}],~s\in \mathrm{I}\!\mathrm{N}_0^m,\\ u^{(i)}(0)=0,&\quad i\in \mathrm{I}\!\mathrm{N}_0^{k-1},\\ u^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_l^{n+l-k-1},\\ \Delta u^{(j)}(t_s)=I_j(t_s,u(t_s)),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},s\in \mathrm{I}\!\mathrm{N}_1^m, \end{array}\right. \end{aligned}$$
(1.6)

and

$$\begin{aligned} \left\{ \begin{array}{ll} {}^cD_{0^+}^{\alpha }u(t)=f(t,u(t)),&\quad t\in (t_s,t_{s+1}],~s\in \mathrm{I}\!\mathrm{N}_0^m,\\ u^{(j)}(0)=-u^{(j)}(1),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},\\ \Delta u^{(j)}(t_s)=I_j(t_s,u(t_s)),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},s\in \mathrm{I}\!\mathrm{N}_1^m, \end{array}\right. \end{aligned}$$
(1.7)

where

  • (a) \( n-1<\alpha <n\), n is a positive integer, and \({}^cD_{0^+}^{*}\) is the Caputo fractional derivative of order \(*\) with starting point 0,

  • (b) \(0=t_0<t_1<\cdots<t_s<\cdots<t_m<t_{m+1}=1\) with m a positive integer, and \(\mathrm{I}\!\mathrm{N}_a^b=\{a,a+1,a+2,\ldots ,b\}\) for nonnegative integers \(a<b\),

  • (c) \(k\in \mathrm{I}\!\mathrm{N}_1^{n-1}\), and \(l\in \mathrm{I}\!\mathrm{N}_0^k\),

  • (d) \(f:(0,1)\times \mathrm{I}\!\mathrm{R}\rightarrow \mathrm{I}\!\mathrm{R}\) and \(I_j:\{t_s:s\in \mathrm{I}\!\mathrm{N}_1^m\}\times \mathrm{I}\!\mathrm{R}\rightarrow \mathrm{I}\!\mathrm{R}\); f is a Carathéodory function, and the \(I_j~(j\in \mathrm{I}\!\mathrm{N}_0^{n-1})\) are discrete Carathéodory functions.

A function \(x:(0,1]\rightarrow \mathrm{I}\!\mathrm{R}\) is said to be a solution of (1.6) or (1.7) if

$$\begin{aligned} \begin{array}{l} x|_{(t_s,t_{s+1}]}\in C^0(t_{s},t_{s+1}],\ s\in \mathrm{I}\!\mathrm{N}_0^m,\;\; \lim \limits _{t\rightarrow t_s^+}x(t)\hbox { exists for all }s\in \mathrm{I}\!\mathrm{N}_0^m,\\ {}^cD_{0^+}^{\alpha }x \hbox { is measurable on each }(t_i,t_{i+1}],\quad i\in \mathrm{I}\!\mathrm{N}_0^m \end{array} \end{aligned}$$

and x satisfies all equations in (1.6) or (1.7), respectively.

In [28], a general method for converting an impulsive fractional differential equation into an equivalent integral equation was presented. In this paper, we present a new method (Lemma 2.2) for converting BVP (1.6) into an equivalent integral equation. We construct a weighted Banach space and apply the Leray–Schauder nonlinear alternative to obtain the existence of at least one solution of (1.6) and (1.7), respectively. Our results are new and naturally complement the literature on fractional differential equations.

The paper is outlined as follows. “Preliminaries” contains some preliminary results. Main results are presented in “Main results”. In “Examples”, we give an example to illustrate the efficiency of the results obtained. A conclusion section is given at the end of the paper.

Preliminaries

For the convenience of the readers, we shall state the necessary definitions from fractional calculus theory.

For \(\phi \in L^1(0,\infty )\), denote \(\Vert \phi \Vert _1=\int _0^\infty |\phi (s)|\mathrm{d}s\). Let the Gamma and beta functions \(\Gamma (\alpha )\) and \(\mathbf{B}(p,q)\) be defined by

$$\begin{aligned} \Gamma (\alpha )=\int _0^{+\infty }x^{\alpha -1}e^{-x}\mathrm{d}x,\qquad \mathbf{B}(p,q)=\int _0^1x^{p-1}(1-x)^{q-1}\mathrm{d}x. \end{aligned}$$
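These two functions satisfy the familiar identity \(\mathbf{B}(p,q)=\Gamma (p)\Gamma (q)/\Gamma (p+q)\), which is used repeatedly in the estimates below. As a quick numerical sanity check (not from the paper; the sample values are assumptions for illustration):

```python
# Check B(p, q) = Gamma(p) Gamma(q) / Gamma(p + q) by direct quadrature.
from scipy.special import gamma
from scipy.integrate import quad

def beta_fn(p, q):
    # B(p, q) = \int_0^1 x^{p-1} (1 - x)^{q-1} dx
    val, _ = quad(lambda x: x**(p - 1) * (1 - x)**(q - 1), 0, 1)
    return val

p, q = 2.5, 1.7                                    # assumed sample values
assert abs(beta_fn(p, q) - gamma(p) * gamma(q) / gamma(p + q)) < 1e-8
```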

Definition 2.1

[3] The Riemann–Liouville fractional integral of order \(\alpha >0\) of a function \(g:(0,\infty )\rightarrow \mathrm{I}\!\mathrm{R}\) is given by

$$\begin{aligned} I_{0^+}^{\alpha }g(t)=\frac{1}{\Gamma (\alpha )}\int _0^t(t-s)^{\alpha -1}g(s)\mathrm{d}s, \end{aligned}$$

provided that the right-hand side exists.
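As a sanity check (a standard identity, not from [3]), \(I_{0^+}^{\alpha }t^{\mu }=\frac{\Gamma (\mu +1)}{\Gamma (\mu +\alpha +1)}t^{\mu +\alpha }\); the sketch below verifies it by quadrature, with sample values that are assumptions for illustration:

```python
# Riemann-Liouville integral by quadrature, checked against the power-function
# rule I^alpha t^mu = Gamma(mu + 1) / Gamma(mu + alpha + 1) * t^(mu + alpha).
from scipy.special import gamma
from scipy.integrate import quad

def rl_integral(g, alpha, t):
    # (1 / Gamma(alpha)) \int_0^t (t - s)^{alpha - 1} g(s) ds
    val, _ = quad(lambda s: (t - s)**(alpha - 1) * g(s), 0, t)
    return val / gamma(alpha)

alpha, mu, t = 1.5, 2.0, 0.8                       # assumed sample values
exact = gamma(mu + 1) / gamma(mu + alpha + 1) * t**(mu + alpha)
assert abs(rl_integral(lambda s: s**mu, alpha, t) - exact) < 1e-8
```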

Definition 2.2

[3] The Caputo fractional derivative of order \(\alpha >0\) of a continuous function \(g:(0,\infty )\rightarrow \mathrm{I}\!\mathrm{R}\) is given by

$$\begin{aligned} {}^cD_{0^+}^{\alpha }g(t)=\frac{1}{\Gamma (n-\alpha )}\int _{0}^{t}\frac{g^{(n)}(s)}{(t-s)^{\alpha -n+1}}\mathrm{d}s, \end{aligned}$$

where \(n-1\le \alpha < n\), provided that the right-hand side exists.
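For example (a standard identity, with assumed sample values), for \(g(t)=t^2\) and \(0<\alpha <1\) one has \({}^cD_{0^+}^{\alpha }t^2=\frac{\Gamma (3)}{\Gamma (3-\alpha )}t^{2-\alpha }\); a minimal numerical sketch:

```python
# Caputo derivative of g(t) = t^2 for 0 < alpha < 1 (so n = 1 in Definition 2.2).
from scipy.special import gamma
from scipy.integrate import quad

def caputo_t2(alpha, t):
    # g'(s) = 2 s, n = 1:  (1 / Gamma(1 - alpha)) \int_0^t (t - s)^{-alpha} g'(s) ds
    val, _ = quad(lambda s: 2 * s * (t - s)**(-alpha), 0, t)
    return val / gamma(1 - alpha)

alpha, t = 0.6, 0.9                                # assumed sample values
exact = gamma(3) / gamma(3 - alpha) * t**(2 - alpha)
assert abs(caputo_t2(alpha, t) - exact) < 1e-6
```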

Definition 2.3

Let \(p>-1,q\in (-1,0]\). We say \(K:(0,1)\times \mathrm{I}\!\mathrm{R}\rightarrow \mathrm{I}\!\mathrm{R}\) is a Carathéodory function if it satisfies the following:

  1. (i)

    \(t\rightarrow K\left( t,~x\right) \) is integrable on \((t_s,t_{s+1}]\) for every \(x\in \mathrm{I}\!\mathrm{R}\), \(s\in \mathrm{I}\!\mathrm{N}_0^m\),

  2. (ii)

    \(x\rightarrow K\left( t,~x\right) \) is continuous on \(\mathrm{I}\!\mathrm{R}\) for all \(t\in (t_s,t_{s+1}]\) \((s\in \mathrm{I}\!\mathrm{N}_0^m),\)

  3. (iii)

    for each \(r>0\) there exists a constant \(A_{r,K}\ge 0\) satisfying

$$\begin{aligned} \begin{array}{l} \left| K\left( t,~x\right) \right| \le A_{r,K}t^p(1-t)^q \end{array} \end{aligned}$$

holds for \( t\in (t_s,t_{s+1}],~s\in \mathrm{I}\!\mathrm{N}_0^m,~|x|\le r.\)

Definition 2.4

\(G:\{t_s:s\in \mathrm{I}\!\mathrm{N}_1^m\}\times \mathrm{I}\!\mathrm{R}\rightarrow \mathrm{I}\!\mathrm{R}\) is called a discrete Carathéodory function if

  1. (i)

    \(x\rightarrow G\left( t_s,~x\right) \) is continuous on \(\mathrm{I}\!\mathrm{R}\) for each \(s\in \mathrm{I}\!\mathrm{N}_1^m\),

  2. (ii)

    for each \(r>0\) there exists \(A_{r,G,s}\ge 0\) such that

$$\begin{aligned} \begin{array}{l} \left| G\left( t_s,~x\right) \right| \le A_{r,G,s} \end{array} \end{aligned}$$

holds for \(|x|\le r,~s\in \mathrm{I}\!\mathrm{N}_1^m\).

Lemma 2.1

(Lemma 2.22 in [29]) Suppose that \(h\in L^1(0,t_1)\bigcap C^0(0,t_1)\). Then x is a solution of \({}^cD_{0^+}^\alpha x(t)=h(t)\), a.e. \(t\in (0,t_{1}]\), if and only if there exist constants \(c_{v0}\in \mathrm{I}\!\mathrm{R}~(v\in \mathrm{I}\!\mathrm{N}_0^{n-1})\) such that

$$\begin{aligned} x(t)=\sum \limits _{v=0}^{n-1}\frac{c_{v0}}{\Gamma (v+1)}t^{v}+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}h(s)\mathrm{d}s,\quad t\in (0,t_{1}]. \end{aligned}$$

Lemma 2.2

Suppose that h is integrable on each subinterval of (0, 1). Then x satisfying

$$\begin{aligned} x|_{(t_s,t_{s+1}]}\in C^0(t_{s},t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_0^m, \nonumber \\ \lim \limits _{t\rightarrow t_s^+}x(t)\;{\text {exists for all}}\;s\in \mathrm{I}\!\mathrm{N}_0^m \end{aligned}$$
(2.1)

is a solution of

$$\begin{aligned} {}^cD_{0^+}^\alpha x(t)=h(t), a.e.,\quad t\in (t_i,t_{i+1}](i\in \mathrm{I}\!\mathrm{N}_0^m) \end{aligned}$$
(2.2)

if and only if there exist constants \(c_{vj}\in \mathrm{I}\!\mathrm{R}~(v\in \mathrm{I}\!\mathrm{N}_0^{n-1},\ j\in \mathrm{I}\!\mathrm{N}_0^m)\) such that

$$\begin{aligned} x(t)&=\sum \limits _{j=0}^i\sum \limits _{v=0}^{n-1}\frac{c_{vj}}{\Gamma (v+1)}(t-t_j)^{v}\nonumber \\&\quad +\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}h(s)\mathrm{d}s,\quad t\in (t_i,t_{i+1}],i\in \mathrm{I}\!\mathrm{N}_0^m. \end{aligned}$$
(2.3)

Proof

By Lemma 2.1, we know that x satisfying (2.1) is a solution of (2.2) if and only if x satisfies (2.3) on \((t_0,t_1]\). To complete the proof, we consider two steps:

  • Step 1. We prove that x satisfies (2.1) and (2.2) if x satisfies (2.3). From (2.3), we see immediately that (2.1) holds. We need to prove that (2.2) holds on all \((t_i,t_{i+1}]~(i\in \mathrm{I}\!\mathrm{N}_0^m)\). In fact, for \(t\in (t_0,t_1]\), by Lemma 2.1 we know \({}^cD_{0^+}^\alpha x(t)=h(t)\). For \(t\in (t_i,t_{i+1}]\), we have

    $$\begin{aligned} {}^cD_{0^+}^\alpha x(t)&=\frac{1}{\Gamma (n-\alpha )}\int _0^t(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s\\&=\frac{\sum \nolimits _{j=0}^{i-1}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s+ \int _{t_i}^t(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s}{\Gamma (n-\alpha )}\\&=\frac{\sum \nolimits _{j=0}^{i-1}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}\left( \sum \nolimits _{v=0}^j\sum \nolimits _{\kappa =0}^{n-1}\frac{c_{\kappa v}}{\Gamma (\kappa +1)}(s-t_v)^{\kappa }+\int _0^s\frac{(s-u)^{\alpha -1}}{\Gamma (\alpha )}h(u)\mathrm{d}u\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )}\\&\quad +\frac{\int _{t_i}^t(t-s)^{n-\alpha -1}\left( \sum \nolimits _{v=0}^i\sum \nolimits _{\kappa =0}^{n-1}\frac{c_{\kappa v}}{\Gamma (\kappa +1)}(s-t_v)^{\kappa }+\int _0^s\frac{(s-u)^{\alpha -1}}{\Gamma (\alpha )} h(u)\mathrm{d}u\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )} \\&=\frac{\sum \nolimits _{j=0}^{i-1}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-u)^{\alpha -1}}{\Gamma (\alpha )}h(u)\mathrm{d}u\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )}\\&\quad +\frac{\int _{t_i}^t(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-u)^{\alpha -1}}{\Gamma (\alpha )} h(u)\mathrm{d}u\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )} \\&=\frac{\int _{0}^{t}(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}h(u)\mathrm{d}u\right) '\mathrm{d}s}{\Gamma (n-\alpha )} \\&=\left[ \frac{\int _{0}^{t}(t-s)^{n-\alpha }\left( \int _0^s\frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}h(u)\mathrm{d}u\right) '\mathrm{d}s}{(n-\alpha )\Gamma (n-\alpha )}\right] ' \\&=\left[ \frac{\left. (t-s)^{n-\alpha }\int _0^s\frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}h(u)\mathrm{d}u\right| _0^t+(n-\alpha ) \int _{0}^{t}(t-s)^{n-\alpha -1} \int _0^s\frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}h(u)\mathrm{d}u\mathrm{d}s}{(n-\alpha )\Gamma (n-\alpha )}\right] ' \\&=\left[ \frac{\int _{0}^{t}(t-s)^{n-\alpha -1} \int _0^s\frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}h(u)\mathrm{d}u\mathrm{d}s}{\Gamma (n-\alpha )}\right] ' \\&=\left[ \frac{\int _{0}^{t}\int _u^t(t-s)^{n-\alpha -1} \frac{(s-u)^{\alpha -n}}{\Gamma (\alpha -n+1)}\mathrm{d}s\,h(u)\mathrm{d}u}{\Gamma (n-\alpha )}\right] '\\&=\left[ \frac{\int _{0}^{t}\int _0^1(1-w)^{n-\alpha -1} \frac{w^{\alpha -n}}{\Gamma (\alpha -n+1)}\mathrm{d}w\,h(u)\mathrm{d}u}{\Gamma (n-\alpha )}\right] ' \\&=h(t),\quad t\in (t_i,t_{i+1}],\ i\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$

    It follows that x satisfies (2.2).

  • Step 2. We prove that x satisfies (2.3) if x satisfies (2.1) and (2.2). By Lemma 2.1, from (2.1) and (2.2) we know that (2.3) holds for \(i=0\). Suppose that (2.3) holds for \(0,1,2,\ldots ,i\). We will prove that (2.3) holds for \(i+1\); then, by mathematical induction, (2.3) holds for all \(i\in \mathrm{I}\!\mathrm{N}_0^m\).

In fact, we suppose that

$$\begin{aligned} x(t)=\Phi (t)+\sum \limits _{j=0}^i\sum \limits _{v=0}^{n-1}\frac{c_{vj}}{\Gamma (v+1)}(t-t_j)^{v} +\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )} h(s)\mathrm{d}s,\quad t\in (t_{i+1},t_{i+2}]. \end{aligned}$$
(2.4)

Then for \(t\in (t_{i+1},t_{i+2}]\) we have

$$\begin{aligned} h(t)&={}^cD_{0^+}^\alpha x(t) \\&=\frac{1}{\Gamma (n-\alpha )}\left[ \sum \limits _{j=0}^{i}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s+ \int _{t_{i+1}}^t(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s\right] \\&=\frac{\sum \nolimits _{j=0}^{i}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}\left( \sum \nolimits _{\kappa =0}^j\sum \nolimits _{v=0}^{n-1}\frac{c_{v \kappa }}{\Gamma (v+1)}(s-t_\kappa )^v+\int _0^s\frac{(s-v)^{\alpha -1}}{\Gamma (\alpha )}h(v)\mathrm{d}v\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )}\\&\quad +\frac{\int _{t_{i+1}}^t(t-s)^{n-\alpha -1}\left( \Phi (s)+\sum \nolimits _{\kappa =0}^{i}\sum \nolimits _{v=0}^{n-1}\frac{c_{v\kappa }}{\Gamma (v+1)}(s-t_\kappa )^{v}+\int _0^s\frac{(s-v)^{\alpha -1}}{\Gamma (\alpha )}h(v)\mathrm{d}v\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )}\\&={}^cD_{t_{i+1}^+}^\alpha \Phi (t)+\frac{\int _0^t(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-v)^{\alpha -1}}{\Gamma (\alpha )}h(v)\mathrm{d}v\right) ^{(n)}\mathrm{d}s}{\Gamma (n-\alpha )}. \end{aligned}$$

Proceeding as in Step 1, we obtain

$$\begin{aligned} \begin{array}{l} h(t)={}^cD_{0^+}^\alpha x(t) =h(t)+{}^cD_{t_{i+1}^+}^\alpha \Phi (t). \end{array} \end{aligned}$$

So \({}^cD_{t_{i+1}^+}^\alpha \Phi (t)=0\) on \((t_{i+1},t_{i+2}]\). Then there exist constants \(c_{v,i+1}\in \mathrm{I}\!\mathrm{R}~(v\in \mathrm{I}\!\mathrm{N}_0^{n-1})\) such that \(\Phi (t)=\sum \nolimits _{v=0}^{n-1}c_{v,i+1}\frac{(t-t_{i+1})^{v}}{\Gamma (v+1)}\) on \((t_{i+1},t_{i+2}]\). Substituting \(\Phi \) into (2.4), we see that (2.3) holds for \(i+1\). By mathematical induction, (2.3) holds for all \(i\in \mathrm{I}\!\mathrm{N}_0^m\). So x satisfies (2.3) if x satisfies (2.1) and (2.2). The proof is complete. \(\square \)

Define

$$\begin{aligned} X=\left\{ x:(0,1]\rightarrow \mathrm{I}\!\mathrm{R}:~ \begin{array}{l} x|_{(t_s,t_{s+1}]}\in C^0(t_s,t_{s+1}]~(s\in \mathrm{I}\!\mathrm{N}_0^m), \;\; \lim \limits _{t\rightarrow t_s^+}x(t)\hbox { exists}~(s\in \mathrm{I}\!\mathrm{N}_0^m) \end{array}\right\} . \end{aligned}$$

For \(x\in X\), define the norm by \( \begin{array}{l} \Vert x\Vert =\Vert x\Vert _X=\sup \limits _{t\in (0,1]}|x(t)|. \end{array} \)

Lemma 2.3

X is a Banach space.

Proof

The proof is standard and omitted. \(\square \)

Lemma 2.4

Let M be a subset of X. Then M is relatively compact if and only if the following conditions are satisfied:

  • (i) \(\left\{ t\rightarrow x(t):x\in M\right\} \) is uniformly bounded,

  • (ii) \(\left\{ t\rightarrow x(t):x\in M\right\} \) is equicontinuous on each interval \((t_s,t_{s+1}]~(s\in \mathrm{I}\!\mathrm{N}_0^m)\).

Proof

The proof is standard and omitted. \(\square \)

For \(x\in X\), denote \(f_x(t)=f(t,x(t))\) and \(I_{jx}(t_s)=I_j(t_s,x(t_s))\). Denote

$$\begin{aligned} \begin{array}{l} M=(m_{ij})_{(n-k)\times (n-k)}\\ \\ =\left( \begin{array}{ccccccccc} \frac{1}{\Gamma (k-l+1)}&{}\frac{1}{\Gamma (k-l)}&{}\cdots &{}1&{}0&{}0&{}0&{}\cdots &{}0\\ \frac{1}{\Gamma (k-l+2)}&{}\frac{1}{\Gamma (k-l+1)}&{}\cdots &{}\frac{1}{\Gamma (2)}&{}1&{}0&{}0&{}\cdots &{}0\\ \frac{1}{\Gamma (k-l+3)}&{}\frac{1}{\Gamma (k-l+2)}&{}\cdots &{}\frac{1}{\Gamma (3)}&{}\frac{1}{\Gamma (2)}&{}1&{}0&{}\cdots &{}0\\ \cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdots &{}\cdot \\ \frac{1}{\Gamma (n-k)}&{}\frac{1}{\Gamma (n-k-1)}&{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}1\\ \frac{1}{\Gamma (n-k+1)}&{}\frac{1}{\Gamma (n-k)}&{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\frac{1}{\Gamma (2)}\\ \cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdots &{}\cdot \\ \frac{1}{\Gamma (n-l)}&{}\frac{1}{\Gamma (n-l-1)}&{}\cdots &{}\frac{1}{\Gamma (n-k)}&{}\frac{1}{\Gamma (n-k-1)}&{}\frac{1}{\Gamma (n-k-2)}&{}\frac{1}{\Gamma (n-k-3)}&{}\cdots &{}\frac{1}{\Gamma (k-l+1)}\end{array} \right) ,\\ \\ N=(n_{ij})_{(n)\times (n)}=\left( \begin{array}{llllll} 1&{}0&{}0&{}0&{}\cdots &{}0\\ \frac{1}{2\Gamma (2)}&{}1&{}0&{}0&{}\cdots &{}0\\ \frac{1}{2\Gamma (3)}&{}\frac{1}{2\Gamma (2)}&{}1&{}0&{}\cdots &{}0\\ \cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots \\ \frac{1}{2\Gamma (n)}&{}\frac{1}{2\Gamma (n-1)}&{}\frac{1}{2\Gamma (n-2)}&{}\frac{1}{2\Gamma (n-3)}&{}\cdots &{}1 \end{array} \right) . \end{array} \end{aligned}$$

Then \(|M|\not =0\) and \(|N|=1\). One has for a determinant \(|a_{i,j}|_{(n-k)\times (n-k)}\) that

$$\begin{aligned} |a_{i,j}|_{(n-k)\times (n-k)}=\sum \limits _{i=1}^{n-k}a_{i,n-j}A_{i,n-j},j\in \mathrm{I}\!\mathrm{N}_k^{n-1},\nonumber \\ \hbox { where }A_{i,n-j}\hbox { is the algebraic cofactor of }a_{i,n-j}. \end{aligned}$$
(2.5)

Suppose that \(|a_{i,j}|\le 1\). It is easy to show that

$$\begin{aligned} \begin{array}{l} |A_{i,n-j}|\le (n-k-1)!=\Gamma (n-k),\quad i\in \mathrm{I}\!\mathrm{N}_1^{n-k},j\in \mathrm{I}\!\mathrm{N}_k^{n-1}. \end{array} \end{aligned}$$
(2.6)

Then

$$\begin{aligned} \begin{array}{l} M^{-1}=\frac{M^*}{|M|},\quad M^*=\left( \begin{array}{llllll} M_{11}&{}M_{21}&{}M_{31}&{}M_{41}&{}\cdots &{}M_{n-k~1}\\ M_{12}&{}M_{22}&{}M_{32}&{}M_{42}&{}\cdots &{}M_{n-k~2}\\ M_{13}&{}M_{23}&{}M_{33}&{}M_{43}&{}\cdots &{}M_{n-k~3}\\ \cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdots &{}\cdot \\ M_{1~n-k}&{}M_{2~n-k}&{}M_{3~n-k}&{}M_{4~n-k}&{}\cdots &{}M_{n-k~n-k}\end{array} \right) ,\\ \\ N^{-1}=N^*=\left( \begin{array}{llllll} N_{11}&{}N_{21}&{}N_{31}&{}N_{41}&{}\cdots &{}N_{n~1}\\ N_{12}&{}N_{22}&{}N_{32}&{}N_{42}&{}\cdots &{}N_{n~2}\\ N_{13}&{}N_{23}&{}N_{33}&{}N_{43}&{}\cdots &{}N_{n~3}\\ \cdot &{}\cdot &{}\cdot &{}\cdot &{}\cdots &{}\cdot \\ N_{1n}&{}N_{2n}&{}N_{3n}&{}N_{4n}&{}\cdots &{}N_{n~n}\end{array} \right) , \end{array} \end{aligned}$$

where \(M_{ij}\) and \(N_{ij}\) are the algebraic cofactors of \(m_{ij}\) and \(n_{ij}\), respectively, and \(M^*\) and \(N^*\) are the adjoint matrices of M and N, respectively. From (2.5) and (2.6), we know that \(|M_{ij}|\le \Gamma (n-k)\) and \(|N_{ij}|\le \Gamma (n)\).
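The displayed pattern of M can be written as \(m_{ij}=1/\Gamma (k-l+i-j+1)\), with zero entries where the Gamma argument is a non-positive integer. A numerical sketch (with assumed values of n, k, l, not taken from the paper) checking \(|M|\not =0\) and the cofactor bound \(|M_{ij}|\le \Gamma (n-k)\):

```python
# Build M from the pattern m_{ij} = 1 / Gamma(k - l + i - j + 1) and verify
# |M| != 0 plus the cofactor bound |M_ij| <= Gamma(n - k) = (n - k - 1)!.
import numpy as np
from scipy.special import rgamma, gamma

def build_M(n, k, l):
    # rgamma (reciprocal Gamma) vanishes at non-positive integers, which
    # produces the zero entries in the upper right corner of M
    size = n - k
    return np.array([[rgamma(k - l + i - j + 1)
                      for j in range(1, size + 1)]
                     for i in range(1, size + 1)])

n, k, l = 6, 3, 1                                  # assumed: k in N_1^{n-1}, l in N_0^k
M = build_M(n, k, l)
assert abs(np.linalg.det(M)) > 1e-12               # |M| != 0
size = n - k
for i in range(size):
    for j in range(size):
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        cofactor = (-1)**(i + j) * np.linalg.det(minor)
        assert abs(cofactor) <= gamma(size) + 1e-9  # |M_ij| <= (n-k-1)!
```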

Lemma 2.5

Suppose that \(u\in X\). Then \(x\in X\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{ll} {}^cD_{0^+}^{\alpha }x(t)=f_{u}(t),&\quad t\in (t_s,t_{s+1}],~s\in \mathrm{I}\!\mathrm{N}_0^m,\\ x^{(i)}(0)=0,&\quad i\in \mathrm{I}\!\mathrm{N}_0^{k-1},\\ x^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_l^{n+l-k-1},\\ \Delta x^{(j)}(t_s)=I_{jx}(t_s),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},s\in \mathrm{I}\!\mathrm{N}_1^m \end{array}\right. \end{aligned}$$
(2.7)

if and only if

$$\begin{aligned} x(t)&= -\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{M_{j~n-i}}{\Gamma (i+1)|M|}\sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1}\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)} I_{vu}(t_w)t^i \nonumber \\&\quad -\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{M_{j~n-i}}{\Gamma (i+1)|M|}\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-j)-1}}{\Gamma (\alpha -(n+l-k-j))}f_{u}(s)\mathrm{d}s t^i \nonumber \\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}I_{vu}(t_w)+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}f_u(s)\mathrm{d}s,t\in (t_s,t_{s+1}],s\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$
(2.8)

Proof

First, we prove that x satisfies (2.8) if \(x\in X\) and x is a solution of (2.7). Since \(u\in X\), there exists \(r>0\) such that

$$\begin{aligned} \begin{array}{l} \Vert u\Vert =\sup \limits _{t\in (0,1]}|u(t)|\le r. \end{array} \end{aligned}$$
(2.9)

Since f is a Carathéodory function, there exists a constant \(A_{r,f}\ge 0\) such that

$$\begin{aligned} \begin{array}{l} |f(t,u(t))|\le A_{r,f}t^p(1-t)^q,\quad t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_0^m. \end{array} \end{aligned}$$
(2.10)

Similarly, since each \(I_j\) is a discrete Carathéodory function, there exist constants \(A_{r,I_j,s}\ge 0~(s\in \mathrm{I}\!\mathrm{N}_1^m,j\in \mathrm{I}\!\mathrm{N}_0^{n-1})\) such that

$$\begin{aligned} \begin{array}{l} |I_j(t_s,u(t_s))|\le A_{r,I_j,s}. \end{array} \end{aligned}$$
(2.11)

Suppose that \(x\in X\) and x is a solution of (2.7). By Lemma 2.2, we know that there exist constants \(c_{vj}\in \mathrm{I}\!\mathrm{R}\) such that

$$\begin{aligned} x(t)&=\sum \limits _{w=0}^s\sum \limits _{v=0}^{n-1}\frac{c_{vw}}{\Gamma (v+1)}(t-t_w)^{v}\nonumber \\ &\quad+\int _0^t\frac{(t-s) ^{\alpha -1}}{\Gamma (\alpha )}f_{u}(s)\mathrm{d}s,\quad t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_0^m. \end{aligned}$$
(2.12)

By Definition 2.2, we have

$$\begin{aligned} x^{(j)}(t)&=\sum \limits _{w=0}^s\sum \limits _{v=j}^{n-1}\frac{c_{vw}}{\Gamma (v-j+1)}(t-t_w)^{v-j}\nonumber \\ &\quad+\int _0^t\frac{(t-s) ^{\alpha -j-1}}{\Gamma (\alpha -j)}f_{u}(s)\mathrm{d}s,\quad t\in (t_s,t_{s+1}],s\in \mathrm{I}\!\mathrm{N}_0^m,j\in \mathrm{I}\!\mathrm{N}_0^{n-1}. \end{aligned}$$
(2.13)

We have

$$\begin{aligned} \left| \int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}f_{u}(s)\mathrm{d}s\right|&\le A_{r,f}\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}s^p(1-s)^q\mathrm{d}s\\&\le A_{r,f}\int _0^t\frac{(t-s)^{\alpha +q-1}}{\Gamma (\alpha )}s^p\mathrm{d}s\\&=A_{r,f}t^{\alpha +p+q}\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )}, \end{aligned}$$
$$\begin{aligned} \left| \int _0^t\frac{(t-s)^{\alpha -j-1}}{\Gamma (\alpha -j)}f_{u}(s)\mathrm{d}s\right|&\le A_{r,f}\int _0^t\frac{(t-s)^{\alpha -j-1}}{\Gamma (\alpha -j)}s^p(1-s)^q\mathrm{d}s\\&\le A_{r,f}\int _0^t\frac{(t-s)^{\alpha +q-j-1}}{\Gamma (\alpha -j)}s^p\mathrm{d}s\\&=A_{r,f}t^{\alpha -j+p+q}\frac{\mathbf{B}(\alpha -j+q,p+1)}{\Gamma (\alpha -j)},\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1}. \end{aligned}$$
  1. (i)

    It follows from \(x^{(j)}(0)=0\) that \(c_{j0}=0(j\in \mathrm{I}\!\mathrm{N}_0^{k-1})\).

  2. (ii)

    From \(\Delta x^{(j)}(t_s)=I_{ju}(t_s)\) and (2.13), we get \(c_{js}=I_{ju}(t_s)(j\in \mathrm{I}\!\mathrm{N}_0^{n-1},s\in \mathrm{I}\!\mathrm{N}_1^m)\).

  3. (iii)

    From \(x^{(j)}(1)=0,\;j\in \mathrm{I}\!\mathrm{N}_l^{n+l-k-1}\), we get

    $$\begin{aligned} \sum \limits _{w=0}^m\sum \limits _{v=j}^{n-1}\frac{c_{vw}}{\Gamma (v-j+1)}(1-t_w)^{v-j}+\int _0^1\frac{(1-s) ^{\alpha -j-1}}{\Gamma (\alpha -j)}f_{u}(s)\mathrm{d}s=0. \end{aligned}$$

Using (i) and (ii), we get

$$\begin{aligned} &\sum \limits _{v=j}^{n-1}\frac{c_{v0}}{\Gamma (v-j+1)}+ \sum \limits _{w=1}^m\sum \limits _{v=j}^{n-1}\frac{(1-t_w)^{v-j}}{\Gamma (v-j+1)}I_{vu}(t_w) \nonumber \\ &\quad +\int _0^1\frac{(1-s) ^{\alpha -j-1}}{\Gamma (\alpha -j)}f_{u}(s)\mathrm{d}s=0,\quad j\in \mathrm{I}\!\mathrm{N}_l^{n+l-k-1}. \end{aligned}$$

Then

$$\begin{aligned} \begin{array}{l} \left( \begin{array}{lllllllll} \frac{1}{\Gamma (k-l+1)}&{}\frac{1}{\Gamma (k-l)}&{}\cdots &{}1&{}0&{}0&{}0&{}\cdots &{}0\\ \frac{1}{\Gamma (k-l+2)}&{}\frac{1}{\Gamma (k-l+1)}&{}\cdots &{}\frac{1}{\Gamma (2)}&{}1&{}0&{}0&{}\cdots &{}0\\ \frac{1}{\Gamma (k-l+3)}&{}\frac{1}{\Gamma (k-l+2)}&{}\cdots &{}\frac{1}{\Gamma (3)}&{}\frac{1}{\Gamma (2)}&{}1&{}0&{}\cdots &{}0\\ \cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots \\ \frac{1}{\Gamma (n-k)}&{}\frac{1}{\Gamma (n-k-1)}&{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}1\\ \frac{1}{\Gamma (n-k+1)}&{}\frac{1}{\Gamma (n-k)}&{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\frac{1}{\Gamma (2)}\\ \cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots &{}\cdots \\ \frac{1}{\Gamma (n-l)}&{}\frac{1}{\Gamma (n-l-1)}&{}\cdots &{}\frac{1}{\Gamma (n-k)}&{}\frac{1}{\Gamma (n-k-1)}&{}\frac{1}{\Gamma (n-k-2)}&{}\frac{1}{\Gamma (n-k-3)}&{}\cdots &{}\frac{1}{\Gamma (k-l+1)}\end{array} \right) \end{array} \end{aligned}$$
$$\begin{aligned} \begin{array}{l} \quad \times \left( \begin{array}{c}c_{n-1~0}\\ c_{n-2~0}\\ c_{n-3~0}\\ \cdots \\ \\ c_{k~0}\end{array} \right) =-\left( \begin{array}{l} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-1}^{n-1}\frac{(1-t_w)^{v-(n+l-k-1)}}{\Gamma (v-(n+l-k-1)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-1)-1}}{\Gamma (\alpha -(n+l-k-1))}f_{u}(s)\mathrm{d}s\\ \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-2}^{n-1}\frac{(1-t_w)^{v-(n+l-k-2)}}{\Gamma (v-(n+l-k-2)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-2)-1}}{\Gamma (\alpha -(n+l-k-2))}f_{u}(s)\mathrm{d}s\\ \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-3}^{n-1}\frac{(1-t_w)^{v-(n+l-k-3)}}{\Gamma (v-(n+l-k-3)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-3)-1}}{\Gamma (\alpha -(n+l-k-3))}f_{u}(s)\mathrm{d}s\\ \cdots \\ \sum \limits _{w=1}^m\sum \limits _{v=l}^{n-1}\frac{(1-t_w)^{v-l}}{\Gamma (v-l+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -l-1}}{\Gamma (\alpha -l)}f_{u}(s)\mathrm{d}s\end{array}\right) . \end{array} \end{aligned}$$

Hence,

$$\begin{aligned} \begin{array}{l} \left( \begin{array}{c}c_{n-1~0}\\ c_{n-2~0}\\ c_{n-3~0}\\ \cdots \\ \\ c_{k~0}\end{array} \right) =-M^{-1}\left( \begin{array}{l} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-1}^{n-1}\frac{(1-t_w)^{v-(n+l-k-1)}}{\Gamma (v-(n+l-k-1)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-1)-1}}{\Gamma (\alpha -(n+l-k-1))}f_{u}(s)\mathrm{d}s\\ \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-2}^{n-1}\frac{(1-t_w)^{v-(n+l-k-2)}}{\Gamma (v-(n+l-k-2)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-2)-1}}{\Gamma (\alpha -(n+l-k-2))}f_{u}(s)\mathrm{d}s\\ \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-3}^{n-1}\frac{(1-t_w)^{v-(n+l-k-3)}}{\Gamma (v-(n+l-k-3)+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-3)-1}}{\Gamma (\alpha -(n+l-k-3))}f_{u}(s)\mathrm{d}s\\ \cdots \\ \sum \limits _{w=1}^m\sum \limits _{v=l}^{n-1}\frac{(1-t_w)^{v-l}}{\Gamma (v-l+1)}I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -l-1}}{\Gamma (\alpha -l)}f_{u}(s)\mathrm{d}s\end{array}\right) . \end{array} \end{aligned}$$

It follows for \(i\in \mathrm{I}\!\mathrm{N}_k^{n-1}\) that

$$\begin{aligned} c_{i~0}&=-\sum \limits _{j=1}^{n-k}\frac{M_{j~n-i}}{|M|} \nonumber \\&\quad \times \left( \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1}\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)} I_{vu}(t_w) +\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-j)-1}}{\Gamma (\alpha -(n+l-k-j))}f_{u}(s)\mathrm{d}s\right) . \end{aligned}$$
(2.14)

From (i), (ii) and (iii), we have (2.14) and

$$\begin{aligned} \begin{array}{l} c_{j0}=0,j\in \mathrm{I}\!\mathrm{N}_0^{k-1},\;\;c_{js}=I_{ju}(t_s)(s\in \mathrm{I}\!\mathrm{N}_1^m,\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1}). \end{array} \end{aligned}$$
(2.15)

Substituting (2.14) and (2.15) in (2.12), we get (2.8).

Second, we prove that \(x\in X\) and x satisfies (2.7) if x satisfies (2.8). It is easy to see that \(x\in X\) and

$$\begin{aligned} \lim \limits _{t\rightarrow 0}x^{(j)}(t)=0,\quad j\in \mathrm{I}\!\mathrm{N}_0^{k-1},\quad x^{(j)}(1)=0,\quad j\in \mathrm{I}\!\mathrm{N}_l^{n+l-k-1},\quad \Delta x^{(j)}(t_s)=I_{ju}(t_s),\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},\quad s\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$

Now, we prove that x satisfies \({}^cD_{0^+}^\alpha x(t)=f_u(t)\) if (2.8) holds. Recalling (2.14) and (2.15), it suffices to prove that \({}^cD_{0^+}^\alpha x(t)=f_u(t)\) on (0, 1) if x satisfies (2.8).

In fact, for \(t\in (t_i,t_{i+1}]~(i\in \mathrm{I}\!\mathrm{N}_0^m)\), by Definition 2.2 we have

$$\begin{aligned} {}^cD_{0^+}^\alpha x(t)&=\frac{1}{\Gamma (n-\alpha )}\int _0^t(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s\\&=\frac{1}{\Gamma (n-\alpha )}\left[ \sum \limits _{j=0}^{i-1}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1}x^{(n)} (s)\mathrm{d}s+\int _{t_i}^t(t-s)^{n-\alpha -1}x^{(n)}(s)\mathrm{d}s\right] \\&=\frac{\sum \nolimits _{j=0}^{i-1}\int _{t_j}^{t_{j+1}}(t-s)^{n-\alpha -1} \left( \sum \nolimits _{w=0}^j\sum \nolimits _{\theta =0}^{n-1}\frac{c_{\theta w}}{\Gamma (\theta +1)}(s-t_w)^{\theta }+\int _0^s\frac{(s-\sigma ) ^{\alpha -1}}{\Gamma (\alpha )}f_{u}(\sigma )\mathrm{d}\sigma \right) ^{(n)}\mathrm{d}s }{\Gamma (n-\alpha )}\\&\quad +\frac{\int _{t_i}^t(t-s)^{n-\alpha -1}\left( \sum \nolimits _{w=0}^i\sum \nolimits _{\theta =0}^{n-1}\frac{c_{\theta w}}{\Gamma (\theta +1)}(s-t_w)^{\theta }+\int _0^s\frac{(s-\sigma ) ^{\alpha -1}}{\Gamma (\alpha )}f_{u}(\sigma )\mathrm{d}\sigma \right) ^{(n)}\mathrm{d}s }{\Gamma (n-\alpha )}\\&=\frac{\int _{0}^t(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-\sigma ) ^{\alpha -1}}{\Gamma (\alpha )}f_{u}(\sigma )\mathrm{d}\sigma \right) ^{(n)}\mathrm{d}s }{\Gamma (n-\alpha )}\\&=\frac{\int _{0}^t(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-\sigma ) ^{\alpha -n}}{\Gamma (\alpha -n+1)}f_{u}(\sigma )\mathrm{d}\sigma \right) '\mathrm{d}s }{\Gamma (n-\alpha )}\\&=\left[ \frac{\int _{0}^t(t-s)^{n-\alpha }\left( \int _0^s\frac{(s-\sigma ) ^{\alpha -n}}{\Gamma (\alpha -n+1)}f_{u}(\sigma )\mathrm{d}\sigma \right) '\mathrm{d}s }{(n-\alpha )\Gamma (n-\alpha )}\right] '\\&=\left[ \frac{\left. (t-s)^{n-\alpha }\left( \int _0^s\frac{(s-\sigma ) ^{\alpha -n}}{\Gamma (\alpha -n+1)}f_{u}(\sigma )\mathrm{d}\sigma \right) \right| _0^t+(n-\alpha )\int _{0}^t(t-s)^{n-\alpha -1}\left( \int _0^s\frac{(s-\sigma ) ^{\alpha -n}}{\Gamma (\alpha -n+1)}f_{u}(\sigma )\mathrm{d}\sigma \right) \mathrm{d}s }{(n-\alpha )\Gamma (n-\alpha )}\right] '\\&=\left[ \frac{\int _{0}^t\int _\sigma ^t(t-s)^{n-\alpha -1}\frac{(s-\sigma ) ^{\alpha -n}}{\Gamma (\alpha -n+1)}\mathrm{d}s\,f_{u}(\sigma )\mathrm{d}\sigma }{\Gamma (n-\alpha )}\right] ' \\&=\left[ \frac{\int _{0}^t\int _0^1(1-w)^{n-\alpha -1}\frac{w ^{\alpha -n}}{\Gamma (\alpha -n+1)}\mathrm{d}w\,f_{u}(\sigma )\mathrm{d}\sigma }{\Gamma (n-\alpha )}\right] '\\&=\,f_{u}(t),\quad t\in (t_i,t_{i+1}],\ i\in \mathrm{I}\!\mathrm{N}_0^m. \end{aligned}$$

From the above discussion, we know that \(x\in X\) and x satisfies (2.7) if (2.8) holds. The proof is complete. \(\square \)

Remark 2.1

It is easy to see from Lemma 2.2 that \(x\in X\) is a solution of (2.2) if and only if there exist constants \(d_{vs}\in \mathrm{I}\!\mathrm{R}\) such that

$$\begin{aligned} x(t)=\sum \limits _{v=0}^{n-1}d_{vs}t^v+\int _0^t\frac{(t-w)^{\alpha -1}}{\Gamma (\alpha )}f_{uv}(w)\mathrm{d}w,\quad t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_0. \end{aligned}$$

In [28], the authors proved this result, but our proof of Lemma 2.6 is different from the one given there.

Now, we define the operator \(T_1\) on X by

$$\begin{aligned} (T_1x)(t)&=-\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{M_{j~n-i}}{\Gamma (i+1)|M|}\sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)} I_{vx}(t_w)t^i \nonumber \\&\quad -\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{M_{j~n-i}}{\Gamma (i+1)|M|}\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-j)-1}}{\Gamma (\alpha -(n+l-k-j))}f_{x}(s)\mathrm{d}s t^i\nonumber \\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}I_{vx}(t_w)+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}f_x(s)\mathrm{d}s,\quad t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$
(2.16)

Remark 2.2

By Lemma 2.5, we know that \(T_1:X\rightarrow X\) is well defined and \(x\in X\) is a solution of system (1.6) if and only if \(x\in X\) is a fixed point of the operator \(T_1\).

Lemma 2.6

The operator \(T_1:X\rightarrow X\) is completely continuous.

Proof

The proof is standard and is omitted; one may see [21]. \(\square \)

Lemma 2.7

Suppose that \(u\in X\). Then \(x\in X\) is a solution of

$$\begin{aligned} \left\{ \begin{array}{ll} {}^cD_{0^+}^{\alpha }x(t)=f_{u}(t),&\quad t\in (t_s,t_{s+1}],~s\in \mathrm{I}\!\mathrm{N}_0^m,\\ x^{(j)}(0)=-x^{(j)}(1),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},\\ \Delta x^{(j)}(t_s)=I_{ju}(t_s),&\quad j\in \mathrm{I}\!\mathrm{N}_0^{n-1},s\in \mathrm{I}\!\mathrm{N}_1^m \end{array}\right. \end{aligned}$$
(2.17)

if and only if

$$\begin{aligned} x(t)&= -\sum \limits _{i=0}^{n-1}\sum \limits _{j=0}^{n-1}\frac{N_{j~n-i}}{\Gamma (i+1) }\sum \limits _{w=1}^m\sum \limits _{v=n-j}^{n-1}\frac{(1-t_w)^{v-(n-j)}}{\Gamma (v-(n -j)+1)} I_{vu}(t_w)t^i \nonumber \\&\quad -\sum \limits _{i=0}^{n-1}\sum \limits _{j=0}^{n-1}\frac{N_{j~n-i}}{\Gamma (i+1)}\int _0^1\frac{(1-s) ^{\alpha -(n-j)-1}}{\Gamma (\alpha -(n-j))}f_{u}(s)\mathrm{d}s t^i\nonumber \\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}I_{vu}(t_w)+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}f_u(s)\mathrm{d}s,\quad t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$
(2.18)

Proof

The proof is similar to that of Lemma 2.5 and is omitted. \(\square \)

Now, we define the operator \(T_2\) on X by

$$\begin{aligned} (T_2x)(t)&=-\sum \limits _{i=0}^{n-1}\sum \limits _{j=0}^{n-1}\frac{N_{j~n-i}}{\Gamma (i+1) }\sum \limits _{w=1}^m\sum \limits _{v=n-j}^{n-1}\frac{(1-t_w)^{v-(n-j)}}{\Gamma (v-(n -j)+1)} I_{vx}(t_w)t^i\nonumber \\&\quad -\sum \limits _{i=0}^{n-1}\sum \limits _{j=0}^{n-1}\frac{N_{j~n-i}}{\Gamma (i+1)}\int _0^1\frac{(1-s) ^{\alpha -(n-j)-1}}{\Gamma (\alpha -(n-j))}f_{x}(s)\mathrm{d}s t^i\nonumber \\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}I_{vx}(t_w)+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}f_x(s)\mathrm{d}s,t\in (t_s,t_{s+1}],\quad s\in \mathrm{I}\!\mathrm{N}_1^m. \end{aligned}$$
(2.19)


Remark 2.3

By Lemma 2.7, we know that \(T_2:X\rightarrow X\) is well defined and that \(x\in X\) is a solution of system (1.7) if and only if \(x\in X\) is a fixed point of the operator \(T_2\).

Lemma 2.8

The operator \(T_2:X\rightarrow X\) is completely continuous.

Proof

The proof is standard and is omitted; one may see [21]. \(\square \)

Main results

In this section, we are ready to present the main theorems. We need the following assumptions:

  • (H1) there exist nonnegative numbers \(a_i,~A_i~(i\in \mathrm{I}\!\mathrm{N}_0^\omega )\) and \(\sigma _i~(i\in \mathrm{I}\!\mathrm{N}_1^\omega )\) such that

    $$\begin{aligned} \left| f\left( t,x\right) \right|\le & {} [a_0+\sum \limits _{i=1}^\omega a_i|x|^{\sigma _i}]t^p(1-t)^q,\quad t\in (0,1),x\in \mathrm{I}\!\mathrm{R},\\ \left| I_j\left( t_s,x\right) \right|\le & {} A_0 +\sum \limits _{i=1}^\omega A_i|x|^{\sigma _i},\quad s\in \mathrm{I}\!\mathrm{N}_1^m,x\in \mathrm{I}\!\mathrm{R}. \end{aligned}$$

Denote

$$\begin{aligned} \sigma&=\max \{\sigma _i:i\in \mathrm{I}\!\mathrm{N}_1^\omega \},\\ M_0&=A_0\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}+A_0\sum \limits _{v=0}^{n-1}\frac{m}{\Gamma (v+1)} \nonumber \\&\quad +a_0\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\frac{\mathbf{B}(\alpha -(n+l-k-j)+q,p+1)}{\Gamma (\alpha -(n+l-k-j))} +a_0\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )},\\ M_u&=\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}A_u+\sum \limits _{v=0}^{n-1}\frac{m}{\Gamma (v+1)}A_u\nonumber \\&\quad +\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\frac{\mathbf{B}(\alpha -(n+l-k-j)+q,p+1)}{\Gamma (\alpha -(n+l-k-j))}a_u+\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )}a_u,\quad u\in \mathrm{I}\!\mathrm{N}_1^\omega . \end{aligned}$$

Theorem 3.1

Suppose that (a)–(d) (defined in “Introduction”) and (H1) hold. Then system (1.6) has at least one solution in \(X\) if

  • (i) \(\sigma <1\) or

  • (ii) \(\sigma =1\) with

    $$\begin{aligned} \sum \limits _{u=1}^\omega M_u<1 \end{aligned}$$
    (3.1)

    or

  • (iii) \(\sigma >1\) and there exists \(u_0\in \mathrm{I}\!\mathrm{N}_1^\omega \) with

    $$\begin{aligned} \sigma _{u_0}>1,\;M_0+\sum \limits _{u=1}^\omega M_u\left( \frac{M_0}{M_{u_0}(\sigma _{u_0}-1)}\right) ^{\sigma _u/\sigma _{u_0}}\le \left( \frac{M_0}{M_{u_0}(\sigma _{u_0}-1)}\right) ^{1/\sigma _{u_0}}. \end{aligned}$$
    (3.2)

Proof

We shall apply Schauder’s fixed point theorem. By Lemma 2.6, \(T_1\) is completely continuous, and by Remark 2.2, \(x\) is a solution of system (1.6) if and only if \(x\) is a fixed point of \(T_1\).

Let \(\Omega _r=\{x\in X:||x||\le r\}\) and take \(x\in \Omega _r\). Then \(||x||\le r\), i.e., \(|x(t)|\le r\) for all \(t\in (0,1]\). So (H1) implies

    $$\begin{aligned} \left| f\left( t,x(t)\right) \right|\le & {} [a_0+\sum \limits _{i=1}^\omega a_i|x(t)|^{\sigma _i}]t^p(1-t)^q \le [a_0+\sum \limits _{i=1}^\omega a_ir^{\sigma _i}]t^p(1-t)^q,\\ \left| I_j\left( t_s,x(t_s)\right) \right|\le & {} A_0 +\sum \limits _{i=1}^\omega A_i|x(t_s)|^{\sigma _i} \le A_0 +\sum \limits _{i=1}^\omega A_ir^{\sigma _i}. \end{aligned}$$

    We know \(|M_{ij}|\le \Gamma (n-k)\). By (2.16), we have

    $$\begin{aligned} |(T_1x)(t)|&\le \sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{|M_{j~n-i}|}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}| I_{vx}(t_w)|t^i\\&\quad +\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{|M_{j~n-i}|}{\Gamma (i+1)||M||}\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-j)-1}}{\Gamma (\alpha -(n+l-k-j))}|f_{x}(s)|\mathrm{d}s t^i\\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}|I_{vx}(t_w)|+\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}|f_x(s)|\mathrm{d}s \\&\le \sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}\left[ A_0 +\sum \limits _{i=1}^\omega A_ir^{\sigma _i}\right] \\&\quad +\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\int _0^1\frac{(1-s) ^{\alpha -(n+l-k-j)-1}}{\Gamma (\alpha -(n+l-k-j))}[a_0+\sum \limits _{i=1}^\omega a_ir^{\sigma _i}]s^p(1-s)^q\mathrm{d}s \\&\quad +\sum \limits _{w=1}^s\sum \limits _{v=0}^{n-1}\frac{(t-t_w)^v}{\Gamma (v+1)}\left[ A_0 +\sum \limits _{i=1}^\omega A_ir^{\sigma _i}\right] +\int _0^t\frac{(t-s)^{\alpha -1}}{\Gamma (\alpha )}[a_0+\sum \limits _{i=1}^\omega a_ir^{\sigma _i}]s^p(1-s)^q\mathrm{d}s\\&\le \sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}\left[ A_0 +\sum \limits _{i=1}^\omega A_ir^{\sigma _i}\right] \\&\quad +\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\frac{\mathbf{B}(\alpha -(n+l-k-j)+q,p+1)}{\Gamma (\alpha -(n+l-k-j))}[a_0+\sum \limits _{i=1}^\omega a_ir^{\sigma _i}] \\&\quad +\sum \limits _{v=0}^{n-1}\frac{m}{\Gamma (v+1)}\left[ A_0 +\sum \limits _{i=1}^\omega A_ir^{\sigma _i}\right] 
+\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )}[a_0+\sum \limits _{i=1}^\omega a_ir^{\sigma _i}]\\&=A_0\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)} +A_0\sum \limits _{v=0}^{n-1}\frac{m}{\Gamma (v+1)} \\&\quad +a_0\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\frac{\mathbf{B}(\alpha -(n+l-k-j)+q,p+1)}{\Gamma (\alpha -(n+l-k-j))} +a_0\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )}\\&\quad +\sum \limits _{u=1}^\omega \left( \sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||} \sum \limits _{w=1}^m\sum \limits _{v=n+l-k-j}^{n-1 }\frac{(1-t_w)^{v-(n+l-k-j)}}{\Gamma (v-(n+l-k -j)+1)}A_u+\sum \limits _{v=0}^{n-1}\frac{m}{\Gamma (v+1)}A_u\right. \\&\quad \left. +\sum \limits _{i=k}^{n-1}\sum \limits _{j=1}^{n-k}\frac{\Gamma (n-k)}{\Gamma (i+1)||M||}\frac{\mathbf{B}(\alpha -(n+l-k-j)+q,p+1)}{\Gamma (\alpha -(n+l-k-j))}a_u+\frac{\mathbf{B}(\alpha +q,p+1)}{\Gamma (\alpha )}a_u\right) r^{\sigma _u}. \end{aligned}$$

    It follows that

    $$\begin{aligned} ||T_1x||\le M_0+\sum \limits _{u=1}^\omega M_ur^{\sigma _u}. \end{aligned}$$
    (3.3)

    To apply Schauder’s fixed point theorem, in view of (3.3), we need to choose \(r>0\) such that

    $$\begin{aligned} M_0+\sum \limits _{u=1}^\omega M_ur^{\sigma _u}\le r. \end{aligned}$$
    (3.4)

    If such an \(r\) exists, then \(T_1\Omega _{r}\subseteq \Omega _r\), so \(T_1\) has a fixed point in \(\Omega _r\) and hence BVP (1.6) has a solution. We consider the following three cases:

    • Case 1 \(\sigma <1\). Since \(\lim \limits _{r\rightarrow \infty }\frac{M_0+\sum \nolimits _{u=1}^\omega M_ur^{\sigma _u}}{r}=0\), we can choose \(r>0\) sufficiently large such that (3.4) holds. Then \(T_1\Omega _{r}\subseteq \Omega _r\), so \(T_1\) has a fixed point in \(\Omega _r\) and BVP (1.6) has a solution.

    • Case 2 \(\sigma =1\). Since \(\lim \limits _{r\rightarrow \infty }\frac{M_0+\sum \nolimits _{u=1}^\omega M_ur^{\sigma _u}}{r}\le \sum \nolimits _{u=1}^\omega M_u<1\), we can choose \(r>0\) sufficiently large such that (3.4) holds. Then \(T_1\Omega _{r}\subseteq \Omega _r\), so \(T_1\) has a fixed point in \(\Omega _r\) and BVP (1.6) has a solution.

    • Case 3 \(\sigma >1\). Choose \(r=\left( \frac{M_0}{M_{u_0}(\sigma _{u_0}-1)}\right) ^{1/\sigma _{u_0}}\). Then we have by the inequality in (iii) that

      $$\begin{aligned} ||T_1x||\le M_0+\sum \limits _{u=1}^\omega M_ur^{\sigma _u}\le r. \end{aligned}$$

    Hence (3.4) holds, so \(T_1\Omega _{r}\subseteq \Omega _r\), \(T_1\) has a fixed point in \(\Omega _r\), and BVP (1.6) has a solution. The proof of Theorem 3.1 is completed.

  • (H2) there exist constants \(M_f,M_I\ge 0\) such that \(\left| f\left( t,~x\right) \right| \le M_f\) and \(\left| I_j\left( t_s,~x\right) \right| \le M_{I}\) hold for all \(t\in (0,1),s\in \mathrm{I}\!\mathrm{N}_1^m,j\in \mathrm{I}\!\mathrm{N}_0^{n-1},x\in \mathrm{I}\!\mathrm{R}\).

\(\square \)
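The three cases in the proof above all amount to exhibiting a radius satisfying (3.4). The following sketch (the function name and search grid are our own choices, not from the paper) scans a logarithmic grid of radii for one that works:

```python
def admissible_radius(M0, M, sigma, r_grid=None):
    """Return some r > 0 with M0 + sum_u M[u] * r**sigma[u] <= r
    (inequality (3.4)), or None if no grid point satisfies it."""
    if r_grid is None:
        # logarithmic grid from 1e-6 to 1e6
        r_grid = [10 ** (e / 100) for e in range(-600, 601)]
    for r in r_grid:
        if M0 + sum(Mu * r ** su for Mu, su in zip(M, sigma)) <= r:
            return r
    return None

# Case 1 (sigma < 1): solvable by taking r large
assert admissible_radius(1.0, [1.0], [0.5]) is not None
# Case 2 (sigma = 1, M_1 < 1): solvable by taking r large
assert admissible_radius(1.0, [0.5], [1.0]) is not None
# Case 3 (sigma > 1): may fail when M_0, M_1 are too large
assert admissible_radius(1.0, [1.0], [2.0]) is None
```

When a radius is returned, the proof's invariance argument \(T_1\Omega _r\subseteq \Omega _r\) applies with that \(r\).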

Theorem 3.2

Suppose that (a)–(d) and (H2) hold. Then BVP (1.6) has at least one solution.

Proof

Choose \(p=q=0\), \(a_0=M_f\), \(A_0=M_I\), and \(a_i=A_i=\sigma _i=0\) for \(i\in \mathrm{I}\!\mathrm{N}_1^\omega \). Then (H2) implies that (H1) holds, and the conclusion follows from Theorem 3.1 (i). \(\square \)

Remark 3.1

BVP (1.7) may be called an anti-periodic boundary value problem. By a similar method, one can establish existence results for BVP (1.7); we omit the details and leave them to the reader.

Examples

To illustrate the usefulness of our main result, we present an example to which Theorem 3.1 readily applies.

Example 4.1

Consider the following impulsive boundary value problem

$$\begin{aligned} \left\{ \begin{array}{ll}\displaystyle {}^cD_{0^+}^{\frac{18}{5}}u(t)=t^{-\frac{1}{5}}(1-t)^{-\frac{1}{5}}\left[ a_0+a_1[u(t)]^\sigma \right] , &\quad t\in (t_s,t_{s+1}],s\in \mathrm{I}\!\mathrm{N}_0^m,\\ u^{(i)}(0)=0,\quad i\in \mathrm{I}\!\mathrm{N}_0^1,\;\;\displaystyle u^{(j)}(1)=0,&\quad j\in \mathrm{I}\!\mathrm{N}_0^1,\\ \displaystyle \Delta u^{(j)}(1/2)=A_0+A_1[u(1/2)]^\sigma ,&\quad j\in \mathrm{I}\!\mathrm{N}_0^{3}, \end{array}\right. \end{aligned}$$
(4.1)

where \(a_i,~A_i(i=0,1)\) are nonnegative constants.

Corresponding to system (1.6), we have \(\alpha =\frac{18}{5}\) with \(n=4\), so the equation in BVP (4.1) is a fractional elastic beam equation. We also have \(p=q=-\frac{1}{5}\), \(k=2\), \(l=0\), \(0=t_0<t_1=\frac{1}{2}<t_{2}=1\) with \(m=1\), and

$$\begin{aligned} \begin{array}{l}f(t,x)=t^{-\frac{1}{5}}(1-t)^{-\frac{1}{5}}\left[ a_0+a_1x^\sigma \right] ,\;\; I_j(s,x)=A_0+A_1x^\sigma . \end{array} \end{aligned}$$

It is easy to verify that (a)–(d) and (H1) hold with \(\omega =1\). By direct computation, we have

$$\begin{aligned} \begin{array}{l} M=(m_{ij})_{2\times 2}=\left( \begin{array}{ll} \frac{1}{\Gamma (3)}&{}1\\ \frac{1}{\Gamma (4)}&{}\frac{1}{\Gamma (3)} \end{array} \right) ,\;\;|M|=-2. \end{array} \end{aligned}$$

Now, with \(n=4\), \(k=2\), \(l=0\), \(|M|=-2\), \(m=1\), \(t_1=1/2\), \(\omega =1\), \(\alpha =\frac{18}{5}\) and \(p=q=-\frac{1}{5}\), we have

$$\begin{aligned} M_0&=A_0\sum \limits _{i=2}^{3}\sum \limits _{j=1}^{2}\frac{1}{2\Gamma (i+1)} \sum \limits _{v=2-j}^{3 }\frac{(1/2)^{v+j-2}}{\Gamma (v+j-1)} +\frac{13A_0}{6}\\&\quad +a_0\sum \limits _{i=2}^{3}\sum \limits _{j=1}^{2}\frac{1}{2\Gamma (i+1)}\frac{\mathbf{B}(7/5+j,4/5)}{\Gamma (8/5+j)} +a_0\frac{\mathbf{B}(17/5,4/5)}{\Gamma (18/5)},\\ M_1&=\sum \limits _{i=2}^{3}\sum \limits _{j=1}^{2}\frac{1}{2\Gamma (i+1)} \sum \limits _{v=2-j}^{3 }\frac{(1/2)^{v+j-2}}{\Gamma (v+j-1)}A_1+\frac{13A_1}{6}\\&\quad +\sum \limits _{i=2}^{3}\sum \limits _{j=1}^{2}\frac{1}{2\Gamma (i+1)}\frac{\mathbf{B}(7/5+j,4/5)}{\Gamma (8/5+j)}a_1+\frac{\mathbf{B}(17/5,4/5)}{\Gamma (18/5)}a_1 . \end{aligned}$$
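For concreteness, the constants \(M_0\) and \(M_1\) of Example 4.1 can be evaluated numerically. In the sketch below (the helper names are ours), the Beta function is computed via \(\mathbf{B}(x,y)=\Gamma (x)\Gamma (y)/\Gamma (x+y)\):

```python
from math import gamma

def beta(x, y):
    # Euler Beta function: B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
    return gamma(x) * gamma(y) / gamma(x + y)

# Coefficient sums appearing in M_0 and M_1 above
# (n = 4, k = 2, l = 0, m = 1, t_1 = 1/2, alpha = 18/5, p = q = -1/5):
S1 = sum(1 / (2 * gamma(i + 1))
         * sum(0.5 ** (v + j - 2) / gamma(v + j - 1) for v in range(2 - j, 4))
         for i in range(2, 4) for j in range(1, 3))
S2 = sum(1 / (2 * gamma(i + 1)) * beta(7 / 5 + j, 4 / 5) / gamma(8 / 5 + j)
         for i in range(2, 4) for j in range(1, 3))

def M_const(a, A):
    # M_0 = M_const(a_0, A_0) and M_1 = M_const(a_1, A_1): both constants
    # share the same coefficient structure.
    return A * (S1 + 13 / 6) + a * (S2 + beta(17 / 5, 4 / 5) / gamma(18 / 5))
```

Condition (ii) below then amounts to checking `M_const(a1, A1) < 1` for the given data.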

By Theorem 3.1, BVP (4.1) has at least one solution if one of the following items holds:

  • (i) \(\sigma <1\).

  • (ii) \(\sigma =1\) with \(M_1<1\).

  • (iii) \(\sigma >1\) with \(M_0^{1-1/\sigma }M_1^{1/\sigma }\frac{\sigma }{\sigma -1}\le \frac{1}{(\sigma -1)^{1/\sigma }}.\)
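For the reader's convenience, we sketch the routine reduction of condition (3.2) to item (iii) in the case \(\omega =1\): with \(u_0=1\) and \(r=\left( \frac{M_0}{M_{1}(\sigma -1)}\right) ^{1/\sigma }\) we have \(M_1r^{\sigma }=\frac{M_0}{\sigma -1}\), so (3.2) reads

$$\begin{aligned} M_0+\frac{M_0}{\sigma -1}=\frac{M_0\sigma }{\sigma -1}\le \left( \frac{M_0}{M_{1}(\sigma -1)}\right) ^{1/\sigma }, \end{aligned}$$

and dividing both sides by \(\left( \frac{M_0}{M_{1}(\sigma -1)}\right) ^{1/\sigma }\) gives the inequality in (iii).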

Conclusion

In this paper, we discuss the solvability of two classes of boundary value problems for higher order fractional differential equations involving the Caputo fractional derivative. Using fixed point theorems in Banach spaces, we establish sufficient conditions for the existence of solutions of these kinds of problems.

In recent years, several kinds of fractional derivatives have been proposed, such as the Riemann–Liouville fractional derivative, the Hadamard fractional derivative, etc.; see [29, 30]. Hence, it is interesting to study the existence and uniqueness of solutions of boundary value problems for other kinds of fractional differential equations. It is also interesting to identify the similarities and the differences between these kinds of fractional differential equations.

Fixed point theorems in Banach spaces [31] are the main tools for investigating the solvability of boundary value problems for fractional differential equations. It would be worthwhile to develop other methods for finding solutions to these kinds of problems.