1 Introduction

Fractional order differential equations play an important role in describing many phenomena and processes in various fields of science and engineering such as mechanics, chemistry, control systems, dynamical processes, viscoelasticity, and so forth. For a detailed account of applications and recent results on initial and boundary value problems for fractional differential equations, we refer the reader to a series of books and papers [1–17] and the references cited therein.

The definition of the fractional order derivative used in most of this work is either the Caputo or the Riemann-Liouville fractional derivative, both of which involve an integral expression and the gamma function. Recently, Khalil et al. [18] introduced a new well-behaved definition of a local fractional derivative, called the conformable fractional derivative, which depends only on the basic limit definition of the derivative. This theory was further developed by Abdeljawad [19]. For recent results on conformable fractional derivatives we refer the reader to [20–24].

Impulsive differential equations serve as basic models for studying the dynamics of processes that are subject to sudden changes in their states. Recent development in this field has been motivated by many applied problems arising in control theory, population dynamics, and medicine. For some recent work on the theory of impulsive differential equations, we refer the reader to [25–27].

Since fractional differential equations and impulsive differential equations serve as effective tools for improving the mathematical modeling of several concepts arising in engineering and various areas of science, many researchers have paid considerable attention to impulsive fractional differential equations in the recent literature [28–30].

In this paper, we consider the following periodic boundary value problem for impulsive conformable fractional integro-differential equations:

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}x(t)=f(t,x(t),(Fx)(t),(Sx)(t)),\quad t\in J:=[0,T], t\neq t_{k}, \\ \Delta x(t_{k})=I_{k}(x(t_{k})), \quad k=1,2,\ldots,m, \\ x(0)=x(T), \end{cases} $$
(1.1)

where \({{}_{a}}D^{\alpha}\) denotes the conformable fractional derivative of order \(0<\alpha\leq1\) starting from \(a\in\{t_{0},\ldots,t_{m}\}\), \(t_{0}=0< t_{1}<\cdots<t_{m}<t_{m+1}=T\), \(f\in C(J\times\mathbb {R}^{3},\mathbb{R})\),

$$ (Fx) (t)= \int_{0}^{t}l(t,s)x(s)\,ds, \qquad (Sx) (t)= \int_{0}^{T}h(t,s)x(s)\,ds, $$

\(l\in C(D,\mathbb{R}^{+})\), \(D=\{(t,s)\in J^{2}:t\geq s\}\), \(h\in C(J^{2},\mathbb{R}^{+})\), \(I_{k}\in C(\mathbb{R},\mathbb{R})\), \(\Delta x(t_{k})=x(t_{k}^{+})-x(t_{k}^{-})\).

The monotone iterative technique coupled with the method of lower and upper solutions provides an effective way to construct two monotone sequences which approximate the extremal solutions, lying between the lower and upper solutions, of nonlinear differential and impulsive differential equations (of fractional or integer order); see, for instance, [31–35]. To the best of the authors’ knowledge, this is the first paper to study impulsive fractional differential equations via the conformable fractional calculus developed in [19]. By means of a new maximum principle and new definitions of lower and upper solutions, the monotone iterative technique is used in our investigation of the problem (1.1).

The rest of the paper is organized as follows. In Section 2 we recall some definitions and results from conformable fractional calculus and prove some basic results used in the sequel. In Section 3 we define lower and upper solutions, construct the Green functions, and prove a comparison result. The existence results are contained in Section 4, while an example illustrating the main result is presented in Section 5.

2 Conformable fractional calculus

In this section, we recall some definitions, notations and results which will be used in our main results.

Definition 2.1

[19]

The conformable fractional derivative starting from a point a of a function \(f:[a,\infty)\to\mathbb{R}\) of order \(0<\alpha\leq1\) is defined by

$$ {{}_{a}}D^{\alpha} f(t)=\lim_{\varepsilon\to0} \frac{f(t+\varepsilon (t-a)^{1-\alpha})-f(t)}{\varepsilon}, $$
(2.1)

provided that the limit exists.

If f is differentiable then \({{}_{a}}D^{\alpha} f(t)=(t-a)^{1-\alpha }f'(t)\). In addition, if the conformable fractional derivative of f of order α exists on \([a,\infty)\), then we say that f is α-differentiable on \([a,\infty)\).
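To make the definition concrete, here is a minimal numerical sketch (not part of the original text; Python and the test function are arbitrary choices made only for illustration): it approximates \({{}_{a}}D^{\alpha}f(t)\) by the quotient in (2.1) with a small but nonzero ε and compares the result with the identity \({{}_{a}}D^{\alpha} f(t)=(t-a)^{1-\alpha}f'(t)\) for \(f(t)=\sin t\).

```python
import math

def conformable_derivative(f, t, a, alpha, eps=1e-7):
    """Approximate {}_aD^alpha f(t) by the limit quotient of Definition 2.1,
    using a small but nonzero eps instead of the limit."""
    return (f(t + eps * (t - a) ** (1.0 - alpha)) - f(t)) / eps

# Compare with the identity {}_aD^alpha f(t) = (t-a)^(1-alpha) f'(t)
# for the differentiable function f(t) = sin t (arbitrary test data).
a, alpha, t = 1.0, 0.5, 2.0
print(conformable_derivative(math.sin, t, a, alpha))   # ≈ -0.4161468
print((t - a) ** (1.0 - alpha) * math.cos(t))          # ≈ -0.4161468
```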

It is easy to prove the following results.

Lemma 2.2

Let \(\alpha\in(0,1]\), \(k_{1},k_{2},p,\lambda\in\mathbb{R}\), and the functions f, g be α-differentiable on \([a,\infty)\). Then:

(i) \({{}_{a}}D^{\alpha} (k_{1}f+k_{2}g)=k_{1}{{}_{a}}D^{\alpha}(f)+k_{2}{{}_{a}}D^{\alpha}(g)\);

(ii) \({{}_{a}}D^{\alpha} (t-a)^{p}=p(t-a)^{p-\alpha}\);

(iii) \({{}_{a}}D^{\alpha} \lambda=0\) for all constant functions \(f(t)=\lambda\);

(iv) \({{}_{a}}D^{\alpha} (fg)=f{{}_{a}}D^{\alpha}g+g{{}_{a}}D^{\alpha}f\);

(v) \({{}_{a}}D^{\alpha} (\frac{f}{g} )=\frac{g{{}_{a}}D^{\alpha}f-f{{}_{a}}D^{\alpha}g}{g^{2}}\) for all functions \(g(t)\neq0\).

Definition 2.3

[19]

Let \(\alpha\in(0,1]\). The conformable fractional integral starting from a point a of a function \(f:[a,\infty)\to\mathbb{R}\) of order α is defined as

$$ {{}_{a}}I^{\alpha} f(t)= \int_{a}^{t}(s-a)^{\alpha-1}f(s)\,ds. $$
(2.2)

Remark 2.4

If \(a=0\), the definitions of the conformable fractional derivative and integral above reduce to those given in [18].
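The following small sketch (again not from the paper; the quadrature rule and the test function are arbitrary choices) evaluates the integral (2.2) numerically and checks it against the closed form \({{}_{a}}I^{\alpha}(s-a)^{p}=(t-a)^{\alpha+p}/(\alpha+p)\); applying Lemma 2.2(ii) to this closed form also confirms \({{}_{a}}D^{\alpha}({{}_{a}}I^{\alpha}f)(t)=f(t)\) for this choice of f.

```python
def conformable_integral(f, t, a, alpha, n=200000):
    """Midpoint-rule approximation of {}_aI^alpha f(t) = int_a^t (s-a)^(alpha-1) f(s) ds;
    midpoints avoid evaluating the weak singularity of (s-a)^(alpha-1) at s = a."""
    h = (t - a) / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * h               # r = s - a at the midpoint of cell i
        total += r ** (alpha - 1.0) * f(a + r)
    return h * total

# For f(s) = (s-a)^p, Definition 2.3 gives {}_aI^alpha f(t) = (t-a)^(alpha+p)/(alpha+p);
# applying Lemma 2.2(ii) to this expression recovers f, i.e. {}_aD^alpha {}_aI^alpha f = f.
a, alpha, p, t = 0.0, 0.5, 2.0, 2.0
print(conformable_integral(lambda s: (s - a) ** p, t, a, alpha))   # ≈ 2.262742
print((t - a) ** (alpha + p) / (alpha + p))                        # = 2.2627417...
```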

Theorem 2.5

(Rolle’s theorem)

Let \([c,d]\subset[a,\infty)\) be an interval and let \(f:[a,\infty)\rightarrow\mathbb{R}\) be a given function that satisfies:

(i) f is continuous on \([c,d]\),

(ii) f is α-differentiable for some \(\alpha\in (0,1)\) on \([c,d]\),

(iii) \(f(c)=f(d)\).

Then there exists a point \(e\in(c,d)\) such that \({{}_{a}}D^{\alpha} f(e)=0\).

Proof

From (i), f is continuous on \([c,d]\), and since by (iii) \(f(c)=f(d)\), there exists a point \(e\in(c,d)\) at which f attains a maximum or minimum value on \([c,d]\) (if f is constant on \([c,d]\), then \({{}_{a}}D^{\alpha}f(e)=0\) for every \(e\in(c,d)\) and there is nothing to prove). Without loss of generality, we assume that e is a point of local minimum. Then we have

$$ {{}_{a}}D^{\alpha} f(e)=\lim_{\varepsilon\rightarrow0^{+}} \frac {f(e+\varepsilon(e-a)^{1-\alpha})-f(e)}{\varepsilon}=\lim_{\varepsilon\rightarrow0^{-}} \frac{f(e+\varepsilon(e-a)^{1-\alpha})-f(e)}{\varepsilon}. $$

Observe that the first limit is nonnegative, while the second limit is nonpositive. Therefore, we obtain \({{}_{a}}D^{\alpha} f(e)=0\). □

Theorem 2.6

(Mean value theorem)

Let \([c,d]\subset[a,\infty)\) be an interval and let \(f:[a,\infty)\rightarrow\mathbb {R}\) be a given function satisfying:

(i) f is continuous on \([c,d]\),

(ii) f is α-differentiable for some \(\alpha\in(0,1)\) on \([c,d]\).

Then there exists a point \(e\in(c,d)\) such that \({{}_{a}}D^{\alpha} f(e)=\frac{f(d)-f(c)}{\frac{1}{\alpha}(d-a)^{\alpha}-\frac{1}{\alpha }(c-a)^{\alpha}}\).

Proof

Setting the function

$$ g(t)=f(t)-f(c)-\frac{f(d)-f(c)}{\frac{1}{\alpha}(d-a)^{\alpha}-\frac {1}{\alpha}(c-a)^{\alpha}} \biggl({\frac{1}{\alpha}(t-a)^{\alpha}- \frac {1}{\alpha}(c-a)^{\alpha}} \biggr), $$

we see that g satisfies all the conditions of Rolle’s theorem (Theorem 2.5). Therefore, there exists a point \(e\in(c,d)\) such that \({{}_{a}}D^{\alpha} g(e)=0\). Using the fact that \({{}_{a}}D^{\alpha} (\frac{1}{\alpha}(t-a)^{\alpha})=1\) (Lemma 2.2(ii)), we get the desired result. This completes the proof. □
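As a quick sanity check of Theorem 2.6 (an illustration added here, not part of the original proof; the data are arbitrary), take \(f(t)=t^{2}\), \(a=0\), \(\alpha=1/2\), \([c,d]=[1,2]\). By Lemma 2.2(ii), \({{}_{0}}D^{1/2}t^{2}=2t^{3/2}\) is increasing, so a simple bisection locates the point e guaranteed by the theorem.

```python
f = lambda t: t * t
a, alpha, c, d = 0.0, 0.5, 1.0, 2.0

# Right-hand side of Theorem 2.6: (f(d)-f(c)) / ((1/alpha)(d-a)^alpha - (1/alpha)(c-a)^alpha).
target = (f(d) - f(c)) / ((d - a) ** alpha / alpha - (c - a) ** alpha / alpha)

# {}_0D^{1/2} f(t) = t^{1/2} * f'(t) = 2 t^{3/2} is increasing, so bisection applies.
conf_deriv = lambda t: (t - a) ** (1 - alpha) * 2 * t
lo, hi = c, d
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if conf_deriv(mid) < target:
        lo = mid
    else:
        hi = mid
print(lo, conf_deriv(lo), target)   # e ≈ 1.4856 lies in (1,2), and {}_0D^{1/2}f(e) ≈ 3.6213 = target
```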

Remark 2.7

If \(a=0\), then Theorems 2.5 and 2.6 reduce to Theorems 2.3 and 2.4 in [18], respectively.

3 Impulsive results

Let \(J^{-}=J\setminus\{t_{1},t_{2},\ldots,t_{m}\}\) and let \(J_{0}=[t_{0}, t_{1}]\), \(J_{k}=(t_{k}, t_{k+1}]\), \(k=1,2,\ldots,m\), be subintervals of J. Define \(\operatorname{PC}(J,\mathbb{R})\) = {\(x:J\rightarrow\mathbb{R}: x(t)\) is continuous everywhere except possibly at the points \(t_{k}\), at which \(x(t_{k}^{-})\) and \(x(t_{k}^{+})\) exist and \(x(t_{k}^{-})=x(t_{k})\), \(k=1,2,\ldots,m\)}. Then \(E=\operatorname{PC}(J,\mathbb{R})\) is a Banach space with the norm \(\|x\| =\sup_{t\in J}|x(t)|\). A function \(x\in E\) is called a solution of the impulsive periodic boundary value problem (1.1) if it satisfies (1.1).

Definition 3.1

A function \(\mu_{0}\in E\) is called a lower solution of the periodic boundary value problem (1.1) if it satisfies

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}\mu_{0}(t)\leq f(t,\mu_{0}(t), (F\mu_{0})(t), (S\mu _{0})(t)), \quad t\in J^{-}, \\ \Delta\mu_{0}(t_{k})\leq I_{k}(\mu_{0}(t_{k})), \quad k =1,2, \ldots,m, \\ \mu_{0} (0)\leq\mu_{0}(T). \end{cases} $$
(3.1)

Analogously, a function \(\nu_{0}\in E\) is called an upper solution of the periodic boundary value problem (1.1) if the inequalities

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}\nu_{0}(t)\geq f(t,\nu_{0}(t), (F\nu_{0})(t), (S\nu _{0})(t)), \quad t\in J^{-}, \\ \Delta\nu_{0}(t_{k})\geq I_{k}(\nu_{0}(t_{k})), \quad k =1,2, \ldots,m, \\ \nu_{0}(0)\geq\nu_{0}(T), \end{cases} $$
(3.2)

hold.

We now introduce some notation which will be used throughout the paper. For nonnegative real numbers \(a\leq b\) with \(a,b\in J\), we define

$$\begin{aligned} e^{\frac{M}{\alpha}}(a,b)= \textstyle\begin{cases} e^{\frac{M}{\alpha}(b-a)^{\alpha}},& a,b\in (t_{i},t_{i+1}], i=0,1,\ldots,m, \\ e^{\frac{M}{\alpha}(b-t_{i})^{\alpha}}\cdot e^{\frac {M}{\alpha}(t_{i}-a)^{\alpha}}, & a\in(t_{i-1},t_{i}], b\in (t_{i},t_{i+1}], \\ & i=1,2,\ldots,m, \\ e^{\frac{M}{\alpha}(b-t_{q})^{\alpha}}\cdot\prod_{a< t_{i-1}< t_{i}< b}e^{\frac{M}{\alpha}(t_{i}-t_{i-1})^{\alpha}}\cdot e^{\frac{M}{\alpha}(t_{p}-a)^{\alpha}}, & a< t_{i-1}< t_{i}< b, \\ &i=2,3,\ldots,m, \end{cases}\displaystyle \end{aligned}$$
(3.3)

where \(t_{q}=\max\{t_{k} ; a< t_{k}< b\}\) and \(t_{p}=\min\{t_{k} ; a< t_{k}< b\}\), \(k\in\{1,2,\ldots,m\}\), and an empty product is taken to be 1, i.e., \(\prod_{0}(\cdot)=1\).

Let \(f:J\to\mathbb{R}\) be a function given by

$$ f(t)= \textstyle\begin{cases} f_{0}(t), & t\in J_{0}, \\ f_{1}(t), & t\in J_{1}, \\ \ldots, \\ f_{m}(t),& t\in J_{m}. \end{cases} $$
(3.4)

The impulsive integral notation is defined as

$$ \int_{a}^{b}f(s)\,\hat{d}s= \int_{a}^{t_{p}}f_{p-1}(s)\,ds+ \int _{t_{p}}^{t_{p+1}}f_{p}(s)\,ds+\cdots+ \int_{t_{q}}^{b}f_{q}(s)\,ds, \quad a,b\in J, $$
(3.5)

where \(a\leq t_{p}<\cdots<t_{q}\leq b\).

Example 3.2

For \(J=[0,6]\), \(t_{k}=k\), \(k=1,2,3,4,5\), we have

$$\begin{aligned}& e^{\frac{M}{\alpha}}(2,5) = e^{\frac{M}{\alpha}(5-4)^{\alpha}}\cdot e^{\frac{M}{\alpha}(4-3)^{\alpha}}\cdot e^{\frac{M}{\alpha }(3-2)^{\alpha}}, \\& e^{\frac{-M}{\alpha}}(2.2,3.8) = e^{\frac{-M}{\alpha}(3.8-3)^{\alpha}}\cdot e^{\frac{-M}{\alpha}(3-2.2)^{\alpha}}, \\& \int_{0.5}^{3.5}f(s)\,\hat{d}s = \int_{0.5}^{1}f_{0}(s)\,ds+ \int _{1}^{2}f_{1}(s)\,ds+ \int_{2}^{3}f_{2}(s)\,ds+ \int_{3}^{3.5}f_{3}(s)\,ds. \end{aligned}$$
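To make the notation (3.3)-(3.5) and Example 3.2 concrete, here is a short computational sketch (not part of the paper; the values M = 1, α = 1/2 and the test branches \(f_{k}\) are arbitrary choices made only for illustration). The helper splits \((a,b)\) at the impulse points strictly between a and b, exactly as in (3.3), and the piecewise integral follows (3.5).

```python
import math
from bisect import bisect_right

def e_factor(M, alpha, a, b, impulses):
    """e^{M/alpha}(a,b) as in (3.3): split (a,b) at the impulse points strictly
    between a and b and multiply the exponentials of the pieces."""
    pts = [a] + [tk for tk in impulses if a < tk < b] + [b]
    return math.prod(math.exp((M / alpha) * (hi - lo) ** alpha)
                     for lo, hi in zip(pts, pts[1:]))

def impulsive_integral(f_pieces, a, b, impulses, n=2000):
    """int_a^b f(s) \hat{d}s as in (3.5): on each piece of (a,b) cut by the impulse
    points, integrate the branch of f active there (midpoint rule)."""
    pts = [a] + [tk for tk in impulses if a < tk < b] + [b]
    total = 0.0
    for lo, hi in zip(pts, pts[1:]):
        k = bisect_right(impulses, lo)       # index of the branch f_k active on (lo, hi]
        h = (hi - lo) / n
        total += h * sum(f_pieces[k](lo + (i + 0.5) * h) for i in range(n))
    return total

# Reproducing Example 3.2 with J = [0,6], t_k = k (k = 1,...,5) and M = 1, alpha = 0.5:
impulses = [1, 2, 3, 4, 5]
M, alpha = 1.0, 0.5
print(e_factor(M, alpha, 2, 5, impulses))        # = e^{2*(5-4)^{0.5}} * e^{2*(4-3)^{0.5}} * e^{2*(3-2)^{0.5}}
print(e_factor(-M, alpha, 2.2, 3.8, impulses))   # = e^{-2*(3.8-3)^{0.5}} * e^{-2*(3-2.2)^{0.5}}
# With the arbitrary test branches f_k(s) = s + k on J_k, the third line of the example:
f_pieces = [lambda s, k=k: s + k for k in range(6)]
print(impulsive_integral(f_pieces, 0.5, 3.5, impulses))   # = 10.5 for this choice of f
```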

It is easy to prove the following result.

Property 3.3

Let \(a\leq c\leq b\leq d\) be nonnegative real numbers. The following relations hold:

(i) \(e^{\frac{M}{\alpha}}(a,c)e^{\frac{M}{\alpha}}(c,b)=e^{\frac{M}{\alpha}}(a,b)\),

(ii) \(e^{\frac{M}{\alpha}}(a,b)e^{\frac{M}{\alpha}}(c,d)=e^{\frac{M}{\alpha}}(a,d)e^{\frac{M}{\alpha}}(c,b)\).

Now we consider the following boundary value problem for a linear impulsive conformable fractional integro-differential equation subject to a periodic boundary condition:

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha }x(t)-Mx(t)=H(Fx)(t)+K(Sx)(t)+v(t), \quad 0< \alpha\leq1, t\in J^{-}, \\ \Delta x(t_{k})=L_{k}x(t_{k})+I_{k}(\sigma(t_{k}))-L_{k}\sigma (t_{k}), \quad k=1,2,\ldots,m, \\ x(0)=x(T), \end{cases} $$
(3.6)

where \(M>0\), \(H,K\geq0\), \(L_{k}\geq0\), \(k=1,2,\ldots,m\), are given constants and \(v,\sigma\in E\) are given functions.

Lemma 3.4

A function \(x\in E\) is a solution of the problem (3.6) if and only if it satisfies the following impulsive integral equation:

$$\begin{aligned} x(t) =& \int_{0}^{T}G_{1}(t,s)P(s)\,\hat{d}s \\ &{}+\sum _{k=1}^{m}G_{2}(t,t_{k}) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr], \quad t\in J, \end{aligned}$$
(3.7)

where \(P(t)=H(Fx)(t)+K(Sx)(t)+v(t)\),

$$ G_{1}(t,s)= \textstyle\begin{cases} \frac{(s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{h},t)e^{-\frac{M}{\alpha}}(t_{h},s)}{1-e^{\frac{M}{\alpha }}(t_{0},T)},&0\leq s< t\leq T, \\ \frac{(s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{0},t)e^{\frac{M}{\alpha}}(t_{h},T)e^{-\frac{M}{\alpha }}(t_{h},s)}{1-e^{\frac{M}{\alpha}}(t_{0},T)},&0\leq t\leq s\leq T, \end{cases} $$
(3.8)

and

$$ G_{2}(t,s)= \textstyle\begin{cases} \frac{e^{\frac{M}{\alpha}}(s,t)}{1-e^{\frac{M}{\alpha }}(t_{0},T)},&0\leq s< t\leq T, \\ \frac{e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha }}(s,T)}{1-e^{\frac{M}{\alpha}}(t_{0},T)},&0\leq t\leq s\leq T, \end{cases} $$
(3.9)

with \(t_{h}=\max\{t_{k}; k=0,1,\ldots,m\textit{ and }t_{k}\leq s\}\).

Proof

Let \(x(t)\) be a solution of the problem (3.6). For \(t\in J_{0}\), multiplying both sides of the first equation of the problem (3.6) by \(e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha}}\) and using the product rule of Lemma 2.2(iv),

$$ e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha}}{{}_{t_{0}}}D^{\alpha} x(t)-e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha}}Mx(t)={{}_{t_{0}}}D^{\alpha } \bigl[e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha}}x(t) \bigr], $$

we obtain

$$ {{}_{t_{0}}}D^{\alpha} \bigl[e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha }}x(t) \bigr]=e^{-M\frac{(t-t_{0})^{\alpha}}{\alpha}} \bigl[H(Fx) (t)+K(Sx) (t)+v(t) \bigr]. $$
(3.10)

Applying the conformable fractional integral of order α to both sides of (3.10) for \(t\in J_{0}\), we have

$$\begin{aligned} x(t) =&x(0)e^{M\frac{t^{\alpha}}{\alpha}}+ \int_{0}^{t}s^{\alpha -1}e^{M\frac{t^{\alpha}}{\alpha}}e^{-M\frac{s^{\alpha}}{\alpha }}P(s) \,ds \\ =& x(0)e^{\frac{M}{\alpha}}(t_{0},t)+ \int_{0}^{t}s^{\alpha-1}e^{\frac {M}{\alpha}}(t_{0},t)e^{-\frac{M}{\alpha}}(t_{0},s)P(s) \,ds. \end{aligned}$$

For \(t\in J_{1}\), multiplying both sides of the first equation of the problem (3.6) by \(e^{-M\frac{(t-t_{1})^{\alpha }}{\alpha}}\) and using the product rule, we get

$$ {{}_{t_{1}}}D^{\alpha} \bigl[e^{-M\frac{(t-t_{1})^{\alpha}}{\alpha }}x(t) \bigr]=e^{-M\frac{(t-t_{1})^{\alpha}}{\alpha}}P(t). $$
(3.11)

The conformable fractional integration of order α from \(t_{1}\) to t of (3.11) implies

$$\begin{aligned} x(t) =& x\bigl(t_{1}^{+}\bigr)e^{M\frac{(t-t_{1})^{\alpha}}{\alpha}}+ \int _{t_{1}}^{t}(s-t_{1})^{\alpha-1}e^{M\frac{(t-t_{1})^{\alpha}}{\alpha }}e^{-M\frac{(s-t_{1})^{\alpha}}{\alpha}}P(s) \,ds \\ =& x\bigl(t_{1}^{+}\bigr)e^{\frac{M}{\alpha}}(t_{1},t)+ \int_{t_{1}}^{t}(s-t_{1})^{\alpha -1}e^{\frac{M}{\alpha}}(t_{1},t)e^{-\frac{M}{\alpha}}(t_{1},s)P(s) \,ds. \end{aligned}$$

Since \(x(t_{1}^{+})=x(t_{1})+L_{1}x(t_{1})+I_{1}(\sigma(t_{1}))-L_{1}\sigma(t_{1})\) and

$$ x(t_{1}) = x(0)e^{\frac{M}{\alpha}}(t_{0},t_{1})+ \int_{0}^{t_{1}}s^{\alpha -1}e^{\frac{M}{\alpha}}(t_{0},t_{1})e^{-\frac{M}{\alpha}}(t_{0},s)P(s) \,ds, $$

by using Property 3.3(i), we get

$$\begin{aligned} \begin{aligned} x(t)={}& x(0)e^{\frac{M}{\alpha}}(t_{0},t)+ \int _{t_{0}}^{t_{1}}(s-t_{0})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},t)e^{-\frac {M}{\alpha}}(t_{0},s)P(s) \,ds \\ &{}+ e^{\frac{M}{\alpha}}(t_{1},t) \bigl[L_{1}x(t_{1})+I_{1} \bigl(\sigma (t_{1})\bigr)-L_{1}\sigma(t_{1}) \bigr] \\ &{}+ \int_{t_{1}}^{t}(s-t_{1})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{1},t)e^{-\frac{M}{\alpha}}(t_{1},s)P(s) \,ds. \end{aligned} \end{aligned}$$

Repeating the above process, for \(t\in J_{k}\), we have

$$\begin{aligned} x(t) =&x(0)e^{\frac{M}{\alpha}}(t_{0},t)+\sum _{t_{0}< t_{k}< t}e^{\frac {M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma (t_{k}) \bigr] \\ &{}+ \sum_{t_{0}< t_{k}< t} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha -1}e^{\frac{M}{\alpha}}(t_{k-1},t)e^{-\frac{M}{\alpha }}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{l}}^{t}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{l},t)e^{-\frac{M}{\alpha}}(t_{l},s)P(s) \,ds, \end{aligned}$$
(3.12)

where \(t_{l}=\max\{t_{k}; k=0,1,\ldots,m\mbox{ and }t_{k}< t\}\).

Putting \(t=T\) in (3.12), we obtain

$$\begin{aligned} x(T) =&x(0)e^{\frac{M}{\alpha}}(t_{0},T)+\sum _{k=1}^{m}e^{\frac {M}{\alpha}}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma (t_{k}) \bigr] \\ &{}+ \sum_{k=1}^{m} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac {M}{\alpha}}(t_{k-1},T)e^{-\frac{M}{\alpha }}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{m}}^{T}(s-t_{m})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{m},T)e^{-\frac{M}{\alpha}}(t_{m},s)P(s) \,ds. \end{aligned}$$

From the periodic boundary condition \(x(0)=x(T)\), we deduce that

$$\begin{aligned} x(0) =&\frac{1}{1-e^{\frac{M}{\alpha}}(t_{0},T)} \Biggl\{ \sum _{k=1}^{m}e^{\frac{M}{\alpha}}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl(\sigma (t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{k=1}^{m} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac {M}{\alpha}}(t_{k-1},T)e^{-\frac{M}{\alpha }}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{m}}^{T}(s-t_{m})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{m},T)e^{-\frac{M}{\alpha}}(t_{m},s)P(s) \,ds \Biggr\} . \end{aligned}$$
(3.13)

Substituting (3.13) into (3.12), it follows that

$$\begin{aligned} x(t) =&\frac{e^{\frac{M}{\alpha}}(t_{0},t)}{1-e^{\frac{M}{\alpha}}(t_{0},T)} \Biggl\{ \sum_{k=1}^{m}e^{\frac{M}{\alpha}}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl(\sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{k=1}^{m} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{k-1},T)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{m}}^{T}(s-t_{m})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{m},T)e^{-\frac{M}{\alpha}}(t_{m},s)P(s) \,ds \Biggr\} \\ &{}+ \sum_{t_{0}< t_{k}< t}e^{\frac{M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{t_{0}< t_{k}< t} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{k-1},t)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{l}}^{t}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{l},t)e^{-\frac{M}{\alpha}}(t_{l},s)P(s) \,ds \\ =&\frac{1}{1-e^{\frac{M}{\alpha}}(t_{0},T)} \Biggl\{ \sum_{k=1}^{m}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha}}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{k=1}^{m} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha}}(t_{k-1},T)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{m}}^{T}(s-t_{m})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha}}(t_{m},T)e^{-\frac{M}{\alpha}}(t_{m},s)P(s) \,ds \\ &{}- \sum_{t_{0}< t_{k}< t}e^{\frac{M}{\alpha}}(t_{0},T)e^{\frac{M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}- \sum_{t_{0}< t_{k}< t} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},T)e^{\frac{M}{\alpha}}(t_{k-1},t)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}- \int_{t_{l}}^{t}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},T)e^{\frac{M}{\alpha}}(t_{l},t)e^{-\frac{M}{\alpha}}(t_{l},s)P(s) \,ds \\ &{}+ \sum_{t_{0}< t_{k}< t}e^{\frac{M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{t_{0}< t_{k}< t} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{k-1},t)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{l}}^{t}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{l},t)e^{-\frac{M}{\alpha}}(t_{l},s)P(s) \,ds \Biggr\} . \end{aligned}$$

Using Property 3.3(ii), we have

$$\begin{aligned} x(t) =&\frac{1}{1-e^{\frac{M}{\alpha}}(t_{0},T)} \biggl\{ \sum_{t< t_{k}< T}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha }}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \int_{t}^{t_{l+1}}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{0},t)e^{\frac{M}{\alpha}}(t_{l},T)e^{-\frac{M}{\alpha }}(t_{l},s)P(s) \,ds \\ &{}+ \sum_{t_{l+1}< t_{k}< T} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha -1}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha }}(t_{k-1},T)e^{-\frac{M}{\alpha}}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{m}}^{T}(s-t_{m})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{0},t)e^{\frac{M}{\alpha}}(t_{m},T)e^{-\frac{M}{\alpha }}(t_{m},s)P(s) \,ds \\ &{}+ \sum_{t_{0}< t_{k}< t}e^{\frac{M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \sum_{t_{0}< t_{k}< t} \int_{t_{k-1}}^{t_{k}}(s-t_{k-1})^{\alpha -1}e^{\frac{M}{\alpha}}(t_{k-1},t)e^{-\frac{M}{\alpha }}(t_{k-1},s)P(s) \,ds \\ &{}+ \int_{t_{l}}^{t}(s-t_{l})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{l},t)e^{-\frac{M}{\alpha}}(t_{l},s)P(s) \,ds \biggr\} \\ =&\frac{1}{1-e^{\frac{M}{\alpha}}(t_{0},T)} \biggl\{ \sum_{t< t_{k}< T}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha }}(t_{k},T) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \int_{t}^{T}(s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{0},t)e^{\frac{M}{\alpha}}(t_{h},T)e^{-\frac{M}{\alpha }}(t_{h},s)P(s) \,\hat{d}s \\ &{}+ \sum_{t_{0}< t_{k}< t}e^{\frac{M}{\alpha}}(t_{k},t) \bigl[L_{k}x(t_{k})+I_{k}\bigl( \sigma(t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr] \\ &{}+ \int_{t_{0}}^{t}(s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha }}(t_{h},t)e^{-\frac{M}{\alpha}}(t_{h},s)P(s) \,\hat{d}s \biggr\} . \end{aligned}$$

Therefore, we obtain the integral equation (3.7) as required.

Conversely, it can easily be shown by direct computation that every solution of the integral equation (3.7) satisfies the impulsive periodic boundary value problem (3.6). This completes the proof. □
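For readers who wish to experiment with the representation (3.7), the following sketch (not from the paper) evaluates the Green’s functions (3.8) and (3.9) pointwise; the helper \(e^{\frac{M}{\alpha}}(\cdot,\cdot)\) from the sketch after Example 3.2 is repeated so that the block is self-contained, and the parameter values in the demo call are arbitrary. Note that \(G_{1}(t,s)\) has an integrable singularity where \(s=t_{k}\), so it should not be evaluated exactly at those points.

```python
import math

def e_factor(M, alpha, a, b, impulses):
    """e^{M/alpha}(a,b) as in (3.3)."""
    pts = [a] + [tk for tk in impulses if a < tk < b] + [b]
    return math.prod(math.exp((M / alpha) * (hi - lo) ** alpha)
                     for lo, hi in zip(pts, pts[1:]))

def G1(t, s, M, alpha, T, impulses):
    """Green's function (3.8); t_h = max{t_k : t_k <= s}, with t_0 = 0 included."""
    th = max([0.0] + [tk for tk in impulses if tk <= s])
    denom = 1.0 - e_factor(M, alpha, 0.0, T, impulses)
    if s < t:
        num = (s - th) ** (alpha - 1) * e_factor(M, alpha, th, t, impulses) \
              * e_factor(-M, alpha, th, s, impulses)
    else:
        num = (s - th) ** (alpha - 1) * e_factor(M, alpha, 0.0, t, impulses) \
              * e_factor(M, alpha, th, T, impulses) * e_factor(-M, alpha, th, s, impulses)
    return num / denom

def G2(t, s, M, alpha, T, impulses):
    """Green's function (3.9)."""
    denom = 1.0 - e_factor(M, alpha, 0.0, T, impulses)
    if s < t:
        return e_factor(M, alpha, s, t, impulses) / denom
    return e_factor(M, alpha, 0.0, t, impulses) * e_factor(M, alpha, s, T, impulses) / denom

# Arbitrary demo data: T = 1, one impulse at t_1 = 1/2, M = 1, alpha = 1/2.
print(G1(0.75, 0.25, 1.0, 0.5, 1.0, [0.5]), G2(0.25, 0.75, 1.0, 0.5, 1.0, [0.5]))
```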

Denote \(a=\max_{k} \{t_{k+1}-t_{k}: k=0,1,\ldots,m\}\). Now we prove a comparison result.

Lemma 3.5

Let \(0<\alpha\leq1\). Assume that \(x\in E\) satisfies

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}x(t)\geq Mx(t)+H(Fx)(t)+K(Sx)(t),\quad t\in J^{-}, \\ \Delta x(t_{k})\geq L_{k}x(t_{k}),\quad k=1,2,\ldots,m, \\ x(0)\geq x(T), \end{cases} $$
(3.14)

where \(M>0\), \(H,K\geq0\), \(L_{k}\geq0\), \(k=1,2,\ldots,m\) are given constants. Suppose in addition that

$$ \sum_{k=1}^{m}L_{k}+ \frac{a^{\alpha}}{\alpha}(m+2) (M+H\bar {l}T+K\bar{h}T)\leq1, $$
(3.15)

where \(\bar{l}=\sup\{l(t,s):(t,s)\in J^{2}\}\) and \(\bar {h}=\sup\{h(t,s):(t,s)\in J^{2}\}\). Then \(x(t)\leq0\) for all \(t\in J\).

Proof

Suppose, to the contrary, that \(x(t)>0\) for some \(t\in J\). Then there are two cases:

(i) There exists a point \(t^{*}\in J\), such that \(x(t^{*})>0\) and \(x(t)\geq0\) for all \(t\in J\).

(ii) There exist two points \(t^{*}, t_{*} \in J\), such that \(x(t^{*})>0\) and \(x(t_{*})<0\).

Case (i): Set \(u(t)=e^{-\frac{M}{\alpha}(t-t_{k})^{\alpha}}x(t)\) for \(t\in J_{k}\), \(k=0,1,\ldots,m\). Then we have

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}u(t)\geq H\int_{0}^{t}l(t,s)e^{-\frac {M}{\alpha}[(t-t_{k})^{\alpha}-(s-t_{k})^{\alpha}]}u(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\alpha}u(t)\geq{}}{}+K\int_{0}^{T} h(t,s)e^{-\frac{M}{\alpha}[(t-t_{k})^{\alpha}-(s-t_{k})^{\alpha}]}u(s)\,ds, \quad t\in J^{-}, \\ \Delta u(t_{k})\geq L_{k}u(t_{k}), \quad k=1,2,\ldots,m, \\ u(0)\geq e^{\frac{M}{\alpha}(T-t_{m})^{\alpha}}u(T). \end{cases} $$

Obviously, the functions \(u(t)\) and \(x(t)\) have the same sign. From the above inequalities, it follows that \({{}_{t_{k}}}D^{\alpha}u(t)\geq0\) for \(t\in J^{-}\) and \(\Delta u(t_{k})\geq0\) for \(k=1,2,\ldots,m\). This implies that \(u(t)\) is nondecreasing on J. Therefore, we have \(u(T)\geq u(t^{*})>0\) and \(u(T)\geq u(0)\geq e^{\frac{M}{\alpha}(T-t_{m})^{\alpha}}u(T)>u(T)\), which is a contradiction.

Case (ii): Let \(\inf\{x(t):t\in J\}=-b\). Since x takes a negative value on J, we have \(b>0\), and there exists a point \(t_{*}\in(t_{i},t_{i+1}]\), \(i\in\{0,1,\ldots,m\}\), such that \(x(t_{*})=-b\) or \(x(t_{i}^{+})=-b\). We only consider the case \(x(t_{*})=-b\); for the case \(x(t_{i}^{+})=-b\) the proof is similar. It is easy to see that

$$\begin{aligned} {{}_{t_{k}}}D^{\alpha}x(t) \geq& -b \biggl(M+H \int_{0}^{t} l(t,s)\,ds+K \int _{0}^{T}h(t,s)\,ds \biggr) \\ \geq&-b(M+H\bar{l}T+K\bar{h}T). \end{aligned}$$

Suppose that \(t^{*}\in(t_{j},t_{j+1})\) for some \(j\in\{0,1,\ldots,m\}\). We assume \(t_{*}< t^{*}\), which implies \(i\leq j\). For the case \(t_{*}>t^{*}\), the proof is similar and thus we omit it. By Theorem 2.6, we get

$$\begin{aligned}& x(T)-x(t_{m}) \geq x(T)-x\bigl(t_{m}^{+}\bigr)+L_{m}x(t_{m})= \frac{1}{\alpha }(T-t_{m})^{\alpha}{_{t_{m}}}D^{\alpha}x(r_{m})+L_{m}x(t_{m}) \\& \hphantom{x(T)-x(t_{m})}\geq -b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar {l}T+K \bar{h}T)+L_{m} \biggr], \quad r_{m}\in(t_{m},T), \\& x(t_{m})-x(t_{m-1}) \geq x(t_{m})-x \bigl(t_{m-1}^{+}\bigr)+L_{m-1}x(t_{m-1}) \\& \hphantom{x(t_{m})-x(t_{m-1})}= \frac{1}{\alpha}(t_{m}-t_{m-1})^{\alpha}{_{t_{m-1}}}D^{\alpha}x(r_{m-1})+L_{m-1}x(t_{m-1}) \\& \hphantom{x(t_{m})-x(t_{m-1})}\geq -b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar {l}T+K \bar{h}T)+L_{m-1} \biggr], \quad r_{m-1}\in (t_{m-1},t_{m}), \\& \ldots, \\& x(t_{j+1})-x\bigl(t^{*}\bigr) = \biggl[\frac{1}{\alpha}(t_{j+1}-t_{j})^{\alpha}-\frac{1}{\alpha}\bigl(t^{*}-t_{j}\bigr)^{\alpha}\biggr]{_{t_{j}}}D^{\alpha}x\bigl(r^{*}\bigr) \\& \hphantom{x(t_{j+1})-x(t^{*})}\geq -b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar {l}T+K\bar{h}T) \biggr], \quad r^{*}\in\bigl(t^{*},t_{j+1}\bigr). \end{aligned}$$

Summing up the above inequalities, we get

$$\begin{aligned} x(T)-x\bigl(t^{*}\bigr) \geq&\sum_{k=j+1}^{m} \biggl(-b \biggl[\frac{a^{\alpha }}{\alpha}(M+H\bar{l}T+K\bar{h}T)+L_{k} \biggr] \biggr) \\ &{}-b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar{l}T+K\bar {h}T) \biggr]. \end{aligned}$$
(3.16)

In the same way, we have

$$\begin{aligned}& x(t_{*})-x(t_{i}) \geq -b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar {l}T+K \bar{h}T)+L_{i} \biggr], \\& \ldots, \\& x(t_{1})-x(0) \geq -b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar {l}T+K \bar{h}T) \biggr]. \end{aligned}$$

Summing up the above inequalities, it follows that

$$\begin{aligned} x(t_{*})-x(0) \geq&\sum_{k=1}^{i} \biggl(-b \biggl[\frac{a^{\alpha}}{\alpha }(M+H\bar{l}T+K\bar{h}T)+L_{k} \biggr] \biggr) \\ &{}-b \biggl[\frac{a^{\alpha}}{\alpha}(M+H\bar{l}T+K\bar {h}T) \biggr]. \end{aligned}$$
(3.17)

From (3.16), (3.17), and the third inequality of (3.14), we obtain

$$ x(t_{*})-x\bigl(t^{*}\bigr)\geq-b\sum_{k=1}^{m}L_{k}- \frac{a^{\alpha}}{\alpha }b(m+2) (M+H\bar{l}T+K\bar{h}T), $$

which leads to

$$ 0< x\bigl(t^{*}\bigr)\leq-b+b\sum_{k=1}^{m}L_{k}+ \frac{a^{\alpha}}{\alpha }b(m+2) (M+H\bar{l}T+K\bar{h}T). $$

Thus, we get

$$ \sum_{k=1}^{m}L_{k}+ \frac{a^{\alpha}}{\alpha}(m+2) (M+H\bar {l}T+K\bar{h}T)>1, $$

which contradicts (3.15). The proof is completed. □

4 Existence results

In view of Lemma 3.4, we define the operator \(\mathcal{A}:E\to E\) by

$$\begin{aligned} \mathcal{A}x(t) =& \int_{0}^{T}G_{1}(t,s) \bigl[H(Fx) (s)+K(Sx) (s)+v(s) \bigr]\,\hat{d}s \\ &{}+ \sum_{k=1}^{m}G_{2}(t,t_{k}) \bigl[L_{k}x(t_{k})+I_{k}\bigl(\sigma (t_{k})\bigr)-L_{k}\sigma(t_{k}) \bigr], \end{aligned}$$
(4.1)

where the Green functions \(G_{1}(t,s)\) and \(G_{2}(t,s)\) are defined by (3.8) and (3.9), respectively. Next, we will prove the existence of a unique solution for the problem (3.6). To accomplish this, we set

$$ \Lambda:=\frac{e^{\frac{M}{\alpha}}(t_{0},T)}{\vert 1-e^{\frac {M}{\alpha}}(t_{0},T)\vert } \Biggl[a^{\alpha-1} \Biggl(\frac {\bar{l}H}{2} \sum_{k=1}^{m+1} \bigl(t_{k}^{2}-t_{k-1}^{2} \bigr)+\bar{h}KT^{2} \Biggr)+\sum_{k=1}^{m}L_{k} \Biggr]. $$

Lemma 4.1

Assume that \(\alpha\in(0,1]\), \(M>0\), \(H,K\geq0\), \(L_{k}\geq0\), \(k=1,2,\ldots,m\). If

$$ \Lambda< 1, $$
(4.2)

then the periodic boundary value problem (3.6) has a unique solution on J.

Proof

Case I. For \(0\leq s< t\leq T\), we see that

$$ (s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{h},t)e^{-\frac{M}{\alpha }}(t_{h},s) \leq a^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},T) $$

and

$$ e^{\frac{M}{\alpha}}(s,t)\leq e^{\frac{M}{\alpha}}(t_{0},T). $$

Case II. For \(0\leq t\leq s\leq T\), we have

$$ e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha}}(s,T)\leq e^{\frac {M}{\alpha}}(t_{0},T). $$

If \(t< t_{h}\leq s\), then we obtain

$$ (s-t_{h})^{\alpha-1}e^{\frac{M}{\alpha}}(t_{0},t)e^{\frac{M}{\alpha }}(t_{h},T)e^{-\frac{M}{\alpha}}(t_{h},s) \leq a^{\alpha-1}e^{\frac {M}{\alpha}}(t_{0},T). $$
(4.3)

If \(t_{h}\leq t\leq s\), we have \(e^{\frac{M}{\alpha}}(t_{h},t)e^{-\frac {M}{\alpha}}(t_{h},s)\leq1\), which implies that the above inequality (4.3) holds. From Cases I and II, it follows that

$$ \sup_{(t,s)\in J^{2}}\bigl\vert G_{1}(t,s)\bigr\vert \leq \frac{a^{\alpha -1}e^{\frac{M}{\alpha}}(t_{0},T)}{\vert 1-e^{\frac{M}{\alpha }}(t_{0},T)\vert }\quad \text{and}\quad \sup_{(t,s)\in J^{2}}\bigl\vert G_{2}(t,s)\bigr\vert \leq\frac{e^{\frac {M}{\alpha}}(t_{0},T)}{\vert 1-e^{\frac{M}{\alpha}}(t_{0},T)\vert }. $$

Transform the problem (3.6) into a fixed point problem, \(x=\mathcal{A}x\), where the operator \(\mathcal{A}\) is defined by (4.1). For any \(x,y\in E\), we have

$$\begin{aligned} \|\mathcal{A}x-\mathcal{A}y\| \leq&\|x-y\| \int_{0}^{T}\bigl\vert G_{1}(t,s)\bigr\vert \bigl[H(F1) (s)+K(S1) (s) \bigr]\,\hat{d}s \\ &{}+ \|x-y\|\sum_{k=1}^{m}\bigl\vert G_{2}(t,s)\bigr\vert L_{k} \\ \leq& \Lambda\|x-y\|. \end{aligned}$$

As \(\Lambda<1\), \(\mathcal{A}\) is a contraction. Therefore, by the Banach contraction mapping principle, we deduce that \(\mathcal{A}\) has a fixed point which is the unique solution of the problem (3.6). The proof is completed. □

For \(\nu_{0}, \mu_{0}\in E\), we denote

$$ [\nu_{0}, \mu_{0}]=\bigl\{ x\in E : \nu_{0}(t) \leq x(t)\leq\mu_{0}(t) , t\in J\bigr\} , $$

and we write \(\nu_{0}\leq\mu_{0}\) if \(\nu_{0}(t)\leq\mu_{0}(t)\) for all \(t\in J\).

Theorem 4.2

Assume that the following conditions hold:

(H1):

the functions \(\mu_{0}\) and \(\nu_{0}\) are lower and upper solutions of the periodic boundary value problem (1.1), respectively, such that \(\nu_{0}(t)\leq\mu_{0}(t)\) on J;

(H2):

the function \(f\in C(J\times\mathbb{R}^{3}, \mathbb {R})\) satisfies

$$ f(t, x, y, z)-f(t,\bar{x}, \bar{y}, \bar{z})\leq M(x-\bar{x})+H(y-\bar{y})+K(z- \bar{z}), $$

for \(\nu_{0}(t)\leq\bar{x}(t)\leq x(t)\leq\mu_{0}(t)\), \((F\nu _{0})(t)\leq\bar{y}(t)\leq y(t)\leq(F\mu_{0})(t)\), \((S\nu _{0})(t)\leq\bar{z}(t)\leq z(t)\leq (S\mu_{0})(t)\), \(t\in J\);

(H3):

the functions \(I_{k}\in C(\mathbb{R},\mathbb{R})\) satisfy

$$ I_{k} \bigl(x(t_{k}) \bigr)-I_{k} \bigl(y(t_{k}) \bigr)\leq L_{k} \bigl(x(t_{k})-y(t_{k}) \bigr), $$

whenever \(\nu_{0}(t_{k})\leq y(t_{k})\leq x(t_{k})\leq\mu_{0}(t_{k})\), \(L_{k}\geq 0\), \(k=1, 2,\ldots,m\);

(H4):

the inequalities (3.15) and (4.2) hold.

Then there exist monotone sequences \(\{\mu_{n}\}, \{\nu_{n}\}\subset E\) such that \(\lim_{n\rightarrow\infty} \mu _{n}(t)=x^{*}(t)\), \(\lim_{n\rightarrow\infty} \nu_{n}(t)=x_{*}(t)\) uniformly on J and \(x^{*}\), \(x_{*}\) are maximal and minimal solutions of the problem (1.1), respectively, such that

$$ \nu_{0}\leq\nu_{1}\leq\nu_{2}\leq\cdots\leq \nu_{n} \leq x_{*}\leq x\leq x^{*}\leq\mu_{n}\leq\cdots\leq \mu_{2}\leq\mu_{1}\leq \mu_{0}, $$

on J, where x is any solution of the periodic boundary value problem (1.1) such that \(\nu_{0}(t)\leq x(t)\leq\mu_{0}(t)\) on J.

Proof

For any \(\sigma\in[\nu_{0}, \mu_{0}]\), we investigate the periodic boundary value problem (3.6) with

$$ v(t)=f\bigl(t, \sigma(t),(F\sigma) (t), (S\sigma) (t)\bigr)-M\sigma(t)-H(F \sigma) (t)-K(S\sigma) (t). $$

By Lemma 4.1, the problem (3.6) has a unique solution \(x(t)\) for \(t\in J\). Let us define an operator \(\mathcal{A}\) by \(x=\mathcal{A}\sigma\). Then \(\mathcal{A}\) maps \([\nu_{0}, \mu_{0}]\) into E and has the following properties:

(i) \(\nu_{0} \leq\mathcal{A}\nu_{0}\), \(\mathcal{A}\mu_{0}\leq\mu_{0}\);

(ii) for any \(\sigma_{1}, \sigma_{2}\in[\nu_{0}, \mu_{0}]\), \(\sigma_{1}\leq\sigma_{2}\) implies \(\mathcal{A}\sigma_{1}\leq\mathcal{A}\sigma_{2}\).

To prove (i), we set \(\varphi=\nu_{0}-\nu_{1}\), where \(\nu_{1}=\mathcal{A}\nu_{0}\). Then from condition (H1) and (3.6), we have

$$\begin{aligned} {{}_{t_{k}}}D^{\alpha}\varphi(t) =& {{}_{t_{k}}}D^{\alpha} \nu_{0}(t)- {{}_{t_{k}}}D^{\alpha}\nu_{1}(t) \\ \geq& f\bigl(t, \nu_{0}(t), (F\nu_{0}) (t), (S \nu_{0}) (t)\bigr)- \bigl[M\nu_{1}(t) +H(F\nu_{1}) (t)+K(S\nu_{1}) (t) \\ &{}+f\bigl(t, \nu_{0}(t), (F\nu_{0}) (t), (S \nu_{0}) (t)\bigr)-M\nu_{0}(t)-H (F\nu_{0}) (t)-K(S\nu_{0}) (t) \bigr] \\ =&M\varphi(t)+H(F\varphi) (t)+K(S\varphi) (t), \quad t\in J^{-}, \end{aligned}$$

and

$$\begin{aligned} \Delta\varphi(t_{k}) =& \Delta\nu_{0}(t_{k})- \Delta\nu_{1}(t_{k}) \\ \geq& I_{k} \bigl(\nu_{0}(t_{k}) \bigr)- \bigl[ L_{k}\nu_{1}(t_{k})+I_{k} \bigl( \nu_{0}(t_{k}) \bigr)-L_{k}\nu_{0}(t_{k}) \bigr] \\ =&L_{k}\varphi(t_{k}), \quad k=1, 2, \ldots, m, \end{aligned}$$

and

$$\begin{aligned} \varphi(0) =&\nu_{0}(0)-\nu_{1}(0) \\ \geq& \nu_{0}(T)-\nu_{1}(T) \\ =& \varphi(T) . \end{aligned}$$

Using Lemma 3.5, we deduce that \(\varphi(t)\leq0\) for all \(t\in J\), i.e., \(\nu_{0}\leq\mathcal{A}\nu_{0}\). Similarly, we can prove that \(\mathcal{A}\mu_{0}\leq\mu_{0}\).

To prove (ii), we let \(u_{1}=\mathcal{A}\sigma_{1}\), \(u_{2}=\mathcal{A}\sigma_{2}\), where \(\sigma_{1}\leq\sigma_{2}\) on J and \(\sigma_{1}, \sigma_{2}\in[\nu_{0}, \mu_{0}]\). Set \(\varphi=u_{1}-u_{2}\). Then for \(t\in J\), by (H2), we obtain

$$\begin{aligned} {{}_{t_{k}}}D^{\alpha}\varphi(t) =& {{}_{t_{k}}}D^{\alpha }u_{1}(t)-{{}_{t_{k}}}D^{\alpha}u_{2}(t) \\ =& Mu_{1}(t)+ H(Fu_{1}) (t)+K(Su_{1}) (t)+f \bigl(t, \sigma_{1}(t), (F\sigma_{1}) (t), (S \sigma_{1}) (t)\bigr) \\ &{}-M\sigma_{1}(t) -H(F\sigma_{1}) (t)-K(S \sigma_{1}) (t) \\ &{}- \bigl(Mu_{2}(t) + H(Fu_{2}) (t)+K(Su_{2}) (t) \\ &{}+f\bigl(t, \sigma_{2}(t), (F\sigma_{2}) (t), (S \sigma_{2}) (t)\bigr)-M\sigma_{2}(t)-H (F\sigma_{2}) (t)-K(S\sigma_{2}) (t) \bigr) \\ \geq& M\bigl(u_{1}(t)-u_{2}(t)\bigr)+H\bigl(F(u_{1}-u_{2}) \bigr) (t)+K\bigl(S(u_{1}-u_{2})\bigr) (t), \\ =&M\varphi(t)+H(F\varphi) (t)+K(S\varphi) (t), \quad t\in J^{-}, \end{aligned}$$

and by (H3),

$$\begin{aligned} \Delta\varphi(t_{k}) =& \Delta u_{1}(t_{k})- \Delta u_{2}(t_{k}) \\ =&L_{k} u_{1}(t_{k})+I_{k} \bigl( \sigma_{1}(t_{k}) \bigr)-L_{k} \sigma_{1}(t_{k})- \bigl[ L_{k} u_{2}(t_{k})+I_{k} \bigl(\sigma_{2}(t_{k}) \bigr)-L_{k}\sigma_{2}(t_{k}) \bigr] \\ \geq& L_{k}\bigl[u_{1}(t_{k})-u_{2}(t_{k}) \bigr]=L_{k} \varphi(t_{k}), \quad k=1, 2, \ldots, m. \end{aligned}$$

It is easy to see that

$$\begin{aligned} \varphi(0) =&u_{1}(0)-u_{2}(0) \\ =& u_{1}(T)-u_{2}(T) \\ \geq& \varphi(T) . \end{aligned}$$

Then, from Lemma 3.5, we get \(\varphi(t)\leq 0\), which yields \(\mathcal{A}\sigma_{1}\leq\mathcal{A}\sigma_{2}\).

Next, we define the sequences \(\{\mu_{n}\}\), \(\{\nu_{n}\}\) such that \(\mu_{n+1}=\mathcal{A}\mu_{n}\) and \(\nu_{n+1}=\mathcal{A}\nu _{n}\). From (i) and (ii), we see that the sequences \(\{\mu_{n}\}\), \(\{\nu_{n}\}\) satisfy the following inequality:

$$ \nu_{0}\leq\nu_{1}\leq\cdots\leq\nu_{n} \leq \cdots\leq\mu_{n}\leq\cdots\leq\mu_{1}\leq\mu_{0}, $$

for all \(n\in\mathbb{N}\). Obviously, the functions \(\mu_{n}\), \(\nu_{n}\) (\(n=1, 2,\ldots\)) satisfy

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}\mu_{n}(t)=M\mu_{n}(t)+H(F\mu_{n})(t)+K(S\mu_{n})(t) \\ \hphantom{{{}_{t_{k}}}D^{\alpha}\mu_{n}(t)={}}{}+f(t,\mu_{n-1}(t), (F\mu_{n-1})(t), (S\mu_{n-1})(t)) \\ \hphantom{{{}_{t_{k}}}D^{\alpha}\mu_{n}(t)={}}{}-M\mu_{n-1}(t)-H(F\mu_{n-1})(t)-K(S\mu _{n-1})(t), \quad t\in J^{-}, \\ \Delta\mu_{n}(t_{k})=L_{k} \mu_{n}(t_{k})+I_{k} (\mu_{n-1}(t_{k}) )-L_{k}\mu_{n-1}(t_{k}), \quad k = 1, 2, \ldots,m, \\ \mu_{n}(0) = \mu_{n}(T), \end{cases} $$

and

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\alpha}\nu_{n}(t)=M\nu_{n}(t)+H(F\nu_{n})(t)+K(S\nu_{n})(t) \\ \hphantom{{{}_{t_{k}}}D^{\alpha}\nu_{n}(t)={}}{}+f(t,\nu_{n-1}(t), (F\nu_{n-1})(t), (S\nu_{n-1})(t)) \\ \hphantom{{{}_{t_{k}}}D^{\alpha}\nu_{n}(t)={}}{}-M\nu_{n-1}(t)-H(F\nu_{n-1})(t)-K(S\nu _{n-1})(t), \quad t\in J^{-}, \\ \Delta\nu_{n}(t_{k})=L_{k} \nu_{n}(t_{k})+I_{k} (\nu_{n-1}(t_{k}) )-L_{k}\nu_{n-1}(t_{k}), \quad k = 1, 2, \ldots,m, \\ \nu_{n}(0) = \nu_{n}(T). \end{cases} $$

Therefore, there exist functions \(x_{*}\) and \(x^{*}\) on J such that \(\lim_{n\rightarrow\infty}\nu_{n}=x_{*}\) and \(\lim_{n\rightarrow\infty}\mu_{n}=x^{*}\) uniformly on J. Passing to the limit in the above problems shows that \(x_{*}\), \(x^{*}\) are solutions of the periodic boundary value problem (1.1).

Finally, we show that \(x_{*}\), \(x^{*}\) are the minimal and maximal solutions of the problem (1.1). Let \(x(t)\), \(t\in J\), be any solution of the problem (1.1) such that \(x\in[\nu_{0}, \mu_{0}]\), and assume that \(\nu_{n}(t)\leq x(t)\leq\mu_{n}(t)\) on J for some nonnegative integer n. Let \(\varphi=\nu_{n+1}-x\); then for \(t\in J\), we have

$$\begin{aligned} {{}_{t_{k}}}D^{\alpha}\varphi(t) =& {{}_{t_{k}}}D^{\alpha} \nu _{n+1}(t)-{{}_{t_{k}}}D^{\alpha}x(t) \\ =& M\nu_{n+1}(t)+ H(F\nu_{n+1}) (t)+K(S\nu_{n+1}) (t) \\ &{}+f\bigl(t, \nu_{n}(t), (F\nu_{n}) (t), (S \nu_{n}) (t)\bigr) \\ &{}-M\nu_{n}(t) -H(F\nu_{n}) (t)-K(S\nu_{n}) (t)-f\bigl(t, x(t), (Fx) (t), (Sx) (t)\bigr) \\ \geq& M\varphi(t)+H(F\varphi) (t)+K(S\varphi) (t), \quad t\in J^{-}, \end{aligned}$$

and

$$\begin{aligned} \Delta\varphi(t_{k}) =& \Delta\nu_{n+1}(t_{k})- \Delta x(t_{k}) \\ =&L_{k} \nu_{n+1}(t_{k})+I_{k} \bigl(\nu_{n}(t_{k}) \bigr)-L_{k} \nu_{n}(t_{k}) -I_{k} \bigl(x(t_{k}) \bigr) \\ \geq& L_{k}\bigl[\nu_{n+1}(t_{k})-x(t_{k}) \bigr]=L_{k} \varphi(t_{k}), \quad k=1, 2, \ldots, m, \end{aligned}$$

and also

$$\begin{aligned} \varphi(0) =&\nu_{n+1}(0)-x(0) \\ =& \nu_{n+1}(T)-x(T) \\ \geq& \varphi(T) . \end{aligned}$$

Then, by applying Lemma 3.5, we have \(\varphi (t)\leq 0\), which leads to \(\nu_{n+1}\leq x\) on J. By a similar method, we can show that \(x\leq\mu_{n+1}\) on J. Since \(\nu_{0}\leq x\leq\mu_{0}\) on J, by mathematical induction we deduce that \(\nu_{n}\leq x\leq\mu_{n}\) on J for every n. Hence, letting \(n\rightarrow\infty\), we obtain \(x_{*}(t)\leq x(t)\leq x^{*}(t)\) on J. The proof is complete. □

5 An example

Example 5.1

Consider the following periodic boundary value problem for an impulsive conformable fractional integro-differential equation:

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\frac{1}{2}}x(t)= \frac {t^{\frac{1}{4}}}{12}\sin x(t)+\frac{t^{6}}{15}\int_{0}^{t}t^{3}s^{4}x(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}x(t)={}}{}+\frac{t^{\frac{1}{3}}}{18}\int _{0}^{1}\cos^{2}(s^{2}t)x(s)\,ds, \quad t\in [0,1 ]\setminus \{ \frac{1}{2} \}, \\ \Delta x (\frac{1}{2} )= \frac{1}{16}\tan^{-1} (x (\frac{1}{2} ) ),\quad k=1, \\ x(0)=x(1). \end{cases} $$
(5.1)

Here \(\alpha=1/2\), \(T=1\), \(m=1\), \(t_{1}=1/2\). Choosing \(\mu_{0}=0\) and \(\nu_{0}= \bigl \{\scriptsize{\begin{array}{l@{\quad}l} -5, & t\in [0,\frac{1}{2} ], \\ -6, & t\in (\frac{1}{2},1 ], \end{array}} \bigr.\) we see that \(\mu_{0}\) and \(\nu_{0}\) are lower and upper solutions of the problem (5.1), respectively, with \(\nu_{0}\leq\mu_{0}\). Let

$$ f(t,u,v,w)=\frac{t^{\frac{1}{4}}}{12}\sin u+\frac{t^{6}}{15}v+\frac {t^{\frac{1}{3}}}{18}w, $$

then we have

$$ f(t,u,v,w)-f(t,\bar{u},\bar{v},\bar{w})\leq\frac{1}{12}(u-\bar {u})+ \frac{1}{15}(v-\bar{v})+\frac{1}{18}(w-\bar{w}), $$

where \(\nu_{0}(t)\leq\bar{u}(t)\leq u(t)\leq\mu_{0}(t)\), \((F\nu _{0})(t)\leq\bar{v}(t)\leq v(t)\leq(F\mu_{0})(t)\), \((S\nu_{0})(t)\leq \bar{w}(t)\leq w(t)\leq(S\mu_{0})(t)\), \(t\in[0,1]\). We see that

$$ I_{1}\bigl(u(t_{1})\bigr)-I_{1} \bigl(v(t_{1})\bigr)\leq\frac{1}{16}\bigl(u(t_{1})-v(t_{1}) \bigr), $$

where \(\nu_{0}(t_{1})\leq v(t_{1})\leq u(t_{1})\leq\mu_{0}(t_{1})\). Setting \(M=1/12\), \(H=1/15\), \(K=1/18\), \(L_{1}=1/16\), \(a=1/2\), \(\bar{l}=1\), and \(\bar{h}=1\), it follows that

$$ \sum_{k=1}^{m}L_{k}+ \frac{a^{\alpha}}{\alpha}(m+2) (M+H\bar {l}T+K\bar{h}T)=0.9345983636\leq1 $$

and

$$\begin{aligned} \Lambda =&\frac{e^{\frac{M}{\alpha}}(t_{0},T)}{\vert 1-e^{\frac {M}{\alpha}}(t_{0},T)\vert } \Biggl[a^{\alpha-1} \Biggl(\frac {\bar{l}H}{2} \sum_{k=1}^{m+1} \bigl(t_{k}^{2}-t_{k-1}^{2} \bigr)+\bar{h}KT^{2} \Biggr)+\sum_{k=1}^{m}L_{k} \Biggr] \\ =&0.8962956459 < 1, \end{aligned}$$

where \(e^{\frac{M}{\alpha}}(t_{0},T)=e^{\frac{M}{\alpha} (1-\frac{1}{2} )^{\alpha}}\cdot e^{\frac{M}{\alpha} (\frac{1}{2}-t_{0} )^{\alpha}}=1.265797376\). Therefore, the periodic boundary value problem (5.1) satisfies all conditions of Theorem 4.2. Then by using the monotone iterative scheme,

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\frac{1}{2}}\mu_{n}(t)= \frac{1}{12}\mu _{n}(t)+\frac{1}{15}\int_{0}^{t}t^{3}s^{4}\mu_{n}(s)\,ds+\frac{1}{18}\int _{0}^{1}\cos^{2}(s^{2}t)\mu_{n}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\mu_{n}(t)={}}{}+\frac{t^{\frac{1}{4}}}{12}\sin\mu _{n-1}(t)+\frac{t^{6}}{15}\int_{0}^{t}t^{3}s^{4}\mu_{n-1}(s)\,ds+\frac{t^{\frac {1}{3}}}{18}\int_{0}^{1}\cos^{2}(s^{2}t)\mu_{n-1}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\mu_{n}(t)={}}{}-\frac{1}{12}\mu_{n-1}(t)-\frac {1}{15}\int_{0}^{t}t^{3}s^{4}\mu_{n-1}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\mu_{n}(t)={}}{}-\frac{1}{18}\int_{0}^{1}\cos^{2}(s^{2}t)\mu _{n-1}(s)\,ds, \quad t\in[0,1]\setminus \{\frac{1}{2} \}, \\ \Delta\mu_{n}(t_{1})= \frac{1}{16} \mu_{n}(t_{1})+\frac {1}{16}\tan^{-1} (\mu_{n-1} (\frac{1}{2} ) )-\frac{1}{16}\mu_{n-1}(t_{1}),\quad k = 1, \\ \mu_{n}(0) = \mu_{n}(1), \end{cases} $$

and

$$ \textstyle\begin{cases} {{}_{t_{k}}}D^{\frac{1}{2}}\nu_{n}(t)= \frac{1}{12}\nu _{n}(t)+\frac{1}{15}\int_{0}^{t}t^{3}s^{4}\nu_{n}(s)\,ds+\frac{1}{18}\int _{0}^{1}\cos^{2}(s^{2}t)\nu_{n}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\nu_{n}(t)={}}{}+\frac{t^{\frac{1}{4}}}{12}\sin\nu _{n-1}(t)+\frac{t^{6}}{15}\int_{0}^{t}t^{3}s^{4}\nu_{n-1}(s)\,ds+\frac{t^{\frac {1}{3}}}{18}\int_{0}^{1}\cos^{2}(s^{2}t)\nu_{n-1}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\nu_{n}(t)={}}{}-\frac{1}{12}\nu_{n-1}(t)-\frac {1}{15}\int_{0}^{t}t^{3}s^{4}\nu_{n-1}(s)\,ds \\ \hphantom{{{}_{t_{k}}}D^{\frac{1}{2}}\nu_{n}(t)={}}{}-\frac{1}{18}\int_{0}^{1}\cos^{2}(s^{2}t)\nu _{n-1}(s)\,ds, \quad t\in[0,1]\setminus \{\frac{1}{2} \}, \\ \Delta\nu_{n}(t_{1})= \frac{1}{16} \nu_{n}(t_{1})+\frac {1}{16}\tan^{-1} (\nu_{n-1} (\frac{1}{2} ) )-\frac{1}{16}\nu_{n-1}(t_{1}), \quad k = 1, \\ \nu_{n}(0) = \nu_{n}(1), \end{cases} $$

for \(n=1,2,3,\ldots\) , the problem (5.1) has extremal solutions in the segment \([\nu_{0},\mu_{0}]\).
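The two numerical constants quoted above can be reproduced with a few lines of arithmetic; the sketch below (not part of the original example) simply evaluates the left-hand side of (3.15) and Λ from (4.2) for the data of Example 5.1.

```python
import math

# Data of Example 5.1.
alpha, T, m = 0.5, 1.0, 1
M, H, K = 1/12, 1/15, 1/18
L = [1/16]                       # L_1
a = 0.5                          # a = max_k (t_{k+1} - t_k)
l_bar = h_bar = 1.0
t = [0.0, 0.5, 1.0]              # t_0, t_1, t_2 = T

# Left-hand side of condition (3.15).
cond_315 = sum(L) + a**alpha / alpha * (m + 2) * (M + H*l_bar*T + K*h_bar*T)
print(cond_315)                  # ≈ 0.9345984 <= 1

# Lambda from (4.2); e^{M/alpha}(t_0,T) is split at the impulse point t_1 = 1/2.
e0T = math.exp(M/alpha * (T - t[1])**alpha) * math.exp(M/alpha * (t[1] - t[0])**alpha)
Lam = e0T / abs(1.0 - e0T) * (
    a**(alpha - 1) * (l_bar*H/2 * sum(t[k]**2 - t[k-1]**2 for k in range(1, m + 2))
                      + h_bar*K*T**2)
    + sum(L))
print(e0T, Lam)                  # ≈ 1.2657974 and ≈ 0.8962956 < 1
```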