1 Introduction

Several types of fractional derivatives have been introduced using different approaches. Recently, new types of fractional derivatives with nonsingular kernels have been developed and implemented in several applications; see [13, 14, 16, 29]. Abdeljawad and Baleanu [2] developed a new type of fractional derivative with generalized Mittag-Leffler kernel, whose kernel can be singular or nonsingular depending on the parameters. Fernandez et al. [17, 18] related this derivative to Prabhakar operators and expressed it as a series of Riemann–Liouville operators. The derivative has been implemented in mathematical modeling, and only a few analytical and numerical studies [1, 17, 18] are devoted to this aspect. On the other hand, very recent work on the application of fixed point theory to integral equations involving different fractional operators has been published [4–6, 12, 20]. This motivates the part of the present work devoted to the nonlinear fractional case. For recent analysis techniques in ordinary, partial and fractional differential equations in which Mittag-Leffler functions, and their particular case the exponential function, play a role, we refer to [11, 15, 19, 21–24, 27]. For a generalized type of weighted fractional differences, where the discrete Mittag-Leffler function plays an important role, we refer to [3].

Definition 1.1

The left Caputo fractional derivative with generalized Mittag-Leffler kernel is defined by

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } f\bigr) (t)= \frac{\mathbf{B}(\alpha )}{1-\alpha } \int _{0}^{t} (t-s)^{\beta -1} { \mathbf{E}}_{ \alpha,\beta }{ \bigl[-\varepsilon _{\alpha }(t-s)^{\alpha } \bigr]} f^{\prime }(s) \,ds, \end{aligned}$$
(1.1)

where \({\mathbf{B}}(\alpha )>0\) is a normalization function satisfying \({\mathbf{B}}(0)={\mathbf{B}}(1)=1\), \(\varepsilon _{\alpha }=\frac{\alpha }{1-\alpha }, 0<\alpha <1 \), and \({\mathbf{E}}_{\alpha,\beta }\) is the Mittag-Leffler function of two parameters defined by

$$\begin{aligned} {\mathbf{E}}_{\alpha,\beta }(x)=\sum_{k=0}^{\infty } \frac{x^{k}}{\Gamma (\alpha k+\beta )}. \end{aligned}$$
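For concreteness, the series defining \({\mathbf{E}}_{\alpha,\beta }\) can be evaluated directly by truncation. The following minimal Python sketch (the helper name and the truncation length are our own illustrative choices, not part of the definition) evaluates the two-parameter Mittag-Leffler function; for \(\alpha =\beta =1\) it reproduces \(e^{x}\).

```python
import math

def mittag_leffler(x, alpha, beta, terms=100):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(x) by truncated series.

    The series converges for every real x; a fixed truncation is adequate for
    the moderate arguments appearing in this paper.
    """
    return sum(x**k / math.gamma(alpha * k + beta) for k in range(terms))

# Sanity check: E_{1,1}(x) = exp(x).
print(mittag_leffler(1.0, 1.0, 1.0), math.exp(1.0))
```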

For \(\beta \ge 1 \), the kernel \(k(t)=t^{\beta -1}{\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }t^{\alpha })\) is nonsingular, and for \(\beta =1\) we recover the Atangana–Baleanu derivative. Here we are interested in the case of a singular kernel, that is, \(0<\beta <1\). For more details about these derivatives we refer the reader to [1, 2, 17, 18]. In this paper we consider the multi-term linear and nonlinear equations of the form

$$\begin{aligned} &a_{1} \frac{d v}{dt}+a_{2} {}^{ABC} {}_{0}D^{\alpha,\beta }v+q(t) v=h(t), \quad t>0, \end{aligned}$$
(1.2)
$$\begin{aligned} &a_{1} \frac{d v}{dt}+a_{2} {}^{ABC} {}_{0}D^{\alpha,\beta }v=h(t,v),\quad t>0, \end{aligned}$$
(1.3)

where \(0<\alpha,\beta <1\), \(a_{1},a_{2}\ge 0\), \(a_{1}^{2}+a_{2}^{2}>0\). For \(a_{2}=0\) and \(a_{1}>0\), the above two equations reduce to first-order linear and nonlinear differential equations, whose theory is well developed, so we are interested here in the case \(a_{2}>0\). Recently, several maximum-minimum principles were derived and implemented to study fractional differential equations [7–10, 25]. In this paper we extend the maximum principle techniques to analyze the solutions of problems (1.2)–(1.3).

We organize this paper as follows: In Sect. 2, we derive new estimates of the fractional derivative of a function at its extreme points. In Sect. 3, we develop new comparison principles and analyze the solutions of the linear multi-term equation. In Sect. 4, we establish existence and uniqueness results to the nonlinear multi-term equation via the Banach fixed point theorem. We close with some conclusions in Sect. 5.

2 Estimates of the fractional derivatives at the extreme points

The following results concerning the Mittag-Leffler function are essential in what follows.

Lemma 2.1

[26, 28, 30] The following hold true:

  1.

    \({\mathbf{E}}_{\alpha,\alpha }(-x)=-\alpha \frac{d}{dx}{\mathbf{E}}_{\alpha }(-x), \alpha \ge 0\).

  2.

    For \(\beta >\alpha >0\), we have

    $$\begin{aligned} {\mathbf{E}}_{\alpha,\beta }(-x)=\frac{1}{\alpha \Gamma (\beta -\alpha )} \int _{0}^{1} \bigl(1-t^{\frac{1}{\alpha }} \bigr)^{\beta -\alpha -1} { \mathbf{E}}_{\alpha,\alpha }(-t x) \,dt. \end{aligned}$$
    (2.1)
  3.

    \({\mathbf{E}}_{\alpha,\beta }(-x)\), \(x\ge 0\), is completely monotone for \(\alpha, \beta >0\) if and only if \(0< \alpha \le 1\) and \(\beta \ge \alpha \). That is,

    $$\begin{aligned} (-1)^{n} \frac{d^{n}}{dx^{n}} \bigl({\mathbf{E}}_{\alpha,\beta }(-x) \bigr) \ge 0,\quad x\ge 0, n=0,1,2,\dots . \end{aligned}$$
    (2.2)

Proposition 2.1

For \(0<\alpha <\beta <1\), the kernel

$$\begin{aligned} k(x)=x^{\beta -1} {\mathbf{E}}_{\alpha,\beta }\bigl(-\varepsilon _{\alpha }x^{ \alpha }\bigr), \end{aligned}$$

is monotone non-increasing for \(x> 0\).

Proof

We have \(k(x)=\eta _{1}(x) \eta _{2}(x)\), where \(\eta _{1}(x)=x^{\beta -1}\ge 0\) is monotone non-increasing, and \(\eta _{2}(x)={\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }x^{\alpha })\). From Eq. (2.2), and since \(x^{\alpha }\) is increasing, \(\eta _{2}(x)\) is monotone non-increasing. Since \({\mathbf{E}}_{\alpha }(-x)\) is completely monotone, part 1 of Lemma 2.1 gives \({\mathbf{E}}_{\alpha,\alpha }(-x)\ge 0\), and from Eq. (2.1) we have \(\eta _{2}(x)={\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }x^{\alpha })\ge 0\). Now \(\eta _{1}\) and \(\eta _{2}\) are both nonnegative and monotone non-increasing, so their product is monotone non-increasing. Indeed, \(k^{\prime }(x)=\underbrace{\eta _{2}^{\prime }(x) \eta _{1}(x)}_{\leq 0}+ \underbrace{\eta _{2}(x)\eta _{1}^{\prime }(x)}_{\leq 0}\leq 0\). □
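The monotonicity in Proposition 2.1 can also be observed numerically. The sketch below is an illustration only (not part of the proof), with hypothetical parameters \(\alpha =0.4\), \(\beta =0.7\) and the truncated-series evaluation of \({\mathbf{E}}_{\alpha,\beta }\) from Sect. 1.

```python
import math

def mittag_leffler(x, alpha, beta, terms=80):
    return sum(x**k / math.gamma(alpha * k + beta) for k in range(terms))

# Spot-check of Proposition 2.1 for hypothetical parameters 0 < alpha < beta < 1.
alpha, beta = 0.4, 0.7
eps = alpha / (1.0 - alpha)                     # eps_alpha = alpha / (1 - alpha)
xs = [0.05 * j for j in range(1, 41)]           # grid on (0, 2]
ks = [x**(beta - 1) * mittag_leffler(-eps * x**alpha, alpha, beta) for x in xs]
print(all(k1 >= k2 for k1, k2 in zip(ks, ks[1:])))   # expected: True (non-increasing)
```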

Lemma 2.2

Assume \(f\in H^{1}(0,1)\) is a function attaining its maximum at a point \(t_{0}\in (0,1]\) and \(0<\alpha <\beta <1\). Then

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } f\bigr) (t_{0})\ge \frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{\beta -1}{ \mathbf{E}}_{\alpha, \beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] \bigl(f(t_{0})-f(0)\bigr)\ge 0. \end{aligned}$$
(2.3)

Proof

We shall make use of the auxiliary function \(g(t)=f(t_{0})-f(t)\), \(t \in [0,1]\). Then it follows that \(g(t)\ge 0\) on \([0,1]\), \(g(t_{0})=0\), and \(({}^{ABC} {}_{0}D^{\alpha,\beta } g)(t)=-({}^{ABC}{}_{0}D^{\alpha, \beta } f)(t)\). We have

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } g\bigr) (t_{0})= \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \int _{0}^{t_{0}} (t_{0}-s)^{\beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl[-\varepsilon _{\alpha }(t_{0}-s)^{\alpha } \bigr]g'(s)\,ds. \end{aligned}$$

Let

$$\begin{aligned} k_{0}(s)&=(t_{0}-s)^{\beta -1} {\mathbf{E}}_{\alpha,\beta } \bigl[-\varepsilon _{\alpha }(t_{0}-s)^{\alpha } \bigr]=(t_{0}-s)^{\beta -1} \sum_{k=0}^{\infty } \frac{(-\varepsilon _{\alpha })^{k}(t_{0}-s)^{\alpha k}}{\Gamma (\alpha k+\beta )} \\ &=\sum_{k=0}^{\infty }\frac{(-\varepsilon _{\alpha })^{k}(t_{0}-s)^{\alpha k+\beta -1}}{\Gamma (\alpha k+\beta )}, \end{aligned}$$

then

$$\begin{aligned} \frac{dk_{0}}{ds}&= -\sum_{k=0}^{\infty } \frac{(-\varepsilon _{\alpha })^{k}}{\Gamma (\alpha k+\beta )} (\alpha k+ \beta -1) (t_{0}-s)^{\alpha k+\beta -2} \\ &=-(t_{0}-s)^{\beta -2}\sum_{k=0}^{\infty } \frac{(-\varepsilon _{\alpha })^{k}}{\Gamma (\alpha k+\beta -1)} (t_{0}-s)^{ \alpha k} \\ &= -(t_{0}-s)^{\beta -2}{\mathbf{E}}_{\alpha,\beta -1}\bigl[- \varepsilon _{\alpha }(t_{0}-s)^{\alpha }\bigr] \end{aligned}$$

is well defined for \(s< t_{0}\). Since \(k(x)=x^{\beta -1} {\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }x^{ \alpha })\) is monotone non-increasing and \(k_{0}(s)=k(t_{0}-s)\), we have \(\frac{dk_{0}}{ds}=-k^{\prime }(t_{0}-s) \ge 0\). By integration by parts with

$$\begin{aligned} u=(t_{0}-s)^{\beta -1} {\mathbf{E}}_{\alpha,\beta }\bigl[{- \varepsilon _{\alpha }(t_{0}-s)^{\alpha }}\bigr] \quad\text{and}\quad \,dv=g'(s) \,ds, \end{aligned}$$

we have

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } g\bigr) (t_{0}) &=\frac{{\mathbf{B}}(\alpha )}{1-\alpha } \biggl((t_{0}-s)^{\beta -1}{ \mathbf{E}}_{\alpha,\beta }\bigl[{-\varepsilon _{\alpha }(t_{0}-s)^{\alpha }} \bigr] g(s)|_{0}^{t_{0}} - \int _{0}^{t_{0}} \frac{dk_{0}}{ds}g(s) \,ds \biggr) \\ &=\frac{{\mathbf{B}}(\alpha )}{1-\alpha } \biggl(-t_{0}^{\beta -1} { \mathbf{E}}_{ \alpha,\beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] g(0)- \int _{0}^{t_{0}} \frac{dk_{0}}{ds} g(s) \,ds \biggr), \\ &\le -\frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{\beta -1} {\mathbf{E}}_{ \alpha,\beta } \bigl[-\varepsilon _{\alpha }t_{0}^{\alpha }\bigr] g(0). \end{aligned}$$
(2.4)

Note that since \(g(t_{0})=0\), by L’Hospital’s rule we have

$$\begin{aligned} \lim_{s\rightarrow t_{0}} (t_{0}-s)^{\beta -1} g(s)=-\lim _{s \rightarrow t_{0}} (1-\beta )^{-1} g^{\prime }(s) (t_{0}-s)^{\beta }=0, \quad 0< \beta < 1. \end{aligned}$$

Thus,

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } g\bigr) (t_{0})=-\bigl({}^{ABC} {}_{0}D^{ \alpha,\beta } f \bigr) (t_{0}) \le -\frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{ \beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] g(0), \end{aligned}$$

or

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } f\bigr) (t_{0}) \ge \frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{\beta -1} { \mathbf{E}}_{\alpha, \beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] \bigl(f(t_{0})-f(0)\bigr), \end{aligned}$$

which completes the proof. □
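Estimate (2.3) can be checked numerically for a concrete function. The sketch below uses the hypothetical test function \(f(t)=t(2-t)\), whose maximum on \([0,1]\) is attained at \(t_{0}=1\), takes \({\mathbf{B}}(\alpha )=1\) purely for illustration, and approximates the integral in (1.1) by a crude left-endpoint rule; since \(f^{\prime }(s)=2-2s\) vanishes at \(s=t_{0}\), the integrand stays bounded.

```python
import math

def mittag_leffler(x, alpha, beta, terms=80):
    return sum(x**k / math.gamma(alpha * k + beta) for k in range(terms))

# Spot-check of (2.3) for f(t) = t(2 - t), maximum at t0 = 1 (hypothetical data).
alpha, beta, B = 0.3, 0.6, 1.0                  # B(alpha) = 1 for illustration
eps = alpha / (1.0 - alpha)
t0, n = 1.0, 2000
dt = t0 / n
# Left-endpoint quadrature of the ABC derivative (1.1) at t0, with f'(s) = 2 - 2s.
lhs = (B / (1 - alpha)) * sum(
    (t0 - s)**(beta - 1) * mittag_leffler(-eps * (t0 - s)**alpha, alpha, beta)
    * (2 - 2 * s) * dt
    for s in (j * dt for j in range(n)))
# Right-hand side of (2.3); here f(t0) - f(0) = 1.
rhs = (B / (1 - alpha)) * t0**(beta - 1) * mittag_leffler(-eps * t0**alpha, alpha, beta)
print(lhs, rhs, lhs >= rhs >= 0.0)              # expected: True
```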

If we apply the same argument to \(-f\), then we obtain the following.

Lemma 2.3

Assume \(f\in H^{1}(0,1)\) is a function attaining its minimum at a point \(t_{0}\in (0,1]\) and \(0<\alpha <\beta <1\). Then

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } f\bigr) (t_{0})\le \frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{\beta -1} { \mathbf{E}}_{\alpha, \beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] \bigl(f(t_{0})-f(0)\bigr)\le 0. \end{aligned}$$
(2.5)

Lemma 2.4

Let a function \(f\in H^{1}(0,1)\) attain its maximum at a point \(t_{0}\in (0,1]\), and let \(0<\alpha <\beta <1\). If \(f(t)\) is not identically constant on \([0,t_{0}]\), then

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } f\bigr) (t_{0})> \frac{{\mathbf{B}}(\alpha )}{1-\alpha }t_{0}^{\beta -1} { \mathbf{E}}_{\alpha, \beta }\bigl[-\varepsilon _{\alpha }t_{0}^{\alpha } \bigr] \bigl(f(t_{0})-f(0)\bigr)\ge 0. \end{aligned}$$
(2.6)

Proof

Since \(f(t)\) is not identically constant on \([0,t_{0}]\), the function \(g(t)=f(t_{0})-f(t)\ge 0\) is not identically zero on \([0,t_{0}]\). Thus

$$\begin{aligned} \int _{0}^{t_{0}} \frac{dk_{0}}{ds} g(s) \,ds>0, \end{aligned}$$

and the result follows from Eq. (2.4). □

3 Comparison principles

In this section, we make use of the results derived in Sect. 2 to obtain new comparison principles for linear multi-term fractional equations involving fractional derivatives with generalized Mittag-Leffler kernels. Then we use these principles to establish a uniqueness result and an a priori norm estimate for solutions of related fractional initial value problems.

Lemma 3.1

Assume a function \(v\in H^{1}(0,1)\cap C[0,1]\) satisfies the fractional inequality

$$\begin{aligned} P_{\alpha,\beta }(v)=a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{ \alpha,\beta } v \bigr) (t)+q(t) v(t) \le 0,\quad t>0, 0< \alpha < \beta < 1, \end{aligned}$$
(3.1)

where \(q(t)> 0\) is continuous on \([0,1]\). If \(v(0)\le 0\), then \(v(t)\le 0, t\in [0,1]\).

Proof

Assume the result is untrue. Since v is continuous on \([0,1]\), it attains an absolute maximum at some \(t_{0}\in [0,1]\) with \(v(t_{0})>0\). Since \(v(0)\le 0\), we have \(t_{0}\neq 0\). If \(v(t)\) is identically constant on \([0,t_{0}]\), then

$$\begin{aligned} \frac{d v}{dt}(t_{0})= \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v \bigr) (t_{0})=0,\quad q(t_{0})>0, \end{aligned}$$

and thus

$$\begin{aligned} P_{\alpha,\beta }(v) (t_{0})=q(t_{0}) v(t_{0})>0, \end{aligned}$$

which contradicts (3.1).

If \(v(t)\) is not identically constant on \([0,t_{0}]\), then, by virtue of the result in Lemma 2.4, we have

$$\begin{aligned} \frac{d v}{dt}(t_{0})=0, \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v \bigr) (t_{0})>0, \end{aligned}$$

and thus

$$\begin{aligned} P_{\alpha,\beta }(v) (t_{0})\ge q(t_{0}) v(t_{0})>0, \end{aligned}$$

which contradicts (3.1). □

Corollary 3.1

Let \(v_{1},v_{2}\in H^{1}(0,1)\cap C[0,1]\) be possible solutions to the fractional initial value problems

$$\begin{aligned} &a_{1} \frac{d v_{1}}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v_{1}\bigr) (t)+q(t) v_{1}(t)=h_{1}(t),\quad t>0, 0< \alpha < \beta < 1, \\ &a_{1} \frac{d v_{2}}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v_{2}\bigr) (t)+q(t) v_{2}(t)=h_{2}(t),\quad t>0, 0< \alpha < \beta < 1, \\ &v_{1}(0)=r_{1},\qquad v_{2}(0)=r_{2}, \end{aligned}$$

where \(q(t)> 0\) and \(h_{1}(t), h_{2}(t)\) are continuous on \([0,1]\). If \(h_{1}(t)\le h_{2}(t)\) and \(r_{1}\le r_{2}\), then

$$\begin{aligned} v_{1}(t)\le v_{2}(t),\quad t\in [0,1]. \end{aligned}$$

Proof

Let \(z=v_{1}-v_{2}\); then

$$\begin{aligned} &P_{\alpha,\beta }(z)= a_{1} \frac{d z}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{ \alpha,\beta } z\bigr) (t)+q(t) z(t)=h_{1}(t)-h_{2}(t) \le 0, \\ &\quad t>0, 0< \alpha < \beta < 1, \end{aligned}$$
(3.2)

and \(z(0)=r_{1}-r_{2} \le 0\). Applying the result in Lemma 3.1 we have \(z(t)\le 0\), which completes the proof. □

Lemma 3.2

Assume \(v \in H^{1}(0,1)\) is a possible solution to

$$\begin{aligned} a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)+q(t) v(t)=h(t),\quad t>0, 0< \alpha < \beta < 1, \end{aligned}$$

where \(q(t)> 0\) is continuous on \([0,1]\). Then

$$\begin{aligned} \Vert v \Vert _{[0,1]}=\max_{t\in [0,1]} \bigl\vert v(t) \bigr\vert \le M=\max_{t\in [0,1]}\biggl\{ \biggl\vert \frac{h(t)}{q(t)} \biggr\vert , \bigl\vert v(0) \bigr\vert \biggr\} , \end{aligned}$$

provided that the maximum M exists.

Proof

We have \(M\ge |\frac{h(t)}{q(t)}|\), or equivalently \(M q(t) \ge |h(t)|\), \(t\in [0,1]\). Let \(v_{1}=v-M\); then

$$\begin{aligned} P_{\alpha,\beta } (v_{1})&= a_{1} \frac{d v_{1}}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{ \alpha,\beta } v_{1}\bigr) (t)+q(t) v_{1}(t) \\ &= a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)+q(t) (v-M) \\ &= h(t)-q(t) M \le \bigl\vert h(t) \bigr\vert -q(t) M\le 0. \end{aligned}$$

Since \(v_{1}(0)=v(0)-M\le 0\), by virtue of Lemma 3.1 we have \(v_{1}=v-M\le 0\), or

$$\begin{aligned} v\le M. \end{aligned}$$
(3.3)

Analogously, let \(v_{2}=-M-v\), then

$$\begin{aligned} P_{\alpha,\beta } (v_{2})&= a_{1} \frac{d v_{2}}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{ \alpha,\beta } v_{2}\bigr) (t)+q(t) v_{2}(t) \\ &=-a_{1} \frac{d v}{dt}-a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)+q(t) (-M-v) \\ &=-h(t)-q(t) M\le 0, \end{aligned}$$

which together with \(v_{2}(0)=-M-v(0) \le 0\), implies \(v_{2}=-v-M\le 0\), or

$$\begin{aligned} v\ge -M. \end{aligned}$$
(3.4)

Combining (3.3) and (3.4), we have \(|v(t)|\le M\) for \(t\in [0,1]\), and hence the result. □

Lemma 3.3

The multi-term fractional initial value problem

$$\begin{aligned} &a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)+q(t) v(t)=h(t),\quad t>0, 0< \alpha < \beta < 1, \end{aligned}$$
(3.5)
$$\begin{aligned} &v(0)=v_{0}, \end{aligned}$$
(3.6)

where \(q(t)> 0\) is continuous on \([0,1]\), possesses at most one solution \(v(t)\in H^{1}(0,1)\).

Proof

Let \(v_{1}\) and \(v_{2}\) be possible solutions to (3.5)–(3.6), and define \(z(t)=v_{1}(t)-v_{2}(t)\). Then

$$\begin{aligned} a_{1} \frac{d z}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } z\bigr) (t)+q(t) z(t)=0,\quad z(0)=0. \end{aligned}$$

Applying the result in Lemma 3.2, we have

$$\begin{aligned} \Vert z \Vert _{[0,1]}\le M=0, \end{aligned}$$

which implies \(z(t)=0\) on \([0,1]\) and completes the proof. □

4 The nonlinear equation

We consider the nonlinear multi-term initial value problem

$$\begin{aligned} &a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)=h(t,v),\quad t>0, \end{aligned}$$
(4.1)
$$\begin{aligned} &v(0)=v_{0}. \end{aligned}$$
(4.2)

We apply the Banach fixed point theorem to show the existence of a unique solution to problem (4.1)–(4.2). Since

$$\begin{aligned} \bigl({}^{ABC} {}_{0}D^{\alpha,\beta } v\bigr) (t)= \bigl({}^{ABR} {}_{0}D^{\alpha, \beta } v\bigr) (t)- \frac{\mathbf{B}(\alpha )}{1-\alpha } v_{0} t^{\beta -1} {\mathbf{E}}_{\alpha, \beta }\bigl(-\varepsilon _{\alpha }t^{\alpha }\bigr), \end{aligned}$$

see [2]. Hence Eq. (4.1) is equivalent to

$$\begin{aligned} a_{1} \frac{d v}{dt}+a_{2} \bigl({}^{ABR} {}_{0}D^{\alpha,\beta } v\bigr) (t)=a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } v_{0} t^{\beta -1} {\mathbf{E}}_{\alpha,\beta }\bigl(- \varepsilon _{\alpha }t^{\alpha }\bigr)+h(t,v),\quad t>0, \end{aligned}$$

or

$$\begin{aligned} &a_{1} \frac{d v}{dt}+a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \frac{d}{dt} \int _{0}^{t} (t-s)^{\beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl(- \varepsilon _{\alpha }(t-s)^{\alpha } \bigr) v(s) \,ds\\ &\quad=a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } v_{0} t^{\beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl(- \varepsilon _{\alpha }t^{\alpha } \bigr)+h(t,v). \end{aligned}$$

Applying the integral operator \(\int _{0}^{t} \cdot \,ds\) to the last equation yields

$$\begin{aligned} a_{1} v(t)={}& a_{1} v_{0}+a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } v_{0} \int _{0}^{t} s^{\beta -1} {\mathbf{E}}_{\alpha,\beta }\bigl(- \varepsilon _{\alpha }s^{\alpha }\bigr) \,ds \\ &{}-a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \int _{0}^{t} (t-s)^{ \beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl(-\varepsilon _{\alpha }(t-s)^{\alpha } \bigr) v(s) \,ds+ \int _{0}^{t} h(s,v) \,ds. \end{aligned}$$
(4.3)
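Solutions of (4.3) can be approximated by the Picard iteration that underlies the fixed point argument below. The following rough Python sketch assumes \(a_{1}>0\) (so that (4.3) can be solved for \(v(t)\)), takes \({\mathbf{B}}(\alpha )=1\) for illustration, discretizes the memory integral by a crude left-endpoint rule, evaluates the first integral through the term-by-term identity \(\int _{0}^{t} s^{\beta -1}{\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }s^{\alpha })\,ds=t^{\beta }{\mathbf{E}}_{\alpha,\beta +1}(-\varepsilon _{\alpha }t^{\alpha })\) (obtained by integrating the series termwise), and uses the hypothetical nonlinearity \(h(t,v)=\frac{1}{2}\cos v\).

```python
import math

def mittag_leffler(x, alpha, beta, terms=80):
    """Two-parameter Mittag-Leffler function by truncated power series."""
    return sum(x**k / math.gamma(alpha * k + beta) for k in range(terms))

def picard_solve(h, v0, a1, a2, alpha, beta, T=0.2, n=100, iters=20, B=1.0):
    """Picard (fixed-point) iteration for the integral equation (4.3).

    Rough sketch under stated assumptions: a1 > 0, B(alpha) = 1, left-endpoint
    rule for the memory integral, and the closed-form forcing integral
    int_0^t s^(beta-1) E_{a,b}(-eps s^a) ds = t^beta E_{a,b+1}(-eps t^a).
    """
    eps = alpha / (1.0 - alpha)
    c = a2 * B / (1.0 - alpha)
    dt = T / n
    t = [i * dt for i in range(n + 1)]
    # Memory kernel k(m*dt) = (m*dt)^(beta-1) E_{a,b}(-eps (m*dt)^a); index 0 unused.
    ker = [0.0] + [(m * dt)**(beta - 1) * mittag_leffler(-eps * (m * dt)**alpha, alpha, beta)
                   for m in range(1, n + 1)]
    # Forcing integral at the grid points, in closed form.
    forc = [ti**beta * mittag_leffler(-eps * ti**alpha, alpha, beta + 1) for ti in t]
    v = [v0] * (n + 1)                          # initial guess: the constant v0
    for _ in range(iters):
        new = [v0]
        for i in range(1, n + 1):
            memory = sum(ker[i - j] * v[j] for j in range(i)) * dt
            source = sum(h(t[j], v[j]) for j in range(i)) * dt
            new.append((a1 * v0 + c * v0 * forc[i] - c * memory + source) / a1)
        v = new
    return t, v

# Example: hypothetical Lipschitz nonlinearity h(t, v) = 0.5*cos(v), so K = 0.5.
ts, vs = picard_solve(lambda t, v: 0.5 * math.cos(v), v0=1.0,
                      a1=1.0, a2=1.0, alpha=0.3, beta=0.6)
print(vs[-1])
```

For data satisfying the contraction condition of Theorem 4.1 below, the underlying (continuous) Picard iteration converges to the unique solution; the discretization above only approximates it.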

Theorem 4.1

Let \(0<\alpha <\beta <1\), and let \(h: [ 0, T ] \times {\Bbb{R}}\rightarrow {\Bbb{R}}\) be a continuous function that satisfies the Lipschitz condition

$$\begin{aligned} \bigl\vert h(t,u)-h(t,v) \bigr\vert \le K \vert u-v \vert, K>0, \quad\textit{for all } t\in [0,T], u,v \in {\Bbb{R}}. \end{aligned}$$

If \(a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \frac{1}{\Gamma (\beta +1)} T^{\beta }+ K T<1\), then the fractional initial value problem (4.1)(4.2) has a unique solution on \(H^{1}(0,T)\).

Proof

On \(H^{1}(0,T)\) define the norm

$$\begin{aligned} \Vert f \Vert =\sup_{t\in [0,T]} \bigl\vert f(t) \bigr\vert , \end{aligned}$$

and consider the operator \(L:H^{1}(0,T)\rightarrow H^{1}(0,T)\) defined by

$$\begin{aligned} L\bigl(a_{1} v(t)\bigr)={}&a_{1} v_{0}+a_{2} v_{0} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \int _{0}^{t} s^{\beta -1} {\mathbf{E}}_{\alpha, \beta }\bigl(- \varepsilon _{\alpha }s^{\alpha }\bigr) \,ds \\ &{}-a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \int _{0}^{t} (t-s)^{\beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl(-\varepsilon _{\alpha }(t-s)^{\alpha } \bigr) v(s) \,ds \\ &{}+ \int _{0}^{t} h(s,v) \,ds. \end{aligned}$$
(4.4)

Let \(v_{1},v_{2}\in H^{1}(0,T)\) and \(t\in (0,T)\); then

$$\begin{aligned} &\bigl\vert L (a_{1} v_{1})-L(a_{1} v_{2}) \bigr\vert \\ &\quad= \biggl\vert -a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \int _{0}^{t} (t-s)^{\beta -1} { \mathbf{E}}_{\alpha,\beta }\bigl(-\varepsilon _{\alpha }(t-s)^{\alpha } \bigr) \bigl(v_{1}(s)-v_{2}(s)\bigr) \,ds \\ &\qquad{}+ \int _{0}^{t} \bigl(h(s,v_{1})-h(s,v_{2}) \bigr) \,ds \biggr\vert \\ &\quad\le a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \Vert v_{1}-v_{2} \Vert \bigl\Vert { \mathbf{E}}_{\alpha,\beta }\bigl(-\varepsilon _{\alpha }(t-s)^{\alpha }\bigr) \bigr\Vert \int _{0}^{t} (t-s)^{\beta -1} \,ds+K \Vert v_{1}-v_{2} \Vert \int _{0}^{t} \,ds \\ &\quad= \biggl(a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \bigl\Vert { \mathbf{E}}_{\alpha, \beta }\bigl(-\varepsilon _{\alpha }(t-s)^{\alpha } \bigr) \bigr\Vert \frac{t^{\beta }}{\beta } + K t \biggr) \Vert v_{1}-v_{2} \Vert . \end{aligned}$$

Since \({\mathbf{E}}_{\alpha,\beta }(-x)\) is non-increasing for \(x\ge 0\), we have \({\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }(t-s)^{\alpha }) \le { \mathbf{E}}_{\alpha,\beta }(0)=\frac{1}{\Gamma (\beta )}, 0\le s\le t \le T \), and thus

$$\begin{aligned} \bigl\vert L (a_{1} v_{1})-L(a_{1} v_{2}) \bigr\vert &\le \biggl(a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \frac{1}{\Gamma (\beta )} \frac{T^{\beta }}{\beta } + K T \biggr) \Vert v_{1}-v_{2} \Vert \\ &= \biggl(a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \frac{1}{\Gamma (\beta +1)} T^{\beta }+ K T \biggr) \Vert v_{1}-v_{2} \Vert . \end{aligned}$$

Since \(a_{2} \frac{{\mathbf{B}}(\alpha )}{1-\alpha } \frac{1}{\Gamma (\beta +1)} T^{\beta }+ K T<1\), the operator L is a contraction, and by the Banach fixed point theorem L has a unique fixed point. □
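The smallness condition of Theorem 4.1 is straightforward to check for given data; a minimal sketch with \({\mathbf{B}}(\alpha )=1\) and hypothetical values of \(a_{2}\), \(\alpha \), \(\beta \), \(K\), \(T\):

```python
import math

# Contraction constant of Theorem 4.1 for hypothetical data, with B(alpha) = 1.
a2, alpha, beta, K, T = 1.0, 0.3, 0.6, 0.5, 0.2
B = 1.0
constant = a2 * B / (1.0 - alpha) * T**beta / math.gamma(beta + 1) + K * T
print(constant, constant < 1.0)   # approximately 0.709, so the theorem applies
```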

5 Concluding remarks

We have considered linear and nonlinear multi-term fractional differential equations with a fractional derivative of Caputo type involving the kernel \(k(t)=t^{\beta -1}{\mathbf{E}}_{\alpha,\beta }(-\varepsilon _{\alpha }t^{\alpha })\), \(0<\alpha,\beta <1\). We have established several comparison principles for related fractional linear equations and inequalities and used them to analyze the solutions of the multi-term linear fractional differential equations. These results are obtained under the condition \(0<\alpha <\beta <1\), which guarantees the monotonicity property of the kernel \(k(t)\), \(t>0\). Whether the results extend to arbitrary \(0<\alpha,\beta <1\) is left for future work. For the nonlinear equation we have established existence and uniqueness results via the Banach fixed point theorem.