1 Introduction and preliminaries

Fractional differential equations have been shown to be very useful in the study of models of many phenomena in various fields of science and engineering, such as physics, chemistry, biology, signal and image processing, biophysics, blood flow phenomena, control theory, economics, aerodynamics, and fitting of experimental data. For examples and recent developments on the topic, see [1,2,3,4,5,6,7,8,9,10,11,12,13,14] and the references cited therein.

Impulsive differential equations are used to describe many practical dynamical systems, including evolutionary processes characterized by abrupt changes of the state at certain instants. Such processes are naturally seen in biology, physics, engineering, and so forth. Due to their significance, many authors have established the solvability of impulsive differential equations. In the literature there are two popular types of impulses:

  (i) Instantaneous impulses. The duration of these changes is relatively short compared with the overall duration of the whole process.

  (ii) Non-instantaneous impulses. These are impulsive actions that start abruptly at a fixed point and remain active over a finite time interval.

Differential equations with instantaneous impulses have been treated in several works; see, e.g., the monographs [15,16,17], the papers [18,19,20,21,22,23,24], and the references therein.

Non-instantaneous impulsive differential equations were introduced by Hernandez and O’Regan [25] to describe certain dynamic changes of evolution processes in pharmacotherapy. Equations of this kind split, according to their two major characteristics, into differential equations on the intervals \((s_{i}, t_{i+1}]\), \(i = 0, 1, \ldots , m\), and nonlinear equations on \((t_{i}, s_{i}]\), \(i = 1, 2, \ldots , m\). For recent works, we refer the reader to the papers [26,27,28,29,30,31], the books [32, 33], and the references therein.

In [34], it is pointed out that a necessary ingredient of non-instantaneous impulsive problems is that explicit functions are prescribed on the impulsive intervals \((t_{i},s_{i}]\), \(i=1,2,3,\ldots ,m\). From that paper we conclude that the intervals on which derivatives of the unknown function are imposed and those on which the function is given explicitly alternate.

Adopting the separated intervals of non-instantaneous impulsive equations, we establish a new kind of mixed-order instantaneous impulsive differential equation, with a derivative of one order on \((s_{i},t_{i+1}]\), \(i=0,1,2,\ldots ,m\), and a derivative of another order on \((t_{i},s_{i}]\), \(i=1,2,3,\ldots ,m\). We study two initial value problems: one mixing first- and second-order ordinary derivatives, and one mixing fractional derivatives of orders q and p with \(0 < q \leq 1\), \(1 < p \leq 2\).

More precisely, in this paper we study the existence and uniqueness of solutions for two new classes of instantaneous impulsive mixed-order differential equations, ordinary as well as fractional, with initial conditions. The first problem, for mixed-order ordinary impulsive differential equations, is

$$ \textstyle\begin{cases} x''(t)= f(t,x(t)),\quad t \in (s_{i}, t_{i+1}], i = 0, 1, 2, \ldots ,m, \\ x'(t)= g(t,x(t)), \quad t \in (t_{i}, s_{i}], i = 1, 2, 3, \ldots ,m, \\ x(s^{+}_{i})= \alpha _{i} x(s^{-}_{i}), \qquad x'(s^{+}_{i}) = \beta _{i} x'(s^{-}_{i}), \\ x(t^{+}_{i}) =\gamma _{i} x(t^{-}_{i}),\qquad x(0)=\alpha _{0}, \qquad x'(0)= \beta _{0}, \end{cases} $$
(1.1)

while the second, for mixed fractional orders, is given by

$$ \textstyle\begin{cases} {{}_{s_{i}^{+}}}D^{p} x(t) = f(t, x(t)), \quad t\in (s_{i}, t_{i+1}], i=0, 1, 2,\ldots , m, \\ {{}_{t_{i}^{+}}}D^{q} x(t) = g(t, x(t)), \quad t\in (t_{i}, s_{i}], i=1, 2, 3, \ldots , m, \\ x(s^{+}_{i})= \alpha _{i} x(s^{-}_{i}), \qquad x'(s^{+}_{i}) = \beta _{i} x'(s^{-}_{i}), \\ x(t^{+}_{i}) =\gamma _{i} x(t^{-}_{i}), \qquad x(0)=\alpha _{0}, \qquad x'(0)= \beta _{0}, \end{cases} $$
(1.2)

where \(f\colon J\times {\mathbb{R}} \rightarrow \mathbb{R}\) and \(g\colon J^{*}\times {\mathbb{R}} \rightarrow \mathbb{R}\) are nonlinear functions; \(J=\bigcup_{i=0}^{m}(s_{i}, t_{i+1}]\), \(J^{*}=\bigcup_{i=1}^{m}(t_{i}, s_{i}]\), \(J\cup J^{*}\cup \{0\}=[0,T]\), \(T=t_{m+1}\); the constants \(\alpha _{i}\), \(\beta _{i}\), \(i = 0, 1, \ldots , m\), and \(\gamma _{i}\), \(i = 1, 2,\ldots , m\), are given; and \({{}_{s_{i}^{+}}}D ^{p}\), \({{}_{t_{i}^{+}}}D^{q}\) denote the Caputo fractional derivatives of orders p and q, \(1 < p \leq 2\), \(0 < q \leq 1\), starting at the points \(s_{i}^{+}\), \(i = 0, 1, \ldots ,m\), and \(t_{i}^{+}\), \(i = 1, 2, \ldots , m\), respectively.

To the best of the authors’ knowledge, problems (1.1) and (1.2) are new mixed-order impulsive ordinary and fractional differential equations, respectively. The system of integer-order derivatives in Eq. (1.1) can be used to describe a mixture of growth, decay, and transient phenomena, while Eq. (1.2) helps describe the memory and hereditary properties of various materials and processes with impulses. Observe that problem (1.2) involves the interchanging fractional orders \(1< p\leq 2\) and \(0< q\leq 1\) of Caputo type, which has the property that the derivative of a constant is zero and is therefore well suited to impulsive systems.

Note that problem (1.2) is well defined in the sense of the Caputo fractional derivative for impulsive problems, given by

$$ {{}_{\phi }}D^{\theta } x(t)=\frac{1}{\varGamma (n-\theta )} \int _{\phi } ^{t}(t-r)^{n-\theta -1}x^{(n)}(r) \,dr, $$
(1.3)

where \(n-1 < \theta < n\), \(n=[\theta ]+1\), and \([\theta ]\) denotes the integer part of the real number θ. Indeed, in either case \(t\in J\) with \(\phi =s_{i}^{+}\), or \(t\in J^{*}\) with \(\phi =t_{i}^{+}\), Eq. (1.3) contains no impulse effects of the unknown variable x on the interval of integration. Therefore \(x^{(n)}(r)\) in Eq. (1.3) exists for all \(r\in (\phi ,t)\). For details of fractional calculus for impulsive problems, we refer the reader to [18]. As is customary, the Riemann–Liouville fractional integral of \(x(t)\) is defined by

$$ {{}_{\phi }}I^{\theta } x(t)=\frac{1}{\varGamma (\theta )} \int _{\phi }^{t} \frac{x(r)}{(t-r)^{1-\theta }}\,dr, \quad \theta >0, $$
(1.4)

provided the integral exists. In addition, we also use the following formula to establish our results:

$$ {{}_{\phi }}I^{\theta } \bigl({{}_{\phi }}D^{\theta } x \bigr) (t)= x(t)+c_{0}+c_{1}(t- \phi )+ \cdots +c_{n-1} (t-\phi )^{n-1}, $$
(1.5)

for some \(c_{i} \in \mathbb{R}\), \(i=0,1, 2,\ldots , n-1\) (\(n=[\theta ]+1\)).
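As a quick numerical sanity check of definition (1.3) (an illustration only, not part of the paper's argument), the Python sketch below approximates the Caputo derivative of \(x(t)=t^{3}\) for \(\theta = 3/2\) (so \(n=2\)) starting at \(\phi = 0\) and compares it with the known closed form \({}_{0}D^{\theta }t^{3}=\frac{\varGamma (4)}{\varGamma (4-\theta )}t^{3-\theta }\). The substitution \(u=(t-r)^{n-\theta }\) removes the weak endpoint singularity of the kernel before a midpoint quadrature is applied.

```python
import math

def caputo(d2x, theta, t, steps=20_000):
    """Approximate the Caputo derivative of Eq. (1.3) for 1 < theta < 2
    (so n = 2), starting point phi = 0, given x'' as d2x.
    The substitution u = (t - r)**(n - theta) makes the integrand smooth."""
    a = 2.0 - theta                      # n - theta, lies in (0, 1)
    U = t ** a                           # upper limit after substitution
    h = U / steps
    acc = 0.0
    for k in range(steps):               # midpoint rule on the smooth integrand
        u = (k + 0.5) * h
        acc += d2x(t - u ** (1.0 / a))
    return (h / a) * acc / math.gamma(2.0 - theta)

theta, t = 1.5, 1.0
approx = caputo(lambda r: 6.0 * r, theta, t)   # x(t) = t^3, so x''(r) = 6r
exact = math.gamma(4.0) / math.gamma(4.0 - theta) * t ** (3.0 - theta)
print(approx, exact)                           # both ≈ 4.5135
```

The same substitution trick applies to the Riemann–Liouville integral (1.4), whose kernel has the same type of weak singularity.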

2 Main results

To prove the existence and uniqueness results for problems (1.1) and (1.2), we first define the appropriate spaces of piecewise functions. Let two increasing sequences of points \(\{t_{i}\}_{i=1}^{m+1}\) and \(\{s_{i}\} _{i=0}^{m}\) be given by

$$ 0=s_{0}< t_{1}\leq s_{1}< t_{2}\leq s_{2}< t_{3}\leq \cdots < t_{m}\leq s _{m}< t_{m+1}=T. $$

Moreover, we define \(U=J\cup J^{*}\cup \{0\}\), as well as the sets \(PC(J, {\mathbb{R}})\) = {\(x : J\rightarrow {\mathbb{R}}\); \(x(t)\) is continuous on J and \(x(s_{i}^{+})\), \(x'(s_{i}^{+})\) exist for \(i=0,1, \ldots ,m\)} and \(PC(J^{*}, {\mathbb{R}})\) = {\(x : J^{*}\rightarrow {\mathbb{R}} \); \(x(t)\) is continuous on \(J^{*}\) and \(x(t_{i}^{+})\) exist for \(i=1,2, \ldots ,m\)}. In addition, we also define \(PC^{p}_{u}(J, \mathbb{R})\) = {\(x\in PC(J, \mathbb{R}) : {{}_{u}}D^{p}x(t)\) is continuous everywhere for \(t\in J\) for \(u \in \{s_{i}^{+} : i = 0, 1, \ldots ,m\}\), \(1< p\leq 2\)}, \(PC^{q}_{v}(J^{*}, \mathbb{R})\) = {\(x\in PC(J^{*}, \mathbb{R}) : {{}_{v}}D^{q}x(t)\) is continuous everywhere for \(t\in J^{*}\), \(v \in \{t_{i}^{+} : i = 1,2, \ldots ,m\}\), \(0< q\leq 1\)}, and \(PC^{p, q}(U,\mathbb{R})=PC^{p}_{u}(J, \mathbb{R})\cup PC^{q}_{v}(J ^{*}, \mathbb{R})\). Further, the space \(PC^{p, q}(U,\mathbb{R})\) is a Banach space endowed with the norm defined by \(\|x\|=\sup_{t\in U}|x(t)|\).

Let us define several constants as follows:

$$\begin{aligned}& \varLambda _{1}^{(p)} = \frac{(T - s_{m})^{p}}{\varGamma (p+1)} + \sum _{j=1}^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \frac{(t_{j} - s _{j-1})^{p}}{\varGamma (p+1)} \Biggr), \\& \varLambda _{2}^{(q)} = \sum_{j=1}^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \prod_{j+1}^{m} \vert \gamma _{j+1} \vert \frac{(s_{j}-t_{j})^{q}}{\varGamma (q+1)} \Biggr)+ \sum _{j=1}^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j}+ \vert \beta _{m} \vert T, \\& \varLambda _{3}^{(p)} = \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr)\frac{(t_{j}-s_{j-1})^{p}}{ \varGamma (p+1)} \Biggr], \\& \begin{aligned} \varLambda _{4}^{(q)} &= \vert \gamma _{m} \vert \Biggl[\sum^{m-1}_{j=1} \Biggl(\prod^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr)\frac{(s_{j}-t_{j})^{q}}{\varGamma (q+1)}+\sum _{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} \Biggr] \\ &\quad {}+ \frac{(s_{m}-t_{m})^{q}}{\varGamma (q+1)}, \end{aligned} \\& \varPhi _{1} = \vert \alpha _{0} \vert \prod _{j=1}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert , \qquad \varPhi _{2} = \vert \gamma _{m} \vert \vert \alpha _{0} \vert \Biggl(\prod ^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr). \end{aligned}$$

Observe that \(PC^{2,1}(U,\mathbb{R})\) is used to study problem (1.1). In addition, the constants \(\varLambda _{1}^{(2)}\), \(\varLambda _{2}^{(1)}\), \(\varLambda _{3}^{(2)}\), and \(\varLambda _{4}^{(1)}\) appear in the next subsection.

2.1 Mixed-order impulsive ordinary differential equations with initial conditions

In this subsection, we treat the mixture of first- and second-order ordinary impulsive differential equations by transforming the initial value problem into an integral equation. The following theorem for a linear problem is proved by mathematical induction.

Theorem 2.1

Let \(\alpha _{j}\), \(\beta _{j}\) be given constants for \(j=0,1,2,\ldots ,m\), and \(\gamma _{i}\in \mathbb{R}\) for \(i=1,2,3,\ldots ,m\). Consider the functions \(y:J\to \mathbb{R}\) and \(z:J^{*}\to \mathbb{R}\), with \(z(s_{0})=1\). The impulsive problem

$$ \textstyle\begin{cases} x''(t)= y(t), \quad t \in (s_{i}, t_{i+1}], i = 0, 1, 2, \ldots ,m, \\ x'(t)= z(t), \quad t \in (t_{i}, s_{i}], i = 1, 2, 3, \ldots ,m, \\ x(s^{+}_{i})= \alpha _{i} x(s^{-}_{i}),\qquad x'(s^{+}_{i}) = \beta _{i} x'(s^{-}_{i}), \\ x(t^{+}_{i}) =\gamma _{i} x(t^{-}_{i}), \qquad x(0)=\alpha _{0}, \qquad x'(0)= \beta _{0}, \end{cases} $$
(2.1)

can be rewritten as a linear integral equation in the following form:

$$\begin{aligned} x(t) =& \alpha _{0} \Biggl(\prod ^{i}_{j=1} \alpha _{j} \gamma _{j} \Biggr)+ \sum^{i}_{j=1} \Biggl[ \Biggl(\prod^{i}_{j} \alpha _{j} \Biggr) \Biggl(\prod^{i}_{j+1} \gamma _{j+1} \Biggr) \int _{t_{j}}^{s_{j}}z(r)\,dr \Biggr] \\ &{}+ \sum_{j=1}^{i} \Biggl[ \Biggl(\prod _{j}^{i} \alpha _{j} \gamma _{j} \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) y(r)\,dr \Biggr]+\sum_{j=1} ^{i} \Biggl(\prod _{j}^{i} \alpha _{j} \gamma _{j} \Biggr) \beta _{j-1} z(s _{j-1}) t_{j} \\ &{}+ \beta _{i} z(s_{i}) t + \int _{s_{i}}^{t} (t-r) y(r)\,dr, \quad t \in J, \end{aligned}$$
(2.2)

and

$$\begin{aligned} x(t) =& \alpha _{0}\gamma _{i} \Biggl( \prod^{i-1}_{j=1} \alpha _{j} \gamma _{j} \Biggr) +\gamma _{i} \sum ^{i-1}_{j=1} \Biggl[ \Biggl(\prod ^{i-1}_{j} \alpha _{j} \Biggr) \Biggl( \prod^{i-1}_{j+1}\gamma _{j+1} \Biggr) \int _{t_{j}}^{s_{j}}z(r)\,dr \Biggr] \\ &{}+ \gamma _{i} \Biggl[\sum_{j=1}^{i} \Biggl(\prod_{j}^{i-1} \alpha _{j} \gamma _{j} \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) y(r)\,dr \Biggr] + \gamma _{i} \sum_{j=1}^{i} \Biggl(\prod_{j}^{i-1} \alpha _{j} \gamma _{j} \Biggr) \beta _{j-1} z(s_{j-1}) t_{j} \\ &{}+ \int _{t_{i}}^{t} z(r)\,dr, \quad t \in J^{*}. \end{aligned}$$
(2.3)

Proof

For \(t \in (s_{0}, t_{1}]\), integrating the second-order ordinary differential equation in Eq. (2.1) twice yields

$$ x(t) = \alpha _{0} + \beta _{0}t + \int _{s_{0}}^{t} (t-r)y(r)\,dr, $$

with the conventions \(\prod_{1}^{0}(\cdot )=1\), \(\sum^{0}_{1} (\cdot )=0\), and \(z(s_{0}) = 1\). Hence Eq. (2.2) holds for \(i=0\). Next, integrating the second equation of Eq. (2.1) from \(t_{1}\) to \(t\in (t_{1}, s_{1}]\), we have

$$ x(t)=x\bigl(t^{+}_{1}\bigr)+ \int _{t_{1}}^{t}z(r)\,dr. $$

Applying the condition \(x(t^{+}_{1}) = \gamma _{1} x(t^{-}_{1})\), we obtain

$$ x(t)=\gamma _{1} \alpha _{0}+\gamma _{1} t_{1} \beta _{0}+\gamma _{1} \int _{s_{0}}^{t_{1}}(t_{1}-r)y(r)\,dr+ \int _{t_{1}}^{t}z(r)\,dr, $$

which implies that Eq. (2.3) is true for \(i = 1\). For the induction step, assume that Eq. (2.2) holds on \((s_{i}, t _{i+1}]\); we show that Eq. (2.3) then holds on \((t_{i+1}, s_{i+1}]\). Indeed, for \(t\in (t_{i+1}, s_{i+1}]\), we get

$$ x(t)=x\bigl(t^{+}_{i+1}\bigr)+ \int _{t_{i+1}}^{t}z(r)\,dr. $$
(2.4)

From \(x(t^{+}_{i+1}) = \gamma _{i+1} x(t^{-}_{i+1})\) and Eq. (2.2), Eq. (2.4) can be expressed as

$$\begin{aligned} x(t) =& \alpha _{0}\gamma _{i+1} \Biggl(\prod ^{i}_{j=1} \alpha _{j} \gamma _{j} \Biggr) + \gamma _{i+1}\sum ^{i}_{j=1} \Biggl[ \Biggl(\prod ^{i}_{j} \alpha _{j} \Biggr) \Biggl( \prod^{i}_{j+1} \gamma _{j+1} \Biggr) \int _{t_{j}}^{s_{j}}z(r)\,dr \Biggr] \\ &{}+ \gamma _{i+1} \Biggl[\sum_{j=1}^{i+1} \Biggl(\prod_{j}^{i} \alpha _{j} \gamma _{j} \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) y(r)\,dr \Biggr] \\ &{}+ \gamma _{i+1} \sum_{j=1}^{i+1} \Biggl(\prod_{j}^{i} \alpha _{j} \gamma _{j} \Biggr) \beta _{j-1} z(s_{j-1}) t_{j} + \int _{t_{i+1}}^{t} z(r)\,dr. \end{aligned}$$
(2.5)

Thus Eq. (2.3) is satisfied for \(t \in (t_{i+1}, s_{i+1}]\).

Finally, suppose that Eq. (2.3) is true on \((t_{i}, s _{i}]\); we prove that Eq. (2.2) is then true on \((s_{i}, t_{i+1}]\). From the first equation of Eq. (2.1), we obtain

$$ x(t) = x\bigl(s^{+}_{i}\bigr) + x' \bigl(s^{+}_{i}\bigr)t + \int _{s_{i}}^{t} (t-r)y(r)\,dr. $$

Using Eq. (2.3) and conditions \(x(s^{+}_{i}) =\alpha _{i} x(s ^{-}_{i})\), \(x'(s^{+}_{i}) = \beta _{i}x'(s^{-}_{i})\), we have

$$\begin{aligned} x(t) =&\alpha _{i} \Biggl\{ \alpha _{0}\gamma _{i} \Biggl(\prod^{i-1}_{j=1} \alpha _{j} \gamma _{j} \Biggr) + \gamma _{i}\sum ^{i-1}_{j=1} \Biggl[ \Biggl(\prod ^{i-1}_{j} \alpha _{j} \Biggr) \Biggl( \prod^{i-1}_{j+1} \gamma _{j+1} \Biggr) \int _{t_{j}}^{s_{j}}z(r)\,dr \Biggr] \\ &{}+ \gamma _{i} \Biggl[\sum_{j=1}^{i} \Biggl(\prod_{j}^{i-1} \alpha _{j} \gamma _{j} \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) y(r)\,dr \Biggr] + \gamma _{i} \sum_{j=1}^{i} \Biggl(\prod_{j}^{i-1} \alpha _{j} \gamma _{j} \Biggr) \beta _{j-1} z(s_{j-1}) t_{j} \\ &{}+ \int _{t_{i}}^{s_{i}} z(r)\,dr \Biggr\} +\beta _{i} z(s_{i}) t + \int _{s_{i}}^{t}(t-r)y(r)\,dr. \end{aligned}$$

Therefore Eq. (2.2) is valid on \((s_{i}, t_{i+1}]\). The converse follows by direct computation. This completes the proof. □

Next, in view of Theorem 2.1, replacing linear functions \(y(t)\), \(z(t)\) by nonlinear functions \(f(t,x)\), \(g(t,x)\), respectively, we define the operator \(\mathcal{A}:PC^{2,1}(U,\mathbb{R})\to PC^{2,1}(U, \mathbb{R})\) by

$$ \mathcal{A}x(t)= \textstyle\begin{cases} \alpha _{0} (\prod^{i}_{j=1} \alpha _{j} \gamma _{j} )+\sum^{i}_{j=1} [ (\prod^{i}_{j} \alpha _{j} ) (\prod^{i}_{j+1} \gamma _{j+1} ) \int _{t_{j}}^{s_{j}}g(r, x(r))\,dr ] \\ \quad {}+ \sum_{j=1}^{i} [ (\prod_{j}^{i} \alpha _{j} \gamma _{j} ) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) f(r, x(r))\,dr ] \\ \quad {}+ \sum_{j=1}^{i} (\prod_{j}^{i} \alpha _{j} \gamma _{j} ) \beta _{j-1} g(s_{j-1}, x(s_{j-1})) t_{j} \\ \quad {}+ \beta _{i} g(s_{i}, x(s_{i}))t +\int _{s_{i}}^{t} (t-r) f(r, x(r))\,dr, \quad t \in J , \\ \alpha _{0}\gamma _{i} (\prod^{i-1}_{j=1} \alpha _{j} \gamma _{j} ) +\gamma _{i} \sum^{i-1}_{j=1} [ (\prod^{i-1}_{j} \alpha _{j} ) ( \prod^{i-1}_{j+1}\gamma _{j+1} ) \int _{t_{j}}^{s_{j}}g(r, x(r))\,dr ] \\ \quad {}+ \gamma _{i} [\sum_{j=1}^{i} (\prod_{j}^{i-1} \alpha _{j} \gamma _{j} ) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) f(r, x(r))\,dr ] \\ \quad {}+ \gamma _{i} \sum_{j=1}^{i} (\prod_{j}^{i-1} \alpha _{j} \gamma _{j} ) \beta _{j-1} g(s_{j-1}, x(s_{j-1})) t_{j} \\ \quad {}+ \int _{t_{i}}^{t} g(r, x(r))\,dr, \quad t \in J^{*}. \end{cases} $$

Then problem (1.1) is transformed into the operator equation \(x=\mathcal{A}x\), a fixed point problem. The existence of a unique solution is proved by means of the Banach contraction mapping principle.
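To illustrate the fixed point scheme numerically, the Python sketch below (an illustration only; the choice \(f(t,x)=-x\) and all numerical values are ours, not the paper's) runs successive approximations \(x_{k+1}=\mathcal{A}x_{k}\) on the first sub-interval \((s_{0}, t_{1}]\), where \(\mathcal{A}\) reduces to \(x(t)=\alpha _{0}+\beta _{0}t+\int _{0}^{t}(t-r)f(r,x(r))\,dr\). For \(f(t,x)=-x\) the exact solution is \(\alpha _{0}\cos t+\beta _{0}\sin t\), and the contraction constant on \([0,1]\) is at most \(t_{1}^{2}/2=1/2\), so the iteration converges.

```python
import math

# Picard iteration for the first sub-interval of problem (1.1):
#   x(t) = alpha0 + beta0*t + \int_0^t (t - r) f(r, x(r)) dr  on (s0, t1].
# Illustrative data (ours, not the paper's): f(t,x) = -x, alpha0 = 1, beta0 = 0.5.
alpha0, beta0, t1, n = 1.0, 0.5, 1.0, 200
h = t1 / n
grid = [k * h for k in range(n + 1)]
f = lambda t, x: -x

x = [alpha0] * (n + 1)                     # initial guess x_0
for _ in range(60):                        # successive approximations
    new = []
    for k, t in enumerate(grid):
        # trapezoidal rule for \int_0^t (t - r) f(r, x(r)) dr
        vals = [(t - grid[j]) * f(grid[j], x[j]) for j in range(k + 1)]
        integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1])) if k else 0.0
        new.append(alpha0 + beta0 * t + integral)
    x = new

exact = alpha0 * math.cos(t1) + beta0 * math.sin(t1)
print(x[-1], exact)                        # both ≈ 0.9610
```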

Theorem 2.2

Assume that the functions \(f\colon J\times \mathbb{R}\rightarrow \mathbb{R}\) and \(g\colon J^{*}\times \mathbb{R}\rightarrow \mathbb{R}\) with \(g(0,\cdot )=1\) satisfy:

(\(\mathrm{H}_{1}\)):

There exist positive constants \(L_{1}\), \(L_{2}\) such that

$$ \bigl\vert f(t_{1}, x) - f(t_{1}, y) \bigr\vert \leq L_{1} \vert x - y \vert \quad \textit{and}\quad \bigl\vert g(t _{2}, x) - g(t_{2}, y) \bigr\vert \leq L_{2} \vert x - y \vert , $$

for all \(t_{1}\in J\), \(t_{2}\in J^{*}\) and \(x,y\in {\mathbb{R}}\).

If \(\varOmega _{1}=\max \{L_{1}\varLambda _{1}^{(2)}+L_{2}\varLambda _{2} ^{(1)}, L_{1}\varLambda _{3}^{(2)}+L_{2}\varLambda _{4}^{(1)}\}<1\), then problem (1.1) has a unique solution on U such that \(\|x\|\leq r^{*}\) with \(r^{*}=\max \{r_{1}, r_{2}\}\),

$$ r_{1}=\frac{M_{1}\varLambda _{1}^{(2)}+M_{2}\varLambda _{2}^{(1)}+\varPhi _{1}}{1-(L_{1}\varLambda _{1}^{(2)}+L_{2}\varLambda _{2}^{(1)})}, \qquad r_{2}= \frac{M_{1}\varLambda _{3}^{(2)}+M_{2}\varLambda _{4}^{(1)}+\varPhi _{2}}{1-(L_{1}\varLambda _{3}^{(2)}+L_{2}\varLambda _{4}^{(1)})}, $$

and \(M_{1}=\sup_{t\in J} \vert f(t,0) \vert \), \(M_{2}=\sup_{t\in J^{*}} \vert g(t,0) \vert \).

Proof

Let us consider the ball \(B_{r^{*}} = \{x \in PC^{2,1}(U,\mathbb{R}) : \| x \|\leq r^{*}\}\). We first show that \(\mathcal{A} B_{r^{*}} \subset B_{r^{*}}\). For \(t \in (s _{i}, t_{i+1}]\), \(i=0, 1, 2, \ldots , m\), using \(|\phi (r,x(r))| \leq |\phi (r,x(r))-\phi (r,0)|+|\phi (r,0)|\) for \(\phi \in \{f,g\}\), we find that

$$\begin{aligned} \bigl\vert \mathcal{A} x(t) \bigr\vert \leq & \vert \alpha _{0} \vert \Biggl(\prod^{m}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr)+\sum^{m}_{j=1} \Biggl[ \Biggl(\prod ^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert \,dr \Biggr] \\ &{}+ \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl\vert f \bigl(r, x(r)\bigr) \bigr\vert \,dr \Biggr] \\ &{}+ \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl\vert g \bigl(s_{j-1}, x(s_{j-1})\bigr) \bigr\vert t_{j} \\ &{}+ \vert \beta _{m} \vert \bigl\vert g\bigl(s_{m}, x(s_{m})\bigr) \bigr\vert T + \int _{s_{m}}^{T} (T-r) \bigl\vert f\bigl(r, x(r)\bigr) \bigr\vert \,dr \\ \leq & \vert \alpha _{0} \vert \Biggl(\prod ^{m}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\ &{}+\sum^{m}_{j=1} \Biggl[ \Biggl(\prod ^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}} \bigl( \bigl\vert g\bigl(r, x(r)\bigr)-g(r, 0) \bigr\vert + \bigl\vert g(r, 0) \bigr\vert \bigr)\,dr \Biggr] \\ &{}+ \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl( \bigl\vert f\bigl(r, x(r)\bigr)-f(r, 0) \bigr\vert + \bigl\vert f(r, 0) \bigr\vert \bigr) \,dr \Biggr] \\ &{}+ \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl( \bigl\vert g\bigl(s_{j-1}, x(s_{j-1}) \bigr)-g(s_{j-1},0) \bigr\vert + \bigl\vert g(s_{j-1}, 0) \bigr\vert \bigr)t_{j} \\ &{}+ \vert \beta _{m} \vert \bigl( \bigl\vert g \bigl(s_{m}, x(s_{m})\bigr)-g(s_{m}, 0) \bigr\vert + \bigl\vert g(s_{m}, 0) \bigr\vert \bigr)T \\ &{}+ \int _{s_{m}}^{T} (T-r) \bigl( \bigl\vert f\bigl(r, x(r)\bigr)-f(r, 0) \bigr\vert + \bigl\vert f(r, 0) \bigr\vert \bigr)\,dr \\ \leq & (L_{1}r_{1} +M_{1}) \Biggl[ \frac{(T - s_{m})^{2}}{2} + \sum_{j=1}^{m} \Biggl( \prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \frac{(t_{j} - s _{j-1})^{2}}{2} \Biggr) \Biggr] \\ &{}+ (L_{2}r_{1} + M_{2}) \Biggl[\sum _{j=1}^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \prod_{j+1}^{m} \vert \gamma _{j+1} \vert (s_{j}-t_{j}) \Biggr) \\ &{}+ \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j}+ \vert \beta _{m} \vert T \Biggr]+ \vert \alpha _{0} \vert \prod_{j=1}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \\ =& (L_{1}r_{1}+ M_{1})\varLambda _{1}^{(2)} +(L_{2}r_{1} + M_{2}) \varLambda _{2}^{(1)} + \varPhi _{1} \\ \leq & r_{1}. \end{aligned}$$

For \(t \in (t_{i}, s_{i}]\), \(i=1, 2, 3,\ldots ,m\), we obtain

$$\begin{aligned}& \bigl\vert \mathcal{A} x(t) \bigr\vert \\& \quad \leq \vert \gamma _{m} \vert \vert \alpha _{0} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr)+ \vert \gamma _{m} \vert \sum^{m-1}_{j=1} \Biggl[ \Biggl(\prod^{m-1} _{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert \,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl\vert f \bigl(r, x(r)\bigr) \bigr\vert \,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl\vert g\bigl(s_{j-1}, x(s_{j-1})\bigr) \bigr\vert t_{j} + \int _{t_{m}}^{s_{m}} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert \,dr \\& \quad \leq \vert \gamma _{m} \vert \vert \alpha _{0} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\& \qquad {} + \vert \gamma _{m} \vert \sum ^{m-1}_{j=1} \Biggl[ \Biggl(\prod ^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}}\bigl( \bigl\vert g\bigl(r, x(r) \bigr)-g(r, 0) \bigr\vert + \bigl\vert g(r, 0) \bigr\vert \bigr)\,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl( \bigl\vert f \bigl(r, x(r)\bigr)-f(r, 0) \bigr\vert + \bigl\vert f(r, 0) \bigr\vert \bigr)\,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl( \bigl\vert g\bigl(s_{j-1}, x(s_{j-1})\bigr)-g(s_{j-1}, 0) \bigr\vert + \bigl\vert g(s _{j-1}, 0) \bigr\vert \bigr)t_{j} \\& \qquad {} + \int _{t_{m}}^{s_{m}}\bigl( \bigl\vert g\bigl(r, x(r) \bigr)-g(r, 0) \bigr\vert + \bigl\vert g(r, 0) \bigr\vert \bigr)\,dr \\& \quad \leq \vert \gamma _{m} \vert \vert \alpha _{0} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\& \qquad {} + (L_{2}r_{2}+M_{2}) \vert \gamma _{m} \vert \sum^{m-1}_{j=1} \Biggl[ \Biggl(\prod^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) (s_{j}-t_{j}) \Biggr] \\& \qquad {} + (L_{1}r_{2}+M_{1}) \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \frac{(t_{j}-s_{j-1})^{2}}{2} \Biggr] \\& \qquad {} + (L_{2}r_{2}+M_{2}) \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} + (L_{2}r_{2}+M _{2}) (s_{m}-t_{m}) \\& \quad = (L_{1}r_{2}+M_{1}) \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \frac{(t_{j}-s_{j-1})^{2}}{2} \Biggr] \\& \qquad {} + (L_{2}r_{2}+M_{2}) \Biggl[ \vert \gamma _{m} \vert \Biggl\{ \sum^{m-1}_{j=1} \Biggl(\prod^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) (s_{j}-t_{j}) \\& \qquad {} + \sum_{j=1}^{m} \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} \Biggr\} +(s_{m}-t_{m}) \Biggr]+ \vert \gamma _{m} \vert \vert \alpha _{0} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\& \quad = (L_{1}r_{2}+M_{1})\varLambda _{3}^{(2)}+(L_{2}r_{2}+M_{2}) \varLambda _{4}^{(1)}+\varPhi _{2} \\& \quad \leq r_{2}. \end{aligned}$$

Since \(r^{*}=\max \{r_{1}, r_{2}\}\), we have \(\|\mathcal{A}x\|\leq r ^{*}\), which implies \(\mathcal{A}B_{r^{*}} \subset B_{r^{*}}\). Next, we prove that the operator \(\mathcal{A}\) is a contraction. Let \(x, y \in PC^{2,1}(U,\mathbb{R})\). For \(t \in (s_{i}, t_{i+1}]\), \(i=0, 1, 2, \ldots , m\), we get

$$\begin{aligned}& \bigl\vert \mathcal{A} x(t)- \mathcal{A} y(t) \bigr\vert \\ & \quad \leq \sum^{m}_{j=1} \Biggl[ \Biggl( \prod^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}} \bigl\vert g\bigl(r, x(r)\bigr)-g \bigl(r, y(r)\bigr) \bigr\vert \,dr \Biggr] \\ & \qquad {} + \sum_{j=1}^{m} \Biggl[ \Biggl( \prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl\vert f \bigl(r, x(r)\bigr)-f\bigl(r, y(r)\bigr) \bigr\vert \,dr \Biggr] \\ & \qquad {} + \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl( \bigl\vert g\bigl(s_{j-1}, x(s_{j-1})\bigr)-g\bigl(s_{j-1}, y(s_{j-1})\bigr) \bigr\vert \bigr)t_{j} \\ & \qquad {} + \vert \beta _{m} \vert \bigl( \bigl\vert g \bigl(s_{m}, x(s_{m})\bigr)-g\bigl(s_{m}, y(s_{m})\bigr) \bigr\vert \bigr)T \\ & \qquad {} + \int _{s_{m}}^{T} (T-r) \bigl( \bigl\vert f\bigl(r, x(r)\bigr)-f\bigl(r, y(r)\bigr) \bigr\vert \bigr) \,dr \\& \quad \leq \Biggl\{ \sum^{m}_{j=1} \Biggl[ \Biggl(\prod^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr)L_{2}(s _{j}-t_{j}) \Biggr] \\& \qquad {} + \sum_{j=1}^{m} \Biggl[ \Biggl( \prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr)L_{1}\biggl(\frac{(t_{j}-s_{j-1})^{2}}{2}\biggr) \Biggr]+\sum _{j=1} ^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert L _{2}t_{j} \\& \qquad {} + \vert \beta _{m} \vert L_{2}T +L_{1} \frac{(T-s_{m})^{2}}{2} \Biggr\} \Vert x-y \Vert \\& \quad \leq \bigl(\varLambda _{1}^{(2)}L_{1}+ \varLambda _{2}^{(1)}L_{2}\bigr) \Vert x-y \Vert . \end{aligned}$$

For \(t \in (t_{i}, s_{i}]\), \(i=1, 2, 3, \ldots , m\), we obtain

$$\begin{aligned}& \bigl\vert \mathcal{A} x(t)- \mathcal{A} y(t) \bigr\vert \\& \quad \leq \vert \gamma _{m} \vert \sum ^{m-1}_{j=1} \Biggl[ \Biggl(\prod ^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \int _{t_{j}}^{s_{j}} \bigl\vert g\bigl(r, x(r)\bigr)-g \bigl(r, y(r)\bigr) \bigr\vert \,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \int _{s_{j-1}}^{t_{j}}(t_{j} - r) \bigl\vert f\bigl(r, x(r)\bigr)-f\bigl(r, y(r)\bigr) \bigr\vert \,dr \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum _{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl\vert g \bigl(s_{j-1}, x(s_{j-1})\bigr)-g\bigl(s_{j-1}, y(s _{j-1})\bigr) \bigr\vert t_{j} \\& \qquad {} + \int _{t_{m}}^{s_{m}} \bigl\vert g\bigl(r, x(r)\bigr)-g \bigl(r, y(r)\bigr) \bigr\vert \,dr \\& \quad \leq \Biggl\{ \vert \gamma _{m} \vert \Biggl[ \sum ^{m-1}_{j=1} \Biggl(\prod ^{m-1} _{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) L_{2}(s_{j}-t_{j}) + \sum _{j=1}^{m} \Biggl(\prod _{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr)L_{1}\frac{(t_{j}-s_{j-1})^{2}}{2} \\& \qquad {} + \sum _{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert L_{2} t_{j} \Biggr] + L_{2}(s_{m}-t_{m}) \Biggr\} \Vert x-y \Vert \\& \quad \leq \bigl(\varLambda _{3}^{(2)}L_{1}+ \varLambda _{4}^{(1)}L_{2}\bigr) \Vert x-y \Vert . \end{aligned}$$

From the above estimates, we conclude that \(\|\mathcal{A}x- \mathcal{A}y\| \leq \varOmega _{1}\|x-y\|\), so \(\mathcal{A}\) is a contraction. Therefore, by the Banach contraction mapping principle, the operator \(\mathcal{A}\) has a unique fixed point in \(B_{r^{*}}\), and hence problem (1.1) has a unique solution x on U such that \(\|x\|\leq r^{*}\). This completes the proof. □

Example 2.3

Consider the following mixed-order impulsive ordinary differential equation with initial conditions:

$$ \textstyle\begin{cases} x''(t)= \frac{e^{-t^{2}}}{4} (\frac{x^{2}(t)+2 \vert x(t) \vert }{1+ \vert x(t) \vert } )+ \frac{3}{2},\quad t \in (2i, 2i+1], i = 0, 1, 2, 3, \\ x'(t)= \frac{t}{150}\sin \vert x(t) \vert +1, \quad t \in (2i-1, 2i], i = 1, 2, 3, \\ x((2i)^{+})= (\frac{i+1}{i+2} ) x((2i)^{-}), \qquad x'((2i)^{+}) = (\frac{i+2}{i+3} ) x'((2i)^{-}), \\ x((2i-1)^{+}) = (\frac{i+3}{i+4} ) x((2i-1)^{-}), \qquad x(0)= \frac{1}{2},\qquad x'(0)=\frac{2}{3}. \end{cases} $$
(2.6)

Here \(s_{i}=2i\), \(i=0,1,2,3\), \(t_{i}=2i-1\), \(i=1,2,3\), \(m=3\), and \(T=t_{4}=7\). The constants are \(\alpha _{i}=(i+1)/(i+2)\), \(\beta _{i}=(i+2)/(i+3)\), \(i=0,1,2,3\), and \(\gamma _{i}=(i+3)/(i+4)\), \(i=1,2,3\). From the data in Eq. (2.6), direct computation gives \(\varLambda _{1}^{(2)}\approx 0.97559\), \(\varLambda _{2}^{(1)}\approx 11.43571\), \(\varLambda _{3}^{(2)}\approx 0.83929\), \(\varLambda _{4}^{(1)}\approx 6.83461\), \(\varPhi _{1}\approx 0.11429\), and \(\varPhi _{2}\approx 0.14286\). Setting \(f(t,x)=(e^{-t^{2}}(x^{2}+2|x|))/(4(1+|x|))+(3/2)\) and \(g(t,x)=((t \sin |x|)/150)+1\), we can find Lipschitz constants such that

$$ \bigl\vert f(t_{1},x)-f(t_{1},y) \bigr\vert \leq \frac{1}{2} \vert x-y \vert \quad \text{and}\quad \bigl\vert g(t _{2},x)-g(t_{2},y) \bigr\vert \leq \frac{1}{25} \vert x-y \vert , $$

for each \(t_{1}\in J\), \(t_{2}\in J^{*}\), and \(x,y\in {\mathbb{R}}\); thus condition (\(\mathrm{H}_{1}\)) holds with \(L_{1}=1/2\) and \(L_{2}=1/25\). We also observe that \(g(0,\cdot )=1\), \(M_{1}=3/2\), and \(M_{2}=1\). Then we compute \(\varOmega _{1}=\max \{0.94522, 0.69303\}=0.94522<1\) and \(r^{*}=\max \{237.57197, 15.80093\}=237.57197\). Hence, by Theorem 2.2, problem (2.6) has a unique solution \(x(t)\) on \([0,7]\) such that \(\|x\|\leq 237.57197\).
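The contraction constant and the bound \(r^{*}\) can be double-checked by plugging the rounded constants reported above into the formulas of Theorem 2.2. The Python sketch below does exactly that (it does not re-derive the \(\varLambda \)'s themselves, so the last digits may differ slightly from the values printed in the text):

```python
# Constants reported for Example 2.3 (rounded to five decimals)
L1, L2, M1, M2 = 1 / 2, 1 / 25, 3 / 2, 1
Lam1, Lam2, Lam3, Lam4 = 0.97559, 11.43571, 0.83929, 6.83461
Phi1, Phi2 = 0.11429, 0.14286

# Contraction constant and radii from Theorem 2.2
Omega1 = max(L1 * Lam1 + L2 * Lam2, L1 * Lam3 + L2 * Lam4)
r1 = (M1 * Lam1 + M2 * Lam2 + Phi1) / (1 - (L1 * Lam1 + L2 * Lam2))
r2 = (M1 * Lam3 + M2 * Lam4 + Phi2) / (1 - (L1 * Lam3 + L2 * Lam4))
print(round(Omega1, 5), round(max(r1, r2), 2))   # 0.94522 237.57
```

Since \(\varOmega _{1}<1\), the contraction condition of Theorem 2.2 is confirmed numerically.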

Example 2.4

Let constants \(a,b\in \mathbb{R}\) with \(a\neq b\) be given. The function

$$ x(t)= \textstyle\begin{cases} e^{at}, & t \in (2i, 2i+1], i = 0, 1, 2, 3,\ldots , \\ e^{bt}, & t \in (2i-1, 2i], i = 1, 2, 3, \ldots , \end{cases} $$
(2.7)

is a unique solution of the problem

$$ \textstyle\begin{cases} x''(t)= a^{2}x(t), \quad t \in (2i, 2i+1], i = 0, 1, 2, 3,\ldots , \\ x'(t)= bx(t), \quad t \in (2i-1, 2i], i = 1, 2, 3,\ldots , \\ x((2i)^{+})= e^{2i(a-b)} x((2i)^{-}),\qquad x'((2i)^{+}) = (\frac{a}{b} )e ^{2i(a-b)} x'((2i)^{-}), \\ x((2i-1)^{+}) =e^{(2i-1)(b-a)} x((2i-1)^{-}), \qquad x(0)=1, \qquad x'(0)=a. \end{cases} $$
(2.8)

Differentiating Eq. (2.7) twice on \((2i, 2i+1]\), \(i = 0, 1, 2, 3,\ldots \) , and once on \((2i-1, 2i]\), \(i = 1, 2, 3, \ldots \) , we see that the first two equations in Eq. (2.8) hold. It is obvious that \(x(0)=1\) and \(x'(0)=a\). In addition, we get \(x((2i)^{+})=e^{2ia}\), \(x((2i)^{-})=e^{2ib}\), \(x'((2i)^{+})=ae^{2ia}\), \(x'((2i)^{-})=be^{2ib}\), \(x((2i-1)^{+}) =e ^{(2i-1)b} \), and \(x((2i-1)^{-}) =e^{(2i-1)a} \), so all the impulse conditions hold. Therefore, the function \(x(t)\) defined in Eq. (2.7) solves problem (2.8).
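The impulse conditions in Eq. (2.8) can also be verified numerically. The Python sketch below checks them at the first three impulse points for the arbitrarily chosen (hypothetical) values \(a=0.3\), \(b=0.1\); any \(a\neq b\) with \(b\neq 0\) works the same way.

```python
import math

# Numerical check of the impulse conditions in problem (2.8) for the
# hypothetical values a = 0.3, b = 0.1 (our choice; any a != b, b != 0 works).
a, b = 0.3, 0.1
checks = []
for i in (1, 2, 3):
    s, t = 2 * i, 2 * i - 1            # impulse points s_i = 2i, t_i = 2i - 1
    # x((2i)^+) = e^{as} from the e^{at} branch, x((2i)^-) = e^{bs}
    checks.append(math.isclose(math.exp(a * s),
                               math.exp(s * (a - b)) * math.exp(b * s)))
    # x'((2i)^+) = a e^{as}, x'((2i)^-) = b e^{bs}
    checks.append(math.isclose(a * math.exp(a * s),
                               (a / b) * math.exp(s * (a - b)) * b * math.exp(b * s)))
    # x((2i-1)^+) = e^{bt} from the e^{bt} branch, x((2i-1)^-) = e^{at}
    checks.append(math.isclose(math.exp(b * t),
                               math.exp(t * (b - a)) * math.exp(a * t)))
print(all(checks))   # True
```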

2.2 Mixed-order impulsive fractional differential equations with initial conditions

In this subsection we establish existence results for problem (1.2), which involves mixed fractional orders. In analogy with Theorem 2.1, the operator \(\mathcal{B}:PC^{p,q}(U, \mathbb{R})\to PC^{p,q}(U,\mathbb{R})\) is defined by

$$ \mathcal{B}x(t)= \textstyle\begin{cases} \alpha _{0} (\prod^{i}_{j=1} \alpha _{j} \gamma _{j} )+\sum^{i}_{j=1} [ (\prod^{i}_{j} \alpha _{j} ) (\prod^{i}_{j+1} \gamma _{j+1} ) {{}_{t_{j}}}I^{q}g(r, x(r))({s_{j}}) ] \\ \quad {}+ \sum_{j=1}^{i} [ (\prod_{j}^{i} \alpha _{j} \gamma _{j} ) {{}_{s_{j-1}}}I^{p}f(r, x(r))(t_{j}) ] \\ \quad {} + \sum_{j=1}^{i} (\prod_{j}^{i} \alpha _{j} \gamma _{j} ) \beta _{j-1} g(s_{j-1}, x(s_{j-1})) t_{j} + \beta _{i} g(s_{i}, x(s_{i})) t \\ \quad {} + {{}_{s_{i}}}I^{p}f(r, x(r))(t), \quad t \in J, \\ \alpha _{0}\gamma _{i} (\prod^{i-1}_{j=1} \alpha _{j} \gamma _{j} ) +\gamma _{i}\sum^{i-1}_{j=1} [ (\prod^{i-1}_{j} \alpha _{j} ) ( \prod^{i-1}_{j+1}\gamma _{j+1} ) {{}_{t_{j}}}I^{q}g(r, x(r))({s_{j}}) ] \\ \quad {}+ \gamma _{i} [\sum_{j=1}^{i} (\prod_{j}^{i-1} \alpha _{j} \gamma _{j} ) {{}_{s_{j-1}}}I^{p}f(r, x(r))(t_{j}) ] \\ \quad {}+ \gamma _{i} \sum_{j=1}^{i} (\prod_{j}^{i-1} \alpha _{j} \gamma _{j} ) \beta _{j-1} g(s_{j-1},x(s_{j-1})) t_{j} \\ \quad {}+ {{}_{t_{i}}}I^{q} g(r, x(r))(t),\quad t \in J^{*}, \end{cases} $$

where \({{}_{\phi }}I^{\theta }h(r,x(r))(\pi )\), with \(\phi \in \{s_{j-1}, t_{j}, s_{i}, t_{i}\}\), \(\pi \in \{t, s_{j},t _{j}\}\), \(\theta \in \{p,q\}\), and \(h\in \{f,g\}\), denotes the Riemann–Liouville fractional integral with respect to r of the function of two variables h, as defined by Eq. (1.4). By the Banach fixed point theorem, the existence of a unique solution to problem (1.2) can be proved similarly to Theorem 2.2.
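As a computational aside (not part of the paper's analysis), the Riemann–Liouville integrals appearing in \(\mathcal{B}\) can be approximated numerically. The sketch below removes the endpoint singularity of \((\pi -r)^{\theta -1}\) via the substitution \(u=(\pi -r)^{\theta }\), which gives \({{}_{\phi }}I^{\theta }h(\pi )=\frac{1}{\theta \varGamma (\theta )}\int _{0}^{(\pi -\phi )^{\theta }} h(\pi -u^{1/\theta })\,du\), and checks the result against the closed form \({{}_{0}}I^{q}(1)(t)=t^{q}/\varGamma (q+1)\).

```python
import math

def rl_integral(h, phi, pi_, theta, n=100000):
    """Midpoint-rule approximation of the Riemann-Liouville integral
    {}_phi I^theta h evaluated at pi_, after the substitution
    u = (pi_ - r)^theta that removes the endpoint singularity."""
    upper = (pi_ - phi) ** theta
    du = upper / n
    total = sum(h(pi_ - ((k + 0.5) * du) ** (1 / theta)) for k in range(n))
    return total * du / (theta * math.gamma(theta))

# Sanity check against the closed form {}_0 I^q (1)(t) = t^q / Gamma(q+1):
q, t = 0.5, 3.0
approx = rl_integral(lambda r: 1.0, 0.0, t, q)
exact = t**q / math.gamma(q + 1)
assert abs(approx - exact) < 1e-6
```

The same routine applies to any continuous integrand, e.g. \(h(r)=r\), for which \({{}_{0}}I^{q}(r)(t)=t^{q+1}/\varGamma (q+2)\).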

Theorem 2.5

Suppose that functions f and g satisfy condition (\(\mathrm{H}_{1}\)) in Theorem 2.2. If \(\varOmega _{2}=\max \{L_{1}\varLambda _{1} ^{(p)}+L_{2}\varLambda _{2}^{(q)}, L_{1}\varLambda _{3}^{(p)}+L_{2} \varLambda _{4}^{(q)}\}<1\), then problem (1.2) has a unique solution on U.

Next, an existence result is proved by applying the nonlinear alternative for single-valued maps, stated as follows.

Theorem 2.6

([35])

Let E be a Banach space, and C be a closed, convex subset of E. Also let G be an open subset of C such that \(0\in G\). Assume that \(F:\overline{G}\to C\) is a continuous, compact (that is, \(F( \overline{G})\) is a relatively compact subset of C) map. Then either

  1. (i)

    F has a fixed point in \(\overline{G}\), or

  2. (ii)

    There exist \(u\in \partial G\) (the boundary of G in C) and \(\lambda \in (0,1)\) with \(u=\lambda F(u)\).

Theorem 2.7

Assume that functions \(f\colon J\times \mathbb{R}\rightarrow \mathbb{R}\) and \(g\colon J^{*}\times \mathbb{R}\rightarrow \mathbb{R}\) with \(g(0,\cdot )=1\) are continuous. In addition, we assume that:

(\(\mathrm{H}_{2}\)):

There exist continuous nondecreasing functions \(\psi _{1},\psi _{2} : [0,\infty ) \to (0,\infty )\) and continuous functions \(w_{1}: J\to \mathbb{R}^{+}\), \(w_{2}: J^{*}\to \mathbb{R} ^{+}\) such that

$$ \bigl\vert f(t_{1},x) \bigr\vert \le w_{1}(t_{1}) \psi _{1}\bigl( \vert x \vert \bigr), \qquad \bigl\vert g(t_{2},x) \bigr\vert \le w_{2}(t _{2})\psi _{2}\bigl( \vert x \vert \bigr), $$

for each \((t_{1},x) \in J \times {\mathbb{R}}\), \((t_{2},x) \in J^{*} \times {\mathbb{R}}\).

(\(\mathrm{H}_{3}\)):

There exist constants \(N_{1},N_{2}>0\) such that

$$ \frac{N_{1}}{\varPhi _{1}+\psi _{1}(N_{1}) \Vert w_{1} \Vert \varLambda _{1}^{(p)}+ \psi _{2}(N_{1}) \Vert w_{2} \Vert \varLambda _{2}^{(q)}}>1 $$

and

$$ \frac{N_{2}}{\varPhi _{2}+\psi _{1}(N_{2}) \Vert w_{1} \Vert \varLambda _{3}^{(p)}+ \psi _{2}(N_{2}) \Vert w_{2} \Vert \varLambda _{4}^{(q)}}>1. $$

Then the mixed-order impulsive fractional differential problem with initial conditions given in Eq. (1.2) has at least one solution on U.

Proof

Let us, for a positive number ρ, define the ball \(B_{\rho } = \{x \in PC^{p,q}(U,\mathbb{R}): \|x\| \le \rho \}\). It follows that \(B_{\rho }\) is a closed, convex subset of \(PC^{p,q}(U, \mathbb{R})\). We will show that the operator \(\mathcal{B}:PC^{p,q}(U, \mathbb{R})\to PC^{p,q}(U,\mathbb{R})\) satisfies all the assumptions of Theorem 2.6. To prove the continuity of \(\mathcal{B}\), let \(\{x_{n}\}\) be a sequence converging to x in \(PC^{p,q}(U,\mathbb{R})\). Then for \(t\in J\), we obtain

$$\begin{aligned}& \bigl\vert {\mathcal{B}}x_{n}(t)-{\mathcal{B}}x(t) \bigr\vert \\& \quad \leq \sum^{m}_{j=1} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod_{j+1}^{m} \vert \gamma _{j+1} \vert \Biggr){{}_{t_{j}}}I^{q} \bigl( \bigl\vert g\bigl(r, x_{n} (r)\bigr)-g\bigl(r, x(r)\bigr) \bigr\vert \bigr) (s_{j}) \Biggr] \\& \qquad {} + \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) {{}_{s_{j-1}}}I^{p} \bigl( \bigl\vert f\bigl(r, x_{n}(r)\bigr)-f\bigl(r, x(r)\bigr) \bigr\vert \bigr) (t_{j}) \Biggr] \\& \qquad {} + \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl( \bigl\vert g\bigl(s_{j-1}, x_{n} (s_{j-1})\bigr)-g\bigl(s_{j-1}, x(s _{j-1})\bigr) \bigr\vert \bigr)t_{j} \Biggr] \\& \qquad {} + \vert \beta _{m} \vert \bigl( \bigl\vert g \bigl(s_{m}, x_{n} (s_{m})\bigr)-g \bigl(s_{m}, x(s_{m})\bigr) \bigr\vert \bigr)T \\& \qquad {} + {{}_{s_{m}}}I^{p}\bigl( \bigl\vert f\bigl(r, x_{n} (r)\bigr)-f\bigl(r, x(r)\bigr) \bigr\vert \bigr) (T)\to 0, \quad \text{as } n\to \infty , \end{aligned}$$

and for \(t\in J^{*}\), we have

$$\begin{aligned}& \bigl\vert {\mathcal{B}}x_{n}(t)-{\mathcal{B}}x(t) \bigr\vert \\& \quad \leq \vert \gamma _{m} \vert \sum^{m-1}_{j=1} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod_{j+1}^{m-1} \vert \gamma _{j+1} \vert \Biggr) {{}_{t_{j}}}I^{q} \bigl( \bigl\vert g\bigl(r, x_{n} (r)\bigr)-g\bigl(r, x(r)\bigr) \bigr\vert \bigr) (s_{j}) \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr){{}_{s_{j-1}}}I^{p}\bigl( \bigl\vert f\bigl(r, x_{n}(r)\bigr)-f\bigl(r, x(r)\bigr) \bigr\vert \bigr) (t _{j}) \Biggr] \\& \qquad {} + \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl( \bigl\vert g\bigl(s_{j-1}, x_{n} (s_{j-1})\bigr)-g\bigl(s _{j-1}, x(s_{j-1})\bigr) \bigr\vert \bigr)t_{j} \Biggr] \\& \qquad {} + {{}_{t_{m}}}I^{q}\bigl( \bigl\vert g\bigl(r, x_{n} (r)\bigr)-g\bigl(r, x(r)\bigr) \bigr\vert \bigr) (s_{m})\to 0,\quad \text{as } n\to \infty , \end{aligned}$$

which implies that operator \({\mathcal{B}}\) is continuous. Next, we prove the compactness of operator \({\mathcal{B}}\). Let \(x\in B_{ \rho }\). Then we have, for \(t\in J\),

$$\begin{aligned} \bigl\vert {\mathcal{B}}x(t) \bigr\vert \leq & \vert \alpha _{0} \vert \Biggl(\prod^{m}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) + \sum_{j=1}^{m} \Biggl[ \Biggl(\prod _{j} ^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) {{}_{s_{j-1}}}I^{p} \bigl\vert f\bigl(r, x(r)\bigr) \bigr\vert (t _{j}) \Biggr] \\ &{}+ \sum^{m}_{j=1} \Biggl[ \Biggl(\prod ^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \Biggr] {{}_{t_{j}}}I ^{q} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert (s_{j}) \\ &{}+ \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl\vert g\bigl(s_{j-1}, x(s_{j-1})\bigr) \bigr\vert t_{j} + \vert \beta _{m} \vert \bigl\vert g \bigl(s_{m}, x(s_{m})\bigr) \bigr\vert T \\ &{}+ {{}_{s_{m}}}I^{p} \bigl\vert f\bigl(r, x(r)\bigr) \bigr\vert (T) \\ \leq & \vert \alpha _{0} \vert \Biggl(\prod ^{m}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) + \Vert w_{1} \Vert \psi _{1}(\rho )\sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) {{}_{s_{j-1}}}I^{p}(1) (t _{j}) \Biggr] \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho )\sum ^{m}_{j=1} \Biggl[ \Biggl(\prod ^{m} _{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr) {{}_{t_{j}}}I^{q}(1) (s_{j}) \Biggr] \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho )\sum _{j=1}^{m} \Biggl(\prod_{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} + \Vert w_{2} \Vert \psi _{2}(\rho ) \vert \beta _{m} \vert T \\ &{}+ \Vert w_{1} \Vert \psi _{1}(\rho ){{}_{s_{m}}}I^{p}(1) (T) \\ =& \vert \alpha _{0} \vert \Biggl(\prod ^{m}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\ &{}+ \Vert w_{1} \Vert \psi _{1}(\rho ) \Biggl\{ \sum _{j=1}^{m} \Biggl[ \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert 
\Biggr)\frac{(t_{j}-s_{j-1})^{p}}{ \varGamma (p+1)} \Biggr]+ \frac{(T-s_{m})^{p}}{\varGamma (p+1)} \Biggr\} \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho ) \Biggl\{ \sum^{m}_{j=1} \Biggl[ \Biggl(\prod ^{m}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m}_{j+1} \vert \gamma _{j+1} \vert \Biggr)\frac{(s _{j} -t_{j})^{q}}{\varGamma (q+1)} \Biggr] \\ &{}+ \sum_{j=1}^{m} \Biggl(\prod _{j}^{m} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j}+\bigl( \vert \beta _{m} \vert T\bigr) \Biggr\} \\ =& \varPhi _{1}+ \Vert w_{1} \Vert \psi _{1}( \rho )\varLambda _{1}^{(p)}+ \Vert w_{2} \Vert \psi _{2}(\rho )\varLambda _{2}^{(q)}:= K_{1}, \end{aligned}$$

and for \(t\in J^{*}\),

$$\begin{aligned} \bigl\vert {\mathcal{B}}x(t) \bigr\vert \leq & \vert \alpha _{0} \vert \vert \gamma _{m} \vert \Biggl(\prod ^{m-1} _{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) + \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) {{}_{s_{j-1}}}I^{p} \bigl\vert f\bigl(r, x(r)\bigr) \bigr\vert (t_{j}) \Biggr] \\ &{}+ \vert \gamma _{m} \vert \sum^{m-1}_{j=1} \Biggl[ \Biggl(\prod^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \Biggr] {{}_{t_{j}}}I^{q} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert (s_{j}) \\ &{}+ \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert \bigl\vert g\bigl(s_{j-1}, x(s_{j-1})\bigr) \bigr\vert t_{j} \\ &{}+ {{}_{t_{m}}}I^{q} \bigl\vert g\bigl(r, x(r)\bigr) \bigr\vert (s_{m}) \\ \leq & \vert \alpha _{0} \vert \vert \gamma _{m} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) + \Vert w_{1} \Vert \psi _{1}(\rho ) \vert \gamma _{m} \vert \sum_{j=1} ^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) {{}_{s_{j-1}}}I^{p}(1) (t_{j}) \Biggr] \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho ) \vert \gamma _{m} \vert \sum^{m-1}_{j=1} \Biggl[ \Biggl(\prod^{m-1}_{j} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod^{m-1}_{j+1} \vert \gamma _{j+1} \vert \Biggr) \Biggr] {{}_{t_{j}}}I^{q}(1) (s_{j}) \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho ) \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho ) {{}_{t_{m}}}I^{q}(1) (s_{m}) \\ =& \vert \alpha _{0} \vert \vert \gamma _{m} \vert \Biggl(\prod^{m-1}_{j=1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \\ &{}+ \Vert w_{1} \Vert \psi _{1}(\rho ) \Biggl\{ \vert \gamma _{m} \vert 
\sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \frac{(t_{j}-s_{j-1})^{p}}{ \varGamma (p+1)} \Biggr] \Biggr\} \\ &{}+ \Vert w_{2} \Vert \psi _{2}(\rho ) \Biggl\{ \vert \gamma _{m} \vert \sum_{j=1}^{m-1} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \Biggr) \Biggl(\prod_{j+1} ^{m-1} \vert \gamma _{j+1} \vert \Biggr)\frac{(s_{j}-t_{j})^{q}}{\varGamma (q+1)} \Biggr] \\ &{}+ \vert \gamma _{m} \vert \sum_{j=1}^{m} \Biggl[ \Biggl(\prod_{j}^{m-1} \vert \alpha _{j} \vert \vert \gamma _{j} \vert \Biggr) \vert \beta _{j-1} \vert t_{j} \Biggr]+\frac{(s_{m}-t _{m})^{q}}{\varGamma (q+1)} \Biggr\} \\ =& \varPhi _{2}+ \Vert w_{1} \Vert \psi _{1}( \rho ) \varLambda _{3}^{(p)}+ \Vert w_{2} \Vert \psi _{2}(\rho )\varLambda _{4}^{(q)}:= K_{2}. \end{aligned}$$

Setting \(K=\max \{K_{1}, K_{2}\}\), we obtain \(\|{\mathcal{B}}x\| \leq K\), which implies that \({\mathcal{B}}B_{\rho }\) is a uniformly bounded set. Next we will prove that the set \({\mathcal{B}}B_{\rho }\) is equicontinuous. Let \(\tau _{1}\), \(\tau _{2}\) be two points in U such that \(\tau _{1}<\tau _{2}\) and let \(x\in B_{\rho }\). Then, for \(\tau _{1}, \tau _{2}\in J\), we have

$$\begin{aligned} \bigl\vert {\mathcal{B}}x(\tau _{2})-{\mathcal{B}}x(\tau _{1}) \bigr\vert =& \bigl\vert \beta _{i} g \bigl(s_{i}, x(s_{i})\bigr) (\tau _{2}-\tau _{1})+{{}_{s_{i}}}I^{p}f\bigl(r, x(r)\bigr) ( \tau _{2})-{{}_{s_{i}}}I^{p}f\bigl(r, x(r)\bigr) (\tau _{1}) \bigr\vert \\ \leq & \vert \beta _{i} \vert \psi _{2}(\rho ) \Vert w_{2} \Vert (\tau _{2}-\tau _{1})+ \frac{ \psi _{1}(\rho ) \Vert w_{1} \Vert }{\varGamma (p+1)} \bigl\vert (\tau _{2}-s_{i})^{p}-( \tau _{1}-s_{i})^{p} \bigr\vert \\ \to & 0 \quad \text{as } \tau _{1}\to \tau _{2}, i=0,1,\ldots ,m, \end{aligned}$$

and, for \(\tau _{1}, \tau _{2}\in J^{*}\),

$$\begin{aligned} \bigl\vert {\mathcal{B}}x(\tau _{2})-{\mathcal{B}}x(\tau _{1}) \bigr\vert =& \bigl\vert {{}_{t_{i}}}I ^{q}g \bigl(r, x(r)\bigr) (\tau _{2})-{{}_{t_{i}}}I^{q}g \bigl(r, x(r)\bigr) (\tau _{1}) \bigr\vert \\ \leq & \Vert w_{2} \Vert \psi _{2}(\rho ) \frac{1}{\varGamma (q+1)} \bigl\vert (\tau _{2}-t _{i})^{q}-( \tau _{1}-t_{i})^{q} \bigr\vert \\ \to & 0 \quad \text{as } \tau _{1}\to \tau _{2}, i=1,2,\ldots ,m. \end{aligned}$$

From the above results we deduce that \({\mathcal{B}}B_{\rho }\) is an equicontinuous set. Since \({\mathcal{B}}B_{\rho }\) is also uniformly bounded, the Arzelà–Ascoli theorem implies that \({\mathcal{B}}B_{\rho }\) is relatively compact, and hence the operator \({\mathcal{B}}\) is completely continuous.

Finally, we show that condition (ii) of Theorem 2.6 does not hold. Let x be a solution of problem (1.2) and consider the operator equation \(x=\lambda {\mathcal{B}}x\) for \(\lambda \in (0,1)\). Proceeding as in the computation of \(K_{1}\) and \(K_{2}\), we have, for \(t\in J\),

$$ \Vert x \Vert \leq \varPhi _{1}+\psi _{1}\bigl( \Vert x \Vert \bigr) \Vert w_{1} \Vert \varLambda _{1}^{(p)}+ \psi _{2}\bigl( \Vert x \Vert \bigr) \Vert w_{2} \Vert \varLambda _{2}^{(q)}, $$

and, for \(t\in J^{*}\),

$$ \Vert x \Vert \leq \varPhi _{2}+\psi _{1}\bigl( \Vert x \Vert \bigr) \Vert w_{1} \Vert \varLambda _{3}^{(p)}+ \psi _{2}\bigl( \Vert x \Vert \bigr) \Vert w_{2} \Vert \varLambda _{4}^{(q)}. $$

Then we get

$$ \frac{ \Vert x \Vert }{\varPhi _{1}+\psi _{1}( \Vert x \Vert ) \Vert w_{1} \Vert \varLambda _{1}^{(p)}+ \psi _{2}( \Vert x \Vert ) \Vert w_{2} \Vert \varLambda _{2}^{(q)}}\leq 1 $$

and

$$ \frac{ \Vert x \Vert }{\varPhi _{2}+\psi _{1}( \Vert x \Vert ) \Vert w_{1} \Vert \varLambda _{3}^{(p)}+ \psi _{2}( \Vert x \Vert ) \Vert w_{2} \Vert \varLambda _{4}^{(q)}}\leq 1. $$

By assumption (\(\mathrm{H}_{3}\)), there exist two positive constants \(N_{1}\) and \(N_{2}\) such that \(\|x\|\neq N_{1},N_{2}\). Let \(N=\min \{N _{1},N_{2}\}\) and define \(V=\{x\in B_{\rho }:\|x\|< N\}\). It is obvious that \({\mathcal{B}} :\overline{V} \to PC^{p,q}(U,\mathbb{R})\) is continuous and completely continuous. Hence, there is no \(x \in \partial V\) such that \(x=\lambda {\mathcal{B}}x\) for some \(\lambda \in (0,1)\). Therefore, the result follows from Theorem 2.6(i), so that \({\mathcal{B}}\) has a fixed point \(x \in \overline{V}\) which is a solution of problem (1.2) on U. The proof is completed. □

Theorem 2.7 is a very general result because the functions f and g are only required to be bounded by nonlinear growth functions. Special cases with linear growth bounds are obtained in the following corollaries.

Corollary 2.8

Suppose that functions \(f\colon J\times \mathbb{R}\rightarrow \mathbb{R}\) and \(g\colon J^{*}\times \mathbb{R}\rightarrow \mathbb{R}\) with \(g(0,\cdot )=1\) satisfy the condition:

(\(\mathrm{H}_{4}\)):

\(|f(t_{1},x)|\le Q_{1}|x|+Q_{2}\) and \(|g(t_{2},x)| \le Q_{3}|x|+Q_{4}\), where \(Q_{1},Q_{3}\geq 0\) and \(Q_{2},Q_{4}>0\), for each \((t_{1},x) \in J \times {\mathbb{R}}\), \((t_{2},x) \in J^{*}\times {\mathbb{R}}\).

If \(Q_{1}\varLambda _{1}^{(p)} + Q_{3}\varLambda _{2}^{(q)} < 1\) and \(Q_{1}\varLambda _{3}^{(p)} + Q_{3}\varLambda _{4}^{(q)} < 1\), then the impulsive fractional differential problem (1.2) has at least one solution on U.

Corollary 2.9

Suppose that condition (\(\mathrm{H}_{2}\)) in Theorem 2.7 holds with \(\psi _{1}(\cdot )=\psi _{2}(\cdot )\equiv 1\), that is,

$$ \bigl\vert f(t_{1},x) \bigr\vert \le w_{1}(t_{1}),\qquad \bigl\vert g(t_{2},x) \bigr\vert \le w_{2}(t_{2}),\quad \textit{for each } (t_{1},x) \in J \times {\mathbb{R}}, (t_{2},x) \in J^{*}\times {\mathbb{R}}. $$

Then the impulsive problem (1.2) has at least one solution on U.

Example 2.10

Consider the following mixed-order impulsive fractional differential equations with initial conditions

$$ \textstyle\begin{cases} {{}_{u}}D^{\frac{3}{2}}x(t)= f(t,x(t)), \quad t \in (2i, 2i+1], i = 0, 1, 2, 3, \\ {{}_{v}}D^{\frac{1}{2}}x(t)= g(t,x(t)), \quad t \in (2i-1, 2i], i = 1, 2, 3, \\ x((2i)^{+})= (\frac{i+1}{i+2} ) x((2i)^{-}), \qquad x'((2i)^{+}) = (\frac{i+2}{i+3} ) x'((2i)^{-}), \\ x((2i-1)^{+}) = (\frac{i+3}{i+4} ) x((2i-1)^{-}),\qquad x(0)= \frac{1}{2}, \qquad x'(0)=\frac{2}{3}. \end{cases} $$
(2.9)

Here the constants \(s_{i}\), \(\alpha _{i}\), \(\beta _{i}\), \(i=0,1,2,3\), and \(t_{i}\), \(\gamma _{i}\), \(i=1,2,3\), \(m=3\), \(T=7\) are defined as in Example 2.3; consequently, \(\varPhi _{1}\approx 0.11429\) and \(\varPhi _{2} \approx 0.14286\) as before. In addition, we take \(p=3/2\) and \(q=1/2\); direct computation then gives \(\varLambda _{1}^{(3/2)}\approx 1.60117\), \(\varLambda _{2}^{(1/2)}\approx 11.23742\), \(\varLambda _{3}^{(3/2)}\approx 1.26271\), and \(\varLambda _{4}^{(1/2)}\approx 6.47929\).

$$ (\mathrm{i})\quad f(t,x)=\frac{6x^{2}+10|x|}{5(5+6|x|)}+\frac{e^{-t^{2}}}{3+t}\quad \mbox{and}\quad g(t,x)=\frac{t^{2}}{36(4+t)^{2}}\tan ^{-1}|x|+\frac{t}{\sqrt{t+1}}+1. $$

Then we see that the Lipschitz condition (\(\mathrm{H}_{1}\)) holds due to

$$ \bigl\vert f(t_{1},x)-f(t_{1},y) \bigr\vert \leq \frac{2}{5} \vert x-y \vert ,\qquad \bigl\vert g(t_{2},x)-g(t_{2},y) \bigr\vert \leq \frac{1}{100} \vert x-y \vert , $$

for \(t_{1}\in J\), \(t_{2}\in J^{*}\) and \(x,y\in {\mathbb{R}}\), with \(L_{1}=2/5\) and \(L_{2}=1/100\). Also, we get \(g(0,\cdot )=1\). Then \(\varOmega _{2}=\max \{0.75284, 0.56988\}=0.75284<1\). Theorem 2.5 now implies that the impulsive mixed-order fractional differential problem (2.9) with functions in (i) has a unique solution on \([0,7]\).
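As an illustrative check (not part of the original example), both the arithmetic for \(\varOmega _{2}\) and the Lipschitz bounds in case (i) can be verified numerically, again sampling over supersets of J and \(J^{*}\) on which the bounds still hold.

```python
import math
import random

# Omega_2 from the Lambda values quoted for problem (2.9):
L1, L2 = 2/5, 1/100
Lam1, Lam2, Lam3, Lam4 = 1.60117, 11.23742, 1.26271, 6.47929
Omega2 = max(L1*Lam1 + L2*Lam2, L1*Lam3 + L2*Lam4)
assert Omega2 < 1
print(round(Omega2, 5))  # agrees with the value 0.75284 in the text

def f(t, x):
    return (6*x**2 + 10*abs(x)) / (5 * (5 + 6*abs(x))) + math.exp(-t**2) / (3 + t)

def g(t, x):
    return t**2 / (36 * (4 + t)**2) * math.atan(abs(x)) + t / math.sqrt(t + 1) + 1

random.seed(1)
for _ in range(10000):
    x, y = random.uniform(-40, 40), random.uniform(-40, 40)
    t1, t2 = random.uniform(0, 7), random.uniform(1, 6)
    assert abs(f(t1, x) - f(t1, y)) <= L1 * abs(x - y) + 1e-12
    assert abs(g(t2, x) - g(t2, y)) <= L2 * abs(x - y) + 1e-12
```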

$$ (\mathrm{ii})\quad f(t,x)=\frac{1}{t+5} (\frac{x^{8}}{3(x^{6}+1)}+\frac{1}{2} ) \quad \mbox{and} \quad g(t,x)=\frac{t}{15} (\frac{|x|^{9}}{4(|x|^{7}+1)}+1 )+e ^{-15t}. $$

Observe that \(g(0,\cdot )=1\). Since g is defined on \((2i-1, 2i]\), \(i = 1, 2, 3\), and \(e^{-15t}< t/15\) for \(t>0.26826\), we have the following estimates:

$$ \bigl\vert f(t_{1},x) \bigr\vert \leq \frac{1}{t+5} \biggl( \frac{1}{3}x^{2}+\frac{1}{2} \biggr), \qquad \bigl\vert g(t_{2},x) \bigr\vert \leq \frac{t}{15} \biggl( \frac{1}{4}x^{2}+2 \biggr), $$

for \(t_{1}\in J\), \(t_{2}\in J^{*}\) and \(x\in {\mathbb{R}}\). Choosing \(w_{1}(t)=1/(t+5)\), \(w_{2}(t)=t/15\), \(\psi _{1}(x)=(1/3)x^{2}+(1/2)\), and \(\psi _{2}(x)=(1/4)x^{2}+2\), we get \(\|w_{1}\|=1/5\) and \(\|w_{2}\|=2/5\). Furthermore, there exist constants \(N_{1}\in (1.29286,4.70714)\) and \(N_{2}\in (1.25216, 3.11147)\) satisfying condition (\(\mathrm{H}_{3}\)) of Theorem 2.7. The conclusion then follows from Theorem 2.7, namely, the mixed fractional order impulsive problem (2.9) with the functions in (ii) has at least one solution on \([0,7]\).
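The growth bounds of (\(\mathrm{H}_{2}\)) used in case (ii) can likewise be tested by random sampling; the sketch below (an illustration, not part of the paper) checks \(|f(t_{1},x)|\le w_{1}(t_{1})\psi _{1}(|x|)\) and \(|g(t_{2},x)|\le w_{2}(t_{2})\psi _{2}(|x|)\), using that \(e^{-15t}<t/15\) on \(J^{*}\).

```python
import math
import random

def f(t, x):
    return 1 / (t + 5) * (x**8 / (3 * (x**6 + 1)) + 1/2)

def g(t, x):
    return t / 15 * (abs(x)**9 / (4 * (abs(x)**7 + 1)) + 1) + math.exp(-15 * t)

def psi1(x): return x**2 / 3 + 1/2   # majorant for f, since x^8/(x^6+1) <= x^2
def psi2(x): return x**2 / 4 + 2     # majorant for g, since |x|^9/(|x|^7+1) <= x^2

random.seed(2)
for _ in range(10000):
    x = random.uniform(-30, 30)
    t1 = random.uniform(0, 7)   # superset of J
    t2 = random.uniform(1, 6)   # superset of J*; e^{-15 t2} < t2/15 here
    assert abs(f(t1, x)) <= psi1(abs(x)) / (t1 + 5) + 1e-12
    assert abs(g(t2, x)) <= (t2 / 15) * psi2(abs(x)) + 1e-12
```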

$$ (\mathrm{iii})\quad f(t,x)=\frac{x^{8}}{10(|x|^{7}+1)}+\frac{3}{2}\cos ^{2}t \quad \mbox{and}\quad g(t,x)=\frac{7|x|^{9}\sin ^{2} t}{100(x^{8}+1)}+\frac{t^{2}}{t^{2}+1}+1 . $$

Here we observe that

$$ \bigl\vert f(t,x) \bigr\vert \leq \frac{1}{10} \vert x \vert + \frac{3}{2}, \qquad \bigl\vert g(t,x) \bigr\vert \leq \frac{7}{100} \vert x \vert +\frac{73}{37}, $$

with \(Q_{1}=1/10\), \(Q_{2}=3/2\), \(Q_{3}=7/100\), \(Q_{4}=73/37\), and \(g(0,\cdot )=1\). Therefore, we get \(Q_{1}\varLambda _{1}^{(3/2)} + Q _{3}\varLambda _{2}^{(1/2)}\approx 0.94674< 1\) and \(Q_{1}\varLambda _{3}^{(3/2)} + Q_{3}\varLambda _{4}^{(1/2)}\approx 0.57982 < 1\). Hence, from Corollary 2.8, problem (2.9) with the functions in (iii) has at least one solution on \([0,7]\).
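As a final illustrative check (again, not part of the original example), the linear growth bounds of (\(\mathrm{H}_{4}\)) and the smallness conditions of Corollary 2.8 for case (iii) can be verified numerically from the quoted Λ values.

```python
import math
import random

Q1, Q2, Q3, Q4 = 1/10, 3/2, 7/100, 73/37
Lam1, Lam2, Lam3, Lam4 = 1.60117, 11.23742, 1.26271, 6.47929

def f(t, x):
    return x**8 / (10 * (abs(x)**7 + 1)) + 1.5 * math.cos(t)**2

def g(t, x):
    return 7 * abs(x)**9 * math.sin(t)**2 / (100 * (x**8 + 1)) + t**2 / (t**2 + 1) + 1

# Sampled check of the linear growth bounds (H4):
random.seed(3)
for _ in range(10000):
    x = random.uniform(-30, 30)
    t1, t2 = random.uniform(0, 7), random.uniform(1, 6)
    assert abs(f(t1, x)) <= Q1 * abs(x) + Q2 + 1e-12
    assert abs(g(t2, x)) <= Q3 * abs(x) + Q4 + 1e-12

# Smallness conditions of Corollary 2.8:
c1 = Q1 * Lam1 + Q3 * Lam2
c2 = Q1 * Lam3 + Q3 * Lam4
assert c1 < 1 and c2 < 1
print(round(c1, 5), round(c2, 5))
```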