1 Introduction

In this paper, we consider the following initial value problem for a nonlinear fractional differential equation with a sequential fractional derivative:

$$ \left \{ \textstyle\begin{array}{l} {}^{\mathrm{c}}D_{0}^{\alpha_{2}} (\vert {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x) \vert ^{p-2}\, {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x) ) =f(x,y(x)), \quad x>0, \\ y(0)=b_{0}, \qquad {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(0)=b_{1}, \end{array}\displaystyle \right . $$
(1.1)

where \({}^{\mathrm{c}}D_{0}^{\alpha_{1}}\), \({}^{\mathrm{c}}D_{0}^{\alpha_{2}}\) are Caputo fractional derivatives, \(0<\alpha_{1},\alpha_{2}\le1\), \(p>1\), \(b_{0},b_{1}\in\mathbb{R}\) and \(x^{\sigma}f(x,y)\) is continuous on \([0,+\infty)\times\mathbb{R}\) with \(0\le \sigma<\alpha_{2}\). When \(p=2\), the equation in (1.1) becomes a sequential fractional differential equation. Here, we follow the definition of the sequential fractional derivative presented by Podlubny [1]:

$$ \mathcal{D}^{\nu}y(x)=D^{\nu_{1}}D^{\nu_{2}} \cdots D^{\nu_{m}}y(x), \quad m\in \mathbb{N}_{+}, $$
(1.2)

where the symbol \(D^{\nu_{i}}\) (\(i=1,2,\ldots,m\)) denotes either the Caputo derivative or the Riemann-Liouville derivative. It is easy to see that (1.2) generalizes the expression presented by Miller and Ross in [2].

Fractional differential equations have been of great interest for the past three decades. This is due to the intensive development of the theory of fractional calculus itself as well as of its applications. Apart from diverse areas of pure mathematics, fractional differential equations are used to model various phenomena in science and engineering such as rheology, dynamical processes in self-similar and porous media, fluid flows, viscoelasticity, electrochemistry, control, electromagnetism, and many other fields; see [3–8]. Recently, the investigation of fractional differential equations with sequential fractional derivatives has attracted considerable attention of researchers (see [9–19]). For example, in [9, 10], Bǎleanu et al. investigated the existence and nonexistence of solutions for the initial value problem of the following linear sequential fractional differential equation:

$$\bigl(D_{0}^{\alpha}y\bigr)'+a(x)y=0, \quad x>0. $$

Besides, for a class of nonlinear sequential fractional differential equations with initial value conditions, the authors of [14, 18] considered the existence and uniqueness of solutions on a local interval.

To the best of our knowledge, there is no paper dealing with the existence and uniqueness of solutions of sequential fractional differential equations with initial value conditions on \([0,+\infty)\). In our previous paper [20], we showed that problem (1.1) always has a local solution for any fixed initial value and, further, that the maximal interval of existence of the local solution is actually \([0,+\infty)\) under a certain condition. In this paper, we are concerned with the existence and uniqueness of solutions of the initial value problem (1.1). We first establish the existence and uniqueness of solutions on a local interval by the Banach fixed point theorem, and then extend them to \([0,+\infty)\) by an inductive method. The main results of this paper are divided into three cases: \(1< p<2\), \(p>2\) and \(p=2\). It is worth mentioning that, due to the singularity of (1.1) in the case \(1< p<2\), a growth condition imposed on f is needed to guarantee the existence and uniqueness of solutions on \([0,+\infty)\). In addition, the existence and uniqueness of solutions of ordinary differential equations with the p-Laplacian follow as a special case of our results.

The paper is organized as follows. In Section 2, we present some necessary definitions and preliminary results that will be used in our discussions. The main results and their proofs are given in Section 3. In Section 4, we give an example to illustrate our results.

2 Preliminaries

In this section, we introduce some basic definitions and notations (see the monographs [1, 2] for further details) and give several useful preliminary results which are used throughout this paper.

Definition 2.1

Let \(\alpha>0\). The Riemann-Liouville fractional integral of a function \(y:(0,+\infty)\to\mathbb{R}\) of order α is given by

$$J_{0}^{\alpha}y(x):=\frac{1}{{\varGamma }(\alpha)}\int_{0}^{x}(x-t)^{\alpha -1}y(t) \,\mathrm{d}t $$

provided that the right-hand side is pointwise defined on \((0,+\infty)\). Here and in what follows Γ is the gamma function.

Definition 2.2

Let \(\alpha>0\) and n be the smallest integer that exceeds α. The Riemann-Liouville fractional derivative of a continuous function \(y:(0,+\infty)\to\mathbb{R}\) of order α is given by

$$D_{0}^{\alpha}y(x):=\frac{1}{{\varGamma }(n-\alpha)} \biggl(\frac{\mathrm{d}}{ \mathrm{d}x} \biggr)^{n}\int_{0}^{x}(x-t)^{n-\alpha-1}y(t) \,\mathrm{d}t $$

provided that the right-hand side is pointwise defined on \((0,+\infty)\).

Theorem 2.1

Let \(0<\alpha<1\). Assume that y is such that \(J_{0}^{1-\alpha}y\) is absolutely continuous. Then

$$J_{0}^{\alpha}D_{0}^{\alpha}y(x)=y(x)- \frac{x^{\alpha-1}}{{\varGamma }(\alpha)} \lim_{z\to0+}J_{0}^{1-\alpha}y(z), \quad x\ge0. $$

Definition 2.3

Let \(\alpha>0\) and n be the smallest integer that exceeds α. The Caputo fractional derivative of a continuous function \(y:(0,+\infty )\to\mathbb{R}\) of order α is given by

$${}^{\mathrm{c}}D_{0}^{\alpha}y(x):= \Biggl(D_{0}^{\alpha}\Biggl[y-\sum_{k=0}^{n-1}\frac {y^{(k)}(0)}{k!} ( \cdot)^{k} \Biggr] \Biggr) (x) $$

provided that the right-hand side is pointwise defined on \((0,+\infty)\).
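As a quick numerical illustration of Definition 2.1 (our own sketch, not part of the paper), the following code approximates \(J_{0}^{\alpha}y(x)\) with a product rectangle rule and checks it against the closed form \(J_{0}^{\alpha}[1](x)=x^{\alpha}/{\varGamma }(\alpha+1)\); the test function and all numerical values are arbitrary choices.

```python
# Product rectangle rule for the Riemann-Liouville integral of Definition 2.1:
# y is frozen at the left endpoint of each subinterval and the weakly singular
# kernel (x - t)^(alpha - 1) is integrated exactly there.  Our own sketch.
from math import gamma

def rl_integral(y, alpha, x, n=2000):
    h = x / n
    total = 0.0
    for k in range(n):
        left = x - k * h
        right = max(x - (k + 1) * h, 0.0)   # guard against round-off at t = x
        total += (left ** alpha - right ** alpha) / alpha * y(k * h)
    return total / gamma(alpha)

alpha, x = 0.5, 2.0
approx = rl_integral(lambda t: 1.0, alpha, x)
exact = x ** alpha / gamma(alpha + 1)
print(approx, exact)   # for y = 1 the rule is exact up to round-off
```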

We give the definition of solutions for the initial value problem (1.1).

Definition 2.4

A function y is called a solution of (1.1) on \([0,+\infty)\) if, for any \(T>0\),

  (i) \(y, {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y\in C([0,T])\), \({{}^{\mathrm{c}}D_{0}^{\alpha_{2}}} (\vert {}^{\mathrm{c}} D_{0}^{\alpha_{1}}y\vert ^{p-2}{{}^{\mathrm{c}}D_{0}^{\alpha_{1}}}y )\in C((0,T])\);

  (ii) y satisfies problem (1.1) on \((0,T]\).

Next, we give several useful preliminary results which will be used in this paper.

Lemma 2.1

Let \(0<\mu<\alpha\le1\). If g is a continuous function defined on \([0,+\infty)\), then \(\int_{0}^{x}(x-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\) is continuous with respect to x in \([0,+\infty)\).

Proof

Let \(x_{0}>0\) and take \(\delta_{0}\in(0,x_{0})\). For \(|\delta|<\delta_{0}\), we have

$$\begin{aligned}& \biggl\vert \int_{0}^{x_{0}+\delta}(x_{0}+ \delta-t)^{\alpha -1}t^{-\mu}g(t) \,\mathrm{d}t- \int _{0}^{x_{0}}(x_{0}-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\biggr\vert \\& \quad \le \biggl|\int_{0}^{x_{0}+\delta}(x_{0}+ \delta-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t- \int _{0}^{x_{0}}(x_{0}+\delta-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t \biggr| \\& \qquad {} + \biggl|\int_{0}^{x_{0}}(x_{0}+ \delta-t)^{\alpha-1}t^{-\mu }g(t) \,\mathrm{d}t- \int _{0}^{x_{0}}(x_{0}-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t \biggr| \\& \quad = \biggl\vert \int_{x_{0}}^{x_{0}+\delta}(x_{0}+ \delta-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\biggr\vert \\& \qquad {} +\int_{0}^{x_{0}}\bigl\vert \bigl((x_{0}+\delta-t)^{\alpha -1}-(x_{0}-t)^{\alpha-1} \bigr)t^{-\mu}g(t)\bigr\vert \,\mathrm{d}t. \end{aligned}$$
(2.1)

Since g is continuous and bounded in a neighborhood of \(x_{0}\), we conclude that

$$\begin{aligned}& \biggl\vert \int_{x_{0}}^{x_{0}+\delta}(x_{0}+ \delta-t)^{\alpha -1}t^{-\mu}g(t) \,\mathrm{d}t\biggr\vert \\& \quad \le C(x_{0}+\delta)^{\alpha-1}\biggl\vert \int _{x_{0}}^{x_{0}+\delta}\biggl(1-\frac {t}{x_{0}+\delta} \biggr)^{\alpha-1}t^{-\mu} \,\mathrm{d}t\biggr\vert \\& \quad \le C(x_{0}+\delta)^{\alpha-\mu}\biggl\vert \int _{x_{0}/(x_{0}+\delta )}^{1}(1-z)^{\alpha-1}z^{-\mu} \, \mathrm{d}z\biggr\vert \\& \quad \le C\frac{\delta^{\alpha}}{\alpha x_{0}^{\mu}} \end{aligned}$$
(2.2)

and

$$\begin{aligned}& \int_{0}^{x_{0}}\bigl\vert \bigl((x_{0}+\delta-t)^{\alpha -1}-(x_{0}-t)^{\alpha-1} \bigr)t^{-\mu}g(t)\bigr\vert \,\mathrm{d}t \\& \quad \le Cx_{0}^{\alpha-\mu}\int_{0}^{1} \biggl\vert \biggl(\frac{x_{0}+\delta }{x_{0}}-z \biggr)^{\alpha-1}-(1-z)^{\alpha-1} \biggr\vert z^{-\mu} \,\mathrm{d}z \\& \quad = Cx_{0}^{\alpha-\mu}\biggl\vert \int_{0}^{1}(1-z)^{\alpha-1}z^{-\mu} \,\mathrm{d}z- \int_{0}^{1+\delta/x_{0}} \biggl(1+ \frac{\delta}{x_{0}}-z \biggr)^{\alpha -1}z^{-\mu} \,\mathrm{d}z \\& \qquad {} +\int_{1}^{1+\delta/x_{0}} \biggl(1+ \frac{\delta }{x_{0}}-z \biggr)^{\alpha-1}z^{-\mu} \,\mathrm{d}z\biggr\vert \\& \quad \le Cx_{0}^{\alpha-\mu}\biggl\vert \int _{0}^{1}(1-z)^{\alpha-1}z^{-\mu} \, \mathrm{d}z- \int_{0}^{1+\delta/x_{0}} \biggl(1+ \frac{\delta}{x_{0}}-z \biggr)^{\alpha -1}z^{-\mu} \,\mathrm{d}z\biggr\vert \\& \qquad {} +Cx_{0}^{\alpha-\mu}\biggl\vert \int _{1}^{1+\delta/x_{0}} \biggl(1+\frac{\delta}{x_{0}}-z \biggr)^{\alpha-1} (z-1)^{-\mu} \,\mathrm{d}z\biggr\vert \\& \quad \le Cx_{0}^{\alpha-\mu}\frac{{{\varGamma }(\alpha)}{{\varGamma }(1-\mu)}}{{{\varGamma }(\alpha+1-\mu)}} \biggl\vert 1- \biggl(1+\frac{\delta}{x_{0}} \biggr)^{\alpha-\mu}+ \biggl(\frac {\delta}{x_{0}} \biggr)^{\alpha-\mu}\biggr\vert , \end{aligned}$$
(2.3)

where \(C=\sup_{0\le x\le x_{0}+\delta_{0}}|g(x)|\). Combining (2.1), (2.2) and (2.3), we arrive at

$$\lim_{\delta\to0}\biggl\vert \int_{0}^{x_{0}+\delta}(x_{0}+ \delta-t)^{\alpha -1}t^{-\mu}g(t) \,\mathrm{d}t- \int _{0}^{x_{0}}(x_{0}-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\biggr\vert =0. $$

In addition, it is easy to see that

$$\biggl\vert \int_{0}^{x}(x-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\biggr\vert \le \Bigl(\sup_{0\le t\le x}\bigl\vert g(t)\bigr\vert \Bigr) \frac{{\varGamma }(\alpha){\varGamma }(1-\mu)}{{\varGamma }(\alpha+1-\mu)}x^{\alpha-\mu}, $$

which yields continuity at \(x=0\).

Therefore, \(\int_{0}^{x}(x-t)^{\alpha-1}t^{-\mu}g(t) \,\mathrm{d}t\) is continuous with respect to x in \([0,+\infty)\). □

Remark 2.1

Let \(y\in C([0,+\infty))\). Then \(x^{\sigma}f(x,y(x))\) is continuous on \([0,+\infty)\). According to Lemma 2.1, the function

$$J_{0}^{\alpha_{2}}f\bigl(x,y(x)\bigr)=\frac{1}{{\varGamma }(\alpha_{2})} \int _{0}^{x}(x-t)^{\alpha_{2}-1}f\bigl(t,y(t)\bigr) \, \mathrm{d}t $$

is also continuous with respect to x in \([0,+\infty)\) and \(J_{0}^{\alpha_{2}}f(0,y(0))=0\).

The equivalence between the initial value problem (1.1) and an integral equation is established in the following lemma. For the convenience of the reader, we list some notation that will be used throughout this paper: \(q=\frac{p}{p-1}\) (the conjugate exponent of p) and \(\phi_{r}(s)=|s|^{r-2}s\) for \(s\in \mathbb{R}\) and \(r>1\); in particular, \(\phi_{q}\) is the inverse of \(\phi_{p}\).
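As a quick sanity check of this notation (our own illustration, not part of the paper), the following snippet verifies numerically that \(\phi_{q}\) inverts \(\phi_{p}\), which is exactly how Lemma 2.2 below unwinds the p-Laplacian in (1.1).

```python
# Numerical check that phi_q is the inverse of phi_p, where q = p/(p-1) and
# phi_r(s) = |s|^(r-2) s.  Our own illustration; the sample values are arbitrary.
def phi(r, s):
    return abs(s) ** (r - 2) * s if s != 0 else 0.0

p = 1.5
q = p / (p - 1)                      # here q = 3
for s in (-2.0, -0.3, 0.7, 5.0):
    assert abs(phi(q, phi(p, s)) - s) < 1e-9
print("phi_q(phi_p(s)) == s verified for the sample values")
```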

Lemma 2.2

A function y is a solution of problem (1.1) if and only if it satisfies the following integral equation:

$$ y(x)=b_{0}+\frac{1}{{\varGamma }(\alpha_{1})} \int_{0}^{x}(x-t)^{\alpha_{1}-1}{ \varPhi }_{y}(t) \,\mathrm{d}t, \quad x\ge0, $$
(2.4)

where

$${\varPhi }_{y}(x)={\phi}_{q} \biggl({\phi}_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{x}(x-t)^{\alpha_{2}-1}f \bigl(t,y(t)\bigr) \,\mathrm{d}t \biggr). $$

Proof

First we prove the necessity. Let \(y\in C([0,T])\) be a solution of problem (1.1) and define

$$g(x)=\bigl\vert {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x)\bigr\vert ^{p-2}{{}^{\mathrm{c}}D_{0}^{\alpha_{1}}}y(x), $$

Then \(g\in C([0,T])\) and \(g(0)={\phi}_{p}(b_{1})\). According to Definition 2.3 and Theorem 2.1, the differential equation in (1.1) can be written in the following form:

$$f\bigl(x,y(x)\bigr)=DJ_{0}^{1-\alpha_{2}}\bigl(g-g(0)\bigr) (x). $$

Obviously, \(J_{0}^{1-\alpha_{2}}(g-g(0))\) is absolutely continuous on \([0,T]\). Combining with Theorem 2.1, we have

$$\begin{aligned} g(x) & = g(0)+J_{0}^{\alpha_{2}} {{}^{\mathrm{c}}D_{0}^{\alpha_{2}}}g(x) \\ & = {\phi}_{p}(b_{1})+J_{0}^{\alpha_{2}}f\bigl( \cdot,y(\cdot)\bigr) (x). \end{aligned}$$

That is,

$${}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x)=\phi_{q} \bigl({ \phi}_{p}(b_{1})+J_{0}^{\alpha_{2}} f\bigl( \cdot,y(\cdot)\bigr) (x) \bigr). $$

Applying Definition 2.3 and Theorem 2.1 again, we have

$${{}^{\mathrm{c}}D}_{0}^{\alpha_{1}}y(x) = DJ_{0}^{1-\alpha_{1}} \bigl(y-y(0)\bigr) (x). $$

Obviously, \(J_{0}^{1-\alpha_{1}}(y-y(0))\in C^{1}([0,T])\). Combining with Theorem 2.1, we have

$$\begin{aligned} y(x) & = y(0)+J_{0}^{\alpha_{1}}D_{0}^{\alpha_{1}} \bigl(y-y(0)\bigr) (x) \\ & = y(0)+J_{0}^{\alpha_{1}}{{}^{\mathrm{c}}D_{0}^{\alpha_{1}}}y(x) \\ & = b_{0}+J_{0}^{\alpha_{1}}{\varPhi }_{y}(x). \end{aligned}$$

Therefore, y satisfies the integral equation (2.4).

Next, we prove the sufficiency. Let \(y\in C([0,T])\) be a solution of the integral equation (2.4). By Definition 2.1, equation (2.4) can be written as

$$ y(x)=b_{0}+J_{0}^{\alpha_{1}}{ \varPhi }_{y}(x). $$
(2.5)

From Remark 2.1, we see that \(J_{0}^{\alpha_{1}} {\varPhi }_{y}\in C([0,T])\) and \(J_{0}^{\alpha_{1}}{\varPhi }_{y}(0)=0\). That is, \(y\in C([0,T])\) and \(y(0)=b_{0}\). Applying the operator \({}^{\mathrm{c}}D^{\alpha_{1}}\) to both sides of equation (2.5), we obtain that

$$\begin{aligned} {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x) & = {{}^{\mathrm{c}}D}_{0}^{\alpha_{1}} \bigl(b_{0}+J_{0}^{\alpha_{1}}{\varPhi }_{y}(x) \bigr) \\ & = {D}_{0}^{\alpha_{1}}J_{0}^{\alpha_{1}}{ \varPhi }_{y}(x) \\ & = {\varPhi }_{y}(x). \end{aligned}$$

Then we have \({}^{\mathrm{c}}D_{0}^{\alpha_{1}}y\in C([0,T])\) and \({}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(0)= {\varPhi }_{y}(0)=b_{1}\). Writing \(g=\vert {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y\vert ^{p-2}\, {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y\) as in the first part of the proof, the above equation takes the following form:

$$ g(x)={\phi}_{p}(b_{1})+J_{0}^{\alpha_{2}}f \bigl(\cdot,y(\cdot)\bigr) (x). $$
(2.6)

Similarly, applying the operator \({}^{\mathrm{c}}D_{0}^{\alpha_{2}}\) to both sides of equation (2.6), we arrive at

$${}^{\mathrm{c}}D_{0}^{\alpha_{2}} \bigl(\bigl\vert {}^{\mathrm{c}}D_{0}^{\alpha_{1}}y(x) \bigr\vert ^{p-2}{{}^{\mathrm{c}}D_{0}^{\alpha_{1}}} y(x) \bigr)=f \bigl(x,y(x)\bigr). $$

Therefore, y is a solution of (1.1) on \([0,T]\). Summing up, we complete the proof of Lemma 2.2. □
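Lemma 2.2 is also the natural starting point for numerical approximation: one can discretize the integral equation (2.4) and work directly with the map \(y\mapsto Sy\). The sketch below is our own illustration, not from the paper; the right-hand side f, the fractional orders and all numerical values are hypothetical. It evaluates \(Sy\) on a uniform grid by freezing the non-singular factor at the left endpoint of each subinterval and integrating the weakly singular kernel exactly.

```python
# Discretized version of the operator S from (2.4) (our own illustration).
# y is represented by its samples on the uniform grid x_k = k*h; the kernels
# (x - t)^(alpha - 1) are integrated exactly on each subinterval while the
# remaining factor is frozen at the left endpoint.
from math import gamma

def phi(r, s):
    return abs(s) ** (r - 2) * s if s != 0 else 0.0

def frac_int(alpha, values, h, j):
    """Left-endpoint product rule for J_0^alpha, evaluated at x_j = j*h."""
    total = 0.0
    for k in range(j):
        w = ((j - k) ** alpha - (j - k - 1) ** alpha) * h ** alpha / alpha
        total += w * values[k]
    return total / gamma(alpha)

def apply_S(y, h, a1, a2, p, b0, b1, f):
    q = p / (p - 1)
    n = len(y) - 1
    fy = [f(k * h, y[k]) for k in range(n + 1)]
    Phi = [phi(q, phi(p, b1) + frac_int(a2, fy, h, j)) for j in range(n + 1)]
    return [b0 + frac_int(a1, Phi, h, j) for j in range(n + 1)]

# one application of S to the constant initial guess y = b0, with a
# hypothetical (non-singular) right-hand side f
f = lambda x, y: 1.0 + 0.1 * y
h, n = 0.01, 100
y0 = [0.0] * (n + 1)
y1 = apply_S(y0, h, a1=0.5, a2=0.75, p=1.5, b0=0.0, b1=1.0, f=f)
print(y1[-1])                     # approximation of (S y0)(1.0)
```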

Corollary 2.1

Assume that \(b_{0},b_{1}\ge0\) and \(f(x,y)\ge0\) for \((x,y)\in(0,+\infty )\times\mathbb{R}\). If y is a solution of problem (1.1), then \(y(x)\ge b_{0}\) for \(x\ge0\).

3 Main results

In this section, we present the main results of this paper. The following lemma will play a very important role in the proofs of the main results.

Lemma 3.1

Let \(\alpha, R>0\). For any fixed constant \(C>0\), there exist numbers \(0=t_{0}< t_{1}< t_{2}<\cdots<t_{N}=R\) such that for all \(i\in\{0,1,\ldots,N-1\}\) and all \(x\in[t_{i},R]\),

$$C\int_{t_{i}}^{\min\{x,t_{i+1}\}}(x-t)^{\alpha-1} \,\mathrm{d}t \le\frac{1}{2}. $$

Proof

The proof is divided into two cases: \(\alpha\ge1\) and \(0<\alpha<1\).

Case (i). If \(\alpha\ge1\), we take \(k=\sup_{0\le t\le x\le R}(x-t)^{\alpha-1}\), \(N=[2kCR]+1\) and \(t_{i}=iR/N\) for \(i=0,1,\ldots,N\). Then, for \(x\in[t_{i},R]\), we have

$$\begin{aligned} C\int_{t_{i}}^{\min\{x,t_{i+1}\}}(x-t)^{\alpha-1} \,\mathrm{d}t & \le C\int_{t_{i}}^{t_{i+1}}(x-t)^{\alpha-1} \, \mathrm{d}t \\ & \le kC(t_{i+1}-t_{i}) \\ & = kCR/N \\ & \le \frac{1}{2}. \end{aligned}$$

Case (ii). If \(0<\alpha<1\), we take \(N\ge [R (2C/\alpha )^{1/\alpha } ]+1\) and \(t_{i}=iR/N\) for \(i=0,1,\ldots,N\). On the one hand, for \(x\le t_{i+1}\), we have

$$\begin{aligned} \delta(x)&:=C\int_{t_{i}}^{\min\{x,t_{i+1}\}}(x-t)^{\alpha-1} \, \mathrm{d}t \\ & = C\int_{t_{i}}^{x}(x-t)^{\alpha-1} \, \mathrm{d}t \\ & \le \frac{C}{\alpha} \biggl(\frac{R}{N} \biggr)^{\alpha}\\ & \le \frac{1}{2}. \end{aligned}$$

On the other hand, for \(x\ge t_{i+1}\), we have

$$\delta(x)=C\int_{t_{i}}^{\min\{x,t_{i+1}\}}(x-t)^{\alpha-1} \, \mathrm{d}t= C\int_{t_{i}}^{t_{i+1}}(x-t)^{\alpha-1} \, \mathrm{d}t. $$

Since \(\alpha-1<0\), then

$$\delta(x)\le\delta(t_{i+1})\le1/2,\quad x\ge t_{i+1}. $$

This completes the proof of Lemma 3.1. □
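The construction in this proof is completely explicit. The following snippet (our own sketch; the sample values of \(\alpha\), R and C are arbitrary) reproduces the choice of N and the uniform grid \(t_{i}=iR/N\) from the two cases above.

```python
# A minimal sketch of the partition constructed in Lemma 3.1: choose N so that
#   C * int_{t_i}^{min(x, t_{i+1})} (x - t)^(alpha - 1) dt <= 1/2
# for all x in [t_i, R], then take the uniform grid t_i = i*R/N.
from math import floor

def lemma_3_1_partition(alpha, R, C):
    if alpha >= 1:
        k = R ** (alpha - 1)          # sup of (x - t)^(alpha - 1) over 0 <= t <= x <= R
        N = floor(2 * k * C * R) + 1  # case (i) of the proof
    else:
        N = floor(R * (2 * C / alpha) ** (1 / alpha)) + 1   # case (ii)
    return [i * R / N for i in range(N + 1)]

# example (hypothetical) values for alpha, R and C
points = lemma_3_1_partition(alpha=0.5, R=1.0, C=3.0)
print(len(points) - 1, points[:3])    # N and the first few grid points
```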

Firstly, we consider the existence and uniqueness of solutions when \(1< p<2\).

Proposition 3.1

Suppose that \(1< p<2\) and there exists a constant \(L>0\) such that

$$ \bigl\vert f(x,y_{1})-f(x,y_{2})\bigr\vert \le\frac{L}{x^{\sigma}}|y_{1}-y_{2}| $$
(3.1)

for \((x,y_{1}),(x,y_{2})\in(0,\infty)\times\mathbb{R}\), where \(0\le\sigma <\alpha_{2}\). Then, for any fixed \(K>0\), there exists a sufficiently small constant \(T^{*}>0\) such that the integral equation (2.4) has a solution in \(C([0,T^{*}])\).

Proof

For any given positive constant K, take a constant \(T^{*}>0\), to be determined later, and set

$$U_{(T^{*},K)}= \Bigl\{ y\in C\bigl(\bigl[0,T^{*}\bigr]\bigr); \sup _{0\le x\le T^{*}}\bigl\vert y(x)-b_{0}\bigr\vert \le K \Bigr\} . $$

Obviously, \(U_{(T^{*},K)}\) is a closed, convex and nonempty subset of \(C([0,T^{*}])\). On this set \(U_{(T^{*},K)}\) we define the operator S,

$$Sy(x)=b_{0}+\frac{1}{{\varGamma }(\alpha_{1})} \int_{0}^{x}(x-t)^{\alpha_{1}-1}{ \varPhi }_{y}(t) \,\mathrm{d}t, \quad x\ge0, $$

where

$${\varPhi }_{y}(x)={\phi}_{q} \biggl({\phi}_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{x}(x-t)^{\alpha_{2}-1}f \bigl(t,y(t)\bigr) \,\mathrm{d}t \biggr). $$

It is easy to see that \(Sy\in C([0,T^{*}])\) for any \(y\in U_{(T^{*},K)}\). Furthermore, for \(x\in[0,T^{*}]\), we have

$$\begin{aligned} \bigl\vert Sy(x)-b_{0}\bigr\vert & = \biggl\vert \frac{1}{{\varGamma }(\alpha_{1})}\int_{0}^{x}(x-t)^{\alpha _{1}-1}{ \varPhi }_{y}(t) \,\mathrm{d}t\biggr\vert \\ & \le \frac{1}{{\varGamma }(\alpha_{1})}\int_{0}^{x}(x-t)^{\alpha _{1}-1}{ \phi}_{q} \biggl(|b_{1}|^{p-1}+\frac{M_{T^{*},K}{\varGamma }(1-\sigma)}{ {\varGamma }(\alpha_{2}-\sigma+1)} t^{\alpha_{2}-\sigma} \biggr) \,\mathrm{d}t \\ & \le \frac{{T^{*}}^{\alpha_{1}}}{{\varGamma }(1+\alpha_{1})} \biggl(|b_{1}|^{p-1}+ \frac{M_{T^{*},K}{\varGamma }(1-\sigma)}{{\varGamma }(\alpha _{2}-\sigma+1)}{T^{*}}^{\alpha_{2}-\sigma} \biggr)^{q-1}, \end{aligned}$$

where

$$M_{T^{*},K}=\sup_{x\in[0,T^{*}], y\in[b_{0}-K,b_{0}+K]}x^{\sigma} \bigl\vert f(x,y)\bigr\vert . $$

Now we can choose \(T^{*}>0\) small enough such that

$$ \frac{{T^{*}}^{\alpha_{1}}}{{\varGamma }(1+\alpha_{1})} \biggl(|b_{1}|^{p-1}+ \frac{M_{T^{*},K}{\varGamma }(1-\sigma)}{{\varGamma }(\alpha _{2}-\sigma+1)}{T^{*}}^{\alpha_{2}-\sigma} \biggr)^{q-1}\le K. $$
(3.2)

Then \(Sy\in U_{(T^{*},K)}\), that is, the operator S maps the set \(U_{(T^{*},K)}\) into itself.

In what follows, we will show that the operator S has a unique fixed point in \(U_{(T^{*},K)}\). Let us recall Lemma 3.1, where we take

$$\begin{aligned}& R=T^{*},\qquad \alpha=\alpha_{1}, \\& C=\frac{2L(q-1){\varGamma }(1-\sigma){T^{*}}^{\alpha_{2}-\sigma }}{{\varGamma } (\alpha_{1}){\varGamma }(\alpha_{2}-\sigma+1)} \biggl(|b_{1}|^{p-1}+ \frac {M_{T^{*},K}{\varGamma } (1-\sigma)}{{\varGamma }(\alpha_{2}-\sigma+1)}{T^{*}}^{\alpha_{2}-\sigma } \biggr)^{q-2} \end{aligned}$$

and

$$N=\max \bigl\{ [2CR ], \bigl[R(2C/\alpha)^{1/\alpha} \bigr] \bigr\} +1. $$

Then we have N points \(t_{i}=iT^{*}/N\) and N sets \(U_{(t_{i},K)}\), \(i=1,2,\ldots,N\). The proof will be completed by applying the Banach fixed point theorem in \(U_{(t_{i},K)}\), \(i=1,2,\ldots,N\), respectively. We first concentrate on the interval \([0,t_{1}]\). For \(y_{1},y_{2}\in U_{(t_{1},K)}\) and \(t\in[0,t_{1}]\), we have

$$\begin{aligned}& \bigl\vert {\varPhi }_{y_{1}}(t)-{\varPhi }_{y_{2}}(t)\bigr\vert \\& \quad = \biggl\vert {\phi}_{q} \biggl(\phi_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{1}(s)\bigr) \,\mathrm{d}s \biggr) \\& \qquad {}-{\phi}_{q} \biggl(\phi_{p}(b_{1})+ \frac {1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{2}(s)\bigr) \,\mathrm{d}s \biggr)\biggr\vert \\& \quad \le \frac{2(q-1)}{{\varGamma }(\alpha_{2})} \biggl(|b_{1}|^{p-1}+ \frac {M_{T^{*},K}{\varGamma } (1-\sigma)}{{\varGamma }(\alpha_{2}-\sigma+1)}{T^{*}}^{\alpha_{2}-\sigma } \biggr)^{q-2} \\& \qquad {}\times\int_{0}^{t}(t-s)^{\alpha_{2}-1} \bigl\vert f\bigl(s,y_{1}(s)\bigr)-f\bigl(s,y_{2}(s)\bigr) \bigr\vert \,\mathrm{d}s \\& \quad \le \frac{2L(q-1){\varGamma }(1-\sigma)}{{\varGamma }(\alpha _{2}-\sigma+1)} \biggl(|b_{1}|^{p-1} + \frac{M_{T^{*},K}{\varGamma }(1-\sigma)}{{\varGamma }(\alpha _{2}-\sigma+1)} {T^{*}}^{\alpha_{2}-\sigma} \biggr)^{q-2}t^{\alpha_{2}-\sigma} \\& \qquad {}\times\sup_{t\in[0,t_{1}]} \bigl\vert y_{1}(t)-y_{2}(t) \bigr\vert \\& \quad \le C{\varGamma }(\alpha_{1})\sup_{t\in[0,t_{1}]}\bigl\vert y_{1}(t)-y_{2}(t)\bigr\vert . \end{aligned}$$

By Lemma 3.1, it follows that

$$\begin{aligned} \bigl\vert Sy_{1}(x)-Sy_{2}(x)\bigr\vert & \le \frac{1}{{\varGamma }(\alpha_{1})}\int_{0}^{x}(x-t)^{\alpha_{1}-1} \bigl\vert {\varPhi }_{y_{1}}(t)-{\varPhi }_{y_{2}}(t)\bigr\vert \, \mathrm{d}t \\ & \le C\int_{0}^{x}(x-t)^{\alpha_{1}-1} \, \mathrm{d}t\sup_{x\in [0,t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x) \bigr\vert \\ & \le \frac{1}{2}\sup_{x\in[0,t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x)\bigr\vert . \end{aligned}$$

Since this inequality holds uniformly for all \(x\in[0,t_{1}]\), we deduce

$$\sup_{x\in[0,t_{1}]}\bigl\vert Sy_{1}(x)-Sy_{2}(x) \bigr\vert \le\frac{1}{2}\sup_{x\in [0,t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x)\bigr\vert , $$

which implies that the operator S is a contraction mapping on \(U_{(t_{1},K)}\). According to the Banach fixed point theorem, S has a unique fixed point in \(U_{(t_{1},K)}\). Then equation (2.4) has a unique solution in \(U_{(t_{1},K)}\), which is denoted by \(y^{(1)}\). Next, we need to extend this existence and uniqueness result from \([0,t_{1}]\) to \([0,t_{N}]\). We will do this in an inductive manner, using our result for the first interval \([0,t_{1}]\) as the basis. Thus we assume that the claim holds on \([0,t_{i}]\) for some i, that is, (2.4) has a unique solution \(y^{(i)}\) in \(U_{(t_{i},K)}\), and we shall prove that if \(i< N\), (2.4) also has a unique solution \(y^{(i+1)}\) in \(U_{(t_{i+1},K)}\) and \(y^{(i)}=y^{(i+1)}\) for \(x\in[0,t_{i}]\). Define the set

$$E= \bigl\{ y\in C\bigl([t_{0},t_{i+1}]\bigr); y\in U_{(t_{i+1},K)}, y(x)=y^{(i)}(x), x\in[t_{0},t_{i}] \bigr\} . $$

Obviously, E is a closed and convex subset of \(C([t_{0},t_{i+1}])\) and S maps E into itself. Proceeding in a way that is very similar to our approach above, for \(y_{1}, y_{2}\in E\), we see that

$$\begin{aligned} \bigl\vert Sy_{1}(x)-Sy_{2}(x)\bigr\vert & \le \frac{1}{{\varGamma }(\alpha_{1})}\int_{0}^{x}(x-t)^{\alpha_{1}-1} \bigl\vert {\varPhi }_{y_{1}}(t)-{\varPhi }_{y_{2}}(t)\bigr\vert \, \mathrm{d}t \\ & \le C\int_{0}^{x}(x-t)^{\alpha_{1}-1} \, \mathrm{d}t\sup_{x\in [0,t_{i+1}]}\bigl\vert y_{1}(x)-y_{2}(x) \bigr\vert . \end{aligned}$$

Combining with Lemma 3.1 and noting that \(y_{1}(x)=y_{2}(x)\) when \(0\le x\le t_{i}\), we deduce

$$\sup_{x\in[0,t_{i+1}]}\bigl\vert Sy_{1}(x)-Sy_{2}(x) \bigr\vert \le\frac{1}{2}\sup_{x\in [0,t_{i+1}]}\bigl\vert y_{1}(x)-y_{2}(x)\bigr\vert , $$

which implies that the operator S is a contraction mapping on E. According to the Banach fixed point theorem, S has a unique fixed point in E, which is denoted by \(y^{(i+1)}\). Then equation (2.4) has a unique solution \(y^{(i+1)}\) in \(U_{(t_{i+1},K)}\) and \(y^{(i)}(x)=y^{(i+1)}(x)\) for \(x\in[0,t_{i}]\). Therefore, by induction, the operator S has a unique fixed point \(y^{(N)}\) in \(U_{(T^{*},K)}\), that is, \(y^{(N)}\) is the unique solution of (2.4) in \(U_{(T^{*},K)}\). The proof of Proposition 3.1 is completed. □
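To get a feel for how small \(T^{*}\) typically has to be, inequality (3.2) can be checked numerically. The sketch below is our own illustration: all parameter values are hypothetical, and the \(T^{*}\)-dependent quantity \(M_{T^{*},K}\) is replaced by a fixed upper bound M; a trial value of \(T^{*}\) is simply halved until (3.2) holds.

```python
# Halve a trial T* until inequality (3.2) holds (illustrative only; the
# parameter values are hypothetical and M plays the role of M_{T*,K}).
from math import gamma

a1, a2, sigma = 0.5, 0.75, 0.5       # alpha_1, alpha_2, sigma
p, b1, K, M = 1.5, 1.0, 1.0, 2.0
q = p / (p - 1)

def lhs_of_3_2(T):
    inner = abs(b1) ** (p - 1) + M * gamma(1 - sigma) / gamma(a2 - sigma + 1) * T ** (a2 - sigma)
    return T ** a1 / gamma(1 + a1) * inner ** (q - 1)

T = 1.0
while lhs_of_3_2(T) > K:
    T /= 2.0
print("T* =", T, "satisfies (3.2) for K =", K)
```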

Remark 3.1

Obviously, the solution of (2.4) obtained in Proposition 3.1 is unique in the set \(U_{(T^{*},K)}\).

From inequality (3.2), we see that \(M_{T^{*},K}\) has an important impact on \(T^{*}\) and K. As a result, if there exist some specific growth conditions on \(M_{T^{*},K}\) with respect to \(T^{*}\) and K, the solution obtained in Proposition 3.1 can be extended to \(C([0,+\infty))\).

Theorem 3.1

Suppose that the conditions of Proposition  3.1 hold and there exist \(c_{1},c_{2}\ge0\) and \(\mu\in[0,p-1)\) such that

$$ \bigl\vert f(x,y)\bigr\vert \le\frac{1}{x^{\sigma}} \bigl(c_{1}+c_{2}|y|^{\mu}\bigr), \quad (x,y)\in (0, \infty)\times\mathbb{R}, $$
(3.3)

where \(0\le\sigma<\alpha_{2}\), then problem (1.1) has a unique solution in \(C([0,\infty))\).

Proof

It suffices to prove that problem (1.1) has a unique solution in \(C([0,T'])\) for any fixed \(T'>0\). Since \(\mu\in[0,p-1)\), there exists a sufficiently large constant \(K_{T'}>0\) such that

$$c_{1}+c_{2} \bigl(|b_{0}|+K' \bigr)^{\mu}\le\frac{{\varGamma }(\alpha _{2}-\sigma+1)}{ {\varGamma }(1-\sigma)T^{\prime\alpha_{2}-\sigma}} \biggl( \biggl( \frac{K'{\varGamma }(\alpha_{1}+1)}{T^{\prime\alpha_{1}}} \biggr)^{p-1}-|b_{1}|^{p-1} \biggr) $$

for any \(K'\ge K_{T'}\). By virtue of (3.3), we have

$$\begin{aligned} M_{{T',K'}}&=\sup_{x\in[0,T'], y\in[b_{0}-K',b_{0}+K']}x^{\sigma} \bigl\vert f(x,y)\bigr\vert \\ &\le c_{1}+c_{2}\sup_{y\in[b_{0}-K',b_{0}+K']}|y|^{\mu}\\ & \le c_{1}+c_{2}\bigl(|b_{0}|+K' \bigr)^{\mu}\\ & \le \frac{{\varGamma }(\alpha_{2}-\sigma+1)}{{\varGamma }(1-\sigma)T^{\prime\alpha_{2}-\sigma}} \biggl( \biggl( \frac{K'{\varGamma }(\alpha_{1}+1)}{T^{\prime\alpha_{1}}} \biggr)^{p-1}-|b_{1}|^{p-1} \biggr). \end{aligned}$$

That is,

$$\frac{{T'}^{\alpha_{1}}}{{\varGamma }(1+\alpha_{1})} \biggl(|b_{1}|^{p-1}+ \frac{M_{T',K'}{\varGamma }(1-\sigma)}{{\varGamma }(\alpha _{2}-\sigma+1)}{T'}^{\alpha_{2}-\sigma} \biggr)^{q-1}\le K'. $$

According to Proposition 3.1 and Remark 3.1, where we take \(T^{*}=T'\) and \(K=K'\), problem (1.1) has a unique solution in \(U_{(T',K')}\). Since \(K'\) can be chosen arbitrarily large, the solution is actually unique in \(C([0,T'])\). So we complete the proof of Theorem 3.1. □
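The role of the growth restriction \(\mu<p-1\) can also be seen numerically: in the displayed inequality above, the right-hand side grows like \(K'^{\,p-1}\) while the left-hand side grows only like \(K'^{\,\mu}\), so doubling \(K'\) eventually produces an admissible value. The snippet below is our own sketch with hypothetical parameter values.

```python
# Illustration of the key inequality in the proof of Theorem 3.1 (hypothetical
# parameters).  Since mu < p - 1, doubling K' eventually yields an admissible
# constant K_{T'}.
from math import gamma

a1, a2, sigma = 0.5, 0.75, 0.5       # alpha_1, alpha_2, sigma
p, b0, b1 = 1.5, 0.0, 1.0
c1, c2, mu = 1.0, 1.0, 0.25          # growth condition (3.3), with mu < p - 1
T = 10.0                             # T'

def lhs(K):
    return c1 + c2 * (abs(b0) + K) ** mu

def rhs(K):
    factor = gamma(a2 - sigma + 1) / (gamma(1 - sigma) * T ** (a2 - sigma))
    return factor * ((K * gamma(a1 + 1) / T ** a1) ** (p - 1) - abs(b1) ** (p - 1))

K = 1.0
while lhs(K) > rhs(K):
    K *= 2.0
print("admissible K' =", K)
```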

Corollary 3.1

Suppose that \(1< p<2\), \(b_{0}, b_{1}\ge0\) and \(f(x,y)\ge0\) for \((x,y)\in (0,\infty)\times\mathbb{R}\). Moreover, if (3.1) and (3.3) hold for \((x,y)\in (0,\infty)\times[b_{0},+\infty)\), then problem (1.1) has a unique positive solution in \(C([0,\infty))\).

Proof

Following the proofs of Proposition 3.1 and Theorem 3.1 and replacing \(U_{(T^{*},K)}\) by \(U^{+}_{(T^{*},K)}\), where

$$U^{+}_{(T^{*},K)}= \bigl\{ y\in C\bigl(\bigl[0,T^{*}\bigr]\bigr); b_{0} \le y(x)\le b_{0}+K \text{ for } 0\le x\le T^{*} \bigr\} , $$

we see that problem (1.1) has a unique positive solution in \(C([0,\infty))\). □

Secondly, we consider the existence and uniqueness of solutions when \(p>2\).

Theorem 3.2

Suppose that \(p>2\), \(b_{1}\ne0\) and \(b_{1}f(x,y)\ge0\) for \((x,y)\in(0,\infty )\times\mathbb{R}\). If there exists a constant \(L>0\) such that (3.1) holds in \((0,\infty)\times\mathbb{R}\), then problem (1.1) has a unique solution in \(C([0,\infty))\).

Proof

It suffices to prove that problem (1.1) has a unique solution in \(C([0,T])\) for any \(T>0\). As in the proof of Proposition 3.1, we define the operator S by

$$Sy(x)=b_{0}+\frac{1}{{\varGamma }(\alpha_{1})} \int_{0}^{x}(x-t)^{\alpha_{1}-1}{ \varPhi }_{y}(t) \,\mathrm{d}t, \quad x\ge0, $$

where

$${\varPhi }_{y}(x)={\phi}_{q} \biggl({\phi}_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{x}(x-t)^{\alpha_{2}-1}f \bigl(t,y(t)\bigr) \,\mathrm{d}t \biggr). $$

It is easy to see that \(Sy\in C([0,T])\) if \(y\in C([0,T])\). Let us recall the points \(t_{i}\) (\(i=1,2,\ldots,N\)) of Lemma 3.1, where we take

$$\begin{aligned}& R=T,\qquad \alpha=\alpha_{1}, \\& C=\frac{(p-1)L{\varGamma }(1-\sigma)T^{\alpha_{2}-\sigma}}{|b_{1}|^{p-2} {\varGamma }(\alpha_{1}){\varGamma }(\alpha_{2}-\sigma+1)} \end{aligned}$$

and

$$N=\max \bigl\{ [2CR ], \bigl[R(2C/\alpha)^{1/\alpha} \bigr] \bigr\} +1. $$

In what follows, arguing inductively as in the proof of Proposition 3.1, we shall show that the operator S has a unique fixed point in \(C([0,T])\). We first concentrate on the interval \([0,t_{1}]\). For any \(y_{1}, y_{2}\in C([0,t_{1}])\), it is easy to see that

$$\begin{aligned}& \biggl(\phi_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})}\int _{0}^{x}(x-t)^{\alpha _{2}-1}f\bigl(t,y_{1}(t) \bigr) \,\mathrm{d}t \biggr) \\& \quad {}\times \biggl(\phi_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{x}(x-t)^{\alpha _{2}-1}f \bigl(t,y_{2}(t)\bigr) \,\mathrm{d}t \biggr)\ge0,\quad 0\le x\le t_{1}. \end{aligned}$$

Then we have

$$\begin{aligned}& \bigl\vert {\varPhi }_{y_{1}}(t)-{\varPhi }_{y_{2}}(t)\bigr\vert \\& \quad = \biggl\vert {\phi}_{q} \biggl(\phi_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{1}(s)\bigr) \,\mathrm{d}s \biggr) \\& \qquad {}-{\phi}_{q} \biggl(\phi_{p}(b_{1})+ \frac {1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{2}(s)\bigr) \,\mathrm{d}s \biggr)\biggr\vert \\& \quad = \biggl\vert \biggl\vert \phi_{p}(b_{1})+ \frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{1}(s)\bigr) \,\mathrm{d}s\biggr\vert ^{q-1} \\& \qquad {}-\biggl\vert \phi_{p}(b_{1})+\frac{1}{{\varGamma }(\alpha_{2})} \int_{0}^{t}(t-s)^{\alpha_{2}-1}f \bigl(s,y_{2}(s)\bigr) \,\mathrm{d}s\biggr\vert ^{q-1}\biggr\vert \\& \quad \le \frac{p-1}{|b_{1}|^{p-2}{\varGamma }(\alpha_{2})}\int_{0}^{t}(t-s)^{\alpha_{2}-1} \bigl\vert f\bigl(s,y_{1}(s)\bigr)-f\bigl(s,y_{2}(s) \bigr)\bigr\vert \,\mathrm{d}s \\& \quad \le \frac{(p-1)L}{|b_{1}|^{p-2}{\varGamma }(\alpha_{2})}\int_{0}^{t}(t-s)^{\alpha_{2}-1}s^{-\sigma} \bigl\vert y_{1}(s)-y_{2}(s)\bigr\vert \,\mathrm{d}s \\& \quad \le \frac{(p-1)L{\varGamma }(1-\sigma)T^{\alpha_{2}-\sigma }}{|b_{1}|^{p-2}{\varGamma }(\alpha_{2}-\sigma+1)} \sup_{t\in[t_{0},t_{1}]}\bigl\vert y_{1}(t)-y_{2}(t)\bigr\vert . \end{aligned}$$

Furthermore, according to Lemma 3.1, we obtain that

$$\begin{aligned} \bigl\vert Sy_{1}(x)-Sy_{2}(x)\bigr\vert & \le \frac{1}{{\varGamma }(\alpha_{1})}\int_{0}^{x}(x-t)^{\alpha _{1}-1} \bigl\vert {\varPhi }_{y_{1}}(t)-{\varPhi }_{y_{2}}(t)\bigr\vert \, \mathrm{d}t \\ & \le C\int_{0}^{x}(x-t)^{\alpha_{1}-1} \, \mathrm{d}t\sup_{x\in [t_{0},t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x) \bigr\vert \\ & \le \frac{1}{2}\sup_{x\in[t_{0},t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x)\bigr\vert . \end{aligned}$$

Since this inequality holds uniformly for all \(x\in[0,t_{1}]\), we deduce

$$\sup_{x\in[0,t_{1}]}\bigl\vert Sy_{1}(x)-Sy_{2}(x) \bigr\vert \le\frac{1}{2}\sup_{x\in [0,t_{1}]}\bigl\vert y_{1}(x)-y_{2}(x)\bigr\vert , $$

which implies that the operator S is a contraction mapping on \(C([0,t_{1}])\). According to the Banach fixed point theorem, S has a unique fixed point in \(C([0,t_{1}])\), which is denoted by y. Thus, y is the unique solution of (2.4) in \(C([0,t_{1}])\). Arguing inductively as in the proof of Proposition 3.1, we obtain that problem (1.1) has a unique solution \(y\in C([0,T])\). Summing up, we complete the proof of Theorem 3.2. □

Finally, we consider the existence and uniqueness of solutions when \(p=2\).

Theorem 3.3

Suppose that \(p=2\). If there exists a constant \(L>0\) such that (3.1) holds in \((0,\infty)\times\mathbb{R}\), then problem (1.1) has a unique solution in \(C([0,\infty))\).

Proof

The proof is similar to that of Theorem 3.2, and we omit the details here. □

4 Example

To illustrate our main results, we present an example here.

Example 4.1

Consider the following initial value problem for a nonlinear fractional differential equation:

$$ \left \{ \textstyle\begin{array}{l} {}^{\mathrm{c}}D_{0}^{0.75} (\vert {}^{\mathrm{c}}D_{0}^{0.5}y(x) \vert ^{p-2}\, {}^{\mathrm{c}}D_{0}^{0.5}y(x) )=f(x,y(x)), \quad x>0, \\ y(0)=0, \qquad {}^{\mathrm{c}}D_{0}^{0.5}y(0)=1, \end{array}\displaystyle \right . $$
(4.1)

where \(p>1\) and \(f(x,y)=x^{-0.5}(1+\sin y)\). If a solution y exists, then \(y(x)\ge0\) for \(x\ge0\) by Corollary 2.1, since \(f(x,y)\ge0\) for \((x,y)\in(0,+\infty)\times\mathbb{R}\) and \(b_{0}=0\), \(b_{1}=1\). It is easy to see that

$$\bigl\vert f(x,y_{1})-f(x,y_{2})\bigr\vert =x^{-0.5}|\sin y_{1}-\sin y_{2}|\le x^{-0.5}|y_{1}-y_{2}| $$

for \((x,y_{1}),(x,y_{2})\in(0,\infty)\times[0,\infty)\). Besides, when \(1< p<2\), we have

$$\bigl\vert f(x,y)\bigr\vert =x^{-0.5}(1+\sin y)\le x^{-0.5} \bigl(1+y^{\mu}\bigr) $$

for \((x,y)\in(0,\infty)\times[0,\infty)\), where \(\mu\in(0,p-1)\). According to Corollary 3.1 (for \(1< p<2\)), Theorem 3.2 (for \(p>2\)) and Theorem 3.3 (for \(p=2\)), problem (4.1) has a unique positive solution. Meanwhile, since \(|f(x,y)|\le2x^{-0.5}\), we have \(J_{0}^{0.75}f(\cdot,y(\cdot))(x)=O(x^{0.25})\) as \(x\to+\infty\), hence \({\varPhi }_{y}(x)=O(x^{\frac{1}{4(p-1)}})\) and \(y(x)=b_{0}+J_{0}^{0.5}{\varPhi }_{y}(x)=O(x^{\frac{1}{2}+\frac{1}{4(p-1)}})\), which gives the estimate of the growth rate

$$y(x)=O\bigl(x^{\frac{2p-1}{4(p-1)}}\bigr)\quad (x\to+\infty). $$
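As a purely numerical illustration of this example (our own sketch, not part of the paper), the following code discretizes the Volterra form (2.4) of problem (4.1) with an explicit product-rectangle rule, using the same quadrature idea as in the sketch following Lemma 2.2 and taking \(p=1.5\); the grid, the horizon T and the crude treatment of the integrable singularity of f at \(t=0\) are arbitrary choices. For \(p=1.5\) the predicted exponent \(\frac{2p-1}{4(p-1)}\) equals 1, so the ratio \(y(x)/x\) should remain bounded as x grows.

```python
# Explicit product-rectangle discretization of the Volterra form (2.4) of
# problem (4.1) with p = 1.5 (our own sketch; all numerical choices are
# arbitrary).  The integrable singularity of f at t = 0 is handled crudely by
# dropping the t = 0 sample, which only affects the first subinterval.
from math import gamma, sin

p = 1.5
q = p / (p - 1)
a1, a2, sigma = 0.5, 0.75, 0.5
b0, b1 = 0.0, 1.0
T, n = 10.0, 500
h = T / n

def phi(r, s):
    return abs(s) ** (r - 2) * s if s != 0 else 0.0

def weight(alpha, j, k):
    # integral of (x_j - t)^(alpha - 1) over [t_k, t_{k+1}], done in closed form
    return ((j - k) ** alpha - (j - k - 1) ** alpha) * h ** alpha / alpha

y = [b0] * (n + 1)
fy = [0.0] * (n + 1)                       # samples of f(t_k, y(t_k)); fy[0] dropped
Phi = [phi(q, phi(p, b1))] + [0.0] * n     # Phi_y at the grid points

for j in range(1, n + 1):
    # y at node j uses Phi only at nodes 0 .. j-1, so the scheme is explicit
    y[j] = b0 + sum(weight(a1, j, k) * Phi[k] for k in range(j)) / gamma(a1)
    fy[j] = (j * h) ** (-sigma) * (1.0 + sin(y[j]))
    Phi[j] = phi(q, phi(p, b1) + sum(weight(a2, j, m) * fy[m] for m in range(j)) / gamma(a2))

rate = (2 * p - 1) / (4 * (p - 1))         # predicted growth exponent, = 1 for p = 1.5
print(y[n], y[n] / T ** rate)              # value at x = T and the scaled value
```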