1 Introduction

In recent years, a growing body of experimental and theoretical work has shown that many anomalous phenomena arising in engineering and the applied sciences can be described by fractional calculus, and fractional differential equations have proved to be valuable tools in various fields of science, such as physics, biological engineering, mechanics, artificial intelligence, chemical engineering, etc. (see [1–5]). In [5], Zhang and Tian investigated the following fractional differential system with two nonlinear terms:

$$ \textstyle\begin{cases} D_{0^{+}}^{v}x(t)+f(t,x(t),D_{0^{+}}^{\gamma }x(t))+g(t,x(t))=0, \quad t \in (0,1), n-1< v< n; \\ x^{(i)}(0)=0,\quad i=0,1,2,3,\ldots,n-2; \\ D_{0^{+}}^{\beta }x(1)=k(x(1)), \end{cases} $$

where \(n\geq 3, 1\leq \gamma \leq \beta \leq n-2\), \(f:[0,1]\times [0,\infty )\times [0,\infty )\rightarrow [0,\infty )\) is continuous, \(g:[0,1]\times [0,\infty )\rightarrow [0,\infty ) \) is continuous, and \(k:[0,\infty )\rightarrow [0,\infty )\) is continuous. By means of the sum-type mixed monotone operator fixed point theorems, a unique positive solution was obtained, and the authors constructed two monotone iterative sequences to approximate the unique positive solution.

In addition, in order to describe physical processes with discontinuous jumps or abrupt changes, which arise in disease prevention and control, earthquake and shock-absorption systems, and other areas, many researchers have investigated impulsive problems; see [6–12]. Moreover, in recent years, optimal control problems for various kinds of differential equations have attracted many researchers. For a small sample of such work, readers can refer to [13–16]. In [14], Zhang and Yamazaki investigated a class of second order impulsive differential equations given by

$$ \textstyle\begin{cases} -x''(t)=a(t)f(t,x(t),x(t))+u(t),\quad t\in (0,1)\setminus \{t_{1},t_{2},\ldots,t_{m}\}, \\ \Delta x|_{t=t_{k}}=I_{k}(x(t_{k}),x(t_{k})),\quad k=1,2,\ldots,m, \\ x(0)=b_{0},\qquad x'(0)=b. \end{cases} $$

By employing a fixed point theorem for φ-concave-convex mixed monotone operators, the existence and uniqueness of positive solutions to the initial value problem were obtained. In addition, the authors investigated the control problem for positive solutions and proved the existence and stability of an optimal control.

In [16], Benchohra investigated the following Caputo fractional differential equations with impulsive terms:

$$ \textstyle\begin{cases} {}^{C}D^{\gamma }y(t)=f(t,y),\quad t\in J=[0,T], t\neq t_{k}, \\ \Delta y|_{t=t_{k}}=I_{k}(y(t_{k}^{-})),\quad k=1,2,\ldots,m, \\ y(0)=y_{0}, \end{cases} $$

where \(^{C}D^{\gamma }\) is the standard Caputo fractional derivative, \(f:J\times E\rightarrow E\) is a given function, \(I_{k}:E\rightarrow E\), \(k=1,2,\ldots,m\), and \(y_{0}\in E\). By using Mönch's fixed point theorem and the technique of measures of noncompactness, the existence of solutions for a class of initial value problems was investigated in an abstract Banach space.

Inspired by the above literature, in this article we study the existence-uniqueness and optimal control of positive solutions to the following impulsive fractional order differential equation with a control term:

$$ (IP;u) \textstyle\begin{cases} -{_{0}^{C}D}_{t}^{\alpha }x(t)=f(t,x(t),x(t))+u(t), \quad 0< \alpha \leq 1, \\ \Delta x|_{t=t_{k}}=I_{k}(x(t_{k}),x(t_{k})),\quad k=1,2,\ldots,m, \\ x(0)=x_{0}, \end{cases} $$
(1.1)

where \({}_{0}^{C}D_{t}^{\alpha }\) denotes the standard Caputo fractional derivative of order α, \(J=[0,1]\), \(t\in (0,1)\setminus \{t_{1},t_{2},\ldots,t_{m}\}\), \(R^{+}=[0,\infty )\), and \(f\in C[J\times R^{+}\times R^{+},R^{-}]\) with \(R^{-}=(-\infty,0]\). Here u is a given function on \([0,1]\), \(x_{0}>0\), and \(0< t_{1}< t_{2}<\cdots<t_{m}<1\). Moreover, \(\Delta x|_{t=t_{k}}=x(t_{k}^{+})-x(t_{k}^{-})\) is the jump of \(x(t)\) at \(t=t_{k}\), where \(x(t_{k}^{+})\) and \(x(t_{k}^{-})\) are the right and left limits of \(x(t)\) at \(t=t_{k}\), respectively. Also, \(I_{k}\in C[R^{+}\times R^{+},R^{+}]\), \(k=1,2,\ldots,m\). In addition, let \(J_{0}=[0,t_{1}]\), \(J_{1}=(t_{1},t_{2}],\ldots,J_{m-1}=(t_{m-1},t_{m}]\), \(J_{m}=(t_{m},1]\), and \(J'=J\setminus \{t_{1},t_{2},\ldots,t_{m}\}\).

Problem (OP). Find an optimal control \(u^{*}\in \mathcal{U}_{M}\) such that \(\pi (u^{*})=\inf_{u\in \mathcal{U}_{M}}\pi (u)\). Here, \(\mathcal{U}_{M}\) is the set of admissible controls defined by

$$ \mathcal{U}_{M}:=\bigl\{ u\in L^{2}(0,1)|-M \le u\le 0 \text{ a.e. } t \in [0,1]\bigr\} , $$
(1.2)

where M is a positive constant and \(\pi (u)\) is the cost functional. Set

$$ \pi (u):=\frac{1}{2} \int _{0}^{1} \bigl\vert (x-x_{d}) (t) \bigr\vert ^{2}\,dt+x(1)+ \frac{1}{2} \int _{0}^{1} \bigl\vert u(t) \bigr\vert ^{2}\,dt, $$
(1.3)

where \(u\in \mathcal{U}_{M}\) is a control function, x is a positive solution to \((IP;u)\), and \(x_{d}\) is a given desired target profile in \(L^{2}(0,1)\).
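As a purely numerical illustration (not part of the analysis), the cost functional (1.3) can be approximated on a uniform grid by the composite trapezoidal rule; the sampled state x, target xd, and control u below are hypothetical placeholder data.

```python
# Discretized version of the cost functional (1.3):
#   pi(u) = 1/2 * int_0^1 |x - x_d|^2 dt + x(1) + 1/2 * int_0^1 |u|^2 dt
# x, xd, u are lists of samples on a uniform grid over [0, 1].

def trapezoid(values, h):
    """Composite trapezoidal rule for uniformly sampled values."""
    return h * (sum(values) - 0.5 * (values[0] + values[-1]))

def cost(x, xd, u):
    n = len(x) - 1          # number of subintervals
    h = 1.0 / n             # uniform step on [0, 1]
    tracking = trapezoid([(a - b) ** 2 for a, b in zip(x, xd)], h)
    control = trapezoid([v * v for v in u], h)
    return 0.5 * tracking + x[-1] + 0.5 * control

# Hypothetical samples: x identically 1, target also 1, control -1.
x = [1.0] * 11
xd = [1.0] * 11
u = [-1.0] * 11
print(cost(x, xd, u))  # tracking term 0, x(1) = 1, control term 1/2
```

Since \(u\equiv -1\) lies in \(\mathcal{U}_{M}\) for any \(M\geq 1\), this sample is an admissible control in the sense of (1.2).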

To the best of our knowledge, there are few studies that consider the existence-uniqueness and optimal control of positive solutions to Caputo fractional differential equations with impulsive terms. Therefore, it is particularly important to study this kind of equation by means of nonlinear theory, in the sense of minimizing a cost functional, which enriches and extends the existing body of literature. The main features of this article are as follows. Firstly, the equations studied here generalize those of [16], which correspond to the special case \(I_{k}(x(t_{k}),x(t_{k}))=I_{k}(x(t_{k}))\), \(f(t,x(t),x(t))=f(t,x(t))\), and \(u(t)=0\). Secondly, the nonlinear term is mixed monotone, so by means of a fixed point theorem for φ-concave-convex mixed monotone operators we can show the existence and uniqueness of a positive solution. We point out that the conditions imposed in this paper are weaker than those in [7], where the operator is required to be completely continuous. Finally, compared with [15], the optimal control problems for integer order differential equations are extended to fractional differential equations; compared with [5] and [14], we consider fractional differential equations with both impulsive and control terms. It is well known that, in many applications, systems subject to short-term perturbations are described by impulsive fractional differential equations, and, to our knowledge, no existing work studies a similar optimal control problem for fractional differential equations with impulsive terms. Hence our study is new and significant.

The structure of this paper is as follows. In Sect. 2, we briefly review some definitions, concepts, notations, and lemmas in a Banach space partially ordered by a cone. In Sect. 3, the existence and uniqueness of positive solutions are investigated. In Sect. 4, we study the optimal control problem for the fractional differential equation with impulsive terms (1.1). Finally, in Sect. 5, we give a specific example to illustrate our main results.

2 Preliminaries

Let E be a real Banach space, and let \(P\subset E\) be a nonempty closed convex set. Then P is called a cone if it satisfies the following conditions:

\((I_{1})\):

\(x\in P\), \(\lambda \geq 0 \Rightarrow \lambda x \in P\);

\((I_{2})\):

\(x\in P, -x\in P \Rightarrow x=\theta \).

In addition, \((E, \Vert \cdot \Vert )\) is a real Banach space which is partially ordered by a cone \(P\subset E\), that is, \(y-x\in P\) implies that \(x\leq y\). If \(x\leq y\) and \(x\neq y\), then we write \(x< y\) or \(y>x\). We denote the zero element of E by θ. The cone P is called normal if there exists \(M>0\) such that, for all \(x,y\in E\), \(\theta \leq x\leq y\) implies \(\|x\|\leq M\|y\|\); in this case the infimum of such constants M is called the normality constant of P.

Furthermore, for given \(h>\theta \), set \(P_{h}=\{x\in E\mid x\sim h\}\), where ∼ is the equivalence relation defined as follows: for \(x,y\in E\), \(x\sim y\) means that there exist \(\lambda >0\) and \(\mu >0\) such that \(\lambda x\geq y\geq \mu x\).

Throughout this paper, let \(PC[J,R]:=\{x:J\to R \mid x\) is continuous at \(t\ne t_{k}\), left continuous at \(t=t_{k}\), and \(x(t_{k}^{+})\) exists, \(k=1,2,\ldots,m\}\). Then \(PC[J,R]\) is a Banach space with the norm \(\|x\|_{PC}=\sup_{t\in J}|x(t)|\). Set \(H:=L^{2}(J)\) with the usual Hilbert structure, and let \(\|\cdot \|\) denote the norm in H.

Definition 2.1

([8])

The fractional integral of order α of a function f is defined by

$$ {}_{0}I_{t}^{\alpha }f(t)=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}f(s)\,ds, \quad\alpha >0, $$

provided that such an integral exists.

Definition 2.2

([8])

The Caputo fractional derivative of order α of a function f is defined by

$$ {}_{0}^{C}D_{t}^{\alpha }f(t)= \frac{1}{\Gamma (n-\alpha )} \int _{0}^{t}(t-s)^{n-\alpha -1}f^{(n)}(s) \,ds,\quad n=[\alpha ]+1, $$

where \([\alpha ]\) denotes the integer part of the real number α.
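For intuition only, the singular integral in Definition 2.2 (with \(0<\alpha <1\), so \(n=1\)) can be evaluated numerically; the substitution \(s=t-v^{2}\) used below for \(\alpha =\frac{1}{2}\) is an implementation device that removes the weak singularity of the kernel, and the function names are our own.

```python
import math

def caputo_half(fprime, t, n=4000):
    """Numerically approximate the Caputo derivative of order 1/2:
        D^{1/2} f(t) = (1/Gamma(1/2)) * int_0^t (t-s)^{-1/2} f'(s) ds.
    The substitution s = t - v^2 turns the integral into
        int_0^{sqrt(t)} 2 f'(t - v^2) dv, whose integrand is smooth,
    so a plain midpoint rule converges quickly."""
    h = math.sqrt(t) / n
    total = sum(2.0 * fprime(t - ((i + 0.5) * h) ** 2) for i in range(n))
    return total * h / math.gamma(0.5)

# Check against the known formula D^{1/2} t^2 = Gamma(3)/Gamma(5/2) * t^{3/2}.
t = 1.0
approx = caputo_half(lambda s: 2.0 * s, t)   # f(t) = t^2, so f'(s) = 2s
exact = math.gamma(3) / math.gamma(2.5) * t ** 1.5
print(abs(approx - exact))  # small discretization error
```

The closed-form comparison value follows from the standard power-function rule for Caputo derivatives.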

Definition 2.3

([5])

\(A:P\times P\rightarrow P\) is said to be a mixed monotone operator if \(A(x,y)\) is increasing in x and decreasing in y, i.e., \(u_{1}< u_{2}\) and \(v_{1}>v_{2}\) imply \(A(u_{1},v_{1})\leq A(u_{2},v_{2})\). An element \(x\in P\) is called a fixed point of A if \(A(x,x)=x\).

Definition 2.4

([13])

\(A:P\times P\rightarrow P\) is said to be a φ-concave-convex operator if, for any \(t\in (0,1)\), there exists \(\varphi (t)\in (t,1]\) such that \(A(tu,t^{-1}v)\geq \varphi (t)A(u,v)\) for any \(u,v\in P\).

Lemma 2.1

([17])

Let P be a normal cone of a real Banach space E. Also, let \(A:P\times P\rightarrow P\) be a mixed monotone operator. Assume that

\((A_{1})\):

there exists \(h\in P\) with \(h\neq \theta \) such that \(A(h,h)\in P_{h}\);

\((A_{2})\):

\(A:P\times P\rightarrow P\) is a φ-concave-convex operator for any \(u,v\in P\).

Then operator A has a unique fixed point \(x^{*}\) in \(P_{h}\). Moreover, for any initial \(x_{0},y_{0}\in P_{h}\), constructing successively the sequences

$$ {}x_{n}=A(x_{n-1},y_{n-1}),\qquad y_{n}=A(y_{n-1},x_{n-1}),\quad n=1,2, \ldots, $$

we have \(\|x_{n}-x^{*}\|\rightarrow 0\) and \(\|y_{n}-x^{*}\|\rightarrow 0\) as \(n\rightarrow \infty \).
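To make the iteration in Lemma 2.1 concrete, here is a hedged numerical sketch on the scalar cone \(P=[0,\infty )\) with the hypothetical mixed monotone operator \(A(x,y)=(1+x)^{1/2}+(1+y)^{-1/4}\) (increasing in x, decreasing in y, and φ-concave-convex with \(\varphi (t)=t^{1/2}\)); the two sequences squeeze toward the unique fixed point.

```python
def A(x, y):
    """Scalar mixed monotone operator: increasing in x, decreasing in y."""
    return (1.0 + x) ** 0.5 + (1.0 + y) ** -0.25

# Successive sequences from Lemma 2.1:
#   x_n = A(x_{n-1}, y_{n-1}),  y_n = A(y_{n-1}, x_{n-1}).
x, y = 0.0, 10.0           # arbitrary starting pair with x_0 <= y_0
for _ in range(100):
    x, y = A(x, y), A(y, x)

print(x, y)                # both sequences approach the same limit x*
print(abs(A(x, x) - x))    # residual of the fixed point equation A(x*, x*) = x*
```

This is the same type of operator as in the example of Sect. 5, so the observed convergence is consistent with the lemma.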

3 Initial value problem

In this section, we show the existence-uniqueness of the positive solution to \((IP;u)\) by applying a fixed point theorem for mixed monotone operators (Lemma 2.1). Throughout this section, let \(\widetilde{P}=\{u\in PC[J,R]\mid u(t)\geq 0, \forall t\in J\}\). Obviously, \(\widetilde{P}\) is a normal cone in \(PC[J,R]\); moreover, the normality constant of \(\widetilde{P}\) is 1.

Definition 3.1

([14])

Let \(v\in H\) and M be a given constant. Then a function \(u\in PC[J,R]\cap C^{1}[J',R]\) is called a solution to \((IP;v)\) on J if it satisfies (1.1).

Lemma 3.1

Assume that \(f:J\times R^{+}\times R^{+}\rightarrow R\) is continuous and \(u\in H\). Then \(x\in PC{[J,R]}\cap C^{1}[J',R]\) is a solution to \((IP;u)\) on J if and only if \(x\in PC[J,R]\) is a solution to the following integral equation:

$$ x(t)=\textstyle\begin{cases} {} x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{ \alpha -1}[f(s,x(s),x(s))+u(s)]\,ds,\quad t\in J_{0}; \\ x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}[f(s,x(s),x(s))+u(s)]\,ds+I_{1}(x(t_{1}),x(t_{1})),\\ \quad t\in J_{1}; \\ x(0)-\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{\alpha -1}[f(s,x(s),x(s)) +u(s)]\,ds\\ \quad {}+\sum_{0< t_{k}< t}I_{k}(x(t_{k}),x(t_{k})),\quad t\in J_{k}. \end{cases} $$
(3.1)

Proof

If \(t\in J_{0}\), applying the fractional integral \({}_{0}I_{t}^{\alpha }\) to both sides of the first equation of (1.1), we obtain

$$\begin{aligned} &{-}_{0}I_{t}^{\alpha }{_{0}^{C}D_{t}^{\alpha }}x(t)=-_{0}I_{t}^{ \alpha }{_{0}I_{t}^{1-\alpha }}x^{\prime }(t)=- \int _{0}^{t}x^{\prime }(s)\,ds, \\ &-\int _{0}^{t}x^{\prime }(s) \,ds={}_{0}I_{t}^{\alpha }\bigl[f\bigl(t,x(t),x(t) \bigr)+u(t)\bigr]\\ &\phantom{- \int _{0}^{t}x^{\prime }(s) \,ds}{} = \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$

Then

$$ x(t)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$

If \(t\in J_{1}\), applying \({}_{0}I_{t}^{\alpha }\) to both sides of the first equation of (1.1), we have

$$ -_{0}I_{t}^{\alpha }{_{0}^{C}D_{t}^{\alpha }}x(t)={}_{0}I_{t}^{\alpha } \bigl[f\bigl(t,x(t),x(t)\bigr)+u(t)\bigr]=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$

Since \(x(t)\) has a jump discontinuity at \(t=t_{1}\in (0, t)\), we get

$$ -_{0}I_{t}^{\alpha }{_{0}^{C}D_{t}^{\alpha }}x(t)= -{_{0}I_{t}^{ \alpha }} {_{0}I_{t}^{1-\alpha }}x^{\prime }(t)=- \int _{0}^{t}x^{\prime }(s)\,ds=- \int _{0}^{t_{1}}x^{\prime }(s)\,ds- \int _{t_{1}}^{t}x^{\prime }(s)\,ds $$

and

$$\begin{aligned} {-x\bigl(t_{1}^{-}\bigr)+x(0)-x(t)+x\bigl(t_{1}^{+} \bigr)}&={_{0}I_{t}^{ \alpha }\bigl[f\bigl(t,x(t),x(t) \bigr)+u(t)\bigr]} \\ &=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$

Furthermore, we obtain

$$ x(t)=x(0)+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)- \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$

Similarly, if \(t\in J_{k}\), we have

$$\begin{aligned} {}_{0}I_{t}^{\alpha }{_{0}^{C}D_{t}^{\alpha }}x(t)&={}_{0}I_{t}^{ \alpha }{_{0}I_{t}^{1-\alpha }}x^{\prime }(t)= \int _{0}^{t}x^{\prime }(s)\,ds \\ &= \int _{0}^{t_{1}}x^{\prime }(s)\,ds+ \int _{t_{1}}^{t_{2}}x^{\prime }(s)\,ds+\cdots+ \int _{t_{k}}^{t}x^{\prime }(s)\,ds \\ &=x\bigl(t_{1}^{-}\bigr)-x(0)+x\bigl(t_{2}^{-} \bigr)-x\bigl(t_{1}^{+}\bigr)+\cdots +x(t)-x\bigl(t_{k}^{+} \bigr), \end{aligned}$$

while the first equation of (1.1) gives

$$ - \int _{0}^{t}x^{\prime }(s) \,ds={}_{0}I_{t}^{\alpha }\bigl[f\bigl(t,x(t),x(t) \bigr)+u(t)\bigr]= \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds $$

and

$$\begin{aligned} &{-}\bigl[x\bigl(t_{1}^{-}\bigr)-x(0)+x\bigl(t_{2}^{-} \bigr)-x\bigl(t_{1}^{+}\bigr)+\cdots+x(t)-x \bigl(t_{k}^{+}\bigr)\bigr]\\ &\quad = \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. \end{aligned}$$

Finally, we get

$$\begin{aligned} x(t)={}&x(0)+\bigl(x\bigl(t_{1}^{+}\bigr)-x \bigl(t_{1}^{-}\bigr)\bigr)+\bigl(x\bigl(t_{2}^{+} \bigr)-x\bigl(t_{2}^{-}\bigr)\bigr)+\cdots+\bigl(x \bigl(t_{k}^{+}\bigr)-x\bigl(t_{k}^{-} \bigr)\bigr) \\ &{}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&x(0)+I_{1}\bigl(x(t_{1}),x(t_{1}) \bigr)+I_{2}\bigl(x(t_{2}),x(t_{2})\bigr)+ \cdots+I_{k}\bigl(x(t_{k}),x(t_{k})\bigr) \\ &{}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds+ \sum_{0< t_{k}< t}I_{k} \bigl(x(t_{k}),x(t_{k})\bigr). \end{aligned}$$

This shows that every solution of (1.1) satisfies the integral equation (3.1).

Conversely, we prove that every solution of (3.1) satisfies the differential system (1.1).

If \(t\in J_{0}\), let \(t=0\), by (3.1) we get \(x(0)=x_{0}\).

If \(t\in J_{1}\), applying the Caputo derivative \({}_{0}^{C}D_{t}^{\alpha }\) to both sides of (3.1), we have

$$\begin{aligned} _{0}^{C}D_{t}^{\alpha }x(t)={}&{}_{0}^{C}D_{t}^{\alpha } \biggl\{ x_{0}+I_{1}\bigl(x(t_{1}),x(t_{1}) \bigr)- \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \biggr\} \\ ={}&{}_{0}^{C}D_{t}^{\alpha }x_{0}+{_{0}^{C}D}_{t}^{\alpha }I_{1} \bigl(x(t_{1}),x(t_{1})\bigr) \\ &{}-{_{0}^{C}D}_{t}^{\alpha } \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\bigl[f \bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds \\ ={}&-f\bigl(t,x(t),x(t)\bigr)-u(t). \end{aligned}$$

In the first case of (3.1), letting \(t\rightarrow t_{1}^{-}\), we have

$$ x\bigl(t_{1}^{-}\bigr)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{1}}(t_{1}-s)^{ \alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr]\,ds. $$

In the second case of (3.1), letting \(t\rightarrow t_{1}^{+}\), we have

$$ x\bigl(t_{1}^{+}\bigr)=x(0)-\frac{1}{\Gamma (\alpha )} \int _{0}^{t_{1}}(t_{1}-s)^{ \alpha -1}\bigl[f\bigl(s,x(s),x(s)\bigr)+u(s)\bigr] \,ds+I_{1}\bigl(x(t_{1}),x(t_{1})\bigr), $$

and then we know

$$ I_{1}\bigl(x(t_{1}),x(t_{1})\bigr)=x \bigl(t_{1}^{+}\bigr)-x\bigl(t_{1}^{-} \bigr). $$

Hence, when \(t\in J_{1}\), (3.1) satisfies every equation of (1.1). Likewise, if \(t\in J_{k}\), (3.1) satisfies every equation of (1.1) too; that is, (3.1) and (1.1) are completely equivalent. This completes the proof. □

For convenience, define the operator \(A:PC[J,R]\times PC[J,R]\rightarrow PC[J,R]\) by

$$ A(x,y) (t)=x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\bigl[f \bigl(s,x(s),y(s)\bigr)+u(s)\bigr]\,ds+ \sum_{0< t_{k}< t}I_{k} \bigl(x(t_{k}),y(t_{k})\bigr). $$
(3.2)

Theorem 3.1

Assume that \(M>0\) and

\((H_{1})\):

\(f:J\times R^{+}\times R^{+}\rightarrow R^{-}\) is continuous; \(f(t,x,y)\) is monotone decreasing in x for each \(t\in J\) and \(y\in R^{+}\), and monotone increasing in y for each \(t\in J\) and \(x\in R^{+}\); furthermore, \(f(t,\frac{1}{2},1)<0\) for all \(t\in J\);

\((H_{2})\):

for each \(k=1,2,\ldots,m\), \(I_{k}\in C[R^{+}\times R^{+},R^{+}]\) and \(I_{k}\geq 0\); \(I_{k}(x,y)\) is monotone increasing in x for each \(y\in R^{+}\) and monotone decreasing in y for each \(x\in R^{+}\);

\((H_{3})\):

for all \(\gamma \in (0,1)\) and \(x,y\in R^{+}\), there exists \(\varphi _{1}(\gamma )\in (\gamma,1]\) such that

$$ f\bigl(t,\gamma x,\gamma ^{-1}y\bigr)\leq \varphi _{1}( \gamma )f(t,x,y); $$

for all \(\gamma \in (0,1)\), \(\forall x,y\in R^{+}\), and \(k=1,2,\ldots,m\), there exists \(\varphi _{2}(\gamma )\in (\gamma,1]\) such that

$$ I_{k}\bigl(\gamma x,\gamma ^{-1}y\bigr)\geq \varphi _{2}(\gamma )I_{k}(x,y). $$

Then, for all \(u\in H\) with \(-M\leq u(t)\leq 0\), the problem \((IP;u)\) has a unique positive solution \(x^{*}\in \widetilde{P}_{h}\), where \(h(t)=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{ \alpha -1}\,ds\) and \(\widetilde{P}_{h}=\{u\in \widetilde{P}\mid u\sim h\}\).

Proof

From (3.2), \((H_{1})\), and \((H_{2})\), we have \((A(x,y))(t)\geq 0\) for all \(x,y\in \widetilde{P}\), that is, \(A:\widetilde{P}\times \widetilde{P}\rightarrow \widetilde{P}\). Moreover, the operator \(A:\widetilde{P}\times \widetilde{P}\rightarrow \widetilde{P}\) is mixed monotone. Now, we show that A is a φ-concave-convex operator. Put \(\varphi (\gamma )=\min \{\varphi _{1}(\gamma ),\varphi _{2}(\gamma )\}\) for \(\gamma \in (0,1)\). Since \(\varphi _{1}(\gamma )\in (\gamma,1]\) and \(\varphi _{2}(\gamma )\in (\gamma,1]\), it is easy to see that \(\gamma \leq \varphi (\gamma )\leq 1\). Hence, from \((H_{1})\)–\((H_{3})\) and \(u(t)\leq 0\), for all \(\gamma \in (0,1)\) and \(x,y\in \widetilde{P}\), we obtain

$$\begin{aligned} A\bigl(\gamma x,\gamma ^{-1}y\bigr) (t) ={}&x_{0}- \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{(\alpha -1)}\bigl[f\bigl(s, \gamma x(s),\gamma ^{-1}y(s)\bigr)+u(s)\bigr]\,ds \\ &{}+\sum_{0< t_{k}< t}I_{k}\bigl(\gamma x(t_{k}),\gamma ^{-1}y(t_{k})\bigr) \\ \geq {}& x_{0}-\frac{\varphi _{1}(\gamma )}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{( \alpha -1)}\bigl[f \bigl(s,x(s),y(s)\bigr)+u(s)\bigr]\,ds \\ &{}+\varphi _{2}(\gamma )\sum_{0< t_{k}< t}I_{k} \bigl(x(t_{k}),y(t_{k})\bigr) \\ \geq{} &\varphi (\gamma )A(x,y) (t),\quad \forall t\in J, \end{aligned}$$

that is, \(A(\gamma x,\gamma ^{-1}y)(t)\geq \varphi (\gamma )A(x,y)(t)\) for \(\forall x,y\in \tilde{P}\) and \(\gamma \in (0,1)\).

Let \(h(t):=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{ \alpha -1}\,ds=\frac{1}{2}+\frac{t^{\alpha }}{\Gamma (\alpha +1)}, \forall t\in J\). Then we can easily obtain that \(\frac{1}{2}\leq h(t)\leq \frac{1}{2}+\frac{1}{\Gamma (\alpha +1)}, \forall t\in J\). Set

$$ r_{1}=\min_{t\in J}\biggl[-f\biggl(t, \frac{1}{2},\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)}\biggr) \biggr],\qquad r_{2}=\max_{t\in J}\biggl[-f\biggl(t, \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr)\biggr], $$

then \(0\leq r_{1}\leq r_{2}\). Furthermore, from \(-M\leq u\leq 0\), it is easy to see that

$$\begin{aligned} A(h,h) (t)={}&x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\bigl[f \bigl(s,h(s),h(s)\bigr)+u(s)\bigr]\,ds+\sum_{0< t_{k}< t}I_{k} \bigl(h(t_{k}),h(t_{k})\bigr) \\ \geq{} & x_{0}+\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\biggl[-f\biggl(s, \frac{1}{2},\frac{1}{2}+\frac{1}{\Gamma (\alpha +1)}\biggr)\biggr]\,ds \\ \geq{} &x_{0}+\frac{r_{1}}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\,ds \\ \geq{} &\frac{2\Gamma (\alpha +1)}{2+\Gamma (\alpha +1)} (x_{0}+r_{1} )h(t) \\ ={}&r_{3}h(t), \quad\forall t\in J, \end{aligned}$$

where \(r_{3}=\frac{2\Gamma (\alpha +1)}{2+\Gamma (\alpha +1)} (x_{0}+r_{1} )\). Furthermore,

$$\begin{aligned} A(h,h) (t)={}&x_{0}-\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\bigl[f \bigl(s,h(s),h(s)\bigr)+u(s)\bigr]\,ds+\sum_{0< t_{k}< t}I_{k} \bigl(h(t_{k}),h(t_{k})\bigr) \\ \leq {}& x_{0}+\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\biggl[-f\biggl(s, \frac{1}{2}+\frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr)\biggr]\,ds \\ &{}+\frac{M}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}\,ds+\sum _{0< t_{k}< t}I_{k}\biggl(\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)}, \frac{1}{2}\biggr) \\ \leq {}&x_{0}+\frac{r_{2}}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{ \alpha -1}\,ds+Mh(t)+\sum _{0< t_{k}< 1}I_{k}\biggl(\frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2}\biggr) \\ \leq {}&x_{0}+r_{2}h(t)+Mh(t)+\sum _{0< t_{k}< 1}I_{k} \biggl( \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2} \biggr) \\ \leq {}&2 \biggl(x_{0}+r_{2}+M+\sum _{0< t_{k}< 1}I_{k} \biggl( \frac{1}{2}+ \frac{1}{\Gamma (\alpha +1)},\frac{1}{2} \biggr) \biggr)h(t) \\ ={}& r_{4}h(t),\quad \forall t\in J, \end{aligned}$$

where \(r_{4}=2 (x_{0}+r_{2}+M+\sum_{0< t_{k}<1}I_{k} ( \frac{1}{2}+\frac{1}{\Gamma (\alpha +1)},\frac{1}{2} ) )\).

From the above, we know that \(r_{3}h\leq A(h,h)\leq r_{4}h\), that is, \(A(h,h)\in \widetilde{P}_{h}\). Therefore, by Lemma 2.1, the operator A has a unique fixed point \(x^{*}\) in \(\widetilde{P}_{h}\), which is the unique positive solution to \((IP;u)\) on J. The proof is complete. □

Corollary 3.1

Suppose that

\((H_{1}')\):

\(f:J\times R^{+}\rightarrow (-\infty,0]\) is continuous; \(f(t,x)\) is nondecreasing in x for each \(t\in J\); moreover, \(f(t,\frac{1}{2})<0\) for all \(t\in J\);

\((H_{2}')\):

for each \(k=1,2,\ldots,m\), \(I_{k}:R^{+}\rightarrow R^{+}\), and \(I_{k}(x)\) is nondecreasing in x;

\((H_{3}')\):

for all \(\gamma \in (0,1)\) and \(\forall x\in R^{+}\), there exists \(\varphi _{1}(\gamma )\in (\gamma,1]\) such that

$$ f(t,\gamma x)\leq \varphi _{1}(\gamma )f(t,x); $$

for all \(\gamma \in (0,1)\), \(x\in R^{+}\), and \(\forall k=1,2,\ldots,m\), there exists \(\varphi _{2}(\gamma )\in (\gamma,1]\) such that

$$ I_{k}(\gamma x)\geq \varphi _{2}(\gamma )I_{k}(x). $$

Then, for all \(u\in H\) with \(-M\leq u(t)\leq 0\), the following initial value problem

$$ (IP_{1};u)\textstyle\begin{cases} -{_{0}^{C}D}_{t}^{\alpha }x(t)=f(t,x(t))+u(t),\quad t\in (0,1)\setminus \{t_{1},t_{2},\ldots,t_{m}\}, \\ \Delta x|_{t=t_{k}}=I_{k}(x(t_{k})),\quad k=1,2,\ldots,m, \\ x(0)=x_{0}, \end{cases} $$

has a unique positive solution \(x^{*}\in \widetilde{P}_{h}\) on J, where \(h(t)=\frac{1}{2}+\frac{1}{\Gamma (\alpha )}\int _{0}^{t}(t-s)^{ \alpha -1}\,ds\).

4 Optimal control problem (OP)

In this section, in order to investigate the optimal control problem \((OP)\) to \((IP;u)\), we assume that the following additional conditions hold:

\((H_{4})\):

There exist two constants \(C_{f}>0\) and \(C_{k}>0\) such that

$$\begin{aligned} &\bigl\vert f(s,u,u)-f(s,v,v) \bigr\vert \leq C_{f} \vert u-v \vert ,\quad \forall s\in J, u,v \in R^{+}; \\ &\bigl\vert I_{k}(u,u)-I_{k}(v,v) \bigr\vert \leq C_{k} \vert u-v \vert ,\quad \forall u,v\in R^{+}, \forall k=1,2,\ldots,m. \end{aligned}$$
\((H_{5})\):

\(x_{d}\) is a given desired target profile in H.

Lemma 4.1

Let \(\{u_{n}\}\subset H\), and let \(\Phi:H\rightarrow C[J,R]\) be the integral operator defined by

$$ (\Phi z) (t):=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}z(s)\,ds,\quad \forall z\in H\textit{ and }t\in J. $$
(4.1)

Suppose that \(u_{n}\rightarrow u\) weakly in H as \(n\rightarrow \infty \), where \(u\in H\). Then we have \(\Phi u_{n}\rightarrow \Phi u\) in \(C[J,R]\) as \(n\rightarrow \infty \).

Proof

Since \(u_{n}\rightarrow u\) weakly in H, it is easy to see that, for each \(t\in J\),

$$ \Phi u_{n}(t)=\frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}u_{n}(s) \,ds \rightarrow \frac{1}{\Gamma (\alpha )} \int _{0}^{t}(t-s)^{\alpha -1}u(s)\,ds= \Phi u(t). $$

Moreover,

$$\begin{aligned} \bigl\vert (\Phi u_{n}) (t)-(\Phi u_{n}) (\tau ) \bigr\vert &=\frac{1}{\Gamma {(\alpha )}} \biggl\vert \int _{0}^{t}(t-s)^{\alpha -1}u_{n}(s) \,ds- \int _{0}^{\tau }(\tau -s)^{ \alpha -1}u_{n}(s) \,ds \biggr\vert \\ &\le \frac{ \Vert u_{n} \Vert _{H}}{\Gamma {(\alpha )}}\biggl( \int _{0}^{\tau } \bigl\vert (t-s)^{ \alpha -1}-( \tau -s)^{\alpha -1} \bigr\vert \,ds +\frac{(t-\tau )^{\alpha }}{\alpha } \biggr). \end{aligned}$$

Since \((t-s)^{\alpha -1}-(\tau -s)^{\alpha -1}\rightarrow 0\) and \(\frac{(t-\tau )^{\alpha }}{\alpha }\rightarrow 0\) as \(t\rightarrow \tau \), and \(\{\|u_{n}\|_{H}\}\) is bounded, we get

$$ \bigl\vert (\Phi u_{n}) (t)-(\Phi u_{n}) (\tau ) \bigr\vert \rightarrow 0 \quad \text{as }t\rightarrow \tau, \text{ uniformly in } n. $$

This shows that \(\{\Phi u_{n}\}\subset C[J,R]\) is equicontinuous. Combining this with the pointwise convergence above, by means of the Arzelà–Ascoli theorem it is easy to see that Lemma 4.1 holds. □
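As a side illustration (an implementation sketch, not part of the proof), the operator Φ from (4.1) can be evaluated numerically for \(\alpha =\frac{1}{2}\); the substitution \(s=t-v^{2}\) below is a hypothetical quadrature device that removes the weak singularity of the kernel.

```python
import math

def Phi_half(z, t, n=4000):
    """Approximate (Phi z)(t) = (1/Gamma(1/2)) * int_0^t (t-s)^{-1/2} z(s) ds.
    Substituting s = t - v^2 gives int_0^{sqrt(t)} 2 z(t - v^2) dv,
    whose integrand is smooth, so a plain midpoint rule applies."""
    h = math.sqrt(t) / n
    total = sum(2.0 * z(t - ((i + 0.5) * h) ** 2) for i in range(n))
    return total * h / math.gamma(0.5)

# Check: for z = 1, (Phi 1)(t) = t^{1/2} / Gamma(3/2) = 2 * sqrt(t / pi).
t = 0.81
approx = Phi_half(lambda s: 1.0, t)
exact = 2.0 * math.sqrt(t / math.pi)
print(abs(approx - exact))  # small discretization error
```

The comparison value is the standard fractional integral of a constant, consistent with Definition 2.1.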

Lemma 4.2

Suppose that \((H_{1})\)–\((H_{5})\) hold. Let \(\{u_{n}\}\subset \mathcal{U}_{M}\) and \(u\in \mathcal{U}_{M}\), and assume \(u_{n}\rightarrow u\) weakly in H. Then the unique positive solution \(x_{n}\) of \((IP;u_{n})\) on J converges to the unique positive solution x of \((IP;u)\). That is, in the Banach space \(PC[J,R]\), we have

$$ x_{n}\rightarrow x \quad \textit{as }n\rightarrow \infty. $$
(4.2)

Proof

Obviously, \(x_{n}\) is a solution of \((IP;u_{n})\) if and only if

$$\begin{aligned} x_{n}(t)={}&x_{0}-\frac{1}{\Gamma (\alpha )} \int ^{t}_{0}(t-s)^{\alpha -1}f \bigl(s,x_{n}(s),x_{n}(s)\bigr)\,ds \\ &{}-\frac{1}{\Gamma (\alpha )} \int ^{t}_{0}(t-s)^{\alpha -1}u_{n}(s) \,ds+ \sum_{0< t_{k}< t}I_{k}\bigl(x_{n}(t_{k}),x_{n}(t_{k}) \bigr),\quad \forall t\in J. \end{aligned}$$

Let \(t\in J_{0}=[0,t_{1}]\subset J\), we get

$$\begin{aligned} \bigl\vert x_{n}(t)-x(t) \bigr\vert \leq {}&\frac{1}{\Gamma (\alpha )} \biggl\vert \int ^{t}_{0}(t-s)^{ \alpha -1}f \bigl(s,x_{n}(s),x_{n}(s)\bigr)\,ds- \int ^{t}_{0}(t-s)^{\alpha -1}f\bigl(s,x(s),x(s) \bigr)\,ds \biggr\vert \\ &{}+\frac{1}{\Gamma (\alpha )} \biggl\vert \int ^{t}_{0}(t-s)^{\alpha -1}u_{n}(s) \,ds- \int ^{t}_{0}(t-s)^{\alpha -1}u(s)\,ds \biggr\vert \\ \leq{}& \frac{C_{f}}{\Gamma (\alpha )} \int ^{t}_{0}(t-s)^{\alpha -1} \bigl\vert x_{n}(s)-x(s) \bigr\vert \,ds+ \bigl\vert (\Phi u_{n}) (t)-(\Phi u) (t) \bigr\vert \\ \leq{}& \int ^{t}_{0}\frac{C_{f}(t-s)^{\alpha -1}}{\Gamma (\alpha )} \bigl\vert x_{n}(s)-x(s) \bigr\vert \,ds+ \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad \forall t\in J_{0}. \end{aligned}$$

By using the Gronwall inequality, we have

$$\begin{aligned} \bigl\vert x_{n}(t)-x(t) \bigr\vert &\le e^{\int _{0}^{t} \frac{C_{f}(t-s)^{\alpha -1}}{\Gamma (\alpha )}\,ds} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \\ &\le e^{\frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \end{aligned}$$

and

$$\begin{aligned} \int _{0}^{t}\frac{C_{f}(t-s)^{\alpha -1}}{\Gamma (\alpha )} \bigl\vert x_{n}(t)-x(t) \bigr\vert \,ds& \leq e^{\frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \int _{0}^{t} \frac{C_{f}(t-s)^{\alpha -1}}{\Gamma (\alpha )}\,ds \\ &\le \frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}e^{ \frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \\ &=N_{0} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad \forall t\in J_{0},n=1,2, \ldots. \end{aligned}$$

Hence,

$$\begin{aligned} \bigl\vert x_{n}(t)-x(t) \bigr\vert &\leq \frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}e^{ \frac{C_{f}T^{\alpha }}{\Gamma (\alpha +1)}} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]}+ \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \\ &=N_{1} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad \forall t\in J_{0}, n=1,2, \ldots. \end{aligned}$$

Moreover, from \((H_{4})\), we obtain

$$\begin{aligned} \bigl\vert x_{n}\bigl(t^{+}_{1}\bigr)-x \bigl(t^{+}_{1}\bigr) \bigr\vert &= \bigl\vert x_{n}(t_{1})+I_{1}\bigl(x_{n}(t_{1}),x_{n}(t_{1}) \bigr)-x(t_{1})-I_{1}\bigl(x(t_{1}),x(t_{1})\bigr) \bigr\vert \\ &\leq \bigl\vert x_{n}(t_{1})-x(t_{1}) \bigr\vert + \bigl\vert I_{1}\bigl(x_{n}(t_{1}),x_{n}(t_{1}) \bigr)-I_{1}\bigl(x(t_{1}),x(t_{1})\bigr) \bigr\vert \\ &\leq (1+C_{1}) \bigl\vert x_{n}(t_{1})-x(t_{1}) \bigr\vert \\ &\leq (1+C_{1})N_{1} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \\ &=N'_{1} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad \forall n=1,2,\ldots, \end{aligned}$$

and for \(\forall t\in J_{1}=(t_{1},t_{2}]\), we get

$$\begin{aligned} & \bigl\vert x_{n}(t)-x(t) \bigr\vert \\ &\quad\leq \frac{ \vert \int ^{t}_{0}(t-s)^{\alpha -1}f(s,x_{n}(s),x_{n}(s))\,ds-\int ^{t}_{0}(t-s)^{\alpha -1}f(s,x(s),x(s))\,ds \vert }{\Gamma (\alpha )} \\ &\qquad{}+\frac{1}{\Gamma (\alpha )} \biggl\vert \int ^{t}_{0}(t-s)^{\alpha -1}u_{n}(s) \,ds- \int ^{t}_{0}(t-s)^{\alpha -1}u(s)\,ds \biggr\vert + \bigl\vert I_{1}\bigl(x_{n}(t_{1}),x_{n}(t_{1}) \bigr)-I_{1}\bigl(x(t_{1}),x(t_{1})\bigr) \bigr\vert \\ &\quad\leq \frac{C_{f}}{\Gamma (\alpha )} \int ^{t}_{0}(t-s)^{\alpha -1} \bigl\vert x_{n}(s)-x(s) \bigr\vert \,ds+ \bigl\vert (\Phi u_{n}) (t)-(\Phi u) (t) \bigr\vert +C_{1} \bigl\vert x_{n}(t_{1})-x(t_{1}) \bigr\vert \\ &\quad\leq \int ^{t}_{0}\frac{C_{f}(t-s)^{\alpha -1}}{\Gamma (\alpha )} \bigl\vert x_{n}(s)-x(s) \bigr\vert \,ds+(1+C_{1}N_{1}) \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]} \\ &\quad\leq N_{0} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]}+(1+C_{1}N_{1}) \Vert \Phi u_{n}- \Phi u \Vert _{C[J,R]}. \end{aligned}$$

Taking \(N_{2}=N_{0}+1+C_{1}N_{1}>0\), we obtain \(|x_{n}(t)-x(t)|\leq N_{2}\|\Phi u_{n}-\Phi u\|_{C[J,R]}\), \(\forall t\in J_{1}, n=1,2,\ldots \) . Also, from \((H_{4})\), we get

$$\begin{aligned} \bigl\vert x_{n}\bigl(t^{+}_{2}\bigr)-x \bigl(t^{+}_{2}\bigr) \bigr\vert &\leq \bigl\vert x_{n}(t_{2})-x(t_{2}) \bigr\vert + \bigl\vert I_{2}\bigl(x_{n}(t_{2}),x_{n}(t_{2}) \bigr)-I_{2}\bigl(x(t_{2}),x(t_{2})\bigr) \bigr\vert \\ &\leq (1+C_{2}) \bigl\vert x_{n}(t_{2})-x(t_{2}) \bigr\vert \leq N'_{2} \Vert \Phi u_{n}- \Phi u \Vert _{C[J,R]},\quad n=1,2,\ldots. \end{aligned}$$

Repeating this process, we obtain positive constants \(N_{k}>0\), \(N'_{k}>0\) such that

$$\begin{aligned} &\bigl\vert x_{n}(t)-x(t) \bigr\vert \leq N_{k} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad \forall t \in J_{k-1},k=1,2,\ldots,m+1, \\ &\bigl\vert x_{n}\bigl(t^{+}_{k}\bigr)-x \bigl(t^{+}_{k}\bigr) \bigr\vert \leq N'_{k} \Vert \Phi u_{n}-\Phi u \Vert _{C[J,R]},\quad n=1,2,\ldots. \end{aligned}$$

Finally, set \(N=\max \{N_{1},N'_{1},N_{2},N'_{2},\ldots,N_{m},N'_{m},N_{m+1}\}\). Then we have \(\|x_{n}-x\|_{PC}\leq N\|\Phi u_{n}-\Phi u\|_{C[J,R]}\), \(n=1,2,\ldots \) . Since \(u_{n}\rightarrow u\) weakly as \(n\rightarrow \infty \), Lemma 4.1 gives \(\Phi u_{n}\rightarrow \Phi u\) in \(C[J,R]\) as \(n\rightarrow \infty \). Hence, we obtain \(x_{n}\rightarrow x\) in \(PC[J,R]\) as \(n\rightarrow \infty \). □

Theorem 4.1

Assume that \((H_{1})\)–\((H_{5})\) hold. Then the optimal control problem \((OP)\) for \((IP;u)\) has at least one optimal control \(u^{\ast }\in \mathcal{U}_{M}\) such that \(\pi (u^{\ast })=\inf_{u\in \mathcal{U}_{M}}\pi (u)\), where \(\mathcal{U}_{M}\) is the admissible control set defined by (1.2) and \(\pi (\cdot )\) is the cost functional given in (1.3).

Proof

Let \(\{u_{n}\}\subset \mathcal{U}_{M}\) be a minimizing sequence, so that \(\lim_{n\rightarrow \infty }\pi (u_{n})=\inf_{u\in \mathcal{U}_{M}} \pi (u)\). Since \(\{u_{n}\}\) is a bounded sequence in H, there exist a subsequence \(\{n_{k}\}\subset \{n\}\) and \(u^{\ast }\in \mathcal{U}_{M}\) such that \(u_{n_{k}}\rightarrow u^{\ast }\) weakly in H as \(k\rightarrow \infty \). In addition, let \(x_{n_{k}}\) be the unique positive solution to \((IP;u_{n_{k}})\) on J. Then, from Lemma 4.2, we get \(x_{n_{k}}\rightarrow x\) in \(PC[J,R]\) as \(k\rightarrow \infty\), where x is the unique positive solution to \((IP;u^{\ast })\). Finally, by the weak lower semicontinuity of the H-norm, it is obvious that

$$ \lim_{k\rightarrow \infty }\pi (u_{n_{k}})=\inf_{u\in \mathcal{U}_{M}} \pi (u)\geq \pi \bigl(u^{\ast }\bigr), $$

that is, \(u^{\ast }\in \mathcal{U}_{M}\) is an optimal control to \((OP)\). □

For convenience, we give the following condition:

\((H_{4}')\) There exist two constants \(C_{f}>0\) and \(C_{k}>0\) such that

$$\begin{aligned} &\bigl\vert f(s,u)-f(s,v) \bigr\vert \leq C_{f} \vert u-v \vert ,\quad \forall s\in J, u,v\in R^{+}; \\ &\bigl\vert I_{k}(u)-I_{k}(v) \bigr\vert \leq C_{k} \vert u-v \vert ,\quad \forall u,v\in R^{+}, \forall k=1,2, \ldots,m. \end{aligned}$$

Corollary 4.1

Suppose that conditions \((H_{1}')\)–\((H_{4}')\) and \((H_{5})\) hold. Then the optimal control problem for \((IP_{1};u)\) has at least one optimal control \(u^{\ast }\in \mathcal{U}_{M}\) such that \(\inf_{u\in \mathcal{U}_{M}}\pi (u)=\pi (u^{\ast })\), where \(\mathcal{U}_{M}\) is the admissible control set given by (1.2) and \(\pi (\cdot )\) is the cost functional defined by (1.3).

5 Application

In this section, in order to verify the validity of our conclusions, we investigate the following specific initial value problem for a fractional order impulsive differential system:

$$ (IP;u) \textstyle\begin{cases} -{_{0}^{C}D}_{t}^{\frac{1}{2}}x(t)=-2[(1+x(t))^{ \frac{1}{2}}+(1+x(t))^{-\frac{1}{4}}]+u(t),\quad t\in (0,1),t\neq \frac{1}{3}, \\ \Delta x|_{t=\frac{1}{3}}=(1+x(\frac{1}{3}))^{\frac{1}{2}}+(1+x( \frac{1}{3}))^{-\frac{1}{4}}, \\ x(0)=1, \end{cases} $$
(5.1)

Conclusion: The fractional order impulsive initial value problem (5.1) has a unique positive solution, which is continuously differentiable on \([0,\frac{1}{3})\cup (\frac{1}{3},1]\). In addition, the impulsive initial value problem (5.1) has at least one optimal control.

Proof

Let \(J=[0,1]\), \(t_{1}=\frac{1}{3}\), and \(f(t,x,y):=f(x,y)=-2(1+x)^{\frac{1}{2}}-2(1+y)^{-\frac{1}{4}}\). Evidently, the two-variable function \(f(x,y)\) is decreasing in x and increasing in y. Setting \(I_{1}(x,y)=(1+x)^{\frac{1}{2}}+(1+y)^{-\frac{1}{4}}\), we see that \(I_{1}(x,y)\) is increasing in x for \(y\geq 0\) and decreasing in y for \(x\geq 0\).

Set \(\varphi (\gamma )=\gamma ^{\frac{1}{2}}, \gamma \in (0,1)\), then

$$\begin{aligned} f\bigl(\gamma x,\gamma ^{-1}y\bigr)=-2(1+\gamma x)^{\frac{1}{2}}-2\bigl(1+ \gamma ^{-1}y\bigr)^{- \frac{1}{4}}\leq \varphi (\gamma )f(x,y), \quad\forall x,y\geq 0, \\ I_{1}\bigl(\gamma x,\gamma ^{-1}y\bigr)=(1+\gamma x)^{\frac{1}{2}}+\bigl(1+\gamma ^{-1}y\bigr)^{- \frac{1}{4}}\geq \varphi (\gamma )I_{1}(x,y), \quad\forall x,y\geq 0. \end{aligned}$$

It is easy to see that \((H_{1})\), \((H_{2})\), and \((H_{3})\) hold. Hence, for each \(u\in H\) with \(-M\leq u(t)\leq 0\), Theorem 3.1 implies that (5.1) has a unique positive solution on J, where \(M>0\) is a given constant. In addition, let \(C_{f}=C_{1}=1\); then we can conclude that \((H_{4})\) holds. Finally, by means of Theorem 4.1, for each desired target profile \(x_{d}\) in H, the problem (OP) for (5.1) has at least one optimal control. □
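The φ-concave-convex inequalities used above can also be spot-checked numerically; this is a hedged sanity check on an arbitrary sample grid, with f, I₁, and φ taken as in the proof (I₁ read as increasing in its first argument and decreasing in its second).

```python
def f(x, y):
    # f(x, y) = -2(1 + x)^{1/2} - 2(1 + y)^{-1/4}, as in Sect. 5
    return -2.0 * (1.0 + x) ** 0.5 - 2.0 * (1.0 + y) ** -0.25

def I1(x, y):
    # Impulse function, increasing in x and decreasing in y
    return (1.0 + x) ** 0.5 + (1.0 + y) ** -0.25

def phi(g):
    return g ** 0.5  # phi(gamma) = gamma^{1/2}

# Spot-check (H3) on a coarse grid of gamma, x, y values.
gammas = [0.1 * k for k in range(1, 10)]
points = [0.5 * k for k in range(11)]          # x, y in [0, 5]
for g in gammas:
    for x in points:
        for y in points:
            assert f(g * x, y / g) <= phi(g) * f(x, y) + 1e-12
            assert I1(g * x, y / g) >= phi(g) * I1(x, y) - 1e-12
print("(H3) holds on the sampled grid")
```

A finite grid check is of course no substitute for the elementary pointwise estimates behind \((H_{3})\); it only illustrates them.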