1 Introduction

In the present article, we discuss the following new nonlinear problem for Hadamard fractional differential equations on an infinite interval:

$$ \textstyle\begin{cases} {}^{H}\!D^{\nu }x(t)+ b(t)f(t,x(t))+c(t)=0, \quad 1< \nu < 2, t\in (1,\infty), \\ x(1)=0, \qquad {}^{H}\!D^{\nu -1}x(\infty )=\sum_{i=1}^{m}\gamma _{i} {}^{H}\!I^{\beta _{i}}x(\eta ), \end{cases} $$
(1.1)

where \({}^{H}\!D^{\nu }\) denotes the Hadamard fractional derivative of order ν, \({}^{H}\!I^{\beta _{i}}\) is the Hadamard fractional integral of order \(\beta _{i}\), \(\beta _{i}, \gamma _{i}\geq 0\) (\(i=1,2,\ldots , m\)) and \(\eta \in (1,\infty )\) are constants, and

$$ \varGamma (\nu )>\sum_{i=1}^{m} \frac{\gamma _{i}\varGamma (\nu )}{ \varGamma (\nu +\beta _{i})}(\log \eta )^{\nu +\beta _{i}-1}. $$

Fractional differential equations have been investigated extensively, and applications have appeared in fields including physics, engineering, biological science, and chemistry; see [4, 5, 7,8,9,10,11,12, 15, 16, 19, 27] and the references cited therein. A large part of these works concerns Riemann–Liouville or Caputo-type fractional equations. The Hadamard fractional derivative, introduced by Hadamard in 1892 (see [6]), is another well-known fractional derivative; its definition involves a logarithmic function of arbitrary exponent. In the past decades, Hadamard fractional differential equations have been studied under various boundary conditions; see, for instance, [1,2,3, 13, 14, 17, 18, 20,21,22,23,24,25]. It is worth mentioning that Ahmad and Ntouyas [1, 2] recently investigated Hadamard fractional differential equations and systems with fractional integral boundary conditions. By means of the Banach fixed point theorem and the Leray–Schauder alternative, they obtained sufficient conditions for the existence and uniqueness of solutions to these problems.

In [3], the authors discussed a Hadamard fractional differential inclusion under three-point boundary conditions

$$ \textstyle\begin{cases} {}^{H}\!D^{\nu }y(t)\in F(t,y(t)), \quad 1< t< e, \\ y(1)=0, \qquad y(e)={}^{H}\!I^{\beta }y(\eta ), \end{cases} $$
(1.2)

where \(1<\nu \leq 2\), \(1<\eta <e\), \(F:[1,e]\times (-\infty ,+\infty ) \rightarrow \varrho (-\infty ,+\infty )\) is a multivalued map, \(\varrho (-\infty ,+\infty )\) denotes the set constituted by all nonempty subsets of \((-\infty ,+\infty )\). By applying usual fixed point theorems of multi-valued maps, the existing results of solutions were given.

In a recent article [17], the authors studied a Hadamard fractional differential equation on an infinite interval

$$ \textstyle\begin{cases} {}^{H}\!D^{\nu }x(t)+a(t)f(x(t))=0, \quad 1< \nu \leq 2, t\in (1,\infty ), \\ x(1)=0,\qquad {}^{H}\!D^{\nu -1}x(\infty )=\sum_{i=1}^{m}\lambda _{i} {}^{H} \!I^{\beta _{i}}x(\eta ), \end{cases} $$
(1.3)

where \(\eta \in (1,\infty )\), \(\lambda _{i}\geq 0\), \(\beta _{i}>0\) (\(i=1,2, \ldots ,m\)) are constants. They obtained multiple positive solutions via the Leggett–Williams and Guo–Krasnosel'skii fixed point theorems.

However, there are still few papers on positive solutions to Hadamard fractional differential equations on an infinite interval, and results on the uniqueness of positive solutions are scarce. So in this paper we discuss problem (1.1) by other methods and under different conditions. We establish existence and uniqueness results for positive solutions to problem (1.1). It should be pointed out that both our results and the methods used here are new for Hadamard fractional differential equations.

2 Preliminaries

Definition 2.1

([7])

For a given function \(\varphi : [1, \infty )\rightarrow \mathbf{R}\), the Hadamard fractional integral of order ν is defined by

$$ {}^{H}\!I^{\nu }\varphi (t)=\frac{1}{\varGamma (\nu )} \int _{1}^{t} \biggl(\log \frac{t}{s} \biggr)^{\nu -1}\frac{\varphi (s)}{s}\,ds , \quad \nu >0, $$

provided the integral on the right-hand side exists.
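For orientation, here is a quick worked example, easily verified from the definition: taking \(\varphi \equiv 1\) and substituting \(u=\log \frac{t}{s}\), \(du=-\frac{ds}{s}\), one obtains

$$ {}^{H}\!I^{\nu }1(t)=\frac{1}{\varGamma (\nu )} \int _{0}^{\log t}u^{\nu -1}\,du =\frac{(\log t)^{\nu }}{\varGamma (\nu +1)},\quad t\geq 1, $$

which is the Hadamard analogue of the familiar Riemann–Liouville formula \(I^{\nu }1=\frac{t^{\nu }}{\varGamma (\nu +1)}\).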

Definition 2.2

([7])

For a given function \(\varphi : [1, \infty )\rightarrow \mathbf {R}\), the Hadamard fractional derivative of order ν is defined by

$$ {}^{H}\!D^{\nu }\varphi (t)=\frac{1}{\varGamma (n-\nu )} \biggl(t \frac{d}{dt} \biggr)^{n} \int _{1}^{t} \biggl(\log \frac{t}{s} \biggr) ^{n-\nu -1}\frac{\varphi (s)}{s}\,ds , \quad n-1< \nu < n, $$

where \(n=[\nu ]+1\), \([\nu ]\) denotes the integer part of ν, and \(\log (\cdot )=\log _{e}(\cdot )\).

Throughout this paper, let

$$ \varOmega =\varGamma (\nu )-\sum_{i=1}^{m} \frac{\gamma _{i}\varGamma ( \nu )}{\varGamma (\nu +\beta _{i})}(\log \eta )^{\nu +\beta _{i}-1}, $$

By the assumption stated after (1.1), we have \(\varOmega >0\).
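The role of Ω can be made transparent by the following identities, which can be checked directly from Definitions 2.1 and 2.2 (they are the standard Hadamard analogues of the power-function formulas in the Riemann–Liouville setting): for \(\beta >0\) and \(1<\nu <2\),

$$ {}^{H}\!I^{\beta }\bigl[(\log s)^{\nu -1}\bigr](\eta )=\frac{\varGamma (\nu )}{\varGamma (\nu +\beta )}(\log \eta )^{\nu +\beta -1},\qquad {}^{H}\!D^{\nu -1}\bigl[(\log s)^{\nu -1}\bigr](t)=\varGamma (\nu ). $$

Thus, with \(h(t)=(\log t)^{\nu -1}\), the condition \(\varOmega >0\) states precisely that \({}^{H}\!D^{\nu -1}h(\infty )=\varGamma (\nu )>\sum_{i=1}^{m}\gamma _{i}\,{}^{H}\!I^{\beta _{i}}h(\eta )\), which is the inequality assumed after (1.1).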

Lemma 2.3

([17])

Assume that \(z\in C[1,\infty )\) with \(0<\int _{1} ^{\infty }z(s)\frac{ds}{s}<\infty \). Then the solution of the Hadamard fractional differential equation with integral boundary conditions

$$ \textstyle\begin{cases} {}^{H}\!D^{\nu }x(t)+z(t)=0, \quad 1< \nu < 2, t\in (1,\infty ), \\ x(1)=0,\qquad {}^{H}\!D^{\nu -1}x(\infty )=\sum_{i=1}^{m}\gamma _{i} {}^{H}\!I^{\beta _{i}}x(\eta ), \end{cases} $$
(2.1)

can be expressed by

$$ x(t)= \int _{1}^{\infty }G(t,s)z(s)\frac{ds}{s}, $$

where

$$ G(t,s)=g(t,s)+\sum_{i=1}^{m} \frac{\gamma _{i}(\log t)^{\nu -1}}{ \varOmega \varGamma (\nu +\beta _{i})}g_{i}(\eta ,s), $$
(2.2)

and

$$\begin{aligned}& g(t,s)=\frac{1}{\varGamma (\nu )} \textstyle\begin{cases} (\log t)^{\nu -1}- (\log (\frac{t}{s}) )^{\nu -1},& 1\leq s \leq t< \infty , \\ (\log t)^{\nu -1},& 1\leq t\leq s< \infty , \end{cases}\displaystyle \end{aligned}$$
(2.3)
$$\begin{aligned}& g_{i}(\eta ,s)= \textstyle\begin{cases} (\log \eta )^{\nu +\beta _{i}-1}- (\log (\frac{\eta }{s}) ) ^{\nu +\beta _{i}-1},& 1\leq s\leq \eta < \infty , \\ (\log \eta )^{\nu +\beta _{i}-1},& 1\leq \eta \leq s< \infty . \end{cases}\displaystyle \end{aligned}$$
(2.4)

Lemma 2.4

([17])

The Green’s function \(G(t,s)\) given in (2.2) has the following properties:

(i)

\(G(t,s)\) is a continuous function which satisfies \(G(t,s) \geq 0\) for \((t,s)\in [1,\infty )\times [1,\infty )\);

(ii)

\(\frac{G(t,s)}{1+(\log t)^{\nu -1}}\leq \frac{1}{\varGamma (\nu )}+\sum_{i=1}^{m}\frac{\gamma _{i}g_{i}(\eta ,s)}{\varOmega \varGamma (\nu +\beta _{i})}\) for \((t,s)\in [1,\infty )\times [1,\infty )\);

(iii)

\(G(t,s)\leq (\frac{1}{\varGamma (\nu )}+\sum_{i=1}^{m}\frac{\gamma _{i}g_{i}(\eta ,s)}{\varOmega \varGamma (\nu +\beta _{i})} )(\log t)^{\nu -1}\) for \((t,s)\in [1,\infty )\times [1,\infty )\).
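For readers who wish to experiment numerically, the following is a minimal Python sketch (illustrative only; the values of ν, \(\gamma _{i}\), \(\beta _{i}\), η and the test points below are arbitrary choices satisfying \(\varOmega >0\), not data from the paper) that evaluates \(G(t,s)\) from (2.2)–(2.4) and spot-checks properties (i)–(iii) of Lemma 2.4.

```python
# Illustrative sketch only: evaluate the Green's function (2.2)-(2.4) and
# spot-check properties (i)-(iii) of Lemma 2.4 at a few sample points.
# The parameters below are arbitrary choices with Omega > 0, not data from the paper.
from math import gamma, log

nu = 1.5                      # order, 1 < nu < 2
gammas = [1.0]                # gamma_i >= 0
betas = [2.0]                 # beta_i
eta = 2.0                     # eta in (1, infinity)

Omega = gamma(nu) - sum(g * gamma(nu) / gamma(nu + b) * log(eta) ** (nu + b - 1)
                        for g, b in zip(gammas, betas))
assert Omega > 0              # standing assumption

def g0(t, s):
    """g(t, s) from (2.3)."""
    if 1 <= s <= t:
        return (log(t) ** (nu - 1) - log(t / s) ** (nu - 1)) / gamma(nu)
    return log(t) ** (nu - 1) / gamma(nu)

def gi(b, s):
    """g_i(eta, s) from (2.4)."""
    if 1 <= s <= eta:
        return log(eta) ** (nu + b - 1) - log(eta / s) ** (nu + b - 1)
    return log(eta) ** (nu + b - 1)

def G(t, s):
    """Green's function G(t, s) from (2.2)."""
    return g0(t, s) + sum(g * log(t) ** (nu - 1) * gi(b, s) / (Omega * gamma(nu + b))
                          for g, b in zip(gammas, betas))

for t in (1.0, 1.5, 3.0, 50.0):
    for s in (1.2, 2.0, 10.0):
        cap = 1 / gamma(nu) + sum(g * gi(b, s) / (Omega * gamma(nu + b))
                                  for g, b in zip(gammas, betas))
        assert G(t, s) >= 0.0                                        # property (i)
        assert G(t, s) / (1 + log(t) ** (nu - 1)) <= cap + 1e-12     # property (ii)
        assert G(t, s) <= cap * log(t) ** (nu - 1) + 1e-12           # property (iii)
print("Lemma 2.4 spot-checks passed, Omega =", round(Omega, 4))
```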

To obtain our results, it is essential to recall some concepts concerning Banach spaces and the main tool, a fixed point theorem for generalized concave operators; see [26, 28, 29] for details.

Assume that \((E, \|\cdot \|)\) is a real Banach space partially ordered by a cone \(P\subset E\), that is, \(x\leq y \) if and only if \(y-x\in P\). If \(x\leq y\) and \(x\neq y\), we write \(x< y\) or \(y>x\). By θ we denote the zero element of E. Recall that a non-empty closed convex set \(P\subset E\) is a cone if it satisfies (i) \(x\in P\), \(\lambda \geq 0\Rightarrow \lambda x\in P\); (ii) \(x\in P\), \(-x \in P\Rightarrow x=\theta \).

The cone P is called normal if there exists a constant \(M>0\) such that, for all \(x,y\in E\) with \(\theta \leq x\leq y\), we have \(\|x\|\leq M\|y\|\); M is called the normality constant of P. For \(x_{1},x_{2}\in E\), the set \([x_{1},x_{2}]=\{x\in E\mid x_{1}\leq x\leq x_{2}\}\) is called the order interval between \(x_{1}\) and \(x_{2}\).

For \(x,y\in E\), we write \(x\sim y\) if there exist \(\lambda >0 \) and \(\mu >0\) such that \(\lambda x\leq y\leq \mu x\). Evidently, ∼ is an equivalence relation. For fixed \(h>\theta\) (i.e., \(h \geq \theta \) and \(h\neq \theta \)), we define the set \(P_{h}=\{x\in E\mid x\sim h\}\). Clearly, \(P_{h}\subset P\) is convex and \(\lambda P_{h}=P _{h}\) for any \(\lambda >0\). In Sect. 3 we shall take \(h(t)=(\log t)^{\nu -1}\), so that \(x\in P_{h}\) means precisely that \(\lambda (\log t)^{\nu -1}\leq x(t)\leq \mu (\log t)^{\nu -1}\) on \([1,\infty )\) for some \(0<\lambda \leq \mu \).

In [29], the authors investigated the operator equation

$$ u=Tu +x_{0}. $$
(2.5)

There, the existence and uniqueness of positive solutions to (2.5) were obtained; moreover, the following theorem was given.

Lemma 2.5

(Theorem 2.1 in [29])

Let \(h>\theta \) and let P be a normal cone. Suppose that:

(\(D_{1}\)):

\(T:P\rightarrow P\) is an increasing operator;

(\(D_{2}\)):

\(x_{0}\in P\) satisfies \(Th +x_{0}\in P_{h}\);

(\(D_{3}\)):

For all \(u\in P\) and \(r\in (0,1)\), there is \(\psi (r) \in (r,1)\) such that \(T(ru)\geq \psi (r)Tu \).

Then there is a unique element in \(P_{h}\) which satisfies equation (2.5).

Remark 2.6

If condition (\(D_{3}\)) holds for T, then the operator T is called generalized concave. Taking \(x_{0}= \theta \), the conclusion still holds; that is, the equation \(Tu =u\) has a unique solution in \(P_{h}\).
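As a simple scalar illustration (our own example, not taken from [29]), take \(E=\mathbf{R}\), \(P=[0,\infty )\), \(h=1\), and \(Tu =u^{\frac{1}{3}}+1\). Then T is increasing on P and, for \(r\in (0,1)\) and \(u\geq 0\),

$$ T(ru )=r^{\frac{1}{3}}u^{\frac{1}{3}}+1\geq r^{\frac{1}{3}}\bigl(u^{\frac{1}{3}}+1\bigr)=\psi (r)Tu ,\qquad \psi (r)=r^{\frac{1}{3}}\in (r,1), $$

so T is generalized concave; the nonlinearity in Example 4.1 below is built in the same way.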

3 Main results

We will discuss (1.1) in the space E given by

$$ E=\biggl\{ x\in C\bigl([1,\infty)\bigr):\sup_{t\in [1,\infty )} \frac{ \vert x(t) \vert }{1+( \log t)^{\nu -1}}< \infty \biggr\} . $$

This space appears frequently in the literature. Set

$$ \Vert x \Vert =\sup_{t\in [1,\infty )} \frac{ \vert x(t) \vert }{1+(\log t)^{\nu -1}}. $$

By [14], \((E,\|\cdot \|)\) is a Banach space with this norm. Moreover, we define a cone P in E by

$$ P=\bigl\{ x\in E: x(t)\geq 0, t\in [1,\infty )\bigr\} . $$

If \(x, y\in P\) and \(x\leq y\), then we have \(0\leq x(t)\leq y(t)\), and thus

$$ \sup_{1\leq t< \infty }\frac{x(t)}{1+(\log t)^{\nu -1}}\leq \sup_{1\leq t< \infty } \frac{y(t)}{1+(\log t)^{\nu -1}}, $$

that is, \(\|x\|\leq \|y\|\), which means that P is normal.

Theorem 3.1

Assume that

(\(\mathrm{H}_{1}\)):

\(f: [1,\infty )\times [0,\infty )\rightarrow [0,\infty )\) is a continuous function with \(f(t,0)\not \equiv 0\) for \(t\in [1,\infty )\);

(\(\mathrm{H}_{2}\)):

for fixed \(t\in [1,\infty )\), \(f(t,x)\) is increasing in \(x\in [0,\infty )\);

(\(\mathrm{H}_{3}\)):

if x is bounded, then \(f(t,(1+(\log t)^{\nu -1})x)\) is bounded on \([1,\infty )\);

(\(\mathrm{H}_{4}\)):

for \(\tau \in (0,1)\), there is \(\psi (\tau )\in (\tau ,1)\) such that \(f(t,\tau x)\geq \psi (\tau )f(t,x)\), \(t\in [1,\infty )\), \(x\in [0,\infty )\);

(\(\mathrm{H}_{5}\)):

\(b(t)\), \(c(t)\) are continuous with \(0<\int _{1}^{\infty }b(s) \frac{ds}{s}<\infty \), \(0\leq \int _{1}^{\infty }c(s)\frac{ds}{s}< \infty \).

Then problem (1.1) has a unique solution \(x^{*}\) in \(P_{h}\), where \(h(t)=(\log t)^{\nu -1}\), \(t\in [1,\infty )\).

Proof

For \(x\in E\), we define

$$ Tx (t)= \int _{1}^{+\infty }G(t,s)b(s)f\bigl(s,x(s)\bigr) \frac{ds}{s},\qquad x_{0}(t)= \int _{1}^{+\infty }G(t,s)c(s)\frac{ds}{s}, $$

where \(G(t,s)\) is the same as in (2.2). By Lemma 2.3, x is a solution of problem (1.1) if and only if x is a solution of the operator equation \(Tx +x_{0}=x\).

First, we prove that \(T: P\rightarrow P\). For \(x\in P\), we have \(\sup_{t\in [1,\infty )}\frac{x(t)}{1+(\log t)^{\nu -1}}<\infty \). By (\(\mathrm{H}_{3}\)), there exists \(M_{x}>0\) such that \(f (s,(1+( \log s)^{\nu -1})\frac{x(s)}{1+(\log s)^{\nu -1}} )\leq M_{x}\) for \(s\in [1,\infty )\). Further, by Lemma 2.4, (\(\mathrm{H}_{3}\)), and (\(\mathrm{H}_{5}\)), we get

$$\begin{aligned} \frac{Tx (t)}{1+(\log t)^{\nu -1}} =& \int _{1}^{\infty }\frac{G(t,s)}{1+( \log t)^{\nu -1}}b(s)f\bigl(s,x(s) \bigr)\frac{ds}{s} \\ \leq & \int _{1}^{\infty } \Biggl(\frac{1}{\varGamma (\nu )}+\sum _{i=1}^{m}\frac{\gamma _{i}g_{i}(\eta ,s)}{\varOmega \varGamma (\nu +\beta _{i})} \Biggr)b(s) \\ &{}\times f \biggl(s,\bigl(1+(\log s)^{\nu -1}\bigr)\frac{x(s)}{1+(\log s)^{\nu -1}} \biggr) \frac{ds}{s} \\ \leq & \Biggl(\frac{1}{\varGamma (\nu )}+\sum_{i=1}^{m} \frac{ \gamma _{i}(\log \eta )^{\nu +\beta _{i}-1}}{\varOmega \varGamma (\nu +\beta _{i})} \Biggr)M_{x} \int _{1}^{\infty }b(s)\frac{ds}{s}< \infty . \end{aligned}$$

Also, it follows from (\(\mathrm{H}_{1}\)) and Lemma 2.4 that \(Tx \in C[1,\infty )\). So we obtain that \(T: P\rightarrow P\). By (\(\mathrm{H}_{2}\)), one can prove that \(T: P\rightarrow P\) is increasing.

In the sequel, we show that T satisfies the three conditions of Lemma 2.5. Since \(h(t)=(\log t)^{\nu -1}\), \(t\in [1,\infty )\), and \(1<\nu <2\), we get \(\sup_{1\leq t<\infty }\frac{h(t)}{1+( \log t)^{\nu -1}}=1<\infty \), that is, \(h\in P\). Next we prove that \(Th \in P_{h}\).

Since \(\frac{h(t)}{1+(\log t)^{\nu -1}}\leq 1\) for \(t\in [1,\infty )\), and by (\(\mathrm{H}_{3}\)), there is \(M_{h}>0\) such that

$$ f \biggl(t,\bigl(1+(\log t)^{\nu -1}\bigr)\frac{h(t)}{1+(\log t)^{\nu -1}} \biggr) \leq M_{h}. $$
(3.1)

Let

$$\begin{aligned}& l_{1}=\sum_{i=1}^{m} \frac{\gamma _{i}}{\varOmega \varGamma (\nu +\beta _{i})} \int _{1}^{m} b(s)g_{i}(\eta ,s)f(s,0) \frac{ds}{s}, \\& l_{2}=M_{h} \Biggl(\frac{1}{\varGamma (\nu )}+\frac{1}{\varOmega } \sum_{i=1}^{m}\frac{\gamma _{i}(\log \eta )^{\nu +\beta _{i}-1}}{ \varGamma (\nu +\beta _{i})} \Biggr)\cdot \int _{1}^{\infty } b(s) \frac{ds}{s}. \end{aligned}$$

Because \(\varGamma (\nu )>\sum_{i=1}^{m}\frac{\gamma _{i}\varGamma ( \nu )}{\varGamma (\nu +\beta _{i})}(\log \eta )^{\nu +\beta _{i}-1}>0\), at least one of \(\gamma _{1}\), \(\gamma _{2},\ldots , \gamma _{m}\) is nonzero. So \(\sum_{i=1}^{m}\frac{\gamma _{i}}{\varOmega \varGamma (\nu +\beta _{i})}>0\). From conditions (\(\mathrm{H}_{1}\)) and (\(\mathrm{H}_{5}\)), we see that \(b(s)f(s,0)\) is continuous with \(b(s)f(s,0)\not \equiv 0\) for \(s\in [1,\infty )\). Hence, \(\int _{1}^{m} b(s)f(s,0)\frac{ds}{s}>0\) and one gets \(l_{1}>0\). Further, by (\(\mathrm{H}_{2}\)), we get \(M_{h}\geq f(t,0)\) for \(t\in [1,\infty )\). Because \(g_{i}(\eta ,s)\leq (\log \eta )^{ \nu +\beta _{i}-1}\), we easily get \(l_{2}\geq l_{1}\). By (\(\mathrm{H}_{2}\)), we have

$$\begin{aligned} Th (t) =& \int _{1}^{\infty }G(t,s)b(s)f\bigl(s,(\log s)^{\nu -1}\bigr) \frac{ds}{s} \\ \geq & \int _{1}^{\infty }G(t,s)b(s)f(s,0)\frac{ds}{s} \\ \geq & \int _{1}^{\infty }\sum_{i=1}^{m} \frac{\gamma _{i}( \log t)^{\nu -1}}{\varOmega \varGamma (\nu +\beta _{i})}g_{i}(\eta ,s)b(s)f(s,0) \frac{ds}{s} \\ \geq & \sum_{i=1}^{m}\frac{\gamma _{i}}{\varOmega \varGamma (\nu + \beta _{i})} \int _{1}^{m} b(s)g_{i}(\eta ,s)f(s,0) \frac{ds}{s}\cdot ( \log t)^{\nu -1} \\ =& l_{1}(\log t)^{\nu -1}=l_{1}h(t). \end{aligned}$$

Also, from Lemma 2.4, (\(\mathrm{H}_{2}\)), and (3.1),

$$\begin{aligned} Th (t) =& \int _{1}^{\infty }G(t,s)b(s)f\bigl(s,(\log s)^{\nu -1}\bigr) \frac{ds}{s} \\ =& \int _{1}^{\infty }G(t,s)b(s)f \biggl(s,\bigl(1+(\log s)^{\nu -1}\bigr)\frac{( \log s)^{\nu -1}}{1+(\log s)^{\nu -1}} \biggr)\frac{ds}{s} \\ \leq & \int _{1}^{\infty }G(t,s)b(s)M_{h} \frac{ds}{s} \\ \leq & \int _{1}^{\infty }\frac{(\log t)^{\nu -1}}{\varGamma (\nu )}b(s)M _{h}\frac{ds}{s}+ \int _{1}^{+\infty }\sum_{i=1}^{m} \frac{\gamma _{i}(\log t)^{\nu -1}}{\varOmega \varGamma (\nu +\beta _{i})}g_{i}(\eta ,s)b(s)M _{h}\frac{ds}{s} \\ \leq & M_{h} \Biggl(\frac{1}{\varGamma (\nu )}+\frac{1}{\varOmega }\sum _{i=1}^{m}\frac{\gamma _{i}(\log \eta )^{\nu +\beta _{i}-1}}{ \varGamma (\nu +\beta _{i})} \Biggr)\cdot \int _{1}^{\infty } b(s) \frac{ds}{s}\cdot (\log t)^{\nu -1} \\ =& l_{2}(\log t)^{\nu -1}=l_{2}h(t). \end{aligned}$$

Hence, \(l_{1}h(t)\leq Th (t)\leq l_{2}h(t)\), \(t\in [1,\infty )\). That is, \(l_{1}h\leq Th \leq l_{2}h\). So \(Th \in P_{h}\).

Next we show that condition (\(D_{3}\)) of Lemma 2.5 holds. If \(\tau \in (0,1)\) and \(x\in P\), then by (\(\mathrm{H}_{4}\)) and Lemma 2.4 we obtain

$$\begin{aligned} T(\tau x) (t) =& \int _{1}^{\infty }G(t,s)b(s)f\bigl(s,\tau x(s)\bigr) \frac{ds}{s} \\ \geq & \psi (\tau ) \int _{1}^{\infty }G(t,s)b(s)f\bigl(s,x(s)\bigr) \frac{ds}{s} \\ =& \psi (\tau )Tx (t). \end{aligned}$$

That is, \(T(\tau x)\geq \psi (\tau )Tx \), \(\tau \in (0,1)\), \(x\in P\).

Next we consider \(x_{0}\). By (\(\mathrm{H}_{5}\)) and Lemma 2.4,

$$\begin{aligned} \frac{x_{0}(t)}{1+(\log t)^{\nu -1}} =& \int _{1}^{\infty }\frac{G(t,s)}{1+( \log t)^{\nu -1}}c(s) \frac{ds}{s} \\ \leq & \Biggl(\frac{1}{\varGamma (\nu )}+\sum_{i=1}^{m} \frac{ \gamma _{i}(\log \eta )^{\nu +\beta _{i}-1}}{\varOmega \varGamma (\nu +\beta _{i})} \Biggr) \int _{1}^{\infty }c(s)\frac{ds}{s}< \infty . \end{aligned}$$

So \(x_{0}\in P\). Set

$$ l= \Biggl(\frac{1}{\varGamma (\nu )}+\sum_{i=1}^{m} \frac{\gamma _{i}(\log \eta )^{\nu +\beta _{i}-1}}{\varOmega \varGamma (\nu +\beta _{i})} \Biggr) \int _{1}^{\infty }c(s)\frac{ds}{s}, $$

then \(l\geq 0\). By Lemma 2.4,

$$ x_{0}(t)\leq l(\log t)^{\nu -1}=lh (t),\quad t\in [1,\infty ). $$

Hence, \(0\leq x_{0}\leq lh \). Further,

$$ l_{1}h\leq x_{0}+Th \leq (l_{2}+l)h. $$

Hence \(x_{0}+Th \in P_{h}\), so the second condition (\(D_{2}\)) in Lemma 2.5 is satisfied. Now we can apply Lemma 2.5: the equation \(x=Tx +x_{0}\) has a unique solution \(x^{*}\) in \(P_{h}\). Hence there exist \(\lambda ,\mu >0\) such that

$$ 0\leq \lambda (\log t)^{\nu -1}=\lambda h(t)\leq x^{*}(t)\leq \mu h(t)=\mu (\log t)^{ \nu -1},\quad t\in [1,\infty ). $$

In other words, \(x^{*}(t)\) is the unique positive solution of problem (1.1) in \(P_{h}\). □

Considering Theorem 3.1 and Remark 2.6, we can present the following corollary.

Corollary 3.2

Suppose that (\(\mathrm{H}_{1}\))–(\(\mathrm{H}_{5}\)) hold with \(c(t)\equiv 0\), \(t\in [1,\infty )\). Then the following fractional problem

$$ \textstyle\begin{cases} {}^{H}\!D^{\nu }x(t)+b(t)f(t,x(t))=0,\quad 1< \nu < 2, t\in (1,\infty ), \\ x(1)=0,\qquad {}^{H}\!D^{\nu -1}x(\infty )=\sum_{i=1}^{m}\gamma _{i} {}^{H}\!I^{\beta _{i}}x(\eta ), \end{cases} $$

has a unique positive solution \(x^{*}\) in \(P_{h}\), where \(h(t)=(\log t)^{ \nu -1}\), \(t\in [1,\infty )\).

Remark 3.3

In the literature, uniqueness results similar to Theorem 3.1 and Corollary 3.2 have not been reported for Hadamard fractional problems under various boundary conditions. So our results are new for Hadamard fractional problems with boundary conditions. Let \(f\equiv C>0\); then conditions (\(\mathrm{H}_{1}\))–(\(\mathrm{H}_{4}\)) are naturally satisfied and problem (1.1) has a unique solution \(x(t)= \int _{1}^{\infty }G(t,s)[b(s)C+c(s)]\frac{ds}{s}\), \(t\in [1,\infty )\). From Lemma 2.4, this unique solution \(x\in P_{h}\), and thus it is a positive solution.

4 An example

Example 4.1

We discuss a specific Hadamard fractional problem

$$ \textstyle\begin{cases} {}^{H}\!D^{\frac{3}{2}}x(t)+ t^{-2} (\frac{x^{\frac{1}{3}}(t)}{1+( \log t)^{\frac{1}{2}}} +1 )+e^{-t}=0,\quad t\in (1,\infty ), \\ x(1)=0,\qquad {}^{H}\!D^{\frac{1}{2}}x(\infty )={}^{H}\!I^{\frac{3}{2}}x(e)+2 {}^{H}\!I^{\frac{5}{2}}x(e), \end{cases} $$
(4.1)

where \(\nu =\frac{3}{2}\), \(m=2\), \(\eta =e\), \(\gamma _{1}=1\), \(\gamma _{2}=2\), \(\beta _{1}=\frac{3}{2}\), \(\beta _{2}=\frac{5}{2}\), \(b(t)=t^{-2}\) and

$$ f(t,x)=\frac{x^{\frac{1}{3}}}{1+(\log t)^{\frac{1}{2}}}+1, \qquad c(t)=e ^{-t}. $$

Then \(\varOmega =\varGamma (\frac{3}{2})-\sum_{i=1}^{2}\frac{\gamma _{i}\varGamma (\frac{3}{2})}{\varGamma (\frac{3}{2}+\beta _{i})}(\log e)^{ \frac{3}{2}+\beta _{i}-1}\approx 0.1477>0\). Evidently, \(f(t,x)\) satisfies (\(\mathrm{H}_{1}\)) and (\(\mathrm{H}_{2}\)). If x is bounded, then \(f(t,(1+( \log t)^{\frac{1}{2}})x)=x^{\frac{1}{3}}+1\) is bounded for \(t\in [1, \infty )\), so (\(\mathrm{H}_{3}\)) is also satisfied; moreover, \(f(t,0)=1>0\).
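The numerical value of Ω can be confirmed with a few lines of Python (a simple check using only the data of Example 4.1):

```python
# Quick numerical confirmation of Omega for Example 4.1
# (nu = 3/2, eta = e, gamma_1 = 1, gamma_2 = 2, beta_1 = 3/2, beta_2 = 5/2).
from math import gamma, log, e

nu, eta = 1.5, e
gammas, betas = (1.0, 2.0), (1.5, 2.5)
Omega = gamma(nu) - sum(g * gamma(nu) / gamma(nu + b) * log(eta) ** (nu + b - 1)
                        for g, b in zip(gammas, betas))
print(round(Omega, 4))   # prints 0.1477 > 0
```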

Let \(\psi (\tau )=\tau ^{\frac{1}{3}}\), then \(\psi (\tau )\in (\tau ,1)\) for \(\tau \in (0,1)\). For \(\tau \in (0,1)\), \(x\geq 0\), we have

$$ f(t,\tau x)=\frac{\tau ^{\frac{1}{3}}x^{\frac{1}{3}}}{1+(\log t)^{ \frac{1}{2}}}+1\geq \tau ^{\frac{1}{3}} \biggl( \frac{x^{\frac{1}{3}}}{1+( \log t)^{\frac{1}{2}}} +1 \biggr)=\psi (\tau )f(t,x). $$

Moreover,

$$ \int _{1}^{\infty }b(t)\frac{dt}{t}= \int _{1}^{\infty }t^{-2} \frac{dt}{t}= \frac{1}{2}< \infty ,\qquad \int _{1}^{\infty }c(t)\frac{dt}{t}= \int _{1}^{\infty }e^{-t}\frac{dt}{t}\leq \int _{1}^{\infty }e^{-t}\,dt= \frac{1}{e}< \infty . $$

So conditions (\(\mathrm{H}_{4}\)) and (\(\mathrm{H}_{5}\)) are satisfied. By Theorem 3.1, problem (4.1) has a unique solution \(x^{*}\) in \(P_{h}\), where \(h(t)=(\log t)^{\frac{1}{2}}\), \(t\in [1,\infty )\).
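Finally, although Theorem 3.1 is an abstract result, the representation \(x=Tx +x_{0}\) from Lemma 2.3 suggests a natural way to approximate \(x^{*}\) numerically. The sketch below is an illustrative experiment only, not part of the paper's argument: the truncation of \([1,\infty )\), the logarithmic grid, the crude quadrature weights, and the use of plain successive approximation starting from \(x\equiv 0\) are our own ad hoc choices, and no convergence claim from the paper is attached to them.

```python
# Illustrative numerical experiment for problem (4.1): naive successive approximation
# x_{k+1} = T x_k + x_0 via the Green's-function representation of Lemma 2.3.
# Truncation, grid, and quadrature rule are ad hoc choices; no convergence claim is made.
import numpy as np
from math import gamma, log, e

nu, eta = 1.5, e
gammas, betas = (1.0, 2.0), (1.5, 2.5)
Omega = gamma(nu) - sum(g * gamma(nu) / gamma(nu + b) * log(eta) ** (nu + b - 1)
                        for g, b in zip(gammas, betas))

S = np.exp(np.linspace(0.0, 10.0, 800))      # truncated s-grid on [1, e^10]
logS = np.log(S)

def gi(b):                                    # g_i(eta, s) of (2.4) on the grid
    out = np.full_like(S, log(eta) ** (nu + b - 1))
    mask = S <= eta
    out[mask] -= (log(eta) - logS[mask]) ** (nu + b - 1)
    return out

GI = sum(g * gi(b) / (Omega * gamma(nu + b)) for g, b in zip(gammas, betas))

# Green's function G(t_i, s_j) of (2.2), with t and s both taken from the grid
LT, LS = np.meshgrid(logS, logS, indexing="ij")
diff = np.clip(LT - LS, 0.0, None)            # log(t/s) when s <= t, else 0
Gmat = (LT ** (nu - 1) - diff ** (nu - 1)) / gamma(nu) + np.outer(logS ** (nu - 1), GI)

b_s = S ** -2                                 # b(s) = s^{-2}
c_s = np.exp(-S)                              # c(s) = e^{-s}
w = np.gradient(S) / S                        # crude weights for \int ... ds/s

def f(x):                                     # f(s, x) of Example 4.1 on the s-grid
    return x ** (1 / 3) / (1 + logS ** 0.5) + 1.0

x = np.zeros_like(S)                          # start from x = 0 and iterate
for _ in range(30):
    x = Gmat @ ((b_s * f(x) + c_s) * w)

i = np.searchsorted(S, 2.0)
print("approximate x*(2) =", x[i])
```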