1 Background

In [1], it was proved that if the boundary value problem (BVP)

$$ x^{\prime \prime }(t)+r(t) x(t)=0,\quad t \in (c,d), x(c)=x(d)=0, $$

has a nontrivial solution, where r is a real-valued continuous function, then

$$ \int_{c}^{d} \bigl\vert r(s) \bigr\vert \,ds> \frac{4}{d-c}. $$
(3)

The Lyapunov inequality (3) has proved to be a valuable tool in the theory of differential equations. Indeed, it is frequently used to bound certain quantities when proving qualitative properties of solutions. The Lyapunov inequality has been studied extensively for integer-order differential equations. As fractional differential equations have attracted great interest due to their demonstrated applications [2], the last few years have witnessed the appearance of many papers which systematically study fractional analogs of this inequality and of other inequalities; we refer the reader to [3-11] for the Lyapunov type and to [12] for the Gronwall type. The conformable fractional calculus was recently initiated in [13] and studied later by the present author in [14, 15], where many properties of conformable operators were established. However, progress in this direction is still at an early stage [16-18]. In this paper, we prove a Lyapunov-type inequality for a conformable fractional BVP. Moreover, a Lyapunov-type inequality is obtained for a sequential conformable BVP. Some examples are given, and an application to a conformable Sturm-Liouville eigenvalue problem is analyzed. The newly obtained inequalities generalize the existing ones.

2 Preliminaries on conformable derivatives

This section is devoted to the presentation of some preliminaries about higher order fractional conformable derivatives developed in [14].

Definition 1

([13, 14])

The (left) conformable fractional derivative starting from c of a function \(g:[c,\infty )\rightarrow \mathbb{R}\) of order \(0< \alpha \leq 1\) is defined by

$$ \bigl(T_{\alpha }^{c} g\bigr) (t)= \lim _{\epsilon \rightarrow 0} \frac{g(t+ \epsilon (t-c)^{1-\alpha } )-g(t)}{\epsilon }. $$
(4)

In the case when \(c=0\), we write \(T_{\alpha }\). If \((T_{\alpha }^{c} g)(t)\) exists on \((c,d)\) then \((T_{\alpha }^{c} g)(c) = \lim_{t \rightarrow c^{+}}(T_{\alpha }^{c} g)(t)\).

Note that if g is differentiable then

$$ \bigl(T_{\alpha }^{c} g\bigr) (t)= (t-c)^{1-\alpha } g^{\prime }(t). $$
(5)

Moreover, the conformable fractional integral of order \(0<\alpha \leq 1\) starting at \(c\geq 0\) is defined by \((I_{\alpha }^{c} g)(t)= \int_{c}^{t}g(x)(x-c)^{\alpha -1}\,dx\); in [13] it is defined instead by \((I_{\alpha }^{c} g)(t)=\int_{c}^{t}g(x)x^{\alpha -1}\,dx\).
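As a quick illustration, the following Python sketch (not part of the original development) compares the difference quotient in (4) with the closed form (5) for the assumed sample choices \(g(t)=\sin t\), \(c=0\), \(\alpha =0.5\), evaluated at \(t=2\); the function names are purely illustrative.

```python
import numpy as np

def conformable_limit(g, t, c, alpha, eps=1e-6):
    # difference quotient from definition (4)
    return (g(t + eps * (t - c) ** (1 - alpha)) - g(t)) / eps

def conformable_formula(dg, t, c, alpha):
    # closed form (5): (t - c)^{1 - alpha} * g'(t)
    return (t - c) ** (1 - alpha) * dg(t)

t, c, alpha = 2.0, 0.0, 0.5
print(conformable_limit(np.sin, t, c, alpha))    # approx -0.5885
print(conformable_formula(np.cos, t, c, alpha))  # approx -0.5885
```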

For higher orders, the conformable fractional derivative is defined as follows.

Definition 2

([14])

Let \(n<\alpha \leq n+1 \) and set \(\gamma = \alpha -n\). Then the conformable fractional derivative starting from c of a function \(g:[c,\infty )\rightarrow \mathbb{R}\) of order α, where \(g^{(n)}(t)\) exists, is defined by

$$ \bigl(\textbf{T}_{\alpha }^{c} g\bigr) (t)= \bigl(T_{\gamma }^{c} g^{(n)}\bigr) (t). $$
(6)

In the case \(c=0\), we write \(\textbf{T}_{\alpha }\).

Note that if \(\alpha =n+1\) then \(\gamma =1\) and the fractional derivative of g becomes \(g^{(n+1)}(t)\). Also, when \(n=0\) (that is, \(\alpha \in (0,1]\)), then \(\gamma =\alpha \) and the definition coincides with that in Definition 1. From (6) it is immediate that, for \(n< \alpha \leq n+1\) and \(\gamma =\alpha -n\), if moreover the \((n+1)\)th derivative (that is, the derivative of \(g^{(n)}\)) exists, then we have

$$ \bigl( \textbf{T}_{\alpha }^{c} g\bigr) (t)= \bigl(T_{\gamma }^{c} g^{(n)}\bigr) (t)=(t-c)^{1- \gamma } g^{(n+1)}(t)=(t-c)^{1-\alpha +n}g^{(n+1)}(t). $$
(7)

Lemma 1

([13])

Assume that \(g:[c,\infty )\rightarrow \mathbb{R}\) is continuous and \(0< \alpha \leq 1\). Then, for all \(t>c\), we have

$$ T_{\alpha }^{c} I_{\alpha }^{c} g(t)=g(t). $$

For higher orders, the conformable fractional integral is defined as follows.

Definition 3

([14])

Let \(\alpha \in (n,n+1]\) and set \(\gamma = \alpha -n\). Then the left conformable fractional integral starting at c of order α is defined by

$$ \bigl(I_{\alpha }^{c} g \bigr) (t)= \textbf{I}_{n+1}^{c} \bigl((t-c)^{\gamma -1}g\bigr)= \frac{1}{n!} \int_{c}^{t} (t-x)^{n}(x-c)^{\gamma -1}g(x)\,dx. $$
(8)

Notice that if \(\alpha =n+1\) then \(\gamma =1\) and hence \((I_{\alpha } ^{c} g)(t)=(\textbf{I}_{n+1}^{c} g)(t)=\frac{1}{n!}\int_{c}^{t} (t-x)^{n} g(x)\,dx\), which is the iterative integral of g, \(n+1\) times over \((c,t]\).

Recall that the left Riemann-Liouville fractional integral of order \(\alpha >0\) starting from c is defined by

$$ \bigl(_{c}\textbf{I}^{\alpha }g\bigr) (t)= \frac{1}{\Gamma (\alpha )} \int_{c} ^{t} (t-s)^{\alpha -1} g(s)\,ds. $$
(9)

We see that \((I_{\alpha }^{c} g)(t)= ({}_{c}\textbf{I}^{\alpha }g)(t)\) for \(\alpha = n+1\), \(n=0,1,2,\ldots \) .

Example 1

By virtue of [14], we recall that \(({}_{c}\textbf{I}^{\alpha }(t-c)^{\mu -1})(x) =\frac{\Gamma (\mu )}{\Gamma (\mu +\alpha)} (x-c)^{\alpha +\mu -1}\), \(\alpha , \mu >0\). Indeed, if \(\mu \in \mathbb{R}\) is such that \(\alpha +\mu -n>0\), then the conformable fractional integral of \((t-c)^{\mu }\) of order \(\alpha \in (n,n+1]\) is

$$ \bigl(I_{\alpha }^{c} (t-c)^{\mu }\bigr) (x)=\bigl(\textbf{I}_{n+1}^{c} (t-c)^{\mu + \alpha -n-1}\bigr) (x)= \frac{\Gamma (\alpha +\mu -n)}{\Gamma (\alpha + \mu +1)} (x-c)^{\alpha +\mu }. $$
(10)
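The following SymPy sketch spot-checks formula (10) for the assumed sample values \(c=0\), \(\alpha =3/2\) (so \(n=1\), \(\gamma =1/2\)) and \(\mu =1\); it is an illustration only, and the symbol names are not from the paper.

```python
import sympy as sp

x, s = sp.symbols('x s', positive=True)
n, gamma_, mu, alpha = 1, sp.Rational(1, 2), 1, sp.Rational(3, 2)

# left-hand side: definition (8) applied to (t - c)^mu with c = 0
lhs = sp.integrate((x - s) ** n * s ** (gamma_ - 1) * s ** mu, (s, 0, x)) / sp.factorial(n)

# right-hand side of formula (10)
rhs = sp.gamma(alpha + mu - n) / sp.gamma(alpha + mu + 1) * x ** (alpha + mu)

print(sp.simplify(lhs - rhs))  # expected output: 0
```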

The following is a generalization of Lemma 1.

Lemma 2

([14])

Assume that \(g:[c,\infty )\rightarrow \mathbb{R}\) is such that \(g^{(n)}(t)\) is continuous and \(n< \alpha \leq n+1\). Then, for all \(t>c\), we have

$$ \textbf{T}_{\alpha }^{c} I_{\alpha }^{c} g(t)=g(t). $$

Theorem 1

([14])

Let \(\alpha \in (n,n+1] \) and \(g:[c,\infty ) \rightarrow \mathbb{R}\) be \((n+1)\) times differentiable for \(t>c\). Then, for all \(t>c\), we have

$$ I_{\alpha }^{c} \textbf{T}_{\alpha }^{c}(g) (t)= g(t)-\sum_{k=0}^{n} \frac{g ^{(k)}(c)(t-c)^{k}}{k!}. $$
(11)
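As an illustration of (11), the following SymPy sketch (not part of the original argument, with the assumed sample data \(c=0\), \(\alpha =3/2\), \(g(t)=t^{3}+t\)) computes \(I_{\alpha }^{c}\textbf{T}_{\alpha }^{c} g\) via (7) and (8) and compares it with \(g(x)-g(0)-g^{\prime }(0)x\).

```python
import sympy as sp

t, s, x = sp.symbols('t s x', positive=True)
g = t ** 3 + t
gamma_ = sp.Rational(1, 2)   # alpha = 3/2, so n = 1 and gamma = 1/2

# conformable derivative via (7): (t - c)^{1 - gamma} g''(t), with c = 0
Tg = t ** (1 - gamma_) * sp.diff(g, t, 2)

# conformable integral (8) of Tg with n = 1 (1/n! = 1)
I_Tg = sp.integrate((x - s) * s ** (gamma_ - 1) * Tg.subs(t, s), (s, 0, x))

# right-hand side of (11): g(x) - g(0) - g'(0) x
rhs = g.subs(t, x) - g.subs(t, 0) - sp.diff(g, t).subs(t, 0) * x

print(sp.simplify(I_Tg - rhs))  # expected output: 0
```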

Example 2

From [14], we recall that the solution of the following conformable fractional initial value problem:

$$ \bigl(T_{\alpha }^{c} x\bigr) (t)= \lambda x(t),\quad 0< \alpha \leq 1, x(c)=x_{0}, t>c, $$
(12)

is \(x(t)=x_{0} e^{\lambda \frac{(t-c)^{\alpha }}{\alpha }}\).
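A direct symbolic check (a sketch under the assumptions \(c=0\) and \(t>0\), with illustrative variable names) that this function satisfies (12), using formula (5) for the conformable derivative:

```python
import sympy as sp

t = sp.symbols('t', positive=True)
lam, alpha, x0 = sp.symbols('lambda alpha x0', positive=True)

x = x0 * sp.exp(lam * t ** alpha / alpha)
T_alpha_x = t ** (1 - alpha) * sp.diff(x, t)  # conformable derivative via (5), c = 0

print(sp.simplify(T_alpha_x - lam * x))       # expected output: 0
```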

The following example shows why it is useful to work with conformable differential operators.

Example 3

Consider the following second order ordinary differential equation:

$$ t^{2-2\alpha } y^{\prime \prime }+t^{1-\alpha }\bigl((1-\alpha )t^{-\alpha }-2\bigr)y^{\prime }-3y=0, \quad 0< \alpha \leq 1, $$
(13)

where \(y(t)\) is assumed to have a continuous second order derivative. Equation (13) is not easy to solve by standard methods. However, it can be rewritten as the sequential (local) conformable differential equation

$$ y^{(2\alpha )}-2y^{(\alpha )}-3y=0, $$
(14)

where \(y^{(2\alpha )}=t^{1-\alpha }(t^{1-\alpha }y^{\prime })^{\prime }= t^{1-\alpha }[t^{1-\alpha }y^{\prime \prime }+(1-\alpha )t^{- \alpha }y^{\prime }]\) such that \(y^{(2\alpha )}\) stands for \((T_{\alpha }^{0} T_{\alpha }^{0} y)(t)\) and \(T_{\alpha }^{0} \) is defined in (5). By using the operator method, writing \(D^{\alpha }\) for \(T_{\alpha }^{0}\), we may write (14) in the form

$$ \bigl(D^{\alpha }-3\bigr) \bigl(D^{\alpha }+1\bigr)y=0. $$

The solution of the above equation has the form

$$ y(t)=c_{1}e^{3 \frac{t^{\alpha }}{\alpha }}+c_{2} e^{- \frac{t^{ \alpha }}{\alpha }}. $$

For \(\alpha = \frac{1}{2}\), (13) reduces to \(ty^{\prime \prime }+ (\frac{1}{2}-2\sqrt{t})y^{\prime }-3y=0\), which by the above arguments has the solution \(y(t)=c_{1}e^{6 \sqrt{t}}+c_{2} e^{- 2 \sqrt{t}}\). It is worth mentioning that, for \(\alpha =1\), the solution \(y(t)=c_{1}e^{3t}+c_{2} e^{-t}\) of the ordinary differential equation \(y^{\prime \prime }-2y^{\prime }-3y=0\) is recovered.
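The case \(\alpha =\frac{1}{2}\) can also be verified symbolically. The following SymPy sketch (illustrative only) checks that \(e^{6\sqrt{t}}\) and \(e^{-2\sqrt{t}}\) both satisfy \(ty^{\prime \prime }+(\frac{1}{2}-2\sqrt{t})y^{\prime }-3y=0\).

```python
import sympy as sp

t = sp.symbols('t', positive=True)

for y in (sp.exp(6 * sp.sqrt(t)), sp.exp(-2 * sp.sqrt(t))):
    residual = (t * sp.diff(y, t, 2)
                + (sp.Rational(1, 2) - 2 * sp.sqrt(t)) * sp.diff(y, t)
                - 3 * y)
    print(sp.simplify(residual))  # expected output: 0 for both solutions
```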

3 A Lyapunov-type inequality for a conformable BVP

Consider the following (local) conformable BVP:

$$ \bigl(\textbf{T}_{\alpha }^{c} x\bigr) (t)+r(t)x(t)=0, \quad 1< \alpha \leq 2, c< t< d, x(c)=x(d)=0. $$
(15)

Lemma 3

\(x(t)\) is a solution of the BVP (15) if and only if it satisfies the integral equation

$$ x(t)= \int_{c}^{d} H(t,s) r(s) x(s)\,ds, $$
(16)

where H is the Green function for (15) defined by

$$ H(t,s)= \textstyle\begin{cases} \frac{(t-c) (d-s)}{d-c}\cdot (s-c)^{ \alpha -2}, & c \leq t \leq s\leq d, \\ ( \frac{(t-c) (d-s)}{d-c}-(t-s) )\cdot (s-c)^{ \alpha -2}, & c \leq s \leq t\leq d. \end{cases} $$

Proof

Applying the integral \(I_{\alpha }^{c}\) to (15) and making use of Definition 3 and Theorem 1 with \(n=1\) and \(\gamma =\alpha -1\), we obtain

$$ x(t)=c_{1}+c_{2} (t-c)- \int_{c}^{t} (t-s)r(s) x(s) (s-c)^{\alpha -2}\,ds. $$

The condition \(x(c)=0\) implies that \(c_{1}=0\) and the condition \(x(d)=0\) implies that \(c_{2}=\frac{1}{d-c} \int_{c}^{d} (d-s)r(s)x(s) (s-c)^{\alpha -2}\,ds\) and hence

$$ x(t)=\frac{t-c}{d-c} \int_{c}^{d} (d-s)r(s)x(s) (s-c)^{\alpha -2}\,ds- \int_{c}^{t} (t-s)r(s) x(s) (s-c)^{\alpha -2}\,ds. $$

Then, using

$$\begin{aligned}& \int_{c}^{d} (d-s)r(s) x(s) (s-c)^{\alpha -2}\,ds \\& \quad = \int_{c}^{t} (d-s)r(s) x(s) (s-c)^{\alpha -2}\,ds+ \int_{t}^{d} (d-s)r(s) x(s) (s-c)^{\alpha -2}\,ds, \end{aligned}$$

the proof is concluded. □

Lemma 4

The Green function H defined above has the following properties:

  1. (i)

    \(H(t,s)\geq 0\) for all \(c\leq t,s \leq d\).

  2. (ii)

    \(\max_{t \in [c,d]} H(t,s)=H(s,s)\) for \(s \in [c,d]\).

  3. (iii)

    \(H(s,s)\) has a unique maximum, given by

    $$ \max_{s \in [c,d]} H(s,s)= H \biggl(\frac{c+(\alpha -1)d}{\alpha }, \frac{c+( \alpha -1)d}{\alpha } \biggr)=\frac{(d-c)^{\alpha -1}(\alpha -1)^{\alpha -1}}{\alpha^{\alpha }}. $$

Proof

Define the functions \(h_{1}(t,s)=\frac{(t-c) (d-s)}{d-c}\cdot (s-c)^{ \alpha -2}\) and \(h_{2}(t,s)= ( \frac{(t-c) (d-s)}{d-c}-(t-s) )\cdot (s-c)^{ \alpha -2}\).

  1. (i)

    It is clear that \(h_{1}\geq 0\). To determine the sign of \(h_{2}\), we observe that \((t-s)=\frac{t-c}{d-c} (d- (c+ \frac{(s-c)(d-c)}{(t-c)}) )\) and that \(c+\frac{(s-c)(d-c)}{(t-c)} \geq s\) if and only if \(s\geq c\). Together with \((s-c)^{ \alpha -2} \geq 0 \) we conclude that \(h_{2}\geq 0\) as well. Hence, the proof of the first part is complete.

  2. (ii)

    It is clear that \(h_{1}(t,s)\) is an increasing function of t. Differentiating \(h_{2}\) with respect to t for every fixed s, and following an analysis similar to that in the first part, we conclude that \(h_{2}\) is decreasing in t.

  3. (iii)

    Let \(g(s)=H(s,s)=\frac{(s-c)^{\alpha -1}(d-s)}{d-c}\). Then one can check that \(g^{\prime }(s)=0\) if and only if \(s=\frac{c+( \alpha -1)d}{\alpha }\), and hence the proof is complete.

 □
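Lemma 4(iii) can be illustrated numerically. The sketch below (not part of the proof, with the assumed sample values \(c=0\), \(d=1\), \(\alpha =1.5\)) maximizes \(H(s,s)\) on a grid and compares the result with the stated maximizer and maximum.

```python
import numpy as np

c, d, alpha = 0.0, 1.0, 1.5
s = np.linspace(c + 1e-9, d, 200001)
H_diag = (s - c) ** (alpha - 1) * (d - s) / (d - c)

s_star = (c + (alpha - 1) * d) / alpha
H_max = (d - c) ** (alpha - 1) * (alpha - 1) ** (alpha - 1) / alpha ** alpha

print(s[np.argmax(H_diag)], s_star)  # both approx 0.3333
print(H_diag.max(), H_max)           # both approx 0.3849
```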

Theorem 2

If the BVP (15) has a nontrivial solution, where r is a real-valued continuous function on \([c,d]\), then

$$ \int_{c}^{d} \bigl\vert r(t) \bigr\vert \,dt> \frac{\alpha^{\alpha }}{(\alpha -1)^{\alpha -1}(d-c)^{ \alpha -1}}. $$
(17)

Proof

Let \(x \in Y=C[c,d]\) be a nontrivial solution of the BVP (15), where \(\Vert x \Vert = \sup_{t \in [c,d]}\vert x(t) \vert \). By Lemma 3, x must satisfy

$$ x(t)= \int_{c}^{d} H(t,s) r(s)x(s)\,ds. $$

Therefore

$$ \Vert x \Vert \leq \max_{t \in [c,d]} \int_{c}^{d} \bigl\vert H(t,s)r(s) \bigr\vert \,ds \Vert x \Vert $$

which, since x is nontrivial (so that \(\Vert x \Vert >0\)), implies

$$ \max_{t \in [c,d]} \int_{c}^{d} \bigl\vert H(t,s)r(s) \bigr\vert \,ds \ge 1. $$

By using the properties of the Green function H proved in Lemma 4, and noting that \(H(s,s)< H(s^{*},s^{*})\) for \(s\neq s^{*}:=\frac{c+(\alpha -1)d}{\alpha }\) while r does not vanish identically (so that the inequality is strict), we end up with

$$ \frac{(d-c)^{\alpha -1}(\alpha -1)^{\alpha -1}}{\alpha^{\alpha }} \int _{c}^{d} \bigl\vert r(s) \bigr\vert \,ds >1. $$
(18)

Inequality (17) is an immediate conclusion of (18). □

Remark 1

If \(\alpha =2\), then (17) reduces to the classical Lyapunov inequality (3). We also invite the reader to compare the generalized Lyapunov inequality obtained in Theorem 2 with the one obtained recently and independently in [19]. The approach in [19] is different, and the authors there proved the existence of solutions in the space \(AC^{2}[c,d]=\{u \in C^{1}[c,d]: u^{\prime }\in AC[c,d]\}\). Further, our inequality provides, for example when applied to the Sturm-Liouville eigenvalue problem, a sharper lower estimate for the eigenvalues. Indeed, in Section 5 we can see that the lower estimate \(\frac{\alpha^{\alpha }}{(\alpha -1)^{\alpha -1}}\) is bigger than \(4 (\alpha -1)\) for \(1<\alpha \leq 2\). This is due to the observation that \(( \frac{\alpha }{\alpha -1} ) ^{\alpha }\geq 4\) for \(1<\alpha \leq 2\). Further, for convenience, in the next section we prove a sequential-type Lyapunov inequality as well.
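The observation \((\frac{\alpha }{\alpha -1})^{\alpha }\geq 4\) on \((1,2]\) is easy to check numerically; the short sketch below (illustrative only) evaluates the expression on a grid, the minimum being attained at \(\alpha =2\).

```python
import numpy as np

alpha = np.linspace(1.001, 2.0, 100000)
ratio = (alpha / (alpha - 1)) ** alpha

print(ratio.min())                # approx 4.0, attained at alpha = 2
print(np.all(ratio >= 4 - 1e-9))  # True
```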

4 A Lyapunov-type inequality for a sequential conformable BVP

Consider the following sequential conformable BVP:

$$ x^{(2\alpha )}(t)+r(t)x(t)=0,\quad c< t< d,\frac{1}{2}< \alpha \leq 1, x(c)=x(d)=0. $$
(19)

We shall prove a Lyapunov inequality for (19).

Lemma 5

\(x(t)\) is a solution of the BVP (19) if and only if it satisfies the integral equation

$$ x(t)= \int_{c}^{d} G(t,s) r(s) x(s)\,ds, $$
(20)

where G is the Green function of (19) defined by

$$ G(t,s)= \textstyle\begin{cases} g_{1}(t,s), & c \leq s \leq t\leq d, \\ g_{2}(t,s), & c \leq t \leq s\leq d, \end{cases} $$

such that

$$ g_{1}(t,s)=(s-c)^{\alpha -1} \biggl[ \frac{(s-c)^{\alpha }}{\alpha }- \frac{(t-c)^{ \alpha }(s-c)^{\alpha }}{\alpha (d-c)^{\alpha }} \biggr] $$

and

$$ g_{2}(t,s)=(s-c)^{\alpha -1} \biggl[ \frac{(t-c)^{\alpha }}{\alpha }- \frac{(t-c)^{ \alpha }(s-c)^{\alpha }}{\alpha (d-c)^{\alpha }} \biggr] . $$

Proof

Applying the integral \(I_{\alpha }^{c}\) twice to (19) and using Definition 3, we get

$$ x(t)=c_{1}+c_{2} \frac{(t-c)^{\alpha }}{\alpha }- \int_{c}^{t} r(s) x(s) (s-c)^{\alpha -1} \biggl[ \frac{(t-c)^{\alpha }}{\alpha }-\frac{(s-c)^{ \alpha }}{\alpha } \biggr]\,ds. $$

The condition \(x(c)=0\) implies that \(c_{1}=0\) and the condition \(x(d)=0\) implies that \(c_{2}=\frac{\alpha }{(d-c)^{\alpha }} \int_{c} ^{d} r(s)x(s) (s-c)^{\alpha -1} [ \frac{(d-c)^{\alpha }}{\alpha }-\frac{(s-c)^{ \alpha }}{\alpha } ]\,ds\) and hence

$$\begin{aligned} x(t) &= \frac{(t-c)^{\alpha }}{(d-c)^{\alpha }} \int_{c}^{d} r(s)x(s) (s-c)^{\alpha -1} \biggl[ \frac{(d-c)^{\alpha }}{\alpha }-\frac{(s-c)^{ \alpha }}{\alpha } \biggr]\,ds \end{aligned}$$
(21)
$$\begin{aligned} &\quad {}- \int_{c}^{t} r(s) x(s) (s-c)^{\alpha -1} \biggl[ \frac{(t-c)^{\alpha }}{\alpha }-\frac{(s-c)^{\alpha }}{\alpha } \biggr]\,ds. \end{aligned}$$
(22)

Finally, the proof is concluded by splitting the first integral in (21) into

$$\begin{aligned} &\int_{c}^{t} r(s)x(s) (s-c)^{\alpha -1} \biggl[ \frac{(d-c)^{\alpha }}{ \alpha }-\frac{(s-c)^{\alpha }}{\alpha } \biggr]\,ds \\ &\quad {}+ \int_{t}^{d} r(s)x(s) (s-c)^{\alpha -1} \biggl[ \frac{(d-c)^{\alpha }}{\alpha }-\frac{(s-c)^{\alpha }}{\alpha } \biggr]\,ds. \end{aligned}$$

 □

Lemma 6

The Green function G defined above has the following properties:

  1. (i)

    \(G(t,s)\geq 0\) for all \(c\leq t,s \leq d\).

  2. (ii)

    \(\max_{t \in [c,d]} G(t,s)=G(s,s)\) for \(s \in [c,d]\).

  3. (iii)

    \(f(s)=G(s,s)\) has a unique maximum, given by

    $$\begin{aligned} \max_{s \in [c,d]} G(s,s) &= G\bigl(\Lambda (c,d,\alpha ),\Lambda (c,d, \alpha )\bigr) \\ & = \frac{(d-c)^{2\alpha -1}}{3\alpha -1} \biggl( \frac{2\alpha -1}{3 \alpha -1} \biggr) ^{\frac{2\alpha -1}{\alpha }}, \end{aligned}$$

    where

    $$ \Lambda (c,d,\alpha )=c+ (d-c) \biggl[ \frac{(2\alpha -1)}{(3\alpha -1)} \biggr] ^{\frac{1}{\alpha }}. $$

Proof

Define the two functions \(g_{1}\) and \(g_{2}\) as in Lemma 5.

  1. (i)

    The proof follows by noting that the function \(g_{1}\geq 0\) since \(g_{1}(t,s)\) is decreasing in t for any s and \(g_{1}(d,s)=0\) for any s. Also, \(g_{2}\geq 0\) since \(g_{2}(t,s)\) is increasing in t for any s and \(g_{2}(c,s)= 0\) for any s.

  2. (ii)

    The proof of this part follows by noting that the function \(g_{1}(t,s)\) is decreasing in t for any s and that \(g_{2}(t,s)\) is increasing in t for any s by realizing that \(( 1-\frac{(s-c)^{ \alpha }}{(d-c)^{\alpha }} ) \geq 0\) for all s.

  3. (iii)

    Let \(f(s)=G(s,s)=(s-c)^{\alpha -1} [ \frac{(s-c)^{ \alpha }}{\alpha }-\frac{(s-c)^{2\alpha }}{\alpha (d-c)^{ \alpha }} ] \). Then one can show that \(f^{\prime }(s)=0\) if and only if \(s=\Lambda (c,d,\alpha )\), and hence the proof is concluded.

 □
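As with Lemma 4, Lemma 6(iii) can be illustrated numerically. The sketch below (not part of the proof, with the assumed sample values \(c=0\), \(d=1\), \(\alpha =0.75\)) compares a grid maximization of \(f(s)=G(s,s)\) with \(\Lambda (c,d,\alpha )\) and the stated maximum value.

```python
import numpy as np

c, d, alpha = 0.0, 1.0, 0.75
s = np.linspace(c + 1e-9, d, 200001)
f = (s - c) ** (alpha - 1) * ((s - c) ** alpha / alpha
                              - (s - c) ** (2 * alpha) / (alpha * (d - c) ** alpha))

Lam = c + (d - c) * ((2 * alpha - 1) / (3 * alpha - 1)) ** (1 / alpha)
f_max = ((d - c) ** (2 * alpha - 1) / (3 * alpha - 1)
         * ((2 * alpha - 1) / (3 * alpha - 1)) ** ((2 * alpha - 1) / alpha))

print(s[np.argmax(f)], Lam)  # both approx 0.2947
print(f.max(), f_max)        # both approx 0.4343
# for alpha = 1 the same formula gives (d - c)/4, as used in Remark 2 below
```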

Theorem 3

If the BVP (19) has a nontrivial solution, where r is a real-valued continuous function on \([c,d]\), then

$$ \int_{c}^{d} \bigl\vert r(t) \bigr\vert \,dt> \frac{1}{ G(\Lambda (c,d,\alpha ),\Lambda (c,d, \alpha ))}=\frac{3\alpha -1}{(d-c)^{2\alpha -1}} \biggl( \frac{3\alpha -1}{2\alpha -1} \biggr) ^{\frac{2\alpha -1}{\alpha }}. $$
(23)

Proof

Let \(x \in X=C[c,d]\) be a nontrivial solution of the BVP (19), where \(\Vert x \Vert = \sup_{t \in [c,d]}\vert x(t) \vert \). By Lemma 5, x must satisfy

$$ x(t)= \int_{c}^{d} G(t,s) r(s)x(s)\,ds. $$

Taking the norm leads to

$$ \Vert x \Vert \leq \max_{t \in [c,d]} \int_{c}^{d} \bigl\vert G(t,s)r(s) \bigr\vert \,ds \Vert x \Vert $$

which, since x is nontrivial (so that \(\Vert x \Vert >0\)), implies

$$ \max_{t \in [c,d]} \int_{c}^{d} \bigl\vert G(t,s)r(s) \bigr\vert \,ds \ge 1. $$

By using the properties of the Green function \(G(t,s)\) given in Lemma 6, and noting that \(G(s,s)< G(\Lambda ,\Lambda )\) for \(s\neq \Lambda :=\Lambda (c,d,\alpha )\) while r does not vanish identically (so that the inequality is strict), we come to the conclusion that

$$ G\bigl(\Lambda (c,d,\alpha ),\Lambda (c,d,\alpha )\bigr) \int_{c}^{d} \bigl\vert r(s) \bigr\vert \,ds >1, $$

from which (23) follows. □

Remark 2

Since \(G(\Lambda (c,d,\alpha ),\Lambda (c,d,\alpha ))\) tends to \(\frac{d-c}{4}\) as \(\alpha \rightarrow 1\), the classical Lyapunov inequality (3) is recovered in the limit \(\alpha \rightarrow 1\). In this case, one may also note that \(x^{(2\alpha )}(t) \rightarrow x^{\prime \prime }(t)\) as \(\alpha \rightarrow 1\).

5 Application

Consider the following conformable Sturm-Liouville eigenvalue problem of order \(\alpha \in (1,2]\).

$$ \bigl(\textbf{T}_{\alpha }^{0} x\bigr) (t)+\lambda x(t)=0, \quad 0< t< 1, x(0)=x(1)=0. $$
(24)

If λ is an eigenvalue of (24), then by Theorem 2, we must have \(\vert \lambda \vert > \frac{\alpha^{\alpha }}{(\alpha -1)^{\alpha -1}}\). If x is twice differentiable, then by means of (5), equation (24) is equivalent to

$$ t^{2-\alpha }x^{\prime \prime }(t)+\lambda x(t)=0, \quad t \in (0,1), x(0)=x(1)=0, $$
(25)

which can be considered as a type of generalized Sturm-Liouville eigenvalue problem. It is not easy to find the eigenvalues and eigenfunctions of this problem. However, if we consider the sequential conformable Sturm-Liouville eigenvalue problem

$$ x^{(2\alpha )}(t)+\lambda^{2} x(t)=0,\quad 0< t< 1, x(0)=x(1)=0, \frac{1}{2}< \alpha \leq 1, $$

then by using the operator method its solution is given by

$$ x(t)=c_{1}\cos \lambda \frac{t^{\alpha }}{\alpha }+c_{2}\sin \lambda \frac{t^{\alpha }}{\alpha }. $$

The boundary conditions imply that the eigenvalues are \(\lambda = n \alpha \pi \), \(n \in \mathbb{Z}\setminus \{0\}\), and the corresponding eigenfunctions are \(\sin \lambda \frac{t^{\alpha }}{\alpha }\). Notice that, for any nonzero \(n \in \mathbb{Z}\), we have

$$ \lambda^{2}=n^{2}\alpha^{2}\pi^{2} > G\bigl(\Lambda (0,1,\alpha ),\Lambda (0,1,\alpha )\bigr)^{-1}= (3\alpha -1) \biggl( \frac{3\alpha -1}{2\alpha -1} \biggr) ^{\frac{2\alpha -1}{\alpha }}. $$

This is a direct verification of Theorem 3.
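The last inequality can be spot-checked numerically for the first eigenvalue. The sketch below (illustrative only) verifies, on a grid of \(\alpha \in (\frac{1}{2},1]\), that \(\lambda^{2}=\alpha^{2}\pi^{2}\) (the case \(n=1\)) exceeds the bound from Theorem 3 with \(c=0\), \(d=1\).

```python
import numpy as np

alpha = np.linspace(0.501, 1.0, 100000)
lhs = (alpha * np.pi) ** 2
rhs = (3 * alpha - 1) * ((3 * alpha - 1) / (2 * alpha - 1)) ** ((2 * alpha - 1) / alpha)

print(np.all(lhs > rhs))  # True
print((lhs - rhs).min())  # smallest gap on the grid, strictly positive
```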