1 Introduction

Fractional calculus is a generalization of classical calculus that studies noninteger powers of differentiation operators. Over the last three decades, a growing number of researchers have used fractional calculus to describe the hereditary and memory properties of various processes and materials. By now, fractional operators have been built on many kinds of kernels, including the power-law kernel [1,2,3], the exponential-law kernel [4,5,6,7], the Mittag-Leffler kernel [8, 9], and the sinc-function kernel [10, 11]. These operators appear frequently in applications such as viscoelasticity [12, 13], rheology [14, 15], economics [16], bioengineering [17], electronic circuits [18], control theory [19,20,21], heat transfer [22,23,24], diffusion equations [9, 25, 26], and some special equations [27,28,29].

In this paper, we study the generalized Bagley–Torvik (B–T) equation [30], a valuable paradigm for investigating higher order fractional differential equations which is also popular in viscoelasticity and rheology:

$$\begin{aligned} &\ddot{x}(t) +b{^{C}D^{\alpha}}x(t)+cx(t)=f(t) \quad(0< \alpha< 2), \end{aligned}$$
(1.1)
$$\begin{aligned} &x(0) =A, \quad \quad \dot{x}(0)=B, \end{aligned}$$
(1.2)

where the fractional derivative \(^{C}D^{\alpha}\) is in the Caputo sense. In 2007, Bagley [31] investigated the equivalence between the Caputo and Riemann–Liouville derivatives and pointed out that, under two minimal restrictions, they are identical in describing linear viscoelastic materials. For \(\alpha=\frac{3}{2}\), Eq. (1.1) reduces to the famous general Bagley–Torvik equation

$$\begin{aligned} \ddot{x}(t)+bD^{3/2}x(t)+cx(t)=f(t). \end{aligned}$$
(1.3)

It was a striking success when Bagley [30], in 1984, described the behavior of a rigid body immersed in a viscous Newtonian fluid, which also demonstrated that the fractional derivative is a helpful instrument for expressing the response of a system built from familiar natural elements. For \(1<\alpha<2\), Eq. (1.1) follows naturally from the general B–T equation (1.3), while for \(0<\alpha \leq1\), Eq. (1.1) can be obtained from a fractional vibration equation for viscoelastic damped structures with a springpot component. The springpot, also known as the Scott Blair element, is an intermediate body between the purely elastic solid (Hookean element) and the perfectly viscous liquid (Newtonian element); see [32, 33]. In 1967, Slonimsky [32] introduced this viscous element in studying the laws of mechanical relaxation processes in polymers. Ever since, many researchers have studied fractional rheological models obtained by replacing the dashpot with a springpot in the classical rheological models [33,34,35,36,37]. In 1971, Caputo and Mainardi [35] generalized the standard linear solid (Zener model) with the Caputo fractional derivative, which differed from the previously used Riemann–Liouville derivative. Moreover, for several materials, they provided parametric values for the constitutive relation

$$\begin{aligned} \biggl[1+a\frac{\partial^{\mu}}{\partial t^{\mu}} \biggr]\sigma(t)= \biggl[m+b\frac{\partial^{\mu}}{\partial t^{\mu}} \biggr]\varepsilon(t) \quad(0< \mu\leq1). \end{aligned}$$
(1.4)

With these parametric values, they obtained closer agreement between the theory and the experimental data for various viscoelastic solids within their fractional dissipation model. In 1983, on the basis of the Scott Blair and Caputo models, Bagley [36] proposed the general form of the viscoelastic model with fractional-order derivatives

$$\begin{aligned} \sigma(t)+\sum_{m=1}^{M}b_{m}D^{\beta_{m}} \sigma(t)=E_{0} \varepsilon(t)+\sum_{n=1}^{N}E_{n}D^{\alpha_{n}} \varepsilon(t) \end{aligned}$$
(1.5)

and used these models in analyzing viscoelastic damped structures. In 2017, the authors of [14] proposed new fractional calculus operators to model rheological phenomena. Many research works on applications of viscoelastic models in mechanical systems can be found in [33,34,35,36,37,38,39]. For example, the single degree of freedom oscillator with a springpot [39] is shown in Fig. 1.

Figure 1. The single degree of freedom oscillator with a springpot

By physical law, we assume that the restoring force of the spring is \(F_{s}=-Kx(t)\) (\(K>0\)) and the springpot force is \(F_{p}=-E_{0} {^{C}D^{\alpha}}x(t)\) (\(E_{0}>0\)). If only the three forces \(F_{s}\), \(F_{p}\), and the external force \(f(t)\) act on the mass M, then, by Newton's second law, the motion of the mass M along a straight line is described by

$$\begin{aligned} M\ddot{x}(t)+E_{0}{^{C}D^{\alpha}}x(t)+Kx(t)=f(t) \quad (0< \alpha< 1). \end{aligned}$$
(1.6)

Therefore, the generalized B–T equation with fractional order \(0<\alpha<2\) has significant physical meaning in characterizing viscoelastic materials and modeling fluid mechanical processes.

Recently, the authors of [40] proposed a numerical solution to the generalized B–T equation. In [41], the boundary value problem (BVP) of the generalized B–T equation with fractional order \(1<\alpha< 2\) was discussed. Besides, the paper [42] investigated the BVP of the generalized B–T equation with fractional integral boundary conditions and fractional order \(0<\alpha<2\). As for the analytical solution to the generalized B–T equation, there are few papers on it. Podlubny presented the analytical solution to the general B–T equation (where \(\alpha= \frac{3}{2}\)) with zero initial conditions by means of the Green function [1]. In [43], the analytical solution to the general B–T equation (where \(\alpha=\frac{3}{2}\)) was presented for general initial conditions. Motivated by the above articles, we devote this paper to the well-posedness and the analytical solution of the generalized B–T equation with fractional order \(0<\alpha<2\). A novel max-metric containing the Caputo derivative is constructed. Subsequently, the existence of a solution to the initial value problem (IVP) is discussed in a metric space without invoking the completeness of the metric space. Furthermore, with the Laplace transform, the analytical solutions in terms of the Prabhakar function and the Wiman function are obtained. Thus we extend the well-known results about the general B–T equation. The remainder of our paper is arranged as follows. Section 2 collects some basic definitions and results. In Sect. 3, the existence and uniqueness of the solution and the analytical solutions of IVP (1.1)–(1.2) are derived. In the last section, two examples are presented to demonstrate the validity of our main results.

2 Preliminaries

This section collects some basic definitions and necessary lemmas. To date, various definitions of fractional calculus have been proposed; here we present only some of them, and for more definitions one can refer to [1,2,3,4,5, 8]. In this paper we consider the Riemann–Liouville fractional integral and the fractional derivative in the Caputo sense.

Definition 2.1

([2])

The Riemann–Liouville fractional integral \(\mathbb{I}_{a+}^{q}\) of order q of a function \(g(t)\in C[a,b]\) is defined as

$$\begin{aligned} \bigl(\mathbb{I}_{a+}^{q}g\bigr) (t) = \frac{1}{{\varGamma(q)}}{ \int_{a}^{t} {(t- \xi)} ^{q-1}}g(\xi)\, \mathrm{d}\xi, \end{aligned}$$
(2.1)

where \(\operatorname{Re}(q)> 0\), and \(\varGamma(\cdot)\) is the gamma function.
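As a rough numerical illustration (ours, not from [2]), the integral (2.1) can be approximated on a uniform grid by freezing g at the left endpoint of each subinterval and integrating the kernel exactly, which sidesteps the weak singularity at \(\xi=t\); the helper name rl_integral and the grid size are illustrative choices, and the rule is exact for constant g.

from math import gamma

# Left-rectangle product rule for the Riemann-Liouville integral (2.1):
# g is frozen at the left endpoint of each subinterval and the kernel
# (t - xi)^(q - 1) is integrated exactly, so the endpoint singularity
# causes no trouble.
def rl_integral(g, q, a, t, n=400):
    h = (t - a) / n
    xi = [a + j * h for j in range(n + 1)]
    total = 0.0
    for j in range(n):
        total += g(xi[j]) * ((t - xi[j]) ** q - (t - xi[j + 1]) ** q)
    return total / gamma(q + 1)

# Sanity check: for g = 1 the exact value is (t - a)^q / Gamma(q + 1).
print(rl_integral(lambda s: 1.0, 0.5, 0.0, 2.0), 2.0 ** 0.5 / gamma(1.5))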

Definition 2.2

([2])

The Caputo fractional derivative \({}^{C}D^{q}_{a+}\) of order q of a function \(g(t)\in C^{n}[a,b]\) is represented by

$$\begin{aligned} ^{C}D^{q}_{a+}g(t) = \textstyle\begin{cases} \frac{1}{\varGamma(n-q )}{\int_{a}^{t} (t - \xi) ^{n-q- 1}}g^{(n)}( \xi)\,\mathrm{d}\xi, & \text{if } q\notin N_{0}, \\ g^{(n)}(t),&\text{if } q=n\in N_{0}, \end{cases}\displaystyle \end{aligned}$$
(2.2)

where \(g^{(n)}(\xi)=\frac{\mathrm{d}^{n} }{\mathrm{d}\xi^{n}}g(\xi) \), \(\operatorname{Re}(q)\geq0 \), \(n=[\operatorname{Re}(q)]+1\), and \(N_{0}=\{0,1,\ldots\}\).
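Similarly, for \(0<q<1\) the Caputo derivative (2.2) admits an L1-type product rule in which \(g'\) is replaced by difference quotients; the sketch below is ours (the name caputo_l1, the uniform grid, and the restriction to \(0<q<1\) are assumptions made for illustration), and it is exact whenever g is linear.

from math import gamma

# L1-type approximation of the Caputo derivative (2.2) for 0 < q < 1:
# g' is replaced by difference quotients and the kernel (t - xi)^(-q)
# is integrated exactly over each subinterval.
def caputo_l1(g, q, a, t, n=400):
    h = (t - a) / n
    xi = [a + j * h for j in range(n + 1)]
    total = 0.0
    for j in range(n):
        slope = (g(xi[j + 1]) - g(xi[j])) / h
        total += slope * ((t - xi[j]) ** (1 - q) - (t - xi[j + 1]) ** (1 - q))
    return total / gamma(2 - q)

# Sanity check: the Caputo derivative of g(t) = t is t^(1 - q) / Gamma(2 - q).
print(caputo_l1(lambda s: s, 0.5, 0.0, 2.0), 2.0 ** 0.5 / gamma(1.5))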

Definition 2.3

([4])

The Caputo–Fabrizio–Caputo fractional derivative (CFC) is defined as follows:

$$\begin{aligned} ^{\mathrm{CFC}}D^{q}_{0+}f(t) = \frac{M(q)}{n-q} \int_{0}^{t} {f^{(n)}(\xi)\exp \biggl[- \frac{q(t-\xi)}{n-q} \biggr]}\,\mathrm{d}\xi, \end{aligned}$$
(2.3)

where \(n-1< q< n\), and \(M(q)\) is a normalization function such that \(M(0)=M(1)=1\).

Definition 2.4

([8])

The Atangana–Baleanu–Caputo fractional derivative (ABC) is defined as follows:

$$\begin{aligned} ^{\mathrm{ABC}}D^{q}_{0+}f(t) = \frac{M(q)}{n-q} \int_{0}^{t} {f^{(n)}(\xi)E _{q} \biggl[-\frac{q(t-\xi)^{q}}{n-q} \biggr]}\,\mathrm{d}\xi, \end{aligned}$$
(2.4)

where \(n-1< q< n\), and \(M(q)\) is a normalization function such that \(M(0)=M(1)=1\).

Lemma 2.5

([2])

If \(y(x)\in C^{n}[a,b]\), then

$$\begin{aligned}& \bigl({^{C}D^{q}_{a+}}\mathbb{I}_{a+}^{q}y \bigr) (x) =y(x) \end{aligned}$$
(2.5)

and

$$\begin{aligned}& \bigl(\mathbb{I}_{a+}^{q} {^{C}D^{q}_{a+}y} \bigr) (x) = y(x)-\sum _{k=0}^{n-1}\frac{y^{(k)}(a)}{k!}(x-a)^{k}. \end{aligned}$$
(2.6)
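For example, taking \(y(x)=x\), \(a=0\), and \(0<q<1\) (so that \(n=1\) and \({^{C}D^{q}_{0+}}x=\frac{x^{1-q}}{\varGamma(2-q)}\)), Eq. (2.6) is confirmed by

$$\begin{aligned} \bigl(\mathbb{I}_{0+}^{q} {^{C}D^{q}_{0+}}x \bigr) =\mathbb{I}_{0+}^{q}\frac{x^{1-q}}{\varGamma(2-q)} =\frac{1}{\varGamma(2-q)}\cdot\frac{\varGamma(2-q)}{\varGamma(2)}x^{(1-q)+q} =x =y(x)-y(0). \end{aligned}$$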

Lemma 2.6

([2])

The Laplace transform of the Caputo fractional derivative is

$$\begin{aligned} \mathfrak{L}\bigl\{ ^{C}D^{q}_{0+}f(t); s\bigr\} &=s^{q}F(s)-\sum_{k = 0} ^{n - 1}{s^{q-k-1}f^{(k)}(0)}. \end{aligned}$$
(2.7)
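For example, with \(f(t)=t\) and \(q=\frac{1}{2}\) (so \(n=1\), \(f(0)=0\)), we have \({^{C}D^{1/2}_{0+}}t=\frac{t^{1/2}}{\varGamma(3/2)}\) and \(\mathfrak{L}\{t^{1/2};s\}=\varGamma(3/2)s^{-3/2}\), so both sides of Eq. (2.7) equal

$$\begin{aligned} \mathfrak{L}\bigl\{ {^{C}D^{1/2}_{0+}}t; s\bigr\} =s^{-3/2} =s^{1/2}\cdot s^{-2}-s^{-1/2}\cdot0. \end{aligned}$$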

Lemma 2.7

([2])

Let \(q,\lambda\in\mathbb{C}\) with \(\operatorname{Re}(q)>0\), and let \(a\in\mathbb{R}\). Then

$$\begin{aligned} \bigl({^{C}D_{a+}^{q}} {E_{q}} \bigl(\lambda(z-a)^{q} \bigr) \bigr) (t)= \lambda{E_{q}} \bigl(\lambda(t-a)^{q} \bigr). \end{aligned}$$
(2.8)

Definition 2.8

([44] Prabhakar’s function)

The Prabhakar generalized Mittag-Leffler function is defined as

$$\begin{aligned} {E_{\mu,\nu}^{\rho}}(z) = \sum _{k = 0}^{\infty}{\frac{( \rho)_{k}z^{k}}{\varGamma(\mu k + \nu)k!}}, \end{aligned}$$
(2.9)

where \(\mu,\nu, \rho\in\mathbb{C}\) with \(\operatorname{Re}(\mu)>0\); \((\rho)_{0}=1\), \((\rho)_{k}=\frac{\varGamma(\rho+k)}{\varGamma(\rho)}\); \(\varGamma(\cdot)\) is the Euler gamma function.

For \(\rho=1\), it reduces to the Wiman function \({E_{\mu,\nu}}(z)\), and for \(\rho=\nu=1\) it becomes the Mittag-Leffler function \({E_{\mu}}(z)\); that is, \({E_{\mu, \nu}^{1}}(z)= {E_{\mu, \nu}}(z)\) and \({E_{\mu, 1 }^{1}}(z)={E_{\mu}}(z)\).
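The series (2.9) is straightforward to evaluate numerically. The following minimal sketch (ours, not from [44]) simply truncates the series, assuming real parameters and moderate \(|z|\), and checks two classical special cases.

from math import gamma, exp, cos

# Truncated series (2.9) for the Prabhakar function E^rho_{mu, nu}(z);
# the Pochhammer symbol, the factorial, and z^k are updated incrementally.
def prabhakar(z, mu, nu, rho, terms=50):
    total, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(terms):
        total += poch * zk / (gamma(mu * k + nu) * fact)
        poch, fact, zk = poch * (rho + k), fact * (k + 1), zk * z
    return total

# Special cases: E^1_{1,1}(z) = e^z and E^1_{2,1}(-z^2) = cos z.
print(prabhakar(1.0, 1, 1, 1), exp(1.0))
print(prabhakar(-4.0, 2, 1, 1), cos(2.0))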

Lemma 2.9

([44])

If \(\mu,\nu,\rho,\lambda\in\mathbb{C}\) with \(\operatorname{Re}(\mu) >0\), \(\operatorname{Re}(\nu) >0\), and \(\operatorname{Re}(\rho) >0\), then

$$\begin{aligned} \mathfrak{ L}\bigl\{ z^{\nu-1}E_{\mu, \nu}^{\rho} \bigl(\pm\lambda z^{\mu}\bigr);s \bigr\} = \int_{0}^{\infty}{e^{-sz}z^{\nu-1}E_{\mu,\nu}^{\rho}} \bigl(\pm \lambda z^{\mu}\bigr)\,\mathrm{d}z=\frac{s^{\mu\rho-\nu}}{(s^{\mu} \mp\lambda)^{\rho}}, \end{aligned}$$
(2.10)

where \(\operatorname{Re}(s)> \vert \lambda \vert ^{\frac {1}{\mu}}\) and \(\mathfrak{L}\{f(t);s\}=F(s)\) denotes the Laplace transform of \(f(t)\).
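For example, taking \(\mu=\nu=2\), \(\rho=\lambda=1\) with the lower signs, we have \(z^{\nu-1}E_{2,2}^{1}(-z^{2})=z\cdot\frac{\sin z}{z}=\sin z\), and Eq. (2.10) recovers the classical pair

$$\begin{aligned} \mathfrak{L}\{\sin z;s\}=\frac{s^{2\cdot1-2}}{(s^{2}+1)^{1}}=\frac{1}{s^{2}+1}. \end{aligned}$$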

Lemma 2.10

([1])

If \(\mu,\nu,\lambda\in\mathbb{C}\), \(\operatorname{Re}(\mu) >0\), \(\operatorname{Re}(\nu) >0\), then for \(k\in\mathbb{N}\),

$$\begin{aligned} \mathfrak{L}\bigl\{ z^{\mu k+\nu-1}E_{\mu,\nu}^{(k)} \bigl(\pm\lambda z^{ \mu}\bigr);s\bigr\} =\frac{k!s^{\mu-\nu}}{(s^{\mu}\mp\lambda)^{k+1}} \quad\bigl( \operatorname{Re}(s)> \vert \lambda \vert ^{\frac {1}{\mu}}\bigr), \end{aligned}$$
(2.11)

where

$$\begin{aligned} E_{\mu,\nu}^{(k)}(z)=\frac{\mathrm{d}^{k}}{\mathrm{d}z^{k} } {E_{ \mu,\nu}}(z)=\sum_{j = 0}^{\infty}{ \frac{(j+k)!z^{j}}{j! \varGamma(\mu j+\mu k + \nu)}}. \end{aligned}$$
(2.12)
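The derivative \(E_{\mu,\nu}^{(k)}\) in Eq. (2.12) can be evaluated in the same way; the short sketch below (the name wiman_deriv is ours) truncates the series and checks it against the elementary case \(E_{1,1}(z)=e^{z}\), all of whose derivatives equal \(e^{z}\).

from math import gamma, factorial, exp

# Truncated series (2.12) for the k-th derivative of the Wiman function.
def wiman_deriv(z, mu, nu, k, terms=50):
    return sum(factorial(j + k) * z ** j
               / (factorial(j) * gamma(mu * (j + k) + nu))
               for j in range(terms))

# Check: E_{1,1}(z) = e^z, so its third derivative at z = 1 is e.
print(wiman_deriv(1.0, 1, 1, 3), exp(1.0))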

3 Main results

In this section, with a novel max-metric \(d_{\lambda}\), the well-posedness of IVP (1.1)–(1.2) is investigated. Then we present the analytical solution to the IVP in terms of the Prabhakar function and the Wiman function, respectively. Throughout this paper, the Caputo fractional derivative of the function \(x(t)\) is denoted by \(x^{( \alpha)}(t)\).

3.1 Existence and uniqueness

We first define a novel max-metric \(d_{\lambda}\) containing \(x^{(\alpha)}\); second, we show that any two solutions of IVP (1.1)–(1.2) coincide in the metric space \((C^{2}[0,T],d_{\lambda})\); and last, we demonstrate that an approximating sequence \(\{x_{i}\}_{i=0}^{\infty}\) for IVP (1.1)–(1.2) is a Cauchy sequence in this metric space. Since every Cauchy sequence of real numbers converges to some limit, and the limit satisfies IVP (1.1)–(1.2), we obtain the unique solution.

Rearranging IVP (1.1)–(1.2), we have

$$\begin{aligned} &\ddot{x}(t) =f(t)-b{^{C}D^{\alpha}}x(t)-cx(t)=g \bigl(t,x(t),x^{( \alpha)}(t) \bigr)\quad (0< \alpha< 2), \end{aligned}$$
(3.1)
$$\begin{aligned} &x(0) =A, \quad\quad \dot{x}(0)=B, \end{aligned}$$
(3.2)

where \(g\) is continuous whenever \(f\) is. The following equations are equivalent to IVP (1.1)–(1.2) for the same α.

Lemma 3.1

Let \(g(t,x(t),x^{(\alpha)}(t))=f(t)-b{^{C}D^{\alpha}}x(t)-cx(t)\). Then IVP (1.1)–(1.2) is equivalent to the following equations:

$$\begin{aligned} x(t)= \int_{0}^{t}(t-s)g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)\,\mathrm{d}s+Bt+A, \quad t\in[0,T], \end{aligned}$$
(3.3)

for \(0<\alpha\leq1\),

$$\begin{aligned} x^{(\alpha)}(t)=\frac{1}{\varGamma(2-\alpha)} \int_{0}^{t} (t-s)^{1- \alpha}g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)\,\mathrm{d}s+\frac{B t^{1- \alpha}}{\varGamma(2-\alpha)}, \quad t \in[0,T] \end{aligned}$$
(3.4)

and for \(1<\alpha<2\),

$$\begin{aligned} x^{(\alpha)}(t)=I^{2-\alpha}\ddot{x}= \frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha} g \bigl(s,x(s),x^{(\alpha)}(s) \bigr) \,\mathrm{d}s, \quad t\in[0,T]. \end{aligned}$$
(3.5)

Proof

Taking the Laplace transform of Eq. (3.1), we derive

$$\begin{aligned} X(s)=s^{-2}G(s)+s^{-2}B+s^{-1}A, \end{aligned}$$

where \(G(s)\) denotes the Laplace transform of \(g (t,x(t),x^{(\alpha)}(t) )\). According to the convolution property and the inverse Laplace transform, we obtain Eq. (3.3). Taking the derivative with respect to t on both sides of Eq. (3.3) yields

$$\begin{aligned} \dot{x}(t) &=\frac{\mathrm{d}}{\mathrm{d}t} \int_{0}^{t}(t-s)g \bigl(s,x(s),x ^{(\alpha)}(s) \bigr)\,\mathrm{d}s+B \\ &= \int_{0}^{t}g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)\,\mathrm{d}s+B. \end{aligned}$$
(3.6)

For \(0<\alpha<1\) and all \(t\in[0,T]\), by virtue of the Caputo derivative Definition 2.2, we have

$$\begin{aligned} x^{(\alpha)}(t)={}&I^{1-\alpha}\dot{x}(t) =\frac{1}{\varGamma(1-\alpha )} \int_{0}^{t}(t-s)^{-\alpha} \biggl( \int_{0}^{s}g \bigl(\xi,x(\xi),x ^{(\alpha)}( \xi) \bigr)\,\mathrm{d}\xi+B \biggr)\,\mathrm{d}s. \end{aligned}$$

By changing the order of integration in the iterated integrals, we obtain

$$\begin{aligned} x^{(\alpha)}(t)={}&\frac{1}{\varGamma(1-\alpha)} \int_{0}^{t} \biggl( \int_{\xi}^{t}(t-s)^{-\alpha}\,\mathrm{d}s \biggr) g \bigl(\xi,x(\xi),x^{(\alpha )}(\xi) \bigr)\,\mathrm{d}\xi+ \frac{B}{\varGamma(1-\alpha)} \int_{0}^{t} (t-s)^{- \alpha}\,\mathrm{d}s \\ ={}&\frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-\xi)^{1-\alpha}g \bigl(\xi,x( \xi),x^{(\alpha)}(\xi) \bigr)\,\mathrm{d}\xi+\frac{B t ^{1-\alpha}}{\varGamma(2-\alpha)} \\ ={}&\frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha}g \bigl(s,x(s),x ^{(\alpha)}(s) \bigr)\,\mathrm{d}s+\frac{B t^{1-\alpha}}{\varGamma(2- \alpha)}. \end{aligned}$$

In addition, when \(\alpha=1\), the result follows directly from Eq. (3.6) and is clearly contained in Eq. (3.4).

For \(1<\alpha<2\), with the Caputo derivative Definition 2.2 and Eq. (3.1), we have

$$\begin{aligned} x^{(\alpha)}(t)=I^{2-\alpha}\ddot{x}= \frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha} g \bigl(s,x(s),x^{(\alpha)}(s) \bigr) \,\mathrm{d}s, \quad t\in[0,T]. \end{aligned}$$

This completes the proof. □

Define the block \(S:=\{(t,u,v)\in \mathbb{R}^{3}:t\in[0,T],(u,v) \in \mathbb{R}^{2}\}\). Let the real-valued function \(g:S\rightarrow \mathbb{R}\) be Lipschitz continuous with respect to u and v. Let \(\mu>0\) and \(\lambda>0\) be constants, and let \(\mathbb{X}:=C^{2}([0,T])\) be the set of twice continuously differentiable functions on \([0,T]\). Consider the metric space \((\mathbb{X},d_{ \lambda})\) equipped with the novel max-metric:

$$\begin{aligned} d_{\lambda}(x,y):=\max_{t\in[0,T]} \frac{ \vert x(t)-y(t) \vert }{E _{\mu}(\lambda t^{\mu})}+\max_{t\in[0,T]}\frac{ \vert x^{(\alpha)}(t)-y^{(\alpha)}(t) \vert }{E_{\mu }(\lambda t^{\mu})} \quad\text{for }\forall x,y\in\mathbb{X}. \end{aligned}$$
(3.7)

One can check that the space \((\mathbb{X},d_{\lambda})\) is complete.

Theorem 3.2

If there exist two real constants \(L\geq0\) and \(M\geq0\) such that, for all \((t,u_{i},v_{i})\in S\) (\(i=1,2\)),

$$\begin{aligned} \bigl\vert g(t,u_{1},v_{1})-g(t,u_{2},v_{2}) \bigr\vert \leq L \vert u_{1}-u_{2} \vert +M \vert v_{1}-v_{2} \vert , \end{aligned}$$
(3.8)

and \(\max\{L,M\}\frac{T^{2}}{2}<1\), then IVP (3.1)–(3.2) has at most one solution \(x=x(t)\) defined on \([0,T]\).

Proof

Let the constants L, M be as in Eq. (3.8), and define \(\delta := \max\{L,M\} (\frac{T^{2}}{2}+\frac{1}{ \lambda} )\). Since \(\max\{L,M\}\frac{T^{2}}{2}<1\), we can choose λ sufficiently large that \(\delta<1\). For any two solutions x, y of IVP (1.1)–(1.2), we show that \(x\equiv y\) in the metric space \((\mathbb{X},d_{\lambda})\). With Eq. (3.3) and Eq. (3.8), we derive

$$\begin{aligned} \frac{ \vert x(t)-y(t) \vert }{E_{2-\alpha}(\lambda t^{2-\alpha})} \leq{}&\frac{1}{E _{2-\alpha}(\lambda t^{2-\alpha})} \int_{0}^{t}(t-s) \bigl\vert g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)-g \bigl(s,y(s),y^{(\alpha)}(s) \bigr) \bigr\vert \,\mathrm{d}s \\ \leq{}&\frac{1}{E_{2-\alpha}(\lambda t^{2-\alpha})} \int_{0}^{t}(t-s) \bigl(L \bigl\vert x(s)-y(s) \bigr\vert +M \bigl\vert x^{(\alpha)}(s)-y^{(\alpha)}(s) \bigr\vert \bigr) \,\mathrm{d}s \\ \leq{}&\max\{L,M\} \biggl(\max_{t\in[0,T]}\frac{ \vert x(t)-y(t) \vert }{E_{2-\alpha}(\lambda t^{2-\alpha})} \\ & {} +\max_{t\in[0,T]}\frac{ \vert x^{(\alpha)}(t)-y^{(\alpha)}(t) \vert }{E_{2-\alpha}(\lambda t^{2- \alpha})} \biggr) \biggl\vert \int_{0}^{t}(t-s)\,\mathrm{d}s \biggr\vert \\ \leq{}&\max\{L,M\}\frac{T^{2}}{2}d_{\lambda}(x,y). \end{aligned}$$

Furthermore, for \(0<\alpha<2\), from Lemma 3.1, we have

$$ \begin{aligned} \frac{ \vert x^{(\alpha)}(t)-y^{(\alpha)}(t) \vert }{E_{2-\alpha}(\lambda t^{2-\alpha})} ={}&\frac{1}{E_{2-\alpha}(\lambda t^{2-\alpha})} \frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha} \bigl\vert g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)-g \bigl(s,y(s),y^{(\alpha)}(s) \bigr) \bigr\vert \,\mathrm{d}s \\ \leq{}&\frac{1}{E_{2-\alpha}(\lambda t^{2-\alpha})}\frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha} E_{2-\alpha}\bigl(\lambda s^{2-\alpha}\bigr) \frac{L \vert x(s)-y(s) \vert +M \vert x^{(\alpha)}(s)-y^{(\alpha)}(s) \vert }{E_{2-\alpha}(\lambda s^{2-\alpha})} \,\mathrm{d}s \\ \leq{}&\frac{1}{E_{2-\alpha}(\lambda t^{2-\alpha})} \max\{L,M\}d_{\lambda}(x,y) \frac{1}{\varGamma(2-\alpha)} \int_{0}^{t}(t-s)^{1-\alpha}E_{2-\alpha} \bigl(\lambda s^{2-\alpha}\bigr)\,\mathrm{d}s. \end{aligned} $$

Applying Eq. (2.8) and Eq. (2.6), we have

$$\begin{aligned} \frac{ \vert x^{(\alpha)}(t)-y^{(\alpha)}(t) \vert }{E_{2-\alpha}(\lambda t^{2-\alpha})} \leq{}&\max\{L,M\}d_{\lambda}(x,y) \max _{t\in[0,T]} \biggl\{ \frac{1}{E_{2-\alpha}(\lambda t^{2- \alpha})} \biggl(\mathbb{I}_{0+}^{2-\alpha}{^{C}D_{0+}^{2-\alpha }} \frac{E _{2-\alpha}(\lambda t^{2-\alpha})}{\lambda} \biggr) \biggr\} \\ \leq{}&\max\{L,M\}d_{\lambda}(x,y)\max_{t\in[0,T]} \biggl\{ \frac{1}{E _{2-\alpha}(\lambda t^{2-\alpha})} \biggl(\frac{E_{2-\alpha}(\lambda t^{2-\alpha})}{\lambda}-\frac{1}{\lambda} \biggr) \biggr\} \\ \leq{}&\max\{L,M\}\frac{1}{\lambda}d_{\lambda}(x,y) \max _{t\in[0,T]} \biggl\{ 1-\frac{1}{E_{2-\alpha}(\lambda T^{2- \alpha})} \biggr\} . \end{aligned}$$

Since \(E_{2-\alpha}(\lambda t^{2-\alpha})\) with \(2-\alpha>0\) is continuous and strictly increasing on \([0,T]\), we derive

$$\begin{aligned} \frac{1}{E_{2-\alpha}(\lambda T^{2-\alpha})}\leq\frac{1}{E_{2- \alpha}(\lambda t^{2-\alpha})}\leq1\quad \text{for all } t\in[0,T]. \end{aligned}$$

So,

$$\begin{aligned} \frac{ \vert x^{(\alpha)}(t)-y^{(\alpha)}(t) \vert }{E_{2-\alpha}(\lambda t^{2-\alpha})}\leq\max\{L,M\}\frac{1}{\lambda}d_{\lambda}(x,y). \end{aligned}$$

Therefore, any two solutions x, y of IVP (3.1)–(3.2) belong to \((\mathbb{X},d_{\lambda})\) and satisfy

$$\begin{aligned} d_{\lambda}(x,y)\leq\max\{L,M\} \biggl(\frac{T^{2}}{2}+ \frac{1}{ \lambda} \biggr)d_{\lambda}(x,y)=\delta d_{\lambda}(x,y), \end{aligned}$$

which yields

$$\begin{aligned} (1-\delta)d_{\lambda}(x,y)\leq0. \end{aligned}$$

Since λ can be chosen such that \(\delta<1\), it follows that \(d_{\lambda }(x,y)=0\), yielding \(x\equiv y\). Hence, IVP (3.1)–(3.2) has at most one solution. □

Theorem 3.3

Under the same conditions as in Theorem 3.2, IVP (3.1)–(3.2) has a unique solution defined on \([0, T]\).

Proof

Define a sequence of functions \(\{x_{i}\}_{i=0}^{\infty}\) with \(x_{0}:=Bt+A\) and

$$\begin{aligned} x_{k+1}:= \int_{0}^{t}(t-s) g \bigl(s,x_{k}(s),x_{k}^{(\alpha)}(s) \bigr)\,\mathrm{d}s+Bt+A \quad (k=0,1,2,\ldots). \end{aligned}$$
(3.9)

First, we show that \(\{x_{i}\}_{i=0}^{\infty}\) is a Cauchy sequence on \([0,T]\). By the estimates in the proof of Theorem 3.2, we have

$$\begin{aligned} d_{\lambda}(x_{i+1},x_{i})\leq\delta d_{\lambda}(x_{i},x_{i-1})\quad (i=1,2,\ldots). \end{aligned}$$

We proceed using induction as follows:

$$\begin{aligned} d_{\lambda}(x_{i+1},x_{i})\leq\delta^{i} d_{\lambda }(x_{1},x_{0})\quad (i=0,1,2,\ldots), \end{aligned}$$

where \(\lambda>0\) in the definition of \(d_{\lambda}\) is chosen such that \(\delta:=\max\{L,M\}(\frac{T^{2}}{2}+ \frac{1}{ \lambda})<1\). Applying the triangle inequality, for any \(\varepsilon>0\) we can find a large \(N\in\mathbb{N}_{+}\) such that, for all natural numbers \(m>n > N\),

$$\begin{aligned} d_{\lambda}(x_{m},x_{n}) &\leq d_{\lambda}(x_{m},x_{m-1})+d_{\lambda }(x_{m-1},x_{m-2})+ \cdots+d_{\lambda}(x_{n+1},x_{n}) \\ &\leq\bigl(\delta^{m-1}+\delta^{m-2}+\cdots+ \delta^{n}\bigr)d_{\lambda}(x _{1},x_{0}) \\ &< \frac{\delta^{n}}{1-\delta}d_{\lambda}(x_{1},x_{0})< \varepsilon. \end{aligned}$$

This proves that \(\{x_{i}\}_{i=0}^{\infty}\) is a Cauchy sequence. Since every Cauchy sequence of real numbers converges to some limit, there is a continuously differentiable function \(x=x(t)\) such that \(\lim_{i\rightarrow\infty}d_{\lambda}(x_{i},x)=0\).

Second, letting \(i\rightarrow\infty\) in Eq. (3.9), the limit function \(x(t)\) satisfies

$$\begin{aligned} x(t)= \int_{0}^{t}(t-s) g \bigl(s,x(s),x^{(\alpha)}(s) \bigr)\,\mathrm{d}s+Bt+A. \end{aligned}$$

Hence, the limit function \(x(t)\) is a solution to IVP (3.1)–(3.2) on \([0,T]\). Combined with Theorem 3.2, IVP (3.1)–(3.2) has a unique solution defined on \([0, T]\). □
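The construction above can also be mimicked numerically. The rough sketch below is entirely ours (the name picard_bt, the uniform grid, the trapezoid rule, and the parameter values are illustrative assumptions): it computes the Picard iterates (3.9) for \(0<\alpha\leq1\), obtaining \(x^{(\alpha)}\) of each iterate from Eq. (3.4), and prints the sup-distance between successive iterates, which decays as the contraction argument suggests.

from math import gamma

# Picard iteration (3.9) for 0 < alpha <= 1 on a uniform grid; x^(alpha) of
# the current iterate comes from Eq. (3.4), and the integrals are approximated
# with the composite trapezoid rule.
def picard_bt(alpha, b, c, f, A, B, T=1.0, n=200, iters=10):
    h = T / n
    t = [i * h for i in range(n + 1)]
    x = [A + B * ti for ti in t]                                 # x_0(t) = A + B t
    xa = [B * ti ** (1 - alpha) / gamma(2 - alpha) for ti in t]  # Caputo derivative of x_0

    def trapz(vals):                                             # trapezoid rule on the grid
        return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

    for it in range(iters):
        g = [f(ti) - b * ai - c * xi for ti, ai, xi in zip(t, xa, x)]
        x_new = [A]
        xa_new = [B * 0.0 ** (1 - alpha) / gamma(2 - alpha)]
        for i in range(1, n + 1):
            x_new.append(trapz([(t[i] - t[j]) * g[j] for j in range(i + 1)]) + B * t[i] + A)
            xa_new.append((trapz([(t[i] - t[j]) ** (1 - alpha) * g[j] for j in range(i + 1)])
                           + B * t[i] ** (1 - alpha)) / gamma(2 - alpha))
        print(it, max(abs(u - v) for u, v in zip(x_new, x)))     # distance to previous iterate
        x, xa = x_new, xa_new
    return t, x

# Illustration: alpha = 0.5, b = c = 1, f(t) = 1, x(0) = x'(0) = 0 on [0, 1].
picard_bt(0.5, 1.0, 1.0, lambda s: 1.0, 0.0, 0.0)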

3.2 Analytical solutions

In this subsection, we present two equivalent analytical expressions for the solution of IVP (1.1)–(1.2).

Theorem 3.4

The general analytical solution of IVP (1.1)(1.2), on the basis of the Prabhakar function, can be written, for \(0<\alpha \leq1\), as follows:

$$\begin{aligned} x(t)={} &A\sum_{k=0}^{\infty}(-b)^{k}t^{(2-\alpha)k}E_{2,(2- \alpha)k+1}^{k+1} \bigl(-ct^{2}\bigr) +Ab\sum_{k=0}^{\infty}(-b)^{k}t ^{(2-\alpha)k-\alpha+2}E_{2,(2-\alpha)k-\alpha+3}^{k+1} \bigl(-ct^{2}\bigr) \\ &{}+B\sum_{k=0}^{\infty}(-b)^{k}t^{(2-\alpha)k+1}E_{2,(2- \alpha)k+2}^{k+1} \bigl(-ct^{2}\bigr) \\ &{}+\sum_{k=0}^{\infty}(-b)^{k} \int_{0}^{t}f(t-s)s^{(2-\alpha )k+1}E_{2,(2-\alpha)k+2}^{k+1} \bigl(-cs^{2}\bigr)\,\mathrm{d}s \end{aligned}$$
(3.10)

and for \(1<\alpha<2\),

$$\begin{aligned} x(t)={} &A\sum_{k=0}^{\infty}(-b)^{k}t^{(2-\alpha)k}E_{2,(2- \alpha)k+1}^{k+1} \bigl(-ct^{2}\bigr) +Ab\sum_{k=0}^{\infty}(-b)^{k}t ^{(2-\alpha)k-\alpha+2}E_{2,(2-\alpha)k-\alpha+3}^{k+1} \bigl(-ct^{2}\bigr) \\ &{}+Bb\sum_{k=0}^{\infty}(-b)^{k}t^{(2-\alpha)k-\alpha+3}E _{2,(2-\alpha)k-\alpha+4}^{k+1} \bigl(-ct^{2}\bigr) +B\sum _{k=0}^{ \infty}(-b)^{k}t^{(2-\alpha)k+1}E_{2,(2-\alpha)k+2}^{k+1} \bigl(-ct^{2}\bigr) \\ &{}+\sum_{k=0}^{\infty}(-b)^{k} \int_{0}^{t}f(t-s)s^{(2-\alpha )k+1}E_{2,(2-\alpha)k+2}^{k+1} \bigl(-cs^{2}\bigr)\,\mathrm{d}s. \end{aligned}$$
(3.11)

Proof

According to Eq. (2.7), taking the Laplace transform of Eq. (1.1), we obtain

$$\begin{aligned} &X(s) =\bigl(s^{2}+bs^{\alpha}+c\bigr)^{-1}\bigl[ \bigl(s+bs^{\alpha-1}\bigr)A+B+F(s)\bigr],\quad\text{for } 0< \alpha\leq1, \\ &X(s) =\bigl(s^{2}+bs^{\alpha}+c\bigr)^{-1}\bigl[ \bigl(s+bs^{\alpha-1}\bigr)A+\bigl(1+bs^{\alpha -2}\bigr)B+F(s)\bigr]\quad \text{for } 1< \alpha< 2. \end{aligned}$$

The following expression holds:

$$\begin{aligned} \bigl(s^{2}+bs^{\alpha}+c\bigr)^{-1}= \frac{1}{s^{2}+c}\biggl(1+\frac{bs^{\alpha}}{s ^{2}+c}\biggr)^{-1} =\sum _{k=0}^{\infty}(-b)^{k}\frac{s^{k\alpha}}{(s ^{2}+c)^{k+1}}. \end{aligned}$$

Applying the above formula, we have, for \(0<\alpha\leq1\),

$$\begin{aligned} X(s)={}&A\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{k\alpha+1}}{(s ^{2}+c)^{k+1}} +Ab\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{(k+1) \alpha-1}}{(s^{2}+c)^{k+1}} \\ & {} +B\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{k\alpha}}{(s^{2}+c)^{k+1}} +F(s)\sum_{k=0}^{\infty}(-b)^{k} \frac{s ^{k\alpha}}{(s^{2}+c)^{k+1}}, \end{aligned}$$

and for \(1<\alpha<2\),

$$\begin{aligned} X(s)={}&A\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{k\alpha+1}}{(s ^{2}+c)^{k+1}} +Ab\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{(k+1) \alpha-1}}{(s^{2}+c)^{k+1}} +Bb\sum_{k=0}^{\infty}(-b)^{k} \frac{s ^{(k+1)\alpha-2}}{(s^{2}+c)^{k+1}} \\ & {} +B\sum_{k=0}^{\infty}(-b)^{k} \frac{s^{k\alpha}}{(s^{2}+c)^{k+1}} +F(s)\sum_{k=0}^{\infty}(-b)^{k} \frac{s ^{k\alpha}}{(s^{2}+c)^{k+1}}. \end{aligned}$$

Applying the inverse Laplace transform term by term with Eq. (2.10), the results follow. □
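A quick numerical sanity check of Theorem 3.4 (ours, not part of the proof): for \(\alpha=1\) the Caputo derivative reduces to \(\dot{x}\), so IVP (1.1)–(1.2) becomes the classical damped oscillator \(\ddot{x}+b\dot{x}+cx=f\), whose homogeneous solution is elementary. The sketch below truncates Eq. (3.10) with \(f\equiv0\) and compares it with the closed-form underdamped solution; prabhakar restates the truncated series (2.9) sketched after Definition 2.8, and all parameter values are illustrative.

from math import gamma, exp, cos, sin, sqrt

def prabhakar(z, mu, nu, rho, terms=60):       # truncated series (2.9)
    total, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(terms):
        total += poch * zk / (gamma(mu * k + nu) * fact)
        poch, fact, zk = poch * (rho + k), fact * (k + 1), zk * z
    return total

def x_series(t, alpha, b, c, A, B, K=25):
    # Truncation of Eq. (3.10) with f = 0 and 0 < alpha <= 1.
    s = 0.0
    for k in range(K):
        s += (A * (-b) ** k * t ** ((2 - alpha) * k)
              * prabhakar(-c * t ** 2, 2, (2 - alpha) * k + 1, k + 1))
        s += (A * b * (-b) ** k * t ** ((2 - alpha) * k - alpha + 2)
              * prabhakar(-c * t ** 2, 2, (2 - alpha) * k - alpha + 3, k + 1))
        s += (B * (-b) ** k * t ** ((2 - alpha) * k + 1)
              * prabhakar(-c * t ** 2, 2, (2 - alpha) * k + 2, k + 1))
    return s

b, c, A, B = 0.2, 1.0, 1.0, 0.0
w = sqrt(c - b ** 2 / 4)                       # underdamped frequency for alpha = 1
for t in (0.5, 1.0, 2.0):
    exact = exp(-b * t / 2) * (A * cos(w * t) + (B + b * A / 2) / w * sin(w * t))
    print(t, x_series(t, 1.0, b, c, A, B), exact)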

Theorem 3.5

The general analytical solution of IVP (1.1)–(1.2), on the basis of the Wiman function, can be written, for \(0<\alpha\leq1\), as follows:

$$\begin{aligned} x(t)={} &A\sum_{k=0}^{\infty} \frac{(-c)^{k}}{k!}t^{2k}E_{2- \alpha,k\alpha+1}^{(k)} \bigl(-bt^{2-\alpha}\bigr) +Ab\sum_{k=0}^{ \infty} \frac{(-c)^{k}}{k!}t^{2k-\alpha+2}E_{2-\alpha,(k-1)\alpha+3} ^{(k)} \bigl(-bt^{2-\alpha}\bigr) \\ &{}+B\sum_{k=0}^{\infty}\frac{(-c)^{k}}{k!}t^{2k+1}E_{2-\alpha ,2+k\alpha}^{(k)} \bigl(-bt^{2-\alpha}\bigr) \\ &{}+\sum_{k=0}^{\infty}\frac{(-c)^{k}}{k!} \int_{0}^{t}f(t-s)s ^{2k+1}E_{2-\alpha,2+k\alpha}^{(k)} \bigl(-bs^{2-\alpha}\bigr)\,\mathrm{d}s, \end{aligned}$$
(3.12)

and for \(1<\alpha<2\),

$$\begin{aligned} x(t)={} &A\sum_{k=0}^{\infty} \frac{(-c)^{k}}{k!}t^{2k}E_{2- \alpha,k\alpha+1}^{(k)} \bigl(-bt^{2-\alpha}\bigr) +Ab\sum_{k=0}^{ \infty} \frac{(-c)^{k}}{k!}t^{2k-\alpha+2}E_{2-\alpha,(k-1)\alpha+3} ^{(k)} \bigl(-bt^{2-\alpha}\bigr) \\ &{}+Bb\sum_{k=0}^{\infty}\frac{(-c)^{k}}{k!}t^{2k-\alpha+3}E _{2-\alpha,(k-1)\alpha+4}^{(k)} \bigl(-bt^{2-\alpha}\bigr) \\ &{}+B\sum_{k=0}^{\infty}\frac{(-c)^{k}}{k!}t^{2k+1}E_{2-\alpha ,2+k\alpha}^{(k)} \bigl(-bt^{2-\alpha}\bigr) \\ &{}+\sum_{k=0}^{\infty}\frac{(-c)^{k}}{k!} \int_{0}^{t}f(t-s)s ^{2k+1}E_{2-\alpha,2+k\alpha}^{(k)} \bigl(-bs^{2-\alpha}\bigr)\,\mathrm{d}s. \end{aligned}$$
(3.13)

Proof

Similar to the above proof and with the following formula

$$\begin{aligned} \bigl(s^{2}+bs^{\alpha}+c\bigr)^{-1}= \frac{s^{-\alpha}}{s^{2-\alpha }+b}\biggl(1+\frac{cs ^{-\alpha}}{s^{2-\alpha}+b}\biggr)^{-1} =\sum _{k=0}^{\infty}(-c)^{k}\frac{s ^{-\alpha(k+1)}}{(s^{2-\alpha}+b)^{k+1}} \end{aligned}$$

and the inverse Laplace transform of Eq. (2.11), we can show that the results are valid. □

Remark 3.6

These solutions extend the well-known results. For the same α, the above two theorems are equivalent when one rewrites them in terms of the gamma function. When IVP (1.1)–(1.2) has the zero initial condition (i.e., \(A=B=0\)), the solution takes a single expression for all \(\alpha\in(0,2)\). Moreover, for \(\alpha=\frac{3}{2}\), one can find that the solution of IVP (1.1)–(1.2) with the zero initial condition in Theorem 3.5 is identical to the solution presented with the fractional Green function by Podlubny ([1], (8.26)).

4 Two illustrative examples

The following illustrative examples are given to show the validity of our results.

Example 4.1

Consider the initial value problem:

$$\begin{aligned} &F\ddot{x}(t)+Gx^{(\frac{1}{2})}(t)+Hx(t)=h(t), \end{aligned}$$
(4.1)
$$\begin{aligned} &x(0)=0,\quad\quad \dot{x}(0)=0. \end{aligned}$$
(4.2)

Substituting the parameters \(\alpha=\frac{1}{2}\), \(b=\frac{G}{F}\), \(c=\frac{H}{F}\), \(f(t)=\frac{h(t)}{F}\), \(A=B=0\) into Eq. (3.10) and Eq. (3.12), respectively, we have the analytical solutions

$$ x_{1}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \biggl(-\frac{G}{F} \biggr)^{k} \int_{0}^{t}h(t-s) \biggl(s^{1.5k+1}E_{2,1.5k+2}^{k+1} \biggl(- \frac{H}{F}s^{2} \biggr) \biggr)\,\mathrm{d}s $$
(4.3)

and

$$ x_{2}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \frac{(-1)^{k}}{k!} \biggl(\frac{H}{F} \biggr)^{k} \int_{0}^{t} h(t-s) \biggl(s^{2k+1}E_{1.5,2+0.5k} ^{(k)} \biggl(-\frac{G}{F}s^{1.5} \biggr) \biggr)\, \mathrm{d}s. $$
(4.4)

Rewriting them in terms of the gamma function with Eq. (2.9) and Eq. (2.12), we have

$$ x_{1}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \sum_{j=0}^{ \infty} (-1)^{k+j} \biggl( \frac{G}{F} \biggr)^{k} \biggl(\frac{H}{F} \biggr)^{j} \frac{(k+j)!}{k!j!}\frac{1}{\varGamma(2j+1.5k+2)} \int_{0}^{t}h(t-s)s ^{2j+1.5k+1}\,\mathrm{d}s $$

and

$$ x_{2}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \sum_{j=0}^{ \infty} (-1)^{k+j} \biggl( \frac{H}{F} \biggr)^{k} \biggl(\frac{G}{F} \biggr)^{j} \frac{(k+j)!}{k!j!}\frac{1}{\varGamma(1.5j+2k+2)} \int_{0}^{t}h(t-s)s ^{1.5j+2k+1}\,\mathrm{d}s. $$

It is obvious that \(x_{1}(t)=x_{2}(t)\) (simply swap the summation indices k and j); they are equivalent analytical solutions to IVP (4.1)–(4.2).
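A rough numerical cross-check (ours; the choices \(F=G=H=1\) and \(h(t)\equiv1\) are purely illustrative): evaluating Eq. (4.3) through the Prabhakar series (2.9) and Eq. (4.4) through the derivative series (2.12), with the convolution integrals approximated by the trapezoid rule, the two expressions agree up to truncation and quadrature error.

from math import gamma, factorial

def prabhakar(z, mu, nu, rho, terms=40):       # truncated series (2.9)
    total, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(terms):
        total += poch * zk / (gamma(mu * k + nu) * fact)
        poch, fact, zk = poch * (rho + k), fact * (k + 1), zk * z
    return total

def wiman_deriv(z, mu, nu, k, terms=40):       # truncated series (2.12)
    return sum(factorial(j + k) * z ** j
               / (factorial(j) * gamma(mu * (j + k) + nu)) for j in range(terms))

def conv(integrand, t, m=200):                 # trapezoid rule for int_0^t integrand(s) ds
    h = t / m
    vals = [integrand(i * h) for i in range(m + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def x1(t, K=10):                               # Eq. (4.3) with F = G = H = 1, h = 1
    return sum((-1) ** k * conv(lambda s: s ** (1.5 * k + 1)
               * prabhakar(-s ** 2, 2, 1.5 * k + 2, k + 1), t) for k in range(K))

def x2(t, K=10):                               # Eq. (4.4) with F = G = H = 1, h = 1
    return sum((-1) ** k / factorial(k) * conv(lambda s: s ** (2 * k + 1)
               * wiman_deriv(-s ** 1.5, 1.5, 2 + 0.5 * k, k), t) for k in range(K))

for t in (0.5, 1.0):
    print(t, x1(t), x2(t))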

Example 4.2

Consider the initial value problem:

$$\begin{aligned} &F\ddot{x}(t)+Gx^{(\frac{3}{2})}(t)+Hx(t)=h(t), \end{aligned}$$
(4.5)
$$\begin{aligned} &x(0)=0,\quad\quad \dot{x}(0)=0. \end{aligned}$$
(4.6)

Substituting the parameters \(\alpha=\frac{3}{2}\), \(b=\frac{G}{F}\), \(c=\frac{H}{F}\), \(f(t)=\frac{h(t)}{F}\), \(A=B=0\) into Eq. (3.11) and Eq. (3.13), respectively, we have the analytical solutions

$$ x_{1}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \biggl(-\frac{G}{F} \biggr)^{k} \int_{0}^{t}h(t-s) \biggl(s^{0.5k+1}E_{2,0.5k+2}^{k+1} \biggl(- \frac{H}{F}s^{2} \biggr) \biggr)\,\mathrm{d}s $$
(4.7)

and

$$ x_{2}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \frac{(-1)^{k}}{k!} \biggl(\frac{H}{F} \biggr)^{k} \int_{0}^{t} h(t-s) \biggl(s^{2k+1}E_{0.5,2+1.5k} ^{(k)} \biggl(-\frac{G}{F}s^{0.5} \biggr) \biggr)\, \mathrm{d}s. $$
(4.8)

Rewriting them in terms of the gamma function with Eq. (2.9) and Eq. (2.12), we have

$$ x_{1}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \sum_{j=0}^{ \infty} (-1)^{k+j} \biggl( \frac{G}{F} \biggr)^{k} \biggl(\frac{H}{F} \biggr)^{j} \frac{(k+j)!}{k!j!}\frac{1}{\varGamma(2j+0.5k+2)} \int_{0}^{t}h(t-s)s ^{2j+0.5k+1}\,\mathrm{d}s $$

and

$$ x_{2}(t)=\frac{1}{F}\sum_{k=0}^{\infty} \sum_{j=0}^{ \infty} (-1)^{k+j} \biggl( \frac{H}{F} \biggr)^{k} \biggl(\frac{G}{F} \biggr)^{j} \frac{(k+j)!}{k!j!}\frac{1}{\varGamma(0.5j+2k+2)} \int_{0}^{t}h(t-s)s ^{0.5j+2k+1}\,\mathrm{d}s. $$

It is obvious that \(x_{1}(t)=x_{2}(t)\) (again by swapping the summation indices k and j); they are equivalent analytical solutions to IVP (4.5)–(4.6).

Remark 4.3

The analytical solution (4.8) is identical to the solution (8.26) in [1].

5 Conclusion

In this paper, we investigate the generalized Bagley–Torvik equation with fractional order in \((0,2)\), establish the existence and uniqueness of the solution to the initial value problem, derive its analytical solutions, and thereby extend the well-known results about the general Bagley–Torvik equation. Furthermore, two examples are presented to illustrate the validity of our main results. For further research, it would be interesting and challenging to discuss the finite-time stability, robust stability, and resonance of the generalized Bagley–Torvik equation.