1 Introduction

Recently, fractional differential equations have become increasingly important owing to their wide applications in applied science and engineering, such as charge transport in amorphous semiconductors, the spread of contaminants in underground water, the diffusion of pollution in the atmosphere, and network traffic. For details, see [1–9] and the references therein.

As in classical calculus, stability analysis plays a key role in control theory. The stability of linear fractional differential equations has been investigated by various researchers [10–20]. In [10], Matignon provided the famous stability result for finite-dimensional linear fractional differential systems with a constant coefficient matrix A. The main qualitative result is that stability is guaranteed iff the eigenvalues of A lie outside the closed angular sector \(|\operatorname{arg}(\lambda(A))| \leq\frac{\pi \alpha}{2}\), which generalizes the classical result for the integer case \(\alpha = 1\). Chen [11] studied the stability of one-dimensional fractional systems with retarded time by using the Lambert W function.

Many years later, Matignon’s stability results were extended by many scholars, such as Deng. In [12], by using the Laplace transform, Deng generalized the theory to linear fractional differential systems with multiple orders and multiple delays and showed that such a system is Lyapunov globally asymptotically stable if all roots of the characteristic equation have negative real parts. In 2010, Odibat [13] addressed the existence, uniqueness, and stability of solutions for two classes of linear fractional differential systems with the Caputo derivative. In [14], Qian established stability theorems for fractional differential systems with the Riemann-Liouville derivative. In [15], the authors studied basic stability properties of linear fractional differential systems with the Riemann-Liouville derivative and derived stability regions for special discretizations of these systems, including a precise description of their asymptotics.

Meanwhile, Li [21, 22] was the first to study the stability of fractional-order nonlinear systems by applying the Lyapunov direct method, introducing the notion of Mittag-Leffler stability. Many valuable results on the stability of nonlinear fractional differential systems have since been derived; see [23–28] and the references therein. There is no doubt that the Lyapunov direct method provides a very effective approach to analyzing the stability of nonlinear systems. However, it is often difficult to find a suitable Lyapunov function, and many problems remain open.

As is well known, many systems are most naturally modeled by degenerate differential equations, for example in multibody mechanics, electrical circuits, prescribed path control, and chemical engineering; see [29–33] and the references therein. Degenerate differential equations can describe more complex dynamical models than state-space systems, because a degenerate differential system model includes not only dynamic equations but also static (algebraic) equations. Recently, more and more research has been devoted to degenerate fractional systems. For example, in [34], variation-of-constants formulas for degenerate fractional differential systems with delay were presented. In [35], an exponential estimate for a degenerate fractional differential system with delay and sufficient conditions for the finite-time stability of the system were obtained. In 2010, by using linear matrix inequalities, N’Doye [36] derived sufficient conditions for the robust asymptotic stabilization of uncertain degenerate fractional systems with fractional order α satisfying \(0 < \alpha< 2\).

However, very few works focus on the stability of degenerate fractional linear differential systems with the Riemann-Liouville derivative. Motivated by [34–37], in this paper we present an explicit representation of the solution for the degenerate fractional linear system with Riemann-Liouville derivative and derive stability criteria for the system. The resulting criteria are easy to verify.

The paper is organized as follows. In Section 2, we review basic notions and results from the theory of fractional calculus and degenerate differential systems. In Section 3, we discuss the existence and uniqueness of solutions for the linear degenerate fractional differential system and give an explicit representation of the solutions. We then analyze the stability of the system and obtain sufficient conditions for its asymptotic stability. In Section 4, some examples are presented to illustrate the main results. Finally, concluding remarks are given.

2 Preliminaries

In this section, we present some related definitions and fundamental results.

Definition 2.1

The Riemann-Liouville fractional integral operator of order \(\alpha>0\) of \(f(t)\) is defined as

$$ I_{a,t}^{\alpha}f(t)=\frac{1}{\Gamma(\alpha)} \int_{a}^{t}(t-s)^{\alpha -1}f(s)\,ds,\quad \alpha>0, t>a, $$
(1)

and the Riemann-Liouville fractional derivative is defined as

$$ D_{a, t}^{\alpha}f(t)=\frac{1}{\Gamma(n-\alpha)}\biggl( \frac{d}{dt}\biggr)^{n} \int _{a}^{t}(t-\xi)^{n-\alpha-1}f(\xi)\,d\xi\quad (n-1\leq\alpha< n), $$
(2)

where \(\Gamma(\cdot)\) is the gamma function. The initial time a is often set to be 0.

The Laplace transform of the Riemann-Liouville fractional derivative \(D_{0, t}^{\alpha}x(t)\) is

$$\int_{0}^{\infty}e^{-st}D_{0,t}^{\alpha}x(t)\,dt=s^{\alpha}X(s)- \sum_{k=0}^{n-1}s^{k} \bigl[D_{0,t}^{\alpha-k-1}x(t)\bigr]_{t=0} \quad (n-1\leq \alpha< n). $$
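As a numerical sanity check of (1) (an illustrative sketch of ours, not part of the formal development): for \(f(t)=t\) the integral has the closed form \(I_{0,t}^{\alpha}f(t)=\Gamma(2)t^{1+\alpha}/\Gamma(2+\alpha)\). Substituting \(u=(t-s)^{\alpha}\) removes the weak endpoint singularity, after which a simple midpoint rule reproduces the exact value:

```python
import math

def rl_integral_of_identity(t, alpha, n=200_000):
    """Riemann-Liouville integral I^alpha of f(s) = s at time t.

    The substitution u = (t - s)**alpha turns the weakly singular kernel
    (t - s)**(alpha - 1) into a smooth integrand:
        I^alpha f(t) = 1/(alpha*Gamma(alpha)) * int_0^{t^alpha} (t - u**(1/alpha)) du,
    evaluated here with the midpoint rule.
    """
    upper = t ** alpha
    h = upper / n
    total = sum(t - (h * (i + 0.5)) ** (1.0 / alpha) for i in range(n)) * h
    return total / (alpha * math.gamma(alpha))

alpha, t = 0.5, 1.0
numeric = rl_integral_of_identity(t, alpha)
exact = math.gamma(2) * t ** (1 + alpha) / math.gamma(2 + alpha)  # = 4/(3*sqrt(pi))
print(numeric, exact)
```

With \(\alpha=1\) the same routine reduces to the ordinary integral \(\int_{0}^{t}s\,ds\), consistent with (1) generalizing classical integration.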

Definition 2.2

The one-parameter Mittag-Leffler function \(E_{\alpha}(z)\) and the two-parameter Mittag-Leffler function \(E_{\alpha,\beta}(z)\) are defined as

$$\begin{aligned}& E_{\alpha}(z)=\sum_{j=0}^{\infty} \frac{z^{j}}{\Gamma(\alpha j+1)}, \quad \alpha>0, z\in C, \end{aligned}$$
(3)
$$\begin{aligned}& E_{\alpha,\beta}(z)=\sum_{j=0}^{\infty} \frac{z^{j}}{\Gamma(\alpha j+\beta)}, \quad\alpha, \beta>0, z\in C. \end{aligned}$$
(4)

Their kth derivatives, for \(k=0, 1, 2, \ldots\), are given by

$$\begin{aligned}& E^{(k)}_{\alpha}(z)=\sum^{\infty}_{j=0} \frac{(j+k)!z^{j}}{j!\Gamma(\alpha j+\alpha k +1)}, \end{aligned}$$
(5)
$$\begin{aligned}& E^{(k)}_{\alpha,\beta}(z)=\sum^{\infty}_{j=0} \frac{(j+k)!z^{j}}{j!\Gamma (\alpha j+\alpha k +\beta)}. \end{aligned}$$
(6)

It can be noted that \(E_{\alpha, 1}(z)=E_{\alpha}(z)\) and \(E_{1, 1}(z)=e^{z}\). The Mittag-Leffler functions \(E_{\alpha}(z)\) and \(E_{\alpha, \beta}(z)\) are entire functions for \(\alpha, \beta>0\). Since \(E_{1, 1}(z)=e^{z}\), the Mittag-Leffler function \(E_{\alpha,\beta}(z)\) is a generalization of the exponential function \(e^{z}\). The Mittag-Leffler function plays a role in the theory of fractional differential equations similar to that of the exponential function in the solutions of integer-order systems. The properties of the Mittag-Leffler functions can be found in [4, 13] and the references therein. Moreover, the Laplace transforms of the Mittag-Leffler functions are given by

$$L\bigl\{ t^{\alpha k+\beta-1}E_{\alpha,\beta}^{(k)}\bigl(-\lambda t^{\alpha}\bigr)\bigr\} =\frac{k!s^{\alpha-\beta}}{(s^{\alpha}+\lambda)^{k+1}}, \quad \operatorname{Re}(s)>|\lambda|^{\frac{1}{\alpha}}, $$

where \(\lambda\in C\), \(E_{\alpha,\beta}^{(k)}(z)=\frac {d^{k}}{dz^{k}}E_{\alpha,\beta}(z)\), and \(\operatorname{Re}(s)\) denotes the real part of s; t and s are the variables in the time domain and the Laplace domain, respectively.
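The series (3)-(4) can be summed directly for moderate \(|z|\). The following Python sketch (ours, for illustration only) checks the special case \(E_{1,1}(z)=e^{z}\) noted above, together with the classical identity \(E_{2}(-z^{2})=\cos z\):

```python
import math

def mittag_leffler(alpha, beta, z, jmax=160):
    """Two-parameter Mittag-Leffler function via the defining series (4);
    adequate for moderate |z| (for large |z| use the asymptotics of Lemma 2.1)."""
    total = 0.0
    for j in range(jmax):
        g = alpha * j + beta
        if g > 170:          # math.gamma overflows shortly past this point
            break
        total += z ** j / math.gamma(g)
    return total

# Special cases: E_{1,1}(z) = e^z, and the classical E_2(-z^2) = cos(z).
err_exp = abs(mittag_leffler(1, 1, 1.0) - math.e)
err_cos = abs(mittag_leffler(2, 1, -1.0) - math.cos(1.0))
print(err_exp, err_cos)
```

Both errors are at machine-precision level, reflecting that the series converges for every \(z\).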

Lemma 2.1

([3])

If \(0<\alpha<2\), β is an arbitrary complex number, and μ is an arbitrary real number satisfying

$$\frac{\pi\alpha}{2}< \mu< \min[\pi,\pi\alpha], $$

then, for an arbitrary integer \(N\geq1\), the following expansion holds:

  1. (1)
    $$E_{\alpha,\beta}(z)=\frac{1}{\alpha}z^{\frac{1-\beta}{\alpha}}\exp \bigl(z^{\frac{1}{\alpha}} \bigr)-\sum_{k=1}^{N}\frac{1}{\Gamma(\beta-\alpha k)} \frac{1}{z^{k}}+O\biggl(\frac{1}{|z|^{N+1}}\biggr), $$

    with \(|z|\rightarrow\infty, |\operatorname{arg}(z)|\leq\mu\), and

  2. (2)
    $$E_{\alpha,\beta}(z)=-\sum_{k=1}^{N} \frac{1}{\Gamma(\beta-\alpha k)}\frac {1}{z^{k}}+O\biggl(\frac{1}{|z|^{N+1}}\biggr), $$

    with \(|z|\rightarrow\infty, \mu\leq|\operatorname{arg}(z)|\leq\pi\).

Remark 2.1

In Lemma 2.1, if \(\alpha=\beta\) and \(N\geq2\), then the following expansion holds:

$$E_{\alpha,\alpha}(z)=\frac{1}{\alpha}z^{\frac{1-\alpha}{\alpha}}\exp \bigl(z^{\frac{1}{\alpha}} \bigr)-\sum_{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac {1}{z^{k}}+O\biggl(\frac{1}{|z|^{N+1}}\biggr), $$

with \(|z|\rightarrow\infty, |\operatorname{arg}(z)|\leq\mu\);

$$E_{\alpha,\alpha}(z)=-\sum_{k=2}^{N} \frac{1}{\Gamma(\alpha(1- k))}\frac {1}{z^{k}}+O\biggl(\frac{1}{|z|^{N+1}}\biggr), $$

with \(|z|\rightarrow\infty,\mu\leq|\operatorname{arg}(z)|\leq\pi\).
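A quick numerical comparison (our illustrative sketch) of the defining series (4) with expansion (2) of Lemma 2.1 on the negative real axis, where \(|\operatorname{arg}(z)|=\pi\) lies in the sector of case (2):

```python
import math

def ml_series(alpha, beta, z, jmax=200):
    """E_{alpha,beta}(z) by direct summation of the defining series (4)."""
    s = 0.0
    for j in range(jmax):
        g = alpha * j + beta
        if g > 170:          # avoid math.gamma overflow
            break
        s += z ** j / math.gamma(g)
    return s

def ml_asymptotic(alpha, beta, z, N=3):
    """Expansion (2) of Lemma 2.1, for mu <= |arg(z)| <= pi (here z < 0).
    math.gamma handles the negative non-integer arguments beta - alpha*k."""
    return -sum(1.0 / (math.gamma(beta - alpha * k) * z ** k)
                for k in range(1, N + 1))

alpha, beta, z = 0.8, 1.0, -8.0      # arg(z) = pi
series_val = ml_series(alpha, beta, z)
asympt_val = ml_asymptotic(alpha, beta, z)
print(series_val, asympt_val)
```

With \(N=3\) the truncated expansion already agrees with the series to roughly three decimal places, consistent with the \(O(1/|z|^{N+1})\) remainder; this algebraic decay of \(E_{\alpha,\beta}\) away from the sector \(|\operatorname{arg}(z)|\leq\mu\) is exactly what drives the stability results below.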

Consider the following linear fractional differential system:

$$ \left \{ \textstyle\begin{array}{@{}l} ED_{0,t}^{\alpha}x(t)=Ax(t),\\ D_{0,t}^{\alpha-1}x(t)|_{t=0}=x_{0}, \end{array}\displaystyle \right . $$
(7)

where \(x(t)\in R^{n}\) is the state vector, \(E, A\in R^{n\times n}, \operatorname{rank}E< n, x_{0}\in R^{n}, D_{0,t}^{\alpha}\) denotes an αth order Riemann-Liouville derivative of \(x(t)\), and \(0<\alpha<1\).

Definition 2.3

The system (7) is said to be:

  1. (a)

    stable iff for any \(x_{0}\), there exists an \(\varepsilon>0\) such that \(\|x(t)\|\leq\varepsilon\) for \(t\geq0\);

  2. (b)

    asymptotically stable iff \(\lim _{t\rightarrow+\infty}\| x(t)\|=0\).

Definition 2.4

For any two matrices \(E, A \in R^{n\times n}\), the matrix pair \((E, A)\) is called regular if \(\operatorname{det}(\lambda E-A)\not \equiv0\) as a function of \(\lambda\in\mathcal{C}\). If \((E, A)\) is regular, we say that the system (7) is regular.

Definition 2.5

Let Q be a square matrix. The index of Q is the least nonnegative integer k such that \(\operatorname{rank}(Q^{k+1})=\operatorname{rank}(Q^{k})\). The Drazin inverse of Q is the unique matrix \(Q^{d}\) which satisfies

$$Q^{d}QQ^{d}=Q^{d}, \qquad QQ^{d}=Q^{d}Q,\qquad Q^{k+1}Q^{d}=Q^{k}. $$
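Definition 2.5 can be checked directly on small matrices. The sketch below (ours, dependency-free) verifies the three defining identities for \(Q=\operatorname{diag}(1,0)\), which has index 1 and is its own Drazin inverse, and for a nilpotent N, whose Drazin inverse is the zero matrix:

```python
def matmul(X, Y):
    """2x2 matrix product, kept dependency-free for this small check."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Q = diag(1, 0): rank(Q^2) = rank(Q) = 1, so ind(Q) = 1 and Q^d = Q.
# N = [[0,1],[0,0]]: N^2 = 0, so ind(N) = 2 and N^d = 0.
Q, Qd = [[1.0, 0.0], [0.0, 0.0]], [[1.0, 0.0], [0.0, 0.0]]
N, Nd = [[0.0, 1.0], [0.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]]

checks = {
    "Qd Q Qd = Qd": matmul(matmul(Qd, Q), Qd) == Qd,
    "Q Qd = Qd Q":  matmul(Q, Qd) == matmul(Qd, Q),
    "Q^2 Qd = Q":   matmul(matmul(Q, Q), Qd) == Q,                      # k = 1
    "Nd N Nd = Nd": matmul(matmul(Nd, N), Nd) == Nd,
    "N Nd = Nd N":  matmul(N, Nd) == matmul(Nd, N),
    "N^3 Nd = N^2": matmul(matmul(matmul(N, N), N), Nd) == matmul(N, N),  # k = 2
}
print(checks)
```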

Lemma 2.2

([31])

Suppose that \(E, A\in R^{n\times n}\) are such that there exists a λ so that \((\lambda E-A)^{-1}\) exists. Let \(\hat{E}_{\lambda}=(\lambda E-A)^{-1}E, \hat{A}_{\lambda}=(\lambda E-A)^{-1}A\). For all \(\lambda_{1}\neq\lambda_{2}\) for which \((\lambda_{i}E-A)^{-1}, i=1, 2\), exist, the following statements are true:

$$\begin{aligned}& \lambda\hat{E}_{\lambda}=I+\hat{A}_{\lambda},\qquad\hat{E}_{\lambda}\hat{A}_{\lambda}=\hat{A}_{\lambda}\hat{E}_{\lambda},\\& \hat{E}_{\lambda_{1}}^{d}\hat{A}_{\lambda_{1}}=\hat{E}_{\lambda _{2}}^{d}\hat{A}_{\lambda_{2}},\qquad \hat{E}_{\lambda_{1}}^{d} \hat{E}_{\lambda_{1}}=\hat{E}_{\lambda _{2}}^{d}\hat{E}_{\lambda_{2}},\qquad \hat{E}_{\lambda_{1}}\hat{E}_{\lambda _{1}}^{d}=\hat{E}_{\lambda_{2}} \hat{E}_{\lambda_{2}}^{d}, \end{aligned}$$

where \(\hat{E}_{\lambda}^{d}\) is the Drazin inverse of \(\hat{E}_{\lambda }\) and I is the \(n\times n\) identity matrix.
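The identities of Lemma 2.2 can likewise be sampled numerically. The sketch below (ours, with an assumed example pair) uses \(E=\operatorname{diag}(1,0)\), \(A=\operatorname{diag}(-1,1)\), for which \(\operatorname{det}(\lambda E-A)=-(\lambda+1)\), and checks both \(\lambda\hat{E}_{\lambda}=I+\hat{A}_{\lambda}\) and the λ-independence of \(\hat{E}_{\lambda}^{d}\hat{A}_{\lambda}\) at several values of λ:

```python
def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Degenerate pair with rank E = 1 < 2; det(lam*E - A) = -(lam + 1), so
# (E, A) is regular and lam*E - A is invertible for every lam != -1.
E = [[1.0, 0.0], [0.0, 0.0]]
A = [[-1.0, 0.0], [0.0, 1.0]]

def hats(lam):
    P = inv2([[lam * E[i][j] - A[i][j] for j in range(2)] for i in range(2)])
    return matmul(P, E), matmul(P, A)          # E_hat_lam, A_hat_lam

for lam in (0.0, 2.0, 5.0):
    Eh, Ah = hats(lam)
    # First identity of Lemma 2.2: lam * E_hat_lam = I + A_hat_lam.
    for i in range(2):
        for j in range(2):
            rhs = (1.0 if i == j else 0.0) + Ah[i][j]
            assert abs(lam * Eh[i][j] - rhs) < 1e-12
    # Here E_hat_lam = diag(1/(lam+1), 0), so its Drazin inverse is
    # diag(lam+1, 0), and E_hat^d A_hat = diag(-1, 0) for every lam.
    assert abs((1.0 / Eh[0][0]) * Ah[0][0] + 1.0) < 1e-12
print("Lemma 2.2 identities verified for the sampled lambda values")
```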

3 Main results

3.1 The existence and uniqueness of the solution for linear degenerate fractional differential system

In this section, we consider the solvability of the following system:

$$ \left \{ \textstyle\begin{array}{@{}l} ED_{0,t}^{\alpha}x(t)=Ax(t),\\ D_{0,t}^{\alpha-1}x(t)|_{t=0}=x_{0}, \end{array}\displaystyle \right . $$
(8)

where \(x(t)\in R^{n}\) is the state vector, \(A, E\in R^{n\times n}, \operatorname{rank}E< n,x_{0}\in R^{n}, D_{0,t}^{\alpha}\) denotes an αth order Riemann-Liouville derivative, and \(0<\alpha<1\).

Theorem 3.1

If the system (8) is regular, then the system (8) has a unique solution on \([0, +\infty)\) and the solution is given by

$$x(t)=e_{\alpha}^{\hat{E}_{\lambda}^{d}\hat{A}_{\lambda}t}\hat{E}_{\lambda }\hat{E}_{\lambda}^{d}x(0), $$

where \(e_{\alpha}^{\hat{E}_{\lambda}^{d}\hat{A}_{\lambda}t}= t^{\alpha -1}\sum _{k=0}^{\infty} (\hat{E}_{\lambda}^{d}\hat{A}_{\lambda })^{k}\frac{t^{k\alpha}}{\Gamma[(k+1)\alpha]}\), \(\hat{E}_{\lambda}=(\lambda E-A)^{-1}E\), \(\hat{A}_{\lambda}=(\lambda E-A)^{-1}A\), and \(x(0)\) satisfies \(x(0)=\hat{E}_{\lambda}\hat{E}_{\lambda}^{d}x(0)\). Here E and A are the coefficient matrices of the system (8), and λ is any constant such that \(\lambda E-A\) is invertible.

Proof

Since the system is regular, there exists a λ so that \((\lambda E-A)^{-1}\) exists. Let \(\hat{E}_{\lambda}=(\lambda E-A)^{-1}E,\hat{A}_{\lambda}=(\lambda E-A)^{-1}A\). From Lemma 2.2 and [31], there exists an invertible matrix T such that

$$ \hat{E}_{\lambda}=T^{-1} \begin{pmatrix} C&0\\ 0&N \end{pmatrix} T, $$
(9)

where \(C\in R^{p\times p}\) is an invertible matrix, \(N\in R^{q\times q}\) is nilpotent, and \(q+p=n\).

Then

$$ \hat{A}_{\lambda}=\lambda\hat{E}_{\lambda}-I=T^{-1} \begin{pmatrix} \lambda C-I&0\\ 0&\lambda N-I \end{pmatrix} T. $$
(10)

Premultiplying both sides of the equation \(ED^{\alpha}x(t)=Ax(t)\) by \((\lambda E-A)^{-1}\) gives

$$ \hat{E}_{\lambda}D^{\alpha}x(t)=\hat{A}_{\lambda}x(t). $$
(11)

From (9) and (10), we get

$$ T^{-1} \begin{pmatrix} C&0\\ 0&N \end{pmatrix} TD^{\alpha}x(t) =T^{-1} \begin{pmatrix} \lambda C-I&0\\ 0&\lambda N-I \end{pmatrix} Tx(t). $$
(12)

Setting \(\xi(t) =\bigl ( {\scriptsize\begin{matrix}{} \xi_{1}(t)\cr \xi_{2}(t) \end{matrix}} \bigr )=Tx(t)\), \(\xi_{1}(t)\in R^{p}\), \(\xi_{2}(t)\in R^{q}\), and \(\xi(0) =\bigl ( {\scriptsize\begin{matrix}{} \xi_{1}(0)\cr \xi_{2}(0) \end{matrix}} \bigr )=Tx(0)\), we see that (12) is restricted system equivalent (r.s.e.) to

$$\begin{aligned}& CD^{\alpha}\xi_{1}(t)=(\lambda C-I)\xi_{1}(t), \end{aligned}$$
(13)
$$\begin{aligned}& ND^{\alpha}\xi_{2}(t)=(\lambda N-I)\xi_{2}(t). \end{aligned}$$
(14)

First we discuss the first subsystem (13). Since C is an invertible matrix, (13) can be rewritten as

$$ D^{\alpha}\xi_{1}(t)=C^{-1}(\lambda C-I) \xi_{1}(t). $$
(15)

From the theory of fractional calculus [37], the subsystem (13) has a unique solution, which may be expressed as

$$ \xi_{1}(t)=e_{\alpha}^{C^{-1}(\lambda C-I)t} \xi_{1}(0). $$
(16)

Next, we study the second subsystem (14) as follows.

Let \(\operatorname{ind}(N)=k\), that is, \(N^{k-1}\neq0\) and \(N^{k}=0\); k is the index of the matrix pair \((E, A)\). Premultiplying both sides of equation (14) by \(N^{k-1}\) gives

$$D^{\alpha}N^{k}\xi_{2}(t)=\lambda N^{k} \xi_{2}(t)-N^{k-1}\xi_{2}(t). $$

Since \(N^{k}=0\), we get \(N^{k-1}\xi_{2}(t)=0\).

Premultiplying both sides of equation (14) by \(N^{k-2}\) and using \(N^{k-1}\xi_{2}(t)=0\) (so that \(D^{\alpha}N^{k-1}\xi_{2}(t)=0\)), we obtain \(N^{k-2}\xi_{2}(t)=0\). In the same way, we get

$$N^{k-3}\xi_{2}(t)=0,\qquad N^{k-4}\xi_{2}(t)=0,\qquad \ldots,\qquad N^{1}\xi_{2}(t)=0, \qquad\xi _{2}(t)=0. $$

Hence \(\xi_{2}(t)\equiv0\).

From the above discussion, the unique solution of the subsystems (13) and (14) is given by

$$ \left \{ \textstyle\begin{array}{@{}l} \xi_{1}(t)=e_{\alpha}^{C^{-1}(\lambda C-I)t}\xi _{1}(0),\\ \xi_{2}(t)=0. \end{array}\displaystyle \right . $$
(17)

Applying \(x(t)=T^{-1}\xi(t)\), the solution of (8) is given by

$$\begin{aligned} x(t) =&T^{-1}\xi(t)= T^{-1} \begin{pmatrix} e_{\alpha}^{C^{-1}(\lambda C-I)t}\xi_{1}(0)\\ 0 \end{pmatrix} \\ =&T^{-1} \begin{pmatrix} e_{\alpha}^{C^{-1}(\lambda C-I)t}&0\\ 0&0 \end{pmatrix} TT^{-1} \begin{pmatrix} I&0\\ 0&0 \end{pmatrix} TT^{-1} \begin{pmatrix} \xi_{1}(0)\\ \xi_{2}(0) \end{pmatrix}. \end{aligned}$$
(18)

From [34], Lemma 2.1, one can get

$$ \hat{E}^{d}_{\lambda}=T^{-1} \begin{pmatrix} C^{-1}&0\\ 0&0 \end{pmatrix} T. $$
(19)

Then

$$\begin{aligned}& \hat{E}_{\lambda}\hat{E}^{d}_{\lambda}=T^{-1} \begin{pmatrix} C&0\\ 0&N \end{pmatrix} TT^{-1} \begin{pmatrix} C^{-1}&0\\ 0&0 \end{pmatrix} T =T^{-1} \begin{pmatrix} I&0\\ 0&0 \end{pmatrix} T, \\& \begin{aligned}[b] e_{\alpha}^{\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}t}&= e_{\alpha}^{\bigl\{ T^{-1} \bigl({\scriptsize\begin{matrix}{} C^{-1}&0\cr 0&0 \end{matrix}}\bigr) TT^{-1} \bigl({\scriptsize\begin{matrix}{} (\lambda C-I)& 0\cr 0&(\lambda N-I) \end{matrix}}\bigr) Tt\bigr\} } \\ &=e_{\alpha}^{\bigl\{ T^{-1} \bigl({\scriptsize\begin{matrix}{} C^{-1}(\lambda C-I)&0\cr 0&0 \end{matrix}}\bigr) Tt\bigr\} } \\ &= T^{-1} \begin{pmatrix} e_{\alpha}^{C^{-1}(\lambda C-I)t}&0\\ 0&0 \end{pmatrix} T, \end{aligned} \\& x(0)= \begin{pmatrix} x_{1}(0)\\ x_{2}(0) \end{pmatrix} =T^{-1} \begin{pmatrix} \xi_{1}(0)\\ \xi_{2}(0) \end{pmatrix}. \end{aligned}$$

Then

$$x(t)=e_{\alpha}^{\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}t}\hat{E}_{\lambda }\hat{E}^{d}_{\lambda}x(0), $$

where \(x(0)\) satisfies \(x(0)=\hat{E}_{\lambda}\hat{E}^{d}_{\lambda}x(0)\). According to Lemma 2.2, \(\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}\) and \(\hat{E}_{\lambda}\hat{E}^{d}_{\lambda}\) are independent of λ. Hence, the system (8) has a unique solution \(x(t)=e_{\alpha }^{\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}t}\hat{E}_{\lambda}\hat{E}^{d}_{\lambda}x(0)\). The proof is completed. □

Remark 3.1

From Lemma 2.2, \(\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}\) and \(\hat{E}_{\lambda}\hat{E}^{d}_{\lambda}\) are independent of λ. Therefore, we can drop the subscript λ whenever the terms \(\hat{E}^{d}_{\lambda}\hat{A}_{\lambda}\) and \(\hat{E}_{\lambda}\hat{E}^{d}_{\lambda}\) appear. Hence, the solution of the system (8) can be written as

$$x(t)=e_{\alpha}^{\hat{E}^{d}\hat{A}t}\hat{E}\hat{E}^{d}x(0), $$

where \(e_{\alpha}^{\hat{E}^{d}\hat{A}t}= t^{\alpha-1}\sum _{k=0}^{\infty} (\hat{E}^{d}\hat{A})^{k}\frac{t^{k\alpha}}{\Gamma [(k+1)\alpha]}\), \(\hat{E}=\hat{E}_{\lambda}=(\lambda E-A)^{-1}E\), \(\hat{A}=\hat{A}_{\lambda}=(\lambda E-A)^{-1}A\), and \(x(0)\) satisfies \(x(0)=\hat{E}\hat{E}^{d}x(0)\). Here E and A are the coefficient matrices of the system (8), and λ is any constant such that \(\lambda E-A\) is invertible.
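To make the representation concrete, consider an illustrative sketch (ours, with an assumed example pair): for \(E=\operatorname{diag}(1,0)\), \(A=\operatorname{diag}(-1,1)\) one finds \(\hat{E}^{d}\hat{A}=\operatorname{diag}(-1,0)\) and \(\hat{E}\hat{E}^{d}=\operatorname{diag}(1,0)\), so the consistency condition \(x(0)=\hat{E}\hat{E}^{d}x(0)\) forces the second component of \(x(0)\) to vanish and the solution reduces to a single scalar Mittag-Leffler mode:

```python
import math

def ml(alpha, beta, z, jmax=200):
    """Two-parameter Mittag-Leffler function by direct series summation."""
    s = 0.0
    for j in range(jmax):
        g = alpha * j + beta
        if g > 170:          # avoid math.gamma overflow
            break
        s += z ** j / math.gamma(g)
    return s

# Assumed example pair: E = diag(1, 0), A = diag(-1, 1); det(lam*E - A) =
# -(lam + 1), so (E, A) is regular.  Taking lam = 0 gives E_hat = diag(1, 0),
# A_hat = -I, E_hat^d = diag(1, 0), hence E_hat^d A_hat = diag(-1, 0) and
# E_hat E_hat^d = diag(1, 0).
alpha = 0.6
x0 = (3.0, 0.0)   # consistent data: x(0) = E_hat E_hat^d x(0) forces x0[1] = 0

def solution(t):
    """x(t) = t^{alpha-1} E_{alpha,alpha}(E_hat^d A_hat t^alpha) E_hat E_hat^d x(0),
    which here decouples into one scalar mode and an identically zero component."""
    x1 = t ** (alpha - 1) * ml(alpha, alpha, -t ** alpha) * x0[0]
    return (x1, 0.0)

print(solution(1.0), solution(10.0))
```

The second component stays identically zero, mirroring \(\xi_{2}(t)\equiv0\) in the proof of Theorem 3.1.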

3.2 Stability results for linear degenerate fractional differential system

In this section, we derive conditions for the asymptotic stability of the system (8).

Theorem 3.2

If the system (8) is regular, the algebraic and geometric multiplicities are the same for the zero eigenvalues of \(\hat{E}^{d}\hat{A}\) and all the non-zero eigenvalues satisfy

$$\bigl|\operatorname{arg}\bigl(\lambda\bigl(\hat{E}^{d}\hat{A}\bigr)\bigr)\bigr|> \frac{\alpha\pi}{2}, $$

then the system (8) is asymptotically stable.

Proof

From Theorem 3.1 and Remark 3.1, we know that the solution of the system (8) is given by

$$x(t)=e_{\alpha}^{\hat{E}^{d}\hat{A}t}\hat{E}\hat{E}^{d}x(0)=t^{\alpha -1}E_{\alpha,\alpha} \bigl(\hat{E}^{d}\hat{A}t^{\alpha}\bigr)\hat{E}\hat{E}^{d}x(0). $$

Applying (10) and (19), one gets

$$ \hat{E}^{d}\hat{A}=T^{-1} \begin{pmatrix} C^{-1}(\lambda C-I)&0\\ 0&0 \end{pmatrix} T, $$
(20)

where λ is constant, \((\lambda E-A)\) is invertible and \(C^{-1}(\lambda C-I)\in R^{p\times p}\).

Then there exists an invertible matrix H such that

$$ H^{-1}\hat{E}^{d}\hat{A}H=\operatorname{diag}(J_{1},J_{2}, \ldots, J_{r},\mathbf{0}), $$
(21)

where \(J_{i}, 1\leq i \leq r\), are Jordan canonical forms and 0 is a zero matrix with corresponding dimension.

Without loss of generality, assume that the numbers of non-zero eigenvalues and zero eigenvalues of \(\hat{E}^{d}\hat{A}\) are p and q, respectively. For the non-zero eigenvalues, we discuss the problem in two cases.

Case (i): Assume that the matrix \(\hat{E}^{d}\hat{A}\) is diagonalizable with non-zero eigenvalues \(\lambda_{1},\lambda_{2}, \ldots,\lambda_{p}\); then (21) becomes

$$\Lambda=H^{-1}\hat{E}^{d}\hat{A}H=\operatorname{diag}\Bigl( \lambda_{1},\lambda_{2}, \ldots ,\lambda_{p}, \underbrace{0, \ldots,0} _{q}\Bigr). $$

Hence,

$$\begin{aligned} E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A} t^{\alpha}\bigr) =&HE_{\alpha,\alpha}\bigl(\Lambda t^{\alpha}\bigr)H^{-1} \\ =&H \operatorname{diag}\biggl[E_{\alpha,\alpha}\bigl(\lambda_{1} t^{\alpha}\bigr), \ldots,E_{\alpha ,\alpha}\bigl(\lambda_{p} t^{\alpha}\bigr),\underbrace{\frac{1}{\Gamma(\alpha)}, \ldots,\frac{1}{\Gamma(\alpha)}} _{q} \biggr]H^{-1}. \end{aligned}$$

From Lemma 2.1 and the conditions of Theorem 3.2, we get

$$E_{\alpha,\alpha}\bigl(\lambda_{i} t^{\alpha}\bigr)=-\sum _{k=2}^{N}\frac{1}{\Gamma (\alpha(1- k))} \frac{1}{(\lambda_{i} t^{\alpha})^{k}}+O\biggl(\frac {1}{|\lambda_{i} t^{\alpha}|^{N+1}}\biggr)\rightarrow0, \quad t \rightarrow +\infty, $$

where \(1\leq i \leq p\).

Hence,

$$\begin{aligned} \bigl\| t^{\alpha-1}E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A}t^{\alpha}\bigr)\bigr\| =& \biggl\| \operatorname{diag}\biggl[t^{\alpha-1}E_{\alpha,\alpha} \bigl(\lambda_{1} t^{\alpha}\bigr), \ldots ,t^{\alpha-1}E_{\alpha,\alpha} \bigl(\lambda_{p} t^{\alpha}\bigr), \\ & {}\underbrace {t^{\alpha-1}\frac{1}{\Gamma(\alpha)}, \ldots,t^{\alpha-1} \frac {1}{\Gamma(\alpha)}} _{q}\biggr]\biggr\| \rightarrow0,\quad t\rightarrow +\infty. \end{aligned}$$

Thus, the conclusion is proved in this case.

Case (ii): Assume \(\hat{E}^{d}\hat{A}\) is similar to a Jordan canonical form as in (21).

Let \(J_{i}, 1\leq i \leq r\), have the form

$$J_{i} = \begin{pmatrix} \lambda_{i}&1& & \\ &\lambda_{i}&\ddots& \\ & &\ddots&1\\ & & &\lambda_{i} \end{pmatrix} _{n_{i}\times n_{i}},\quad 1\leq i\leq r, \sum _{i=1}^{r}n_{i}=p, $$

then

$$E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A}t^{\alpha}\bigr) =H \operatorname{diag}\biggl[E_{\alpha,\alpha}\bigl(J_{1}t^{\alpha} \bigr), \ldots,E_{\alpha,\alpha }\bigl(J_{r}t^{\alpha}\bigr), \underbrace{\frac{1}{\Gamma(\alpha)}, \ldots, \frac {1}{\Gamma(\alpha)}} _{q} \biggr]H^{-1}. $$

We can get the following results by calculation:

$$\begin{aligned} &E_{\alpha,\alpha}\bigl(J_{i}t^{\alpha}\bigr) \\ &\quad =\sum_{k=0}^{\infty}\frac{t^{k\alpha}J_{i}^{k}}{\Gamma(\alpha (k+1))} \\ &\quad =\sum _{k=0}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha(k+1))} \begin{pmatrix} \lambda_{i}^{k}&C_{k}^{1}\lambda_{i}^{k-1}& & C_{k}^{n_{i}-1}\lambda _{i}^{k-n_{i}+1} \\ &\lambda_{i}^{k}&\ddots& \\ & &\ddots&C_{k}^{1}\lambda_{i}^{k-1}\\ & & &\lambda_{i}^{k} \end{pmatrix} \\ & \quad= \begin{pmatrix} \sum_{k=0}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha(k+1))}\lambda _{i}^{k}&\sum_{k=1}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha (k+1))}C_{k}^{1}\lambda_{i}^{k-1}& &\sum_{k=n_{i}-1}^{\infty }\frac{t^{k\alpha}}{\Gamma(\alpha(k+1))}C_{k}^{n_{i}-1}\lambda _{i}^{k-n_{i}+1}\\ &\sum _{k=0}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha (k+1))}\lambda_{i}^{k}&\ddots&\\ & &\ddots&\sum _{k=1}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha (k+1))}C_{k}^{1}\lambda_{i}^{k-1}\\ & & &\sum _{k=0}^{\infty}\frac{t^{k\alpha}}{\Gamma(\alpha (k+1))}\lambda_{i}^{k} \end{pmatrix}, \end{aligned}$$
(22)
$$\begin{aligned} &E_{\alpha,\alpha}\bigl(J_{i}t^{\alpha}\bigr) \\ &\quad = \begin{pmatrix} E_{\alpha,\alpha}(\lambda_{i}t^{\alpha})&\frac{1}{1!}\frac{\partial }{\partial\lambda_{i}}E_{\alpha,\alpha}(\lambda_{i}t^{\alpha})& & \frac {1}{(n_{i}-1)!}(\frac{\partial}{\partial\lambda_{i}})^{n_{i}-1}E_{\alpha ,\alpha}(\lambda_{i}t^{\alpha}) \\ &E_{\alpha,\alpha}(\lambda_{i}t^{\alpha})&\ddots& \\ & &\ddots&\frac{1}{1!}\frac{\partial}{\partial\lambda_{i}}E_{\alpha ,\alpha}(\lambda_{i}t^{\alpha})\\ & & &E_{\alpha,\alpha}(\lambda_{i}t^{\alpha}) \end{pmatrix} _{n_{i}\times n_{i}}. \end{aligned}$$
(23)

Under the conditions of Theorem 3.2, we can get

$$\begin{aligned} &E_{\alpha,\alpha}\bigl(\lambda_{i} t^{\alpha}\bigr) \\ &\quad =-\sum_{k=2}^{N}\frac {1}{\Gamma(\alpha(1- k))} \frac{1}{(\lambda_{i} t^{\alpha})^{k}}+O\biggl(\frac {1}{|\lambda_{i} t^{\alpha}|^{N+1}}\biggr)\rightarrow0,\quad t \rightarrow +\infty, \end{aligned}$$

also, under the condition \(|\operatorname{arg}(\lambda_{i})|>\frac {\alpha\pi}{2}\) and by Theorem 4 in [10], we have \(\frac {1}{(l-1)!}(\frac{\partial}{\partial\lambda_{i}})^{l-1}E_{\alpha,\alpha }(\lambda_{i}t^{\alpha})\rightarrow0 \) as \(t\rightarrow+\infty\), which follows from

$$\begin{aligned} &\frac{1}{(l-1)!}\biggl(\frac{\partial}{\partial\lambda_{i}}\biggr)^{l-1}E_{\alpha ,\alpha} \bigl(\lambda_{i}t^{\alpha}\bigr) \\ &\quad =t^{1-\alpha}\Biggl(t^{l\alpha-1}\sum _{k=0}^{\infty }C_{l-1+k}^{l-1} \frac{(\lambda_{i} t^{\alpha})^{k}}{\Gamma(\alpha (k+l))}\Biggr)\sim \frac{\alpha(-\lambda_{i})^{-1-l}t^{-2\alpha}}{\Gamma(1-\alpha)}, \end{aligned}$$

where \(1\leq l\leq n_{i}\) and \(1\leq i \leq r \).

Hence,

$$\begin{aligned} \bigl\| t^{\alpha-1}E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A}t^{\alpha}\bigr)\bigr\| =&\biggl\| \operatorname{diag}\biggl[t^{\alpha-1}E_{\alpha,\alpha} \bigl(J_{1}t^{\alpha}\bigr), \ldots ,t^{\alpha-1}E_{\alpha,\alpha} \bigl(J_{r}t^{\alpha}\bigr), \\ &{}\underbrace{\frac{1}{\Gamma(\alpha)}t^{\alpha-1}, \ldots, \frac {1}{\Gamma(\alpha)}t^{\alpha-1}} _{q}\biggr]\biggr\| \rightarrow0, \quad t \rightarrow+\infty. \end{aligned}$$

From the discussion of the above two cases, we get

$$\begin{aligned} \lim_{t\rightarrow+\infty}\bigl\| x(t)\bigr\| =&\lim_{t\rightarrow+\infty} \bigl\| e_{\alpha}^{\hat{E}^{d}\hat{A}t}\hat{E}\hat{E}^{d}x(0)\bigr\| \\ =&\lim_{t\rightarrow+\infty}\bigl\| t^{\alpha-1}E_{\alpha,\alpha }\bigl(\hat{E}^{d}\hat{A}t^{\alpha}\bigr)\hat{E}\hat{E}^{d}x(0)\bigr\| =0. \end{aligned}$$

The proof is completed. □
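The criterion just proved can be illustrated numerically (our sketch, with an assumed example pair). For \(E=\operatorname{diag}(1,0)\), \(A=\operatorname{diag}(-1,1)\) one obtains \(\hat{E}^{d}\hat{A}=\operatorname{diag}(-1,0)\): the zero eigenvalue has equal algebraic and geometric multiplicities, and the non-zero eigenvalue \(-1\) satisfies \(|\operatorname{arg}(-1)|=\pi>\frac{\alpha\pi}{2}\), so Theorem 3.2 predicts asymptotic stability. The non-trivial solution mode \(t^{\alpha-1}E_{\alpha,\alpha}(-t^{\alpha})\) indeed decays:

```python
import math

def ml(alpha, beta, z, jmax=200):
    """Two-parameter Mittag-Leffler function via direct series summation,
    adequate for the moderate arguments used below."""
    s = 0.0
    for j in range(jmax):
        g = alpha * j + beta
        if g > 170:          # math.gamma overflows past ~171
            break
        s += z ** j / math.gamma(g)
    return s

# Non-trivial mode of the example system: |t^{alpha-1} E_{alpha,alpha}(-t^alpha)|.
alpha = 0.7
decay = {t: abs(t ** (alpha - 1) * ml(alpha, alpha, -t ** alpha))
         for t in (1.0, 5.0, 20.0)}
print(decay)   # shrinks monotonically toward 0 as t grows
```

The decay is algebraic (of order \(t^{-\alpha-1}\) by Lemma 2.1) rather than exponential, a characteristic feature of fractional-order dynamics.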

Theorem 3.3

If the system (8) is regular, the algebraic multiplicities of the zero eigenvalues of \(\hat{E}^{d}\hat{A}\) are larger than their geometric multiplicities, \(\breve{n}\alpha<1\), where \(\breve{n}\) is the maximal dimension of the Jordan blocks corresponding to the zero eigenvalues, and all the non-zero eigenvalues satisfy

$$\bigl|\operatorname{arg}\bigl(\lambda\bigl(\hat{E}^{d}\hat{A}\bigr)\bigr)\bigr|> \frac{\alpha\pi}{2}, $$

then the system (8) is asymptotically stable.

Proof

According to Theorem 3.2, there exists an invertible matrix H, such that

$$ J=H^{-1}\hat{E}^{d}\hat{A}H= \operatorname{diag}(J_{1}, J_{2}, \ldots, J_{r}, \mathbf{0}), $$
(24)

where \(J_{i}\), is a Jordan canonical form, \(1\leq i \leq r\), and 0 is a zero matrix with corresponding dimension.

Let \(J_{\breve{n}}\) be the Jordan block of order \(\breve{n}\) corresponding to the zero eigenvalue of \(\hat{E}^{d}\hat{A}\):

$$J_{\breve{n}} = \begin{pmatrix} 0&1& & \\ &0&\ddots& \\ & &\ddots&1\\ & & &0 \end{pmatrix} _{\breve{n}\times\breve{n}}. $$

Applying (22), we get

$$\begin{aligned}& E_{\alpha,\alpha}\bigl(J_{\breve{n}}t^{\alpha}\bigr)_{\lambda_{\breve{n}}=0}= \begin{pmatrix} \frac{1}{\Gamma(\alpha)}&\frac{t^{\alpha}}{\Gamma(2\alpha)}& &\frac {t^{(\breve{n}-1)\alpha}}{\Gamma(\breve{n}\alpha)} \\ &\frac{1}{\Gamma(\alpha)}&\ddots& \\ & &\ddots&\frac{t^{\alpha}}{\Gamma(2\alpha)}\\ & & &\frac{1}{\Gamma(\alpha)} \end{pmatrix}, \end{aligned}$$
(25)
$$\begin{aligned}& t^{\alpha-1}E_{\alpha,\alpha}\bigl(J_{\breve{n}}t^{\alpha} \bigr)_{\lambda_{\breve {n}}=0}= \begin{pmatrix} \frac{t^{\alpha-1}}{\Gamma(\alpha)}&\frac{t^{2\alpha-1}}{\Gamma(2\alpha )}& &\frac{t^{\breve{n}\alpha-1}}{\Gamma(\breve{n}\alpha)} \\ &\frac{t^{\alpha-1}}{\Gamma(\alpha)}&\ddots& \\ & &\ddots&\frac{t^{2\alpha-1}}{\Gamma(2\alpha)}\\ & & &\frac{t^{\alpha-1}}{\Gamma(\alpha)} \end{pmatrix}, \end{aligned}$$
(26)

from (26) and the condition \(\breve{n}\alpha<1\), we get

$$t^{\alpha-1}E_{\alpha,\alpha}\bigl(J_{\breve{n}}t^{\alpha} \bigr)_{\lambda_{\breve {n}}=0}\rightarrow0, \quad t\rightarrow+\infty. $$

The result also holds for Jordan blocks of zero eigenvalues with dimension lower than \(\breve{n}\). The other Jordan canonical forms in (24), corresponding to the non-zero eigenvalues of \(\hat{E}^{d}\hat{A}\), can be treated by the same method as in Theorem 3.2. The proof is completed. □
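The role of the condition \(\breve{n}\alpha<1\) can be read off directly from (26) (our illustrative sketch): for a \(2\times2\) zero-eigenvalue Jordan block, the slowest entry decays like \(t^{\breve{n}\alpha-1}\), which tends to 0 precisely because \(\breve{n}\alpha<1\).

```python
import math

# Entries of t^{alpha-1} E_{alpha,alpha}(J t^alpha) for a 2x2 Jordan block at
# the zero eigenvalue, read off from (26): diagonal entry t^{alpha-1}/Gamma(alpha),
# superdiagonal entry t^{2*alpha-1}/Gamma(2*alpha).
alpha, n_breve = 0.4, 2
assert n_breve * alpha < 1          # the hypothesis of Theorem 3.3

def block_entries(t):
    return (t ** (alpha - 1) / math.gamma(alpha),
            t ** (2 * alpha - 1) / math.gamma(2 * alpha))

for t in (1.0, 10.0, 1000.0):
    print(t, block_entries(t))
```

Had \(\breve{n}\alpha\geq1\) held instead (for example \(\alpha=0.6\) with the same block), the superdiagonal exponent \(\breve{n}\alpha-1\) would be nonnegative and the corresponding entry would not decay.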

Theorem 3.4

If the system (8) is regular, the algebraic and geometric multiplicities are the same for the zero eigenvalues of \(\hat{E}^{d}\hat{A}\) and all the non-zero eigenvalues satisfy

$$\bigl|\operatorname{arg}\bigl(\lambda\bigl(\hat{E}^{d}\hat{A}\bigr)\bigr)\bigr| \geq\frac{\alpha\pi}{2}, $$

moreover, the algebraic and geometric multiplicities are the same for the critical eigenvalues, satisfying \(|\operatorname{arg}(\lambda(\hat{E}^{d}\hat{A}))|=\frac{\alpha\pi}{2}\), then the system (8) is stable.

Proof

According to Theorem 3.2, there exists an invertible matrix H, such that

$$ J=H^{-1}\hat{E}^{d}\hat{A}H= \operatorname{diag}(J_{1}, J_{2}, \ldots, J_{r}, \mathbf{0}), $$
(27)

where \(J_{i}\), is a Jordan canonical form, \(1\leq i \leq r\), and 0 is a zero matrix with corresponding dimension.

From the conditions of Theorem 3.4, without loss of generality, we may suppose that the critical eigenvalue \(\lambda_{s}\) satisfies \(|\operatorname{arg}(\lambda_{s})|=\frac{\alpha\pi}{2}\) with algebraic and geometric multiplicities both equal to 1, so that \(J_{s}=\lambda_{s}\), \(1\leq s\leq r\), and \(\lambda_{s}=r_{0}(\cos(\frac{\alpha\pi}{2})+i_{0}\sin(\frac{\alpha \pi}{2}))=r_{0}e^{\frac{\alpha\pi}{2}i_{0}}, (i_{0})^{2}=-1\).

Then

$$\begin{aligned} E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A}t^{\alpha} \bigr) =&H\operatorname{diag}\biggl[E_{\alpha,\alpha}\bigl(J_{1}t^{\alpha} \bigr), \ldots, E_{\alpha,\alpha}\bigl(J_{s-1}t^{\alpha} \bigr),E_{\alpha,\alpha}\bigl(\lambda _{s}t^{\alpha}\bigr), \\ &{}E_{\alpha,\alpha}\bigl(J_{s+1}t^{\alpha}\bigr), \ldots,E_{\alpha,\alpha }\bigl(J_{r}t^{\alpha}\bigr), \frac{1}{\Gamma(\alpha)}, \ldots, \frac{1}{\Gamma (\alpha)}\biggr]H^{-1}. \end{aligned}$$
(28)

Applying Lemma 2.1, we get

$$\begin{aligned} E_{\alpha,\alpha}\bigl(\lambda_{s}t^{\alpha}\bigr) =& \frac{1}{\alpha}\bigl(\lambda_{s}t^{\alpha} \bigr)^{\frac{1-\alpha}{\alpha}}\exp \bigl(\bigl(\lambda_{s}t^{\alpha} \bigr)^{\frac{1}{\alpha}}\bigr) \\ &{}-\sum _{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac{1}{(\lambda _{s}t^{\alpha})^{k}}+O\biggl(\frac{1}{|\lambda_{s}t^{\alpha}|^{N+1}}\biggr) \\ =&\frac{1}{\alpha}\bigl(r_{0}^{\frac{1-\alpha}{\alpha}}t^{1-\alpha}e^{\frac {(1-\alpha)\pi}{2}i_{0}} \bigr)\exp\bigl(r_{0}^{\frac{1}{\alpha}}e^{\frac{\pi }{2}i_{0}}t\bigr) \\ &{}-\sum _{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac {1}{r_{0}^{k}e^{\frac{(k\alpha)\pi}{2}i_{0}}t^{k\alpha}}+O\biggl(\frac {1}{|r_{0}t^{\alpha}|^{N+1}}\biggr) \\ =&\frac{1}{\alpha}\bigl(r_{0}^{\frac{1-\alpha}{\alpha}}t^{1-\alpha}e^{\frac {(1-\alpha)\pi}{2}i_{0}} \bigr)\exp\bigl(r_{0}^{\frac{1}{\alpha}}ti_{0}\bigr) \\ &-\sum _{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac {1}{r_{0}^{k}e^{\frac{(k\alpha)\pi}{2}i_{0}}t^{k\alpha}}+O\biggl(\frac {1}{|r_{0}t^{\alpha}|^{N+1}}\biggr). \end{aligned}$$

Then

$$\begin{aligned} t^{\alpha-1}E_{\alpha,\alpha}\bigl(\lambda_{s}t^{\alpha} \bigr) =&\frac{1}{\alpha}\bigl(r_{0}^{\frac{1-\alpha}{\alpha}}e^{\frac{(1-\alpha)\pi }{2}i_{0}} \bigr)\exp\bigl(r_{0}^{\frac{1}{\alpha}}ti_{0}\bigr) \\ &{}-\sum _{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac {1}{r_{0}^{k}e^{\frac{(k\alpha)\pi}{2}i_{0}}t^{(k-1)\alpha+1}}+O\biggl(\frac {1}{|r_{0}|^{N+1}|t^{\alpha N+1}|}\biggr), \end{aligned}$$

when \(t\rightarrow+\infty\), we get

$$\begin{aligned}& \biggl|\frac{1}{\alpha}\bigl(r_{0}^{\frac{1-\alpha}{\alpha}}e^{\frac{(1-\alpha)\pi }{2}i_{0}}\bigr) \exp\bigl(r_{0}^{\frac{1}{\alpha}}ti_{0}\bigr)\biggr|\leq \frac{1}{\alpha }r_{0}^{\frac{1-\alpha}{\alpha}}, \\& -\sum _{k=2}^{N}\frac{1}{\Gamma(\alpha(1- k))} \frac {1}{r_{0}^{k}e^{\frac{(k\alpha)\pi}{2}i_{0}}t^{(k-1)\alpha+1}}+O\biggl(\frac {1}{|r_{0}|^{N+1}|t^{\alpha N+1}|}\biggr)\rightarrow0. \end{aligned}$$

From the above discussion, we can see that \(t^{\alpha-1}E_{\alpha,\alpha }(\lambda_{s}t^{\alpha}) \) remains bounded as \(t\rightarrow+\infty\). The other Jordan canonical forms in (28) can be treated by the same method as in Theorem 3.2. Hence, the system (8) is stable. The proof is completed. □

Theorem 3.5

If the system (8) is regular, the algebraic multiplicities of the zero eigenvalues of \(\hat{E}^{d}\hat{A}\) are larger than their geometric multiplicities, \(\breve{n}\alpha<1\), where \(\breve{n}\) is the maximal dimension of the Jordan blocks corresponding to the zero eigenvalues, and all the non-zero eigenvalues satisfy

$$\bigl|\operatorname{arg}\bigl(\lambda\bigl(\hat{E}^{d}\hat{A}\bigr)\bigr)\bigr| \geq\frac{\alpha\pi}{2}, $$

and, moreover, the algebraic and geometric multiplicities coincide for the critical eigenvalues satisfying \(|\operatorname{arg}(\lambda(\hat{E}^{d}\hat{A}))|=\frac{\alpha\pi}{2}\), then the system (8) is stable.

Proof

According to Lemma 2.1 and the conditions of Theorem 3.5, the proof is similar to those of Theorems 3.3 and 3.4 and is omitted. □

Theorem 3.6

For the system (8), if all the roots of the characteristic equation \(|s^{\alpha}E-A|=0\) have negative real parts, then the system is asymptotically stable.

Proof

Taking the Laplace transform on both sides of system (8), we get the characteristic equation of system (8) as follows:

$$ \bigl|s^{\alpha}E-A\bigr|=0. $$
(29)

Let \(\lambda=s^{\alpha}\); then

$$ |\lambda E-A|=0. $$
(30)

Next, we prove that the characteristic equations, \(|\lambda E-A|=0\) and \(|\lambda I-\hat{E}^{d}\hat{A}|=0\), have the same non-zero eigenvalues.

In fact, from (20) in Theorem 3.2, there exist a ρ and an invertible matrix T such that

$$ \hat{E}^{d}\hat{A}=T^{-1} \begin{pmatrix} C^{-1}(\rho C-I)&0\\ 0&0 \end{pmatrix} T, $$
(31)

where ρ is a constant such that \((\rho E-A)\) is invertible, and \(C^{-1}(\rho C-I)\in R^{p\times p}\).

Then

$$\begin{aligned} \bigl|\lambda I-\hat{E}^{d}\hat{A}\bigr|&=\bigl|T^{-1}\bigr| \begin{vmatrix} \lambda I-C^{-1}(\rho C-I)&0 \\ 0&\lambda I \end{vmatrix} |T| \\ &=\bigl|\lambda I-C^{-1}(\rho C-I)\bigr||\lambda I|. \end{aligned}$$
(32)

Multiplying both sides of \(|\lambda E-A|=0\) by \(|(\rho E-A)^{-1}|\) and applying (9) and (10), we have

$$\begin{aligned} 0 =&\bigl|(\rho E-A)^{-1}\bigr||\lambda E-A| \\ =&\bigl|(\rho E-A)^{-1}(\lambda E-A)\bigr| \\ =&|\lambda\hat{E}-\hat{A}| \\ =& \begin{vmatrix} \lambda C-(\rho C-I)&0 \\ 0&(\lambda-\rho)N+I \end{vmatrix} \\ =&|C|\bigl|\lambda I-C^{-1}(\rho C-I)\bigr|\bigl|(\lambda-\rho)N+I\bigr|. \end{aligned}$$
(33)

Since N is nilpotent, we get

$$\bigl|(\lambda-\rho)N+I\bigr|= 1. $$
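This holds because \((\lambda-\rho)N\) is again nilpotent, so \((\lambda-\rho)N+I\) is unipotent with all eigenvalues equal to 1. A minimal numerical sketch, in which the particular \(3\times3\) nilpotent matrix and the scalar values of \(\lambda-\rho\) are assumed examples:

```python
import numpy as np

# An assumed strictly upper-triangular (hence nilpotent) matrix: N^3 = 0.
N = np.array([[0.0, 2.0, -1.0],
              [0.0, 0.0,  3.0],
              [0.0, 0.0,  0.0]])
assert np.allclose(np.linalg.matrix_power(N, 3), 0)  # nilpotency check

I = np.eye(3)
# det((lambda - rho) * N + I) == 1 for every value of (lambda - rho),
# since c * N is nilpotent and the identity plus a nilpotent is unipotent.
for c in [-5.0, -0.3, 0.0, 1.7, 42.0]:
    assert abs(np.linalg.det(c * N + I) - 1.0) < 1e-12
```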

Hence,

$$ 0=|\lambda E-A|=| C|\bigl|\lambda I-C^{-1}( \rho C-I)\bigr|. $$
(34)

From (32) and (34), we can see that the characteristic equations, \(|\lambda E-A|=0\) and \(|\lambda I-\hat{E}^{d}\hat{A}|=0\), have the same non-zero eigenvalues.

According to the conditions of Theorem 3.6, assume that \(\lambda_{1}, \lambda _{2}, \ldots, \lambda_{m}\) are all the non-zero roots of \(|\lambda E-A|=0\), where \(n_{i}\) is the multiplicity of \(\lambda_{i}\), \(1\leq i\leq m\), and \(n_{1}+n_{2}+\cdots+n_{m}< n\). From the above discussion, we know that \(\lambda_{1}, \lambda_{2}, \ldots, \lambda_{m}\) are all the non-zero roots of \(|\lambda I-\hat{E}^{d}\hat{A}|=0\), and they are also all the non-zero roots of \(|\lambda I-C^{-1}(\rho C-I)|=0\).

We discuss two cases, according to whether or not there are multiple roots among \(\lambda_{1},\lambda _{2},\ldots,\lambda_{m}\).

First, suppose that there are no multiple roots, i.e., the matrix \(C^{-1}(\rho C-I)\) is diagonalizable: there exists an invertible matrix P such that

$$C^{-1}(\rho C-I)=P^{-1} \begin{pmatrix} \lambda_{1}& & 0\\ &\ddots& \\ 0& &\lambda_{m} \end{pmatrix} P. $$

From (31), let \(H=T\bigl ( {\scriptsize\begin{matrix}{} P&0\cr 0&I \end{matrix}} \bigr )\), then

$$\hat{E}^{d}\hat{A}=H^{-1} \begin{pmatrix} \lambda_{1}& & & \\ &\ddots& & \\ & &\lambda_{m}& \\ & & &\mathbf{0} \end{pmatrix} H. $$

Hence,

$$E_{\alpha,\alpha}\bigl(\hat{E}^{d}\hat{A} t^{\alpha}\bigr) =H^{-1} \operatorname{diag}\biggl[E_{\alpha,\alpha}\bigl(\lambda_{1} t^{\alpha}\bigr), \ldots,E_{\alpha ,\alpha}\bigl(\lambda_{m} t^{\alpha}\bigr),\underbrace{\frac{1}{\Gamma(\alpha)}, \ldots,\frac{1}{\Gamma(\alpha)}} _{n-m} \biggr]H. $$

Since \(\lambda=s^{\alpha}\), we have \(|\operatorname{arg}(\lambda)|=\alpha|\operatorname{arg}(s)|>\frac{\pi \alpha}{2}\) whenever \(|\operatorname{arg}(s)|>\frac{\pi}{2}\). From the proof of Theorem 3.2, we see that the system (8) is asymptotically stable.
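The diagonalizable case rests on applying the scalar Mittag-Leffler function to each eigenvalue. This can be checked numerically against the defining power series; in the sketch below, the sample matrix M, the time t, and the series truncation K are assumptions chosen so the truncated series has converged:

```python
import numpy as np
from math import gamma

def ml(alpha, beta, z, K=80):
    """Truncated power series for E_{alpha,beta}(z); adequate for small |z|."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(K))

def ml_matrix_series(alpha, beta, M, K=80):
    """E_{alpha,beta}(M) summed directly as a matrix power series."""
    S = np.zeros_like(M)
    P = np.eye(M.shape[0])
    for k in range(K):
        S = S + P / gamma(alpha * k + beta)
        P = P @ M
    return S

alpha = 1.0 / 3.0
t = 0.1
# An assumed diagonalizable sample matrix with eigenvalues -1 and -2.
M = np.array([[-1.0,  1.0],
              [ 0.0, -2.0]])
w, V = np.linalg.eig(M)                     # M = V diag(w) V^{-1}

# Spectral route: scalar Mittag-Leffler function applied to each eigenvalue.
E_spec = V @ np.diag([ml(alpha, alpha, lam * t**alpha) for lam in w]) @ np.linalg.inv(V)
# Direct route: matrix power series of E_{alpha,alpha}(M t^alpha).
E_direct = ml_matrix_series(alpha, alpha, M * t**alpha)
assert np.allclose(E_spec, E_direct, atol=1e-10)
```

Both routes compute the same matrix function; the spectral route is simply the diagonal-case identity used in the proof.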

Second, suppose that there are multiple roots, i.e., the matrix \(C^{-1}(\rho C-I)\) is similar to a Jordan canonical form: there exists an invertible matrix P such that

$$ C^{-1}(\rho C-I)=P^{-1}\operatorname{diag}(J_{1},J_{2}, \ldots,J_{m})P. $$
(35)

Let \(J_{i},1\leq i \leq m\), have the form

$$J_{i} = \begin{pmatrix} \lambda_{i}&1& & \\ &\lambda_{i}&\ddots& \\ & &\ddots&1\\ & & &\lambda_{i} \end{pmatrix} _{n_{i}\times n_{i}}, \quad 1\leq i\leq m. $$

The remainder of the proof is similar to case (ii) in Theorem 3.2 and is omitted. The proof is completed. □

4 Illustrative examples

In this section, we present some examples to illustrate the application of our results.

Example 1

Consider the following system:

$$ E D^{\frac{1}{3}} x(t) =Ax(t), $$
(36)

where \(E=\Bigl( {\scriptsize\begin{matrix}{} 1& 0 & -2\cr -1&0 &2 \cr 2&3&2 \end{matrix}} \Bigr), A=\Bigl( {\scriptsize\begin{matrix}{} 0& -1 & -2\cr 27&22 &17 \cr -18&-14&-10 \end{matrix}} \Bigr), x(t)=\Bigl( {\scriptsize\begin{matrix}{} x_{1}(t) \cr x_{2}(t) \cr x_{3}(t) \end{matrix}} \Bigr)\).

Since \(E-A\) is invertible, we have

$$\hat{E} =(E-A)^{-1}E= \begin{pmatrix} -1& -\frac{5}{3} & -\frac{4}{3}\\ 2&\frac{5}{3}&-\frac{2}{3} \\ -1&\frac{2}{3}&\frac{10}{3} \end{pmatrix},\qquad \hat{A}=(E-A)^{-1}A= \begin{pmatrix} -2& -\frac{5}{3} & -\frac{4}{3}\\ 2&\frac{2}{3}&-\frac{2}{3} \\ -1&\frac{2}{3}&\frac{7}{3} \end{pmatrix}, $$

\(\hat{E}^{d}=\Bigl( {\scriptsize\begin{matrix}{} -1& -\frac{41}{27} & -\frac{28}{27}\cr 2&\frac{77}{27}&\frac{46}{27} \cr -1&-\frac{34}{27}&-\frac{14}{27} \end{matrix}} \Bigr)\), and the initial condition \(x(0)\) satisfies

$$\bigl(I-\hat{E} \hat{E}^{d}\bigr)x(0)=0, \quad \mbox{i.e.}, 9x_{1}(0)+ 7x_{2}(0)+ 5x_{3}(0)=0. $$

Hence, we get the explicit representation of the solution for the example:

$$x(t)=t^{-\frac{2}{3}}E_{\frac{1}{3}, \frac{1}{3}}\bigl(\hat{E}^{d}\hat{A}t^{\frac{1}{3}}\bigr)x(0). $$
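The matrices in this example can be checked numerically. The sketch below computes the Drazin inverse through the pseudoinverse representation \(\hat{E}^{d}=\hat{E}^{k}(\hat{E}^{2k+1})^{+}\hat{E}^{k}\), an identity valid for \(k\geq\operatorname{ind}(\hat{E})\) (taking k = 3, which always suffices for a \(3\times3\) matrix); the test vector x0 is an assumed point satisfying the stated initial condition:

```python
import numpy as np

def drazin(M, k):
    """Drazin inverse via M^D = M^k (M^(2k+1))^+ M^k, valid for k >= ind(M)."""
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

E = np.array([[ 1.0, 0.0, -2.0],
              [-1.0, 0.0,  2.0],
              [ 2.0, 3.0,  2.0]])
A = np.array([[  0.0,  -1.0,  -2.0],
              [ 27.0,  22.0,  17.0],
              [-18.0, -14.0, -10.0]])

Einv = np.linalg.inv(E - A)          # E - A is invertible (det = 27)
Ehat, Ahat = Einv @ E, Einv @ A

# First rows match the matrices stated in the example.
assert np.allclose(Ehat[0], [-1.0, -5.0/3.0, -4.0/3.0])
assert np.allclose(Ahat[0], [-2.0, -5.0/3.0, -4.0/3.0])

Ed = drazin(Ehat, 3)                 # ind(Ehat) <= 3 holds for any 3x3 matrix
assert np.allclose(Ed, [[-1.0, -41.0/27.0, -28.0/27.0],
                        [ 2.0,  77.0/27.0,  46.0/27.0],
                        [-1.0, -34.0/27.0, -14.0/27.0]])

# Admissible initial condition: (I - Ehat Ed) x0 = 0, i.e. 9 x1 + 7 x2 + 5 x3 = 0.
x0 = np.array([1.0, 1.0, -16.0 / 5.0])       # satisfies 9 + 7 - 16 = 0
assert np.allclose((np.eye(3) - Ehat @ Ed) @ x0, 0.0)
```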

Example 2

Consider the following system:

$$ E D^{\frac{1}{3}} x(t) =Ax(t), $$
(37)

where \(E=\bigl ( {\scriptsize\begin{matrix}{} 1&0 \cr 0&0 \end{matrix}} \bigr ), A=\bigl ( {\scriptsize\begin{matrix}{} -1&0 \cr \frac{1}{3}&-1 \end{matrix}} \bigr ), x(t)=\bigl ( {\scriptsize\begin{matrix}{} x_{1}(t) \cr x_{2}(t) \end{matrix}} \bigr )\).

Since \(E-A\) is invertible, we have

$$\begin{aligned}& \hat{E} =(E-A)^{-1}E= \begin{pmatrix} \frac{1}{2} & 0\\ \frac{1}{6}&0 \end{pmatrix},\qquad \hat{A}=(E-A)^{-1}A= \begin{pmatrix} -\frac{1}{2} & 0\\ \frac{1}{6}&-1 \end{pmatrix}, \\& \hat{E}^{d}= \begin{pmatrix} 2&0 \\ \frac{2}{3}&0 \end{pmatrix},\qquad \hat{E}^{d}\hat{A}= \begin{pmatrix} -1&0 \\ -\frac{1}{3}&0 \end{pmatrix},\qquad \hat{E}\hat{E}^{d}= \begin{pmatrix} 1&0 \\ \frac{1}{3}&0 \end{pmatrix}, \end{aligned}$$

and the initial condition \(x(0)\) satisfies

$$\bigl(I-\hat{E} \hat{E}^{d}\bigr)x(0)=0, \quad\mbox{i.e.}, x_{1}(0)-3x_{2}(0)=0. $$

The eigenvalues of the matrix \(\hat{E}^{d}\hat{A}\) are \(\lambda _{1}=-1,\lambda_{2}=0\), therefore the system is asymptotically stable.
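The eigenvalue claim, together with the fact that the non-zero eigenvalue is shared with the pencil \(|\lambda E-A|=0\) (as shown in Theorem 3.6), can be checked numerically. A sketch, again computing the Drazin inverse through the pseudoinverse representation \(\hat{E}^{d}=\hat{E}^{k}(\hat{E}^{2k+1})^{+}\hat{E}^{k}\) for \(k\geq\operatorname{ind}(\hat{E})\):

```python
import numpy as np

def drazin(M, k):
    """Drazin inverse via M^D = M^k (M^(2k+1))^+ M^k, valid for k >= ind(M)."""
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

E = np.array([[1.0, 0.0],
              [0.0, 0.0]])
A = np.array([[-1.0,       0.0],
              [ 1.0 / 3.0, -1.0]])

Einv = np.linalg.inv(E - A)
Ehat, Ahat = Einv @ E, Einv @ A
EdA = drazin(Ehat, 2) @ Ahat

# Eigenvalues of Ehat^d Ahat: -1 and 0, as stated in the example.
assert np.allclose(sorted(np.linalg.eigvals(EdA).real), [-1.0, 0.0])

# The non-zero eigenvalue lambda = -1 also solves the pencil |lambda E - A| = 0.
assert abs(np.linalg.det(-1.0 * E - A)) < 1e-12
```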

To verify this in another way, note that the solution of the system is

$$x_{1}(t)=t^{-\frac{2}{3}}E_{\frac{1}{3},\frac{1}{3}}\bigl(-t^{\frac {1}{3}} \bigr)x_{1}(0), \qquad x_{2}(t)=\frac{1}{3}x_{1}(t). $$

From [16], there exists a constant \(M>0\) such that, for all \(t\geq0\),

$$\bigl|E_{\frac{1}{3},\frac{1}{3}}\bigl(-t^{\frac{1}{3}}\bigr)\bigr|\leq\frac {M}{1+t^{\frac{1}{3}}}. $$

When \(t\rightarrow+\infty\), we get \(x_{1}(t)\rightarrow0\) and \(x_{2}(t)\rightarrow0\). Thus, the system (37) is asymptotically stable.

5 Conclusions

In this paper, we obtain the existence and uniqueness theorem for the initial value problem of the linear degenerate fractional differential system and derive an explicit representation of the solution. The stability of linear degenerate fractional differential systems with the Riemann-Liouville derivative is investigated, and some easily verifiable stability criteria are given. We derive the relationship between the stability and the distribution of the zero eigenvalues of the system, as well as the distribution of the critical eigenvalues satisfying \(|\operatorname{arg}(\lambda(\hat{E}^{d}\hat{A}))|=\frac{\alpha\pi}{2}\). Since the considered systems are degenerate fractional systems, the theorems obtained in this paper can be widely applied to many practical systems and generalize the results derived in [13, 37].