1 Introduction

Fractional differential equations (FDEs) are differential equations involving derivatives of non-integer order. As the theory of FDEs has developed, it has become clear that they can model many real-world problems, especially those exhibiting hereditary features and memory in various processes and materials [1,2,3,4,5,6,7,8,9,10,11,12,13], better than ordinary differential equations. A look at the literature shows that relatively few studies address linear fractional differential systems with variable coefficients, even though linear fractional differential systems as a whole are a well-examined field of research. On the other hand, many real-life processes and dynamics, such as linearized models of populations, aircraft, the diffusion of batteries, and the distribution of charge-transfer parameters, can be formulated as linear fractional differential systems with variable coefficients. There is no doubt that analytical or closed-form solutions of linear differential systems are essential for examining their stability and controllability. So far, representations of solutions for linear fractional differential systems with constant coefficients [14, 15] have been studied extensively and used to solve differential games and control problems [16,17,18,19]. Unfortunately, only a few works deal with linear fractional differential systems with variable coefficients and their controllability. The researchers in [20, 21] obtained a closed-form solution of linear fractional differential systems via the generalized Peano–Baker series [22]. Recently, Matychyn [23] derived a representation of solutions to linear Riemann–Liouville and Caputo fractional differential systems with variable coefficients through a state-transition matrix, again in the light of the generalized Peano–Baker series.

The Picard iteration method is a procedure for generating a sequence of functions that approximate the solution whose existence we are trying to establish. A Volterra integral equation is a special type of integral equation obtained from a (fractional) differential equation; with its help, qualitative properties of the system, such as controllability and stability, can be investigated. Put simply, the (generalized) Peano–Baker series [22] is obtained from a Volterra integral equation by means of a formal Picard iteration. In this respect, we eventually get a closed-form solution in terms of the Peano–Baker series, offered along with its explicit series expansion and convergence assumptions. A closed-form solution of a fractional system is the key to solving the corresponding (optimal) control problems [24, 25] and to investigating other qualitative properties; the same holds for ordinary systems [26]. Keeping this in mind, it can be said with confidence that the results of the present paper form a basis for solving fractional control problems and investigating further qualitative properties.

In recent years, Prabhakar fractional calculus has appeared in the literature. Its starting point is the fractional integral operator defined in [27], studied in depth in [28], and extended to notions of fractional derivatives in [29]. It has been adapted to purely mathematical subjects [30,31,32,33] and to various applications [34, 35], and has consequently become a center of interest. For all these reasons, we devote this paper to finding analytical solutions of linear fractional differential systems with variable coefficients in the context of Prabhakar fractional derivatives, applying the same technique as in [20, 22, 23].

Apart from its evident efficacy as a foundational tool for representing anomalous dielectrics, the Prabhakar calculus also plays a noteworthy role in fractional viscoelasticity [31]. As detailed in [36], the formal duality between viscoelastic models and electrical systems, originally introduced by Gross and Fuoss in [37,38,39] and later revisited in [40], reveals that the distinctive relaxation processes have a parallel representation in linear viscoelasticity. Consequently, it is reasonable to inquire about the implications of formulating the constitutive equation of a given material using Prabhakar derivatives.

The research [41] delves into the adjustment of fractional operators with Mittag-Leffler kernels, specifically the ABC and ABR operators, using a generalized Mittag-Leffler function as the kernel. In the study [42], through the investigation of the associations between Prabhakar fractional operators and fractional operators with generalized Mittag-Leffler kernels, fresh properties of both operators, as well as of the original Atangana–Baleanu model and its iterated form, are unveiled.

Inspired by the above-cited works, we first consider the following inhomogeneous linear fractional system involving the Prabhakar fractional derivative of Riemann–Liouville type with variable coefficients

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) +u\left( x \right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=a}=y_0 \end{array}\right. \end{aligned}$$
(1)

where \(_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\) denotes the Prabhakar derivative of Riemann–Liouville type of fractional order \(0<\beta <1\), \(y: \mathcal {J} \rightarrow \mathbb {R}^n \) is a vector-valued function, and the matrix function \(Y: \mathcal {J} \rightarrow \mathbb {R}^{n\times n}\) and the function \(u: \mathcal {J} \rightarrow \mathbb {R}^{n} \) are continuous.

After that, we also consider the following inhomogeneous linear fractional system involving the Prabhakar fractional derivative of Caputo type with variable coefficients

$$\begin{aligned} \left\{ \begin{array}{l} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) +u\left( x \right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y\left( a\right) =y_0 \end{array}\right. \end{aligned}$$
(2)

where \(_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\) denotes the Prabhakar derivative of Caputo type of fractional order \(0<\beta <1\), and the remaining data are as in (1).

This paper addresses the initial value problem for linear systems of fractional differential equations with variable coefficients, incorporating Prabhakar fractional derivatives of Riemann–Liouville and Caputo types. The generalized Peano–Baker series technique is applied to derive the state-transition matrix. Closed-form solutions are obtained for both the homogeneous and inhomogeneous cases, and the theoretical findings are illustrated through examples.

2 Preliminaries

In this section, the most fundamental tools are presented to make the coming findings easily understandable.

\(\mathbb {R}^n\) is the Euclidean space of dimension \(n \in \mathbb {N}\), \(\mathcal {J}=[a,T] \subset \mathbb {R}\) with \(T>a\), and \(\overset{\textit{ o}}{\mathcal {J}}=(a,T)\). The set \(AC^n(\overset{\textit{ o}}{\mathcal {J}})\) consists of real-valued functions f that have derivatives up to order \(n-1\) on \(\overset{\textit{ o}}{\mathcal {J}}\) such that \(f^{(n-1)}\) is absolutely continuous.

For \(\alpha , \beta , \delta ,w \in \mathbb {C}\) with \(Re(\alpha )>0\) and \(Re(\beta )>0\), the Prabhakar (fractional) integral operator [27] is defined as follows

$$\begin{aligned} \left( _a\mathcal {I}^{w,\delta }_{\alpha , \beta }\daleth \right) \left( x\right) =\int _{a}^{x} \left( x-s\right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-s \right) ^\alpha \right) \daleth \left( s \right) ds \end{aligned}$$
(3)

where the well-known three-parameter Mittag-Leffler function is given by

$$\begin{aligned} \mathcal {E}^\delta _{\alpha , \beta }\left( x \right) =\sum _{m=0}^{\infty }\frac{(\delta )_m}{\Gamma (m\alpha +\beta )} \frac{x^m}{m!}. \end{aligned}$$

Here \(\Gamma (\cdot )\) is the famous Gamma function and \((\delta )_m\) is the Pochhammer symbol, that is, \((\delta )_m=\frac{\Gamma (\delta +m)}{\Gamma (\delta )}\), or equivalently

$$\begin{aligned} (\delta )_0=1, \ \ \ (\delta )_m=\delta (\delta +1)\cdots (\delta +m-1), \ \ m=1,2,\dots \ . \end{aligned}$$
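The truncated series above can be sketched numerically; the following minimal Python helper (the name `ml3`, the truncation order, and the test values are our own choices, not part of the paper) evaluates \(\mathcal {E}^\delta _{\alpha , \beta }\) by accumulating the Pochhammer factors iteratively:

```python
import math

def ml3(alpha, beta, delta, x, terms=80):
    """Truncated series for the three-parameter Mittag-Leffler function
    E^delta_{alpha,beta}(x); `terms` is the truncation order."""
    total, poch = 0.0, 1.0   # poch holds the Pochhammer symbol (delta)_m
    for m in range(terms):
        total += poch / math.gamma(m * alpha + beta) * x**m / math.factorial(m)
        poch *= delta + m    # (delta)_{m+1} = (delta)_m * (delta + m)
    return total
```

For instance, \(\mathcal {E}^1_{1,1}(x)=e^x\) and \(\mathcal {E}^1_{2,1}(x)=\cosh \sqrt{x}\), which give quick sanity checks on the implementation.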

Remark 1

For \(\delta =0\), the three-parameter Mittag-Leffler function reduces to the constant \(\mathcal {E}^{0}_{\alpha , \beta }\left( x \right) =\frac{1}{\Gamma (\beta )}\).

Note that the Prabhakar (fractional) integral \(_a\mathcal {I}^{w,\delta }_{\alpha , \beta }\) with \(\delta =0\) reduces to the Riemann–Liouville fractional integral \(_a^{RL}\mathcal {I}_{ \beta }\) of order \(\beta \), that is,

$$\begin{aligned} \left( _a^{RL}\mathcal {I}_{ \beta }\daleth \right) \left( x\right) =\frac{1}{\Gamma (\beta )}\int _{a}^{x} \left( x-s\right) ^{\beta -1} \daleth \left( s \right) ds. \end{aligned}$$

In [43], the Prabhakar (fractional) integral was given a series representation in terms of Riemann–Liouville fractional integrals as follows:

$$\begin{aligned} \left( _a\mathcal {I}^{w,\delta }_{\alpha , \beta }\daleth \right) \left( x\right) =\sum _{m=0}^{\infty }\frac{(\delta )_m w^m}{m!} {_a^{RL}\mathcal {I}_{\alpha m + \beta }}\daleth \left( x\right) , \end{aligned}$$
(4)

where \( Re(\alpha )>0, \ Re(\beta ) >0.\) In the work [28], the following equality is proved for \( Re(\alpha )>0, Re(\beta _1)>0, Re(\beta _2) >0\)

$$\begin{aligned} _a\mathcal {I}^{w,\delta _1}_{\alpha , \beta _1} \circ {_a\mathcal {I}^{w,\delta _2}_{\alpha , \beta _2}}={_a\mathcal {I}^{w,\delta _1+\delta _2}_{\alpha , \beta _1+\beta _2}}, \end{aligned}$$
(5)

which shows that the Prabhakar (fractional) integral has the semigroup property with respect to \(\delta \) and \(\beta \).
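The semigroup property (5) can be checked numerically on the constant function 1 by storing functions as finite lists of power terms \(c\,(x-a)^{g-1}\) and applying the truncated series form (4) together with the Riemann–Liouville power rule. This is a sketch; the term-list representation and all parameter values below are our own choices:

```python
import math

def prabhakar_apply(terms_in, alpha, beta, delta, w, M=40):
    """Apply the Prabhakar integral, in the truncated series form of Eq. (4),
    to a function stored as a list of pairs (c, g) meaning c*(x-a)**(g-1)."""
    out, poch = [], 1.0  # poch tracks the Pochhammer symbol (delta)_m
    for m in range(M):
        mu = alpha * m + beta                  # order of the m-th R-L integral
        coef = poch * w**m / math.factorial(m)
        for c, g in terms_in:
            # R-L power rule: I_mu (x-a)^(g-1) = Gamma(g)/Gamma(g+mu)*(x-a)^(g+mu-1)
            out.append((coef * c * math.gamma(g) / math.gamma(g + mu), g + mu))
        poch *= delta + m
    return out

def evaluate(terms_in, xa):
    # evaluate the power-term list at x - a = xa
    return sum(c * xa ** (g - 1) for c, g in terms_in)

one = [(1.0, 1.0)]                             # the constant function 1
alpha, w = 0.6, 0.3
b1, b2, d1, d2 = 0.4, 0.7, 1.2, -0.5
lhs = evaluate(prabhakar_apply(prabhakar_apply(one, alpha, b2, d2, w),
                               alpha, b1, d1, w), 0.9)
rhs = evaluate(prabhakar_apply(one, alpha, b1 + b2, d1 + d2, w), 0.9)
```

Up to truncation error, `lhs` and `rhs` agree, reflecting the additivity of (5) in both \(\delta \) and \(\beta \).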

Remark 2

In fact, the Prabhakar integral operators are studied in [28, 30] under the name of generalized fractional integrals with a Mittag-Leffler function as kernel.
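As a numerical sanity check of the series representation (4), one can compare a direct quadrature of the defining integral (3) against the series of Riemann–Liouville integrals, here for the constant function \(\daleth =1\) and \(\beta =1\) so that the kernel stays bounded. This is a sketch under those simplifying assumptions; the function names and parameters are ours:

```python
import math

def ml3(alpha, beta, delta, x, terms=40):
    # truncated three-parameter Mittag-Leffler series
    total, poch = 0.0, 1.0
    for m in range(terms):
        total += poch / math.gamma(m * alpha + beta) * x**m / math.factorial(m)
        poch *= delta + m
    return total

def prabhakar_one_quad(alpha, delta, w, a, x, n=5000):
    # Eq. (3) with daleth = 1 and beta = 1, evaluated by the midpoint rule
    h = (x - a) / n
    return h * sum(ml3(alpha, 1.0, delta, w * (x - (a + (k + 0.5) * h)) ** alpha)
                   for k in range(n))

def prabhakar_one_series(alpha, delta, w, a, x, terms=40):
    # Eq. (4) with daleth = 1: the R-L integral of order alpha*m + 1 of the
    # constant 1 equals (x-a)**(alpha*m+1) / Gamma(alpha*m+2)
    total, poch = 0.0, 1.0
    for m in range(terms):
        total += (poch * w**m / math.factorial(m)
                  * (x - a) ** (alpha * m + 1) / math.gamma(alpha * m + 2))
        poch *= delta + m
    return total

q = prabhakar_one_quad(0.8, 1.5, 0.4, 0.0, 1.0)
s = prabhakar_one_series(0.8, 1.5, 0.4, 0.0, 1.0)
```

The two values agree to quadrature accuracy, as (4) predicts.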

In the study [29], the Prabhakar derivatives of Riemann–Liouville and Caputo types are defined, respectively, as follows

$$\begin{aligned} \left( _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\daleth \right) \left( x\right)&=\frac{d^m}{dx^m} \left( _a\mathcal {I}^{w,-\delta }_{\alpha ,m- \beta } \daleth \right) \left( x\right) \nonumber \\&=\frac{d^m}{dx^m} \int _{a}^{x} \left( x-s\right) ^{m-\beta -1} \mathcal {E}^{-\delta }_{\alpha ,m- \beta }\left( w\left( x-s \right) ^\alpha \right) \daleth \left( s \right) ds, \end{aligned}$$
(6)

and

$$\begin{aligned} \left( _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\daleth \right) \left( x\right)&= {} _a\mathcal {I}^{w,-\delta }_{\alpha ,m- \beta }\left( \frac{d^m}{dx^m} \daleth \right) \left( x\right) \nonumber \\&= \int _{a}^{x} \left( x-s\right) ^{m-\beta -1} \mathcal {E}^{-\delta }_{\alpha ,m- \beta }\left( w\left( x-s \right) ^\alpha \right) \frac{d^m}{ds^m}\daleth \left( s \right) ds, \end{aligned}$$
(7)

where \(\alpha , \beta , \delta ,w \in \mathbb {C}\) with \(Re(\alpha )>0\) and \(Re(\beta )\ge 0\), \(m=\lfloor Re(\beta ) \rfloor +1 \) (here \(\lfloor \cdot \rfloor \) is the floor function), and \(\daleth \in AC^m(\overset{\textit{ o}}{\mathcal {J}})\). The same study discussed a relation between them, noted below:

$$\begin{aligned} \left( _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\daleth \right) \left( x\right) = {} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\left[ \daleth \left( x\right) -\sum _{n=0}^{m-1}\frac{\daleth ^{(n)}(a)}{n!}\left( x-a \right) ^n\right] . \end{aligned}$$
(8)

In the work [44], researchers investigated the compositions of the Prabhakar integral operator with the Prabhakar fractional derivatives of both Riemann–Liouville and Caputo types as expressed below

$$\begin{aligned} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \circ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \daleth \left( x\right) = \daleth \left( x\right) , \end{aligned}$$
(9)

and

$$\begin{aligned} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } \circ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \daleth \left( x\right) = \daleth \left( x\right) . \end{aligned}$$
(10)

Lemma 3

[44] Let Y and Z be intervals in \(\mathbb {R}\) and let \(\daleth : Y \times Z \rightarrow \mathbb {R}\) be a function satisfying the below conditions:

  1.

    \(\daleth (.,z)\) is measurable on the interval Y.

  2.

    For an arbitrary interior element \((y,z) \in Y \times Z\), the partial derivative \(\frac{d}{dz}\daleth (y,z)\) exists.

  3.

    There is an integrable non-negative function f such that

    $$\begin{aligned} \left| \frac{d}{dz}\daleth (y,z) \right| \le f(y) \end{aligned}$$

    for an arbitrary interior element \((y,z) \in Y \times Z\).

  4.

    There is such an element \(z_0 \in Z\) that \(\daleth (y,z_0)\) is integrable on the set Y.

Then,

$$\begin{aligned} \frac{d}{dz} \displaystyle { \int _{z_0}^{z}} \daleth (y,z) dy= \int _{z_0}^{z} \frac{d}{dz} \daleth (y,z) dy+\lim _{y\rightarrow z^{-}} \daleth (y,z). \end{aligned}$$
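A quick numerical illustration of this differentiation rule, with the toy choice \(\daleth (y,z)=yz\) and \(z_0=0\) (our own example, not from the paper), confirms that the boundary term \(\lim _{y\rightarrow z^{-}} \daleth (y,z)\) is needed:

```python
# With daleth(y, z) = y*z and z0 = 0, the integral has the closed form
# F(z) = int_0^z y*z dy = z**3 / 2, so d/dz can be checked directly.
z, h = 1.3, 1e-6

def F(z):
    return z * z**2 / 2

lhs = (F(z + h) - F(z - h)) / (2 * h)  # d/dz of the integral (central difference)
rhs = z**2 / 2 + z * z                 # int_0^z (d/dz)(y*z) dy  +  lim_{y->z-} y*z
```

Dropping the limit term would leave only \(z^2/2\), which visibly fails to match the derivative \(3z^2/2\).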

From this point on, all of the following results are new contributions.

3 Nonhomogeneous Linear System Involving Prabhakar Derivatives of R–L Type with Variable Coefficients

Before stating the main theorems, we present some useful lemmas that will make the coming proofs short and easy.

Lemma 4

Let \(\alpha , \beta , \beta _1, \beta _2, \delta _1, \delta _2, w \in \mathbb {C}\) with \(Re(\alpha )>0\), \(Re(\beta _1)>0\), and \(Re(\beta _2)>0\). Then

$$\begin{aligned}{} & {} _a\mathcal {I}^{w,\delta _1}_{\alpha , \beta _1} \left( x-a\right) ^{\beta _2-1} \mathcal {E}^{\delta _2}_{\alpha , \beta _2}\left( w\left( x-a \right) ^\alpha \right) =\left( x-a\right) ^{\beta _1+\beta _2-1} \mathcal {E}^{\delta _1+\delta _2}_{\alpha , \beta _1+\beta _2}\left( w\left( x-a \right) ^\alpha \right) ,\\{} & {} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \left( x-a\right) ^{\beta -1} \mathcal {E}^{\delta }_{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right) =0, \end{aligned}$$

and

$$\begin{aligned} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } 1=0, \end{aligned}$$

hold true.

Proof

Applying the definition of the Prabhakar integral in Eq. (3) to the first identity, one gets

$$\begin{aligned}&_a\mathcal {I}^{w,\delta _1}_{\alpha , \beta _1} \left( x-a\right) ^{\beta _2-1} \mathcal {E}^{\delta _2}_{\alpha , \beta _2}\left( w\left( x-a \right) ^\alpha \right) \\&\quad = \int _{a}^{x} \left( x-s\right) ^{\beta _1-1} \mathcal {E}^{\delta _1}_{\alpha , \beta _1}\left( w\left( x-s \right) ^\alpha \right) \left( s-a\right) ^{\beta _2-1} \mathcal {E}^{\delta _2}_{\alpha , \beta _2}\left( w\left( s-a \right) ^\alpha \right) ds, \end{aligned}$$

and, under the substitution \(v=s-a\), one obtains

$$\begin{aligned}&= \int _{0}^{x-a} \left( x-a-v\right) ^{\beta _1-1} \mathcal {E}^{\delta _1}_{\alpha , \beta _1}\left( w\left( x-a-v \right) ^\alpha \right) v^{\beta _2-1} \mathcal {E}^{\delta _2}_{\alpha , \beta _2}\left( w v^\alpha \right) dv. \end{aligned}$$

By applying [28, Theorem 2] to the just-above equation, one directly obtains the desired result, that is,

$$\begin{aligned} _a\mathcal {I}^{w,\delta _1}_{\alpha , \beta _1} \left( x-a\right) ^{\beta _2-1} \mathcal {E}^{\delta _2}_{\alpha , \beta _2}\left( w\left( x-a \right) ^\alpha \right) =\left( x-a\right) ^{\beta _1+\beta _2-1} \mathcal {E}^{\delta _1+\delta _2}_{\alpha , \beta _1+\beta _2}\left( w\left( x-a \right) ^\alpha \right) . \end{aligned}$$

To prove the second identity, we use Eq. (6), the first item, and Remark 1:

$$\begin{aligned} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \left( x-a\right) ^{\beta -1} \mathcal {E}^{\delta }_{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right)&=\frac{d^m}{dx^m} \left( _a\mathcal {I}^{w,-\delta }_{\alpha ,m- \beta } \left( x-a\right) ^{\beta -1} \mathcal {E}^{\delta }_{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right) \right) \\&=\frac{d^m}{dx^m} \left( \left( x-a\right) ^{m-1} \mathcal {E}^{0}_{\alpha , m}\left( w\left( x-a \right) ^\alpha \right) \right) \\&=\frac{d^m}{dx^m} \frac{\left( x-a\right) ^{m-1} }{\Gamma (m)}=0. \end{aligned}$$

The proof of the last identity follows easily from Eq. (7) and basic properties of integrals, so it is left to the reader. \(\square \)

First of all, we examine the homogeneous case of system (1), that is,

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=a}=y_0, \end{array}\right. \end{aligned}$$
(11)

where all of the information is as in system (1).

Now, we will define the state-transition matrix to build the keystone of the solution to system (11).

Definition 5

The state-transition (matrix) function of system (11) is defined as follows

$$\begin{aligned} \Omega \left( x,a \right) =\sum _{n=0}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , n \circ \beta }}Y(x), \end{aligned}$$
(12)

where

$$\begin{aligned}{} & {} _a\mathcal {I}^{w,\delta }_{\alpha , 0 \circ \beta }Y(x)=\left( x-a\right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right) \mathbb {I} \end{aligned}$$
(13)
$$\begin{aligned}{} & {} {_a\mathcal {I}^{w,\delta }_{\alpha , (n+1) \circ \beta }}Y(x) = {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , n \circ \beta }} Y(x) \right) , \ \ n=0,1,2,\dots , \end{aligned}$$
(14)

here, \(\mathbb {I}\) denotes the identity matrix. Within the expression \(n\circ \beta \), the parameters n and \(\beta \) symbolize recursion and the fractional structure, respectively; thus \(n\circ \beta \) signifies the recurrence associated with the fractional structure.

The series in Eq. (12) can be regarded as, and called, the generalized Peano–Baker series [20, 22].
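For a scalar constant coefficient \(Y(x)=\lambda \) with \(\delta =0\) and \(w=0\), each Prabhakar integral in Eqs. (13)–(14) collapses to a Riemann–Liouville integral of order \(\beta \), and the series (12) sums to \((x-a)^{\beta -1}E_{\beta ,\beta }(\lambda (x-a)^{\beta })\), where \(E_{\beta ,\beta }\) is the classical two-parameter Mittag-Leffler function. A short Python sketch (function names and parameters are our own choices) verifies this by running the recursion with the power rule:

```python
import math

def ml2(alpha, beta, x, terms=80):
    # classical two-parameter Mittag-Leffler function E_{alpha,beta}(x)
    return sum(x**m / math.gamma(m * alpha + beta) for m in range(terms))

def peano_baker_rl_scalar(lam, beta, xa, N=60):
    # Truncated series (12) for a scalar constant Y(x) = lam with delta = 0
    # and w = 0: every Prabhakar integral becomes a Riemann-Liouville
    # integral of order beta, so each term is a single power of (x - a).
    c, g = 1.0 / math.gamma(beta), beta  # term (13): (x-a)**(beta-1)/Gamma(beta)
    total = 0.0
    for _ in range(N):
        total += c * xa ** (g - 1)
        # recursion (14): multiply by lam, then integrate (order beta)
        c, g = lam * c * math.gamma(g) / math.gamma(g + beta), g + beta
    return total

beta, lam, xa = 0.6, 0.8, 1.2
approx = peano_baker_rl_scalar(lam, beta, xa)
exact = xa ** (beta - 1) * ml2(beta, beta, lam * xa ** beta)
```

The truncated series matches the closed form to machine precision in this regime.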

Lemma 6

Under the assumption of uniform convergence of the generalized Peano–Baker series, the state-transition function is a solution of the following system

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\Omega \left( x,a \right) =Y(x)\Omega \left( x,a \right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }\Omega \left( x,a \right) \big \vert _{x=a}=\mathbb {I}, \end{array}\right. \end{aligned}$$
(15)

Proof

We first substitute Eq. (12) into the derivative and then use Eqs. (13) and (14) to get

$$\begin{aligned} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\Omega \left( x,a \right)&={_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }} {_a\mathcal {I}^{w,\delta }_{\alpha , 0 \circ \beta }}Y(x)+{_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }} \sum _{n=1}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , n \circ \beta }}Y(x) \\&={_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }} \left( x-a\right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right) \mathbb {I}\\&\quad + \sum _{n=1}^{\infty } {_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }} {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) . \end{aligned}$$

Applying Lemma 4 and Eq. (9), one gets

$$\begin{aligned} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\Omega \left( x,a \right)&=Y(x) \sum _{n=1}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \\&=Y(x) \sum _{n=0}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , n \circ \beta }} Y(x) \\&=Y(x)\Omega \left( x,a \right) . \end{aligned}$$

Now let us verify the initial condition:

$$\begin{aligned} {_a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }}\Omega \left( x,a \right) \big \vert _{x=a}&={_a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }} \left( x-a\right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-a \right) ^\alpha \right) \big \vert _{x=a} \mathbb {I}\\&\quad + \sum _{n=1}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }} {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) \big \vert _{x=a} \\&=\mathcal {E}^{2\delta }_{\alpha , 1}\left( w\left( x-a \right) ^\alpha \right) \big \vert _{x=a} \mathbb {I} \\&\quad + \sum _{n=1}^{\infty } {_a\mathcal {I}^{w,2\delta }_{\alpha , 1}} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) \big \vert _{x=a} \end{aligned}$$

here,

$$\begin{aligned} \mathcal {E}^{2\delta }_{\alpha , 1}\left( w\left( x-a \right) ^\alpha \right) \big \vert _{x=a}&=\sum _{m=0}^{\infty }\frac{(2\delta )_m}{\Gamma (m\alpha +1)} \frac{\left( w\left( x-a \right) ^\alpha \right) ^m}{m!} \bigg \vert _{x=a} \\&=\left( 1+\frac{\Gamma (2\delta +1)}{\Gamma (2\delta )\Gamma (\alpha +1)}w\left( x-a \right) ^\alpha +\dots \right) \bigg \vert _{x=a}=1, \end{aligned}$$

and

$$\begin{aligned}&\sum _{n=1}^{\infty } {_a\mathcal {I}^{w,2\delta }_{\alpha , 1}} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) \bigg \vert _{x=a} \\&\quad = \sum _{n=1}^{\infty } \int _{a}^{x} \mathcal {E}^{2\delta }_{\alpha , 1}\left( w\left( x-s \right) ^\alpha \right) Y(s) {_a\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(s)ds \bigg \vert _{x=a}=0, \end{aligned}$$

which completes the proof. \(\square \)

Remark 7

For \(\delta =0\), Lemma 6 reduces to [23, Lemma 3].

Theorem 8

The function \(y(x)=\Omega \left( x,a \right) y_0\) is a solution to system (11) under the assumption of the uniform convergence of the series \(\Omega \left( x,a \right) \).

Proof

With the aid of the preceding lemma, it is easy to confirm that

$$\begin{aligned} {_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } }y \left( x\right)&= {_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } } \Omega \left( x,a \right) y_0 \\&= Y(x)\Omega \left( x,a \right) y_0\\&=Y(x)y(x), \end{aligned}$$

and

$$\begin{aligned} {_a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }}y\left( x\right) \big \vert _{x=a}&={_a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }}\Omega \left( x,a \right) \big \vert _{x=a} y_0 \\&=\mathbb {I}y_0\\&=y_0. \end{aligned}$$

\(\square \)

Remark 9

For \(\delta =0\), Theorem 8 reduces to [23, Theorem 2].

We now first infer the partial Prabhakar integral (operator) from the definition of the Riemann–Liouville integral and Eq. (4); subsequently, we define partial Prabhakar derivatives of Riemann–Liouville and Caputo types via this integral. If one inserts the partial Riemann–Liouville integral into Eq. (4), one acquires the partial Prabhakar integral as noted below:

$$\begin{aligned} _a^x\mathcal {I}^{w,\delta }_{\alpha , \beta }\daleth \left( x,s\right)&= \sum _{m=0}^{\infty }\frac{(\delta )_m w^m}{m!} {_a^{RL,x}\mathcal {I}_{\alpha m + \beta }}\daleth \left( x,s\right) \\&=\sum _{m=0}^{\infty }\frac{(\delta )_m w^m}{m!} \frac{1}{\Gamma (\alpha m+ \beta )}\int _{a}^{x} \left( x-\tau \right) ^{\alpha m+ \beta -1} \daleth \left( \tau ,s\right) d\tau \\&= \int _{a}^{x} \left( x-\tau \right) ^{ \beta -1}\sum _{m=0}^{\infty }\frac{(\delta )_m}{\Gamma (\alpha m+ \beta )} \frac{\left( w\left( x-\tau \right) ^{ \alpha } \right) ^m}{m!} \daleth \left( \tau ,s\right) d\tau \\&=\int _{a}^{x} \left( x-\tau \right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-\tau \right) ^\alpha \right) \daleth \left( \tau ,s \right) d\tau , \end{aligned}$$

which is the partial Prabhakar integral with respect to x of a function \(\daleth \left( x,s \right) : \mathcal {J}\times \mathcal {J} \rightarrow \mathbb {R}\).

We could have defined it directly, but we wanted to support our definition with the available definitions in the related literature. We are now set to define the partial Prabhakar integral and the partial Prabhakar derivatives of Riemann–Liouville and Caputo types with respect to x of a function \(\daleth \left( x,s \right) : \mathcal {J}\times \mathcal {J} \rightarrow \mathbb {R}\) as follows:

$$\begin{aligned} _a^x\mathcal {I}^{w,\delta }_{\alpha , \beta }\daleth \left( x,s\right)= & {} \int _{a}^{x} \left( x-\tau \right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-\tau \right) ^\alpha \right) \daleth \left( \tau ,s \right) d\tau , \end{aligned}$$
(16)
$$\begin{aligned} _{a,x}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\daleth \left( x,s\right)= & {} \frac{d^m}{dx^m} {_a^x\mathcal {I}^{w,-\delta }_{\alpha ,m- \beta }} \daleth \left( x,s\right) \nonumber \\= & {} \frac{d^m}{dx^m} \int _{a}^{x} \left( x-\tau \right) ^{m-\beta -1} \mathcal {E}^{-\delta }_{\alpha ,m- \beta }\left( w\left( x-\tau \right) ^\alpha \right) \daleth \left( \tau , s \right) d\tau ,\nonumber \\ \end{aligned}$$
(17)

and

$$\begin{aligned} _{a,x}^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\daleth \left( x,s\right)&= {_a^x\mathcal {I}^{w,-\delta }_{\alpha ,m- \beta }}\left( \frac{d^m}{dx^m} \daleth \right) \left( x,s\right) \nonumber \\&= \int _{a}^{x} \left( x-\tau \right) ^{m-\beta -1} \mathcal {E}^{-\delta }_{\alpha ,m- \beta }\left( w\left( x-\tau \right) ^\alpha \right) \frac{d^m}{d\tau ^m}\daleth \left( \tau ,s \right) d\tau . \end{aligned}$$
(18)

Lemma 10

Assume that \(\eta : \mathcal {J} \times \mathcal {J} \rightarrow \mathbb {R}\) is a function satisfying the below hypotheses:

  1.

For each fixed \(x\in \mathcal {J} \), the function \(s \mapsto {_a^x\mathcal {I}^{w,\delta }_{\alpha , \beta }}\eta \left( x,s\right) \) is measurable on \(\mathcal {J}\) and integrable for some \(x^* \in \mathcal {J}\).

  2.

    For each interior element \((x,s) \in \overset{\textit{ o}}{\mathcal {J}} \times \overset{\textit{ o}}{\mathcal {J}}\), \(_{a,x}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\eta \left( x,s\right) \) exists.

  3.

    There is an integrable non-negative function f such that

    $$\begin{aligned} \left| _{a,x}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\eta \left( x,s\right) \right| \le f(s) \end{aligned}$$

    for each interior point \((x,s) \in \overset{\textit{ o}}{\mathcal {J}} \times \overset{\textit{ o}}{\mathcal {J}}\).

Then, for \(0< \beta \le 1\),

$$\begin{aligned} _{a}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds= \int _{a}^{x} {_{a,x}^{RL}\mathcal {D}}^{w,\delta }_{\alpha , \beta } \eta (x,s) ds+\lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta (x,s). \end{aligned}$$

Proof

According to Eq. (6) (with \(m=1\)), we get

$$\begin{aligned} _{a}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds= \frac{d}{dx}\int _{a}^{x} \left( x-\tau \right) ^{-\beta } \mathcal {E}^{-\delta }_{\alpha , 1-\beta }\left( w\left( x-\tau \right) ^\alpha \right) \int _{a }^{\tau } \eta \left( \tau , s \right) dsd\tau . \end{aligned}$$

By implementing the well-known Fubini theorem, Lemma 3, and Equations (16) and (17), one gets

$$\begin{aligned} _{a}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds&=\frac{d}{dx}\int _{a}^{x}\int _{s }^{x} \left( x-\tau \right) ^{-\beta } \mathcal {E}^{-\delta }_{\alpha , 1-\beta }\left( w\left( x-\tau \right) ^\alpha \right) \eta \left( \tau , s \right) d\tau ds \\&=\frac{d}{dx}\int _{a}^{x} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta \left( x,s\right) ds \\&= \int _{a}^{x} \frac{d}{dx} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta \left( x,s\right) ds + \lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta \left( x,s\right) \\&= \int _{a}^{x} {_{a,x}^{RL}\mathcal {D}}^{w,\delta }_{\alpha , \beta } \eta (x,s) ds+\lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta (x,s). \end{aligned}$$

\(\square \)

Remark 11

The above significant result for fractional differential equations was obtained by Podlubny [1] and modified for singular kernels by Matychyn [23]. This lemma offers a slightly modified version appropriate for the Prabhakar derivative of Riemann–Liouville type.

Now we examine the inhomogeneous case of system (1), that is,

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) +u(x) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=a}=y_0, \end{array}\right. \end{aligned}$$
(19)

where all of the information is as in system (1).

Theorem 12

Under the assumption of the uniform convergence of the series \(\Omega (x,a)\), an analytical solution to system (19) can be expressed as noted below

$$\begin{aligned} y(x)=\Omega (x,a)y_0+ \int _{a}^{x} \Omega (x,s)u(s)ds. \end{aligned}$$
(20)

Proof

If one applies the Prabhakar derivative of R–L type to both sides of Eq. (20), the following equalities can be acquired with the help of Equations (5) and (9) and Lemmas 6 and 10:

$$\begin{aligned} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }y(x)&= {}_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\Omega (x,a)y_0+ {}_a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta }\int _{a}^{x} \Omega (x,s)u(s)ds \\&=Y(x)\Omega (x,a)y_0+ \int _{a}^{x} {_{a,x}^{RL}\mathcal {D}}^{w,\delta }_{\alpha , \beta } \Omega (x,s)u(s) ds +\lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\Omega (x,s)u(s)\\&=Y(x)y(x)+\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}{_s\mathcal {I}^{w,\delta }_{\alpha , 0\circ \beta }}Y(x)u(s) \\&\quad + \sum _{n=1}^{\infty }\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}{_s\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_s\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) u(s) \\&=Y(x)y(x)+\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\left( x-s\right) ^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w\left( x-s \right) ^\alpha \right) \mathbb {I} u(s) \\&\quad + \sum _{n=1}^{\infty }\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,0}_{\alpha , 1}} \left( Y(x) {_s\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) u(s) \\&=Y(x)y(x)+\lim _{s\rightarrow x^{-}} \mathcal {E}^0_{\alpha , 1}\left( w\left( x-s \right) ^\alpha \right) \mathbb {I} u(s) \\&\quad + \sum _{n=1}^{\infty }\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,0}_{\alpha , 1}} \left( Y(x) {_s\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) u(s) \\&=Y(x)y(x)+u(x), \end{aligned}$$

because \(\mathcal {E}^0_{\alpha , 1}\left( z \right) =1\) and \(\lim _{s\rightarrow x^{-}} {_s\mathcal {I}^{w,0}_{\alpha , 1}} \left( Y(x) {_s\mathcal {I}^{w,\delta }_{\alpha , (n-1) \circ \beta }} Y(x) \right) u(s)=0\), \(n=1,2,\dots .\) \(\square \)

Remark 13

For \(\delta =0\), Eq. (20) reduces to [23, Theorem 3].
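In the fully classical limit \(\beta =1\), \(\delta =0\), \(w=0\) with a scalar constant coefficient \(Y(x)=\lambda \), formula (20) becomes the familiar variation-of-constants formula with \(\Omega (x,s)=e^{\lambda (x-s)}\). The following numerical sketch (the forcing \(u=\cos \) and all names and values are our own choices) compares it against the closed-form solution of \(y'=\lambda y+\cos x\), \(y(0)=y_0\):

```python
import math

lam, a, y0 = 0.7, 0.0, 2.0   # sample data (our own choices)
u = math.cos                  # forcing term u(x) = cos(x)

def y(x, n=20000):
    # Formula (20) in the classical limit: Omega(x, s) = exp(lam*(x - s)),
    # with the integral evaluated by the midpoint rule.
    h = (x - a) / n
    integral = h * sum(math.exp(lam * (x - (a + (k + 0.5) * h))) * u(a + (k + 0.5) * h)
                       for k in range(n))
    return math.exp(lam * (x - a)) * y0 + integral

# Closed-form solution of y' = lam*y + cos(x), y(0) = y0, for comparison:
# particular part A*cos(x) + B*sin(x) plus the homogeneous exponential.
A, B = -lam / (1 + lam**2), 1 / (1 + lam**2)
x = 1.5
exact = (y0 - A) * math.exp(lam * x) + A * math.cos(x) + B * math.sin(x)
```

The quadrature of (20) reproduces the classical solution to quadrature accuracy, consistent with Remark 13.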

4 Nonhomogeneous Linear System Involving Prabhakar Derivatives of Caputo Type with Variable Coefficients

We now examine the homogeneous linear system involving Prabhakar derivatives of Caputo type with variable coefficients

$$\begin{aligned} \left\{ \begin{array}{l} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y\left( a\right) =y_0 \end{array}\right. \end{aligned}$$
(21)

where all of the information is as in system (2).

Now, we will define the state-transition matrix to build the keystone of the solution to system (21).

Definition 14

The state-transition (matrix) function of system (21) is defined as follows

$$\begin{aligned} \mho \left( x,a \right) =\sum _{n=0}^{\infty } {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , n \circ \beta }}Y(x), \end{aligned}$$
(22)

where

$$\begin{aligned}{} & {} _a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , 0 \circ \beta }Y(x)=\mathbb {I}, \end{aligned}$$
(23)
$$\begin{aligned}{} & {} {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , (n+1) \circ \beta }}Y(x) = {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , n \circ \beta }} Y(x) \right) , \ \ n=0,1,2,\dots , \end{aligned}$$
(24)

where \(\mathbb {I}\) denotes the identity matrix.

The series in Eq. (22) can also be regarded as a generalized Peano–Baker series [20, 22].
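As a numerical sanity check of recursion (24) (a sketch under the assumptions \(\delta =0\) and \(a=0\), with the scalar coefficient \(Y(x)=x\) used in the examples below, where the Prabhakar integral reduces to the classical Riemann–Liouville integral), the first iterate \({_0\tilde{\mathcal {I}}^{w,0}_{\alpha , 1 o \beta }}Y(x)\) equals \(x^{\beta +1}/\Gamma (\beta +2)\) and can be reproduced by quadrature:

```python
import math

def first_iterate(beta, x, N=200000):
    """Numerically evaluate the first iterate of recursion (24) for the
    scalar coefficient Y(s) = s with delta = 0 and a = 0, where the
    Prabhakar integral reduces to the Riemann-Liouville integral
    I^beta[s](x) = (1/Gamma(beta)) * int_0^x (x-s)^(beta-1) * s ds.
    The substitution u = x - s followed by u = t**(1/beta) removes the
    endpoint singularity, so a midpoint rule suffices."""
    T = x ** beta
    h = T / N
    total = 0.0
    for i in range(N):
        u = ((i + 0.5) * h) ** (1.0 / beta)
        total += (x - u) * h / beta
    return total / math.gamma(beta)

beta, x = 0.6, 1.3
numeric = first_iterate(beta, x)
closed = x ** (beta + 1) / math.gamma(beta + 2)  # closed form of I^beta[s](x)
assert abs(numeric - closed) < 1e-6
```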

Lemma 15

Under the assumption of the uniform convergence of the generalized Peano–Baker series, the state-transition function is a solution of the following system

$$\begin{aligned} \left\{ \begin{array}{l} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\mho \left( x,a \right) =Y(x)\mho \left( x,a \right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \mho \left( a,a \right) =\mathbb {I}. \end{array}\right. \end{aligned}$$
(25)

Proof

In the light of Eqs. (10), (23), and (24) and Lemma 4, one can perform the following calculations:

$$\begin{aligned} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }\mho \left( x,a \right)&={_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}{_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , 0 o \beta }}Y(x)+ \sum _{n=1}^{\infty } {_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}{_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , n o \beta }}Y(x)\\&={_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}\mathbb {I} + \sum _{n=1}^{\infty } {_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }} {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , (n-1) o \beta }} Y(x) \right) \\&=Y(x) \sum _{n=1}^{\infty } {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , (n-1) o \beta }} Y(x)\\&=Y(x)\mho \left( x,a \right) , \end{aligned}$$

and

$$\begin{aligned} \mho \left( a,a \right)&= {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , 0 o \beta }}Y(x)\big \vert _{x=a}+ \sum _{n=1}^{\infty } {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , n o \beta }}Y(x) \bigg \vert _{x=a} \\&= \mathbb {I} + \sum _{n=1}^{\infty } {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , (n-1) o \beta }} Y(x) \right) \bigg \vert _{x=a}\\&= \mathbb {I}. \end{aligned}$$

\(\square \)

Remark 16

For \(\delta =0\), Lemma 15 reduces to that of [23, Lemma 5].

Theorem 17

The function \(y(x)=\mho \left( x,a \right) y_0\) is a solution to system (21) under the assumption of the uniform convergence of the series \(\mho \left( x,a \right) \).

Proof

The proof is left to the reader owing to its similarity to that of Theorem 8, using the preceding lemma. \(\square \)

Remark 18

For \(\delta =0\), Theorem 17 reduces to that of [23, Theorem 4].

Lemma 19

Let \(\eta : \mathcal {J} \times \mathcal {J} \rightarrow \mathbb {R}\) fulfill the hypotheses of Lemma 10. For \(0< \beta \le 1\),

$$\begin{aligned} _{a}^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds= \int _{a}^{x} {_{a,x}^{RL}\mathcal {D}}^{w,\delta }_{\alpha , \beta } \eta (x,s) ds+\lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\eta (x,s). \end{aligned}$$

Proof

Based on Eq. (8), one can readily get

$$\begin{aligned} _{a}^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds&= {} _{a}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \left( \int _{a}^{x} \eta (x,s) ds - \int _{a}^{x} \eta (x,s) ds \bigg \vert _{x=a} \right) \\&= {}_{a}^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \int _{a}^{x} \eta (x,s) ds \end{aligned}$$

which, in view of Lemma 10, yields the assertion of this lemma. \(\square \)

Now we examine the inhomogeneous counterpart of system (21), that is,

$$\begin{aligned} \left\{ \begin{array}{l} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Y(x)y\left( x\right) +u(x) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ y\left( a\right) =y_0, \end{array}\right. \end{aligned}$$
(26)

where all notation is as in system (2).

Theorem 20

Under the assumption of the uniform convergence of the series \(\mho (x,a)\), an analytical solution to system (26) can be expressed as follows

$$\begin{aligned} y(x)=\mho (x,a)y_0+ \int _{a}^{x} \Omega (x,s)u(s)ds. \end{aligned}$$
(27)

Proof

First, applying the Prabhakar derivative \({_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}\) to both sides of Eq. (27) and using its linearity, one acquires:

$$\begin{aligned} {_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}y(x)={_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}\mho (x,a)y_0+ {_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}\int _{a}^{x} \Omega (x,s)u(s)ds. \end{aligned}$$

Applying Lemma 15 to the first term and Lemma 19 to the second, one obtains:

$$\begin{aligned} {_a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta }}y(x) =Y(x)\mho (x,a)y_0 + \int _{a}^{x} {_{a,x}^{RL}\mathcal {D}}^{w,\delta }_{\alpha , \beta } \Omega (x,s)u(s) ds +\lim _{s\rightarrow x^{-}} {_s^x\mathcal {I}^{w,-\delta }_{\alpha , 1-\beta }}\Omega (x,s)u(s) \end{aligned}$$

Proceeding as in the proof of Theorem 12, one directly obtains the desired result. \(\square \)

Remark 21

For \(\delta =0\), Eq. (27) reduces to that of [23, Theorem 5].

5 Special Cases

In this section, we present new results that can be obtained from our findings, as expressed in the following theorems.

Theorem 22

Under appropriate choices of the parameters, we acquire

$$\begin{aligned} { _a\mathcal {I}^{w,\delta }_{\alpha , n o \beta }}Y=\left( x-a\right) ^{(n+1)\beta -1} \mathcal {E}^{(n+1)\delta }_{\alpha , (n+1)\beta }\left( w\left( x-a \right) ^\alpha \right) Y^n, \ \ \ n=0,1,2,\dots , \end{aligned}$$

and

$$\begin{aligned} \Omega \left( x,a \right) =\sum _{n=0}^{\infty } \left( x-a\right) ^{(n+1)\beta -1} \mathcal {E}^{(n+1)\delta }_{\alpha , (n+1)\beta }\left( w\left( x-a \right) ^\alpha \right) Y^n, \end{aligned}$$

where \(Y(x)\) is a constant matrix function, i.e., \(Y(x)=Y\).

Proof

We apply mathematical induction on \(n\) to prove the first identity. For the base case \(n=0\), the claim follows from Eq. (13). Now suppose that it holds for \(n=k\), that is,

$$\begin{aligned} { _a\mathcal {I}^{w,\delta }_{\alpha , k o \beta }}Y=\left( x-a\right) ^{(k+1)\beta -1} \mathcal {E}^{(k+1)\delta }_{\alpha , (k+1)\beta }\left( w\left( x-a \right) ^\alpha \right) Y^k. \end{aligned}$$

The remaining step is to verify the case \(n=k+1\). Consider

$$\begin{aligned} { _a\mathcal {I}^{w,\delta }_{\alpha , (k+1) o \beta }}Y&={ _a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y{ _a\mathcal {I}^{w,\delta }_{\alpha , k o \beta }}Y \right) \\&={ _a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( \left( x-a\right) ^{(k+1)\beta -1} \mathcal {E}^{(k+1)\delta }_{\alpha , (k+1)\beta }\left( w\left( x-a \right) ^\alpha \right) \right) Y^{k+1} \\&=\left( x-a\right) ^{(k+2)\beta -1} \mathcal {E}^{(k+2)\delta }_{\alpha , (k+2)\beta }\left( w\left( x-a \right) ^\alpha \right) Y^{k+1}, \end{aligned}$$

which is the desired identity. Keeping in mind the series expansion of the function \( \Omega \left( x,a \right) \) in Definition 5, the second identity follows immediately from the first. \(\square \)

Theorem 23

An analytical solution of the following inhomogeneous linear system

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Yy\left( x\right) +u(x) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=a}=y_0, \end{array}\right. \end{aligned}$$

is given by the following closed-form expression

$$\begin{aligned} y(x)&=\sum _{n=0}^{\infty } \left( x-a\right) ^{(n+1)\beta -1} \mathcal {E}^{(n+1)\delta }_{\alpha , (n+1)\beta }\left( w\left( x-a \right) ^\alpha \right) Y^n y_0\\&\quad + \sum _{n=0}^{\infty } \int _{a}^{x} \left( x-s\right) ^{(n+1)\beta -1} \mathcal {E}^{(n+1)\delta }_{\alpha , (n+1)\beta }\left( w\left( x-s \right) ^\alpha \right) Y^nu(s)ds. \end{aligned}$$

Proof

The proof is an immediate consequence of Theorems 12 and 22. \(\square \)

Remark 24

A different approach for obtaining the solution to the system given in Theorem 23 is outlined in the upcoming publication [33].

Theorem 25

Under appropriate choices of the parameters, we acquire

$$\begin{aligned} { _a\tilde{\mathcal {I}}^{w,\delta }_{\alpha , n o \beta }}Y=\left( x-a\right) ^{n\beta } \mathcal {E}^{n\delta }_{\alpha , n\beta +1}\left( w\left( x-a \right) ^\alpha \right) Y^n, \ \ \ n=0,1,2,\dots , \end{aligned}$$

and

$$\begin{aligned} \mho \left( x,a \right) =\sum _{n=0}^{\infty } \left( x-a\right) ^{n\beta } \mathcal {E}^{n\delta }_{\alpha , n\beta +1}\left( w\left( x-a \right) ^\alpha \right) Y^n, \end{aligned}$$

where \(Y(x)\) is a constant matrix function, i.e., \(Y(x)=Y\).

Proof

The proof follows by mathematical induction, just as in the proof of Theorem 22. \(\square \)
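For \(\delta =0\) and a scalar constant \(Y\), the series of Theorem 25 collapses to the one-parameter Mittag-Leffler function \(E_\beta (Yx^\beta )\), since \(\mathcal {E}^0_{\alpha ,\beta }(z)=1/\Gamma (\beta )\). A numerical sketch (our own check, tested for \(\beta =1/2\) against the classical identity \(E_{1/2}(z)=e^{z^2}\,\mathrm {erfc}(-z)\)):

```python
import math

def ml(beta, z, K=120):
    """One-parameter Mittag-Leffler function E_beta(z)
    = sum_n z^n / Gamma(n*beta + 1); this is what the series of
    Theorem 25 collapses to for scalar constant Y and delta = 0,
    evaluated at z = Y * x**beta."""
    total, zn = 0.0, 1.0
    for n in range(K):
        total += zn / math.gamma(n * beta + 1.0)
        zn *= z
    return total

# classical identity: E_{1/2}(z) = exp(z^2) * erfc(-z)
for z in (0.3, 1.0, 1.8):
    assert abs(ml(0.5, z) - math.exp(z * z) * math.erfc(-z)) < 1e-9
```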

Theorem 26

An analytical solution of the following inhomogeneous linear system

$$\begin{aligned} \left\{ \begin{array}{l} _a^{C}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =Yy\left( x\right) +u(x) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ y\left( a\right) =y_0, \end{array}\right. \end{aligned}$$

is given by the following closed-form expression

$$\begin{aligned} y(x)&=\sum _{n=0}^{\infty } \left( x-a\right) ^{n\beta } \mathcal {E}^{n\delta }_{\alpha , n\beta +1}\left( w\left( x-a \right) ^\alpha \right) Y^n y_0\\&\quad + \sum _{n=0}^{\infty } \int _{a}^{x} \left( x-s\right) ^{(n+1)\beta -1} \mathcal {E}^{(n+1)\delta }_{\alpha , (n+1)\beta }\left( w\left( x-s \right) ^\alpha \right) Y^nu(s)ds. \end{aligned}$$

Proof

The proof is an immediate consequence of Theorem 20 together with Theorems 22 and 25. \(\square \)

Remark 27

For \(\delta =0\), \(\alpha =\beta =1\), and \(Y(x)=Y\), the formulas in Theorems 12 and 20 reduce to

$$\begin{aligned} y(x)=e^{Y(x-a)}y_0+\int _{a}^{x} e^{Y(x-s)}u(s)ds \end{aligned}$$

which is, as is well known, the analytical solution to the following first-order Cauchy system

$$\begin{aligned} \left\{ \begin{array}{l} \dot{y} \left( x\right) =Yy\left( x\right) +u(x) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ y\left( a\right) =y_0. \end{array}\right. \end{aligned}$$
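The reduction in Remark 27 can be checked numerically: for \(\delta =0\) and \(\alpha =\beta =1\), the truncated series (22) with a constant \(Y\) matches the matrix exponential. A minimal sketch (restricted to \(2\times 2\) matrices; the matrices used are hypothetical examples):

```python
import math

def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mho_series(Y, x, beta=1.0, terms=40):
    """Truncated series (22) for delta = 0 and constant Y:
    sum_n x^(n*beta) Y^n / Gamma(n*beta + 1).  For beta = 1 this is the
    Taylor series of exp(Y*x), as Remark 27 states."""
    acc = [[1.0, 0.0], [0.0, 1.0]]   # identity, n = 0 term
    Yn = [[1.0, 0.0], [0.0, 1.0]]
    for n in range(1, terms):
        Yn = matmul(Yn, Y)
        c = x ** (n * beta) / math.gamma(n * beta + 1.0)
        acc = [[acc[i][j] + c * Yn[i][j] for j in range(2)] for i in range(2)]
    return acc

# for a diagonal Y the exponential is diag(e^x, e^{2x})
Y = [[1.0, 0.0], [0.0, 2.0]]
S = mho_series(Y, 0.7)
assert abs(S[0][0] - math.exp(0.7)) < 1e-10
assert abs(S[1][1] - math.exp(1.4)) < 1e-10
assert abs(S[0][1]) < 1e-12 and abs(S[1][0]) < 1e-12
```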

6 Numerical Examples

In this section, we illustrate our theoretical findings with numerical examples.

Example 28

We consider the following linear homogeneous system

$$\begin{aligned} \left\{ \begin{array}{l} _0^{RL}\mathcal {D}^{w,0}_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _0\mathcal {I}^{w,0}_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=0}=y_0. \end{array}\right. \end{aligned}$$
(28)

We first calculate the following expressions one by one to acquire the corresponding generalized Peano–Baker series for the function \(\Omega (x,0)\):

$$\begin{aligned} {_0\mathcal {I}^{w,0}_{\alpha , 0 o \beta }}Y(x)&= x^{\beta -1} \mathcal {E}^0_{\alpha , \beta }\left( wx^\alpha \right) \mathbb {I} \\&=\frac{x^{\beta -1}}{\Gamma (\beta )}\mathbb {I}=\begin{pmatrix} \frac{x^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ 0 &{} \frac{x^{\beta -1}}{\Gamma (\beta )} \end{pmatrix},\\ {_0\mathcal {I}^{w,0}_{\alpha , 1o \beta }}Y(x)&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) {_0\mathcal {I}^{w,0}_{\alpha , 0 o \beta }} Y(x) \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) \frac{x^{\beta -1}}{\Gamma (\beta )}\mathbb {I} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \begin{pmatrix} 0 &{} 0 \\ \frac{x^{\beta }}{\Gamma (\beta )} &{} 0 \end{pmatrix} = \begin{pmatrix} 0 &{} 0 \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} &{} 0 \end{pmatrix},\\ {_0\mathcal {I}^{w,0}_{\alpha , 2 o \beta }}Y(x)&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) {_0\mathcal {I}^{w,0}_{\alpha , 1 o \beta }} Y(x) \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) \begin{pmatrix} 0 &{} 0 \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} &{} 0 \end{pmatrix} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( \begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} \begin{pmatrix} 0 &{} 0 \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} &{} 0 \end{pmatrix} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix} =\begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix} . \end{aligned}$$

Continuing in this manner, one easily obtains:

$$\begin{aligned} {_0\mathcal {I}^{w,0}_{\alpha , k o \beta }}Y(x)=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \ \ \ k \ge 2. \end{aligned}$$

One then obtains the series function \(\Omega (x,0)\)

$$\begin{aligned} \Omega (x,0)&= {_0\mathcal {I}^{w,0}_{\alpha , 0 o \beta }}Y(x)+{_0\mathcal {I}^{w,0}_{\alpha , 1 o \beta }}Y(x) \nonumber \\&=\begin{pmatrix} \frac{ x^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} &{} \frac{ x^{\beta -1}}{\Gamma (\beta )} \end{pmatrix}. \end{aligned}$$
(29)

One easily verifies that

$$\begin{aligned} \left\{ \begin{array}{l} _0^{RL}\mathcal {D}^{w,0}_{\alpha , \beta } \Omega (x,0) =\begin{pmatrix} 0 &{} 0 \\ \frac{ x^{\beta }}{\Gamma (\beta )} &{} 0 \end{pmatrix}=Y(x) \Omega (x,0) \\ _0\mathcal {I}^{w,0}_{\alpha , 1-\beta }\Omega (x,0) \big \vert _{x=0}=\begin{pmatrix} 1 &{} 0 \\ \frac{\beta x^{\beta +1}}{\Gamma (\beta +2)} &{} 1 \end{pmatrix}\bigg \vert _{x=0}=\mathbb {I}, \end{array}\right. \end{aligned}$$

which shows that Lemma 6 is satisfied.

Assume that \(y_0=\begin{pmatrix} 2 \\ 1 \end{pmatrix}\). By Theorem 8, the solution of system (28) can be expressed as follows:

$$\begin{aligned} y(x)&=\Omega \left( x,0 \right) y_0 =\begin{pmatrix} 2\frac{ x^{\beta -1}}{\Gamma (\beta )} \\ 2\frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} + \frac{ x^{\beta -1}}{\Gamma (\beta )} \end{pmatrix}. \end{aligned}$$
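The key step of Example 28, \({_0\mathcal {I}^{w,0}_{\alpha , \beta }}\big [x^{\beta }/\Gamma (\beta )\big ]=\beta x^{2\beta }/\Gamma (2\beta +1)\), can be reproduced by quadrature (a sketch at the illustrative values \(\beta =0.6\), \(x=1.2\)):

```python
import math

# Key step of Example 28: I^beta[ s^beta / Gamma(beta) ](x)
#   = beta * x^(2*beta) / Gamma(2*beta + 1).
# Substituting u = x - s and then u = t**(1/beta) removes the
# singularity of (x-s)^(beta-1), so a midpoint rule suffices.
beta, x, N = 0.6, 1.2, 200000
T = x ** beta
h = T / N
J = sum((x - ((i + 0.5) * h) ** (1.0 / beta)) ** beta * h
        for i in range(N)) / beta          # = int_0^x (x-s)^(beta-1) s^beta ds
numeric = J / math.gamma(beta) ** 2        # both 1/Gamma(beta) factors
closed = beta * x ** (2 * beta) / math.gamma(2 * beta + 1)
assert abs(numeric - closed) < 1e-6
```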

Example 29

We consider the following linear nonhomogeneous system

$$\begin{aligned} \left\{ \begin{array}{l} _0^{RL}\mathcal {D}^{w,0}_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} y\left( x\right) +u\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _0\mathcal {I}^{w,0}_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=0}=y_0. \end{array} \right. \end{aligned}$$
(30)

Assume that

$$\begin{aligned} y_0=\begin{pmatrix} 1 \\ 0 \end{pmatrix}, \ \ \ u(x)=\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \ x>0 . \end{aligned}$$

Based on Eq. (29), an analytical solution to system (30) can be expressed as follows:

$$\begin{aligned} y(x)&=\Omega (x,0)y_0+ \int _{0}^{x} \Omega (x,s)u(s)ds \\&=\begin{pmatrix} \frac{ x^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} &{} \frac{ x^{\beta -1}}{\Gamma (\beta )} \end{pmatrix} \begin{pmatrix} 1 \\ 0\end{pmatrix} +\int _{0}^{x}\begin{pmatrix} \frac{ \left( x-s\right) ^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ \frac{\beta \left( x-s\right) ^{2\beta }}{\Gamma (2\beta +1)} &{} \frac{ \left( x-s\right) ^{\beta -1}}{\Gamma (\beta )} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} ds \\&= \begin{pmatrix} \frac{ x^{\beta -1}}{\Gamma (\beta )} \\ \frac{\beta x^{2\beta }}{\Gamma (2\beta +1)} +\frac{ x^{\beta }}{\Gamma (\beta +1)} \end{pmatrix}. \end{aligned}$$
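For \(\beta =1\) (and \(\delta =0\)), system (30) is an ordinary initial value problem, and the formula of Example 29 reduces to \(y=(1,\ x^2/2+x)^{T}\). This can be cross-checked with a standard ODE integrator (a sketch; the RK4 routine below is our own):

```python
# For beta = 1 and delta = 0, system (30) is the ordinary IVP
#   y' = [[0, 0], [x, 0]] y + (0, 1)^T,  y(0) = (1, 0)^T,
# whose solution from Example 29 is y = (1, x^2/2 + x).
def f(x, y):
    return [0.0, x * y[0] + 1.0]

def rk4(x_end, n=2000):
    """Classical 4th-order Runge-Kutta integration of the IVP above."""
    y, h, x = [1.0, 0.0], x_end / n, 0.0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, [y[i] + h / 2 * k1[i] for i in range(2)])
        k3 = f(x + h / 2, [y[i] + h / 2 * k2[i] for i in range(2)])
        k4 = f(x + h, [y[i] + h * k3[i] for i in range(2)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
             for i in range(2)]
        x += h
    return y

x_end = 1.5
y = rk4(x_end)
assert abs(y[0] - 1.0) < 1e-10
assert abs(y[1] - (x_end ** 2 / 2 + x_end)) < 1e-8
```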

Example 30

Let us examine the following homogeneous linear system involving Prabhakar derivatives of Caputo type with variable coefficients

$$\begin{aligned} \left\{ \begin{array}{l} _0^{C}\mathcal {D}^{w,0}_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y\left( 0\right) =\begin{pmatrix} 2 \\ 1 \end{pmatrix}. \end{array} \right. \end{aligned}$$
(31)

One can directly calculate the series \(\mho (x,0)\):

$$\begin{aligned} \mho (x,0)=\begin{pmatrix} 1 &{} 0 \\ \frac{x^{\beta +1}}{\Gamma (\beta +2)} &{} 1 \end{pmatrix} \end{aligned}$$

and can readily verify Lemma 15:

$$\begin{aligned} \left\{ \begin{array}{l} _0^{C}\mathcal {D}^{w,0}_{\alpha , \beta }\mho \left( x,0 \right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} =Y(x)\mho \left( x,0 \right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \mho \left( 0,0 \right) =\mathbb {I}. \end{array} \right. \end{aligned}$$
(32)

One can also confirm Theorem 17:

$$\begin{aligned} y(x)=\begin{pmatrix} 1 &{} 0 \\ \frac{x^{\beta +1}}{\Gamma (\beta +2)} &{} 1 \end{pmatrix} \begin{pmatrix} 2 \\ 1 \end{pmatrix}=\begin{pmatrix} 2 \\ 2\frac{x^{\beta +1}}{\Gamma (\beta +2)} + 1 \end{pmatrix} \end{aligned}$$

which is an analytical solution to system (31).

Example 31

Let us examine the following inhomogeneous linear system involving Prabhakar derivatives of Caputo type with variable coefficients

$$\begin{aligned} \left\{ \begin{array}{l} _0^{C}\mathcal {D}^{w,0}_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} y\left( x\right) +\begin{pmatrix} 0 \\ 1 \end{pmatrix},\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ y\left( 0\right) =\begin{pmatrix} 1 \\ 0 \end{pmatrix}. \end{array}\right. \end{aligned}$$
(33)

Together with Eq. (29) and the series \(\mho (x,0)\) from Example 30, Theorem 20 provides an analytical solution of system (33) as follows

$$\begin{aligned} y(x)&=\mho (x,0)y_0+ \int _{0}^{x} \Omega (x,s)u(s)ds \\&=\begin{pmatrix} 1 &{} 0 \\ \frac{x^{\beta +1}}{\Gamma (\beta +2)} &{} 1 \end{pmatrix} \begin{pmatrix} 1 \\ 0 \end{pmatrix} +\int _{0}^{x}\begin{pmatrix} \frac{ \left( x-s\right) ^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ \frac{\beta \left( x-s\right) ^{2\beta }}{\Gamma (2\beta +1)} &{} \frac{ \left( x-s\right) ^{\beta -1}}{\Gamma (\beta )} \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} ds \\&= \begin{pmatrix} 1 \\ \frac{ x^{\beta +1}}{\Gamma (\beta +2)} +\frac{ x^{\beta }}{\Gamma (\beta +1)} \end{pmatrix}. \end{aligned}$$

If desired, one can write down the explicit componentwise form of the solution

$$\begin{aligned} y_1&=1, \\ y_2&= \frac{ x^{\beta +1}}{\Gamma (\beta +2)} +\frac{ x^{\beta }}{\Gamma (\beta +1)}. \end{aligned}$$

Example 32

We consider the following linear homogeneous system

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x-a &{} 0 \end{pmatrix} y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=a}=y_0. \end{array} \right. \end{aligned}$$
(34)

We first calculate the following expressions one by one to acquire the corresponding generalized Peano–Baker series for the function \(\Omega (x,a)\):

$$\begin{aligned} {_a\mathcal {I}^{w,\delta }_{\alpha , 0 o \beta }}Y(x)&= (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) \mathbb {I} \\&=\begin{pmatrix} (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \\ 0 &{} (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) \end{pmatrix},\\ {_a\mathcal {I}^{w,\delta }_{\alpha , 1o \beta }}Y(x)&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , 0 o \beta }} Y(x) \right) \\&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) \mathbb {I} \right) \\&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \begin{pmatrix} 0 &{} 0 \\ (x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \end{pmatrix} \\&= \begin{pmatrix} 0 &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}(x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \end{pmatrix},\\ {_a\mathcal {I}^{w,\delta }_{\alpha , 2 o \beta }}Y(x)&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , 1 o \beta }} Y(x) \right) \\&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( Y(x) \begin{pmatrix} 0 &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}(x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \end{pmatrix} \right) \\&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \left( \begin{pmatrix} 0 &{} 0 \\ x-a &{} 0 \end{pmatrix} \begin{pmatrix} 0 &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}(x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \end{pmatrix} \right) \\&= {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }} \begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix} =\begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix} . \end{aligned}$$

Continuing in this manner, one easily obtains:

$$\begin{aligned} {_a\mathcal {I}^{w,\delta }_{\alpha , k o \beta }}Y(x)=\begin{pmatrix} 0 &{} 0 \\ 0 &{} 0 \end{pmatrix}, \ \ \ k \ge 2. \end{aligned}$$

One then obtains the series function \(\Omega (x,a)\)

$$\begin{aligned} \Omega (x,a)&= {_a\mathcal {I}^{w,\delta }_{\alpha , 0 o \beta }}Y(x)+{_a\mathcal {I}^{w,\delta }_{\alpha , 1 o \beta }}Y(x) \nonumber \\&=\begin{pmatrix} (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}(x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) \end{pmatrix}. \end{aligned}$$
(35)

One easily verifies that

$$\begin{aligned} \left\{ \begin{array}{l} _a^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } \Omega (x,a) =\begin{pmatrix} 0 &{} 0 \\ (x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) &{} 0 \end{pmatrix}=Y(x) \Omega (x,a) \\ _a\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }\Omega (x,a) \big \vert _{x=a}=\begin{pmatrix} 1 &{} 0 \\ {_a\mathcal {I}^{w,2\delta }_{\alpha , 1}} (x-a)^{\beta } \mathcal {E}^\delta _{\alpha , \beta +1}\left( w(x-a)^\alpha \right) &{} 1 \end{pmatrix}\bigg \vert _{x=a}=\mathbb {I}, \end{array} \right. \end{aligned}$$

which shows that Lemma 6 is satisfied.

Assume that \(y_0=\begin{pmatrix} 0 \\ 1 \end{pmatrix}\). By Theorem 8, the solution of system (34) can be expressed as follows:

$$\begin{aligned} y(x)&=\Omega \left( x,a \right) y_0 =\begin{pmatrix} 0 \\ (x-a)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-a)^\alpha \right) \end{pmatrix}. \end{aligned}$$

Example 33

We consider the following linear nonhomogeneous system

$$\begin{aligned} \left\{ \begin{array}{l} _0^{RL}\mathcal {D}^{w,\delta }_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix} y\left( x\right) +u\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _0\mathcal {I}^{w,\delta }_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=0}=y_0. \end{array} \right. \end{aligned}$$
(36)

Assume that

$$\begin{aligned} y_0=\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \ \ \ u(x)=\begin{pmatrix} 0 \\ 1 \end{pmatrix}, \ x>0 . \end{aligned}$$

Based on Eq. (35) with \(a=0\), an analytical solution to system (36) can be expressed as follows:

$$\begin{aligned} y(x)&=\Omega (x,0)y_0+ \int _{0}^{x} \Omega (x,s)u(s)ds \\&=\begin{pmatrix} x^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( wx^\alpha \right) &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}x^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( wx^\alpha \right) &{} x^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( wx^\alpha \right) \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix}\\&\quad +\int _{0}^{x}\begin{pmatrix} (x-s)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-s)^\alpha \right) &{} 0 \\ {_a\mathcal {I}^{w,\delta }_{\alpha , \beta }}(x-s)^{\beta } \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-s)^\alpha \right) &{} (x-s)^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( w(x-s)^\alpha \right) \end{pmatrix} \begin{pmatrix} 0 \\ 1 \end{pmatrix} ds \\&= \begin{pmatrix} 0 \\ x^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( wx^\alpha \right) +x^{\beta } \mathcal {E}^\delta _{\alpha , \beta +1}\left( wx^\alpha \right) \end{pmatrix}. \end{aligned}$$
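The forcing-term integral used above, \(\int _0^x (x-s)^{\beta -1}\mathcal {E}^\delta _{\alpha ,\beta }\left( w(x-s)^\alpha \right) ds=x^\beta \mathcal {E}^\delta _{\alpha ,\beta +1}\left( wx^\alpha \right) \), can be verified numerically (a sketch with purely illustrative parameter values; the substitutions \(u=x-s\), \(u=t^{1/\beta }\) remove the endpoint singularity):

```python
import math

def E(alpha, beta, delta, z, K=80):
    """Truncated Prabhakar series E^delta_{alpha,beta}(z)."""
    total, poch, fact, zk = 0.0, 1.0, 1.0, 1.0
    for k in range(K):
        total += poch * zk / (fact * math.gamma(alpha * k + beta))
        poch *= delta + k
        fact *= k + 1
        zk *= z
    return total

# int_0^x (x-s)^(b-1) E^d_{al,b}(w (x-s)^al) ds = x^b E^d_{al,b+1}(w x^al)
al, b, d, w, x, N = 0.7, 0.6, 0.5, 0.4, 1.0, 4000
T = x ** b
h = T / N
lhs = sum(E(al, b, d, w * ((i + 0.5) * h) ** (al / b)) * h
          for i in range(N)) / b           # midpoint rule after substitution
rhs = x ** b * E(al, b + 1.0, d, w * x ** al)
assert abs(lhs - rhs) < 1e-4
```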

Example 34

We now consider the following linear homogeneous system with the variable coefficient \(\begin{pmatrix} 4x &{} -x \\ 12x &{} -3x \end{pmatrix}\), which, unlike the coefficient \(\begin{pmatrix} 0 &{} 0 \\ x &{} 0 \end{pmatrix}\) of the previous examples, yields a non-terminating series:

$$\begin{aligned} \left\{ \begin{array}{l} _0^{RL}\mathcal {D}^{w,0}_{\alpha , \beta } y \left( x\right) =\begin{pmatrix} 4x &{} -x \\ 12x &{} -3x \end{pmatrix} y\left( x\right) ,\ \ \ x\in \overset{\textit{ o}}{\mathcal {J}} ,\\ \ \ \ \ \ \ \ \ \ _0\mathcal {I}^{w,0}_{\alpha , 1-\beta }y\left( x\right) \big \vert _{x=0}=y_0. \end{array} \right. \end{aligned}$$
(37)

We first calculate the following expressions one by one to acquire the corresponding generalized Peano–Baker series for the function \(\Omega (x,0)\):

$$\begin{aligned} {_0\mathcal {I}^{w,0}_{\alpha , 0 o \beta }}Y(x)&= x^{\beta -1} \mathcal {E}^\delta _{\alpha , \beta }\left( wx^\alpha \right) \mathbb {I} \\&=\frac{x^{\beta -1}}{\Gamma (\beta )}\mathbb {I}=\begin{pmatrix} \frac{x^{\beta -1}}{\Gamma (\beta )} &{} 0 \\ 0 &{} \frac{x^{\beta -1}}{\Gamma (\beta )} \end{pmatrix},\\ {_0\mathcal {I}^{w,0}_{\alpha , 1o \beta }}Y(x)&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , 0 o \beta }} Y(x) \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( \begin{pmatrix} 4x &{} -x \\ 12x &{} -3x \end{pmatrix} \frac{x^{\beta -1}}{\Gamma (\beta )}\mathbb {I} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left[ \frac{x^{\beta }}{\Gamma (\beta )} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix} \right] = \beta \frac{ x^{2\beta }}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix},\\ {_0\mathcal {I}^{w,0}_{\alpha , 2 o \beta }}Y(x)&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) {_a\mathcal {I}^{w,\delta }_{\alpha , 1 o \beta }} Y(x) \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( Y(x) \beta \frac{ x^{2\beta }}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( \begin{pmatrix} 4x &{} -x \\ 12x &{} -3x \end{pmatrix} \beta \frac{x^{2\beta }}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix} \right) \\&= {_0\mathcal {I}^{w,0}_{\alpha , \beta }} \left( \beta \frac{ x^{2\beta +1}}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}^2 \right) \\&= \beta (2\beta +1) \frac{ x^{3\beta +1}}{\Gamma (3\beta +2)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}.\\ {_0\mathcal {I}^{w,0}_{\alpha , 3 o \beta }}Y(x)&= \beta (2\beta +1) (3\beta +2) \frac{ x^{4\beta +2}}{\Gamma (4\beta +3)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}. \end{aligned}$$
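The collapse of \(\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}^2\) back to the same matrix in the computation above is no accident: this matrix is idempotent, which is why every series term beyond the identity remains proportional to it. A one-line check:

```python
# The coefficient matrix of this example is x*M with M = [[4, -1], [12, -3]].
# M is idempotent (M @ M = M), so every Peano-Baker term beyond the
# identity stays proportional to M.
M = [[4, -1], [12, -3]]
M2 = [[sum(M[i][k] * M[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert M2 == M
```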

Continuing in this manner, one easily obtains:

$$\begin{aligned} {_0\mathcal {I}^{w,0}_{\alpha , k o \beta }}Y(x)=\prod _{n=1}^k (n\beta +n-1)\frac{ x^{(k+1)\beta + k-1}}{\Gamma ((k+1)\beta +k)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}, \ \ \ k \ge 1. \end{aligned}$$

One then obtains the series function \(\Omega (x,0)\)

$$\begin{aligned} \Omega (x,0)=\frac{x^{\beta -1}}{\Gamma (\beta )}\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix}+\sum _{k=1}^{\infty }\prod _{n=1}^k (n\beta +n-1)\frac{ x^{(k+1)\beta +k-1}}{\Gamma ((k+1)\beta +k)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}. \end{aligned}$$

The first few terms of the series expansion of the function \(\Omega (x,0)\) are given as follows

$$\begin{aligned} \Omega (x,0)=&\frac{x^{\beta -1}}{\Gamma (\beta )}\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix}\\&+\beta \frac{ x^{2\beta }}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) \frac{ x^{3\beta +1}}{\Gamma (3\beta +2)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2) \frac{ x^{4\beta +2}}{\Gamma (4\beta +3)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3) \frac{ x^{5\beta +3}}{\Gamma (5\beta +4)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3)(5\beta +4) \frac{ x^{6\beta +4}}{\Gamma (6\beta +5)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+\\&\vdots \\&. \end{aligned}$$

One easily verifies that

$$\begin{aligned} _0^{RL}\mathcal {D}^{w,0}_{\alpha , \beta } \Omega (x,0)&= \beta \frac{x^{\beta }}{\Gamma (\beta +1)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+\beta \frac{ x^{2\beta +1}}{\Gamma (2\beta +1)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) \frac{ x^{3\beta +2}}{\Gamma (3\beta +2)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2) \frac{ x^{4\beta +3}}{\Gamma (4\beta +3)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3) \frac{ x^{5\beta +4}}{\Gamma (5\beta +4)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3)(5\beta +4) \frac{ x^{6\beta +5}}{\Gamma (6\beta +5)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\\&+\\&\vdots \\ =&\begin{pmatrix} 4x &{} -x \\ 12x &{} -3x \end{pmatrix} \Omega (x,0). \end{aligned}$$

and

$$\begin{aligned} _0\mathcal {I}^{w,0}_{\alpha , 1-\beta }\Omega (x,0)\big \vert _{x=0} =&\begin{pmatrix} 1 &{} \quad 0 \\ 0 &{} \quad 1 \end{pmatrix}\big \vert _{x=0} \\&+\beta \frac{ x^{\beta +1}}{\Gamma (\beta +2)} \begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\big \vert _{x=0} \\&+ \beta (2\beta +1) \frac{ x^{2\beta +2}}{\Gamma (2\beta +3)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\big \vert _{x=0} \\&+ \beta (2\beta +1) (3\beta +2) \frac{ x^{3\beta +3}}{\Gamma (3\beta +4)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\big \vert _{x=0} \\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3) \frac{ x^{4\beta +4}}{\Gamma (4\beta +5)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\big \vert _{x=0} \\&+ \beta (2\beta +1) (3\beta +2)(4\beta +3)(5\beta +4) \frac{ x^{5\beta +5}}{\Gamma (5\beta +6)}\begin{pmatrix} 4 &{} -1 \\ 12 &{} -3 \end{pmatrix}\big \vert _{x=0} \\&+\\&\vdots \\&=\begin{pmatrix} 1 &{} 0 \\ 0 &{} 1 \end{pmatrix}=\mathbb {I}. \end{aligned}$$

7 Conclusion

In this paper, we focused on linear systems of fractional differential equations involving Prabhakar fractional derivatives of Riemann–Liouville and Caputo types with variable coefficients. The state-transition matrix was expressed via the generalized Peano–Baker series under the assumption of its uniform convergence. Closed-form solutions of initial value problems were provided in both the homogeneous and inhomogeneous cases, for systems with Riemann–Liouville- and Caputo-type operators alike. Several examples were presented to demonstrate the application of the derived formulas.

Future research will explore the use of these findings in control and stability problems for systems with fractional dynamics.