Introduction

When no samples are available to estimate a probability distribution, we have to invite domain experts to evaluate the belief degree that each event will happen. In order to model such phenomena, uncertainty theory was founded by Liu [7] in 2007 and refined by Liu [6] in 2010, and it has become a branch of mathematics based on the normality axiom, duality axiom, subadditivity axiom, and product axiom. It is a new tool for studying subjective uncertainty. The first fundamental concept in uncertainty theory is the uncertain measure, which is used to indicate the belief degree that an uncertain event may occur. The Liu process and uncertain calculus were initiated by Liu (2009) to deal with differentiation and integration of functions of uncertain processes. Furthermore, uncertain differential equations, a type of differential equation driven by the Liu process, were defined by Liu [4]. Uncertainty theory and uncertain differential equations have been studied extensively in the literature (see, for example, [1,2,3,4,5,6,7,8,9,10,11,12] and the references cited therein).

In this paper, the fundamental matrix for the uncertain homogeneous linear system is introduced and the Liouville formula for the system is proven. The Liouville formula yields precise information about the determinant of an uncertain fundamental matrix. Moreover, by introducing the exponential matrix, the explicit solutions of the system are calculated. First, we recall some definitions and preliminaries.

Preliminaries

In this section, we state some basic concepts of uncertainty theory.

Definition 1

[7] Let \(\varGamma\) be a nonempty set, and L be a \(\sigma\)-algebra over \(\varGamma\). Each element \(\varLambda \in L\) is called an event. To measure uncertain events, the uncertain measure M was introduced as a set function satisfying the following axioms:

Axiom 1.:

(Normality) \(M\{\varGamma \}=1\).

Axiom 2.:

(Duality Axiom) \(M\{\varLambda \}+M\{\varLambda ^c\}=1\) for any event \(\varLambda\).

Axiom 3.:

(Countable Subadditivity) For every countable sequence of events \(\varLambda _1,\varLambda _2,\dots\), we have

$$\begin{aligned} M\left\{ {\bigcup\limits_{{i = 1}}^{\infty } {\varLambda _{i} } } \right\} \leqslant \sum \limits _{i=1}^\infty M \{\varLambda _i\}. \end{aligned}$$
Axiom 4.:

(Product Axiom) Let \((\varGamma _k,L_k,M_k)\) be uncertainty spaces for \(k =1, 2,\dots\). The product uncertain measure M is an uncertain measure satisfying

$$\begin{aligned} M\Biggl \{\prod _{k=1}^{\infty }\varLambda _k\Biggr \}=\bigwedge \limits _{k=1}^{\infty } M_k\{\varLambda _k\}, \end{aligned}$$

where \(\varLambda _k\) are arbitrarily chosen events from \(L_k\) for \(k = 1, 2,\dots\), respectively.

Let \(\varGamma\) be a nonempty set, L be a \(\sigma\)-algebra over \(\varGamma\), and M be an uncertain measure. Then the triplet \((\varGamma ,L,M)\) is called an uncertainty space [7]. Suppose T is a totally ordered set (e.g., time). An uncertain process is a function \(X_t\) from \(T\times \varGamma\) to the set of real numbers such that \(\{\gamma \in \varGamma \mid X_t(\gamma )\in B\}\) is an event for any Borel set B of real numbers at each time t [4]. Let \(X_t\) be an uncertain process; then for each \(\gamma \in \varGamma\), the function \(X_t(\gamma )\) is called a sample path of \(X_t\) [4].

Definition 2

[5] An uncertain process \(C_t\) is said to be a Liu process if

  1. (i)

    \(C_0=0\) and almost all sample paths are Lipschitz continuous,

  2. (ii)

    \(C_t\) has stationary and independent increments,

  3. (iii)

    every increment \(C_{s+t} - C_s\) is a normal uncertain variable with expected value 0 and variance \(t^2\).

Let \(C_{it}, i=1,2,\dots ,n\) be independent Liu processes. Then, \(C_t=(C_{1t},C_{2t},\dots ,C_{nt})^T\) is called an n-dimensional Liu process [12].

Definition 3

[5] Let \(X_t\) be an uncertain process and \(C_t\) be a Liu process. For any partition of the closed interval \([a,b]\) with \(a=t_1<t_2<\dots <t_{n+1}=b\), the mesh is written as

$$\begin{aligned} \varDelta =\max \limits _{1\leqslant i\leqslant n}\vert t_{i+1}-t_{i}\vert . \end{aligned}$$

Then, the Liu integral of \(X_t\) with respect to \(C_t\) is defined as

$$\begin{aligned} \int _{a}^{b}X_t\mathrm {d}C_t=\lim \limits _{\varDelta \rightarrow 0}\sum \limits _{i=1}^{n}X_{t_i}(C_{t_{i+1}}-C_{t_i}), \end{aligned}$$

provided that the limit exists almost surely and is finite. In this case, the uncertain process \(X_t\) is said to be integrable.
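The defining limit can be illustrated numerically. The sketch below is a minimal illustration, not part of the theory: it approximates the Liu integral along one fixed sample path by its Riemann-type sum. The integrand \(X_t = t\) and the sample path \(C_t(\gamma ) = t\) are hypothetical choices (the latter is Lipschitz continuous, as sample paths of a Liu process must be).

```python
# Riemann-type sum approximating the Liu integral of X_t with respect to C_t
# along ONE fixed sample path.  Both paths below are illustrative assumptions,
# not taken from the paper: X_t = t and C_t = t.

def liu_integral(X, C, a, b, n=100_000):
    """Approximate  int_a^b X_t dC_t  by  sum_i X_{t_i} (C_{t_{i+1}} - C_{t_i})."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        t_i = a + i * h
        total += X(t_i) * (C(t_i + h) - C(t_i))
    return total

# With C_t = t the Liu integral reduces to the ordinary integral int_0^1 t dt = 1/2.
approx = liu_integral(lambda t: t, lambda t: t, 0.0, 1.0)
print(round(approx, 3))  # 0.5
```

Along a fixed sample path the Liu integral is an ordinary path integral, which is why this deterministic check is meaningful.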

An uncertain differential equation is a type of differential equation involving uncertain processes. We introduce uncertain differential equations and systems of uncertain linear differential equations as follows.

Definition 4

[4] Suppose \(C_t\) is a Liu process, and f and g are two functions. Then

$$\begin{aligned} \mathrm {d}X_t=f(t,X_t)\mathrm {d}t+g(t,X_t)\mathrm {d}C_t, \end{aligned}$$
(2.1)

is called an uncertain differential equation. A solution is an uncertain process \(X_t\) that satisfies (2.1) identically in t.

Definition 5

[8] Let t be a positive real variable and \(X_t =(X_{1t},X_{2t},\dots ,X_{nt})^T\) be an n-dimensional uncertain process whose elements \(X_{jt}\) are integrable uncertain processes. Let \(A(t)=[a_{ij}(t)]\) and \(B(t)=[b_{ij}(t)]\) be \(n\times n\) matrices of integrable uncertain real functions, and let \(U(t)=(u_1(t),u_2(t),\dots ,u_n(t))^T\) and \(V(t) =(v_1(t),v_2(t),\dots ,v_n(t))^T\) be n-component vectors of integrable uncertain real functions. Then

$$\begin{aligned} \mathrm {d}X_t= [A(t)X_t+U(t)]\mathrm {d}t+[B(t)X_t+V(t)]\mathrm {d}C_t, \end{aligned}$$
(2.2)

is called a system of uncertain linear differential equations.

If U(t) and V(t) in (2.2) are identically 0; that is, if (2.2) has the form

$$\begin{aligned} \mathrm {d}X_t=[A(t)X_t]\mathrm {d}t+[B(t)X_t]\mathrm {d}C_t, \end{aligned}$$
(2.3)

then the equation is called an uncertain homogeneous linear system [8]. Chen and Liu [1] considered the uncertain differential equation (2.1) and presented the following theorem on the existence and uniqueness of its solutions.

Theorem 1

[1] The uncertain differential equation (2.1) has a unique solution if the coefficients f(x, t) and g(x, t) satisfy the Lipschitz condition

$$\begin{aligned} \vert f(x,t)-f(y,t)\vert +\vert g(x,t)-g(y,t)\vert \leqslant L\vert x-y\vert ,\quad \forall x, y\in {\mathbb {R}},\quad t\geqslant 0 \end{aligned}$$

and the linear growth condition

$$\begin{aligned} \vert f(x,t)\vert +\vert g(x,t)\vert \leqslant L(1+\vert x\vert ),\quad \forall x\in {\mathbb {R}},\quad t\geqslant 0 \end{aligned}$$

for some constant L. Moreover, the solution is sample-continuous.

The stability of the uncertain differential equation (2.1) was considered by the authors in [11] as follows.

Theorem 2

[11] The uncertain differential equation (2.1) is stable if the coefficients f(t, x) and g(t, x) satisfy the linear growth condition for some constant K and the strong Lipschitz condition for some bounded and integrable function L(t) on \([0,\infty )\).

Recently, Lio and Liu [3] obtained a COVID-19 spread model as the uncertain differential equation

$$\begin{aligned} \mathrm {d}X_t=\mu _t X_t\mathrm {d}t+\sigma _t X_t\mathrm {d}C_t, \end{aligned}$$

where \(X_t\) is the cumulative number of COVID-19 infections in China at time t, \(C_t\) is a Liu process, and \(\mu _t\) and \(\sigma _t\) are unknown time-varying parameters. They then inferred the zero-day of the COVID-19 spread in China.

Let \(C_t =(C_{1t},C_{2t},\dots ,C_{nt})^T\) be an n-dimensional Liu process, \(A(t)=[a_{ij}(t)]\) be an \(n\times n\) matrix of integrable uncertain real functions and \(X_t= (X_{1t},X_{2t},\dots ,X_{nt})^T\) be an n-dimensional uncertain process whose elements \(X_{jt}\) and all \(a_{ij}(t)X_{jt}\) are integrable uncertain processes. Then, the Liu integral of \(A(t)X_t\) with respect to \(C_t\) on \([a,b]\) is defined by

$$\begin{aligned} \int _{a}^{b}A(t)X_t\mathrm {d}C_t = \begin{pmatrix} \sum \limits _{j=1}^{n}\int _{a}^{b} a_{1j}(t)X_{jt}\mathrm {d}C_{jt} \\ \sum \limits _{j=1}^{n}\int _{a}^{b} a_{2j}(t)X_{jt}\mathrm {d}C_{jt} \\ \vdots \\ \sum \limits _{j=1}^{n}\int _{a}^{b} a_{nj}(t)X_{jt}\mathrm {d}C_{jt} \end{pmatrix}. \end{aligned}$$

In this case, \(A(t) X_t\) is said to be Liu integrable with respect to \(C_t\) [8].

Remark 1

The system (2.2) is equivalent to the uncertain integral equation

$$\begin{aligned} X_t=X_0+\int _{0}^{t}{[A(s)X_s+U(s)}]\mathrm {d}s+\int _{0}^{t}[B(s)X_s+V(s)]\mathrm {d}C_s. \end{aligned}$$

Recently, the authors in [8] considered system (2.2) with an initial condition and proved the following theorem on the existence and uniqueness of solutions of the initial value problem.

Theorem 3

[8] Suppose that there exists a continuous function k(t) on \([a,b]\) such that \(\vert A(t)\vert \leqslant k(t)\), \(\vert B(t)\vert \leqslant k(t)\), \(\vert U(t)\vert \leqslant k(t)\), and \(\vert V(t)\vert \leqslant k(t)\) on \([a,b]\). Then, system (2.2) with initial condition \(X_{t_0}=X_0\) has a unique solution \(X_t\) on \([a,b]\) in the following sense.

$$\begin{aligned} X_t=X_0+\int _{t_0}^{t}[A(s)X_s+U(s)]\mathrm {d}s+\int _{t_0}^{t}[B(s)X_s+V(s)]\mathrm {d}C_s,\quad t\in [a,b]. \end{aligned}$$

Liouville Formula

In this section, the fundamental system and fundamental matrix associated with the system of uncertain homogeneous linear differential equations are introduced. The main result of this section is the Liouville formula for the system.

Definition 6

A set of n linearly independent solutions of (2.3) is called an uncertain fundamental system of (2.3).

Definition 7

An \(n \times n\) matrix whose columns are linearly independent solutions of (2.3) is called an uncertain fundamental matrix of (2.3).

Theorem 4

Let \(X_s\) be a solution of (2.3) on (a, b). Then either \(X_s\equiv 0\) or \(X_s\ne 0\) for all \(s\in (a,b)\).

Proof

If there exists \(s_0\in (a,b)\) such that \(X_{s_0}=0\), then \(X_s\) and \(Y_s\equiv 0\) are two solutions of (2.3) with the same value 0 at \(s_0\). Therefore, by the uniqueness of solutions, \(X_s\equiv 0\).\(\square\)

Theorem 5

Let \(Y_s\) be an \(n \times n\) matrix whose columns are solutions of (2.3) on (a, b). A sufficient condition for \(\det Y_s\ne 0\) for all \(s\in (a,b)\) is that there exists \(s_0\in (a,b)\) such that \(\det Y_{s_0} \ne 0\).

Proof

Let \(\det Y_{s_1}=0\) for some \(s_1\in (a,b)\). Then there exists \((c_1,c_2,\dots ,c_n)\ne 0\) in \({\mathbb {R}}^n\) such that \(\sum \nolimits _{i=1}^{n}c_iy_{is_1}=0\). Then, by Theorem 4, \(\sum \nolimits _{i=1}^{n}c_iy_{is}=0\) for all \(s\in (a,b)\). Therefore, \(\det Y_s=0\) for all \(s\in (a,b)\), which contradicts \(\det Y_{s_0}\ne 0\).\(\square\)

Theorem 6

Let \(Y_s\) be an \(n \times n\) matrix whose columns are solutions of (2.3) on (a, b). A necessary and sufficient condition for \(Y_s\) to be an uncertain fundamental matrix of (2.3) is that there exists \(s_0\in (a,b)\) such that \(\det Y_{s_0} \ne 0\).

Proof

Let \(\det Y_{s_0} \ne 0\); then by Theorem 5, \(\det Y_s \ne 0\) for all \(s\in (a,b)\). Now, let \(y_s\ne 0\) be a solution of (2.3) with initial condition \(y_{s_1}\ne 0\). Since \(\det Y_{s_1}\ne 0\), there exists \(C=(c_1,c_2,\dots ,c_n)^T\) such that \(Y_{s_1}C=y_{s_1}\), i.e., \(\sum \nolimits _{i=1}^{n}c_iy_{is_1}=y_{s_1}\). Thus, by the uniqueness of solutions, \(\sum \nolimits _{i=1}^{n}c_iy_{is}=y_{s}\). Therefore, \(Y_s\) is an uncertain fundamental matrix.

Now let \(\det Y_s\equiv 0\). In this case, \(\det Y_{s_1}= 0\) for all \(s_1\in (a,b)\). Thus, the columns of \(Y_{s_1}\) are linearly dependent; that is, there exist \(c_1,c_2,\dots ,c_n\), not all zero, such that \(\sum \nolimits _{i=1}^{n}c_iy_{is_1}= 0\). But by Theorem 4, \(\sum \nolimits _{i=1}^{n}c_iy_{is}\equiv 0\). Thus, the columns of \(Y_s\) are linearly dependent, and therefore \(Y_s\) is not an uncertain fundamental matrix. Therefore, if \(Y_s\) is an uncertain fundamental matrix, we must have \(\det Y_s\ne 0\); in particular, there exists \(s_0\in (a,b)\) such that \(\det Y_{s_0}\ne 0\). \(\square\)

The following corollary is a direct conclusion of Theorems 5 and 6.

Corollary 1

Let \(Y_s\) be an \(n \times n\) matrix whose columns are solutions of (2.3) on (a, b). A necessary and sufficient condition for \(Y_s\) to be an uncertain fundamental matrix of (2.3) is that \(\det Y_s \ne 0\).

Theorem 7

Let \(Y_s\) be an uncertain fundamental matrix of (2.3) and C be a constant nonsingular matrix. Then \(Y_sC\) is an uncertain fundamental matrix too. Moreover, if \(Z_s\) is another uncertain fundamental matrix, then there exists a nonsingular constant matrix \(C_1\) such that \(Z_s=Y_sC_1\).

Proof

Since

$$\begin{aligned} \mathrm {d}Y_s=[A(s)Y_s]\mathrm {d}s+[B(s)Y_s]\mathrm {d}C_s, \end{aligned}$$

then,

$$\begin{aligned} \mathrm {d}Y_sC=[A(s)Y_s]C\mathrm {d}s+[B(s)Y_s]C\mathrm {d}C_s, \end{aligned}$$

or

$$\begin{aligned} \mathrm {d}(Y_sC)=A(s)[Y_s C]\mathrm {d}s+B(s)[Y_sC]\mathrm {d}C_s, \end{aligned}$$

which means that \(Y_sC\) satisfies (2.3). Also,

$$\begin{aligned} \det (Y_s C)=(\det Y_s) (\det C)\ne 0. \end{aligned}$$

Thus, \(Y_sC\) is an uncertain fundamental matrix. Now for \(s\in (a,b)\), let

$$\begin{aligned} F_s=(Y_s)^{-1}Z_s. \end{aligned}$$

Then,

$$\begin{aligned} Z_s=Y_sF_s, \end{aligned}$$

and

$$\begin{aligned} \mathrm {d}{Z_s}=Y_s\mathrm {d}{F_s}+(\mathrm {d}{Y_s})F_s. \end{aligned}$$

Hence,

$$\begin{aligned} A(s)Z_s\mathrm {d}s+B(s)Z_s\mathrm {d}C_s&=Y_s\mathrm {d}{F_s}+(A(s)Y_s\mathrm {d}s+B(s)Y_s\mathrm {d}C_s)F_s\\ {}&=Y_s\mathrm {d} {F_s}+A(s)Y_sF_s\mathrm {d}s+B(s)Y_sF_s\mathrm {d}C_s\\&=Y_s\mathrm {d}F_s+A(s)Z_s\mathrm {d}s+B(s)Z_s\mathrm {d}C_s. \end{aligned}$$

Therefore,

$$\begin{aligned} Y_s\mathrm {d}F_s=0. \end{aligned}$$

Since \(\det Y_s\ne 0\) on (ab), then

$$\begin{aligned} \mathrm {d}F_s=0. \end{aligned}$$

Therefore, \(F_s\) is a constant matrix. Since

$$\begin{aligned} \det F_s = \det (Y_s^{-1}) \det (Z_s)\ne 0, \end{aligned}$$

then \(F_s\) is nonsingular and the proof is complete. \(\square\)

The authors in [1] proved the following theorem and calculated the exact solution of an uncertain linear equation. We will use this theorem to prove our main result in this section.

Theorem 8

[1] Let \(u_{1t}, u_{2t}, v_{1t}, v_{2t}\) be integrable uncertain processes. Then the linear uncertain differential equation

$$\begin{aligned} \mathrm {d}X_t = (u_{1t}X_t + u_{2t})\mathrm {d}t + (v_{1t}X_t + v_{2t})\mathrm {d}C_t, \end{aligned}$$

has a solution

$$\begin{aligned} X_t=U_t\Big (X_0+\int _{0}^{t}\frac{u_{2s}}{U_s}\mathrm {d}s+\int _{0}^{t}\frac{v_{2s}}{U_s}\mathrm {d}C_s\Big ), \end{aligned}$$

where

$$\begin{aligned} U_t=e^{(\int _{0}^{t}u_{1s}\mathrm {d}s+\int _{0}^{t}v_{1s}\mathrm {d}C_s)}. \end{aligned}$$
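As a pathwise sanity check of Theorem 8, the sketch below makes the illustrative assumptions \(u_{1t}=0.5\), \(v_{1t}=0.2\), \(u_{2t}=v_{2t}=0\) (constants, not from the paper) and fixes the Lipschitz sample path \(C_t = t\). The closed form then reduces to \(X_t = X_0 e^{(u_1+v_1)t}\), which an Euler scheme for the equation reproduces along that path.

```python
# Pathwise check of Theorem 8 in a simple special case: constant coefficients
# u1 = 0.5, v1 = 0.2, u2 = v2 = 0 (assumed values), evaluated along the
# assumed sample path C_t = t.  Closed form: X_t = X_0 * exp((u1 + v1) t).
import math

u1, v1, x0 = 0.5, 0.2, 1.0
C = lambda t: t                      # one fixed sample path (an assumption)

def euler(T, steps=100_000):
    """Euler scheme for dX = u1 X dt + v1 X dC along the fixed path."""
    h = T / steps
    x, t = x0, 0.0
    for _ in range(steps):
        x += u1 * x * h + v1 * x * (C(t + h) - C(t))
        t += h
    return x

T = 1.0
exact = x0 * math.exp((u1 + v1) * T)
print(euler(T), exact)               # the two values agree to several decimals
```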

Now we state the main result of this section, which extends a theorem of the theory of ordinary differential equations, known as the Liouville formula, to uncertain homogeneous linear systems.

Theorem 9

(Liouville formula) Let \(Y_s\) be a matrix satisfying the following matrix uncertain differential equation.

$$\begin{aligned} \mathrm {d}Y_s=A(s)Y_s\mathrm {d}s+B(s)Y_s\mathrm {d}C_s. \end{aligned}$$
(3.1)

Then,

$$\begin{aligned} \mathrm {d}[\det Y_s] = [tr A(s)][\det Y_s]\mathrm {d}s+[tr B(s)][\det Y_s]\mathrm {d}C_s. \end{aligned}$$

Also, if \(s, s_0 \in (a,b)\), then

$$\begin{aligned} \det Y_s = [\det Y_{s_0}]e^{\int _{s_0}^{s}tr A(t)\mathrm {d}t+\int _{s_0}^{s}tr B(t)\mathrm {d}C_t}. \end{aligned}$$
(3.2)

Proof

Let \(Y_s = (y^{ij}_s)\), \(A(s) = (a^{ij}(s))\) and \(B(s) = (b^{ij}(s))\). Then (by induction)

$$\begin{aligned} \mathrm {d}[\det Y_s]&= \begin{vmatrix} \mathrm {d}y^{11}_s&\mathrm {d}y^{12}_s&\dots&\mathrm {d}y^{1n}_s\\ y^{21}_s&y^{22}_s&\dots&y^{2n}_s\\ \vdots&\vdots&\quad&\vdots \\ y^{(n-1)1}_s&y^{(n-1)2}_s&\dots&y^{(n-1)n}_s\\ y^{n1}_s&y^{n2}_s&\dots&y^{nn}_s \end{vmatrix}+\dots +\begin{vmatrix} y^{11}_s&y^{12}_s&\dots&y^{1n}_s\\ y^{21}_s&y^{22}_s&\dots&y^{2n}_s\\ \vdots&\vdots&\quad&\vdots \\ y^{(n-1)1}_s&y^{(n-1)2}_s&\dots&y^{(n-1)n}_s\\ \mathrm {d}y^{n1}_s&\mathrm {d}y^{n2}_s&\dots&\mathrm {d}y^{nn}_s \end{vmatrix}. \end{aligned}$$

But

$$\begin{aligned} \mathrm {d}y^{ij}_s=\sum _{k=1}^{n}a^{ik}(s)y^{kj}_s\mathrm {d}s+\sum _{k=1}^{n}b^{ik}(s)y^{kj}_s\mathrm {d}C_s. \end{aligned}$$

Hence,

$$\begin{aligned} \mathrm {d}[\det Y_s]&= \begin{vmatrix} \sum \limits _{k=1}^{n}[a^{1k}(s)y^{k1}_s\mathrm {d}s +b^{1k}(s)y^{k1}_s\mathrm {d}C_s]&\dots&\sum \limits _{k=1}^{n}[a^{1k}(s)y^{kn}_s\mathrm {d}s +b^{1k}(s)y^{kn}_s\mathrm {d}C_s]\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots \\ \\&\quad +\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{({n-1})1}_s&\dots&y^{({n-1})n}_s\\ \sum \limits _{k=1}^{n}[a^{nk}(s)y^{k1}_s\mathrm {d}s +b^{nk}(s)y^{k1}_s\mathrm {d}C_s]&\dots&\sum \limits _{k=1}^{n}[a^{nk}(s)y^{kn}_s\mathrm {d}s +b^{nk}(s)y^{kn}_s\mathrm {d}C_s] \end{vmatrix}. \end{aligned}$$

Therefore,

$$\begin{aligned} \mathrm {d}[\det Y_s]&= \begin{vmatrix} \sum \limits _{k=1}^{n}a^{1k}(s)y^{k1}_s\mathrm {d}s&\dots&\sum \limits _{k=1}^{n}a^{1k}(s)y^{kn}_s\mathrm {d}s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots \\&\quad + \begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ \sum \limits _{k=1}^{n}a^{nk}(s)y^{k1}_s\mathrm {d}s&\dots&\sum \limits _{k=1}^{n}a^{nk}(s)y^{kn}_s\mathrm {d}s \end{vmatrix}\\ \\&\quad + \begin{vmatrix} \sum \limits _{k=1}^{n}b^{1k}(s)y^{k1}_s\mathrm {d}C_s&\dots&\sum \limits _{k=1}^{n}b^{1k}(s)y^{kn}_s\mathrm {d}C_s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots \\&\quad +\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ \sum \limits _{k=1}^{n}b^{nk}(s)y^{k1}_s\mathrm {d}C_s&\dots&\sum \limits _{k=1}^{n}b^{nk}(s)y^{kn}_s\mathrm {d}C_s \end{vmatrix}. \end{aligned}$$

In the first determinant on the right-hand side, subtracting appropriate multiples of the other rows from the first row, and carrying out similar operations on the other determinants, we obtain

$$\begin{aligned} \mathrm {d}[\det Y_s]&= \begin{vmatrix} a^{11}(s)y^{11}_s\mathrm {d}s&\dots&a^{11}(s)y^{1n}_s\mathrm {d}s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots +\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ a^{nn}(s)y^{n1}_s\mathrm {d}s&\dots&a^{nn}(s)y^{nn}_s\mathrm {d}s \end{vmatrix}\\&\quad + \begin{vmatrix} b^{11}(s)y^{11}_s\mathrm {d}C_s&\dots&b^{11}(s)y^{1n}_s\mathrm {d}C_s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots +\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ b^{nn}(s)y^{n1}_s\mathrm {d}C_s&\dots&b^{nn}(s)y^{nn}_s\mathrm {d}C_s \end{vmatrix}\\ \\&= a^{11}(s)\begin{vmatrix} y^{11}_s\mathrm {d}s&\dots&y^{1n}_s\mathrm {d}s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots +a^{nn}(s)\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ y^{n1}_s\mathrm {d}s&\dots&y^{nn}_s\mathrm {d}s \end{vmatrix}\\ \\&\quad + b^{11}(s)\begin{vmatrix} y^{11}_s\mathrm {d}C_s&\dots&y^{1n}_s\mathrm {d}C_s\\ y^{21}_s&\dots&y^{2n}_s\\ \vdots&\quad&\vdots \\ y^{n1}_s&\dots&y^{nn}_s \end{vmatrix}+\dots +b^{nn}(s)\begin{vmatrix} y^{11}_s&\dots&y^{1n}_s\\ \vdots&\quad&\vdots \\ y^{(n-1)1}_s&\dots&y^{(n-1)n}_s\\ y^{n1}_s\mathrm {d}C_s&\dots&y^{nn}_s\mathrm {d}C_s \end{vmatrix}\\ \\&=[tr A(s)][\det Y_s]\mathrm {d}s+[tr B(s)][\det Y_s]\mathrm {d}C_s. \end{aligned}$$

Thus, we have

$$\begin{aligned} \mathrm {d}[\det Y_s]=[tr A(s)][\det Y_s]\mathrm {d}s+[tr B(s)][\det Y_s]\mathrm {d}C_s. \end{aligned}$$

According to Theorem 8, we have

$$\begin{aligned} \det Y_s=\Big (\det Y_{s_0}\Big )U_s \end{aligned}$$

where

$$\begin{aligned} U_s=e^{\int _{s_0}^{s}tr A(t)\mathrm {d}t+\int _{s_0}^{s}tr B(t)\mathrm {d}C_t}. \end{aligned}$$

Therefore,

$$\begin{aligned} \det Y_s = [\det Y_{s_0}]e^{\int _{s_0}^{s}tr A(t)\mathrm {d}t+\int _{s_0}^{s}tr B(t)\mathrm {d}C_t}. \end{aligned}$$

\(\square\)
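In the deterministic special case \(B(t)\equiv 0\), the \(\mathrm{d}C_t\) term drops out and (3.2) reduces to the classical Liouville formula \(\det Y_s = [\det Y_{s_0}]e^{\int _{s_0}^{s}tr A(t)\mathrm {d}t}\). The sketch below checks this reduction numerically for an arbitrary assumed constant matrix A with \(Y_{t_0}=I\); it is an illustration only, not part of the proof.

```python
# Numerical check of the Liouville formula in the special case B(t) = 0:
# with constant A and Y_0 = I, det Y_T should equal exp(tr(A) * T).
# The matrix A below is an arbitrary illustrative choice.
import math

A = [[1.0, 2.0], [0.0, 3.0]]          # tr A = 4

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def solve(T, steps=100_000):
    """Euler scheme for dY = A Y dt, Y_0 = I."""
    h = T / steps
    Y = [[1.0, 0.0], [0.0, 1.0]]
    for _ in range(steps):
        AY = mat_mul(A, Y)
        Y = [[Y[i][j] + h * AY[i][j] for j in range(2)] for i in range(2)]
    return Y

T = 0.5
print(det2(solve(T)), math.exp(4 * T))  # the two values agree closely
```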

Corollary 2

The \(n\times n\) matrix \(X_t\) is a fundamental matrix if and only if there exists \(t_0\in (a,b)\) such that \(\det X_{t_0} \ne 0\).

Proof

Let \(X_t\) be a fundamental matrix. Then \(X_t\) is a solution of (3.1) and therefore satisfies (3.2). On the other hand, according to Corollary 1, \(X_t\) is a fundamental matrix if and only if \(\det X_t\ne 0\). According to relation (3.2), \(\det X_t\ne 0\) if and only if there exists \(t_0\in (a,b)\) such that \(\det X_{t_0} \ne 0\). Thus \(X_t\) is an uncertain fundamental matrix if and only if there exists \(t_0\in (a,b)\) such that \(\det X_{t_0} \ne 0\). \(\square\)

Explicit Solutions of Uncertain Homogeneous Linear System

In this section, by introducing the exponential matrix, the explicit solutions of (2.3) are presented. Suppose that B(t) is an \(n\times n\) matrix whose elements are continuous functions on (a, b), and \(B^0=I_{n\times n}\) is the identity matrix. In this case, each partial sum \(\sum \limits _{s=0}^{m}\frac{1}{s!}{B^s}\) is also an \(n\times n\) matrix. For positive integers p and q, we have

$$\begin{aligned} \Big \vert \sum _{s=0}^{p+q}\frac{B^s}{s!}-\sum _{s=0}^{p}\frac{B^s}{s!}\Big \vert =\Big \vert \sum _{s=p+1}^{p+q}\frac{B^s}{s!}\Big \vert \leqslant \sum _{s=p+1}^{p+q}\frac{1}{s!}\vert B\vert ^s. \end{aligned}$$

Hence, the partial sums \(\sum \nolimits _{s=0}^{m}\frac{1}{s!}{B^s}\) form a Cauchy sequence and therefore converge to an \(n\times n\) matrix \(C(t)=[c_{ij}(t)]\). This matrix is called the exponential matrix of B(t) and is denoted by \(e^{B(t)}\). It is well known from linear algebra that for two \(n\times n\) matrices A and B, if \(AB=BA\), then \(e^{A+B}=e^Ae^B\).
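The truncated series above can be mirrored directly in code. The sketch below is an illustrative implementation (not from the paper): it sums the partial sums of \(e^B\) in pure Python and checks the commuting-matrix identity \(e^{A+B}=e^Ae^B\) for the pair I, J that reappears in Example 1.

```python
# Truncated-series sketch of the exponential matrix e^B = sum_{s>=0} B^s / s!,
# plus a check of e^{A+B} = e^A e^B for the commuting pair chosen below.
def mat_mul(M, N):
    n = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(M, N):
    return [[M[i][j] + N[i][j] for j in range(len(M))] for i in range(len(M))]

def expm(B, terms=30):
    n = len(B)
    term = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # B^0 = I
    result = [row[:] for row in term]
    for s in range(1, terms):
        term = [[v / s for v in row] for row in mat_mul(term, B)]          # B^s / s!
        result = mat_add(result, term)
    return result

I = [[1.0, 0.0], [0.0, 1.0]]
J = [[0.0, 1.0], [1.0, 0.0]]    # IJ = JI, so e^{I+J} = e^I e^J must hold
lhs = expm(mat_add(I, J))
rhs = mat_mul(expm(I), expm(J))
```

Thirty terms are far more than needed here; the factorial in the denominator makes the tail negligible for matrices of this size.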

Now we present the main result of this section.

Theorem 10

Let A(t), B(t) be \(n\times n\) continuous matrices on (a, b) and assume that \(D(t)=\int _{t_0}^{t} A(s)\mathrm {d}s+\int _{t_0}^{t}B(s) \mathrm {d}C_s\). If \(A(t)D(t)=D(t)A(t)\) and \(B(t)D(t)=D(t)B(t)\) on (a, b), then \(X_t=e^{D(t)}X_0\) is a solution of the following initial value problem.

$$\begin{aligned} \mathrm {d}X_t=A(t)X_t\mathrm {d}t+B(t)X_t\mathrm {d}C_t,\quad X_{t_0}=X_0. \end{aligned}$$
(4.1)

Proof

First note that for differentiable matrices M and N whose elements are differentiable uncertain processes, the following relations hold.

$$\begin{aligned} \frac{\mathrm d}{\mathrm {d}t}(MN)&=M\frac{\mathrm {d}N}{\mathrm {d}t}+\frac{\mathrm {d}M}{\mathrm {d}t}N,\\ \frac{\mathrm d}{\mathrm {d}C_t}(MN)&=M\frac{\mathrm {d}N}{\mathrm {d}C_t}+\frac{\mathrm {d}M}{\mathrm {d}C_t}N. \end{aligned}$$

From the definition of D(t), it follows that

$$\begin{aligned} \mathrm {d}D(t)=A(t)\mathrm {d}t+B(t)\mathrm {d}C_t. \end{aligned}$$
(4.2)

Now assume that

$$\begin{aligned} \mathrm {d}[D(t)]^m=mA(t)[D(t)]^{m-1}\mathrm {d}t+mB(t)[D(t)]^{m-1}\mathrm {d}C_t. \end{aligned}$$
(4.3)

Then, using (4.2) and the hypothesis on A, B and D, it can be concluded that

$$\begin{aligned} \mathrm {d}[D(t)]^{m+1}&=\mathrm {d}([D(t)]^mD(t))\\ {}&=\big [mA(t)[D(t)]^{m-1}\mathrm {d}t+mB(t)[D(t)]^{m-1}\mathrm {d}C_t\big ]D(t)\\&\ \ \ \ +[D(t)]^m \big [A(t)\mathrm {d}t+B(t)\mathrm {d}C_t\big ]\\&=mA(t)[D(t)]^m\mathrm {d}t+mB(t)[D(t)]^m\mathrm {d}C_t+A(t)[D(t)]^m\mathrm {d}t\\&\ \ \ \ +B(t)[D(t)]^m\mathrm {d}C_t\\&=(m+1)A(t)[D(t)]^m\mathrm {d}t+(m+1)B(t)[D(t)]^m\mathrm {d}C_t.\\ \end{aligned}$$

Hence, by induction, relation (4.3) holds for all positive integers m. By the proof of the existence and uniqueness theorem for uncertain linear systems [8], the unique solution \(X_t\) of (4.1) is the limit of the following recursive sequence

$$\begin{aligned} X_t^1&=X_0+\int _{t_0}^{t}A(s)X_0\mathrm {d}s+\int _{t_0}^{t}B(s)X_0\mathrm {d}C_s,\\ X_t^m&=X_0+\int _{t_0}^{t}A(s)X^{m-1}_s\mathrm {d}s+\int _{t_0}^{t}B(s)X^{m-1}_s\mathrm {d}C_s. \end{aligned}$$

Therefore,

$$\begin{aligned} \mathrm {d}X_t^m=A(t)X^{m-1}_t\mathrm {d}t+B(t)X^{m-1}_t\mathrm {d}C_t. \end{aligned}$$
(4.4)

Now assume that

$$\begin{aligned} X_t^{m-1}=\Big [I+D(t)+\frac{1}{2!}[D(t)]^2+\dots +\frac{1}{(m-1)!}[D(t)]^{m-1}\Big ]X_0. \end{aligned}$$

Then by (4.4)

$$\begin{aligned} \mathrm {d}X_t^m&=A(t)X^{m-1}_t\mathrm {d}t+B(t)X^{m-1}_t\mathrm {d}C_t\\&=\bigg [A(t)\Big [I+[D(t)]+\frac{1}{2!}[D(t)]^2+\dots +\frac{1}{(m-1)!}[D(t)]^{m-1}\Big ]X_0\bigg ]\mathrm {d}t\\&\quad +\bigg [B(t)\Big [I+[D(t)]+\frac{1}{2!}[D(t)]^2+\dots +\frac{1}{(m-1)!}[D(t)]^{m-1}\Big ]X_0\bigg ]\mathrm {d}C_t\\&=\bigg [A(t)X_0+A(t)D(t)X_0+\dots +\frac{1}{(m-1)!}[A(t)[D(t)]^{m-1}X_0]\bigg ]\mathrm {d}t\\&\quad +\bigg [B(t)X_0+B(t)D(t)X_0+\dots +\frac{1}{(m-1)!}[B(t)[D(t)]^{m-1}X_0]\bigg ]\mathrm {d}C_t\\&=\Big [A(t)X_0\mathrm {d}t+B(t)X_0\mathrm {d}C_t\Big ]+\Big [ A(t)D(t)X_0\mathrm {d}t +B(t)D(t)X_0\mathrm {d}C_t\Big ]+\dots \\&\quad +\Big [\frac{1}{(m-1)!}A(t)[D(t)]^{m-1}X_0\mathrm {d}t+\frac{1}{(m-1)!}B(t)[D(t)]^{m-1}X_0\mathrm {d}C_t\Big ].\\ \end{aligned}$$

Integrating the equation above on \([t_0,t]\) and using relations (4.3), (4.4), \(D(t_0)=0\) and \(X^m_{t_0}=X_0\) yields

$$\begin{aligned} \begin{aligned} X_t^m&=X_0+D(t)X_0+\frac{1}{2!}[D(t)]^2X_0+\dots +\frac{1}{m!}[D(t)]^mX_0\\&=\big [I+D(t)+\frac{1}{2!}[D(t)]^2+\dots +\frac{1}{m!}[D(t)]^m\big ]X_0. \end{aligned} \end{aligned}$$

Therefore,

$$\begin{aligned} X_t=\lim \limits _{m\rightarrow \infty }X^m_t=e^{D(t)}X_0, \end{aligned}$$

and the proof is complete. \(\square\)

Corollary 3

Let A(t), B(t) and D(t) satisfy the conditions of Theorem 10. Then \(e^{D(t)}\) is a fundamental matrix.

Proof

Let \(I_i\) be the i-th column of the identity matrix, \(X_{it}=e^{D(t)}I_i\) for \(1\leqslant i\leqslant n\), and \(X_t=(X_{1t},\dots ,X_{nt})\). Then every column of \(X_t\) is a solution of (4.1) and \(X_t=e^{D(t)}I=e^{D(t)}\). From \(X_{t_0}=e^{D(t_0)}=e^0=I\) and \(\det X_{t_0}=1\ne 0\), it can be concluded that \(X_t=e^{D(t)}\) is a fundamental matrix. \(\square\)

Example 1

Consider the following uncertain linear system

$$\begin{aligned} \mathrm {d}X_t&=\Big (X_t+2tY_t\Big )\mathrm {d}t+\Big (2X_t-3C_tY_t\Big )\mathrm {d}C_t,\\ \mathrm {d}Y_t&=\Big (2tX_t+Y_t\Big )\mathrm {d}t+\Big (-3C_tX_t+2Y_t\Big )\mathrm {d}C_t.\\ \end{aligned}$$

For this system,

$$\begin{aligned} A(t)=\begin{pmatrix} 1 & 2t\\ 2t & 1 \end{pmatrix},\quad B(t)=\begin{pmatrix} 2 & -3C_t\\ -3C_t & 2 \end{pmatrix}. \end{aligned}$$

Therefore,

$$\begin{aligned} D(t)&=\int _{t_0}^{t}A(s)\mathrm {d}s+\int _{t_0}^{t}B(s)\mathrm {d}C_s =\begin{pmatrix} t-t_0 & t^2-t_0^2\\ t^2-t_0^2 & t-t_0 \end{pmatrix}\\&\ \ \ \ +\begin{pmatrix} 2(C_t-C_{t_0}) & -\frac{3}{2}({C_t}^2-{C_{t_0}}^2)\\ -\frac{3}{2}({C_t}^2-{C_{t_0}}^2) & 2(C_t-C_{t_0}) \end{pmatrix}\\&=\begin{pmatrix} t-t_0+2(C_t-C_{t_0}) & t^2-t_0^2-\frac{3}{2}({C_t}^2-{C_{t_0}}^2)\\ t^2-t_0^2-\frac{3}{2}({C_t}^2-{C_{t_0}}^2) & t-t_0+2(C_t-C_{t_0}) \end{pmatrix}. \end{aligned}$$

Since \(A(t)D(t)=D(t)A(t)\) and \(B(t)D(t)=D(t)B(t)\), \(e^{D(t)}\) is a fundamental matrix. For simplicity let \(t_0=0\) and \(C_{t_0}=0\). Then

$$\begin{aligned} D(t)&=\begin{pmatrix} t+2C_t & t^2-\frac{3}{2}{C_{t}}^2\\ t^2-\frac{3}{2}{C_t}^2 & t+2C_t \end{pmatrix}=(t+2C_t)\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} +(t^2-\frac{3}{2}{C_t}^2)\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}\\&=(t+2C_t)I+(t^2-\frac{3}{2}{C_t}^2)J. \end{aligned}$$

Since \(IJ=JI\) and \(J^2=I\), then

$$\begin{aligned} e^{D(t)}=e^{(t+2C_t)I+(t^2-\frac{3}{2}{C_t}^2)J}=e^{(t+2C_t)I}e^{(t^2-\frac{3}{2}{C_t}^2)J}. \end{aligned}$$

On the other hand,

$$\begin{aligned} e^{(t+2C_t)I}=\sum _{n=0}^{\infty }\frac{(t+2C_t)^n}{n!}I^n=e^{(t+2C_t)}I \end{aligned}$$

and

$$\begin{aligned} e^{(t^2-\frac{3}{2}{C_t}^2)J}&=\sum _{n=0}^{\infty }\frac{(t^2-\frac{3}{2}{C_t}^2)^n}{n!}J^n=\sum _{k=0}^{\infty }\frac{(t^2-\frac{3}{2}{C_t}^2)^{2k}}{(2k)!}I \\&\ \ \ \ +\sum _{k=0}^{\infty }\frac{(t^2-\frac{3}{2}{C_t}^2)^{2k+1}}{(2k+1)!}J\\&=\cosh \left( t^2-\frac{3}{2}{C_t}^2\right) I+\sinh (t^2-\frac{3}{2}{C_t}^2)J. \end{aligned}$$

Therefore, the fundamental matrix of the system is as follows.

$$\begin{aligned} e^{D(t)}&=e^{(t+2C_t)}\cosh \left( t^2-\frac{3}{2}{C_t}^2\right) I +e^{(t+2C_t)}\sinh \left( t^2-\frac{3}{2}{C_t}^2\right) J\\ {}&=\begin{pmatrix} e^{(t+2C_t)}\cosh \left( t^2-\frac{3}{2}{C_t}^2\right) & e^{(t+2C_t)}\sinh \left( t^2-\frac{3}{2}{C_t}^2\right) \\ e^{(t+2C_t)}\sinh \left( t^2-\frac{3}{2}{C_t}^2\right) & e^{(t+2C_t)}\cosh \left( t^2-\frac{3}{2}{C_t}^2\right) \end{pmatrix}. \end{aligned}$$
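This closed form can be sanity-checked numerically. In the sketch below, t = 0.3 and \(C_t\) = 0.2 are arbitrary illustrative values (once a sample path is fixed, \(C_t\) is just a number), and the truncated power series for \(e^{D(t)}\) is compared with the cosh/sinh expression of Example 1.

```python
# Check of the closed-form fundamental matrix in Example 1 at the assumed
# values t = 0.3, C_t = 0.2: compare the truncated series for e^{D(t)}
# with the cosh/sinh expression derived in the text.
import math

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(B, terms=30):
    term = [[1.0, 0.0], [0.0, 1.0]]          # B^0 = I
    result = [[1.0, 0.0], [0.0, 1.0]]
    for s in range(1, terms):
        term = [[v / s for v in row] for row in mat_mul(term, B)]
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t, c = 0.3, 0.2
a = t + 2 * c                 # diagonal entry of D(t)
b = t ** 2 - 1.5 * c ** 2     # off-diagonal entry of D(t)
D = [[a, b], [b, a]]

series = expm(D)
closed = [[math.exp(a) * math.cosh(b), math.exp(a) * math.sinh(b)],
          [math.exp(a) * math.sinh(b), math.exp(a) * math.cosh(b)]]
```

The agreement follows from \(D(t)=aI+bJ\) with \(IJ=JI\) and \(J^2=I\), exactly as in the derivation above.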

Remark 2

In order to calculate the fundamental matrix for \(t_0\ne 0\) and \(C_{t_0}\ne 0\), it suffices to replace t, \(t^2\), \(C_t\) and \({C_t}^2\) by \(t-t_0\), \(t^2-{t_0}^2\), \(C_t-C_{t_0}\) and \({C_t}^2-{C_{t_0}}^2\), respectively.

Conclusion

In this paper, we introduced the uncertain fundamental system and the uncertain fundamental matrix for the uncertain homogeneous linear system. The main contributions of this paper are the proof of the Liouville formula for the system and the calculation of its explicit solutions.