Linear differential equations
We begin our analysis with a look at the special case that the differential equation under consideration is linear. Much as in [4], the results for this special case will later allow us to discuss the general case in Sect. 4.2.
We therefore first consider equation (1.1) under the assumption that \(f(t,x)=a(t)x\) for all \(t\in J\) and \(x\in \mathbb {R}\), where \(a:J \rightarrow \mathbb {R}\) is continuous. We begin by formulating and proving a lower bound for the distance between two solutions.
Theorem 4
(Convergence rate for solutions of 1-dimensional FDEs) Under the conditions of Theorem 1 and the assumption
$$\begin{aligned} f(t,x)=a(t)x \end{aligned}$$
with a continuous function \(a : J \rightarrow \mathbb {R}\), for any \(t\in J\) the estimate
$$\begin{aligned} |x_2(t) - x_1(t)| \ge |x_2(0) - x_1(0)| \cdot E_\alpha \big (a_*(t) t^\alpha \big ) \end{aligned}$$
holds, where
$$\begin{aligned} a_*(t) := \min _{\tau \in [0,t]} a(\tau ). \end{aligned}$$
Proof
For definiteness we assume \(x_2(0) > x_1(0)\). Let \(u(t):=x_2(t)-x_1(t)\) for \(t\in J\). Then by Theorem 1(iii), we have \(u(t) > 0\) for any \(t \in J\). On the other hand, \(u(\cdot )\) is the unique solution to the system
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }u(t)&=a(t)u(t),\;t\in J\setminus \{0\}, \end{aligned}$$
(4.1a)
$$\begin{aligned} u(0)&=x_2(0)-x_1(0). \end{aligned}$$
(4.1b)
For an arbitrary but fixed \(t> 0\), we consider the problem
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }v(s)&= a_*(t) v(s), \; s \in (0,t], \end{aligned}$$
(4.2a)
$$\begin{aligned} v(0)&= x_2(0) - x_1(0). \end{aligned}$$
(4.2b)
From [4, Lemma 3.1] or [7, Theorem 7.2], we deduce that this problem has the unique solution \(v(s) = |x_2(0) - x_1(0)| \cdot E_\alpha (a_*(t) s^\alpha )\), \(s \in [0,t]\). Define \(h(s) := u(s) - v(s)\), \(s\in [0,t]\). It is easy to see that \(h\) is the unique solution of the system
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }h(s)&=a_*(t)h(s)+[a(s)-a_*(t)]u(s),\;s\in (0,t], \end{aligned}$$
(4.3a)
$$\begin{aligned} h(0)&=0. \end{aligned}$$
(4.3b)
Notice that for \(s\in [0,t]\)
$$\begin{aligned} h(s)=\int _0^s (s-\tau )^{\alpha -1}E_{\alpha ,\alpha }(a_*(t)(s-\tau )^\alpha )[a(\tau )-a_*(t)]u(\tau )d\tau , \end{aligned}$$
see also [4, Lemma 3.1]. Furthermore, \([a(s)-a_*(t)]u(s)\ge 0\) for all \(s\in [0,t]\), and the function \(E_{\alpha ,\alpha }\) is positive on the real line. Thus, \(h(s)\ge 0\) for all \(s\in [0,t]\). In particular, \(h(t)\ge 0\), i.e. \(u(t)\ge v(t) = |x_2(0)-x_1(0)| \cdot E_\alpha (a_*(t)t^\alpha )\). The proof is complete. \(\square \)
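Although the argument above is purely analytic, the lower bound of Theorem 4 is straightforward to evaluate numerically. The following sketch (our own illustration; the helper names and the sample coefficient \(a(t) = -1 + \frac{1}{2}\cos t\) are ours) approximates \(E_\alpha\) by its truncated defining series, which is adequate for moderate arguments only, and tabulates the bound:

```python
import math

def mittag_leffler(alpha, z, terms=120):
    """Truncated defining series E_alpha(z) = sum_{k>=0} z^k / Gamma(alpha*k + 1).
    Adequate for moderate |z|; large arguments require dedicated algorithms."""
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def a(t):                       # hypothetical continuous coefficient
    return -1.0 + 0.5 * math.cos(t)

def a_star(t, n=1000):          # a_*(t) = min over [0, t], approximated on a grid
    return min(a(j * t / n) for j in range(n + 1))

# Lower bound of Theorem 4 on |x2(t) - x1(t)| when |x2(0) - x1(0)| = 1:
alpha = 0.8
for t in (0.5, 1.0, 2.0):
    print(t, mittag_leffler(alpha, a_star(t) * t ** alpha))
```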
For the divergence rate and upper bounds for solutions, the following statement is an easy modification of a well-known result.
Theorem 5
(Divergence rate for solutions of 1-dimensional FDEs) Under the assumptions of Theorem 4, for any \(t \in J\) the estimate
$$\begin{aligned} |x_2(t)-x_1(t)| \le |x_2(0) - x_1(0)| \cdot E_\alpha (a^*(t) t^\alpha ) \end{aligned}$$
holds, where
$$\begin{aligned} a^*(t)=\max _{s\in [0,t]}a(s). \end{aligned}$$
Proof
For definiteness we once again assume \(x_2(0) > x_1(0)\) and let \(u(t):=x_2(t)-x_1(t)\) for \(t\in J\). As shown above, \(u(t)>0\) for any \(t\in J\), and \(u(\cdot )\) is the unique solution to the system given by eqs. (4.1a) and (4.1b). For an arbitrary but fixed \(t> 0\), this system can be rewritten on the interval \([0, t]\) as
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }u(s)&=a^*(t)u(s)+[a(s)-a^*(t)]u(s),\;s\in (0,t], \end{aligned}$$
(4.4a)
$$\begin{aligned} u(0)&=x_2(0)-x_1(0). \end{aligned}$$
(4.4b)
Thus, due to [4, Lemma 3.1] we obtain
$$\begin{aligned} u(s)&=|x_2(0)-x_1(0)| \cdot E_\alpha (a^*(t)s^\alpha )\\&\quad +\int _0^s (s-\tau )^{\alpha -1}E_{\alpha ,\alpha }(a^*(t)(s-\tau )^\alpha )[a(\tau )-a^*(t)]u(\tau )d\tau \end{aligned}$$
for \(s\in [0,t]\), which together with \([a(s)-a^*(t)]u(s)\le 0\) for all \(s\in [0,t]\) implies that
$$\begin{aligned} u(s)\le |x_2(0)-x_1(0)|E_\alpha (a^*(t)s^\alpha ) \end{aligned}$$
for \(s\in [0,t]\). In particular, \(u(t)\le |x_2(0)-x_1(0)| \cdot E_\alpha (a^*(t)t^\alpha )\). The theorem is proved. \(\square \)
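Theorems 4 and 5 together form a two-sided Mittag-Leffler envelope for the linear equation, and this is easy to check numerically. The sketch below (an illustration of ours, not part of the proofs) discretizes the Caputo derivative with the standard implicit L1 scheme and verifies that the computed solution of \({}^{C\!}D_{0+}^{\alpha }u = a(t)u\) stays between the two envelopes; the coefficient, order, and step size are sample choices:

```python
import math

def mittag_leffler(alpha, z, terms=120):
    # truncated defining series; adequate for the moderate arguments used here
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def solve_l1(a, u0, alpha, T, n):
    """Implicit L1 scheme for ^C D^alpha u(t) = a(t) u(t), u(0) = u0, on [0, T]."""
    h = T / n
    c = h ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(n)]
    u = [u0]
    for m in range(1, n + 1):
        # weighted history of past increments u_{k+1} - u_k
        S = sum(b[m - 1 - k] * (u[k + 1] - u[k]) for k in range(m - 1))
        u.append(c * (u[m - 1] - S) / (c - a(m * h)))
    return u

alpha, T, n = 0.8, 1.0, 400
a = lambda t: -1.0 + 0.5 * math.cos(t)
u = solve_l1(a, 1.0, alpha, T, n)
a_lo = min(a(j * T / n) for j in range(n + 1))   # a_*(T)
a_hi = max(a(j * T / n) for j in range(n + 1))   # a^*(T)
lower = mittag_leffler(alpha, a_lo * T ** alpha)
upper = mittag_leffler(alpha, a_hi * T ** alpha)
print(lower, u[-1], upper)   # the computed u(T) should lie between the bounds
```

For constant \(a\), the scheme should reproduce \(E_\alpha (a t^\alpha )\) up to the discretization error \(O(h^{2-\alpha })\), which gives a second, independent consistency check.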
As an immediate consequence of Theorem 5, we obtain a stability result for homogeneous linear equations with non-constant coefficients:
Corollary 1
Assume the hypotheses of Theorem 4 and let \(J = [0, \infty )\). If \(\sup _{t \ge 0} a(t) < 0\) then all solutions x to the equation (1.1) satisfy the property \(\lim _{t \rightarrow \infty } x(t) = 0\). In other words, the differential equation is asymptotically stable.
Proof
As the differential equation under consideration is linear and homogeneous, it is clear that \(\tilde{x} \equiv 0\) is one of its solutions. Moreover, we note that
$$\begin{aligned} A^* := \sup _{t \ge 0} a^*(t) = \sup _{t \ge 0} a(t). \end{aligned}$$
Thus, if x is any solution to the differential equation, it follows from Theorem 5 that
$$\begin{aligned} |x(t)|&= |x(t) - 0| \le | x(0) - 0| \cdot E_\alpha (a^*(t) t^\alpha ) = | x(0)| \cdot E_\alpha (a^*(t) t^\alpha ) \\&\le | x(0)| \cdot E_\alpha (A^* t^\alpha ) \end{aligned}$$
for all \(t \ge 0\), where in the last inequality we have used the well-known monotonicity of the Mittag-Leffler function \(E_\alpha \) [16, Proposition 3.10]. Since \(A^* < 0\) by assumption, Lemma 1 implies that the upper bound tends to 0 as \(t \rightarrow \infty \), and our claim follows. \(\square \)
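The decay mechanism behind Corollary 1 can be made concrete by tabulating the upper envelope \(|x(0)|\,E_\alpha (A^* t^\alpha )\) for a sample \(A^* < 0\); the values decrease monotonically toward 0, in line with Lemma 1. The truncated-series evaluation below is our own sketch and is reliable only for the moderate arguments shown:

```python
import math

def mittag_leffler(alpha, z, terms=200):
    # truncated series; loses accuracy for large negative arguments
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

alpha, A_star = 0.7, -1.0        # sample order and sample value of sup_t a(t) < 0
bound = [mittag_leffler(alpha, A_star * t ** alpha) for t in (1.0, 2.0, 4.0, 8.0)]
print(bound)                     # positive and monotonically decreasing toward 0
```

Note the algebraic (rather than exponential) decay for \(0< \alpha < 1\): asymptotically \(E_\alpha (-x) \sim 1/(x\,\Gamma (1-\alpha ))\) as \(x \rightarrow \infty \).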
Remark 2
Using the arguments developed in [4], we can see that the observations of Remark 1 hold here as well: Theorem 5 (and hence also Corollary 1) can be generalized to the multidimensional setting but Theorem 4 cannot.
Remark 3
The question addressed in Corollary 1 is closely related to the topic discussed (with completely different methods) in [5].
Nonlinear differential equations
Now consider equation (1.1) with f assumed to be continuous on \(J\times \mathbb {R}\) and to satisfy the condition (2.1), so we are in the situation discussed in Theorem 1. Further, we assume temporarily that \(f(t,0) = 0\) for any \(t\in J\). For each \(t \in J\), we define
$$\begin{aligned} a_*(t) := \inf _{s\in [0,t],\;x\in \mathbb {R}\setminus \{0\}}\frac{f(s,x)}{x} \quad \text { and } \quad a^*(t) := \sup _{s\in [0,t], \; x \in \mathbb {R}\setminus \{ 0 \}} \frac{f(s,x)}{x}. \end{aligned}$$
(4.5)
Note that, if the differential equation is linear, i.e. if \(f(t,x) = a(t) x\), then these definitions of \(a_*\) and \(a^*\) coincide with the conventions introduced in Theorems 4 and 5, respectively.
We first state an auxiliary result which asserts that these definitions make sense, i.e. that the infimum and the supremum in (4.5) exist.
Lemma 2
Let f satisfy the assumptions mentioned in Theorem 1, and assume furthermore that \(f(t, 0) = 0\) for all \(t \in J\). Then, the definitions of the functions \(a_*\) and \(a^*\) given in (4.5) are meaningful for all \(t \in J\), and the functions \(a_*\) and \(a^*\) are bounded on this interval.
Proof
By definition, we obtain—in view of the property \(f(t, 0) = 0\) and the Lipschitz condition (2.1)—the estimate
$$\begin{aligned} -L(t)\le \frac{f(t,x)}{x}\le L(t) \end{aligned}$$
for any \(x\in \mathbb {R}\setminus \{0\}\) and \(t\in J\). Thus, for any given time \(t\in J\),
$$\begin{aligned} -\max _{s\in [0,t]}L(s) =\inf _{s\in [0,t]} \left( -L(s) \right) \le \frac{f(s,x)}{x} \quad \forall s\in [0,t],\;x\ne 0. \end{aligned}$$
This implies that
$$\begin{aligned} -\max _{s\in [0,t]}L(s)\le \inf _{s\in [0,t],\;x\ne 0}\frac{f(s,x)}{x}=a_*(t). \end{aligned}$$
(4.6)
On the other hand, we also see that
$$\begin{aligned} a^*(t)\le \max _{s\in [0,t]}L(s) \end{aligned}$$
for any \(t\in J\). This together with (4.6) implies that
$$\begin{aligned} -\max _{s\in [0,t]}L(s)\le a_*(t)\le a^*(t)\le \max _{s\in [0,t]}L(s) \end{aligned}$$
for any \(t\in J\). The lemma is proved. \(\square \)
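For a concrete nonlinear right-hand side, the quantities in (4.5) can rarely be computed in closed form, but they are easily approximated by sampling the quotient \(f(s,x)/x\) on a grid. The sketch below uses the hypothetical example \(f(t,x) = -x + \frac{1}{2}\sin x\), which satisfies (2.1) with \(L(t) \equiv \frac{3}{2}\) and \(f(t,0)=0\); its quotient \(-1 + \frac{1}{2}\sin (x)/x\) does not depend on \(s\) and is even in \(x\), so the grid runs over positive \(x\) only:

```python
import math

def f(t, x):                     # hypothetical nonlinear right-hand side, f(t, 0) = 0
    return -x + 0.5 * math.sin(x)

# Sample f(s, x)/x on a grid over x in (0, 30]; the quotient is even in x and
# independent of s for this example.
xs = [0.001 * k for k in range(1, 30001)]
quotients = [f(0.0, x) / x for x in xs]
a_star = min(quotients)          # approximates a_*(t) of (4.5)
a_sup = max(quotients)           # approximates a^*(t) of (4.5)
print(a_star, a_sup)
```

Here \(a_* \approx -1.1086\) (attained where \(\sin (x)/x\) is minimal, near \(x \approx 4.49\)) and \(a^* = -\frac{1}{2}\) (the limit as \(x \rightarrow 0\)), so the Lemma 2 bounds \(-\frac{3}{2} \le a_* \le a^* \le \frac{3}{2}\) hold, and \(a^* < 0\) anticipates the stability criterion of Corollary 3 below.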
Theorem 6
Under the assumptions of Lemma 2, we have:
(i)
For \(x_0>0\), the solution \(\varphi (\cdot ,x_0)\) of Eq. (1.1) with the condition \(x(0)=x_0\) satisfies
$$\begin{aligned} x_0 E_\alpha (a_*(t)t^\alpha )\le \varphi (t,x_0 )\le x_0 E_\alpha (a^*(t)t^\alpha ). \end{aligned}$$
(ii)
For \(x_0<0\), the solution \(\varphi (\cdot ,x_0)\) of Eq. (1.1) with the condition \(x(0)=x_0\) satisfies
$$\begin{aligned} x_0 E_\alpha (a^*(t)t^\alpha ) \le \varphi (t,x_0 )\le x_0 E_\alpha (a_*(t)t^\alpha ). \end{aligned}$$
Proof
We prove only statement (i); statement (ii) is proved similarly. Let \(x_0>0\). By Theorem 1(iii) and the fact that \(f(t,0)=0\) for all \(t\in J\), the solution \(\varphi (\cdot ,x_0)\) is positive on J. For an arbitrary but fixed \(t> 0\), we have on the interval \([0, t]\)
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }\varphi (s,x_0) = a_*(t)\varphi (s,x_0)+\Big (-a_*(t)+\frac{f(s,\varphi (s,x_0))}{\varphi (s,x_0)}\Big )\varphi (s,x_0). \end{aligned}$$
Since \(\varphi (s,x_0)>0\) and \(f(s,\varphi (s,x_0))/\varphi (s,x_0)\ge a_*(t)\) for \(s\in [0,t]\), the second summand on the right-hand side is nonnegative, so arguing as in the proof of Theorem 4 we obtain
$$\begin{aligned} \varphi (s,x_0)\ge x_0 E_\alpha (a_*(t)s^\alpha ),\; s\in [0,t]. \end{aligned}$$
In particular, \(\varphi (t,x_0)\ge x_0 E_\alpha (a_*(t)t^\alpha )\). On the other hand, \(\varphi (\cdot ,x_0)\) is also the unique solution of the equation
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha }\varphi (s,x_0) = a^*(t)\varphi (s,x_0)+\Big (-a^*(t)+\frac{f(s,\varphi (s,x_0))}{\varphi (s,x_0)}\Big )\varphi (s,x_0),\;s\in [0,t]. \end{aligned}$$
Since the second summand on the right-hand side is now nonpositive, the argument from the proof of Theorem 5 yields
$$\begin{aligned} \varphi (s,x_0)\le x_0 E_\alpha (a^*(t)s^\alpha ),\; s\in [0,t] \end{aligned}$$
and \(\varphi (t,x_0)\le x_0 E_\alpha (a^*(t)t^\alpha )\). The proof is complete. \(\square \)
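The sandwich of Theorem 6 can also be observed numerically. The sketch below (our own illustration) discretizes the Caputo derivative with the L1 scheme, solves the implicit step by fixed-point iteration, and compares the result with the two Mittag-Leffler envelopes for the hypothetical nonlinearity \(f(t,x) = -x + \frac{1}{2}\sin x\), for which \(\inf f(s,x)/x \approx -1.1086\) and \(\sup f(s,x)/x = -\frac{1}{2}\):

```python
import math

def mittag_leffler(alpha, z, terms=120):
    # truncated defining series; adequate for the moderate arguments used here
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def f(t, x):                                  # hypothetical nonlinearity, f(t, 0) = 0
    return -x + 0.5 * math.sin(x)

def solve_l1_nonlinear(f, x0, alpha, T, n, sweeps=8):
    """L1 scheme for ^C D^alpha x = f(t, x); the implicit step is solved by
    fixed-point iteration (contractive since Lip(f)/c is small here)."""
    h = T / n
    c = h ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(n)]
    x = [x0]
    for m in range(1, n + 1):
        S = sum(b[m - 1 - k] * (x[k + 1] - x[k]) for k in range(m - 1))
        xm = x[m - 1]
        for _ in range(sweeps):               # solve c*(xm - x_{m-1} + S) = f(t_m, xm)
            xm = x[m - 1] - S + f(m * h, xm) / c
        x.append(xm)
    return x

alpha, T, x0 = 0.7, 2.0, 2.0
x = solve_l1_nonlinear(f, x0, alpha, T, 400)
a_lo, a_hi = -1.108617, -0.5                  # inf and sup of f(s, y)/y for this f
lower = x0 * mittag_leffler(alpha, a_lo * T ** alpha)
upper = x0 * mittag_leffler(alpha, a_hi * T ** alpha)
print(lower, x[-1], upper)    # Theorem 6(i): x(T) should lie between the envelopes
```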
Theorem 6 has an immediate consequence:
Corollary 2
Assume the hypotheses of Theorem 1, and furthermore let \(f(t,0) = 0\) for all \(t \in J\).
(i)
For \(0< x_{10} < x_{20}\), we have for all \(t \in J\) that
$$\begin{aligned}&x_{20} E_\alpha (a_*(t) t^\alpha ) - x_{10} E_\alpha (a^*(t) t^\alpha ) \\&\quad \le x_2(t) - x_1(t) \\&\quad \le x_{20} E_\alpha (a^*(t) t^\alpha ) - x_{10} E_\alpha (a_*(t) t^\alpha ). \end{aligned}$$
(ii)
For \(x_{10}< 0 < x_{20}\), we have for all \(t \in J\) that
$$\begin{aligned} (x_{20} - x_{10}) E_\alpha (a_*(t) t^\alpha ) \le x_2(t) - x_1(t) \le (x_{20} - x_{10}) E_\alpha (a^*(t) t^\alpha ). \end{aligned}$$
(iii)
For \(x_{10}< x_{20} < 0\), we have for all \(t \in J\) that
$$\begin{aligned}&x_{20} E_\alpha (a^*(t) t^\alpha ) - x_{10} E_\alpha (a_*(t) t^\alpha ) \\&\quad \le x_2(t) - x_1(t) \\&\quad \le x_{20} E_\alpha (a_*(t) t^\alpha ) - x_{10} E_\alpha (a^*(t) t^\alpha ). \end{aligned}$$
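Case (ii) is the simplest to probe numerically, since its bounds involve only the difference of the initial values. The sketch below (illustrative choices of ours, with the hypothetical nonlinearity \(f(t,x) = -x + \frac{1}{2}\sin x\), for which \(a_* \approx -1.1086\) and \(a^* = -\frac{1}{2}\)) integrates two solutions with \(x_{10}< 0 < x_{20}\) via an L1 discretization of the Caputo derivative and checks the sandwich on their difference:

```python
import math

def mittag_leffler(alpha, z, terms=120):
    # truncated defining series; adequate for the moderate arguments used here
    return sum(z ** k / math.gamma(alpha * k + 1) for k in range(terms))

def f(t, x):                                  # hypothetical nonlinearity, f(t, 0) = 0
    return -x + 0.5 * math.sin(x)

def solve_l1_nonlinear(f, x0, alpha, T, n, sweeps=8):
    # L1 discretization of the Caputo derivative; implicit step via fixed point
    h = T / n
    c = h ** (-alpha) / math.gamma(2.0 - alpha)
    b = [(j + 1) ** (1.0 - alpha) - j ** (1.0 - alpha) for j in range(n)]
    x = [x0]
    for m in range(1, n + 1):
        S = sum(b[m - 1 - k] * (x[k + 1] - x[k]) for k in range(m - 1))
        xm = x[m - 1]
        for _ in range(sweeps):
            xm = x[m - 1] - S + f(m * h, xm) / c
        x.append(xm)
    return x

alpha, T, n = 0.7, 2.0, 400
x10, x20 = -1.0, 2.0                          # case (ii): x10 < 0 < x20
x1 = solve_l1_nonlinear(f, x10, alpha, T, n)
x2 = solve_l1_nonlinear(f, x20, alpha, T, n)
diff = x2[-1] - x1[-1]
a_lo, a_hi = -1.108617, -0.5                  # inf and sup of f(s, y)/y for this f
lower = (x20 - x10) * mittag_leffler(alpha, a_lo * T ** alpha)
upper = (x20 - x10) * mittag_leffler(alpha, a_hi * T ** alpha)
print(lower, diff, upper)
```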
From this result, we can also deduce an analog of Corollary 1, i.e. a sufficient criterion for asymptotic stability, for the nonlinear case.
Corollary 3
Assume the hypotheses of Theorem 1, and furthermore let \(J = [0, \infty )\) and \(f(t,0) = 0\) for all \(t \in J\). Moreover, let \(\sup _{t \ge 0} a^*(t) < 0\). Then, all solutions x of the differential equation (1.1) satisfy \(\lim _{t \rightarrow \infty } x(t) = 0\).
The proof is an immediate generalization of the proof of Corollary 1. We omit the details.
We now drop the requirement that \(f(t, 0) = 0\). To this end, we essentially follow the standard procedure in the analysis of stability properties of differential equations; cf., e.g., [7, Remark 7.4].
Theorem 7
Assume the hypotheses of Theorem 1, and let \(x_{10} < x_{20}\). Then, for any \(t \in J\) we have
$$\begin{aligned} (x_2(0) - x_1(0)) E_\alpha (\tilde{a}_*(t) t^\alpha ) \le x_2(t) - x_1(t) \le (x_2(0) - x_1(0)) E_\alpha (\tilde{a}^*(t) t^\alpha ) \end{aligned}$$
where
$$\begin{aligned} \tilde{a}_*(t) = \inf _{s \in [0, t], \, x \ne 0} \frac{f(s , x + x_1(s)) - f(s, x_1(s))}{x} \end{aligned}$$
(4.7a)
and
$$\begin{aligned} \tilde{a}^*(t) = \sup _{s \in [0, t], \, x \ne 0} \frac{f(s , x + x_1(s)) - f(s, x_1(s))}{x}. \end{aligned}$$
(4.7b)
Proof
First, we note that, in view of Theorem 1(iii), we have \(x_1(t) < x_2(t)\) for all \(t \in J\). Then we define the function
$$\begin{aligned} \tilde{f}(t, x) = f(t, x + x_1(t)) - f(t, x_1(t)) \end{aligned}$$
and notice that
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha } (x_2 - x_1)(t) = {}^{C\!}D_{0+}^{\alpha } x_2(t) - {}^{C\!}D_{0+}^{\alpha } x_1 (t) = f(t, x_2(t)) - f(t, x_1(t)), \end{aligned}$$
so that the function \(\tilde{x} := x_2 - x_1\) satisfies the differential equation
$$\begin{aligned} ^{C\!}D_{0+}^{\alpha } \tilde{x}(t) = \tilde{f}(t, \tilde{x}(t)), \end{aligned}$$
and the initial condition \(\tilde{x}(0) = x_2(0) - x_1(0) > 0\). Moreover, \(\tilde{f}(t,0) = 0\) for all \(t\), and \(\tilde{f}\) satisfies the Lipschitz condition (2.1) with the same Lipschitz bound \(L(t)\) as \(f\) itself. This implies that the quantities \(\tilde{a}_*(t)\) and \(\tilde{a}^*(t)\) exist and are finite. Hence we may apply Theorem 6(i) to the function \(\tilde{x}\), which yields the claim. \(\square \)
Note that Theorem 7 is the only result in Sect. 4 whose application in practice requires the knowledge of an exact solution to the given differential equation. All other results are solely based on information about the given function f on the right-hand side of the differential equation.