Abstract
Given a fractional differential equation of order \(\alpha \in (0,1]\) with Caputo derivatives, we investigate in a quantitative sense how the associated solutions depend on their respective initial conditions. Specifically, we look at two solutions \(x_1\) and \(x_2\), say, of the same differential equation, both of which are assumed to be defined on a common interval [0, T], and provide upper and lower bounds for the difference \(x_1(t) - x_2(t)\) for all \(t \in [0,T]\) that are stronger than the bounds previously described in the literature.
1 Introduction and motivation
1.1 Statement of the problem
Initial value problems for fractional differential equations with Caputo derivatives have proven to be important tools for the mathematical modeling of various phenomena in science and engineering, see, e.g., [1, 2, 3, 18, 20, 21, 22, 23]. In order to fully understand the behaviour of such models, it is of interest to precisely describe how their solutions depend on the initial values. In particular, we shall here look at the following question:
Given two solutions \(x_1\) and \(x_2\) to the fractional differential equation
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = f(t, x(t)), \end{aligned}$$(1.1)where \(^{C}D_{0+}^{\alpha }\) denotes the Caputo type differential operator of order \(\alpha \in (0,1]\) with starting point 0 [7, Sect. 3], associated to the initial conditions \(x_1(0) = x_{01}\) and \(x_2(0) = x_{02}\), respectively, what can be said about the difference \(x_1(t) - x_2(t)\) for all t for which both solutions exist?
Our aim is to provide both upper and lower bounds for the difference. A review of the literature (see Sect. 3 below) reveals that such bounds exist in principle, but that in many cases they tend to be too far away from each other to be of practical use. In other words, one usually observes that at least one of the two bounds is very weak. (A concrete example for such a situation is given in Sect. 5.) Therefore, we shall derive tighter inclusions here.
1.2 Motivation
From a purely mathematical point of view, such estimates are relevant in their own right as they allow us to draw interesting conclusions about the behaviour of the solution to the differential equation (1.1).
In addition, our interest in this question is especially motivated by an application in the numerical analysis of fractional differential equations that strongly benefits from tight inclusions. Specifically, one is sometimes interested in terminal value problems, i.e. problems of the form
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = f(t, x(t)), \quad x(T) = x^*, \end{aligned}$$(1.2)with some \(T > 0\), and seeks the solution to (1.2) on the interval [0, T], cf., e.g., [7, pp. 107ff.] or [10, 13]. For the numerical solution of such problems, one may apply a so-called shooting method [8, 9, 11, 13, 14], i.e. one starts with a first guess \(x_{0,1}\) for x(0), (numerically) solves the initial value problem consisting of the differential equation given in (1.2) and the initial condition \(x(0) = x_{0,1}\), and in this way obtains a first approximate solution \(x^*_1\) for \(x(T) = x^*\). One then compares this approximation \(x^*_1\) with the exact value \(x^*\), replaces the guess \(x_{0,1}\) for the initial value by a new and improved value \(x_{0,2}\) and repeats the process. In order to determine a suitable choice for \(x_{0,2}\), it is useful to have estimates of the form indicated above because they describe a connection between \(x_{0,2} - x_{0,1}\) on the one hand and \(x^* - x^*_1\) on the other hand, thus telling us which range the new value \(x_{0,2}\) needs to come from in order for the corresponding initial value problem to have a solution that “hits” the required terminal value as accurately as possible.
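The shooting loop just described can be sketched in a few lines of Python. Everything below is our own illustration: the solver is a plain fractional rectangle (explicit Euler) rule rather than one of the schemes cited above, and the test equation \(f(t,x) = -x\), the bisection bracket and the step counts are assumptions made purely for the example.

```python
from math import gamma

def solve_caputo_ivp(f, alpha, x0, T, n):
    """Explicit fractional rectangle (Euler) rule for the Caputo problem
    D^alpha x(t) = f(t, x(t)), x(0) = x0, on [0, T] with n steps."""
    h = T / n
    c = h**alpha / gamma(alpha + 1.0)
    xs, fs = [x0], []
    for k in range(1, n + 1):
        fs.append(f((k - 1) * h, xs[-1]))
        # weights (k-j)^alpha - (k-j-1)^alpha come from the Volterra integral form
        conv = sum(((k - j)**alpha - (k - j - 1)**alpha) * fs[j] for j in range(k))
        xs.append(x0 + c * conv)
    return xs

def shoot(f, alpha, target, T, lo, hi, n=200, iters=60):
    """Bisection on the initial value x(0); the monotone dependence of x(T)
    on x(0) (Theorem 1(iii)) makes this work once [lo, hi] brackets x(0)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if solve_caputo_ivp(f, alpha, mid, T, n)[-1] < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# recover x(0) = 1 for the test equation f(t, x) = -x from its terminal value
f = lambda t, x: -x
target = solve_caputo_ivp(f, 0.8, 1.0, 1.0, 200)[-1]
recovered = shoot(f, 0.8, target, 1.0, 0.0, 2.0)
```

Because x(T) depends monotonically on x(0) by Theorem 1(iii), the bracket only needs to enclose the true initial value; the estimates derived below tell us how wide that bracket must be chosen.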
2 Preliminaries
Throughout this paper, we shall use the following conventions.
Let \(\alpha \in (0,1]\), \(b > 0\), and let \(x:[0,b]\rightarrow \mathbb {R}\) be a measurable function such that \(\int _0^b|x(\tau )|\;d\tau <\infty \). The Riemann–Liouville integral operator of order \(\alpha \) is defined by
$$\begin{aligned} I_{0+}^{\alpha } x(t) = \frac{1}{\varGamma (\alpha )} \int _0^t (t - \tau )^{\alpha - 1} x(\tau )\, d\tau \end{aligned}$$
for \(t \in [0, b]\), where \(\varGamma (\cdot )\) is the Gamma function. The Riemann–Liouville fractional derivative \(^{RL\!}D_{0+}^\alpha x\) of x on [0, b] is defined by
$$\begin{aligned} {}^{RL\!}D_{0+}^{\alpha } x(t) = D I_{0+}^{1-\alpha } x(t), \end{aligned}$$
where \(D=\frac{d}{dt}\) is the usual derivative. The Caputo fractional derivative of x on [0, b] is defined by
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = {}^{RL\!}D_{0+}^{\alpha } \left[ x(\cdot ) - x(0) \right] (t). \end{aligned}$$
Finally, by \(E_\alpha \) we denote the standard one-parameter Mittag-Leffler function, viz.
$$\begin{aligned} E_\alpha (z) = \sum _{k=0}^{\infty } \frac{z^k}{\varGamma (\alpha k + 1)}. \end{aligned}$$
We can cite from [16, Proposition 3.5] the following asymptotic result that we shall use later:
Lemma 1
Let \(\lambda \in \mathbb {R}\setminus \{ 0 \}\). For \(t \rightarrow \infty \) we have
$$\begin{aligned} E_\alpha (\lambda t^\alpha ) = {\left\{ \begin{array}{ll} \frac{1}{\alpha } \exp (\lambda ^{1/\alpha } t) + O(t^{-\alpha }) &{} \text{ if } \lambda > 0, \\ - \frac{t^{-\alpha }}{\lambda \varGamma (1-\alpha )} + O(t^{-2\alpha }) &{} \text{ if } \lambda < 0, \\ \end{array}\right. } \end{aligned}$$
so \(E_\alpha (\lambda t^\alpha )\) grows exponentially towards \(\infty \) if \(\lambda > 0\) and decays algebraically towards 0 if \(\lambda < 0\).
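This dichotomy is easy to observe numerically. The truncated-series evaluation of \(E_\alpha\) below is our own minimal sketch; it is adequate only for moderate arguments and is not a production-quality Mittag-Leffler routine.

```python
from math import gamma, e

def ml(alpha, z, tol=1e-14, kmax=300):
    """E_alpha(z) via the truncated power series sum z^k / Gamma(alpha*k + 1).
    Reliable only for moderate |z|; large arguments need dedicated algorithms."""
    s = 0.0
    for k in range(kmax):
        try:
            term = z**k / gamma(alpha * k + 1.0)
        except OverflowError:
            break  # remaining terms are negligible once Gamma overflows
        s += term
        if k > 1 and abs(term) < tol:
            break
    return s

alpha = 0.65
# sanity check: E_1 is the exponential function
assert abs(ml(1.0, 1.0) - e) < 1e-12
# lambda > 0: exponential-type growth in t
assert ml(alpha, 2.0 * 3.0**alpha) > 10 * ml(alpha, 2.0 * 1.0**alpha)
# lambda < 0: slow algebraic decay towards 0, but always positive
assert 0 < ml(alpha, -2.0 * 3.0**alpha) < ml(alpha, -2.0 * 1.0**alpha)
```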
Let now \(J = [0,T]\) with some real number \(T > 0\) or \(J=[0,\infty )\). As indicated above, we consider the equation (1.1) in this note. In particular, we shall only discuss the case that \(f:J \times \mathbb {R}\rightarrow \mathbb {R}\) is a continuous function. Moreover, we shall generally assume that f satisfies the following Lipschitz condition with respect to the second variable: there exists a nonnegative continuous function \(L: J \rightarrow \mathbb {R}_+\) such that
$$\begin{aligned} |f(t,x) - f(t,y)| \le L(t) |x - y| \quad \text{ for } \text{ all } t \in J \text{ and } \text{ all } x, y \in \mathbb {R}. \end{aligned}$$(2.1)
The following fundamental results are then known (see [4, 7, 12]):
Theorem 1
Assume that the function f is continuous and satisfies the Lipschitz condition (2.1). Moreover, let \(x_{10}\) and \(x_{20}\) be two arbitrary real numbers with \(x_{10} \not = x_{20}\) and consider the two initial value problems
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = f(t, x(t)), \quad x(0) = x_{10}, \end{aligned}$$(2.2a)and
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = f(t, x(t)), \quad x(0) = x_{20}, \end{aligned}$$(2.2b)
respectively. Then we have:

(i)
For each of the initial value problems, there exists a unique continuous function that solves the problem on the entire interval J.

(ii)
The trajectories of the two solutions do not meet on J, i.e., the solutions \(x_1(\cdot )\) and \(x_2(\cdot )\) of (2.2a) and (2.2b), respectively, satisfy \(x_1(t) \not = x_2(t)\) for all \(t \in J\).

(iii)
In particular, if \(x_{10} < x_{20}\), then \(x_1(t) < x_2(t)\) for all \(t \in J\).
Note that the two initial value problems (2.2a) and (2.2b) differ only in their initial conditions but contain the same differential equation (which is also the same as the differential equation given in (1.1)).
Proof
Part (i) immediately follows from [7, Theorem 6.5] (see also [12, Theorem 2.3] or [24]) for the case of a finite interval; the extension to the case \(J = [0, \infty )\) is immediate, cf. [12, Corollary 2.4]. Part (ii) has been shown in [4, Theorem 3.5], and part (iii) is a direct consequence of (ii) in connection with the continuity of \(x_1\) and \(x_2\) that has been established in part (i). \(\square \)
3 Existing results
In connection with the task that we have set, some results have already been derived. We shall recollect them here in order to demonstrate why it is necessary to find more accurate bounds. To this end, it is useful to introduce the notation
$$\begin{aligned} L^*(t) = \max _{0 \le s \le t} L(s) \end{aligned}$$
for \(t \in [0, T]\), with L being the function from the Lipschitz condition (2.1). Using this terminology, we can state the following estimates that are, to the best of our knowledge, the best currently known bounds for the difference \(x_1(t) - x_2(t)\) under the assumptions of Theorem 1.
Theorem 2
Under the assumptions of Theorem 1, the solutions \(x_1\) and \(x_2\) of the two initial value problems (2.2a) and (2.2b), respectively, satisfy the inequality
$$\begin{aligned} |x_1(t) - x_2(t)| \ge |x_{10} - x_{20}| \, E_\alpha (-L^*(t) t^\alpha ) \end{aligned}$$
for all \(t \in J\).
Theorem 3
Under the assumptions of Theorem 1, the solutions \(x_1\) and \(x_2\) of the two initial value problems (2.2a) and (2.2b), respectively, satisfy the inequality
$$\begin{aligned} |x_1(t) - x_2(t)| \le |x_{10} - x_{20}| \, E_\alpha (L^*(t) t^\alpha ) \end{aligned}$$
for all \(t \in J\).
Theorem 2 is given in [4, Theorem 4.1]. Theorem 3 has been shown in [4, Theorem 4.3]; slightly weaker forms can be found in [7, Theorem 6.20] or [24, Theorem 4.10].
Remark 1
It should be noted that, as pointed out in [4, Sect. 6], there is a significant difference between Theorems 2 and 3 in the sense that Theorem 3 also holds in the vector valued case, i.e. in the case where \(f: J \times \mathbb {R}^d \rightarrow \mathbb {R}^d\) with some \(d > 1\), whereas Theorem 2 only holds in the scalar setting.
To demonstrate the shortcomings of the estimates provided by Theorems 2 and 3, it suffices to look at the very simple example of the homogeneous linear differential equation with constant coefficients
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } x(t) = \lambda x(t) \end{aligned}$$
with some real constant \(\lambda \), i.e. at the case \(f(t, x) = \lambda x\). Clearly, we may choose \(J = [0, \infty )\) here. In this case we can observe the following facts about the initial value problems considered in the theorems:

1.
The function L is simply given by \(L(t) = |\lambda |\); thus, \(L^*(t) = |\lambda |\) too.

2.
The exact solutions to the initial value problems have the form \(x_k(t) = x_{k0} E_\alpha (\lambda t^\alpha )\) (\(k = 1, 2\)). Hence,
$$\begin{aligned} |x_1(t) - x_2(t)|&= |x_{10} - x_{20}| \cdot E_\alpha (\lambda t^\alpha ) \\&= |x_{10} - x_{20}| \times {\left\{ \begin{array}{ll} 1 &{} \text{ if } \lambda = 0, \\ E_\alpha (L^*(t) t^\alpha ) &{} \text{ if } \lambda > 0, \\ E_\alpha (-L^*(t) t^\alpha ) &{} \text{ if } \lambda < 0. \\ \end{array}\right. } \end{aligned}$$ 
3.
If \(\lambda = 0\) then the upper bound from Theorem 3 coincides with the lower bound from Theorem 2, and hence both estimates are sharp.

4.
If \(\lambda < 0\), the estimate of Theorem 2 is sharp but, in view of Lemma 1, Theorem 3 massively overestimates the difference for large t.

5.
If \(\lambda > 0\), the estimate of Theorem 3 is sharp but, in view of Lemma 1, Theorem 2 massively underestimates the difference for large t.
This means that we always have an upper bound and a lower bound for \(x_1(t) - x_2(t)\), but in all cases except for the trivial case \(\lambda = 0\), at least one of these bounds is likely to be far away from the correct value. Based on this fact, our goal now is to improve those bounds in the sense that we want to obtain a narrower inclusion, i.e. an upper bound and a lower bound that are closer together. Section 5 below will contain a concrete example that demonstrates a case where the inclusion based on our new estimates is much tighter than the one based on Theorems 2 and 3.
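For the test equation these observations can be quantified directly: with \(\lambda = -1\) the lower bound of Theorem 2 reproduces the exact difference, while the upper bound of Theorem 3 overshoots by orders of magnitude. The sketch below is our own; it relies on a truncated-series evaluation of \(E_\alpha\) that we assume to be adequate for these moderate arguments.

```python
from math import gamma

def ml(alpha, z, tol=1e-14, kmax=300):
    """E_alpha(z) via truncated power series; fine for moderate |z| only."""
    s = 0.0
    for k in range(kmax):
        try:
            term = z**k / gamma(alpha * k + 1.0)
        except OverflowError:
            break
        s += term
        if k > 1 and abs(term) < tol:
            break
    return s

alpha, lam, d = 0.65, -1.0, 1.0      # d plays the role of |x_{10} - x_{20}|
Lstar = abs(lam)                     # L(t) = |lambda|, hence L*(t) = |lambda|
for t in (1.0, 3.0, 6.0):
    exact = d * ml(alpha, lam * t**alpha)        # item 2 of the list above
    lower = d * ml(alpha, -Lstar * t**alpha)     # Theorem 2
    upper = d * ml(alpha, Lstar * t**alpha)      # Theorem 3
    assert lower <= exact <= upper
    assert abs(lower - exact) < 1e-10            # lower bound is sharp for lambda < 0

# the upper bound overshoots by orders of magnitude for large t
overshoot = ml(alpha, 6.0**alpha) / ml(alpha, -6.0**alpha)
```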
4 New and tighter bounds for the difference between solutions
4.1 Linear differential equations
We begin our analysis with a look at the special case that the differential equation under consideration is linear. Much as in [4], the results for this special case will later allow us to discuss the general case in Sect. 4.2.
Therefore, first consider the equation (1.1) under the assumption that \(f(t,x)=a(t)x\) for any \(t\in J\) and \(x\in \mathbb {R}\), where \(a:J \rightarrow \mathbb {R}\) is continuous. First we formulate and prove a lower bound for the distance between two solutions.
Theorem 4
(Convergence rate for solutions of one-dimensional FDEs) Under the conditions of Theorem 1 and the assumption
$$\begin{aligned} f(t,x) = a(t) x \quad \text{ for } \text{ all } t \in J \text{ and } x \in \mathbb {R} \end{aligned}$$
with a continuous function \(a : J \rightarrow \mathbb {R}\), for any \(t\in J\) the estimate
$$\begin{aligned} |x_2(t) - x_1(t)| \ge |x_2(0) - x_1(0)| \, E_\alpha (a_*(t) t^\alpha ) \end{aligned}$$
holds, where
$$\begin{aligned} a_*(t) = \min _{0 \le s \le t} a(s). \end{aligned}$$
Proof
For definiteness we assume \(x_2(0) > x_1(0)\). Let \(u(t):=x_2(t)-x_1(t)\) for \(t\in J\). Then by Theorem 1(iii), we have \(u(t) > 0\) for any \(t \in J\). On the other hand, \(u(\cdot )\) is the unique solution to the system
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } u(t)&= a(t) u(t), \end{aligned}$$(4.1a)$$\begin{aligned} u(0)&= x_2(0) - x_1(0). \end{aligned}$$(4.1b)
For an arbitrary but fixed \(t> 0\), we consider the problem
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } v(s) = a_*(t) v(s), \quad v(0) = x_2(0) - x_1(0), \quad s \in [0, t]. \end{aligned}$$
From [4, Lemma 3.1] or [7, Theorem 7.2], we deduce that this problem has the unique solution \(v(s) = (x_2(0) - x_1(0)) E_\alpha (a_*(t) s^\alpha )\), \(s \in [0,t]\). Define \(h(s) := u(s) - v(s)\), \(s\in [0,t]\). It is easy to see that h is the unique solution of the system
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } h(s) = a_*(t) h(s) + [a(s) - a_*(t)] u(s), \quad h(0) = 0. \end{aligned}$$
Notice that for \(s\in [0,t]\)
see also [4, Lemma 3.1]. Furthermore, \([a(s)-a_*(t)]u(s)\ge 0\) for all \(s\in [0,t]\). Thus, \(h(s)\ge 0\) for all \(s\in [0,t]\). In particular, \(h(t)\ge 0\), i.e. \(u(t)\ge v(t) = (x_2(0)-x_1(0)) E_\alpha (a_*(t)t^\alpha )\). The proof is complete. \(\square \)
For the divergence rate and upper bounds for solutions, the following statement is an easy modification of a well-known result.
Theorem 5
(Divergence rate for solutions of one-dimensional FDEs) Under the assumptions of Theorem 4, for any \(t \in J\) the estimate
$$\begin{aligned} |x_2(t) - x_1(t)| \le |x_2(0) - x_1(0)| \, E_\alpha (a^*(t) t^\alpha ) \end{aligned}$$
holds, where
$$\begin{aligned} a^*(t) = \max _{0 \le s \le t} a(s). \end{aligned}$$
Proof
For definiteness we once again assume \(x_2(0) > x_1(0)\) and let \(u(t):=x_2(t)-x_1(t)\) for \(t\in J\). As shown above, \(u(t)>0\) for any \(t\in J\), and \(u(\cdot )\) is the unique solution to the system given by eqs. (4.1a) and (4.1b). For an arbitrary but fixed \(t> 0\), this system on the interval [0, t] is rewritten as
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } u(s) = a^*(t) u(s) + [a(s) - a^*(t)] u(s), \quad u(0) = x_2(0) - x_1(0). \end{aligned}$$
Thus, due to [4, Lemma 3.1] we obtain
for \(s\in [0,t]\), which together with \([a(s)-a^*(t)]u(s)\le 0\) for all \(s\in [0,t]\) implies that
$$\begin{aligned} u(s) \le (x_2(0) - x_1(0)) E_\alpha (a^*(t) s^\alpha ) \end{aligned}$$
for \(s\in [0,t]\). In particular, \(u(t)\le (x_2(0)-x_1(0)) E_\alpha (a^*(t)t^\alpha )\). The theorem is proved. \(\square \)
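The pair of bounds from Theorems 4 and 5 can be checked numerically for a nonconstant coefficient. In the sketch below, the coefficient \(a(t) = -1 - \frac{1}{2}\sin t\), the first-order fractional rectangle solver and all discretization parameters are our own illustrative choices, not part of the theorems.

```python
from math import gamma, sin

def ml(alpha, z, tol=1e-14, kmax=300):
    """E_alpha(z) via truncated power series; adequate for moderate |z|."""
    s = 0.0
    for k in range(kmax):
        try:
            term = z**k / gamma(alpha * k + 1.0)
        except OverflowError:
            break
        s += term
        if k > 1 and abs(term) < tol:
            break
    return s

def solve_caputo_ivp(f, alpha, x0, T, n):
    """Explicit fractional rectangle (Euler) rule; first-order accurate only."""
    h = T / n
    c = h**alpha / gamma(alpha + 1.0)
    xs, fs = [x0], []
    for k in range(1, n + 1):
        fs.append(f((k - 1) * h, xs[-1]))
        conv = sum(((k - j)**alpha - (k - j - 1)**alpha) * fs[j] for j in range(k))
        xs.append(x0 + c * conv)
    return xs

alpha, T, n = 0.8, 2.0, 1200
a = lambda t: -1.0 - 0.5 * sin(t)        # illustrative coefficient
f = lambda t, x: a(t) * x

# difference of the solutions with x(0) = 2 and x(0) = 1; positive by Theorem 1(iii)
u = solve_caputo_ivp(f, alpha, 2.0, T, n)[-1] - solve_caputo_ivp(f, alpha, 1.0, T, n)[-1]

# a_*(T) = min of a on [0, T] and a^*(T) = max of a on [0, T], sampled on a grid
samples = [a(T * k / 1000) for k in range(1001)]
lower = (2.0 - 1.0) * ml(alpha, min(samples) * T**alpha)   # Theorem 4
upper = (2.0 - 1.0) * ml(alpha, max(samples) * T**alpha)   # Theorem 5
assert lower < u < upper
```

Since a is non-constant on [0, T], the difference u lies strictly between the two bounds here.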
As an immediate consequence of Theorem 5, we obtain a stability result for homogeneous linear equations with nonconstant coefficients:
Corollary 1
Assume the hypotheses of Theorem 4 and let \(J = [0, \infty )\). If \(\sup _{t \ge 0} a(t) < 0\) then all solutions x to the equation (1.1) satisfy the property \(\lim _{t \rightarrow \infty } x(t) = 0\). In other words, the differential equation is asymptotically stable.
Proof
As the differential equation under consideration is linear and homogeneous, it is clear that \(\tilde{x} \equiv 0\) is one of its solutions. Moreover, we note that
$$\begin{aligned} a^*(t) \le A^* := \sup _{s \ge 0} a(s) < 0 \quad \text{ for } \text{ all } t \ge 0. \end{aligned}$$
Thus, if x is any solution to the differential equation, it follows from Theorem 5 that
$$\begin{aligned} |x(t)| = |x(t) - \tilde{x}(t)| \le |x(0)| \, E_\alpha (a^*(t) t^\alpha ) \le |x(0)| \, E_\alpha (A^* t^\alpha ) \end{aligned}$$
for all \(t \ge 0\), where in the last inequality we have used the well-known monotonicity of the Mittag-Leffler function \(E_\alpha \) [16, Proposition 3.10]. Since \(A^* < 0\) by assumption, Lemma 1 implies that the upper bound tends to 0 for \(t \rightarrow \infty \), and our claim follows. \(\square \)
Remark 2
Using the arguments developed in [4], we can see that the observations of Remark 1 hold here as well: Theorem 5 (and hence also Corollary 1) can be generalized to the multidimensional setting but Theorem 4 cannot.
Remark 3
The question addressed in Corollary 1 is closely related to the topic discussed (with completely different methods) in [5].
4.2 Nonlinear differential equations
Now consider equation (1.1) with f assumed to be continuous on \(J\times \mathbb {R}\) and to satisfy the condition (2.1), so we are in the situation discussed in Theorem 1. Further, we assume temporarily that \(f(t,0) = 0\) for any \(t\in J\). For each \(t \in J\), we define
$$\begin{aligned} a_*(t) = \inf _{0 \le s \le t, \, x \ne 0} \frac{f(s,x)}{x} \quad \text{ and } \quad a^*(t) = \sup _{0 \le s \le t, \, x \ne 0} \frac{f(s,x)}{x}. \end{aligned}$$(4.5)
Note that, if the differential equation is linear, i.e. if \(f(t,x) = a(t) x\), then these definitions of \(a_*\) and \(a^*\) coincide with the conventions introduced in Theorems 4 and 5, respectively.
We first state an auxiliary result which asserts that this definition makes sense because the infimum and the supremum mentioned in (4.5) exist.
Lemma 2
Let f satisfy the assumptions mentioned in Theorem 1, and assume furthermore that \(f(t, 0) = 0\) for all \(t \in J\). Then, the definitions of the functions \(a_*\) and \(a^*\) given in (4.5) are meaningful for all \(t \in J\), and the functions \(a_*\) and \(a^*\) are bounded on this interval.
Proof
By definition, we obtain—in view of the property \(f(t, 0) = 0\) and the Lipschitz condition (2.1)—the estimate
$$\begin{aligned} |f(t,x)| = |f(t,x) - f(t,0)| \le L(t) |x| \end{aligned}$$for any \(x\in \mathbb {R}\setminus \{0\}\) and \(t\in J\). Thus, for any given time \(t\in J\),
$$\begin{aligned} -L(t) \le \frac{f(t,x)}{x} \le L(t) \quad \text{ for } \text{ all } x \in \mathbb {R}\setminus \{0\}. \end{aligned}$$This implies that
$$\begin{aligned} a^*(t) \le \max _{0 \le s \le t} L(s) = L^*(t). \end{aligned}$$(4.6)On the other hand, we also see that
$$\begin{aligned} \frac{f(t,x)}{x} \ge -L(t) \ge -L^*(t) \end{aligned}$$for any \(t\in J\). This together with (4.6) implies that
$$\begin{aligned} -L^*(t) \le a_*(t) \le a^*(t) \le L^*(t) \end{aligned}$$for any \(t\in J\). The lemma is proved. \(\square \)
Theorem 6
Under the assumptions of Lemma 2, we have:

(i)
For \(x_0>0\), the solution \(\varphi (\cdot ,x_0)\) of Eq. (1.1) with the condition \(x(0)=x_0\) satisfies
$$\begin{aligned} x_0 E_\alpha (a_*(t)t^\alpha )\le \varphi (t,x_0 )\le x_0 E_\alpha (a^*(t)t^\alpha ). \end{aligned}$$ 
(ii)
For \(x_0<0\), the solution \(\varphi (\cdot ,x_0)\) of Eq. (1.1) with the condition \(x(0)=x_0\) satisfies
$$\begin{aligned} x_0 E_\alpha (a^*(t)t^\alpha ) \le \varphi (t,x_0 )\le x_0 E_\alpha (a_*(t)t^\alpha ). \end{aligned}$$
Proof
We only show the proof of the statement (i); the case (ii) is proven similarly. Let \(x_0>0\). By Theorem 1(iii) and the fact that \(f(t,0)=0\) for all \(t\in J\), the solution \(\varphi (\cdot ,x_0)\) is positive on J. For an arbitrary but fixed \(t> 0\), we have on the interval [0, t]
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } \varphi (s,x_0) = \frac{f(s, \varphi (s,x_0))}{\varphi (s,x_0)}\, \varphi (s,x_0) \ge a_*(t)\, \varphi (s,x_0). \end{aligned}$$
This implies that
$$\begin{aligned} \varphi (s,x_0) \ge x_0 E_\alpha (a_*(t) s^\alpha ), \quad s \in [0,t]. \end{aligned}$$
In particular, \(\varphi (t,x_0)\ge x_0 E_\alpha (a_*(t)t^\alpha )\). On the other hand, \(\varphi (\cdot ,x_0)\) is also the unique solution of the equation
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } \varphi (s,x_0) = a^*(t) \varphi (s,x_0) + \left[ f(s, \varphi (s,x_0)) - a^*(t) \varphi (s,x_0) \right] , \quad \varphi (0,x_0) = x_0, \end{aligned}$$where the term in brackets is nonpositive on \([0,t]\) since \(f(s,x) \le a^*(t) x\) for \(x > 0\). Thus,
$$\begin{aligned} \varphi (s,x_0) \le x_0 E_\alpha (a^*(t) s^\alpha ), \quad s \in [0,t], \end{aligned}$$and \(\varphi (t,x_0)\le x_0 E_\alpha (a^*(t)t^\alpha )\). The proof is complete. \(\square \)
Theorem 6 has an immediate consequence:
Corollary 2
Assume the hypotheses of Theorem 1, and furthermore let \(f(t,0) = 0\) for all \(t \in J\).

(i)
For \(0< x_{10} < x_{20}\), we have for all \(t \in J\) that
$$\begin{aligned}&x_{20} E_\alpha (a_*(t) t^\alpha ) - x_{10} E_\alpha (a^*(t) t^\alpha ) \\&\quad \le x_2(t) - x_1(t) \\&\quad \le x_{20} E_\alpha (a^*(t) t^\alpha ) - x_{10} E_\alpha (a_*(t) t^\alpha ). \end{aligned}$$ 
(ii)
For \(x_{10}< 0 < x_{20}\), we have for all \(t \in J\) that
$$\begin{aligned} (x_{20} - x_{10}) E_\alpha (a_*(t) t^\alpha ) \le x_2(t) - x_1(t) \le (x_{20} - x_{10}) E_\alpha (a^*(t) t^\alpha ). \end{aligned}$$ 
(iii)
For \(x_{10}< x_{20} < 0\), we have for all \(t \in J\) that
$$\begin{aligned}&x_{20} E_\alpha (a^*(t) t^\alpha ) - x_{10} E_\alpha (a_*(t) t^\alpha ) \\&\quad \le x_2(t) - x_1(t) \\&\quad \le x_{20} E_\alpha (a_*(t) t^\alpha ) - x_{10} E_\alpha (a^*(t) t^\alpha ). \end{aligned}$$
From this result, we can also deduce an analog of Corollary 1, i.e. a sufficient criterion for asymptotic stability, for the nonlinear case.
Corollary 3
Assume the hypotheses of Theorem 1, and furthermore let \(J = [0, \infty )\) and \(f(t,0) = 0\) for all \(t \in J\). Moreover, let \(\sup _{t \ge 0} a^*(t) < 0\). Then, all solutions x of the differential equation (1.1) satisfy \(\lim _{t \rightarrow \infty } x(t) = 0\).
The proof is an immediate generalization of the proof of Corollary 1. We omit the details.
We now give up the requirement that \(f(t, 0) = 0\). To this end, we essentially follow the standard procedure in the analysis of stability properties of differential equations; cf., e.g., [7, Remark 7.4].
Theorem 7
Assume the hypotheses of Theorem 1, and let \(x_{10} < x_{20}\). Then, for any \(t \in J\) we have
$$\begin{aligned} (x_{20} - x_{10}) E_\alpha (\tilde{a}_*(t) t^\alpha ) \le x_2(t) - x_1(t) \le (x_{20} - x_{10}) E_\alpha (\tilde{a}^*(t) t^\alpha ), \end{aligned}$$where
$$\begin{aligned} \tilde{a}_*(t) = \inf _{0 \le s \le t, \, x \ne 0} \frac{\tilde{f}(s,x)}{x} \quad \text{ and } \quad \tilde{a}^*(t) = \sup _{0 \le s \le t, \, x \ne 0} \frac{\tilde{f}(s,x)}{x} \end{aligned}$$and
$$\begin{aligned} \tilde{f}(t, x) = f(t, x + x_1(t)) - f(t, x_1(t)). \end{aligned}$$
Proof
First, we note that, in view of Theorem 1(iii), we have \(x_1(t) < x_2(t)\) for all \(t \in J\). Then we define the function
$$\begin{aligned} \tilde{f}(t, x) := f(t, x + x_1(t)) - f(t, x_1(t)) \end{aligned}$$and notice that
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } (x_2 - x_1)(t) = f(t, x_2(t)) - f(t, x_1(t)) = \tilde{f}(t, x_2(t) - x_1(t)), \end{aligned}$$so that the function \(\tilde{x} := x_2 - x_1\) satisfies the differential equation
$$\begin{aligned} {}^{C\!}D_{0+}^{\alpha } \tilde{x}(t) = \tilde{f}(t, \tilde{x}(t)) \end{aligned}$$
and the initial condition \(\tilde{x}(0) = x_2(0) - x_1(0) > 0\). Moreover, \(\tilde{f}(t,0) = 0\) for all t, and \(\tilde{f}\) satisfies the Lipschitz condition (2.1) with the same Lipschitz bound L(t) as f itself. This implies that the quantities \(\tilde{a}_*(t)\) and \(\tilde{a}^*(t)\) exist and are finite. Furthermore, we may apply Theorem 6(i) to the function \(\tilde{x}\) and derive the claim. \(\square \)
Note that Theorem 7 is the only result in Sect. 4 whose application in practice requires the knowledge of an exact solution to the given differential equation. All other results are solely based on information about the given function f on the right-hand side of the differential equation.
5 An application example
As an application example, we consider the linear differential equation
for \(t \in [0,\infty )\). In the notation of Sect. 4.1, we have
The function \(a^*\) defined in Theorem 5 satisfies
therefore, by Corollary 1, the equation is asymptotically stable and hence a prototype of a class of problems that is particularly relevant in practice.
For the purposes of concrete experiments, we restrict our attention to the interval \(J = [0, T]\) with \(T = 6\). We have plotted the function a and the associated functions \(a_*\) and \(a^*\) on this interval in Fig. 1. In particular, we can compute (and see in the figure) that
and
To demonstrate the effectiveness of our new estimates, we choose \(\alpha = 0.65\) and consider two solutions \(x_1\) and \(x_2\) to the differential equation (5.1) subject to the initial conditions \(x_1(0) = 1\) and \(x_2(0) = 2\), respectively. Since exact solutions for these two initial value problems are not available, we have resorted to numerical solutions instead. To this end, we have used Garrappa’s fast implementation of the fractional trapezoidal method [15] that is based on the ideas of Lubich et al. [17, 19]. We have used the step size \(h = 10^{-5}\) which, in combination with the well-known stability properties of this numerical method, allows us to reasonably believe that the numerical solution is very close to the exact solution. Figure 2 shows the graphs of the two solutions.
The essential observation can be read off from Fig. 3. Since \(a(t) < 0\) for all t in this example, the function L from the Lipschitz condition of the differential equation’s right-hand side is just \(L(t) = |a(t)| = -a(t)\), and hence the function \(L^*\) from Theorems 2 and 3 is simply \(L^*(t) = -a_*(t)\). Therefore, the old lower bound of Theorem 2 is identical to the new bound of Theorem 4. The fact that we have been unable to improve this bound in the example reflects the fact that the old bound is already very close to the correct value of the difference between the two functions. For the two upper bounds, however, we obtain a completely different picture. While the old bound from Theorem 3 vastly overestimates the true value of the difference (note the logarithmic scale on the vertical axis of Fig. 3), the new bound is very much closer. In particular, our new bound—like the true difference—tends to 0 as \(t \rightarrow \infty \), whereas the previously known bound tends to \(\infty \).
References
Baleanu, D., Lopes, A.M. (eds.): Handbook of Fractional Calculus with Applications, Vol. 7: Applications in Engineering, Life and Social Sciences. Part A. De Gruyter, Berlin (2019). https://doi.org/10.1515/9783110571905
Baleanu, D., Lopes, A.M. (eds.): Handbook of Fractional Calculus with Applications, Vol. 8: Applications in Engineering, Life and Social Sciences. Part B. De Gruyter, Berlin (2019). https://doi.org/10.1515/9783110571929
Baleanu, D., Diethelm, K., Scalas, E., Trujillo, J.J.: Fractional Calculus: Models and Numerical Methods, 2nd edn. World Scientific, Singapore (2016)
Cong, N.D., Tuan, H.T.: Generation of nonlocal fractional dynamical systems by fractional differential equations. J. Integral Equ. Appl. 29, 585–608 (2017). https://doi.org/10.1216/JIE-2017-29-4-585
Cong, N.D., Son, D.T., Tuan, H.T.: On fractional Lyapunov exponent for solutions of linear fractional differential equations. Fract. Calc. Appl. Anal. 17(2), 285–306 (2014). https://doi.org/10.2478/s13540-014-0169-1
Diethelm, K.: On the separation of solutions of fractional differential equations. Fract. Calc. Appl. Anal. 11(3), 259–268 (2008)
Diethelm, K.: The Analysis of Fractional Differential Equations. Springer, Berlin (2010). https://doi.org/10.1007/978-3-642-14574-2
Diethelm, K.: Increasing the efficiency of shooting methods for terminal value problems of fractional order. J. Comput. Phys. 293, 135–141 (2015). https://doi.org/10.1016/j.jcp.2014.10.054
Diethelm, K., Ford, N.J.: Volterra integral equations and fractional calculus: Do neighboring solutions intersect? J. Integral Equ. Appl. 24, 25–37 (2012). https://doi.org/10.1216/JIE-2012-24-1-25
Diethelm, K., Ford, N.J.: A note on the well-posedness of terminal value problems for fractional differential equations. J. Integral Equ. Appl. 30, 371–376 (2018). https://doi.org/10.1216/JIE-2018-30-3-371
Diethelm, K., Uhlig, F.D.: A novel approach to shooting methods for fractional terminal value problems
Diethelm, K., Siegmund, S., Tuan, H.T.: Asymptotic behavior of solutions of linear multi-order fractional differential equation systems. Fract. Calc. Appl. Anal. 20(5), 1165–1195 (2017). https://doi.org/10.1515/fca-2017-0062
Ford, N.J., Morgado, M.L.: Fractional boundary value problems: Analysis and numerical methods. Fract. Calc. Appl. Anal. 14(4), 564–567 (2011). https://doi.org/10.2478/s13540-011-0034-4
Ford, N.J., Morgado, M.L., Rebelo, M.: High order numerical methods for fractional terminal value problems. Comput. Methods Appl. Math. 14, 55–70 (2014). https://doi.org/10.1515/cmam-2013-0022
Garrappa, R.: Trapezoidal methods for fractional differential equations: Theoretical and computational aspects. Math. Comput. Simul. 110, 96–112 (2015). https://doi.org/10.1016/j.matcom.2013.09.012
Gorenflo, R., Kilbas, A.A., Mainardi, F., Rogosin, S.: Mittag-Leffler Functions, Related Topics and Applications, 2nd edn. Springer, Berlin (2020). https://doi.org/10.1007/978-3-662-61550-8
Hairer, E., Lubich, C., Schlichte, M.: Fast numerical solution of nonlinear Volterra convolution equations. SIAM J. Sci. Stat. Comput. 6, 532–541 (1985). https://doi.org/10.1137/0906037
Holm, S.: Waves with Power-Law Attenuation. Springer International, Cham (2019). https://doi.org/10.1007/978-3-030-14927-7
Lubich, C.: Discretized fractional calculus. SIAM J. Math. Anal. 17, 704–719 (1986). https://doi.org/10.1137/0517050
Mainardi, F.: Fractional Calculus and Waves in Linear Viscoelasticity. World Scientific, Singapore (2010)
Petráš, I. (ed.): Handbook of Fractional Calculus with Applications, Vol. 6: Applications in Control. De Gruyter, Berlin (2019). https://doi.org/10.1515/9783110571745
Tarasov, V.E. (ed.): Handbook of Fractional Calculus with Applications, Vol. 4: Applications in Physics, Part A. De Gruyter, Berlin (2019). https://doi.org/10.1515/9783110571707
Tarasov, V.E. (ed.): Handbook of Fractional Calculus with Applications, Vol. 5: Applications in Physics, Part B. De Gruyter, Berlin (2019). https://doi.org/10.1515/9783110571721
Tisdell, C.C.: On the application of sequential and fixedpoint methods to fractional differential equations of arbitrary order. J. Integral Equ. Appl. 24, 283–319 (2012). https://doi.org/10.1216/JIE2012242283
Acknowledgements
Hoang The Tuan was funded by Vingroup JSC and supported by the Postdoctoral Scholarship Programme of the Vingroup Innovation Foundation (VINIF), Vingroup Big Data Institute (VinBigdata), under the code VINIF.2021.STS.17.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Funding
Open Access funding enabled and organized by Projekt DEAL.
Diethelm, K., Tuan, H.T.: Upper and lower estimates for the separation of solutions to fractional differential equations. Fract. Calc. Appl. Anal. 25, 166–180 (2022). https://doi.org/10.1007/s13540-021-00007-x
Keywords
 Fractional differential equation
 Caputo derivative
 Initial condition
 Separation of solutions
Mathematics Subject Classification
 34A08
 34A12