Abstract
We extend the study of the random Hermite second-order ordinary differential equation to the fractional setting. We first construct a random generalized power series that solves the equation in the mean square sense under mild hypotheses on the random inputs (coefficients and initial conditions). From this representation of the solution, which is a parametric stochastic process, reliable approximations of the mean and the variance are explicitly given. Then, we take advantage of the random variable transformation technique to go further and construct convergent approximations of the first probability density function of the solution. Finally, several numerical simulations are carried out to illustrate the broad applicability of our theoretical findings.
1 Introduction
The extension of many classical results to the context of fractional calculus has allowed their successful application to a number of practical problems. In particular, fractional-order derivatives have proven to be powerful tools to better describe systems, media and fields characterized by non-local effects and power-law memory, often met in models arising in physics, control, signal and image processing, mechanics, dynamic systems, biology, environmental science, materials, economics and multidisciplinary engineering fields (Honguang et al. 2018). The aforementioned extension is often done from relevant models formulated via classical differential equations that have been generalized using different fractional-order derivatives. Examples in this regard include the linear, logistic, Riccati and Gompertz models; see Rivero et al. (2008), Nieto (2022), Khan et al. (2013) and Frunzo et al. (2019), just to mention a few.
On the other hand, the application of fractional differential equations to modeling the dynamics of complex phenomena using real-world data involves the rigorous treatment of randomness coming from the combination of epistemic and aleatoric uncertainties (Kiureghian and Ditlevsen 2009). Epistemic (or systematic) uncertainty appears because of inaccurate measurements, or because the model simplifies the true complexity of the phenomenon under study by neglecting certain effects, while aleatoric (or stochastic) uncertainty comes from the fact that different outcomes are obtained when we run or observe the same experiment. These facts lead to stochastic or random fractional differential equations. As is accurately pointed out in (Smith 2014, p. 96), it is important to underline that there is a growing trend in the Uncertainty Quantification community to treat stochastic and random differential equations as synonymous terms, when in fact they require completely different approaches for analysis and approximation. In dealing with stochastic differential equations (SDEs), uncertainties are forced by an irregular process, such as the Brownian motion or, more generally, a Wiener process. SDEs are typically represented in terms of stochastic differentials, but they must be interpreted as Itô or Stratonovich stochastic integrals (Smith 2014, p. 97), Kloeden and Platen (1992). The role of uncertainty is essentially different in random differential equations (RDEs). Indeed, in the setting of these equations, random effects are directly manifested through coefficients, initial/boundary conditions, and/or source terms that are assumed to be well-behaved (e.g., continuous) with respect to time and/or space (Smith 2014, p. 97), Soong (1973). As pointed out in (Banks et al. 2014, p. 258), overall the theory of RDEs is much less advanced than that for SDEs. This fact is even more noticeable in the case of RDEs formulated by means of fractional-order derivatives.
The aim of this paper is to continue contributing to the realm of fractional calculus by extending the analysis of the Hermite differential equation in a twofold sense, namely, introducing both fractional derivatives and uncertainties in its formulation. For the former goal, the mean square Caputo fractional derivative will be used, while for the latter we will rely on the RDE approach.
On the one hand, the fractional Hermite differential equation, based on Caputo operator, has been studied to introduce fractional Hermite polynomials and with applications to design special filters (AbdelAty et al. 2016). On the other hand, the random Hermite equation
where \(Y_0\), \(Y_1\) and \(\lambda \) are random variables, has been studied in Calbo et al. (2011) using the so-called mean square calculus (Soong 1973). In this latter contribution, a power series solution of the randomized classical Hermite differential equation is constructed, and both the expectation and the variance of the solution are then approximated. Apart from the above-mentioned contributions, and to the best of our knowledge, no contribution has yet dealt with the study of the random fractional Hermite differential equation. So, in some sense, the present paper is aimed at extending the results available so far. Even more, as shall be seen, we will also give a method to calculate the first probability density function of the solution, which is a more ambitious goal.
Hereinafter, we will work on the Lebesgue spaces \(\textrm{L}^p(\mathcal {D}) \equiv \textrm{L}^p(\mathcal {D},\textrm{d}\mu )\), \(1\le p < \infty \), whose elements are real-valued measurable functions \(h:\mathcal {D} \longrightarrow \mathbb {R}\) with the norm \(\Vert h \Vert _{\textrm{L}^p(\mathcal {D})}=\left( \int _{\mathcal {D}} |h|^p \textrm{d} \mu \right) ^{1/p}<\infty \). In the case \(p=\infty \), the norm is defined as \(\Vert h \Vert _{\textrm{L}^{\infty }(\mathcal {D})}=\inf \{ \sup \{|h(t)|: t\in \mathcal {D} {\setminus } \mathcal {N} \}: \mu (\mathcal {N})=0 \}< \infty \), so the elements of \(\textrm{L}^{\infty }(\mathcal {D})\) are essentially bounded functions. Classically, \(\mathcal {D}=\mathcal {T}\subset \mathbb {R}\) is an interval and \(\textrm{d}\mu =\textrm{d}t\) is the Lebesgue measure. Throughout the paper, as we shall also work with random variables and stochastic processes, we will implicitly take \(\mathcal {D}=\Omega \) (sample space) and \(\mu =\mathbb {P}\) (probability measure), and \(\mathcal {D}=\mathcal {T} \times \Omega \) and \(\textrm{d}\mu =\textrm{d} t \times \textrm{d} \mathbb {P}\), respectively. Notice that \(X\in \textrm{L}^{p}(\Omega ) \) if and only if \(\Vert X \Vert _{\textrm{L}^{p}(\Omega )}=\left( \mathbb {E}[|X|^p] \right) ^{1/p}<\infty \), where \(\mathbb {E}[\,]\) denotes the expectation operator, and \(X\equiv X(t)\in \textrm{L}^{p}(\mathcal {T} \times \Omega ) \) if and only if \(\Vert X \Vert _{\textrm{L}^{p}(\mathcal {T} \times \Omega )}=\left( \mathbb {E} \left[ \int _{\mathcal {T}}|X(t)|^p \, \textrm{d}t \right] \right) ^{1/p}<\infty \). Any stochastic process X(t) in \(\textrm{L}^{p}(\mathcal {T} \times \Omega )\) can be interpreted as a set of random variables in \(\textrm{L}^{p}( \Omega )\) indexed by \(t\in \mathcal {T}\). An important result in the above probabilistic Lebesgue spaces is the so-called Liapunov inequality

$$\begin{aligned} \left( \mathbb {E}[|X|^{r}]\right) ^{1/r}\le \left( \mathbb {E}[|X|^{s}]\right) ^{1/s}, \quad 0<r\le s, \end{aligned}$$
provided the expectation \(\mathbb {E}[|X|^{s}]<\infty \). This result indicates that \(\textrm{L}^{s}(\Omega ) \subset \textrm{L}^{r}(\Omega )\), \(0<r\le s\); as a consequence, in the probabilistic setting it is preferred to establish results in the biggest of these spaces guaranteeing finite variance, namely \(\textrm{L}^{2}(\Omega )\), whose elements are real-valued random variables, \(X:\Omega \longrightarrow \mathbb {R}\), with finite second-order moment \(\mathbb {E}[X^2]<\infty \) (equivalently, finite variance). The elements of \(\textrm{L}^2(\Omega )\) are usually called second-order random variables. It can be proven that \(\textrm{L}^2(\Omega )\) is a Hilbert space with the inner product \(\left<X,Y\right> = \mathbb {E}[XY]\), from which one infers the so-called 2-norm: \(||X||_2 = \sqrt{\left<X,X\right> }= \mathbb {E}[X^2]^{\frac{1}{2}}.\) A sequence of second-order random variables, \(\{X_n: n\ge 0\,\, \text {integer}\}\), is said to be mean square convergent to a random variable \(X\in \textrm{L} ^2(\Omega )\) if and only if \(||X-X_n||_2\longrightarrow 0\) as \(n\rightarrow \infty \). In the case that the collection of second-order random variables is indexed by an interval, say \(\mathcal {T}\subset \mathbb {R}\), then \(\{X(t):t\in \mathcal {T}\}\) is called a second-order stochastic process. The concepts of continuity, differentiability and integrability in the mean square sense are naturally inferred from the 2-norm. When trying to prove the mean square convergence of a sequence of second-order stochastic processes that defines the solution of a random fractional differential equation, it is often required to bound products of random variables. Unfortunately, the inequality \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _2\), \(X,Y\in \textrm{L}^2(\Omega )\), does not hold in general. However, the Hölder inequality

$$\begin{aligned} \Vert XY\Vert _r\le \Vert X\Vert _p \Vert Y\Vert _q, \quad \frac{1}{r}=\frac{1}{p}+\frac{1}{q}, \quad 1\le p,q\le \infty , \end{aligned}$$
applied to \(r=p=2\) and \(q=\infty \) leads to \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _{\infty }\). This result, which relates the Lebesgue spaces \(\textrm{L}^2(\Omega )\) and \(\textrm{L}^{\infty }(\Omega )\), will be very useful in our subsequent analysis to properly majorize some quantities and then establish mean square convergence. After doing that, we will be interested in computing reliable approximations of the main moments of the solution, such as the expectation and the variance. To achieve this important goal, the following property of mean square convergence will play a key role.
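As a quick numerical illustration (ours, not part of the original analysis), the bound \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _{\infty }\) can be checked by Monte Carlo for a concrete pair of variables; the distributions below are arbitrary choices:

```python
import math
import random

random.seed(7)
N = 200_000

# X in L^2(Omega): a standard Gaussian; Y in L^infty(Omega): uniform on [-3, 2],
# whose essential supremum is ||Y||_infty = 3.
xs = [random.gauss(0.0, 1.0) for _ in range(N)]
ys = [random.uniform(-3.0, 2.0) for _ in range(N)]

norm2_XY = math.sqrt(sum((x * y) ** 2 for x, y in zip(xs, ys)) / N)  # ||XY||_2
norm2_X = math.sqrt(sum(x * x for x in xs) / N)                      # ||X||_2
norminf_Y = 3.0                                                      # ||Y||_infty

print(norm2_XY <= norm2_X * norminf_Y)
```

Since \(|Y(\omega )|\le 3\) surely, the sample version of the inequality holds deterministically, not only in the limit.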
Proposition 1.1
(Soong 1973, Th 4.4.3) Let \(\{X_n: n\ge 0\}\) be a sequence of second-order random variables such that \(X_n \longrightarrow X\) as \(n \rightarrow \infty \) in the mean square sense. Then,

$$\begin{aligned} \lim _{n\rightarrow \infty }\mathbb {E}[X_n]=\mathbb {E}[X] \quad \text {and} \quad \lim _{n\rightarrow \infty }\mathbb {E}[X_n^2]=\mathbb {E}[X^2]. \end{aligned}$$
In this paper, we shall study the following random fractional initial value problem (RFIVP), which extends, to the fractional setting, the random (classical) Hermite equation previously introduced in (1):
Here, \((^C D_0^{\alpha } Y)(t)\) stands for the Caputo mean square derivative of order \(\alpha >0\) of the second-order stochastic process Y(t), and \(\lambda \), \(Y_0\) and \(Y_1\) are second-order random variables defined on a complete probability space \((\Omega ,\mathcal {F},\mathbb {P})\). Let us recall that, given a second-order stochastic process, the random Caputo operator is defined by (Burgos et al. 2017)

$$\begin{aligned} (^C D_0^{\alpha } Y)(t)=\frac{1}{\Gamma (n-\alpha )} \int _0^t (t-s)^{n-\alpha -1}\, Y^{(n)}(s)\, \textrm{d}s, \end{aligned}$$
where \(n=-[-\alpha ]=\lceil \alpha \rceil \) is the ceiling of \(\alpha \), \([\cdot ]\) denoting the floor function. As in the classical setting the Hermite equation is a second-order differential equation, hereinafter we will assume that \(\alpha \in \,]0,1]\) in (3). It is important to remark that, throughout this paper, we take \(( ^C D_0^{2 \alpha } Y)(t):= ( ^C D_0^{ \alpha }(^C D_0^{ \alpha } Y) )(t).\)
This paper is organized as follows. Section 2 is devoted to constructing a mean square convergent solution of the RFIVP (3). In Sect. 3, we take advantage of Proposition 1.1, together with the results established in Sect. 2, to construct reliable approximations of the mean and of the standard deviation (equivalently, the variance) functions of the solution of the RFIVP (3). To complete our probabilistic study, in Sect. 4 we go further: first, we construct formal approximations of the probability density function of the solution in Sect. 4.1 and, second, in Sect. 4.2 we rigorously prove that they are convergent. In Sect. 5, we illustrate all our theoretical findings by means of two numerical examples, where a wide range of probability distributions for the model parameters is considered to better illustrate the applicability of the results.
2 Obtaining a mean square convergent solution for the Hermite random fractional differential equation
This section is devoted to constructing a convergent solution of the random IVP (3) in the so-called mean square sense (Soong 1973). The solution, which is a stochastic process, will be constructed, by means of a generalized random power series, by applying the extension of the classical Frobenius method to the stochastic setting. To guarantee the mean square convergence of the above-mentioned series, we will impose some conditions, to be specified later, on the random coefficient \(\lambda \) and on the random initial conditions, \(Y_0\) and \(Y_1\).
According to the random Frobenius method, let us assume that the solution, Y(t), can be expanded via a generalized random power series,
where \(\{X_m\}\) is a sequence of random variables in \(\textrm{L}^2(\Omega )\) to be determined. To calculate \(X_m\), using the random Frobenius method, we will impose that (5) is a solution of the random IVP (3). To this end, we need to determine the mean square Caputo fractional derivatives, \( (^C D_0^{\alpha } Y)(t)\) and \( (^C D_0^{ 2 \alpha } Y)(t)\), of the stochastic process given in (5). We first deal with \( (^C D_0^{\alpha } Y)(t)\), which, according to (4), is defined in terms of the first-order mean square derivative of Y(t), denoted by \(Y'(t)\). To rigorously do so, we will apply (Cortés et al. 2005, Theorem 3.1). Denoting \(U_m(t):=X_m t^{\alpha m}\) and applying (Soong 1973, Property 4.126) with the identification \(f(t)=t^{\alpha m}\) and \(X(t) = X_m\) (constant), one gets that \(U_m(t)\) is mean square differentiable and \(U_m'(t)= \alpha m X_m t^{\alpha m-1}.\) Furthermore, by the assumption \(X_m\in \textrm{L}^2(\Omega )\), \(U_m(t)\) and \(U_m'(t)\) are mean square continuous for each \(m\ge 0\).
Later, once the coefficients \(X_m\) have been explicitly determined, we will justify that \( Y(t)=\sum _{m=0}^\infty U_m(t)\) is mean square convergent for all real \(t>0\) and that \(\sum _{m=0}^\infty U_m'(t)\) is mean square uniformly convergent on \([-K,K ]\) for any positive K. Then,
will be justified, in the mean square sense, by (Cortés et al. 2005, Theorem 3.1).
Now, we shall calculate the mean square Caputo derivative of the stochastic process Y(t), \(( ^C D_0^{\alpha } Y)(t)\), \(0<\alpha \le 1\). Recall that the Caputo derivative of the deterministic power function \(t^\nu \) is given by

$$\begin{aligned} {}^C D_0^{\alpha }\, t^{\nu } = \frac{\Gamma (\nu +1)}{\Gamma (\nu +1-\alpha )}\, t^{\nu -\alpha } \quad (\nu >0), \qquad {}^C D_0^{\alpha }\, 1 = 0, \end{aligned}$$
see (Diethelm 2010, Example 3.1). Then, taking into account (6) and (7), one gets
Notice that we have used that \(\sum _{m=0}^\infty U_m'(t)\) converges uniformly in the mean square sense to legitimate the commutation between this series and the integral.
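As a sanity check of formula (7), the Caputo derivative of a power function can be evaluated by direct numerical quadrature of the defining integral and compared with the closed form; the quadrature scheme, the regularizing substitution and the parameter values below are illustrative choices of ours, not part of the paper:

```python
import math

def caputo_power(alpha, nu, t, n=20000):
    """Numerically evaluate ^C D_0^alpha t^nu for 0 < alpha < 1 and nu >= 1 via
    (1/Gamma(1-alpha)) * int_0^t (t-s)^(-alpha) * nu * s^(nu-1) ds.
    The substitution t - s = v^(1/(1-alpha)) removes the endpoint singularity,
    after which a plain midpoint rule is accurate."""
    b = t ** (1.0 - alpha)                    # new upper integration limit
    h = b / n
    total = 0.0
    for i in range(n):
        v = (i + 0.5) * h                     # midpoint node
        u = v ** (1.0 / (1.0 - alpha))        # u = t - s
        total += nu * (t - u) ** (nu - 1.0)   # integrand after substitution
    total *= h / (1.0 - alpha)
    return total / math.gamma(1.0 - alpha)

alpha, nu, t = 0.5, 2.0, 1.5
approx = caputo_power(alpha, nu, t)
exact = math.gamma(nu + 1.0) / math.gamma(nu + 1.0 - alpha) * t ** (nu - alpha)
print(approx, exact)
```

For \(\alpha =0.5\), \(\nu =2\) both values agree to several decimal places, as the regularized integrand is a smooth polynomial.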
Now, we proceed to compute \(( ^C D_0^{2 \alpha } Y)(t)\) by applying Caputo's fractional operator one more time to (8),
Once \(( ^C D_0^{\alpha } Y)(t)\) and \(( ^C D_0^{ 2 \alpha } Y)(t)\) have been computed, we formally plug expressions (8), (9) and (5) into the RFIVP (3), which gives
This relation is fulfilled by choosing \(X_m\) such that
and
Note that the terms \(X_0\) and \(X_1\) are obtained from the initial conditions given in (3): \(X_0=Y(0)=Y_0\) and \(X_1=Y'(0)=Y_1\). As can be observed from Eq. (10), the odd and even terms, \(X_m\), are independently defined. By recursion, it is easy to check that they can be explicitly expressed as follows
and
respectively.
Then the solution (5) can be rewritten as
where
Substituting (12) into (11) and rearranging the terms yields
where
and
Notice that in the definition of \({\hat{Y}}_1(t)\), we have used the usual convention \(\prod _{k=i}^{j} p_k=1\) for \(i>j\) in the particular case that \(i=1>0=j\).
Hereinafter, we shall assume that:
-
H1: The coefficient \(\lambda \) is a bounded random variable, i.e., there exist real numbers \(b_1\) and \(b_2\) such that \(b_1<\lambda (\omega )<b_2\) for all \(\omega \in \Omega \). Notice that this is equivalent to writing \(\lambda \in \textrm{L}^{\infty }(\Omega )\).
-
H2: The initial conditions \(Y_0, Y_1 \in \textrm{L}^2(\Omega )\) and \(\lambda \in \textrm{L}^{\infty }(\Omega )\) are independent random variables.
In the sequel, we will show that Y(t) in (13) is a rigorous solution of the RFIVP (3). To this end, we show that Y(t) in (13) is mean square convergent for all real \(t>0\) and \(Y'(t)=Y_0 {\hat{Y}}'_1(t) + Y_1 {\hat{Y}}'_2(t)\) (derived from (13) and H2) is uniformly mean square convergent for all real \(t>0\).
To establish the mean square convergence of Y(t), let us first observe that each \({\hat{Y}}_i(t)\), \(i=1,2\), only depends on the random variable \(\lambda \). By hypothesis H2, \(Y_0\), \(Y_1\) and \(\lambda \) are independent random variables. Thus, (13) implies
Since \(Y_0\) and \(Y_1\) belong to \(\textrm{L}^{2}(\Omega )\), considering the previous inequality, the mean square convergence of Y(t) follows from the mean square convergence of the series \({\hat{Y}}_i(t)\), \(i=1,2\), defined in (14) and (15), respectively. Hence, we begin by proving the mean square convergence of \({\hat{Y}}_i(t)\), \(i=1,2\). First, we find a bound for \(\left\| {\hat{Y}}_1(t) \right\| _2\). The triangle inequality and the Hölder inequality (2) with \(r=p=2\) and \(q=\infty \) imply
By setting
we only need to show that \(\sum _{m=1}^\infty \delta _m(t)\) converges for all real t to ensure the mean square convergence of \({\hat{Y}}_1(t)\) for all real t. Taking advantage of the Stirling formula, \(\Gamma (x+1) \approx x^x e^{-x}\sqrt{2\pi x}\) as \(x\rightarrow \infty \), we have
because \(\left( \frac{2m \alpha }{2 (m+1)\alpha }\right) ^{2m \alpha }\xrightarrow []{m\rightarrow \infty } e^{-2 \alpha }\), \(\left( \frac{2m\alpha }{(2m-1)\alpha }\right) ^{(2m-1)\alpha }\xrightarrow []{m\rightarrow \infty } e^{\alpha }\), \(\left( \frac{1}{2(m+1)\alpha }\right) ^{k\alpha }\xrightarrow []{m\rightarrow \infty } 0\) for \(k=1,2\), and \(\left( \frac{2m \alpha }{2(m+1)\alpha }\right) ^{\alpha }\xrightarrow []{m\rightarrow \infty } 1\). By the ratio test, the series \(\sum _{m=1}^\infty \delta _m(t)\) converges for all real t. Hence, \({\hat{Y}}_1(t)\), defined in (14), is mean square convergent for all real \(t>0\). Similarly, the mean square convergence of \({\hat{Y}}_2(t)\), given by (15), can be proved for all real t. Moreover, using similar arguments, one can prove that the corresponding mean square derivatives, \({\hat{Y}}_1'(t)\) and \({\hat{Y}}_2 '(t)\), are uniformly mean square convergent on \([-K,K]\) for any positive K. Summarizing, the following result has been established:
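The four limits invoked in the ratio test above can be checked numerically by evaluating them at a large index m; the value \(\alpha =0.5\) is an arbitrary illustration (any \(\alpha \in \,]0,1]\) behaves alike):

```python
import math

alpha = 0.5   # illustrative fractional order
m = 10**6     # large index, standing in for m -> infinity

lim1 = (2 * m * alpha / (2 * (m + 1) * alpha)) ** (2 * m * alpha)        # -> e^(-2 alpha)
lim2 = (2 * m * alpha / ((2 * m - 1) * alpha)) ** ((2 * m - 1) * alpha)  # -> e^alpha
lim3 = (1.0 / (2 * (m + 1) * alpha)) ** alpha                            # -> 0
lim4 = (2 * m * alpha / (2 * (m + 1) * alpha)) ** alpha                  # -> 1

print(lim1, math.exp(-2 * alpha))
print(lim2, math.exp(alpha))
```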
Theorem 2.1
If the random variables \(Y_0\), \(Y_1\) and \(\lambda \) satisfy hypotheses H1 and H2, then
is a mean square convergent solution of the RFIVP (3) for all \(t>0\).
3 Obtaining approximations for the mean and standard deviation of the solution
Theorem 2.1 ensures the mean square convergence of the solution process Y(t) in (17). Hence, Proposition 1.1 guarantees the convergence of its mean and standard deviation. This section is devoted to finding explicit expressions for these relevant statistical functions. To this end, we first introduce the following technical result, which simplifies the subsequent calculations.
Lemma 3.1
Let f(k) be a real function and let \(\lambda \) be a random variable. Then,
where
In other words, for \(i<m\), \(G_{m,i}\) is defined as the sum taken over all subsets of \(m-i\) indices \(j_1,\ldots ,j_{m-i}\) from the set \(\{1,\ldots ,m \}\).
Proof
We proceed by induction on m. Clearly, (18) is true for \(m=1\). Indeed, observe that \(G_{1,0}=f(1)\) and \(G_{1,1}=1\) and the right side of (18) is
which is equal to the left side of (18). The Eq. (18) holds for \(m=2\) since the left side of (18) is
and the right side of (18) is
By definition of \(G_{m,i}\) it follows
Let \(m\in \mathbb {N} \) such that \(m\ge 2\) and suppose that
By the induction hypothesis (20), we have
Using the equalities \(G_{m,0}=f(m) G_{m-1,0}\) and \(G_{m,i+1}=f(m) G_{m-1,i+1}+G_{m-1,i}\) (derived from (\(**\))) yields
By the principle of mathematical induction, we conclude that (18) is true for all m. \(\square \)
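The coefficients \(G_{m,i}\) can be sketched in code: the recurrences used in the proof against the direct subset-sum definition (19). Since the displayed identity (18) is not reproduced here, the sign convention \(\prod _{k=1}^m (f(k)+\lambda )=\sum _{i=0}^m G_{m,i}\lambda ^i\) adopted in the cross-check is our assumption:

```python
import itertools
import math

def G_table(f, m):
    """Return [G_{m,0}, ..., G_{m,m}] built with the recurrences from the proof:
    G_{m,0} = f(m) G_{m-1,0},  G_{m,i} = f(m) G_{m-1,i} + G_{m-1,i-1},  G_{m,m} = 1."""
    G = [1.0]                                   # m = 0: G_{0,0} = 1 (empty product)
    for k in range(1, m + 1):
        G = ([f(k) * G[0]]
             + [f(k) * G[i] + G[i - 1] for i in range(1, k)]
             + [G[k - 1]])
    return G

def G_direct(f, m, i):
    """G_{m,i} as described after (19): the sum, over all subsets of m - i indices
    j_1, ..., j_{m-i} from {1, ..., m}, of the products f(j_1)...f(j_{m-i})."""
    return sum(math.prod(f(j) for j in subset)
               for subset in itertools.combinations(range(1, m + 1), m - i))
```

For instance, `G_table(f, 1)` returns `[f(1), 1.0]`, matching \(G_{1,0}=f(1)\), \(G_{1,1}=1\) from the base case of the induction.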
Now, we apply Lemma 3.1 to simplify the products involved in (13).
Let \(f(k)=2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}\). Then,
where \(G_{m-1,i}\) is as in (19).
Next, setting \({\hat{f}}(k)=2 \frac{\Gamma ((2k-1)\alpha +1)}{\Gamma ((2k-2)\alpha +1)}\), one gets
where
As a consequence, the solution given in (13) can be represented free of products by the following expression
where \(G_{m,i}\) and \({\hat{G}}_{m,i}\) are defined in (19) and (21), respectively.
Now, we shall obtain reliable approximations for the mean and the variance functions of the solution. To achieve this goal, we first consider the truncation of order M, \(Y_M(t)\), of the solution given in (22):
By the independence of \(Y_0\), \(Y_1\) and \(\lambda \) (see hypothesis H2), one gets
Recall that the standard deviation of \(Y_M(t)\), \(\sigma [Y_M(t)]\), is defined by
Note that
Now, for the sake of clarity, we separately compute the above three terms, denoted by A, B and C.
and
Substituting A, B and C in (26), \(Y_M(t)^2\) can be expressed as
Applying the expectation operator on (27), one gets
From the previous expressions, it is interesting to observe that the approximation of order M of the mean, \(\mathbb {E}[Y_M(t)]\), depends on \(\mathbb {E}[Y_0]\), \(\mathbb {E}[Y_1]\) and \(\mathbb {E}[\lambda ^m]\), \(m=1,\ldots ,M\), while the approximation of the second-order moment, \(\mathbb {E}[Y_M^2(t)]\) (and hence, by (25), of \(\sigma [Y_M(t)]\)), depends on the above quantities together with \(\mathbb {E}[Y_0^2]\), \(\mathbb {E}[Y_1^2]\) and \(\mathbb {E}[\lambda ^m]\), \(m=1,\ldots ,2M\), as expected. Finally, notice that Theorem 2.1 ensures the mean square convergence of \(Y_M(t)\) and, according to Proposition 1.1, \(\mathbb {E}[Y_M(t)]\) and \(\mathbb {E}[Y_M^2(t)]\) converge to their corresponding exact values, \(\mathbb {E}[Y(t)]\) and \(\mathbb {E}[Y^2(t)]\), respectively.
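The transfer of moment convergence stated in Proposition 1.1 can be illustrated with a toy mean square convergent sequence, \(X_n = X + Z/n\), unrelated to the Hermite problem (our own example, chosen so that \(\Vert X_n-X\Vert _2 = 1/n\)):

```python
import math
import random

random.seed(1)
N = 100_000

# X ~ N(2,1) is the mean square limit; Z ~ N(0,1) independent perturbation.
X = [random.gauss(2.0, 1.0) for _ in range(N)]
Z = [random.gauss(0.0, 1.0) for _ in range(N)]

def stats(n):
    Xn = [x + z / n for x, z in zip(X, Z)]
    ms_err = math.sqrt(sum((a - x) ** 2 for a, x in zip(Xn, X)) / N)  # ||X_n - X||_2
    mean = sum(Xn) / N                                                # E[X_n]
    second = sum(a * a for a in Xn) / N                               # E[X_n^2]
    return ms_err, mean, second

err1, mean1, sec1 = stats(1)
err100, mean100, sec100 = stats(100)
print(err100, mean100, sec100)   # err shrinks; moments approach E[X]=2, E[X^2]=5
```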
4 Convergent approximations for the 1-PDF of the solution
So far, convergent approximations for the mean, \(\mathbb {E}[Y_M(t)]\), and for the standard deviation, \(\sigma [Y_M(t)]\), of the solution, Y(t), given in (22), have been computed from its truncation of order M, \(Y_M(t)\), given in (23). Nevertheless, further statistical information about Y(t) is sometimes required. On the one hand, computing higher-order one-dimensional statistical moments, \(\mathbb {E}[(Y_M(t))^k]\), allows us to approximate additional statistical properties of Y(t), such as the asymmetry, the kurtosis, etc., which are useful to better describe the solution from a probabilistic standpoint. On the other hand, the probability that the solution lies within an interval of interest is, obviously, relevant information in practice. Approximations of both quantities can be calculated by integrating the so-called first probability density function (1-PDF) of \(Y_M(t)\), say \(f_{Y_M(t)}(y)\),
and
Of course, the above approximations will be legitimate provided that \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \), where \(f_{Y(t)}(y)\) stands for the 1-PDF of the exact solution Y(t) given in (22). In this section, we first formally construct the approximations \(f_{Y_M(t)}(y)\) and then establish sufficient conditions under which the foregoing convergence holds.
4.1 Constructing formal approximations for the 1-PDF
In the extant literature, there exist different approaches to obtain, exactly or approximately, the 1-PDF of a stochastic process. Most of these methods are natural extensions of their corresponding counterparts for calculating the PDF of a random variable. As we have previously obtained approximations for the two first moments of the solution, a natural approach would be to apply the principle of maximum entropy (PME). This method constructs the PDF taking into account the available information on the random variable (in our case, the two first moments) by maximizing Shannon's entropy, which quantifies the lack of knowledge about a random variable (Michalowicz et al. 2013). In the setting of ordinary and fractional differential equations with randomness, this approach has been recently applied in Burgos-Simón et al. (2020) and Burgos et al. (2019), respectively. Although the method provides well-founded approximations of the 1-PDF, the results heavily depend on the accuracy of the approximations of the first statistical moments. Moreover, according to the PME method, the approximations of the 1-PDF are limited to certain specific classes of densities, depending on the number of statistical moments that have been pre-calculated. For example, if only the mean is known and the solution is positive, the PDF will be an exponential distribution; if both the mean and the variance are known, the approximation of the PDF will be Gaussian, etc. (Michalowicz et al. 2013). Non-standard distributions can be achieved at the expense of pre-calculating higher statistical moments, which could be cumbersome, as can be guessed from the expressions of the two first moments (see (24) and (28)).
To avoid these drawbacks, we here propose to obtain the 1-PDF by an alternative method termed the Probabilistic Transformation Method (PTM), which is based on the following result.
Theorem 4.1
(PTM) (Soong 1973, p. 25) Let us consider \(\textbf{Z}=(Z_1,\ldots ,Z_k)\) and \(\textbf{X}=(X_1,\ldots ,X_k)\), two k-dimensional absolutely continuous random vectors defined on a common complete probability space \((\Omega ,\mathcal {F}_{\Omega },\mathbb {P})\). Let \(\textbf{r}: \mathbb {R}^k \rightarrow \mathbb {R}^k\) be a one-to-one deterministic transformation of \(\textbf{Z}\) into \(\textbf{X}\), i.e., \(\textbf{X}=\textbf{r}(\textbf{Z})\). Assume that \(\textbf{r}\) is continuous in \(\textbf{Z}\) and has continuous partial derivatives with respect to each \(Z_i\), \(1\le i \le k\). Then, if \(f_{\textbf{Z}}(\textbf{z})\) denotes the joint probability density function of the random vector \(\textbf{Z}\), and \(\textbf{s}=\textbf{r}^{-1}=(s_1(x_1,\ldots ,x_k),\ldots ,s_k(x_1,\ldots ,x_k))\) represents the inverse mapping of \(\textbf{r}=(r_1(z_1,\ldots ,z_k),\ldots ,r_k(z_1,\ldots ,z_k))\), the joint probability density function of the random vector \(\textbf{X}\) is given by

$$\begin{aligned} f_{\textbf{X}}(\textbf{x})=f_{\textbf{Z}}\left( \textbf{s}(\textbf{x})\right) \left| J \right| , \end{aligned}$$
where \(\left| J \right| \), which is assumed to be different from zero, is the absolute value of the Jacobian defined by the determinant

$$\begin{aligned} J=\det \begin{pmatrix} \frac{\partial s_1}{\partial x_1} &{} \cdots &{} \frac{\partial s_1}{\partial x_k}\\ \vdots &{} \ddots &{} \vdots \\ \frac{\partial s_k}{\partial x_1} &{} \cdots &{} \frac{\partial s_k}{\partial x_k} \end{pmatrix}. \end{aligned}$$
In our setting, the key idea to take advantage of the above result is to note that, for \(t>0\) fixed, the approximate solution, \(Y_M(t)\), given in (23), is described by means of a transformation, \(\textbf{r}\), of the input parameters \(Y_0\), \(Y_1\) and \(\lambda \), whose PDFs, \(f_{Y_0}\), \(f_{Y_1}\) and \(f_{\lambda }\), are known. Observe that, according to hypothesis H2, the joint PDF of \((Y_0,Y_1,\lambda )\) factorizes as \(f_{Y_0,Y_1,\lambda } = f_{Y_0} f_{Y_1} f_{\lambda }\). Applying Theorem 4.1 to \(Y_M(t)\), we shall first obtain the approximations \(f_{Y_M(t)}(y)\) and, later, establish sufficient conditions so that \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \).
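Before specializing to \(Y_M(t)\), the mechanics of Theorem 4.1 can be sketched in one dimension with an illustrative pair of our own choosing (not from the paper): \(Z\sim \text {Exp}(1)\) and the one-to-one map \(X=r(Z)=Z^2\) on \((0,\infty )\):

```python
import math

def f_Z(z):
    # density of Z ~ Exp(1)
    return math.exp(-z) if z >= 0.0 else 0.0

def f_X(x):
    # PTM formula: f_X(x) = f_Z(s(x)) * |J|, with inverse s(x) = sqrt(x)
    # and |J| = |s'(x)| = 1 / (2 sqrt(x)).
    s = math.sqrt(x)
    return f_Z(s) / (2.0 * s)

def F_X(x):
    # exact CDF for cross-checking: P(X <= x) = P(Z <= sqrt(x)) = 1 - exp(-sqrt(x)),
    # whose derivative must coincide with the PTM density.
    return 1.0 - math.exp(-math.sqrt(x))

x0, h = 4.0, 1e-5
finite_diff = (F_X(x0 + h) - F_X(x0 - h)) / (2.0 * h)  # numerical F_X'(x0)
print(finite_diff, f_X(x0))
```

The central difference of the exact CDF matches the PTM density to high accuracy, confirming the change-of-variables formula in this simple case.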
The PTM (also referred to as RVT—Random Variable Transformation) method has been successfully applied to obtain the 1-PDF of the solution of some classes of differential equations with uncertainties. In Dorini et al. (2016), the authors have obtained the 1-PDF of the solution of a logistic random differential equation. In Caraballo et al. (2019), the PTM method is applied to approximate the 1-PDF of the solution of a delay random differential equation. The PTM method has also been applied to numerically solve PDEs (Calatayud et al. 2020). In Burgos et al. (2018), some of the authors of this contribution, approximate the 1-PDF of a linear autonomous random fractional differential equation, whose order of fractional differentiation is \(0<\alpha \le 1\), by taking advantage of the PTM technique.
Let us apply Theorem 4.1 with the following identification: \(k=3\) and \(\textbf{Z} = (Z_1,Z_2,Z_3) = (Y_0,Y_1,\lambda )\). The vector \(\textbf{X} =(X_1,X_2, X_3)\) is defined by the following deterministic transformation, \(\textbf{r}=(r_1,r_2,r_3)\), of \(\textbf{Z}\), i.e., \(\textbf{X} = \textbf{r} (\textbf{Z})\), where
It can be seen that the inverse mapping of \(\textbf{r}\), \(\textbf{s} = \textbf{r}^{-1}\), is given by
The absolute value of the Jacobian of the transformation \(\textbf{s}\) is given by
Applying Theorem 4.1, the PDF of the random vector \(\textbf{X} = (X_1,X_2,X_3)\) is given by
Marginalizing with respect to \(X_2 = Y_1\) and \(X_3 = \lambda \), we obtain the 1-PDF of the approximate solution, \(Y_M(t)\),
4.2 Convergence of approximations of the 1-PDF
This subsection is devoted to showing that \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \) under mild conditions. Note that \(f_{Y_M(t)}(y)\) is given by (29), while the limit is given by
For the sake of clarity in the subsequent development, we first introduce the following notation.
Then, expressions (29) and (30) read
Before proceeding with the proof, it is important to make the following observations. Note that, with the notation of (31), the solution (22) is given by \(Y(t)= Y_0S_0(t)+Y_1S_1(t)\).
If \(Y_{0}\ne 0\), then
and \(S_0(0)=1\) with probability 1, because \(S_{1}(0)=0\). Taking into account that \(S_0(t)\) is a power series in \(t^{2 \alpha }\), and consequently continuous, we can guarantee that
Moreover, by definition (31), \(S_0^M(t)\) and \(S_1^M(t)\) are convergent series on the whole real line. Thus, these series are almost surely uniformly convergent on every compact subset of \(\mathbb {R}\). This guarantees that, for \(j=0,1 \),
Finally, note that \(S_0^M(t)\) and \(S_1^M(t)\) converge uniformly to \(S_0(t)\) and \(S_1(t)\) on \([-\delta _0, \delta _0]\), respectively. So, taking \(\varepsilon _j>0,\) \(j=0,1,\) arbitrary but fixed, there exists an integer \(M_0^{j}>0\) such that
To complete the proof, we fix t assuming that it lies within a neighborhood of \(t_0=0\), where the RFIVP is formulated and the bounds (33) and (34) hold. To prove that \(f_{Y_M(t)}(y) \longrightarrow f_{Y(t)}(y)\) as \(M \rightarrow \infty \), besides hypotheses H1 and H2, we will assume that
-
H3: The PDF, \(f_{Y_0}\), of the initial condition \(Y_0\) is Lipschitz on the whole real line, \(\mathbb {R}\), i.e., there exists \(L_0>0\) such that
$$\begin{aligned} \left| f_{Y_0}(x)-f_{Y_0}(z)\right| \le L_0 |x-z|,\quad \forall x,z\in \mathbb {R}. \end{aligned}$$
To prove the convergence, we fix t and calculate the difference \(\left| f_{Y(t)}(y)-f_{Y_M(t)}(y)\right| \) using (32).
Now, we proceed to bound the terms (I)–(IV) in (36). Let us start with term (III). First, let us denote \(F_0:=f_{Y_0}(0)\), then using hypothesis H3 and bounds (33) and (34) for \(S_0^M\) and \(S_1^M\), respectively, one gets
Using the bound (33) for \(S_0^M\) and (35) for \(j=0\), the term (IV) can be majorized by
The bound of the term (II) straightforwardly follows from the application of (33)
Finally, we proceed to bound the term (I). To this end, we first apply hypothesis H3
where in the last step we have applied (35) and (34), both for \(j=0,1\), and (33) for \(S_0^M\) and \(S_0\).
Substituting (40), (39), (37) and (38), in (36) to bound the terms (I)–(IV), respectively, one gets
Let us denote \(\mathcal {M}=\max \{M_{s,0},M_{s,1}\}\) and \(\varepsilon = \max \{\varepsilon _{0},\varepsilon _{1}\}\), then,
Fig. 1: Mean and standard deviation of the solution for different orders of truncation, \(M\in \{5,7,10,12,15\}\), in the context of Example 5.1. Convergence of these two statistical moments is clearly observed as M increases
Since, by hypothesis H2, \(Y_1\in \textrm{L}^2(\Omega )\), Schwarz's inequality gives \(\mathbb {E}[|Y_1|]\le \left( \mathbb {E}[|Y_1|^2]\right) ^{1/2}<\infty \). Then, as a consequence of the previous development, we conclude that \(f_{Y_M(t)}(y) \longrightarrow f_{Y(t)}(y)\) as \(M \rightarrow \infty \).
5 Numerical examples
This section is devoted to illustrating the theoretical findings established in the previous sections by means of two numerical examples. These examples are devised with regard to the probability distribution of the model parameter \(\lambda \), which, according to hypothesis H1, is assumed to be an essentially bounded random variable. In the first example, we will assume that \(\lambda \) has a bounded distribution. In the second example, we will illustrate how the case where \(\lambda \) is an unbounded random variable can be treated via its approximation by truncated random variables, for which hypothesis H1 holds. In this latter case, we will graphically show the convergence of the approximations of the 1-PDF of the solution stochastic process.
Example 5.1
In this first example, let us consider that the order of the fractional derivative is \(\alpha =0.5\). We will assume the following probability distributions for the model input parameters: \(Y_0\) has a Gamma distribution with parameters (1, 1), i.e., \(Y_0\sim \text {Ga}(1,1)\) (hence, \(\mathbb {E}[Y_0]=1\) and \(\mathbb {E}[Y_0^2]=2\)); \(Y_1\) has a Gaussian distribution with mean 2 and standard deviation \(\sqrt{2}\), i.e., \(Y_1\sim \text {N}(2,(\sqrt{2})^2)\) (hence, \(\mathbb {E}[Y_1]=2\) and \(\mathbb {E}[Y_1^2]=6\)); and \(\lambda \) has a Beta distribution with parameters (2, 3), i.e., \(\lambda \sim \text {Be}(2,3)\). According to (24) and (28), to compute the mean and the second-order moment of the solution, besides knowing the first two moments of \(Y_0\) and \(Y_1\), it is also required to pre-calculate the higher moments \(\mathbb {E}[\lambda ^k]\), \(k\in \mathbb {N}\), which are explicitly known when \(\lambda \sim \text {Be}(2,3)\),
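These Beta moments admit a standard closed form: for \(\lambda \sim \text {Be}(\alpha ,\beta )\), \(\mathbb {E}[\lambda ^k]=\prod _{r=0}^{k-1}(\alpha +r)/(\alpha +\beta +r)\). As a minimal illustrative sketch (the helper name is ours, not taken from any code accompanying the paper), the pre-calculation can be done as follows:

```python
from math import prod

def beta_moment(alpha: float, beta: float, k: int) -> float:
    """k-th raw moment E[lambda^k] of lambda ~ Be(alpha, beta),
    via the product formula prod_{r=0}^{k-1} (alpha+r)/(alpha+beta+r)."""
    return prod((alpha + r) / (alpha + beta + r) for r in range(k))

# Moments of lambda ~ Be(2, 3), as required by (24) and (28)
moments = [beta_moment(2, 3, k) for k in range(1, 6)]
```

For \(\lambda \sim \text {Be}(2,3)\) this gives \(\mathbb {E}[\lambda ]=2/5\) and \(\mathbb {E}[\lambda ^2]=1/5\), matching the known mean and second raw moment of that distribution.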
In Fig. 1 we show, for \(t\in [0,1]\), the mean and the standard deviation of the solution for different orders of truncation \(M \in \{5,7,10,12,15\}\). To clearly illustrate the convergence as M increases, each subfigure includes a zoom at time instants t close to 1, where the graphs can be distinguished.
In Fig. 2, the 1-PDF \(f_{Y_M}(y)\) of the solution, given in (29), has been plotted for different orders of truncation \(M\in \{2,3,4,5,6\}\) and time instants \(t\in \{0.25,0.5,0.75\}\). We can graphically observe the convergence of the 1-PDFs, studied in Sect. 4.2, as M increases. For better visualization of this convergence, each subplot includes a zoom around the maximum of these functions. From the symmetry of the 1-PDFs, the mean lies close to the point y where the maximum occurs; the zoom thus allows us to verify that the mean observed in Fig. 2 matches the mean obtained in Fig. 1.
Example 5.2
As mentioned before, the objective of this second example is to illustrate how the case where the random variable \(\lambda \) is not bounded can be approximated. To this end, \(\lambda \) is truncated on an interval containing a high percentage of the probability mass. It is important to remark that this approach only approximates the original problem; nevertheless, the more probability mass the truncation interval contains, the better the approximation will be.
On the one hand, we consider that \(\lambda \) has a truncated Gaussian distribution with mean 0 and standard deviation 0.2 on the interval \([-100,100]\), i.e., \(\lambda \sim \textrm{N}_{[-100,100]}(0,0.2^2)\). The truncation of a \(\textrm{N}(0,0.2^2)\) over the interval \([-100,100]\) captures \(99.9999\%\) of the total probability mass.
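The captured probability mass can be checked directly from the Gaussian cumulative distribution function; since \([-100,100]\) spans \(\pm 500\) standard deviations of \(\textrm{N}(0,0.2^2)\), numerically all of the mass is retained. A small sketch (the helper is ours, using only the standard-library error function):

```python
from math import erf, sqrt

def normal_mass(a: float, b: float, mu: float, sigma: float) -> float:
    """Probability mass that N(mu, sigma^2) assigns to the interval [a, b],
    computed as Phi((b-mu)/sigma) - Phi((a-mu)/sigma)."""
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return Phi((b - mu) / sigma) - Phi((a - mu) / sigma)

# Mass of N(0, 0.2^2) on [-100, 100]: indistinguishable from 1 in floating point
mass = normal_mass(-100.0, 100.0, 0.0, 0.2)
```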
On the other hand, we will assume that the order of the derivative is \(\alpha =0.4\), that \(Y_0\) has an Exponential distribution with parameter 2, i.e., \(Y_0\sim \textrm{Exp}(2)\), and that \(Y_1\) has a Beta distribution with parameters (2, 4), i.e., \(Y_1\sim \textrm{Be}(2,4)\). The first two moments of \(Y_0\) and \(Y_1\), required to compute the mean and the standard deviation, are then \(\mathbb {E}[Y_0]=1/2\), \(\mathbb {E}[Y_0^2]=1/2\), \(\mathbb {E}[Y_1]=1/3\) and \(\mathbb {E}[Y_1^2]=1/7\). It is also necessary to know the higher-order moments of the random variable \(\lambda \sim \textrm{N}_{[-100,100]}(0,0.2^2)\). They can be calculated by
where
This calculation approximates the moments
where \((k-1)!!\) denotes the double factorial, i.e., the product of all integers from \(k-1\) down to 1 that have the same parity as \(k-1\). Here, \(\sigma =0.2\). This approximation is based on the fact that, according to Chebyshev’s inequality, the truncated Gaussian random variable captures \(99.9999\%\) of the probability mass of the original Gaussian random variable \(\textrm{N}(0,0.2^2)\).
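As a numerical sanity check of this approximation (the helper names are ours): the moments of the truncated variable can be approximated by quadrature and compared against the classical Gaussian moments \(\sigma ^k (k-1)!!\) for even k (odd moments vanish by symmetry). For truncation at \(\pm 100 = \pm 500\sigma \), the two agree to high accuracy.

```python
from math import exp, pi, sqrt

def truncated_normal_moment(k: int, sigma: float, a: float, b: float,
                            n: int = 200001) -> float:
    """E[lambda^k] for N(0, sigma^2) truncated to [a, b], by composite
    Simpson quadrature of x^k * phi(x), normalized by the captured mass
    (the common factor h/3 cancels in the ratio)."""
    phi = lambda x: exp(-x * x / (2.0 * sigma * sigma)) / (sigma * sqrt(2.0 * pi))
    h = (b - a) / (n - 1)
    num = den = 0.0
    for i in range(n):
        x = a + i * h
        w = 1 if i in (0, n - 1) else (4 if i % 2 else 2)
        p = phi(x)
        num += w * (x ** k) * p
        den += w * p
    return num / den

def gaussian_moment(k: int, sigma: float) -> float:
    """Closed-form k-th moment of N(0, sigma^2): sigma^k (k-1)!! for even k, 0 otherwise."""
    if k % 2:
        return 0.0
    df = 1
    for j in range(k - 1, 0, -2):  # double factorial (k-1)!!
        df *= j
    return (sigma ** k) * df
```

With \(\sigma =0.2\), both routines give \(\mathbb {E}[\lambda ^2]\approx 0.04\) and \(\mathbb {E}[\lambda ^4]\approx 0.0048\), illustrating why replacing the truncated moments by the closed-form Gaussian ones is harmless here.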
In Fig. 3, we show the approximations of the mean and the standard deviation of the solution for \(t\in [0,1]\), considering different orders of truncation \(M \in \{7, 10, 12, 15, 17\}\). As in the previous example, to better show the convergence as M increases, we have magnified the plot around \(t=1\), where the discrepancies could be greatest. We can see that the approximations are very accurate.
In Fig. 4, plots of the 1-PDF at times \(t\in \{0.25,0.5,0.75\}\) for different orders of truncation have been included. A zoom has been added at the maximum of each plot to better show graphically the convergence proved in Sect. 4.2.
6 Conclusions
In this paper, we have presented a comprehensive analysis of the fractional Hermite differential equation with uncertainties in all its data (coefficient and initial conditions). Our study has been based on the so-called random differential equation approach. To perform the study, we first have constructed a random generalized power series and proved that this solution is mean square convergent under mild hypotheses on the data. Second, we have taken advantage of a key property of mean square convergence to approximate the mean and the variance of the solution. Afterwards, we have constructed approximations of the first probability density function of the solution using the so-called Probability Transformation Method. We have also shown that these approximations are convergent under assumptions that are fulfilled in many practical applications.
The main spirit of the paper is to continue developing new results in the setting of Fractional Calculus with uncertainty, where results for random fractional differential equations are still scarce. In this sense, the results presented in this paper for the random fractional Hermite equation can inspire extensions of our analysis to other significant random fractional second-order differential equations in forthcoming contributions. Furthermore, the ideas developed in this contribution may help to extend the deterministic theory for other families of polynomials, such as those studied in Cesarano (2014), Cesarano et al. (2014), Cesarano et al. (2005) and Quintana et al. (2018), to the fractional random framework.
References
AbdelAty AM, Soltan A, Ahmed W, Radwan AG (2016) Hermite polynomials in the fractional order domain suitable for special filters design. In: 13th international conference on electrical engineering/electronics, computer, telecommunications and information technology (ECTI-CON). https://doi.org/10.1109/ECTICon.2016.7561396
Banks HT, Hu S, Thompson WC (2014) Modeling and inverse problems in the presence of uncertainty. CRC Press, Boca Raton (ISBN: 9781482206432)
Burgos C, Cortés JC, Villafuerte L, Villanueva RJ (2017) Extending the deterministic Riemann-Liouville and Caputo operators to the random framework: a mean square approach with applications to solve random fractional differential equations. Chaos Solitons Fract 102:305–318
Burgos C, Calatayud J, Cortés JC, Navarro A (2018) A full probabilistic solution of the random linear fractional differential equation via the random variable transformation technique. Math Methods Appl Sci 41(18):9037–9047
Burgos C, Cortés JC, Debbouche A, Villafuerte L, Villanueva RJ (2019) Random fractional generalized Airy differential equations: a probabilistic analysis using mean square calculus. Appl Math Comput 352:15–29. https://doi.org/10.1016/j.amc.2019.01.039
Burgos-Simón C, Cortés JC, Martínez-Rodríguez D, Villanueva RJ (2020) Modeling breast tumor growth by a randomized logistic model: a computational approach to treat uncertainties via probability densities. Eur Phys J Plus 135(10):1–14. https://doi.org/10.1140/EPJP/S13360-020-00853-3
Calatayud J, Cortés J-C, Díaz JA, Jornet M (2020) Constructing reliable approximations of the probability density function to the random heat PDE via a finite difference scheme. Appl Numer Math 151:413–424. https://doi.org/10.1016/j.apnum.2020.01.012
Calbo G, Cortés J-C, Jódar L (2011) Random Hermite differential equations: mean square power series solutions and statistical properties. Appl Math Comput 8(7):3654–3666. https://doi.org/10.1016/j.amc.2011.09.008
Caraballo T, Cortés JC, Navarro A (2019) Applying the random variable transformation method to solve a class of random linear differential equation with discrete delay. Appl Math Comput 356:198–218
Cesarano C (2014) Generalized Chebyshev polynomials. Hacettepe J Math Stat 43:731–740
Cesarano C, Germano B, Ricci PE (2005) Laguerre-type Bessel functions. Integr Transform Spec Funct 16:315–322
Cesarano C, Cennamo GM, Placidi L (2014) Humbert polynomials and functions in terms of Hermite polynomials towards applications to wave propagation. Wseas Trans Math 13:595–602
Cortés JC, Sevilla-Peris P, Jódar L (2005) Analytic-numerical approximating processes of diffusion equation with data uncertainty. Comput Math Appl 49(7–8):1255–1266. https://doi.org/10.1016/j.camwa.2004.05.015
Der Kiureghian A, Ditlevsen O (2009) Aleatory or epistemic? Does it matter? Struct Saf 31(2):105–112. https://doi.org/10.1016/j.strusafe.2008.06.020
Diethelm K (2010) The analysis of fractional differential equations: an application-oriented exposition using differential operators of Caputo type. Springer, Berlin (ISBN: 978-3-642-14574-2)
Dorini FA, Cecconello MS, Dorini LB (2016) On the logistic equation subject to uncertainties in the environmental carrying capacity and initial population density. Commun Nonlinear Sci Numer Simul 33:160–173. https://doi.org/10.1016/j.cnsns.2015.09.009
Du M, Wang Z, Hu H (2013) Measuring memory with the order of fractional derivative. Sci Rep 3(1):1–3. https://doi.org/10.1038/srep03431
Frunzo L, Garra R, Giusti A, Luongo V (2019) Modeling biological systems with an improved fractional Gompertz law. Commun Nonlinear Sci Numer Simul 74:260–267. https://doi.org/10.1016/j.cnsns.2019.03.024
Honguang S, Yong Z, Baleanu D, Wen C, Yangquan C (2018) A new collection of real world applications of fractional calculus in science and engineering. Commun Nonlinear Sci Numer Simul 64:213–231. https://doi.org/10.1016/j.cnsns.2018.04.019
Khan NA, Ara A, Alam K (2013) Fractional-order Riccati differential equation: analytical approximation and numerical results. Adv Differ Equ. https://doi.org/10.1186/1687-1847-2013-185
Kloeden PE, Platen E (1992) Stochastic differential equations. In: Numerical solution of stochastic differential equations. Applications of mathematics. Springer, p 23. https://doi.org/10.1007/978-3-662-12616-5_4
Michalowicz JV, Nichols JM, Bucholtz F (2013) Handbook of differential entropy. CRC Press, Boca Raton (9780429072246)
Nieto JJ (2022) Solution of a fractional logistic ordinary differential equation. Appl Math Lett 123:107568. https://doi.org/10.1016/j.aml.2021.107568
Quintana Y, Ramírez W, Urieles A (2018) On an operational matrix method based on generalized Bernoulli polynomials of level m. Calcolo 55:30. https://doi.org/10.1007/s10092-018-0272-5
Rivero M, Rodríguez-Germá L, Trujillo JJ (2008) Linear fractional differential equations with variable coefficients. Appl Math Lett 21(9):892–897. https://doi.org/10.1016/j.aml.2007.09.010
Smith RC (2014) Uncertainty quantification: theory, implementation and applications. Computational science and engineering. SIAM, New York
Soong TT (1973) Random differential equations in science and engineering. Academic Press, New York (ISBN: 9780080956121)
Acknowledgements
This work has been supported by the grant PID2020-115270GB–I00 funded by MCIN/AEI/10.13039/501100011033 (Spanish “Agencia Estatal de Investigación”), the grant AICO/2021/302 (Generalitat Valenciana) and the postdoctoral grant Margarita Salas from the Universitat Politècnica de València, funded by the Spanish Ministry of Universities and Next Generation EU.
Ethics declarations
Conflict of interest
The authors declare that there is no conflict of interests regarding the publication of this article.
Additional information
Communicated by Fabio Durastante.
Burgos, C., Caraballo, T., Cortés, J.C. et al. Constructing reliable approximations of the random fractional Hermite equation: solution, moments and density. Comp. Appl. Math. 42, 140 (2023). https://doi.org/10.1007/s40314-023-02274-1
Keywords
- Random fractional Hermite differential equation
- Random mean square calculus
- Statistical moments
- First probability density function