1 Introduction

The extension of many classical results to the context of fractional calculus has allowed their successful application to a number of practical problems. In particular, fractional-order derivatives have proven to be powerful tools to better describe systems, media and fields characterized by non-local effects and power-law memory, which are often met in models appearing in physics, control, signal and image processing, mechanics, dynamic systems, biology, environmental science, materials, economics and multidisciplinary engineering fields (Honguang et al. 2018). The aforementioned extension is often carried out from relevant models formulated via classical differential equations that have been generalized using different fractional-order derivatives. Examples in this regard include the linear, logistic, Riccati and Gompertz equations; see Rivero et al. (2008), Nieto (2022), Khan et al. (2013) and Frunzo et al. (2019), just to mention a few.

On the other hand, the application of fractional differential equations to modeling the dynamics of complex phenomena using real-world data involves the rigorous treatment of randomness coming from the combination of epistemic and aleatoric uncertainties (Kiureghian and Ditlevsen 2009). Epistemic (or systematic) uncertainty appears because of inaccurate measurements or because the model simplifies the true complexity of the phenomenon under study, neglecting certain effects, while aleatoric (or stochastic) uncertainty comes from the fact that different outcomes are obtained when we run or observe the same experiment. These facts lead to stochastic or random fractional differential equations. As is accurately pointed out in (Smith 2014, p. 96), it is important to underline that there is a growing trend in the Uncertainty Quantification community to treat stochastic and random differential equations as synonymous terms, when in fact they require completely different approaches for analysis and approximation. In dealing with stochastic differential equations (SDEs), uncertainties are forced by an irregular process, such as the Brownian motion or, more generally, a Wiener process. SDEs are typically represented in terms of stochastic differentials, but they must be interpreted as Itô or Stratonovich stochastic integrals (Smith 2014, p. 97), Kloeden and Platen (1992). The role of uncertainty is essentially different in random differential equations (RDEs). Indeed, in the setting of these equations, random effects are directly manifested through coefficients, initial/boundary conditions, and/or source terms that are assumed to be well-behaved (e.g., continuous) with respect to time and/or space (Smith 2014, p. 97), Soong (1973). As pointed out in (Banks et al. 2014, p. 258), overall the theory of RDEs is much less advanced than that for SDEs. This fact is even more noticeable in the case of RDEs formulated by means of fractional-order derivatives.

The aim of this paper is to continue contributing to the realm of Fractional Calculus by extending the analysis of the Hermite differential equation in a twofold sense, namely introducing both fractional derivatives and uncertainties in its formulation. For the former goal, the mean square Caputo fractional derivative will be used, while for the latter we will rely on the RDE approach.

On the one hand, the fractional Hermite differential equation, based on the Caputo operator, has been studied to introduce fractional Hermite polynomials and with applications to the design of special filters (AbdelAty et al. 2016). On the other hand, the random Hermite equation

$$\begin{aligned} Y''(t) -2tY'(t)+\lambda Y(t)=0, \quad Y(0)=Y_0,\quad Y'(0)=Y_1, \end{aligned}$$
(1)

where \(Y_0\), \(Y_1\) and \(\lambda \) are random variables, has been studied in Calbo et al. (2011) using the so-called mean square calculus (Soong 1973). In this latter contribution, one constructs a power series solution for the randomized classical Hermite differential equation and then both the expectation and the variance of the solution are approximated. Apart from the above-mentioned contributions, and to the best of our knowledge, no contribution has yet dealt with the study of the random fractional Hermite differential equation. So, in some sense, the present paper is aimed at extending the results that are available so far. Moreover, as shall be seen, we will also give a method to calculate the first probability density function of the solution, which is a more ambitious goal.

Hereinafter, we will work on the Lebesgue spaces \(\textrm{L}^p(\mathcal {D}) \equiv \textrm{L}^p(\mathcal {D},\textrm{d}\mu )\), \(1\le p < \infty \), whose elements are real-valued measurable functions \(h:\mathcal {D} \longrightarrow \mathbb {R}\) with the norm \(\Vert h \Vert _{\textrm{L}^p(\mathcal {D})}=\left( \int _{\mathcal {D}} |h|^p \textrm{d} \mu \right) ^{1/p}<\infty \). In the case that \(p=\infty \), recall that the norm is defined as \(\Vert h \Vert _{\textrm{L}^{\infty }(\mathcal {D})}=\inf \{ \sup \{|h(t)|: t\in \mathcal {D} {\setminus } \mathcal {N} \}: \mu (\mathcal {N})=0 \}< \infty \). For \(p=\infty \), elements in the space \(\textrm{L}^{\infty }(\mathcal {D})\) are essentially bounded functions. Classically, \(\mathcal {D}=\mathcal {T}\subset \mathbb {R}\) is an interval and \(\textrm{d}\mu =\textrm{d}t\) is the Lebesgue measure. Throughout the paper, as we shall also work with random variables and stochastic processes, we will implicitly take \(\mathcal {D}=\Omega \) (sample space) and \(\mu =\mathbb {P}\) (probability measure), and \(\mathcal {D}=\mathcal {T} \times \Omega \) and \(\textrm{d}\mu =\textrm{d} t \times \textrm{d} \mathbb {P}\), respectively. Notice that \(X\in \textrm{L}^{p}(\Omega ) \) if and only if \(\Vert X \Vert _{\textrm{L}^{p}(\Omega )}=\left( \mathbb {E}[|X|^p] \right) ^{1/p}<\infty \), where \(\mathbb {E}[\,]\) denotes the expectation operator, and, \(X\equiv X(t)\in \textrm{L}^{p}(\mathcal {T} \times \Omega ) \) if and only if \(\Vert X \Vert _{\textrm{L}^{p}(\mathcal {T} \times \Omega )}=\left( \mathbb {E} \left[ \int _{\mathcal {T}}|X(t)|^p \, \textrm{d}t \right] \right) ^{1/p}<\infty \). Any stochastic process X(t) in \(\textrm{L}^{p}(\mathcal {T} \times \Omega )\) can be interpreted as a set of random variables in \(\textrm{L}^{p}( \Omega )\) indexed by \(t\in \mathcal {T}\). An important result in the above probabilistic Lebesgue spaces is the so-called Liapunov’s inequality

$$\begin{aligned} (\mathbb {E}[|X|^{r}])^{1/r} \le (\mathbb {E}[|X|^{s}])^{1/s}, \quad 0<r\le s, \end{aligned}$$

provided the expectation \(\mathbb {E}[|X|^{s}]<\infty \). This result indicates that \(\textrm{L}^{s}(\Omega ) \subset \textrm{L}^{r}(\Omega )\), \(0<r\le s\); as a consequence, in the probabilistic setting, it is preferable to establish results in the largest of these spaces guaranteeing a finite variance, namely \(\textrm{L}^{2}(\Omega )\), whose elements are real-valued random variables, \(X:\Omega \longrightarrow \mathbb {R}\), with finite second-order moment \(\mathbb {E}[X^2]<\infty \) (equivalently, finite variance). The elements of \(\textrm{L}^2(\Omega )\) are usually called second-order random variables. It can be proven that \(\textrm{L}^2(\Omega )\) is a Hilbert space with the inner product \(\left<X,Y\right> = \mathbb {E}[XY]\), from which one infers the so-called 2-norm: \(||X||_2 = \sqrt{\left<X,X\right> }= \mathbb {E}[X^2]^{\frac{1}{2}}.\) A sequence of second-order random variables, \(\{X_n: n\ge 0\,\, \text {integer}\}\), is said to be mean square convergent to a random variable \(X\in \textrm{L} ^2(\Omega )\) if and only if \(||X-X_n||_2\longrightarrow 0\) as \(n\rightarrow \infty \). In the case that the collection of second-order random variables is indexed with reference to an interval, say \(\mathcal {T}\subset \mathbb {R}\), then \(\{X(t):t\in \mathcal {T}\}\) is called a second-order stochastic process. The concepts of continuity, differentiability and integrability in the mean square sense are naturally inferred from the 2-norm. When trying to prove the mean square convergence of a sequence of second-order stochastic processes that defines the solution of a random fractional differential equation, it is often required to bound products of random variables. Unfortunately, the inequality \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _2\), \(X,Y\in \textrm{L}^2(\Omega )\), does not hold in general. However, the Hölder inequality

$$\begin{aligned} || X Y ||_r \le ||X||_p ||Y||_q, \quad 0 <p,q,r \le \infty , \quad \frac{1}{r}= \frac{1}{p} + \frac{1}{q}, \end{aligned}$$
(2)

applied to \(r=p=2\) and \(q=\infty \) leads to \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _{\infty }\). This result, which relates the Lebesgue spaces \(\textrm{L}^2(\Omega )\) and \(\textrm{L}^{\infty }(\Omega )\), will be very useful in our subsequent analysis to properly majorize some quantities and then establish the mean square convergence. After doing that, we will be interested in computing reliable approximations of the main moments of the solution, such as the expectation and the variance. To achieve this important goal, the following property of the mean square convergence will play a key role.

Proposition 1.1

(Soong 1973, Th 4.4.3) Let \(\{X_n: n\ge 0\}\) be a sequence of second-order random variables such that \(X_n \longrightarrow X\) as \(n \rightarrow \infty \) in the mean square sense. Then,

$$\begin{aligned} \mathbb {E}\left[ X_n \right] \xrightarrow [n \rightarrow \infty ]{} \mathbb {E}\left[ X \right] ,\quad \mathbb {V}\left[ X_n \right] \xrightarrow [n \rightarrow \infty ]{} \mathbb {V}\left[ X \right] . \end{aligned}$$
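To fix ideas, the following toy Monte Carlo sketch (with assumed distributions, not taken from the paper) illustrates two facts used repeatedly below: the mixed bound \(\Vert XY\Vert _2\le \Vert X\Vert _2 \Vert Y\Vert _{\infty }\) for a bounded factor, and the moment convergence stated in Proposition 1.1.

```python
# Toy Monte Carlo illustration (assumed distributions, not from the paper) of two
# facts used below: the mixed Hoelder bound ||X Y||_2 <= ||X||_2 ||Y||_inf for a
# bounded factor Y, and Proposition 1.1 (mean square convergence of X_n to X
# implies convergence of the means and variances).
import numpy as np

rng = np.random.default_rng(0)
X = rng.gamma(2.0, 1.0, size=500_000)        # E[X] = 2, V[X] = 2
Y = rng.uniform(-3.0, 3.0, size=X.size)      # bounded, so ||Y||_inf = 3

norm2 = lambda Z: np.sqrt(np.mean(Z ** 2))
print(norm2(X * Y) <= norm2(X) * 3.0)        # mixed Hoelder bound: True

Z = rng.normal(size=X.size)                  # X_n := X + Z/n -> X in mean square
for n in (1, 10, 100):
    Xn = X + Z / n
    print(n, round(Xn.mean(), 3), round(Xn.var(), 3))   # -> E[X] = 2, V[X] = 2
```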

In this paper, we shall study the following random fractional initial value problem (RFIVP), which extends, to the fractional setting, the random (classical) Hermite equation previously introduced in (1):

$$\begin{aligned} \begin{aligned}&( ^C D_0^{2 \alpha } Y)(t)-2 t^\alpha ( ^C D_0^{\alpha } Y)+\lambda Y(t)=0,\quad Y(0)=Y_0,\quad Y'(0)=Y_1. \end{aligned} \end{aligned}$$
(3)

Here, \((^C D_0^{\alpha } Y)(t)\) stands for the Caputo mean square derivative of order \(\alpha >0\) of the second-order stochastic process Y(t), and \(\lambda \), \(Y_0\) and \(Y_1\) are second-order random variables defined on a complete probability space \((\Omega ,\mathcal {F},\mathbb {P})\). Let us recall that, given a second-order stochastic process, the random Caputo operator is defined by Burgos et al. (2017)

$$\begin{aligned} ( ^C D_0^{\alpha }Y)(t):=\frac{1}{\Gamma (n-\alpha )}\int _0^t (t-u)^{n-\alpha -1} Y^{(n)}(u)\textrm{d}u, \end{aligned}$$
(4)

where \(n=-[-\alpha ]\), \([\cdot ]\) being the floor function; that is, \(n=\lceil \alpha \rceil \) is the smallest integer greater than or equal to \(\alpha \). Since in the classical setting the Hermite equation is a second-order differential equation, hereinafter we will assume that \(\alpha \in ]0,1]\) in (3). It is important to remark that, throughout this paper, we take \(( ^C D_0^{2 \alpha } Y)(t):= ( ^C D_0^{ \alpha }(^C D_0^{ \alpha } Y) )(t).\)

This paper is organized as follows. Section 2 is devoted to the construction of a mean square convergent solution of the RFIVP (3). In Sect. 3, we take advantage of Proposition 1.1 together with the results established in Sect. 2 to construct reliable approximations of the mean and of the standard deviation (equivalently, the variance) functions for the solution of the RFIVP (3). To complete our probabilistic study, in Sect. 4 we will go further and, first, we will construct formal approximations of the probability density function of the solution in Sect. 4.1 and, second, in Sect. 4.2 we will rigorously prove that they are convergent. In Sect. 5, we illustrate all our theoretical findings by means of two numerical examples, where a wide range of probability distributions for model parameters is considered to better illustrate the applicability of the results.

2 Obtaining a mean square convergent solution for the Hermite random fractional differential equation

This section is devoted to constructing a convergent solution of the random IVP (3) in the so-called mean square sense (Soong 1973). The solution, which is a stochastic process, will be constructed, by means of a generalized random power series, by applying the extension of the classical Fröbenius method to the stochastic setting. To guarantee the mean square convergence of the above-mentioned series, we will impose some conditions, to be specified later, on the random coefficient \(\lambda \) and on the random initial conditions, \(Y_0\) and \(Y_1\).

According to the random Fröbenius method, let us assume that the solution, Y(t), can be expanded via a generalized random power series,

$$\begin{aligned} Y(t) = \sum _{m=0}^\infty X_m t^{\alpha m}, \end{aligned}$$
(5)

where \(\{X_m\}\) is a sequence of random variables in \(\textrm{L}^2(\Omega )\) to be determined. To calculate \(X_m\), using the random Fröbenius method, we will impose that (5) is a solution of the random IVP (3). To this end, we need to determine the mean square Caputo fractional derivatives, \( (^C D_0^{\alpha } Y)(t)\) and \( (^C D_0^{ 2 \alpha } Y)(t)\), of the stochastic process given in (5). We first deal with \( (^C D_0^{\alpha } Y)(t)\), which, according to (4), is defined in terms of the first-order mean square derivative of Y(t), denoted by \(Y'(t)\). To rigorously do that, we will apply (Cortés et al. 2005, Theorem 3.1). Let us first denote \(U_m(t):=X_m t^{\alpha m}\). Applying (Soong 1973, Property 4.126) with the identification \(f(t)=t^{\alpha m}\) and \(X(t) = X_m\) (constant), one gets that \(U_m(t)\) is mean square differentiable and \(U_m'(t)= \alpha m X_m t^{\alpha m-1}.\) Furthermore, by the assumption \(X_m\in \textrm{L}^2(\Omega )\), \(U_m(t)\) and \(U_m'(t)\) are mean square continuous for each \(m\ge 0\).

Later, once the coefficients \(X_m\) have been explicitly determined, we will justify that \( Y(t)=\sum _{m=0}^\infty U_m(t)\) is mean square convergent for all real \(t>0\) and \(\sum _{m=0}^\infty U_m'(t)\) is mean square uniformly convergent on \([-K,K ]\) for any positive K. Then,

$$\begin{aligned} Y'(t) = \sum _{m=0}^\infty U_m'(t)=\sum _{m=1}^\infty \alpha m X_{m} t^{\alpha m-1} \end{aligned}$$
(6)

will be justified, in the mean square sense, by (Cortés et al. 2005, Theorem 3.1).

Now, we shall calculate the mean square Caputo derivative of the stochastic process Y(t), \(( ^C D_0^{\alpha } Y)(t)\), \(0<\alpha \le 1\). Recall that the Caputo derivative of the deterministic power function \(t^\nu \) is given by

$$\begin{aligned} (^C D_0^\alpha ) (t^\nu ) = \left\{ \begin{array}{ccl} \frac{\Gamma (\nu +1)}{\Gamma (\nu +1-\alpha )}t^{\nu -\alpha } &{} &{} \text {if}\quad \nu > 0,\\ 0 &{} &{} \text {if} \quad \nu = 0, \end{array} \right. \end{aligned}$$
(7)

see (Diethelm 2010, Example 3.1). Then, taking into account (6) and (7), one gets

$$\begin{aligned} ( ^C D_0^{\alpha } Y)(t)&= \frac{1}{\Gamma (1-\alpha )}\int _0^t(t-u)^{-\alpha }Y'(u)\textrm{d}u\nonumber \\&= \frac{1}{\Gamma (1-\alpha )}\int _0^t(t-u)^{-\alpha }\left( \sum _{m=0}^\infty U_m(u)\right) '\textrm{d}u\nonumber \\&= \sum _{m=0}^\infty \frac{1}{\Gamma (1-\alpha )}\int _0^t(t-u)^{-\alpha }\left( U_m'(u)\right) \textrm{d}u\nonumber \\&= \sum _{m=0}^\infty \, ^CD^\alpha _0 \left( U_m(t)\right) \nonumber \\&= \sum _{m=0}^\infty \, X_m \, ^CD^\alpha _0 \left( t^{\alpha m}\right) \nonumber \\&= \sum _{m=1}^\infty X_m \frac{\Gamma (\alpha m +1)}{\Gamma (\alpha (m-1)+1)} t^{\alpha (m-1)} \nonumber \\&= \sum _{m=0}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1) +1)}{\Gamma (\alpha m+1)} t^{\alpha m}. \end{aligned}$$
(8)

Notice that we have used the mean square uniform convergence of \(\sum _{m=0}^\infty U_m'(t)\) to legitimize the interchange of the series and the integral.
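As a quick numerical sanity check of the power rule (7) used in the previous computation (the values of \(\alpha \), \(\nu \) and t below are arbitrary assumptions made only for illustration), one may compare the integral definition (4) with the closed form by quadrature:

```python
# Quick numerical sanity check (alpha, nu, t are arbitrary assumed values) that the
# integral definition (4), with n = 1 since 0 < alpha <= 1, reproduces the Caputo
# power rule (7) for the deterministic function t^nu.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

alpha, nu, t = 0.6, 1.7, 2.3

# 1/Gamma(1-alpha) * int_0^t (t-u)^(-alpha) * d/du[u^nu] du; the weight 'alg'
# handles the integrable endpoint singularity (t-u)^(-alpha)
integral, _ = quad(lambda u: nu * u ** (nu - 1.0), 0.0, t,
                   weight='alg', wvar=(0.0, -alpha))
caputo_numeric = integral / gamma(1.0 - alpha)

caputo_exact = gamma(nu + 1.0) / gamma(nu + 1.0 - alpha) * t ** (nu - alpha)
print(caputo_numeric, caputo_exact)   # the two values agree to quadrature accuracy
```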

Now, we proceed to compute \(( ^C D_0^{2 \alpha } Y)(t)\) by applying Caputo's fractional operator once more to (8),

$$\begin{aligned} ( ^C D_0^{2 \alpha } Y)(t)=&\ ^C D_0^\alpha ( ^C D_0^{ \alpha } Y)(t) = \ ^C D_0^\alpha \left( \sum _{m=0}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1) +1)}{\Gamma (\alpha m+1)} t^{\alpha m}\right) \nonumber \\&= \sum _{m=0}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1)+1)}{\Gamma (\alpha m +1)} \ ^C D_0^\alpha (t^{\alpha m})\nonumber \\&= \sum _{m=1}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1)+1)}{\Gamma (\alpha m+1)} \frac{\Gamma (\alpha m+1)}{\Gamma (\alpha m +1 - \alpha )}t^{\alpha m -\alpha }\nonumber \\&=\sum _{m=1}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1)+1)}{\Gamma (\alpha (m-1)+1)}t^{\alpha m -\alpha }\nonumber \\&=\sum _{m=0}^\infty X_{m+2} \frac{\Gamma (\alpha (m+2)+1)}{\Gamma (\alpha m +1)}t^{\alpha m}\nonumber \\&=X_2 \frac{\Gamma (2\alpha +1)}{\Gamma (1)}+\sum _{m=1}^\infty X_{m+2} \frac{\Gamma (\alpha (m+2)+1)}{\Gamma (\alpha m +1)}t^{\alpha m}\nonumber \\&=X_2 \Gamma (2\alpha +1)+\sum _{m=0}^\infty X_{m+3} \frac{\Gamma (\alpha (m+3)+1)}{\Gamma (\alpha (m+1) +1)}t^{\alpha (m+1)}. \end{aligned}$$
(9)

Once \(( ^C D_0^{\alpha } Y)(t)\) and \(( ^C D_0^{ 2 \alpha } Y)(t)\) have been computed, we formally plug expressions (8), (9) and (5) into the RFIVP (3), which gives

$$\begin{aligned} \begin{aligned} 0&=( ^C D_0^{2 \alpha } Y)(t)-2 t^\alpha ( ^C D_0^{\alpha } Y)+\lambda Y(t)\\&=X_2 \Gamma (2\alpha +1)+\sum _{m=0}^\infty X_{m+3} \frac{\Gamma (\alpha (m+3)+1)}{\Gamma (\alpha (m+1) +1)}t^{\alpha (m+1)} \\&\quad - 2 \sum _{m=0}^\infty X_{m+1} \frac{\Gamma (\alpha (m+1) +1)}{\Gamma (\alpha m+1)} t^{\alpha (m+1)}+ \lambda X_0 + \lambda \sum _{m=0}^\infty X_{m+1} t^{\alpha (m+1)}\\&=\Gamma (2 \alpha + 1) X_2+ \lambda X_0 \\&\quad +\sum _{m=0}^\infty \left( \frac{\Gamma (\alpha (m+3)+1)}{\Gamma (\alpha (m+1)+1)}X_{m+3}-2 \frac{\Gamma (\alpha (m+1)+1)}{\Gamma (\alpha m +1)}X_{m+1}+\lambda X_{m+1} \right) t^{\alpha (m+1)}. \end{aligned} \end{aligned}$$

This relation is fulfilled by choosing the coefficients \(X_m\) such that

$$\begin{aligned} \begin{aligned} X_2&= -\frac{\lambda }{\Gamma (2\alpha +1)} X_0, \end{aligned} \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} X_{m+3}&= \frac{\Gamma (\alpha (m+1)+1)}{\Gamma (\alpha (m+3)+1)} \left( \frac{2 \Gamma (\alpha (m+1)+1)}{\Gamma (\alpha m +1)}-\lambda \right) X_{m+1}, \quad m \ge 0. \end{aligned} \end{aligned}$$
(10)

Note that the terms \(X_0\) and \(X_1\) are obtained from the initial conditions given in (3), \(X_0=Y(0)=Y_0\) and \(X_1=Y'(0)=Y_1\). As can be observed from Eq. (10), odd and even terms, \(X_m\), are independently defined. By recursion, it is easy to check that they can be explicitly expressed as follows

$$\begin{aligned} X_m = \frac{\Gamma (\alpha +1)}{\Gamma (m\alpha +1)} \prod _{k=0}^{\frac{m-3}{2}}\left( 2\frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2k\alpha +1)}-\lambda \right) X_1, \quad m\ge 3,\,\, \text {m odd}, \end{aligned}$$

and

$$\begin{aligned} X_m =\frac{\Gamma (2 \alpha +1)}{\Gamma (m\alpha +1)} \prod _{k=1}^{\frac{m-2}{2}}\left( 2\frac{\Gamma (2k\alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) X_2, \quad m\ge 4,\,\, \text {m even}, \end{aligned}$$

respectively.
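These closed forms can be cross-checked numerically against the recursion (10); the following sketch (with arbitrarily chosen values for \(\alpha \), \(\lambda \), \(X_0\) and \(X_1\), which are assumptions made only for illustration) does so:

```python
# Illustrative cross-check (arbitrary numerical values, not from the paper) that the
# closed-form expressions above for X_m reproduce the recursion (10).
import numpy as np
from scipy.special import gamma

alpha, lam, X0, X1, M = 0.7, 0.4, 1.3, -0.8, 12   # assumed sample values

# Coefficients via the recursion (10)
X = np.zeros(M + 1)
X[0], X[1] = X0, X1
X[2] = -lam / gamma(2 * alpha + 1) * X0
for m in range(M - 2):
    X[m + 3] = (gamma(alpha * (m + 1) + 1) / gamma(alpha * (m + 3) + 1)
                * (2 * gamma(alpha * (m + 1) + 1) / gamma(alpha * m + 1) - lam) * X[m + 1])

# Coefficients via the closed forms (odd m >= 3, even m >= 4)
def closed_form(m):
    if m % 2 == 1:
        prod = np.prod([2 * gamma((2 * k + 1) * alpha + 1) / gamma(2 * k * alpha + 1) - lam
                        for k in range((m - 1) // 2)])
        return gamma(alpha + 1) / gamma(m * alpha + 1) * prod * X1
    prod = np.prod([2 * gamma(2 * k * alpha + 1) / gamma((2 * k - 1) * alpha + 1) - lam
                    for k in range(1, (m - 2) // 2 + 1)])
    return gamma(2 * alpha + 1) / gamma(m * alpha + 1) * prod * X[2]

print(all(np.isclose(X[m], closed_form(m)) for m in range(3, M + 1)))   # True
```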

Then the solution (5) can be rewritten as

$$\begin{aligned} Y(t)=&X_0 + X_1 t^\alpha +X_2 t^{2\alpha } + \sum _{m=1}^\infty X_{2m+1}t^{(2m+1)\alpha } + \sum _{m=2}^\infty X_{2m} t^{2m\alpha }, \end{aligned}$$
(11)

where

$$\begin{aligned} \begin{aligned} X_0&= Y_0,\\ X_1&= Y_1,\\ X_2&= -\frac{\lambda X_0}{\Gamma (2 \alpha + 1)}=-\frac{\lambda Y_0}{\Gamma (2 \alpha + 1)},\\ X_{2m+1}&= \frac{\Gamma (\alpha + 1)}{\Gamma ((2m+1)\alpha +1)} \prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2k\alpha +1)}-\lambda \right) X_1\\&= \frac{\Gamma (\alpha + 1)}{\Gamma ((2m+1)\alpha +1)} \prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2k\alpha +1)}-\lambda \right) Y_1,\\ X_{2m}&= \frac{\Gamma (2\alpha + 1)}{\Gamma (2 m \alpha + 1)} \prod _{k=1}^{m-1} \left( 2 \frac{\Gamma (2 k \alpha + 1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) X_2\\&=-\frac{1}{\Gamma (2 m \alpha + 1)} \prod _{k=1}^{m-1} \left( 2 \frac{\Gamma (2 k \alpha + 1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \lambda X_0\\&=-\frac{1}{\Gamma (2 m \alpha + 1)} \prod _{k=1}^{m-1} \left( 2 \frac{\Gamma (2 k \alpha + 1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \lambda Y_0. \end{aligned} \end{aligned}$$
(12)

Substituting (12) into (11) and rearranging the terms yields

$$\begin{aligned} Y(t)&= Y_0 + Y_1 t^\alpha -\frac{\lambda Y_0}{\Gamma (2\alpha +1)} t^{2\alpha }\nonumber \\&\quad + Y_1 \sum _{m=1}^\infty \left[ \frac{\Gamma (\alpha + 1)}{\Gamma ((2m+1)\alpha +1)} \prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2k\alpha +1)}-\lambda \right) \right] t^{(2m+1)\alpha }\nonumber \\&\quad - \lambda Y_0 \sum _{m=2}^\infty \left[ \frac{1}{\Gamma (2 m \alpha + 1)} \prod _{k=1}^{m-1} \left( 2 \frac{\Gamma (2 k \alpha + 1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \right] t^{2m\alpha }\nonumber \\&= Y_0 {\hat{Y}}_1(t) + Y_1 {\hat{Y}}_2(t), \end{aligned}$$
(13)

where

$$\begin{aligned} {\hat{Y}}_1(t):=1-\lambda \sum _{m=1}^\infty \left[ \frac{1}{\Gamma (2\alpha m +1)}\prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \right] t^{2m\alpha } \end{aligned}$$
(14)

and

$$\begin{aligned} {\hat{Y}}_2(t):=t^\alpha + \sum _{m=1}^\infty \left[ \frac{\Gamma (\alpha +1)}{\Gamma ((2m+1)\alpha +1)}\prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2\alpha k +1)}-\lambda \right) \right] t^{(2m+1)\alpha }. \end{aligned}$$
(15)

Notice that in the definition of \({\hat{Y}}_1(t)\), we have used the usual convention \(\prod _{k=i}^{j} p_k=1\) for \(i>j\) in the particular case that \(i=1>0=j\).

Hereinafter, we shall assume that:

  • H1: The coefficient \(\lambda \) is a bounded random variable, i.e., there are real numbers \(b_1\) and \(b_2\) such that \(b_1<\lambda (\omega )<b_2\), for all \(\omega \in \Omega \). Notice that this is equivalent to writing that \(\lambda \in \textrm{L}^{\infty }(\Omega )\).

  • H2: The initial conditions \(Y_0, Y_1 \in \textrm{L}^2(\Omega )\) and \(\lambda \in \textrm{L}^{\infty }(\Omega )\) are independent random variables.

In the sequel, we will show that Y(t) in (13) is a rigorous solution of the RFIVP (3). To this end, we show that Y(t) in (13) is mean square convergent for all real \(t>0\) and that \(Y'(t)=Y_0 {\hat{Y}}'_1(t) + Y_1 {\hat{Y}}'_2(t)\) (derived from (13) and H2) is mean square uniformly convergent on any compact interval.

To establish the mean square convergence of Y(t), let us first observe that each \({\hat{Y}}_i(t)\), \(i=1,2\), only depends on the random variable \(\lambda \). By hypothesis H2, \(Y_0\), \(Y_1\) and \(\lambda \) are independent random variables. Thus, (13) implies

$$\begin{aligned} \Vert Y(t)\Vert _2 \le \Vert Y_0\Vert _2\Vert {\hat{Y}}_1(t)\Vert _2+\Vert Y_1\Vert _2\Vert {\hat{Y}}_2(t)\Vert _2. \end{aligned}$$

Since \(Y_0\) and \(Y_1\) belong to \(\textrm{L}^{2}(\Omega )\), considering the previous inequality, the mean square convergence of Y(t) follows from the mean square convergence of the series \({\hat{Y}}_i(t)\), \(i=1,2\), defined in (14) and (15), respectively. Hence, we begin by proving the mean square convergence of \({\hat{Y}}_i(t)\), \(i=1,2\). First, we find a bound for \(\left\| {\hat{Y}}_1(t) \right\| _2\). The triangle inequality and the Hölder inequality (2) with \(r=p=2\) and \(q=\infty \) imply

$$\begin{aligned} \begin{aligned}&\left\| {\hat{Y}}_1(t) \right\| _2=\left\| 1-\lambda \sum _{m=1}^\infty \left[ \frac{1}{\Gamma (2\alpha m +1)}\prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \right] t^{2m\alpha } \right\| _2 \\&\quad \le 1 + \sum _{m=1}^\infty \left[ \left\| \frac{ \lambda }{\Gamma (2\alpha m +1)} \prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \right\| _2 |t|^{2m\alpha } \right] \\&\quad \le 1 + \sum _{m=1}^\infty \left[ \frac{ \left\| \lambda \right\| _\infty }{\Gamma (2\alpha m +1)} \prod _{k=1}^{m-1} \left\| 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right\| _\infty |t|^{2m\alpha } \right] \\&\quad \le 1 + \sum _{m=1}^\infty \left[ \frac{ \left\| \lambda \right\| _\infty }{\Gamma (2\alpha m +1)} \prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}+ \left\| \lambda \right\| _{\infty } \right) |t|^{2m\alpha } \right] . \end{aligned} \end{aligned}$$

By setting

$$\begin{aligned} \delta _m(t) = \frac{ \left\| \lambda \right\| _\infty }{\Gamma (2\alpha m +1)} \prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}+ \left\| \lambda \right\| _{\infty } \right) |t|^{2m\alpha }, \end{aligned}$$

we only need to show that \(\sum _{m=1}^\infty \delta _m(t)\) converges for all real t to ensure the mean square convergence of \({\hat{Y}}_1(t)\) for all real t. Taking advantage of the Stirling formula, \(\Gamma (x+1) \approx x^x e^{-x}\sqrt{2\pi x}\) as \(x\rightarrow \infty \), we have

$$\begin{aligned} \lim _{m\rightarrow \infty } \frac{\delta _{m+1}(t)}{\delta _{m}(t)}&=\lim _{m\rightarrow \infty } \frac{\Gamma (2 \alpha m +1)}{\Gamma (2 \alpha (m+1)+1)} \left( 2 \frac{\Gamma (2 \alpha m +1)}{\Gamma ((2m-1) \alpha +1)} + ||\lambda ||_\infty \right) |t|^{2 \alpha }\nonumber \\&=\lim _{m\rightarrow \infty } \frac{(2 m \alpha )^{2m\alpha } e^{-2 m \alpha } \sqrt{4 m \alpha \pi }}{(2(m+1)\alpha )^{2(m+1)\alpha }e^{-2(m+1)\alpha } \sqrt{4 \pi (m+1)\alpha }}\nonumber \\&\quad \cdot \left( 2 \frac{(2m\alpha )^{2m\alpha } e^{-2m\alpha }\sqrt{4m\alpha \pi }}{((2m-1)\alpha )^{(2m-1)\alpha } e^{-(2m-1)\alpha } \sqrt{2(2m-1)\pi \alpha }} + ||\lambda ||_\infty \right) |t|^{2 \alpha }\nonumber \\&=\lim _{m\rightarrow \infty } \left( \frac{2 m \alpha }{2 (m+1)\alpha }\right) ^{2 m \alpha } \left( \frac{1}{2(m+1)\alpha }\right) ^{2 \alpha } e^{2\alpha } \sqrt{\frac{m}{m+1}}\nonumber \\&\quad \cdot \left( 2 \left( \frac{2m\alpha }{(2m-1)\alpha }\right) ^{(2m-1)\alpha } (2m\alpha )^\alpha e^{-\alpha } \sqrt{\frac{2m}{2m-1}} + ||\lambda ||_\infty \right) |t|^{2 \alpha }\nonumber \\&=\lim _{m\rightarrow \infty } \left( \frac{2 m \alpha }{2 (m+1)\alpha }\right) ^{2 m \alpha } \left( \frac{1}{2(m+1)\alpha }\right) ^{2 \alpha } e^{2\alpha } \sqrt{\frac{m}{m+1}} \nonumber \\&\quad \cdot 2 \left( \frac{2m\alpha }{(2m-1)\alpha }\right) ^{(2m-1)\alpha } (2m\alpha )^\alpha e^{-\alpha } \sqrt{\frac{2m}{2m-1}}|t|^{2 \alpha }\nonumber \\&\quad + \lim _{m\rightarrow \infty } \left( \frac{2 m \alpha }{2 (m+1)\alpha }\right) ^{2 m \alpha } \left( \frac{1}{2(m+1)\alpha }\right) ^{2 \alpha } e^{2\alpha } \sqrt{\frac{m}{m+1}} ||\lambda ||_\infty |t|^{2 \alpha }\nonumber \\&=\lim _{m\rightarrow \infty } 2\left( \frac{2 m \alpha }{2 (m+1)\alpha }\right) ^{2 m \alpha } \left( \frac{2m\alpha }{(2m-1)\alpha }\right) ^{(2m-1)\alpha } \nonumber \\&\quad \cdot \left( \frac{1}{2(m+1)\alpha }\right) ^{\alpha } \left( \frac{2 m \alpha }{2(m+1)\alpha }\right) ^{\alpha } e^{\alpha } \sqrt{\frac{m}{m+1}} \sqrt{\frac{2m}{2m-1}}|t|^{2 \alpha } \nonumber \\&\quad + \lim _{m\rightarrow \infty } \left( \frac{2 m \alpha }{2 (m+1)\alpha }\right) ^{2 m \alpha } \left( \frac{1}{2(m+1)\alpha }\right) ^{2 \alpha } e^{2\alpha } \sqrt{\frac{m}{m+1}} ||\lambda ||_\infty |t|^{2 \alpha }\nonumber \\&= 0, \end{aligned}$$
(16)

because \(\left( \frac{2\,m \alpha }{2 (m+1)\alpha }\right) ^{2\,m \alpha }\xrightarrow []{m\rightarrow \infty } e^{-2 \alpha }\), \(\left( \frac{2\,m\alpha }{(2\,m-1)\alpha }\right) ^{(2\,m-1)\alpha }\xrightarrow []{m\rightarrow \infty } e^{\alpha }\), \(\left( \frac{1}{2(m+1)\alpha }\right) ^{k\alpha }\xrightarrow []{m\rightarrow \infty } 0\) for \(k=1,2\) and \(\left( \frac{2\,m \alpha }{2(m+1)\alpha }\right) ^{\alpha }\xrightarrow []{m\rightarrow \infty } 1\). By the ratio test, the series \(\sum _{m=1}^\infty \delta _m(t)\) converges for all real t. Hence, \({\hat{Y}}_1(t)\), defined in (14), is mean square convergent for all real \(t>0\). Similarly, the mean square convergence of \({\hat{Y}}_2(t)\), given by (15), can be proved for all real t. Moreover, using similar arguments, one can prove that their corresponding mean square derivatives, \({\hat{Y}}_1'(t)\) and \({\hat{Y}}_2 '(t)\), are uniformly mean square convergent on \([-K,K]\) for any positive K. Summarizing, the following result has been established:

Theorem 2.1

If the random variables \(Y_0\), \(Y_1\) and \(\lambda \) satisfy hypotheses H1 and H2, then

$$\begin{aligned} Y(t)&= Y_0 \left( 1-\lambda \sum _{m=1}^\infty \left[ \frac{1}{\Gamma (2\alpha m +1)}\prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) \right] t^{2m\alpha }\right) \nonumber \\&\quad +Y_1 \left( t^\alpha + \sum _{m=1}^\infty \left[ \frac{\Gamma (\alpha +1)}{\Gamma ((2m+1)\alpha +1)}\prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2\alpha k +1)}-\lambda \right) \right] t^{(2m+1)\alpha }\right) , \end{aligned}$$
(17)

is a mean square convergent solution of the RFIVP (3) for all \(t>0\).
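As an illustrative sanity check of Theorem 2.1 (not part of the proof), note that for \(\alpha =1\) and deterministic data the series (17) must reduce to a solution of the classical Hermite equation (1). The following symbolic sketch, with arbitrarily chosen deterministic values of \(\lambda \), \(Y_0\) and \(Y_1\), verifies that the residual of (1) for a truncation of (17) is negligible:

```python
# Illustrative sanity check (not part of the proof; lambda, Y0, Y1 are arbitrary
# deterministic assumptions): for alpha = 1 the truncated series (17) must
# (approximately) solve the classical Hermite equation (1), y'' - 2 t y' + lam y = 0.
import sympy as sp

t = sp.symbols('t')
lam, Y0, Y1, alpha, M = sp.Rational(3, 2), 1, 1, 1, 15

def prod_even(m):
    # prod_{k=1}^{m-1} ( 2 Gamma(2k a + 1)/Gamma((2k-1) a + 1) - lam ), cf. (14)
    return sp.Mul(*[2 * sp.gamma(2 * k * alpha + 1) / sp.gamma((2 * k - 1) * alpha + 1) - lam
                    for k in range(1, m)])

def prod_odd(m):
    # prod_{k=0}^{m-1} ( 2 Gamma((2k+1) a + 1)/Gamma(2k a + 1) - lam ), cf. (15)
    return sp.Mul(*[2 * sp.gamma((2 * k + 1) * alpha + 1) / sp.gamma(2 * k * alpha + 1) - lam
                    for k in range(m)])

Y1hat = 1 - lam * sum(prod_even(m) / sp.gamma(2 * alpha * m + 1) * t ** (2 * m * alpha)
                      for m in range(1, M + 1))
Y2hat = t ** alpha + sum(sp.gamma(alpha + 1) / sp.gamma((2 * m + 1) * alpha + 1)
                         * prod_odd(m) * t ** ((2 * m + 1) * alpha) for m in range(1, M + 1))
y = Y0 * Y1hat + Y1 * Y2hat

residual = sp.diff(y, t, 2) - 2 * t * sp.diff(y, t) + lam * y
print(sp.N(residual.subs(t, sp.Rational(1, 2))))  # only the tiny truncation tail remains
```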

3 Obtaining approximations for the mean and standard deviation of the solution

Theorem 2.1 ensures the mean square convergence of the solution process Y(t) in (17). Hence, Proposition 1.1 guarantees the convergence of its mean and standard deviation. This section is devoted to finding explicit expressions for these relevant statistical functions. To this end, we first introduce the following technical result, which simplifies the subsequent calculations.

Lemma 3.1

Let f(k) be a real function and let \(\lambda \) be a random variable. Then,

$$\begin{aligned} \prod _{k=1}^m\left( f(k)-\lambda \right) = \sum _{i=0}^m \lambda ^i (-1)^i G_{m,i},\quad \textrm{for} \, \textrm{all}\,\, m\in \mathbb {N}, \end{aligned}$$
(18)

where

$$\begin{aligned} G_{m,i} = \left\{ \begin{array}{lll} \sum _{j_1<j_2<\cdots<j_{m-i}}f(j_1)f(j_2) \cdots f(j_{m-i})&{} \textrm{if } &{} i < m,\\ 1&{} \textrm{if } &{}m = i, \\ 0 &{} \mathrm{otherwise. }&{} \end{array} \right. \end{aligned}$$
(19)

In other words, for \(i<m\), \(G_{m,i}\) is defined as the sum taken over all subsets of \(m-i\) indexes \(j_1,\ldots ,j_{m-i}\) from the set \(\{1,\ldots ,m \}\).

Proof

We proceed by induction on m. Clearly, (18) is true for \(m=1\). Indeed, observe that \(G_{1,0}=f(1)\) and \(G_{1,1}=1\) and the right side of (18) is

$$\begin{aligned} \lambda ^0(-1)^0G_{1,0}+\lambda ^1(-1)^1G_{1,1}, \end{aligned}$$

which is equal to the left side of (18). Equation (18) also holds for \(m=2\), since the left side of (18) is

$$\begin{aligned} (f(1)-\lambda )(f(2)-\lambda ) = \lambda ^2-(f(1)+f(2))\lambda +f(1)f(2), \end{aligned}$$

and the right side of (18) is

$$\begin{aligned} \lambda ^0(-1)^0G_{2,0}+\lambda ^1(-1)^1G_{2,1}+\lambda ^2(-1)^2G_{2,2}=f(1)f(2)-\lambda (f(1)+f(2))+\lambda ^2. \end{aligned}$$

By definition of \(G_{m,i}\) it follows

$$\begin{aligned} G_{m+1,i} = f(m+1)G_{m,i}+G_{m,i-1}. \end{aligned}$$
($\ast \ast $)

Let \(m\in \mathbb {N} \) such that \(m\ge 2\) and suppose that

$$\begin{aligned} \prod _{k=1}^{m-1}(f(k)-\lambda ) = \sum _{i=0}^{m-1}\lambda ^i(-1)^i G_{m-1,i}. \end{aligned}$$
(20)

By the induction hypothesis (20), we have

$$\begin{aligned} \begin{aligned} \prod _{k=1}^m (f(k)-\lambda )&= \left( f(m)-\lambda \right) \prod _{k=1}^{m-1} (f(k)-\lambda ) = \left( f(m)-\lambda \right) \sum _{i=0}^{m-1} \lambda ^i(-1)^i G_{m-1,i}\\&=f(m) \sum _{i=0}^{m-1} \lambda ^i(-1)^i G_{m-1,i} -\lambda \sum _{i=0}^{m-1} \lambda ^i(-1)^i G_{m-1,i}\\&= \sum _{i=0}^{m-1} \lambda ^i(-1)^i \left( f(m) G_{m-1,i}\right) + \sum _{i=0}^{m-1} \lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\\&= \lambda ^0(-1)^0 \left( f(m) G_{m-1,0}\right) + \sum _{i=1}^{m-1} \lambda ^i(-1)^i \left( f(m) G_{m-1,i}\right) \\&\quad + \sum _{i=0}^{m-1} \lambda ^{i+1}(-1)^{i+1} G_{m-1,i}. \end{aligned} \end{aligned}$$

Using the equalities \(G_{m,0}=f(m) G_{m-1,0}\) and \(G_{m,i+1}=f(m) G_{m-1,i+1}+G_{m-1,i}\) (derived from (\(**\))) yields

$$\begin{aligned} \begin{aligned}&= \lambda ^0(-1)^0 \left( G_{m,0}\right) + \sum _{i=0}^{m-2} \lambda ^{i+1}(-1)^{i+1} \left( f(m) G_{m-1,i+1}\right) \\&\quad + \sum _{i=0}^{m-2} \lambda ^{i+1}(-1)^{i+1} G_{m-1,i}+ \lambda ^m (-1)^m G_{m-1,m-1}\\&= \lambda ^0(-1)^0 G_{m,0} + \sum _{i=0}^{m-2} \lambda ^{i+1}(-1)^{i+1} \left( f(m) G_{m-1,i+1}+G_{m-1,i}\right) \\&\quad + \lambda ^m (-1)^m G_{m,m} \\&= \lambda ^0(-1)^0G_{m,0}+ \sum _{i=0}^{m-2} \lambda ^{i+1}(-1)^{i+1} G_{m,i+1}+ \lambda ^m(-1)^m G_{m,m}\\&= \sum _{i=0}^m\lambda ^i (-1)^iG_{m,i}. \end{aligned} \end{aligned}$$

By the principle of mathematical induction, we conclude that (18) is true for all m. \(\square \)
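Lemma 3.1 can also be verified numerically by computing \(G_{m,i}\) directly from its definition (19); in the following brief sketch the values of \(f(1),\ldots ,f(m)\) and \(\lambda \) are randomly generated assumptions used only for illustration:

```python
# Brief numerical check of Lemma 3.1 (illustrative; f(1),...,f(m) and lambda are
# randomly generated assumed values): G_{m,i} is computed directly from (19) as an
# elementary symmetric sum over subsets of size m - i.
import numpy as np
from itertools import combinations

def G(vals, m, i):
    if i == m:
        return 1.0
    return sum(np.prod(c) for c in combinations(vals[:m], m - i))

rng = np.random.default_rng(1)
m = 6
f_vals = rng.uniform(0.5, 3.0, size=m)
lam = 1.7

lhs = np.prod(f_vals - lam)                                   # prod_{k=1}^m (f(k) - lam)
rhs = sum(lam ** i * (-1) ** i * G(f_vals, m, i) for i in range(m + 1))
print(np.isclose(lhs, rhs))                                   # True
```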

Now, we apply Lemma 3.1 to simplify the products involved in (13).

Let \(f(k)=2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}\). Then,

$$\begin{aligned} \prod _{k=1}^{m-1} \left( 2\frac{\Gamma (2 k \alpha +1)}{\Gamma ((2k-1)\alpha +1)}-\lambda \right) =\prod _{k=1}^{m-1} \left( f(k)-\lambda \right) =\sum _{i=0}^{m-1} \lambda ^i (-1)^i G_{m-1,i}, \end{aligned}$$

where \(G_{m-1,i}\) is as in (19).

Next, setting \({\hat{f}}(k)=2 \frac{\Gamma ((2k-1)\alpha +1)}{\Gamma ((2k-2)\alpha +1)}\), one gets

$$\begin{aligned} \prod _{k=0}^{m-1}\left( 2 \frac{\Gamma ((2k+1)\alpha +1)}{\Gamma (2\alpha k +1)}-\lambda \right) = \prod _{k=1}^{m}\left( {\hat{f}}(k)-\lambda \right) =\sum _{i=0}^m \lambda ^i (-1)^i{\hat{G}}_{m,i}, \end{aligned}$$

where

$$\begin{aligned} {\hat{G}}_{m,i} = \left\{ \begin{array}{lll} \sum _{j_1<j_2<\cdots<j_{m-i}}{\hat{f}}(j_1){\hat{f}}(j_2) \cdots {\hat{f}}(j_{m-i})&{} \text { if } &{} i < m,\\ 1&{} \text { if } &{}m = i \\ 0 &{} \text { otherwise. }&{} \end{array} \right. \end{aligned}$$
(21)

As a consequence, the solution given in (13) can be represented free of products by the following expression

$$\begin{aligned} Y(t)&= Y_0 \left( 1+\sum _{m=1}^\infty \left[ \frac{t^{2m\alpha }}{\Gamma (2\alpha m +1)} \left( \sum _{i=0}^{m-1}\lambda ^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] \right) \nonumber \\&\quad +Y_1 \left( t^\alpha + \sum _{m=1}^\infty \left[ \frac{ \Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i{\hat{G}}_{m,i}\right) \right] \right) , \end{aligned}$$
(22)

where \(G_{m,i}\) and \({\hat{G}}_{m,i}\) are defined in (19) and (21), respectively.

Now, we shall obtain reliable approximations for the mean and the variance functions of the solution. To achieve this goal, we first consider the truncation of order M, \(Y_M(t)\), of the solution given in (22):

$$\begin{aligned} Y_M(t)&:= Y_0 \left( 1+\sum _{m=1}^M \left[ \frac{t^{2m\alpha }}{\Gamma (2\alpha m +1)} \left( \sum _{i=0}^{m-1}\lambda ^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] \right) \nonumber \\&\quad +Y_1 \left( t^\alpha + \sum _{m=1}^M \left[ \frac{ \Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i{\hat{G}}_{m,i}\right) \right] \right) . \end{aligned}$$
(23)

By independence of \(Y_0\), \(Y_1\) and \(\lambda \), see assumption H2, one gets

$$\begin{aligned} \mathbb {E}[Y_M(t)]&= \mathbb {E}[Y_0] \left( 1+\sum _{m=1}^M \left[ \frac{t^{2m\alpha }}{\Gamma (2\alpha m +1)} \left( \sum _{i=0}^{m-1}\mathbb {E}[\lambda ^{i+1}] (-1)^{i+1} G_{m-1,i}\right) \right] \right) \nonumber \\&\quad +\mathbb {E}[Y_1] \left( t^\alpha + \sum _{m=1}^M \left[ \frac{ \Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \mathbb {E}[\lambda ^i] (-1)^i{\hat{G}}_{m,i}\right) \right] \right) . \end{aligned}$$
(24)

Recall that the standard deviation of \(Y_M(t)\), \(\sigma [Y_M(t)]\), is defined by

$$\begin{aligned} \sigma [Y_M(t)] = \sqrt{\mathbb {E}[Y_M^2(t)]-(\mathbb {E}[Y_M(t)])^2}. \end{aligned}$$
(25)

Note that

$$\begin{aligned} \begin{aligned} Y_M^2(t)&= Y_0^2 \underbrace{\left( 1+\sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right) ^2}_{:=\text {A}}\\&\quad + Y_1^2 \underbrace{\left( t^\alpha +\sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \right) ^2}_{:=\text {B}} \\&\quad + 2 Y_0 Y_1 \underbrace{\left( 1+\sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right) }_{\text {C}}\\&\quad \underbrace{\cdot \left( t^\alpha +\sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \right) }_{:=\text {C}}. \end{aligned} \nonumber \\ \end{aligned}$$
(26)

Now, for the sake of clarity, we separately compute the above three terms, denoted by A, B and C.

$$\begin{aligned} \text {A}:= & {} \left( 1+\sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right) ^2\\= & {} 1+ 2 \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \\{} & {} + \left( \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right) ^2\\= & {} 1+ 2 \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \\{} & {} + \left( \sum _{m=1}^M \sum _{n=1}^M\left[ \frac{t^{2\alpha (m+n) }}{\Gamma (2\alpha m+1)\Gamma (2\alpha n+1)}\left( \sum _{i=0}^{m-1}\sum _{j=0}^{n-1}\lambda ^{i+j+2}(-1)^{i+j+2} G_{m-1,i}G_{n-1,j}\right) \right] \right) , \\ \text {B}:= & {} \left( t^\alpha +\sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \right) ^2 \\= & {} t^{2 \alpha } + 2t^\alpha \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i} \right) \right] \\{} & {} + \left( \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i} \right) \right] \right) ^2\\= & {} t^{2 \alpha } + 2t^\alpha \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i} \right) \right] \\{} & {} +\sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)^2 t^{(2n+2m+2)\alpha }}{\Gamma ((2m+1)\alpha +1)\Gamma ((2n+1)\alpha +1)}\left( \sum _{i=0}^m \sum _{j=0}^n \lambda ^{i+j} (-1)^{i+j} {\hat{G}}_{m,i}{\hat{G}}_{n,j} \right) \right] \end{aligned}$$

and

$$\begin{aligned} \begin{aligned} \text {C}&:= \left( 1+\sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right) \\&\quad \cdot \left( t^\alpha +\sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \right) \\&= t^\alpha + t^\alpha \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \\&\quad + \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \\&\quad + \sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)t^{2\alpha m } t^{(2n+1)\alpha } }{\Gamma ((2n+1)\alpha +1) \Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1} \sum _{j=0}^n\lambda ^{i+j+1}(-1)^{i+j+1} G_{m-1,i} {\hat{G}}_{n,j}\right) \right] . \end{aligned} \end{aligned}$$

Substituting A, B and C in (26), \(Y_M(t)^2\) can be expressed as

$$\begin{aligned} Y_M^2(t)&= Y_0^2 \left( 1+ 2 \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right. \nonumber \\&\quad + \left. \left( \sum _{m=1}^M \sum _{n=1}^M\left[ \frac{t^{2\alpha (m+n) }}{\Gamma (2\alpha m+1)\Gamma (2\alpha n+1)}\left( \sum _{i=0}^{m-1}\sum _{j=0}^{n-1}\lambda ^{i+j+2}(-1)^{i+j+2} G_{m-1,i}G_{n-1,j}\right) \right] \right) \right) \nonumber \\&\quad + Y_1^2 \left( t^{2 \alpha } + 2t^\alpha \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i} \right) \right] \right. \nonumber \\&\quad +\left. \sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)^2 t^{(2n+2m+2)\alpha }}{\Gamma ((2m+1)\alpha +1)\Gamma ((2n+1)\alpha +1)}\left( \sum _{i=0}^m \sum _{j=0}^n \lambda ^{i+j} (-1)^{i+j} {\hat{G}}_{m,i}{\hat{G}}_{n,j} \right) \right] \right) \nonumber \\&\quad +2Y_0Y_1\left( t^\alpha + t^\alpha \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i}\right) \right] \right. \nonumber \\&\quad + \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i{\hat{G}}_{m,i}\right) \right] \nonumber \\&\quad + \left. \sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)t^{2\alpha m } t^{(2n+1)\alpha } }{\Gamma ((2n+1)\alpha +1) \Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1} \sum _{j=0}^n\lambda ^{i+j+1}(-1)^{i+j+1} G_{m-1,i} {\hat{G}}_{n,j}\right) \right] \right) . \end{aligned}$$
(27)

Applying the expectation operator on (27), one gets

$$\begin{aligned} \mathbb {E}[Y_M^2(t)]&= \mathbb {E}[Y_0^2] \left( 1+ 2 \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\mathbb {E}[\lambda ^{i+1}](-1)^{i+1} G_{m-1,i}\right) \right] \right. \nonumber \\&\quad + \left. \left( \sum _{m=1}^M \sum _{n=1}^M\left[ \frac{t^{2\alpha (m+n) }}{\Gamma (2\alpha m+1)\Gamma (2\alpha n+1)}\left( \sum _{i=0}^{m-1}\sum _{j=0}^{n-1}\mathbb {E}[\lambda ^{i+j+2}](-1)^{i+j+2} G_{m-1,i}G_{n-1,j}\right) \right] \right) \right) \nonumber \\&\quad + \mathbb {E}[Y_1^2] \left( t^{2 \alpha } + 2t^\alpha \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \mathbb {E}[\lambda ^i] (-1)^i {\hat{G}}_{m,i} \right) \right] \right. \nonumber \\&\quad +\left. \sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)^2 t^{(2n+2m+2)\alpha }}{\Gamma ((2m+1)\alpha +1)\Gamma ((2n+1)\alpha +1)}\left( \sum _{i=0}^m \sum _{j=0}^n \mathbb {E}[\lambda ^{i+j}] (-1)^{i+j} {\hat{G}}_{m,i}{\hat{G}}_{n,j} \right) \right] \right) \nonumber \\&\quad +2\mathbb {E}[Y_0]\mathbb {E}[Y_1]\left( t^\alpha + t^\alpha \sum _{m=1}^M\left[ \frac{t^{2\alpha m }}{\Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1}\mathbb {E}[\lambda ^{i+1}](-1)^{i+1} G_{m-1,i}\right) \right] \right. \nonumber \\&\quad + \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \mathbb {E}[\lambda ^i](-1)^i{\hat{G}}_{m,i}\right) \right] \nonumber \\&\quad + \left. \sum _{m=1}^M \sum _{n=1}^M \left[ \frac{\Gamma (\alpha +1)t^{2\alpha m } t^{(2n+1)\alpha } }{\Gamma ((2n+1)\alpha +1) \Gamma (2\alpha m+1)}\left( \sum _{i=0}^{m-1} \sum _{j=0}^n\mathbb {E}[\lambda ^{i+j+1}](-1)^{i+j+1} G_{m-1,i} {\hat{G}}_{n,j}\right) \right] \right) . \end{aligned}$$
(28)

From the previous expressions, it is interesting to observe that the approximation of order M of the mean, \(\mathbb {E}[Y_M(t)]\), depends on \(\mathbb {E}[Y_0]\), \(\mathbb {E}[Y_1]\) and \(\mathbb {E}[\lambda ^m]\), \(m=1,\ldots ,M\), while the approximation of the second-order moment, \(\mathbb {E}[Y_M^2(t)]\) (and hence, by (25), of \(\sigma [Y_M(t)]\)), depends on the above quantities together with \(\mathbb {E}[Y_0^2]\), \(\mathbb {E}[Y_1^2]\) and \(\mathbb {E}[\lambda ^m]\), \(m=1,\ldots ,2\,M\), as expected. Finally, notice that Theorem 2.1 ensures the mean square convergence of \(Y_M(t)\), and according to Proposition 1.1, \(\mathbb {E}[Y_M(t)]\) and \(\mathbb {E}[Y_M^2(t)]\) converge to their corresponding exact values, \(\mathbb {E}[Y(t)]\) and \(\mathbb {E}[Y^2(t)]\), respectively.
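In practice, expressions (24) and (28) can be cross-checked by plain Monte Carlo simulation. The following sketch (with assumed toy distributions satisfying H1 and H2; it is only a validation aid, not the method proposed in this paper) samples the inputs, evaluates the truncation through the factorized form (13), and averages:

```python
# Monte Carlo cross-check of (24)-(25) (a validation sketch only, with assumed toy
# distributions satisfying H1-H2): sample Y0, Y1 and lambda, evaluate the order-M
# truncation of the solution through the factorized form (13), and average.
import numpy as np
from scipy.special import gamma

alpha, M, t, N = 0.8, 20, 1.0, 200_000
rng = np.random.default_rng(2)
Y0 = rng.normal(1.0, 0.1, N)        # assumed Y0 ~ N(1, 0.1^2)
Y1 = rng.normal(0.0, 0.2, N)        # assumed Y1 ~ N(0, 0.2^2)
lam = rng.uniform(0.5, 1.5, N)      # assumed lambda ~ U(0.5, 1.5), bounded (H1)

Y1hat = np.ones(N)                  # truncated hat{Y}_1(t), cf. (14)
Y2hat = np.full(N, t ** alpha)      # truncated hat{Y}_2(t), cf. (15)
prod_e = np.ones(N)
prod_o = np.ones(N)
for m in range(1, M + 1):
    if m > 1:
        k = m - 1
        prod_e *= 2 * gamma(2 * k * alpha + 1) / gamma((2 * k - 1) * alpha + 1) - lam
    Y1hat -= lam * prod_e / gamma(2 * alpha * m + 1) * t ** (2 * m * alpha)
    k = m - 1
    prod_o *= 2 * gamma((2 * k + 1) * alpha + 1) / gamma(2 * k * alpha + 1) - lam
    Y2hat += gamma(alpha + 1) / gamma((2 * m + 1) * alpha + 1) * prod_o * t ** ((2 * m + 1) * alpha)

YM = Y0 * Y1hat + Y1 * Y2hat
print(YM.mean(), YM.std())          # Monte Carlo estimates of E[Y_M(t)] and sigma[Y_M(t)]
```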

4 Convergent approximations for the 1-PDF of the solution

So far, convergent approximations for the mean, \(\mathbb {E}[Y_M(t)]\), and for the standard deviation, \(\sigma [Y_M(t)]\), of the solution, Y(t), given in (22) have been computed from its truncation of order M, \(Y_M(t)\), given in (23). Nevertheless, sometimes further statistical information about Y(t) is required. On the one hand, computing higher-order one-dimensional statistical moments, \(\mathbb {E}[(Y_M(t))^k]\), allows us to approximate additional statistical properties of Y(t), such as the asymmetry, the kurtosis, etc., which are useful to better describe the solution from a probabilistic standpoint. On the other hand, the probability that the solution lies within an interval of interest is, obviously, relevant information in practice. Approximations for both quantities can be calculated by integrating the so-called first probability density function (1-PDF) of \(Y_M(t)\), say \(f_{Y_M(t)}(y)\),

$$\begin{aligned} \mathbb {E}[(Y_M(t))^k] = \int _{-\infty }^{\infty } y^k f_{Y_M(t)}(y)\, \textrm{d}y,\quad k=1,2,\ldots , \end{aligned}$$

and

$$\begin{aligned} \mathbb {P}[l_1\le Y_M(t) \le l_2] = \int _{l_1}^{l_2}f_{Y_M(t)}(y) \textrm{d}y. \end{aligned}$$

Of course, the above approximations will be legitimate provided \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \), where \(f_{Y(t)}(y)\) stands for the 1-PDF of the exact solution Y(t), given in (22). In this section, we first formally construct the approximations \(f_{Y_M(t)}(y)\) and, then, we establish sufficient conditions so that the foregoing convergence holds.

4.1 Constructing formal approximations for the 1-PDF

In the extant literature, there exist different approaches to obtain, exactly or approximately, the 1-PDF of a stochastic process. Most of these methods are natural extensions of their corresponding counterparts for calculating the PDF of a random variable. As we have previously obtained approximations for the first two moments of the solution, a natural approach would be to apply the principle of maximum entropy (PME). This method constructs the PDF taking into account the available information about the random variable (in our case, the first two moments) by maximizing Shannon's entropy, which quantifies the lack of knowledge about a random variable (Michalowicz et al. 2013). In the setting of ordinary and fractional differential equations with randomness, this approach has been recently applied in Burgos-Simón et al. (2020) and Burgos et al. (2019), respectively. Although the method provides well-founded approximations to calculate the 1-PDF, the results heavily depend on the accuracy of the approximations of the first statistical moments. Moreover, according to the PME method, the approximations of the 1-PDF are limited to certain specific classes of densities depending on the number of statistical moments that have been pre-calculated. For example, if only the mean is known and the solution is known to be positive, the PDF will be an exponential distribution; if both the mean and the variance are known, the approximation of the PDF will be Gaussian, etc.; see Michalowicz et al. (2013). Non-standard distributions can be achieved at the expense of pre-calculating higher statistical moments, which could be cumbersome, as can be guessed from the expressions of the first two moments (see expressions (24) and (28)).

To avoid these drawbacks, we here propose to obtain the 1-PDF by an alternative method termed the Probabilistic Transformation Method (PTM), which is based on the following result.

Theorem 4.1

(PTM) (Soong 1973, p. 25) Let us consider \(\textbf{Z}=(Z_1,\ldots ,Z_k)\) and \(\textbf{X}=(X_1,\ldots ,X_k)\) two k-dimensional absolutely continuous random vectors defined on a common complete probability space \((\Omega ,\mathcal {F}_{\Omega },\mathbb {P})\). Let \(\textbf{r}: \mathbb {R}^k \rightarrow \mathbb {R}^k\) be a one-to-one deterministic transformation of \(\textbf{Z}\) into \(\textbf{X}\), i.e., \(\textbf{X}=\textbf{r}(\textbf{Z})\). Assume that \(\textbf{r}\) is continuous in \(\textbf{Z}\) and has continuous partial derivatives with respect to each \(Z_i\), \(1\le i \le k\). Then, if \(f_{\textbf{Z}}(\textbf{z})\) denotes the joint probability density function of random vector \(\textbf{Z}\), and \(\textbf{s}=\textbf{r}^{-1}=(s_1(x_1,\ldots ,x_k),\ldots ,s_k(x_1,\ldots ,x_k))\) represents the inverse mapping of \(\textbf{r}=(r_1(z_1,\ldots ,z_k),\ldots ,r_k(z_1,\ldots ,z_k))\), the joint probability density function of random vector \(\textbf{X}\) is given by

$$\begin{aligned} f_{\textbf{X}}(\textbf{x})=f_{\textbf{Z}}\left( \textbf{s}(\textbf{x})\right) \left| J \right| , \end{aligned}$$

where \(\left| J \right| \), which is assumed to be different from zero, is the absolute value of the Jacobian defined by the following determinant

$$\begin{aligned} J=\det \left( \frac{\partial \textbf{s}}{\partial \textbf{x}}\right) = \det \left( \begin{array}{ccc} \frac{\partial s_1(x_1,\ldots , x_k)}{\partial x_1} &{} \cdots &{} \frac{\partial s_k(x_1,\ldots , x_k)}{\partial x_1}\\ \vdots &{} \ddots &{} \vdots \\ \frac{\partial s_1(x_1,\ldots , x_k)}{\partial x_k} &{} \cdots &{} \frac{\partial s_k(x_1,\ldots , x_k)}{\partial x_k}\\ \end{array} \right) . \end{aligned}$$
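A one-dimensional toy example (an assumption-based illustration, not taken from the paper) helps to visualize Theorem 4.1: for a linear transformation of a Gaussian random variable, the PTM formula recovers the well-known density of the transformed variable.

```python
# Elementary illustration (assumed example, not from the paper) of Theorem 4.1 in
# one dimension: for X = r(Z) = 2 Z + 1 with Z ~ N(0, 1), the inverse map is
# s(x) = (x - 1)/2 with |J| = 1/2, so f_X(x) = f_Z((x - 1)/2) / 2, i.e. X ~ N(1, 2^2).
import numpy as np
from scipy.stats import norm

x = np.linspace(-5.0, 7.0, 7)
f_X_transform = norm(0.0, 1.0).pdf((x - 1.0) / 2.0) / 2.0   # PTM formula
f_X_direct = norm(1.0, 2.0).pdf(x)                          # known density of N(1, 2^2)
print(np.allclose(f_X_transform, f_X_direct))               # True
```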

In our setting, the key idea to take advantage of the above result is to note that, for \(t>0\) fixed, the approximate solution, \(Y_M(t)\), given in (23) is described by means of a transformation, \(\textbf{r}\), of the input parameters \(Y_0\), \(Y_1\) and \(\lambda \), whose PDFs, \(f_{Y_0}\), \(f_{Y_1}\) and \(f_{\lambda }\), are known. Observe that, according to hypothesis H2, the joint PDF of \((Y_0,Y_1,\lambda )\) is given by \(f_{Y_0,Y_1,\lambda } = f_{Y_0} f_{Y_1} f_{\lambda }\). Applying Theorem 4.1 to \(Y_M(t)\), we shall first obtain the approximations \(f_{Y_M(t)}(y)\) and, later, we will establish sufficient conditions so that \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \).

The PTM (also referred to as RVT—Random Variable Transformation) method has been successfully applied to obtain the 1-PDF of the solution of some classes of differential equations with uncertainties. In Dorini et al. (2016), the authors have obtained the 1-PDF of the solution of a logistic random differential equation. In Caraballo et al. (2019), the PTM method is applied to approximate the 1-PDF of the solution of a delay random differential equation. The PTM method has also been applied to numerically solve PDEs (Calatayud et al. 2020). In Burgos et al. (2018), some of the authors of the present contribution approximate the 1-PDF of a linear autonomous random fractional differential equation, whose order of fractional differentiation is \(0<\alpha \le 1\), by taking advantage of the PTM technique.

Let us apply Theorem 4.1 with the following identification: \(k=3\) and \(\textbf{Z} = (Z_1,Z_2,Z_3) = (Y_0,Y_1,\lambda )\). The vector \(\textbf{X} =(X_1,X_2, X_3)\) is defined by the following deterministic transformation \(\textbf{r}=(r_1,r_2,r_3)\), of \(\textbf{Z}\), i.e., \(\textbf{X} = \textbf{r} (\textbf{Z})\), where

$$\begin{aligned} \begin{aligned} x_1&= r_1(y_0,y_1,\lambda ) = y_0 \left( 1+\sum _{m=1}^M \left[ \frac{t^{2m\alpha }}{\Gamma (2\alpha m +1)} \left( \sum _{i=0}^{m-1}\lambda ^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] \right) \\&\quad +y_1 \left( t^\alpha + \sum _{m=1}^M \left[ \frac{ \Gamma (\alpha +1)t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i (-1)^i{\hat{G}}_{m,i}\right) \right] \right) ,\\ x_2&= r_2(y_0,y_1,\lambda ) = y_1,\\ x_3&= r_3(y_0,y_1,\lambda ) = \lambda . \end{aligned} \end{aligned}$$

It can be seen that the inverse mapping of \(\textbf{r}\), \(\textbf{s} = \textbf{r}^{-1}\), is given by

$$\begin{aligned} \begin{aligned} y_0&= s_1(x_1,x_2,x_3) = \frac{x_1-x_2 \left( t^\alpha + \sum \limits _{m=1}^M\left[ \frac{\Gamma (\alpha +1) t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum \limits _{i=0}^m x_3^i (-1)^i {\hat{G}}_{m,i}\right) \right] \right) }{1+\sum \limits _{m=1}^M\left[ \frac{t^{2\alpha m}}{\Gamma (2\alpha m +1)} \left( \sum \limits _{i=0}^{m-1} x_3^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] },\\ y_1&= s_2(x_1,x_2,x_3) = x_2,\\ \lambda&= s_3(x_1,x_2,x_3) = x_3. \end{aligned} \end{aligned}$$

The absolute value of the Jacobian of the transformation \(\textbf{s}\) is given by

$$\begin{aligned} |J| = \left| \frac{\partial s_1(x_1,x_2,x_3)}{\partial x_1} \right| = \frac{1}{\left| 1+\sum \limits _{m=1}^M \left[ \frac{t^{2 \alpha m }}{\Gamma (2 \alpha m +1)}\left( \sum \limits _{i=0}^{m-1}x_3^{i+1}(-1)^{i+1} G_{m-1,i} \right) \right] \right| }. \end{aligned}$$

Applying Theorem 4.1, the PDF of the random vector \(\textbf{X} = (X_1,X_2,X_3)\) is given by

$$\begin{aligned} \begin{aligned}&f_{X_1,X_2,X_3} (x_1, x_2, x_3) \\&\quad = f_{Y_0,Y_1,\lambda } \left( \frac{x_1-x_2 \left( t^\alpha + \sum \limits _{m=1}^M\left[ \frac{\Gamma (\alpha +1) t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum \limits _{i=0}^m x_3^i (-1)^i {\hat{G}}_{m,i}\right) \right] \right) }{1+\sum \limits _{m=1}^M\left[ \frac{t^{2\alpha m}}{\Gamma (2\alpha m +1)} \left( \sum \limits _{i=0}^{m-1} x_3^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] }, x_2,x_3 \right) \\&\qquad \cdot \frac{1}{\left| 1+\sum \limits _{m=1}^M \left[ \frac{t^{2 \alpha m }}{\Gamma (2 \alpha m +1)}\left( \sum \limits _{i=0}^{m-1}x_3^{i+1}(-1)^{i+1} G_{m-1,i} \right) \right] \right| }. \end{aligned} \end{aligned}$$

Marginalizing with respect to \(X_2 = Y_1\) and \(X_3 = \lambda \), we obtain the 1-PDF of the approximate solution, \(Y_M(t)\),

$$\begin{aligned} \begin{aligned}&f_{Y_M(t)}(y) = f_{X_1}(y) = \int _{-\infty }^{\infty }\int _{-\infty }^{\infty }f_{X_1,Y_1,\lambda } (y, y_1, \lambda )\, \textrm{d}y_1\, \textrm{d}\lambda \\&\quad = \int _{-\infty }^{\infty }\int _{-\infty }^{\infty } f_{Y_0} \left( \frac{y-y_1 \left( t^\alpha + \sum \limits _{m=1}^M\left[ \frac{\Gamma (\alpha +1) t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum \limits _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i}\right) \right] \right) }{1+\sum \limits _{m=1}^M\left[ \frac{t^{2\alpha m}}{\Gamma (2\alpha m +1)} \left( \sum \limits _{i=0}^{m-1} \lambda ^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] }\right) \\&\qquad \cdot f_{Y_1}(y_1) f_{\lambda }(\lambda )\frac{1}{\left| 1+\sum \limits _{m=1}^M \left[ \frac{t^{2 \alpha m }}{\Gamma (2 \alpha m +1)}\left( \sum \limits _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i} \right) \right] \right| } \, \textrm{d}y_1 \, \textrm{d}\lambda . \end{aligned} \nonumber \\ \end{aligned}$$
(29)
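For a fixed pair (t, y), the double integral (29) can be approximated by standard numerical quadrature. The sketch below does so under assumed input distributions consistent with H1 and H2; the value of t is kept small so that the denominator in (29) stays away from zero (cf. the restriction to a neighborhood of \(t_0=0\) discussed in Sect. 4.2):

```python
# Illustrative numerical evaluation of (29) at a single point (t, y) by quadrature,
# under assumed input distributions consistent with H1-H2: Y0 ~ N(1, 0.1^2),
# Y1 ~ N(0, 0.2^2) and lambda ~ U(0.5, 1.5). t is kept small so that the
# denominator in (29) stays away from zero.
import numpy as np
from scipy.integrate import dblquad
from scipy.special import gamma
from scipy.stats import norm, uniform

alpha, M, t, y = 0.8, 20, 0.5, 0.75
f_Y0, f_Y1 = norm(1.0, 0.1).pdf, norm(0.0, 0.2).pdf
f_lam = uniform(0.5, 1.0).pdf              # uniform density on [0.5, 1.5]

def S0_S1(lam):
    # truncated factors multiplying Y0 and Y1 in (23), for a given value of lambda
    S0, S1, pe, po = 1.0, t ** alpha, 1.0, 1.0
    for m in range(1, M + 1):
        if m > 1:
            pe *= 2 * gamma(2 * (m - 1) * alpha + 1) / gamma((2 * m - 3) * alpha + 1) - lam
        S0 -= lam * pe / gamma(2 * alpha * m + 1) * t ** (2 * m * alpha)
        po *= 2 * gamma((2 * m - 1) * alpha + 1) / gamma((2 * m - 2) * alpha + 1) - lam
        S1 += gamma(alpha + 1) / gamma((2 * m + 1) * alpha + 1) * po * t ** ((2 * m + 1) * alpha)
    return S0, S1

def integrand(y1, lam):                    # integrand of (29)
    S0, S1 = S0_S1(lam)
    return f_Y0((y - y1 * S1) / S0) * f_Y1(y1) * f_lam(lam) / abs(S0)

pdf_value, _ = dblquad(integrand, 0.5, 1.5, -1.0, 1.0)   # lambda outer, y1 inner (+-5 sd)
print(pdf_value)                                          # approximation of f_{Y_M(t)}(y)
```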

4.2 Convergence of approximations of the 1-PDF

This subsection is devoted to showing that \(f_{Y_M(t)}(y)\longrightarrow f_{Y(t)}(y)\) as \(M\rightarrow \infty \) under mild conditions. Note that \(f_{Y_M(t)}(y)\) is given by (29), while the limit is given by

$$\begin{aligned} \begin{aligned} f_{Y(t)}(y)&= \int _{-\infty }^{\infty }\int _{-\infty }^{\infty } f_{Y_0} \left( \frac{y-y_1 \left( t^\alpha + \sum \limits _{m=1}^\infty \left[ \frac{\Gamma (\alpha +1) t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum \limits _{i=0}^m \lambda ^i (-1)^i {\hat{G}}_{m,i}\right) \right] \right) }{1+\sum \limits _{m=1}^\infty \left[ \frac{t^{2\alpha m}}{\Gamma (2\alpha m +1)} \left( \sum \limits _{i=0}^{m-1} \lambda ^{i+1} (-1)^{i+1} G_{m-1,i}\right) \right] }\right) \\&\quad \cdot f_{Y_1}(y_1) f_{\lambda }(\lambda ) \frac{1}{\left| 1+\sum \limits _{m=1}^\infty \left[ \frac{t^{2 \alpha m }}{\Gamma (2 \alpha m +1)}\left( \sum \limits _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1} G_{m-1,i} \right) \right] \right| } \textrm{d}y_1 \textrm{d}\lambda . \end{aligned} \nonumber \\ \end{aligned}$$
(30)

For the sake of clarity in the subsequent development, we first introduce the following notation.

$$\begin{aligned} \begin{aligned} S_0^M(t)&= 1+\sum _{m=1}^M \left[ \frac{t^{2\alpha m }}{\Gamma (2 \alpha m +1)} \left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1}G_{m-1,i}\right) \right] ,\\ S_0(t)&= 1+\sum _{m\ge 1} \left[ \frac{t^{2\alpha m }}{\Gamma (2 \alpha m +1)} \left( \sum _{i=0}^{m-1}\lambda ^{i+1}(-1)^{i+1}G_{m-1,i}\right) \right] ,\\ S_1^M(t)&= t^\alpha + \sum _{m=1}^M \left[ \frac{\Gamma (\alpha +1)\,t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i {\hat{G}}_{m,i}\right) \right] ,\\ S_1(t)&= t^\alpha + \sum _{m\ge 1} \left[ \frac{\Gamma (\alpha +1)\,t^{(2m+1)\alpha }}{\Gamma ((2m+1)\alpha +1)}\left( \sum _{i=0}^m \lambda ^i(-1)^i {\hat{G}}_{m,i}\right) \right] . \end{aligned} \end{aligned}$$
(31)

Then, expressions (29) and (30) read

$$\begin{aligned} \begin{aligned} f_{Y_M(t)}(y)&= \int _{\mathbb {R}^2} f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) f_{Y_1}(y_1)f_{\lambda }(\lambda ) \left| \frac{1}{S_0^M(t)}\right| \textrm{d}y_1\, \textrm{d}\lambda ,\\ f_{Y(t)}(y)&= \int _{\mathbb {R}^2} f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)}\right) f_{Y_1}(y_1)f_{\lambda }(\lambda ) \left| \frac{1}{S_0(t)}\right| \textrm{d}y_1\, \textrm{d}\lambda . \end{aligned} \end{aligned}$$
(32)
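For readers who wish to evaluate (32) numerically, the following Python sketch approximates \(f_{Y_M(t)}(y)\) by a double quadrature over \((y_1,\lambda )\). It is a minimal illustration, not the code used in Sect. 5: the helpers S0M and S1M, which evaluate the truncated series \(S_0^M(t)\) and \(S_1^M(t)\) for a given realization of \(\lambda \) (they depend on the coefficients \(G_{m,i}\) and \({\hat{G}}_{m,i}\) introduced earlier), are assumed to be supplied by the user, as are the densities of \(Y_0\), \(Y_1\), \(\lambda \) and suitable finite integration limits.

```python
from scipy import integrate

def pdf_YM(y, t, f_Y0, f_Y1, f_lam, S0M, S1M, y1_lim, lam_lim):
    """Approximate f_{Y_M(t)}(y) in (32) by double quadrature.

    S0M(t, lam) and S1M(t, lam) are assumed (hypothetical) helpers that
    evaluate the truncated series S_0^M(t) and S_1^M(t) for a given
    realization lam of the random coefficient; f_Y0, f_Y1 and f_lam are
    the densities of Y_0, Y_1 and lambda, and y1_lim, lam_lim are finite
    integration limits covering (essentially all of) their supports.
    """
    def integrand(lam, y1):          # inner variable: lam, outer: y1
        s0 = S0M(t, lam)
        s1 = S1M(t, lam)
        return f_Y0((y - y1 * s1) / s0) * f_Y1(y1) * f_lam(lam) / abs(s0)

    val, _err = integrate.dblquad(integrand,
                                  y1_lim[0], y1_lim[1],
                                  lambda y1: lam_lim[0],
                                  lambda y1: lam_lim[1])
    return val
```

With the data of Example 5.1 below, for instance, one would pass the Ga(1, 1), N(2, 2) and Be(2, 3) densities, take lam_lim = (0, 1) (the support of the Beta distribution) and a y1 range covering several standard deviations of \(Y_1\).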

Before proceeding with the proof, it is important to make the following observations. Note that, with the notation of (31), the solution (22) is given by \(Y(t)= Y_0S_0(t)+Y_1S_1(t)\).

If \(Y_{0}\ne 0\), then

$$\begin{aligned}Y_0 = Y(0) = Y_0S_0(0)+Y_1S_1(0) = Y_0S_0(0),\end{aligned}$$

and, since \(S_{1}(0)=0\), it follows that \(S_0(0)=1\) with probability 1. Taking into account that \(S_0(t)\) is a power series in \(t^{2 \alpha }\), and consequently continuous, we can guarantee that

$$\begin{aligned} \exists \delta _0>0: \, 0<m_{s,0} \le \min \{|S_0^M(t)|,|S_0(t)|\}, \, \, \forall t: |t|\le \delta _0, \quad \forall \,\, \text {integer}\,\, M \ge 0. \end{aligned}$$
(33)

Moreover, by definition (31), \(S_0^M(t)\) and \(S_1^M(t)\) are series that converge on the whole real line. Thus, these series are almost surely uniformly convergent on every compact subset of \(\mathbb {R}\). This guarantees that, for \(j=0,1\),

$$\begin{aligned} \exists M_{s,j}>0: \max \{|S_j^M(t)|,|S_j(t)|\} \le M_{s,j}, \, \, \forall t: |t|\le \delta _0, \quad \forall \,\, \text {integer}\,\, M \ge 0. \end{aligned}$$
(34)

Finally, note that \(S_0^M(t)\) and \(S_1^M(t)\) converge uniformly to \(S_0(t)\) and \(S_1(t)\) on \([-\delta _0, \delta _0]\), respectively. So, taking \(\varepsilon _j>0\), \(j=0,1\), arbitrary but fixed, there exists an integer \(M_0^{j}>0\) such that

$$\begin{aligned} |S_j^M(t)-S_j(t)|<\varepsilon _j, \, \forall M \ge M_0^{j} \text { integer and } \, \, \forall t: |t|\le \delta _0. \end{aligned}$$
(35)

To carry out the proof, we fix t assuming that it lies within a neighborhood of \(t_0=0\), where the RFIVP is formulated and where the bounds (33) and (34) hold. To prove that \(f_{Y_M(t)}(y) \longrightarrow f_{Y(t)}(y)\) as \(M \rightarrow \infty \), besides hypotheses H1 and H2, we will assume that

  • H3: The PDF, \(f_{Y_0}\), of the initial condition \(Y_0\) is Lipschitz on the whole real line, \(\mathbb {R}\), i.e., there exists \(L_0>0\) such that

    $$\begin{aligned} \left| f_{Y_0}(x)-f_{Y_0}(z)\right| \le L_0 |x-z|,\quad \forall x,z\in \mathbb {R}. \end{aligned}$$

To prove the convergence, we bound the difference \(\left| f_{Y(t)}(y)-f_{Y_M(t)}(y)\right| \) using (32).

$$\begin{aligned}&\left| f_{Y(t)}(y)-f_{Y_M(t)}(y)\right| \nonumber \\&\quad =\left| \int _{\mathbb {R}^2} \left( f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)}\right) \frac{1}{|S_0(t)|}-f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)} \right) \frac{1}{|S_0^M(t)|} \right) f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\lambda \, \textrm{d}y_1\right| \nonumber \\&\quad \le \int _{\mathbb {R}^2} \left| f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)}\right) \frac{1}{|S_0(t)|}-f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \frac{1}{|S_0^M(t)|}\right| f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\lambda \, \textrm{d}y_1 \nonumber \\&\quad = \int _{\mathbb {R}^2} \left| f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)}\right) \frac{1}{|S_0(t)|}-f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \frac{1}{|S_0(t)|}\right. \nonumber \\&\qquad \left. +f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \frac{1}{|S_0(t)|}-f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \frac{1}{|S_0^M(t)|}\right| f_{Y_1}(y_1) f_{\lambda }(\lambda ) \textrm{d}\lambda \, \textrm{d}y_1 \nonumber \\&\quad \le \int _{\mathbb {R}^2} \left\{ \underbrace{\left| f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)} \right) - f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| }_{\text {(I)}} \underbrace{ \frac{1}{|S_0(t)|}}_{\text {(II)}} \right. \nonumber \\&\qquad \left. +\underbrace{\left| f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| }_{\text {(III)}} \underbrace{\left| \frac{1}{|S_0(t)|}- \frac{1}{|S_0^M(t)|}\right| }_{\text {(IV)}} \right\} f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\lambda \, \textrm{d}y_1. \end{aligned}$$
(36)

Now, we proceed to bound the terms (I)–(IV) in (36). Let us start with term (III). Denoting \(F_0:=f_{Y_0}(0)\), and using hypothesis H3 together with the bounds (33) and (34) for \(S_0^M\) and \(S_1^M\), respectively, one gets

$$\begin{aligned} \text {(III)}&= \left| f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| = \left| f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) - f_{Y_0}(0)+F_0\right| \nonumber \\&\le \left| f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) - f_{Y_0}(0) \right| +F_0\nonumber \\&\le L_0 \left| \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right| +F_0\nonumber \\&\le \frac{L_0}{m_{s,0}} \left( |y|+ |y_1|M_{s,1}\right) +F_0. \end{aligned}$$
(37)

Using the bound (33) for \(S_0^M\) and (35) for \(j=0\), the term (IV) can be majorized by

$$\begin{aligned} \text {(IV)}&= \left| \frac{1}{|S_0(t)|}-\frac{1}{|S_0^M(t)|}\right| = \frac{\left| |S_0^M(t)|- |S_0(t)| \right| }{|S_0(t)||S_0^M(t)|} \le \frac{\left| S_0^M(t)- S_0(t) \right| }{|S_0(t)||S_0^M(t)|} \le \frac{\varepsilon _0}{m_{s,0}^2}. \end{aligned}$$
(38)

The bound for term (II) follows straightforwardly from (33):

$$\begin{aligned} \text {(II)}=\frac{1}{|S_0(t)|}\le \frac{1}{m_{s,0}}. \end{aligned}$$
(39)

Finally, we proceed to bound the term (I). To this end, we first apply hypothesis H3

$$\begin{aligned} \text {(I)}= & {} \left| f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)} \right) - f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| \nonumber \\\le & {} L_0 \left| \frac{y-y_1S_1(t)}{S_0(t)} - \frac{y-y_1S_1^M(t)}{S_0^M(t)} \right| \nonumber \\\le & {} L_0 \left| \frac{y S_0^M(t)- y_1 S_1(t) S_0^M(t)-y S_0(t)+y_1S_0(t)S_1^M(t)}{S_0(t)S_0^M(t)}\right| \nonumber \\= & {} L_0 \left| \frac{y\left( S_0^M(t)-S_0(t) \right) +y_1 \left( S_0(t)S_1^M(t)-S_1(t)S_0^M(t) \right) }{S_0(t)S_0^M(t)}\right| \nonumber \\\le & {} L_0 \left( \frac{|y| |S_0^M(t)-S_0(t)|+|y_1||S_0(t)S_1^M(t)-S_1(t)S_0^M(t)|}{|S_0(t)||S_0^M(t)|} \right) \nonumber \\= & {} L_0 \left( \frac{|y| |S_0^M(t)-S_0(t)|+|y_1||S_0(t)S_1^M(t)-S_0(t)S_1(t)+S_0(t)S_1(t)-S_1(t)S_0^M(t)|}{|S_0(t)||S_0^M(t)|} \right) \nonumber \\\le & {} L_0 \left( \frac{|y| |S_0^M(t)-S_0(t)|+|y_1|\left( |S_0(t)| |S_1^M(t)-S_1(t)|+|S_1(t)| |S_0(t)-S_0^M(t)| \right) }{|S_0(t)||S_0^M(t)|} \right) \nonumber \\\le & {} L_0\left( \frac{|y|\varepsilon _0+|y_1|(M_{s,0}\varepsilon _1+M_{s,1}\varepsilon _0)}{m_{s,0}^2} \right) , \end{aligned}$$
(40)

where in the last step we have applied (35) and (34), both for \(j=0,1\), and (33) for \(S_0^M\) and \(S_0\).

Substituting (40), (39), (37) and (38) into (36) to bound the terms (I)–(IV), respectively, one gets

$$\begin{aligned}{} & {} \left| f_{Y(t)}(y)-f_{Y_M(t)}(y)\right| \\{} & {} \quad \le \int _{\mathbb {R}^2} \left\{ \left| f_{Y_0}\left( \frac{y-y_1S_1(t)}{S_0(t)} \right) - f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| \frac{1}{|S_0(t)|} \right. \\{} & {} \qquad \left. +\left| f_{Y_0}\left( \frac{y-y_1S_1^M(t)}{S_0^M(t)}\right) \right| \left| \frac{1}{|S_0(t)|}- \frac{1}{|S_0^M(t)|}\right| \right\} f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\,\lambda \textrm{d}y_1\\{} & {} \quad \le \int _{\mathbb {R}^2}L_0 \left\{ \left( \frac{|y|\varepsilon _0+|y_1|(M_{s,0}\varepsilon _1+M_{s,1}\varepsilon _0)}{m_{s,0}^2} \right) \frac{1}{m_{s,0}} \right. \\{} & {} \qquad \left. + \left( \frac{L_0}{m_{s,0}} \left( |y|+ |y_1|M_{s,1}\right) +F_0 \right) \frac{\varepsilon _0}{m_{s,0}^2} \right\} f_{Y_1}(y_1) f_{\lambda }(\lambda ) \textrm{d}\lambda \textrm{d}y_1.\\ \end{aligned}$$

Let us denote \(\mathcal {M}=\max \{M_{s,0},M_{s,1}\}\) and \(\varepsilon = \max \{\varepsilon _{0},\varepsilon _{1}\}\). Then,

$$\begin{aligned}{} & {} \left| f_{Y(t)}(y)-f_{Y_M(t)}(y)\right| \\{} & {} \quad \le \int _{\mathbb {R}^2} \left\{ L_0 \left( \frac{|y|\varepsilon +2 \mathcal {M} \varepsilon |y_1|}{m_{s,0}^3} \right) \right. \\{} & {} \qquad \left. + \left( \frac{L_0}{m_{s,0}} \left( |y|+ |y_1|\mathcal {M}\right) +F_0 \right) \frac{\varepsilon }{m_{s,0}^2} \right\} f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\lambda \,\textrm{d}y_1\\{} & {} \quad = \int _{\mathbb {R}^2} \left\{ \frac{L_0 \varepsilon }{m_{s,0}^3} |y| + \frac{2L_0 \mathcal {M} \varepsilon }{m_{s,0}^3} |y_1| + \frac{L_0 \varepsilon }{m_{s,0}^3}|y| \right. \\{} & {} \qquad \left. +\frac{L_0 \mathcal {M} \varepsilon }{m_{s,0}^3} |y_1| + \frac{F_0 \varepsilon }{m_{s,0}^2} \right\} f_{Y_1}(y_1) f_{\lambda }(\lambda )\, \textrm{d}\lambda \,\textrm{d}y_1\\{} & {} \quad = \left( \frac{2 L_0 \varepsilon }{m_{s,0}^3} |y| + \frac{F_0 \varepsilon }{m_{s,0}^2}\right) \int _{\mathbb {R}^2} f_{Y_1}(y_1) f_{\lambda }(\lambda )\,\textrm{d}y_1\, \textrm{d}\lambda \\{} & {} \qquad + \left( \frac{3 L_0 \mathcal {M} \varepsilon }{m_{s,0}^3}\right) \int _{\mathbb {R}^2} |y_1| f_{Y_1}(y_1) f_{\lambda }(\lambda )\,\textrm{d}y_1\, \textrm{d}\lambda \\{} & {} \quad = \left( \frac{2 L_0 \varepsilon }{m_{s,0}^3} |y| + \frac{F_0 \varepsilon }{m_{s,0}^2}\right) + \left( \frac{3 L_0 \mathcal {M} \varepsilon }{m_{s,0}^3}\right) \mathbb {E}[|Y_1|]\\{} & {} \quad = \varepsilon \left( \frac{2 L_0}{m_{s,0}^3} |y|+\frac{F_0}{m_{s,0}^2}+\frac{3 L_0 \mathcal {M}}{m_{s,0}^3} \mathbb {E}[|Y_1|] \right) . \end{aligned}$$
Fig. 1  Mean and standard deviation of the solution for different orders of truncation \(M\in \{5,7,10,12,15\}\) in the context of Example 5.1. Convergence of these two statistical moments is clearly observed as M increases

Fig. 2  1-PDF of the solution, (29), for different \(t\in \{0.25,0.5,0.75\}\) in the context of Example 5.1, considering different orders of truncation \(M\in \{2,3,4,5,6\}\)

Fig. 3  Mean and standard deviation, (24) and (28), respectively, in the context of Example 5.2 considering different orders of truncation \(M \in \{7, 10, 12, 15, 17\}\) on the interval \(t\in [0,1]\)

Fig. 4  1-PDF of the solution, \(f_{Y_M(t)}(y)\), given in (29), for different \(t\in \{0.25,0.5,0.75\}\) in the context of Example 5.2, considering different orders of truncation \(M\in \{ 4,5,7,10,12 \}\)

Since, by hypothesis H2, \(Y_1\in \textrm{L}^2(\Omega )\), Schwarz's inequality gives \(\mathbb {E}[|Y_1|]\le \left( \mathbb {E}[|Y_1|^2]\right) ^{1/2}<\infty \). Then, as \(\varepsilon =\max \{\varepsilon _0,\varepsilon _1\}\) can be taken arbitrarily small by letting \(M\rightarrow \infty \), we conclude from the previous bound that \(f_{Y_M(t)}(y) \longrightarrow f_{Y(t)}(y)\) as \(M \rightarrow \infty \).

5 Numerical examples

This section is devoted to illustrating the theoretical findings established in the previous sections by means of two numerical examples. These examples are devised with regard to the probability distribution of the model parameter \(\lambda \), which, according to hypothesis H1, is assumed to be an essentially bounded random variable. In the first example, we assume that \(\lambda \) has a bounded distribution. In the second example, we illustrate how the case where \(\lambda \) is an unbounded random variable can be treated by approximating it with truncated random variables for which hypothesis H1 holds. In this latter case, we graphically show the convergence of the approximations of the 1-PDF of the solution stochastic process.

Example 5.1

In this first example, let us consider that the order of the fractional derivative is \(\alpha =0.5\). We will assume the following probability distributions for the model input parameters: \(Y_0\) has a Gamma distribution with parameters (1, 1), i.e., \(Y_0\sim \text {Ga}(1,1)\) (hence, \(\mathbb {E}[Y_0]=1\) and \(\mathbb {E}[Y_0^2]=2\)); \(Y_1\) has a Gaussian distribution with mean 2 and standard deviation \(\sqrt{2}\), i.e., \(Y_1\sim \text {N}(2,(\sqrt{2})^2)\) (hence, \(\mathbb {E}[Y_1]=2\) and \(\mathbb {E}[Y_1^2]=6\)); and \(\lambda \) has a Beta distribution with parameters (2, 3), i.e., \(\lambda \sim \text {Be}(2,3)\). According to (24) and (28), to compute the mean and the second-order moment of the solution, besides the first two moments of \(Y_0\) and \(Y_1\), it is also required to pre-compute the higher-order moments \(\mathbb {E}[\lambda ^k]\), \(k\in \mathbb {N}\), which are explicitly known in the case \(\lambda \sim \text {Be}(2,3)\):

$$\begin{aligned}\mathbb {E}[\lambda ^k] = \prod _{r=0}^{k-1} \frac{2+r}{2+3+r}.\end{aligned}$$
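As a quick sanity check of this product formula, the following Python sketch (illustrative only; not part of the original computations) evaluates \(\mathbb {E}[\lambda ^k]\) for \(\lambda \sim \text {Be}(2,3)\) and compares the result with the raw moment returned by scipy.

```python
import numpy as np
from scipy import stats

def beta23_moment(k):
    """E[lambda^k] for lambda ~ Be(2, 3) via the closed-form product."""
    return np.prod([(2 + r) / (5 + r) for r in range(k)])

# Cross-check against scipy's Beta(2, 3) distribution.
lam = stats.beta(2, 3)
for k in range(1, 6):
    print(k, beta23_moment(k), lam.moment(k))  # both give 0.4, 0.2, ...
```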

In Fig. 1 we can observe, over the time interval \(t\in [0,1]\), the mean and the standard deviation of the solution for different orders of truncation \(M \in \{5,7,10,12,15\}\). To illustrate the convergence as M increases more clearly, in each subfigure a zoom has been made at time instants, t, close to 1, which is where the graphs can be distinguished.

In Fig. 2, the 1-PDF of the solution, \(f_{Y_M(t)}(y)\), given in (29), has been plotted for different orders of truncation, \(M\in \{2,3,4,5,6\}\), and time instants, \(t\in \{0.25,0.5,0.75\}\). We can see graphically the convergence of the 1-PDFs, studied in Sect. 4.2, as M increases. To better visualize this convergence, in each subplot a zoom has been performed around the maximum of these functions. From the symmetry of the 1-PDFs, we can infer that the mean is close to the point y where the maximum of the function occurs. Taking advantage of this zoom, we can verify that the mean estimated from Fig. 2 matches the mean obtained in Fig. 1.
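A simple way to cross-check the series-based statistics of Fig. 1 is a Monte Carlo experiment: sample \((Y_0, Y_1, \lambda )\) from the distributions of this example, evaluate the truncated solution \(Y_M(t)= Y_0S_0^M(t)+Y_1S_1^M(t)\), and compute sample moments, which should reproduce the curves up to sampling error. The Python sketch below is illustrative only; it assumes hypothetical, vectorized helpers S0M and S1M that evaluate the truncated series for given values of \(\lambda \).

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_mean_std(t, M, S0M, S1M, n=100_000):
    """Monte Carlo estimate of E[Y_M(t)] and Std[Y_M(t)] for Example 5.1.

    S0M(t, lam, M) and S1M(t, lam, M) are assumed helpers evaluating the
    truncated series S_0^M(t) and S_1^M(t) elementwise on an array lam.
    """
    y0 = rng.gamma(shape=1.0, scale=1.0, size=n)         # Y_0 ~ Ga(1, 1)
    y1 = rng.normal(loc=2.0, scale=np.sqrt(2), size=n)    # Y_1 ~ N(2, 2)
    lam = rng.beta(2.0, 3.0, size=n)                      # lambda ~ Be(2, 3)
    yM = y0 * S0M(t, lam, M) + y1 * S1M(t, lam, M)        # truncated solution
    return yM.mean(), yM.std(ddof=1)
```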

Example 5.2

As mentioned before, the objective of this second example is to illustrate an approximation of the case where the random variable \(\lambda \) is not bounded. To this end, \(\lambda \) is truncated on an interval containing a high percentage of the probability mass. It is important to remark that this approach only approximates the original problem; nevertheless, the more probability mass the truncation interval contains, the better the approximation will be.

On the one hand, we consider that \(\lambda \) has a truncated Gaussian distribution with mean 0 and standard deviation 0.2 on the interval \([-100,100]\), i.e., \(\lambda \sim \textrm{N}_{[-100,100]}(0,0.2^2)\). The truncation of a \(\textrm{N}(0,0.2^2)\) distribution over the interval \([-100,100]\) captures \(99.9999\%\) of the total probability mass.

On the other hand, we will assume that the order of the derivative is \(\alpha =0.4\). We will assume that \(Y_0\) has an Exponential distribution with parameter 2, i.e., \(Y_0\sim \textrm{Exp}(2)\). For the random variable \(Y_1\), we will assume that it has a Beta distribution with parameters (2, 4), i.e., \(Y_1\sim \textrm{Be}(2,4)\). The first two moments of \(Y_0\) and \(Y_1\), required to compute the mean and the standard deviation, are then \(\mathbb {E}[Y_0]=1/2\), \(\mathbb {E}[Y_0^2]=1/2\), \(\mathbb {E}[Y_1]=1/3\) and \(\mathbb {E}[Y_1^2]=1/7\). It is also necessary to know the higher-order moments of the random variable \(\lambda \sim \textrm{N}_{[-100,100]}(0,0.2^2)\). Note that they can be calculated by

$$\begin{aligned} \mathbb {E}[\lambda ^k]=\int _{-100}^{100} \lambda ^k f_{\lambda }(\lambda )\, \textrm{d} \lambda ,\quad k=1,2,\ldots , \end{aligned}$$

where

$$\begin{aligned} f_{\lambda }(\lambda )=\frac{e^{-\frac{1}{2}\left( \frac{\lambda }{0.2}\right) ^2}}{\int _{-100}^{100} e^{-\frac{1}{2}\left( \frac{\lambda }{0.2}\right) ^2}\, \textrm{d} \lambda },\quad -100\le \lambda \le 100. \end{aligned}$$

These values closely approximate the moments of the untruncated Gaussian distribution,

$$\begin{aligned} \mathbb {E}[\text {N}(0,\sigma )^k] =\left\{ \begin{aligned} 0&\,\,\,\, \text { if } k \text { is odd,}\\ \sigma ^k (k-1)!!&\,\,\,\, \text { if } k \text { is even,} \end{aligned} \right. \end{aligned}$$

where \((k-1)!!\) denotes the double factorial, i.e., the product of all positive integers from \(k-1\) down to 1 having the same parity as \(k-1\). Here, \(\sigma =0.2\). This approximation is based on the fact that, according to Chebyshev’s inequality, the truncated Gaussian random variable captures \(99.9999\%\) of the probability mass of the original Gaussian random variable \(\textrm{N}(0,0.2^2)\).
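The following Python sketch (again, an illustrative cross-check under the stated truncation, not the code used for the figures) computes \(\mathbb {E}[\lambda ^k]\) by the quadrature above and compares it with the closed form \(\sigma ^k (k-1)!!\) of the untruncated Gaussian for even k.

```python
import numpy as np
from scipy import integrate
from scipy.special import factorial2

SIGMA, A, B = 0.2, -100.0, 100.0

def kernel(x):
    """Unnormalized truncated Gaussian kernel exp(-0.5 (x/sigma)^2)."""
    return np.exp(-0.5 * (x / SIGMA) ** 2)

# Normalizing constant of the truncated density on [A, B]; the hint
# points=[0.0] helps the adaptive rule resolve the narrow peak at 0.
Z, _ = integrate.quad(kernel, A, B, points=[0.0], limit=200)

def truncated_moment(k):
    """E[lambda^k] for lambda ~ N_{[A,B]}(0, SIGMA^2) by quadrature."""
    num, _ = integrate.quad(lambda x: x**k * kernel(x), A, B,
                            points=[0.0], limit=200)
    return num / Z

for k in range(1, 7):
    closed_form = 0.0 if k % 2 else SIGMA**k * factorial2(k - 1)
    print(k, truncated_moment(k), closed_form)
```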

In Fig. 3, we show the approximations of the mean and the standard deviation of the solution for \(t\in [0,1]\) considering different orders of truncation, \(M \in \{7, 10, 12, 15, 17\}\). As in the previous example, to better show the convergence as M increases, we have magnified the plot about \(t=1\), where the discrepancies could be greater. We can see that the approximations are very accurate.

In Fig. 4, plots of the 1-PDF at times \(t\in \{0.25,0.5,0.75\}\) for different orders of truncation have been included. A zoom has been added at the maximum of each plot to better show graphically the convergence proved in Sect. 4.2.

6 Conclusions

In this paper, we have presented a comprehensive analysis of the fractional Hermite differential equation with uncertainties in all its data (coefficient and initial conditions). Our study has been based on the so-called random differential equation approach. To carry out the study, we have first constructed a random generalized power series solution and proved that it is mean square convergent under mild hypotheses on the data. Second, we have taken advantage of a key property of mean square convergence to approximate the mean and the variance of the solution. Afterwards, we have constructed approximations of the first probability density function of the solution using the so-called Probability Transformation Method. We have also shown that these approximations are convergent under assumptions that are fulfilled in many practical applications.

The main spirit of the paper is to continue developing new results in the setting of Fractional Calculus with uncertainty, where results for random fractional differential equations are still scarce. In this sense, the results presented here for the random fractional Hermite equation may inspire extensions of our analysis to other significant random fractional second-order differential equations in forthcoming contributions. Furthermore, the ideas developed in this contribution may help to extend the deterministic theory for other types of polynomials, such as Cesarano (2014), Cesarano et al. (2014), Cesarano et al. (2005) and Quintana et al. (2018), to the fractional random framework.