1 Introduction

The notion of random times and the resulting fractional evolution equations are widely discussed in stochastics and mathematical physics. But first of all we should ask ourselves why and how this notion appeared in the study of dynamical phenomena. The concept of time is a necessary element of everything studied in any area of human activity, in any science. This notion plays a key role in physics, biology, philosophy, sociology and many other areas. Starting from the Greek philosophers, the notion of time has been the subject of active discussions which, however, contain many controversies. The most general observation here is probably that time is created by the evolution of a system. For systems in a “frozen" state, time is absent as a characteristic. As was pointed out by Pascal, each individual system needs its own time realization. In fact, we can think of the general concept of time as an idea from Plato’s world of forms. According to this theory of forms, visible objects and events are shadows of their ideal and perfect counterparts.

The necessity of considering specific time realizations for particular systems is undoubtedly not new. For example, Vladimir Vernadsky spoke about a special biological time for living matter. In physics, especially in mechanics, we deal with the Newtonian notion of time. This time runs uniformly throughout the course of the evolution and actually plays the role of an additional independent variable in the equations of motion. This interpretation gave rise to the huge theory of evolution equations in the PDE area. A related observation in the study of particular systems is that time has different scales: the lifetime scales of bacteria and elephants are unsurprisingly different. But Newtonian time, even in different scales, is not sufficient for more complex systems.

Let us consider a particular model to motivate our considerations. We would like to study the evolution of a model from plant ecology. This model is characterized by biological properties of the plants under consideration, such as fecundity and establishment coefficients, mortality, parameters of competition for resources, etc. Yet we shall also take into account the environment’s influence on the system. Namely, the model is affected by abiotic factors such as light availability, temperature, wind, rain, season, etc. Since these influences behave randomly, the evolution of the system shall include such random effects. One possible approach is the following. We first construct the dynamics of the model in the absence of the environment; this dynamics may be stochastic (e.g., Markov) or deterministic (e.g., a dynamical system). Then we include the influence of the environment via a random time change in the initial dynamics. In this manner we may effectively include environmental behavior in the evolution. The use of random times amounts to considering particular shadows of the time idea, namely monotone stochastic processes inserted into the previously given dynamics. Note that a random time change may affect the dynamics in multiple ways: for some particular models, random times produce an effective friction for the moving particles, they may also give rise to trapping behavior, etc.

This is certainly not the only way to study models which interact with an environment. The variety of such systems is extremely rich, and to hope for a universal approach to their analysis would be too naive.

The aim of this paper is to give a brief overview of recent studies in the application of the random time concept to dynamical problems. In our explanation we refer to our recent works in the references, in particular to [5].

The idea of considering stochastic processes with general random times goes back at least to the classical book by Gikhman and Skorokhod [11]. In the case of Markov processes, time changes by subordinators were already considered by Bochner in [4], who showed that the result is again a Markov process, the so-called Bochner subordinated Markov process. A more interesting scenario arises when analyzing the case of inverse subordinators. Indeed, after such a time change we no longer obtain a Markov process, and the study of the resulting processes becomes really challenging. From this perspective, let us recall the work by Montroll and Weiss [26], where the authors consider the physically motivated case of random walks in random time. This seminal paper originated a wide research activity related to the study of Markov processes with inverse stable subordinators as random time changes; see the book [25] for a detailed review and historical comments.

It is worth mentioning that for processes with random time changes which are neither subordinators nor inverse subordinators, we can only rely on a few results, the overall field having been much less investigated.

On the one hand, additional assumptions on the subordinator considerably reduce the set of time change processes we can count on, resulting in restrictive assumptions for possible applications. On the other hand, we encounter technical difficulties in handling general inverse subordinators. Such limitations can be overcome for certain sub-classes of inverse subordinators, see, e.g., [19, 20]. Let us underline that the random time change approach turns out to be a very effective tool in modeling several physical systems, spanning from ecological to biological ones, see, e.g., [23] and references therein, also in view of additional applications.

There is a natural question concerning the use of a random time change not only in stochastic dynamics but also for a wider class of dynamical problems. In this paper we focus on the analysis of the latter task in the case of dynamical systems taking values in \({{\mathbb {R}}^d}\). In particular, let \(X(t,x)\), \(t\ge 0\), \(x\in {\mathbb {R}}^{d}\), be a dynamical system in \({\mathbb {R}}^{d}\) starting from x at the initial time, namely \(X(0,x)=x\). Of course, such a system is also a deterministic Markov process. Given \(f:{\mathbb {R}}^{d}\longrightarrow {\mathbb {R}}\) we define

$$\begin{aligned} u(t,x):=f(X(t,x))\,, \end{aligned}$$

hence obtaining a version of the Kolmogorov equation, called the Liouville equation within the theory of dynamical systems:

$$\begin{aligned} \frac{\partial }{\partial t}u(t,x)=Lu(t,x)\,, \end{aligned}$$

L being the generator of a semigroup whose action gives the solution of the Liouville equation, see, e.g., [9, 27, 30] for more details.

If E(t) is an inverse subordinator process (see Section 2 below for details and examples), then we may consider the time changed random dynamical system

$$\begin{aligned} Y(t):=X(E(t))\,. \end{aligned}$$

Our aim is to analyze the properties of Y(t) depending on those of the initial dynamical system X(t). In particular, we can define

$$\begin{aligned} v(t,x):={\mathbb {E}}[f(Y(t,x))]\,, \end{aligned}$$

and then compare the behavior of u(t, x) and v(t, x) for a certain class of functions f.

In what follows, we present the main problems which naturally appear studying random time changes in dynamical systems. Moreover, we provide solutions to these problems with respect to the examples collected in Section 2.

The rest of the paper is organized as follows. In Section 2 we present the classes of inverse subordinators and the associated general fractional derivatives. We study random time dynamical systems, also considering the simplest examples thereof, and we provide first results when the random time is associated with the \(\alpha \)-stable subordinator. In Subsection 2.3 we consider a dynamical system as a deterministic Markov process. In Subsection 2.5 we investigate the path transformation of a simple dynamical system by a random time.

2 Random times

In what follows, we recall some preliminary definitions and results related to random times processes and subordinators. Let us start with the following fundamental definition.

Definition 2.1

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space. A random time is a process \(E:[0,+\infty )\times \Omega \rightarrow {\mathbb {R}}^+\) such that

  1. (i)

    for a.e. \(\omega \in \Omega \) \(E(t,\omega )\ge 0\) for all \(t\in [0,+\infty )\),

  2. (ii)

    for a.e. \(\omega \in \Omega \) \(E(0,\omega )=0\),

  3. (iii)

    the function \(E(\cdot ,\omega )\) is increasing and satisfies

    $$\begin{aligned} \lim \limits _{t\rightarrow +\infty }E(t,\omega )=+\infty \,. \end{aligned}$$

The concept of a subordinator can be introduced as follows:

Definition 2.2

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space. A process \(\{S(t),\, t\ge 0\}\) is a subordinator if the following conditions are satisfied

  1. (i)

    \(S(0)=0\);

  2. (ii)

\(S(t+r)-S(t)\) has the same law as S(r) for all \(t,r>0\,\);

  3. (iii)

    if \({({\mathcal {F}}_t)}_t\) denotes the filtration generated by \({(S(t))}_t\), i.e. \({\mathcal {F}}_t=\sigma (\{S(r), r\le t\})\), then \(S(t+r)-S(t)\) is independent of \({\mathcal {F}}_t\) for all   \(t,r>0\,;\)

  4. (iv)

    \(t\rightarrow S(t)(\omega )\) is almost surely right-continuous with left limits;

  5. (v)

    \(t\rightarrow S(t)\) is almost surely an increasing function.

For the sake of completeness, let us note that the process \(S(\cdot )\) is a Lévy process if it satisfies conditions \((i)-(iv)\), see, e.g., [2] for more details. Let \(S=\{S(t),\;t\ge 0\}\) be a driftless subordinator; then its Laplace transform can be written in terms of a Bernstein function (also known as the Laplace exponent) \(\Phi :[0,\infty )\longrightarrow [0,\infty )\) as

$$\begin{aligned} {\mathbb {E}}[e^{-\lambda S(t)}]=e^{-t\Phi (\lambda )},\quad \lambda \ge 0\,. \end{aligned}$$

Moreover, the function \(\Phi \) admits the representation

$$\begin{aligned} \Phi (\lambda )=\int _{0}^{+\infty }(1-e^{-\lambda \tau })\,\mathrm {d}\sigma (\tau ), \end{aligned}$$
(2.1)

where the measure \(\sigma \), also called Lévy measure, has support in \([0,\infty )\) and fulfills

$$\begin{aligned} \int _{0}^{+\infty }(1\wedge \tau )\,\mathrm {d}\sigma (\tau )<\infty \,. \end{aligned}$$
(2.2)

Given a Lévy measure \(\sigma \), we define the associated kernel k as follows:

$$\begin{aligned} k:(0,\infty )&\longrightarrow (0,\infty ),\;\\ t&\mapsto k(t):=\sigma \big ((t,\infty )\big )\,.\nonumber \end{aligned}$$
(2.3)

Its Laplace transform is denoted by \({\mathcal {K}}\), and, for any \(\lambda \ge 0\), one has

$$\begin{aligned} {\mathcal {K}}(\lambda ):=\int _{0}^{\infty }e^{-\lambda t}k(t)\,\mathrm {d}t\,. \end{aligned}$$
(2.4)

We note that the relation between the function \({\mathcal {K}}\) and the Laplace exponent \(\Phi \) is given by

$$\begin{aligned} \Phi (\lambda )=\lambda {\mathcal {K}}(\lambda ),\quad \forall \lambda \ge 0\,. \end{aligned}$$
(2.5)

Throughout the paper we shall suppose that:

Hypothesis 2.1

Let \(\Phi \) be a complete Bernstein function, that is, the Lévy measure \(\sigma \) is absolutely continuous with respect to the Lebesgue measure. The functions \({\mathcal {K}}\) and \(\Phi \) satisfy

$$\begin{aligned}&{\mathcal {K}}(\lambda )\rightarrow \infty ,\text { as}~ \lambda \rightarrow 0;\quad {\mathcal {K}}(\lambda )\rightarrow 0, \ \text { as}~ \lambda \rightarrow \infty ; \\&\Phi (\lambda )\rightarrow 0,\text { as}~ \lambda \rightarrow 0;\quad \Phi (\lambda )\rightarrow \infty , \ \text { as}~ \lambda \rightarrow \infty . \end{aligned}$$

Example 2.1

(\(\alpha \)-stable subordinator) A classical example of a subordinator S is the so-called \(\alpha \)-stable process with index \(\alpha \in (0,1)\). In particular, a subordinator is \(\alpha \)-stable if its Laplace exponent is

$$\begin{aligned} \Phi (\lambda )=\lambda ^{\alpha }=\frac{\alpha }{\Gamma (1-\alpha )}\int _{0}^{\infty } (1-e^{-\lambda \tau })\tau ^{-1-\alpha }\,\mathrm {d}\tau \,, \end{aligned}$$

where \(\Gamma \) is the gamma function.

In this case, the associated Lévy measure is given by \(\mathrm {d}\sigma _{\alpha }(\tau )=\frac{\alpha }{\Gamma (1-\alpha )}\tau ^{-(1+\alpha )}\,\mathrm {d}\tau \) and the corresponding kernel \(k_{\alpha }\) has the form

$$\begin{aligned} k_{\alpha }(t)=g_{1-\alpha }(t):=\frac{t^{-\alpha }}{\Gamma (1-\alpha )}\,, \ \ \ \ t> 0\,, \end{aligned}$$

with Laplace transform equal to \({\mathcal {K}}_{\alpha }(\lambda )=\lambda ^{\alpha -1}\), for \(\lambda > 0\).
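As a numerical sanity check (an illustration, not part of the original development), the identity \({\mathcal {K}}_{\alpha }(\lambda )=\lambda ^{\alpha -1}\) can be verified by computing the Laplace transform of \(k_{\alpha }\) directly. The sketch below removes the integrable singularity of \(k_{\alpha }\) at \(t=0\) via the substitution \(t=u^{1/(1-\alpha )}\), under which the integrand becomes \(e^{-\lambda u^{1/(1-\alpha )}}/\big ((1-\alpha )\Gamma (1-\alpha )\big )\).

```python
import math

def K_alpha_numeric(lam, alpha, U=30.0, n=200_000):
    """Laplace transform of k_alpha(t) = t^{-alpha}/Gamma(1-alpha).

    The substitution t = u^{1/(1-alpha)} removes the integrable
    singularity at t = 0, leaving a smooth integrand in u.
    """
    p = 1.0 / (1.0 - alpha)                  # exponent after substitution
    c = 1.0 / (math.gamma(1.0 - alpha) * (1.0 - alpha))
    h = U / n
    total = 0.0
    for i in range(n + 1):
        u = i * h
        w = 0.5 if i in (0, n) else 1.0      # trapezoid weights
        total += w * math.exp(-lam * u ** p)
    return c * h * total

for alpha, lam in [(0.5, 2.0), (0.7, 1.5)]:
    assert abs(K_alpha_numeric(lam, alpha) - lam ** (alpha - 1.0)) < 1e-4
```

For \(\alpha =1/2\), \(\lambda =2\) the transformed integral is a Gaussian integral with exact value \(2^{-1/2}\), which the quadrature reproduces.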

Example 2.2

(Gamma subordinator) The Gamma process \(Y^{(a,b)}\) with parameters \(a,b>0\) is another example of a subordinator with Laplace exponent

$$\begin{aligned} \Phi _{(a,b)}(\lambda )=a\log \left( 1+\frac{\lambda }{b}\right) =\int _{0}^{\infty } (1-e^{-\lambda \tau })a\tau ^{-1}e^{-b\tau }\,\mathrm {d}\tau \,, \end{aligned}$$

the second equality being the Frullani integral. The associated Lévy measure is given by \(d\sigma _{(a,b)}(\tau )=a\tau ^{-1}e^{-b\tau }\,\mathrm {d}\tau \), with associated kernel equal to

$$\begin{aligned} k_{(a,b)}(t)=a\Gamma (0,bt), \ \ t>0\,, \end{aligned}$$

where

$$\begin{aligned} \Gamma (\nu ,z):=\int _{z}^{\infty }e^{-t}t^{\nu -1}\,\mathrm {d}t \end{aligned}$$

is the incomplete Gamma function, see, e.g., [14, Section 8.3] for more details. Moreover, its Laplace transform is

$$\begin{aligned} {\mathcal {K}}_{(a,b)}(\lambda )=a\lambda ^{-1}\log \left( 1+\frac{\lambda }{b}\right) , \ \ \ \lambda >0\,. \end{aligned}$$
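The Frullani integral representation of \(\Phi _{(a,b)}\) can likewise be checked numerically; the following sketch (an illustration, not part of the original text) integrates the Lévy density \(a\tau ^{-1}e^{-b\tau }\) of the Gamma subordinator against \(1-e^{-\lambda \tau }\).

```python
import math

def frullani_numeric(lam, a, b, T=80.0, n=400_000):
    """int_0^inf (1 - e^{-lam*tau}) * a * tau^{-1} * e^{-b*tau} dtau.

    The integrand extends continuously to tau = 0 with value a*lam,
    so the trapezoid rule applies without special treatment.
    """
    h = T / n
    total = 0.5 * a * lam                    # trapezoid endpoint at tau = 0
    for i in range(1, n):
        tau = i * h
        total += a * (1.0 - math.exp(-lam * tau)) * math.exp(-b * tau) / tau
    # the endpoint at tau = T is numerically 0 for b*T ~ 80 and is dropped
    return h * total

# compare with a*log(1 + lam/b) for a = 2, b = 1, lam = 3
exact = 2.0 * math.log(1.0 + 3.0 / 1.0)
assert abs(frullani_numeric(3.0, 2.0, 1.0) - exact) < 1e-4
```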

Example 2.3

(Truncated \(\alpha \)-stable subordinator) The truncated \(\alpha \)-stable subordinator \(S_{\delta }\), \(\delta >0\), see [6, Example 2.1-(ii)], is a driftless subordinator with Lévy measure given by

$$\begin{aligned} \mathrm {d}\sigma _{\delta }(\tau ):=\frac{\alpha }{\Gamma (1-\alpha )}\tau ^{-(1+\alpha )} 1\!\!1_{(0,\delta ]} (\tau )\,\mathrm {d}\tau , \quad \delta >0\,. \end{aligned}$$

The corresponding Laplace exponent is given by

$$\begin{aligned} \Phi _{\delta }(\lambda )=\lambda ^{\alpha }\left( 1-\frac{\Gamma (-\alpha ,\delta \lambda )}{\Gamma (-\alpha )}\right) +\frac{\delta ^{-\alpha }}{\Gamma (1-\alpha )}\,, \end{aligned}$$

with associated kernel

$$\begin{aligned} k_{\delta }(t):=\sigma _{\delta }\big ((t,\infty )\big ) = \frac{1\!\!1_{(0,\delta ]}(t)}{\Gamma (1-\alpha )}(t^{-\alpha }-\delta ^{-\alpha }),\ \ \ t>0\,. \end{aligned}$$
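The explicit form of the kernel follows by integrating the truncated Lévy density over \((t,\infty )\): for \(t\in (0,\delta ]\) one gets \(\big (t^{-\alpha }-\delta ^{-\alpha }\big )/\Gamma (1-\alpha )\), and zero otherwise. A minimal numerical check of this computation (Python; illustrative only):

```python
import math

def k_trunc(t, alpha, delta, n=200_000):
    """sigma_delta((t, inf)) obtained by integrating the truncated stable
    Levy density alpha/Gamma(1-alpha) * tau^{-1-alpha} over (t, delta]."""
    if t >= delta:
        return 0.0
    h = (delta - t) / n
    c = alpha / math.gamma(1.0 - alpha)
    total = 0.5 * (t ** (-1.0 - alpha) + delta ** (-1.0 - alpha))
    for i in range(1, n):
        total += (t + i * h) ** (-1.0 - alpha)
    return c * h * total

alpha, delta, t = 0.5, 2.0, 0.25
exact = (t ** -alpha - delta ** -alpha) / math.gamma(1.0 - alpha)
assert abs(k_trunc(t, alpha, delta) - exact) < 1e-8
```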

Example 2.4

(Sum of two \(\alpha \)-stable subordinators) Let \(0<\alpha<\beta <1\) be given and let \(S_{\alpha ,\beta }(t)\), for \(t\ge 0\), be the driftless subordinator with Laplace exponent given by

$$\begin{aligned} \Phi _{\alpha ,\beta }(\lambda )=\lambda ^{\alpha }+\lambda ^{\beta }\,. \end{aligned}$$

Then, by Example 2.1, we have that the corresponding Lévy measure \(\sigma _{\alpha ,\beta }\) is the sum of two Lévy measures. Indeed, it holds

$$\begin{aligned} \mathrm {d}\sigma _{\alpha ,\beta }(\tau )=\mathrm {d}\sigma _{\alpha }(\tau )+\mathrm {d}\sigma _{\beta }(\tau ) =\frac{\alpha }{\Gamma (1-\alpha )}\tau ^{-(1+\alpha )}\,\mathrm {d}\tau +\frac{\beta }{\Gamma (1-\beta )}\tau ^{-(1+\beta )}\,\mathrm {d}\tau \,, \end{aligned}$$

implying that the associated kernel \(k_{\alpha ,\beta }\) reads as follows

$$\begin{aligned} k_{\alpha ,\beta }(t):=g_{1-\alpha }(t)+g_{1-\beta }(t) =\frac{t^{-\alpha }}{\Gamma (1-\alpha )}+\frac{t^{-\beta }}{\Gamma (1-\beta )},\ \ \ t>0\,, \end{aligned}$$

with associated Laplace transform given by

$$\begin{aligned} {\mathcal {K}}_{\alpha ,\beta }(\lambda )={\mathcal {K}}_{\alpha }(\lambda )+{\mathcal {K}}_{\beta }(\lambda ) =\lambda ^{\alpha -1}+\lambda ^{\beta -1}\,,\, \ \ \ \lambda >0\,. \end{aligned}$$

Example 2.5

(Kernel with exponential weight) Taking \(\gamma >0\) and \(0<\alpha <1\), let us consider the kernel \(k_{\gamma }\) obtained from \(g_{1-\alpha }\) by an exponential weight, namely

$$\begin{aligned} k_{\gamma }(t)=g_{1-\alpha }(t)e^{-\gamma t}=\frac{t^{-\alpha }}{\Gamma (1-\alpha )}e^{-\gamma t}\,,\quad t>0\,. \end{aligned}$$

By the shift property of the Laplace transform, \({\mathcal {K}}_{\gamma }(\lambda )=(\lambda +\gamma )^{\alpha -1}\), \(\lambda >0\), so that the corresponding subordinator has Laplace exponent

$$\begin{aligned} \Phi _{\gamma }(\lambda )=\lambda {\mathcal {K}}_{\gamma }(\lambda )=\lambda (\lambda +\gamma )^{\alpha -1}\,. \end{aligned}$$

The associated Lévy measure is obtained from \(\sigma _{\gamma }\big ((t,\infty )\big )=k_{\gamma }(t)\), namely

$$\begin{aligned} \mathrm {d}\sigma _{\gamma }(\tau )=\frac{e^{-\gamma \tau }}{\Gamma (1-\alpha )}\left( \alpha \tau ^{-(1+\alpha )}+\gamma \tau ^{-\alpha }\right) \mathrm {d}\tau \,. \end{aligned}$$

2.1 Inverse subordinators and general fractional derivatives

In this section we introduce the inverse subordinators and the corresponding general fractional derivatives.

Definition 2.3

Let \(S(\cdot )\) be a subordinator. We define \(E(\cdot )\) as the inverse process of \(S(\cdot )\), i.e.

$$\begin{aligned} E(t):=\inf \left\{ r>0\, |\, S(r)> t \right\} =\sup \left\{ r\ge 0\, |\, S( r)\le t \right\} \, \ \, \text{ for } \text{ all } \ \, t \in [0,+\infty )\,. \end{aligned}$$

For any \(t\ge 0\), we denote by \(G_{t}(\tau ):=G_{t}^{k}(\tau )\), \(\tau \ge 0\), the marginal density of E(t) or, equivalently,

$$\begin{aligned} G_{t}(\tau )\,{\mathrm {d}}\tau =\frac{\partial }{\partial \tau }{\mathbb {P}}(E(t)\le \tau )\,{\mathrm {d}}\tau = \frac{\partial }{\partial \tau }{\mathbb {P}}(S(\tau )\ge t)\,{\mathrm {d}}\tau =-\frac{\partial }{\partial \tau }{\mathbb {P}}(S(\tau )<t)\,{\mathrm {d}}\tau . \end{aligned}$$

Remark 2.1

If S is the \(\alpha \)-stable process, \(\alpha \in (0,1)\), then the inverse process E(t) has Laplace transform, see [3, Prop. 1(a)], given by

$$\begin{aligned} {\mathbb {E}}[e^{-\lambda E(t)}]=\int _{0}^{\infty }e^{-\lambda \tau }G_{t}(\tau )\,\mathrm {d}\tau =\sum _{n=0}^{\infty }\frac{(-\lambda t^{\alpha })^{n}}{\Gamma (n\alpha +1)}=E_{\alpha }(-\lambda t^{\alpha }) \,. \end{aligned}$$
(2.6)

By the asymptotic behavior of the Mittag-Leffler function \(E_{\alpha }\), it follows that \({\mathbb {E}}[e^{-\lambda E(t)}]\sim Ct^{-\alpha }\) as \(t\rightarrow \infty \). Using the properties of the Mittag-Leffler function \(E_{\alpha }\), we can show that the density \(G_{t}(\tau )\) is given in terms of the Wright function \(W_{\mu ,\nu }\), namely \(G_{t}(\tau ) = t^{-\alpha }W_{-\alpha ,1-\alpha }(\tau t^{-\alpha })\), see [12] for more details.
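For \(\alpha =1/2\) the Mittag-Leffler function has the closed form \(E_{1/2}(-x)=e^{x^{2}}\,\mathrm {erfc}(x)\), which gives a convenient numerical check of the series in (2.6). The short Python sketch below (an illustration under this special choice of \(\alpha \)) compares the partial sums with the closed form.

```python
import math

def mittag_leffler_neg(alpha, x, terms=200):
    """Partial sum of E_alpha(-x) = sum_{n>=0} (-x)^n / Gamma(alpha*n + 1).

    Reliable in floating point only for moderate x, where the
    alternating series does not suffer catastrophic cancellation.
    """
    s = 0.0
    for n in range(terms):
        s += (-x) ** n / math.gamma(alpha * n + 1.0)
    return s

# closed form for alpha = 1/2: E_{1/2}(-x) = exp(x^2) * erfc(x)
for x in (0.3, 1.0, 2.0):
    closed = math.exp(x * x) * math.erfc(x)
    assert abs(mittag_leffler_neg(0.5, x) - closed) < 1e-9
```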

For a general subordinator, the following lemma determines the t-Laplace transform of \(G_{t}(\tau )\), with k and \({\mathcal {K}}\) given in (2.3) and (2.4), respectively.

Lemma 2.1

The t-Laplace transform of the density \(G_{t}(\tau )\) is given by

$$\begin{aligned} \int _{0}^{\infty }e^{-\lambda t}G_{t}(\tau )\,\mathrm {d}t={\mathcal {K}}(\lambda )e^{-\tau \lambda {\mathcal {K}}(\lambda )}. \end{aligned}$$
(2.7)

The double (\(\tau ,t\))-Laplace transform of \(G_{t}(\tau )\) is

$$\begin{aligned} \int _{0}^{\infty }\int _{0}^{\infty }e^{-p\tau }e^{-\lambda t}G_{t}(\tau )\,\mathrm {d}t\,\mathrm {d}\tau =\frac{{\mathcal {K}}(\lambda )}{\lambda {\mathcal {K}}(\lambda )+p}\,. \end{aligned}$$
(2.8)

Proof

For the proof see [17] or [29, Lemma 3.1]. \(\square \)
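Note that (2.8) indeed follows from (2.7): integrating in \(\tau \) against \(e^{-p\tau }\) yields

$$\begin{aligned} \int _{0}^{\infty }e^{-p\tau }\,{\mathcal {K}}(\lambda )e^{-\tau \lambda {\mathcal {K}}(\lambda )}\,\mathrm {d}\tau =\frac{{\mathcal {K}}(\lambda )}{\lambda {\mathcal {K}}(\lambda )+p}\,. \end{aligned}$$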

Let us now recall the definition of General Fractional Derivative (GFD) associated to a kernel k, see [17] and references therein for more details.

Definition 2.4

Let S be a subordinator with kernel \(k\in L_{\mathrm {loc}}^{1}({\mathbb {R}}_{+})\) given by (2.3). We define the differential-convolution operator

$$\begin{aligned} \big ({\mathbb {D}}_{t}^{(k)}u\big )(t)=\frac{d}{dt}\int _{0}^{t}k(t-\tau )u(\tau )\,\mathrm {d}\tau -k(t)u(0), \ \ \ t>0\,. \end{aligned}$$
(2.9)

Remark 2.2

The operator \({\mathbb {D}}_{t}^{(k)}\) is also known as Generalized Fractional Derivative.

Example 2.6

(Distributed order derivative) Consider the kernel k defined by

$$\begin{aligned} k(t):=\int _{0}^{1}g_{\alpha }(t)\,\mathrm {d}\alpha =\int _{0}^{1}\frac{t^{\alpha -1}}{\Gamma (\alpha )}\,\mathrm {d}\alpha , \quad t>0\,. \end{aligned}$$
(2.10)

Then it is easy to see that

$$\begin{aligned} {\mathcal {K}}(\lambda )=\int _{0}^{\infty }e^{-\lambda t}k(t)\,\mathrm {d}t=\frac{\lambda -1}{\lambda \log (\lambda )},\quad \lambda >0\,. \end{aligned}$$

The corresponding differential-convolution operator \({\mathbb {D}}_{t}^{(k)}\) is called distributed order derivative, see, e.g., [1, 8, 13, 15, 16, 24] for more details and applications.
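The computation of \({\mathcal {K}}\) uses that \(L[g_{\alpha }](\lambda )=\lambda ^{-\alpha }\), so that, interchanging the t- and \(\alpha \)-integrals, \({\mathcal {K}}(\lambda )=\int _{0}^{1}\lambda ^{-\alpha }\,\mathrm {d}\alpha =(\lambda -1)/(\lambda \log \lambda )\). A minimal numerical check of this step (Python, for illustration only):

```python
import math

def K_distributed(lam, n=200_000):
    """K(lam) = int_0^1 lam^{-alpha} d(alpha), via the trapezoid rule.

    Interchanging the two integrals uses L[g_alpha](lam) = lam^{-alpha}.
    """
    h = 1.0 / n
    total = 0.5 * (1.0 + 1.0 / lam)          # endpoints alpha = 0 and alpha = 1
    for i in range(1, n):
        total += lam ** (-i * h)
    return h * total

for lam in (0.5, 2.0, 10.0):
    exact = (lam - 1.0) / (lam * math.log(lam))
    assert abs(K_distributed(lam) - exact) < 1e-8
```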

We conclude this section with a result that will be useful later on, starting by recalling the following definition.

Definition 2.5

Given the functions \(f:{\mathbb {R}}\rightarrow {\mathbb {R}}\) and \(g:{\mathbb {R}}\rightarrow {\mathbb {R}}\), we say that f and g are asymptotically equivalent at infinity, and denote \(f\sim g\) as \(x\rightarrow +\infty \), if

$$\begin{aligned} \lim _{x\rightarrow +\infty }\frac{f(x)}{g(x)}=1\,. \end{aligned}$$

Moreover, we say that f is slowly varying if

$$\begin{aligned} \lim _{x\rightarrow +\infty }\frac{f(\lambda x)}{f(x)}=1,\quad \text{ for } \text{ any } \quad \lambda >0\,. \end{aligned}$$

For more details on slowly varying functions, we refer the interested reader to, e.g., [10, 28].

Lemma 2.2

Suppose Hypothesis 2.1 is satisfied and that the subordinator S(t) and its inverse E(t), \(t\ge 0\), are such that

$$\begin{aligned} {\mathcal {K}}(\lambda )\sim \lambda ^{-\gamma }Q\left( \frac{1}{\lambda }\right) ,\quad \lambda \rightarrow 0\,, \end{aligned}$$
(2.11)

where \(0\le \gamma \le 1\) and \(Q(\cdot )\) is a slowly varying function. Moreover, define

$$\begin{aligned} A(t,z):=\int _{0}^{\infty }e^{-z\tau }G_{t}(\tau )\,\mathrm {d}\tau ,\quad t>0,\;z>0\,. \end{aligned}$$

Then it holds

$$\begin{aligned} A(t,z)\sim \frac{1}{z}\frac{t^{\gamma -1}}{\Gamma (\gamma )}Q(t),\quad t\rightarrow \infty \,. \end{aligned}$$

Proof

For the proof see [18, Theorem 4.3]. \(\square \)

Remark 2.3

We point out that condition (2.11) on the Laplace transform of the kernel k is satisfied by all of Examples 2.1–2.5 and by Example 2.6 stated above. The case of Example 2.4 is easily checked, as

$$\begin{aligned} {\mathcal {K}}(\lambda )=\lambda ^{\alpha -1}+\lambda ^{\beta -1}=\lambda ^{-(1-\alpha )}(1+\lambda ^{\beta -\alpha }) = \lambda ^{-\gamma }Q\left( \frac{1}{\lambda }\right) \,, \end{aligned}$$

where \(\gamma =1-\alpha >0\) and \(Q(t)=1+t^{\alpha -\beta }\) is a slowly varying function.

2.2 Dynamical systems and Liouville equations

There is a natural question concerning the use of a random time change not only in stochastic dynamics, but more generally for an ample class of dynamical problems. In what follows, we focus on the analysis of the random time change approach for dynamical systems taking values in \({\mathbb {R}}^d\).

Let \(X(t,x)\), \(t \ge 0\), be a dynamical system in \({\mathbb {R}}^d\) such that \(X(0, x) = x \in {\mathbb {R}}^d\). Such a system is also a deterministic Markov process. Therefore, given \(f : {\mathbb {R}}^d \rightarrow {\mathbb {R}}\), and defining

$$\begin{aligned} u(t,x) := f(X(t,x))\,, \end{aligned}$$

we have a version of the Kolmogorov equation, which is nothing but the Liouville equation within the theory of dynamical systems. Indeed,

$$\begin{aligned} u_t(t,x)= Lu(t,x)\,, \end{aligned}$$
(2.12)

where L is the generator of the semigroup solution of the Liouville equation, see, e.g., [9, 27, 30].

2.3 Random time changes and fractional Liouville equations

Let \((\Omega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space and let \(X(t,x)\), \(t \ge 0\), be a dynamical system in \({\mathbb {R}}^d\) starting at time \(t=0\) from \(x\in {\mathbb {R}}^d\). Given an inverse subordinator process \(E(\cdot )\), we consider the time changed random dynamical system

$$\begin{aligned} Y(t,\omega ;x)=X(E(t,\omega );x)\,,\quad t\in [0,+\infty ), \ x \in {{\mathbb {R}}^d}, \ \omega \in \Omega \,. \end{aligned}$$

For a suitable \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) we define

$$\begin{aligned} v(t,x):={\mathbb {E}}[f(Y(t;x))]\,, \end{aligned}$$
(2.13)

where, for brevity, we write E(t) and Y(t, x) for \(E(t,\cdot )\) and \(Y(t,\cdot \ ;x)\), respectively.

As pointed out in, e.g., [6, 29], v(t, x) solves an evolution equation with the same generator L, but with the generalized fractional derivative (2.9) in place of the usual time derivative, i.e.

$$\begin{aligned} {\mathbb {D}}_{t}^{(k)}v( \cdot ,x)(t)=Lv(t,x)\,. \end{aligned}$$
(2.14)

Let u(t, x) be the solution to (2.12) with the same generator L as in (2.14). Under quite general assumptions there is an essentially obvious relation between these two evolutions:

$$\begin{aligned} v(t,x)=\int _{0}^{\infty }u(\tau ,x)G_{t}(\tau )\,\mathrm {d}\tau , \end{aligned}$$
(2.15)

\(G_{t}(\tau )\) being the density of E(t), as defined in Section 2.1.

Having in mind the analysis of the influence of the random time change on the asymptotic properties of v(t, x), we might suppose that the latter formula provides all the necessary technical equipment. Unfortunately, the situation is essentially more complicated: our knowledge of the properties of the density \(G_t(\tau )\) is, in general, very poor. The aim of this section is to describe a class of subordinators for which we may obtain information about the time asymptotics of the generalized fractional dynamics.

2.4 First examples

We consider the simplest evolution equation in \({{\mathbb {R}}^d}\)

$$\begin{aligned} {\mathrm {d}}X(t)=v\,\mathrm {d}t,\quad v\in {{\mathbb {R}}^d},\quad X(0)=x_{0}\in {{\mathbb {R}}^d}\,, \end{aligned}$$

with corresponding dynamics given by

$$\begin{aligned} X(t)=x_{0}+vt,\quad t\ge 0\,. \end{aligned}$$

Without loss of generality, let us assume that \(x_{0}=0\). Then, we take \(f(x)=e^{-\alpha |x|},\;\alpha >0\). Hence, the corresponding solution to the Liouville equation is

$$\begin{aligned} u(t,x)=e^{-\alpha t|v|},\quad t\ge 0\,. \end{aligned}$$

Proposition 2.1

Assume that the assumptions of Lemma 2.2 are satisfied. Then

$$\begin{aligned} v(t,x)\sim \frac{1}{\alpha |v|\Gamma (\gamma )}t^{\gamma -1}Q(t),\quad t\rightarrow \infty \,. \end{aligned}$$

Proof

From the explicit form of the solution, namely \(u(\tau ,x)=e^{-\alpha |v|\tau }\), the subordination formula (2.15) gives \(v(t,x)=A(t,\alpha |v|)\), and the claim follows from Lemma 2.2. \(\square \)

In particular, for the \(\alpha \)-stable subordinator considered in Example 2.1, we obtain \(v(t,x)\sim Ct^{-\alpha }\) for a given constant \(C>0\). Therefore, starting from a solution u(t, x) with exponential decay, after subordination we observe a polynomial decay whose order is determined by the characteristics of the random time.
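For \(\alpha =1/2\) the subordinated solution is explicit: writing \(a=\alpha |v|\) for the exponential decay rate of u, one has \(v(t,x)=E_{1/2}(-a\sqrt{t})=e^{a^{2}t}\,\mathrm {erfc}(a\sqrt{t})\), so the polynomial decay can be observed directly. A short Python sketch (illustrative only; the closed form is special to the inverse \(1/2\)-stable time change):

```python
import math

def v_subordinated(t, a):
    """v(t) = E_{1/2}(-a*sqrt(t)) = exp(x^2) * erfc(x) with x = a*sqrt(t),
    the subordinated solution for the inverse 1/2-stable time change."""
    x = a * math.sqrt(t)
    return math.exp(x * x) * math.erfc(x)

# u(t) = e^{-a t} decays exponentially, while v(t) ~ 1/(a*sqrt(pi*t)):
a = 1.0
for t in (25.0, 100.0, 400.0):
    ratio = v_subordinated(t, a) * a * math.sqrt(math.pi * t)
    assert abs(ratio - 1.0) < 1.0 / t        # deviation shrinks like 1/(2t)
```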

For \(d=1\) consider the dynamics

$$\begin{aligned} \beta {\mathrm {d}}X(t)=\frac{1}{X^{\beta -1}(t)}\mathrm {d}t,\quad \beta \ge 1\,, \end{aligned}$$

then the solution is given by

$$\begin{aligned} X(t)=(t+C)^{1/\beta }\,,\quad C:=X(0)^{\beta }\,. \end{aligned}$$

Considering the function \(f(x)=\exp (-a|x|^{\beta })\), \(a>0\), and supposing that the assumptions of Lemma 2.2 are satisfied, we may exploit the explicit form of the solution, \(u(t,x)=e^{-aC}e^{-at}\), to find that the long time behavior of the subordinated solution v(t, x) is given by

$$\begin{aligned} v(t,x)\sim \frac{e^{-aC}}{a}\frac{t^{\gamma -1}}{\Gamma (\gamma )}Q(t),\quad t\rightarrow \infty \,. \end{aligned}$$

In particular, choosing the density \(G_{t}(\tau )\) of the inverse subordinator E(t) as in Example 2.4, we obtain

$$\begin{aligned} v(t,x)\sim Ct^{-\alpha }(1+t^{\alpha -\beta })\sim Ct^{-\alpha },\quad t\rightarrow \infty \,. \end{aligned}$$

2.5 Path transformations

Let us now investigate how the trajectories of dynamical systems transform under random times. In line with what we have seen above, we consider the Liouville equation for

$$\begin{aligned} u(t,x):=f(X(t,x)),\quad t\ge 0,\;x\in {{\mathbb {R}}^d}\,, \end{aligned}$$

that is,

$$\begin{aligned} u_t(t,x)=Lu(t,x),\quad u(0,x)=f(x)\,, \end{aligned}$$

L being the generator of a semigroup. In addition, let E(t), \(t\ge 0\), be the inverse subordinator process. Then we can consider the time changed random dynamical systems

$$\begin{aligned} Y(t,x)=X(E(t),x),\quad t\ge 0,\;x\in {{\mathbb {R}}^d}\,, \end{aligned}$$

where, for brevity, E(t), resp. Y(t, x), refer to \(E(t,\cdot )\), resp. to \(Y(t,\cdot \ ;x)\,.\) Defining

$$\begin{aligned} v(t,x):={\mathbb {E}}[f(Y(t,x))]\,, \end{aligned}$$

by the subordination formula, we have

$$\begin{aligned} v(t,x)=\int _{0}^{\infty }u(\tau ,x)G_{t}(\tau )\,\mathrm {d}\tau \,. \end{aligned}$$

Considering the vector-valued function \(f:{\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\) defined as

$$\begin{aligned} f(x)=x\,, \end{aligned}$$

we have that the average trajectory of Y(t, x) reads as follows

$$\begin{aligned} {\mathbb {E}}[Y(t,x)]= \int _{0}^{\infty }X(\tau ,x)G_{t}(\tau )\,\mathrm {d}\tau \,. \end{aligned}$$

Then considering the dynamical system of Section 2.4, namely \(X(t,x)=vt\), we obtain

$$\begin{aligned} {\mathbb {E}}[Y(t,x)]=v\int _{0}^{\infty }\tau G_{t}(\tau )\,\mathrm {d}\tau \,. \end{aligned}$$

Therefore, we need to know the first moment of the density \(G_t\). Considering the case of the inverse \(\alpha \)-stable subordinator stated in Example 2.1, we have

$$\begin{aligned} \int _0^\infty \tau G_t(\tau )\,\mathrm {d}\tau = C t^{\alpha }\,. \end{aligned}$$
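The constant C can be identified from the Laplace transform (2.6): differentiating at \(\lambda =0\), only the n=1 term of the Mittag-Leffler series survives, giving

$$\begin{aligned} \int _0^\infty \tau G_t(\tau )\,\mathrm {d}\tau =-\frac{\partial }{\partial \lambda }\Big |_{\lambda =0}E_{\alpha }(-\lambda t^{\alpha })=\frac{t^{\alpha }}{\Gamma (\alpha +1)}\,, \end{aligned}$$

so that \(C=1/\Gamma (\alpha +1)\).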

Therefore, the time changed trajectory grows more slowly (proportionally to \(t^{\alpha }\)) than the initial linear motion vt. In a forthcoming paper we will study these questions in detail for other classes of inverse subordinators.