Continuous-time statistics and generalized relaxation equations

Using two simple examples, the continuous-time random walk and a two-state Markov chain, the relation between generalized anomalous relaxation equations and semi-Markov processes is illustrated. This relation is then used to discuss continuous-time statistics in a general setting, for statistics of convolution type. Two examples are presented in some detail: the sum statistic and the maximum statistic.


Introduction
Continuous-time random walks (CTRWs) are a straightforward generalization of compound Poisson processes. Their simplest version, the so-called uncoupled case, can be defined as follows. Let $\{Y_i\}_{i=1}^{\infty}$ be a sequence of independent and identically distributed random variables in $\mathbb{R}^d$ (here, for the sake of simplicity, we consider $d = 1$) with cumulative distribution function $F_{Y_1}(u) = \mathbb{P}(Y_1 \leq u)$. The corresponding random walk is the homogeneous Markov chain defined by

$$X_0 = 0, \qquad X_n = X_{n-1} + Y_n = \sum_{i=1}^{n} Y_i. \qquad (1)$$

Now, suppose we are given a sequence $\{J_i\}_{i=1}^{\infty}$ of positive independent and identically distributed random variables with the meaning of inter-event durations and with cumulative distribution function $F_{J_1}(w) = \mathbb{P}(J_1 \leq w)$. Further assume that the two sequences are independent. First define the epochs at which events occur as

$$T_0 = 0, \qquad T_n = \sum_{i=1}^{n} J_i,$$

then introduce the number of events from $T_0 = 0$, seen as an event (technically, as a renewal point), up to time $t$:

$$N(t) = \max\{n : T_n \leq t\}.$$
Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.

e-mail: e.scalas@sussex.ac.uk

Change time from $n$ to $N(t)$ in equation (1) to get the uncoupled CTRW

$$X(t) = X_{N(t)} = \sum_{i=1}^{N(t)} Y_i. \qquad (4)$$

If the $J_i$s are exponentially distributed, then $N(t)$ is a Poisson process and equation (4) defines a compound Poisson process [1,2]. These are Lévy processes [3] with Lévy triplet given by $(0, 0, \lambda\sigma)$, where drift and diffusion are 0, $\lambda$ is the jump rate, and $\sigma$ is a measure on $\mathbb{R}$ with $\sigma\{0\} = 0$. Just as a reminder, a Lévy process is a Markov process with independent and stationary increments. The realizations (a.k.a. sample paths) of a Lévy process are right-continuous with left limits (or càdlàg, from the French continu à droite, limite à gauche). Compound Poisson processes play an important role in the theory of Lévy processes as they can approximate any other Lévy process. To be more precise, one can represent any Lévy process as an independent sum of a Brownian motion with drift and a countable number of independent compound Poisson processes with different jump rates $\lambda$ and jump measures $\sigma$ [4].
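As a minimal illustration of equation (4), the uncoupled CTRW can be simulated directly from its definition. The sketch below (plain Python; the rate, jump law, and horizon are illustrative choices, not taken from the paper) draws sample values of $X(t)$ in the compound Poisson case with standard normal jumps, for which $\mathbb{E}[X(t)] = 0$ and $\mathrm{Var}[X(t)] = \lambda t$.

```python
import math
import random

def ctrw_position(t, jump, wait):
    """Sample X(t) for an uncoupled CTRW started at X(0) = 0:
    accumulate i.i.d. jumps Y_i at the renewal epochs T_n <= t."""
    x, elapsed = 0.0, wait()
    while elapsed <= t:
        x += jump()
        elapsed += wait()
    return x

# Compound Poisson case: Exp(1) inter-event times and standard normal
# jumps, so E[X(t)] = 0 and Var[X(t)] = t (here t = 10, an illustrative value).
random.seed(42)
t = 10.0
samples = [ctrw_position(t,
                         jump=lambda: random.gauss(0.0, 1.0),
                         wait=lambda: random.expovariate(1.0))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```

The sample mean and variance should agree with the theoretical values $0$ and $\lambda t = 10$ up to Monte Carlo error.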
If the $J_i$s are not exponentially distributed, then $N(t)$ is a counting renewal process and equation (4) defines a renewal-reward process that is non-Markovian and non-Lévy, but semi-Markov [5]. The realizations are assumed to be càdlàg as well; this is useful for functional limit theorems [6]. In [5], a derivation of the famous Montroll-Weiss equation [7] is presented as a necessary condition for semi-Markov processes. In this case, the CTRW becomes a process with infinite memory. This is due to the infinite memory of the counting process $N(t)$, which, in turn, is due to the infinite memory of the residual time to the next renewal from any "observation" time $t$ [8].
Assume absolute continuity of the distributions of the $Y_i$s and $J_i$s and define their respective probability density functions, $f_{Y_1}(u) = dF_{Y_1}(u)/du$ for the jumps and $f_{J_1}(w) = dF_{J_1}(w)/dw$ for the inter-event times. Then, straightforward calculations (see Appendix A) lead from the equation of Montroll and Weiss to the following evolution equation [9]:

$$\int_0^t \Phi(t - t') \, \frac{\partial p(x, t')}{\partial t'} \, dt' = -p(x, t) + \int_{-\infty}^{+\infty} f_{Y_1}(x - x') \, p(x', t) \, dx', \qquad (5)$$

where $p(x, t) = dF_{X(t)}(x)/dx$ is the probability density function of finding the continuous-time random walk at $x$ at time $t$ given that $X(0) = 0$, and $\Phi(t)$ has the following Laplace transform

$$\mathcal{L}(\Phi(t))(s) = \frac{1 - \mathcal{L}(f_{J_1}(t))(s)}{s \, \mathcal{L}(f_{J_1}(t))(s)}. \qquad (6)$$

It is interesting to remark that equation (5) highlights the infinite memory of the process, as $\Phi(t)$ plays the role of a memory kernel. The reader might be interested in comparing this approach to semi-Markov processes with the classical approach in [10] for non-Markovian processes.
In the exponential/Poisson case (set $\lambda = 1$, for the sake of simplicity), one has $f_{J_1}(t) = \exp(-t)$ and $\mathcal{L}(\exp(-t))(s) = 1/(1 + s)$, so that $\mathcal{L}(\Phi(t))(s) = 1$ and $\Phi(t) = \delta(t)$; then, equation (5) reduces to the classical master equation for compound Poisson processes,

$$\frac{\partial p(x, t)}{\partial t} = -p(x, t) + \int_{-\infty}^{+\infty} f_{Y_1}(x - x') \, p(x', t) \, dx'.$$

Equation (5) naturally leads to anomalous diffusion when inter-event times have a power-law distribution with infinite first moment (see [12] and the references quoted at the end of the next section).
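The exponential special case can be checked numerically: the sketch below (plain Python; the step size, cutoff, and test points are illustrative choices) computes the Laplace transform of $f_{J_1}(t) = \exp(-t)$ by the trapezoidal rule and verifies that the right-hand side of equation (6) equals 1 for several values of $s$, consistent with $\Phi(t) = \delta(t)$.

```python
from math import exp

def laplace(f, s, h=1e-3, T=50.0):
    """Crude numerical Laplace transform L(f)(s) via the trapezoidal rule,
    truncating the integral at T (adequate for rapidly decaying f)."""
    n = int(T / h)
    total = 0.5 * (f(0.0) + f(T) * exp(-s * T))
    total += sum(f(k * h) * exp(-s * k * h) for k in range(1, n))
    return total * h

f_J = lambda u: exp(-u)                    # exponential density, lambda = 1
kernels = []
for s in (0.5, 1.0, 2.0):
    Lf = laplace(f_J, s)
    assert abs(Lf - 1.0 / (1.0 + s)) < 1e-4  # L(exp(-t))(s) = 1/(1+s)
    kernels.append((1.0 - Lf) / (s * Lf))    # right-hand side of equation (6)
```

Each entry of `kernels` should be 1 to within the quadrature error.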

Anomalous relaxation
Equation (5) is an instance of an anomalous relaxation equation. Equations of this kind are the governing equations for time-changed Markov processes. They have been recently studied in full generality by Meerschaert and Toaldo [13] and independently obtained for a specific semi-Markov random graph dynamics by Georgiou et al. [14]. In order to illustrate the relationship with relaxation processes more concretely, we can use the simple example of a homogeneous Markov chain $Y_n$, $n \geq 0$, with two states $A$ and $B$ and transition probabilities $q_{i,j} = \mathbb{P}(Y_1 = j \,|\, Y_0 = i)$ given by $q_{A,A} = 0$, $q_{A,B} = 1$, $q_{B,A} = 0$ and $q_{B,B} = 1$ [15]. This means that if the chain is prepared in state $A$, it will jump to state $B$ at the first step and it will stay there forever. Now define, as above, the time-changed process $Y(t) = Y_{N(t)}$. Then, we have (see [14] for an explicit derivation and remember that $T_0 = 0$ is a renewal point)

$$p_{A,A}(t) = \mathbb{P}(Y(t) = A \,|\, Y(0) = A) = \bar{F}_{J_1}(t),$$

where $\bar{F}_{J_1}(t) = 1 - F_{J_1}(t)$ is the complementary cumulative distribution function of the inter-event times. In the exponential case, $\bar{F}_{J_1}(t) = \exp(-\lambda t)$; in other words, the probability of finding the chain in state $A$ decays exponentially as $t$ grows. In this case, $p_{A,A}(t)$ is the solution of the following relaxation Cauchy problem

$$\frac{d p_{A,A}(t)}{dt} = -\lambda \, p_{A,A}(t), \qquad p_{A,A}(0) = 1.$$

For a general renewal counting process $N(t)$, one gets the following generalized relaxation Cauchy problem instead:

$$\int_0^t \Phi(t - t') \, \frac{d p_{A,A}(t')}{dt'} \, dt' = -p_{A,A}(t), \qquad p_{A,A}(0) = 1. \qquad (10)$$

The anomalous relaxation theory outlined above was studied by Scher and Montroll [16] in the context of transit-time dispersion in amorphous solids. They explicitly assumed a power-law behavior for the distribution of the inter-event durations. This theory was further developed by Klafter and Silbey [17], who studied transport of particles on a lattice using the projection operator technique. They showed that the exact equation governing the transport averaged over all configurations can be written either as a generalized master equation or as the CTRW equations.
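The two-state relaxation above is easy to check by simulation: since the chain leaves $A$ at the first renewal, $p_{A,A}(t)$ is just the probability that $J_1 > t$. The sketch below (plain Python; exponential sojourn times with $\lambda = 1$ and the time points are illustrative choices) compares a Monte Carlo estimate with $\exp(-t)$.

```python
import math
import random

def p_AA(t, wait, trials=50000):
    """Monte Carlo estimate of P(Y(t) = A | Y(0) = A): the chain is still
    in state A at time t iff the first sojourn time exceeds t."""
    return sum(wait() > t for _ in range(trials)) / trials

random.seed(7)
# Exponential sojourn times (lambda = 1), so p_AA(t) = exp(-t).
estimates = {t: p_AA(t, lambda: random.expovariate(1.0))
             for t in (0.5, 1.0, 2.0)}
```

Replacing the sojourn-time sampler (e.g. by a heavy-tailed law) would give the anomalous decay governed by equation (10) instead.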
In the next section, the relation will be presented between the anomalous relaxation discussed in [16] and fractional operators. This was already discussed in papers by Glöckle and Nonnenmacher [18,19] and Metzler et al. [20]. Mainardi, Gorenflo and co-workers published two review papers on anomalous relaxation and fractional calculus [21,22]. Two general reviews on fractional diffusion, Fokker-Planck equations, and relaxation equations were written by Metzler and Klafter [23,24]. More recently, important properties of CTRWs such as ageing or weak ergodicity breaking have been reviewed as well [25,26].
The usefulness of the simple idea behind CTRWs fully emerges from the body of work outlined above. Therefore, it is not surprising that the time change from a deterministic $n$ to the random process $N(t)$ can lead to further developments.

Theory
After establishing the connection between the random time change $N(t)$ and relaxation equations of the type (5) and (10), one can proceed to study continuous-time statistics in a rather general way. Let $\{X_i\}_{i=1}^{n}$ be a sequence of $n$ independent and identically distributed positive random variables with cumulative distribution function $F_{X_1}(u) = \mathbb{P}(X_1 \leq u)$. A statistic is a function from $\mathbb{R}^n$ to $\mathbb{R}$ that summarizes some characteristic behavior of the random variables:

$$S_n = G(X_1, \ldots, X_n).$$

The statistic $S_n$ is a random variable and, usually, something is known about its distribution. Asymptotic analytical results may be available in the limit of large $n$, and Monte Carlo simulations can be used for small values of $n$. Let $F_{S_n}(u) = \mathbb{P}(S_n \leq u)$ denote the cumulative distribution function of $S_n$. As in the two previous examples, in order to introduce continuous-time statistics, we use another set of positive independent and identically distributed random variables (independent from the $X_i$s), $\{J_i\}_{i=1}^{\infty}$, with the meaning of sojourn times. Let $F_{J_1}(t) = \mathbb{P}(J_1 \leq t)$ denote the cumulative distribution function of the $J_i$s and $f_{J_1}(t) = dF_{J_1}(t)/dt$ their probability density function. We again introduce the epochs at which events occur,

$$T_0 = 0, \qquad T_n = \sum_{i=1}^{n} J_i,$$

and the counting process giving the number of events that occur up to time $t$,

$$N(t) = \max\{n : T_n \leq t\}.$$
The continuous-time statistic $S(t)$ corresponding to $S_n$ is

$$S(t) = S_{N(t)}.$$

In plain words, the continuous-time statistic is the statistic of a random number $N(t)$ of random variables $X_i$. In order to connect continuous-time statistics and relaxation equations, we consider a special class of statistics, those of convolution type. We denote these statistics with the symbol

$$S_n = X_1 \circledast \cdots \circledast X_n,$$

and we assume the existence of a transform $\mathbb{L}$ such that

$$\mathbb{L}(F_{S_n}(u))(w) = \left[ \mathbb{L}(F_{X_1}(u))(w) \right]^n.$$

Let us now consider a continuous-time statistic of convolution type (with $N(t)$ independent from the $X_i$s) and let us compute its cumulative distribution function. We have

$$F_{S(t)}(u) = \mathbb{P}(S(t) \leq u) = \sum_{n=0}^{\infty} \mathbb{P}(N(t) = n) \, F_{S_n}(u).$$

Define $Q(w, t) = \mathbb{L}(F_{S(t)}(u))(w)$. We then have (see Appendix A)

$$\mathcal{L}(Q(w, t))(s) = \frac{1 - \mathcal{L}(f_{J_1}(t))(s)}{s} \, \frac{1}{1 - \mathcal{L}(f_{J_1}(t))(s) \, \mathbb{L}(F_{X_1}(u))(w)}. \qquad (20)$$

Now, following [9] (see Appendix A for details), we can invert the Laplace transform in (20) to show that $Q(w, t)$ is the solution of the Cauchy problem (with initial condition $Q(w, t = 0) = 1$) for the following pseudo-differential relaxation equation

$$\int_0^t \Phi(t - t') \, \frac{\partial Q(w, t')}{\partial t'} \, dt' = -\left(1 - \mathbb{L}(F_{X_1}(u))(w)\right) Q(w, t), \qquad (22)$$

where $\mathcal{L}(\Phi(t))(s)$ is given by equation (6).
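A quick sanity check of the decomposition over $N(t)$: for the maximum statistic (where the operation is the product and the transform is the identity) with a Poisson number of events, the series $\sum_n \mathbb{P}(N(t) = n) \, [F_{X_1}(u)]^n$ must resum to $\exp(-t(1 - F_{X_1}(u)))$. The sketch below (plain Python; the values of $t$, $u$ and the choice $X_1 \sim \mathrm{Exp}(1)$ are illustrative) verifies this numerically.

```python
import math

# Max statistic with X_i ~ Exp(1): F_{S_n}(u) = [F_{X_1}(u)]^n, so
# F_{S(t)}(u) = sum_n P(N(t) = n) [F_{X_1}(u)]^n with N(t) ~ Poisson(t).
t, u = 2.0, 1.0
F_X = 1.0 - math.exp(-u)
series = sum(math.exp(-t) * t**n / math.factorial(n) * F_X**n
             for n in range(60))
closed_form = math.exp(-t * (1.0 - F_X))   # resummed exponential series
```

The two values agree to machine precision, since the series is just the Taylor expansion of the exponential.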

Examples
To show that the theory developed above is not vacuous, we now present some examples in detail.

Sum statistics
The first example is the sum statistic for independent and identically distributed random variables,

$$S^{(1)}_n = \sum_{i=1}^{n} X_i;$$

the corresponding continuous-time sum statistic is an uncoupled CTRW,

$$S^{(1)}(t) = \sum_{i=1}^{N(t)} X_i,$$

where we take $N(t)$ to be the Poisson process; in this case, $\circledast$ is the usual convolution and the operator $\mathbb{L}$ coincides with the usual Laplace transform $\mathcal{L}$. As we have exponentially distributed $J_i$s (again with $\lambda = 1$), we recall that the kernel $\Phi(t)$ in (22) coincides with Dirac's delta $\delta(t)$, and equation (22) becomes an ordinary relaxation equation

$$\frac{\partial Q^{(1)}(w, t)}{\partial t} = -\left(1 - \mathcal{L}(F_{X_1}(u))(w)\right) Q^{(1)}(w, t).$$

The solution of the Cauchy problem for the above relaxation equation is

$$Q^{(1)}(w, t) = \exp\left(-t\left(1 - \mathcal{L}(F_{X_1}(u))(w)\right)\right) = e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!} \left[\mathcal{L}(F_{X_1}(u))(w)\right]^n,$$

leading, upon inversion of the second transform, to

$$F_{S^{(1)}(t)}(u) = e^{-t} \sum_{n=0}^{\infty} \frac{t^n}{n!} \, F^{\ast n}_{X_1}(u),$$

where $F^{\ast n}_{X_1}(u)$ denotes the $n$-fold convolution and $F^{\ast 0}_{X_1}(u) = \theta(u)$, the Heaviside step function.
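The series solution for the sum statistic can be tested numerically. For $X_1 \sim \mathrm{Exp}(1)$ the $n$-fold convolution is the Erlang($n$) cumulative distribution function, so the series can be summed explicitly and compared with a Monte Carlo simulation of the compound Poisson sum. The sketch below uses illustrative parameters not taken from the paper.

```python
import math
import random

def erlang_cdf(n, u):
    """n-fold convolution of F_{X_1} for X_i ~ Exp(1); the 0-fold case
    is the Heaviside step theta(u)."""
    if n == 0:
        return 1.0 if u >= 0 else 0.0
    return 1.0 - math.exp(-u) * sum(u**k / math.factorial(k) for k in range(n))

t, u = 3.0, 4.0
series = sum(math.exp(-t) * t**n / math.factorial(n) * erlang_cdf(n, u)
             for n in range(80))

# Monte Carlo estimate of P(S(t) <= u) for the continuous-time sum statistic
random.seed(3)
trials = 100000
hits = 0
for _ in range(trials):
    # count renewals in [0, t] with Exp(1) inter-event times
    clock, n = random.expovariate(1.0), 0
    while clock <= t:
        n += 1
        clock += random.expovariate(1.0)
    hits += sum(random.expovariate(1.0) for _ in range(n)) <= u
mc = hits / trials
```

The Monte Carlo estimate `mc` should agree with `series` up to sampling error.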

Maximum statistics
As a second example, we consider the maximum statistic

$$S^{(2)}_n = \max(X_1, \ldots, X_n).$$

This time we assume that the interarrival times $J_i$ follow a Mittag-Leffler distribution of order $0 < \alpha < 1$; this is characterized by the following complementary cumulative distribution function

$$\mathbb{P}(J_1 > t) = E_\alpha(-t^\alpha),$$

where the one-parameter Mittag-Leffler function $E_\alpha(z)$ is defined as

$$E_\alpha(z) = \sum_{n=0}^{\infty} \frac{z^n}{\Gamma(\alpha n + 1)}.$$
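For moderate $|z|$, the Mittag-Leffler function can be evaluated by truncating the defining series directly; the sketch below (plain Python, with an illustrative truncation length) implements this and checks the special cases $E_1(z) = \exp(z)$ and $E_\alpha(0) = 1$.

```python
import math

def mittag_leffler(alpha, z, terms=100):
    """One-parameter Mittag-Leffler function E_alpha(z) evaluated by
    truncated power series; reliable only for moderate |z|."""
    return sum(z**n / math.gamma(alpha * n + 1.0) for n in range(terms))
```

For large $|z|$ or small $\alpha$ the truncated series loses accuracy and dedicated algorithms should be used instead.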
The corresponding continuous-time maximum statistic is

$$S^{(2)}(t) = \max(X_1, \ldots, X_{N_\alpha(t)}),$$

where $N_\alpha(t)$ is the fractional Poisson process of renewal type [27] and, as before, we assume the independence of $N_\alpha(t)$ from the $X_i$s; here, $\circledast$ is the usual product and the operator $\mathbb{L}$ is the identity. The kernel is

$$\Phi(t) = \frac{t^{-\alpha}}{\Gamma(1 - \alpha)}, \qquad (32)$$

and the non-local relaxation equation (22) becomes

$$\frac{\partial^\alpha Q^{(2)}(w, t)}{\partial t^\alpha} = -\left(1 - F_{X_1}(w)\right) Q^{(2)}(w, t), \qquad (33)$$

where $\partial^\alpha/\partial t^\alpha$ is the Caputo derivative (see Appendix A). The solution of (33) is

$$F_{S^{(2)}(t)}(w) = E_\alpha\left(-\left(1 - F_{X_1}(w)\right) t^\alpha\right). \qquad (34)$$

If $\alpha = 1$ and $X_1$ is exponentially distributed, equation (34) reduces to the well-known Gumbel distribution [28]

$$F_{S^{(2)}(t)}(w) = \exp(-\exp(-w)\, t).$$
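At $\alpha = 1$ the result can be verified by direct simulation, since the Mittag-Leffler waiting times reduce to $\mathrm{Exp}(1)$ and $N_1(t)$ is an ordinary Poisson process. The sketch below (plain Python; the values of $t$ and $w$ are illustrative) compares a Monte Carlo estimate of $\mathbb{P}(S^{(2)}(t) \leq w)$ with $\exp(-\exp(-w)\, t)$.

```python
import math
import random

random.seed(11)
t, w = 5.0, 1.5
trials = 100000
hits = 0
for _ in range(trials):
    # alpha = 1: simulate N_1(t) by counting Exp(1) renewals in [0, t]
    clock, n = random.expovariate(1.0), 0
    while clock <= t:
        n += 1
        clock += random.expovariate(1.0)
    # maximum of n Exp(1) draws (conventionally 0 when n = 0)
    m = max((random.expovariate(1.0) for _ in range(n)), default=0.0)
    hits += m <= w
mc = hits / trials
gumbel = math.exp(-math.exp(-w) * t)   # equation (34) at alpha = 1
```

Simulating the genuinely fractional case $0 < \alpha < 1$ would additionally require a Mittag-Leffler random number generator, which the standard library does not provide.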
Incidentally, this result has potential applications in geophysics, where power-law distributed interarrival times are often observed between extreme events. This was presented earlier in a Master's thesis [29]. A general discussion of this problem can be found in [30].

Summary and conclusions
In this paper, the relation between generalized anomalous relaxation equations and semi-Markov processes is explored in some specific cases. Explicit evolution equations are given for transforms of the cumulative distribution function of continuous-time statistics of convolution type. Two specific examples are worked out in detail: the sum statistic and the maximum statistic. The case of the sum statistic coincides with the CTRW. For the maximum statistic, in the presence of power-law interarrival times following the Mittag-Leffler distribution, the theory leads to an explicit analytic form for the cumulative distribution function that was not published before. It is a fractional generalization of the well-known Gumbel distribution and it is given in equation (34). The theory outlined in Section 3, yielding equations such as (5), (10) and (22), is leading to interesting developments presented, e.g., in [13,14]. Essentially, the idea is that a random time change $N(t)$, where $N(t)$ is a counting renewal process, in a Markov chain leads to a generalized relaxation equation for the relevant probabilities (or characteristic functions) whose solution is given in terms of the complementary cumulative distribution function of the inter-event duration. We are currently working on mixing properties and stability of these processes. Moreover, this is quite a rich class of processes and there is virtually no limit to modeling, extensions and generalizations.

Open Access This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Appendix A

Equation (20) can be rearranged as follows [9]:

$$\frac{1 - \mathcal{L}(f_{J_1}(t))(s)}{s \, \mathcal{L}(f_{J_1}(t))(s)} \left( s \, \mathcal{L}(Q(w, t))(s) - 1 \right) = -\left(1 - \mathbb{L}(F_{X_1}(u))(w)\right) \mathcal{L}(Q(w, t))(s), \qquad \mathrm{(A.5)}$$

and inverting this with respect to the Laplace transform yields equation (22) with

$$\mathcal{L}(\Phi(t))(s) = \frac{1 - \mathcal{L}(f_{J_1}(t))(s)}{s \, \mathcal{L}(f_{J_1}(t))(s)} = \frac{\mathcal{L}(\bar{F}_{J_1}(t))(s)}{\mathcal{L}(f_{J_1}(t))(s)}, \qquad \mathrm{(A.6)}$$

and initial condition $Q(w, t = 0) = 1$. For the Caputo derivative in the second example, replace the kernel given in equation (32) in the non-local term of equation (22) to get

$$\int_0^t \frac{(t - t')^{-\alpha}}{\Gamma(1 - \alpha)} \, \frac{\partial Q^{(2)}(w, t')}{\partial t'} \, dt' = \frac{\partial^\alpha Q^{(2)}(w, t)}{\partial t^\alpha}.$$

This is indeed the Caputo derivative of $Q^{(2)}(w, t)$ [9,31].