From Semi-Markov Random Evolutions to Scattering Transport and Superdiffusion

We study random evolutions on Banach spaces driven by a class of semi-Markov processes. The expectation (in the sense of Bochner) of such evolutions is shown to solve certain abstract Cauchy problems. Further, the abstract telegraph (damped wave) equation is generalized to the case of semi-Markov perturbations. Special attention is devoted to semi-Markov models of scattering transport processes, which can be represented through these evolutions. In particular, we consider random flights with infinite mean flight times, which turn out to be governed by a semi-Markov generalization of a linear Boltzmann equation; their scaling limit is proved to converge to a superdiffusive transport process.

1. Introduction
2. Assumptions and preliminaries
2.1. Markov and semi-Markov random evolutions
2.2. Assumptions
2.3. PDEs connection
3. Boltzmann-type equations
3.1. The operator f(∂_t − G_v) through Bochner subordination
3.2. The expectation of a semi-Markov random evolution
3.3. The governing equation
4. Abstract wave equation with semi-Markov damping
5. Transport with infinite mean flight times and superdiffusion
5.1. Transport processes with heavy-tailed flight times
5.2. Convergence to a superdiffusive transport process
5.3. On the telegraph process

... a hyperbolic-type equation of partial differential equations (PDEs). Most notably, he also realized that these two PDEs can be merged in the so-called telegraph (or damped wave) equation. The process above is called the telegraph process and, to the best of our knowledge, the first remarks on it in the modern mathematical literature appear in Goldstein [18].
The n-dimensional version of such a process is the isotropic (Markovian) transport process (e.g., [40; 46; 61]). This is the uniform motion of a particle that chooses a new direction, with uniformly distributed angles, at each jump time of a Poisson process. The position-velocity density function solves a linear Boltzmann equation (e.g., [61]); moreover, by central limit arguments, a Brownian motion arises in the limit of large times and rapid jumps.
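As an aside, the isotropic transport process described above is straightforward to simulate. The following sketch is our own illustration (not taken from the references): a planar particle moves at constant speed and redraws a uniformly distributed direction at the events of a Poisson process.

```python
import math
import random

def random_flight(t, speed=1.0, rate=1.0, rng=random):
    """Isotropic transport in the plane: the particle moves at constant
    speed and picks a fresh uniform direction at each event of a
    rate-`rate` Poisson process.  Returns the position at time t."""
    x = y = 0.0
    theta = rng.uniform(0.0, 2.0 * math.pi)
    remaining = t
    while True:
        flight = rng.expovariate(rate)           # exponential flight time
        dt = min(flight, remaining)
        x += speed * dt * math.cos(theta)
        y += speed * dt * math.sin(theta)
        remaining -= dt
        if remaining <= 0.0:
            return x, y
        theta = rng.uniform(0.0, 2.0 * math.pi)  # scattering: new direction
```

Averaging |X(t)|² over many independent runs exhibits the diffusive behaviour mentioned above; note also that the position always lies in the ball of radius speed × t.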
The above-mentioned transport process can be seen in an abstract way: there is a running evolution (e.g., translation at velocity v) that changes mode of evolving (e.g., to translation at velocity v′) after exponentially distributed waiting times. This abstract idea led Griego and Hersh [19] to formulate the notion of random evolutions (see also [49] and references therein). Indeed, one can imagine that there is a phenomenon whose instantaneous state is represented by an element u of a Banach space B. The modes of time evolution are given by semigroups (T_v(t))_{t≥0}, v ∈ S, on B, and there is a random mechanism (e.g., a Markov chain V(t) on S) which changes the mode of evolution from T_v(t) to T_{v′}(t) after an exponentially distributed waiting time. The authors realized the connection of these random evolutions with (systems of) abstract equations,

d/dt u_v(t) = G_v u_v(t) + q_v Σ_{v′∈S} h_{vv′} (u_{v′}(t) − u_v(t)),    (1.1)

for v ∈ S (S finite), where G_v generates T_v(t) and h_{vv′} is the probability that a random jump of V(t) starts from v and arrives at v′. The general formulation, for uncountable S, is

d/dt u_v(t) = G_v u_v(t) + q_v ∫_S (u_{v′}(t) − u_v(t)) h_v(dv′),    (1.2)

which reduces to a Boltzmann-type equation when G_v = v · ∇_x on a suitable Banach space (see [29, Corollary 3.1] for the general statement). Moreover, when S = {v, −v}, they established that the PDEs above can be combined in the abstract telegraph equation

d²/dt² w(t) + 2q d/dt w(t) = G_v² w(t),    (1.3)

where q = (q_v + q_{−v})/2. Exponential waiting times are typical in several physical systems, but this pattern can be distorted in many situations. From the probabilistic point of view this means that it is useful to relax the Markov assumption (with exponential time intervals) in order to allow arbitrarily distributed waiting times between different modes of evolution. Thus Korolyuk and Swishchuk had the idea of taking V(t) to be a semi-Markov process, and they developed the theory of the so-called semi-Markov random evolutions (see [29] and references therein). However, since in this case the Markov property is lost, the classical connection to abstract Cauchy problems is no longer true and a new theory in this sense has not yet been developed.
This paper makes notable progress in this direction. Indeed, in Theorem 3.10 we find an abstract equation representing the semi-Markov counterpart of eq. (1.2), which therefore constitutes a generalization of the abstract (linear) Boltzmann-type equation. We also find the semi-Markov analogue of equation (1.3) for the two-state model. It turns out that these equations are non-local in the time variable, as a consequence of the memory effect induced by the semi-Markov perturbation.
An application of the above theory allows us to develop a semi-Markov model of scattering transport: we consider a semi-Markov version of the isotropic transport process, i.e., one whose flight times are not exponentially distributed. If such flight times have finite mean and variance, then this process is again an approximation of a Brownian motion, just as in the Markov case. Instead, the asymptotic behaviour in the case of infinite mean and variance is more complicated and is not covered by the limit theorems developed so far.
Therefore we consider a random flight process whose flight times have infinite expectation and belong to the domain of attraction of a stable law. First we show that this model of scattering transport is described by an integro-differential equation exhibiting a pseudo-differential operator in both the space and time variables; this equation represents the semi-Markov counterpart of the linear Boltzmann equation holding for the Markov flights. We show that a suitable scaling of our transport process converges (in distribution) to a transport process with superdiffusive behaviour. At time t this process is supported on the d-dimensional ball centered at the starting point, with radius t. Superdiffusive means that the mean square displacement of the limit process spreads, as t → ∞, like Kt^γ, with γ > 1 and K > 0. In our case we will find that γ = 2. This last result is obtained by adapting the limit theorems for coupled continuous time random walks developed in [8; 37]. It is noteworthy that the limit process is still a scattering transport process, performing, on any finite interval of time, a countable infinity of displacements shorter than ε and a finite number longer than ε, for any ε > 0. We stress that superdiffusion is empirically observed in many physical systems, like turbulent diffusion, quantum optics, bacterial motion and many others (see [39] and references therein for an overview of this subject).
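For illustration only, such random flights with heavy-tailed flight times can be sampled in the same way as the Markovian ones, replacing the exponential flight times with Pareto ones lying in the domain of attraction of an α-stable law, α ∈ (0, 1); the sampler below is our own sketch.

```python
import math
import random

def heavy_tailed_flight(t, alpha=0.7, rng=random):
    """Random flight at unit speed whose flight times are Pareto
    distributed, P(J > w) = w**(-alpha) for w >= 1; for alpha in (0, 1)
    the flight times have infinite mean.  Returns the planar position
    at time t."""
    x = y = 0.0
    remaining = t
    while True:
        flight = rng.random() ** (-1.0 / alpha)    # Pareto(alpha) flight time
        dt = min(flight, remaining)
        theta = rng.uniform(0.0, 2.0 * math.pi)    # fresh uniform direction
        x += dt * math.cos(theta)
        y += dt * math.sin(theta)
        remaining -= dt
        if remaining <= 0.0:
            return x, y
```

Since the speed is 1, the position at time t always lies in the ball of radius t, in agreement with the support property of the limit process; the empirical mean square displacement, averaged over many runs, grows faster than linearly in t.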

Assumptions and preliminaries
We briefly introduce here the foundations of the theory which will be used throughout the paper, and we outline the basic assumptions under which our theory takes shape. We also establish here the notation used throughout the manuscript.
2.1. Markov and semi-Markov random evolutions. We refer to [19; 29; 49] for the basic theory. Let V(t) be a regular stepped semi-Markov process in the sense of [29, Chapter 1]. Hence let (S, S) be a metric space and let v_n, n ∈ N, be a discrete-time Markov chain on it which is embedded in (V(t), t ≥ 0). The transition probabilities will be denoted as in (2.1). Let J_n, n ∈ N, be a sequence of non-negative r.v.'s with the distribution, for any n ∈ N,

F_v(w) := P(J_{n+1} ≤ w | v_n = v).    (2.2)

We will assume that the distributions (2.2) are absolutely continuous, for any v ∈ S, and we will denote by g_v(w) a density. Further, we will use the notation F̄_v(w) := 1 − F_v(w). Let τ_0 := 0 and τ_n := Σ_{i=1}^n J_i for n ∈ N. Hence denote

V(t) := v_n, for τ_n ≤ t < τ_{n+1}.    (2.3)

The assumption of V being regular means that it does not accumulate an infinity of jumps, i.e., if we define N(t) := max{n ∈ N : τ_n ≤ t}, we have N(t) < ∞, P_v-a.s., for any v ∈ S and any t > 0. Now let T(t) be a semi-Markov random evolution in the sense of [29]. Hence, for each v ∈ S, let (T_v(t), t ≥ 0) be a family of operators and assume that it forms a strongly continuous semigroup on a Banach space (B, ‖·‖). Now define the random operator on B

T(t) := T_{v_0}(J_1) T_{v_1}(J_2) ⋯ T_{v_{N(t)}}(t − τ_{N(t)}).    (2.4)

In the framework introduced in [29] the operator T(t) is called a 'continuous' semi-Markov random evolution (see [29, Def 3.2]). Denote by (G_v, B_0) the generators of T_v(t) and suppose that B_0 ⊂ B is the common domain of definition of the operators G_v. We remark that (2.4) has the (stochastic) representation of [29, Lemma 3.1], which must be meant on B_0, I denoting the identity operator. One of the most important objects in this paper is the mean value of a semi-Markov random evolution, i.e., for a function u : S → B, the mapping

(v, t) ↦ q_v(t) := E_v [T(t) u(V(t))],    (2.6)

where the expectation E_v is meant in the Bochner sense.
If the (J_n, n ∈ N) are such that (2.2) is the cdf of an exponential distribution with parameter θ_v, then the process V is a continuous-time Markov chain and the operator T(t) in (2.4) defines a Markov evolution in the sense of [19; 49]. In this case we will denote the process by W(t) in place of V(t); one can prove (e.g., [19, Theorem 2] for finite S or [29, Corollary 3.1] for general S) that q_v(t) satisfies

d/dt q_v(t) = G_v q_v(t) + θ_v ∫_S (q_{v′}(t) − q_v(t)) h_v(dv′).    (2.7)

Eq. (2.7) has the same form as a linear Boltzmann equation; indeed it reduces to a linear Boltzmann equation when the evolution is given by translation semigroups on suitable function spaces, for an appropriate choice of h_v. We remark that G_v and the integral operator on the right-hand side of (2.7) act upon different variables and that the equation is meant on B_0.

2.2. Assumptions. From now on we consider a special class of semi-Markov random evolutions, defined by the following assumptions.

A1) For any v ∈ S, the family (T_v(t), t ∈ R) forms a strongly continuous group of operators on (B, ‖·‖), such that ‖T_v(t)u‖ ≤ ‖u‖ for all t ∈ R and u ∈ B.

A2) The semi-Markov process V(t) is constructed as a continuous-time Markov chain time-changed by the inverse of an independent driftless subordinator with infinite activity (i.e., a strictly increasing pure-jump Lévy process).

In Sections 3 and 4 the assumptions A1) and A2) will always be considered fulfilled, without specifying it further.

2.2.1. Discussion. We now discuss assumptions A1) and A2). Moreover, we introduce and discuss some further minor technical assumptions which will sometimes be required (saying so explicitly).
First note that A1) includes the remarkable case where T_v represents a translation in R^d at velocity v (on suitable function spaces like L¹(R^d) or C_0(R^d)) and many others. Since we work under the assumption that the family (T_v(t))_{t∈R} forms a group for any fixed v, we have that −G_v generates (T_v(−t))_{t≥0}, just as G_v generates (T_v(t))_{t≥0}, in the sense of semigroups (see, for example, [14, Section 3.11]).
We now explain in detail the time-change construction for V(t) to which we refer in A2). Let σ(t) be a subordinator, i.e., a one-dimensional Lévy process with non-decreasing sample paths. Its distribution is defined, for λ > 0, by

E e^{−λσ(t)} = e^{−t f(λ)},    (2.8)

where f(λ) is a Bernstein function (see more on subordinators in [9; 54]). Hence f(λ) has the form

f(λ) = a + bλ + ∫_0^∞ (1 − e^{−λs}) ν(ds),    (2.9)

where a, b are non-negative constants and ν is a sigma-finite measure fulfilling the integrability condition ∫_0^∞ (s ∧ 1) ν(ds) < ∞. The measure ν is said to be the Lévy measure of σ(t). We here assume that a = 0, which implies that P(σ(t) < ∞) = 1 for all t > 0, and that b = 0, which implies that σ(t) is a pure jump process with no drift. Since there is no drift, in order to ensure that the process σ is strictly increasing we assume that ν(0, +∞) = ∞, i.e., the process has infinite activity. Now, let

L(t) := inf {w ≥ 0 : σ(w) > t}.    (2.12)

Hence, assumption A2) means that our semi-Markov process is defined by the following time-change of a continuous-time Markov chain W(t),

V(t) := W(L(t)),    (2.13)

which is meant in the sense of (2.14). It is easy to see from (2.14) that the epochs (jump times) (τ_n, n ∈ N) of V(t) are a transformation of the epochs (τ̄_n, n ∈ N) of W(t), i.e., τ_n = σ(τ̄_n−) a.s. However, by a simple conditioning argument, using independence and the fact that σ(t) has no fixed discontinuities, i.e., σ(t) − σ(t−) = 0 a.s., one has

E e^{−λσ(τ̄_n−)} = E E[e^{−λσ(τ̄_n−)} | τ̄_n] = E e^{−λσ(τ̄_n)}.    (2.15)

It follows that (2.3) can be rewritten accordingly. Hence (2.13) is characterized by the same embedded Markov chain {v_n}_{n∈N} of W(t), but it exhibits new waiting times J_n distributed as the increments σ(τ̄_n) − σ(τ̄_{n−1}). By stationarity of the increments of subordinators, and since the τ̄_n − τ̄_{n−1} are exponentially distributed, it is clear that the J_n are distributed as σ evaluated at an independent exponential time (see (2.18)). Since we assume that F_v(w) has a Lebesgue density, we will consider only subordinators whose one-dimensional marginals have a Lebesgue density. We will denote the density of σ(t) by the symbol μ_t(w), i.e.,

P(σ(t) ∈ dw) = μ_t(w) dw.    (2.19)
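The time-change construction in A2) can be illustrated numerically. The sketch below is ours (the truncation parameter eps is an artefact of the simulation, not of the theory): it approximates a driftless α-stable subordinator, with Lévy measure ν(ds) = α s^{−1−α} ds, by the compound Poisson process of its jumps larger than eps, and computes its inverse L(t) by a first-passage search.

```python
import random

def stable_subordinator_path(t_max, alpha=0.6, eps=1e-3, rng=random):
    """Approximate path of a driftless alpha-stable subordinator on
    [0, t_max]: keep only the jumps larger than eps.  Their intensity is
    nu(eps, inf) = eps**(-alpha) and, conditionally, a jump J satisfies
    P(J > s) = (s/eps)**(-alpha); the infinitely many discarded jumps
    below eps give a finite total contribution."""
    rate = eps ** (-alpha)                   # arrival rate of retained jumps
    times, values = [0.0], [0.0]
    t, s = 0.0, 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= t_max:
            times.append(t_max)
            values.append(s)
            return times, values
        s += eps * rng.random() ** (-1.0 / alpha)   # Pareto-sized jump
        times.append(t)
        values.append(s)

def inverse_subordinator(times, values, level):
    """L(t) = inf{w : sigma(w) > t}: first passage of the path above `level`."""
    for w, s in zip(times, values):
        if s > level:
            return w
    return times[-1]
```

V(t) = W(L(t)) is then obtained by evaluating the Markov chain W at the inverse time L(t); the waiting times of V are exactly the subordinator increments over the exponential holding times of W.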
A further quantity which is typical of semi-Markov processes is the renewal density h_v(w), which gives the probability that there is at least one jump during dt. In what follows we will assume that (2.21) holds for any v, and (since we are working with non-explosive processes) also (2.20), whenever the process N(t) is a renewal counting process, i.e., θ_v = θ for some θ > 0. In this case the function h_v(w) is the renewal density of N(t) in the classical sense of renewal theory (e.g., [11, page 26]), and we have the representation (2.22). Since the process N(t) is non-explosive, the probability of having more than one jump in the interval ∆t is o(∆t), and therefore the numerator of (2.22) can be interpreted as the probability of having one, or more, jumps in the interval ∆t. In our case the computation can be conducted by exploiting the time-change construction, as follows. By (2.19) we have that the renewal (potential) measure of our subordinators always has a density

u_f(w) := ∫_0^∞ μ_t(w) dt.    (2.24)

This density is proportional to the renewal density h_v(w) of the renewal counting process N(t). Indeed, N(t) can be written through a Poisson process N̄ independent of L(w). Since the renewal measure of N(t) then has density θ u_f(w), it follows that h_v(w) = θ u_f(w) for any v. Further, the density u_f(w) is clearly in L¹_loc(R_+).
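In the stable case f(λ) = λ^α the potential density can be written in closed form, u_f(w) = w^{α−1}/Γ(α), since its Laplace transform must equal 1/f(λ); the following check is a numerical illustration of ours, not part of the paper's argument.

```python
import math

def potential_density(w, alpha):
    """u_f(w) = w**(alpha - 1) / Gamma(alpha): renewal (potential) density
    of the alpha-stable subordinator, whose Laplace exponent is
    f(lambda) = lambda**alpha."""
    return w ** (alpha - 1.0) / math.gamma(alpha)

def laplace_of_uf(alpha, lam, n=100000):
    """Numerical Laplace transform of u_f; it should equal
    1/f(lambda) = lam**(-alpha).  The substitution w = u**(1/alpha)
    removes the integrable singularity of u_f at w = 0."""
    # head, w in (0, 1]: int e^{-lam w} w^{a-1} dw = (1/a) int_0^1 e^{-lam u^{1/a}} du
    h = 1.0 / n
    head = sum(math.exp(-lam * ((i + 0.5) * h) ** (1.0 / alpha))
               for i in range(n)) * h / alpha
    # tail, w in (1, W]: smooth integrand with exponential decay
    W = 1.0 + 50.0 / lam
    h = (W - 1.0) / n
    tail = sum(math.exp(-lam * w) * w ** (alpha - 1.0)
               for w in (1.0 + (i + 0.5) * h for i in range(n))) * h
    return (head + tail) / math.gamma(alpha)
```

For α = 1/2, for instance, u_f(w) = 1/√(πw), the familiar kernel of fractional calculus of order 1/2.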

Boltzmann-type equations
In this section we derive rigorously the governing equation for the expectation function q_v(t) defined in (2.6). Hence we must make rigorous the following assertion: the function q_v(t) satisfies, on B,

f(∂_t − G_v) q_v(t) = θ_v ∫_S (q_{v′}(t) − q_v(t)) h_v(dv′),    (3.1)

subject to q_v(0) = u(v), where f is the Laplace exponent defined in (2.9). Whenever f(λ) = λ (hence there is no time-change and V(t) is Markovian) one recovers, formally, the linear Boltzmann-type equation

(∂_t − G_v) q_v(t) = θ_v ∫_S (q_{v′}(t) − q_v(t)) h_v(dv′).    (3.2)

In this section we proceed as follows. In Section 3.1 we address the problem of defining the operator f(∂_t − G_v) appearing on the left-hand side of (3.1); as f is a Bernstein function, we take inspiration from the theory of Bochner subordination, and the corresponding functional calculus, whose basic facts will be outlined at the beginning of Section 3.1 (the reader can consult [54, Chapter 12] or [22, Chapter 2] for a thorough discussion of Bochner subordination). Then, in Section 3.2, we obtain some technical properties of q_v(t) which will be needed throughout the paper.
3.1. The operator f(∂_t − G_v) through Bochner subordination. Take a family of operators (T_t)_{t≥0}, suppose that it forms a strongly continuous contraction semigroup on B, and define the generator of T_t as

Au := lim_{t↓0} (T_t u − u)/t,    (3.3)

on the domain

Dom(A) := {u ∈ B : the limit (3.3) exists as a strong limit}.    (3.4)

Then take a subordinator σ(t), t ≥ 0, with Laplace exponent f, and define the family (T_t^f)_{t≥0} as

T_t^f u := ∫_0^∞ T_s u μ_t(ds),    (3.5)

where the integral (3.5) is meant as a Bochner integral. Phillips' theory states that the family T_t^f is still a semigroup (this comes from the Markov property of σ(t)) and that the generator of T_t^f is given by the operator A^f (with domain Dom(A^f)) such that

A^f u = ∫_0^∞ (T_s u − u) ν(ds), for u ∈ Dom(A),    (3.6)

and, in general, Dom(A^f) ⊃ Dom(A) (for all the assertions above, see [54, Propositions 12.1, 12.5 and Theorem 12.6]). We observe that formula (3.6) uses the representation (2.9) for the Bernstein function f (with a = b = 0) and the basic definition of pseudo-differential operators. We note that, on functions in L¹([0,T]; R), for any T > 0, the operator ^M D_t^f defined in (2.29) can be interpreted by means of the above theory. Indeed, if we define the family of operators

Γ_s u(t) := u(t − s) 1_{[s ≤ t]},

then it is well known that this family forms a strongly continuous contraction semigroup on L¹([0,T]; R), for T > 0. Hence one might be tempted to write, in the spirit of Bochner subordination, ^M D_t^f u(t) = f(−A_Γ)u(t), where A_Γ denotes the generator of Γ_t. However, the generator of the (killed) translation Γ_s is defined on functions that are differentiable, in an appropriate sense, and such that u(0) = 0. So we apply Phillips' formula as in (3.6) to the function u_0(t) := u(t) − u(0), and this yields the representation

^M D_t^f u(t) = ∫_0^∞ (u_0(t) − u_0(t − s) 1_{[s ≤ t]}) ν(ds),    (3.8)

and the integral makes sense as a Bochner integral because of (3.6), under suitable assumptions on u(t).
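Phillips' formula (3.5) can be tested numerically in a classical special case, which we add here as our own illustration (the quadrature parameters are arbitrary): for f(λ) = √λ the subordinator density is explicit, μ_t(s) = t (2√π)^{−1} s^{−3/2} e^{−t²/(4s)}, and subordinating the heat semigroup (generator d²/dx²) must yield the Cauchy semigroup, since E exp(−σ(t)ξ²) = exp(−t|ξ|).

```python
import math

def half_stable_density(s, t):
    """Density mu_t(s) of the 1/2-stable subordinator, f(lambda) = sqrt(lambda):
    mu_t(s) = t/(2*sqrt(pi)) * s**(-3/2) * exp(-t**2/(4*s))."""
    return t / (2.0 * math.sqrt(math.pi)) * s ** (-1.5) * math.exp(-t * t / (4.0 * s))

def subordinated_heat_kernel(x, t, n=200000, s_max=1000.0):
    """Bochner subordination as in (3.5): integrate the heat kernel
    p_s(x) = (4*pi*s)**(-1/2) * exp(-x**2/(4*s)) against mu_t(ds).
    The result should be the Cauchy kernel (1/pi) * t / (t**2 + x**2)."""
    h = s_max / n
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h                          # midpoint rule in s
        p = math.exp(-x * x / (4.0 * s)) / math.sqrt(4.0 * math.pi * s)
        total += p * half_stable_density(s, t)
    return total * h
```

The agreement with the Cauchy kernel is a direct numerical instance of the subordination identity T_t^f u = ∫ T_s u μ_t(ds).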
In this paper we consider a generalization of (3.8) on L¹([0,T]; B), T > 0, i.e., the space of functions [0,T] ∋ t ↦ u(t) ∈ B with finite L¹ norm ∫_0^T ‖u(t)‖ dt. The reader can consult [28, Section 5.13] for a general theory of semigroups acting on Banach-valued functions (see in particular Proposition 5.13.1 and Theorem 5.13.1, of which the forthcoming Lemmas 3.1 and 3.3 are analogues for translations on L¹([0,T], B)).
Indeed, consider the operators

Γ_s u(t) := u(t − s) 1_{[s ≤ t]},    (3.9)

on L¹([0,T]; B), T > 0. It turns out that this family is a strongly continuous contraction semigroup, as is outlined in the forthcoming result; in the following, the derivative is meant on absolutely continuous functions u : [0,T] → B with the representation

u(t) = u(0) + ∫_0^t u′(s) ds,

for any t ∈ [0,T]. We recall that in this framework a function u is said to be absolutely continuous if for every ε > 0 there exists δ > 0 such that, for every finite collection of disjoint intervals (a_i, b_i) ⊂ [0,T] with Σ_i (b_i − a_i) < δ, one has Σ_i ‖u(b_i) − u(a_i)‖ < ε. In view of the previous heuristic discussion it is clear that we will need the following technical Lemma.
Lemma 3.1. The family of operators Γ_t, t ≥ 0, defined in (3.9) forms a strongly continuous contraction semigroup on L¹([0,T]; B), for any T > 0. The generator is (A_Γ, W), where A_Γ u = −u′ and W is the set of functions u ∈ L¹([0,T]; B) admitting the representation u(t) = ∫_0^t u′(s) ds with u′ ∈ L¹([0,T]; B).

Remark 3.2. We remark that W coincides with AC([0,T], B) (with u(0) = 0) whenever the Banach space B has the Radon–Nikodym property, i.e., when the Banach space is such that absolutely continuous functions are a.e. differentiable (see [1, Definition 1.2.5]). Otherwise W is a subset of it.
Proof of Lemma 3.1. Fix T > 0 arbitrarily. First we prove that Γ_t is a contraction. We have that

‖Γ_t u‖ = ∫_t^T ‖u(s − t)‖ ds = ∫_0^{T−t} ‖u(s)‖ ds ≤ ‖u‖.    (3.12)

Strong continuity of Γ_s on L¹([0,T], B) can be proved by checking it first on the set of continuous functions with compact support, which is dense in L¹([0,T], B), and then extending to the whole Banach space (see, for example, [14, Proposition 5.3] or the discussion in [1, page 14]). To obtain the generator and its domain we proceed as follows. Let (Ã, Dom(Ã)) be the operator defined in (3.13). As h → 0, the rhs of (3.13) converges to ∫_0^t g(s) ds, where g(s) := A_Γ u(s), since integration over compact intervals is continuous on L¹, while the lhs converges to −u(t) for almost all t ∈ [0,T]. If we define u appropriately on a null set, we get that u is an L¹([0,T], B) function which is absolutely continuous (according to [1, Proposition 1.2.2]) with derivative −A_Γ u(s) and u(0) = 0. This shows that Dom(A_Γ) ⊂ W and that Ã|_{Dom(A_Γ)} = A_Γ. Denote now by ρ(Ã) and ρ(A_Γ) the resolvent sets of the two operators. It is easy to compute the resolvent operator of Ã for λ > 0. It follows easily that ρ(Ã) ∩ ρ(A_Γ) ≠ ∅, and thus from [14, Exercise IV.1.21.(5)] we can conclude that (A_Γ, Dom(A_Γ)) = (Ã, W).
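The semigroup and contraction properties in Lemma 3.1 are easy to visualize on a grid; the discretization below is ours and purely illustrative (B = R, grid step h, killed shift).

```python
def gamma_op(u, s, h):
    """Discretized killed translation (Gamma_s u)(t) = u(t - s) 1_{t >= s}
    on a grid of step h: shift the samples right by round(s/h) positions
    and pad with zeros (the 'killing' at the boundary t = 0)."""
    k = round(s / h)
    return [0.0] * min(k, len(u)) + u[:max(len(u) - k, 0)]

def l1_norm(u, h):
    """Grid approximation of the L1([0, T]; R) norm."""
    return sum(abs(x) for x in u) * h
```

On grid-aligned shifts the semigroup identity Γ_s Γ_t = Γ_{s+t} holds exactly, and the L¹ norm can only decrease, matching (3.12).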
We are now in a position to define the operator f(−A_Γ). As −A_Γ is the differentiation operator on absolutely continuous functions, in the rest of the paper we will use the notation −∂_t in place of A_Γ.
For any v ∈ S we consider the family (U_s^v)_{s≥0} of operators acting on functions u ∈ L¹([0,T]; B) as

U_s^v u(t) := T_v(s) u(t − s) 1_{[s ≤ t]}.

As a consequence of Lemma 3.1 it is easy to prove that the operators U_s^v form a strongly continuous contraction semigroup on L¹([0,T]; B) whose generator has the form −∂_t + G_v on the set H_v defined in (3.18), and thus we will use Phillips' formula to define f(∂_t − G_v) for suitable functions u (and thus with u(0) = 0).
where the last inequality follows from Lemma 3.1, while the second-to-last follows from the fact that T_v(s) is a contraction on B. The property U_s^v U_t^v u = U_{s+t}^v u is easily checked. Strong continuity can instead be checked by sending s → 0 in (3.20): the first member goes to zero by Lemma 3.1, while the second goes to zero by dominated convergence (e.g., [1]). Analogously, the remainder goes to zero as h → 0: the first member goes to zero since u(t) ∈ Dom(−∂_t), while the second goes to zero by dominated convergence for Bochner integrals, since T_v(h) is strongly continuous (on B).
With the properties of U_s^v at hand, we are finally able to define f(∂_t − G_v), since on H_v the generator A_{U^v} reduces to −∂_t + G_v. In the following Lemma we clarify this in terms of Bochner subordination.
Lemma 3.4. The subordinate semigroup of (U_s^v)_{s≥0} is generated, on H_v, by the operator −f(∂_t − G_v).

3.2. The expectation of a semi-Markov random evolution. We obtain here some properties of the expectation q_v(t) defined in (2.6), for suitable functions u : S → B.
Lemma 3.5. Let u : S → B_0 be such that (3.30) holds. Then we have that q_v(t) ∈ B_0 for any v ∈ S and t ≥ 0, and further G_v q_v ∈ L¹([0,T]; B).

Proof of Lemma 3.5. This follows directly from the assumptions.

Example 3.6 (Translation). Let S be finite and let T_v(t) be a translation on B = L¹(R) at velocity v, say T_v(t)h(x) = h(x + vt) (see, e.g., the case of the random evolution driven by the "telegraph" process, treated in Section 4, where S contains two elements). Use the notation u(v_n) = h_{v_n}. Then, for h_{v_n} ∈ AC(R) such that h′_{v_n} ∈ L¹(R), the required bound holds, the last inequality holding since S is finite. A similar argument applies to the sup-norm case, since the sup-norm is invariant under translations; then, by finiteness of S, we obtain the bound again.

Example 3.7 (Rotation). Here v = ±1 respectively denote the clockwise and the counterclockwise rotation. Similarly to Example 3.6, it is easy to prove that assumption (3.30) is satisfied in this case of rotation operators, because the L¹ and the sup-norm are invariant also under rotations; we leave the proof to the interested reader.
The following result characterizes continuity and differentiability properties of q_v(t).
Lemma 3.8. Assume that u : S → B_0 is such that sup_v ‖u(v)‖ < ∞ and that the assumptions in Lemma 3.5 are satisfied. Suppose further that h_v(w) satisfies (2.21). Then, for any v and T > 0, it is true that q_v(t) ∈ AC([0,T]; B), that q′_v exists a.e. and is Bochner integrable on [0,T], and that (3.38) holds.

Proof. Use the Dynkin-type representation of the semi-Markov random evolution T(t), which can be found (for example) in [58, Corollary 2.5], for s > 0, where g and F̄ are the densities and the survival functions of the waiting times according to Section 2.1. The last equality is justified by the following arguments. Since T_v(t)u(v′) ∈ B_0 for any v, v′ and t ≥ 0, by our assumptions we can find constants, say C_1 and C_2, bounding the relevant terms; further, C_1 and C_2 depend on u but are independent of y. Therefore, using that F̄_v is non-increasing and that a.s. γ(y) ∈ [0, y], we obtain the bound (3.41), where, using (2.18), the survival function F̄ can be estimated explicitly. Since the waiting times between jumps are absolutely continuous r.v.'s, the distribution of γ(y) can be estimated as in [11, page 61]. It follows that the resulting bound can be expressed through u_f, defined in (2.24). Since u_f ∈ L¹_loc(R_+), it follows that the second term in (3.43) is in L¹(R_+) by the properties of the Laplace convolution (e.g., [1, page 22]). Therefore y ↦ E_v g_{V(y)}(γ(y)) is in L¹_loc(R_+) for any v, and thus the last term in (3.41) is finite. Hence the integrand on the lhs of (3.38) is the (Bochner integrable) function (a.e. equal to q′_v(t)) we were looking for, and further we have by [1, Proposition 1.2.2] that q_v(t) is also in AC([0,T]; B) and a.e. differentiable.
Before stating the governing equation we need the following technical Lemma.

Lemma 3.9. Under the assumptions of Lemmas 3.5 and 3.8, the function q_v^0(t) := q_v(t) − T_v(t)u(v) belongs to H_v (defined in (3.18)), for any v ∈ S.

Proof. Note that q_v^0(0) = 0, and further q_v^0 is absolutely continuous, where we used Lemma 3.8 and the well-known integral representation of semigroups for u(v) ∈ B_0. Then, since we know from Lemma 3.5 that G_v q_v(t) ∈ L¹([0,T]; B), it follows from Lemma 3.3 that q_v^0 ∈ H_v.

3.3. The governing equation. In this section we give the rigorous result on the governing equation of q_v(t).

Theorem 3.10. Let the assumptions of Lemma 3.9 prevail. Then q_v(t) satisfies the following problem on B:

f(∂_t − G_v) q_v(t) = θ_v ∫_S (q_{v′}(t) − q_v(t)) h_v(dv′), q_v(0) = u(v),    (3.45)

for any v ∈ S and t ≥ 0.
Proof. Use [29, Theorem 3.1] to see that T(t)u(V(t))1_{{J_1 ≤ t}} is P_v-Bochner integrable. It follows that T_v(−t)T(t)u(V(t))1_{{J_1 ≤ t}} is also P_v-Bochner integrable, and thus we can apply T_v(−t) to both sides of (3.46) and move T_v(−t) inside the integral. Introduce the notation ϕ_{v,v′}(t) := T_v(−t)q_{v′}(t); we then obtain the renewal-type equation (3.48). Since q_{v′} ∈ L¹([0,T]; B) for any T > 0, it follows that q_{v′} ∈ L¹_loc(R_+; B), and thus we can take the Laplace transform (t → λ) on both sides of (3.48) in the sense of [1, Chapter 1.4]. As the last term in (3.48) is a convolution, we can use [1, Proposition 1.6.4] to get (3.50), for λ > 0, with the Laplace transforms of P_v(J_1 ∈ ds) and of the corresponding survival function computed from (2.18). Then the equation (3.51), where Γ_s is the operator defined in (3.9), coincides in the Laplace space (t → λ) with (3.50), and this proves that (3.51) is verified for almost all t ≥ 0. First note that the lhs is an element of B, since the integrand satisfies the bound (3.52), where q_v^0(w) := q_v(w) − T_v(w)u(v) = T_v(w)ϕ^0(w), and the rhs of (3.52) is in L¹([0,T]; B) by Lemma 3.9. For the rhs of (3.51), the computation is carried out by using (2.9) in the last step, Fubini in the third-to-last, and again [1, Corollary 1.6.5] in the second line.
The fact that the equality (3.51) is true for any t ≥ 0 (and not only for almost all t ≥ 0) comes from the (strong) continuity of both sides, which is a consequence of properties of Bochner integrals; since s ↦ q_{v′}(s) is continuous by Lemma 3.8, we have that s ↦ ϕ_{v,v′}(s) − ϕ_{v,v}(s) is continuous. Thus we can take the strong derivative in (3.51), use the representation (3.52), and apply T_v(t) to both sides of the equation to get (3.55).

Remark 3.11. Take a function u ∈ L¹([0,T]; B) with u(0) ≠ 0, so that u ∉ H_v, and define u_0(t) := u(t) − T_v(t)u(0). Now u_0(0) = 0, so that one could have u_0 ∈ H_v. This is exactly the case of q_v and q_v^0. Here the regularizing term −T_v(t)u(0) plays the same role as the regularization −u(0) in the canonical fractional derivative (3.8). Indeed, whenever u(t) is such that the following computations are justified, the function f(∂_t − G_v)u_0(t) can be rewritten as in (3.58), which has the form of the first line in (3.8). A particular case, which will be of great interest in the next sections, is given by specializing T_v(t)u(x) = u(x + vt) on suitable function spaces, to get from (3.58), when f(λ) = λ^α, the operator (∂_t − v · ∇_x)^α. In this last case our operator provides a rigorous way to define (∂_t − v · ∇_x)^α explicitly, which is usually understood as a pseudo-differential operator (see Section 5.1 below for details and references).
We have proved so far that the function q_v(t) satisfies the Boltzmann-type equation (3.45), where the operator f(∂_t − G_v) is obtained with the theory of Bochner subordination. This operator in the governing equation represents the "coupling" between the time evolution (delayed by the inverse subordinator) and the evolution on B (characterized by the non-exponential waiting times), induced by the non-Markovian perturbations. Now we show that the function ϕ_{v,v′}(t) satisfies an integro-differential equation of fractional type. We remark that applying T_v(−t) to the random operator T(t) makes, under P_v, the evolution on B constant before the first perturbation induced by the semi-Markov process. Therefore we will see that the equation contains only the scattering component and not the operator G_v.
In order to write the equation, let us first recall the canonical form of the fractional derivative (of Caputo type),

D_t^α φ(t) := ∫_0^t φ′(s) (t − s)^{−α} / Γ(1 − α) ds,

which can be generalized by replacing the kernel s^{−α}/Γ(1 − α) with the tail of the Lévy measure, ν̄(s) := ν((s, ∞)), as

D_t^f φ(t) := ∫_0^t φ′(s) ν̄(t − s) ds,

for suitable functions φ. Here is the rigorous statement on the equation.
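In the classical case ν̄(s) = s^{−α}/Γ(1 − α) the derivative above is the usual Caputo derivative, which can be evaluated numerically; the sketch below is ours, removes the weak singularity of the kernel by substitution, and can be checked against the identity D^α t = t^{1−α}/Γ(2 − α).

```python
import math

def caputo_derivative(dphi, t, alpha, n=100000):
    """Caputo fractional derivative of order alpha in (0, 1):
        D^alpha phi(t) = int_0^t phi'(s) (t - s)**(-alpha) / Gamma(1 - alpha) ds,
    with `dphi` a callable returning phi'(s).  The substitution
    u = (t - s)**(1 - alpha) removes the weak singularity at s = t,
    so a plain midpoint rule applies."""
    u_max = t ** (1.0 - alpha)
    h = u_max / n
    total = sum(dphi(t - ((i + 0.5) * h) ** (1.0 / (1.0 - alpha)))
                for i in range(n))
    return total * h / ((1.0 - alpha) * math.gamma(1.0 - alpha))
```

Replacing the power kernel by a general tail ν̄ gives the generalized derivative above; the quadrature idea carries over whenever ν̄ is locally integrable.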
for any v, v′ ∈ S and t ≥ 0.
Proof. Take the representation in the Laplace space given in (3.50); note that (3.65) is still valid here, since it comes from the renewal equation (3.46). In the same spirit as Theorem 3.10, we first prove that the equation (3.66) coincides in Laplace space (t → λ) with (3.65). For the right-hand side we already did this in the proof of Theorem 3.10. For the left-hand side, first note that, since t ↦ ν̄(t) is in L¹([0,T]; R) for any T > 0, the corresponding Laplace convolution is in L¹([0,T]; B) (see the discussion at the end of [1, page 22]). Then, by an application of the convolution theorem for the Laplace transform of Bochner integrals (e.g., [1, Proposition 1.6.4]), the equality follows. We already proved in Theorem 3.10 that the rhs of (3.66) is differentiable. The lhs is instead continuous since, using that ν̄(s) ∈ L¹([0,T], R) for any T > 0, we can apply [1, Proposition 1.3.2] (and the discussion below it). Hence both sides of (3.66) can be differentiated to get the claim, for any t > 0.

Abstract wave equation with semi-Markov damping
We find here a generalized wave equation with damping which governs the expected value of a particular class of random evolutions.The results in this section provide the semi-Markov counterpart of the theory valid in the Markov case, given in [19, Section 4].
(4) The generators G_{v_1}, G_{v_2} are scalar multiples of the same operator, with the form G_{v_1} = G and G_{v_2} = −G, where G generates the group T(t), t ∈ R.

Assumption (3) means that G_{v_1} generates T_{v_1}(t) = T(t), t ≥ 0, the forward evolution for T, while G_{v_2} = −G generates T_{v_2}(t) = T(−t), t ≥ 0, the backward evolution for T. As a remarkable example, the above hypotheses include the translation group on C_0(R) or L¹(R), such that T(t)u(x) = u(x + t) and G = d/dx (in a suitable sense).
Without losing the generality of the above hypotheses, we will assume S = {1, −1}. Hence, the underlying semi-Markov process can be written in the "telegraph" form

V(t) = V_0 (−1)^{N(t)},

where V_0 is a random variable with values in S and N(t) is the number of renewals up to time t.
As before, use T(t) for the random evolution operator and define q_v(t) as in (2.6). If V_0 is a r.v. with P(V_0 = 1) = P(V_0 = −1) = 1/2, then

q(t) := E E_{V_0} (T(t)u(V(t))) = q_1(t) P(V_0 = 1) + q_{−1}(t) P(V_0 = −1).    (4.2)

In [19] the authors studied the special case of Markov random evolutions and found a second-order abstract equation. Using our notation, we re-formulate their result in the following theorem (for the proof we refer to the original paper).
Equation (4.3) is also called the abstract wave equation with damping. The reason for this name is that, in the case of translation groups on C_0(R), where G² = d²/dx², the abstract equation (4.3) is formally equivalent to the classical damped wave equation (5.70), as we will see in the next section (the equivalence is only formal because (4.3) is an abstract equation while (5.70) is a classical equation).
We observe that (4.3) has an interesting connection to the abstract wave equation

w″(t) = G² w(t),    (4.4)

where u ∈ Dom(G²). It is immediate to verify that (4.4) is solved by the "free" evolution

w(t) = (T(t)u + T(−t)u)/2.    (4.5)

It is proved by Griego and Hersh [19] that the solution to (4.3) is related to w(t) by the equality

q(t) = E w(∫_0^t (−1)^{N(τ)} dτ),

where N(τ) is the underlying Poisson process pacing the jumps of the velocity. Heuristically, this means that the solution to (4.3) is obtained by perturbing the free evolution (4.5) at random times. This causes the origin of the damping term in eq. (4.3), making eq. (4.3) different from eq. (4.4). This is proved in [19, Thm. 4].
In the following Theorem we show that, also in our general semi-Markov setting, q(t) in (4.2) is again the average of the free evolution (4.5) with the time t suitably randomized.

Theorem 4.2. Let q(t) and w(t) be defined in (4.2) and (4.5). The following representation holds:

q(t) = E w(∫_0^t (−1)^{N(τ)} dτ),

where N(τ) is the underlying renewal counting process.
Proof. Consider the occupation measure of V(t) in the state v, and observe that the difference of the occupation times can be expressed through it. Taking into account that T_1(t) and T_{−1}(t) commute, and using the semigroup property, the random evolution can be rewritten as in (4.9), so that the waiting times J_i are no longer relevant. Using the assumption that T_{±1}(t) = T(±t), where T(t) is a group, we can write (4.9) as (4.10). Let γ_t = ∫_0^t (−1)^{N(τ)} dτ. The expected value of (4.10) can then be written as E w(γ_t), and the proof is complete.
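Theorem 4.2 suggests a direct Monte Carlo scheme: simulate the directed (randomized) time γ_t and average the free evolution evaluated there. The helper below is our sketch; `waiting_time` stands for any sampler of the renewal waiting times (its name and signature are ours).

```python
import random

def directed_time(t, waiting_time, v0=1):
    """gamma_t = int_0^t (-1)**N(tau) dtau for a renewal process N whose
    waiting times are drawn by calling `waiting_time()`; the velocity
    flips sign at every renewal, as in the telegraph form of V(t)."""
    gamma, clock, v = 0.0, 0.0, v0
    while clock < t:
        j = waiting_time()
        dt = min(j, t - clock)   # truncate the last flight at the horizon t
        gamma += v * dt
        clock += dt
        v = -v                   # renewal: reverse the direction
    return gamma
```

For instance, q(t) applied to an initial datum u at a point x can be approximated by averaging w(γ) = (u(x + γ) + u(x − γ))/2 over many independent draws γ = directed_time(t, waiting_time). Note that |γ_t| ≤ t always, consistently with the finite speed of propagation.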
We now derive the semi-Markov version of the damped wave equation (4.3). Let us first note that the d'Alembert-type operator on the lhs of (4.3) is, formally, the composition of the two first-order operators ∂_t − G and ∂_t + G. In order to write down the governing equation of q(t) in our semi-Markov setting, we introduce the compact notation, where the operators on the rhs of (4.13) and (4.14) are defined as in Section 3.1. Then we define the operator (4.15). We observe that when f(λ) = λ^α, the operator in (4.15) has the form of a fractional power. Let us also note that, under the more restrictive assumptions of this section, the sets H_v, v ∈ {−1, 1}, defined in (3.18) coincide; hence we will use the notation H. Proof. Note that (4.17) holds. Since φ ∈ H, it follows that φ(0) = 0, and therefore we have from (4.17), where the exchange in the order of integrals is guaranteed by Fubini's theorem for double Bochner integrals ([1, Thm. 1.1.9]), since φ ∈ D.
We are now in a position to state the following result, which is a semi-Markov counterpart of eq. (4.3). Before stating the theorem we introduce the functions w_± : [0, ∞) → B defined by w_±(t) = T(±t)q(0), under the initial condition q(0) = u. We remark that w_±, together with the free evolution w(t) (see (4.5)), are the unique solutions to the abstract wave equation (4.4) under appropriate initial conditions.
Proof. The equations given in Theorem 3.10 split into two equations under q_1(0) = q_{−1}(0) = u. By adding and subtracting the appropriate terms we obtain (4.24) and (4.25). By Lemma 3.9 we know that both q_1(t) − T(t)q_1(0) and q_{−1}(t) − T(−t)q_{−1}(0) are in H, and therefore they lie in the domains of D_+ and D_−; by linearity, so does any linear combination of them. Further, since u ∈ B_0, it is clear that (T(t) − T(−t))u ∈ H and thus, by Lemma 3.4, it also lies in the domain of D_+. So we can apply D_+ to both sides of (4.24); by analogous considerations, we can apply D_− to both sides of (4.25). It follows that q_1(t) − T(t)q_1(0) and q_{−1}(t) − T(−t)q_{−1}(0) are elements of D ∩ H. Hence we can apply Lemma 4.3 to get (4.26) and (4.27). Using now (4.24) and (4.25) and substituting in (4.26) and (4.27), we obtain (4.33); again by linearity, dividing both sides of (4.33) by 2, we obtain the result.

5. Transport with infinite mean flight times and superdiffusion
We consider here a model for scattering transport in which the waiting times between velocity changes (collisions) have infinite expectation. This model therefore differs substantially from the classical Markov case with exponentially distributed time intervals, in particular as concerns scaling limits. It is a classical result (e.g. [61]) that a transport process with uniform velocity changes paced by i.i.d. exponential r.v.'s converges, after a suitable scaling limit, to a Brownian motion, thereby exhibiting a diffusive behaviour (the mean square displacement grows like Ct, C > 0, as t → ∞) and infinite velocity. However, the exponential waiting time is not crucial for this behaviour: if one takes arbitrary waiting times with finite mean, then in the long run (or after a scaling limit) the convergence is still to a diffusive process (see, for example, [29, Chapter 3]). The infinite-expectation case, however, seems to be always ruled out by the classical assumptions in this literature. In this section we are able to deal with the infinite-expectation case using the scaling-limit theory of CTRWs. In particular, we show that a suitable scaling leads to a superdiffusive transport process whose one-dimensional distribution (when the process starts from the origin) is supported on the ball of radius t. This agrees with intuition: the flight times are (on average) longer (heavy-tailed) than in the exponential (or finite-mean) case, which permits a space scaling as fast as the time scaling.
Here is a more rigorous discussion. Consider the following semi-Markov model for a transported particle in R^d. It is assumed that the particle, originating at x ∈ R^d, moves along the unit vector v_1 with constant velocity 1 until it has a collision after a random waiting time J_1; then the particle moves (again with constant velocity 1) along the unit direction v_2 for a random time J_2, and so on. Let the v_i have uniform distribution on the unit sphere S^{d−1}, where ‖·‖_d stands for the Euclidean norm, independently of the past history, and let the J_i be i.i.d. random variables in R^+. Hence, let V(t) be a semi-Markov chain on S^{d−1}, representing the unit velocity vector of the moving particle, and let X(t) be the corresponding continuous additive functional of V, representing the position of the particle. With N(t) denoting, as usual, the number of scatterings up to time t, we can re-write (5.3) as (5.4). We call (X(t), V(t)) a semi-Markov isotropic transport process. For any x ∈ R^d and v ∈ S^{d−1} we denote by (5.5) the one-dimensional distribution of the process started at x and v, which is compactly supported on the set ‖z − x‖_d ≤ t. Denoting by E_v the integration with respect to the measure (5.5), we define the mean value (5.6), where h ∈ C_0(R^d × S^{d−1}). Clearly, (5.6) is nothing more than the mean value defined in (2.6), with reference to the particular random evolution in which the T_{v_i} are translation groups on C_0(R^d × S^{d−1}) acting on the coordinate x ∈ R^d. In the special case where the waiting times have exponential law with mean 1/θ, V(t) is a continuous-time Markov chain and (X(t), V(t)) is jointly Markov. Such a process has been studied in several papers (e.g. [40; 61]). In this case (5.6) defines a strongly continuous contraction semigroup on C_0(R^d × S^{d−1}) equipped with the sup-norm (consult [61, Section 2]); for all h ∈ C_0^1(R^d × S^{d−1}) the following equation holds, under the condition g(x, v, 0) = h(x, v), where µ(·) is the uniform measure on S^{d−1}. Eq.
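The construction above is straightforward to simulate. The sketch below (exponential flight times are an illustrative choice corresponding to the Markov special case; names and parameters are ours, not the paper's) draws uniform directions on S^{d−1} by normalizing Gaussian vectors and accumulates unit-speed flights as in (5.3)-(5.4).

```python
import numpy as np

def transport_position(t, d, rng, theta=1.0):
    """Position X(t) of the isotropic transport process: unit-speed flights
    along i.i.d. uniform directions on S^{d-1}; flight times are taken
    exponential(theta) here for illustration (the Markov case)."""
    x = np.zeros(d)
    elapsed = 0.0
    while True:
        v = rng.standard_normal(d)
        v /= np.linalg.norm(v)           # uniform direction on the sphere
        j = rng.exponential(1.0 / theta) # flight time J_i
        if elapsed + j >= t:             # flight still in progress at time t
            return x + (t - elapsed) * v
        x += j * v
        elapsed += j

rng = np.random.default_rng(1)
pos = transport_position(10.0, 3, rng)
# unit speed implies ||X(t)|| <= t, consistent with the compact support of (5.5)
```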
(5.8), also known as the linear Boltzmann equation, is the backward equation for the Markov process (X(t), V(t)), and the operator acting on its right-hand side is the infinitesimal generator.
In the framework of statistical physics, one of the things that makes this Markov process important is the fact that it is an approximation of a diffusion process; indeed X(t) converges to a Brownian motion by re-scaling the space variable as x → cx and the time variable as t → tc 2 and letting c → ∞.For a discussion on this point, consult, for example, [61, sections 3 and 4].
We stress again that, alongside this classical result, there is another important fact, which is perhaps not so well known: a large class of semi-Markov transport processes shares the same asymptotic property, leading to a limit diffusion. This means that, in the scaling limit of small and rapid jumps, the exponential distribution of the waiting times (and the consequent Markovianity of the process) is not a crucial condition for convergence to Brownian motion. Rather, the only thing that matters is that the waiting times have finite mean and variance. We refer to Remark 5.4 below for a sketch of the proof, and to [29, Section 4.3] for a more general discussion. The reader can also consult [46] for a general theory concerning limits of Markov transport processes. Moreover, for an example of a transport process whose flight times have finite mean but are not exponential, consult [41], where the authors assume Dirichlet-distributed flight times.
Thus, transport processes with infinite mean waiting times fall outside the above considerations. Yet semi-Markov processes with this property have proven to be a fundamental tool in statistical physics, especially in models of anomalous diffusion (see, for example, [36; 39] and references therein). In Section 5.2, we focus on this aspect and find new results in this direction using CTRW limit theory.

5.1. Transport processes with heavy-tailed flight times. We consider here a particular type of semi-Markov process V(t) on S^{d−1}, whose waiting times exhibit power-law decaying densities, with infinite mean and variance.
In order to construct such a process, we refer to the time-change assumption introduced in Section 2.2. Hence, assume that σ^α(t) is a stable subordinator with index α ∈ (0, 1), corresponding to the Bernstein function f(λ) = λ^α, and let L^α(t) denote its inverse; if W(t) is a continuous-time Markov chain on S^{d−1}, whose waiting times are exponentially distributed with mean 1/θ, we construct the time-changed process (5.9). By applying (2.17), the waiting times J_n of V^α(t) follow the heavy-tailed distribution (5.10), expressed through the so-called Mittag-Leffler function. It is not hard to see that waiting times with this distribution have infinite expectation: indeed, one has by [38, Theorem 2.1] that (5.11) holds, from which it easily follows that EJ_n = ∞. Note that the Markov case is formally re-obtained by putting α = 1, whence we obtain the exponential waiting times P(J_n > t) = e^{−θt}.
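Waiting times with survival function E_α(−θt^α) can be sampled through the standard mixture representation J = θ^{−1/α} E^{1/α} S, where E is a unit exponential and S a standard positive α-stable variable, here generated by Kanter's method. This is a well-known sampler, sketched below under our own naming; it is not taken from the paper.

```python
import numpy as np

def positive_stable(alpha, rng):
    """Kanter's method: S with Laplace transform E[exp(-l*S)] = exp(-l**alpha),
    for alpha in (0, 1)."""
    u = rng.uniform(0.0, 1.0)
    w = rng.exponential(1.0)
    return (np.sin(alpha * np.pi * u) / np.sin(np.pi * u) ** (1.0 / alpha)
            * (np.sin((1.0 - alpha) * np.pi * u) / w) ** ((1.0 - alpha) / alpha))

def mittag_leffler_wait(alpha, theta, rng):
    """Waiting time with survival function E_alpha(-theta * t**alpha):
    J = theta**(-1/alpha) * E**(1/alpha) * S, with E ~ Exp(1) independent of S."""
    e = rng.exponential(1.0)
    return theta ** (-1.0 / alpha) * e ** (1.0 / alpha) * positive_stable(alpha, rng)

rng = np.random.default_rng(2)
waits = np.array([mittag_leffler_wait(0.7, 1.0, rng) for _ in range(2000)])
# heavy tail: E[J] = infinity, so the sample mean diverges as the sample grows
```

One can check the representation via Laplace transforms: E[e^{−λJ}] = θ/(θ + λ^α), which is exactly the transform of the Mittag-Leffler waiting-time density.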
Denote now (5.12). Note that in the one-dimensional case this process has already been considered in [12], where the authors studied several of its distributional properties (e.g., the distribution of the first passage time).
We now prove that in this framework the transport process (X^α(t), V^α(t)) is governed by an equation formally similar to the Boltzmann equation (5.8), except that the material derivative ∂_t − v·∇_x on the left-hand side is replaced by its fractional power (∂_t − v·∇_x)^α. This operator has been considered in several papers as a pseudo-differential operator with Laplace-Fourier (x, t) → (ξ, λ) symbol (λ + iv·ξ)^α, associated with the so-called Lévy flights or Lévy walks (e.g. [31; 55]). The reader can consult, for example, [5; 62] for an introduction to the theory of Lévy walks and some interesting applications.
We provide here the mathematical background of this idea by showing how these processes are related to semi-Markov random evolutions and how this fractional material derivative can be adjusted to be included in our general framework.In other words, we reformulate part of the theory of Section 3 in terms of pseudo-differential operators.
So, let us consider the Banach space of functions L^1([0, T] × R^d), endowed with its natural norm. Moreover, let H be the subset of L^1([0, T] × R^d) whose elements have value zero at t = 0 and are absolutely continuous functions whose first derivatives (in time and space) are in L^1([0, T] × R^d). Finally, let T_v(t) be the translation operator, such that T_v(s)h(x, t) = h(x + vs, t), and let Γ_s denote the killed time shift, i.e., Γ_s h(x, t) = h(x, t − s)1_{[s≤t]}. Consider the family of operators {U^v_s}_{s≥0} obtained by composing these two operators. By Lemma 3.3, (U^v_s)_{s≥0} defines a strongly continuous contraction semigroup on L^1([0, T] × R^d). Consider then the subordinate semigroup defined by the Bochner integral in (5.14). By Phillips' theorem (see [54, Theorem 12.6]), the subordinate family is again a strongly continuous contraction semigroup on L^1([0, T] × R^d). It turns out that we can take the last line in (5.14) as a Lebesgue integral, and use it to define the operator (∂_t − v·∇_x)^α. We are now in a position to prove the following result, which extends the linear Boltzmann equation (5.8) to this kind of semi-Markov process.
(5.15) Then q(x, v, t) solves the following equation, where µ denotes the uniform measure on S^{d−1} and the operator (∂_t − v·∇_x)^α is defined as a Lebesgue integral, in the sense that the lhs exists for all v ∈ S^{d−1} and almost all (x, t). Further, it satisfies the equation (5.17) in the Fourier-Laplace space. Proof. From the construction of the process and the law of total probability, by conditioning on the first jump time (see [29, Theorem 3.1]) and using the Markov property of semi-Markov processes at jump times, the following renewal equation holds: q(x + vs, v′, t − s)µ(dv′). (5.18) It follows from the assumptions on h that, for any T > 0, a suitable bound holds, and thus the Fourier-Laplace transform (x → ξ, t → λ) of q(x, v, t) exists. Hence we apply the Fourier transform in x to both sides of (5.18); this yields a convolution operator in t on the right-hand side. Then, by applying the Laplace transform in t, we get an expression which can be re-arranged to show that (5.17) is verified. Moreover, by the assumptions on h, we can apply the results of the previous section. Remark 5.2. We observe that the distribution of X^α(t) has support on ‖z − x‖_d ≤ t and has a discrete component on the sphere ‖z − x‖_d = t, the latter being the probability that V^α(t) has no jumps up to time t, i.e. E_α(−θt^α), where E_α(·) is the Mittag-Leffler function. Whenever (5.5) has a Lebesgue density p_t(z, w|x, v) on the open ball ‖z − x‖_d < t (this is true, for example, in the case of the isotropic Markov transport process; consult [56] for the explicit expression), by Proposition 5.1 the following backward equation holds on the set ‖z − x‖_d ≤ t: (p_t(z, w|x, v′) − p_t(z, w|x, v))µ(dv′), (5.21) where δ(x) denotes the Dirac delta. Hence, on the open set ‖z − x‖_d < t, the density solves the corresponding forward equation. Remark 5.3. For another model of random flight processes whose governing equations exhibit fractional operators, consult [16].

5.2. Convergence to a superdiffusive transport process. We prove in this section that, under a suitable scaling, the process X^α(t) defined in the previous section converges in distribution to a superdiffusive process. It turns out that this limiting process can be represented as a transport process with continuous paths.
In order to study a scaling limit of X^α(t), we recall here some basic notions from the theory of CTRWs (see, for example, [8; 37]). In this section we adopt the notation used for X^α(t), with the interpretation given in the CTRW setting.
Hence, consider i.i.d. random vectors (J_i, Y_i), where Y_i ∈ R^d represents a particle jump and J_i ∈ R^+ is the waiting time preceding that jump. Let Y(n) = Y_1 + ··· + Y_n represent the location of the particle after n jumps, and let τ_n = J_1 + ··· + J_n denote the time of the n-th jump. Moreover, let N(t) = max{n ≥ 0 : τ_n ≤ t} be the renewal counting process representing the number of jumps up to time t. A CTRW is defined as X(t) = Y(N(t)). The wording "coupled CTRW" means that J_i and Y_i are stochastically dependent.
By formula (5.4) we see that, apart from the initial position x, the process X(t) can be written as a particular CTRW with waiting times J_i and jumps Y_i = J_i v_i (this is a Lévy walk in the sense of [31]), plus a corrective term ℓ(t). Our goal is to understand whether the scaled process (5.23) converges, as c → ∞, to some stochastic process, for some H > 0.
Remark 5.4. As mentioned in the previous section, if the J_i have finite expectation and variance, it is well known that a Brownian motion is obtained in the limit when H = 1/2. The diffusive behaviour is explained by a few facts, which we recall here heuristically, without claiming to be exhaustive. First note that, when the r.v.'s J_i have finite mean 1/θ, the renewal theorem gives N(ct) ≈ cθt. Further, c^{−H}ℓ(ct) ≤ c^{−H}J_{N(ct)+1} ≈ c^{−H}J_{cθt}, and the latter quantity tends to zero a.s., for any H > 0. Now, if the J_i also have finite variance, putting H = 1/2 and using central limit arguments, we have that c^{−1/2}Y([ct]) → B(t), where B(t) is a Brownian motion in R^d. Combining these two facts, we see that the time-change simply yields a change of scale in the limit process.
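The heuristic of Remark 5.4 can be probed by a rough Monte Carlo experiment: with exponential flight times, the mean square displacement of the transport process grows essentially linearly in t, so a log-log slope estimate is close to 1. The snippet below is a sketch with illustrative parameters (θ = 1, d = 2, two time points); it is not taken from the paper.

```python
import numpy as np

def msd(t, d, n, rng, theta=1.0):
    """Monte Carlo mean square displacement E||X(t)||^2 of the isotropic
    transport process with exponential(theta) flight times."""
    out = 0.0
    for _ in range(n):
        x, elapsed = np.zeros(d), 0.0
        while True:
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)           # uniform direction on S^{d-1}
            j = rng.exponential(1.0 / theta)
            if elapsed + j >= t:
                x += (t - elapsed) * v       # censored last flight
                break
            x += j * v
            elapsed += j
        out += x @ x
    return out / n

rng = np.random.default_rng(3)
m1, m2 = msd(25.0, 2, 2000, rng), msd(100.0, 2, 2000, rng)
growth = np.log(m2 / m1) / np.log(4.0)   # slope estimate; ~1 means diffusive
```

For this Markov case E||X(t)||^2 is asymptotically proportional to t, so the estimated exponent should land near 1.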
In our case the r.v.'s J_i are i.i.d. with infinite expectation. We show here that this makes it possible to obtain a limit process with the scaling x → c^{−1/α}x, t → c^{1/α}t, α ∈ (0, 1), i.e., putting H = 1 in (5.23) and renaming c as c^{1/α} for convenience.
When studying this kind of limit of X^α(t), there are two problems to address. One is that the Mittag-Leffler distributed waiting times J_i have infinite expectation and variance, so there is no hope of applying the arguments of Remark 5.4. The second is that the quantity c^{−1/α}ℓ(c^{1/α}t) does not converge to zero and gives a contribution to the limit. In order to distinguish the two components we rewrite (5.23) as (5.24), where Y^c_α is defined in (5.25) and ℓ^c_α(t) := c^{−1/α}ℓ(c^{1/α}t). (5.26) It is instructive to deal with the two components separately; therefore we first study Y^c_α and ℓ^c_α, and then we prove the convergence of the sum X^c_α. We begin with Y^c_α, a particular CTRW whose waiting times belong to the domain of attraction of a stable law. In what follows, we refer to [8; 37] (and references therein) for the corresponding theory of scaling limits for this kind of coupled CTRW.
Since the J_i have a Mittag-Leffler distribution, the jumps J_i v_i belong to the domain of attraction of a rotationally invariant α-stable law; hence the scaled jump sums converge, where A(t) is a rotationally invariant stable process and fdd→ denotes convergence of all finite-dimensional distributions. Moreover, the time τ_n of the n-th scattering and the renewal counting process N(t) converge, after scaling, to an α-stable subordinator σ^α(t) and its inverse L^α(t), respectively. The reader can consult [36, Section 6.4] for the above results. Heuristically, combining the above results, we should have the displayed limit for the scaled process. Actually this is not exactly true: it will turn out that the process A(L^α(t)) is the limit of the overshooting CTRW, i.e., of the process below, while our process converges to A(L^α(t)−). The role of the following theorem is to make the above heuristic idea rigorous.
Remark 5.5. In particular, the theorem below will clarify the stochastic dependence between L^α(t) and A(t), which can be described heuristically as follows. The process L^α(t) is the first passage time through level t of the stable subordinator σ^α(t), while A(t) is the sum of displacements whose directions are uniformly chosen on S^{d−1} and whose lengths equal the jumps of the stable subordinator σ^α(t). It follows that ‖A(L^α(t)−)‖_d ≤ σ^α(L^α(t)−) < t, a.s., since A(L^α(t)−) is the position of the process A(t) when the subordinator is at the starting point of the jump over t. It is clear from the construction of A(t) that, on any finite interval of time, it performs countably many displacements of length less than δ and finitely many of length greater than δ, for any δ > 0.
Also, some physical properties of the limit process are interesting. It turns out that A(L^α(t)−) is superdiffusive (we recall that a process is said to be superdiffusive if its mean square displacement grows as t^γ, with γ > 1; in our case we find γ = 2). Theorem 5.6. Let Y^c_α(t) be the process defined in (5.25). Suppose (without loss of generality) that the waiting time distribution (5.10) has θ = 1. Let e_s be a Poisson point process on R^d × (0, ∞) with the given intensity. It follows from (5.34) that Ee^{−λσ^α(t)} = e^{−tλ^α} (5.35), where B is the constant, depending on α and d, given by B := cos(πα/2) times the displayed factor, and 1 is any unit vector. In (5.36) we used [36, Example 6.24]. Therefore, marginally, A(t) is a rotationally invariant α-stable process while σ(t) is an α-stable subordinator. (2) Since (J_i, v_i), i = 1, …, n, are i.i.d., we can write the Fourier-Laplace transform of n^{−1/α}(Y(n), τ_n) in the displayed way; by using conditional expectation, (5.39) can be re-written as (5.40). Remark 5.7. Let h(x, t) be a density of M(t). Then, following the lines of [8, page 748], Fourier-Laplace inversion of (5.31) gives, at least formally, the equation (5.41). To the best of our knowledge, (5.41) has never appeared before. However, its one-dimensional version (5.42) had already appeared in [34] in the study of one-dimensional continuous time random walks; our theory implies that (5.42) governs the scaling limit of a one-dimensional transport process. Eq. (5.42) was also studied in [31], where the authors find explicit solutions.
We now deal with the component ℓ^c_α. Write ℓ(t) as in (5.43), where the process γ(t) is the sojourn time in the current position of V^α(t). Since the r.v.'s v_n are independent, v_{N(t)+1} is a uniform r.v. on S^{d−1}, independent of γ(t); it follows that γ(t)v_{N(t)+1} is a vector with norm γ(t) and independent uniform orientation. In our process (5.23) it represents the last displacement of X^α(t) (after the last scattering). Therefore, in the limit, it converges to the last displacement of the process A(t), with length t − σ^α(L^α(t)−) and uniform orientation. Since the orientation (velocity) is independent of γ, and its distribution is uniform independently of c and t, when studying the convergence in distribution of ℓ^c_α it is sufficient to study the norm γ^c_α(t) := c^{−1/α}γ(c^{1/α}t) separately. In the next result we show that c^{−1/α}γ(c^{1/α}t), i.e., the length of the last displacement of the process X^c_α, converges in distribution to the process γ_σ(t) := t − σ^α(L^α(t)−), i.e., the time elapsed since the last renewal of the process A(L^α(t)−) (see [37, Section 2.1] for a thorough discussion of renewal times of semi-Markov processes). Proposition 5.8. γ^c_α(t) converges to γ_σ(t) in distribution as c → ∞, and therefore ℓ^c_α(t) converges in distribution to γ_σ(t)U, where U is an independent uniform random variable on S^{d−1}.
Proof. We prove the result under the assumption θ = 1, without loss of generality; the proof can be carried out by computing the limit directly. The distribution of γ(t) is given by a renewal-type formula (e.g., [11, page 61]), where h(w) is the renewal density, which can be computed explicitly by resorting to (2.23) and Theorem 5.6. By combining this information we know indeed that (5.47) holds, where σ^α(t) is a stable subordinator. It is well known that a stable subordinator satisfies the displayed identity. It follows that, for z > 0, (5.49) holds. Therefore we get from (5.49), using (5.48) and after a change of variables, (5.50). Now, in order to compute the limit as c → ∞, we use (5.11). Indeed, by repeatedly applying dominated convergence to the integral on the rhs of (5.50) (on the set [z, t)), we find (5.51). By [9, Lemma 1.10], the distribution of σ^α(L^α(t)−) is as displayed. So far, we have dealt with the convergence in distribution of the two components Y^c_α(t) → A(L^α(t)−) and ℓ^c_α(t) → γ_σ(t)U separately. Now we show that their sum converges to the sum of the limits. Theorem 5.9. The process X^c_α(t) converges to X_∞(t) := A(L^α(t)−) + γ_σ(t)U, in the sense of one-dimensional distributions. Further, the process X_∞(t) is superdiffusive of order 2, i.e., E‖X_∞(t)‖^2 grows like t^2. Proof. The proof can be conducted by resorting again to [8, Theorem 3.4], as follows. The process (Y(N(t)), −τ_{N(t)}) has the form of a coupled continuous time random walk in R^d × (−∞, 0), plus a drift. Take now the vector process Z_n := (Y(n), −τ_n). In order to apply [8, Theorem 3.4] we compute the Laplace-Fourier transform of n^{−1/α}(Z_n, τ_n) by performing the same computation as in (5.40). We get, indeed, for λ ≥ 0, the displayed expression. It follows that n^{−1/α}(Z_n, τ_n) → (A(1), −σ^α(1), σ^α(1)) and, equivalently, (5.57). By the continuous mapping theorem, the scaled transport process X^c_α(t) = c^{−1/α}Y(N(c^{1/α}t)) + c^{−1/α}γ(c^{1/α}t)U is such that the convergence holds for each t ≥ 0.
The asymptotic behaviour of X_∞(t) can be obtained as follows: we compute E‖X_∞(t)‖^2 as in (5.58), since U is uniform, independent of γ_σ and M(t), and has zero expectation (with all its marginals).
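The superdiffusive behaviour of Theorem 5.9 can also be probed numerically: repeating the earlier Monte Carlo experiment with Mittag-Leffler flight times (sampled again via Kanter's method, so the snippet is self-contained) yields a mean-square-displacement exponent close to 2. Parameters and names are illustrative, not the paper's.

```python
import numpy as np

def ml_wait(alpha, rng):
    """Mittag-Leffler waiting time (theta = 1): J = E**(1/alpha) * S,
    with S positive alpha-stable sampled by Kanter's method."""
    u, w, e = rng.uniform(), rng.exponential(), rng.exponential()
    s = (np.sin(alpha * np.pi * u) / np.sin(np.pi * u) ** (1 / alpha)
         * (np.sin((1 - alpha) * np.pi * u) / w) ** ((1 - alpha) / alpha))
    return e ** (1 / alpha) * s

def msd_alpha(t, d, n, alpha, rng):
    """Monte Carlo MSD of the transport process with heavy-tailed flights."""
    out = 0.0
    for _ in range(n):
        x, elapsed = np.zeros(d), 0.0
        while True:
            v = rng.standard_normal(d)
            v /= np.linalg.norm(v)       # uniform direction on S^{d-1}
            j = ml_wait(alpha, rng)
            if elapsed + j >= t:
                x += (t - elapsed) * v   # censored last flight
                break
            x += j * v
            elapsed += j
        out += x @ x
    return out / n

rng = np.random.default_rng(4)
m1 = msd_alpha(25.0, 2, 1000, 0.6, rng)
m2 = msd_alpha(100.0, 2, 1000, 0.6, rng)
growth = np.log(m2 / m1) / np.log(4.0)   # slope estimate; ~2 means superdiffusive
```

Since the speed is 1, ‖X^α(t)‖ ≤ t for every path, so the estimated exponent is bounded by 2 up to Monte Carlo noise; heavy-tailed flights push it toward that ballistic bound.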
It is clear from (5.51) that γ_σ(t) has the stated distribution. Remark 5.10. It is interesting to note that the proof of the previous theorem could be carried out explicitly, by computing directly the limit of the distribution of (Y^c_α(t), γ^c_α(t)). However, since some of the computations are cumbersome, we outline here only the main steps, in order to specify the explicit distributions. We have that P(Y(t) ∈ dx, γ(t) ∈ ds) = δ_t(ds)µ_t(dx)E_α(−t^α) + …, where, in the last step, we used the self-similarity of (A(t), σ^α(t)) to say that u(c^{1/α}x, c^{1/α}s) = c^{1−2/α}u(x, s). The distribution in (5.65) coincides with the distribution of (M(t), γ_σ(t)) obtained in [37, Remark 4.2]. To check this, observe the following.
(5.66) Remark 5.11. It is interesting to note that the theory of CTRW limits [37] gives the convergence in distribution of the overshooting CTRW to A(L^α(t)); further, both of these processes have discontinuous paths. The process X_∞(t), instead, has continuous paths: the particle is moving, for any t, from the point A(L(t)−) to the point A(L(t)), where a new displacement will be performed.
5.3. On the telegraph process: a hyperbolic-type equation. In the case d = 1, the isotropic transport process (X(t), V(t)) takes values in R × {+1, −1}. In compact form, it can be defined by X(t) = x + V_0 ∫_0^t (−1)^{N(s)} ds, where V_0 takes values in {1, −1} and N(t) denotes the number of renewals up to time t. The Markov case, i.e., the case where N(t) is a Poisson process, is usually called the telegraph process and has been widely studied in the literature (consult, e.g., [18; 23; 26] and [49, Chapter 1]); it is useful to specify that such a process can be constructed in two ways, which are equivalent in terms of governing equations: i) at random times governed by a Poisson process with intensity 2θ, the particle either continues to move in the same direction or reverses direction, each with probability 1/2;
ii) at random times governed by a Poisson process with intensity θ, the particle reverses direction with probability 1.
These constructions are equivalent in the sense that the semigroup of (X(t), V(t)) exhibits, in both cases, the same infinitesimal generator. Moreover, concerning the Markov case, it is well known (see e.g. [23]) that the density of the continuous component of X(t), say p_t(z|x) = ∂_z P(X(t) < z | X(0) = x) for |z − x| < t, is the fundamental solution of the damped wave equation (5.71). We derive here, heuristically, a generalization of (5.71) holding for the telegraph process with Mittag-Leffler waiting times (i.e., the one-dimensional version of (X^α(t), V^α(t)) of the previous section), where the related renewal counting process N(t) is the so-called fractional Poisson process (consult, e.g., [7; 35]). The forthcoming derivation is the heuristic version of the general case of Section 4. Indeed, consider equation (5.21), which is the fractional version of (5.71) and formally reduces to (5.71) when α = 1.
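A path of the telegraph process with Mittag-Leffler waiting times can be simulated directly from the compact representation above. The sketch below is ours (names and parameters are illustrative; the Mittag-Leffler times are sampled with θ = 1 via Kanter's method, as earlier): it alternates the velocity sign at each renewal of the fractional Poisson process.

```python
import numpy as np

def ml_wait(alpha, rng):
    """Mittag-Leffler waiting time (theta = 1): J = E**(1/alpha) * S,
    with S positive alpha-stable sampled by Kanter's method."""
    u, w, e = rng.uniform(), rng.exponential(), rng.exponential()
    s = (np.sin(alpha * np.pi * u) / np.sin(np.pi * u) ** (1 / alpha)
         * (np.sin((1 - alpha) * np.pi * u) / w) ** ((1 - alpha) / alpha))
    return e ** (1 / alpha) * s

def telegraph_ml(t, alpha, rng, x0=0.0):
    """X(t) = x0 + V0 * int_0^t (-1)^{N(s)} ds, with N the renewal process of
    Mittag-Leffler waiting times (fractional Poisson) and V0 symmetric."""
    v = rng.choice([-1.0, 1.0])       # initial velocity V0
    x, elapsed = x0, 0.0
    while True:
        j = ml_wait(alpha, rng)
        if elapsed + j >= t:          # sojourn still running at time t
            return x + v * (t - elapsed)
        x += v * j
        elapsed += j
        v = -v                        # direction reversal at each renewal

rng = np.random.default_rng(5)
xs = np.array([telegraph_ml(4.0, 0.7, rng) for _ in range(500)])
# finite speed: |X(t) - x0| <= t for every path
```

A histogram of such samples approximates the continuous component whose density satisfies the fractional generalization of (5.71) discussed above.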
Remark 5.12. In the paper [7] the authors studied another random flight driven by a fractional Poisson process, hence having Mittag-Leffler waiting times. However, such a process is obtained by time-changing the position process, and thus it differs strongly, e.g., pathwise, from our process X^α(t). The reader should compare the results of this section with [15], where the author derives the governing equation for the probability density function of a classical Lévy walk. It turns out that this equation involves a classical wave operator together with memory integrals (induced by the spatiotemporal coupling), and therefore it can be viewed as an alternative to (5.74).

Assumption (3.30) is satisfied in many situations of interest, such as the cases of translation and rotation groups, as shown in the following examples.

For checking the Laplace transform of the rhs one just needs to apply Fubini's theorem for Bochner integrals (e.g. [1, Theorem 1.1.9]) together with [1, Corollary 1.6.5]. For the lhs, the existence of the Laplace transform can be ascertained by standard arguments (e.g., [54, Theorem 13.6]) using the definition of A^{U_v} together with the estimates in (3.31), (3.41) and (3.43). Then we compute

Lemma 4.3. The operators D_− and D_+ commute on H ∩ D.