Asymptotic Properties of Extremal Markov Processes Driven by Kendall Convolution

This paper is devoted to the analysis of the finite-dimensional distributions and asymptotic behavior of extremal Markov processes connected with the Kendall convolution. In particular, we provide general formulas for the finite-dimensional distributions of the random walk driven by the Kendall convolution for a large class of step-size distributions. Moreover, we prove limit theorems for random walks and associated continuous-time stochastic processes.


Introduction
In recent years, new approaches in Extreme Value Theory have been gaining popularity. Besides the classical convolutions corresponding to the sum or maximum of independent random elements, authors consider a much wider class of binary operations called generalized convolutions. This paper presents novel asymptotic properties of a class of extremal Markov sequences introduced in [9], where the authors consider additive processes and Lévy processes under generalized convolutions. One example of such an operation is the Kendall convolution, which is the basis of the theory studied here. It is a binary operation that, applied to two identical point-mass measures, yields a Pareto distribution, i.e., a heavy tailed distribution. This allows the construction of new processes based on a generalized convolution, which is the first reason why the objects considered here are worth studying. The second is the intuition that the results presented here can be extended to a much wider class of convolutions related to the Kendall convolution. It is worth emphasizing that many phenomena cannot be modeled using the classical convolution. We believe that the challenges of our times demand new mathematical tools based on techniques connected to heavy tailed distributions, as presented in this paper.
The Kendall convolution, being the main building block in the construction of the extremal Markov process called the Kendall random walk, {X_n : n ∈ N_0}, is an important example of a generalization of the classical convolutions corresponding to the sum and the maximum of independent random variables. Originated by Urbanik [33-37] (see also [25]), generalized convolutions have regained popularity in recent years (see, e.g., [3,9,22,31,32] and references therein). In this paper, we focus on the Kendall convolution (see, e.g., [18,21,31]) which, in view of its relations to heavy tailed distributions, and in particular to slash distributions [3], the Williamson transform [20], Archimedean copulas, renewal theory [22], non-commutative probability [17] and Delphic semigroups [14,15,24], presents a high potential for applications. We refer to [9] for the definition and a detailed description of the basics of the theory of generalized convolutions, for a survey of the most important classes of generalized convolutions, and for a discussion of Lévy processes and stochastic integrals based on these convolutions, as well as to [16] for the definition and basic properties of Kendall random walks.
Our main goal is to study finite-dimensional distributions and asymptotic properties of extremal Markov processes connected to the Kendall convolution. In particular, we present examples of finite-dimensional distributions parametrized by the unit step characteristics, which create a new class of heavy tailed distributions. Innovative thinking about possible applications comes from the fact that the generalized convolution of two point-mass probability measures can be a non-degenerate probability measure. In particular, the Kendall convolution of two probability measures concentrated at 1 is a Pareto distribution, so it generates heavy tailed distributions. In this context, the theory of regularly varying functions (see, e.g., [6,7]) plays a crucial role. In this paper, regular variation techniques are used to investigate the asymptotic behavior of Kendall random walks and the convergence of the finite-dimensional distributions of continuous-time stochastic processes constructed from Kendall random walks.
Most of the proofs presented in this paper are also based on the application of the Williamson transform, which is a generalized characteristic function of a probability measure in the Kendall algebra (see, e.g., [9,38]). The great advantage of the Williamson transform is that it is easy to invert. It is also worth mentioning that this kind of transform is a generator of Archimedean copulas [29,30] and is used to compute radial measures for reciprocal Archimedean copulas [13]. Asymptotic properties of the Williamson transform in the context of Archimedean copulas and extreme value theory were given in [26]. We believe that the results on the Williamson transform presented in this paper might be applicable in copula theory, which can be an interesting topic for future research.
We start, in Sect. 2, with the definition and basic properties of the Kendall convolution. Next, the definition and construction of the random walk under the Kendall convolution is presented, which leads to a stochastic representation of the process {X_n}. Basic properties of the transition probability kernel are also proved. We refer to [16,19,20,22] for discussions and proofs of further properties of the process {X_n}. The structure of the processes considered here (see Definition 2) is similar to the first-order autoregressive maximal Pareto processes [4,5,28,39], max-AR(1) sequences [1], minification processes [27,28], max-autoregressive moving average processes MARMA [11], pARMAX and pRARMAX processes [12], and also the time series models studied in [2]. Since these random walks form a class of extremal Markov chains, we believe that studying them will yield an important contribution to extreme value theory.
Section 3 is devoted to the analysis of the finite-dimensional distributions of the process {X_n}. We derive general formulas and present some important examples of processes with different types of step distributions, which lead to new classes of heavy tailed distributions.
In Sect. 4, we study different limiting behaviors of Kendall convolutions and connected processes. We present asymptotic properties of random walks under the Kendall convolution using regular variation techniques. In particular, we show the asymptotic equivalence between the Kendall convolution and the maximum convolution in the case of a regularly varying step distribution ν. The result has a link to classical extreme value theory (see, e.g., [10]) and suggests possible applications of the Kendall random walk in modeling phenomena where independence between events cannot be assumed. Moreover, limit theorems for Kendall random walks are given in the case of a finite α-moment as well as in the case of a regularly varying tail of the unit step distribution. Finally, we define a continuous-time stochastic process based on the random walk under the Kendall convolution and prove convergence of its finite-dimensional distributions using regular variation techniques.

Notation.
Throughout this paper, the distribution of a random element X is denoted by L(X). By P_+, we denote the family of all probability measures on the Borel σ-algebra B(R_+) with R_+ := [0, ∞). For a probability measure λ ∈ P_+ and a ∈ R_+, the rescaling operator is given by T_a λ = L(aX) if λ = L(X). By b_+, we denote the positive part of any b ∈ R. The set of all natural numbers, including zero, is denoted by N_0. Additionally, we use the notation π_{2α}, α > 0, for the Pareto measure with probability density function (pdf) π_{2α}(dx) = 2α x^{−2α−1} 1_{[1,∞)}(x) dx. Moreover, by m_ν^{(α)} := ∫_0^∞ x^α ν(dx) we denote the αth moment of the measure ν. The truncated α-moment of the measure ν with cumulative distribution function (cdf) F is given by H_α(t) := ∫_0^t x^α ν(dx). By ν_1 △_α ν_2, we denote the Kendall convolution of the measures ν_1, ν_2 ∈ P_+. For all n ∈ N, the Kendall convolution of n identical measures ν is denoted by ν^{△_α n} := ν △_α ⋯ △_α ν (n times). By F_n, we denote the cumulative distribution function of the measure ν^{△_α n}, whereas for the tail distribution of ν^{△_α n} we use the standard notation F̄_n. By →^d, we denote convergence in distribution, whereas →^{fdd} denotes convergence of finite-dimensional distributions. Finally, we say that a measurable and positive function f(·) is regularly varying at infinity with index β if for all x > 0 we have lim_{t→∞} f(tx)/f(t) = x^β (see, e.g., [8]).

Stochastic Representation and Basic Properties of Kendall Random Walk
We start with the definition of the Kendall generalized convolution (see, e.g., Section 2 in [9]).

Definition 1
The binary operation △_α : P_+ × P_+ → P_+ defined for point-mass measures by

δ_x △_α δ_y := T_M ((1 − ϱ^α) δ_1 + ϱ^α π_{2α}),

where α > 0, ϱ = m/M, m = min{x, y}, M = max{x, y}, is called the Kendall convolution. The extension of △_α to arbitrary ν_1, ν_2 ∈ P_+ is given, for any B ∈ B(R_+), by

(ν_1 △_α ν_2)(B) = ∫_0^∞ ∫_0^∞ (δ_x △_α δ_y)(B) ν_1(dx) ν_2(dy).

Remark 2.1 Note that the convolution of two point-mass measures is no longer a point-mass measure and reduces to the continuous Pareto distribution π_{2α} in the case x = y = 1, which differs from the classical convolution or the maximum convolution algebra, where the convolution of discrete measures also yields a discrete measure. We refer to Section 3.2 in [3] for a general formula for the Kendall convolution of n point-mass measures and a discussion of its connections with the Pareto distribution.
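To make Definition 1 concrete, here is a minimal sampling sketch (the helper name and the inverse-cdf Pareto sampler are ours, not from the paper): it draws from δ_x △_α δ_y by returning M with probability 1 − ϱ^α and M times a π_{2α} Pareto factor otherwise.

```python
import random

def sample_kendall_point_masses(x, y, alpha, rng=random):
    """Draw one sample from the Kendall convolution of delta_x and delta_y.

    The law is T_M((1 - rho^alpha) delta_1 + rho^alpha pi_{2alpha}),
    with m = min(x, y), M = max(x, y), rho = m / M.
    """
    m, M = min(x, y), max(x, y)
    if M == 0:
        return 0.0  # delta_0 acts as the neutral element
    rho = m / M
    if rng.random() >= rho ** alpha:
        return M  # the atom at M
    # pi_{2alpha} has cdf 1 - s^(-2*alpha) on [1, inf); invert it
    u = 1.0 - rng.random()  # uniform on (0, 1]
    return M * u ** (-1.0 / (2.0 * alpha))
```

For x = y the atom probability vanishes and the output is M times a Pareto(2α) variable, matching Remark 2.1.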
In the Kendall convolution algebra, the main tool used in the analysis of a measure ν ∈ P_+ is the Williamson transform (see, e.g., [38]), which is an analog of the characteristic function for the Kendall convolution and plays a role analogous to that of the classical Laplace or Fourier transform for convolutions defined by the addition of independent random elements. We refer to [9], [31] and [33-37] for the definition and a detailed discussion of the properties of generalized characteristic functions and their connections to generalized convolutions. Throughout this paper, the function G(t) defined as

G(t) := ∫_0^∞ (1 − (x/t)^α)_+ ν(dx)

plays a crucial role in the analysis of Kendall convolutions and related stochastic processes.

Remark 2.2
Note that G(t) = ν̂(1/t), where ν̂ denotes the Williamson transform of ν. Due to Proposition 2.3 and Example 3.4 in [9], the Williamson transform ν̂ is a generalized characteristic function for the Kendall convolution. Thus, the function G_n(t), defined analogously for the measure ν^{△_α n}, has the following important property (see, e.g., Definition 2.2 in [9]):

G_n(t) = G(t)^n.

By using a recurrence construction, we define a stochastic process {X_n : n ∈ N_0} that is called the Kendall random walk (see also [16,20,22]). Further, we show the essential connection of the process {X_n} to the Kendall convolution.
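The identity G_n(t) = G(t)^n can be sanity-checked numerically in the simplest case ν = δ_1, where ν^{△_α 2} = π_{2α} (Remark 2.1). The midpoint-rule quadrature below is a sketch with our own function names; it compares the transform of π_{2α} with G(t)².

```python
def williamson_delta1(t, alpha):
    # G(t) = (1 - t^(-alpha))_+ for the unit point mass delta_1
    return max(0.0, 1.0 - t ** (-alpha)) if t > 0 else 0.0

def williamson_pareto(t, alpha, steps=200_000):
    # transform of pi_{2alpha}: integrate (1 - (x/t)^alpha)_+ against the
    # density 2*alpha*x^(-2*alpha - 1) on [1, t] by the midpoint rule
    if t <= 1.0:
        return 0.0
    h = (t - 1.0) / steps
    total = 0.0
    for i in range(steps):
        x = 1.0 + (i + 0.5) * h
        total += (1.0 - (x / t) ** alpha) * 2.0 * alpha * x ** (-2.0 * alpha - 1.0) * h
    return total
```

For α = 1 and t = 3 both quantities are close to (1 − 1/3)² = 4/9, in line with G_2 = G².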
Definition 2 A stochastic process {X_n : n ∈ N_0} is a discrete-time Kendall random walk with parameter α > 0 and step distribution ν ∈ P_+ if there exist i.i.d. sequences {Y_k} with distribution ν, {ξ_k} with the Pareto distribution π_{2α}, and {θ_k} uniformly distributed on [0, 1], such that the sequences {Y_k}, {ξ_k}, and {θ_k} are mutually independent and

X_0 = 0,  X_{n+1} = M_{n+1} (1{θ_{n+1} > ϱ_{n+1}} + ξ_{n+1} 1{θ_{n+1} ≤ ϱ_{n+1}}),

where M_{n+1} = max{X_n, Y_{n+1}}, m_{n+1} = min{X_n, Y_{n+1}} and ϱ_{n+1} = (m_{n+1}/M_{n+1})^α (with ϱ_{n+1} := 0 if M_{n+1} = 0).

In the next proposition, we show that the process constructed in Definition 2 is a homogeneous Markov process driven by the Kendall convolution. We refer to [9], Section 4, for the proof and a general discussion of the existence of Markov processes under generalized convolutions.
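Since the displayed formulas of Definition 2 are partly lost in this copy, the following simulation sketch encodes our reading of the representation: each transition draws from the Kendall convolution of δ_{X_n} and δ_{Y_{n+1}}, consistent with Proposition 2.4. The function names and the start at the neutral element 0 are our assumptions.

```python
import random

def kendall_walk(step_sampler, alpha, n, rng):
    """Simulate X_0, ..., X_n of a Kendall random walk (sketch).

    step_sampler(rng) draws one step Y_k from nu; each transition keeps
    M = max(X_k, Y_{k+1}) and multiplies it by a Pareto(2*alpha) factor
    with probability rho^alpha, where rho = min/max.
    """
    x = 0.0  # neutral element of the Kendall convolution
    path = [x]
    for _ in range(n):
        y = step_sampler(rng)
        m, M = min(x, y), max(x, y)
        rho_alpha = (m / M) ** alpha if M > 0 else 0.0
        if rng.random() < rho_alpha:
            u = 1.0 - rng.random()  # uniform on (0, 1]
            x = M * u ** (-1.0 / (2.0 * alpha))  # Pareto pi_{2alpha} factor
        else:
            x = M
        path.append(x)
    return path
```

With ν = δ_1 and α = 1, the variable X_2 is π_2-distributed (Remark 2.1), so its sample median should be near √2.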

Proposition 2.4
The process {X_n : n ∈ N_0} with the stochastic representation given by Definition 2 is a homogeneous Markov process with transition probability kernel

P_n(x, A) := P(X_{k+n} ∈ A | X_k = x) = (δ_x △_α ν^{△_α n})(A),   (4)

for all k, n ∈ N_0, x ≥ 0 and A ∈ B(R_+). The proof of Proposition 2.4 is presented in Sect. 5.1.

Finite Dimensional Distributions
In this section, we study the finite-dimensional distributions of the process {X_n}.
We start with Proposition 3.1, where in point (i) we give a formula for the inverse of the function G(t). For the proof of this case, see, e.g., Lemma 3 in [22]. We refer also to Lemma 1 in [3] and the comments that follow it, where formula (5) was derived as an inverse of an α-slash distribution. In point (ii), we describe the one-dimensional distributions of {X_n} and their relationship with the Williamson transform and the truncated α-moment.
Proposition 3.1 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+ with cdf F. Then

(i) for any t > 0, we have

F(t) = G(t) + (t/α) G′(t),   (5)

and

(ii) P(X_n ≤ t) = F_n(t) = G(t)^{n−1} (G(t) + n t^{−α} H_α(t)).

Proof For the proof of case (ii), observe that

G(t)^n = G_n(t) = F_n(t) − t^{−α} ∫_0^t x^α dF_n(x),   (6)

where the first equation is by Remark 2.2 and the last equation follows from integration by parts. In order to complete the proof, it is sufficient to take derivatives on both sides of equation (6) and apply (i).
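The displayed formula (5) is missing from this copy; a version consistent with the surrounding derivation (and with the identity G′(t) = α t^{−α−1} H_α(t)) is F(t) = G(t) + (t/α) G′(t). The sketch below, with our own function names, checks this inversion numerically for the beta B(α, 1) step of Example 3.3, whose closed-form transform we computed by direct integration and treat as an assumption.

```python
def G_beta(t, alpha):
    # Transform of the beta B(alpha, 1) law (cdf x^alpha on [0, 1]);
    # closed form derived by direct integration (assumption under test)
    if t <= 0.0:
        return 0.0
    return 0.5 * t ** alpha if t <= 1.0 else 1.0 - 0.5 * t ** (-alpha)

def invert_williamson(G, t, alpha, h=1e-6):
    # Recover the step cdf via F(t) = G(t) + (t / alpha) * G'(t),
    # with G' approximated by a central difference
    dG = (G(t + h, alpha) - G(t - h, alpha)) / (2.0 * h)
    return G(t, alpha) + t / alpha * dG
```

For t in (0, 1) the recovered value should match the beta cdf t^α, and for t > 1 it should equal 1.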
The next two lemmas characterize the transition probabilities of the process {X_n} and play an important role in the analysis of its finite-dimensional distributions.

Lemma 3.2 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+. For all k, n ∈ N and x, y, t ≥ 0, we have

The proof of Lemma 3.2 is presented in Sect. 5.2.
The following lemma is the main tool in finding the finite-dimensional distributions of {X_n}. In order to formulate the result, it is convenient to introduce the notation A_k := {0, 1}^k \ {(0, 0, …, 0)} for any k ∈ N.

Additionally, for any (ε_1, …, ε_k) ∈ A_k we denote

where n_j ∈ N for all j ∈ N and any 0 ≤ x_1 ≤ ⋯ ≤ x_k.

The proof of Lemma 3.3 is presented in Sect. 5.3. Now, we are able to derive a general formula for the finite-dimensional distributions of the process {X_n}.
Theorem 1 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+. Then for any 0 =: n_0 ≤ n_1 ≤ ⋯ ≤ n_k, where n_j ∈ N for all j ∈ N, and any 0 ≤ x_1 ≤ ⋯ ≤ x_k, we have

Proof First, observe that

Moreover, by the definition of (·), for any a > 0, we have

Now, in order to complete the proof, it suffices to apply Lemma 3.3 with y_0 = 0 and x_{k+1} → ∞.
To better illustrate the result of Theorem 1, we present its special case for k = 2.

Proposition 3.4 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+. Then for any 0 =: n_0 ≤ n_1 ≤ n_2, where n_j ∈ N for j = 1, 2, and any 0 ≤ x_1 ≤ x_2, we have

Finally, we present the cumulative distribution functions and characterizations of the finite-dimensional distributions of the process {X_n} for the most interesting examples of unit step distributions ν. Since, by Theorem 1, the finite-dimensional distributions of {X_n} are uniquely determined by the Williamson transform G(·) and the truncated α-moment H_α(·) of the step distribution ν, these two characteristics are presented for each of the examples. Additionally, in each example we derive the cdf of ν^{△_α n}, that is, the one-dimensional distribution of the process {X_n}. It should be noted that formulas for {X_n} can also be derived using the approach of Theorem 2 in [3]. We start with the basic case of a point-mass distribution ν, in which ν^{△_α n} is a particular case of the beta-Pareto distribution (see, e.g., Remark 7 in [3]).
respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, …, we have

By Proposition 3.4, we arrive at

In the next example, we consider a mixture of δ_1 and the Pareto distribution that plays a crucial role in the construction of the Kendall convolution.
In the next example, we consider the step distribution ν given by the beta B(α, 1) distribution. Such a ν has the lack-of-memory property for the Kendall convolution. We refer to [19] for a general result about the existence of measures with the lack-of-memory property for the so-called monotonic generalized convolutions.

Example 3.3
Let ν be a probability measure with the cdf

Then the Williamson transform and the truncated α-moment of ν are given by

and

respectively. Hence, by Proposition 3.1 (ii), for any n = 2, 3, …, we have

In the next example, we consider a unit step distribution which is a stable probability measure for the Kendall random walk (see Sect. 4, Proposition 4.4).
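The displayed formulas of Example 3.3 are lost in this copy; integrating the definitions of G and H_α directly against the density α x^{α−1} on [0, 1] suggests the following reconstruction (ours, to be checked against the original):

```latex
F(x)=x^{\alpha}\,\mathbf{1}_{[0,1)}(x)+\mathbf{1}_{[1,\infty)}(x),\qquad
G(t)=\begin{cases}\tfrac12\,t^{\alpha}, & 0\le t\le 1,\\[2pt] 1-\tfrac12\,t^{-\alpha}, & t>1,\end{cases}\qquad
H_{\alpha}(t)=\begin{cases}\tfrac12\,t^{2\alpha}, & 0\le t\le 1,\\[2pt] \tfrac12, & t>1.\end{cases}
```

In particular 1 − G(t) = t^{−α}/2 for t > 1, a pure power tail in the transform domain.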

Example 3.4
The step distribution ν for the Kendall random walk in this example is given by the cdf of the form

Limit Theorems
In this section, we investigate the limiting behavior of Kendall random walks and connected continuous-time processes. The analysis is based on inverting the Williamson transform, as given in Proposition 3.1. Moreover, as shown in Sect. 2, the Kendall convolution is strongly related to the Pareto distribution. Hence, regular variation techniques play a crucial role in the analysis of the asymptotic behavior of, and the limit theorems for, the processes studied in this section. We start with the analysis of the asymptotic behavior of the tail distribution of the random variables X_n.
Theorem 2 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+. Then

The proof of Theorem 2 is presented in Sect. 5.4.
The following corollary is a direct consequence of Theorem 2.
Corollary 4.1 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+. Moreover, let

(i) F̄(x) be regularly varying with parameter θ − α as x → ∞, where 0 < θ < α. Then

Remark 4.2 Corollary 4.1 (i) shows that in the case of a regularly varying step distribution ν, the tail distribution of the random variable X_n is asymptotically equivalent to that of the maximum of n i.i.d. random variables with distribution ν.
In the next proposition, we investigate the limit distribution of Kendall random walks in the case of a finite α-moment as well as for a regularly varying tail of the unit step. We start with the following observation.

Remark 4.3 Due to Proposition 1 in [6], the random variable X belongs to the domain of attraction of a stable measure with respect to the Kendall convolution if and only if 1 − G(t) is a regularly varying function at ∞.

Notice that 1 − G(t) is regularly varying whenever the random variable X has a finite α-moment or its tail is regularly varying at infinity. The following proposition formalizes this observation by providing formulas for stable distributions with respect to the Kendall convolution.

Proposition 4.4 Let {X_n : n ∈ N_0} be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P_+.
(i) If m_ν^{(α)} < ∞, then

where the cdf of the random variable X is given in Example 3.4 by the following formula:

and the corresponding pdf of X is equal to

(ii) If F̄ is regularly varying as x → ∞ with parameter θ − α, where 0 ≤ θ < α, then there exists a sequence {a_n}, a_n → ∞, such that

where the cdf of the random variable X is given by

and the corresponding pdf of X is as follows:

The proof of Proposition 4.4 is presented in Sect. 5.4.

Remark 4.5
Notice that the stable distributions ρ_{ν,α} and ρ_{ν,α,θ} derived in Proposition 4.4 are special cases of the Inverse Slash Fréchet distribution described in Section 3.1 in [3], which is a stable distribution with respect to the max-slash operation.
Now we define a new stochastic process {Z_n(t) : n ∈ N_0} connected with the Kendall random walk {X_n : n ∈ N_0} as follows:

Z_n(t) := a_n^{−1} X_{[nt]},

where [·] denotes the integer part and the sequence {a_n} is such that a_n > 0 and lim_{n→∞} a_n = ∞.
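As an illustration of this scaling, the sketch below simulates Z_n(1) = a_n^{−1} X_n for ν = δ_1 with a_n = n^{1/α} (our assumed normalization in the finite α-moment case) and compares the empirical cdf at t = 1 with the value e^{−t^{−α}}(1 + t^{−α}) obtained by inverting the limiting transform e^{−t^{−α}}; both closed forms are our reconstructions, not taken from the paper.

```python
import math
import random

def kendall_step(x, y, alpha, rng):
    # one transition: sample from the Kendall convolution of delta_x, delta_y
    m, M = min(x, y), max(x, y)
    rho_alpha = (m / M) ** alpha if M > 0 else 0.0
    if rng.random() < rho_alpha:
        u = 1.0 - rng.random()  # uniform on (0, 1]
        return M * u ** (-1.0 / (2.0 * alpha))  # Pareto pi_{2alpha} factor
    return M

def empirical_cdf_Zn(alpha, n, t, reps, rng):
    # empirical P(X_n / n^(1/alpha) <= t) for the unit step nu = delta_1
    a_n = n ** (1.0 / alpha)
    hits = 0
    for _ in range(reps):
        x = 0.0
        for _ in range(n):
            x = kendall_step(x, 1.0, alpha, rng)
        hits += x <= t * a_n
    return hits / reps
```

For α = 1 and t = 1 the reconstructed limit value is 2/e ≈ 0.736, and moderate n already gets close to it.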
In the following theorem, we prove convergence of the finite-dimensional distributions of the process {Z n (t)}, for an appropriately chosen sequence {a n }.
Theorem 3 Let {X n : n ∈ N 0 } be a Kendall random walk with parameter α > 0 and unit step distribution ν ∈ P + .

Proofs
In this section, we present detailed proofs of our results.

Proof of Proposition 2.4
Due to the independence of the sequences {Y_k}, {ξ_k}, and {θ_k}, it follows directly from Definition 2 that the process {X_n} satisfies the Markov property. Now, let n ∈ N be fixed. We shall show that, for all k ∈ N, A ∈ B(R_+), x ≥ 0, α > 0, the transition probabilities of the process {X_n} are of the form (4). In order to do this, we proceed by induction. By Definition 2, we have

where

and

By the independence of the random variables θ_n and ξ_n, we have

Hence,

where (11) follows from Definition 1 and (12) follows from (2). This completes the first step of the proof by induction. Now, assuming that (13) holds for k ≥ 2, we shall establish its validity for k + 1. Due to the Chapman-Kolmogorov equation (see [23]) for the process {X_n}, we have

where (14) follows from (12) and (13), while (15) follows from (2). This completes the induction argument and the proof.

Proof of Lemma 3.2
First, notice that, by Definition 1, we have

In order to prove (i), observe that, by (2) and (17), we have

In order to complete the proof of case (i), it suffices to combine (18) with (3) and Proposition 3.1 (ii).

Proof of Theorem 2
Due to Proposition 3.1, for any n ≥ 2, we have

where (27) follows from the observation that, for any a ≥ 0 and n ≥ 2, we have

Thus,

where

as x → ∞, and lim_{x→∞} x^{−α} H_α(x) = 0 for any measure ν ∈ P_+, and hence

as x → ∞. This completes the proof.
The following lemma plays a crucial role in further analysis.
Proof First, observe that W(x) := x^α / H_α(x) is a regularly varying function with parameter α − θ as x → ∞. Then, due to Theorem 1.5.12 in [8], there exists an increasing function V(x) such that W(V(x)) = x(1 + o(1)) as x → ∞. Now, in order to complete the proof, it suffices to take a_n = V(n).
Proof of Proposition 4.4

Using (3), the Williamson transform of a_n^{−1} X_n is given by

In order to prove (i), observe that, under the assumption that m_ν^{(α)} < ∞,

Due to Proposition 3.1 (i), there exists a uniquely determined random variable X with cdf (7) and pdf (8) such that e^{−m_ν^{(α)} x^α} is its Williamson transform. This completes the proof of case (i).
In order to prove (ii), notice that, due to Theorem 1.5.8 in [8], F̄ ∈ RV_{θ−α} implies that H_α ∈ RV_θ. Hence, for any z > 0, we have

Moreover, by Lemma 5.1, we can choose a sequence {a_n} such that

as n → ∞. Thus,

Due to Proposition 3.1 (i), there exists a random variable X with cdf (9) and pdf (10) such that e^{−x^{α−θ}} is its Williamson transform. This completes the proof.
Proof of Theorem 3

In order to prove (ii), notice that, similarly to the proof of Proposition 4.4, for any t_j, z_j > 0 and 1 ≤ j ≤ k, we obtain

In order to complete the proof, it suffices to pass with n → ∞ in (30), applying (33) and (34).

Example 3.1

Let ν = δ_1. Then the Williamson transform and truncated α-moment of the measure ν are given by
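The displayed formulas of Example 3.1 are lost in this copy; from the definition of G and from Proposition 3.1 (ii) one obtains the following reconstruction (ours, stated for t ≥ 1):

```latex
G(t)=\bigl(1-t^{-\alpha}\bigr)_{+},\qquad
H_{\alpha}(t)=\mathbf{1}_{[1,\infty)}(t),\qquad
F_{n}(t)=\bigl(1-t^{-\alpha}\bigr)^{n-1}\bigl(1+(n-1)\,t^{-\alpha}\bigr),\quad t\ge 1.
```

The factorized form of F_n is consistent with the beta-Pareto identification mentioned in Sect. 3 (see Remark 7 in [3]).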