Abstract
This paper is devoted to the analysis of the finite-dimensional distributions and asymptotic behavior of extremal Markov processes connected with the Kendall convolution. In particular, we provide general formulas for the finite dimensional distributions of the random walk driven by the Kendall convolution for a large class of step size distributions. Moreover, we prove limit theorems for random walks and associated continuous-time stochastic processes.
1 Introduction
In recent years, new approaches in Extreme Value Theory have been gaining popularity. Besides the classical convolutions corresponding to the sum or maximum of independent random elements, a much wider class of binary operations, called generalized convolutions, is considered. This paper presents novel asymptotic properties of a certain class of extremal Markov sequences introduced in [9], where the authors consider additive processes and Lévy processes under generalized convolutions. One example of such an operation is the Kendall convolution, which is the basis of the theory studied here. It is a binary operation that, applied to two identical point-mass measures, yields a Pareto distribution, i.e., a heavy tailed distribution. This allows the construction of new processes based on a generalized convolution and is the first reason why the objects considered here are worth studying. The second is the expectation that the results presented here can be extended to a much wider class of convolutions related to the Kendall convolution. It is worth emphasizing that many phenomena cannot be modeled using the classical convolution. We believe that the challenges of our times demand new mathematical tools based on novel techniques connected to heavy tailed distributions, as presented in this paper.
The Kendall convolution, the main building block in the construction of the extremal Markov process called the Kendall random walk, \(\{X_n :n \in {\mathbb {N}}_0 \}\), is an important example of a generalization of the classical convolutions corresponding to the sum and the maximum of independent random variables. Originated by Urbanik [33,34,35,36,37] (see also [25]), generalized convolutions have regained popularity in recent years (see, e.g., [3, 9, 22, 31, 32] and references therein). In this paper, we focus on the Kendall convolution (see, e.g., [18, 21, 31]) which, in view of its relations to heavy tailed distributions, and in particular to slash distributions [3], the Williamson transform [20], Archimedean copulas, renewal theory [22], non-commutative probability [17] and Delphic semigroups [14, 15, 24], presents a high potential for applicability. We refer to [9] for the definition and detailed description of the basics of the theory of generalized convolutions, for a survey on the most important classes of generalized convolutions, and for a discussion of Lévy processes and stochastic integrals based on these convolutions, as well as to [16] for the definition and basic properties of Kendall random walks.
Our main goal is to study finite dimensional distributions and asymptotic properties of extremal Markov processes connected to the Kendall convolution. In particular, we present examples of finite dimensional distributions by the unit step characteristics, which create a new class of heavy tailed distributions. Innovative thinking about possible applications comes from the fact that the generalized convolution of two point-mass probability measures can be a non-degenerate probability measure. In particular, the Kendall convolution of two probability measures concentrated at 1 is the Pareto distribution, so it generates heavy tailed distributions. In this context, the theory of regularly varying functions (see, e.g., [6, 7]) plays a crucial role. In this paper, regular variation techniques are used to investigate asymptotic behavior of Kendall random walks and convergence of the finite dimensional distributions of continuous time stochastic processes constructed from Kendall random walks.
Most of the proofs presented in this paper are also based on the application of the Williamson transform, which is a generalized characteristic function of the probability measure in the Kendall algebra (see, e.g., [9, 38]). The great advantage of the Williamson transform is that it is easy to invert. It is also worth mentioning that this kind of transform is a generator of Archimedean copulas [29, 30] and is used to compute radial measures for reciprocal Archimedean copulas [13]. Asymptotic properties of the Williamson transform in the context of Archimedean copulas and extreme value theory were given in [26]. We believe that the results on the Williamson transform presented in this paper might be applicable in copula theory, which can be an interesting topic for future research.
We start, in Sect. 2, with the definition and basic properties of Kendall convolution. Next, the definition and construction of the random walk under Kendall convolution is presented, which leads to a stochastic representation of the process \(\{X_n\}\). Basic properties of the transition probability kernel are also proved. We refer to [16, 19, 20, 22] for discussions and proofs of further properties of the process \(\{X_n\}\). The structure of the processes considered here (see Definition 2) is similar to the first-order autoregressive maximal Pareto processes [4, 5, 28, 39], max-AR(1) sequences [1], minification processes [27, 28], max-autoregressive moving average processes MARMA [11], pARMAX and pRARMAX processes [12], and also the time series models studied in [2]. Since the random walks form a class of extremal Markov chains, we believe that studying them will yield an important contribution to extreme value theory.
Section 3 is devoted to the analysis of finite-dimensional distributions of the process \(\{X_n\}\). We derive general formulas and present some important examples of processes with different types of step distributions, which leads to new classes of heavy tailed distributions.
In Sect. 4, we study different limiting behaviors of Kendall convolutions and connected processes. We present asymptotic properties of random walks under Kendall convolution using regular variation techniques. In particular, we show the asymptotic equivalence between Kendall convolution and maximum convolution in the case of a regularly varying step distribution \(\nu \). The result has a link to classical extreme value theory (see, e.g., [10]) and suggests possible applications of the Kendall random walk in modeling phenomena where independence between events cannot be assumed. Moreover, limit theorems for Kendall random walks are given in the case of finite \(\alpha \)-moment as well as in the case of a regularly varying tail of the unit step distribution. Finally, we define a continuous time stochastic process based on random walk under Kendall convolution and prove convergence of its finite-dimensional distributions using regular variation techniques.
Notation.
Throughout this paper, the distribution of the random element X is denoted by \({\mathcal {L}}(X)\). By \({\mathcal {P}}_+\), we denote the family of all probability measures on the Borel \(\sigma \)-algebra \({\mathcal {B}}({\mathbb {R}}_+)\) with \({\mathbb {R}}_{+}:= [0, \infty )\). For a probability measure \(\lambda \in {\mathcal {P}}_{+}\) and \(a \in {\mathbb {R}}_+\), the rescaling operator is given by \(\textbf{T}_a \lambda = {\mathcal {L}}(aX)\) if \(\lambda = {\mathcal {L}}(X)\). By \(b_+\), we denote the positive part of any \(b\in {\mathbb {R}}\). The set of all natural numbers, including zero, is denoted by \({\mathbb {N}}_0\). Additionally, we use the notation \(\pi _{2\alpha }\), \(\alpha > 0\), for the Pareto probability measure with probability density function (pdf)
$$\begin{aligned} \pi _{2\alpha }\, (\text {d}y) = 2\alpha y^{-2\alpha -1} \varvec{1}_{[1,\infty )}(y)\, \text {d}y. \end{aligned}$$(1)
Moreover, by
$$\begin{aligned} m_{\nu }^{(\alpha )} := \int \limits _0^{\infty } x^{\alpha }\, \nu (\text {d}x) \end{aligned}$$
we denote the \(\alpha \)th moment of the measure \(\nu \). The truncated \(\alpha \)-moment of the measure \(\nu \) with cumulative distribution function (cdf) F is given by
$$\begin{aligned} H_{\alpha }(t) := \int \limits _0^t x^{\alpha }\, \nu (\text {d}x) = \int \limits _0^t x^{\alpha }\, \text {d}F(x). \end{aligned}$$
By \(\nu _1 \vartriangle _{\alpha } \nu _2\), we denote the Kendall convolution of the measures \(\nu _1, \nu _2 \in {\mathcal {P}}_{+}\). For all \(n \in {\mathbb {N}}\), the Kendall convolution of n identical measures \(\nu \) is denoted by \(\nu ^{ \vartriangle _{\alpha } n}:= \nu \vartriangle _{\alpha } \dots \vartriangle _{\alpha } \nu \) (n-times). By \(F_n\), we denote the cumulative distribution function of the measure \(\nu ^{\vartriangle _{\alpha }n}\), whereas for the tail distribution of \(\nu ^{\vartriangle _{\alpha }n}\) we use the standard notation \(\overline{F}_n\). By \({\mathop {\rightarrow }\limits ^{d}}\), we denote convergence in distribution, whereas \({\mathop {\rightarrow }\limits ^{fdd}}\) denotes convergence of finite-dimensional distributions. Finally, we say that a measurable and positive function \(f(\cdot )\) is regularly varying at infinity and with index \(\beta \) if for all \(x>0\) we have \(\lim _{t\rightarrow \infty }\frac{f(tx)}{f(t)}=x^{\beta }\) (see, e.g., [8]).
2 Stochastic Representation and Basic Properties of Kendall Random Walk
We start with the definition of Kendall generalized convolution (see, e.g., [9], Section 2).
Definition 1
The binary operation \(\vartriangle _{\alpha } :{\mathcal {P}}_+^2 \rightarrow {\mathcal {P}}_+\) defined for point-mass measures by
$$\begin{aligned} \delta _x \vartriangle _{\alpha } \delta _y := \textbf{T}_M \left( (1-\varrho ^{\alpha }) \delta _1 + \varrho ^{\alpha } \pi _{2\alpha } \right) , \end{aligned}$$
where \(\alpha > 0\), \(\varrho = {m/M}\), \(m = \min \{x, y\}\), \(M = \max \{x, y\}\), is called the Kendall convolution. The extension of \(\vartriangle _{\alpha }\) to arbitrary \(\nu _1, \nu _2 \in {\mathcal {P}}_+\) and \(A \in {\mathcal {B}}({\mathbb {R}}_+)\) is given by
$$\begin{aligned} (\nu _1 \vartriangle _{\alpha } \nu _2) (A) = \int \limits _0^{\infty } \int \limits _0^{\infty } \left( \delta _x \vartriangle _{\alpha } \delta _y \right) (A)\, \nu _1(\text {d}x)\, \nu _2(\text {d}y). \end{aligned}$$
Remark 2.1
Note that the convolution of two point-mass measures is no longer a point-mass measure; in the case \(x = y = 1\), it reduces to the continuous Pareto distribution \(\pi _{2\alpha }\). This differs from the classical convolution or the maximum convolution algebra, where the convolution of discrete measures yields a discrete measure. We refer to Section 3.2 in [3] for a general formula for the Kendall convolution of n point-mass measures and a discussion of its connections with the Pareto distribution.
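To make this remark concrete, the following sketch evaluates the cdf of the Kendall convolution of two point masses, assuming the standard point-mass mixture form \(\delta _x \vartriangle _{\alpha } \delta _y = \textbf{T}_M\left( (1-\varrho ^{\alpha })\delta _1 + \varrho ^{\alpha }\pi _{2\alpha }\right) \); the function name is ours, and the code checks that \(\delta _1 \vartriangle _{\alpha } \delta _1\) is exactly Pareto with parameter \(2\alpha \):

```python
def kendall_point_mass_cdf(t, x, y, alpha):
    """Cdf at t of the Kendall convolution of delta_x and delta_y (x, y not both 0),
    assuming the mixture form T_M((1 - rho**alpha) * delta_1 + rho**alpha * pi_{2a}),
    with m = min(x, y), M = max(x, y), rho = m / M."""
    m, M = min(x, y), max(x, y)
    if M == 0.0:
        return 1.0 if t >= 0.0 else 0.0   # delta_0 convolved with delta_0 is delta_0
    rho = m / M
    if t < M:
        return 0.0
    # atom of mass (1 - rho**alpha) at M plus a Pareto(2*alpha) part rescaled by M
    return (1.0 - rho ** alpha) + rho ** alpha * (1.0 - (t / M) ** (-2.0 * alpha))

# delta_1 convolved with itself (rho = 1) is the pure Pareto distribution pi_{2 alpha}
alpha = 1.5
for t in (1.0, 2.0, 10.0):
    assert abs(kendall_point_mass_cdf(t, 1.0, 1.0, alpha)
               - (1.0 - t ** (-2.0 * alpha))) < 1e-12
```

For unequal atoms (e.g., \(x=1\), \(y=2\)) the result keeps an atom at \(M\), illustrating that the output measure is a genuine mixture of a discrete and a continuous part.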
In the Kendall convolution algebra, the main tool used in the analysis of a measure \(\nu \in {\mathcal {P}}_{+}\) is the Williamson transform (see, e.g., [38])
$$\begin{aligned} \widehat{\nu }(t) := \int \limits _0^{\infty } \Psi (xt)\, \nu (\text {d}x), \quad \text {where } \Psi (t) := \left( 1 - t^{\alpha }\right) _+, \end{aligned}$$(2)
which is an analog of the characteristic function for the Kendall convolution and plays an analogous role to the classical Laplace or Fourier transform for convolutions defined by the addition of independent random elements. We refer to [9, 31] and [33,34,35,36,37] for the definition and detailed discussion of the properties of generalized characteristic functions and their connections to generalized convolutions. Throughout this paper, the function G(t) defined as
$$\begin{aligned} G(t) := \int \limits _0^{\infty } \Psi \left( \frac{x}{t}\right) \nu (\text {d}x), \quad t > 0, \end{aligned}$$(3)
plays a crucial role in the analysis of Kendall convolutions and related stochastic processes.
Remark 2.2
Note that \(G(t) = \hat{\nu }\left( \frac{1}{t}\right) \) and that \(\Psi \left( \frac{x}{t}\right) = \widehat{\delta }_x\left( \frac{1}{t}\right) \) is the Williamson transform of \(\delta _x\), \(x\ge 0\), at \(\frac{1}{t}\).
Remark 2.3
Due to Proposition 2.3 and Example 3.4 in [9], the Williamson transform \(\hat{\nu }\) is a generalized characteristic function for the Kendall convolution. Thus, the function \(G_n(t) = \widehat{\nu ^{ \vartriangle _{\alpha } n}}\left( \frac{1}{t}\right) \) has the following important property (see, e.g., Definition 2.2 in [9]):
$$\begin{aligned} G_n(t) = G(t)^n. \end{aligned}$$
By using a recurrence construction, we define a stochastic process \(\{X_n: n \in {\mathbb {N}}_0\}\) called the Kendall random walk (see also [16, 20, 22]). Further, we show the essential connection of the process \(\{X_n\}\) to the Kendall convolution.
Definition 2
A stochastic process \(\{X_n :n \in {\mathbb {N}}_0\}\) is a discrete time Kendall random walk with parameter \(\alpha >0\) and step distribution \(\nu \in {\mathcal {P}}_+\) if there exist
1. \(\{Y_k\}\) i.i.d. random variables with distribution \(\nu \),

2. \(\{\xi _k\}\) i.i.d. random variables with uniform distribution on [0, 1],

3. \(\{\theta _k\}\) i.i.d. random variables with Pareto distribution with the density (\(\alpha > 0\))
$$\begin{aligned} \pi _{2\alpha }\, (\text {d}y) = 2\alpha y^{-2\alpha -1} \varvec{1}_{[1,\infty )}(y)\, \text {d}y, \end{aligned}$$
such that sequences \(\{Y_k\}\), \(\{\xi _k\}\), and \(\{\theta _k\}\) are mutually independent and
where
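The recursive construction of Definition 2 can be sketched in code. The exact form of the update below is our reading of the construction (an assumption to be checked against the original), and all names are illustrative: given \(X_k\), draw \(Y_{k+1}\), set \(M = \max (X_k, Y_{k+1})\), \(\varrho = \min (X_k, Y_{k+1})/M\), and put \(X_{k+1} = M\) on \(\{\xi _{k+1} > \varrho ^{\alpha }\}\) and \(X_{k+1} = M\theta _{k+1}\) otherwise.

```python
import random

def kendall_walk(n, alpha, step_sampler, x0=0.0, seed=None):
    """Simulate X_0, ..., X_n of a Kendall random walk (sketch of Definition 2).

    Assumed update (our reading, not a verbatim quote of the paper):
        M = max(X_k, Y_{k+1}), rho = min(X_k, Y_{k+1}) / M,
        X_{k+1} = M           on {xi_{k+1} >  rho**alpha},
        X_{k+1} = M * theta   on {xi_{k+1} <= rho**alpha},
    with theta Pareto distributed, density 2*alpha*y**(-2*alpha - 1) on [1, inf).
    """
    rng = random.Random(seed)
    path = [x0]
    x = x0
    for _ in range(n):
        y = step_sampler(rng)
        m, M = min(x, y), max(x, y)
        rho = 0.0 if M == 0.0 else m / M
        if rng.random() <= rho ** alpha:
            # Pareto(2*alpha) variate by inversion of the cdf 1 - y**(-2*alpha)
            theta = (1.0 - rng.random()) ** (-1.0 / (2.0 * alpha))
            x = M * theta
        else:
            x = M
        path.append(x)
    return path

path = kendall_walk(1000, alpha=2.0, step_sampler=lambda r: r.random(), seed=7)
# theta >= 1 implies X_{k+1} >= max(X_k, Y_{k+1}), so paths are nondecreasing
assert all(a <= b for a, b in zip(path, path[1:]))
```

The monotonicity check reflects the extremal nature of the process: each state dominates the running maximum of the past state and the new step.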
In the next proposition, we show that the process constructed in Definition 2 is a homogeneous Markov process driven by the Kendall convolution. We refer to [9], Section 4 for the proof and a general discussion of the existence of the Markov processes under generalized convolutions.
Proposition 2.4
The process \(\{X_n: n \in {\mathbb {N}}_0\}\) with the stochastic representation given by Definition 2 is a homogeneous Markov process with transition probability kernel
$$\begin{aligned} P_n(x, A) := {\mathbb {P}}\left( X_{n+k} \in A \,\big |\, X_k = x\right) = \left( \delta _x \vartriangle _{\alpha } \nu ^{\vartriangle _{\alpha } n}\right) (A), \end{aligned}$$(4)
where \(k, n \in {\mathbb {N}}, A \in {\mathcal {B}}({\mathbb {R}}_{+}), x\ge 0, \alpha >0\).
3 Finite Dimensional Distributions
In this section, we study the finite dimensional distributions of the process \(\{X_n\}\). We start with Proposition 3.1, where in point (i) we present the formula for the inverse of the function G(t). For the proof in this case, see, e.g., Lemma 3 in [22]. We refer also to Lemma 1 in [3] and the comments that follow it, where formula (5) was derived as an inverse of an \(\alpha \)-slash distribution. In point (ii), we describe the one-dimensional distributions of \(\{X_n\}\) and their relationships with the Williamson transform and the truncated \(\alpha \)-moment.
Proposition 3.1
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\) with cdf F. Then
(i) for any \(t \ge 0\), we have
$$\begin{aligned} G(t) = \alpha t^{-\alpha } \int \limits _0^t x^{\alpha -1} F(x) \textrm{d}x, \end{aligned}$$(5) and \(F(t) = G(t) + \frac{t}{\alpha } G'(t)\).
(ii) for any \(t \ge 0, n \ge 1\), we have
$$\begin{aligned} F_n(t) = G(t)^{n-1} \bigl [n t^{-\alpha } H_\alpha (t) + G(t) \bigr ]. \end{aligned}$$
Proof
For the proof of the case (ii), observe that
$$\begin{aligned} G_n(t) = \int \limits _0^t \Psi \left( \frac{y}{t}\right) \nu ^{\vartriangle _{\alpha } n}(\text {d}y) = \alpha t^{-\alpha } \int \limits _0^t y^{\alpha -1} F_n(y)\, \textrm{d}y = F_n(t) - t^{-\alpha } \int \limits _0^t y^{\alpha }\, \nu ^{\vartriangle _{\alpha } n}(\text {d}y), \end{aligned}$$(6)
where the first equation is by Remark 2.2 and the last equation follows from integration by parts. In order to complete the proof, it is sufficient to take derivatives on both sides of equation (6) and apply (i). \(\square \)
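Formula (5) and the inversion \(F(t) = G(t) + \frac{t}{\alpha }G'(t)\) are easy to check numerically. The sketch below (function names are ours) does so for a U(0, 1) step, for which (5) gives the closed form \(G(t) = \frac{\alpha t}{\alpha + 1}\) for \(t \le 1\) and \(G(t) = 1 - \frac{t^{-\alpha }}{\alpha + 1}\) for \(t \ge 1\):

```python
def G_quad(t, alpha, F, steps=20000):
    """G(t) via formula (5): alpha * t**(-alpha) * int_0^t x**(alpha-1) F(x) dx,
    approximated with the midpoint rule on a uniform grid."""
    if t <= 0.0:
        return 0.0
    h = t / steps
    s = sum(((i + 0.5) * h) ** (alpha - 1.0) * F((i + 0.5) * h) for i in range(steps))
    return alpha * t ** (-alpha) * h * s

alpha = 2.0
F_unif = lambda x: min(x, 1.0)   # cdf of the U(0,1) step distribution

# closed form obtained from (5): G(t) = alpha*t/(alpha+1) for t <= 1,
# and G(t) = 1 - t**(-alpha)/(alpha+1) for t >= 1
for t in (0.5, 1.0, 3.0):
    exact = alpha * t / (alpha + 1.0) if t <= 1.0 else 1.0 - t ** (-alpha) / (alpha + 1.0)
    assert abs(G_quad(t, alpha, F_unif) - exact) < 1e-4

# inversion from Proposition 3.1 (i): F(t) = G(t) + (t/alpha) * G'(t),
# with G' approximated by central differences
t, h = 0.7, 1e-4
dG = (G_quad(t + h, alpha, F_unif) - G_quad(t - h, alpha, F_unif)) / (2.0 * h)
assert abs(G_quad(t, alpha, F_unif) + (t / alpha) * dG - F_unif(t)) < 1e-3
```

The ease of this inversion is exactly the advantage of the Williamson transform emphasized in the introduction.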
The next two lemmas characterize the transition probabilities of the process \(\{X_n\}\) and play an important role in the analysis of its finite-dimensional distributions.
Lemma 3.2
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). For all \(k, n \in {\mathbb {N}}, x, y, t \ge 0\), we have
(i)
$$\begin{aligned} P_n (x, ((0,t]))= & {} \left( G(t)^n + \frac{n}{t^{\alpha }} H_\alpha (t) G(t)^{n-1} \Psi \left( \frac{x}{t}\right) \right) \textbf{1}_{\{x < t \}}; \end{aligned}$$
(ii)
$$\begin{aligned} \int \limits _0^t w^{\alpha } P_n(x,dw) = \left( x^{\alpha } G(t)^n + nG(t)^{n-1} H_\alpha (t) \Psi \left( \frac{x}{t}\right) \right) \textbf{1}_{\{x < t \}}. \end{aligned}$$
The proof of Lemma 3.2 is presented in Sect. 5.2.
The following lemma is the main tool in finding the finite-dimensional distributions of \(\{X_n\}\). In order to formulate the result, it is convenient to introduce the notation
Additionally, for any \((\epsilon _1,...,\epsilon _k) \in {\mathcal {A}}_k\) we denote
Lemma 3.3
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 = n_0 \le n_1\le n_2\cdots \le n_k, k \in {\mathbb {N}}\), where \(n_j \in {\mathbb {N}}\) for all \(j \in {\mathbb {N}}\) and any \(0 \le y_0 \le x_1 \le x_2 \le \cdots \le x_k \le x_{k+1}\) we have
where
The proof of Lemma 3.3 is presented in Sect. 5.3.
Now, we are able to derive a general formula for the finite-dimensional distributions of the process \(\{X_n\}\).
Theorem 1
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 =: n_0 \le n_1\le n_2 \le \ldots \le n_k, k \in {\mathbb {N}}\), where \(n_j \in {\mathbb {N}}\) for all \(j \in {\mathbb {N}}\) and any \(0 \le x_1 \le x_2 \le \ldots \le x_k\) we have
where
Proof
First, observe that
Moreover, by the definition of \(\Psi (\cdot )\), for any \(a > 0\), we have
Now in order to complete the proof it suffices to apply Lemma 3.3 with \(y_0=0\) and \(x_{k+1} \rightarrow \infty \). \(\square \)
To better illustrate the result of Theorem 1, we present its special case for \(k = 2\).
Proposition 3.4
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 =: n_0 \le n_1\le n_2\), where \(n_j \in {\mathbb {N}}\) for all \(j = 1,2\) and any \(0 \le x_1 \le x_2\) we have
Finally, we present the cumulative distribution functions and characterizations of the finite-dimensional distributions of the process \(\{X_n\}\) for the most interesting examples of unit step distributions \(\nu \). Since, by Theorem 1, the finite-dimensional distributions of \(\{X_n\}\) are uniquely determined by the Williamson transform \(G(\cdot )\) and the truncated \(\alpha \)-moment \(H_\alpha (\cdot )\) of the step distribution \(\nu \), these two characteristics are presented for each of the examples. Additionally, in each example we derive the cdf of \(\nu ^{\vartriangle _{\alpha }n}\), which is the one-dimensional distribution of the process \(\{X_n\}\). It should be noted that formulas for \(\{X_n\}\) can also be derived using the approach conceived in Theorem 2 in [3]. We start with the basic case of a point-mass distribution \(\nu \), in which \(\nu ^{\vartriangle _{\alpha }n}\) is a particular case of the beta-Pareto distribution (see, e.g., Remark 7 in [3]).
Example 3.1
Let \(\nu = \delta _1\). Then the Williamson transform and truncated \(\alpha \)-moment of the measure \(\nu \) are given by
respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
By Proposition 3.4, we arrive at
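For \(\nu = \delta _1\) the characteristics are \(G(t) = (1 - t^{-\alpha })_+\) and \(H_\alpha (t) = \varvec{1}_{[1,\infty )}(t)\) (our computation from the definitions), and Proposition 3.1 (ii) then gives \(F_n\) in closed form. The sketch below checks that \(n = 2\) recovers the Pareto cdf \(1 - t^{-2\alpha }\), in line with Remark 2.1:

```python
def F_n_delta1(t, n, alpha):
    """Cdf F_n of the Kendall random walk with unit step nu = delta_1,
    via Proposition 3.1 (ii): F_n(t) = G(t)**(n-1) * (n * t**(-alpha) * H + G(t)),
    with (our computation) G(t) = 1 - t**(-alpha) and H = H_alpha(t) = 1 for t >= 1."""
    if t < 1.0:
        return 0.0
    G = 1.0 - t ** (-alpha)
    return G ** (n - 1) * (n * t ** (-alpha) + G)

alpha = 1.0
# n = 2 recovers the Pareto pi_{2 alpha} cdf 1 - t**(-2 alpha), cf. Remark 2.1
for t in (1.0, 1.5, 4.0, 10.0):
    assert abs(F_n_delta1(t, 2, alpha) - (1.0 - t ** (-2.0 * alpha))) < 1e-12
```

Algebraically, with \(u = t^{-\alpha }\), the \(n = 2\) case reads \((1-u)(2u + 1 - u) = (1-u)(1+u) = 1 - u^2\).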
In the next example, we consider a mixture of \(\delta _1\) and the Pareto distribution, which plays a crucial role in the construction of the Kendall convolution.
Example 3.2
Let \(\nu = p \delta _1 + (1-p) \pi _{p}\), where \(p\in (0,1]\) and \(\pi _{p}\) is a Pareto distribution with the pdf (1) with \(2\alpha = p\). Then, the Williamson transform and the truncated \(\alpha \)-moment of measure \(\nu \) are given by
and
respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
for \(p \ne \alpha \), and
for \(p = \alpha \).
In the next example, we consider the distribution \(\nu \) being the beta \({\mathcal {B}}(\alpha , 1)\) distribution. Such \(\nu \) has the lack of memory property for the Kendall convolution. We refer to [19] for a general result on the existence of measures with the lack of memory property for the so-called monotonic generalized convolutions.
Example 3.3
Let \(\nu \) be a probability measure with the cdf \(F(x) = 1 - (1-x^{\alpha })_+,\ x \ge 0, \alpha >0\). Then the Williamson transform and the truncated \(\alpha \)-moment of \(\nu \) are given by
and
respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
In the next example, we consider a unit step distribution \(\nu \) which is a stable probability measure for the Kendall random walk (see Sect. 4, Proposition 4.4).
Example 3.4
The step distribution \(\nu \) for the Kendall random walk in this example is given by the cdf of the form
where \(\alpha > 0\) and \(m_{\nu }^{(\alpha )}>0\) is a parameter. Then the Williamson transform and the truncated \(\alpha \)-moment of the measure \(\nu \) are given by
respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
Example 3.5
Let \(\nu \) be the standard uniform distribution with the pdf \(\nu (\text {d}x) = \varvec{1}_{(0,1)}(x) \text {d}x\), denoted by U(0, 1). Then the Williamson transform and the truncated \(\alpha \)-moment of \(\nu \) are given by
respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
Example 3.6
Let \(\nu = \gamma _{a,b},\; a,b>0\), be the Gamma distribution with the pdf
$$\begin{aligned} \gamma _{a,b}(\text {d}x) = \frac{b^a}{\Gamma (a)}\, x^{a-1} e^{-bx} \varvec{1}_{(0,\infty )}(x)\, \text {d}x. \end{aligned}$$
Then the Williamson transform and the truncated \(\alpha \)-moment of measure \(\nu \) are given by
and
where \(\gamma _{a,b}(0,x] = \frac{b^a}{\Gamma (a)} \! \int _0^x t^{a-1} e^{-bt}\, \text {d}t\). Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have
4 Limit Theorems
In this section, we investigate the limiting behavior of Kendall random walks and connected continuous time processes. The analysis is based on inverting the Williamson transform, as given in Proposition 3.1. Moreover, as shown in Sect. 2, the Kendall convolution is strongly related to the Pareto distribution. Hence, regular variation techniques play a crucial role in the analysis of the asymptotic behavior and limit theorems for the processes studied in this section.
We start with the analysis of the asymptotic behavior of the tail distribution of the random variables \(X_n\).
Theorem 2
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then
as \(x \rightarrow \infty \).
The proof of Theorem 2 is presented in Sect. 5.4.
The following corollary is a direct consequence of Theorem 2.
Corollary 4.1
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Moreover, let
(i)
\(\overline{F}(x)\) be regularly varying with parameter \(\theta - \alpha \) as \(x \rightarrow \infty \), where \(0< \theta < \alpha \). Then
$$\begin{aligned} \overline{F}_n(x) = n\overline{F}(x) (1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$

(ii)
\(m_\nu ^{(\alpha )} < \infty \). Then
$$\begin{aligned} \overline{F}_n(x) = n\overline{F}(x) + \frac{1}{2} n (n-1) \left( m_\nu ^{(\alpha )}\right) ^2 x^{-2\alpha } (1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$

(iii)
\(\overline{F}(x) = o\left( x^{-2\alpha }\right) \) as \(x \rightarrow \infty \). Then
$$\begin{aligned} \overline{F}_n(x) = \frac{1}{2} n (n-1) \left( m_\nu ^{(\alpha )}\right) ^2 x^{-2\alpha }(1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$
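Corollary 4.1 (iii) can be illustrated with \(\nu = \delta _1\), for which \(\overline{F}(x) = 0 = o(x^{-2\alpha })\) and \(m_\nu ^{(\alpha )} = 1\). Using the closed form of \(F_n\) from Proposition 3.1 (ii), with the \(\delta _1\) characteristics \(G(x) = 1 - x^{-\alpha }\) and \(H_\alpha (x) = 1\) for \(x \ge 1\) computed by us, the ratio of \(\overline{F}_n(x)\) to \(\frac{1}{2}n(n-1)x^{-2\alpha }\) should approach 1:

```python
def tail_Fn_delta1(x, n, alpha):
    """Tail 1 - F_n(x), x >= 1, for unit step delta_1, using Proposition 3.1 (ii)
    with the delta_1 characteristics G(x) = 1 - x**(-alpha), H_alpha(x) = 1."""
    G = 1.0 - x ** (-alpha)
    return 1.0 - G ** (n - 1) * (n * x ** (-alpha) + G)

alpha, n = 1.0, 5
# Corollary 4.1 (iii): 1 - F_n(x) ~ n*(n-1)/2 * x**(-2*alpha), since m_nu^(alpha) = 1
for x in (1e2, 1e3):
    ratio = tail_Fn_delta1(x, n, alpha) / (0.5 * n * (n - 1) * x ** (-2.0 * alpha))
    assert abs(ratio - 1.0) < 0.05
```

This makes visible the second-order effect: even with a step whose own tail vanishes, the walk acquires a polynomial tail of order \(x^{-2\alpha }\).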
Remark 4.2
Corollary 4.1 (i) shows that, in the case of a regularly varying step distribution \(\nu \), the tail distribution of the random variable \(X_n\) is asymptotically equivalent to that of the maximum of n i.i.d. random variables with distribution \(\nu \).
In the next proposition, we investigate the limit distribution of Kendall random walks in the case of a finite \(\alpha \)-moment as well as for regularly varying tail of the unit step. We start with the following observation.
Remark 4.3
Due to Proposition 1 in [6], the random variable X belongs to the domain of attraction of a stable measure with respect to the Kendall convolution if and only if \(1 - G(t)\) is a regularly varying function at \(\infty \).
Notice that \(1 - G(t)\) is regularly varying whenever the random variable X has a finite \(\alpha \)-moment or its tail is regularly varying at infinity. The following proposition formalizes this observation by providing formulas for stable distributions with respect to Kendall convolution.
Proposition 4.4
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\).
(i)
If \(m_{\nu }^{(\alpha )} <\infty \) then, as \(n \rightarrow \infty \),
$$\begin{aligned} n^{-1/\alpha } X_n {\mathop {\rightarrow }\limits ^{d}} X, \end{aligned}$$where the cdf of the random variable X is given in Example 3.4 by the following formula:
$$\begin{aligned} \rho _{\nu ,\alpha }(0,x] = \left( 1 + m_{\nu }^{(\alpha )} x^{-\alpha } \right) e^{-m_{\nu }^{(\alpha )} x^{-\alpha }} \varvec{1}_{(0,\infty )}(x) \end{aligned}$$(7) and the corresponding pdf of the distribution of X is equal to
$$\begin{aligned} \rho _{\nu ,\alpha }(\text {d}x) = \alpha \left( m_{\nu }^{(\alpha )} \right) ^2 x^{-2\alpha -1} \exp \{-m_{\nu }^{(\alpha )} x^{-\alpha }\} \varvec{1}_{(0,\infty )}(x) \text {d}x. \end{aligned}$$(8)

(ii)
If \(\overline{F}\) is regularly varying as \(x \rightarrow \infty \) with parameter \(\theta - \alpha \), where \(0 \leqslant \theta < \alpha \), then there exists a sequence \(\{a_n\}\), \(a_n \rightarrow \infty \), such that
$$\begin{aligned} a_n^{-1} X_n {\mathop {\rightarrow }\limits ^{d}} X, \end{aligned}$$where the cdf of the random variable X is given by
$$\begin{aligned} \rho _{\nu ,\alpha ,\theta }(0,x] = \left( 1 + x^{-(\alpha -\theta )}\right) e^{-x^{-(\alpha -\theta )}}\varvec{1}_{(0,\infty )}(x) \end{aligned}$$(9) and the corresponding pdf of X is as follows
$$\begin{aligned} \rho _{\nu ,\alpha ,\theta }(\text {d}x) = \alpha x^{-2(\alpha -\theta )-1} \exp \{-x^{-(\alpha -\theta )}\} \varvec{1}_{(0,\infty )}(x) \textrm{d}x. \end{aligned}$$(10)
The proof of Proposition 4.4 is presented in Sect. 5.4.
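Proposition 4.4 (i) can be checked numerically for \(\nu = \delta _1\), where \(m_\nu ^{(\alpha )} = 1\) and \(a_n = n^{1/\alpha }\): for large n, \(F_n(n^{1/\alpha }x)\) should be close to the limit cdf (7). The sketch below (function names ours) uses the closed form of \(F_n\) from Proposition 3.1 (ii) with the \(\delta _1\) characteristics computed by us:

```python
import math

def limit_cdf(x, alpha, m=1.0):
    """Limit cdf (7): (1 + m * x**(-alpha)) * exp(-m * x**(-alpha)), x > 0."""
    u = m * x ** (-alpha)
    return (1.0 + u) * math.exp(-u)

def F_n_delta1(t, n, alpha):
    # Proposition 3.1 (ii) with the delta_1 characteristics (our computation):
    # G(t) = 1 - t**(-alpha) and H_alpha(t) = 1 for t >= 1
    if t < 1.0:
        return 0.0
    G = 1.0 - t ** (-alpha)
    return G ** (n - 1) * (n * t ** (-alpha) + G)

alpha, n = 2.0, 10 ** 5
a_n = n ** (1.0 / alpha)    # for delta_1: m_nu^(alpha) = 1 and a_n = n**(1/alpha)
for x in (0.5, 1.0, 2.0):
    assert abs(F_n_delta1(a_n * x, n, alpha) - limit_cdf(x, alpha)) < 1e-3
```

The agreement to within \(10^{-3}\) at \(n = 10^5\) reflects the first-order error \(O(x^{-\alpha }/n)\) visible in the expansion of \((1 - u)^{n-1}(1 + (n-1)u)\) with \(u = x^{-\alpha }/n\).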
Remark 4.5
Notice that the stable distributions \(\rho _{\nu , \alpha }\) and \(\rho _{\nu , \alpha , \theta }\) derived in Proposition 4.4 are special cases of the Inverse Slash Fréchet distribution described in Section 3.1 in [3], which is stable with respect to the max-slash operation.
Now we define a new stochastic process \(\{Z_n(t): n \in {\mathbb {N}}_0\}\) connected with the Kendall random walk \(\{X_n: n \in {\mathbb {N}}_0\}\) as follows:
where \([\cdot ]\) denotes the integer part and the sequence \(\{a_n\}\) is such that \(a_n > 0\), and \(\lim _{n \rightarrow \infty } a_n = \infty \).
In the following theorem, we prove convergence of the finite-dimensional distributions of the process \(\{Z_n(t)\}\), for an appropriately chosen sequence \(\{a_n\}\).
Theorem 3
Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\).
(i)
If \(m_{\nu }^{(\alpha )}<\infty \) and \(a_n = n^{1/\alpha } (1 + o(1))\), as \(n\rightarrow \infty \), then
$$\begin{aligned} \{Z_n(t)\}{\mathop {\rightarrow }\limits ^{fdd}} \{Z(t)\}, \end{aligned}$$where, for any \(0 = t_0 \le t_1 \le ... \le t_k\), the finite-dimensional distributions of \(\{Z(t)\}\) are given by
$$\begin{aligned}{} & {} {\mathbb {P}}\left( Z(t_1)\le z_1, Z(t_2)\le z_2,\ldots , Z(t_k)\le z_k \right) \\{} & {} \quad = \sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k \left( \frac{\left( t_j - t_{j-1}\right) }{z_j^\alpha } m_{\nu }^{(\alpha )} \right) ^{\epsilon _j} \\{} & {} \quad \qquad \cdot \exp \left\{ - m_{\nu }^{(\alpha )} \sum \limits _{i=1}^k z_i^{-\alpha } (t_i-t_{i-1})\right\} , \end{aligned}$$

(ii)
If \(\overline{F}(\cdot )\) is regularly varying as \(x \rightarrow \infty \) with parameter \(\theta - \alpha \), where \(0 \leqslant \theta < \alpha \), then there exists a sequence \(\{a_n\}, a_n \rightarrow \infty \) such that
$$\begin{aligned} Z_n(t){\mathop {\rightarrow }\limits ^{fdd}} Z(t), \end{aligned}$$where, for any \(0 = t_0 \le t_1 \le ... \le t_k\), the finite-dimensional distributions of \(\{Z(t)\}\) are given by
$$\begin{aligned}{} & {} {\mathbb {P}}\left( Z(t_1)\le z_1, Z(t_2)\le z_2,\cdots , Z(t_k)\le z_k \right) \\{} & {} \quad = \sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k \left( \left( t_j - t_{j-1}\right) z_j^{\theta -\alpha } \right) ^{\epsilon _j} \\{} & {} \qquad \cdot \exp \left\{ - \sum \limits _{i=1}^k z_i^{\theta - \alpha } (t_i-t_{i-1})\right\} , \end{aligned}$$
where in both above cases we have
with
5 Proofs
In this section, we present detailed proofs of our results.
5.1 Proof of Proposition 2.4
Due to the independence of the sequences \(\{Y_k\}\), \(\{\xi _k\}\), and \(\{\theta _k\}\), it follows directly from Definition 2 that the process \(\{X_n\}\) satisfies the Markov property.
Now, let \(n \in {\mathbb {N}}\) be fixed. We shall show that, for all \(k \in {\mathbb {N}}, A \in {\mathcal {B}}({\mathbb {R}}_{+}), x\ge 0, \alpha >0\), the transition probabilities of the process \(\{X_n\}\) are of the form (4). In order to do this, we proceed by induction. By Definition 2, we have
where
and
By the independence of the random variables \(\theta _n\) and \(\xi _n\), we have
Hence,
where (11) follows from Definition 1 and (12) follows from (2). This completes the first step of the proof by induction.
Now, assuming that
holds for \(k \ge 2\), we shall establish its validity for \(k + 1\).
Due to the Chapman–Kolmogorov equation ([23]) for the process \(\{X_n\}\), we have
where (14) follows from (12) and (13) while (15) follows from (2). This completes the induction argument and the proof. \(\square \)
5.2 Proof of Lemma 3.2
First, notice that, by Definition 1, we have
In order to prove (i), observe that by (2) and (17) we have
In order to complete the proof of case (i), it suffices to combine (18) with (3) and Proposition 3.1 (ii).
To prove (ii), observe that integration by parts leads to
where (19) follows from (16). Applying (19), we obtain
where \(H_\alpha ^{(n)}(t):= \int _0^t y^\alpha \nu ^{\vartriangle _{\alpha } n}(\text {d}y) = t ^{\alpha } \left( F_n(t) - G_n(t) \right) \) by (6). Finally, the proof of the case (ii) is completed by combining (20) with Proposition 3.1 (ii). \(\square \)
5.3 Proof of Lemma 3.3
Let \(k=1\). Then by Lemma 3.2, we obtain
which ends the first step of the proof by induction.
Now, assume that the formula holds for \(k\in {\mathbb {N}}\). We shall establish its validity for \(k+1\). Let
Additionally, we denote \(s_1:= \sum _{i = 1}^{k+1} \epsilon _i\).
Moreover, let
By splitting \({\mathcal {A}}_{k+1}\) into four subfamilies of sets: \({\mathcal {A}}_{k+1}^0\), \({\mathcal {A}}_{k+1}^1\), \(\{(1,0,...,0)\}\), and \(\{(0,...,0)\}\) and applying the formula for k and the first induction step we obtain
where
Observe that for any sequence \((0,\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^0\), we have
\((\tilde{\epsilon }_1, \tilde{\epsilon }_2,...,\tilde{\epsilon }_{s_1}) = (\tilde{\eta }_1, \tilde{\eta }_2,..., \tilde{\eta }_{s_2}) \), with \(s_1 = s_2\), which implies that
Moreover,
Hence,
Analogously, for any sequence \((1,\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^1\) we have
\((\tilde{\epsilon }_1, \tilde{\epsilon }_2,...,\tilde{\epsilon }_{s_1}) = (1, \tilde{\eta }_1, \tilde{\eta }_2,..., \tilde{\eta }_{s_2})\), with \(s_1 = s_2 + 1\) which implies that
where (23) is the consequence of
Moreover,
Hence,
Additionally, observe that
and due to the fact that \(n_0 = 0\), we have
Finally, by combining (22), (24), (25), and (26) with (21) we obtain that
This completes the induction argument and the proof. \(\square \)
5.4 Proof of Theorem 2
Due to Proposition 3.1, for any \(n \ge 2\), we have
where (27) follows by the observation that, for any \(a \ge 0\), \(n \ge 2\), we have
Thus,
where
as \(x \rightarrow \infty \), and
Note that \(\lim _{x \rightarrow \infty } \frac{H_\alpha (x)}{x^{\alpha }} = 0\) for any measure \(\nu \in {\mathcal {P}}_{+}\) and hence
as \(x \rightarrow \infty \). This completes the proof. \(\square \)
The following lemma plays a crucial role in further analysis.
Lemma 5.1
Let \(H_\alpha \in RV_{\theta }\) with \(0<\theta <\alpha \). Then, there exists a sequence \(\{a_n\}\) such that
as \(n\rightarrow \infty \).
Proof
First, observe that \( W(x):= x^{\alpha }/H_\alpha (x)\) is regularly varying function with parameter \(\alpha - \theta \) as \(x \rightarrow \infty \). Then, due to Theorem 1.5.12 in [8], there exists an increasing function V(x) such that \(W(V(x)) = x(1 + o(1)),\) as \(x \rightarrow \infty \). Now, in order to complete the proof it suffices to take \(a_n = V(n)\). \(\square \)
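The asymptotic inverse appearing in Lemma 5.1 can be computed numerically. In the sketch below we take, purely as an illustration, \(H_\alpha (x) = x^{\theta }/\log x\), so that \(W(x) = x^{\alpha }/H_\alpha (x) = x^{\alpha -\theta }\log x \in RV_{\alpha - \theta }\), and find \(a_n = V(n)\) by bisection so that \(W(V(n))/n \rightarrow 1\):

```python
import math

def W(x, alpha=2.0, theta=0.5):
    """Regularly varying with index alpha - theta; stands in for x**alpha / H_alpha(x)
    under the illustrative choice H_alpha(x) = x**theta / log(x)."""
    return x ** (alpha - theta) * math.log(x)

def V(y, lo=math.e, hi=1e12):
    """Asymptotic inverse of W on [e, 1e12] by bisection in log-scale,
    so that W(V(y)) = y * (1 + o(1)), as in Theorem 1.5.12 of [8]."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)    # geometric midpoint; W is increasing here
        if W(mid) < y:
            lo = mid
        else:
            hi = mid
    return lo

for n in (10 ** 3, 10 ** 6, 10 ** 9):
    a_n = V(float(n))
    assert abs(W(a_n) / n - 1.0) < 1e-6
```

In the regularly varying case of Proposition 4.4 (ii) the normalizing sequence \(a_n = V(n)\) is obtained exactly in this way, with \(W(x) = x^{\alpha }/H_\alpha (x)\).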
6 Proof of Proposition 4.4
By (3), the Williamson transform of \(a_n^{-1} X_n\) is given by
since
In order to prove (i), observe that, under the assumption that \(m_\nu ^{(\alpha )}\) is finite, we have
which, by (28), yields that
Due to Proposition 3.1 (i), there exists a uniquely determined random variable X with cdf (7) and pdf (8) such that \(e^{-m_{\nu }^{(\alpha )} x^{\alpha }} \) is its Williamson transform. This completes the proof of case (i).
In order to prove (ii), notice that, due to Theorem 1.5.8 in [8], \(\overline{F} \in RV_{\theta -\alpha }\) implies that \(H_\alpha \in RV_{\theta }\). Hence, for any \(z > 0\), we have
Moreover, by Lemma 5.1, we can choose a sequence \(\{a_n\}\) such that
as \(n \rightarrow \infty \). Thus,
Due to Proposition 3.1 (i), there exists a random variable X with cdf (9) and pdf (10) such that \(e^{-x^{\alpha - \theta }}\) is its Williamson transform. This completes the proof. \(\square \)
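To make the convergence in case (i) concrete, here is a small numerical sketch under assumed specifics (none of which are restated from the paper's (3) and (28)): for \(\alpha = 1\) take the step distribution \(\nu \) uniform on \((0,1)\), whose Williamson transform is \(G(x) = \int _0^1 (1-xy)_+\,dy\), and assume, as is standard for Kendall random walks, that the transform of \(X_n\) is \(G^n\); with \(a_n = n^{1/\alpha } = n\) and \(m_\nu ^{(1)} = 1/2\), the rescaled transforms \(G(x/a_n)^n\) approach \(e^{-x/2}\).

```python
# Illustrative numerical sketch (assumptions flagged above, not from the paper):
# convergence of the rescaled Williamson transform to exp(-m * x) for alpha = 1.

import math


def G(x):
    """Williamson transform (alpha = 1) of nu = Uniform(0, 1):
    G(x) = 1 - x/2 for x <= 1, and 1/(2x) for x > 1."""
    return 1.0 - x / 2.0 if x <= 1.0 else 1.0 / (2.0 * x)


m = 0.5   # m_nu^{(1)} = E[Y] for Y ~ Uniform(0, 1)

x = 1.7
for n in (10, 1_000, 100_000):
    a_n = float(n)              # a_n = n^{1/alpha} with alpha = 1
    approx = G(x / a_n) ** n    # assumed transform of a_n^{-1} X_n
    print(n, approx, math.exp(-m * x))
```

The two printed columns converge to each other, which is the numerical shadow of the limit \(e^{-m_{\nu }^{(\alpha )} x^{\alpha }}\) established in the proof.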
6.1 Proof of Theorem 3
Proof
Let \(0 =: t_0 \le t_1 \le t_2 \le \cdots \le t_k\), where \(k \in {\mathbb {N}}\). By Theorem 1, the distribution of \(\left( Z_{n}(t_1), Z_{n}(t_2),\ldots , Z_{n}(t_k) \right) \) is given by
where
Analogously to the proof of Proposition 4.4 (i), we obtain
and
In order to complete the proof of (i), it suffices to let \(n \rightarrow \infty \) in (30), applying (31) and (32).
In order to prove (ii), notice that, similarly to the proof of Proposition 4.4, for any \(t_j, z_j > 0\) and \(1 \le j \le k\), we obtain
and
In order to complete the proof, it suffices to let \(n \rightarrow \infty \) in (30), applying (33) and (34). \(\square \)
Data Availability
The authors declare that there are no additional data attached to this paper.
References
Alpuim, M.T., Catkan, N.A., Hüsler, J.: Extremes and clustering of non-stationary max-AR(1) sequences. Stoch. Proc. Appl. 56, 171–184 (1995)
Andersen, T.G., Davis, R.A., Kreiß, J., Mikosch, T.V. (eds.): Handbook of Financial Time Series. Springer-Verlag, Berlin Heidelberg (2009)
Arendarczyk, M., Kozubowski, T.J., Panorska, A.K.: Slash distributions, generalized convolutions, and extremes. Ann. Inst. Stat. Math. 75, 593–617 (2023)
Arnold, B.C.: Pareto processes. In: Stochastic Processes: Theory and Methods. Handbook of Statistics, vol. 19. pp. 1–33 (2001)
Arnold, B.C.: Pareto Distributions. In: Monographs on Statistics and Applied Probability, vol. 140. Taylor & Francis Group (2015)
Bingham, N.H.: Factorization theory and domains of attraction for generalized convolution algebra. Proc. Lond. Math. Soc. 23(4), 16–30 (1971)
Bingham, N.H.: On a theorem of Kłosowska about generalized convolutions. Colloq. Math. 48(1), 117–125 (1984)
Bingham, N.H., Goldie, C.M., Teugels, J.L.: Regular Variation. Cambridge University Press, Cambridge (1987)
Borowiecka-Olszewska, M., Jasiulis-Gołdyn, B.H., Misiewicz, J.K., Rosiński, J.: Lévy processes and stochastic integrals in the sense of generalized convolutions. Bernoulli 21(4), 2513–2551 (2015)
Embrechts, P., Klüppelberg, C., Mikosch, T.: Modelling Extremal Events for Insurance and Finance. Springer, Berlin (1997)
Ferreira, M.: On the extremal behavior of a Pareto process: an alternative for ARMAX modeling. Kybernetika 48(1), 31–49 (2012)
Ferreira, M., Canto e Castro, L.: Modeling rare events through a pRARMAX process. J. Stat. Plann. Inference 140, 3552–3566 (2010)
Genest, C., Nešlehová, J., Rivest, L.P.: The class of multivariate max-id copulas with \(l_1\)-norm symmetric exponent measure. Bernoulli 24(4B), 3751–3790 (2018)
Gilewski, J.: Generalized convolutions and delphic semigroups. Colloq. Math. 25, 281–289 (1972)
Gilewski, J., Urbanik, K.: Generalized convolutions and generating functions. Bull. Acad. Sci. Pol. Ser. Math. Astron. Phys. 16, 481–487 (1968)
Jasiulis-Gołdyn, B.H.: Kendall random walks. Probab. Math. Stat. 36(1), 165–185 (2016)
Jasiulis-Gołdyn, B.H., Kula, A.: The Urbanik generalized convolutions in the non-commutative probability and a forgotten method of constructing generalized convolution. Proc. Math. Sci. 122(3), 437–458 (2012)
Jasiulis-Gołdyn, B.H., Misiewicz, J.K.: On the uniqueness of the Kendall generalized convolution. J. Theor. Probab. 24(3), 746–755 (2011)
Jasiulis-Gołdyn, B.H., Misiewicz, J.K.: Classical definitions of the Poisson process do not coincide in the case of weak generalized convolution. Lith. Math. J. 55(4), 518–542 (2015)
Jasiulis-Gołdyn, B.H., Misiewicz, J.K.: Kendall random walk, Williamson transform and the corresponding Wiener–Hopf factorization. Lith. Math. J. 57(4), 479–489 (2017)
Jasiulis-Gołdyn, B.H., Misiewicz, J.K., Omey, E., Wesołowski, J.: How exceptional is the extremal Kendall and Kendall-type convolution. Results Math. (to appear) (2023). arXiv:1912.13453
Jasiulis-Gołdyn, B.H., Naskręt, K., Misiewicz, J.K., Omey, E.: Renewal theory for extremal Markov sequences of the Kendall type. Stoch. Proc. Appl. 130(6), 3277–3294 (2019)
Kallenberg, O.: Foundations of Modern Probability, Probability Theory and Stochastic Modelling 99, 3rd edn. Springer International Publishing, Cham (2020)
Kendall, D.G.: Delphic semi-groups, infinitely divisible regenerative phenomena, and the arithmetic of p-functions. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete 9(3), 163–195 (1968)
Kingman, J.F.C.: Random walks with spherical symmetry. Acta Math. 109(1), 11–53 (1963)
Larsson, M., Nešlehová, J.: Extremal behavior of Archimedean copulas. Adv. Appl. Probab. 43, 195–216 (2011)
Lewis, P.A.W., McKenzie, Ed.: Minification processes and their transformations. J. Appl. Probab. 28(1), 45–57 (1991)
Lopez-Diaz, J., Angeles Gil, M., Grzegorzewski, P., Hryniewicz, O., Lawry, J.: Soft Methodology and Random Information Systems. In: Advances in Intelligent and Soft Computing. Springer (2004)
McNeil, A.J., Nešlehová, J.: From Archimedean to Liouville copulas. J. Multivariate Anal. 101(8), 1771–1790 (2010)
McNeil, A.J., Nešlehová, J.: Multivariate Archimedean copulas, \(d\)-monotone functions and \(l_1\)-norm symmetric distributions. Ann. Stat. 37(5B), 3059–3097 (2009)
Misiewicz, J.: Generalized convolutions and the Levi–Civita functional equation. Aequationes Math. 92(5), 911–933 (2018)
Misiewicz, J., Volkovich, V.: Every symmetric weakly-stable random vector is pseudo-isotropic. J. Math. Anal. Appl. 483(1), 123575 (2020)
Urbanik, K.: Generalized convolutions I. Studia Math. 23, 217–245 (1964)
Urbanik, K.: Generalized convolutions II. Studia Math. 45, 57–70 (1973)
Urbanik, K.: Generalized convolutions III. Studia Math. 80, 167–189 (1984)
Urbanik, K.: Generalized convolutions IV. Studia Math. 83, 57–95 (1986)
Urbanik, K.: Generalized convolutions V. Studia Math. 91, 153–178 (1988)
Williamson, R.E.: Multiply monotone functions and their Laplace transforms. Duke Math. J. 23, 189–207 (1956)
Yeh, H.C., Arnold, B.C., Robertson, C.A.: Pareto processes. J. Appl. Probab. 25(2), 291–301 (1988)
Acknowledgements
B. Jasiulis-Gołdyn and E. Omey were supported by “First-order Kendall maximal autoregressive processes and their applications,” Grant no POIR.04.04.00-00-1D5E/16, within the POWROTY/REINTEGRATION programme of the Foundation for Polish Science co-financed by the European Union under the European Regional Development Fund.
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Arendarczyk, M., Jasiulis-Gołdyn, B. & Omey, E. Asymptotic Properties of Extremal Markov Processes Driven by Kendall Convolution. J Theor Probab 36, 2040–2065 (2023). https://doi.org/10.1007/s10959-023-01285-2
Keywords
- Markov process
- Extremes
- Kendall convolution
- Regular variation
- Williamson transform
- Limit theorems
- Exact asymptotics