1 Introduction

In recent years, new approaches in extreme value theory have been gaining popularity. Besides the classical convolutions corresponding to the sum or maximum of independent random elements, one can consider a much wider class of binary operations called generalized convolutions. This paper presents novel asymptotic properties of a class of extremal Markov sequences introduced in [9], where additive processes and Lévy processes under generalized convolutions are studied. One example of such an operation is the Kendall convolution, which is the basis of the theory developed here. It is a binary operation that, applied to two identical point-mass measures, yields a Pareto distribution, i.e., a heavy tailed distribution. This allows the construction of new processes based on a generalized convolution and is the first reason why the objects considered here are worth studying. The second reason is that the results presented here can, in our opinion, be extended to a much wider class of convolutions related to the Kendall convolution. It is worth emphasizing that many phenomena cannot be modeled by using the classical convolution, and we believe that such phenomena demand new mathematical tools connected to heavy tailed distributions, as presented in this paper.

The Kendall convolution, the main building block in the construction of the extremal Markov process called the Kendall random walk, \(\{X_n :n \in {\mathbb {N}}_0 \}\), is an important example of a generalization of the classical convolutions corresponding to the sum and the maximum of independent random variables. Originated by Urbanik [33,34,35,36,37] (see also [25]), generalized convolutions have regained popularity in recent years (see, e.g., [3, 9, 22, 31, 32] and references therein). In this paper, we focus on the Kendall convolution (see, e.g., [18, 21, 31]) which, in view of its relations to heavy tailed distributions, and in particular to slash distributions [3], the Williamson transform [20], Archimedean copulas, renewal theory [22], non-commutative probability [17], and Delphic semi-groups [14, 15, 24], presents a high potential of applicability. We refer to [9] for the definition and a detailed description of the basics of the theory of generalized convolutions, for a survey of the most important classes of generalized convolutions, and for a discussion of Lévy processes and stochastic integrals based on these convolutions, as well as to [16] for the definition and basic properties of Kendall random walks.

Our main goal is to study finite-dimensional distributions and asymptotic properties of extremal Markov processes connected to the Kendall convolution. In particular, we present examples of finite-dimensional distributions expressed in terms of the characteristics of the unit step distribution, which lead to a new class of heavy tailed distributions. Innovative thinking about possible applications comes from the fact that the generalized convolution of two point-mass probability measures can be a non-degenerate probability measure. In particular, the Kendall convolution of two probability measures concentrated at 1 is a Pareto distribution, so it generates heavy tailed distributions. In this context, the theory of regularly varying functions (see, e.g., [6, 7]) plays a crucial role. In this paper, regular variation techniques are used to investigate the asymptotic behavior of Kendall random walks and the convergence of the finite-dimensional distributions of continuous time stochastic processes constructed from Kendall random walks.

Most of the proofs presented in this paper are also based on the application of the Williamson transform, which is a generalized characteristic function of the probability measure in the Kendall algebra (see, e.g., [9, 38]). The great advantage of the Williamson transform is that it is easy to invert. It is also worth mentioning that this kind of transform is a generator of Archimedean copulas [29, 30] and is used to compute radial measures for reciprocal Archimedean copulas [13]. Asymptotic properties of the Williamson transform in the context of Archimedean copulas and extreme value theory were given in [26]. We believe that the results on the Williamson transform presented in this paper might be applicable in copula theory, which can be an interesting topic for future research.

We start, in Sect. 2, with the definition and basic properties of Kendall convolution. Next, the definition and construction of the random walk under Kendall convolution is presented, which leads to a stochastic representation of the process \(\{X_n\}\). Basic properties of the transition probability kernel are also proved. We refer to [16, 19, 20, 22] for discussions and proofs of further properties of the process \(\{X_n\}\). The structure of the processes considered here (see Definition 2) is similar to the first-order autoregressive maximal Pareto processes [4, 5, 28, 39], max-AR(1) sequences [1], minification processes [27, 28], max-autoregressive moving average processes MARMA [11], pARMAX and pRARMAX processes [12], and also the time series models studied in [2]. Since the random walks form a class of extremal Markov chains, we believe that studying them will yield an important contribution to extreme value theory.

Section 3 is devoted to the analysis of finite-dimensional distributions of the process \(\{X_n\}\). We derive general formulas and present some important examples of processes with different types of step distributions, which leads to new classes of heavy tailed distributions.

In Sect. 4, we study different limiting behaviors of Kendall convolutions and connected processes. We present asymptotic properties of random walks under Kendall convolution using regular variation techniques. In particular, we show the asymptotic equivalence between Kendall convolution and maximum convolution in the case of a regularly varying step distribution \(\nu \). The result has a link to classical extreme value theory (see, e.g., [10]) and suggests possible applications of the Kendall random walk in modeling phenomena where independence between events cannot be assumed. Moreover, limit theorems for Kendall random walks are given in the case of finite \(\alpha \)-moment as well as in the case of a regularly varying tail of the unit step distribution. Finally, we define a continuous time stochastic process based on random walk under Kendall convolution and prove convergence of its finite-dimensional distributions using regular variation techniques.

Notation.

Throughout this paper, the distribution of the random element X is denoted by \({\mathcal {L}}(X)\). By \({\mathcal {P}}_+\), we denote the family of all probability measures on the Borel \(\sigma \)-algebra \({\mathcal {B}}({\mathbb {R}}_+)\) with \({\mathbb {R}}_{+}:= [0, \infty )\). For a probability measure \(\lambda \in {\mathcal {P}}_{+}\) and \(a \in {\mathbb {R}}_+\), the rescaling operator is given by \(\textbf{T}_a \lambda = {\mathcal {L}}(aX)\) if \(\lambda = {\mathcal {L}}(X)\). By \(b_+\), we denote the positive part of any \(b\in {\mathbb {R}}\). The set of all natural numbers, including zero, is denoted by \({\mathbb {N}}_0\). Additionally, we use the notation \(\pi _{2\alpha }, \alpha > 0\), for the Pareto probability measure with probability density function (pdf)

$$\begin{aligned} \pi _{2\alpha }\, (\text {d}y) = 2\alpha y^{-2\alpha -1} \varvec{1}_{[1,\infty )}(y)\, \text {d}y. \end{aligned}$$
(1)

Moreover, by

$$\begin{aligned} m_{\nu }^{(\alpha )}:= \int _0^\infty x^\alpha \nu (\text {d}x) \end{aligned}$$

we denote the \(\alpha \)th moment of the measure \(\nu \). The truncated \(\alpha \)-moment of the measure \(\nu \) with cumulative distribution function (cdf) F is given by

$$\begin{aligned} H_\alpha (t):= \int \limits _0^t y^{\alpha } F (\text {d}y). \end{aligned}$$

By \(\nu _1 \vartriangle _{\alpha } \nu _2\), we denote the Kendall convolution of the measures \(\nu _1, \nu _2 \in {\mathcal {P}}_{+}\). For all \(n \in {\mathbb {N}}\), the Kendall convolution of n identical measures \(\nu \) is denoted by \(\nu ^{ \vartriangle _{\alpha } n}:= \nu \vartriangle _{\alpha } \dots \vartriangle _{\alpha } \nu \) (n-times). By \(F_n\), we denote the cumulative distribution function of the measure \(\nu ^{\vartriangle _{\alpha }n}\), whereas for the tail distribution of \(\nu ^{\vartriangle _{\alpha }n}\) we use the standard notation \(\overline{F}_n\). By \({\mathop {\rightarrow }\limits ^{d}}\), we denote convergence in distribution, whereas \({\mathop {\rightarrow }\limits ^{fdd}}\) denotes convergence of finite-dimensional distributions. Finally, we say that a measurable and positive function \(f(\cdot )\) is regularly varying at infinity and with index \(\beta \) if for all \(x>0\) we have \(\lim _{t\rightarrow \infty }\frac{f(tx)}{f(t)}=x^{\beta }\) (see, e.g., [8]).

2 Stochastic Representation and Basic Properties of Kendall Random Walk

We start with the definition of Kendall generalized convolution (see, e.g., [9], Section 2).

Definition 1

The binary operation \(\vartriangle _{\alpha } :{\mathcal {P}}_+^2 \rightarrow {\mathcal {P}}_+\) defined for point-mass measures by

$$\begin{aligned} \delta _x \vartriangle _{\alpha } \delta _y := \textbf{T}_M \left( \varrho ^{\alpha } \pi _{2\alpha } + (1- \varrho ^{\alpha }) \delta _1 \right) , \end{aligned}$$

where \(\alpha > 0\), \(\varrho = {m/M}\), \(m = \min \{x, y\}\), \(M = \max \{x, y\}\), is called the Kendall convolution. The extension of \(\vartriangle _{\alpha }\) to arbitrary \(\nu _1, \nu _2 \in {\mathcal {P}}_+\) is given, for any \(A \in {\mathcal {B}}({\mathbb {R}}_+)\), by

$$\begin{aligned} (\nu _1 \vartriangle _{\alpha } \nu _2) (A) = \int \limits _{0}^{\infty } \int \limits _{0}^{\infty } \left( \delta _x \vartriangle _{\alpha } \delta _y\right) (A)\, \nu _1(\text {d}x) \nu _2(\text {d}y). \end{aligned}$$
(2)
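To make Definition 1 concrete, the following minimal sketch (our own illustration; the function names are not from the original text) draws one sample from \(\delta _x \vartriangle _{\alpha } \delta _y\) by picking the Pareto branch \(\textbf{T}_M \pi _{2\alpha }\) with probability \(\varrho ^{\alpha }\) and the point mass \(\delta _M\) otherwise; sampling from a general \(\nu _1 \vartriangle _{\alpha } \nu _2\) then follows (2) by first drawing \(x \sim \nu _1\) and \(y \sim \nu _2\).

```python
import numpy as np

def sample_pareto(alpha, rng):
    # Inverse-transform sample from pi_{2 alpha}: density 2*alpha*y^(-2*alpha-1) on [1, inf).
    return rng.uniform() ** (-1.0 / (2.0 * alpha))

def sample_kendall_point_mass(x, y, alpha, rng):
    # One draw from delta_x (Kendall-convolved with) delta_y; assumes x, y > 0 so that rho is well defined.
    m, M = min(x, y), max(x, y)
    rho = (m / M) ** alpha
    if rng.uniform() < rho:
        return M * sample_pareto(alpha, rng)   # rescaled Pareto branch, probability rho
    return M                                   # point-mass branch delta_M, probability 1 - rho

def sample_kendall_convolution(nu1_draws, nu2_draws, alpha, rng):
    # Draws from nu_1 Kendall-convolved with nu_2 via (2), given i.i.d. samples from nu_1 and nu_2.
    return np.array([sample_kendall_point_mass(a, b, alpha, rng)
                     for a, b in zip(nu1_draws, nu2_draws)])

rng = np.random.default_rng(0)
draws = sample_kendall_convolution(np.ones(5), np.ones(5), alpha=1.0, rng=rng)
```

In particular, with both inputs equal to \(\delta _1\) the sampler returns rescaled Pareto draws with probability one, in line with Remark 2.1 below.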

Remark 2.1

Note that the convolution of two point-mass measures is no longer a point-mass measure and reduces to the continuous Pareto distribution \(\pi _{2\alpha }\) in the case \(x = y = 1\). This is different from the classical convolution or the maximum convolution algebra, where the convolution of discrete measures is again discrete. We refer to Section 3.2 in [3] for a general formula for the Kendall convolution of n point-mass measures and a discussion of its connections with the Pareto distribution.

In the Kendall convolution algebra, the main tool used in the analysis of a measure \(\nu \in {\mathcal {P}}_{+}\) is the Williamson transform (see, e.g., [38])

$$\begin{aligned} \hat{\nu }(t):= \int _0^\infty \left( 1 - (xt)^\alpha \right) _{+} \nu (\text {d}x), \end{aligned}$$

which is an analog of the characteristic function for the Kendall convolution and plays a role analogous to that of the classical Laplace or Fourier transform for convolutions defined by addition of independent random elements. We refer to [9, 31] and [33,34,35,36,37] for the definition and a detailed discussion of the properties of generalized characteristic functions and their connections to generalized convolutions. Throughout this paper, the function G(t) defined as

$$\begin{aligned} G(t):= \int \limits _{0}^{\infty } \Psi \left( \frac{x}{t}\right) \nu (\text {d}x), \quad \nu \in {\mathcal {P}}_+,\ \ \ \textrm{with} \ \ \ \Psi \left( t\right) := \left( 1 - t^{\alpha } \right) _+ \end{aligned}$$

plays a crucial role in the analysis of Kendall convolutions and related stochastic processes.
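Since \(G(t) = \int _0^{\infty } \Psi \left( \frac{x}{t}\right) \nu (\text {d}x)\) is simply the expectation of \(\left( 1 - (X/t)^{\alpha } \right) _+\) for \(X \sim \nu \), it can be approximated from data; the short sketch below is an illustrative Monte Carlo estimator (our addition, not part of the original development).

```python
import numpy as np

def G_hat(t, sample, alpha):
    # Monte Carlo estimate of G(t) = E[(1 - (X/t)^alpha)_+] for X ~ nu,
    # given an i.i.d. sample from nu (illustrative helper only).
    return float(np.mean(np.clip(1.0 - (np.asarray(sample) / t) ** alpha, 0.0, None)))
```

Applied to a sample from \(\nu \vartriangle _{\alpha } \nu \) (e.g., generated with the sketch following Definition 1), such an estimator can be used to check numerically the multiplicative property of G stated in (3) below.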

Remark 2.2

Note that \(G(t) = \hat{\nu }\left( \frac{1}{t}\right) \) and that \(\Psi \left( \frac{x}{t}\right) = \widehat{\delta }_x\left( \frac{1}{t}\right) \) is the Williamson transform of \(\delta _x\), \(x\ge 0\), evaluated at \(\frac{1}{t}\).

Remark 2.3

Due to Proposition 2.3 and Example 3.4 in [9], the Williamson transform \(\hat{\nu }\) is a generalized characteristic function for Kendall convolution. Thus, the function \(G_n(t) = \widehat{\nu ^{ \vartriangle _{\alpha } n}}\left( \frac{1}{t}\right) \) has the following important property (see, e.g., Definition 2.2 in [9])

$$\begin{aligned} G_n(t)=G(t)^n. \end{aligned}$$
(3)

By using a recurrence construction, we define a stochastic process \(\{X_n: n \in {\mathbb {N}}_0\}\) called the Kendall random walk (see also [16, 20, 22]). Further, we show the essential connection of the process \(\{X_n\}\) to the Kendall convolution.

Definition 2

A stochastic process \(\{X_n :n \in {\mathbb {N}}_0\}\) is a discrete time Kendall random walk with parameter \(\alpha >0\) and step distribution \(\nu \in {\mathcal {P}}_+\) if there exist

1.:

\(\{Y_k\}\) i.i.d. random variables with distribution \(\nu \),

2.:

\(\{\xi _k\}\) i.i.d. random variables with uniform distribution on [0, 1],

3.:

\(\{\theta _k\}\) i.i.d. random variables with Pareto distribution with the density (\(\alpha > 0\))

$$\begin{aligned} \pi _{2\alpha }\, (\text {d}y) = 2\alpha y^{-2\alpha -1} \varvec{1}_{[1,\infty )}(y)\, \text {d}y, \end{aligned}$$

such that sequences \(\{Y_k\}\), \(\{\xi _k\}\), and \(\{\theta _k\}\) are mutually independent and

$$\begin{aligned} X_0 = 1, \quad X_1 = Y_1, \quad X_{n+1} = M_{n+1} \left[ \varvec{1}_{(\xi _n > \varrho _{n+1})} + \theta _{n+1} \varvec{1}_{(\xi _n <\varrho _{n+1})}\right] , \quad n \ge 1, \end{aligned}$$

where

$$\begin{aligned} M_{n+1} = \max \{ X_n, Y_{n+1}\}, \quad m_{n+1} = \min \{ X_n, Y_{n+1}\}, \quad \varrho _{n+1} = \frac{m_{n+1}^{\alpha }}{M_{n+1}^{\alpha }}. \end{aligned}$$
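For intuition, the recurrence of Definition 2 can be simulated directly. The sketch below is a minimal illustration (the name kendall_random_walk and its interface are ours and are reused in later illustrative snippets), assuming a strictly positive step distribution so that \(\varrho _{n+1}\) is well defined; it returns one trajectory \(X_0, \ldots , X_N\).

```python
import numpy as np

def kendall_random_walk(step_sampler, alpha, n_steps, rng=None):
    # Simulate X_0, ..., X_{n_steps} following Definition 2 (n_steps >= 1).
    # step_sampler(size, rng) returns i.i.d. draws from the step distribution nu.
    rng = np.random.default_rng() if rng is None else rng
    Y = step_sampler(n_steps, rng)                                  # Y_1, ..., Y_N
    xi = rng.uniform(size=n_steps)                                  # xi_k ~ U(0, 1)
    theta = rng.uniform(size=n_steps) ** (-1.0 / (2.0 * alpha))     # theta_k ~ pi_{2 alpha}
    X = np.empty(n_steps + 1)
    X[0], X[1] = 1.0, Y[0]                                          # X_0 = 1, X_1 = Y_1
    for n in range(1, n_steps):
        M, m = max(X[n], Y[n]), min(X[n], Y[n])
        rho = (m / M) ** alpha
        X[n + 1] = M * theta[n] if xi[n] < rho else M               # Pareto jump with prob. rho
    return X

# Example: alpha = 1 and unit step nu = delta_1.
path = kendall_random_walk(lambda size, rng: np.ones(size), alpha=1.0, n_steps=10)
```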

In the next proposition, we show that the process constructed in Definition 2 is a homogeneous Markov process driven by the Kendall convolution. We refer to [9], Section 4 for the proof and a general discussion of the existence of the Markov processes under generalized convolutions.

Proposition 2.4

The process \(\{X_n: n \in {\mathbb {N}}_0\}\) with the stochastic representation given by Definition 2 is a homogeneous Markov process with transition probability kernel

$$\begin{aligned} P_{n,n+k}(x, A) := P_k(x, A) = \left( \delta _x \vartriangle _\alpha \nu ^{\vartriangle _\alpha k} \right) (A), \end{aligned}$$
(4)

where \(k, n \in {\mathbb {N}}, A \in {\mathcal {B}}({\mathbb {R}}_{+}), x\ge 0, \alpha >0\).

The proof of Proposition 2.4 is presented in Sect. 5.1.

3 Finite Dimensional Distributions

In this section, we study the finite-dimensional distributions of the process \(\{X_n\}\). We start with Proposition 3.1, where in point (i) we present a formula for the function G(t) together with its inversion. For the proof of this part, see, e.g., Lemma 3 in [22]. We also refer to Lemma 1 in [3] and the comments that follow it, where formula (5) was derived as an inverse of an \(\alpha \)-slash distribution. In point (ii), we describe the one-dimensional distributions of \(\{X_n\}\) and their relationships with the Williamson transform and the truncated \(\alpha \)-moment.

Proposition 3.1

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\) with cdf F. Then

  1. (i)

    for any \(t \ge 0\), we have

    $$\begin{aligned} G(t) = \alpha t^{-\alpha } \int \limits _0^t x^{\alpha -1} F(x) \textrm{d}x, \end{aligned}$$
    (5)

    and \(F(t) = G(t) + \frac{t}{\alpha } G'(t)\).

  2. (ii)

    for any \(t \ge 0, n \ge 1\), we have

    $$\begin{aligned} F_n(t) = G(t)^{n-1} \bigl [n t^{-\alpha } H_\alpha (t) + G(t) \bigr ]. \end{aligned}$$

Proof

For the proof of the case (ii), observe that

$$\begin{aligned} G(t)^n = G_n(t) = F_n(t) - t^{-\alpha } \int \limits _0^t x^{\alpha } \nu ^{\vartriangle _\alpha n} (\text {d}x) = \alpha t^{-\alpha } \int _0^t x^{\alpha - 1} F_n(x) \text {d}x, \end{aligned}$$
(6)

where the first equality follows from (3), the second from Remark 2.2 applied to the measure \(\nu ^{\vartriangle _\alpha n}\), and the last from integration by parts. In order to complete the proof, it is sufficient to take derivatives on both sides of equation (6) and apply (i). \(\square \)

The next two lemmas characterize the transition probabilities of the process \(\{X_n\}\) and play an important role in the analysis of its finite-dimensional distributions.

Lemma 3.2

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). For all \(k, n \in {\mathbb {N}}, x, y, t \ge 0\), we have

  1. (i)
    $$\begin{aligned} P_n (x, ((0,t]))= & {} \left( G(t)^n + \frac{n}{t^{\alpha }} H_\alpha (t) G(t)^{n-1} \Psi \left( \frac{x}{t}\right) \right) \textbf{1}_{\{x < t \}}; \end{aligned}$$
  2. (ii)
    $$\begin{aligned} \int \limits _0^t w^{\alpha } P_n(x,dw) = \left( x^{\alpha } G(t)^n + nG(t)^{n-1} H_\alpha (t) \Psi \left( \frac{x}{t}\right) \right) \textbf{1}_{\{x < t \}}. \end{aligned}$$

The proof of Lemma 3.2 is presented in Sect. 5.2.

The following lemma is the main tool in finding the finite-dimensional distributions of \(\{X_n\}\). In order to formulate the result, it is convenient to introduce the notation

$$\begin{aligned} {\mathcal {A}}_k:= \{0,1\}^k \backslash \{(0,0,...,0) \},\ \ \ \ \ \textrm{for}\ \textrm{any}\ k\in {\mathbb {N}}. \end{aligned}$$

Additionally, for any \((\epsilon _1,...,\epsilon _k) \in {\mathcal {A}}_k\) we denote

$$\begin{aligned} \tilde{\epsilon }_1= & {} \min \left\{ i \in \{1,..., k\}: \epsilon _i = 1 \right\} , \; \dots , \tilde{\epsilon }_m:= \min \left\{ i>\tilde{\epsilon }_{m-1}: \epsilon _i = 1 \right\} , \\ m= & {} 2, \dots , s, \ \ \ s:= \sum _{i = 1}^k \epsilon _i. \end{aligned}$$

Lemma 3.3

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 = n_0 \le n_1\le n_2 \le \cdots \le n_k, k \in {\mathbb {N}}\), where \(n_j \in {\mathbb {N}}\) for all \(j = 1, \ldots , k\), and any \(0 \le y_0 \le x_1 \le x_2 \le \cdots \le x_k \le x_{k+1}\) we have

$$\begin{aligned}{} & {} \int \limits _{0}^{x_1}\int \limits _{0}^{x_2} \cdots \int \limits _{0}^{x_k} \Psi \left( \frac{y_k}{x_{k+1}}\right) P_{n_k-n_{k-1}}(y_{k-1}, \mathrm{{d}}y_k) P_{n_{k-1}-n_{k-2}}(y_{k-2}, \mathrm{{d}}y_{k-1}) \cdots P_{n_1}(y_0, \mathrm{{d}}y_1) \\{} & {} \quad =\sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots ,{\epsilon }_k) \in {\mathcal {A}}_k} \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_s}}{x_{k+1}} \right) \prod _{i=1}^{s-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \\{} & {} \qquad \cdot \left( \frac{(n_j - n_{j-1}) H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j} + \Psi \left( \frac{y_0}{x_{k+1}} \right) \prod _{j=1}^k (G(x_j))^{n_j - n_{j-1}}, \end{aligned}$$

where

$$\begin{aligned} \prod _{i=1}^{s-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) = 1 \ \ \ \textrm{for} \ \ \ s = 1. \end{aligned}$$

The proof of Lemma 3.3 is presented in Sect. 5.3.

Now, we are able to derive a general formula for the finite-dimensional distributions of the process \(\{X_n\}\).

Theorem 1

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 =: n_0 \le n_1\le n_2 \le \ldots \le n_k, k \in {\mathbb {N}}\), where \(n_j \in {\mathbb {N}}\) for all \(j = 1, \ldots , k\), and any \(0 \le x_1 \le x_2 \le \ldots \le x_k\) we have

$$\begin{aligned}{} & {} {\mathbb {P}}(X_{n_k} \le x_k, X_{n_{k-1}} \le x_{k-1}, \cdots , X_{n_1} \le x_1)\\{} & {} \quad =\sum \limits _{({\epsilon }_1,{\epsilon }_2,\ldots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{x_{ \tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha } \right) ^{\epsilon _j}, \end{aligned}$$

where

$$\begin{aligned} \prod _{i=1}^{s-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) = 1 \ \ \ \textrm{for} \ \ \ s \in \{0,1\}. \end{aligned}$$

Proof

First, observe that

$$\begin{aligned}{} & {} {\mathbb {P}}(X_{n_k} \le x_k, X_{n_{k-1}} \le x_{k-1}, \ldots , X_{n_1} \le x_1)\\{} & {} \quad =\int \limits _{0}^{x_1}\int \limits _{0}^{x_2} \cdots \int \limits _{0}^{x_k} P_{n_k-n_{k-1}}(y_{k-1}, \text {d}y_k) P_{n_{k-1}-n_{k-2}}(y_{k-2}, \text {d}y_{k-1}) \cdots P_{n_1}(0, \text {d}y_1). \end{aligned}$$

Moreover, by the definition of \(\Psi (\cdot )\), for any \(a > 0\), we have

$$\begin{aligned} \lim _{x_{k+1} \rightarrow \infty } \Psi \left( \frac{a}{x_{k+1}}\right) =\Psi \left( 0\right) = 1. \end{aligned}$$

Now in order to complete the proof it suffices to apply Lemma 3.3 with \(y_0=0\) and \(x_{k+1} \rightarrow \infty \). \(\square \)

To better illustrate the result of Theorem 1, we present its special case for \(k = 2\).

Proposition 3.4

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then for any \(0 =: n_0 \le n_1\le n_2\), where \(n_j \in {\mathbb {N}}\) for all \(j = 1,2\) and any \(0 \le x_1 \le x_2\) we have

$$\begin{aligned}{} & {} {\mathbb {P}}(X_{n_2} \le x_2, X_{n_1} \le x_1) \\{} & {} \quad = [G(x_1)]^{n_1 - 1}[G(x_2)]^{n_2 - n_1 - 1} \left\{ G(x_1) G(x_2) + \frac{n_1}{x_1^\alpha } H_\alpha (x_1)G(x_2) \right. \\{} & {} \qquad + \left. \frac{n_2 - n_1}{x_2^\alpha } G(x_1) H_\alpha (x_2) +\frac{(x_2^\alpha - x_1^\alpha )n_1(n_2 - n_1)}{x_1^\alpha x_2^{2\alpha } }H_\alpha (x_1) H_\alpha (x_2) \right\} . \end{aligned}$$

Finally, we present the cumulative distribution functions and characterizations of the finite-dimensional distributions of the process \(\{X_n\}\) for the most interesting examples of unit step distributions \(\nu \). Since, by Theorem 1, the finite-dimensional distributions of \(\{X_n\}\) are uniquely determined by the Williamson transform \(G(\cdot )\) and the truncated \(\alpha \)-moment \(H_\alpha (\cdot )\) of the step distribution \(\nu \), these two characteristics are presented for each of the examples. Additionally, in each example we derive the cdf of \(\nu ^{\vartriangle _{\alpha }n}\), that is, the one-dimensional distribution of the process \(\{X_n\}\). It should be noted that formulas for \(\{X_n\}\) can also be derived using the approach developed in Theorem 2 in [3]. We start with the basic case of a point-mass distribution \(\nu \), in which \(\nu ^{\vartriangle _{\alpha }n}\) is a particular case of the beta-Pareto distribution (see, e.g., Remark 7 in [3]).

Example 3.1

Let \(\nu = \delta _1\). Then the Williamson transform and truncated \(\alpha \)-moment of the measure \(\nu \) are given by

$$\begin{aligned} G(x) = \left( 1-\frac{1}{x^{\alpha }}\right) _+ \ \ \ \hbox {and} \ \ \ H_\alpha (x)= \varvec{1}_{[1,\infty )}(x), \end{aligned}$$

respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x) = \left( 1+\frac{n-1}{x^{\alpha }}\right) \left( 1-\frac{1}{x^{\alpha }}\right) ^{n-1} \varvec{1}_{[1,\infty )}(x). \end{aligned}$$

By Proposition 3.4, we arrive at

$$\begin{aligned} {\mathbb {P}}(X_{n_2} \le x_2, X_{n_1} \le x_1)= & {} \left( 1 - \frac{1}{x_1^\alpha }\right) ^{n_1 - 1} \left( 1 - \frac{1}{x_2^\alpha }\right) ^{n_2 - n_1 - 1} \\{} & {} \left\{ \left( 1 + \frac{n_1-1}{x_1^\alpha }\right) \left( 1 + \frac{n_2 - n_1 - 1}{x_2^\alpha }\right) -\frac{n_1(n_2 - n_1)}{x_2^{2\alpha }}\right\} . \end{aligned}$$
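As a quick sanity check (our addition), the above formula for \(F_n\) can be compared with the empirical distribution of \(X_n\) generated by the simulation sketched in Sect. 2; the snippet below does this for \(\alpha = 1\) and \(n = 5\), using the illustrative kendall_random_walk helper introduced there.

```python
import numpy as np

alpha, n, x, reps = 1.0, 5, 3.0, 100_000
rng = np.random.default_rng(1)

hits = 0
for _ in range(reps):
    path = kendall_random_walk(lambda size, r: np.ones(size), alpha, n, rng=rng)
    hits += path[n] <= x
empirical = hits / reps

# Closed form from Example 3.1: F_n(x) = (1 + (n-1)/x^alpha) * (1 - 1/x^alpha)^(n-1), x >= 1.
exact = (1.0 + (n - 1) / x ** alpha) * (1.0 - 1.0 / x ** alpha) ** (n - 1)
print(empirical, exact)   # the two values should agree up to Monte Carlo error
```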

In the next example, we consider a mixture of \(\delta _1\) and the Pareto distribution that plays a crucial role in the construction of the Kendall convolution.

Example 3.2

Let \(\nu = p \delta _1 + (1-p) \pi _{p}\), where \(p\in (0,1]\) and \(\pi _{p}\) is a Pareto distribution with the pdf (1) with \(2\alpha = p\). Then, the Williamson transform and the truncated \(\alpha \)-moment of measure \(\nu \) are given by

$$\begin{aligned} G(x) = \left\{ \begin{array}{l} \left( 1-\frac{\alpha (1-p)}{(\alpha -p)}x^{-p} + \frac{p(1-\alpha )}{(\alpha -p)}x^{-\alpha }\right) \varvec{1}_{[1,\infty )}(x) \quad \hbox {if} \; p \ne \alpha , \\ \left( 1 - (1-p) x^{-p} - px^{-\alpha } +p (1-p) x^{-\alpha } \log (x) \right) \varvec{1}_{[1,\infty )}(x) \quad \hbox {if} \; p = \alpha \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} H_\alpha (x)= \left\{ \begin{array}{l} \left( \frac{p(1-\alpha )}{(p-\alpha )}+ \frac{p(1-p)}{(\alpha -p)} x^{\alpha -p} \right) \varvec{1}_{[1,\infty )}(x) \quad \hbox {if} \; p \ne \alpha , \\ p + p(1-p) \log (x) \quad \hbox {if} \; p = \alpha , \end{array} \right. \end{aligned}$$

respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x)= & {} \left[ 1 - \frac{\alpha (1-p)}{\alpha -p}x^{-p} + \frac{p(1-\alpha )}{\alpha -p}x^{-\alpha }\right] ^{n-1}\\{} & {} \cdot \left[ 1+\frac{(1-p)(np-\alpha )}{\alpha -p}x^{-p} -\frac{p(1-\alpha )(n-1)}{\alpha -p}x^{-\alpha }\right] \varvec{1}_{[1,\infty )}(x) \end{aligned}$$

for \(p \ne \alpha \), and

$$\begin{aligned} F_n(x)= & {} \left[ 1 - (1-p) x^{-p} - px^{-\alpha } +p (1-p) x^{-\alpha } \log (x) \right] ^{n-1}\\{} & {} \cdot \left[ 1 \!+\! (n\!-\!1) p x^{-\alpha } \!-\! (1-p) x^{-p} \!+\! p (1-p) (n+1) x^{-\alpha } \log (x)\right] \varvec{1}_{[1,\infty )}(x) \end{aligned}$$

for \(p = \alpha \).

In the next example, we consider the distribution \(\nu \) being the beta \({\mathcal {B}}(\alpha , 1)\) distribution. Such \(\nu \) has the lack of memory property for the Kendall convolution. We refer to [19] for a general result about the existence of measures with the lack of memory property for the so-called monotonic generalized convolutions.

Example 3.3

Let \(\nu \) be a probability measure with the cdf \(F(x) = 1 - (1-x^{\alpha })_+,\ x \ge 0, \alpha >0\). Then the Williamson transform and the truncated \(\alpha \)-moment of \(\nu \) are given by

$$\begin{aligned} G(x) = \frac{x^{\alpha }}{2}\varvec{1}_{[0,1)}(x) +\left( 1-\frac{1}{2x^{\alpha }}\right) \varvec{1}_{[1,\infty )}(x) \end{aligned}$$

and

$$\begin{aligned} H_\alpha (x) = \frac{x^{2\alpha }}{2}\varvec{1}_{[0,1)}(x) + \frac{1}{2} \varvec{1}_{[1,\infty )}(x), \end{aligned}$$

respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x) = \left\{ \begin{array}{lcl} \frac{ n+1}{2^n}\, x^{\alpha n} &{} \hbox { for } &{} x \in [0,1]; \\ \left( 1 - \frac{1}{2x^{\alpha }} \right) ^{n-1} \left( 1 + \frac{ n-1}{2x^{\alpha }} \right) &{} \hbox { for } &{} x > 1. \end{array} \right. \end{aligned}$$

In the next example, we consider a unit step distribution \(\nu \) which is a stable probability measure for the Kendall random walk (see Sect. 4, Proposition 4.4).

Example 3.4

The step distribution \(\nu \) for the Kendall random walk in this example is given by the cdf of the form

$$\begin{aligned} F(x) = \left( 1 + m_{\nu }^{(\alpha )} x^{-\alpha } \right) e^{-m_{\nu }^{(\alpha )} x^{-\alpha }} \varvec{1}_{(0,\infty )}(x), \end{aligned}$$

where \(\alpha > 0\) and \(m_{\nu }^{(\alpha )}>0\) is a parameter. Then the Williamson transform and the truncated \(\alpha \)-moment of the measure \(\nu \) are given by

$$\begin{aligned} G(x) = \exp \{-m_{\nu }^{(\alpha )} x^{-\alpha }\} \varvec{1}_{(0,\infty )}(x) \ \ \ \textrm{and} \ \ \ H_\alpha (x) = m_{\nu }^{(\alpha )} G(x), \end{aligned}$$

respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x) = \left( 1 + n m_{\nu }^{(\alpha )} x^{-\alpha } \right) \exp \{- n m_{\nu }^{(\alpha )} x^{-\alpha }\} \varvec{1}_{(0,\infty )}(x). \end{aligned}$$
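For the reader's convenience, we note how the last formula follows from Proposition 3.1 (ii): since \(H_\alpha (x) = m_{\nu }^{(\alpha )} G(x)\), for any \(x > 0\),

$$\begin{aligned} F_n(x) = G(x)^{n-1} \bigl [n x^{-\alpha } H_\alpha (x) + G(x) \bigr ] = G(x)^{n}\left( 1 + n m_{\nu }^{(\alpha )} x^{-\alpha }\right) = \left( 1 + n m_{\nu }^{(\alpha )} x^{-\alpha } \right) \exp \{- n m_{\nu }^{(\alpha )} x^{-\alpha }\}. \end{aligned}$$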

Example 3.5

Let \(\nu \) be the standard uniform distribution with the pdf \(\nu (\text {d}x) = \varvec{1}_{(0,1)}(x) \text {d}x\), denoted by U(0, 1). Then the Williamson transform and the truncated \(\alpha \)-moment of \(\nu \) are given by

$$\begin{aligned} G(x) = (x \wedge 1) - \frac{(x \wedge 1)^{\alpha +1}}{(\alpha +1)x^{\alpha }}, \ \ \ \textrm{and} \ \ \ H_\alpha (x) = \frac{(x \wedge 1)^{\alpha +1}}{(\alpha +1)}, \end{aligned}$$

respectively. Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x)= & {} \left( \frac{\alpha }{\alpha +1}\right) ^n \left( 1+\frac{n}{\alpha }\right) x^n \varvec{1}_{[0,1)}(x)\\{} & {} + \left( 1-\frac{1}{(\alpha +1)x^{\alpha }}\right) ^{n-1} \left( 1+\frac{n-1}{( \alpha +1)x^{\alpha }}\right) \varvec{1}_{[1,\infty )}(x). \end{aligned}$$

Example 3.6

Let \(\nu = \gamma _{a,b},\; a,b>0\), be Gamma distribution with the pdf

$$\begin{aligned} \gamma _{a,b}(\text {d}x)=\frac{b^a}{\Gamma (a)} x^{a-1} e^{-bx} \varvec{1}_{(0,\infty )}(x) \text {d}x. \end{aligned}$$

Then the Williamson transform and the truncated \(\alpha \)-moment of measure \(\nu \) are given by

$$\begin{aligned} G(x) = \gamma _{a,b}(0,x] - \frac{\Gamma (a+\alpha )}{\Gamma (a)} b^{-\alpha } x^{-\alpha } \gamma _{a+\alpha ,b}(0,x], \end{aligned}$$

and

$$\begin{aligned} H_\alpha (x) = \frac{\Gamma (a+\alpha )}{\Gamma (a)} b^{-\alpha } \gamma _{a+\alpha ,b}(0,x], \end{aligned}$$

where \(\gamma _{a,b}(0,x] = \frac{b^a}{\Gamma (a)} \! \int _0^x t^{a-1} e^{-bt}\, \text {d}t\). Hence, by Proposition 3.1 (ii), for any \(n = 2,3,...\), we have

$$\begin{aligned} F_n(x)= & {} \left[ \gamma _{a,b}(0,x] -\frac{\Gamma (a+\alpha )}{\Gamma (a)} b^{-\alpha } x^{-\alpha } \gamma _{a+\alpha ,b}(0,x]\right] ^{n-1} \\{} & {} \cdot \left[ \gamma _{a,b}(0,x] +\frac{\Gamma (a+\alpha )}{\Gamma (a)} (n-1) b^{-\alpha } x^{-\alpha } \gamma _{a+\alpha ,b}(0,x] \right] . \end{aligned}$$

4 Limit Theorems

In this section, we investigate the limiting behavior of Kendall random walks and connected continuous time processes. The analysis is based on inverting the Williamson transform, as given in Proposition 3.1. Moreover, as shown in Sect. 2, the Kendall convolution is strongly related to the Pareto distribution. Hence, regular variation techniques play a crucial role in the analysis of the asymptotic behavior and the limit theorems for the processes studied in this section.

We start with the analysis of the asymptotic behavior of the tail distribution of the random variables \(X_n\).

Theorem 2

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Then

$$\begin{aligned} \overline{F}_n(x) = n\overline{F}(x) + \frac{1}{2} n (n-1) (H_\alpha (x))^2 x^{-2\alpha } (1 + o(1)) \end{aligned}$$

as \(x \rightarrow \infty \).

The proof of Theorem 2 is presented in Sect. 5.4.
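As an illustration of Theorem 2 (a consistency check we add here), consider the step distribution of Example 3.4, for which \(\overline{F}(x) = 1 - \left( 1 + m_{\nu }^{(\alpha )} x^{-\alpha } \right) e^{-m_{\nu }^{(\alpha )} x^{-\alpha }} = \frac{1}{2} \left( m_{\nu }^{(\alpha )}\right) ^2 x^{-2\alpha } (1 + o(1))\) and \(H_\alpha (x) \rightarrow m_{\nu }^{(\alpha )}\) as \(x \rightarrow \infty \). The right-hand side of Theorem 2 then equals

$$\begin{aligned} n\overline{F}(x) + \frac{1}{2} n (n-1) (H_\alpha (x))^2 x^{-2\alpha } (1 + o(1)) = \frac{1}{2} n^2 \left( m_{\nu }^{(\alpha )}\right) ^2 x^{-2\alpha } (1 + o(1)), \end{aligned}$$

which agrees with the direct expansion of \(\overline{F}_n(x) = 1 - \left( 1 + n m_{\nu }^{(\alpha )} x^{-\alpha } \right) e^{-n m_{\nu }^{(\alpha )} x^{-\alpha }}\) as \(x \rightarrow \infty \).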

The following corollary is a direct consequence of Theorem 2.

Corollary 4.1

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\). Moreover, let

  1. (i)

    \(\overline{F}(x)\) be regularly varying with parameter \(\theta - \alpha \) as \(x \rightarrow \infty \), where \(0< \theta < \alpha \). Then

    $$\begin{aligned} \overline{F}_n(x) = n\overline{F}(x) (1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$
  2. (ii)

    \(m_\nu ^{(\alpha )} < \infty \). Then

    $$\begin{aligned} \overline{F}_n(x) = n\overline{F}(x) + \frac{1}{2} n (n-1) \left( m_\nu ^{(\alpha )}\right) ^2 x^{-2\alpha } (1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$
  3. (iii)

    \(\overline{F}(x) = o\left( x^{-2\alpha }\right) \) as \(x \rightarrow \infty \). Then

    $$\begin{aligned} \overline{F}_n(x) = \frac{1}{2} n (n-1) \left( m_\nu ^{(\alpha )}\right) ^2 x^{-2\alpha }(1 + o(1)) \ \ \ \ \ \textrm{as} \ x \rightarrow \infty . \end{aligned}$$

Remark 4.2

Corollary 4.1 (i) shows that, in the case of a regularly varying step distribution \(\nu \), the tail distribution of the random variable \(X_n\) is asymptotically equivalent to the tail distribution of the maximum of n i.i.d. random variables with distribution \(\nu \).

In the next proposition, we investigate the limit distribution of Kendall random walks in the case of a finite \(\alpha \)-moment as well as for regularly varying tail of the unit step. We start with the following observation.

Remark 4.3

Due to Proposition 1 in [6], the random variable X belongs to the domain of attraction of a stable measure with respect to the Kendall convolution if and only if \(1 - G(t)\) is a regularly varying function at \(\infty \).

Notice that \(1 - G(t)\) is regularly varying whenever the random variable X has a finite \(\alpha \)-moment or its tail is regularly varying at infinity. The following proposition formalizes this observation by providing formulas for stable distributions with respect to Kendall convolution.

Proposition 4.4

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\).

  1. (i)

    If \(m_{\nu }^{(\alpha )} <\infty \) then, as \(n \rightarrow \infty \),

    $$\begin{aligned} n^{-1/\alpha } X_n {\mathop {\rightarrow }\limits ^{d}} X, \end{aligned}$$

    where the cdf of the random variable X is given in Example 3.4 by the following formula:

    $$\begin{aligned} \rho _{\nu ,\alpha }(0,x] = \left( 1 + m_{\nu }^{(\alpha )} x^{-\alpha } \right) e^{-m_{\nu }^{(\alpha )} x^{-\alpha }} \varvec{1}_{(0,\infty )}(x) \end{aligned}$$
    (7)

    and the corresponding pdf of distribution of X is equal to

    $$\begin{aligned} \rho _{\nu ,\alpha }(\text {d}x) = \alpha \left( m_{\nu }^{(\alpha )} \right) ^2 x^{-2\alpha -1} \exp \{-m_{\nu }^{(\alpha )} x^{-\alpha }\} \varvec{1}_{(0,\infty )}(x) \text {d}x. \end{aligned}$$
    (8)
  2. (ii)

    If \(\overline{F}\) is regularly varying as \(x \rightarrow \infty \) with parameter \(\theta - \alpha \), where \(0 \leqslant \theta < \alpha \), then there exists a sequence \(\{a_n\}\), \(a_n \rightarrow \infty \), such that

    $$\begin{aligned} a_n^{-1} X_n {\mathop {\rightarrow }\limits ^{d}} X, \end{aligned}$$

    where the cdf of random variable X is given by

    $$\begin{aligned} \rho _{\nu ,\alpha ,\theta }(0,x] = \left( 1 + x^{-(\alpha -\theta )}\right) e^{-x^{-(\alpha -\theta )}}\varvec{1}_{(0,\infty )}(x) \end{aligned}$$
    (9)

    and the corresponding pdf of X as follows

    $$\begin{aligned} \rho _{\nu ,\alpha ,\theta }(\text {d}x) = (\alpha - \theta )\, x^{-2(\alpha -\theta )-1} \exp \{-x^{-(\alpha -\theta )}\} \varvec{1}_{(0,\infty )}(x) \textrm{d}x. \end{aligned}$$
    (10)

The proof of Proposition 4.4 is presented in Sect. 5.5.
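The convergence in Proposition 4.4 (i) can also be observed numerically; the following sketch (our illustration, reusing the hypothetical kendall_random_walk helper from Sect. 2) compares the empirical cdf of \(n^{-1/\alpha } X_n\) with formula (7) for the uniform unit step of Example 3.5, for which \(m_{\nu }^{(\alpha )} = 1/(\alpha +1)\).

```python
import numpy as np

alpha, n, reps = 2.0, 200, 20_000
rng = np.random.default_rng(2)
m_alpha = 1.0 / (alpha + 1.0)                       # alpha-moment of U(0,1)

scaled = np.empty(reps)
for i in range(reps):
    path = kendall_random_walk(lambda size, r: r.uniform(size=size), alpha, n, rng=rng)
    scaled[i] = path[n] / n ** (1.0 / alpha)        # n^{-1/alpha} X_n

x = 1.0
empirical = np.mean(scaled <= x)
limit = (1.0 + m_alpha * x ** (-alpha)) * np.exp(-m_alpha * x ** (-alpha))   # formula (7)
print(empirical, limit)   # close for large n, up to Monte Carlo error
```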

Remark 4.5

Notice that the stable distributions \(\rho _{\nu , \alpha }\) and \(\rho _{\nu , \alpha , \theta }\) derived in Proposition 4.4 are special cases of the Inverse Slash Fréchet distribution described in Section 3.1 in [3], which is a stable distribution with respect to the max-slash operation.

Now we define a new stochastic process \(\{Z_n(t): n \in {\mathbb {N}}_0\}\) connected with the Kendall random walk \(\{X_n: n \in {\mathbb {N}}_0\}\) as follows:

$$\begin{aligned} \left\{ Z_n(t)\right\} {\mathop {=}\limits ^{d}} \left\{ a_n^{-1} X_{[nt]}\right\} , \end{aligned}$$

where \([\cdot ]\) denotes the integer part and the sequence \(\{a_n\}\) is such that \(a_n > 0\), and \(\lim _{n \rightarrow \infty } a_n = \infty \).

In the following theorem, we prove convergence of the finite-dimensional distributions of the process \(\{Z_n(t)\}\), for an appropriately chosen sequence \(\{a_n\}\).

Theorem 3

Let \(\{X_n: n \in {\mathbb {N}}_0\}\) be a Kendall random walk with parameter \(\alpha >0\) and unit step distribution \(\nu \in {\mathcal {P}}_+\).

  1. (i)

    If \(m_{\nu }^{(\alpha )}<\infty \) and \(a_n = n^{1/\alpha } (1 + o(1))\), as \(n\rightarrow \infty \), then

    $$\begin{aligned} \{Z_n(t)\}{\mathop {\rightarrow }\limits ^{fdd}} \{Z(t)\}, \end{aligned}$$

    where, for any \(0 = t_0 \le t_1 \le ... \le t_k\), the finite-dimensional distributions of \(\{Z(t)\}\) are given by

    $$\begin{aligned}{} & {} {\mathbb {P}}\left( Z(t_1)\le z_1, Z(t_2)\le z_2,\ldots , Z(t_k)\le z_k \right) \\{} & {} \quad = \sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k \left( \frac{\left( t_j - t_{j-1}\right) }{z_j^\alpha } m_{\nu }^{(\alpha )} \right) ^{\epsilon _j} \\{} & {} \quad \qquad \cdot \exp \left\{ - m_{\nu }^{(\alpha )} \sum \limits _{i=1}^k z_i^{-\alpha } (t_i-t_{i-1})\right\} , \end{aligned}$$
  2. (ii)

    If \(\overline{F}(\cdot )\) is regularly varying as \(x \rightarrow \infty \) with parameter \(\theta - \alpha \), where \(0 \leqslant \theta < \alpha \), then there exists a sequence \(\{a_n\}, a_n \rightarrow \infty \) such that

    $$\begin{aligned} Z_n(t){\mathop {\rightarrow }\limits ^{fdd}} Z(t), \end{aligned}$$

    where, for any \(0 = t_0 \le t_1 \le ... \le t_k\), the finite-dimensional distributions of \(\{Z(t)\}\) are given by

    $$\begin{aligned}{} & {} {\mathbb {P}}\left( Z(t_1)\le z_1, Z(t_2)\le z_2,\cdots , Z(t_k)\le z_k \right) \\{} & {} \quad = \sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k \left( \left( t_j - t_{j-1}\right) z_j^{\theta -\alpha } \right) ^{\epsilon _j} \\{} & {} \qquad \cdot \exp \left\{ - \sum \limits _{i=1}^k z_i^{\theta - \alpha } (t_i-t_{i-1})\right\} , \end{aligned}$$

where in both above cases we have

$$\begin{aligned} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) = 1 \ \ \ \textrm{for} \ \ \ s \in \{0,1\} \end{aligned}$$

with

$$\begin{aligned} \tilde{\epsilon }_1= & {} \min \left\{ i: \epsilon _i = 1 \right\} , \; \dots , \tilde{\epsilon }_m:= \min \left\{ i>\tilde{\epsilon }_{m-1}: \epsilon _i = 1 \right\} , \\ m= & {} 2, 3, \dots , s, \ \ \ s = \sum _{i = 1}^k \epsilon _i. \end{aligned}$$

The proof of Theorem 3 is presented in Sect. 5.6.

5 Proofs

In this section, we present detailed proofs of our results.

5.1 Proof of Proposition 2.4

Due to the independence of the sequences \(\{Y_k\}\), \(\{\xi _k\}\), and \(\{\theta _k\}\), it follows directly from Definition 2 that the process \(\{X_n\}\) satisfies the Markov property.

Now, let \(n \in {\mathbb {N}}\) be fixed. We shall show that, for all \(k \in {\mathbb {N}}, A \in {\mathcal {B}}({\mathbb {R}}_{+}), x\ge 0, \alpha >0\), the transition probabilities of the process \(\{X_n\}\) are of the form (4). In order to do this, we proceed by induction. By Definition 2, we have

$$\begin{aligned} {\mathbb {P}}\left( X_n \in A | X_{n-1} = x\right)= & {} \int \limits _{0}^{\infty } {\mathbb {P}}\left( X_n \in A | X_{n-1} = x, Y_n = y \right) \nu (\text {d}y) \\= & {} I_1 + I_2, \end{aligned}$$

where

$$\begin{aligned} I_1 = \int \limits _{0}^{\infty } {\mathbb {P}}\left( \max (x,y) \theta _n \in A, \xi _n < \left( \frac{\min (x,y)}{\max (x,y)}\right) ^{\alpha } \right) \nu (\text {d}y) \end{aligned}$$

and

$$\begin{aligned} I_2 = \int \limits _{0}^{\infty } \textbf{1}_{A}(\max (x,y)) {\mathbb {P}}\left( \xi _n > \left( \frac{\min (x,y)}{\max (x,y)}\right) ^\alpha \right) \nu (\text {d}y). \end{aligned}$$

By the independence of the random variables \(\theta _n\) and \(\xi _n\), we have

$$\begin{aligned} I_1 = \int \limits _{0}^{\infty } \left\{ \left( \frac{\min (x,y)}{\max (x,y)}\right) ^{\alpha } {\mathbb {P}}\left( \max (x,y) \theta _n \in A \right) \right\} \nu (\text {d}y). \end{aligned}$$

Hence,

$$\begin{aligned} {\mathbb {P}}\left( X_n \in A | X_{n-1} = x\right)= & {} \int \limits _{0}^{\infty } \left( \frac{\min (x,y)}{\max (x,y)} \right) ^{\alpha } T_{\max (x,y)} \pi _{2\alpha }(A)\nu (\text {d}y) \nonumber \\{} & {} + \int \limits _{0}^{\infty } \left( 1-\left( \frac{\min (x,y)}{\max (x,y)}\right) ^{\alpha }\right) \delta _{\max (x,y)}(A)\nu (\text {d}y)\nonumber \\= & {} \int \limits _{0}^{\infty } \left( T_{\max (x,y)} \left( \delta _{\frac{\min (x,y)}{\max (x,y)}} \vartriangle _{\alpha } \delta _1\right) \right) (A)\nu (\text {d}y) \end{aligned}$$
(11)
$$\begin{aligned}= & {} \int \limits _{0}^{\infty } \left( \delta _x \vartriangle _{\alpha } \delta _y\right) (A)\nu (\text {d}y) =\left( \delta _x \vartriangle _{\alpha } \nu \right) (A), \end{aligned}$$
(12)

where (11) follows from Definition 1 and (12) follows from (2). This completes the first step of the proof by induction.

Now, assuming that

$$\begin{aligned} {\mathbb {P}}\left( X_{n+k} \in A | X_n = x\right) = \left( \delta _x \vartriangle _{\alpha } \nu ^{\vartriangle _{\alpha } k}\right) (A) \end{aligned}$$
(13)

holds for some \(k \in {\mathbb {N}}\), we shall establish its validity for \(k + 1\).

Due to the Chapman–Kolmogorov equation ([23]) for the process \(\{X_n\}\), we have

$$\begin{aligned} {\mathbb {P}}\left( X_{n+k+1} \in A | X_{n} = x\right)= & {} \int \limits _{0}^{\infty }\int \limits _{A} P_1(y, dz) P_k(x, \text {d}y) \nonumber \\= & {} \int \limits _{0}^{\infty }\left( \delta _y \vartriangle _{\alpha } \nu \right) (A) \left( \delta _x \vartriangle _{\alpha } \nu ^{\vartriangle _{\alpha } k} \right) (\text {d}y) \end{aligned}$$
(14)
$$\begin{aligned}= & {} \left( \delta _x \vartriangle _{\alpha } \nu ^{\vartriangle _{\alpha } k+1} \right) (A), \end{aligned}$$
(15)

where (14) follows from (12) and (13) while (15) follows from (2). This completes the induction argument and the proof. \(\square \)

5.2 Proof of Lemma 3.2

First, notice that, by Definition 1, we have

$$\begin{aligned} \left( \delta _x \vartriangle _{\alpha } \delta _y\right) ((0,t])= & {} \left( \frac{\min (x,y)}{\max (x,y)}\right) ^{\alpha } {\mathbb {P}}\left( \max (x,y) \theta \le t \right) \nonumber \\{} & {} + \left( 1-\left( \frac{\min (x,y)}{\max (x,y)}\right) ^{\alpha }\right) \textbf{1}_{\{\max (x,y)< t\}}\nonumber \\= & {} \left( 1-\frac{x^{\alpha }y^{\alpha }}{t^{2\alpha }}\right) \textbf{1}_{\{x< t, y < t\}} \end{aligned}$$
(16)
$$\begin{aligned}= & {} \left[ \Psi \left( \frac{x}{t}\right) + \Psi \left( \frac{y}{t}\right) - \Psi \left( \frac{x}{t}\right) \Psi \left( \frac{y}{t}\right) \right] \textbf{1}_{\{x< t, y < t\}}. \end{aligned}$$
(17)

In order to prove (i), observe that by (2) and (17) we have

$$\begin{aligned} P_n (x, ((0,t]))= & {} \int \limits _0^{\infty } \left( \delta _x \vartriangle _{\alpha } \delta _y\right) ((0,t]) \nu ^{\vartriangle _{\alpha } n}(\text {d}y)\nonumber \\= & {} \int \limits _0^{t} \left[ \Psi \left( \frac{x}{t}\right) + \Psi \left( \frac{y}{t}\right) - \Psi \left( \frac{x}{t}\right) \Psi \left( \frac{y}{t}\right) \right] \textbf{1}_{\{x< t \}} \nu ^{\vartriangle _{\alpha } n}(\text {d}y) \nonumber \\= & {} \left[ \Psi \left( \frac{x}{t} \right) F_n(t) + \left( 1-\Psi \left( \frac{x}{t} \right) \right) G_n(t)\right] \textbf{1}_{\{x < t \}}. \end{aligned}$$
(18)

In order to complete the proof of case (i), it suffices to combine (18) with (3) and Proposition 3.1 (ii).

To prove (ii), observe that integration by parts leads to

$$\begin{aligned} \int \limits _0^t w^{\alpha } \left( \delta _x \vartriangle _{\alpha } \delta _y\right) (dw)= & {} t^{\alpha } \left( \delta _x \vartriangle _{\alpha } \delta _y\right) ((0,t]) - \int \limits _0^t \alpha w^{\alpha -1} \left( \delta _x \vartriangle _{\alpha } \delta _y\right) ((0,w]) dw \nonumber \\= & {} \left( x^{\alpha } - 2 \frac{x^{\alpha } y^{\alpha }}{t^{\alpha }} + y^{\alpha } \right) \textbf{1}_{\{x \vee y < t \}}, \end{aligned}$$
(19)

where (19) follows from (16). Applying (19), we obtain

$$\begin{aligned} \int \limits _0^t w^{\alpha } P_n(x,dw)= & {} \int \limits _0^{\infty } \int \limits _0^t w^{\alpha } \left( \delta _x \vartriangle _{\alpha } \delta _y \right) (dw) \nu ^{\vartriangle _{\alpha } n}(\text {d}y) \nonumber \\= & {} \int \limits _0^t \left( x^{\alpha } - 2 \frac{x^{\alpha } y^{\alpha }}{t^{\alpha }} + y^{\alpha } \right) \nu ^{\vartriangle _{\alpha } n}(\text {d}y) \textbf{1}_{\{x< t \}} \nonumber \\= & {} \left( x^{\alpha } F_n(t)- 2 \frac{x^{\alpha }}{t^{\alpha }} H_\alpha ^{(n)}(t) + H_\alpha ^{(n)}(t)\right) \textbf{1}_{\{x < t \}}, \end{aligned}$$
(20)

where \(H_\alpha ^{(n)}(t):= \int _0^t y^\alpha \nu ^{\vartriangle _{\alpha } n}(\text {d}y) = t ^{\alpha } \left( F_n(t) - G_n(t) \right) \) by (6). Finally, the proof of the case (ii) is completed by combining (20) with Proposition 3.1 (ii). \(\square \)

5.3 Proof of Lemma 3.3

Let \(k=1\). Then by Lemma 3.2, we obtain

$$\begin{aligned}{} & {} \int \limits _{0}^{x_1}\Psi \left( \frac{y_1}{x_2}\right) P_{n_1}(y_0, \text {d}y_1) = P_{n_1}(y_0, ((0,x_1])) - x_2^{-\alpha } \int \limits _0^{x_1} y_1^{\alpha } P_{n_1}(y_0, \text {d}y_1)\\{} & {} = \left[ \Psi \left( \frac{y_0}{x_2} \right) G^{n_1}(x_1) +\frac{n_1}{x_1^{\alpha }} G^{n_1-1}(x_1) H_\alpha ^{(1)}(x_1) \Psi \left( \frac{y_0}{x_1} \right) \Psi \left( \frac{x_1}{x_2} \right) \right] \textbf{1}_{\{y_0 < x_1\}}, \end{aligned}$$

which completes the first step of the induction.

Now, assume that the formula holds for \(k\in {\mathbb {N}}\). We shall establish its validity for \(k+1\). Let

$$\begin{aligned} \tilde{\eta }_1&:= \min \left\{ i \ge 2: \epsilon _i = 1 \right\} , \; \dots , \tilde{\eta }_m := \min \left\{ i>\tilde{\eta }_{m-1} : \epsilon _i = 1 \right\} , \\ m&= 2, 3, \dots , s_2, \ \ \ s_2 := \sum _{i = 2}^{k+1} \epsilon _i. \end{aligned}$$

Additionally, we denote \(s_1:= \sum _{i = 1}^{k+1} \epsilon _i\).

Moreover, let

$$\begin{aligned}{} & {} {\mathcal {A}}_{k+1}^0:= \{ (0,\epsilon _2,\epsilon _3,\cdots , \epsilon _{k+1}) \in \{0,1\}^{k+1}: (\epsilon _2,\epsilon _3, \cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_k \}, \\{} & {} {\mathcal {A}}_{k+1}^1:= \{ (1,\epsilon _2,\epsilon _3,\cdots , \epsilon _{k+1}) \in \{0,1\}^{k+1}: (\epsilon _2,\epsilon _3,\cdots , \epsilon _{k+1}) \in {\mathcal {A}}_k \}. \end{aligned}$$

By splitting \(\{0,1\}^{k+1}\) into the four subfamilies \({\mathcal {A}}_{k+1}^0\), \({\mathcal {A}}_{k+1}^1\), \(\{(1,0,...,0)\}\), and \(\{(0,...,0)\}\), and applying the formula for k together with the first induction step, we obtain

$$\begin{aligned}{} & {} \int \limits _{0}^{x_1}\left[ \int \limits _{0}^{x_2} \cdots \int \limits _{0}^{x_{k+1}} \Psi \left( \frac{y_{k+1}}{x_{k+2}}\right) P_{n_{k+1}-n_{k}}(y_{k}, \text {d}y_{k+1}) \cdots P_{n_2-n_1}(y_1, \text {d}y_2) \right] P_{n_1}(y_0, \text {d}y_1)\nonumber \\{} & {} \quad = \sum \limits _{(\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_k} \Psi \left( \frac{x_{\tilde{\eta }_{s_2}}}{x_{k+2}} \right) \prod _{i=1}^{s_2-1} \Psi \left( \frac{x_{\tilde{\eta }_i}}{x_{\tilde{\eta }_{i+1}}}\right) \prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \nonumber \\{} & {} \qquad \cdot \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j} \int _0^{x_1}\Psi \left( \frac{y_1}{x_{\tilde{\eta }_1}} \right) P_{n_1}(y_0, \text {d}y_1) \nonumber \\{} & {} \qquad + \prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1}} \int _0^{x_1}\Psi \left( \frac{y_1}{x_{k+2}} \right) P_{n_1}(y_0, \text {d}y_1)\nonumber \\{} & {} \quad = S[{\mathcal {A}}_{k+1}^0]+ S[{\mathcal {A}}_{k+1}^1] + S[\{(1,0,...,0)\}]+ S[\{(0,...,0)\}], \end{aligned}$$
(21)

where

$$\begin{aligned} S[{\mathcal {A}}_{k+1}^0]= & {} \sum \limits _{(0,\epsilon _2,\epsilon _3, \cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^0} \Psi \left( \frac{y_0}{x_{\tilde{\eta }_1}} \right) \Psi \left( \frac{x_{\tilde{\eta }_{s_2}}}{x_{k+2}} \right) \prod _{i=1}^{s_2-1} \Psi \left( \frac{x_{\tilde{\eta }_i}}{x_{\tilde{\eta }_{i+1}}}\right) \\{} & {} \cdot (G(x_1))^{n_1}\prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1}) H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j}, \\ S[{\mathcal {A}}_{k+1}^1]= & {} \sum \limits _{(1,\epsilon _2,\epsilon _3, \cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^1} \Psi \left( \frac{y_0}{x_1} \right) \Psi \left( \frac{x_{\tilde{\eta }_{s_2}}}{x_{k+2}} \right) \prod _{i=1}^{s_2-1} \Psi \left( \frac{x_{\tilde{\eta }_i}}{x_{\tilde{\eta }_{i+1}}}\right) \Psi \left( \frac{x_1}{x_{\tilde{\eta }_{1}}}\right) \\{} & {} \cdot \frac{n_1}{x_1^{\alpha }} (G(x_1))^{n_1-1} H_\alpha ^{(1)}(x_1) \prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j}\\{} & {} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j}, \\ S[\{(1,0,...,0)\}]= & {} \frac{n_1}{x_1^{\alpha }} G^{n_1-1} (x_1) H_\alpha ^{(1)}(x_1) \Psi \left( \frac{y_0}{x_1} \right) \Psi \left( \frac{x_1}{x_{k+2}} \right) \prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1}}, \\ S[\{(0,...,0)\}]= & {} \Psi \left( \frac{y_0}{x_{k+2}} \right) G(x_1)^{n_1}\prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1}}. \end{aligned}$$

Observe that for any sequence \((0,\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^0\), we have

\((\tilde{\epsilon }_1, \tilde{\epsilon }_2,...,\tilde{\epsilon }_{s_1}) = (\tilde{\eta }_1, \tilde{\eta }_2,..., \tilde{\eta }_{s_2}) \), with \(s_1 = s_2\), which implies that

$$\begin{aligned} \Psi \left( \frac{y_0}{x_{\tilde{\eta }_1}} \right) \Psi \left( \frac{x_{\tilde{\eta }_{s_2}}}{x_{k+2}} \right) \prod _{i=1}^{s_2-1} \Psi \left( \frac{x_{\tilde{\eta }_i}}{x_{\tilde{\eta }_{i+1}}}\right) = \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) . \end{aligned}$$

Moreover,

$$\begin{aligned}{} & {} (G(x_1))^{n_1}\prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha } \right) ^{\epsilon _j} \\{} & {} \quad =\prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha } \right) ^{\epsilon _j}. \end{aligned}$$

Hence,

$$\begin{aligned} S[{\mathcal {A}}_{k+1}^0]= & {} \sum \limits _{(0,\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^0} \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \nonumber \\{} & {} \cdot \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j}. \end{aligned}$$
(22)

Analogously, for any sequence \((1,\epsilon _2,\epsilon _3,\cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^1\) we have

\((\tilde{\epsilon }_1, \tilde{\epsilon }_2,...,\tilde{\epsilon }_{s_1}) = (1, \tilde{\eta }_1, \tilde{\eta }_2,..., \tilde{\eta }_{s_2})\), with \(s_1 = s_2 + 1\) which implies that

$$\begin{aligned}{} & {} \Psi \left( \frac{y_0}{x_1} \right) \Psi \left( \frac{x_1}{x_{\tilde{\eta }_{1}}}\right) \Psi \left( \frac{x_{\tilde{\eta }_{s_2}}}{x_{k+2}} \right) \prod _{i=1}^{s_2-1} \Psi \left( \frac{x_{\tilde{\eta }_i}}{x_{\tilde{\eta }_{i+1}}}\right) \nonumber \\{} & {} \quad =\Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_1}{x_{\tilde{\eta }_{1}}}\right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-2} \Psi \left( \frac{x_{\tilde{\epsilon }_{i+1}}}{x_{\tilde{\epsilon }_{i+2}}}\right) \nonumber \\{} & {} \quad = \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) , \end{aligned}$$
(23)

where (23) is the consequence of

$$\begin{aligned} \Psi \left( \frac{x_1}{x_{\tilde{\eta }_{1}}}\right) \prod _{i=1}^{s_1-2} \Psi \left( \frac{x_{\tilde{\epsilon }_{i+1}}}{x_{\tilde{\epsilon }_{i+2}}}\right) = \prod \limits _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_{i}}}{x_{\tilde{\epsilon }_{i+1}}}\right) . \end{aligned}$$

Moreover,

$$\begin{aligned}{} & {} \frac{n_1}{x_1^{\alpha }} (G(x_1))^{n_1-1} H_\alpha ^{(1)}(x_1) \prod _{j=2}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha } \right) ^{\epsilon _j} \\{} & {} \quad = \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j}. \end{aligned}$$

Hence,

$$\begin{aligned} S[{\mathcal {A}}_{k+1}^1]= & {} \sum \limits _{(1,\epsilon _2,\epsilon _3, \cdots ,\epsilon _{k+1}) \in {\mathcal {A}}_{k+1}^1} \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) \nonumber \\{} & {} \cdot \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j}. \end{aligned}$$
(24)

Additionally, observe that

$$\begin{aligned} S[\{(1,0,...,0)\}] = \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{s_1}}{x_{k+2}} \right) \frac{n_1 H_\alpha ^{(1)}(x_1) }{x_1^{\alpha }} \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \end{aligned}$$
(25)

and due to the fact that \(n_0 = 0\), we have

$$\begin{aligned} S[\{(0,...,0)\}] = \Psi \left( \frac{y_0}{x_{k+2}} \right) \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1}}. \end{aligned}$$
(26)

Finally, by combining (22), (24), (25), and (26) with (21) we obtain that

$$\begin{aligned}{} & {} \int \limits _{0}^{x_1}\left[ \int \limits _{0}^{x_2} \cdots \int \limits _{0}^{x_{k+1}} \Psi \left( \frac{y_{k+1}}{x_{k+2}}\right) P_{n_{k+1}-n_{k}}(y_{k}, \text {d}y_{k+1}) \cdots P_{n_2-n_1}(y_1, \text {d}y_2) \right] P_{n_1}(y_0, \text {d}y_1)\\{} & {} \quad =\sum \limits _{({\epsilon }_1,{\epsilon }_2,\cdots , {\epsilon }_{k+1}) \in {\mathcal {A}}_{k+1}} \Psi \left( \frac{y_0}{x_{\tilde{\epsilon }_1}} \right) \Psi \left( \frac{x_{\tilde{\epsilon }_{s_1}}}{x_{k+2}} \right) \prod _{i=1}^{s_1-1} \Psi \left( \frac{x_{\tilde{\epsilon }_i}}{x_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1} - \epsilon _j} \\{} & {} \qquad \cdot \left( \frac{(n_j - n_{j-1})H_\alpha (x_j)}{x_j^\alpha }\right) ^{\epsilon _j} + \Psi \left( \frac{y_0}{x_{k+2}} \right) \prod _{j=1}^{k+1} (G(x_j))^{n_j - n_{j-1}}. \end{aligned}$$

This completes the induction argument and the proof. \(\square \)

5.4 Proof of Theorem 2

Due to Proposition 3.1, for any \(n \ge 2\), we have

$$\begin{aligned} F_n(x)= & {} \left( F(x) - \frac{1}{x^\alpha } H_\alpha (x)\right) ^{n-1} \left( F(x) + \frac{n-1}{x^\alpha }H_\alpha (x)\right) \nonumber \\= & {} (F(x))^n + n\sum _{k = 1}^{n-1} (-1)^{k-1} {n-1 \atopwithdelims ()k-1} \frac{k-1}{k} \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^k (F(x))^{n-k} \nonumber \\{} & {} + (-1)^{n-1} (n-1) \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^n, \end{aligned}$$
(27)

where (27) follows by the observation that, for any \(a \ge 0\), \(n \ge 2\), we have

$$\begin{aligned} (1 - a)^{n-1} (1 + a(n-1)) = 1 + n\sum _{k=1}^{n-1} (-1)^{k-1} {n-1 \atopwithdelims ()k-1} \frac{k-1}{k} a^k + (-1)^{n-1} (n-1) a^{n}. \end{aligned}$$

Thus,

$$\begin{aligned} \overline{F}_n(x) = I_1 + I_2, \end{aligned}$$

where

$$\begin{aligned} I_1 = 1- (F(x))^n = n\overline{F}(x) (1 + o(1)) \end{aligned}$$

as \(x \rightarrow \infty \), and

$$\begin{aligned} I_2 = n\sum _{k = 1}^{n-1} (-1)^k {n-1 \atopwithdelims ()k-1} \frac{k-1}{k} \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^k (F(x))^{n-k} + (-1)^n (n-1) \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^n. \end{aligned}$$

Note that \(\lim _{x \rightarrow \infty } \frac{H_\alpha (x)}{x^{\alpha }} = 0\) for any measure \(\nu \in {\mathcal {P}}_{+}\) and hence

$$\begin{aligned}{} & {} n\sum _{k = 3}^{n-1} (-1)^k {n-1 \atopwithdelims ()k-1} \frac{k-1}{k} \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^k (F(x))^{n-k} + (-1)^n (n-1) \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^n \\{} & {} \quad = o\left( \frac{1}{2} n (n-1) \left( \frac{H_\alpha (x)}{x^{\alpha }}\right) ^2\right) , \end{aligned}$$

as \(x \rightarrow \infty \). This completes the proof. \(\square \)

The following lemma plays a crucial role in further analysis.

Lemma 5.1

Let \(H_\alpha \in RV_{\theta }\) with \(0<\theta <\alpha \). Then, there exists a sequence \(\{a_n\}\) such that

$$\begin{aligned} \frac{H_\alpha (a_n)}{(a_n)^{\alpha }} = \frac{1}{n} (1 +o(1)) \end{aligned}$$

as \(n\rightarrow \infty \).

Proof

First, observe that \( W(x):= x^{\alpha }/H_\alpha (x)\) is a regularly varying function with index \(\alpha - \theta \) as \(x \rightarrow \infty \). Then, due to Theorem 1.5.12 in [8], there exists an increasing function V(x) such that \(W(V(x)) = x(1 + o(1))\) as \(x \rightarrow \infty \). Now, in order to complete the proof, it suffices to take \(a_n = V(n)\). \(\square \)
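For instance (an illustration we add), if \(H_\alpha (x) = c\, x^{\theta }\) for all large x with some \(c > 0\) and \(0< \theta < \alpha \), then \(W(x) = x^{\alpha -\theta }/c\) and one may take \(a_n = (c\, n)^{1/(\alpha - \theta )}\), for which

$$\begin{aligned} \frac{H_\alpha (a_n)}{(a_n)^{\alpha }} = c\, a_n^{\theta - \alpha } = c\, (c\, n)^{-1} = \frac{1}{n}. \end{aligned}$$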

5.5 Proof of Proposition 4.4

Using (3), the Williamson transform of \(a_n^{-1} X_n\) is given by

$$\begin{aligned} \left[ G\left( \frac{a_n}{x}\right) \right] ^n =\left( F\left( \frac{a_n}{x}\right) - \frac{x^{\alpha }}{a_n^{\alpha }} H_\alpha \left( \frac{a_n}{x}\right) \right) ^n, \end{aligned}$$
(28)

since

$$\begin{aligned} G(t)=F(t) - t^{-\alpha } H_\alpha (t). \end{aligned}$$
(29)

In order to prove (i), observe that, under the assumption that \(m_\nu ^{(\alpha )} < \infty \), we have

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } H_\alpha \left( \frac{n^{1/\alpha }}{x}\right) = m_{\nu }^{(\alpha )} \ \ \ \ \ \textrm{and} \ \ \ \ \ \lim _{n \rightarrow \infty } F\left( \frac{n^{1/\alpha }}{x}\right) = 1, \end{aligned}$$

which, by (28), yields that

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \left[ G\left( \frac{n^{1/\alpha }}{x}\right) \right] ^n = e^{-m_{\nu }^{(\alpha )} x^{\alpha }}. \end{aligned}$$

Due to Proposition 3.1 (i), there exists a uniquely determined random variable X with cdf (7) and pdf (8) such that \(e^{-m_{\nu }^{(\alpha )} x^{\alpha }} \) is its Williamson transform. This completes the proof of case (i).

In order to prove (ii), notice that, due to Theorem 1.5.8 in [8], \(\overline{F} \in RV_{\theta -\alpha }\) implies that \(H_\alpha \in RV_{\theta }\). Hence, for any \(x > 0\), we have

$$\begin{aligned} H_\alpha \left( \frac{a_n}{x}\right) = x^{-\theta } H_\alpha (a_n) (1 + o(1)). \end{aligned}$$

Moreover, by Lemma 5.1, we can choose a sequence \(\{a_n\}\) such that

$$\begin{aligned} \frac{H_\alpha (a_n)}{(a_n)^{\alpha }} = \frac{1}{n} (1 + o(1)) \end{aligned}$$

as \(n \rightarrow \infty \). Thus,

$$\begin{aligned} \lim _{n \rightarrow \infty } \left[ G\left( \frac{a_n}{x}\right) \right] ^n = \lim \limits _{n \rightarrow \infty }\left( F\left( \frac{a_n}{x}\right) - \frac{x^{\alpha }}{a_n^{\alpha }} H_\alpha \left( \frac{a_n}{x}\right) \right) ^n = e^{-x^{\alpha - \theta }}. \end{aligned}$$

Due to Proposition 3.1 (i), there exists a random variable X with cdf (9) and pdf (10) such that \(e^{-x^{\alpha - \theta }}\) is its Williamson transform. This completes the proof. \(\square \)

5.6 Proof of Theorem 3

Proof

Let \(0 =: t_0 \le t_1 \le t_2 \le \cdots \le t_k\), where \(k \in {\mathbb {N}}\). By Theorem 1, the distribution of \(\left( Z_{n}(t_1), Z_{n}(t_2),\cdots , Z_{n}(t_k) \right) \) is given by

$$\begin{aligned}{} & {} {\mathbb {P}}\left( Z_{n}(t_1)\le z_1, Z_{n}(t_2)\le z_2, \cdots , Z_{n}(t_k)\le z_k \right) \nonumber \\{} & {} \quad = {\mathbb {P}}\left( X_{[n t_1]} \le a_n z_1, X_{[n t_2]} \le a_n z_2,\cdots , X_{[n t_k]} \le a_n z_k \right) \nonumber \\{} & {} \quad = \sum \limits _{({\epsilon }_1, {\epsilon }_2,\cdots ,{\epsilon }_k) \in \{0,1\}^k} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) \prod _{j=1}^k (G(a_n z_j))^{[n t_j] - [n t_{j-1}] - \epsilon _j} \nonumber \\{} & {} \qquad \cdot \left( \frac{([n t_j] - [n t_{j-1}] ) H_\alpha (a_n z_j)}{a_n ^{\alpha } z_j^\alpha }\right) ^{\epsilon _j}, \end{aligned}$$
(30)

where

$$\begin{aligned} \prod _{i=1}^{s-1} \Psi \left( \frac{z_{\tilde{\epsilon }_i}}{z_{\tilde{\epsilon }_{i+1}}}\right) = 1 \ \ \ \textrm{for} \ \ \ s \in \{0,1\}. \end{aligned}$$

In a way analogous to the proof of Proposition 4.4 (i), we obtain

$$\begin{aligned}{} & {} \lim \limits _{n \rightarrow \infty } G\left( a_n z_j \right) ^{[nt_j] -[nt_{j-1}]-\epsilon _j}\nonumber \\{} & {} \quad = \lim _{n \rightarrow \infty } \left( F(a_n z_j) -\frac{H_\alpha (a_n z_j)}{(a_n z_j)^\alpha } \right) ^{[nt_j]-[nt_{j-1}]-\epsilon _j} \nonumber \\{} & {} \quad = \exp \left\{ - m_{\nu }^{(\alpha )} \left( t_j-t_{j-1}\right) z_j^{-\alpha } \right\} \end{aligned}$$
(31)

and

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \left( \frac{([n t_j] - [n t_{j-1}] ) H_\alpha (a_n z_j)}{a_n ^{\alpha } z_j^\alpha }\right) ^{\epsilon _j} =\left( \left( t_j - t_{j-1}\right) z_j^{-\alpha } m_{\nu }^{(\alpha )}\right) ^{\epsilon _j}. \end{aligned}$$
(32)

In order to complete the proof of (i), it suffices to let \(n \rightarrow \infty \) in (30), applying (31) and (32).

In order to prove (ii), notice that, similarly to the proof of Proposition 4.4 (ii), for any \(t_j, z_j > 0\) and \(1 \le j \le k\), we obtain

$$\begin{aligned}{} & {} \lim \limits _{n \rightarrow \infty } G\left( a_n z_j \right) ^{[nt_j] -[nt_{j-1}]-\epsilon _j} \nonumber \\{} & {} \quad = \lim _{n \rightarrow \infty } \left( F(a_n z_j) -\frac{H_\alpha (a_n z_j)}{(a_n z_j)^\alpha } \right) ^{[nt_j]-[nt_{j-1}]-\epsilon _j} \nonumber \\{} & {} \quad = \exp \left\{ -\left( t_j-t_{j-1}\right) z_j^{\theta -\alpha } \right\} \end{aligned}$$
(33)

and

$$\begin{aligned} \lim \limits _{n \rightarrow \infty } \left( \frac{([n t_j] - [n t_{j-1}]) H_\alpha (a_n z_j)}{a_n ^{\alpha } z_j^\alpha }\right) ^{\epsilon _j} =\left( \left( t_j - t_{j-1}\right) z_j^{\theta -\alpha }\right) ^{\epsilon _j}. \end{aligned}$$
(34)

In order to complete the proof, it suffices to let \(n \rightarrow \infty \) in (30), applying (33) and (34). \(\square \)