1 Motivations

Our primary motivation for writing this paper is the open problem posed in [43] (Problem 7.9.9, p. 122): to give a measurable calibration of copulas in the context of generalized convolutions, in particular for extremes. We take here the first step towards solving this problem by characterizing the maximum convolution, which is responsible for the construction of extremes, in terms of other convolutions whose algebras already have well-developed tools (e.g. the Williamson transform corresponding to the Kendall convolution). Moreover, we build new tools for the development of the theory of Lévy processes in the generalized sense [7], in particular for extremes, and for a groundbreaking renewal theory in which renewal functions have explicit analytical formulas (see [24]).

Real-world phenomena are often effects of accumulation processes. The most natural description of an accumulation process is through summation of components. However, the accumulation of components can sometimes be described more adequately by the maximum function or by the \(\ell _p\)-norm of the vector of components, and often the dependence is more complicated still, so that approximation methods are needed. This point of view originates in two papers, [7, 24], where the authors consider renewal theory and Lévy processes in the generalized convolution sense. Following these papers, we propose here to use generalized convolutions, in particular the Kendall convolution and the Kendall-type convolutions, to find connections with extreme value theory. Our choice is motivated by many interesting properties of such convolutions, including the close connection with the max-convolution and extreme value theory, the simplicity of calculating and inverting the corresponding characteristic functions, and the possibility of representing convolutions by convex linear combinations of fixed measures or by simple operations on independent random variables. Since the Kendall convolution extends the concept of the max-convolution, we call it the extremal Kendall convolution to emphasize this property.

In this paper one can find a precise description of the Kendall convolution and the Kendall-type convolutions, their exceptional properties and applications in stochastic models, some of which have not been known before. We also analyze some examples of convolutions with similar properties. Finally, we prove a stochastic representation of the Kucharczak-Urbanik convolution in terms of order statistics. This is the starting point for a novel approach to Archimedean copulas, which are now extensively studied [11, 31, 34, 35].

Generalized convolutions were defined and intensively studied by K. Urbanik in [48,49,50,51,52]. His work has its origin in the paper of Kingman [29], where the first generalized convolution, now called the Kingman (or Bessel) convolution, was defined. This convolution, the ancestor of all generalized convolutions, is closely connected with the Wiener process in \({\mathbb {R}}^n\) and the Bessel process describing the distance of the moving particle from the origin.

For a while it was not clear whether the class of generalized convolutions is rich enough to be interesting for applications and useful in stochastic simulation and mathematical modeling, but by now we know that this class is very rich and worth studying. It turned out, for example, that each generalized convolution has its own Gaussian distribution, exponential law and Poisson process with the corresponding distribution with the lack of memory property (see [22, 23, 25, 27]). Some generalized convolutions also have their origin in Delphic semi-groups [16, 28]. A different approach to generalized convolutions appeared in the theory of harmonic analysis, see e.g. [33, 44, 45].

The classical convolution, corresponding to the summation of independent random variables, and the max-convolution, corresponding to taking the maximum of independent random variables, are examples of generalized convolutions. Extreme value theory, described e.g. in [14, 41] and based on the max-convolution, is widely developed and is applied e.g. in modelling rare events with important consequences, like floods and hurricanes (see [3, 9, 41, 47]). We focus here on the Kendall convolution, defined by Urbanik in [53], which can be used to model e.g. some hydrological phenomena: the rather stable behaviour of the “natural” water level together with rarely occurring floods. We describe some distributional properties of the Kendall and Kendall-type convolutions (see [21, 30, 37]). Especially interesting and useful in modeling extremal events is the fact that for the Kendall and Kendall-type convolutions the convolution of two measures with compact supports can have a heavy tail.

In Sect. 2 we present basics of the theory of generalized convolutions.

Sect. 3 contains a list of generalized convolutions studied in this paper.

For Sect. 4 let us first recall that each generalized convolution corresponds to its own integral transform; for details and basic properties see [5, 6, 49, 51,52,53,54]. We describe some properties of the Kendall convolution through its generalized characteristic function, the Williamson transform. The inversion formula is especially simple and clear here. More information and details can be found in [7] or [56]. The Williamson transform is also used in copula theory (see e.g. [34, 35]) since it is a generator of Archimedean copulas. For asymptotic properties of the Williamson transform see [2, 24] and [31]. In Sect. 4.1 we draw the reader's attention to the fact that a generalized convolution can be defined by the corresponding integral transform treated as a proper generalized characteristic function. It turns out that such an approach was already considered in the area of harmonic analysis and the theory of special functions, see e.g. [33, 44, 45]. However, the generalized convolutions considered in those papers may not satisfy all of Urbanik's assumptions.

In Sect. 5 we show that for \(\alpha \leqslant 1\) there exists a (weakly stable) distribution \(\mu \) such that the Kendall convolution \(\lambda _1 \vartriangle _{\alpha } \lambda _2\) can be defined by the following equation:

$$\begin{aligned} \bigl ( \lambda _1 \vartriangle _{\alpha } \lambda _2 \bigr ) \circ \mu = \bigl ( \lambda _1 \circ \mu \bigr ) *\bigl ( \lambda _2 \circ \mu \bigr ), \end{aligned}$$
(1)

where \(*\) is the classical convolution and the operation \(\circ :{\mathcal {P}}_+^2 \rightarrow {\mathcal {P}}_+\) is defined as follows: \({\mathcal {L}}(\theta _1) \circ {\mathcal {L}}(\theta _2) = {\mathcal {L}}(\theta _1 \, \theta _2)\) for independent random variables \(\theta _1, \theta _2\). Generalized convolutions with this property are called weak generalized convolutions. We indicate which of the convolutions considered here are weak.

In Sect. 6 we study properties of generalized convolutions allowing the construction of the corresponding Poisson process. We start from the monotonicity property, stating that the generalized sum of positive random variables cannot be smaller than their maximum; this is necessary to have positive increments (of time). Not every generalized convolution has this property. We also study the existence of distributions with the lack of memory property with respect to a given generalized convolution. For some convolutions such distributions do not exist. The main result of this section, Theorem 4, gives a few equivalent conditions for a monotonic convolution to admit the existence of a distribution with the lack of memory property. We indicate such convolutions among the ones considered in this paper; e.g. for the Kendall convolution, \(\triangle _{\alpha }\), the power distribution \(\textrm{pow}(\alpha )\) with the density \(\alpha x^{\alpha -1} {\textbf{1}}_{[0,1]}(x)\) has the lack of memory property.

In Sect. 7 we show that for the Kendall convolution \(\vartriangle _{\alpha }\) with \(\alpha \le 1\) there exists a distribution \(\nu \), weakly stable with respect to the max-convolution, such that for any \(\lambda _1,\lambda _2\in \mathcal P_+\) (probability measures on \([0,\infty ))\) we have

$$\begin{aligned} \bigl ( \lambda _1 \vartriangle _{\alpha } \lambda _2 \bigr ) \circ \nu = \bigl ( \lambda _1 \circ \nu \bigr ) \triangledown \bigl ( \lambda _2 \circ \nu \bigr ), \end{aligned}$$

with the max-convolution \(\triangledown \) defined by \({\mathcal {L}}(\theta _1) \triangledown {\mathcal {L}}(\theta _2) = {\mathcal {L}} ( \max \{ \theta _1, \theta _2\})\), where \(\theta _1\) and \(\theta _2\) are independent positive random variables. We also have the following property, which, as will be shown in Sect. 3, follows trivially from the definition of the Kendall convolution: \(1 = \left( \delta _a \vartriangle _{\alpha } \delta _b\right) ([\max \{a,b \}, \infty ))\) \( > \left( \delta _a \vartriangle _{\alpha } \delta _b\right) ((\max \{a,b \}, \infty ))\). By these properties we can model processes such as the change of the water level in a river in continuous time, which is rather stable most of the time but occasionally goes to extremes.

An equivalent definition of the Kendall convolution, presented in Sect. 8, states that the Kendall convolution of two Dirac measures, \(\delta _a\), \(\delta _b\), is a convex linear combination of two fixed measures, with the coefficients of this combination depending on a and b. In [20] it was shown that the Kendall convolution is the only generalized convolution with this property. It was shown in [37] that if the generalized convolution of \(\delta _a\) and \(\delta _b\) is a convex combination of n fixed measures, with the coefficients of this combination depending on a and b, then the generalized convolution is similar to the Kendall convolution. We call such operations the Kendall-type convolutions. Such convex combination properties are not only useful in explicit calculations, but they also allow one to define a family of integral transforms parametrized by \(n\ge 2\), extending in this way the Williamson transform (which covers the case \(n=2\)).

Finally, in Sect. 9 we focus on preparations for studying path properties of Lévy processes with respect to a generalized convolution. In order to make this possible we need to express the given convolution in the language of operations on independent random variables. Such a construction for a given generalized convolution is called representability (for details see [7]). Here we study a simplified version of this property, expressing a generalized convolution of two measures \(\lambda _1 \diamond \lambda _2\), corresponding to the independent random variables \(\theta _1, \theta _2\), as the distribution of an explicitly defined variable \(\Psi (\theta _1, \theta _2)\). If \(\Psi (\theta _1, \theta _2)(\omega ) = {\overline{\Psi }}(\theta _1(\omega ), \theta _2(\omega ))\) a.e. for some measurable function \({\overline{\Psi }}:{\mathbb {R}}^2 \rightarrow {\mathbb {R}}\), then \({\overline{\Psi }}(x,y) = (x^p + y^p )^{1/p}\) for some \(p \in (0,\infty ]\). In all other cases \(\Psi (\theta _1, \theta _2)\) depends also on some additional random variables. For example, for the Kendall convolution, \({\mathcal {L}}(\theta _1) \vartriangle _{\alpha } {\mathcal {L}}(\theta _2)\) is the distribution of

$$\begin{aligned} M \bigl ({\textbf{1}}_{(\varrho ^{\alpha },1]}(U) + \Pi _{2 \alpha } {\textbf{1}}_{[0,\varrho ^{\alpha }]} (U) \bigr ), \end{aligned}$$

where \(M = \max \{ \theta _1, \theta _2\}\), \(\varrho = {{\min \{ \theta _1, \theta _2\}}/{\max \{ \theta _1, \theta _2\}}}\), \(\Pi _q\) is a variable with the Pareto distribution \(\pi _q\) with density \(q x^{-q-1} {\textbf{1}}_{[1,\infty )}(x)\), U has the uniform distribution on [0, 1] and \(\theta _1, \theta _2, \Pi _{2\alpha }, U\) are independent.

1.1 Notation

Throughout this paper, by \({\mathcal {P}}_+\) (respectively \({\mathcal {P}}\), or more generally \({\mathcal {P}}({\mathbb {E}})\)) we denote the family of all probability measures on the Borel subsets of \({\mathbb {R}}_+ := [0, \infty )\) (respectively \({\mathbb {R}}\), or more generally a separable Banach space \({\mathbb {E}}\)). The distribution of a random element X is denoted by \({\mathcal {L}}(X)\). A family of dilation (rescaling) operators \(T_a:{\mathcal {P}}_+\rightarrow {\mathcal {P}}_+\), \(a\in {\mathbb {R}}_+\), is defined for \(\mu \in {\mathcal {P}}_+\) and any Borel set B in the following way: \(T_a\mu (B)=\mu (B/a)\) if \(a>0\) and \(T_0 \mu = \delta _0\). Equivalently, \(T_a\mu = {\mathcal {L}}(aX)\) for \(a \in {\mathbb {R}}_+\) and \({\mathcal {L}}(X) = \mu \).

2 A Primer on Generalized Convolutions

The Kendall convolution is a well-known example of a generalized convolution, defined by K. Urbanik in [48] and studied in [49,50,51,52]. Urbanik was mainly interested in generalized convolutions on \({\mathcal {P}}_+\) and we shall do the same in this paper, but a wider approach is also possible.

In this section we present the part of the theory of generalized convolutions that is necessary for studying properties of the Kendall and other convolutions.

Definition 1

A generalized convolution is a binary, associative and commutative operation \(\diamond \) on \( {\mathcal {P}}_+\) with the following properties:

(i) \(\lambda \diamond \delta _0 = \lambda \) for all \(\lambda \in {\mathcal {P}}_+\);

(ii) \((p\lambda _1 +(1-p)\lambda _2) \diamond \lambda = p(\lambda _1 \diamond \lambda ) + (1-p)(\lambda _2 \diamond \lambda )\) for all \(p \in [0,1]\) and \(\lambda , \lambda _1, \lambda _2 \in {\mathcal {P}}_+\);

(iii) \(T_a(\lambda _1 \diamond \lambda _2) = (T_a\lambda _1) \diamond (T_a\lambda _2)\) for all \(a \ge 0\) and \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\);

(iv) if \(\lambda _n \Rightarrow \lambda \) and \(\nu _n \Rightarrow \nu \), then \(\lambda _n \diamond \nu _n \Rightarrow \lambda \diamond \nu \) for \(\lambda _n, \nu _n, \lambda , \nu \in {\mathcal {P}}_+\), \(n \in {\mathbb {N}}\), where \(\Rightarrow \) denotes weak convergence;

(v) there exists a sequence of positive numbers \((c_n)\) and a probability measure \(\nu \in {\mathcal {P}}_+\), \(\nu \ne \delta _0\), such that \(T_{c_n} \delta _1^{\diamond n}\Rightarrow \nu \) (here \(\lambda ^{\diamond n} = \underbrace{\lambda \diamond \lambda \diamond ... \diamond \lambda }_{n \;\textrm{times}}\)).

Remark 1

Note that any generalized convolution \(\diamond \) is uniquely determined by \(\delta _x \diamond \delta _1\), \(x\in [0,1]\). Indeed, by Definition 1,

  • first, for each choice of \(a,b\in {\mathbb {R}}_+\) the measure \(\delta _a \diamond \delta _b\) is uniquely determined by

$$\begin{aligned} \delta _a \diamond \delta _b = \left\{ \begin{array}{ll} T_M \bigl ( \delta _x \diamond \delta _1 \bigr ), & \textrm{if}\;M>0, \\ \delta _0, & \textrm{if}\;M=0,\end{array}\right. \end{aligned}$$

    where \(M=a\vee b:=\max \{a,b\}\), \(m =a\wedge b:= \min \{a,b\}\) and \(x = \frac{m}{M}\);

  • second, for arbitrary measures \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\)

    $$\begin{aligned} \lambda _1 \diamond \lambda _2= \int _0^{\infty } \int _0^{\infty } \left( \delta _a \diamond \delta _b \right) \, \lambda _1(da) \, \lambda _2 (db). \end{aligned}$$
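In computational terms, Remark 1 says that a sampler for \(\delta _x \diamond \delta _1\), \(x \in [0,1]\), already determines a sampler for any \(\lambda _1 \diamond \lambda _2\). The following Python sketch (our own illustration, not part of the original development) implements this reduction, with the Kendall rule of Example 3.4 below plugged in as the concrete unit sampler:

```python
import numpy as np

rng = np.random.default_rng(0)

def convolve_diracs(unit_sampler, a, b, size=1):
    """Sample from delta_a <> delta_b via Remark 1: rescale
    delta_x <> delta_1 (x = m/M) by M = max(a, b)."""
    M, m = max(a, b), min(a, b)
    if M == 0:
        return np.zeros(size)                  # delta_0 <> delta_0 = delta_0
    return M * unit_sampler(m / M, size)

def convolve(unit_sampler, theta1, theta2):
    """Sample lambda_1 <> lambda_2: mix delta_a <> delta_b over
    independent draws a ~ lambda_1, b ~ lambda_2 (the double integral)."""
    return np.array([convolve_diracs(unit_sampler, a, b)[0]
                     for a, b in zip(theta1, theta2)])

def kendall_unit(alpha):
    """delta_x <>_alpha delta_1 = (1 - x^alpha) delta_1 + x^alpha pi_{2 alpha}
    (Example 3.4); Pareto(2 alpha) is sampled by inverting its CDF."""
    def sampler(x, size):
        pareto = rng.random(size) ** (-1.0 / (2.0 * alpha))
        return np.where(rng.random(size) < x ** alpha, pareto, 1.0)
    return sampler

# e.g. 10^4 draws from delta_2 convolved with delta_3 (Kendall, alpha = 1)
sample = convolve(kendall_unit(1.0), np.full(10**4, 2.0), np.full(10**4, 3.0))
```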

Characteristic functions are important tools for the analysis of the classical convolution. It turns out that not every generalized convolution allows a reasonable analog of the characteristic function. The next definitions, introduced by K. Urbanik in [48], select those convolutions for which such an analog can be defined.

Definition 2

The class \({\mathcal {P}}_+\) equipped with the generalized convolution \(\diamond \) is called a generalized convolution algebra and denoted by \(({\mathcal {P}}_+, \diamond )\). A continuous (in the sense of weak convergence of measures) mapping \(h= h^{\diamond } :{\mathcal {P}}_+ \rightarrow {\mathbb {R}}\) is called a homomorphism of the algebra \(({\mathcal {P}}_+, \diamond )\) if for all \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\)

  • \(h(\lambda _1 \diamond \lambda _2) = h(\lambda _1) h(\lambda _2)\) and

  • \(h(p\lambda _1 + (1-p)\lambda _2) = p h(\lambda _1) + (1-p) h(\lambda _2)\) for all \(p \in [0,1]\).

Algebras admitting a non-trivial homomorphism (i.e. \(h\not \equiv 1\), \(h \not \equiv 0\)) and the corresponding generalized convolutions are called regular.

Definition 3

For a regular algebra \(({\mathcal {P}}_+, \diamond )\) (or for the regular generalized convolution \(\diamond \)) we define a probability kernel \(\Omega :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) by

$$\begin{aligned} \Omega (t) \overset{def}{=}\ h(T_t \delta _1) = h(\delta _t), \quad t \geqslant 0, \end{aligned}$$

and a \(\diamond \)-generalized characteristic function \(\Phi _{\lambda }^{\diamond }:{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) of \(\lambda \in {\mathcal {P}}_+\) as an integral transform with the kernel \(\Omega \):

$$\begin{aligned} \Phi _{\lambda }^{\diamond } (t) \overset{def}{=}\ \int _0^{\infty } \Omega (st) \lambda (ds) = h(T_t \lambda ),\quad t\in {\mathbb {R}}_+. \end{aligned}$$
(2)

Note that if X is a random variable with distribution \(\lambda \in {\mathcal {P}}_+\) then

$$\begin{aligned} \Phi _{\lambda }^{\diamond } (t)=\textbf{E}\,\Omega (tX),\quad t\in {\mathbb {R}}_+. \end{aligned}$$

The function \(\Phi _{\lambda }^{\diamond }\) plays a similar role as the Laplace or Fourier transform for the classical convolution on \({\mathcal {P}}_+\) or \({\mathcal {P}}\), respectively. Basic properties of \(\diamond \)-generalized characteristic functions can be found in [25, 48]. For the present paper it is important to know that each regular generalized convolution determines its generalized characteristic function uniquely up to a scale constant. Moreover, convergence of \(\diamond \)-generalized characteristic functions uniformly on compact sets is equivalent to weak convergence of the corresponding probability measures.

Some generalized convolutions admit only the existence of a function \(h:{\mathcal {P}}_+ \rightarrow {\mathbb {R}}\) which has all the required properties of a homomorphism except continuity. Equivalently, the corresponding probability kernel \(\Omega :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}\) is not continuous (and the corresponding generalized convolution is not regular). For example, the max-convolution is not regular since it admits only one (up to a scale) probability kernel: \(\Omega (x) = {\textbf{1}}_{[0,1)}(x)\), which is evidently not continuous. For such convolutions the corresponding generalized characteristic functions can still be defined by (2), but then some of the properties which hold in the regular case may not be satisfied.

3 Basic Examples of Generalized Convolutions

We present here a basic list of generalized convolutions, each defined uniquely, according to Remark 1, by its values on \(\delta _x \diamond \delta _y\) or \(\delta _x \diamond \delta _1\) for \(x \in (0,1)\). In the latter case, the values for \(x \in \{ 0,1\}\) are defined by continuity.

Example 3.0. The Kingman or Bessel convolution with parameter \(s > -\frac{1}{2}\) is defined for \(x,y \geqslant 0\) by

$$\begin{aligned} \delta _x \otimes _{\omega _s} \delta _y = {\mathcal {L}} \left( \sqrt{x^2 + y^2 + 2xy \theta _s} \right) , \end{aligned}$$

where \(\theta _s\) is a random variable with the following density function:

$$\begin{aligned} f_s(t) = \frac{\Gamma (s+1)}{\sqrt{\pi } \Gamma ( s + {1/2})} \, \bigl (1-t^2 \bigr )_+^{s-{1/2}}, \end{aligned}$$

where \(a_+ = \max \{a,0\}\). The measure \(\delta _x \otimes _{\omega _s} \delta _y\) has support \([ |x-y|, x+y ]\). If \(n:= 2(s+1) \in {\mathbb {N}}\), \(n >1\), then the variable \(\theta _s\) can be identified as \(\theta _s = U_1\), where \({\textbf{U}}_n = (U_1, \dots , U_n)\) is a random vector having uniform distribution on the unit sphere \(S_{n-1} \subset {\mathbb {R}}^n\).
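For simulation purposes note that, after the substitution \(t = 2u-1\), the density \(f_s\) becomes a \(\textrm{Beta}(s+\frac{1}{2}, s+\frac{1}{2})\) density in u, so \(\theta _s = 2B - 1\) with \(B \sim \textrm{Beta}(s+\frac{1}{2}, s+\frac{1}{2})\). A short sampling sketch (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def kingman_sample(x, y, s, size):
    # under t = 2u - 1 the density f_s becomes Beta(s + 1/2, s + 1/2) in u
    theta = 2.0 * rng.beta(s + 0.5, s + 0.5, size) - 1.0
    return np.sqrt(x**2 + y**2 + 2.0 * x * y * theta)

r = kingman_sample(1.0, 2.0, 0.0, 10**5)          # s = 0 corresponds to n = 2
assert r.min() >= 1.0 - 1e-9 and r.max() <= 3.0 + 1e-9  # support [|x-y|, x+y]
```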

Example 3.1

The classical convolution \(*\) on \({\mathcal {P}}_+\) is given by

$$\begin{aligned} \delta _x *\delta _y = \delta _{x+y}, \quad x,y \geqslant 0. \end{aligned}$$

Example 3.2

The symmetric convolution \(\bowtie \) on \({\mathcal {P}}_+\) is defined by

$$\begin{aligned} \delta _x \bowtie \delta _y = \frac{1}{2} \delta _{x+y} + \frac{1}{2} \delta _{|x-y|}, \quad x,y \geqslant 0. \end{aligned}$$

This distribution can be considered as the limit of \(\delta _x \otimes _{\omega _s} \delta _y\) for \(s \searrow -\frac{1}{2}\).

Example 3.3

The \(\alpha \)-stable convolution \(*_{\alpha }\), \(\alpha >0\), is given for \(x,y \geqslant 0\) by

$$\begin{aligned} \delta _x *_{\alpha } \delta _y = \delta _{g_{\alpha }(x,y)}, \quad \hbox { where } \quad g_{\alpha }(x,y) = (x^{\alpha } +y^{\alpha } )^{1/{\alpha }}. \end{aligned}$$

Example 3.4

The Kendall generalized convolution \(\vartriangle _{\alpha }\) on \({\mathcal {P}}_+\), \(\alpha >0\), is defined by

$$\begin{aligned} \delta _x \vartriangle _{\alpha } \delta _1:= \bigl (1- x^{\alpha } \bigr ) \delta _1 + x^{\alpha } \pi _{2\alpha },\quad x\in [0,1], \end{aligned}$$

where \(\pi _{\beta }\), \(\beta >0\), is the Pareto distribution with the density function \(f_{\beta }(t) = \beta t^{-\beta -1}\) on the set \([1,\infty )\).
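For \(t \geqslant 1\) this definition gives the distribution function of \(\delta _x \vartriangle _{\alpha } \delta _1\) in closed form: \((1-x^{\alpha }) + x^{\alpha }(1 - t^{-2\alpha }) = 1 - x^{\alpha } t^{-2\alpha }\). A quick Monte Carlo check (a sketch with arbitrarily chosen parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
alpha, x, N = 0.5, 0.4, 10**6

pareto = rng.random(N) ** (-1.0 / (2.0 * alpha))      # pi_{2 alpha} on [1, inf)
Z = np.where(rng.random(N) < x**alpha, pareto, 1.0)   # delta_x <>_alpha delta_1

for t in (1.0, 2.0, 5.0):
    assert abs(np.mean(Z <= t) - (1.0 - x**alpha * t**(-2.0 * alpha))) < 3e-3
```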

Example 3.5

The \(\max \)-convolution is simply defined by

$$\begin{aligned} \delta _x \triangledown \delta _y = \delta _{x \vee y}. \end{aligned}$$

This distribution can be considered as the limit of \(\delta _x \vartriangle _{\alpha } \delta _y\) for \(\alpha \rightarrow \infty \). This is the reason why we call the Kendall convolution the extremal Kendall convolution.

Example 3.6

The Kucharczak convolution \(\delta _x \circ _1 \delta _y\), \(x,y \geqslant 0\), defined in [54], Example 2.4, is a measure absolutely continuous with respect to the Lebesgue measure, given for \(a\in (0,1]\), \(r >0\), by

$$\begin{aligned} \delta _x \circ _1 \delta _y (dt) = \frac{r x^a y^a}{\Gamma (a) \Gamma (1-a)} \, \frac{t^{r-ar-1} (2t^r -x^r - y^r) {\textbf{1}}_{[g_{ar}(x,y), \infty )}(t)}{(t^r -x^r -y^r)^a (t^r -x^r) (t^r - y^r)} \, dt. \end{aligned}$$

Example 3.7

The Kucharczak-Urbanik generalized convolution defined in [30, 54] for \(\alpha >0\) and \(n \in {\mathbb {N}}\) is uniquely determined by

$$\begin{aligned} \delta _x \vartriangle _{\alpha ,n} \delta _1 (ds):= (1 - x^{\alpha })^{n} \delta _1(ds) + \sum _{k=1}^n \binom{n}{k} x^{\alpha k} (1 - x^{\alpha })^{n-k} \mu _{k,n} (ds) \end{aligned}$$
(3)

for \(x\in [0,1]\), where for \(k=1,\ldots ,n\) the probability measures \(\mu _{k,n}\) are defined by their density functions \(f_{k,n}\):

$$\begin{aligned} f_{k,n}(s) = \alpha k \binom{n+k}{n} s^{-\alpha (n+1) - 1} \left( 1 - s^{-\alpha } \right) ^{k-1},\quad s>1. \end{aligned}$$
(4)

Example 3.8

A family of non-regular generalized convolutions \(\diamondsuit _{p,\alpha }\), \(p\in [0,1]\), \(\alpha >0\), was introduced by K. Urbanik in [54] initially for \(\alpha =1\). This family interpolates between two boundary cases: the max-convolution for \(p=0\) and the Kendall convolution for \(p=1\). The \(\diamondsuit _{p,\alpha }\)-convolution \(\delta _x \diamondsuit _{p,\alpha } \delta _1\), \(x \in [0,1]\), is defined for \(p \ne \frac{1}{2}\) by

$$\begin{aligned} \delta _x \diamondsuit _{p,\alpha } \delta _1 (ds)= (1-px^{\alpha })\, \delta _1(ds) +px^{\alpha } \,\tfrac{\alpha }{2p-1} \tfrac{2p - s^{q}}{s^{2\alpha +1}} {\textbf{1}}_{[1,\infty )}(s) ds,\quad x\in [0,1], \end{aligned}$$

where \(q = \frac{\alpha (1-2p)}{(1-p)}\). By continuity, for \(p\rightarrow {1/2}\) we have

$$\begin{aligned} \delta _x \diamondsuit _{1/2,\alpha } \delta _1 (ds)= \bigl (1-\tfrac{x^{\alpha }}{2} \bigr )\, \delta _1(ds) + \tfrac{x^{\alpha }}{2}\, \tfrac{\alpha ( 1 + 2 \ln s)}{s^{2\alpha +1}}{\textbf{1}}_{[1,\infty )}(s) ds. \end{aligned}$$
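One can verify that, for \(p \ne \frac{1}{2}\), the absolutely continuous part in the formula above has total mass one, so that \(\delta _x \diamondsuit _{p,\alpha } \delta _1\) is indeed a probability measure. A symbolic spot check (the parameter values are our own arbitrary admissible choices):

```python
import sympy as sp

s, alpha, p = sp.symbols('s alpha p', positive=True)
q = alpha * (1 - 2 * p) / (1 - p)
density = alpha / (2 * p - 1) * (2 * p - s**q) / s**(2 * alpha + 1)

# substitute admissible numeric parameters, then integrate over [1, oo)
vals = {alpha: sp.Rational(3, 4), p: sp.Rational(1, 4)}
total = sp.integrate(density.subs(vals), (s, 1, sp.oo))
assert sp.simplify(total - 1) == 0
```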

Example 3.9

In [37] one can find the description of the regular generalized convolutions called the Kendall-type convolutions. Their probability kernels are the following:

$$\begin{aligned} \varphi _{c,\alpha ,p}(t) = \left( 1 - (1+c) t^{\alpha } + ct^{\alpha p} \right) {\textbf{1}}_{[0,1]}(t), \end{aligned}$$

where \(p \geqslant 2\), \(\alpha >0\) and one of the following conditions holds

(1) \(c= (p-1)^{-1}\);

(2) \(c = (p^2 -1)^{-1}\);

(3) \(c = \frac{1}{2} (2-p) (p-1)^{-1}\);

(4) \(c = \frac{1}{2} (p-1)^{-1}\);

(5) \(c \in \bigl ( (p^2 -1)^{-1}, \frac{1}{2} (p-1)^{-1} \bigr )\) and none of the previous cases holds.

For other parameters \(p, c, \alpha \) none of the functions \(\varphi _{c,\alpha ,p}\) can be a probability kernel of a regular generalized convolution. Such convolutions are given by

$$\begin{aligned} \delta _x \vartriangle _{c,\alpha ,p} \delta _1 = \varphi _{c,\alpha ,p}(x) \, \delta _1 + x^{\alpha p} \, \lambda _1 + (c+1)(x^{\alpha } -x^{\alpha p})\, \lambda _2, \end{aligned}$$

for properly chosen probability measures \(\lambda _1, \lambda _2\) supported in \([1,\infty )\). For details, in particular for the explicit densities and cumulative distribution functions of the measures \(\lambda _1, \lambda _2\), see [37].
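Whatever the parameters are, the three coefficients in the last display must form a probability vector. A quick symbolic check that they always sum to one (our own sketch):

```python
import sympy as sp

x, c, alpha, p = sp.symbols('x c alpha p', positive=True)
phi = 1 - (1 + c) * x**alpha + c * x**(alpha * p)        # kernel value at x
weights = [phi, x**(alpha * p), (c + 1) * (x**alpha - x**(alpha * p))]
assert sp.simplify(sum(weights) - 1) == 0
```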

4 The Kendall Convolution by the Corresponding Williamson Transform

Let \(m_0\) denote the sum of \(\delta _0\) and the Lebesgue measure on \([0,\infty )\). By Theorem 4.1 and Corollary 4.4 in [52] we know that a generalized convolution can be defined uniquely by its generalized characteristic function treated as an integral transform. Such an approach is described by the next definition. Let us recall here that the \(L_1(m_0)\)-topology on the space \(L_{\infty }(m_0)\) means that \(f_n \rightarrow f\) for functions \(f_n, f \in L_{\infty }(m_0)\) if \(\int f_n(x) g(x) m_0(dx) \rightarrow \int f(x) g(x) m_0(dx)\) for all \(g \in L_1(m_0)\), or, equivalently, if

$$\begin{aligned} \int f_n(yx) g(x) m_0(dx) \rightarrow \int f(yx) g(x) m_0(dx) \end{aligned}$$

for all \(g \in L_1(m_0)\) and all \(y\in [0,\infty )\).

Definition 4

We say that the Borel function \(\varphi :[0,\infty ) \rightarrow {\mathbb {R}}\), \(|\varphi (t) | \leqslant \varphi (0) = 1\), defines a \(\varphi \)-generalized convolution on \({\mathcal {P}}_+\) if

(i):

the integral transform

$$\begin{aligned} {\widehat{\lambda }} (t):= \int _0^{\infty } \varphi (tx) \lambda (dx), \quad \lambda \in {\mathcal {P}}_+, \end{aligned}$$

separates points in \({\mathcal {P}}_+\), i.e. \({\widehat{\lambda }}= {\widehat{\mu }}\) implies that \(\lambda = \mu \),

(ii):

the weak convergence \(\lambda _n \rightarrow \lambda \) is equivalent to the convergence \(\widehat{\lambda _n} \rightarrow {\widehat{\lambda }}\) in the \(L_1(m_0)\)-topology of \(L_{\infty }(m_0)\),

(iii):

for every \(x,y \geqslant 0\) there exists a measure \(\mu \in {\mathcal {P}}_+\), such that the following equality, called the product formula for the function \(\varphi \), holds

$$\begin{aligned} \forall \, x,y \geqslant 0 \, \exists \, \mu \in {\mathcal {P}}_+ \qquad \varphi (xt) \, \varphi (yt) = \int _0^{\infty }\! \varphi (st)\, \mu (ds). \end{aligned}$$
(5)

The corresponding \(\varphi \)-generalized convolution for such function \(\varphi \) and measure \(\mu \) described in (iii) is defined by

$$\begin{aligned} \forall \, x,y \in {\mathbb {R}}_+ \quad \delta _x \diamondsuit _{\varphi } \delta _y:= \mu . \end{aligned}$$

Remark 2

It is easy to notice that the operation \(\diamondsuit _{\varphi }\) defined for the point mass measures by Definition 4 satisfies all conditions of Definition 1, thus it is a generalized convolution in the Urbanik sense. Continuity of the function \(\varphi \) is equivalent to regularity of the convolution \(\diamondsuit _{\varphi }\).

In all the following examples, except Example 4.10, we see that the known generalized convolution \(\diamond \) is a \(\varphi \)-generalized convolution for the function \(\varphi \) being the probability kernel of \(\diamond \). In Example 4.10 we describe the Whittaker \(W_{\alpha ,\nu }\)-generalized convolution, based on a slightly modified product formula. This convolution does not satisfy condition (iii) of the Urbanik definition of generalized convolution; however, methods used in studying the properties of one convolution can be helpful in studying the properties of others.

Example 4.0. The characteristic function of the variable \(\theta _s\), \(s > - \frac{1}{2}\), is given by the following formula (for the proof see e.g. [29])

$$\begin{aligned} \Phi _{s} (t):= \int _{-1}^1 e^{itx} f_s(x) dx = \Gamma (s+1) \Bigl ( \frac{2}{t} \Bigr )^s J_s(t), \end{aligned}$$

where \(J_s\) is the Bessel function of the first kind with the index s and

$$\begin{aligned} J_s(t) = \sum _{m=0}^{\infty } \frac{(-1)^m }{m! \Gamma (m+1 +s)} \ \Bigl ( \frac{t}{2} \Bigr )^{2m +s}. \end{aligned}$$

We recognize here \(\varphi = \Phi _s\), \({\widehat{\lambda }}(t) = \int _0^{\infty } \Phi _s(tx) \lambda (dx)\). The definition of the Kingman-Bessel convolution \(\otimes _{\omega _s}\) follows now from the Gegenbauer’s Formula (see e.g. [46], Chapter 8.19), which is the product formula (5) for this case:

$$\begin{aligned} \Phi _s (xt) \, \Phi _s (yt) = \int _0^{\infty } \Phi _s (rt) \, r_{s}(x,y,r) \, dr. \end{aligned}$$

Here for \(x,y>0\) the function \(r_s(x,y,r)\), as a function of r, is the density of the random variable \(\sqrt{x^2 + y^2 + 2xy \theta _s}\) and it is equal to

$$\begin{aligned} r_s(x,y,r) = \frac{\Gamma (s+1)}{\sqrt{\pi } \Gamma ( s + {1/2}) } \, \frac{2^{1-2s} (xy)^{-2s} \, r \, {\textbf{1}}_{(|x-y|, x+y)}(r) }{\bigl [ (r^2 - (x-y)^2) ((x+y)^2 - r^2 ) \bigr ]^{-s+ \frac{1}{2}} }. \end{aligned}$$

Example 4.1

For the classical convolution we have \(\varphi (t) = e^{-t}\) and the integral transform \(\lambda \rightarrow {\widehat{\lambda }}\) is the classical Laplace transform. The product formula (5) follows from the fact that the Laplace transform of the convolution of two measures is equal to the product of their Laplace transforms.

Example 4.2

For the symmetric convolution \(\bowtie \) we have \(\varphi (t) = \cos t\) and the product formula (5) follows from the elementary identity:

$$\begin{aligned} \cos (xt) \cos (yt) = \frac{1}{2} \cos ((x+y)t) + \frac{1}{2} \cos ((x-y)t). \end{aligned}$$

Example 4.3

For the \(\alpha \)-stable convolution \(*_{\alpha }\) we have \(\varphi (t) = e^{-t^{\alpha }}\). This means that \({\widehat{\lambda }}\) is simply a modified Laplace transform.

Example 4.4

Recall that for \(\alpha >0\) and a non-negative measure \(\lambda \) on \({\mathbb {R}}_+\) which is \(\sigma \)-finite (finite on compact sets), the Williamson transform \({\mathcal {W}}_{\alpha }\lambda \) is defined by

$$\begin{aligned} {\mathcal {W}}_{\alpha } \lambda (t):= \int _0^{\infty } \bigl ( 1 - t^{\alpha } x^{\alpha } \bigr )_+ \lambda (dx), \end{aligned}$$

where \(a_+=\max \{a,0\}\). The product formula (5) for the Williamson transform is the following:

$$\begin{aligned} (1-t^{\alpha } x^{\alpha })_+ (1-t^{\alpha } y^{\alpha })_+ = \int _0^{\infty }\!\! (1-t^{\alpha } s^{\alpha } )_{+} (\delta _x \vartriangle _{\alpha } \delta _y) (ds), \qquad x,y \geqslant 0. \end{aligned}$$

This formula was introduced for studying the Kendall convolution \(\vartriangle _{\alpha }\) in [51]; thus the \(\vartriangle _{\alpha }\)-generalized characteristic function is given by:

$$\begin{aligned} \Phi ^{\vartriangle _{\alpha }}_{\lambda }(t):= {\mathcal {W}}_{\alpha } \lambda (t)= \int _0^{\infty } \bigl ( 1 - t^{\alpha } s^{\alpha } \bigr )_+ \lambda (ds). \end{aligned}$$
(6)

The Williamson integral transform for \(\alpha =1\) was introduced in the study of n-times monotonic functions, i.e. functions f on \([0,\infty )\) such that \((-1)^{\ell } f^{(\ell )}(r)\) is non-negative, non-increasing and convex for \(\ell = 0,1,...,n-1\). R.E. Williamson showed (see [56], Th. 1 and 2) that f is an n-times monotonic function on \((0,\infty )\) iff \(f(t) = \int _0^{\infty } (1-tx)_+^{n-1} \gamma (dx)\) for some non-negative, \(\sigma \)-finite measure \(\gamma \) on \([0,\infty )\).

Actually, the original Williamson transform and its modifications \(\gamma \longrightarrow \int _0^{\infty } \bigl ( 1- t^{\alpha } x^{\alpha } \bigr )_+^{d-1}\, \gamma (dx)\), for some \(\alpha , d >0\), are applied in many different areas of mathematics including actuarial science (see e.g. [8, 32]) and dependence modeling by copulas [15, 31, 34, 35].

Note that it is easy to retrieve a measure from its Williamson transform. This makes the proof of the fact that the Williamson transform uniquely determines the measure much simpler than the corresponding proofs for the Fourier or Laplace transforms. To see this, we integrate the right-hand side of (6) by parts and obtain

$$\begin{aligned} \Phi _{\lambda }^{\vartriangle _{\alpha }} (t) = \alpha t^{\alpha } \int _0^{1/t} x^{\alpha -1} F(x)\, dx, \end{aligned}$$

where F is the cumulative distribution function for \(\lambda \). Now, with the notation \(G(t) = \Phi _{\lambda }^{\vartriangle _{\alpha }}(1/t)\), we obtain

$$\begin{aligned} t^{\alpha } G(t) = \alpha \int _0^t x^{\alpha -1} F(x) dx, \quad \hbox { thus } \quad F(t) = G(t) + \alpha ^{-1} t\, G'(t), \end{aligned}$$
(7)

at each continuity point of the function F. Consequently, \(\Phi _{\lambda _1}^{\vartriangle _{\alpha }} (t) = \Phi _{\lambda _2}^{\vartriangle _{\alpha }} (t)\) implies that \(\lambda _1 = \lambda _2\). Since \(\Phi _{\lambda }^{\vartriangle _{\alpha }} (t)\) is the generalized characteristic function for the Kendall convolution we know that for \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\)

$$\begin{aligned} \Phi _{\lambda _1 \vartriangle _{\alpha } \lambda _2}^{\vartriangle _{\alpha }}(t) = \Phi _{\lambda _1}^{\vartriangle _{\alpha }} (t) \, \Phi _{\lambda _2}^{\vartriangle _{\alpha }} (t), \qquad \qquad t \geqslant 0. \end{aligned}$$
(8)
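The inversion formula (7) is easy to test numerically; in the following sketch (our own, with the uniform distribution on [0,1] as an arbitrary test measure) the recovered F agrees with the true cumulative distribution function:

```python
import numpy as np
from scipy.integrate import quad

alpha = 1.5

def Phi(t):       # Williamson transform of the uniform distribution on [0, 1]
    return quad(lambda x: max(1.0 - (t * x)**alpha, 0.0), 0.0, 1.0)[0]

def F_recovered(t, h=1e-5):                    # F(t) = G(t) + t G'(t) / alpha
    G = lambda u: Phi(1.0 / u)
    return G(t) + t * (G(t + h) - G(t - h)) / (2.0 * h) / alpha

for t in (0.2, 0.5, 0.8):
    assert abs(F_recovered(t) - t) < 1e-4      # the uniform CDF is F(t) = t
```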

The cumulative distribution function of the Kendall convolution of two measures can also be easily expressed:

Theorem 1

For every \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\) we have \(\lambda = \lambda _1\vartriangle _{\alpha } \lambda _2\) if and only if \(F_{\lambda }\), the cumulative distribution function of \(\lambda \), is given by

$$\begin{aligned} F_{\lambda }(t) = G_1 (t) F_2 (t) + G_2 (t) F_1(t) - G_1(t) G_2(t), \end{aligned}$$

where \(F_i\) is the cumulative distribution function of \(\lambda _i\) and \(G_i(t) = \Phi _{\lambda _i}^{\vartriangle _{\alpha }}(1/t)\), \(i=1,2\).

Proof

Assume that \(\lambda = \lambda _1 \vartriangle _{\alpha } \lambda _2\). By the formula expressing the cumulative distribution function by the Williamson transform and the equality \(\widehat{\lambda _1 \vartriangle _{\alpha } \lambda _2} = \widehat{\lambda _1} \widehat{\lambda _2}\) we have that \(G_{\lambda }(t) = G_1(t) G_2(t)\), \(t\geqslant 0\), and then, by (7)

$$\begin{aligned} F_{\lambda }(t)&= G_{\lambda }(t) + \alpha ^{-1} t\, G_{\lambda }'(t) \\&= G_{1}(t) G_{2}(t) + \alpha ^{-1} t\, G_{1}'(t) G_{2}(t) + \alpha ^{-1} t\, G_{1}(t) G_{2}'(t) \\&= G_1 (t) F_2 (t) + G_2 (t) F_1(t) - G_1 (t) G_2(t). \end{aligned}$$

Assume now that the cumulative distribution function \(F_{\lambda }\) can be written by the desired formula. Since \(F_i(t) = G_i(t) + \alpha ^{-1} t\, G_i'(t)\), \(i=1,2\), then

$$\begin{aligned} F_{\lambda }(t)&= G_1 (t) F_2 (t) + G_2 (t) F_1(t) - G_1(t) G_2(t) \\&= G_1(t) G_2(t) + \alpha ^{-1} t \bigl (G_{1} (t) G_2 (t) \bigr )'. \end{aligned}$$

By the uniqueness of the Williamson transform we see that the generalized characteristic function of \(\lambda \) is equal to \(G_1(t^{-1}) G_2(t^{-1})\), \(t \geqslant 0\) which is the generalized characteristic function of \(\lambda _1 \vartriangle _{\alpha } \lambda _2\). \(\square \)
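For two Dirac measures, Theorem 1 can be checked directly against the explicit mixture of Example 3.4. A short numerical verification (our own sketch):

```python
import numpy as np

alpha, x = 0.7, 0.6
t = np.linspace(1.0, 6.0, 50)                  # on t >= 1 both CDFs equal 1

G1 = 1.0 - (x / t)**alpha                      # G_i(t) = Phi_{delta_{x_i}}(1/t)
G2 = 1.0 - (1.0 / t)**alpha
F1 = F2 = np.ones_like(t)                      # CDFs of delta_x and delta_1
F_thm = G1 * F2 + G2 * F1 - G1 * G2            # Theorem 1

# directly from Example 3.4: (1 - x^alpha) delta_1 + x^alpha pi_{2 alpha}
F_direct = (1.0 - x**alpha) + x**alpha * (1.0 - t**(-2.0 * alpha))
assert np.allclose(F_thm, F_direct)
```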

Example 4.5

For the \(\max \)-convolution we have \(\varphi (t) = {\textbf{1}}_{[0,1]}(t)\). This function is not continuous, thus the corresponding convolution \(\triangledown \) is not regular, but the inversion formula is equally easy to obtain:

$$\begin{aligned} {\widehat{\lambda }}(t) = \int _0^{\infty } {\textbf{1}}_{[0,1]}(tx) \lambda (dx) = \int _0^{t^{-1}} \lambda (dx)= F_{\lambda }(t^{-1}), \end{aligned}$$

thus \(F_{\lambda }(t) = {\widehat{\lambda }}(t^{-1})\) for all continuity points of the cumulative distribution function \(F_{\lambda }\).

Example 4.6

For \(a\in (0,1]\), \(r>0\), the Kucharczak generalized convolution \(\circ _1\) can be defined by the product formula (5) applied to its probability kernel:

$$\begin{aligned} \Omega (t) = \frac{\Gamma (a,t^r)}{\Gamma (a)} = \frac{1}{\Gamma (a)} \int _{t^r}^{\infty } x^{a-1} e^{-x} dx, \quad \quad t>0. \end{aligned}$$

This means that the measure \(\mu = \delta _x \circ _1 \delta _y\) is defined as a solution of the following integral equation:

$$\begin{aligned} \frac{1}{\Gamma (a)^2} \int _{t^r x^r}^{\infty } \! s^{a-1} e^{-s} ds \, \int _{t^r y^r}^{\infty } \! u^{a-1} e^{-u} du = \int _0^{\infty } \!\!\! \frac{1}{\Gamma (a)} \int _{t^r s^r}^{\infty } \! u^{a-1} e^{-u} du \,\mu (ds). \end{aligned}$$

Example 4.7

The Kucharczak-Urbanik convolution \(\vartriangle _{\alpha ,n}\) can be defined by equation (5) for \(\varphi (t):= (1-t^{\alpha })_+^n\). To see this note that for any \(x\in [0,1]\) and \(t\geqslant 0\) we have

$$\begin{aligned} (1-t^{\alpha } x^{\alpha })_+^n (1-t^{\alpha })_+^n = \sum _{k=0}^{n} \binom{n}{k} x^{\alpha k} (1 - x^{\alpha })^{n-k} (1-t^{\alpha })_+^{n+k}. \end{aligned}$$

It remains to show that for any integer \(k\ge 1\)

$$\begin{aligned} (1-t^{\alpha })_+^{n+k} = \int _0^{\infty }\!\! (1-t^{\alpha } s^{\alpha })_+^{n}\, f_{k,n}(s)\, ds, \end{aligned}$$

where the density functions \(f_{k,n}\), \(k=1,\ldots ,n\), \(n \in {\mathbb {N}}\), are described in Example 3.7. This equality can be obtained by a simple induction argument (with respect to k), whose first step is based on the following property of the Pareto distribution:

$$\begin{aligned} \int _0^{\infty }\!(1- s^{\alpha } t^{\alpha })_+^n \, \pi _{\alpha (n+1)}(ds) = (1- t^{\alpha })_+^{n+1}. \end{aligned}$$

The final conclusion is a consequence of the uniqueness of probability kernel (up to a scale coefficient) of every generalized convolution (for the proofs see [39, 51]).

The inversion formula for the integral transform \(\lambda \rightarrow {\widehat{\lambda }}\) with the kernel \(\Omega _{\alpha , n}(t) = (1-t^{\alpha })_+^n\) can be obtained using the same methods as for inverting the Williamson transform, but the level of difficulty increases with n; for the detailed proof see [34, 35].
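The key identity of this example, \((1-t^{\alpha })_+^{n+k} = \int _0^{\infty } (1-t^{\alpha }s^{\alpha })_+^{n} f_{k,n}(s)\, ds\), can also be tested by quadrature (a sketch with arbitrarily chosen parameters):

```python
import numpy as np
from scipy.integrate import quad
from math import comb

alpha, n = 0.8, 3

def f_kn(s, k):                                # the densities (4)
    return (alpha * k * comb(n + k, n) * s**(-alpha * (n + 1) - 1)
            * (1.0 - s**(-alpha))**(k - 1))

for k in range(1, n + 1):
    for t in (0.3, 0.7):
        # the integrand vanishes for s > 1/t, so it suffices to integrate there
        lhs = quad(lambda s: (1.0 - (t * s)**alpha)**n * f_kn(s, k),
                   1.0, 1.0 / t)[0]
        assert abs(lhs - (1.0 - t**alpha)**(n + k)) < 1e-7
```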

Example 4.8

The \(\diamondsuit _{p,\alpha }\) generalized convolution, \(\alpha >0\), \(p\in [0,1]\), can be defined by equation (5) for the probability kernel

$$\begin{aligned} \Omega _{\diamondsuit _{\alpha , p}}(t) = (1-pt^{\alpha })\textbf{1}_{[0,1]}(t),\quad t\ge 0 \end{aligned}$$

This function, except for the Kendall case \(p=1\), is not continuous, thus the generalized convolution \(\diamondsuit _{p,\alpha }\) is not regular.

Example 4.9

The Kendall-type generalized convolutions \(\vartriangle _{c,\alpha ,p}\) were found by considering those parameters \(c,\alpha , p\) for which the function \(\varphi _{c,\alpha ,p}(t) = ( 1 - (1+c) t^{\alpha } + ct^{\alpha p} ) {\textbf{1}}_{[0,1]}(t)\) can play the role of the probability kernel of some generalized convolution. In particular we choose \(c,\alpha , p\) such that for all \(x,y>0\) the measure \(\mu \) (depending on x and y) satisfying the equality

$$\begin{aligned} \varphi _{c,\alpha ,p}(xt) \, \varphi _{c,\alpha ,p}(yt) = \int _0^{\infty } \! \varphi _{c,\alpha ,p}(ts) \mu (ds) \end{aligned}$$

is a probability measure.

4.1 Generalized Convolutions in Harmonic Analysis

The version of equation (5) appearing in the theory of special functions and harmonic analysis is called a product formula or a multiplication formula for the family \(\{ \chi _{_{\lambda }}\}_{\lambda \in \Lambda }\) of continuous functions on \(I \subset {\mathbb {R}}\):

$$\begin{aligned} \chi _{_{\lambda }}(x) \, \chi _{_{\lambda }}(y) = \int _{I} \chi _{_{\lambda }}(s) K(x,y,s) \, ds, \quad \lambda \in \Lambda , \end{aligned}$$
(6')

where the kernel \(K(x,y,s)\) does not depend on \(\lambda \) and \(\Lambda \) is some indexing set. Such product formulas are the key ingredient of the definitions of generalized translation and generalized convolution operators, which were introduced by J. Delsarte [13] and B. Levitan [33] in the theory of special functions and harmonic analysis. For details and examples see [4, 10].

For the generalized convolutions on \({\mathcal {P}}_+\) introduced by K. Urbanik in the probability theory we have

$$\begin{aligned} \bigl \{ \chi _{_{\lambda }}(\cdot ) :\lambda \in \Lambda \bigr \} = \bigl \{ \Omega (t\, \cdot ) :t \geqslant 0 \bigr \}, \end{aligned}$$

where \(\Omega :{\mathbb {R}}_+ \rightarrow {\mathbb {R}}_+\) is the probability kernel for the considered generalized convolution. In the definition of J. Delsarte [13] and B. Levitan [33] the set \(\Lambda \) in the family \(\{ \chi _{_{\lambda }}\}_{\lambda \in \Lambda }\) is an abstract indexing set and the equality \(\chi _{_{\lambda }}(x) = \chi _{_{1}}(\lambda x)\) does not have to hold, but the family

$$\begin{aligned} \left\{ \int _{I} \chi _{_{\lambda }}(s) K(x,y,s) \, ds :\lambda \in \Lambda \right\} \end{aligned}$$

will identify the kernel \(K(x,y,s)\) uniquely up to a set of Lebesgue measure zero for each choice of \(x,y \in I\).

Example 4.10. In [44] the authors proved the product formula for the index Whittaker transform and defined the corresponding generalized convolution operator. By the index Whittaker transform we understand here the integral transform \({\mathcal {P}}_+ \ni \mu \mapsto W_{\alpha } \mu \) given by

$$\begin{aligned} {\widehat{\mu }}(\lambda ):= (W_{\alpha } \mu ) (\lambda ):= \int _0^{\infty } \! W_{\alpha , \Delta _{\lambda }}(x) \mu (dx), \qquad \lambda \geqslant 0, \end{aligned}$$

where \(\alpha < \frac{1}{2}\) is a parameter, \(\Delta _{\lambda } = \sqrt{(\frac{1}{2} - \alpha )^2 - \lambda }\) and \(W_{\alpha , \nu }\) is the Whittaker function

$$\begin{aligned} W_{\alpha , \nu }(x) = \frac{e^{-\frac{x}{2}} x^{\alpha }}{\Gamma ( \frac{1}{2} - \alpha + \nu )} \int _0^{\infty }\! e^{-s} s^{- \frac{1}{2} -\alpha + \nu } \Bigl ( 1 + \frac{s}{x}\Bigr )^{-\frac{1}{2} + \alpha + \nu } ds, \end{aligned}$$

for \(\mathrm {Re}\, x >0\), \(\mathrm {Re}\, \alpha < \frac{1}{2} + \mathrm {Re}\,\nu \). Equivalently, the Whittaker function is defined as the solution of Whittaker's differential equation:

$$\begin{aligned} \frac{d^2 u}{d x^2} + \Bigl ( - \frac{1}{4} + \frac{\alpha }{x} + \frac{{1/4} - \nu ^2}{x^2} \Bigr ) u = 0 \end{aligned}$$

uniquely determined by the property \(W_{\alpha , \nu }(x) \sim x^{\alpha } e^{-{x/2}}\) for \(x \rightarrow \infty \).

The index Whittaker transform \(\mu \rightarrow {\widehat{\mu }}\) has the following properties of the generalized characteristic function (see Prop. 4.4 in [45]):

(i):

\({\widehat{\mu }}\) is uniformly continuous on \([0,\infty )\). Moreover, for any indexing set J if the family \(\{ \mu _j|_{(0,\infty )} :j \in J \}\) is tight, then \(\{ \widehat{\mu _j} :j \in J\}\) is uniformly equicontinuous;

(ii):

\({\widehat{\mu }}\) uniquely determines \(\mu \in {\mathcal {P}}_+\);

(iii):

if \(\mu _n, \mu \in {\mathcal {P}}_+\), \(n \in {\mathbb {N}}\), and \(\mu _n \Rightarrow \mu \) then \(\widehat{\mu _n} \rightarrow {\widehat{\mu }}\) uniformly on compact sets;

(iv):

if \(\mu _n \in {\mathcal {P}}_+\), \(n \in {\mathbb {N}}\) and \(\widehat{\mu _n}(\lambda ) \rightarrow f(\lambda )\) pointwise in \(\lambda \geqslant 0\) for some real function f, continuous in a neighbourhood of zero then there exists \(\mu \in {\mathcal {P}}_+\) such that \(f = {\widehat{\mu }}\).

The product formula for the Whittaker function of the second kind is the following (see Th. 3.1 in [44]):

$$\begin{aligned} W_{\alpha ,\nu }(x) \, W_{\alpha ,\nu }(y) = \int _0^{\infty }\! W_{\alpha ,\nu }(s) K_{\alpha } (x,y,s) ds, \end{aligned}$$
(9)

where

$$\begin{aligned} K_{\alpha } (x,y,s):= \frac{(xy)^{2\alpha -1}}{\sqrt{2\pi } s^{2\alpha }} \exp \left\{ \frac{1}{2x^2}+ \frac{1}{2y^2} - \frac{1}{2s^2} - \Bigl (\frac{x^2 + y^2 + s^2}{4xys}\Bigr )^2 \right\} D_{2\alpha } \left[ \frac{x^2 + y^2 + s^2}{{2xys}} \right] \end{aligned}$$

and \(D_{\mu }(s)\) is the parabolic cylinder function for \(s>0\), \({\mathfrak {R}}e \, \mu <1\):

$$\begin{aligned} D_{\mu }(s) = \frac{s^{\mu } e^{-{s^2}/4}}{\Gamma ( \frac{1}{2} (1-\mu ))} \int _0^{\infty } t^{\frac{1}{2}(1+\mu )} \Bigl ( 1 + \frac{2t}{s^2}\Bigr )^{{\mu }/2}\, e^{-t} dt. \end{aligned}$$

Equation (9) holds for all \(\nu \) for which the function \(W_{\alpha , \nu }\) is defined, but when considering the generalized characteristic function in the sense of Delsarte and Levitan we will assume that \(\nu = \Delta _{\lambda }\). By Theorem 4.6 in [45] we have \(\int _0^{\infty } K_{\alpha } (x,y,s) ds = 1\) for all \(x,y >0\). Consequently, the product formula (9) for the Whittaker function defines a generalized convolution \(\maltese \) in the sense of Delsarte and Levitan:

$$\begin{aligned} \delta _x \,\maltese \, \delta _y (ds) = K_{\alpha }(x,y,s) {\textbf{1}}_{(0,\infty )} (s) \, ds. \end{aligned}$$

This proposal does not guarantee that \(\maltese \) is a generalized convolution in Urbanik's sense. In particular, we do not know whether conditions (iii) or (v) of Definition 1 hold.

5 The Kendall Convolution as a Weak Generalized Convolution

Let us recall that a measure \(\nu \in {\mathcal {P}}({\mathbb {E}})\) is stable if for all \(a,b \geqslant 0\) there exists a non-random \({\textbf{d}}(a,b) \in {\mathbb {E}}\) such that

$$\begin{aligned} T_a \nu *T_b \nu = T_{c(a,b)} \nu *\delta _{{\textbf{d}}(a,b)}, \end{aligned}$$

where \(c(a,b)^{\alpha } = a^{\alpha } + b^{\alpha }\) for some \(\alpha \in (0,2]\). If \({\textbf{d}}(a,b) \equiv 0\) then the measure \(\nu \) is called strictly stable. The complete characterization of both stable and strictly stable distributions is known and given e.g. in [42].

Similarly we define weakly stable distributions, which are measures on an arbitrary separable Banach space \({\mathbb {E}}\) (with the Borel \(\sigma \)-algebra):

Definition 5

We say that a measure \(\mu \in {\mathcal {P}}({\mathbb {E}})\) is weakly stable if

$$\begin{aligned} \forall \, a,b \in {\mathbb {R}} \,\, \exists \, \lambda = \lambda _{a,b} \in {\mathcal {P}}: \qquad T_a \mu *T_b \mu = \lambda \circ \mu , \end{aligned}$$

where \(*\) denotes the classical convolution and \({\mathcal {L}}(X) \circ {\mathcal {L}}(\theta ) = {\mathcal {L}}(X\theta )\) if the random elements X and \(\theta \) are independent.

It is known (see [36]) that \(\mu \) is weakly stable if and only if

$$\begin{aligned} \forall \, \lambda _1, \lambda _2 \in {\mathcal {P}} \,\, \exists \, \lambda \in {\mathcal {P}} \quad (\lambda _1 \circ \mu ) *(\lambda _2 \circ \mu ) = \lambda \circ \mu . \end{aligned}$$
(*)

This property is the basis for defining weak generalized convolutions:

Definition 6

Let \(\mu \) be a weakly stable measure on a separable Banach space \({\mathbb {E}}\). The binary operation \(\otimes _{\mu } :{\mathcal {P}}_+^2 \rightarrow {\mathcal {P}}_+\), called a \(\mu \)-weak generalized convolution, is defined as follows: for any \(\lambda _1,\lambda _2\in {\mathcal {P}}_+\)

$$\begin{aligned} \lambda _1 \otimes _{\mu } \lambda _2 = \lambda \in {\mathcal {P}}_+ \quad \Longleftrightarrow \quad \bigl (\lambda _1 \circ \mu \bigr ) *\bigl (\lambda _2 \circ \mu \bigr ) = \lambda \circ \mu . \end{aligned}$$

The generalized convolution \(\diamond \) is called a weak generalized convolution if there exists a weakly stable measure \(\mu \) such that \(\diamond = \otimes _{\mu }\).

All known weakly stable measures are symmetric, i.e. satisfying the property \(\mu (A) = \mu (-A)\) for every Borel set \(A \in {\mathcal {B}}({\mathbb {E}})\). Moreover if \(\mu \) on \({\mathbb {E}}\) is weakly stable, then for every subspace \({\mathbb {E}}_1 \subset {\mathbb {E}}\) and every linear operator \(Q :{\mathbb {E}} \rightarrow {\mathbb {E}}_1\) the measure \(\mu _Q\) on \({\mathbb {E}}_1\) defined by

$$\begin{aligned} \forall \, A \in {\mathcal {B}}({\mathbb {E}}_1) \quad \mu _Q(A):= \mu (Q^{-1}(A)) \end{aligned}$$

is also weakly stable and both \(\mu \) and \(\mu _Q\) define the same weak generalized convolution on \({\mathcal {P}}_+\). For these reasons in defining weak generalized convolutions we will restrict our attention to weakly stable measures \(\mu \in {\mathcal {P}}_s\) (symmetric measures on \({\mathbb {R}}\)).

Remark 3

Let \({\widehat{\mu }}\) be the characteristic function of the weakly stable measure \(\mu \in {\mathcal {P}}_s\). By the weak stability condition (\(*\)) and Definition 6 written in the language of characteristic functions we have that there exists a measure \(\lambda = \lambda _1 \otimes _{\mu } \lambda _2\) such that

$$\begin{aligned} \int _0^{\infty } {\widehat{\mu }}(st) \lambda _1(ds) \int _0^{\infty } {\widehat{\mu }}(st ) \lambda _2(ds) = \int _0^{\infty } {\widehat{\mu }}(st) \lambda (ds). \end{aligned}$$

In this case the probability kernel of the generalized convolution \(\otimes _{\mu }\) is \(\varphi (t) = {\widehat{\mu }}(t) = \int _{{\mathbb {R}}}\, \cos (tx) \mu (dx)\), considered as a function on \({\mathbb {R}}_+\). Finally, we see that the generalized convolution \(\diamond \) with the probability kernel \(\varphi \) is weak iff the function \(\varphi (|t|)\), \(t \in {\mathbb {R}}\), is a characteristic function of some measure \(\mu \in {\mathcal {P}}_s\); in that case \(\diamond = \otimes _{\mu }\) and

$$\begin{aligned} \forall \, a,b,t \in {\mathbb {R}}_+ \quad \varphi (at) \varphi (bt) = \int _0^{\infty } \varphi (st) \, \delta _a \otimes _{\mu } \delta _b\, (ds). \end{aligned}$$
(10)

Theorem 2

The Kendall convolution \(\vartriangle _{\alpha }\) is a weak generalized convolution if \(\alpha \in (0,1]\). The corresponding weakly stable measure \(\mu :=\mu _{\alpha }\in {\mathcal {P}}_s\) is defined by the density function

$$\begin{aligned} g_{\alpha }(t) = \frac{\alpha }{\pi }\, |t|^{-\alpha -1} \int _0^{|t|} x^{\alpha -1} \sin {x} \, dx,\quad t\in {\mathbb {R}}\setminus \{0\}. \end{aligned}$$

Proof

Since we already know that the probability kernel for the Kendall convolution is \(\Omega _{\vartriangle _{\alpha }}(t) = (1- t^{\alpha }) {\textbf{1}}_{[0,1]}(t)\) we only need to:

(a) show that the function \(g(t):= \Omega _{\vartriangle _{\alpha }}(|t|)\) is a characteristic function of some probability measure \(\mu \) if \(\alpha \in (0,1]\);

(b) identify \(g_{\alpha }\) as the density of \(\mu \).

Indeed, if (a) and (b) hold then \({\widehat{\mu }} (t) = \Omega _{\vartriangle _{\alpha }}(|t|)\) thus, by equality (10) we see that \(\mu \) is weakly stable and defines the convolution \(\vartriangle _{\alpha }\).

To see that (a) holds true, note that for \(0<t<1\)

$$\begin{aligned} g'(t) = - \alpha t^{\alpha -1} < 0, \quad \hbox { and } \quad g''(t) = \alpha (1-\alpha ) t^{\alpha -2} \geqslant 0, \end{aligned}$$

while g vanishes on \([1,\infty )\). Consequently, since g is non-increasing and convex on \((0,\infty )\), the Pólya theorem implies that g is indeed the characteristic function of a symmetric probability measure \(\mu \).

To see that (b) holds true we use the inverse Fourier transform for an integrable characteristic function to obtain the density function of \(\mu \):

$$\begin{aligned} \frac{1}{2\pi }\int _{{\mathbb {R}}} g(x) e^{-itx} \, dx&= \frac{1}{\pi }\int _0^{1} \bigl ( 1 - x^{\alpha } \bigr ) \cos (tx) \, dx \\&= \frac{\alpha }{\pi } \, |t|^{-\alpha -1} \! \int _0^{|t|} x^{\alpha -1} \sin {x} \, dx=g_{\alpha }(t). \\ \end{aligned}$$

\(\square \)
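The claim of Theorem 2 can be confirmed numerically: the Fourier transform of \(g_{\alpha }\) should return the Kendall kernel \((1-|t|^{\alpha })_+\). A rough quadrature sketch (the truncation at \(x=200\) and the test points are our own choices):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5

def g(x):                                      # candidate density g_alpha
    if x == 0.0:
        return alpha / (np.pi * (alpha + 1.0)) # continuous extension at 0
    inner = quad(lambda u: u**(alpha - 1.0) * np.sin(u), 0.0, x, limit=200)[0]
    return alpha / np.pi * x**(-alpha - 1.0) * inner

for t in (0.3, 0.8, 1.5):
    # characteristic function of the symmetric density: 2 int_0^inf g(x) cos(tx) dx
    chf = 2.0 * quad(g, 0.0, 200.0, weight='cos', wvar=t, limit=400)[0]
    assert abs(chf - max(1.0 - t**alpha, 0.0)) < 1e-2
```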

Theorem 3

Assume that the Kendall convolution \(\vartriangle _{\alpha }\) is a weak generalized convolution. Then \(\alpha \in (0,2]\).

Proof

The Kendall convolution \(\vartriangle _{\alpha }\) is weak iff the function \((1 - |t|^{\alpha })_+\) is the characteristic function \({\widehat{\mu }}\) of some symmetric probability distribution \(\mu \) (see Remark 3). Then we have

$$\begin{aligned} {\widehat{\mu }} \Bigl (\frac{t}{n^{1/{\alpha }}} \Bigr )^n = \Bigl ( 1 - \frac{|t|^{\alpha }}{n} \Bigr )^n \longrightarrow e^{- |t|^{\alpha }}, \end{aligned}$$

which means that the function \(e^{- |t|^{\alpha }}\) is also a characteristic function of some \(\alpha \)-stable probability measure. By the theory of symmetric stable distributions (see e.g. [42]) we get \(\alpha \leqslant 2\). \(\square \)
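The necessity of \(\alpha \leqslant 2\) can also be observed numerically: for \(\alpha > 2\) the Fourier inversion of \((1-|t|^{\alpha })_+\) takes negative values, so it cannot be a probability density. A small sketch (the test points are our own choices):

```python
import numpy as np
from scipy.integrate import quad

def fourier_inverse(t, alpha):
    # the would-be density of mu: (1/pi) int_0^1 (1 - x^alpha) cos(tx) dx
    return quad(lambda x: (1.0 - x**alpha) * np.cos(t * x), 0.0, 1.0)[0] / np.pi

assert fourier_inverse(2.0 * np.pi, 3.0) < 0.0   # alpha = 3: negative values
assert fourier_inverse(2.0 * np.pi, 0.5) > 0.0   # alpha = 1/2: consistent with Theorem 2
```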

Example 5.0. As we have seen in Example 4.0, the probability kernel for the Kingman convolution is equal to the characteristic function \(\Phi _s(t) = \Gamma (s+1) \Bigl ( \frac{2}{t}\Bigr )^s J_s(t)\) of the variable \(\theta _s\) appearing in the definition of this convolution. Consequently, the Kingman convolution \(\otimes _{\omega _s}\) is a weak generalized convolution for all \(s> - \frac{1}{2}\).

Example 5.1. The classical convolution on \({\mathcal {P}}_+\) is weak since for its probability kernel \(e^{-t}\) we have \(g(t) = e^{-|t|}\), which is the characteristic function of the Cauchy distribution.

Example 5.2. The symmetric convolution is weak since \(g(t) = \cos (t)\) is the characteristic function of \(\mu _s = \frac{1}{2} \delta _1 + \frac{1}{2} \delta _{-1}\).

Example 5.3. The \(\alpha \)-stable convolution \(*_{\alpha }\) is weak for \(\alpha \in (0,2]\) since in this case \(e^{-|t|^{\alpha }}\) is the characteristic function of a symmetric \(\alpha \)-stable measure. For \(\alpha >2\) the convolution \(*_{\alpha }\) is not weak.

Example 5.6. For the Kucharczak convolution the probability kernel is \(\Omega (t) = {{\Gamma (a,t^r)}/{\Gamma (a)}}\) for some \(a,r>0\); thus for the function \(g(t) = \Omega (|t|)\) we have \(g'(t) = - \frac{r}{\Gamma (a)} t^{ar-1} e^{-t^{r}} <0\) for \(t>0\) and \(g''(t) = \frac{r}{\Gamma (a)} (rt^r +1 -ar) t^{ar-2} e^{-t^r}\), which is positive for \(ar\leqslant 1\). This means that the Kucharczak convolution is weak if \(ar\leqslant 1\).
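The derivative computations above are mechanical and can be checked symbolically; a sympy spot check (the numerical evaluation point is our own choice):

```python
import sympy as sp

t, a, r = sp.symbols('t a r', positive=True)
g = sp.uppergamma(a, t**r) / sp.gamma(a)       # the kernel Gamma(a, t^r)/Gamma(a)
g2 = sp.diff(g, t, 2)
target = r / sp.gamma(a) * (r * t**r + 1 - a * r) * t**(a * r - 2) * sp.exp(-t**r)

# numerical spot check of the closed form for g'' at an admissible point (a r < 1)
vals = {t: sp.Rational(7, 10), a: sp.Rational(1, 2), r: sp.Rational(6, 5)}
assert abs(float((g2 - target).subs(vals))) < 1e-12
```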

Example 5.7. For all \(n \in {\mathbb {N}}\) the Kucharczak-Urbanik convolution \(\vartriangle _{\alpha , n}\) is weak if \(\alpha \in (0,1]\), since its kernel \((1-|t|^{\alpha })_+^n\) is then non-increasing and convex on \((0,\infty )\) and the Pólya criterion applies.

Example 5.9. The Kendall-type convolutions \(\vartriangle _{c,\alpha ,p}\) with the probability kernel \(\varphi _{c,\alpha ,p}(t) = (1 - (c+1)t^{\alpha } + ct^{p\alpha }) {\textbf{1}}_{[0,1]}(t)\), \(p\geqslant 2\), \(\alpha >0\), are weak for \(\alpha \leqslant 1\) since then, in all admissible cases, \(\varphi _{c,\alpha ,p}'(t) \leqslant 0\) and \(\varphi _{c,\alpha ,p}''(t) \geqslant 0\) for all \(t \in [0,1]\). By the Pólya criterion this shows that \(\varphi _{c,\alpha ,p}(|t|)\), \(t\in {\mathbb {R}}\), is a characteristic function of some probability measure \(\mu \). This means that \(\vartriangle _{c,\alpha ,p} = \otimes _{\mu }\) is a weak generalized convolution.

Of course the \(\max \)-convolution and \(\diamondsuit _{p,\alpha }\) convolution cannot be weak generalized convolutions since they are not regular.

6 Lack of Memory Property

In the classical theory of stochastic processes a very important role is played by the Poisson process, built on a sequence of i.i.d. exponentially distributed random variables. This particular choice of distribution is due to the lack of memory property, satisfied exclusively by the exponential distribution. It turns out that a generalized convolution \(\diamond \) may or may not admit the existence of a distribution with the lack of memory property with respect to \(\diamond \). However, if such a distribution exists, then it is unique up to a scale coefficient. To analyze this notion more precisely we first need to define monotonic convolutions:

Definition 7

A generalized convolution \(\diamond \) on \({\mathcal {P}}_+\) is monotonic if for every \(x,y \geqslant 0\) we have

$$\begin{aligned} \delta _x \diamond \delta _y \bigl ( [x \vee y, \infty ) \bigr ) =1. \end{aligned}$$

Informally speaking, a generalized convolution is monotonic if the corresponding generalized sum of independent positive random variables cannot be smaller than the largest of them.

Example 6.0. Not every generalized convolution is monotonic. The best known convolution without this property is the Kingman (or Bessel) convolution since for every \(s >-\frac{1}{2}\) and \(x,y>0\) we have

$$\begin{aligned} \textrm{supp} \bigl ( \delta _x \otimes _{\omega _s} \delta _y \bigr ) = \bigl [ |x-y|, x+y \bigr ]. \end{aligned}$$

Definition 8

A probability measure \(\nu \in {\mathcal {P}}_+\) has the lack of memory property with respect to the generalized convolution \(\diamond \) if

$$\begin{aligned} {\textbf{P}} \left\{ X> x\diamond y \big | X>x \right\} = {\textbf{P}} \left\{ X>y \right\} , \quad x,y \geqslant 0, \end{aligned}$$

where X is a random variable with distribution \(\nu \) and \((x \diamond y)\) is any random variable with \({\mathcal {L}}(x \diamond y) = \delta _x \diamond \delta _y\), independent of X.

Remark 4

Notice that if the generalized convolution \(\diamond \) is monotonic then the equation from Definition 8 can be changed into

$$\begin{aligned} {\textbf{P}} \left\{ X> x\diamond y \right\} = {\textbf{P}} \left\{ X>x \right\} {\textbf{P}} \left\{ X>y \right\} , \quad x,y \geqslant 0. \end{aligned}$$

It was shown in [22], Prop. 5.2 that the measure \(\nu \in {\mathcal {P}}_+\) with the cumulative distribution function F has the lack of memory property with respect to the monotonic generalized convolution \(\diamond \) if and only if the probability kernel \(\Omega (t)\) is monotonically decreasing and \(F(t) = 1 - \Omega ( ct)\), \(t>0\), for some constant \(c>0\). In view of the previous considerations we have the following:

Theorem 4

Let \(\diamond \) be a monotonic generalized convolution with the probability kernel \(\varphi \). Then the following conditions are equivalent:

  (1) \(\varphi (t)\) is monotonically decreasing on \({\mathbb {R}}_+\) and \(\varphi (+\infty )=0\);

  (2) \( (1- \varphi (t)) {\textbf{1}}_{[0,\infty )}(t)\) is the cumulative distribution function of a measure with the lack of memory property;

  (3) \(\varphi (t^{-1}) {\textbf{1}}_{[0,\infty )}(t)\) is the cumulative distribution function of some probability measure.

Example 6.1

The classical convolution \(*\) is evidently monotonic and its probability kernel is \(e^{-t}\); thus it admits a distribution with the lack of memory property, which is well known to be the exponential distribution.

Example 6.2

The symmetric convolution \(\bowtie \) is not monotonic, since for \(x,y>0\)

$$\begin{aligned} \textrm{supp} \bigl (\delta _x \bowtie \delta _y \bigr ) = \bigl \{ |x-y|, x+y \bigr \}. \end{aligned}$$

Example 6.3

The \(\alpha \)-stable generalized convolution \(*_{\alpha }\) is monotonic and has the probability kernel \(\Omega (t) = e^{-t^{\alpha }}\). This function satisfies the assumptions of Theorem 4, thus \(*_{\alpha }\) admits a distribution with the lack of memory property, with the tail \(1- F_Z(t) = e^{-t^{\alpha }}\), \(t \geqslant 0\). The convolution \(*_{\alpha }\) is the \(\mu \)-weak generalized convolution with respect to the \(\triangledown \)-convolution, where \(\mu \) has the cumulative distribution function \(F(t) =1- F_Z(t^{-1})\) and the density

$$\begin{aligned} f(t) = \alpha t^{-\alpha -1} e^{-t^{-\alpha }} {\textbf{1}}_{(0,\infty )} (t). \end{aligned}$$

Example 6.4

The Kendall convolution \(\vartriangle _{\alpha }\) is monotonic, since \(\delta _a \vartriangle _{\alpha } \delta _b\), \(a,b >0\), is a measure supported in \([ a \vee b, \infty )\), and its probability kernel \(\Omega (t) = (1-t^{\alpha })_+\) satisfies the assumptions of Theorem 4; thus the measure \(\mu \) with the lack of memory property is \(\textrm{pow}(\alpha )\), since its cumulative distribution function is \( F(t) = t^{\alpha } {\textbf{1}}_{[0,1]}(t) +{\textbf{1}}_{[1,\infty )}(t)\).
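This example can be tested numerically. The following minimal sketch (Python; \(\alpha \), x, y are illustrative) samples \(x \vartriangle _{\alpha } y\) from the mixture form of \(\delta _x \vartriangle _{\alpha } \delta _1\) rescaled by \(x \vee y\) (cf. Example 9.4 below) and checks the product form of the lack of memory property from Remark 4 for \(X \sim \textrm{pow}(\alpha )\).

```python
# Monte Carlo check for the Kendall convolution:
#   P{ X > x <> y }  =  P{ X > x } P{ X > y },   X ~ pow(alpha).
import numpy as np

rng = np.random.default_rng(0)
alpha, x, y, n = 0.7, 0.4, 0.6, 10**6

M, rho = max(x, y), min(x, y) / max(x, y)
U = rng.random(n)
Pi = rng.random(n) ** (-1 / (2 * alpha))      # Pareto pi_{2 alpha} on [1, oo)
xy = M * np.where(U <= rho**alpha, Pi, 1.0)   # sample of x <> y

X = rng.random(n) ** (1 / alpha)              # pow(alpha): cdf t^alpha on [0, 1]
print(np.mean(X > xy))                        # empirical  P{ X > x <> y }
print((1 - x**alpha) * (1 - y**alpha))        # exact      P{ X > x } P{ X > y }
```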

Example 6.5

The \(\max \)-convolution is evidently monotonic and its distribution with the lack of memory property is \(\delta _1\). Note that the corresponding Poisson process is rather dull, since it does not move at all: \(\max \{1,1\}=1 = \max \{1, \max \{1,1\}\}\).

Example 6.6

The Kucharczak convolution for \(a\in (0,1]\), \(r>0\), is monotonic and its probability kernel is given by \(\Omega (t) = \frac{\Gamma (a, t^r)}{\Gamma (a)}\), \(t>0\). Thus the corresponding distribution with the lack of memory property is a generalized gamma distribution (the Weibull distribution when \(a=1\)) with the distribution function F such that \(F(t) = (1-\Omega (t)) {\textbf{1}}_{[0,\infty )}(t)\) and the density

$$\begin{aligned} F'(t) = \frac{r}{\Gamma (a)} \, t^{ar-1} e^{-t^{r}} {\textbf{1}}_{(0,\infty )} (t). \end{aligned}$$

Example 6.7

The Kucharczak-Urbanik generalized convolution is monotonic and the function

$$\begin{aligned} f(t) = n \alpha t^{\alpha -1} \bigl ( 1 - t^{\alpha } \bigr )_+^{n-1} \end{aligned}$$

is the density of its distribution with lack of memory property.

Example 6.8

The \(\diamondsuit _{p,\alpha }\) generalized convolution is not regular but it is monotonic. It admits the existence of a distribution \(\lambda \) with lack of memory property, defined by

$$\begin{aligned} \lambda (dx) = \alpha p x^{\alpha -1} {\textbf{1}}_{(0,1)}(x)\, dx + (1-p) \delta _1(dx). \end{aligned}$$

Example 6.9

The Kendall type convolutions are monotonic since their probability kernels \(\varphi _{c,\alpha ,p}\) are monotonically decreasing. The measure with the lack of memory property has density

$$\begin{aligned} f_{c,\alpha ,p}(x) = \alpha \bigl [ 1+c - cp x^{\alpha (p-1)} \bigr ] x^{\alpha -1} \, {\textbf{1}}_{(0,1)}(x). \end{aligned}$$
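As a sanity check, one can verify symbolically that \(f_{c,\alpha ,p}\) is the derivative of \(1-\varphi _{c,\alpha ,p}\) on (0, 1) and that it integrates to one; a minimal sketch:

```python
# Symbolic sanity check for Example 6.9.
import sympy as sp

x = sp.symbols('x', positive=True)
a, c, p = sp.symbols('alpha c p', positive=True)

phi = 1 - (c + 1) * x**a + c * x**(p * a)
f = a * ((1 + c) - c * p * x**(a * (p - 1))) * x**(a - 1)

print(sp.simplify(f - sp.diff(1 - phi, x)))     # expect 0
print(sp.simplify(sp.integrate(f, (x, 0, 1))))  # expect 1
```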

7 The Kendall Convolution Expressed by the \(\max \)-Convolution

We can replace the classical convolution in the condition defining weak stability by any generalized convolution \(\diamond \), as was done by Kucharczak and Urbanik in [30] and by Jasiulis-Gołdyn and Kula in [19]:

Definition 9

Let \(\diamond \) be a generalized convolution on \({\mathcal {P}}_+\). A distribution \(\mu \) is weakly stable with respect to \(\diamond \) \((\diamond \)-weakly stable) if

$$\begin{aligned} \forall \, a,b \geqslant 0 \,\, \exists \, \lambda = \lambda _{a,b} \in {\mathcal {P}}_+ \quad T_a \mu \diamond T_b \mu = \lambda \circ \mu . \end{aligned}$$

Distributions weakly stable with respect to \(\diamond \) define a new generalized convolution, called the weak generalized convolution with respect to \(\diamond \).

Definition 10

Let \(\mu \) be a weakly stable measure with respect to the generalized convolution \(\diamond \). Then a \(\mu \)-weak generalized convolution \(\otimes _{\mu ,\diamond }\) with respect to \(\diamond \) is defined as follows: for any \(a,b \geqslant 0\)

$$\begin{aligned} \delta _a \otimes _{\mu ,\diamond } \delta _b = \lambda \quad \hbox { if } \quad T_a \mu \diamond T_b \mu = \lambda \circ \mu . \end{aligned}$$

Equivalently we can say that for every \(\lambda _1, \lambda _2, \lambda \in {\mathcal {P}}_+\)

$$\begin{aligned} \lambda _1 \otimes _{\mu , \diamond } \lambda _2 = \lambda \quad \hbox { if } \quad \bigl (\lambda _1 \circ \mu \bigr ) \diamond \bigl (\lambda _2 \circ \mu \bigr ) = \lambda \circ \mu . \end{aligned}$$

Even though the conditions described in Definitions 9 and 10 suggest a strict connection between \(\diamond \)-weakly stable distributions and \(\diamond \)-stable distributions, this is not the case. The measure \(\lambda \) is \(\diamond \)-stable if

$$\begin{aligned} \forall \, a,b \geqslant 0 \,\, \exists \, c >0, \, \exists \, d \in {\mathbb {R}} \quad T_a \lambda \diamond T_b \lambda = T_c \lambda \diamond \delta _{d}. \end{aligned}$$
(11)

If \(d = d(a,b) \equiv 0\), then the measure \(\lambda \) is called \(\diamond \)-strictly stable and the generalized characteristic function of \(\lambda \) is of the form \( \Phi _{\lambda }^{\diamond } (t) = e^{-A t^{\alpha }}\) for some \(A\geqslant 0\) and \(\alpha >0\) (see [49, 50, 53]). The \(\diamond \)-stable measures which are not \(\diamond \)-strictly stable are studied in a series of papers [17, 18, 38, 40], but we still do not have their complete characterization, even in the seemingly easier case of weak generalized convolutions.

The following theorem is a continuation of Theorem 4, which describes the lack of memory property:

Theorem 5

Let \(\diamond \) be a monotonic generalized convolution with the probability kernel \(\varphi \). Then the following conditions are equivalent:

  (1) \(\diamond \) admits the existence of a distribution with the lack of memory property;

  (4) \(\diamond \) is a weak generalized convolution with respect to the \(\triangledown \) convolution based on the \(\triangledown \)-weakly stable measure \(\mu \) with the distribution function \(\varphi (t^{-1}) {\textbf{1}}_{[0,\infty )}(t)\), i.e. \(\diamond = \otimes _{\mu , \triangledown }\).

Proof

Only the implication \(1) \rightarrow 4)\) requires explanation: by 3) of Theorem 4 we can consider a random variable X with the cumulative distribution function of the form \(F_X(t):= \varphi (t^{-1}) {\textbf{1}}_{[0,\infty )}(t)\). Since \(\varphi :[0,\infty ) \rightarrow {\mathbb {R}}\) is the probability kernel of \(\diamond \), for \(a,b >0\) we have

$$\begin{aligned} F_{\max \{ aX, bX'\}} (t)&= F_{aX}(t) F_{bX'}(t) = F_X(ta^{-1}) F_X(tb^{-1}) \\&= \int _0^{\infty } \varphi (t^{-1}s)\, \delta _a(ds) \cdot \int _0^{\infty } \varphi (t^{-1}s)\, \delta _b(ds) \\&= \int _0^{\infty } \varphi (t^{-1}s)\, (\delta _a \diamond \delta _b) (ds) = F_{\theta X}(t), \end{aligned}$$

where \(X'\) is an independent copy of X, \({\mathcal {L}}(\theta ) = \delta _a \diamond \delta _b\) and \(\theta \) is independent of X. \(\square \)

Remark 5

By Theorems 4 and 5 we know that the generalized convolution \(\diamond \) has a kernel \(\Omega \) that is monotonically decreasing to zero iff \(\diamond = \otimes _{\mu , \triangledown }\), where \(\mu \) is a \(\triangledown \)-weakly stable probability measure with the cumulative distribution function \(F(t):= \Omega (t^{-1}) {\textbf{1}}_{[0,\infty )}(t)\) and

$$\begin{aligned} \max \bigl \{ \theta _1 X_1, \theta _2 X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta Z, \end{aligned}$$
(12)

where \(\theta , \theta _1, \theta _2\) are i.i.d. with cumulative distribution function F, \({\mathcal {L}}(X_1) \diamond {\mathcal {L}}(X_2) = {\mathcal {L}}(Z)\) such that \(\theta ,\theta _1, \theta _2, X_1, X_2, Z\) are independent.

Remark 6

Notice that the measure \(\mu \) with cumulative distribution function F is weakly stable with respect to \(\triangledown \)-convolution if

$$\begin{aligned} \forall \, x,y > 0 \,\, \exists \, \lambda \in {\mathcal {P}}_+ \,\, \forall \, t > 0 \quad F(xt) F(yt) = \int _0^{\infty } F(st) \lambda (ds). \end{aligned}$$

We do not have here a complete solution of this integral-functional equation, but we present a rich list of examples connected with selected generalized convolutions.

Example 7.1

There is a surprising connection between the classical convolution and the \(\max \)-convolution. The classical convolution \(*\) on \({\mathcal {P}}_+\) has the probability kernel \(\Omega (t) = e^{-t} {\textbf{1}}_{[0,\infty )}(t)\), which satisfies the assumptions of Theorem 4. Thus the measure \(\mu \) with the cumulative distribution function \(F(t) = e^{-t^{-1}} {\textbf{1}}_{[0,\infty )}(t)\) and the density \(f(t) = t^{-2} e^{-t^{-1}} {\textbf{1}}_{(0,\infty )}(t)\) is \(\triangledown \)-weakly stable, \(*= \otimes _{\mu , \triangledown }\), and

$$\begin{aligned} \max \bigl \{ \theta _1 X_1,\, \theta _2 X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta _1 \bigl ( X_1 + X_2\bigr ), \end{aligned}$$
(13)

where \(\theta _1, \theta _2\) have distribution \(\mu \) and \(X_1, X_2\) are arbitrary non-negative random variables such that \(\theta _1, \theta _2, X_1, X_2\) are independent.

Remark 7

The equality (13) is also a simple consequence of the lack of memory property of the exponential distribution if we notice that \({1/\theta _i}\) has the exponential distribution with expectation 1: For any \(u>0\)

$$\begin{aligned} {\textbf{P}} \left\{ \theta _1 \bigl (X_1 + X_2\bigr )<u \right\}&= {\textbf{P}} \left\{ \theta _1^{-1}> u^{-1} \bigl (X_1 + X_2\bigr ) \right\} \\&{\mathop {=}\limits ^{*}}{\textbf{P}} \left\{ \theta _1^{-1}> u^{-1} X_1 \right\} \,{\textbf{P}} \left\{ \theta _2^{-1} > u^{-1} X_2 \right\} \\ {}&={\textbf{P}}\left\{ \max \{\theta _1X_1,\,\theta _2X_2\}<u\right\} , \end{aligned}$$

where \({\mathop {=}\limits ^{*}}\) follows, upon conditioning with respect to \((X_1,X_2)\), by the lack of memory property of \(\theta _1^{-1}\).
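The equality (13) is also easy to test by simulation; a minimal sketch (Python; the laws of \(X_1, X_2\) below are arbitrary illustrative choices):

```python
# Monte Carlo check of (13):  max{th1 X1, th2 X2}  =d  th1 (X1 + X2),
# where 1/th_i ~ Exp(1), i.e. th_i has the cdf exp(-1/t).
import numpy as np

rng = np.random.default_rng(1)
n = 10**6
th1 = 1 / rng.exponential(size=n)
th2 = 1 / rng.exponential(size=n)
X1 = rng.uniform(0, 2, size=n)      # arbitrary non-negative laws
X2 = rng.gamma(2.0, size=n)

lhs = np.maximum(th1 * X1, th2 * X2)
rhs = th1 * (X1 + X2)
for q in (0.25, 0.5, 0.75, 0.9):    # compare empirical quantiles
    print(q, np.quantile(lhs, q), np.quantile(rhs, q))
```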

Example 7.3. The stable convolution \(*_{\alpha }\) has the probability kernel \(e^{-t^{\alpha }}\), \(\alpha >0\), which satisfies the assumptions of Theorem 4. Consequently the measure \(\mu \) with the cumulative distribution function \(F(t) = e^{-t^{-\alpha }} {\textbf{1}}_{[0,\infty )}(t)\) and the density

$$\begin{aligned} f(t) = \alpha t^{-\alpha -1} e^{-t^{-\alpha }} {\textbf{1}}_{[0,\infty )} (t) \end{aligned}$$

is \(\triangledown \)-weakly stable and \(*_{\alpha } = \otimes _{\mu , \triangledown }\). This leads to an interesting property: if \(\theta _1, \theta _2\) have distributions with the density function f and the variables \(\theta _1, \theta _2, X_1, X_2\) are non-negative and independent, then

$$\begin{aligned} \max \bigl \{ \theta _1 X_1, \theta _2 X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta _1 \left( X_1^{\alpha } + X_2^{\alpha } \right) ^{1/{\alpha }}. \end{aligned}$$

Example 7.4. For the Kendall convolution \(\vartriangle _{\alpha }\), \(\alpha >0\), the probability kernel \((1-t^{\alpha })_+\) satisfies the assumptions of Theorem 4; thus \(\vartriangle _{\alpha } = \otimes _{\mu , \triangledown }\), where \(\mu \) is the measure with the cumulative distribution function \(F(t) = ( 1-t^{-\alpha }) {\textbf{1}}_{[1,\infty )}(t)\), i.e. \(\mu = \pi _{\alpha }\). Consequently, if \(\theta _1, \theta _2\) have the distribution \(\pi _{\alpha }\) and the variables \(\theta _1, \theta _2, X_1, X_2\) are non-negative and independent, then

$$\begin{aligned} \max \bigl \{ \theta _1 X_1, \theta _2 X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta _1 \left( X_1 \vartriangle _{\alpha } X_2 \right) , \end{aligned}$$

where \(\left( X_1 \vartriangle _{\alpha } X_2 \right) \) is any random variable with distribution \({\mathcal {L}}(X_1) \vartriangle _{\alpha } {\mathcal {L}}(X_2)\) independent of \(\theta _1\).

Example 7.5. Notice that the following, rather trivial, property holds:

$$\begin{aligned} \forall \, x,y,t >0 \qquad {\textbf{1}}_{[0,1]} (xt) {\textbf{1}}_{[0,1]} (yt) = \int _0^{\infty } {\textbf{1}}_{[0,1]} (st) \delta _{\max \{x,y\}} (ds). \end{aligned}$$

This means that the cumulative distribution function \(F(t) = {\textbf{1}}_{[0,1]}(t^{-1})\) corresponds to the measure \(\delta _1\), which is weakly stable with respect to the \(\max \)-convolution. This seems to be interesting, but it is only another way to describe the following trivial property:

$$\begin{aligned} \max \{ \theta _1 X_1, \theta _2 X_2 \} {\mathop {=}\limits ^{d}} \theta _1 \max \{ X_1, X_2\} \end{aligned}$$

for \(X_1, X_2, \theta _1, \theta _2\) independent, \({\mathcal {L}} (\theta _1) = {\mathcal {L}}(\theta _2) = \delta _1\).

Example 7.6. The Kucharczak convolution has the probability kernel \(\Omega (t) ={{\Gamma (a,t^r)}/{\Gamma (a)}}\) satisfying the assumptions of Theorem 4, thus \(F(t) = \Omega (t^{-1}) {\textbf{1}}_{[0,\infty )}(t)\) is the cumulative distribution function of a \(\triangledown \)-weakly stable measure \(\mu \) with

$$\begin{aligned} f(t):= F'(t) = \frac{r}{\Gamma (a)} \, t^{-ar-1} e^{-t^{-r}} {\textbf{1}}_{(0,\infty )} (t). \end{aligned}$$

Again we have: if \(\theta _1, \theta _2\) have distributions with the density function f and the variables \(\theta _1, \theta _2, X_1, X_2\) are non-negative and independent, then

$$\begin{aligned} \max \bigl \{ \theta _1 X_1, \theta _2 X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta _1 \left( X_1 \circ _1 X_2 \right) , \end{aligned}$$

where \(\left( X_1 \circ _1 X_2 \right) \) is any random variable with distribution \({\mathcal {L}}(X_1) \circ _1 {\mathcal {L}}(X_2)\) independent of \(\theta _1\).

Example 7.7. The Kucharczak-Urbanik convolution \(\vartriangle _{\alpha ,n}\) can be defined by the probability kernel \(\Omega _{\alpha ,n}(t) = (1-t^{\alpha })_+^n\) and its property: for all \(\mu _1, \mu _2 \in {\mathcal {P}}_+\) there exists \(\mu =: \mu _1 \vartriangle _{\alpha ,n} \mu _2\) such that

$$\begin{aligned} \int _0^{\infty }\! \Omega _{\alpha ,n}(tx) \mu _1(dx) \int _0^{\infty } \!\Omega _{\alpha ,n}(ty) \mu _2(dy) = \int _0^{\infty }\! \Omega _{\alpha ,n}(tx) \mu (dx). \end{aligned}$$

Evidently the function \(\Omega _{\alpha ,n}(t)\) satisfies the assumptions of Theorem 4, thus the distribution \(\mu _n\) of the random variable \(\theta _n\) with the cumulative distribution function \(F_{\alpha , n} (t) = (1-t^{-\alpha })^n {\textbf{1}}_{[1,\infty )}(t)\) is weakly stable with respect to the \(\max \)-convolution \(\triangledown \). Moreover, the Kucharczak-Urbanik convolution \(\vartriangle _{\alpha ,n}\) is the weak generalized convolution with respect to the \(\max \)-convolution, i.e. \(\vartriangle _{\alpha ,n} = \otimes _{\mu _n, \triangledown }\), and

$$\begin{aligned} {\mathcal {L}}(X_1) \vartriangle _{\alpha , n} {\mathcal {L}}(X_2) = {\mathcal {L}} (Z) \quad \hbox { iff } \quad \max \bigl \{ \theta _n X_1, \theta _n' X_2 \bigr \} {\mathop {=}\limits ^{d}} \theta _n Z, \end{aligned}$$

where \(\theta _n, \theta _n'\) are i.i.d. with the distribution \(\mu _n\) such that \(\theta _n, \theta _n', X_1, X_2, Z\) are independent. It is worth noticing also that if \(Q_1, \dots Q_n\) are i.i.d. random variables with Pareto distribution \(\pi _{\alpha }\) then

$$\begin{aligned} \theta _n {\mathop {=}\limits ^{d}} \max \bigl \{ Q_1, \dots , Q_n \bigr \}. \end{aligned}$$

Example 7.9. For the Kendall-type generalized convolution \(\vartriangle _{c, \alpha , p}\) the probability kernel

$$\begin{aligned} \varphi _{c,\alpha ,p}(t) = \left( 1 - (1+c) t^{\alpha } + ct^{\alpha p} \right) {\textbf{1}}_{[0,1]}(t) \end{aligned}$$

is the tail of some cumulative distribution function. By Theorem 4, \(\varphi _{c,\alpha ,p} (t) {\textbf{1}}_{[0,1]}(t)\) is the tail of the distribution function of a measure with the lack of memory property with respect to the \(\vartriangle _{c,\alpha ,p}\) convolution, and by Theorem 5 each Kendall-type generalized convolution is a \(\mu \)-weak generalized convolution with respect to the \(\max \)-convolution \(\triangledown \), where \(\mu \in {\mathcal {P}}_+\) has the cumulative distribution function \(F(t):= \varphi _{c,\alpha ,p} (t^{-1}) {\textbf{1}}_{[1,\infty )}(t)\).

8 Convex Linear Combination Property

In this section we give a collection of examples of generalized convolutions with the convex linear combination property. The generalized Kendall convolution is one of these examples.

Definition 11

The generalized convolution \(\diamond \) on \({\mathcal {P}}_+\) has the convex linear combination property with parameter \(n \in {\mathbb {N}}\), \(n \geqslant 2\), if there exist functions \(p_0, \dots , p_{n-1} :[0,1] \mapsto [0,1]\), \(\sum _{k=0}^{n-1} p_k(x) \equiv 1\) and there exist measures \(\lambda _0, \dots , \lambda _{n-1} \in {\mathcal {P}}_+\) such that

$$\begin{aligned} \forall \, x \in [0,1] \quad \quad \delta _x \diamond \delta _1 = \sum _{k=0}^{n-1} p_k(x) \lambda _k. \end{aligned}$$

Example 8.4. It is evident that the Kendall convolution has the convex linear combination property with the parameter \(n=2\). In fact we know much more, see [20]: it is the only regular generalized convolution with the convex linear combination property for \(n=2\).

Example 8.5. The max-convolution (which is not regular) is a trivial example of a generalized convolution with the convex linear combination property with \(n=1\), since \(\delta _x \triangledown \delta _1 = \delta _1\) for \(x\in [0,1]\).

Example 8.7. The Kucharczak-Urbanik convolution \(\vartriangle _{\alpha ,n}\), \(\alpha >0\), \(n\in {\mathbb {N}}\), is another example of a generalized convolution with the convex linear combination property, here with parameter \(n+1\), since by equation (3)

$$\begin{aligned} \delta _x \vartriangle _{\alpha ,n} \delta _1 (ds):=(1 - x^{\alpha })^{n} \delta _1(ds) + \sum _{k=1}^n \binom{n}{k} x^{\alpha k} (1 - x^{\alpha })^{n-k} \mu _{k,n} (ds), \end{aligned}$$

where \(\mu _{k,n}\) are the probability measures with densities given by (4).

Example 8.8. Each of the non-regular generalized convolutions \(\diamondsuit _{p,\alpha }\), \(p\in [0,1]\), \(\alpha >0\), described by the probability kernel \(\Omega _{\diamondsuit _{p,\alpha }}(t) = ( 1 - p t^{\alpha }) {\textbf{1}}_{[0,1]}(t)\), has the convex linear combination property with \(n=2\). For \(p \ne \frac{1}{2}\) the \(\diamondsuit _{p,\alpha }\)-convolution is uniquely determined by

$$\begin{aligned} \delta _x \diamondsuit _{p,\alpha } \delta _1 (ds)= (1-px^{\alpha })\, \delta _1(ds) +px^{\alpha } \,\tfrac{\alpha }{2p-1} \tfrac{2p - s^{q}}{s^{2\alpha +1}} {\textbf{1}}_{[1,\infty )}(s) ds,\quad x\in [0,1], \end{aligned}$$

where \(q = \frac{\alpha (1-2p)}{(1-p)}\). By continuity, for \(p\rightarrow {1/2}\) we have

$$\begin{aligned} \delta _x \diamondsuit _{1/2,\alpha } \delta _1 (ds)= \bigl (1-\tfrac{x^{\alpha }}{2} \bigr )\, \delta _1(ds) + \tfrac{x^{\alpha }}{2}\, \tfrac{\alpha ( 1 + 2 \alpha \ln s)}{s^{2\alpha +1}}{\textbf{1}}_{[1,\infty )}(s)\, ds. \end{aligned}$$
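Both densities above should have total mass one, so that \(\delta _x \diamondsuit _{p,\alpha } \delta _1\) is a probability measure; this can be checked symbolically, as in the following minimal sketch (illustrative values of p and \(\alpha \)):

```python
# Symbolic mass check for Example 8.8 (p != 1/2 and the limit p = 1/2).
import sympy as sp

s = sp.symbols('s', positive=True)
a, p = sp.Rational(3, 2), sp.Rational(1, 4)   # illustrative values
q = a * (1 - 2 * p) / (1 - p)

dens = a / (2 * p - 1) * (2 * p - s**q) / s**(2 * a + 1)
print(sp.integrate(dens, (s, 1, sp.oo)))      # expect 1

dens_half = a * (1 + 2 * a * sp.log(s)) / s**(2 * a + 1)
print(sp.integrate(dens_half, (s, 1, sp.oo))) # expect 1
```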

Example 8.9. Notice that for Kendall-type generalized convolutions \(\vartriangle _{c,\alpha ,p}\) in each of the five admissible cases described in [37] we have \(\varphi _{c,\alpha ,p}(0) = 1\), \(\varphi _{c,\alpha ,p}(1) = \varphi _{c,\alpha ,p}(+\infty ) = 0\) and

$$\begin{aligned} \delta _x \vartriangle _{c,\alpha ,p} \delta _1 = \varphi _{c,\alpha ,p}(x) \, \delta _1 + x^{\alpha p} \, \lambda _1 + (c+1)(x^{\alpha } -x^{\alpha p})\, \lambda _2, \end{aligned}$$

for some probability measures \(\lambda _1, \lambda _2 \in {\mathcal {P}}_+\). This means that \(\vartriangle _{c,\alpha ,p}\) has the convex linear combination property with \(n=3\).

9 Description by Random Variables

While constructing stochastic processes with independent increments in the sense of a generalized convolution, it turns out that serious difficulties appear when we study path properties of such processes. This was the reason why the authors of [7] introduced Definition 6.2 of representability for weak generalized convolutions. Roughly speaking, the weak generalized convolution \(\diamond \) is representable if there exists a method of unique, explicit choice of a random variable X for which \({\mathcal {L}}(X) = \mu _1 \diamond \mu _2\). The proper definition of representability of a generalized convolution requires more conditions if it is supposed to be used in constructing stochastic processes by their paths; for details see Definition 6.2 in [7].

For convenience, we denote by \(\theta _1 \diamond \theta _2\) any random variable with the distribution \({\mathcal {L}}(\theta _1) \diamond {\mathcal {L}}(\theta _2)\) if \(\theta _1, \theta _2\) are non-negative and independent.

Example 9.0. There are at least three methods of representing the Kingman convolution \(\otimes _{\omega _s}\):

  (1)

    If \(n = 2(s+1) \in {\mathbb {N}}\), then we use the weakly stable random vector \({\textbf{U}} = (U_1, \dots ,U_n)\) with the uniform distribution on the unit sphere \(S_n\) in \({\mathbb {R}}^n\). Then for independent random variables \(\theta _1, \theta _2\) we choose independent copies \({\textbf{U}}_1, {\textbf{U}}_2\) of \({\textbf{U}}\) such that \(\theta _1, \theta _2, {\textbf{U}}_1, {\textbf{U}}_2\) are independent. Next we define an adding operation on the pairs \((\theta _i, {\textbf{U}}_i)\), \(i=1,2\), by

    $$\begin{aligned} \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2 = \Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2 \Vert _2 \cdot \frac{\theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2}{\Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2\Vert _2} \end{aligned}$$
    (14)

    where \(\Vert \cdot \Vert _2\) denotes the Euclidean norm in \({\mathbb {R}}^n\). The two product factors on the right are independent and

    $$\begin{aligned} \Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2 \Vert _2 {\mathop {=}\limits ^{d}} \theta _1 \otimes _{\omega _s} \theta _2, \qquad \frac{\theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2}{\Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2\Vert _2} {\mathop {=}\limits ^{d}} {\textbf{U}}_1. \end{aligned}$$

    We see that the equality (14) is the equality (\(*\)) given in Sect. 5, following Definition 5, written in the language of random elements, where \(\mu = {\mathcal {L}}(U)\) and

    $$\begin{aligned} \theta&= \theta (\theta _1, {\textbf{U}}_1, \theta _2, {\textbf{U}}_2) = \Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2 \Vert _2, \\ {\textbf{U}}&= {\textbf{U}}( \theta _1, {\textbf{U}}_1, \theta _2, {\textbf{U}}_2) = \frac{\theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2}{\Vert \theta _1 {\textbf{U}}_1 + \theta _2 {\textbf{U}}_2\Vert _2}. \end{aligned}$$
  (2)

    Recently Misiewicz and Volkovich showed in [39] that for arbitrary \(s> - \frac{1}{2}\) the random vector \({\textbf{W}} = (W_1, W_2)\) with the density proportional to \((1 - x^2 - y^2)^{s - \frac{1}{2}}\) is weakly stable. Moreover, for every choice of independent \(\theta _1, \theta _2\) and independent copies \({\textbf{W}}_1, {\textbf{W}}_2\) of \({\textbf{W}}\) such that \(\theta _1, \theta _2, {\textbf{W}}_1, {\textbf{W}}_2\) are independent, we have

    $$\begin{aligned} \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2 = \Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2 \Vert _2 \cdot \frac{\theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2}{\Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2\Vert _2}. \end{aligned}$$
    (15)

    The two product factors on the right are independent and

    $$\begin{aligned} \Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2 \Vert _2 {\mathop {=}\limits ^{d}} \theta _1 \otimes _{\omega _s} \theta _2, \qquad \frac{\theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2}{\Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2\Vert _2} {\mathop {=}\limits ^{d}} {\textbf{W}}_1. \end{aligned}$$

    The equality (15) is the equality from Definition 6 in Sect. 5, written in the sense of equality almost everywhere and

    $$\begin{aligned} \theta&= \theta (\theta _1, {\textbf{W}}_1, \theta _2, {\textbf{W}}_2) = \Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2 \Vert _2, \\ {\textbf{W}}&= {\textbf{W}}( \theta _1, {\textbf{W}}_1, \theta _2, {\textbf{W}}_2) = \frac{\theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2}{\Vert \theta _1 {\textbf{W}}_1 + \theta _2 {\textbf{W}}_2\Vert _2}. \end{aligned}$$

    Notice that \({\textbf{W}}\) can be identified with the vector \((\cos \phi , \sin \phi )\), where \(\phi \) is a random variable with the density proportional to \((\sin ^2\varphi )^{s+ \frac{1}{2}}\) on the interval \([0,2\pi ]\). Moreover, the vector \({\textbf{W}}\) lives on the unit sphere in \({\mathbb {R}}^2\), but it does not have the uniform distribution there.

  (3)

    For any \(s> - \frac{1}{2}\) Kingman in [29] gave the following explicit formula for the random variable \(\theta _1 \otimes _{s} \theta _2\):

    $$\begin{aligned} \theta _1 \otimes _{s} \theta _2 {\mathop {=}\limits ^{d}} \sqrt{\theta _1^2 + \theta _2^2 + 2 \theta _1 \theta _2 \cos \phi }, \end{aligned}$$

    where \(\phi \) is a random variable with the density proportional to the function \((\sin ^2\varphi )^{s+ \frac{1}{2}}\) on the interval \([0,2\pi ]\). It is known (and easy to check) that if \(\phi _1, \phi _2\) are independent copies of \(\phi \) then \(\cos (\phi _1 - \phi _2) {\mathop {=}\limits ^{d}} \cos \phi \). This leads to the following interpretation of Kingman: if Q is a vector of length \(\theta \) forming the angle \(\varphi \) with a fixed straight line, then we write \(Q = (\theta , \cos \varphi )\). Consequently, using elementary geometry, we have

    $$\begin{aligned} \bigl (\theta _1, \cos \varphi _1\bigr ) \oplus \bigl (\theta _2, \cos \varphi _2 \bigr ) {\mathop {=}\limits ^{def}} \Bigl ( \sqrt{\theta _1^2 + \theta _2^2 + 2 \theta _1 \theta _2 \cos (\varphi _1 - \varphi _2)}, \cos (\varphi _1 - \varphi _2)\Bigr ), \end{aligned}$$

    and, by the previous considerations,

    $$\begin{aligned} \sqrt{\theta _1^2 + \theta _2^2 + 2 \theta _1 \theta _2 \cos (\phi _1 - \phi _2)} {\mathop {=}\limits ^{d}} \theta _1 \otimes _{s} \theta _2. \end{aligned}$$

In view of Example 9.0 we see that the random variable \(\theta _1 \diamond \theta _2\) with the distribution \({\mathcal {L}}(\theta _1) \diamond {\mathcal {L}}(\theta _2)\) can be expressed in many different ways. If we want to base the construction of stochastic processes with independent (with respect to the generalized convolution \(\diamond \)) increments on such a representation, only the first representation, Example 9.0(1), is admissible; for details see [7].

Theorem 6

If there exists a function \(\psi :{\mathbb {R}}^2 \mapsto {\mathbb {R}}\) such that

$$\begin{aligned} \Psi (\theta _1, \theta _2) (\omega ) = \psi (\theta _1(\omega ), \theta _2(\omega )) \quad a.e. \end{aligned}$$

for all independent \(\theta _1, \theta _2\) then there exists \(\alpha \in (0, \infty ]\) such that

$$\begin{aligned} \psi (x,y) = \bigl ( |x|^{\alpha } + |y|^{\alpha } \bigr )^{1/{\alpha }}, \qquad x,y \in {\mathbb {R}}, \end{aligned}$$

which follows from the Bohnenblust theorem (for details see [7]).

Almost trivially we have the following representations of the convolutions discussed here by random variables:

$$\begin{aligned} \theta _1 * \theta _2&= \theta _1 + \theta _2, \qquad \theta _1 *_{\alpha } \theta _2 = \bigl ( \theta _1^{\alpha } + \theta _2^{\alpha } \bigr )^{1/{\alpha }}, \\ \theta _1 \triangledown \theta _2&= \max \{ \theta _1, \theta _2 \}, \qquad \theta _1 \bowtie \theta _2 = \bigl | \theta _1 + (-1)^{Q} \theta _2 \bigr |, \end{aligned}$$

where \({\textbf{P}}\{ Q=1\} = {\textbf{P}}\{ Q = 0\} = \frac{1}{2}\) such that \(Q, \theta _1, \theta _2\) are independent.

Theorem 7

Assume that the generalized convolution \(\diamond \) on \({\mathcal {P}}_+\) has the convex linear combination property. Then \(\diamond \) is represented by random variables.

Proof

Assume that \({\mathcal {L}}(\theta _1) = \mu _1\) and \({\mathcal {L}}(\theta _2) = \mu _2\), where \(\theta _1, \theta _2\) are independent. By our assumptions there exist \(n\in {\mathbb {N}}\), functions \(p_0, \dots , p_{n-1} :[0,1] \mapsto [0,1]\) with \(\sum _{k=0}^{n-1} p_k(x) = 1\) for all \(x\in [0,1]\), and measures \(\lambda _0, \dots , \lambda _{n-1} \in {\mathcal {P}}_+\) such that

$$\begin{aligned} \forall \, x \in [0,1] \qquad \delta _x \diamond \delta _1 = \sum _{k=0}^{n-1} p_k(x) \, \lambda _k. \end{aligned}$$
(16)

Now we define some auxiliary random variables: \(M= \max \{\theta _1, \theta _2\}\), \(m = \min \{ \theta _1, \theta _2 \}\) and \(\varrho = \varrho (\theta _1, \theta _2):= {m/M}\). For the numbers \(s_0(x) = p_0(x)\) and \(s_k(x) = \sum _{j=0}^{k} p_j(x)\), \(k=1, \dots , n-1\), we define a sequence of intervals: \(A_0(x) = [0, p_0(x)]\) and

$$\begin{aligned} A_k(x) = \bigl (s_{k-1}(x), s_{k}(x)\bigr ], \quad k=1,\dots , n-1. \end{aligned}$$

Of course \(\bigcup _{k=0}^{n-1} A_k(x) = [0,1]\) for all \(x \in [0,1]\). Now we choose random variables \(Q_0, \dots , Q_{n-1}\) with the distributions \(\lambda _0, \dots , \lambda _{n-1}\), respectively, and a random variable U with the uniform distribution on the interval [0, 1], such that \(\theta _1, \theta _2, U, Q_0, \dots , Q_{n-1}\) are independent. Now we are able to define the random variables representing the convolution \(\mu _1 \diamond \mu _2\):

$$\begin{aligned} x \diamond 1 {\mathop {=}\limits ^{d}} \sum _{k=0}^{n-1} {\textbf{1}}_{A_k(x)} (U) Q_k, \end{aligned}$$

and

$$\begin{aligned} \mu _1 \diamond \mu _2 = {\mathcal {L}} \Biggl ( M \sum _{k=0}^{n-1} {\textbf{1}}_{A_k(\varrho )} (U) \,Q_k \Biggr ). \end{aligned}$$

\(\square \)
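The proof is constructive and straightforward to implement. Below is a minimal sketch (Python; the function and argument names are ours), instantiated at the end with the Kendall data \(p_0(x) = 1-x^{\alpha }\), \(p_1(x) = x^{\alpha }\), \(\lambda _0 = \delta _1\), \(\lambda _1 = \pi _{2\alpha }\) for \(\alpha =1\) (cf. Example 9.4 below).

```python
# Sampler based on the proof of Theorem 7: draws theta_1 <> theta_2 from
# the convex-linear-combination data (weights p_k, components lambda_k).
import numpy as np

def sample_diamond(theta1, theta2, p_funcs, component_samplers, rng):
    """theta1, theta2: equal-length arrays of independent samples;
    p_funcs[k](x): weight p_k(x) (the weights sum to 1 on [0, 1]);
    component_samplers[k](size): draws from lambda_k."""
    M = np.maximum(theta1, theta2)
    rho = np.minimum(theta1, theta2) / np.where(M > 0, M, 1.0)
    cum = np.cumsum(np.stack([p(rho) for p in p_funcs]), axis=0)  # s_k(rho)
    U = rng.random(theta1.size)
    idx = (U[None, :] > cum).sum(axis=0)          # the k with U in A_k(rho)
    Q = np.stack([smp(theta1.size) for smp in component_samplers])
    return M * Q[idx, np.arange(theta1.size)]

rng = np.random.default_rng(2)
th1, th2 = rng.random(10**5), rng.random(10**5)
sample = sample_diamond(th1, th2,
                        [lambda x: 1 - x, lambda x: x],
                        [lambda m: np.ones(m),
                         lambda m: rng.random(m) ** (-1 / 2)],  # Pareto pi_2
                        rng)
```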

Example 9.4. For the representability of the Kendall convolution take non-negative independent random variables \(\theta _1, \theta _2\) and define, as in the proof of Theorem 7, \(M= \max \{\theta _1, \theta _2\}\), \( m = \min \{ \theta _1, \theta _2\}\), \(\varrho ={m/M}\). Let U have the uniform distribution on [0, 1], let \(\Pi _{2\alpha }\) have the Pareto distribution \(\pi _{2\alpha }\), and let U, \(\Pi _{2\alpha }\) and \(\theta _1,\theta _2\) be independent. Then

$$\begin{aligned} \theta _1\vartriangle _{\alpha } \theta _2 {\mathop {=}\limits ^{d}} M \bigl ({\textbf{1}}_{(\varrho ^{\alpha },1]}(U) + \Pi _{2\alpha } {\textbf{1}}_{[0,\varrho ^{\alpha }]} (U) \bigr ). \end{aligned}$$

Another representation of \(\theta _1\vartriangle _{\alpha } \theta _2\) can be found in [26] or obtained directly from Theorem 1. Since \({\textbf{P}} \{ \frac{\theta _i}{Z_i} < t\} = G_i(t)\), we have the following:

$$\begin{aligned} \theta _1\vartriangle _{\alpha } \theta _2 {\mathop {=}\limits ^{d}} \max \left\{ \max \{ \theta _1, \theta _2\}, \min \left\{ \frac{\theta _1}{Z_1}, \frac{\theta _2}{Z_2} \right\} \right\} , \end{aligned}$$

where \(Z_1, Z_2\) are i.i.d. with \(\textrm{pow}(\alpha )\) distribution such that \(\theta _1, \theta _2, Z_1, Z_2\) are independent.
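The two representations can be compared by simulation; a minimal sketch (Python, with an illustrative \(\alpha \) and input laws; the agreement is in distribution, so we compare empirical quantiles):

```python
# Monte Carlo comparison of the two representations of th1 <>_alpha th2.
import numpy as np

rng = np.random.default_rng(3)
alpha, n = 1.5, 10**6
th1, th2 = rng.exponential(size=n), rng.exponential(size=n)

# representation via U uniform on [0, 1] and Pareto pi_{2 alpha}
M = np.maximum(th1, th2)
rho = np.minimum(th1, th2) / M
U = rng.random(n)
Pi = rng.random(n) ** (-1 / (2 * alpha))
rep1 = M * np.where(U <= rho**alpha, Pi, 1.0)

# representation via max / min with pow(alpha) divisors Z1, Z2
Z1, Z2 = rng.random(n) ** (1 / alpha), rng.random(n) ** (1 / alpha)
rep2 = np.maximum(np.maximum(th1, th2), np.minimum(th1 / Z1, th2 / Z2))

for q in (0.25, 0.5, 0.75, 0.95):
    print(q, np.quantile(rep1, q), np.quantile(rep2, q))
```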

Remark 8

The construction proposed in the proof of Theorem 7 can be trivially adapted to Examples 8.7, 8.8 and 8.9; thus the Kucharczak-Urbanik convolutions, the \(\diamondsuit _{p,\alpha }\)-convolutions and the Kendall-type convolutions can be represented by random variables.

Example 9.7. For the Kucharczak-Urbanik convolution the representation by random variables can be done in a more interesting way:

We first introduce a useful notation: for any \(1\leqslant k\leqslant n\) define the function \(\sigma _{k,n}:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \sigma _{k,n}(x_1,\ldots ,x_n)=x_j\quad \Leftrightarrow \quad \#\{i\in \{1,\ldots ,n\}:\,x_i\leqslant x_j\}=k, \end{aligned}$$

for any \(j, k \in \{1,\ldots ,n\}\). If \(X_1,\ldots ,X_n\) are i.i.d. random variables, the random variable \(X_{k:n}:= \sigma _{k,n}(X_1,\ldots ,X_n)\) is called the k-th order statistic (based on n i.i.d. observations), \(k=1,\ldots ,n\). In particular, \(X_{1:n}=\min \{X_1,\ldots ,X_n\}\) and \(X_{n:n}=\max \{X_1,\ldots ,X_{n}\}\). For basic information on order statistics see e.g. [12, 55].

We also need to notice that if Q is a Pareto random variable with the distribution \(\pi _{\alpha }\), then \(Q^{-1}\) has the power distribution \(\textrm{pow}(\alpha )\) with the density \(\alpha x^{\alpha -1} {\textbf{1}}_{[0,1]}(x)\). Moreover, if \(V_i=Q_i^{-1}\), \(i=1,\ldots ,n\), are i.i.d. variables with the power distribution \(\textrm{pow}(\alpha )\), then

$$\begin{aligned} Q_{k:n}=V_{n-k+1:n}^{-1},\quad k=1,\ldots ,n. \end{aligned}$$

Theorem 8

Let \(\theta _1\) and \(\theta _2\) be independent non-negative random variables with distributions \(\mu _1\) and \(\mu _2\). Then \(\mu _1\vartriangle _{\alpha ,n}\mu _2\) is the distribution of the random variable

$$\begin{aligned} M(\theta _1, \theta _2)\,\sum _{k=0}^n\,Q_{k:n+k}\,\textbf{1}_{\bigl (W_{k:n},\,W_{k+1:n}\bigr ]}\bigl (\varrho (\theta _1,\theta _2)\bigr ), \end{aligned}$$

where \(Q_1,\ldots ,Q_{2n}\) are i.i.d. random variables with the Pareto distribution \(\pi _{\alpha }\) and \(W_1, \dots , W_n\) are i.i.d. random variables with the distribution \(\textrm{pow}(\alpha )\), such that \(Q_1,\ldots ,Q_{2n}, W_1, \dots , W_n\) are independent, with the conventions \(Q_{0:n}:=1\), \(W_{0:n}:=0\) and \(W_{n+1:n}=\infty \).

Proof

Note that the basic components of the Kucharczak-Urbanik convolution, see (3), are probability measures with the densities \(f_{k,n}\), \(n \in {\mathbb {N}}\), \(k =1,\dots ,n\), defined in (4). The key observation here is that \(f_{k,n}\) is the density of \(Q_{k:n+k}\) where \(Q_1,\ldots ,Q_{2n}\) is an i.i.d. sample from the same Pareto \(\pi _{\alpha }\) distribution. Now by (3) in Sect. 3 we have:

$$\begin{aligned} x \vartriangle _{\alpha ,n} 1 {\mathop {=}\limits ^{d}} \sum _{k=0}^n Q_{k:n+k} {{\textbf{1}}}_{\{ B_n(x^{\alpha }) = k\}}, \end{aligned}$$

where \(B_n(x^{\alpha })\) is the binomial random variable counting successes in n Bernoulli trials with the success probability \(p=x^{\alpha }\), such that \(B_n(x^{\alpha })\) and \((Q_1,\ldots ,Q_{2n})\) are independent.

It remains to show that for all \(k=0,1,\dots , n\) we have

$$\begin{aligned} {\textbf{P}} \bigl \{ B_n(x^{\alpha }) = k \bigr \} = {\textbf{E}}\, {\textbf{1}}_{(W_{k:n},W_{k+1:n}]} (x) = {\textbf{P}} \bigl \{ W_{k:n}< x \leqslant W_{k+1:n} \bigr \}, \end{aligned}$$

where \(W_1, \dots , W_n\) are i.i.d. random variables with the distribution \(\textrm{pow}(\alpha )\). To see this we recall (see e.g. [12]) that the bivariate density function \(f_{k,k+1:n}\) of \((X_{k:n},X_{k+1:n})\) for i.i.d. random variables \(X_1,\ldots ,X_n\) with the density f and cumulative distribution function F has the form

$$\begin{aligned} f_{k,k+1:n}(x,y)=\frac{n!}{(k-1)!(n-k-1)!}F^{k-1}(x)\bigl (1-F(y)\bigr )^{n-k-1}f(x)f(y)\textbf{1}_{\{x<y\}}. \end{aligned}$$

Therefore, for any r

$$\begin{aligned}&{\textbf{P}}\bigl \{X_{k:n}<r\leqslant X_{k+1:n} \bigr \} \\&\quad =\frac{n!}{(k-1)!(n-k-1)!} \int _{-\infty }^r F^{k-1}(x) f(x)\,dx \int _r^{\infty } \bigl (1-F(y)\bigr )^{n-k-1} f(y)\,dy\\&\quad = \binom{n}{k} F^k(r)\bigl (1-F(r)\bigr )^{n-k}={\textbf{P}} \bigl \{ B_n(F(r)) = k \bigr \}. \end{aligned}$$

The last formula applied to \(W_{k:n},\,W_{k+1:n}\) (for which \(F(r) = r^{\alpha }\)) yields \({\textbf{P}}\{W_{k:n}<x\leqslant W_{k+1:n} \} = {\textbf{P}} \bigl \{ B_n(x^{\alpha }) = k \bigr \}\). Now, assuming that \(Q_1, \dots , Q_{2n}\) and \(W_1,\dots , W_n\) are independent, we have

$$\begin{aligned} x \vartriangle _{\alpha ,n} 1 {\mathop {=}\limits ^{d}} \sum _{k=0}^n\,Q_{k:n+k} {\textbf{1}}_{\bigl (W_{k:n},W_{k+1:n}\bigr ]} (x). \end{aligned}$$
(17)

In order to get the final statement it is enough to choose \(Q_1, \dots , Q_{2n}\) and \(W_1,\dots , W_n\) independent of \(\theta _1, \theta _2\) and notice that

$$\begin{aligned} \theta _1 \vartriangle _{\alpha ,n} \theta _2 {\mathop {=}\limits ^{d}} M(\theta _1, \theta _2) \Bigl ( \varrho (\theta _1, \theta _2) \vartriangle _{\alpha ,n} 1 \Bigr ). \end{aligned}$$

\(\square \)
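A minimal sketch of the resulting sampling scheme (Python; \(\alpha \), n and the input laws are illustrative, and \(W_{0:n}:=0\) is used for the k = 0 indicator):

```python
# Order-statistics sampler for the Kucharczak-Urbanik convolution
# following Theorem 8.
import numpy as np

rng = np.random.default_rng(4)
alpha, npar, size = 0.8, 3, 10**5

th1, th2 = rng.random(size), rng.random(size)
M = np.maximum(th1, th2)
rho = np.minimum(th1, th2) / M

Qraw = rng.random((size, 2 * npar)) ** (-1 / alpha)           # Q_i ~ pi_alpha
W = np.sort(rng.random((size, npar)) ** (1 / alpha), axis=1)  # pow(alpha) order stats
W = np.hstack([np.zeros((size, 1)), W, np.full((size, 1), np.inf)])

out = np.empty(size)
for k in range(npar + 1):
    mask = (W[:, k] < rho) & (rho <= W[:, k + 1])  # rho in (W_{k:n}, W_{k+1:n}]
    if k == 0:
        val = np.ones(size)                        # Q_{0:n} := 1
    else:
        # Q_{k:n+k} is the k-th smallest of Q_1, ..., Q_{n+k}
        val = np.sort(Qraw[:, :npar + k], axis=1)[:, k - 1]
    out[mask] = (M * val)[mask]
# out is a sample from L(th1) tri_{alpha,n} L(th2)
```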

Remark 9

Notice that for the generalized convolution \(\diamond \) on \({\mathcal {P}}_+\) with the convex linear combination property we have

$$\begin{aligned} \frac{1}{\theta _1 \diamond \theta _2}&{\mathop {=}\limits ^{d}} \frac{1}{M(\theta _1, \theta _2) \sum _{k=0}^{n-1} {\textbf{1}}_{A_k(\varrho (\theta _1, \theta _2))} (U) \, X_k} \\&= m(\theta _1^{-1}, \theta _2^{-1}) \sum _{k=0}^{n-1} {\textbf{1}}_{A_k(\varrho (\theta _1, \theta _2))} (U) \, X_k^{-1}, \end{aligned}$$

if \(\theta _1, \theta _2, X_0, \dots , X_{n-1}\) are independent and \({\mathcal {L}}(X_k) = \lambda _k\), \(k=0,\dots , n-1\), as in the representation (16). We used here the equality \(\varrho (\theta _1, \theta _2) = \varrho (\theta _1^{-1}, \theta _2^{-1})\).

Remark 10

Applying this technique to the Kucharczak-Urbanik convolution \(\vartriangle _{\alpha ,n}\) and using the result of Theorem 1 we obtain

$$\begin{aligned} \frac{1}{\theta _1 \vartriangle _{\alpha ,n} \theta _2}\, {\mathop {=}\limits ^{d}}\, m(\theta _1^{-1}, \theta _2^{-1})\,\sum _{k=0}^n\,\textbf{1}_{\bigl (W_{k:n},\,W_{k+1:n}\bigr ]}\bigl (\varrho (\theta _1,\theta _2)\bigr ) \,V_{n+1:n+k}, \end{aligned}$$

where \(V_1, \dots , V_{2n}, W_1, \dots , W_n\) are i.i.d. random variables with the distribution \(\textrm{pow}(\alpha )\), with the conventions \(V_{n+1:n}:=1\) and \(W_{n+1:n}=\infty \).