1 Introduction

The theory of quadratic stochastic operators (QSOs) is rooted in the work of Bernstein [6]. Such operators were applied there to model the evolution of a discrete probability distribution of a finite number of biotypes in a process of inheritance. The problem of describing their trajectories was stated in [20]. Since the 1970s, the field has been steadily evolving in many directions; for a detailed review of mathematical results and open problems, see [15].

In the infinite dimensional case, QSOs were first considered on the \(\ell _{1}\) space, which contains the discrete probability distributions. Many models were considered in [16], a work particularly notable for its extensions and for the indicated possibilities of studying limit behaviours of infinite dimensional quadratic stochastic operators through finite dimensional ones. A comprehensive survey of the field (including applications of quadratic operators to quantum dynamics) can be found in [17]. Recently, in [5], different types of asymptotic behaviours of quadratic stochastic operators in \(\ell _{1}\) were introduced and examined in detail.

Studies of QSOs on \(\ell _{1}\) are being generalized to more complex infinite dimensional spaces (e.g. [1, 12]). The results from [5] were subsequently generalized in two papers [3, 4] to the \(L_{1}\) spaces of functions integrable with respect to a specified measure, not necessarily the counting one. Also, an algorithm to simulate the behaviour of iterates of quadratic stochastic operators acting on the \(\ell _{1}\) space was described in [2].

The study of QSOs acting on \(L_{1}\) spaces is more complicated, in part because Schur’s lemma does not hold there. To obtain results, one needs more restrictive assumptions on the QSO, appropriate for \(L_{1}\) spaces; e.g. in [4] a kernel form (cf. Definition 1) was assumed. But even in this subclass, it is not readily possible to prove convergence of a trajectory of a QSO. Very recently, in [18, 21], a more restrictive subclass of kernel QSOs was considered, corresponding to a model which “retains the mean” (according to Eq. (9) of [18]). The operators are built into models of continuous time evolution of the trait’s distribution and the size of the population. With these (and additional technical assumptions, like bounds on moment growth), the authors obtained a convergence slightly stronger than weak convergence. Here, motivated by the model described in Example 2, Section 5.4 of [18], we consider a very special but biologically highly relevant type of “mean retention” in which the kernel of the QSO corresponds to an additive perturbation of the mean of the parents’ traits. Specific properties of the considered class of QSOs and the basic assumptions of our models are presented in Sects. 2 and 3. Due to the strong restrictions, our results, presented in Sect. 4, are less general than the consequences of Theorems 3 and 4 in [18]. First, we consider only discrete time evolution. Moreover, we are concerned only with weak limits. This is the price we pay for being allowed to drop the assumptions of kernel continuity, moment growth, technical bounds on elements of the birth-and-death process and other elements of the continuous time process’ generator and kernel. Also, multidimensional traits are admitted. Our main results only require the perturbing term to have a finite second moment or, alternatively, to have the tails of its distribution controlled by a power function; see Theorems 2, 3 and 4.

The model lacks uniqueness of the limit: the limit is seed specific. The family of all possible limits has not yet been characterized. However, we obtain a construction of a wide subfamily of possible limits. This family arises from the fact that the model of additive perturbation of the parents’ mean factorizes the problem into two parts, the first dependent on the initial distribution and the other dependent on the distribution of the perturbation. Some sufficient conditions for the separate existence of the limits are given in Sect. 5. It is an open problem whether a limit exists which is not factorizable in this way. In Sect. 6, we introduce a very special class of dyadically \(\alpha \)-stable probability distributions which give further insight into the stability of the studied operators.

2 Preliminaries

We begin by putting our work in a more general context and describing the theoretical background of QSOs. Let X be a separable metric space and let \({\mathscr {A}}\) stand for its Borel \(\sigma \)-field. By \({\mathscr {M}}= {\mathscr {M}}\left( X, {\mathscr {A}}, \Vert \cdot \Vert _{TV} \right) \), we denote the Banach lattice of all signed measures on X with finite total variation, where the norm is given by

$$\begin{aligned} \Vert F \Vert _{TV} := \sup _{ f } \{ | \langle F, f \rangle | : f \text{ is } {\mathscr {A}}\text{-measurable, } \sup _{ x \in X } | f (x)| \le 1 \} , \end{aligned}$$

with \( \langle F, f \rangle := \int _{X} \, f(x) \, dF(x). \) By \({\mathscr {P}}:= {\mathscr {P}}\left( X, {\mathscr {A}}\right) \), we denote the convex set of all probability measures on \(( X, {\mathscr {A}})\). In our work, X can be the space of random values of traits in inbreeding or hermaphroditic populations. Elements of \({\mathscr {P}}\) represent the admissible single-generation probability distributions of the trait. The model of heritability is constructed with the use of quadratic stochastic operators, defined as below (we suitably extend the definitions given in [3]). Let \({\mathscr {M}}_{0}={\mathscr {M}}(\mu )\) be the Banach sublattice of \({\mathscr {M}}\) of all finite Borel measures on \((X,{\mathscr {A}})\) absolutely continuous with respect to a fixed positive \(\sigma \)-finite measure \(\mu \) and denote by \({{\mathscr {P}}}_{0}\) the set of probability measures in \({\mathscr {M}}_{0}\), i.e. \({{\mathscr {P}}}_{0}={\mathscr {P}}\cap {\mathscr {M}}_{0}\). Clearly, \({\mathscr {M}}_{0} = L_{1}(\mu )\) and \({{\mathscr {P}}}_{0}\) can be identified with the convex set of all probability densities with respect to \(\mu \).

Definition 1

A bilinear symmetric operator \({\mathbf {Q}}:{\mathscr {M}}\times {\mathscr {M}}\rightarrow {\mathscr {M}}\) is called a quadratic stochastic operator on \({\mathscr {M}}\) (briefly: QSO on \({\mathscr {M}}\)) if for all \(F_{1}, F_{2} \in {\mathscr {M}}\) with \( F_{1}, F_{2} \ge 0 \)

$$\begin{aligned} {\mathbf {Q}}(F_1, F_2) \ge 0\ , \quad \text {and} \quad \Vert {\mathbf {Q}}(F_1, F_2) \Vert _{TV} = \Vert F_1\Vert _{TV}\, \Vert F_2\Vert _{TV} \, . \end{aligned}$$

The QSO \({\mathbf {Q}}\) on \({\mathscr {M}}\) is called a kernel quadratic stochastic operator if there exists an \( {\mathscr {A}}\otimes {\mathscr {A}}\)-measurable family \({{\mathscr {G}}} =\{ G(\cdot ; x,y):\, (x,y) \in X^2\} \subset {\mathscr {M}}\) of probability measures on \(( X, {\mathscr {A}})\), such that for all \(F_{1}, F_{2} \in {\mathscr {M}}\) we have

$$\begin{aligned} {\mathbf {Q}}(F_{1},F_{2})(A) =\int _{X\times X} \, G(A; x, y) \ \mathrm {d}F_{1}\times F_{2}(x,y) , \; \text{ for } \text{ all } \; A \in {\mathscr {A}}. \end{aligned}$$

Then, the family \({{\mathscr {G}}}\) is called the kernel of \({\mathbf {Q}}\).

Notice that QSOs are bounded since \( \Vert {\mathbf {Q}}(F_{1},F_{2})\Vert _{TV} \le \Vert F_{1}\Vert _{TV} \Vert F_{2}\Vert _{TV}\) for all \(F_{1}, F_{2} \in {\mathscr {M}}\). Clearly, \({\mathbf {Q}}( {\mathscr {P}}\times {\mathscr {P}}) \ \subseteq \ {\mathscr {P}}\), and \({\mathbf {Q}}( {\mathscr {P}}\times {\mathscr {P}}) \ \subseteq \ {\mathscr {P}}_{0}\) if \({\mathbf {Q}}\) is a kernel QSO with \(G(\cdot ; x,y ) \ll \mu \) for all \(x,y \in X\). According to our interpretation in terms of evolutionary biology, if \(F_{1}, F_{2} \in {\mathscr {P}}\) represent the trait distributions in two different populations, then \({\mathbf {Q}}(F_{1}, F_{2}) \in {\mathscr {P}}\) represents the distribution of this trait in the next generation coming from the mating of independent individuals, one from each of the two populations. Then, the nonlinear “diagonalized” mapping

$$\begin{aligned} {\mathscr {P}}\ni F \mapsto {\mathbb {Q}}(F) := {\mathbf {Q}}(F,F) \in {\mathscr {P}}\end{aligned}$$

describes the probability distribution of the offspring’s trait when F is the law of the parents’ traits. To be more specific, the following Markov process provides an interpretation of a quadratic stochastic operator \({\mathbf {Q}}\) with kernel \({{\mathscr {G}}}\), as given by Definition 1.

Let \({\varXi }^{\{n\}}=\left( {\varXi }_1^{\{n\}}, {\varXi }_2^{\{n\}}\right) \), \(n=0,1,2,{\ldots }\) be a discrete time Markov process on the product space \(X\times X\), with the product \(\sigma \)-field \({{\mathscr {A}}}\otimes {{\mathscr {A}}}\) of measurable sets. Assume that the transition probability kernel is given by

$$\begin{aligned} P\left\{ {\varXi }^{\{n+1\}} \in A\times B \,| \, {\varXi }^{\{n\}} \right\} &= {\mathbf {P}}(A\times B \,| \, {\varXi }^{\{n\}} \,) , \; \; \text{ a.s., } \\ \text{ where } \quad {\mathbf {P}}(A\times B \,|\, (x,y) ) &:= G(A;x,y) \cdot G(B;x,y), \; \; A,B \in {{\mathscr {A}}} . \end{aligned}$$

Note that the Markov operator \({\mathbf {P}}\) preserves product distributions of the form \(F\times F\) (with equal probability distribution on each coordinate). Thus, if \({\varXi }^{\{0\}}\) is a pair of i.i.d. random elements of X, each following the probability distribution (p.d.) given by \(F^{\{0\}}\), then every pair \({\varXi }^{\{n\}}\) is also i.i.d. and follows the product distribution \(F^{\{n\}}\times F^{\{n\}}\), where \(F^{\{n\}}(A):= P\left\{ {\varXi }_j^{\{n\}} \in A \right\} \), \(j=1,2\), satisfies the equations:

$$\begin{aligned} F^{\{n+1\}}(A) &= \int _{X\times X} G(A;x,y) \, \mathrm{d} F^{\{n\}}\times F^{\{n\}} (x,y) = {\mathbf {Q}}\left( F^{\{n\}},F^{\{n\}}\right) (A)\\ &= {\mathbb {Q}}\left( F^{\{n\}}\right) (A) , \end{aligned}$$

for \(n=0,1,2, \dots , \; A \in {{\mathscr {A}}}.\) Therefore, the sequence of values of the iterates \({\mathbb {Q}}^{n}(F)\), \(n = 0,1,2,\ldots \), can be seen as a model of the evolution of the probability distribution of the X-valued trait of an inbreeding or hermaphroditic population, with F as the initial distribution. A typical question when working with quadratic stochastic operators is the long-term behaviour of the iterates \({\mathbb {Q}}^n(F)\), as \(n \rightarrow \infty \).
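The generation-to-generation mechanism just described is easy to prototype. The following Monte Carlo sketch is not taken from the paper: the Gaussian additive kernel is an arbitrary illustrative choice (anticipating the centred operators of Sect. 3), and the iterates \({\mathbb {Q}}^{n}(F)\) are approximated by resampling an empirical population.

```python
import numpy as np

rng = np.random.default_rng(0)

def iterate_kernel_qso(sample, kernel_sampler, n_gen):
    """Approximate Q^n(F) empirically: `sample` holds draws from F;
    `kernel_sampler(x, y)` draws one offspring trait from G(.; x, y)."""
    for _ in range(n_gen):
        parents1 = rng.choice(sample, size=sample.size)  # independent mates
        parents2 = rng.choice(sample, size=sample.size)
        sample = np.array([kernel_sampler(x, y)
                           for x, y in zip(parents1, parents2)])
    return sample

# hypothetical kernel G(.; x, y) = N((x + y)/2, 1): mean of the parents'
# traits plus a standard Gaussian additive perturbation
kernel = lambda x, y: (x + y) / 2 + rng.normal()
pop = iterate_kernel_qso(rng.uniform(-1.0, 1.0, 2000), kernel, n_gen=8)
```

For this particular kernel, the empirical variance of `pop` approaches \(2 v_G = 2\), in line with Theorem 3 below.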

Different types of strong mixing properties of kernel quadratic stochastic operators were considered in [3, 5], on \({\mathscr {M}}_{0} \times {\mathscr {M}}_{0}\) and \(\ell _{1} \times \ell _{1}\), respectively. The distance between measures is defined there by the total variation of their difference. In particular, the aforementioned works give equivalent conditions for the uniform asymptotic stability of such operators, expressed in terms of nonhomogeneous chains of linear Markov operators. The study of the limit behaviour of quadratic stochastic operators is becoming an increasingly important topic; for instance, non-ergodicity of QSOs was recently studied in [11, 13, 14].

However, strong convergence of distributions is often not appropriate, and weak convergence of vector-valued traits suffices. This is the case, in particular, if the sequence consists of discrete measures while the limit distribution is continuous; then the total variation distance constantly equals 2 and never tends to zero, which makes strong convergence useless. Therefore, we analyse the long-term behaviour of quadratic stochastic operators based on the weak convergence of measures, as described below.

Definition 2

Let \({\mathscr {M}}\) be the Banach lattice of all finite Borel measures on \(( X, {\mathscr {A}})\), where X is a complete separable metric space and \({\mathscr {A}}\) consists of all Borel sets. As before, let \({\mathscr {P}}\) stand for the convex set of probability measures on \(( X, {\mathscr {A}})\). Then, a QSO \({\mathbf {Q}}\) on \({\mathscr {M}}\) is said to be weakly asymptotically stable at \(F\in {\mathscr {P}}\) if the weak limit of the sequence \({\mathbb {Q}}^{n}(F)\), \(n \rightarrow \infty \), of values of the iterates of the diagonalized operator \({\mathbb {Q}}\) at F exists in \({\mathscr {P}}\).

3 The Centred QSO in \({\mathbb {R}}^{d}\)

We will focus on a very specific subclass of quadratic stochastic operators which we call centred. We assume \(X= {\mathbb {R}}^d\), \(d \in {\mathbb {N}}_{+}\), for the trait value space. Correspondingly, for the domain of the QSOs, we choose the lattice \({\mathscr {M}}^{(d)}= {\mathscr {M}}({\mathbb {R}}^d, {\mathscr {B}}^{(d)})\) of all finite Borel measures (i.e. with finite total variation) on \({\mathbb {R}}^{d}\). Hence, the conditions of Definition 2 are fulfilled. For \(F,G \in {\mathscr {M}}^{(d)}\), the convolution is defined by

$$\begin{aligned} F \star G\, (A) := \int _{{\mathbb {R}}^d} \ F(A - y ) \ \mathrm {d}G(y), \quad A \in {\mathscr {B}}^{(d)}, \end{aligned}$$

and for \(n\in {\mathbb {N}}\) we denote by \(F^{\star n}\) the nth convolution power of F, where \(F^{\star 0} \) is the probability measure \(\delta _0\) concentrated at the origin of \({\mathbb {R}}^{d}\). Moreover, let us denote the density-type convolution of any \(f, g \in L_{1}^{(d)}:= L_1( {\mathbb {R}}^d, {\mathscr {B}}^{(d)}, \lambda ^{(d)})\) by

$$\begin{aligned} f \circledast g (z) := \int _{{\mathbb {R}}^{d}} f(z-y) \, g(y)\, \mathrm {d}y, \; \text{ for } \; z \in {\mathbb {R}}^{d}. \end{aligned}$$

For any \(F\in {\mathscr {M}}^{(d)}\), we define its characteristic function by

$$\begin{aligned} \varphi _F(s) := \int _{{\mathbb {R}}^{d}} \ \exp \left( i\, s \cdot x \right) \ \mathrm {d}F(x) , \; \text{ for } \; s \in {{\mathbb {R}}^{d}}, \end{aligned}$$

where \(\cdot \) stands for the canonical scalar product in \({{\mathbb {R}}^{d}}\). Moreover, the vector of first moments and the matrix of second moments (the covariance matrix when the mean is zero) are defined by

$$\begin{aligned} m^{(1)}_{F} &:= \int _{{\mathbb {R}}^{d}} \ x \ \mathrm {d}F(x) = \left[ \int _{{\mathbb {R}}^{d}} \ x_{j} \ \mathrm {d}F(x) \ :\ j \in \{1,2,\dots ,d\}\right] , \\ v_{F} &:= \left[ \int _{{\mathbb {R}}^d} \ x_{j} x_{k} \, \mathrm {d}F(x) \ :\ (j,k) \in \{1,2,\dots ,d\}^{2} \right] , \end{aligned}$$

whenever they exist in \({\mathbb {R}}^{d}\) and \({\mathbb {R}}^{d\times d}\), respectively.

Definition 3

Let \(G\in {{\mathscr {P}}^{(d)}}\) be a probability measure on \({\mathbb {R}}^d\). The centred QSO with perturbation \(G\), denoted by \({\mathbf {Q}}_{G}\), is the QSO with kernel

$$\begin{aligned}{{\mathscr {G}}}_{G} := \left\{ G( \cdot ; x,y) = G\left( \cdot - \frac{x+y}{2} \right) : \; (x,y) \in {{\mathbb {R}}^{d}}\times {{\mathbb {R}}^{d}} \right\} . \end{aligned}$$

Remark 1

Our work has a biological motivation in the background, and the centred QSO of Definition 3 can be interpreted as modelling traits with different values of heritability. Heritability (in the quantitative genetic sense) describes, amongst other things, how the expectation of the offspring’s trait relates to the arithmetic average of the parental traits. If the expectation of G is 0, this corresponds to a heritability of 1. Other expectations represent different families of quantitative genetic relationships.

We omit the straightforward proof that the above defined \({\mathbf {Q}}_{G}\) is a QSO. In order to give the operator another form, let us introduce the following notation for measures \(F\in {\mathscr {M}}^{(d)}\) and for densities \(f \in L_{1}^{(d)}\)

$$\begin{aligned} {\tilde{F}}(A) := F(2 \, A), \; \text{ for } \; A \in {\mathscr {B}}^{(d)}, \quad {\tilde{f}} (x) := 2 f ( 2 \, x ), \; \text{ for } \; x \in {\mathbb {R}}^d. \end{aligned}$$

Proposition 1

Let \(F_{1}, F_{2}\) and G be probability measures on \({\mathbb {R}}^d\). If \(\xi _{1}, \xi _{2}\) and \(\eta \) are independent \({\mathbb {R}}^{d}\)-valued random vectors distributed according to \(F_{1}\), \(F_{2}\) and G, respectively, then \({\mathbf {Q}}_{G}(F_{1}, F_{2}) \) is the probability distribution of

$$\begin{aligned} \zeta = \frac{\xi _1+\xi _2}{2} + \eta \, . \end{aligned}$$

Consequently,

$$\begin{aligned} {\mathbf {Q}}_{G}(F_{1}, F_{2}) = {\tilde{F}}_1 \star {\tilde{F}}_2 \star G\, . \end{aligned}$$
(1)

In particular, if the measures are absolutely continuous with respect to the Lebesgue measure \(\lambda ^{(d)}\), then their densities \(f_{1} := \tfrac{\mathrm {d}F_{1}}{\mathrm {d}{\lambda }^{(d)}}, f_{2} := \tfrac{\mathrm {d}F_{2}}{\mathrm {d}{\lambda }^{(d)}}, g := \tfrac{\mathrm {d}G}{\mathrm {d}{\lambda }^{(d)}}\) are elements of \(L_{1}^{(d)}\) and the value of the operator at \((F_1, F_2)\) is absolutely continuous, too, with density

$$\begin{aligned} \tfrac{\mathrm {d}}{\mathrm {d}\lambda ^{(d)}}{\mathbf {Q}}_{G}(F_{1},F_{2}) = {\tilde{f}}_{1} \circledast {\tilde{f}}_{2} \circledast g. \end{aligned}$$
(2)

As before, we pay special attention to the corresponding “diagonalized” mapping

$$\begin{aligned} {\mathscr {M}}^{(d)} \ni F \,\longmapsto \, {\mathbb {Q}}_{G}(F):= {\mathbf {Q}}_{G}(F,F) \equiv {\tilde{F}}^{\star 2} \star G \in {\mathscr {M}}^{(d)}. \end{aligned}$$
(3)

Equivalently, in terms of the corresponding characteristic functions we have

$$\begin{aligned} \varphi _{ {\mathbb {Q}}_{G} (F)}(s) = \left( \varphi _F(\tfrac{s}{2})\right) ^2 \, \varphi _G(s), \; \text{ for } \; s \in {\mathbb {R}}^{d} . \end{aligned}$$
(4)
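For Gaussian laws, Eq. (4) can be checked in closed form: if \(F = N(m_F, v_F)\) and \(G = N(m_G, v_G)\), then \({\mathbb {Q}}_{G}(F) = N(m_F + m_G,\, v_F/2 + v_G)\). A small numerical sketch with arbitrarily chosen parameters:

```python
import numpy as np

def cf_normal(s, m, var):
    # characteristic function of N(m, var)
    return np.exp(1j * m * s - 0.5 * var * s ** 2)

s = np.linspace(-5.0, 5.0, 101)
mF, vF = 0.7, 1.3   # F = N(0.7, 1.3), arbitrary
mG, vG = 0.2, 0.5   # G = N(0.2, 0.5), arbitrary

lhs = cf_normal(s / 2, mF, vF) ** 2 * cf_normal(s, mG, vG)  # Eq. (4)
rhs = cf_normal(s, mF + mG, vF / 2 + vG)                    # CF of Q_G(F)
assert np.allclose(lhs, rhs)
```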

The above propositions and equations justify the interpretation that \({\mathbb {Q}}_{G}\) describes the model of heritability of vector-valued traits in populations of inbreeding or hermaphroditic species in which the offspring’s trait equals the arithmetic mean of the parents’ traits perturbed by an additive random term, the mating individuals being chosen independently. Moreover, the iterates \(({\mathbb {Q}}_{G})^{n}\), \(n \in {\mathbb {N}}_{0}\), are then interpreted as discrete time evolution operators acting in the space \({\mathscr {P}}^{(d)}\) of probability distributions of the vector-valued trait. The identity operator in \({\mathscr {P}}^{(d)}\) is then denoted by \(({\mathbb {Q}}_{G})^{0}\).

For further analysis, let us introduce the probability distributions \({F^{(n)}}, {G^{\{n\}}}\in {\mathscr {P}}^{(d)}\) defined by their characteristic functions as follows, for \( s \in {\mathbb {R}}^{d}\), \( n \in {{\mathbb {N}}}_{+}\),

$$\begin{aligned} \varphi _{F^{(n)}}(s) := \left( \varphi _{F}\left( \tfrac{s}{2^{n}}\right) \right) ^{2^{n}}, \ \quad \varphi _{G^{\{n\}}}(s) := \prod \limits _{j=0}^{n-1}\left( \varphi _{G}\left( \tfrac{s}{2^{j}}\right) \right) ^{2^{j}} = \prod \limits _{j=0}^{n-1} \varphi _{G^{(j)}} (s), \end{aligned}$$
(5)

and their limits, whenever they exist, are denoted by

$$\begin{aligned} F^{(\infty )} := \lim _{n \rightarrow \infty } F^{(n)} , \ \quad G^{\{\infty \}} := \lim _{n \rightarrow \infty } G^{\{n\}} \quad \text{ (weak } \text{ limits). } \end{aligned}$$
(6)

Obviously, the operations given by Eq. (5) can be expressed as

$$\begin{aligned} F^{(n)} = ({\mathbb {Q}}_{\delta _0})^n(F) , \ \quad G^{\{n\}} = ({\mathbb {Q}}_{G})^n(\delta _0). \end{aligned}$$
(7)

Proposition 2

The characteristic function of the nth iterate of \({\mathbb {Q}}_{G}\) at F equals

$$\begin{aligned} \varphi _{({\mathbb {Q}}_{G})^{n}(F)}(s) = \varphi _{F^{(n)}}(s) \; \varphi _{G^{\{n\}}}(s), \; \text{ for } \; s \in {\mathbb {R}}^{d}, \, n \in {{\mathbb {N}}}_{+}. \end{aligned}$$
(8)

Consequently, for the nth iterate, we have

$$\begin{aligned} ({\mathbb {Q}}_G)^n(F) = F^{(n)} \star G^{\{ n \}}, \; \text{ for } \; n \in {{\mathbb {N}}}_{+}. \end{aligned}$$
(9)

Proof

First, notice that by Eqs. (4)–(5), Eq. (8) holds for \(n=1\). Assuming that Eq. (8) holds for some \(n\), for \(m := n+1 \ge 2\) we get

$$\begin{aligned} \begin{aligned} \varphi _{({\mathbb {Q}}_{G})^{m}(F)}(s)&= \varphi _{{\mathbb {Q}}_{G}(({\mathbb {Q}}_{G})^{n}(F))}(s) = \left( \varphi _{({\mathbb {Q}}_{G})^{n}(F)} \left( \tfrac{s}{2}\right) \right) ^{2} \, \varphi _{G}(s) \\&= \left( \varphi _{F}\left( \tfrac{s/2}{2^{n}}\right) \right) ^{2^{n} \cdot 2} \, \prod \limits _{j=0}^{n-1} \left( \varphi _{G}\left( \tfrac{s/2}{2^{j}}\right) \right) ^{2^{j}\cdot 2} \, \varphi _{G}(s) \\&= \left( \varphi _{F}\left( \tfrac{s}{2^{n+1}}\right) \right) ^{2^{n+1}} \, \prod \limits _{j=0}^{n} \left( \varphi _{G}\left( \tfrac{s}{2^{j}}\right) \right) ^{2^{j}}\\&= \varphi _{F^{(m)}}(s) \; \varphi _{G^{\{m\}}}(s). \end{aligned} \end{aligned}$$

By induction, Eq. (8) holds for all \(n\in {\mathbb {N}}_+\). Equation (9) then follows from Eq. (8) by the uniqueness theorem for characteristic functions.
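The induction above can also be tested numerically: iterating Eq. (4) and evaluating the product form of Eq. (8) must give the same characteristic function. A sketch with arbitrarily chosen Gaussian \(F\) and \(G\):

```python
import numpy as np

cf_normal = lambda s, m, var: np.exp(1j * m * s - 0.5 * var * s ** 2)
phi_F = lambda s: cf_normal(s, 1.0, 2.0)   # F = N(1, 2), arbitrary
phi_G = lambda s: cf_normal(s, 0.0, 0.3)   # G = N(0, 0.3), arbitrary

def iterate_cf(phi, phi_G, n):
    # apply Eq. (4) n times: phi(s) -> phi(s/2)^2 * phi_G(s)
    for _ in range(n):
        prev = phi
        phi = lambda s, prev=prev: prev(s / 2) ** 2 * phi_G(s)
    return phi

s, n = np.linspace(-4.0, 4.0, 81), 6
lhs = iterate_cf(phi_F, phi_G, n)(s)
# product form of Eq. (8): phi_{F^{(n)}}(s) * phi_{G^{{n}}}(s)
rhs = (phi_F(s / 2 ** n) ** 2 ** n
       * np.prod([phi_G(s / 2 ** j) ** 2 ** j for j in range(n)], axis=0))
assert np.allclose(lhs, rhs)
```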

According to Definition 2 and the Lévy–Cramér continuity theorem (see e.g. Theorem 3.1 in [19], Chapter 13), we obtain

Corollary 1

For \(G \in {\mathscr {P}}^{(d)}\), the centred QSO \({\mathbf {Q}}_{G}\) is weakly asymptotically stable at \(F\in {\mathscr {P}}^{(d)}\) if and only if there exists the weak limit \(H_{\infty } \in {\mathscr {P}}^{(d)}\) of the sequence \(({\mathbb {Q}}_G)^n(F)\), i.e.

$$\begin{aligned} \varphi _{({\mathbb {Q}}_G)^n(F)}(s) \equiv \varphi _{F^{(n)}}(s) \, \varphi _{G^{\{n\}}}(s) \rightarrow \varphi _{H_{\infty }}(s) \; \text{ as } \; n\rightarrow \infty , \; \text{ for } \; s \in {\mathbb {R}}^{d}. \end{aligned}$$

In particular, such a limit exists whenever the limits \(F^{(\infty )}\) and \( G^{\{\infty \}}\) exist in \( {\mathscr {P}}^{(d)}\). If this holds, then \(H_{\infty } = F^{(\infty )} \star G^{\{\infty \}}\).

By convolution theorems, we obtain the following corollary.

Corollary 2

(cf. Example 2 in [18], Section 5.4) Let \(\xi _{1},\xi _{2},\xi _{3},\ldots \) and \(\eta _{0;1},\eta _{1;1},\eta _{1;2},\eta _{2;1}, \ldots ,\eta _{2;4},\ldots , \eta _{j;1}, \ldots , \eta _{j;2^{j}},\ldots \) be mutually independent random vectors such that all \(\xi \)-s are i.i.d. according to F and all \(\eta \)-s are i.i.d. according to G. Then, for every \(n\in {\mathbb {N}}_+\), we have

  (i)

    \(F^{(n)}\) is the probability distribution of the random d-dimensional vector

    $$\begin{aligned} \xi ^{(n)} := \frac{\xi _{1}+\xi _{2}+\cdots +\xi _{2^{n}}}{2^{n}}. \end{aligned}$$
  (ii)

    \(G^{\{n\}}\) is the probability distribution of the random d-dimensional vector

    $$\begin{aligned} \eta ^{\{n\}} := \sum \limits _{j=0}^{n-1} \frac{\eta _{j;1}+\eta _{j;2}+\cdots +\eta _{j;2^{j}}}{2^{j}}. \end{aligned}$$
  (iii)

    \(H_n := ({\mathbb {Q}}_G)^n(F)\) is the probability distribution of the random d-dimensional vector

    $$\begin{aligned} \zeta _{n} = \xi ^{(n)} + \eta ^{\{n\}}. \end{aligned}$$
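The representation in (i)–(iii) lends itself to direct simulation. The sketch below (uniform \(F\) and Gaussian \(G\), both arbitrary choices) checks the mean and variance of \(\zeta _n\) implied by independence, namely \(m^{(1)}_F\) and \(v_F/2^n + v_G(2 - 2^{1-n})\) when \(m^{(1)}_G = 0\):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 4000
vF, vG = 1.0 / 3.0, 0.25   # Var of Uniform(0, 2) and of N(0, 0.25)

# xi^{(n)}: average of 2^n i.i.d. draws from F = Uniform(0, 2)
xi_n = rng.uniform(0.0, 2.0, size=(reps, 2 ** n)).mean(axis=1)
# eta^{{n}}: sum over j of averages of 2^j i.i.d. draws from G = N(0, 0.25)
eta_n = sum(rng.normal(0.0, 0.5, size=(reps, 2 ** j)).mean(axis=1)
            for j in range(n))
zeta = xi_n + eta_n   # distributed as (Q_G)^n(F) by part (iii)

expected_var = vF / 2 ** n + vG * (2 - 2 ** (1 - n))
assert abs(float(zeta.mean()) - 1.0) < 0.05
assert abs(float(zeta.var()) - expected_var) < 0.08
```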

For the one-dimensional case (\(d=1\)), the model of heritability determined by Eq. (1) has been previously discussed in Example 2 of [18]. There, the perturbation distribution G is absolutely continuous, with mean value equal to 0 and finite variance (amongst other assumptions). Although the model considered there is much more complicated (a continuous time process, with random distances between the mating instants, etc.), the limit distribution of the trait values equals the limit of the discrete time evolution (with instants counted by the number of consecutive generations). In what follows, we restrict ourselves to the discrete time model and extend the class of possible weak limits of the iterations. It is possible that the obtained class of limits is applicable to the continuous time model of the above cited Example 2 of [18]. We leave this question, as well as the study of the convergence rate, open for further investigation.

4 Main Results

Theorem 1

Let the centred QSO \({\mathbf {Q}}_{G}\) be weakly asymptotically stable at F, where \(F, G \in {\mathscr {P}}^{(d)}\). Then, the limit distribution \(H\) is a fixed point of \({\mathbb {Q}}_G\), i.e. \(H = {\mathbb {Q}}_G( H)\), and the characteristic functions satisfy the equation

$$\begin{aligned} \varphi _{H}(s) = \left( \varphi _{H}(\tfrac{s}{2}) \right) ^{2} \varphi _{G}(s), \ s \in {\mathbb {R}}^d. \end{aligned}$$
(10)

Conversely, if the characteristic function of \(H \in {\mathscr {P}}^{(d)}\) satisfies the above equation for some probability measure \(G\in {\mathscr {P}}^{(d)}\), then H is a fixed point of \({\mathbb {Q}}_G\), and so H is the weak limit of \(({\mathbb {Q}}_{G})^n(H)\), as \(n \rightarrow \infty \).

In particular, \(H\in {\mathscr {P}}^{(d)}\) is a weak limit of the sequence \(F^{(n)}= ({\mathbb {Q}}_{\delta _0})^n(F)\) defined by Eq. (5) for some \(F \in {\mathscr {P}}^{(d)}\) if and only if its characteristic function satisfies the following dyadically 1-stable equation

$$\begin{aligned} \varphi _{H}(2s ) = \left( \varphi _{H}(s ) \right) ^{2}, \; \text{ for } \text{ all } \; s \in {\mathbb {R}}^d . \end{aligned}$$
(11)

If this holds, then starting with \(F=H\) one gets \(F^{(n)} = H\) for every \(n \in {\mathbb {N}}_+\) (and \(F^{(\infty )} = H\), as well).

Proof

By the Lévy–Cramér continuity theorem and Eq. (4), for every \(G \in {\mathscr {P}}^{(d)}\) the operator \(F \mapsto {\mathbb {Q}}_{G}(F)\) is a continuous self-mapping of \({\mathscr {P}}^{(d)}\) with respect to the weak convergence. Denoting by \(H_{n} := ({\mathbb {Q}}_{G})^{n}(F)\) the iterates and by H their weak limit, by continuity we have

$$\begin{aligned} {\mathbb {Q}}_{G}(H) = \lim _{n\rightarrow \infty } {\mathbb {Q}}_{G}(H_{n}) = \lim _{n\rightarrow \infty } H_{n+1} = H \quad \text{ (weak } \text{ limits). } \end{aligned}$$

Equation (10) and the second part of the theorem are consequences of the equivalence of Eqs. (3) and (4). The last claim is a particular case of the first one, obtained by taking \(G = \delta _{0}\), the point measure concentrated at the origin of \({\mathbb {R}}^{d}\), cf. Eq. (7).
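The dyadically 1-stable Eq. (11) is immediate to verify numerically for the extended Cauchy family appearing in Theorem 2(ii) below; the parameters \(c\) and \(m\) here are arbitrary:

```python
import numpy as np

c, m = 1.5, -0.4   # arbitrary Cauchy-type parameters
phi_H = lambda s: np.exp(-c * np.abs(s) + 1j * m * s)

s = np.linspace(-10.0, 10.0, 201)
# dyadic 1-stability, Eq. (11): phi_H(2s) = phi_H(s)^2
assert np.allclose(phi_H(2 * s), phi_H(s) ** 2)
# equivalently, H is a fixed point of Q_{delta_0}: phi_H(s/2)^2 = phi_H(s)
assert np.allclose(phi_H(s / 2) ** 2, phi_H(s))
```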

Theorem 2

  (i)

    A probability distribution \(H\in {\mathscr {P}}^{(d)}\) satisfies the dyadically 1-stable Eq. (11) if, for some infinitely divisible \(H_{0}\in {\mathscr {P}}^{(d)}\), the logarithms of the characteristic functions of H and \({H_{0}}\) are related as follows

    $$\begin{aligned} \ln (\varphi _H) (s) = \sum _{k\in {\mathbb {Z}}}\, 2^{-k} \ln (\varphi _{H_{0}}) (2^{k} s) \end{aligned}$$

    provided that the series converges almost uniformly with respect to \(s \in {\mathbb {R}}^{d}\). Then \(F_{m}^{(\infty )} = H\) for every \(m \in {\mathbb {Z}}\), where

    $$\begin{aligned} \ln \left( \varphi _{F_{m}} (s) \right) := \sum _{k\in {\mathbb {N}}}\, 2^{-k+m} \ln (\varphi _{H_0}) (2^{k-m} s), \; \text{ for } \; s\in {\mathbb {R}}^{d}, \; m \in {\mathbb {Z}}. \end{aligned}$$
  (ii)

    Let \(F\in {\mathscr {P}}^{(d)} \) satisfy \(\left( \varphi _F(\tfrac{s}{n}) \right) ^n \rightarrow \varphi _S(s)\), as \(n \rightarrow \infty \), for some \(S\in {\mathscr {P}}^{(d)}\). Then \(F^{(\infty )}= S\). Moreover, in the case \(d=1\), the weak limit \(F^{(\infty )}\) is an element of the extended family of Cauchy distributions with characteristic function

    $$\begin{aligned} \varphi _{F^{(\infty )}}(s ) = \exp \left\{ - c \vert s \vert + i m s \right\} , \; \text{ for } \; s \in {\mathbb {R}}, \; \text{ where } \; c \ge 0, \; m \in {\mathbb {R}}. \end{aligned}$$
  (iii)

    The conditions of part (ii) are satisfied in each of the following situations:

    1.

      \(F\in {\mathscr {P}}^{(d)} \) is of finite mean \( m^{(1)} \in {\mathbb {R}}^d\); then \(F^{(\infty )}\) is concentrated at \(m^{(1)} \).

    2.

      \(F\in {\mathscr {P}}^{(d)} \) is the p.d. (probability distribution) of the random vector \( \xi = A ( \xi ') + B\), where the coordinates of \(\xi ' \in {\mathbb {R}}^{d'}\) are independent random variables, each satisfying the condition of part (ii), \(A: {\mathbb {R}}^{d'} \rightarrow {\mathbb {R}}^{d}\) is a (deterministic) linear map and \(B \in {\mathbb {R}}^{d}\) is a (deterministic) vector.

Proof

The infinite divisibility of \(H_{0}\) implies that all terms of the series in part (i) are logarithms of characteristic functions (of probability distributions on \({\mathbb {R}}^{d}\)), and therefore so are the partial sums. By almost uniform convergence, the infinite sum is a logarithm of a characteristic function as well. Equation (11) now follows by standard properties of limits. The second part of (i) holds since \((F_{m})^{(n)} = F_{m+n}\), for \( m \in {\mathbb {Z}}\), \( n \in {\mathbb {N}}\).

The first part of (ii) follows since \(\left( \varphi _{F^{(n)} }(s)\right) _{n \in {\mathbb {N}}}\) is a subsequence of \(\left( \left( \varphi _F(\frac{s}{n}) \right) ^n\right) _{n \in {\mathbb {N}}_+}\). For the one-dimensional case, by standard analysis, the limit S is stable with characteristic exponent 1 (unless concentrated at a single point) and in every case its characteristic function has the form (see e.g. Eq. (3) in [19], Chapter 15, Section 3)

$$\begin{aligned} \ln \varphi _{S}(s) = i \,m \,s - c\, \vert s \vert \, ( 1 + i \, \theta \, \text{ sign }(s) \,\ln \vert s \vert ), \end{aligned}$$

where \(c\ge 0\), \(m \in {\mathbb {R}}\), \(\vert \theta \vert \le 1\). Moreover, by properties of limits, we have \(\left( \varphi _S(\tfrac{s}{n})\right) ^n = \varphi _S(s)\), for all \(s \in {\mathbb {R}}\), \(n \in {\mathbb {N}}_+\), implying that \(\theta = 0\).

Case 1 of (iii) follows from the strong law of large numbers combined with Corollary 2(i).

Case 2 of (iii): We denote by \(\;\circ \;\) matrix multiplication and treat vectors as rows unless transposed by \(^{T}\) to columns. Using a random vector \(\xi \) with p.d. \(F\in {\mathscr {P}}^{(d)}\) and the expectation functional \({\mathbb {E}}\), we can write, for \(s \in {\mathbb {R}}^{d}\),

$$\begin{aligned} \varphi _F(s) = \varphi _\xi (s) &= {\mathbb {E}}\{ \exp ( i\, s \circ \xi ^{T} ) \}= {\mathbb {E}}\{ \exp ( i\, s \circ ( A \circ \xi '^{T} + B^{T}) )\} \\ &= {\mathbb {E}}\{ \exp ( i\, (s \circ A )\circ \xi '^{T} + i\, s \circ B^{T} )\} = \varphi _{\xi '} (s \circ A ) \, \exp ( i\, s \circ B^{T} ) . \end{aligned}$$

Thus, the probability distribution of \(\xi \) is in the domain of attraction of a p.d. on \({\mathbb {R}}^d\) given by the following limit characteristic function

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\left( \varphi _\xi (\tfrac{1}{n} \, s)\right) ^{n} &= \exp ( i\, s \circ B^{T} )\, \lim \limits _{n\rightarrow \infty } \left( \varphi _{\xi '} (\tfrac{1}{n} \, s \circ A ) \right) ^{n} \\ &= \exp ( i\, s \circ B^{T} )\, \prod _{k=1}^{d'} \, \varphi _{S_{k}} ( s \cdot A_{, k} ), \end{aligned}$$

where \(A_{, k}\) stands for the kth column of the coefficient matrix A and \(S_{k}\) is the attracting p.d. of the extended Cauchy type for the kth coordinate of \(\xi '\), \(k =1,2, \dots , d'\).
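Both situations of part (iii) can be illustrated by simulation (a rough sketch with arbitrarily chosen distributions): a finite-mean \(F\) yields dyadic averages collapsing to \(m^{(1)}\), while for standard Cauchy coordinates the average of i.i.d. copies is again standard Cauchy:

```python
import numpy as np

rng = np.random.default_rng(1)

# case 1: F exponential with mean 3, so xi^{(n)} concentrates at m^(1) = 3
xi = rng.exponential(scale=3.0, size=2 ** 14)   # 2^14 i.i.d. draws
assert abs(float(xi.mean()) - 3.0) < 0.1

# case (ii) prototype: averages of standard Cauchy draws stay standard Cauchy
means = rng.standard_cauchy(size=(4000, 256)).mean(axis=1)
q1, q3 = np.percentile(means, [25, 75])
assert abs((q3 - q1) - 2.0) < 0.3   # IQR of the standard Cauchy law is 2
```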

Remark 2

It is possible to give an example of a dyadically 1-stable law H which is not 1-stable. Such a “partially 1-stable” example goes back to P. Lévy: take H with \(\varphi _{H}(n s ) = \left( \varphi _{H}(s )\right) ^{n}\), where the equality is valid for \(n= 2^{k}\), \(k=0,1,2, \ldots \), only. This is a particular case of Theorem 2 (i) with \(d=1\), where \(H_{0}\) is the p.d. of the difference of two i.i.d. Poisson random variables, with \(\ln ( \varphi _{H_{0}} (s)) = -1 + \cos s \) (cf. Section 17.3 in [10]). Regarding cases (ii) and (iii) of Theorem 2, it is worth noting that many authors (e.g. Section 8.8 in [9]) provide a description of a wider class of 1-stable multidimensional distributions.
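Lévy's example can be confirmed numerically by truncating the doubly infinite series defining \(\ln \varphi _{H}\) (the truncation range \(\vert k \vert \le 40\) is an ad hoc choice; both tails of the series are negligible there):

```python
import numpy as np

# ln phi_H(s) = sum_{k in Z} 2^{-k} (cos(2^k s) - 1), truncated to |k| <= 40
ln_phi_H = lambda s: sum(2.0 ** -k * (np.cos(2.0 ** k * s) - 1.0)
                         for k in range(-40, 41))

s = np.linspace(0.1, 5.0, 50)
# dyadic 1-stability: phi_H(2s) = phi_H(s)^2, i.e. ln phi_H(2s) = 2 ln phi_H(s)
assert np.allclose(ln_phi_H(2 * s), 2 * ln_phi_H(s), atol=1e-7)
```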

The following propositions present examples of conditions which are sufficient for the existence of the weak limit of the sequences of probability distributions \(G^{\{n\}}\) defined by Eq. (5).

Theorem 3

Let the probability distribution \(G \in {\mathscr {P}}^{(d)}\) have finite second moments and assume that \(m^{(1)}_G = 0\in {\mathbb {R}}^{d}\). Then the weak limit \(G^{\{\infty \}}\) of the sequence \(G^{\{n\}}\) exists in \({\mathscr {P}}^{(d)}\), where

$$\begin{aligned} \varphi _{G^{\{\infty \}}}(s) = \lim _{n\rightarrow \infty }\, \varphi _{G^{\{n\}}}(s) =\prod _{j=0}^{\infty }\, \left( \varphi _G(2^{-j} s) \right) ^{2^j}, \ s \in {\mathbb {R}}^d. \end{aligned}$$
(12)

Moreover, \(m^{(1)}_{G^{\{\infty \}}} = 0\) and, regarding the covariance, \(v_{G^{\{\infty \}}} = 2 v_{G}\).

Proof

According to Corollary 2, \(G^{\{n\}}\) is the p.d. of the sum \(\eta ^{\{n\}} := \sum _{j=0}^{n-1}U_{j}\) of the independent averages \(U_{j} := (\eta _{j;1}+\eta _{j;2}+\cdots +\eta _{j;2^j})/2^{j}\), \(j = 0,1,2, \ldots \), where all \(\eta _{j;k}\) are i.i.d. according to G. By the assumptions on G, we have \(m_{U_{j}}^{(1)} =0 \in {\mathbb {R}}^d\) and \(v_{U_{j}}=2^{-j}{v_{G}}\) for every \(j = 0,1,2,\ldots \). Hence, the series \(\eta ^{\{\infty \}} := \lim _{n\rightarrow \infty } \eta ^{\{n\}} = \sum _{j=0}^{\infty }U_{j}\) converges almost surely (as it converges coordinatewise, cf. Theorem 2.5.3 in [8]) to a random vector with mean 0 and covariance matrix equal to \( \sum _{j=0}^{\infty } \, 2^{-j} v_G = 2 v_{G}\). In particular, the probability distributions \(G^{\{n\}}\) of \(\eta ^{\{n\}}\) converge weakly to the probability distribution of \(\eta ^{\{\infty \}}\), which we denote by \(G^{\{\infty \}}\). Now, Eq. (12) holds by the continuity theorem.
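A Monte Carlo sanity check of the conclusion \(v_{G^{\{\infty \}}} = 2 v_{G}\), with \(G\) uniform on \((-1,1)\) (an arbitrary mean-zero choice, so \(v_{G} = 1/3\)):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 11, 4000
eta = np.zeros(reps)
for j in range(n):  # eta^{{n}} = sum of independent averages U_j, as in the proof
    eta += rng.uniform(-1.0, 1.0, size=(reps, 2 ** j)).mean(axis=1)

assert abs(float(eta.mean())) < 0.06
assert abs(float(eta.var()) - 2.0 / 3.0) < 0.1   # 2 * v_G with v_G = 1/3
```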

Theorem 4

Let the one-dimensional probability distribution \( G \in {\mathscr {P}}^{(1)}\) satisfy the following condition

$$\begin{aligned} \vert \ln \varphi _{G}(s) \vert \le C \vert s \vert ^{1+\varepsilon } \text{ for } \vert s \vert \le s_{0} , \text{ where } s_{0}> 0,\ \varepsilon \in (0,1],\ C>0. \end{aligned}$$

Then, the sequence of p.ds. \(G^{\{n\}}\in {\mathscr {P}}^{(1)}\), \({n \in {\mathbb {N}}}\), defined by Eq. (5) converges weakly to a p.d. \(G^{\{\infty \}} \in {\mathscr {P}}^{(1)}\) determined by the infinite product of characteristic functions in Eq. (12), which converges almost uniformly with respect to \(s \in {\mathbb {R}}\).

Proof

Due to the bound on \(\varphi _{G}\), for any \(T>0\) there exists \(J\in {\mathbb {N}}\) such that

$$\begin{aligned} \left| \ln \left( \varphi _{G}\left( \tfrac{s}{2^{j}}\right) \right) ^{2^{j}} \right| \equiv 2^{j} \left| \ln \varphi _{G}\left( \tfrac{s}{2^{j}}\right) \right| \le C \tfrac{\vert s \vert ^{1 + \varepsilon }}{2^{j\varepsilon }}, \; \text{ for } \; j \ge J, \;\vert s \vert < T . \end{aligned}$$

Therefore, the infinite product in Eq. (12) with respect to \(j \in {\mathbb {N}}\) is almost uniformly convergent and defines a characteristic function of a probability measure \({G^{\{\infty \}}}\) on \({\mathbb {R}}\). Then, by the continuity theorem, this measure is the weak limit of \({G^{\{n\}}}\), as \(n\rightarrow \infty \).
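As a sanity check of the convergence of the product in Eq. (12), one may evaluate its partial products for a concrete G. In the sketch below (ours) we take G standard normal, for which \(\ln \varphi _G(s) = -s^{2}/2\) satisfies the hypothesis with \(\varepsilon = 1\); the limit is then explicitly \(\exp (-s^{2})\), the characteristic function of N(0, 2), in agreement with Theorem 3.

```python
import numpy as np

def log_partial_product(log_cf, s, n=40):
    # Logarithm of the n-th partial product in Eq. (12):
    #   sum_{j=0}^{n-1} 2^j * ln phi_G(s / 2^j)
    return sum(2.0**j * log_cf(s / 2.0**j) for j in range(n))

log_cf_gauss = lambda s: -0.5 * s**2   # G = N(0, 1), eps = 1 in Theorem 4

s = 2.0
approx = log_partial_product(log_cf_gauss, s)
print(approx, -s**2)   # partial product approaches ln phi_{G^inf}(s) = -s^2
```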

Remark 3

The above assumption on \(\ln \varphi _{G}\) implies \(m^{(1)}_G=0\). Moreover, the assumed behaviour of the function \(\ln \varphi _G\) near zero can equivalently be replaced by estimates on \(\varphi _{G} -1\). This is a direct consequence of the following inequalities:

$$\begin{aligned} \vert a \vert (1 - \vert a \vert ) \le \vert \ln ( 1+ a) \vert \le \vert a \vert (1 + \vert a \vert ), \; \text{ for } \; a\in {\mathbb {C}}, \vert a \vert < 0.5. \end{aligned}$$

Corollary 3

Let \(\eta ' \in {\mathbb {R}}^{d'}\) be a random vector with independent coordinates \(\eta _k'\) distributed according to \(G_k' \in {\mathscr {P}}^{(1)}\), satisfying the condition of Theorem 4 with (possibly different) parameters \(s_{0,k} >0 \), \(\varepsilon _k\in (0,1]\) and \(C_{k}>0\), \(k = 1,2,\ldots , d'\), respectively. Moreover, let \(G\in {\mathscr {P}}^{(d)} \) be the p.d. of the random vector \( \eta = A ( \eta ')\), transformed from \(\eta '\) by a (deterministic) linear map \(A: {\mathbb {R}}^{d'} \rightarrow {\mathbb {R}}^{d}\). Then the sequence of p.ds. \(G^{\{n\}}\), \(n \in {\mathbb {N}}\), given by Eq. (5) converges weakly to a p.d. \(G^{\{\infty \}} \in {\mathscr {P}}^{(d)}\) determined by the infinite product in Eq. (12).

Proof

In terms of \(\eta \) we may write

$$\begin{aligned} \varphi _G(s) = \varphi _\eta (s) = \varphi _{\eta '} (s \circ A ) , \; \text{ for } \; s \in {\mathbb {R}}^d. \end{aligned}$$

Therefore, the jth factor of the infinite product in Eq. (12) can be written as follows

$$\begin{aligned} \left( \varphi _\eta \left( \tfrac{1}{2^{j}} \, s\right) \right) ^{2^{j}} = \left( \varphi _{\eta '} \left( \tfrac{1}{2^{j}} \, s \circ A \right) \right) ^{2^{j}} = \prod _{k=1}^{d'} \, \left( \varphi _{G_{k}'} \left( \tfrac{1}{2^j} \, s \cdot A_{, k} \right) \right) ^{2^{j}}, \end{aligned}$$

where \(A_{, k}\) stands for the kth column of the coefficient matrix A. Now, taking into account the assumptions and the boundedness of the scalar product, by Theorem 4 the infinite product

$$\begin{aligned} \varphi _{G_{k}^{\{\infty \}}} (s):= \varphi _{G_{k}'^{\{\infty \}}} (s \cdot A_{,k} ) = \prod _{j=0}^{\infty } \, \left( \varphi _{G_{k}'} \left( \tfrac{1}{2^{j}} ( s \cdot A_{, k}) \right) \right) ^{2^{j}}, \; \text{ for } \; s \in {\mathbb {R}}^{d}, \end{aligned}$$

converges almost uniformly with respect to \(s \in {\mathbb {R}}^{d}\) to the characteristic function of the probability distribution of the random vector \(\ \eta _k' \, A_{,k}\), for every \(k=1,2,\ldots , d'\). Thus, the infinite product in Eq. (12) defines the weak limit of \(G^{\{n\}}\). This limit is the convolution of the limits \(G_{1}^{\{\infty \}} \star G_{2}^{\{\infty \} }\star \ldots \star G_{d'}^{\{\infty \}} \in {\mathscr {P}}^{(d)}.\)

Theorem 5

If the modulus of the characteristic function of \(G\in {\mathscr {P}}^{(1)}\) is bounded as

$$\begin{aligned} \vert \varphi _{G}(s) \vert \le \exp \left( - C \vert s \vert ^{1+\varepsilon }\right) \text{ for } \vert s \vert \le s_{0} , \text{ where } s_{0}> 0,\ \varepsilon \in (0,1],\ C>0, \end{aligned}$$

then

$$\begin{aligned} \vert \varphi _{G^{\{\infty \}}}(s) \vert \le \exp \left( - 2^{- \varepsilon } C {s_{0}}^{\varepsilon } \, \vert s \vert \right) , \; \text{ for } \;\vert s \vert \ge s_{0}, \end{aligned}$$

whenever \(G^{\{\infty \}}\) exists in \({\mathscr {P}}^{(1)}\). In particular, \(G^{\{\infty \}}\) is then absolutely continuous with respect to the Lebesgue measure \(\lambda ^{(1)}\).

Proof

The modulus of each factor of the convergent infinite product in Eq. (12) is not greater than 1, since all the factors are characteristic functions. Thus, it suffices to obtain the desired estimate from a single suitably chosen factor. Let us fix \(\vert s \vert \ge s_0\) and assign to it the natural number \(n(s):= \left\lfloor \log _2 \left( \tfrac{\vert s \vert }{s_0}\right) \right\rfloor \in {\mathbb {N}}.\) Obviously, if \(n = n(s)\), then

$$\begin{aligned} \frac{\vert s \vert }{2^{n+1} } < s_0, \quad \text{ and } \quad 2^{-n} \ge \frac{s_0 }{\vert s \vert } \end{aligned}$$

and therefore by our assumptions the following estimates hold

$$\begin{aligned} \begin{aligned} |\varphi _{G^{\{\infty \}}}(s)|&\le \left| \varphi _G\left( \frac{s}{2^{n+1}} \right) \right| ^{2^{n+1}} \le \exp \left( - C \left( \frac{\vert s \vert }{2^{n+1}} \right) ^{1 + \varepsilon } {2^{n+1}} \right) \\&\le \exp \left( - 2^{-\varepsilon } C \, {{\vert s \vert }^{1+ \varepsilon }} \left( 2^{-n}\right) ^ \varepsilon \right) \le \exp \left( - 2^{-\varepsilon } C \,{{\vert s \vert }^{1+ \varepsilon }}{\left( \frac{s_0 }{\vert s \vert }\right) ^{ \varepsilon }} \right) \\&\le \exp \left( - 2^{- \varepsilon } C {s_0}^{\varepsilon } \, \vert s \vert \right) , \end{aligned} \end{aligned}$$

as required. Since these estimates hold for all \(\vert s \vert \ge s_0\), the obtained bound implies that \(\varphi _{G^{\{\infty \}}}\) is integrable over \({\mathbb {R}}\). Hence, the absolute continuity of the limit p.d. \(G^{\{\infty \}}\) follows from the inverse Fourier transform theorem (see e.g. Theorem 3.3.5 in [8]).
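For G standard normal the hypothesis of Theorem 5 holds with \(C = 1/2\), \(\varepsilon = 1\) and any \(s_{0} > 0\), and the limit is N(0, 2) with \(\vert \varphi _{G^{\{\infty \}}}(s) \vert = \exp (-s^{2})\). The sketch below (our illustration) checks the resulting linear-exponential bound on a grid.

```python
import numpy as np

C, eps, s0 = 0.5, 1.0, 1.0            # valid choices for G = N(0, 1)
s = np.linspace(s0, 50.0, 1000)       # grid of |s| >= s0

bound = np.exp(-2.0**(-eps) * C * s0**eps * s)   # the bound of Theorem 5
exact = np.exp(-s**2)                 # |phi_{G^inf}| for the limit N(0, 2)
print(bool(np.all(exact <= bound)))   # True
```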

5 Examples and Problems

The conditions on the logarithm of the kernel’s characteristic function in Theorems 4 and 5 are not “uncommon” ones. Besides distributions with finite variance, there is a large class of heavy-tailed distributions (on \({\mathbb {R}}\)) with moments of order less than 2, covering interesting domains of attraction of stable p.ds. For illustrative purposes, we consider some specific cases. For what follows, we note that Markov’s inequality implies the following estimate of the tails of an arbitrary \(H \in {\mathscr {P}}^{(1)}\),

$$\begin{aligned} H\left( (-\infty , -x]\right) + H\left( [x, \infty )\right) \le x^{-p} \, \int _{{\mathbb {R}}} \ \vert u \vert ^{p} \mathrm {d}H(u) , \; \text{ for } x>0, \, p > 0. \end{aligned}$$

Proposition 3

If \(G\in {\mathscr {P}}^{(1)}\) is symmetric and satisfies

$$\begin{aligned} G(-\infty ,-x]= G[x, \infty ) \le C x^{-( 1 + \varepsilon )}, \; \text{ for } x>0, \; \text{ where } \; \varepsilon \in (0,1), \, C>0, \end{aligned}$$

then, possibly with another constant \(C >0 \), we have

$$\begin{aligned} \vert \ln \varphi _{G}(s) \vert < C\vert s \vert ^{1+\varepsilon }, \; \text{ for } \; s \in {\mathbb {R}}\end{aligned}$$

(equivalently, \(\vert \varphi _{G}(s)-1 \vert < C\vert s \vert ^{1+\varepsilon }\) ). In particular, G satisfies conditions of Theorem 4.

Proof

Due to the symmetry of G, it suffices to consider \(s>0\). Moreover, for every \(A >0\) we have

$$\begin{aligned} \begin{array}{rl} \displaystyle \vert 1-\varphi _{G}(s) \vert &{}= \displaystyle \left| 1- \int \limits _{{\mathbb {R}}} e^{isx} \ \mathrm {d}G(x) \right| = \displaystyle \left| \int \limits _{{\mathbb {R}}} (1-\cos (sx) ) \ \mathrm {d}G(x) \right| \\ &{} \le \displaystyle 2 \int \limits _{[0,A)} \frac{s^{2}x^{2}}{2} \ \mathrm {d}G(x) +4 \int \limits _{[A, \infty )} \ \mathrm {d}G(x). \end{array} \end{aligned}$$

Integrating the first term by parts and taking \(A = \pi /s\), we obtain

$$\begin{aligned} \begin{array}{rl} \displaystyle I &{} \displaystyle := \, 2 \int \limits _{[0,A)} \frac{s^{2}x^{2}}{2} \ \mathrm {d}G(x) = - s^2 A^2 \, G([A, \infty )) + 2 s^2 \int \limits _{[0,A)} x \, G([x, \infty )) \, \mathrm {d}x\\ &{} \displaystyle \le 2 s^{2} \int \limits _{[0,A)} \ C \vert x\vert ^{-\varepsilon } \ \mathrm {d}x = \frac{2 C}{1-\varepsilon } \, \pi ^{1-\varepsilon } s^{1+\varepsilon }. \end{array} \end{aligned}$$

For the second term, again with \(A = \pi /s \), we have

$$\begin{aligned} II := 4 \int _{[A, \infty )} \mathrm {d}G(x) = 4 G([A, \infty )) \le 4\, C\, \pi ^{-(1+\varepsilon )} s^{1+\varepsilon }. \end{aligned}$$

Summing the two estimates yields \(\vert 1 - \varphi _{G}(s) \vert \le C' s^{1+\varepsilon }\) with a constant \(C'\) depending only on C and \(\varepsilon \); the bound on \(\vert \ln \varphi _{G} \vert \) then follows from Remark 3.

Next, let us indicate some stronger results, based on stability theory, for the asymptotic behaviour of the characteristic functions near the origin. Obviously, stronger assumptions will be required. The following two propositions provide simple tests for the existence and continuity of the limit distributions that may be readily applied in practice. Since they follow from well-known facts about stable distributions, the proofs are omitted. The interested reader is referred, e.g., to Theorem 5 of Section 7.4 in [7], Proposition 2.2.13 in [9] or Chapter 17 in Feller’s book [10]. In order to make the propositions more approachable, we assume that, as \(x \rightarrow \infty \),

$$\begin{aligned} G(-\infty ,-x]= C x^{-( 1 + \varepsilon )} + o\left( x^{-( 1 + \varepsilon )}\right) , \quad G[x, \infty ) = C x^{-( 1 + \varepsilon )} + o\left( x^{-( 1 + \varepsilon )}\right) . \end{aligned}$$
(13)

Proposition 4

If \(G\in {\mathscr {P}}^{(1)}\) with mean value 0 has tails satisfying Eq. (13) for some constants \(C>0\) and \( \varepsilon \in (0,1)\) then

$$\begin{aligned} \varphi _G(s) - 1 = - 2\,C\, c(\varepsilon )\, \vert s\vert ^{1 + \varepsilon } + o(\vert s\vert ^{1 + \varepsilon }), \; \text{ as } \; s \rightarrow 0, \end{aligned}$$

where \(c(\varepsilon ) ={\varepsilon }^{-1}\, {(1+\varepsilon )\, {\varGamma }(1-\varepsilon )\, \sin ( \tfrac{\pi }{ 2}\varepsilon )}\). In particular,

  • G is in the domain of attraction of a \((1+ \varepsilon )\)-stable p.d. with characteristic function \(\varphi _{G^{(\infty )}}(s) =\exp ( - c \vert s \vert ^{1+ \varepsilon })\), where \(c = 2 C c(\varepsilon )\) is a positive constant;

  • the assumptions of Theorems 4 and 5 are satisfied, implying that the limit \(G^{\{\infty \}}\) given by Eq. (12) exists and is absolutely continuous.

The specific properties of 1-stable distributions allow us to make further statements under only slightly stronger assumptions.

Proposition 5

If the tails of a symmetric p.d. \(F \in {\mathscr {P}}^{(1)}\) satisfy Eq. (13) with \(\varepsilon = 0\) (and G replaced by F), then the characteristic function \(\varphi _{F}\) satisfies the following

$$\begin{aligned} \varphi _F(s) - 1 = -\pi C \vert s\vert + o(\vert s\vert ), \; \text{ as } \; s \rightarrow 0. \end{aligned}$$

In particular, the assumptions of Theorem 2 (ii) are fulfilled, implying that F is in the domain of attraction of the Cauchy p.d.

$$\begin{aligned} \varphi _{F^{(\infty )}} (s) = \lim _{n\rightarrow \infty } \ \left( \varphi _{F} \left( \tfrac{s}{n} \right) \right) ^n = \exp ( - C \pi \vert s \vert ), \; \text{ for } \; s \in {\mathbb {R}}. \end{aligned}$$
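Proposition 5 can be illustrated with the standard Cauchy distribution itself: its tails satisfy \(F[x,\infty ) \sim (\pi x)^{-1}\), i.e. Eq. (13) with \(\varepsilon = 0\) and \(C = 1/\pi \), so the predicted limit is \(\exp (-C\pi \vert s\vert ) = \exp (-\vert s\vert )\). The Monte Carlo sketch below (ours) estimates the characteristic function of the sample mean and compares it with this limit.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 64, 50_000

# Sample means of n i.i.d. standard Cauchy variables (again standard Cauchy,
# since the Cauchy law is 1-stable).
means = rng.standard_cauchy((reps, n)).mean(axis=1)

s = 1.0
phi_hat = np.cos(s * means).mean()   # empirical characteristic function (real part)
print(phi_hat, np.exp(-s))           # both close to exp(-1) ~ 0.368
```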

Remark 4

We now give some examples of assumptions on a symmetric distribution \(G\in {\mathscr {P}}^{(1)}\), which are sufficient to satisfy Eq. (13).

  • For \(x<0\), \(G(-\infty , x] = \frac{ ( u(x) )^{\alpha } }{ (v(x))^{\beta } }\) is a positive increasing function, where u and v are polynomials of degrees l and m, respectively, with \(\varepsilon := m \beta - l \alpha -1 \ge 0\);

  • \(G\{ {\mathbb {Z}}\} =1\) and \(G\{j\} = \frac{ ( u(j) )^{\alpha } }{ (v(j))^{\beta } }\), for \( j>0\), where u and v are polynomials of degrees l and m, respectively, positive on \({\mathbb {Z}}_{+}\), with \( \varepsilon := m \beta - l \alpha -2 \ge 0\); for instance, this holds if \(G\{j\} = C \frac{1}{\vert j \vert ^{2 + \varepsilon }}\) for \(j\ne 0\);

  • \( \frac{\mathrm {d}G}{\mathrm {d}\lambda ^{(1)}}(x) = C \left( 1 + a \vert x - \mu \vert ^\alpha \right) ^{-\tfrac{2 + \varepsilon }{ \alpha }}, \; x \in {\mathbb {R}}, \) where \(a, \alpha >0\), \(\varepsilon \ge 0\).

Remark 5

According to Corollary 1, for \(G \in {\mathscr {P}}^{(d)}\) the centred QSO \({\mathbb {Q}}_{G}\) is weakly stable at \(F\in {\mathscr {P}}^{(d)}\), whenever the weak limit \(H_{\infty }\) of the convolutions \( F^{(n)} \star G^{\{n\}}\) exists in \({\mathscr {P}}^{(d)}\), as \(n \rightarrow \infty \). The main results supply some sufficient conditions for the existence of the limits separately for \( F^{(n)}\) and \( G^{\{n\}}\). It follows that the limit \(H_{\infty }\) is independent of F only within families of initial distributions F which share a common limit \(F^{(\infty )} \in {\mathscr {P}}^{(d)}\). In particular, such a class can consist of distributions with common mean value \(m^{(1)} \in {\mathbb {R}}^{d}\). In the more general class of initial probability distributions F which possess a limit \(F^{(\infty )} \in {\mathscr {P}}^{(d)}\), the stability is equivalent to the existence of the weak limit \(G^{\{\infty \}} \in {\mathscr {P}}^{(d)}\). Can the limit \(H_{\infty }\) exist when the separate limits do not? By Theorem 1, it suffices that Eq. (10) be satisfied with F in place of H. Then F is a fixed point of \({\mathbb {Q}}_{G}\), and the limit of \(({\mathbb {Q}}_{G})^{n}(F)\) equals F, as the sequence is constant. We leave it as an open problem whether there are solutions of Eq. (10) for which \(G^{\{n\}}\) is not weakly convergent to a p.d. on \({\mathbb {R}}^{d}\).

6 Dyadically Stable Distribution

We conclude our work by considering a special kind of stable p.ds.

Definition 4

A one-dimensional p.d. \(F \in {\mathscr {P}}^{(1)}\) is said to be dyadically \(\alpha \)-stable if it is infinitely divisible and its characteristic function satisfies, cf. Eq. (11),

$$\begin{aligned} \varphi _F ( 2 \, s ) = \left( \varphi _F ( s )\right) ^{2^{\alpha }} , \; \text{ for } \; s \in {\mathbb {R}}, \; \text{ where } \; \alpha \in (0,2]. \end{aligned}$$
(14)

Remark 6

It is worth recalling that the assumed infinite divisibility of F ensures that \(\varphi _{F}\) vanishes nowhere. Hence, the right-hand side of Eq. (14) is well defined as the unique branch, continuous and equal to 1 at the origin, of a power of a nowhere vanishing continuous complex-valued function. Furthermore, the right-hand side of Eq. (14), being a positive power of the characteristic function of an infinitely divisible distribution, remains the characteristic function of an infinitely divisible distribution.

A wide family of dyadically 1-stable probability distributions is generated by Theorem 2 (i). Along the same lines, one can also prove the following.

Proposition 6

A probability distribution \(H\in {\mathscr {P}}^{(1)}\) satisfies Eq. (14), if for some infinitely divisible \(H_0\in {\mathscr {P}}^{(1)}\) the logarithms of the characteristic functions of H and \(H_{0}\) are related as follows,

$$\begin{aligned} \ln (\varphi _H) (s) = \sum _{k\in {\mathbb {Z}}}\, 2^{k\alpha } \ln (\varphi _{H_0}) (2^{-k} s), \end{aligned}$$

provided that the series is convergent, almost uniformly with respect to \(s \in {\mathbb {R}}\).
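The construction of Proposition 6 is easy to test numerically for a non-integer exponent. Taking again \(\ln \varphi _{H_{0}}(s) = \cos s - 1\) and \(\alpha = 3/2\), the sketch below (our illustration) verifies the dyadic relation \(\ln \varphi _{H}(2s) = 2^{\alpha } \ln \varphi _{H}(s)\) for a truncation of the series.

```python
import numpy as np

def log_cf(s, alpha=1.5, K=80):
    # Truncated series ln phi_H(s) = sum_{k=-K}^{K} 2^{k alpha} ln phi_{H0}(2^{-k} s)
    # with ln phi_{H0}(s) = cos s - 1 = -2 sin^2(s/2); the sin^2 form avoids
    # cancellation, which the large weights 2^{k alpha} would amplify.
    k = np.arange(-K, K + 1)
    return float(np.sum(2.0**(alpha * k) * (-2.0) * np.sin(2.0**(-k) * s / 2.0)**2))

s, alpha = 1.3, 1.5
dev = abs(log_cf(2 * s, alpha) - 2.0**alpha * log_cf(s, alpha))
print(dev)   # dyadic alpha-stability holds up to truncation error
```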

Theorem 6

For \(\alpha \in (1,2]\), every one-dimensional dyadically \(\alpha \)-stable p.d. \(H \in {\mathscr {P}}^{(1)}\) is the weak limit of the iterates \(\left( {\mathbb {Q}}_{G}\right) ^{n}(\delta _{0})\) of some centred QSO \({\mathbb {Q}}_{G}\). For this relationship between H and G to hold, it is sufficient that \(\varphi _{G}(s) = \left( \varphi _H(s)\right) ^{\frac{2^\alpha -2}{2^\alpha }}\). Furthermore, H is then a fixed point of \({\mathbb {Q}}_{G}\).

Proof

Since \(\varphi _{G}\) is a positive power of \(\varphi _{H}\), G is also dyadically \(\alpha \)-stable (cf. Remark 6), and therefore \((\varphi _{G}(\frac{s}{2})) = (\varphi _{G}(s))^{ 2^{-\alpha }}\). Applying this dyadic relation repeatedly, we obtain the following limit for \(G^{\{n\}} = \left( {\mathbb {Q}}_{G}\right) ^n(\delta _0)\), as \(n \rightarrow \infty \),

$$\begin{aligned} \prod _{j=0}^{n-1}\, \left( \varphi _{G}\left( \frac{s}{2^{j}}\right) \right) ^{2^{j}} = \prod _{j=0}^{n-1}\, (\varphi _{G}(s))^{ 2^{j(1- \alpha )} } \rightarrow (\varphi _G(s))^{ \frac{1}{ 1 - 2^{(1 - \alpha )}}} = (\varphi _{G}(s))^{\frac{2^\alpha }{2^\alpha -2}} =\varphi _{H}(s). \end{aligned}$$

Thus by Theorem 1 we also obtain that H is a fixed point of \({\mathbb {Q}}_{G}\).
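For \(\alpha = 2\) the theorem can be checked by hand: if \(H = N(0, 1)\), then \(\varphi _{G} = \varphi _{H}^{(2^{2}-2)/2^{2}} = \varphi _{H}^{1/2}\), i.e. \(G = N(0, 1/2)\), and the partial products in Eq. (12) recover \(\varphi _{H}\). The sketch below (our illustration) confirms this numerically; note the agreement with Theorem 3, since \(v_{G^{\{\infty \}}} = 2 v_{G} = 1 = v_{H}\).

```python
import numpy as np

log_cf_G = lambda s: -0.25 * s**2   # G = N(0, 1/2): phi_G = phi_H^{1/2}, H = N(0, 1)

def log_partial_product(s, n=40):
    # Logarithm of prod_{j=0}^{n-1} (phi_G(s / 2^j))^{2^j}, cf. Eq. (12)
    return sum(2.0**j * log_cf_G(s / 2.0**j) for j in range(n))

s = 1.7
print(log_partial_product(s), -0.5 * s**2)   # limit recovers ln phi_H(s) = -s^2/2
```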

We close by remarking that the dyadically 1-stable distributions form the whole set of weak limits of \(F^{(n)}\), \(n \in {\mathbb {N}}\), with F running over \({\mathscr {P}}^{(1)}\), cf. Theorem 1. On the other hand, the union of the families of all dyadically \(\alpha \)-stable distributions, \(\alpha \in (1,2]\), does not cover the family of all weak limits of \(G^{\{n\}}\), \(n \in {\mathbb {N}}\). Take, for example, G equal to a convolution of two dyadically stable distributions with different exponents, say \(1< \alpha < \beta \le 2\), neither concentrated at the origin of \({\mathbb {R}}\). Then, repeating the above proof, one finds that the limit is again a convolution of two such distributions, which is not dyadically stable for any exponent.