1 Introduction

Let \(d >1\). Let \((\mathbf {T}_i)_{i \ge 1}\) be a sequence of random \(d\times d\)-matrices and \(Q\) a random vector in \({\mathbb {R}}^d\). Assume that

$$\begin{aligned} N := \# \{ i : \mathbf {T}_i \ne 0 \} \end{aligned}$$

is finite a.s. We will assume throughout that the collection \((\mathbf {T}_i)_{i \ge 1}\) is ordered in such a way that \(\mathbf {T}_i \ne 0\) if and only if \(i \le N\). For any random variable \(X \in {\mathbb {R}}^d\), let \((X_i)_{i \ge 1}\) be a sequence of i.i.d. copies of \(X\), independent of \((Q, (\mathbf {T}_i)_{i \ge 1})\). Then the random variable \( \sum _{i =1}^N \mathbf {T}_i X_i + Q\) is well defined, and one can ask whether it holds that

$$\begin{aligned} X \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i=1}^N \mathbf {T}_i X_i + Q, \end{aligned}$$
(1.1)

where \(\mathop {=}\limits ^{{\mathcal {L}}}\) denotes equality in law. If \(X\) is a random variable such that (1.1) holds, then we call its law \({\mathcal {L}}\left( X \right) \) a solution of Eq. (1.1), and ask for a characterization of all solutions of Eq. (1.1). In other words, introducing the mapping \({\mathcal {S}}_Q\) on the set \({\mathcal {P}}_{}({\mathbb {R}}^d)\) of probability measures on \({\mathbb {R}}^d\), defined by

$$\begin{aligned} {\mathcal {S}}_Q\eta := {\mathcal {L}}\left( \sum _{i=1}^N \mathbf {T}_i X_i + Q \right) \!, \end{aligned}$$
(1.2)

where \((X_i)_{i \ge 1}\) are i.i.d. random variables with law \(\eta \), and independent of \((Q, (\mathbf {T}_i)_{i \ge 1})\), we want to find all \(\eta \in {\mathcal {P}}_{}({\mathbb {R}}^d)\), such that \({\mathcal {S}}_Q\eta = \eta \), i.e. all fixed points of \({\mathcal {S}}_Q\). Following Durrett and Liggett [27], we call \({\mathcal {S}}_Q\) the multivariate smoothing transform, and

$$\begin{aligned} {\mathcal {S}}_0\eta := {\mathcal {L}}\left( \sum _{i=1}^N \mathbf {T}_i X_i \right) \!, \end{aligned}$$
(1.3)

the homogeneous multivariate smoothing transform. By a shift of notation, we will also call a random variable \(X\) a fixed point (FP) or solution, if its law \({\mathcal {L}}\left( X \right) \) is a fixed point of the multivariate smoothing transform.
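Although the analysis below is purely analytic, the map \({\mathcal {S}}_Q\) is straightforward to simulate. The following minimal Python sketch draws one sample from \({\mathcal {S}}_Q\eta \); the particular law of \((Q, (\mathbf {T}_i)_{i \ge 1})\) chosen in `sample_weights` is entirely hypothetical and serves only to make the snippet self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(d=2):
    """One copy of (Q, (T_i)_{i<=N}). The law here is purely illustrative:
    N geometric on {1,2,...}, i.i.d. uniform nonnegative matrix entries,
    and a uniform nonnegative shift vector Q."""
    N = rng.geometric(0.5)                       # N >= 1 a.s., E[N] = 2
    Ts = rng.uniform(0.0, 0.6, size=(N, d, d))   # nonnegative d x d weights
    Q = rng.uniform(0.0, 1.0, size=d)
    return Q, Ts

def smoothing_map(sample_eta, d=2):
    """One draw from S_Q(eta), cf. (1.2): sum_{i<=N} T_i X_i + Q with
    X_i ~ eta i.i.d., independent of (Q, (T_i))."""
    Q, Ts = sample_weights(d)
    return sum(T @ sample_eta() for T in Ts) + Q
```

Nesting `smoothing_map` \(n\) times corresponds to applying \({\mathcal {S}}_Q^n\); the weighted branching process of Sect. 5.1 makes this iteration precise.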

Note that \(X \equiv 0\) is always a solution of the homogeneous equation with \(Q \equiv 0\), which we refer to as the trivial one. If there is \(X \ne 0\) which is a fixed point of the homogeneous equation, then \(cX\) will be a fixed point of the homogeneous equation as well, for any \(c \in {\mathbb {R}}\). Observe that the following particular case of Eq. (1.1):

$$\begin{aligned} X \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i=1}^N N^{-1/\alpha } X_i \end{aligned}$$
(1.4)

with \(\alpha \in (0,2]\) and \(N\) having a geometric distribution on \({\mathbb {N}}\), is solved by multivariate strictly \(\alpha \)-stable laws.
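Indeed, if \(X\) is strictly \(\alpha \)-stable with characteristic function \(\varphi \), then \(\varphi (n^{-1/\alpha } u)^n = \varphi (u)\) for every \(n \in {\mathbb {N}}\), so conditioning on \(N\) (which is independent of \((X_i)_{i \ge 1}\)) yields

$$\begin{aligned} \mathbb {E}\exp \left( i \left\langle u, \sum _{i=1}^N N^{-1/\alpha } X_i \right\rangle \right) = \sum _{n \ge 1} {\mathbb {P}}_{} \left( {N = n} \right) \varphi (n^{-1/\alpha } u)^n = \varphi (u), \quad u \in {\mathbb {R}}^d. \end{aligned}$$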

Multivariate stochastic fixed point equations of the form (1.1) arise naturally in the study of branching processes, divide-and-conquer algorithms or generalized Pólya urn models; some instances appear in Neininger and Rüschendorf [63, Theorem 4.1 and Theorem 4.3], [13, Theorem 1] or in [38, Eq. (3.5)], which can be written in matrix form, too. See also [49, Section 7] or [45, Section 7] for a critical discussion. Recently, Bassetti and Matthes [8] used Eq. (1.1) with \(Q \equiv 0\) to study generalized kinetic models, aiming at the description of particle velocities in a Maxwell gas.

Understanding Eq. (1.1) (with \(Q \equiv 0\)) is an important step toward studying the so-called multiplicative chaos equation for matrices, i.e. finding random matrices \(\mathbf{X}\), satisfying

$$\begin{aligned} \mathbf{X} \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i=1}^N \mathbf {T}_i \mathbf{X}_i, \end{aligned}$$
(1.5)

where \((\mathbf{X}_i)_{i \ge 1}\) is a sequence of i.i.d. copies of \(\mathbf{X}\) and independent of \((\mathbf {T}_i)_{i \ge 1}\). To wit, if \(X, X_1, X_2, \ldots \) satisfy Eq. (1.1) (with \(Q \equiv 0\)), then for every \(a=(a_1, \ldots , a_d)^\top \in {\mathbb {R}}^d\) it holds that

$$\begin{aligned} \left( a_1 X \quad a_2 X \quad \ldots \quad a_d X \right) \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i=1}^N \mathbf {T}_i \left( a_1 X_i \quad a_2 X_i \quad \ldots \quad a_d X_i \right) , \end{aligned}$$

i.e. the rank-one matrix \(\mathbf{X}=Xa^\top \) is a fixed point of Eq. (1.5). There is substantial interest in understanding (1.5), for it may serve as a discrete approximation to Gaussian matrix-multiplicative chaos, a field of study which has just started, see Chevillard et al. [23]; Rhodes and Vargas [67].

The long-term goal is to characterize all solutions of the multivariate stochastic fixed point equation (1.1) for general matrices. As the historical survey of the one-dimensional case will show, techniques (for \(d=1\)) were first developed to understand nonnegative fixed points of the homogeneous equation with nonnegative weights \((T_i)_{i \ge 1}\). This equation then served as a model for the study of the more complicated case of real-valued fixed points, or even real-valued weights; a problem which was solved very recently in Iksanov and Meiners [33].

We begin our investigation by studying the multidimensional analogue of the nonnegative case, i.e. we assume that

$$\begin{aligned} (Q,(\mathbf {T}_i)_{i \ge 1}) \text { is a random variable in } {{\mathbb {R}}^d_\ge }\times M(d\times d, {\mathbb {R}}_\ge )^{\mathbb {N}}\end{aligned}$$
(A0)

and, as mentioned before, we assume that

$$\begin{aligned} \text {the r.v. }N := \# \{ i : \mathbf {T}_i \ne 0 \} \text { equals } \sup \{ i : \mathbf {T}_i \ne 0 \} \text { and is finite a.s.} \end{aligned}$$
(A1)

Here, \({\mathbb {R}}_\ge :=[0,\infty )\).

In this setting, Menshikov et al. [58] asked (Remark after Proposition 2.4) for sufficient conditions for the existence of a fixed point of the matrix-multiplicative chaos equation. As indicated above, by characterizing the fixed points of Eq. (1.1), we solve this open problem. Further applications will be given below.

In order to explain what assumptions and results can be expected, we will now review the history of the problem in dimension \(d=1\). Then we state all our assumptions and the main result.

1.1 History of the problem: results in the one-dimensional case

If \(d=1\), then \((Q, T_1, T_2, \ldots )\) is a sequence of nonnegative random variables, and one can define, for \(s \ge 0\), the quantity

$$\begin{aligned} \hat{m}(s) := \mathbb {E}\sum _{i=1}^N T_i^s \in [0, \infty ]. \end{aligned}$$

Then it is immediate, upon taking expectations, that \(\hat{m}(1)=1\) (\(\hat{m}(1)<1\)) is a necessary condition for the homogeneous (inhomogeneous with \(\mathbb {E}Q >0\)) equation to have a nontrivial, i.e. non-zero, fixed point with finite expectation. But the rôle of the function \(s \mapsto { \hat{m}}(s)\) is much more fundamental: it is log-convex on its domain of definition, and subject to the assumption \(\mathbb {E}N>1\), which is imposed in all the studies below, there are at most two values \(0 < \alpha \le \beta \) with \({ \hat{m}}(\alpha )={ \hat{m}}(\beta )=1\). If derivatives exist, then \({ \hat{m}}'(\alpha ^-) \le 0\), \({\hat{m}}'(\beta ) \ge 0\), with strict inequalities holding unless \(\beta =\alpha \); the latter case is called the critical case.
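As a concrete illustration (a toy model, not taken from the literature cited here), let \(N \equiv 2\) and \(T_1, T_2\) be i.i.d. log-normal, \(T_i = e^{G_i}\) with \(G_i \sim {\mathcal {N}}(-3/2,1)\); then \(\hat{m}(s) = 2e^{-3s/2 + s^2/2}\) is log-convex and crosses 1 exactly twice. A short Python sketch locating \(\alpha \) and \(\beta \) by bisection:

```python
import numpy as np

# Toy model: N = 2 and T_i = exp(G_i), G_i ~ Normal(-1.5, 1) i.i.d., so that
# m_hat(s) = E sum_{i<=2} T_i^s = 2 * exp(-1.5 s + s^2 / 2)  (log-convex).
m_hat = lambda s: 2.0 * np.exp(-1.5 * s + 0.5 * s**2)

def root(lo, hi, tol=1e-8):
    """Bisection for m_hat(s) = 1; valid because m_hat crosses 1 exactly
    once on each of the two bracketing intervals used below."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if (m_hat(lo) - 1) * (m_hat(mid) - 1) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

alpha, beta = root(0.01, 1.5), root(1.5, 4.0)    # ~ 0.571 and ~ 2.429
```

Here \(\hat{m}'(\alpha ) < 0 < \hat{m}'(\beta )\), so this toy model is non-critical.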

The set of fixed points is structured by the value of \(\alpha \), which generalizes the index of stability. Roughly speaking, there are two classes of fixed points: mixtures of \(\alpha \)-stable laws, which have an infinite moment of order \(\alpha \), and fixed points with a finite moment of order \(\alpha \), which then might have heavy tails with tail index \(\beta \), if \(\beta \) exists. The second class of fixed points attracts point masses (and can be found by using Banach’s fixed point theorem on a suitable subspace of probability measures, namely those having a finite moment of order \(\alpha + \varepsilon \)), while the first class attracts \(\alpha \)-stable laws and needs more involved techniques to be characterized.

It was shown in Doney [25]; Biggins [9]; Kahane and Peyrière [41], that \({\hat{m}}(1)=1\) together with \({\hat{m}}'(1^-)<0\) is a sufficient condition for the existence of fixed points of \({\mathcal {S}}_0\) with finite expectation. The first two papers are motivated by questions about convergence of martingales in the branching random walk, while the third paper studies so-called Mandelbrot cascades, a model introduced in Mandelbrot [54]. Motivated by questions from interacting particle systems, Durrett and Liggett [27] considered the homogeneous equation for general \(\alpha \in (0,1]\) under the assumption that \(N\) is bounded. Directly related to Eq. (1.1) is the functional equation

$$\begin{aligned} f(t)=\mathbb {E}\prod _{i=1}^N f(T_i t), \end{aligned}$$

where \(f(t)=\mathbb {E}\exp (-tX)\) is the Laplace transform (LT) of a nonnegative random variable. Durrett and Liggett [27] proved that there is a function \(L\), slowly varying at 0, such that the Laplace transform of each nontrivial fixed point satisfies (assuming non-arithmeticity)

$$\begin{aligned} \lim _{t \rightarrow 0} \frac{1 - f(t)}{L(t)t^\alpha } = K \end{aligned}$$
(1.6)

for some \(K >0\), and relation (1.6) uniquely characterizes the fixed point. The function \(L\) is constant in the non-critical case, while it equals \(L(t)=\left| {\log t} \right| \) in the critical case \(\alpha =1\). Using the Tauberian theorem for LTs, Feller [28, XIII.(5.22)], this relation yields that fixed points have tails of order \(\alpha \) if \(\alpha <1\), and finite expectation if \(\alpha =1\) and \({\hat{m}}'(\alpha ^-)<0\). The results of Durrett and Liggett [27] were extended to the case of more general \(N\), namely random \(N\) with finite expectation, in Liu [50], which also contains many references and additional results. The homogeneous equation in the critical case, i.e. \({\hat{m}}'(\alpha ^-)=0\), was studied in Biggins and Kyprianou [11], Kyprianou [47], Biggins and Kyprianou [12], Aidekon and Shi [1].

Iksanov [34] studied the functional equation for functions \(f\) which are not necessarily Laplace transforms, but satisfy (a generalization of) (1.6). It was proved only recently in Alsmeyer et al. [2], under the weakest assumptions known, that any monotone function with values in \([0,1]\), satisfying the functional equation, is of the form

$$\begin{aligned} f(t)=\mathbb {E}\exp (-K t^\alpha W), \end{aligned}$$
(1.7)

where \(W\) is the (up to scaling) unique fixed point of \(W \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i\ge 1} T_i^\alpha W_i\), where \(W, W_1, \ldots \) are i.i.d. and independent of \((T_i)_{i \ge 1}\). Here, \(\alpha \in (0, \infty )\) with \(\hat{m}'(\alpha ) \le 0\) is considered. Such functions are completely monotone (hence Laplace transforms of a random variable) if and only if \(\alpha \in (0,1]\). In that case, \(f(t)\) describes a mixture of \(\alpha \)-stable laws, the LTs of which are given by \(f_{\alpha ,c}(t):=\exp (-c t^\alpha )\), \(c>0\).

The set of fixed points for the inhomogeneous case was described in Alsmeyer and Meiners [4]: there is a particular fixed point \(W^*\) (called minimal solution), which has a finite moment of order \(\alpha \), and in general, there is a one-parameter family of fixed points, the Laplace transform of which satisfies

$$\begin{aligned} \mathbb {E}\exp (-tX) = { \mathbb {E}\exp (- K t^\alpha W - tW^*), \quad t \ge 0} \end{aligned}$$

for some \(K \ge 0\), with \(W\) as above. If \(K> 0\), these fixed points have an infinite moment of order \(\alpha \).

The inhomogeneous equation in the critical case was studied in Buraczewski and Kolesko [20]. Real-valued fixed points (still for \(T_i\) being nonnegative) were considered in Caliebe [21], Caliebe and Rösler [22], Alsmeyer and Meiners [5], and the most general question about fixed points for real-valued weights \((Q,T_1, T_2, \ldots )\) was solved very recently by Iksanov and Meiners [33].

All papers cited above studied, as their main objective, the structure of the set of fixed points of \({\mathcal {S}}_0\) or \({\mathcal {S}}_Q\) in the one-dimensional case, while our aim is to pursue the same question in the multivariate case. There is a lot of related work concerned with properties of particular fixed points from the second class, namely those of finite expectation (i.e. \(\alpha =1\)) for the homogeneous equation, or properties of \(W^*\) for the inhomogeneous equation. The tail behavior of \(L^1\)-fixed points of \({\mathcal {S}}_0\) was studied in Guivarc’h [30] and subsequently in Liu [51]. It turns out that these fixed points have Pareto-like tails with tail index \(\beta \), where \(\hat{m}(\beta )=1\), \(\hat{m}'(\beta )>0\). Jelenković and Olvera-Cravioto [40] proved that the particular solution \(W^*\) of the inhomogeneous equation satisfies

$$\begin{aligned} \lim _{t \rightarrow \infty } t^\beta {\mathbb {P}}_{} \left( {W^* > t} \right) = C \ge 0, \end{aligned}$$

where positivity of \(C\) was proved in some cases, and under general assumptions later in Alsmeyer et al. [3], Buraczewski et al. [19]. Tail behavior of fixed points in the critical case was studied in Buraczewski [16] for the homogeneous, and in Buraczewski and Kolesko [20] for the inhomogeneous equation. The case of real-valued weights was considered in Jelenković and Olvera-Cravioto [39]. The r.v. \(W\) in (1.7) is the limit of a martingale \(W_n\) (more details are given below); tails of \(\sup _n W_n\) were studied in Iksanov and Negadaĭlov [35]. Continuity properties of the density of fixed points were considered in Liu [52].

1.2 Previous results in the multivariate case

Up to now, only partial results about existence and/or uniqueness of fixed points of Eq. (1.1) for \(d >1\) have been achieved: Fixed points on \({\mathbb {R}}^d\) with a finite variance were studied in Neininger and Rüschendorf [63, Theorem 4.4] and in Bassetti and Matthes [8]. Properties of \(W^* \in {{\mathbb {R}}^d_\ge }\) were studied in Mirek [61], its counterpart in \({\mathbb {R}}^d\) in Buraczewski et al. [18]. Fixed points (in \({{\mathbb {R}}^d_\ge }\)) of the homogeneous equation with finite expectation were studied in Buraczewski et al. [17]. Let us point out that all these existence results are only concerned with fixed points from the second class, i.e. those with a finite moment of order \(\alpha + \varepsilon \).

The fixed points described in this work have an infinite moment of order \(\alpha \) if \(K>0\) and thus are not accessible by these methods. Inter alia, we are going to prove that the inhomogeneous equation studied in Mirek [61] has more than one solution [in this respect, the statement of (ibid., Theorem 1.7) is misleading].

1.3 Statement of the main result

The weighted branching model for matrices Let \(\mathbb {V}= \bigcup _{n=0}^\infty {\mathbb {N}}^n \) be the infinite Ulam–Harris tree with root \(\emptyset \). For a node \(v=v_1 \ldots v_n\) write \(\left| {v} \right| =n\) for its generation, \(v|k=v_1 \ldots v_k\) for its ancestor in the \(k\)-th generation, \(k \le n\), and \(vi\) for its \(i\)-th child. Attach to each node an independent copy \(T(v)=(Q(v),\mathbf {T}_1(v), \mathbf {T}_2(v), \ldots )\) of \(T=(Q,\mathbf {T}_1, \mathbf {T}_2, \ldots )\); the random matrix \(\mathbf {T}_i(v)\) can then be interpreted as a weight along the path from \(v\) to \(vi\). The product of the weights along the unique shortest path from \(\emptyset \) to \(v\) is defined recursively by \(\mathbf{L}(\emptyset )= \mathbf{Id}\), the identity matrix, and

$$\begin{aligned} \mathbf{L}(vi) = \mathbf{L}(v)\mathbf {T}_i(v). \end{aligned}$$
(1.8)

Due to the assumption \(N < \infty \) a.s., the number of nonzero weights \(\mathbf{L}(v)\) is almost surely finite in every generation; in fact, it follows a Galton–Watson process with offspring law \({\mathcal {L}}\left( N \right) \). Define a filtration by

$$\begin{aligned} {\mathfrak {B}}_n := \sigma \left( T(v), \left| {v} \right| < n\right) . \end{aligned}$$

Observe that \(\mathbf{L}(v)\) is \({\mathfrak {B}}_{\left| {v} \right| }\)-measurable, while \(Q(v)\) is independent of \({\mathfrak {B}}_{\left| {v} \right| }\).

A particular fixed point With these definitions, a natural candidate for a fixed point of (1.1) is given by the law of the random variable

$$\begin{aligned} W^*:= \sum _{n=0}^\infty \sum _{\left| {v} \right| =n} \mathbf{L}(v) Q(v), \end{aligned}$$
(1.9)

as soon as this sum converges. Let \(\left| {\cdot } \right| \) be the Euclidean norm on \({\mathbb {R}}^d\) with unit sphere \({\mathbb {S}}\), and let \(\left\| {\mathbf{a}} \right\| := \sup _{x \in {\mathbb {S}}}\left| {\mathbf{a}x} \right| \) be the corresponding operator norm on the set of matrices. Then a sufficient condition for the convergence of \(W^*\) is \(\mathbb {E}\left| {W^*} \right| ^s < \infty \) for some \(s>0\). Considering e.g. \(s \le 1\), one obtains

$$\begin{aligned} \mathbb {E}\left| {W^*} \right| ^s \le \mathbb {E}\left| {Q} \right| ^s \sum _{n=0}^\infty \mathbb {E}\sum _{\left| {v} \right| =n} \left\| {\mathbf{L}(v)} \right\| ^s\!, \end{aligned}$$
(1.10)

and the right hand side is finite if and only if \(\mathbb {E}\left| {Q} \right| ^s < \infty \) and the quantity

$$\begin{aligned} m(s) := \limsup _{n \rightarrow \infty } \left( \mathbb {E}\sum _{\left| {v} \right| =n} \left\| {\mathbf{L}(v)} \right\| ^s\right) ^{1/n} \end{aligned}$$

is smaller than 1. The function \(s \mapsto m(s)\) will play a fundamental role in the characterization of the set of fixed points below; it is the multivariate analogue of the function \(\hat{m}(s)\) that appeared in the one-dimensional case.

Assumptions on N Besides (A1) which was already introduced, assume that

$$\begin{aligned} N \ge 1 \text { a.s.}\quad \text {and}\quad 1 < \mathbb {E}N < \infty . \end{aligned}$$
(A2)

The finiteness of \(\mathbb {E}N\) allows us to introduce a probability measure \(\mu \) on nonnegative matrices, defined by

$$\begin{aligned} \int \, f(\mathbf{a}) \, \mu (d\mathbf{a}) := \frac{1}{\mathbb {E}N} \, \mathbb {E}\left( \sum _{i=1}^N f(\mathbf {T}_i)\right) \!. \end{aligned}$$
(1.11)

If now \((\mathbf {M}_n)_{n \ge 1}\) is a sequence of i.i.d. random matrices with law \(\mu \), then

$$\begin{aligned} m(s) = \mathbb {E}N \, \left( \lim _{n \rightarrow \infty } \left( \mathbb {E}\left\| {\mathbf {M}_n \cdots \mathbf {M}_1} \right\| ^s\right) ^{1/n} \right) \!. \end{aligned}$$
(1.12)

We write

$$\begin{aligned} I_\mu := \{ s \ge 0 : m(s) < \infty \}. \end{aligned}$$

On \(I_\mu \), the function \(m(s)\) is log-convex, thus continuous and differentiable on the interior \(\mathrm {int}\left( I_\mu \right) \) of \(I_\mu \).
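Numerically, (1.12) suggests a crude Monte Carlo estimate of \(m(s)\): simulate products \(\mathbf {M}_n \cdots \mathbf {M}_1\), average \(\left\| {\cdot } \right\| ^s\) over many runs, and take the \(n\)-th root. A Python sketch (the law \(\mu \) below is an arbitrary placeholder, and the estimate is biased for finite \(n\)):

```python
import numpy as np

rng = np.random.default_rng(1)

def m_estimate(s, sample_M, mean_N, n=25, n_mc=2_000):
    """Monte Carlo estimate of m(s) via (1.12):
    m(s) ~ E[N] * (E ||M_n ... M_1||^s)^(1/n) for moderately large n."""
    vals = np.empty(n_mc)
    for k in range(n_mc):
        P = np.eye(2)
        for _ in range(n):
            P = sample_M() @ P                   # accumulates M_n ... M_1
        vals[k] = np.linalg.norm(P, 2) ** s      # spectral norm = operator norm
    return mean_N * vals.mean() ** (1.0 / n)

# illustrative mu: 2x2 matrices with i.i.d. Uniform(0, 0.8) entries
sample_M = lambda: rng.uniform(0.0, 0.8, size=(2, 2))
print(m_estimate(1.0, sample_M, mean_N=2.0))
```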

The assumption \(N \ge 1\) a.s. is for convenience; it ensures that the underlying branching process [the subtree pertaining to nonzero weights \(\mathbf{L}(v)\)] survives with probability 1 and is supercritical due to the assumption \(\mathbb {E}N>1\).

Geometrical assumptions A nonnegative matrix \(\mathbf{a}\in {\mathcal {M}}_\ge :=M(d\times d, {\mathbb {R}}_\ge )\) is called allowable, if it has no zero row or column. We say that \(\mathbf{a}\) is positive, if all its entries are strictly positive, i.e. \(\mathbf{a}\in \mathrm {int}\left( {\mathcal {M}}_\ge \right) \). Let \(\Gamma \) be a semigroup of nonnegative matrices.

Definition 1.1

We say that \(\Gamma \) satisfies condition \((C)\), if

  1. (1)

    every \(\mathbf{a}\) in \(\Gamma \) is allowable, and

  2. (2)

    \(\Gamma \) contains a positive matrix.

Then we impose the following assumption:

$$\begin{aligned} \text { The subsemigroup }[{\mathrm {supp}}\, \mu ] \text { generated by }{\mathrm {supp}}\, \mu \text { satisfies }(C). \end{aligned}$$
(A3)

We are also going to impose a non-arithmeticity assumption. Recall that a positive matrix \(\mathbf{a}\) has, due to the Perron-Frobenius theorem, a unique dominant eigenvalue \(\lambda _\mathbf{a}\), exceeding all other eigenvalues in modulus, which is furthermore positive and algebraically simple, with corresponding eigenvector \(u_\mathbf{a}\in {\mathbb {S}}_\ge := {\mathbb {S}}\cap {{\mathbb {R}}^d_\ge }\).

$$\begin{aligned} \text {The additive group generated by } \{ \log \lambda _{\mathbf{a}} : \mathbf{a}\in [{\mathrm {supp}}\, \mu ] \text { positive}\} \text { is dense in } {\mathbb {R}}. \end{aligned}$$
(A4)

Moment assumptions The results from the one-dimensional case indicate that it is natural to assume

$$\begin{aligned} \text {There is }\alpha \in (0,1] \cap \mathrm {int}\left( I_\mu \right) \quad \text { such that } m(\alpha )=1 \quad \text { and } m'(\alpha ) \le 0. \end{aligned}$$
(A5)

For \(d=1\), assuming only \({\mathbb {E}N>1}\) and \({\mathbb {P}}_{} \left( {(T_i)_{i \ge 1} \in \{0,1\}^{\mathbb {N}}} \right) <1\), it is shown in Alsmeyer and Meiners [5, Theorem 6.1] that \(\alpha \in (0,1]\) is necessary for the existence of fixed points of \({\mathcal {S}}_Q\). Our assumption (A5) is slightly stronger and guarantees in addition the existence of \(\varepsilon >0\) such that \(m(\alpha + \varepsilon ) <1\) as soon as \(m'(\alpha ) <0\).

Every allowable matrix \(\mathbf{a}\) acts on \({\mathbb {S}}_\ge \) by the definition

$$\begin{aligned} \mathbf{a}\cdot x := \frac{\mathbf{a}x}{\left| {\mathbf{a}x} \right| } \end{aligned}$$

and maps \(\mathbf{a}\cdot \mathrm {int}\left( {\mathbb {S}}_\ge \right) \subset \mathrm {int}\left( {\mathbb {S}}_\ge \right) \). Moreover, the quantity

$$\begin{aligned} \iota (\mathbf{a}) := \inf _{x \in {\mathbb {S}}_\ge } \left| {\mathbf{a}x} \right| \end{aligned}$$

is strictly positive, and we have for any \(x \in {\mathbb {S}}_\ge \), that \(\iota (\mathbf{a}) \le \left| {\mathbf{a}x} \right| \le \left\| {\mathbf{a}} \right\| \). We will use the following moment conditions, with \(\mathbf {M}\) having law \(\mu \).

$$\begin{aligned} \mathbb {E}\left\| {\mathbf {M}} \right\| ^\alpha \log (1+ \left\| {\mathbf {M}} \right\| ) < \infty , \quad \mathbb {E}\left\| {\mathbf {M}} \right\| ^\alpha \left| { \log \iota (\mathbf {M}^\top )} \right| < \infty \end{aligned}$$
(A6)
$$\begin{aligned} \mathbb {E}\left\| {\mathbf {M}} \right\| ^\alpha \log (1+ \left\| {\mathbf {M}} \right\| ) < \infty , \quad \mathbb {E}(1+\left\| {\mathbf {M}} \right\| )^\alpha \left| {\log \iota (\mathbf {M}^\top )} \right| < \infty \end{aligned}$$
(A6a)
$$\begin{aligned} \mathbb {E}\left\| {\mathbf {M}} \right\| < \infty \end{aligned}$$
(A7)
$$\begin{aligned} 0 < \mathbb {E}\left| {Q} \right| ^{\alpha +\varepsilon } < \infty \quad \text { for some } \varepsilon >0. \end{aligned}$$
(A8)
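The quantities \(\mathbf{a}\cdot x\) and \(\iota (\mathbf{a})\) entering (A6) and (A6a) are easy to compute numerically; for \(d=2\), the infimum defining \(\iota \) can be approximated by scanning a grid on the quarter circle. An illustrative helper, not part of the paper's arguments:

```python
import numpy as np

def act(a, x):
    """Projective action a . x = ax / |ax| of an allowable matrix on S_>=."""
    y = a @ x
    return y / np.linalg.norm(y)

def iota(a, n_grid=10_000):
    """Grid approximation of iota(a) = inf_{x in S_>=} |ax|, for d = 2."""
    t = np.linspace(0.0, np.pi / 2, n_grid)
    X = np.stack([np.cos(t), np.sin(t)])     # grid on S_>= (quarter circle)
    return float(np.min(np.linalg.norm(a @ X, axis=0)))
```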

A martingale One additional piece of information is needed to formulate our main result in full detail. If \(\mu \) is any measure on nonnegative matrices such that \([{\mathrm {supp}}\, \mu ]\) satisfies \((C)\), and \({\mathcal {L}}\left( \mathbf {M} \right) =\mu \), then the following operator on the set \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \right) \) of continuous functions on \({\mathbb {S}}_\ge \) is well defined for any \(s \in I_\mu \):

$$\begin{aligned} P^{s} : {\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \right) \rightarrow {\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \right) , \quad P^{s} f(x) = \mathbb {E}\left| {\mathbf {M}x} \right| ^s f(\mathbf {M}\cdot x). \end{aligned}$$
(1.13)

Its adjoint operator \((P^{s})'\) is a mapping on the set of bounded measures on \({\mathbb {S}}_\ge \). Defining \(\tilde{P^{s}} \nu := [(P^{s})' \nu ({\mathbb {S}}_\ge )]^{-1} (P^{s})' \nu \), it induces a continuous self-map of \({\mathcal {P}}_{}({\mathbb {S}}_\ge )\). By the Schauder–Tychonoff theorem \(\tilde{P^{s}}\) has a fixed point \(\nu ^{s}\), say, which is in turn an eigenmeasure of \((P^{s})' \). Denote its eigenvalue by \(k(s)\). It is shown in Buraczewski et al. [17, Proposition 3.1] that \(\nu ^{s}\) is unique up to scaling and that \(k(s)=(\mathbb {E}N)^{-1}m(s)\), i.e.

$$\begin{aligned} (P^{\alpha })' \nu ^{\alpha } = \frac{1}{\mathbb {E}N} \nu ^{\alpha }. \end{aligned}$$

This property, together with the definition of \(\mu \) in (1.11) is fundamental for showing that

$$\begin{aligned} W_n(u) := \sum _{\left| {v} \right| =n} \int _{{\mathbb {S}}_\ge } \, \left\langle \mathbf{L}(v)^\top u, y \right\rangle ^\alpha \, \nu ^{\alpha }(dy) \end{aligned}$$
(1.14)

defines a martingale w.r.t. the filtration \(({\mathfrak {B}}_n)_{n \in {\mathbb {N}}_0}\). It is nonnegative and thus has an almost sure limit \(W(u)\) for every \(u \in {\mathbb {S}}_\ge \).
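The eigenmeasure \(\nu ^{s}\) and the function \(u \mapsto \int \left\langle u,y \right\rangle ^s \nu ^{s}(dy)\) (denoted \(H^s\) in Sect. 3) can be approximated numerically. The following sketch, for \(d=2\), discretizes \({\mathbb {S}}_\ge \) by a grid and replaces \(\mu \) by an empirical sample; it is a naive numerical stand-in for the Schauder–Tychonoff argument, not the paper's construction.

```python
import numpy as np

def eigen_pair(s, Ms, n_grid=200, n_iter=100):
    """Power iteration for the adjoint (P^s)' on a grid over S_>= (d = 2).
    `Ms` is a list of sampled matrices standing in for the law mu. Returns
    grid weights approximating nu^s and the eigenvalue k(s) = m(s)/E[N]."""
    ang = np.linspace(0.0, np.pi / 2, n_grid)
    X = np.stack([np.cos(ang), np.sin(ang)])        # grid points on S_>=
    nu = np.full(n_grid, 1.0 / n_grid)
    for _ in range(n_iter):
        new = np.zeros(n_grid)
        for M in Ms:
            Y = M @ X                               # M x for each grid point x
            r = np.linalg.norm(Y, axis=0)           # |M x|
            j = np.rint(np.arctan2(Y[1], Y[0])
                        / (np.pi / 2) * (n_grid - 1)).astype(int)
            np.add.at(new, j, r ** s * nu / len(Ms))  # mass |Mx|^s nu({x}) at M.x
        k, nu = new.sum(), new / new.sum()
    return nu, k

def H(s, x, nu):
    """H^s(x) = int <x, y>^s nu^s(dy) on the same grid; by this formula H^s
    is automatically the s-homogeneous extension to R^d_>= (cf. Sect. 3)."""
    ang = np.linspace(0.0, np.pi / 2, len(nu))
    Y = np.stack([np.cos(ang), np.sin(ang)])
    return float(((np.asarray(x, float) @ Y) ** s * nu).sum())
```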

Statement of the main result

Theorem 1.2

Assume (A0)–(A7) and \(m'(\alpha ) <0\).

  1. (1)

    Then \(W(u)\) is positive a.s. with \(0 < \mathbb {E}W(u) < \infty \) for each \(u \in {\mathbb {S}}_\ge \). A random vector \(X \in {{\mathbb {R}}^d_\ge }\) is a solution of (1.1) with \(Q \equiv 0\), if and only if its Laplace transform satisfies

    $$\begin{aligned} \mathbb {E}\exp (-r \left\langle u,X \right\rangle ) = \mathbb {E}\exp (-K r^\alpha W(u)) \end{aligned}$$
    (1.15)

    for some \(K \ge 0\) and all \(r \in {\mathbb {R}}_\ge \), \(u \in {\mathbb {S}}_\ge \).

  2. (2)

    Assume in addition (A8) and \(\alpha <1\). Then \(W^*\) is a.s. finite, and a random vector \(X \in {{\mathbb {R}}^d_\ge }\) is a solution of (1.1) if and only if its Laplace transform satisfies

    $$\begin{aligned} \mathbb {E}\exp (-r \left\langle u,X \right\rangle ) = \mathbb {E}\exp \left( -K r^\alpha W(u) - r \left\langle u,W^* \right\rangle \right) \end{aligned}$$
    (1.16)

    for some \(K \ge 0\) and all \(r \in {\mathbb {R}}_\ge \), \(u \in {\mathbb {S}}_\ge \).

If (A0)–(A3), (A5) and (A6a) hold and \(m'(\alpha )=0\) then Eq. (1.1) with \(Q \equiv 0\) has a nontrivial fixed point.

Remark 1.3

In terms of random variables, Eq. (1.16) states that if \(X\) is a solution of (1.1), then for all \(u \in {\mathbb {S}}_\ge \),

$$\begin{aligned} \left\langle u,X \right\rangle \mathop {=}\limits ^{{\mathcal {L}}}\left\langle u,W^* \right\rangle + (KW(u))^{1/\alpha } Y_\alpha , \end{aligned}$$

where \(Y_\alpha \) is a one-dimensional \(\alpha \)-stable random variable with LT \(\mathbb {E}\exp (-tY_\alpha ) = \exp (-t^\alpha )\), independent of \((W(u),W^*)\).

A sufficient condition for the existence of \(\alpha \in (0,1)\) with \(m(\alpha )=1\), \(m'(\alpha ) <0\) is that the nonnegative matrix \(\mathbb {E}\mathbf {M}\) has spectral radius smaller than 1, see Buraczewski et al. [17, Lemma 4.14].

Additional properties of the fixed points, like multivariate regular variation or a representation as a mixture of multivariate \(\alpha \)-stable laws are given in Theorem 6.1.

It can be shown that if Assumption (A4) is violated, then there are more fixed points, the situation being similar to the one-dimensional arithmetic case. See Mentemeier [60, Section 18].

1.4 Application: random walk in random environment on multiplexed trees

The random walk in random environment (RWRE) on trees was first studied in detail by Lyons and Pemantle [53], where very general underlying trees were considered. Of particular interest for the present situation is the work of Menshikov and Petritis [56], where it was shown that recurrence properties of a RWRE on Galton–Watson trees are intimately connected to the existence of fixed points of the smoothing transform.

This model has been extended in Menshikov et al. [57, 58] to RWRE on multiplexed trees: Let \(d,N >1\). Equip every vertex \(v \in \bigcup _{n=0}^\infty \{1, \ldots , N\}^n\) with \(d\) different levels \(\{1, \ldots , d\}\). To every edge connecting the vertex \(vi\) with its ancestor \(v\) belongs a \(d \times d\)-matrix \(\mathbf {T}(v)_i\), the \(a\)-th column of which consists of the transition probabilities for going from \(v\) to \(vi\) and changing from level \(a\) to level \(b\), divided by the transition probability of going from \(vi\) to \(v\) and returning to level \(a\). Due to the (i.i.d.) random environment, the \(\mathbf {T}_i\) become random matrices with nonnegative entries. It is assumed in Menshikov et al. [58] that the law of \(\mathbf {T}_i\) satisfies condition \((C)\), and it is proved there [58, Section 2] that positive recurrence holds in this model if and only if

$$\begin{aligned} \left\langle \mathbf {1},Y \right\rangle :=\left\langle \mathbf {1}, \sum _{k=0}^\infty \sum _{\left| {v} \right| =k} \mathbf{L}(v) \mathbf {1} \right\rangle < \infty , \end{aligned}$$

where \(\mathbf {1}=(1, \ldots , 1)^\top \) is the vector with all entries equal to 1, and \(Y\) satisfies the inhomogeneous fixed point equation

$$\begin{aligned} Y \mathop {=}\limits ^{{\mathcal {L}}}\sum _{i=1}^N \mathbf {T}_i Y_i + \mathbf {1}. \end{aligned}$$

2 Organization of the paper and outline of the proof

Recall that in the one-dimensional case, fixed points can be characterized by the behavior of their Laplace transform at 0, see Eq. (1.6). Therefore, a major point will be to understand the behavior of

$$\begin{aligned} \lim _{r \rightarrow 0} \frac{1-\psi (ru)}{r^\alpha L(r)}, \end{aligned}$$
(2.1)

with \(\psi \) being the Laplace transform of a fixed point and \(L\) a slowly varying function. Observe that this limit, if it exists at all, may also depend on \(u \in {\mathbb {S}}_\ge \). Existence and uniqueness of this limit will turn out to be the most involved questions.

To give structure to the subsequent discussion, we introduce, following Iksanov [34], two special classes of Laplace transforms. As before, write \(\mathbf {1}=(1, \ldots , 1)^\top \) for the vector with all entries equal to 1.

Definition 2.1

Let \(\alpha \in (0,1]\) and \(L\) be a positive function which is slowly varying at 0. We call a Laplace transform \(\psi \) of a probability measure on \({{\mathbb {R}}^d_\ge }\) \(L\)-\(\alpha \)-elementary, if it satisfies

$$\begin{aligned} \lim _{r \rightarrow 0} \frac{1- \psi (r\mathbf {1})}{L(r) r^\alpha } \in (0, \infty ). \end{aligned}$$

It is called \(L\)-\(\alpha \)-regular, if it satisfies

$$\begin{aligned} 0 < \liminf _{r \rightarrow 0} \frac{1 - \psi (r\mathbf {1})}{L(r)r^\alpha } \le \limsup _{r \rightarrow 0} \frac{1-\psi (r\mathbf {1})}{L(r)r^\alpha } < \infty . \end{aligned}$$

If \(L \equiv 1\), we say that \(\psi \) is \(\alpha \)-elementary (\(\alpha \)-regular).

Observe that for fixed points \(X\) with finite expectation, the limit in Eq. (2.1) exists with \(L \equiv 1\) and equals \(\left\langle u, \mathbb {E}X \right\rangle \), i.e. such fixed points are in particular 1-elementary.

The organization of the paper is as follows: in Sect. 3, we study implications of assumptions (A3) and (A4), giving additional information on \(P^{s}\) and introducing a Markov random walk associated with the measure \(\mu \). It then appears in a many-to-one identity in Sect. 4, which contains detailed information about the weighted branching model given by \((\mathbf{L}(v))_{v \in \mathbb {V}}\), inter alia the mean convergence of \(W_n\). A remarkable result is Theorem 4.10, which is in the spirit of results for general branching processes as in Nerman [64], Jagers [37], applying for the first time the renewal theorem of Kesten [44] in this context.

With these prerequisites at hand, we turn to the proof of Theorem 1.2, which will be done in several steps, each of them given in a self-contained section. The first step is to prove that each Laplace transform of the form (1.15) resp. (1.16) is indeed a fixed point, which is done in Sect. 5.3. Subsequently, some properties of these fixed points, including a representation as mixtures of multivariate stable laws, are proved in Sect. 6. Having proved the existence of fixed points, we are going to show that all fixed points of \({\mathcal {S}}_0\) are of the form (1.15) (this will imply the analogous result for \({\mathcal {S}}_Q\), see Sect. 11). In order to do so, we prove in Sect. 7 that all fixed points of \({\mathcal {S}}_0\) are \(\alpha \)-regular. This is done by generalizing the approach of Alsmeyer et al. [2] to the multidimensional situation.

At the heart of our proof lies Theorem 8.2, which shows that an \(\alpha \)-regular fixed point is already \(\alpha \)-elementary. Its proof is given in Sects. 9 and 10. The basic idea is to use the Arzelà–Ascoli theorem to infer the existence of subsequential limits of \(r^{-\alpha }(1-\psi (ru))\), where the proof of equicontinuity poses the most problems. This issue is solved by observing that \(\alpha \)-regularity implies that \(u \mapsto r^{-\alpha }(1-\psi (ru))\) is \(\alpha \)-Hölder continuous for each \(r\), see Sect. 9. Then one has to show that all subsequential limits coincide; this is done in Sect. 10 by identifying the limits as bounded harmonic functions for the associated Markov random walk, which are then constant due to a Choquet–Deny type result. Having proved that all fixed points of \({\mathcal {S}}_0\) are \(\alpha \)-elementary, we identify such fixed points to be of the form (1.15) in Sect. 8.

In Sect. 11, we use the results proved for \({\mathcal {S}}_0\) to deduce the uniqueness of fixed points of \({\mathcal {S}}_Q\). The final assertion of Theorem 1.2 about the critical case \(m'(\alpha )=0\) is proved in Sect. 12. The final Sects. 13 and 14 contain proofs for the results stated in Sect. 4. Some helpful inequalities for multivariate Laplace transforms are given in the “Appendix” (Sect. 15).

3 Implications of condition \((C)\) and a change of measure

In this section, we collect important implications of our geometrical assumptions and define an associated Markov random walk, which constitutes the multivariate analogue of the random walk associated with a branching random walk.

Let \(\mathbf {M}\) as before be a random matrix with law \(\mu \), satisfying condition \((C)\). Besides the operator \(P^{s}\) introduced in (1.13), consider also the operator

$$\begin{aligned} P_*^{s} : {\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \right) \rightarrow {\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \right) , \quad P_*^{s} f(x) = \mathbb {E}\left| {\mathbf {M}^\top x} \right| ^s f(\mathbf {M}^\top \cdot x). \end{aligned}$$
(3.1)

Initiated by Kesten [43], a detailed study of these operators under condition \((C)\) can be found in Buraczewski et al. [17], based on the fundamental works of Guivarc’h and Le Page [31, 32] for invertible matrices. We cite, for the reader’s convenience, the following most important properties:

Proposition 3.1

Assume that \([{\mathrm {supp}}\, \mu ]\) satisfies \((C)\) and let \(s \in I_{\mu }\). Then the spectral radii of \(P^{s}\) and \(P_*^{s}\) are both equal to \(k(s)=(\mathbb {E}N)^{-1} m(s)\), and there is a unique probability measure \(\nu ^{s}\) satisfying \((P^{s})' \nu ^{s} = k(s) \nu ^{s}\) and an (up to scaling) unique eigenfunction \(H^s\) satisfying \(P_*^{s} H^s = k(s) H^s\).

The function \(H^s\) is strictly positive on \({\mathbb {S}}_\ge \), \(\min \{s,1\}\)-Hölder-continuous and given by

$$\begin{aligned} H^s(u) = \int _{{\mathbb {S}}_\ge } \, \left\langle u,y \right\rangle ^s \, \nu ^{s}(dy). \end{aligned}$$
(3.2)

Moreover, if there is a nonnegative, nonzero continuous function \(f\), satisfying \(P_*^{s} f=\lambda f\) for some \(\lambda >0\), then \(\lambda =k(s)\), \(f=cH^s\) for some \(c >0\).

Source

[17, Proposition 3.1]. \(\square \)

Using formula (3.2), we see that \(H^s\) extends to an \(s\)-homogeneous function on \({{\mathbb {R}}^d_\ge }\), i.e. \(H^s(x)=\left| {x} \right| ^s H^s(\left| {x} \right| ^{-1} x)\). In particular, the identity \(P_*^{s} H^s = k(s) H^s\) becomes

$$\begin{aligned} H^s(x) = \frac{\mathbb {E}N}{m(s)} \mathbb {E}_{} \left( { H^s(\mathbf {M}^\top x)} \right) = \frac{1}{m(s)} \mathbb {E}_{} \left( {\sum _{i =1}^N H^s(\mathbf {T}_i^\top x)} \right) \end{aligned}$$
(3.3)

This allows us to introduce, for any \(s \in I_{\mu }\) and \(u \in {\mathbb {S}}_\ge \), an \(s\)-shifted probability measure \(\mathbb {P}_u^s\) on \({\mathbb {S}}_\ge \times {\mathcal {M}}_\ge ^{\mathbb {N}}\) by setting

$$\begin{aligned} \mathbb {E}_u^s \left[ f(U_0, \mathbf {M}_1, \ldots , \mathbf {M}_n) \right] := \frac{(\mathbb {E}N)^n }{m(s)^n H^s(u)} \mathbb {E}_{} \left( { H^s(\mathbf {M}_n^\top \cdots \mathbf {M}_1^\top u) f(u, \mathbf {M}_1, \ldots , \mathbf {M}_n) } \right) \end{aligned}$$
(3.4)

for all \(n \in {\mathbb {N}}\) and any bounded measurable function \(f : {\mathbb {S}}_\ge \times {\mathcal {M}}_\ge ^n \rightarrow {\mathbb {R}}\). See Buraczewski et al. [17, Section 3.1] for details. Note that \(H^0 \equiv 1\), hence \((\mathbf {M}_n)_{n \in {\mathbb {N}}}\) are i.i.d. with law \(\mu \) under \(\mathbb {P}_u^0\) for each \(u \in {\mathbb {S}}_\ge \); and that \(\mathbb {P}_u^s(U_0=u)=1\) for all \(u \in {\mathbb {S}}_\ge \) and \(s \in I_\mu \).

3.1 The associated Markov random walk

Under \(\mathbb {P}_u^s\), define the following sequences of random variables:

$$\begin{aligned} \varvec{\Pi }_n&:= \mathbf {M}_n^\top \cdots \mathbf {M}_1^\top , \\ U_n&:= \varvec{\Pi }_n \cdot U_0 = \mathbf {M}_n^\top \cdot U_{n-1}, \\ S_n&:= - \log \left| {\varvec{\Pi }_n U_0} \right| = S_{n-1} + \left( - \log \left| {\mathbf {M}_n^\top U_{n-1}} \right| \right) \!. \end{aligned}$$

Then \((U_n, S_n)_{n \in {\mathbb {N}}_0}\) constitutes a Markov random walk under each \(\mathbb {P}_u^s\), i.e. \((U_n)_{n \in {\mathbb {N}}_0}\) is a Markov chain, conditioned under which \((S_n)\) has independent (but not identically distributed) increments. It is this Markov random walk that will take the rôle of the associated random walk appearing for instance in Durrett and Liggett [27] or Liu [50].
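A direct simulation of \((U_n, S_n)_{n \in {\mathbb {N}}_0}\) under \(\mathbb {P}_u^0\) is immediate from these recursions; sampling under \(\mathbb {P}_u^s\) for \(s \ne 0\) would additionally require the change of measure (3.4). A sketch, with `sample_M` any sampler for \(\mu \):

```python
import numpy as np

def mrw_path(u0, sample_M, n=1_000):
    """Simulate the Markov random walk (U_n, S_n) under the unshifted
    measure, i.e. with (M_n) i.i.d. of law mu."""
    U = np.asarray(u0, dtype=float)
    S, path = 0.0, [(U.copy(), 0.0)]
    for _ in range(n):
        y = sample_M().T @ U                  # M_n^T applied to U_{n-1}
        r = np.linalg.norm(y)
        U, S = y / r, S - np.log(r)           # U_n = M_n^T . U_{n-1}; S_n
        path.append((U.copy(), S))
    return path
```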

From (3.4), one obtains the following comparison formula:

$$\begin{aligned} \mathbb {E}_u^s \left[ f((U_k, S_k)_{k \le n}) \right] = \frac{1}{k(s)^n H^s(u)} \mathbb {E}_u^0 \left[ { H^s(e^{-S_n} U_n) \, f((U_k, S_k)_{k \le n}) } \right] \end{aligned}$$
(3.5)

Assumptions (A3)–(A6), which are in fact assumptions on \(\mu \), guarantee that \(S_n\) satisfies a SLLN (see Buraczewski et al. [17, Theorem 6.1]), that \((U_n, S_n)\) satisfies the assumptions of Kesten’s renewal theorem [see (ibid., Proposition 7.2)] and a Choquet–Deny type lemma (see Mentemeier [59, Theorem 2.2]). We will make use of all of these results.

Since this is the particular instance where (A6) enters, we state here the SLLN for \(S_n\).

Proposition 3.2

Assume (A0)–(A3), (A5) and (A6). Then

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{S_n}{n} = - {m^{\prime }(\alpha )} \quad \mathbb {P}_{u}^{\alpha }\text {-a.s.} \end{aligned}$$
(3.6)

Source

[17, Theorem 6.1]. \(\square \)

4 The weighted branching model with matrices

In this section, we give exhaustive information about the behavior of the family of branch weights \((\mathbf{L}(v))_{v \in \mathbb {V}}\), which can also be seen as a branching random walk on the semigroup of allowable matrices. By studying its action on particular vectors \(u \in {\mathbb {S}}_\ge \) (which includes the standard basis vectors), we obtain detailed information about its asymptotic behavior. In particular, Theorem 4.10 might be of interest in its own right. Some of the results presented here require quite involved proofs, which we postpone to Sects. 13 and 14.

We start with some additional notation concerned with \((T(v))_{v \in \mathbb {V}}\). Here and below, \((\Omega , {\mathfrak {B}}, \mathbb {P})\) denotes a generic probability space. If no specific \(\sigma \)-field is mentioned, measurability is always understood with respect to the Borel-\(\sigma \)-field.

4.1 The weighted branching model with matrices

Recall that \(\mathbb {V}= \bigcup _{n=0}^\infty {\mathbb {N}}^n \) denotes the infinite Ulam–Harris tree with root \(\emptyset \), that \(\left| {v} \right| \) denotes the generation of the node \(v\), that \((T(v))_{v \in \mathbb {V}}\) is a family of i.i.d. copies of \(T\) with corresponding filtration \({\mathfrak {B}}_n := \sigma \left( T(v), \left| {v} \right| < n\right) \) and that we defined recursively \( \mathbf{L}(vi)=\mathbf{L}(v) \mathbf {T}_i(v)\), with \(\mathbf{L}(\emptyset )=\mathbf{Id}\).

Furthermore, introduce shift operators \(\left[ \cdot \right] _{v}\), \(v \in \mathbb {V}\): Given any function \(F=f({\mathcal {T}})\) of the weight family \({\mathcal {T}}=(T(w))_{w \in \mathbb {V}}\) pertaining to \(\mathbb {V}\), define

$$\begin{aligned} \left[ F \right] _{v} := f\left( (T(vw))_{w \in \mathbb {V}}\right) \end{aligned}$$
(4.1)

as the same function evaluated at the weight family \((T(vw))_{w \in \mathbb {V}}\) pertaining to the subtree rooted at \(v\). Note that the family \((T(vw))_{w \in \mathbb {V}}\) has the same distribution as \({\mathcal {T}}\) and is independent of \({\mathfrak {B}}_{\left| {v} \right| }\) as well as of all other weight families pertaining to subtrees rooted at the same level.

Instances of such functions are e.g. the branch weights \(\mathbf{L}(w)=\mathbf{l}(w)({\mathcal {T}})\); we obtain

$$\begin{aligned} \left[ \mathbf{L}(w) \right] _{v}= \mathbf {T}_{w_1}(v) \mathbf {T}_{w_2}(vw_1) \ldots \mathbf {T}_{w_m}(vw_1 \ldots w_{m-1}) \end{aligned}$$

if \(w=w_1 \ldots w_m\), and in particular

$$\begin{aligned} \mathbf{L}(vw)=\mathbf{L}(v)\left[ \mathbf{L}(w) \right] _{v} \end{aligned}$$

for any \(v, w \in \mathbb {V}\).

Define \( R_n :=\max _{\left| {v} \right| =n} \left\| {\mathbf{L}(v)} \right\| .\) Then we have the following result, which in contrast to Liu [50, Lemma 7.2] needs the stronger assumption that \(m(s)<1\) for some \(s > \alpha \); this follows from Assumption (A5) combined with \(m'(\alpha )<0\).

Lemma 4.1

Assume (A0)–(A2) and (A5) and let \(m'(\alpha )<0\). Then \( \lim _{n \rightarrow \infty } R_n = 0 \quad \mathbb {P}\text {-a.s.}\)

Proof

See Sect. 13. \(\square \)

Upon fixing \(u \in {\mathbb {S}}_\ge \), define for each \(v \in \mathbb {V}\) the induced random variables

$$\begin{aligned} U^u(v):= \mathbf{L}(v)^\top \cdot u , \quad S^u(v) := - \log \left| {\mathbf{L}(v)^\top u} \right| . \end{aligned}$$

Note that \(S^u(v), U^u(v)\) are measurable w.r.t. \({\mathfrak {B}}_{\left| {v} \right| }\) and that \(\mathbf{L}(v)^\top u = e^{-S^u(v)} U^u(v)\). Let \(v \in {\mathbb {N}}^{\mathbb {N}}\) be an infinite branch. On the set \(\{\mathbf{L}(v|k) \ne 0 \, \forall \, k \in {\mathbb {N}}_0\}\), the sequence \((U^u(v|k))_{k \in {\mathbb {N}}_0}\) constitutes an (inhomogeneous) Markov chain with state space \({\mathbb {S}}_\ge \), conditioned on which the increments of \((S^u(v|k))_{k \in {\mathbb {N}}_0}\) are independent. This is why we call \((U^u(v), S^u(v))_{v \in \mathbb {V}}\) a branching Markov random walk. By Lemma 4.1, we have that \(\lim _{\left| {v} \right| \rightarrow \infty } S^u(v) = \infty \) a.s.

4.2 A many-to-one identity and a martingale

Lemma 4.2

Assume (A0)–(A3) and (A5). Then, with \(\mathbb {P}_u^\alpha \) as in (3.4), it holds for all \(u \in {\mathbb {S}}_\ge \), all \(n \in {\mathbb {N}}\) and any measurable function \(f : {\mathbb {S}}_\ge \times {\mathcal {M}}_\ge ^n \rightarrow {\mathbb {R}}_\ge \), that

$$\begin{aligned} \mathbb {E}_u^\alpha \left( f(U_0, \mathbf {M}_1, \ldots , \mathbf {M}_n) \right) = \frac{1}{H^\alpha (u)} \mathbb {E}_{} \left( { \sum _{\left| {v} \right| =n} H^\alpha (\mathbf{L}(v)^\top u) \ f(u, \mathbf{L}(v|1), \ldots , \mathbf{L}(v)) } \right) \end{aligned}$$
(4.2)

Proof

By combining the definition of \(\mu \), (1.11), with the definition (3.4) of the \(\alpha \)-shifted measure. \(\square \)

For the branching Markov random walk, we immediately infer the following:

Corollary 4.3

Assume (A0)–(A3) and (A5). Then for all \(u \in {\mathbb {S}}_\ge \), \(n \in {\mathbb {N}}\) and any nonnegative measurable function \(f : ({\mathbb {S}}_\ge \times {\mathbb {R}})^{n+1} \rightarrow {\mathbb {R}}_\ge \),

$$\begin{aligned}&\frac{1}{H^\alpha (u)} \, \mathbb {E}_{} \left( {\sum _{\left| {v} \right| =n} e^{-\alpha S^u(v)} H^\alpha (U^u(v)) \, f\Bigl ((U^u(v|k), S^u(v|k))_{k \le n} \Bigr )} \right) \end{aligned}$$
(4.3)
$$\begin{aligned}&\qquad = \mathbb {E}_u^\alpha \Bigl ( f\bigl ((U_k, S_k)_{k \le n}\bigr ) \Bigr )\!. \end{aligned}$$
(4.4)

Recalling the definitions (3.2) of \(H^\alpha (u)\) and (1.14) of \(W_n(u)\), we obtain that

$$\begin{aligned} W_n(u) = \sum _{\left| {v} \right| =n} H^\alpha (\mathbf{L}(v)^\top u) = \sum _{\left| {v} \right| =n-1} \sum _{i \ge 1} H^\alpha (\mathbf {T}(v)_i^\top \mathbf{L}(v)^\top u), \end{aligned}$$

and consequently, using identity (3.3) for \(H^\alpha \), that

$$\begin{aligned} \mathbb {E}[ W_n(u) \, | \, {\mathfrak {B}}_{n-1}] = \sum _{\left| {v} \right| =n-1} \mathbb {E}\left[ \left. \sum _{i \ge 1} H^\alpha (\mathbf {T}_i^\top \mathbf{L}(v)^\top u) \, \right| {\mathfrak {B}}_{n-1} \right] = W_{n-1}(u) \quad \mathbb {P}\text {-a.s.}, \end{aligned}$$

i.e. \(W_n(u)\) is a nonnegative \(\mathbb {P}\)-martingale for each \(u \in {\mathbb {S}}_\ge \), and thus has an a.s. limit \(W(u)\).
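For a numerical check, one realization of \(W_n(u)\) can be generated by expanding the weighted tree, reusing the hypothetical samplers and the grid version of \(H^\alpha \) from the earlier sketches; the cost is exponential in \(n\), so this is an illustration only.

```python
import numpy as np

def W_n_sample(u, n, sample_weights, H_alpha):
    """One realization of W_n(u) = sum_{|v|=n} H^alpha(L(v)^T u), cf. (1.14).
    `sample_weights` draws (Q, (T_i)_{i<=N}); `H_alpha` evaluates the
    alpha-homogeneous extension of H^alpha."""
    u = np.asarray(u, dtype=float)
    leaves = [np.eye(len(u))]                 # branch weights L(v), |v| = 0
    for _ in range(n):
        nxt = []
        for L in leaves:
            _, Ts = sample_weights(len(u))    # fresh weight family T(v)
            nxt.extend(L @ T for T in Ts)     # L(vi) = L(v) T_i(v), cf. (1.8)
        leaves = nxt
    return sum(H_alpha(L.T @ u) for L in leaves)
```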

Mean convergence of this martingale (the intrinsic martingale in multitype branching random walk) is studied in [10] (see also Athreya [6], Jagers [37], Kyprianou and Rahimzadeh Sani [48], Olofsson [66]). We obtain the following result:

Proposition 4.4

Assume that (A0)–(A3), (A5) and (A6) hold, and that \(m'(\alpha ) <0\). Here, (A5) may be relaxed to allow \(\alpha \in (0,\infty )\). Then for all \(u \in {\mathbb {S}}_\ge \), \(W_n(u)\) converges in mean to \(W(u)\), i.e.

$$\begin{aligned} \mathbb {E}W(u)=W_0(u)=\int _{{\mathbb {S}}_\ge } \left\langle u,y \right\rangle ^\alpha \nu ^{\alpha }(dy) = H^\alpha (u) >0. \end{aligned}$$
(4.5)

Furthermore, it holds that \(W(u)=w(u, (\mathbf{L}(v))_{v \in \mathbb {V}})\) \(\mathbb {P}\text {-a.s.}\) for a measurable function \(w : {\mathbb {S}}_\ge \times {\mathcal {M}}_\ge ^\mathbb {V}\rightarrow [0,\infty )\).

Proof

By combining the SLLN (Proposition 3.2) with [10, Theorem 1.1 (i)]; see Sect. 13 for details. The positivity of \(H^\alpha \) is an assertion of Proposition 3.1. \(\square \)

Applying Lemma 4.1, we obtain the following very useful Corollary:

Corollary 4.5

Let the assumptions of Proposition 4.4 hold. If a function \(F : {{\mathbb {R}}^d_\ge }\rightarrow {\mathbb {R}}_\ge \) satisfies \(\lim _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \left| {F(ru) - \gamma } \right| =0\) for some \(\gamma \ge 0\), then

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} H^\alpha (\mathbf{L}(v)^\top u) F(\mathbf{L}(v)^\top u) = \gamma \, W(u) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$
(4.6)

4.3 Stopping lines and the martingale

Let \(\tau = \tau ((\mathbf{m}_k)_{k \in {\mathbb {N}}})\) be a stopping time for a sequence of matrices, of the form

$$\begin{aligned} \tau ((\mathbf{m}_k)_{k \in {\mathbb {N}}}) = \inf \left\{ n \ge 0 \ : \ (\mathbf{m}_k)_{k=1}^n \in A_n \right\} \end{aligned}$$

for some sets \(A_n\). This gives rise to the homogeneous stopping line (HSL) \({\mathcal {I}}_{\tau }\) for a matrix branching process by the definition

$$\begin{aligned} {\mathcal {I}}_{\tau } := \left\{ \, v|\tau \bigl ( \bigl (\mathbf {T}_{v_k}^\top (v|k-1)\bigr )_{k \in {\mathbb {N}}}\bigr ) \ : \ v \in \{1, \dots , N\}^{\mathbb {N}}\right\} \end{aligned}$$
(4.7)

The pre-\({\mathcal {I}}_{}\) \(\sigma \)-algebra \({\mathfrak {B}}_{{\mathcal {I}}_{}}\) associated with the stopping line \({\mathcal {I}}_{}\) is defined as

$$\begin{aligned} {\mathfrak {B}}_{{\mathcal {I}}_{}} := \sigma \left( (T(v))_{ v \text { has no ancestor in } {\mathcal {I}}_{}} \right) \!. \end{aligned}$$

An HSL \({\mathcal {I}}_{}\) is called anticipating, if \(\{v \in {\mathcal {I}}_{}\} \in {\mathfrak {B}}_{{\mathcal {I}}_{}}\) for all \(v \in \mathbb {V}\). It is called a.s. dissecting, if \(\max \{ \left| {v} \right| \ : \ v \in {\mathcal {I}}_{}\}\) is finite a.s. (see Alsmeyer et al. [2, Section 7]).

The many-to-one identity remains valid under the application of a dissecting HSL:

Lemma 4.6

Assume (A0)–(A3) and (A5). Then for all \(u \in {\mathbb {S}}_\ge \), any a.s. dissecting HSL defined as above, any bounded measurable \(f\),

$$\begin{aligned} \frac{1}{H^\alpha (u)} \mathbb {E}\left( \sum _{v \in {\mathcal {I}}_{\tau }} H^\alpha (\mathbf{L}(v)^\top u) f(\mathbf{L}(v|1), \ldots , \mathbf{L}(v))\, \right) = \mathbb {E}_u^\alpha f(\mathbf {M}_1, \ldots , \mathbf {M}_{\tau }). \end{aligned}$$
(4.8)

Proof

This follows by summing the many-to-one identity (4.2) over the sets \(\{\tau =n \}\), \(n \in {\mathbb {N}}_0\). \(\square \)

Subsequently, we will focus on a particular class of HSL, namely

$$\begin{aligned} {\mathcal {I}}_{t}^u := \{ v \in \mathbb {V}\ : S^u(v)> t, \ S^u(v|k) \le t \ \quad \forall k < \left| {v} \right| \}, \end{aligned}$$

for arbitrary \(t \in {\mathbb {R}}_>\) and \(u \in {\mathbb {S}}_\ge \). Note that these stopping lines are dissecting by Lemma 4.1, and anticipating as well, since \(S^u(v)\) only depends on the initial state \(u\) and \((T(v|k))_{k < \left| {v} \right| }\). Moreover, \(({\mathfrak {B}}_{{\mathcal {I}}_{t}^u})_{t \ge 0}\) is a filtration with

$$\begin{aligned} {\mathfrak {B}}_\infty = \lim _{t \rightarrow \infty } \sigma ( ({\mathfrak {B}}_{{\mathcal {I}}_{s}^u}, s \le t)) = \sigma ((T(v), v \in \mathbb {V})), \end{aligned}$$

see the proof of Alsmeyer et al. [2, Lemma 8.7] for details.
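Computationally, \({\mathcal {I}}_{t}^u\) is obtained by a depth-first exploration that stops a branch as soon as \(S^u(v) > t\). A sketch reusing the hypothetical `sample_weights` from Sect. 1; termination of the exploration rests on Lemma 4.1:

```python
import numpy as np

def stopping_line(t, u, sample_weights):
    """Branch weights L(v), v in I_t^u: explore the tree depth-first and
    stop a branch once S^u(v) = -log|L(v)^T u| exceeds t."""
    u = np.asarray(u, dtype=float)
    line, stack = [], [np.eye(len(u))]
    while stack:
        L = stack.pop()
        if -np.log(np.linalg.norm(L.T @ u)) > t:
            line.append(L)                    # v belongs to I_t^u
        else:
            _, Ts = sample_weights(len(u))
            stack.extend(L @ T for T in Ts)   # children L(vi) = L(v) T_i(v)
    return line
```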

The first part of the following lemma is then a direct consequence of Lemma 4.6, applied with \(f \equiv 1\).

Lemma 4.7

For each \(u \in {\mathbb {S}}_\ge \), the family (indexed by \(t \in {\mathbb {R}}_\ge \))

$$\begin{aligned} W_{{\mathcal {I}}_{t}^u}(u) := \sum _{v \in {\mathcal {I}}_{t}^u} \int _{{\mathbb {S}}_\ge } \left\langle \mathbf{L}(v)^\top u, y \right\rangle ^\alpha \, \nu ^{\alpha }(dy) = \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \end{aligned}$$

is a \(\mathbb {P}\)-martingale with respect to the filtration \({\mathfrak {B}}_{{\mathcal {I}}_{t}^u}\).

Subject to \(m'(\alpha )<0\), it holds that

$$\begin{aligned} W_{{\mathcal {I}}_{t}^u}(u) = \mathbb {E}\left( W(u) | {\mathfrak {B}}_{{\mathcal {I}}_{t}^u} \right) \quad \mathbb {P}\text {-a.s.}, \end{aligned}$$

and consequently, \(\mathbb {P}\text {-a.s.}\) and in \(L^1(\mathbb {P})\),

$$\begin{aligned} \lim _{t \rightarrow \infty } W_{{\mathcal {I}}_{t}^u}(u) = W(u). \end{aligned}$$

Here and in what follows, \(\mathbb {P}\text {-a.s.}\)-convergence for \(t \rightarrow \infty \) means that for every sequence \(t_n \rightarrow \infty \), there is a set of full measure, on which the convergence takes place. This will be enough for our purposes.

Proof of Lemma 4.7

See Sect. 13. \(\square \)

4.4 Restricted versions of \(W_n\)

If \(m'(\alpha ) <0\), then the MRW \((U_n, S_n)_{n \in {\mathbb {N}}_0}\) is transient under \(\mathbb {P}_u^\alpha \) with \(\lim _{n \rightarrow \infty } S_n = \infty \) a.s. by Proposition 3.2. Thus \(\tau _t:= \inf \{n : S_n >t\}\) is a.s. finite, and one can define a semi-Markov process by

$$\begin{aligned} U(t) := U_{\tau _t}, \quad R(t):= S_{\tau _t} - t \end{aligned}$$

for all \(t \in {\mathbb {R}}_\ge \). Noting that \(\mathbb {P}_u^\alpha \)-a.s., \(\tau _t = \inf \{ n : - \log \left| {\mathbf {M}_n^\top \ldots \mathbf {M}_1^\top u} \right| > t\}\), we see that \(\tau _t\) corresponds to the stopping line \({\mathcal {I}}_{t}^u\), and Lemma 4.6 yields the following very helpful identity:

Corollary 4.8

For all \(t \in {\mathbb {R}}_\ge \), all bounded measurable \(f\)

$$\begin{aligned} \frac{1}{H^\alpha (u)} \mathbb {E}\left( \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) f(U^u(v), S^u(v)-t)\, \right) = \mathbb {E}_u^\alpha f(U(t), R(t)). \end{aligned}$$
(4.9)

It is remarkable that the renewal theorem of Kesten [44] applies to \((U(t), R(t))\), which then gives a nice description of the asymptotic composition of the matrix branching process:

Theorem 4.9

Assume that (A0)–(A6) hold and that \(m'(\alpha ) <0\). Then there is a probability measure \(\varrho \) on \({\mathbb {S}}_\ge \times {\mathbb {R}}\), with \(\varrho (\mathrm {int}\left( {\mathbb {S}}_\ge \right) \times {\mathbb {R}})=1\), such that for all \(u \in {\mathbb {S}}_\ge \) and all \(f \in {\mathcal {C}}_b^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \),

$$\begin{aligned} \lim _{t \rightarrow \infty } \frac{1}{H^\alpha (u)} \mathbb {E}\left( \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) f(U^u(v), S^u(v)-t)\, \right) = \int f(y,s) \, \varrho (dy, ds). \end{aligned}$$

The result remains valid, if \(f\) is a nonnegative radial function, or \(f(y,s)=g(y)h(s)\) for a continuous function \(g: {\mathbb {S}}_\ge \rightarrow {\mathbb {R}}_\ge \) and a bounded measurable function \(h: {\mathbb {R}}\rightarrow {\mathbb {R}}_\ge \).

Proof

It is proved in Buraczewski et al. [17, Proposition 7.2] that the Markov random walk \((U_n, S_n)_{n \ge 0}\) under the measure \(\mathbb {P}_u^\alpha \), defined in terms of a measure \(\mu \) which satisfies \((C)\),

$$\begin{aligned} \int \left\| {\mathbf{a}^\top } \right\| ^\alpha \left( \left| {\log \left\| {\mathbf{a}^\top } \right\| } \right| + \left| {\log \iota (\mathbf{a}^\top )} \right| \right) \, \mu (d\mathbf{a}) < \infty \end{aligned}$$

and the aperiodicity assumption (A4), fulfills the conditions I.1–I.4 of Kesten [44, page 359]. Assumptions (A4)–(A6) warrant these properties for our measure \(\mu \). Thus, after an application of the many-to-one identity (4.9), we can use Kesten [44, Theorem 1.1], which gives

$$\begin{aligned} \lim _{t \rightarrow \infty } \mathbb {E}_u^\alpha f(U(t), R(t)) = \int f(y,s) \, \varrho (dy,ds) \end{aligned}$$

for a probability measure \(\varrho \) on \({\mathbb {S}}_\ge \times {\mathbb {R}}\), to infer the asserted convergence.

The convergence result can be rephrased as \( \mathbb {P}_u^\alpha (U(t), R(t) \in \cdot ) \rightarrow \varrho \) weakly. The measure \(\varrho \) is sometimes referred to as the stationary Markov delay distribution. It follows from the expression for \(\varrho \) given in Kesten [44, Theorem 1.1] that \(\varrho ({\mathbb {S}}_\ge \times \cdot )\) is absolutely continuous, while \(\varrho (\cdot \times {\mathbb {R}})\) may have atoms. Thus the weak convergence implies convergence of

$$\begin{aligned} \mathbb {E}_u^\alpha f(R(t)) \rightarrow \int f(s) \varrho ({\mathbb {S}}_\ge \times ds) \end{aligned}$$

for all bounded measurable radial functions, as well as for functions \(f(u,s) =g(u) h(s)\), where \(g: {\mathbb {S}}_\ge \rightarrow {\mathbb {R}}\) is bounded continuous, and \(h: {\mathbb {R}}\rightarrow {\mathbb {R}}\) is bounded measurable.

The weak convergence implies in particular that \(\varrho \) is a stationary measure for \((U(t), R(t))\). By part (2) of \((C)\), the stopping time \(T := \inf \{ n \in {\mathbb {N}} \ : \ \varvec{\Pi }_n \text { is positive}\}\) is finite \(\mathbb {P}_u^\alpha \)-a.s., and \(\varvec{\Pi }_{T+k}\) is positive for all \(k \ge 0\). This implies that \(U_{T+k} \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \) for any initial vector \(U_0=u\) and all \(k \ge 0\), hence \(\varrho \), as a stationary measure, satisfies \(\varrho (\mathrm {int}\left( {\mathbb {S}}_\ge \right) \times {\mathbb {R}})=1\). \(\square \)

The main result of this section is that the above convergence in mean also holds in probability: define for a nonnegative measurable function \(f\),

$$\begin{aligned} W^f_{{\mathcal {I}}_{t}^u}(u) := \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) f(U^u(v),S^u(v)-t). \end{aligned}$$

Such random variables are particular cases of so-called \(\chi \)-counted populations, appearing in the study of general branching processes, see Jagers [36, 37], Nerman [64], Olofsson [66]. Our approach uses ideas from Jagers [37], Cohn and Jagers [24], but takes advantage of the renewal theorem, using it as an ergodic theorem for \((U(t),R(t))\) rather than using the potential of \(S_n\) as in the previous works. We are going to prove the following result:

Theorem 4.10

Under the assumptions of Theorem 4.9, it holds for all \(u \in {\mathbb {S}}_\ge \) and all \(f \in {\mathcal {C}}_b^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \), that

$$\begin{aligned} \lim _{t \rightarrow \infty } W_{{\mathcal {I}}_{t}^u}^f(u) = \left( \int f(y,s) \, \varrho (dy, ds) \right) W(u) \end{aligned}$$

in \(\mathbb {P}\)-probability and in \(L^1(\mathbb {P})\). The result remains valid, if \(f\) is a nonnegative radial function, or \(f(y,s)=g(y)h(s)\) for a continuous function \(g: {\mathbb {S}}_\ge \rightarrow {\mathbb {R}}_\ge \) and a bounded measurable function \(h: {\mathbb {R}}\rightarrow {\mathbb {R}}_\ge \).

Proof

The proof consists of several steps and is given in Sect. 14. \(\square \)

5 Existence of fixed points

In this section, we prove the existence part of Theorem 1.2, i.e. we show that every random variable with Laplace transform given by (1.15) resp. (1.16) is a solution of the homogeneous resp. inhomogeneous equation.

To this end, we first introduce the weighted branching process, which allows us to describe iterations of \({\mathcal {S}}_Q\) and its action on Laplace transforms. Then we will prove the following main result.

Theorem 5.1

Assume (A0)–(A3), (A5)–(A6) and \(m'(\alpha ) <0\). Then for all \(K > 0\), the function \({\phi }_0(ru):= \exp (- K r^\alpha H^\alpha (u))\), \((u,r) \in {\mathbb {S}}_\ge \times {\mathbb {R}}_\ge \), is the LT of a multivariate \(\alpha \)-stable law on \({{\mathbb {R}}^d_\ge }\) with spherical measure \(\nu ^{\alpha }\), and the following holds:

  1. (1)

    \( \psi _0(ru) := \lim _{n \rightarrow \infty } {\mathcal {S}}_0^n {\phi }_0(ru) = \mathbb {E}\exp (-K r^\alpha W(u))\) is a nontrivial fixed point of (1.1) with \(Q \equiv 0\).

  2. (2)

    Assuming in addition that \(\alpha <1\) and (A8) holds, then \(W^*\) as defined in Eq. (1.9) is finite a.s., and

    $$\begin{aligned} \psi (ru) := \lim _{n \rightarrow \infty } {\mathcal {S}}_Q^n {\phi }_0(ru) = \mathbb {E}\exp \left( -K r^\alpha W(u) - r \left\langle u,W^* \right\rangle \right) \end{aligned}$$

    is a nontrivial fixed point of (1.1).

5.1 Weighted branching process

The best way to describe iterations of \({\mathcal {S}}_Q\) is via the weighted branching process. Given a random variable \(Y \in {{\mathbb {R}}^d_\ge }\), let \((Y(v))_{v \in \mathbb {V}}\) be a family of i.i.d. copies of \(Y\), which are independent of \((T(v))_{v \in \mathbb {V}}\). Then the sequence

$$\begin{aligned} Y_n := \sum _{\left| {v} \right| =n} \mathbf{L}(v) Y(v) + \sum _{\left| {w} \right| <n} \mathbf{L}(w)Q(w), \end{aligned}$$
(5.1)

\(n \in {\mathbb {N}}_0\), is called the weighted branching process (WBP) associated with \(Y\) and \(T\). It can then easily be shown that

$$\begin{aligned} {\mathcal {L}}\left( Y_n \right) ={\mathcal {S}}_Q^n {\mathcal {L}}\left( Y \right) \end{aligned}$$
(5.2)

and moreover, that \(Y_n\) satisfies the identity

$$\begin{aligned} Y_n = \sum _{i=1}^N \mathbf {T}_i(\emptyset ) \left[ Y_{n-1} \right] _{i} + Q(\emptyset ), \end{aligned}$$

where \((\left[ Y_{n-1} \right] _{i})_{i \ge 1}\) is a sequence of i.i.d. copies of \(Y_{n-1}\), independent of \((Q, (\mathbf {T}_i)_{i \ge 1})\). Thus, if \(Y_n\) converges a.s. to a random variable \(Y^*\), then this \(Y^*\) is a fixed point of \({\mathcal {S}}_Q\).
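As a hedged illustration of (5.2) in the first step \(n=1\), assuming the usual weighted branching convention that \(\mathbf{L}(v) = \mathbf {T}_i(\emptyset )\) for the first-generation node \(v=i\), Eq. (5.1) reads

$$\begin{aligned} Y_1 = \sum _{i=1}^N \mathbf {T}_i(\emptyset ) Y(i) + Q(\emptyset ), \end{aligned}$$

which is precisely the right hand side of (1.2) with the i.i.d. copies \(Y(i)\) of \(Y\) in place of the \(X_i\); iterating this observation along the generations gives (5.2).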

Observe that, subject to the assumptions (A0)–(A2), (A8) and (A5) with \(m'(\alpha )<0\) and \(\alpha <1\), there is \(s \in (\alpha ,1)\) such that \(m(s)<1\) and \(\mathbb {E}\left| {Q} \right| ^s < \infty \). Referring to Eq. (1.10), \(W^*\) is then finite a.s. and is the limit of the WBP associated with \(0\) and \(T\). Moreover, if \(Y\) is any random variable in \({{\mathbb {R}}^d_\ge }\) with a finite moment of order \(\alpha +\varepsilon \) for some \(\varepsilon >0\), then the associated WBP converges a.s. to \(W^*\) as well; this follows from moment calculations as in Mirek [61, Section 3]. Thus we have the following result:

Lemma 5.2

Assume (A0)–(A2), (A5) with \(m'(\alpha )<0\) and \(\alpha <1\) and (A8). Then the series

$$\begin{aligned} W^*_n := \sum _{\left| {v} \right| < n} \mathbf{L}(v) Q(v) \end{aligned}$$

converges a.s. to a random variable \(W^*\), and \({\mathcal {L}}\left( W^* \right) \) is the unique FP of \({\mathcal {S}}_Q\) with a finite moment of order \(\alpha +\varepsilon \), for any \(\varepsilon >0\).
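A minimal sketch of the convergence argument, under the simplifying assumption that \(m(s)\) dominates \(\mathbb {E}\sum _{i=1}^N \left\| {\mathbf {T}_i} \right\| ^s\) (submultiplicativity of the operator norm along the tree and independence of \(Q(v)\) from \(\mathbf{L}(v)\) then give the middle bound): since \(s \le 1\), subadditivity of \(x \mapsto x^s\) yields

$$\begin{aligned} \mathbb {E}\left| {W^*_{n+1} - W^*_n} \right| ^s \le \mathbb {E}\sum _{\left| {v} \right| = n} \left\| {\mathbf{L}(v)} \right\| ^s \left| {Q(v)} \right| ^s \le \left( \mathbb {E}\left| {Q} \right| ^s\right) m(s)^n. \end{aligned}$$

Since \(m(s)<1\), the right hand side is summable in \(n\); hence \(\sum _n \left| {W^*_{n+1}-W^*_n} \right| ^s < \infty \) a.s., and as the increments eventually have norm at most one, \(\sum _n \left| {W^*_{n+1}-W^*_n} \right| < \infty \) a.s. as well, so \(W^*_n\) converges.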

5.2 Laplace transforms

For a random variable \(Y \in {{\mathbb {R}}^d_\ge }\), its LT \({\phi }(x) = \mathbb {E}\exp (-\left\langle x,Y \right\rangle )\) is well defined for all \(x \in {{\mathbb {R}}^d_\ge }\). From Eq. (5.2), one obtains iteration formulas for the action of \({\mathcal {S}}_Q\) on LTs, namely

$$\begin{aligned} {\mathcal {S}}_Q^n{\phi }(x) = \mathbb {E}\left[ \exp \left( -\left\langle x, \sum _{\left| {v} \right| <n} \mathbf{L}(v) Q(v) \right\rangle \right) \prod _{\left| {v} \right| =n} {\phi }(\mathbf{L}(v)^\top x) \right] \!. \end{aligned}$$
(5.3)

Observe that \({\mathcal {S}}_Q\) defines a continuous mapping on \({\mathcal {C}}_b^{}\left( {{\mathbb {R}}^d_\ge } \right) \). Consequently, if \({\phi }\) is the LT of a distribution on \({{\mathbb {R}}^d_\ge }\), and \(\psi := \lim _{n \rightarrow \infty } {\mathcal {S}}_Q^n {\phi }\) exists and is a LT of a distribution as well, then \(\psi \) is a fixed point of \({\mathcal {S}}_Q\), since

$$\begin{aligned} {\mathcal {S}}_Q\psi = {\mathcal {S}}_Q\left( \lim _{n\rightarrow \infty } {\mathcal {S}}_Q^n {\phi }\right) = \lim _{n \rightarrow \infty } {\mathcal {S}}_Q^{n+1} {\phi }= \psi . \end{aligned}$$

Below, we will consider the WBP associated with particular “initial” random variables, namely multivariate \(\alpha \)-stable ones. The next lemma describes their Laplace transforms.

Lemma 5.3

Let \(\nu \) be a probability measure on \({\mathbb {S}}_\ge \), \(K >0\) and \(\alpha \in (0,1]\). Then

$$\begin{aligned} {\phi }(x) := \exp \left( -K \int _{{\mathbb {S}}_\ge } \left\langle x,y \right\rangle ^\alpha \nu (dy)\right) \end{aligned}$$
(5.4)

is the LT of the multivariate \(\alpha \)-stable law on \({{\mathbb {R}}^d_\ge }\) with spherical measure \(\nu \).

Source

An idea of the proof is given in Nolan [65] and Zolotarev [70]; see Mentemeier [60, Section 5.2] for a detailed account based on these works. \(\square \)
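As a concrete instance (assuming \(e_1 \in {\mathbb {S}}_\ge \), as the notation suggests), take \(\nu = \delta _{e_1}\), the point mass at the first standard basis vector. Then (5.4) becomes

$$\begin{aligned} {\phi }(x) = \exp \left( -K x_1^\alpha \right) , \end{aligned}$$

the LT of \(Z e_1\) with \(Z \ge 0\) a one-dimensional strictly \(\alpha \)-stable random variable with LT \(\exp (-K s^\alpha )\); a general \(\nu \) mixes such rays according to the spherical measure.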

5.3 Existence of fixed points

Now we turn to the proof of Theorem 5.1. Note that below \(W^* \equiv 0\) in the homogeneous case \( Q \equiv 0\).

Proof of Theorem 5.1

Step 1: By Lemma 5.3,

$$\begin{aligned} {\phi }_0(x) := \exp \left( -K H^\alpha (x) \right) = \exp \left( - K \int _{{\mathbb {S}}_\ge } \left\langle x,y \right\rangle ^\alpha \nu ^{\alpha }(dy) \right) \end{aligned}$$

is the LT of a probability law on \({{\mathbb {R}}^d_\ge }\). Hence \({\phi }_n := {\mathcal {S}}_Q^n {\phi }_0\) is a sequence of LTs, and by (5.3), for \((u,r) \in {\mathbb {S}}_\ge \times {\mathbb {R}}_\ge \),

$$\begin{aligned} {\phi }_n (ru) =\,&\mathbb {E}\left[ \exp \left( - r \left\langle u, W_n^* \right\rangle \right) \, \prod _{\left| {v} \right| =n} \exp \left( - K r^\alpha H^\alpha (\mathbf{L}(v)^\top u) \right) \right] \\ =\,&\mathbb {E}\exp (- r \left\langle u,W_n^* \right\rangle - K r^\alpha W_n(u)). \end{aligned}$$

The random variables \(W_n^*\) and \(W_n(u)\) converge a.s. (for any \(u \in {\mathbb {S}}_\ge \)) by Lemma 5.2 resp. Proposition 4.4. Hence, using the bounded convergence theorem, the sequence \({\phi }_n\) converges pointwise to a limit \(\psi \), given by

$$\begin{aligned} \psi (ru) = \mathbb {E}\exp \left( -r \left\langle u, W^* \right\rangle - K r^\alpha W(u)\right) . \end{aligned}$$
(5.5)

Using the continuity theorem for multivariate LTs (see e.g. [69, Lemma 4]), \(\psi \) is the LT of a probability measure on \({{\mathbb {R}}^d_\ge }\), which is then a FP of \({\mathcal {S}}_Q\) by the considerations above. Since \(W(u)\) is nontrivial by Proposition 4.4, Eq. (5.5) describes a one-parameter class of fixed points. \(\square \)
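In particular, in the homogeneous case \(Q \equiv 0\), where \(W^* \equiv 0\), Eq. (5.5) reduces to

$$\begin{aligned} \psi _0(ru) = \mathbb {E}\exp \left( -K r^\alpha W(u) \right) , \end{aligned}$$

which is the one-parameter family of fixed points asserted in part (1) of Theorem 5.1.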

6 Properties of the fixed points

In this section, we will describe properties of the fixed points given by Theorem 5.1, such as multivariate regular variation, or a representation as a mixture of multivariate \(\alpha \)-stable laws. We are going to prove the following result.

Theorem 6.1

Assume (A0)–(A3), (A5)–(A6) with \(m'(\alpha ) <0\) and \(\alpha <1\). Let either \(Q \equiv 0\) (then \(W^* \equiv 0)\) or let (A8) hold. Then there is a random finite measure \(\Theta \) on \({\mathbb {S}}_\ge \), such that \(\int _{{\mathbb {S}}_\ge } \left\langle u,y \right\rangle ^\alpha \Theta (dy)= W(u)\) \(\mathbb {P}\text {-a.s.}\) for all \(u \in {\mathbb {S}}_\ge \), and hence

$$\begin{aligned} \psi (ru) = \mathbb {E}\exp \left( - r \left\langle u, W^* \right\rangle - K r^\alpha \int _{{\mathbb {S}}_\ge } \, \left\langle u,y \right\rangle ^\alpha \Theta (dy)\right) , \quad (u,r) \in {\mathbb {S}}_\ge \times {\mathbb {R}}_\ge . \end{aligned}$$
(6.1)

Moreover, \(\mathbb {E}\Theta = \nu ^{\alpha }\), and if \(X\) is a r.v. with LT \(\psi \) with \(K >0\), then

$$\begin{aligned} \lim _{r \rightarrow \infty } \frac{ {\mathbb {P}}_{} \left( {\frac{X}{\left| {X} \right| } \in \cdot , \left| {X} \right| >sr} \right) }{{\mathbb {P}}_{} \left( {\left| {X} \right| >r} \right) } = s^{-\alpha } \nu ^{\alpha }, \end{aligned}$$
(6.2)

and in particular,

$$\begin{aligned} \lim _{r \rightarrow \infty } r^\alpha {\mathbb {P}}_{} \left( {\left\langle u,X \right\rangle >r} \right) = \frac{K H^\alpha (u)}{\Gamma (1-\alpha )}. \end{aligned}$$
(6.3)

Remark 6.2

Note that the heavy tail properties (6.2) and (6.3) are subject to the assumptions \(K>0\) and \(\alpha <1\). If one of those fails, the tail behavior is governed by \(\beta \) rather than by \(\alpha \), as shown in Buraczewski et al. [17, Theorem 2.4] for the homogeneous case \(\alpha =1\) and \(Q \equiv 0\), and in Mirek [61, Theorem 1.9] for the inhomogeneous case with \(\alpha < 1/2\) and \(K=0\), i.e., for the particular fixed point \(W^*\).

It is remarkable that the directional dependence in the results cited above and in Eq. (6.3) is given by corresponding functions, namely by \(H^\beta \) there resp. \(H^\alpha \) here.

Define a sequence of Radon measures on \({\mathbb {R}}^d\! \setminus \! \{0\}\simeq {\mathbb {S}}\times {\mathbb {R}}_>\) by

$$\begin{aligned} \Lambda _0:=\nu ^{\alpha } \otimes {l}^\alpha , \end{aligned}$$

where \({l}^\alpha (dr)=\frac{1}{r^{\alpha +1}} dr\) is the Lévy measure of a one-dimensional \(\alpha \)-stable random variable, and let \( \Lambda _n\) be the random measure defined by

$$\begin{aligned} \Lambda _n(\omega )(A) := \sum _{\left| {v} \right| =n} \Lambda _0(\mathbf{L}(v)(\omega )^{-1}(A)) \end{aligned}$$

for all measurable sets \(A \subset {\mathbb {R}}^d\! \setminus \! \{0\}\) which are bounded away from the origin. Consider for \(u \in {\mathbb {S}}\), \(t>0\) the half-space \(H_{u,t}=\{ x \in {\mathbb {R}}^d\! \setminus \! \{0\} : \left\langle u,x \right\rangle > t \}\).

Lemma 6.3

Under the assumptions of Theorem 6.1, for a.e. \(\omega \in \Omega \) the sequence \(\Lambda _n(\omega )\) converges vaguely to a Radon measure \(\Lambda (\omega )\) on \({\mathbb {R}}^d\! \setminus \! \{0\}\), which is of the form

$$\begin{aligned} \Lambda (\omega )=\Theta (\omega ) \otimes {l}^\alpha \end{aligned}$$

for a random finite measure \(\Theta \) on \({\mathbb {S}}\), supported on \({\mathbb {S}}_\ge \). We have that \(\mathbb {E}\Theta = \nu ^{\alpha }\), and for all \(u \in {\mathbb {S}}_\ge \), it holds that

$$\begin{aligned} W(u) = \int _{{\mathbb {S}}_\ge } \left\langle u,y \right\rangle ^\alpha \, \Theta (dy) \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(6.4)

Moreover, for all \(u \in {\mathbb {S}}_\ge \) and \(t >0\),

$$\begin{aligned} \Lambda _0(H_{u,t})= \frac{t^{-\alpha }}{\alpha } H^\alpha (u). \end{aligned}$$
(6.5)

Proof

Using the substitution \(s=r \left\langle u,\mathbf{L}(v)y \right\rangle \),

$$\begin{aligned} \Lambda _n(H_{u,t})&= \sum _{\left| {v} \right| =n} \Lambda _0 \left( \left\{ x \in {\mathbb {R}}^d\! \setminus \! \{0\} : \left\langle u, \mathbf{L}(v) x \right\rangle > t \right\} \right) \nonumber \\&= \sum _{\left| {v} \right| =n} \Lambda _0 \left( \{ yr : (y,r) \in {\mathbb {S}}\times {\mathbb {R}}_>, \left\langle u,\mathbf{L}(v) y \right\rangle >0; \, r > t/\left\langle u, \mathbf{L}(v)y \right\rangle \} \right) \nonumber \\&= \sum _{\left| {v} \right| =n} \int _{t}^\infty \, \int _{{\mathbb {S}}_\ge } \mathbb {1}_{(0,\infty )}(\left\langle u, \mathbf{L}(v)y \right\rangle ) \, \left\langle u,\mathbf{L}(v)y \right\rangle ^\alpha \nu ^{\alpha }(dy) \, \frac{1}{s^{\alpha +1}} \, ds \nonumber \\&= \frac{t^{-\alpha }}{\alpha } \, \sum _{\left| {v} \right| =n} \int _{{\mathbb {S}}_\ge } \mathbb {1}_{(0,\infty )}(\left\langle u, \mathbf{L}(v)y \right\rangle ) \, \left\langle \mathbf{L}(v)^\top u,y \right\rangle ^\alpha \nu ^{\alpha }(dy) \end{aligned}$$
(6.6)

Observe that \(\overline{W}_n(u):= \sum _{\left| {v} \right| =n} \int _{{\mathbb {S}}_\ge } \mathbb {1}_{(0,\infty )}(\left\langle u, \mathbf{L}(v)y \right\rangle ) \, \left\langle \mathbf{L}(v)^\top u,y \right\rangle ^\alpha \nu ^{\alpha }(dy)\) defines a nonnegative martingale for all \(u \in {\mathbb {S}}\), which coincides with \(W_n(u)\) for \(u \in {\mathbb {S}}_\ge \). Thus \(\overline{W}_n(u)\) converges a.s. to a random limit \(\overline{W}(u)\), which is nontrivial for \(u \in {\mathbb {S}}_\ge \), and equal to zero for \(u \in - {\mathbb {S}}_\ge \).

Having thus shown that for all \(u \in {\mathbb {S}}\) and all \(t >0\)

$$\begin{aligned} \lim _{n \rightarrow \infty } \Lambda _n(H_{u,t}) = t^{-\alpha } \frac{\overline{W}(u)}{\alpha } \quad \text {a.s.}, \end{aligned}$$
(6.7)

we use Boman and Lindskog [14, Theorem 3’] to infer that for a.e. \(\omega \in \Omega \), there is a Radon measure \(\Lambda (\omega )\) on \({\mathbb {R}}^d\! \setminus \! \{0\}\) such that

$$\begin{aligned} \lim _{n \rightarrow \infty } \, \int f(x) \, \Lambda _n(\omega )(dx) = \int f(x) \Lambda (\omega )(dx) \end{aligned}$$

for all bounded continuous functions on \({\mathbb {R}}^d\) with support bounded away from the origin (this implies in particular the asserted vague convergence). The theorem moreover states that \(\Lambda (\omega )\) is \((-\alpha -d)\)-homogeneous (cf. Boman and Lindskog [14, p. 692]), i.e. of the form \(\Lambda (\omega ) = \Theta (\omega ) \otimes {l}^\alpha \) for a finite measure \(\Theta (\omega )\) on \({\mathbb {S}}\). The assertion on the support follows, since every \(\Lambda _n\) is supported on \({{\mathbb {R}}^d_\ge }\! \setminus \! \{0\}\).

Applying (6.6) with \(n=0\) yields the identity (6.5). Then, taking expectations in Eq. (6.7), we deduce that

$$\begin{aligned} \mathbb {E}\Lambda (H_{u,t}) = (\mathbb {E}\Theta \otimes {l}^\alpha )(H_{u,t}) = \Lambda _0(H_{u,t}) = (\nu ^{\alpha } \otimes {l}^\alpha )(H_{u,t}). \end{aligned}$$

By Boman and Lindskog [14, Theorem 3], the measure of the half-spaces \(H_{u,t}\) uniquely determines \(\mathbb {E}\Lambda \) and \(\Lambda _0\), which thus coincide. We conclude that \(\mathbb {E}\Theta = \nu ^{\alpha }\).

Applying (6.6) with \(t=1\) and \(u \in {\mathbb {S}}_\ge \), we finally obtain that

$$\begin{aligned} \int _{{\mathbb {S}}_\ge } \left\langle u,y \right\rangle ^\alpha \, \Theta (dy) = \lim _{n \rightarrow \infty } \alpha \Lambda _n(H_{u,1}) = \lim _{n \rightarrow \infty } W_n(u) = W(u) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$

\(\square \)

Proof of Theorem 6.1

In the lemma above, we have already proved the first part of Theorem 6.1, in particular the representation formula Eq. (6.1).

Next, we prove (6.3). Observe that for fixed \(u \in {\mathbb {S}}_\ge \), the family of random variables \(r^{-\alpha } \left[ 1- \exp (-r^\alpha W(u)) \right] \) is decreasing in \(r \in {\mathbb {R}}_>\): substituting \(s=r^\alpha \) and fixing a realization of \(W(u)\), \(s \mapsto [1-\exp (-sW(u))]/s\) is a LT (see Feller [28, XIII]), hence decreasing. Since \(\alpha <1\) is assumed, the family \( r^{-\alpha } \left[ 1- \exp (-K r^\alpha W(u) -r \left\langle u,W^* \right\rangle ) \right] \) is then ultimately decreasing as well. This allows us to use the monotone convergence theorem and Proposition 4.4 to infer

$$\begin{aligned} \lim _{r \downarrow 0} \frac{1 - \psi (ru)}{r^\alpha }&= \lim _{r \downarrow 0} \mathbb {E}\left[ \frac{1- \exp (- K r^\alpha W(u) - r \left\langle u,W^* \right\rangle )}{r^\alpha } \right] \nonumber \\&= \mathbb {E}\left[ \lim _{r \downarrow 0} \frac{1- \exp (-K r^\alpha W(u) - r \left\langle u,W^* \right\rangle )}{r^\alpha } \right] \end{aligned}$$
(6.8)
$$\begin{aligned}&= \mathbb {E}\left[ \lim _{r \downarrow 0} \frac{1- \exp (-K r^\alpha W(u)) }{r^\alpha } \right] = \mathbb {E}K W(u) = K H^\alpha (u). \nonumber \\ \end{aligned}$$
(6.9)

Let now \(X\) have Laplace transform \(\psi \). Using the Tauberian theorem for LTs (Feller [28, XIII.5, (5.22)]), the relation (6.9) with \(\alpha <1\) implies that

$$\begin{aligned} \lim _{r \rightarrow \infty } r^\alpha {\mathbb {P}}_{} \left( {\left\langle u,X \right\rangle >r} \right) = \frac{K H^\alpha (u)}{\Gamma (1-\alpha )} \end{aligned}$$
(6.10)

for all \(u \in {\mathbb {S}}_\ge \).
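As a hedged one-dimensional sanity check of this Tauberian step: if \(X_0 \ge 0\) has the stable LT \({\phi }(s)=\exp (-K s^\alpha )\) with \(\alpha \in (0,1)\), then \(1-{\phi }(s) \sim K s^\alpha \) as \(s \downarrow 0\), and the cited theorem gives

$$\begin{aligned} \lim _{r \rightarrow \infty } r^\alpha \, {\mathbb {P}}_{} \left( {X_0>r} \right) = \frac{K}{\Gamma (1-\alpha )}, \end{aligned}$$

which is (6.10) in the degenerate situation \(W(u) \equiv H^\alpha (u)\), \(W^* \equiv 0\).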

Now we turn to the proof of (6.2). For \(\alpha \in (0,1)\), the property (6.10) implies multivariate regular variation, i.e. there is a uniquely determined probability measure \(\varrho \) on \({\mathbb {S}}\) such that

$$\begin{aligned} \lim _{r \rightarrow \infty } \frac{{\mathbb {P}}_{} \left( {\left| {X} \right| >sr, \, X/\left| {X} \right| \in \cdot } \right) }{{\mathbb {P}}_{} \left( {\left| {X} \right| >r} \right) } = s^{-\alpha } \varrho , \end{aligned}$$

see Basrak et al. [7, Theorem 1.1], or equivalently,

$$\begin{aligned} \lim _{r \rightarrow \infty } r^\alpha \mathbb {E}f(r^{-1}X) = \int _{{\mathbb {S}}_\ge } \int _0^\infty f(ru) \frac{1}{r^{1+\alpha }} \, dr \, \varrho (du) \end{aligned}$$

for all bounded continuous functions \(f\) on \({\mathbb {R}}^d\) with support bounded away from the origin.

Referring again to Boman and Lindskog [14, Theorem 3], the measure on the right hand side is uniquely identified by the value it takes on the half-spaces \(H_{u,t}\). By (6.10),

$$\begin{aligned} (\varrho \otimes {l}^\alpha )(H_{u,t}) = c\, t^{-\alpha } \frac{K H^\alpha (u)}{\Gamma (1-\alpha )} \end{aligned}$$

for some normalizing constant \(c>0\), and thus, recalling Eq. (6.5), \(\varrho \otimes {l}^\alpha \) is a scalar multiple of \(\Lambda _0\); since both \(\varrho \) and \(\nu ^{\alpha }\) are probability measures, \(\varrho =\nu ^{\alpha }\). \(\square \)

7 Every fixed point of \({\mathcal {S}}_0\) is \(\alpha \)-regular

Having proved the existence part of the main theorem, we turn now to the much more involved proof of uniqueness. We will focus on fixed points of the homogeneous smoothing transform \({\mathcal {S}}_0\), since the inhomogeneous case can then be treated by exploiting a one-to-one correspondence between fixed points of \({\mathcal {S}}_0\) and \({\mathcal {S}}_Q\), to be proved later.

The proof of uniqueness consists of two fundamental steps. As the first step, we show that each fixed point of \({\mathcal {S}}_0\) is \(\alpha \)-regular. As the second step, we prove that every \(\alpha \)-regular fixed point is already \(\alpha \)-elementary, with the directional behavior described by the single function \(H^\alpha (u)\), and we conclude by showing that \(\alpha \)-elementary fixed points are unique, using in essence Corollary 4.5.

In this section, we provide the first fundamental step. In what follows, let \({\phi }\) be (the LT of) a fixed point of \({\mathcal {S}}_0\), and write

$$\begin{aligned} D(x) := \frac{1-{\phi }(x)}{H^\alpha (x)}. \end{aligned}$$

Theorem 7.1

Assume (A0)–(A6) and \(m'(\alpha )<0\). If \({\phi }\) is the Laplace transform of a nontrivial fixed point of \({\mathcal {S}}_0\) on \({{\mathbb {R}}^d_\ge }\), then \( 0 < \liminf _{r \rightarrow 0} D(r\mathbf {1}) \le \limsup _{r \rightarrow 0} D(r\mathbf {1}) < \infty \), in particular, \({\phi }\) is \(\alpha \)-regular.

The proof of this theorem will be given by the Lemmata 7.8 and 7.9 at the end of this section, extending the approach of Alsmeyer et al. [2] to the multidimensional situation. Beforehand, we have to introduce the concept of disintegration, and provide several prerequisites, including an application of Theorem 4.10. It might be helpful to have a first glance at the proofs of Lemmata 7.8 and 7.9 after studying the section about disintegration, in order to understand where the prerequisites are needed.

7.1 Disintegration

Assume (A0)–(A2) and let \({\phi }\) be a FP of \({\mathcal {S}}_0\). For each \(x \in {{\mathbb {R}}^d_\ge }\), the sequence

$$\begin{aligned} M_n(x):= \prod _{\left| {v} \right| =n} {\phi }(\mathbf{L}(v)^\top x) \end{aligned}$$

is a bounded nonnegative martingale, which thus converges in \(L^1\).

Definition 7.2

The random variable \(M(x) = \lim _{n \rightarrow \infty } M_n(x)\) is called the disintegration of \({\phi }\). Set \(Z(x):= - \log M(x) \in [0,\infty ]\).

Since \(M_n(x)\) converges in \(L^1\) and, by (5.3) with \(Q \equiv 0\), \(\mathbb {E}\, M_n(x) = ({\mathcal {S}}_0^n {\phi })(x) = {\phi }(x)\) for all \(n\), it holds that \({\phi }(x) = \mathbb {E}\, M(x) = \mathbb {E}\, e^{-Z(x)}\).

Lemma 7.3

Assume (A0)–(A2) and (A5) with \(m'(\alpha )<0\). Then for all \(x \in {{\mathbb {R}}^d_\ge }\), \(u \in {\mathbb {S}}_\ge \),

  1. (1)

\(\lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)) = Z(x) \quad \mathbb {P}\text {-a.s.},\)

  2. (2)

    if a function \(F : {{\mathbb {R}}^d_\ge }\rightarrow {\mathbb {R}}_\ge \) satisfies \(\lim _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \left| {F(ru) - \gamma } \right| =0\) for some \(\gamma \ge 0\), then

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)) \, F(\mathbf{L}(v)^\top x) = \gamma \, Z(x) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$
  3. (3)

    \( \lim _{t \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t}^u} (1- {\phi }(\mathbf{L}(v)^\top u)) = Z(u)\)    \(\mathbb {P}\text {-a.s.}\)

Proof

(1) Use the fact that \(\lim _{n \rightarrow \infty } \max _{\left| {v} \right| =n} \left\| {\mathbf{L}(v)} \right\| =0\) from Lemma 4.1 together with the convergence \(M_n(x) \rightarrow M(x)\), the approximation \(- \log r \approx 1- r\), valid for \(r\) close to \(1\), and \(\lim _{\left| {x} \right| \rightarrow 0} {\phi }(x)=1\) to infer

$$\begin{aligned} Z(x) = - \log M(x) = - \log \lim _{n \rightarrow \infty } \prod _{\left| {v} \right| =n} {\phi }(\mathbf{L}(v)^\top x) = \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)). \end{aligned}$$

(2) Writing

$$\begin{aligned} \sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)) \, F(\mathbf{L}(v)^\top x)&= \gamma \sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)) \\&\quad +\sum _{\left| {v} \right| =n} (1- {\phi }(\mathbf{L}(v)^\top x)) \bigl (F(\mathbf{L}(v)^\top x) -\gamma \bigr ), \end{aligned}$$

the assertion follows from (1), using the uniform convergence of \(F\) and Lemma 4.1.

(3) The same proof as for Lemma 4.7 (given in Sect. 13) yields that

$$\begin{aligned} \prod _{v \in {\mathcal {I}}_{t}^u} {\phi }(\mathbf{L}(v)^\top u)= \mathbb {E}\left[ M(u) \,|\, {\mathfrak {B}}_{{\mathcal {I}}_{t}^u} \right] , \end{aligned}$$

from which we infer \(\lim _{t \rightarrow \infty } \prod _{v \in {\mathcal {I}}_{t}^u} {\phi }(\mathbf{L}(v)^\top u) = M(u)\) \(\mathbb {P}\text {-a.s.}\). Then the argument is the same as for (1), using now that for \(v \in {\mathcal {I}}_{t}^u\), \(\left| {\mathbf{L}(v)^\top u} \right| \le e^{-t}\). \(\square \)
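For completeness, a quantitative version of the approximation \(-\log r \approx 1-r\) used in part (1) of the preceding proof: for \(r \in (0,1]\),

$$\begin{aligned} 0 \le -\log r - (1-r) = \int _r^1 \frac{1-t}{t} \, dt \le \frac{(1-r)^2}{r}, \end{aligned}$$

so that, writing \(r_v := {\phi }(\mathbf{L}(v)^\top x)\), the difference between \(\sum _{\left| {v} \right| =n} (-\log r_v)\) and \(\sum _{\left| {v} \right| =n} (1-r_v)\) is at most \(\max _{\left| {v} \right| =n} \bigl ((1-r_v)/r_v \bigr ) \sum _{\left| {v} \right| =n}(1-r_v)\), which tends to zero a.s. on \(\{Z(x)<\infty \}\), since \(\max _{\left| {v} \right| =n} \left\| {\mathbf{L}(v)} \right\| \rightarrow 0\) forces \(\max _{\left| {v} \right| =n}(1-r_v) \rightarrow 0\).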

7.2 Prerequisites

We start by applying Theorem 4.10 to particular functions \(f\).

Lemma 7.4

For all \(\varepsilon >0\) there is a \(c>0\) such that for all \(u \in {\mathbb {S}}_\ge \), the following limit in \(\mathbb {P}\)-probability exists and satisfies

$$\begin{aligned} \lim _{t \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \mathbb {1}_{\{S^u(v)-t>c \}} \le \varepsilon W(u). \end{aligned}$$
(7.1)

Proof

The bounded measurable function \(f_c:=\mathbb {1}_{(c, \infty )} \) is radial, so by Theorem 4.10,

$$\begin{aligned} \lim _{t \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \mathbb {1}_{(c,\infty )}(S^u(v)\!-\!t) \!=\! \lim _{t \!\rightarrow \! \infty } W_{{\mathcal {I}}_{t}^u}^{f_c}(u) \!=\! W(u) \, \int f_c(r) \, \varrho ({\mathbb {S}}_\ge \times dr) \end{aligned}$$

in \(\mathbb {P}\)-probability, and the integral becomes arbitrarily small for \(c\) large. \(\square \)

Lemma 7.5

For all sufficiently large \(c>0\) there is \(C>0\) such that for all \(u \in {\mathbb {S}}_\ge \)

$$\begin{aligned} \lim _{t \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \, \min _{j} U^u(v)_j \, \mathbb {1}_{\{S^u(v)-t\le c \}} = C W(u) \quad \text { in } \mathbb {P}\text {-probability.} \end{aligned}$$
(7.2)

Proof

The function \(u \mapsto \min _j u_j\) is continuous on \({\mathbb {S}}_\ge \), while \(r \mapsto \mathbb {1}_{[0,c]}(r)\) is radial and bounded measurable, so by Theorem 4.10,

$$\begin{aligned} \lim _{t \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \, \min _{j} U^u(v)_j \, \mathbb {1}_{\{S^u(v)-t\le c \}} = W(u) \int (\min _j y_j) \, \mathbb {1}_{[0,c]}(r) \, \varrho (dy \times dr) \end{aligned}$$

in \(\mathbb {P}\)-probability. By Theorem 4.9, \(\varrho (\cdot \times {\mathbb {R}})\) is concentrated on \(\mathrm {int}\left( {\mathbb {S}}_\ge \right) \), hence \(\min _j y_j >0\) for \(\varrho \)-a.e. \(y\), and thus upon choosing \(c\) sufficiently large,

$$\begin{aligned} C:= \int (\min _j y_j) \, \mathbb {1}_{[0,c]}(r) \, \varrho (dy \times dr) >0. \end{aligned}$$

\(\square \)

The next lemma generalizes Alsmeyer et al. [2, Lemma 11.4] to the multidimensional case.

Lemma 7.6

For all \(u \in {\mathbb {S}}_\ge \), \(t \in {\mathbb {R}}\) it holds that

$$\begin{aligned} \frac{D(e^{-t}u)}{D(e^{-t}\mathbf {1})} \le \frac{H^\alpha (\mathbf {1})}{H^\alpha (u)} =: R \qquad \text {and} \qquad \frac{D(e^{-t}u)}{D(e^{-t}\mathbf {1}) \, \min _j u_j} \ge R. \end{aligned}$$

Moreover, for all \(c>0\) there is \(\delta >0\) such that for all \(u \in {\mathbb {S}}_\ge \) and all \(0 \le a \le c\),

$$\begin{aligned} \frac{D(e^{-(t+a)}u)}{D(e^{-t}\mathbf {1})} \le R e^{\delta } \quad \text { and } \quad \frac{D(e^{-(t-a)}u)}{D(e^{-t}\mathbf {1}) \min _j u_j} \ge R e^{-\delta }. \end{aligned}$$
(7.3)

Proof

The first inequality results from Inequality (15.5):

$$\begin{aligned} \frac{D(e^{-t}u)}{D(e^{-t}\mathbf {1})} = \frac{1- {\phi }(e^{-t}u)}{1- {\phi }(e^{-t}\mathbf {1})} \frac{H^\alpha (e^{-t}\mathbf {1})}{H^\alpha (e^{-t}u)} \le 1 \cdot \frac{e^{-\alpha t}H^\alpha (\mathbf {1})}{e^{-\alpha t}H^\alpha (u)} = R. \end{aligned}$$

For the second inequality, we use (15.8),

$$\begin{aligned} \frac{D(e^{-t}u)}{D(e^{-t}\mathbf {1}) \, \min _j u_j} = \frac{1- {\phi }(e^{-t}u)}{(\min _j u_j)(1- {\phi }(e^{-t}\mathbf {1}))} \frac{e^{-\alpha t}H^\alpha (\mathbf {1})}{e^{-\alpha t}H^\alpha (u)} \ge R. \end{aligned}$$

To derive the remaining inequalities, use that \(e^{-\alpha t} D(e^{-t}u) = (1-{\phi }(e^{-t}u))/H^\alpha (u)\) is decreasing in \(t\), by the \(\alpha \)-homogeneity of \(H^\alpha \) and since \(r \mapsto {\phi }(ru)\) is decreasing:

$$\begin{aligned} \frac{D(e^{-(t+a)}u)}{D(e^{-t}u)} = \frac{e^{-\alpha (t+a)}D(e^{-(t+a)}u)}{e^{-\alpha (t+a)}D(e^{-t}u)} \le \frac{e^{-\alpha t}D(e^{-t}u)}{e^{-\alpha (t+a)}D(e^{-t}u)} \le e^{\alpha c}. \end{aligned}$$

Setting \(\delta :=\alpha c \), one obtains by taking the reciprocal that

$$\begin{aligned} \frac{D(e^{-(t-a)}u)}{D(e^{-t}u)} \ge e^{-\delta }. \end{aligned}$$

Now plug in the first and second inequality to derive the third resp. fourth one. \(\square \)

A priori, \(Z(x) = -\log M(x) \in [0,\infty ]\). Now we are going to show that in fact \(Z(u) \in (0,\infty )\) \(\mathbb {P}\text {-a.s.}\) for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \).

Lemma 7.7

For all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \),

$$\begin{aligned} {\mathbb {P}}_{} \left( {Z(u)< \infty , W(u)>0} \right) =1, \qquad {\mathbb {P}}_{} \left( {Z(u)>0, W(u)< \infty } \right) =1. \end{aligned}$$

Proof

As in the one-dimensional case (see Durrett and Liggett [27, Theorem 3.2]), one can show that, for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), \({\mathbb {P}}_{} \left( {Z(u)=0} \right) = \lim _{r \rightarrow \infty } {\phi }(ru)\) equals the extinction probability of the underlying branching process, which is zero due to assumption (A2) and independent of \(u\). Consequently, \({\mathbb {P}}_{} \left( {Z(u) >0, W(u) < \infty } \right) = {\mathbb {P}}_{} \left( {Z(u) >0} \right) =1\), and \({\mathbb {P}}_{} \left( {W(u)>0} \right) =1\) as well. Note that we do not exclude the possibility \({\mathbb {P}}_{} \left( {Z(u)=0} \right) =1\) for \(u \in \partial {\mathbb {S}}_\ge \), which might appear if the fixed point is concentrated on a subspace orthogonal to \(u \in \partial {\mathbb {S}}_\ge \).

Since we assumed the fixed point to have no mass at \(\infty \), \({\phi }\) is continuous at \(0\), and since \({\phi }(u)=\mathbb {E}\, e^{-Z(u)}\), necessarily \({\mathbb {P}}_{} \left( {Z(u)< \infty } \right) =1\). Consequently, \({\mathbb {P}}_{} \left( {Z(u)<\infty , W(u)>0} \right) =1.\) \(\square \)

7.3 Proof of Theorem 7.1

Now we can prove that all non-degenerate fixed points of \({\mathcal {S}}_0\) are \(\alpha \)-regular. Though the final argument is close to the one given in Alsmeyer et al. [2, Lemma 11.5], a lot of additional technical machinery is needed, inter alia Kesten's renewal theorem, which enters through the lengthy proof of Theorem 4.9.

Lemma 7.8

For all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), \(\overline{K}_u :=\limsup _{r \rightarrow 0} D(ru) < \infty \).

Proof

Fix \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \). Then

$$\begin{aligned}&\sum _{v \in {\mathcal {I}}_{t}^u} (1- {\phi }(\mathbf{L}(v)^\top u))\\&\quad = \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \frac{1- {\phi }(\mathbf{L}(v)^\top u)}{H^\alpha (\mathbf{L}(v)^\top u)}\\&\quad \ge \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(\mathbf{L}(v)^\top u) \mathbb {1}_{\{ S^u(v) \le t+c\}} \\&\quad = \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(e^{-S^u(v)}U^u(v)) \mathbb {1}_{\{ t < S^u(v) \le t+c\}} \\&\quad \ge R e^{-\delta } D(e^{-(t+c)}\mathbf {1}) \, \sum _{v \in {\mathcal {I}}_{t}^u} (\min _j (U^u(v))_j) \, H^\alpha (\mathbf{L}(v)^\top u) \mathbb {1}_{\{S^u(v) \le t+c \}} \end{aligned}$$

Referring to Lemma 7.5, we have, for \(c\) large enough and letting \(t \rightarrow \infty \) along a subsequence on which \(D(e^{-(t+c)}\mathbf {1})\) tends to \(\overline{K}_{\mathbf {1}}\), that

$$\begin{aligned} Z(u) \ge R e^{-\delta } \overline{K}_{\mathbf {1}} C W(u) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$

By Lemma 7.7, for any \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), \({\mathbb {P}}_{} \left( {Z(u) < \infty , W(u) >0} \right) >0\), thus \(\overline{K}_{\mathbf {1}}<\infty \); by the first inequality of Lemma 7.6, \(\overline{K}_u \le R \, \overline{K}_{\mathbf {1}} < \infty \) then follows for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \). \(\square \)

Lemma 7.9

For all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), \(\underline{K}_u := \liminf _{r \rightarrow 0} D(ru) >0\).

Proof

Fix \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \). Again using Lemma 7.6, we estimate

$$\begin{aligned} \sum _{v \in {\mathcal {I}}_{t}^u} (1- {\phi }(\mathbf{L}(v)^\top u))&= \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(\mathbf{L}(v)^\top u) \mathbb {1}_{\{S^u(v)\le t+c \}} \\&+ \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(\mathbf{L}(v)^\top u) \mathbb {1}_{\{S^u(v)> t+c \}} \\&= \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(e^{-S^u(v)}U^u(v)) \mathbb {1}_{\{t <S^u(v)\le t+c \}} \\&+\sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) D(e^{-S^u(v)}U^u(v)) \mathbb {1}_{\{S^u(v)> t+c \}} \\&\le e^\delta D(e^{-t}\mathbf {1}) \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \\&+ R\left( \sup _{s\ge t+c} D(e^{-s}\mathbf {1})\right) \sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) \mathbb {1}_{\{S^u(v)>t+c \}}. \end{aligned}$$

Considering Lemma 7.4, and letting \(t \rightarrow \infty \) along a subsequence on which \(D(e^{-t}\mathbf {1})\) tends to \(\underline{K}_{\mathbf {1}}\), we obtain that

$$\begin{aligned} Z(u) \le e^\delta \underline{K}_{\mathbf {1}} W(u) + R \overline{K}_{\mathbf {1}} \varepsilon W(u) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$

By Lemma 7.7, \({\mathbb {P}}_{} \left( {Z(u)>0, W(u) <\infty } \right) =1\). Since \(\varepsilon \) can be made arbitrarily small for \(c\) large, we infer that \(\underline{K}_{\mathbf {1}}>0\), and then \(\underline{K}_u \ge R \left( \min _j u_j\right) \underline{K}_{\mathbf {1}} >0\) for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), by the second inequality of Lemma 7.6. \(\square \)

8 Uniqueness of fixed points of the homogeneous equation

In the subsequent sections, we will prove the following theorem.

Theorem 8.1

Assume (A0)–(A7) and \(m'(\alpha ) <0\). If \({\phi }\) is an \(\alpha \)-regular FP of \({\mathcal {S}}_0\), then there is \(K >0\) such that

$$\begin{aligned} {\phi }(ru) = \mathbb {E}\exp (- K r^\alpha W(u)) , \quad \text { for all } r \in {\mathbb {R}}_\ge , u \in {\mathbb {S}}_\ge . \end{aligned}$$

Together with Theorem 7.1, this completes the proof of Theorem 1.2 for the homogeneous equation, by proving the uniqueness of fixed points (recall that the existence of fixed points was shown in Theorem 5.1).

In this section, we are going to conclude Theorem 8.1 from the general result given below, the proof of which will be given in Sects. 9 and 10.

Theorem 8.2

Assume (A1)–(A4) and (A7). Let \(\alpha \in (0,1]\), not necessarily satisfying (A5).

  1. (1)

    If there is an \(L\)-\(\alpha \)-regular FP of \({\mathcal {S}}_0\), then \(m(\alpha )=1\), \(m'(\alpha ) \le 0\).

  2. (2)

If \({\phi }\) is an \(L\)-\(\alpha \)-regular FP of \({\mathcal {S}}_0\), then for all fixed \(s >0\),

    $$\begin{aligned} \lim _{r \rightarrow 0} \, \sup _{u,v \in {\mathbb {S}}_\ge } \left| {\frac{1-{\phi }(sru)}{1- {\phi }(rv)}- s^\alpha \frac{H^\alpha (u)}{H^\alpha (v)}} \right| =0. \end{aligned}$$
    (8.1)
  3. (3)

If \(\psi \) is an \(L\)-\(\alpha \)-elementary FP of \({\mathcal {S}}_0\), then there is \(K >0\) such that

    $$\begin{aligned} \lim _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge }\left| {\frac{1- \psi (ru)}{L(r) r^\alpha } - K H^\alpha (u)} \right| =0. \end{aligned}$$
    (8.2)

Remark 8.3

This far-reaching result covers the case of general slowly varying functions \(L\) (non-constant \(L\) becomes relevant in the critical case \(m'(\alpha )=0\), see Kolesko and Mentemeier [46]), proves that the directional dependence of \(\lim _{r \rightarrow 0} ({1 - \psi (ru)})/({L(r) r^\alpha })\) is always given by \(H^\alpha \) (answering the question raised at the beginning of Sect. 2), and shows that the convergence is uniform.

8.1 Proof of Theorem 8.1

Using disintegration (see Sect. 7.1), each fixed point has the representation \({\phi }(x) = \mathbb {E}\exp (- Z(x))\). We will show that \(\mathbb {P}\text {-a.s.}\), the function \(Z(x)\) is \(\alpha \)-homogeneous, i.e. \(Z(x)=\left| {x} \right| ^\alpha Z(x/\left| {x} \right| )\), and subsequently that \(Z(u) = KW(u)\) \(\mathbb {P}\text {-a.s.}\) for all \(u \in {\mathbb {S}}_\ge \).

Proof of Theorem 8.1

We follow the proof given in Alsmeyer et al. [2, Lemma 7.6  and Theorem 10.2] for the one-dimensional case.

Step 1: \(Z\) is \(\alpha \)-homogeneous: Using Lemma 7.3 together with property (8.1) from Theorem 8.2, we obtain that for all \(u \in {\mathbb {S}}_\ge \) and \(s \in {\mathbb {R}}_\ge \),

$$\begin{aligned} Z(s u)&= \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} (1- {\phi }(s \mathbf{L}(v)^\top u)) \nonumber \\&= \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} \frac{1- {\phi }(s \mathbf{L}(v)^\top u)}{1- {\phi }(\mathbf{L}(v)^\top u)} \left[ 1- {\phi }(\mathbf{L}(v)^\top u) \right] \nonumber \\&= s^\alpha Z(u) \quad \mathbb {P}\text {-a.s.} \end{aligned}$$
(8.3)

Note that this implies \({\phi }(su) = \mathbb {E}\exp (-s^\alpha Z(u))\) for all \(u \in {\mathbb {S}}_\ge \), \(s \in {\mathbb {R}}_\ge \).

Step 2: \(0 <\mathbb {E}\, Z(u) < \infty \) for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \): By Lemma 7.7, \(0 < Z(u) < \infty \) \(\mathbb {P}\text {-a.s.}\) for all \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), which implies \(\mathbb {E}Z(u) >0\). Moreover, we have the monotone convergence \(\lim _{s \rightarrow 0} s^{-\alpha } (1- \exp (-s^\alpha Z(u))) = Z(u)\), thus

$$\begin{aligned} \lim _{s \rightarrow 0} \frac{1- {\phi }(su)}{s^\alpha }= \lim _{s \rightarrow 0} \mathbb {E}\left[ \frac{1- e^{-s^\alpha Z(u)}}{s^\alpha } \right] = \mathbb {E}Z(u) , \end{aligned}$$
(8.4)

being finite or not. But now we can refer to Lemma 7.8, which yields finiteness of \(\mathbb {E}Z(u)\).

Step 3: Eq. (8.4) yields that \({\phi }\) is \(\alpha \)-elementary. Thus, Eq. (8.2) of Theorem 8.2 gives that there is \(K>0\) with

$$\begin{aligned} \lim _{s \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \left| {\frac{1- {\phi }(su)}{s^\alpha H^\alpha (u)} - K} \right| = \lim _{s \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \left| {\frac{1- {\phi }(su)}{ H^\alpha (su)} - K} \right| =0, \end{aligned}$$

which implies \(\mathbb {E}Z(u) = K H^\alpha (u)\) for all \(u \in {\mathbb {S}}_\ge \), and, together with Corollary 4.5, that

$$\begin{aligned} Z(u)&= \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} \left( 1- {\phi }(\mathbf{L}(v)^\top u) \right) \nonumber \\&= \lim _{n \rightarrow \infty } \sum _{\left| {v} \right| =n} H^\alpha (\mathbf{L}(v)^\top u) \, \frac{1- {\phi }(\mathbf{L}(v)^\top u)}{ H^\alpha (\mathbf{L}(v)^\top u)} = K W(u)\quad \mathbb {P}\text {-a.s.} \end{aligned}$$

\(\square \)

9 Proof of Theorem 8.2: compactness arguments via the Arzelà–Ascoli theorem

Let \({\phi }\) be \(L\)-\(\alpha \)-regular. Here and below, we will study the family \(({D_{L,s}})_{s \in {\mathbb {R}}}\) of functions on \({\mathbb {S}}_\ge \times {\mathbb {R}}\), given by

$$\begin{aligned} {D_{L,s}}(u,t) := \frac{1- {\phi }(e^{s+t}u)}{H^\alpha (e^t u) e^{\alpha s}L(e^s)} = \frac{1- {\phi }(e^{s+t}u)}{H^\alpha (u) e^{\alpha (s+t)} L(e^s)}. \end{aligned}$$
(9.1)

Note that \({\phi }\) is \(L\)-\(\alpha \)-elementary if \(\lim _{s \rightarrow -\infty } {D_{L,s}}(\tilde{\mathbf {1}},0)\) exists and is finite and positive, where \(\tilde{\mathbf {1}}:= d^{-1/2} \mathbf {1}\), and that \(L\)-\(\alpha \)-regularity implies that, for each fixed \((u,t) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\), the family \(({D_{L,s}}(u,t))_{s \in {\mathbb {R}}}\) is uniformly bounded.
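In particular, for constant \(L \equiv 1\), this notation is consistent with the function \(D\) from Sect. 7: by the \(\alpha \)-homogeneity \(H^\alpha (e^{s+t}u)=e^{\alpha (s+t)}H^\alpha (u)\),

$$\begin{aligned} {D_{1,s}}(u,t) = \frac{1-{\phi }(e^{s+t}u)}{H^\alpha (e^{s+t}u)} = D(e^{s+t}u). \end{aligned}$$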

This already hints at using the Arzelà–Ascoli theorem. In fact, in this section, we are going to show that \(L\)-\(\alpha \)-regularity implies that the functions \(({D_{L,s}})_{s \in {\mathbb {R}}}\) are \(\min \{1,\alpha \}\)-Hölder continuous in \(u\) for any fixed \(t \in {\mathbb {R}}\). This will imply equicontinuity of a restricted family \(({D_{L,s}})_{s \le s_0}\). The results of this section involve a lot of technical details, but their essence can be phrased as follows:

Lemma 9.1

Assume (A0)–(A3) and let \({\phi }\) be \(L\)-\(\alpha \)-regular for \(\alpha \in (0,1]\) and a positive function \(L\), slowly varying at 0. Then there is \(s_0 \in {\mathbb {R}}\), such that the family \(({D_{L,s}})_{s \le s_0}\) is contained in a compact subset of \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \) with respect to the topology of uniform convergence on compact sets. In particular, each sequence \(({D_{L,s_n}})_{n \ge 0}\) with \(\lim _{n \rightarrow \infty } s_n =-\infty \) has a convergent subsequence.

An immediate application is given by the following corollary:

Corollary 9.2

If \({\phi }(ru)=\mathbb {E}\, \exp (-K r^\alpha W(u))\), then

$$\begin{aligned} \lim _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \left| {\frac{1-{\phi }(ru)}{r^\alpha } - K H^\alpha (u)} \right| =0. \end{aligned}$$

Proof

On the one hand, for \({\phi }\) of the given form,

$$\begin{aligned} \lim _{s \rightarrow -\infty } \frac{1- {\phi }(e^{s+t}u)}{e^{\alpha s}} = K e^{\alpha t} H^\alpha (u) = K H^\alpha (e^t u), \end{aligned}$$

i.e., for all \(u \in {\mathbb {S}}_\ge \), \(t \in {\mathbb {R}}\),

$$\begin{aligned} \lim _{s \rightarrow -\infty } {D_{1,s}}(u,t) = K. \end{aligned}$$
(9.2)

On the other hand, by Lemma 9.1, any sequence \(({D_{1,{s_n}}})_{n \in {\mathbb {N}}}\) with \(s_n \le s_0\) and \(\lim _{n \rightarrow \infty } s_n =-\infty \) has a convergent subsequence, and this convergence is uniform on compact subsets of \({\mathbb {S}}_\ge \times {\mathbb {R}}\). But due to (9.2), the limit is always the same, hence \(\lim _{s \rightarrow -\infty } {D_{1,s}} =K \) uniformly on compact subsets of \({\mathbb {S}}_\ge \times {\mathbb {R}}\). \(\square \)

9.1 Hölder continuity

Recall that for \({\phi }\) being \(L\)-\(\alpha \)-regular,

$$\begin{aligned} \underline{K}:= \liminf _{r \rightarrow 0} \frac{1-{\phi }(r\mathbf {1})}{L(r)r^\alpha } >0, \quad \overline{K}:=\limsup _{r \rightarrow 0} \frac{1-{\phi }(r\mathbf {1})}{L(r)r^\alpha } < \infty . \end{aligned}$$

Lemma 9.3

Let \({\phi }\) be \(L\)-\(\alpha \)-regular. Then there is \(t_0>0\) and \(K > 0\) such that for all \(t \in [0,t_0]\), \(u,w \in {\mathbb {S}}_\ge \),

$$\begin{aligned} \left| { \frac{1-{\phi }(tru)}{t^\alpha L(t)} - \frac{1-{\phi }(trw)}{t^\alpha L(t)} } \right| \le K (1 \vee r) \left| {u-w} \right| ^\alpha . \end{aligned}$$
(9.3)

Moreover, let \(C \subset \mathrm {int}\left( {\mathbb {S}}_\ge \right) \) be compact. Then with

$$\begin{aligned} K_C := \left( \min _{y \in C} \min _i y_i \right) ^{-1}\!, \end{aligned}$$

it holds that for each \(r \in {\mathbb {R}}_>\) there is \(t_1=t_1(r) \le t_0\) such that for all \(u,w \in C\), \(t \in [0,t_1]\),

$$\begin{aligned} \left| {\frac{1- {\phi }(tru)}{1-{\phi }(tu)} - \frac{1- {\phi }(trw)}{1-{\phi }(tw)}} \right| \le 4 K K_C (1 \vee r) \left| {u-w} \right| ^\alpha . \end{aligned}$$
(9.4)

Proof

For \(u,w \in {\mathbb {S}}_\ge \) define the vector \(u \wedge w\) by \((u \wedge w)_i=\min \{u_i, w_i\}\), \(i=1,\dots ,d\). Then \(u-u\wedge w, w-u \wedge w \in {{\mathbb {R}}^d_\ge }\). Let \(X\) be a r.v. with LT \({\phi }\). Consider

$$\begin{aligned}&\left| {1- {\phi }(tru) - (1-{\phi }(trw))} \right| \\&\quad \le \mathbb {E}\left| { \exp (-tr\left\langle u,X \right\rangle ) - \exp (-tr\left\langle w,X \right\rangle ) } \right| \\&\quad \le \mathbb {E}\left| { \exp (-tr \left\langle u \wedge w, X \right\rangle ) \left( 1 - \exp (-tr \left\langle u-u\wedge w,X \right\rangle ) \right) } \right| \\&\qquad +\, \mathbb {E}\left| { \exp (-tr \left\langle u \wedge w, X \right\rangle ) \left( 1 - \exp (-tr \left\langle w-u\wedge w,X \right\rangle ) \right) } \right| \\&\quad \le \mathbb {E}\left| { 1 - \exp (-tr \left\langle u-u\wedge w,X \right\rangle )} \right| + \mathbb {E}\left| { 1 - \exp (-tr \left\langle w-u\wedge w,X \right\rangle )} \right| \\&\quad = 1- {\phi }(tr[u-u\wedge w]) + 1- {\phi }(tr[w-u\wedge w]) \end{aligned}$$

Due to symmetry, it is enough to consider \(1-{\phi }(tr[u-u\wedge w])\). Using inequality (15.5) and then (15.3) resp. (15.4), we infer

$$\begin{aligned} 1-{\phi }(tr[u-u\wedge w]) \le&\, 1- {\phi }(tr \left| {u - u \wedge w} \right| \mathbf {1}) \\ \le&{\left\{ \begin{array}{ll} 1 - {\phi }(t \left| {u - u \wedge w} \right| \mathbf {1}) &{} r < 1 \\ r ( 1 - {\phi }(t \left| {u - u \wedge w} \right| \mathbf {1})) &{} r \ge 1 \end{array}\right. } \end{aligned}$$

Since by assumption,

$$\begin{aligned} \limsup _{t \rightarrow 0} \frac{1- {\phi }(t\left| {u - u\wedge w} \right| \mathbf {1})}{t^\alpha \left| {u-u\wedge w} \right| ^\alpha L(t \left| {u-u \wedge w} \right| )} \le \overline{K} \end{aligned}$$

with \(L\) slowly varying at 0, there is \(t_0 >0\) and \(K' > \overline{K}\) such that

$$\begin{aligned} \frac{1- {\phi }(t\left| {u - u\wedge w} \right| \mathbf {1})}{t^\alpha L(t) } \le K' (1 \vee r) \left| {u-u\wedge w} \right| ^\alpha \le K' (1 \vee r) \left| {u-w} \right| ^\alpha \end{aligned}$$

for all \(t \in [0, t_0]\). This proves the first assertion.

Turning now to the second assertion, write \(F(x)=1-{\phi }(x)\). Then for all \(t \le t_0\),

$$\begin{aligned}&\left| {\frac{1- {\phi }(tru)}{1-{\phi }(tu)} - \frac{1- {\phi }(trw)}{1-{\phi }(tw)}} \right| \\&\quad \le \left| {\frac{F(tru)}{F(tw)}} \right| \left| {\frac{F(tw)-F(tu)}{F(tu)}} \right| + \left| {\frac{F(trw)}{F(tw)}} \right| \left| {\frac{F(tru)-F(trw)}{F(trw)}} \right| \\&\quad \le (1\vee r) \left| {\frac{F(tw)-F(tu)}{t^\alpha L(t)}} \right| \frac{t^\alpha L(t)}{F(tu)} + (1\vee r) \left| {\frac{F(tru)-F(trw)}{(tr)^\alpha L(tr)}} \right| \frac{(tr)^\alpha L(tr)}{F(tru)} \\&\quad \le (1 \vee r) K' \left| {u-w} \right| ^\alpha \frac{t^\alpha L(t)}{F(tu)} + (1 \vee r) K' \left| {u-w} \right| ^\alpha \frac{(tr)^\alpha L(tr)}{F(tru)}, \end{aligned}$$

where we used (15.1) and (15.2) to estimate \(\left| {F(trw)/F(tw)} \right| \) by \((1 \vee r)\), and subsequently the estimate for \(\left| {\frac{F(tu)-F(tw)}{t^\alpha L(t)}} \right| \) obtained above. To estimate further, observe that by (15.8)

$$\begin{aligned} \frac{F(tu)}{F(t\mathbf {1})} \ge \min _i u_i, \end{aligned}$$

hence

$$\begin{aligned} \cdots \le&{ (1\vee r)} K' \left| {u-w} \right| ^\alpha (\min _i u_i)^{-1} \left( \frac{t^\alpha L(t)}{F(t\mathbf {1})} + \frac{(tr)^\alpha L(tr)}{F(tr\mathbf {1})} \right) \end{aligned}$$

The term in the bracket is bounded by \(2/\underline{K}\) for \(t \rightarrow 0\), hence there is \(t_1\), depending on \(r\), such that the expression is bounded by \(4/\underline{K}\) for all \(t \le t_1\). To make the bound independent of \(u\), replace \((\min _i u_i)^{-1}\) by \(K_C\). Finally, choose \(K=\max \{K', K'/\underline{K}\}.\) \(\square \)

9.2 A compact subset of \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \)

Now we are going to construct a compact subset \({\mathcal {J}}_\alpha ^K \subset {\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \), such that there is \(s_0 \in {\mathbb {R}}\) with \({D_{L,s}} \in {\mathcal {J}}_\alpha ^K\) for all \(s \le s_0\). Its definition is given below; subsequently, we prove that it is compact and that it eventually contains the family \(({D_{L,s}})_{s \le s_0}\). The definition is subject to assumptions (A0)–(A3), which guarantee the existence of \(H^\alpha \).

Definition 9.4

For \(\alpha \in (0,1]\), \(K >0\) let \({\mathcal {J}}_\alpha ^K\) be the set of continuous functions

$$\begin{aligned} g : {\mathbb {S}}_\ge \times {\mathbb {R}}\rightarrow [0, \infty ) \end{aligned}$$

satisfying

  1. (1)

    \(\sup _{u \in {\mathbb {S}}_\ge } g(u,0) H^\alpha (u) \le K\),

  2. (2)

    \(t \mapsto g(u,t) e^{\alpha t}\) is increasing for all \(u \in {\mathbb {S}}_\ge \),

  3. (3)

    \(t \mapsto g(u,t) e^{(\alpha -1)t}\) is decreasing for all \(u \in {\mathbb {S}}_\ge \),

  4. (4)

    \( u \mapsto g(u,t) H^\alpha (e^t u)\) is \(\alpha \)-Hölder with constant \((1 \vee e^t)K\) for each \(t \in {\mathbb {R}}\).

The idea of this construction goes back to Durrett and Liggett [27] for the one-dimensional case; in fact, properties (1)–(3) are the same as in (ibid., Lemma 2.11). The fundamental new contribution here is to take care of the directional dependence on \(u \in {\mathbb {S}}_\ge \), which necessitates the assumption of Hölder continuity in the directional component. Note that we needed \({\phi }\) to be \(L\)-\(\alpha \)-regular in order to prove Hölder continuity of \({D_{L,s}}\), and therefore had to show first that any fixed point is regular. This step is not needed for the one-dimensional arguments.

Lemma 9.5

Assume (A0)–(A3). The set \({\mathcal {J}}_\alpha ^K\) is a compact subset of \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \) w.r.t. to the topology of uniform convergence on compact sets.

Proof

The assertion will follow from the general Arzelà-Ascoli theorem for locally compact metric spaces, see e.g. Kelley [42, Theorem 7.18]. Properties (1)–(3) together imply the uniform bounds, valid for all \(g \in {\mathcal {J}}_\alpha ^K\):

$$\begin{aligned} g(u,t) \le {\left\{ \begin{array}{ll} K H^\alpha (u)^{-1} e^{(1-\alpha )t} &{} t \ge 0, \\ K H^\alpha (u)^{-1} e^{-\alpha t} &{} t \le 0. \end{array}\right. } \end{aligned}$$
(9.5)
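Indeed, comparing with \(t=0\): for \(t \ge 0\), property (3) gives \(g(u,t)e^{(\alpha -1)t} \le g(u,0)\); for \(t \le 0\), property (2) gives \(g(u,t)e^{\alpha t} \le g(u,0)\); and property (1) bounds \(g(u,0) \le K H^\alpha (u)^{-1}\). For instance, for \(t \ge 0\),

$$\begin{aligned} g(u,t) \le g(u,0)\, e^{(1-\alpha )t} \le K H^\alpha (u)^{-1} e^{(1-\alpha )t}, \end{aligned}$$

and the case \(t \le 0\) is analogous.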

Properties (1)–(4) are preserved even under pointwise convergence of functions; thus \({\mathcal {J}}_\alpha ^K\) is in particular closed under compact uniform convergence. Turning to equicontinuity, fix \((u_0, t_0) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\) and \(\varepsilon >0\) and consider first the variation in \(t\). Let \(\delta >0\). Then for any \(g \in {\mathcal {J}}_\alpha ^K\), it follows from property (2) that for all \(u \in {\mathbb {S}}_\ge \) and \(t \in [t_0 - \delta , t_0 + \delta ]\),

$$\begin{aligned} g(u,t)e^{\alpha (t_0-\delta )} \le g(u,t)e^{\alpha t} \le g(u,t_0 + \delta )e^{\alpha (t_0 + \delta )}, \end{aligned}$$

thus \( g(u,t) \le g(u, t_0 + \delta )e^{2 \alpha \delta }.\) Similarly, from property (3), \( g(u,t) \ge g(u, t_0+\delta )e^{2(\alpha -1)\delta }\) and consequently

$$\begin{aligned} \left| {g(u,t)\!-\!g(u,t_0)} \right| \!\le \!\ g(u,t_0 \!+\! \delta )e^{2\alpha \delta } \!-\! g(u, t_0 \!+\! \delta )e^{2(\alpha \!-\!1)\delta } \!\le \!\ M\left( e^{2\alpha \delta } \!-\! e^{2(\alpha -1) \delta }\right) , \end{aligned}$$

where the uniform bound \(M\) exists due to (9.5). Hence there is \(\delta _1 >0\) such that

$$\begin{aligned} \left| {g(u,t)-g(u,t_0)} \right| < \frac{\varepsilon }{2} \end{aligned}$$
(9.6)

for all \(t \in B_{\delta _1}(t_0)\) and all \(u \in {\mathbb {S}}_\ge \). Considering the variation in \(u\), it follows again from (9.5) that for \(h(u,t):=g(u,t)H^\alpha (e^t u)\),

$$\begin{aligned} L := \sup \{ h(u,t) \ : \ g \in {\mathcal {J}}_\alpha ^K,\ (u,t) \in {\mathbb {S}}_\ge \times [t_0 - \delta _1, t_0 + \delta _1] \} < \infty . \end{aligned}$$

Using property (4), we infer that for all \(u \in {\mathbb {S}}_\ge \),

$$\begin{aligned} \left| {g(u,t_0) -g(u_0,t_0) } \right|&\le \frac{\left| {h(u,t_0)- h(u_0,t_0)} \right| }{H^\alpha (e^{t_0}u_0)} + h(u,t_0)\left| { \frac{1}{H^\alpha (e^{t_0}u)} - \frac{1}{H^\alpha (e^{t_0}u_0)} } \right| \nonumber \\&\le \frac{K(1\vee e^{t_0}) \left| {u-u_0} \right| ^\alpha }{H^\alpha (e^{t_0}u_0)} + L \left| { \frac{1}{H^\alpha (e^{t_0}u)} - \frac{1}{H^\alpha (e^{t_0}u_0)} } \right| \end{aligned}$$

Hence there is \(\delta _2>0\) such that

$$\begin{aligned} \left| {g(u,t_0) -g(u_0,t_0) } \right| \le \varepsilon /2 \end{aligned}$$
(9.7)

for all \(u \in B_{\delta _2}(u_0)\). Combining (9.6) and (9.7), it holds that for all \((u,t) \in B_{\delta _2}(u_0) \times B_{\delta _1}(t_0)\),

$$\begin{aligned} \left| {g(u,t)- g(u_0,t_0)} \right| \le \left| {g(u,t) - g(u,t_0)} \right| + \left| {g(u,t_0)-g(u_0,t_0)} \right| \le \varepsilon . \end{aligned}$$

This proves the equicontinuity, hence Arzelà-Ascoli applies and yields the assertion. \(\square \)

The next result in particular proves Lemma 9.1.

Lemma 9.6

Assume (A0)–(A3) and let \({\phi }\) be \(L\)-\(\alpha \)-regular for some \(\alpha \in (0,1]\). Then there is \(s_0 \in {\mathbb {R}}\) and \(K >0\) such that \(\left( {D_{L,s}}\right) _{s \le s_0} \subset {\mathcal {J}}_\alpha ^K\).

Proof

We have to check properties (1)–(4):

  1. (1)

\(\sup _{u \in {\mathbb {S}}_\ge } {D_{L,s}}(u,0) H^\alpha (u) \le \frac{1- {\phi }(e^s \mathbf {1})}{e^{\alpha s}L(e^s)} \le K_s\), with \(K_s\) bounded by \(\overline{K}\) asymptotically as \(s \rightarrow -\infty \).

  2. (2)

    Just observe that \({D_{L,s}}(u,t) e^{\alpha t}= \frac{1- {\phi }(e^{s+t}u)}{H^\alpha (u)e^{\alpha s}L(e^s)}\) is increasing as a function of \(t\).

  3. (3)

    Recall that \((e^{-s})e^{-t}(1- {\phi }(e^{s+t}u))\) is a LT, hence decreasing. Consequently, \({D_{L,s}}(u,t) e^{(\alpha -1)t} = e^{-t}\frac{1-{\phi }(e^{s+t}u)}{H^\alpha (u)e^{\alpha s}L(e^s)}\) is decreasing as a function of \(t\) as well.

  4. (4)

This is the content of Lemma 9.3. It gives \(t_0 = e^{s_0} >0\) and \(K >0\) such that for all \(s \le s_0\),

    $$\begin{aligned} \left| {{D_{L,s}}(u,t)H^\alpha (e^t u) - {D_{L,s}}(w,t)H^\alpha (e^t w)} \right| \le K(1 \vee e^t) \left| {u-w} \right| ^\alpha . \end{aligned}$$

By possibly making \(s_0\) smaller, \(K_s \le K\) for all \(s \le s_0\), i.e. property (1) holds with this \(K\) as well. \(\square \)

10 Proof of Theorem 8.2: Choquet–Deny arguments

This section contains the technical cornerstone in the proof of Theorem 8.2. We have proved so far that for any \(L\)-\(\alpha \)-regular FP \({\phi }\), its associated sequence \({D_{L,s}}\) has convergent subsequences (for \(s \rightarrow - \infty \)), now we are going to identify their limits as bounded harmonic functions for the associated Markov random walk \((U_n, S_n)_{n \in {\mathbb {N}}_0}\). The following Choquet–Deny type result holds for \((U_n, S_n)_{n \in {\mathbb {N}}_0}\):

Proposition 10.1

Assume (A0)–(A4) and that \(H \in {\mathcal {C}}_b^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \) satisfies

  1. (i)

    \(H(u,t) = \mathbb {E}_{u} H(U_1,t-S_1)\) for all \((u,t) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\), and

  2. (ii)

    for all \(z \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \),

    $$\begin{aligned} \lim _{y \rightarrow z} \sup _{t \in {\mathbb {R}}} \left| {H(y,t)- H(z,t)} \right| =0. \end{aligned}$$

Then \(H\) is constant.

Source

[59, Theorem 2.2]. \(\square \)

In order to apply this Choquet–Deny type result, we will introduce a subset \({\mathcal {H}}_{\alpha ,c}^K\) of \({\mathcal {J}}_\alpha ^K\) (Definition and Lemma 10.2) which contains all possible subsequential limits of \({D_{L,s}}\) (Lemma 10.4). Then we prove that \({\mathcal {H}}_{\alpha ,c}^K\) is a compact convex set, and we identify its extremal points by using Proposition 10.1 (Lemma 10.5). Finally, we prove Theorem 8.2.

10.1 The set containing the subsequential limits

We start by introducing the subset \({\mathcal {H}}_{\alpha ,c}^K\) of \({\mathcal {J}}_\alpha ^K\) which will contain the subsequential limits of \({D_{L,s}}\) for \(L\)-\(\alpha \)-regular fixed points.

Definition and Lemma 10.2

Let (A0)–(A3) and (A7) hold. For \(\alpha , c \in (0,1]\), define the subset \({\mathcal {H}}_{\alpha ,c}^K \subset {\mathcal {J}}_\alpha ^K \) as follows: A function \(g \in {\mathcal {J}}_\alpha ^K\) is in \({\mathcal {H}}_{\alpha ,c}^K\), if it satisfies the following additional properties:

  1. (1’)

    \(\sup _{u \in {\mathbb {S}}_\ge } g(u,0)H^\alpha (u) = c\) and \(g(u,0)H^\alpha (u) \ge \min _i u_i\) for all \(u \in {\mathbb {S}}_\ge \).

  2. (5)

    For all \((u,t) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\),

    $$\begin{aligned} g(u,t) = m(\alpha ) \mathbb {E}_{u}^\alpha \ g(U_1,t-S_1). \end{aligned}$$
  3. (6)

    Introducing

    $$\begin{aligned} L_t : {\mathbb {S}}_\ge \times {\mathbb {R}}\rightarrow {\mathbb {R}}_>, \quad (u,z) \mapsto \frac{g(u,t+z)}{g(u,z)}, \end{aligned}$$

the following holds: For all \(t \in {\mathbb {R}}\), all compact \(C \subset \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), all \(u,w \in C\):

    $$\begin{aligned} \sup _{z \in {\mathbb {R}}} e^{-\alpha t} \left| {L_t(u,z)-L_t(w,z)} \right| \le 4 K K_C { (1 \vee e^{t}) }\left| {u-w} \right| ^\alpha , \end{aligned}$$

    with \(K_C := \left( \min \{ y_i \ : \ y \in C, \ i=1, \dots , d \} \right) ^{-1}\).

Here, validity of (1\(')\) and (5) implies that the function \(L_t\) is well defined and continuous on \({\mathbb {S}}_\ge \times {\mathbb {R}}\).

The set \({\mathcal {H}}_{\alpha ,c}^K\) is a compact subset of \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \) w.r.t. the topology of compact uniform convergence.

Property (6) will provide the uniform continuity needed in the Choquet–Deny type result, Proposition 10.1.

Proof

Note that (A6) together with (A2) implies that \([0,1] \subset I_\mu \), hence \(\mathbb {P}_u^\alpha \) is well defined for all \(\alpha \in (0,1]\).

The function \(L_t\) is well defined and continuous as soon as \(g(u,z)>0\) for all \((u,z) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\). For \(u \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \), this is a direct consequence of (\(1'\)), combined with the lower bounds

$$\begin{aligned} g(u,t) \ge {\left\{ \begin{array}{ll} g(u,0)e^{-\alpha t} &{} t \ge 0, \\ g(u,0)e^{(1-\alpha )t} &{} t \le 0. \end{array}\right. } \end{aligned}$$

But due to property (2) of condition \((C)\), \(\mathbb {P}_u^\alpha \{U_n \in \mathrm {int}\left( {\mathbb {S}}_\ge \right) \} >0\) for all \(u \in {\mathbb {S}}_\ge \) and some \(n\), hence using property (5), \(g(u,t)>0\) everywhere.

Since \({\mathcal {J}}_\alpha ^K\) is compact, it suffices to show that the subset \({\mathcal {H}}_{\alpha ,c}^K\) is closed. It is readily checked that properties (\(1'\)) and (6) persist under pointwise convergence of functions \(g_n \rightarrow g\). In order to show the closedness of property (5), uniform integrability of the sequence \(g_n(U_1, t-S_1)\) w.r.t. the measures \(\mathbb {P}_{u}^\alpha \) is needed. This is the content of the subsequent lemma. \(\square \)

Lemma 10.3

Assume (A0)–(A3) and (A7) and let \(\alpha \in (0,1]\). Then for all \((u,t) \in {\mathbb {S}}_\ge \times {\mathbb {R}}\), the family \(\left\{ g(U_1, t-S_1) \ : \ g \in {\mathcal {J}}_{\alpha }^K \right\} \) is uniformly integrable w.r.t \(\mathbb {P}_{u}^\alpha \).

Proof

Recalling the uniform bounds (9.5), valid for all \(g \in {\mathcal {J}}_\alpha ^K\) and the finiteness of

$$\begin{aligned} C:= \sup _{y \in {\mathbb {S}}_\ge } H^\alpha (y)^{-1} = \frac{1}{\inf _{y \in {\mathbb {S}}_\ge } H^\alpha (y)} \end{aligned}$$

due to Proposition 3.1, it is sufficient to show that

$$\begin{aligned} e^{(1-\alpha ) (t-S_1)} \mathbb {1}_{t\ge S_1} + e^{-\alpha (t-S_1)} \mathbb {1}_{t \le S_1} \end{aligned}$$

is integrable w.r.t. \(\mathbb {P}_{u}^\alpha \). Using the definition of \(\mathbb {P}_{u}^\alpha \), (3.4),

$$\begin{aligned} \mathbb {E}_{u}^\alpha e^{(1-\alpha )(t-S_1)} ~=&~ \frac{1}{H^\alpha (u)} \mathbb {E}_{u}^0 H^\alpha (U_1) e^{\alpha (t-S_1)} e^{(1-\alpha )(t-S_1)} \\ ~\le&~ C (\mathbb {E}N) e^t \mathbb {E}\left| {\mathbf {M}^\top u} \right| ~\le ~ C (\mathbb {E}N) e^t \mathbb {E}\left\| {\mathbf {M}} \right\| , \\ \mathbb {E}_{u}^\alpha e^{-\alpha (t-S_1)} ~=&~ \frac{1}{H^\alpha (u)} \mathbb {E}_{u}^0 H^\alpha (U_1) e^{\alpha (t- S_1)} e^{-\alpha (t-S_1)} ~\le ~ C \mathbb {E}N, \end{aligned}$$

hence it is integrable due to assumption (A6). \(\square \)

Lemma 10.4

Let \(\psi \) be an \(L\)-\(\alpha \)-regular FP of \({\mathcal {S}}_0\), with associated sequence \({D_{L,s}}\). Then for any sequence \((s_k)_{k \in {\mathbb {N}}}\) with \(s_k \rightarrow -\infty \), there is a subsequence \((s_n)_{n \in {\mathbb {N}}} \subset (s_k)_{k \in {\mathbb {N}}}\) such that \({D_{\infty }}= \lim _{n \rightarrow \infty } {D_{L,s_n}}\) exists and is an element of \({\mathcal {H}}_{\alpha ,c}^K\) for some \(c >0\). The convergence is uniform on compact subsets of \({\mathbb {S}}_\ge \times {\mathbb {R}}\).

Proof

By Lemma 9.6, \(({D_{L,s}})_{s \le s_0} \in {\mathcal {J}}_{\alpha }^{K}\) for some \(K>0\) and \(s_0 \in {\mathbb {R}}\). Hence for any sequence \(s_k \rightarrow -\infty \) there is a subsequence \((s_n)_{n \in {\mathbb {N}}}\), such that \({D_{L,s_n}}\) converges and its limit \({D_{\infty }}\) is again an element of \({\mathcal {J}}_{\alpha }^K\). By Lemma 9.5, the convergence is uniform on compact sets. Thus the burden of the proof is to show that the additional properties (\(1'\)), (5) and (6) hold for the limit \({D_{\infty }}\).

Step 1, Property (\(1'\)): Using (15.8), we infer that

$$\begin{aligned} 1-\psi (e^s u) \ge 1- \psi (e^s \min _{i} u_i \mathbf {1}) \ge \min _{i} u_i (1- \psi (e^s \mathbf {1})). \end{aligned}$$

Thus for any \(s\),

$$\begin{aligned} {D_{L,s}}(u,0) H^\alpha (u) = \frac{1- \psi (e^s u)}{e^{\alpha s}L(e^s)} \ge \min _{i} u_i \frac{1-\psi (e^s\mathbf {1})}{e^{\alpha s}L(e^s)}, \end{aligned}$$

and, as \(s \rightarrow -\infty \), this is bounded from below by \((\min _{i} u_i )\underline{K}\), since \(\psi \) is \(L\)-\(\alpha \)-regular. This proves the lower bound for \({D_{\infty }}(u,0)\). Due to property (1), it is also bounded from above, thus \(c:= \sup _{u \in {\mathbb {S}}_\ge } {D_{\infty }}(u,0)H^{\alpha }(u)\) is finite.

Step 2, Property (5): Write

$$\begin{aligned} G(x):= \mathbb {E}\left[ \prod _{i=1}^N \psi (\mathbf {T}_i^\top x) + \sum _{i=1}^N (1- \psi (\mathbf {T}_i^\top x)) - 1 \right] . \end{aligned}$$
(10.1)

\(G(x) \ge 0\) by a simple translation of the arguments in Durrett and Liggett [27, Lemma 2.4]. We use that \({\mathcal {S}}_0\psi = \psi \), a linearization and the many-to-one identity (4.2) to derive the following:

$$\begin{aligned} {D_{L,s}}(u,t)&= \frac{1- \psi (e^{s+t}u)}{H^\alpha (e^tu)L(e^s)e^{\alpha s}} = \frac{1- \mathbb {E}\prod _{i=1}^N \psi (e^{s+t}\mathbf {T}_i^\top u)}{H^\alpha (e^tu)L(e^s)e^{\alpha s}} \\&= \frac{\mathbb {E}\sum _{i=1}^N (1- \psi (e^{s+t}\mathbf {T}_i^\top u))}{H^\alpha (e^tu) L(e^s)e^{\alpha s}} - \frac{G(e^{s+t}u)}{H^\alpha (e^tu) L(e^s)e^{\alpha s}} \\&= \frac{1}{H^\alpha (u)} \, \mathbb {E}\left( \sum _{i=1}^N \frac{(1- \psi (e^{s+t}\mathbf {T}_i^\top u))}{H^\alpha (e^t \mathbf {T}_i^\top u) L(e^s)e^{\alpha s}} H^\alpha ( \mathbf {T}_i^\top u) \right) - \frac{G(e^{s+t}u)}{H^\alpha (e^tu) L(e^s)e^{\alpha s}} \\&= m(\alpha ) \mathbb {E}_u^\alpha \left( \frac{1- \psi (e^{s+t}\mathbf {M}^\top u)}{H^\alpha (e^t \mathbf {M}^\top u) L(e^s) e^{\alpha s}} \right) - \frac{G(e^{s+t}u)}{H^\alpha (e^tu) L(e^s)e^{\alpha s}}\\&= m(\alpha ) \mathbb {E}_{u}^\alpha {D_{L,s}}(U_1,t-S_1) - \frac{L(e^{s+t})}{L(e^s)H^\alpha (u)} \frac{G(e^{s+t}u)}{L(e^{s+t})e^{\alpha (s+t)}} \end{aligned}$$

Due to Lemma 10.3, the sequence \({D_{L,s_n}}(U_1,t- S_1)\) is uniformly integrable w.r.t. \(\mathbb {P}_{u}^\alpha \); hence it remains to show that the second term tends to zero as \(s \rightarrow - \infty \). The following argument closely follows the ideas of Durrett and Liggett [27, Lemma 2.6]. Since \(L\) is slowly varying at \(0\), the quotient \( L(e^{s+t})/L(e^s)\) remains bounded as \(s\) tends to \(- \infty \).

Consider \(G(ru)\) for \(r \in {\mathbb {R}}_>\) and \(u \in {\mathbb {S}}_\ge \). Defining the increasing function

$$\begin{aligned} f:{\mathbb {R}}_>\rightarrow {\mathbb {R}}_>,\quad f(s) = e^{-s} + s -1, \end{aligned}$$

and using the inequality \(s \le e^{-(1-s)}\) as well as (15.7), we calculate

$$\begin{aligned} G(ru)&\le ~ \mathbb {E}\left[ \exp \left( -\sum _{i=1}^N (1-\psi (\mathbf {T}_i^\top ru))\right) + \sum _{i=1}^N (1- \psi (\mathbf {T}_i^\top ru)) -1 \right] \\&=~ \mathbb {E}\, f\left( \sum _{i=1}^N (1- \psi (\mathbf {T}_i^\top ru))\right) \le \mathbb {E}\, f \left( \sum _{i=1}^N ( \left\| {\mathbf {T}_i} \right\| \vee 1) (1-\psi (r\mathbf {1})) \right) . \end{aligned}$$

Writing \(C(T)=\sum _{i=1}^N (\left\| {\mathbf {T}_i} \right\| \vee 1)\), use that \(\mathbb {E}C(T) \le (\mathbb {E}N)(1+ \mathbb {E}\left\| {\mathbf {M}} \right\| ) < \infty \), the regular variation of \(\psi \) and \(\lim _{s \rightarrow 0} f(s)/s = 0\) (recall \(f(s) = \frac{s^2}{2} + O(s^3)\) as \(s \rightarrow 0\)) to deduce that

$$\begin{aligned} 0 \!\le \! \limsup _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge } \frac{G(ru)}{L(r)r^\alpha } \!\le \!&\limsup _{r \rightarrow 0} \mathbb {E}\left[ \frac{ f\Bigl (C(T) (1-\psi (r\mathbf {1})) \Bigr )}{C(T)(1-\psi (r\mathbf {1}))} C(T) \frac{1- \psi (r\mathbf {1})}{L(r)r^\alpha } \right] \!=\! 0 \end{aligned}$$

Consequently, the limit satisfies \({D_{\infty }}(u,t) = m(\alpha ) \mathbb {E}_{u}^\alpha {D_{\infty }}(U_1,t- S_1)\), i.e. property (5) holds for \({D_{\infty }}\).

Step 3, Property (6): Fix \(t \in {\mathbb {R}}\), \(C \subset \mathrm {int}\left( {\mathbb {S}}_\ge \right) \) compact and compute for \(u,w \in C\), \(z \in {\mathbb {R}}\):

$$\begin{aligned} e^{-\alpha t} \left| {\frac{{D_{L,s}}(u,t+z)}{{D_{L,s}}(u,z)} - \frac{{D_{L,s}}(w,t+z)}{{D_{L,s}}(w,z)}} \right| = \left| {\frac{1- \psi (e^{s+t+z}u)}{1- \psi (e^{s+z}u)} - \frac{1- \psi (e^{s+t+z}w)}{1- \psi (e^{s+z}w)}} \right| . \end{aligned}$$

Using Lemma 9.3, there is \(s_1 \in {\mathbb {R}}\) such that the right hand side is bounded by

$$\begin{aligned} 4 K K_C { (1\vee e^t )} \left| {u-w} \right| ^\alpha \end{aligned}$$

as soon as \(e^{s+z} \le s_1\). For any fixed \(z\), this condition is eventually satisfied along the limit \(s_n \rightarrow - \infty \). Hence, in the limit,

$$\begin{aligned} e^{- \alpha t} \left| {\frac{{D_{\infty }}(u,t+z)}{{D_{\infty }}(u,z)} - \frac{{D_{\infty }}(w,t+z)}{{D_{\infty }}(w,z)}} \right| \le 4 K K_C {(1 \vee e^{t})} \left| {u-w} \right| ^\alpha \end{aligned}$$

for all \(z \in {\mathbb {R}}\). \(\square \)

10.2 Extremal points of \({\mathcal {H}}_{\alpha ,c}^K\)

As a compact subset of a locally convex topological vector space, namely \({\mathcal {C}}^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \), the set \({\mathcal {H}}_{\alpha ,c}^K\) (if non-void) is contained in the closed convex hull of its extremal points due to the Krein–Milman theorem [26, Theorem V.8.4]. Using Proposition 10.1, we now compute all possible extremal points.

Lemma 10.5

Let (A0)–(A4), (A7) hold and \(\alpha \in (0,1]\). The extremal points of \({\mathcal {H}}_{\alpha ,c}^K\) are contained in the set

$$\begin{aligned} \mathcal {E}_{\alpha ,c} := \left\{ (u,t) \mapsto c \frac{H^\chi (e^tu)}{H^\alpha (e^tu)} \ : \ \chi \in (0,1], \ m(\chi )=1 \right\} \end{aligned}$$
(10.2)

Recall that the functions \(H^s(\cdot )\) are \(s\)-homogeneous, thus

$$\begin{aligned} \frac{H^\chi (e^tu)}{H^\alpha (e^tu)} = \frac{e^{\chi t} H^\chi (u)}{e^{\alpha t} H^\alpha (u)} = e^{(\chi -\alpha )t} \frac{ H^\chi (u)}{ H^\alpha (u)}. \end{aligned}$$

Proof

Let \(g \in {\mathcal {H}}_{\alpha ,c}^K\) be extremal.

Step 1: Use property (5) to compute for all \(u \in {\mathbb {S}}_\ge \), \(s,t \in {\mathbb {R}}\)

$$\begin{aligned} g(u,t+s)&= m(\alpha ) \mathbb {E}_{u}^\alpha g(U_1,t+s-S_1)\nonumber \\&= m(\alpha ) \int g(y, t+s-z)\ \mathbb {P}_u^\alpha (U_1 \in dy, S_1 \in dz) \end{aligned}$$
(10.3)
$$\begin{aligned}&= m(\alpha ) \int \frac{g(y,t+s-z)}{g(y,s-z)} g(u,s) \frac{g(y,s-z)}{g(u,s)}\ \mathbb {P}_u^\alpha (U_1 \in dy, S_1 \in dz)\nonumber \\ \end{aligned}$$
(10.4)

Recall that by Lemma 10.2, \(g>0\), thus the denominators are positive. Using (10.3) with \(t=0\), it follows that

$$\begin{aligned} m(\alpha ) \int \frac{g(y,s-z)}{g(u,s)}\ \mathbb {P}_u^\alpha (U_1 \in dy, S_1 \in dz) =1. \end{aligned}$$

Hence (10.4) is a convex combination of functions \(g_{y,z}(u,s,t)=\frac{g(y,t+s-z)}{g(y,s-z)}g(u,s)\). Consequently, since \(g\) is extremal,

$$\begin{aligned} \frac{g(u,t+s)}{g(u,s)}=\frac{g(y,t+s-z)}{g(y,s-z)} \end{aligned}$$
(10.5)

for all \(u \in {{\mathbb {S}}_\ge }\), \(t,s \in {\mathbb {R}}\) and all \((y,z) \in {\mathrm {supp}}\, \ \mathbb {P}_u^\alpha ((U_1, S_1) \in \cdot )\). But this support is the same as \({\mathrm {supp}}\, \ \mathbb {P}_u((U_1, S_1) \in \cdot )\). This yields that \(L_t(u,s):= \frac{g(u,t+s)}{g(u,s)}\) satisfies

$$\begin{aligned} L_t(u,s) = \mathbb {E}_{u}{L_t(U_1,s-S_1)}. \end{aligned}$$
(10.6)

Step 2: Proposition 10.1 will be applied in order to show that \(L_t\) is constant on \({{\mathbb {S}}_\ge } \times {\mathbb {R}}\), i.e. that Eq. (10.5) holds for all \(u,y \in {{\mathbb {S}}_\ge }\), \(z, s, t \in {\mathbb {R}}\). The aperiodicity of \(\mu \), i.e. assumption (A4), enters here. Property (6) yields condition (ii) of the proposition, while (10.6) is its condition (i). It remains to show that \(L_t\) is bounded (for fixed \(t\)). If \(t \le 0\), by property (2), \(g(u,t+s)e^{\alpha (t+s)} \le g(u,s) e^{\alpha s}\), thus

$$\begin{aligned} 0 < L_t(u,s) = e^{- \alpha t} \frac{g(u,t+s)e^{\alpha (t+s)}}{g(u,s)e^{\alpha s}} \le e^{-\alpha t}. \end{aligned}$$

For \(t \ge 0\), use property (3) for an analogous argument. Referring also to Lemma 10.2, \(L_t \in {\mathcal {C}}_b^{}\left( {\mathbb {S}}_\ge \times {\mathbb {R}} \right) \), hence, as a bounded harmonic function, it is constant.

Step 3: Validity of (10.5) for any \(u,y \in {{\mathbb {S}}_\ge }\), \(t,s,z \in {\mathbb {R}}\) implies that for some \(\tilde{f}: {{\mathbb {S}}_\ge } \rightarrow (0,\infty )\), \(a \in {\mathbb {R}}_>\), \(b \in {\mathbb {R}}\),

$$\begin{aligned} g(u,t)= \tilde{f}(u) a e^{bt}. \end{aligned}$$

Considering properties (2) and (3), it follows that \(b \in [-\alpha , 1-\alpha ]\), i.e. \(b = \chi - \alpha \) for some \(\chi \in [0,1]\). Rewriting \(a \tilde{f}(u)=: H^\alpha (u)^{-1}f(u)\), it follows that

$$\begin{aligned} g(u,t) = \frac{f(u)}{H^\alpha (u)} e^{(\chi -\alpha )t} = \frac{f(u)e^{\chi t}}{H^\alpha (e^tu)}. \end{aligned}$$
(10.7)

It remains to compute the possible values of \(f\) and \(\chi \). To this end, use property (5), which together with the comparison formula (3.5) gives

$$\begin{aligned} f(u)&= e^{-\chi t}\, H^\alpha (e^tu) \, m(\alpha ) \mathbb {E}_u^\alpha \left( \frac{f(U_1)}{H^\alpha (e^{t-S_1}U_1)}e^{\chi (t-S_1)} \right) \\&= H^\alpha (e^tu)\, m(\alpha )\, \frac{1}{H^\alpha (e^tu)k(\alpha )} \, \mathbb {E}_{u}^0\, \left( {H^\alpha (e^{t-S_1}U_1) \frac{f(U_1)}{H^\alpha (e^{t-S_1}U_1)} e^{-\chi S_1} }\right) \\&= (\mathbb {E}N)\, \mathbb {E}_{u}^0\, {f(U_1)e^{-\chi S_1}} = (\mathbb {E}N)\, \mathbb {E}_{} \left( {f(\mathbf {M}^\top \cdot u)\left| {\mathbf {M}^\top u} \right| ^\chi } \right) \\&= (\mathbb {E}N) P_*^\chi f(u). \end{aligned}$$

This means that \(f\) is an eigenfunction of \(P_*^\chi \) with eigenvalue \(\frac{1}{\mathbb {E}N}\). Referring to the definition of \({\mathcal {H}}_{\alpha ,c}^K\), \(f >0\). By Proposition 3.1, scalar multiples of \(H^\chi \) are the only strictly positive eigenfunctions of \(P_*^{\chi }\), and the eigenvalue of \(P_*^{\chi }\) corresponding to \(H^\chi \) is \(k(\chi )\). Thus \(f =c H^\chi \), where \(c\) is given by property (1), and \(k(\chi )=\frac{1}{\mathbb {E}N}\), whence \(m(\chi )=(\mathbb {E}N) k(\chi )=1\). This shows that all extremal points of \({\mathcal {H}}_{\alpha ,c}^K\) lie in \(\mathcal {E}_{\alpha ,c}\). \(\square \)

10.3 Proof of Theorem 8.2

Now we can give the proof of Theorem 8.2. For the reader's convenience, we repeat its statement.

Theorem

Assume (A1)–(A4) and (A7). Let \(\alpha \in (0,1]\), not necessarily satisfying (A5).

  (1)

    If there is an \(L\)-\(\alpha \)-regular FP of \({\mathcal {S}}_0\), then \(m(\alpha )=1\), \(m'(\alpha ) \le 0\).

  (2)

    If \({\phi }\) is an \(L\)-\(\alpha \)-regular FP of \({\mathcal {S}}\), then for all fixed \(s >0\),

    $$\begin{aligned} \lim _{r \rightarrow 0} \, \sup _{u,v \in {\mathbb {S}}_\ge } \left| {\frac{1-{\phi }(sru)}{1- {\phi }(rv)}- s^\alpha \frac{H^\alpha (u)}{H^\alpha (v)}} \right| =0. \end{aligned}$$
    (8.1)
  (3)

    If \(\psi \) is an \(L\)-\(\alpha \)-elementary FP of \({\mathcal {S}}\), then there is \(K >0\) such that

    $$\begin{aligned} \lim _{r \rightarrow 0} \sup _{u \in {\mathbb {S}}_\ge }\left| {\frac{1- \psi (ru)}{L(r) r^\alpha } - K H^\alpha (u)} \right| =0. \end{aligned}$$
    (8.2)

Proof

(a): Considering Lemma 10.5, there are at most two values \(\chi _1, \chi _2 \in (0,1]\), \(\chi _1 < \chi _2\), \(m(\chi _1)=m(\chi _2)=1\), such that every function in \({\mathcal {H}}_{\alpha ,c}^K\) can be written as a convex combination

$$\begin{aligned} (u,t) \mapsto \frac{c}{H^\alpha (u)} \left( \lambda H^{\chi _1}(u) e^{(\chi _1-\alpha ) t} + (1-\lambda ) H^{\chi _2}(u) e^{(\chi _2 - \alpha ) t} \right) \end{aligned}$$
(10.8)

for \(\lambda \in [0,1]\). Observe that unless \(\alpha \in \{\chi _1, \chi _2\}\), none of these convex combinations is a bounded function in \(t\) for fixed \(u\).
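Indeed, for fixed \(u\), any such combination is of the form \(t \mapsto a_1 e^{(\chi _1-\alpha )t} + a_2 e^{(\chi _2-\alpha )t}\) with coefficients \(a_1, a_2 \ge 0\), not both zero, and

$$\begin{aligned} \sup _{t \in {\mathbb {R}}} e^{bt} < \infty \quad \Longleftrightarrow \quad b = 0, \end{aligned}$$

so boundedness in \(t\) forces every term with a positive coefficient to have exponent zero, i.e. \(\alpha \in \{\chi _1, \chi _2\}\).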

Let \(\varphi \) be an \(L\)-\(\alpha \)-regular FP. Recall the notation \(\tilde{\mathbf {1}}=\sqrt{d}^{-1} (1, \cdots , 1)^\top \in {\mathbb {S}}_\ge \). By Lemma 10.4, there is a subsequence \(s_n \rightarrow -\infty \), such that

$$\begin{aligned} {D_{\infty }}(\tilde{\mathbf {1}},t) = \lim _{n \rightarrow \infty } {D_{L,{s_n}}}(\tilde{\mathbf {1}},t) = \lim _{n \rightarrow \infty } \frac{1- \varphi (e^{s_n+t}\tilde{\mathbf {1}})}{H^\alpha ( \tilde{\mathbf {1}})L(e^{s_n})e^{\alpha (s_n+t)}}. \end{aligned}$$

Considering the definition of \(L\)-\(\alpha \)-regularity, the function \({D_{\infty }}(\tilde{\mathbf {1}},\cdot )\) is bounded from below by \(\underline{K}\) and from above by \(\overline{K}\). Hence, by the above, \(\alpha \in \{\chi _1, \chi _2\}\).

Supposing that \(\alpha =\chi _2\), the upper bound

$$\begin{aligned} \limsup _{r \rightarrow 0} \frac{1- \varphi (r\mathbf {1})}{L(r)r^\alpha } \le \overline{K} < \infty \end{aligned}$$

still implies, using the Tauberian theorem for LTs, Feller [28, XIII.5], that if \(X\) is a random variable with LT \(\varphi \), then for any \(\varepsilon >0\) there is \(C\) such that

$$\begin{aligned} {\mathbb {P}}_{} \left( {\left\langle X,\mathbf {1} \right\rangle > r} \right) \le C r^{-\chi _2+\varepsilon }. \end{aligned}$$

Thus there is \(\chi _1< s < \chi _2 \le 1\) with \(m(s)<1\) and \(\mathbb {E}\left| {X} \right| ^s \le \mathbb {E}\left\langle X,\mathbf {1} \right\rangle ^s < \infty \). But it can be deduced from Neininger and Rüschendorf [62, Lemma 3.3] (see Mentemeier [60, Section 4] for details) that the unique FP of \({\mathcal {S}}_0\) with finite \(s\)-moment, for \(m(s)<1\) and \(s \le 1\), is \(\delta _0\). Hence \(\alpha = \chi _1\). This proves \(m(\alpha )=1\); moreover \(m'(\alpha ) \le 0\), since otherwise \(m(s)<1\) for some \(s < \alpha \), and the same moment argument would force the fixed point to be trivial.

(b): Moreover, since \(\alpha = \chi _1\), the formula (10.8) for functions in \({\mathcal {H}}_{\alpha ,c}^K\), in particular for the limit \({D_{\infty }}\), simplifies to

$$\begin{aligned} {D_{\infty }}(u,t) = c \left( \lambda + (1-\lambda ) \frac{H^{\chi _2}(u)}{H^{\alpha }(u)} e^{(\chi _2 - \alpha ) t} \right) . \end{aligned}$$

Reasoning as before, the only possible choice is \(\lambda =1\), since otherwise \({D_{\infty }}\) would be unbounded. This proves that any subsequential limit of \({D_{L,s}}\) is a positive constant function; nevertheless, the value of the constant may depend on the subsequence. But this suffices to prove regular variation, since for any sequence \(t_n \rightarrow -\infty \) such that \({D_{L,{t_n}}}\) converges,

$$\begin{aligned} \lim _{n \rightarrow \infty } \frac{1-\varphi (e^{s+ t_n} u)}{1- \varphi (e^{t_n}v)} = \lim _{n \rightarrow \infty } \frac{{D_{L,{t_n}}}(u,s)}{{D_{L,{t_n}}}(v,0)} \frac{H^\alpha (e^s u)}{H^\alpha (v)} = \frac{H^\alpha (e^s u)}{H^\alpha (v)}, \end{aligned}$$

i.e. the limit is independent of the particular subsequence. Since every subsequential limit is the same, the asserted limit for \(r \rightarrow 0\) exists.

The convergence is uniform, since \({D_{L,{t_n}}} \rightarrow {D_{\infty }}\) uniformly on the compact set \({\mathbb {S}}_\ge \times [0,s]\) by Lemma 10.4.

(c): If now \(\psi \) is \(L\)-\(\alpha \)-elementary, then

$$\begin{aligned} \lim _{s \rightarrow -\infty } {D_{L,s}}(\tilde{\mathbf {1}},t) = \lim _{s \rightarrow - \infty } \frac{1- \psi (e^{s+t}\tilde{\mathbf {1}})}{H^\alpha (\tilde{\mathbf {1}})\, L(e^s)\, e^{\alpha (s+t)}} = K, \end{aligned}$$

hence for any subsequential limit \({D_{\infty }}\), it holds that \(t \mapsto {D_{\infty }}(\tilde{\mathbf {1}},t) \equiv K\), thus \(\lambda =1\) and consequently \({D_{\infty }} \equiv K\) on \({\mathbb {S}}_\ge \times {\mathbb {R}}\). This shows that any sequence \({D_{L,{s_n}}}\) with \(s_n \rightarrow - \infty \) has the same limit \(K\), hence the compact uniform convergence \({D_{L,s}} \rightarrow K\). In particular,

$$\begin{aligned} \lim _{s \rightarrow - \infty } \frac{1- \psi (e^{s+0}u)}{H^\alpha (u) L(e^s)e^{\alpha s}} =K \end{aligned}$$

uniformly on the compact set \({\mathbb {S}}_\ge \times \{0\}\). Substituting \(r=e^s\), this gives the assertion. \(\square \)

11 Uniqueness of fixed points of the inhomogeneous equation

In this section, we finish the proof of Theorem 1.2 (2) concerning the inhomogeneous equation. We are going to prove the following result.

Theorem 11.1

Assume (A0)–(A8) and \(\alpha <1\) with \(m'(\alpha ) <0\). Then a r.v. \(X\) is a fixed point of (1.1) if and only if its LT is of the form

$$\begin{aligned} \mathbb {E}\exp (-r \left\langle u,X \right\rangle ) = \psi _{Q,K}(ru) := \mathbb {E}\exp (-Kr^\alpha W(u) - r\left\langle u,W^* \right\rangle ), \quad u \in {\mathbb {S}}_\ge ,\, r \in {\mathbb {R}}_\ge \end{aligned}$$

for some \(K \ge 0\).

Recall that existence of fixed points for the inhomogeneous equation, i.e. the “if”-part in the Theorem above, has been proved in Theorem 5.1. Moreover, under the assumptions of Theorem 11.1, it follows from Theorems 5.1, 7.1 and 8.1 that a r.v. \(X\) is a fixed point of (1.1) with \(Q \equiv 0\) if and only if its LT is of the form

$$\begin{aligned} \mathbb {E}\exp (-r \left\langle u,X \right\rangle ) = \psi _{0,K}(ru) :=\mathbb {E}\exp (-Kr^\alpha W(u)), \qquad u \in {\mathbb {S}}_\ge ,\, r \in {\mathbb {R}}_\ge \end{aligned}$$

for some \(K \ge 0\). Note that \(\psi _{0,0} \equiv 1\) is the trivial fixed point.

Using the characterization of fixed points of \({\mathcal {S}}_0\), uniqueness of fixed points for \({\mathcal {S}}_Q\) will follow from a one-to-one correspondence given below. For probability laws \(\varrho \) and \(\eta \) on \({{\mathbb {R}}^d_\ge }\), define

$$\begin{aligned} l_s(\varrho , \eta ) := \inf \{ \mathbb {E}\left| {X-Y} \right| ^s \ : \ {\mathcal {L}}\left( X \right) =\varrho , \ {\mathcal {L}}\left( Y \right) =\eta \}. \end{aligned}$$

On the subspace of probability laws with a finite \(s\)-th moment, this quantity is always finite and defines the so-called minimal \(L_s\)-metric, a particular instance of a Wasserstein distance. It can also be used to measure the distance between random variables with an infinite moment of order \(s\); in that case, finiteness of \(l_s(\varrho , \eta )\) implies that \(\varrho \) and \(\eta \) have similar tail behavior, as will be seen in the proof of Theorem 11.1 below.
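For instance, for point masses \(\delta _a, \delta _b\) with \(a, b \in {{\mathbb {R}}^d_\ge }\), the only coupling is the deterministic pair \((a,b)\), so that

$$\begin{aligned} l_s(\delta _a, \delta _b) = \left| {a-b} \right| ^s. \end{aligned}$$

Note that for \(s \in (0,1]\), it is \(l_s\) itself, and not \(l_s^{1/s}\), that satisfies the triangle inequality.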

Proposition 11.2

Let \(s \in (0,1]\) and \(\mathbb {E}\left( \left\| {\mathbf {M}} \right\| ^s + \left| {Q} \right| ^s \right) < \infty \). Suppose that \(m(s) <1\). Then the following holds:

  (1)

    For any \(\eta _0\in {\mathcal {P}}_{}({{\mathbb {R}}^d_\ge })\) such that \({\mathcal {S}}_0\eta _0= \eta _0\), there exists exactly one \(\eta _Q\in {\mathcal {P}}_{}({{\mathbb {R}}^d_\ge })\) such that

    $$\begin{aligned} {\mathcal {S}}_Q\eta _Q= \eta _Q\quad \text { and } \quad l_s(\eta _0, \eta _Q) < \infty . \end{aligned}$$
  (2)

    For any \(\eta _Q\in {\mathcal {P}}_{}({{\mathbb {R}}^d_\ge })\) such that \({\mathcal {S}}_Q\eta _Q= \eta _Q\), there exists exactly one \(\eta _0\in {\mathcal {P}}_{}({{\mathbb {R}}^d_\ge })\) such that

    $$\begin{aligned} {\mathcal {S}}_0\eta _0= \eta _0\quad \text { and } \quad l_s(\eta _0, \eta _Q) < \infty . \end{aligned}$$

Source

The result can easily be obtained from Rüschendorf [68, Theorem 3.1], where the one-dimensional situation is covered. \(\square \)

Proof of Theorem 11.1

Assumptions (A5)–(A8) together with \(\alpha <1\) and \(m'(\alpha )<0\) imply the assumptions of Proposition 11.2 with \(s=\alpha +\varepsilon \) for some \(\varepsilon >0\). Writing \({\mathcal {F}}_{Q}\) and \({\mathcal {F}}_{0}\) for the sets of fixed points of \({\mathcal {S}}_Q\) resp. \({\mathcal {S}}_0\), we know that \({\mathcal {F}}_{0} =\{ \eta _{0,K} : K \ge 0\}\), where \(\eta _{0,K}\) is the probability measure with LT \(\psi _{0,K}\). By Proposition 11.2, the induced mapping \(\mathfrak {P} : {\mathcal {F}}_{Q} \rightarrow {\mathcal {F}}_{0}\), \(\mathfrak {P}\eta _Q = \eta _0\), is bijective; thus it suffices to show that \(\mathfrak {P}\left( \{ \eta _{Q,K} : K \ge 0 \}\right) = {\mathcal {F}}_{0}\).

Therefore, let us study further the property \(l_s(\eta _Q, \mathfrak {P}\eta _Q) < \infty \). Let \(\eta _Q \in {\mathcal {F}}_{Q}\) be arbitrary and let \((X_Q, X_{0})\) be a coupling of \(\eta _Q\) and \(\mathfrak {P}\eta _Q\) with \(\mathbb {E}\left| {X_{0}- X_Q} \right| ^s < \infty \). Using the inequality \(\left| {a^s-b^s} \right| \le \left| {a-b} \right| ^s\), valid for \(s \in (0,1]\) and \(a,b \in {\mathbb {R}}_\ge \), it follows that for all \(u \in {{\mathbb {R}}^d_\ge }\)

$$\begin{aligned} \mathbb {E}\left| {\left\langle u,X_Q \right\rangle ^s - \left\langle u,X_{0} \right\rangle ^s } \right| \le \mathbb {E}\left| {\left\langle u, X_Q-X_{0} \right\rangle } \right| ^s \le \left| {u} \right| ^s \mathbb {E}\left| {X_Q-X_{0}} \right| ^s < \infty . \end{aligned}$$
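(The elementary inequality used here is just the subadditivity of \(x \mapsto x^s\) on \({\mathbb {R}}_\ge \): for \(a \ge b \ge 0\), \(a^s = (b + (a-b))^s \le b^s + (a-b)^s\), hence \(\left| {a^s - b^s} \right| \le \left| {a-b} \right| ^s\).)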

Referring to [29, Lemma 9.4],

$$\begin{aligned} \int _0^\infty \frac{1}{r} \ \Bigl ( r^{s} \left| {{\mathbb {P}}_{} \left( {\left\langle u,X_Q \right\rangle >r} \right) -{\mathbb {P}}_{} \left( {\left\langle u,X_{0} \right\rangle >r} \right) } \right| \Bigr ) dr < \infty . \end{aligned}$$

From the fact that \(\int _1^\infty \frac{1}{r} dr \) diverges, it follows that necessarily

$$\begin{aligned} \limsup _{r \rightarrow \infty } r^{s} \left| { {\mathbb {P}}_{} \left( {\left\langle u,X_Q \right\rangle >r} \right) - {\mathbb {P}}_{} \left( {\left\langle u,X_{0} \right\rangle >r} \right) } \right| = 0. \end{aligned}$$

Since \(s > \alpha \), in particular

$$\begin{aligned} \lim _{r \rightarrow \infty } \left| { r^{\alpha } {\mathbb {P}}_{} \left( {\left\langle u,X_Q \right\rangle >r} \right) - r^{\alpha } {\mathbb {P}}_{} \left( {\left\langle u,X_{0} \right\rangle >r} \right) } \right| = 0. \end{aligned}$$
(11.1)

Consider now \(X_Q\mathop {=}\limits ^{{\mathcal {L}}}\eta _{Q,K}\) for some \(K \ge 0\). By Theorem 6.1, Eq. 6.3,

$$\begin{aligned} \lim _{r \rightarrow \infty } r^{\alpha } {\mathbb {P}}_{} \left( {\left\langle u,X_Q \right\rangle >r} \right) = \frac{K}{\Gamma (1-\alpha )} H^\alpha (u), \end{aligned}$$

and the only fixed point of \({\mathcal {S}}_0\) with the same tail behavior is \(\eta _{0,K}\), with the same \(K\). Hence, Eq. (11.1) implies that \(\mathfrak {P}\eta _{Q,K}=\eta _{0,K}\) for all \(K \ge 0\), which shows that \(\mathfrak {P}\left( \{\eta _{Q,K} : K \ge 0\}\right) = {\mathcal {F}}_{0}\). Since \(\mathfrak {P}\) is bijective, we conclude that \({\mathcal {F}}_{Q}=\{ \eta _{Q,K} : K \ge 0\}\).
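To see why the tail determines \(K\) among the fixed points of \({\mathcal {S}}_0\), recall that \(\mathbb {E}W(u) = W_0(u) = H^\alpha (u)\) (cf. Propositions 4.4 and 13.1), so that by dominated convergence,

$$\begin{aligned} 1- \psi _{0,K'}(ru) = \mathbb {E}\left[ 1- e^{-K'r^\alpha W(u)} \right] \sim K' r^\alpha H^\alpha (u) \qquad (r \rightarrow 0); \end{aligned}$$

by the Tauberian theorem, Feller [28, XIII.5], the fixed point \(\eta _{0,K'}\) therefore has the tail \(\frac{K'}{\Gamma (1-\alpha )} H^\alpha (u)\, r^{-\alpha }\), and distinct values of \(K'\) produce distinct tails. \(\square \)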

12 Critical case

In this section, we prove the final part of Theorem 1.2 and show that in the situation \(m(\alpha )=1\) with \(m'(\alpha )=0\) for \(\alpha \in (0,1]\), there still exists a nontrivial fixed point of \({\mathcal {S}}_0\), thereby extending the results of Buraczewski et al. [17] to the situation \(m(1)=1\), \(m'(1)=0\). The existence of a fixed point in the critical case is proved by the same approximation argument as in Durrett and Liggett [27, Theorem 3.5]. This is why we just sketch the main ideas and refer the interested reader to Mentemeier [60, Section 10] for details.

For \(\chi \in (0, \alpha )\), define a biased version of \({\mathcal {S}}_0\) by

$$\begin{aligned} {\mathcal {S}}_0^\chi \ : \ \nu \mapsto {\mathcal {L}}\left( \sum _{i=1}^N m(\chi )^{-1/\chi } \mathbf {T}_i X_i \right) , \end{aligned}$$
(12.1)

where \(X_i\) are i.i.d. with law \(\nu \) and independent of \(T\). Writing

$$\begin{aligned} T_{\chi }=(\mathbf {T}_{\chi ,i})_{i\ge 1} = \left( m(\chi )^{-1/\chi }\mathbf {T}_{i}\right) _{i \ge 1}, \end{aligned}$$

define \(\mu _\chi \) and \(m_\chi \) in terms of \(T_\chi \) as \(\mu \) and \(m\) were defined in terms of \(T\).

Then it is readily checked that (A0)–(A3) carry over (see Mentemeier [60, Lemma 10.3]), that \(m_\chi (\chi )=1\) and \(m'_{\chi }(\chi ) <0\), and that (A6a) for \(T\) implies the validity of (A6) for \(T_\chi \), with \(\alpha \) replaced by \(\chi \).
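The claims concerning \(m_\chi \) may be checked as follows: replacing each \(\mathbf {T}_i\) by \(c\, \mathbf {T}_i\) for a constant \(c >0\) multiplies \(m(s)\) by \(c^s\), hence

$$\begin{aligned} m_\chi (s) = m(\chi )^{-s/\chi }\, m(s), \qquad m_\chi (\chi )=1, \qquad m'_{\chi }(\chi ) = \frac{m'(\chi )}{m(\chi )} - \frac{\log m(\chi )}{\chi } < 0, \end{aligned}$$

the final inequality holding since \(m\) is convex with \(m(\alpha )=1\) and \(m'(\alpha )=0\), so that \(m(\chi )>1\) and \(m'(\chi ) \le 0\) for \(\chi \in (0,\alpha )\) (excluding the degenerate case where \(m \equiv 1\) on an interval).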

Hence Theorem 5.1 applied to \({\mathcal {S}}_0^\chi \) gives the existence of a nontrivial fixed point with LT \(\psi _\chi \), say. Fix \(u_0 \in {\mathbb {S}}_\ge \). Then, possibly after rescaling, \(\psi _\chi (u_0)=1/2\). In this manner, construct a family \((\psi _\chi )_{\chi \in (0, \alpha )}\), such that \(\psi _\chi (u_0)=1/2\) and \({\mathcal {S}}_0^\chi \psi _\chi = \psi _\chi \) for all \(\chi \in (0,\alpha )\).

For any sequence \(\chi _n \rightarrow \alpha \), there is a convergent subsequence \(\psi _{\chi _{n_k}}\) with limit \(\psi \), which is again a LT of a (sub-)probability measure, with \(\psi (u_0)=1/2\). It can be checked that \({\mathcal {S}}_0\psi (ru) = \psi (ru)\) for all \((u,r) \in {\mathbb {S}}_\ge \times {\mathbb {R}}_>\) (see Mentemeier [60, Lemma 10.4] for details). This is used to infer that \(\psi (0^+)=1\), hence \(\psi \) is the LT of a probability measure on \({{\mathbb {R}}^d_\ge }\), and it is nontrivial due to \(\psi (u_0)=1/2\).

In the particular case \(\alpha =1\), it is shown in Buraczewski et al. [17, Theorem 2.3] (under some restrictions on \(N\)) that the existence of a nontrivial FP with finite expectation is equivalent to \(m'(1)<0\). Thus if \(m'(1)=0\), then the nontrivial FP constructed above necessarily has infinite expectation. It is a.s. finite since \(\psi (0^+)=1\).

Further properties of fixed points in the critical case will be studied in Kolesko and Mentemeier [46].

13 Proofs of the results from Sects. 4.1 to 4.3

Proof of Lemma 4.1

\(R_n \ge 0\) for all \(n \in {\mathbb {N}}\), thus it suffices to show that \(\limsup _{n \rightarrow \infty } R_n = 0\). Writing \(R_{m,l} = \max _{\left| {w} \right| =ml} \left\| {\mathbf{L}(w)} \right\| \), it follows that

$$\begin{aligned} \limsup _{n \rightarrow \infty } R_n \le \sum _{k=0}^{l-1} \sum _{\left| {v} \right| =k} \left\| {\mathbf{L}(v)} \right\| \limsup _{m \rightarrow \infty } \left[ R_{m,l} \right] _{v}. \end{aligned}$$

Thus it is enough to consider \(\limsup _{m \rightarrow \infty } \left[ R_{m,l} \right] _{v}\) for some \(l \in {\mathbb {N}}\) and \(\left| {v} \right| \le l\). By the assumption \(\alpha \in \mathrm {int}\left( I_\mu \right) \), there is \(s \in I_{\mu }\) with \(s > \alpha \), such that \(m(s)<1\). Referring to the definition of \(m(s)\), there is \(l \in {\mathbb {N}}\) such that

$$\begin{aligned} \varrho (s) := \mathbb {E}\sum _{\left| {v} \right| =l} \left\| {\mathbf{L}(v)} \right\| ^s = (\mathbb {E}N)^l \mathbb {E}\left\| {\varvec{\Pi }_l} \right\| ^s < 1. \end{aligned}$$

Fix this \(l\). Define \(Z_0=1\) and

$$\begin{aligned} Z_m = \sum _{\left| {v} \right| =l} \left\| {\mathbf{L}(v)} \right\| ^s \left[ Z_{m-1} \right] _{v} = \sum _{\left| {v} \right| =ml} \prod _{k=1}^m \left\| { \left[ \mathbf{L}(v|kl) \right] _{v|(k-1)l} } \right\| ^s \end{aligned}$$

as the sum over the norms of the weights, taken in blocks of \(l\) generations. Hence \(\mathbb {E}Z_1 = \varrho (s)\) and \((\left[ R_{m,l} \right] _{v})^s \le \left[ Z_m \right] _{v}\) for all \(m \in {\mathbb {N}}\), \(v \in \mathbb {V}\).
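The martingale property used below can be verified by conditioning on the first \(m-1\) blocks: the block sums rooted at the nodes \(v\) with \(\left| {v} \right| =(m-1)l\) are i.i.d. copies of \(Z_1\), independent of \({\mathfrak {B}}_{(m-1)l}\), whence

$$\begin{aligned} \mathbb {E}\left[ Z_m \, | \, {\mathfrak {B}}_{(m-1)l} \right] = \sum _{\left| {v} \right| =(m-1)l} \left( \prod _{k=1}^{m-1} \left\| { \left[ \mathbf{L}(v|kl) \right] _{v|(k-1)l} } \right\| ^s \right) \mathbb {E}Z_1 = \varrho (s)\, Z_{m-1}. \end{aligned}$$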

Considering the filtration \({\mathcal {F}}_m := {\mathfrak {B}}_{ml}\) and using the independence of \(\left[ {\mathcal {T}} \right] _{v}\) and \({\mathfrak {B}}_{\left| {v} \right| }\), it can easily be seen that \(\tilde{Z}_m := \varrho (s)^{-m} Z_m\) is a nonnegative \({\mathcal {F}}_m\)-martingale. Thus it converges to a random variable \(\tilde{Z}\), and by Fatou’s lemma, \(\mathbb {E}\tilde{Z} \le \mathbb {E}\tilde{Z}_1 = \varrho (s)^{-1}\, \mathbb {E}Z_1 = 1\). In particular, \(\tilde{Z}\) is almost surely finite, and this gives the final estimate

$$\begin{aligned} \limsup _{m \rightarrow \infty } (\left[ R_{m,l} \right] _{v})^s \le \limsup _{m \rightarrow \infty } \varrho (s)^m \left[ \tilde{Z}_m \right] _{v} = 0 \quad \mathbb {P}\text {-a.s.}\end{aligned}$$

for all \(\left| {v} \right| \le l\). \(\square \)

13.1 Proof of Proposition 4.4

The following result is the main tool to prove the mean convergence of \(W_n(u)\).

Proposition 13.1

Let \(u \in {\mathbb {S}}_\ge \). For \(r >0\), let

$$\begin{aligned} A(r) = \sum _{n=0}^\infty \mathbb {1}_{}(H^\alpha (e^{-S_n}U_n) > r^{-1}). \end{aligned}$$

Suppose that there is a random variable \(Z\) such that

$$\begin{aligned} {\mathbb {P}}_{} \left( {\frac{\sum _{i=1}^N H^\alpha (\mathbf {T}_i^\top x)}{H^\alpha (x)} > s} \right) \le {\mathbb {P}}_{} \left( {Z>s} \right) \qquad \forall x \in {{\mathbb {R}}^d_\ge }{\setminus } \{0\}, s \ge 0 \end{aligned}$$
(13.1)

(stochastic domination) and a function \(L\), slowly varying at infinity, such that

$$\begin{aligned} \sup _{r >0} \frac{A(r)}{L(r)} < \infty \qquad {\mathbb {P}}_{u}^{\alpha }\text {-a.s.} \end{aligned}$$
(13.2)

If \(\mathbb {E}Z L(Z) <\infty \), then \(\mathbb {E}W(u)= W_{0}(u)\).

Source

[10, Theorem 1.1(i)], adapted to the present notation. \(\square \)

Proof of Proposition 4.4

We show that under the assumptions of Proposition 4.4, Proposition 13.1 applies for any \(u \in {\mathbb {S}}_\ge \). First, we prove that the slowly varying function \(L(r)=1+\log (1+ r)\) satisfies (13.2). Since \(\sup _{u \in {\mathbb {S}}_\ge } H^\alpha (u) \le 1\),

$$\begin{aligned} A(r) \le \sum _{n=0}^\infty \mathbb {1}_{}( e^{-\alpha S_n} > r^{-1}) \le \sum _{n=0}^\infty \mathbb {1}_{}(\alpha S_n < \log r) \le \tau (\log (1+ r)), \end{aligned}$$

where

$$\begin{aligned} \tau (s) := \sup \{n \in {\mathbb {N}}_0: \alpha S_n \le s\}. \end{aligned}$$

The assumptions of the strong law of large numbers for \(S_n\) under \(\mathbb {P}_u^\alpha \), Proposition 3.2, are exactly the assumptions imposed here, hence \(\lim _{n \rightarrow \infty } \frac{S_n}{n} = -m'(\alpha ) > 0\) \(\mathbb {P}_u^\alpha \)-a.s. Now on the one hand, \(\sup _{r \in (0,1)} A(r)/L(r)\) is readily bounded by \(\tau (0)\), which is finite since \(S_n\) is transient (drifting to \(+\infty \)). On the other hand, it is as well a consequence of the strong law of large numbers that

$$\begin{aligned} \lim _{s \rightarrow \infty } \frac{\tau (s)}{s}= \frac{1}{(-m'(\alpha ))} \quad \mathbb {P}_u^\alpha \text {-a.s.}\end{aligned}$$

(see the argument in [15, bottom of p. 219]) and consequently, \(\sup _{r >1} A(r)/L(r)\) is bounded \(\mathbb {P}_u^\alpha \)-a.s., too.

As the second step, observe that, upon defining

$$\begin{aligned} Z := C \sum _{i=1}^N \left\| {\mathbf {T}_i^\top } \right\| ^\alpha \end{aligned}$$

with \(C:= \sup _{u \in {\mathbb {S}}_\ge } H^\alpha (u)^{-1} < \infty \), (13.1) is satisfied. The finiteness of \(\mathbb {E}Z L(Z)\) is then a direct consequence of assumption (A6).
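Indeed, (13.1) even holds with almost sure domination: by the \(\alpha \)-homogeneity of \(H^\alpha \), together with \(\sup _{u \in {\mathbb {S}}_\ge } H^\alpha (u) \le 1\) and the definition of \(C\),

$$\begin{aligned} \frac{\sum _{i=1}^N H^\alpha (\mathbf {T}_i^\top x)}{H^\alpha (x)} \le \frac{\sum _{i=1}^N \left\| {\mathbf {T}_i^\top } \right\| ^\alpha \left| {x} \right| ^\alpha }{C^{-1} \left| {x} \right| ^\alpha } = Z \qquad \text {for all } x \in {{\mathbb {R}}^d_\ge }{\setminus } \{0\}. \end{aligned}$$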

Having thus proved that \(W_n(u)\) converges \(\mathbb {P}\text {-a.s.}\) to a nontrivial limit \(W(u)\) for all \(u \in {\mathbb {S}}_\ge \), let us discuss whether \(u \mapsto W(u)\) is measurable. We can write \(W_n(u):=w_n(u, (\mathbf{L}(v))_{v \in \mathbb {V}})\) as a measurable function of \(u\) and the branch weights. Fix a countable dense subset \(\mathbb {S}:= \{u_k : k \in {\mathbb {N}}\} \subset {\mathbb {S}}_\ge \). Then there is a measurable exceptional set \(E \subset \Omega \) with \({\mathbb {P}}_{} \left( {E} \right) =0\), such that for all \(\omega \in E^c\), \(\lim _{n \rightarrow \infty } w_n(u, (\mathbf{L}(v))_{v \in \mathbb {V}}(\omega ))\) exists for all \(u \in \mathbb {S}\). Using a sandwich argument, the limit exists for all \(u \in {\mathbb {S}}_\ge \) and \(\omega \in E^c\). Define \(w\) as this limit on \(E^c\), and let \(w \equiv 1\) on \(E\). Then \(w\) is a measurable function on \({\mathbb {S}}_\ge \times {\mathcal {M}}_\ge ^\mathbb {V}\) and \(W(u)=w(u, (\mathbf{L}(v))_{v \in \mathbb {V}})\) \(\mathbb {P}\text {-a.s.}\) \(\square \)

13.2 Proof of Lemma 4.7

Proof of Lemma 4.7

The first part of the proof is valid for any anticipating and \(\mathbb {P}_{u}\text {-a.s.}\) dissecting HSL \({\mathcal {I}}_{}\). Following the lines of the proof of [11, Lemma 6.1], write

$$\begin{aligned} E_{{\mathcal {I}}_{}}(j) = \{ v \in \mathbb {V}\ : \ \left| {v} \right| =j, v \in {\mathcal {I}}_{}\} \end{aligned}$$

and

$$\begin{aligned} A_{{\mathcal {I}}_{}}(j) = \{ v \in \mathbb {V}\ : \ \left| {v} \right| = j, v \text { has no ancestor in } {\mathcal {I}}_{}\}. \end{aligned}$$

Consequently, if \(v \in A_{{\mathcal {I}}_{}}(j)\), then \(\sigma ((T(v|k))_{k=0}^{\left| {v} \right| }) \subset {\mathfrak {B}}_{{\mathcal {I}}_{}}\), while \((T(vw))_{w \in \mathbb {V}}\) and \({\mathfrak {B}}_{{\mathcal {I}}_{}}\) are independent for \(v \in E_{{\mathcal {I}}_{}}(j)\). Then for \(m \in {\mathbb {N}}\),

$$\begin{aligned}&\mathbb {E}[ W_m(u) | {\mathfrak {B}}_{{\mathcal {I}}_{}}]\\&\quad = \mathbb {E}\left[ \left. \sum _{j=1}^m \sum _{v \in E_{{\mathcal {I}}_{}}(j)} \sum _{\left| {w} \right| =m-j} H^\alpha (\mathbf{L}(vw)^\top u) + \sum _{v \in A_{{\mathcal {I}}_{}}(m)} H^\alpha (\mathbf{L}(v)^\top u) \right| {\mathfrak {B}}_{{\mathcal {I}}_{}} \right] \\&\quad = \sum _{j=1}^m \sum _{v \in E_{{\mathcal {I}}_{}}(j)} \mathbb {E}\left[ \left. \sum _{\left| {w} \right| =m-j} H^\alpha (\left[ \mathbf{L}(w) \right] _{v}^\top \mathbf{L}(v)^\top u) \right| {\mathfrak {B}}_{{\mathcal {I}}_{}} \right] + \sum _{v \in A_{{\mathcal {I}}_{}}(m)} H^\alpha (\mathbf{L}(v)^\top u) \\&\quad = \sum _{j=1}^m \sum _{v \in E_{{\mathcal {I}}_{}}(j)} H^\alpha (\mathbf{L}(v)^\top u) + \sum _{v \in A_{{\mathcal {I}}_{}}(m)} H^\alpha (\mathbf{L}(v)^\top u) \qquad \mathbb {P}\text {-a.s.}\end{aligned}$$

Since \({\mathcal {I}}_{}\) is \(\mathbb {P}\text {-a.s.}\) dissecting, in the limit \(m \rightarrow \infty \),

$$\begin{aligned} \mathbb {E}[ W(u)| {\mathfrak {B}}_{{\mathcal {I}}_{}} ] = \sum _{v \in {\mathcal {I}}_{}} H^\alpha (\mathbf{L}(v)^\top u) = W_{{\mathcal {I}}_{}}(u) \qquad \mathbb {P}\text {-a.s.}\end{aligned}$$

this convergence being valid as well in \(L^1(\mathbb {P})\), for the right hand side can be bounded by the uniformly integrable sequence \(2 W_m(u)\) in every step.

Now for the sequence of stopping lines \({\mathcal {I}}_{t}^u\), we have that \(W(u)\) is measurable w.r.t. \({\mathfrak {B}}_\infty \). Then by [15, Theorem 5.21],

$$\begin{aligned} \lim _{t \rightarrow \infty } W_{{\mathcal {I}}_{t}^u}(u) = \lim _{t \rightarrow \infty } \mathbb {E}[W(u) | {\mathfrak {B}}_{{\mathcal {I}}_{t}^u}] = \mathbb {E}[W(u) | {\mathfrak {B}}_\infty ] = W(u) \end{aligned}$$

\(\mathbb {P}\text {-a.s.}\) and in \(L^1(\mathbb {P})\). \(\square \)

14 Proof of Theorem 4.10

In this section, we prove Theorem 4.10, which states that

$$\begin{aligned} \lim _{t \rightarrow \infty } W_{{\mathcal {I}}_{t}^u}^f = \lim _{t \rightarrow \infty }\sum _{v \in {\mathcal {I}}_{t}^u} H^\alpha (\mathbf{L}(v)^\top u) f(U^u(v),S^u(v)-t) = \gamma W(u) \end{aligned}$$

in \(\mathbb {P}\)-probability and in \(L^1(\mathbb {P})\), where \(\gamma = \int f(y,s) \varrho (dy,ds)\) is the limit of \(\mathbb {E}_u^\alpha f(U(t), R(t))\) for \(t \rightarrow \infty \), and \(f\) is any bounded continuous function on \({\mathbb {S}}_\ge \times {\mathbb {R}}\), or a slight generalization thereof.

As a first step, we need the following stronger version of Theorem 4.9. Write

$$\begin{aligned} F(u,t):=\frac{\mathbb {E}W_{{\mathcal {I}}_{t}^u}^f}{H^\alpha (u)}. \end{aligned}$$

Proposition 14.1

Under the assumptions of Theorem 4.9, it holds that

$$\begin{aligned} \lim _{t \rightarrow \infty } \sup _{u \in {\mathbb {S}}_\ge } \left| {F(u,t)- \gamma } \right| =0. \end{aligned}$$

Proof

Recall that due to the many-to-one identity (4.9), \(F(u,t) = \mathbb {E}_u^\alpha f(U(t),R(t))\), and that, subject to the assumptions of Theorem 4.9, Kesten’s renewal theorem applies in order to show that \(\lim _{t \rightarrow \infty } \mathbb {E}_u^\alpha f(U(t),R(t)) = \gamma \). Melfi [55] proved (ibid., Theorem 2) that under the same assumptions, the convergence is uniform in \(u \in {\mathbb {S}}_\ge \). This implies the uniform convergence asserted above. \(\square \)

Throughout the remainder of this section, the assumptions of Theorem 4.10 are in force.

Lemma 14.2

We have the uniform integrability

$$\begin{aligned}&\lim _{q \rightarrow \infty } \sup _{u \in {\mathbb {S}}_\ge } \sup _{t \in {\mathbb {R}}_>} \mathbb {E}\left( W_{{\mathcal {I}}_{t}^u}^f(u)\, \mathbb {1}_{\{W_{{\mathcal {I}}_{t}^u}^f(u) > q \}} \right) \\&\quad = \lim _{q \rightarrow \infty } \sup _{u \in {\mathbb {S}}_\ge } \sup _{t \in {\mathbb {R}}_>} \mathbb {E}\left( W_{{\mathcal {I}}_{t}^u}(u)\, \mathbb {1}_{\{W_{{\mathcal {I}}_{t}^u}(u) > q \}} \right) = 0. \end{aligned}$$

Proof

Since \(0 \le W_{{\mathcal {I}}_{t}^u}^f (u)\le \left| {f} \right| _\infty W_{{\mathcal {I}}_{t}^u}(u)\) and \(f\) is assumed to be bounded, it suffices to prove the second equality.

Let \(u_1, \dots , u_d \in {\mathbb {S}}_\ge \) be the standard basis of \({\mathbb {R}}^d\); then for all \(u \in {\mathbb {S}}_\ge \), \(n \in {\mathbb {N}}_0\), it follows directly from the definition (1.14) of \(W_n(u)\) that

$$\begin{aligned} W_n(u) \le \sum _{i=1}^d { W_n(u_i)}, \end{aligned}$$

and consequently, due to the \(\mathbb {P}\text {-a.s.}\) convergence, \( W(u) \le \sum _{i=1}^d W(u_i)\). Now for fixed \(q\), the function \(g(t):= (t-q)^+\) is nondecreasing and convex, with \(t\, \mathbb {1}_{\{t>2q\}} \le 2\, g(t)\) for \(t \ge 0\). Using the estimate above, the conditional Jensen inequality and Lemma 4.7, we compute that for all \(u \in {\mathbb {S}}_\ge \), \(t \in {\mathbb {R}}_\ge \)

$$\begin{aligned} \mathbb {E}\, g(W_{{\mathcal {I}}_{t}^u}(u)) =&~ \mathbb {E}\, g(\mathbb {E}[ W(u) | {\mathfrak {B}}_{{\mathcal {I}}_{t}^u}]) \le \mathbb {E}\, \mathbb {E}[ g(W(u)) |{\mathfrak {B}}_{{\mathcal {I}}_{t}^u}] \\ =&~ \mathbb {E}\, g(W(u)) \le ~ \mathbb {E}\, g \left( \sum _{i=1}^d W(u_i) \right) . \end{aligned}$$

Since \(\sum _{i=1}^d W(u_i)\) is \(\mathbb {P}\)-integrable, the last expression tends to zero as \(q \rightarrow \infty \); the claimed uniform integrability then follows from \(t\, \mathbb {1}_{\{t > 2q\}} \le 2 g(t)\). \(\square \)

Lemma 14.3

Let \((t_n)_{n \in {\mathbb {N}}} \subset {\mathbb {R}}_>\) be a sequence with \(\lim _{n \rightarrow \infty } t_n = \infty \). Then we have

$$\begin{aligned} \lim _{n \rightarrow \infty } \sum _{v \in {\mathcal {I}}_{t_{n}/2}^u} e^{-\alpha S^u(v)} H^{\alpha }(U^u(v)) F(U^u(v),t_n-S^u(v)) = \gamma W(u) \quad \mathbb {P}\text {-a.s.}\end{aligned}$$

Proof

This follows from the \(\mathbb {P}\text {-a.s.}\) convergence of \(W_{{\mathcal {I}}_{t_{n}/2}^u}(u)\), proved in Lemma 4.7, together with the uniform convergence of \(F\) to \(\gamma \), shown in Proposition 14.1. \(\square \)

Set

$$\begin{aligned} \xi _{t}(u):= \frac{W_{{\mathcal {I}}_{t}^u}^f(u)}{H^{\alpha }(u)F(u,t)}. \end{aligned}$$

Then each \(\xi _t(u)\) has mean one, and the uniform convergence assertion of Proposition 14.1, together with Lemma 14.2 above, implies the following corollary:

Corollary 14.4

We have the uniform integrability

$$\begin{aligned} \lim _{q \rightarrow \infty } \sup _{u \in {\mathbb {S}}_\ge } \sup _{t \in {\mathbb {R}}_>} \, \mathbb {E}_{} \left( { \xi _t(u) \mathbb {1}_{\{ \left| {\xi _t(u)} \right| > q \}} } \right) = 0. \end{aligned}$$

Lemma 14.5

If \(r<t\), then for all \(u \in {\mathbb {S}}_\ge \), it holds \(\mathbb {P}\text {-a.s.}\) that

$$\begin{aligned} W_{{\mathcal {I}}_{t}^u}^f(u) = \sum _{v \in {\mathcal {I}}_{r}^u} e^{-\alpha S^u(v)} H^{\alpha }(U^u(v)) \, F(U^u(v),t-S^u(v)) \, {\left[ \xi _{t-S^u(v)}(U^u(v)) \right] _{v}}. \end{aligned}$$
(14.1)

Note that \(W_s^f(u):=f(u,-s)\) for \(s<0\).

Proof

If \(r <t\), then \({\mathcal {I}}_{r}^u \prec {\mathcal {I}}_{t}^u\) in the sense that for every \(x \in {\mathcal {I}}_{t}^u\) there is \(v \in {\mathcal {I}}_{r}^u\) with \(v \prec x\), i.e. there is \(w \in \mathbb {V}\) s.t. \(x=vw\). Hence, we have the general decomposition

$$\begin{aligned} W_{{\mathcal {I}}_{t}^u}^f \!&= \! \sum _{v \in {\mathcal {I}}_{r}^u} \sum _{w \in \mathbb {V}} H^\alpha (\mathbf{L}(vw)^\top \cdot u) f(U^u(vw), S^u(vw)\!-\!t) \mathbb {1}_{\{S^u(vw) \!>\! t, S^u(vw|k) \le t\, \forall k \!<\! \left| {vw} \right| \}} \\&= \sum _{v \in {\mathcal {I}}_{r}^u} \sum _{w \in \mathbb {V}} e^{-\alpha (S^u(v)+\left[ S^{U(v)}(w) \right] _{v})} \, H^{\alpha }\left( \left[ \mathbf{L}(w)^\top \right] _{v} \cdot U^u(v)\right) \\&\quad \times f\left( \left[ \mathbf{L}(w)^\top \right] _{v} \cdot U^u(v), S^u(v) + \left[ S^{U(v)}(w) \right] _{v}-t \right) \\&\quad \times \mathbb {1}_{\{\left[ S^{U(v)}(w) \right] _{v} > t - S^u(v),\ \left[ S^{U(v)}(w|k) \right] _{v} \le t - S^u(v)\ \forall k < \left| {w} \right| \}} \\&= \sum _{v \in {\mathcal {I}}_{r}^u} e^{-\alpha S^u(v)} {\left[ W_{{\mathcal {I}}_{t-S^u(v)}^{U^u(v)}}^f(U^u(v)) \right] _{v}} \\&= \sum _{v \in {\mathcal {I}}_{r}^u} e^{-\alpha S^u(v)} H^{\alpha }(U^u(v)) \, F(U^u(v),t-S^u(v)) \, {\left[ \xi _{t-S^u(v)}(U^u(v)) \right] _{v}} \end{aligned}$$

\(\square \)

Now we can give the proof of Theorem 4.10.

Proof of Theorem 4.10

Due to Lemma 14.5, Eq. (14.1), for \(\mathbb {P}\)-a.e. \(\omega \in \Omega \), \(W_{{\mathcal {I}}_{t_{n}}^u}^f(u)\) constitutes a triangular array with respect to the probabilities \({\mathbb {P}}_{} \left( {\cdot \, | {\mathfrak {B}}_{{\mathcal {I}}_{t_{n}/2}^u} } \right) (\omega )\). By Corollary 14.4 and Lemma 14.3, we can use Cohn and Jagers [24, Corollary 5] for the triangular array to infer the convergence

$$\begin{aligned} \lim _{n \rightarrow \infty } {\mathbb {P}}_{} \left( { \left| {W_{{\mathcal {I}}_{t_n}^u}^f(u) - \gamma W(u) } \right| > \varepsilon \, | \, {\mathfrak {B}}_{{\mathcal {I}}_{t_{n}/2}^u} } \right) (\omega ) = 0 \end{aligned}$$

for \(\mathbb {P}\)-a.e. \(\omega \) and every \(\varepsilon >0\). Using dominated convergence, we infer the convergence \(W_{{\mathcal {I}}_{t_n}^u}^f(u) \rightarrow \gamma W(u)\) in \(\mathbb {P}\)-probability. Together with the uniform integrability of \(W_{{\mathcal {I}}_{t}^u}^f\), proved in Lemma 14.2, this yields \(L^1(\mathbb {P})\)-convergence. \(\square \)