1 Introduction

This paper studies fine properties of one of the fundamental models of positive random measures illustrating multiplicative chaos theory, namely limits of log-infinitely divisible cascades.

Multiplicative chaos theory originates mainly from the intermittent turbulence modeling proposed by Mandelbrot [27], who introduced a construction of measure-valued log-Gaussian multiplicative processes. As its mathematical treatment was hard to achieve in complete rigor, the model was simplified by Mandelbrot [28–30] himself, who considered the limit of canonical multiplicative cascades. The study of these statistically self-similar measures gave rise to a number of important contributions that we will describe shortly. In the eighties, Kahane [18–20] founded multiplicative chaos theory, in particular for Gaussian multiplicative chaos (but also with applications to random coverings), providing the expected mathematical framework for Mandelbrot’s initial construction. Later, fundamental new illustrations of this theory by grid-free statistically self-similar measures appeared, namely the compound Poisson cascades introduced by Barral and Mandelbrot [6] and their generalization to the wide class of log-infinitely divisible cascades built by Bacry and Muzy [2]. In particular, one finds in [2] a subclass of log-infinitely divisible cascades whose limits possess a remarkable exact scaling property: let \(\mu \) be the measure on \({\mathbb {R}}_+\) obtained as the non-degenerate limit of such a cascade (the construction is made in dimension 1); then there exist an integral scale \(T>0\) and a Lévy characteristic exponent \(\psi \) such that for all \(\lambda \in (0,1)\), there exists an infinitely divisible random variable \(\Omega _\lambda \) with \({\mathbb {E}}(e^{iq\Omega _\lambda })=\lambda ^{-\psi (q)}\) for all \(q\in {\mathbb {R}}\), and

$$\begin{aligned} (\mu ([0,\lambda t]))_{0\le t\le T} \overset{\text {law}}{=} \lambda e^{\Omega _\lambda } (\mu ([0,t]))_{0\le t\le T}, \end{aligned}$$
(1.1)

where on the right hand side \((\mu ([0,t]))_{0\le t\le T}\) is independent of \(\Omega _\lambda \). Moreover, \(((\mu ([u,u+t]))_{t\ge 0})_{u\ge 0}\) is stationary, and the \(\mu \)-measures of any two intervals lying at distance more than \(T\) from each other are independent (when the characteristic exponent is quadratic, the construction falls into Gaussian multiplicative chaos theory).

Higher dimensional versions have been built as well (see [9, 18, 34]). In particular, in dimension 2 and in the Gaussian case, they are closely related to the validity of the so-called KPZ formula and its dual version in Liouville quantum gravity (see [12] and [35], as well as [3]).

To fix ideas, let us recall the construction of dyadic canonical multiplicative cascades, as well as the construction of the subclass of exact scaling log-infinitely divisible cascades which are the closest to canonical ones, namely compound Poisson cascades. To build a dyadic canonical cascade in dimension 1, one can consider the dyadic tree

$$\begin{aligned} \{M_u\}_{u\in \bigcup _{j\ge 1}\{0,1\}^j}=\bigcup _{j\ge 1}\left\{ \left( 2^{-(j+1)}+\sum _{k=1}^ju_k 2^{-k},2^{-j}\right) \right\} _{u\in \{0,1\}^j} \end{aligned}$$

embedded in the upper half-plane \({\mathbb {H}}\) (this extends naturally to \(m\)-adic trees). Then to each point \(M_u\) one associates a random variable \(W_u\), so that the \(W_u,\, u\in \bigcup _{j\ge 1} \{0,1\}^j\), are independent copies of a positive random variable \(W\) with expectation 1, and one defines a sequence of measures on \([0,1]\) as

$$\begin{aligned} \mu _j (\mathrm{d} t)=\prod _{k=1}^j W_{u_1\cdots u_k}\cdot {\mathrm{d}t}\quad \text {if }t\in \left[ \sum _{k=1}^ju_k 2^{-k}, 2^{-j}+\sum _{k=1}^ju_k 2^{-k}\right) . \end{aligned}$$
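This iterative construction is easy to simulate. The following sketch is ours and only illustrative; in particular the lognormal weight \(W=e^{\sigma N-\sigma ^2/2}\) (\(N\) standard normal, so \({\mathbb {E}}(W)=1\)) is one admissible choice among many. It generates the total mass \(\Vert \mu _j\Vert \) of a depth-\(j\) dyadic cascade:

```python
import numpy as np

rng = np.random.default_rng(1)

def cascade_total_mass(j, sigma=0.5):
    """Total mass of the dyadic canonical cascade mu_j on [0,1]:
    each generation-j dyadic interval carries 2^{-j} times the product
    of the weights W_{u_1...u_k}, k <= j, along its branch."""
    masses = np.ones(1)  # mass of [0,1] at generation 0
    for _ in range(j):
        # one mean-1 lognormal weight per child interval
        w = np.exp(sigma * rng.standard_normal(2 * masses.size) - sigma**2 / 2)
        masses = 0.5 * np.repeat(masses, 2) * w
    return masses.sum()

sample = [cascade_total_mass(12) for _ in range(300)]
print(np.mean(sample))  # close to E(||mu_j||) = 1, since E(W) = 1
```

Since \({\mathbb {E}}(W)=1\), the total masses \(\Vert \mu _j\Vert \) form a positive martingale of mean 1, which is what the empirical average reflects.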

This definition can be put into the same setting as that used to define the log-infinitely divisible cascades if we write

$$\begin{aligned} \mu _j (\mathrm{d} t)=e^{\Lambda (V_{2^{-j}}(t))}\,{\mathrm{d}t}, \end{aligned}$$

where \(V_{2^{-j}}(t)\) is the truncated cone \(\{z=x+iy\in \mathbb {H}: | x-t|<\min (1,y)/2,\ 2^{-j}\le y \}\) and \(\Lambda \) is the random measure on \(({\mathbb {H}},{\mathcal {B}}({\mathbb {H}}))\) defined as

$$\begin{aligned} \Lambda (A)=\sum _{u: M_u\in A}\log (W_u). \end{aligned}$$

In fact, one obtains examples of the exact scaling compound Poisson cascades mentioned above by formally replacing the dyadic tree \(\{M_u\}\) by the points of a Poisson point process in \({\mathbb {H}}\) with an intensity of the form \(ay^{-2} \mathrm{d}x\mathrm{d}y\) (\(a>0\)), the process being independent of the copies of \(W\) attached to its points (Fig. 1). The cones \(V_{2^{-j}}(t)\) were exhibited by Bacry and Muzy after a careful inspection of the characteristic function of the process \((\Lambda (\widetilde{V}_{2^{-j}}(t)))_{t\in [0,1]}\) along a large family of cones \(\widetilde{V}_{2^{-j}}(t)\) (required to contain \(\{z=x+iy\in \mathbb {H}: | x-t|<y/2,\ 2^{-j}\le y\le 1 \}\)), leading to the choice of \(V_{2^{-j}}(t)\) and the derivation of (1.1) (this choice is the same for all exact scaling log-infinitely divisible cascades).

Fig. 1

Grey areas for \(V_{2^{-2}}(t)\). a Dyadic tree. b Poisson point process

In both situations, the sequence \((\mu _j)_{j\ge 1}\) is a martingale which converges almost surely weakly to a limit \(\mu \) supported on \([0,1]\). In the case of canonical cascades, the self-similar structure of the dyadic tree together with the independence and identical distribution of the \(W_u\) directly yields the fundamental almost sure relation

$$\begin{aligned} \mu (A)=2^{-1}W_0\mu ^{(0)}(2(A\cap [0,1/2]))+ 2^{-1} W_1\mu ^{(1)}(2(A\cap [1/2,1])-1) \end{aligned}$$

for all Borel sets \(A\), where \(\mu ^{(0)}\) and \(\mu ^{(1)}\) are the independent copies of \(\mu \) obtained by making the substitutions \(W_u:=W_{0u}\) and \(W_u:=W_{1u}\), respectively, in the construction, these measures being independent of \((W_0,W_1)\). Writing \(Z=\Vert \mu \Vert ,\, Z(0)=\Vert \mu ^{(0)}\Vert \) and \(Z(1)=\Vert \mu ^{(1)}\Vert \) for the total masses, this gives the scalar equation

$$\begin{aligned} Z=2^{-1}(W_0Z(0)+W_1Z(1)), \end{aligned}$$
(1.2)

which plays a crucial role in deriving fine properties of the distribution of \(Z\) and geometric properties of \(\mu \).

In the case of exact scaling log-infinitely divisible cascades, such an almost sure relation between \(\mu \) and its restrictions to contiguous non-trivial subintervals partitioning \([0,1]\) does not follow automatically from the construction, which is not genuinely based on geometric scaling properties. Nevertheless, a simple observation based on Bacry and Muzy’s calculation does provide such an analogue, with additional correlations (see (1.13) below, and Sect. 1.5). On the other hand, it is natural to seek a family of cones whose geometric structure directly induces limits of log-infinitely divisible cascades satisfying both (1.1) and (1.13). We will introduce such a family. The formal definition of exact scaling log-infinitely divisible cascades built from it will be explained in the next subsections, as well as the equivalence with Bacry and Muzy’s original definition. All the proofs in the paper will use the definition based on the new cones; using the original ones would be equivalent.

Mandelbrot was especially interested in three questions related to canonical cascades: (1) under which necessary and sufficient conditions is \(\mu \) non-degenerate, i.e. \({\mathbb {P}}(\mu \ne 0)=1\) (\(\{\mu \ne 0\}\) is a tail event of probability 0 or 1)? (2) When \(\mu \) is non-degenerate, under which necessary and sufficient conditions does the total mass have a finite \(q\)th moment for \(q>1\), i.e. \({\mathbb {E}}(\Vert \mu \Vert ^q)<\infty \)? (3) When \(\mu \) is non-degenerate, what is the Hausdorff dimension of \(\mu \)? He formulated and partially solved related conjectures, which were finally settled by Kahane and Peyrière [21], who finely exploited the fundamental Eq. (1.2): let

$$\begin{aligned} \varphi (q)=\log _2{\mathbb {E}}(W^q)-(q-1). \end{aligned}$$
(1.3)

Then \(\mu \) is non-degenerate if and only if \(\varphi '(1^-)<0\); in this case the convergence of the total mass \(\Vert \mu _j\Vert \) holds in \(L^1\) norm, and for \(q>1\) one has \({{\mathbb {E}}}(\Vert \mu \Vert ^q)<\infty \) if and only if \(\varphi (q)<0\); also, the Hausdorff dimension of \(\mu \) is \(-\varphi '(1^-)\) (it was assumed in [21] that \({{\mathbb {E}}}(\Vert \mu \Vert \log ^+\Vert \mu \Vert )<\infty \), a condition removed in [19]).

It is not hard to see that all the positive moments of \(Z\) are finite if and only if \({\mathbb {P}}(W\le 2)=1\) and \({\mathbb {P}}(W=2)<1/2\) (recall that this is also equivalent to \(\varphi (q)<0\) for all \(q>1\)), and in this case it is shown in [21] that

$$\begin{aligned} \lim _{q\rightarrow \infty } \frac{\log {\mathbb {E}}(Z^q)}{q\log q}=\log _2 \mathrm{ess}\,\sup (W)\le 1. \end{aligned}$$
(1.4)

When there exists a (necessarily unique since \(\varphi (1)=0\) and \(\varphi \) is convex) solution \(\zeta \) to the equation \(\varphi (q)=0\) in \((1,\infty )\), Guivarc’h, motivated by a conjecture in [29], showed in [17] that when the distribution of \(\log (W)\) is non-arithmetic, there exists a constant \(0<d<\infty \) such that

$$\begin{aligned} \lim _{x\rightarrow \infty } x^{\zeta }{\mathbb {P}}(Z>x)=d. \end{aligned}$$
(1.5)

The proof is based on the connection of (1.2) with the theory of random difference equations. Regarding moments of negative orders, if \(\varphi '(1^-)<0\), given \(q>0\) one has \({{\mathbb {E}}} (Z^{-2q'})<\infty \) for all \(q'\in (0,q)\) if and only if \(\varphi (-q')<\infty \), i.e. \({{\mathbb {E}}}(W^{-q'})<\infty \), for all \(q'\in (0,q)\) [8, 24, 31].
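These criteria are straightforward to evaluate for concrete weights. As an illustration (the lognormal choice is ours, not the paper's), for \(W=e^{\sigma N-\sigma ^2/2}\) one has \(\log _2{\mathbb {E}}(W^q)=\sigma ^2q(q-1)/(2\log 2)\), hence \(\varphi (q)=(q-1)(\sigma ^2q/(2\log 2)-1)\), \(\varphi '(1^-)=\sigma ^2/(2\log 2)-1\), and the root \(\zeta \) of \(\varphi \) in \((1,\infty )\) equals \(2\log 2/\sigma ^2\) whenever \(\sigma ^2<2\log 2\). A short bisection recovers it numerically:

```python
import math

sigma = 1.0

def phi(q, s=sigma):
    """phi(q) = log2 E(W^q) - (q - 1) for lognormal W = exp(s*N - s^2/2)."""
    return s**2 * q * (q - 1) / (2 * math.log(2)) - (q - 1)

# phi is convex with phi(1) = 0 and phi < 0 just above 1 (non-degeneracy),
# so bisect for its second zero zeta in (1, 50)
lo, hi = 1.0 + 1e-9, 50.0
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if phi(mid) < 0 else (lo, mid)
zeta = (lo + hi) / 2
print(zeta)  # ~ 2*log(2)/sigma^2, i.e. ~ 1.3863 for sigma = 1
```

With this \(\zeta \in (1,\infty )\) and \(\varphi (\zeta )=0\), (1.5) applies as soon as \(\log W\) is non-arithmetic, which holds in the lognormal case.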

The same series of questions arises for the limits of 1-dimensional exact scaling log-infinitely divisible cascades. In general, one expects answers similar to those obtained for limits of canonical cascades. We will sharpen some of the already known results, and provide new ones, especially regarding the right tail behavior of the law of the total mass of such a measure restricted to compact intervals.

Let us now come to the definitions (Sects. 1.1 and 1.2) required to build 1-dimensional exact scaling log-infinitely divisible cascades from the new family of cones invoked above (Sect. 1.3). Section 1.4 will present our main results, and Sect. 1.5 the connection between Bacry–Muzy’s original construction and the one adopted in this paper.

1.1 Independently scattered random measures

Let \(\psi \) be a characteristic Lévy exponent given by

$$\begin{aligned} \psi : q \in {\mathbb {R}} \mapsto iaq -\frac{1}{2}\sigma ^2q^2+ \int _{\mathbb {R}} (e^{iq x}-1-iq x \mathbf{1}_{|x|\le 1})\, \nu (\mathrm{d}x), \end{aligned}$$
(1.6)

where \(a,\sigma \in {\mathbb {R}}\) and \(\nu \) is a Lévy measure on \({\mathbb {R}}\) satisfying

$$\begin{aligned} \nu (\{0\})=0\quad \text { and } \quad \int _{\mathbb {R}} 1 \wedge |x|^2 \, \nu (\mathrm{d}x) <\infty . \end{aligned}$$

Let \(\mathbb {H}={\mathbb {R}}\times i{\mathbb {R}}_+\) be the upper half plane and let \(\lambda \) be the hyperbolic area measure on \(\mathbb {H}\) defined as

$$\begin{aligned} \lambda (\mathrm{d}x\mathrm{d}y)= y^{-2} \mathrm{d}x \mathrm{d}y. \end{aligned}$$

Let \(\Lambda \) be a homogeneous independently scattered random measure on \(\mathbb {H}\) with \(\psi \) as Lévy exponent and \(\lambda \) as intensity (see [32] for details). It is characterized by the following: for every Borel set \(B\in {\mathcal {B}}_\lambda =\{B\in {\mathcal {B}}({\mathbb {H}}): \lambda (B)<\infty \}\) and \(q\in {\mathbb {R}}\) we have

$$\begin{aligned} {\mathbb {E}}(e^{i q \Lambda (B)})=e^{\psi (q)\lambda (B)}, \end{aligned}$$

and for every sequence \(\{B_i\}_{i=1}^\infty \) of disjoint sets in \( {\mathcal {B}}_\lambda \) with \(\cup _{i=1}^\infty B_i\in \mathcal {B}_\lambda \), the random variables \(\Lambda (B_i),\, i\ge 1\), are independent and satisfy

$$\begin{aligned} \Lambda \left( \,\,\bigcup _{i=1}^\infty B_i\right) =\sum _{i=1}^\infty \Lambda (B_i)\quad \text {almost surely}. \end{aligned}$$
(1.7)

Let \(I_\nu \) be the interval of those \(q\in {\mathbb {R}}\) such that \(\int _{|x|\ge 1} e^{ q x}\, \nu (\mathrm{d}x)<\infty \). Then the function \(\psi \) has a natural extension to \(\{z\in \mathbb {C}: -\mathrm{Im} (z) \in I_\nu \}\). In particular for any \(q \in I_\nu \) and every \( B\in \mathcal {B}_\lambda \) we have

$$\begin{aligned} {\mathbb {E}}(e^{q \Lambda (B)})=e^{\psi (-iq)\lambda (B) }. \end{aligned}$$

Throughout the paper we assume that at least one of \(\sigma \) and \(\nu \) is nonzero, and that \(I_\nu \) contains the interval \([0,1]\). We adopt the normalization

$$\begin{aligned} a=-\frac{\sigma ^2}{2}-\int _{\mathbb {R}} (e^{x}-1- x \mathbf{1}_{|x|\le 1}) \, \nu (\mathrm{d}x). \end{aligned}$$
(1.8)

Then for \(B\in \mathcal {B}_\lambda \) we define

$$\begin{aligned} Q(B)=e^{\Lambda (B)}, \end{aligned}$$

and by (1.8) we have

$$\begin{aligned} {\mathbb {E}}(Q(B))=1. \end{aligned}$$
(1.9)

More generally for \(q\in I_\nu \) we have

$$\begin{aligned} {\mathbb {E}}(Q(B)^q)=e^{\psi (-iq)\lambda (B)}. \end{aligned}$$
(1.10)
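In the purely Gaussian case (\(\nu =0\)), the normalization (1.8) gives \(a=-\sigma ^2/2\), so \(\Lambda (B)\sim \mathcal {N}(-\sigma ^2\lambda (B)/2,\ \sigma ^2\lambda (B))\) and \(\psi (-iq)=\sigma ^2q(q-1)/2\). A Monte Carlo sanity check of (1.9) and (1.10) under these assumptions (the numerical parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, lam_B, q = 0.4, 1.0, 2.0  # illustrative parameters, lambda(B) = 1

# Gaussian case: Lambda(B) ~ N(a*lambda(B), sigma^2*lambda(B)), a = -sigma^2/2
lam_samples = rng.normal(-sigma**2 / 2 * lam_B, sigma * np.sqrt(lam_B),
                         size=1_000_000)
Q = np.exp(lam_samples)

psi_miq = sigma**2 * q * (q - 1) / 2  # psi(-iq) under normalization (1.8)
print(Q.mean())                       # ~ 1, which is (1.9)
print((Q**q).mean(), np.exp(psi_miq * lam_B))  # both ~ exp(sigma^2), (1.10)
```

The drift \(a=-\sigma ^2/2\) is exactly what makes \(e^{\Lambda (B)}\) mean-one while leaving the higher moments governed by \(\psi (-iq)\).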

1.2 Cones and areas

Let \({\mathcal {I}}=\{[s,t]: s,t\in {\mathbb {R}}, s<t\}\) be the collection of all nontrivial compact intervals. For \(I=[s,t]\in {\mathcal {I}}\) denote by \(|I|\) its length \(t-s\).

For \(t\in {\mathbb {R}}\) define the cone

$$\begin{aligned} V(t)=\{z=x+iy\in \mathbb {H}: -y/2< x-t\le y/2 \}=V(0)+t. \end{aligned}$$

For \(I\in {\mathcal {I}}\) define

$$\begin{aligned} V(I)=\bigcap _{t\in I} V(t). \end{aligned}$$

For \(I\in {\mathcal {I}}\) and \(t\in I\) define

$$\begin{aligned} V^I(t)=V(t){\setminus } V(I). \end{aligned}$$

For \(I, J\in {\mathcal {I}}\) with \(J\subseteq I\) define

$$\begin{aligned} V^I(J)=\bigcap _{t\in J} V^I(t)=V(J){\setminus } V(I). \end{aligned}$$

A straightforward computation shows that

Lemma 1.1

For \(I, J\in {\mathcal {I}}\) with \(J\subseteq I\) one has

$$\begin{aligned} \lambda (V^I(J))=\log \frac{|I|}{|J|}. \end{aligned}$$
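The computation is elementary: the horizontal section of \(V(K)\) at height \(y\) has width \(\max (0,y-|K|)\), so the section of \(V^I(J)=V(J){\setminus } V(I)\) has width \(\max (0,y-|J|)-\max (0,y-|I|)\), and integrating this against \(y^{-2}\,\mathrm{d}y\) gives \(\log (|I|/|J|)\). A numerical check of this slicing (the interval lengths are illustrative):

```python
import numpy as np

def area_VIJ(len_I, len_J, y_max=1e5, n=1_000_000):
    """Integrate y^{-2} dx dy over V^I(J) = V(J) minus V(I), J inside I,
    slicing horizontally: width(y) = max(0, y-|J|) - max(0, y-|I|)."""
    y = np.geomspace(len_J, y_max, n)  # fine resolution near y = |J|
    f = (np.maximum(0.0, y - len_J) - np.maximum(0.0, y - len_I)) / y**2
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y))  # trapezoid rule
    return integral + (len_I - len_J) / y_max  # exact tail above y_max

print(area_VIJ(1.0, 0.25), np.log(1.0 / 0.25))  # both ~ log 4 ~ 1.3863
```

The answer depends only on the ratio \(|I|/|J|\), which is the geometric source of the exact scaling property.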

1.3 Exact scaling log-infinitely divisible cascades

For \(\epsilon >0\) set

$$\begin{aligned} \mathbb {H}_\epsilon =\{z\in \mathbb {H}: \mathrm{Im}(z)\ge \epsilon \}. \end{aligned}$$

For \(I\in {\mathcal {I}},\, t\in I\) and \(\epsilon >0\) define

$$\begin{aligned} V_\epsilon ^I(t)=V^I(t)\cap \mathbb {H}_\epsilon . \end{aligned}$$

Clearly we have \(V_\epsilon ^I(t)\in \mathcal {B}_\lambda \). Moreover, for each \(\epsilon >0\) there exists a càdlàg modification of \((Q(V_\epsilon ^I(t)))_{t\in I}\). In fact, similarly to [2, Definition 4], one can define

$$\begin{aligned} \Lambda (V_\epsilon ^I(t))=\Lambda (A_\epsilon ^I(t))-\Lambda (B_\epsilon ^I(t)) +\Lambda (C_\epsilon ^I),\quad \ t\in I, \end{aligned}$$

where (see Fig. 2)

$$\begin{aligned} A_\epsilon ^I(t)&= \{x+iy\in \mathbb {H}:\ y/2\le x-\inf I \le t+y/2 \}\cap \mathbb {H}_\epsilon ,\\ B_\epsilon ^I(t)&= \{x+iy\in \mathbb {H}:\ -y/2\le x-\inf I \le t-y/2 \}\cap \mathbb {H}_\epsilon ,\\ C_\epsilon ^I&= \{x+iy\in \mathbb {H}:\ -y/2\le x -\inf I\le y/2\wedge (\sup I-y/2) \}\cap \mathbb {H}_\epsilon . \end{aligned}$$

It is easy to see that both \(\Lambda (A_\epsilon ^I(t))\) and \(\Lambda (B_\epsilon ^I(t))\) are Lévy processes and \(\Lambda (C_\epsilon ^I)\) does not depend on \(t\), thus \(\Lambda (V_\epsilon ^I(t))\) has a càdlàg modification.

Fig. 2

The gray areas for the corresponding sets. a \(A^I_\epsilon (t)\). b \(B^I_\epsilon (t)\). c \(C^I_\epsilon \). d \(V^I_\epsilon (t)\)

We use this to define \(\mu _\epsilon ^I\), the random measure on \(I\) given by

$$\begin{aligned} \mu _\epsilon ^I(\mathrm{d}x)=\frac{1}{|I|}\cdot Q(V_\epsilon ^I(x)) \, \mathrm{d}x,\quad \ x\in I. \end{aligned}$$

The following lemma is due to Kahane [20] combined with Doob’s regularisation theorem (see [33, Chapter II.2] for example).

Lemma 1.2

Given \(I\in {\mathcal {I}}\), \(\{\mu ^I_{1/t}\}_{t>0}\) is a measure-valued martingale. It possesses a right-continuous modification, which converges weakly almost surely to a limit \(\mu ^I\).

Throughout, we will work with this right-continuous version of \(\{\mu ^I_{1/t}\}_{t>0}\), and its limit \(\mu ^I\). We give the proof of this lemma with some details, since this point is not made explicit in the context of [2].

Proof

Let \(\Phi \) be a dense countable subset of \(C_0(I)\) (the family of nonnegative continuous functions on \(I\)). Let \(f_0\) be the constant mapping equal to 1 over \(I\). For \(f\in \Phi \cup \{f_0\}\) and \(t>0\) define

$$\begin{aligned} \mu ^I_{1/t}(f)=\int _I f(x) \, \mu ^I_{1/t}(\mathrm{d}x)=\frac{1}{|I|}\int _I f(x) \cdot Q(V_{1/t}^I(x)) \, \mathrm{d}x \end{aligned}$$

and

$$\begin{aligned} {\mathcal {F}}_t=\sigma (\Lambda (V^I_{1/s}(x)): x\in I;\ 0<s\le t),\quad t>0. \end{aligned}$$

Let \(\mathcal {N}\) be the class of all \(\mathbb {P}\)-negligible, \(\mathcal {F}_\infty \)-measurable sets. Then define \(\mathcal {G}_0=\sigma (\mathcal {N})\) and \(\mathcal {G}_t=\sigma (\mathcal {F}_t\cup \mathcal {N})\) for \(t>0\). Due to the normalisation (1.8), the measurability of \((\omega ,x)\mapsto Q(V_\epsilon ^I(x))\) and the independence properties associated with \(\Lambda \), the family \(\{\mu ^I_{1/t}(f)\}_{t>0}\) is a positive martingale with respect to the right-continuous complete filtration \((\mathcal {G}_{t})_{t\ge 0}\), with expectation \({\mathbb {E}}(\mu ^I_{1/t}(f))=|I|^{-1} \int _I f(x) \, \mathrm{d}x<\infty \). Then from [33, Chapter II, Theorem 2.5] one can find a subset \(\Omega _0\subset \Omega \) with \(\mathbb {P}(\Omega _0)=1\) such that for every \(\omega \in \Omega _0\), for each \(f\in \Phi \cup \{f_0\}\) and \(t\in [0,\infty )\), \(\lim _{r\downarrow t; r\in \mathbb {Q}} \mu ^I_{1/r}(f)\) exists. Define

$$\begin{aligned} \mu ^{I,+}_{1/t}(f)=\lim _{r\downarrow t; r\in \mathbb {Q}} \mu ^I_{1/r}(f) \quad \text { if } \omega \in \Omega _0 \text { and } \mu ^{I,+}_{1/t}(f)=0 \text { if } \omega \not \in \Omega _0. \end{aligned}$$

Then from [33, Chapter II, Theorems 2.9 and 2.10] we get that \(\mu ^{I,+}_{1/t}(f)\) is a càdlàg modification of \(\mu ^I_{1/t}(f)\) for each \(f\in \Phi \cup \{f_0\}\), thus \(\lim _{t\rightarrow \infty } \mu ^{I,+}_{1/t}(f)\) exists for each \(\omega \in \Omega _0\). Now write

$$\begin{aligned} \mu ^I(f)=\lim _{t\rightarrow \infty } \mu ^{I,+}_{1/t}(f)\quad \text { if } \omega \in \Omega _0 \text { and } \mu ^I(f)=0 \text { if } \omega \not \in \Omega _0 \end{aligned}$$

for each \(f\in \Phi \). Since \(\Phi \) is a dense subset of \(C_0(I)\), one can extend \(\mu ^{I,+}_{1/t}\) to \(C_0(I)\) for each \(\omega \in \Omega _0\) by letting

$$\begin{aligned} \mu ^{I,+}_{1/t}(g)=\lim _{\Phi \ni f\rightarrow g} \mu ^{I,+}_{1/t}(f), \ g\in C_0(I) \end{aligned}$$

(this limit does exist because for any \(f_1,f_2\in \Phi \) and \(r\in {\mathbb {Q}}\) we have \(| \mu ^I_{1/r}(f_1)-\mu ^I_{1/r}(f_2)|\le \mu ^I_{1/r}(f_0)\Vert f_1-f_2\Vert _\infty \)). This defines a right-continuous version of \((\mu ^{I}_{1/t})_{t>0}\). Then, since the positive linear forms \(\mu ^{I,+}_{1/t}\) are bounded in norm by \(\mu ^{I,+}_{1/t}(f_0)\) and converge over the dense family \(\Phi \) as \(t\rightarrow \infty \), they converge weakly. This defines a measure \(\mu ^I\) as the weak limit of \(\mu ^{I,+}_{1/t}\) for each \(\omega \in \Omega _0\), hence the conclusion. \(\square \)

For the weak limit \(\mu ^I\) we have:

Lemma 1.3

For \(I,J\in {\mathcal {I}},\, \mu ^I\circ f_{I,J}^{-1}\) and \(\mu ^J\) have the same law, where \(f_{I,J}: t\in I \mapsto \inf J+ (t-\inf I)|J|/|I|\).

Proof

Due to the scaling property of \(\lambda \) we have that

$$\begin{aligned} \{Q(V^I_\epsilon (f_{I,J}^{-1}(x))), x\in J\}\quad \text { and } \quad \{Q(V^{J}_{\epsilon |J|/|I|}(x)), x\in J \} \end{aligned}$$

have the same law. This implies that

$$\begin{aligned} \{\mu _{1/t}^I\circ f_{I,J}^{-1},t>0\}\quad \text { and } \quad \{ \mu _{|I|/(|J|t)}^{J},t>0\} \end{aligned}$$

have the same law, and so do \(\mu ^I\circ f_{I,J}^{-1}\) and \(\mu ^J\). \(\square \)

Now we come to the scaling property of \(\mu ^I\). Due to (1.7), for any fixed compact subinterval \(J\subset I\) and \(t>0\) we have the decomposition

$$\begin{aligned} Q(V_{1/t}^I(x))=Q(V^I(J))\cdot Q(V_{|J|/(|I|t)}^J(x)), \quad x\in J, \end{aligned}$$
(1.11)

hence

$$\begin{aligned} (\mu ^I_{1/t})_{|J}=\frac{|J|}{|I|}Q(V^I(J))\cdot \mu ^J_{|J|/(|I|t)}, \end{aligned}$$

almost surely. Consequently this holds almost surely simultaneously for any at most countable family of such intervals \(J\), but a priori not for all of them, since \(\Lambda \) is not almost surely a signed measure. This, along with Lemma 1.2 and its proof, yields, simultaneously for all compact intervals \(J\) of such a family, the decomposition

$$\begin{aligned} (\mu ^I)_{|J}=\frac{|J|}{|I|}Q(V^I(J)) \cdot {\mu }^J \end{aligned}$$
(1.12)

almost surely, where \({\mu }^I\circ f_{I,J}^{-1}\) has the same law as \(\mu ^J\), and it is independent of \(Q(V^I(J))\) (the fact that \(\mu ^I\) is atomless ensures that the weak limit of \(\mu ^I_{1/t}\) restricted to \(J\) equals \(\mu ^I\) restricted to \(J\); the right-continuous modifications of \((\mu ^I_{1/t})_{t>0}\) and the \(( \mu ^J_{|J|/(|I|t)})_{t>0}\) are built simultaneously, and the convergence of \(\mu ^I_{1/t}\) implies that of \(\mu ^J_{|J|/(|I|t)}\)). However, (1.12) also holds almost surely simultaneously for all \(J\in {\mathcal {I}}\) with \( J\subset I\) when \(\sigma =0\) and the Lévy measure \(\nu \) satisfies \(\int 1\wedge |x|\, \nu (\mathrm{d}x)<\infty \). Indeed, in this case \(\Lambda \) is almost surely a signed measure, which makes it possible to directly write (1.11) almost surely for all \(J\in {{\mathcal {I}}}\) with \(J\subset I\) and for all \(t>0\) (notice that in this case we easily have the nice property that almost surely \(Q(V_{1/t}^I(x))\) is càdlàg both in \(x\) and \(t\)).

We notice that (1.12) implies (1.1) (see Sect. 1.5 for details), but we also have now the following new equation giving \(\Vert \mu ^I\Vert \) as a weighted sum of copies of itself: given \(k\ge 2\) and \(\min I=s_0<\cdots <s_k=\max I\), for \(j=0,\ldots ,k-1\) write \(I_j=[s_j,s_{j+1}]\); provided that \(s_1,\ldots ,s_{k-1}\) are not atoms of \(\mu ^I\), we have almost surely

$$\begin{aligned} \Vert \mu ^I\Vert =\sum _{j=0}^{k-1} \frac{|I_j|}{|I|}\cdot Q(V^I(I_j)) \cdot \Vert {\mu }^{I_j}\Vert , \end{aligned}$$
(1.13)

where for each \(j,\, \Vert {\mu }^{I_j}\Vert \) is independent of \(Q(V^I(I_j))\) and has the same law as \(\Vert \mu ^I\Vert \). This equation will be crucial to get our main results.

Another interesting equation is the following. For \(I\in {\mathcal {I}}\) let

$$\begin{aligned} I_{0}=[\min (I),\min (I)+|I|/2] \quad \text { and }\quad I_1=[\min (I)+|I|/2,\max (I)]. \end{aligned}$$

One can also define \(I_{00}\) and \(I_{01}\) in the same way for \(I_0\). Then, provided the point \(I_{00}\cap I_{01}\) is not an atom of \(\mu ^{I_0}\), we have

$$\begin{aligned} (\mu ^I)_{|I_0}=\frac{1}{2}\cdot Q(V^I(I_0))\cdot ((\mu ^{I_0})_{|I_{00}}+(\mu ^{I_0})_{|I_{01}}), \end{aligned}$$
(1.14)

where \((\mu ^{I_0})_{|I_{00}}\circ f_{I_0,I_{00}}^{-1}\) and \((\mu ^{I_0})_{|I_{01}}\circ f_{I_0,I_{01}}^{-1}\) have the same law as \((\mu ^I)_{|I_0}\), and they are independent of \(\frac{1}{2}Q(V^I(I_0))\).

To complete the proof of (1.13), we now prove the following lemma.

Lemma 1.4

Almost surely \(\mu ^I\) has no atoms.

Proof

We can assume that \(I=[0,1]\). We start by proving that \(1/4\) is not an atom. Let \((f_n)_{n\ge 1}\) be a uniformly bounded sequence in \(C_0([0,1])\) which converges pointwise to \(\mathbf{1}_{1/4}\), and such that \(\mathrm{supp}(f_n)\subset [1/4-\eta _n,1/4+\eta _n]\) with \(1/4>\eta _n\downarrow 0\). Then

$$\begin{aligned} {\mathbb {E}}(\mu ^I(\{1/4\}))&\le \liminf _{n\rightarrow \infty } {\mathbb {E}}(\mu ^I(f_n))\le \liminf _{n\rightarrow \infty }\liminf _{t\rightarrow \infty } {\mathbb {E}}(\mu ^I_{1/t}(f_n))\\&= \liminf _{n\rightarrow \infty }\int f_n(t)\, \mathrm{d}t\le \liminf _{n\rightarrow \infty } 2\eta _n \Vert f_n\Vert _\infty . \end{aligned}$$

So \({\mathbb {E}}(\mu ^I(\{1/4\}))=0\).

The fact that \(1/4\) is not an atom of \(\mu ^I\) yields the validity of (1.14). Write \(\widehat{\mu }=(\mu ^I)_{|I_0}\), \(\widehat{\mu }_0=(\mu ^{I_0})_{|I_{00}}\), \(\widehat{\mu }_1=(\mu ^{I_0})_{|I_{01}}\) and \(\widehat{W}=\frac{1}{2}Q(V^I(I_0))\). From (1.14) we get

$$\begin{aligned} \widehat{\mu }=\widehat{W}\cdot (\widehat{\mu }_0+\widehat{\mu }_1). \end{aligned}$$

Due to Lemma 1.3, \(\mu ^I\) has an atom if and only if \(\widehat{\mu }\) does. Let \(M\) be the maximal \(\widehat{\mu }\)-measure of an atom of \(\widehat{\mu }\), and let \(M_j\) be the maximal \(\widehat{\mu }_j\)-measure of an atom of \(\widehat{\mu }_j\) for \(j=0,1\). We have \(M=\widehat{W}\max (M_0,M_1)\), where \(\widehat{W}\) is independent of \((M_0,M_1)\), has expectation \(1/2\), and \(M, M_0, M_1\) have the same law. Thus

$$\begin{aligned} {\mathbb {E}}(M_0+M_1)/2= {\mathbb {E}}(M) = {\mathbb {E}}(\widehat{W}\max (M_0,M_1))= {\mathbb {E}}(\max (M_0,M_1))/2. \end{aligned}$$

This implies that, with probability 1, if \(M_j>0\) then \(M_{1-j}=0\) for \(j\in \{0,1\}\). However, \(\{M_j>0\}\) is a tail event of probability 0 or 1, thus the previous fact implies that \(M_0=M_1=0\) almost surely, hence \(\widehat{\mu }\) has no atoms (here we have adapted to our context the argument of [8, Lemma A.2] for canonical cascades). \(\square \)

1.4 Main results

Without loss of generality we may take \(I=[0,1]\). For convenience we write \(\mu =\mu ^{[0,1]}\) and \(Z=\Vert \mu \Vert \). For \(q\in I_\nu \) define

$$\begin{aligned} \varphi (q)=\psi (-iq)-(q-1). \end{aligned}$$

Notice that if we set

$$\begin{aligned} W=Q(V^{[0,1]}([0,1/2])), \end{aligned}$$

then, by Lemma 1.1 and (1.10), \(\log _2{\mathbb {E}}(W^q)=\psi (-iq)\), so this function coincides with that of (1.3) for the canonical cascades.

For the non-degeneracy we have

Theorem 1.1

The following assertions are equivalent:

$$\begin{aligned} \mathrm{(i)}\,\, {\mathbb {E}}(Z)=1; \mathrm{(ii)}\,\, {\mathbb {E}}(Z)>0; \mathrm{(iii)}\,\, \varphi '(1^-)<0. \end{aligned}$$

Moreover, in case of non-degeneracy the convergence of \(\Vert \mu _{1/t}^I\Vert \) to \(Z\) holds in \(L^1\) norm.

For moments of positive orders we have

Theorem 1.2

For \(q>1\) one has \(0<{\mathbb {E}}(Z^q)<\infty \) if and only if \(q\in I_\nu \) and \(\varphi (q)<0\).

Remark 1.1

In Theorem 1.1, the main point is the equivalence between (ii) and (iii). For compound Poisson cascades \(\text { (iii) }\Rightarrow \text { (ii)}\) was proved in [6], as well as \(\text { (ii) }\Rightarrow \text { (iii)}\) under the additional assumption \(\varphi ''(1^-)<\infty \), while \(\text { (ii) }\Rightarrow \varphi '(1^-)\le 0\) was known in general (notice that the construction of the measure used the cones \(V_\epsilon (t)=\{z=x+iy\in \mathbb {H}: -y< x-t\le y,\ \epsilon \le y\le 1\}\)). For the larger class of log-infinitely divisible cascades, \(\text { (iii) }\Rightarrow \text { (ii)}\) was proved in [2] under the existence of \(q>1\) such that \(\varphi (q)<0\), i.e. under the sufficient condition implying the boundedness of \(\Vert \mu _\epsilon \Vert \) in some \(L^p,\, p>1\).

Regarding Theorem 1.2, in [6] and [2], \((q\in I_\nu \) and \(\varphi (q)<0)\ \Rightarrow (0<{\mathbb {E}}(Z^q)<\infty )\) and \((0<{\mathbb {E}}(Z^q)<\infty )\Rightarrow (q\in I_\nu \) and \(\varphi (q)\le 0)\) were known for compound Poisson cascades and then log-infinitely divisible cascades. We will only prove \((0<{\mathbb {E}}(Z^q)<\infty )\Rightarrow (q\in I_\nu \) and \(\varphi (q)< 0)\).

We will see that thanks to Eq. (1.13), the sharp Theorems 1.1 and 1.2 concerning the exact scaling case can be obtained via an adaptation of the arguments used in [21] for canonical cascades. Then, these results also hold for the more general family of log-infinitely divisible cascades built in [2], since changing the shape of the cones used in the definition of the cascade only creates a random measure equivalent to that corresponding to the exact scaling, and the behaviors of such measures are comparable (see [2, Appendix E]).

When \(Z\) has finite moments of every positive order we have

Theorem 1.3

  1. (1)

    The following assertions are equivalent: \(\mathrm{(\alpha )} \,0<{\mathbb {E}}(Z^q)<\infty \) for all \(q>1;\, \mathrm{(\beta )}\, \sigma =0\), and \(\nu \) is carried by \((-\infty ,0],\, \int _{-\infty }^0 1\wedge |x|\, \nu (dx)<\infty \), and

    $$\begin{aligned} \gamma =\int _{-\infty }^0 (1-e^x) \, \nu (dx)\le 1. \end{aligned}$$
  2. (2)

    If \(\mathrm{(\beta )}\) holds, then

    $$\begin{aligned} \lim _{q\rightarrow \infty } \frac{\log {\mathbb {E}}(Z^q)}{q\log q}=\gamma . \end{aligned}$$

Remark 1.2

Under \((\beta )\) we have for \(q\in {\mathbb {R}}\) and \(W=Q(V^{[0,1]}([0,1/2]))\) that

$$\begin{aligned} {\mathbb {E}}(W^{iq})=\exp \left( \left[ iq\gamma +\int _{-\infty }^0 (e^{iqx}-1) \, \nu (dx)\right] \log 2\right) , \end{aligned}$$

which means that \(\log W\) is the value at 1 of a Lévy process with negative jumps, locally bounded variation, and drift \(\gamma \log 2\), hence \(\log _2 \mathrm{ess}\,\sup (W)=\gamma \). This gives in case (2) that

$$\begin{aligned} \lim _{q\rightarrow \infty } \frac{\log {\mathbb {E}}(Z^q)}{q\log q}=\log _2 \mathrm{ess}\,\sup (W) \le 1, \end{aligned}$$

which coincides with (1.4) found for canonical cascades. The situation turns out to be more involved than in the case of canonical cascades, due to the correlations associated with (1.13), which are absent in (1.2). We use Dirichlet’s multiple integral formula to bound from above the moments of positive integer orders of the total mass, and then follow the same approach as [21] for canonical cascades.
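Condition \((\beta )\) and the constant \(\gamma \) are easy to make concrete. For instance (our illustrative choice, not taken from the paper), for \(\nu (\mathrm{d}x)=c\,e^{x}\mathbf{1}_{x<0}\,\mathrm{d}x\) one has \(\int _{-\infty }^0 1\wedge |x|\,\nu (\mathrm{d}x)<\infty \) and \(\gamma =c\int _{-\infty }^0(1-e^x)e^x\,\mathrm{d}x=c/2\), so \((\beta )\) holds exactly when \(c\le 2\). A quick numerical confirmation of the integral:

```python
import math

def gamma_of(c, n=200_000, lo=-40.0):
    """gamma = int_{-inf}^0 (1 - e^x) nu(dx) for nu(dx) = c e^x 1_{x<0} dx,
    computed by the midpoint rule on [lo, 0]; the integrand below x = -40
    is numerically negligible."""
    h = -lo / n
    return sum((1 - math.exp(x)) * c * math.exp(x) * h
               for x in (lo + (k + 0.5) * h for k in range(n)))

print(gamma_of(1.0), gamma_of(2.0))  # ~ 0.5 and ~ 1.0, i.e. gamma = c/2
```

For this family, \(\gamma \) grows linearly in the jump intensity \(c\), and the borderline case \(\gamma =1\) of Theorem 1.3 is reached at \(c=2\).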

Our main result is the following one, concerning the case where \({\mathbb {E}}(Z^q)=\infty \) for some \(q>1\).

Theorem 1.4

Suppose that there exists \(\zeta \in I_\nu \cap (1,\infty )\) such that \(\varphi (\zeta )=0\); in particular one has \(\varphi '(1)<0\). Also suppose that \(\varphi '(\zeta )<\infty \).

  1. (i)

    If either \(\sigma \ne 0\) or \(\nu \) is not of the form \(\sum _{n\in \mathbb {Z}} p_n \delta _{nh}\) for some \(h>0\), then

    $$\begin{aligned} \lim _{x\rightarrow \infty } x^\zeta \mathbb {P}(Z>x)=d, \end{aligned}$$

    where

    $$\begin{aligned} d=\frac{2 {\mathbb {E}}(\mu ([0,1])^{\zeta -1}\mu ([0,1/2])-\mu ([0,1/2])^{\zeta })}{\zeta \varphi '(\zeta ) \log 2} \in (0,\infty ). \end{aligned}$$
  2. (ii)

    If \(\sigma =0\) and \(\nu \) is of the form \(\sum _{n\in \mathbb {Z}} p_n \delta _{nh}\) for some \(h>0\), then

    $$\begin{aligned} 0<\liminf _{x\rightarrow \infty } x^\zeta \mathbb {P}(Z>x)\le \limsup _{x\rightarrow \infty } x^\zeta \mathbb {P}(Z>x) <\infty . \end{aligned}$$

Remark 1.3

The proof exploits (1.13) and the unexpected fact that in Goldie’s approach [16] to the right tail behavior of solutions of random difference equations, it is possible to relax some independence assumptions. It also requires proving that at the critical moment of explosion \(\zeta \), although \({{\mathbb {E}}}(\mu ([0,1])^\zeta )=\infty \), we have \({{\mathbb {E}}}(\mu ([0,1/2])\mu ([1/2,1])^{\zeta -1})<\infty \), an inequality whose proof is rather involved here, while it is direct in the case of canonical cascades.

Remark 1.4

From the proof (see Remark 6.1) we know that in case (i), when \(\zeta =2\),

$$\begin{aligned} d=1/\varphi '(2), \end{aligned}$$

which provides us with a family of random difference equations whose solution has an explicit tail probability constant. See [14] for related topics.

For the reader’s convenience we also give the extension to log-infinitely divisible cascades of the result on finiteness of moments of negative orders mentioned above for limits of canonical cascades, though with some effort it may be deduced from [6] and [35] (the sufficiency result can also be found in the Ph.D. thesis [22]); this result provides some information on the left tail behavior of the distribution of \(\Vert \mu \Vert \). Finally, thanks to (1.13) again, we can quickly give fine information on the geometry of the support of \(\mu \).

Theorem 1.5

Suppose that \(\varphi '(1^-)<0\). Then for any \(q\in (-\infty ,0),\, {\mathbb {E}}(Z^q)<\infty \) if and only if \(q\in I_\nu \).

For the Hausdorff and packing measures of the support of \(\mu \) we have

Theorem 1.6

Suppose that \(\varphi '(1)<0\) and \(\varphi ''(1)>0\). For \(b\in {\mathbb {R}}\) and \(t>0\) let

$$\begin{aligned} \psi _b(t)=t^{-\varphi '(1)} e^{b\sqrt{\log ^+(1/t)\log ^+\log ^+\log ^+(1/t)}}. \end{aligned}$$

Denote by \(\mathcal {H}^{\psi _b}\) and \(\mathcal {P}^{\psi _b}\) the Hausdorff and packing measures with respect to the gauge function \(\psi _b\) (see [15] for the definition). Then almost surely the measure \(\mu \) is supported by a Borel set \(K\) with

$$\begin{aligned} \mathcal {H}^{\psi _b}(K)=\left\{ \begin{array}{l@{\quad }l} \infty , &{} \text { if } b>\sqrt{2\varphi ''(1)},\\ 0, &{} \text { if } b<\sqrt{2\varphi ''(1)}, \end{array} \right. \end{aligned}$$

and

$$\begin{aligned} \mathcal {P}^{\psi _b}(K)=\left\{ \begin{array}{l@{\quad }l} \infty , &{} \text { if } b>-\sqrt{2\varphi ''(1)},\\ 0, &{} \text { if } b<-\sqrt{2\varphi ''(1)}. \end{array} \right. \end{aligned}$$

Remark 1.5

To complete the previous considerations, it is worth mentioning that the notes [29, 30] also questioned the existence, when the limit \(\mu \) of the dyadic canonical cascade is degenerate, of a natural normalization of \(\mu _j\) by a positive sequence \((A_j)_{j\ge 1}\) such that \(\mu _j/A_j\) converges, in some sense, to a non-trivial limit. This problem was solved only very recently thanks to the progress made in the study of the freezing transition for logarithmically correlated random energy models [36] and in the study of branching random walks, in which a generalized version of (1.2) appears naturally [1, 26]. Under weak assumptions, when \(\varphi '(1^-)=0\), \(\mu _j\) suitably normalized converges in probability to a positive random measure \(\widetilde{\mu }\) whose total mass \(Z\) still satisfies (1.2) but is not integrable, while when \(\varphi '(1^-)>0\), after normalization \(\mu _j\) converges in law to the derivative of some stable Lévy subordinator composed with the indefinite integral of an independent measure of \(\widetilde{\mu }\) kind [7]. Previously, motivated by questions coming from interacting particle systems, Durrett and Liggett had achieved in [13] a deep study of the positive solutions of Eq. (1.2), assuming only that the equality holds in distribution. Under weak assumptions, up to a positive multiplicative constant, the general solution takes either the form of the total mass of a non-degenerate measure \(\mu \) or \(\widetilde{\mu }\), or the form of the increment between 0 and 1 of some stable Lévy subordinator composed with the indefinite integral of an independent measure of \(\mu \) or \(\widetilde{\mu }\) kind. Also, fine continuity properties of the critical measure \(\widetilde{\mu }\) are analyzed in [5]. Similar properties are conjectured to hold for log-infinitely divisible cascades, and some of them have been established in the log-Gaussian case [3, 4, 10, 11].

1.5 Connection with Bacry and Muzy’s construction

For a fixed closed interval \(I\) of length \(T>0\), the measure \(\mu ^{I}\) has the same law as the restriction to \([0,T]\) of the measure defined from the cone \(V^T(\cdot )\) used in [2], drawn in Fig. 3; this can be “seen” through an elementary geometric comparison between the two kinds of cones, together with the invariance properties of \(\Lambda \) (invariance in law under horizontal translations and under homothetic transformations with apex on the real axis). A completely rigorous approach consists in mimicking the proof of [2, Lemma 1] to compute the joint distribution of the \(\Lambda \)-measures of any finite family of cones \((V^{[0,T]}_\epsilon (t_1), \ldots ,V^{[0,T]}_\epsilon (t_q))\) and checking that it coincides with the one obtained with the cones \( (V^{T}_\epsilon (t_1), \ldots ,V^{T}_\epsilon (t_q))\).

Fig. 3
figure 3

Two ways of defining the cones. a \(V^{|I|}(t)\) by Bacry and Muzy. b \(V^{I}(t)\) in this paper

Relation (1.13) can be obtained from Bacry and Muzy’s construction by writing, for any \(c\in (0,1)\) and \(0<\epsilon \le 1\), the almost sure relation

$$\begin{aligned} (\Lambda (V^{T}_{c\epsilon T}(cx)))_{x\in [0,T]}=\Lambda (V^{T}(0)\cap V^{T}(cT))+(\omega _{\epsilon ,x})_{x\in [0,T]}; \end{aligned}$$
(1.15)

this defines the process \((\omega _{\epsilon ,x})_{x\in [0,T]}\), obviously independent of \(\Lambda (V^{T}(0)\cap V^{T}(cT))\), which can be shown via Fourier transform to have the same distribution as \((\Lambda (V^{T}_{\epsilon T}(x)))_{x\in [0,T]}\); this implies (1.1) (see Fig. 4a).

Fig. 4
figure 4

Comparing scale invariance derivation. a \(I=[0,T]\). The light grey domain is \(V^{T}(0)\, \cap \,V^{T}(cT)\). The dark grey domain is the domain used to define \((\omega _{\epsilon ,x})_{x\in [0,T]},\, 0<\epsilon \le 1\); it is not homothetic to the domain used to define \((\Lambda (V^{T}_{\epsilon }(x)))_{x\in [0,T]}\). Scale invariance is shown via Fourier transform. b \(I=[0,T]\). The light grey domain is \(V^I([0,cT])\). The dark grey domain is the domain used to define \((\omega _{\epsilon ,x})_{x\in [0,T]},\, 0<\epsilon \le 1\); in this case, it is homothetic to the domain used to define \((\Lambda (V^{I}_{\epsilon }(x)))_{x\in [0,T]},\, 0< \epsilon \le 1\): scale invariance appears geometrically

We have the same equation as (1.15) with the cones considered in this paper, with \((V^{[0,T]}_{c \epsilon T}(\cdot ), V^{[0,T]}([0,cT]))\) in place of \((V^{T}_{c\epsilon T}(\cdot ), V^{T}(0)\cap V^{T}(cT))\); moreover, by the geometry of the construction, \((\omega _{\epsilon ,x})_{x\in [0,T]}\) trivially has the same distribution as \((\Lambda (V^{[0,T]}_{\epsilon T}(x)))_{x\in [0,T]}\) (see Fig. 4b).

An additional observation is that using the cones of Fig. 3b yields a measure on \({\mathbb {R}}_+\), obtained as the vague limit of \(Q(V_\epsilon ^T(t)) \, \mathrm{d}t\), whose indefinite integral has stationary increments. However, there is no long range dependence between the increments of the indefinite integral of this measure, since two cones associated with points at distance at least \(T\) from each other do not intersect. Notice that this measure can also be viewed as the juxtaposition of the limits of \((Q(V_\epsilon ^T(t)) \,\mathrm{d}t)_{|[nT,(n+1)T]},\, n\in {\mathbb {N}}\). Similarly, consider the measure \(\mu \) over \({\mathbb {R}}_+\) obtained by juxtaposing the limits of \((Q(V_\epsilon ^{[nT,(n+1)T]}(t))\,\mathrm{d}t )_{|[nT,(n+1)T]}\). Then, only the process \((\mu ([nT,(n+1)T]))_{n\in {\mathbb {N}}}\) is stationary, but it has long range dependence: in case of non-degeneracy, if we assume that \(\psi (-i2)<\infty \), a calculation shows that

$$\begin{aligned} \mathrm{cov}(\mu ([0,T]),\mu ([nT,(n+1)T]))\sim _{n\rightarrow \infty } \frac{2\psi (-i2)T^2}{3n}, \end{aligned}$$

so the series \(\sum _{n\ge 0}\mathrm{cov}(\mu ([0,T]),\mu ([nT,(n+1)T]))\) diverges.

2 Preliminaries

Let \(\Sigma =\{0,1\}^{\mathbb {N}_+}\) be the dyadic symbolic space. For \(\mathbf{i}=i_1i_2\cdots \in \Sigma \) and \(n\ge 1\) define \(\mathbf{i}|_n=i_1\cdots i_n\). Let \(\rho \) be the standard metric on \(\Sigma \), that is

$$\begin{aligned} \rho (\mathbf{i},\mathbf{j})=2^{-\inf \{n\ge 1: \mathbf{i}|_n\ne \mathbf{j}|_n\}},\quad \ \mathbf{i},\mathbf{j}\in \Sigma . \end{aligned}$$

Then \((\Sigma ,\rho )\) forms a compact metric space. Denote by \(\mathcal {B}\) its Borel \(\sigma \)-algebra.

For \(\mathbf{i}=i_1i_2\cdots \in \Sigma \) define

$$\begin{aligned} \pi (\mathbf{i})=\sum _{j=1}^\infty i_j 2^{-j}. \end{aligned}$$

Then \(\pi \) is a continuous map from \(\Sigma \) to \([0,1]\).

For \(n\ge 1\) let \(\Sigma _n=\{0,1\}^n\), and use the convention that \(\Sigma _0=\{\emptyset \}\).

For \(n\ge 0\) and \(i=i_1\cdots i_n \in \Sigma _n\) define

$$\begin{aligned}{}[i]=\{\mathbf{i}\in \Sigma : \mathbf{i}|_n=i\}\quad \text { and } \quad I_i=\overline{\pi ([i])}, \end{aligned}$$

with the convention that \(\mathbf{i}|_0=\emptyset ,\, [\emptyset ]=\Sigma \) and \(I_\emptyset =[0,1]\).
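The symbolic setup above is elementary to realize concretely; the following Python sketch (purely illustrative, with finite words standing in for points of \(\Sigma \)) implements the coding map \(\pi \), the metric \(\rho \) and the dyadic intervals \(I_i\):

```python
from fractions import Fraction

def pi(word):
    """Coding map: send a finite word i_1 i_2 ... over {0,1} to sum_j i_j 2^{-j}.

    Finite words stand in for the (eventually zero) infinite sequences of Sigma."""
    return sum(Fraction(int(b), 2 ** (j + 1)) for j, b in enumerate(word))

def rho(i, j):
    """Standard metric on Sigma: 2^{-n}, n the first (1-based) index where i, j differ."""
    assert len(i) == len(j)  # compare truncations of equal length
    for n, (a, b) in enumerate(zip(i, j), start=1):
        if a != b:
            return Fraction(1, 2 ** n)
    return Fraction(0)

def cylinder_interval(word):
    """The closed dyadic interval I_i = closure(pi([i])) for a word i of length n."""
    left = pi(word)
    return (left, left + Fraction(1, 2 ** len(word)))
```

For instance, \(\pi (01)=1/4\) and \(I_{01}=[1/4,1/2]\), and any point of the cylinder \([01]\) is mapped into \(I_{01}\).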

Denote by \(\Sigma _*=\cup _{n\ge 0} \Sigma _n\). For \(i\in \Sigma _*\) define

$$\begin{aligned} W_i=Q(V^I(I_i))\quad \text { and }\quad Z_i= \Vert {\mu }^{I_i}\Vert . \end{aligned}$$

Then from (1.13) we have for any \(n\ge 1\),

$$\begin{aligned} 2^n Z=\sum _{i\in \Sigma _n} W_i Z_i, \end{aligned}$$
(2.1)

where the random variables \(\{W_i,i\in \Sigma _n\}\) are identically distributed, \(\{Z_i,i\in \Sigma _n\}\) all have the same law as \(Z\), and for each \(i\in \Sigma _n,\, W_i\) and \(Z_i\) are independent.

3 Proof of Theorem 1.1

3.1. First we prove (i) \(\Leftrightarrow \) (ii) and the \(L^1\) convergence. Clearly (i) implies (ii). We suppose that \({\mathbb {E}}(Z)=c>0\). For any positive finite Borel measure \(m\) on \(I\) and \(t>0\) define

$$\begin{aligned} m_t(f)=\frac{1}{|I|}\int _I f(x) \cdot Q(V_{1/t}^I(x)) \, m(\mathrm{d}x), \ f\in C_0(I). \end{aligned}$$

Following the same argument as in Lemma 1.2, \(m_t\) is a measure-valued right-continuous martingale, thus the Kahane operator \(EQ\):

$$\begin{aligned} EQ(m)={\mathbb {E}}\left( \,\,\lim _{t\rightarrow \infty } m_t\right) \end{aligned}$$

is well-defined. Denote by \(\ell \) the Lebesgue measure restricted to \([0,1]\). Then we have \(EQ(\ell )=c\ell \) since \({\mathbb {E}}(\lim _{t\rightarrow \infty }\ell _t(J))=c\ell (J)\) for any compact subinterval \(J\subset I\). From [20] we know that \(EQ\) is a projection, so \(EQ(EQ(\ell ))=EQ(\ell )\). This gives \(c=c^2\), hence \(c=1\). Consequently, since the limit of the positive martingale \(\Vert \mu ^I_{1/t}\Vert \) with expectation 1 has expectation 1 as well, the convergence also holds in \(L^1\) norm.

The rest of the proof adapts to our context, thanks to (1.13), the approach used by Kahane in [21] for canonical cascades.

3.2. Now we prove that (ii) implies (iii). From (2.1) we have that

$$\begin{aligned} 2Z=W_0Z_0+W_1Z_1. \end{aligned}$$
(3.1)

Assume that \({\mathbb {E}}(Z)>0\). For \(0<q<1\) the function \(x\mapsto x^q\) is sub-additive, hence (3.1) yields

$$\begin{aligned} 2^q{\mathbb {E}}(Z^q)\le {\mathbb {E}}(W_0^qZ_0^q)+{\mathbb {E}}(W_1^qZ_1^q)=2{\mathbb {E}} (W_0^q){\mathbb {E}}(Z^q). \end{aligned}$$
(3.2)

Since \({\mathbb {E}}(Z)>0\) implies \({\mathbb {E}}(Z^q)>0\), we get from (3.2), (1.10) and Lemma 1.1 that

$$\begin{aligned} 2^q \le 2{\mathbb {E}}(W_0^q)=2e^{\psi (-iq)\log 2}=2^{\psi (-iq)+1}. \end{aligned}$$

This implies \(\varphi \ge 0\) on the interval \([0,1]\); since \(\varphi (1)=0\), it follows that \(\varphi '(1^-)\le 0\). To prove \(\varphi '(1^-)<0\) we need the following lemma.

Lemma 3.1

Let \(X_i=W_iZ_i\) for \(i=0,1\). There exists \(\epsilon >0\) such that

$$\begin{aligned} {\mathbb {E}}(X_0^q \mathbf{1}_{\{X_0\le X_1\}}) \ge \epsilon {\mathbb {E}}(X_0^q) \,\quad \text { for } 0\le q \le 1. \end{aligned}$$

Proof

If \({\mathbb {E}}(X_0^q \mathbf{1}_{\{X_0\le X_1\}})\) is strictly positive for all \(q\in [0,1]\), then the conclusion follows easily, since both expectations, as functions of \(q\), are continuous on \([0,1]\).

Suppose that there exists \(q\in (0,1]\) such that \({\mathbb {E}}(X_0^q \mathbf{1}_{\{X_0\le X_1\}})=0\), then almost surely either \(X_0>X_1\) or \(0=X_0\le X_1\). Due to the symmetry of \(X_0\) and \(X_1\) this actually implies that almost surely either \(X_0=X_1=0\), or \(X_0=0,X_1>0\), or \(X_1=0,X_0>0\). This yields

$$\begin{aligned} 2^q{\mathbb {E}}(Z^q)={\mathbb {E}}(X_0^q)+{\mathbb {E}}(X_1^q)=2{\mathbb {E}}(W_0^q){\mathbb {E}}(Z^q) \, \quad \text { for } 0\le q\le 1. \end{aligned}$$

So we have \(\psi (-iq)=q-1\) for \(q\in [0,1]\). Then from \(\frac{\partial ^2}{\partial q^2}\psi (-iq)=0\) we get that \(\sigma ^2=0\) and \(\nu \equiv 0\), which contradicts our assumption. \(\square \)

Now as shown in [21], by applying the inequality \((x+y)^q\le x^q +qy^q\) for \(x\ge y>0\) and \(0<q<1\) we get from (3.1) and Lemma 3.1 that

$$\begin{aligned} 2^q{\mathbb {E}}(Z^q) \le 2{\mathbb {E}}(W_0^q){\mathbb {E}}(Z^q) -(1-q)\epsilon {\mathbb {E}}(W_0^q) {\mathbb {E}}(Z^q). \end{aligned}$$

This implies

$$\begin{aligned} \varphi (q) +\log _2\left( 1-\frac{(1-q)\epsilon }{2}\right) \ge 0\quad \text { on } [0,1]. \end{aligned}$$

Then, since \(\varphi (1)=0\), it follows that \(\varphi '(1^-)+\epsilon /(2\log 2)\le 0\), thus \(\varphi '(1^-)<0\).

3.3. Finally we prove that (iii) implies (ii). Assume that \(\varphi '(1^-)<0\). For \(i\in \Sigma _*\) and \(n\ge 1\) define

$$\begin{aligned} Y_{n,i}= \mu _{2^{-n}}^I(I_i). \end{aligned}$$

Also denote by \(Y_n=\mu _{2^{-n}}^I(I)\). Then for any \(m\ge 1\) and \(n\ge m+1\) we have

$$\begin{aligned} Y_n=\sum _{i\in \Sigma _m} Y_{n,i}. \end{aligned}$$
(3.3)

We need the following lemma from [21].

Lemma 3.2

There exists a constant \(q_0\in (0,1)\) such that for any \(q\in (q_0,1)\) and any finite sequence \(x_1,\ldots , x_{k}>0\),

$$\begin{aligned} \left( \,\,\sum _{i=1,\ldots , k}x_i\right) ^q \ge \sum _{i=1,\ldots , k} x_i^q -(1-q)\sum _{i\ne j} (x_ix_j)^{q/2}. \end{aligned}$$
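As a sanity check, the inequality of Lemma 3.2 can be tested numerically for \(q\) close to 1 (an illustrative Python sketch; the threshold \(q_0\) is not computed here, we simply test a few sample vectors at \(q\) near 1):

```python
def lemma32_gap(x, q):
    """LHS minus RHS of the inequality
    (sum x_i)^q >= sum x_i^q - (1-q) * sum_{i != j} (x_i x_j)^{q/2},
    the sum over i != j running over ordered pairs."""
    lhs = sum(x) ** q
    cross = sum((x[i] * x[j]) ** (q / 2)
                for i in range(len(x)) for j in range(len(x)) if i != j)
    rhs = sum(xi ** q for xi in x) - (1 - q) * cross
    return lhs - rhs
```

A positive gap confirms the inequality on the tested instances; both sides are homogeneous of degree \(q\), so one may normalize \(\sum x_i=1\).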

Applying Lemma 3.2 to (3.3) we get for any \(q\in (q_0,1)\),

$$\begin{aligned} Y_n^q \ge \sum _{i\in \Sigma _m} Y_{n,i}^q-(1-q)\sum _{i\ne j \in \Sigma _m} Y_{n,i}^{q/2}Y_{n,j}^{q/2}. \end{aligned}$$

Taking expectation from both sides we get

$$\begin{aligned} {\mathbb {E}}(Y_n^q) \ge \sum _{i\in \Sigma _m} {\mathbb {E}}(Y_{n,i}^q)-(1-q)\sum _{i\ne j \in \Sigma _m} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) . \end{aligned}$$
(3.4)

Let

$$\begin{aligned} \mathcal {J}_1&= \left\{ (i,j)\in \Sigma _m^2: i\ne j,\ \mathrm{dist}(I_i,I_j)=0\right\} \\ \mathcal {J}_2&= \left\{ (i,j)\in \Sigma _m^2: \mathrm{dist}(I_i,I_j)\ge 2^{-m}\right\} . \end{aligned}$$

It is easy to check that \(\#\mathcal {J}_1=2(2^m-1)\) and \(\#\mathcal {J}_2=(2^m-1)(2^m-2)\). Then by using Hölder’s inequality we get

$$\begin{aligned} \sum _{i\ne j \in \Sigma _m} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right)&= \sum _{(i,j)\in \mathcal {J}_1}{\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) +\sum _{(i,j)\in \mathcal {J}_2} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) \nonumber \\&\le 2(2^m-1){\mathbb {E}}\left( Y_{n,\bar{0}}^q\right) +\sum _{(i,j)\in \mathcal {J}_2} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) ,\qquad \end{aligned}$$
(3.5)

where \(\bar{0}=0\cdots 0\in \Sigma _m\). We need the following lemma:

Lemma 3.3

There exists a constant \(C\) such that for any \((i,j)\in \mathcal {J}_2\) and \(q\in (0,1)\),

$$\begin{aligned} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) \le C\cdot 2^{(1+\varphi (q))m} \cdot {\mathbb {E}}\left( \mu ^{I_{\bar{0}}}_{2^{-n}}(I_{\bar{0}})^{q/2}\right) ^2. \end{aligned}$$

This gives

$$\begin{aligned} \sum _{(i,j)\in \mathcal {J}_2} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) \le (2^m-1)(2^m-2)\cdot C \cdot 2^{(1+\varphi (q))m} \cdot {\mathbb {E}}\left( \mu ^{I_{\bar{0}}}_{2^{-n}}(I_{\bar{0}})^{q/2}\right) ^2. \end{aligned}$$

First notice that \(\mu ^{I_{\bar{0}}}_{2^{-n}}(I_{\bar{0}})\) has the same law as \(Y_{n-m}\). Then combining (3.4) and (3.5), and using the fact that \( {\mathbb {E}}(Y_n^q)\le {\mathbb {E}}(Y_{n-m}^q)\le 1 \), we get

$$\begin{aligned} {\mathbb {E}}(Y_n^q)\frac{1-e^{-\varphi (q)m\log 2}}{1-q} \le 2+ C(2^m-1){\mathbb {E}}(Y_{n-m}^{q/2})^2. \end{aligned}$$

By letting \(q\rightarrow 1^-\) we obtain

$$\begin{aligned} -\varphi '(1^-)m\log 2 \le 2+C(2^m-1) {\mathbb {E}}(Y_{n-m}^{1/2})^2. \end{aligned}$$

Choosing \(m\) large enough so that \(\varphi '(1^-)m\log 2+2<0\), we get \(\inf _{n\ge 1} {\mathbb {E}}(Y_n^{1/2})>0\). Consequently \({\mathbb {E}}(Z^{1/2})>0\), thus \({\mathbb {E}}(Z)>0\). \(\square \)

3.1 Proof of Lemma 3.3

The proof can be deduced from [2, Lemma 3, p. 495–496]. For the reader’s convenience we present one here. Write

$$\begin{aligned} V_{2^{-n}}^I(t)=V_{2^{-m}}^I(t) \cup V^{m}_{n}(t), \end{aligned}$$

where \(V^{m}_{n}(t)=V_{2^{-n}}^I(t){\setminus }V_{2^{-m}}^I(t)\). Define the random measure

$$\begin{aligned} \mu ^m_n(t)=\frac{1}{|I|}\cdot Q(V^{m}_{n}(t))\, \mathrm{d}t, \quad t\in I. \end{aligned}$$

Then for \(i\in \Sigma _m\) we have

$$\begin{aligned} \mu _{2^{-n}}^I(I_i) \le \left( \,\,\sup _{t\in I_i} e^{\Lambda \left( V_{2^{-m}}^I(t)\right) }\right) \mu ^m_n(I_i). \end{aligned}$$

Notice that for \((i,j)\in \mathcal {J}_2,\, \mu ^m_n(I_i)\) and \(\mu ^m_n(I_j)\) are independent, and they are independent of \(\sup _{t\in I_i} e^{\Lambda (V_{2^{-m}}^I(t))}\) and \(\sup _{t\in I_j} e^{\Lambda (V_{2^{-m}}^I(t))}\). Thus

$$\begin{aligned} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right)&\le {\mathbb {E}}\left( \,\, \prod _{l=i,j} \sup _{t\in I_l} e^{q\Lambda \left( V_{2^{-m}}^I(t)\right) /2} \cdot \mu ^m_n(I_l)^{q/2}\right) \nonumber \\&= \prod _{l=i,j} {\mathbb {E}}\left( \mu ^m_n(I_l)^{q/2}\right) \cdot {\mathbb {E}}\left( \,\, \prod _{l=i,j} \sup _{t\in I_l} e^{q\Lambda (V_{2^{-m}}^I(t))/2}\right) \nonumber \\&\le \prod _{l=i,j} {\mathbb {E}}\left( \mu ^m_n(I_l)^{q/2}\right) \cdot \prod _{l=i,j} {\mathbb {E}}\left( \,\,\sup _{t\in I_l} e^{q\Lambda \left( V_{2^{-m}}^I(t)\right) }\right) ^{1/2},\qquad \end{aligned}$$
(3.6)

where the last inequality comes from Hölder’s inequality.

Take \(J\in \{I_i,I_j\}\) with \(J=[t_0,t_1]\). For \(t\in J\) we can divide \(V_{2^{-m}}^I(t)\) into three disjoint parts:

$$\begin{aligned} V_{2^{-m}}^I(t)=V^I(J)\cup V^{J,l}(t) \cup V^{J,r}(t), \end{aligned}$$
(3.7)

where

$$\begin{aligned} V^{J,l}(t)&= \{z=x+iy\in V(t): 2^{-m} \le y < 2(t_1-x)\},\\ V^{J,r}(t)&= \{z=x+iy\in V(t): 2^{-m} \le y \le 2(x-t_0)\}. \end{aligned}$$

We need the following lemma.

Lemma 3.4

Let \(s\in \{l,r\}\). For \(q\in I_\nu \) there exists a constant \(C_q<\infty \) such that

$$\begin{aligned} {\mathbb {E}}\left( \,\,\sup _{t\in J}e^{q\Lambda (V^{J,s}(t))} \right) \le C_q; \end{aligned}$$

for \(q\in {\mathbb {R}}\) there exists a constant \(c_q>0\) such that

$$\begin{aligned} {\mathbb {E}}\left( \inf _{t\in J} e^{q\Lambda (V^{J,s}(t))}\right) \ge c_q. \end{aligned}$$

By using Lemma 3.4 we get from (3.7) that for \(q\in I_\nu \cap (0,\infty )\),

$$\begin{aligned} {\mathbb {E}}\left( \,\,\sup _{t\in J} e^{q\Lambda (V^{I}_{2^{-m}}(t))}\right) \le C_q^2 \cdot {\mathbb {E}}(e^{q\Lambda (V^{I}(J))})=C_q^2 \cdot 2^{m\psi (-iq)}. \end{aligned}$$
(3.8)

Also notice that for \(t\in J\) we have

$$\begin{aligned} V^{m}_n(t)\cup V^{J,l}(t) \cup V^{J,r}(t) =V^{I}_{2^{-n}}(t). \end{aligned}$$

So for any \(q'\in {\mathbb {R}}\) we have

$$\begin{aligned} \mu ^{J}_{2^{-n}}(J)^{q'} \ge \mu ^m_n(J)^{q'} \cdot \left( \inf _{t\in J} e^{q'\Lambda (V^{J,l}(t))} \right) \left( \inf _{t\in J} e^{q'\Lambda (V^{J,r}(t))}\right) . \end{aligned}$$

Applying Lemma 3.4 we get that

$$\begin{aligned} {\mathbb {E}}\left( \mu ^m_n(J)^{q/2}\right) \le c_q^{-2}\cdot 2^{-mq/2}\cdot {\mathbb {E}}\left( \mu ^{J}_{2^{-n}}(J)^{q/2}\right) . \end{aligned}$$
(3.9)

Together with (3.6) and (3.8) this implies

$$\begin{aligned} {\mathbb {E}}\left( Y_{n,i}^{q/2}Y_{n,j}^{q/2}\right) \le C_q^2 c_q^{-2} \cdot 2^{m(1+\varphi (q))} \cdot \prod _{l=i,j}{\mathbb {E}}\left( \mu ^{I_l}_{2^{-n}}(I_l)^{q/2}\right) . \end{aligned}$$

From the proof of Lemma 3.4 one can choose \(C_q c_q^{-1}\) to be an increasing function of \(q\), and since \(1\in I_\nu \), we get the conclusion by taking \(C=C_1^2c_1^{-2}\). \(\square \)

3.1.1 Proof of Lemma 3.4

First let \(q\in I_\nu \). We have

$$\begin{aligned} {\mathbb {E}}(\Lambda (V^{J,r}(t)))=a\lambda (V^{J,r}(t)). \end{aligned}$$

From the fact that \(\lambda (V^{J,r}(t))=(t-t_0)/|J|\) we get

$$\begin{aligned} e^{q\Lambda (V^{J,r}(t))} \le e^{|aq|} \cdot e^{qM_t}, \end{aligned}$$

where \(M_t=\Lambda (V^{J,r}(t))-a (t-t_0)/|J|\) is a martingale. As \(x\mapsto e^{xq/2}\) is convex, \(e^{qM_t/2}\) is a positive submartingale. Due to Doob’s \(L^2\)-inequality we get

$$\begin{aligned} {\mathbb {E}}\left( \,\,\sup _{t\in J}e^{qM_t} \right) \le 4 \sup _{t\in J} {\mathbb {E}} (e^{qM_t}) \le 4e^{|aq|+|\psi (-iq)|}. \end{aligned}$$

This implies

$$\begin{aligned} {\mathbb {E}}\left( \,\,\sup _{t\in J} e^{q\Lambda (V^{J,r}(t))} \right) \le C_q, \end{aligned}$$

where the constant \(C_{q}\) only depends on \(q\).

Now let \(q\in {\mathbb {R}}\). Notice that

$$\begin{aligned}{}[0,1]\ni t\mapsto \Lambda (V^{J,r}(t_0+(t_1-t_0)t)) \end{aligned}$$

is a Lévy process restricted to \([0,1]\); thus for \(X_q=\inf _{t\in J} e^{q\Lambda (V^{J,r}(t))}\) we must have

$$\begin{aligned} \mathbb {P}\{X_q > \epsilon _q\}>0 \end{aligned}$$

for some \(\epsilon _q\in (0,1)\); otherwise this would contradict the fact that almost surely the sample paths of a Lévy process are càdlàg (hence bounded on compact intervals). Then

$$\begin{aligned} {\mathbb {E}}\left( \,\,\inf _{t\in J} e^{q\Lambda (V^{J,r}(t))}\right) \ge \mathbb {P}\{X_q > \epsilon _q\} \cdot \epsilon _q>0. \end{aligned}$$

The argument for \(V^{J,l}(t)\) is the same. \(\square \)

4 Proof of Theorem 1.2

We only need to prove that for \(q>1\), \(0<{\mathbb {E}}(Z^q)<\infty \) implies that \(q\in I_\nu \) and \(\varphi (q)<0\); the rest of the result comes from [2, Lemma 3].

Because the function \(x^q\) is super-additive, one has

$$\begin{aligned} 2^{q}Z^{q}\ge W_0^qZ_0^q+W_1^{q}Z_1^{q}, \end{aligned}$$

and the inequality is strict if and only if both \(W_0Z_0>0\) and \(W_1Z_1>0\). So if \(W_0Z_0W_1Z_1>0\) with positive probability, then

$$\begin{aligned} 2^q{\mathbb {E}}(Z^q) >2{\mathbb {E}}(W_0^q){\mathbb {E}}(Z^q), \end{aligned}$$

that is \({\mathbb {E}}(W_0^q)<2^{q-1}\), which implies that \(q\in I_\nu \) and \(\varphi (q)<0\). Otherwise \(W_0Z_0W_1Z_1=0\) almost surely, thus \(\psi (-iq)=q-1\) for all \(q\in I_\nu \). This yields that \(\sigma ^2=0\) and \(\nu \equiv 0\), which contradicts our assumption.

5 Proof of Theorem 1.3

5.1 Proof of (1)

According to Theorem 1.2, \((\alpha )\) implies that \(I_\nu \supset [0,\infty )\) and \(\varphi (q)<0\) for all \(q>1\). Recall that \(\varphi (q)=\psi (-iq)-q+1\) and

$$\begin{aligned} \psi (-iq)=aq+\frac{1}{2}\sigma ^2q^2+ \int _{\mathbb {R}}(e^{q x}-1-q x \mathbf{1}_{|x|\le 1}) \nu (\mathrm{d}x). \end{aligned}$$

Suppose that \(\nu ([\epsilon ,\infty ))>0\) for some \(\epsilon >0\); then one can find constants \(c_1,c_2>0\) such that

$$\begin{aligned} \psi (-iq)\ge c_1 e^{q\epsilon }-c_2q \end{aligned}$$

for all large \(q\), which contradicts \(\varphi (q)<0\) for all \(q>1\). It is also easy to see that \(\varphi (q)<0\) for all \(q>1\) implies \(\sigma =0\). Thus, using the expression of the normalizing constant \(a\) (see (1.8)), we may write

$$\begin{aligned} \varphi (q)=1-q +\int _{-\infty }^0(e^{qx}-1+q(1-e^x)) \, \nu (\mathrm{d}x). \end{aligned}$$
(5.1)

It is easy to check that the integral term in (5.1) is non-negative, and grows faster than any multiple of \(q\) if \(\int _{-\infty }^0 1\wedge |x|\, \nu (\mathrm{d}x)=\infty \), in which case we cannot have \(\varphi (q)<0\) for all \(q>1\). If \(\int _{-\infty }^0 1\wedge |x|\, \nu (\mathrm{d}x)<\infty \), then

$$\begin{aligned} \varphi (q)=(\gamma -1) q+1-\int _{-\infty }^0 (1-e^{qx}) \, \nu (\mathrm{d}x), \end{aligned}$$
(5.2)

where

$$\begin{aligned} \gamma =\int _{-\infty }^0 (1-e^x) \, \nu (\mathrm{d}x). \end{aligned}$$

Clearly \(\varphi (q)<0\) for all \(q>1\) implies that \(\gamma -1 \le 0\).

Conversely, if \((\beta )\) holds, then \(I_\nu \supset [0,\infty )\), since \(\nu \) is carried by \((-\infty ,0]\), so that \(\int _{|x|>1} e^{qx} \nu (\mathrm{d}x)<\infty \) for any \(q>0\). We may write \(\varphi (q)\) as in (5.2). If \(\gamma <1\), then \(\lim _{q\rightarrow \infty } \varphi (q)=-\infty \) since \(\varphi (q)\sim (\gamma -1)q\) at \(\infty \). If \(\gamma =1\), then

$$\begin{aligned} \int _{-\infty }^0 (1-e^{qx}) \, \nu (\mathrm{d}x)> \int _{-\infty }^0 (1-e^{x}) \, \nu (\mathrm{d}x)=\gamma =1 \end{aligned}$$

for any \(q>1\). Due to the convexity of \(\varphi \), it follows that in both cases \(\varphi '(1)<0\) and \(\varphi (q)<0\) for all \(q>1\), hence we get \((\alpha )\) from Theorems 1.1 and 1.2.

5.2 Proof of (2)

The proof is inspired by the approach used by Kahane [21] for canonical cascades. However, here again the correlations between \(Z_0\) and \(Z_1\) create complications. For the sharp upper bound of \({\displaystyle \limsup \nolimits _{n\rightarrow \infty }}\frac{\log {\mathbb {E}}(Z^n)}{n\log n}\), we use a new approach consisting in writing an explicit formula for the moments of positive integer orders of \(Z\) and then estimating them from above by using Dirichlet’s multiple integral formula. For the lower bound of \({\displaystyle \liminf \nolimits _{n\rightarrow \infty }}\frac{\log {\mathbb {E}}(Z^n)}{n\log n}\), we first show that under \((\beta )\) the inequality \({\mathbb {E}}(\mu (I_0)^k\mu (I_1)^l)\ge {\mathbb {E}}(\mu (I_0)^k){\mathbb {E}}(\mu (I_1)^l)\) holds for any non-negative integers \(k\) and \(l\), and then follow [21].

From \((\beta )\) we have that for \(q\ge 0\),

$$\begin{aligned} \psi (-iq)=\gamma \cdot q-\int _{-\infty }^0 (1-e^{qx}) \, \nu (\mathrm{d}x). \end{aligned}$$

We have almost surely

$$\begin{aligned} \mu (I)^n&= \lim _{\epsilon \rightarrow 0} \mu _\epsilon (I)^n\\&= \lim _{\epsilon \rightarrow 0}\left( \,\,\int _{t\in I} e^{\Lambda (V_\epsilon ^I(t))} \mathrm{d}t\right) ^n. \end{aligned}$$

Thus we get from the martingale convergence theorem, Fubini’s theorem and the dominated convergence theorem that

$$\begin{aligned} {\mathbb {E}}(\mu (I)^n) = \int _{t_1,\ldots , t_{n}\in I} \lim _{\epsilon \rightarrow 0} {\mathbb {E}}\left( \,\, \prod _{j=1}^{n} e^{\Lambda (V_\epsilon ^I(t_j))} \right) \mathrm{d}t_1\cdots \mathrm{d} t_{n}. \end{aligned}$$

For integers \( k\le j\) define

$$\begin{aligned} \alpha (j,k)&= \psi (-i(j-k+1))+\psi (-i((j-1)-(k+1)+1))\\&-\psi (-i((j-1)-k+1))-\psi (-i(j-(k+1)+1))\\&= \int _{-\infty }^0 e^{(j-k-1)x}(1-e^x)^2 \, \nu (\mathrm{d}x). \end{aligned}$$

Fix \(0<t_1<\cdots <t_n <1\). Then for \(\epsilon \) small enough one gets from [2, Lemma 1] that

$$\begin{aligned} \log {\mathbb {E}}\left( \,\,\prod _{j=1}^{n} e^{\Lambda (V_\epsilon ^I(t_j))} \right) =\sum _{k=1}^{n-1}\sum _{j=k+1}^n \alpha (j,k)\cdot \log \frac{1}{t_j-t_k}. \end{aligned}$$

This gives

$$\begin{aligned} {\mathbb {E}}(\mu (I)^n)= n! I_n, \end{aligned}$$

where

$$\begin{aligned} I_n=\int _{0<t_1<\cdots <t_n<1} \prod _{k=1}^{n-1}\prod _{j=k+1}^{n} (t_j-t_k)^{-\alpha (j,k)} \mathrm{d}t_1\cdots \mathrm{d}t_n. \end{aligned}$$

Let us use the change of variables \(x_1=t_1\) and \(x_k=t_k-t_{k-1}\) for \(k=2,\ldots ,n\). Then \(I_n\) becomes

$$\begin{aligned} I_n=\int _{x_1+\cdots +x_n\le 1} \prod _{k=1}^{n-1}\prod _{j=k+1}^n\left( \,\,\sum _{l=k+1}^j x_l\right) ^{-\alpha (j,k)} \mathrm{d}x_1\cdots \mathrm{d}x_n. \end{aligned}$$

For every integer \(l\) define

$$\begin{aligned} \gamma _l=\int _{-\infty }^0 e^{lx}(1-e^x)^2 \, \nu (\mathrm{d}x) \end{aligned}$$

so that

$$\begin{aligned} \alpha (j,k)=\gamma _{j-k-1}. \end{aligned}$$
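The identity \(\alpha (j,k)=\gamma _{j-k-1}\) is a second difference \(\psi (-i(m+1))+\psi (-i(m-1))-2\psi (-im)\) with \(m=j-k\), in which the affine part of \(\psi \) cancels; for a toy discrete Lévy measure \(\nu =w\,\delta _{x_0}\) with \(\sigma =0\) (purely illustrative values of \(x_0\) and \(w\)) it can be checked directly:

```python
from math import exp

# Toy Lévy data (illustrative): sigma = 0 and nu = w * delta_{x0} with x0 < 0, |x0| <= 1.
x0, w = -0.7, 1.3

def psi(q, a=0.0):
    """psi(-iq) for sigma = 0, nu = w*delta_{x0}; the drift a is irrelevant here
    because all affine-in-q terms cancel in the second difference."""
    return a * q + w * (exp(q * x0) - 1 - q * x0)

def alpha(j, k):
    """alpha(j,k) as the combination of psi values in the text (m = j - k)."""
    return psi(j - k + 1) + psi(j - k - 1) - 2 * psi(j - k)

def gamma_l(l):
    """gamma_l = integral of e^{lx} (1-e^x)^2 nu(dx) for nu = w*delta_{x0}."""
    return w * exp(l * x0) * (1 - exp(x0)) ** 2
```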

Then we have

$$\begin{aligned} \prod _{k=1}^{n-1}\prod _{j=k+1}^n\left( \,\,\sum _{l=k+1}^j x_l\right) ^{-\alpha (j,k)}=\prod _{l=1}^{n-1} \left( \,\,\prod _{k=1}^{n-l}\left( \,\,\sum _{j=k+1}^{k+l}x_j\right) \right) ^{-\gamma _{l-1}}. \end{aligned}$$

Since \(x_j\in (0,1)\), it is easy to deduce that for \(l=1,\ldots ,n-1\),

$$\begin{aligned} \prod _{k=1}^{n-l}\left( \,\,\sum _{j=k+1}^{k+l}x_j\right) \ge \prod _{j=2}^{n} x_j. \end{aligned}$$
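Indeed, each factor dominates \(x_{k+1}\), so the product dominates \(\prod _{j=2}^{n-l+1}x_j\), which in turn dominates \(\prod _{j=2}^{n}x_j\) since the remaining factors lie in \((0,1)\). A brute-force numerical check (illustrative sample point of the simplex):

```python
def grouped_product(x, l):
    """prod_{k=1}^{n-l} (x_{k+1} + ... + x_{k+l}); x is the 0-based list (x_1, ..., x_n)."""
    n = len(x)
    p = 1.0
    for k in range(1, n - l + 1):
        p *= sum(x[k:k + l])  # x[k:k+l] is (x_{k+1}, ..., x_{k+l})
    return p

def tail_product(x):
    """prod_{j=2}^{n} x_j."""
    p = 1.0
    for xj in x[1:]:
        p *= xj
    return p
```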

This implies

$$\begin{aligned} I_n\le \int _{x_1+\cdots +x_n\le 1} \left( \,\,\prod _{j=2}^{n}x_j\right) ^{-\sum _{l=1}^{n-1} \gamma _{l-1}}\, \mathrm{d}x_1\cdots \mathrm{d}x_n. \end{aligned}$$

Notice that

$$\begin{aligned} \sum _{l=1}^{n-1} \gamma _{l-1}=\int _{-\infty }^0 (1-e^{(n-1)x})(1-e^x) \, \nu (\mathrm{d}x)=:\gamma '_{n-1}. \end{aligned}$$

Then we get from Dirichlet’s multiple integral formula that

$$\begin{aligned}&\int _{x_1+\cdots +x_n\le 1} \left( \,\,\prod _{j=2}^{n}x_j\right) ^{-\gamma '_{n-1}} \, \mathrm{d}x_1\cdots \mathrm{d}x_n\\&\quad =\int _{x_2+\cdots +x_n\le 1} \left( 1-\sum _{j=2}^nx_j\right) \cdot \left( \,\,\prod _{j=2}^{n}x_j\right) ^{-\gamma '_{n-1}} \, \mathrm{d}x_2\cdots \mathrm{d}x_n\\&\quad =\frac{\Gamma (1-\gamma _{n-1}')^{n-1}\Gamma (2)}{\Gamma ((n-1)(1-\gamma _{n-1}')+2 )}. \end{aligned}$$
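The last equality can be cross-checked on small cases: for \(n=2\) and \(\gamma '_{1}=1/2\), \(\int _{x_1+x_2\le 1} x_2^{-1/2}\,\mathrm{d}x_1\mathrm{d}x_2=\int _0^1 t^{-1/2}(1-t)\,\mathrm{d}t=4/3\), which matches \(\Gamma (1/2)\Gamma (2)/\Gamma (5/2)\). A small numerical sketch (illustrative):

```python
from math import gamma

def dirichlet_rhs(n, g):
    """Gamma(1-g)^{n-1} * Gamma(2) / Gamma((n-1)*(1-g) + 2), the closed form above."""
    return gamma(1 - g) ** (n - 1) * gamma(2) / gamma((n - 1) * (1 - g) + 2)

def simplex_integral_n2(g, steps=100000):
    """Midpoint-rule evaluation of int_{x1+x2<=1} x2^{-g} dx1 dx2
    = int_0^1 t^{-g} (1-t) dt, for the case n = 2 and 0 < g < 1."""
    h = 1.0 / steps
    return sum(((k + 0.5) * h) ** (-g) * (1 - (k + 0.5) * h) * h for k in range(steps))
```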

Since \(\gamma '_n\rightarrow \gamma \) as \(n\rightarrow \infty \), by applying Stirling’s formula we finally get

$$\begin{aligned} \limsup _{n\rightarrow \infty } \frac{\log {\mathbb {E}}(Z^n)}{n\log n} \le 1-(1-\gamma )=\gamma . \end{aligned}$$
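The Stirling rate behind this bound can be observed numerically (an illustrative sketch; we freeze \(\gamma '_{n-1}\) at a fixed stand-in value \(g\), and use \(\Gamma (2)=1\)):

```python
from math import lgamma, log

g = 0.4  # illustrative stand-in for the limit gamma of gamma'_{n-1}

def ratio(n):
    """log( n! * Gamma(1-g)^{n-1} / Gamma((n-1)*(1-g)+2) ) / (n log n);
    by Stirling's formula this tends to 1 - (1 - g) = g as n grows."""
    num = lgamma(n + 1) + (n - 1) * lgamma(1 - g) - lgamma((n - 1) * (1 - g) + 2)
    return num / (n * log(n))
```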

On the other hand, we have

$$\begin{aligned} \mu (I)^n=(\mu (I_0)+\mu (I_1))^n=\sum _{m=0}^n \frac{n!}{m!(n-m)!}\mu (I_0)^m\mu (I_1)^{n-m}. \end{aligned}$$
(5.3)

For \(1\le m\le n-1\) we have

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}(\mu (I_0)^m\mu (I_1)^{n-m})=m!(n-m)!\\&\quad \int _{0<t_1<\cdots <t_m<1/2<t_{m+1}<\cdots <t_n<1} \prod _{k=1}^{n-1}\prod _{j=k+1}^{n} (t_j-t_k)^{-\alpha (j,k)} \, \mathrm{d}t_1\cdots \mathrm{d}t_n. \end{aligned} \end{aligned}$$

Also

$$\begin{aligned} \prod _{k=1}^{n-1}\prod _{j=k+1}^{n} (t_j-t_k)^{-\alpha (j,k)}&= \prod _{k=1}^{m-1}\prod _{j=k+1}^{m}\prod _{k=1}^{m} \prod _{j=m+1}^{n}\prod _{k=m+1}^{n-1}\prod _{j=k+1}^{n} (t_j-t_k)^{-\alpha (j,k)}\\&\ge \prod _{k=1}^{m-1}\prod _{j=k+1}^{m}\prod _{k=m+1}^{n-1}\prod _{j=k+1}^{n} (t_j-t_k)^{-\alpha (j,k)}, \end{aligned}$$

where the inequality uses the fact that \(t_j-t_k\le 1\) and \(\alpha (j,k)\ge 0\). This implies that

$$\begin{aligned} {\mathbb {E}}(\mu (I_0)^m\mu (I_1)^{n-m})\ge {\mathbb {E}}(\mu (I_0)^m){\mathbb {E}}(\mu (I_1)^{n-m}). \end{aligned}$$

Notice that

$$\begin{aligned} {\mathbb {E}}(\mu (I_0)^m)=2^{-m}{\mathbb {E}}(W_0^m){\mathbb {E}}(Z^m) =2^{-m}2^{\psi (-im)}{\mathbb {E}}(Z^m). \end{aligned}$$

Since

$$\begin{aligned} \psi (-im)=\gamma m-\int _{-\infty }^0 (1-e^{mx}) \, \nu (\mathrm{d}x), \end{aligned}$$

for any \(\epsilon >0\) there exists \(c>0\) such that for all \(m\ge 0\) we have

$$\begin{aligned} \psi (-im)\ge (\gamma -\epsilon ) m +\log (c), \end{aligned}$$

and using (5.3)

$$\begin{aligned} {\mathbb {E}}(Z^n)&\ge c^2 2^{(\gamma -\epsilon ) n} \sum _{m=0}^n \frac{n!}{m!(n-m)!}2^{-n}{\mathbb {E}}(Z^m){\mathbb {E}}(Z^{n-m})\\&\ge c^2 2^{(\gamma -\epsilon ) n} {\mathbb {E}}(Z^{n/2})^2. \end{aligned}$$

Hence

$$\begin{aligned} \log {\mathbb {E}}(Z^{2n})\ge 2\log (c)+ (\gamma -\epsilon ) 2n\log 2+2\log {\mathbb {E}}(Z^{n}). \end{aligned}$$

Consequently,

$$\begin{aligned} \frac{\log {\mathbb {E}}(Z^{2^n})}{2^n}&\ge \frac{2\log (c)}{2^n} +(\gamma -\epsilon ) \log 2+\frac{\log {\mathbb {E}}(Z^{2^{n-1}})}{2^{n-1}}\\&\ge n(\gamma -\epsilon ) \log 2 + 2(1-2^{-n})\log (c). \end{aligned}$$
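Unwinding this recursion is elementary arithmetic; the following sketch (illustrative stand-in values for \(\gamma -\epsilon \) and \(\log c\), taking equality in the recursion and \(\log {\mathbb {E}}(Z)=0\)) confirms the closed form:

```python
from math import log

g, logc = 0.3, -0.7  # stand-ins for (gamma - epsilon) and log(c); illustrative only

def h(n):
    """h_k = log E(Z^{2^k}) under equality in the recursion
    h_k = 2*log c + (gamma - eps) * 2^k * log 2 + 2 * h_{k-1}, with h_0 = 0."""
    val = 0.0
    for k in range(1, n + 1):
        val = 2 * logc + g * (2 ** k) * log(2) + 2 * val
    return val

def closed_form(n):
    """n*(gamma - eps)*log 2 + 2*(1 - 2^{-n})*log c, the claimed value of h_n / 2^n."""
    return n * g * log(2) + 2 * (1 - 2 ** (-n)) * logc
```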

This easily yields

$$\begin{aligned} \liminf _{n\rightarrow \infty }\frac{\log {\mathbb {E}}(Z^n)}{n\log n}\ge \gamma -\epsilon , \end{aligned}$$

for any \(\epsilon >0\).

6 Proof of Theorem 1.4

6.1 Reduction to a key proposition

In the case of limits of canonical cascades, Guivarc’h [17] exploited (1.2) to connect our problem to one about random difference equations; Liu [23] then extended this idea to the case of supercritical Galton–Watson trees, explicitly using the Peyrière measure. This is our starting point, the difference being that now we must exploit the more delicate equation (1.13).

Recall that \(\pi (\mathbf{i})=\sum _{j=1}^\infty i_j 2^{-j}\) is a continuous map from \(\Sigma \) to \([0,1]\). We shall use the same notation \(\mu \) for the pull-back measure \(\mu \circ \pi ^{-1}\) on \(\Sigma \). Let \(\Omega '=\Omega \times \Sigma \) be the product space, let \(\mathcal {F}'=\mathcal {F}\times \mathcal {B}\) be the product \(\sigma \)-algebra, and let \(\mathbb {Q}\) be the Peyrière measure on \((\Omega ', \mathcal {F}')\), defined as

$$\begin{aligned} \mathbb {Q}(E)={\mathbb {E}} \left( \,\,\int _\Sigma \mathbf{1}_E(\omega , \mathbf{i}) \, \mu (\mathrm{d}\mathbf{i}) \right) , \quad E\in \mathcal {F}'. \end{aligned}$$

Then \((\Omega ',\mathcal {F}',\mathbb {Q})\) is a probability space, since \(\mathbb {Q}(\Omega ')={\mathbb {E}}(\mu (\Sigma ))=1\).

For \(\omega \in \Omega \) and \(\mathbf{i}\in \Sigma \) let

$$\begin{aligned} A(\omega ,\mathbf{i})&= \sum _{i\in \{0,1\}} 2^{-1}W_i(\omega )\cdot \mathbf{1}_{\{\mathbf{i}|_1=i\}},\\ B(\omega ,\mathbf{i})&= \sum _{i\in \{0,1\}} 2^{-1} W_i(\omega ) Z_i(\omega ) \cdot \mathbf{1}_{\{\mathbf{i}|_1=1-i\}},\\ R(\omega ,\mathbf{i})&= \sum _{i\in \{0,1\}} Z_i(\omega )\cdot \mathbf{1}_{\{\mathbf{i}|_1=i\}},\\ \widetilde{R}(\omega ,\mathbf{i})&= Z(\omega ). \end{aligned}$$

We may consider \(A,\, B,\, R\) and \(\widetilde{R}\) as random variables on \((\Omega ',\mathcal {F}',\mathbb {Q})\), and we have the following equation

$$\begin{aligned} \widetilde{R}=AR+B. \end{aligned}$$

First we claim that \(R\) and \(\widetilde{R}\) have the same law. This is due to the fact that for any non-negative Borel function \(f\) we have

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}(f(R))&= {\mathbb {E}}\left( 2^{-1}\sum _{i\in \{0,1\}} f(Z_i) \cdot W_{i} \cdot Z_i \right) \\&= {\mathbb {E}}(f(Z)Z) \\&= {\mathbb {E}}_\mathbb {Q}(f(\widetilde{R})). \end{aligned}$$

Then we claim that \(A\) and \(R\) are independent, since for any non-negative Borel functions \(f\) and \(g\) we have

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}(f(A)g(R))&= {\mathbb {E}}\left( 2^{-1}\sum _{i\in \{0,1\}} f(W_{i})g(Z_i)\cdot W_i \cdot Z_i \right) \\&= {\mathbb {E}}(f(W_0)W_0){\mathbb {E}}(g(Z_0)Z_0) \\&= {\mathbb {E}}_\mathbb {Q}(f(A)){\mathbb {E}}_\mathbb {Q}(g(R)). \end{aligned}$$

We first deal with case (i). The following result comes from the implicit renewal theory of random difference equations given by Goldie [16] (Lemma 2.2, Theorem 2.3 and Lemma 9.4).

Theorem 6.1

Suppose there exists \(\kappa >0\) such that

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}(A^\kappa )=1, \quad {\mathbb {E}}_\mathbb {Q}(A^\kappa \log ^+ A) <\infty , \end{aligned}$$
(6.1)

and suppose that the conditional law of \(\log A\), given \(A\ne 0\), is non-arithmetic. For

$$\begin{aligned} \widetilde{R}=AR+B, \end{aligned}$$

where \(\widetilde{R}\) and \(R\) have the same law, and \(A\) and \(R\) are independent, we have that if

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}\left( (AR+B)^\kappa -(AR)^\kappa \right) <\infty , \end{aligned}$$

then

$$\begin{aligned} \lim _{t\rightarrow \infty } t^\kappa \mathbb {Q}(R>t)=\frac{{\mathbb {E}}_\mathbb {Q} \left( (AR+B)^\kappa -(AR)^\kappa \right) }{\kappa {\mathbb {E}}_\mathbb {Q} (A^\kappa \log A)} \in (0,\infty ). \end{aligned}$$

It is worth mentioning that Theorem 6.1 does not require \(B\) and \(R\) to be independent, whereas in classical random difference equations this independence holds systematically and simplifies the verification of the crucial assumptions. In our study it is essential that \(B\) and \(R\) need not be independent, because log-infinitely divisible cascades present many more correlations to control than canonical cascades on homogeneous or Galton–Watson trees.
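Although it plays no role in the proof, the tail phenomenon described by Theorem 6.1 is easy to observe numerically on a toy random difference equation. In the sketch below (all parameter choices are hypothetical and purely illustrative), \(A\) is log-normal with \({\mathbb {E}}(A^\kappa )=1\) for \(\kappa =2\) and \(B=1\); by Goldie's theorem the stationary solution of \(R\overset{\text {law}}{=}AR+B\) then has a power tail of index \(\kappa \), which a Hill estimator recovers approximately:

```python
import math
import random

random.seed(0)

# Toy random difference equation R_{k+1} = A_k R_k + B_k with
# A = exp(N(mu, sigma^2)).  Since E(A^s) = exp(mu*s + sigma^2*s^2/2),
# choosing mu = -sigma^2*kappa/2 forces E(A^kappa) = 1 (here kappa = 2).
kappa = 2.0
sigma = 1.0
mu = -sigma**2 * kappa / 2.0

# Iterate the chain towards stationarity and collect samples.
r = 0.0
samples = []
for k in range(300_000):
    a = math.exp(random.gauss(mu, sigma))
    r = a * r + 1.0          # B = 1 (constant, positive)
    if k >= 1_000:           # discard a burn-in period
        samples.append(r)

# Hill estimator of the tail index on the largest order statistics:
# kappa_hat = k_top / sum_{i < k_top} log(X_(i) / X_(k_top)).
samples.sort(reverse=True)
k_top = 1_000
x_k = samples[k_top]
hill = k_top / sum(math.log(samples[i] / x_k) for i in range(k_top))
print(f"estimated tail index: {hill:.2f} (target kappa = {kappa})")
```

The estimate is crude (the samples are dependent and the Hill estimator is biased at finite thresholds), but it lands near the predicted index \(\kappa =2\).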

For \(q\in I_\nu \) we have

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}(A^{q-1})=2^{1-q}{\mathbb {E}}(W_0^q)=2^{\varphi (q)}. \end{aligned}$$

Taking \(\kappa =\zeta -1\) we get \({\mathbb {E}}_\mathbb {Q}(A^\kappa )=1\). From \(\varphi '(\zeta )<\infty \) it is easy to deduce that \({\mathbb {E}}_\mathbb {Q}(A^\kappa \log ^+ A)<\infty \). In case (i) either \(\sigma \ne 0\) or \(\nu \) is not of the form \(\sum _{n\in \mathbb {Z}} p_n \delta _{nh}\) for some \(h>0\) and \(p_n\ge 0\), so the conditional law of \(\log A\), given \(A\ne 0\), is non-arithmetic. Hence, in order to apply Theorem 6.1, it only remains to verify that \({\mathbb {E}}_\mathbb {Q}\left( (AR+B)^\kappa -(AR)^\kappa \right) <\infty \). To do so, we need the following proposition (in the framework of canonical cascades such a fact is simple to establish thanks to the independence coming from the branching property (see [23, Lemma 4.1])).

Proposition 6.1

\({\mathbb {E}}(\mu (I_0)\mu (I_1)^{\kappa })<\infty \).

We have

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}((AR+B)^\kappa -(AR)^\kappa )=2{\mathbb {E}}((\mu (I)^\kappa -\mu (I_0)^\kappa )\cdot \mu (I_0)). \end{aligned}$$

By using the following inequality

$$\begin{aligned} (x+y)^\kappa -x^\kappa \le \left\{ \begin{array}{l@{\quad }l} y^\kappa , &{} 0<\kappa \le 1,\\ \kappa 2^{\kappa -1} y (x^{\kappa -1}+y^{\kappa -1}), &{} 1<\kappa <\infty . \end{array} \right. \quad x,y>0, \end{aligned}$$

it is easy to find a constant \(C_\kappa \) such that

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}((AR+B)^\kappa -(AR)^\kappa ) \le C_\kappa {\mathbb {E}}(\mu (I_0)\mu (I_1)^{\kappa }). \end{aligned}$$

Then from Proposition 6.1 we get \({\mathbb {E}}_\mathbb {Q}\left( (AR+B)^\kappa -(AR)^\kappa \right) <\infty \).
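The elementary inequality used above is classical; as a quick sanity check (an illustration, not a proof), one can test it numerically on random inputs in both regimes of \(\kappa \):

```python
import random

random.seed(1)

def upper_bound(x, y, kappa):
    # Right-hand side of the elementary inequality for (x+y)^kappa - x^kappa.
    if kappa <= 1.0:
        return y**kappa
    return kappa * 2**(kappa - 1) * y * (x**(kappa - 1) + y**(kappa - 1))

violations = 0
for _ in range(100_000):
    x = random.uniform(1e-6, 10.0)
    y = random.uniform(1e-6, 10.0)
    kappa = random.uniform(0.05, 4.0)
    lhs = (x + y)**kappa - x**kappa
    if lhs > upper_bound(x, y, kappa) * (1 + 1e-12):  # tiny float slack
        violations += 1
print("violations:", violations)
```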

We have verified all the assumptions in Theorem 6.1, thus

$$\begin{aligned} \lim _{t\rightarrow \infty } t^\kappa \mathbb {Q}(R>t)=\frac{{\mathbb {E}}_\mathbb {Q} \left( (AR+B)^\kappa -(AR)^\kappa \right) }{\kappa {\mathbb {E}}_\mathbb {Q} (A^\kappa \log A)}=d' \in (0,\infty ). \end{aligned}$$

Notice that \(\mathbb {Q}(R>t)=\int _t^\infty x \, \mathbb {P}(Z\in dx)\). From [23, Lemma 4.3] we get

$$\begin{aligned} \lim _{t\rightarrow \infty } t^\zeta \mathbb {P}(Z>t) =\frac{d' (\zeta -1)}{\zeta }. \end{aligned}$$
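The passage from the tail of \(\mathbb {Q}(R>t)\) to that of \(\mathbb {P}(Z>t)\) rests on an integration by parts; as a consistency check of the constants (the actual Tauberian direction is the content of [23, Lemma 4.3]): if \(\mathbb {P}(Z>t)\sim C t^{-\zeta }\), then

```latex
\mathbb {Q}(R>t)=\int _t^\infty x\,\mathbb {P}(Z\in \mathrm{d}x)
  = t\,\mathbb {P}(Z>t)+\int _t^\infty \mathbb {P}(Z>s)\,\mathrm{d}s
  \sim C t^{1-\zeta }+\frac{C\,t^{1-\zeta }}{\zeta -1}
  = \frac{\zeta }{\zeta -1}\,C\,t^{-\kappa },
```

so matching with \(\lim _{t\rightarrow \infty }t^\kappa \mathbb {Q}(R>t)=d'\) forces \(C=d'(\zeta -1)/\zeta \), which is the displayed limit.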

It is easy to verify that

$$\begin{aligned} d'=\frac{2{\mathbb {E}}\left( \mu (I)^{\zeta -1}\mu (I_0)-\mu (I_0)^{\zeta }\right) }{(\zeta -1) \varphi '(\zeta )\log 2}, \end{aligned}$$

and this gives the conclusion.

For case (ii), we may apply the key renewal theorem in the arithmetic case instead of the non-arithmetic case used in Goldie’s proof of Theorem 2.3, Case 1 ([16, page 145, line 21]) to get that for \(x\in {\mathbb {R}}\),

$$\begin{aligned} \check{r}(x+nh)\rightarrow d(x), \quad n\rightarrow \infty , \end{aligned}$$

where \(0<d(x)<\infty ,\, r(t)=e^{\kappa t}\mathbb {Q}(R>e^t)\) and

$$\begin{aligned} \check{r}(x)=\int _{-\infty }^x e^{-(x-t)} r(t) \, \mathrm{d}t. \end{aligned}$$

We have for \(x+h>y\),

$$\begin{aligned} \check{r}(x+h)-\check{r}(y)&= \int _{0}^{e^{x+h}} e^{-(x+h)}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u-\int _{0}^{e^{y}} e^{-y}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u\\&= \frac{e^{-(x+h)}-e^{-y}}{e^{-y}}\check{r}(y) +e^{-(x+h)} \int _{e^y}^{e^{x+h}}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u, \end{aligned}$$

thus

$$\begin{aligned} \check{r}(x+h)-e^{y-x-h}\check{r}(y)=e^{-(x+h)} \int _{e^y}^{e^{x+h}}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u. \end{aligned}$$

On one hand we have

$$\begin{aligned} e^{-(x+h)} \int _{e^y}^{e^{x+h}}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u&\le e^{-(x+h)} \cdot e^{(x+h)\kappa }\cdot \mathbb {Q}(R>e^y)\cdot (e^{x+h}-e^y)\\&= (1-e^{y-x-h})\cdot e^{(x+h)\kappa }\cdot \mathbb {Q}(R>e^y). \end{aligned}$$

This gives

$$\begin{aligned} \liminf _{n\rightarrow \infty }e^{(y+nh)\kappa }\cdot \mathbb {Q}(R>e^{y+nh}) \ge e^{-(x+h-y)\kappa }(1-e^{y-x-h})^{-1}[d(x)-e^{y-x-h}d(y)]. \end{aligned}$$

On the other hand we have

$$\begin{aligned} e^{-(x+h)} \int _{e^y}^{e^{x+h}}u^\kappa \cdot \mathbb {Q}(R>u) \, \mathrm{d}u&\ge e^{-(x+h)} \cdot e^{y\kappa }\cdot \mathbb {Q}(R>e^{x+h})\cdot (e^{x+h}-e^y)\\&= (1-e^{y-x-h})\cdot e^{y\kappa }\cdot \mathbb {Q}(R>e^{x+h}). \end{aligned}$$

This gives

$$\begin{aligned} \limsup _{n\rightarrow \infty }e^{(x+nh)\kappa }\cdot \mathbb {Q}(R>e^{x+nh}) \le e^{(x+h-y)\kappa }(1-e^{y-x-h})^{-1}[d(x)-e^{y-x-h}d(y)]. \end{aligned}$$

From these two estimates we obtain the conclusion by using the same arguments as in Lemma 4.3(ii) and Theorem 2.2 in [23]. \(\square \)

6.2 Proof of Proposition 6.1

We have almost surely

$$\begin{aligned} \mu (I_0)\mu (I_1)^\kappa&= \lim _{\epsilon \rightarrow 0} \mu _\epsilon (I_0)\mu _\epsilon (I_1)^\kappa \\&= \lim _{\epsilon \rightarrow 0}\left( \,\,\int _{t\in I_0} e^{\Lambda (V_\epsilon ^I(t))} \, \mathrm{d}t\right) \cdot \left( \,\,\int _{t\in I_1} e^{\Lambda (V_\epsilon ^I(t))} \, \mathrm{d}t\right) ^\kappa . \end{aligned}$$

Let \(n\ge 1\) be an integer such that \(n-1< \kappa \le n\), so \(q=\kappa -n+1\in (0,1]\). Thus

$$\begin{aligned} \left( \,\,\int _{t\in I_1} e^{\Lambda (V_\epsilon ^I(t))} \, \mathrm{d}t\right) ^\kappa =\left( \,\,\int _{t\in I_1} e^{\Lambda (V_\epsilon ^I(t))} \, \mathrm{d}t\right) ^{n-1}\left( \,\,\int _{t\in I_1} e^{\Lambda (V_\epsilon ^I(t))} \, \mathrm{d}t\right) ^q \end{aligned}$$

Then we get from Fatou’s lemma and Fubini’s theorem that

$$\begin{aligned}&{\mathbb {E}}(\mu (I_0)\mu (I_1)^\kappa ) \le \liminf _{\epsilon \rightarrow 0} \\&\quad \int _{t_0\in I_0,t_1,\ldots ,t_{n-1}\in I_1}{\mathbb {E}}\left( \,\, \prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))} \cdot \left[ \int _{1/2}^1e^{\Lambda (V_\epsilon ^I(t_n))}\, \mathrm{d}t_n \right] ^q\right) \, \mathrm{d}t_0 \cdots \mathrm{d} t_{n-1}. \end{aligned}$$
(6.2)

Set \(s_0=1/2\), \(s_{n}=1\), and let \(s_1<\cdots <s_{n-1}\) be the increasing rearrangement of \(t_1,\ldots ,t_{n-1}\). Then from the sub-additivity of \(x\mapsto x^q\) we get

$$\begin{aligned} \left[ \int _{1/2}^1e^{\Lambda (V_\epsilon ^I(t_n))}\, \mathrm{d}t_n \right] ^q \le \sum _{j=0}^{n-1} \left[ \int _{s_j}^{s_{j+1}}e^{\Lambda (V_\epsilon ^I(t_n))}\, \mathrm{d}t_n \right] ^q. \end{aligned}$$

Given \(0\le j\le n-1\), define the process \(Y_{t}= e^{q\Lambda (V_\epsilon ^I(s_{j+1}-t)\cap V_\epsilon ^I(t_0))},\, t\in [0,s_{j+1}-s_j]\), and its natural filtration \({\mathcal {F}}_t=\sigma (\Lambda (V_\epsilon ^I(s_{j+1}-s)\cap V_\epsilon ^I(t_0)): 0\le s\le t)\) (see Fig. 5).

Fig. 5

The gray area represents \(V_\epsilon ^{I} (s_{j+1}-t)\cap V_\epsilon ^I(t_0)\)

For \(\eta \in \{0,1\}\) define \(D_\eta =e^{\eta \Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))} \prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))}\). Under the probability measure \(\mathrm{d}{\mathbb {P}}_\eta =\frac{D_\eta }{{{\mathbb {E}}}(D_\eta )}\mathrm{d}{\mathbb {P}}\) we have the following two facts: (1) \(t\mapsto {\mathbb {E}}_{\mathbb {P}_\eta }(Y_{t})\) is continuous; (2) \(Y_t\) is a positive submartingale with respect to \(\mathcal {F}_{t}\). The continuity and positivity are obvious, and we leave it to the reader to check the following fact: for \(0<s<s+\epsilon <s_{j+1}\), if we write \(\Delta _{s,\epsilon }=(V_\epsilon ^I(s_{j+1}-s-\epsilon ){\setminus } V_\epsilon ^I(s_{j+1}-s ))\cap V_\epsilon ^I(t_0)\) and let \(m\) be the power to which \(e^{\Lambda (\Delta _{s,\epsilon })}\) appears in \(D_\eta \), then we have

$$\begin{aligned} {\mathbb {E}}_{\mathbb {P}_\eta }(Y_{s+\epsilon }| \mathcal {F}_{s})&= e^{(\psi (-i(q+m))-\psi (-im)) \lambda (\Delta _{s,\epsilon })} \cdot Y_{s} \\&\ge Y_{s}, \end{aligned}$$

where the inequality comes from the fact that \(\psi (-ip)\) is an increasing function of \(p\) on the right of \(1\), since it is convex and \(\frac{d}{dp} \psi (-ip)|_{p=1}>0\). Thus (see, e.g., [33, Th. 2.5, Prop. 2.6 and Th. 2.9]) the submartingale (under \(\mathbb {P}_\eta \)) \((Y_t)_{0\le t\le s_{j+1}-s_j}\) has a right-continuous version (with respect to the filtration made of the completed \(\sigma \)-algebras \(\mathcal {F}_{t^+},\, 0\le t<s_{j+1}-s_j\)), which we use in the rest of the proof.

Now, for each \(j=0,\ldots ,n-1\) we have

$$\begin{aligned}&\left[ \int _{s_j}^{s_{j+1}}e^{\Lambda (V_\epsilon ^I(t_n))}\, \mathrm{d}t_n \right] ^q\\&\quad \le \sup _{s_j<t<s_{j+1} }e^{q\Lambda (V_\epsilon ^I(t)\cap V_\epsilon ^I(t_0))}\cdot \left[ \int _{s_j}^{s_{j+1}}e^{\Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))}\, \mathrm{d}t_n \right] ^q\\&\quad \le \sup _{s_j<t<s_{j+1} }e^{q\Lambda (V_\epsilon ^I(t)\cap V_\epsilon ^I(t_0))}\cdot \left[ 1+\int _{s_j}^{s_{j+1}}e^{\Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))}\, \mathrm{d}t_n \right] , \end{aligned}$$

where we have used the elementary inequality \(x^q\le 1+x\) for \(x>0\) and \(q\in (0,1]\).

Then Doob’s inequality applied with \(L^{\gamma }\) (\(\gamma >1\)) yields \(c=c(\gamma )\) such that

$$\begin{aligned}&{{\mathbb {E}}}\left( e^{\eta \Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))} \left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))}\right) \sup _{s_j<t<s_{j+1} }e^{q\Lambda (V_\epsilon ^I(t)\cap V_\epsilon ^I(t_0))}\right) \\&\quad \le c {\mathbb {E}}(D_\eta ) ^{1-1/\gamma }\left[ {{\mathbb {E}}}\left( e^{\eta \Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))}\left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))}\right) e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\right) \right] ^{1/\gamma }. \end{aligned}$$
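To make the use of Doob's \(L^\gamma \) maximal inequality explicit: for the non-negative right-continuous submartingale \((Y_t)\) under \(\mathbb {P}_\eta \), with \(T=s_{j+1}-s_j\) (so that \(Y_T^\gamma =e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\)), Jensen's and Doob's inequalities give

```latex
{\mathbb {E}}_{\mathbb {P}_\eta }\Big(\sup _{0\le t\le T}Y_t\Big)
  \le {\mathbb {E}}_{\mathbb {P}_\eta }\Big(\sup _{0\le t\le T}Y_t^{\gamma }\Big)^{1/\gamma }
  \le \frac{\gamma }{\gamma -1}\,{\mathbb {E}}_{\mathbb {P}_\eta }\big(Y_T^{\gamma }\big)^{1/\gamma },
```

and multiplying both sides by \({\mathbb {E}}(D_\eta )\) and using \({\mathbb {E}}(D_\eta X)={\mathbb {E}}(D_\eta )\,{\mathbb {E}}_{\mathbb {P}_\eta }(X)\) yields the displayed bound with \(c=\gamma /(\gamma -1)\), since \({\mathbb {E}}(D_\eta )\,{\mathbb {E}}_{\mathbb {P}_\eta }(Y_T^\gamma )^{1/\gamma }={\mathbb {E}}(D_\eta )^{1-1/\gamma }\,{\mathbb {E}}(D_\eta Y_T^\gamma )^{1/\gamma }\).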

Thus

$$\begin{aligned}&{\mathbb {E}}\left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))} \cdot \left[ \int _{s_j}^{s_{j+1}}e^{\Lambda (V_\epsilon ^I(t_n))}\, dt_n \right] ^q\right) \\&\quad \le c{\mathbb {E}}(D_0)^{1-1/\gamma }\left[ {\mathbb {E}}\left( \,\, \prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))} \cdot e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\right) \right] ^{1/\gamma } + c {\mathbb {E}}(D_1)^{1-1/\gamma } \cdot \\&\quad \int _{s_j}^{s_{j+1}}\left[ {{\mathbb {E}}}\left( e^{\Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))} \left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))}\right) e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\right) \right] ^{1/\gamma }\, \mathrm{d}t_n. \end{aligned}$$

For \(\eta ,\eta '\in \{0,1\}\) and \(t_n\in [s_j,s_{j+1})\) define

$$\begin{aligned} \widetilde{\Lambda }_{\eta ,\eta '}(t_n)= \left\{ \begin{array}{l@{\quad }l} q\gamma \eta '\Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))+\eta \Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))&{} \text { if } q<1,\\ \Lambda (V_\epsilon ^I(t_n)) &{}\text { if } q=1. \end{array}\right. \end{aligned}$$

Then define

$$\begin{aligned} \overline{D}_{\eta ,\eta '}(t_0,\ldots ,t_n)={\mathbb {E}}\left( \,\, \prod _{j=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_j))} \cdot e^{\widetilde{\Lambda }_{\eta ,\eta '} (t_n)}\right) . \end{aligned}$$

It is easy to see that \({\mathbb {E}}(D_0)=\overline{D}_{0,0}(t_0,\ldots ,t_n),\, {\mathbb {E}}(D_1)=\overline{D}_{1,0}(t_0,\ldots ,t_n)\),

$$\begin{aligned} {\mathbb {E}}\left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))} \cdot e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\right) =\overline{D}_{0,1}(t_0,\ldots ,t_n) \end{aligned}$$

and

$$\begin{aligned} {{\mathbb {E}}}\left( e^{\Lambda (V_\epsilon ^{I}(t_n){\setminus } V^I_\epsilon (t_0))} \left( \,\,\prod _{k=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_k))}\right) e^{q\gamma \Lambda (V_\epsilon ^I(s_j)\cap V_\epsilon ^I(t_0))}\right) =\overline{D}_{1,1}(t_0,\ldots ,t_n). \end{aligned}$$

Also set \(\gamma _q=\gamma \) if \(q<1\) and \(\gamma _q=1\) if \(q=1\). We finally get

$$\begin{aligned}&{\mathbb {E}}\left( \,\,\prod _{j=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_j))} \cdot \left[ \int _{1/2}^1e^{\Lambda (V_\epsilon ^I(t_n))}\, \mathrm{d}t_n \right] ^q\right) \\&\quad \le 2c\cdot \sum _{\eta \in \{0,1\}}\overline{D}_{\eta ,0}(t_0,\ldots ,t_n)^{1-1/\gamma _q}\int _{1/2}^1\overline{D}_{\eta ,1}(t_0,\ldots ,t_n)^{1/\gamma _q} \mathrm{d}t_n\\&\quad \le 4c\cdot \int _{1/2}^1\max _{\eta ,\eta '\in \{0,1\}} \overline{D}_{\eta ,\eta '}(t_0,\ldots ,t_n) \, \mathrm{d}t_n. \end{aligned}$$

Now fix \(t_0,\ldots ,t_n\) and redefine \(s_0=t_0\), \(s_1=1/2\), and let \(s_2<\cdots <s_{n+1}\) be the increasing rearrangement of \(t_1,\ldots ,t_n\). Let \(j_*\) be such that \(s_{j_*}=t_n\). Define

$$\begin{aligned} \left\{ \begin{array}{l@{\quad }l} p_0=1; &{} \\ p_1=0; &{} \\ p_j=1, &{} \text {for } 2\le j\le n+1,\ j\ne j_*;\\ p_{j_*}=\eta , &{} \text {if } q<1;\\ p_{j_*}=1, &{} \text {if } q=1.\\ \end{array} \right. \end{aligned}$$

For \(k=0,\ldots ,n\) and \(j=k,\ldots ,n+1\) define

$$\begin{aligned} r_{k,j}=\left\{ \begin{array}{l@{\quad }l} q\gamma \eta '+\sum \nolimits _{l=k,\ldots ,j;\, s_l\ne t_n} p_l, &{} \quad \text {if } q<1, k=0 \text { and } t_n\in \{s_{j},s_{j+1}\};\\ \sum \nolimits _{l=k,\ldots ,j} p_l, &{}\quad \text {otherwise}. \end{array} \right. \end{aligned}$$

and let \(r_{k,j}=0\) for \(j<k\). Then by using the same argument as in [2, Lemma 1] (notice that \(r_{k,j}\) represents the power to which \(e^{\Lambda (V^I_\epsilon (s_k)\cap V^I_\epsilon (s_j) {\setminus } (V^I_\epsilon (s_{k-1})\cup V^I_\epsilon (s_{j+1})))}\) appears in the product \(\prod _{j=0}^{n-1} e^{\Lambda (V_\epsilon ^I(t_j))} \cdot e^{\widetilde{\Lambda }_{\eta ,\eta '} (t_n)}\), and that \(\lambda (V^I_\epsilon (s_k)\cap V^I_\epsilon (s_j) {\setminus } (V^I_\epsilon (s_{k-1})\cup V^I_\epsilon (s_{j+1})))=\log \frac{1}{s_j-s_k}+\log \frac{1}{s_{j+1}-s_{k-1}}-\log \frac{1}{s_j-s_{k-1}}-\log \frac{1}{s_{j+1}-s_k}\), see Fig. 6) we can get

$$\begin{aligned} \log \overline{D}_{\eta ,\eta '}(t_0,\ldots ,t_n)= \sum _{k=0}^{n}\sum _{j=k+1}^{n+1} \alpha (j,k)\cdot \log \frac{1}{s_j-s_k}, \end{aligned}$$

where

$$\begin{aligned} \alpha (j,k)=\psi (-i r_{k,j})+\psi (-i r_{k+1,j-1})-\psi (-ir_{k,j-1})-\psi (-ir_{k+1,j}). \end{aligned}$$
Fig. 6

\(r_{k,j}\) is the power corresponding to the gray area

Let \(\widetilde{\psi }(p)=\psi (-ip)\). By definition of \(\kappa \), we have \(\widetilde{\psi }(p)<p-1\) for all \(p\in (1,n+q)\), and \(\widetilde{\psi }(n+q)=n+q-1\). Moreover, \(\widetilde{\psi }'(1)<1\) since \(\varphi '(1)<0\), and \(\widetilde{\psi }(1)=0\). Consequently, there exists \(\delta \in (0,1)\) such that \(\widetilde{\psi }(p)\le (1-\delta )(p-1)\) for \(p\in [1,n]\); in particular, by convexity of \(\widetilde{\psi }\) we have \(1-\delta \ge \widetilde{\psi }'(1)\). Moreover, notice that \(\widetilde{\psi }(p)\le 0\) for \(p\in (0,1)\), since \(\widetilde{\psi }(0)=0=\widetilde{\psi }(1)\) and \(\widetilde{\psi }\) is convex; also \(\widetilde{\psi }(p)\ge \widetilde{\psi }'(1) (p-1)\) for all \(p\ge 0\), which yields \(\widetilde{\psi }(p) \ge (1-\delta )(p-1)\) for \(p\in [0,1]\). Finally, in the case \(q<1\), we take \(\gamma >1\) small enough that \(q\gamma <1\) and \(\widetilde{\psi }(n+q\gamma )-n+1=q'<1\).

(i) If \(n=1\), that is \(0< \kappa \le 1\), then \(q=\kappa \) and \(\widetilde{\psi }(1+q\gamma )=q'<1\). We have \(s_0=t_0\in [0,1/2)\), \(s_1=1/2\), \(s_2=t_1\in [1/2,1)\) and \(s_3=1\).

If \(q<1\), we have

$$\begin{aligned} r_{0,0}=1,\ r_{0,1}=1+q\gamma \eta ',\ r_{0,2}=1+q\gamma \eta ', \ r_{1,1}=0,\ r_{1,2}=\eta ,\ r_{2,2}=\eta . \end{aligned}$$

This gives

$$\begin{aligned} \alpha (1,0)&= \widetilde{\psi }(1+q\gamma \eta ')+\widetilde{\psi }(0)-\widetilde{\psi }(1)-\widetilde{\psi }(0)\le q',\\ \alpha (2,0)&= \widetilde{\psi }(1+q\gamma \eta ')+\widetilde{\psi }(0)-\widetilde{\psi }(1+q\gamma \eta ')-\widetilde{\psi }(\eta )=0,\\ \alpha (2,1)&= \widetilde{\psi }(\eta )+\widetilde{\psi }(0)-\widetilde{\psi }(0) -\widetilde{\psi }(\eta )=0. \end{aligned}$$

Thus

$$\begin{aligned} {\mathbb {E}}(\mu (I_0)\mu (I_1)^\kappa )\le 4c\cdot \int _{0}^{1/2} (1/2-s)^{-q'} \, ds <\infty . \end{aligned}$$

If \(q=1\), we have

$$\begin{aligned} r_{0,0}= r_{0,1}=r_{1,1}= r_{1,2}=1,\ r_{0,2}=2. \end{aligned}$$

This gives \(\alpha (1,0)=\alpha (2,1)=0\) and \(\alpha (2,0)=\widetilde{\psi }(2)=1\). Thus

$$\begin{aligned} {\mathbb {E}}(\mu (I_0)\mu (I_1))= \int _{0}^{1/2}\int _{1/2}^1 (t_1-t_0)^{-1} \, \mathrm{d}t_0 \mathrm{d}t_1 =\log 2<\infty . \end{aligned}$$
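The value \(\log 2\) of this double integral is elementary to confirm by hand; as a quick numerical cross-check (illustration only):

```python
import math

# Midpoint-rule evaluation of the double integral
# \int_0^{1/2} \int_{1/2}^1 (t1 - t0)^{-1} dt1 dt0, expected value log 2.
N = 600                          # cells per axis; h = 1/(2N)
h = 0.5 / N
total = 0.0
for i in range(N):
    t0 = (i + 0.5) * h           # midpoints of [0, 1/2]
    for j in range(N):
        t1 = 0.5 + (j + 0.5) * h # midpoints of [1/2, 1]
        total += h * h / (t1 - t0)
print(f"integral ~ {total:.4f}, log 2 = {math.log(2):.4f}")
```

The midpoint rule slightly underestimates the convex integrand near the corner \(t_0=t_1=1/2\), but converges to \(\log 2\approx 0.6931\).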

Remark 6.1

Here we have an equality because, when \(q\) is an integer, we do not need Doob's inequality to estimate (6.2): we can apply the martingale convergence theorem and the dominated convergence theorem as in Sect. 5.2. The identity \({\mathbb {E}}(\mu (I_0)\mu (I_1))=\log 2\) yields the precise formula in Remark 1.4.

(ii) The case \(n\ge 2\) is more involved. For \(0\le k<j\le n+1\), write

$$\begin{aligned} \alpha (j,k)=\beta (j,k)-\beta (j,k+1),\text { where } \beta (j,k)=\widetilde{\psi }(r_{k,j})-\widetilde{\psi }(r_{k,j-1}). \end{aligned}$$

Then

$$\begin{aligned} \sum _{k=0}^{n}\sum _{j=k+1}^{n+1} \alpha (j,k)\cdot \log \frac{1}{s_j-s_k}&= \sum _{k=0}^{n}\sum _{j=k+1}^{n+1} (\beta (j,k)-\beta (j,k+1))\cdot \log \frac{1}{s_j-s_k}\\&= \sum _{j=1}^{n+1}\sum _{k=0}^{j-1}(\beta (j,k)-\beta (j,k+1))\cdot \log \frac{1}{s_j-s_k}\\&= \widetilde{A}+\widetilde{B}+\widetilde{C}, \end{aligned}$$

where

$$\begin{aligned} \widetilde{A}&=\sum _{j=1}^{n+1}\sum _{k=1}^{j-1}\beta (j,k)\cdot \log \frac{s_j-s_{k-1}}{s_j-s_k},\\ \widetilde{B}&=\sum _{j=1}^{n+1}\beta (j,0)\cdot \log \frac{1}{s_j-s_0},\quad \widetilde{C}=-\sum _{j=1}^{n+1}\beta (j,j)\cdot \log \frac{1}{s_j-s_{j-1}}. \end{aligned}$$

Now, using the definition of \(\beta (j,k)\) we get

$$\begin{aligned} \widetilde{A}&=\sum _{k=1}^{n}\sum _{j=k+1}^{n+1}\beta (j,k)\cdot \log \frac{s_j-s_{k-1}}{s_j-s_k}\\&=\sum _{k=1}^{n}\widetilde{\psi }(r_{k,n+1})\cdot \log \frac{s_{n+1}-s_{k-1}}{s_{n+1}-s_k}\\&\quad + \sum _{k=1}^{n} \sum _{j=k+1}^{n}\widetilde{\psi }(r_{k,j})\cdot \left( \log \frac{s_j-s_{k-1}}{s_j-s_k} - \log \frac{s_{j+1}-s_{k-1}}{s_{j+1}-s_k}\right) \\&\quad -\sum _{k=1}^{n} \widetilde{\psi }(r_{k,k})\cdot \log \frac{s_{k+1}-s_{k-1}}{s_{k+1}-s_k}, \\ \widetilde{B}&= \widetilde{\psi }(r_{0,n+1})\cdot \log \frac{1}{s_{n+1}-s_0} +\sum _{j=1}^{n}\widetilde{\psi }(r_{0,j}) \cdot \log \frac{s_{j+1}-s_{0}}{s_{j}-s_0} -\widetilde{\psi }(r_{0,0})\cdot \log \frac{1}{s_{1}-s_0},\\ \widetilde{C}&=-\sum _{j=1}^n\widetilde{\psi }(r_{j,j})\cdot \log \frac{1}{s_j-s_{j-1}}. \end{aligned}$$

First notice that \(r_{j,j}\in \{0,1\}\) for \(j=1,\ldots ,n\), thus \( \widetilde{C}=0\). Let \(\widehat{\psi }(r)=(1-\delta ) (r-1)\) for \(r\ge 1\) and \(\widehat{\psi }(0)=0\). We have \(\widetilde{\psi }(r)\le \widehat{\psi }(r)\) for \(1\le r\le \zeta -q\), and \(\widetilde{\psi }(n+q\gamma )=n-1+q'=\widehat{\psi }(n+q')+\delta (n+q'-1)\) if \(q<1\), as well as \(\widetilde{\psi }(n+q)=n+q-1=\widehat{\psi }(n+q)+\delta (n+q-1)\) if \(q=1\). Now, define formally \(\widehat{A}\) and \(\widehat{B}\) as \(\widetilde{A}\) and \(\widetilde{B}\), by replacing \(\widetilde{\psi }\) with \(\widehat{\psi }\). Notice that all the terms \(\log \frac{1}{s_j-s_k}\) and \(\left( \log \frac{s_j-s_{k-1}}{s_j-s_k} - \log \frac{s_{j+1}-s_{k-1}}{s_{j+1}-s_k}\right) \) are positive. Then, remembering that \(r_{0,n+1}=n+q\gamma _q\), rewriting \(\widetilde{\psi }(r_{0,n+1})=\delta (r_{0,n+1}'-1)+\widehat{\psi }(r_{0,n+1}')\) in the expression of \(\widetilde{B}\), where \(r_{0,n+1}'=n+q'\) if \(q<1\) and \(r_{0,n+1}'=n+q\) if \(q=1\), and remembering also that \(\widetilde{\psi }(r_{j,j})=\widehat{\psi }(r_{j,j})\) for \(j=0,\ldots ,n\) since \(r_{j,j}\in \{0,1\}\), the previous inequalities between \(\widetilde{\psi }\) and \(\widehat{\psi }\) yield:

$$\begin{aligned} \sum _{k=0}^{n}\sum _{j=k+1}^{n+1} \alpha (j,k)\cdot \log \frac{1}{s_j-s_k}\le \delta (r_{0,n+1}'-1)\cdot \log \frac{1}{s_{n+1}-s_0}+\widehat{A}+\widehat{B}. \end{aligned}$$

Now define \(\widehat{\beta }(j,k):=\widehat{\psi }(r_{k,j})-\widehat{\psi }(r_{k,j-1})\). It is easy to see that \(\widehat{\beta }(j,k)\le 1-\delta \) for \(0\le k<j\le n+1\) since \(r_{k,j}-r_{k,j-1} \le 1\) (when \(q<1\), we have chosen \(\gamma \) small enough such that \(q\gamma <1\)). Thus

$$\begin{aligned} \widehat{A}&= \sum _{j=1}^{n+1}\sum _{k=1}^{j-1}\widehat{\beta }(j,k)\cdot \log \frac{s_j-s_{k-1}}{s_j-s_k}\le (1-\delta )\sum _{j=1}^{n+1} \log \frac{s_j-s_{0}}{s_j-s_{j-1}},\\ \widehat{B}&= \sum _{j=1}^{n+1}\widehat{\beta }(j,0)\cdot \log \frac{1}{s_j-s_0}\le (1-\delta )\sum _{j=1}^{n+1}\log \frac{1}{s_j-s_0}. \end{aligned}$$

This gives

$$\begin{aligned} \widehat{A}+\widehat{B}\le (1-\delta ) \sum _{j=1}^{n+1}\log \frac{1}{s_j-s_{j-1}}, \end{aligned}$$

and bounding \(r_{0,n+1}'-1\) by \(n\) (recall that \(q'<1\)), we get

$$\begin{aligned} \sum _{k=0}^{n}\sum _{j=k+1}^{n+1} \alpha (j,k)\cdot \log \frac{1}{s_j-s_k}\le n\delta \cdot \log \frac{1}{s_{n+1}-s_0}+ (1-\delta ) \sum _{j=1}^{n+1}\log \frac{1}{s_j-s_{j-1}}. \end{aligned}$$

One has

$$\begin{aligned} \begin{aligned}&\int _{0}^{1/2}\int _{1/2<s_2<\cdots <s_{n+1}<1} \frac{ds_{n+1}ds_{n}\cdots ds_2ds_0}{{\scriptstyle (s_{n+1}-s_0)^{n\delta }[(s_{n+1}-s_{n})\cdots (s_2-1/2) (1/2-s_0)]^{1-\delta }}}\\&=\int _{0}^{1/2}\int _{1/2<s_2<\cdots <s_{n}<1}\int _0^{1-s_n} \frac{duds_{n}\cdots ds_2ds_0}{{\scriptstyle (u+s_{n}-s_0)^{n\delta }[u(s_n-s_{n-1})\cdots (s_2-1/2) (1/2-s_0)]^{1-\delta }}}\\&=\frac{1}{\delta }\int _{0}^{1/2}\int _{1/2<s_2<\cdots <s_{n}<1}\int _0^{(1-s_n)^\delta } \frac{dvds_{n}\cdots ds_2ds_0}{{\scriptstyle (v^{1/\delta }+s_{n}-s_0)^{n\delta }[(s_n-s_{n-1})\cdots (s_2-1/2) (1/2-s_0)]^{1-\delta }}}\\&\le \frac{2^{n/\delta }}{\delta }\int _{0}^{1/2}\int _{1/2<s_2<\cdots <s_{n}<1}\int _0^{(1-s_n)^\delta } \frac{dvds_{n}\cdots ds_2ds_0}{{\scriptstyle (v+(s_{n}-s_0)^\delta )^{n}[(s_n-s_{n-1})\cdots (s_2-1/2) (1/2-s_0)]^{1-\delta } }}\\&\le \frac{2^{n/\delta }}{(n-1)\delta }\int _{0}^{1/2}\int _{1/2<s_2<\cdots <s_{n}<1} \frac{ds_{n}\cdots ds_2ds_0}{{\scriptstyle (s_{n}-s_0)^{(n-1)\delta }[(s_n-s_{n-1})\cdots (s_2-1/2) (1/2-s_0)]^{1-\delta }}}\\&\le \cdots \\&\le \frac{2^{(n+\cdots +2)/\delta }}{(n-1)!\delta }\int _{0}^{1/2}\int _{1/2}^1 \frac{ds_2ds_0}{(s_{2}-s_0)^{\delta }[(s_2-1/2) (1/2-s_0)]^{1-\delta }}\\&\le \frac{2^{(n+\cdots +2+1)/\delta }}{(n-1)!}\int _{0}^{1/2} \log \frac{2}{1/2-s_0}\cdot \frac{ds_0}{(1/2-s_0)^{(1-\delta )}}\\&< \infty . \end{aligned} \end{aligned}$$

This yields \({\mathbb {E}}(\mu (I_0)\mu (I_1)^\kappa )<\infty \). \(\square \)

7 Proof of Theorem 1.5

The proof follows the same lines as the one given in [6] for compound Poisson cascades, and uses computations similar to those performed in [35] to find a sufficient condition for finiteness.

Let \(J=[t_0,t_1]\in {\mathcal {I}}\). For \(t\in J\) and \(\epsilon <|J|\) we have

$$\begin{aligned} V_\epsilon ^J(t)=\widetilde{V}^J_\epsilon (t) \cup V^{J,l}(t) \cup V^{J,r}(t), \end{aligned}$$

where \(\widetilde{V}^J_\epsilon (t)=V_\epsilon ^J(t) {\setminus } V^J_{|J|}(t)\), and recall from Sect. 3.1 that

$$\begin{aligned} V^{J,l}(t)&= \left\{ z=x+iy\in V(t): |J| \le y < 2(t_1-x)\right\} ,\\ V^{J,r}(t)&= \left\{ z=x+iy\in V(t): |J| \le y \le 2(x-t_0)\right\} . \end{aligned}$$

Let \(s\in \{l,r\}\). Recall from Lemma 3.4 that for \(q\in I_\nu \) there exists a constant \(C_q<\infty \) such that

$$\begin{aligned} {\mathbb {E}}\left( \,\,\sup _{t\in J} e^{q\Lambda (V^{J,s}(t))}\right) \le C_q, \end{aligned}$$
(7.1)

and for \(q\in {\mathbb {R}}\) there exists a constant \(c_q>0\) such that

$$\begin{aligned} {\mathbb {E}}\left( \inf _{t\in J}e^{q\Lambda (V^{J,s}(t))}\right) \ge c_q. \end{aligned}$$
(7.2)

Let \(\widetilde{\mu }_\epsilon ^J(\mathrm{d}t)=Q(\widetilde{V}^J_\epsilon (t))\, \mathrm{d}t\), \(\widetilde{\mu }^J=\lim _{\epsilon \rightarrow 0} \widetilde{\mu }^J_\epsilon \) and \(\widetilde{Z}(J)=\widetilde{\mu }^J(J)/|J|\). Then it is easy to see that for \(q\in I_\nu \),

$$\begin{aligned} {\mathbb {E}}(\widetilde{Z}(J)^q)<\infty \Rightarrow {\mathbb {E}}(Z(J)^q)<\infty . \end{aligned}$$

and for \(q\in {\mathbb {R}}\),

$$\begin{aligned} {\mathbb {E}}(Z(J)^q)<\infty \Rightarrow {\mathbb {E}}(\widetilde{Z}(J)^q)<\infty . \end{aligned}$$

7.1. First we show that for \(q\in I_\nu \cap (-\infty ,0)\) we have \({\mathbb {E}}(Z^q)<\infty \). Let \(J_0=I_{00}\) and \(J_1=I_{11}\). It is clear that

$$\begin{aligned} \widetilde{\mu }^I(I)\ge \widetilde{\mu }^I(J_0)+\widetilde{\mu }^I(J_1). \end{aligned}$$

For \(i\in \{0,1\}\) define

$$\begin{aligned} V_i&= V^I(J_i)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) \le |I|\},\\ V_{i,l}(t)&= V^{J_i,l}(t)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) \le |I|\},\\ V_{i,r}(t)&= V^{J_i,r}(t)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) \le |I|\}, \end{aligned}$$

and

$$\begin{aligned} m_{i,l}=\inf _{t\in J_i} e^{\Lambda (V_{i,l}(t))}; \ m_{i,r}=\inf _{t\in J_i} e^{\Lambda (V_{i,r}(t))}. \end{aligned}$$

For \(i=0,1\) let \(U_i=4^{-1} \cdot m_{i,l} \cdot m_{i,r} \cdot e^{\Lambda (V_i)}\). Then we have

$$\begin{aligned} \widetilde{Z}(I) \ge U_0 \widetilde{Z}(J_0)+U_1\widetilde{Z}(J_1), \end{aligned}$$

where \(\widetilde{Z}(I),\, \widetilde{Z}(J_0),\, \widetilde{Z}(J_1)\) have the same law; \(U_0,\, U_1\) have the same law; and \(\widetilde{Z}(J_0),\, \widetilde{Z}(J_1)\) and \((U_0,U_1)\) are independent. So by using Molchan's approach to Mandelbrot cascades in the general case [31, Theorem 4], it suffices to show that \({\mathbb {E}}(U_0^q)<\infty \) in order to deduce that \({\mathbb {E}}(\widetilde{Z}(I)^q)<\infty \), hence \({\mathbb {E}}(Z^q)<\infty \).

Since \(q<0\), we have

$$\begin{aligned} U_0^q=4^{-q}\cdot \sup _{t\in J_0} e^{q\Lambda (V_{0,l}(t))} \cdot \sup _{t\in J_0} e^{q\Lambda (V_{0,r}(t))} \cdot e^{q\Lambda (V_0)}. \end{aligned}$$

Notice that these random variables are independent, so

$$\begin{aligned} {\mathbb {E}}(U_0^q)=4^{-q}\cdot {\mathbb {E}}\left( \,\,\sup _{t\in J_0}e^{q\Lambda (V_{0,l}(t))}\right) \cdot {\mathbb {E}}\left( \,\,\sup _{t\in J_0} e^{q\Lambda (V_{0,r}(t))} \right) \cdot {\mathbb {E}}\left( e^{q\Lambda (V_0)}\right) . \end{aligned}$$

Then from the fact that \(q\in I_\nu \) and (7.1) we get the conclusion. \(\square \)

7.2. Now we show that for \(q\in (-\infty ,0)\), if \({\mathbb {E}}(Z^q)<\infty \) then \(q\in I_\nu \). Let \(J_0=\inf I +|I|[0,2/3],\, J_1=\inf I+ |I|[1/3,1]\) and \(J=\inf I +|I|[1/3,2/3]\). Then we have

$$\begin{aligned} \widetilde{\mu }^I(I)\le \widetilde{\mu }^I(J_0)+\widetilde{\mu }^I(J_1). \end{aligned}$$

For \(i\in \{0,1\}\) define

$$\begin{aligned} V_i&= (V^I(J_i) {\setminus } V^I(J))\cap \{z\in \mathbb {H}: \mathrm{Im}(z) < |I|\},\\ V_{i,l}(t)&= V^{J_i,l}(t)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) < |I|\},\\ V_{i,r}(t)&= V^{J_i,r}(t)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) < |I|\}. \end{aligned}$$

Also define \(V=V^I(J)\cap \{z\in \mathbb {H}: \mathrm{Im}(z) < |I|\}\). Then we get

$$\begin{aligned} \widetilde{Z}(I) \le e^{\Lambda (V)}\cdot \left( \,\,\sum _{i=0,1} 4^{-1}\cdot \sup _{t\in J_i} e^{\Lambda (V_{i,l}(t))}\cdot \sup _{t\in J_i} e^{\Lambda (V_{i,r}(t))} \cdot e^{\Lambda (V_i)}\cdot \widetilde{Z}(J_i) \right) . \end{aligned}$$

Since \(q<0\), this gives

$$\begin{aligned} \widetilde{Z}(I)^q \ge e^{q\Lambda (V)}\cdot \left( \,\,\sum _{i=0,1} 4^{-q}\cdot \inf _{t\in J_i} e^{q\Lambda (V_{i,l}(t))}\cdot \inf _{t\in J_i} e^{q\Lambda (V_{i,r}(t))} \cdot e^{q\Lambda (V_i)} \cdot \widetilde{Z}(J_i)^q \right) . \end{aligned}$$

Taking expectations on both sides and using (7.2) we get

$$\begin{aligned} {\mathbb {E}}(\widetilde{Z}(I)^q) \ge {\mathbb {E}}(e^{q\Lambda (V)})\cdot 2\cdot 4^{-q}\cdot c_q^2 \cdot {\mathbb {E}}(e^{q\Lambda (V_0)})\cdot {\mathbb {E}}(\widetilde{Z}(I)^q). \end{aligned}$$

Then from \({\mathbb {E}}(\widetilde{Z}(I)^q)<\infty \) we get \({\mathbb {E}}(e^{q\Lambda (V\cup V_0)})\le 2^{-1} 4^q c_q^{-2}<\infty \). This yields \(q\in I_\nu \). \(\square \)

8 Proof of Theorem 1.6

The proof is similar to that of [23, Theorem 2.4].

For \(i\in \Sigma _*\) and \(j\in \{0,1\}\) let \(W_j^{[i]}=W_{ij}/ W_i\).

For \(n\ge 1,\, \omega \in \Omega \) and \(\mathbf{i}\in \Sigma \) define

$$\begin{aligned} A_n(\omega ,\mathbf{i})&= \sum _{i=i_1\cdots i_n\in \Sigma _{n}} W_{i_n}^{[i_1\cdots i_{n-1}]}(\omega )\cdot \mathbf{1}_{\{\mathbf{i}|_n=i\}}\\ R_n(\omega ,\mathbf{i})&= \sum _{i\in \Sigma _n} Z_i(\omega )\cdot \mathbf{1}_{\{\mathbf{i}|_n=i\}}. \end{aligned}$$

Thus for any \(i=i_1\cdots i_n\) and \(\mathbf{i}\in [i]\) we have

$$\begin{aligned} \mu (I_i)=\left( \,\,\prod _{k=1}^nA_k(\omega ,\mathbf{i})\right) \cdot R_n(\omega ,\mathbf{i}). \end{aligned}$$

We claim that for any \(n\ge 1,\, A_n\) has the same law as \(A\), and \(R_n\) has the same law as \(R\), where \(A\) and \(R\) are defined as in the beginning of Sect. 6.1; moreover, \(A_1,\ldots , A_n, R_n\) are independent. This is due to the fact that for any non-negative Borel functions \(f_1,\ldots ,f_n\) and \(g\) one gets

$$\begin{aligned}&{\mathbb {E}}_\mathbb {Q}\left( g(R_n)\prod _{j=1}^nf_j(A_j)\right) \\&\quad = {\mathbb {E}}\left( \,\,\sum _{i=i_1\cdots i_{n}\in \Sigma _n} g(Z_{i}) Z_{i} \prod _{k=1}^{n} f_k\left( W_{i_k}^{[i_1\cdots i_{k-1}]}\right) W^{[i_1\cdots i_{k-1}]}_{i_{k}} \right) \\&\quad ={\mathbb {E}}(g(Z)Z) \prod _{k=1}^n 2{\mathbb {E}}(f_k(W_0)W_0) \\&\quad ={\mathbb {E}}_\mathbb {Q}(g(R)) \prod _{k=1}^n {\mathbb {E}}_\mathbb {Q}(f_k(A)). \end{aligned}$$

Under the assumptions we have

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}(\log A)=2{\mathbb {E}}(W_0\log W_0)=\varphi '(1)\log 2=:\beta _1\in (-\infty ,0) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {E}}_\mathbb {Q}((\log A)^2)-{\mathbb {E}}_\mathbb {Q}(\log A)^2=\varphi ''(1)\log 2=:\beta _2\in (0,\infty ). \end{aligned}$$
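For a concrete illustration of the signs of \(\beta _1\) and \(\beta _2\), one can again take the hypothetical toy weight \(W_0\sim \mathrm {Uniform}(0,1)\) (so \(2{\mathbb {E}}(W_0)=1\)); a direct computation gives \(\beta _1=2{\mathbb {E}}(W_0\log W_0)=-1/2\) and \(\beta _2=2{\mathbb {E}}(W_0(\log W_0)^2)-\beta _1^2=1/4\), which the following sketch confirms numerically:

```python
# Numerical check of beta1 = 2 E(W_0 log W_0) < 0 and
# beta2 = 2 E(W_0 (log W_0)^2) - beta1^2 > 0 for the toy weight
# W_0 ~ Uniform(0,1).  Exact values for this toy: beta1 = -1/2, beta2 = 1/4.
import math
import random

random.seed(3)
N = 200_000
# 1 - random() lies in (0, 1], avoiding log(0)
W = [1.0 - random.random() for _ in range(N)]

beta1 = 2 * sum(w * math.log(w) for w in W) / N
m2 = 2 * sum(w * math.log(w) ** 2 for w in W) / N
beta2 = m2 - beta1 ** 2
```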

Write \(S_n=\log A_1+\cdots +\log A_n\). By the law of the iterated logarithm we get

$$\begin{aligned} \limsup _{n\rightarrow \infty } \frac{S_n-n\beta _1}{\sqrt{2\beta _2 n\log \log n}}=1, \, \mathbb {Q}\text {-a.s.} \end{aligned}$$
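As an aside, the normalization in this statement can be illustrated by simulation: for a centered random walk with i.i.d. increments, the running maximum of \((S_n-n\beta _1)/\sqrt{2\beta _2 n\log \log n}\) is typically of order 1 for large \(n\). A toy sketch with standard Gaussian increments (\(\beta _1=0,\,\beta _2=1\)), not a proof, and convergence in the LIL is notoriously slow:

```python
# Simulation of the LIL normalization: running max of
# S_n / sqrt(2 n log log n) for a standard Gaussian random walk.
import math
import random

def lil_statistic(seed: int, n_steps: int) -> float:
    rng = random.Random(seed)
    s, best = 0.0, -math.inf
    for n in range(1, n_steps + 1):
        s += rng.gauss(0.0, 1.0)
        if n >= 3:  # log log n is positive only for n >= 3
            best = max(best, s / math.sqrt(2 * n * math.log(math.log(n))))
    return best

stat = lil_statistic(seed=2, n_steps=100_000)
```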

It follows that for \(\mathbb {Q}\)-almost all \((\omega ,\mathbf{i})\in \Omega \times \Sigma \) and all \(0<\epsilon <1\),

$$\begin{aligned} e^{n\beta _1+(1-\epsilon )\sqrt{2\beta _2 n\log \log n}} \le e^{S_n} \le e^{n\beta _1+(1+\epsilon )\sqrt{2\beta _2 n\log \log n}}, \end{aligned}$$
(8.1)

where the left inequality holds for infinitely many \(n\in \mathbb {N}\), while the right inequality holds for all \(n\in \mathbb {N}\) sufficiently large. We also have the following lemma.

Lemma 8.1

For \(0<\epsilon <1\) one has for \(\mathbb {Q}\)-almost all \((\omega ,\mathbf{i})\in \Omega \times \Sigma \) and all \(n\in \mathbb {N}\) sufficiently large,

$$\begin{aligned} e^{-\sqrt{n}\epsilon }\le R_n \le e^{\sqrt{n}\epsilon }. \end{aligned}$$

The rest of the proof then proceeds exactly as in the proof of [23, Theorem 2.4]. \(\square \)

8.1 Proof of Lemma 8.1

The proof is adapted from [25, Lemma 12]. First we have

$$\begin{aligned} \mathbb {Q}(|\log R_n| \ge \sqrt{n}\epsilon )&= \mathbb {Q}( R_n \ge e^{\sqrt{n}\epsilon } )+\mathbb {Q}(R_n \le e^{-\sqrt{n}\epsilon } )\\&= {\mathbb {E}}( Z\cdot \mathbf{1}_{\{Z \ge e^{\sqrt{n}\epsilon }\}} )+{\mathbb {E}}(Z\cdot \mathbf{1}_{\{Z \le e^{-\sqrt{n}\epsilon }\}} )\\&\le {\mathbb {E}}( Z\cdot \mathbf{1}_{\{Z \ge e^{\sqrt{n}\epsilon }\}} )+e^{-\sqrt{n}\epsilon }. \end{aligned}$$

Applying the elementary inequality \(\sum _{n\ge 1} \mathbf{1}_{\{ X\ge \sqrt{n}\}} \le X^2\) we get

$$\begin{aligned} \sum _{n\ge 1}\mathbb {Q}(|\log R_n| \ge \sqrt{n}\epsilon )&\le \sum _{n\ge 1}{\mathbb {E}}( Z\cdot \mathbf{1}_{\{Z \ge e^{\sqrt{n}\epsilon }\}} )+\sum _{n\ge 1}e^{-\sqrt{n}\epsilon }\\&= {\mathbb {E}}\left( Z\cdot \sum _{n\ge 1} \mathbf{1}_{\left\{ \frac{\log Z}{\epsilon } \ge \sqrt{n}\right\} } \right) +\sum _{n\ge 1}e^{-\sqrt{n}\epsilon }\\&\le \epsilon ^{-2}{\mathbb {E}}(Z(\log Z)^2)+\sum _{n\ge 1}e^{-\sqrt{n}\epsilon }. \end{aligned}$$
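The elementary inequality \(\sum _{n\ge 1} \mathbf{1}_{\{ X\ge \sqrt{n}\}} \le X^2\) holds because the sum counts the integers \(n\) with \(n\le X^2\), so it equals \(\lfloor X^2\rfloor \) for \(X\ge 0\). A deterministic sketch verifying this (the cutoff `n_max` is just large enough for the sample values):

```python
# Check of sum_{n>=1} 1_{X >= sqrt(n)} <= X^2: the sum counts the
# integers n with n <= X^2, hence equals floor(X^2) for X >= 0.
import math

def indicator_sum(x: float, n_max: int = 1000) -> int:
    """Number of n in {1, ..., n_max} with x >= sqrt(n)."""
    return sum(1 for n in range(1, n_max + 1) if x >= math.sqrt(n))

checks = {x: indicator_sum(x) for x in (0.3, 1.0, 2.5, 7.9)}
```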

Since \(\varphi '(1)<0\), there exists \(q>1\) such that \(\varphi (q)<0\); thus, by Theorem 1.2, we have \({\mathbb {E}}(Z^q)<\infty \). As \(z(\log z)^2\) is bounded on \((0,1]\) and \(z(\log z)^2=O(z^q)\) as \(z\rightarrow \infty \), this implies \({\mathbb {E}}(Z (\log Z)^2)<\infty \), and the conclusion follows from the Borel–Cantelli lemma.