Box-Counting Dimension in One-Dimensional Random Geometry of Multiplicative Cascades

We investigate the box-counting dimension of the image of a set E ⊂ R under a random multiplicative cascade function f. The corresponding result for Hausdorff dimension was established by Benjamini and Schramm in the context of random geometry, and for sufficiently regular sets the same formula holds for the box-counting dimension. However, we show that this is far from true in general, and we compute explicitly a formula of a very different nature that gives the almost sure box-counting dimension of the random image f(E) when the set E comprises a convergent sequence. In particular, the box-counting dimension of f(E) depends more subtly on E than just on its dimensions. We also obtain lower and upper bounds for the box-counting dimension of the random images for general sets E.


Introduction
The random multiplicative cascade is a well-studied random measure on the unit cube in d-dimensional Euclidean space. It originally arose in Mandelbrot's study of turbulence [22] but has since been investigated in its own right, see e.g. [3,4,5,6,14,17,19,23]. In one dimension the measure may be constructed iteratively by subdividing the unit interval into dyadic intervals, multiplying the length of each subdivision by an i.i.d. copy of a common positive random variable W with mean E(W) = 1. The resulting measure μ can alternatively be thought of in terms of its cumulative distribution function f(x) = μ([0, x)), which may also be interpreted as a random metric by setting d(x, y) = |f(x) − f(y)|. The latter approach was picked up as a model for quantum gravity by Benjamini and Schramm [8], who analysed the change in Hausdorff dimension of deterministic subsets E ⊂ [0, 1] under the random metric, or equivalently of their images under f with the Euclidean metric. They obtained an elegant formula for the almost sure Hausdorff dimension s of f(E) with respect to the random metric in terms of the Hausdorff dimension d of E, namely that s is the unique non-negative solution of

d = s − log_2 E(W^s). (1.1)

Further, when W has a log-normal distribution, they showed that the formula reduces to the famous KPZ formula, first established by Knizhnik, Polyakov, and Zamolodchikov [20], that links the dimensions of an object in the deterministic and quantum gravity metrics. Barral et al. [5] removed some of the assumptions of Benjamini and Schramm, and Duplantier and Sheffield [11] studied the same phenomenon in another popular model of quantum gravity, Liouville quantum gravity. Duplantier and Sheffield showed that a KPZ formula holds for the Euclidean expectation dimension, an 'averaged' box-counting type dimension.
Using dimensions to study random geometry has a fruitful history, see e.g. [1,8,10,15,21,25], which use dimension theory in their methodology. Whilst much of the literature in random geometry considers Hausdorff dimension or other 'regular' scaling dimensions, box-counting dimensions have not been explored as thoroughly. In part this may be due to the more complicated geometrical properties of box-counting dimension of a set, manifested, for instance, in its projection properties, see [13].
One might hope that a formula analogous to (1.1) would also hold for the box-counting dimension of images of sets under the cascade function f . We investigate this question and find that this need not be the case for sets that are not sufficiently homogeneous. We give bounds that are valid for the box-counting dimensions of f (E) for general sets E, and then in Theorems 1.11 and 1.12 give an exact formula for the box dimension of f (E) for a large family of sets of a very different form from (1.1).
We remark that the study of dimensions of the images of sets under various random functions goes back a considerable time. For example, with B α : R → R as index-alpha fractional Brownian motion, dim H B α (E) = min{1, 1 α dim H E}, see Kahane [18]. On the other hand, the corresponding result for packing and box-counting dimensions is more subtle, depending on 'dimension profiles', as demonstrated by Xiao [26].

Notation and definitions.
This section introduces random multiplicative cascade functions and dimensions, along with the notation that we shall use. We will use finite and infinite words from the alphabet {0, 1} throughout. We write finite words as i = i_1 i_2 ... i_k ∈ {0, 1}^k for k ∈ N, with ∅ the empty word and {0, 1}* = ∪_{k=0}^∞ {0, 1}^k, and write i = i_1 i_2 ... ∈ {0, 1}^N for infinite words. We combine words by juxtaposition, and write |i| for the length of a finite word.
For i = i_1 i_2 ... i_k ∈ {0, 1}^k let I_i = [0.i_1 i_2 ... i_k, 0.i_1 i_2 ... i_k + 2^{-k}) denote the corresponding dyadic interval, taking the rightmost intervals [1 − 2^{-k}, 1] to be closed. We denote the set of such dyadic intervals of length 2^{-k} by I_k. Note that every interval of I_k is the union of exactly two disjoint intervals in I_{k+1}.
Underlying the random cascade construction is a random variable W, with {W_i : i ∈ {0, 1}*} a tree of independent random variables each with the distribution of W. We will assume throughout that W is positive, not almost surely constant, and that

E(W) = 1 and E(W log_2 W) ≤ 1. (1.2)

Note that E(W log_2 W) ≤ 1 implies E(W^t) < ∞ for t ∈ [0, 1]. We distinguish the subcritical regime, when E(W log_2 W) < 1, from the critical regime, when E(W log_2 W) = 1. Unless otherwise noted, we assume the subcritical regime. Here, the length of the random image f([0, 1]) is given by

|f([0, 1])| = L := lim_{k→∞} Σ_{i∈{0,1}^k} 2^{-k} W_{i_1} W_{i_1 i_2} ... W_{i_1...i_k},

where |A| denotes the diameter of a set A, and where L = μ([0, 1]) with μ the (subcritical) random cascade measure. Comprehensive accounts of the properties of L can be found in [8] and [19]; in particular the assumption that E(W log_2 W) < 1 implies that L exists, that 0 < L < ∞ almost surely, and that E(L) = 1. Similarly, the length of the random image of the interval I_i ∈ I_k is given by

|f(I_i)| = μ(I_i) = 2^{-k} W_{i_1} W_{i_1 i_2} ... W_{i_1...i_k} L_i,

where L_i has the distribution of L, independently for i ∈ {0, 1}^k for each fixed k. The random multiplicative cascade measure μ on [0, 1] is obtained by extension from the μ(I_i).
Almost surely, μ has no atoms and μ(I) > 0 for every interval I, so the associated random multiplicative cascade function f : [0, 1] → [0, L], given by f(x) = μ([0, x)), is almost surely strictly increasing and continuous. We do not need to refer to μ further and will work entirely with f. In the critical regime a similar measure exists. In particular, normalising by √k gives

√k Σ_{i∈{0,1}^k} 2^{-k} W_{i_1} W_{i_1 i_2} ... W_{i_1...i_k} → L,

where the convergence is in probability. The random limit L exists and 0 < L < ∞ almost surely under the additional assumption that E(W log_2^2 W) < ∞, see [9]. Here E(L) = ∞, unlike in the subcritical case. The associated measure μ is nevertheless finite almost surely, and it was shown in [5] that this measure almost surely has no atoms. We refer the reader to [5] for a detailed account of critical Mandelbrot cascades. Note further that the length of the random image of the interval I_i ∈ I_k is again given by |f(I_i)| = 2^{-k} W_{i_1} ... W_{i_1...i_k} L_i, where L_i is a random variable that is equal to L in distribution (and hence has infinite mean). Note that while we will consider image sets f(E) as subsets of R with the Euclidean metric, one could equivalently define a random metric d_W by setting d_W(x, y) = μ([x, y]) and investigate (E, d_W) instead. For more details on such alternative interpretations, see [8].
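The iterative construction above is easy to sketch numerically. The following illustration (our sketch, not code from the paper) builds the level-K martingale approximation to the cascade function f for a log-normal W normalised so that E(W) = 1; the limiting correction factors L_i are simply omitted, so this is only the finite-level approximation.

```python
import math
import random

def cascade_lengths(K, sigma2=0.5, seed=1):
    """Level-K martingale approximation to the subcritical cascade:
    returns the approximate masses mu(I_i) of the 2**K dyadic intervals,
    using i.i.d. log-normal weights W = exp(N(-sigma2/2, sigma2)) so
    that E(W) = 1 (subcritical requires sigma2 < log 4)."""
    rng = random.Random(seed)
    lengths = [1.0]
    for _ in range(K):
        nxt = []
        for ell in lengths:
            for _ in range(2):           # two children per dyadic interval
                w = math.exp(rng.gauss(-sigma2 / 2, math.sqrt(sigma2)))
                nxt.append(ell * w / 2)  # child takes half the mass, times W
        lengths = nxt
    return lengths

def cascade_cdf(lengths):
    """Cumulative sums: values of the approximate cascade function f
    at the dyadic points j * 2**-K."""
    f, total = [0.0], 0.0
    for ell in lengths:
        total += ell
        f.append(total)
    return f

lengths = cascade_lengths(10)
f = cascade_cdf(lengths)
# f is strictly increasing, as the limiting cascade function is a.s.
assert all(f[j] < f[j + 1] for j in range(len(f) - 1))
```

The total mass f(1) approximates L, which has mean 1 in the subcritical regime; individual realisations fluctuate around this value.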
The Hausdorff dimension dim_H is the most commonly considered form of fractal dimension. The Hausdorff dimension of a subset E of a metric space (X, d) may be defined as

dim_H E = inf{ s ≥ 0 : for all ε > 0 there is a countable cover {U_i} of E with Σ_i |U_i|^s < ε }.

Perhaps more intuitive are the box-counting dimensions. Let (X, d) be a metric space and E ⊂ X be non-empty and bounded. Write N_r(E) for the minimal number of sets of diameter at most r > 0 needed to cover E. The upper and lower box-counting dimensions (or box dimensions) are given by

the upper limit and lower limit, respectively, of log N_r(E) / (−log r) as r → 0.

If these limits coincide, we speak of the box-counting dimension dim_B E of E. Note that whilst many 'regular' sets (such as Ahlfors regular sets) have equal Hausdorff and box-counting dimension, this is not true in general.

Theorem 1.3 Let W satisfy (1.2), and in the critical case assume also that E(W log_2^2 W) < ∞. If E ⊂ [0, 1] is non-empty and compact, then, almost surely, the upper box dimension of f(E) is at most the unique non-negative solution s of

dim_B E = s − log_2 E(W^s) (upper box dimensions). (1.4)

We can also apply Theorem 1.3 to the packing dimension.
Proof Recall that the packing dimension of a set E equals its modified upper box-counting dimension, that is,

dim_P E = inf{ sup_i dim_B E_i : E ⊆ ∪_{i=1}^∞ E_i } (upper box dimensions),

where the E_i may be taken to be compact. The conclusion follows by applying Theorem 1.3 to countable coverings of E.
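The box-counting quantities above are straightforward to estimate numerically. The following sketch (our illustration, not from the paper) counts occupied dyadic cells for a level-12 approximation of the middle-third Cantor set, whose box dimension log 2 / log 3 = 0.6309... is standard; at a fixed finite scale the estimate is only accurate up to bounded multiplicative constants.

```python
import itertools
import math

def dyadic_count(points, k):
    """Number of level-k dyadic intervals that meet a finite set of
    points in [0, 1] -- comparable to the covering number N_{2^-k}."""
    return len({min(int(x * 2 ** k), 2 ** k - 1) for x in points})

def box_dim_estimate(points, k):
    """Finite-scale slope log2 N_{2^-k} / k, a crude dim_B estimate."""
    return math.log2(dyadic_count(points, k)) / k

# level-12 approximation of the middle-third Cantor set: points whose
# first 12 ternary digits lie in {0, 2}
cantor = [sum(2 * d * 3.0 ** -(i + 1) for i, d in enumerate(digits))
          for digits in itertools.product((0, 1), repeat=12)]
est = box_dim_estimate(cantor, 10)   # roughly between 0.6 and 0.8 here
```

For a set of full dimension, such as an equally spaced grid, the same estimate returns exactly 1 at matching scales.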
We also derive general lower bounds.
Theorem 1.6 Let W satisfy (1.2) and let f be given by the (subcritical) multiplicative cascade. If E ⊂ [0, 1] is non-empty and compact then, almost surely,

dim_B f(E) ≥ dim_B E / (1 + γ) (upper box dimensions), (1.5)

where γ = −E(log_2 W) > 0, and, provided that additionally E(W^p) < ∞ for some p > 2, then the corresponding inequality holds for the lower box dimensions. Further, the same inequalities hold for critical random cascades under the additional assumptions that E(W^{-t}) < ∞ for some t > 0 and E(W log_2^2 W) < ∞.
It should be noted that these upper and lower bounds are asymptotically equivalent for small dimensions.

Proposition 1.7 Let d ∈ (0, 1), let s_1 be the unique solution to (1.7) and let s_2 be the unique solution to (1.8). Then s_1/s_2 → 1 as d → 0.

To see that neither of these bounds need give the actual box dimensions of f(E) for many sets E, and that the box dimension of the random image f(E) depends more subtly on E than just on its dimension, we will consider sets formed by decreasing sequences that accumulate at 0, and obtain the almost sure box dimensions of their images in our main Theorems 1.11 and 1.12. Let a = (a_n)_{n∈N} be a sequence of positive reals that converges to 0. We write E_a = {a_n : n ∈ N} ∪ {0}. Given two sequences a and b of positive reals that are eventually decreasing and convergent to 0, we say that b eventually separates a if there is some n_0 ∈ N such that for all n ≥ n_0 there exists m ∈ N with a_{n+1} ≤ b_m ≤ a_n. We will need this property, which is preserved under strictly increasing functions, when comparing dimensions of the images of sequences under the random function f. However, we first use it to compare the box-counting dimensions of deterministic sets. The simple proofs of the following two lemmas are given in Sect. 2.3.

Lemma 1.8 Let a = (a_n)_n and b = (b_n)_n be strictly decreasing sequences convergent to 0 such that b eventually separates a. Then dim_B E_a ≤ dim_B E_b for both the lower and upper box dimensions.

We write S_p (p > 0) for the set of sequences a = (a_n)_n convergent to 0 such that −log a_n / log n → p. We say that the sequence a = (a_n)_n is decreasing with decreasing gaps if a_n ↓ 0 and a_n − a_{n+1} is (not necessarily strictly) decreasing.

Lemma 1.9
Let a = (a n ) n ∈ S p and b = (b m ) m ∈ S q be decreasing sequences with decreasing gaps with 0 < q < p. Then b eventually separates a.
Of course, the most basic examples of such sequences are powers of reciprocals. For p > 0 let a(p) = (n^{-p})_n ∈ S_p, so that E_{a(p)} = {1, 1/2^p, 1/3^p, ...} ∪ {0}. We may compare a(p) with other sequences in S_p.

Corollary 1.10 Let a = (a_n)_n ∈ S_p be a strictly decreasing sequence with decreasing gaps, where p > 0. Then dim_B E_a = 1/(1 + p).

Proof of Corollary 1.10 Clearly a(q) ∈ S_q for q > 0 and it is well-known that dim_B E_{a(q)} = 1/(1 + q), see [12, Example 2.7]. If q_1 < p < q_2 then a(q_1) eventually separates a and a eventually separates a(q_2), by Lemma 1.9, so by Lemma 1.8,

1/(1 + q_2) = dim_B E_{a(q_2)} ≤ dim_B E_a ≤ dim_B E_{a(q_1)} = 1/(1 + q_1)

for lower box dimension, with similar inequalities for upper box dimension. Since we may take q_1 and q_2 arbitrarily close to p, the conclusion follows.
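The value dim_B E_{a(p)} = 1/(1 + p) can be checked numerically. The following sketch (our illustration; the parameters are arbitrary) counts a greedy r-separated subset, which is comparable to N_r up to bounded factors, so the finite-scale estimate only approaches 1/(1 + p) slowly.

```python
import math

def greedy_separated(points_desc, r):
    """Size of a greedy r-separated subset of a decreasing sequence,
    comparable to the covering count N_r up to a bounded factor."""
    count, last = 0, None
    for x in points_desc:
        if last is None or last - x >= r:
            count, last = count + 1, x
    return count

def dim_estimate(p, k, n_max=200000):
    """log2 N_r / log2(1/r) at scale r = 2**-k for E_{a(p)} = {n**-p}."""
    r = 2.0 ** -k
    pts = [n ** -p for n in range(1, n_max)]
    return math.log2(greedy_separated(pts, r)) / k

est = dim_estimate(1.0, 12)   # theory: 1/(1 + p) = 1/2, approached slowly
```

At scale 2^{-12} the estimate overshoots 1/2 slightly because of the constant factors absorbed in the counting.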

Random images of decreasing sequences with decreasing gaps
We aim to find the almost sure dimension of f(E_a) for sequences a ∈ S_p (p > 0). To achieve this we work with special sequences E_α ∈ S_{1/α} for which dim_B f(E_α) is more tractable, and then extend these conclusions across the S_p using the eventual separation property. Let α > 0 be a real parameter and let E_α ⊂ [0, 1] be the set given in terms of binary expansions by

E_α = {0.0^{k-1} 1 j : k ∈ N, j ∈ {0, 1}^{αk}} ∪ {0},

where 0^m denotes m consecutive 0s and {0, 1}^m represents all digit strings of length m of 0s and 1s (rounding αk to an integer where necessary). Equivalently, E_α corresponds to the set of infinite strings 0^{k-1} 1 j 0 0 ..., and we will identify such strings with binary numbers in the obvious way throughout. Clearly, E_α consists of a decreasing sequence of numbers with decreasing gaps, together with 0.
If the nth term in this sequence is α_n = 0.0^{k-1} 1 j 0 0 ... ∈ E_α with j ∈ {0, 1}^{αk}, then 2^{-(k+1)} < α_n ≤ 2^{-k}. Moreover, n is comparable to Σ_{l≤k} 2^{αl}, so log_2 n is comparable to αk. Letting n → ∞ and thus k → ∞, it follows that −log α_n / log n → 1/α, so that (α_n)_n ∈ S_{1/α}, and so by Corollary 1.10

dim_B E_α = 1/(1 + 1/α) = α/(1 + α).

We may think of the structure of a set E ⊂ [0, 1] as a tree formed by the hierarchy of binary intervals that overlap E. The structure of E_α, with a 'stem' at 0 and a sequence of full trees branching off this stem, see Fig. 1, makes it convenient for analysing the box dimension of the random image f(E_α). To obtain the lower bound, we will require a result on large deviations in binary trees that needs the additional assumptions that

E(W^t) < ∞ for all t > 0 and E(W^{-t}) < ∞ for some t > 0. (1.10)

The first condition implies that E(W^t log^n W) < ∞ for all t > 0 and n ∈ N, and in particular that E(W^t) is smooth for t > 0. Applying the dominated convergence theorem, we can compute the derivatives of the t-moments of W:

(d/dt) E(W^t) = E(W^t log W) and (d^2/dt^2) E(W^t) = E(W^t log^2 W).

We also note that, since W is not almost surely constant,

(d^2/dt^2) log_2 E(W^t) > 0 for t > 0, (1.11)

so that t ↦ log_2 E(W^t) is strictly convex. We can now state our main results.
Theorem 1.11 Let W be a positive random variable that is not almost surely constant and satisfies (1.2) and (1.10), and let f be the random homeomorphism given by the (subcritical) multiplicative cascade with random variable W. Then, almost surely, the box-counting dimension of f(E_α) is given by (1.12) for all α > 0. We note that we only require (1.10) for the lower bound in (1.12).
The dimension formula (1.12) is expressed in terms of the Legendre transform of the logarithmic moment log_2 E(W^t). Figure 2 shows the logarithmic moment and its Legendre transform for a log-normally distributed W that satisfies our assumptions. The right hand side of (1.12) is strictly increasing and continuous in α, as we verify in Lemma 2.3. Using this, and noting that the 'eventually separated' condition is preserved under monotonic increasing functions, we may compare f(E_{1/p}) with f(E_a), where a ∈ S_p, to transfer this conclusion to more general sequences.

Theorem 1.12 Let W be a positive random variable that is not almost surely constant and satisfies (1.2) and (1.10). Let f be the random homeomorphism given by the (subcritical) multiplicative cascade with random variable W. Then, almost surely, the random image f(E_a) has box-counting dimension given by (1.13) for all decreasing sequences with decreasing gaps a = (a_n)_n ∈ S_p and p > 0 simultaneously.
The formula in (1.13) clearly does not coincide with (1.3), which gives the Hausdorff dimension in [8], or with the averaged box-counting dimension in [11]. In particular, unlike Hausdorff dimension, the almost sure box-counting dimension of f(E) cannot be found simply in terms of the box-counting dimension of E and the random variable W underlying f. One can easily construct a Cantor-like set E whose box and Hausdorff dimensions equal the box dimension of E_{a(p)} but whose image has the almost sure dimension given by (1.3) rather than by (1.13); then E and E_{a(p)} have the same box dimension but their random images have different box dimensions. Thus the structure of the set, and not just its box-counting dimension, determines the dimension of the image.
We obtain different dimension results for sets accumulating at 0 because we seek a balance between the behaviour of products of the W_i along the 'stem' {0^k}_{k∈N}, which grow like exp(E(log W)) per level (a 'geometric' mean), and that of the trees that branch off this stem, which grow like E(W) (an 'arithmetic' mean). These different large deviation behaviours are exploited in the proofs. The stark difference in these two behaviours was analysed in detail in [24] in a different context.
On the other hand, homogeneous, or regular, sets have a structure resembling that of a tree that grows geometrically, and there is no 'stem' that distorts this uniform behaviour.
Finally we remark that Theorems 1.11 and 1.12 can be extended to critical cascades in a similar fashion to our general bounds. We omit the details to avoid unnecessary technicalities.

Specific W distributions
The expressions for the box-counting dimension in (1.13) and the lower and upper bounds above can be simplified or numerically estimated for particular distributions of W . Most often considered is a log-normal distribution, and we also examine a two-point discrete distribution, as was done for the Hausdorff dimension of images in [8].
1.3.1. Log-normal W. Let E_a be the set formed by the sequence a = a(p) ∈ S_p, and let W be log-normally distributed with parameters μ, σ; that is, W = exp X where X ∼ N(μ, σ^2). The condition E(W) = 1 requires μ = −σ^2/2, and we can compute γ = −E(log_2 W) = −μ/log 2 = σ^2/log 4. The standing condition E(W log_2 W) < 1 can be shown to be equivalent to σ^2 < log 4. Further, the conditions in (1.2) and (1.10) are easily checked. Let S_1(p) and S_2(p) be the general lower and upper bounds given by Theorems 1.6 and 1.3, respectively, for these W.
Noting that log_2 E(W^t) = γ(t^2 − t), we can calculate the upper bound S_2(p), since (1.4) becomes the quadratic d = (1 + γ)s − γs^2. To compute the almost sure dimension of f(E_a), first note that for x ≥ γ the infimum in the numerator of the dimension formula (1.13) is zero. For x ∈ (0, γ) the infimum occurs at t_0 = (γ − x)/(2γ), giving

inf_{t≥0} ( xt + log_2 E(W^t) ) = −(γ − x)^2/(4γ)

for x < γ and 0 otherwise. Notice in particular that the infimum is clearly continuous at x = γ. Differentiating the right hand side of the resulting expression with respect to x, equating with 0 and solving for x gives two solutions, since the numerator is quadratic and the denominator is non-zero for 0 < x < γ; only one solution of the quadratic is positive, and this yields dim_B f(E_a). The upper bound S_2(p) from Theorem 1.3 is implicitly given by (1.4); see Fig. 4. We were unable to find a closed form for dim_B f(E_a) from (1.13), and the figure was produced computationally.
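The log-normal infimum computation above is easy to verify numerically. The following sketch (our illustration; the parameter value is arbitrary) checks the closed form −(γ − x)^2/(4γ) against a brute-force grid search, using log_2 E(W^t) = γ(t^2 − t).

```python
import math

sigma2 = 0.9                        # an arbitrary choice with sigma2 < log 4
gamma = sigma2 / math.log(4)        # gamma = -E(log2 W) = sigma2 / log 4

def log2_moment(t):
    """log2 E(W^t) for W = exp(X), X ~ N(-sigma2/2, sigma2);
    this equals gamma * (t**2 - t)."""
    return (-sigma2 * t / 2 + sigma2 * t ** 2 / 2) / math.log(2)

def psi(x):
    """inf_{t >= 0} (x*t + log2 E(W^t)): zero for x >= gamma, and
    -(gamma - x)**2 / (4 * gamma) for 0 < x < gamma."""
    if x >= gamma:
        return 0.0                  # infimum attained at t = 0
    t0 = (gamma - x) / (2 * gamma)  # stationary point of the quadratic
    return x * t0 + log2_moment(t0)

# the closed form agrees with a brute-force grid search over t in [0, 1]
x = gamma / 3
grid = min(x * t + log2_moment(t) for t in (i / 10000 for i in range(10001)))
closed = -(gamma - x) ** 2 / (4 * gamma)
```

The continuity at x = γ noted in the text is visible here: both branches of psi vanish as x → γ.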

General bounds
Let k ≥ 0 and 0 < r ≤ 1. For each I_i ∈ I_k, Markov's inequality gives

P{ |f(I_i)| ≥ r } ≤ r^{-t} E(|f(I_i)|^t) = r^{-t} 2^{-kt} E(W^t)^k E(L^t). (2.14)

We estimate the expected number of dyadic intervals with image of length at least r. For each k ∈ N, let J_k be the set of intervals in I_k that intersect E, and let k_0 be the least integer such that #(J_k) ≤ 2^{k(d+ε)} for all k ≥ k_0, where d is the upper box dimension of E. Summing over the intervals of the J_k, the expected number of dyadic intervals with image of length at least r is at most c_1 r^{-t}, where we have used (2.14) and (2.16), and where c_1 does not depend on k ≥ 0 or 0 < r ≤ 1. Note that, for 0 < r < 1, the image set f(E) is covered by the disjoint intervals f(I_i) for i in the set S_r of words i = i_1 ... i_k such that |f(I_i)| < r ≤ |f(I_{i-})|, where i- = i_1 ... i_{k-1}. We denote by N_r(F) the minimal number of intervals of length at most r that intersect the set F. Then N_r(f(E)) is at most twice the number of intervals with image of length at least r, since each interval f(I_i) with i ∈ S_r has a parent interval f(I_{i-}) with |f(I_{i-})| ≥ r, with at most two such f(I_i) having a common parent interval. We now sum over a geometric sequence r = 2^{-n}. Let ε > 0. From (2.18) and (2.17),

Σ_{n=1}^∞ P{ N_{2^{-n}}(f(E)) ≥ 2^{n(t+ε)} } < ∞.

Hence, almost surely, N_{2^{-n}}(f(E)) 2^{-nt-nε} is bounded in n, so from the definition of box-counting dimension, noting that it is enough to take the limit through a geometric sequence r = 2^{-n} → 0, we conclude that the upper box dimension of f(E) is at most t + ε. If μ is the critical cascade measure, the proof follows similarly. We can first estimate E(|f(I_i)|^t), noting that E(L^t) < ∞ for t ∈ [0, 1), see [16, Theorem 1.5] or [5, Equation (26)]; this gives a bound with a constant C > 0, and one obtains an additional subexponential contribution to the expected covering number. The rest of the proof follows in much the same way and details are left to the reader.

General lower bound
For the lower bound, Theorem 1.6, we note that, by the strong law of large numbers, almost surely,

(1/k) log_2 ( W_{i_1} W_{i_1 i_2} ... W_{i_1...i_k} ) → E(log_2 W),

where the W_i are independent with the distribution of W. This enables us to deduce that a significant proportion of the intervals f(I_i) that intersect f(E) must be reasonably large. Further, since we are taking logarithms we can ignore any subexponential growth, which in particular means that

(1/k) log_2 |f(I_i)| → −1 + E(log_2 W) = −(1 + γ)

also almost surely. We will use the following two lemmas.

Lemma 2.1
Let 0 < p ≤ 1 and let X_1, ..., X_n be events such that P(X_i) ≥ p for all 1 ≤ i ≤ n. Let 0 < λ < p. Then

P{ at least λn of the X_i occur } ≥ (p − λ)/(1 − λ). (2.19)

Note that there is no independence requirement on the X_i.
Proof Let Y be the event {at least λn of the X_i occur} and let P(Y) = ρ. By the law of total expectation,

pn ≤ E(#{i : X_i occurs}) ≤ ρn + (1 − ρ)λn,

and rearranging gives ρ ≥ (p − λ)/(1 − λ).
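As a sanity check, the bound (2.19) can be compared with a simulation. In the following sketch (our illustration) the events are taken i.i.d. Bernoulli(p) purely for convenience, even though the lemma makes no independence assumption; the simulated probability should dominate the worst-case bound.

```python
import random

def lemma_bound(p, lam):
    """The lower bound (p - lam) / (1 - lam) from Lemma 2.1, eq. (2.19)."""
    return (p - lam) / (1 - lam)

def empirical(p, lam, n=50, trials=20000, seed=7):
    """Empirical P(at least lam*n of n events occur), with the events
    i.i.d. Bernoulli(p) for simplicity."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        occurrences = sum(rng.random() < p for _ in range(n))
        hits += occurrences >= lam * n
    return hits / trials

p, lam = 0.5, 0.25
emp, bound = empirical(p, lam), lemma_bound(p, lam)
# for independent events the true probability is far above the bound,
# which is tight only for adversarially correlated events
```
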
The following lemma can be derived from Hoeffding's inequality.

Lemma 2.2 Let (X_i) be a sequence of i.i.d. random variables with P(X_i = 1) = p = 1 − P(X_i = 0). Then, for all N ∈ N,

P( Σ_{i=1}^N X_i ≤ (1/2) p N ) ≤ exp( −(1/2) p^2 N ),

with a corresponding bound for the opposite tail.
Proof Hoeffding's inequality states that for any sequence of independent random variables Y_i with a_i ≤ Y_i ≤ b_i and for t > 0,

P( Σ_{i=1}^N ( Y_i − E(Y_i) ) ≥ t ) ≤ exp( −2t^2 / Σ_{i=1}^N (b_i − a_i)^2 ).

The first inequality follows on applying Hoeffding's inequality with Y_i = −X_i, t = (1/2) p N, a_i = −1 and b_i = 0.
For the second inequality we similarly obtain the corresponding bound by applying Hoeffding's inequality to the opposite tail. The same argument can be repeated for the critical case. Here, the strong law of large numbers gives the analogous almost sure estimate along the stem, so the same bounds hold for k large enough. Again L_i is equal to L in distribution and there exists τ > 0 such that P{L ≥ τ} ≥ 3/4. We can now conclude that (2.20) also holds in the critical case.
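For concreteness, the Hoeffding bound used in the proof can be checked numerically; the following is an illustrative sketch with arbitrary p and N, not code from the paper.

```python
import math
import random

def hoeffding_tail(p, N):
    """Hoeffding bound for P(sum X_i <= p*N/2) with X_i i.i.d.
    Bernoulli(p): taking Y_i = -X_i in [-1, 0] and t = p*N/2 gives
    exp(-2 t**2 / N) = exp(-p**2 * N / 2)."""
    return math.exp(-p ** 2 * N / 2)

def empirical_tail(p, N, trials=20000, seed=3):
    """Simulated P(sum X_i <= p*N/2)."""
    rng = random.Random(seed)
    bad = 0
    for _ in range(trials):
        total = sum(rng.random() < p for _ in range(N))
        bad += total <= p * N / 2
    return bad / trials

p, N = 0.5, 60
emp, bnd = empirical_tail(p, N), hoeffding_tail(p, N)
# the simulated tail probability is tiny, consistent with the bound
```
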
For each k ∈ N, let J_k be the set of intervals in I_k that intersect E, and let #(J_k) be the number of such intervals. By the definition of upper box-counting dimension, #(J_k) ≥ 2^{k(d−ε)} for infinitely many k; write K for this infinite set of k ≥ k_0. Applying Lemma 2.1 to the intervals I_i ∈ J_k, taking p = 1/2 and λ = 1/4, gives (2.21) for all k ∈ K. Let N_r(F) be the maximum number of disjoint intervals of length at least r that intersect a set F. Write r_k = 2^{−k(1−E(log_2 W)+ε)} for each k ∈ N. From (2.21), N_{r_k}(f(E)) ≥ (1/4) 2^{k(d−ε)} with probability at least 1/3 for each k ∈ K, so with probability at least 1/3 this holds for infinitely many k ∈ K. It is easy to see that an equivalent definition of upper box-counting dimension is given by the upper limit of log_2 N_r(F)/log_2(1/r) as r → 0. It is enough to evaluate this limit along the geometric sequence r = r_k, so the upper box dimension of f(E) is at least (d − ε)/(1 − E(log_2 W) + ε) with probability at least 1/3, and therefore with probability 1, since {dim_B f(E) ≥ s} is a tail event for each s. Since ε > 0 is arbitrary, (1.5) follows.
For the lower box dimensions for subcritical cascades, we let d be the lower box dimension of E, which we may assume to be positive, and let 0 < ε < d. We need an estimate on the rate of convergence in the laws of large numbers: if E(|X|^p) < ∞ for some p > 2 then the rate estimate (2.22) holds. For each k ∈ N, let J_k be the set of intervals in I_k that intersect E, so there is a number k_0 such that #(J_k) ≥ 2^{k(d−ε)} whenever k ≥ k_0. Fixing k ≥ k_0, let E_k = {i ∈ J_k : E_i occurs}, which depends only on {W_i : |i| ≤ k}; by Lemma 2.1 its size is bounded below with probability bounded away from zero. The random variables {L_i : i ∈ I_k} are independent of {W_i : |i| ≤ k} and of each other, so a standard binomial distribution estimate, which follows from Hoeffding's inequality (see Lemma 2.2), bounds the conditional probability that too few of the corresponding images are large. Hence, unconditionally, for each k, the probability that the covering count falls short is controlled, and since Σ_{k=1}^∞ 2P_k < ∞ and Σ_{k=1}^∞ exp(−(1/4) p^2 2^{k(d−ε)}) < ∞, the Borel–Cantelli lemma implies that, with probability one, the required covering bound holds for all sufficiently large k. As in the upper dimension part, but taking lower limits, this gives the lower bound for the lower box dimension. For the lower box dimensions and critical cascades we note that the same estimates hold with an additional √k factor. Following the same argument as above with this factor, we conclude the corresponding bound for sufficiently large k. Again, taking lower limits and noting that (1/k) log √k → 0, we get the required lower bound for critical cascades.

Asymptotic behaviour
Proof of Proposition 1.7 Solving (1.8) for d, substituting in (1.7) and rearranging expresses the ratio s_1/s_2 in terms of log_2 E(W^{s_1})/s_1. Note that s_1, s_2 → 0 as d → 0. Recall that our assumptions imply E(log W) < log 2 and E(W^t) < ∞ for t ∈ [0, 1]. It is well known that the power means converge to the geometric mean, that is, E(W^{s_1})^{1/s_1} → exp(E(log W)) as s_1 → 0. Combining this with the above shows that s_1/s_2 → 1, as required.
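The asymptotic equivalence of the two bounds is easy to see numerically for a log-normal W. The following sketch is our illustration, and it assumes (as reconstructed from Sect. 1.3.1) that the lower-bound exponent solves d = (1 + γ)s and the upper-bound exponent solves the quadratic d = (1 + γ)s − γs^2, the log-normal form of (1.4).

```python
import math

sigma2 = 0.5                      # arbitrary log-normal parameter < log 4
gamma = sigma2 / math.log(4)      # gamma = -E(log2 W)

def lower_exponent(d):
    """Assumed lower-bound exponent: solves d = (1 + gamma) * s."""
    return d / (1 + gamma)

def upper_exponent(d):
    """Assumed upper-bound exponent: smaller positive root of the
    quadratic d = (1 + gamma) * s - gamma * s**2."""
    disc = (1 + gamma) ** 2 - 4 * gamma * d
    return ((1 + gamma) - math.sqrt(disc)) / (2 * gamma)

ratios = [lower_exponent(d) / upper_exponent(d) for d in (0.1, 0.01, 0.001)]
# the ratio of the two exponents increases towards 1 as d -> 0
```
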

Box dimension of images of decreasing sequences
We now proceed to the substantial proof of Theorem 1.11, from which we easily deduce Theorem 1.12. First, the following lemma notes some properties of the expressions that occur in (1.12) and (1.13); in particular it follows that they are continuous in α and p respectively (for example, the right hand side of (1.12) is φ((1 + γ)/α) with φ as in (2.23)).
Then φ is strictly decreasing and continuous in β.
Proof (a) Let g_x(t) = xt + log_2 E(W^t) for x ≥ 0 and t ≥ 0. Then g_x''(t) > 0 by (1.11), so g_x is a strictly convex function. Also g_x'(t) = x + (d/dt) log_2 E(W^t), so in particular g_x'(0) = x + E(log_2 W) = x − γ, and g_x'(1) = x + E(W log_2 W) > x + E(W) log_2 E(W) = x ≥ 0, by Jensen's inequality and the fact that W is not almost surely constant, so the conclusions in (a) on the infimum follow. The function ψ is continuous for x ≥ 0 since it is the Legendre transform of the twice continuously differentiable strictly convex function log_2 E(W^t).
(b) Now consider the function η(x, β) over which the supremum defining φ(β) is taken. Since the supremum in φ(β) is over a bounded interval, it is an exercise in basic analysis to see that φ is continuous in β and that, since η(x, β) is strictly decreasing in β for each x, φ is strictly decreasing.

Upper bound for dim B f (E α )
Throughout this section the distribution of W, and so γ = −E(log_2 W), are fixed, as is α > 0. First we bound the expected number of intervals of length at most r needed to cover f(E_α ∩ [2^{−k}, 2^{−k+1}]), by bounding in Lemma 2.4 the expected number of dyadic intervals whose images have length at least r, for all 0 < r < 1. The numbers c_t may be taken to vary continuously in t ∈ (0, 1) and do not depend on ε, k or r.
Proof We bound from above the expected number of dyadic intervals under consideration. We split these intervals into three types.
Then, as in case (b) but including the terms for levels k + αk + l, we get, just as in (2.25), the corresponding bound for each 1 ≤ l < ∞. For 0 < r < 1, let J(r) be the collection of all intervals I_i of the forms considered in (a), (b), (c) above that intersect E_α and such that |f(I_i)| < r. Each I_i ∈ J(r) has a 'parent' interval I_{i-}, with at most two intervals in J(r) having a common parent interval. These parent intervals satisfy |f(I_{i-})| ≥ r and are included in those counted in (a), (b), (c), so N_r(f(E_α ∩ [2^{−k}, 2^{−k+1}])) is bounded above by twice this number of intervals.
By writing r in an appropriate form relative to 2^{−k}, we can bound the expectation in the previous lemma by r raised to a suitable exponent. Note that in the following lemma we have to work with the infimum over [t_1, t_2], where 0 < t_1 < t_2 < 1, in order to get a uniform constant c(t_1, t_2). At the end of the proof of Proposition 2.6 we show that the infimum can be taken over t > 0.

Lemma 2.5 Let 0 < ε < γ. Let k ∈ N and suppose that W_0 W_00 ... W_{0^{k−1}} ≤ a 2^{−(k−1)(γ−ε)} for some a > 0. Then for all 0 < t_1 < t_2 < 1 there exists c(t_1, t_2) > 0, independent of k, r and ε, such that (2.32) holds, provided that t_2(ε) < t_2 < 1.

Proof In Lemma 2.4, c_t is continuous and positive on (0, 1), so let c(t_1, t_2) be the maximum of c_t over [t_1, t_2]. We bound the right hand side of (2.24) using (2.33) for t ∈ [t_1, t_2]. Changing the base of logarithms to 1/r and taking the infimum over t ∈ [t_1, t_2], inequality (2.32) now follows from (2.24) on taking the supremum over x: by calculus, the middle term is increasing in x for −1 − (1 + γ − ε)/α < x ≤ 0, provided that t_2(ε) < t_2 < 1, so it is enough to take the supremum over x > 0.
It remains to sum the estimates in Lemma 2.5 over 1 ≤ k ≤ K for an appropriate K and make a basic estimate to cover f(E_α ∩ [0, 2^{−K}]). The Borel–Cantelli lemma leads to a suitable bound for N_r(f(E_α)) for all sufficiently small r, and finally we note that the infimum can be taken over t > 0.

Lower bound for dim B f (E α )
To obtain the lower bound of Theorem 1.11 we establish a bound on the distribution of the products W_{i_1} ... W_{i_1...i_n} of independent random variables on a binary tree. We will use a well-known relationship for the free energy of the Mandelbrot measure that goes back to Mandelbrot [22] and has been proved in a very general setting by Attia and Barral [2].

Proposition 2.7 (Attia and Barral [2]) Let X be a random variable with finite logarithmic moment function Λ(q) = log E(e^{qX}) for all q ≥ 0. Write R(x) = inf_{q∈R} ( Λ(q) − xq ) for the rate function and assume that Λ(q) is twice differentiable for q > 0. If {X_i : i ∈ ∪_{j=1}^∞ {0,1}^j} are independent and identically distributed with the distribution of X, then the free energy of the associated branching random walk converges almost surely, with limit given in terms of the rate function R.

We refer the reader to the well-written account of the history of this statement in [2], where Proposition 2.7 is a special case of their Theorem 1.3(1); see in particular (1.1) and situation (1) discussed in [2, page 142]. Note that the application of this theorem requires the strongest assumptions thus far on the random variable W. We derive a version of this proposition suited to our setting.
We now develop Lemma 2.8, considering independent subtrees whose roots lie a little way down the main binary tree, to obtain probabilities that converge to 1 at a geometric rate. When we apply the following lemma, we will take ε, δ to be small and λ close to 1.
Proof of Theorem 1.11 For fixed α, Theorem 1.11 follows immediately from Propositions 2.6 and 2.10. Further, with probability 1, (1.12) holds simultaneously for all α in any countable subset A ⊂ (0, ∞), in particular for the positive rationals. Since the right hand side of (1.12) is continuous in α, it must hold for all α > 0 simultaneously, and so Theorem 1.11 holds.

2.2.3. Box dimension of f(E_a) for a ∈ S_p

It remains to extend Theorem 1.11 to Theorem 1.12, which we do using the 'eventually separating' notion.
Since f is almost surely monotonic, it preserves 'eventual separation' for all pairs of sequences, so f(E_{1/p_1}) eventually separates f(a) and f(a) eventually separates f(E_{1/p_2}). By Lemma 1.8, the box dimensions of these images are ordered accordingly. By Lemma 2.3, φ is continuous in α, so taking p_1, p_2 arbitrarily close to p, we conclude that dim_B f(E_a) = φ(1/p). Further, since 'eventual separation' is preserved almost surely for all pairs of sequences a and a', the box-counting dimension of f(E_a) is constant across all a ∈ S_p. Applying Theorem 1.11 we get that dim_B f(E_a) = φ(1/p) for all a ∈ S_p and p > 0 simultaneously with probability 1.

Decreasing sequences
We now prove the statements in Sect. 1.2.2.
Proof of Lemma 1.8 We may assume that n_0 = 1 in the definition of b eventually separating a, since removing a finite number of points from a sequence does not affect its box-counting dimensions. For r > 0 and E a bounded subset of R, let N_r(E) be the maximal number of points in an r-separated subset of E, and let {a_{n_i}} be a maximal r-separated subset of a (with n_i increasing). Then for each 1 ≤ i ≤ N_r(a) − 1 there exists b_{m_i} ∈ b such that a_{n_{i+1}} ≤ b_{m_i} ≤ a_{n_i}. Then {b_{m_1}, b_{m_3}, b_{m_5}, ..., b_{m_N}} is an r-separated set, where N is the largest odd number less than N_r(a). It follows that N_r(b) ≥ (1/2)(N + 1) ≥ (1/2)(N_r(a) − 2). The inequalities now follow from the definition of the lower box-counting dimension as the lower limit of log N_r(E)/(−log r) as r → 0, and similarly for the upper box-counting dimension.
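The counting step N_r(b) ≥ (1/2)(N_r(a) − 2) can be illustrated numerically. This is our sketch; the two power sequences and the scale are arbitrary choices satisfying the hypotheses of Lemma 1.9.

```python
def max_separated(points_desc, r):
    """Greedy maximal r-separated subset of a decreasing sequence,
    scanning from the largest point as in the proof of Lemma 1.8."""
    chosen = []
    for x in points_desc:
        if not chosen or chosen[-1] - x >= r:
            chosen.append(x)
    return chosen

# b = (n^{-1/2}) lies in S_{1/2} and eventually separates a = (n^{-1})
# in S_1 by Lemma 1.9; check the count N_r(b) >= (N_r(a) - 2) / 2
a = [n ** -1.0 for n in range(1, 100000)]
b = [n ** -0.5 for n in range(1, 100000)]
r = 2.0 ** -10
Na, Nb = len(max_separated(a, r)), len(max_separated(b, r))
```

In fact for these sequences N_r(b) far exceeds the guaranteed half of N_r(a), reflecting the strict inequality of the dimensions 1/2 < 2/3.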