1 Exact dimensionality and Ledrappier–Young formulae

Given an iterated function system (IFS), which in this article will refer to a finite family of uniform contractions \(\{S_i: [0,1]^2 \rightarrow [0,1]^2\}_{i=1}^N\), it is well-known that there exists a unique, non-empty, compact set \(F \subseteq [0,1]^2\) such that \(F=\bigcup _{i=1}^N S_i(F)\) which we call the attractor of the IFS. We say that a measure \(\mu \) supported on F is invariant (respectively ergodic) if there exists a \(\sigma \)-invariant (respectively ergodic) measure m on \(\Sigma =\{1, \ldots , N\}^{\mathbb {N}}\) (where \(\sigma \) denotes the left shift map) such that \(\mu =m \circ \Pi ^{-1}\) where \(\Pi : \Sigma \rightarrow [0,1]^2\) is the canonical coding map defined by \(\Pi (i_1,i_2 \ldots )=\lim _{n \rightarrow \infty } S_{i_1}\circ \cdots \circ S_{i_n}([0,1]^2)\).
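To illustrate the coding map numerically, here is a minimal sketch using a hypothetical pair of affine contractions (our own toy example, not a system considered in this article); since the composed images shrink to a point, \(\Pi (\texttt {i})\) can be approximated by applying the first n maps of the word, innermost first, to any starting point.

```python
import numpy as np

# A minimal numerical sketch of the coding map Pi for a hypothetical IFS of two
# affine contractions on [0,1]^2 (our own toy example, not one from the paper).
def S1(p):
    return p / 3.0                               # image [0, 1/3]^2

def S2(p):
    return p / 3.0 + np.array([2 / 3, 2 / 3])    # image [2/3, 1]^2

maps = [S1, S2]

def Pi(word, depth=40):
    """Approximate Pi(i_1, i_2, ...) = lim_n S_{i_1} o ... o S_{i_n}([0,1]^2).

    Because the composed image shrinks to a point, applying the first `depth`
    maps (innermost first) to any starting point converges to Pi(word).
    """
    p = np.array([0.5, 0.5])
    for symbol in reversed(word[:depth]):
        p = maps[symbol](p)
    return p

# The point coded by the periodic word (0, 1, 0, 1, ...):
print(Pi([0, 1] * 30))   # approximately (0.25, 0.25), a point of the attractor F
```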

Recall that the (upper and lower) local dimensions of \(\mu \) at x are defined as

$$\begin{aligned} {\overline{\dim }}_{\mathrm {loc}}(x):=\limsup _{r \rightarrow 0} \frac{\log (\mu (B(x,r)))}{\log r} \;\; \text { and } \;\; {\underline{\dim }}_{\mathrm {loc}}(x):=\liminf _{r \rightarrow 0} \frac{\log (\mu (B(x,r)))}{\log r} \end{aligned}$$

where \(B(x,r)\) denotes the ball of radius r centred at x. If \({\overline{\dim }}_{\mathrm {loc}}(x)={\underline{\dim }}_{\mathrm {loc}}(x)\) we denote the common value by \(\dim _{\mathrm {loc}}(x)\) and call it the local dimension of \(\mu \) at x. If the local dimension exists and is constant for \(\mu \)-almost all x we say that the measure \(\mu \) is exact dimensional and call this constant the exact dimension of \(\mu \), which we will denote by \(\dim \mu \). In this case, most well-known notions of dimension coincide with the exact dimension of \(\mu \).
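For example (a standard illustration, outside the setting of this article), the \((1/2,1/2)\) Bernoulli measure \(\nu \) on the middle-third Cantor set satisfies \(\nu (B(x,r))\asymp r^{\log 2/\log 3}\) uniformly for all x in its support, since each such ball contains one level-n construction interval and meets at most two of them when \(r\approx 3^{-n}\); hence \(\nu \) is exact dimensional with \(\dim \nu =\log 2/\log 3\).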

In the dimension theory of measures, it is a problem of central interest to establish the exact dimensionality of ergodic invariant measures supported on attractors of IFS, and to provide a formula for the exact dimension. In many settings, the exact dimension has been shown to satisfy a formula in terms of Lyapunov exponents, various notions of entropy and the dimensions of projected measures.

In particular, if the maps \(S_i\) are conformal and the IFS satisfies an additional separation condition, it is a classical result that any ergodic invariant measure \(\mu \) supported on the attractor F is exact dimensional and its exact dimension is given by its measure-theoretic entropy over the Lyapunov exponent (see e.g. [3]). In the substantially more difficult case where no separation condition is assumed, Feng and Hu [8] generalised this classical result by showing that any ergodic invariant measure \(\mu \) supported on the attractor F is exact dimensional and its exact dimension is given by the projection entropy over the Lyapunov exponent. In this sense, exact dimensionality is understood in the conformal setting.

On the other hand, the question of whether every ergodic invariant measure supported on the attractor of a non-conformal IFS is exact dimensional is still very much open, and this question has recently received a lot of attention in the particular case where the maps \(S_i\) are all affine. Feng [7] has very recently shown that all ergodic invariant measures supported on the attractors of IFS composed of affine maps are exact dimensional and satisfy a formula in terms of the Lyapunov exponents and conditional entropies. This answered a folklore open question in the fractal community and unified previous partial results obtained in [1, 10]. In the non-conformal setting, this formula for the exact dimension of \(\mu \) is often called a “Ledrappier–Young formula”, following the work of Ledrappier and Young on the dimension of invariant measures for \(C^2\) diffeomorphisms on compact manifolds [13, 14].

While Feng’s result settles the case of ergodic measures supported on self-affine sets, the more general case of ergodic measures supported on attractors of more general (i.e. nonlinear) non-conformal IFS is still open. In fact, the only result in this direction that the authors are aware of is [8, Theorem 2.11], where Feng and Hu prove exact dimensionality of ergodic invariant measures supported on the attractors of IFSs which can be expressed as the direct product of IFSs composed of \(C^1\) maps on \({\mathbb {R}}\). The fact that there is limited literature concerning the exact dimensionality of measures supported on general non-conformal attractors reflects the wider challenge of understanding the dimension theory of nonlinear non-conformal attractors, although this appears to be an area of growing interest [4, 6, 9].

In this article we consider (pushforward) quasi-Bernoulli measures supported on the attractors of nonlinear, non-conformal IFS which were introduced in [6], and we show that these are exact dimensional and satisfy a Ledrappier–Young formula. In Sect. 2 we introduce the class of attractors and measures which will be studied and state our main result, Theorem 2.5. In Sect. 3 we recall some technical results which were proved in [6] concerning the contractive properties of the maps in our IFS. Section 4 contains the proof of Theorem 2.5, which adapts an approach used in [10].

2 Our setting and statement of results

For maps \(S_i(a_1, a_2)=(f_i(a_1), g_i(a_1, a_2))\) as in Definition 2.1 below, we let \(f_{i,x}\) denote the derivative of \(f_i\) and let \(g_{i,x}\), \(g_{i,y}\) denote the partial derivatives of \(g_i\) with respect to x and y respectively.

We will consider the following family of attractors which were introduced by Falconer, Fraser and Lee [6, Definitions 1.1 and 3.1], although we require a stronger separation condition than the one they assumed.

Definition 2.1

Suppose \({\mathcal {I}}\) is a finite index set with \(|{\mathcal {I}}|\ge 2\). For each \(i \in {\mathcal {I}}\) let \(S_i:[0, 1]^2\rightarrow [0, 1]^2\) be of the form \(S_i(a_1, a_2)= (f_i(a_1), g_i(a_1, a_2))\), where:

  1.

    \(f_i\) and \(g_i\) are \(C^{1+\alpha }\) contractions \((\alpha >0)\) on [0, 1] and \([0, 1]^2\) respectively.

  2.

    \(\{S_i\}_{i \in {\mathcal {I}}}\) satisfies the rectangular strong separation condition (RSSC): the sets \(\lbrace S_{i}([0, 1]^2)\rbrace _{i\in {\mathcal {I}}}\) are pairwise disjoint.

  3.

    \(\lbrace S_{i}\rbrace _{i\in {\mathcal {I}}}\) satisfies the domination condition: for each \(i \in {\mathcal {I}}\)

    $$\begin{aligned} \inf _{\mathbf{a}\in [0, 1]^2}|f_{i,x}(\mathbf{a})|>\sup _{\mathbf{a}\in [0, 1]^2}|g_{i,y}(\mathbf{a})|\ge \inf _{\mathbf{a}\in [0, 1]^2}|g_{i,y}(\mathbf{a})|\ge d, \end{aligned}$$
    (1)

    where \(d>0\).

Let F denote the attractor of \(\{S_i\}_{i \in {\mathcal {I}}}\).
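For concreteness, the following sketch generates approximate points of the attractor of a hypothetical two-map system of the above skew form; the particular maps S_1, S_2 are our own choice and are only claimed to plausibly satisfy the conditions of Definition 2.1, with disjoint images and \(\inf |f_{i,x}|\ge 1/3>1/4=\sup |g_{i,y}|\).

```python
import numpy as np

# A hypothetical pair of skew maps S_i(a1, a2) = (f_i(a1), g_i(a1, a2)), chosen
# by us to (plausibly) satisfy Definition 2.1: the images lie in the disjoint
# boxes [0, 1/3] x [0, 0.30] and [0.55, 0.95] x [0.60, 0.87] (RSSC), and
# inf|f_{i,x}| >= 1/3 > 1/4 = sup|g_{i,y}| (the domination condition (1)).
def S1(a):
    x, y = a
    return np.array([x / 3.0, 0.05 * x + 0.25 * y])

def S2(a):
    x, y = a
    return np.array([0.55 + 0.35 * x + 0.05 * x ** 2,
                     0.60 + 0.02 * np.sin(3.0 * x) + 0.25 * y])

# "Chaos game": iterating randomly chosen maps produces points that
# (after a short transient) lie close to the attractor F.
rng = np.random.default_rng(0)
pt = np.array([0.5, 0.5])
points = []
for _ in range(10_000):
    pt = (S1 if rng.random() < 0.5 else S2)(pt)
    points.append(pt)
points = np.array(points)
print(points.min(axis=0), points.max(axis=0))   # bounding box of the sample
```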

For \(n\in {\mathbb {N}}\) we write \({\mathcal {I}}^n\) to denote the set of all sequences of length n over \({\mathcal {I}}\) and we let \({\mathcal {I}}^*=\bigcup _{n\ge 1}{\mathcal {I}}^n\) denote the set of all finite sequences over \({\mathcal {I}}\). We let \(\Sigma ={\mathcal {I}}^{\mathbb {N}}\) denote the set of infinite sequences over \({\mathcal {I}}\) and for \(\texttt {i}=(i_1, i_2,\dots )\in \Sigma \) we write \(\texttt {i}|n= (i_1, i_2,\dots , i_n)\in {\mathcal {I}}^n\) to denote the restriction of \(\texttt {i}\) to its first n symbols. For \(\texttt {i}=(i_1, i_2,\dots , i_n)\in {\mathcal {I}}^n\) we write \(S_{\texttt {i}}= S_{i_1}\circ \cdots \circ S_{i_n}\) and we write \([\texttt {i}]\subseteq \Sigma \) to denote the cylinder set corresponding to \(\texttt {i}\), which is the set of all infinite sequences over \({\mathcal {I}}\) which begin with \(\texttt {i}\).

Definition 2.2

We say that a measure m on \(\Sigma \) is quasi-Bernoulli if there exists some \(L>0\) such that for all \(\texttt {i}, \texttt {j}\in {\mathcal {I}}^*\)

$$\begin{aligned} L^{-1}m([\texttt {i}])m([\texttt {j}])\le m([\texttt {i}\texttt {j}])\le L m([\texttt {i}])m([\texttt {j}]). \end{aligned}$$
(2)

We will study the pushforward measure \(\mu =m \circ \Pi ^{-1}\) for an ergodic invariant quasi-Bernoulli measure m, noting that \(\mu \) is supported on F. As well as the important class of Bernoulli measures, quasi-Bernoulli measures include the well-known class of Gibbs measures for Hölder continuous potentials. Furthermore, it was shown in [2] that this inclusion is strict.
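For instance, Bernoulli measures satisfy (2) with \(L=1\), and any stationary Markov measure with strictly positive transition matrix is quasi-Bernoulli. The following sketch (a hypothetical two-state example of our own, not taken from [2]) checks the inequality (2) numerically.

```python
import itertools
import numpy as np

# A fully supported stationary Markov measure on {0,1}^N is quasi-Bernoulli
# (a hypothetical illustrative example): with transition matrix P and
# stationary vector pi, m([i_1...i_n]) = pi_{i_1} P_{i_1 i_2} ... P_{i_{n-1} i_n},
# so m([ij]) / (m([i]) m([j])) = P_{i_n j_1} / pi_{j_1}, which is bounded away
# from 0 and infinity because P has strictly positive entries.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
pi = np.array([4 / 7, 3 / 7])        # stationary: pi @ P == pi

def m(word):
    """Markov measure of the cylinder [word]."""
    prob = pi[word[0]]
    for a, b in zip(word, word[1:]):
        prob *= P[a, b]
    return prob

ratios = [P[a, b] / pi[b] for a in range(2) for b in range(2)]
L = 1.001 * max(max(ratios), 1 / min(ratios))    # a valid quasi-Bernoulli constant

for i in itertools.product([0, 1], repeat=3):
    for j in itertools.product([0, 1], repeat=3):
        ratio = m(i + j) / (m(i) * m(j))
        assert 1 / L <= ratio <= L               # the quasi-Bernoulli inequality (2)
print("inequality (2) holds with L =", L)
```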

The Shannon–McMillan–Breiman theorem allows us to define the entropy of \(\mu \).

Definition 2.3

(Entropy) There exists a constant \(h(\mu ) \le 0\) such that for m-almost all \(\texttt {i}\in \Sigma \),

$$\begin{aligned} h(\mu )=\lim _{n \rightarrow \infty } \frac{1}{n} \log m([\texttt {i}|n]). \end{aligned}$$
(3)

We call \(h(\mu )\) the entropy of \(\mu \).
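As a quick illustration of the limit (3) (a toy Bernoulli example of our own): if m is the Bernoulli measure with probability vector \((p_1,\ldots ,p_N)\) then \(m([\texttt {i}|n])=p_{i_1}\cdots p_{i_n}\), so \(h(\mu )=\sum _i p_i\log p_i\), which is indeed \(\le 0\) under the sign convention above. A minimal numerical sketch:

```python
import numpy as np

# Sketch of the limit (3) for a Bernoulli measure (our own toy example):
# m([i|n]) = p_{i_1} ... p_{i_n}, so (1/n) log m([i|n]) -> sum_i p_i log p_i
# for m-almost every i.  Under the sign convention of Definition 2.3 this
# limit h(mu) is indeed <= 0.
rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])

n = 200_000
word = rng.choice(len(p), size=n, p=p)     # an m-typical word i|n
empirical = np.log(p[word]).mean()         # (1/n) log m([i|n])
print(empirical, np.sum(p * np.log(p)))    # both are approximately -1.0297
```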

Apart from entropy, the other key ingredients in the Ledrappier–Young formula are the Lyapunov exponents, which describe the typical contraction rates in different directions.

Lyapunov exponents are defined in terms of Jacobian matrices of the maps \(S_{\texttt {i}|n}\). We denote the Jacobian matrix of \(S_{\texttt {i}|n}\) at a point \({\mathbf {a}}\in [0, 1]^2\) by \(D_{\mathbf {a}}S_{\texttt {i}|n}\). We recall that the singular values of an \(n\times n\) matrix A are defined to be the positive square roots of the eigenvalues of \(A^TA\), where \(A^T\) denotes the transpose of A. The Lyapunov exponents are defined in terms of singular values of the matrices \(D_{\mathbf {a}}S_{\texttt {i}|n}\). For fixed \({\mathbf {a}}\in [0,1]^2\) and any \(n\in {\mathbb {N}}\) we let \(\alpha _1(D_{\mathbf {a}}S_{\texttt {i}|n})\ge \alpha _2(D_{\mathbf {a}}S_{\texttt {i}|n})\) denote the singular values of \(D_{\mathbf {a}}S_{\texttt {i}|n}\). The sub-additive ergodic theorem then allows us to make the following definition.

Definition 2.4

(Lyapunov exponents) There exist constants \( \chi _2(\mu ) \le \chi _1(\mu )< 0\) such that for m-almost all \(\texttt {i}\in \Sigma \),

$$\begin{aligned} \chi _1(\mu )=\lim _{n \rightarrow \infty }\frac{1}{n} \log \alpha _1\left( D_{\Pi (\sigma ^n \texttt {i})} S_{\texttt {i}|n}\right) \end{aligned}$$
(4)

and

$$\begin{aligned} \chi _2(\mu )=\lim _{n \rightarrow \infty }\frac{1}{n} \log \alpha _2\left( D_{\Pi (\sigma ^n \texttt {i})} S_{\texttt {i}|n}\right) . \end{aligned}$$
(5)

We call \(\chi _1(\mu ), \chi _2(\mu )\) the Lyapunov exponents of the system with respect to \(\mu \).
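The following rough numerical sketch (our own toy example with constant lower-triangular Jacobians, so that the base points in (4) and (5) play no role) approximates \(\chi _1(\mu )\) and \(\chi _2(\mu )\) by multiplying Jacobians along a random word drawn from a Bernoulli measure and tracking the singular values of the product; in this affine case one expects \(\chi _1(\mu )=\sum _i p_i\log |f_{i,x}|\) and \(\chi _2(\mu )=\sum _i p_i\log |g_{i,y}|\).

```python
import numpy as np

# Rough numerical sketch of (4)-(5) for a toy affine system with constant
# lower-triangular Jacobians (our own hypothetical example; for affine maps the
# base point Pi(sigma^n i) is irrelevant).  For a Bernoulli measure with
# weights p one expects chi_1 = sum_i p_i log|f_{i,x}| and
# chi_2 = sum_i p_i log|g_{i,y}|.
rng = np.random.default_rng(1)
jacobians = [np.array([[0.6, 0.0], [0.1, 0.30]]),
             np.array([[0.5, 0.0], [-0.2, 0.25]])]
p = np.array([0.4, 0.6])

n = 5000
word = rng.choice(2, size=n, p=p)
D, log_a1, log_det = np.eye(2), 0.0, 0.0
for k in word:
    D = D @ jacobians[k]                    # chain rule: multiply the Jacobians
    log_det += np.log(abs(np.linalg.det(jacobians[k])))
    s = np.linalg.norm(D, 2)                # largest singular value of the partial product
    log_a1 += np.log(s)                     # accumulate log alpha_1 ...
    D /= s                                  # ... and renormalise to avoid underflow
# alpha_1 = exp(log_a1), and alpha_2 = |det| / alpha_1 for a 2x2 matrix.
print(log_a1 / n, (log_det - log_a1) / n)   # approximately (chi_1, chi_2)
print(p @ np.log([0.6, 0.5]), p @ np.log([0.30, 0.25]))   # the expected limits
```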

Let \(\pi :[0,1]^2 \rightarrow [0,1]\) denote projection to the x-co-ordinate. Let \(\pi (\mu )=\mu \circ \pi ^{-1}\) denote the projected measure, which is supported on \(\pi F\), the attractor of the (possibly overlapping) conformal IFS \(\{f_i\}_{i \in {\mathcal {I}}}\) on [0, 1]. Note that by [8, Theorem 2.8], \(\pi (\mu )\) is exact dimensional. We denote its exact dimension by \(\dim \pi (\mu )\).

We are now ready to state our main result.

Theorem 2.5

Let \(\mu \) be the pushforward of an ergodic invariant quasi-Bernoulli measure, supported on the attractor F of an IFS satisfying Definition 2.1. Then \(\mu \) is exact dimensional and moreover its exact dimension \(\dim \mu \) satisfies the following Ledrappier–Young formula:

$$\begin{aligned} \dim \mu = \frac{h(\mu )}{\chi _2(\mu )} + \frac{\chi _2(\mu )-\chi _1(\mu )}{\chi _2(\mu )} \dim \pi (\mu ). \end{aligned}$$
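Two quick sanity checks (observations, not statements from the article): in the conformal case \(\chi _1(\mu )=\chi _2(\mu )\) the second term vanishes and the formula reduces to the classical expression \(\dim \mu =h(\mu )/\chi _2(\mu )\) recalled in Sect. 1; and since \(h(\mu )\le 0\), \(\chi _2(\mu )\le \chi _1(\mu )<0\) and \(0\le \dim \pi (\mu )\le 1\), both terms on the right-hand side are non-negative.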

3 Preliminaries

Since each \(f_i\) (\(i \in {\mathcal {I}}\)) only depends on the x-co-ordinate of a given point, it is easy to see that the Jacobian of each \(S_i\) must be lower triangular. Denote the Jacobian by

$$\begin{aligned} D_{\mathbf {a}}S_i= \begin{pmatrix} f_{i,x}({\mathbf {a}}) &{}0 \\ g_{i,x}({\mathbf {a}})&{} g_{i,y}({\mathbf {a}}) \end{pmatrix}. \end{aligned}$$

It is easy to see by the chain rule that for any \(\texttt {i}\in \Sigma \) and \(n\in {\mathbb {N}}\) the Jacobian of \(S_{\texttt {i}|n}\) is also lower triangular, and we will write it as

$$\begin{aligned} D_{\mathbf {a}}S_{\texttt {i}|n}= \begin{pmatrix} f_{\texttt {i}|n,x}({\mathbf {a}}) &{}0 \\ g_{\texttt {i}|n,x}({\mathbf {a}})&{} g_{\texttt {i}|n,y}({\mathbf {a}}) \end{pmatrix}. \end{aligned}$$

Our IFS has several useful properties, which were established in [6]. To begin with we have the following bound, which allows us to control the off-diagonal entry:

Lemma 3.1

[6, Lemma 3.3] There exists a constant \(C>0\) such that for any \(\texttt {i}\in \Sigma \), \(n \in {\mathbb {N}}\) and \(\mathbf{a}, \mathbf{b}\in [0, 1]^2\),

$$\begin{aligned} \frac{|g_{\texttt {i}|n,x}({\mathbf {a}})|}{|f_{\texttt {i}|n,x}({\mathbf {b}})|} \le C. \end{aligned}$$
(6)

A consequence of Lemma 3.1 is that the singular values of the Jacobian matrices are comparable to their diagonal entries.

Lemma 3.2

[6, Lemma 3.4] There exists a constant \(M\ge 1\) such that for all \(\mathbf{a}\in [0, 1]^2\), \(\texttt {i}\in \Sigma \) and \(n\in {\mathbb {N}}\) the singular values of the Jacobian matrices \(D_{\mathbf{a}}S_{\texttt {i}|n}\) satisfy

$$\begin{aligned} M^{-1} \ \le \ \frac{\alpha _1\left( D_{\mathbf{a}}S_{\texttt {i}|n}\right) }{|f_{\texttt {i}|n, x}(\mathbf{a})|},\ \frac{\alpha _2\left( D_{\mathbf{a}}S_{\texttt {i}|n}\right) }{|g_{\texttt {i}|n, y}(\mathbf{a})|}\ \le \ M. \end{aligned}$$
(7)

Lemma 3.2, together with the domination condition, implies that the two Lyapunov exponents are distinct, \(\chi _2(\mu )< \chi _1(\mu )\).

Another useful property of our IFS is that the diagonal entries of the Jacobian matrices satisfy a bounded distortion condition.

Lemma 3.3

[6, Lemma 3.2] There exists a constant \(A\ge 1\) such that for all \(\texttt {i}\in \Sigma \), \(n\in {\mathbb {N}}\) and all \(\mathbf{a}, \mathbf{b}\in [0, 1]^2\),

$$\begin{aligned} A^{-1} \ \le \ \frac{|f_{\texttt {i}|n,x}(\mathbf{a})|}{|f_{\texttt {i}|n,x}(\mathbf{b})|},\ \frac{|g_{\texttt {i}|n,y}(\mathbf{a})|}{|g_{\texttt {i}|n,y}(\mathbf{b})|}\ \le \ A. \end{aligned}$$
(8)

Finally, the singular values of the Jacobian matrices also satisfy bounded distortion.

Lemma 3.4

There exists a constant \(R\ge 1\) such that for all \(\texttt {i}\in \Sigma \), \(n\in {\mathbb {N}}\) and all \(\mathbf{a}, \mathbf{b}\in [0, 1]^2\),

$$\begin{aligned} R^{-1} \ \le \ \frac{\alpha _1\left( D_{\mathbf{a}}S_{\texttt {i}|n}\right) }{\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) },\ \frac{\alpha _2\left( D_{\mathbf{a}}S_{\texttt {i}|n}\right) }{\alpha _2\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\ \le \ R. \end{aligned}$$
(9)

Proof

Simply combine Lemmas 3.2 and 3.3. \(\square \)

An easy but useful consequence of Lemma 3.4 is that the Lyapunov exponents defined in Definition 2.4 may be expressed as

$$\begin{aligned} \chi _1(\mu )=\lim _{n \rightarrow \infty }\frac{1}{n} \log \alpha _1\left( D_{a_n} S_{\texttt {i}|n}\right) \;\; \text { and } \;\;\chi _2(\mu )=\lim _{n \rightarrow \infty }\frac{1}{n} \log \alpha _2\left( D_{a_n} S_{\texttt {i}|n}\right) \end{aligned}$$
(10)

for any sequence \((a_n)_{n\in {\mathbb {N}}}\) in \([0, 1]^2\), on the same set of \(\texttt {i}\in \Sigma \) of full m-measure that was used in Definition 2.4.

4 Proofs

The following key lemma allows us to estimate the \(\mu \)-measure of a small “approximate square” in \([0,1]^2\) by the product of the m-measure of an appropriate cylinder and the \(\pi (\mu )\)-measure of the \(\pi \)-projection of the “blow up” of the “approximate square”. It is worth noting that this lemma is the only place where the assumption that m is quasi-Bernoulli (Definition 2.2) is used.

For \(r>0\), \(n \in {\mathbb {N}}\), \({\mathbf {a}}=(a_1, a_2)\in [0, 1]^2\) and \(\texttt {i}\in \Sigma \) such that \(\Pi (\texttt {i})={\mathbf {a}}\) we write \(B_n({\mathbf {a}}, r)\) to denote the strip of points \({\mathbf {b}}=(b_1, b_2) \in S_{\texttt {i}|n}([0,1]^2)\) whose x co-ordinate satisfies \(|b_1-a_1| < r/2\). We note that by the RSSC, \(\Pi \) is an injective map and therefore \(B_n({\mathbf {a}},r)\) is well defined. For \(x\in {\mathbb {R}}\) and \(r>0\) we write \(Q_1(x, r) = \left( x-\frac{r}{2}, x+\frac{r}{2}\right) \).

Lemma 4.1

Let \(r>0\), \(n \in {\mathbb {N}}\), \({\mathbf {a}}=(a_1, a_2) \in F\) and \(\texttt {i}\in \Sigma \) such that \(\Pi (\texttt {i})={\mathbf {a}}\). Then

$$\begin{aligned}&\mu (B_n({\mathbf {a}}, r))\nonumber \\&\quad \le Lm([\texttt {i}|n]) \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{Mr}{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) \end{aligned}$$
(11)

and

$$\begin{aligned}&\mu (B_n({\mathbf {a}}, r))\nonumber \\&\quad \ge L^{-1}m([\texttt {i}|n]) \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) \end{aligned}$$
(12)

where L is the constant from the quasi-Bernoulli property (2), M is the constant from Lemma 3.2 and \(R_n:=\min \left\{ r, \text {diam}\left( \pi S_{\texttt {i}|n}([0,1]^2)\right) \right\} \).

Proof

Let

$$\begin{aligned} {\mathcal {J}} = {\mathcal {J}}(\texttt {i}, n, r) = \left\{ \texttt {j}\in {\mathcal {I}}^* : S_{\texttt {i}|n\texttt {j}}([0,1]^2)\subseteq B_n({\mathbf {a}}, r) \text { and } S_{\texttt {i}|n\texttt {j}^{\dagger }}([0,1]^2) \nsubseteq B_n({\mathbf {a}}, r) \right\} \end{aligned}$$

writing \(\texttt {j}^{\dagger }\) to denote \(\texttt {j}\) with the last symbol removed. It follows by our separation assumption that the sets \(\{S_{\texttt {i}|n\texttt {j}}([0, 1]^2)\}_{\texttt {j}\in {\mathcal {J}}}\) are pairwise disjoint and exhaust \(B_n({\mathbf {a}}, r)\) in measure, therefore

$$\begin{aligned} \mu (B_n({\mathbf {a}}, r)) = \sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {i}|n\texttt {j}]). \end{aligned}$$

Furthermore, as m is quasi-Bernoulli (Definition 2.2) we get

$$\begin{aligned} L^{-1}m([\texttt {i}|n])\sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {j}])\le \mu (B_n({\mathbf {a}}, r))\le Lm([\texttt {i}|n])\sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {j}]). \end{aligned}$$
(13)

Note that the sets \(\{S_{\texttt {j}}([0, 1]^2)\}_{\texttt {j}\in {\mathcal {J}}}\) are pairwise disjoint and exhaust \(S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r))\) in measure. Moreover, since \(S_{\texttt {i}|n}^{-1}B_n({\mathbf {a}}, r)\) necessarily has height 1 we have

$$\begin{aligned} \sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {j}])=\mu (S_{\texttt {i}|n}^{-1}B_n({\mathbf {a}}, r))=\pi (\mu )(\pi S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r))). \end{aligned}$$

Observe that \(\pi S_{\texttt {i}|n}^{-1}({\mathbf {a}})=\pi (\Pi (\sigma ^n(\texttt {i})))\). Writing \({\hat{a}}\) and \({\hat{b}}\) for the left and right endpoints of \(\pi S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r))\), it follows from the mean value theorem that

$$\begin{aligned} |\pi (\Pi (\sigma ^n \texttt {i}))-{\hat{a}}| = \frac{|a_1-f_{\texttt {i}|n}({\hat{a}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_1)|} \end{aligned}$$

for some \({\hat{c}}_1\in [0, 1]\) and

$$\begin{aligned} |\pi (\Pi (\sigma ^n \texttt {i}))-{\hat{b}}| = \frac{|a_1-f_{\texttt {i}|n}({\hat{b}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_2)|} \end{aligned}$$

for some \({\hat{c}}_2\in [0, 1]\). It is easy to see that

$$\begin{aligned} \max \left\{ |a_1-f_{\texttt {i}|n}({\hat{a}})|, |a_1-f_{\texttt {i}|n}({\hat{b}})|\right\} \le \frac{r}{2}, \end{aligned}$$

so Lemma 3.2 gives

$$\begin{aligned} \max \left\{ \frac{|a_1-f_{\texttt {i}|n}({\hat{a}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_1)|}, \frac{|a_1-f_{\texttt {i}|n}({\hat{b}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_2)|}\right\}\le & {} \frac{r}{2\min _{{\mathbf {b}}\in [0,1]^2} |f_{\texttt {i}|n,x}({\mathbf {b}})|}\\\le & {} \frac{Mr}{2\min _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }. \end{aligned}$$

Therefore

$$\begin{aligned} \sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {j}])\le \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{Mr}{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) . \end{aligned}$$
(14)

We now consider the lower bound. Observe that

$$\begin{aligned} \max \left\{ |a_1-f_{\texttt {i}|n}({\hat{a}})|, |a_1-f_{\texttt {i}|n}({\hat{b}})|\right\} \ge \min \left\{ \frac{r}{2}, \frac{\text {diam}\left( \pi S_{\texttt {i}|n}([0,1]^2)\right) }{2}\right\} =\frac{R_n}{2}, \end{aligned}$$

so Lemma 3.2 gives

$$\begin{aligned} \max \left\{ \frac{|a_1-f_{\texttt {i}|n}({\hat{a}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_1)|}, \frac{|a_1-f_{\texttt {i}|n}({\hat{b}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_2)|}\right\}\ge & {} \frac{R_n}{2\max _{{\mathbf {b}}\in [0,1]^2} |f_{\texttt {i}|n,x}({\mathbf {b}})|}\nonumber \\\ge & {} \frac{M^{-1}R_n}{2\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }. \end{aligned}$$
(15)

If we also have

$$\begin{aligned} \min \left\{ |a_1-f_{\texttt {i}|n}({\hat{a}})|, |a_1-f_{\texttt {i}|n}({\hat{b}})|\right\} \ge \frac{R_n}{2}, \end{aligned}$$

the same reasoning gives

$$\begin{aligned} \min \left\{ \frac{|a_1-f_{\texttt {i}|n}({\hat{a}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_1)|}, \frac{|a_1-f_{\texttt {i}|n}({\hat{b}})|}{|f_{\texttt {i}|n, x}({\hat{c}}_2)|}\right\} \ge \frac{M^{-1}R_n}{2\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }, \end{aligned}$$

so we may conclude that

$$\begin{aligned} \sum _{\texttt {j}\in {\mathcal {J}}}m([\texttt {j}])\ge \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) . \end{aligned}$$
(16)

Now suppose

$$\begin{aligned} \min \left\{ |a_1-f_{\texttt {i}|n}({\hat{a}})|, |a_1-f_{\texttt {i}|n}({\hat{b}})|\right\} <\frac{R_n}{2}. \end{aligned}$$

Note that if \(|a_1-f_{\texttt {i}|n}({\hat{a}})|<R_n/2\) then this implies that \(f_{\texttt {i}|n}({\hat{a}})\) is on the boundary of \(\pi S_{\texttt {i}|n}([0,1]^2)\), which means that \({\hat{a}}=0\). Similarly if \(|a_1-f_{\texttt {i}|n}({\hat{b}})|<R_n/2\) then \({\hat{b}}=1\). Consider

$$\begin{aligned} Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) . \end{aligned}$$

By (15) and the above analysis of the points \({\hat{a}}\) and \({\hat{b}}\), either

$$\begin{aligned} Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \subseteq \pi S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r)) \end{aligned}$$

or

$$\begin{aligned} Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \subseteq \pi S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r)) \cup I \end{aligned}$$

where \(I=(-\beta ,0)\) or \(I=(1,\beta )\) for some \(\beta >0\). In either case, since \(\pi (\mu )\) is supported on [0, 1] we still have

$$\begin{aligned} \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2}\alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) \le \pi (\mu )\left( \pi S_{\texttt {i}|n}^{-1}(B_n({\mathbf {a}}, r))\right) . \end{aligned}$$

Therefore (16) also holds in this setting. Combining (14) and (16) with (13) completes the proof. \(\square \)

Recall by [8, Theorem 2.8] that as an ergodic measure on a self-conformal set, \(\pi (\mu )\) is exact dimensional with exact dimension \(\dim \pi (\mu )\). This informs us how \(\pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})), \frac{1}{n}\right) \right) \) scales for an m-typical point \(\texttt {i}\in \Sigma \), although it does not provide any uniform bounds on the projected measure of this interval. The following lemma guarantees the existence of a set of positive measure on which we can uniformly bound \(\pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})), \frac{1}{n}\right) \right) \).

Lemma 4.2

Let \(\dim \pi (\mu )=t\). There exists a set \(G\subseteq \Sigma \) with measure \(m(G)\ge 1/2\) such that if \(\varepsilon >0\), then for all n sufficiently large

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})), \frac{1}{n}\right) \right) \le (t-\varepsilon )\log \left( \frac{1}{n}\right) \end{aligned}$$

and

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})),\frac{1}{n}\right) \right) \ge (t+\varepsilon )\log \left( \frac{1}{n}\right) \end{aligned}$$

for all \(\texttt {i}\in G\).

Proof

Define \(f_n:\Sigma \rightarrow {\mathbb {R}}\) by

$$\begin{aligned} f_n(\texttt {i})= \frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})),\frac{1}{n}\right) \right) }{-\log n }. \end{aligned}$$

Then for m-almost all \(\texttt {i}\)

$$\begin{aligned} \lim _{n\rightarrow \infty }f_n(\texttt {i})=\lim _{n\rightarrow \infty }\frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})), \frac{1}{n}\right) \right) }{-\log n}=t \end{aligned}$$

because \(\pi (\mu )\) is exact dimensional. By Egorov’s Theorem there exists a Borel measurable set \(G\subseteq \Sigma \) with \(m(G) \ge 1/2\) such that \(f_n\) converges uniformly on G. In particular, for all \(\varepsilon >0\) there exists \(N_\varepsilon \in {\mathbb {N}}\) such that

$$\begin{aligned} t-\varepsilon \le \frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi ( \texttt {i})), \frac{1}{n}\right) \right) }{-\log n} \le t+\varepsilon \end{aligned}$$

for all \(n \ge N_\varepsilon \) and \(\texttt {i}\in G\). Rearranging this expression yields the desired result. \(\square \)

Next we show that for m-almost all \(\texttt {i}\in \Sigma \) the sequence of points \(\{\sigma ^n(\texttt {i})\}_{n \in {\mathbb {N}}}\) regularly visits the set G from Lemma 4.2, yielding uniform bounds on the projected measure of the intervals that appear in (11) and (12) along a subsequence of \(n \in {\mathbb {N}}\).

Lemma 4.3

Let \(\dim \pi (\mu )=t\) and for each \(\texttt {i}\in \Sigma \) let \((r_n(\texttt {i}))_{n \in {\mathbb {N}}}\) be a positive null sequence such that \(r_n(\texttt {i}) \rightarrow 0\) uniformly over all \(\texttt {i}\in \Sigma \). Then for m-almost all \(\texttt {i}\in \Sigma \) there exists a sequence \(\{n_k\}_{k\in {\mathbb {N}}}\) such that for all \(\varepsilon >0\)

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})), r_{n_k}(\texttt {i})\right) \right) \le (t-\varepsilon )\log \left( r_{n_k}(\texttt {i})\right) \end{aligned}$$
(17)

and

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),r_{n_k}(\texttt {i})\right) \right) \ge (t+\varepsilon )\log \left( r_{n_k}(\texttt {i})\right) \end{aligned}$$
(18)

for all sufficiently large \(k \in {\mathbb {N}}\). Furthermore the sequence \(\{n_k\}_{k\in {\mathbb {N}}}\) can be chosen to satisfy

$$\begin{aligned} \lim _{k\rightarrow \infty }\frac{n_{k+1}}{n_k} = 1. \end{aligned}$$

Proof

Let G be the set from the statement of Lemma 4.2 and consider the characteristic function \({\mathbf {1}}_{G}\), which is easily seen to be in \(L^1(\Sigma )\). We can now apply the Birkhoff Ergodic Theorem to obtain that for m-almost all \(\texttt {i}\in \Sigma \)

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{1}{n}\sum _{j=0}^{n-1}{\mathbf {1}}_{G}(\sigma ^j \texttt {i})=\int {\mathbf {1}}_{G} dm = m(G)\ge 1/2. \end{aligned}$$

This gives that for m-almost all \(\texttt {i}\in \Sigma \), \(\sigma ^j \texttt {i}\in G\) with frequency greater than or equal to 1/2. For each such \(\texttt {i}\in \Sigma \), let \(\{n_k\}_{k\in {\mathbb {N}}}\) be the increasing enumeration of those \(n\in {\mathbb {N}}\) for which \(\sigma ^{n}\texttt {i}\in G\). Then by Lemma 4.2, for all \(\varepsilon >0\) there exists \(N_\varepsilon \in {\mathbb {N}}\) such that

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})), \frac{1}{n}\right) \right) \le \left( t-\frac{\varepsilon }{2}\right) \log \left( \frac{1}{n}\right) \end{aligned}$$

and

$$\begin{aligned} \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})), \frac{1}{n}\right) \right) \ge \left( t+\frac{\varepsilon }{2}\right) \log \left( \frac{1}{n}\right) \end{aligned}$$

for \(n \ge N_\varepsilon \) and all \(k \in {\mathbb {N}}\). Since \(r_{n}(\texttt {i}) \rightarrow 0\) uniformly over all \(\texttt {i}\in \Sigma \), we can choose \(M_\varepsilon \in {\mathbb {N}}\) such that \(r_n(\texttt {i}) \le \frac{1}{N_\varepsilon }\) for all \(n \ge M_\varepsilon \). In particular for all \(n_k \ge M_\varepsilon \) and m-almost all \(\texttt {i}\in \Sigma \) there exists \(\ell \ge N_\varepsilon \) such that

$$\begin{aligned} \frac{1}{\ell +1}\le r_{n_k}(\texttt {i})\le \frac{1}{\ell }, \end{aligned}$$

which gives

$$\begin{aligned}&\frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),\frac{1}{\ell }\right) \right) }{\log \left( \frac{1}{\ell }\right) +\log \left( \frac{\ell }{\ell +1}\right) }\\&\quad \le \frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),r_{n_k}(\texttt {i})\right) \right) }{\log r_{n_k}(\texttt {i})} \end{aligned}$$

and

$$\begin{aligned}&\frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),r_{n_k}(\texttt {i})\right) \right) }{\log r_{n_k}(\texttt {i})}\\&\quad \le \frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),\frac{1}{\ell +1}\right) \right) }{\log \left( \frac{1}{\ell +1}\right) +\log \left( \frac{\ell +1}{\ell }\right) }. \end{aligned}$$

Hence there exists \(N_\varepsilon ' \ge M_\varepsilon \) such that for m-almost all \(\texttt {i}\) and all \(n_k \ge N_\varepsilon '\),

$$\begin{aligned} \left| \frac{\log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k}\texttt {i})),r_{n_k}(\texttt {i})\right) \right) }{\log r_{n_k}(\texttt {i})}-t \right| \le \varepsilon , \end{aligned}$$

giving the first part of the result.

It remains to show that \(\lim _{k\rightarrow \infty }n_{k+1}/n_k = 1\). Let \(\texttt {i}\) belong to the set of full m-measure for which \(\sigma ^j \texttt {i}\in G\) with frequency at least 1/2 and write

$$\begin{aligned} S_{n_k}=\sum _{j=0}^{n_k-1}{\mathbf {1}}_{G}(\sigma ^j \texttt {i}). \end{aligned}$$

By Birkhoff’s Ergodic Theorem \(\lim _{k\rightarrow \infty }S_{n_k}/n_k = m(G)\ge 1/2\) and clearly \(S_{n_{k+1}}=S_{n_k}+1\). Now note that

$$\begin{aligned} \left| \frac{S_{n_k}}{n_k}\left( \frac{n_k}{n_{k+1}}-1\right) +\frac{1}{n_{k+1}} \right| = \left| \frac{S_{n_k}+1}{n_{k+1}}-\frac{S_{n_k}}{n_k}\right| \rightarrow 0 \end{aligned}$$

as \(k\rightarrow \infty \). As \(S_{n_k}/n_k\rightarrow m(G)\ge 1/2\) and \(1/n_{k+1}\rightarrow 0\) it follows that \(n_k/n_{k+1} \rightarrow 1\) as \(k\rightarrow \infty \), completing the proof. \(\square \)

The RSSC gives us control over the distance between “level n” cylinders, as described in the following lemma.

Lemma 4.4

There exists \(\theta >0\) such that for any \(\texttt {i}, \texttt {l}\in \Sigma \) with \(\texttt {i}|n \ne \texttt {l}|n\),

$$\begin{aligned} \inf _{{\mathbf {a}}, {\mathbf {b}}\in [0,1]^2} d(S_{\texttt {i}|n}({\mathbf {a}}), S_{\texttt {l}|n}({\mathbf {b}})) \ge \theta \alpha _2(D_{\Pi (\sigma ^{n-1} \texttt {i})}S_{\texttt {i}|n-1}) \end{aligned}$$
(19)

where d denotes the standard Euclidean metric.

Proof

For notational convenience, in this proof we shall sometimes write \(a\lesssim b\) for \(a, b\in {\mathbb {R}}\) to mean that \(a\le cb\) for some universal constant \(c>0\) which is independent of any variables on which a and b depend.

We begin by showing that for any \(a_1, b_1, b_2 \in [0,1]\) with \(a_1\ne b_1\) and any \(\texttt {i}\in {\mathcal {I}}^*\),

$$\begin{aligned} \frac{|g_\texttt {i}(b_1,b_2)-g_\texttt {i}(a_1,b_2)|}{|f_\texttt {i}(b_1)-f_\texttt {i}(a_1)|} \le C \end{aligned}$$
(20)

where C is the constant from (6). To see this, notice that by the mean value theorem there exist \(c_1,c_2 \in (a_1,b_1)\) such that

$$\begin{aligned} \frac{|g_\texttt {i}(b_1,b_2)-g_\texttt {i}(a_1,b_2)|}{|f_\texttt {i}(b_1)-f_\texttt {i}(a_1)|}=\frac{|g_{\texttt {i},x}(c_1,b_2)||b_1-a_1|}{|f_{\texttt {i},x}(c_2)||b_1-a_1|}=\frac{|g_{\texttt {i},x}(c_1,b_2)|}{|f_{\texttt {i},x}(c_2)|} \le C, \end{aligned}$$

where the final inequality follows by (6).

Now, let \({\mathbf {a}}=(a_1,a_2), {\mathbf {b}}=(b_1,b_2) \in [0,1]^2\). Define \({\mathbf {c}}=(b_1,a_2)\). We will now show that

$$\begin{aligned} d(S_\texttt {i}({\mathbf {a}}), S_\texttt {i}({\mathbf {b}})) \gtrsim d(S_\texttt {i}({\mathbf {a}}),S_\texttt {i}({\mathbf {c}}))+d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}})), \end{aligned}$$
(21)

where the implied constant is independent of \(\texttt {i}\in {\mathcal {I}}^*\), \({\mathbf {a}}\), and \({\mathbf {b}}\). To see this, we let \(\gamma =|f_\texttt {i}(a_1)-f_\texttt {i}(b_1)|\), \(\varepsilon =|g_\texttt {i}({\mathbf {a}})-g_\texttt {i}({\mathbf {b}})|\) and \(\eta =|g_\texttt {i}({\mathbf {a}})-g_\texttt {i}({\mathbf {c}})|\). Note that \(d(S_\texttt {i}({\mathbf {a}}), S_\texttt {i}({\mathbf {b}}))=\sqrt{\gamma ^2+\varepsilon ^2}\), \(d(S_\texttt {i}({\mathbf {a}}),S_\texttt {i}({\mathbf {c}}))=\sqrt{\gamma ^2+\eta ^2}\). This is displayed visually in Fig. 1.

Fig. 1: The images of the points \({\mathbf {a}}, {\mathbf {b}}\) and \({\mathbf {c}}\) under \(S_\texttt {i}\) and the distances \(\gamma , \varepsilon \) and \(\eta \).

There are now three possibilities: (i) \(d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))=\eta +\varepsilon \), (ii) \(\eta >\varepsilon \) and \(d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))=\eta -\varepsilon \) or (iii) \(\varepsilon >\eta \) and \(d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))=\varepsilon -\eta \). Hence

$$\begin{aligned} \frac{d(S_\texttt {i}({\mathbf {a}}),S_\texttt {i}({\mathbf {c}}))+d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))}{d(S_\texttt {i}({\mathbf {a}}), S_\texttt {i}({\mathbf {b}}))}= \frac{\sqrt{\gamma ^2+\eta ^2}+d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))}{\sqrt{\gamma ^2+\varepsilon ^2}}. \end{aligned}$$
(22)

In cases (i) and (iii) we can use (20) to bound \(\eta \lesssim \gamma \), yielding that

$$\begin{aligned} \frac{\sqrt{\gamma ^2+\eta ^2}+d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))}{\sqrt{\gamma ^2+\varepsilon ^2}} \lesssim \frac{\gamma +\varepsilon }{\sqrt{\gamma ^2+\varepsilon ^2}} \lesssim 1, \end{aligned}$$

whereas in case (ii) we can use \(\eta \lesssim \gamma \) to deduce that

$$\begin{aligned} \frac{\sqrt{\gamma ^2+\eta ^2}+d(S_\texttt {i}({\mathbf {c}}),S_\texttt {i}({\mathbf {b}}))}{\sqrt{\gamma ^2+\varepsilon ^2}} \lesssim \frac{\gamma }{\gamma } = 1. \end{aligned}$$

This completes the proof of (21).

Now, notice that by the mean value theorem there exists \({\mathbf {c}}_1 \in [0,1]^2\) such that

$$\begin{aligned} d(S_\texttt {i}({\mathbf {a}}), S_\texttt {i}({\mathbf {c}}))^2= & {} f_{\texttt {i},x}({\mathbf {c}}_1)^2|a_1-b_1|^2+g_{\texttt {i},x}({\mathbf {c}}_1)^2|a_1-b_1|^2\\\ge & {} d({\mathbf {a}},{\mathbf {c}})^2 f_{\texttt {i},x}({\mathbf {c}}_1)^2 \ge d({\mathbf {a}},{\mathbf {c}})^2 \sup _{{\mathbf {c}}_2 \in [0,1]^2} g_{\texttt {i},y}({\mathbf {c}}_2)^2 \end{aligned}$$

by the domination condition (1), applied to each map in the composition via the chain rule. Similarly one can check that

$$\begin{aligned} d(S_\texttt {i}({\mathbf {b}}), S_\texttt {i}({\mathbf {c}}))^2 \ge d({\mathbf {b}},{\mathbf {c}})^2\inf _{{\mathbf {c}}_2 \in [0,1]^2}g_{\texttt {i},y}({\mathbf {c}}_2)^2. \end{aligned}$$

Therefore

$$\begin{aligned} d(S_\texttt {i}({\mathbf {a}}),S_\texttt {i}({\mathbf {b}}))\gtrsim & {} d(S_\texttt {i}({\mathbf {a}}), S_\texttt {i}({\mathbf {c}}))+d(S_\texttt {i}({\mathbf {c}}), S_\texttt {i}({\mathbf {b}})) \nonumber \\\gtrsim & {} (d({\mathbf {a}},{\mathbf {c}})+d({\mathbf {b}},{\mathbf {c}}))\inf _{{\mathbf {c}}_2 \in [0,1]^2}|g_{\texttt {i},y}({\mathbf {c}}_2)| \nonumber \\\ge & {} d({\mathbf {a}},{\mathbf {b}})\inf _{{\mathbf {c}}_2 \in [0,1]^2}|g_{\texttt {i},y}({\mathbf {c}}_2)|\nonumber \\\gtrsim & {} d({\mathbf {a}},{\mathbf {b}})\sup _{{\mathbf {c}}_3 \in [0,1]^2}\alpha _2(D_{{\mathbf {c}}_3} S_\texttt {i}) \end{aligned}$$
(23)

where the first inequality follows by (21) and the final one by Lemmas 3.2 and 3.3.

Finally, note that by the RSSC, there exists \(\delta >0\) such that

$$\begin{aligned} \min _{i \ne j \in {\mathcal {I}}} \min _{x \in S_i([0,1]^2)}\min _{y \in S_j([0,1]^2)} d(x,y) \ge \delta . \end{aligned}$$
(24)

Let \(\texttt {i}=(i_1,i_2, \ldots ), \texttt {l}=(l_1,l_2, \ldots ) \in \Sigma \) with \(\texttt {i}|n \ne \texttt {l}|n\). In particular there exists \(0 \le m \le n-1\) such that \(\texttt {i}|m=\texttt {l}|m\) and \(i_{m+1} \ne l_{m+1}\). We write \(\texttt {i}|n=\texttt {i}|m\texttt {j}\) and \(\texttt {l}|n=\texttt {i}|m \texttt {k}\). Then for all \({\mathbf {a}}, {\mathbf {b}}\in [0,1]^2\),

$$\begin{aligned} d(S_{\texttt {i}|n}({\mathbf {a}}), S_{\texttt {l}|n}({\mathbf {b}}))= & {} d(S_{\texttt {i}|m}(S_{\texttt {j}}({\mathbf {a}})), S_{\texttt {i}|m}(S_\texttt {k}({\mathbf {b}}))) \\\gtrsim & {} d(S_\texttt {j}({\mathbf {a}}),S_\texttt {k}({\mathbf {b}}))\sup _{{\mathbf {c}}_3 \in [0,1]^2}\alpha _2(D_{{\mathbf {c}}_3} S_{\texttt {i}|m}) \\\gtrsim & {} \alpha _2(D_{\Pi (\sigma ^{n-1}\texttt {i})} S_{\texttt {i}|n-1}) \end{aligned}$$

where the second inequality follows by (23) and the final inequality follows by (24) (since \(\texttt {j}\) and \(\texttt {k}\) begin with different digits) and Lemma 3.4. \(\square \)

We are now in a position to prove Theorem 2.5, our main result. We do so by establishing the corresponding lower and upper bounds for the local dimension of \(\mu \) at \(\Pi (\texttt {i})\) for \(\texttt {i}\in \Sigma \) belonging to a set of full m-measure. It is worth noting that only the lower bound requires Lemma 4.4, and as such this is the only bound that requires the RSSC.

Proof of Theorem 2.5

Let \(\texttt {i}\in \Sigma \) belong to the set of full m-measure for which (3), (4) and (5) hold. Let

$$\begin{aligned} \eta :=\sup _{i\in {\mathcal {I}}, {\mathbf {a}}, {\mathbf {b}}\in [0, 1]^2}\left\{ \frac{|g_{i,y}({\mathbf {a}})|}{|f_{i,x}({\mathbf {b}})|}\right\} \end{aligned}$$

and note that by the domination condition from Definition 2.1 we have \(\eta <1\). Using Lemma 3.2, applying the chain rule to \(g_{\texttt {i}|n,y}(\Pi (\sigma ^n \texttt {i}))\) and \(f_{\texttt {i}|n,x}({\mathbf {b}})\) for each \(n\in {\mathbb {N}}\), and pairing off appropriate terms, we get

$$\begin{aligned} \frac{\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\le \frac{M^2|g_{\texttt {i}|n,y}(\Pi (\sigma ^n \texttt {i}))|}{\min _{{\mathbf {b}}\in [0,1]^2} |f_{\texttt {i}|n,x}({\mathbf {b}})|}\le M^2\eta ^n \rightarrow 0. \end{aligned}$$

Similarly,

$$\begin{aligned} \frac{\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\le M^2\eta ^n\rightarrow 0. \end{aligned}$$

Define the sequences

$$\begin{aligned} r_n(\texttt {i})=\frac{M \theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\; \text { and } \; r_n'(\texttt {i})=\frac{M^{-1}\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }, \end{aligned}$$
(25)

and observe that both \(r_n(\texttt {i})\) and \(r_n'(\texttt {i})\) converge to 0 uniformly over all \(\texttt {i}\in \Sigma \). Hence we can also assume that \(\texttt {i}\in \Sigma \) belongs to the set of full measure which satisfies (17) and (18) for the sequences \(r_n(\texttt {i})\) and \(r_n'(\texttt {i})\).

For a given \({\mathbf {a}}=(a_1,a_2)\in [0,1]^2\) and \(r>0\) write

$$\begin{aligned} Q_2({\mathbf {a}}, r)=\left( a_1-\frac{r}{2},a_1+\frac{r}{2}\right) \times \left( a_2-\frac{r}{2}, a_2+\frac{r}{2}\right) . \end{aligned}$$

Write \(\mathbf{x }=\Pi (\texttt {i})\), let \(n\in {\mathbb {N}}\) and consider the square \(Q_2\left( \mathbf{x }, \theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \). By Lemma 4.4, this square intersects only one level n cylinder, namely \(S_{\texttt {i}|n}([0, 1]^2)\), therefore it is easy to see that

$$\begin{aligned} Q_2\left( \mathbf{x }, \theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \cap F\subseteq B_n(\mathbf{x }, \theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) ). \end{aligned}$$

Hence by Lemma 4.1,

$$\begin{aligned}&\mu \left( Q_2\left( \mathbf{x }, \theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \right) \nonumber \\&\quad \le Lm([\texttt {i}|n]) \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^n \texttt {i})),\frac{M\theta \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) . \end{aligned}$$
(26)

Consider the subsequence \((n_k)_{k \in {\mathbb {N}}}\) guaranteed by applying Lemma 4.3 to the sequence \(r_n(\texttt {i})\). Applying the chain rule we have

$$\begin{aligned} \alpha _2\left( D_{\Pi (\sigma ^{n+1} \texttt {i})}S_{\texttt {i}|n+1}\right)&=\alpha _2\left( D_{S_{i_{n+1}}\Pi (\sigma ^{n+1} \texttt {i})}S_{\texttt {i}|n}D_{\Pi (\sigma ^{n+1} \texttt {i})}S_{i_{n+1}}\right) \\&\le \alpha _2\left( D_{S_{i_{n+1}}\Pi (\sigma ^{n+1} \texttt {i})}S_{\texttt {i}|n}\right) \alpha _1\left( D_{\Pi (\sigma ^{n+1} \texttt {i})}S_{i_{n+1}}\right) \\&<\alpha _2\left( D_{S_{i_{n+1}}\Pi (\sigma ^{n+1} \texttt {i})}S_{\texttt {i}|n}\right) \\&= \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) , \end{aligned}$$

where in the first inequality we have used that \(\alpha _2(AB) \le \alpha _2(A)\alpha _1(B)\) for \(2 \times 2\) matrices A and B. This implies that the null subsequence \(\left( \theta \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) _{k\in {\mathbb {N}}}\) is strictly decreasing. Hence for any \(r>0\) sufficiently small we can choose \(k\in {\mathbb {N}}\) sufficiently large so that

$$\begin{aligned} \theta \alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|n_{k+1}}\right) \le r< \theta \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) . \end{aligned}$$
(27)

Let \(t=\dim \pi (\mu )\) and \(\varepsilon >0\). Let \(r>0\) be sufficiently small that the \(k \in {\mathbb {N}}\) satisfying (27) is large enough for (17) to hold for \(\varepsilon \). Then, using (10), (26) and (17), we get

$$\begin{aligned}&\frac{\log \mu \left( Q_2(\mathbf{x }, r)\right) }{\log r}\\&\quad \ge \frac{\log \mu \left( Q_2\left( \mathbf{x }, \theta \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) \right) }{\log \left( \theta \alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \right) }\\&\quad \ge \frac{\log Lm([\texttt {i}|n_k]) + \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_k} \texttt {i})), \frac{M\theta \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_k}\right) }\right) \right) }{\log \left( \theta \alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \right) }\\&\quad \ge \frac{\log Lm([\texttt {i}|n_k]) + (t-\varepsilon )\log \left( \frac{M\theta \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_k}\right) }\right) }{\log \left( \theta \alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \right) }\\&\quad =\frac{\frac{1}{n_k}\log L+\frac{1}{n_k}\log m([\texttt {i}|n_k]) +\frac{(t-\varepsilon )}{n_k}\log \left( \frac{\alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) }{\min _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_k}\right) }\right) +\frac{(t-\varepsilon )}{n_k}\log M\theta }{\frac{1}{n_k}\log \theta +\frac{1}{n_{k+1}}\frac{n_{k+1}}{n_k}\log \left( \alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \right) }\\&\quad \rightarrow \frac{h(\mu )+(t-\varepsilon )(\chi _2(\mu )-\chi _1(\mu ))}{\chi _2(\mu )} \end{aligned}$$

as \(r\rightarrow 0\) (so \(k\rightarrow \infty \)). Since \(\varepsilon > 0\) was arbitrary, the lower bound is complete.

We now establish the corresponding upper bound. We begin by estimating

$$\begin{aligned} \sup _{(a_1,a_2),(b_1,b_2) \in B_n({\mathbf {x}}, \alpha _2(D_{\Pi (\sigma ^n(\texttt {i}))} S_{\texttt {i}|n})) } |a_2-b_2| \end{aligned}$$

for each \(n\in {\mathbb {N}}\). For some \(a,b \in [0,1]\) with the property that

$$\begin{aligned} |f_{\texttt {i}|n}(b)-f_{\texttt {i}|n}(a)|\le \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \end{aligned}$$
(28)

we can write

$$\begin{aligned} \sup _{(a_1,a_2),(b_1,b_2) \in B_n({\mathbf {x}}, \alpha _2(D_{\Pi (\sigma ^n(\texttt {i}))} S_{\texttt {i}|n}))} |a_2-b_2|=|g_{\texttt {i}|n}(b, 1)-g_{\texttt {i}|n}(a, 0)|. \end{aligned}$$

Note that

$$\begin{aligned} |g_{\texttt {i}|n}(b, 1)-g_{\texttt {i}|n}(a, 0)|&\le |g_{\texttt {i}|n}(b, 1)-g_{\texttt {i}|n}(a, 1)| + |g_{\texttt {i}|n}(a, 1)-g_{\texttt {i}|n}(a, 0)|\\&\le C|f_{\texttt {i}|n}(b)-f_{\texttt {i}|n}(a)| + |g_{\texttt {i}|n,y}({\mathbf {c}})| \end{aligned}$$

for some \({\mathbf {c}}\in [0,1]^2\) where we have used (20) and the mean value theorem. Thus it follows from (28) and Lemma 3.3 that

$$\begin{aligned} |g_{\texttt {i}|n}(b, 1)-g_{\texttt {i}|n}(a, 0)|&\le C\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) + A\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \\&=(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) . \end{aligned}$$

It is now easy to see that

$$\begin{aligned} B_n(\mathbf{x }, \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) )\cap F\subseteq Q_2\left( \mathbf{x }, 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \cap F. \end{aligned}$$

Thus Lemma 4.1 implies

$$\begin{aligned}&\mu \left( Q_2\left( \mathbf{x }, 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \right) \nonumber \\&\ge L^{-1}m([\texttt {i}|n]) \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n} \texttt {i})), \frac{M^{-1}R_n}{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) , \end{aligned}$$
(29)

where now \(R_n=\min \left\{ \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) , \text {diam}\left( \pi S_{\texttt {i}|n}([0,1]^2)\right) \right\} \). Note that by the mean value theorem

$$\begin{aligned} \text {diam}\left( \pi S_{\texttt {i}|n}([0,1]^2)\right) =|f_{\texttt {i}|n}(1)-f_{\texttt {i}|n}(0)|=|f_{\texttt {i}|n, x}(c)| \end{aligned}$$

for some \(c\in [0, 1]\). Applying the domination condition and Lemma 3.2 gives

$$\begin{aligned} |f_{\texttt {i}|n, x}(c)|\ge |g_{\texttt {i}|n, y}(\Pi (\sigma ^{n} \texttt {i}))|\ge M^{-1} \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) , \end{aligned}$$

so we can conclude that \(R_n\ge M^{-1} \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \). Therefore (29) gives

$$\begin{aligned}&\mu \left( Q_2\left( \mathbf{x }, 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) \right) \nonumber \\&\quad \ge L^{-1}m([\texttt {i}|n]) \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n} \texttt {i})), \frac{M^{-2}\alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n}\right) }\right) \right) , \end{aligned}$$
(30)

Consider the subsequence \((n_k)_{k \in {\mathbb {N}}}\) guaranteed by applying Lemma 4.3 for \(r_n'(\texttt {i})\), which was defined in (25). Since \(\left( \alpha _2\left( D_{\Pi (\sigma ^{n} \texttt {i})}S_{\texttt {i}|n}\right) \right) _{n\in {\mathbb {N}}}\) is strictly decreasing and null, for any \(r>0\) sufficiently small we can choose \(k\in {\mathbb {N}}\) sufficiently large so that

$$\begin{aligned} 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \le r< 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) . \end{aligned}$$
(31)

Let \(\varepsilon >0\). Let \(r>0\) be sufficiently small that the \(k \in {\mathbb {N}}\) satisfying (31) is large enough for (18) to hold for \(\varepsilon \). Then, using (10), (30) and (18), we get

$$\begin{aligned}&\frac{\log \mu \left( Q_2(\mathbf{x }, r)\right) }{\log r}\\&\quad \le \frac{\log \mu \left( Q_2\left( \mathbf{x }, 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) \right) \right) }{\log \left( 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) }\\&\quad \le \frac{\log L^{-1}m([\texttt {i}|_{n_{k+1}}]) + \log \pi (\mu )\left( Q_1\left( \pi (\Pi (\sigma ^{n_{k+1}} \texttt {i})), \frac{M^{-2}\alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_{k+1}}\right) }\right) \right) }{\log \left( 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) }\\&\quad \le \frac{\log L^{-1}m([\texttt {i}|_{n_{k+1}}]) + (t+\varepsilon )\log \left( \frac{M^{-2}\alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_{k+1}}\right) }\right) }{\log \left( 2(A+C)\alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) }\\&\quad =\frac{\frac{1}{n_{k+1}}\log L^{-1}+\frac{1}{n_{k+1}}\log m([\texttt {i}|_{n_{k+1}}]) +\frac{(t+\varepsilon )}{n_{k+1}}\log \left( \frac{\alpha _2\left( D_{\Pi (\sigma ^{n_{k+1}} \texttt {i})}S_{\texttt {i}|_{n_{k+1}}}\right) }{\max _{{\mathbf {b}}\in [0,1]^2} \alpha _1\left( D_{\mathbf{b}}S_{\texttt {i}|n_{k+1}}\right) }\right) +\frac{(t+\varepsilon )}{n_{k+1}}\log M^{-2}}{\frac{1}{n_{k+1}}\log 2(A+C) +\frac{1}{n_k}\frac{n_k}{n_{k+1}}\log \left( \alpha _2\left( D_{\Pi (\sigma ^{n_k} \texttt {i})}S_{\texttt {i}|n_k}\right) \right) }\\&\quad \rightarrow \frac{h(\mu )+(t+\varepsilon )(\chi _2(\mu )-\chi _1(\mu ))}{\chi _2(\mu )} \end{aligned}$$

as \(r\rightarrow 0\) (so \(k\rightarrow \infty \)). Since \(\varepsilon > 0\) was arbitrary, the upper bound follows. \(\square \)