Abstract
Let \((\Sigma , \sigma )\) be a dynamical system, and let \(U\subset \Sigma \). Consider the survivor set \(\Sigma _U\) of points that never enter the subset U. We study the size of this set in the case when \(\Sigma \) is the symbolic space associated to a self-affine set \(\Lambda \), calculating the dimension of the projection of \(\Sigma _U\) as a subset of \(\Lambda \) and finding an asymptotic formula for the dimension in terms of the Käenmäki measure of the hole as the hole shrinks to a point. Our results hold when the set U is a cylinder set in two cases: when the matrices defining \(\Lambda \) are diagonal; and when they are such that the pressure is differentiable at its zero point and the Käenmäki measure is a strong-Gibbs measure.
1 Introduction
The study of dynamical systems with holes begins from the question of [19]: assume you are playing billiards on a table where the trajectories of balls are unstable with respect to the initial conditions, and assume further that a hole big enough for a ball to fall through is cut into the table. What is the asymptotic behaviour of the probability that at time t a generic ball is inside some measurable set on the table, given that it is still on the table at time t? This and related questions have been studied in many dynamical systems; see [3, 5, 6, 7] to name only a few of many.
We will focus on a related problem of studying the set of those points that never enter the hole. To put this in rigorous terms, consider a continuous dynamical system \(T: \Lambda \rightarrow \Lambda \) with a hole, the hole being an open subset \(U\subset \Lambda \). Assume further that there is an ergodic measure \(\mu \) on \((T, \Lambda )\). How large is the survivor set
$$\begin{aligned} \Lambda _U=\left\{ x\in \Lambda : T^n(x)\notin U \text{ for all } n\ge 0\right\} ? \end{aligned}$$
By Poincaré’s recurrence theorem, this set will be of zero \(\mu \)-measure. Assuming that \(\Lambda \) is a space where the notions of box-counting or Hausdorff dimension can be defined, we can continue by asking about the size of the survivor set in terms of its dimension. This set has also been studied in several contexts [18, 21], and in fact it turns out that, for example, the set of badly approximable points in Diophantine approximation can be written in terms of survivor sets under iteration of the Gauss map [13].
The asymptotic speed at which the measure \(\mu \) of the system escapes through the hole U is the escape rate
$$\begin{aligned} r_\mu (U)=-\lim _{n\rightarrow \infty }\frac{1}{n}\log \mu \left( \left\{ x\in \Lambda : T^j(x)\notin U \text{ for } j=0,\ldots ,n-1\right\} \right) \end{aligned}$$
(when the limit exists). In many systems the escape rate can be described in terms of the \(\mu \)-measure of the hole. In particular, the escape rate and the measure of the hole can often also be used to quantify the asymptotic rate of decrease of the dimension deficit; that is, the speed at which the dimension of the system with a hole approaches the dimension of the full system [4, 8, 12, 16].
Recently, some interest has arisen in studying classical dynamical problems on self-affine fractal sets, under the dynamics that naturally arises from the definition of the set via an iterated function system [2, 11, 17] (for definitions, see Sect. 2). This is an interesting example to consider since this dynamical system has an easy symbolic representation in terms of a full shift space, the dynamics of which is generally very well understood. In the presence of a separation condition the shift space is in fact conjugate to the dynamical system on the fractal set. However, in the affine case this dynamical system is not conformal. This means that a lot of the standard methodology cannot be carried through—for example, the natural geometric potential is not in general multiplicative or commutative, and the dimension maximizing measure is not necessarily a Gibbs measure. In this article, as Theorems 4.11 and 2.2, we work out the asymptotic rate of decrease for the dimension deficit, for some classes of self-affine sets. As is to be expected from the historical point of view, the deficit is comparable to the measure of the hole, up to a constant which we quantify explicitly when possible. Our proofs work in the case when the iterated function system consists of diagonal matrices (Theorem 4.11, for a simpler corollary see Theorem 2.1) and in the case when the pressure corresponding to the iterated function system has a derivative at its zero point, and the Käenmäki measure is a strong-Gibbs measure (Theorem 2.2, for definitions see Sect. 2).
2 Problem set-up and notation
Let \(\{A_1,\ldots , A_k\}\) be a finite set of contracting non-singular \(d\times d\) matrices, and let \((v_1,\ldots ,v_k)\in {\mathbb {R}}^{dk}\). Consider \(\{f_1,\ldots , f_k\}\), the iterated function system (IFS) of the affine mappings \(f_i:{\mathbb {R}}^d\rightarrow {\mathbb {R}}^d, f_i(x)=A_i(x)+v_i\) for \(i=1,\ldots , k\). It is a well known fact that there exists a unique non-empty compact subset \(\Lambda \) of \({\mathbb {R}}^d\) such that
$$\begin{aligned} \Lambda =\bigcup _{i=1}^{k}f_i(\Lambda ). \end{aligned}$$
This set has a description in terms of the shift space. Let \(\Sigma \) be the set of one-sided words of symbols \(\left\{ 1,\ldots ,k\right\} \) with infinite length, i.e. \(\Sigma =\left\{ 1,\ldots ,k\right\} ^{{\mathbb {N}}}\), and \(\Sigma _n=\{1,\ldots , k\}^n\). Let us denote the left-shift operator on \(\Sigma \) by \(\sigma \). When applied to a finite word \({\overline{\imath }}\in \Sigma _n\), \(\sigma ({\overline{\imath }})=i_2 \ldots i_n\), the word of shorter length with the first digit deleted. Let the set of words with finite length be \(\Sigma ^*=\bigcup _{n=0}^{\infty }\Sigma _n\) with the convention that the only word of length 0 is the empty word. Denote the length of \({\overline{\imath }}\in \Sigma ^*\) by \(|{\overline{\imath }}|\), and for finite or infinite words \({\overline{\imath }}, {\overline{\jmath }}\), let \({\overline{\imath }}\wedge {\overline{\jmath }}\) denote their joint beginning. If \({\overline{\imath }}\) can be written as \({\overline{\imath }}={\overline{\jmath }}{{\overline{k}}}\) for some finite or infinite word \({{\overline{k}}}\), denote \({\overline{\jmath }}<{\overline{\imath }}\). We define the cylinder sets of \(\Sigma \) in the usual way, that is, by setting \([{\overline{\imath }}]=\left\{ {\overline{\jmath }}\in \Sigma : {\overline{\imath }}<{\overline{\jmath }}\right\} \) for \({\overline{\imath }}\in \Sigma ^*\). For a word \({\overline{\imath }}=(i_1,\ldots ,i_n)\) with finite length let \(f_{{\overline{\imath }}}\) be the composition \(f_{i_1}\circ \cdots \circ f_{i_n}\) and \(A_{{\overline{\imath }}}\) be the product \(A_{i_1}\ldots A_{i_n}\). For \({\overline{\imath }}\in \Sigma ^*\cup \Sigma \), denote by \({\overline{\imath }}|_n\) the first n symbols of \({\overline{\imath }}\), i.e. \({\overline{\imath }}|_n=(i_1,\ldots ,i_n)\). We define \({\overline{\imath }}|_0=\emptyset \), \(A_{\emptyset }={{\mathrm{Id}}}\), the identity matrix, and \(f_{\emptyset }={{\mathrm{Id}}}\), the identity function.
We define a natural projection \(\pi :\Sigma \rightarrow \Lambda \) by
$$\begin{aligned} \pi ({\overline{\imath }})=\lim _{n\rightarrow \infty }f_{{\overline{\imath }}|_n}(0)=\sum _{n=1}^{\infty }A_{{\overline{\imath }}|_{n-1}}v_{i_n}, \end{aligned}$$
and note that \(\Lambda =\bigcup _{{\overline{\imath }}\in \Sigma }\pi ({\overline{\imath }})\).
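As a concrete illustration, the projection \(\pi \) can be approximated by composing finitely many of the maps; for a periodic word the truncations converge geometrically to a fixed point. A minimal sketch, with assumed diagonal linear parts and translations (not taken from the paper):

```python
# Sketch (hypothetical IFS): approximating pi(i) = lim_n f_{i|n}(0)
# for affine maps with diagonal linear parts A_i and translations v_i.
def make_affine(diag, v):
    # x -> A x + v for a diagonal matrix A = diag(diag)
    return lambda x: [a * xi + vi for a, xi, vi in zip(diag, x, v)]

f = {
    1: make_affine([0.3, 0.4], [1.0, 0.0]),   # hypothetical f_1
    2: make_affine([0.4, 0.3], [0.0, 1.0]),   # hypothetical f_2
}

def project(word, depth=60):
    """Evaluate f_{i_1} o ... o f_{i_depth} at the origin, cycling the word."""
    x = [0.0, 0.0]
    for k in reversed(range(depth)):          # innermost map is applied first
        x = f[word[k % len(word)]](x)
    return x

# pi(111...) is the fixed point of f_1: x = 0.3 x + 1 in the first coordinate
p = project((1,))
```

Since \(\Vert A_i\Vert <1\), the truncation error after depth n is of order \(\Vert A_{{\overline{\imath }}|_n}\Vert \), so sixty compositions already give the fixed point to machine precision.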
Denote by \(\sigma _i(A)\) the i-th singular value of a matrix A, i.e. the positive square root of the i-th eigenvalue of \(AA^*\), where \(A^*\) is the transpose of A. We note that \(\sigma _1(A)=\Vert A\Vert \), and \(\sigma _d(A)=\Vert A^{-1}\Vert ^{-1}\), where \(\Vert \cdot \Vert \) is the usual matrix norm induced by the Euclidean norm on \({\mathbb {R}}^d\). Moreover, \(\sigma _1(A)\cdots \sigma _d(A)=|\det A|\). For \(s\ge 0\) define the singular value function \(\varphi ^s\) as follows:
$$\begin{aligned} \varphi ^s(A)=\sigma _1(A)\cdots \sigma _{\lfloor s\rfloor }(A)\,\sigma _{\lceil s\rceil }(A)^{s-\lfloor s\rfloor } \end{aligned}$$
for \(0\le s\le d\), and \(\varphi ^s(A)=|\det A|^{s/d}\) for \(s>d\), where \(\lceil \cdot \rceil \) and \(\lfloor \cdot \rfloor \) are the ceiling and floor functions. Further, for an affine IFS, define the pressure function
$$\begin{aligned} P(s)=\lim _{n\rightarrow \infty }\frac{1}{n}\log \sum _{{\overline{\imath }}\in \Sigma _n}\varphi ^s(A_{{\overline{\imath }}}). \end{aligned}$$
When it is necessary to make the distinction, we will write \(P(s, (A_1,\ldots , A_k))\). Given a \(\sigma \)-invariant measure \(\nu \) on \(\Sigma \), we define the entropy
$$\begin{aligned} h_\nu =-\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{{\overline{\imath }}\in \Sigma _n}\nu ([{\overline{\imath }}])\log \nu ([{\overline{\imath }}]) \end{aligned}$$
and energy
$$\begin{aligned} E_\nu (s)=\lim _{n\rightarrow \infty }\frac{1}{n}\sum _{{\overline{\imath }}\in \Sigma _n}\nu ([{\overline{\imath }}])\log \varphi ^s(A_{{\overline{\imath }}}). \end{aligned}$$
It is always the case that \(P(s)\ge E_\nu (s)+h_\nu \). Further, by a result of Käenmäki [14], for all \(s\ge 0\) equilibrium or Käenmäki measures exist, that is, for all s there is a measure \(\mu =\mu (s)\) on \(\Sigma \) such that
$$\begin{aligned} P(s)=E_\mu (s)+h_\mu . \end{aligned}$$
A classical result of Falconer [9] (see also [20]) asserts that when \(\Vert A_i\Vert <1/2\), for almost all \((v_1,\ldots , v_k)\in {\mathbb {R}}^{dk}\), the dimension of the self-affine set \(\Lambda \) is given by the s for which \(P(s)=0\) (or d if the number s is greater than d), and Käenmäki proves that \(\dim \Lambda =\dim \mu \) for the equilibrium measure at this value of s. Here \(\dim \) denotes the Hausdorff dimension.
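In the diagonal case the singular values of a matrix are simply its absolute diagonal entries in decreasing order, which makes \(\varphi ^s\) directly computable. A sketch with assumed entries (the function name is ours):

```python
# Sketch: the singular value function phi^s for a diagonal matrix; the
# singular values are the absolute diagonal entries sorted decreasingly.
import math

def phi_s(diag, s):
    sv = sorted((abs(a) for a in diag), reverse=True)
    d = len(sv)
    if s >= d:                                  # phi^s(A) = |det A|^{s/d} for s > d
        return math.prod(sv) ** (s / d)
    m = math.floor(s)
    value = math.prod(sv[:m])                   # sigma_1 ... sigma_floor(s)
    if s > m:
        value *= sv[m] ** (s - m)               # times sigma_ceil(s)^(s - floor(s))
    return value

val = phi_s([0.2, 0.5], 1.5)                    # sigma_1 * sigma_2^{1/2}
```

For instance, for \(A={{\mathrm{diag}}}(0.2, 0.5)\) one gets \(\varphi ^{1.5}(A)=0.5\cdot 0.2^{1/2}\).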
We will need the notion of a Bernoulli measure, that is, given a probability vector \((p_1,\ldots , p_k)\) the Bernoulli measure \(\mathbf{p}\) is the probability measure on \(\Sigma \) giving the weight \(p_{{\overline{\imath }}}=p_{i_1}\cdots p_{i_n}\) to the cylinder \([{\overline{\imath }}]\) for \({\overline{\imath }}\in \Sigma _n\). We will also need the notion of an s-semiconformal measure, that is, a measure \(\mu \) for which constants \(0<c\le C<\infty \) exist such that for all \({\overline{\imath }}\in \Sigma ^*\),
$$\begin{aligned} c\,e^{-|{\overline{\imath }}|P(s)}\varphi ^s(A_{{\overline{\imath }}})\le \mu ([{\overline{\imath }}])\le C\,e^{-|{\overline{\imath }}|P(s)}\varphi ^s(A_{{\overline{\imath }}}). \end{aligned}$$
In this terminology we are following [15], where the existence of such measures for an affine iterated function system is investigated. We call an s-semiconformal measure \(\mu \) a strong-Gibbs measure, if it is both s-semiconformal and also a Gibbs measure for some multiplicative potential. This means that there is a potential \(\psi : \Sigma ^*\rightarrow {\mathbb {R}}\) with \(\psi ({\overline{\imath }}{\overline{\jmath }})=\psi ({\overline{\imath }})\psi ({\overline{\jmath }})\) and \(\mu \) satisfies the condition of (2.4) with \(\psi \) in place of \(\varphi ^s\) and P calculated with respect to \(\psi \). Notice that because the singular value function is not multiplicative, \(\mu \) can be semiconformal without being a Gibbs measure, and a Gibbs measure with respect to some multiplicative potential without being a semiconformal measure.
We now define the survivor sets we will be interested in. Fix some \({{\overline{q}}}\in \Sigma _q\) and let \(U=[{{\overline{q}}}]\). In the symbolic space \(\Sigma \) we define the survivor set as
$$\begin{aligned} \Sigma _U=\left\{ {\overline{\imath }}\in \Sigma : \sigma ^n({\overline{\imath }})\notin U \text{ for all } n\ge 0\right\} . \end{aligned}$$
Whenever it is the case that \(f_i(\Lambda )\cap f_j(\Lambda )=\emptyset \) for \(i\ne j\), then it is possible to define a dynamical system \(T:\Lambda \rightarrow \Lambda \) such that for \(x\in f_i(\Lambda )\) we let \(T(x)=f_i^{-1}(x)\). In this case it is also true that the projection map \(\pi \) is a bijection, and the dynamical system \((\Lambda , T)\) is conjugate to the full shift \((\Sigma , \sigma )\), that is, \(\pi \circ \sigma =T\circ \pi \). Hence in this case the survivor sets in the symbolic space \((\Sigma , \sigma )\) and on the fractal \((\Lambda , T)\) correspond to each other, that is,
This is why we define, also in the general situation, the survivor set on \(\Lambda \) to be \(\Lambda _{\pi [{{\overline{q}}}]}=\pi (\Sigma _{[{{\overline{q}}}]})\). In the following we will be interested in the dimension of the set \(\pi (\Sigma _{[{{\overline{q}}}]})\), regardless of whether or not the projection \(\pi \) is bijective and the dynamics T well-defined.
We can now formulate our main theorems concerning the Hausdorff dimension of the survivor set. In the following the point \({{\overline{q}}}\) will be fixed, and it will cause no danger of misunderstanding to denote \(\Lambda _{\pi [{{\overline{q}}}|_q]}=\Lambda _q\), where q is a positive integer. In Sect. 4, as Theorem 4.11, we prove a statement for diagonal matrices. However, the formulation of the theorem in the diagonal case requires technical notation that we want to postpone introducing. For the full statement we refer the reader to Theorem 4.11; here we only give the special case where the diagonal elements are in the same order.
Theorem 2.1
Let \(\Lambda \) be a self-affine set corresponding to an iterated function system \(\{A_1+v_1,\ldots , A_k+v_k\}\) with \(\Vert A_i\Vert <\tfrac{1}{2}\) for all \(i=1,\ldots , k\), and let \({{\overline{q}}}\in \Sigma \). Assume that \(A_i={{\mathrm{diag}}}(a^i_1,\ldots , a^i_d)\) are diagonal for all \(i=1,\ldots , k\), and, furthermore, that the diagonal elements are in the same increasing order \(a^i_1\le \cdots \le a^i_d\) in all of the matrices. Denote by \(\mu \) the Käenmäki measure for the value of s for which \(P(s)=0\). Then, for Lebesgue almost all \((v_1,\ldots , v_k)\in {\mathbb {R}}^{dk}\),
where the explicit constant Q, which only depends on the diagonal elements of the matrices \(A_i\), is defined in Remark 4.13.
Theorem 2.2
Let \(\Lambda \) be a self-affine set corresponding to an iterated function system \(\{A_1+v_1,\ldots , A_k+v_k\}\) with \(\Vert A_i\Vert <\tfrac{1}{2}\) for all \(i=1,\ldots , k\), and let \({{\overline{q}}}\in \Sigma \). Denote by \(\mu \) the Käenmäki measure for the value of s for which \(P(s)=0\); assume also that P is differentiable at this point. Assume that \(\mu \) is a strong-Gibbs measure—in particular, a Gibbs measure for a multiplicative potential \(\psi \). Then, for Lebesgue almost all \((v_1,\ldots , v_k)\in {\mathbb {R}}^{dk}\),
This theorem will be proved in Sect. 5.
Remark 2.3
1. It might be tempting to think that, since the result above holds for diagonal matrices, it would be easy to extend it to the case of upper triangular matrices. The temptation is due to [10, Theorem 2.5], which states that for an iterated function system with upper triangular matrices the pressure only depends on the diagonal elements of the matrices. However, this does not seem straightforward, see Remark 4.14.
2. Notice that in the statements of Theorems 2.1 and 2.2 the normalizing factor in the denominator of the limit plays the same role as the Lyapunov exponent in, for example, [12].
3 The pressure formula for the dimension and other facts
From here on we consider the point \({{\overline{q}}}\in \Sigma \) fixed, and denote \(\Lambda _{\pi [{{\overline{q}}}|_q]}=\Lambda _q\) for a choice of positive integer q. We start by recalling a pressure formula for the dimension of the surviving set.
Denote
which is a \(\sigma \)-invariant set. For \(n\in {\mathbb {N}}\), let
Define the reduced pressure
$$\begin{aligned} P_q(t)=\lim _{n\rightarrow \infty }\frac{1}{n}\log \sum _{{\overline{\imath }}\in \Sigma _{n,q}}\varphi ^t(A_{{\overline{\imath }}}). \end{aligned}$$
This limit exists by submultiplicativity of \(\varphi ^t\).
Theorem 3.1
Let \({{\overline{q}}}\in \Sigma \), \(q \in {\mathbb {N}}\). For an iterated function system \(\{A_1+v_1,\ldots , A_k+v_k\}\) with \(\Vert A_i\Vert <\tfrac{1}{2}\), for Lebesgue almost all \((v_1,\ldots , v_k)\in {\mathbb {R}}^{dk}\),
where \(t_q\) is the unique value for which \(P_q(t_q)=0\).
Proof
This is [15, Theorem 5.2]. \(\square \)
Remark 3.2
Notice that as \(q\rightarrow \infty \), the reduced pressure approaches the full pressure, and hence the dimension of the surviving set \(\Lambda _{q}\) approaches the dimension of \(\Lambda \).
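This convergence can be sketched numerically in the simplest multiplicative setting: a hypothetical one-dimensional (so trivially "diagonal") system with two contraction ratios and the hole \([1^q]\). There the reduced pressure is the logarithm of the spectral radius of a transfer matrix indexed by the current run length of the symbol 1, and its zero \(t_q\) increases towards \(s_0\); all numbers below are assumptions for illustration.

```python
# Sketch (assumed 1-D system): P_q(t) = log of the spectral radius of the
# transfer matrix for words avoiding the block 1^q, weighted by r_i^t.
import math

r = (0.3, 0.2)                                 # hypothetical contraction ratios

def spectral_radius(t, q, iters=500):
    # states j = 0..q-1: length of the current run of the symbol 1
    v = [1.0] * q
    rho = 1.0
    for _ in range(iters):                     # power iteration, max-norm
        w = [0.0] * q
        for j in range(q):
            w[0] += r[1] ** t * v[j]           # emit 2: the run of 1s resets
            if j + 1 < q:
                w[j + 1] += r[0] ** t * v[j]   # emit 1: run grows, 1^q forbidden
        rho = max(w)
        v = [x / rho for x in w]
    return rho

def zero_of(F, lo=0.0, hi=1.0, steps=60):
    for _ in range(steps):                     # bisection; F is decreasing
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

s0 = zero_of(lambda s: r[0] ** s + r[1] ** s - 1)              # zero of P
t_q = {q: zero_of(lambda t: spectral_radius(t, q) - 1) for q in (2, 4, 6, 8)}
```

The computed zeroes \(t_2<t_4<t_6<t_8\) stay below \(s_0\), and the deficit \(s_0-t_q\) shrinks at the exponential speed of the measure of the hole, in line with the results of this paper.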
Remark 3.3
The set \(\Sigma _{n,q}\) can be written in an equivalent form
since any point that does not enter the hole in the first n iterations can be completed to a word that never enters the hole.
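Remark 3.3 can be made concrete by brute force: with two symbols and the hypothetical hole \([(1,1)]\), the words of \(\Sigma _{n,q}\) are exactly those containing no block 11, and their number satisfies a Fibonacci recursion. A sketch (our notation):

```python
# Sketch: enumerating surviving words -- words of length n none of whose
# shifts enter the hypothetical hole [ (1,1) ] -- by extending words one
# symbol at a time and rejecting any new occurrence of the forbidden block.
def surviving_words(n, alphabet=(1, 2), hole=(1, 1)):
    q = len(hole)
    words = [()]
    for _ in range(n):
        words = [w + (a,) for w in words for a in alphabet
                 if (w + (a,))[-q:] != hole]   # only the new suffix can offend
    return words

counts = [len(surviving_words(n)) for n in range(1, 8)]   # 2, 3, 5, 8, 13, 21, 34
```

The counts grow like \(\phi ^n\) with \(\phi \) the golden ratio, so the reduced pressure of this toy hole is finite and strictly below the full pressure.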
The following facts about the Käenmäki measure are standard.
Lemma 3.4
Consider the Käenmäki measure \(\mu \) at the value \(s_0\), where \(s_0\) is the root of P.
(a) When there is some A such that \(A_i=A\) for all \(i=1,\ldots , k\), then \(\mu \) is the Bernoulli measure with equal weights.
(b) When the \(A_i\) are diagonal matrices with the diagonal elements in the same order by size, then \(\mu \) is a Bernoulli measure with cylinder weights \(\varphi ^{s_0}(A_1),\ldots , \varphi ^{s_0}(A_k)\).
Proof
(a) Immediate.
(b) In the diagonal case the singular value function is multiplicative. Hence the zero of the pressure is obtained at the point where \(\sum _{i=1}^k\varphi ^s(A_i)=1\), so that the \(\varphi ^{s_0}(A_i)\) define a probability vector. The Käenmäki measure is a Bernoulli measure with these weights.
\(\square \)
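The computation in part (b) can be sketched numerically: by multiplicativity, \(P(s)=\log \sum _i\varphi ^s(A_i)\), so \(s_0\) solves \(\sum _i\varphi ^s(A_i)=1\) and the resulting values form the Bernoulli weights. The matrices below are assumed purely for illustration.

```python
# Sketch: zero of the pressure and the resulting Bernoulli weights for
# hypothetical diagonal matrices with commonly ordered diagonal entries.
import math

def phi_s(diag, s):
    # singular value function of a diagonal matrix: sorted absolute entries
    sv = sorted((abs(a) for a in diag), reverse=True)
    m = min(math.floor(s), len(sv))
    value = math.prod(sv[:m])
    if m < len(sv) and s > m:
        value *= sv[m] ** (s - m)
    return value

mats = [[0.4, 0.2], [0.25, 0.1], [0.25, 0.1]]    # hypothetical A_1, A_2, A_3

def pressure_zero(mats, lo=0.0, hi=None):
    hi = len(mats[0]) if hi is None else hi
    F = lambda s: sum(phi_s(A, s) for A in mats) - 1.0   # decreasing in s
    for _ in range(200):                          # bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

s0 = pressure_zero(mats)
weights = [phi_s(A, s0) for A in mats]            # probability vector of (b)
```

By construction the weights sum to 1, which is exactly the statement that \(\varphi ^{s_0}(A_1),\ldots ,\varphi ^{s_0}(A_k)\) define a Bernoulli measure.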
Define the escape rate of a measure \(\nu \) on \(\Sigma \) as
$$\begin{aligned} r_\nu (U)=-\lim _{n\rightarrow \infty }\frac{1}{n}\log \nu \left( \left\{ {\overline{\imath }}\in \Sigma : \sigma ^j({\overline{\imath }})\notin U \text{ for } j=0,\ldots , n-1\right\} \right) \end{aligned}$$
when the limit exists. We quote the following special case of Ferguson and Pollicott [12]. In the theorem we make a reference to \(P(\psi )\), the pressure corresponding to a potential \(\psi \). This is defined analogously to P(s) in (2.3), but with \(\psi \) in place of \(\varphi ^s\). We note that we will, in fact, only apply Theorem 3.5 when \(P(\psi )=P(s)\).
Theorem 3.5
Let \({{\overline{q}}}\in \Sigma \) and let \(U_q=[{{\overline{q}}}|_q]\). Consider a multiplicative potential \(\psi \) for which \(P(\psi )=0\). For a Gibbs measure \(\mu \) on \(\Sigma \), the escape rate \(r_\mu (U_q)\) always exists and
Proof
See [12, Proposition 5.2 and Theorem 1.1] or see [16, Theorem 2.1]. \(\square \)
Notice that in order for us to apply this theorem in our set-up it is essential that the measure \(\mu \) is also s-semiconformal.
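For a Gibbs (here even Bernoulli) measure the escape rate is directly computable in a toy case: with the hypothetical hole \([(1,1)]\) and the \((1/2,1/2)\) Bernoulli measure on two symbols, the surviving mass follows a Fibonacci recursion, so \(r_\mu (U)=-\log (\phi /2)\) with \(\phi \) the golden ratio. A sketch under these assumptions:

```python
# Sketch (toy case): escape rate of the Bernoulli(1/2, 1/2) measure through
# the hypothetical hole [ (1,1) ].  The number c(n) of length-n words with
# no block 11 satisfies c(n) = c(n-1) + c(n-2); surviving mass is c(n)/2^n.
import math

def survivor_mass(n):
    a, b = 1, 2                  # c(0) = 1 (empty word), c(1) = 2
    for _ in range(n - 1):
        a, b = b, a + b          # append 2 freely; append 1 only after a 2
    return b / 2 ** n

phi = (1 + math.sqrt(5)) / 2
# r = -lim (1/n) log(mass): estimate from the ratio of consecutive masses
rate = -math.log(survivor_mass(41) / survivor_mass(40))
```

The estimate agrees with \(-\log (\phi /2)\approx 0.212\) to many digits, since the ratio \(c(n+1)/c(n)\) converges to \(\phi \) exponentially fast.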
Lemma 3.6
Let \({{\overline{q}}}\in \Sigma \) and let \(U_q=[{{\overline{q}}}|_q]\). Let s be where the pressure \(P(s)=0\). Let \(\mu \) be the Käenmäki measure at this value s, and assume that it is a strong-Gibbs measure, in particular, Gibbs for some multiplicative potential \(\psi \). Then
Proof
We have, using the s-semiconformal property
The proof is now finished by Theorem 3.5. \(\square \)
4 Diagonal matrices
Let us start from a more detailed description of the singular value pressure in the diagonal case. Let \(D=(e_1, \ldots , e_d)\in S_d\) be a permutation of \(\{1,\ldots ,d\}\). For a diagonal matrix \(A={{\mathrm{diag}}}(a_j)\) denote
Naturally,
Hence, we define the D-pressure analogously to the singular value pressure
and the reduced D-pressure analogously to the reduced pressure
These limits exist by multiplicativity of \(\varphi ^s_D\). Then
In particular, denoting by \(t_q^D\) the zero of \(P_{D,q}\), we have
whenever the assumptions of Theorem 3.1 are satisfied.
Thus, in order to find the zero of \(P_q\) it will be enough for us to find the zeroes \(t_q^D\) for all choices of D. This task is significantly simplified by the fact that, contrary to \(\varphi ^s\), \(\varphi _D^s\) is a multiplicative potential. Moreover, to prove Theorem 2.1 we do not need to check all possible D: as \(P_{D,q}\rightarrow P_D\) when \(q\rightarrow \infty \), it is enough for us to only consider those D for which \(P_D(s_0)=P(s_0)=0\).
Let us start by denoting by \(\mu _D\) the Bernoulli measure with the probability vector \((p_1^D,\ldots , p_k^D)=(\varphi _D^{s_0}(A_1), \ldots , \varphi _D^{s_0}(A_k))\). Because \(\varphi _D^{s_0}\) is multiplicative, as in Lemma 3.4 we see that this really is a probability vector. Observe that even though we only consider D for which \(P_D(s_0)=0\), this measure can still in general depend on D.
Recall Lemma 3.6, and notice that by the multiplicativity of the potential \(\varphi ^s_D\), the proof of Lemma 3.6 goes through unaltered for \(\mu _D\), the D-pressure and reduced D-pressure. Furthermore, \(\mu _D\) is a Gibbs measure for the potential \(\varphi ^s_D\).
The idea of the proof of Theorem 2.1 is as follows. We fix some D for which \(P_D(s_0)=0\), and then we bound \(s_0-t_q^D\) from above and below by bounds whose difference approaches 0 faster than \(-P_{D,q}(s_0)\) as \(q\rightarrow \infty \). This will let us estimate the limit of \((s_0-t_q^D)/\mu _D([{{\overline{q}}}|_q])\). To simplify the notation, we will skip the index D in the rest of this section; but the reader should remember that the potential \(\varphi ^s\) we work with is not the singular value function but an auxiliary multiplicative potential, which is only equal to the singular value function in the case when the diagonal elements \((a_1^i,\ldots ,a_d^i)\) are in the same order for all i.
We need some notation. Denote by \(\Delta \) the simplex of length k probability vectors. Given a finite word \({\overline{\imath }}\in \Sigma _n\), let
and for an infinite word \({\overline{\imath }}\in \Sigma \),
if the limit exists. Fix \(\varepsilon >0\). Let \(E=E(\varepsilon )\) be \(\varepsilon \)-dense in \(\Delta \), chosen so that the number of its elements is \(\#E=\varepsilon ^{1-k}=:N\). Fix \(\alpha \in \Delta \), and denote
where \(\alpha (i)\) is the i-th coordinate of \(\alpha \) and the same for \({{\mathrm{freq}}}({\overline{\imath }})\). Assume, without loss of generality, that E was chosen in such a way that every point in \(\Sigma _n\) belongs to at most some \(K_d\) of the sets \(F_n(\alpha )\), where \(\alpha \in E(\varepsilon )\), with the constant \(K_d\) not depending on n.
Further, given some \(\alpha \in \Delta \), denote
This is a kind of a dummy matrix simulating the frequency \(\alpha \). Finally, let \(o(\varepsilon )\) be a function that approaches 0 as \(\varepsilon \rightarrow 0\), and o(n) a function that approaches 0 as \(n\rightarrow \infty \).
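The size of a frequency class \(F_n(\alpha )\) grows at the exponential rate given by the entropy \(-\sum _i\alpha (i)\log \alpha (i)\); for two symbols it is a binomial coefficient, as the sketch below (our notation, with assumed numbers) checks.

```python
# Sketch: for two symbols, the number of length-n words with exactly k
# occurrences of symbol 1 is C(n, k), and (1/n) log C(n, k) approaches the
# entropy of the frequency vector (k/n, 1 - k/n).
import math

def entropy(p):
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

n, k = 600, 180
count = math.comb(n, k)          # words in the frequency class alpha = (0.3, 0.7)
rate = math.log(count) / n       # close to entropy(0.3)
```

The discrepancy is of order \(\log n / n\) (Stirling's formula), which is exactly the kind of \(o(n)\) error absorbed in the notation above.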
Lemma 4.1
At a given scale we can approximate P(s) by sequences of only one frequency; that is, given \(\varepsilon >0\) and \(n>0\), there is \(\alpha \in E(\varepsilon )\) such that the numbers
are all \(o(\varepsilon , n)\)-close to each other. The same statement holds when we restrict all these sums to \(\Sigma _{n,q}\).
Proof
Fix \(\varepsilon >0\) and \(n>0\). Notice that, when \(|\alpha -{{\mathrm{freq}}}({\overline{\imath }})|< \varepsilon \) for \({\overline{\imath }}\in \Sigma _n\), then
for constants \(c_1, c_2>0\) that do not depend on n and \(\varepsilon \). Furthermore, for all \(\alpha \in E\)
As E is a finite set, there exists \(\alpha \) for which
and we are done. The proof for sums restricted to \(\Sigma _{n,q}\) instead of \(\Sigma _n\) is exactly the same. \(\square \)
Fix \(\varepsilon >0\) and \(n>0\). Define \({\tilde{g}}^s, g^s_q:\Delta \rightarrow {\mathbb {R}}\) by setting for all \(\alpha \in \Delta \)
where \(F_n^q(\alpha )\subset \Sigma _{n,q}\) is defined analogously to \(F_n(\alpha )\). Further, for \(\alpha \in \Delta \), denote
for \(f(\alpha ) = - \sum _{i=1}^k \alpha (i) \log \alpha (i)\) and
By virtue of (4.1), for n large,
Given \(s\ge 0\), denote by \(\alpha ^s\) the point of \(\Delta \) where \(g^s\) achieves maximum, and by \(\alpha _q^s\) the point (or one of the points, if it is not unique) of \(\Delta \) where \(g^s_q\) achieves maximum. Observe that those are (almost exactly) the maximizing frequencies given by Lemma 4.1. Indeed, for the latter this is obvious, while for the former we have \(\#F_n(\alpha )=\exp (n(-\sum _i\alpha _i\log \alpha _i)+o(\varepsilon ))\), hence maximizing \(g^s\) means (almost) maximizing the sum \(\sum _{{\overline{\imath }}\in F_n(\alpha )} \varphi ^s(A_{\overline{\imath }})\).
Lemma 4.2
For any s, t, there exists \(w=w_s>0\), depending only on s, such that
and
Proof
Note that as a function of \(\alpha \), the function \(g^s: \Delta \rightarrow {\mathbb {R}}\) is strictly concave for every s, so that there exists a number \(w=w_s>0\) such that for the second differential in direction e,
That means that
Next fix t and s and notice that
If the first claim does not hold, that is, \(|\alpha ^t - \alpha ^s|> |a(s) - a(t)|/(2w)\), we obtain from the above
which contradicts the maximality of \(\alpha ^t\). The second claim is immediate from here. \(\square \)
Lemma 4.3
There is a constant L such that for all \(s, t\ge 0\),
Proof
By the definition of \(g^s\) and compactness of \(\Delta \), there is an L such that
Furthermore, by maximality of \(\alpha ^t\),
\(\square \)
The functions \(g^s\) and \(g^s_q\) are good approximations to P(s) and \(P_q(s)\), as demonstrated by the next lemma.
Lemma 4.4
We have
Proof
The second part of the assertion follows from
This calculation also applies to \({{\tilde{g}}}^s\), and by (4.2) \(g^s\) can be approximated \(o(\varepsilon , n)\)-closely by \({{\tilde{g}}}^s\). \(\square \)
Lemma 4.5
Let \(s_0\) satisfy \(P(s_0)=0\). The distance between the frequencies maximizing \(g^{s_0}\) and \(g^{s_0}_q\) is controlled by \(P_q(s_0)\). That is,
Proof
Notice that by Lemma 4.4 and (4.3)
Solve for \(|\alpha _q^{s_0} - \alpha ^{s_0}|\) and recall that
(because the sum in the definition of \(g^s_q\) is over the smaller set \(F_n^q\)) to arrive at the conclusion.
\(\square \)
For the rest of the section, fix \(s_0\) to satisfy \(P(s_0)=0\) and define \({{\tilde{t}}}={\tilde{t}}_q\) through
Remark 4.6
Notice that
Furthermore, from the definition of \({\tilde{t}}\),
In order to prove Theorem 2.1 we need to compare \(s_0\) and \(t_q\). By the above remark, it in fact suffices to compare \({\tilde{t}}\) and \(t_q\). The next lemma gives us a tool to do that.
Lemma 4.7
There are constants \(0<c\le C<\infty \) such that
Proof
It is standard to check that there are \(0<b\le B<\infty \) such that for all \(\varepsilon >0\), n,
It follows that there are \(0<c\le C<\infty \) such that for all t between \({\tilde{t}}\) and \(t_q\), the absolute value of the left and right derivatives of \(P_q\) at t are all bounded from below by c and above by C. The left and right derivatives exist at all points by convexity of \(P_q\). Hence, recalling \(P_q(t_q)=0\), the claim follows. \(\square \)
In the remainder of the section, instead of writing down explicit constants, we will use the notation \(O(-P_q(s_0))\) to mean a function of the form \(C(-P_q(s_0))\) where the constant \(C>0\) can be chosen to be independent of q, n and \(\varepsilon \).
Proposition 4.8
The quantity \(t_q - {\tilde{t}}\) has a lower bound in terms of \(P_q(s_0)\), namely
Proof
Notice that by Lemma 4.4, the definition of \({\tilde{t}}\), Remark 4.6 and Lemma 4.5
By Lemma 4.5 and Remark 4.6 this yields
Finally, apply Lemma 4.7 and let \(\varepsilon \rightarrow 0\) and \(n\rightarrow \infty \). \(\square \)
Lemma 4.9
The distance between \(\alpha _q^{s_0}\) and \(\alpha ^{{\tilde{t}}}_q\) is controlled by \(P_q(s_0)\), namely
Proof
Notice first that by Lemma 4.2,
where \(w=w_{s_0}\). Using Lemma 4.3 and Remark 4.6
We now obtain from (4.3) and the definition of \(g^s_q\)
These calculations combined amount to
Finally, through Lemma 4.5, (4.4) and (4.5),
\(\square \)
Proposition 4.10
The quantity \(t_q - {\tilde{t}}\) has an upper bound in terms of \(P_q(s_0)\), namely
Proof
Using Lemma 4.4 and the definition of \({\tilde{t}}\), Remark 4.6 and Lemmas 4.9 and 4.5
Finally, apply Lemma 4.7 and let \(\varepsilon \rightarrow 0\) and \(n\rightarrow \infty \). \(\square \)
We are now ready to formulate the main theorem in the diagonal case. Recall the notation introduced at the beginning of the section. Denote
Theorem 4.11
Let \(\Lambda \) be a self-affine set corresponding to an iterated function system \(\{A_1+v_1,\ldots , A_k+v_k\}\) with \(\Vert A_i\Vert <\tfrac{1}{2}\) for all \(i=1,\ldots , k\), and let \({{\overline{q}}}\in \Sigma \). Assume that all the matrices \(A_i\) are diagonal. Then, for Lebesgue almost all \((v_1,\ldots , v_k)\in {\mathbb {R}}^{dk}\),
Moreover, if \({{\overline{q}}}\) is not periodic then
while if \({{\overline{q}}}\) has period \(\ell \) then
Proof
For a fixed D the limit
exists: the value comes from Lemma 3.6, the upper bound from Proposition 4.8 and Remark 4.6, and the lower bound is obtained analogously, using Proposition 4.10. To obtain the theorem we let q tend to \(\infty \) and for each q use the D for which \(t_q^D\) is maximal. \(\square \)
Remark 4.12
Observe that in the general situation we cannot write the usual formula ‘the dimension deficit divided by the measure of the hole converges to the left derivative of the pressure’. The reason is the following: consider an iterated function system as in [15, Example 6.2] with linear parts, say,
Then \(s_0=1/2\) and the collection of permutations which satisfy \(P_D(s_0)=0\) consists of two elements, and the corresponding Käenmäki measures are the Bernoulli measures with weights \((1/3, 2/3)\) and \((2/3, 1/3)\), respectively. Now choose a very rapidly increasing sequence of natural numbers \((m_j)\) and set \({{\overline{q}}}=(1^{m_1}2^{m_2}1^{m_3}\ldots )\). Then the limits in (4.7) and (4.8) do not exist for either fixed D.
Remark 4.13
However, the following shows that sometimes we can: consider the case that the \(A_i\) are diagonal for all \(i=1,\ldots , k\), and, furthermore, the diagonal elements are in the same order in all of the matrices. Then the Käenmäki measure \(\mu \) for the value of s for which \(P(s)=0\) is a Bernoulli measure with weights \((\varphi ^s(A_1),\ldots , \varphi ^s(A_k))\) (by Lemma 3.4), and one can check that in Theorem 4.11, \(\mu \) is the maximizing measure (or one of them, if there are many). Hence we obtain the statement of Theorem 2.1.
Remark 4.14
Fix some \(\beta<\alpha <1/2\), and let \(\gamma <\alpha , \beta \). Consider the iterated function system which has as the linear parts of the mappings
Then for \(s<1\), \(\varphi ^s(A^nB^n)\) grows like \(\alpha ^{2ns}\), whereas \(\varphi ^s(B^nA^n)\) grows like \(\alpha ^{ns}\beta ^{ns}\) so that there is an exponential gap between the values, due to the off-diagonal element. Our proof of Theorem 4.11 depends on the exact connection between the singular value function and the Bernoulli measures given by the diagonal elements. Hence, despite the fact that according to [10, Theorem 2.6] the pressure only depends on the diagonal elements of A and B, our proof does not easily extend to the upper triangular case.
5 The case of strong-Gibbs measures (Theorem 2.2)
In this section, recall the assumptions that \(s_0\) is such that \(P(s_0)=0\), that the Käenmäki measure \(\mu \) at \(s_0\) is a strong-Gibbs measure, and given q, denote by \(t_q\) the value where \(P_q(t_q)=0\). Furthermore, we assume that the derivative \(P'(s_0)\) exists. We do not assume that \(P_q\) is differentiable, but since it is convex we know that left and right derivatives exist at all points. We know that \(P_q\) is a convex function not larger than P, which is also convex.
Let us begin with a simple lemma. Here by \(f'(x-0)\) and \(f'(x+0)\) we denote the left and right derivatives of f at x.
Lemma 5.1
Let P be a convex function. Let \(P_q\) be a sequence of convex functions such that \(P_q \le P\) but \(\lim _q P_q(s_0) = P(s_0)\). Then
Proof
It is enough to prove the first inequality: the second is immediate from convexity, and the third can be proved analogously to the first. Assume to the contrary that there exist \(\varepsilon >0\) and a subsequence of convex functions \(P_q\le P\) with \(P_q(s_0)\rightarrow P(s_0)\) such that
As \(P_q\) is convex, \(P_q'(s-0) \le P_q'(s_0-0)\) for all \(s<s_0\). On the other hand,
hence there exists \(\delta >0\) depending only on P such that
for all \(s> s_0-\delta \). Hence, decreasing \(\delta >0\) further if necessary
Therefore, choosing q so large that \(P_q(s_0) > P(s_0) - \delta \varepsilon /4\), we obtain \(P_q(s_0-\delta ) > P(s_0-\delta )\), which is a contradiction. \(\square \)
Relying on Lemma 3.6, we wish to understand \(s_0 - t_q\) in terms of \(P_q(s_0)\). That is the content of the following lemma.
Lemma 5.2
Let P be a convex function. Let \(P_q\) be a sequence of convex functions such that \(P_q \le P\), \(\lim _q P_q(s_0) = P(s_0)=0\), and \(\lim _q P_q'(s_0-0) = P'(s_0-0)\). Then
Proof
We have
As \(P_q'(s-0) \le P_q'(s_0-0)\) for all \(s<s_0\), the upper bound follows immediately. For the lower bound, assume that it fails: for a subsequence of \(P_q\) we have
Then, necessarily,
Hence, for all \(s<t_q\) we have
On the other hand, as in the previous lemma, we can find \(\delta >0\) not depending on q such that
for all \(s> s_0-\delta \). Thus,
Comparing (5.1) with (5.2) we see that choosing q such that \(t_q\) is so close to \(s_0\) that
that is
then we get \(P_q(s_0-\delta ) > P(s_0-\delta )\), a contradiction. \(\square \)
Under the assumption that \(P'(s_0)\) exists, by Lemma 5.1 we can apply Lemma 5.2. The statement of Theorem 2.2 is now an immediate corollary of Lemmas 3.6 and 5.2, and Theorem 3.5.
Remark 5.3
The assumptions of Theorem 2.2 may look difficult to satisfy, but there are at least two classes of systems for which the Käenmäki measure is strong-Gibbs.
-
1.
Homogeneous case: assume that all the matrices \(A_i\) are powers of one matrix A. To demonstrate our result, consider the simplest case where \(A_i=A\) for all i. Then the Käenmäki measure is a Bernoulli measure with equal weights by Lemma 3.4 so that, in particular, it is strong-Gibbs. Writing \(\sigma _1,\ldots , \sigma _d\) for the singular values of A and assuming that the dimension \(s_0\) of \(\Lambda \) is not an integer, one can obtain
$$\begin{aligned} P'(s_0)=\log \sigma _{\lceil s_0\rceil }. \end{aligned}$$
2.
Dominated case: assume that \(d=2\) and the cocycle generated by matrices \(A_i\) is dominated, that is, there exist \(C>0\), \(0<\tau <1\) such that for all n and \({\overline{\imath }}\in \Sigma _n\),
$$\begin{aligned} \frac{|\det (A_{\overline{\imath }})|}{|A_{\overline{\imath }}|^2}\le C\tau ^{n}. \end{aligned}$$
It is proved in [1] that also in this case the Käenmäki measure satisfies the strong-Gibbs assumption, and that if \(s_0\) is not an integer then \(P'(s_0)\) is well defined. The dominated cocycles form an open subset of the \(GL(2,{\mathbb {R}})\)-cocycles; we refer the reader to [1] for the discussion.
For more on the s-semiconformality of Käenmäki measures, see [15].
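The formula for \(P'(s_0)\) in the homogeneous case of Remark 5.3 can be sketched as follows. This is a sketch under simplifying assumptions not stated above: m maps with \(A_i = A\) for all i, A diagonal (so that \(\sigma_j(A^n) = \sigma_j^n\)), singular values ordered \(\sigma_1 \ge \cdots \ge \sigma_d\), and \(k = \lceil s_0 \rceil\) with \(s_0\) non-integer:

```latex
% Homogeneous case: A_i = A for all i (m maps), A diagonal with singular
% values \sigma_1 \ge \dots \ge \sigma_d, and k = \lceil s \rceil constant
% near the non-integer s_0. Using the singular value function \varphi^s:
\begin{aligned}
  \varphi^s(A^n)
    &= \bigl(\sigma_1\cdots\sigma_{k-1}\bigr)^{n}\,\sigma_k^{\,n(s-k+1)},\\
  P(s)
    &= \lim_{n\to\infty}\frac1n \log \sum_{|\overline{\imath}|=n}
         \varphi^s(A_{\overline{\imath}})
     = \log m + \log\bigl(\sigma_1\cdots\sigma_{k-1}\bigr)
       + (s-k+1)\log\sigma_k,\\
  P'(s_0) &= \log \sigma_{\lceil s_0\rceil}.
\end{aligned}
```

Here the sum has \(m^n\) terms, all equal to \(\varphi^s(A^n)\), which produces the \(\log m\) term; \(P\) is affine in s on \((k-1,k)\), so the derivative at the non-integer \(s_0\) is immediate.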
Remark 5.4
As one can see in Lemma 5.2, in both examples presented above the assertion of our theorem remains true for integer \(s_0\) (with \(P'(s_0)\) replaced by \(P'(s_0-0)\)). Indeed, while the singular value pressure is nondifferentiable at integer points, because the singular value function defining it is itself nondifferentiable there, the assumptions of Lemma 5.2 are nevertheless satisfied (for those examples) at integer points as well.
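The integer-point nondifferentiability referred to here can be made explicit through the one-sided derivatives of the singular value function. A sketch for a fixed matrix A with singular values \(\sigma_1 \ge \cdots \ge \sigma_d\), at an integer \(1 \le k < d\):

```latex
% On (k-1, k]:  \log\varphi^s(A) = \log(\sigma_1\cdots\sigma_{k-1})
%                                   + (s-k+1)\log\sigma_k,
% on (k, k+1]:  \log\varphi^s(A) = \log(\sigma_1\cdots\sigma_{k})
%                                   + (s-k)\log\sigma_{k+1},
% so the two one-sided derivatives at s = k are
\begin{aligned}
  \frac{d}{ds}\Big|_{s=k-0}\log\varphi^s(A) &= \log\sigma_k,\\
  \frac{d}{ds}\Big|_{s=k+0}\log\varphi^s(A) &= \log\sigma_{k+1},
\end{aligned}
```

and these differ whenever \(\sigma_k \ne \sigma_{k+1}\); the pressure inherits the same left and right derivatives, which is why only \(P'(s_0-0)\) enters the statement at integer \(s_0\).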
References
Bárány, B., Käenmäki, A., Morris, I.: Domination and thermodynamic formalism for planar matrix cocycles (in preparation)
Bárány, B., Rams, M.: Shrinking targets on Bedford–McMullen carpets. arXiv:1703.08564
Bruin, H., Demers, M., Melbourne, I.: Existence and convergence properties of physical measures for certain dynamical systems with holes. Ergod. Theory Dyn. Syst. 30(3), 687–728 (2010)
Bunimovich, L.A., Yurchenko, A.: Where to place a hole to achieve a maximal escape rate. Isr. J. Math. 182, 229–252 (2011)
Chernov, N., Markarian, R., Troubetzkoy, S.: Invariant measures for Anosov maps with small holes. Ergod. Theory Dyn. Syst. 20(4), 1007–1044 (2000)
Collet, P., Martínez, S., Schmitt, B.: The Yorke–Pianigiani measure and the asymptotic law on the limit Cantor set of expanding systems. Nonlinearity 7(5), 1437–1443 (1994)
Demers, M.F.: Markov extensions and conditionally invariant measures for certain logistic maps with small holes. Ergod. Theory Dyn. Syst. 25(4), 1139–1171 (2005)
Dettmann, C.: Open circle maps: small hole asymptotics. Nonlinearity 26(1), 307–317 (2013)
Falconer, K.: The Hausdorff dimension of self-affine fractals. Math. Proc. Camb. Philos. Soc. 103(2), 339–350 (1988)
Falconer, K., Miao, J.: Dimensions of self-affine fractals and multifractals generated by upper-triangular matrices. Fractals 15(3), 289–299 (2007)
Ferguson, A., Jordan, T., Rams, M.: Dimension of self-affine sets with holes. Ann. Acad. Sci. Fenn. Math. 40(1), 63–88 (2015)
Ferguson, A., Pollicott, M.: Escape rates for Gibbs measures. Ergod. Theory Dyn. Syst. 32(3), 961–988 (2012)
Hensley, D.: Continued fraction Cantor sets, Hausdorff dimension, and functional analysis. J. Number Theory 40(3), 336–358 (1992)
Käenmäki, A.: On natural invariant measures on generalised iterated function systems. Ann. Acad. Sci. Fenn. Math. 29(2), 419–458 (2004)
Käenmäki, A., Vilppolainen, M.: Dimension and measures on sub-self-affine sets. Monatsh. Math. 161(3), 271–293 (2010)
Keller, G., Liverani, C.: Rare events, escape rates and quasistationarity: some exact formulae. J. Stat. Phys. 135(3), 519–534 (2009)
Koivusalo, H., Ramírez, F.: Recurrence to shrinking targets on typical self-affine fractals. Proc. Edinb. Math. Soc. (to appear). arXiv:1409.7593
Liverani, C., Maume-Deschamps, V.: Lasota–Yorke maps with holes: conditionally invariant probability measures and invariant probability measures on the survivor set. Ann. Inst. H. Poincaré Probab. Stat. 39(3), 385–412 (2003)
Pianigiani, G., Yorke, J.A.: Expanding maps on sets which are almost invariant. Decay and chaos. Trans. Am. Math. Soc. 252, 351–366 (1979)
Solomyak, B.: Measure and dimension for some fractal families. Math. Proc. Camb. Philos. Soc. 124(3), 531–546 (1998)
Urbański, M.: Hausdorff dimension of invariant subsets for endomorphisms of the circle with an indifferent fixed point. J. Lond. Math. Soc. (2) 40(1), 158–170 (1989)
Acknowledgements
Open access funding provided by University of Vienna.
Communicated by A. Constantin.
This Project was supported by OeAD Grant No. PL03/2017. M.R. was supported by National Science Centre Grant 2014/13/B/ST1/01033 (Poland).
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
Koivusalo, H., Rams, M. Dimension of generic self-affine sets with holes. Monatsh Math 188, 527–546 (2019). https://doi.org/10.1007/s00605-018-1187-6