Dimension Spectra of Lines

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10307)


This paper investigates the algorithmic dimension spectra of lines in the Euclidean plane. Given any line L with slope a and vertical intercept b, the dimension spectrum \({{\mathrm{sp}}}(L)\) is the set of all effective Hausdorff dimensions of individual points on L. We draw on Kolmogorov complexity and geometrical arguments to show that if the effective Hausdorff dimension \(\dim (a, b)\) is equal to the effective packing dimension \({{\mathrm{Dim}}}(a, b)\), then \({{\mathrm{sp}}}(L)\) contains a unit interval. We also show that, if the dimension \(\dim (a, b)\) is at least one, then \({{\mathrm{sp}}}(L)\) is infinite. Together with previous work, this implies that the dimension spectrum of any line is infinite.

1 Introduction

Algorithmic dimensions refine notions of algorithmic randomness to quantify the density of algorithmic information of individual points in continuous spaces. The most well-studied algorithmic dimensions for a point \(x\in \mathbb {R}^n\) are the effective Hausdorff dimension, \(\dim (x)\), and its dual, the effective packing dimension, \({{\mathrm{Dim}}}(x)\) [1, 7]. These dimensions are both algorithmically and geometrically meaningful [3]. In particular, the quantities \(\sup _{x\in E}\dim (x)\) and \(\sup _{x\in E}{{\mathrm{Dim}}}(x)\) are closely related to classical Hausdorff and packing dimensions of a set \(E\subseteq \mathbb {R}^n\) [5, 8], and this relationship has been used to prove nontrivial results in classical fractal geometry using algorithmic information theory [8, 10, 12].

Given the pointwise nature of effective Hausdorff dimension, it is natural to investigate not only the supremum \(\sup _{x\in E}\dim (x)\) but the entire (effective Hausdorff) dimension spectrum of a set \(E \subseteq \mathbb {R}^n\), i.e., the set
$$\begin{aligned} {{\mathrm{sp}}}(E)=\{\dim (x):x\in E\}\,. \end{aligned}$$
The dimension spectra of several classes of sets have been previously investigated. Gu et al. studied the dimension spectra of randomly selected subfractals of self-similar fractals [4]. Dougherty, et al. focused on the dimension spectra of random translations of Cantor sets [2]. In the context of symbolic dynamics, Westrick has studied the dimension spectra of subshifts [14].
This work concerns the dimension spectra of lines in the Euclidean plane \(\mathbb {R}^2\). Given a line \(L_{a,b}\) with slope a and vertical intercept b, we ask what \({{\mathrm{sp}}}(L_{a,b})\) might be. It was shown by Turetsky [13] that, for every \(n\ge 2\), the set of all points in \(\mathbb {R}^n\) with effective Hausdorff dimension 1 is connected, guaranteeing that \(1\in {{\mathrm{sp}}}(L_{a,b})\). In recent work [10], we showed that the dimension spectrum of a line in \(\mathbb {R}^2\) cannot be a singleton. By proving a general lower bound on \(\dim (x,ax+b)\), which is presented as Theorem 5 here, we demonstrated that
$$\begin{aligned} \min \{1,\dim (a,b)\}+1\in {{\mathrm{sp}}}(L_{a,b})\,. \end{aligned}$$
Together with the fact that \(\dim (a,b)=\dim (a,a^2+b)\in {{\mathrm{sp}}}(L_{a,b})\) and Turetsky’s result, this implies that the dimension spectrum of \(L_{a,b}\) contains both endpoints of the unit interval \([\min \{1,\dim (a,b)\},\min \{1,\dim (a,b)\}+1]\).
Here we build on that work with two main theorems on the dimension spectrum of a line. Our first theorem gives conditions under which the entire unit interval must be contained in the spectrum. We refine the techniques of [10] to show in our main theorem (Theorem 8) that, whenever \(\dim (a,b)={{\mathrm{Dim}}}(a,b)\), we have
$$\begin{aligned}{}[\min \{1,\dim (a,b)\},\min \{1,\dim (a,b)\}+1]\subseteq {{\mathrm{sp}}}(L_{a,b})\,. \end{aligned}$$
Given any value \(s\in [0,1]\), we construct, by padding a random binary sequence, a value \(x\in \mathbb {R}\) such that \(\dim (x, ax + b) = s + \min \{\dim (a, b), 1\}\). Our second main theorem shows that the dimension spectrum \({{\mathrm{sp}}}(L_{a,b})\) is infinite for every line such that \(\dim (a, b)\) is at least one. Together with Theorem 5, this shows that the dimension spectrum of any line has infinite cardinality.

We begin by reviewing definitions and properties of algorithmic information in Euclidean spaces in Sect. 2. In Sect. 3, we sketch our technical approach and state our main technical lemmas. In Sect. 4 we prove our first main theorem and state our second main theorem. We conclude in Sect. 5 with a brief discussion of future directions.

2 Preliminaries

2.1 Kolmogorov Complexity in Discrete Domains

The conditional Kolmogorov complexity of binary string \(\sigma \in \{0,1\}^*\) given a binary string \(\tau \in \{0,1\}^*\) is the length of the shortest program \(\pi \) that will output \(\sigma \) given \(\tau \) as input. Formally, it is
$$\begin{aligned} K(\sigma |\tau )=\min _{\pi \in \{0,1\}^*}\left\{ \ell (\pi ):U(\pi ,\tau )=\sigma \right\} \,, \end{aligned}$$
where U is a fixed universal prefix-free Turing machine and \(\ell (\pi )\) is the length of \(\pi \). Any \(\pi \) that achieves this minimum is said to testify to, or be a witness to, the value \(K(\sigma |\tau )\). The Kolmogorov complexity of a binary string \(\sigma \) is \(K(\sigma )=K(\sigma |\lambda )\), where \(\lambda \) is the empty string. These definitions extend naturally to other finite data objects, e.g., vectors in \(\mathbb {Q}^n\), via standard binary encodings; see [6] for details.
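Kolmogorov complexity is not computable, but any fixed compressor yields an upper bound on it, up to an additive constant depending on the compressor. The following Python sketch illustrates this contrast with zlib as a purely illustrative stand-in (an upper-bound proxy, not \(K\) itself):

```python
import random
import zlib

def compressed_len_bits(s: str) -> int:
    """Bit-length of the zlib encoding of s: an upper-bound proxy for
    Kolmogorov complexity, not K itself (K is uncomputable)."""
    return 8 * len(zlib.compress(s.encode("ascii")))

# A highly regular string has a very short description ...
structured = "01" * 4096
# ... while a pseudorandom string (a stand-in for an incompressible one)
# cannot be compressed much below one bit per symbol by zlib.
rng = random.Random(0)
unstructured = "".join(str(rng.randrange(2)) for _ in range(8192))

print(compressed_len_bits(structured), compressed_len_bits(unstructured))
```

Compression only ever certifies upper bounds: no algorithm computes \(K\) exactly, and only finitely many statements of the form \(K(\sigma )>c\) are provable in any fixed sound theory.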

2.2 Kolmogorov Complexity in Euclidean Spaces

The above definitions can also be extended to Euclidean spaces, as we now describe. The Kolmogorov complexity of a point \(x\in \mathbb {R}^m\) at precision \(r\in \mathbb {N}\) is the length of the shortest program \(\pi \) that outputs a precision-r rational estimate for x. Formally, it is
$$\begin{aligned} K_r(x)=\min \left\{ K(p)\,:\,p\in B_{2^{-r}}(x)\cap \mathbb {Q}^m\right\} \,, \end{aligned}$$
where \(B_{\varepsilon }(x)\) denotes the open ball of radius \(\varepsilon \) centered on x. The conditional Kolmogorov complexity of x at precision r given \(y\in \mathbb {R}^n\) at precision \(s\in \mathbb {N}\) is
$$\begin{aligned} K_{r,s}(x|y)=\max \big \{\min \{K_r(p|q)\,:\,p\in B_{2^{-r}}(x)\cap \mathbb {Q}^m\}\,:\,q\in B_{2^{-s}}(y)\cap \mathbb {Q}^n\big \}\,. \end{aligned}$$
When the precisions r and s are equal, we abbreviate \(K_{r,r}(x|y)\) by \(K_r(x|y)\). As the following lemma shows, these quantities obey a chain rule and are only linearly sensitive to their precision parameters.
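Concretely, a witness to the minimum in the definition of \(K_r(x)\) can always be taken to be a dyadic rational: truncating the binary expansion of each coordinate after \(r+1\) fractional bits lands inside \(B_{2^{-r}}(x)\). A minimal one-dimensional Python sketch (floating point restricts this illustration to moderate r):

```python
from fractions import Fraction
import math

def dyadic_estimate(x: float, r: int) -> Fraction:
    """Return a rational p with |p - x| < 2^{-r} by truncating the
    binary expansion of x after r + 1 fractional bits."""
    scale = 2 ** (r + 1)
    return Fraction(math.floor(x * scale), scale)

p = dyadic_estimate(math.pi, 20)  # a precision-20 rational estimate of pi
assert abs(Fraction(math.pi) - p) < Fraction(1, 2 ** 20)
```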

Lemma 1

(J. Lutz and N. Lutz [8], N. Lutz and Stull [10]). Let \(x \in \mathbb {R}^m\) and \(y\in \mathbb {R}^n\). For all \(r,s\in \mathbb {N}\) with \(r\ge s\),
  1. \(K_r(x,y)=K_r(x|y)+K_r(y)+O(\log r)\).
  2. \(K_r(x)=K_{r,s}(x|x)+K_s(x)+O(\log r)\).


As a matter of notational convenience, if we are given a nonintegral positive real as a precision parameter, we will always round up to the next integer. For example, \(K_{r}(x)\) denotes \(K_{\lceil r\rceil }(x)\) whenever \(r\in (0,\infty )\).

2.3 Effective Hausdorff and Packing Dimensions

J. Lutz initiated the study of algorithmic dimensions by effectivizing Hausdorff dimension using betting strategies called gales, which generalize martingales. Subsequently, Athreya et al. defined effective packing dimension, also using gales [1]. Mayordomo showed that effective Hausdorff dimension can be characterized using Kolmogorov complexity [11], and J. Lutz and Mayordomo showed that effective packing dimension can also be characterized in this way [9]. In this paper, we use these characterizations as definitions. The effective Hausdorff dimension and effective packing dimension of a point \(x\in \mathbb {R}^n\) are
$$\begin{aligned} \dim (x)=\liminf _{r\rightarrow \infty }\frac{K_r(x)}{r}\quad \text {and}\quad {{\mathrm{Dim}}}(x) = \limsup _{r\rightarrow \infty }\frac{K_r(x)}{r}\,. \end{aligned}$$
Intuitively, these dimensions measure the density of algorithmic information in the point x. Guided by the information-theoretic nature of these characterizations, J. Lutz and N. Lutz [8] defined the lower and upper conditional dimension of \(x\in \mathbb {R}^m\) given \(y\in \mathbb {R}^n\) as
$$\begin{aligned} \dim (x|y)=\liminf _{r\rightarrow \infty }\frac{K_r(x|y)}{r}\quad \text {and}\quad {{\mathrm{Dim}}}(x|y) = \limsup _{r\rightarrow \infty }\frac{K_r(x|y)}{r}\,. \end{aligned}$$
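One immediate interaction between Lemma 1 and these definitions is worth recording (a standard fact, cf. [8], included here only as a one-line illustration): dividing the chain rule by r and using the superadditivity of \(\liminf \) gives

```latex
% Lemma 1(1), divided by r; liminf of a sum dominates the sum of liminfs
\dim(x,y)
  = \liminf_{r\to\infty}\frac{K_r(x\,|\,y) + K_r(y) + O(\log r)}{r}
  \;\ge\; \liminf_{r\to\infty}\frac{K_r(x\,|\,y)}{r}
        + \liminf_{r\to\infty}\frac{K_r(y)}{r}
  = \dim(x\,|\,y) + \dim(y)\,.
```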

2.4 Relative Complexity and Dimensions

By letting the underlying fixed prefix-free Turing machine U be a universal oracle machine, we may relativize the definitions in this section to an arbitrary oracle set \(A \subseteq \mathbb {N}\). The definitions of \(K^A(\sigma |\tau )\), \(K^A(\sigma )\), \(K^A_r(x)\), \(K^A_r(x|y)\), \(\dim ^A(x)\), \({{\mathrm{Dim}}}^A(x)\), \(\dim ^A(x|y)\), and \({{\mathrm{Dim}}}^A(x|y)\) are then all identical to their unrelativized versions, except that U is given oracle access to A.

We will frequently consider the complexity of a point \(x \in \mathbb {R}^n\) relative to a point \(y \in \mathbb {R}^m\), i.e., relative to a set \(A_y\) that encodes the binary expansion of y in a standard way. We then write \(K^y_r(x)\) for \(K^{A_y}_r(x)\). J. Lutz and N. Lutz showed that \(K_r^y(x)\le K_{r,t}(x|y)+K(t)+O(1)\) [8].

3 Background and Approach

In this section we describe the basic ideas behind our investigation of dimension spectra of lines. We briefly discuss some of our earlier work on this subject, and we present two technical lemmas needed for the proofs of our main theorems.

The dimension of a point on a line in \(\mathbb {R}^2\) has the following trivial bound.

Observation 2

For all \(a,b,x\in \mathbb {R}\), \(\dim (x,ax+b)\le \dim (x,a,b)\).

In this work, our goal is to find values of x for which the approximate converse
$$\begin{aligned} \dim (x,ax+b)\ge \dim ^{a,b}(x)+\dim (a,b)\qquad \mathrm{(1)} \end{aligned}$$
holds. At least relative to some oracles, (1) does not always hold. This follows from the point-to-set principle of J. Lutz and N. Lutz [8] and the existence of Furstenberg sets with parameter \(\alpha \) and Hausdorff dimension less than \(1+\alpha \) (attributed by Wolff [15] to Furstenberg and Katznelson “in all probability”). The argument is simple and very similar to our proof in [10] of a lower bound on the dimension of generalized Furstenberg sets.

Specifically, for every \(s\in [0,1]\), we want to find an x of effective Hausdorff dimension s such that (1) holds. Note that equality in Observation 2 implies (1).

Observation 3

Suppose \(ax+b=ux+v\) and \(u\ne a\). Then
$$\begin{aligned} \dim (u,v)\ge \dim ^{a,b}(u,v)\ge \dim ^{a,b}\left( \frac{b-v}{u-a}\right) =\dim ^{a,b}(x) \,. \end{aligned}$$

This observation suggests an approach, whenever \(\dim ^{a,b}(x)>\dim (a,b)\), for showing that \(\dim (x,ax+b)\ge \dim (x,a,b)\). Since (a, b) is, in this case, the unique low-dimensional pair (u, v) such that \((x,ax+b)\) lies on \(L_{u,v}\), one might naïvely hope to use this fact to derive an estimate of (x, a, b) from an estimate of \((x,ax+b)\). Unfortunately, the dimension of a point is not even semicomputable, so algorithmically distinguishing (a, b) requires a more refined statement.

3.1 Previous Work

The following lemma, which is essentially geometrical, is such a statement.

Lemma 4

(N. Lutz and Stull [10]). Let \(a,b,x\in \mathbb {R}\). For all \((u,v)\in \mathbb {R}^2\) such that \(u x+v=ax+b\) and \(t=-\log \Vert (a,b)-(u,v)\Vert \in (0,r]\),
$$\begin{aligned} K_{r}(u,v)\ge K_t(a,b) + K^{a,b}_{r-t}(x)-O(\log r)\,. \end{aligned}$$

Roughly, if \(\dim (a,b)<\dim ^{a,b}(x)\), then Lemma 4 tells us that \(K_r(u,v)>K_r(a,b)\) unless (u, v) is very close to (a, b). As \(K_r(u,v)\) is upper semicomputable, this is algorithmically useful: We can enumerate all pairs (u, v) whose precision-r complexity falls below a certain threshold. If one of these pairs satisfies, approximately, \(ux+v=ax+b\), then we know that (u, v) is close to (a, b). Thus, an estimate for \((x,ax+b)\) algorithmically yields an estimate for (x, a, b).
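The shape of this enumeration argument can be illustrated with a toy search. Everything below is hypothetical scaffolding: a mock complexity function stands in for the (uncomputable) \(K_r\), a small finite grid stands in for \(\mathbb {Q}^2\), and exact line membership replaces approximate membership. It shows only the logical skeleton: among the enumerable low-complexity pairs, the one on the observed line identifies \((a,b)\).

```python
from fractions import Fraction as F

# Toy instance: the true line has slope a and intercept b, and we
# observe (an estimate of) the point (x, ax + b).
a, b, x = F(1, 2), F(1, 4), F(17, 5)
observed_y = a * x + b

def mock_complexity(u, v):
    """Hypothetical stand-in for K_r: pairs built from small numerators
    and denominators count as 'simple'."""
    return abs(u.numerator) + u.denominator + abs(v.numerator) + v.denominator

# A small grid standing in for Q^2.
grid = {(F(p, q), F(s, t))
        for q in range(1, 5) for t in range(1, 5)
        for p in range(-4, 5) for s in range(-4, 5)}

# "Enumerate" the pairs below a complexity threshold -- possible in the
# real argument because K_r is upper semicomputable -- and keep those
# whose line passes (here: exactly) through the observed point.
candidates = {(u, v) for (u, v) in grid
              if mock_complexity(u, v) <= 10 and u * x + v == observed_y}

# The surviving candidate identifies (a, b).
print(candidates)
```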

In our previous work [10], we used an argument of this type to prove a general lower bound on the dimension of points on lines in \(\mathbb {R}^2\):

Theorem 5

(N. Lutz and Stull [10]). For all \(a,b,x\in \mathbb {R}\),
$$\begin{aligned} \dim (x,ax+b)\ge \dim ^{a,b}(x)+ \min \{\dim (a,b),\,\dim ^{a,b}(x)\}\,. \end{aligned}$$

The strategy in that work is to use oracles to artificially lower \(K_r(a,b)\) when necessary, to essentially force \(\dim (a,b)<\dim ^{a,b}(x)\). This enables the above argument structure to be used, but lowering the complexity of (ab) also weakens the conclusion, leading to the minimum in Theorem 5.

3.2 Technical Lemmas

In the present work, we circumvent this limitation and achieve inequality (1) by controlling the choice of x and placing a condition on (ab). Adapting the above argument to the case where \(\dim (a,b)>\dim ^{a,b}(x)\) requires refining the techniques of [10]. In particular, we use the following two technical lemmas, which strengthen results from that work. Lemma 6 weakens the conditions needed to compute an estimate of (xab) from an estimate of \((x,ax+b)\).

Lemma 6

Let \(a,b,x\in \mathbb {R}\), \(k \in \mathbb {N}\), and \(r_0=1\). Suppose that \(r_1,\ldots , r_k\in \mathbb {N}\), \(\delta \in \mathbb {R}_+\), and \(\varepsilon ,\eta \in \mathbb {Q}_+\) satisfy the following conditions for every \(1\le i\le k\).

  1. \(r_i \ge \log (2|a|+|x|+6)+r_{i-1}\).
  2. \(K_{r_i}(a,b)\le \left( \eta +\varepsilon \right) r_i\).
  3. For every \((u,v)\in \mathbb {R}^2\) such that \(t=-\log \Vert (a,b)-(u,v)\Vert \in (r_{i-1},r_i]\) and \(ux+v=ax+b\), \(K_{r_i}(u,v)\ge \left( \eta -\varepsilon \right) r_i+\delta \cdot (r_i- t)\).
Then for every oracle set \(A \subseteq \mathbb {N}\),
$$\begin{aligned} K^A_{r_k}(a, b, x \, | \, x, ax + b) \le 2^{k}\left( K(\varepsilon )+ K(\eta ) + \frac{4\varepsilon }{\delta } r_k + O(\log r_k)\right) \,. \end{aligned}$$

Lemma 7 strengthens the oracle construction of [10], allowing us to control complexity at multiple levels of precision.

Lemma 7

Let \(z\in \mathbb {R}^n\), \(\eta \in \mathbb {Q}\cap [0,\dim (z)]\), and \(k\in \mathbb {N}\). For all \(r_1, \ldots , r_k \in \mathbb {N}\), there is an oracle \(D=D(r_1,\ldots , r_k,z,\eta )\) such that
  1. For every \(t \le r_1\), \(K^D_t(z) =\min \{\eta r_1,K_t(z)\}+ O(\log r_k)\).
  2. For every \(1 \le i \le k\),
     $$\begin{aligned} K^D_{r_i}(z) = \eta r_1 + \sum _{j =2}^i \min \{\eta (r_j - r_{j-1}), K_{r_j, r_{j-1}}(z \, | \, z)\} + O(\log r_k)\,. \end{aligned}$$
  3. For every \(t\in \mathbb {N}\) and \(x\in \mathbb {R}\), \(K^{z,D}_t(x) = K^z_t(x) + O(\log r_k)\).


4 Main Theorems

We are now prepared to prove our two main theorems. We first show that, for lines \(L_{a, b}\) such that \(\dim (a, b) = {{\mathrm{Dim}}}(a, b)\), the dimension spectrum \({{\mathrm{sp}}}(L_{a,b})\) contains the unit interval.

Theorem 8

Let \(a, b \in \mathbb {R}\) satisfy \(\dim (a, b) = {{\mathrm{Dim}}}(a, b)\). Then for every \(s \in [0, 1]\) there is a point \(x\in \mathbb {R}\) such that \(\dim (x, ax + b) = s + \min \{\dim (a,b), 1\}\).


Proof. Every line contains a point of effective Hausdorff dimension 1 [13], and by the preservation of effective dimensions under computable bi-Lipschitz functions, \(\dim (a,a^2+b)=\dim (a,b)\), so the theorem holds for \(s=0\). For \(s = 1\), we may choose an \(x \in \mathbb {R}\) that is random relative to (a, b). That is, there is some constant \(c \in \mathbb {N}\) such that for all \(r \in \mathbb {N}\), \(K^{a, b}_r(x) \ge r - c\). By Theorem 5,
$$\begin{aligned} \dim (x,ax+b)&\ge \dim ^{a, b}(x) +\min \{\dim (a,b), 1\}\\&= \min \{\dim (a,b), 1\} + \liminf _{r\rightarrow \infty }\frac{K^{a,b}_r(x)}{r}\\&=\min \{\dim (a,b), 1\} + 1, \end{aligned}$$
and the conclusion holds.
Now let \(s \in (0,1)\) and \(d=\dim (a, b) = {{\mathrm{Dim}}}(a, b)\). Let \(y \in \mathbb {R}\) be random relative to (a, b). Define the sequence of natural numbers \(\{h_j\}_{j \in \mathbb {N}}\) inductively as follows. Define \(h_0 = 1\). For every \(j > 0\), let
$$\begin{aligned} h_j = \min \left\{ h \ge 2^{h_{j-1}}: K_h(a, b) \le \left( d + \frac{1}{j}\right) h\right\} \,. \end{aligned}$$
Note that \(h_j\) always exists. For every \(r \in \mathbb {N}\), let
$$\begin{aligned} x[r] = {\left\{ \begin{array}{ll} 0 &{}\text { if } \frac{r}{h_j} \in (s, 1] \text { for some } j \in \mathbb {N}\\ y[r] &{}\text { otherwise} \end{array}\right. } \end{aligned}$$
where x[r] is the rth bit of x. Define \(x \in \mathbb {R}\) to be the real number with this binary expansion. Then \(K_{sh_j}(x)=sh_j+O(\log sh_j)\).
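This padding construction can be sketched concretely. Since K, and hence the true sequence \(h_j\), is uncomputable, the sketch below substitutes the tower lower bound \(h_j = 2^{h_{j-1}}\) for the actual minimum and a seeded pseudorandom generator for the random real y; it only illustrates how zero-padding on the intervals \((sh_j, h_j]\) controls the density of information in x.

```python
import random

def padded_expansion(s, levels=4):
    """First h_k bits of x: bit r copies y[r] unless r/h_j lies in
    (s, 1] for some j, in which case it is padded with 0."""
    h = [1]                          # h_0 = 1
    for _ in range(levels):          # assumed stand-in: h_j = 2^{h_{j-1}}
        h.append(2 ** h[-1])
    rng = random.Random(0)           # pseudorandom stand-in for y
    x = []
    for r in range(1, h[-1] + 1):
        if any(s < r / hj <= 1 for hj in h):
            x.append(0)              # padded region: no new information
        else:
            x.append(rng.randrange(2))
    return x, h

x, h = padded_expansion(s=0.5)       # here h == [1, 2, 4, 16, 65536]
# Up to precision h_j, roughly an s-fraction of the bits come from y,
# matching K_{h_j}(x) ~ s*h_j; up to precision s*h_j, nearly all do.
```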
We first show that \(\dim (x, ax + b) \le s + \min \{d, 1\}\). For every \(j \in \mathbb {N}\),
$$\begin{aligned} K_{h_j}(x,ax+b)&= K_{h_j}(x) + K_{h_j}(ax + b \, | \, x) + O(\log h_j) \\&= K_{sh_j}(x) + K_{h_j}(ax + b \, | \, x) + O(\log h_j) \\&= K_{sh_j }(y) + K_{h_j}(ax + b \, | \, x) + O(\log h_j) \\&\le sh_j + \min \{d,1\}\cdot h_j + o(h_j)\,. \end{aligned}$$
Therefore,
$$\begin{aligned} \dim (x, ax + b)&= \liminf _{r \rightarrow \infty } \frac{K_r(x, ax + b)}{r}\\&\le \liminf _{j \rightarrow \infty } \frac{K_{h_j}(x, ax + b)}{h_j}\\&\le \liminf _{j \rightarrow \infty } \frac{s h_j + \min \{d, 1\} h_j +o(h_j)}{h_j}\\&= s + \min \{d, 1\}\,. \end{aligned}$$
If \(1 > s \ge d\), then by Theorem 5 we also have
$$\begin{aligned} \dim (x,ax+b)&\ge \dim ^{a, b}(x) +\dim (a,b)\\&=\dim (x)+d\\&=\liminf _{r\rightarrow \infty }\frac{K_r(x)}{r}+d\\&=\liminf _{j\rightarrow \infty }\frac{K_{h_j}(x)}{h_j}+d\\&=s+\min \{d,1\}\,. \end{aligned}$$
Hence, we may assume that \(s < d\).
Let \(H = \mathbb {Q}\cap (s, \min \{d,1\})\). Let \(\eta \in H\), \(\delta = 1 - \eta > 0\), and \(\varepsilon \in \mathbb {Q}_+\). We now show that \(\dim (x, ax + b) \ge s + \eta - \frac{\alpha \varepsilon }{\delta }\), where \(\alpha \) is some constant independent of \(\eta \) and \(\varepsilon \). Let \(j \in \mathbb {N}\) and \(m = \frac{s - 1}{\eta - 1}\). We first show that
$$\begin{aligned} K_{r}(x, ax + b) \ge K_r(x) + \eta r - \frac{\alpha \varepsilon }{\delta } r - o(r)\,,\qquad \mathrm{(2)} \end{aligned}$$
for every \(r \in (sh_j, mh_j]\). Let \(r \in (sh_j, mh_j]\). Set \(k = \frac{r}{sh_j} \), and define \(r_i = i s h_j\) for all \(1 \le i \le k\). Note that k is bounded by a constant depending only on s and \(\eta \); therefore every \(o(r_k)\) term is also \(o(r_i)\) for each \(r_i\). Let \(D_{r} = D(r_1,\ldots , r_k, (a, b), \eta )\) be the oracle defined in Lemma 7. We first note that, since \(\dim (a, b) = {{\mathrm{Dim}}}(a, b)\),
$$\begin{aligned} K_{r_i, r_{i-1}}(a, b \, | \, a, b)&= K_{r_i}(a, b) - K_{r_{i-1}}(a, b) - O(\log r_i)\\&= \dim (a, b)r_i - o(r_i) - \dim (a, b) r_{i-1} - o(r_{i-1}) - O(\log r_i)\\&= \dim (a, b)(r_i - r_{i-1}) - o(r_i)\\&\ge \eta (r_i - r_{i-1}) - o(r_i). \end{aligned}$$
Hence, by property 2 of Lemma 7, for every \(1 \le i \le k\),
$$\begin{aligned} \vert K^{D_r}_{r_i}(a, b) - \eta r_i \vert \le o(r_k)\,.\qquad \mathrm{(3)} \end{aligned}$$
We now show that the conditions of Lemma 6 are satisfied. By inequality (3), for every \(1 \le i \le k\),
$$\begin{aligned} K^{D_r}_{r_i}(a, b) \le \eta r_i + o(r_k)\,, \end{aligned}$$
and so \(K^{D_r}_{r_i}(a, b) \le (\eta + \varepsilon ) r_i\), for sufficiently large j. Hence, condition 2 of Lemma 6 is satisfied.
To see that condition 3 is satisfied for \(i=1\), let \((u, v) \in B_1(a, b)\) be such that \(ux + v = ax + b\) and \(t=-\log \Vert (a,b)-(u,v)\Vert \le r_1\). Then, by Lemmas 4 and 7, and our construction of x,
$$\begin{aligned} K^{D_{r}}_{r_1}(u,v)&\ge K^{D_{r}}_t(a,b) + K^{D_{r}}_{r_1-t,r_1}(x|a,b)-O(\log r_1)\\&\ge \min \{\eta r_1, K_t(a, b)\} + K_{r_1-t}(x)- o(r_k)\\&\ge \min \{\eta r_1, dt - o(t)\} + (\eta + \delta )(r_1 - t) - o(r_k)\\&\ge \min \{\eta r_1, \eta t - o(t)\} + (\eta + \delta )(r_1 - t) - o(r_k) \\&\ge \eta t - o(t) + (\eta + \delta )(r_1 - t) - o(r_k)\,. \end{aligned}$$
We conclude that \(K^{D_{r}}_{r_1}(u,v) \ge (\eta - \varepsilon )r_1 + \delta (r_1 - t)\), for all sufficiently large j.
To see that condition 3 is satisfied for \(1 < i \le k\), let \((u, v) \in B_{2^{-r_{i-1}}}(a, b)\) be such that \(ux + v = ax + b\) and \(t=-\log \Vert (a,b)-(u,v)\Vert \le r_i\). Since \((u, v) \in B_{2^{-r_{i-1}}}(a, b)\),
$$\begin{aligned} r_i - t \le r_i - r_{i-1}= ish_j - (i-1)sh_j \le sh_j + 1\le r_1 + 1\,. \end{aligned}$$
Therefore, by Lemma 4, inequality (3), and our construction of x,
$$\begin{aligned} K^{D_{r}}_{r_i}(u,v)&\ge K^{D_{r}}_t(a,b) + K^{D_{r}}_{r_i-t,r_i}(x|a,b)-O(\log r_i)\\&\ge \min \{\eta r_i, K_t(a, b)\} + K_{r_i-t}(x)- o(r_i)\\&\ge \min \{\eta r_i, dt - o(t)\} + (\eta + \delta )(r_i - t) - o(r_i)\\&\ge \min \{\eta r_i, \eta t - o(t)\} + (\eta + \delta )(r_i - t) - o(r_i) \\&\ge \eta t - o(t) + (\eta + \delta )(r_i - t) - o(r_i)\,. \end{aligned}$$
We conclude that \(K^{D_{r}}_{r_i}(u,v) \ge (\eta - \varepsilon )r_i + \delta (r_i - t)\), for all sufficiently large j. Hence the conditions of Lemma 6 are satisfied, and we have
$$\begin{aligned} K_{r}(x, ax + b) \ge&\ K^{D_{r}}_{r}(x, ax + b) - O(1)\\ \ge&\ K^{D_{r}}_{r}(a, b, x) - 2^k\left( K(\varepsilon ) + K(\eta ) + \frac{4\varepsilon }{\delta } r + O(\log r)\right) \\ =&\ K^{D_{r}}_{r}(a, b) + K^{D_{r}}_{r}(x \, | \, a, b) \\&-\,2^k\left( K(\varepsilon ) + K(\eta ) + \frac{4\varepsilon }{\delta } r + O(\log r)\right) \\ \ge&\ sr + \eta r - 2^k\left( K(\varepsilon ) + K(\eta ) + \frac{4\varepsilon }{\delta } r + O(\log r)\right) . \end{aligned}$$
Thus, for every \(r \in (sh_j, mh_j]\),
$$\begin{aligned} K_{r}(x, ax + b) \ge sr + \eta r - \frac{\alpha \varepsilon }{\delta } r - o(r)\,, \end{aligned}$$
where \(\alpha \) is a fixed constant, not depending on \(\eta \) and \(\varepsilon \).
To complete the proof, we show that (2) holds for every \(r \in [mh_j, sh_{j + 1})\). By Lemma 1 and our construction of x,
$$\begin{aligned} K_r(x)&= K_{r, h_j}(x\,|\,x) + K_{h_j}(x)+o(r) \\ {}&= r - h_j + sh_j+o(r) \\ {}&\ge \eta r-o(r)\,. \end{aligned}$$
The proof of Theorem 5 gives \(K_r(x, ax + b) \ge K_r(x) + \dim (x)r - o(r)\), and so \(K_r(x, ax + b) \ge r(s + \eta ) - o(r)\).
Therefore, inequality (2) holds for every \(r \in [sh_j, sh_{j+1})\), for all sufficiently large j. Hence,
$$\begin{aligned} \dim (x, ax + b)&= \liminf \limits _{r \rightarrow \infty } \frac{K_r(x, ax + b)}{r}\\&\ge \liminf \limits _{r \rightarrow \infty } \frac{K_r(x) + \eta r - \frac{\alpha \varepsilon }{\delta } r - o(r)}{r}\\&\ge \liminf \limits _{r \rightarrow \infty } \frac{K_r(x)}{r} + \eta - \frac{\alpha \varepsilon }{\delta }\\&= s + \eta - \frac{\alpha \varepsilon }{\delta }\,. \end{aligned}$$
Since \(\eta \) and \(\varepsilon \) were chosen arbitrarily, the conclusion follows.    \(\square \)

Theorem 9

Let \(a, b \in \mathbb {R}\) such that \(\dim (a, b) \ge 1\). Then for every \(s \in [\frac{1}{2}, 1]\) there is a point \(x\in \mathbb {R}\) such that \(\dim (x, ax + b) \in \left[ \frac{3}{2} + s - \frac{1}{2s}, s + 1\right] \).

Corollary 10

Let \(L_{a, b}\) be any line in \(\mathbb {R}^2\). Then the dimension spectrum \({{\mathrm{sp}}}(L_{a, b})\) is infinite.


Proof. Let \((a, b) \in \mathbb {R}^2\). If \(\dim (a, b) < 1\), then by Theorem 5 and Observation 2, the spectrum \({{\mathrm{sp}}}(L_{a, b})\) contains the interval \([\dim (a, b), 1]\). Assume that \(\dim (a, b) \ge 1\). By Theorem 9, for every \(s \in [\frac{1}{2}, 1]\), there is a point x such that \(\dim (x, ax + b) \in [\frac{3}{2} + s - \frac{1}{2s}, s + 1]\). Since these intervals are disjoint for \(s_n = \frac{2n - 1}{2n}\), the dimension spectrum \({{\mathrm{sp}}}(L_{a, b})\) is infinite.    \(\square \)

5 Future Directions

We have made progress in the broader program of describing the dimension spectra of lines in Euclidean spaces. We highlight three specific directions for further progress. First, it is natural to ask whether the condition on (a, b) may be dropped from the statement of our main theorem: Does Theorem 8 hold for arbitrary \(a,b\in \mathbb {R}\)?

Second, the dimension spectrum of a line \(L_{a,b}\subseteq \mathbb {R}^2\) may properly contain the unit interval described in our main theorem, even when \(\dim (a,b)={{\mathrm{Dim}}}(a,b)\). If \(a\in \mathbb {R}\) is random and \(b=0\), for example, then \({{\mathrm{sp}}}(L_{a,b})=\{0\}\cup [1,2]\). It is less clear whether this set of “exceptional values” in \({{\mathrm{sp}}}(L_{a,b})\) might itself contain an interval, or even be infinite. How large (in the sense of cardinality, dimension, or measure) may \({{\mathrm{sp}}}(L_{a,b})\cap \big [0,\min \{1,\dim (a,b)\}\big )\) be?

Finally, any non-trivial statement about the dimension spectra of lines in higher-dimensional Euclidean spaces would be very interesting. Indeed, an n-dimensional version of Theorem 5 (i.e., one in which \(a,b\in \mathbb {R}^{n-1}\), for all \(n\ge 2\)) would, via the point-to-set principle for Hausdorff dimension [8], affirm the famous Kakeya conjecture and is therefore likely difficult. The additional hypothesis of Theorem 8 might make it more conducive to such an extension.


References

  1. Athreya, K.B., Hitchcock, J.M., Lutz, J.H., Mayordomo, E.: Effective strong dimension in algorithmic information and computational complexity. SIAM J. Comput. 37(3), 671–705 (2007)
  2. Dougherty, R., Lutz, J., Mauldin, R.D., Teutsch, J.: Translating the Cantor set by a random real. Trans. Am. Math. Soc. 366(6), 3027–3041 (2014)
  3. Downey, R., Hirschfeldt, D.: Algorithmic Randomness and Complexity. Springer, New York (2010)
  4. Gu, X., Lutz, J.H., Mayordomo, E., Moser, P.: Dimension spectra of random subfractals of self-similar fractals. Ann. Pure Appl. Logic 165(11), 1707–1726 (2014)
  5. Hitchcock, J.M.: Correspondence principles for effective dimensions. Theory Comput. Syst. 38(5), 559–571 (2005)
  6. Li, M., Vitányi, P.M.B.: An Introduction to Kolmogorov Complexity and Its Applications, 3rd edn. Springer, New York (2008)
  7. Lutz, J.H.: The dimensions of individual strings and sequences. Inf. Comput. 187(1), 49–79 (2003)
  8. Lutz, J.H., Lutz, N.: Algorithmic information, plane Kakeya sets, and conditional dimension. In: Vollmer, H., Vallée, B. (eds.) 34th Symposium on Theoretical Aspects of Computer Science (STACS 2017). LIPIcs, vol. 66, pp. 53:1–53:13. Schloss Dagstuhl–Leibniz-Zentrum für Informatik (2017)
  9. Lutz, J.H., Mayordomo, E.: Dimensions of points in self-similar fractals. SIAM J. Comput. 38(3), 1080–1112 (2008)
  10. Lutz, N., Stull, D.M.: Bounding the dimension of points on a line. In: Gopal, T.V., Jäger, G., Steila, S. (eds.) TAMC 2017. LNCS, vol. 10185, pp. 425–439. Springer, Cham (2017). doi:10.1007/978-3-319-55911-7_31
  11. Mayordomo, E.: A Kolmogorov complexity characterization of constructive Hausdorff dimension. Inf. Process. Lett. 84(1), 1–3 (2002)
  12. Reimann, J.: Effectively closed classes of measures and randomness. Ann. Pure Appl. Logic 156(1), 170–182 (2008)
  13. Turetsky, D.: Connectedness properties of dimension level sets. Theor. Comput. Sci. 412(29), 3598–3603 (2011)
  14. Westrick, L.B.: Computability in ordinal ranks and symbolic dynamics. Ph.D. thesis, University of California, Berkeley (2014)
  15. Wolff, T.: Recent work connected with the Kakeya problem. In: Prospects in Mathematics, pp. 129–162 (1999)

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science, Rutgers University, Piscataway, USA
  2. Department of Computer Science, Iowa State University, Ames, USA
