1 Introduction

Let B denote standard d-dimensional Brownian motion and suppose that \(f:[0,1] \rightarrow {\mathbb {R}}^d\) is continuous. Recall that for f in the Dirichlet space

$$\begin{aligned} D[0,1]= \left\{ f \in C[0,1]: \exists g \in {\mathbf {L}}^2[0,1] \hbox { s.t. } f(t) = \int _{0}^{t} g(s) ds, \forall t \in [0,1] \right\} , \end{aligned}$$

the Cameron–Martin theorem ensures that the law of \(B+f\) is mutually absolutely continuous w.r.t. the law of B (see, e.g., [13], Chapter 1). If f is Hölder with exponent 1/2, then the fractal properties of \(B+f\) are still quite similar to those of B, see [15], but for less smooth functions f, they can be markedly different.

Our main goal is to answer the following three questions.

  1.

    In [15] the authors showed that the Hausdorff dimension \(\dim \hbox {Gr}(B+f)\) of the graph of \(B+f\) is almost surely constant. How can this constant be determined explicitly from f?

  2.

    Let \(d=1\). Is there a continuous function f such that the inequality established in [15], \(\dim (\hbox {Gr}(B+f)) \ge \max \{\dim (\hbox {Gr}(B)), \dim (\hbox {Gr}(f)) \}\) a.s., is strict? For Minkowski (\(=\)Box) dimension \(\dim _M\) in place of Hausdorff dimension, the corresponding inequality is an equality for all continuous f, see [6].

  3.

    Falconer [7] and Solomyak [16] showed that for almost all parameters in the construction of a self-affine set K, the Hausdorff dimension \(\dim K\) and the Minkowski dimension \(\dim _M K\) coincide. Earlier, McMullen [12] and Bedford [2] exhibited a special class of self-affine sets K with \(\dim (K)<\dim _M (K)\). Is this strict inequality robust under some class of perturbations, at least when K is the graph of a function?

We will study fractal properties of graphs and images in a more general setting. Let X be a fractional Brownian motion in \({\mathbb {R}}^d\) and f a Borel measurable function. We will express the dimension of both the image and the graph of \(X+f\) in terms of the so-called parabolic Hausdorff dimension of the graph of f, which was used by Taylor and Watson in [17] in order to determine polar sets for the heat equation, following the seminal work of Watson [18]. We start by introducing some notation and then give the definition of the parabolic Hausdorff dimension.

For a function \(h:[0,1]\rightarrow {\mathbb {R}}^d\) we denote by \(\hbox {Gr}_{A}(h)=\{(t,h(t)): t\in A\}\) the graph of h over the set A and by \(\mathcal {R}_{A}(h) = \{h(t): t\in A\}\) the image of A under h. We write simply \(\hbox {Gr}(h)=\hbox {Gr}_{[0,1]}(h)\).

Definition 1.1

Let \(A\subseteq {\mathbb {R}}_+ \times {\mathbb {R}}^d\) and \(H\in [0,1]\). For all \(\beta >0\) the H-parabolic \(\beta \)-dimensional Hausdorff content is defined by

$$\begin{aligned} \Psi ^\beta _H(A) = \inf \left\{ \sum _{j}\delta _j^\beta : \ A \subseteq \cup _j [a_j,a_j+\delta _j] \times [b_{j,1}, b_{j,1} + \delta _j^H] \times \cdots \times [b_{j,d},b_{j,d} + \delta _j^H] \right\} , \end{aligned}$$

where the infimum is taken over all covers of A by rectangles of the form given above. The H-parabolic Hausdorff dimension is then defined to be

$$\begin{aligned} \dim _{\Psi ,H}(A) = \inf \left\{ \beta : \Psi ^\beta _H(A) =0\right\} . \end{aligned}$$

This was employed for \(H=1/2\) by Taylor and Watson [17] in their study of polar sets for the heat equation.

We are now ready to state our main result which gives the dimension of the graph and the image of \(X+f\) in terms of \(\dim _{\Psi ,H}(\hbox {Gr}(f))\). We write \(\dim (A)\) for the Hausdorff dimension of the set A.

Theorem 1.2

Let X be a fractional Brownian motion in \({\mathbb {R}}^d\) of Hurst index H, let \(f:[0,1]\rightarrow {\mathbb {R}}^d\) be a Borel measurable function and A a Borel subset of [0, 1]. If \(\alpha = \dim _{\Psi ,H}(\hbox {Gr}_{A}(f))\), then almost surely

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f))&= \min \{ \alpha /H, \alpha + d(1-H)\} \quad \mathrm{and} \quad \dim (\mathcal {R}_{A}(X+f))\\&= \min \{\alpha /H,d\}. \end{aligned}$$

Remark 1.3

Note that when \(d=1\) and \(A=[0,1]\), then the minimum in the expressions above is always the second term.
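As a quick consistency check (our addition, an immediate consequence rather than a statement from the paper), taking \(f\equiv 0\) recovers the classical dimension formulas for fractional Brownian motion:

```latex
% Sanity check: f \equiv 0 and A = [0,1].
% Covering [0,1] \times \{0\}^d by 1/\delta parabolic boxes of time-length
% \delta costs \sum_j \delta^\beta = \delta^{\beta-1}, which tends to 0
% iff \beta > 1, while any cover must have \sum_j \delta_j \ge 1; hence
% \dim_{\Psi,H}(\mathrm{Gr}(0)) = 1, and Theorem 1.2 with \alpha = 1 gives
\dim \mathrm{Gr}(X) = \min\{1/H,\ 1 + d(1-H)\},
\qquad
\dim \mathcal{R}(X) = \min\{1/H,\ d\},
```

which are the well-known almost sure dimensions of the graph and image of fractional Brownian motion.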

We prove Theorem 1.2 in Sect. 2. We now define a class of self-affine sets analysed by Bedford [2] and McMullen [12].

Definition 1.4

Let \(n> m\) be two positive integers and \(D\subseteq \{0,\ldots , n-1\}\times \{0,\ldots ,m-1\}\). We call D a pattern. The self-affine set corresponding to the pattern D is defined to be

$$\begin{aligned} K(D) = \left\{ \sum _{k=1}^{\infty } (a_kn^{-k}, b_k m^{-k}): (a_k,b_k) \in D \ \hbox { for all } \ k\right\} . \end{aligned}$$

For \(0\le j\le m-1\) we write \(r(j) = |\{i: (i,j) \in D\}|\) for the number of rectangles of the pattern D in row j.

Corollary 1.5

Let X be a fractional Brownian motion in \({\mathbb {R}}\) of Hurst index H. Let \(D\subseteq \{0,\ldots ,n-1\}\times \{0,\ldots ,m-1\}\) be a pattern such that there exists \(f:[0,1]\rightarrow [0,1]\) with \(\hbox {Gr}(f) = K(D)\) and \(\log _n(m) < H\). Then almost surely

$$\begin{aligned} \dim (\hbox {Gr}(X+f)) = 1-H + H\log _m\left( \sum _{j=0}^{m-1} r(j)^{\log _n (m)/H} \right) . \end{aligned}$$
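The right-hand side of Corollary 1.5 is straightforward to evaluate numerically. The sketch below is our illustration: the row counts \(r(0)=5\), \(r(1)=1\) and the grid parameters \(n=6\), \(m=2\) are assumptions inferred from the constants of Corollary 1.6, not data stated at this point.

```python
import math

def graph_dim(r, n, m, H):
    """Right-hand side of Corollary 1.5:
    1 - H + H * log_m( sum_j r(j)^{log_n(m)/H} ),
    valid when log_n(m) < H."""
    assert math.log(m, n) < H, "Corollary 1.5 requires log_n(m) < H"
    s = sum(rj ** (math.log(m, n) / H) for rj in r)
    return 1 - H + H * math.log(s, m)

# Assumed pattern data (see lead-in): a 6 x 2 grid with 5 rectangles in one
# row and 1 in the other; H = 1/2 corresponds to standard Brownian motion.
val = graph_dim([5, 1], n=6, m=2, H=0.5)

theta = math.log(2, 6)  # the Hoelder exponent of f in Corollary 1.6
# Corollary 1.6 states dim Gr(B+f) = (1 + log_2(5^{2 theta} + 1)) / 2.
cor16 = (1 + math.log(5 ** (2 * theta) + 1, 2)) / 2
print(val, cor16)
```

With these row counts the general formula of Corollary 1.5 reproduces the constant \(1.5807\ldots \) appearing in Corollary 1.6 and Remark 1.7.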

In [15, Theorems 1.8 and 1.9] it was shown that if B is a standard Brownian motion and \(f:[0,1]\rightarrow {\mathbb {R}}^d\) for \(d\ge 1\) is a continuous function, then almost surely

$$\begin{aligned}&\dim (\hbox {Gr}(B+f)) \ge \max \{\dim (\hbox {Gr}(f)),\dim (\hbox {Gr}(B))\}\quad \hbox { and } \\&\dim (\mathcal {R}(B+f)) \ge \max \{\dim (\mathcal {R}(f)),\dim (\mathcal {R}(B))\}. \end{aligned}$$

In the same paper it was shown that in dimensions 3 and above there exist continuous functions f such that the Hausdorff dimension of the image and the graph are strictly larger than the maxima given above. In dimension 1 though, the question of finding a continuous function f with \(\dim (\hbox {Gr}(B+f)) >\dim (\hbox {Gr}(f))\) remained open.

Fig. 1 The patterns A and B used in each iteration

Fig. 2 Finite approximations of \(\hbox {Gr}(f)\)

As an application of Theorem 1.2 for the case of the graph we give an example of a function f which is Hölder continuous with parameter \(\log 2/\log 6<1/2\) and for which we can calculate exactly the parabolic Hausdorff dimension. The patterns used in each iteration of the construction of the graph of f are depicted in Fig. 1 and the first few approximations to the graph of f are shown in Fig. 2. We defer the formal definition to Sect. 3 where we also calculate the parabolic dimension of the graph of f.

Corollary 1.6

Let B be a standard Brownian motion in one dimension. Then there exists a function \(f:[0,1]\rightarrow [0,1]\) (the first approximations to its graph are depicted in Fig. 2) which is Hölder continuous of parameter \(\theta = \log 2/\log 6\), whose graph is a self-affine set with \(\dim _M(\hbox {Gr}(f))=1+\log 3/ \log 6\), and which satisfies almost surely

$$\begin{aligned} \dim (\hbox {Gr}(B+f)) = \frac{1+ \log _2(5^{2\theta } + 1)}{2} > \max \left\{ \dim (\hbox {Gr}(f)), \frac{3}{2} \right\} = \log _2(5^\theta + 1). \end{aligned}$$

We prove Corollaries 1.5 and 1.6 in Sect. 3.

Remark 1.7

If K(D) is a self-affine set corresponding to the pattern D and \(r(j)\ge 1\) for all j, then McMullen [12] showed

$$\begin{aligned} \dim _M({K(D)}) = 1 + \log _n\frac{|D|}{m} \quad \hbox {and} \quad \dim (K(D)) = \log _m\left( \sum _{j=0}^{m-1} r(j)^{\log _n m} \right) . \end{aligned}$$
(1.1)
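To illustrate the two formulas in (1.1), consider the classical Bedford–McMullen pattern with \(n=3\), \(m=2\) and \(D=\{(0,0),(1,1),(2,0)\}\); this is an illustrative choice of ours (any pattern with every row non-empty works), and a short computation confirms the strict inequality \(\dim (K(D)) < \dim _M(K(D))\) behind Question 3.

```python
import math

def minkowski_dim(r, n, m):
    # dim_M(K(D)) = 1 + log_n(|D|/m), assuming r(j) >= 1 for every row j
    return 1 + math.log(sum(r) / m, n)

def hausdorff_dim(r, n, m):
    # dim(K(D)) = log_m( sum_j r(j)^{log_n m} )
    return math.log(sum(rj ** math.log(m, n) for rj in r), m)

# Rows of the pattern D above: row 0 contains (0,0) and (2,0), row 1 contains (1,1).
r = [2, 1]
dm = minkowski_dim(r, 3, 2)   # 1 + log_3(3/2), approximately 1.3690
dh = hausdorff_dim(r, 3, 2)   # log_2(2^{log_3 2} + 1), approximately 1.3497
print(dm, dh, dm > dh)
```

The strict gap between the two values is exactly the phenomenon exhibited by McMullen [12] and Bedford [2].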

From [6, Theorem 1.8] we have that if \(h:[0,1]\rightarrow {\mathbb {R}}\) is a continuous function, then almost surely

$$\begin{aligned} \dim _M(\hbox {Gr}(B+h)) = \max \{ \dim _M(\hbox {Gr}(h)), \dim _M(\hbox {Gr}(B))\}. \end{aligned}$$
(1.2)

The proof of (1.1) applies to the graph of f, where f is the function of Corollary 1.6. In conjunction with (1.2), this gives that almost surely

$$\begin{aligned} \dim _M({\hbox {Gr}(B+f)}) = \dim _M({\hbox {Gr}(f)}) = 1 + \log _6 3 = 1.6131 \ldots > \dim (\hbox {Gr}(B+f)) = 1.5807 \ldots \end{aligned}$$

This shows that despite the Brownian perturbations, the Hausdorff and Minkowski dimensions still disagree (as is the case for the graph of f). Comparisons of Hausdorff and Minkowski dimensions for other self-affine graphs perturbed by Brownian motion are in Sect. 4.

Related work Khoshnevisan and Xiao [11] also employ the parabolic dimension of Taylor and Watson [17] to determine the Hausdorff dimension of the image of Brownian motion intersected with a compact set. The problem of estimating the dimension of fractional Brownian motion with drift was studied by Bayart and Heurteaux [1] (the case of Brownian motion was considered in [15]). These papers obtain upper and lower bounds for the dimension which differ in general. The lower bounds are proved by the energy method. The novel aspect of Theorem 1.2 is that it gives an exact expression for the dimension of the graph of \(X+f\) valid for all Borel functions f.

2 Dimension of \(\hbox {Gr}(X+f)\) and \(\mathcal {R}(X+f)\)

In this section we prove Theorem 1.2. We start with an easy preliminary lemma that relates the parabolic Hausdorff dimension to Hausdorff dimension.

Note that for functions f, g we write \(f(n)\lesssim g(n)\) if there exists a constant \(c>0\) such that \(f(n)\le cg(n)\) for all n. We write \(f(n)\gtrsim g(n)\) if \(g(n)\lesssim f(n)\).

Lemma 2.1

For all \(A\subseteq {\mathbb {R}}_+\times {\mathbb {R}}^d\) we have

$$\begin{aligned} \dim (A) \le \left( \dim _{\Psi ,H}(A) + d(1-H) \right) \wedge \frac{\dim _{\Psi ,H}(A)}{H}. \end{aligned}$$

Proof

For \(\beta >0\) we let \({\mathcal {H}}_\beta (A)\) be the \(\beta \)-Hausdorff content of A, i.e.

$$\begin{aligned} {\mathcal {H}}_\beta (A) = \inf \left\{ \sum _j |E_j|^\beta : A\subseteq \cup _j E_j\right\} . \end{aligned}$$

Let \(\varepsilon >0\) and \(0<\eta <1\). We set \(\beta = \frac{\dim _{\Psi ,H}(A)}{H}+\frac{\varepsilon }{H}\) and \(\gamma = \dim _{\Psi ,H}(A) + d(1-H) +\varepsilon \). Then \(\Psi _{H}^{\beta H}(A) =0\), and hence there exists a cover \(([a_j,a_j+\delta _j] \times [b_{j,1},b_{j,1}+\delta _j^H]\times \cdots \times [b_{j,d},b_{j,d}+\delta _j^H])_j\) of the set A, such that

$$\begin{aligned} \sum _{j} \delta _j^{\beta H} < \eta . \end{aligned}$$
(2.1)

From (2.1) it follows that \(\delta _j<1\) for all j, and hence the diameter of every set in the above cover of A is at most \(\sqrt{d+1} \delta _j^H\). Therefore we obtain

$$\begin{aligned} {\mathcal {H}}_\beta (A)\le \sum _{j} (d+1)^{\beta /2}({\delta _j}^H)^\beta = (d+1)^{\beta /2}\sum _j \delta _{j}^{\beta H} <(d+1)^{\beta /2}\eta , \end{aligned}$$
(2.2)

where in the last step we used (2.1). Each interval \([b_{j,i},b_{j,i}+\delta _j^H]\) can be divided into \(\delta _j^{H-1}\) intervals of length \(\delta _j\) each. (We omit integer parts to lighten the notation.) In this way we obtain a new cover of the set A which satisfies

$$\begin{aligned} {\mathcal {H}}_\gamma (A) \le \sum _j \delta _j^{(H-1)d} \delta _j^\gamma (d+1)^{\gamma /2}= (d+1)^{\gamma /2}\sum _j \delta _j^{\beta H}<(d+1)^{\gamma /2}\eta . \end{aligned}$$
(2.3)

From (2.2) and (2.3) we deduce that

$$\begin{aligned} \dim (A) \le \beta \wedge \gamma = \left( \frac{\dim _{\Psi ,H}(A)}{H} + \frac{\varepsilon }{H}\right) \wedge (\dim _{\Psi ,H}(A) + d(1-H) + \varepsilon ). \end{aligned}$$

Therefore letting \(\varepsilon \) go to 0 we conclude

$$\begin{aligned} \dim (A) \le \left( \dim _{\Psi ,H}(A) + d(1-H) \right) \wedge \frac{\dim _{\Psi ,H}(A) }{H} \end{aligned}$$

and this finishes the proof. \(\square \)

Lemma 2.2

Let \(f:[0,1]\rightarrow {\mathbb {R}}^d\) be a Borel measurable function. Then for all Borel sets \(A\subseteq [0,1]\) almost surely

$$\begin{aligned} \dim _{\Psi ,H}(\hbox {Gr}_{A}(X+f)) = \dim _{\Psi ,H}(\hbox {Gr}_{A}(f)). \end{aligned}$$

Proof

Since X is a fractional Brownian motion of Hurst index H, it follows that it is almost surely Hölder continuous on [0, 1] of parameter \(H-\varepsilon \) for all \(\varepsilon >0\) (see for instance [8, Section 18]). Therefore, for \(\zeta >0\) there exists a random constant \(C>1\) such that almost surely for all \(s,t\in [0,1]\) we have

$$\begin{aligned} \left\| X_s- X_t \right\| \le C |t-s|^{H-\zeta }. \end{aligned}$$
(2.4)

Let \(\varepsilon ,\eta >0\). We set \(\alpha = \dim _{\Psi ,H}(\hbox {Gr}_{A}(f))\). Then \(\Psi _H^{\alpha +\varepsilon }(\hbox {Gr}_{A}(f))=0\), and hence there exists a cover \(([a_j,a_j+\delta _j]\times [b_{j,1},b_{j,1}+\delta _j^H]\times \cdots \times [b_{j,d},b_{j,d}+\delta _j^H])_j\) of \(\hbox {Gr}_{A}(f)\) such that

$$\begin{aligned} \sum _j \delta _j^{\alpha +\varepsilon } <\eta . \end{aligned}$$
(2.5)

Using this cover we will derive a cover of \(\hbox {Gr}_{A}(X+f)\). By (2.4) if \(t\in [a_j,a_j+\delta _j]\), then

$$\begin{aligned} \Vert X_t - X_{a_j}\Vert \le C \delta _j^{H-\zeta }. \end{aligned}$$

Therefore the collection of sets

$$\begin{aligned} \left( \left[ a_j,a_j+(2C)^{1/H}\delta _j^{1-\zeta /H} \right] \times \left[ r_{j,1}, r_{j,1} +2C\delta _j^{H-\zeta }\right] \times \cdots \times \left[ r_{j,d}, r_{j,d} + 2C\delta _j^{H-\zeta }\right] \right) _j, \end{aligned}$$

where \(r_{j,i}=b_{j,i}+X^i_{a_j} -C\delta _j^{H-\zeta }\) is a cover of \(\hbox {Gr}_{A}(X+f)\). From (2.5) we obtain that for a positive constant c we have

$$\begin{aligned} \Psi _H^{\alpha +2\varepsilon }(\hbox {Gr}_{A}(X+f)) \le (2C)^{(\alpha +2\varepsilon )/H}\sum _{j} \left( \delta _j^{1-\zeta /H} \right) ^{\alpha +2\varepsilon } \le c\sum _j \delta _j^{\alpha +\varepsilon } <c \eta , \end{aligned}$$

where the penultimate inequality follows by choosing \(\zeta >0\) sufficiently small and c is a positive constant. We thus showed that almost surely \(\Psi _H^{\alpha +2\varepsilon }(\hbox {Gr}_{A}(X+f)) =0\) for all \(\varepsilon >0\), which implies that almost surely

$$\begin{aligned} \dim _{\Psi ,H}(\hbox {Gr}_{A}(X+f)) \le \alpha . \end{aligned}$$

The other inequality follows in the same way and this concludes the proof. \(\square \)

Lemmas 2.1 and 2.2 give the following:

Corollary 2.3

Let \(f:[0,1]\rightarrow {\mathbb {R}}^d\) be a Borel measurable function and \(A\subseteq [0,1]\) a Borel set. Then almost surely we have

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f)) \le \frac{\dim _{\Psi ,H}(\hbox {Gr}_{A}(f))}{H} \wedge \left( \dim _{\Psi ,H}(\hbox {Gr}_{A}(f)) + d(1-H)\right) . \end{aligned}$$

We now recall the definition of the capacity of a set.

Definition 2.4

Let \(K:{\mathbb {R}}^d \rightarrow [0,\infty ]\) and A a Borel set in \({\mathbb {R}}^d\). (Sometimes K is called a difference kernel.) The K-energy of a Borel measure \(\mu \) on \({\mathbb {R}}^d\) is defined to be

$$\begin{aligned} {\mathcal {E}}_K(\mu ) = \int \int K(x-y) \,d\mu (x)d\mu (y) \end{aligned}$$

and the K-capacity of A is defined as

$$\begin{aligned} \hbox {Cap}_{K}(A) = [\inf \{{\mathcal {E}}_K(\mu ): \mu \hbox { a probability measure on } A\}]^{-1}. \end{aligned}$$

When the kernel has the form \(K(x) = |x|^{-\alpha }\), then we write \(\mathcal {E}_\alpha (\mu )\) for \(\mathcal {E}_K(\mu )\) and \(\hbox {Cap}_\alpha (A)\) for \(\hbox {Cap}_K(A)\) and we refer to them as the \(\alpha \)-energy of \(\mu \) and the Riesz \(\alpha \)-capacity of A respectively.

We recall the following theorem which gives the connection between the Hausdorff dimension of a set and its Riesz \(\alpha \)-capacity. For the proof see [5].

Theorem 2.5

(Frostman) For any Souslin set \(A \subset {\mathbb {R}}^d\),

$$\begin{aligned} \dim (A) = \sup \{\alpha : \hbox {Cap}_\alpha (A) >0 \}. \end{aligned}$$
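To make Definition 2.4 and Theorem 2.5 concrete, here is a small numerical sketch (our illustration, not part of the paper): for Lebesgue measure on \(A=[0,1]\) the \(\alpha \)-energy equals \(2/((1-\alpha )(2-\alpha ))\) for \(\alpha <1\) and diverges for \(\alpha \ge 1\), so \(\hbox {Cap}_\alpha ([0,1])>0\) exactly for \(\alpha <1\), consistent with \(\dim ([0,1])=1\).

```python
import math

def riesz_energy_exact(alpha):
    # Closed form of the alpha-energy of Lebesgue measure on [0,1]:
    # int_0^1 int_0^1 |x-y|^{-alpha} dx dy = 2/((1-alpha)(2-alpha)), alpha < 1.
    return 2.0 / ((1 - alpha) * (2 - alpha))

def riesz_energy_midpoint(alpha, N=1000):
    # Midpoint-rule approximation on an N x N grid, skipping the diagonal
    # cells (which carry the integrable singularity at x = y).
    pts = [(i + 0.5) / N for i in range(N)]
    total = 0.0
    for i, x in enumerate(pts):
        for j, y in enumerate(pts):
            if i != j:
                total += abs(x - y) ** (-alpha)
    return total / N ** 2

exact = riesz_energy_exact(0.5)       # 8/3
approx = riesz_energy_midpoint(0.5)   # slightly below, by the diagonal cutoff
print(exact, approx)
```

The finiteness of this energy for every \(\alpha <1\) is what Theorem 2.5 converts into the statement \(\dim ([0,1])=1\).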

Let X be a fractional Brownian motion in \({\mathbb {R}}^d\) of Hurst index H. For \((s,x) \in {\mathbb {R}}_+ \times {\mathbb {R}}^d\) we define the difference kernel

$$\begin{aligned} I_{\gamma ,H}(s,x) = \mathbb {E}\!\left[ \frac{1}{(\Vert X_s+x\Vert ^2 + s^2)^{\gamma /2}}\right] . \end{aligned}$$
(2.6)

Lemma 2.6

Let X be a fractional Brownian motion in \({\mathbb {R}}^d\) of Hurst index H and let \(f:[0,1]\rightarrow {\mathbb {R}}^d\) be a Borel measurable function. Let A be a closed subset of [0, 1]. If \(\hbox {Cap}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f))>0\), then almost surely

$$\begin{aligned} \hbox {Cap}_{\gamma }(\hbox {Gr}_{A}(X+f))>0. \end{aligned}$$

Proof

Since by assumption \(\hbox {Cap}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f))>0\), there exists a probability measure \(\nu _f\) on \(\hbox {Gr}_{A}(f)\) with finite energy, i.e.

$$\begin{aligned}&\int _{\hbox {Gr}_{A}(f)} \int _{\hbox {Gr}_{A}(f)} I_{\gamma ,H}(s-t,f(s)-f(t)) \,d\nu _f(s,f(s)) \,d\nu _f(t,f(t)) \\ {}&\quad = \int _A \int _A I_{\gamma ,H}(s-t,f(s)-f(t)) \,d\nu (s) \,d\nu (t) <\infty , \end{aligned}$$

where \(\nu \) is the measure on A satisfying \(\nu = \nu _f \circ \pi ^{-1}\), where \(\pi ((s,f(s))) = s\) is the projection mapping. We now define a measure \(\widetilde{\nu }\) on \(\hbox {Gr}_{A}(X+f)\) via

$$\begin{aligned} \widetilde{\nu }(B) = \nu (\{t: (t,(X+f)(t)) \in B\}). \end{aligned}$$

We will show that this measure has finite \(\gamma \) energy almost surely. Indeed,

$$\begin{aligned}&\mathbb {E}\!\left[ \int \int \frac{1}{\left\| x-y \right\| ^{\gamma }} \,d\widetilde{\nu }(x) \,d\widetilde{\nu }(y)\right] \\&\quad = \mathbb {E}\!\left[ \int \int \frac{d\nu (s) d\nu (t)}{\left( \left\| (X+f)(t) - (X+f)(s) \right\| ^2 + |t-s|^2 \right) ^{\gamma /2}}\right] \\&\quad = \int \int I_{\gamma ,H}(s-t,f(s)-f(t)) \,d\nu (s) \,d\nu (t) <\infty , \end{aligned}$$

and hence it follows that \({\hbox {Cap}}_{\gamma }(\hbox {Gr}_{A}(X+f))>0\) almost surely. \(\square \)

Lemma 2.7

Let \(f:[0,1]\rightarrow {\mathbb {R}}^d\) be a bounded Borel measurable function and A a closed subset of [0, 1]. If \(\alpha = \dim _{\Psi ,H}(\hbox {Gr}_{A}(f))\), then

$$\begin{aligned} \min \left\{ \frac{\alpha }{H}, \alpha + d(1-H)\right\} \le \sup \{\gamma : \ {\hbox {Cap}}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f)) > 0\}. \end{aligned}$$

Before proving Lemma 2.7 we show how to bound the kernel \(I_{\gamma ,H}\) from above in three different regimes.

Lemma 2.8

Fix \(M>0\). There exists a positive constant C such that for all \(t\in (0,1/e]\) and all u satisfying \(\Vert u\Vert \le M\), the kernel \(I_{\gamma ,H}\) defined in (2.6) satisfies

$$\begin{aligned} I_{\gamma ,H}(t,u) \lesssim {\left\{ \begin{array}{ll} \left\| u \right\| ^{-\gamma } &{} \hbox {if }\left\| u \right\| >C t^{H}\sqrt{|\log t|},\\ t^{d(1-H)-\gamma } &{}\hbox {if }\left\| u \right\| \le C t^H\sqrt{|\log t|}\text { and }d<\gamma ,\\ t^{-\gamma H} &{}\hbox {if }\left\| u \right\| \le Ct^H\sqrt{|\log t|}\text { and }d>\gamma . \end{array}\right. } \end{aligned}$$

Proof

By scaling invariance of fractional Brownian motion we have

$$\begin{aligned} I_{\gamma ,H}(t,u) = \mathbb {E}\!\left[ \frac{1}{\left( \left\| t^{H} X_1 + u \right\| ^2 + t^2 \right) ^{\gamma /2}}\right] . \end{aligned}$$

Let C be a constant to be determined and let \(\left\| u \right\| > C t^H\sqrt{ |\log t|}\). By the Gaussian tail estimate we have

$$\begin{aligned} \mathbb {P}\!\left( t^H\left\| X_1 \right\| >\frac{\left\| u \right\| }{2}\right) \le 2d e^{-\frac{\left\| u \right\| ^2}{8d^2t^{2H}}} \le 2d e^{-\frac{C^2\log (1/t)}{8}} \le 2d t^{C^2/8}. \end{aligned}$$

On the event \(\{t^H \left\| X_1 \right\| < \left\| u \right\| /2\}\) we have

$$\begin{aligned} \left\| t^H X_1 + u \right\| \ge \left\| u \right\| - t^H \left\| X_1 \right\| \ge \frac{\left\| u \right\| }{2}. \end{aligned}$$

Therefore, taking C sufficiently large we get

$$\begin{aligned} \mathbb {E}\!\left[ \frac{1}{\left( \left\| t^H X_1 + u \right\| ^2 + t^2 \right) ^{\gamma /2}}\right]&\lesssim \mathbb {P}\!\left( t^H\left\| X_1 \right\| >\frac{\left\| u \right\| }{2}\right) \frac{1}{t^{\gamma }} + \mathbb {P}\!\left( t^H\left\| X_1 \right\| \le \frac{\left\| u \right\| }{2}\right) \frac{1}{\left\| u \right\| ^\gamma } \\&\lesssim t^{C^2/8 -\gamma } + \left\| u \right\| ^{-\gamma } \lesssim \left\| u \right\| ^{C^2/8 - \gamma } + \left\| u \right\| ^{-\gamma } \lesssim \left\| u \right\| ^{-\gamma }, \end{aligned}$$

since \(\left\| u \right\| \le M\) and this finishes the proof of the first part. Next, let \(\left\| u \right\| \le C t^H\sqrt{ |\log t|}\). Then

$$\begin{aligned} \mathbb {E}\!\left[ \frac{1}{\left( \left\| t^H X_1 + u \right\| ^2 + t^2 \right) ^{\gamma /2}}\right] = \frac{1}{(2\pi t^{2H})^{d/2}}\int \frac{e^{-\frac{\left\| x \right\| ^2}{2t^{2H}}}}{\left( \left\| x+u \right\| ^2 + t^2 \right) ^{\gamma /2}} \,dx = \frac{1}{(2\pi t^{2H})^{d/2}}\int f(x+u) g(x)\,dx, \end{aligned}$$

where \(f(x) = (\left\| x \right\| ^2+t^2)^{-\gamma /2}\) and \(g(x) = e^{-\left\| x \right\| ^2/(2t^{2H})}\). Since they are both decreasing as functions of \(\left\| x \right\| \), it follows that

$$\begin{aligned} \int (f(x+u) - f(x)) (g(x+u) - g(x)) \,dx \ge 0, \end{aligned}$$
(2.7)

which implies that \(\int f(x)g(x)\,dx\ge \int f(x+u)g(x)\,dx\). Hence, using the change of variables \(x=t^Hy\), this gives

$$\begin{aligned} I_{\gamma ,H}(t,u) \le \frac{1}{(2\pi t^{2H})^{d/2}}\int f(x) g(x)\,dx = c_1 \int _{{\mathbb {R}}^d} \frac{e^{-\left\| y \right\| ^2/2}}{\left( t^{2H}\left\| y \right\| ^2 + t^2 \right) ^{\gamma /2}} \,dy, \end{aligned}$$

where \(c_1\) is a positive constant. If \(d>\gamma \), then bounding the integrand by \(t^{-\gamma H}\left\| y \right\| ^{-\gamma } e^{-\left\| y \right\| ^2/2}\), which is integrable since \(\gamma <d\), we deduce that

$$\begin{aligned} I_{\gamma ,H}(t,u) \lesssim t^{-\gamma H}, \end{aligned}$$

while when \(d<\gamma \), splitting the last integral at \(\left\| y \right\| = t^{1-H}\) gives

$$\begin{aligned} I_{\gamma ,H}(t,u) \lesssim t^{d(1-H)-\gamma } \end{aligned}$$

and this concludes the proof of the lemma. \(\square \)
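The regimes of Lemma 2.8 can be checked numerically in simple cases. The sketch below (our illustration) takes \(d=1\), \(H=\gamma =1/2\) and \(u=0\), so the third case applies, and verifies that \(t^{\gamma H} I_{\gamma ,H}(t,0)\) stays bounded as \(t\downarrow 0\).

```python
import math

def I_kernel(t, gamma=0.5, H=0.5, N=20001):
    """Midpoint quadrature on [-10, 10] for
    E[(|t^H Z|^2 + t^2)^{-gamma/2}] with Z ~ N(0,1), i.e. d = 1."""
    h = 20.0 / N
    total = 0.0
    for i in range(N):
        z = -10.0 + (i + 0.5) * h
        dens = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
        total += dens * (t ** (2 * H) * z * z + t * t) ** (-gamma / 2.0) * h
    return total

# Third case of Lemma 2.8 (d = 1 > gamma = 1/2): I_{gamma,H}(t,0) <~ t^{-gamma*H},
# so the rescaled values below should stay bounded as t decreases.
ratios = [I_kernel(t) * t ** 0.25 for t in (1e-2, 1e-3, 1e-4)]
print(ratios)
```

The rescaled values increase slowly towards a finite limit (the limit is \({\mathbb {E}}|Z|^{-1/2}\), by dominated convergence), in line with the stated bound.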

The next theorem is the analogue of Frostman’s theorem for parabolic Hausdorff dimension. The statement can be found in Taylor and Watson [17, Lemma 4] and the proof follows along the same lines as the proof of Frostman’s theorem for Hausdorff dimension. We include the statement here for the reader’s convenience.

Theorem 2.9

(Frostman’s theorem) Let A be a Borel set. If \(\dim _{\Psi ,H}(A) >\beta \), then there exists a Borel probability measure \(\mu \) supported on A such that

$$\begin{aligned} \mu ([a,a+\delta ] \times [b_{1},b_{1}+ \delta ^H] \times \cdots \times [b_{d},b_{d} + \delta ^H]) \le C\delta ^\beta \end{aligned}$$

for all \(a\ge 0\), \(b_1,\ldots ,b_d \in {\mathbb {R}}\) and \(\delta \in (0,1]\), where C is a positive constant.

We now give the proof of Lemma 2.7.

Proof of Lemma 2.7

Let \(\beta = \alpha - \varepsilon /2\). Since the graph of a Borel function is always a Borel set, it follows by Theorem 2.9 that there exists a probability measure \(\mu \) supported on \(\hbox {Gr}_{A}(f)\) such that

$$\begin{aligned} \mu ([a,a+\delta ] \times [b_1,b_1 + \delta ^H] \times \cdots \times [b_d,b_d + \delta ^H]) \le c_2 \delta ^\beta . \end{aligned}$$
(2.8)

From this it follows that the measure \(\mu \) is non-atomic. Suppose first that \(\min \{\alpha /H, \alpha +d(1-H)\} = \alpha /H\). Let \(\gamma = \beta /H - \varepsilon <d\). We show that \(\hbox {Cap}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f))>0\). It suffices to prove that

$$\begin{aligned} {\mathcal {E}}_{I_{\gamma ,H}}(\mu ) < \infty . \end{aligned}$$
(2.9)

Since \(\gamma <d\) and f is bounded on [0, 1], if we define

$$\begin{aligned} I_1&= \mathop {\iint }\limits _{\left\| f(s)-f(t) \right\| \le C|t-s|^H\sqrt{|\log |t-s||}} |t-s|^{-\gamma H} \,d\mu ((s,f(s)))\, d\mu ((t,f(t))) \quad \hbox {and} \\ I_2&= \mathop {\iint }\limits _{\left\| f(s)-f(t) \right\| > C|t-s|^H\sqrt{|\log |t-s||}} \left\| f(s)-f(t) \right\| ^{-\gamma } \,d\mu ((s,f(s)))\, d\mu ((t,f(t))), \end{aligned}$$

where both integrals are restricted to pairs with \(|t-s|\le 1/e\) and C is the constant of Lemma 2.8, then from Lemma 2.8 we get that

$$\begin{aligned} {\mathcal {E}}_{I_{\gamma ,H}}(\mu )&= \iint \mathbb {E}\!\left[ \frac{1}{( \left\| X_s - X_t + f(s) - f(t) \right\| ^2 + |t-s|^2 )^{\gamma /2}}\right] \\&\quad \times \, d\mu ((s,f(s))) d\mu ((t,f(t))) \lesssim e^{\gamma } + I_1 + I_2. \end{aligned}$$

We first show that \(I_1<\infty \). Since \(\mu \) is non-atomic, we have

$$\begin{aligned} I_1 \le \sum _{k=0}^{\infty } 2^{k\gamma H} \mu \otimes \mu \{2^{-k} \le |t-s| < 2^{-k+1}, \left\| f(t) - f(s) \right\| \le 2C2^{-k H}\sqrt{k} \}. \end{aligned}$$
(2.10)

Let \(M = \max _{t\in [0,1]} \left\| f(t) \right\| <\infty \). Then the measure \(\mu \) is supported on \([0,1]\times [-M,M]^d\). For \(k>0\), we partition the space \([0,1]\times [-M,M]^d\) into rectangles of size \(2^{-k}\times 2^{-k H}\times \cdots \times 2^{-k H}\). We let \({\mathcal {D}}_k\) be the collection of rectangles of generation k. For two rectangles \(Q,Q'\) of the same generation we write \(Q\sim Q'\) if there exist \((s,x) \in Q, (t,y) \in Q'\) such that \(2^{-k} \le |s-t| < 2^{-k+1}\) and \(\left\| x-y \right\| \le 2C2^{-k H} \sqrt{k}\). Then from (2.10) we obtain

$$\begin{aligned} I_1 \le \sum _{k=0}^{\infty } 2^{k\gamma H} \sum _{\begin{array}{c} Q,Q' \in {\mathcal {D}}_k \\ Q \sim Q' \end{array}} \mu \otimes \mu (Q\times Q') = \sum _{k=0}^{\infty } 2^{k\gamma H} \sum _{\begin{array}{c} Q,Q' \in {\mathcal {D}}_k \\ Q \sim Q' \end{array}} \mu (Q) \mu (Q'). \end{aligned}$$

We now notice that if we fix \(Q\in {\mathcal {D}}_k\), then the number of \(Q'\) such that \(Q\sim Q'\) is up to constants \(k^{d/2}\). Using the obvious inequality

$$\begin{aligned} \mu (Q) \mu (Q') \le \frac{1}{2}(\mu (Q)^2 + \mu (Q')^2) \end{aligned}$$
(2.11)

and (2.8) to get \(\mu (Q)\le c_2 2^{-k\beta }\) we deduce

$$\begin{aligned} I_1 \lesssim \!\sum _{k=0}^{\infty } 2^{k\gamma H} k^{d/2} \sum _{Q \in {\mathcal {D}}_k} \mu (Q)^2 \lesssim \!\sum _{k=0}^{\infty } 2^{k\gamma H} k^{d/2} 2^{-k\beta }\sum _{Q \in {\mathcal {D}}_k} \mu (Q) = \!\sum _{k=0}^{\infty } \frac{k^{d/2}}{2^{k\varepsilon H}}<\infty , \end{aligned}$$

since \(\sum _{Q\in {\mathcal {D}}_k} \mu (Q)=1\) as \(\mu \) is a probability measure. It remains to show that \(I_2<\infty \). By defining a new equivalence relation on rectangles in \({\mathcal {D}}_k\), i.e. that \(Q\sim Q'\) if there exist \((s,x)\in Q, (t,y)\in Q'\) such that \(|t-s|\le 2^{-k}\) and \(2^{-k H}\le \left\| f(t)-f(s) \right\| <2^{-kH + H}\) we get

$$\begin{aligned} I_2&\lesssim \sum _{k=0}^{\infty } 2^{k\gamma H}\mu \otimes \mu \{2^{-k H}\le \left\| f(t)- f(s) \right\| \lesssim 2^{-k H + H}, |t-s| \le 2^{-k}\} \\&\lesssim \sum _{k=0}^{\infty } 2^{k\gamma H} 2^{-k\beta } <\infty , \end{aligned}$$

where we used (2.11) again and the fact that the number of \(Q'\in {\mathcal {D}}_k\) such that \(Q\sim Q'\) is of order 1. This completes the proof in the case when \(\alpha /H\le d\).

Suppose now that \(\alpha /H> d\). Take \(\varepsilon >0\) small enough such that \(\alpha -2\varepsilon >dH\) and set \(\beta = \alpha - \varepsilon \). Let \(\gamma = \beta + d(1-H) - \varepsilon >d\). Then using the measure \(\mu \) from (2.8) and following the same steps as above we can write the same expression for the energy. Then, since \(\gamma >d\), the quantity \(I_1\) in view of Lemma 2.8 is bounded by

$$\begin{aligned} I_1 \le \sum _{k=0}^{\infty } 2^{-k(d(1-H)-\gamma )} \mu \otimes \mu \{2^{-k} \le |t-s| < 2^{-k+1}, \left\| f(t) - f(s) \right\| \le 2C2^{-k H}\sqrt{k} \}. \end{aligned}$$

Following the same steps as earlier we deduce

$$\begin{aligned} I_1 \lesssim \sum _{k=0}^{\infty } 2^{-k(d(1-H) - \gamma )} k^{d/2} 2^{-k\beta } = \sum _{k=0}^{\infty } 2^{-k\varepsilon } k^{d/2}<\infty . \end{aligned}$$

For the quantity \(I_2\) in the same way as above we have

$$\begin{aligned} I_2\lesssim \sum _{k=0}^{\infty } 2^{k\gamma H} 2^{-k\beta } = \sum _{k=0}^{\infty } 2^{-k((1-H)(\alpha - d H) +2\varepsilon H -\varepsilon )}<\infty , \end{aligned}$$

since \(\alpha -2\varepsilon >dH\) and this completes the proof of the lemma. \(\square \)

Proof of Theorem 1.2

(dimension of the graph)

We first assume that f is bounded. We set \(\alpha =\dim _{\Psi ,H}(\hbox {Gr}_{A}(f))\). In view of Corollary 2.3 we only need to show that almost surely

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f)) \ge \alpha /H \wedge \left( \alpha + d(1-H) \right) . \end{aligned}$$
(2.12)

Observe that

$$\begin{aligned} \inf \{\gamma : \ \hbox {Cap}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f)) =0\} = \sup \{\gamma : \ \hbox {Cap}_{I_{\gamma ,H}}(\hbox {Gr}_{A}(f)) >0\} = \gamma _*. \end{aligned}$$

Let \(\gamma _n\) be such that \(\hbox {Cap}_{I_{\gamma _n,H}}(\hbox {Gr}_{A}(f))>0\) and \(\gamma _n\rightarrow \gamma _*\) as \(n\rightarrow \infty \). Then by Lemma 2.6 we get that for all n a.s. \(\hbox {Cap}_{\gamma _n}(\hbox {Gr}_{A}(X+f))>0\), and hence a.s.

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f)) \ge \gamma _n \ \hbox { for all } \ n, \end{aligned}$$

which gives that almost surely \(\dim (\hbox {Gr}_{A}(X+f))\ge \gamma _*\). This combined with Lemma 2.7 implies that almost surely

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f)) \ge \min \left\{ \frac{\alpha }{H},\left( \alpha +d(1-H)\right) \right\} \end{aligned}$$

and this concludes the proof in the case when f is bounded. For the general case, we define the increasing sequence of sets \(A_n = \{s\in A: |f(s)| \le n\}\). Then by the countable stability property of Hausdorff and parabolic dimension we have

$$\begin{aligned} \dim (\hbox {Gr}_{A}(X+f)) = \sup _{n} \dim (\hbox {Gr}_{A_n}(X+f)) \quad \hbox {and} \quad \dim _{\Psi ,H}(\hbox {Gr}_{A_n}(f))\uparrow \dim _{\Psi ,H}(\hbox {Gr}_{A}(f)). \end{aligned}$$
(2.13)

From above we have

$$\begin{aligned} \dim (\hbox {Gr}_{A_n}(X+f))= \min \left\{ \frac{\dim _{\Psi ,H}(\hbox {Gr}_{A_n}(f))}{H} , \dim _{\Psi ,H}(\hbox {Gr}_{A_n}(f))+d(1-H)\right\} . \end{aligned}$$

Using this and (2.13) proves the theorem in the general case. \(\square \)

Proof of Theorem 1.2

(dimension of the image)

As in the proof of Theorem 1.2 in the case of the graph, we can assume that f is bounded. The general case follows exactly in the same way as for the graph.

The dimension of the image satisfies

$$\begin{aligned} \dim (\mathcal {R}_{A}(X+f))\le & {} \dim (\hbox {Gr}_{A}(X+f))\le \frac{\dim _{\Psi ,H}(\hbox {Gr}_{A}(X+f))}{H} \\= & {} \frac{\dim _{\Psi ,H}(\hbox {Gr}_{A}(f))}{H} =\frac{\alpha }{H}, \end{aligned}$$

where the second inequality follows from Lemma 2.1 and the first equality follows from Lemma 2.2. Hence the upper bound on the dimension of \(\mathcal {R}_{A}(X+f)\) is immediate. It only remains to show the lower bound. Let \(\beta = (\alpha \wedge d H) -\varepsilon H\) and \(\gamma = \beta /H - \varepsilon \). Then since the image of a Borel set under a Borel measurable function is a Souslin set (see for instance [9]), it follows from Theorem 2.5 that it suffices to show that \(\hbox {Cap}_{\gamma }(\mathcal {R}_{A}(X+f)) >0\), i.e. it is enough to find a measure of finite \(\gamma \)-energy. By Theorem 2.9 there exists a probability measure \(\mu \) on \(\hbox {Gr}_{A}(f)\) such that

$$\begin{aligned} \mu ([a,a+\delta ] \times [b_1, b_1 + \delta ^H] \times \cdots \times [b_d, b_d + \delta ^H]) \le C\delta ^\beta . \end{aligned}$$

Let \(\pi \) be the projection mapping from \(\hbox {Gr}_{A}(f)\) to A, i.e. \(\pi ((s,f(s)))=s\) for all s. Let \(\nu \) be the measure on A such that

$$\begin{aligned} \nu = \mu \circ \pi ^{-1}. \end{aligned}$$

Let \(\widetilde{\mu }\) be a measure on \(\mathcal {R}_{A}(X+f)\) given by

$$\begin{aligned} \widetilde{\mu }(R) = \nu ((X+f)^{-1}(R)) \end{aligned}$$

where \(R \subseteq \mathcal {R}_{A}(X+f)\). We will show that almost surely

$$\begin{aligned} {\mathcal {E}}_\gamma (\widetilde{\mu }) = \int \int \frac{d\widetilde{\mu }(x) d\widetilde{\mu }(y)}{\left\| x-y \right\| ^{\gamma }} <\infty . \end{aligned}$$

Taking expectations we get

$$\begin{aligned} \mathbb {E}\!\left[ {\mathcal {E}}_{\gamma }(\widetilde{\mu })\right] = \int \int \mathbb {E}\!\left[ \frac{1}{\left\| X_s - X_t + f(s) - f(t) \right\| ^\gamma }\right] d\mu ((s,f(s))) d\mu ((t,f(t))). \end{aligned}$$

We now show that

$$\begin{aligned} \mathbb {E}\!\left[ \frac{1}{\left\| X_s - X_t + f(s) - f(t) \right\| ^\gamma }\right] \lesssim \min \{\left\| f(t) - f(s) \right\| ^{-\gamma }, |t-s|^{-\gamma H} \}. \end{aligned}$$
(2.14)

The calculations that lead to (2.14) can be found in the proof of [15, Theorem 1.8], but we include the details here for the convenience of the reader. Using (2.7) we have

$$\begin{aligned} \mathbb {E}\!\left[ \frac{1}{\left\| X_s - X_t + f(s) - f(t) \right\| ^\gamma }\right] \le \mathbb {E}\!\left[ \frac{1}{\left\| X_s - X_t \right\| ^\gamma }\right] \lesssim |t-s|^{-\gamma H}. \end{aligned}$$

We set \(u=(f(s)-f(t))/|s-t|^H\) and we get

$$\begin{aligned} \mathbb {E}\!\left[ \frac{1}{\left\| X_s - X_t + f(s) - f(t) \right\| ^\gamma }\right] = \frac{1}{|t-s|^{\gamma H}} \int _{{\mathbb {R}}^d} \frac{1}{(2\pi )^{d/2} \left\| x+u \right\| ^\gamma } e^{-\left\| x \right\| ^2/2}\,dx. \end{aligned}$$

We now upper bound the last integral appearing above

$$\begin{aligned} \int _{{\mathbb {R}}^d} \frac{1}{\left\| x+u \right\| ^\gamma } e^{-\left\| x \right\| ^2/2}\,dx&= \int _{\left\| x+u \right\| \ge \left\| u \right\| /2} \frac{1}{\left\| x+u \right\| ^\gamma } e^{-\left\| x \right\| ^2/2}\,dx \\&\quad + \int _{\left\| x+u \right\| < \left\| u \right\| /2} \frac{1}{\left\| x+u \right\| ^\gamma } e^{-\left\| x \right\| ^2/2}\,dx \\&\lesssim \frac{1}{\left\| u \right\| ^{\gamma }} + e^{-\left\| u \right\| ^2/8} \int _{\left\| x \right\| <\left\| u \right\| } \frac{1}{\left\| x \right\| ^{\gamma }} \,dx \lesssim \left\| u \right\| ^{-\gamma }, \end{aligned}$$

where in the second term we used that \(\left\| x+u \right\| <\left\| u \right\| /2\) forces \(\left\| x \right\| \ge \left\| u \right\| /2\), and the last step follows from passing to polar coordinates and using the fact that \(d>\gamma \). Therefore multiplying the last upper bound by \(|t-s|^{-\gamma H}\) proves (2.14). We now decompose the energy integral into the two regimes \(\left\| f(t) - f(s) \right\| \le |t-s|^H\) and \(\left\| f(t)-f(s) \right\| >|t-s|^H\). The rest of the argument then follows in the same way as the proof that \(I_1, I_2<\infty \) in Lemma 2.7. \(\square \)
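As an illustration only (not part of the proof), the Gaussian estimate behind (2.14) can be checked numerically. The sketch below, with our own choices of \(d=3\), \(\gamma =1\), sample size and test constants, estimates \(\mathbb {E}\left\| Z+u \right\| ^{-\gamma }\) for a standard Gaussian Z by Monte Carlo and compares it with \(\min \{1,\left\| u \right\| ^{-\gamma }\}\):

```python
import random

# Monte Carlo sanity check (not part of the proof) of the Gaussian bound
# behind (2.14): for d > gamma, E[ ||Z + u||^{-gamma} ] <~ min(1, ||u||^{-gamma})
# for a standard Gaussian Z in R^d. We take d = 3, gamma = 1 so that the
# estimator has finite variance; sample size and constants are our choices.
random.seed(0)
D, GAMMA, N = 3, 1.0, 50_000

def mc_estimate(u_norm):
    """Estimate E[ 1/||Z + u|| ] for ||u|| = u_norm, with u along the first axis."""
    total = 0.0
    for _ in range(N):
        z = [random.gauss(0.0, 1.0) for _ in range(D)]
        z[0] += u_norm  # shift by the drift increment u
        total += (sum(c * c for c in z)) ** (-GAMMA / 2)
    return total / N

estimates = {u: mc_estimate(u) for u in (0.5, 2.0, 5.0, 10.0)}
```

For large \(\left\| u \right\| \) the estimate behaves like \(\left\| u \right\| ^{-1}\), while for small \(\left\| u \right\| \) it stays bounded, as the two cases of (2.14) predict.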

3 Self-affine sets

In this section we give the proofs of Corollaries 1.5 and 1.6. We start by calculating the parabolic Hausdorff dimension of any self-affine set as defined in the Introduction. Then we use Theorem 1.2 to prove Corollary 1.5.

Lemma 3.1

Let \(n>m\) and let \(D\subseteq \{0,\ldots , n-1\} \times \{0,\ldots , m-1\}\) be a pattern. If \(\log _n (m)<H\), then

$$\begin{aligned} \dim _{\Psi ,H}(K(D)) = H\log _m\left( \sum _{j=0}^{m-1}r(j)^{\log _n(m)/H} \right) , \end{aligned}$$

where \(r(j) = \#\{i: (i,j)\in D\}\) denotes the number of rectangles in row j of the pattern.

Before proving this lemma, we state the analogue of Billingsley’s lemma for the parabolic Hausdorff dimension. See Billingsley [3] and Cajar [4] for the proof. We first introduce some notation. Let b be an integer. We define the b-adic rectangles contained in \([0,1]^2\) of generation k to be

$$\begin{aligned} R_{k} = \left[ \frac{(j-1)}{b^{[k/H]}}, \frac{j}{b^{[k/H]}}\right) \times \left[ \frac{(i-1)}{b^{k}}, \frac{i}{b^{k}}\right) , \end{aligned}$$

where j ranges from 1 to \(b^{[k/H]}\) and i ranges from 1 to \(b^{k}\), and we write \(R_k(x)\) for the unique b-adic rectangle of generation k containing x.

Lemma 3.2

(Billingsley’s lemma). Let A be a Borel subset of \([0,1]^2\) and let \(\mu \) be a measure on \([0,1]^2\) with \(\mu (A)>0\). If for all \(x \in A\) we have

$$\begin{aligned} \alpha \le \liminf _{k\rightarrow \infty } \frac{\log ( \mu (R_k(x)))}{\log (b^{-k/H})} \le \beta , \end{aligned}$$

then \(\alpha \le \dim _{\Psi ,H}(A) \le \beta \).

We are now ready to give the proof of Lemma 3.1. The proof follows the steps for the calculation of the Hausdorff dimension of a self-affine set as given in [12] and [14].

Proof of Lemma 3.1

Let \(\theta =\log _n m<H\). For \((x,y) \in [0,1]^2\) we define \(Q_k(x,y)\) to be the closure of the set of points \((x',y')\) such that the first \(\lfloor \theta k/H\rfloor \) digits of \(x'\) and x agree in the n-ary expansion and the first k digits of \(y'\) and y agree in the m-ary expansion. Let \(\pi =(p(d), d\in D)\) be a probability measure on D.

Let \(\mu \) be the image of the product measure \(\pi ^{\otimes {\mathbb {N}}}\) under the map

$$\begin{aligned} R: \{(a_k,b_k)\}_{k\ge 1} \mapsto \sum _{k= 1}^{\infty }\left( a_kn^{-k}, b_km^{-k} \right) , \end{aligned}$$

where \((a_k,b_k)\in D\) for all k. We now consider the rectangle of size \(n^{-k}\times m^{-k}\) defined by specifying the first k digits of the base n expansion of x and the first k digits of the base m expansion of y. This has \(\mu \) measure equal to \(\prod _{i=1}^{k} p(x_i,y_i)\). Since r(j) is the number of rectangles contained in row j of the pattern, it follows that the rectangle \(Q_{k}(x,y)\) contains \(\prod _{i=\lfloor \theta k/H\rfloor + 1}^{k} r(y_i)\) rectangles of size \(n^{-k}\times m^{-k}\). We now assume that p(d) depends only on the second coordinate of d, so that all of these rectangles have the same \(\mu \) measure. Hence we get

$$\begin{aligned} \mu (Q_k(x,y)) = \prod _{\ell =1}^{k} p(x_\ell ,y_\ell ) \prod _{\ell =\lfloor \theta k/H \rfloor +1}^{k} r(y_\ell ). \end{aligned}$$
(3.1)

Taking logarithms of (3.1) we obtain

$$\begin{aligned} \log (\mu (Q_k(x,y))) = \sum _{\ell =1}^{k} \log (p(x_\ell ,y_\ell )) + \sum _{\ell =\lfloor \theta k/H\rfloor +1}^{k} \log (r(y_\ell )). \end{aligned}$$
(3.2)

Since the digits \((x_\ell ,y_\ell )_\ell \) are i.i.d. with respect to the product measure \(\pi ^{\otimes {\mathbb {N}}}\), by the strong law of large numbers we get

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{1}{k} \log (\mu (Q_k(x,y)) ) = \sum _{d\in D} p(d) \log (p(d)) + (1-\theta /H) \sum _{d\in D} p(d) \log (r(d)) \end{aligned}$$

for \(\mu \)-almost every (x, y).

Let A be the set of (x, y) for which the convergence holds. Then \(\mu (A^c)=0\). By the definition of the measure \(\mu \) it is clear that it is supported on the set K(D). Hence \(\mu (K(D)^c\cup A^c) =0\) and for all \((x,y)\in K(D)\cap A\) we have

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{1}{k} \log (\mu (Q_k(x,y)) ) = \sum _{d\in D} p(d) \log (p(d)) + (1-\theta /H) \sum _{d\in D} p(d) \log (r(d)). \end{aligned}$$

Therefore using Lemma 3.2 we deduce

$$\begin{aligned}&\dim _{\Psi ,H}(K(D)\cap A) =-\frac{H}{\log m}\\&\quad \times \,\left( \sum _{d\in D} p(d) \log (p(d)) + (1-\theta /H) \sum _{d\in D} p(d) \log (r(d))\right) , \end{aligned}$$

and hence we obtain a lower bound for the parabolic dimension of K(D)

$$\begin{aligned} \dim _{\Psi ,H}(K(D)) \ge -\frac{H}{\log m}\!\left( \sum _{d\in D} p(d) \log (p(d)) + (1-\theta /H) \sum _{d\in D} p(d) \log (r(d))\!\right) . \end{aligned}$$

Maximizing the right hand side of the above inequality over all probability measures (p(d)) (e.g. by Jensen’s inequality applied to the concave logarithm), we find that the maximum is attained at

$$\begin{aligned} p(d) = \frac{1}{Z}r(d)^{\theta /H-1} \quad \hbox {and} \quad Z = \sum _{d\in D} r(d)^{\theta /H-1} = \sum _{j=0}^{m-1}r(j)^{\theta /H}. \end{aligned}$$
(3.3)
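As a sanity check of this maximization step (our own verification, not part of the proof), the sketch below evaluates the lower bound at the measure (3.3) for the concrete pattern sizes \(n=6\), \(m=2\), \(r(0)=5\), \(r(1)=1\) with \(H=1/2\), and compares it with randomly chosen probability measures:

```python
import math
import random

# Numerical check that the measure (3.3) maximizes the lower bound
#   -(H / log m) ( sum_d p(d) log p(d) + (1 - theta/H) sum_d p(d) log r(d) ).
# Pattern sizes n = 6, m = 2 with row counts r(0) = 5, r(1) = 1 and H = 1/2
# are our choice (they match the pattern used in Section 3).
random.seed(1)
n, m, H = 6, 2, 0.5
theta = math.log(m) / math.log(n)
rows = [0] * 5 + [1]   # second coordinate of each element d of D
r = {0: 5, 1: 1}       # r(j) = number of rectangles in row j

def objective(p):
    """The lower bound as a function of the probability vector p on D."""
    s = sum(pi * math.log(pi) for pi in p)
    s += (1 - theta / H) * sum(pi * math.log(r[rows[i]]) for i, pi in enumerate(p))
    return -H / math.log(m) * s

# the maximizer from (3.3)
Z = sum(r[rows[i]] ** (theta / H - 1) for i in range(len(rows)))
p_star = [r[rows[i]] ** (theta / H - 1) / Z for i in range(len(rows))]
best = objective(p_star)
```

By Jensen's inequality the value at (3.3) equals \(H\log _m Z\) and dominates the value at any other probability measure, which the random search below confirms.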

This choice of probability measure immediately gives

$$\begin{aligned} \dim _{\Psi ,H}(K(D)) \ge H\log _m\sum _{j=0}^{m-1}r(j)^{\theta /H}, \end{aligned}$$

and hence it remains to prove the upper bound. From now on we fix the choice of probability measure as in (3.3). We define

$$\begin{aligned} S_k(x,y) = \sum _{\ell =1}^{k} \log ( r(y_\ell ) ). \end{aligned}$$

Using (3.3) we can rewrite (3.2) as follows

$$\begin{aligned} \log (\mu (Q_k(x,y)))&= \sum _{\ell =1}^{k} \log \left( \frac{1}{Z} r(y_\ell )^{\theta /H-1} \right) + \sum _{\ell =1}^{k} \log (r(y_\ell )) - \sum _{\ell =1}^{\lfloor \theta k/H\rfloor } \log (r(y_\ell ) ) \\&= -k \log (Z) + (\theta /H-1) S_k(x,y) + S_k(x,y) - S_{\lfloor \theta k/H\rfloor }(x,y). \end{aligned}$$

Therefore

$$\begin{aligned} \frac{H}{\theta k} \log (\mu (Q_k(x,y)))+ \frac{H}{\theta } \log (Z) = \frac{S_k(x,y)}{k} - \frac{S_{\lfloor \theta k/H\rfloor }(x,y)}{\theta k/H}. \end{aligned}$$
(3.4)

We can write the right hand side as follows

$$\begin{aligned} \frac{S_k(x,y)}{k} - \frac{S_{\lfloor \theta k/H\rfloor }(x,y)}{\theta k/H} = \frac{S_{\lfloor k\rfloor }(x,y)}{\lfloor k\rfloor } - \frac{S_{\lfloor \theta k/H\rfloor }(x,y)}{\lfloor \theta k/H\rfloor }\left( 1- \frac{\{\theta k/H\}}{\theta k/H} \right) , \end{aligned}$$

where for all x we write \(\{x\} = x-\lfloor x\rfloor \). Denote the right hand side by f(k) and observe that it is defined for all \(k\in {\mathbb {R}}_+\). Summing f(k) over \(k=H/\theta ,(H/\theta )^{2},\ldots \) yields a telescoping series whose partial sums remain bounded, since \((S_\ell /\ell )\) is bounded and \(\theta /H <1\). This implies that

$$\begin{aligned} \limsup _{x\rightarrow \infty } f(x) \ge 0, \end{aligned}$$

since otherwise the sum above would converge to \(-\infty \). Since \(\sup _{x\in [k,k+1]}|f(x)-f(k)|\rightarrow 0\) as \(k\rightarrow \infty \), we deduce that

$$\begin{aligned} \limsup _{k\rightarrow \infty } \left( \frac{S_k(x,y)}{k} - \frac{S_{\lfloor \theta k/H\rfloor }(x,y)}{\theta k/H} \right) \ge 0. \end{aligned}$$

Hence, from (3.4) we infer

$$\begin{aligned} \liminf _{k\rightarrow \infty } \frac{\log (\mu (Q_k(x,y)))}{\log (m^{-k/H})} \le H\log _m(Z) \end{aligned}$$

and applying Lemma 3.2 we immediately conclude

$$\begin{aligned} \dim _{\Psi ,H}(K(D)) \le H\log _m(Z). \end{aligned}$$

This finishes the proof of the lemma. \(\square \)

Proof of Corollary 1.5

The statement of the corollary follows immediately from Remark 1.3 and Lemma 3.1. \(\square \)

We now proceed to prove Corollary 1.6. To this end we first define a self-affine set K and then show that there exists a function \(f:[0,1]\rightarrow [0,1]\) which is Hölder continuous with parameter \(\log 2/\log 6\) and satisfies \(\hbox {Gr}(f) = K\).

We start by defining the self-affine set that corresponds to the patterns A and B given by the matrices

$$\begin{aligned} A = \left( \begin{matrix} 0&\quad 0&\quad 0&\quad 0&\quad 0&\quad 1\\ 1&\quad 1&\quad 1&\quad 1&\quad 1&\quad 0 \end{matrix}\right) \quad \hbox {and} \quad B = \left( \begin{matrix} 1&\quad 0&\quad 0&\quad 0&\quad 0&\quad 0\\ 0&\quad 1&\quad 1&\quad 1&\quad 1&\quad 1 \end{matrix}\right) . \end{aligned}$$

Let \({\mathcal {Q}}_0 = \{[0,1]^2\}\) be the set containing the rectangles of the 0-th generation. To each rectangle in \({\mathcal {Q}}_0\) we assign label A. Suppose we have defined the collection \({\mathcal {Q}}_j\) and assigned labels to the rectangles in \({\mathcal {Q}}_j\). Then we subdivide each rectangle \(R_j\) in \({\mathcal {Q}}_j\) into 12 equal closed rectangles of width \(6^{-(j+1)}\) and height \(2^{-(j+1)}\). If the label assigned to \(R_j\) is A (resp. B), then in the subdivision we keep only those rectangles that correspond to the pattern A (resp. B). If the label of \(R_j\) is A, then to the rectangles that we kept we assign labels A, B, A, B, A, A going from left to right. If the label of \(R_j\) is B, then to the rectangles that we kept we assign labels B, B, A, B, A, B again going from left to right. The collection \({\mathcal {Q}}_{j+1}\) consists of those rectangles that we kept in the above procedure. Continuing indefinitely gives a compact set which we will denote K. The patterns A and B and the labels used in each iteration are depicted in Fig. 1 and the first four approximations to the set K are shown in Fig. 2 in the Introduction.
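The recursive construction above can be sketched as follows. The child labels are read left to right as in the text; the cell coordinates (with level 1 denoting the upper half of a rectangle, matching the sets \(D_1, D_2\) appearing later in the proof of Corollary 1.6) are our own convention for the illustration:

```python
# Sketch of the recursive construction of K from the patterns A and B.
# cells: (column, level) of each kept sub-rectangle, level 1 = upper half
# (our convention for this illustration); child_labels: left to right, as in
# the text.
PATTERNS = {
    "A": {"cells": [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0), (5, 1)],
          "child_labels": ["A", "B", "A", "B", "A", "A"]},
    "B": {"cells": [(0, 1), (1, 0), (2, 0), (3, 0), (4, 0), (5, 0)],
          "child_labels": ["B", "B", "A", "B", "A", "B"]},
}

def generation(j):
    """Return the rectangles of Q_j as tuples (x0, y0, width, height, label)."""
    rects = [(0.0, 0.0, 1.0, 1.0, "A")]  # Q_0, labelled A
    for _ in range(j):
        nxt = []
        for (x0, y0, w, h, lab) in rects:
            pat = PATTERNS[lab]
            # keep 6 of the 12 sub-rectangles of size (w/6) x (h/2)
            for (col, lev), child in zip(pat["cells"], pat["child_labels"]):
                nxt.append((x0 + col * w / 6, y0 + lev * h / 2, w / 6, h / 2, child))
        rects = nxt
    return rects
```

Each generation keeps exactly one rectangle per column, so \(|{\mathcal {Q}}_j| = 6^j\) and the kept rectangles project onto a partition of [0, 1].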

Claim 3.3

There exists a function \(f:[0,1]\rightarrow [0,1]\) such that \(\hbox {Gr}(f) = K\). Moreover, f is Hölder continuous with parameter \(\theta = \log 2/\log 6\) and is not Hölder continuous with parameter \(\theta '\) for any \(\theta '>\theta \).

Proof

For every \(x\in [0,1]\) let \(x = \sum _{i=1}^{\infty } x_i 6^{-i}\) with \(x_i\in \{0,1,2,3,4,5\}\) be its expansion in base 6. Note that if \(x = k6^{-i}\) for some \(k\in \{1,\ldots , 6^i-1\}\), then x has two different expansions in base 6: one ending in an infinite string of 0’s and one ending in an infinite string of 5’s. To define the function f we consider the expansion ending in an infinite string of 0’s. We now define a sequence \((y_i)\) corresponding to the sequence \((x_i)\), where \(y_i\in \{0,1\}\). For each rectangle \(R\in {\mathcal {Q}}_j\) we consider the interval of the j-th generation which is the projection of R on [0, 1]. This way we obtain a partition of [0, 1] into disjoint subintervals of length \(6^{-j}\) in generation j.

To determine \(y_j\) we find the interval of the j-th generation to which x belongs. If the pattern used in the rectangle of the j-th generation that corresponds to this interval is A, then we set \(y_j = 1\) if \(x_j= 5\), and \(y_j=0\) otherwise. If the pattern used is B, then we set \(y_j = 1\) if \(x_j=0\), and \(y_j=0\) otherwise (so that in both patterns \(y_j\) records the row of the chosen rectangle in column \(x_j\)). We finally define

$$\begin{aligned} f(x) = \sum _{i=1}^{\infty } y_i 2^{-i}. \end{aligned}$$

It is now clear that \(\hbox {Gr}(f) = K\). It remains to show the Hölder property.

We first argue that the definition of f remains unchanged if we do not require the representation of x to end in an infinite string of 0’s. Suppose that x lies on a dividing line of the i-th generation. Then the first i digits of x do not depend on the representation, and thus neither do the first i digits of f(x). There are then several cases; we illustrate four of them in Fig. 4. In Fig. 4a the labels of the two rectangles above x, from left to right, are A, B. This means that \(y_{i+1}=1\) independently of the representation. In the case of Fig. 4b the two rectangles from left to right are assigned A, A. In the representation from the left \(y_{i+1} = 0\) and from the right \(y_{i+1}' =0\). In the subsequent generations \(y_{i+k}= 1\) and \(y_{i+k}'=0\) for all \(k\ge 2\). This implies that f(x) is independent of the representation in this case. The other cases follow similarly.

Fig. 3

The graph of \(B+f\) in green and the graph of B in blue with f of Corollary 1.6 (color figure online)

Fig. 4

Patterns A, B and location of x on the dividing line

It now remains to show that f is Hölder continuous. Let \(x\ne x'\) and choose k so that

$$\begin{aligned} 6^{-k-1} < |x-x'| \le 6^{-k}. \end{aligned}$$

Suppose first that \(x_i = x_i'\) for all \(i\le k\). Then by the construction of f it follows that \(y_i = y_i'\) for all \(i\le k\), and hence

$$\begin{aligned} |f(x) - f(x')| =\left| \sum _{i=1}^{\infty } (y_i - y_i') 2^{-i} \right| \le \sum _{i=k+1}^{\infty } 2^{-i} = 2^{-k} = 2\cdot 6^{-\theta (k+1)} < 2|x-x'|^{\theta }, \end{aligned}$$

where in the last two steps we used \(6^{\theta }=2\) and \(|x-x'| > 6^{-k-1}\).

If \(x,x'\) disagree in the first k digits, then, since \(|x-x'|\le 6^{-k}\), there is a point \(x_0 = \ell 6^{-k}\) of the k-th subdivision lying between x and \(x'\); one of its two representations in base 6 agrees with x in the first k digits and the other agrees with \(x'\). Then by the above argument it follows that

$$\begin{aligned} |f(x)-f(x_0)| \le 2^{-k} \quad \hbox {and} \quad |f(x') - f(x_0)| \le 2^{-k}. \end{aligned}$$

Therefore by the triangle inequality we immediately get that

$$\begin{aligned} |f(x) - f(x')| \le 2^{-k+1} = 4\cdot 6^{-\theta (k+1)} < 4|x-x'|^\theta \end{aligned}$$

and this proves that f is Hölder continuous with parameter \(\theta \).

We note that f is not Hölder continuous with parameter \(\theta '\) for any \(\theta '>\theta \), since \(f(0)=0\) and \(f(6^{-n}) = 2^{-n} = (6^{-n})^{\theta }\). \(\square \)
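The digit-by-digit definition of f can be sketched as follows (a minimal illustration reflecting our reading of the label rules: under pattern A the digit 5 gives \(y_j=1\), under pattern B the digit 0 does, and child labels are read left to right as in the text). The helper `digit_agreement_gap` exercises only the key estimate that agreement of x and \(x'\) in their first k base-6 digits forces \(|f(x)-f(x')|\le 2^{-k}\):

```python
import random

# Sketch of the digit-by-digit definition of f in the proof of Claim 3.3,
# truncated to finitely many base-6 digits. CHILD[label][digit] is the label
# of the next-generation rectangle; SPECIAL[label] is the digit giving y_j = 1.
# These tables are our reading of the construction.
CHILD = {"A": "ABABAA", "B": "BBABAB"}
SPECIAL = {"A": 5, "B": 0}

def f_digits(xdigits):
    """Binary digits (y_j) of f(x), computed from the base-6 digits of x."""
    label, ys = "A", []
    for d in xdigits:
        ys.append(1 if d == SPECIAL[label] else 0)
        label = CHILD[label][d]
    return ys

def f_approx(xdigits):
    """The truncation of f(x) determined by finitely many digits of x."""
    return sum(y * 2.0 ** -(i + 1) for i, y in enumerate(f_digits(xdigits)))

def digit_agreement_gap(k, trials=200, rng=random.Random(2)):
    """Largest observed |f(x) - f(x')| over sampled pairs sharing k digits."""
    worst = 0.0
    for _ in range(trials):
        prefix = [rng.randrange(6) for _ in range(k)]
        a = prefix + [rng.randrange(6) for _ in range(20)]
        b = prefix + [rng.randrange(6) for _ in range(20)]
        worst = max(worst, abs(f_approx(a) - f_approx(b)))
    return worst
```

With these tables one can also check, for instance, that the two base-6 representations of 1/2 (digits 3,0,0,… and 2,5,5,…) produce the same value, in line with the representation-independence argument above.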

Proof of Corollary 1.6

We first explain how we can adapt the proof of Lemma 3.1 in order to get the parabolic dimension of K, since the patterns used are not the same in each iteration as was the case there. We only outline where the two proofs differ.

Let \(D_1 = \{(0,0), (0,1), (0,2), (0,3), (0,4), (1,5)\}\) and \(D_2 = \{(1,0), (0,1), (0,2), (0,3), (0,4), (0,5)\}\) correspond to patterns A and B respectively. We define two probability distributions on \(D_1\) and on \(D_2\). Let \(p>0\) and \(q>0\) satisfy \(5p+q=1\). Then we let \(p_1(x,0) = p\) for all \(x\ne 5\) and \(p_1(5,1) = q\). This is a distribution on \(D_1\). We also let \(p_2(0,1) = q\) and \(p_2(x,0) = p\) for \(x\ne 0\). This is a distribution on \(D_2\). We notice that both distributions depend only on the second coordinate and induce the same distribution on it. We now generate \((\xi _i^1,\xi _i^2)_{i\ge 1}\) an i.i.d. sequence from \(p_1\) and independently \((\zeta _i^1,\zeta _i^2)_{i\ge 1}\) an i.i.d. sequence from \(p_2\). We sample \((x,y) \in K\) by sampling the digits. Namely, \((x_1,y_1)= (\xi _1^1,\xi _1^2)\) and then iteratively depending on the history of the process we set either \((x_i,y_i) = (\xi _{r(i)}^1,\xi _{r(i)}^2)\) or \((x_i,y_i) = (\zeta _{i-r(i)}^1,\zeta _{i-r(i)}^2)\), where r(i) denotes the number of times that the distribution \(p_1\) has been used. Then if \(\mu \) is the measure induced by these distributions we get for \((x,y)\in K\) and \(Q_{k}(x,y)\) as defined in Lemma 3.1

$$\begin{aligned} \mu (Q_k(x,y)) = \prod _{i=1}^{k} w(x_i,y_i) \prod _{j=\lfloor 2\theta k\rfloor +1}^{k} r(y_j), \end{aligned}$$

where \(w(x_i,y_i)\) is either equal to \(p_1(x_i,y_i)\) or to \(p_2(x_i,y_i)\) and \(\theta = \log 2/\log 6\). By the construction above it easily follows that \((w(x_i,y_i))_{i\ge 1}\) is an i.i.d. sequence taking the value p with probability 5p and the value q with probability q. By the strong law of large numbers we then deduce that for \(\mu \)-almost every (x, y)

$$\begin{aligned} \lim _{k\rightarrow \infty } \frac{1}{k} \log \left( \mu (Q_k(x,y)) \right) = 5p\log p + q\log q + (1-2\theta ) 5p\log 5. \end{aligned}$$

Now the rest of the proof follows in exactly the same way as the proof of Lemma 3.1 to finally give

$$\begin{aligned} \dim _{\Psi ,H}(\hbox {Gr}(f)) = \frac{1}{2} \log _2(5^{2\theta } + 1), \end{aligned}$$
(3.5)

where we used \(H=1/2\) for the Brownian motion. Let \(f:[0,1]\rightarrow [0,1]\) be the function of Claim 3.3 which is Hölder continuous with exponent \(\theta \) and satisfies \(\hbox {Gr}(f) = K\). Then from (3.5) and Corollary 1.5 we immediately get

$$\begin{aligned} \dim (\hbox {Gr}(B+f)) = \frac{\log _2\left( 5^{2\theta } +1 \right) +1}{2}. \end{aligned}$$

It is proved in [10] that

$$\begin{aligned} \dim (\hbox {Gr}(f)) = \log _2\left( 5^\theta +1\right) . \end{aligned}$$

Thus it follows that

$$\begin{aligned} \dim (\hbox {Gr}(B+f)) > \max \{\dim (\hbox {Gr}(f)), 3/2\} \end{aligned}$$

and this concludes the proof. \(\square \)
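For concreteness, the dimensions appearing in this proof can be evaluated numerically (a quick sanity check of the strict inequality, not part of the argument):

```python
import math

# Numerical evaluation of the dimensions in the proof of Corollary 1.6,
# with theta = log 2 / log 6 and H = 1/2, confirming that the inequality
# dim Gr(B+f) > max{ dim Gr(f), 3/2 } is strict.
theta = math.log(2) / math.log(6)
dim_gr_f = math.log2(5 ** theta + 1)                    # dim Gr(f), from [10]
dim_gr_Bf = (math.log2(5 ** (2 * theta) + 1) + 1) / 2   # from (3.5) and Cor. 1.5
```

Numerically \(\dim (\hbox {Gr}(f)) \approx 1.518\) and \(\dim (\hbox {Gr}(B+f)) \approx 1.581\), so the gap over \(\max \{\dim (\hbox {Gr}(f)), 3/2\}\) is genuine, if modest.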

4 Comparing dimensions of \(\hbox {Gr}(B+f)\) when \(\hbox {Gr}(f)\) is a self-affine set

Theorem 4.1

Let B be a standard Brownian motion in \({\mathbb {R}}\) and \(n>m^2\). Let \(D\subseteq \{0,\ldots ,n-1\}\times \{0,\ldots , m-1\}\) be a pattern such that every row contains a chosen rectangle (i.e. \(r_j \ge 1\) for all \(j \le m-1\)) and every column contains exactly one chosen rectangle. Then there exists a function f with \(\hbox {Gr}(f) = K(D)\) and we have almost surely

$$\begin{aligned} \dim _M (\hbox {Gr}(B+f)) = \dim _M (\hbox {Gr}(f)) = 1 + \log _n \frac{n}{m}. \end{aligned}$$
(4.1)

Moreover, if the \(r_j\) are not all equal, then almost surely

$$\begin{aligned} \max \{\dim (\hbox {Gr}(B)), \dim (\hbox {Gr}(f))\} < \dim (\hbox {Gr}(B+f)) < \dim _M(\hbox {Gr}(B+f)) . \end{aligned}$$
(4.2)

Proof

Note that the function f can be made càdlàg without affecting \(\dim _M\hbox {Gr}(B+f)\) and \(\dim _M\hbox {Gr}(f)\). Then we can apply [6, Theorem 1.7] to get that almost surely

$$\begin{aligned} \dim _M(\hbox {Gr}(B+f)) \ge \dim _M(\hbox {Gr}(f)). \end{aligned}$$
(4.3)

It only remains to prove the upper bound. We follow McMullen’s proof [12] for the calculation of the Minkowski dimension of \(\hbox {Gr}(f)\). First notice that \(\theta =\log _n m <1/2\), since \(n>m^2\).

Consider a rectangle of the j-th generation of the construction of \(\hbox {Gr}(f)\) with size \(n^{-j}\times m^{-j}\). Then it is of the form \(R=[pn^{-j},(p+1)n^{-j}]\times [qm^{-j},(q+1)m^{-j}]\). By the Hölder property of Brownian motion it follows that for every \(\zeta >0\) there exists an almost surely finite random constant C such that for all \(s,t\in [0,1]\) we have

$$\begin{aligned} |B_t-B_s| \le C|t-s|^{1/2-\zeta }. \end{aligned}$$
(4.4)

When \(\hbox {Gr}(f)\) is perturbed by Brownian motion, then the above rectangle becomes

$$\begin{aligned} R' = [pn^{-j},(p+1)n^{-j}] \times [qm^{-j}+B_{pn^{-j}} -Cn^{-j(1/2-\zeta )},\, (q+1)m^{-j} + B_{pn^{-j}} +Cn^{-j(1/2-\zeta )}]. \end{aligned}$$

This means that if \((t,f(t)) \in R\), then by (4.4) we have \((t,B_t+f(t)) \in R'\). Since \(\theta =\log _n m<1/2\), covering the rectangle \(R'\) requires at most of order \(n^{j-[\theta j]}\) squares of side \(n^{-j}\). Therefore the number of squares of side \(n^{-j}\) needed to cover \(\hbox {Gr}(B+f)\) is at most of order \(|D|^j n^{j-[\theta j]}\). Taking logarithms and then the limit as \(j\rightarrow \infty \) we obtain that almost surely

$$\begin{aligned} \dim _M(\hbox {Gr}(B+f)) \le \lim _{j\rightarrow \infty } \frac{\log (|D|^j n^{j-[\theta j]})}{\log n^j} = 1 + \log _n\frac{|D|}{m} = \dim _M(\hbox {Gr}(f)) \end{aligned}$$

and this together with (4.3) concludes the proof of (4.1).

It remains to prove (4.2). By the Cauchy–Schwarz inequality, since the \(r_j\) are not all equal, we have

$$\begin{aligned} \sum _{j=0}^{m-1} r_j^\theta < \left( m \sum _{j=0}^{m-1} r_j^{2\theta } \right) ^{1/2}. \end{aligned}$$

Therefore from Corollary 1.5 almost surely we get

$$\begin{aligned} \dim (\hbox {Gr}(f)) =\log _m\left( \sum _{j=0}^{m-1} r_j^\theta \right) < \frac{1}{2}\left( 1+ \log _m\left( \sum _{j=0}^{m-1} r_j^{2\theta }\right) \right) = \dim (\hbox {Gr}(B+f)). \end{aligned}$$
(4.5)

Since \(2\theta <1\), we have \(\sum _{j=0}^{m-1} r_j^{2\theta } > \sum _{j=0}^{m-1} r_j n^{2\theta -1} =n^{2\theta }\). Thus

$$\begin{aligned} \dim (\hbox {Gr}(B+f)) > \frac{ 1+\log _m (n^{2\theta })}{2}=\frac{3}{2}=\dim (\hbox {Gr}(B)) \quad a.s. \end{aligned}$$

and together with (4.5), this proves the first inequality in (4.2).

If the \(r_j\) are not all equal, then by Jensen’s inequality we get

$$\begin{aligned} \frac{1}{m} \sum _{j=0}^{m-1} r_j^{2\theta } < \left( \frac{1}{m}\sum _{j=0}^{m-1} r_j\right) ^{2\theta } =(n/m)^{2\theta }, \end{aligned}$$

whence

$$\begin{aligned} \dim (\hbox {Gr}(B+f)) < \frac{ 1+\log _m \left( m (n/m)^{2\theta }\right) }{2}=2-\theta , \end{aligned}$$

and, since \(2-\theta = 1+\log _n(n/m) = \dim _M(\hbox {Gr}(B+f))\) by (4.1), this establishes the second inequality in (4.2). \(\square \)
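A quick numerical illustration of Theorem 4.1 for one admissible pattern of our own choosing (\(n=5\), \(m=2\), so \(n>m^2\), with row counts \(r=(4,1)\): every row is nonempty, every column contains exactly one rectangle since \(\sum _j r_j = n\), and the \(r_j\) are not all equal):

```python
import math

# One concrete instance of Theorem 4.1 (parameters are our choice): n = 5,
# m = 2, row counts r = (4, 1). We evaluate the three dimensions and check
# the strict chain of inequalities (4.2) together with the value (4.1).
n, m = 5, 2
r = (4, 1)
theta = math.log(m) / math.log(n)

dim_gr_f = math.log2(sum(rj ** theta for rj in r))                   # McMullen [12]
dim_gr_Bf = (1 + math.log2(sum(rj ** (2 * theta) for rj in r))) / 2  # (4.5)
dim_M = 1 + math.log(n / m) / math.log(n)                            # (4.1), = 2 - theta
```

Numerically this gives \(\dim (\hbox {Gr}(f)) \approx 1.494\), \(\dim (\hbox {Gr}(B+f)) \approx 1.552\) and \(\dim _M(\hbox {Gr}(B+f)) \approx 1.569\), so both inequalities in (4.2) are visibly strict.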