1 Introduction

We start our discussion with a function \(g\in {\mathcal {C}}[0,1] := \{f: [0,1]\rightarrow {\mathbb {R}}: f \,\,\text {is continuous on } [0,1]\}\) with \( \dim G_{g} > 1\). Here and in the following, we denote the graph of a function g by \(G_g\). In the present informal discussion, we use \(\dim \) to denote a fractal dimension. For the existence of such functions g, see, for instance, [24].

The function \(f:[0,1] \rightarrow {\mathbb {R}}\) defined by \(f(x) :=\int \limits _{0}^{x} g(t)dt\) will have the following properties:

$$\begin{aligned} \dim G_{f} =1\quad \text {and}\quad \dim G_{f'} =\dim G_{g} > 1. \end{aligned}$$

If we approximate f by Bernstein polynomials \(B_n(f)\) of order n,

$$\begin{aligned} B_{n}(f)(x) := \sum _{k=0}^n \left( {\begin{array}{c}n\\ k\end{array}}\right) f\left( \frac{k}{n}\right) \, x^{k} (1-x)^{n-k}, \end{aligned}$$

then \(B_n(f)\) converges uniformly to f and \((B_n(f))'\) converges uniformly to \(f'=g\). (We refer the interested reader to [13] for Bernstein polynomials and their properties.) Note that \((B_n(f))'\) is again a polynomial and, thus, the fractal dimension of its graph is equal to one. This shows that the approximation of a function by Bernstein polynomials preserves the function class but, in general, not the dimension of the graph of its derivative.
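To illustrate this numerically, the following Python sketch (our own illustration; the truncated Weierstrass-type sum standing in for g, the grid sizes, and the polynomial order are arbitrary choices, not taken from the references) evaluates \(B_n(f)\) and \((B_n(f))'\) for \(f(x)=\int _0^x g(t)\,dt\):

```python
import numpy as np
from math import comb

def bernstein(f, n, x):
    """Evaluate the Bernstein polynomial B_n(f) at the points x."""
    k = np.arange(n + 1)
    binom = np.array([comb(n, j) for j in k], dtype=float)
    basis = binom * np.power.outer(x, k) * np.power.outer(1.0 - x, n - k)
    return basis @ f(k / n)

def bernstein_derivative(f, n, x):
    """Evaluate (B_n(f))' via the identity
       (B_n f)'(x) = n * sum_k [f((k+1)/n) - f(k/n)] C(n-1,k) x^k (1-x)^(n-1-k)."""
    k = np.arange(n)
    binom = np.array([comb(n - 1, j) for j in k], dtype=float)
    basis = binom * np.power.outer(x, k) * np.power.outer(1.0 - x, n - 1 - k)
    return n * (basis @ (f((k + 1) / n) - f(k / n)))

# A truncated Weierstrass-type sum standing in for g (smooth, but oscillatory).
g = lambda t: sum(0.7 ** j * np.cos(3 ** j * np.pi * t) for j in range(6))

# f(x) = integral_0^x g(t) dt, approximated by the trapezoidal rule on a fine grid.
grid = np.linspace(0.0, 1.0, 100_001)
G = np.concatenate(([0.0], np.cumsum(0.5 * (g(grid[1:]) + g(grid[:-1])) * np.diff(grid))))
f = lambda x: np.interp(x, grid, G)

x = np.linspace(0.0, 1.0, 200)
n = 50
print(np.max(np.abs(bernstein(f, n, x) - f(x))))             # uniform error of B_n(f)
print(np.max(np.abs(bernstein_derivative(f, n, x) - g(x))))  # uniform error of (B_n(f))'
```

Both reported uniform errors tend to zero as n grows, although the convergence of the derivatives is slow when g is highly oscillatory.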

The present article aims to study approximation-theoretic aspects related to the fractal dimensions of a function and of its derivative.

The structure of this paper is as follows. After a brief introduction to fractal dimensions in Sect. 2, the novel concept of dimension preserving approximation is introduced in Sect. 3 and some of its properties are discussed. Section 4 deals with the restriction and extension of continuous functions with regard to fractal dimensions.

2 Hausdorff dimension, box dimension, and packing dimension

In this section, we introduce those fractal dimensions that are relevant for the present paper. These are the Hausdorff dimension, the box dimension, and the packing dimension defined for nonempty subsets of a separable metric space \((X,d_X).\) For more details about these fractal dimensions and for proofs, we refer the interested reader to, for instance, [11, 12, 19].

To this end, let \((X,d_X)\) be a separable metric space. For a non-empty subset U of X,  the diameter of U is defined as

$$\begin{aligned} |U| := \sup \{d_{X}(x,y): x,y \in U\}. \end{aligned}$$

Let F be a subset of X and s a non-negative real number. The s-dimensional Hausdorff measure of F is defined by

$$\begin{aligned} H^{s}(F) := \lim _{\delta \rightarrow 0^{+}} \inf \left\{ \sum _{i=1}^{\infty }|U_{i}|^{s}: F \subseteq \bigcup \limits _{i=1}^{\infty } U_{i}~~\text {and }~ |U_{i}| < \delta \right\} , \end{aligned}$$

where the infimum is taken over all countable covers \(\{U_i\}_{i\in {\mathbb {N}}}\) of F by sets \(U_i\) with \(|U_i| < \delta \).

Definition 2.1

Let \(F \subset X\) and let \(s \ge 0.\) The Hausdorff dimension of F is defined by

$$\begin{aligned} \dim _{H} F :=\inf \{s:H^{s}(F)=0\} =\sup \{s:H^{s}(F)=\infty \}. \end{aligned}$$

The Hausdorff dimension satisfies the countable stability property: Let \(\{X_i\}_{i\in I}\) be a countable family of sets. Then

$$\begin{aligned} \dim _{H} \left( \bigcup _{i\in I} X_{i}\right) = \sup \limits _{i\in I}\left\{ \dim _{H} X_{i}\right\} . \end{aligned}$$
(2.1)
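As a simple illustration of the countable stability property (2.1), every countable set has Hausdorff dimension zero; for instance,

$$\begin{aligned} \dim _{H} \big ({\mathbb {Q}}\cap [0,1]\big ) = \sup _{q \in {\mathbb {Q}}\cap [0,1]} \dim _{H}\{q\} = 0, \end{aligned}$$

since every singleton has Hausdorff dimension zero.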

Definition 2.2

Let F be any non-empty bounded subset of X and let \(N_{\delta }(F)\) be the smallest number of sets of diameter at most \(\delta \) which can cover F. The lower and upper box dimensions of F are defined as

$$\begin{aligned} {\underline{\dim }}_{B} F :=\varliminf _{\delta \rightarrow 0^{+}} \frac{\log N_{\delta }(F)}{- \log \delta } \end{aligned}$$

and

$$\begin{aligned} {\overline{\dim }}_{B} F :=\varlimsup _{\delta \rightarrow 0^{+}} \frac{\log N_{\delta }(F)}{- \log \delta }, \end{aligned}$$

respectively. If the above two expressions are equal, their common value is called the box dimension of F:

$$\begin{aligned} \dim _{B} F :=\lim _{\delta \rightarrow 0^{+}} \frac{\log N_{\delta }(F)}{- \log \delta }. \end{aligned}$$
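In practice, the box dimension of the graph of a sampled function is often estimated by counting the \(\delta \)-mesh squares that meet the graph, which changes \(N_{\delta }\) only by a bounded factor, and regressing \(\log N_{\delta }\) against \(-\log \delta \). A minimal Python sketch of such an estimator (our own illustration; the mesh sizes, sample size, and test function are arbitrary choices) is:

```python
import numpy as np

def box_dimension_of_graph(x, y, mesh_sizes=(16, 32, 64, 128, 256, 512, 1024)):
    """Estimate the box dimension of the sampled graph {(x_i, y_i)} by counting
       delta-mesh squares of side delta = 1/m that meet the graph and regressing
       log N_delta against log(1/delta)."""
    y0 = y.min()
    log_N, log_inv_delta = [], []
    for m in mesh_sizes:
        delta = 1.0 / m
        ix = np.minimum((x / delta).astype(int), m - 1)   # column index of the mesh square
        iy = np.floor((y - y0) / delta).astype(int)       # row index of the mesh square
        N = len(set(zip(ix.tolist(), iy.tolist())))       # number of occupied squares
        log_N.append(np.log(N))
        log_inv_delta.append(np.log(m))
    slope, _ = np.polyfit(log_inv_delta, log_N, 1)        # least-squares slope
    return slope

# Example: a (truncated) Weierstrass-type function sampled densely on [0, 1].
x = np.linspace(0.0, 1.0, 200_000)
y = sum(0.7 ** j * np.cos(3 ** j * np.pi * x) for j in range(10))
print(box_dimension_of_graph(x, y))   # typically noticeably larger than 1 for this choice
```

The slope returned by the least-squares fit is only a heuristic estimate; it depends on the sampling resolution and on the range of mesh sizes used.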

Let us introduce a few notions that will lead to the definition of the packing dimension. Let \(s \ge 0\) and \(\delta > 0\). For \(F \subseteq X\), set

$$\begin{aligned}&{\mathcal {P}}^{s}_{\delta } (F) := \sup \Bigg \{ \sum _{i=1}^{\infty } |B_i|^s: \{B_i\} ~\text {is a collection of countably many} \\&\text {disjoint balls of radii at most } \delta \text { with centres in}~F \Bigg \}. \end{aligned}$$

Since the supremum is taken over a smaller collection of packings as \(\delta \) decreases, \({\mathcal {P}}^s_{\delta } (F)\) decreases as \(\delta \) decreases. This implies that the limit

$$\begin{aligned} {\mathcal {P}}^{s}_{0} (F)= \lim _{\delta \rightarrow 0^{+}} {\mathcal {P}}^{s}_{\delta } (F) \end{aligned}$$

exists. As \({\mathcal {P}}^{s}_{0}\) is only a pre-measure (it is, in general, not countably subadditive), one defines

$$\begin{aligned} {\mathcal {P}}^{s} (F) :=\inf \Bigg \{\sum _{i=1}^{\infty } {\mathcal {P}}^{s}_{0} (F_i): F \subseteq \bigcup \limits _{i=1}^{\infty } F_{i} \Bigg \}. \end{aligned}$$

Thus, the packing measure \({\mathcal {P}}^s\) of F is the infimum of the packing pre-measures \({\mathcal {P}}^s_0\) of countable covers of F.

Definition 2.3

Let \(F \subset X\) and \(s \ge 0.\) The packing dimension of F is defined by

$$\begin{aligned} \dim _{P} F :=\inf \{s:{\mathcal {P}}^{s}(F)=0\}=\sup \{s:{\mathcal {P}}^{s}(F)=\infty \}. \end{aligned}$$

It is known that the following inequalities hold between these types of fractal dimensions [11]:

$$\begin{aligned} \dim _{H} F \le {\underline{\dim }}_{B} F \le {\overline{\dim }}_{B} F \end{aligned}$$

and

$$\begin{aligned} \dim _{H} F \le \dim _{P} F \le {\overline{\dim }}_{B} F. \end{aligned}$$

Although there are several other notions of fractal dimension, this article will deal only with those that were introduced above.

3 Dimension preserving approximation

In this section, we present some results relating to the invariance of fractal dimensions under certain maps. In what follows, let \((X,d_X)\) be a separable metric space, and \((Y,d_Y)\) be a separable normed linear space. We equip the space \(X\times Y\) with a metric d defined by

$$\begin{aligned} d\big ((x,y),(x',y')\big ) := \sqrt{d_X(x,x')^2 +d_Y(y,y')^2}. \end{aligned}$$

The Lipschitz constant of a map \(f:X\rightarrow Y\) is given by

$$\begin{aligned} {{\,\mathrm{Lip}\,}}(f) = \sup _{x,x'\in X, x \ne x'} \frac{d_Y\big (f(x),f(x')\big )}{d_X(x,x')}. \end{aligned}$$

A map f is said to be Lipschitz if \({{\,\mathrm{Lip}\,}}(f) < + \infty \).

The following result is a generalization of Theorem 1 in [20].

Lemma 3.1

Let \(g:X \rightarrow Y\) be a continuous map, where \((X,d_X)\) and \((Y,d_Y)\) are as above. For a fixed Lipschitz map \(f:X \rightarrow Y\), we have that

$$\begin{aligned} \dim _{H} G_{f+g} = \dim _H G_g\quad \text {and}\quad \dim _P G_{f+g} = \dim _P G_g. \end{aligned}$$

Proof

We define a map \(T_f : G_g \rightarrow G_{f+g} \) by \(T_f((x,g(x))) :=(x,f(x)+g(x))\), \(x\in X\). It is easy to check that the map \(T_f\) is onto. Now,

$$\begin{aligned}&d\big (T_{f}((x,g(x))),T_{f}((y, g(y)))\big ) = d\big ((x,f(x)+g(x)),(y,f(y)+g(y))\big )\\&\quad =\sqrt{d_{X}(x,y)^{2}+d_{Y}\big (f(x)+g(x),f(y)+g(y)\big )^{2}}\\&\quad \le \sqrt{d_{X}(x,y)^{2}+2 d_{Y}(f(x),f(y))^{2}+2 d_{Y}(g(x),g(y))^{2}}\\&\quad \le \sqrt{d_{X}(x,y)^{2}+2L^{2}d_{X}(x,y)^{2}+2 d_{Y}(g(x),g(y))^{2}}\\&\quad \le M\sqrt{d_{X}(x,y)^{2}+d_{Y}(g(x),g(y))^{2}}\\&\quad = M d\big ((x,g(x)),(y,g(y))\big ), \end{aligned}$$

where L is the Lipschitz constant of f and \(M := \max \{ \sqrt{1+2L^2},\sqrt{2}\}.\)

On the other hand,

$$\begin{aligned} d\big (T_f((x,g(x))),T_f((y,g(y)))\big )&= d\big ((x,f(x)+g(x)),(y,f(y)+g(y))\big )\\&= \sqrt{d_X(x,y)^2+d_Y\big (f(x)+g(x),f(y)+g(y)\big )^2}\\&= \frac{M}{M} \sqrt{d_X(x,y)^2+d_Y\big (f(x)+g(x),f(y)+g(y)\big )^2}\\&\ge \frac{1}{M} \sqrt{(1+2L^2)\,d_X(x,y)^2 +2\, d_Y\big (f(x)+g(x),f(y)+g(y)\big )^2}\\&\ge \frac{1}{M} \sqrt{d_X(x,y)^2+2\,d_Y\big (f(x)+g(x),f(y)+g(y)\big )^2+2\, d_Y(f(x),f(y))^2}\\&\ge \frac{1}{M}\, d\big ((x,g(x)),(y,g(y))\big ). \end{aligned}$$

Therefore, \(T_f\) is a bi-Lipschitz map. Since the Hausdorff dimension and the packing dimension are invariant under bi-Lipschitz maps (see, for instance, [12]), we have that \(\dim _H G_{f+g} = \dim _H T_f(G_g) = \dim _H G_g\) and \(\dim _P G_{f+g} = \dim _P T_f(G_g) = \dim _P G_g.\) \(\square \)

Remark 3.2

Since the upper and lower box dimensions, and hence the box dimension (if it exists), are also invariant under bi-Lipschitz maps (cf. [12]), the previous lemma also holds for these fractal dimensions.

It is well-known that the set of Lipschitz functions \([0,1]\rightarrow {\mathbb {R}}\), which we denote by \({\mathcal {L}}ip [0,1]\), is a dense subset of \({\mathcal {C}}[0,1]\) when the latter is endowed with the supremum norm \(\Vert \cdot \Vert _{\infty }.\) We use this fact to prove the following theorem.

Theorem 3.3

Let \(1 \le \beta \le 2\). Then the set \(S_{\beta }:=\{f\in {\mathcal {C}}[0,1]: \dim G_{f} = \beta \}\) is dense in \({\mathcal {C}}[0,1].\)

Proof

Let \(f\in {\mathcal {C}}[0,1].\) By the density of \({\mathcal {L}}ip [0,1]\) in \({\mathcal {C}}[0,1]\), there exists a sequence \((g_k)\) in \({\mathcal {L}}ip [0,1]\) which converges uniformly to f. Now let \(g \in {\mathcal {L}}ip [0,1]\) be arbitrary but fixed and fix an \(h \in S_{\beta }\). We define a sequence \((f_k)\) by \(f_k := g+ \frac{1}{k}h.\) Since the map \((x,y)\mapsto (x,\frac{y}{k})\) is bi-Lipschitz on \({\mathbb {R}}^2\), the graph of \(\frac{1}{k}h\) has the same dimension as \(G_h\), namely \(\beta \); as g is Lipschitz, Lemma 3.1 then implies that \(f_k \in S_{\beta }.\) Since \((f_k)\) converges uniformly to g, every Lipschitz function belongs to the closure of \(S_{\beta }\), and the density of \({\mathcal {L}}ip [0,1]\) in \({\mathcal {C}}[0,1]\) completes the proof.

\(\square \)

Since the Hausdorff, packing, and (upper and lower) box dimensions are all invariant under bi-Lipschitz maps, the above theorem is valid for each of these dimensions.
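The construction in the proof of Theorem 3.3 is easy to carry out numerically. The following Python sketch is our own illustration; the particular Lipschitz function g and the truncated Weierstrass-type function h standing in for an element of \(S_{\beta }\) are arbitrary choices:

```python
import numpy as np

# Approximants f_k = g + h/k from the proof of Theorem 3.3 (illustrative choices).
g = lambda x: np.abs(x - 0.3)                                               # a Lipschitz function
h = lambda x: sum(0.7 ** j * np.cos(3 ** j * np.pi * x) for j in range(25)) # a rough function

x = np.linspace(0.0, 1.0, 10_001)
gx, hx = g(x), h(x)
for k in (1, 10, 100, 1000):
    # Dividing h by k is a bi-Lipschitz change of the vertical coordinate, so the graph
    # of h/k keeps the dimension of G_h; adding the Lipschitz g does not change it (Lemma 3.1).
    f_k = gx + hx / k
    print(k, np.max(np.abs(f_k - gx)))   # uniform error -> 0 as k grows
```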

For the next result, we require the following definition.

Definition 3.4

[2] Let \( T: (X,d_X) \rightarrow (Y,d_Y) \) be a set-valued map between two metric spaces.

  1. (1)

    T is called lower semicontinuous at \(x\in X\) if for any open set U in Y such that \( U \cap T(x) \ne \emptyset \) there exists a \(\delta > 0\) satisfying \(U \cap T(x') \ne \emptyset \) whenever \(d_X(x,x') < \delta .\) The map T is called lower semicontinuous if it is lower semicontinuous at every \(x \in X.\)

  2. (2)

    T is said to be closed if the graph of T defined by \(G_T:=\{(x,y):y \in T(x)\}\) is a closed subset of \(X \times Y\).

Theorem 3.5

The set-valued function \(D:[1,2] \rightarrow {\mathcal {C}} [0,1]\) defined by

$$\begin{aligned} D(\beta ) :=\{f \in {\mathcal {C}} [0,1]: \dim G_f =\beta \}=S_{\beta } \end{aligned}$$

is lower semicontinuous.

Proof

Let \(\beta \in [1,2]\) and let U be any open set such that \(D(\beta ) \cap U \ne \emptyset .\) In particular, U is non-empty. Since \(S_{\alpha }\) is dense in \({\mathcal {C}} [0,1]\) for every \(\alpha \in [1,2]\) (Theorem 3.3), we obtain \(D(\alpha ) \cap U \ne \emptyset \) for all \(\alpha \in [1,2],\) which establishes the claim.

\(\square \)

Remark 3.6

The set-valued map D is not closed. Choose a sequence of polynomials \((p_n)\) converging uniformly to a Weierstrass-type nowhere differentiable function f whose graph has Hausdorff dimension \(>1\) (for examples of such functions, see, e.g., [20]). Since each \(p_n\) is continuously differentiable, \(\dim _H G_{p_n}=1\), so \((1, p_n) \in G_D\) and \((1, p_n) \rightarrow (1,f)\), but \(\dim _H G_f > 1\), that is, \((1,f)\notin G_D.\) Therefore, \(G_D\) is not closed.

The following result is well-known in analysis but repeated for the sake of completeness.

Theorem 3.7

[23] Let \(\big (f_n\big )\) be a sequence of differentiable functions on [0, 1]. Assume that the sequence \(\big (f_n(x_0)\big )\) converges for some \(x_0 \in [0,1].\) If \((f'_n)\) converges uniformly on [0, 1],  then \(\big (f_n\big )\) converges uniformly on [0, 1] to a function f, and

$$\begin{aligned} f'(x)=\lim _{n \rightarrow \infty }f'_n(x), \end{aligned}$$

for every \(x \in [0,1].\)

Note that if f is a continuously differentiable function, then \(\dim G_{f}=1.\) However, nothing can be said in general about the dimension of the graph of its derivative. For example, take a Weierstrass-type nowhere differentiable continuous function \(g:[0,1] \rightarrow {\mathbb {R}}\) as in, for instance, [24], with \(\dim G_{g}> 1\). Then the function f defined by \(f(x):=\int \nolimits _{0}^{x} g(t)dt\) satisfies \(\dim G_{f} =1\) and \(\dim G_{f'} =\dim G_{g}> 1.\) Moreover, we emphasize that functions f defined by such an integral formula are always absolutely continuous; hence, for such functions f we have \(\dim G_{f} =1\).

Theorem 3.8

Suppose f is a continuously differentiable function with \(\dim G_{f'}=\beta \) for some \(1 \le \beta \le 2.\) Then there exists a sequence of continuously differentiable functions \((f_n)\) satisfying \(\dim G_{f_n'} =\beta \), and \((f_n)\) converges uniformly to f.

Proof

From Theorem 3.3 we obtain a sequence of continuous functions \((g_n)\) with \(\dim G_{g_n} =\beta \) which converges uniformly to \(f'\). Define \(f_n:[0,1] \rightarrow {\mathbb {R}}\) by \(f_n(x):=\int \nolimits _{0}^{x} g_n(t)dt.\) Then \(f_n'=g_n\) and \((f_n')\) converges uniformly to \(f'.\) Moreover, \(f_n(0)=0\) for every n, so the sequence \(\big (f_n(0)\big )\) converges. In view of Theorem 3.7, the sequence \((f_n)\) converges uniformly to f and satisfies the required condition \(\dim G_{f_n'} =\beta \). \(\square \)

Remark 3.9

The above theorem can be extended as follows. Suppose f is a \(k\)-times continuously differentiable function with \(\dim G_{ f^{(k)} } =\beta \) for some \(1 \le \beta \le 2.\) Then there exists a sequence of \(k\)-times continuously differentiable functions \((f_n)\) satisfying \(\dim G_{ f_n^{(k)} } =\beta \), which converges uniformly to f.

The next theorem deals with both dimension preserving and shape preserving approximation of a continuous function.

Theorem 3.10

Suppose f is a continuously differentiable function with \(\dim G_{f'}=\beta \) for some \(1 \le \beta \le 2\) and \(f(x)\ge 0, \forall x \in [0,1].\) Then there exists a sequence of continuously differentiable functions \((f_n)\) satisfying \(\dim G_{f_n'} =\beta \) and \(f_n(x)\ge 0, \forall x \in [0,1]\), and \((f_n)\) converges uniformly to f.

Proof

The proof uses arguments similar to those given in Theorems 3.3 and 3.8, and is omitted. \(\square \)

3.1 Construction of dimension preserving approximants

Hutchinson constructed parametrized curves in [16] and Barnsley [5] used iterated function systems (IFSs) to define a class of functions called fractal interpolation functions (FIFs). A FIF is a continuous function whose graph is the invariant set (attractor) of a suitably chosen IFS. For the benefit of the reader, we briefly revisit the construction of a fractal interpolation function. For material about IFSs and FIFs, we refer the interested reader to, e.g., [6, 19].

To this end, let \((X,d_X)\) be a complete metric space and let \(f:X\rightarrow X\). The map f is said to be a contraction (on X) if \({{\,\mathrm{Lip}\,}}(f) < 1\).

Definition 3.11

Let \((X,d_X)\) be a complete metric space and let \({\mathcal {F}}:=\{f_1, \ldots , f_n\}\) be a finite set of contractions on X. Then the pair \((X,{\mathcal {F}})\) is called an iterated function system on X.

Definition 3.12

A nonempty compact subset K of X is called an invariant set or an attractor of the IFS \((X, {\mathcal {F}})\) if it satisfies the self-referential equation

$$\begin{aligned} K = \bigcup _{i=1}^{n} f_{i} (K). \end{aligned}$$
(3.1)

It can be shown that such a set K exists and is unique; see, for instance, [6, 19].

Let a set of interpolation points \(\{(x_i,y_i) : i=0,1,2,\ldots ,N\} \subset {\mathbb {R}}^2\) with increasing abscissae \(0 =: x_0< x_1<x_2< \dots <x_N:=1\) be given. Set \(J := \{1,2,\ldots ,N\}\), \(I :=[0,1]\) and \(I_i := [x_{i-1}, x_{i}]\), \(i\in J.\) Let \(L_i: I \rightarrow I_i\) be affine functions such that \(L_i(x_0)=x_{i-1}\) and \(L_i(x_N)=x_{i}\) for \(i \in J\). Suppose that \(F_i: I \times {\mathbb {R}} \rightarrow {\mathbb {R}}\) are functions that are continuous in the first variable and contractive in the second variable such that

$$\begin{aligned} F_{i}(x_{0},y_{0})=y_{i-1}, \quad F_{i}(x_{N},y_{N}) =y_{i}. \end{aligned}$$
(3.2)

Define

$$\begin{aligned} w_{i}(x,y) := \Big (L_{i}(x),F_{i}(x,y) \Big ), \quad i \in J, \end{aligned}$$

and consider the IFS \({\mathcal {W}}=(I \times {\mathbb {R}}, w_i: i \in J).\)
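For intuition, the attractor of \({\mathcal {W}}\) can be approximated numerically by repeatedly applying the maps \(w_i\) to the set of interpolation points; since these points lie on the attractor, every iterate stays on it. The following Python sketch is our own illustration; the data, the scaling factors, and the affine choice of the \(F_i\) (the special case discussed below) are assumptions made only for concreteness:

```python
import numpy as np

def fif_attractor(L, F, x, y, iterations=12):
    """Approximate the attractor of the IFS w_i(x, y) = (L_i(x), F_i(x, y))
       by repeatedly applying all maps to the interpolation points."""
    pts = np.column_stack([x, y]).astype(float)
    for _ in range(iterations):
        pts = np.vstack([np.column_stack([Li(pts[:, 0]), Fi(pts[:, 0], pts[:, 1])])
                         for Li, Fi in zip(L, F)])
    return pts[np.argsort(pts[:, 0])]

# Illustrative data on the partition 0 = x_0 < x_1 < x_2 = 1.
x = np.array([0.0, 0.5, 1.0]); y = np.array([0.0, 1.0, 0.5])
alpha = [0.6, -0.5]                                   # vertical scaling factors, |alpha_i| < 1
L = [lambda t, a=x[i], b=x[i + 1]: a + (b - a) * t for i in range(2)]
# Affine F_i(t, z) = c_i t + d_i + alpha_i z with c_i, d_i fixed by the join conditions (3.2).
F = []
for i in range(2):
    c = y[i + 1] - y[i] - alpha[i] * (y[-1] - y[0])
    d = y[i] - alpha[i] * y[0]
    F.append(lambda t, z, c=c, d=d, a=alpha[i]: c * t + d + a * z)

graph = fif_attractor(L, F, x, y)   # points lying on the graph of the FIF
```

After a moderate number of iterations the resulting point set is visually indistinguishable from the graph of the corresponding FIF.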

Theorem 3.13

[5] Let \({\mathcal {W}}\) be the IFS defined above. Then \({\mathcal {W}}\) has a unique attractor \(G= G_f\) which is the graph of a continuous function \(f: I \rightarrow {\mathbb {R}}\). Moreover, f interpolates the data set \(\{(x_i,y_i) : i=0,1,\ldots ,N\} \), that is, \(f(x_i)=y_i\) for all \(i\in \{0,1,\ldots , N\}.\)

The function f in the above theorem whose graph is the attractor of an IFS is termed a fractal interpolation function. Main features of FIFs are that their graphs are self-referential in the sense of (3.1) and that they usually have non-integral box or Hausdorff dimension.

For a special choice of mappings \(F_i\), namely, \(F_i (x,y) := c_i x + d_i + \alpha _i y\), where the coefficients \(c_i\) and \(d_i\) are determined by the conditions (3.2), and the \(\alpha _i \in (-1,1)\) are free parameters, the resulting FIF is called affine.
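For later reference, imposing the two conditions (3.2) on this affine choice determines the coefficients explicitly:

$$\begin{aligned} c_{i} = \frac{y_{i}-y_{i-1}-\alpha _{i}(y_{N}-y_{0})}{x_{N}-x_{0}}, \qquad d_{i} = y_{i-1}-\alpha _{i}y_{0}-c_{i}x_{0}. \end{aligned}$$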

Estimates for the Hausdorff dimension of an affine FIF were presented in [5] and also in [10]. The box dimension of classes of affine FIFs was computed in [7, 8, 14] and for FIFs generated by bilinear maps in [9]. In [15], a formula for the box dimension of FIFs \({\mathbb {R}}^n\rightarrow {\mathbb {R}}^m\) was derived.

In [21, 22, 26], the idea of fractal interpolation was explored further leading to a class of fractal functions associated with a given (classical) function \(f\in {\mathcal {C}}(I)\) as follows. (See also, [19] for a similar approach.)

Let \(\Delta :=(x_0,x_1, \dots ,x_N)\) be a partition of \(I :=[0,1]\) such that, without loss of generality, \(0=x_0<x_1<\dots < x_N=1\). For \(i \in J\), let \(L_i :I\rightarrow I_i\) be affine (see above) and \(F_i : I\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be given by

$$\begin{aligned} F_{i}(x,y) := \alpha _{iy} + f \big ( L_{i}(x) \big )-\alpha _{i} b(x), \end{aligned}$$

where \(b \ne f \) is any continuous function satisfying

$$\begin{aligned} b(x_{0}) = f(x_{0}), \quad b(x_{N}) = f(x_{N}), \end{aligned}$$

and \(\alpha := (\alpha _1, \alpha _2, \dots , \alpha _N) \in (-1,1)^N\). The corresponding FIF, denoted by \(f_{\Delta ,b}^\alpha \), is called an \(\alpha \)-fractal function. In [22], it is noted that \(\alpha \)-fractal functions satisfy the self-referential equation

$$\begin{aligned} f_{\Delta ,b}^\alpha (x) = f(x) + \alpha _i (f_{\Delta ,b}^{\alpha }- b)\big (L_{i}^{-1}(x)\big ), \quad \forall ~~ x \in I_{i},~~ i \in J. \end{aligned}$$
(3.3)
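Equation (3.3) also suggests a simple way to compute \(f_{\Delta ,b}^\alpha \) numerically: its right-hand side defines an operator on \({\mathcal {C}}(I)\) that is a contraction in the supremum norm with factor \(\max _i |\alpha _i|\), so fixed-point iteration on a grid converges geometrically. The following Python sketch is our own illustration; the functions f and b, the knots, the scale vector, and the grid parameters are arbitrary choices:

```python
import numpy as np

def alpha_fractal(f, b, knots, alpha, num_points=2049, num_iter=60):
    """Approximate the alpha-fractal function f^alpha_{Delta,b} on a uniform grid by
       iterating (Th)(x) = f(x) + alpha_i * (h - b)(L_i^{-1}(x)) for x in I_i,
       a sup-norm contraction with factor max_i |alpha_i|."""
    knots = np.asarray(knots, dtype=float)
    x = np.linspace(knots[0], knots[-1], num_points)
    fx = f(x)
    # index i(x) of the subinterval I_i = [x_{i-1}, x_i] containing x
    idx = np.clip(np.searchsorted(knots, x, side='right') - 1, 0, len(knots) - 2)
    # L_i^{-1} maps I_i back onto the whole interval [x_0, x_N]
    Linv = knots[0] + (x - knots[idx]) / (knots[idx + 1] - knots[idx]) * (knots[-1] - knots[0])
    coef = np.asarray(alpha, dtype=float)[idx]
    bL = b(Linv)
    h = fx.copy()                      # start the iteration at h_0 = f
    for _ in range(num_iter):
        h = fx + coef * (np.interp(Linv, x, h) - bL)
    return x, h

# Illustrative choices: a smooth f and b the chord through the endpoints of f.
f = lambda t: np.sin(np.pi * t)
b = lambda t: f(0.0) + (f(1.0) - f(0.0)) * t
x, h = alpha_fractal(f, b, knots=[0.0, 0.5, 1.0], alpha=[0.8, 0.8])
```

Interpolating h between grid points introduces an additional discretization error, so this is only an approximate evaluation of the fixed point of (3.3).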

The following result is a special case of Theorem 3 in [8] applied to Lipschitz functions. (See, also [1, Corollary 5.1].)

Theorem 3.14

Let \(\Delta =(x_0,x_1,\dots , x_N)\) be a partition of \(I=[x_0,x_N]\) satisfying \(x_0<x_1< \dots < x_N\) and let \( \alpha =(\alpha _1,\alpha _2, \dots ,\alpha _{N}) \in (-1,1)^N.\) Assume that f and b are Lipschitz functions defined on I with \(b(x_0)=f(x_0)\) and \(b(x_N)=f(x_N).\) If the data points \( \{(x_i, f(x_i)): i =0,1,\dots , N\}\) are not collinear, then

$$\begin{aligned} \dim _B G_{f_{\Delta ,b}^\alpha } = {\left\{ \begin{array}{ll} D, \hbox { if}\ \sum \limits _{i=1}^{N} |\alpha _i| > 1;\\ 1, \text { otherwise,} \end{array}\right. } \end{aligned}$$

where D is the unique positive solution of \(\sum \nolimits _{i=1}^{N} |\alpha _i|a_i^{D-1}=1\) and \(a_i := \frac{x_{i}-x_{i-1}}{x_{N}-x_{0}}\) is the contraction ratio of \(L_i\). Here, \(G_{f_{\Delta ,b}^\alpha }\) denotes the graph of \(f_{\Delta ,b}^\alpha \).
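Since the left-hand side of \(\sum _{i=1}^{N}|\alpha _i| a_i^{D-1}=1\) is strictly decreasing in D and equals \(\sum _i |\alpha _i|\) at \(D=1\), the exponent D can be computed by bisection on \([1,2]\). A short Python helper (our own sketch, not taken from [1, 8]):

```python
import numpy as np

def box_dimension_affine_fif(alpha, knots):
    """Box dimension of an affine FIF per Theorem 3.14: solve
       sum_i |alpha_i| * a_i**(D-1) = 1 with a_i = (x_i - x_{i-1})/(x_N - x_0).
       Returns 1.0 when sum |alpha_i| <= 1."""
    alpha = np.abs(np.asarray(alpha, dtype=float))
    knots = np.asarray(knots, dtype=float)
    a = np.diff(knots) / (knots[-1] - knots[0])
    if alpha.sum() <= 1.0:
        return 1.0
    phi = lambda D: np.sum(alpha * a ** (D - 1.0)) - 1.0
    lo, hi = 1.0, 2.0            # phi(1) > 0, phi(2) < 0, phi decreasing in D
    for _ in range(100):         # plain bisection
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if phi(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Example: uniform two-interval partition with alpha_1 = alpha_2 = 0.75,
# for which D = 2 + log(0.75)/log(2) ~ 1.585.
print(box_dimension_affine_fif([0.75, 0.75], [0.0, 0.5, 1.0]))
```

For the uniform two-interval partition with \(\alpha _1=\alpha _2\), this recovers \(D = 2+\frac{\log |\alpha _1|}{\log 2}\), the relation used in the proof of Theorem 3.16 below.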

Note 3.15

We define the second modulus of smoothness with step-weight function \(\phi (x):=\sqrt{x(1-x)}\) by

$$\begin{aligned} \omega _{\phi }(f;\delta )=\sup _{0 \le t \le \delta }\sup _{x}|f(x-t\phi (x))- 2 f(x) + f(x+t \phi (x))|, \end{aligned}$$

where the second supremum is taken over those values of x for which every argument belongs to the domain [0, 1]. In [25] the following estimate was proved:

$$\begin{aligned} \Vert B_n(f) -f\Vert _{\infty } \le C ~ \omega _{\phi }\Big (f; \frac{1}{\sqrt{n}}\Big ), \end{aligned}$$

for some constant \(C>0\). Here, \(B_n:{\mathcal {C}}(I)\rightarrow \Pi _n\) denotes the n-th order Bernstein operator and \(\Pi _n\) the space of polynomials of degree \(\le n\).
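The modulus \(\omega _{\phi }\) can be estimated numerically by discretizing both suprema. The following Python sketch is our own rough illustration (the grid sizes and the test function are arbitrary, and the discretization only yields a lower bound for the suprema):

```python
import numpy as np

def second_modulus_phi(f, delta, grid_size=20_001, t_steps=200):
    """Grid-based estimate of the second modulus of smoothness omega_phi(f; delta)
       with step-weight phi(x) = sqrt(x(1-x)) on [0, 1]."""
    x = np.linspace(0.0, 1.0, grid_size)
    phi = np.sqrt(x * (1.0 - x))
    best = 0.0
    for t in np.linspace(0.0, delta, t_steps + 1)[1:]:
        lo, hi = x - t * phi, x + t * phi
        ok = (lo >= 0.0) & (hi <= 1.0)        # keep x with both arguments in [0, 1]
        diff = np.abs(f(lo[ok]) - 2.0 * f(x[ok]) + f(hi[ok]))
        best = max(best, diff.max() if diff.size else 0.0)
    return best

# Example: the printed values illustrate the decay of the modulus as n grows.
f = lambda x: np.abs(x - 0.5)
for n in (16, 64, 256, 1024):
    print(n, second_modulus_phi(f, n ** -0.5))
```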

Now we are ready to prove the next result.

Theorem 3.16

Let \(f \in {\mathcal {C}}(I)\) and \(\beta \in (1,2)\). Then there exists a sequence \((f_n)\) of fractal functions which converges uniformly to f and satisfies \(\dim _B G_{f_n} = \beta \) for every n.

Proof

For a given \(f \in {\mathcal {C}}(I)\) and \( \beta \in (1,2)\), we choose the partition \(\Delta = (0, \frac{1}{2}, 1)\) of \(I=[0,1]\) and a scale vector \( \alpha =(\alpha _1,\alpha _2) \in (-1,1)^2\) by

$$\begin{aligned} \alpha _1=\alpha _2 \quad \text {and}\quad \beta = 2+\frac{\log (|\alpha _1|)}{\log 2}. \end{aligned}$$

Further assume, without loss of generality, that the sampling points in \(\big \{\big (x_i,f(x_i)\big ): i=0,1,2\big \}\) corresponding to f are not collinear. Let \((p_n)_{n \in {\mathbb {N}}}\) be the sequence of Bernstein polynomials \(p_n = B_n(f)\) that converges uniformly to f. For each fixed \(n \in {\mathbb {N}}\), construct the \(\alpha \)-fractal function \((p_n)_{\Delta , B_n(p_n)}^\alpha \) corresponding to \(p_n\) by choosing the parameter function b (see above) as \(B_n(p_n)\). In the light of Eq. (3.3) and Note 3.15, a simple and straightforward calculation produces

$$\begin{aligned} \begin{aligned} \Vert f- (p_n)_{\Delta , B_n(p_n)}^\alpha \Vert _\infty \le&~ \Vert f-p_n \Vert _\infty + \Vert p_n - (p_n)_{\Delta , B_n(p_n)}^\alpha \Vert _\infty \\ \le&~ \Vert f -p_n \Vert _\infty + \frac{|\alpha _1|}{1-|\alpha _1|} \Vert p_n - B_n(p_n)\Vert _\infty \\ \le&~ C ~ \omega _{\phi }\Big (f; \frac{1}{\sqrt{n}}\Big ) + \frac{C ~|\alpha _1|}{1-|\alpha _1|} ~ \omega _{\phi }\Big (p_n; \frac{1}{\sqrt{n}}\Big ). \end{aligned} \end{aligned}$$

We therefore conclude that the sequence \((p_n)_{\Delta , B_n(p_n)}^\alpha \) converges uniformly to f. For each fixed \(n \in {\mathbb {N}}\), the functions \(p_n\) and \(B_n(p_n)\) are Lipschitz continuous and the set of data points \(\{(x_i, p_n(x_i)): i=0,1,2\}\) is not collinear. Hence, with the help of Theorem 3.14, the box dimensions of the graphs of \((p_n)_{\Delta , B_n(p_n)}^\alpha \), which depend only on the partition and the scaling vector, are all the same and are equal to \(\beta .\) \(\square \)
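A quick numerical sanity check of the parameter choice in the proof (our own illustration): on the uniform partition \((0,\tfrac{1}{2},1)\) with \(\alpha _1=\alpha _2=2^{\beta -2}\), the box-dimension equation of Theorem 3.14 is indeed solved by \(D=\beta \), and \(\sum _i|\alpha _i|>1\):

```python
import numpy as np

# Check that alpha_1 = alpha_2 = 2**(beta - 2) on the partition (0, 1/2, 1) gives
# box dimension beta: the equation 2*|alpha_1|*(1/2)**(D-1) = 1 is solved by D = beta.
for beta in (1.2, 1.5, 1.8):
    a1 = 2.0 ** (beta - 2.0)                 # lies in (1/2, 1) for beta in (1, 2)
    D = 2.0 + np.log(a1) / np.log(2.0)       # equals beta
    residual = 2.0 * a1 * 0.5 ** (D - 1.0) - 1.0
    print(beta, a1, D, abs(residual) < 1e-12, 2.0 * a1 > 1.0)
```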

Remark 3.17

The previous theorem also determines the order of approximation by fractal functions. More precisely, for a given function \(f \in {\mathcal {C}}(I)\) and \(\beta \in (1,2)\) we have the following estimate

$$\begin{aligned} \Vert f -f_n\Vert _{\infty } \le C ~ \Bigg [\omega _{\phi }\Big (f; \frac{1}{\sqrt{n}}\Big ) +\omega _{\phi }\Big (B_n(f); \frac{1}{\sqrt{n}}\Big )\Bigg ], \end{aligned}$$

where \((f_n)\) is the sequence of fractal functions constructed in the above theorem and the constant C depends only on \(\beta .\) We do not claim that this order of approximation is optimal. Note that although there are many other approximating polynomials, the so-called Bernstein-type polynomials, we have used only the classical Bernstein polynomials in the previous theorem. The reader is encouraged to consult [13] for a more detailed study of the order of convergence of Bernstein-type polynomials.

Next, we approximate a given function by a sequence of fractal functions having the same Hausdorff dimension. For this purpose, we need to quote the following result that can be found in [4] and is based on work presented in [3].

Theorem 3.18

([4], Theorem 2.1) Let the data set \(\triangle =\{(x_i,y_i) \in I \times {\mathbb {R}}:i=0,1,\dots ,m\}\) be given, where \(0=x_0< x_1< \dots < x_m=1\). Assume that \(\sum \nolimits _{i=1}^{m}|\alpha _i| > 1\) and that there exist indices \(i \ne j\) such that

$$\begin{aligned} \frac{y_i -y_{i-1} -\alpha _i(y_m -y_0)}{x_i -x_{i-1}- \alpha _i} \ne \frac{y_j -y_{j-1} -\alpha _j(y_m -y_0)}{x_j -x_{j-1}- \alpha _j}. \end{aligned}$$
(3.4)

Let f be an affine FIF associated with the above data set and denote by \(G_f\) its graph. Then, \(\dim _H G_f = s\) where s is the unique positive solution of

$$\begin{aligned} \sum \limits _{i=1}^{m} |\alpha _{i}| (x_{i} -x_{i-1})^{s-1}=1. \end{aligned}$$

Note that Theorem 3.18 implies that under the condition (3.4) the box dimension of \(G_f\) equals its Hausdorff dimension.

Theorem 3.19

Let \(f \in {\mathcal {C}}(I)\) and \(\beta \in (1,2)\). Then there exists a sequence of fractal functions converging uniformly to f with graphs having Hausdorff dimension \(\beta .\)

Proof

We consider a sequence of data sets \(\triangle _n=\{(x_i,f(x_i)) \in I \times {\mathbb {R}}:i=0,1,2,\ldots ,n\}\) with \(0=x_0< x_1< \dots < x_n=1\) and \(x_i -x_{i-1}= \frac{1}{n}.\) Choose \(\alpha _i = \alpha = \frac{1}{n^{2-\beta }}\) for every \(i =1,2,\dots , n.\) Then we have \(s=2 +\frac{\log (|\alpha |)}{\log n}=\beta .\) Moreover, \(\sum \nolimits _{i=1}^{n}|\alpha _i|= n |\alpha |= n^{\beta -1} > 1.\) By Theorem 3.18 it suffices to show that \(f(x_i) -f(x_{i-1}) \ne f(x_j) -f(x_{j-1})\), for some \(i \ne j\), in order to verify condition (3.4). For each \(n \ge 2\), we define a data set \({\tilde{\triangle }}_n\) by

$$\begin{aligned} {\tilde{\triangle }}_n= {\left\{ \begin{array}{ll} \triangle _n, ~~~ \text {if} ~ f(x_1) -f(x_0) \ne f(x_n) -f(x_{n-1}) \\ \{(x_i,y_i): i= 0,1,2,\dots ,n\}, ~~\text {otherwise}, \end{array}\right. } \end{aligned}$$

where \(y_0:= f(x_0)+\frac{1}{n}\) and \(y_i:=f(x_i)\) for \(i =1,2,\dots ,n.\) By construction, condition (3.4) holds for \({\tilde{\triangle }}_n\). Let \(g_n\) denote the affine FIF generated by the data set \({\tilde{\triangle }}_n\) and the scale vector \(\alpha \) chosen above. Since the scaling factors \(|\alpha |=n^{\beta -2}\) tend to zero, \(g_n\) differs from the piecewise linear interpolant of \({\tilde{\triangle }}_n\) by a term that vanishes uniformly as \(n \rightarrow \infty \), and these interpolants converge uniformly to f; hence \((g_n)\) converges uniformly to f. By Theorem 3.18, \(\dim _H G_{g_n}=\beta \) for every \(n \ge 2\), which completes the proof. \(\square \)

4 Restrictions and extensions of continuous functions

In this section, we focus on some restrictions and extensions of continuous functions in regards to fractal dimensions. For this purpose, we need to state some known results.

Theorem 4.1

([12], Theorem 4.10) Let \(A \subset {\mathbb {R}}^n\) be a Borel set such that \( 0 < {\mathcal {H}}^s(A) \le \infty .\) Then there exists a compact set \(K \subset A\) such that \( 0< {\mathcal {H}}^s(K) <\infty .\)

In [17], the above result was also established for the packing measure:

Theorem 4.2

Let \(A \subset {\mathbb {R}}^n\) be a Borel set such that \( 0 < {\mathcal {P}}^s(A) \le \infty .\) Then there exists a compact set \(K \subset A\) such that \( 0< {\mathcal {P}}^s(K) <\infty .\)

Lemma 4.3

Let A be a compact subset of \({\mathbb {R}}^n\), and \(s \le \dim _H A\). Then there exists a compact set \(K \subset A\) such that \( \dim _H K =s.\) The analogous result holds for the packing dimension.

Proof

Suppose that \(s< \dim _H A\). Using the definition of Hausdorff dimension, we have \({\mathcal {H}}^s(A)=\infty \). Theorem 4.1 produces a compact subset K of A satisfying \(0< {\mathcal {H}}^s (K) < \infty .\) Again using the definition of Hausdorff dimension, this implies that \(\dim _H K =s.\) The case \(s=\dim _H A\) is trivial. Thanks to Theorem 4.2, we have the same result for the packing dimension. \(\square \)

Theorem 4.4

Suppose \(f\in {\mathcal {C}}[0,1]\). Then, for each \(0 \le \beta \le \dim _H G_f\), there exists a compact set \(K \subset [0,1]\) such that \(\dim _H G_f(K) =\beta \), where \(G_f(K):=\{(x,f(x)): x \in K\} \subset {\mathbb {R}}^2.\) The analogous result holds for the packing dimension.

Proof

Let \(\beta \le \dim _H G_f.\) Using Lemma 4.3, we obtain a compact subset \(K_1\) of \(G_f\) such that \(\dim _H K_1=\beta .\) We now show that there exists a compact set \(K_2\subset [0,1]\) such that \(G_f(K_2)=K_1.\) Define \(K_2:=\{x \in [0,1]: (x,f(x)) \in K_1\}\); then \(G_f(K_2)=K_1\) since \(K_1 \subset G_f.\) To see that \(K_2\) is compact, let \((x_n)\) be a sequence in \(K_2\), so that \((x_n, f(x_n)) \in K_1.\) By the compactness of \(K_1\), there exists a convergent subsequence of \(\big ((x_n, f(x_n)) \big )\); denote the corresponding subsequence of \((x_n)\) again by \((x_n)\) and let \((x,f(x)) \in K_1\) be the limit. Then \((x_n)\) converges to x and \(x \in K_2\), which completes the proof. The argument for the packing dimension is identical. \(\square \)

Lemma 4.5

For fixed \(y_0,y_1 \in {\mathbb {R}}\) and \(\beta \in [1,2]\), there exists \(f \in {\mathcal {C}}(I)\) such that \(f(0)=y_0,~f(1)=y_1\) and \(\dim _H G_f=\beta .\)

Proof

In the light of Theorem 3.3, we choose \(h \in {\mathcal {C}}(I)\) with \(\dim _H G_h=\beta \); adding a suitable constant, which is a Lipschitz perturbation and hence leaves the dimension unchanged by Lemma 3.1, we may further assume \(h(0)=y_0.\) Define a Lipschitz mapping \(g:[0,1] \rightarrow {\mathbb {R}}\) by \(g(x):=(y_1 -h(1))x.\) Setting \(f:=g+h\), Lemma 3.1 yields \(\dim _H G_f=\beta \), while \(f(0)=y_0\) and \(f(1)=y_1.\) \(\square \)

The next theorem is a modification of Proposition 2.3 in [18]. For the convenience of the reader, we include the proof.

Theorem 4.6

Let X be a proper compact subset of [0, 1] and let \(f : X \rightarrow {\mathbb {R}}\) be a continuous function. Then, for each \(\beta \) with \(\max \{\dim _H G_f(X), 1\} \le \beta \le 2\), f can be extended to a continuous function \({\tilde{f}}:[0,1]\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} \dim _H G_{{\tilde{f}}}([0,1]) = \beta . \end{aligned}$$

The result also holds for the packing dimension.

Proof

Let X be a proper compact subset of [0, 1] and \(f:X \rightarrow {\mathbb {R}}\) a continuous function. Now we consider the following possibilities:

  1. (1)

    \(0,1\in X.\)

  2. (2)

    \(0 \in X\) and \(1 \notin X.\)

  3. (3)

    \(1 \in X\) and \(0 \notin X.\)

  4. (4)

    \(0,1\notin X\).

We write \([0,1]\backslash X\) for each of the four cases above as follows:

  1. (1)

    \([0,1]\setminus X = \displaystyle \bigcup \nolimits _{i = 1}^{\infty }(a_i,b_i)\), with \(a_i,b_i\in X\) for each \(i \in {\mathbb {N}}.\)

  2. (2)

    \([0,1]\setminus X = \displaystyle \bigcup \nolimits _{i = 1}^{\infty }(a_i,b_i)\cup \{1\}\), with \(a_i,b_i\in X\) for each \(i \in {\mathbb {N}}.\)

  3. (3)

    \([0,1]\setminus X = \displaystyle \bigcup \nolimits _{i = 1}^{\infty }(a_i,b_i)\cup \{0\}\), with \(a_i,b_i\in X\) for each \(i \in {\mathbb {N}}.\)

  4. (4)

    \([0,1]\setminus X = \displaystyle \bigcup \nolimits _{i = 1}^{\infty }(a_i,b_i)\cup \{0,1\}\), with \(a_i,b_i\in X\) for each \(i \in {\mathbb {N}}.\)

By the finite stability of the Hausdorff dimension (cf. [12]), it suffices to deal with the first case. Applying Lemma 4.5, suitably rescaled, to each interval \([a_i,b_i]\), we extend the function f as follows:

$$\begin{aligned} {\tilde{f}}(x) := {\left\{ \begin{array}{ll} f(x),&{}~~x\in X;\\ \\ g_{i}(x),&{}~~x\in (a_i,b_i)~ \text {for some}~ i \in {\mathbb {N}}, \end{array}\right. } \end{aligned}$$

where \(g_i(a_i)=f(a_i),g_i(b_i)=f(b_i)\) and \(\dim _H G_{g_i}=\beta .\) Clearly, \({\tilde{f}}\) is continuous on [0, 1]. Using the countable stability of the Hausdorff dimension (2.1), it follows that

$$\begin{aligned} \dim _H G_{{\tilde{f}}}([0,1]\backslash X) =\sup _{i \in {\mathbb {N}} } \{ \dim _H G_{{\tilde{f}}}((a_i,b_i)) \} = \sup _{i \in {\mathbb {N}} } \{ \beta \} = \beta , \end{aligned}$$

and

$$\begin{aligned} \dim _{H} G_{{\tilde{f}}}([0,1])&= \max \{ \dim _H G_{{\tilde{f}}}(X) ,\dim _{H} G_{{\tilde{f}}} ([0,1]\backslash X) \}\\&= \max \{ \dim _H G_{f}(X) , \beta \}\\&=\beta . \end{aligned}$$

Hence we obtain the result for the Hausdorff dimension. Since the packing dimension is also countably stable, the result for \(\dim _P\) follows immediately.

\(\square \)

5 Summary

In this article, we investigated a new notion of constrained approximation based on fractal dimensions. Further, we constructed dimension preserving approximants of a prescribed function. In the last part of the article, we investigated restrictions and extensions of continuous functions in terms of fractal dimensions.