1 Introduction

We provide an approximation of bounded domains from inside and from outside by uniform domains in doubling quasiconvex metric spaces. A metric space \((X,d)\) is called (metrically) doubling, if there exists a constant \(C_d\) so that for all \(r>0\), any ball of radius r can be covered by \(C_d\) balls of radius r/2. A metric space is called quasiconvex, if there exists a constant \(C_q < \infty \) such that any \(x,y \in X\) can be connected by a curve \(\gamma \) in X with the length bound

$$\begin{aligned} \ell (\gamma ) \le C_q \mathrm{{d}}(x,y). \end{aligned}$$

A domain \(\Omega \subset X\) is called uniform, if there exists a constant \(C_u < \infty \) such that for every \(x,y \in \Omega \), there exists a curve \(\gamma \subset \Omega \) such that

$$\begin{aligned} \ell (\gamma ) \le C_u \mathrm{{d}}(x,y) \end{aligned}$$

and for all \(z \in \gamma \) it holds

$$\begin{aligned} \min \left\{ \ell (\gamma _{x,z}),\ell (\gamma _{z,y})\right\} \le C_u{{\,\mathrm{dist}\,}}(z,X\setminus \Omega ), \end{aligned}$$

where \(\gamma _{x,z}\) and \(\gamma _{z,y}\) denote the shortest subcurves of \(\gamma \) joining z to x and y, respectively.
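For instance, the interval \(\Omega = (0,1) \subset {\mathbb {R}}\) is uniform with \(C_u = 1\): for \(x < y\), the segment \(\gamma = [x,y]\) satisfies \(\ell (\gamma ) = \mathrm{{d}}(x,y)\) and, for every \(z \in \gamma \),

$$\begin{aligned} \min \left\{ \ell (\gamma _{x,z}),\ell (\gamma _{z,y})\right\} = \min \{z-x,\,y-z\} \le \min \{z,\,1-z\} = {{\,\mathrm{dist}\,}}(z,{\mathbb {R}}\setminus \Omega ). \end{aligned}$$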

With the definitions now recalled we can state the result of this paper.

Theorem 1.1

Let \((X,d)\) be a doubling quasiconvex metric space and \(\Omega \subset X\) a bounded domain. Then for every \(\varepsilon > 0\), there exist uniform domains \(\Omega _I\) and \(\Omega _O\) such that

$$\begin{aligned} \Omega _I \subset \Omega \subset \Omega _O, \end{aligned}$$

\(\Omega _O \subset B(\Omega ,\varepsilon )\), and \(X \setminus \Omega _I \subset B(X \setminus \Omega ,\varepsilon )\).

In the above theorem we have used the notation

$$B(A,r)= \bigcup_{x \in A}B(x,r) $$

for the open \(r\)-neighbourhood of a set \(A \subset X\), with \(r>0\), and \(B(x,r)\) denoting the open ball of radius \(r\) centred at a point \(x \in X\).

Although there are characterizations of uniform domains in metric spaces, for instance via tangents [6], we are not aware of previous general existence results such as Theorem 1.1.

The setting of Theorem 1.1 is motivated by Sobolev- and BV-extension domains in complete metric measure spaces with a doubling measure and supporting a \((1,p)\)-Poincaré inequality (\(p\)-PI spaces for short). A measure \(\mu\) on \((X,d)\) is doubling, if there exists a constant \(C>0\) such that \(\mu (B(x,2r)) \leq C\mu(B(x,r)) \) for every \(x \in X\) and \(r>0\). Recall that a metric space supporting a positive and locally finite doubling measure \(\mu\) is doubling in the metric sense. The metric measure space \((X,d,\mu)\) supports a \((1,p)\)-Poincaré inequality if there exist constants \(C,\lambda \geq 1\) so that the following holds: for any \(x \in X\) and \(r>0\) the ball \(B(x,r) \subset X\) has positive and finite \(\mu\)-measure and

$$\frac{1}{\mu(B(x,r))}\int_{B(x,r)}|u-u_{B(x,r)}|\,d\mu \le C r \left(\frac{1}{\mu(B(x,\lambda r))}\int_{B(x,\lambda r)} \rho^p \,d\mu \right)^\frac{1}{p}$$

holds for any measurable function \(u\) and any upper gradient \(\rho\) of \(u\), with \(u_{B(x,r)}\) denoting the average of \(u\) over \(B(x,r)\). On the one hand, \(p\)-PI spaces [5] are known to be quasiconvex [3, 10]. On the other hand, in [2] it was shown that uniform domains in \(p\)-PI spaces are \(N^{1,p}\)-extension domains for the Newtonian Sobolev spaces, for \(1 \leq p \leq \infty \), and in [11] it was shown that bounded uniform domains in 1-PI spaces are BV-extension domains. See [13] for the definitions of upper gradients and Newtonian Sobolev spaces and [1, 12] for the BV space.

The main purpose of this paper is to increase the applicability of the results in [2, 11] by providing a large collection of uniform domains. As a straightforward corollary, we have the following approximation result by extension domains.

Corollary 1.2

Let \(1 \leq p \leq \infty \), let \((X,d,\mu)\) be a complete metric measure space, with \(\mu \) doubling, supporting a \((1,p)\)-Poincaré inequality, and let \(\Omega \subset X\) be a bounded domain. Then \(\Omega \) can be approximated (as in Theorem 1.1) by \(N^{1,p}\)-extension domains and, in the case \(p=1\), also by BV-extension domains.

Notice also that in the case when \(\Omega \) is unbounded, we can for example fix a point \(x_0 \in \Omega \) and, for each \(i \in {\mathbb {N}}\), approximate the connected component of \(B(x_0,i) \cap \Omega \) containing \(x_0\) from inside by \(\Omega _i\) using Theorem 1.1 with the choice \(\varepsilon = 1/i\), and thus obtain

$$\begin{aligned} \Omega = \bigcup _{i=1}^\infty \Omega _i, \end{aligned}$$

with \(\Omega _i\) uniform for all \(i \in {\mathbb {N}}\).

2 Construction of the uniform domains

In the Euclidean setting, we could use closed dyadic cubes to construct the uniform domains. Using just the fact that a Euclidean cube is John (and not that it is in fact uniform), we could start with a finite union of cubes of some fixed side length, then take all the neighbouring cubes with side length a constant \(c \in (0,1)\) times that of the original ones, and continue taking smaller and smaller cubes. The main thing one has to take care of is that two points near the boundary that are some small distance r from each other can be connected by going via cubes not much larger than r in side length. This is handled by taking the constant c small enough, thanks to the following nice property of closed Euclidean dyadic cubes: if two cubes of side length l do not intersect, then their distance is at least l.
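This separation property is elementary: if the closed cubes \(Q = l(j + [0,1]^n)\) and \(Q' = l(j' + [0,1]^n)\), with \(j,j' \in {\mathbb {Z}}^n\), do not intersect, then \(|j_i - j'_i| \ge 2\) for some coordinate \(i\), and since the coordinate projections are 1-Lipschitz,

$${{\,\mathrm{dist}\,}}(Q,Q') \ge \max_{1 \le i \le n}\left( |j_i - j'_i| - 1\right) l \ge l.$$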

We will use the above idea in the metric setting. However, none of the dyadic cube constructions that we have seen (for instance [4, 7,8,9]) takes care of the separation of non-intersecting cubes, only of other properties such as nestedness and size. Luckily, we do not need a nested structure, nor a decomposition, so we will work with coverings by balls having the needed separation property. The existence of such coverings is provided by the next lemma.

Lemma 2.1

Let \((X,d)\) be a doubling metric space. Then there exists a constant \(c \in (0,1)\) depending only on the doubling constant so that for every \(r>0\), there exist r-separated points \(\{x_i\} \subset X\) and radii \(r_i \in [r,2r]\) such that

$$\begin{aligned} X \subset \bigcup _{i}B(x_i,r_i) \end{aligned}$$

and

$$\begin{aligned} \mathrm{{d}}(x_i,x_j) -r_i -r_j \notin (0,cr) \qquad \text {for all }i,j. \end{aligned}$$

Proof

Let \(\{x_i\}\) be a maximal r-separated net of points in X. By the maximality of the net, the balls \(B(x_i,r)\), and hence the balls \(B(x_i,r_i)\) for any choice of radii \(r_i \ge r\), cover X. We select suitable radii by induction. Let \(r_1 = r\). Suppose that \(r_1, \dots , r_k\) have been selected. Since the points \(x_i\) are r-separated, by the metric doubling property of \((X,d)\), there exists an integer \(N > 1\) depending only on the doubling constant \(C_d\) so that there are at most \(N-1\) points \(x_i \in \{x_1, \dots , x_k\}\) with \(\mathrm{{d}}(x_{k+1},x_i) \le 4r\). Write

$$\begin{aligned} I_k = \left\{i:d(x_i,x_{k+1})-r_i \in [r,2r],i \leq k\right\} \end{aligned}.$$

Then \(I_k\) contains at most \(N-1\) indices, since \(\mathrm{{d}}(x_i,x_{k+1}) \le r_i + 2r \le 4r\) for every \(i \in I_k\). Let \(\lambda_1< \lambda _2< \dots < \lambda _M\), with \(M<N\), be so that

$$\begin{aligned}\{ \lambda _j\}_{j=1}^M = \left\{ d(x_i,x_{k+1})-r_i :i \in I _k\right\} \end{aligned}.$$

Denote \(\lambda_0=r\) and \(\lambda_{M+1}=2r\). Let \(m\in\{0, \cdots ,M\}\) be the smallest integer for which \(\lambda_{m+1}-\lambda_m \geq r/N\). (If such \(m\) did not exist we would have

$$\begin{aligned}r= \lambda _{M+1} - \lambda _0= \sum_{j=0}^M (\lambda _{j+1}- \lambda _j) < (M+1)\frac{r}{N} \leq r,\end{aligned}$$

which is a contradiction.) We now define \(r_{k+1}=\lambda_m\). In particular, we then have

$$\begin{aligned}r_{k+1} \in [r,2r-r/N]\end{aligned}.$$

By the definition of \(m\), we have

$$\begin{aligned} d(x_i,x_{k+1}) - r_i - r_{k+1} \notin (0,r/N) \end{aligned}$$

for all \(i \in I_k\). Now, if \(i \leq k\) with \(i \notin I_k\), either \( d(x_i,x_{k+1}) - r_i < r \), in which case \( d(x_i,x_{k+1}) - r_i - r_{k+1} < 0 \), or \( d(x_i,x_{k+1}) - r_i > 2r \), in which case we have \( d(x_i,x_{k+1}) - r_i - r_{k+1} > r/N.\) Thus,

$$\begin{aligned} d(x_i,x_{k+1}) - r_i - r_{k+1} \notin (0,r/N) \end{aligned}$$

for all \(i \in\{1,\dots,k\}.\) This shows that the claim holds with the constant \( c = 1/N\). \(\square \)
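The selection of the radii above is a simple greedy procedure, and it may help to see it spelled out computationally. The following is a minimal sketch, not part of the argument: it assumes a finite \(r\)-separated list of points in Euclidean space and takes the integer \(N\) as an input (in the lemma, \(N\) comes from the doubling constant); the function name select_radii and all other names are ours.

```python
# Minimal sketch of the greedy radius selection from the proof of Lemma 2.1.
# Assumptions (illustrative, not from the paper): the points form a finite
# r-separated list in Euclidean space, and N is an integer such that at most
# N - 1 of the earlier centres lie within distance 4r of each new centre.

import math

def select_radii(points, r, N, dist=math.dist):
    """Return radii r_i in [r, 2r] with d(x_i, x_j) - r_i - r_j never in (0, r/N)."""
    radii = []
    for k, x_new in enumerate(points):
        if k == 0:
            radii.append(r)  # r_1 = r
            continue
        # Values lambda = d(x_i, x_{k+1}) - r_i for the indices in the set I_k.
        values = {dist(points[i], x_new) - radii[i]
                  for i in range(k)
                  if r <= dist(points[i], x_new) - radii[i] <= 2 * r}
        lam = sorted({r, 2 * r, *values})  # lambda_0 = r, ..., lambda_{M+1} = 2r
        # Smallest m with lambda_{m+1} - lambda_m >= r/N; the pigeonhole argument
        # in the proof guarantees such an m exists when N is large enough.
        m = next(j for j in range(len(lam) - 1) if lam[j + 1] - lam[j] >= r / N)
        radii.append(lam[m])
    return radii

# Tiny usage example on the real line with r = 1 and N = 5: the middle point
# forces a radius strictly larger than r.
if __name__ == "__main__":
    pts = [(0.0,), (2.1,), (4.4,)]
    print(select_radii(pts, r=1.0, N=5))  # approximately [1.0, 1.1, 1.0]
```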

With the replacement of the Euclidean dyadic cubes by balls given in Lemma 2.1, we can now follow the idea presented for the Euclidean case to prove the metric version.

Proof of Theorem 1.1

We start by noting that since our space \((X,d)\) is quasiconvex, the induced length distance

$$\begin{aligned} d_l(x,y) = \inf \left\{ \ell (\gamma )\,:\,\text {the curve } \gamma \text { joins }x \text { to }y\right\} \end{aligned}$$

satisfies \(d \le d_l \le C_qd\) with the quasiconvexity constant \(C_q\). If we assumed the space \((X,d)\) to be complete, the generalized Hopf-Rinow Theorem would tell us that \(d_l\) is in fact a geodesic distance. However, we want to avoid the extra assumption of completeness. In any case, because the property of being a uniform domain is invariant under a biLipschitz change of the distance, we may assume that \((X,d)\) is a length space.

Construction. The constructions of \(\Omega _I\) and \(\Omega _O\) are similar; the only difference is the starting point of the construction. Fix a point \(x_0 \in \Omega \) and let \(\tau \in (0,\min \{{{\,\mathrm{dist}\,}}(x_0,\partial \Omega ),1\})\). The choice of \(\tau \) will depend on \(\varepsilon \), and the estimate on how small \(\tau \) needs to be is postponed to the end of the proof. For constructing \(\Omega _O,\) we simply start with the set

$$\begin{aligned} E_1 = \Omega , \end{aligned}$$

and for \(\Omega _I,\) we take \(E_1\) to be the connected component of

$$\begin{aligned} \left\{ x \in X\,:\,{{\,\mathrm{dist}\,}}(x,X\setminus \Omega ) > \tau \right\} \end{aligned}$$
(2.1)

containing the fixed point \(x_0\). Let us consider the case \(\Omega _I\). Thus \(E_1\) is defined via (2.1).

Let \(c>0\) be the constant from Lemma 2.1. Define

$$\begin{aligned} \delta =\min \left\{ \frac{c}{20+c},\frac{\tau }{5+\tau }\right\} . \end{aligned}$$
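For later use, note that for every \(n \in {\mathbb {N}}\), this choice of \(\delta \) gives the two elementary estimates

$$\begin{aligned} 5\frac{\delta ^{n}}{1-\delta } \le \frac{1}{4}c\delta ^{n-1} \quad \left( \text {since } 20\delta \le c(1-\delta )\right) \qquad \text {and} \qquad \sum _{k=1}^\infty 5\delta ^k = \frac{5\delta }{1-\delta } \le \tau \quad \left( \text {since } 5\delta \le \tau (1-\delta )\right) , \end{aligned}$$

which will be used in (2.4) and at the very end of the proof, respectively. In particular, \(\delta \le 1/2\).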

We construct \(\Omega _I\) using induction as follows. Suppose \(E_k\) has been defined for a \(k \in {\mathbb {N}}\). Let \(\{x_i\}\) and \(\{r_i\}\) be the points and radii given by Lemma 2.1 for the choice \(r = \delta ^k\), and define

$$\begin{aligned} {\mathcal {B}}_k = \left\{ B(x_i,r_i) \,:\, B(x_i,r_i)\cap B(E_k,\delta ^k) \ne \emptyset \right\} . \end{aligned}$$

We then set

$$\begin{aligned} E_{k+1} = \bigcup _{B(x,r) \in {\mathcal {B}}_k} B(x,r). \end{aligned}$$

Finally, we define

$$\begin{aligned} \Omega _I = \bigcup _{k=1}^\infty E_k. \end{aligned}$$
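Computationally, one step of this induction simply replaces \(E_k\) by the union of those cover balls that meet its \(\delta ^k\)-neighbourhood. The following is a rough illustrative sketch, not part of the argument: the ambient space is replaced by a finite point cloud, the cover (centres and radii, imagined to come from Lemma 2.1) is supplied by the caller, and all names are ours.

```python
# Illustrative sketch of one inductive step E_k -> E_{k+1} of the construction:
# E_{k+1} is the union of the cover balls B(x_i, r_i) meeting the
# delta^k-neighbourhood of E_k. Here the ambient space X is replaced by a
# finite point cloud, and the cover is supplied by the caller (its radii are
# imagined to come from Lemma 2.1 with r = delta^k).

import math

def grow_step(cloud, E_k, cover, delta_k, dist=math.dist):
    """cloud: finite list of points standing in for X.
    E_k: sublist of cloud representing the current set.
    cover: list of (centre, radius) pairs with radius in [delta_k, 2*delta_k].
    Returns E_{k+1} as the points of the cloud lying in some selected ball."""
    # A ball B(x, r) meets B(E_k, delta_k) exactly when dist(x, E_k) < r + delta_k.
    selected = [(x, r) for (x, r) in cover
                if any(dist(x, e) < r + delta_k for e in E_k)]
    # Union of the selected balls, intersected with the ambient point cloud.
    return [p for p in cloud if any(dist(p, x) < r for (x, r) in selected)]

# Tiny usage example on a one-dimensional cloud with delta^k = 0.5.
if __name__ == "__main__":
    cloud = [(i / 10,) for i in range(0, 61)]            # 0.0, 0.1, ..., 6.0
    E1 = [(2.0,), (2.1,)]
    cover = [((c + 0.25,), 0.55) for c in range(7)]      # centres 0.25, 1.25, ..., 6.25
    print(grow_step(cloud, E1, cover, delta_k=0.5))
```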

Uniformity. Let us next show that \(\Omega _I\) is uniform. Take \(x,y \in \Omega _I\) with \(x \neq y\). Let \(k_x\) and \(k_y\) be the smallest integers such that \(x \in E_{k_x}\) and \(y \in E_{k_y}\). Without loss of generality, we may assume \(k_x \le k_y\).

Suppose first that \(\mathrm{{d}}(x,y) < \frac{1}{4}c\delta \). Let \(n \in {\mathbb {N}}\) be such that

$$\begin{aligned} \frac{1}{4}c\delta ^{n+1} \le \mathrm{{d}}(x,y) <\frac{1}{4}c\delta ^n. \end{aligned}$$

Notice that every ball from Lemma 2.1 at scale \(r = \delta ^k\) that contains a point of \(B(E_k,\delta ^k)\) belongs to \({\mathcal {B}}_k\), so that \(B(E_k,\delta ^k) \subset E_{k+1}\) at each construction step \(k+1\). Consequently, we have that

$$\begin{aligned} {{\,\mathrm{dist}\,}}(E_k, X \setminus \Omega _I) \ge \sum _{i=k}^\infty \delta ^i = \frac{\delta ^k}{1-\delta } \ge \delta ^k. \end{aligned}$$
(2.2)

Therefore, if \(k_x < n\), we may take \(\gamma\) to be a curve connecting \(x\) to \(y\) so that \(\ell(\gamma) < 2 d(x,y)\), in which case for all \(z \in \gamma\) we have

$${{\,\mathrm{dist}\,}}(z,X \setminus \Omega_I) \ge {{\,\mathrm{dist}\,}}(x,X \setminus \Omega_I)- d(z,x) >\delta^{k_x} - \ell(\gamma) > 4d(x,y) - 2d(x,y) \ge 2d(x,y),$$

and, consequently, we get uniformity with constant \(C_u = 2\).

If \(k_x \ge n\), we first connect x and y to \(E_n\). We do this as follows. Starting with x, let \(B(z,r) \in {\mathcal {B}}_{k_x-1}\) be such that \(x \in B(z,r)\), which exists by the definitions of \(k_x\) and \(E_{k_x}\). Next take \( v \in B(z,r) \cap B(E_{k_x-1}, \delta^{k_x-1})\), which is non-empty by the definition of \({\mathcal {B}}_{k_x-1}\), and \(w \in E_{k_x-1}\) with \(\mathrm{{d}}(v,w) < \delta ^{k_x-1}\). Now we take the concatenation \(\gamma _{k_x}^x\) of the curves \(\alpha _1\) going from x to z, \(\alpha _2\) going from z to v, and \(\alpha _3\) going from v to w, with the length bounds \(\ell (\alpha _1), \ell (\alpha _2) < r\) and \(\ell (\alpha _3)<\delta ^{k_x-1}\). Notice that \(\gamma_{k_x}^x\subset E_{k_x} \) and that the curve \(\gamma_{k_x}^x\) has the length bound

$$\begin{aligned} \ell (\gamma _{k_x}^x) < r + r + \delta ^{k_x-1} \le 5\delta ^{k_x-1}, \end{aligned}$$

and for the distance to the complement of \(\Omega _I\) we can estimate

$$\begin{aligned} {{\,\mathrm{dist}\,}}(\gamma _{k_x}^x,X \setminus \Omega _I) > \delta ^{k_x} \end{aligned}$$
(2.3)

by the fact that in the construction of \(E_{k_x+1}\) we take a \(\delta ^{k_x}\)-neighbourhood of \(E_{k_x}\) and the curve \(\gamma _{k_x}^x\) is contained in \(E_{k_x}\). We then continue inductively connecting w to \(E_{k_x-2}\) by \(\gamma _{k_x-1}^x\) and so on, until we have connected x to a point \(x'\) in \(E_n\).

The curve \(\gamma ^{x,x'}\) obtained by concatenating the previous curves \(\gamma _{k_x}^x, \gamma _{k_x-1}^x, \dots , \gamma _{n+1}^x\) has the length bound

$$\begin{aligned} \ell (\gamma ^{x,x'}) \le \sum _{i=n}^{k_x-1}5\delta ^i \le 5\frac{\delta ^n}{1-\delta } \le \frac{1}{4}c\delta ^{n-1}. \end{aligned}$$
(2.4)

With a similar construction, we connect y to a point \(y' \in E_n\) by a curve \(\gamma ^{y,y'}\) with length bounded from above by \(c\delta ^{n-1}/4\). We can bound the distance between \(x'\) and \(y'\) by

$$\begin{aligned} \mathrm{{d}}(x',y')& \le \mathrm{{d}}(x',x)+\mathrm{{d}}(x,y)+\mathrm{{d}}(y,y')\nonumber \\&< \frac{1}{4}c\delta ^{n-1} + \frac{1}{4}c\delta ^n + \frac{1}{4}c\delta ^{n-1}\nonumber \\& < c\delta ^{n-1}. \end{aligned}$$
(2.5)

Now we use the crucial separation property given by Lemma 2.1. Let \(B(z_x,r_x),B(z_y,r_y) \in {\mathcal {B}}_{n-1}\) be such that \(x' \in B(z_x,r_x)\) and \(y' \in B(z_y,r_y)\). Since the collection \({\mathcal {B}}_{n-1}\) was defined via Lemma 2.1 with the radius \(\delta ^{n-1}\), we have

$$\begin{aligned} \mathrm{{d}}(z_x,z_y) - r_x-r_y \notin (0,c\delta ^{n-1}), \end{aligned}$$

whereas (2.5) gives

$$\begin{aligned} \mathrm{{d}}(z_x,z_y) - r_x-r_y&\le \mathrm{{d}}(z_x,x') + \mathrm{{d}}(x',y')\\&\quad +\,\mathrm{{d}}(y',z_y)- r_x-r_y\\&\le \mathrm{{d}}(x',y') < c\delta ^{n-1}. \end{aligned}$$

Therefore, \(\mathrm{{d}}(z_x,z_y) \le r_x+r_y,\) and thus, we can connect \(x'\) to \(y'\) by a curve \(\gamma ^{x',y'}\) defined by going first with a curve \(\beta _1\) from \(x'\) to \(z_x\), then with \(\beta _2\) from \(z_x\) to \(z_y\) and finally with \(\beta _3\) from \(z_y\) to \(y'\). By selecting the curves so that \(\ell(\beta_1) < r_x \),

$$\ell(\beta_2) < r_x + r_y + \min\left\{r_x - \ell(\beta_1) , \delta^{n+1}\right\},$$

and \(\ell(\beta_3) < r_y \), the curve \(\gamma^{x',y'}\) has the length bound

$$\begin{aligned} \ell (\gamma ^{x',y'}) \le \ell(\beta_1) + \ell(\beta_2) + \ell(\beta_3) \le 2r_x+2r_y \le 8\delta ^{n-1}, \end{aligned}$$
(2.6)

and its distance to the complement of \(\Omega _I\) has the bound

$$\begin{aligned} {{\,\mathrm{dist}\,}}(\gamma ^{x',y'},X \setminus \Omega _I) > \delta ^n. \end{aligned}$$
(2.7)
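One way to verify (2.7) with this choice of the curves \(\beta _i\): the pieces \(\beta _1\) and \(\beta _3\) stay inside \(B(z_x,r_x)\) and \(B(z_y,r_y)\), respectively, and hence inside \(E_n\), while every \(z \in \beta _2\) satisfies \(\mathrm{{d}}(z,z_x)+\mathrm{{d}}(z,z_y) \le \ell (\beta _2) < r_x + r_y + \delta ^{n+1}\) and therefore lies within distance \(\delta ^{n+1}\) of \(B(z_x,r_x) \cup B(z_y,r_y) \subset E_n\). In both cases (2.2) gives

$$\begin{aligned} {{\,\mathrm{dist}\,}}(z,X\setminus \Omega _I) \ge \frac{\delta ^{n}}{1-\delta } - \delta ^{n+1} > \delta ^{n}. \end{aligned}$$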

Now, the curve \(\gamma \) obtained by concatenating \(\gamma ^{x,x'},\gamma ^{x'y'}\) and \(\gamma ^{y,y'}\) has, by (2.4) and (2.6), length at most

$$\begin{aligned} \ell (\gamma )&\le \frac{1}{4} c\delta ^{n-1} + 8\delta ^{n-1} + \frac{1}{4} c\delta ^{n-1}\nonumber \\&\le 9\delta ^{n-1} = \frac{36}{c\delta ^2} \cdot \frac{1}{4}c\delta ^{n+1}\nonumber \\&\le \frac{36}{c\delta ^2} \mathrm{{d}}(x,y). \end{aligned}$$
(2.8)

Let us check the uniformity for this curve. Let \(z \in \gamma \). Suppose first that \(z \in \gamma ^{x',y'}\). Then by (2.7) and (2.8), we get

$$\begin{aligned} \min \left\{ \ell (\gamma _{x,z}),\ell (\gamma _{z,y})\right\}&\le \frac{1}{2}\ell (\gamma ) \le \frac{9}{2}\delta ^{n-1} = \frac{9}{2\delta } \delta ^n\nonumber \\&\le \frac{9}{2\delta }{{\,\mathrm{dist}\,}}(z,X\setminus \Omega _I). \end{aligned}$$
(2.9)

By symmetry, it then remains to check the case \(z \in \gamma ^{x,x'}\). In this case there exists \(k \ge n+1\) such that \(z \in \gamma _k^x\), and by the analogue of (2.3) for \(\gamma _k^x\) together with the same estimate as in (2.4), we get

$$\begin{aligned} \min \left\{ \ell (\gamma _{x,z}),\ell (\gamma _{z,y})\right\} \le \sum _{i=k-1}^{k_x-1}5\delta ^i \le 5\frac{\delta ^{k-1}}{1-\delta } \le 10\delta ^{k-1} \le \frac{10}{\delta }\,{{\,\mathrm{dist}\,}}(z,X\setminus \Omega _I). \end{aligned}$$
(2.10)

By combining the estimates (2.8), (2.9) and (2.10), and noting that both \(9/(2\delta )\) and \(10/\delta \) are at most \(36/(c\delta ^2)\), we see that \(\gamma \) satisfies the uniformity condition with the constant \(C_u = 36/(c\delta ^2)\).

We are still left with proving the uniformity in the case \(\mathrm{{d}}(x,y) \ge (1/4)c\delta \). For this, we first observe that, as above, we can connect x to a point \(x' \in E_1\) and y to a point \(y' \in E_1\) by curves of length at most c/4, along which the distance to the boundary admits pointwise lower bounds sufficient for the uniformity condition. What remains is to connect \(x'\) to \(y'\) by a curve whose length is bounded from above by a constant independent of \(x'\) and \(y'\), and whose distance to the boundary of \(\Omega _I\) is bounded from below by another constant. This is achieved directly by compactness: on the one hand, any two points in the totally bounded set \(E_1\) can be joined by a rectifiable curve inside \(B(E_1,\delta/2) \subset \Omega_I \), the infimum of the lengths of curves joining two given points is a continuous function of the endpoints, and this function extends to the completion of \(E_1\) as a continuous function; thus, the needed constant upper bound for the lengths of the curves exists. On the other hand, the distance of these curves to the boundary of \(\Omega _I\) is at least \(\delta /2\).

Closeness. Let us then show that for every \(\varepsilon >0\), there exists \(\tau >0\) so that, using this \(\tau \) in the construction above, we get \(X \setminus \Omega _I \subset B(X \setminus \Omega ,\varepsilon )\).

In order to keep track of the dependence on \(\tau \), write now \(E_1(\tau )\) for the connected component of \(\left\{ x \in X\,:\,{{\,\mathrm{dist}\,}}(x,X\setminus \Omega ) > \tau \right\} \) containing \(x_0\). Since \(X \setminus B(X \setminus \Omega,\varepsilon)\) is totally bounded, there exists a finite set of points \(\{x_i\}_{i=1}^N \subset \Omega\) so that

$$X \setminus B(X \setminus \Omega,\varepsilon) \subset \bigcup_{i=1}^N B(x_i,\varepsilon / 2).$$

Each \(x_i\) can be connected to \(x_0\) by a curve inside \(\Omega\), and so there exists \(\tau_i>0\) for which \(x_i \in E_1(\tau_i) \). Consequently, with \(\tau = \min\{\varepsilon/2,\tau_1, \dots, \tau_N\} \) we have \( X \setminus B(X \setminus \Omega,\varepsilon) \subset E_1(\tau)\): every point of \(X \setminus B(X \setminus \Omega ,\varepsilon )\) lies in some \(B(x_i,\varepsilon /2)\) and can be joined to \(x_i\) by a curve of length less than \(\varepsilon /2\), along which the distance to \(X \setminus \Omega \) stays above \(\varepsilon /2 \ge \tau \). Thus,

$$\begin{aligned} X \setminus \Omega _I \subset X \setminus E_1(\tau ) \subset B(X \setminus \Omega ,\varepsilon ). \end{aligned}$$

The final thing we still need to observe is that \(\Omega _I \subset \Omega \). By the construction procedure, we have

$$\begin{aligned} E_{k+1} \subset B(E_k,5\delta ^k) \end{aligned}$$

for every \(k \in {\mathbb {N}}\). Thus, by the choice of \(\delta \) we get

$$\begin{aligned} \Omega _I \subset B(E_1,\sum _{k=1}^\infty 5\delta ^k) \subset B(E_1,\tau ) \subset \Omega . \end{aligned}$$

This completes the proof for \(\Omega _I\). The proof for \(\Omega _O\) goes through almost verbatim; only the argument for closeness becomes easier in this case. In particular, for \(\Omega _O\) one can simply take \(\tau = \varepsilon \). \(\square \)