1 Introduction

Capacities are intimately related to function spaces in the sense that various properties, such as quasicontinuity and Lebesgue points, of functions in such spaces are measured by a capacity. Capacities also reflect metric and measure-theoretic properties of the underlying space on which they are defined. For example, it is well known that the \(p\)-capacity of a spherical condenser in \(\textbf{R}^n\) with \(0<2r\le R\) reflects the dimension of the space as follows,

$$\begin{aligned} {{\,\textrm{cap}\,}}_{p}(B(x,r),B(x,R)) \simeq {\left\{ \begin{array}{ll} r^{n-p} &{}\text {if } 1\le p<n, \\ (\log (R/r))^{1-p} &{}\text {if } p=n, \\ R^{n-p} &{}\text {if } p>n. \end{array}\right. } \end{aligned}$$
(1.1)

Capacities also play an important role in fine potential theory and appear in the famous Wiener criterion characterizing boundary regularity for various equations, such as \(\Delta _pu=0\) (Maz’ya [30] and Kilpeläinen–Malý [22], with the \(p\)-capacity as in (1.1)) and the fractional \(p\)-Laplace equation \((-\Delta _p)^s u =0\) (Kim–Lee–Lee [23], using the fractional Besov capacity (1.4) below).

In this paper we study Besov capacities on a complete metric space \(Y=(Y,d)\) equipped with a doubling measure \(\nu .\) Analogously to (1.1), we are primarily interested in estimates for (thick) annuli, i.e. for the capacity of a ball \(B(x_0,r)\) within \(B(x_0,R),\) where \(0 < 2r \le R.\)

Throughout the paper we assume that \(1 \le p < \infty .\) We also fix a point \(x_0\) and let \(B_r=B(x_0,r)\) be the open ball with radius r and centre \(x_0.\)

The following are our main results.

Theorem 1.1

Assume that Y is a complete metric space which is uniformly perfect at \(x_0\) and equipped with a doubling measure \(\nu .\) Let \(p>1\) and \(0<\theta <1.\) Then for all \(0<2r\le R \le \tfrac{1}{4}{{\,\textrm{diam}\,}}Y,\)

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \simeq \biggl ( \int _r^R \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \,\frac{d\rho }{\rho } \biggr )^{1-p} \end{aligned}$$
(1.2)

and

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R) \simeq \biggl ( \int _0^R \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \,\frac{d\rho }{\rho } \biggr )^{1-p}, \end{aligned}$$
(1.3)

with the comparison constants in “\(\simeq \)” independent of \(x_0,\) r and R.

Here, \({{{\,\textrm{cap}\,}}_{\theta ,p}}\) is the Besov condenser capacity defined for bounded sets \(E \Subset \Omega \) as

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega ) = \inf _u \int _Y\int _Y \frac{|u(x)-u(y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,d(x,y)))}, \end{aligned}$$
(1.4)

where \(\Omega \) is open and the infimum is taken over all measurable u such that \(0 \le u \le 1\) everywhere, \(u = 1\) in a neighbourhood of E and \({{\,\textrm{supp}\,}}u\Subset \Omega .\)

Euclidean spaces and their subsets, equipped with the Lebesgue measure, with weighted measures \(w\,dx,\) or even with singular doubling measures, are included as special cases of our results. We emphasize that we do not assume any Poincaré inequalities for upper gradients on Y (as in Gogatishvili–Koskela–Zhou [17, Section 4] and Koskela–Yang–Zhou [25]). This makes our results applicable to many disconnected spaces and to spaces carrying few rectifiable curves, including fractals.
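For orientation, consider the special case when \(\nu \) is Ahlfors Q-regular, i.e. \(\nu (B(x,\rho ))\simeq \rho ^Q\) for all \(x\in Y\) and \(0<\rho \le {{\,\textrm{diam}\,}}Y\) (such spaces are automatically uniformly perfect). This case is not singled out in Theorem 1.1, but inserting \(\nu (B_\rho )\simeq \rho ^Q\) into (1.2) and (1.3) and computing the integrals gives, for \(p>1\) and \(0<2r\le R\le \tfrac{1}{4}{{\,\textrm{diam}\,}}Y,\)

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \simeq {\left\{ \begin{array}{ll} r^{Q-\theta p} &{}\text {if } \theta p<Q, \\ (\log (R/r))^{1-p} &{}\text {if } \theta p=Q, \\ R^{Q-\theta p} &{}\text {if } \theta p>Q, \end{array}\right. } \end{aligned}$$

in analogy with (1.1), with comparison constants depending also on Q and the regularity constants. Similarly, \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R)\simeq R^{Q-\theta p}>0\) if \(\theta p>Q,\) while \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R)=0\) if \(\theta p\le Q,\) since the integral in (1.3) then diverges.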

To formulate the next two results we need the following exponent sets:

These sets were introduced in Björn–Björn–Lehrbäck [5] to capture the local behaviour of the measure at \(x_0.\) For example, for the Lebesgue measure in \(\textbf{R}^n,\)

The subscript 0 in the above definitions stands for the fact that the inequalities are required to hold for small radii. It is easily verified (see [5, Lemmas 2.4 and 2.5]) that the exponent sets can equivalently be defined using \(0<r\le \Theta R\le R_0\) for any fixed \(0<\Theta <1\) and \(R_0>0,\) even though the implicit comparison constants in “\(\lesssim \)” and “\(\gtrsim \)” will then depend on \(\Theta \) and \(R_0.\)

All of these sets are intervals. The reason for introducing them as sets is that they may or may not contain their endpoints:

(1.5)

Note that if \(\nu \) is doubling, and that if Y is also uniformly perfect at \(x_0\) (see Heinonen [19, Exercise 13.1]).

When \(p>1\) and or \(,\) Theorem 1.1 provides us with exact estimates for the capacity \({{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R)\) in terms of \(\nu (B_r)\) or \(\nu (B_R).\) When \(p=1,\) Theorem 1.1 cannot be used, but we obtain the following similar estimates for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R)\) by using results from Björn–Björn–Lehrbäck [5], which cover all \(p\ge 1.\) The borderline cases and are considered in Theorem 9.1. See also Remarks 9.2 and 9.3.

Theorem 1.2

Assume that Y is a complete metric space which is uniformly perfect at \(x_0\) and equipped with a doubling measure \(\nu .\) Let \(0<\theta <1\) and \(0 < R_0 \le \tfrac{1}{4} {{\,\textrm{diam}\,}}Y,\) with \(R_0\) finite.

  1. (a)

    If \(,\) then

    $$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_{R})\simeq \frac{\nu (B_r)}{r^{\theta p}} \quad \text {whenever } 0<2r \le R \le R_0. \end{aligned}$$
    (1.6)
  2. (b)

    If \(,\) then

    $$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_{R})\simeq \frac{\nu (B_{R})}{R^{\theta p}} \quad \text {whenever } 0<2r \le R \le R_0. \end{aligned}$$
    (1.7)

In both cases, the comparison constants in “\(\simeq \)” depend on \(R_0.\)

Moreover,  the lower bound in (1.6) implies \(,\) while the lower bound in (1.7) implies \(.\) If \(p>1\) then (1.7) holds if and only if \(.\)

In Ahlfors regular spaces, estimates (1.6) and (1.7) were given in Lehrbäck–Shanmugalingam [28], and used to show that Besov-norm-preserving homeomorphisms between such spaces are quasisymmetric.

In many situations it is important whether singletons have zero or positive capacity. In the following result, we characterize these cases in terms of the exponent sets and \(.\)

Theorem 1.3

Assume that Y is a complete metric space which is uniformly perfect at \(x_0\) and equipped with a doubling measure \(\nu .\) Let \(0<\theta <1.\)

  1. (a)

    If \(,\) then

    $$\begin{aligned} {C_{\theta ,p}}(\{x_0\})>0 \quad \text {and} \quad {{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R)>0 \quad \text {for every }0< R <\tfrac{1}{2} {{\,\textrm{diam}\,}}Y, \end{aligned}$$

    where the capacity \({C_{\theta ,p}}\) is defined by means of the Besov norm as in Definition 3.1.

  2. (b)

    If (in particular if ),  or if \(p>1\) and \(,\) then

    $$\begin{aligned} {C_{\theta ,p}}(\{x_0\})=0 \quad \text {and} \quad {{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R)=0 \quad \text {for every }R>0. \end{aligned}$$

In Anttila [1], the numbers and are called the upper and lower local dimensions of \(\nu \) at \(x_0,\) while \(\overline{q}\) in Remark 9.2 is called the pointwise Assouad dimension of \(\nu \) at \(x_0.\) (See [5, Lemma 2.4] for why the definitions of and in [1] are equivalent to those in (1.5).) In [6], played a decisive role in determining the sharp integrability properties for \(p\)-harmonic Green functions and their gradients.

On \(\textbf{R}^n,\) the spaces defined by means of the energy integral in (1.4) are often called fractional Sobolev spaces and are the traces of Sobolev spaces on sufficiently nice domains (Jonsson–Wallin [20]). As such, they are suitable as boundary values for various Dirichlet problems and appear in boundary regularity results for elliptic differential equations (Kristensen–Mingione [26]).

For \(p=2,\) these spaces are related via Dirichlet forms to jump processes on fractals and metric spaces, see e.g. Kumagai [27] and Chen–Kumagai [12, Theorem 1.2]. They also play an important role for nonlinear nonlocal problems, such as the fractional \(p\)-Laplace equation \((-\Delta _p)^s u =0.\) These problems have attracted a lot of attention in the past two decades, see e.g. Kim–Lee–Lee [23], Korvenpää–Kuusi–Lindgren [24] and Lindgren–Lindqvist [29], to name just a few.

Recently, similar problems and the associated Besov spaces have been studied for metric measure spaces in e.g. Capogna–Kline–Korte–Shanmugalingam–Snipes [11], Eriksson-Bique–Giovannardi–Korte–Shanmugalingam–Speight [15], Gogatishvili–Koskela–Shanmugalingam [16], Gogatishvili–Koskela–Zhou [17] and Koskela–Yang–Zhou [25]. The role of Besov spaces as traces of Sobolev type spaces in the metric setting was studied in Bourdon–Pajot [10], Björn–Björn–Gill–Shanmugalingam [4] and Björn–Björn–Shanmugalingam [8], and will be one of our main tools.

Our approach to the above estimates is based on extensions of Besov functions from Y to hyperbolic fillings of Y,  together with estimates from Björn–Björn–Lehrbäck [5, 6] for \(p\)-capacities associated with Sobolev spaces. More precisely, we use the comparison between the Besov seminorms of functions on Y and the Dirichlet energy of their extensions to a uniformized hyperbolic filling of Y,  obtained in [8]. These constructions and comparisons are done in Sect. 5.

However, since the results in [8] only cover bounded spaces, special care has to be taken for unbounded Y. This is done in Sect. 7 by replacing Y with a suitably chosen bounded subset, so that the restriction of \(\nu \) is still doubling. Even when Y is bounded, it is only biLipschitz equivalent to the boundary of the uniformized hyperbolic filling of Y,  which would in turn put serious restrictions on the allowed radii r and R in our estimates. In Sect. 6 we therefore show how to replace Y by a carefully constructed enlarged space so that the involved capacities are comparable and all radii \(\le \tfrac{1}{4}{{\,\textrm{diam}\,}}Y\) can be treated.

Along the way, in Sects. 3 and 4, we prove various fundamental properties of Besov capacities in metric spaces, both for doubling and nondoubling measures, including in some cases also \(\theta \ge 1.\) Finally, in Sects. 8 and 9, we prove Theorems 1.1–1.3.

As mentioned above, we use hyperbolic fillings to obtain our main results. It would be interesting to find more direct proofs. On the other hand, our technique shows that there is a direct correspondence between these results and the corresponding results for Sobolev spaces in [5, 6].

2 Preliminaries

In this section we assume that \(X=(X,d)\) is a metric space equipped with a Borel measure \(\mu \) such that \(0< \mu (B)<\infty \) for every ball \(B \subset X.\) To avoid pathological situations we also assume that all metric spaces considered in this paper contain at least two points.

As is often customary we extend \(\mu ,\) and other measures, as outer measures defined on all sets. This plays a role at least in Proposition 3.3(ii).

A metric space is proper if all closed bounded sets are compact. We denote balls in X by

$$\begin{aligned} B(x,r)=\{y \in X: d(y,x) <r\} \quad \text {and let }\lambda B(x,r)=B(x,\lambda r). \end{aligned}$$

All balls in this paper are open. In metric spaces it can happen that balls with different centres and/or radii denote the same set. We will however use the convention that a ball comes with a predetermined centre and radius.

The space X is uniformly perfect at x if there is a constant \(\kappa >1\) such that

$$\begin{aligned} B(x,\kappa r) {\setminus }B(x,r) \ne \varnothing \quad \text {whenever } B(x,\kappa r) \ne X. \end{aligned}$$
(2.1)

In fact, it then follows that (2.1) holds whenever \(B(x,r) \ne X,\) since if \(B(x,\kappa r) = X\) then \(B(x,\kappa r) {\setminus }B(x,r) = X {\setminus }B(x,r) \ne \varnothing .\) We will use this observation without further ado.

The space X is uniformly perfect if it is uniformly perfect at every x with the same constant \(\kappa .\) This definition coincides with the one in Heinonen [19, Section 11.1], see therein for more on the history of this assumption. We do not know if pointwise uniform perfectness has been used before. Note that X is uniformly perfect with any \(\kappa >1\) if X is connected.

The measure \(\mu \) is doubling if there is a doubling constant \(C_\mu > 1\) such that

$$\begin{aligned} 0< \mu (2B)\le C_\mu \mu (B) < \infty \quad \text {for all balls } B. \end{aligned}$$

Similarly, \(\mu \) is reverse-doubling at x,  if there are constants \(C,\hat{\kappa }>1\) such that

$$\begin{aligned} \mu (B(x,\hat{\kappa }r))\ge C \mu (B(x,r)) \quad \text {for all }0<r < {{\,\textrm{diam}\,}}X/2\hat{\kappa }. \end{aligned}$$
(2.2)

By continuity of the measure, the estimate (2.2) holds also if \(r = {{\,\textrm{diam}\,}}X/2\hat{\kappa }< \infty ,\) as required in Björn–Björn–Lehrbäck [5]. If \(\mu \) is doubling, it is easy to see that X is uniformly perfect at x if and only if \(\mu \) is reverse-doubling at x. (For necessity we can choose any \(\hat{\kappa }> \kappa ,\) and for sufficiency any \(\kappa > 2\hat{\kappa }.\)) If \(\mu \) is doubling and X is connected, then \(\mu \) is reverse-doubling at every x with any \(\hat{\kappa }>1.\)
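For the reader's convenience, here is a minimal sketch of the sufficiency part just mentioned, i.e. that reverse-doubling at x with constant \(\hat{\kappa }\) implies uniform perfectness at x with any \(\kappa >2\hat{\kappa }\) (the necessity part additionally uses the doubling property and is not sketched here): Suppose that \(B(x,\kappa r)\ne X\) but \(B(x,\kappa r){\setminus }B(x,r)=\varnothing ,\) i.e. \(B(x,\kappa r)=B(x,r).\) Since \(B(x,\kappa r)\ne X,\) there is z with \(d(z,x)\ge \kappa r>2\hat{\kappa }r,\) so \(r<{{\,\textrm{diam}\,}}X/2\hat{\kappa }\) and (2.2) applies. But

$$\begin{aligned} \mu (B(x,\hat{\kappa }r)) \le \mu (B(x,\kappa r)) = \mu (B(x,r)), \end{aligned}$$

contradicting (2.2), since \(C>1\) therein.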

Throughout the paper, we write \(a \lesssim b\) if there is an implicit constant \(C>0\) such that \(a \le Cb,\) and analogously \(a \gtrsim b\) if \(b \lesssim a,\) and \(a \simeq b\) if \(a \lesssim b \lesssim a.\) The implicit comparison constants are allowed to depend on the standard parameters. We will carefully explain the dependence in each case. See Remarks 8.2 and 9.3 for the dependence in Theorems 1.1 and 1.2.

Sometimes, when dealing with several different spaces simultaneously, we will write \(B^X(x,r),\) \(d_X,\) \(,\) etc. to indicate that these notions are taken with respect to the metric space X. As mentioned in the introduction, we will often fix a point \(x_0 \in X\) and let \(B^X_r=B^X(x_0,r).\)

3 Besov spaces and capacities

In this section we assume that \(Y=(Y,d)\) is a proper metric space equipped with a Borel measure \(\nu \) such that \(0< \nu (B) <\infty \) for every ball \(B \subset Y.\) We also assume that \(\theta >0,\) and emphasize that in this section \(\theta \ge 1\) is allowed. Recall from the introduction that \(1 \le p <\infty \) throughout the paper.

For a measurable function \(u: Y \rightarrow [-\infty ,\infty ]\) (which is finite \(\nu \)-a.e.) we define the Besov seminorm by

$$\begin{aligned}{}[u]_{\theta ,p}= [u]_{\theta ,p,Y}= \biggl ( \int _Y\int _Y \frac{|u(x)-u(y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,d(x,y)))} \biggr )^{1/p}. \end{aligned}$$

Here and elsewhere, the integrand should be interpreted as zero when \(y=x.\)

The Besov space \(B^\theta _p(Y)\) consists of the functions u such that the Besov norm

$$\begin{aligned} \Vert u\Vert _{B^\theta _p(Y)}^p:=[u]_{\theta ,p}^p+\Vert u\Vert _{L^p(Y)}^p < \infty . \end{aligned}$$
(3.1)

This space is a Banach space, see Remark 9.8 in Björn–Björn–Shanmugalingam [8]. (The norm (3.1) is only equivalent to the one in [8], but the norm-capacity \({C_{\theta ,p}}\) below exactly coincides with the one in [8].)

We restrict our attention to Besov spaces with two indices (i.e. “\(q=p\)”). Such Besov spaces are often called fractional Sobolev spaces or Sobolev–Slobodetskiĭ spaces, although Besov spaces seem to be the most common name in the metric space literature.

Assuming that \(\nu \) is doubling, equivalent definitions, using equivalent seminorms, are given in Gogatishvili–Koskela–Shanmugalingam [16, Theorem 5.2 and (5.1)]. When \(\nu \) is also reverse-doubling (or equivalently, when Y is uniformly perfect), further equivalent definitions can be found in Gogatishvili–Koskela–Zhou [17, Theorem 4.1 and Proposition 4.1], for example that the Besov space \(B^\theta _p(Y)\) considered here coincides with the corresponding Hajłasz–Besov space. By [16, Lemmas 6.1 and 6.2], it is related to fractional Hajłasz spaces, considered already in Yang [35].

Our definition (3.1) is also equivalent to certain norms based on heat kernels (with “\(q=p\)”), under suitable a priori estimates for the kernel, see Saloff-Coste [33, Théorème 2] (on Lie groups) and Pietruska-Pałuba [32, Theorem 3.1] (on metric spaces). For \(p=2,\) our definition of \([u]_{\theta ,2}^2\) coincides with the energy used in connection with heat kernel estimates in Chen–Kumagai [12] when

$$\begin{aligned} J(x,y)=\frac{1}{\mu (B(x,d(x,y)))\rho (x,y)^{2\theta }} \end{aligned}$$

therein.

See the above papers for the precise definitions and earlier references to the theory on \(\textbf{R}^n,\) on fractals and on Ahlfors regular metric spaces.

We are interested in two types of Besov capacities, the norm-capacity and the condenser capacity.

Definition 3.1

The Besov norm-capacity of \(E \subset Y\) is

$$\begin{aligned} {C_{\theta ,p}}(E) = {C_{\theta ,p}^{Y}}(E) =\inf _u {\Vert u\Vert _{B^\theta _p(Y)}^p}, \end{aligned}$$

where the infimum is taken over all measurable u such that \(0 \le u \le 1\) everywhere and \(u = 1\) in a neighbourhood of E. Such u are called admissible for \({C_{\theta ,p}}(E).\)

By truncation it follows that one can equivalently take the infimum over all u such that \(u \ge 1\) in a neighbourhood of E. As usual, when requiring \(u \ge 1\) or that \(0 \le u \le 1\) everywhere we mean that there is a representative of u satisfying these requirements. By \(E \Subset \Omega \) we mean that \(\overline{E}\) is a compact subset of \(\Omega .\) Recall also that \({{\,\textrm{supp}\,}}u := \overline{\{x:u(x) \ne 0\}}.\)

Definition 3.2

Let \(\Omega \subset Y\) be a bounded open set and \(E \Subset \Omega .\) The Besov condenser capacity is given by

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega ) = {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ) = \inf _u {[u]_{\theta ,p}^p}, \end{aligned}$$

where the infimum is taken over all measurable u such that \(0 \le u \le 1\) everywhere, \(u = 1\) in a neighbourhood of E and \({{\,\textrm{supp}\,}}u\Subset \Omega .\) Such u are called admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega ).\)

The corresponding capacities for Sobolev spaces are called Sobolev resp. variational capacity in [2]. Condenser capacities are also often called “relative”.

There do not seem to be very many papers on Besov capacities in metric spaces. Nuutinen [31] and Heikkinen–Koskela–Tuominen [18] extensively studied the norm-capacity, defined using the Hajłasz–Besov norm, under the assumption that \(\nu \) is doubling. In [18], they also considered the corresponding Triebel–Lizorkin norm-capacity, which was later studied by Karak [21]. The Besov norm-capacity \({C_{\theta ,p}}\) was used by Björn–Björn–Shanmugalingam [8]. The Besov condenser capacity \({{{\,\textrm{cap}\,}}_{\theta ,p}}\) was studied in the Ahlfors Q-regular case by Bourdon [9] (\(p >Q\) and \(\theta =1/p\)), Costea [14] (\(p >Q\)) and Lehrbäck–Shanmugalingam [28].

Our main estimates remain the same (up to changes in implicit constants) when the (semi)norm is replaced by an equivalent (semi)norm. However, some of the basic properties, such as subadditivity, are not directly transferable between equivalent (semi)norms, although the proofs often are, so we include them here.

Proposition 3.3

Let \(E,E_1,E_2,\ldots \subset Y.\) Then the following properties hold:

  1. (i)

    if \(E_1 \subset E_2,\) then \({C_{\theta ,p}}(E_1) \le {C_{\theta ,p}}(E_2),\)

  2. (ii)

    \(\nu (E) \le {C_{\theta ,p}}(E),\)

  3. (iii)

    if \(K_1 \supset K_2 \supset \cdots \) are compact subsets of Y,  then

    $$\begin{aligned} {C_{\theta ,p}}\biggl (\bigcap _{i=1}^\infty K_i\biggr ) = \lim _{i \rightarrow \infty } {C_{\theta ,p}}(K_i), \end{aligned}$$
  4. (iv)

    \({C_{\theta ,p}}\) is countably subadditive,  i.e. if \(E=\bigcup _{i=1}^\infty E_i\) then

    $$\begin{aligned} {C_{\theta ,p}}(E) \le \sum _{i=1}^\infty {C_{\theta ,p}}(E_i). \end{aligned}$$

The monotonicity (i) is trivial, while (ii) follows directly from the definition. The property (iii) follows from the fact that \({C_{\theta ,p}}\) is an outer capacity (by definition), i.e.

$$\begin{aligned} {C_{\theta ,p}}(E) = \inf _{\begin{array}{c} G \supset E \\ G \text { open} \end{array}} {C_{\theta ,p}}(G), \end{aligned}$$

and elementary properties of compact sets, see Nuutinen [31, Section 3]. As for (iv), Nuutinen [31] only obtains quasi-subadditivity since he works in a more general setting in which the countable subadditivity does not always hold. We therefore provide a proof.

Proof of (iv)

We may assume that the right-hand side is finite. Let \(\varepsilon >0.\) For each \(i=1,2,\ldots ,\) choose \(u_i\) admissible for \({C_{\theta ,p}}(E_i)\) with

$$\begin{aligned}{}[u_i]_{\theta ,p}^p+\Vert u_i\Vert _{L^p(Y)}^p < {C_{\theta ,p}}(E_i)+ \frac{\varepsilon }{2^i}. \end{aligned}$$

Let \(u=\sup _i u_i.\) Then \(u = 1\) in a neighbourhood of \(\bigcup _{i=1}^\infty E_i.\) Moreover, for \(x,y \in Y,\)

$$\begin{aligned} |u(x)-u(y)|^p \le \sup _i |u_i(x)-u_i(y)|^p \le \sum _{i=1}^\infty |u_i(x)-u_i(y)|^p \end{aligned}$$

and similarly, \(|u(x)|^p = \sup _i |u_i(x)|^p \le \sum _{i=1}^\infty |u_i(x)|^p.\) Hence

$$\begin{aligned} {C_{\theta ,p}}(E)&\le ([u]_{\theta ,p}^p+\Vert u\Vert _{L^p(Y)}^p) \le \sum _{i=1}^\infty ([u_i]_{\theta ,p}^p+\Vert u_i\Vert _{L^p(Y)}^p) \\&< \sum _{i=1}^\infty \Bigl ( {C_{\theta ,p}}(E_i)+ \frac{\varepsilon }{2^i}\Bigr ) = \sum _{i=1}^\infty {C_{\theta ,p}}(E_i) + \varepsilon . \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\) completes the proof. \(\square \)

Proposition 3.4

Let \(\Omega \subset \Omega ' \subset Y\) be bounded open sets and \(E, E_1, E_2,\ldots \Subset \Omega .\) Then the following properties hold:

  1. (i)

    if \(E_1 \subset E_2 \subset \Omega ,\) then \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E_1,\Omega ') \le {{{\,\textrm{cap}\,}}_{\theta ,p}}(E_2,\Omega ),\)

  2. (ii)

    if \(K_1 \supset K_2 \supset \cdots \) are compact subsets of \(\Omega ,\) then

    $$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}\biggl (\bigcap _{i=1}^\infty K_i,\Omega \biggr ) = \lim _{i \rightarrow \infty } {{{\,\textrm{cap}\,}}_{\theta ,p}}(K_i,\Omega ), \end{aligned}$$
  3. (iii)

    \({{{\,\textrm{cap}\,}}_{\theta ,p}}\) is countably subadditive,  i.e. if \(E=\bigcup _{i=1}^\infty E_i\) then

    $$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega ) \le \sum _{i=1}^\infty {{{\,\textrm{cap}\,}}_{\theta ,p}}(E_i,\Omega ). \end{aligned}$$

Again, (i) is trivial, while (ii) follows from elementary properties of compact sets since \({{{\,\textrm{cap}\,}}_{\theta ,p}}\) is an outer capacity (by definition). The proof of (iii) is similar to the proof of Proposition 3.3(iv).

In the Ahlfors Q-regular case with \(p>Q > 1,\) these facts were stated in Costea [14] with a comment that the proof is essentially the same as in Costea [13, Theorem 3.1]. His proof of (iii) uses reflexivity. Our proof is considerably shorter and also covers the case \(1\le p\le Q\) as well as the non-Ahlfors regular case.

4 Capacity estimates when \(\nu \) is doubling

In this section we assume that Y is a complete metric space equipped with a doubling measure \(\nu \) and that \(0<\theta <1.\)

Note that Y is proper, see Björn–Björn [2, Proposition 3.1]. The comparison constants in this section are independent of the choice of \(x_0\) and the radii r and R. They depend only on \(\theta ,\) p and \(C_\nu \) unless said otherwise.

Our next aim is to deduce the following result, which will be important later on. Note that \(B_{2R} \ne Y\) whenever \(R < \tfrac{1}{4} {{\,\textrm{diam}\,}}Y.\) Recall the definition of uniform perfectness and the associated constant \(\kappa \) from (2.1).

Proposition 4.1

Assume that Y is uniformly perfect at \(x_0\) with constant \(\kappa .\) Fix \(0< \Theta < 1.\) If \(0 < \Theta R \le 2r \le R\) and \(B_{2R} \ne Y,\) then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \simeq \frac{\nu (B_r)}{r^{\theta p}} \simeq \frac{\nu (B_R)}{R^{\theta p}}, \end{aligned}$$

with comparison constants also depending on \(\kappa \) and \(\Theta .\)

We split the proof of Proposition 4.1 into two parts. We begin with the lower bound, which holds also when \(\theta \ge 1.\)

Proposition 4.2

Assume that Y is uniformly perfect at \(x_0\) with constant \(\kappa ,\) and that \(0 < 2r \le R \) with \(B_{2R} \ne Y.\) Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \gtrsim \frac{\nu (B_r)}{R^{\theta p}}, \end{aligned}$$

with comparison constant also depending on \(\kappa .\)

Proof

Let u be admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R).\) As \(B_{2R}\ne Y\) it follows from the uniform perfectness that there exists \(z\in B_{2\kappa R} {\setminus }B_{2R}.\) Since \(B(z,R) \cap B_R = \varnothing \) and \(d(x,y)\le (2\kappa +2)R\) for all \(x\in B(z,R)\) and \(y\in B_r,\) we get that

$$\begin{aligned}{}[u]_{\theta ,p}^p \ge \int _{B(z,R)}\int _{B_r} \frac{1}{((2\kappa +2)R)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,(2\kappa +2)R))} \gtrsim \frac{\nu (B_r)}{R^{\theta p}}. \end{aligned}$$

Taking the infimum over all u that are admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R)\) concludes the proof. \(\square \)

To prove the upper bound in Proposition 4.1 we will use the following simple lemma, which will also be used when proving Lemma 4.5.

Lemma 4.3

Assume that \(0\le \eta \le 1\) is an M-Lipschitz function on Y. If \(x \in Y,\) then

$$\begin{aligned} I(x):= \int _Y \frac{|\eta (x)-\eta (y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)}{\nu (B(x,d(x,y)))} \lesssim M^{\theta p}. \end{aligned}$$

Proof

Let \(B^j=B(x,2^j/M),\) \(j \in \textbf{Z}.\) Since \(\nu (B(x,d(x,y))) \simeq \nu (B^{j})\) for \(y\in B^{j} {\setminus }B^{j-1}\) and \(0<\theta <1,\) we see that

$$\begin{aligned} I(x)&\simeq \sum _{j=-\infty }^\infty \int _{B^j {\setminus }B^{j-1}} \frac{|\eta (x)-\eta (y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)}{\nu (B^{j})} \\&\lesssim \sum _{j=-\infty }^0 \int _{B^{j} {\setminus }B^{j-1}} \frac{M^p d(x,y)^{(1-\theta ) p}\, d\nu (y)}{\nu (B^{j})} + \sum _{j=1}^\infty \int _{B^{j} {\setminus }B^{j-1}} \frac{d(x,y)^{-\theta p} \,d\nu (y)}{\nu (B^{j})} \\&\lesssim \sum _{j=-\infty }^0 M^{\theta p} 2^{j(1-\theta ) p} + \sum _{j=1}^\infty M^{\theta p} 2^{-j\theta p} \simeq M^{\theta p}. \end{aligned}$$

\(\square \)

This now leads to the following estimate.

Proposition 4.4

Assume that \(0 < 2r \le R.\) Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \lesssim \min \biggl \{\frac{\nu (B_r)}{r^{\theta p}},\frac{\nu (B_R)}{R^{\theta p}}\biggr \}. \end{aligned}$$

Proof

Let \(u:=\min \{\max \{\tfrac{5}{2} - 3d(\,\cdot \,,x_0)/R,0\},1\}\) be a 3/R-Lipschitz function admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(B_{R/2},B_R).\) The doubling property and symmetry in x and y imply that

$$\begin{aligned}{}[u]_{\theta ,p}^p \simeq \int _{B_R} \int _{Y} \frac{|u(x)-u(y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\,d\nu (x)}{\nu (B(x,d(x,y)))}. \end{aligned}$$

Integrating the estimate from Lemma 4.3 over \(x\in B_R\) gives

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \le {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_{R/2},B_R) \lesssim \frac{\nu (B_R)}{R^{\theta p}}. \end{aligned}$$

Applying the last estimate with R replaced by 2r gives

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_r,B_R) \le {{{\,\textrm{cap}\,}}_{\theta ,p}}(B_{r},B_{2r}) \lesssim \frac{\nu (B_r)}{r^{\theta p}}. \end{aligned}$$

\(\square \)

Proof of Proposition 4.1

This follows directly from Propositions 4.2 and 4.4, together with the doubling property. \(\square \)

Lemma 4.5

Assume that \(\Omega \subset Y\) is a bounded open set and \(E \Subset \Omega .\) If \({C_{\theta ,p}}(E)=0,\) then \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega )=0.\)

Proof

Since \(E \Subset \Omega ,\) there is a Lipschitz function \(0 \le \eta \le 1\) such that \(\eta =1\) in a neighbourhood of E and \({{\,\textrm{supp}\,}}\eta \Subset \Omega .\) Let M be the Lipschitz constant of \(\eta \) and let \(\varepsilon >0.\) As \({C_{\theta ,p}}(E)=0\) there is a function u admissible for \({C_{\theta ,p}}(E)\) with \(\Vert u\Vert _{B^\theta _p(Y)}^p < \varepsilon .\) Let \(v=u\eta .\) Then

$$\begin{aligned} |v(x)-v(y)|&= |u(x)\eta (x)-u(x)\eta (y)+u(x)\eta (y)- u(y)\eta (y)| \\&\le u(x) |\eta (x)-\eta (y)| + |u(x)- u(y)|. \end{aligned}$$

Hence, by Lemma 4.3,

$$\begin{aligned}{}[v]_{\theta ,p}^p&\le 2^p \int _Y u(x)^p \int _Y \frac{|\eta (x)-\eta (y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,d(x,y)))} + 2^p [u]_{\theta ,p}^p \\&\lesssim M^{\theta p} \Vert u\Vert _{L^p(Y)}^p + [u]_{\theta ,p}^p \le (M^{\theta p}+1) \Vert u\Vert _{B^\theta _p(Y)}^p < (M^{\theta p}+1) \varepsilon . \end{aligned}$$

As \(v =1\) in a neighbourhood of E and \({{\,\textrm{supp}\,}}v \Subset \Omega ,\) we see that

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega ) \le [v]_{\theta ,p}^p \lesssim (M^{\theta p}+1) \varepsilon . \end{aligned}$$

Letting \(\varepsilon \rightarrow 0\) completes the proof. \(\square \)

Note that the converse of Lemma 4.5 does not hold in general; consider e.g. a compact Y in which case \({{{\,\textrm{cap}\,}}_{\theta ,p}}(Y,Y)=0\) (as \(u \equiv 1\) is admissible) while \({C_{\theta ,p}}(Y) \ge \nu (Y) >0.\) Nevertheless, we will prove the following characterization.

Proposition 4.6

Assume that \(\Omega \subset Y\) is a bounded open set such that \(\nu (Y {\setminus }\Omega )>0.\) Let \(E \Subset \Omega .\) Then \({C_{\theta ,p}}(E)=0\) if and only if \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega )=0.\)

The following simple observation will serve as a Poincaré type inequality. We will use it to prove Proposition 4.6 as well as Lemma 4.9 below.

Lemma 4.7

If u is a measurable function such that \(u=0\) outside a bounded measurable set \(\Omega \) and \(A \subset Y {\setminus }\Omega \) is a bounded measurable set with \(\nu (A)>0,\) then for every \(z\in A,\)

$$\begin{aligned} \int _Y |u|^p \, d\nu \le R^{\theta p} \frac{\nu (B(z,R))}{\nu (A)} [u]_{\theta ,p}^p, \end{aligned}$$

where

$$\begin{aligned} R={{\,\textrm{diam}\,}}A + \sup \{d(x,y) : x \in A \text { and } y \in \Omega \}. \end{aligned}$$

Proof

Since \(u=0\) outside \(\Omega ,\) and in particular in A,  and \(B(x,d(x,y))\subset B(z,R)\) for all \(x\in A\) and \(y\in \Omega ,\) we see that

$$\begin{aligned} \int _Y |u|^p \, d\nu&= \frac{1}{\nu (A)} \int _{A}\int _{\Omega } |u(x)-u(y)|^p \, d\nu (y)\, d\nu (x) \\&\le R^{\theta p} \frac{\nu (B(z,R))}{\nu (A)} \int _{A} \int _\Omega \frac{|u(x)-u(y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,d(x,y)))}. \end{aligned}$$

\(\square \)

Proof of Proposition 4.6

One implication was shown in Lemma 4.5. Conversely, assume that \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega )=0.\) Let \(A=B {\setminus }\Omega ,\) where B is a large enough ball so that \(\nu (A)>0.\) Let \(z \in A\) and \(\varepsilon >0.\) Then there is u admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,\Omega )\) with \([u]_{\theta ,p}^p < \varepsilon .\) Since u is admissible also for \({C_{\theta ,p}}(E),\) Lemma 4.7 implies that

$$\begin{aligned} {C_{\theta ,p}}(E) \le \biggl ( 1+R^{\theta p} \frac{\nu (B(z,R))}{\nu (A)} \biggr ) \varepsilon , \end{aligned}$$

and letting \(\varepsilon \rightarrow 0\) gives \({C_{\theta ,p}}(E)=0.\) \(\square \)

As an immediate consequence of Proposition 4.6 and monotonicity, we obtain the following characterization.

Corollary 4.8

The following are equivalent : 

  1. (a)

    \({C_{\theta ,p}}(\{x_0\})=0,\)

  2. (b)

    \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_r)=0\) for every \(r>0,\)

  3. (c)

    \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_r)=0\) for some \(0< r < \tfrac{1}{2} {{\,\textrm{diam}\,}}Y.\)

If \(Y=[-1,1]\) (with Lebesgue measure), \(x_0=0\) and \(r > 1 = \tfrac{1}{2} {{\,\textrm{diam}\,}}Y,\) then \(u \equiv 1\) is admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_r)\) and thus \({{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_r)=0.\) On the other hand \({C_{\theta ,p}}(\{x_0\})>0\) if \(\theta p > 1,\) by Theorem 1.3. This shows that the range in (c) is sharp.
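For \(p>1,\) the positivity of \({C_{\theta ,p}}(\{x_0\})\) in this example can also be seen directly from (1.3) (a side computation, not needed above): since \(\nu (B_\rho )\simeq \rho \) for \(0<\rho \le 1,\) formula (1.3) gives, for \(0<R\le \tfrac{1}{4}{{\,\textrm{diam}\,}}Y=\tfrac{1}{2},\)

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(\{x_0\},B_R) \simeq \biggl ( \int _0^R \rho ^{(\theta p-1)/(p-1)} \,\frac{d\rho }{\rho } \biggr )^{1-p} \simeq R^{1-\theta p}>0 \quad \text {if } \theta p>1. \end{aligned}$$

By Corollary 4.8, this yields \({C_{\theta ,p}}(\{x_0\})>0.\)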

When \(\Omega \) is a ball, the following result gives more precise information than Lemma 4.5.

Lemma 4.9

Assume that \(E\subset B_r.\) Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r}) \lesssim (1+r^{-\theta p}) {C_{\theta ,p}}(E). \end{aligned}$$

If,  moreover,  Y is uniformly perfect at \(x_0\) with constant \(\kappa \) and \(Y{\setminus }B_{3 r}\ne \varnothing ,\) then

$$\begin{aligned} {C_{\theta ,p}}(E) \lesssim (1+r^{\theta p}) {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r}), \end{aligned}$$

with comparison constant also depending on \(\kappa .\)

Proof

Let u be admissible for \({C_{\theta ,p}}(E)\) and let \(0 \le \eta \le 1\) be a (2/r)-Lipschitz function such that \(\eta =1\) in a neighbourhood of \(B_r\) and \({{\,\textrm{supp}\,}}\eta \Subset B_{2r}.\) Let \(v=u\eta .\) Then v is admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r})\) and as in the proof of Lemma 4.5,

$$\begin{aligned} |v(x)-v(y)| \le u(x) |\eta (x)-\eta (y)| + |u(x)-u(y)|. \end{aligned}$$

Hence by symmetry and Lemma 4.3,

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r}) \le [v]_{\theta ,p}^p&\lesssim \int _{B_{2r}} \int _Y \frac{|v(x)-v(y)|^p}{d(x,y)^{\theta p}} \frac{d\nu (y)\, d\nu (x)}{\nu (B(x,d(x,y)))} \\&\lesssim r^{-\theta p} \int _{B_{2r}} |u|^{p}\, d\nu + [u]_{\theta ,p}^p. \end{aligned}$$

Taking the infimum over all u that are admissible for \({C_{\theta ,p}}(E)\) proves the first inequality in the statement of the lemma.

For the second inequality, note that every u admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r})\) is admissible also for \({C_{\theta ,p}}(E).\) Next, use the uniform perfectness at \(x_0\) to find \(z \in B_{3\kappa r} {\setminus }B_{3r}.\) Lemma 4.7 with \(\Omega =B_{2r},\) \(A=B(z,r)\) and \(R= (3\kappa +3) r,\) together with \(\nu (B(z,R))\lesssim \nu (B(z,r)),\) then implies that

$$\begin{aligned} {C_{\theta ,p}}(E) \le \int _Y |u|^p \, d\nu + [u]_{\theta ,p}^p \lesssim (1+r^{\theta p} ) [u]_{\theta ,p}^p. \end{aligned}$$

Taking the infimum over all u that are admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}}(E,B_{2r})\) concludes the proof. \(\square \)

We conclude this section by comparing capacities with respect to different underlying spaces, which will be a useful tool later on, when enlarging Y in Sect. 6 and when dealing with unbounded Y in Sect. 7. Since the seminorm \([u]_{\theta ,p}\) is nonlocal, the sets where u vanishes cannot be ignored.

Lemma 4.10

Let \(E \Subset \Omega \subset X \subset Y,\) with X compact and \(\Omega \) an open subset of Y. Assume that

$$\begin{aligned} \nu (B^X(x,r)) \simeq \nu (B^Y(x,r)) \quad \text {for all }x \in X \text { and }0< r < 2 {{\,\textrm{diam}\,}}X, \end{aligned}$$
(4.1)

and that for a.e. \(x\in \Omega ,\)

$$\begin{aligned} \int _{Y {\setminus }X} I(x,y) \,d\nu (y) \lesssim \int _{X {\setminus }\Omega } I(x,y) \,d\nu (y), \end{aligned}$$
(4.2)

where

$$\begin{aligned} I(x,y)=\frac{1}{d(x,y)^{\theta p}\,\nu (B^Y(x,d(x,y)))}. \end{aligned}$$

Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{X}}(E,\Omega ) \simeq {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ) \end{aligned}$$

with comparison constants also depending on the implicit comparison constants in (4.1) and (4.2).

Proof

Note that u is admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{X}}(E,\Omega )\) if and only if its zero extension to \(Y{\setminus }X\) is admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ).\) Hence it is enough to show that \([u]_{\theta ,p,X} \simeq [u]_{\theta ,p,Y}\) for any u admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ).\) Consider such a function u.

By (4.1), \([u]_{\theta ,p,X} \lesssim [u]_{\theta ,p,Y}.\) Conversely, the doubling property and symmetry in x and y,  together with (4.1) and (4.2), imply that

$$\begin{aligned}{}[u]_{\theta ,p,Y}^p&\simeq [u]_{\theta ,p,X}^p + \int _{\Omega } u(x)^p \int _{Y {\setminus }X} I(x,y) \,d\nu (y)\,d\nu (x) \\&\lesssim [u]_{\theta ,p,X}^p + \int _{\Omega } u(x)^p \int _{X {\setminus }\Omega } I(x,y) \,d\nu (y)\,d\nu (x) \simeq [u]_{\theta ,p,X}^p. \end{aligned}$$

\(\square \)

5 Hyperbolic fillings and capacities on them

In this section, we let Z be a compact metric space with \(0<{{\,\textrm{diam}\,}}Z <1\) and equipped with a doubling measure \(\nu .\) Let \(x_0\in Z\) be fixed.

Hyperbolic fillings will be one of our main tools when obtaining precise estimates for condenser capacities, based on results from Björn–Björn–Lehrbäck [5, 6]. We follow the construction of the hyperbolic filling in Björn–Björn–Shanmugalingam [8] as follows: Fix two parameters \(\alpha ,\tau >1\) and let X be a hyperbolic filling of Z,  constructed with these parameters. More precisely, fix \(z_0\in Z\) and set \(A_0=\{z_0\}.\) Note that \(Z=B^Z(z_0,1).\) By a recursive construction using Zorn’s lemma or the Hausdorff maximality principle, for each positive integer n we can choose a maximal \(\alpha ^{-n}\)-separated set \(A_n\subset Z\) such that \(A_n\subset A_m\) when \(m\ge n \ge 0.\) A set \(A\subset Z\) is \(\alpha ^{-n}\)-separated if \(d_Z(z,z')\ge \alpha ^{-n}\) whenever \(z,z'\in A\) are distinct. Then the balls \(B^Z(z,\tfrac{1}{2}\alpha ^{-n}),\) \(z\in A_n,\) are pairwise disjoint. Since \(A_n\) is maximal, the balls \(B^Z(z,\alpha ^{-n}),\) \(z\in A_n,\) cover Z.

We define the “vertex set”

$$\begin{aligned} V=\bigcup _{n=0}^\infty V_n, \quad \text {where } V_n=\{(z,n): z \in A_n\}. \end{aligned}$$

The vertices \(v=(x,n)\) and \(v'=(y,m)\) form an edge (denoted \([v,v']\)) in the hyperbolic filling X of Z if and only if \(|n-m|\le 1\) and

$$\begin{aligned} \tau B^Z(x,\alpha ^{-n})\cap \tau B^Z(y,\alpha ^{-m}) \ne \varnothing ,&\quad \text {if } m=n, \\ B^Z(x,\alpha ^{-n})\cap B^Z(y,\alpha ^{-m}) \ne \varnothing ,&\quad \text {if } m=n\pm 1. \end{aligned}$$

The hyperbolic filling X,  seen as a metric space with edges of unit length, is a Gromov hyperbolic space. Its uniformization \(X_\varepsilon \) with parameter \(\varepsilon =\log \alpha \) is given by the uniformized metric

$$\begin{aligned} d_\varepsilon (x,y) = \inf _\gamma \int _\gamma e^{-\varepsilon d(\cdot ,v_0)}\,ds = \inf _\gamma \int _\gamma \alpha ^{-d(\cdot ,v_0)}\,ds, \end{aligned}$$

where \(d(\,\cdot \,,v_0)\) denotes the graph distance to the root \(v_0=(z_0,0)\) of the hyperbolic filling, ds denotes the arc length, and the infimum is taken over all paths in X joining x to y. We let

be the completion of \(X_\varepsilon \) and equip it with the measure \(\mu _\beta \) as in [8, Section 10], with

$$\begin{aligned} \beta =\varepsilon (1-\theta )p. \end{aligned}$$

Roughly, \(\mu _\beta \) is obtained by smearing out the measure \(e^{-\beta n}\nu (B(x,\alpha ^{-n}))\) to the edges adjacent to the vertex \((x,n)\in V.\) Note that \(e^\varepsilon =\alpha \) and that \(\sigma ,\) appearing in various places in [8], is in our case

$$\begin{aligned} \sigma = \frac{\varepsilon }{\log \alpha } = 1. \end{aligned}$$
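For orientation, and for later use in the proof of Lemma 5.2, note that for a vertex \((z,n)\in V\) the vertical path through the vertices \((z,m),\) \(m\ge n\) (these are vertices since \(A_n\subset A_m\)), has uniformized length

$$\begin{aligned} \int _n^\infty e^{-\varepsilon t} \, dt = \frac{\alpha ^{-n}}{\varepsilon }, \end{aligned}$$

because \(d(\,\cdot \,,v_0)\) equals \(m+t\) at the point at distance t from (z,m) along the edge \([(z,m),(z,m+1)].\) In particular, \(d_\varepsilon ((z,n),z)\le \alpha ^{-n}/\varepsilon .\) A matching lower bound, which follows since \(d(\,\cdot \,,v_0)\) is 1-Lipschitz with respect to arc length and tends to \(\infty \) along any path from (z,n) to Z, shows that this is in fact an equality, as used in the proof of Lemma 5.2.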

By [8, Proposition 4.4], Z and \(\partial _\varepsilon X\) are biLipschitz equivalent (since \(\sigma =1\)) and we will therefore identify them as sets. However, the metrics are different. More precisely, by [8, Proposition 4.4],

$$\begin{aligned} C_1 d_Z(x,y) \le d_\varepsilon (x,y) \le C_2 d_Z(x,y) \quad \text {for all }x,y\in Z, \end{aligned}$$
(5.1)

where \(C_1=1/2\tau \alpha ,\) \(C_2=4\alpha ^{(l+1)}/\varepsilon \) and l is the smallest nonnegative integer such that \(\alpha ^{-l}\le \tau -1.\)

Clearly, Z is uniformly perfect at \(x_0\) if and only if \(\partial _\varepsilon X\) is uniformly perfect at \(x_0\) (with comparable constants \(\kappa \) and \(\kappa _\varepsilon \)). Moreover, if \(\Omega \subset Z\) is open and \(E \Subset \Omega ,\) then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Z}}(E,\Omega ) \simeq {{{\,\textrm{cap}\,}}_{\theta ,p}^{\partial _\varepsilon X}}(E,\Omega ). \end{aligned}$$
(5.2)

Note however that because of (5.1), if E and \(\Omega \) are balls with respect to Z,  they will not in general be balls with respect to \(\partial _\varepsilon X,\) which needs to be taken into account when estimating the capacity of annuli.

We will need the Newtonian (Sobolev) space on and its Sobolev and condenser capacities, which we now introduce, see [2] or [8] for further details.

A property holds for \(p\)-almost every curve in if the curve family \(\Gamma \) for which it fails has zero \(p\)-modulus, i.e. there is \(0\le \rho \in L^p\) such that \(\int _\gamma \rho \,ds=\infty \) for every \(\gamma \in \Gamma .\) A measurable function \(g\ge 0\) is a \(p\)-weak upper gradient of u if for \(p\)-almost all rectifiable curves \(\gamma ,\)

$$\begin{aligned} |u(\gamma (0)) - u(\gamma (l_{\gamma }))| \le \int _{\gamma } g\,ds, \end{aligned}$$

where the left-hand side is \(\infty \) whenever at least one of the terms therein is infinite. If u has a \(p\)-weak upper gradient in \(,\) then it has a minimal \(p\)-weak upper gradient \(g_u\) in the sense that \(g_u \le g\) a.e. for every \(p\)-weak upper gradient g of u.

For measurable \(,\) we let

where the infimum is taken over all \(p\)-weak upper gradients of u. The Newtonian space on is

Note that functions in are defined pointwise everywhere, not only up to a.e.-equivalence classes.

The Sobolev capacity of is

where the infimum is taken over all such that \(u=1\) on E. The condenser capacity of \(E \subset \Omega \) with respect to an open set is

where the infimum is taken over all such that \(u=1\) on E and \(u=0\) outside \(\Omega .\) (In contrast to Definition 3.2, it is not required that \({{\,\textrm{supp}\,}}u \Subset \Omega .\)) For both capacities we call such u admissible.

By [8, Theorem 10.3], \({\mu _\beta }\) is doubling and supports a 1-Poincaré inequality on \(,\) i.e. there exist \(C,\lambda >0\) such that for each ball and for all integrable functions u and 1-weak upper gradients g of u on \(\lambda B,\)

(5.3)

where

As is geodesic, the dilation constant in the 1-Poincaré inequality can be chosen to be \(\lambda =1\) and moreover supports a (pp)-Poincaré inequality (i.e. (5.3) with averaged \(L^p\)-norms on both sides) with dilation \(\lambda =1,\) see e.g. [2, Theorem 4.39]. It thus follows from Björn–Björn–Shanmugalingam [7, Corollary 1.3] and [2, Theorems 6.7 (vii) and 6.19 (vii)] that and are outer capacities.

Another consequence of [8, Theorem 10.3] is that for every and \(x \in Z,\)

(5.4)

From (5.1) and (5.4) it follows that the exponent sets at \(x_0\in Z\) are the same for Z and \(\partial _\varepsilon X,\) and that, for \(q>0,\)

and similarly for the other exponent sets. Moreover, if Z is uniformly perfect at \(x_0,\) then the doubling property implies that all the exponent sets for \(\nu \) and \(\mu _\beta \) are nonempty, see [5, (2.3)]. Hence

(5.5)

and similarly for the other exponents. In particular,

(5.6)

We are now ready to estimate capacities on \(\partial _\varepsilon X\) in terms of capacities on \(,\) with the aim of later translating them into capacities on the original space Z. The comparison constants in this section are independent of the choice of \(x_0\) and radii r and R,  and depend only on \(\theta ,\) p,  \(C_\nu ,\) \(\alpha \) and \(\tau ,\) unless said otherwise.

Lemma 5.1

Assume that \(E \subset B^{\partial _\varepsilon X}_R.\) Then

Proof

As both capacities are outer, we may assume that E is open in Z. Let be admissible for \(.\) Then \(u=0\) outside and hence \(.\) Thus the restriction \(u|_Z\) is admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{\partial _\varepsilon X}}(E,B^{\partial _\varepsilon X}_{2R}),\) and by [8, Theorem 11.3],

Taking the infimum over all u that are admissible for shows that

where the last comparison follows from [2, Lemma 11.22]. \(\square \)

The following lemma controls how function values spread from Z to the hyperbolic filling. This property will be essential for obtaining a reverse estimate to Lemma 5.1.

Lemma 5.2

Assume that \(u\in B^\theta _p(Z)\) and \(b \in \textbf{R}\) are such that \(u=b\) in \(B^{\partial _\varepsilon X}(x,Lr),\) where \(L=1+\alpha (1+\varepsilon + C_2\varepsilon )\) with \(C_2\) as in (5.1). Let U be the extension of u to \(,\) given by

(5.7)

extended piecewise linearly (with respect to \(d_\varepsilon \)) to each edge in \(X_\varepsilon ,\) and then by

(5.8)

Then \(U=u\) \(\nu \)-a.e. in Z and \(U \equiv b\) in \(.\)

Proof

That \(U=u\) \(\nu \)-a.e. in Z was shown in [8, Theorem 12.1]. Let \(.\) Then y belongs to an edge \([v_1,v_2],\) where \(v_1=(x_1,n_1)\) and \(v_2=(x_2,n_2)\) are vertices in the hyperbolic filling. We can assume that \(n_1 \le n_2\le n_1+1.\) Then for \(j=1,2,\) since \(\alpha =e^\varepsilon ,\)

$$\begin{aligned} d_\varepsilon (y,v_j) \le \int _{0}^{1} \alpha ^{- n_1} \, dt = \alpha ^{-n_1} \quad \text {and} \quad d_\varepsilon (v_j,x_j) = \frac{\alpha ^{-n_j}}{\varepsilon } \le \frac{\alpha ^{-n_1}}{\varepsilon }. \end{aligned}$$

Since also

$$\begin{aligned} r > d_\varepsilon (y,x) \ge {{\,\textrm{dist}\,}}_\varepsilon (y,Z) \ge \int _{n_2}^\infty \alpha ^{-t} \, dt \ge \frac{\alpha ^{-n_1-1}}{\varepsilon }, \end{aligned}$$

we have for all \(z\in B^Z(x_j,\alpha ^{-n_j}),\) \(j=1,2,\) using also (5.1), that

$$\begin{aligned} d_\varepsilon (x,z)&< d_\varepsilon (x,y) + d_\varepsilon (y,v_j) + d_\varepsilon (v_j,x_j) + C_2 \alpha ^{- n_j} \\&< r + \alpha ^{-n_1}\biggl (1+\frac{1}{\varepsilon } + C_2\biggr ) < r + \alpha \varepsilon r\biggl (1+\frac{1}{\varepsilon } + C_2\biggr ) = Lr, \end{aligned}$$

and thus \(u(z)=b\) by assumption. It follows from (5.7) that \(U(x_j)=b,\) \(j=1,2,\) and hence also \(U(y)=b.\) For \(,\) the claim follows from (5.8). \(\square \)

Theorem 5.3

Assume that Z is uniformly perfect at \(x_0\) with constant \(\kappa ,\) and that \(E \subset B^{\partial _\varepsilon X}_R.\) If \(B^{\partial _\varepsilon X}_{3R} \ne Z\) then

(5.9)

with comparison constants also depending on \(\kappa .\)

Proof

The “\(\lesssim \)” inequality follows from Lemma 5.1, so it remains to show the “\(\gtrsim \)” inequality. As both capacities are outer, we may assume that E is open in Z. Let u be admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{\partial _\varepsilon X}}(E,B^{\partial _\varepsilon X}_{2R}).\) Consider the extension U to given by (5.7) and (5.8). It then follows from Lemma 5.2 that \(U=u\) \(\nu \)-a.e. in \(\partial _\varepsilon X,\) \(U \equiv 1\) on E and \(0 \le U \le 1\) on \(.\) Moreover, by [8, Theorem 12.1],

(5.10)

Let next be a 2/R-Lipschitz cut-off function with such that \(\eta =1\) in \(.\) Then, by [2, Theorem 2.15],

$$\begin{aligned} g_{\eta U} \le \eta g_U + U g_\eta \le g_U + \frac{2U}{R}. \end{aligned}$$

Since \(\eta U\) is admissible for \(,\) we have

(5.11)

In view of (5.10), it therefore suffices to estimate the last term in (5.11) using the first integral on the right-hand side. To this end, let \(,\) where \(\kappa _\varepsilon \) is the uniform perfectness constant of \(\partial _\varepsilon X\) at \(x_0.\) We will use that

$$\begin{aligned} \frac{{\mu _\beta }(B {\setminus }{{\,\textrm{supp}\,}}U)}{{\mu _\beta }(B)}\ge \Theta >0, \end{aligned}$$

where \(\Theta \) is independent of U and B and only depends on \(\varepsilon ,\) \(\kappa _\varepsilon \) and \(C_{\mu _\beta }.\) We postpone the verification of this to the end of the proof and first show how it leads us to conclude the proof. The Minkowski inequality yields

(5.12)

Since \(U_B=|U-U_B|\) in \(B{\setminus }{{\,\textrm{supp}\,}}U,\) we have

Inserting this into (5.12) and using the (pp)-Poincaré inequality for \({\mu _\beta }\) gives

Together with (5.10) and (5.11) the last estimate implies that

(5.13)

Taking the infimum over all u that are admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{\partial _\varepsilon X}}(E,B^{\partial _\varepsilon X}_{2R})\) shows the “\(\gtrsim \)” inequality in (5.9).

It remains to show that \(\Theta > 0.\) By the uniform perfectness and the fact that \(B^{\partial _\varepsilon X}_{3 R} \ne Z,\) there is some \(x \in B^{\partial _\varepsilon X}_{3 \kappa _\varepsilon R} {\setminus }B^{\partial _\varepsilon X}_{3 R}.\) Then \(u=0\) in \(B^{\partial _\varepsilon X}(x,R)\) and hence by Lemma 5.2, \(U=0\) in \(.\) From this and the doubling property of \({\mu _\beta }\) we see that

where \(\Theta \) only depends on \(\varepsilon ,\) \(\kappa _\varepsilon \) and \(C_{\mu _\beta }.\) \(\square \)

We are interested in the Besov capacity of annuli in Z. This is related to the Besov capacity of annuli in \(\partial _\varepsilon X\) (through (5.1) and (5.2)), which in turn is related to the capacity of annuli in as follows.

Theorem 5.4

Assume that Z is uniformly perfect at \(x_0\) with constant \(\kappa .\) Let \(0 < 2r \le R\) and let \(L=1+\alpha (1+\varepsilon + C_2\varepsilon )\) be as in Lemma 5.2. Assume that \(B^{\partial _\varepsilon X}_{3R/2} \ne Z.\) Then

(5.14)

with comparison constant also depending on \(\kappa .\)

Proof

We proceed as in the proof of Theorem 5.3 with \(E=B^{\partial _\varepsilon X}_r\) and 2R replaced by R. Lemma 5.2 shows that the function U constructed in (5.7) and (5.8) satisfies \(U \equiv 1\) in and is thus admissible for \(,\) i.e. we can replace E by in (5.13). Taking the infimum over all u that are admissible for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{\partial _\varepsilon X}}(B^{\partial _\varepsilon X}_r,B^{\partial _\varepsilon X}_R)\) shows (5.14). \(\square \)

6 Enlarging Y

In this section we assume that Y is a compact metric space, equipped with a doubling measure \(\nu ,\) and let \(x_0\in Y\) be fixed.

Our aim is to embed Y into a suitable larger metric space Z. We will do this recursively, but in this section we only do the first step.

As Y is compact there is a point \(x_1\) such that \(d(x_1,x_0)=\max _{x \in Y} d(x,x_0).\) Let \(Y'=(Y',d',\nu ')\) be a copy of \(Y=(Y,d,\nu ),\) where we identify \(x_1\) with its copy, but do not identify any other points. Equip \({\widehat{Y}}=Y \cup Y'\) with the measure

$$\begin{aligned} {\hat{\nu }}(A)=\nu (A\cap Y)+\nu '(A\cap Y') \end{aligned}$$

and the metric \({\hat{d}}\) so that

$$\begin{aligned} {\hat{d}}(x,y)={\left\{ \begin{array}{ll} d(x,x_1)+d'(y,x_1), &{} \text {if } x \in Y \text { and } y \in Y', \\ d(x,y) &{} \text {if } x,y \in Y, \\ d'(x,y) &{} \text {if } x,y \in Y'. \end{array}\right. } \end{aligned}$$
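That \({\hat{d}}\) is a metric is routine; for completeness, the only cases of the triangle inequality that do not follow directly from the triangle inequalities in Y and \(Y'\) are those where the three points do not all lie in the same copy. For example, if \(x,z\in Y\) and \(y\in Y',\) then

$$\begin{aligned} {\hat{d}}(x,y)=d(x,x_1)+d'(y,x_1) \le d(x,z)+d(z,x_1)+d'(y,x_1) = {\hat{d}}(x,z)+{\hat{d}}(z,y), \end{aligned}$$

and \({\hat{d}}(x,z)=d(x,z)\le d(x,x_1)+d(x_1,z)\le {\hat{d}}(x,y)+{\hat{d}}(y,z).\) The remaining cases follow by symmetry between Y and \(Y'.\)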

Lemma 6.1

The measure \({\hat{\nu }}\) is doubling on \({\widehat{Y}}\) with doubling constant \(C_{\hat{\nu }}\le 2 C_\nu \) and satisfies

$$\begin{aligned} \nu (B^Y(x,r)) \le {\hat{\nu }}(B^{{\widehat{Y}}}(x,r)) \le 2\nu (B^Y(x,r)) \quad \text {if }x \in Y \text { and }r>0. \end{aligned}$$
(6.1)

Moreover,  if Y is uniformly perfect at \(x_0\) with constant \(\kappa ,\) then \({\widehat{Y}}\) is uniformly perfect at \(x_0\) with constant \(\hat{\kappa }=\max \{\kappa ,2\}.\)

Proof

That (6.1) holds follows directly from the construction. A similar formula holds if \(x \in Y'.\) It follows that \({\hat{\nu }}\) is doubling with \(C_{{\hat{\nu }}} \le 2 C_\nu .\)

As for the uniform perfectness, let \(r>0\) be such that \(B^{{\widehat{Y}}}_{\hat{\kappa }r} \ne {\widehat{Y}}.\) Then \(\hat{\kappa }r \le 3 d(x_0,x_1)\) and hence \(r \le \tfrac{3}{2} d(x_0,x_1).\) If \(r \le d(x_0,x_1)\) then \(x_1 \in Y {\setminus }B^Y_{r}\) and thus there is

$$\begin{aligned} y \in B^Y_{\kappa r} {\setminus }B^Y_r \subset B^{{\widehat{Y}}}_{\hat{\kappa }r} {\setminus }B^{{\widehat{Y}}}_r, \end{aligned}$$

by the uniform perfectness of Y. On the other hand, if \(d(x_0,x_1) < r \le \tfrac{3}{2} d(x_0,x_1),\) then \(B^{{\widehat{Y}}}_{\hat{\kappa }r} {\setminus }B^{{\widehat{Y}}}_r\) contains the copy of \(x_0\) in \(Y'.\) \(\square \)

The constant 2 in \(\hat{\kappa }\) in Lemma 6.1 is optimal as seen by the following example: Let \(Y=[-1,0] \cup \{1\}\) with \(x_0=0\) and \(x_1=1.\) In this case Y is uniformly perfect at 0 with any constant \(\kappa >1,\) but \({\widehat{Y}}\) is only uniformly perfect at 0 with constant \(\hat{\kappa }\ge 2.\)

From now on we denote the distance by d and the measure by \(\nu \) also on \({\widehat{Y}}.\)

Lemma 6.2

Let \(\Omega \subset B_{d(x_0,x_1)/2}\) be open and \(E \Subset \Omega .\) Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ) \simeq {{{\,\textrm{cap}\,}}_{\theta ,p}^{{\widehat{Y}}}}(E,\Omega ), \end{aligned}$$

with comparison constants depending only on \(\theta ,\) p and \(C_\nu .\)

Proof

Lemma 6.1 shows that (4.1) in Lemma 4.10 holds for the spaces \(Y\subset {\widehat{Y}}.\) By the doubling property of \(\nu ,\)

$$\begin{aligned} \nu ({\widehat{Y}}{\setminus }Y) \simeq \nu (B(x_1, \tfrac{1}{2} d(x_0,x_1))) \simeq \nu (Y {\setminus }\Omega ). \end{aligned}$$

Since for all \(x\in \Omega ,\) \(y\in {\widehat{Y}}{\setminus }Y\) and \(y'\in Y{\setminus }\Omega ,\) we have

$$\begin{aligned} d(x,y) \simeq d(x_0,x_1) \quad \text {and} \quad d(x,y') \le \tfrac{3}{2} d(x_0,x_1), \end{aligned}$$

the statement follows from Lemma 4.10. \(\square \)

7 From unbounded to bounded spaces

In this section, we let Y be a metric space equipped with a doubling measure \(\nu \) and fix \(x_0 \in Y.\) We also remind the reader that throughout the paper, balls without a specified centre are centred at \(x_0.\)

Lemma 7.1

Let \(Y_0=\{x_0\}\) and \(\delta >0.\) For \(n=0,1,\ldots ,\) let

$$\begin{aligned} Y_{n+1} = \bigcup _{x\in Y_n} B^Y(x, 2^{-n}\delta ) \quad \text {and} \quad Y'= \overline{\bigcup _{n=0}^\infty Y_n}. \end{aligned}$$

Also let \(\nu ':= \nu |_{Y'}.\) Then the following hold : 

  1. (a)

    \(B^Y_{\delta } \subset Y' \subset \overline{B^Y_{2\delta }},\)

  2. (b)

    \(\nu '\) is doubling with \(C_{\nu '} \le C_\nu ^6,\)

  3. (c)

    for all \(x\in Y'\) and \(0< r < 2{{\,\textrm{diam}\,}}Y',\)

    $$\begin{aligned} \nu '(B^{Y'}(x,r)) \le \nu (B^Y(x,r)) \le C_\nu ^5 \nu '(B^{Y'}(x,r)), \end{aligned}$$
  4. (d)

    if Y is uniformly perfect at \(x_0\) with constant \(\kappa ,\) then \(Y'\) is uniformly perfect at \(x_0\) with constant \(\kappa '=\max \{\kappa ,2\}.\)

Proof

That (a) holds is clear from the construction. If \(Y'=\{x_0\}\) then (b) is trivial, while (c) and (d) are void. So assume that \(Y' \ne \{x_0\}.\)

(c) The first inequality is obvious. By (a), \(r < 2 {{\,\textrm{diam}\,}}Y' \le 8 \delta .\) Let \(r'= 2^{3-k} \delta ,\) where \(k \ge 0\) is an integer such that \( \tfrac{1}{2} r < r' \le r.\) Then there is \(z\in Y_n\) for some \(n>k,\) such that \(d(z,x)<\tfrac{1}{8}r'.\) By construction, there are \(z_j\in Y_j,\) \(j=k,\ldots ,n-1,\) such that \(d(z_j,z_{j+1})<2^{-j}\delta ,\) where \(z_n:=z.\) Hence \(x':=z_k\in Y_k\) and

$$\begin{aligned} d(x,x')< 2^{1-k} \delta + d(z,x) < \tfrac{1}{4} r' + \tfrac{1}{8} r' = \tfrac{3}{8} r'. \end{aligned}$$

Moreover,

$$\begin{aligned} \nu (B^Y(x,r))&\le \nu (B^Y(x',4r')) \le C_\nu ^5 \nu (B^Y(x',\tfrac{1}{8} r')) \\&= C_\nu ^5 \nu '(B^{Y'}(x',\tfrac{1}{8} r')) \le C_\nu ^5 \nu '(B^{Y'}(x,\tfrac{1}{2} r')) \le C_\nu ^5 \nu '(B^{Y'}(x,r)). \end{aligned}$$

(b) Let \(x \in Y'\) and \(r >0.\) If \(r < 2 {{\,\textrm{diam}\,}}Y',\) then by (c),

$$\begin{aligned} \nu '(B^{Y'}(x,2r)) \le \nu (B^Y(x,2r)) \le C_\nu \nu (B^Y(x,r)) \le C_\nu ^6 \nu '(B^{Y'}(x,r)). \end{aligned}$$

If instead \(r \ge 2 {{\,\textrm{diam}\,}}Y',\) then \(B^{Y'}(x,2r)=Y'=B^{Y'}(x,r)\) and the estimate is trivial.

(d) Let \(r>0\) be such that \(B^{Y'}_{\kappa 'r} \ne Y'.\) Then \(Y {\setminus }B^Y_{\kappa ' r} \supset Y' {\setminus }B^{Y'}_{\kappa 'r}\ne \varnothing \) and \(\kappa ' r \le 2 \delta .\) Hence, if \(\kappa ' r \le \delta \) then there is \(z \in B^Y_{\kappa ' r} {\setminus }B^Y_{r} = B^{Y'}_{\kappa ' r} {\setminus }B^{Y'}_{r}.\) So we may assume that \(\delta < \kappa ' r \le 2 \delta .\) As \(Y_1=B^{Y'}_{\delta } \subset B^{Y'}_{\kappa 'r} \varsubsetneq Y'\) we see that \(Y_2 {\setminus }Y_1 \ne \varnothing .\) Therefore there are \(x_1 \in Y_1\) and \(x_2 \in Y_2 {\setminus }Y_1\) with \(d(x_1,x_2) < \tfrac{1}{2} \delta .\)

Assume for a contradiction that \(B^{Y'}_{\kappa ' r} {\setminus }B^{Y'}_r = \varnothing .\) Since \( r \le \delta < \kappa ' r\) we must have \(d(x_1,x_0) < r\) and hence also

$$\begin{aligned} d(x_2,x_0) \ge \kappa ' r \ge 2r > 2 d(x_1,x_0). \end{aligned}$$

Thus,

$$\begin{aligned} \tfrac{1}{2} \delta> d(x_1,x_2) \ge d(x_2,x_0) - d(x_1,x_0)> \tfrac{1}{2} d(x_2,x_0) \ge \tfrac{1}{2}\kappa ' r >\tfrac{1}{2} \delta , \end{aligned}$$

a contradiction. Hence \(B^{Y'}_{\kappa ' r} {\setminus }B^{Y'}_r \ne \varnothing .\) \(\square \)

The following lemma shows that for the condenser capacity, the (possibly unbounded) space Y can be effectively replaced by the bounded space \(Y'.\)

Lemma 7.2

Let \(Y'\) be the space constructed in Lemma 7.1 with parameter \(\delta >0.\) Assume that Y is uniformly perfect at \(x_0\) with constant \(\kappa .\) Let \(R=\delta /2 \kappa ,\) \(\Omega \subset B^Y_R\) be open and \(E \Subset \Omega .\) Then

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y'}}(E,\Omega ) \simeq {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ), \end{aligned}$$

with comparison constants depending only on \(\theta ,\) \(p,\) \(C_\nu \) and \(\kappa .\)

That the assumption of uniform perfectness cannot be dropped can be seen as follows: Let

$$\begin{aligned} Y=\{0\} \cup \bigcup _{j=1}^\infty \partial B(0,1/j!) \subset \textbf{R}^n \quad \text {and} \quad x_0=0. \end{aligned}$$

Since Y is a compact doubling metric space, it can be equipped with a doubling measure \(\nu ,\) see Volberg–Konyagin [34, Theorem 2] (or Heinonen [19, Theorem 13.3]). Note that Y is not uniformly perfect at 0.

Let \(\kappa >1\) be given and \(\delta =1/k!\) for some integer \(k>2\kappa .\) Then

$$\begin{aligned} Y'=B_\delta ^Y = B_R^Y, \quad \text {where } R=\delta /(2\kappa ). \end{aligned}$$
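To see this, note that the points of Y lie at distances \(1/j!\) from \(x_0=0\) and that \(k>2\kappa \) gives

$$\begin{aligned} \frac{1}{(k+1)!}< \frac{1}{2\kappa \,k!} = R< \delta =\frac{1}{k!}, \end{aligned}$$

so Y contains no points at distance in \([R,\delta )\) from 0, and thus \(B_\delta ^Y=B_R^Y.\) Moreover, the balls of radii \(2^{-n}\delta ,\) \(n\ge 1,\) centred at points of \(B_\delta ^Y\) in the construction of \(Y'\) never reach \(\partial B(0,1/k!),\) since \(\tfrac{1}{k!}-\tfrac{1}{(k+1)!}\ge \tfrac{1}{2}\delta .\) Hence \(Y_n=B_\delta ^Y\) for all \(n\ge 1\) and \(Y'=\overline{B_\delta ^Y}=B_\delta ^Y,\) as \(B_\delta ^Y\) is compact.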

Hence, for \(E:=\Omega :=B_R^Y,\) we have

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y'}}(E,\Omega )=0<{{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(E,\Omega ). \end{aligned}$$

Indeed, \(u\equiv 1\) is admissible for the capacity in \(Y',\) so \({{{\,\textrm{cap}\,}}_{\theta ,p}^{Y'}}(E,\Omega )=0,\) while every u admissible in Y equals 1 on \(B_R^Y\) and 0 on \(Y{\setminus }B_R^Y\ne \varnothing ,\) and hence has positive energy in (1.4).

Proof of Lemma 7.2

We shall use Lemma 4.10. If \(Y'=Y,\) there is nothing to prove, so assume that \(Y' \ne Y.\) Let \(Y_1\) and \(Y_2\) be as in Lemma 7.1. Then \(B^Y_{2 \kappa R} =B^Y_{\delta } = Y_1 \ne Y.\) By the uniform perfectness of Y,  there is some \(z \in B^Y_{2 \kappa R} {\setminus }B^Y_{2 R} \subset Y_1.\) Then \(B^Y(z,R) \subset Y_2 {\setminus }\Omega \subset Y' {\setminus }\Omega .\) Let \(x \in \Omega \) and \(y\in Y'.\) Then

$$\begin{aligned} d(x,y) \le 2\delta + R = (4\kappa +1)R, \end{aligned}$$

and hence, using that \(\nu '=\nu |_{Y'}\) is doubling by Lemma 7.1, we obtain

$$\begin{aligned} \nu '(B^{Y'}(x,d(x,y)))&\lesssim \nu '(B^{Y'}(x,R)) \\&\le \nu '(B^{Y'}(z,2(\kappa +1)R)) \simeq \nu '(B^{Y'}(z,R)) = \nu (B^Y(z,R)). \end{aligned}$$

Thus, with I(x,y) as in Lemma 4.10,

$$\begin{aligned} \int _{Y' {\setminus }\Omega } I(x,y) \,d\nu (y) \gtrsim \int _{B^Y(z,R)} \frac{d\nu (y)}{R^{\theta p}\nu (B^Y(z,R))} = R^{-\theta p}. \end{aligned}$$

On the other hand, for \(y\in A^j:=B^Y_{2^{j+1}\delta } {\setminus }B^Y_{2^j\delta },\) \(j=0,1,\ldots ,\) we have

$$\begin{aligned} d(x,y) \simeq 2^jR \quad \text {and} \quad \nu (B^Y(x,d(x,y))) \gtrsim \nu (A^j). \end{aligned}$$

Hence

$$\begin{aligned} \int _{Y {\setminus }Y'} I(x,y) \,d\nu (y)&\le \sum _{j=0}^\infty \int _{A^j} I(x,y) \,d\nu (y) \\&\lesssim \sum _{j=0}^\infty \frac{1}{(2^{j}R)^{\theta p}} \simeq {R^{-\theta p}} \lesssim \int _{Y' {\setminus }\Omega } I(x,y) \,d\nu (y). \end{aligned}$$

An application of Lemma 4.10, together with Lemma 7.1(c), concludes the proof. \(\square \)

8 Proof of Theorem 1.1

Lemma 8.1

Let \(0<\Theta _1<\Theta _2< \infty \) and \(R>0.\) If \(\nu \) is doubling,  then

$$\begin{aligned} \int _{\Theta _1R}^{\Theta _2R} \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \,\frac{d\rho }{\rho } \simeq \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)} \end{aligned}$$

with comparison constants depending only on \(\Theta _1,\) \(\Theta _2,\) \(\theta ,\) p and \(C_\nu .\)

Proof

By the doubling property of \(\nu ,\) we have \(\nu (B_\rho )\simeq \nu (B_{R})\) for all \(\Theta _1 R\le \rho \le \Theta _2R.\) The statement now follows by direct calculation of the integral. \(\square \)
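Spelling out the calculation: since \(\nu (B_\rho )\simeq \nu (B_R)\) on the interval of integration,

$$\begin{aligned} \int _{\Theta _1R}^{\Theta _2R} \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \frac{d\rho }{\rho }&\simeq \frac{1}{\nu (B_R)^{1/(p-1)}} \int _{\Theta _1R}^{\Theta _2R} \rho ^{\theta p/(p-1)-1} \,d\rho \\&= \frac{p-1}{\theta p} \bigl (\Theta _2^{\theta p/(p-1)}-\Theta _1^{\theta p/(p-1)}\bigr ) \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)} \simeq \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)}, \end{aligned}$$

with comparison constants depending only on \(\Theta _1,\) \(\Theta _2,\) \(\theta ,\) p and \(C_\nu .\)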

Remark 8.2

The comparison constants in Theorem 1.1 are independent of the choice of \(x_0.\) They depend only on \(\theta ,\) \(p,\) \(C_\nu \) and the uniform perfectness constant \(\kappa .\) In the proof below, the constants \(C_1\) and \(C_2\) (and thus the final comparison constants) also depend on \(\alpha \) and \(\tau .\) To avoid this dependence in Theorem 1.1, one can for example take \(\alpha =\tau =2.\) We have chosen not to fix \(\alpha \) and \(\tau ,\) so as to show that the proof does not depend on any particular choice of these parameters.

Proof of Theorem 1.1

Let \(0<C_1<1<C_2\) be the constants appearing in (5.1), which depend only on \(\alpha ,\) \(\tau \) and \(\varepsilon =\log \alpha .\) We can assume that \(\kappa \ge 2,\) since uniform perfectness with constant \(\kappa \) implies uniform perfectness with any larger constant. To be able to use the hyperbolic filling and the capacity results from Sect. 5, we need the results from either Sect. 6 or Sect. 7, depending on whether Y is bounded or unbounded.

If Y is bounded, we use the construction from Sect. 6 recursively N times (with N depending only on \(C_2/C_1\)) and replace Y by a suitable enlargement Z such that \(B^Z_{5C_2R/C_1}\ne Z.\) Note that by Lemma 6.1, the doubling constant of \(\nu \) increases at most by a factor depending only on N. Applying Lemma 6.2 with \(E=B_r\) and \(\Omega =B_R\) several times to the consecutive enlargements of Y, we obtain that

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(B_r,B_R) \simeq {{{\,\textrm{cap}\,}}_{\theta ,p}^{Z}}(B^Z_r,B^Z_R). \end{aligned}$$
(8.1)

If Y is unbounded, we let \(Z=Y',\) where \(Y'\) is as in Lemma 7.1 with \(\delta =5\kappa C_2 R/C_1,\) and then use Lemma 7.2 to obtain (8.1) also in this case. We still denote the restricted measure by \(\nu .\) By Lemma 7.1, the doubling constant of \(\nu \) is in this case at most raised to the sixth power. Note that \(B^Z_{5 C_2 R/C_1} \varsubsetneq Z\) by the uniform perfectness condition.

The uniform perfectness constant \(\kappa \ge 2\) remains the same for Z, both for bounded and for unbounded Y. Since the left- and right-hand sides in (1.2) and (1.3) scale in the same way, we may without loss of generality assume that \(0< {{\,\textrm{diam}\,}}Z < 1.\) Note that \(B^Y_\rho =B^Z_\rho \) for \(\rho \le R.\)
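One way to see the scaling step: replacing the metric d by \(d_\lambda :=\lambda d\) for some \(\lambda >0\) (chosen so that the rescaled diameter is less than 1) leaves \(\nu \) unchanged and turns the balls \(B(x,t)\) into \(B^{d_\lambda }(x,\lambda t),\) so the integrand defining the capacity in (1.4) transforms as

$$\begin{aligned} \frac{|u(x)-u(y)|^p}{d_\lambda (x,y)^{\theta p}\, \nu (B^{d_\lambda }(x,d_\lambda (x,y)))} = \lambda ^{-\theta p}\, \frac{|u(x)-u(y)|^p}{d(x,y)^{\theta p}\, \nu (B(x,d(x,y)))}, \end{aligned}$$

while the substitution \(\rho =\lambda s\) shows that the right-hand sides of (1.2) and (1.3), computed with respect to \(d_\lambda \) and the radii \(\lambda r\) and \(\lambda R,\) acquire exactly the same factor \(\lambda ^{-\theta p}.\)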

To conclude the proof, it suffices to estimate the latter capacity in (8.1). We consider two cases.

If \(2C_2r\ge C_1R,\) then Proposition 4.1 yields

$$\begin{aligned} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Z}}(B^Z_r,B^Z_R) \simeq \frac{\nu (B^Z_R)}{R^{\theta p}}, \end{aligned}$$

which by Lemma 8.1 is comparable to the right-hand side in (1.2).
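Indeed, in this case \(C_1R/(2C_2)\le r\le \tfrac{1}{2}R,\) so applying Lemma 8.1 twice, with \((\Theta _1,\Theta _2)=(\tfrac{1}{2},1)\) and with \((\Theta _1,\Theta _2)=(C_1/(2C_2),1),\) squeezes the integral in (1.2):

$$\begin{aligned} \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)}&\simeq \int _{R/2}^{R} \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \frac{d\rho }{\rho } \le \int _{r}^{R} \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \frac{d\rho }{\rho } \\&\le \int _{C_1R/(2C_2)}^{R} \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \frac{d\rho }{\rho } \simeq \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)}. \end{aligned}$$

Raising this to the power \(1-p\) shows that the right-hand side of (1.2) is comparable to \(\nu (B_R)/R^{\theta p}.\)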

If \(2C_2r\le C_1R,\) then we follow Sect. 5 and construct a hyperbolic filling X of Z with parameters \(\alpha \) and \(\tau ,\) which we uniformize with parameter \(\varepsilon =\log \alpha \) and equip with the measure \({\mu _\beta },\) with \(\beta =\varepsilon (1-\theta )p,\) as in Sect. 5. As \(B^Z_{5C_2R/C_1} \ne Z\) we see that \(B^{\partial _\varepsilon X}_{5C_2R} \ne Z\) and thus \(.\) We can then use Theorem 5.3, together with (5.1), (5.2) and [2, Lemma 11.22], to conclude that

(8.2)

Similarly, from Theorem 5.4, (5.1), (5.2) and [2, Lemma 11.22] we get

(8.3)

where L is as in Theorem 5.4.

Next, the comparison (5.4) between \(\mu _\beta \) and \(\nu \) gives

Theorem 4.2 in Björn–Björn–Lehrbäck [6], together with the doubling property of \(\mu _\beta ,\) then shows that

Similarly,

which together with (8.1)–(8.3) concludes the proof of (1.2). The estimate (1.3) for \({{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(\{x_0\},B_R)\) then follows by letting \(r\rightarrow 0\) in (1.2), since \({{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}\) is an outer capacity and the integral in (1.2) increases to the integral in (1.3) as \(r\rightarrow 0.\) \(\square \)

9 Proofs of Theorems 1.2 and 1.3

Proof of Theorem 1.2

The upper bounds follow directly from Proposition 4.4. For the lower bounds we first construct Z as in the proof of Theorem 1.1. Since the left- and right-hand sides in (1.6) and (1.7) scale in the same way, we may without loss of generality assume that \(0< {{\,\textrm{diam}\,}}Z < 1.\) Note that \(B^Y_\rho =B^Z_\rho \) for \(\rho \le R_0.\)

As in (8.3), we see that

(9.1)

where L is as in Theorem 5.4. In (a), it follows from (5.6) that \(.\) Hence, by Björn–Björn–Lehrbäck [5, Theorem 1.1], (5.4) and the doubling property of \({\mu _\beta },\)

(9.2)

In (b), we instead have and [5, Theorem 1.1], together with (5.4) and the doubling property, yields

(9.3)

Inserting (9.2) and (9.3) into (9.1) and using (8.1) proves the lower bounds in (1.6) and (1.7).

It remains to discuss the sharpness. Let \(0< 2r < R \le C_1 R_0.\) If the lower bound in (1.6) holds, then by Proposition 4.4,

$$\begin{aligned} \frac{\nu (B^Y_r)}{r^{\theta p}} \lesssim {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(B^Y_r,B^Y_R) \lesssim \frac{\nu (B^Y_R)}{R^{\theta p}}, \end{aligned}$$

which immediately implies that \(.\) The argument for (1.7) is similar, using the upper bound \(\nu (B_r)/r^{\theta p}\) from Proposition 4.4.

Finally, if \(p>1\) then Theorem 5.3, together with (5.1), (5.2), (8.1), (1.7) and (5.4), yields

Theorem 1.3 in Björn–Björn–Christensen [3], applied to \(,\) then implies that \(,\) which is equivalent to \(.\) \(\square \)

In the borderline cases we have the following result corresponding to Theorem 1.2.

Theorem 9.1

Assume that Y is a complete metric space which is uniformly perfect at \(x_0\) and equipped with a doubling measure \(\nu .\) Let \(p>1,\) \(0<\theta <1\) and \(0 < R_0 \le \tfrac{1}{4} {{\,\textrm{diam}\,}}Y,\) with \(R_0\) finite.

Then the following hold for \(0<2r \le R \le R_0,\) with comparison constants depending on \(R_0,\) but independent of \(x_0,\) r and R.

  (a)

    If \(,\) then

    $$\begin{aligned} \frac{\nu (B_r)}{r^{\theta p}} \biggl ( \log \frac{R}{r} \biggr )^{1-p} \lesssim {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(B_r,B_{R}) \lesssim \frac{\nu (B_R)}{R^{\theta p}} \biggl ( \log \frac{R}{r} \biggr )^{1-p}. \end{aligned}$$
    (9.4)
  (b)

    If \(,\) then

    $$\begin{aligned} \frac{\nu (B_R)}{R^{\theta p}} \biggl ( \log \frac{R}{r} \biggr )^{1-p} \lesssim {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(B_r,B_{R}) \lesssim \frac{\nu (B_r)}{r^{\theta p}} \biggl ( \log \frac{R}{r} \biggr )^{1-p}. \end{aligned}$$
    (9.5)

Moreover,  if the lower bounds in (9.4) and (9.5) hold,  then and \(,\) respectively.

Proof

The estimate (9.4) follows directly from Theorem 1.1 since

$$\begin{aligned} \frac{R^{\theta p}}{\nu (B_R)} \lesssim \frac{\rho ^{\theta p}}{\nu (B_\rho )} \lesssim \frac{r^{\theta p}}{\nu (B_r)} \end{aligned}$$

as \(.\) The estimate (9.5) is shown similarly.
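Spelling out how (9.4) follows: inserting the displayed bounds into (1.2) and using \(\int _r^R d\rho /\rho =\log (R/r)\) gives

$$\begin{aligned} \biggl ( \frac{R^{\theta p}}{\nu (B_R)} \biggr )^{1/(p-1)} \log \frac{R}{r} \lesssim \int _r^R \biggl ( \frac{\rho ^{\theta p}}{\nu (B_\rho )} \biggr )^{1/(p-1)} \frac{d\rho }{\rho } \lesssim \biggl ( \frac{r^{\theta p}}{\nu (B_r)} \biggr )^{1/(p-1)} \log \frac{R}{r}, \end{aligned}$$

and raising these estimates to the negative power \(1-p\) reverses the inequalities and yields (9.4).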

For the last statement, the lower bound in (9.4) and Proposition 4.4 imply for all \(\varepsilon >0\) that

$$\begin{aligned} \frac{\nu (B_r)}{\nu (B_R)} \lesssim \frac{r^{\theta p} {{{\,\textrm{cap}\,}}_{\theta ,p}^{Y}}(B_r,B_R)}{\nu (B_R)} \biggl ( \log \frac{R}{r} \biggr )^{p-1} \lesssim \Bigl ( \frac{r}{R} \Bigr )^{\theta p} \biggl ( \log \frac{R}{r} \biggr )^{p-1} \lesssim \Bigl ( \frac{r}{R} \Bigr )^{\theta p-\varepsilon }, \end{aligned}$$

where the implicit constant in the last “\(\lesssim \)” depends on \(\varepsilon .\) Thus for every \(\varepsilon >0,\) showing that \(.\) The implication (9.5) \(\Rightarrow \) is proved similarly. \(\square \)

Remark 9.2

If Y is unbounded, then Theorems 1.2 and 9.1 hold with \(R_0=\infty \) if \(,\) \(,\) and are replaced by

Remark 9.3

The comparison constants in Theorems 1.2 and 9.1 are independent of the choice of \(x_0,\) but depend on \(\theta ,\) \(p,\) \(C_\nu ,\) \(R_0\) and the uniform perfectness constant \(\kappa .\)

In Theorem 1.2(a) they also depend on the choice of from the proof of [5, Proposition 6.1] leading to the estimate (9.2), and on the comparison constant appearing in the definition of \(.\)

Similarly, in Theorem 1.2(b) the constants also depend on the choice of from the proof of [5, Proposition 6.1] leading to the estimate (9.3), and on the comparison constant appearing in the definition of \(.\)

In Theorem 9.1 the dependence is similar but with \(q=\theta p.\) In Remark 9.2, the dependence is instead in terms of and \(.\)

Proof of Theorem 1.3

Let \(Z=Y',\) where \(Y'\) is as in Lemma 7.1 with \(\delta =\tfrac{1}{5}.\) Then \(0< {{\,\textrm{diam}\,}}Z < 1.\) (If Y is bounded we may instead let Z be a rescaled version of Y.) Then let \(X_\varepsilon \) be the uniformized hyperbolic filling of Z constructed in Sect. 5. Applying Corollary 4.8 to both Y and Z, together with Lemma 7.2, we see that it suffices to prove the statements (a) and (b) for \({C_{\theta ,p}^{Z}}(\{x_0\}),\) which in turn is comparable to by [8, Proposition 13.2].

As in (5.5), it follows that in (a), while or in (b). Hence, Proposition 8.2 in [5] implies that in (a), and in (b).

When \(p>1,\) the conclusions can also be derived from Theorem 1.1. \(\square \)