Abstract
We obtain estimates for the nonlinear variational capacity of annuli in weighted \(\mathbf {R}^n\) and in metric spaces. We introduce four different (pointwise) exponent sets, show that they all play fundamental roles for capacity estimates, and also demonstrate that whether an end point of an exponent set is attained or not is important. As a consequence of our estimates we obtain, for instance, criteria for points to have zero (resp. positive) capacity. Our discussion holds in rather general metric spaces, including Carnot groups and many manifolds, but it is just as relevant on weighted \(\mathbf {R}^n\). Indeed, to illustrate the sharpness of our estimates, we give several examples of radially weighted \(\mathbf {R}^n\), which are based on quasiconformality of radial stretchings in \(\mathbf {R}^n\).
1 Introduction
Our aim in this paper is to give sharp estimates for the variational \(p\)-capacity of annuli in metric spaces. Such estimates play an important role for instance in the study of singular solutions and Green functions for (quasi)linear equations in (weighted) Euclidean spaces and in more general settings, such as subelliptic equations associated with vector fields and on Heisenberg groups, see e.g. Serrin [37], Capogna et al. [15], and Danielli et al. [16] for discussion and applications. Recall that analysis and nonlinear potential theory (including capacities) have during the last two decades been developed on very general metric spaces, including compact Riemannian manifolds and their Gromov–Hausdorff limits, and Carnot–Carathéodory spaces.
Sharp capacity estimates depend in a crucial way on good bounds for the (relative) measures of balls. For instance, recall that for \(0<2r\le R\), the variational \(p\)-capacity \({{\mathrm{cap}}}_p(B(x,r),B(x,R))\) of the annulus \(B(x,R){\setminus }B(x,r)\) in (unweighted) \(\mathbf {R}^n\) is comparable to \(r^{n-p}\) if \(p<n\) and to \(R^{n-p}\) if \(p>n\), see e.g. Example 2.12 in Heinonen et al. [24]. In both cases, \(r^n\) and \(R^n\) are comparable to the Lebesgue measure of one of the balls defining the annulus. For \(p=n\), the \(p\)-capacity instead involves a logarithmic factor of the ratio \(R/r\). Thus, the dimension n (or rather the way in which the Lebesgue measure scales on balls with different radii) determines (together with \(p\)) the form of the estimates for the \(p\)-capacity of annuli.
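These scalings can be checked against the explicit formula for the capacity of spherical condensers in unweighted \(\mathbf {R}^n\), namely \({{\mathrm{cap}}}_p(B_r,B_R)=\omega _{n-1}\bigl (\int _r^R t^{-(n-1)/(p-1)}\,dt\bigr )^{1-p}\) for \(p>1\), where \(\omega _{n-1}\) is the surface area of the unit sphere. The following numerical sketch (our own illustration, with \(n=3\)) confirms that \({{\mathrm{cap}}}_p(B_r,B_{2r})/r^{n-p}\) is constant in r when \(p<n\):

```python
from math import pi, log

def annulus_cap(r, R, n, p, omega):
    # Classical radial formula in unweighted R^n, p > 1:
    # cap_p(B_r, B_R) = omega_{n-1} * (int_r^R t^{-(n-1)/(p-1)} dt)^{1-p}
    a = -(n - 1) / (p - 1)
    if a == -1.0:                  # p = n: the integral is log(R/r)
        integral = log(R / r)
    else:
        integral = (R**(a + 1) - r**(a + 1)) / (a + 1)
    return omega * integral**(1 - p)

# For p < n and R = 2r, cap_p(B_r, B_R)/r^{n-p} does not depend on r (n = 3, p = 2):
c1 = annulus_cap(0.01, 0.02, n=3, p=2, omega=4 * pi) / 0.01
c2 = annulus_cap(0.001, 0.002, n=3, p=2, omega=4 * pi) / 0.001
```

In \(\mathbf {R}^3\) with \(p=2\) the formula reduces to the familiar \(4\pi rR/(R-r)\) for the condenser capacity.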
If \(X=(X,d,\mu )\) is a metric space equipped with a doubling measure \(\mu \) (i.e. \(\mu (2B)\le C \mu (B)\) for all balls \(B\subset X\)), then an iteration of the doubling condition shows that there exist \(q>0\) and \(C>0\) such that
$$\begin{aligned} \frac{\mu (B(x,r))}{\mu (B(x,R))}\ge C\Bigl (\frac{r}{R}\Bigr )^{q} \end{aligned}$$
for all \(x\in X\) and \(0<r<R\). In addition, a converse estimate, with some exponent \(0<q^{\prime }\le q\), holds under the assumption that X is connected (see Sect. 2 for details). Motivated by these observations, we introduce the following exponent sets for \(x\in X\):
$$\begin{aligned} {\underline{Q}_0}(x)&=\biggl \{q>0 : \frac{\mu (B(x,r))}{\mu (B(x,R))}\le C_q\Bigl (\frac{r}{R}\Bigr )^{q} \text { for some } C_q \text { and all } 0<r<R\le 1\biggr \},\\ {\underline{S}_0}(x)&=\{q>0 : \mu (B(x,r))\le C_q r^{q} \text { for some } C_q \text { and all } 0<r\le 1\},\\ {\overline{S}_0}(x)&=\{q>0 : \mu (B(x,r))\ge C_q r^{q} \text { for some } C_q \text { and all } 0<r\le 1\},\\ {\overline{Q}_0}(x)&=\biggl \{q>0 : \frac{\mu (B(x,r))}{\mu (B(x,R))}\ge C_q\Bigl (\frac{r}{R}\Bigr )^{q} \text { for some } C_q \text { and all } 0<r<R\le 1\biggr \}. \end{aligned}$$
Here the subscript 0 refers to the fact that only small radii are considered; we shall later define similar exponent sets with large radii as well. In general, all of these sets can be different, as shown in Examples 3.2 and 3.4.
The above exponent sets turn out to be of fundamental importance for distinguishing between the cases in which the sharp estimates for capacities are different, in a similar way as the dimension in \(\mathbf {R}^n\) does. Let us mention here that Garofalo and Marola [19] defined a pointwise dimension q(x) (called Q(x) therein) and established certain capacity estimates for the cases \(p<q(x)\), \(p=q(x)\) and \(p>q(x)\). In our terminology their \(q(x)=\sup {\underline{Q}}(x)\), where \({\underline{Q}}(x)\) is a global version of \({\underline{Q}_0}(x)\), see Sect. 2. However, it turns out that the situation is in fact even more subtle than indicated in [19], since actually all of the above exponent sets are needed to obtain a complete picture of capacity estimates. Our purpose is to provide a unified approach which not only covers (and in many cases improves) all the previous capacity estimates in the literature, but also takes into account the cases that have been overlooked in the past. We also indicate via Propositions 9.1 and 9.2 and numerous examples that our estimates are both natural and, in most cases, optimal. In addition, we hope that our work offers clarity and transparency also to the proofs of the previously known results.
The following are some of our main results. Here and later we often drop x from the notation of the exponent sets when the point is fixed, and moreover write e.g. \(B_r=B(x,r)\). For simplicity, we state the results here under the standard assumptions of doubling and a Poincaré inequality, but in fact less is needed, as explained below. Throughout the paper, we write \(a \lesssim b\) if there is an implicit constant \(C>0\) such that \(a \le Cb\), where C is independent of the essential parameters involved. We also write \(a \gtrsim b\) if \(b \lesssim a\), and \(a \simeq b\) if \(a \lesssim b \lesssim a\). In particular, in Theorems 1.1 and 1.2 below the implicit constants are independent of r and R, but depend on \(R_0\).
Theorem 1.1
Let \(0< R_0 < \frac{1}{4} {{\mathrm{diam}}}X\), \(1\le p<\infty \), and assume that the measure \(\mu \) is doubling and supports a \(p\)-Poincaré inequality.
-
(a)
If \(p\in {{\mathrm{int}}}{\underline{Q}}_0\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\simeq \frac{\mu (B_r)}{r^{p}} \quad \text {whenever } 0<2r \le R \le R_0. \end{aligned}$$(1.1) -
(b)
If \(p\in {{\mathrm{int}}}{\overline{Q}}_0\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\simeq \frac{\mu (B_{R})}{R^{p}} \quad \text {whenever } 0<2r \le R \le R_0. \end{aligned}$$(1.2)
Moreover, if (1.1) holds, then \(p \in {\underline{Q}}_0\), while if (1.2) holds, then \(p \in {\overline{Q}}_0\).
Here and elsewhere \({{\mathrm{int}}}Q\) denotes the interior of a set Q. Already unweighted \(\mathbf {R}^n\) shows that r needs to be bounded away from R in order to have the upper bounds in (1.1) and (1.2) (hence \(2r\le R\) above), and that the lower estimate in (1.1) [resp. (1.2)] does not hold in general when \(p \ge \sup {\underline{Q}_0}\) (resp. \(p \le \inf {\overline{Q}_0}\)), even if the borderline exponent is in the respective set. In these borderline cases \(p=\max {\underline{Q}_0}\) and \(p=\min {\overline{Q}_0}\) we instead obtain the following estimates involving logarithmic factors.
Theorem 1.2
Let \(0< R_0 < \frac{1}{4} {{\mathrm{diam}}}X\), and assume that the measure \(\mu \) is doubling and supports a \({p_0}\)-Poincaré inequality for some \(1\le {p_0}<p\).
-
(a)
If \(p=\max {\underline{Q}_0}\) and \(0<2r \le R \le R_0\), then
$$\begin{aligned} \frac{\mu (B_{r})}{r^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p} \lesssim {{\mathrm{cap}}}_p(B_r,B_{R}) \lesssim \frac{\mu (B_{R})}{R^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(1.3) -
(b)
If \(p=\min {\overline{Q}_0}\) and \(0<2r \le R \le R_0\), then
$$\begin{aligned} \frac{\mu (B_{R})}{R^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p} \lesssim {{\mathrm{cap}}}_p(B_r,B_{R}) \lesssim \frac{\mu (B_r)}{r^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(1.4)
Moreover, if the lower bound in (1.3) holds, then \(p\le \sup {{\underline{Q}_0}}\), and if the lower bound in (1.4) holds, then \(p\ge \inf {{\overline{Q}_0}}\).
See also (7.1) and (7.2) for improvements of the upper estimates of Theorem 1.2. Actually, Theorem 1.2 (a) holds for all \(p\in {\underline{Q}_0}\) [resp. (b) for all \(p\in {\overline{Q}_0}\)], but for p in the interior of the respective exponent sets Theorem 1.1 gives better estimates. Let us also mention that for p in between the Q-sets we obtain yet other estimates depending on how close p is to the corresponding Q-set, see Propositions 5.1 and 6.2. Also these estimates are sharp, as shown by Proposition 9.1.
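In unweighted \(\mathbf {R}^n\) with \(p=n\), both sides of (1.3) and (1.4) reduce to \((\log (R/r))^{1-n}\), since \(\mu (B_r)/r^{n}\simeq 1\), and the classical formula gives exactly \({{\mathrm{cap}}}_n(B_r,B_{R})=\omega _{n-1}\log ^{1-n}(R/r)\). A numerical sketch of this borderline behaviour (our own illustration, with \(n=3\)):

```python
from math import pi, log

def cap_borderline(r, R, n, omega):
    # classical value in unweighted R^n for p = n:
    # cap_n(B_r, B_R) = omega_{n-1} * (log(R/r))^{1-n}
    return omega * log(R / r)**(1 - n)

# with R = 1 fixed, the capacity decays like (log(1/r))^{1-n} as r -> 0:
caps = [cap_borderline(10.0**-k, 1.0, 3, 4 * pi) for k in (1, 2, 3)]
```

For \(n=3\) the decay is quadratic in \(\log (1/r)\), so halving the logarithmic scale divides the capacity by four.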
We give related capacity estimates in terms of the S-sets as well. In particular, we obtain the following almost characterization of when points have zero capacity. Here \({C_p}(E)\) is the Sobolev capacity of \(E\subset X\).
Proposition 1.3
Assume that X is complete and that \(\mu \) is doubling and supports a \(p\)-Poincaré inequality. Let \(B \ni x\) be a ball with \({C_p}(X{\setminus }B)>0\).
If \(1\le p\notin {\overline{S}_0}\) or \(1 < p \in {\underline{S}_0}\), then \({C_p}(\{x\})={{\mathrm{cap}}}_p(\{x\},B)=0\).
Conversely, if \(p\in {{\mathrm{int}}}{\overline{S}_0}\), then \({C_p}(\{x\})>0\) and \({{\mathrm{cap}}}_p(\{x\},B)>0\).
In the remaining borderline case, when \(p = \min {\overline{S}_0}\notin {\underline{S}_0}\), we show that the capacity can be either zero or nonzero, depending on the situation, and thus the S-sets are not refined enough to give a complete characterization.
We also obtain similar results in terms of the \(S_\infty \)-sets, which can be used to determine if the space X is \(p\)-parabolic or \(p\)-hyperbolic; see Sect. 8 for details.
For most of our estimates it is actually enough to require that \(\mu \) is both doubling and reverse-doubling at the point x, and that a Poincaré inequality holds for all balls centred at x. Moreover, Poincaré inequalities and reverse-doubling are only needed when proving the lower bounds for capacities. It is however worth pointing out that the examples showing the sharpness of our estimates are based on \(p\)-admissible weights on \(\mathbf {R}^n\), and so, even though our results hold in very general metric spaces, it is essential to distinguish the cases and define the exponent sets, as we do, already in weighted \(\mathbf {R}^n\). We construct our examples with the help of a general machinery concerning radial weights, explained in Sect. 10.
Let us now give a brief account of some of the earlier results in the literature. On unweighted \(\mathbf {R}^n\), where \({\underline{Q}_0}={\underline{S}_0}=(0,n]\) and \({\overline{Q}_0}={\overline{S}_0}=[n,\infty )\), similar estimates (and precise calculations) are well known, see e.g. Example 2.11 in Heinonen et al. [24], which also contains an extensive treatment of potential theory on weighted \(\mathbf {R}^n\), including integral estimates for \(A_p\)-weighted capacities with \(p>1\) (Theorems 2.18 and 2.19 therein). Theorem 3.5.6 in Turesson [41] provides essentially our estimates for \(p=1\) and \(A_1\)-weighted capacities in \(\mathbf {R}^n\). Estimates for general weighted Riesz capacities in \(\mathbf {R}^n\) (including those equivalent to our capacities) were given, in somewhat different terms, in Adams [3, Theorem 6.1].
If the radii of the balls \(B_r\) and \(B_R\) are comparable, say \(R=2r\), then it is well known that the estimate \({{\mathrm{cap}}}_p(B_r,B_{2r})\simeq \mu (B_r)r^{-p}\) holds (with implicit constants independent of x) in metric spaces satisfying the doubling condition and a \(p\)-Poincaré inequality, see e.g. [24, Lemma 2.14] for weighted \(\mathbf {R}^n\), and Björn [12, Lemma 3.3] or Björn and Björn [5, Proposition 6.16] for metric spaces.
Garofalo and Marola [19, Theorems 3.2 and 3.3] obtained essentially part (a) of our Theorem 1.1 using an approach different from ours. For the case \(p = q(x):=\sup {{\underline{Q}}(x)}\) they also gave estimates which are similar to part (a) of Theorem 1.2. However, they implicitly require that \(q(x) \in {{\underline{Q}}(x)}\) [i.e. \(q(x)=\max {\underline{Q}}(x)\)] in their proofs, and their estimates may actually fail if \(q(x)\notin {{\underline{Q}}}(x)\), as shown by Example 9.4 (c) below; the same comment applies to their estimates in the case \(p > q(x)\) as well. There also seems to be a slight problem in the proof of their lower bounds, since the second displayed line at the beginning of the proof of Theorem 3.2 in [19] does not in general follow from the first line, as can be seen by considering e.g. \(u(x)=\max \{0,\min \{1,1+j(r-|x|)\}\}\) in \(\mathbf {R}^n\) and letting \(j\rightarrow \infty \). Instead, this estimate can be derived directly from a 1-Poincaré inequality (see Mäkäläinen [35]), which is a stronger assumption than the \(p\)-Poincaré inequality assumed in [19] (and in the present work).
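To see how these test functions degenerate, note that \(|\nabla u|=j\) on the shell \(r<|x|<r+1/j\) and \(\nabla u=0\) elsewhere, so that \(\int |\nabla u|^p\,dx=\omega _{n-1}j^p\int _r^{r+1/j}t^{n-1}\,dt\simeq j^{p-1}r^{n-1}\rightarrow \infty \) as \(j\rightarrow \infty \) when \(p>1\), even though \(u\rightarrow \chi _{\overline{B}_r}\) pointwise. A small numerical sketch of this blow-up (our own illustration):

```python
from math import pi

def p_energy(j, r=1.0, n=3, p=2, omega=4 * pi):
    # |grad u| = j on the shell r < |x| < r + 1/j and 0 elsewhere, hence
    # int |grad u|^p dx = omega_{n-1} * j^p * int_r^{r+1/j} t^{n-1} dt
    a, b = r, r + 1.0 / j
    return omega * j**p * (b**n - a**n) / n

growth = [p_energy(j) for j in (10, 100, 1000)]  # grows like j^{p-1}; here p = 2
```

For \(p=1\) the same computation gives a bounded energy, which is why the estimate can instead be derived from a 1-Poincaré inequality.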
Also Adamowicz and Shanmugalingam [2] have given related estimates in metric spaces. They state their results in terms of the \(p\)-modulus of curve families, but it is known that the \(p\)-modulus coincides with the variational \(p\)-capacity, provided that X is complete and \(\mu \) is doubling and supports a \(p\)-Poincaré inequality, see e.g. Heinonen and Koskela [26], Kallunki and Shanmugalingam [31] and Adamowicz et al. [1]. In the setting considered in [2] this equivalence is not known in general. While it is always true that the \(p\)-modulus is majorized by the variational \(p\)-capacity, the converse is only known under the assumption of a \(p\)-Poincaré inequality, which is required neither for the upper bounds in [2] nor for ours here. At the same time, the test functions in [2] are admissible also for \({{\mathrm{cap}}}_p\), showing that their estimates apply also to the variational \(p\)-capacity. For \(p\in {{\mathrm{int}}}{\underline{Q}_0}\), Theorem 3.1 in [2] provides an upper bound that can be seen to be weaker than (1.1). In the borderline case \(p=\max {\underline{Q}}_0\) (when it is attained), the upper estimate (3.6) in [2] coincides with our (5.1). Under the assumption that the space X is Ahlfors Q-regular and supports a \(p\)-Poincaré inequality, they also prove lower bounds for capacities. For \(p>Q\), the lower bound in [2, Theorem 4.3] coincides with the one in Theorem 1.1 (b), but for \(p\le Q\) the lower bound in [2, Theorem 4.9] is weaker than our estimates (1.1) and (1.3).
Neither [2] nor [19] contains any results similar to ours for \(p \in {\overline{Q}_0}\), or in terms of \(q \in {\overline{Q}_0}\) for \(p\notin {\overline{Q}_0}\), or involving the S-sets.
As mentioned above, \(p\)-capacity and \(p\)-modulus estimates are closely related, and our estimates immediately yield estimates for the \(p\)-modulus in all cases where the two quantities coincide, e.g. when X is complete and \(\mu \) is doubling and supports a \(p\)-Poincaré inequality, see above. Moreover, our upper estimates are trivially upper bounds for the \(p\)-modulus in all cases. We do not know whether our lower estimates for the capacity are also lower bounds for the \(p\)-modulus, but neither do we know of any example where the \(p\)-modulus is strictly smaller than the \(p\)-capacity.
Let us also mention that earlier capacity estimates in Carnot groups and Carnot–Carathéodory spaces can be found in Heinonen and Holopainen [23] and in Capogna et al. [15], respectively. In [15], the estimates are then applied to yield information on the behaviour of singular solutions of certain quasilinear equations near the singularity; see also Danielli et al. [16] for related results in more general settings. In addition, Holopainen and Koskela [29] provided a lower bound for the variational capacity in terms of the volume growth in Riemannian manifolds, as well as some related estimates in general metric spaces, which in turn are related to the parabolicity and hyperbolicity of the space. Capacities defined by nonlinear potentials on homogeneous groups were considered by Vodop’yanov [43] and some estimates in terms of \(A_p\)-weights were given in Proposition 2 therein.
The outline of the paper is as follows: In Sect. 2 we introduce some basic terminology and discuss the exponent sets under consideration in this paper, while in Sect. 3 we give some key examples demonstrating various possibilities for the exponent sets. These examples will later, in Sect. 9, be used to show sharpness of our estimates.
In Sect. 4 we introduce the necessary background for metric space analysis, such as capacities and Newtonian (Sobolev) spaces based on upper gradients. Towards the end of the section we obtain a few new results and also the basic estimate used to obtain all our lower capacity bounds (Lemma 4.9).
Sections 5, 6, 7 and 8 are all devoted to the various capacity estimates. In Sect. 5 we obtain upper bounds, which are easier to obtain than lower bounds and in particular require fewer assumptions on the space. Lower bounds related to the Q-sets are established in Sects. 6 and 7, the latter containing some more involved borderline cases, while in Sect. 8 we study (upper and lower) estimates in terms of the S-sets and in particular prove Proposition 1.3 and the parabolicity/hyperbolicity results mentioned above.
The sharpness of most of our estimates (except for some borderline cases) is demonstrated in Sect. 9. Here we extend our discussion of the examples introduced in Sect. 3 by using the capacity formula for radial weights on \(\mathbf {R}^n\) given in Proposition 10.8. This formula enables us to compute explicitly the capacities in the examples, and thus we can make comparisons with the bounds given by the more general estimates from Sects. 5, 6, 7 and 8. We also obtain stronger and more theoretical sharpness results in Propositions 9.1 and 9.2.
The final Sect. 10 is devoted to proving the capacity formula mentioned above, and along the way we obtain some new results on quasiconformality of radial stretchings and on \(p\)-admissibility of radial weights.
2 Exponent sets
We assume throughout the paper that \(1 \le p<\infty \) and that \(X=(X,d,\mu )\) is a metric space equipped with a metric d and a positive complete Borel measure \(\mu \) such that \(0<\mu (B)<\infty \) for all balls \(B \subset X\). We adopt the convention that balls are nonempty and open. The \(\sigma \)-algebra on which \(\mu \) is defined is obtained by the completion of the Borel \(\sigma \)-algebra. It follows that X is separable.
Definition 2.1
We say that the measure \(\mu \) is doubling at x, if there is a constant \(C>0\) such that whenever \(r>0\), we have
$$\begin{aligned} \mu (B(x,2r))\le C\mu (B(x,r)). \end{aligned}$$(2.1)
Here \(B(x,r)=\{y \in X : d(x,y)<r\}\). If (2.1) holds with the same constant \(C>0\) for all \(x \in X\), we say that \(\mu \) is (globally) doubling.
The global doubling condition is often assumed in the metric space literature, but for our estimates it will be enough to assume that \(\mu \) is doubling at x. Indeed, this will be a standing assumption for us from Sect. 5 onward.
Definition 2.2
We say that the measure \(\mu \) is reverse-doubling at x, if there are constants \(\gamma ,\tau >1\) such that
$$\begin{aligned} \mu (B(x,\tau r))\ge \gamma \mu (B(x,r)) \end{aligned}$$(2.2)
holds for all \(0<r\le {{\mathrm{diam}}}X/2\tau \).
If X is connected (or uniformly perfect) and \(\mu \) is globally doubling, then \(\mu \) is also reverse-doubling at every point, with uniform constants; see e.g. Corollary 3.8 in [5]. If \(\mu \) is merely doubling at x, then the reverse-doubling at x does not follow automatically and has to be imposed separately whenever needed.
If both (2.1) and (2.2) hold, then an iteration of these conditions shows that there exist \(q, q^{\prime }>0\) and \(C,C^{\prime }>0\) such that
$$\begin{aligned} \frac{1}{C^{\prime }}\Bigl (\frac{r}{R}\Bigr )^{q^{\prime }} \le \frac{\mu (B(x,r))}{\mu (B(x,R))}\le C\Bigl (\frac{r}{R}\Bigr )^{q} \end{aligned}$$(2.3)
whenever \(0<r\le R<2{{\mathrm{diam}}}X\). More precisely, the doubling inequality (2.1) leads to the first inequality, while the reverse-doubling (2.2) yields the second inequality of (2.3). Recall also that the measure \(\mu \) (and also the space X) is said to be Ahlfors Q-regular if \(\mu (B(x,r))\simeq r^Q\) for every \(x\in X\) and all \(0<r<2{{\mathrm{diam}}}X\). This in particular implies that (2.3) holds with \(q=q^{\prime }=Q\).
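The iteration giving the first inequality in (2.3) is elementary: applying (2.1) \(\lceil \log _2(R/r)\rceil \) times yields \(\mu (B_R)\le C^{\lceil \log _2(R/r)\rceil }\mu (B_r)\le C(R/r)^{\log _2 C}\mu (B_r)\), so that the exponent \(\log _2 C\) works. The bookkeeping can be sketched as follows (our own illustration; the toy measure and its doubling constant are hypothetical):

```python
from math import ceil, log2

def doubling_bound(mu, r, R, C):
    # iterate mu(B(x,2r)) <= C*mu(B(x,r)) about ceil(log2(R/r)) times:
    # mu(B_R) <= C^ceil(log2(R/r)) * mu(B_r) <= C * (R/r)^{log2 C} * mu(B_r)
    k = ceil(log2(R / r))
    return mu(R) / mu(r) <= C**k <= C * (R / r)**log2(C)

mu = lambda r: r**2            # toy: Lebesgue-type measure in the plane, C = 4
ok = doubling_bound(mu, 0.001, 7.3, C=4)
```
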
The inequalities in (2.3) will be of fundamental importance to us. Note that in (2.3) one necessarily has \(q^{\prime }\ge q\) and that there can be a gap between the exponents, as demonstrated by Example 3.2 below. Garofalo and Marola [19] introduced the pointwise dimension q(x) (called Q(x) therein) as the supremum of all \(q>0\) such that the second inequality in (2.3) holds for some \(C_q>0\) and all \(0<r\le R<{{\mathrm{diam}}}X\). Furthermore, Adamowicz et al. [1] defined the pointwise dimension set Q(x) consisting of all \(q>0\) for which there are constants \(C_q>0\) and \(R_q>0\) such that the second inequality in (2.3) holds for all \(0<r\le R\le R_q\). It was shown in [1, Example 2.3] that it is possible to have \(Q(x)=(0,q)\) for some q, that is, the end point q need not be contained in the interval Q(x). Alternatively see Example 3.1 below.
For us it will be important to make even further distinctions. We consider the exponent sets \({\underline{Q}_0}\), \({\underline{S}_0}\), \({\overline{S}_0}\) and \({\overline{Q}_0}\) from the introduction. The pointwise dimension of Garofalo and Marola [19] is then \(q(x) = \sup {\underline{Q}}(x)\), where \({\underline{Q}}(x)\) is a global version of \({\underline{Q}_0}(x)\) (see below for the precise definition), and the pointwise dimension set of [1] is \(Q(x)={\underline{Q}_0}(x)\) (to see this, one should also appeal to Lemma 2.5). Recall that we often drop x from the notation, and write \(B_r=B(x,r)\).
If \(\mu \) is doubling at x (resp. reverse-doubling at x), then \({\overline{Q}_0}\ne \varnothing \) (resp. \({\underline{Q}_0}\ne \varnothing \)), by (2.3). The sets \({\underline{Q}_0}\) and \({\underline{S}_0}\) are then intervals of the form (0, q) or (0, q], whereas \({\overline{Q}_0}\) and \({\overline{S}_0}\) are intervals of the form \((q,\infty )\) or \([q,\infty )\). Whether the end point is or is not included in the respective intervals will be important in many situations.
We start our discussion of the exponent sets with three lemmas concerning their elementary properties. Note that Lemmas 2.3–2.5 and 2.8 hold for arbitrary measures, without assuming any type of doubling.
Lemma 2.3
It is true that
$$\begin{aligned} {\underline{Q}_0}\subset {\underline{S}_0}\quad \text {and}\quad {\overline{Q}_0}\subset {\overline{S}_0}. \end{aligned}$$
Moreover, \({\underline{S}_0}\cap {\overline{S}_0}\) contains at most one point, and when it is nonempty, \({\underline{Q}_0}={\underline{S}_0}\) and \({\overline{Q}_0}={\overline{S}_0}\).
Proof
If \(q \in {\underline{Q}_0}\), then \(\mu (B_r) \le C_q \mu (B_1) r^q\), and thus \(q \in {\underline{S}_0}\). Similarly \({\overline{Q}_0}\subset {\overline{S}_0}\).
For the second part, let \(q \in {\underline{S}_0}\cap {\overline{S}_0}\). Then \(\mu (B_r) \simeq r^q\) and it follows that \(q \in {\underline{Q}_0}\) and \(q \in {\overline{Q}_0}\). That \({\underline{Q}_0}={\underline{S}_0}\) and \({\overline{Q}_0}={\overline{S}_0}\) thus follows from the first part. \(\square \)
The following two lemmas show that the bound 1 on the radii in the definitions of the exponent sets can equivalently be replaced by any other fixed bound \(R_0\). They also provide formulas for the borderline exponents in the S-sets and estimates for the borderline exponents in the Q-sets. Examples 2.6 and 2.7 show that finding the exact end points of the Q-sets may be rather subtle.
Lemma 2.4
Let \(q,R_0>0\). Then \(q \in {\underline{S}_0}\) if and only if there is a constant \(C>0\) such that
Similarly, \(q \in {\overline{S}_0}\) if and only if there is a constant \(C>0\) such that
Furthermore, let
Then \({\underline{S}_0}=(0,q_0)\) or \({\underline{S}_0}=(0,q_0]\), and \({\overline{S}_0}=(q_1,\infty )\) or \({\overline{S}_0}=[q_1,\infty )\).
Proof
For the first part, assume that \(q \in {\underline{S}_0}\). We may assume that \(R_0>1\). If \(1 \le r < R_0\), then
i.e. (2.4) holds with \(C:=\max \{C_q,\mu (B_{R_0})\}\). The converse implication is proved similarly.
For the last part, after taking logarithms we see that \(q \in {\underline{S}_0}\) if and only if there is \(C_q\) such that
which is easily seen to be possible if \(q < q_0\), and impossible if \(q> q_0\). The proofs for \({\overline{S}_0}\) are similar.\(\square \)
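The borderline exponent of \({\underline{S}_0}\) is thus governed by the limit behaviour of the quotient \(\log \mu (B_r)/\log r\) as \(r\rightarrow 0\). This can be observed numerically; for a model measure with \(\mu (B_r)=r^p\log (1/r)\) (cf. Example 3.1 below) the quotient increases to p as \(r\rightarrow 0\). A small sketch (our own illustration):

```python
from math import log

p = 2.0
mu = lambda r: r**p * log(1.0 / r)   # model measure with a logarithmic perturbation

# log mu(B_r) / log r -> p from below as r -> 0, locating the borderline exponent:
quotients = [log(mu(2.0**-k)) / log(2.0**-k) for k in (5, 10, 20, 40, 80)]
```
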
Lemma 2.5
Let \(q,R_0>0\). Then \(q \in {\underline{Q}_0}\) if and only if there is a constant \(C>0\) such that
The corresponding statement for \({\overline{Q}_0}\) is also true.
Assume furthermore that \(f(r):=\mu (B_r)\) is locally absolutely continuous on \((0,\infty )\) and let
Then
The following example shows that the assumption that f is locally absolutely continuous in Lemma 2.5 is not redundant.
Example 2.6
Let X be the usual Cantor ternary set, defined as a subset of [0, 1] and equipped with the normalized d-dimensional Hausdorff measure \(\mu \) with \(d=\log 2 /{\log 3}\). Let \(x=0\). Then \(f(r)=\mu (B_r)\) will be the Cantor staircase function which is not absolutely continuous. (See Dovgoshey et al. [18] for the history of the Cantor staircase function.) At the same time, \(\mu \) is Ahlfors d-regular and hence \({\underline{S}_0}={\underline{Q}_0}=(0,d]\) and \({\overline{S}_0}={\overline{Q}_0}=[d,\infty )\), while \({\underline{q}}=\overline{q}=0\).
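The Ahlfors regularity at \(x=0\) can be checked by hand: at the scales \(r=3^{-k}\) the ball \(B(0,3^{-k})\) captures exactly one of the \(2^k\) generation-k intervals of the construction, so \(\mu (B(0,3^{-k}))=2^{-k}=(3^{-k})^d\). A quick numerical confirmation (our own sketch):

```python
from math import log

d = log(2) / log(3)
for k in range(1, 25):
    r = 3.0**-k
    mass = 2.0**-k        # mu(B(0, 3^{-k})) for the normalized Cantor measure
    assert abs(mass - r**d) < 1e-9 * mass   # exactly r^d at these scales
```
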
On the other hand, if \(X=\mathbf {R}^n\) is equipped with a weight w and \(d\mu =w\, dx\), then f is automatically locally absolutely continuous. In particular, this is true if w is a \(p\)-admissible weight. We do not know if f is always locally absolutely continuous whenever \(\mu \) is both globally doubling and supports a global Poincaré inequality.
Proof of Lemma 2.5
We prove that \(q\in {\underline{Q}_0}\) implies (2.5). The proofs of the converse implication and for \({\overline{Q}_0}\) are similar. We may assume that \(R_0>1\). If \(1\le r< R\le R_0\), then
For \(r\le 1\le R\le R_0\) we instead have
Thus, (2.5) holds whenever \(R\ge 1\). For \(R\le 1\) the claim follows directly from the assumption \(q\in {\underline{Q}_0}\).
Next assume that f is locally absolutely continuous and let \(q \in (0,{\underline{q}})\). Then \(h(r)=\log f(r)\) is also locally absolutely continuous and \(h^{\prime }(r)=f^{\prime }(r)/f(r)\). By assumption there is \(\widetilde{R}\) such that \(\rho h^{\prime }(\rho )>q\) for a.e. \(0<\rho \le \widetilde{R}\). Since h is locally absolutely continuous, we have for \(0<r<R \le \widetilde{R}\) that
and thus
By the first part, with \(R_0=\widetilde{R}\), we get that \(q \in {\underline{Q}}_0\). Hence \((0,{\underline{q}}) \subset {\underline{Q}_0}\). The proof that \( (\overline{q},\infty ) \subset {\overline{Q}_0}\) is analogous. The remaining inclusions follow from these inclusions together with the fact that \({\underline{Q}_0}\cap {\overline{Q}_0}\) contains at most one point (by Lemma 2.3). \(\square \)
The following example shows that \({\underline{q}}\) and \(\overline{q}\) (from Lemma 2.5) need not be the end points of \({\underline{Q}}_0\) and \({\overline{Q}}_0\).
Example 2.7
Let f be given for \(r\in (0,\infty )\) by
where \(a_k = 2\cdot 4^{-k}\) and \(n\ge 1\). Note that f is increasing and locally Lipschitz. For a.e. \(x\in \mathbf {R}^n\) set
where \(\omega _{n-1}\) is the surface area of the \((n-1)\)-dimensional sphere in \(\mathbf {R}^n\). With this choice of w we have
where \(d\mu = w\,dx\). Since
and \(r\simeq a_k\) on \((4^{-k}, 4\cdot 4^{-k})\), we see that \(w\simeq 1\) on \(\mathbf {R}^n\), i.e. \(\mu \) is comparable to the Lebesgue measure. In particular, \(\mu \) is Ahlfors n-regular and supports a global 1-Poincaré inequality, and hence \({\underline{Q}}_0=(0,n]\) and \({\overline{Q}}_0=[n,\infty )\).
At the same time, considering \(r \in (4^{-k}, 2\cdot 4^{-k})\) and \(r \in (2\cdot 4^{-k}, 4\cdot 4^{-k})\), respectively, gives
It is easy to construct a similar example with a continuous weight w.
If X is unbounded, we will consider the following exponent sets at \(\infty \) for results in large balls and with respect to the whole space:
Note that the inequality in \({\underline{S}_\infty }(x)\) is reversed from the one in \({\underline{S}_0}(x)\), and similarly for \({\overline{S}_\infty }(x)\). This guarantees that \({\underline{S}_\infty }=(0,q)\) or \({\underline{S}_\infty }=(0,q]\), and \({\overline{S}_\infty }=(q,\infty )\) or \({\overline{S}_\infty }=[q,\infty )\), rather than the other way round, and also that \({\underline{Q}_\infty }\subset {\underline{S}_\infty }\) and \({\overline{Q}_\infty }\subset {\overline{S}_\infty }\).
Lemmas 2.3, 2.4 and 2.5 above have direct counterparts for these exponent sets at \(\infty \). In addition, Lemma 2.8 below shows that these sets are actually independent of the point \(x\in X\), and thus the sets \({\underline{Q}_\infty }\), \({\underline{S}_\infty }\), \({\overline{S}_\infty }\) and \({\overline{Q}_\infty }\) are well defined objects for the whole space X, not merely a short-hand notation (with a fixed base point \(x\in X\)) as in the case of \({\underline{Q}_0}\), \({\underline{S}_0}\), \({\overline{S}_0}\) and \({\overline{Q}_0}\). Note, however, that in general for instance the set \({\underline{S}_\infty }\) is different from the set
since the constant \(C_q\) in the definition of \({\underline{S}_\infty }\) is allowed to depend on the point x. This can be seen e.g. by letting \(w(x)=\log (2+|x|)\), which is a 1-admissible weight on \(\mathbf {R}^n\) by Proposition 10.5 below. Recall that a weight w in \(\mathbf {R}^n\) is \(p\)-admissible, \(p\ge 1\), if the measure \(d\mu ={w}\,dx\) is globally doubling and supports a global \(p\)-Poincaré inequality.
Lemma 2.8
Let X be unbounded and fix \(x\in X\). Then, for every \(y\in X\), we have \({\underline{Q}_\infty }(x)={\underline{Q}_\infty }(y)\), \({\underline{S}_\infty }(x)={\underline{S}_\infty }(y)\), \({\overline{S}_\infty }(x)={\overline{S}_\infty }(y)\) and \({\overline{Q}_\infty }(x)={\overline{Q}_\infty }(y)\).
Proof
Let \(y\in X\). By (the \(\infty \)-versions of) Lemmas 2.4 and 2.5 it is enough to verify the definitions of the exponent sets for \(R>r\ge 2d(x,y)\). In this case we have \(B(x,r/2)\subset B(y,r)\subset B(x,2r)\) and similarly for B(y, R). Hence
which shows that the inequalities in the definitions of the exponent sets at \(\infty \) hold for y if and only if they hold for x. \(\square \)
Finally, when we want to be able to treat both large and small balls uniformly we need to use the sets
If X is bounded, we simply set \({\underline{Q}}:={\underline{Q}_0}\) and \({\overline{Q}}:={\overline{Q}_0}\).
Remark 2.9
Let \(k(t)=\log \mu (B_{e^t})\). Then it is easy to show that \(q \in {\underline{Q}_0}\) and \(q^{\prime } \in {\overline{Q}_0}\) if and only if there is a constant C such that
or in other terms
i.e. k is a \((q,q^{\prime },C)\)-rough quasiisometry on \((-\infty ,0)\) for some C. Similarly, if X is unbounded, then k is a \((q,q^{\prime },C)\)-rough quasiisometry on \((0,\infty )\) (resp. on \(\mathbf {R}\)) for some C if and only if \(q \in {\underline{Q}_\infty }\) and \(q^{\prime } \in {\overline{Q}_\infty }\) (resp. \(q \in {\underline{Q}}\) and \(q^{\prime } \in {\overline{Q}}\)). Much of the current literature on rough quasiisometries calls such maps quasiisometries, but we have chosen to follow the terminology of Bonk et al. [14] to avoid confusion with biLipschitz maps.
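For instance, if \(\mu \) is Ahlfors Q-regular, then \(k(t)=\log \mu (B_{e^t})=Qt+O(1)\), so the rough quasiisometry bounds hold with \(q=q^{\prime }=Q\). A toy verification (our own illustration; the constants c and Q are hypothetical):

```python
from math import log
from itertools import product

Q, c = 2.5, 7.0
k = lambda t: log(c) + Q * t      # k(t) = log mu(B_{e^t}) for mu(B_r) = c * r^Q

# the increments k(t) - k(s) stay within any additive error C of Q*(t - s);
# here they agree exactly, since the regularity is exact:
pts = [-5.0, -1.0, 0.0, 2.0, 6.0]
exact = all(abs((k(t) - k(s)) - Q * (t - s)) < 1e-9 for s, t in product(pts, pts))
```
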
3 Examples of exponent sets
In this section we give various examples of the exponent sets. In particular, we shall see that the end points of the four exponent sets can all be different (Examples 3.2, 3.4) and that the borderline exponents may or may not belong to the sets (Examples 3.1, 3.3). See Svensson [40] for further examples with different types of exponent sets.
Our examples are based on radial weights in \(\mathbf {R}^n\), and all the weights we consider are in fact 1-admissible, i.e. they are globally doubling and support a global 1-Poincaré inequality on \(\mathbf {R}^n\). Later in Sect. 9 these weights will be used to demonstrate the sharpness of several of our capacity estimates. In Sect. 10 we give a general sufficient condition for 1-admissibility of radial weights.
For simplicity, we write e.g. \(\log ^\beta r:=(\log r)^\beta \).
Example 3.1
Consider \(\mathbf {R}^n\), \(n\ge 2\), equipped with the measure \(d\mu =w(|y|)\,dy\), where
Here \(p\ge 1\) and \(\beta \in \mathbf {R}\) is arbitrary. Fix \(x=0\) and write \(B_r=B(0,r)\). Then it is easily verified that for \(r\le 1/e\) we have \(\mu (B_r)\simeq r^p\log ^\beta (1/r)\). Letting \(r\rightarrow 0\) in the definition of the exponent sets shows that
In both cases \(\sup {\underline{Q}}=\inf {\overline{Q}}=p\), but only one of these is attained (when \(\beta \ne 0\)). Letting instead
gives again \(\sup {\underline{Q}}=\inf {\overline{Q}}=p\), but if \(\beta >0\) it is now \(\sup {\underline{Q}}\) that is attained, while for \(\beta <0\) only \(\inf {\overline{Q}}\) is attained.
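The attainment of the borderline exponent can be read off from the ratio \(\mu (B_r)/r^p=\log ^\beta (1/r)\): it is unbounded as \(r\rightarrow 0\) when \(\beta >0\) (so \(p\notin {\underline{S}_0}\) while \(p\in {\overline{S}_0}\)), and tends to 0 when \(\beta <0\). A numerical sketch (our own illustration, with \(p=2\)):

```python
from math import log

def ratio(r, beta):
    # mu(B_r)/r^p = log(1/r)^beta for the model measure mu(B_r) = r^p * log(1/r)^beta
    return log(1.0 / r)**beta

radii = [2.0**-k for k in (10, 20, 40, 80)]
unbounded = [ratio(r, beta=1.0) for r in radii]    # grows: p not in the lower S-set
vanishing = [ratio(r, beta=-1.0) for r in radii]   # -> 0: p not in the upper S-set
```
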
Example 3.2
We are now going to create an example of a 1-admissible weight in \(\mathbf {R}^2\) with
showing that the four end points can all be different.
Let \(\alpha _k=2^{-2^k}\) and \(\beta _k= \alpha _k^{3/2}=2^{-3\cdot 2^{k-1}}\), \(k=0,1,2,\ldots \) . Note that \(\alpha _{k+1}= \alpha _k^2\). In \(\mathbf {R}^2\) we fix \(x=0\) and consider the measure \(d\mu =w(|y|)\,dy\), where
Then
and thus w is 1-admissible by Proposition 10.5. We next have that
In particular,
For \(\beta _k \le r \le \alpha _k\) we instead have
and thus
It follows that
and
Since \(w(\rho ) \le \rho \) for all \(\rho \), we have that \(\mu (B_r) \lesssim r^3\) for all r, which together with (3.5) shows that \({\underline{S}_0}=(0,3]\).
From the estimates (3.5) and (3.2) we obtain
Indeed, when \(\alpha _{k+1} \le r \le 2 \alpha _{k+1}\) this follows directly from (3.5), and for \(2 \alpha _{k+1} \le r \le \beta _k\) we use (3.2) to get a lower bound, while the upper bound follows from (3.2) together with (3.5). In particular, we get that
Estimating similarly, using instead (3.3) and (3.4), shows that
We conclude from the last two estimates and from (3.4) that \({\overline{S}_0}=\bigl [\tfrac{10}{3},\infty \bigr )\).
Next, we see from (3.6) and (3.8) that
Hence, if \(\alpha _{k+1} \le r \le \beta _k \le R \le \alpha _k\), then
and thus
It follows from (3.9) that this estimate holds also in the remaining cases when \(\alpha _{k+1} \le r \le R \le \alpha _k\). Finally, if \(\alpha _{j+1} \le r \le \alpha _j \le \alpha _{k+1} \le R \le \alpha _k\), then
and
which together with (3.9) show that
(The estimates for balls with radii larger than \(\alpha _0=\frac{1}{2}\) are easier.)
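The arithmetic relations between the scales \(\alpha _k\) and \(\beta _k\) used throughout Example 3.2 can be verified mechanically. Since the numbers themselves underflow very quickly, the check below tracks the base-2 exponents instead:

```python
from fractions import Fraction

# Sanity check of the scales in Example 3.2. We track base-2 exponents
# (alpha_k = 2^{-a(k)} with a(k) = 2^k, and beta_k = 2^{-b(k)} with
# b(k) = 3*2^{k-1}) so that no underflow occurs even for large k.
def a(k):
    return Fraction(2) ** k                       # alpha_k = 2^{-a(k)}

def b(k):
    return Fraction(3) * Fraction(2) ** (k - 1)   # beta_k = 2^{-b(k)}

for k in range(0, 200):
    assert a(k + 1) == 2 * a(k)               # alpha_{k+1} = alpha_k^2
    assert b(k) == Fraction(3, 2) * a(k)      # beta_k = alpha_k^{3/2}
    # a larger exponent means a smaller scale, so this is
    # alpha_{k+1} < beta_k < alpha_k:
    assert a(k) < b(k) < a(k + 1)

print("scale relations of Example 3.2 verified")
```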
The following example is a modification of Example 3.2. It shows that we can have \(\sup {\underline{S}_0}=\inf {\overline{S}_0}\) while \({\underline{S}_0}\ne {\underline{Q}_0}\) and \({\overline{S}_0}\ne {\overline{Q}_0}\). In this case the common borderline exponent of the S-sets belongs to \({\underline{S}_0}\) but not to \({\overline{S}_0}\), thus demonstrating the sharpness of Lemma 2.3.
Example 3.3
Consider \(\mathbf {R}^2\) and \(x=0\). Let \(\alpha _k\) and w be as in Example 3.2. Also let \(\gamma _k=\alpha _{k+1} \log k\) and \(\delta _k=\alpha _{k+1} \log ^2 k\), \(k=3,4,\ldots \), so that \(\alpha _{k+1}< \gamma _k< \delta _k < \alpha _k\), and let
and \(d\mu (y) =w_2(|y|)\,dy\). It follows from Proposition 10.5 that \(w_2\) is 1-admissible, as
Since \(w(\rho ) \le w_2(\rho ) \le \rho \) for \(\rho \le \alpha _2\) we see that \(\mu (B_{\alpha _k}) \simeq \alpha _k^3\) and \({\underline{S}_0}=(0,3]\). Moreover,
and
It follows that
As in Example 3.2 one can show that these are the extreme cases, and thus letting \(k\rightarrow \infty \) shows that \({\overline{S}_0}=(3,\infty )\). Moreover,
Since \(\alpha _{k+1} / \gamma _k = 1/{\log k} \rightarrow 0\), as \(k \rightarrow \infty \), this shows that \(p \notin {\underline{Q}_0}\) if \(p>2\). As this is the extreme case, we see that \({\underline{Q}}={\underline{Q}_0}=(0,2]\). Finally,
which shows that \({\overline{Q}}={\overline{Q}_0}=[4,\infty )\).
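The ordering \(\alpha _{k+1}< \gamma _k< \delta _k < \alpha _k\) claimed in Example 3.3, and the decay \(\alpha _{k+1}/\gamma _k = 1/{\log k} \rightarrow 0\), can be confirmed numerically. As the scales underflow rapidly, the check below works with natural logarithms (and restricts k so that floating-point cancellation does not swamp the \(\log \log k\) terms):

```python
import math

# Check alpha_{k+1} < gamma_k < delta_k < alpha_k from Example 3.3,
# working with natural logarithms of the scales to avoid underflow.
def ln_alpha(k):
    return -(2.0 ** k) * math.log(2.0)

def ln_gamma(k):   # gamma_k = alpha_{k+1} * log k
    return ln_alpha(k + 1) + math.log(math.log(k))

def ln_delta(k):   # delta_k = alpha_{k+1} * log^2 k
    return ln_alpha(k + 1) + 2 * math.log(math.log(k))

# k capped at 44 so that log(log k) is still resolvable next to 2^{k+1}*log 2
for k in range(3, 45):
    assert ln_alpha(k + 1) < ln_gamma(k) < ln_delta(k) < ln_alpha(k)

# alpha_{k+1}/gamma_k = 1/log k -> 0 as k -> infinity
ratios = [math.exp(ln_alpha(k + 1) - ln_gamma(k)) for k in (3, 10, 20)]
print(ratios)  # [1/log 3, 1/log 10, 1/log 20]
```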
There is nothing special about the end points 2, 3, \(\frac{10}{3}\) and 4 (or the plane \(\mathbf {R}^2\)) in Example 3.2. Indeed, in the following example we indicate how one can construct a 1-admissible weight w in \(\mathbf {R}^n\), \(n\ge 2\), such that
where \(1<a<b<c<d\). The reason for the condition \(a>1\) is that we want to obtain the 1-admissibility of w using Proposition 10.5, see Remark 10.6.
Example 3.4
For \(1<a<b<c<d\) let
and
Note that \(\lambda >1\) and thus \(\alpha _k\rightarrow 0\) as \(k\rightarrow \infty \). Also, \(\alpha _{k+1}\ll \beta _k\ll \alpha _k\). Then the weight
is continuous and 1-admissible on \(\mathbf {R}^n\). Without going into details, one then argues similarly to Example 3.2 to show that (3.10) holds.
4 Background results on metric spaces
In this section we are going to introduce the necessary background on Sobolev spaces and capacities in metric spaces. Proofs of most of the results mentioned in the first half of this section can be found in the monographs Björn and Björn [5] and Heinonen et al. [27]. Towards the end of this section we obtain some new results.
We begin with the notion of upper gradients as defined by Heinonen and Koskela [26] (who called them very weak gradients).
Definition 4.1
A Borel function \(g \ge 0\) on X is an upper gradient of \(f:X\rightarrow [-\infty ,\infty ] \) if for all (nonconstant, compact and rectifiable) curves \(\gamma :[0,l_{\gamma }] \rightarrow X\),
where we follow the convention that the left-hand side is \(\infty \) whenever at least one of the terms therein is infinite. If \(g \ge 0\) is a measurable function on X and if (4.1) holds for \(p\)-almost every curve (see below), then g is a \(p\)-weak upper gradient of f.
A curve is a continuous mapping from an interval, and a rectifiable curve is a curve with finite length. We will only consider curves which are nonconstant, compact and rectifiable, and thus each curve can be parameterized by its arc length ds. A property is said to hold for \(p\)-almost every curve if it fails only for a curve family \(\Gamma \) with zero \(p\)-modulus, i.e. there exists \(0\le \rho \in L^p(X)\) such that \(\int _\gamma \rho \,ds=\infty \) for every curve \(\gamma \in \Gamma \). Note that a \(p\)-weak upper gradient need not be a Borel function; it is only required to be measurable. On the other hand, every measurable function g can be modified on a set of measure zero to obtain a Borel function, from which it follows that \(\int _{\gamma } g\,ds\) is defined (with a value in \([0,\infty ]\)) for \(p\)-almost every curve \(\gamma \).
The \(p\)-weak upper gradients were introduced by Koskela and MacManus [34]. It was also shown there that if \(g \in L^p(X)\) is a \(p\)-weak upper gradient of f, then one can find a sequence \(\{g_j\}_{j=1}^\infty \) of upper gradients of f such that \(g_j \rightarrow g\) in \(L^p(X)\). If f has an upper gradient in \(L^p(X)\), then it has a minimal \(p\)-weak upper gradient \(g_f \in L^p(X)\) in the sense that for every \(p\)-weak upper gradient \(g \in L^p(X)\) of f we have \(g_f \le g\) a.e., see Shanmugalingam [39] and Hajłasz [21]. The minimal \(p\)-weak upper gradient is well defined up to a set of measure zero in the cone of nonnegative functions in \(L^p(X)\). Following Shanmugalingam [38], we define a version of Sobolev spaces on the metric measure space X.
Definition 4.2
For a measurable function \(f:X\rightarrow [-\infty ,\infty ] \), let
where the infimum is taken over all upper gradients of f. The Newtonian space on X is
The space \(N^{1,p}(X)/{\sim }\), where \(f \sim h\) if and only if \(\Vert f-h\Vert _{N^{1,p}(X)}=0\), is a Banach space and a lattice, see Shanmugalingam [38]. In this paper we assume that functions in \(N^{1,p}(X)\) are defined everywhere, not just up to an equivalence class in the corresponding function space. This is needed for the definition of upper gradients to make sense. For a measurable set \(E\subset X\), the Newtonian space \(N^{1,p}(E)\) is defined by considering \((E,d|_E,\mu |_E)\) as a metric space in its own right. If \(f,h \in N^{1,p}_{\mathrm{loc}}(X)\), then \(g_f=g_h\) a.e. in \(\{x \in X : f(x)=h(x)\}\), in particular \(g_{\min \{f,c\}}=g_f \chi _{f < c}\) for \(c \in \mathbf {R}\).
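In the smooth one-dimensional Euclidean setting the minimal \(p\)-weak upper gradient of a differentiable function is \(|f'|\), so the truncation identity \(g_{\min \{f,c\}}=g_f \chi _{f < c}\) can be sanity-checked by finite differences (an illustration only; the statement above is the general metric-space fact):

```python
import math

# In the smooth 1-D Euclidean setting the minimal upper gradient of f
# is |f'|, so g_{min{f,c}} = g_f * chi_{f<c} can be checked numerically.
f = math.sin
c = 0.5
h = 1e-6

def num_grad(fun, x):
    # central finite-difference approximation of |fun'(x)|
    return abs(fun(x + h) - fun(x - h)) / (2 * h)

def truncated(x):
    return min(f(x), c)

# sample points chosen away from the level set {sin x = 0.5}
for x in (0.1, 0.3, 1.0, 1.4, 2.0, 3.0):
    expected = num_grad(f, x) if f(x) < c else 0.0
    assert abs(num_grad(truncated, x) - expected) < 1e-9
print("g_{min{f,c}} = g_f * chi_{f<c} verified at sample points")
```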
Definition 4.3
The Sobolev \(p\)-capacity of an arbitrary set \(E\subset X\) is
where the infimum is taken over all \(u \in N^{1,p}(X)\) such that \(u\ge 1\) on E.
The Sobolev capacity is countably subadditive. We say that a property holds quasieverywhere (q.e.) if the set of points for which it fails has Sobolev capacity zero. The Sobolev capacity is the correct gauge for distinguishing between two Newtonian functions. If \(u,v \in N^{1,p}(X)\), then \(u \sim v\) if and only if \(u=v\) q.e. Moreover, Corollary 3.3 in Shanmugalingam [38] shows that if \(u,v \in N^{1,p}(X)\) and \(u= v\) a.e., then \(u=v\) q.e. This is the main reason why, unlike in the classical Euclidean setting, we do not need to require the functions admissible in the definition of capacity to be 1 in a neighbourhood of E. Theorem 4.5 in [38] shows that for open \(\Omega \subset \mathbf {R}^n\), the quotient space \(N^{1,p}(\Omega )/{\sim }\) coincides with the usual Sobolev space \(W^{1,p}(\Omega )\). For weighted \(\mathbf {R}^n\), the corresponding results can be found in Björn and Björn [5, Appendix A.2]. It can also be shown that in this case \({C_p}\) is the usual Sobolev capacity in (weighted or unweighted) \(\mathbf {R}^n\).
Definition 4.4
We say that X supports a \(p\)-Poincaré inequality at x if there exist constants \(C>0\) and \(\lambda \ge 1\) such that for all balls \(B=B(x,r)\), all integrable functions f on X, and all upper gradients g of f,
where \(f_B:=\frac{1}{\mu (B)}\int _B f\,d\mu \) denotes the integral average of f over B. If C and \(\lambda \) are independent of x, we say that X supports a (global) \(p\)-Poincaré inequality.
In the definition of Poincaré inequality we can equivalently assume that g is a \(p\)-weak upper gradient—see the comments above. It was shown by Keith and Zhong [32] that if X is complete and \(\mu \) is globally doubling and supports a global \(p\)-Poincaré inequality with \(p>1\), then \(\mu \) actually supports a global \(p_0\)-Poincaré inequality for some \(p_0<p\). The completeness of X is needed for Keith–Zhong’s result, as shown by Koskela [33]. In some of our estimates we will need such a better \(p_0\)-Poincaré inequality at x, which (by Koskela’s example) does not follow from the \(p\)-Poincaré inequality at x.
If X is complete and \(\mu \) is globally doubling and supports a global \(p\)-Poincaré inequality, then the functions in \(N^{1,p}(X)\) and those in \(N^{1,p}(\Omega )\), for open \(\Omega \subset X\), are quasicontinuous, see Björn et al. [10]. This means that in the Euclidean setting \(N^{1,p}(\mathbf {R}^n)\) and \(N^{1,p}(\Omega )\) are the refined Sobolev spaces as defined in Heinonen et al. [24, p. 96], see Björn and Björn [5, Appendix A.2] for a proof of this fact valid in weighted \(\mathbf {R}^n\).
To be able to define the variational capacity we first need a Newtonian space with zero boundary values. We let, for an open set \(\Omega \subset X\),
Definition 4.5
Let \(\Omega \subset X\) be open. The variational \(p\)-capacity of \(E\subset \Omega \) with respect to \(\Omega \) is
where the infimum is taken over all \(u \in N^{1,p}_0(\Omega )\) such that \(u\ge 1\) on E.
The variational capacity is also countably subadditive, and it coincides with the usual variational capacity in the case when \(\Omega \subset \mathbf {R}^n\) is open (see Björn and Björn [7, Theorem 5.1] for a proof valid in weighted \(\mathbf {R}^n\)). We are next going to establish three new results concerning the variational capacity. Propositions 4.6 and 4.7 will only be used in Proposition 8.2 (and Example 9.4) to prove a condition for a point to have positive capacity, while Proposition 4.8 will only be used for proving Propositions 8.6 and 10.8 (and in Example 9.4), which deal with the variational capacity taken with respect to the whole space. These results may also be of independent interest.
It is well known that if X supports a global (p, p)-Poincaré inequality (i.e. a Poincaré inequality with an \(L^p\) norm instead of an \(L^1\) norm in the left-hand side), then the variational and Sobolev capacities have the same zero sets (if \(\Omega \) is bounded and \({C_p}(X {\setminus } \Omega )>0\)). We will need the following generalization of this fact. Since we do not have the same tools available, our proof is different and more direct than those in the literature. Note also that we only require a \(p\)-Poincaré inequality (at x), not a (p, p)-Poincaré inequality.
Proposition 4.6
Assume that X supports a \(p\)-Poincaré inequality at some \(x\in X\), that \(\Omega \) is a bounded open set, and that \(E \subset \Omega \). Then \({{\mathrm{cap}}}_p(E,\Omega )=0\) if and only if \({C_p}(E)=0\) or \({C_p}(X{\setminus } \Omega )=0\).
The Poincaré assumption cannot be completely omitted, as is easily seen by considering a nonconnected example, or a bounded “bow-tie” as in Example 5.5 in Björn and Björn [6]. However, we actually do not need the full \(p\)-Poincaré inequality at x, since it is enough to have a \(p\)-Poincaré inequality for some large enough ball B (i.e. such that \(\Omega \subset B\) and \({C_p}(B {\setminus } \Omega )>0\)). This somewhat resembles the situation concerning Friedrichs’ inequality (also called Poincaré inequality for \(N^{1,p}_0\)) and its role in the uniqueness of minimizers, see the discussion in Section 5 in [6]. For an easy example of a space which supports a Poincaré inequality for large balls but not for small balls, see Example 5.9 in [6].
Proof
If \({C_p}(E)=0\), then \(u:=\chi _E \in N^{1,p}_0(\Omega )\), while if \({C_p}(X {\setminus } \Omega )=0\), then \(u:=\chi _\Omega \in N^{1,p}_0(\Omega )\). In both cases this yields that \({{\mathrm{cap}}}_p(E,\Omega ) \le \int _\Omega g_u^p\,d\mu =0\).
Conversely, assume that \({{\mathrm{cap}}}_p(E,\Omega )=0\) and that \({C_p}(X {\setminus } \Omega )>0\). We need to show that \({C_p}(E)=0\). Choose a ball B centred at x and containing \(\Omega \) such that \({C_p}(B {\setminus } \Omega )>0\). By Lemma 2.24 in Björn and Björn [5], also \({C_p^B}(B {\setminus } \Omega )>0\), where \({C_p^B}\) is the Sobolev capacity with respect to the ambient space B. Let \(0 \le u \le 1\) be admissible for \({{\mathrm{cap}}}_p(E,\Omega )\). Then
In the former case we let \(v=(2u-1)_{+}:=\max \{2u-1,0\}\), while in the latter we let \(v=(1-2u)_{+}\). In both cases \(g_v\le 2g_u\) and \(\mu (A)\ge \tfrac{1}{2}\mu (B)\), where \(A=\{y\in B:v(y)=0\}\). Since \(v_B=|v-v_B|\) in A, we have by the \(p\)-Poincaré inequality for B that
Hence, as \(0\le v\le 1\) and \(g_v\le 2g_u\), we have
where the implicit constant in \(\lesssim \) depends on B but is independent of u. Taking infimum over all admissible u shows that, depending on the choices of v, we have at least one of \({C_p^B}(E)=0\) and \({C_p^B}(B{\setminus }\Omega )=0\), the latter being impossible by the choice of B. Thus \({C_p^B}(E)=0\) and Lemma 2.24 in [5] completes the proof. \(\square \)
If X is complete and \(\mu \) is globally doubling and supports a global \(p\)-Poincaré inequality, then it is known that the variational capacity is an outer capacity, i.e. if E is a compact subset of \(\Omega \) then
see Björn et al. [10, p. 1199] and Theorem 6.19 in Björn and Björn [5]. We will need a version of this result for sets of zero capacity under our more general assumptions. For the Sobolev capacity such a result was obtained in [10, Proposition 1.4] (which can also be found as Proposition 5.27 in [5]), under the assumption that X is proper. (Recall that a metric space X is proper if all closed bounded subsets are compact. If \(\mu \) is globally doubling, then X is proper if and only if X is complete.) A modification of that proof yields the following generalization, which only requires local compactness near E and at the same time also gives the conclusion for the variational capacity. This generalization was partly inspired by the discussion of the corresponding result in Heinonen et al. [27]. In combination with Proposition 4.6, Proposition 4.7 gives the outer capacity property for sets of zero variational capacity under very mild assumptions.
Proposition 4.7
Let \(\Omega \) be an open set, and let \(E\subset \Omega \) with \({C_p}(E)=0\). Assume that there is a locally compact open set \(G \supset E\). Then, for every \(\varepsilon >0\), there is an open set \(U\supset E\) with
We outline the main ideas of the proof, see the above references for more details.
Sketch of proof
First assume that \(\overline{G}\) is compact, and choose a bounded open set \(V\supset E\) such that \(V \subset G \cap \Omega \) and \(\int _V(\rho +1)^p\,d\mu <\varepsilon \), where \(\rho \) is a lower semicontinuous upper gradient of \(\chi _E\in N^{1,p}(X)\), which exists by the Vitali–Carathéodory property as \({C_p}(E)=0\). The function \(u(x):=\min \{1,\inf _\gamma \int _\gamma (\rho +1)\,ds\}\), with the infimum taken over all curves connecting x to \(X{\setminus } V\) (including constant curves), has \((\rho +1)\chi _V\) as an upper gradient, and \(u=1\) in E. Lemma 3.3 in [10] shows that u is lower semicontinuous in G and hence everywhere, since \(u=0\) in \(X {\setminus } V\) by construction. This also shows that \(u \in N^{1,p}_0(\Omega )\). Using u as a test function for the level set \(U:=\{x: u(x)>\tfrac{1}{2}\}\) shows that \({{\mathrm{cap}}}_p(U,\Omega )\lesssim \varepsilon \) and \({C_p}(U)\lesssim \varepsilon \), and proves the claim in this case.
If G is merely locally compact, we use separability to find a suitable countable cover of E, and then conclude the result using the countable subadditivity of the capacities.\(\square \)
A direct consequence of Proposition 4.7 is that the assumption that X is proper can be replaced by the assumption that \(\Omega \) is locally compact in Theorem 5.29 and Propositions 5.28 and 5.33 in Björn and Björn [5], see also Björn et al. [10] and Heinonen et al. [27].
We will also need the following result.
Lemma 4.8
Let \(E \subset X\) be bounded and let \(x\in X\). Then
Proof
That \({{\mathrm{cap}}}_p(E,X)\le \lim _{r \rightarrow \infty } {{\mathrm{cap}}}_p(E,B(x,r))\) is trivial. To prove the converse, we may assume that \({{\mathrm{cap}}}_p(E,X) < \infty \). Let \(\varepsilon >0\) and let u be admissible for \({{\mathrm{cap}}}_p(E,X)\) and such that \(\int _X g_u^p \, d\mu < {{\mathrm{cap}}}_p(E,X) + \varepsilon \). Then \(u_n:=u\eta _n\rightarrow u\) in \(N^{1,p}(X)\), as \(n \rightarrow \infty \), where \(\eta _n(y)=(1-{{\mathrm{dist}}}(y,B(x,n)))_{+}\). Hence,
Letting \(\varepsilon \rightarrow 0\) concludes the proof. \(\square \)
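The cutoffs \(\eta _n(y)=(1-{{\mathrm{dist}}}(y,B(x,n)))_{+}\) in the proof of Lemma 4.8 equal 1 on \(B(x,n)\), vanish outside \(B(x,n+1)\), and are 1-Lipschitz (since the distance function is). These three properties are easy to confirm numerically on the real line with \(x=0\), where \({{\mathrm{dist}}}(y,B(0,n))=\max \{|y|-n,0\}\):

```python
# The cutoffs eta_n(y) = (1 - dist(y, B(x,n)))_+ used in Lemma 4.8:
# equal to 1 on B(x,n), zero outside B(x,n+1), and 1-Lipschitz.
# Quick check on the real line with x = 0:
def eta(n, y):
    return max(0.0, 1.0 - max(abs(y) - n, 0.0))

n = 5
assert all(eta(n, y) == 1.0 for y in (0.0, 2.5, 4.999))   # 1 on B(0,5)
assert all(eta(n, y) == 0.0 for y in (6.0, 7.5, -100.0))  # 0 off B(0,6)

# discrete Lipschitz constant on a fine grid over [-8, 8]
ys = [k * 0.001 for k in range(-8000, 8001)]
lip = max(abs(eta(n, ys[i + 1]) - eta(n, ys[i])) / 0.001
          for i in range(len(ys) - 1))
print(lip)  # <= 1 up to rounding
```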
Our lower bound estimates for the capacities are all based on the following telescoping argument, which is well-known under the assumptions that \(\mu \) is globally doubling and supports a global \(p\)-Poincaré inequality. However, it is enough to require the \(p\)-Poincaré inequality, as well as the doubling and reverse-doubling conditions, at x only. We therefore recall the short proof.
Lemma 4.9
Assume that \(\mu \) is doubling and reverse-doubling at x and supports a \(p\)-Poincaré inequality at x. Let \(0<r < R \le {{\mathrm{diam}}}X /2\tau \), where \(\tau >1\) is the constant from the reverse-doubling condition (2.2). Write \(r_k=2^k r\) and \(B^k=B(x,r_k)\) for \(k\in \mathbf {Z}\), and let \(k_0\) be such that \(r_{k_0} \le R < r_{k_0+1}\). Then for any \(u\in N_0^{1,p}(B_{R})\) we have
where \(\lambda \) is the dilation constant in the \(p\)-Poincaré inequality at x.
Proof
For \(u\in N_0^{1,p}(B_{R})\) we have \(u_A=0\), where \(A={B_{\tau R}{\setminus } B_R}\). Let \(B^* = B_{\tau R}\cup B_{2R}\). Then
Since \(\mu \) is doubling and reverse-doubling at x, it is easy to verify that
The doubling condition and \(p\)-Poincaré inequality at x, together with the fact that \(B^{k_0+1}\subset B^*\) and \(A\subset B^*\), then show that
The claim follows, since the last integral is comparable to . \(\square \)
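The displayed conclusion of Lemma 4.9 is not reproduced in this excerpt, but the telescoping chain behind estimates of this type is standard and can be sketched as follows (a sketch under the doubling and \(p\)-Poincaré assumptions at x; the precise constants are as in the lemma):

```latex
% Since u_A = 0 and B^0 = B_r, telescoping over the dyadic balls B^k gives
|u_{B^0}| = |u_{B^0} - u_A|
  \le \sum_{k=0}^{k_0} |u_{B^k} - u_{B^{k+1}}| + |u_{B^{k_0+1}} - u_A|,
% and each difference is controlled, using B^k \subset B^{k+1} together
% with the doubling and p-Poincare assumptions at x, by
|u_{B^k} - u_{B^{k+1}}|
  \le \frac{1}{\mu(B^k)} \int_{B^k} |u - u_{B^{k+1}}| \,d\mu
  \lesssim \frac{1}{\mu(B^{k+1})} \int_{B^{k+1}} |u - u_{B^{k+1}}| \,d\mu
  \lesssim r_{k+1} \biggl( \frac{1}{\mu(\lambda B^{k+1})}
      \int_{\lambda B^{k+1}} g_u^p \,d\mu \biggr)^{1/p},
% while the remaining term |u_{B^{k_0+1}} - u_A| is handled analogously
% via the ball B^*.
```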
Remark 4.10
In the forthcoming sections we give several different capacity estimates involving the exponent sets \({\underline{Q}}\) and \({\overline{Q}}\). In these results (and in Lemma 4.9 above), the implicit constants in \(\simeq \), \(\lesssim \) and \(\gtrsim \) will always be independent of r and R, but they may depend on x, X, \(\mu \), p and (the auxiliary exponent) q. The dependence on x, X and \(\mu \) will only be through the constants in the doubling, reverse-doubling and Poincaré assumptions, as well as through the constants \(C_q\) in the definitions of the Q-sets. In particular, if these conditions hold in all of X with uniform constants, then we obtain capacity estimates which are independent of x as well.
There are also corresponding estimates involving \({\underline{Q}_0}\), \({\underline{Q}_\infty }\), \({\overline{Q}_0}\) and \({\overline{Q}_\infty }\), which are just easy reformulations with appropriate restrictions on the radii, viz. \(R\le R_0\) for the \({\underline{Q}_0}\)- and \({\overline{Q}_0}\)-sets, and \(r\ge R_0\) for the \({\underline{Q}_\infty }\)- and \({\overline{Q}_\infty }\)-sets, where \(0<R_0<\infty \) is fixed, cf. Theorems 1.1 and 1.2. In these restricted estimates, as well as in the estimates in Sect. 8 involving the S-sets, the implicit constants in \(\simeq \), \(\lesssim \) and \(\gtrsim \) will in addition depend on \(R_0\). Observe also that, by e.g. Lemmas 2.4 and 2.5, the exponent sets are independent of \(R_0\), but the constants \(C_q\) do depend on the range of radii.
For these restricted estimates one can also weaken the assumptions a little: The doubling and reverse-doubling conditions and the Poincaré inequality are only needed for balls with radii in the considered range, i.e. for \(r\le \max \{2,\tau \}R_0\) or for \(r\ge R_0\). Arguing as in Lemma 2.5, it is easily seen that in the case of the doubling condition (but not for reverse-doubling and the Poincaré inequality) this is equivalent to assuming doubling for all \(r\le 1\) or \(r\ge 1\), respectively. For the reverse-doubling and the Poincaré inequality, the range of radii for which they hold is however essential, as can be seen by e.g. letting X be the union of two disjoint closed balls in \(\mathbf {R}^n\).
The factor 2 in the above bound \(\max \{2,\tau \}R_0\) is only dictated by the dyadic balls in the proof of Lemma 4.9 and can equivalently be replaced by any \(\sigma >1\), upon correspondingly changing the choice of balls therein. Again, this will be reflected in the implicit constants.
5 Upper bounds for capacity
From now on we make the general assumption that \(\mu \) is doubling at x. Recall also that \(1\le p<\infty \).
The following simple upper bound for capacity is valid for any \(1\le p < \infty \). Note that we do not need any Poincaré inequality (nor reverse-doubling) to obtain any of our upper bound estimates.
Proposition 5.1
Let \(0<2r\le R\). Then
For \(p\in {\underline{Q}}\) (resp. \(p\in {\overline{Q}}\)), the first (resp. second) term in the minimum gives the sharper estimate, but for p in between the Q-sets the minimum can vary depending on the radii, as can be seen in Example 9.3. See Sect. 6 for corresponding lower estimates.
It is essential to bound r away from R in Proposition 5.1 since typically \({{\mathrm{cap}}}_p(B_r,B_{R})\rightarrow \infty \) as \(r\rightarrow R\). This is apparent and well-known in unweighted \(\mathbf {R}^n\) (cf. Example 2.12 in Heinonen et al. [24]), but similar behaviour is present in more general metric spaces as well. (This restriction should thus be taken into account in the upper bounds in [15] and [19] as well.) Capacity of thin annuli (with \(R/2<r<R\)) in the metric setting is studied in [9].
Proof
Take
Both of these are admissible for \({{\mathrm{cap}}}_p(B_r,B_{R})\), and clearly (by doubling),
\(\square \)
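To make the preceding discussion concrete: in unweighted \(\mathbf {R}^n\) with \(1<p<n\), the annular capacity has a classical closed form (cf. Example 2.12 in Heinonen et al. [24]). The snippet below evaluates it, confirming both the comparability with \(r^{n-p}\) when \(2r\le R\) and the blow-up for thin annuli mentioned above:

```python
import math

# Classical closed form in unweighted R^n for 1 < p < n
# (cf. Heinonen-Kilpelainen-Martio, Example 2.12):
#   cap_p(B_r, B_R) = omega_{n-1} * ((n-p)/(p-1))^{p-1}
#                     * (r^{(p-n)/(p-1)} - R^{(p-n)/(p-1)})^{1-p}
def cap_annulus(n, p, r, R, omega):
    e = (p - n) / (p - 1)
    return omega * ((n - p) / (p - 1)) ** (p - 1) * (r ** e - R ** e) ** (1 - p)

n, p, omega2 = 3, 2.0, 4 * math.pi  # omega2 = surface area of the unit S^2

# For 2r <= R the capacity is comparable to r^{n-p}:
ratios = [cap_annulus(n, p, r, 10.0, omega2) / r ** (n - p)
          for r in (1.0, 0.1, 0.01)]
print(ratios)  # bounded above and below; tends to 4*pi as r/R -> 0

# ... but the capacity blows up as r -> R (thin annuli):
print(cap_annulus(n, p, 0.999, 1.0, omega2))  # very large
```

For \(n=3\), \(p=2\) the formula reduces to \(4\pi (1/r-1/R)^{-1}\), which recovers the Newtonian capacity \(4\pi r\) of a ball as \(R\rightarrow \infty \).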
The following logarithmic upper bounds are particularly useful in the borderline cases \(p=\max {\underline{Q}}\) and \(p=\min {\overline{Q}}\). These estimates are valid also for \(p=1\), as well as for \(p\in {{\mathrm{int}}}{\underline{Q}}\) and \(p\in {{\mathrm{int}}}{\overline{Q}}\), but in these cases Proposition 5.1 actually gives better upper bounds for \({{\mathrm{cap}}}_p(B_r,B_R)\). Note also that even for the borderline cases \(p=\max {\underline{Q}}\) and \(p=\min {\overline{Q}}\), the estimates in Proposition 5.1 can be sharp, and better than those in Proposition 5.2 below, as shown at the end of Example 9.3.
Proposition 5.2
Let \(0<2r\le R\).
-
(a)
If \(p\in {\underline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\lesssim \frac{\mu (B_{R})}{R^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(5.1) -
(b)
If \(p\in {\overline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\lesssim \frac{\mu (B_r)}{r^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(5.2)
Examples 9.4 (b) and 9.5 (b) show that these estimates are sharp.
Proof
Choose
Then u is admissible for \({{\mathrm{cap}}}_p(B_r,B_{R})\), and g is a \(p\)-weak upper gradient of u, by Theorem 2.16 in Björn and Björn [5]. Write \(r_k=2^k r\) and \(B^k=B(x,r_k)\), and let \(k_0\in \mathbf {Z}\) be such that \(r_{k_0}\le R < r_{k_0+1}\). Then
For \(p\in {\underline{Q}}\) we have that \(r_k^{-p}\mu (B^k)\lesssim R^{-p}\mu (B_{R})\) when \(1 \le k \le k_0+1\), and for \(p\in {\overline{Q}}\) that \(r_k^{-p}\mu (B^k)\lesssim r^{-p}\mu (B_r)\) for all \(k \ge 1\). Since \(0<r\le R/2\), we have \(k_0+1\lesssim \log (R/r)\), and so both claims follow from (5.3). \(\square \)
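The displayed choice of u and g in the proof above is not reproduced in this excerpt; a standard test function behind logarithmic bounds such as (5.1) and (5.2) is \(u(y)=\min \{1,\log (R/|y|)/\log (R/r)\}\), with \(g=(|y|\log (R/r))^{-1}\) on \(B_R{\setminus }B_r\). In unweighted \(\mathbf {R}^n\) with \(p=n\) its energy is exactly \(\omega _{n-1}(\log (R/r))^{1-n}\), matching the \((\log (R/r))^{1-p}\) decay in (5.2) up to a constant. A numeric confirmation:

```python
import math

# Energy of the logarithmic test function u(y) = min{1, log(R/|y|)/log(R/r)},
# with g = 1/(|y| log(R/r)) on B_R \ B_r, in unweighted R^3 with p = n = 3.
n = 3
omega = 4 * math.pi   # surface area of the unit sphere S^2 in R^3
r, R = 0.01, 1.0

def energy(steps=100000):
    # int_{B_R \ B_r} g^n dy
    #   = omega * L^{-n} * int_r^R rho^{-n} * rho^{n-1} drho,  L = log(R/r),
    # i.e. omega * L^{-n} * int_r^R rho^{-1} drho  (midpoint rule below)
    h = (R - r) / steps
    L = math.log(R / r)
    return omega * sum((r + (k + 0.5) * h) ** (-1) * h
                       for k in range(steps)) / L ** n

e = energy()
closed_form = omega * math.log(R / r) ** (1 - n)
print(e, closed_form)  # agree to several digits
```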
6 Lower bounds for capacity
The results in this section complement the upper bounds in Sect. 5, and for p in the interior of (one of) the Q-sets these together yield the sharp estimates announced in Theorem 1.1. For p in between the Q-sets, the lower and upper bounds do not meet, but we shall see in Proposition 6.2 that the lower bounds indicate the distance from p to the corresponding Q-set. Example 9.3 shows that in this case both the upper bounds in Proposition 5.1 and the lower bounds (6.6) and (6.7) in Proposition 6.2 are optimal. See also Proposition 9.1, which further demonstrates the sharpness of these estimates.
Also note that for the lower bounds without logarithmic terms we do not need the restriction \(2r\le R\), since the capacity of thin annuli is minorized by the capacity of thick annuli. In the borderline cases, where \(\log (R/r)\) plays a role, the restriction \(2r\le R\) is still needed. As in Lemma 4.9, however, we require that \(R \le {{\mathrm{diam}}}X/2\tau \), where \(\tau >1\) is the constant from the reverse-doubling condition (2.2). See Remark 4.10 for comments on how the choice of the involved parameters influences the implicit constants in \(\simeq \), \(\lesssim \) and \(\gtrsim \).
Proposition 6.1
Assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x. Let \(0<r<R \le {{\mathrm{diam}}}X/2\tau \).
-
(a)
If \(p\in {{\mathrm{int}}}{\underline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\gtrsim \frac{\mu (B_r)}{r^{p}}. \end{aligned}$$(6.1) -
(b)
If \(p\in {{\mathrm{int}}}{\overline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\gtrsim \frac{\mu (B_{R})}{R^{p}}. \end{aligned}$$(6.2)
With this we can now prove Theorem 1.1, which also shows that the estimates in Proposition 6.1 are sharp.
Proof of Theorem 1.1
Combining Propositions 5.1 and 6.1 and appealing to Remark 4.10 yield (a) and (b). The last part follows from Proposition 9.1 below.\(\square \)
The comparison constants in (6.1) and (6.2) depend on p. In particular, the constants in our proof tend to zero as \(p\nearrow \sup {\underline{Q}}\) in (a) and as \(p\searrow \inf {\overline{Q}}\) in (b). This is quite natural, since already unweighted \(\mathbf {R}^n\) shows that these estimates do not always hold when \(p=\max {\underline{Q}}\) and \(p=\min {\overline{Q}}\), respectively. In fact, if X is Ahlfors \(p\)-regular, and thus \({\underline{Q}}={\underline{S}_0}={\underline{S}_\infty }=(0,p]\) and \({\overline{Q}}={\overline{S}_0}={\overline{S}_\infty }={[}p,\infty )\), Proposition 8.1 (c) shows that (6.1) and (6.2) fail. Moreover, Proposition 9.1 shows that the estimates in Proposition 6.1 can never hold for all r and R when p is outside of the Q-sets.
Proof of Proposition 6.1
Let u be admissible for \({{\mathrm{cap}}}_p(B_r,B_{R})\), and let \(B^k\) be a chain of balls, with radii \(r_k\), as in Lemma 4.9. From Lemma 4.9 we obtain, for any \(0<q <\infty \), that
In (a) we choose \(q>p\) such that \(q\in {\underline{Q}}\), and so we have for all \(1\le k \le k_0+1\) that
Since \(1-q/p<0\), the sum in the last line of (6.3) can thus be estimated as
giving
Taking infimum over all admissible u finishes the proof of part (a).
In (b) we instead choose \(q\in {\overline{Q}}\) such that \(q<p\), and so we have for all \(1\le k \le k_0+1\) that
Now \(1-q/p>0\), and thus the sum in the last line of (6.3) can be estimated as
giving
and the claim follows by taking infimum over all admissible u. \(\square \)
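The geometric-series step in the proof above is elementary: with \(r_k=2^kr\) and \(\theta =1-q/p\), the dyadic sum \(\sum _k r_k^{\theta }\) is dominated by its first term when \(\theta <0\) [case (a), \(q>p\)] and by its last term, of size \(\simeq R^{\theta }\), when \(\theta >0\) [case (b), \(q<p\)]. A quick numeric confirmation of both bounds:

```python
# Dyadic sums sum_{k=1}^{k_0+1} (2^k r)^theta with theta = 1 - q/p,
# as in the proof of Proposition 6.1.
def dyadic_sum(r, R, theta):
    s, k = 0.0, 1
    while 2 ** k * r <= 2 * R:   # k runs up to k_0 + 1
        s += (2 ** k * r) ** theta
        k += 1
    return s

r, R = 1e-6, 1.0

theta = 1 - 3.0 / 2.0            # case (a): q = 3 > p = 2, theta < 0
C = 2 ** theta / (1 - 2 ** theta)
assert dyadic_sum(r, R, theta) <= C * r ** theta   # first term dominates

theta = 1 - 1.5 / 2.0            # case (b): q = 1.5 < p = 2, theta > 0
C = 2 ** (2 * theta) / (2 ** theta - 1)
assert dyadic_sum(r, R, theta) <= C * R ** theta   # last term dominates

print("geometric-series bounds verified")
```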
A modification of the above proof gives the following result, which is interesting mainly in the case when p is in between the Q-sets, i.e. \(p\notin {\underline{Q}}\cup {\overline{Q}}\).
Proposition 6.2
Assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x. Let \(0<r<R \le {{\mathrm{diam}}}X/2\tau \).
-
(a)
If \(0< q < p\) and \(q\in {\underline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_R)\gtrsim \frac{\mu (B_r)}{r^{q}}R^{q-p} = \frac{\mu (B_r)}{r^{p}} \Bigl (\frac{r}{R}\Bigr )^{p-q}. \end{aligned}$$(6.6) -
(b)
If \(q > p\) and \(q\in {\overline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_R) \gtrsim \frac{\mu (B_R)}{R^{q}} r^{q-p} = \frac{\mu (B_R)}{R^{p}} \Bigl (\frac{r}{R}\Bigr )^{q-p}. \end{aligned}$$(6.7)
Proposition 9.1 and Example 9.3 show that this result is sharp, while unweighted \(\mathbf {R}^n\), with \(p=n\), shows that we cannot allow for \(q=p\) in general. Also note that if \(q\in {{\mathrm{int}}}{\underline{Q}}\) (resp. \(q\in {{\mathrm{int}}}{\overline{Q}}\)) and \(2r \le R\), then (6.6) [resp. (6.7)] can be written as \({{\mathrm{cap}}}_p(B_r,B_R) \gtrsim {{\mathrm{cap}}}_q(B_r,B_R) R^{q-p}\) (resp. \({{\mathrm{cap}}}_p(B_r,B_R) \gtrsim {{\mathrm{cap}}}_q(B_r,B_R) r^{q-p}\)).
Proof
Let u be admissible for \({{\mathrm{cap}}}_p(B_r,B_R)\), and let \(B^k\) be the corresponding balls, with radii \(r_k\), from Lemma 4.9. In (a) we proceed as in (6.3) and use (6.4) to obtain
since the exponent in the geometric series is \(1-q/p>0\). Taking infimum over all admissible u yields (6.6).
In (b) we instead use (6.3) and (6.5) and that the geometric series is \(\lesssim r^{1-q/p}\) in this case.\(\square \)
For the borderline cases \(p=\max {\underline{Q}}\) or \(p=\min {\overline{Q}}\), (6.6) or (6.7) can be used with q arbitrarily close to p, but the following proposition gives better estimates involving logarithmic terms. If X supports a \(p_0\)-Poincaré inequality at x for some \(1\le p_0<p\), then even better estimates in the borderline cases are obtained in Proposition 7.1. Nevertheless, the estimates in Proposition 6.3 are of particular interest when \(p=1\), since the 1-Poincaré inequality is the best possible.
Proposition 6.3
Assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x. Let \(0<2r\le R \le {{\mathrm{diam}}}X/2\tau \).
-
(a)
If \(p\in {\underline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_R)\gtrsim \frac{\mu (B_r)}{r^{p}} \biggl (\log \frac{R}{r}\biggr )^{-p}. \end{aligned}$$(6.8) -
(b)
If \(p\in {\overline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_R) \gtrsim \frac{\mu (B_R)}{R^{p}} \biggl (\log \frac{R}{r}\biggr )^{-p}. \end{aligned}$$(6.9)
In unweighted \(\mathbf {R}\) it is well known that \({{\mathrm{cap}}}_1(B_r,B_R)=2\) for all \(0<r<R\). In this case the right-hand sides in (6.8) and (6.9) both reduce to \(2 (\log (R/r))^{-1}\), showing that these estimates are not optimal in this particular case.
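The identity \({{\mathrm{cap}}}_1(B_r,B_R)=2\) in unweighted \(\mathbf {R}\) is easy to see: an admissible u must climb from 0 to 1 on each of the two components of \(B_R{\setminus }B_r\), so \(\int |u'|\,dx\ge 2\), with equality for the piecewise-linear profile. A numeric check that this energy is 2 for all radii:

```python
# Total variation (= int |u'| dx) of the piecewise-linear admissible
# function for cap_1(B_r, B_R) in unweighted R, for several radii.
def total_variation(r, R, steps=100000):
    # piecewise-linear u: 0 outside (-R, R), 1 on [-r, r]
    def u(x):
        return max(0.0, min(1.0, (R - abs(x)) / (R - r)))
    h = 2 * R / steps
    xs = [-R + k * h for k in range(steps + 1)]
    return sum(abs(u(xs[k + 1]) - u(xs[k])) for k in range(steps))

for (r, R) in [(0.1, 1.0), (0.5, 1.0), (1.0, 100.0)]:
    assert abs(total_variation(r, R) - 2.0) < 1e-6

print("cap_1(B_r, B_R) energy = 2 for all tested radii")
```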
Proof
Let u be admissible for \({{\mathrm{cap}}}_p(B_r,B_R)\), and let \(B^k\) be the corresponding balls, with radii \(r_k\), from Lemma 4.9. Then (6.3) with \(q=p\) and (6.4) yield
Since \(0<2r\le R\), we have \(k_0+1\simeq k_0\simeq \log (R/r)\), and taking infimum over all admissible u yields (6.8). The proof of (b) is analogous, using (6.5) instead of (6.4), and yields (6.9). \(\square \)
7 Capacity estimates for borderline exponents
When the borderline exponents are attained, Propositions 5.1 and 5.2 yield for \(p=\max {\underline{Q}}\),
while for \(p=\min {\overline{Q}}\),
In this section we provide corresponding lower bounds, even though the estimates do not exactly meet, as seen in Theorem 1.2. Nevertheless, Proposition 9.1 and Examples 9.3, 9.4 and 9.5 below show that all these estimates [including both possibilities for the upper bounds in (7.1) and in (7.2)] are in some sense optimal; see also Remark 9.6.
The following result holds for all \(p\in {\underline{Q}}\) (resp. \(p\in {\overline{Q}}\)), but because of Proposition 6.1 it is most useful in the limiting case \(p=\max {\underline{Q}}\) (resp. \(p=\min {\overline{Q}}\)). It improves upon Proposition 6.3 at the cost of requiring a better Poincaré inequality; see the discussion on different Poincaré inequalities after Definition 4.4.
Proposition 7.1
Assume that \(\mu \) is reverse-doubling at x and supports a \({p_0}\)-Poincaré inequality at x for some \(1\le {p_0}<p\). Let \(0<2r\le R \le {{\mathrm{diam}}}X/2\tau \).
-
(a)
If \(p\in {\underline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\gtrsim \frac{\mu (B_r)}{r^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(7.3) -
(b)
If \(p\in {\overline{Q}}\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\gtrsim \frac{\mu (B_{R})}{R^{p}}\biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(7.4)
Examples 9.4 and 9.5 show that these estimates are sharp, while Proposition 9.1 and Examples 9.3, 9.4 and 9.5 show that they do not hold for p outside of the Q-sets. In particular, these lower bounds do not in general hold for \(p = \sup {\underline{Q}}\notin {\underline{Q}}\) and \(p=\inf {\overline{Q}}\notin {\overline{Q}}\), respectively.
Proof
Let u be admissible for \({{\mathrm{cap}}}_p(B_r,B_R)\), and let \(B^k\) be the corresponding balls, with radii \(r_k\), from Lemma 4.9. Also let \(A_k = \lambda B^{k}{\setminus }\lambda B^{k-1}\).
Without loss of generality we may assume that \({p_0}>1\). Lemma 4.9 (with exponent \({p_0}\)) and Hölder’s inequality for sums (with \({p_0}\) and \({p_0}/({p_0}-1)\)) yield
Interchanging the order of summation, the double sum in (7.5) can be estimated by Hölder’s inequality for integrals [with exponents \(p/{{p_0}}\) and \(p/(p-{{p_0}})\)] as
Let us now take \(q\in {\underline{Q}}\). [In (a) we can use \(q=p\), but recall that also in (b) we have \({\underline{Q}}\ne \varnothing \) by the reverse-doubling.] Then
for \(1\le j \le k \le k_0+1\). Moreover, let \({\rho }=r\) if \(p\in {\underline{Q}}\) [case (a)] and \({\rho }=R\) if \(p\in {\overline{Q}}\) [case (b)]. Then we have for all \(1\le k \le k_0+1\) that
From (7.7) and (7.8) we obtain
since \(1-{p_0}/p>0\), and thus \(\sum _{k=j}^{k_0+1} (r_j/r_k)^{q(1-{p_0}/p)}\simeq 1\).
Insertion of (7.9) into (7.6) and a use of Hölder’s inequality for sums [with exponents \(p/{{p_0}}\) and \(p/(p-{{p_0}})\)] yield
Since \(0<2r\le R\), we have \(k_0+1\simeq k_0\simeq \log (R/r)\), and so we conclude from (7.5) and (7.10) that
The desired capacity estimates (7.3) and (7.4) now follow from (7.11) by taking infimum over all u admissible for \({{\mathrm{cap}}}_p(B_r,B_R)\) and recalling that \({\rho }=r\) in the case (a) and \({\rho }=R\) in the case (b).\(\square \)
Proof of Theorem 1.2
Combining Propositions 7.1 and 5.2 and appealing to Remark 4.10 yields (a) and (b). The last part follows from Proposition 9.1 below.\(\square \)
8 Capacity estimates involving S-sets
Let us first record the following upper bounds related to the S-sets. As before, these upper estimates do not require any Poincaré inequalities. Recall from Sect. 2 that the inequalities defining the \(S_\infty \)-sets are reversed from the ones in the \(S_0\)-sets, so that \({\underline{Q}_\infty }\subset {\underline{S}_\infty }\) and \({\overline{Q}_\infty }\subset {\overline{S}_\infty }\).
Proposition 8.1
Fix \(0<R_0<\infty \).
-
(a)
If \(0<q\in {\underline{S}_0}\), then for \(0<2r \le R \le R_0\),
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\lesssim \left\{ \begin{array}{l@{\quad }l} R^{q-p}, &{} \text {if } q<p,\\ r^{q-p}, &{} \text {if } q>p. \end{array}\right. \end{aligned}$$(8.1) -
(b)
If \(0<q \in {\overline{S}_\infty }\), then (8.1) holds for \(R_0 \le r \le R/2<\infty \).
-
(c)
If \(p \in {\underline{S}_0}\), then for \(0<2r \le R \le R_0\),
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\lesssim \biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(8.2) -
(d)
If \(p \in {\overline{S}_\infty }\), then (8.2) holds for \(R_0 \le r \le R/2<\infty \).
In unweighted \(\mathbf {R}^n\), the capacity \({{\mathrm{cap}}}_p(B_r,B_R)\) is comparable to the right-hand sides in the respective cases (with \(q=n\)), which shows that these estimates are sharp. See also the end of Example 9.3, where the sharpness of part (a) is shown in a case where \(q\in {\underline{S}_0}{\setminus }{\underline{Q}_0}\).
Proof
The proofs of (a) and (b) follow immediately from Proposition 5.1 and the definitions of the S-sets. To see that (c) and (d) hold, one can proceed as in the proof of Proposition 5.2 up to deducing (5.3). Then one uses the estimates \(\mu (B^k)\lesssim r_k^p\) and \(k_0+1 \lesssim \log (R/r)\) to obtain (8.2).\(\square \)
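The comparison with unweighted \(\mathbf {R}^n\) above can be checked numerically using the closed-form annulus capacity from Example 2.12 in Heinonen et al. [24]; a sketch (function names are ours):

```python
import math

def sphere_area(n):
    """Surface measure omega_{n-1} of the unit sphere in R^n."""
    return 2.0 * math.pi ** (n / 2.0) / math.gamma(n / 2.0)

def cap_unweighted(n, p, r, R):
    """Exact cap_p(B_r, B_R) in unweighted R^n (Example 2.12 in [24]):
    omega_{n-1} (|n-p|/(p-1))^{p-1} |R^a - r^a|^{1-p} with a = (p-n)/(p-1)
    for p != n, and omega_{n-1} (log(R/r))^{1-n} for p = n."""
    if p == n:
        return sphere_area(n) * math.log(R / r) ** (1 - n)
    a = (p - n) / (p - 1.0)
    return (sphere_area(n) * (abs(n - p) / (p - 1.0)) ** (p - 1)
            * abs(R ** a - r ** a) ** (1 - p))

n = 3
for r in [1e-2, 1e-4, 1e-6]:
    # p = 2 < n: the capacity is comparable to r^{n-p}, uniformly in r.
    assert 1.0 <= cap_unweighted(n, 2.0, r, 1.0) / (sphere_area(n) * r) <= 1.02
    # p = 4 > n: the capacity is comparable to R^{n-p} = 1, uniformly in r.
    assert 0.1 <= cap_unweighted(n, 4.0, r, 1.0) <= 10.0
```

This matches (8.1) with \(q=n\): for \(p<q\) the dependence is on r, and for \(p>q\) on R.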
The estimate (c) was already given by Heinonen [22], Lemma 7.18. (The statement therein is slightly different, but the proof applies verbatim to yield our estimate.) It follows immediately from (c) that for \(1<p\in {\underline{S}_0}\) the point x has zero capacity, but in fact the same is true even in the (possibly) larger set \([1,\infty ){\setminus }{\overline{S}_0}\), as the following proposition shows. Similarly, it follows from (d) that if \(p \in {\overline{S}_\infty }\), then for a fixed \(r>0\) we have \({{\mathrm{cap}}}_p(B_r,B_R)\rightarrow 0\) as \(R\rightarrow \infty \), but again we obtain a better result in Proposition 8.6. Recall that
by Lemma 2.4 (and its \(\infty \)-version).
Proposition 8.2
If \(1\le p\notin {\overline{S}_0}\) or \(1 < p \in {\underline{S}_0}\), then \({C_p}(\{x\})=0={{\mathrm{cap}}}_p(\{x\},B)\) for any ball \(B\ni x\).
Conversely, assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x, and that there is a locally compact neighbourhood \(G \ni x\). If \(p\in {{\mathrm{int}}}{\overline{S}_0}\), then \({C_p}(\{x\})>0\) and \({{\mathrm{cap}}}_p(\{x\},B)>0\) for any ball \(B\ni x\) with \({C_p}(X {\setminus } B)>0\).
The first part of Proposition 8.2 improves and clarifies the result of Corollary 3.4 in Garofalo and Marola [19]. Note that this part is valid without requiring any Poincaré inequality. Unweighted \(\mathbf {R}\) shows that the inequality in \(1 < p \in {\underline{S}_0}\) is necessary. The second part, on the other hand, is a consequence of Proposition 4.6 and the lower bound in Proposition 8.3 below.
In the remaining case when \(p = \min {\overline{S}_0}\) and \(p \notin {\underline{S}_0}\), the S-sets are not enough to determine if the capacities of \(\{x\}\) are zero or not, as we demonstrate at the end of Example 9.4.
Proposition 8.3
Assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x, and fix \(0<R_0<\infty \). Furthermore, assume that \(0<q\in {\overline{S}_0}\) and \(0<r<R\le R_0\), or that \(q \in {\underline{S}_\infty }\) and \(R_0 \le r<R<\infty \). Then
Also here unweighted \(\mathbf {R}^n\) shows that the first two estimates are sharp, and at the end of Example 9.3 their sharpness is shown in a case where \(q\in {\overline{S}_0}{\setminus }{\overline{Q}_0}\). Proposition 9.2 provides a converse of Proposition 8.3.
Proof
Let u be admissible for \({{\mathrm{cap}}}_p(B_r,B_R)\), and let \(B^k\) be the corresponding balls, with radii \(r_k\), from Lemma 4.9. Since \(\mu (\lambda B^k) \ge \mu (B^k)\gtrsim r_k^q\) for all \(k\in \mathbf {N}\), we have by Lemma that
where
The claim then follows by taking infimum over all admissible u.\(\square \)
Proof of Proposition 8.2
We may assume that \(B=B_{R}\). If \(p\notin {\overline{S}_0}\), then there exist \(r_n\rightarrow 0\) such that \(\mu (B(x,r_n))<r_n^p/n\). For \(r_n\le R\), let \(u_n(y)=(1-d(x,y)/r_n)_{+}\). Then \(u_n(x)=1\), \(u_n=0\) outside \(B(x,r_n)\), and \(g_{u_n}\le 1/r_n\). Thus
and
In the case \(1 < p \in {\underline{S}_0}\) the claim \({{\mathrm{cap}}}_p(\{x\},B)=0\) follows easily from Proposition 8.1 (c). To show that also \({C_p}(\{x\})=0\), we let \(\varepsilon >0\). Since \(p \in {\underline{S}_0}\) we can find \(r>0\) such that \(\mu (B(x,r))< \varepsilon \). As \({{\mathrm{cap}}}_p(\{x\},B(x,r))=0\) [by Proposition 8.1 (c) again], we can also find \(u \in N^{1,p}_0(B(x,r))\) such that \(u(x)=1\), \(0 \le u \le 1\) and \(\int _X g_u^p \, d\mu < \varepsilon \). It follows that
Conversely, assume that \(p>q\in {\overline{S}_0}(x)\). By Proposition 8.3 we have for all \(0<r<R=:R_0\) that
with comparison constant independent of r. If \({{\mathrm{cap}}}_p(\{x\},B)\) were 0, then we would have \({C_p}(\{x\})=0\), by Proposition 4.6, which in turn, by Proposition 4.7, would contradict (8.4). Hence \({{\mathrm{cap}}}_p(\{x\},B)>0\) and \({C_p}(\{x\})>0\).\(\square \)
Remark 8.4
It follows directly from Proposition 8.3 that if we a priori know that the capacity is outer or that the capacity of singletons can be tested by only continuous functions, then actually \({{\mathrm{cap}}}_p(\{x\},B_R) \gtrsim R^{q-p}\) whenever \(p>q\in {\overline{S}_0}\). Both of the above assumptions hold e.g. if X is complete and \(\mu \) is doubling and supports a \(p\)-Poincaré inequality, by Theorem 6.19 in [5] or Kallunki and Shanmugalingam [31]; see also Theorem 4.1 in Björn and Björn [7].
Let us also record the following logarithmic lower bound, which improves the third lower bound in Proposition 8.3 and is interesting in the borderline cases \(p=\min {\overline{S}_0}\) and \(p=\max {\underline{S}_\infty }\).
Proposition 8.5
Let \(1<p<\infty \) and assume that \(\mu \) is reverse-doubling at x and supports a \({p_0}\)-Poincaré inequality at x for some \(1\le {p_0}<p\). Fix \(0<R_0<\infty \).
-
(a)
If \(p \in {\overline{S}_0}\) and \(0<2r \le R \le R_0\), then
$$\begin{aligned} {{\mathrm{cap}}}_p(B_r,B_{R})\gtrsim \biggl (\log \frac{R}{r}\biggr )^{1-p}. \end{aligned}$$(8.5) -
(b)
If \(p \in {\underline{S}_\infty }\) and \(R_0 \le r \le R/2<\infty \), then (8.5) holds.
Proof
We proceed as in the proof of Proposition 7.1, but instead of (7.8) we now have the simple estimate \(r_k^p/\mu (B^k)\lesssim 1\) for all \(1\le k \le k_0+1\) [both in (a) and (b)]. Thus the left-hand side of (7.9) is bounded by a constant. Inserting this into (7.6) and then (7.5), together with a use of Hölder’s inequality for sums as in (7.10), yields
since \(k_0+1 \simeq \log (R/r)\). Taking infimum over all admissible u yields (a) and (b). \(\square \)
In unbounded spaces we have the following counterpart to Proposition 8.2. Recall that the sets \({\underline{S}_\infty }\) and \({\overline{S}_\infty }\) are independent of the reference point \(x\in X\), by Lemma 2.8.
Proposition 8.6
Assume that X is unbounded. If \(1 \le p\notin {\underline{S}_\infty }\) or \(1 <p \in {\overline{S}_\infty }\), then \({{\mathrm{cap}}}_p(B_r,X)=0\) for all \(r>0\), and thus \({{\mathrm{cap}}}_p(E,X)=0\) for all bounded sets E.
Conversely, assume that \(\mu \) is reverse-doubling at x and supports a \(p\)-Poincaré inequality at x. If \(p\in {{\mathrm{int}}}{\underline{S}_\infty }\), then
Unweighted \(\mathbf {R}\) again shows that the inequality in \(1 < p \in {\overline{S}_\infty }\) is necessary. In the remaining case when \(p = \max {\underline{S}_\infty }\) and \(p \notin {\overline{S}_\infty }\), the S-sets are not enough to determine if the capacities are zero or not, see the end of Example 9.5.
Proof
If \(p\notin {\underline{S}_\infty }\), then there exist \(R_n\rightarrow \infty \) such that \(\mu (B_{R_n})< R_n^p/n\). By Proposition 5.1 we have
If \(1 <p \in {\overline{S}_\infty }\) we instead use Proposition 8.1 (d) to conclude that \({{\mathrm{cap}}}_p(B_r,X)=0\).
Conversely, if \(p<q\in {\underline{S}_\infty }\), then let \({R_0}:=r<R\). From Proposition 8.3 we obtain that
and the claim follows from Lemma 4.8. \(\square \)
Remark 8.7
Recall that an unbounded proper space X is said to be \(p\)-parabolic if \({{\mathrm{cap}}}_p(K,X)=0\) for all compact sets \(K\subset X\), and otherwise X is \(p\)-hyperbolic. From Proposition 8.6 it thus follows that the space X is \(p\)-parabolic if \(1\le p\notin {\underline{S}_\infty }\) (or \(1 <p \in {\overline{S}_\infty }\)), and X is \(p\)-hyperbolic if \(p\in {{\mathrm{int}}}{\underline{S}_\infty }\). See e.g. Holopainen [28], Holopainen and Koskela [29] and Holopainen and Shanmugalingam [30] for more information on parabolic and hyperbolic Riemannian manifolds and metric spaces.
9 Sharpness of the estimates
The following result shows that the lower bounds in Sects. 6 and 7 are not only sharp, but also essentially equivalent to p (or q) belonging to the corresponding Q-sets.
Proposition 9.1
If (6.1), (6.2), (6.8), (6.9), (7.3) or (7.4) holds for all \(0<2r\le R\), then \(p\in {\underline{Q}}\), \(p\in {\overline{Q}}\), \(p\le \sup {\underline{Q}}\), \(p\ge \inf {\overline{Q}}\), \(p\le \sup {\underline{Q}}\) or \(p\ge \inf {\overline{Q}}\), respectively.
Similarly, if (6.6) or (6.7) holds for all \(0<2r\le R\), then \(q\in {\underline{Q}}\) or \(q\in {\overline{Q}}\), respectively.
Proof
We need to estimate \(\mu (B_r)/\mu (B_R)\) in terms of r / R for all \(0<r<R\). It is enough to do this for \(0<2r\le R\), since \(R/2<r<R\) can be treated by the doubling property of \(\mu \) at x. If (6.1) or (6.2) holds, then Proposition 5.1 yields
which is equivalent to \(p\in {\underline{Q}}\) or \(p\in {\overline{Q}}\), respectively.
Next, if (6.6) holds for some \(q>0\), then using Proposition 5.1 we see that
which after division by \(R^{q-p}\) shows that \(q \in {\underline{Q}}\). Similarly, if (6.7) holds for some \(q>0\), then \(q \in {\overline{Q}}\).
Finally, if (6.8) holds, and in particular if (7.3) holds, then Proposition 5.1 yields for all \(\varepsilon >0\) that
where the last implicit constant depends on \(\varepsilon \). Thus \(p-\varepsilon \in {\underline{Q}}\) for every \(\varepsilon >0\), showing that \(p\le \sup {\underline{Q}}\). The implications (7.4) \(\Rightarrow \) (6.9) \(\Rightarrow \) \(p\ge \inf {\overline{Q}}\) are proved similarly.\(\square \)
We have a corresponding result for the S-sets as well.
Proposition 9.2
If for some \(q>0\) and all \(0<2r\le R \le R_0\),
then \(q\in {\overline{S}_0}\). Similarly, if
for all \(0<2r\le R \le R_0\), then \(p\ge \inf {\overline{S}_0}\).
If instead (9.1) or (9.2) holds for all \(R_0\le r\le R/2<\infty \), then \(q\in {\underline{S}_\infty }\) or \(p\le \sup {\underline{S}_\infty }\), respectively.
Proof
We prove only the case \(0<2r\le R \le R_0\), the other case being similar.
If (9.1) holds and \(q\le p\), then Proposition 5.1 implies that \(R^{q-p} \lesssim {{\mathrm{cap}}}_p(B_r,B_R) \lesssim R^{-p} \mu (B_R)\) for all \(R\le R_0\), showing that \(q\in {\overline{S}_0}\). If instead \(q\ge p\) and (9.1) holds, then we get \(r^{q-p} \lesssim {{\mathrm{cap}}}_p(B_r,B_R) \lesssim r^{-p} \mu (B_r)\) for all \(r\le R_0/2\), and the same conclusion follows.
If (9.2) holds, then Proposition 5.1 and taking \(R=R_0\) show that \(\log ^{-p} (R_0/r) \lesssim {{\mathrm{cap}}}_p(B_r,B_R) \lesssim r^{-p} \mu (B_r)\). Since \(\log (R_0/r) \lesssim r^{-\varepsilon }\) for every \(\varepsilon >0\), this yields \(\mu (B_r)\gtrsim r^{p(1+\varepsilon )}\), and hence \(p(1+\varepsilon )\in {\overline{S}_0}\). Letting \(\varepsilon \rightarrow 0\) gives \(p\ge \inf {\overline{S}_0}\).\(\square \)
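The elementary bound \(\log (R_0/r)\lesssim r^{-\varepsilon }\) used in the last step can be checked by one-variable calculus: \(\sup _{0<r\le R_0} r^{\varepsilon }\log (R_0/r)=R_0^{\varepsilon }/e\varepsilon \), attained at \(r=R_0e^{-1/\varepsilon }\). A small numerical sketch (with \(R_0=1\); the code is ours):

```python
import math

def sampled_sup(eps, samples=100000):
    """Sampled supremum of r^eps * log(1/r) over 0 < r <= 1; calculus gives
    the exact value 1/(e*eps), attained at r = exp(-1/eps)."""
    return max((k / samples) ** eps * math.log(samples / k)
               for k in range(1, samples + 1))

# The sampled values never exceed the calculus bound, confirming that
# log(1/r) <= C_eps * r^{-eps} with C_eps = 1/(e*eps).
for eps in [0.5, 0.25, 0.1]:
    assert sampled_sup(eps) <= 1.0 / (math.e * eps) + 1e-9
```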
In the rest of this section we continue our study of the examples from Sect. 3, using a general formula for the capacity on weighted \(\mathbf {R}^n\) with radial weights. The proof of this formula is postponed until Sect. 10, see Proposition 10.8.
Example 9.3
We continue with Example 3.2. First, for \(p >2\) and \(2\alpha _{k+1}\le 2r\le R\le \beta _k\), we estimate using Proposition 10.8 with \(f^{\prime }(\rho )\simeq w(\rho )\rho \) and (3.6) that
showing that the second upper bound in Proposition 5.1 cannot be improved. With \(r=\alpha _{k+1}\) and \(R=\beta _k\) it also follows that
since \(p>2\). This illustrates the fact (known from Proposition 9.1) that the lower estimates (6.1), (6.8) and (7.3) do not hold for \(p >2\), i.e. for \(p \notin {\underline{Q}}\). In addition, the equivalence in (9.4) shows that the lower bound in (6.6) is sharp (with \(q=2\in {\underline{Q}}\)). Since
we also conclude from (9.3) (with \(r=\alpha _{k+1}\) and \(R=\beta _k\)) that the estimate (5.1) does not hold for \(p >2\), i.e. for \(p \notin {\underline{Q}}\).
If \(1<p <4\) and \(2\beta _k\le 2r\le R \le \alpha _{k}\), then by Proposition 10.8 with \(f^{\prime }(\rho )\simeq w(\rho )\rho \) and (3.8),
showing that the first upper bound in Proposition 5.1 cannot be improved. In particular, (9.3) and (9.5) show that each of the upper bounds in Proposition 5.1 can give a sharp estimate for certain radii even when \(p\notin {\underline{Q}}\cup {\overline{Q}}\).
With \(r=\beta _{k}\) and \(R=\alpha _k\) it follows from (9.5) that
since \(p<4\). Thus we here have a concrete case where the lower estimates (6.2), (6.9) and (7.4) do not hold for \(p <4\), i.e. for \(p \notin {\overline{Q}}\), and we also see that (6.7) is sharp as well (with \(q=4\in {\overline{Q}}\)). Moreover, as
we conclude from (9.5) (with \(r=\beta _{k}\) and \(R=\alpha _k\)) that the estimate (5.2) does not hold for \(1<p<4\), i.e. for \(p \notin {\overline{Q}}\).
From (9.5) and (9.3) with \(p=2\) and \(p=4\), respectively, we see that
which shows that the lower bounds in (7.3) and (7.4) are not always comparable to \({{\mathrm{cap}}}_p(B_r,B_R)\) when \(p=\max {\underline{Q}}\) or \(p=\min {\overline{Q}}\), and that the estimates provided by Proposition 5.1 are in this case optimal (and better than those in Proposition 5.2).
Finally, choosing \(R=\beta _k\) and \(p>q=\tfrac{10}{3}=\min {\overline{S}_0}\) in (9.3) [or \(r=\beta _k\) and \(1<p<q=\tfrac{10}{3}=\min {\overline{S}_0}\) in (9.5)] shows, together with (3.4), that the first two lower bounds in Proposition 8.3 are sharp. Similarly for \(p<q=3=\max {\underline{S}_0}\), we see from (3.5) and (9.5) with \(r=\tfrac{1}{2}\alpha _k\) and \(R=\alpha _k\) that the upper bounds in Proposition 8.1 (a) are sharp.
Example 9.4
This is a continuation of Example 3.1 in \(\mathbf {R}^n\), \(n\ge 2\), with the weight
where \(\beta \in \mathbf {R}\) is arbitrary and we this time require \(p>1\). Recall that for \(0<r<1/e\) and \(x=0\) we have \(\mu (B_r)\simeq r^p \log ^\beta (1/r)\). Proposition 10.8 with \(f^{\prime }(\rho )\simeq w(\rho )\rho ^{n-1}\) gives, for \(0<r<R<1/e\), that
if \(\sigma =1+\beta /(1-p)\ne 0\), and
if \(\beta =p-1\), i.e. if \(\sigma =0\).
The estimate (9.6) can be further simplified. For that we recall the simple Lemma 3.1 from Björn et al. [8] which says that for all \(\sigma >0\) and all \(t\in [0,1]\),
Thus, if \(\sigma >0\) in (9.6), we have
Since \(\sigma -1=\beta /(1-p)\), this together with (9.6) gives
On the other hand, if \(\sigma <0\) in (9.6) then replacing \(\sigma \) by \(\theta =-\sigma >0\) in (9.7) yields
Since \(\sigma (1-p)=\beta (1-(p-1)/\beta )\), we obtain from (9.6) that
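The role of \(\sigma =1+\beta /(1-p)\) comes from an explicit antiderivative: assuming, as indicated above, that \(f^{\prime }(\rho )\simeq \rho ^{p-1}\log ^{\beta }(1/\rho )\) near 0 (consistent with \(\mu (B_r)\simeq r^p \log ^\beta (1/r)\)), the substitution \(t=\log (1/\rho )\) gives \(\int _r^R f^{\prime }(\rho )^{1/(1-p)}\,d\rho \simeq \sigma ^{-1}\bigl (\log ^{\sigma }(1/r)-\log ^{\sigma }(1/R)\bigr )\) when \(\sigma \ne 0\). A numerical sketch of this one-variable calculus (the code and its parameter choices are ours):

```python
import math

def sigma(beta, p):
    """sigma = 1 + beta/(1-p), as in the text."""
    return 1.0 + beta / (1.0 - p)

def numeric_integral(r, R, beta, p, steps=100000):
    """Midpoint rule for int_r^R rho^{-1} (log(1/rho))^{beta/(1-p)} d rho,
    computed in the variable t = log(1/rho), where the integrand is t^{beta/(1-p)}."""
    a, b = math.log(1.0 / R), math.log(1.0 / r)
    e, h = beta / (1.0 - p), (b - a) / steps
    return h * sum((a + (k + 0.5) * h) ** e for k in range(steps))

def closed_form(r, R, beta, p):
    """Antiderivative: sigma^{-1} (log^sigma(1/r) - log^sigma(1/R)), sigma != 0."""
    s = sigma(beta, p)
    return (math.log(1.0 / r) ** s - math.log(1.0 / R) ** s) / s

# Example with sigma = 1 + 0.5/(1-3) = 0.75 > 0; raising this quantity to
# the power 1-p gives the shape of the right-hand side of (9.6).
r, R, beta, p = 1e-6, 0.2, 0.5, 3.0
assert abs(numeric_integral(r, R, beta, p) - closed_form(r, R, beta, p)) < 1e-6
```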
We now distinguish three cases.
(a) If \(\beta <0\), then \(p=\max {\underline{Q}}\) and \(\sigma >0\). Thus (9.8) yields
This shows that the lower estimate in Proposition 7.1 (a) is sharp and that (7.4) fails in this case, despite the fact that \(p = \inf {\overline{Q}}\).
(b) If \(0<\beta <p-1\), then \(p=\min {\overline{Q}}\) and \(\sigma >0\). From (9.8) we conclude that
this time showing that the upper estimate in Proposition 5.2 (b) is sharp.
(c) If \(\beta >p-1\), then \(p=\min {\overline{Q}}\) and \(\sigma <0\). From (9.9) we see that
Note that both exponents \(1-(p-1)/\beta \) and \((p-1)/\beta \) are positive and their sum is 1. Letting \(\beta \rightarrow \infty \) and \(\beta \rightarrow p-1\), respectively, shows that in general for \(p=\min {\overline{Q}}\) the estimate
is the best we can hope for, since the definitions of \({\underline{Q}}\) and \({\overline{Q}}\) cannot capture the size of \(\beta \) in \(\mu (B_\rho )\simeq \rho ^p\log ^\beta (1/\rho )\), only its sign. Thus also the lower estimate in Proposition 7.1 (b) is optimal.
In addition, if R is fixed and \(r <R\), then by (9.9),
When \(r \ll R\), this is substantially smaller (since \(\beta >p-1\)) than the lower bound
claimed in [19, Theorem 3.2] for the case \(p=q(x)=\sup {\underline{Q}}\). Thus the latter estimate cannot be valid if \(p=\sup {\underline{Q}}=\min {\overline{Q}}\notin {\underline{Q}}\). Similarly, for \({\tilde{p}}>p=\sup {\underline{Q}}=\min {\overline{Q}}\notin {\underline{Q}}\) we have by Proposition 5.1 that
For \(\beta >0\) and \(r\ll R\), this is again substantially smaller than
showing that the lower bound claimed in [19, Theorem 3.2] for the case \({\tilde{p}}>q(x)\) cannot be valid in general. Nevertheless, let us point out that if \(q(x)=\max {\underline{Q}}\), then the estimates given in [19, Theorem 3.2] for the cases \({\tilde{p}}=q({x})\) and \({\tilde{p}}>q({x})\) are (essentially) the same as our Propositions 7.1 (a) and 6.2 (a), respectively.
We now turn to the S-sets. If \(\beta >0\), then \({\underline{S}_0}={\underline{Q}}=(0,p)\) and \({\overline{S}_0}={\overline{Q}}=[p,\infty )\). Thus, Proposition 8.2 is of no use, and indeed we can show that both \({C_p}(\{0\})=0\) and \({C_p}(\{0\})>0\) are possible in this case:
If \(\sigma <0\), i.e. if \(\beta >p-1\), then \( \lim _{r\rightarrow 0} {{\mathrm{cap}}}_p(B_r,B_R) >0,\) by (9.6). In the same way as at the end of the proof of Proposition 8.2 it follows that \({C_p}(\{0\})>0\) and \({{\mathrm{cap}}}_p(\{0\},B)>0\) for every ball \(B \ni 0\).
If instead \(\sigma >0\), i.e. if \(0< \beta < p-1\), then \( \lim _{r\rightarrow 0} {{\mathrm{cap}}}_p(B_r,B_R) =0, \) by (9.6), from which it directly follows that \({{\mathrm{cap}}}_p(\{0\},B)=0\) for every ball \(B \ni 0\). Using that \({C_p}(\{0\}) \le {{\mathrm{cap}}}_p(\{0\},B) + \mu (B)\) shows that also \({C_p}(\{0\})=0\).
Example 9.5
Let
in \(\mathbf {R}^n\), \(n \ge 2\), where \(p>1\) and \(\beta \in \mathbf {R}\) is arbitrary, as in the second part of Example 3.1. This example is similar to the previous example, but the roles of r and R are in a sense reversed and thus we obtain different estimates.
As in Example 9.4, we have \(\sup {\underline{Q}}=\inf {\overline{Q}}={p}\), but if \(\beta >0\) it is now \(\sup {\underline{Q}}\) that is attained, while for \(\beta <0\) we have that \(\inf {\overline{Q}}\) is attained. Since
we have by Proposition 10.8 for \(e<r<R\) the estimate
if \(\sigma =1+\beta /(1-p)\ne 0\).
The simplification of (9.11) can be carried out analogously to the previous example, and we obtain for \(\sigma >0\) that
This yields the following conclusions in the cases corresponding to (a) and (b) of Example 9.4:
(a) If \(\beta <0\), then \(p=\min {\overline{Q}}\) and \(\sigma >0\). Thus (9.12) shows the sharpness of the lower estimate in Proposition 7.1 (b). It also shows that (7.3) fails in this case, despite the fact that \(p = \sup {\underline{Q}}\).
(b) If \(0<\beta <p-1\), then \(p=\max {\underline{Q}}\) and \(\sigma >0\), and from (9.12) we can conclude that also the upper estimate in Proposition 5.2 (a) is sharp.
We also mention that the case \(\sigma <0\) can be studied just as in Example 9.4 (c), this time showing the sharpness of the lower bound in Proposition 7.1 (a), although this was already known from the case (a) of Example 9.4; see however Remark 9.6 below.
Finally, if \(\beta >0\), then \({\underline{S}_\infty }={\underline{Q}}=(0,p]\) and \({\overline{S}_\infty }={\overline{Q}}=(p,\infty )\), and thus Proposition 8.6 is of no use. Considering the two cases \(\sigma >0\) and \(\sigma <0\) shows that indeed both possibilities \({{\mathrm{cap}}}_p(B_r,X)=0\) and \({{\mathrm{cap}}}_p(B_r,X)>0\) can happen in this case, cf. the end of Example 9.4.
Remark 9.6
In Example 9.4 we have \({\underline{Q}}={\underline{Q}_0}\) and \({\overline{Q}}={\overline{Q}_0}\), and thus the conclusions of this example also show the sharpness of the respective restricted capacity estimates, that is, the analogues of Proposition 5.2 (b) and Proposition 7.1 (a) and (b) for \({\underline{Q}_0}\) and \({\overline{Q}_0}\) and for radii \(0<2r\le R \le R_0\). In particular, Theorem 1.2, with the exception of the upper bound in (1.3), is shown to be sharp.
Similarly, in Example 9.5 we have \({\underline{Q}}={\underline{Q}_\infty }\) and \({\overline{Q}}={\overline{Q}_\infty }\), and so we obtain the sharpness of the analogues of Proposition 5.2 (a) and Proposition 7.1 (a) and (b) for \({\underline{Q}_\infty }\) and \({\overline{Q}_\infty }\) and for radii \(R_0\le r\le R/2\).
Nevertheless, these examples still leave open the sharpness of one of the upper bounds in each of the restricted versions of Proposition 5.2: We do not know if the upper estimate (5.1) is sharp for \(p\in {\underline{Q}_0}\) and \(0<2r\le R\le R_0\), or if (5.2) is sharp for \(p\in {\overline{Q}_\infty }\) and \(R_0\le r\le R/2\).
10 Radial weights and stretchings in \(\mathbf {R}^n\)
In this section we consider radial weights in \(\mathbf {R}^n\), \(n\ge 2\), and give a sufficient condition for when they are admissible, and in particular satisfy the global doubling condition and a global Poincaré inequality, thus providing a basis for our examples in Sect. 9. This will be achieved by comparing such weights with suitable powers of Jacobians of quasiconformal mappings on \(\mathbf {R}^n\). In particular, in Theorem 10.2 we characterize those radial stretchings in \(\mathbf {R}^n\) which are quasiconformal. The same condition was considered in \(\mathbf {R}^2\) by Astala et al. [4, Section 2.6] and for continuously differentiable mappings in \(\mathbf {R}^n\) by Manojlović [36, Example 2.9], while for power-like radial stretchings the corresponding result is well known, see e.g. Example 16.2 in Väisälä [42]. Both in [4] and [36], the result is obtained by differentiation and uses the analytic definition of quasiconformal mappings, based on the Jacobian determinant. Our assumptions are weaker and the method is different and based on more direct estimates of the linear dilation, rather than on the differentiable structure of \(\mathbf {R}^n\). We use the following metric definition of quasiconformal mappings, provided by e.g. Theorem 34.1 in [42], and applicable also in metric spaces.
Definition 10.1
A homeomorphism \(F:\mathbf {R}^n\rightarrow \mathbf {R}^n\), \(n \ge 2\), is a quasiconformal mapping if its linear dilation
is bounded. Here
We shall consider radial stretchings \(F:\mathbf {R}^n\rightarrow \mathbf {R}^n\) given by
where \(h(\rho )=k(\rho )/\rho \), and k is a locally absolutely continuous homeomorphism of \([0,\infty )\) satisfying \(k(0)=0\) and
$$\begin{aligned} m \le \frac{\rho k^{\prime }(\rho )}{k(\rho )} \le M \end{aligned}$$(10.2)
for a.e. \(\rho \in [0,\infty )\) and some \(0<m\le M<\infty \). It is easily verified that the inverse mapping of F is given by
where the inverse \(k^{-1}\) is (under our assumptions) also locally absolutely continuous, and by (10.2) we have for a.e. \(\rho \in [0,\infty )\) that
$$\begin{aligned} (k^{-1})^{\prime }(\rho ) \simeq \frac{k^{-1}(\rho )}{\rho }, \end{aligned}$$(10.3)
where the implicit constants in \(\simeq \) are 1/M and 1/m.
We are going to obtain the following characterization.
Theorem 10.2
Assume that the mapping \(F:\mathbf {R}^n\rightarrow \mathbf {R}^n\), \(n \ge 2\), is defined as in (10.1). Then F is quasiconformal if and only if (10.2) holds for a.e. \(\rho \in [0,\infty )\) and some \(0<m \le M < \infty \).
The following lemma gives a basis for the sufficiency part of the theorem.
Lemma 10.3
If \(F:\mathbf {R}^n\rightarrow \mathbf {R}^n\) is as in (10.1) and satisfies (10.2), then for all \(x,y\in \mathbf {R}^n\), with \(|x|\le |y|\) and \(x \ne y\), we have
Proof
For \(x=0\) this is easily checked using the definition of F, so assume for the rest of the proof that \(x \ne 0\). The triangle inequality yields
Note that h is also locally absolutely continuous and the assumption (10.2) gives for a.e. \(|x|\le \xi \le |y|\) that
Hence
Inserting this into (10.5) proves the second inequality in (10.4).
To prove the first inequality we use the inverse mapping \(F^{-1}(z)=k^{-1}(|z|)z/|z|\). By (10.3), it satisfies (10.2) with m and M replaced by 1 / M and 1 / m. The first part of the proof applied to \(F^{-1}\) with \(z=F(x)\) and \(w=F(y)\) then yields
Since \(k^{-1}(\zeta )/\zeta = \xi /k(\xi ) = 1/h(\xi )\) with \(\xi =k^{-1}(\zeta )\), the first inequality in (10.4) follows. \(\square \)
Proof of Theorem 10.2
First assume that (10.2) holds. If \(x=0\), then \(L(x,r)=l(x,r)\) by the definition of F, and so \(H_F(0)=1\). If on the other hand \(x \ne 0\), then by Lemma 10.3 and the definition of F we have, for \(0<r<|x|\),
Inserting this into the definition of \(H_F(x)\) and letting \(r\rightarrow 0\) shows that F is quasiconformal.
Conversely, assume that F is quasiconformal. Since the linear dilation \(H_F(x)\) is bounded, Theorem 32.1 in Väisälä [42] shows that F is differentiable a.e. It follows that \(k^{\prime }\) exists a.e. in \((0,\infty )\). To prove (10.2), choose \(K>0\) such that \(H_F < K\) in \(\mathbf {R}^n\). Fix \(x\in \mathbf {R}^n\) with \(|x|=1\) and let \(\rho >0\) be arbitrary but such that \(k^{\prime }(\rho )\) exists. Then there exists \(0< r_0 < \rho \) such that \(L(\rho x,r) \le K l(\rho x,r) \) whenever \(0<r\le r_0\). For each such r find \(y\in \mathbf {R}^n\) such that \(|y|=1\) and \(|x-y|=r/\rho \). Then \(|\rho x -\rho y|=r\) and
On the other hand,
and the quotient \((k(\rho )-k(\rho -r))/r\) can be treated similarly. Letting \(r\rightarrow 0\) shows that \(k^{\prime }(\rho )\le K k(\rho )/\rho \). Applying the same argument to the quasiconformal mapping \(F^{-1}\) yields, with \(\zeta =k(\rho )\),
i.e. \(k^{\prime }(\rho )\ge k(\rho )/K\rho \).\(\square \)
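Theorem 10.2 can be illustrated numerically for the power stretching \(k(\rho )=\rho ^a\) with \(a>0\), which satisfies (10.2) with \(m=M=a\): sampling the image of small circles shows that \(L(x,r)/l(x,r)\) remains bounded, and for \(|x|=1\) it tends to \(\max (a,1)/\min (a,1)\) as \(r\rightarrow 0\). A sketch in the plane (the parameters are our choice):

```python
import math

a = 3.0   # k(rho) = rho**a; then rho*k'(rho) = a*k(rho), i.e. (10.2) with m = M = a

def F(x, y):
    """Radial stretching F(x) = h(|x|) x in the plane, with h(rho) = k(rho)/rho."""
    rho = math.hypot(x, y)
    if rho == 0.0:
        return (0.0, 0.0)
    h = rho ** (a - 1.0)
    return (h * x, h * y)

def dilation_ratio(x, y, r, samples=720):
    """Estimate L(x,r)/l(x,r) by sampling the image of the circle of radius r."""
    fx, fy = F(x, y)
    dists = []
    for k in range(samples):
        t = 2.0 * math.pi * k / samples
        gx, gy = F(x + r * math.cos(t), y + r * math.sin(t))
        dists.append(math.hypot(gx - fx, gy - fy))
    return max(dists) / min(dists)

# As r -> 0 the ratio at |x| = 1 approaches max(a,1)/min(a,1) = 3, so the
# linear dilation stays bounded and F is quasiconformal.
for r in [0.1, 0.01, 0.001]:
    assert dilation_ratio(1.0, 0.0, r) < a + 0.5
```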
Now assume that F is as in Lemma 10.3. The Jacobian \(J_F\) of F is the infinitesimal area distortion under F, and thus (10.6) implies that \(J_F(x)\simeq h(|x|)^n\) for a.e. \(x\in \mathbf {R}^n\). Since Jacobians of quasiconformal mappings are strong \(A_\infty \) weights (by a result due to Gehring [20], cf. pp. 101–102 in David and Semmes [17] and Theorem 1.5 in Heinonen and Koskela [25]), Theorem 1 in Björn [11] shows that the weight
is \(p\)-admissible when \(1\le p\le n\). (For \(1<p\le n\), one can instead use Theorem 15.33 in Heinonen et al. [24] or Corollary 1.10 in Heinonen and Koskela [25].) We thus have the following result.
Theorem 10.4
Let \(k:[0,\infty )\rightarrow [0,\infty )\) be a locally absolutely continuous homeomorphism of \([0,\infty )\) satisfying (10.2) for a.e. \(\rho \in [0,\infty )\). Then the weight \(w(x)=(k(|x|)/|x|)^{n-p}\) with \(1\le p\le n\) is \(p\)-admissible in \(\mathbf {R}^n\), \(n\ge 2\).
Now let w be a radial weight on \(\mathbf {R}^n\), \(n\ge 2\), i.e. \(w(x)=w(|x|)\) where \(0\le w\in L^1_{\mathrm{loc}}(0,\infty )\). Here we abuse the notation and use w both for the weight itself and for its one-dimensional representation on \((0,\infty )\). With the help of Theorem 10.4 we obtain the following sufficient condition for admissibility of radial weights.
Proposition 10.5
Assume that \(w:(0,\infty )\rightarrow (0,\infty )\) is locally absolutely continuous and that for some \(\gamma _1<n-1\), \(\gamma _2<\infty \) and a.e. \(\rho >0\) we have,
Then the radial weight \(w(x)=w(|x|)\) is 1-admissible in \(\mathbf {R}^n\), \(n\ge 2\).
Remark 10.6
In particular, Proposition 10.5 shows that all the weights
with \(\alpha >1-n\) and \(\beta \in \mathbf {R}\), are 1-admissible in \(\mathbf {R}^n\), \(n\ge 2\). We expect these weights to be 1-admissible (and even \(A_1\)) for \(-n<\alpha \le 1-n\) as well, but the \(A_1\) condition needs to be checked in this case. This is well known for \(\beta =0\), see Heinonen et al. [24, p. 10], thus showing that the above condition for admissibility is not sharp. Note also that for \(n=1\) a weight is \(p\)-admissible if and only if it is an \(A_p\) weight, by Theorem 2 in Björn et al. [13], and that the above “Jacobian” technique does not apply in this case.
Proof of Proposition 10.5
Let \(k(\rho )=\rho w(\rho )^{1/(n-1)}\). Then k is locally absolutely continuous and (10.7) implies that
\(k^{\prime }(\rho )=w(\rho )^{1/(n-1)}\biggl (1+\frac{\rho w^{\prime }(\rho )}{(n-1)w(\rho )}\biggr )\ge \biggl (1-\frac{\gamma _1}{n-1}\biggr )w(\rho )^{1/(n-1)},\)
which is positive for a.e. \(\rho \). Thus k is strictly increasing. Note also that integrating the inequality \(w^{\prime }(\rho )/w(\rho )\ge -\gamma _1/\rho \) implies that
\(w(\rho _2)\ge w(\rho _1)\biggl (\frac{\rho _1}{\rho _2}\biggr )^{\gamma _1}\quad \text{(10.8)}\)
for \(0<\rho _1\le \rho _2<\infty \), and hence
\(k(\rho )\ge \rho _1^{\gamma _1/(n-1)}w(\rho _1)^{1/(n-1)}\rho ^{1-\gamma _1/(n-1)}\rightarrow \infty \quad \text{as } \rho \rightarrow \infty \)
and
\(k(\rho )\le \rho _2^{\gamma _1/(n-1)}w(\rho _2)^{1/(n-1)}\rho ^{1-\gamma _1/(n-1)}\rightarrow 0\quad \text{as } \rho \rightarrow 0,\)
showing that k is onto. From (10.7) and (10.8) we also conclude that \(k^{\prime }(\rho )\simeq k(\rho )/\rho \) for a.e. \(\rho \), i.e. that (10.2) holds. Theorem 10.4 now finishes the proof.\(\square \)
Remark 10.7
The condition (10.7) can also be expressed in terms of \(f(\rho ):=\mu (B(0,\rho ))\), where \(d\mu =w\,dx\), as follows. Since \(w(\rho )=C\rho ^{1-n}f^{\prime }(\rho )\), an equivalent condition to (10.7) is
\(n-1-\gamma _1\le \frac{\rho f^{\prime \prime }(\rho )}{f^{\prime }(\rho )}\le n-1+\gamma _2\quad \text{for a.e. } \rho >0.\)
Note that this requires \(f^{\prime \prime }>0\) (since f is increasing), i.e. f must be convex, which excludes small powers \(f(r)= r^\alpha \), \(0<\alpha <1\). On the other hand, these correspond to \(A_1\) weights, and are thus 1-admissible; see Heinonen et al. [24, p. 10] and Theorem 4 in Björn [11], and cf. also Remark 10.6.
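The equivalence can be checked by logarithmic differentiation; the following is a sketch, assuming that (10.7) is the two-sided bound \(-\gamma _1\le \rho w^{\prime }(\rho )/w(\rho )\le \gamma _2\), as the proof of Proposition 10.5 indicates:

```latex
% Logarithmic differentiation of w(rho) = C rho^{1-n} f'(rho):
\frac{w'(\rho)}{w(\rho)} = \frac{1-n}{\rho} + \frac{f''(\rho)}{f'(\rho)}
\quad\Longrightarrow\quad
\frac{\rho w'(\rho)}{w(\rho)} = 1-n + \frac{\rho f''(\rho)}{f'(\rho)}.
% Hence the assumed condition (10.7) transforms into
-\gamma_1 \le \frac{\rho w'(\rho)}{w(\rho)} \le \gamma_2
\quad\Longleftrightarrow\quad
n-1-\gamma_1 \le \frac{\rho f''(\rho)}{f'(\rho)} \le n-1+\gamma_2.
```

Since \(\gamma _1<n-1\) makes the lower bound positive and \(f^{\prime }>0\) (f is increasing), this indeed forces \(f^{\prime \prime }>0\).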
We end this section by calculating the variational capacity of annuli with respect to radial weights in \(\mathbf {R}^n\).
Proposition 10.8
Let \(w(x)=w(|x|)\) be a radial weight on \(\mathbf {R}^n\), \(n\ge 2\), such that \(w>0\) a.e. and \(w\in L^1_{\mathrm{loc}}(\mathbf {R}^n)\). Assume that the corresponding measure \(d\mu = w\,dx\) supports a \(p\)-Poincaré inequality at 0, where \(p>1\). Let \(f(r)=\mu (B_r)\), where \(B_r=B(0,r)\subset \mathbf {R}^n\). Then
\({{\mathrm{cap}}}_p(B_r,B_R)=\biggl (\int _r^R f^{\prime }(\rho )^{1/(1-p)}\,d\rho \biggr )^{1-p}\quad \text{for } 0<r<R\le \infty .\)
In Sect. 9 we applied this formula to various weights including weights of logarithmic type. In Theorems 2.18 and 2.19 in Heinonen et al. [24], an integral estimate was obtained for nonradial weights satisfying the \(A_p\) condition. See also Theorem 3.1 in Holopainen and Koskela [29], where capacity of annuli in Riemannian manifolds is estimated in a similar way.
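As a sanity check, the following sketch evaluates the one-dimensional capacity formula numerically; it assumes the classical form \({{\mathrm{cap}}}_p(B_r,B_R)=\bigl (\int _r^R f^{\prime }(\rho )^{1/(1-p)}\,d\rho \bigr )^{1-p}\) and tests it in unweighted \(\mathbf {R}^2\) with \(p=2\), where the known value is \(2\pi /\log (R/r)\):

```python
import math

def capacity(fprime, r, R, p, steps=100000):
    """Evaluate (int_r^R f'(rho)^{1/(1-p)} d rho)^{1-p} by the midpoint rule."""
    h = (R - r) / steps
    integral = sum(fprime(r + (i + 0.5) * h) ** (1.0 / (1.0 - p))
                   for i in range(steps)) * h
    return integral ** (1.0 - p)

# Unweighted R^2 with p = 2: f(rho) = pi*rho^2, so f'(rho) = 2*pi*rho,
# and the formula should reproduce the classical value 2*pi / log(R/r).
r, R = 1.0, math.e
cap = capacity(lambda rho: 2 * math.pi * rho, r, R, p=2)
print(cap, 2 * math.pi / math.log(R / r))  # both approximately 6.2832
```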
Remark 10.9
For Proposition 10.8, we actually do not need the full \(p\)-Poincaré inequality at 0; it is enough to have it for some ball \(B \supset B_R\) with \(\mu (B {\setminus } B_R)>0\). The Poincaré inequality is only used when proving Lemma 10.10, which in turn is used to show that the minimizer u for \({{\mathrm{cap}}}_p(B_r,B_R)\) is absolutely continuous on rays and that \(g_u=|u^{\prime }|\).
These consequences are not always true if the Poincaré assumption is omitted. Indeed, if e.g.
where \(\{q_j\}_{j=1}^\infty \) is an enumeration of the positive rational numbers, then
is for every \(\tilde{r}\in (r,R)\) an upper gradient of \(u:=\chi _{B_{\tilde{r}}}\), since \(\int _\gamma g\,ds=\infty \) for every curve \(\gamma \) crossing over \(\partial B_{\tilde{r}}\). Thus \(u\in N^{1,p}_0(B_R, w\,dx)\) and Corollary 2.21 in Björn and Björn [5] implies that \(g_u=0\) a.e. in \(B_R\). It follows that the minimizer is not unique (and may also be nonradial) and \({{\mathrm{cap}}}_p(B_r,B_R)=0\) in this case. Moreover, \(g\in N^{1,p}(B_R,w\,dx)\) (with itself as an upper gradient), but \(g\notin L^1_{\mathrm{loc}}(\mathbf {R}^n,dx)\), so \(g^{\prime }\) need not be defined (e.g. in the distributional sense). Cf. also the discussion after Proposition 4.6 and the discussion about gradients on p. 13 in Heinonen et al. [24].
On the other hand, if w is \(p\)-admissible, then Theorem 8.6 in [24] directly shows that \({{\mathrm{cap}}}_p(B_r,B_R)=\int _{B_R{\setminus } B_r} |\nabla u|^p\,w\,dx\), where u is the solution of the weighted equation \({{\mathrm{div}}}\bigl (w|\nabla u|^{p-2}\nabla u\bigr )=0\) in \(B_R{\setminus }\overline{B_r}\)
with the boundary data 1 on \(\partial B_r\) and 0 on \(\partial B_R\), and only the second half of the proof below is needed in this case, cf. Example 2.22 in [24]. More general weights require more care and are treated using the metric space theory.
Proof of Proposition 10.8
By Lemma 4.8 we may assume that \(R < \infty \). First we have \(f^{\prime }(\rho )=\omega _{n-1}\rho ^{n-1}w(\rho )\) for a.e. \(\rho >0\), where \(\omega _{n-1}\) is the surface area of the \((n-1)\)-dimensional unit sphere in \(\mathbf {R}^n\). To calculate \({{\mathrm{cap}}}_p(B_r,B_R)\) we need to minimize \(\int _{B_R{\setminus } B_r}g_u^p w\,dx\) among functions u with \(u=1\) on \(B_r\) and \(u=0\) on \(\partial B_R\). We shall also see below that under our assumptions, \(g_u=|u^{\prime }|\) a.e.
Since the data are bounded, neither a Poincaré inequality nor a doubling property is needed for the existence of a minimizer (i.e. a competing function having \(p\)-energy equal to \({{\mathrm{cap}}}_p(B_r,B_R)\)), by e.g. Theorem 5.13 in Björn and Björn [6]. Without such assumptions the minimizer need not be unique and there may exist a nonradial minimizer, but there always exists at least one radial minimizer. Indeed, if v is a minimizer, then
and we can find \(\theta _0 \in \mathbf {S}^{n-1}\) so that
Letting \(u(x)=v(|x|\theta _0)\) and \(g(x)=g_v(|x|\theta _0)\) it is easily verified that g is a \(p\)-weak upper gradient of u and that, by (10.9),
Thus u is a radial minimizer.
As usual, we write \(u(x)=u(|x|)\), where \(u:[0,\infty )\rightarrow \mathbf {R}\). We may clearly assume that u is decreasing, and so \(u^{\prime }(\rho )\) exists for a.e. \(\rho \). By Proposition 3.1 in Shanmugalingam [38] (or Theorem 1.56 in [5]), u is absolutely continuous on all curves, except for a curve family with zero \(p\)-modulus (with respect to the measure \(\mu \)). Lemma 10.10 below shows that the family of all radial rays connecting \(B_r\) to \(\mathbf {R}^n{\setminus } B_R\) has positive \(p\)-modulus. By symmetry, it then follows that u is absolutely continuous on the interval [r, R] and hence, by Lemma 2.14 in [5], \(g_u = |u^{\prime }|\) a.e.
Thus,
Since u is a minimizer of this integral, it solves the corresponding Euler–Lagrange equation
(which is derived in a standard way) and hence \(|u^{\prime }|^{p-2}u^{\prime }f^{\prime }=A\) a.e. It is clear that \(u^{\prime }\le 0\), and so we get \(u^{\prime }=-(A/f^{\prime })^{1/(p-1)}\) a.e. To determine the constant A, notice that
and thus
Inserting this into the above expressions for \(u^{\prime }\) and \(\int _{B_R{\setminus } B_r} g_u^p w\,dx\) gives
\(\square \)
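The Euler–Lagrange computation in the proof can be summarized as follows; this is a sketch under the assumption that the minimization reduces to the one-dimensional problem \(\int _r^R|u^{\prime }(\rho )|^p f^{\prime }(\rho )\,d\rho \), with the sign convention chosen so that \(A\ge 0\):

```latex
% Euler-Lagrange equation for the functional \int_r^R |u'|^p f' d\rho:
\bigl(|u'|^{p-2}u'\,f'\bigr)' = 0,
\qquad\text{and since } u'\le 0:\quad |u'|^{p-1}f' \equiv A,
\quad u' = -\Bigl(\frac{A}{f'}\Bigr)^{1/(p-1)}.
% The boundary data determine the constant A:
1 = u(r)-u(R) = \int_r^R \Bigl(\frac{A}{f'}\Bigr)^{1/(p-1)} d\rho
\quad\Longrightarrow\quad
A = \Bigl(\int_r^R f'(\rho)^{1/(1-p)}\,d\rho\Bigr)^{1-p}.
% Inserting u' back into the energy, using |u'|^p f' = A^{p/(p-1)} f'^{1/(1-p)}:
\int_r^R |u'|^p f'\,d\rho
= A^{p/(p-1)} \int_r^R f'(\rho)^{1/(1-p)}\,d\rho
= \Bigl(\int_r^R f'(\rho)^{1/(1-p)}\,d\rho\Bigr)^{1-p}.
```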
Lemma 10.10
Under the assumptions of Proposition 10.8, the family \(\Gamma _{r,R}\) of all radial rays connecting \(B_r\) to \(\mathbf {R}^n{\setminus } B_R\) has positive \(p\)-modulus with respect to the measure \(d\mu = w\,dx\).
Proof
Assume on the contrary that the \(p\)-modulus of \(\Gamma _{r,R}\) is zero. Then there exists \(g\in L^p(B_R{\setminus } B_r,\mu )\) such that for every radial ray \(\gamma \) connecting \(r\theta \) to \(R\theta \), where \(\theta \in \mathbf {S}^{n-1}\), we have \(\int _\gamma g\,ds=\infty \).
Since \(g\in L^p(B_R{\setminus } B_r,\mu )\), Fubini’s theorem implies that for a.e. \(\theta \in \mathbf {S}^{n-1}\), \(\int _r^R g(t\theta )^p w(t)t^{n-1}\,dt<\infty \).
Choose one such \(\theta \in \mathbf {S}^{n-1}\) and set \(\tilde{g}(|x|)=\tilde{g}(x)=g(|x|\theta )\), \(x\in B_R{\setminus } B_r\). Then \(\tilde{g}\) is radially symmetric, \(\tilde{g}\in L^p(B_R{\setminus } B_r,\mu )\), and \(\int _\gamma \tilde{g}\,ds = \infty \) for every \(\gamma \in \Gamma _{r,R}\).
Since \(\int _r^R \tilde{g}\,dt=\infty \), we can by successively halving intervals find a decreasing sequence of intervals \([a_j,b_j]\) such that \(\int _{a_j}^{b_j} \tilde{g}\,dt=\infty \) and \(b_j - a_j \rightarrow 0\), as \(j \rightarrow \infty \). Letting \(\tilde{r}=\lim _{j \rightarrow \infty } a_j\) we see that either \(\int _{\tilde{r}- \varepsilon }^{\tilde{r}} \tilde{g}\,dt=\infty \) for all \(\varepsilon >0\), or \(\int _{\tilde{r}}^{\tilde{r}+\varepsilon } \tilde{g}\,dt=\infty \) for all \(\varepsilon >0\) (or both). Let in the former case \(E=B_{\tilde{r}}\) and in the latter case .
If \(\gamma :[0,l_\gamma ]\rightarrow \mathbf {R}^n\) is any (possibly nonradial) curve connecting E to \(\mathbf {R}^n{\setminus } E\), then using the symmetry of \(\tilde{g}\) it is easily verified that
Thus \(\tilde{g}\) is an upper gradient of \(u_j=j\chi _{E}\) for every \(j=1,2,\ldots \) . Since \(u_j\in N^{1,p}(B_{2R},\mu )\), applying the \(p\)-Poincaré inequality at 0 to \(u_j\) gives
Letting \(j\rightarrow \infty \) leads to a contradiction, showing that \(\Gamma _{r,R}\) has positive \(p\)-modulus.\(\square \)
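The interval-halving step in the proof can be mimicked numerically: the following sketch locates the non-integrable singularity of a sample function by repeatedly keeping a half whose (truncated) integral is larger. The function g and the singularity location c are illustrative assumptions, not taken from the paper.

```python
# Locate the singularity of g(t) = 1/|t - c| on [0, 1] by successive halving,
# keeping at each step the half whose (truncated, midpoint-rule) integral is
# larger -- mimicking the nested intervals [a_j, b_j] with divergent integral.
c = 0.3  # hypothetical singularity location

def g(t):
    d = abs(t - c)
    return 1e12 if d < 1e-12 else 1.0 / d  # truncate the singularity

def truncated_integral(a, b, steps=1000):
    h = (b - a) / steps
    return sum(g(a + (i + 0.5) * h) for i in range(steps)) * h

a, b = 0.0, 1.0
for _ in range(30):
    m = (a + b) / 2
    if truncated_integral(a, m) >= truncated_integral(m, b):
        b = m  # keep the left half
    else:
        a = m  # keep the right half
print(a, b)  # the interval [a, b] shrinks onto c = 0.3
```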
References
Adamowicz, T., Björn, A., Björn, J., Shanmugalingam, N.: Prime ends for domains in metric spaces. Adv. Math. 238, 459–505 (2013)
Adamowicz, T., Shanmugalingam, N.: Non-conformal Loewner type estimates for modulus of curve families. Ann. Acad. Sci. Fenn. Math. 35, 609–626 (2010)
Adams, D.R.: Weighted nonlinear potential theory. Trans. Am. Math. Soc. 297, 73–94 (1986)
Astala, K., Iwaniec, T., Martin, G.: Elliptic Partial Differential Equations and Quasiconformal Mappings in the Plane. Princeton Mathematical Series, vol. 48. Princeton University Press, Princeton (2009)
Björn, A., Björn, J.: Nonlinear Potential Theory on Metric Spaces. EMS Tracts in Mathematics, vol. 17. European Mathematical Society, Zürich (2011)
Björn, A., Björn, J.: The variational capacity with respect to nonopen sets in metric spaces. Potential Anal. 40, 57–80 (2014)
Björn, A., Björn, J.: Obstacle and Dirichlet problems on arbitrary nonopen sets in metric spaces, and fine topology. Rev. Mat. Iberoam. 31, 161–214 (2015)
Björn, A., Björn, J., Gill, J., Shanmugalingam, N.: Geometric analysis on Cantor sets and trees. J. Reine Angew. Math. doi:10.1515/crelle-2014-0099 (to appear)
Björn, A., Björn, J., Lehrbäck, J.: The annular decay property and capacity estimates for thin annuli. Collect. Math. doi:10.1007/s13348-016-0178-y (to appear)
Björn, A., Björn, J., Shanmugalingam, N.: Quasicontinuity of Newton–Sobolev functions and density of Lipschitz functions on metric spaces. Houston J. Math. 34, 1197–1211 (2008)
Björn, J.: Poincaré inequalities for powers and products of admissible weights. Ann. Acad. Sci. Fenn. Math. 26, 175–188 (2001)
Björn, J.: Boundary continuity for quasiminimizers on metric spaces. Ill. J. Math. 46, 383–403 (2002)
Björn, J., Buckley, S.M., Keith, S.: Admissible measures in one dimension. Proc. Am. Math. Soc. 134, 703–705 (2006)
Bonk, M., Heinonen, J., Koskela, P.: Uniformizing Gromov hyperbolic spaces. Astérisque 270, i–viii, 1–99 (2001)
Capogna, L., Danielli, D., Garofalo, N.: Capacitary estimates and the local behavior of solutions of nonlinear subelliptic equations. Am. J. Math. 118, 1153–1196 (1996)
Danielli, D., Garofalo, N., Marola, N.: Local behavior of \(p\)-harmonic Green functions in metric spaces. Potential Anal. 32, 343–362 (2010)
David, G., Semmes, S.: Strong \(A_\infty \) weights, Sobolev inequalities and quasiconformal mappings. In: Sadosky, C. (ed.) Analysis and Partial Differential Equations, Lecture Notes in Pure and Applied Mathematics, vol. 122, pp. 101–111. Dekker, New York (1990)
Dovgoshey, O., Martio, O., Ryazanov, V., Vuorinen, M.: The Cantor function. Expos. Math. 24, 1–37 (2006)
Garofalo, N., Marola, N.: Sharp capacitary estimates for rings in metric spaces. Houston J. Math. 36, 681–695 (2010)
Gehring, F.W.: The \(L^p\)-integrability of the partial derivatives of a quasiconformal mapping. Acta Math. 130, 265–277 (1973)
Hajłasz, P.: Sobolev spaces on metric-measure spaces. In: Auscher, P., Coulhon, T., Grigor’yan, A. (eds.) Heat Kernels and Analysis on Manifolds, Graphs and Metric Spaces (Paris, 2002). Contemporary Mathematics, vol. 338, pp. 173–218. American Mathematical Society, Providence (2003)
Heinonen, J.: Lectures on Analysis on Metric Spaces. Springer, New York (2001)
Heinonen, J., Holopainen, I.: Quasiregular maps on Carnot groups. J. Geom. Anal. 7, 109–148 (1997)
Heinonen, J., Kilpeläinen, T., Martio, O.: Nonlinear Potential Theory of Degenerate Elliptic Equations, 2nd edn. Dover, Mineola (2006)
Heinonen, J., Koskela, P.: Weighted Sobolev and Poincaré inequalities and quasiregular mappings of polynomial type. Math. Scand. 77, 251–271 (1995)
Heinonen, J., Koskela, P.: Quasiconformal maps in metric spaces with controlled geometry. Acta Math. 181, 1–61 (1998)
Heinonen, J., Koskela, P., Shanmugalingam, N., Tyson, J.T.: Sobolev Spaces on Metric Measure Spaces. New Mathematical Monographs, vol. 27. Cambridge University Press, Cambridge (2015)
Holopainen, I.: Nonlinear Potential Theory and Quasiregular Mappings on Riemannian Manifolds. Annales Academiae Scientiarum Fennicae Series A I Mathematica Dissertationes, vol. 74 (1990)
Holopainen, I., Koskela, P.: Volume growth and parabolicity. Proc. Am. Math. Soc. 129, 3425–3435 (2001)
Holopainen, I., Shanmugalingam, N.: Singular functions on metric measure spaces. Collect. Math. 53, 313–332 (2002)
Kallunki [Rogovin], S., Shanmugalingam, N.: Modulus and continuous capacity. Ann. Acad. Sci. Fenn. Math. 26, 455–464 (2001)
Keith, S., Zhong, X.: The Poincaré inequality is an open ended condition. Ann. Math. 167, 575–599 (2008)
Koskela, P.: Removable sets for Sobolev spaces. Ark. Mat. 37, 291–304 (1999)
Koskela, P., MacManus, P.: Quasiconformal mappings and Sobolev spaces. Stud. Math. 131, 1–17 (1998)
Mäkäläinen, T.: Adams inequality on metric measure spaces. Rev. Mat. Iberoam. 25, 533–558 (2009)
Manojlović, V.: Harmonic quasiconformal mappings in domains in \(\mathbf {R}^n\). J. Anal. 18, 297–316 (2010)
Serrin, J.: Local behavior of solutions of quasi-linear equations. Acta Math. 111, 247–302 (1964)
Shanmugalingam, N.: Newtonian spaces: an extension of Sobolev spaces to metric measure spaces. Rev. Mat. Iberoam. 16, 243–279 (2000)
Shanmugalingam, N.: Harmonic functions on metric spaces. Ill. J. Math. 45, 1021–1050 (2001)
Svensson [Ström], H.: Radiella vikter i \(\mathbf {R}^n\) och lokala dimensioner [Radial weights in \(\mathbf {R}^n\) and local dimensions]. Master’s thesis, Linköping University, Linköping (2014) (in Swedish). http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-107173
Turesson, B.O.: Nonlinear Potential Theory and Weighted Sobolev Spaces. Lecture Notes in Mathematics, vol. 1736. Springer, Berlin (2000)
Väisälä, J.: Lectures on \(n\)-Dimensional Quasiconformal Mappings. Lecture Notes in Mathematics, vol. 229. Springer, Berlin (1971)
Vodop’yanov, S.K.: Weighted \(L_p\) potential theory on homogeneous groups. Sibirsk. Mat. Zh. 33(2), 29–48 (1992) (in Russian). English transl. Sib. Math. J. 33, 201–218 (1992)
Acknowledgements
A.B. and J.B. were supported by the Swedish Research Council. J.L. was supported by the Academy of Finland (Grant No. 252108) and the Väisälä Foundation of the Finnish Academy of Science and Letters. Part of this research was done during several visits of J.L. to Linköping University in 2012–2013, and while A.B. and J.B. visited Institut Mittag-Leffler in 2013. We wish to thank these institutions for their kind hospitality.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Björn, A., Björn, J. & Lehrbäck, J. Sharp capacity estimates for annuli in weighted \(\mathbf {R}^n\) and in metric spaces. Math. Z. 286, 1173–1215 (2017). https://doi.org/10.1007/s00209-016-1797-4
Keywords
- Annulus
- Doubling measure
- Exponent sets
- Metric space
- Newtonian space
- \(p\)-admissible weight
- Poincaré inequality
- Quasiconformal mapping
- Radial weight
- Sobolev space
- Variational capacity