1 Introduction

1.1 History and Motivation

The past decades have seen considerable achievements at the intersection of harmonic analysis, PDEs, and geometric measure theory. The general idea of this line of research is to establish links between the geometry of the boundary of a domain \(\Omega \) and the regularity of solutions (or the well-posedness of boundary value problems). Let us give an example. Given an open domain \(\Omega \subset \mathbb {R}^n\) and a function g on the boundary \(\partial \Omega \), the Dirichlet problem consists in finding solutions (possibly in some weak sense and in appropriate spaces) to \(-\Delta u = 0\) in \(\Omega \) and \(u=g\) on \(\partial \Omega \). If \(\Omega \) is a \(C^{k,\alpha }\)-domain and \(g\in C^{k,\alpha }(\overline{\Omega })\) for some \(k\ge 2\), then it is well known that the solution u to the aforementioned Dirichlet problem also belongs to \(C^{k,\alpha }(\overline{\Omega })\) (see e.g. [20, Theorem 6.19]).

Of course, mathematicians have studied the Dirichlet problem under weaker conditions on \(\partial \Omega \) and g. The important discovery for us is the following equivalence: the solvability of the Dirichlet problem for all g in \(L^p\), for some large \(p<+\infty \), is equivalent to the \(A_\infty \)-absolute continuity of the harmonic measure with respect to the surface measure; see e.g. [26, Theorem 1.7.3]. The \(A_\infty \)-absolute continuity is a scale-invariant, quantitative version of mutual absolute continuity. Using the harmonic measure instead of the boundary value problem is convenient because it captures the diffusion property of the Laplacian, but it places more emphasis on the boundary \(\partial \Omega \) than on the domain \(\Omega \).

The first result linking the harmonic measure and the boundary is now more than a century old. In 1916, F. and M. Riesz showed that for simply connected domains in the complex plane with rectifiable boundary, the harmonic measure is absolutely continuous with respect to arc length (see [19]). In 1936, M. Lavrent’ev established a scale-invariant version of the Riesz brothers' result (see [30]). C. Bishop and P. Jones obtained a local version of the result in 1990 ([6]), and showed that topological conditions are needed to ensure that the harmonic measure on a rectifiable boundary is absolutely continuous with respect to arc length.

The problem was also studied in higher dimensions, namely in \(\mathbb {R}^n\) for \(n\ge 3\). B. Dahlberg proved in 1977 that, for domains \(\Omega \) with Lipschitz boundary, the harmonic measure is indeed absolutely continuous with respect to the surface measure \(\mathcal H^{n-1}|_{\partial \Omega }\) (see [8]). The topic underwent many improvements over the next three decades, leading to finer and finer necessary and sufficient conditions, see for instance [5, 9, 25, 35, 39, 40]. The authors of [23] (see also [2]) obtained that, under some topological conditions, uniform rectifiability of the boundary implies that the harmonic measure is \(A_\infty \)-absolutely continuous with respect to the Hausdorff measure. It was also observed in [24] (with topological assumptions) and [3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18, 20, 21] (without topological assumptions) that rectifiability is necessary to get absolute continuity of the harmonic measure. Several recent works (e.g. [1, 4]) then looked for the optimal topological condition that ensures, together with uniform rectifiability, the \(A_\infty \)-property of the harmonic measure (or slightly weaker versions). Many articles could be related to the present discussion; for instance, a lot of work has been done to see to what extent we can replace, in the previous results, the harmonic measure by elliptic measures associated to elliptic operators other than the Laplacian (which is relevant since at least one approach to this problem is to study perturbations of the Laplacian on simpler domains). We apologize to all the other mathematicians that could have been cited here; we refer to [16] for a more detailed presentation of the state of the art, and we direct the interested non-specialist reader to [38] and [33] for nice presentations of the problems related to this topic.

Guy David and Svitlana Mayboroda had the following thought. Would there be a way, for sets \(\Gamma \subset \mathbb {R}^n\) of dimension \(d<n-1\), to obtain a similar criterion of uniform rectifiability using a harmonic measure? A positive answer would be a huge discovery, because most of the criteria for uniform rectifiability, in particular those pertaining to PDE, are limited to certain dimensions or codimensions. The question at that time started as a challenge, since the harmonic measure is a tool that—roughly speaking—can only “see” the parts of \(\Gamma \) of dimension d with \(n-2<d<n\). The idea was thus to find a way to construct, by way of elliptic PDEs, a probability measure that would play the role of the harmonic measure for sets of low dimension. In [29], for instance, the authors used a non-linear p-Laplacian operator to address this issue, but their goal was different from David and Mayboroda’s objective. David and Mayboroda’s approach was to use linear but degenerate elliptic operators L in \(\Omega := \mathbb {R}^n \setminus \Gamma \) that satisfy ellipticity and boundedness conditions relative to a weight \(w(x) = \,\mathrm {dist}(x,\Gamma )^{d+1-n}\), which takes the dimension d of \(\Gamma \) and the distance to the boundary into account. These ideas led to the memoir [13], where we developed an elliptic theory associated to the degenerate operators that we wanted to use. In particular, we proved that when \(L:=-\mathop {{\text {div}}}[w(x)A(x) \nabla ]\) is a degenerate elliptic operator and A(x) satisfies the classical ellipticity and boundedness conditions, weak solutions to \(Lu=0\) in \(\mathbb {R}^n \setminus \Gamma \) satisfy De Giorgi–Nash–Moser estimates inside the domain and at the boundary. We can then define a probability measure \(\omega ^X_{L}\) on \(\Gamma \) associated to L, and this measure \(\omega ^X_{L}\) has desirable properties such as the doubling property, the non-degeneracy property, and the change of pole property.
We shall not dwell on those properties: they are only used in the proof of Lemma 1.15 below, which is not repeated here because it can be found in previous works.

In [14], we continued our project by aiming for Dahlberg’s result for sets of low dimension: if \(\Gamma \) is the graph of a Lipschitz function from \(\mathbb {R}^d\) to \(\mathbb {R}^{n-d}\), then the “harmonic measure” \(\omega ^X_L\) is \(A_\infty \)-absolutely continuous with respect to the d-dimensional Hausdorff measure. The main difficulty here is the fact that, even in the classical case, not all elliptic operators with bounded coefficients satisfy that \(\omega ^X_L\) is \(A_\infty \)-absolutely continuous (see [7, 32]). We had to make a choice of \(L:=L_\Gamma \) that is simple enough and systematically defined for all sets \(\Gamma \subset \mathbb {R}^n\) of dimension d. The survey [15] presents what we had achieved by 2018. Everything the reader needs to know for the present article, in particular the choice of \(L_\Gamma \), is given in the next subsection.

In [17], Guy David and Svitlana Mayboroda extended the above result to all uniformly rectifiable sets. That is, the harmonic measure \(\omega ^X_{L_\Gamma }\) is \(A_\infty \)-absolutely continuous for all uniformly rectifiable sets \(\Gamma \subset \mathbb {R}^n\) of dimension \(d<n-1\). Contrary to the case \(d=n-1\), we do not need any topological assumption on our domain \(\Omega := \mathbb {R}^n \setminus \Gamma \), since the relevant conditions are automatically verified when \(\Gamma \) has dimension \(d<n-1\) (the fact that no extra topological condition is needed can be related to the fact that we are unlikely to touch the boundary when we travel between two points of \(\Omega \)).

In the current article, we propose a shorter and simpler proof of the main theorem in [17]. Our result is established by a completely different method that exploits in a crucial manner the fact that \(\Gamma \) has dimension \(d<n-1\). Our methods are simple in nature; for instance, they do not rely on the so-called corona decomposition, sawtooth domains, or extrapolation arguments.

1.2 Main Result

In this subsection, we properly introduce all the tools needed for our main result, and then state our main theorem. We first discuss uniform rectifiability, and then turn to the presentation of the degenerate elliptic operator that will substitute for the Laplacian and be used to construct our harmonic measure.

Let \(\Gamma \subset \mathbb {R}^n\) be an Ahlfors regular set of dimension d, which means that \(\Gamma \) is a closed set and there exist a measure \(\sigma \) supported on \(\Gamma \) and a constant \(C_{\sigma } \ge 1\) such that

$$\begin{aligned} C_{\sigma }^{-1} r^{d} \le \sigma (B(x,r)) \le C_\sigma r^d \end{aligned}$$
(1.1)

for all \(x\in \Gamma \) and \(r>0\). Ahlfors regularity is a property of the set \(\Gamma \) rather than of the measure \(\sigma \). Indeed, if a measure \(\sigma \) satisfying (1.1) exists, then (1.1) is also satisfied when \(\sigma \) is the d-dimensional Hausdorff measure on \(\Gamma \), possibly with a larger constant \(C_\sigma \).

The geometric assumption on \(\Gamma \) in this article is uniform rectifiability. Uniformly rectifiable sets were introduced by David and Semmes, and several equivalent definitions are given in [10, 11]. The characterization of uniform rectifiability closest to the definition of rectifiability is probably the one stating that

$$\begin{aligned} \Gamma \text { is uniformly rectifiable if }\Gamma \text { has big pieces of Lipschitz images,} \end{aligned}$$
(1.2)

that is, if \(\Gamma \) is Ahlfors regular (1.1), and there exist \(\theta , M>0\) such that, for each \(x\in \Gamma \) and \(r>0\), there is a Lipschitz mapping \(\rho \) from the ball \(B_{\mathbb {R}^d}(0,r) \subset \mathbb {R}^d\) into \(\mathbb {R}^n\) such that \(\rho \) has Lipschitz norm at most M and

$$\begin{aligned} \sigma (\Gamma \cap B(x,r) \cap \rho (B_{\mathbb {R}^d}(0,r))) \ge \theta r^d. \end{aligned}$$

In this article, we shall only rely on the characterization of uniformly rectifiable sets by Tolsa \(\alpha \)-numbers, which we present now. We denote by \(\Xi \) the set of affine d-dimensional planes in \(\mathbb {R}^n\). Each plane \(P\in \Xi \) is associated with a measure \(\mu _P\), which is the restriction of the d-dimensional Hausdorff measure to P (i.e. the Lebesgue measure on the plane). A flat measure is a measure \(\mu \) that can be written \(\mu = c\mu _P\), where c is a positive constant and \(P\in \Xi \). The set of flat measures is denoted by \(\mathcal F\). We need Wasserstein distances to quantify the difference between two measures, and we shall use them to measure how far a measure \(\sigma \) is from a flat measure.

Definition 1.3

For \(x\in \mathbb {R}^n\) and \(r > 0\), denote by Lip(x, r) the set of 1-Lipschitz functions f supported in \(\overline{B(x,r)}\), that is, the set of functions \(f : \mathbb {R}^n \rightarrow \mathbb {R}\) such that \(f(y)=0\) for \(y\in \mathbb {R}^n \setminus B(x,r)\) and \(|f(y)-f(z)|\le |y-z|\) for \(y,z\in \mathbb {R}^n\). The normalized Wasserstein distance between two measures \(\sigma \) and \(\mu \) is then

$$\begin{aligned} \,\mathrm {dist}_{x,r}(\mu ,\sigma ) = r^{-d-1} \sup _{f\in Lip(x,r)} \Big |\int f d\sigma - \int f d\mu \Big |. \end{aligned}$$
(1.3)

The distance to flat measures is defined by

$$\begin{aligned} \alpha _\sigma (x,r) = \inf _{\mu \in {\mathcal F}}\,\mathrm {dist}_{x,r}(\mu ,\sigma ). \end{aligned}$$
(1.4)

Observe that \(\alpha _\sigma \) is uniformly bounded, i.e. for all \(x\in \Gamma \) and \(r>0\),

$$\begin{aligned} \alpha _\sigma (x,r) \le C_\sigma . \end{aligned}$$
(1.5)

The bound above is quite classical. Take a flat measure \(\mu \) supported outside \(\overline{B(x,r)}\), so that \(\int f \, d\mu = 0\) for every \(f\in Lip(x,r)\); then \(\alpha _\sigma (x,r) \le r^{-d-1} \sup _{f\in Lip(x,r)} \Big |\int f d\sigma \Big |\). Since f is 1-Lipschitz and supported in \(\overline{B(x,r)}\), the function f is bounded by r, which leads to \(\alpha _\sigma (x,r) \le r^{-d} \sigma (B(x,r)) \le C_\sigma \), as desired.
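Written out in full, with \(\mu = \mu _P\) for a plane \(P\in \Xi \) at distance more than 2r from x (so that every \(f\in Lip(x,r)\) vanishes on the support of \(\mu \)), the chain of inequalities reads

$$\begin{aligned} \alpha _\sigma (x,r) \le \,\mathrm {dist}_{x,r}(\mu ,\sigma ) = r^{-d-1} \sup _{f\in Lip(x,r)} \Big |\int f \, d\sigma \Big | \le r^{-d-1} \cdot r \, \sigma (B(x,r)) \le r^{-d} \cdot C_\sigma r^d = C_\sigma . \end{aligned}$$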

Let \(\Gamma \) be a d-Ahlfors regular set, and \(\sigma \) be a measure that satisfies (1.1). If \(\Gamma \) is uniformly rectifiable, then there exists a constant \(C_0>0\) that depends only on \(\sigma \) such that

$$\begin{aligned} \int _{0}^r \int _{\Gamma \cap B(x,r)} |\alpha _\sigma (y,s)|^2 \, d\sigma (y) \, \frac{ds}{s} \le C_0 \sigma (B(x,r)) \qquad \text { for } x\in \Gamma \text { and } r>0. \end{aligned}$$
(1.6)

The above statement is even a characterization of uniform rectifiability (see Theorem 1.2 in [37]). Tolsa’s characterization of uniform rectifiability is stated in terms of dyadic cubes, but one can easily check that our bound (1.6) is equivalent to Tolsa’s.

Now, we present the elliptic theory associated to our problem. Define \(\Omega := \mathbb {R}^n \setminus \Gamma \), where \(\Gamma \) is a d-Ahlfors regular set with \(d < n-1\). The set \(\Omega \) will serve as the domain in which we study elliptic equations. Because the boundary is thin, the domain \(\Omega \) automatically satisfies the Harnack chain condition (a quantitative form of connectedness), see Lemma 2.1 in [13]. Moreover, Lemma 11.6 in [13] entails the existence of a constant C, depending only on \(C_\sigma \) and \(n-d\), such that for any \(x\in \Gamma \) and \(r>0\), we can find a point \(A_{x,r}\) such that

$$\begin{aligned} C^{-1} r \le \,\mathrm {dist}(A_{x,r},\Gamma ) \le |A_{x,r}-x| \le Cr. \end{aligned}$$
(1.7)

Thus, contrary to the case where \(d=n-1\) [23], we do not need to assume these topological hypotheses.

When \(\Gamma \) has codimension at least 2, a weak solution to \(-\Delta u = 0\) in \(\Omega :=\mathbb {R}^n \setminus \Gamma \) is also a weak solution to \(-\Delta u = 0\) in \(\mathbb {R}^n\). So in particular, we cannot impose any non-smooth data on \(\Gamma \), and we cannot define the harmonic measure on \(\Gamma \). In [13,14,15,16], the three authors developed an elliptic theory associated to such sets \(\Gamma \) by considering degenerate elliptic operators that take the dimension of \(\Gamma \) into account. In the present article, we consider the operators \(L_{\beta ,\gamma }\), for \(\beta >0\) and \(\gamma \in (-1,1)\), defined as

$$\begin{aligned} L_{\beta ,\gamma } := - \mathop {{\text {div}}}(D_\beta )^{d+1+\gamma -n} \nabla , \end{aligned}$$
(1.8)

where \(D_\beta \) is defined on \(\Omega \) as

$$\begin{aligned} D_\beta (X) := \left( \int _\Gamma |X-y|^{-d-\beta } d\sigma (y) \right) ^{-1/\beta }, \end{aligned}$$
(1.9)

and \(\sigma \) is the measure on \(\Gamma \) introduced in (1.1). Lemma 5.1 in [14] shows that, when \(\Gamma \) is d-Ahlfors regular,

$$\begin{aligned} C^{-1} \,\mathrm {dist}(X,\Gamma ) \le D_\beta \le C \,\mathrm {dist}(X,\Gamma ) \qquad \text { for } X\in \Omega , \end{aligned}$$
(1.10)

where \(C>0\) depends only on n, d, \(\beta \), and \(C_\sigma \). In view of the above estimate, we can extend the definition of \(D_\beta \) to all of \(\mathbb {R}^n\) by setting \(D_\beta (x) = 0\) for \(x\in \Gamma \). Moreover, the estimate shows that the operator \(L_{\beta ,\gamma }\), for \(\beta >0\) and \(\gamma \in (-1,1)\), falls within the scope of the theory developed in [16] (see the discussion in paragraph 3.3 of [16] when \(\gamma \ne 0\), and see [13] for the case \(\gamma = 0\)).
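As a toy illustration of (1.10) (a numerical sketch, not part of the proof), one can compute \(D_\beta \) in the simplest configuration \(\Gamma = \mathbb {R}\times \{0\}^2 \subset \mathbb {R}^3\), so that \(d=1\), \(n=3\), and \(\sigma \) is the arclength measure; in this case a change of variables shows that \(D_\beta (X)\) is an exact constant multiple of \(\,\mathrm {dist}(X,\Gamma )\), which the quadrature below confirms:

```python
import numpy as np

# Toy check of (1.10) for Gamma = the x-axis in R^3 (d = 1, n = 3,
# sigma = arclength).  For this Gamma, scaling gives
# D_beta(X) = c_beta * dist(X, Gamma) exactly; we verify that the ratio
# D_beta / dist is the same constant at every scale.

def D_beta(X, beta=2.0):
    """Approximate (int_Gamma |X - y|^(-d-beta) dsigma(y))^(-1/beta)
    by a Riemann sum over a long piece of the axis."""
    s = np.linspace(-1e4, 1e4, 2_000_001)                 # points on Gamma
    h = s[1] - s[0]
    dist_to_y = np.sqrt((X[0] - s) ** 2 + X[1] ** 2 + X[2] ** 2)
    integral = np.sum(dist_to_y ** (-1.0 - beta)) * h     # exponent -d-beta = -1-beta
    return integral ** (-1.0 / beta)

ratios = [D_beta(np.array([0.0, t, 0.0])) / t for t in [0.5, 1.0, 2.0, 8.0]]
print(ratios)   # four nearly identical values, close to 1/sqrt(2) for beta = 2
```

For \(\beta =2\) the exact constant is \(c_2 = 2^{-1/2}\), since \(\int _{\mathbb {R}} (s^2+t^2)^{-3/2}\,ds = 2/t^2\).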

For the rest of the article, we say that u is a weak solution to \(L_{\beta ,\gamma }u = 0\) if

$$\begin{aligned} \int _\Omega (\nabla u \cdot \nabla \varphi ) \, D_\beta ^{d+1+\gamma -n} = 0 \qquad \text { for } \varphi \in C^\infty _0(\Omega ). \end{aligned}$$
(1.11)

In the integral above, we did not specify that we integrate with respect to the n-dimensional Lebesgue measure. For the rest of the article, to lighten the notation, an integral without a measure will always be an integral against the n-dimensional Lebesgue measure. The precise definition of the harmonic measure, as constructed in [16], is then:

Definition 1.12

For each \(X\in \Omega \), we can define a unique probability measure \(\omega ^X:=\omega ^X_{\beta ,\gamma }\) on \(\Gamma \) with the following properties. For any compactly supported continuous function g on \(\Gamma \), the function \(u_g\) defined as

$$\begin{aligned} u_g(X) = \int _\Gamma g(y) d\omega ^X(y)\end{aligned}$$

is a weak solution to \(L_{\beta ,\gamma }u=0\) in \(\Omega :=\mathbb {R}^n \setminus \Gamma \), which in addition is continuous on \(\mathbb {R}^n\) and is equal to g on \(\Gamma \).

The goal of the article is to obtain the following result.

Theorem 1.13

Let \(\Gamma \subset \mathbb {R}^n\) be a d-Ahlfors regular uniformly rectifiable set with \(d < n-1\), and let \(\sigma \) be an Ahlfors regular measure on \(\Gamma \) that satisfies (1.1). Define \(L_{\beta ,\gamma }\) as in (1.8). Then the associated harmonic measure satisfies \(\omega _{\beta ,\gamma }^X \in A_\infty (\sigma )\). This means that for every \(\epsilon \in (0,1)\), there exists \(\delta \in (0,1)\), depending only on \(C_\sigma \), \(C_0\), \(\epsilon \), n, d, \(\beta \), and \(\gamma \), such that for each choice of \(x\in \Gamma \), \(r>0\), a Borel set \(E\subset B(x,r) \cap \Gamma \), and a corkscrew point \(X = A_{x,r}\) as in (1.7),

$$\begin{aligned} \frac{\omega ^X_{\beta ,\gamma }(E)}{\omega ^X_{\beta ,\gamma }(B(x,r) \cap \Gamma )}< \delta \Rightarrow \frac{\sigma (E)}{\sigma (B(x,r) \cap \Gamma )} < \epsilon . \end{aligned}$$
(1.14)

Observe that (1.14) implies that \(\omega ^X_{\beta ,\gamma }\) is absolutely continuous with respect to \(\sigma \) and, as mentioned earlier in the introduction, the \(A_\infty \) property can be seen as a quantitative, scale-invariant version of absolute continuity.

The fact that two measures \(\mu \), \(\nu \) satisfy \(\mu \in A_\infty (\nu )\) has several characterizations, as mentioned in [26, Theorem 1.4.13]. It is worth mentioning that, contrary to what the notation suggests, the \(A_\infty \) property is an equivalence relation, that is, \(\mu \in A_\infty (\nu )\) is actually the same as \(\nu \in A_\infty (\mu )\). Moreover, \(\mu \in A_\infty (\nu )\) is equivalent to reverse Hölder bounds at all scales on the density \(\frac{d\mu }{d\nu }\).
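Concretely, the reverse Hölder characterization asserts that there exist \(q>1\) and \(C>0\) such that, for every ball B centered on the support of \(\nu \),

$$\begin{aligned} \left( \frac{1}{\nu (B)} \int _B \Big (\frac{d\mu }{d\nu }\Big )^{q} \, d\nu \right) ^{1/q} \le \frac{C}{\nu (B)} \int _B \frac{d\mu }{d\nu } \, d\nu = C \, \frac{\mu (B)}{\nu (B)}. \end{aligned}$$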

1.3 Steps of the Proof of Theorem 1.13

We recall one last time that \(\Gamma \subset \mathbb {R}^n\) denotes an Ahlfors regular set of dimension \(d\le n-1\), and that \(\Omega := \mathbb {R}^n\setminus \Gamma \) is its complement. Moreover, \(\sigma \) stands for an Ahlfors regular measure that satisfies (1.1). This notation will stand for the rest of the article.

The main result holds only when \(d<n-1\), but some intermediate results pertaining to the geometry of uniformly rectifiable sets will be true for any integer \(d < n\).

In addition, we shall use the convenient symbols \(\lesssim \) and \(\eqsim \). The inequality \(A \lesssim B\) means that A is smaller than a constant times B, with a constant that depends on parameters that are either recalled or obvious from context. Similarly, \(A\eqsim B\) means that \(A\lesssim B\) and \(B \lesssim A\).

We shall prove the \(A_\infty \)-property of the harmonic measure via the following result.

Lemma 1.15

Let \(d< n-1\) and take \(\gamma \in (-1,1)\). Consider the operator \(L:= -\mathop {{\text {div}}}D_\beta ^{d+1+\gamma -n} \mathcal A \nabla \), where \(\beta >0\) and \(\mathcal A\) is a (measurable real) matrix function on \(\Omega \) that satisfies the usual elliptic conditions, that is

  1. (i)

    \(|\mathcal A(X)\xi \cdot \zeta | \le C_2 |\xi ||\zeta |\),

  2. (ii)

    \(|\mathcal A(X)\xi \cdot \xi | \ge (C_2)^{-1} |\xi |^2\).

Assume that we can find a constant \(K>0\) such that for any ball \(B \subset \mathbb {R}^n\) centered on \(\Gamma \) and any Borel set \(H \subset \Gamma \), the solution \(u_H\) defined by \(u_H(X):=\omega ^X_L(H)\) satisfies

$$\begin{aligned} \int _{B} |\nabla u_H|^2 D_\beta ^{d+2-n} \le K \sigma (B). \end{aligned}$$
(1.16)

Then the harmonic measure \(\omega ^X_L\) is \(A_\infty (\sigma )\) in the sense given in Theorem 1.13.

In the last lemma, the choices of \(\beta \) and \(\mathcal A\) do not really matter. The conditions we gave are the ones that allow us to fall within the scope of [16] and thus ensure the existence and the properties of the harmonic measure, namely the non-degeneracy of the harmonic measure, the fact that \(\omega ^X_{\beta ,\gamma }\) is a doubling measure, and the change of pole property (these results are respectively Lemma 15.1, Lemma 15.43, and Lemma 15.61 in [16]). Furthermore, an analogue of Lemma 1.15 also exists when \(d=n-1\); in this case, \(\Gamma \) is the boundary of an open domain \(\Omega \) satisfying extra topological conditions (Harnack chains and corkscrew points), which are automatically true when \(d<n-1\). But since we do not want to give details for a situation that we do not need, we excluded the case \(d=n-1\).

The proof of Lemma 1.15 will not be given here. Even if the lemma is stated in a slightly different context than what can currently be found in the literature, this type of result is not surprising: it has become classical among experts in the field, under the name “BMO-solvability implies \(A_\infty \)-absolute continuity of the harmonic measure”, and the proof would be just a small variation of what has already been done. Theorem 8.9 in [14] deals with the case where \(\gamma = 0\) and \(\Omega = \mathbb {R}^n \setminus \mathbb {R}^d\) (but those conditions are not relevant for the proof) and is stated in a manner similar to our lemma. Earlier references are Theorem 3.2 in [27] and Theorem 1.3 in [18]. See also Theorem 4.22 in [31], and, in [22], the proof of Lemma 5.24 and how it is paired with Theorem 1.3 to prove Theorem 5.30.

Our objective now switches to the proof of (1.16), which says that \(|\nabla u_H|^2 D_\beta ^{d+2-n} dX\) is a Carleson measure. Since Carleson measures have entered the discussion, let us introduce the following condition, which will play a crucial role in the article.

Definition 1.17

A function f on \(\Omega \) satisfies the Carleson measure condition if \(f\in L^\infty (\Omega )\) and the quantity \(|f(X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX\) is a Carleson measure, that is, if for any \(x\in \Gamma \) and \(r>0\), there holds

$$\begin{aligned} \int _{B(x,r)} |f(X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n}dX \le C \sigma (B(x,r)) \end{aligned}$$
(1.18)

with a constant \(C>0\) independent of x and r.

For short, we shall write \(f \in CM\), or \(f\in CM(C)\) when we want to refer to the constant C in (1.18).
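As a toy illustration of Definition 1.17 (a numerical sketch, not from the proof), take \(\Gamma \) to be the x-axis in \(\mathbb {R}^2\), so that \(d=1\), \(n=2\), \(\,\mathrm {dist}(X,\Gamma )^{d-n} = |y|^{-1}\), and \(\sigma (B(0,r)\cap \Gamma ) = 2r\). The sketch below estimates the ratio in (1.18): the indicator of the band \(\{1\le \,\mathrm {dist}(X,\Gamma )\le 2\}\) satisfies the condition uniformly in r, while the constant function 1 does not, since the integral diverges logarithmically near \(\Gamma \).

```python
import numpy as np

# Toy check of the Carleson condition (1.18) for Gamma = the x-axis in R^2:
# dist(X, Gamma) = |y|, d - n = -1, sigma(B(0, r) cap Gamma) = 2r.  For
# functions f = f(y), the integral over the ball B(0, r) reduces to a
# one-dimensional sum over horizontal slices.

def carleson_ratio(f, r, m=4000):
    """Midpoint-rule estimate of
       ( int_{B(0,r)} f^2 |y|^{-1} dX ) / sigma(B(0,r))   for f = f(y)."""
    h = 2.0 * r / m
    y = -r + (np.arange(m) + 0.5) * h                     # midpoints; m even, so y != 0
    width = 2.0 * np.sqrt(np.maximum(r**2 - y**2, 0.0))   # x-extent of B(0,r) at height y
    integral = np.sum(f(y) ** 2 / np.abs(y) * width) * h
    return integral / (2.0 * r)

band = lambda y: ((np.abs(y) >= 1.0) & (np.abs(y) <= 2.0)).astype(float)
one = lambda y: np.ones_like(y)

# Indicator of the band {1 <= dist(X, Gamma) <= 2}: the ratio stays bounded
# over all scales r, so this f satisfies (1.18) with a uniform constant.
print([round(carleson_ratio(band, r), 3) for r in (4.0, 40.0, 400.0)])

# Constant function 1: the ratio keeps growing as the grid resolves the
# region near Gamma, reflecting the logarithmic divergence of the integral.
print([round(carleson_ratio(one, 4.0, m), 3) for m in (200, 800, 3200)])
```

This also explains why Definition 1.17 forces some decay of f near \(\Gamma \) on average: bounded functions alone are not in CM.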

Recall that \(\,\mathrm {dist}(X,\Gamma ) \eqsim D_\beta (X)\), where \(D_\beta \) is the quantity defined in (1.9). We shall use and abuse this equivalence throughout the article. In particular, we shall use or prove the Carleson measure condition with the quantity \(D_\beta ^{d-n}(X)\) instead of \(\,\mathrm {dist}(X,\Gamma )^{d-n}\), and \(\beta \) will be chosen to fit our purpose.

To prove (1.16), we shall use Carleson perturbations in the spirit of Kenig and Pipher [28], as we already did in [14].

Lemma 1.19

Let \(d \le n-1\) and take \(\gamma \in (-1,1)\). Consider the operator \(L:= -\mathop {{\text {div}}}D_\beta ^{d+1+\gamma -n} \mathcal A \nabla \), where \(\beta >0\) and \(\mathcal A\) is a matrix function on \(\Omega \) that satisfies the usual elliptic conditions given in Lemma 1.15. Assume that we can find a scalar function b and a vector function \(\mathcal V\), both defined on \(\Omega \), such that

$$\begin{aligned} \mathop {{\text {div}}}[(b\mathcal A^T \nabla D_\beta + \mathcal V) D_\beta ^{d+1-n}] = 0 \end{aligned}$$
(1.20)

and such that b and \(\mathcal V\) satisfy

  1. (i)

    \(C_1^{-1} \le b \le C_1\),

  2. (ii)

    \(D_\beta \nabla b \in CM(C_1)\),

  3. (iii)

    \(|\mathcal V| \le C_1\),

  4. (iv)

    \(\mathcal V \in CM(C_1)\),

for some constant \(C_1>0\).

Then, for any ball \(B\subset \mathbb {R}^n\) centered on \(\Gamma \) and any weak solution u to \(Lu = 0\) in 2B, one has

$$\begin{aligned} \int _{B} |\nabla u|^2 D_\beta ^{d+2-n} \le C\left( \sup _{2B} |u|^2\right) \sigma (B), \end{aligned}$$
(1.21)

where C depends only on \(C_\sigma \), \(C_1\), \(\beta \), \(\gamma \), n, and d. In particular (1.16) holds.

The expression (1.20) has to be taken in a weak sense, that is, we shall use the fact that for any test function \(\varphi \in W^{1,1}(\Omega )\) with compact support in \(\Omega \), one has

$$\begin{aligned} \int _\Omega \nabla \varphi \cdot (b\mathcal A^T \nabla D_\beta + \mathcal V) D_\beta ^{d+1-n} = 0. \end{aligned}$$
(1.22)

Moreover, (1.21) is actually a typical \(S<N\) estimate. For a ball B centered on \(\Gamma \) and \(x\in \Gamma \), we define the cone in B with vertex x as \(\gamma ^B(x) := \{X\in B, \, |X-x| \le 2\,\mathrm {dist}(X,\Gamma )\}\) and then, for a function u defined on \(\Omega \),

$$\begin{aligned}N^B(u)(x) = \sup _{\gamma ^B(x)} |u|.\end{aligned}$$

We actually prove the following stronger version of (1.21):

$$\begin{aligned} \int _{B} |\nabla u|^2 D_\beta ^{d+2-n} \le C\Vert N^{2B}(u)\Vert _{L^2(6B)}^2. \end{aligned}$$
(1.23)

Lemma 1.19 is probably the key result, but at the same time, its proof is elementary and uses classical computations. Theorem 1.32 in [14] states a similar result when \(\Gamma = \mathbb {R}^d\). The proof in [14] relies on the fact that |t| is a solution to \(L_0u=0\), with \(L_0\) being a ‘Carleson perturbation’ of the considered operator L. In our case, the analogue of |t| is \(D_\beta \) and \(L_0 u = \mathop {{\text {div}}}[D_\beta ^{d-n} (bD_\beta \mathcal A^T \nabla + \mathcal V) u]\).

Of course, the above lemma alone does not look very appealing: what is the point of the assumption (1.20), which relies on the existence of two quantities b and \(\mathcal V\) that may be impossible to find? But its combination with the next geometric result makes the magic happen.

Lemma 1.24

Let \(\Gamma \) be a uniformly rectifiable set of dimension \(d<n-1\), i.e. such that (1.6) is verified. Let \(\beta >0\). Then there exist a scalar function b and a vector function \(\mathcal V\), both defined on \(\Omega \), such that

$$\begin{aligned} \int _{\Gamma } |X-y|^{-n}(X-y) d\sigma (y) = (b \nabla D_\beta + \mathcal V) D_\beta ^{d+1-n} \qquad \text { for } X\in \Omega \end{aligned}$$

and such that
  1. (i)

    \(C_1^{-1} \le b \le C_1\),

  2. (ii)

    \(D_\beta \nabla b \in CM(C_1)\),

  3. (iii)

    \(|\mathcal V| \le C_1\),

  4. (iv)

    \(\mathcal V \in CM(C_1)\),

where \(C_1\) is a constant that depends only on \(C_\sigma \), \(C_0\), \(\beta \), n, and d.

One extra observation is needed: the quantity \(\int _{\Gamma } |X-y|^{-n}(X-y) d\sigma (y)\) is divergence free. Indeed, we can locally interchange derivative and integral using the dominated convergence theorem in order to get

$$\begin{aligned} \mathop {{\text {div}}}\int _{\Gamma } |X-y|^{-n}(X-y) d\sigma (y) = \int _\Gamma {{\,\mathrm{div}\,}}_X [ |X-y|^{-n}(X-y) ] d\sigma (y) = 0. \end{aligned}$$
(1.25)
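For the reader's convenience, here is the pointwise computation behind (1.25): for fixed \(y\in \Gamma \) and \(X\ne y\),

$$\begin{aligned} {{\,\mathrm{div}\,}}_X \big [ |X-y|^{-n}(X-y) \big ] = n|X-y|^{-n} + \nabla _X\big (|X-y|^{-n}\big ) \cdot (X-y) = n|X-y|^{-n} - n|X-y|^{-n-2}|X-y|^2 = 0. \end{aligned}$$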

Therefore, Lemma 1.24 gives us exactly what we need to apply Lemma 1.19.

The limitation \(d<n-1\) comes from the fact that the quantity \(\int _{\Gamma } |X-y|^{-n}(X-y) d\sigma (y)\) is not defined when \(d=n-1\). Nothing stops Lemma 1.19 from being valid in the case \(d=n-1\), but our lack of a substitute for \(\int _{\Gamma } |X-y|^{-n}(X-y) d\sigma (y)\) in codimension 1 is why we believe that our proof cannot be (easily) adapted to the classical codimension 1 case.

The next result is an interesting variant of Lemma 1.24, which will be proved in Sect. 3, and used to prove Lemma 1.24.

Lemma 1.26

Let \(\Gamma \) be a uniformly rectifiable set of dimension \(d<n\), i.e. such that (1.6) is verified. Then for any \(\alpha ,\beta >0\), the quantity \(\,\mathrm {dist}(X,\Gamma ) \nabla [D_{\beta }/D_{\alpha }]\) satisfies the Carleson measure condition with a constant that depends only on \(C_\sigma \), \(C_0\), \(\alpha \), \(\beta \), n, and d.

We conclude this section by noting that the reader is welcome to check that the combination of Lemmas 1.24, 1.19, and 1.15 easily implies Theorem 1.13. As a consequence, the rest of the article is solely devoted to the proofs of Lemma 1.19 and Lemma 1.24 (in this order). The two proofs use different techniques, and the sections can be read independently.

2 Proof of Lemma 1.19

In this section, d can be any real value in \((0,n-1]\). The first step in the proof of Lemma 1.19 is to establish a Carleson inequality. Proofs of the Carleson inequality exist in \(\mathbb {R}^{n+1}_+\) (see [36]) and in \(\mathcal M \times (0,+\infty )\) where \(\mathcal M\) is a manifold (see [34]). We do not pretend that our context is more complicated or even that the arguments of the proof are different, but we could not pinpoint a good reference, so we believe that this is a good opportunity to discuss (and sketch the proof of) the Carleson inequality on Ahlfors regular sets.

We start with the definition. We say that a function f on \(\Omega \) defines a Carleson measure if f is Borel measurable and \(f(X)\,\mathrm {dist}(X,\Gamma )^{d-n}dX\) is a Carleson measure, that is, if there exists \(C>0\) such that for any ball B centered on \(\Gamma \),

$$\begin{aligned} \int _{B} |f(X)| \,\mathrm {dist}(X,\Gamma )^{d-n}dX \le C \sigma (B). \end{aligned}$$
(2.1)

The quantity \(\Vert f\Vert _{CM1}\) denotes the smallest constant C that satisfies (2.1) for every ball B centered on \(\Gamma \). Then we need cones \(\gamma (x)\) with vertex \(x\in \Gamma \), defined by

$$\begin{aligned} \gamma (x) := \{X\in \Omega , \, |X-x| \le 2 \,\mathrm {dist}(X,\Gamma )\}; \end{aligned}$$
(2.2)

the constant 2 in the definition of the cones \(\gamma (x)\) does not matter; any fixed constant \(\alpha >1\) would do (and the constants in the forthcoming estimates would then also depend on \(\alpha \)). The non-tangential maximal function N is

$$\begin{aligned} N(u)(x) := \sup _{\gamma (x)} |u|; \end{aligned}$$
(2.3)

defined, say, for u a continuous bounded function on \(\Omega \) and \(x\in \Gamma \).

We need the Carleson inequality.

Proposition 2.4

Let f be a function on \(\Omega \) such that \(f(X) \,\mathrm {dist}(X,\Gamma )^{d-n} dX\) is a Carleson measure. There exists a constant \(C>0\) that depends only on \(C_\sigma \) such that for any continuous bounded function u on \(\Omega \),

$$\begin{aligned} \left| \int _\Omega u(X) f(X) \,\mathrm {dist}(X,\Gamma )^{d-n} dX \right| \le C \Vert f\Vert _{CM1} \int _\Gamma N(u) \, d\sigma . \end{aligned}$$
(2.5)

In particular, if \(f\in CM(C_1)\) (see Definition 1.17), we easily deduce that

$$\begin{aligned}\int _\Omega |u(X)|^2 |f(X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX \le C C_1 \Vert N(u)\Vert ^2_{L^2(\Gamma ,\sigma )}.\end{aligned}$$

Proof

The second part of the proposition is immediate from the first part. Unsurprisingly, the proof of (2.5) is the same (with obvious modifications) as the one in [36, Section II.2.2, Theorem 2], which treats the case \(\Omega = \mathbb {R}^n_+\). \(\square \)

We now have the proper tools to prove Lemma 1.19.

Proof of Lemma 1.19

The result is a local one, so we use cut-off functions. Take \(\psi \in C^\infty _0(\mathbb {R})\) such that \(\psi \equiv 1\) on \([-1,1]\), \(\psi \) is compactly supported in \((-2,2)\), \(0\le \psi \le 1\), and \(|\psi '|\le 2\). Let \(B=B(x,r)\) be a ball centered on the boundary, and let \(\epsilon >0\). We define the function \(\phi _{B,\epsilon }\) on \(\Omega \) by

$$\begin{aligned} \phi _{B,\epsilon }(X) := \underbrace{\psi \left( \frac{\,\mathrm {dist}(X,B)}{10\,\mathrm {dist}(X,\Gamma )}\right) }_{\psi _1(X)} \underbrace{\psi \left( \frac{2\,\mathrm {dist}(X,B)}{r}\right) }_{\psi _2(X)} \underbrace{\psi \left( \frac{\epsilon }{\,\mathrm {dist}(X,\Gamma )}\right) }_{\psi _3(X)} \end{aligned}$$
(2.6)

Let us list a few properties of \(\phi _{B,\epsilon }\). We have

$$\begin{aligned} \phi _{B,\epsilon }(X) = 1 \quad X \in B, \,\mathrm {dist}(X,\Gamma ) \ge \epsilon . \end{aligned}$$
(2.7)

In addition, the function \(\phi _{B,\epsilon }\) is supported on

$$\begin{aligned} \mathrm {supp}\, \phi _{B,\epsilon } \subset \{X \in 2B, \, \,\mathrm {dist}(X,B) \le 20\,\mathrm {dist}(X,\Gamma ), \, \,\mathrm {dist}(X,\Gamma ) \ge \epsilon /2\}. \end{aligned}$$
(2.8)

Finally, we bound its gradient. With the help of (2.8), we deduce that

$$\begin{aligned} |\nabla \phi _{B,\epsilon }| \le \frac{10}{\,\mathrm {dist}(X,\Gamma )} \left( {\mathbbm {1}}_{\mathrm {supp}\, \nabla \psi _1} + {\mathbbm {1}}_{\mathrm {supp}\, \nabla \psi _2} + {\mathbbm {1}}_{\mathrm {supp}\, \nabla \psi _3} \right) \end{aligned}$$

We quickly observe that

$$\begin{aligned}\begin{aligned} \mathrm {supp}\, \nabla \psi _1&\subset \{10\,\mathrm {dist}(X,\Gamma )\le \,\mathrm {dist}(X,B) \le 20 \,\mathrm {dist}(X,\Gamma )\} \\ \mathrm {supp}\, \nabla \psi _3&\subset \{ \epsilon /2 \le \,\mathrm {dist}(X,\Gamma ) \le \epsilon \} \\ \mathrm {supp}\, \nabla \psi _2&\subset \{X\in 2B, \, r/2 \le \,\mathrm {dist}(X,B)\}. \end{aligned}\end{aligned}$$

Together with the facts that \(20 \,\mathrm {dist}(X,\Gamma ) \ge \,\mathrm {dist}(X,B)\) and \(\,\mathrm {dist}(X,\Gamma ) \le |X-x| \le 2r\) when \(X \in \mathrm {supp}\, \phi _{B,\epsilon }\), we obtain that

$$\begin{aligned} |\nabla \phi _{B,\epsilon }| \le \frac{100}{\,\mathrm {dist}(X,\Gamma )} \left[ {\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}\right] , \end{aligned}$$
(2.9)

where

$$\begin{aligned}&E_1:= \{X\in 2B, \, 10\,\mathrm {dist}(X,\Gamma ) \le \,\mathrm {dist}(X,B) \le 20 \,\mathrm {dist}(X,\Gamma )\}, \end{aligned}$$
(2.10)
$$\begin{aligned}&E_2:= \{X\in 2B, \, r/40 \le \,\mathrm {dist}(X,\Gamma ) \le 2r\}, \end{aligned}$$
(2.11)

and

$$\begin{aligned} E_3:= \{X\in 2B, \, \epsilon /2 \le \,\mathrm {dist}(X,\Gamma ) \le \epsilon \}. \end{aligned}$$
(2.12)

We claim that \({\mathbbm {1}}_{E_1}\), \({\mathbbm {1}}_{E_2}\), and \({\mathbbm {1}}_{E_3}\) all satisfy the Carleson measure condition, that is, for any \((y,s)\in \Gamma \times (0,+\infty )\),

$$\begin{aligned} \int _{B(y,s)} |{\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX \le Cs^d. \end{aligned}$$
(2.13)

Note that if we prove the claim, we also prove the same estimate without the square power. We shall demonstrate the claim separately for each \(E_i\). First, Fubini's theorem and the Ahlfors regularity of \(\sigma \) entail, for \(i\in \{1,2,3\}\), that

$$\begin{aligned}&\int _{B(y,s)} |{\mathbbm {1}}_{E_i}|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX\nonumber \\&\quad \le C \int _{B(y,10s)} \left( \int _{\gamma (z)} {\mathbbm {1}}_{E_i}(X) \,\mathrm {dist}(X,\Gamma )^{-n} dX\right) d\sigma (z), \end{aligned}$$
(2.14)

where the cone \(\gamma (z)\) is the one from (2.2). Therefore, it is enough to prove that for all \(z\in \Gamma \) and for \(i\in \{1,2,3\}\), one has

$$\begin{aligned} \int _{\gamma (z)} {\mathbbm {1}}_{E_i}(X) \,\mathrm {dist}(X,\Gamma )^{-n} dX \lesssim 1. \end{aligned}$$
(2.15)
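For the reader's convenience, let us sketch why the Fubini step (2.14) holds. Fix \(X\in B(y,s)\) and pick \(\hat{x}\in \Gamma \) such that \(|X-\hat{x}| = \,\mathrm {dist}(X,\Gamma )\). Every \(z \in \Gamma \cap B(\hat{x},\,\mathrm {dist}(X,\Gamma ))\) satisfies \(|X-z| \le 2\,\mathrm {dist}(X,\Gamma )\), that is \(X\in \gamma (z)\), and also \(|z-y| \le 3s \le 10s\). The Ahlfors regularity (1.1) of \(\sigma \) thus gives

$$\begin{aligned}\sigma (\{z\in \Gamma \cap B(y,10s):\, X \in \gamma (z)\}) \ge \sigma (\Gamma \cap B(\hat{x},\,\mathrm {dist}(X,\Gamma ))) \ge C_\sigma ^{-1} \,\mathrm {dist}(X,\Gamma )^{d},\end{aligned}$$

and applying Fubini's theorem to the right-hand side of (2.14) yields the desired bound.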

On \(E_3\), we have \(\,\mathrm {dist}(X,\Gamma ) \eqsim \epsilon \), so

$$\begin{aligned}\int _{\gamma (z)} {\mathbbm {1}}_{E_3}(X) \,\mathrm {dist}(X,\Gamma )^{-n} dX \lesssim \epsilon ^{-n} \int _{\gamma (z) \cap E_3} dX \lesssim \epsilon ^{-n} |B(z,2\epsilon )| \lesssim 1.\end{aligned}$$

The estimate (2.15) for \(i=3\) follows, and so does the claim (2.13) for \(E_3\). The claim (2.13) for \(E_2\) is similar to \(E_3\), and is left to the reader. We turn to the proof of (2.15), which implies (2.13), for \(E_1\). Let \(X \in \gamma (z) \cap E_1\). Having \(X\in E_1\) means that

$$\begin{aligned} \,\mathrm {dist}(X,B) \le 20\,\mathrm {dist}(X,\Gamma ) \le 2\,\mathrm {dist}(X,B). \end{aligned}$$
(2.16)

In addition, \(X\in \gamma (z)\) means that \(|X-z| \le 2\,\mathrm {dist}(X,\Gamma )\). Combining this latter fact with the second inequality in (2.16) leads to

$$\begin{aligned} |X-z| \le \frac{1}{5} \,\mathrm {dist}(X,B), \end{aligned}$$
(2.17)

and thus

$$\begin{aligned} \frac{4}{5} \,\mathrm {dist}(X,B)\le & {} \,\mathrm {dist}(X,B) - |X-z| \le \,\mathrm {dist}(z,B) \le |X-z| + \,\mathrm {dist}(X,B)\\\le & {} \frac{6}{5} \,\mathrm {dist}(X,B).\end{aligned}$$

The bounds above and (2.16) allow us to compare \(\,\mathrm {dist}(X,\Gamma )\) and \(\,\mathrm {dist}(z,B)\). We have

$$\begin{aligned} \frac{1}{24} \,\mathrm {dist}(z,B) \le \frac{1}{20}\,\mathrm {dist}(X,B) \le \,\mathrm {dist}(X,\Gamma ) \le \frac{1}{10} \,\mathrm {dist}(X,B) \le \frac{1}{8} \,\mathrm {dist}(z,B)\end{aligned}$$

which is very nice because we bounded \(\,\mathrm {dist}(X,\Gamma )\) by quantities that do not depend on \(X\in \gamma (z) \cap E_1\). All those estimates also allow us to say that \(|X-z| \le \frac{1}{4} \,\mathrm {dist}(z,B)\). As a consequence,

$$\begin{aligned}\int _{\gamma (z)} {\mathbbm {1}}_{E_1}(X) \,\mathrm {dist}(X,\Gamma )^{-n} dX \lesssim \,\mathrm {dist}(z,B)^{-n} \int _{\gamma (z) \cap B(z,\frac{1}{4}\,\mathrm {dist}(z,B))} dX \lesssim 1\end{aligned}$$

The claims (2.15) and (2.13) follow.
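For completeness, here is the computation for \(E_2\), which was left to the reader above: if \(X\in \gamma (z) \cap E_2\), then \(r/40 \le \,\mathrm {dist}(X,\Gamma ) \le 2r\) and \(|X-z| \le 2\,\mathrm {dist}(X,\Gamma ) \le 4r\), so

$$\begin{aligned}\int _{\gamma (z)} {\mathbbm {1}}_{E_2}(X) \,\mathrm {dist}(X,\Gamma )^{-n} dX \lesssim r^{-n} \int _{\gamma (z) \cap B(z,4r)} dX \lesssim r^{-n} |B(z,4r)| \lesssim 1.\end{aligned}$$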

Let us turn to the main part of the proof of Lemma 1.19. Let u be a weak solution to \(L_{\beta ,\gamma } u = 0\) in \(2B\cap \Omega \). We intend to prove that

$$\begin{aligned} \begin{aligned} \int _{\Omega } |\nabla u|^2 \phi _{B,\epsilon }^2 D_\beta ^{d+2-n}&\le C \int _\Gamma |N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi _{B,\epsilon }})|^2 \, d\sigma \\&\quad + C\left( \int _{\Omega } |\nabla u|^2 \phi _{B,\epsilon }^2 D_\beta ^{d+2-n}\right) ^\frac{1}{2} \left( \int _\Gamma |N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi _{B,\epsilon }})|^2 \, d\sigma \right) ^\frac{1}{2}, \end{aligned} \end{aligned}$$
(2.18)

with a constant \(C>0\) that depends only on d, n, \(C_\sigma \), and \(C_1\). Why is it enough? We used the cut-off function \(\phi _{B,\epsilon }\), which is compactly supported in \(\Omega \), and a weak solution u to \(Lu = 0\). Therefore all the quantities in (2.18) are finite, so the estimate (2.18) self-improves to

$$\begin{aligned}\int _{\Omega } |\nabla u|^2 \phi _{B,\epsilon }^2 D_\beta ^{d+2-n} \le C \int _\Gamma |N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi _{B,\epsilon }})|^2 \, d\sigma \end{aligned}$$
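Here the self-improvement is the usual absorption argument: (2.18) has the form \(a \le Cb + C a^{1/2} b^{1/2}\) with \(a := \int _{\Omega } |\nabla u|^2 \phi _{B,\epsilon }^2 D_\beta ^{d+2-n} < +\infty \), and Young's inequality gives

$$\begin{aligned}C a^{\frac{1}{2}} b^{\frac{1}{2}} \le \frac{1}{2}\, a + \frac{C^2}{2}\, b, \qquad \text {hence} \qquad a \le (2C + C^2)\, b.\end{aligned}$$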

The constant C above is independent of \(\epsilon \), so we can take the limit as \(\epsilon \rightarrow 0\). The lemma then follows from the properties of \(\phi _{B,\epsilon }\), in particular (2.7). In order to prove the stronger bound (1.23), we need to show that \(\Vert N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi _{B,\epsilon }})\Vert _{L^2} \le \Vert N^{2B}(u)\Vert _{L^2(6B)}\), which is implied by the fact that \(\gamma ^{2B}(x):= \gamma (x) \cap 2B = \emptyset \) whenever \(x\in \Gamma \setminus 6B\). The latter is fairly straightforward. Indeed, if \(X\in \gamma ^{2B}(x)\), then

$$\begin{aligned}\,\mathrm {dist}(x,B) \le \,\mathrm {dist}(X,B) + |X-x| \le r_B + 2\delta (X) \le 5r_B,\end{aligned}$$

where \(r_B\) is the radius of the ball B (centered on \(\Gamma \)). The bound (1.23) follows.

It remains to establish (2.18). For the rest of the proof, to lighten the notation, we write \(\phi \) for \(\phi _{B,\epsilon }\). Let \(b,\mathcal V\) be as in the assumptions of the lemma. We define \(H_{n-d-1}\) as

$$\begin{aligned} H_{n-d-1} := (b\mathcal A^T \nabla D_\beta + \mathcal V) D_{\beta }^{d+1-n}, \end{aligned}$$
(2.19)

which is a quantity locally bounded in \(\Omega \). The notation \(H_{n-d-1}\) above may look a bit weird at this point (why not call it simply H), but it is consistent with the one in Sect. 3. We assume \(\mathop {{\text {div}}}[H_{n-d-1}] = 0\) in a weak sense, that is, for any compactly supported test function \(\varphi \in W^{1,1}(\Omega )\), we have

$$\begin{aligned} \int _\Omega \nabla \varphi \cdot H_{n-d-1} = 0. \end{aligned}$$
(2.20)

Note also that \(|H_{n-d-1}| \lesssim D_\beta ^{d+1-n}\). The estimate is straightforward from the assumptions on b and \(\mathcal V\) once one knows that \(|\nabla D_\beta | \lesssim 1\). The latter fact is not surprising, since \(D_\beta \) is smooth and plays the role of a distance, and it is not very hard to prove from the definition either; but the proof is postponed to (3.25).

Since \(b \gtrsim 1\) and \(\mathcal A\) is elliptic, we get

$$\begin{aligned}\int _{\Omega } |\nabla u|^2 \phi ^2 D_\beta ^{d+2-n} \lesssim J:= \int _{\Omega } (\mathcal A \nabla u \cdot \nabla u) \, b\phi ^2 D_\beta ^{d+2-n}.\end{aligned}$$

We use the product rule to force every term into the second gradient,

$$\begin{aligned}\begin{aligned} J&= \int _{\Omega } \mathcal A \nabla u \cdot \nabla [u b\phi ^2 D_\beta ^{1-\gamma }] D_\beta ^{d+1+\gamma -n} - 2 \int _{\Omega } \mathcal A \nabla u \cdot \nabla \phi \, bu \phi \, D_\beta ^{d+2-n} \\&\quad - \int _{\Omega } \mathcal A \nabla u \cdot \nabla b \, u \phi ^2 \, D_\beta ^{d+2-n} - \int _{\Omega } \mathcal A \nabla u \cdot \nabla [D_\beta ^{1-\gamma }] \, bu \phi ^2 \, D_\beta ^{d+1+\gamma -n} \\&:= J_1 + J_2 + J_3 + J_4. \end{aligned}\end{aligned}$$

The term \(J_1\) equals 0, because u is a weak solution to \(Lu = -\mathop {{\text {div}}}D_\beta ^{d+1+\gamma -n} \mathcal A \nabla u = 0\), and because [13, Lemma 9.18] says that \(v:= ub \phi ^2 D_\beta ^{1-\gamma }\) is a valid test function. The terms \(J_2\) and \(J_3\) can be treated in a similar manner. Using the bounds on b and \(\mathcal A\), we have

$$\begin{aligned}\begin{aligned} |J_2 + J_3|&\lesssim \int _\Omega |\nabla u| \, |u| {\mathbbm {1}}_{\mathrm {supp}\, \phi } \, \phi \, [|\nabla \phi | + |\nabla b|] D_\beta ^{d+2-n} \\&\lesssim \left( \int _\Omega |\nabla u|^2 \phi ^2 D_\beta ^{d+2-n} \right) ^\frac{1}{2} \left( \int _\Omega [u{\mathbbm {1}}_{\mathrm {supp}\, \phi }]^2 [|D_\beta \nabla \phi |^2 + |D_\beta \nabla b|^2] D_\beta ^{d-n} \right) ^\frac{1}{2}.\end{aligned}\end{aligned}$$

Recall that, thanks to (1.10), the function \(D_\beta \) can be used like \(\,\mathrm {dist}(.,\Gamma )\), in particular in the Carleson inequality (Proposition 2.4) that we intend to invoke. But first, let us verify that we have the Carleson measure condition we need. The fact that \(D_\beta \nabla b \in CM\) is part of the assumptions of the lemma. The fact that \(D_\beta \nabla \phi \in CM\) is a consequence of (2.9), (2.13), and again (1.10). Proposition 2.4 then gives that

$$\begin{aligned}\int _\Omega [u{\mathbbm {1}}_{\mathrm {supp}\, \phi }]^2 [|D_\beta \nabla \phi |^2 + |D_\beta \nabla b|^2] D_\beta ^{d-n} \lesssim \int _\Gamma |N(u {\mathbbm {1}}_{\mathrm {supp}\, \phi })|^2 \, d\sigma ,\end{aligned}$$

that is

$$\begin{aligned}|J_2 + J_3| \lesssim \left( \int _{\Omega } |\nabla u|^2 \phi ^2 D_\beta ^{d+2-n}\right) ^\frac{1}{2} \left( \int _\Gamma |N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi })|^2 \, d\sigma \right) ^\frac{1}{2}\end{aligned}$$

which is perfect since the right-hand side above appears in the right-hand side of (2.18).

The last term that we need to treat is \(J_4\). We have

$$\begin{aligned} J_4= & {} -(1-\gamma ) \int _{\Omega } \mathcal A \nabla u \cdot [b D_\beta ^{d+1-n}\nabla D_\beta ] \, u \phi ^2\\= & {} (\gamma -1) \int _{\Omega } \nabla u \cdot [ D_\beta ^{d+1-n} b \mathcal A^T \nabla D_\beta ] \, u \phi ^2 .\end{aligned}$$

We use the relation (2.19) to get

$$\begin{aligned}\begin{aligned} J_4&= (1-\gamma ) \int _{\Omega } \nabla u \cdot \mathcal V \, u \phi ^2 \, D_{\beta }^{d+1-n} + (\gamma -1 ) \int _{\Omega } \nabla u \cdot H_{n-d-1} \, u \phi ^2 \\&:= J_{41} + J_{42}. \end{aligned}\end{aligned}$$

The integral \(J_{41}\) can be dealt with using the same computations as the ones we did for \(J_{2} + J_3\), using the fact that \(\mathcal V \in CM\). We are left with \(J_{42}\), where we force all the terms into the first gradient with the product rule. We obtain that

$$\begin{aligned}\begin{aligned} J_{42}&= \frac{\gamma -1}{2} \int _\Omega \nabla [u^2\phi ^2] \cdot H_{n-d-1} + (1-\gamma ) \int _\Omega \nabla \phi \cdot H_{n-d-1} \, u^2 \phi \\&:= J_{421} + J_{422}. \end{aligned}\end{aligned}$$

The term \(J_{421}\) is 0, due to (2.20). As for \(J_{422}\), similarly to what we did for \(J_2\), we use (2.9) and the fact that \(|H_{n-d-1}| \lesssim D_\beta ^{d+1-n}\) to get

$$\begin{aligned}\begin{aligned} J_{422}&\lesssim \int _\Omega [{\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}] \, [u^2 {\mathbbm {1}}_{\mathrm {supp}\, \phi }] \, D_\beta ^{d-n} \\&\lesssim \int _\Omega |{\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}|^2 \, [u^2 {\mathbbm {1}}_{\mathrm {supp}\, \phi }] \, D_\beta ^{d-n}.\end{aligned}\end{aligned}$$

Proposition 2.4 and (2.13) prove that

$$\begin{aligned}\begin{aligned} J_{422}&\lesssim \Vert {\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}\Vert _{CM1} \int _{\Gamma } N(u^2{\mathbbm {1}}_{\mathrm {supp}\, \phi }) \, d\sigma \\&\lesssim \int _{\Gamma } |N(u{\mathbbm {1}}_{\mathrm {supp}\, \phi })|^2 \, d\sigma \end{aligned}\end{aligned}$$

since \(\Vert {\mathbbm {1}}_{E_1} + {\mathbbm {1}}_{E_2} + {\mathbbm {1}}_{E_3}\Vert _{CM1}\) is bounded by a constant depending only on d, n, and \(C_\sigma \). The claim (2.18), and then the lemma, follows. \(\square \)

3 Proof of Lemma 1.24

In this section, \(\Gamma \) is a d-Ahlfors regular set of dimension \(d<n\). In particular, \(\Gamma \) is a closed non-empty set, so we can use a family of Whitney cubes \({\mathcal W}\) as constructed in [36]. The diameter of Q is written \(\ell (Q)\), and we notice that the side length of Q is \(\ell (Q)/\sqrt{n}\).

We record a few of the properties of \({\mathcal W}\) that we shall need. The collection \({\mathcal W}\) is the family of maximal dyadic cubes such that \(20Q \subset \Omega \), that is, we have

$$\begin{aligned} 20Q \subset \Omega \quad \text { but } \quad 60Q \cap \Gamma \ne \emptyset . \end{aligned}$$
(3.1)

If \(Q,R \in {\mathcal W}\) are such that \(2Q\cap 2R \ne \emptyset \), then \(\ell (R) \in \{\ell (Q)/2,\ell (Q),2\ell (Q)\}\). Thus, R is a dyadic cube that satisfies \(R \subset 8Q\) and \(\ell (R) \eqsim \ell (Q)\), and there are only a (uniformly) finite number of such cubes. This proves that there is a constant \(K:=K(n)\) such that

$$\begin{aligned} \text {the number of cubes }R\in {\mathcal W}\text { such that }2R \cap 2Q \ne \emptyset \text { is at most }K. \end{aligned}$$
(3.2)

For each \(Q\in {\mathcal W}\), we pick, once and for all, a point

$$\begin{aligned} \xi _Q \in 60Q \cap \Gamma . \end{aligned}$$
(3.3)

We write \(B_Q\) for the ball \(B_Q:= B(\xi _Q,2^5\ell (Q)) \supset Q\) and, using (1.3), we define the Wasserstein distance between two measures \(\mu _1\) and \(\mu _2\) relative to Q as

$$\begin{aligned} \,\mathrm {dist}_Q(\mu _1,\mu _2):= \,\mathrm {dist}_{\xi _Q,2^{10}\ell (Q)} (\mu _1,\mu _2). \end{aligned}$$
(3.4)

Consider

$$\begin{aligned} \alpha _Q:= \alpha _\sigma (\xi _Q,2^{10}\ell (Q)), \end{aligned}$$
(3.5)

and we have the following result.

Lemma 3.6

Let \(\mu _Q:=c_Q\mu _{P_Q}\) be a flat measure that satisfies \(\,\mathrm {dist}_{Q}(\mu _Q,\sigma ) \le 2 \alpha _Q\).

There exists a small constant \(\epsilon :=\epsilon (C_\sigma ,d) >0\) such that if \(\alpha _Q \le \epsilon \), we have \(\,\mathrm {dist}(\xi _Q,P_Q) \le 5\ell (Q)\), \(\,\mathrm {dist}(2Q,P_Q) \ge 5\ell (Q)/\sqrt{n}\), and \(\epsilon \le c_Q \le 1/\epsilon \).

Proof

We shall prove the result by contraposition. Assume first that \(\,\mathrm {dist}(\xi _Q,P_Q) \ge 5\ell (Q)\). In this case, we choose the 1-Lipschitz function

$$\begin{aligned}\tilde{f} := \max \{5\ell (Q) - |X-\xi _Q|,0\}\end{aligned}$$

in the definitions (1.3) and (1.4) to get

$$\begin{aligned}\left| \int \tilde{f} \, d\sigma - \int \tilde{f} \, d\mu _Q \right| \le 2 [2^{10}\ell (Q)]^{d+1} \alpha _Q.\end{aligned}$$

The function \(\tilde{f}\) is always 0 on \(P_Q\) and is at least \(4\ell (Q)\) on \(B(\xi _Q,\ell (Q))\). So the estimate above becomes

$$\begin{aligned} 4\ell (Q) \sigma (B(\xi _Q,\ell (Q))) \le 2 [2^{10}\ell (Q)]^{d+1} \alpha _Q.\end{aligned}$$

We use (1.1) to obtain a uniform lower bound on \(\alpha _Q\).

Assume now that \(c_Q\) is smaller than a constant \(\epsilon _0\) that depends only on d and \(C_\sigma \) and that will be chosen later. In this case we choose the 1-Lipschitz function \(\hat{f}(X) = \max \{\ell (Q) - \,\mathrm {dist}(X, B_Q),0\}\) in (1.3) to deduce that

$$\begin{aligned} \begin{aligned} \alpha _Q&\gtrsim \ell (Q)^{-d-1} \left| \int \hat{f} \, d\sigma - \int \hat{f} \, d\mu \right| \\&\gtrsim \ell (Q)^{-d} \left( \sigma (B_Q) - c_Q \mu _{P_Q}(2B_Q) \right) . \end{aligned} \end{aligned}$$
(3.7)

We choose \(\epsilon _0\) small enough depending only on \(C_\sigma \) and d, so that the quantity \(\sigma (B_Q) - c_Q \mu _{P_Q}(2B_Q)\) is positive and bigger than \(\sigma (B_Q)/2 \ge \ell (Q)^{d}/C_\sigma \). Hence, with our choice of \(\epsilon _0\), \(\alpha _Q\) is bigger than a constant that depends only on d and \(C_\sigma \).

By a similar argument, we prove that if \(c_Q\) is large and \(\,\mathrm {dist}(\xi _Q,P_Q) \le 8\ell (Q)\), then \(c_Q \mu _{P_Q}(B_Q) - \sigma (2B_Q)\) is positive and bigger than \(\ell (Q)^d\). Thus using \(\hat{f}\) in (1.3) as before also implies that \(\alpha _Q\) is bigger than a uniform constant.

Assume now that \(\,\mathrm {dist}(2Q,P_{Q}) \le 5\ell (Q)/\sqrt{n}\) but \(c_Q\) is bigger than \(\epsilon _0\). Then by (3.1), we can find \(x\in P_Q\) such that \(\,\mathrm {dist}(x,\Gamma ) \ge 10\ell (Q)/\sqrt{n}\). We construct the 1-Lipschitz function f on \(\mathbb {R}^n\) as

$$\begin{aligned} f(X) = \max \{10\ell (Q)/\sqrt{n} - |X-x|,0\}.\end{aligned}$$

The function f is supported in \(2^{5}B_Q\); this is a simple consequence of the fact that \(Q\subset B_Q\) and that x is not far from Q. We deduce by definition of \(\alpha _\sigma \) (and by our choice of flat measure \(\mu _Q\)) that

$$\begin{aligned}\left| \int f \, d\sigma - \int f \, d\mu _Q \right| \le 2 [2^{10}\ell (Q)]^{d+1} \alpha _Q.\end{aligned}$$

But f vanishes on \(\Gamma \), so the above estimate becomes

$$\begin{aligned}c_Q \int _{P_Q} f \, dy \le 2^{11+10d} \ell (Q)^{d+1} \alpha _Q, \end{aligned}$$

where dy is the d-dimensional Lebesgue measure on \(P_Q\). Estimating \(\int _{P_Q} f\, dy \gtrsim \ell (Q)^{d+1}\) with our choice of f, we deduce that

$$\begin{aligned} c_Q \le C \alpha _Q \end{aligned}$$
(3.8)

with a constant C that depends only on d and n. But we assumed that \(c_Q\) is large (bigger than \(\epsilon _0\)), so (3.8) implies that \(\alpha _Q\) is also bigger than a constant that depends only on d, n, and \(C_\sigma \). The lemma follows. \(\square \)

For each \(Q \in {\mathcal W}\), we pick a flat measure. If \(\alpha _Q \le \epsilon \), where \(\epsilon \) is the one in Lemma 3.6, then we take a constant \(c_Q\) and a d-plane \(P_Q\) such that the flat measure \(\mu _Q := c_Q \mu _{P_Q}\) satisfies \(\,\mathrm {dist}_Q(\mu _Q,\sigma ) \le 2 \alpha _Q\). If \(\alpha _Q \ge \epsilon \), then we take \(\mu _Q:= c_Q\mu _{P_Q}\) with \(c_Q := 1\) and \(P_Q\) any d-plane going through \(\xi _Q\) that does not intersect 20Q (this is possible since 20Q is a convex set that does not contain \(\xi _Q\)). The following properties are proved by Lemma 3.6 when \(\alpha _Q\) is small and are immediate by construction when \(\alpha _Q\) is large:

$$\begin{aligned} \,\mathrm {dist}(2Q,P_Q)\ge & {} 5 \ell (Q)/\sqrt{n} \ge C^{-1} \,\mathrm {dist}(Q,\Gamma ), \end{aligned}$$
(3.9)
$$\begin{aligned} \,\mathrm {dist}(\xi _Q,P_Q)\le & {} 5 \ell (Q), \end{aligned}$$
(3.10)
$$\begin{aligned} C^{-1}\le & {} c_Q \le C, \end{aligned}$$
(3.11)

and

$$\begin{aligned} \widetilde{\alpha }_Q := \,\mathrm {dist}_Q(\mu _Q,\sigma ) \le C \alpha _Q \end{aligned}$$
(3.12)

for a constant \(C>0\) that depends on d, n, and \(C_\sigma \).

Having introduced our choice of flat measures approximating \(\sigma \), it is now a good time to present the following lemma, which shows how we shall use the rectifiability of \(\Gamma \). Since we shall need them later, we set the quantities

$$\begin{aligned} \alpha _{Q,k} := \inf _{\mu \in \mathcal F} \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\sigma ,\mu ), \end{aligned}$$
(3.13)

and we observe that \(\alpha _Q\) is \(\alpha _{Q,0}\).

Lemma 3.14

Let \(\Gamma \) be uniformly rectifiable. For \(x\in \Gamma \) and \(r>0\), define \({\mathcal W}(x,r)\) as the sub-collection of \({\mathcal W}\) of cubes Q for which 2Q intersects \(B(x,r) \subset \mathbb {R}^n\). Then for \(x\in \Gamma \), \(r>0\), and \(k\in \mathbb N\),

$$\begin{aligned}\sum _{Q \in {\mathcal W}(x,r)} (\alpha _{Q,k})^2 \ell (Q)^{d} \le C(C_0+k)r^d,\end{aligned}$$

where \(C>0\) depends only on \(C_\sigma \), d and n. By (3.12), we immediately have

$$\begin{aligned}\sum _{Q \in {\mathcal W}(x,r)} (\widetilde{\alpha }_Q)^2 \ell (Q)^{d} \le CC_0r^d,\end{aligned}$$

where C depends on the same parameters.

Proof

The proof is pretty much immediate. Let \(Q\in {\mathcal W}(x,r)\). If \(y\in \Gamma \) and \(s>0\) are such that \(|y-\xi _Q| \le \ell (Q)\) and \(2^{11+k}\ell (Q) \le s \le 2^{12+k} \ell (Q)\), then \(B(y,s) \supset B(\xi _Q,2^{10+k}\ell (Q))\) and the set of functions \(Lip(y,s)\) is larger than \(Lip(\xi _Q,2^{10+k}\ell (Q))\). The definitions (1.3) and (1.4) entail that \(\alpha _{Q,k} \le \alpha _\sigma (y,s)\), which can be rewritten

$$\begin{aligned} (\alpha _{Q,k})^2 \ell (Q)^d \le C \int _{2^{11+k}\ell (Q)}^{2^{12+k}\ell (Q)} \int _{\Gamma \cap B(\xi _Q,\ell (Q))} |\alpha _\sigma (y,s)|^2 d\sigma (y) \frac{ds}{s}\end{aligned}$$

where the constant C depends only on \(C_\sigma \). Summing over Q gives that

$$\begin{aligned} \sum _{Q\in {\mathcal W}(x,r)} (\alpha _{Q,k})^2 \ell (Q)^d \lesssim \sum _{Q\in {\mathcal W}(x,r)}\int _{2^{11+k}\ell (Q)}^{2^{12+k}\ell (Q)} \int _{\Gamma \cap B(\xi _Q,\ell (Q))} |\alpha _\sigma (y,s)|^2 d\sigma (y) \frac{ds}{s} \end{aligned}$$
(3.15)

Notice that the collection

$$\begin{aligned} \{(2^{11+k}\ell (Q),2^{12+k}\ell (Q)) \times (\Gamma \cap B(\xi _Q,\ell (Q)))\}_{Q\in {\mathcal W}} \end{aligned}$$
(3.16)

is finitely overlapping in \((0,+\infty ) \times \Gamma \) (with a uniform constant that depends only on n). Indeed, an overlap appears only for cubes \(Q,R\) that have the same diameter D (recall that \({\mathcal W}\) is a collection of dyadic cubes in \(\mathbb {R}^n\)) and when \(|\xi _Q-\xi _R|\le D\). But the latter implies that \(100Q\cap 100R \ne \emptyset \), and given \(Q\in {\mathcal W}\), there is a uniformly finite number of Whitney cubes R that satisfy \(\ell (Q) = \ell (R)\) and \(100Q\cap 100R \ne \emptyset \).

We use the finite overlap of (3.16) in (3.15) to obtain

$$\begin{aligned}\sum _{Q\in {\mathcal W}(x,r)} (\alpha _{Q,k})^2 \ell (Q)^d \lesssim \int _{0}^{(2^{12+k}\sqrt{n})r} \int _{B(x,2^{12}\sqrt{n} r)} |\alpha _\sigma (y,s)|^2 d\sigma (y) \frac{ds}{s}\end{aligned}$$

because if a cube Q is in \({\mathcal W}(x,r)\), then we need to have—for instance—\(\mathrm {diam}(Q) \le \sqrt{n} \, r\) and \(\xi _Q\in B(x,100\sqrt{n} \, r)\). We divide the integral in s into two parts: when \(s \le (2^{12}\sqrt{n}) r\) and when \((2^{12}\sqrt{n}) r \le s \le (2^{12+k}\sqrt{n})r\), and by (1.5) and (1.6), we obtain

$$\begin{aligned}\begin{aligned} \sum _{Q\in {\mathcal W}(x,r)} (\alpha _{Q,k})^2 \ell (Q)^d&\lesssim \int _{0}^{(2^{12}\sqrt{n}) r} \int _{B(x,2^{12}\sqrt{n} r)} |\alpha _\sigma (y,s)|^2 d\sigma (y) \frac{ds}{s} \\&\quad + \int _{(2^{12}\sqrt{n}) r}^{(2^{12+k}\sqrt{n})r} \int _{B(x,2^{12}\sqrt{n} r)} |\alpha _\sigma (y,s)|^2 d\sigma (y) \frac{ds}{s} \\&\lesssim \sigma (B(x,2^{12}\sqrt{n} \, r)) \left[ C_0 + C_\sigma \int _{(2^{12}\sqrt{n}) r}^{(2^{12+k}\sqrt{n})r} \frac{ds}{s} \right] \\&\lesssim r^d [C_0 + k]. \end{aligned}\end{aligned}$$

The lemma follows. \(\square \)

We are almost done with the flat measures that approximate \(\sigma \). We shall just link a point in \(\Omega \) to a flat measure, but we do not need to be gentle, so we take, for \(X\in \Omega \), \(\mu _X := \mu _Q\) where \(Q \in {\mathcal W}\) is the only dyadic cube containing X. Similarly, \(c_X\) is \(c_Q\) and \(P_X\) is \(P_Q\) where \(X \in Q \in {\mathcal W}\). From (3.9) and (3.11), it is not very hard to see that we have

$$\begin{aligned} \,\mathrm {dist}(X,P_X) \ge C^{-1} \,\mathrm {dist}(X,\Gamma ). \end{aligned}$$
(3.17)

and

$$\begin{aligned} C^{-1} \le c_X \le C \end{aligned}$$
(3.18)

for a constant \(C>0\) that depends only on d, n, and \(C_\sigma \). It is also practical to introduce the alpha numbers relative to the point X:

$$\begin{aligned} \alpha (X) := \widetilde{\alpha }_Q = \,\mathrm {dist}_{\xi _Q,2^{10}\ell (Q)}(\sigma ,\mu _X). \end{aligned}$$
(3.19)

The second part of Lemma 3.14 can now be rewritten as:

Lemma 3.20

If \(\Gamma \) is uniformly rectifiable, for every \(x\in \Gamma \) and \(r>0\), there holds

$$\begin{aligned} \int _{B(x,r)} |\alpha (X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n}dX \le CC_0 r^d, \end{aligned}$$
(3.21)

where \(C>0\) depends only on \(C_\sigma \), d and n.

We are now prepared to talk about the soft distance \(D_\beta \). We shall also use the vector function defined on \(\Omega \) by

$$\begin{aligned} H_\beta (X) := \int _\Gamma |X-y|^{-d-\beta -1} (X-y) d\sigma (y). \end{aligned}$$
(3.22)

The purpose of Lemma 1.24, with our new notation, is to compare \(H_{n-d-1}\) and \(D_\beta ^{d+1-n} \nabla D_\beta \). We will actually prove a more general result, which compares \(H_\alpha \) and \(D_\beta ^{-\alpha } \nabla D_\beta \) for any \(\alpha ,\beta >0\). Before starting the long computations involving \(H_\alpha \) and \(D_\beta \), we introduce some notation and make a few observations.

From the definition (1.9) of \(D_\beta \), we see that \(H_\beta (X)\) is immediately bounded by \(D_\beta ^{-\beta }(X)\), and thus by (1.10),

$$\begin{aligned} |H_\beta | \lesssim D_\beta ^{-\beta } \lesssim \,\mathrm {dist}(X,\Gamma )^{-\beta }. \end{aligned}$$
(3.23)

Moreover, a direct computation shows that

$$\begin{aligned} \nabla D_\beta ^{-\beta } = -(d+\beta ) H_{\beta +1} \end{aligned}$$
(3.24)

which entails \(\nabla D_\beta = \frac{d+\beta }{\beta } D_\beta ^{\beta +1} H_{\beta +1}\) and then

$$\begin{aligned} |\nabla D_\beta | \lesssim 1. \end{aligned}$$
(3.25)
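Let us record the short justification of (3.25): the chain rule gives \(\nabla [D_\beta ^{-\beta }] = -\beta D_\beta ^{-\beta -1} \nabla D_\beta \), which combined with (3.24) yields the expression for \(\nabla D_\beta \) above; then (3.23), applied with \(\beta +1\) in place of \(\beta \), and (1.10) give

$$\begin{aligned}|\nabla D_\beta | = \frac{d+\beta }{\beta }\, D_\beta ^{\beta +1} |H_{\beta +1}| \lesssim D_\beta ^{\beta +1} \,\mathrm {dist}(X,\Gamma )^{-\beta -1} \lesssim 1.\end{aligned}$$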

Let \(c_\beta \) be the number

$$\begin{aligned} c_\beta := \int _{\mathbb {R}^d} (1+|y|^2)^{-\frac{d+\beta }{2}} \, dy. \end{aligned}$$
(3.26)

One can verify, by a simple change of variables, that for all \(Y,X\in \Omega \),

$$\begin{aligned} \int |Y-y|^{-d-\beta } \, d\mu _X(y) = c_\beta c_X \,\mathrm {dist}(Y,P_X)^{-\beta }. \end{aligned}$$
(3.27)
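For the reader's convenience, here is the change of variables behind (3.27). Write \(\rho := \,\mathrm {dist}(Y,P_X)\), let \(y_0\in P_X\) be the point of \(P_X\) closest to Y, and parametrize \(P_X\) by \(w \mapsto y_0 + w\); then \(|Y-y_0-w|^2 = \rho ^2 + |w|^2\), and, recalling that \(\mu _X = c_X \mu _{P_X}\) with \(\mu _{P_X}\) the d-dimensional Lebesgue measure on \(P_X\), the substitution \(w = \rho z\) gives

$$\begin{aligned}\int |Y-y|^{-d-\beta } \, d\mu _X(y) = c_X \int _{\mathbb {R}^d} (\rho ^2+|w|^2)^{-\frac{d+\beta }{2}} \, dw = c_X \rho ^{-\beta } \int _{\mathbb {R}^d} (1+|z|^2)^{-\frac{d+\beta }{2}} \, dz = c_\beta c_X \rho ^{-\beta }.\end{aligned}$$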

Let \(N_X\) be the unit vector defined for all \(X\in \Omega \) by \(N_X:=\left[ \nabla _Y \,\mathrm {dist}(Y,P_X)\right] _{Y=X}\). On the one hand, we have by symmetry of \(P_X\) that

$$\begin{aligned} \begin{aligned}&\int |X-y|^{-d-\beta -1}(X-y) \, d\mu _X(y)\\&\quad = \left( \int |X-y|^{-d-\beta -1}(X-y)\cdot N_X \, d\mu _X(y)\right) N_X \\&\quad = \left( \int |X-y|^{-d-\beta -1} \,\mathrm {dist}(X,P_X) \, d\mu _X(y)\right) N_X \\&\quad = c_{\beta +1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X, \end{aligned}\end{aligned}$$
(3.28)

but on the other hand, when \(\beta > 1\), we can write

$$\begin{aligned} \begin{aligned}&\int |X-y|^{-d-\beta -1}(X-y) \, d\mu _X(y) \\&\quad = -\frac{1}{\beta +d-1} \left( \nabla _Y \left[ \int |Y-y|^{-d-\beta +1} \, d\mu _X(y) \right] \right) _{Y=X} \\&\quad = \frac{\beta -1}{\beta +d - 1} c_{\beta -1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X \end{aligned}\end{aligned}$$
(3.29)

by (3.27). Combining the two last computations gives \((\beta +d-1)\, c_{\beta +1} = (\beta -1)\, c_{\beta -1}\) for \(\beta >1\); after the change of parameter \(\beta \rightarrow \beta +1\), this is a nice relation on the coefficients \(c_\beta \):

$$\begin{aligned} (\beta + d) c_{\beta +2} = \beta c_\beta \qquad \text { whenever } \beta >0. \end{aligned}$$
(3.30)
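As a quick sanity check, independent of the proofs, the relation (3.30) can be verified numerically from the classical closed form \(c_\beta = \pi ^{d/2}\, \Gamma (\beta /2)/\Gamma (\tfrac{d+\beta }{2})\) of the integral (3.26). The snippet below is only an illustrative sketch in Python (the function name `c` is ours):

```python
import math

def c(beta: float, d: int) -> float:
    """c_beta = integral over R^d of (1 + |y|^2)^{-(d+beta)/2} dy, evaluated
    through the classical Beta-integral identity pi^{d/2} Gamma(beta/2) / Gamma((d+beta)/2)."""
    return math.pi ** (d / 2) * math.gamma(beta / 2) / math.gamma((d + beta) / 2)

# For d = 1, beta = 1 the integral is the arctangent integral, equal to pi.
assert abs(c(1.0, 1) - math.pi) < 1e-12

# The recursion (3.30): (beta + d) c_{beta+2} = beta c_beta, for several d, beta.
for d in (1, 2, 5):
    for beta in (0.5, 1.0, 3.7):
        assert abs((beta + d) * c(beta + 2, d) - beta * c(beta, d)) < 1e-12 * c(beta, d)
```

The recursion is of course exact: it reduces to the functional equation \(\Gamma (x+1) = x\Gamma (x)\).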

The next lemma shows the cost of replacing the measure \(\sigma \) by \(\mu _X\) in \(D_\beta \) and \(H_\beta \).

Lemma 3.31

Let \(\beta >0\). There exists a constant \(C>0\) that depends only on d, n, \(\beta \), and \(C_\sigma \) such that for all \(X\in \Omega \),

$$\begin{aligned} |D_\beta ^{-\beta }(X) - c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }| \le C \,\mathrm {dist}(X,\Gamma )^{-\beta } \left( \alpha (X) + \sum _{k\in \mathbb {N}} 2^{-k\beta }\alpha _{Q,k} \right) , \end{aligned}$$
(3.32)

and

$$\begin{aligned} |H_\beta (X) - c_{\beta +1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X| \le C \,\mathrm {dist}(X,\Gamma )^{-\beta } \left( \alpha (X) + \sum _{k\in \mathbb {N}} 2^{-k\beta }\alpha _{Q,k} \right) , \end{aligned}$$
(3.33)

where \(Q\in {\mathcal W}\) is the only dyadic cube containing X.

Proof

The two estimates are proven in the same manner. We shall rigorously prove (3.32) first, and only explain the differences for (3.33). Let \(X\in \Omega \) and let \(Q\in {\mathcal W}\) be the cube containing X. We intend to cut the integral

$$\begin{aligned}D_\beta ^{-\beta }(X) = \int _\Gamma |X-y|^{-d-\beta } \, d\sigma (y)\end{aligned}$$

into pieces that live in annuli, so we use cut-off functions that live in the desired annuli. We start by taking a function \(\widetilde{\theta }_0 :\, \mathbb {R}^n \rightarrow \mathbb {R}\) supported in \(B(0,2^{10}\ell (Q))\) and satisfying \(0\le \widetilde{\theta }_0 \le 1\) everywhere, \(\widetilde{\theta }_0 \equiv 1\) on \(B(0,2^9\ell (Q))\), and \(|\nabla \widetilde{\theta }_0| \le 1/\ell (Q)\). Then we set for \(k\ge 1\) the functions

$$\begin{aligned}\widetilde{\theta }_k(y) := \widetilde{\theta }_0(2^{-k}y) - \widetilde{\theta }_0(2^{-k+1}y),\end{aligned}$$

and we translate these functions by taking for all \(k\in \mathbb {N}\)

$$\begin{aligned}\theta _k(y) := \widetilde{\theta }_k(y-\xi _Q).\end{aligned}$$

The functions \(\theta _k\) form a partition of unity, that is

$$\begin{aligned} \sum _{k \in \mathbb {N}} \theta _k \equiv 1. \end{aligned}$$
(3.34)

In addition, for \(k\ge 0\)

$$\begin{aligned} \theta _k\text { is supported in }B_k:= B(\xi _Q,2^{10+k}\ell (Q)), \end{aligned}$$
(3.35)

and for \(k\ge 1\),

$$\begin{aligned} \theta _k \equiv 0\text { on }B_{k-2}. \end{aligned}$$
(3.36)
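The construction above is the standard dyadic telescoping trick: the partial sums of the \(\theta _k\) collapse to a single rescaled bump, which converges to 1 at every fixed point. The following Python sketch checks (3.34)-(3.36) numerically, with the illustrative choices \(\ell (Q)=1\) and a piecewise-linear radial bump (both are assumptions made only for this example):

```python
def theta0(r: float) -> float:
    """Radial profile of theta~_0: equals 1 on B(0, 2^9), vanishes outside
    B(0, 2^10), linear in between, so |grad| <= 2^{-9} <= 1/ell(Q) with ell(Q) = 1."""
    if r <= 2**9:
        return 1.0
    if r >= 2**10:
        return 0.0
    return (2**10 - r) / 2**9

def theta(k: int, r: float) -> float:
    """theta_k as a function of r = |y - xi_Q|: a telescoping difference."""
    if k == 0:
        return theta0(r)
    return theta0(r / 2**k) - theta0(r / 2**(k - 1))

r = 123456.7
# (3.34): the partial sums telescope to theta0(r / 2^K), which equals 1 for K large.
assert abs(sum(theta(k, r) for k in range(40)) - 1.0) < 1e-12
# (3.35): theta_k is supported in B_k = B(xi_Q, 2^{10+k} ell(Q)).
assert theta(3, 2**13 + 1.0) == 0.0
# (3.36): for k >= 1, theta_k vanishes on B_{k-2} = B(xi_Q, 2^{8+k} ell(Q)).
assert theta(3, 2**11 - 1.0) == 0.0
```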

We shall use the decomposition \(D_\beta ^{-\beta } = \sum _{k\in \mathbb {N}} I_k\), where

$$\begin{aligned} I_k := \int _\Gamma |X-y|^{-d-\beta } \theta _k(y) \, d\sigma (y) = \int _\Gamma f_k(y) \, d\sigma (y) \end{aligned}$$
(3.37)

where \(f_k\) is the function defined on \(\mathbb {R}^n\) by

$$\begin{aligned} f_k(y) := |X-y|^{-d-\beta } \theta _k(y). \end{aligned}$$
(3.38)

We intend to approximate \(I_k\) by

$$\begin{aligned} J_k := \int _{P_X} f_k \, d\mu _X, \end{aligned}$$
(3.39)

which can be defined without problems thanks to (3.17). The sum of the \(J_k\)’s can be directly linked to (3.32). Indeed, observe that

$$\begin{aligned}\sum _{k\in \mathbb {N}} J_k= & {} \sum _{k\in \mathbb {N}} \int _{P_X} |X-y|^{-d-\beta } \theta _k(y) \, d\mu _X(y)\\= & {} \int |X-y|^{-d-\beta } \, d\mu _X(y) = c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }\end{aligned}$$

by (3.27). Therefore, we have

$$\begin{aligned} |D_\beta ^{-\beta }(X) - c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }| \le \sum _{k\in \mathbb {N}} |I_k - J_k|, \end{aligned}$$
(3.40)

and we just have to estimate the difference \(|I_k-J_k|\).

We start with \(k=0\). By (3.17), \(P_X\) stays far from X, so we can find \(\epsilon \in (0,1)\) independent of X such that \(B(X,\epsilon \,\mathrm {dist}(X,\Gamma ))\) intersects neither \(P_X\) nor \(\Gamma \). If the function \(\widetilde{f}_0\) is defined as

$$\begin{aligned} \widetilde{f}_0(y) := \min \{|X-y|^{-d-\beta },[\epsilon \,\mathrm {dist}(X,\Gamma )]^{-d-\beta }\} \theta _0(y), \end{aligned}$$
(3.41)

then \(\widetilde{f}_0 = f_0\) on \(\Gamma \cup P_X\) and we have

$$\begin{aligned} I_0 = \int \widetilde{f}_0 \, d\sigma \quad \text { and } \quad J_0 = \int \widetilde{f}_0 \, d\mu _X. \end{aligned}$$
(3.42)

By (3.35), the function \(f_0\) is supported in \(B_{0} = B(\xi _Q,2^{10}\ell (Q))\), and so is our new function \(\widetilde{f}_0\). But the function \(\widetilde{f}_0\) is now Lipschitz, with constant \(C\,\mathrm {dist}(X,\Gamma )^{-d-\beta -1}\), or \(C\ell (Q)^{-d-1-\beta }\) since \(X\in Q\) and Q is a Whitney cube. Hence,

$$\begin{aligned} |I_0 - J_0| = \left| \int \widetilde{f}_0 \, (d\sigma - d\mu _X)\right| \lesssim \ell (Q)^{-\beta } \alpha (X) \end{aligned}$$
(3.43)

by definition of \(\alpha (X)\).
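The mechanism behind (3.43) is worth spelling out. Assuming, as in Tolsa's \(\alpha \)-numbers, that the normalized distance \(\mathrm {dist}_{x,r}\) is tested against 1-Lipschitz functions supported in \(B(x,r)\) and carries the scaling factor \(r^{-d-1}\), any function f with Lipschitz constant L supported in \(B(x,r)\) satisfies

```latex
\left| \int f \,(d\sigma - d\mu) \right|
  \;\le\; L \, r^{d+1} \,\mathrm{dist}_{x,r}(\sigma,\mu).
```

Applied to \(f = \widetilde{f}_0\) with \(L \lesssim \ell (Q)^{-d-1-\beta }\) and \(r \eqsim \ell (Q)\), this yields the bound \(\ell (Q)^{-\beta } \alpha (X)\).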

We turn to the case \(k\ge 1\). Here, \(f_k\) is already Lipschitz. Indeed, it is not hard to check that, by construction, we have \(X\in B_{-5} = B(\xi _Q, 32\ell (Q))\), so (3.36) entails that \(f_k\) vanishes in a neighborhood of X. Note that

$$\begin{aligned} \text {the Lipschitz constant of }f_k\text { is at most }C[2^k\ell (Q)]^{-d-\beta -1}, \end{aligned}$$
(3.44)

and

$$\begin{aligned} f_k\text { is supported in }B_k = B(\xi _Q,2^{10+k}\ell (Q)). \end{aligned}$$
(3.45)

Those observations will be of use a bit later.

The \(\alpha \)-numbers will not allow us to compare \(I_k\) directly with \(J_k\), because \(\mu _X\) is not the correct flat measure approximating \(\sigma \) in the ball \(B_k\). So for \(j\in \mathbb {N}\), we take flat measures \(\mu _{Q,j}=c_{Q,j}\mu _{P_{Q,j}}\) such that

$$\begin{aligned}\,\mathrm {dist}_{\xi _Q,2^{10+j}\ell (Q)}(\sigma ,\mu _{Q,j}) \le 2 \alpha _{Q,j}.\end{aligned}$$

The key observation is the fact that

$$\begin{aligned} \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\mu _{Q,j-1},\mu _{Q,j}) \lesssim \alpha _{Q,j} \end{aligned}$$
(3.46)

for all \(1\le j\le k\). The rough idea is that one only needs to control the difference between \(c_{Q,j}\) and \(c_{Q,j-1}\), together with the distance and the angle between the planes \(P_{Q,j}\) and \(P_{Q,j-1}\), in order to bound the Wasserstein distance \(\,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\mu _{Q,j-1},\mu _{Q,j})\). All three of these quantities can be estimated by \(\alpha _{Q,j}\). A detailed explanation of (3.46) is given in Appendix A. For a similar reason

$$\begin{aligned} \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\mu _{Q,0},\mu _{X}) \lesssim \alpha _{Q,0} + \alpha (X). \end{aligned}$$
(3.47)

We deduce that

$$\begin{aligned} \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\mu _{Q,k},\mu _{X}) \lesssim \alpha (X) + \sum _{j=0}^k \alpha _{Q,j}. \end{aligned}$$
(3.48)
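Explicitly, (3.48) is obtained from (3.47) and (3.46) by telescoping through the triangle inequality:

```latex
\mathrm{dist}_{\xi_Q,2^{10+k}\ell(Q)}(\mu_{Q,k},\mu_X)
  \le \mathrm{dist}_{\xi_Q,2^{10+k}\ell(Q)}(\mu_{Q,0},\mu_X)
    + \sum_{j=1}^{k} \mathrm{dist}_{\xi_Q,2^{10+k}\ell(Q)}(\mu_{Q,j-1},\mu_{Q,j})
  \lesssim \alpha(X) + \sum_{j=0}^{k} \alpha_{Q,j}.
```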

Let us return to the \(I_k\) and \(J_k\) for \(k\ge 1\). We have

$$\begin{aligned}|I_k - J_k| \le \left| \int f_k \, d\sigma - \int f_k \, d\mu _{Q,k} \right| + \left| \int f_k \, d\mu _{Q,k} - \int f_k \, d\mu _{X} \right| .\end{aligned}$$

The properties (3.44) and (3.45) of \(f_k\) allow us to bound the above terms with the help of Wasserstein distances. One has

$$\begin{aligned} \begin{aligned} |I_k - J_k|&\lesssim [2^k\ell (Q)]^{-\beta } \left[ \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\sigma ,\mu _{Q,k}) + \,\mathrm {dist}_{\xi _Q,2^{10+k}\ell (Q)}(\mu _{Q,k},\mu _{X}) \right] \\&\lesssim [2^k\ell (Q)]^{-\beta } \left[ \alpha (X) + \sum _{j=0}^k \alpha _{Q,j} \right] \end{aligned} \end{aligned}$$
(3.49)

by the definition of \(\mu _{Q,k}\) and by (3.48).

The estimates (3.40), (3.43), and (3.49) prove that

$$\begin{aligned} \begin{aligned} |D_\beta ^{-\beta }(X) - c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }|&\lesssim \ell (Q)^{-\beta } \sum _{k\in \mathbb {N}} 2^{-\beta k} \left( \alpha (X) + \sum _{j=0}^k \alpha _{Q,j} \right) \\&\lesssim \ell (Q)^{-\beta } \left( \alpha (X) + \sum _{j\in \mathbb {N}} 2^{-j\beta } \alpha _{Q,j} \right) \end{aligned} \end{aligned}$$
(3.50)

by Fubini’s theorem. The estimate (3.32) follows by recalling that, since \(X\in Q\) and Q is a Whitney cube, we have \(\ell (Q) \eqsim \,\mathrm {dist}(X,\Gamma )\).
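The Fubini step in (3.50) is simply the interchange of the two sums: since \(\beta > 0\),

```latex
\sum_{k\in\mathbb{N}} 2^{-\beta k} \sum_{j=0}^{k} \alpha_{Q,j}
  = \sum_{j\in\mathbb{N}} \alpha_{Q,j} \sum_{k\ge j} 2^{-\beta k}
  = \frac{1}{1-2^{-\beta}} \sum_{j\in\mathbb{N}} 2^{-\beta j} \alpha_{Q,j},
\qquad
\sum_{k\in\mathbb{N}} 2^{-\beta k} \alpha(X) = \frac{\alpha(X)}{1-2^{-\beta}}.
```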

The estimate (3.33) can be established exactly as (3.32), by noticing that the argument only requires that the functions inside the integral—namely \(|X-y|^{-d-\beta }\) for (3.32) and \(|X-y|^{-d-\beta -1}(X-y)\) for (3.33)—are Lipschitz (on \(\Gamma \cup P_X\)) and have enough decay at infinity.

As before, recall that

$$\begin{aligned}H_\beta (X) := \int _\Gamma |X-y|^{-d-\beta -1} (X-y) \, d\sigma (y).\end{aligned}$$

We use the functions

$$\begin{aligned}f'_k(y) := |X-y|^{-d-\beta -1} (X-y) \theta _k(y)\end{aligned}$$

to make the decomposition \(H_\beta = \sum _{k\in \mathbb {N}} I'_k\) where

$$\begin{aligned}I'_k:= \int _\Gamma |X-y|^{-d-\beta -1} (X-y) \theta _k(y) d\sigma (y) = \int _\Gamma f'_k(y) d\sigma (y).\end{aligned}$$

We then define \(J'_k := \int _{P_X} f'_k \, d\mu _X\), which, by (3.28), satisfies

$$\begin{aligned}\sum _k J'_k = c_{\beta +1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X.\end{aligned}$$

And since

$$\begin{aligned}|H_\beta (X) - c_{\beta +1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X| \le \sum _{k\in \mathbb {N}} |I'_k - J'_k|,\end{aligned}$$

we only need to get appropriate bounds on \(|I'_k-J'_k|\). We use the same argument as the one given for the proof of (3.32), observing that the Lipschitz constant of \(f'_k\) is again at most \(C[2^k\ell (Q)]^{-d-\beta -1}\), and we obtain that

$$\begin{aligned}|I'_k - J'_k| \lesssim [2^k\ell (Q)]^{-\beta } \left[ \alpha (X) + \sum _{j=0}^k \alpha _{Q,j} \right] ,\end{aligned}$$

which implies

$$\begin{aligned}|H_\beta (X) - c_{\beta +1} c_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X| \lesssim \,\mathrm {dist}(X,\Gamma )^{-\beta } \left( \alpha (X) + \sum _{j\in \mathbb {N}} 2^{-j\beta } \alpha _{Q,j} \right) . \end{aligned}$$

The bound (3.33) and the lemma follow. \(\square \)

For the rest of the section, we take \(\alpha ,\beta >0\). We overload the notation \(\alpha \), which is used both for Tolsa's \(\alpha \)-numbers and as a parameter of the quantities \(D_\alpha \) and \(H_\alpha \), but the two play such different roles that we believe no confusion should arise. In any case, let us get rid of any mention of the Tolsa \(\alpha \)-numbers as quickly as possible. To that end, we define the quantity a(X) as

$$\begin{aligned} a(X) := \alpha (X) + \sum _{k\in \mathbb {N}} 2^{-k\min \{\alpha ,\beta \}} \alpha _{Q,k} \end{aligned}$$
(3.51)

where Q is the Whitney cube containing X. This quantity is convenient because

$$\begin{aligned} \text {If }\Gamma \text { is uniformly rectifiable, then }a\text { satisfies the Carleson measure condition.} \end{aligned}$$
(3.52)

Indeed, by the Cauchy–Schwarz inequality, one has

$$\begin{aligned}|a(X)|^2 \lesssim |\alpha (X)|^2 + \sum _{k\in \mathbb {N}} 2^{-k\min \{\alpha ,\beta \}} |\alpha _{Q,k}|^2\end{aligned}$$

so for every \(x\in \Gamma \) and \(r>0\), we get

$$\begin{aligned}\begin{aligned} \int _{B(x,r)} |a(X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX&\lesssim \int _{B(x,r)} |\alpha (X)|^2 \,\mathrm {dist}(X,\Gamma )^{d-n} dX \\&\quad + \sum _{k\in \mathbb {N}} 2^{-k\min \{\alpha ,\beta \}} \sum _{Q\in {\mathcal W}(x,r)} |\alpha _{Q,k}|^2 \ell (Q)^d \\&\lesssim C_0r^d \left( 1 + \sum _{k\in \mathbb {N}} 2^{-k\min \{\alpha ,\beta \}} k\right) \lesssim C_0r^d \end{aligned}\end{aligned}$$

where we used the fact that \(\,\mathrm {dist}(X,\Gamma ) \eqsim \ell (Q)\) in the first inequality, and Lemmas 3.14 and 3.20 in the second one. The uniform boundedness of a, which is required in the definition of the Carleson measure condition, is a consequence of the fact that \(\alpha _\sigma \)—and thus all the quantities constructed from it—is uniformly bounded, as recalled in the introduction.
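For completeness, the Cauchy–Schwarz step above, with \(\gamma := \min \{\alpha ,\beta \}\), reads

```latex
\Big(\sum_{k\in\mathbb{N}} 2^{-k\gamma}\alpha_{Q,k}\Big)^{2}
  = \Big(\sum_{k\in\mathbb{N}} 2^{-k\gamma/2}\cdot 2^{-k\gamma/2}\alpha_{Q,k}\Big)^{2}
  \le \Big(\sum_{k\in\mathbb{N}} 2^{-k\gamma}\Big)
      \Big(\sum_{k\in\mathbb{N}} 2^{-k\gamma}|\alpha_{Q,k}|^{2}\Big)
  \lesssim \sum_{k\in\mathbb{N}} 2^{-k\gamma}|\alpha_{Q,k}|^{2},
```

so that \(|a(X)|^2 \le 2|\alpha (X)|^2 + 2\big (\sum _k 2^{-k\gamma }\alpha _{Q,k}\big )^2\) gives the claimed bound.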

Proof of Lemma 1.26

Assume that \(\Gamma \) is uniformly rectifiable. We want to estimate \(\nabla [D_\beta /D_\alpha ]\), which can be rewritten as

$$\begin{aligned} \nabla \left( \frac{D_\beta }{D_\alpha } \right) &= \frac{D_\beta }{D_\alpha } \left( \dfrac{\nabla [D_\alpha ^{-\alpha }]}{\alpha D_\alpha ^{-\alpha }} - \dfrac{\nabla [D_\beta ^{-\beta }]}{\beta D_\beta ^{-\beta }} \right) \\ &= \frac{D_\beta }{D_\alpha } \left( \dfrac{(d+\beta ) H_{\beta +1}}{\beta D_\beta ^{-\beta }} - \dfrac{(d+\alpha ) H_{\alpha +1}}{\alpha D_\alpha ^{-\alpha }} \right) , \end{aligned}$$

by (3.24).
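The first equality above is the logarithmic-derivative identity obtained by writing \(D_\beta = (D_\beta ^{-\beta })^{-1/\beta }\); the second uses (3.24), which (differentiating \(D_\beta ^{-\beta }\) under the integral sign) takes the form \(\nabla [D_\beta ^{-\beta }] = -(d+\beta )H_{\beta +1}\). In detail,

```latex
\frac{\nabla D_\beta}{D_\beta}
  = -\frac{1}{\beta}\,\frac{\nabla[D_\beta^{-\beta}]}{D_\beta^{-\beta}},
\qquad
\nabla\Big(\frac{D_\beta}{D_\alpha}\Big)
  = \frac{D_\beta}{D_\alpha}
    \Big(\frac{\nabla D_\beta}{D_\beta} - \frac{\nabla D_\alpha}{D_\alpha}\Big).
```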

Lemma 3.31 implies that

$$\begin{aligned}&|D_\beta ^{-\beta } - c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }| \le C \,\mathrm {dist}(X,\Gamma )^{-\beta } a(X), \end{aligned}$$
(3.53)
$$\begin{aligned}&|H_{\beta +1} -c_{\beta +2} c_X \,\mathrm {dist}(X,P_X)^{-\beta -1} N_X| \le C \,\mathrm {dist}(X,\Gamma )^{-\beta -1} a(X), \end{aligned}$$
(3.54)

and analogous estimates for \(D_\alpha ^{-\alpha }\) and \(\nabla [D_\alpha ^{-\alpha }]\). So since \(D_\alpha \eqsim D_\beta \) by (1.10),

$$\begin{aligned} \left| \nabla \left( \frac{D_\beta }{D_\alpha } \right) \right| \lesssim \left| \dfrac{(d+\beta ) H_{\beta +1}}{\beta D_\beta ^{-\beta }} - \dfrac{N_X}{\,\mathrm {dist}(X,P_X)}\right| + \left| \dfrac{(d+\alpha ) H_{\alpha +1}}{\alpha D_\alpha ^{-\alpha }} - \dfrac{N_X}{\,\mathrm {dist}(X,P_X)}\right| . \end{aligned}$$
(3.55)

We shall only bound the first term on the right-hand side above, since the bound on the second term is obtained in the same way by replacing \(\beta \) with \(\alpha \). Recall that, by (1.10) and (3.17),

$$\begin{aligned} D_\beta (X) \eqsim \,\mathrm {dist}(X,\Gamma ) \eqsim \,\mathrm {dist}(X,P_X), \end{aligned}$$
(3.56)

the bound \(\,\mathrm {dist}(X,P_X) \le C \,\mathrm {dist}(X,\Gamma )\) being an easy consequence of (3.10). So we have

$$\begin{aligned} \begin{aligned}&\left| \dfrac{(d+\beta ) H_{\beta +1}}{\beta D_\beta ^{-\beta }} - \dfrac{N_X}{\,\mathrm {dist}(X,P_X)}\right| \\&\quad \lesssim \,\mathrm {dist}(X,P_X)^{\beta } \left| H_{\beta +1}- \dfrac{\beta D_\beta ^{-\beta }N_X }{(d+\beta )\,\mathrm {dist}(X,P_X)}\right| \\&\quad \le \,\mathrm {dist}(X,P_X)^{\beta } \Big ( \left| H_{\beta +1} - c_{\beta +2} c_X N_X \,\mathrm {dist}(X,P_X)^{-\beta -1}\right| \\&\qquad + \left. \frac{\beta |N_X|}{(d+\beta ) \,\mathrm {dist}(X,P_X)} \left| D_\beta ^{-\beta } - \frac{(d+\beta )c_{\beta +2}}{\beta } c_X \,\mathrm {dist}(X,P_X)^{-\beta }\right| \right) \end{aligned}\end{aligned}$$

But since \((d+\beta )c_{\beta +2}/\beta = c_\beta \) due to (3.30), we obtain

$$\begin{aligned} \begin{aligned}&\left| \dfrac{(d+\beta ) H_{\beta +1}}{\beta D_\beta ^{-\beta }} - \dfrac{N_X}{\,\mathrm {dist}(X,P_X)}\right| \\&\quad \le \,\mathrm {dist}(X,P_X)^{\beta } \Big ( \left| H_{\beta +1} - c_{\beta +2} c_X N_X \,\mathrm {dist}(X,P_X)^{-\beta -1}\right| \\&\qquad + \left. \frac{\beta }{(d+\beta ) \,\mathrm {dist}(X,P_X)} \left| D_\beta ^{-\beta } - c_\beta c_X \,\mathrm {dist}(X,P_X)^{-\beta }\right| \right) \\&\quad \lesssim \frac{a(X)}{\,\mathrm {dist}(X,P_X)} \eqsim \,\mathrm {dist}(X,\Gamma )^{-1} a(X) \end{aligned}\end{aligned}$$

by (3.53) and (3.54). Together with (3.55), we conclude that

$$\begin{aligned}\,\mathrm {dist}(X,\Gamma ) \left| \nabla \left( \frac{D_\beta }{D_\alpha } \right) \right| \lesssim a(X),\end{aligned}$$

which concludes the proof of Lemma 1.26 since a(X) satisfies the Carleson measure condition. \(\square \)
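The normalization identity \((d+\beta )c_{\beta +2}/\beta = c_\beta \) from (3.30), used in the proof above, can be sanity-checked numerically, again under the assumed flat-case formula \(c_\gamma = \int _{\mathbb {R}}(1+t^2)^{-(d+\gamma )/2}\,dt\) with \(d = 1\) (the normalization consistent with (3.27); it is an assumption of this illustration).

```python
import math
from scipy.integrate import quad

# Assumed flat-case normalization of the constants (d = 1):
# c_gamma = int_R (1 + t^2)^(-(d+gamma)/2) dt.
d = 1

def c(gamma):
    val, _ = quad(lambda t: (1 + t * t) ** (-(d + gamma) / 2),
                  -math.inf, math.inf)
    return val

# Check (d + beta) * c_{beta+2} / beta = c_beta for a few values of beta.
for beta in (0.5, 1.0, 2.3):
    lhs = (d + beta) * c(beta + 2) / beta
    rhs = c(beta)
    assert abs(lhs - rhs) < 1e-6 * rhs
print("identity verified")
```

The identity reflects \(c_{\beta +2} = c_\beta \,\beta /(d+\beta )\), which follows from the functional equation of the Gamma function in the closed form \(c_\gamma = \pi ^{d/2}\Gamma (\gamma /2)/\Gamma ((d+\gamma )/2)\).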

Let us finish the section with the following lemma. Lemma 1.24 from the introduction is a consequence of the next lemma in the particular case where \(\alpha = n-d-1>0\).

Lemma 3.57

Let \(\Gamma \) be uniformly rectifiable and let \(\alpha ,\beta >0\). Then there exist a scalar function b and a vector function \(\mathcal V\), both defined on \(\Omega \), such that

$$\begin{aligned} H_{\alpha } = (b \nabla D_\beta + \mathcal V) D_\beta ^{-\alpha } \qquad \text { for } X\in \Omega , \end{aligned}$$

and such that
  1. (i)

    \(C_1^{-1} \le b \le C_1\),

  2. (ii)

    \(D_\beta \nabla b \in CM(C_1)\),

  3. (iii)

    \(|\mathcal V| \le C_1\),

  4. (iv)

    \(\mathcal V \in CM(C_1)\),

where \(C_1\) is a constant that depends only on \(C_\sigma \), \(C_0\), \(\alpha \), \(\beta \), n, and d.

Proof of Lemma 3.57

First, let us explain why we have, for any real \(\nu \),

$$\begin{aligned} |D_\beta ^{\nu }(X) - c_\beta ^{-\nu /\beta } c_X^{-\nu /\beta } \,\mathrm {dist}(X,P_X)^\nu | \le C \,\mathrm {dist}(X,\Gamma )^{\nu } a(X). \end{aligned}$$
(3.58)

By the Mean Value Theorem, if \(x,y>0\), we have

$$\begin{aligned} |x^{-\nu /\beta } - y^{-\nu /\beta }| \le \left( \sup _{z\in [x,y]} \frac{|\nu |}{\beta }z^{-\nu /\beta -1}\right) |x-y|. \end{aligned}$$
(3.59)

We apply the above estimate to \(x = D_\beta ^{-\beta }/\,\mathrm {dist}(X,P_X)^{-\beta }\) and \(y = c_\beta c_X\). Since x and y are uniformly (in X) bounded from above and from below by positive constants (see (3.56), (3.18)), we deduce that \(\sup _{z\in [x,y]} \frac{|\nu |}{\beta }z^{-\nu /\beta -1}\) is bounded uniformly in X. As a consequence,

$$\begin{aligned} \left| \frac{D_\beta ^\nu (X)}{\,\mathrm {dist}(X,P_X)^\nu } - c_\beta ^{-\nu /\beta } c_X^{-\nu /\beta } \right| \lesssim \left| \frac{D_\beta ^{-\beta }(X)}{\,\mathrm {dist}(X,P_X)^{-\beta }} - c_\beta c_X \right| \lesssim a(X) \end{aligned}$$
(3.60)

thanks to (3.32). The claim (3.58) follows once (3.56) is invoked again.

Then, we need a smooth substitute for the function \(X \rightarrow c_X\). We could have been more careful when building \(c_X\), so that it would have been smooth from the beginning, but we chose another path. We write \(s_X\) for

$$\begin{aligned} s_X := c_1 c_{1/2}^{-2} \frac{D_1(X)}{D_{1/2}(X)}, \end{aligned}$$
(3.61)

where \(c_1,c_{1/2}\) are the coefficients defined in (3.26) for the values \(\beta = 1,1/2\), and \(D_1,D_{1/2}\) are the smooth distances defined in (1.9). We claim that

$$\begin{aligned} |c_X - s_X| \lesssim a(X) \qquad \text { for } X\in \Omega . \end{aligned}$$
(3.62)

Indeed, we call (3.56) and (3.18) to justify that

$$\begin{aligned} |s_X - c_X| &\lesssim \frac{1}{\,\mathrm {dist}(X,\Gamma )} \left| \frac{c_1}{c_X} D_1(X) - c_{1/2}^2 D_{1/2}(X) \right| \\ &\le \frac{1}{\,\mathrm {dist}(X,\Gamma )}\left( \frac{c_1}{c_X} \left| D_1(X) - c_1^{-1} c_X^{-1} \,\mathrm {dist}(X,P_X)\right| \right. \\ &\qquad \left. + c_{1/2}^2 \left| D_{1/2}(X) - c_{1/2}^{-2} c_X^{-2} \,\mathrm {dist}(X,P_X)\right| \right) \\ &\lesssim a(X) \end{aligned}$$
(3.63)

by (3.58). The claim (3.62) follows.

The quantities \(s_X\) and \(c_X\) are uniformly bounded from above and below by a positive constant, as a consequence of (1.10) and (3.18). So if we mimic the proof of (3.58), using the Mean Value Theorem, we get that for any real number \(\nu \), one has

$$\begin{aligned} |c_X^{\nu } - s_X^\nu | \lesssim a(X). \end{aligned}$$
(3.64)

We can now replace \(c_X\) by \(s_X\) in the estimates (3.58) and (3.33). Indeed,

$$\begin{aligned}\begin{aligned} |D_\beta ^{\nu }(X) - c_\beta ^{-\nu /\beta } s_X^{-\nu /\beta } \,\mathrm {dist}(X,P_X)^\nu |&\le |D_\beta ^{\nu }(X) - c_\beta ^{-\nu /\beta } c_X^{-\nu /\beta } \,\mathrm {dist}(X,P_X)^\nu | \\&\quad + c_\beta ^{-\nu /\beta } \,\mathrm {dist}(X,P_X)^\nu | c_X^{-\nu /\beta } - s_X^{-\nu /\beta }| \end{aligned}\end{aligned}$$

So by (3.58), (3.56), and (3.64), for any power \(\nu \in \mathbb {R}\) and any \(\beta >0\),

$$\begin{aligned} |D_\beta ^{\nu }(X) - c_\beta ^{-\nu /\beta } s_X^{-\nu /\beta } \,\mathrm {dist}(X,P_X)^\nu | \lesssim \,\mathrm {dist}(X,\Gamma )^\nu a(X). \end{aligned}$$
(3.65)

Similarly, we have for \(\beta >0\),

$$\begin{aligned} |H_\beta (X) - c_{\beta +1} s_X \,\mathrm {dist}(X,P_X)^{-\beta } N_X| \lesssim \,\mathrm {dist}(X,\Gamma )^{-\beta } a(X). \end{aligned}$$
(3.66)

This finishes our preliminary estimates. We set

$$\begin{aligned}b(X) := \frac{\beta c_{\alpha +1}}{(d+\beta )c_{\beta +2}} [c_\beta s_X]^{(\beta + 1 - \alpha )/\beta }.\end{aligned}$$

Since \(\nabla b\) is—up to a constant—\(s_X^{(1-\alpha )/\beta } \nabla s_X\), and since \(s_X\) is the quotient \(D_1/D_{1/2}\), Lemma 1.26 and (1.10) give the conclusions (i) and (ii) of the lemma under proof. So it remains to check that

$$\begin{aligned}\mathcal V:= D_\beta ^{\alpha } H_\alpha - b\nabla D_\beta \end{aligned}$$

is uniformly bounded and satisfies the Carleson measure condition. The uniform bound on \(\mathcal V\) is easy, and is a consequence of the fact that \(|H_\alpha | \lesssim D_\alpha ^{-\alpha } \eqsim D_\beta ^{-\alpha }\) and \(|\nabla D_\beta | \lesssim 1\) (see (3.23) and (3.25)).

We are left with the proof of (iv), i.e. the fact that \(\mathcal V \in CM\). Thanks to (3.24), we have that

$$\begin{aligned} \begin{aligned} |\mathcal V|&= \left| D_\beta ^{\alpha } H_\alpha - \frac{d+\beta }{\beta } b D_\beta ^{\beta +1} H_{\beta +1} \right| \\&\le D_\beta ^{\beta +1} \Big ( \left| D_\beta ^{\alpha -\beta -1} H_\alpha - c_{\alpha +1} [c_\beta ]^{(\beta +1-\alpha )/\beta } [s_X]^{1 + \frac{\beta + 1 - \alpha }{\beta }} \,\mathrm {dist}(X,P_X)^{-\beta -1} N_X \right| \\&\qquad + \left. \left| \frac{d+\beta }{\beta }b H_{\beta +1} - c_{\alpha +1} [c_\beta ]^{(\beta +1-\alpha )/\beta } [s_X]^{1 + \frac{\beta + 1 - \alpha }{\beta }} \,\mathrm {dist}(X,P_X)^{-\beta -1} N_X \right| \right) \\&:= D_{\beta }^{\beta +1} (V_1 + V_2). \end{aligned}\end{aligned}$$
(3.67)

We start by bounding \(V_2\). We use the expression of b to get

$$\begin{aligned} \begin{aligned} V_2&= \frac{c_{\alpha +1}}{c_{\beta +2}} [c_\beta s_X]^{(\beta +1-\alpha )/\beta } |H_{\beta +1} - c_{\beta +2} s_X \,\mathrm {dist}(X,P_X)^{-\beta -1} N_X | \\&\lesssim \,\mathrm {dist}(X,\Gamma )^{-\beta -1} a(X) \lesssim D_\beta ^{-\beta -1} a(X) \end{aligned}\end{aligned}$$
(3.68)

by (3.18), (3.66), and then (1.10). As for \(V_1\), we write

$$\begin{aligned} \begin{aligned} V_1&= D_\beta ^{\alpha -\beta -1} \left| H_\alpha - c_{\alpha +1} [c_\beta ]^{(\beta +1-\alpha )/\beta } [s_X]^{1 + \frac{\beta + 1 - \alpha }{\beta }} \,\mathrm {dist}(X,P_X)^{-\beta -1} D_\beta ^{\beta +1-\alpha } N_X \right| \\&\le D_\beta ^{\alpha -\beta -1} \Big (\left| H_\alpha - c_{\alpha +1} s_X \,\mathrm {dist}(X,P_X)^{-\alpha } N_X \right| \\&\quad + c_{\alpha +1} [c_\beta ]^{(\beta +1-\alpha )/\beta } [s_X]^{1 + \frac{\beta + 1 - \alpha }{\beta }} \,\mathrm {dist}(X,P_X)^{-\beta -1} \\&\quad \left| D_\beta ^{\beta +1-\alpha } - [c_{\beta }s_X]^{(\alpha - \beta -1)/\beta } \,\mathrm {dist}(X,P_X)^{\beta +1-\alpha } \right| \Big ) \\&\lesssim D_\beta ^{\alpha -\beta -1} (\,\mathrm {dist}(X,\Gamma )^{-\alpha } + \,\mathrm {dist}(X,P_X)^{-\beta -1} \,\mathrm {dist}(X,\Gamma )^{\beta +1-\alpha } ) a(X) \\&\lesssim D_\beta ^{-\beta -1} a(X) \end{aligned}\end{aligned}$$
(3.69)

by (3.65), (3.66), and then (3.56). We combine the estimates (3.67)–(3.69) to deduce that

$$\begin{aligned}|\mathcal V| \lesssim a(X).\end{aligned}$$

Since by (3.52) the quantity a satisfies the Carleson measure condition, \(|\mathcal V|\) also satisfies the Carleson measure condition. Conclusion (iv) and the lemma follow. \(\square \)