1 Introduction

Carleson measures (see Definition 2.7) and their connections to geometry, harmonic analysis and partial differential equations (PDE) have been studied actively over the past decades; see [51] for a recent survey of many key developments and related research. Carleson-type estimates are particularly powerful for measures \(\mu _F\) such that \(d\mu _F = |\nabla F(Y)|^2 {\text {dist}}(Y,\partial \Omega ) \, dY\) and F is a solution to an elliptic PDE in the set \(\Omega \): they can be used, for example, to characterize functions of bounded mean oscillation (BMO) [25], quantitative boundary geometry [30, 40], and absolute continuity properties of harmonic measure [34]. As discussed in [28, Chapters VI and VIII], Carleson-type estimates for measures \({\widetilde{\mu }}_F\) such that \(d{\widetilde{\mu }}_F = |\nabla F(Y)| \, dY\) would be very powerful, but they fail even for harmonic functions in the unit disk. To circumvent this problem, Varopoulos [60] introduced a way to approximate harmonic functions in the \(L^\infty \) sense by other functions satisfying these estimates. This \(\varepsilon \)-approximability theory has been studied from many points of view in recent years [4,5,6, 15, 28,29,30, 36, 39, 40, 45, 48].

The initial motivation behind \(\varepsilon \)-approximability theory was to prove an extension theorem for BMO functions inspired by Carleson’s Corona Theorem [10]. Varopoulos [59, 60] showed that any compactly supported BMO function f in \({\mathbb {R}}^n\) has a smooth extension \(V_f\) to \(\mathbb {R}^n\times (0,\infty )\) such that the extension converges non-tangentially (see Definition 2.8) back to f and \(|\nabla V_f(Y)| \, dY\) defines a Carleson measure. Inspired by the power of these estimates, Hofmann and the third named author [37] recently showed that uniform rectifiability (see Definition 2.6) of the boundary is enough to guarantee the existence of such Varopoulos-type extensions. Since Carleson-type estimates for harmonic functions can be used to characterize uniform rectifiability [30, 40] or even stronger geometric properties [3], it is natural to ask whether the existence of Varopoulos-type extensions (which satisfy better Carleson-type estimates than harmonic functions) characterizes some quantitative geometric property of the boundary.

In this paper, we show that the existence of Varopoulos-type extensions does not characterize uniform rectifiability: such extensions can exist even in sets with unrectifiable boundaries. Our main result is the following \(\varepsilon \)-approximability result, which can be used to build Varopoulos-type extensions. Throughout, set \(\sigma {:}{=}{\mathcal {H}}^n|_{\partial \Omega }\).

Theorem 1.1

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), be a uniform domain (see Definition 2.3) with n-Ahlfors regular boundary (see Definition 2.4). Let \(L=-{\text {div}}A\nabla \) be a real, not necessarily symmetric, bounded elliptic operator in \(\Omega \) such that the corresponding elliptic measure \(\omega _L\) satisfies \(\omega _L\in A_\infty (\sigma )\) (see Definition 2.31), and let \(\varepsilon \in (0,1)\). Then any solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in \(\Omega \) is \(\varepsilon \)-approximable: there exists a constant \(C_\varepsilon \) and a function \(\Phi = \Phi ^{\varepsilon } \in C^\infty (\Omega )\) such that

  1. (i)

    \(\Vert u - \Phi \Vert _{L^\infty (\Omega )} \le \varepsilon \Vert u\Vert _{L^\infty (\Omega )}\),

  2. (ii)

    \(\Phi \) satisfies a quantitative \(L^1\)-type Carleson measure estimate

    $$\begin{aligned} \sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi (Y)|\, dY \le C_\varepsilon \Vert u\Vert _{L^{\infty }(\Omega )}, \end{aligned}$$
  3. (iii)

    \(|\nabla \Phi (X)| \le \tfrac{C_\varepsilon \Vert u\Vert _{L^\infty (\Omega )}}{\delta (X)}\) for every \(X \in \Omega \),

  4. (iv)

    if \(|X-Y| \ll {\text {dist}}(X,\partial \Omega )\), then \(|\Phi (X) - \Phi (Y)| \le \tfrac{C_\varepsilon \Vert u\Vert _{L^\infty (\Omega )}}{\delta (X)}|X-Y|\),

  5. (v)

    there exists a function \(\varphi \in L^\infty (\partial \Omega )\) such that

    $$\begin{aligned} \lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} \Phi (Y) = \varphi (x) \text { for } \sigma \text {-a.e. } x \in \partial \Omega . \end{aligned}$$

The notation \(\lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}}\) means non-tangential convergence (see Definition 2.8) and \(\delta (\cdot ) {:}{=}{\text {dist}}(\cdot ,\partial \Omega )\). Here, \(C_\varepsilon \) depends on \(\varepsilon \), the structural constants related to \(\Omega \) and \(\partial \Omega \), ellipticity and the \(\omega _L\in A_\infty (\sigma )\) constants.

The proof of Theorem 1.1 borrows some ideas from the proof of [40, Theorem 1.3] (which is an adaptation of the classical construction in [28, Chapter VIII, Theorem 6.1]), but very quickly our argument must differ significantly. In [40], the authors constructed the approximators for harmonic functions in the presence of a quantitative rectifiability hypothesis. This allowed them to construct approximating chord-arc domains which, in turn, allowed them to use an “\(N \lesssim S\)” estimate for harmonic functions. This was the most delicate part of their argument and our main challenges are strongly related to overcoming the fact that we cannot use the same tools due to our geometry (our boundary may be purely unrectifiable) and our operator L (no control on its structure or smoothness).

We also obtain the following converse to Theorem 1.1.

Theorem 1.2

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), be a uniform domain (see Definition 2.3) with n-Ahlfors regular boundary (see Definition 2.4), and let \(L=-{\text {div}}A\nabla \) be a real, not necessarily symmetric, bounded elliptic operator in \(\Omega \). Suppose also that every solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) is \(\varepsilon \)-approximable for every \(\varepsilon \in (0,1)\) in the sense of Theorem 1.1. Then \(\omega _L \in A_\infty (\sigma )\) (see Definition 2.31).

In particular, by combining Theorems 1.1 and 1.2 with [11, Theorem 1.1], we get a new characterization of the \(A_\infty \) property of elliptic measure on uniform domains.

Corollary 1.3

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), be a uniform domain (see Definition 2.3) with n-Ahlfors regular boundary (see Definition 2.4), and let \(L=-{\text {div}}A\nabla \) be a real, not necessarily symmetric, bounded elliptic operator in \(\Omega \). The following conditions are equivalent:

  1. (a)

    \(\omega _L \in A_\infty (\sigma )\),

  2. (b)

    every solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in \(\Omega \) is \(\varepsilon \)-approximable for any \(\varepsilon \in (0,1)\) in the sense of Theorem 1.1,

  3. (c)

    every solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in \(\Omega \) satisfies an \(L^2\)-type Carleson measure estimate with \(L^\infty \) control over the Carleson norm: there exists \(C \ge 1\) such that

    $$\begin{aligned} \sup _{\begin{array}{c} x \in \partial \Omega \\ r \in (0,{{\,\textrm{diam}\,}}(\partial \Omega )) \end{array}} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla u(X)|^2 {\text {dist}}(X,\partial \Omega ) \, dX \le C\Vert u\Vert _{L^\infty ({\Omega })}^2. \end{aligned}$$

Only the implications “(a) \(\implies \) (b)” and “(b) \(\implies \) (a)” in Corollary 1.3 are new. The equivalence “(a)\(\iff \) (c)” was already shown in [11] for \(n\ge 2\), while the case \(n=1\) follows as a particular case of a more general result in [26], with a similar method of proof.

In the setting of a uniform domain with Ahlfors regular boundary, the conditions (a), (b) and (c) in Corollary 1.3 are known to be equivalent to uniform rectifiability of \(\partial \Omega \) for the special case \(L = -\Delta \) [30, 35, 38, 40], or for \(L = -{\text {div}}A\nabla \) where A is a locally Lipschitz symmetric matrix such that \(|\nabla A|\) satisfies an \(L^1\)-type Carleson measure condition [4, 11, 42]. Even for the aforementioned operators with nice structure, all the available proofs in the literature of the implications “(a) \(\implies \) (b)” or “(c) \(\implies \) (b)” in Corollary 1.3 rely on uniform rectifiability techniques in a decisive fashion, and do not extend to domains with rougher boundaries. The main novelty of this manuscript is that we succeed in proving the implication “(a) \(\implies \) (b)” without appealing to uniform rectifiability theory. This allows us to establish the equivalence “(a)\(\iff \) (b)” for arbitrary elliptic operators in settings that are beyond chord-arc domains (see Definition 2.5). For further characterizations of the \(\omega _L\in A_{\infty }(\sigma )\) property for arbitrary real divergence form elliptic operators, see [8, 55].

Let us state an immediate corollary of our new characterization of the \(\omega _L\in A_\infty (\sigma )\) property. We say that L is a Dahlberg–Kenig–Pipher operator if \(A\in {\text {Lip}}_{{{\,\textrm{loc}\,}}}(\Omega )\) with \(|\nabla A|{\text {dist}}(\cdot ,\partial \Omega )\in L^{\infty }(\Omega )\), and the measure \(\mu _A\) such that \(d\mu _A = |\nabla A(X)|^2{\text {dist}}(X,\partial \Omega )\,dX\) is a Carleson measure (a simple model example is given after Corollary 1.4). For these operators, the conditions (a) and (c) in Corollary 1.3 are equivalent to uniform rectifiability of \(\partial \Omega \) when \(\Omega \) is a uniform domain with Ahlfors regular boundary [43]. Combining the main result of [43] with Corollary 1.3 gives us a new result for Dahlberg–Kenig–Pipher operators:

Corollary 1.4

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 2\), be a uniform domain (see Definition 2.3) with n-Ahlfors regular boundary (see Definition 2.4), and let \(L=-{\text {div}}A\nabla \) be a not necessarily symmetric Dahlberg–Kenig–Pipher operator in \(\Omega \). The following are equivalent:

  1. (a)

    \(\partial \Omega \) is uniformly rectifiable (see Definition 2.6).

  2. (b)

    every solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in \(\Omega \) is \(\varepsilon \)-approximable for any \(\varepsilon \in (0,1)\) in the sense of Theorem 1.1.
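To illustrate the Dahlberg–Kenig–Pipher condition, we record a simple model example in the upper half-space (included only for orientation and not used elsewhere in the paper). Let \(\Omega = {\mathbb {R}}^{n+1}_+ = \{(x,t) :x \in {\mathbb {R}}^n, \, t > 0\}\), fix \(0< \varepsilon < 1\) and consider

$$\begin{aligned} A(x,t) {:}{=}\big (1 + \varepsilon \min (t,1)\big ) I. \end{aligned}$$

Then A is uniformly elliptic with ellipticity parameter \(1 + \varepsilon \), \(A \in {\text {Lip}}_{{{\,\textrm{loc}\,}}}(\Omega )\), and \(|\nabla A(x,t)| \lesssim \varepsilon \mathbb {1}_{\{0< t < 1\}}\), so that \(|\nabla A|{\text {dist}}(\cdot ,\partial \Omega ) \lesssim \varepsilon \). Moreover, for every \(x_0 \in {\mathbb {R}}^n\) and \(r > 0\),

$$\begin{aligned} \frac{1}{r^n} \iint _{B((x_0,0),r) \cap \Omega } |\nabla A(x,t)|^2 \, t \, dx \, dt \lesssim \frac{\varepsilon ^2}{r^n} \int _{|x - x_0| < r} \int _0^{\min (r,1)} t \, dt \, dx \lesssim \varepsilon ^2, \end{aligned}$$

so \(\mu _A\) is a Carleson measure and \(L = -{\text {div}}A\nabla \) is a Dahlberg–Kenig–Pipher operator.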

As a consequence of Theorem 1.1 and the techniques in [37], we get the following generalization of the Varopoulos extension theorem [59, 60]:

Theorem 1.5

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), be a uniform domain (see Definition 2.3) with Ahlfors regular boundary (see Definition 2.4). If there exists a divergence form elliptic operator \(L = -\mathop {{\text {div}}}\nolimits A\nabla \) such that the corresponding elliptic measure \(\omega _L\) satisfies \(\omega _L\in A_\infty (\sigma )\) (see Definition 2.31), then every \(f \in {{\,\textrm{BMO}\,}}_{{{\,\textrm{c}\,}}}(\partial \Omega )\) (see Definition 2.12) has a Varopoulos extension in \(\Omega \). That is, there exists F with the following properties:

  1. (1)

    \(F \in C^\infty (\Omega )\) and \(|\nabla F(X)| \lesssim \tfrac{\Vert f\Vert _{{{\,\textrm{BMO}\,}}(\partial \Omega )}}{\delta (X)}\), for all \(X \in \Omega \),

  2. (2)

    \(\lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} F(Y) = f(x)\) for \(\sigma \)-a.e. \(x \in \partial \Omega \), and

  3. (3)

    \(|\nabla F(Y)|\) is the density of a Carleson measure in the sense that

    $$\begin{aligned} \sup _{r > 0, x \in \partial \Omega } \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla F(Y)| \, dY \le C\Vert f\Vert _{{{\,\textrm{BMO}\,}}(\partial \Omega )}. \end{aligned}$$

The notation \(\lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}}\) means non-tangential limit and \(\delta (\cdot ) {:}{=}{\text {dist}}(\cdot ,\partial \Omega )\). Here, C depends on the structural constants related to \(\Omega \) and \(\partial \Omega \), and the \(\omega _L\in A_\infty (\sigma )\) constants.

Theorem 1.5 is not (and is not meant to be) a generalization of the main result in [37] where an extension theorem of this type was proven in the presence of a quantitative rectifiability hypothesis for the boundary. The novelty of Theorem 1.5 is that its assumptions hold for some sets with very rough boundaries. In particular, recently David and Mayboroda [17] showed that the key hypothesis of Theorem 1.5 holds for the exterior of the 4-corner Cantor set:

Theorem 1.6

([17, Sect. 4]) Let \(\Omega \) be the complement of the 4-corner Cantor set in \(\mathbb {R}^2\) (see Sect. 7). There exists a divergence form elliptic operator \(L = -\mathop {{\text {div}}}\nolimits A\nabla \) in \(\Omega \) such that \(\omega _L\in A_\infty (\sigma )\) (see Definition 2.31). More precisely, one can take the matrix A to be diagonal and equal to the identity outside a ball of radius 1 concentric with the Cantor set and so that the L-elliptic measure with pole at \(\infty \) equals \(\sigma /\sigma (\partial \Omega )\).

What is remarkable about Theorem 1.6 is that, since the complement of the 4-corner Cantor set is an unbounded uniform domain with unrectifiable 1-Ahlfors regular boundary, harmonic measure for this domain cannot satisfy the \(A_\infty (\sigma )\) condition (see, for example, [3]). Thus, constructing operators like this is highly non-trivial. In Sect. 7, we take the David–Mayboroda example and use it to build an example of a similar operator in \(\mathbb {R}^3\) (see Proposition 7.5).

By combining Theorems 1.5 and 1.6 and Proposition 7.5, we get the following:

Corollary 1.7

There exist uniform domains \(\Omega \) with unrectifiable Ahlfors regular boundaries \(\partial \Omega \) in \(\mathbb {R}^2\) and \(\mathbb {R}^3\) such that every function \(f \in {{\,\textrm{BMO}\,}}(\partial \Omega )\) with compact support has a Varopoulos extension in \(\Omega \). In particular, the existence of Varopoulos extensions does not imply rectifiability for the boundary.

By Corollary 1.7, \(L^1\)-type Carleson measure estimates are simultaneously too strong and too weak from the point of view of the David–Semmes theory: harmonic functions fail these estimates even in the unit disk but the existence of non-harmonic extensions that satisfy these estimates does not imply even qualitative rectifiability for the boundary.

Finally, we want to mention a manuscript that is closely related to our results. After the first version of the present work was uploaded to arXiv, the article [54] was posted there in March 2023; in it, the authors construct Varopoulos extensions in corkscrew domains with Ahlfors regular boundaries satisfying a mild quantitative connectivity hypothesis, by completely different methods. In particular, the authors of [54] obtain a more general version of our Corollary 1.7; on the other hand, the relationship between the \(A_\infty \) property of elliptic measure and \(\varepsilon \)-approximability is not studied in [54].

The paper is organized as follows. In Sect. 2, we discuss basic definitions and consider some key tools from dyadic analysis and elliptic PDE theory. In Sect. 3, we prove some important preliminary estimates for Theorem 1.1, and in Sect. 4, we prove Theorem 1.1. In Sect. 5, we prove Theorem 1.2 (and hence, Corollary 1.3). Finally, in Sect. 6 we sketch the proof of Theorem 1.5 and in Sect. 7 we construct a David–Mayboroda-type example in \({\mathbb {R}}^3\) (which completes the proof of Corollary 1.7).

2 Preliminaries

Throughout, we let \(\Omega \subset {\mathbb {R}}^{n+1}\) be an open set with \(n \ge 1\). We say that \(\Omega \) is a domain if it is also connected.

Usually, we use capital letters X, Y, Z, and so on to denote points in \(\Omega \), and lowercase letters x, y, z, and so on to denote points in \(\partial \Omega \). For \(X \in {\mathbb {R}}^{n+1}\) and \(r > 0\), we let \(B(X,r)\) be the Euclidean open ball of radius r centered at X. The letters c and C and their obvious variations denote constants that depend only on dimension, n-Ahlfors regularity constant (see Definition 2.4), corkscrew constant (see Definition 2.1), Harnack chain constants, ellipticity constants (see Sect. 2.5), and so on. We call these kinds of constants structural constants. We write \(a \lesssim b\) if \(a \le Cb\) for a structural constant C and \(a \approx b\) if \(C_1 b \le a \le C_2 b\) for structural constants \(C_1\) and \(C_2\).

2.1 Uniform Domains, Chord-Arc Domains, Ahlfors Regularity and Uniform Rectifiability

Definition 2.1

(Corkscrew condition) We say that a domain \(\Omega \subset {\mathbb {R}}^{n+1}\) satisfies the corkscrew condition if there exists a constant \(\gamma > 0\) such that for every \(x \in \partial \Omega \) and \(r \in (0,{{\,\textrm{diam}\,}}(\Omega ))\) there exists \(Y_{x,r}\) such that

$$\begin{aligned} B(Y_{x,r}, \gamma r) \subset B(x,r) \cap \Omega . \end{aligned}$$

We call \(Y_{x,r}\) a corkscrew point relative to x at scale r.

Definition 2.2

(Harnack chain condition) We say that a domain \(\Omega \subset {\mathbb {R}}^{n+1}\) satisfies the Harnack chain condition if there exists a uniform constant C such that for every \(\rho > 0\) and \(\Lambda \ge 0\) and \(X, X' \in \Omega \) with \({\text {dist}}(X,\partial \Omega ), {\text {dist}}(X',\partial \Omega ) \ge \rho \) and \(|X - X'| \le \Lambda \rho \) there exists a chain of open balls \(B_1,\dots , B_J\) with \(J \le N(\Lambda )\) such that \(X \in B_1\), \(X' \in B_J\), \(B_{j} \cap B_{j + 1} \ne \emptyset \) and \(C^{-1} {{\,\textrm{diam}\,}}(B_j) \le {\text {dist}}(B_j, \partial \Omega ) \le C{{\,\textrm{diam}\,}}(B_j)\).

Definition 2.3

(Uniform domain) We say that a domain \(\Omega \subset {\mathbb {R}}^{n+1}\) is uniform if it satisfies the corkscrew and Harnack chain conditions.
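As a basic model example (recorded only for orientation), the upper half-space \({\mathbb {R}}^{n+1}_+\) is a uniform domain. Indeed, for every boundary point \(x \in {\mathbb {R}}^n \times \{0\}\) and every \(r > 0\),

$$\begin{aligned} B\big (x + \tfrac{r}{2} e_{n+1}, \tfrac{r}{2}\big ) \subset B(x,r) \cap {\mathbb {R}}^{n+1}_+, \end{aligned}$$

where \(e_{n+1}\) is the vertical unit vector, so the corkscrew condition holds with \(\gamma = 1/2\). For the Harnack chain condition, given \(X, X' \in {\mathbb {R}}^{n+1}_+\) at height at least \(\rho \) with \(|X - X'| \le \Lambda \rho \), one may join X to \(X'\) by first moving vertically up to height comparable to \((1+\Lambda )\rho \), then horizontally, and then vertically down, using balls whose radii are comparable to their distance to the boundary; this requires at most \(N(\Lambda ) \lesssim 1 + \log (2+\Lambda )\) balls.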

Definition 2.4

(Ahlfors regularity) We say \(\Sigma \subset {\mathbb {R}}^{n+1}\) is n-Ahlfors regular (or simply Ahlfors regular) if there exists C such that

$$\begin{aligned} C^{-1} r^n \le {\mathcal {H}}^n(B(x,r) \cap \Sigma ) \le Cr^n, \quad \text {for each } x \in \Sigma \text { and } r \in (0, {{\,\textrm{diam}\,}}(\Sigma )). \end{aligned}$$

Here and below \({\mathcal {H}}^n\) denotes the n-dimensional Hausdorff measure.
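For instance, a hyperplane \(\Sigma = {\mathbb {R}}^n \times \{0\} \subset {\mathbb {R}}^{n+1}\) is n-Ahlfors regular: for every \(x \in \Sigma \) and \(r > 0\),

$$\begin{aligned} {\mathcal {H}}^n(B(x,r) \cap \Sigma ) = \omega _n r^n, \end{aligned}$$

where \(\omega _n\) is the Lebesgue measure of the unit ball of \({\mathbb {R}}^n\), so one may take \(C = \max (\omega _n, \omega _n^{-1})\).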

Definition 2.5

(Chord-arc domain) We say that a domain \(\Omega \subset {\mathbb {R}}^{n+1}\) is a chord-arc domain if \(\Omega \) satisfies the Harnack chain condition, both \(\Omega \) and \(\text {int} \, \Omega ^{{{\,\textrm{c}\,}}}\) satisfy the corkscrew condition, and the boundary \(\partial \Omega \) is n-Ahlfors regular.

Definition 2.6

(Uniform rectifiability) Following [18], we say that an n-Ahlfors regular set \(E \subset {\mathbb {R}}^{n+1}\) is uniformly rectifiable if it contains “big pieces of Lipschitz images” of \(\mathbb {R}^n\): there exist constants \(\theta , M > 0\) such that for every \(x \in E\) and \(r \in (0,{{\,\textrm{diam}\,}}(E))\) there is a Lipschitz mapping \(\rho = \rho _{x,r} :\mathbb {R}^n \rightarrow {\mathbb {R}}^{n+1}\), with Lipschitz norm no larger than M, such that

$$\begin{aligned} {\mathcal {H}}^n\big (E \cap B(x,r) \cap \rho (\{y \in \mathbb {R}^n :|y| < r\})\big ) \ge \theta r^n. \end{aligned}$$

2.2 Carleson Measures, Non-tangential Convergence, BMO and Local BV

Given a domain \(\Omega \subset {\mathbb {R}}^{n+1}\), we set \(\sigma {:}{=}{\mathcal {H}}^n|_{\partial \Omega }\) and \(\delta (X) {:}{=}{\text {dist}}(X,\partial \Omega )\) for \(X\in \Omega \).

Definition 2.7

(Carleson measures) We say that a Borel measure \(\mu \) in \(\Omega \) is a Carleson measure (with respect to \(\partial \Omega \)) if we have

$$\begin{aligned} C_\mu {:}{=}\sup _{x \in \partial \Omega , r > 0} \frac{\mu (B(x,r) \cap \Omega )}{r^n} < \infty . \end{aligned}$$

We call \(C_\mu \) the Carleson norm of \(\mu \).
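To get a feeling for the definition, consider the model case \(\Omega = {\mathbb {R}}^{n+1}_+\) (this computation is only for orientation) and measures of the form \(d\mu (y,t) = f(t) \, dy \, dt\) with \(f \ge 0\) depending only on the height t. For every \(x \in {\mathbb {R}}^n \times \{0\}\) and \(r > 0\),

$$\begin{aligned} c_n \int _0^{r/2} f(t) \, dt \le \frac{\mu (B(x,r) \cap \Omega )}{r^n} \le C_n \int _0^{r} f(t) \, dt, \end{aligned}$$

so \(\mu \) is a Carleson measure if and only if \(\int _0^\infty f(t) \, dt < \infty \), and then \(C_\mu \approx \int _0^\infty f(t) \, dt\). For example, \(f = \mathbb {1}_{(0,1)}\) gives a Carleson measure, whereas Lebesgue measure on \(\Omega \) (that is, \(f \equiv 1\)) is not a Carleson measure.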

Definition 2.8

(Cones and non-tangential convergence) Suppose that \(m>1\). For every \(x \in \partial \Omega \), the cone of m-aperture at x is the set

$$\begin{aligned} {\widetilde{\Gamma }}(x) {:}{=}{\widetilde{\Gamma }}^m(x) {:}{=}\{Z \in \Omega :{\text {dist}}(Z,x) <m{\text {dist}}(Z,\partial \Omega )\}. \end{aligned}$$
(2.9)

Let G be a function defined in \(\Omega \) and g be a function defined on \(\partial \Omega \). We say that G converges non-tangentially to g at \(x \in \partial \Omega \) if there exists \(m> 1\) such that we have \(\lim _{k \rightarrow \infty } G(Y_k) = g(x)\) for every sequence \((Y_k)\) in \({\widetilde{\Gamma }}^m(x)\) such that \(\lim _{k \rightarrow \infty } Y_k = x\). We denote this by \(\lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} G(Y) = g(x)\).
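In the model case of the upper half-space, the cones (2.9) take a familiar form: if \(\Omega = {\mathbb {R}}^{n+1}_+\) and \(x \in {\mathbb {R}}^n \times \{0\}\), then \({\text {dist}}(Z,\partial \Omega ) = t\) for \(Z = (y,t)\), and

$$\begin{aligned} {\widetilde{\Gamma }}^m(x) = \big \{(y,t) \in {\mathbb {R}}^{n+1}_+ :|y - x|^2 + t^2< m^2 t^2 \big \} = \big \{(y,t) :|y - x| < \sqrt{m^2 - 1}\, t \big \}, \end{aligned}$$

that is, the usual vertical cone of aperture \(\sqrt{m^2-1}\) with vertex at x.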

Definition 2.10

(Non-tangential maximal operator) We denote the non-tangential maximal operator by \(N_*\), that is, for a function \(u \in L^\infty (\Omega )\), the function \(N_*u :\partial \Omega \rightarrow \mathbb {R}\) is defined as

$$\begin{aligned} N_* u(x) = \sup _{X \in {\widetilde{\Gamma }}(x)} |u(X)|. \end{aligned}$$

We call \(N_* u\) the non-tangential maximal function of u.

Remark 2.11

The aperture constant m in Definition 2.8 does not play a big role in this paper, and therefore we do not analyze it in detail: our results simply hold for some uniform aperture constant. Naturally, the aperture constant affects the values of the non-tangential maximal function, but since the \(L^p\) norms of non-tangential maximal functions defined with different aperture constants are comparable, with the comparability constant depending on the aperture constants (see [25, Lemma 1] and [36, Lemma 1.10]), this is not important for us. In most computations, it is convenient to use dyadic cones (see (2.17)) instead of cones of the previous type.

Definition 2.12

(BMO) The space \({{\,\textrm{BMO}\,}}(\partial \Omega )\) (bounded mean oscillation) consists of \(f\in L^1_{{{\,\textrm{loc}\,}}}(\partial \Omega )\) with

$$\begin{aligned} \Vert f\Vert _{{{\,\textrm{BMO}\,}}(\partial \Omega )} {:}{=}\sup _{\Delta } \frac{1}{\sigma (\Delta )} \int _{\Delta } \Big | f - \frac{1}{\sigma (\Delta )} \int _{\Delta } f \, d\sigma \Big | \, d\sigma < \infty , \end{aligned}$$

where the supremum is taken over all surface balls \(\Delta = \Delta (x,r) {:}{=}B(x,r) \cap \partial \Omega \). We denote \(f \in {{\,\textrm{BMO}\,}}_{{{\,\textrm{c}\,}}}(\partial \Omega )\) if f is a \({{\,\textrm{BMO}\,}}\) function with compact support.

Definition 2.13

(Local BV) We say that a locally integrable function f has locally bounded variation in \(\Omega \) (and denote \(f \in {{\,\textrm{BV}\,}}_{{{\,\textrm{loc}\,}}}(\Omega )\)) if for any open relatively compact set \(\Omega ' \subset \Omega \) the total variation over \(\Omega '\) is finite:

$$\begin{aligned} \iint _{\Omega '} |\nabla f(Y)| \, dY {:}{=}\sup _{\begin{array}{c} \overrightarrow{\Psi } \in C_0^1(\Omega ') \\ \Vert \overrightarrow{\Psi }\Vert _{L^\infty (\Omega ')} \le 1 \end{array}} \iint _{\Omega '} f(Y) \, \text {div} \overrightarrow{\Psi }(Y) \, dY < \infty , \end{aligned}$$

where \(C_0^1(\Omega ')\) is the class of compactly supported continuously differentiable vector fields in \(\Omega '\).

2.3 Dyadic Cubes, Whitney Regions and Approximating Domains

An arbitrary n-Ahlfors regular set \(E\subset \mathbb {R}^{n+1}\) equipped with the Euclidean distance and surface measure can be viewed as a space of homogeneous type of Coifman and Weiss [14], with ambient dimension \(n+1\). All such sets can be decomposed dyadically in the following sense:

Lemma 2.14

([12, 18, 44]) Assume that \(E \subset {\mathbb {R}}^{n+1}\) is n-Ahlfors regular. Then E admits a dyadic decomposition in the sense that there exist constants \(a_1 \ge a_0 > 0\) such that for each \(k \in {\mathbb {Z}}\) there exists a collection of Borel sets, \({\mathbb {D}}_k\), which we will call (dyadic) cubes, such that

$$\begin{aligned} {\mathbb {D}}_k {:}{=}\{Q_{j}^k\subset E :j\in {\mathfrak {I}}_k\}, \end{aligned}$$

where \({\mathfrak {I}}_k\) denotes a countable index set depending on k, satisfying

  1. (i)

    for each fixed \(k \in {\mathbb {Z}}\), the sets \(Q_j^k\) are disjoint and \(E=\cup _{j}Q_{j}^k\,\,\),

  2. (ii)

if \(m\ge k\) then either \(Q_{i}^{m}\subset Q_{j}^{k}\) or \(Q_{i}^{m}\cap Q_{j}^{k}=\emptyset \),

  3. (iii)

    for each \(k \in {\mathbb {Z}}\), \(j \in {\mathfrak {I}}_k\) and \(m<k\), there is a unique \(i \in {\mathfrak {I}}_m\) such that \(Q_{j}^k\subset Q_{i}^m\),

  4. (iv)

    \({{\,\textrm{diam}\,}}(Q_{j}^k)\le a_1 2^{-k}\),

  5. (v)

    for each \(Q_{j}^k\), there exists a point \(z_j^k \in Q_j^k\) such that \(E \cap B(z^k_{j}, a_0 2^{-k}) \subset Q_j^k \subset E \cap B(z^k_{j}, a_1 2^{-k})\).

We denote by \({\mathbb {D}}= {\mathbb {D}}(E)\) the collection of all cubes \(Q^k_j\), that is,

$$\begin{aligned} {\mathbb {D}}{:}{=}\cup _{k} {\mathbb {D}}_k. \end{aligned}$$
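For example, if \(E = {\mathbb {R}}^n \times \{0\}\), then the usual half-open dyadic cubes of \({\mathbb {R}}^n\),

$$\begin{aligned} {\mathbb {D}}_k {:}{=}\big \{ 2^{-k}\big (j + [0,1)^n\big ) \times \{0\} :j \in {\mathbb {Z}}^n \big \}, \end{aligned}$$

satisfy Lemma 2.14 with \(a_0 = 1/2\) and \(a_1 = \sqrt{n}\): properties (i)–(iii) are the usual nesting properties of dyadic cubes, property (iv) holds since \({{\,\textrm{diam}\,}}(Q_j^k) = \sqrt{n}\, 2^{-k}\), and property (v) holds with \(z_j^k\) the center of \(Q_j^k\).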

If E is bounded, we ignore cubes where \(2^{-k} \gtrsim {{\,\textrm{diam}\,}}(\partial \Omega )\) (in particular, where \(a_0 2^{-k} \ge {{\,\textrm{diam}\,}}(\partial \Omega )\)). Given a cube \(Q = Q_j^k \in {\mathbb {D}}\), we define the side-length of Q as \(\ell (Q) {:}{=}2^{-k}\). By Ahlfors regularity and property (v) in Lemma 2.14, we know that \(\ell (Q) \approx {{\,\textrm{diam}\,}}(Q)\) and \({\mathcal {H}}^n(Q) \approx \ell (Q)^n\). Given \(Q \in {\mathbb {D}}\) and \(m \in {\mathbb {Z}}\), we set

$$\begin{aligned} {\mathbb {D}}_Q {:}{=}\left\{ Q'\in {\mathbb {D}}:Q'\subseteq Q\right\} ,\qquad {\mathbb {D}}_{m,Q} {:}{=}\left\{ Q'\in {\mathbb {D}}_Q :\ell (Q') = 2^{-m} \ell (Q) \right\} . \end{aligned}$$

We call the cubes in the collection \({\mathbb {D}}_{1,Q}\) the children of Q. Notice that by Ahlfors regularity and property (v) in Lemma 2.14, each cube has a uniformly bounded number of children.

Given a cube \(Q = Q_j^k \in {\mathbb {D}}\), we call the point \(z_j^k \in Q\) in property (v) in Lemma 2.14 the center of Q, denote \(x_Q {:}{=}z_j^k\), and set

$$\begin{aligned} \Delta _Q {:}{=}E \cap B(z_j^k, a_0 \ell (Q)). \end{aligned}$$

We now take \(E=\partial \Omega \) and use Lemma 2.14 to decompose it, so that \({\mathbb {D}}(\partial \Omega ) = {\mathbb {D}}(E) {=}{:}{\mathbb {D}}\). For each \(Q \in {\mathbb {D}}\), we let \(X_Q\) be the corkscrew point relative to \(x_Q\) at scale \(10^{-5}a_0\ell (Q)\). We have \(B(X_Q, \gamma 10^{-5} a_0 \ell (Q)) \subset B(x_Q, 10^{-5} a_0 \ell (Q)) \cap \Omega \), where \(\gamma \) is the corkscrew constant in Definition 2.1.

For many of our techniques, it is important that we show that some collections of dyadic cubes are quantitatively small in the following sense:

Definition 2.15

(Carleson packing condition) Let \({\mathbb {D}}\) be a dyadic system on \(\partial \Omega \) and let \({\mathcal {A}}\subset {\mathbb {D}}\). We say that \({\mathcal {A}}\) satisfies a Carleson packing condition if there exists a constant \(C \ge 1\) such that for any \(Q_0 \in {\mathbb {D}}\) we have

$$\begin{aligned} \sum _{Q \in {\mathcal {A}}, Q \subset Q_0} \sigma (Q) \le C \sigma (Q_0). \end{aligned}$$

We denote the smallest such constant C by \({\mathcal {C}}_{{\mathcal {A}}}\).
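For instance, any collection \({\mathcal {A}}\) of pairwise disjoint cubes satisfies a Carleson packing condition with \({\mathcal {C}}_{{\mathcal {A}}} \le 1\), since the cubes of \({\mathcal {A}}\) contained in \(Q_0\) are disjoint subsets of \(Q_0\). On the other hand, the full collection \({\mathcal {A}}= {\mathbb {D}}\) never satisfies a packing condition: since for each fixed \(m \ge 0\) the cubes in \({\mathbb {D}}_{m,Q_0}\) partition \(Q_0\), we have

$$\begin{aligned} \sum _{Q \in {\mathbb {D}}, \, Q \subset Q_0} \sigma (Q) \ge \sum _{m \ge 0} \sum _{Q \in {\mathbb {D}}_{m,Q_0}} \sigma (Q) = \sum _{m \ge 0} \sigma (Q_0) = \infty . \end{aligned}$$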

Next, we use a standard decomposition of \(\Omega \) into Whitney cubes (see e.g. [58, Chapter VI]), and then associate a collection of such Whitney cubes to each boundary cube to construct suitable Whitney-type regions. These Whitney regions are modeled after regions of the type \(Q \times (\ell (Q)/2,\ell (Q))\) (that is, the upper halves of Carleson boxes) in the simpler geometry of the upper half-space. For this, we recall the construction found in [35] noting that we make some changes to the notation therein (following the notation of more recent papers, e.g. [40]). We let \({\mathcal {W}}= \{I\}_I\) denote a Whitney decomposition of \(\Omega \), with the properties that each I is a closed \((n+1)\)-dimensional cube satisfying

$$\begin{aligned} 4{{\,\textrm{diam}\,}}(I) \le {\text {dist}}(4I, \partial \Omega ) \le {\text {dist}}(I, \partial \Omega ) \le 40{{\,\textrm{diam}\,}}(I), \end{aligned}$$

where 4I is the standard concentric Euclidean dilate of a cube; the interiors of the cubes I are disjoint, and for all \(I_1, I_2 \in {\mathcal {W}}\) with \(I_1 \cap I_2 \ne \emptyset \) we have

$$\begin{aligned} \frac{1}{4} {{\,\textrm{diam}\,}}(I_1) \le {{\,\textrm{diam}\,}}(I_2) \le 4{{\,\textrm{diam}\,}}(I_1). \end{aligned}$$

For \(I \in {\mathcal {W}}\) we let \(\ell (I)\) denote the side length of I.

For each cube \(Q \in {\mathbb {D}}\) and constant \(K \ge K_0\), with \(K_0\) to be described momentarily, we associate an initial collection of Whitney cubes

$$\begin{aligned} W_Q(K) {:}{=}\{I \in {\mathcal {W}}:K^{-1} \ell (I) \le \ell (Q) \le K \ell (I), {\text {dist}}(I, Q) \le K \ell (Q)\}. \end{aligned}$$

We choose \(K_0\) depending on the constants in the corkscrew condition and the Ahlfors regularity condition, insisting on two conditions being met:

  1. (1)

    If \(X \in \Omega \) with \({\text {dist}}(X, \partial \Omega ) \le 10^5 {{\,\textrm{diam}\,}}(\partial \Omega )\) then \(X \in I \in W_Q(K)\) for some \(Q \in {\mathbb {D}}\).

  2. (2)

    For any \(Q \in {\mathbb {D}}\), we have \(B(X_Q, {\text {dist}}(X_Q,\partial \Omega )/2) \subseteq \cup _{I \in W_Q(K)} I\), and if \(Q' \in {\mathbb {D}}\) is another cube such that \(Q' \subset Q\) with \(\ell (Q') = \tfrac{1}{2}\ell (Q)\), then we also have \(B(X_{Q'}, {\text {dist}}(X_{Q'},\partial \Omega )/2) \subseteq \cup _{I \in W_Q(K)} I\).

Of course, condition (1) above is automatically satisfied if \({{\,\textrm{diam}\,}}(\partial \Omega ) = \infty \).

Following [35, Sect. 3], we augment the collection \(W_Q(K)\) as follows. For each \(I \in W_Q(K)\), we take a Harnack chain H(I) from the center of I to the corkscrew point \(X_Q\), and we let \(W_{Q,I}(K)\) be the collection of Whitney cubes in \({\mathcal {W}}\) that meet at least one ball in the chain H(I). We then set \({\mathcal {W}}^*_Q(K) {:}{=}\bigcup _{I \in W_Q(K)} W_{Q,I}(K)\). Finally, for a small dimensional parameter \(\tau \) (this is the parameter \(\lambda \) in [35, Sect. 3]), we define the Whitney region relative to Q as

$$\begin{aligned} U_Q = U_Q(K_0) {:}{=}\bigcup _{I \in {\mathcal {W}}^*_Q(K_0)} (1 + \tau ) I. \end{aligned}$$

By construction, we know that if \(X \in U_Q\), then

$$\begin{aligned} K_1^{-1} \ell (Q) \le {\text {dist}}(X, \partial \Omega ) \le {\text {dist}}(X, Q) \le K_1 \ell (Q), \end{aligned}$$
(2.16)

where \(K_1\) depends on \(K_0\), the dimension and the Harnack chain condition. For \(\kappa \gg K_0\) to be chosen, we also define the following fattened versions of the Whitney regions:

$$\begin{aligned} U_Q^* = U_Q(\kappa ) = \bigcup _{I \in {\mathcal {W}}^*_Q(\kappa )} (1 + \tau )I,\qquad U_Q^{**} = \bigcup _{I \in {\mathcal {W}}^*_Q(\kappa )} (1 + 2\tau )I, \end{aligned}$$

that is, \(U_Q^*\) is constructed the same way as \(U_Q\) but we replace the constant \(K_0\) by \(\kappa \) (similarly for \(U_Q^{**}\)). We describe the reasoning and choice of \(\kappa \) in the next subsection. We note that for \(\tau \) small enough the regions \(U_Q\), \(U^*_Q\) and \(U_Q^{**}\) have bounded overlaps, that is, for a collection of dyadic cubes \({\mathscr {Q}}\) and the \((n+1)\)-dimensional Lebesgue measure \(|\cdot |\) we have \(|\bigcup _{Q \in {\mathscr {Q}}} U_Q^{**}| \approx \sum _{Q \in {\mathscr {Q}}} |U_Q^{**}|\).

Using the Whitney regions above, we can now define objects like sawtooth regions, Carleson boxes and dyadic cones. Let \(Q_0 \in {\mathbb {D}}\) be a fixed cube and \({\mathcal {F}}\subset {\mathbb {D}}_{Q_0}\) a collection of pairwise disjoint cubes. We set

$$\begin{aligned} {\mathbb {D}}_{{\mathcal {F}}, Q_0} {:}{=}{\mathbb {D}}_{Q_0} \setminus \cup _{Q \in {\mathcal {F}}} {\mathbb {D}}_Q. \end{aligned}$$

We then define the local sawtooth relative to \({\mathcal {F}}\) (and its fattened version) as

$$\begin{aligned} \Omega _{{\mathcal {F}}, Q_0} {:}{=}{{\,\textrm{int}\,}}\Big (\bigcup _{Q \in {\mathbb {D}}_{{\mathcal {F}}, Q_0}} U_Q\Big ),\qquad \Omega ^*_{{\mathcal {F}}, Q_0} {:}{=}{{\,\textrm{int}\,}}\Big (\bigcup _{Q \in {\mathbb {D}}_{{\mathcal {F}}, Q_0}} U^*_Q\Big ). \end{aligned}$$

In the special case where \({\mathcal {F}}= \emptyset \), we write \(T_{Q_0} = \Omega _{{\mathcal {F}}, Q_0}\) and \(T^*_{Q_0} = \Omega ^*_{{\mathcal {F}}, Q_0}\), that is,

$$\begin{aligned} T_{Q_0} {:}{=}{{\,\textrm{int}\,}}\Big (\bigcup _{Q \in {\mathbb {D}}_{Q_0}} U_Q\Big ),\qquad T_{Q_0}^* {:}{=}{{\,\textrm{int}\,}}\Big (\bigcup _{Q \in {\mathbb {D}}_{Q_0}} U^*_Q\Big ),\qquad T_{Q_0}^{**} {:}{=}{{\,\textrm{int}\,}}\Big (\bigcup _{Q \in {\mathbb {D}}_{Q_0}} U^{**}_Q\Big ). \end{aligned}$$

We call \(T_{Q_0}\) the Carleson box relative to \(Q_0\) and \(T_{Q_0}^*\) and \(T_{Q_0}^{**}\) its fattened versions. Given a cube \(Q_0 \in {\mathbb {D}}\) and a point \(x \in Q_0\), we also define the (truncated) dyadic cone at \(x \in \partial \Omega \), \(\Gamma (x)\), by setting

$$\begin{aligned} \Gamma (x) {:}{=}\Gamma _{Q_0}(x) {:}{=}{{\,\textrm{int}\,}}\Big ( \bigcup _{Q \in {\mathbb {D}}_{Q_0} :x \in Q} U_Q \Big ). \end{aligned}$$
(2.17)

Notice that \(\Gamma (x) = \Omega _{{\mathcal {F}}_x, Q_0}\), where \({\mathcal {F}}_x\) is the collection of maximal (and hence, disjoint) cubes in the collection \(\{Q \in {\mathbb {D}}_{Q_0} :x \notin Q\}\). It is straightforward to verify that there exist uniform constants \(m_1, m_2 > 1\) such that \(\Gamma (x)\) contains a truncated version of the cone \({\widetilde{\Gamma }}^{m_1}(x)\) and is contained in a truncated version of the cone \({\widetilde{\Gamma }}^{m_2}(x)\), where \({\widetilde{\Gamma }}^{m_1}(x)\) and \({\widetilde{\Gamma }}^{m_2}(x)\) are cones of the type (2.9). Thus, since the aperture of the cones is not important for us, we mostly use dyadic cones when studying non-tangential convergence.

By the following lemma, the Whitney regions, sawtooth regions, Carleson boxes, and truncated dyadic cones inherit many quantitative geometric properties from \(\Omega \):

Lemma 2.18

([35, Lemma 3.61]) Suppose that \(\Omega \subset \mathbb {R}^{n+1}\) is a uniform domain with n-Ahlfors regular boundary. Let \(Q_0 \in {\mathbb {D}}(\partial \Omega )\) be a cube and \({\mathcal {F}}\subset {\mathbb {D}}_{Q_0}\) be a collection of pairwise disjoint cubes. Then \(\Omega _{{\mathcal {F}}, Q_0}\) and \(\Omega ^*_{{\mathcal {F}}, Q_0}\) are uniform domains with n-Ahlfors regular boundary whose structural constants depend only on the dimension, the structural constants of \(\Omega \), and the constant \(\kappa \). In particular, the Whitney regions \(U_Q\), \(U_Q^*\) and \(U_Q^{**}\), the Carleson boxes \(T_Q\), \(T_Q^*\) and \(T_Q^{**}\) and the truncated dyadic cones \(\Gamma (x)\) are uniform domains with n-Ahlfors regular boundaries, with structural constants depending only on the dimension, the structural constants of \(\Omega \) and the constant \(\kappa \).

2.4 The Choice of the Parameter \(\kappa \)

In contrast to the setting of the upper half-space, we do not define the sawtooths by removing Whitney regions. This is due to the overlaps of the regions \(U_Q\): we may encounter situations where for \(Q_0 \in {\mathbb {D}}(\partial \Omega )\) and a collection of pairwise disjoint cubes \({\mathcal {F}}\subset {\mathbb {D}}_{Q_0}\) there exists a cube \(Q \in {\mathbb {D}}_{{\mathcal {F}}, Q_0}\) such that \(\overline{U_Q}\) does not contribute to the boundary of \(\Omega _{{\mathcal {F}}, Q_0}\). That being said, if \(\kappa \) is chosen large enough, then the fattened Whitney region \(U_Q^*\) meets the boundary of the unfattened region \(\Omega _{{\mathcal {F}}, Q_0}\) on a portion of measure comparable to that of Q. We will use this in Sect. 4, where it will be convenient for us; below we prove the technical estimates that give this property.

Let us fix a cube \(Q \in {\mathbb {D}}\). Recall that \(X_Q\) is a corkscrew point relative to \(x_Q \in Q\) at scale \(r_Q {:}{=}10^{-5}a_0 \ell (Q)\) and \(\Delta _Q = B(x_Q, 10^5 r_Q) \cap \partial \Omega \subset Q\) is the surface ball associated to Q. We let \({\hat{x}}_Q \in \partial \Omega \) denote a touching point for \(X_Q\), that is, a point such that \(|X_Q - {\hat{x}}_Q| = \delta (X_Q)\). By the triangle inequality, \(|x_Q - {\hat{x}}_Q|<2r_Q\), and thus,

$$\begin{aligned} {\hat{\Delta }}_Q = B({\hat{x}}_Q, 10^3 r_Q) \cap \partial \Omega \subset \Delta _Q \subseteq Q. \end{aligned}$$
(2.19)

For every \(\theta \in (0,1)\), we let

$$\begin{aligned} P_Q(\theta ) {:}{=}{\hat{x}}_Q + \theta (X_Q - {\hat{x}}_Q), \end{aligned}$$

be the “\(\theta \)-point” on the directed line segment from \({\hat{x}}_Q\) to \(X_Q\). Then, by definitions,

$$\begin{aligned} \gamma \theta r_Q \le |P_Q(\theta ) - {\hat{x}}_Q| = {\text {dist}}(P_Q(\theta ), \partial \Omega ) \le \theta r_Q. \end{aligned}$$
(2.20)

Lemma 2.21

There exists \(\theta _0 \in (0,1)\) depending on \(K_0\) and the structural constants such that if for some \(Q'\in {\mathbb {D}}\) we have that

$$\begin{aligned} B\big (P_Q(\theta _0), \tfrac{\gamma \theta _0}{10} r_Q \big ) \cap U_{Q'} \ne {\text{\O }}, \end{aligned}$$

then \(Q' \subseteq Q\) and \(\ell (Q') < \ell (Q)\).

Proof

Suppose that \(\theta _0 \le 1 / (4C(K_1)^2)\) for a large structural constant \(C \ge 1\) and

$$\begin{aligned} X \in B\big (P_Q(\theta _0), \tfrac{\gamma \theta _0}{10} r_Q \big ) \cap U_{Q'}, \end{aligned}$$
(2.22)

for some \(Q'\), where \(K_1\) is the constant in (2.16). By (2.16), (2.22) and (2.20), it holds that

$$\begin{aligned} \ell (Q')&\le K_1 {\text {dist}}(X,\partial \Omega ) \nonumber \\&\le K_1 \bigg ({\text {dist}}(P_Q(\theta _0),\partial \Omega ) + \frac{\gamma \theta _0}{10} r_Q \bigg ) \le 2K_1 \theta _0 r_Q = 2K_1 \theta _0 10^{-5} a_0 \ell (Q). \end{aligned}$$
(2.23)

In particular, we have \(\ell (Q') < \ell (Q)\).

To show that \(Q' \subset Q\), we first notice that we have

$$\begin{aligned} |X - x_{Q'}| \le C K_1 \ell (Q'), \end{aligned}$$

for a structural constant \(C \ge 1\) by (2.16) and the fact that \({{\,\textrm{diam}\,}}(Q') \approx \ell (Q')\). This and (2.23) then give us

$$\begin{aligned} |X - x_{Q'}| \le 2 C (K_1)^2 \theta _0 10^{-5} a_0 \ell (Q). \end{aligned}$$

Thus, by (2.20) and (2.22), it holds that

$$\begin{aligned} |X - {\hat{x}}_Q| \le 2\theta _0 r_Q = 2\theta _0 10^{-5} a_0 \ell (Q). \end{aligned}$$

Combining the previous two inequalities then gives us

$$\begin{aligned} |{\hat{x}}_Q - x_{Q'}| \le 4 C (K_1)^2 \theta _0 10^{-5} a_0 \ell (Q) = 4C (K_1)^2 \theta _0 r_Q. \end{aligned}$$
(2.24)

In particular,

$$\begin{aligned} |x_{Q'} - x_Q| \le |x_{Q'} - {\hat{x}}_Q| + |{\hat{x}}_Q - x_Q| \le 4C (K_1)^2 \theta _0 r_Q + 2r_Q< 3r_Q < a_0 \ell (Q), \end{aligned}$$

by (2.24), the fact that \(|{{\hat{x}}}_Q-x_Q|<2r_Q\), and the choice \(\theta _0 \le 1 / (4C(K_1)^2)\). Thus, \(x_{Q'} \in \Delta _Q \subset Q\). Since \(Q' \cap Q \ne \emptyset \) and \(\ell (Q') < \ell (Q)\), we know that \(Q' \subset Q\), which is what we wanted. \(\square \)

Let us then fix \(\theta _0\) so that Lemma 2.21 holds. For \(Q \in {\mathbb {D}}\), we set

$$\begin{aligned} \Xi _Q {:}{=}\bigcup _{\theta \in [\theta _0,1]} B\big (P_Q(\theta ), \tfrac{\gamma \theta _0}{10} r_Q \big ), \end{aligned}$$

which is a cylinder-like object. We get the following straightforward lemma:

Lemma 2.25

Let \(Q \in {\mathbb {D}}\) be a fixed cube and let \({\mathscr {Q}}_Q\) be the collection of cubes that share the same dyadic parent as Q, that is,

$$\begin{aligned} {\mathscr {Q}}_Q {:}{=}\{P \in {\mathbb {D}}:P,Q \subset Q_0 \text { for a cube } Q_0 \in {\mathbb {D}}\text { such that } \ell (P) = \ell (Q) = \tfrac{1}{2} \ell (Q_0)\}. \end{aligned}$$

Let \(\kappa \gg \max \{K_0,(\theta _0)^{-1}\}\) and \(X \in \Xi _P\) for some \(P \in {\mathscr {Q}}_Q\). Then \(X \in U_Q^*\).

Proof

Let \(\kappa \gg \max \{K_0,(\theta _0)^{-1}\}\) and \(X \in \Xi _P\) for some \(P \in {\mathscr {Q}}_Q\). By the Whitney decomposition, there exists a Whitney cube \(I \in {\mathcal {W}}\) such that \(X \in I\). By the definition of \(U_Q^*\), it is enough to show that \(I \in {\mathcal {W}}_Q^*(\kappa )\).

By (2.20) and the definitions, we first notice that

$$\begin{aligned} \ell (I) \approx {\text {dist}}(X,\partial \Omega ) \approx \gamma \theta _0 r_P \approx \theta _0 \ell (P) = \theta _0 \ell (Q), \end{aligned}$$

with uniformly bounded implicit constants. In particular, since \(\theta _0 \gg \tfrac{1}{\kappa }\) and \(\kappa \gg 1\), we get

$$\begin{aligned} \frac{1}{\kappa } \ell (I) \le \ell (Q) \le \kappa \ell (I). \end{aligned}$$

On the other hand, since \(P \in {\mathscr {Q}}_Q\), we know that

$$\begin{aligned} {\text {dist}}(I,Q) \lesssim {\text {dist}}(I,P) + \ell (Q), \end{aligned}$$

and by (2.20), (2.19) and the fact that \(X \in \Xi _P\), we know that

$$\begin{aligned} {\text {dist}}(I,P) \le {\text {dist}}(X,P) \le |X - {\hat{x}}_P| \le \frac{\gamma \theta _0}{10}r_P + |X_P - {\hat{x}}_P| \le 2r_P \le 2\ell (P) = 2\ell (Q). \end{aligned}$$

In particular, since \(\kappa \gg 1\), we have

$$\begin{aligned} {\text {dist}}(I,Q) \le \kappa \ell (Q). \end{aligned}$$

Thus, \(I \in {\mathcal {W}}_Q^*(\kappa )\), which proves the claim. \(\square \)

Let us also record the following simple lemma for future use:

Lemma 2.26

Let \(Q \in {\mathbb {D}}\) and let \(X_Q \in \Omega \) be a corkscrew point, \({\hat{x}}_Q \in \partial \Omega \) be a touching point and \(r_Q = 10^{-5} a_0 \ell (Q)\) as above. If \(Y \in B(P_Q(\theta ),r_Q)\) for some \(\theta \in [0,1]\) and \({\hat{y}} \in \partial \Omega \) is a point such that \(|Y-{\hat{y}}| = {\text {dist}}(Y,\partial \Omega )\), then \({\hat{y}} \in {\hat{\Delta }}_Q \subseteq \Delta _Q \subseteq Q\).

Proof

By definitions and using (2.20) several times, we get

$$\begin{aligned} |{\hat{y}} - {\hat{x}}_Q|{} & {} \le |{\hat{y}} - Y| + |Y - P_Q(\theta )| + |P_Q(\theta ) - {\hat{x}}_Q|< {\text {dist}}(Y,\partial \Omega ) + 2r_Q \\{} & {} \le |Y - P_Q(\theta )| + {\text {dist}}(P_Q(\theta ),\partial \Omega ) + 2r_Q < r_Q + r_Q + 2r_Q = 4r_Q, \end{aligned}$$

and thus, \({\hat{y}} \in {\hat{\Delta }}_Q \subset \Delta _Q \subset Q\) by (2.19). \(\square \)

2.5 Elliptic PDE Estimates

Here we collect some of the standard estimates for divergence form elliptic operators with real coefficients that will be used throughout the paper. In this section, \(\Omega \) always denotes a uniform domain in \(\mathbb {R}^{n+1}\), \(n\ge 1\), with n-Ahlfors regular boundary. We recall that a divergence form elliptic operator is of the form

$$\begin{aligned} L (\cdot ) {:}{=}- \mathop {{\text {div}}}\nolimits (A \nabla \cdot ), \end{aligned}$$

viewed in the weak sense, where A is a uniformly elliptic matrix, that is, \(A = (a_{i,j})_{i,j=1}^{n+1}\) is an \((n+1)\times (n+1)\) matrix-valued function on \({\mathbb {R}}^{n+1}\) and there exists a constant \(\Lambda \), the ellipticity parameter, such that

$$\begin{aligned} \Lambda ^{-1} |\xi |^2 \le A(X)\xi \cdot \xi ,\qquad \text {and}\qquad \Vert a_{i,j}\Vert _{L^\infty ({\mathbb {R}}^{n+1})} \le \Lambda , \end{aligned}$$

for all \(\xi \in {\mathbb {R}}^{n+1}\) and almost every \(X \in \Omega \). We say that a constant depends on ellipticity if it depends on \(\Lambda \). Given an open set \({\mathcal {O}} \subset {\mathbb {R}}^{n+1}\) we say a function \(u\in W^{1,2}_{{{\,\textrm{loc}\,}}}({\mathcal {O}})\) is a solution to \(Lu = 0\) in \({\mathcal {O}}\) if

$$\begin{aligned} \iint _{{\mathcal {O}}}A \nabla u \cdot \nabla \varphi \, dX = 0, \quad \text { for every } \varphi \in C_{{{\,\textrm{c}\,}}}^\infty ({\mathcal {O}}). \end{aligned}$$

The most fundamental estimate for solutions to divergence form elliptic equations is the following local energy inequality.

Lemma 2.27

(Caccioppoli inequality) Let \(L = -\mathop {{\text {div}}}\nolimits A \nabla \) be a divergence form elliptic operator and u a solution to \(Lu = 0\) in an open set \({\mathcal {O}}\). If \(a> 0\) and \(B = B(X_0,r)\) is a ball such that \((1 + a)B \subset {\mathcal {O}}\) then

$$\begin{aligned} \iint _B |\nabla u|^2 \, dX \lesssim r^{-2} \iint _{(1 + a)B} u^2 \, dX, \end{aligned}$$

where the implicit constant depends only on a, dimension and ellipticity.
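For the reader's convenience, we sketch the standard argument under the above assumptions. Let \(\eta \in C_{{{\,\textrm{c}\,}}}^\infty ((1+a)B)\) be a cutoff with \(\eta \equiv 1\) on B and \(|\nabla \eta | \lesssim (ar)^{-1}\). Testing the equation with \(\varphi = \eta ^2 u\) (admissible by density) gives \(\iint \eta ^2 A\nabla u \cdot \nabla u \, dX = -2\iint \eta u \, A\nabla u \cdot \nabla \eta \, dX\), and hence, by ellipticity and Young's inequality,

$$\begin{aligned} \Lambda ^{-1} \iint \eta ^2 |\nabla u|^2 \, dX \le \frac{1}{2\Lambda } \iint \eta ^2 |\nabla u|^2 \, dX + 2\Lambda ^3 \iint u^2 |\nabla \eta |^2 \, dX. \end{aligned}$$

Absorbing the first term on the right-hand side and using the properties of \(\eta \) yields the stated estimate.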

Solutions to divergence form elliptic equations are locally Hölder continuous.

Lemma 2.28

(Hölder continuity of solutions, [21, 57]) Let \(L = -\mathop {{\text {div}}}\nolimits A \nabla \) be a divergence form elliptic operator and u a non-negative solution to \(Lu = 0\) in an open set \({\mathcal {O}}\). Suppose that \(B = B(X_0,R)\) is a ball such that \(\lambda B {:}{=}B(X_0,2\lambda R) \subset {\mathcal {O}}\) for \(\lambda > 1\). Then we have

$$\begin{aligned} |u(X) - u(Y)| \le C \left( \frac{|X - Y|}{\lambda R} \right) ^{\alpha } \sup _{\lambda B} u, \qquad \text {for all } X, Y \in B, \end{aligned}$$

where \(\alpha \) and C depend only on dimension and ellipticity.

Another celebrated result is Moser’s Harnack inequality for non-negative solutions.

Lemma 2.29

(Harnack inequality [53]) Let \(L = -\mathop {{\text {div}}}\nolimits A \nabla \) be a divergence form elliptic operator and u a non-negative solution to \(Lu = 0\) in an open set \({\mathcal {O}}\). If B is a ball such that \(2B \subset {\mathcal {O}}\) then \(\sup _{B} u \le C \inf _{B} u\), where C depends only on dimension and ellipticity.

We now turn our attention to the elliptic measure, for which we borrow the setting of [4]. Consider the compactified space \(\overline{{\mathbb {R}}}^{n+1}={\mathbb {R}}^{n+1}\cup \{\infty \}\); following [32, Sect. 9], we will understand all topological notions with respect to this space. Hence, for instance, if \(\Omega \) is unbounded, then \(\infty \in \partial \Omega \), and the functions in the space \(C(\partial \Omega )\) are assumed continuous and real-valued, so that all functions in \(C(\partial \Omega )\) lie in \(L^{\infty }(\partial \Omega )\) even if \(\partial \Omega \) is unbounded.

Given a domain \(\Omega \) and a divergence form elliptic operator L, we let \(\omega _{L,\Omega }^X\) denote the elliptic measure with pole at \(X \in \Omega \). That is, by the Riesz representation theorem, for every \(X \in \Omega \) there exists a probability measure \(\omega _{L,\Omega }^X\) such that if \(f \in C_{{{\,\textrm{c}\,}}}(\partial \Omega )\), then the solution to \(Lu = 0\) in \(\Omega \) with \(u \in C({\overline{\Omega }})\) and \(u = f\) on \(\partial \Omega \), constructed via Perron’s method, satisfies

$$\begin{aligned} u(X) = u_f(X) = \int _{\partial \Omega } f(y) \, d\omega ^{X}_{L,\Omega }(y). \end{aligned}$$
(2.30)

When the context is clear, we simply denote \(\omega ^X {:}{=}\omega ^X_{L,\Omega }\) and, with slight abuse of terminology, call the family of elliptic measures \(\omega = \omega _L = \omega _{L,\Omega } {:}{=}\{\omega ^X\}_X\) just the elliptic measure.

Our main results consider characterizations and implications given by quantitative absolute continuity of elliptic measure in the sense of Muckenhoupt’s \(A_\infty \) condition [13, 56]:

Definition 2.31

(\(A_\infty \) for elliptic measure) Let L be a divergence form elliptic operator in \(\Omega \). We say that the elliptic measure \(\omega = \omega _{L,\Omega }\) satisfies the \(A_\infty \) condition with respect to surface measure (denote \(\omega \in A_\infty (\sigma )\)) if there exist constants \(C \ge 1\) and \(s > 0\) such that if \(B {:}{=}B(x,r)\) with \(x \in \partial \Omega \) and \(r \in (0,{{\,\textrm{diam}\,}}(\partial \Omega )/4)\) and \(A \subset \Delta {:}{=}B \cap \partial \Omega \) is a Borel set, then

$$\begin{aligned} \omega ^Y(A) \le C \left( \frac{\sigma (A)}{\sigma (\Delta )} \right) ^s \omega ^Y(\Delta ),\qquad \text {for every }Y \in \Omega \setminus 4B. \end{aligned}$$

We refer to C and s here together as the “\(\omega _L\in A_\infty (\sigma )\) constants”.
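As a classical model example (recorded only for orientation), let \(\Omega = {\mathbb {R}}^{n+1}_+\) and \(L = -\Delta \). Then \(\omega ^Y\) is the harmonic measure of the half-space, which for \(Y = (y_0,t)\) is given by the Poisson kernel:

$$\begin{aligned} d\omega ^Y(x) = k^Y(x) \, dx, \qquad k^Y(x) = \frac{c_n \, t}{(t^2 + |x - y_0|^2)^{(n+1)/2}}. \end{aligned}$$

If \(B = B(x_0,r)\) is centered on \(\partial \Omega \) and \(Y = (y_0,t) \in \Omega \setminus 4B\), then either \(|y_0 - x_0| \ge 2r\) or \(t \ge 3r\), and in both cases \(k^Y(x) \approx k^Y(x_0)\) for all \(x \in \Delta = B \cap \partial \Omega \), with dimensional implicit constants. Hence \(\omega ^Y(A) \approx k^Y(x_0) \sigma (A)\) for every Borel set \(A \subset \Delta \), and the condition of Definition 2.31 holds with \(s = 1\).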

Next we discuss the Green’s function and its properties.

Definition 2.32

(Green’s function) Let \(L=-{\text {div}}A\nabla \) be a not necessarily symmetric divergence form elliptic operator with bounded measurable coefficients. There exists a unique non-negative function \(G=G_L=G_{L,\Omega }:\Omega \times \Omega \rightarrow \mathbb {R}\), called Green’s function for L, satisfying the following properties:

  1. (i)

    For each \(X,Y\in \Omega \),

    $$\begin{aligned} 0\le G(X,Y)\lesssim \left\{ \begin{matrix}|X-Y|^{1-n},&{}\qquad n\ge 2,&{}\qquad X\ne Y,\\[1mm] 1,&{}\qquad n=1,&{}\qquad |X-Y|\ge \delta (Y)/10,\\[1mm] \log \big (\frac{\delta (Y)}{|X-Y|}\big ),&{}\qquad n=1,&{}\qquad |X-Y|\le \delta (Y)/2. \end{matrix}\right. \qquad \end{aligned}$$
    (2.33)
  2. (ii)

    For every \(a \in (0,1)\) there exists \(c_a\) such that

    $$\begin{aligned} G(X,Y)\ge \left\{ \begin{matrix}c_a|X-Y|^{1-n},&{}\qquad n\ge 2, &{}\qquad |X-Y| \le a\delta (Y),\\[1mm] c_a\log \big (\frac{\delta (Y)}{|X-Y|}\big ),&{}\qquad n=1, &{}\qquad |X-Y|\le a\delta (Y). \end{matrix}\right. \end{aligned}$$
    (2.34)
  3. (iii)

    For each \(Y\in \Omega \), \(G(\cdot ,Y)\in C({\overline{\Omega }}\backslash \{Y\})\cap W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega \backslash \{Y\})\) and \(G(\cdot ,Y)|_{\partial \Omega }\equiv 0\).

  4. (iv)

    For each \(X\in \Omega \), the identity \(LG(\cdot ,X)=\delta _X\) holds in the distributional sense; that is,

$$\begin{aligned} \iint _\Omega A(Y)\nabla _YG(Y,X)\cdot \nabla \Phi (Y)\,dY=\Phi (X),\qquad \text {for any }\Phi \in C_c^{\infty }(\Omega ). \end{aligned}$$
  5. (v)

    For each \(X,Y\in \Omega \) with \(X\ne Y\), if \(L^*=-{\text {div}}A^T\nabla \), then

    $$\begin{aligned} G_L(X,Y)=G_{L^*}(Y,X). \end{aligned}$$

If \(n\ge 2\), then it has been known for a long time that a non-negative Green’s function exists for any domain, without any further regularity assumptions on the geometry [31, 33]. If \(n=1\), the situation has been more challenging; for instance, key Sobolev embeddings, available when \(n\ge 2\), fail when \(n=1\), and the fundamental solution changes sign when \(n=1\) [47]; nevertheless, the paper [23] shows the construction of a Green’s function for any domain with either finite volume or finite width, and also, for the domain above a Lipschitz graph, improving on the result of [22] (but non-negativity is not shown in these works). For our setting of uniform domains \(\Omega \subset \mathbb {R}^{n+1}\), \(n\ge 1\), with n-Ahlfors regular boundary, the unified (for \(n=1\) and \(n\ge 2\)) existence of the non-negative Green’s function for arbitrary divergence form elliptic operators L of merely bounded measurable coefficients with the properties stated above follows from the much more general, recent construction in [20, Theorem 14.60, Lemma 14.78].
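For orientation, in the model case \(\Omega = {\mathbb {R}}^{n+1}_+\) and \(L = -\Delta \) the Green's function is given explicitly by reflection: writing \({\overline{Y}}\) for the reflection of Y across \(\partial \Omega \), one has

$$\begin{aligned} G(X,Y) = c_n \big (|X - Y|^{1-n} - |X - {\overline{Y}}|^{1-n}\big ) \text { for } n \ge 2, \qquad G(X,Y) = \frac{1}{2\pi } \log \frac{|X - {\overline{Y}}|}{|X - Y|} \text { for } n = 1, \end{aligned}$$

with a dimensional constant \(c_n > 0\). Since \(|X - Y| \le |X - {\overline{Y}}| \le |X - Y| + 2\delta (Y)\) and \(|X - {\overline{Y}}| \ge 2\delta (Y) - |X - Y|\) for \(X, Y \in \Omega \), the bounds (2.33) and (2.34) and the properties (iii)–(v) can be verified directly from these formulas.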

The Green’s function and the elliptic measure are related through the following Riesz formula: For every \(F \in C_{{{\,\textrm{c}\,}}}^\infty ({\mathbb {R}}^{n+1})\), one has that

$$\begin{aligned} \int _{\partial \Omega } F(y) \, d\omega ^X(y) = F(X) - \iint _{\Omega } A^T \nabla _Y G_L(X,Y) \cdot \nabla _Y F(Y) \, dY. \end{aligned}$$
(2.35)

We need several estimates from the literature for the elliptic measure and Green’s function in our proofs and we list these estimates below. Although these have appeared in several works in the literature [4, 7, 32], we cite [20] for their unified consideration of the cases \(n=1\) and \(n\ge 2\) and arbitrary elliptic operators on uniform domains with Ahlfors regular boundary. The first estimate is a non-degeneracy estimate for elliptic measure.

Lemma 2.36

(Bourgain’s estimate, [20, Lemma 15.1]) Let \(x \in \partial \Omega \) and \(r \in (0,{{\,\textrm{diam}\,}}(\partial \Omega )]\) and let \(Y_{x,r}\) be a corkscrew point relative to x at scale r. We have

$$\begin{aligned} \omega ^{Y_{x,r}}(\Delta (x,r)) \ge c, \end{aligned}$$

where c depends only on dimension, ellipticity, and the Ahlfors regularity constant. Here and below \(\Delta (x,r) = B(x,r) \cap \partial \Omega \).

The elliptic measure is locally doubling in the following sense.

Lemma 2.37

((Local) doubling property, [20, Lemma 15.43]) Let \(x \in \partial \Omega \) and \(r \in (0, {{\,\textrm{diam}\,}}(\Omega )] \). If \(Y \in \Omega {\setminus } B(x,4r)\) then \(\omega ^Y(\Delta (x,2r)) \le C\omega ^Y(\Delta (x,r))\), where C depends on dimension, ellipticity, Harnack chain, corkscrew and Ahlfors regularity constants.

The following estimate allows us to connect the Green’s function and elliptic measure in a quantitative way:

Lemma 2.38

(CFMS estimate, [20, Lemma 15.28]) If \(x \in \partial \Omega \) and \(r \in (0, {{\,\textrm{diam}\,}}(\partial \Omega )]\) then

$$\begin{aligned} \frac{G_L(X, Y_{x,r})}{r} \approx \frac{\omega ^X_L(\Delta (x,r))}{r^n}, \end{aligned}$$

for every \(X \in \Omega \setminus B(x,2r)\), where the implicit constants depend on dimension, ellipticity, Harnack chain, corkscrew and Ahlfors regularity constants, and \(Y_{x,r}\) is any corkscrew point relative to x at scale r.

Non-negative solutions u to \(Lu = 0\) that vanish on an open subset of the boundary of a uniform domain must do so at the same rate:

Lemma 2.39

(Boundary Harnack Principle, [20, Theorem 15.64]) Let \(x \in \partial \Omega \) and \(r \in (0, {{\,\textrm{diam}\,}}(\partial \Omega )]\), and let u and v be positive functions such that \(Lu = Lv = 0\) in \(\Omega \cap B(x,4r)\) that vanish continuously on \(\partial \Omega \cap B(x,4r)\). Then

$$\begin{aligned} \frac{u(X)}{v(X)} \approx \frac{u(Y_{x,r})}{v(Y_{x,r})}, \qquad \text {for all } X \in B(x,r) \cap \Omega , \end{aligned}$$

where \(Y_{x,r}\) is a corkscrew point relative to x at scale r. The implicit constants depend on dimension, ellipticity, Harnack chain, corkscrew and Ahlfors regularity constants.

We have the following standard consequence of the boundary Harnack Principle.

Lemma 2.40

(Change of Poles, [20, Lemma 15.61]) Let \(x\in \partial \Omega \), \(r\in (0,{{\,\textrm{diam}\,}}(\partial \Omega ))\), \(Y_{x,r}\) a corkscrew point relative to x at scale r, and \(E\subset \Delta (x,r)\) a Borel set. Then

$$\begin{aligned} \omega ^{Y_{x,r}}(E)\approx \frac{\omega ^X(E)}{\omega ^X(\Delta (x,r))},\qquad \text {for each }X\in \Omega \backslash B(x,2r), \end{aligned}$$

where the implicit constants depend on dimension, ellipticity, and structural constants of \(\Omega \).

Finally, solutions u to \(Lu = 0\) that vanish continuously on the boundary do so at a Hölder rate:

Lemma 2.41

( [20, Lemma 11.32]) Let \(x \in \partial \Omega \) and \(r \in (0, {{\,\textrm{diam}\,}}(\partial \Omega )]\), and let u be a solution to \(Lu= 0\) in \(\Omega \cap B(x,4r)\) that vanishes continuously on \(\partial \Omega \cap B(x,4r)\). We have the bound

$$\begin{aligned} |u(X)| \le C\left( \frac{{\text {dist}}(X,\partial \Omega )}{r}\right) ^\alpha \sup _{Y \in B(x,4r)} |u(Y)|, \end{aligned}$$

for every \(X \in B(x,r) \cap \Omega \), where \(\alpha \) and C depend on dimension, the Harnack chain, corkscrew and Ahlfors regularity constants.

3 Set-Up for Theorem 1.1

In this section we provide some of the preliminary estimates and observations required to prove Theorem 1.1. We divide this section into two subsections. The first records Carleson measure estimates and non-tangential convergence in our setting. The second subsection contains a few lemmas, which are roughly adapted from ideas in [16] and play a crucial role in our analysis. Throughout this section, we suppose that the assumptions of Theorem 1.1 hold; that is, \(\Omega \) is a uniform domain with Ahlfors regular boundary and L is a not necessarily symmetric divergence form elliptic operator such that \(\omega _L\in A_\infty (\sigma )\). For the sequel, recall that \(\delta (\cdot ) {:}{=}{\text {dist}}(\cdot ,\partial \Omega )\).

3.1 CME and Non-tangential Convergence

One of the key tools in most constructions of \(\varepsilon \)-approximators (see, for example, [40]) is the \(L^2\)-type Carleson measure estimate (CME), that is, the Carleson property (see Definition 2.7) of measures \(\mu _u\) such that \(d\mu _u = |\nabla u|^2 \delta (Y) \, dY\) for a solution u to \(Lu = 0\). Under the hypotheses of Theorem 1.1, we have the following “classical” Carleson measure estimate and \(L^p\)-solvability of the Dirichlet problem for L for some p [1, 11, 26]:

Lemma 3.1

([26, Corollary 1.32]) Suppose that \(\Omega \) is a uniform domain in \(\mathbb {R}^{n+1}\), \(n\ge 1\), with n-Ahlfors regular boundary and L is a divergence form elliptic operator such that \(\omega _L \in A_\infty (\sigma )\). There exists a constant \(C \ge 1\) such that if \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) is a solution to \(Lu = 0\) then

$$\begin{aligned} \sup _{r > 0} \sup _{x \in \partial \Omega } \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla u(Y)|^2 \delta (Y) \, dY \le C \Vert u\Vert _{L^\infty (\Omega )}^2. \end{aligned}$$

The constant C depends on the structural constants and the \(\omega _L\in A_\infty (\sigma )\) constants.

Lemma 3.2

([34, Theorem 1.3]) Suppose that \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), is a uniform domain with Ahlfors regular boundary and L is a divergence form elliptic operator in \(\Omega \) such that \(\omega _L \in A_\infty (\sigma )\). There exist \(1< p < \infty \) and \(C \ge 1\) (depending on p) such that for every \(f \in L^p(\partial \Omega )\) there exists a solution u to the boundary value problem

$$\begin{aligned} {\left\{ \begin{array}{ll} Lu = 0 &{} \text { in } \Omega , \\ \Vert Nu \Vert _{L^p(\partial \Omega )} \le C\Vert f\Vert _{L^p(\partial \Omega )}, \\ \lim \limits _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} u(Y) = f(x), &{} \text { for } \sigma \text {-a.e. } x \in \partial \Omega . \end{array}\right. } \end{aligned}$$

Moreover, the solution is of the form

$$\begin{aligned} u(X) = u_f(X) = \int _{\partial \Omega } f(y) \, d\omega ^X(y). \end{aligned}$$

Using the \(L^p\) result of Lemma 3.2 gives us the following non-tangential convergence result for \(L^\infty \) data:

Lemma 3.3

Suppose that \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), is a uniform domain with Ahlfors regular boundary and L is a divergence form elliptic operator in \(\Omega \) such that \(\omega _L \in A_\infty (\sigma )\). If \(f \in L^\infty (d\sigma )\), then the solution

$$\begin{aligned} u_f(X) = \int _{\partial \Omega } f(y) \, d\omega ^X(y), \end{aligned}$$

converges non-tangentially to f; that is,

$$\begin{aligned} \lim \limits _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} u_f(Y) = f(x), \quad \text { for } \sigma \text {-a.e. } x \in \partial \Omega . \end{aligned}$$
(3.4)

Equivalently,

$$\begin{aligned} \lim \limits _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} u_f(Y) = f(x), \quad \text { for } \omega ^Y\text {-a.e. } x \in \partial \Omega . \end{aligned}$$
(3.5)

Proof

Let p be as in Lemma 3.2. Let us note that \(\omega ^Y\) and \(\omega ^{Y'}\) are mutually absolutely continuous for all \(Y, Y' \in \Omega \) by the Harnack chain condition and the Harnack inequality (see Lemma 2.29). Moreover, \(\omega ^{Y}\) is mutually absolutely continuous with respect to \(\sigma \) by the \(A_\infty (\sigma )\) condition [13, Lemma 5] and hence, (3.4) and (3.5) are equivalent. In addition, we have \(f \in L^\infty (\partial \Omega ,d\sigma )\) if and only if \(f \in L^\infty (\partial \Omega ,d\omega ^Y)\) for all \(Y \in \Omega \), with \(\Vert f\Vert _{L^\infty (\partial \Omega ,d\sigma )} = \Vert f\Vert _{L^\infty (\partial \Omega ,d\omega ^Y)}\).

If \({{\,\textrm{diam}\,}}(\partial \Omega ) < \infty \), then \(f \in L^p(\partial \Omega )\) and the claim follows from Lemma 3.2. Thus, we may assume that \({{\,\textrm{diam}\,}}(\partial \Omega ) = \infty \). We show that the claim holds in any ball B centered at \(\partial \Omega \). Let us fix \(x_0 \in \partial \Omega \) and \(r > 0\). We write \(f = g + h\) for \(g = f\mathbb {1}_{\Delta (x_0,100r)}\), where \(\Delta (x_0,100r) {:}{=}B(x_0,100r) \cap \partial \Omega \). By linearity, we have that \(u_f(X) = u_g(X) + u_h(X)\). Since \(f \in L^\infty (\partial \Omega )\), we know that \(g \in L^p(\partial \Omega )\) and thus, Lemma 3.2 gives us

$$\begin{aligned} \lim \limits _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} u_g(Y) = g(x) = f(x), \quad \text { for } \sigma \text {-a.e. } x \in \partial \Omega \cap B(x_0,r). \end{aligned}$$

Therefore it suffices to show

$$\begin{aligned} \lim \limits _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} u_h(Y) = 0, \quad \text { for } \sigma \text {-a.e. } x \in \partial \Omega \cap B(x_0,r). \end{aligned}$$
(3.6)

We now have

$$\begin{aligned} |u_h(X)| \le \Vert f\Vert _{L^\infty (\partial \Omega )} \omega ^X(\partial \Omega \setminus \Delta (x_0,100r)), \end{aligned}$$
(3.7)

for any \(X \in \Omega \). Let \(\phi \in C_{{{\,\textrm{c}\,}}}(\Delta (x_0,100r))\) be a non-negative function such that \(\phi (x) \le 1\) for every \(x \in \partial \Omega \) and \(\phi \equiv 1\) on \(\Delta (x_0,50r)\). Then, since \(\omega ^X\) is a probability measure, we get

$$\begin{aligned} \omega ^X(\partial \Omega \setminus \Delta (x_0,100r )) \le 1 - v(X) {:}{=}1 - \int _{\partial \Omega } \phi (y) \, d\omega ^X(y). \end{aligned}$$
(3.8)

Since \(\phi \) is a compactly supported continuous function on \(\partial \Omega \), we know that v is a bounded solution to \(Lv = 0\) in \(\Omega \) that is continuous on \({\overline{\Omega }}\) and satisfies \(v = \phi \equiv 1\) on \(\Delta (x_0,50r)\). In particular, we have

$$\begin{aligned} \lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} v(Y) = 1, \end{aligned}$$
(3.9)

for every \(x \in \Delta (x_0,50r)\). Thus, combining (3.7) and (3.8) gives us \(|u_h(X)| \le \Vert f\Vert _{L^\infty (\partial \Omega )}\left( 1 - v(X) \right) \) for every \(X\in \Omega \) and (3.6) follows then from (3.9). This completes the proof. \(\square \)

3.2 A Few Important Lemmas

In this subsection, we prove some lemmas that will be important for the proof of Theorem 1.1. The first three lemmas were inspired by the ideas in [16].

Lemma 3.10

Let \(\Omega \) be a domain in \(\mathbb {R}^{n+1}\), \(n\ge 1\), and let L be a not necessarily symmetric divergence form elliptic operator. Suppose that \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega )\) is a weak solution to \(Lu = 0\) in \(\Omega \), that \(\Omega '\subset \Omega \) is a Wiener-regular domain which is compactly contained in \(\Omega \), and fix \(X_*\in \Omega '\). If \(n=1\), assume in addition that \(\Omega '\) is a uniform domain with n-Ahlfors regular boundary. Then \(|\nabla u|^2G_{L,\Omega '}(X_*,\cdot )\in L^1(\Omega ')\), and

$$\begin{aligned} \iint _{\Omega '} |\nabla u(Y)|^2 G_{L,\Omega '}(X_*, Y) \, dY \approx \int _{\partial \Omega '} (u(y) - u(X_*))^2 \, d\omega _{L, \Omega '}^{X_*}(y), \end{aligned}$$
(3.11)

where the implicit constants depend only on ellipticity.

Proof

Throughout this argument we fix \(X_*\in \Omega '\), we let \(r={\text {dist}}(X_*,\partial \Omega ')/8\), and we write \(G(Y) {:}{=}G_{\Omega '}(X_*,Y)\). First, we show the finiteness of the integral in the left-hand side of (3.11). If \(n=1\), then it is trivial by (2.33) and the Meyers reverse Hölder estimate for gradients of solutions [52], so suppose that \(n\ge 2\). We write

$$\begin{aligned} \iint _{\Omega '} |\nabla u|^2 G \, dY{} & {} =\iint _{\Omega '\backslash B(X_*,r)} |\nabla u|^2 G\, dY \\{} & {} \quad +\iint _{B(X_*,r)} |\nabla u|^2 G \, dY {=}{:}T_1+T_2. \end{aligned}$$

Since \(G\in L^{\infty }(\Omega '\backslash B(X_*,r))\) and \(\nabla u\in L^2(\Omega ')\), it is clear that \(T_1<\infty \). As for \(T_2\), for each \(k\in \mathbb {N}_0\), let \(A_k {:}{=}B(X_*,2^{-k}r)\backslash B(X_*,2^{-k-1}r)\), and using (2.33), Lemma 2.27, and Lemma 2.28, we see that

$$\begin{aligned} T_2{} & {} =\sum _{k=0}^{\infty }\iint _{A_k}|\nabla u|^2 G\,dY \lesssim \sum _{k=0}^{\infty }(2^{-k}r)^{-n+1}\iint _{B(X_*,2^{-k}r)}|\nabla (u(Y)-u(X_*))|^2\,dY\\{} & {} \lesssim \sum _{k=0}^{\infty }(2^{-k}r)^{-n-1}\iint _{B(X_*,2^{-k+1}r)}|u(Y)-u(X_*)|^2\,dY \lesssim \sum _{k=0}^{\infty }2^{-2\alpha k}\Vert u\Vert _{L^{\infty }(B(X_*,4r))}^2<\infty . \end{aligned}$$

We turn to the proof of (3.11). By the ellipticity of A and the product rule, we see that

$$\begin{aligned} \iint _{\Omega '}|\nabla u|^2 G\,dY\approx & {} \iint _{\Omega '}A\nabla u\cdot (\nabla u) G\,dY =\iint _{\Omega '}A\nabla u\cdot \nabla (uG)\,dY\nonumber \\{} & {} -\iint _{\Omega '}A\nabla u\cdot (\nabla G)u\,dY {=}{:}T_3+T_4,\qquad \end{aligned}$$
(3.12)

provided that the last two integrals are finite, which we now show. First, we claim that \(|u||\nabla u||\nabla G|\in L^1(\Omega ')\). As before, it is enough to show that \(|u||\nabla u||\nabla G|\in L^1(B(X_*,r))\). Let \(A_k\) be as above, and for each \(k\in \mathbb {N}_0\), let \(\{B_k^j\}_{j=1}^{J_k}\) be a cover of \(A_k\) by balls centered on \(A_k\) of radius \(2^{-k-4}r\) with uniformly bounded overlap. Then \(J_k\lesssim 1\), and we have that

$$\begin{aligned} \iint _{B(X_*,r)}|u||\nabla u||\nabla G|\,dY \le C\sum _{k=0}^{\infty }\max _j\iint _{B_k^j}|\nabla u||\nabla G|\,dY\le \left\{ \begin{matrix}C\sum _{k=0}^{\infty }2^{-k\alpha },&{}\quad n\ge 2,\\[1mm] C\sum _{k=0}^{\infty }k2^{-k\alpha },&{}\quad n=1,\end{matrix}\right. \end{aligned}$$

where the last estimate follows from using Lemma 2.27 for both u and G, then (2.33) and Lemma 2.28. This proves the claim. By the product rule and the triangle inequality, we have also shown that \(|\nabla u||\nabla (uG)|\in L^1(\Omega ')\). Finally, by boundedness of A we conclude that \(A\nabla u\nabla (uG)\) and \(A\nabla u(\nabla G)u\) belong to \(L^1(\Omega ')\), as desired.

The next step is to show that \(T_3=0\). For each \(M\in \mathbb {N}\) large enough, let \(\psi _M\in C^{\infty }(\Omega )\) satisfy \(\psi _M\ge 0\), \(\psi _M\equiv 1\) in \(\Omega \backslash B(X_*,\frac{2}{M})\), \(\psi _M\equiv 0\) in \(B(X_*,\frac{1}{M})\), and \(|\nabla \psi _M|\lesssim M\). We claim that

$$\begin{aligned} \iint _{\Omega '}A\nabla u\nabla (uG)\psi _M\,dY\rightarrow 0,\qquad \text {as }M\rightarrow \infty . \end{aligned}$$
(3.13)

Fix \(M\in \mathbb {N}\) large enough. Since \(uG\psi _M\in W_0^{1,2}(\Omega ')\) and \(Lu=0\) in \(\Omega '\), we have \(\iint _{\Omega '}A\nabla u\cdot \nabla (uG\psi _M)\,dY=0\), and hence, by the product rule, we have that

$$\begin{aligned} \Big |\iint _{\Omega '}A\nabla u\nabla (uG)\psi _M\,dY\Big |{} & {} =\Big |\iint _{\Omega '}A\nabla u(\nabla \psi _M)uG\,dY\Big |\\{} & {} \le CM\iint _{B(X_*,\frac{2}{M})\backslash B(X_*,\frac{1}{M})}|\nabla u|G\,dY \le \left\{ \begin{matrix}CM^{-\alpha },&{}\quad n\ge 2,\\[1mm] CM^{-\alpha }\log M,&{}\quad n=1,\end{matrix}\right. \end{aligned}$$

where the constant C depends on u but not on M, and in the last estimate we once again used (2.33), Lemma 2.27, and Lemma 2.28. Thus we have shown (3.13). Since it is also true that \(A\nabla u\nabla (uG)\psi _M\rightarrow A\nabla u\nabla (uG)\) pointwise a.e. in \(\Omega '\) and since we have already proved that \(|\nabla u||\nabla (uG)|\in L^1(\Omega ')\), then by the Lebesgue Dominated Convergence Theorem we conclude that \(T_3=0\).

We proceed with the proof of (3.11) as follows:

$$\begin{aligned} T_4{} & {} =-\frac{1}{2} \iint _{\Omega '} A^T\nabla G\nabla (u^2)\, dY = \frac{1}{2} \Big (\int _{\partial \Omega '} u(y)^2 \, d\omega _{L, \Omega '}^{X_*}(y)-u(X_*)^2\Big ) \\{} & {} = \frac{1}{2} \int _{\partial \Omega '} (u(y) - u(X_*))^2 \, d\omega _{L, \Omega '}^{X_*}(y), \end{aligned}$$

where we used the Riesz formula (2.35), and in the last identity we used that \(\omega _{L, \Omega '}^{X_*}\) is a probability measure, and that \(Lu= 0\) in \(\Omega '\) so that

$$\begin{aligned} \int _{\partial \Omega '} u(y) \, d\omega _{L, \Omega '}^{X_*}(y) = u(X_*), \end{aligned}$$

and hence

$$\begin{aligned} \int _{\partial \Omega '} u(X_*)u(y) \, d\omega _{L, \Omega '}^{X_*}(y) = u(X_*) \int _{\partial \Omega '} u(y) \, d\omega _{L, \Omega '}^{X_*}(y) = u(X_*)^2. \end{aligned}$$

This finishes the proof. \(\square \)
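
In the symmetric constant-coefficient case \(A = I\), the argument above gives (3.11) with equality and constant \(\tfrac{1}{2}\) (indeed, \(T_3 = 0\) and the first step in (3.12) is an identity). This can be checked numerically in the unit disk with \(X_* = 0\), where \(G_{L,\Omega '}(0,Y) = \tfrac{1}{2\pi }\log (1/|Y|)\) and \(\omega ^{0}_{L,\Omega '}\) is normalized arclength on the circle. The harmonic function in the Python sketch below is our own illustrative choice; the computation is not part of the proof.

```python
import numpy as np

# Model check of Lemma 3.10: unit disk, L = -Laplacian, X_* = 0, so that
#   iint_D |grad u|^2 G(0, Y) dY  =  (1/2) * int_{bdry D} (u - u(0))^2 d omega^0,
# with G(0, Y) = (1/(2 pi)) log(1/|Y|) and omega^0 = normalized arclength.
def f(z):                        # holomorphic; u = Re f is harmonic in the disk
    return z**2 + (0.3 + 0.4j) * z**3

def fprime(z):
    return 2 * z + 3 * (0.3 + 0.4j) * z**2

nr, nt = 800, 800
rho = (np.arange(nr) + 0.5) / nr                 # radial midpoints in (0, 1)
th = 2 * np.pi * (np.arange(nt) + 0.5) / nt      # angular midpoints
R, T = np.meshgrid(rho, th)
Z = R * np.exp(1j * T)

# Left-hand side: polar quadrature; |grad u| = |f'| for u = Re f.
green = np.log(1.0 / R) / (2 * np.pi)
lhs = np.sum(np.abs(fprime(Z))**2 * green * R) * (1.0 / nr) * (2 * np.pi / nt)

# Right-hand side: boundary average against harmonic measure from the center.
zb = np.exp(1j * th)
rhs = 0.5 * np.mean((np.real(f(zb)) - np.real(f(0)))**2)

print(f"lhs = {lhs:.5f}, rhs = {rhs:.5f}")       # both are ~0.3125, as expected
```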

The following result is a direct consequence of the maximum principle and the DeGiorgi–Nash–Moser theory (see for instance the proof of [2, Proposition 3.2]).

Lemma 3.14

Let \(\Omega _1\) and \(\Omega _2\) be Wiener regular domains such that \(\Omega _1 \subseteq \Omega _2\). If \(G_i(X,Y)\) is the Green function for L in \(\Omega _i\), \(i = 1,2\), then

$$\begin{aligned} G_1(X,Y) \le G_2(X,Y) \quad \text { for every } (X, Y) \in \Omega _1 \times \Omega _1 \setminus \{X = Y\}. \end{aligned}$$

We will also use a corona-type decomposition of the elliptic measure \(\omega _L\) [4, 30, 41]. To formulate this decomposition, we recall the coherency and semicoherency of subcollections of \({\mathbb {D}}\):

Definition 3.15

We say that a subcollection \({\mathcal {S}}\subset {\mathbb {D}}\) is coherent if the following three conditions hold.

  1. (a)

    There exists a maximal element \(Q({\mathcal {S}}) \in {\mathcal {S}}\) such that \(Q \subset Q({\mathcal {S}})\) for every \(Q \in {\mathcal {S}}\).

  2. (b)

    If \(Q \in {\mathcal {S}}\) and \(P \in {\mathbb {D}}\) is a cube such that \(Q \subset P \subset Q({\mathcal {S}})\), then also \(P \in {\mathcal {S}}\).

  3. (c)

    If \(Q \in {\mathcal {S}}\), then either all children of Q belong to \({\mathcal {S}}\) or none of them do.

If \({\mathcal {S}}\) satisfies only conditions (a) and (b), then we say that \({\mathcal {S}}\) is semicoherent.
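
The conditions (a)–(c) are purely combinatorial, so they can be made concrete on a toy dyadic tree. In the Python sketch below (our own illustration, with dyadic intervals standing in for the cubes of \({\mathbb {D}}(\partial \Omega )\)), a regime is a finite set of cubes and the function tests the three conditions of Definition 3.15.

```python
# A toy model of Definition 3.15: dyadic intervals stand in for the cubes of
# D(partial Omega); the cube (k, j) is the interval [j 2^{-k}, (j+1) 2^{-k}).
# This is only an illustrative sketch of conditions (a)-(c), not part of any proof.

def contains(P, Q):                # is Q a (not necessarily proper) subcube of P?
    (kP, jP), (kQ, jQ) = P, Q
    return kQ >= kP and (jQ >> (kQ - kP)) == jP

def children(Q):
    k, j = Q
    return [(k + 1, 2 * j), (k + 1, 2 * j + 1)]

def is_coherent(S):
    S = set(S)
    tops = [Q for Q in S if all(contains(Q, P) for P in S)]
    if not tops:                   # (a) fails: no maximal element Q(S)
        return False
    QS = tops[0]
    for Q in S:
        P = Q
        while P != QS:             # (b): every cube between Q and Q(S) is in S
            P = (P[0] - 1, P[1] // 2)          # dyadic parent
            if not contains(QS, P) or P not in S:
                return False
        flags = [C in S for C in children(Q)]
        if any(flags) and not all(flags):      # (c): all children or none
            return False
    return True

I = (1, 0)
print(is_coherent({I, (2, 0), (2, 1)}))   # True: I together with both children
print(is_coherent({I, (2, 0)}))           # False: condition (c) fails at I
```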

We are ready to present the corona decomposition that we shall use in the sequel.

Lemma 3.16

([4, Proposition 3.1(b)]) Suppose that \(\Omega \) is a uniform domain and \(L = -\mathop {{\text {div}}}\nolimits A \nabla \) is a divergence form elliptic operator such that \(\omega _L \in A_\infty (\sigma )\). Then there exist constants \(C, M > 1\) and a decomposition of the dyadic system \({\mathbb {D}} = {\mathbb {D}}(\partial \Omega )\), with the following properties.

  1. (i)

    The dyadic grid breaks into a disjoint decomposition \({\mathbb {D}} = {\mathcal {G}} \cup {\mathcal {B}}\), the good cubes and bad cubes respectively.

  2. (ii)

    The family \({\mathcal {G}}\) has a disjoint decomposition \({\mathcal {G}} = \cup {\mathcal {S}}\) where each \({\mathcal {S}}\) is a coherent stopping time regime with maximal cube \(Q({\mathcal {S}})\).

  3. (iii)

    The maximal cubes and bad cubes satisfy a Carleson packing condition:

    $$\begin{aligned} \sum _{\begin{array}{c} Q \in {\mathcal {B}} \\ Q \subseteq R \end{array}} \sigma (Q) + \sum _{{\mathcal {S}}:Q({\mathcal {S}}) \subseteq R} \sigma (Q({\mathcal {S}})) \le C \sigma (R) \quad \text { for every } R \in {\mathbb {D}}. \end{aligned}$$
  4. (iv)

    On each stopping time \({\mathcal {S}}\), the elliptic measure ‘acts like surface measure’ in the sense that if \(Q \in {\mathcal {S}}\), then

    $$\begin{aligned} M^{-1} \frac{\sigma (Q)}{\sigma (Q({\mathcal {S}}))} \le \frac{\omega ^{X_{Q({\mathcal {S}})}}(Q)}{\omega ^{X_{Q({\mathcal {S}})}}(Q({\mathcal {S}}))} \le M \frac{\sigma (Q)}{\sigma (Q({\mathcal {S}}))}. \end{aligned}$$

Proof

Since \(\omega _L\in A_\infty (\sigma )\), then by Lemma 3.1 we have that the hypothesis (b) of [4, Proposition 3.1] is satisfied. By [4, Proposition 3.1], this yields a decomposition \(\mathbb {D}=\mathcal G\cup {\mathcal {B}}\) like the one described above (in fact we have that \({\mathcal {B}}=\varnothing \)), but with the caveat that the stopping time regimes need only be semicoherent. Then, by a standard mechanism (see for instance [19, pp. 56–57], and [9, Remark 2.13]), we can modify the stopping time regimes so that they are coherent, while the rest of the properties are still satisfied. \(\square \)

It is straightforward to check that for each semicoherent stopping time regime \({\mathcal {S}}\), there exists a collection of pairwise disjoint cubes \({\mathcal {F}}_{\mathcal {S}}\) such that \({\mathcal {S}}= {\mathbb {D}}_{{\mathcal {F}}_{\mathcal {S}}, Q({\mathcal {S}})}\). By Lemma 2.38 and the Harnack inequality, we get the following:

Lemma 3.17

Suppose that \(\Omega \) is a uniform domain in \(\mathbb {R}^{n+1}\), \(n\ge 1\), with n-Ahlfors regular boundary, that \(L = -\mathop {{\text {div}}}\nolimits A \nabla \) is a divergence form elliptic operator such that \(\omega _L \in A_\infty (\sigma )\), and that \({\mathcal {S}}\) is a semicoherent stopping time regime satisfying the property in Lemma 3.16(iv). Then

$$\begin{aligned} G_L(X_Q, Y) \sigma (Q) \approx \delta (Y) \qquad \text { for every } Q \in {\mathcal {S}}\text { and } Y \in \Omega ^*_{{\mathcal {F}}_{\mathcal {S}}, Q}, \end{aligned}$$

where the implicit constants depend only on dimension, ellipticity, \(\kappa \), Harnack chain, corkscrew, and Ahlfors regularity constants, as well as the constant M in Lemma 3.16(iv).

Proof

Fix \(Q\in {\mathcal {S}}\) and \(Y\in \Omega _{{\mathcal {F}}_{\mathcal {S}},Q}^*\). By definition, there exists \(P\in \mathbb {D}_{{\mathcal {F}}_{\mathcal {S}},Q}\) so that \(Y\in U_P^*\), and thus \(P\in {\mathcal {S}}\), \(P\subseteq Q\), and \(\delta (Y)\approx \ell (P)\). Let \(B_P' {:}{=}B(x_P,\eta a_0\ell (P))\) with \(\eta \in (0,1)\) small, and let \(X_P'\) be the corkscrew point for \(B_P'\). Define \(B_Q'\) and \(X_Q'\) analogously. We may guarantee that \(X_Q\in \Omega \backslash 2B_P'\) if \(\eta \) is chosen small enough depending only on the corkscrew constant. Then, since \(\delta (Y)\approx \delta (X_P')\) and \(|Y-X_P'|\lesssim \delta (Y)\), by the Harnack inequality (for L and \(L^*\)) and Harnack chains, Lemma 2.38, the doubling property of elliptic measure, and Ahlfors regularity, we have that

$$\begin{aligned} \frac{G_L(X_Q,Y)}{\delta (Y)}\approx \frac{G_L(X_Q,X_P')}{\ell (P)}\approx \frac{\omega ^{X_Q}(2B_P')}{\ell (P)^n} \approx \frac{\omega ^{X_Q'}(B_P')}{\sigma (P)}. \end{aligned}$$

Now, since \(X_{Q({\mathcal {S}})}\in \Omega \backslash 2B_Q'\) and \(P,Q\in {\mathcal {S}}\), then by Lemma 2.40, the doubling property of elliptic measure, Harnack chains, Harnack inequality, Bourgain’s estimate, and the property (iv) in Lemma 3.16, it follows that

$$\begin{aligned} \frac{\omega ^{X_Q'}(B_P')}{\sigma (P)}{} & {} \approx \frac{\omega ^{X_{Q({\mathcal {S}})}}(B_P')}{\sigma (P)}\frac{1}{\omega ^{X_{Q({\mathcal {S}})}}(B_Q')}\approx \frac{\omega ^{X_{Q({\mathcal {S}})}}(P)}{\sigma (P)}\frac{1}{\omega ^{X_{Q({\mathcal {S}})}}(Q)}\\{} & {} \approx \frac{1}{\sigma (Q({\mathcal {S}}))}\frac{\sigma (Q({\mathcal {S}}))}{\sigma (Q)}=\frac{1}{\sigma (Q)}, \end{aligned}$$

which completes the proof. \(\square \)

4 Existence of \(\varepsilon \)-Approximators: Proof of Theorem 1.1

In this section we prove Theorem 1.1. The key step in the proof is the construction of \({{\,\textrm{BV}\,}}_{{{\,\textrm{loc}\,}}}\) approximators since the existence of smooth approximators with more delicate properties follows then by using almost black box regularization arguments. More precisely, the heart of Theorem 1.1 is the following result:

Theorem 4.1

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n \ge 1\), be a uniform domain with Ahlfors regular boundary. Let \(L = -\mathop {{\text {div}}}\nolimits A\nabla \) be a (not necessarily symmetric) divergence form elliptic operator satisfying that \(\omega _L \in A_\infty (\sigma )\). Then, for any \(\varepsilon \in (0,1)\) there exists a constant \(C_\varepsilon \) such that if \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) is a solution to \(Lu = 0\) in \(\Omega \), then there exists \(\Phi = \Phi ^{\varepsilon } \in {{\,\textrm{BV}\,}}_{{{\,\textrm{loc}\,}}}(\Omega )\) satisfying

$$\begin{aligned} \Vert u - \Phi \Vert _{L^\infty (\Omega )} \le \varepsilon \Vert u\Vert _{L^\infty (\Omega )} \quad \text { and } \quad \sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi (Y)|\, dY \le C_\varepsilon \Vert u\Vert _{L^{\infty }(\Omega )}, \end{aligned}$$

where \(C_\varepsilon \) depends on \(\varepsilon \), structural constants, ellipticity and the \(\omega _L\in A_{\infty }(\sigma )\) constants.

As mentioned in the introduction (see the paragraph after Theorem 1.1), the proof of Theorem 4.1 is based on the construction of \(\varepsilon \)-approximators in [40, Theorem 1.3], with important differences. We will point out when we have to diverge from the strategy in [40].

4.1 Set-Up for the Proof of Theorem 4.1

We start by making some preliminary considerations and fixing some notation and terminology. Let u be a bounded solution to \(Lu = 0\), and let \(\varepsilon \in (0,1)\). Without loss of generality, we may assume that \(\Vert u\Vert _{L^\infty (\Omega )} = 1\), since otherwise we have \(u = 0\) or we can replace u with \(u/\Vert u\Vert _{L^\infty }\). We fix a stopping time regime \({\mathcal {S}}\) from Lemma 3.16, with maximal cube \(Q({\mathcal {S}})\). Following [28, 40], we label each cube \(Q \in {\mathbb {D}}\) depending on how much the function u oscillates in the corresponding fattened Whitney region \(U_Q^*\). To be more precise, we say that

$$\begin{aligned} Q\in \mathbb {D} \text { is } {{\textbf {red}}} \text { if }\quad \sup _{X,Y\in U_Q^*} |u(X) - u(Y)| \ge \frac{\varepsilon }{1000}, \end{aligned}$$

and

$$\begin{aligned} Q\in \mathbb {D} \text { is } {\textbf {blue}} \text { if }\quad \sup _{X,Y\in U_Q^*} |u(X) - u(Y)| < \frac{\varepsilon }{1000}. \end{aligned}$$

We note that these conditions differ from the ones in [40] in two ways: in [40], the oscillation threshold was \(\varepsilon /10\) and the level of oscillation was measured in the unfattened region \(U_Q\). With our conditions we have a little bit more control on the blue cubes, which will be useful for us later. In [40], it was enough to use these two labels but for our analysis it is important to take into consideration the minimal cubes of \({\mathcal {S}}\), which we label as follows:

$$\begin{aligned} Q\in {\mathcal {S}}\text { is } {\textbf {yellow}} \text { if }\quad Q \text { has a child } Q' \text { such that } Q' \not \in {\mathcal {S}}. \end{aligned}$$

Recall that \(Q'\) is a child of Q if \(Q' \subset Q\) and \(\ell (Q') = \ell (Q)/2\). Notice that yellow cubes are not a separate collection from the red and blue cubes but each yellow cube is also red or blue. We denote the collections of red and yellow cubes by

$$\begin{aligned} {\mathcal {R}} {:}{=}\{Q \in {\mathbb {D}}:Q \text { is red}\},\qquad {\mathcal {Y}} = {\mathcal {Y}}({\mathcal {S}}) {:}{=}\{Q \in {\mathcal {S}}:Q \text { is yellow}\}. \end{aligned}$$
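
Schematically, the three labels depend only on an oscillation oracle for the fattened Whitney regions and on membership in \({\mathcal {S}}\). The small Python fragment below records the rule; the callables osc, in_S and children are placeholders for the analytic and dyadic objects of the paper, not actual definitions.

```python
# Schematic form of the red/blue/yellow labels above: `osc(Q)` stands for the
# oscillation of u over U_Q^*, `in_S(Q)` for membership of Q in the regime S,
# and `children(Q)` for the dyadic children; all three are placeholders.
def is_red(Q, eps, osc):
    return osc(Q) >= eps / 1000            # otherwise Q is blue

def is_yellow(Q, in_S, children):
    return in_S(Q) and any(not in_S(Qc) for Qc in children(Q))

def label_collections(cubes, eps, osc, in_S, children):
    R = {Q for Q in cubes if is_red(Q, eps, osc)}
    Y = {Q for Q in cubes if is_yellow(Q, in_S, children)}
    return R, Y                            # a yellow cube may be red or blue
```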

The rough idea of the construction of the \(\varepsilon \)-approximator \(\Phi \) of u is to first construct \(\Phi \) inside a Carleson box \(T_{Q_0}\) for a fixed cube \(Q_0\) and then use a “local to global”-type argument to define a global approximator. Working in a Carleson box allows us to reduce many of the challenging estimates to working with just one stopping time regime \({\mathcal {S}}\) given by Lemma 3.16. We then set \(\Phi \equiv u\) in the regions \(U_Q\) such that \(Q \in {\mathcal {R}}\cup {\mathcal {Y}}\) and break up the blue cubes into smaller stopping time regimes where u does not vary by more than \(\varepsilon /100\). In these new regimes, we set \(\Phi = u(X_0)\) in the union of \(U_Q\), where \(X_0\) is any point in the union. The \(L^\infty \) approximation property follows then just from the way we defined \(\Phi \) but verifying the \(L^1\)-type Carleson measure estimate for \(\Phi \) is more challenging. For this, one of the key steps is to show that the collections of red, yellow and maximal cubes from the new stopping time regimes satisfy Carleson packing conditions. For the first two collections, this follows in a straightforward way:

Lemma 4.2

The collections \({\mathcal {R}}\) and \(\bigcup _{{\mathcal {S}}} {\mathcal {Y}}({\mathcal {S}})\) satisfy the following Carleson packing conditions: for any \(P \in {\mathbb {D}}\) we have

$$\begin{aligned} \sum _{Q \in {\mathcal {R}}, Q \subset P} \sigma (Q) \lesssim \frac{1}{\varepsilon ^2} \sigma (P), \end{aligned}$$

and

$$\begin{aligned} \sum _{{\mathcal {S}}} \sum _{Q \in {\mathcal {Y}}({\mathcal {S}}), Q \subset P} \sigma (Q) \le (C+1)\sigma (P). \end{aligned}$$

The constant C is the same as the constant in Lemma 3.16.

Proof

The packing condition for \(\bigcup _{{\mathcal {S}}} {\mathcal {Y}}({\mathcal {S}})\) follows from the definition and the facts that the stopping time regimes are coherent and their maximal cubes satisfy a Carleson packing condition. Indeed, each yellow cube \(Q \in {\mathcal {Y}}({\mathcal {S}})\) is contained in \(Q({\mathcal {S}})\) and by the coherency of \({\mathcal {S}}\), no yellow cube can contain a smaller yellow cube \({{\widetilde{Q}}}\in {\mathcal {Y}}({\mathcal {S}})\). Thus, the cubes in \({\mathcal {Y}}({\mathcal {S}})\) are disjoint and we get

$$\begin{aligned} \sum _{{\mathcal {S}}} \sum _{Q \in {\mathcal {Y}}({\mathcal {S}}), Q \subset P} \sigma (Q){} & {} \le \sum _{\begin{array}{c} {\mathcal {S}}: Q \in {\mathcal {Y}}({\mathcal {S}}), Q \subset P, \\ Q({\mathcal {S}}) \subset P \end{array}} \sigma (Q) + \sum _{\begin{array}{c} {\mathcal {S}}: Q \in {\mathcal {Y}}({\mathcal {S}}), Q \subset P, \\ P \in {\mathcal {S}} \end{array}} \sigma (Q) \\{} & {} \le \sum _{\begin{array}{c} {\mathcal {S}}: Q({\mathcal {S}}) \subset P \end{array}} \sigma (Q({\mathcal {S}})) + \sum _{\begin{array}{c} {\mathcal {S}}: P \in {\mathcal {S}} \end{array}} \sigma (P). \end{aligned}$$

Since the collection \(\{Q({\mathcal {S}})\}_{{\mathcal {S}}}\) satisfies a Carleson packing condition by Lemma 3.16, we know that the first sum on the right-hand side is bounded by \(C\sigma (P)\). In addition, since the stopping time regimes \({\mathcal {S}}\) are disjoint, we have \(P \in {\mathcal {S}}\) for at most one stopping time regime \({\mathcal {S}}\). Thus, the second sum on the right-hand side is bounded by \(\sigma (P)\). The desired bound follows from combining these two estimates.

Let us then prove the Carleson packing condition for \({\mathcal {R}}\). If \(Q \in {\mathcal {R}}\), then there exist points \(X_1, X_2 \in U_Q^*\) such that \(|u(X_1) - u(X_2)| > \varepsilon /1000\). Since \(X_1, X_2 \in U_Q^*\), there exist Whitney cubes \(I_1, I_2 \subset U_Q^{*}\) such that \(X_1 \in (1+\tau )I_1\), \(X_2 \in (1+\tau )I_2\) and \(\ell (Q) \approx \ell (I_1) \approx \ell (I_2) \approx \delta (X_1) \approx \delta (X_2)\). In particular, we have \({\text {dist}}(X_1,\partial U_Q^{**}) \approx {\text {dist}}(X_2,\partial U_Q^{**}) \approx \ell (Q)\) for the twice-fattened region \(U_Q^{**}\) (see Sect. 2.3), where the implicit constants depend on the dilation parameter \(\tau \). Thus, since \(U_Q^{**}\) satisfies the Harnack chain condition by Lemma 2.18, there exists a uniformly bounded number of balls \(B_1, B_2, \ldots , B_N\) and points \(Y_1, Y_2, \ldots , Y_{N-1}\) such that

  • \(X_1 \in B_1\), \(X_2 \in B_N\) and \(Y_i \in B_{i} \cap B_{i+1}\) for every \(i = 1,2,\ldots ,N-1\),

  • for a constant \(\lambda > 1\) (depending on \(\tau \)), we have \(2\lambda B_i \subset U_Q^{**}\) for every \(i = 1,2,\ldots ,N\), and

  • \(|2\lambda B_i| \approx |U_Q^{**}| \approx \ell (Q)^{n+1}\) for every \(i = 1,2,\ldots ,N\).

These properties combined with the triangle inequality, the local Hölder continuity (that is, Lemma 2.28) and the Poincaré inequality, applied to the solution u on the balls \(2\lambda B_i\), then give us

$$\begin{aligned} \varepsilon \lesssim |u(X_1) - u(X_2)| \le \sum _{i=1}^{N} \sup _{X,Y \in B_i} |u(X) - u(Y)| \lesssim \sum _{i=1}^{N} \left( \frac{1}{\ell (Q)^{n-1}} \iint _{2\lambda B_i} |\nabla u(X)|^2 \, dX \right) ^{1/2} \lesssim \left( \frac{1}{\ell (Q)^{n-1}} \iint _{U_Q^{**}} |\nabla u(X)|^2 \, dX \right) ^{1/2}. \end{aligned}$$

Thus, since \(\delta (X) \approx \ell (Q)\) for every \(X \in U_Q^{**}\), we have

$$\begin{aligned} \sigma (Q) \approx \ell (Q)^n \lesssim \varepsilon ^{-2} \iint _{U_Q^{**}} |\nabla u(X)|^2 \delta (X) \, dX. \end{aligned}$$

By construction, we know that the regions \(U_P^{**}\) have bounded overlaps and any twice-fattened Carleson box \(T_P^{**}\) satisfies \(T_P^{**} \subset B(x_P,R_P) \cap \Omega \) for the center \(x_P\) of P and a radius \(R_P \approx \ell (P)\). Hence, for any \(Q_0 \in {\mathbb {D}}\), these facts, the previous estimate and the Carleson measure estimate of Lemma 3.1 give us

$$\begin{aligned} \sum _{\begin{array}{c} P \in {\mathcal {R}} \\ P \subset Q_0 \end{array}} \sigma (P){} & {} \lesssim \varepsilon ^{-2}\sum _{\begin{array}{c} P \in {\mathcal {R}} \\ P \subset Q_0 \end{array}}\iint _{U_P^{**}} |\nabla u(X)|^2 \delta (X) \, dX \\{} & {} \lesssim \varepsilon ^{-2} \iint _{T^{**}_{Q_0}} |\nabla u(X)|^2 \delta (X) \, dX \lesssim \varepsilon ^{-2}\ell (Q_0)^n \approx \varepsilon ^{-2}\sigma (Q_0), \end{aligned}$$

which proves the claim. \(\square \)

4.2 A Stopping Time Decomposition for the Family of Blue Cubes

We now move to decomposing the collection of blue cubes into more manageable subcollections using a stopping time procedure. A similar idea is utilized in [40, p. 2360] but, due to our geometry, we use different stopping time conditions and different analysis of the subcollections. Set \({\mathcal {L}}= {\mathcal {L}}({\mathcal {S}})\) to be the collection of blue cubes in \({\mathcal {S}}\). We first take the largest blue cube in \({\mathcal {S}}\) with respect to side length (if there is more than one such cube, we just pick one) and denote this cube by \(Q({\textbf {S}}_1)\). The cube \(Q({\textbf {S}}_1)\) will be the maximal cube in our first refined stopping time regime \({\textbf {S}}_1\). We let \({\mathcal {F}}_{{\textbf {S}}_1}\) be the collection of cubes \(Q \in {\mathcal {S}}\cap {\mathbb {D}}_{Q({\textbf {S}}_1)} \setminus \{Q({\textbf {S}}_1)\}\) such that Q is a maximal cube with respect to having one of the following three properties:

  1. (1)

Q or one of its siblings is red.

  2. (2)

    Q and all of its siblings are blue, but for some \(Q'\) that is either Q or a sibling of Q it holds that

    $$\begin{aligned} |u(X_{Q'}) - u(X_{Q({\textbf {S}}_1)})| > \varepsilon /100. \end{aligned}$$
  3. (3)

    Q is yellow.

Recall that for every \(P \in {\mathbb {D}}\), the point \(X_P\) is a corkscrew point relative to the center \(x_P\) at scale \(10^{-5} a_0 \ell (P)\), as defined in Sect. 2.3. We now set \({\textbf {S}}_1 {:}{=}{\mathbb {D}}_{{\mathcal {F}}_{{\textbf {S}}_1}, Q({\textbf {S}}_1)}\). By construction, \({\textbf {S}}_1\) is a coherent stopping time regime in the sense of Definition 3.15. Since the cubes in \({\textbf {S}}_1\) are blue and none of them satisfy the stopping condition (2) above, we know that

$$\begin{aligned} |u(X) - u(X_{Q({\textbf {S}}_1)})| \le \varepsilon /50 \quad \text { for every } X \in \Omega _{{\mathcal {F}}_{{\textbf {S}}_1}, Q({\textbf {S}}_1)}^*. \end{aligned}$$
(4.3)

We now express \({\mathcal {F}}_{{\textbf {S}}_1}\) as a union of three collections,

$$\begin{aligned} {\mathcal {F}}_{{\textbf {S}}_1} = {\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{R}\,}}} \cup {\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{SB}\,}}} \cup {\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{Y}\,}}}, \end{aligned}$$

where \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{R}\,}}}\) contains the cubes for which (1) holds, \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{SB}\,}}}\) contains the cubes for which (2) holds and \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{Y}\,}}}\) contains the cubes for which (3) holds. The superscripts stand for “red”, “stopping blue” and “yellow”. We note that the collections \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{R}\,}}}\) and \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{SB}\,}}}\) are disjoint but \({\mathcal {F}}_{{\textbf {S}}_1}^{{{\,\textrm{Y}\,}}}\) may overlap with both of them.
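
The extraction of \({\textbf {S}}_1\), of its stopping family \({\mathcal {F}}_{{\textbf {S}}_1}\), and of the three subcollections is a purely combinatorial top-down traversal, which the following Python sketch records. The callables children, siblings, is_red, is_yellow and u_at (for the value \(u(X_Q)\)) are placeholders for the objects defined above; note that the traversal automatically stays inside \({\mathcal {S}}\), since a non-yellow cube of \({\mathcal {S}}\) has all of its children in \({\mathcal {S}}\).

```python
from collections import deque

# Schematic extraction of one refined stopping time regime, mirroring the
# construction of S_1 and of F_{S_1} = F^R u F^SB u F^Y above.  The callables
# `children`, `siblings`, `is_red`, `is_yellow` and `u_at` (the value u(X_Q))
# are placeholders; the traversal encodes the "maximal cube with property
# (1), (2) or (3)" rule.
def extract_regime(Q_top, eps, children, siblings, is_red, is_yellow, u_at):
    u_top = u_at(Q_top)
    regime, F_R, F_SB, F_Y = {Q_top}, set(), set(), set()
    queue = deque(children(Q_top))
    while queue:
        Q = queue.popleft()
        family = [Q] + list(siblings(Q))
        cond1 = any(is_red(P) for P in family)                       # (1)
        cond2 = (not cond1) and any(abs(u_at(P) - u_top) > eps / 100
                                    for P in family)                 # (2)
        cond3 = is_yellow(Q)                                         # (3)
        if cond1 or cond2 or cond3:
            if cond1:
                F_R.add(Q)
            if cond2:
                F_SB.add(Q)
            if cond3:
                F_Y.add(Q)      # F^Y may overlap with F^R or F^SB
            continue            # Q is a maximal stopping cube: do not descend
        regime.add(Q)
        queue.extend(children(Q))
    return regime, (F_R, F_SB, F_Y)
```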

We now continue this way: we let \(Q({\textbf {S}}_2)\) be the largest blue cube in \({\mathcal {S}}\setminus {\textbf {S}}_1\) with respect to side length, we extract the collection of maximal stopping cubes \({\mathcal {F}}_{{\textbf {S}}_2}\) (with an updated stopping condition (2)), we define the coherent stopping regime \({\textbf {S}}_2\) and the collections \({\mathcal {F}}_{{\textbf {S}}_2}^{{{\,\textrm{R}\,}}}\), \({\mathcal {F}}_{{\textbf {S}}_2}^{{{\,\textrm{SB}\,}}}\) and \({\mathcal {F}}_{{\textbf {S}}_2}^{{{\,\textrm{Y}\,}}}\), choose the largest blue cube \(Q({\textbf {S}}_3)\) in \({\mathcal {S}}\setminus ({\textbf {S}}_1 \cup {\textbf {S}}_2)\), and so on. Since each \({\textbf {S}}_i\) contains at least the cube \(Q({\textbf {S}}_i)\), we know that this procedure exhausts \({\mathcal {L}}\) and gives us a disjoint decomposition \({\mathcal {L}}= \cup _j {\textbf {S}}_j\) where each \({\textbf {S}}_j\) is a coherent stopping time regime. Just like (4.3), we have the oscillation estimate

$$\begin{aligned} |u(X) - u(X_{Q({\textbf {S}}_j)})| \le \varepsilon /50 \quad \text { for every } X \in \Omega _{{\mathcal {F}}_{{\textbf {S}}_j}, Q({\textbf {S}}_j)}^*, \end{aligned}$$
(4.4)

for every j. We also get the collections \({\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}\), \({\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{SB}\,}}}\) and \({\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{Y}\,}}}\) for each j.

Our next goal is to show that the maximal cubes \(\{Q({\textbf {S}}_j)\}_j\) satisfy a Carleson packing condition. This goal is an analog of [40, Lemma 5.16] but since the proof of this lemma is based on the use of an “\(N \lesssim S\)” estimate in sawtooth regions (which is possible in the presence of uniform rectifiability of \(\partial \Omega )\), this is the part where we significantly depart from [40]. Following an idea in [18], we let \(\lambda \in (0,10^{-10})\) be a small parameter (to be chosen) and break the stopping times into four groups. We say that \({\textbf {S}}_j\) is of

  • Type 1 (T1) if \(\sigma (Q({\textbf {S}}_j) {\setminus } \cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}} Q) \ge \lambda \sigma (Q({\textbf {S}}_j))\).

  • Type 2 (T2) if \(\sigma (\cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}} Q) \ge \lambda \sigma (Q({\textbf {S}}_j))\).

  • Type 3 (T3) if \(\sigma (\cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{Y}\,}}}} Q) \ge \lambda \sigma (Q({\textbf {S}}_j))\).

  • Type 4 (T4) if \({\textbf {S}}_j\) is not type 1, 2 or 3.

The Carleson packing condition for the cubes \(Q({\textbf {S}}_j)\) follows in a straightforward way when \({\textbf {S}}_j\) is of Type 1, 2 or 3:

Lemma 4.5

We have the following Carleson packing conditions for the maximal cubes of the subregimes \({\textbf {S}}_j\) in the decomposition \({\mathcal {L}}({\mathcal {S}}) = \cup _j {\textbf {S}}_j\): for any \(P \in {\mathbb {D}}\), we have

$$\begin{aligned}&\sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T1}\\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)) + \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T3}\\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)) \lesssim \frac{1}{\lambda } \sigma (P),\qquad \text {and,}\\&\sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T2}\\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)) \lesssim \frac{1}{\lambda \varepsilon ^2} \sigma (P). \end{aligned}$$

Proof

For the regimes of Type 1, we first notice that if \(Q({\textbf {S}}_i) \subsetneq Q({\textbf {S}}_j)\), then \(Q({\textbf {S}}_i) \subset Q\) for some \(Q \in {\mathcal {F}}_{{\textbf {S}}_j}\). In particular, the sets \(Q({\textbf {S}}_j) {\setminus } \cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}} Q\) are pairwise disjoint. Thus, by the definition of Type 1, we get

$$\begin{aligned} \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T1} \\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)) \le \frac{1}{\lambda } \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T1} \\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j) \setminus \cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}} Q) \le \frac{1}{\lambda } \sigma (P). \end{aligned}$$

For Type 3 regimes, since the cubes \(Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{Y}\,}}} \subset {\mathcal {Y}}({\mathcal {S}})\) are yellow cubes in \({\mathcal {S}}\), they are disjoint. Thus, the claim for the regimes of Type 3 follows immediately from definition.

For the regimes of Type 2, we recall that if \(Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}\), then Q is red or one of its siblings is red. In particular, each \(Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}\) has approximately the same measure as some red sibling \(R_Q \subset Q({\textbf {S}}_j)\) of Q. If there is more than one red sibling, we just choose one of them for each \(Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}\). On the other hand, since each cube has only a uniformly bounded number of siblings, for each \(Q \in \bigcup _j {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}\) we can have \(R_Q = R_{Q'}\) only for a uniformly bounded number of cubes \(Q'\). Thus, we get

$$\begin{aligned} \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T2}\\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)){} & {} \le \frac{1}{\lambda } \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T2} \\ Q({\textbf {S}}_j) \subset P \end{array}} \sum _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}} \sigma (Q) \approx \frac{1}{\lambda } \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T2}\\ Q({\textbf {S}}_j) \subset P \end{array}} \sum _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}} \sigma (R_Q)\\{} & {} \lesssim \frac{1}{\lambda } \sum _{R \in {\mathcal {R}}, R \subset P} \sigma (R) \lesssim \frac{1}{\lambda \varepsilon ^2} \sigma (P), \end{aligned}$$

where we used Lemma 4.2 in the final estimate. \(\square \)

By Lemma 4.5, to show the Carleson packing condition for the collection \(\{Q({\textbf {S}}_j)\}_j\) it remains only to consider the regimes \({\textbf {S}}_j\) of Type 4.

Lemma 4.6

There exists \(\lambda _0 > 0\) depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants, such that for any \(\lambda \in (0,\lambda _0)\), there exist constants \(C_1, C_2\ge 1\) depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants (and independent of \(\varepsilon \), \(\lambda \), and \({\mathcal {S}}\)), so that

$$\begin{aligned} \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T4} \\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j))&\le \frac{C_1}{\varepsilon ^2}\sigma (P), \end{aligned}$$

for every \(P \in {\mathbb {D}}\). In particular, we have

$$\begin{aligned} \sum _{j :Q({\textbf {S}}_j) \subset P} \sigma (Q({\textbf {S}}_j))&\le \frac{C_2}{\lambda \varepsilon ^2}\sigma (P). \end{aligned}$$

Proving Lemma 4.6 is much more delicate than proving Lemma 4.5 and we do this in several steps. The key idea is to reduce the proof to proving estimates for which we can use lemmas from Sect. 3.

Let us fix a regime \({\textbf {S}}_j\) that is of Type 4. Since \({\textbf {S}}_j\) is not of Type 1, 2 or 3, we know that, roughly speaking, at the “bottom” of the sawtooth domain \(\Omega _{{\mathcal {F}}_{{\textbf {S}}_j}, Q({\textbf {S}}_j)}\) there is a large region where u has some (uniform) oscillation from the value \(u(X_{Q({\textbf {S}}_j)})\). Let us be more precise. Since \({\textbf {S}}_j\) is not of Type 1, we know that \(\sigma (Q({\textbf {S}}_j) {\setminus } \cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}} Q) < \lambda \sigma (Q({\textbf {S}}_j))\), and since \({\textbf {S}}_j\) is not of Type 2 or 3, we have

$$\begin{aligned} \sigma \big ((\cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}}} Q)\cup (\cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{Y}\,}}}}Q)\big ) < 2\lambda \sigma (Q({\textbf {S}}_j)). \end{aligned}$$

Thus, since \({\mathcal {F}}_{{\textbf {S}}_j} = {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{R}\,}}} \cup {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{SB}\,}}} \cup {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{Y}\,}}}\), it holds that

$$\begin{aligned} \sigma \big (\cup _{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{SB}\,}}}} Q\big ) \ge (1- 3\lambda ) \sigma (Q({\textbf {S}}_j)). \end{aligned}$$

Let \(N = N(j)\) be so large that the subcollection

$$\begin{aligned} {\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}} {:}{=}\{Q \in {\mathcal {F}}_{{\textbf {S}}_j}^{{{\,\textrm{SB}\,}}} :\ell (Q) > 2^{-N}\ell (Q({\textbf {S}}_j))\}, \end{aligned}$$

satisfies

$$\begin{aligned} \sigma \big ( \cup _{Q \in {\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}}} Q \big ) \ge (1- 4\lambda ) \sigma (Q({\textbf {S}}_j)). \end{aligned}$$
(4.7)

Our estimates will not depend on N but we work with the subcollection \({\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}}\) to avoid dealing with estimates on the boundary. We let \({\mathcal {F}}_{N,j}\) denote the collection of maximal cubes in the collection \({\mathcal {F}}_{{\textbf {S}}_j} \cup {\mathbb {D}}_{N, Q({\textbf {S}}_j)}\), where \({\mathbb {D}}_{N,Q} = \{Q' \in {\mathbb {D}}_Q :\ell (Q') = 2^{-N}\ell (Q)\}\) as earlier. We note that \(\Omega _{{\mathcal {F}}_{N,j}, Q({\textbf {S}}_j)} \subseteq \Omega _{{\mathcal {F}}_{{\textbf {S}}_j},Q({\textbf {S}}_j)}\). Then

$$\begin{aligned} {\mathcal {F}}_{N,j} = {\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}} \cup {\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}}, \ \text { where } \ {\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}} {:}{=}{\mathcal {F}}_{N,j} \setminus {\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}}, \end{aligned}$$

where the superscript O in \({\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}}\) stands for “other” cubes. By (4.7) we have

$$\begin{aligned} \sigma \big (\cup _{Q \in {\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}}} Q\big ) = \sum _{Q \in {\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}}} \sigma (Q) \le (4\lambda ) \sigma (Q({\textbf {S}}_j)). \end{aligned}$$
(4.8)

With the notation above, we can formulate our key estimate for the proof of Lemma 4.6:

Lemma 4.9

Suppose that \({\textbf {S}}_j\) is a stopping time regime of Type 4. There exists \(\lambda _0 > 0\) depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants, such that for any \(\lambda \in (0,\lambda _0)\), there exists a constant \(C_3\ge 1\) depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants (and independent of \(\varepsilon \), \(\lambda \), N, j, and \({\mathcal {S}}\)), so that the following estimate holds:

$$\begin{aligned} \sigma (Q({\textbf {S}}_j)) \le \frac{C_3}{\varepsilon ^2} \iint _{\Omega _{{\mathcal {F}}_{N,j}, Q({\textbf {S}}_j)}} |\nabla u(Y)|^2 \delta (Y) \, dY. \end{aligned}$$
(4.10)

Taking Lemma 4.9 for granted momentarily, we can prove Lemma 4.6 in a straightforward way:

Proof of Lemma 4.6

Fix \(P \in {\mathbb {D}}\). Then we have

$$\begin{aligned} \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T4} \\ Q({\textbf {S}}_j) \subset P \end{array}} \sigma (Q({\textbf {S}}_j)){} & {} \le \frac{C_3}{\varepsilon ^2} \sum _{\begin{array}{c} j :{\textbf {S}}_j \text { is T4} \\ Q({\textbf {S}}_j) \subset P \end{array}} \iint _{\Omega _{{\mathcal {F}}_{N,j}, Q({\textbf {S}}_j)}} |\nabla u(Y)|^2 \delta (Y) \, dY \\{} & {} \lesssim \frac{C_3}{\varepsilon ^2} \iint _{T_P} |\nabla u(Y)|^2 \delta (Y) \, dY \lesssim \frac{C_3}{\varepsilon ^2} \sigma (P), \end{aligned}$$

where we used Lemma 4.9, the fact that the bounded overlap of the regions \(U_Q\) and the disjointness of the collections \({\textbf {S}}_j\) imply that the regions \(\Omega _{{\mathcal {F}}_{N(j),j}, Q({\textbf {S}}_j)}\) have bounded overlaps, the fact that \(T_P \subset B(x_P, C\ell (P))\) and Lemma 3.1. The rest of the claim follows now from Lemma 4.5. \(\square \)

4.3 Proof of Lemma 4.9: A High Oscillation Estimate

Let us then start processing the estimate (4.10). Let \({\textbf {S}}_j\) be a fixed stopping time regime of Type 4. To relax the notation, we denote

$$\begin{aligned} {\textbf {S}}_* {:}{=}{\mathbb {D}}_{{\mathcal {F}}_{N,j},Q({\textbf {S}}_j)} \subseteq {\textbf {S}}_j,\qquad Q({\textbf {S}}_*) {:}{=}Q({\textbf {S}}_j),\qquad X_* {:}{=}X_{Q({\textbf {S}}_j)},\\ \Omega _* {:}{=}\Omega _{{\mathcal {F}}_{N,j}, Q({\textbf {S}}_j)},\qquad {\mathcal {F}}{:}{=}{\mathcal {F}}_{N,j},\qquad {\mathcal {F}}^{{{\,\textrm{SB}\,}}} {:}{=}{\mathcal {F}}_{N,j}^{{{\,\textrm{SB}\,}}}, \quad \text { and}\quad {\mathcal {F}}^{{{\,\textrm{O}\,}}} {:}{=}{\mathcal {F}}_{N,j}^{{{\,\textrm{O}\,}}}. \end{aligned}$$

Recall that \(Q({\textbf {S}}_*) \in {\mathcal {S}}\) and that \({\mathcal {S}}\) is a coherent stopping time regime satisfying the property (iv) in Lemma 3.16. Thus, by using Lemmas 3.17, 3.14 and 3.10 in this order, we get

$$\begin{aligned}{} & {} \iint _{\Omega _*} |\nabla u(Y)|^2 \delta (Y) \, dY \approx \sigma (Q({\textbf {S}}_*)) \iint _{\Omega _*} |\nabla u(Y)|^2 G_{L,\Omega }(X_*, Y) \, dY \nonumber \\{} & {} \quad \ge \sigma (Q({\textbf {S}}_*)) \iint _{\Omega _*} |\nabla u(Y)|^2 G_{L,\Omega _*}(X_*, Y)\, dY \approx \sigma (Q({\textbf {S}}_*)) \int _{\partial \Omega _*} (u(y) - u(X_*))^2 \, d\omega _*(y),\nonumber \\ \end{aligned}$$
(4.11)

where \(\omega _*\) is the elliptic measure for L in \(\Omega _*\) with pole at \(X_*\).

Thus, Lemma 4.9 for regimes of Type 4 follows immediately from the following estimate. Recall that \(\lambda \) is the parameter we used when we defined the Types 1–4 for the stopping time regimes.

Lemma 4.12

There exists \(\lambda _0 > 0\) depending on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants, such that for any \(\lambda \in (0,\lambda _0)\), there exists a constant \(c_4>0\) depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants (and independent of \(\varepsilon \), \(\lambda \), N, j, and \({\mathcal {S}}\)), so that the following estimate holds:

$$\begin{aligned} \int _{\partial \Omega _*} (u(y) - u(X_*))^2 \, d\omega _*(y) \ge c_4 \varepsilon ^2. \end{aligned}$$

For the proof of Lemma 4.12, we need some auxiliary constructions and estimates. Recall that \(X_* = X_{Q({\textbf {S}}_*)}\) is a corkscrew point relative to \(Q({\textbf {S}}_*)\) at scale \(r_* {:}{=}10^{-5} a_0 \ell (Q({\textbf {S}}_*))\) (see Sect. 2.4). Let \({\hat{x}}_* \in \partial \Omega \) be a touching point for \(X_*\) on \(\partial \Omega \), that is, \(|{\hat{x}}_* - X_*| = {\text {dist}}(X_*, \partial \Omega )\). For \(\xi \in [0,1]\), consider the points \(X(\xi ) {:}{=}{\hat{x}}_* + \xi (X_*- {\hat{x}}_*)\) which lie on the line segment from \({\hat{x}}_*\) to \(X_*\). Since we know that \({\hat{x}}_* \not \in {\overline{\Omega }}_*\) and \(X_* \in \Omega _*\), there exists \(\xi _0 \in (0,1)\) such that \(X(\xi _0) \in \partial \Omega _*\). Now we set \(X_{**}=X(\xi _0)\), and \(\Delta _* {:}{=}B(X_{**}, r_*) \cap \partial \Omega _*\). Since we are working with the truncated collection of stopping cubes \({\mathcal {F}}= {\mathcal {F}}_{N,j}\), we know that \(\partial \Omega \cap \partial \Omega _* = \varnothing \). In particular, \(\Delta _* \subset \Omega \). By Lemma 2.26, we know that

$$\begin{aligned} \text {if } y \in \Delta _*, \ \ \text { then } {\hat{y}} \in \Delta _{Q({\textbf {S}}_*)} \subseteq Q({\textbf {S}}_*), \end{aligned}$$
(4.13)

where \({\hat{y}}\) is a touching point for y in \(\Omega \), that is, \(|y - {\hat{y}}| = {\text {dist}}(y, \partial \Omega )\).

By Lemma 2.18, we know that \(\Omega _*\) is also a uniform domain with Ahlfors regular boundary. Thus, by Lemma 2.36, we have

$$\begin{aligned} \omega ^{{\widetilde{X}}}_{L, \Omega _*}(\Delta _* ) \gtrsim 1, \end{aligned}$$
(4.14)

where \({\widetilde{X}}\) is a corkscrew point relative to \(X_{**}\) at scale \(r_*\) in the domain \(\Omega _*\). Recall that by the construction in Sect. 2.3, we know that \(B(X_*, \delta (X_*)/2) \subset \Omega _*\) and \({{\,\textrm{diam}\,}}(\Omega _*) \approx \ell (Q({\textbf {S}}_*))\), and we have \(\delta (X_*) \approx \ell (Q({\textbf {S}}_*)) \approx r_* \approx \delta ({\widetilde{X}})\) by the definition of \(X_*\) and \({\widetilde{X}}\). Thus, by the Harnack chain property of \(\Omega _*\), there exists a Harnack chain of uniformly bounded length from \({\widetilde{X}}\) to \(X_*\) inside \(\Omega _*\). Thus, by (4.14), formula (2.30) and Lemma 2.29 (that is, Harnack inequality), there exists a constant \(c_* > 0\) that depends only on structural constants such that

$$\begin{aligned} \omega _*(\Delta _* ) = \omega ^{X_*}_{L, \Omega _*}(\Delta _* ) > c_*. \end{aligned}$$
(4.15)

Next, we will construct a cover of \(\Delta _*\) that consists of dilated surface balls on \(\partial \Omega _*\) associated to the cubes \(Q \in {\mathcal {F}}\). We will construct the cover in such a way that there is oscillation of u on the balls associated to cubes in \({\mathcal {F}}^{{{\,\textrm{SB}\,}}}\) and the balls associated to cubes in \({\mathcal {F}}^{{{\,\textrm{O}\,}}}\) do not have much \(\omega _*\)-mass, provided \(\lambda \) is sufficiently small. Given \(Q\in \mathbb {D}\), denote by \({{\hat{x}}}_Q\) a touching point for the corkscrew point \(X_Q\). For \(\theta \in [0,1]\), recall that we denote \( P_Q(\theta ) = {\hat{x}}_Q + \theta (X_Q - {\hat{x}}_Q)\), and in Lemma 2.21 we showed that there exists \(\theta _0 \in (0,1)\) such that if for some \(Q'\in \mathbb {D}\) we have \(B(P_Q(\theta _0), \tfrac{\gamma \theta _0}{10} r_Q) \cap U_{Q'} \ne \varnothing \), then \(Q' \subset Q\) and \(\ell (Q') < \ell (Q)\), where \(\gamma \) is the corkscrew constant in Definition 2.1.

Fix \(Q \in {\mathcal {F}}\). Then its parent \({\widetilde{Q}}\) satisfies \({\widetilde{Q}} \in {\mathbb {D}}_{{\mathcal {F}}, Q({\textbf {S}}_*)}\) and hence, by the construction of the Whitney regions in Sect. 2.3, we have \(P_Q(1) = X_Q \in U_{{\widetilde{Q}}} \subset \Omega _*\). By Lemma 2.21, we also know that \(P_Q(\theta _0) \not \in \Omega _*\). Indeed, otherwise \(P_Q(\theta _0) \in U_{Q'}\) for a cube \(Q' \in {\mathbb {D}}_{{\mathcal {F}}, Q({\textbf {S}}_*)}\), but by Lemma 2.21 we would then have \(Q' \subset Q\) with \(\ell (Q') < \ell (Q)\). This is impossible since \(Q \in {\mathcal {F}}\). Thus, there exists \(\theta ' \in (\theta _0, 1)\) such that

$$\begin{aligned} X^*_Q {:}{=}P_Q(\theta ') \in \partial \Omega _*. \end{aligned}$$

We set

$$\begin{aligned} \Delta ^*_Q {:}{=}B(X^*_Q, \tfrac{\lambda \theta _0}{2} r_Q) \cap \partial \Omega _* \quad \text { and } \quad M\Delta ^*_Q {:}{=}B(X^*_Q, M \tfrac{\lambda \theta _0}{2} r_Q) \cap \partial \Omega _*, \end{aligned}$$

for a constant \(M \ge 1\) to be chosen momentarily.

Let us describe some of the properties of \(\Delta ^*_Q\). First, by Lemma 2.25 and definition of \(\Delta _Q^*\), it holds that

$$\begin{aligned} \Delta ^*_Q \subseteq \Xi _Q \subseteq U_{Q'}^*, \end{aligned}$$
(4.16)

whenever \(Q'\) is a sibling of Q, where

$$\begin{aligned} \Xi _{Q} = \bigcup _{\theta \in [\theta _0,1]} B\big (P_{Q}(\theta ), \tfrac{\gamma \theta _0}{10} r_{Q} \big ). \end{aligned}$$

Next, let us observe that if \(Q \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}\), then there exists a sibling \(Q'\) of Q such that \(Q'\) is blue and \(|u(X_{Q'}) - u(X_{Q({\textbf {S}}_*)})| > \varepsilon /100\). Thus, for every \(Q \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}\), it holds that

$$\begin{aligned} |u(X) - u(X_*)| \ge |u(X_{Q'}) - u(X_{Q({\textbf {S}}_*)})| - |u(X_{Q'}) - u(X)| \ge \varepsilon /100 - \varepsilon /1000 \ge \varepsilon /200,\nonumber \\ \end{aligned}$$
(4.17)

for every \(X \in \Delta ^*_Q\) since \(Q'\) is blue and \(\Delta ^*_Q \subset U_{Q'}^*\) by (4.16).

Lemma 4.18

There exists \(M \ge 1\), depending only on structural constants, such that

$$\begin{aligned} \Delta _* = B(X_{**}, r_*) \cap \partial \Omega _* \subset \bigcup _{Q \in {\mathcal {F}}} M\Delta ^*_Q. \end{aligned}$$

Proof

Fix \(y \in \Delta _* \subset \partial \Omega _*\) and a touching point \({\hat{y}} \in \partial \Omega \) for y. Then by (4.13) we have \({\hat{y}} \in \Delta _{Q({\textbf {S}}_*)}\). Since \(X_{**}\) lies on the line segment from the corkscrew point \(X_*\) relative to \(x_{Q({\textbf {S}}_*)}\) at scale \(r_*\) to its touching point \({\hat{x}}_*\), we have \(|{\hat{x}}_{*} - X_{**}| = {\text {dist}}(X_{**}, \partial \Omega ) \le {\text {dist}}(X_*, \partial \Omega ) \le r_* = 10^{-5}a_0 \ell (Q({\textbf {S}}_*))\). Thus, \(y \in B({\hat{x}}_{*}, 2r_*) = B({\hat{x}}_{Q({\textbf {S}}_*)}, 2(10)^{-5}a_0 \ell (Q({\textbf {S}}_*)))\). By (4.13), we have

$$\begin{aligned} |y-{\hat{y}}| = {\text {dist}}(y, \partial \Omega ) = {\text {dist}}(y,Q({\textbf {S}}_*)) \le \ell (Q({\textbf {S}}_*)), \end{aligned}$$
(4.19)

and by the definition of \({\mathcal {W}}_{Q'}(K_0)\) (which we used in the construction of the Whitney regions), for any cube \(Q'\) it holds that

$$\begin{aligned} \text {if } {\hat{y}} \in Q' \text { and } C_\tau ^{-1} K_0^{-1}|y-{\hat{y}}| \le \ell (Q') \le C_\tau K_0|y-{\hat{y}}|, \quad \text { then } y \in {{\,\textrm{int}\,}}(U_{Q'}), \end{aligned}$$
(4.20)

where \(\tau \) is the dilation parameter in the definition of \(U_Q\). Since \({\textbf {S}}_*\) and \({\mathcal {F}}\) are the truncated collections, we have \(Q({\textbf {S}}_*) = \cup _{Q \in {\mathcal {F}}} Q\). In particular, by Lemma 2.26, we have \({\hat{y}} \in Q({\textbf {S}}_*) = \cup _{Q \in {\mathcal {F}}} Q\). Let \(Q_{{\hat{y}}} \in {\mathcal {F}}\) be the cube such that \({\hat{y}} \in Q_{{\hat{y}}}\). We now have \(\ell (Q_{{\hat{y}}}) \ge C_\tau ^{-1} K_0^{-1} |y - {\hat{y}}|\) for the same constant \(C_\tau \) as in (4.20), since otherwise there exists a cube \(Q'\) such that \(Q_{{\hat{y}}} \subset Q' \subseteq Q({\textbf {S}}_*)\) with \(C^{-1}_\tau K_0^{-1}|y-{\hat{y}}| \le \ell (Q') \le C_\tau K_0|y-{\hat{y}}|\). This is not possible since (4.20) and the fact that \(Q_{{\hat{y}}} \in {\mathcal {F}}\) would then imply that \(y \in \text {int} \, (U_{Q'}) \subset \Omega _*\), but we know that \(y \in \partial \Omega _*\). Thus, it holds that

$$\begin{aligned} {\text {dist}}(Q_{{\hat{y}}}, y) = |{\hat{y}} - y| \le C_\tau K_0 \ell (Q_{{\hat{y}}}). \end{aligned}$$
(4.21)

Recall that \(x_{Q_{{\hat{y}}}}\) is the center of \(Q_{{\hat{y}}}\) and \(X_{Q_{{\hat{y}}}}^*\) lies on a line segment from a corkscrew point \(X_{Q_{{\hat{y}}}}\) to its touching point \({\hat{x}}_{Q_{{\hat{y}}}}\). By Lemma 2.26, we know that \({\hat{x}}_{Q_{{\hat{y}}}} \in Q_{{\hat{y}}}\). Thus, (4.21), the definitions of the points and the fact that \({\hat{y}},{\hat{x}}_{Q_{{\hat{y}}}} \in Q_{{\hat{y}}}\) give us

$$\begin{aligned} |y - X^*_{Q_{{\hat{y}}}}|&\le |y - {\hat{y}}| + |{\hat{y}} - {\hat{x}}_{Q_{{\hat{y}}}}| + |{\hat{x}}_{Q_{{\hat{y}}}} - X^*_{Q_{{\hat{y}}}}| \\&\lesssim K_0 \ell (Q_{{\hat{y}}}) + {{\,\textrm{diam}\,}}(Q_{{\hat{y}}}) + r_{Q_{{\hat{y}}}} \approx r_{Q_{{\hat{y}}}}. \end{aligned}$$

In particular, there exists \(M \ge 1\) such that

$$\begin{aligned} y \in M\Delta ^*_{Q_{{\hat{y}}}} \subseteq \bigcup _{Q \in {\mathcal {F}}} M\Delta ^*_Q, \end{aligned}$$

which is what we wanted. \(\square \)

Let us then fix \(M \ge 1\) as in Lemma 4.18. By (4.15), it holds that

$$\begin{aligned} \omega _*\Big (\bigcup _{Q \in {\mathcal {F}}} M\Delta ^*_Q \Big ) \ge c_*. \end{aligned}$$
(4.22)

Our next goal is to analyze how much the cubes \(Q \in {\mathcal {F}}^{{{\,\textrm{O}\,}}}\) contribute to (4.22) and then limit this contribution by choosing \(\lambda \) in a suitable way. For this, we prove the following bound:

Lemma 4.23

For any \(Q \in {\mathcal {F}}\), we have

$$\begin{aligned} \omega _*(\Delta ^*_Q) \lesssim \frac{\sigma (Q)}{\sigma (Q({\textbf {S}}_*))}, \end{aligned}$$
(4.24)

for an implicit constant depending only on structural constants, ellipticity, and the \(\omega _{L,\Omega }\in A_\infty (\sigma )\) constants (and independent of \(\lambda \), \(\varepsilon \), N, j, and \({\mathcal {S}}\)).

Proof

Let \(Q \in {\mathcal {F}}\). Recall that \(\Delta ^*_Q = B(X^*_Q, \tfrac{\lambda \theta _0}{2} r_Q) \cap \partial \Omega _*\) is a surface ball on \(\partial \Omega _*\) with \(X^*_Q \in \partial \Omega _*\). Since \(\Omega _*\) is a uniform domain, it satisfies the corkscrew condition. Let \({\widetilde{X}}_Q\) be a corkscrew point in \(\Omega _*\) relative to \(X^*_Q\) at scale approximately \(r_Q' = \frac{\lambda \theta _0}{2} r_Q \approx r_Q\). By taking \(\theta _0\) smaller if necessary, we may ensure that this corkscrew point \({\widetilde{X}}_Q\) is far from \(X_*\). Then by connecting \({\widetilde{X}}_Q\) to \(X_Q\) with a Harnack chain (of uniformly bounded length) in \(\Omega \) and using Lemma 2.38,

$$\begin{aligned} G_{L,\Omega }(X_*, {\widetilde{X}}_Q) \approx G_{L,\Omega }(X_*, X_Q) \lesssim \ell (Q) \frac{\omega ^{X_*}_{L,\Omega }(Q)}{\ell (Q)^n}. \end{aligned}$$

Now, by Lemma 3.14 we have that \(G_{L,\Omega _*}(X_*, {\widetilde{X}}_Q) \le G_{L,\Omega }(X_*, {\widetilde{X}}_Q)\), and then by Lemma 2.38 in \(\Omega _*\) we conclude that

$$\begin{aligned} \frac{\omega _*(\Delta ^*_Q)}{(r'_Q)^n} r'_Q = \frac{\omega ^{X_*}_{L, \Omega _*}(\Delta ^*_Q)}{(r'_Q)^n} r'_Q \lesssim G_{L,\Omega _*}(X_*, {\widetilde{X}}_Q) \le G_{L,\Omega }(X_*, {\widetilde{X}}_Q). \end{aligned}$$

Combining the two previously displayed inequalities and using that \(\ell (Q) \approx r'_Q\) we have \(\omega _*(\Delta ^*_Q) \lesssim \omega ^{X_*}_{L,\Omega }(Q)\). By Lemmas 2.40 and 3.16 applied twice for both \(Q,Q({\textbf {S}}_*)\in {\mathcal {S}}\), it holds that

$$\begin{aligned} \omega ^{X_*}_{L,\Omega }(Q) = \omega ^{X_{Q({\textbf {S}}_*)}}_{L,\Omega }(Q) \approx \frac{\omega _{L,\Omega }^{X_{Q({\mathcal {S}})}}(Q)}{\omega _{L,\Omega }^{X_{Q({\mathcal {S}})}}(Q({\textbf {S}}_*))}\approx \frac{\sigma (Q)}{\sigma (Q({\mathcal {S}}))}\frac{\sigma (Q({\mathcal {S}}))}{\sigma (Q({\textbf {S}}_*))}=\frac{\sigma (Q)}{\sigma (Q({\textbf {S}}_*))}, \end{aligned}$$

which ends the proof of (4.24). \(\square \)

Now we are ready to conclude the proof of Lemma 4.12. Using (4.24) and the doubling property of \(\omega _*\) we find that

$$\begin{aligned} \sum _{Q \in {\mathcal {F}}^{{{\,\textrm{O}\,}}}} \omega _*\left( M\Delta ^*_Q \right) \le C \sum _{Q \in {\mathcal {F}}^{{{\,\textrm{O}\,}}}} \omega _*\left( \Delta ^*_Q \right) \le C \sum _{Q \in {\mathcal {F}}^{{{\,\textrm{O}\,}}}} \frac{\sigma (Q)}{\sigma (Q({\textbf {S}}_*))} \le {{\hat{C}}} \lambda , \end{aligned}$$
(4.25)

where we used (4.8) in the last inequality. Now we choose \(\lambda > 0\) so that \({{\hat{C}}}\lambda < c_*/2\) and use (4.25) and (4.22) to deduce

$$\begin{aligned} \omega _*\Big (\bigcup _{Q \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}} M\Delta ^*_Q \Big ) \ge c_*/2, \end{aligned}$$
(4.26)

where we used that \({\mathcal {F}}= {\mathcal {F}}^{{{\,\textrm{SB}\,}}} \cup {\mathcal {F}}^{{{\,\textrm{O}\,}}}\), so that, by subadditivity, the measure in (4.22) is at most the left-hand side of (4.26) plus the sum in (4.25). Next, we use the 5R-covering lemma [50] to produce a countable collection of pairwise disjoint surface balls \(\{M\Delta ^*_k\}{:}{=}\{M\Delta ^*_{Q_k}\}\), where each \(Q_k\) is in \({\mathcal {F}}^{{{\,\textrm{SB}\,}}}\) and such that

$$\begin{aligned} \bigcup _{Q \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}} M\Delta ^*_Q\subset \cup _{k} 5M\Delta ^*_k. \end{aligned}$$

Then, using (4.26) and the doubling property of \(\omega _*\), it holds that

$$\begin{aligned} \omega _*(\cup _k \Delta ^*_k) = \sum _k \omega _*(\Delta ^*_k) \gtrsim \sum _{k} \omega _*(5M\Delta ^*_k) \gtrsim \omega _*\Big (\bigcup _{Q \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}} M\Delta ^*_Q \Big ) \ge c_*/2, \end{aligned}$$

where we used that the \(\Delta ^*_k\) are pairwise disjoint. To summarize, we have produced a sequence of surface balls \(\Delta ^*_k = \Delta ^*_{Q_k}\) with \(Q_k \in {\mathcal {F}}^{{{\,\textrm{SB}\,}}}\) such that

$$\begin{aligned} \omega _*(\cup _k \Delta ^*_k) \ge c_{**}, \end{aligned}$$
(4.27)

where \(c_{**}\) depends on dimension, ellipticity, the Ahlfors regularity constant for \(\partial \Omega \), the corkscrew and Harnack chain constants for \(\Omega \), and the \(\omega _L \in A_\infty (\sigma )\) constants. Thus, using (4.17) we have

$$\begin{aligned} \int _{\partial \Omega _*} (u(y) - u(X_*))^2 \, d\omega _*(y)&\ge \omega _*(\cup _k \Delta ^*_k)\inf \limits _{y \in \cup _k \Delta ^*_k}(u(y) - u(X_*))^2\\&\ge c_{**}(\varepsilon /200)^2 {=}{:}c_4 \varepsilon ^2, \end{aligned}$$

which proves Lemma 4.12.

Since we had reduced the packing condition for the Type 4 maximal cubes to Lemma 4.12, this also completes the proof of Lemma 4.6.

4.4 Construction of \(\varepsilon \)-Approximators

With the help of the previous constructions and estimates, we can prove the existence of \({{\,\textrm{BV}\,}}_{{{\,\textrm{loc}\,}}}\) \(\varepsilon \)-approximators in a similar way as in [40]. For the convenience of the reader, we recall the key steps of the construction below. For some of the details, we follow the construction of \(L^p\)-type approximators in [36], which is an adaptation of the arguments in [40]. Recall that we denote the collection of blue cubes in the disjoint stopping time regimes \({\mathcal {S}}\) in Lemma 3.16 by \({\mathcal {L}}= {\mathcal {L}}({\mathcal {S}})\), and each of these collections has a decomposition \({\mathcal {L}}= \cup _j {\textbf {S}}_j\).

Proof of Theorem 4.1

Let us fix a dyadic cube \(Q_0 \in {\mathbb {D}}(\partial \Omega )\) and construct an \(\varepsilon \)-approximator \(\Phi _{Q_0} = \Phi _{Q_0}^\varepsilon \) first in the Carleson box \(T_{Q_0}\). We start by dividing the Carleson box \(T_{Q_0}\) into a few different types of regions, on which we define the approximator differently. Let us choose the largest good (in the sense of the corona decomposition from Lemma 3.16) blue subcube \(Q_1 \subset Q_0\), which may be \(Q_0\) itself; if there are several such cubes with the largest side length, we choose just one of them. Since \(Q_1\) is a good blue cube, there exists a stopping time regime \({\mathcal {S}}_{Q_1}\) in Lemma 3.16 and a subregime \({\textbf {S}}_{Q_1} \subset {\mathcal {L}}({\mathcal {S}}_{Q_1})\) such that \(Q_1\) is the maximal element of the regime \({\textbf {S}}^1 {:}{=}{\textbf {S}}_{Q_1} \cap {\mathbb {D}}_{Q_0}\). We then choose the largest good blue cube \(Q_2\) from the collection \({\mathbb {D}}_{Q_0} \setminus {\textbf {S}}^1\). Similarly, \(Q_2\) is the maximal element of the regime \({\textbf {S}}^2 {:}{=}{\textbf {S}}_{Q_2} \cap {\mathbb {D}}_{Q_0}\). We then choose the largest good blue cube \(Q_3 \in {\mathbb {D}}_{Q_0} {\setminus } ({\textbf {S}}^1 \cup {\textbf {S}}^2)\), and continue like this. This gives us a sequence of good blue cubes \(Q_1, Q_2, \ldots \) such that \(\ell (Q_1) \ge \ell (Q_2) \ge \ldots \), each cube \(Q_i\) is the maximal element of a regime \({\textbf {S}}^i\), and the collection \(\cup _i {\textbf {S}}^i\) contains all the good blue cubes in \({\mathbb {D}}_{Q_0}\). The cubes \(Q_i\) are “mostly” of the form \(Q({\textbf {S}})\) as in the decomposition of the collections \({\mathcal {L}}({\mathcal {S}})\) earlier, in the sense that there exists a collection of pairwise disjoint cubes \(\{P_k\}_k \subset {\mathbb {D}}_{Q_0}\) (that may be empty) such that every cube in the collection \(\{Q_i\}_i \setminus \{P_k\}_k\) is of the form \(Q({\textbf {S}})\) for some \({\textbf {S}}\). This is because the cube \(Q_0\) is arbitrary and hence it may be a bad cube or a red cube. For each i, we define the “bottom” cubes of \({\textbf {S}}^i\) in the obvious way: we set \({\mathcal {F}}_{{\textbf {S}}^i} {:}{=}{\mathcal {F}}_{{\textbf {S}}_{Q_i}}\), that is, \({\mathcal {F}}_{{\textbf {S}}^i}\) is the collection of the stopping cubes associated to the unique regime \({\textbf {S}}_{Q_i}\) that contains \(Q_i\).

For each i, we define the regions \(A_i\) recursively in the following way:

$$\begin{aligned} A_1 {:}{=}\Omega _{{\mathcal {F}}_{{\textbf {S}}^1},Q_1}, \qquad A_i {:}{=}\Omega _{{\mathcal {F}}_{{\textbf {S}}^i},Q_i} \setminus \bigcup _{k=1}^{i-1} A_k \quad \text { for } i \ge 2. \end{aligned}$$

By construction, the regions \(A_i\) are pairwise disjoint. We also set \(\Omega _0 {:}{=}\bigcup _i A_i\), and we define the function \(\Phi _0\) on \(\Omega _0\) as

$$\begin{aligned} \Phi _0 {:}{=}\sum _i u(X_{Q({\textbf {S}}_{Q_i})}) \mathbb {1}_{A_i}, \end{aligned}$$

where \(X_{Q({\textbf {S}}_{Q_i})}\) is the corkscrew point we used in the stopping conditions in the definition of \({\textbf {S}}_{Q_i}\). In particular, for any \(X \in A_i\) we have \(|u(X) - u(X_{Q({\textbf {S}}_{Q_i})})| \le \varepsilon /100 < \varepsilon \). Furthermore, by the disjointness of the regions \(A_i\), we have \(\Vert u - \Phi _0\Vert _{L^\infty (\Omega _0)} < \varepsilon \).

Let us then consider the cubes in \({\mathbb {D}}_{Q_0} {\setminus } \cup _i {\textbf {S}}^i\) and fix some enumeration \(\{R_j\}_j\) for them. The cubes \(R_j\) are red cubes or bad blue cubes. For each j, we define the regions \(V_j\) recursively in the following way:

$$\begin{aligned} V_1 {:}{=}U_{R_1}, \qquad V_j {:}{=}U_{R_j} \setminus \bigcup _{k=1}^{j-1} V_k \quad \text { for } j \ge 2. \end{aligned}$$

By construction, the regions \(V_j\) are pairwise disjoint. We also set \(\Omega _1 {:}{=}\bigcup _j V_j\), and we define the function \(\Phi _1\) on \(\Omega _1\) as

$$\begin{aligned} \Phi _1(X) {:}{=}\left\{ \begin{array}{cl} u(X), &{}\text { if } X \in V_k \text { for a red cube } R_k,\\ u(X_k), &{}\text { if } X \in V_k \text { for a blue cube } R_k, \end{array} \right. \end{aligned}$$

where \(X_k\) is any fixed point in \(U_{R_k}\). By the definitions, we have \(\Vert u - \Phi _1\Vert _{L^\infty (\Omega _1)}< \varepsilon / 1000 < \varepsilon \).

We define the \(\varepsilon \)-approximator \(\Phi _{Q_0}\) of u in the Carleson box \(T_{Q_0}\) as

$$\begin{aligned} \Phi _{Q_0}(X) {:}{=}\left\{ \begin{array}{cl} \Phi _0(X), &{}\text { if } X \in \Omega _0,\\ \Phi _1(X), &{}\text { if } X \in T_{Q_0} \setminus \Omega _0. \end{array} \right. \end{aligned}$$

By the construction, we have \(\Vert u - \Phi _{Q_0}\Vert _{L^\infty (T_{Q_0})} < \varepsilon \). The \(L^1\)-type Carleson measure estimate for \(\Phi _{Q_0}\) in \(T_{Q_0}\) can be proven as in [40] with minor and fairly straightforward changes. Using a covering argument, the claim can be reduced to proving the estimate on Carleson boxes \(T_{Q'}\), and since \(u \in L^\infty (\Omega )\), the core challenge is to handle the jumps across the boundaries of the sets \(A_i\) and \(V_j\) that contribute to the total variation of \(\Phi _{Q_0}\) inside \(T_{Q_0}\). Since the boundaries of the sawtooth regions, Whitney regions and Carleson boxes are Ahlfors regular by Lemma 2.18, the estimates reduce to using the Carleson packing conditions in Lemmas 3.16, 4.2, 4.5 and 4.6. The Carleson norm of the measure \(\mu _{\Phi _{Q_0}}\) such that \(d\mu _{\Phi _{Q_0}}(Y) = |\nabla \Phi _{Q_0}(Y)| \, dY\) is then controlled (up to a structural constant) by the Carleson packing norms in these results. We omit further details, but see [40, pp. 2366–2373].
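Schematically, and only as a rough sketch of the bookkeeping (we do not claim this is the exact decomposition used in [40]), one can interpret \(|\nabla \Phi _{Q_0}|\) as the total variation measure of the \({{\,\textrm{BV}\,}}_{{{\,\textrm{loc}\,}}}\) function \(\Phi _{Q_0}\) and estimate, for a cube \(Q' \in {\mathbb {D}}_{Q_0}\),

$$\begin{aligned} |\nabla \Phi _{Q_0}|(T_{Q'}) \lesssim \Vert u\Vert _{L^\infty (\Omega )} \Big ( \sum _{i :\, A_i \cap T_{Q'} \ne \varnothing } {\mathcal {H}}^n(\partial A_i \cap T_{Q'}) + \sum _{j :\, V_j \cap T_{Q'} \ne \varnothing } {\mathcal {H}}^n(\partial V_j \cap T_{Q'}) \Big ) + \iint _{\Omega _1 \cap T_{Q'}} |\nabla u(Y)| \, dY, \end{aligned}$$

where the two sums collect the jumps across the interfaces and the last term accounts for the regions where \(\Phi _{Q_0} = u\); the boundary terms are then handled with the Ahlfors regularity of Lemma 2.18 and the packing conditions mentioned above, while the last term is bounded using interior estimates for u on the corresponding Whitney regions together with the same packing conditions.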

Using these kinds of local approximators, we build the global approximator of u. If \({{\,\textrm{diam}\,}}(\Omega ) < \infty \), it is enough to build a local approximator for a Carleson box that covers the whole space \(\Omega \). Thus, we may assume that \({{\,\textrm{diam}\,}}(\Omega ) = \infty \). Suppose first that \({{\,\textrm{diam}\,}}(\partial \Omega ) < \infty \). Then there exists a dyadic cube \(Q_0\) that covers the whole boundary \(\partial \Omega \). We build the local approximator \(\Phi _{Q_0}\) on \(T_{Q_0}\), extend it to all of \(\Omega \) by setting it to be 0 outside \(T_{Q_0}\), and define the global approximator as \(\Phi = \mathbb {1}_{T_{Q_0}} \Phi _{Q_0} + \mathbb {1}_{\Omega {\setminus } T_{Q_0}} u\). The \(L^1\)-type Carleson measure estimate follows from the same arguments as with the local approximators.

Finally, suppose that \({{\,\textrm{diam}\,}}(\partial \Omega ) = \infty \). Fix a sequence of dyadic cubes \(P_k\) such that \(P_1 \subset P_2 \subset \ldots \), \(\ell (P_1)< \ell (P_2) < \cdots \) and \(\partial \Omega = \cup _k P_k\). This type of sequence of cubes does not exist in every dyadic system, but we can always construct a system where it exists (see, for example, [46]). We build a local approximator \(\Phi _{P_k}\) in \(T_{P_k}\) for every k and extend the approximators to all of \(\Omega \) by setting each of them to be 0 outside \(T_{P_k}\). We then define the global approximator as \(\Phi = \mathbb {1}_{T_{P_1}} \Phi _{P_1} + \sum _{k=2}^\infty \mathbb {1}_{T_{P_k} {\setminus } T_{P_{k-1}}} \Phi _{P_k}\). The \(L^1\)-type Carleson measure estimate follows from the Carleson measure estimates of the local approximators and the fact that the collection \(\{P_k\}_k\) satisfies a Carleson packing condition with a uniformly bounded Carleson packing norm depending only on structural constants. Again, we omit the details. \(\square \)

To finish the proof of Theorem 1.1, we regularize the approximators in Theorem 4.1. This regularization makes the constant \(C_\varepsilon \) significantly larger, but since the precise size of this constant is not important for our results, we do not track it.

Lemma 4.28

Let \(\varepsilon \in (0,1)\). There exists a uniformly bounded constant \({\widetilde{C}}_\varepsilon \ge 1\) such that we can choose the \(\varepsilon \)-approximator \(\Phi = \Phi ^\varepsilon \) for the solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in Theorem 4.1 so that

  1. (i)

    \(\Vert u - \Phi \Vert _{L^\infty (\Omega )} \le 2 \varepsilon \Vert u\Vert _{L^\infty (\Omega )}\),

  2. (ii)

    \(\sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi (Y)|\, dY \le {\widetilde{C}}_\varepsilon \Vert u\Vert _{L^{\infty }(\Omega )},\)

  3. (iii)

    \(\Phi \in C^\infty (\Omega )\),

  4. (iv)

    \(|\nabla \Phi (Y)| \le \tfrac{{\widetilde{C}}_\varepsilon }{\delta (Y)}\) for every \(Y \in \Omega \),

  5. (v)

    if \(|X-Y| \ll \delta (X)\), then \(|\Phi (X) - \Phi (Y)| \le \tfrac{{\widetilde{C}}_\varepsilon |X-Y|}{\delta (X)}\),

  6. (vi)

    there exists a function \(\varphi \in L^\infty (\partial \Omega )\) such that

    $$\begin{aligned} \lim _{Y \rightarrow x, \, {{\,\mathrm{n.t.}\,}}} \Phi (Y) = \varphi (x) \text { for } \sigma \text {-a.e. } x \in \partial \Omega . \end{aligned}$$

The constant \({\widetilde{C}}_\varepsilon \) depends on \(\varepsilon \), the structural constants of \(\Omega \), the constant \(C_\varepsilon \) in Theorem 4.1 and the Hölder continuity constants C and \(\alpha \) in Lemma 2.28.

Proof

The proof uses tweaked mollifier techniques combined with a regularized distance function. The properties follow mostly from [37, Sect. 3] but for the convenience of the reader, we define the core objects and give some explicit details below.

Let \(\beta \) be a regularized version of the distance function \(\delta = {\text {dist}}(\cdot ,\partial \Omega )\), that is, a smooth function in \(\Omega \) such that \(\beta \approx \delta \) (see [58, Theorem 2, p. 171]). Let \(\zeta \) be a smooth non-negative function supported on \(B(0,\tfrac{1}{m})\) for a suitable constant \(m > 0\) (depending on the implicit constants in \(\delta \approx \beta \)), satisfying \(\zeta \le 1\) and \(\int \zeta = 1\). For a constant \(\xi _\varepsilon > 0\) to be chosen momentarily, we set

$$\begin{aligned} \Lambda _{\xi _\varepsilon }(X,Y) {:}{=}\zeta _{\xi _\varepsilon \beta (X)}(X-Y) = \frac{1}{(\xi _\varepsilon \beta (X))^{n+1}} \zeta \Big ( \frac{X-Y}{\xi _\varepsilon \beta (X)}\Big ). \end{aligned}$$

For a suitable choice of m, we have \({\text {supp}} \, \Lambda _{\xi _\varepsilon }(X,\cdot ) \subset B(X,\xi _\varepsilon \delta (X)/2)\). Given the non-smooth \(\varepsilon \)-approximator \(\Phi _0\) of the solution \(u \in W^{1,2}_{{{\,\textrm{loc}\,}}}(\Omega ) \cap L^\infty (\Omega )\) to \(Lu = 0\) in Theorem 4.1, we set

$$\begin{aligned} \Phi (X) {:}{=}\iint \Lambda _{\xi _\varepsilon }(X,Y) \Phi _0(Y) \, dY. \end{aligned}$$

Property (iii) follows from a standard modification of the case \(\Omega = {\mathbb {R}}^{n+1}\) (for example, see [24, Theorem 1, p. 123]) and properties (ii), (iv) and (v) are formulated explicitly in [37, Sect. 3]. Property (i) follows from the local Hölder continuity of u (that is, Lemma 2.28) and the fact that \(\Phi _0\) is an \(\varepsilon \)-approximator of u: for almost every \(X \in \Omega \), using that \(\iint \Lambda _{\xi _\varepsilon }(X,Y) \, dY = 1\) and \({\text {supp}} \, \Lambda _{\xi _\varepsilon }(X,\cdot ) \subset B(X,\xi _\varepsilon \delta (X)/2)\), we get

$$\begin{aligned} |u(X) - \Phi (X)| \le \iint \Lambda _{\xi _\varepsilon }(X,Y) \big ( |u(X) - u(Y)| + |u(Y) - \Phi _0(Y)| \big ) \, dY \le C \xi _\varepsilon ^{\alpha } \Vert u\Vert _{L^\infty (\Omega )} + \varepsilon \Vert u\Vert _{L^\infty (\Omega )} \le 2 \varepsilon \Vert u\Vert _{L^\infty (\Omega )}, \end{aligned}$$

as long as we choose \(\xi _\varepsilon \le \tfrac{1}{2} \big (\tfrac{\varepsilon }{C}\big )^{\frac{1}{\alpha }}\), where C and \(\alpha \) are the Hölder continuity constants in Lemma 2.28.

Property (vi) follows from the same argument that is used in the proof of [37, Lemma 4.14] after some small additional considerations. The proof of [37, Lemma 4.14] is based on showing that almost every cone on a codimension 1 uniformly rectifiable set has locally exactly two components, that these local components satisfy the Harnack chain condition, and that the Harnack chain condition combined with the \(L^1\)-type Carleson measure estimate (ii) implies the existence of the a.e. non-tangential trace \(\varphi \). We do not assume that \(\partial \Omega \) is uniformly rectifiable, but by the definition of dyadic cones (2.17) and Lemma 2.18 we know that any truncated cone on \(\partial \Omega \) has exactly one component inside \(\Omega \) and this component satisfies the Harnack chain condition. Thus, the argument in the proof of [37, Lemma 4.14] also works in our setting. In particular, we can choose \(\Phi \) in such a way that all the properties (i)–(vi) hold. This completes the proofs of Lemma 4.28 and Theorem 1.1. \(\square \)

5 Proof of Theorem 1.2

In this section, we prove Theorem 1.2, that is, we prove that \(\varepsilon \)-approximability of solutions u to \(Lu = 0\) implies that \(\omega _L \in A_\infty (\sigma )\). To be more precise, we prove the following seemingly stronger result:

Theorem 5.1

Let \(\Omega \subset {\mathbb {R}}^{n+1}\), \(n\ge 1\), be a uniform domain with Ahlfors regular boundary, and let L be a divergence form elliptic operator \(L = -\mathop {{\text {div}}}\nolimits A\nabla \) in \(\Omega \). Suppose also that for every bounded Borel set \(S \subset \partial \Omega \) the solution \(u=u_S\) to \(Lu = 0\) such that \(u(X) = \omega _L^X(S)\) is \(\varepsilon \)-approximable for every \(\varepsilon \in (0,1)\) in the sense of Theorem 1.1, with the \(\varepsilon \)-approximability constants depending only on structural constants and \(\varepsilon \). Then \(\omega _L \in A_\infty (\sigma )\).

In particular, by Theorems 1.1 and 5.1, \(\varepsilon \)-approximability of the subclass of solutions u to \(Lu = 0\) in Theorem 5.1 is equivalent to \(\varepsilon \)-approximability of all solutions u to \(Lu = 0\) (and hence it is equivalent to the other conditions in Corollary 1.3).

The proof of Theorem 5.1 is based on the proof of [11, Theorem 1.1], which itself is based on the techniques used in [48, 49]. The key idea is the following. We fix a cube \(Q_0\) and a Borel set \(F \subset Q_0\) and we build a suitable solution \(u = u_F\) associated to F such that u oscillates a significant amount. We then use this oscillation to control the \(L^1\) norm of the gradient of an \(\varepsilon \)-approximator of u from below near \(Q_0\) for a very small \(\varepsilon \). The \(L^1\)-type Carleson measure estimate of the \(\varepsilon \)-approximator then allows us to verify the \(\omega _L\in A_\infty (\sigma )\) condition.

For the proof of Theorem 5.1, we need some definitions and notation from [11]. For clarity, we adopt most of the notation as it is from [11]. We define the following fattened version of the Whitney regions \(U_Q\) and a “wider” version of the truncated dyadic cone:

$$\begin{aligned} U_{Q,\eta ^3} {:}{=}\bigcup _{\begin{array}{c} Q' \in {\mathbb {D}}_Q \\ \ell (Q') > \eta ^3 \ell (Q) \end{array}}U_{Q'},\qquad \text { and } \qquad \Gamma _{Q_0}^{\eta }(x) {:}{=}\bigcup _{\begin{array}{c} Q \in {\mathbb {D}}_{Q_0} \\ Q \ni x \end{array}} U_{Q,\eta ^3}. \end{aligned}$$

The main difference between our approach and the proof of the implication “CME \(\implies \) \(A_\infty \)” in [11] is the following lemma which is a modification of [11, Lemma 3.10]:

Lemma 5.2

There exist \(\varepsilon \in (0,1)\), \(0<\eta \ll 1\), depending only on structural constants, and \(\alpha _0\in (0,1)\), \(C_\eta \ge 1\), both depending on structural constants and on \(\eta \), such that for each \(Q_0\in \mathbb {D}\), for every \(\alpha \in (0,\alpha _0)\), and for every \(F\subset Q_0\) satisfying \(\omega _L^{X_{Q_0}}(F)\le \alpha \omega _L^{X_{Q_0}}(Q_0)\), there exists a Borel set \(S\subset Q_0\) such that if \(u(X)=\omega _L^X(S)\) and \(\Phi =\Phi ^{\varepsilon }\) is an \(\varepsilon \)-approximator of u, then

$$\begin{aligned} \iint _{\Gamma _{Q_0}^\eta (y)} |\nabla \Phi (Y)| \delta (Y)^{-n} \, dY \ge C_\eta ^{-1} \log (\alpha ^{-1}),\qquad \text {for each }y\in F. \end{aligned}$$

Before proving Lemma 5.2, let us see how it gives us Theorem 5.1.

Proof of Theorem 5.1

Following [11], we show that for each \(\beta \in (0,1)\), there exists \(\alpha \in (0,1)\) such that for every \(Q_0\in \mathbb {D}\) and every Borel set \(F\subset Q_0\), we have that

$$\begin{aligned} \frac{\omega _L^{X_{Q_0}}(F)}{\omega _L^{X_{Q_0}}(Q_0)}\le \alpha \quad \implies \quad \frac{\sigma (F)}{\sigma (Q_0)}\le \beta , \end{aligned}$$
(5.3)

where the constants \(\alpha \) and \(\beta \) are independent of the choice of the dyadic system \({\mathbb {D}}\). This is a dyadic version of the \(A_\infty (\sigma )\) condition. Although it looks different from Definition 2.31, the conditions are equivalent since we consider doubling measures (see, for example, [27, Chapter IV, Theorem 2.11]) and the constants are independent of the system \({\mathbb {D}}\) (see [11, p. 16] or use adjacent dyadic techniques [44, 46]).

Fix \(\beta \in (0,1)\) and \(Q_0\in \mathbb {D}\). Moreover, fix \(\eta \in (0,1)\) small enough, and constants \(\varepsilon \), \(\alpha _0\), and \(C_\eta \) as in Lemma 5.2. Let \(F\subset Q_0\) satisfy \(\omega _L^{X_{Q_0}}(F) \le \alpha \omega _L^{X_{Q_0}}(Q_0)\) with \(\alpha \in (0,\alpha _0)\). We now use Lemma 5.2 to see that there exists \(S\subset Q_0\) such that if \(\Phi ^{\varepsilon }\) is an \(\varepsilon \)-approximator of \(u(X)=\omega _L^X(S)\), then

$$\begin{aligned} C_{\eta }^{-1}\log (\alpha ^{-1})\sigma (F)\le \int _F\iint _{\Gamma _{Q_0}^\eta (y)} |\nabla \Phi ^{\varepsilon }(Y)| \delta (Y)^{-n} \, dY\,d\sigma (y)\le C\eta ^{-3n} \sigma (Q_0), \end{aligned}$$

where the last estimate follows by Fubini’s theorem and property (ii) of \(\varepsilon \)-approximability in Theorem 1.1 (see [11, pp. 15–16] for more details). We may then choose \(\alpha \) small enough depending on \(\beta \) so that (5.3) holds. \(\square \)
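For concreteness, one admissible (non-optimal) choice is the following; this is just the arithmetic behind the last step:

$$\begin{aligned} \frac{\sigma (F)}{\sigma (Q_0)} \le \frac{C \, C_\eta \, \eta ^{-3n}}{\log (\alpha ^{-1})} \le \beta \qquad \text {whenever } \alpha \le \min \big \{ \alpha _0, e^{-C C_\eta \eta ^{-3n}/\beta } \big \}. \end{aligned}$$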

We turn to the proof of Lemma 5.2. For this, we need the following machinery:

Definition 5.4

Let \(Q_0 \in {\mathbb {D}}\) be a dyadic cube, \(\mu \) a regular Borel measure on \(Q_0\), \(F \subset Q_0\) be a Borel set, \(\epsilon _0 > 0\), and \(k \in \mathbb {N}\) be fixed. We say that a collection of nested Borel subsets \(\{{\mathcal {O}}_\ell \}_{\ell = 1}^k\) of \(Q_0\) is a good \(\epsilon _0\)-cover of F of length k for \(\mu \) if

  1. (a)

    \(F \subset {\mathcal {O}}_k \subset {\mathcal {O}}_{k-1} \subset \cdots \subset {\mathcal {O}}_2 \subset {\mathcal {O}}_1 \subset Q_0\),

  2. (b)

    \({\mathcal {O}}_{\ell } = \bigcup _i Q_i^{\ell }\) for disjoint subcubes \(Q_i^\ell \in {\mathbb {D}}_{Q_0}\),

  3. (c)

    \(\mu ({\mathcal {O}}_\ell \cap Q_i^{\ell -1}) \le \epsilon _0 \mu (Q_i^{\ell -1})\) for every i and every \(2 \le \ell \le k\).

Lemma 5.5

([11, Lemma 3.5]) Fix \(Q_0\in \mathbb {D}\) and suppose that \(\epsilon _0\in (0,\tfrac{1}{e})\). If \(F \subset Q_0\) is a Borel set such that \(\omega _L^{X_{Q_0}}(F) \le \alpha \omega _L^{X_{Q_0}}(Q_0)\) for \(\alpha \in (0,\epsilon _0^2/C')\), then there exists a good \(\epsilon _0\)-cover of F of length \(k \approx \tfrac{\log \alpha ^{-1}}{\log \epsilon _0^{-1}}\) for \(\omega _L^{X_{Q_0}}\). Here \(C'\) depends only on the constant C of Lemma 2.37.

Let \(Q_0 \in {\mathbb {D}}\) be a fixed cube and \(F \subset Q_0\) be a fixed subset such that \(\omega _L^{X_{Q_0}}(F) \le \alpha \omega _L^{X_{Q_0}}(Q_0)\) for \(\alpha \) small enough. For each \(Q \in {\mathbb {D}}\), we let \({\widetilde{Q}} \in {\mathbb {D}}\) be the unique dyadic cube such that \(x_Q \in {\widetilde{Q}}\) and \(\ell ({\widetilde{Q}}) = \eta \ell (Q)\) for \(\eta > 0\) a small enough parameter to be determined later. Fix \(\epsilon _0>0\) small enough. Let \(\{{\mathcal {O}}_l\}_{l=1}^k\) be a good \(\epsilon _0\)-cover of F given by Lemma 5.5. We set

$$\begin{aligned} {\widetilde{{\mathcal {O}}}}_j {:}{=}\bigcup _{i} {\widetilde{Q}}_i^j \quad \text { and } \quad S {:}{=}\bigcup _{j=2}^k \big ( {\widetilde{{\mathcal {O}}}}_{j-1} \setminus {\mathcal {O}}_j \big ). \end{aligned}$$

By the nestedness of the sets \({\mathcal {O}}_j\), we have \(\mathbb {1}_S = \sum _{j=2}^k \mathbb {1}_{{\widetilde{{\mathcal {O}}}}_{j-1} {\setminus } {\mathcal {O}}_j}\). We define the nonnegative solution \(u {:}{=}u_F :\Omega \rightarrow {\mathbb {R}}\) to \(Lu = 0\) as

$$\begin{aligned} u(X) = \int _{\partial \Omega } \mathbb {1}_S(y) \, d\omega _L^X(y) = \omega _L^X(S) = \sum _{j=2}^k \omega _L^X({\widetilde{{\mathcal {O}}}}_{j-1} \setminus {\mathcal {O}}_j). \end{aligned}$$

We have the following lower oscillation bound:

Lemma 5.6

([11, Lemma 3.24]) There exists a structural constant \(c_0 > 0\) such that if \(\eta \) and \(\epsilon _0 = \epsilon _0(\eta ,c_0)\) are small enough, then for any \(y \in F\) and any \(1 \le l \le k-1\), there exist dyadic cubes \(Q_i^l\) and \(P_i^l\) such that

$$\begin{aligned} \big | u({\hat{X}}_{{\widetilde{Q}}_i^l}) - u({\hat{X}}_{{\widetilde{P}}_i^l}) \big | \ge c_0, \end{aligned}$$

where \(Q_i^l\) is the unique cube (in \({\mathcal {O}}_l\)) such that \(y \in Q_i^l\); \(P_i^l \in {\mathbb {D}}_{Q_i^l}\) is the unique cube such that \(y \in P_i^l\) and \(\ell (P_i^l) = \eta \ell (Q_i^l)\); \({\widetilde{Q}}_i^l\) and \({\widetilde{P}}_i^l\) are defined as we did after Lemma 5.5; and \({\hat{X}}_{{\widetilde{Q}}_i^l}\) and \({\hat{X}}_{{\widetilde{P}}_i^l}\) are corkscrew points relative to \(x_{{\widetilde{Q}}_i^l}\) at scale \(a_1 \ell ({\widetilde{Q}}_i^l)\) and relative to \(x_{{\widetilde{P}}_i^l}\) at scale \(a_1 \ell ({\widetilde{P}}_i^l)\), respectively.

With the help of Lemma 5.6, we can now prove Lemma 5.2:

Proof of Lemma 5.2

The proof is a straightforward modification of the proofs of [11, Lemma 3.10] and [37, Lemma 4.14]. Let us fix \(y \in F\) and \(l \in \{1,2,\ldots ,k-1\}\). We borrow the notation from Lemma 5.6, and set \(\varepsilon =c_0/4\). By adjusting the construction parameters for the Whitney regions in Sect. 2.3, we may assume that there exist Whitney cubes \(I_{{\widetilde{Q}}_i^l}\) and \(I_{{\widetilde{P}}_i^l}\) such that

$$\begin{aligned} {\hat{X}}_{{\widetilde{Q}}_i^l} \in I_{{\widetilde{Q}}_i^l} \subset (1+\tau )I_{{\widetilde{Q}}_i^l} \subset U_{{\widetilde{Q}}_i^l} \quad \text { and } \quad {\hat{X}}_{{\widetilde{P}}_i^l} \in I_{{\widetilde{P}}_i^l} \subset (1+\tau )I_{{\widetilde{P}}_i^l} \subset U_{{\widetilde{P}}_i^l}. \end{aligned}$$
(5.7)

Note that \(\ell ({\widetilde{Q}}_i^l) = \eta \ell (Q_i^l)\) and \(\ell ({\widetilde{P}}_i^l) = \eta \ell (P_i^l) = \eta ^2 \ell (Q_i^l)\). In particular, since \(\eta < 1\), we have \(U_{{\widetilde{Q}}_i^l}, U_{{\widetilde{P}}_i^l} \subset U_{Q_i^l,\eta ^3}\).

Since \(\Phi \) is a \(\tfrac{c_0}{4}\)-approximator of u, Lemma 5.6 gives us

$$\begin{aligned} c_0&\le \big | u({\hat{X}}_{{\widetilde{Q}}_i^l}) - u({\hat{X}}_{{\widetilde{P}}_i^l}) \big | \le \big | u({\hat{X}}_{{\widetilde{Q}}_i^l}) - \Phi ({\hat{X}}_{{\widetilde{Q}}_i^l}) \big | + \big | \Phi ({\hat{X}}_{{\widetilde{Q}}_i^l}) - \Phi ({\hat{X}}_{{\widetilde{P}}_i^l}) \big | + \big | \Phi ({\hat{X}}_{{\widetilde{P}}_i^l}) - u({\hat{X}}_{{\widetilde{P}}_i^l}) \big | \\&\le \frac{c_0}{2} + \big | \Phi ({\hat{X}}_{{\widetilde{Q}}_i^l}) - \Phi ({\hat{X}}_{{\widetilde{P}}_i^l}) \big |. \end{aligned}$$

By (5.7), we know that the points \({\hat{X}}_{{\widetilde{Q}}_i^l}\) and \({\hat{X}}_{{\widetilde{P}}_i^l}\) are well inside \(U_{Q_i^l,\eta ^3}\) in the sense that \({\text {dist}}({\hat{X}}_{{\widetilde{P}}_i^l}, \partial U_{Q_i^l,\eta ^3}) \approx {\text {dist}}({\hat{X}}_{{\widetilde{Q}}_i^l}, \partial U_{Q_i^l,\eta ^3}) \approx \ell (Q_i^l)\). Since \({{\,\textrm{diam}\,}}(U_{Q_i^l,\eta ^3}) \approx _\eta \ell (Q_i^l)\) and \(U_{Q_i^l,\eta ^3}\) satisfies the Harnack chain condition by Lemma 2.18, there exists a chain of \(N = N(\eta )\) balls \(B_1, B_2, \ldots , B_N\) such that

  1. (i)

    \({\hat{X}}_{{\widetilde{Q}}_i^l} \in B_1\), \({\hat{X}}_{{\widetilde{P}}_i^l} \in B_N\) and \(B_{j} \cap B_{j+1} \ne \varnothing \) for every \(j = 1,2,\ldots ,N-1\),

  2. (ii)

    the radii of the balls \(B_j\) are comparable to \(\ell (Q_i^l)\), with comparability constants depending on \(\eta \) and \(c_0\),

  3. (iii)

    \(|\Phi ({\hat{X}}_{{\widetilde{Q}}_i^l}) - \Phi (X)| < \tfrac{c_0}{8}\) for every \(X \in B_1\) and \(|\Phi ({\hat{X}}_{{\widetilde{P}}_i^l}) - \Phi (Y)| < \tfrac{c_0}{8}\) for every \(Y \in B_N\),

  4. (iv)

    for each \(j = 1,2,\ldots ,N-1\), there exists a cylinder \({\mathfrak {C}}_j\) connecting \(B_j\) to \(B_{j+1}\) such that \(B_j \cup B_{j+1} \subset {\mathfrak {C}}_j \subset U_{Q_i^l,\eta ^3}\), the cylinders have bounded overlaps and \(|{\mathfrak {C}}_j| \approx _\eta \ell (Q_i^l)^{n+1}\),

where the bound (iii) follows from properties of \(\varepsilon \)-approximators (see property (iii) in Theorem 1.1). Then, writing \(\langle \Phi \rangle _E\) for the average of \(\Phi \) over a set E, we get

$$\begin{aligned} \frac{c_0}{4} \le \big | \langle \Phi \rangle _{B_1} - \langle \Phi \rangle _{B_N} \big | \le \sum _{j=1}^{N-1} \big | \langle \Phi \rangle _{B_j} - \langle \Phi \rangle _{B_{j+1}} \big | \lesssim _\eta \sum _{j=1}^{N-1} \frac{1}{|{\mathfrak {C}}_j|} \iint _{{\mathfrak {C}}_j} \big | \Phi (X) - \langle \Phi \rangle _{{\mathfrak {C}}_j} \big | \, dX, \end{aligned}$$
and furthermore, by the Poincaré inequality and properties of the Whitney regions and cylinders \({\mathfrak {C}}_j\), we have

$$\begin{aligned} \frac{1}{|{\mathfrak {C}}_j|} \iint _{{\mathfrak {C}}_j} \big | \Phi (X) - \langle \Phi \rangle _{{\mathfrak {C}}_j} \big | \, dX \lesssim _\eta \ell (Q_i^l) \, \frac{1}{|{\mathfrak {C}}_j|} \iint _{{\mathfrak {C}}_j} |\nabla \Phi (X)| \, dX \lesssim _\eta \frac{1}{\ell (Q_i^l)^{n}} \iint _{{\mathfrak {C}}_j} |\nabla \Phi (X)| \, dX \end{aligned}$$
for every \(j = 1,2,\ldots ,N-1\). Combining the previous estimates and using the bounded overlaps of the cylinders then gives us

$$\begin{aligned} \frac{c_0}{4} \lesssim _\eta \frac{1}{\ell (Q_i^l)^{n}} \sum _{j=1}^{N-1} \iint _{{\mathfrak {C}}_j} |\nabla \Phi (X)| \, dX \lesssim _\eta \iint _{U_{Q_i^l,\eta ^3}} |\nabla \Phi (X)| \, \delta (X)^{-n} \, dX. \end{aligned}$$
Finally, we sum over l and use Lemma 5.5, the bounded overlaps of the regions \(U_{Q_i^l,\eta ^3}\), and the structure of \(\Gamma _{Q_0}^{\eta }(y)\) to get

$$\begin{aligned} \frac{c_0}{4} \frac{\log \alpha ^{-1}}{\log \epsilon _0^{-1}} \approx \frac{c_0}{4} (k-1) \lesssim _\eta \sum _{l=1}^{k-1} \iint _{U_{Q_i^l,\eta ^3}} |\nabla \Phi (X)| \delta (X)^{-n} \, dX \lesssim _\eta \iint _{\Gamma _{Q_0}^{\eta }(y)} |\nabla \Phi (X)| \delta (X)^{-n} \, dX, \end{aligned}$$

which proves the claim. \(\square \)

6 Proof of Theorem 1.5

With the help of the tools from the previous sections, we can now prove Theorem 1.5. The proof is an adaptation of the corresponding proof from [37] (which itself uses some core ideas of Varopoulos [59, 60] and Garnett [28]).

Proof of Theorem 1.5

Let \(f \in {{\,\textrm{BMO}\,}}_{{{\,\textrm{c}\,}}}(\partial \Omega )\) and \({\mathbb {D}}\) be a dyadic system on \(\partial \Omega \). By Lemma 10.1 and Remark 10.3 in [37], we know that

$$\begin{aligned} f = f_0 + g, \end{aligned}$$
(6.1)

where \(f_0 \in L^\infty (\partial \Omega )\) with \(\Vert f_0\Vert _{L^\infty (\partial \Omega )} \lesssim \Vert f\Vert _{{{\,\textrm{BMO}\,}}}\) and \(g = \sum _j \alpha _j\mathbb {1}_{Q_j}\), for a collection of dyadic cubes \({\mathcal {A}}{:}{=}\{Q_j\}_j \subset {\mathbb {D}}\) satisfying a Carleson packing condition with \({\mathcal {C}}_{{\mathcal {A}}} \lesssim 1\) and coefficients \(\alpha _j\) satisfying \(\sup _j |\alpha _j| \lesssim \Vert f\Vert _{{{\,\textrm{BMO}\,}}(\partial \Omega )}\). By [37, Proposition 1.3], there exists an extension G of g in \(\Omega \) that satisfies the corresponding versions of the properties (1)–(3) in Theorem 1.5. Thus, it is enough to construct the extension for the function \(f_0\) in the decomposition (6.1).

The construction of the extension of \(f_0\) follows the proof of [37, Theorem 1.2]. We repeatedly use Lemma 3.3 to solve boundary value problems for L with updated data, and Theorem 1.1 to take smooth \(\tfrac{1}{2}\)-approximators (that is, \(\varepsilon \)-approximators for \(\varepsilon = \tfrac{1}{2}\)) of the corresponding solutions and to take their a.e. non-tangential boundary traces. The idea is that a suitable approximator is a Varopoulos-type extension plus an error term, and we can keep halving the size of the error term through iteration.

We start by taking the solution \(u_0\) to the boundary value problem with data \(f_0\), satisfying \(u_0(X) = \int _{\partial \Omega } f_0 \, d\omega ^X\). We then take the \(\tfrac{1}{2}\)-approximator \(\Phi _0\) to \(u_0\). This approximator has an a.e. non-tangential boundary trace \(\varphi _0\). By Theorem 1.1, these functions satisfy \(\Vert u_0 - \Phi _0\Vert _{L^\infty (\Omega )} \le \tfrac{1}{2}\Vert u_0\Vert _{L^\infty (\Omega )} \le \tfrac{1}{2} \Vert f_0\Vert _{L^\infty (\partial \Omega )}\), and

$$\begin{aligned} \sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi _0(Y)|\, dY \le {\widetilde{C}} \Vert u_0\Vert _{L^{\infty }(\Omega )} \le {\widetilde{C}} \Vert f_0\Vert _{L^\infty (\partial \Omega )}. \end{aligned}$$

We set \(f_1 {:}{=}f_0 - \varphi _0\) which is the a.e. non-tangential boundary trace of \(u_0 - \Phi _0\). We take the solution \(u_1\) with the data \(f_1\), satisfying \(u_1(X) = \int _{\partial \Omega } f_1 \, d\omega ^X\). This solution satisfies

$$\begin{aligned} \Vert u_1\Vert _{L^\infty (\Omega )} \le \Vert f_1\Vert _{L^\infty (\partial \Omega )} \le \Vert u_0 - \Phi _0\Vert _{L^\infty (\Omega )} \le \tfrac{1}{2}\Vert u_0\Vert _{L^\infty (\Omega )} \le \tfrac{1}{2} \Vert f_0\Vert _{L^\infty (\partial \Omega )}, \end{aligned}$$

and thus, we can take a \(\tfrac{1}{2}\)-approximator \(\Phi _1\) of \(u_1\), with boundary trace \(\varphi _1\). We get

$$\begin{aligned} \Vert u_1 - \Phi _1\Vert _{L^\infty (\Omega )} \le \tfrac{1}{2} \Vert u_1\Vert _{L^\infty (\Omega )} \le \tfrac{1}{4} \Vert u_0\Vert _{L^\infty (\Omega )}, \end{aligned}$$

and

$$\begin{aligned} \sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi _1(Y)|\, dY \le {\widetilde{C}} \Vert u_1\Vert _{L^{\infty }(\Omega )} \le \frac{{\widetilde{C}}}{2} \Vert f_0\Vert _{L^\infty (\partial \Omega )}. \end{aligned}$$

We set \(f_2 {:}{=}f_1 - \varphi _1 = f_0 - \varphi _0 - \varphi _1\), and continue in the previous way. This gives us sequences of solutions \(u_k\) and of their \(\tfrac{1}{2}\)-approximators \(\Phi _k\). The functions \(u_k\) and \(\Phi _k\) have a.e. non-tangential boundary traces \(f_k\) and \(\varphi _k\), respectively, and we have

  1. (i)

    \(f_{k+1} = f_0 - \sum _{i=0}^k \varphi _i\),

  2. (ii)

    \(\Vert f_{k+1}\Vert _{L^\infty (\partial \Omega )} \le \Vert u_k - \Phi _k\Vert _{L^\infty (\Omega )} \le 2^{-k-1}\Vert u_0\Vert _{L^\infty (\Omega )} \le 2^{-k-1} \Vert f_0\Vert _{L^\infty (\partial \Omega )}\),

  3. (iii)

    \(\Vert u_k\Vert _{L^\infty (\Omega )} \le \Vert f_{k}\Vert _{L^\infty (\partial \Omega )} \le 2^{-k} \Vert u_0\Vert _{L^\infty (\Omega )}\),

  4. (iv)

    \(\sup _{x \in \partial \Omega , r > 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi _k(Y)| \, dY \le {\widetilde{C}} \Vert u_k\Vert _{L^\infty (\Omega )} \le {\widetilde{C}}2^{-k} \Vert u_0\Vert _{L^\infty (\Omega )}\), and

  5. (v)

    \( \Vert \Phi _k\Vert _{L^\infty (\Omega )} \lesssim 2^{-k} \Vert u_0\Vert _{L^\infty (\Omega )}\).

Property (v) follows from the properties (ii) and (iii) combined with the fact that \(\Phi _k\) is a \(\tfrac{1}{2}\)-approximator of \(u_k\). By this property, we can define a uniformly convergent series

$$\begin{aligned} \Phi (X) {:}{=}\sum _{k=0}^\infty \Phi _k(X), \end{aligned}$$

for \(X \in \Omega \). Let \(F_0\) be a version of \(\Phi \) that has been smoothed using convolution techniques similar to those in the proof of Lemma 4.28 (see [37, Sect. 3] for details). By the arguments in the proof of [37, Theorem 1.1], \(F_0\) is an extension of \(f_0\) that satisfies the properties (1)–(3) in Theorem 1.5. Thus, by the decomposition (6.1), we may define the extensions F in Theorem 1.5 by setting \(F {:}{=}F_0 + G\). This completes the proof. \(\square \)
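For orientation, we record a sketch of the bookkeeping behind the last step; this only summarizes the properties (i)–(v) above, and the detailed argument is in [37]. Since \(f_0 - \sum _{i=0}^k \varphi _i = f_{k+1}\) and (formally) \(|\nabla \Phi | \le \sum _{k=0}^\infty |\nabla \Phi _k|\), properties (ii) and (iv) give

$$\begin{aligned} \sup _{x \in \partial \Omega , r> 0} \frac{1}{r^n} \iint _{B(x,r) \cap \Omega } |\nabla \Phi (Y)|\, dY \le \sum _{k=0}^\infty {\widetilde{C}} \, 2^{-k} \Vert u_0\Vert _{L^{\infty }(\Omega )} = 2 {\widetilde{C}} \Vert u_0\Vert _{L^{\infty }(\Omega )} \lesssim \Vert f\Vert _{{{\,\textrm{BMO}\,}}(\partial \Omega )} \end{aligned}$$

and

$$\begin{aligned} \Big \Vert f_0 - \sum _{i=0}^k \varphi _i \Big \Vert _{L^\infty (\partial \Omega )} = \Vert f_{k+1}\Vert _{L^\infty (\partial \Omega )} \le 2^{-k-1} \Vert f_0\Vert _{L^\infty (\partial \Omega )} \rightarrow 0 \quad \text {as } k \rightarrow \infty , \end{aligned}$$

so that, using the uniform convergence of the series defining \(\Phi \), the non-tangential boundary values of \(\Phi \) agree with \(\sum _i \varphi _i = f_0\) \(\sigma \)-a.e.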

7 An Example in \(\mathbb {R}^3\)

In this section we construct a three-dimensional version of the example provided in Corollary 1.7. That is, we show there is a domain \(\Omega \) in \(\mathbb {R}^3\) whose boundary is not rectifiable and such that every function \(f \in {{\,\textrm{BMO}\,}}(\partial \Omega )\) has a Varopoulos extension in \(\Omega \).

To define \(\Omega \), denote by E the 4-corner Cantor set in \(\mathbb {R}^2\). That is \(E=\bigcap _{k=0}^\infty E_k\), where \(E_k\) equals the union of \(4^k\) closed squares \(Q_i^k\) of side length \(4^{-k}\) located in the corners of the squares \(Q_j^{k-1}\) of the previous generation (see [17, Sect. 3] for the precise definition). We assume that the center of \(E_0=Q_1^0\) coincides with the origin in \(\mathbb {R}^2\). We consider the half-plane \(\Pi =\{(x,y)\in \mathbb {R}^2:y>-2\}\) and we set \(V=\Pi \setminus E\). We also write \(\mathbb {L}=\partial \Pi =\{(x,y)\in \mathbb {R}^2:y=-2\}\). Then we define \(\Omega = V\times \mathbb {R}\). It is straightforward to check that both V and \(\Omega \) are uniform domains.

Notice that \(\partial V= E\cup \mathbb {L}\) is 1-Ahlfors regular, while \(\partial \Omega = (E\cup \mathbb {L})\times \mathbb {R}\) is 2-Ahlfors regular. In fact, the purpose of introducing the half-plane \(\Pi \) and the line \(\mathbb {L}\) in the construction above is to ensure the 2-Ahlfors regularity of \(\partial \Omega \). It is also clear that \(E\times \mathbb {R}\) is purely 2-unrectifiable (since this set has no approximate tangent planes at any point).
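For instance, the 2-Ahlfors regularity of \(\partial \Omega \) can be read off from the 1-Ahlfors regularity of \(\partial V\) via the product structure; the following is only a sketch, with non-optimized constants: for \(x=(x',t)\in \partial \Omega \) (with \(x'\in \partial V\), \(t\in \mathbb {R}\)) and \(r>0\),

$$\begin{aligned} {\mathcal {H}}^2\big (\partial \Omega \cap B(x,r)\big ) \approx r \, {\mathcal {H}}^1\big (\partial V \cap B(x',r)\big ) \approx r \cdot r = r^2. \end{aligned}$$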

Let A be the \(2\times 2\) matrix in the David–Mayboroda example in Theorem 1.6, let \(L=-{\text {div}}A\nabla \) be the associated elliptic operator in \(\mathbb {R}^2\), and let

$$\begin{aligned} {{\hat{A}}} =\left( \begin{array}{ll} A &{} 0\\ 0 &{}1\end{array}\right) . \end{aligned}$$

Set \({{\hat{L}}} = -\mathop {{\text {div}}}\nolimits {{\hat{A}}}\,\nabla \) in \(\mathbb {R}^3\). Below we will show that \(\omega _{{{\hat{L}}},\Omega }\in A_\infty (\sigma )\), where \(\sigma \) is the surface measure on \(\partial \Omega \). Consequently, by Theorem 1.5, every function \(f \in {{\,\textrm{BMO}\,}}(\partial \Omega )\) has a Varopoulos extension in \(\Omega \).

First, we prove the following.

Lemma 7.1

We have that \(\omega _{L,V}\in A_{\infty }({\mathcal {H}}^1|_{\partial V})\). Further, there is a constant \(C>0\) such that for any surface ball \(\Delta \subset \partial V\) and any corkscrew point \(p\in V\) for \(\Delta \),

$$\begin{aligned} \frac{d\omega _{L,V}^p}{d{\mathcal {H}}^1|_{\partial V}}(x)\le \frac{C}{{\mathcal {H}}^1(\Delta )},\qquad \text {for each }x\in \Delta . \end{aligned}$$
(7.2)

Proof

It is clear that the estimate (7.2) implies the local \(A_\infty \) condition of \(\omega _L\equiv \omega _{L,V}\). First we will show that, for any \(p\in \Pi \),

$$\begin{aligned} \frac{d\omega _{L,\Pi }^p}{d{\mathcal {H}}^1|_{\mathbb {L}}}(x)\le C\,\frac{d\omega _{-\Delta ,\Pi }^p}{d{\mathcal {H}}^1|_{\mathbb {L}}}(x), \end{aligned}$$
(7.3)

where \(\omega _{L,\Pi }\) stands for the L-elliptic measure for the domain \(\Pi \), and \(\omega _{-\Delta ,\Pi }\) is the harmonic measure (i.e. for the Laplacian) for the domain \(\Pi \). To this end, we consider the auxiliary domain \(U= \Pi \setminus \overline{B_1}\), where we denoted \(B_r=B(0,r)\). Observe that A is the identity matrix on U, and thus \(\omega _{L,U}^X\equiv \omega _{-\Delta ,U}^X\) for all \(X\in U\). Let \(F\subset \mathbb {L}\) be an arbitrary closed subset, and let \(q\in \partial B_{3/2}\) be such that

$$\begin{aligned} \omega _{L,\Pi }^q(F) = \max _{X\in \partial B_{3/2}}\omega _{L,\Pi }^X(F). \end{aligned}$$

Since \(\omega _{L,\Pi }^x(F)\) is a harmonic function of x in U, we have

$$\begin{aligned} \omega _{L,\Pi }^q(F)&= \int _{\partial U} \omega ^X_{L,\Pi }(F)\,d\omega ^q_{-\Delta ,U}(X) \\&= \int _{\mathbb {L}} \omega _{L,\Pi }^X(F)\,d\omega ^q_{-\Delta ,U}(X) + \int _{\partial B_1} \omega _{L,\Pi }^X(F)\,d\omega ^q_{-\Delta ,U}(X)\\&\le \omega _{-\Delta ,U}^q(F) + \sup _{X\in \partial B_1}\omega _{L,\Pi }^X(F)\,\omega _{-\Delta ,U}^q(\partial B_1). \end{aligned}$$

By the maximum principle and the definition of q, we have

$$\begin{aligned} \sup _{X\in \partial B_1}\omega _{L,\Pi }^X(F) \le \sup _{X\in \partial B_{3/2}}\omega _{L,\Pi }^X(F) = \omega _{L,\Pi }^q(F). \end{aligned}$$

Also, it is immediate that \(c_B {:}{=}\omega _{-\Delta ,U}^q(\partial B_1)<1\). Hence,

$$\begin{aligned} \omega _{L,\Pi }^q(F) \le \omega _{-\Delta ,U}^q(F) + c_B\,\omega _{L,\Pi }^q(F), \end{aligned}$$

or equivalently, \(\omega _{L,\Pi }^q(F) \le (1-c_B)^{-1}\omega _{-\Delta ,U}^q(F)\). By a Harnack chain argument we deduce that \(\omega _{L,\Pi }^p(F) \lesssim \omega _{-\Delta ,U}^p(F) = \omega _{L,U}^p(F) \) for all \(p\in \partial B_{3/2}\), and then by the maximum principle, it follows that the same estimate is valid for all \(p\in \Pi \setminus {{\overline{B}}}_{3/2}\). Using again the maximum principle, we get, for all \(p\in \Pi \setminus {{\overline{B}}}_{3/2}\),

$$\begin{aligned} \omega _{L,\Pi }^p(F) \lesssim \omega _{-\Delta ,U}^p(F) \le \omega _{-\Delta ,\Pi }^p(F). \end{aligned}$$

Since \(\omega _{L,\Pi }^p(F)\) and \(\omega _{-\Delta ,\Pi }^p(F)\) are, respectively, L-elliptic and harmonic functions of p in \(B_{3/2}\), by a Harnack chain argument it follows that the estimate above also holds for all \(p\in B_{3/2}\), possibly with a different implicit constant. This is equivalent to (7.3).

To prove (7.2), let B be a ball centered in \(\partial V\) such that \(\Delta =B \cap \partial V\) and consider an arbitrary closed set \(F\subset \Delta \). Let \(p\in B\cap V\) be a corkscrew point for \(\Delta \). By the maximum principle and by (7.3),

$$\begin{aligned} \omega _{L,V}^p (F\cap \mathbb {L}) \le \omega _{L,\Pi }^p (F\cap \mathbb {L}) \lesssim \omega _{-\Delta ,\Pi }^p (F\cap \mathbb {L})\lesssim \frac{{\mathcal {H}}^1(F\cap \mathbb {L})}{{\mathcal {H}}^1(\Delta )}, \end{aligned}$$

where in the last estimate we used that \(\frac{d\omega _{-\Delta ,\Pi }^p}{d{\mathcal {H}}^1|_{\mathbb {L}}}(x)\lesssim {\mathcal {H}}^1(\Delta )^{-1}\) for all \(x\in \Delta \cap \mathbb {L}\). Indeed, if \(F\cap \mathbb {L}=\varnothing \), then there is nothing to show; if B is centered on \(\mathbb {L}\), this follows from the Ahlfors regularity of \(\partial V\) and classical properties of the harmonic measure; and if B is centered on E and there exists \(z\in F\cap \mathbb {L}\), we set \(B'=B(z,{\text {rad}}(B))\) and let \(p'\) be a corkscrew point for \(B'\) in V, so that by Harnack chains we have \(\omega ^p_{-\Delta ,\Pi }(F\cap \mathbb {L})\approx \omega ^{p'}_{-\Delta ,\Pi }(F\cap \mathbb {L})\), and the claim follows as in the previous case.
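For concreteness, the classical input used here can be recalled explicitly; the following display is only the standard Poisson kernel of the half-plane \(\Pi =\{(x,y): y>-2\}\), with \(\mathbb {L}\) identified with \(\mathbb {R}\) via \((x,-2)\mapsto x\):

$$\begin{aligned} \frac{d\omega _{-\Delta ,\Pi }^{p}}{d{\mathcal {H}}^1|_{\mathbb {L}}}(x) = \frac{1}{\pi }\,\frac{p_2+2}{(p_1-x)^2+(p_2+2)^2}, \qquad p=(p_1,p_2)\in \Pi , \end{aligned}$$

so that for a corkscrew point p for a surface ball \(\Delta \) of radius \(r\approx {\mathcal {H}}^1(\Delta )\) we have \(p_2+2 \ge {\text {dist}}(p,\partial V) \gtrsim r\), and hence the density is \(\lesssim r^{-1}\approx {\mathcal {H}}^1(\Delta )^{-1}\).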

To estimate \(\omega _{L,V}^p (F\cap E)\), we assume that \(B\cap E\ne \varnothing \) and we distinguish two cases. Suppose first that r(B), the radius of B, satisfies \(r(B)\le 1\). Denote \(X_E=(0,2)\) and let \(B'\) be a ball centered in E containing B with radius at most 2r(B). By the maximum principle, the change of poles formula, and the fact that \(\frac{d\omega _{L,E^c}^{X_E}}{d{\mathcal {H}}^1|_{E}}(x)\approx 1\) (by Theorem 1.6), we obtain

$$\begin{aligned} \omega _{L,V}^p(F\cap E) \le \omega _{L,E^c}^p(F\cap E) \approx \frac{\omega _{L,E^c}^{X_E}(F\cap E)}{\omega _{L,E^c}^{X_E}(B')} \approx \frac{{\mathcal {H}}^1(F\cap E)}{{\mathcal {H}}^1(B'\cap E)} \approx \frac{{\mathcal {H}}^1(F\cap E)}{{\mathcal {H}}^1(\Delta )}. \end{aligned}$$

In the case \(r(B)>1\), recall that \(B_1=B(0,1)\), and then using the change of poles formula and the maximum principle, we write

$$\begin{aligned} \omega _{L,V}^p(F\cap E) \approx \omega _{L,V}^{X_E}(F\cap E)\,\omega _{L,V}^{p}(B_1) \le \omega _{L,E^c}^{X_E}(F\cap E)\,\omega _{L,V}^{p}(B_1) \approx {\mathcal {H}}^1(F\cap E)\,\omega _{L,V}^{p}(B_1). \end{aligned}$$

To estimate \(\omega _{L,V}^{p}(B_1)\), consider a ball \(B_1'\) of radius 1 centered in \(\mathbb {L}\), disjoint from E, and contained in \(4B_1\). We claim that

$$\begin{aligned} \omega _{L,V}^{p}(B_1) \approx \omega _{L,V}^{p}(B_1'). \end{aligned}$$
(7.4)

Indeed, if \(p\notin 8B_1\), then (7.4) follows directly from Lemma 2.37. On the other hand, if \(p\in 8B_1\), then denote \(p'=(0,16)\) and note that \(\delta (p) \gtrsim 1\), \(\delta (p')\ge 8\), and \(|p-p'|\le 24\). Then, by the Harnack inequality, Harnack chains, and Lemma 2.37, the estimate (7.4) follows. Next, by the maximum principle, (7.3), and Ahlfors regularity, we get

$$\begin{aligned} \omega _{L,V}^{p}(B_1') \le \omega _{L,\Pi }^{p}(B_1') \lesssim \omega _{-\Delta ,\Pi }^{p}(B_1')\lesssim \frac{{\mathcal {H}}^1(B_1'\cap \mathbb {L})}{{\mathcal {H}}^1(\Delta )}\approx \frac{1}{{\mathcal {H}}^1(\Delta )}. \end{aligned}$$

Therefore, again we derive

$$\begin{aligned} \omega _{L,V}^p(F\cap E)\lesssim \frac{{\mathcal {H}}^1(F\cap E)}{{\mathcal {H}}^1(\Delta )}. \end{aligned}$$

Altogether, we deduce that

$$\begin{aligned} \omega _{L,V}^p (F) = \omega _{L,V}^p (F\cap \mathbb {L}) + \omega _{L,V}^p (F\cap E)\lesssim \frac{{\mathcal {H}}^1(F)}{{\mathcal {H}}^1(\Delta )}, \end{aligned}$$

for any closed set \(F\subset \Delta \), which is equivalent to the statement in the lemma, by the Lebesgue–Radon–Nykodim Theorem. \(\square \)

Now we are ready to prove the \(A_\infty \) property of the elliptic measure \(\omega _{{{\hat{L}}}}\) for the three-dimensional domain \(\Omega \):

Proposition 7.5

The elliptic measure \(\omega _{{{\hat{L}}}}\) for the domain \(\Omega \subset \mathbb {R}^3\) defined above satisfies the \(A_\infty \) condition with respect to \({\mathcal {H}}^2|_{\partial \Omega }\). Further, there is a constant \(C>0\) such that for any surface ball \(\Delta \subset \partial \Omega \) and any corkscrew point \(p\in \Omega \) for \(\Delta \),

$$\begin{aligned} \frac{d\omega _{{{\hat{L}}}}^p}{d{\mathcal {H}}^2|_{\partial \Omega }}(x)\le \frac{C}{{\mathcal {H}}^2(\Delta )},\qquad \text {for each }x\in \Delta . \end{aligned}$$
(7.6)

Proof

Let B be a ball with radius r(B) centered in \(\partial \Omega \) such that \(\Delta = B\cap \partial \Omega \). Clearly, to prove the \(A_\infty \) condition for \(\omega _{{{\hat{L}}}}\), it suffices to show (7.6). In turn, since \({{\hat{L}}}\) is symmetric, by a direct application of Lemma 2.38 and the Lebesgue–Radon–Nikodym Theorem, it is enough to prove that

$$\begin{aligned} G_{{{\hat{L}}}}(X,p) \lesssim \frac{{\text {dist}}(X,\partial \Omega )}{r(B)^2}\quad \text{ for } \text{ all } X\in B\cap \Omega \setminus B(p,\frac{1}{2} {\text {dist}}(p,\partial \Omega )), \end{aligned}$$
(7.7)

where \(G_{{{\hat{L}}}}\) is the \({{\hat{L}}}\)-Green function for \(\Omega \).

Denote by P the orthogonal projection of \(\mathbb {R}^3\) onto \(\mathbb {R}^2\equiv \mathbb {R}^2\times \{0\}\). Let \(p_0 = P(p)\) and consider the function \(u:{{\overline{\Omega }}}\setminus P^{-1}(\{p_0\})\rightarrow \mathbb {R}\) defined by

$$\begin{aligned} u(X) = G_L(P(X),P(p)) = G_L(P(X),p_0), \end{aligned}$$

where \(G_L\) is the Green function for L in the domain \(V=P(\Omega )\). It is straightforward to check that u is a solution to \({{\hat{L}}}u = 0\) in \(\Omega \setminus P^{-1}(\{p_0\})\), and clearly it can be extended continuously by zero to the whole of \(\partial \Omega \). Thus, by the boundary Harnack principle, choosing \(p'\in \partial B(p,\frac{1}{2} {\text {dist}}(p,\partial \Omega ))\) such that \(P(p') \in \partial B(p_0,\frac{1}{2} {\text {dist}}(p,\partial \Omega ))\cap \mathbb {R}^2\), we have

$$\begin{aligned} \frac{G_{{{\hat{L}}}}(X,p)}{G_{{{\hat{L}}}}(p',p)} \approx \frac{u(X)}{u(p')}= \frac{G_L(P(X),p_0)}{G_L(P(p'),p_0)} \quad \text{ for } \text{ all } X\in B\cap \Omega \setminus B(p,\frac{1}{2} {\text {dist}}(p,\partial \Omega )). \end{aligned}$$

Thus, for such points X, and by (2.33) and (2.34) applied to both \(G_{{{\hat{L}}}}\) and \(G_{L}\),

$$\begin{aligned} \frac{G_{{{\hat{L}}}}(X,p)}{|p'-p|^{-1}} \approx \frac{G_L(P(X),p_0)}{1}. \end{aligned}$$

Thus, by Lemma 2.38 applied to \(G_{L,V}\), Harnack chains, and the Harnack inequality, we see that

$$\begin{aligned} G_{{{\hat{L}}}}(X,p) \approx \frac{G_{L,V}(P(X),p_0)}{r(B)} \approx \frac{\omega _{L,V}^{p_0}(\Delta _{V,X})}{r(B)}, \end{aligned}$$
(7.8)

where \(\Delta _{V,X}= B(P(X),2{\text {dist}}(P(X),\partial V)) \cap \partial V\). From (7.2) we infer that

$$\begin{aligned} \omega _{L,V}^{p_0}(\Delta _{V,X})\lesssim \frac{{\mathcal {H}}^1(\Delta _{V,X})}{{\mathcal {H}}^1(P(B)\cap \partial V)} \approx \frac{{\text {dist}}(X,\partial \Omega )}{r(B)}. \end{aligned}$$
(7.9)

From (7.8) and (7.9), we deduce (7.7), which concludes the proof. \(\square \)