1 Introduction

Let \((X,d,\mu )\) be a complete metric space, where \(\mu \) is a doubling locally finite Borel measure. It is known, see for example [7, 9], that plenty of analysis can be conducted on \((X,d,\mu )\) whenever the weak p-Poincaré inequality

$$\begin{aligned} \frac{1}{\mu (B)}\int _{B} |u - u_{B}| \, d\mu \le C {\text {diam}}(B) \left( \frac{1}{\mu (\lambda B)}\int _{\lambda B} \rho ^{p} \, d\mu \right) ^{1/p} \end{aligned}$$
(1.1)

is satisfied for some \(C,p,\lambda \ge 1\), for all balls \(B\subset X\), for all locally integrable Borel functions \(u :X \rightarrow {\mathbb {R}}\), and for all upper gradients \(\rho \) of u. It is therefore worthwhile to find necessary and sufficient conditions for the validity of (1.1). One well-known sufficient condition is the existence of pencils of curves, introduced by Semmes [15] in the 1990s. To motivate the results in the present paper, we first discuss Semmes’ condition in some detail; our definition is the one given in Section 14.2 in [9], where the setting is somewhat more general than in Semmes’ original work [15].

Definition 1.1

(Pencils of curves) The space \((X,d,\mu )\) admits pencils of curves (PC) if there exists a constant \(C_0 \ge 1\) with the following property. For all distinct \(s,t\in X\) there is a family \(\Gamma _{s,t}\) of rectifiable curves \(\gamma \subset B(s,C_0 d(s,t))\), each joining s to t and satisfying \({\mathcal {H}}^{1}(\gamma ) \le C_{0}d(s,t)\), and a probability measure \(\alpha _{s,t}\) on \(\Gamma _{s,t}\) such that

$$\begin{aligned} \int _{\Gamma _{s,t}}\int _{\gamma } g \; d{\mathcal {H}}^{1} \, d\alpha _{s,t}(\gamma ) \le C_0 \int _{B(s,C_0 d(s,t))} \left( \frac{g(y)}{\Theta (s,d(s,y))} + \frac{g(y)}{\Theta (t,d(t,y))} \right) \, d\mu (y) \end{aligned}$$

for all Borel functions \(g: X \rightarrow [0,\infty ]\). Here, and in the sequel, \(\Theta \) stands for the 1-dimensional density

$$\begin{aligned} \Theta (x,r) = \frac{\mu (B(x,r))}{r}, \qquad x \in X, \, r > 0. \end{aligned}$$
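For orientation: if \(X = {\mathbb {R}}^{n}\) with the Euclidean metric and \(\mu \) is Lebesgue measure, then

$$\begin{aligned} \Theta (x,r) = \frac{\mu (B(x,r))}{r} = \omega _{n} r^{n-1}, \end{aligned}$$

where \(\omega _{n}\) is the Lebesgue measure of the unit ball; so the kernels \(1/\Theta (s,d(s,y))\) above reduce, up to a constant, to the Riesz-type kernels \(|y-s|^{1-n}\) familiar from Semmes’ condition in Euclidean space.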

A doubling space \((X,d,\mu )\) admitting PCs satisfies the weak 1-Poincaré inequality. This was proven by Semmes for Q-regular spaces, see [15, Theorem B.15], and the general case can be found for instance in Heinonen’s book [8, Chapter 4].

Of course, Semmes in [15] also gives sufficient conditions for finding PCs: his Standard Assumptions (see [15, Theorem 1.11] and the discussion above it) require the space \((X,d,\mu )\) to be an orientable topological n-manifold, with \(\mu = {\mathcal {H}}^{n}\). Moreover, X has to be locally contractible (for more precise statements, see [15, Definition 1.7] or [15, Definition 1.15], but also the discussion in [15, Remark A.35]). These assumptions are certainly not necessary for a space \((X,d,\mu )\) to admit PCs or support a Poincaré inequality; notably, the Laakso spaces [11] have PCs, hence satisfy (1.1) with \(p = 1\), but are generally not integer-dimensional.

The main result of our paper shows that, in complete doubling metric measure spaces, the weak 1-Poincaré inequality implies the existence of PCs. We achieve this by passing through an intermediate notion, the generalised pencils of curves.

Definition 1.2

(Generalised pencils of curves) The space \((X,d,\mu )\) admits generalised pencils of curves (GPC in short) if there exists a constant \(C_{0} \ge 1\) with the following property. For all distinct \(s,t \in X\), there exists a normal 1-current T on X (in the sense of Ambrosio and Kirchheim) satisfying the following three properties:

(P1) \(\partial T = \delta _{t} - \delta _{s}\),

(P2) \({\text {spt}}T \subset B(s,C_{0}d(s,t))\), and

(P3) \(\Vert T\Vert = w \, d\mu \) with

$$\begin{aligned} |w(y)| \le C_{0}\left[ \frac{1}{\Theta (s,d(s,y))} + \frac{1}{\Theta (t,d(t,y))}\right] \qquad \text {for } \mu \text {-a.e. } y \in X. \end{aligned}$$

The main novelty of the present paper is the following result:

Theorem 1.3

Let \((X,d,\mu )\) be complete and doubling. Then X satisfies (1.1) with \(p = 1\) if and only if X admits generalised pencils of curves.

It turns out that the existence of GPCs implies the existence of PCs. This is a consequence of a recent decomposition result for normal 1-currents, due to Paolini and Stepanov [13]. Combined with Theorem 1.3, we obtain the following characterisation.

Theorem 1.4

Let \((X,d,\mu )\) be complete and doubling. Then X satisfies (1.1) with \(p = 1\) if and only if X admits pencils of curves.

Remark 1.5

The first version of this paper contained only Theorem 1.3, as we were not aware of the decomposition result of Paolini and Stepanov. Shortly afterwards, Theorem 1.4 was obtained by Durand-Cartagena et al. [4, Theorem 3.7]; their proof uses the modulus of curve families instead of metric currents. Thus Theorem 1.4 first appeared in [4].

The structure of the paper is the following. In Sect. 2, we briefly recall the definition of, and some basic concepts related to, the metric currents of Ambrosio and Kirchheim. Then, in Sect. 3, we prove the easy “if” implication

$$\begin{aligned} \text {GPCs} \Longrightarrow \text {weak } 1\text {-PI} \end{aligned}$$

of Theorem 1.3, mostly using classical methods in metric analysis. We chose to retain a direct proof of this implication, as it indicates how GPCs can be applied in practice. Another proof would be

$$\begin{aligned} \text {GPCs} \Longrightarrow \text {PCs} \Longrightarrow \text {weak } 1\text {-PI}, \end{aligned}$$

where the first implication follows from Paolini and Stepanov’s work. As explained above, the second implication follows from [15, Theorem B.15] in the Q-regular case, and in full generality from [8, Chapter 4].

Section 4 is the core of the paper, containing the proof of the “only if” implication of Theorem 1.3. In short, the idea is to translate the problem of finding currents in \((X,d,\mu )\) to finding “network flows” in certain graphs derived from \(\delta \)-nets in X. The existence of such flows is guaranteed by the famous max flow–min cut theorem of Ford and Fulkerson [5]. Then, the main task will be to verify that there are no “small cuts” in the graph, and this can be done by using the weak 1-Poincaré inequality. Finally, in Sect. 5, we explain how to deduce Theorem 1.4 from Theorem 1.3 using the results of Paolini and Stepanov.
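Although no computer implementation is used anywhere in the paper, the max flow–min cut theorem invoked above has a short algorithmic realisation, which may help readers less familiar with it. The following sketch (in Python, with hypothetical names) implements the Edmonds–Karp variant of the Ford–Fulkerson method; by the theorem, the maximum flow value it returns equals the minimum capacity of a cut separating the source from the sink.

```python
from collections import deque, defaultdict

def max_flow(capacity, source, sink):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths.
    `capacity` maps directed edge pairs (u, v) to nonnegative capacities."""
    residual = defaultdict(int)   # residual capacities
    graph = defaultdict(set)      # adjacency, including reverse edges
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        graph[u].add(v)
        graph[v].add(u)

    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in graph[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow = min-cut capacity
        # Push the bottleneck capacity along the path found.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow += bottleneck

# A small diamond graph: the minimum s-t cut has capacity 1 + 2 = 3.
caps = {('s', 'a'): 2, ('s', 'b'): 2, ('a', 't'): 1, ('b', 't'): 2}
```

In Sect. 4, the relevant graphs are the nets \(G_{n}\) with the capacities \(c_{n}\) from (4.4); the sketch above only illustrates the general principle.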

1.1 Basic notation

Open balls in a metric space \((X,d)\) will be denoted by \(B(x,r)\), with \(x \in X\) and \(r > 0\). A measure on \((X,d)\) will always refer to a Borel measure \(\mu \) with \(\mu (B(x,r)) < \infty \) for all balls \(B(x,r) \subset X\). The notation \(A \lesssim B\) means that there exists a constant \(C \ge 1\) such that \(A \le CB\): the constant C will typically depend on the “data” of the ambient space, for example the doubling constant of \(\mu \), or the constant in the Poincaré inequality (1.1) (whenever (1.1) is assumed to hold). The two-sided inequality \(A \lesssim B \lesssim A\) is abbreviated to \(A \sim B\).

2 Background on currents

The main result in the paper mentions currents in metric spaces, so we include here a brief introduction. We claim no originality for anything in this section. We use the definition of metric currents given by Ambrosio and Kirchheim, see Definition 3.1 in [1].

Let X be a complete metric space, and let \(\text {Lip}(X)\) and \(\text {Lip}_{b}(X)\) be the families of Lipschitz and bounded Lipschitz functions on X, respectively. For \(k \ge 1\), let \({\mathcal {D}}^{k}(X) := \text {Lip}_{b}(X) \times [\text {Lip}(X)]^{k}\). We typically denote the \((k + 1)\)-tuples in \({\mathcal {D}}^{k}(X)\) by \((f,\pi _{1},\ldots ,\pi _{k})\). Following Definition 2.2 in [1], we consider subadditive, positively 1-homogeneous functionals \(T :{\mathcal {D}}^{k}(X) \rightarrow {\mathbb {R}}\); the collection of such functionals is denoted by \(MF_{k}(X)\). We say that \(T \in MF_{k}(X)\) has finite mass if there exists a finite Borel measure \(\nu \) on X such that

$$\begin{aligned} |T(f,\pi _{1},\ldots ,\pi _{k})| \le \prod _{j = 1}^{k} \text {Lip}(\pi _{j}) \int _{X} |f| \, d\nu , \qquad (f,\pi _{1},\ldots ,\pi _{k}) \in {\mathcal {D}}^{k}(X). \end{aligned}$$
(2.1)

Here \(\mathrm {Lip}(\pi _j)\) denotes the Lipschitz constant of \(\pi _j\), that is, the smallest constant \(L\in [0,\infty )\) for which \(|\pi _j(x)-\pi _j(y)|\le L d(x,y)\) holds for all \(x,y\in X\). For \(k = 0\), the correct interpretation of (2.1) is \(|T(f)| \le \int _X |f| \, d\nu \). As in Definition 2.6 in [1], the minimal measure \(\nu \) satisfying (2.1) is denoted by \(\Vert T\Vert \) (this is well-defined, as discussed below (2.2) in [1]).

A k-dimensional current, or just a k-current, is then a \((k + 1)\)-multilinear functional \(T \in MF_{k}(X)\) with finite mass, satisfying a few additional requirements which we will not need explicitly, see Definition 3.1 in [1]. If T is a k-current, so that \(\Vert T\Vert \) is a finite Borel measure, then bounded Lipschitz functions are dense in \(L^{1}(X,\Vert T\Vert )\), and in particular in the space of bounded Borel functions B(X) equipped with the \(L^{1}(\Vert T\Vert )\)-norm. This fact, and (2.1), together imply that T has a canonical extension to \(B(X) \times [\text {Lip}(X)]^{k}\), which we also denote by T.

We review a few basic concepts related to currents.

Definition 2.1

(Support) The support \({\text {spt}}(T)\) of a k-current T is the usual measure-theoretic support of \(\Vert T\Vert \), namely

$$\begin{aligned} {\text {spt}}\Vert T\Vert = \{x \in X : \Vert T\Vert (B(x,r))> 0 \text { for all } r > 0\}. \end{aligned}$$

Definition 2.2

(Boundary) Let \(T \in MF_{k}(X)\), \(k \ge 1\). Then \(\partial T \in MF_{k - 1}(X)\) is the functional defined by

$$\begin{aligned} \partial T(f,\pi _{1},\ldots ,\pi _{k - 1}) = T(1,f,\pi _{1},\ldots ,\pi _{k-1}). \end{aligned}$$

A k-current T is called normal if \(\partial T\) is a \((k - 1)\)-current; in particular, \(\partial T\) then has finite mass.

Definition 2.3

(Push-forward) Let X and Y be complete metric spaces, and let \(\varphi :X \rightarrow Y\) be Lipschitz. For \(T \in MF_{k}(X)\), we define the functional \(\varphi _{\sharp }T \in MF_{k}(Y)\) by

$$\begin{aligned} \varphi _{\sharp }T(g,\pi _{1},\ldots ,\pi _{k}) = T(g \circ \varphi ,\pi _{1} \circ \varphi ,\ldots ,\pi _{k} \circ \varphi ). \end{aligned}$$

The following is a special instance of Definition 2.5 in [1].

Definition 2.4

(Restriction) Let X be a complete metric space, \(T \in MF_{1}(X)\), and \(g\in \mathrm {Lip}_b(X)\). Then we define an element \( T\lfloor _{ g} \in MF_1(X)\) by setting

$$\begin{aligned} T\lfloor _{ g}(f,\pi _1):= T(fg,\pi _1). \end{aligned}$$

If T is a 1-current, then Definition 2.4 can be extended to \(g\in B(X)\) using the canonical extension of T to \(B(X) \times \mathrm {Lip}(X)\), see [1, p.11]. Moreover, in that case, \(T\lfloor _{ g}\) is again a current, see [1, p.16 and p.19]. If E is a Borel subset of X and \(g=\chi _E\), we write \(T\lfloor _{ E}\) for the restriction \(T\lfloor _{ g}\). We have

$$\begin{aligned} \left| T\lfloor _{E}(f,\pi _1)\right| = \left| T(f\chi _E,\pi _1)\right| \le \mathrm {Lip}(\pi _1) \int _{E} |f| d\Vert T\Vert \end{aligned}$$
(2.2)

for all \((f,\pi _1)\in {\mathcal {D}}^1(X)\). Since \(\chi _E\) is merely Borel but not Lipschitz, (2.2) does not follow directly from the definition of \(\Vert T\Vert \) given by (2.1), but it can be deduced by the density argument alluded to earlier, see [1, (2.3)]. Finally, (2.2) and the minimality of \(\Vert T\lfloor _{E}\Vert \) show that

$$\begin{aligned} \mathrm {spt} \left( T\lfloor _{E}\right) \subseteq \mathrm {spt}\left( \Vert T\Vert \lfloor _E\right) \subseteq {\overline{E}}. \end{aligned}$$

We record the following lemma, which follows from general measure theory:

Lemma 2.5

Assume that T is a k-current on a \(\sigma \)-compact metric space X. Then, for any Borel set \(B \subset X\) and any \(\epsilon > 0\), there exists a compact set \(K \subset B\) with \(\Vert T\Vert (B \setminus K) < \epsilon \).

Proof

By assumption \(\Vert T\Vert \) is a finite Borel measure. The claim now follows from [12, Theorem 1.10], and the Note directly below it. \(\square \)

We next describe a simple example, which will be useful later on.

Example 2.6

Given a non-degenerate interval \([a,b] \subset {\mathbb {R}}\) we may define the 1-current \(\llbracket a,b \rrbracket \) as follows:

$$\begin{aligned} \llbracket a,b \rrbracket (f,\pi ) = \int _{a}^{b} f(t)\pi '(t) \, dt, \end{aligned}$$

where \((f,\pi ) \in \text {Lip}_{b}({\mathbb {R}}) \times \text {Lip}({\mathbb {R}})\). This is a particular case of Example 3.2 in [1], and it is noted there that \(\Vert \llbracket a,b \rrbracket \Vert = {\mathcal {H}}^{1}|_{[a,b]}\). The boundary of \(\llbracket a,b \rrbracket \) is the measure (or 0-current) \(\delta _{b} - \delta _{a}\), as shown by the following computation:

$$\begin{aligned} \partial \llbracket a,b \rrbracket (f) = \llbracket a,b \rrbracket (1,f) = \int _{a}^{b} f'(t) \, dt = f(b) - f(a) = \int f \, d[\delta _{b} - \delta _{a}]. \end{aligned}$$

Next, consider an isometric embedding \(\gamma :[a,b] \rightarrow X\), where X is any complete metric space. Then \(\gamma _{\sharp } \llbracket a,b \rrbracket \) defines a current in X given by (spelling out Definition 2.3)

$$\begin{aligned} \gamma _{\sharp }\llbracket a,b \rrbracket (f,\pi ) = \llbracket a,b \rrbracket (f \circ \gamma , \pi \circ \gamma ) = \int _{a}^{b} f(\gamma (t))(\pi \circ \gamma )'(t) \, dt. \end{aligned}$$

It is noted in [1, (2.6)], and in the discussion directly below, that

$$\begin{aligned} \Vert \gamma _{\sharp }\llbracket a,b \rrbracket \Vert = \gamma _{\sharp } \Vert \llbracket a,b \rrbracket \Vert = \gamma _{\sharp }({\mathcal {H}}^{1}\lfloor _{[a,b]}) = {\mathcal {H}}^{1}\lfloor _{\gamma ([a,b])}. \end{aligned}$$
(2.3)

The last equation follows from the isometry assumption. Finally, because boundary and push-forward commute by [1, (2.1)], we have

$$\begin{aligned} \partial (\gamma _{\sharp } \llbracket a,b \rrbracket ) = \gamma _{\sharp }(\partial \llbracket a,b \rrbracket ) = \delta _{\gamma (b)} - \delta _{\gamma (a)}. \end{aligned}$$
(2.4)

We end the section by recalling a (slightly simplified) version of the compactness theorem for normal currents. The original reference, and the full version of the theorem, is [1, Theorem 5.2].

Theorem 2.7

(Compactness) Let \((T_{n})_{n \in {\mathbb {N}}}\) be a sequence of normal k-currents with

$$\begin{aligned} \sup _{n} \left( \Vert T_{n}\Vert (X) + \Vert \partial T_{n}\Vert (X) \right) < \infty , \end{aligned}$$

and such that \({\text {spt}}T_{n} \subset K\) for some fixed compact set \(K \subset X\), for all \(n \in {\mathbb {N}}\). Then there exists a subsequence \((T_{n_{m}})_{m\in {\mathbb {N}}}\), and a normal k-current T supported on K, such that

$$\begin{aligned} \lim _{m \rightarrow \infty } T_{n_{m}}(f,\pi _{1},\ldots ,\pi _{k}) = T(f,\pi _{1},\ldots ,\pi _{k}), \qquad (f,\pi _{1},\ldots ,\pi _{k}) \in \mathrm{Lip}_{b}(X) \times [\mathrm{Lip}(X)]^{k}. \end{aligned}$$

3 Generalised pencils of curves imply the 1-Poincaré inequality

In this section we prove that if X as in Theorem 1.3 admits generalised pencils of curves, then it supports a weak 1-Poincaré inequality. It is well known, see for instance [9, Theorem 8.1.7], that doubling metric measure spaces which support a Poincaré inequality can be characterised in terms of the validity of pointwise inequalities between functions and their upper gradients. We will use the existence of GPCs to derive such an inequality between an arbitrary Lipschitz function \(u:X \rightarrow {\mathbb {R}}\) and its (upper) pointwise Lipschitz constant

$$\begin{aligned} x \mapsto \text {Lip}(u,x) := \limsup _{r \rightarrow 0} \sup _{d(y,x) \le r} \frac{|u(x) - u(y)|}{r}. \end{aligned}$$
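As a purely illustrative aside (not used in the proofs), the defining limsup can be approximated numerically by sampling the sup over shrinking radii; the helper below, with hypothetical names, does this for functions on the real line.

```python
def lip_at(u, x, radii, samples=200):
    """Approximate Lip(u, x) = limsup_{r -> 0} sup_{|y - x| <= r} |u(x) - u(y)| / r
    by evaluating the inner sup on a sample grid; `radii` is assumed sorted
    from largest to smallest, and the estimate at the smallest radius is kept."""
    est = []
    for r in radii:
        ys = [x - r + 2 * r * k / samples for k in range(samples + 1)]
        est.append(max(abs(u(x) - u(y)) for y in ys) / r)
    return est[-1]

# For u(x) = |x|, the pointwise Lipschitz constant at 0 is 1.
approx = lip_at(abs, 0.0, [0.1, 0.01, 0.001])
```

For \(u = |\cdot |\) the estimate at the smallest radius is 1, matching \(\text {Lip}(u,0) = 1\).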

The desired inequality will be based on the following lemma.

Lemma 3.1

Let X be a complete \(\sigma \)-compact metric space and let \(u :X \rightarrow {\mathbb {R}}\) be a Lipschitz function. Then, for any 1-current T, we have

$$\begin{aligned} |\partial T(u)| \le \int \mathrm{Lip}(u,x) \, d\Vert T\Vert (x). \end{aligned}$$

Proof

The idea of the proof is the following: By definition of \(\partial T\) and since \(\Vert T\Vert \) is a finite Borel measure on X satisfying (2.1), we know that

$$\begin{aligned} |\partial T (u)| = |T(1,u)| \le \int _X \mathrm {Lip}(u) d\Vert T\Vert . \end{aligned}$$

The desired inequality in Lemma 3.1 is similar, but \(\mathrm {Lip}(u)\) is replaced by the pointwise Lipschitz constant \(\mathrm {Lip}(u,\cdot )\). To achieve this, we will essentially decompose X into pieces where \(\mathrm {Lip}(u,\cdot )\) is almost constant.

We now turn to the details. Write

$$\begin{aligned} E := \{x \in X : \text {Lip}(u,x) > 0\} \quad \text {and} \quad Z := \{x \in X : \text {Lip}(u,x) = 0\}. \end{aligned}$$

We perform countable decompositions of the sets E and Z. Consider

$$\begin{aligned} u_{R}(x) := \sup _{0 < r \le R} \sup _{d(y,x) \le r} \frac{|u(x) - u(y)|}{r}, \end{aligned}$$

fix \(\epsilon > 0\), and define

$$\begin{aligned} E_{\delta ,j} := \{x \in X : (1 + \epsilon )^{j} < u_{R} \le (1 + \epsilon )^{j + 1} \text { for all } R \le \delta \}, \quad j \in {\mathbb {Z}}, \, \delta > 0 \end{aligned}$$

and

$$\begin{aligned} Z_{\delta ,j} := \{x \in X : u_{R} \le 2^{-j} \text { for all } R \le \delta \}, \quad j \in {\mathbb {N}}, \, \delta > 0. \end{aligned}$$

Note that \(\mathrm {Lip}(u,x)= \lim _{R\rightarrow 0} u_R(x)\) and that for every \(j \in {\mathbb {Z}}\) fixed, the sequences \((E_{1/i,j})_{i\in {\mathbb {N}}}\) and \((Z_{1/i,j})_{i\in {\mathbb {N}}}\) are nested. Then, we can write

$$\begin{aligned} X = E \cup Z \subset \bigcup _{i \in {\mathbb {N}}} \bigcup _{j \in {\mathbb {Z}}} E_{1/i,j} \cup \bigcap _{j \in {\mathbb {N}}} \bigcup _{i \in {\mathbb {N}}} Z_{1/i,j}. \end{aligned}$$
(3.1)

Fix \(x \in X\), \(i \in {\mathbb {N}}\), and \(j \in {\mathbb {Z}}\). Notice that the restriction of u to the set

$$\begin{aligned} B(x,1/(2i)) \cap E_{1/i,j} =: E^{x}_{1/(2i),j} \end{aligned}$$

is \((1 + \epsilon )^{j + 1}\)-Lipschitz since, if \(y,z \in E^{x}_{1/(2i),j}\), then

$$\begin{aligned} y,z \in E_{1/i,j} \quad \text {and} \quad d(y,z) \le \frac{1}{i}, \end{aligned}$$

which implies that

$$\begin{aligned} \frac{|u(y) - u(z)|}{d(y,z)} \le u_{1/i}(y) \le (1 + \epsilon )^{j + 1}. \end{aligned}$$
(3.2)

Moreover,

$$\begin{aligned} \text {Lip}(u,y) \ge (1 + \epsilon )^{j}, \qquad y \in E_{1/(2i),j}^{x}. \end{aligned}$$
(3.3)

An argument similar to the one for (3.2) applies if \(x \in X\), and

$$\begin{aligned} y,z \in B(x,1/(2i)) \cap Z_{1/i,j} =: Z_{1/(2i),j}^{x}. \end{aligned}$$

Then the conclusion is simply that the restriction of u to the set \(Z_{1/(2i),j}^{x}\) is \(2^{-j}\)-Lipschitz.

Since \((X,d)\) is \(\sigma \)-compact, it is separable, and hence we can pick a countable dense subset \(\{x_n\}_{n\in {\mathbb {N}}}\subseteq X\). Then, for arbitrarily small \(\delta \in (0,1/2)\), the family \({\mathcal {E}}\cup {\mathcal {Z}}\) with

$$\begin{aligned} {\mathcal {E}}:= \left\{ E_{1/(2i),j}^{x_n}:\; i\in {\mathbb {N}},j\in {\mathbb {Z}},n\in {\mathbb {N}}\right\} \end{aligned}$$

and

$$\begin{aligned} {\mathcal {Z}}:= \left\{ Z_{1/(2i),j}^{x_n}:\;i\in {\mathbb {N}}, j\in {\mathbb {N}}\text { with }j\ge - \log _2 \delta ,n\in {\mathbb {N}}\right\} \end{aligned}$$

constitutes a countable cover of X by Borel sets. Using the cover \({\mathcal {E}} \cup {\mathcal {Z}}\), we can further produce a countable disjoint Borel cover of X of the form \(\{E_i\}_{i\in {\mathbb {N}}} \cup \{Z_i\}_{i\in {\mathbb {N}}}\) with the following properties. First, \(E_{i} \subset E\) and \(Z_{i} \subset Z\) for \(i \in {\mathbb {N}}\). Second, the function u can be decomposed as

$$\begin{aligned} u = u\chi _{E} + u\chi _{Z} = \sum _{i \in {\mathbb {N}}} u\chi _{E_{i}} + \sum _{i \in {\mathbb {N}}} u\chi _{Z_{i}}, \end{aligned}$$
(3.4)

where

$$\begin{aligned} u|_{E_{i}} \text { is } L_{i}\text {-Lipschitz} \quad \text {and} \quad \text {Lip}(u,x) \ge (1 - \epsilon )L_{i} \quad \text { for } x \in E_{i} \end{aligned}$$
(3.5)

for some finite constants \(L_{i} > 0\), and

$$\begin{aligned} u|_{Z_{i}} \text { is } \delta \text {-Lipschitz}, \end{aligned}$$
(3.6)

where \(\delta > 0\) can be taken arbitrarily small. To achieve such a disjoint cover \(\{E_{i}\}_{i \in {\mathbb {N}}} \cup \{Z_{i}\}_{i \in {\mathbb {N}}}\), consider first the family

$$\begin{aligned} {\mathcal {E}}':=\{A\cap E:\; A\in {\mathcal {E}}\}. \end{aligned}$$

The sets in \({\mathcal {E}}'\) form a countable cover of E and we enumerate them as \((E_i')_{i\in {\mathbb {N}}}\). Then we arrive at a disjoint cover of E by setting \(E_1:= E_1'\) and defining iteratively \(E_i:= E_i'\setminus (E_1'\cup \ldots \cup E_{i-1}')\) for \(i \ge 2\). To obtain the sets \(\{Z_{i}\}_{i \in {\mathbb {N}}}\), apply the same procedure with \({\mathcal {Z}}' = \{A \cap Z : A \in {\mathcal {Z}}\}\) in place of \({\mathcal {E}}'\). The decomposition (3.4) then holds by construction. To establish (3.5) and (3.6), we note that the properties (3.2) and (3.3) (and the \(2^{-j}\)-Lipschitz continuity of u on \(Z_{1/(2i),j}^x\)) are preserved under taking subsets of \(E_{1/(2i),j}^{x}\) (and \(Z_{1/(2i),j}^x\)).
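The disjointification step just described is a standard construction; as an illustrative sketch (hypothetical names), for finite covers it reads as follows. Each output piece is a subset of the corresponding input set, which is why the Lipschitz properties (3.2) and (3.3) survive the procedure.

```python
def disjointify(cover):
    """Turn a countable cover (here: a finite list of sets) into pairwise
    disjoint sets with the same union: E_i := E_i' \\ (E_1' u ... u E_{i-1}')."""
    seen, pieces = set(), []
    for E in cover:
        pieces.append(set(E) - seen)  # strip everything already covered
        seen |= set(E)
    return pieces

cover = [{1, 2, 3}, {2, 3, 4}, {4, 5}]
pieces = disjointify(cover)
```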

Next, Lemma 2.5 allows us to remove for every \(i\in {\mathbb {N}}\) a Borel set \(N_i\) from \(E_i\) (or similarly \(Z_{i}\)) such that

$$\begin{aligned} E_i \setminus N_i\text { is compact }\quad \text {and}\quad \Vert T\Vert (N_i) < \frac{\epsilon }{2^{i+1}}. \end{aligned}$$

Thus, we may assume that the sets \(\{E_i\}_{i\in {\mathbb {N}}}\cup \{Z_i\}_{i\in {\mathbb {N}}}\) are compact, if we replace (3.4) by a decomposition

$$\begin{aligned} u = \sum _{i \in {\mathbb {N}}} u\chi _{E_{i}} + \sum _{i \in {\mathbb {N}}} u\chi _{Z_{i}} + u\chi _{N}, \end{aligned}$$

where \(N \subset X\) is a Borel set with \(\Vert T\Vert (N) < \epsilon \).

Next, we use the McShane extension theorem to find Lipschitz functions \(u^{E}_{i},u^{Z}_{i} :X \rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} u^{E}_{i}|_{E_{i}} = u|_{E_{i}} \quad \text {and} \quad u^{Z}_{i}|_{Z_{i}} = u|_{Z_{i}} \end{aligned}$$

and

$$\begin{aligned} u_{i}^{E} \text { is } L_{i}\text {-Lipschitz} \quad \text {and} \quad u_{i}^{Z} \text { is } \delta \text {-Lipschitz}. \end{aligned}$$

Then

$$\begin{aligned} u = \sum _{i \in {\mathbb {N}}} u_{i}^{E}\chi _{E_{i}} + \sum _{i \in {\mathbb {N}}} u_{i}^{Z}\chi _{Z_{i}} + u\chi _{N}, \end{aligned}$$

and we can write

$$\begin{aligned} |\partial T(u)| = |T(1,u)| \le \sum _{i \in {\mathbb {N}}} |T(\chi _{E_{i}},u)| + \sum _{i \in {\mathbb {N}}} |T(\chi _{Z_{i}},u)| + |T(\chi _{N},u)|. \end{aligned}$$
(3.7)

Since \(\text {Lip}(u) < \infty \), we have \(|T(\chi _{N},u)| \le \text {Lip}(u)\Vert T\Vert (N) < \mathrm {Lip}(u) \epsilon \). We now estimate the terms involving \(\chi _{E_i}\). By definition of the restriction operation, see Definition 2.4 and the comment below it, we can write

$$\begin{aligned} \left| T(\chi _{E_{i}},u)\right| = \left| T\lfloor _{E_{i}}(1,u)\right| . \end{aligned}$$
(3.8)

Recall that \(u|_{E_i}= u_i^E|_{E_i}\). This is useful information since, according to [1, (3.6)], the values of a 1-current agree on \((f,\pi _1)\) and \((f',\pi _1')\) whenever \(f=f'\) and \(\pi _1= \pi _1'\) on the support of T. Using that \(\mathrm {spt}\left( T\lfloor _{E_i}\right) \subseteq E_i\), we apply this fact to the current \(T\lfloor _{E_{i}}\) and the pairs \((f,\pi _1)=(1,u)\) and \((f',\pi _1')=(1,u_i^E)\), \(i\in {\mathbb {N}}\). This shows that

$$\begin{aligned} \left| T\lfloor _{E_{i}}(1,u)\right| = \left| T\lfloor _{E_{i}}(1,u_i^E)\right| . \end{aligned}$$
(3.9)

Finally, by (2.2) and the property (3.5) of \(u_i^E\), it holds for every \(i\in {\mathbb {N}}\) that

$$\begin{aligned} \left| T\lfloor _{E_{i}}(1,u_i^E)\right| \le \int _{E_i} L_i d \Vert T \Vert \le \frac{1}{1-\epsilon } \int _{E_i} \mathrm {Lip}(u,x) d\Vert T\Vert (x). \end{aligned}$$
(3.10)

Combining (3.8), (3.9), and (3.10), and using the pairwise disjointness of the sets \(E_i\), \(i\in {\mathbb {N}}\), we conclude that

$$\begin{aligned} \sum _{i \in {\mathbb {N}}} \left| T(\chi _{E_{i}},u)\right| \le \frac{1}{1-\epsilon } \int _{X} \mathrm {Lip}(u,x) d\Vert T\Vert (x). \end{aligned}$$

Similar considerations give

$$\begin{aligned} \sum _{i \in {\mathbb {N}}} |T(\chi _{Z_{i}},u)| \le \delta \Vert T\Vert (Z). \end{aligned}$$

Letting \(\epsilon \rightarrow 0\) and \(\delta \rightarrow 0\) in (3.7) completes the proof of Lemma 3.1. \(\square \)
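For completeness, the McShane extension used in the proof admits the explicit formula \(u_{i}^{E}(x) = \inf _{a \in E_{i}} \left( u(a) + L_{i}\,d(x,a)\right) \), which is \(L_{i}\)-Lipschitz and agrees with u on \(E_{i}\). A minimal computational sketch (hypothetical names; finite sets in the plane with the Euclidean metric):

```python
import math

def mcshane_extend(values, L, metric):
    """Given u: A -> R (a dict on a finite set A) that is L-Lipschitz w.r.t.
    `metric`, return the McShane extension x -> min_a (u(a) + L * d(x, a))."""
    def extension(x):
        return min(v + L * metric(x, a) for a, v in values.items())
    return extension

d = lambda p, q: math.dist(p, q)               # Euclidean metric
u_on_A = {(0.0, 0.0): 0.0, (1.0, 0.0): 1.0}    # a 1-Lipschitz function on A
u_ext = mcshane_extend(u_on_A, 1.0, d)          # 1-Lipschitz extension to the plane
```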

We next apply Lemma 3.1 to deduce the validity of a weak 1-Poincaré inequality from the existence of GPCs.

Proof of “if” implication in Theorem 1.3

By a result of Keith [10], see also Theorem 8.4.2 in [9], it suffices to verify the Poincaré inequality for a priori Lipschitz continuous functions u and for the pointwise Lipschitz constant \(\rho = \mathrm {Lip}(u,\cdot )\) instead of arbitrary upper gradients. So, let \(u :X \rightarrow {\mathbb {R}}\) with \(\text {Lip}(u) < \infty \).

Recalling that

$$\begin{aligned} \Theta (x,r)= \frac{\mu (B(x,r))}{r},\quad x\in X,\;r>0, \end{aligned}$$

we will first check that

$$\begin{aligned} |u(t) - u(s)| \lesssim \int _{B(s,C_{0}d(s,t))} \left( \frac{\text {Lip}(u,y)}{\Theta (s,d(s,y))} + \frac{\text {Lip}(u,y)}{\Theta (t,d(t,y))} \right) \, d\mu (y) \end{aligned}$$
(3.11)

for distinct points \(s,t \in X\). Start by fixing such points st, let T be a GPC joining s to t, and recall that \({\text {spt}}T \subset B(s,C_{0}d(s,t))\). Then,

$$\begin{aligned} |u(t) - u(s)| = |\partial T(u)|&\le \int \text {Lip}(u,y) \, d\Vert T\Vert (y)\\&\le C_0 \int _{B(s,C_{0}d(s,t))} \left( \frac{\text {Lip}(u,y)}{\Theta (s,d(s,y))} + \frac{\text {Lip}(u,y)}{\Theta (t,d(t,y))} \right) \, d\mu (y), \end{aligned}$$

using Lemma 3.1 and property (P3) in Definition 1.2. (Since a complete doubling metric space is proper, see [9, Lemma 4.1.14], and a proper metric space is \(\sigma \)-compact, the space \((X,d)\) satisfies the assumptions of Lemma 3.1.) This proves (3.11), which is (almost) a well-known sufficient condition for the weak 1-Poincaré inequality. The only technicality here is that we only know (3.11) for the particular upper gradient \(\text {Lip}(u,\cdot )\). To complete the proof, we will now briefly argue that this suffices to imply the weak 1-Poincaré inequality in full generality.

Indeed, [8, Theorem 9.5] lists several conditions that imply weak Poincaré inequalities in doubling spaces (see also the references in [8]). Our estimate (3.11) shows that condition (2) in [8, Theorem 9.5] holds for \(p=1\), the measure \(\mu \), Lipschitz functions u, the particular upper gradient \(\rho = \mathrm {Lip}(u,\cdot )\), and constants \(C_2=C_3 = C_0\). It then follows from the proof in [8] that condition (3) in the theorem also holds for the same pair \((u,\rho )\) (by this, we mean that to obtain condition (3) for u and \(\rho \), one only needs to have condition (2) for u and \(\rho \), and no other upper gradients). We rephrase condition (3) in a slightly peculiar manner for future application: there exists a constant \(C\ge 1\) (again depending on \(C_{0}\)) such that if \(2B:= B(z,2r)\) is any ball in X, and \(x,y\in 2B\), then

$$\begin{aligned} |u(x) - u(y)| \lesssim d(x,y) \left( M_{Cd(x,y)}\rho (x) + M_{Cd(x,y)}\rho (y) \right) . \end{aligned}$$
(3.12)

Here \(M_{R}\) is the restricted maximal function

$$\begin{aligned} M_{R}\rho (x) = \sup _{r < R} \frac{1}{\mu (B(x,r))} \int _{B(x,r)} \rho (y) \, d\mu (y), \qquad R > 0. \end{aligned}$$
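As an aside, the restricted maximal function has a direct discrete analogue; the sketch below (hypothetical names) evaluates \(M_{R}\rho \) from the definition, for the counting measure on a finite set of points on the real line.

```python
def restricted_maximal(points, rho, x, R, radii):
    """M_R rho(x) = sup_{r < R} (1 / mu(B(x, r))) * integral_{B(x, r)} rho dmu,
    with mu the counting measure on `points`, d(x, y) = |x - y|, and the sup
    taken over the finite trial set `radii`."""
    best = 0.0
    for r in radii:
        if r >= R:
            continue  # only radii r < R are allowed
        ball = [p for p in points if abs(p - x) < r]
        if ball:
            best = max(best, sum(rho(p) for p in ball) / len(ball))
    return best

pts = [-1.0, -0.5, 0.0, 0.5, 1.0]
rho = lambda p: p * p
val = restricted_maximal(pts, rho, 0.0, 0.9, [0.25, 0.6, 0.8])
```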

Next we need to verify that inequality (3.12) implies that the very same pair \((u,\rho )\) satisfies the weak 1-Poincaré inequality. To this end, we apply Theorem 8.1.18 in [9] (originally due to Hajłasz [6]) with \(h = M_{4Cr}\rho \) and \(Q= \log _2 C_{\mu }\) for the doubling constant \(C_{\mu }\) of \(\mu \), to deduce that

$$\begin{aligned} \frac{1}{\mu (B)} \int _{B} |u - u_{B}| \, d\mu \lesssim r \left( \frac{1}{\mu (2B)} \int _{2B} [M_{4Cr}\rho ]^{\tfrac{Q}{Q + 1}} \, d\mu \right) ^{\tfrac{Q + 1}{Q}}. \end{aligned}$$

Finally, following verbatim the argument on p. 224 of [9], we conclude that

$$\begin{aligned} \frac{1}{\mu (B)} \int _{B} |u - u_{B}| \, d\mu&\lesssim r \left( \frac{1}{\mu (2B)} \int _{2B} [M_{4Cr}\rho ]^{\tfrac{Q}{Q + 1}} \, d\mu \right) ^{\tfrac{Q + 1}{Q}}\\&\lesssim r \left( \frac{1}{\mu (2CB)}\int _{2CB} [M_{4Cr}\rho ]^{\tfrac{Q}{Q + 1}} \, d\mu \right) ^{\tfrac{Q + 1}{Q}}\\&\lesssim \frac{1}{\mu (6CB)}\int _{6CB} \rho \;\mathrm {d}\mu . \end{aligned}$$

Hence the inequality (1.1) holds with \(p=1\) and \(\lambda = 6C\) (depending on \(C_0\)) for all open balls \(B\subset X\) and all pairs \((u,\rho )\), where \(u:X \rightarrow {\mathbb {R}}\) is Lipschitz and \(\rho =\mathrm {Lip}(u,\cdot )\). According to [9, Theorem 8.4.2], this shows that \((X,d,\mu )\) supports the weak 1-Poincaré inequality. \(\square \)

4 From 1-Poincaré to generalised pencils of curves

4.1 An initial reduction

The main effort in the rest of the paper consists of proving the following statement:

Theorem 4.1

Let \((X,d,\mu )\) be a complete doubling and geodesic metric measure space. If X supports a 1-Poincaré inequality, then it supports generalised pencils of curves.

The assumptions in Theorem 4.1 are superficially stronger than in the remaining implication of Theorem 1.3, so we start by briefly discussing how Theorem 1.3 reduces to the special case in Theorem 4.1.

Proof of the “only if” part of Theorem 1.3, assuming Theorem 4.1

By [9, Corollary 8.3.16], if the complete doubling space \((X,d,\mu )\) supports a weak 1-Poincaré inequality, then d is biLipschitz equivalent to a geodesic metric \(d_g\). Clearly, the measure \(\mu \) remains doubling with respect to this new metric \(d_g\). Further, by [9, Lemma 8.3.18], the space \((X,d_g,\mu )\) still supports a weak 1-Poincaré inequality. In fact, since \((X,d_g)\) is geodesic, it follows from [9, Remark 9.1.19] that \((X,d_g,\mu )\) even satisfies the weak 1-Poincaré inequality (1.1) with constant \(\lambda =1\); this is often called the 1-Poincaré inequality (without the attribute “weak”).

We can thus apply Theorem 4.1 to \((X,d_g,\mu )\) in order to find a GPC T between any pair of distinct points \(s,t \in X\). Then T is also a GPC joining s to t in \((X,d,\mu )\), since the conditions (P1)–(P3) in Definition 1.2 are invariant under biLipschitz changes of the metric. The least obvious point is (P3), where one needs to recall that \(\mu \) is a doubling measure, whence \(\Theta _{d}(s,d(s,y)) \sim \Theta _{d_g}(s,d_g(s,y))\) and \(\Theta _{d}(t,d(t,y)) \sim \Theta _{d_g}(t,d_g(t,y))\) for all \(y \in X\). \(\square \)

As in the assumptions of Theorem 4.1, we now suppose that \((X,d,\mu )\) is a complete geodesic doubling metric measure space supporting the Poincaré inequality (1.1) with \(p=1\) and \(\lambda =1\).

4.2 Proof of Theorem 4.1

Fix two points \(s,t \in X\), and write \(B_{0} := B(s,C_{0}d(s,t))\) for the ball inside (the closure of) which we should find the current T as in Definition 1.2; the constant \(C_{0} \ge 1\) will be specified later, and its size only depends on the data of \((X,d,\mu )\), such as the doubling constant of \(\mu \), and the constant in the Poincaré inequality. We find the current T by initially constructing a sequence of approximating currents, each of them a sum of finitely many currents of the form discussed in Example 2.6. We start by defining a sequence of covers of \(B_{0}\) by balls. For \(n \in {\mathbb {N}}\), write \(r_{n} := 2^{-n}\), and let \(X_{n} \subset B_{0}\) be an \(r_{n}\)-net, that is, some maximal family of points \(X_{n} \subset B_{0}\) satisfying \(d(x,x') \ge r_{n}\) for all distinct \(x,x' \in X_{n}\). We assume that \(r_{n}\) is far smaller than \(d(s,t)\), and we require that

$$\begin{aligned} \{s,t\} \subset X_{n}. \end{aligned}$$
(4.1)

We note that the collection of open balls \({\mathcal {B}}_{n} := \{B(x,2r_{n}) : x \in X_{n}\}\) now covers \(B_{0}\). In fact, already the balls \(B(x,r_{n})\), \(x \in X_{n}\), cover \(B_{0}\): by the maximality of \(X_{n}\), for every \(y \in B_{0}\) there exists \(x \in X_{n}\) such that

$$\begin{aligned} y \in B(x,r_{n}). \end{aligned}$$
(4.2)

Moreover, every ball \(B \in {\mathcal {B}}_{n}\) only has boundedly many “neighbours”:

$$\begin{aligned} {\text {card}}\{B' \in {\mathcal {B}}_{n} : B \cap B' \ne \emptyset \} \lesssim 1. \end{aligned}$$
(4.3)

This follows by using the doubling property of \(\mu \) and the consequential relative lower volume decay (see [9, Lemma 8.1.13]), and noting that the balls \(B(x,r_{n}/2)\), \(x \in X_{n}\), are disjoint.
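As a purely illustrative aside (not part of the proof), the net construction can be made concrete with a short Python sketch; the Euclidean unit square and the sample points below are assumptions standing in for \(B_{0}\). The greedy selection produces a maximal \(r\)-separated set, so the separation, covering (4.2), and bounded overlap (4.3) properties can be checked directly.

```python
import itertools
import math
import random

def r_net(points, r):
    """Greedy construction of an r-net: keep a point iff it lies at
    distance >= r from every point kept so far.  The result is a
    maximal r-separated subset of the input."""
    net = []
    for p in points:
        if all(math.dist(p, q) >= r for q in net):
            net.append(p)
    return net

random.seed(0)
r = 0.1
pts = [(random.random(), random.random()) for _ in range(500)]
net = r_net(pts, r)

# separation: distinct net points are at distance >= r
assert all(math.dist(p, q) >= r for p, q in itertools.combinations(net, 2))
# covering, cf. (4.2): by maximality, every sample point lies in some B(x, r)
assert all(any(math.dist(p, x) < r for x in net) for p in pts)
# bounded overlap, cf. (4.3): balls B(x, 2r) meeting B(x0, 2r) have centres
# within 4r of x0; a planar volume count bounds their number by (4.5r/0.5r)^2 = 81
assert all(sum(math.dist(x, x0) <= 4 * r for x in net) <= 81 for x0 in net)
```

The last assertion is exactly the volume argument behind (4.3): the disjoint balls \(B(x,r/2)\) around the neighbouring centres all fit inside a ball of radius \(4.5r\).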

We will now construct a current \(T_{n}\) supported in \(\Omega _n = B(s, C_{0}d(s,t) + C r_n)\) for suitable constants \(C_{0},C \ge 1\) (depending on the constants in the 1-Poincaré inequality). The current \(T_{n}\) will be constructed using the max flow–min cut theorem from graph theory, and a subsequence of the currents \(T_{n}\) will eventually be shown, using the compactness theorem for normal currents, to converge to the desired current T supported on \({\bar{B}}_{0}\).

4.3 Graphs and flows

To apply the max flow–min cut theorem, we need to define a graph \(G_{n} = ({\mathcal {V}}_{n},E_{n})\) associated to our problem, where \({\mathcal {V}}_{n}\) is a collection of vertices, and \(E_{n} \subset {\mathcal {V}}_{n} \times {\mathcal {V}}_{n}\) is a collection of edges. We set \({\mathcal {V}}_{n} := X_{n}\), and

$$\begin{aligned} E_{n} := \{(x,x') \in {\mathcal {V}}_{n} \times {\mathcal {V}}_{n} : x \ne x' \quad \text { and } \quad B(x,2r_{n}) \cap B(x',2r_{n}) \ne \emptyset \}. \end{aligned}$$

Note that \((x,x') \in E_{n}\) if and only if \((x',x) \in E_{n}\), and that the maximum degree of any vertex is uniformly bounded by (4.3). We also define a capacity function \(c_{n} :E_{n} \rightarrow {\mathbb {Q}}_{+}\) satisfying

$$\begin{aligned} c_{n}(x,x') \sim \frac{\Theta (x,r_{n})}{\Theta (s,d(s,x))} + \frac{ \Theta (x',r_{n})}{\Theta (t,d(t,x'))}, \quad (x,x') \in E_{n}, \end{aligned}$$
(4.4)

if \(\{x,x'\} \cap \{s,t\} = \emptyset \), and \(c_{n}(x,x') \sim 1\) otherwise. Here \(\Theta (x,r)\) is the (1-dimensional) density

$$\begin{aligned} \Theta (x,r) = \frac{\mu (B(x,r))}{r}, \qquad x \in X, \, r > 0. \end{aligned}$$

We do not specify the values of \(c_n(x,x')\) more precisely: we will only use that \(c_{n}(x,x')\) is a rational number within a constant multiple of the right hand side of (4.4). Note that \(c_n(x,x') \sim c_n(x',x)\) for all \((x,x') \in E_{n}\); in fact, we may as well define \(c_{n}(x,x') = c_{n}(x',x)\).

A flow in \(G_{n}\) is a function \(f :E_{n} \rightarrow {\mathbb {R}}\) satisfying the following three conditions:

  1. (F1)

    \(f(x,x') = -f(x',x)\) for all \((x,x') \in E_{n}\),

  2. (F2)

    \(f(\{x\},{\mathcal {V}}_{n}) = 0\) for all \(x \in {\mathcal {V}}_{n} \setminus \{s,t\}\),

  3. (F3)

    \(f(x,x') \le c_{n}(x,x')\) for all \((x,x') \in E_{n}\).

Informally, (F2) means that the total flow entering any vertex \(x \in {\mathcal {V}}_{n}\) equals the total flow leaving x (except for the “source” and “sink” vertices s and t), and (F3) says that the flow is bounded by the capacity function. Here, and in the sequel, we write

$$\begin{aligned} f({\mathcal {U}},{\mathcal {W}}) = \sum _{e \in E_{n}({\mathcal {U}},{\mathcal {W}})} f(e), \end{aligned}$$

where \(E_{n}({\mathcal {U}},{\mathcal {W}}) = \{(x,x') \in E_{n} : x \in {\mathcal {U}} \text { and } x' \in {\mathcal {W}}\}\), and analogously

$$\begin{aligned} c_n({\mathcal {U}},{\mathcal {W}}) = \sum _{e \in E_{n}({\mathcal {U}},{\mathcal {W}})} c_n(e). \end{aligned}$$

The norm of a flow \(f :E_{n} \rightarrow {\mathbb {R}}\) is defined to be the quantity

$$\begin{aligned} \Vert f\Vert := f(\{s\},{\mathcal {V}}_{n}). \end{aligned}$$

A cut is any pair \(({\mathcal {S}},{\mathcal {S}}^{c})\), where \({\mathcal {S}}\subset {\mathcal {V}}_{n}\) is a set with \(s \in {\mathcal {S}}\) and \(t \in {\mathcal {S}}^{c}\). As usual, \({\mathcal {S}}^{c}\) denotes the complement of \({\mathcal {S}}\) (in \({\mathcal {V}}_{n}\)). The “total flow” of f over any cut \(({\mathcal {S}},{\mathcal {S}}^{c})\) equals \(\Vert f\Vert \):

$$\begin{aligned} \Vert f\Vert = f({\mathcal {S}},{\mathcal {S}}^{c}). \end{aligned}$$
(4.5)

In particular, \(\Vert f\Vert = f({\mathcal {V}}_{n}\setminus \{t\},\{t\})\). For a proof of (4.5), see [3, Proposition 6.2.1]. Consequently, by (F3),

$$\begin{aligned} \Vert f\Vert \le \min _{({\mathcal {S}},{\mathcal {S}}^{c})} c_{n}({\mathcal {S}},{\mathcal {S}}^{c}), \end{aligned}$$
(4.6)

where the \(\min \) runs over all cuts \(({\mathcal {S}},{\mathcal {S}}^{c})\). In other words, the norm of any flow is bounded from above by the capacity of any cut in the graph.

A well-known theorem in graph theory due to Ford and Fulkerson [5] states that if \(c_{n}\) is integer-valued, then (4.6) is sharp: there exists a flow f with \(\Vert f\Vert = \min c_{n}({\mathcal {S}},{\mathcal {S}}^{c})\). We learned the theorem from Diestel’s graph theory book, see [3, Theorem 6.2.2]. Our capacity \(c_{n}\) is not integer valued, but since \(c_{n}(x,x') \in {\mathbb {Q}}_{+}\), and the cardinality of \(E_{n}\) is finite, we may assume that \(c_{n}(x,x') \in {\mathbb {N}}\) by initially multiplying all quantities by a suitable integer.
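For readers less familiar with the max flow–min cut theorem, here is a minimal, purely illustrative Python sketch (the graph below is a made-up example, not the graph \(G_{n}\)): an Edmonds–Karp style augmenting-path search on a small integer-capacity graph. The flow it maintains is skew-symmetric as in (F1), conserved as in (F2), and capacity-bounded as in (F3), and the norm it returns attains the minimal cut capacity.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly push flow along shortest augmenting paths
    in the residual graph.  `cap` maps directed edges (u, v) to integer
    capacities; the flow dictionary is kept skew-symmetric,
    f(u, v) = -f(v, u), mirroring condition (F1)."""
    flow = {}
    nodes = {u for u, v in cap} | {v for u, v in cap}

    def residual(u, v):
        return cap.get((u, v), 0) - flow.get((u, v), 0)

    while True:
        # breadth-first search for an augmenting path s -> t
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual(u, v) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break
        # collect the path, find its bottleneck, and augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual(u, v) for u, v in path)
        for u, v in path:
            flow[u, v] = flow.get((u, v), 0) + bottleneck
            flow[v, u] = flow.get((v, u), 0) - bottleneck
    # the norm ||f|| = f({s}, V), cf. the definition above
    return sum(flow.get((s, v), 0) for v in nodes)

# toy graph: the cut ({s}, rest) has capacity 3 + 2 = 5, and a flow of
# norm 5 exists, so 5 is simultaneously the max flow and the min cut
cap = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1,
       ('a', 't'): 2, ('b', 't'): 3}
assert max_flow(cap, 's', 't') == 5
```

The integrality assumption in the theorem is harmless here for the same reason as in the text: rational capacities can be cleared to integers by a common multiple without changing the combinatorics.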

The reader should view flows in \(G_{n}\) as discrete models for the current \(T_{n}\): we will make the connection rigorous in Sect. 4.4. For now, we wish to find a uniform lower bound for the numbers \(c_{n}({\mathcal {S}},{\mathcal {S}}^{c})\), where \(({\mathcal {S}},{\mathcal {S}}^{c})\) is an arbitrary cut. We claim that

$$\begin{aligned} c_{n}({\mathcal {S}},{\mathcal {S}}^{c}) \gtrsim 1. \end{aligned}$$
(4.7)

To this end, fix a cut \(({\mathcal {S}},{\mathcal {S}}^{c})\), and recall that \(s \in {\mathcal {S}}\) and \(t \in {\mathcal {S}}^{c}\) by definition. Also by definition,

$$\begin{aligned} c_{n}({\mathcal {S}},{\mathcal {S}}^{c})&= \sum _{(x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})} c_{n}(x,x') \nonumber \\&\sim \sum _{(x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})} \left( \frac{\Theta (x,r_{n})}{\Theta (s,d(s,x))} + \frac{ \Theta (x',r_{n})}{\Theta (t,d(t,x'))} \right) . \end{aligned}$$
(4.8)

To be precise, (4.8) only holds if none of the edges in \(E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\) start or end in \(\{s,t\}\). We may assume this, since, for example, if \((s,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\), then \(c_{n}({\mathcal {S}},{\mathcal {S}}^{c}) \ge c_{n}(s,x') \gtrsim 1\), and (4.7) follows. In fact, the same argument holds a little more generally: if \((x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\) satisfies \({\text {dist}}(x,\{s,t\}) \lesssim r_{n}\) or \({\text {dist}}(x',\{s,t\}) \lesssim r_{n}\), then again \(c_{n}({\mathcal {S}},{\mathcal {S}}^{c}) \gtrsim 1\), using the assumption that \(\mu \) is doubling. So, without loss of generality, we assume that

$$\begin{aligned} \min \{{\text {dist}}(x,\{s,t\}),{\text {dist}}(x',\{s,t\})\} \ge Cr_{n}, \qquad (x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c}), \end{aligned}$$
(4.9)

where \(C \ge 1\) is a suitable large constant to be specified later. If \(C \ge 20\), say, then (4.9) has the following consequence:

$$\begin{aligned} d(s,y) \sim d(s,x) \sim d(s,x') \quad \text {and} \quad d(t,y) \sim d(t,x') \sim d(t,x) \end{aligned}$$
(4.10)

for all \(y \in \overline{B(x,5r_{n})} \cup \overline{B(x',5r_{n})}\), and all pairs \((x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\), since \(d(x,x') \le 4r_{n}\) for such pairs. Note that (4.10), combined with the doubling property of \(\mu \), also allows us to replace the denominators in (4.8) by comparable quantities, for example

$$\begin{aligned} \Theta (t,d(t,x')) \sim \Theta (t,d(t,x)), \qquad (x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c}). \end{aligned}$$
(4.11)

Evidently, the proof of (4.7) should somehow use our only assumption: the Poincaré inequality (1.1) with \(p=1\). To this end, we define a Lipschitz function \(u = u_{n} :B_{0} \rightarrow {\mathbb {R}}\) associated to the cut \(({\mathcal {S}},{\mathcal {S}}^{c})\), using a Lipschitz partition of unity on \(B_{0}\), subordinate to the cover \({\mathcal {B}}_{n}\). For \(x \in {\mathcal {V}}_{n}\), let

$$\begin{aligned} \phi _{x}(y) = \chi _{B(x,2r_{n})}(y)\cdot \frac{2r_{n} - d(y,x)}{2r_{n}} \quad \text {and} \quad \psi _{x} := \frac{\phi _{x}}{\sum _{x' \in {\mathcal {V}}_{n}} \phi _{x'}}. \end{aligned}$$

Then

$$\begin{aligned} \{y \in B_{0} : \psi _{x}(y) > 0\} = B_{0} \cap B(x,2r_{n}), \end{aligned}$$
(4.12)

and \(\psi _{x}\) is \((C/r_{n})\)-Lipschitz on \(B_{0}\): this is easy to check, noting that

$$\begin{aligned} \sum _{x' \in {\mathcal {V}}_{n}} \phi _{x'}(y) \sim 1, \qquad y \in B_{0}, \end{aligned}$$

by (4.2) and the bounded overlap of the balls in \({\mathcal {B}}_{n}\). Evidently,

$$\begin{aligned} \sum _{x \in {\mathcal {V}}_{n}} \psi _{x}(y) = 1, \qquad y \in B_{0}, \end{aligned}$$

so the family \(\{\psi _{x}\}_{x \in {\mathcal {V}}_{n}}\) is the partition of unity on \(B_{0}\) we were after. We set

$$\begin{aligned} u := \sum _{x \in {\mathcal {S}}} \psi _{x}. \end{aligned}$$

Evidently, u takes values in [0, 1], and is \(L_{n}\)-Lipschitz on \(B_{0}\) for some \(L_{n} \sim 1/r_{n}\). For \(y \in X\) and any subset \({\mathcal {U}}\subset {\mathcal {V}}_{n}\), write

$$\begin{aligned} {\mathcal {U}}(y) := \{x \in {\mathcal {U}}: y \in B(x,2r_{n})\} = \{x \in {\mathcal {U}}: \phi _{x}(y) > 0\}. \end{aligned}$$

Clearly \({\mathcal {S}}(y),{\mathcal {S}}^{c}(y) \subset {\mathcal {V}}_{n}(y)\) and \({\mathcal {V}}_{n}(y) = {\mathcal {S}}(y) \cup {\mathcal {S}}^{c}(y)\) for all \(y \in X\).
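As a sanity check on these definitions, the following hypothetical one-dimensional toy (purely illustrative: the segment [0, 1], the net, and the cut below are assumptions, not objects from the proof) verifies numerically that the \(\psi _{x}\) form a partition of unity and that u behaves as described.

```python
# toy model on the segment [0, 1]: net points r apart, tent functions
# phi_x supported on B(x, 2r), normalised to a partition of unity psi_x
r = 0.25
net = [0.0, 0.25, 0.5, 0.75, 1.0]    # an r-net of [0, 1]
cut_S = [0.0, 0.25]                  # vertices on the s-side of a cut

def phi(x, y):
    return max(0.0, (2 * r - abs(y - x)) / (2 * r))

def psi(x, y):
    return phi(x, y) / sum(phi(xp, y) for xp in net)

def u(y):
    return sum(psi(x, y) for x in cut_S)

ys = [i / 100 for i in range(101)]
# the psi_x sum to 1 on [0, 1] ...
assert all(abs(sum(psi(x, y) for x in net) - 1) < 1e-9 for y in ys)
# ... and u takes values in [0, 1], as remarked after its definition
assert all(-1e-9 <= u(y) <= 1 + 1e-9 for y in ys)
# u = 1 where every nearby net point lies in S, and u = 0 where none does,
# cf. Lemma 4.2 below
assert abs(u(0.0) - 1) < 1e-9 and abs(u(1.0)) < 1e-9
```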

Lemma 4.2

For \(y \in B_{0}\), we have

$$\begin{aligned} u(y) = 1 \Longleftrightarrow {\mathcal {V}}_{n}(y) = {\mathcal {S}}(y), \end{aligned}$$

and

$$\begin{aligned} u(y) = 0 \Longleftrightarrow {\mathcal {V}}_{n}(y) = {\mathcal {S}}^{c}(y). \end{aligned}$$

Proof

If \(u(y) = 1\), then

$$\begin{aligned} 1 = \sum _{x \in {\mathcal {S}}} \psi _{x}(y) = \sum _{x \in {\mathcal {S}}} \frac{\phi _{x}(y)}{\sum _{x' \in {\mathcal {V}}_{n}} \phi _{x'}(y)} = \frac{\sum _{x \in {\mathcal {S}}(y)} \phi _{x}(y)}{\sum _{x' \in {\mathcal {V}}_{n}(y)} \phi _{x'}(y)}. \end{aligned}$$

Hence

$$\begin{aligned} \sum _{x \in {\mathcal {S}}(y)} \phi _{x}(y) = \sum _{x' \in {\mathcal {V}}_{n}(y)} \phi _{x'}(y), \end{aligned}$$

which forces \({\mathcal {S}}(y) = {\mathcal {V}}_{n}(y)\). The converse implication is clear.

If \(u(y) = 0\), then \(\psi _{x}(y) = 0\) for all \(x \in {\mathcal {S}}\), so \({\mathcal {S}}(y) = \emptyset \). Consequently, \({\mathcal {V}}_{n}(y) \subset {\mathcal {S}}^{c}\), as claimed. The converse implication is again clear. \(\square \)

Corollary 4.3

We have \(u(s) = 1\) and \(u(t) = 0\).

Proof

This is, in fact, a corollary of Lemma 4.2 and (4.9). Start with s: if \(u(s) < 1\), then \(\phi _{x}(s) > 0\) for some \(x \in {\mathcal {S}}^{c}\), hence \(s \in B(x,2r_{n})\) by (4.12). On the other hand, \(s \in {\mathcal {S}}\) (by the very definition of a cut), so the fact that \(s \in {\text {spt}}\psi _x\subseteq B(x,2r_{n})\) implies \((s,x) \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\). This contradicts (4.9) as soon as \(C\ge 2\), and hence we deduce that \(u(s) = 1\).

The treatment of t is essentially symmetric: if \(u(t) > 0\), then \(\phi _{x}(t) > 0\) for some \(x \in {\mathcal {S}}\). But since \(t \in {\mathcal {S}}^{c}\), this implies that \((x,t) \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})\), again violating (4.9). \(\square \)

Next, still using Lemma 4.2, we investigate where \(\text {Lip}(u,y) = 0\).

Lemma 4.4

We have

$$\begin{aligned} \{y \in B_{0} : \mathrm{Lip}(u,y) \ne 0\} \subset \bigcup _{x \in {\mathbf{Bd}}({\mathcal {S}})} B(x,5r_{n}), \end{aligned}$$

where

$$\begin{aligned} {\mathbf{Bd}}({\mathcal {S}}) := \{x \in {\mathcal {S}}: \exists \, x' \in {\mathcal {S}}^{c} \text { such that } (x,x') \in E_{n}\}. \end{aligned}$$

Proof

Pick \(y \in B_{0}\) with \(\text {Lip}(u,y) \ne 0\). We claim that this has the following consequence: for all \(x \in {\mathcal {V}}_{n}(y)\), the set \({\mathcal {N}}(x) := \{x\} \cup \{x' \in {\mathcal {V}}_{n} : (x,x') \in E_{n}\}\) intersects both \({\mathcal {S}}\) and \({\mathcal {S}}^{c}\).

Assume to the contrary that there is some \(x \in {\mathcal {V}}_{n}(y)\) with \({\mathcal {N}}(x) \subset {\mathcal {S}}\) or \({\mathcal {N}}(x) \subset {\mathcal {S}}^{c}\): we start with the case \({\mathcal {N}}(x) \subset {\mathcal {S}}\). Pick \(z \in B_{0} \cap B(x,2r_{n})\) arbitrarily, and consider any \(x' \in {\mathcal {V}}_{n}(z)\). Then \(z \in B(x',2r_{n})\) by definition, so

$$\begin{aligned} z \in B(x,2r_{n}) \cap B(x',2r_{n}) \Longrightarrow (x,x') \in E_{n} \Longrightarrow x' \in {\mathcal {S}}. \end{aligned}$$

This shows that \({\mathcal {V}}_{n}(z) \subset {\mathcal {S}}\), hence \(u(z) = 1\) by Lemma 4.2. But \(z \in B_{0} \cap B(x,2r_{n})\) was arbitrary, so we have inferred that \(u \equiv 1\) on the neighbourhood \(B_{0} \cap B(x,2r_{n})\) of y. In particular \(\text {Lip}(u,y) = 0\), a contradiction.

Next, consider the case \({\mathcal {N}}(x) \subset {\mathcal {S}}^{c}\). As before, pick \(z \in B_{0} \cap B(x,2r_{n})\) arbitrarily, and deduce as above that \(x' \in {\mathcal {S}}^{c}\) for all \(x' \in {\mathcal {V}}_{n}(z)\). This implies by Lemma 4.2 that \(u(z) = 0\), and hence \(u \equiv 0\) on \(B_{0} \cap B(x,2r_{n})\). This contradicts \(\text {Lip}(u,y) \ne 0\).

Now that we have proven the claim above, we finish the proof of the lemma. Fix \(y \in B_{0}\) with \(\text {Lip}(u,y) \ne 0\), pick any \(x \in {\mathcal {V}}_{n}(y)\), and assume first that \(x \in {\mathcal {S}}\). Then there exists \(x' \in {\mathcal {S}}^{c}\) with \((x,x') \in E_{n}\). Hence \(x \in {\mathbf{Bd}}({\mathcal {S}})\) by definition, and \(y \in B(x,2r_{n}) \subset B(x,5r_{n})\), as claimed. Next, if \(x \in {\mathcal {S}}^{c}\), then the claim shows that there exists \(x' \in {\mathcal {S}}\) with \((x',x) \in E_{n}\). This means that \(x' \in {\mathbf{Bd}}({\mathcal {S}})\). Since \(B(x,2r_{n}) \cap B(x',2r_{n}) \ne \emptyset \), we infer that \(y \in B(x',5r_{n})\), and the proof is complete. \(\square \)

We extend u to an \(L_{n}\)-Lipschitz map \(X \rightarrow {\mathbb {R}}\) without change in the notation. Then, we note that (u, \(\text {Lip}(u,\cdot )\)) is a function–upper gradient pair on X, and we apply Theorem 9.5 in [8], more precisely the implication “\((4) \Longrightarrow (2)\)”, which requires the space (Xd) to be geodesic. This implication gives the following estimate:

$$\begin{aligned} 1 = |u(s) - u(t)| \lesssim \int _{B_{0}} \left( \frac{\text {Lip}(u,y)}{\Theta (s,d(s,y))} + \frac{\text {Lip}(u,y)}{\Theta (t,d(t,y))} \right) \, d\mu (y), \end{aligned}$$

assuming that st are “deep enough inside” the ball \(B_{0}\). This can be arranged by choosing \(C_{0} \ge 1\) in the definition of \(B_{0}\) large enough. Then, from Lemma 4.4, we obtain further

$$\begin{aligned} 1 \lesssim \frac{1}{r_{n}} \int _{B_{0} \cap \bigcup _{x \in {\mathbf{Bd}}({\mathcal {S}})} B(x,5r_{n})} \left( \frac{1}{\Theta (s,d(s,y))} + \frac{1}{\Theta (t,d(t,y))} \right) \, d\mu (y). \end{aligned}$$
(4.13)

Using the definition of \({\mathbf{Bd}}({\mathcal {S}})\), and recalling (4.10)–(4.11), we may continue the estimate (4.13) as follows:

$$\begin{aligned} 1&\lesssim \frac{1}{r_{n}} \sum _{x \in {\mathbf{Bd}}({\mathcal {S}})} \left( \frac{\mu (B(x,r_{n}))}{\Theta (s,d(s,x))} + \frac{\mu (B(x,r_{n}))}{\Theta (t,d(t,x))} \right) \\&\le \sum _{(x,x') \in E_{n}({\mathcal {S}},{\mathcal {S}}^{c})} \left( \frac{\Theta (x,r_{n})}{\Theta (s,d(s,x))} + \frac{\Theta (x',r_{n})}{\Theta (t,d(t,x'))} \right) \lesssim c_{n}({\mathcal {S}},{\mathcal {S}}^{c}), \end{aligned}$$

recalling (4.8) in the last inequality. This proves (4.7).

By multiplying the \(c_{n}\) by a constant (rational) factor, we may now arrange \(c_{n}({\mathcal {S}},{\mathcal {S}}^{c}) \ge 1\) for all cuts \(({\mathcal {S}},{\mathcal {S}}^{c})\) with \(s \in {\mathcal {S}}\) and \(t \in {\mathcal {S}}^{c}\). Then, we are in a position to apply the max flow–min cut theorem: there exists a flow \(f_{n} :E_{n} \rightarrow {\mathbb {R}}\) such that \(\Vert f_{n}\Vert \ge 1\). Moreover, recalling (4.6), the norm of the flow \(f_{n}\) is bounded from above by the capacity of the cut \((\{s\},{\mathcal {V}}_{n} \setminus \{s\})\). Since there are only boundedly many edges in \(E_{n}\) of the form (sx), \(x \in {\mathcal {V}}_{n}\), and the capacity of each one of them is \(c_{n}(s,x) \sim 1\), we get

$$\begin{aligned} \Vert f_{n}\Vert \sim 1. \end{aligned}$$
(4.14)

4.4 Currents

In this section, we use the flow \(f_{n}\) constructed above to find a metric current \(T_{n}\) supported in a neighborhood of \(B_{0}\). We define the current \(T_{n}\) as follows. For all edges \(e = (x,x') \in E_{n}\), write \(I_{e} := [0,d(x,x')]\), and let \(\gamma _{e} :I_{e} \rightarrow X\) be an isometric embedding with \(\gamma _{e}(0) = x\) and \(\gamma _{e}(d(x,x')) = x'\). Then \({\mathcal {H}}^{1}(\gamma _{e}(I_{e})) = d(x,x') \sim r_{n}\). We define

$$\begin{aligned} T_{n} := \sum _{e \in E_{n}} f_{n}(e)\gamma _{e\sharp }\llbracket I_{e} \rrbracket , \end{aligned}$$

where \(\llbracket I_{e} \rrbracket \) is the current discussed in Example 2.6.

First, we compute the boundary of \(T_{n}\), based on the facts that the boundary operation is linear, and we already know (recall (2.4)) the boundary of each term \(\gamma _{e\sharp } \llbracket I_{e} \rrbracket \):

$$\begin{aligned} \partial T_{n} = \sum _{e \in E_{n}} f_{n}(e) \partial (\gamma _{e\sharp } \llbracket I_{e} \rrbracket ) = \sum _{(x,y) \in E_{n}} f_{n}(x,y) [\delta _{y} - \delta _{x}]. \end{aligned}$$
(4.15)

To simplify the expression further, we use the flow property (F2) of \(f_{n}\), which says that

$$\begin{aligned} \sum _{(x,y) \in E_{n}} f_{n}(x,y) = 0, \qquad x \in {\mathcal {V}}_{n} \setminus \{s,t\}. \end{aligned}$$

It follows, using also the flow property (F1), namely \(f(x,y) = -f(y,x)\), that if \(x \in {\mathcal {V}}_{n} \setminus \{s,t\}\), then the terms in (4.15) containing \(\delta _{x}\) cancel out:

$$\begin{aligned} -\sum _{(x,y) \in E_{n}} f_{n}(x,y)\delta _{x} + \sum _{(y,x) \in E_{n}} f_{n}(y,x)\delta _{x} = -2\sum _{(x,y) \in E_{n}} f_{n}(x,y)\delta _{x} = 0. \end{aligned}$$

Consequently, all that remains in (4.15) are the terms containing s and t:

$$\begin{aligned} \partial T_{n} = -2\sum _{(s,y) \in E_{n}} f_{n}(s,y)\delta _{s} - 2\sum _{(t,y) \in E_{n}} f_{n}(t,y)\delta _{t} = 2\Vert f_{n}\Vert \delta _{t} - 2\Vert f_{n}\Vert \delta _{s}. \end{aligned}$$
(4.16)

In the last equation, we again used \(f_{n}(t,y) = -f_{n}(y,t)\), and the little proposition stated in (4.5) that \(\Vert f_{n}\Vert = f_{n}({\mathcal {S}},{\mathcal {S}}^{c})\) for any cut \(({\mathcal {S}},{\mathcal {S}}^{c})\), in particular for \(({\mathcal {S}},{\mathcal {S}}^{c}) = ({\mathcal {V}}_{n} \setminus \{t\}, \{t\})\). Recalling (4.14), this yields that

$$\begin{aligned} \Vert \partial T_{n}\Vert (X) \le 4\Vert f_{n}\Vert \lesssim 1, \qquad n \in {\mathbb {N}}, \end{aligned}$$
(4.17)

which in particular verifies that \(T_n\) is a normal current.
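The cancellation behind (4.15)–(4.16) can also be checked mechanically. The following purely illustrative Python fragment (a hand-made toy flow, not the flow \(f_{n}\)) lists both orientations of every edge, as \(E_{n}\) does, and verifies that the signed vertex measure collapses to \(2\Vert f\Vert (\delta _{t} - \delta _{s})\).

```python
# a toy skew-symmetric flow on vertices {s, a, b, t}, with both
# orientations of every edge listed, as in the definition of E_n
f = {('s', 'a'): 2, ('a', 's'): -2,
     ('s', 'b'): 1, ('b', 's'): -1,
     ('a', 't'): 2, ('t', 'a'): -2,
     ('b', 't'): 1, ('t', 'b'): -1}

# boundary of T_n as the signed vertex measure
# sum_e f(e) (delta_head - delta_tail), cf. (4.15)
boundary = {}
for (x, y), v in f.items():
    boundary[y] = boundary.get(y, 0) + v
    boundary[x] = boundary.get(x, 0) - v

norm = sum(v for (x, y), v in f.items() if x == 's')   # ||f|| = f({s}, V_n)
# interior vertices cancel; only 2||f|| (delta_t - delta_s) survives, cf. (4.16)
assert boundary == {'s': -2 * norm, 'a': 0, 'b': 0, 't': 2 * norm}
```

The factor 2 appears exactly because each undirected edge contributes twice, once per orientation; the flow conservation (F2) kills the interior vertices.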

Next, we estimate the measures \(\Vert T_{n}\Vert \) and find a uniform upper bound for \(\Vert T_{n}\Vert (X)\). We recall from (2.3) that \(\Vert \gamma _{e\sharp } \llbracket I_{e} \rrbracket \Vert = {\mathcal {H}}^{1}\lfloor _{\gamma _{e}(I_{e})}\). It follows that

$$\begin{aligned} \Vert T_{n}\Vert \le \sum _{e \in E_{n}} |f_{n}(e)| {\mathcal {H}}^{1}\lfloor _{\gamma _{e}(I_{e})} \le \sum _{e \in E_{n}} c_{n}(e) {\mathcal {H}}^{1}\lfloor _{\gamma _{e}(I_{e})}. \end{aligned}$$
(4.18)

Recalling that \(|f_{n}(e)| \le c_{n}(e)\) (using the flow property (F3) and the fact that \(c_{n}(x,x') = c_{n}(x',x)\) for all \((x,x') \in E_{n}\)), we may now easily estimate \(\Vert T_{n}\Vert (B)\) from above for all balls \(B = B(x,r) \subset X\) with \(r \ge r_{n}\). We claim that

$$\begin{aligned} \Vert T_{n}\Vert (B) \lesssim \int _{10B} \left( \frac{1}{\Theta (s,d(s,y))} + \frac{1}{\Theta (t,d(t,y))} \right) \, d\mu (y), \quad B = B(x,r) \subset X, \, r \ge r_{n}.\nonumber \\ \end{aligned}$$
(4.19)

We start by disposing of a little technicality. Note that there are only boundedly many edges in \(E_{n}\) of the form (xy) where \(\min \{d(x,s),d(y,s)\} \le 5r_{n}\). We denote these edges by \(E_{n}(s)\), and we use the trivial estimate \(c_{n}(e) \lesssim 1\) for all edges \(e \in E_{n}(s)\). If B happens to intersect \(\gamma _{e}(I_{e})\) for one of the edges \(e \in E_{n}(s)\), we estimate as follows:

$$\begin{aligned} \Vert T_{n}\Vert (B) \lesssim \sum _{e \in E_{n} \setminus E_{n}(s)} c_{n}(e){\mathcal {H}}^{1}(\gamma _{e}(I_{e}) \cap B) + \sum _{e \in E_{n}(s)} r_{n}. \end{aligned}$$

But if B intersects \(\gamma _{e}(I_{e})\) for an edge \(e \in E_{n}(s)\), then 10B contains \(B(s,r_{n})\), and the second term above is bounded from above by the right hand side of (4.19):

$$\begin{aligned} \sum _{e \in E_{n}(s)} r_{n} \lesssim \int _{B(s,r_{n})} \frac{d\mu (y)}{\Theta (s,d(s,y))} \le \int _{10B} \left( \frac{1}{\Theta (s,d(s,y))}+\frac{1}{\Theta (t,d(t,y))} \right) \, d\mu (y), \end{aligned}$$

using in the first inequality that \(d(s,y) \sim r_{n}\) for \(y \in B(s,r_{n}) \setminus B(s,r_{n}/2) =: A(s,r_{n})\), and \(\mu (A(s,r_{n})) \sim \mu (B(s,r_{n}))\) by the doubling hypothesis (this also requires \(A(s,r_{n}) \ne \emptyset \), which easily follows from the path connectedness of (Xd); see also [9, (8.1.17)]). We may dispose similarly of the situation where B meets \(\gamma _{e}(I_{e})\) for some \(e \in E_{n}(t)\) (defined in the same way as \(E_{n}(s)\)). In other words, it remains to estimate the sum

$$\begin{aligned} \sum _{e \in E_{n} \setminus (E_{n}(s) \cup E_{n}(t))} c_{n}(e){\mathcal {H}}^{1}(\gamma _{e}(I_{e}) \cap B) \lesssim \sum _{(x,y) \in E_{n}(B)} \left( \frac{\mu (B(x,r_{n}))}{\Theta (s,d(s,x))} + \frac{\mu (B(x,r_{n}))}{\Theta (t,d(t,x))} \right) ,\nonumber \\ \end{aligned}$$
(4.20)

where \(E_{n}(B) := \{e \in E_{n} \setminus [E_{n}(s) \cup E_{n}(t)] : \gamma _{e}(I_{e}) \cap B \ne \emptyset \}\). We used in (4.20) that

$$\begin{aligned} d(t,y) \sim d(t,x) \quad \text {and} \quad \mu (B(y,r_{n})) \sim \mu (B(x,r_{n})), \qquad (x,y) \in E_{n}(B). \end{aligned}$$

To proceed, we recall again that every \(x \in {\mathcal {V}}_{n}\) only has boundedly many neighbours in \(G_{n}\), and all the vertices \(x \in {\mathcal {V}}_{n}\) with at least one edge \((x,y) \in E_{n}(B)\) must lie at distance \(\le 4r_{n}\) from B (since \({\mathcal {H}}^{1}(\gamma _{e}(I_{e})) \le 4r_{n}\)), and at distance \(\ge 5r_{n}\) from \(\{s,t\}\). We denote the collection of such vertices by \({\mathcal {V}}_{n}(B)\). The observations above, and the bounded overlap of the balls \(B(x,r_{n})\), \(x \in {\mathcal {V}}_{n}\), allow us to continue (4.20) as follows:

$$\begin{aligned} (4.20)&\lesssim \sum _{x \in {\mathcal {V}}_{n}(B)} \left( \frac{\mu (B(x,r_{n}))}{\Theta (s,d(s,x))} + \frac{\mu (B(x,r_{n}))}{\Theta (t,d(t,x))} \right) \\&\lesssim \int _{10B} \left( \frac{1}{\Theta (s,d(s,y))} + \frac{1}{\Theta (t,d(t,y))} \right) \, d\mu (y). \end{aligned}$$

Here we used, once again, that \(d(s,y) \sim d(s,x)\) and \(d(t,y) \sim d(t,x)\) for all \(y \in B(x,r_{n})\), whenever \({\text {dist}}(x,\{s,t\}) \ge 5r_{n}\). This concludes the proof of (4.19).

Finally, since the sets \(\gamma _{e}(I_{e})\), \(e \in E_{n}\), are geodesics connecting vertices in \({\mathcal {V}}_{n}\), we infer that

$$\begin{aligned} {\text {spt}}T_{n} = {\text {spt}}\Vert T_{n}\Vert \subset B(s,C_{0}d(s,t) + 5r_{n}) \subset 2B_{0}. \end{aligned}$$

In particular, the supports of the currents \(T_{n}\) are contained in the fixed compact set \(\overline{2B_{0}}\). Moreover, noting that \(20B_{0} = B(s,20C_{0}d(s,t)) \subset B(t,21C_{0}d(s,t))\), we can write both

$$\begin{aligned} 20B_{0} \subset \bigcup _{2^{-j} \le 50C_{0}d(s,t)} A(s,2^{-j}) \quad \text {and} \quad 20B_{0} \subset \bigcup _{2^{-j} \le 50C_{0}d(s,t)} A(t,2^{-j}). \end{aligned}$$
(4.21)

For \(j \in {\mathbb {N}}\) fixed, we can use the doubling property of the measure \(\mu \) to estimate

$$\begin{aligned} \int _{A(s,2^{-j})} \frac{d\mu (y)}{\Theta (s,d(s,y))} = \int _{A(s,2^{-j})} \frac{d(s,y) \, d\mu (y)}{\mu (B(s,d(s,y)))} \lesssim 2^{-j} \frac{\mu (A(s,2^{-j}))}{\mu (B(s,2^{-j}))}, \end{aligned}$$

and a similar bound holds with s replaced by t. Hence, using first (4.19), and then splitting the integration domain as in (4.21), we obtain the following uniform upper bound for the masses of the currents \(T_{n}\):

$$\begin{aligned} \Vert T_{n}\Vert (X) = \Vert T_{n}\Vert (2B_{0})&\lesssim \int _{20B_{0}} \left( \frac{1}{\Theta (s,d(s,y))} + \frac{1}{\Theta (t,d(t,y))} \right) \, d\mu (y)\nonumber \\&\lesssim \sum _{2^{-j} \le 50C_{0}d(s,t)} 2^{-j} \left[ \frac{\mu (A(s,2^{-j}))}{\mu (B(s,2^{-j}))} + \frac{\mu (A(t,2^{-j}))}{\mu (B(t,2^{-j}))} \right] \lesssim {\text {diam}}(B_{0}). \end{aligned}$$
(4.22)
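The final step in (4.22) is just a geometric series: since \(A(x,2^{-j}) \subset B(x,2^{-j})\), each bracketed ratio is at most 1, so

$$\begin{aligned} \sum _{2^{-j} \le 50C_{0}d(s,t)} 2^{-j} \left[ \frac{\mu (A(s,2^{-j}))}{\mu (B(s,2^{-j}))} + \frac{\mu (A(t,2^{-j}))}{\mu (B(t,2^{-j}))} \right] \le 2 \sum _{2^{-j} \le 50C_{0}d(s,t)} 2^{-j} \le 200\, C_{0}\, d(s,t) \sim {\text {diam}}(B_{0}), \end{aligned}$$

the dyadic sum being dominated by twice its largest term.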

Recalling also (4.17), we are now in a position to use the compactness theorem for normal currents, Theorem 2.7: there exists a normal current T, supported on \({\bar{B}}_{0}\), such that

$$\begin{aligned} \lim _{m \rightarrow \infty } T_{n_{m}}(g,\pi ) = T(g,\pi ), \qquad (g,\pi ) \in \text {Lip}_{b}(X) \times \text {Lip}(X), \end{aligned}$$
(4.23)

for some subsequence \((n_{m})_{m \in {\mathbb {N}}}\). From the proof of the compactness theorem [1, Theorem 5.2] in the paper of Ambrosio and Kirchheim, in particular “Step 2”, one can read that the measure \(\Vert T\Vert \) is bounded from above by a certain measure which is obtained as a weak limit of the measures \(\Vert T_{n_{m}}\Vert \). As a consequence, we infer that (4.19) holds for the measure \(\Vert T\Vert \), and for all balls \(B(x,r) \subset X\). This fact, combined with the Lebesgue differentiation theorem, shows that \(\Vert T\Vert \) is absolutely continuous with respect to the measure \(\mu \), with Radon-Nikodym derivative bounded by

$$\begin{aligned} \frac{d\Vert T\Vert }{d\mu }(x) \lesssim \frac{1}{\Theta (s,d(s,x))} + \frac{1}{\Theta (t,d(t,x))}. \end{aligned}$$

So, to conclude the proof of Theorem 4.1, it remains to show that \(\partial T = C\delta _{t} - C\delta _{s}\) for some constant \(C \sim 1\). From (4.23) we infer that

$$\begin{aligned} \partial T(g) = \lim _{m \rightarrow \infty } \partial T_{n_{m}}(g) = \lim _{m \rightarrow \infty } (\Vert f_{n_{m}}\Vert g(t) - \Vert f_{n_{m}}\Vert g(s)), \qquad g \in \text {Lip}_{b}(X). \end{aligned}$$

Apply this to a bump function g satisfying \(g(t) = 1\) and \(g(s) = 0\) to find that the numbers \(\Vert f_{n_{m}}\Vert \sim 1\) converge to a limit \(C = \partial T(g)\), which evidently also satisfies \(C \sim 1\). Thus \(\partial T(g) = Cg(t) - Cg(s)\) for all \(g \in \text {Lip}_{b}(X)\), which by definition means that \(\partial T = C\delta _{t} - C\delta _{s}\). Finally, to be precise, the current mentioned in the definition of generalised pencil of curves can be obtained by dividing T by C. The proof of Theorem 4.1 is complete.

5 GPC and pencils of curves

In this section, we prove Theorem 1.4. As explained in the introduction, the implication

$$\begin{aligned} X \text { admits PCs} \Longrightarrow X \text { satisfies the weak } 1\text {-Poincaré inequality} \end{aligned}$$

is due to Semmes for Q-regular spaces \((X,d,\mu )\), see [15, Theorem B.15], and a proof of the general case can be found in [8, Chapter 4]. So, we concentrate on the reverse implication. By the main result of this paper, Theorem 1.3, it suffices to verify the following:

$$\begin{aligned} X \text { admits GPCs} \Longrightarrow X \text { admits PCs}. \end{aligned}$$

The proof is a straightforward application of a decomposition result [13, Theorem 5.1] of Paolini and Stepanov, which we now recall for the reader’s convenience. Let \(C,T\) be 1-currents on X. We say that C is a subcurrent of T, denoted \(C\le T\), if

$$\begin{aligned} \Vert T-C\Vert (X) + \Vert C\Vert (X) \le \Vert T\Vert (X). \end{aligned}$$

We say that C is a cycle of T, if \(C \le T\) and \(\partial C = 0\). A 1-current T is acyclic, if its only cycle is the trivial 1-current \(C = 0\).

By a curve in X, we mean the image of a Lipschitz map \(\theta :[0,1] \rightarrow X\). The space of curves in X is denoted by \(\Gamma \). We slightly abuse notation by using the letter \(\theta \) to denote both Lipschitz mappings \([0,1] \rightarrow X\), and elements in \(\Gamma \). There is a natural metric \(d_{\Gamma }\) on \(\Gamma \), defined in [13, (2.1)], and Borel sets in \(\Gamma \) are defined using the topology induced by \(d_{\Gamma }\). Following the terminology above [13, (4.1)], a finite positive Borel measure on \(\Gamma \) is called a transport. If \(\theta \in \Gamma \) has an injective parametrisation, then \(\theta \) is called an arc.

By [13, Theorem 5.1], a normal acyclic 1-current T on X is decomposable in arcs. Combining [13, Definition 4.4] and [13, Lemma 4.17], this means that there exists a transport \(\eta \) on \(\Gamma \) such that \(\eta \) almost every curve in \(\Gamma \) is an arc, and moreover the following equalities hold:

$$\begin{aligned} \Vert T\Vert = \int _{\Gamma } {\mathcal {H}}^{1}\lfloor _{\theta } \, d\eta (\theta ), \end{aligned}$$
(5.1)

and

$$\begin{aligned} \eta (1) = (\partial T)^{+} \quad \text {and} \quad \eta (0) = (\partial T)^{-}. \end{aligned}$$
(5.2)

The measures \((\partial T)^{+}\) and \((\partial T)^{-}\) appearing in (5.2) are the positive and negative parts of the measure \(\partial T\), and \(\eta (i)\) is the measure defined by

$$\begin{aligned} \eta (i)(A) = \eta \{\theta \in \Gamma : \theta (i) \in A\}, \qquad i \in \{0,1\}. \end{aligned}$$

If the reader checks carefully [13, Lemma 4.17], then she will find the measure “\(\mu _{[[\theta ]]}\)” (in our notation \(\Vert \theta _{\sharp }\llbracket 0,1 \rrbracket \Vert \)) in place of “\({\mathcal {H}}^{1}\lfloor _{\theta }\)” in (5.1), but these measures coincide if \(\theta \) is an arc. Indeed, in that case [14, Lemma 2.2] shows that \(\Vert \theta _{\sharp }\llbracket 0,1 \rrbracket \Vert = \theta _{\sharp } (|{\dot{\theta }}|{\mathcal {L}}^1)\), where \(|{\dot{\theta }}|\) denotes the metric derivative. Then it can be seen for instance by [2, Theorem 4.1.6] that \(\theta _{\sharp } (|{\dot{\theta }}|{\mathcal {L}}^1)\) agrees with \({\mathcal {H}}^1\lfloor _{\theta }\).

Now, we apply the decomposition result to our concrete situation. Assume that \(s,t \in X\), and T is a GPC joining s to t, as in Definition 1.2. There is no guarantee that T is acyclic, but [13, Proposition 3.8] comes to our rescue: it gives a cycle \(C \le T\) such that \(T' = T - C\) is acyclic. Clearly

$$\begin{aligned} \partial T' = \partial T - \partial C = \partial T = \delta _{t} - \delta _{s}. \end{aligned}$$

Moreover, \(\Vert T'\Vert + \Vert C\Vert = \Vert T\Vert \) by [13, (3.1)], so in particular \(\Vert T'\Vert \le \Vert T\Vert \). It follows that \(T'\) is an acyclic GPC joining s to t. With this argument in mind, we may assume that T was acyclic to begin with.

Hence, a decomposition as in (5.1)–(5.2) exists. In particular, (5.2) implies that

$$\begin{aligned} \eta \{\theta \in \Gamma : \theta (1) = t\} = \eta (1)(\{t\}) = (\partial T)^{+}(\{t\}) = \delta _{t}(\{t\}) = 1, \end{aligned}$$

and similarly \(\eta \{\theta \in \Gamma : \theta (0) = s\} = 1\). So, there exists a set \(\Gamma _{s,t} \subset \Gamma \) of arcs of full \(\eta \) measure such that \(\theta (0) = s\) and \(\theta (1) = t\) for all \(\theta \in \Gamma _{s,t}\). Moreover, if \(g :X \rightarrow [0,\infty ]\) is a Borel function, then (5.1) shows that

$$\begin{aligned} \int _{\Gamma _{s,t}} \int _{\theta } g \, d{\mathcal {H}}^{1} \, d\eta (\theta ) = \int g \, d\Vert T\Vert \lesssim \int _{B(s,C_0 d(s,t))} \left( \frac{g(y)}{\Theta (s,d(s,y))} + \frac{g(y)}{\Theta (t,d(t,y))} \right) \, d\mu (y), \end{aligned}$$

taking into account that \({\text {spt}}\Vert T\Vert \subset B(s,C_{0}d(s,t))\) by (P2). The only point we are still missing from Definition 1.1 is the claim that the arcs in \(\Gamma _{s,t}\) are quasiconvex. This need not be true to begin with, but we note that

$$\begin{aligned} \Vert T\Vert (X) \lesssim d(s,t). \end{aligned}$$

This can be easily seen from (P2)–(P3). Alternatively, our specific construction for T gives this estimate, see (4.22). So, it follows from (5.1) and a Chebyshev-type argument that at least half of the arcs \(\theta \in \Gamma \), with respect to the measure \(\eta \), satisfy the quasiconvexity requirement \({\mathcal {H}}^{1}(\theta ) \lesssim d(s,t)\). The proof is now completed by restricting \(\Gamma _{s,t}\) to this “good half” of the arcs, and renormalising \(\eta \) to a probability measure.
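The “good half” assertion can be quantified as follows. Writing \(C'\) for the implicit constant, so that \(\Vert T\Vert (X) \le C'd(s,t)\), and noting that \(\eta (\Gamma ) = 1\) (by (5.2), since \((\partial T)^{+} = \delta _{t}\) has total mass 1), the identity (5.1) and Chebyshev’s inequality give

$$\begin{aligned} \eta \{\theta \in \Gamma : {\mathcal {H}}^{1}(\theta ) > 2C'd(s,t)\} \le \frac{1}{2C'd(s,t)} \int _{\Gamma } {\mathcal {H}}^{1}(\theta ) \, d\eta (\theta ) = \frac{\Vert T\Vert (X)}{2C'd(s,t)} \le \frac{1}{2}, \end{aligned}$$

so the arcs with \({\mathcal {H}}^{1}(\theta ) \le 2C'd(s,t)\) have \(\eta \) measure at least 1/2.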