## Introduction

A metric space $$(\mathrm{Y},\textsf {d} )$$ is said to be a $$\mathrm{CAT}(\kappa )$$ space if, roughly speaking, it is geodesic and geodesic triangles are ‘thinner’ than triangles in the model space $${\mathbb {M}}_\kappa$$ of constant sectional curvature $$=\kappa$$. Typical examples of $$\mathrm{CAT}(\kappa )$$ spaces are simply connected Riemannian manifolds with sectional curvature $$\le \kappa$$ and their Gromov–Hausdorff limits. Despite the absence of any a priori smooth structure, $$\mathrm{CAT}(\kappa )$$ spaces are quite regular and carry a solid calculus resembling that on manifolds with curvature $$\le \kappa$$. We refer to [1, 8, 10, 11, 25] for overviews on the topic and a more detailed bibliography.

In the particular case $$\kappa =0$$ the $$\mathrm{CAT}(0)$$ condition reads as follows: for any points $$x_0,x_1\in \mathrm{Y}$$ and any geodesic $$\gamma :[0,1]\rightarrow \mathrm{Y}$$ connecting them it holds that

\begin{aligned} \textsf {d} ^2(\gamma _t,y)\le (1-t)\textsf {d} ^2(x_0,y)+t\textsf {d} ^2(x_1,y)-t(1-t)\textsf {d} ^2(x_0,x_1) \quad \forall y\in \mathrm{Y},\,t\in [0,1]. \end{aligned}
(1.1)

This can be regarded as a parallelogram inequality and from this point of view it is perhaps not surprising that several aspects of $$\mathrm{CAT}(0)$$ spaces strongly resemble properties of Hilbert spaces; this perspective is emphasised e.g. in . For instance, from (1.1) it directly follows that if a normed vector space is a $$\mathrm{CAT}(0)$$ space, then the norm comes from a scalar product. Equivalently,

\begin{aligned} \begin{array}{l} \text {if a normed vector space isometrically embeds in a }\mathrm{CAT}(0)\text { space,}\\ \text {then the norm comes from a scalar product.} \end{array} \end{aligned}
(1.2)
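To see why, one can sketch the standard argument. In a normed vector space the segment $$t\mapsto (1-t)x_0+tx_1$$ is a geodesic and $$\tfrac{x_0+x_1}{2}$$ is a metric midpoint of $$x_0,x_1$$; since midpoints in $$\mathrm{CAT}(0)$$ spaces are unique, (1.1) with $$y=0$$ and $$t=1/2$$ applies to it and gives

\begin{aligned} \big \Vert \tfrac{x_0+x_1}{2}\big \Vert ^2\le \tfrac{1}{2}\Vert x_0\Vert ^2+\tfrac{1}{2}\Vert x_1\Vert ^2-\tfrac{1}{4}\Vert x_0-x_1\Vert ^2,\quad \text {i.e.}\quad \Vert x_0+x_1\Vert ^2+\Vert x_0-x_1\Vert ^2\le 2\Vert x_0\Vert ^2+2\Vert x_1\Vert ^2. \end{aligned}

Applying the latter inequality to the pair $$(x_0+x_1,x_0-x_1)$$ produces the reverse inequality, so the parallelogram identity holds and the Jordan–von Neumann theorem yields the scalar product.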

Given that $$\mathrm{CAT}(0)$$ spaces naturally arise as tangent cones to generic $$\mathrm{CAT}(\kappa )$$ spaces, these analogies with Hilbert structures appear also at small scales on $$\mathrm{CAT}(\kappa )$$ spaces.

A metric measure space $$(\mathrm{Y},\textsf {d} ,\mu )$$ is called infinitesimally Hilbertian provided the Sobolev space $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ is Hilbert (see  and then also [5, 40] for the definition of Sobolev spaces in this context). The concept of infinitesimal Hilbertianity, introduced in , aims at detecting Hilbert structures at small scales in the non-smooth setting. The motivating example in the smooth category is the following: if $$\mathrm{Y}$$ is a smooth Finsler manifold and $$\mu$$ is a smooth measure on it (i.e. with smooth density when seen in charts), then the $$W^{1,2}$$-norm can be written as

\begin{aligned} \Vert f\Vert _{W^{1,2}}^2=\int |f|^2(x)+\Vert \mathrm{d}f(x)\Vert _x^2\,\mathrm{d}\mu (x). \end{aligned}
(1.3)

Since $$f\mapsto \int |f|^2\,\mathrm{d}\mu$$ always satisfies the parallelogram identity, we see that $$f\mapsto \Vert f\Vert _{W^{1,2}}^2$$ has the same property if and only if $$f\mapsto \int \Vert \mathrm{d}f(x)\Vert _x^2\,\mathrm{d}\mu (x)$$ satisfies the parallelogram identity. With a little bit of work it is possible to check that this is the case if and only if $$\Vert \cdot \Vert _x^2$$ satisfies the parallelogram identity for every x, i.e. if and only if $$\mathrm{Y}$$ is in fact a Riemannian manifold.
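One half of this check is immediate: setting $$E(f):=\int \Vert \mathrm{d}f(x)\Vert _x^2\,\mathrm{d}\mu (x)$$, the deviation from the parallelogram identity localizes as

\begin{aligned} 2E(f)+2E(g)-E(f+g)-E(f-g)=\int \Big (2\Vert \mathrm{d}f\Vert _x^2+2\Vert \mathrm{d}g\Vert _x^2-\Vert \mathrm{d}(f+g)\Vert _x^2-\Vert \mathrm{d}(f-g)\Vert _x^2\Big )\,\mathrm{d}\mu (x), \end{aligned}

and the integrand vanishes identically if every $$\Vert \cdot \Vert _x$$ satisfies the parallelogram identity. The ‘little bit of work’ lies in the converse, where one must produce functions whose differentials realize prescribed covectors at a given point.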

In the smooth category one could run the above consideration also with smooth functions, rather than with Sobolev ones, but this is obviously not possible on a metric measure space. In this direction let us emphasise that in the non-smooth environment it is crucial to work with Sobolev functions rather than, say, with Lipschitz ones. To see why, recall that the local Lipschitz constant $$\mathrm {lip}f:\mathrm{Y}\rightarrow [0,\infty ]$$ of a function $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$ is defined as

\begin{aligned} \mathrm {lip}f(x):=\varlimsup _{y\rightarrow x}\frac{|f(y)-f(x)|}{\textsf {d} (x,y)}\quad \text {if }x\text { is not isolated, 0 otherwise} \end{aligned}

and consider the following example. Let $$(\mathrm{Y},\textsf {d} )$$ be the Euclidean space $$({\mathbb {R}}^d,\textsf {d} _\mathrm{Eucl})$$ and $$\mu$$ be a positive Radon measure. Then:

(a)

The map

\begin{aligned} \mathrm {LIP}_c({\mathbb {R}}^d)\ni f\;\mapsto \;\int (|f|^2+\mathrm {lip}^2 f)\,\mathrm{d}\mu \end{aligned}

is not a quadratic form, in general.

(b)

The map

\begin{aligned} W^{1,2}({\mathbb {R}}^d,\textsf {d} _\mathrm{Eucl},\mu )\ni f\;\mapsto \; \Vert f\Vert ^2_{W^{1,2}} \end{aligned}

is a quadratic form, i.e. $$({\mathbb {R}}^d,\textsf {d} _\mathrm{Eucl},\mu )$$ is infinitesimally Hilbertian.

To see why (a) holds simply consider $$\mu$$ to be a Dirac delta at a point o and $$f,g\in \mathrm {LIP}_c({\mathbb {R}}^d)$$ generic functions not differentiable at o: for these the parallelogram identity for $$f\mapsto \mathrm {lip}^2 f(o)$$ typically fails. Intuitively, this is due to the fact that, if f and g are not differentiable at o, they are not (close to being) linear in the vicinity of o and thus their local Lipschitz constants fail to capture the Hilbert structure of the cotangent space $$\mathrm{T}^*_o{\mathbb {R}}^d$$ at o.
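To make this concrete, here is a hedged numerical sketch (an illustration, not taken from the paper) on the real line with $$\mu =\delta _0$$, $$f(x)=|x|$$ and $$g(x)=x$$ (truncate them away from the origin if compact support is required): the parallelogram identity for the local Lipschitz constants at 0 fails.

```python
def lip_at_zero(F, n_steps=1000, radius=1e-3):
    """Approximate lip F(0) = limsup_{y -> 0} |F(y) - F(0)| / |y| by sampling."""
    best = 0.0
    for i in range(1, n_steps + 1):
        h = radius * i / n_steps
        for y in (h, -h):
            best = max(best, abs(F(y) - F(0)) / abs(y))
    return best

f = abs                     # f(x) = |x|, not differentiable at 0
g = lambda x: x             # g(x) = x, linear, hence differentiable

lf, lg = lip_at_zero(f), lip_at_zero(g)         # both equal 1
lpg = lip_at_zero(lambda x: f(x) + g(x))        # equals 2 (slope 2 for x > 0)
lmg = lip_at_zero(lambda x: f(x) - g(x))        # equals 2 (slope -2 for x < 0)

# The parallelogram identity would force lhs == rhs; here 8 != 4.
lhs = lpg**2 + lmg**2
rhs = 2 * (lf**2 + lg**2)
assert lhs == 8.0 and rhs == 4.0
```

The quotients are constant in y for these functions, so the sampled supremum is exact.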

The statement in (b) is non-trivial and is one of the results proved in . The crucial aspect of the proof is the possibility of approximating Sobolev functions with $$C^1$$ functions: these are by nature differentiable everywhere, and thus also $$\mu$$—a.e., and hence are suitable to identify the Hilbertian structure of the cotangent spaces.

Hence the idea behind the notion of infinitesimal Hilbertianity is to exploit the fact that ‘by nature’ Sobolev functions are a.e. differentiable in some sense, regardless of the regularity of the metric and of the measure in consideration (for instance, if $$\mu$$ is a Dirac delta as above, it turns out that Sobolev functions have 0 differential, so that the claim (b) is trivially true in this case). This makes them suitable for detecting Hilbert structures at an infinitesimal scale. Let us emphasise that even though this is an analytic notion, it is strictly related to—and its introduction has been motivated by—the study of geometric properties of metric measure spaces, in particular those satisfying a curvature-dimension bound in the sense of Lott–Sturm–Villani. An example of this link is the validity of the non-smooth splitting theorem [18, 20], which states that under the appropriate geometric rigidity given by an LSV condition the weak and ‘differential’ notion of infinitesimal Hilbertianity implies the validity of a kind of Pythagoras’ theorem for the ‘integrated’ object $$\textsf {d}$$.

These considerations about Sobolev functions, together with the fact that tangents of $$\mathrm{CAT}(\kappa )$$-spaces are $$\mathrm{CAT}(0)$$-spaces and thus exhibit behaviour akin to Hilbert spaces, might lead one to suspect that a $$\mathrm{CAT}(\kappa )$$-space equipped with any measure is infinitesimally Hilbertian.

This is indeed the case and is the main result of this manuscript:

### Theorem 1.1

(Universal infinitesimal Hilbertianity of local $$\mathrm{CAT}(\kappa )$$ spaces) Let $$\kappa \in {\mathbb {R}}$$, $$(\mathrm{Y},\textsf {d} )$$ be a local $$\mathrm{CAT}(\kappa )$$-space and $$\mu$$ a non-negative and non-zero Radon measure on $$\mathrm{Y}$$ giving finite mass to bounded sets.

Then $$(\mathrm{Y},\textsf {d} ,\mu )$$ is infinitesimally Hilbertian, i.e. the Sobolev space $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ is a Hilbert space.

(i)

Sobolev functions on metric measure spaces are typically studied either on generic mm-spaces, mostly for foundational purposes, or on spaces which are doubling, support a Poincaré inequality, or have Ricci curvature bounded from below. In these contexts, Sobolev spaces constitute a key ingredient for the development of a non-smooth calculus (see [5, 9, 13, 28, 29] and the references therein). All these conditions are in strong contrast to the upper sectional curvature bound encoded by the $$\mathrm{CAT}(\kappa )$$ notion, as they all more-or-less point to a lower (Ricci) curvature bound.

In this direction, it is worth mentioning that $$\mathrm{CAT}(\kappa )$$ spaces do not carry any natural reference measure (unlike, for instance, finite-dimensional Alexandrov spaces with curvature bounded from below) and perhaps for this reason they have been investigated mostly as metric spaces, rather than as metric measure spaces.

To the best of our knowledge, this manuscript contains the first result about the structure of Sobolev functions on $$\mathrm{CAT}(\kappa )$$ spaces.

(ii)

A particular case of Theorem 1.1 has been obtained in the recent paper  by Kapovitch and Ketterer. There the authors consider a metric measure space $$(\mathrm{X},\textsf {d} ,{\mathfrak {m}})$$ which is a $${\textsf {CD}}(K,N)$$ space in the sense of Lott–Sturm–Villani ([32, 42, 43]) when seen as a metric measure space and a $$\mathrm{CAT}(\kappa )$$ space when seen as a metric space. Among other things, they prove that $$(\mathrm{X},\textsf {d} ,{\mathfrak {m}})$$ is infinitesimally Hilbertian, thus giving another instance of the fact that a $$\mathrm{CAT}(\kappa )$$ condition forces $$W^{1,2}$$ to be Hilbert. Their proof is based on the strong rigidity which comes from having both a ‘lower Ricci’ and an ‘upper sectional’ curvature bound (in fact the study of such rigidity, and of the regularity it enforces, is their main goal) and cannot be adapted to our case.

(iii)

We have mentioned that, in , to prove the result stated in (b) above the use of $$C^1$$ functions is crucial. Something similar happens here, where we make extensive use of the fact that on $$\mathrm{CAT}(\kappa )$$ spaces there are many semiconvex Lipschitz functions (e.g. distance functions) and they have a well-defined notion of differential at every point; see Sect. 2.3.

(iv)

This manuscript is part of a broader program aiming at stating and proving the Bochner–Eells–Sampson inequality

\begin{aligned} \Delta \frac{|\mathrm{d}u|^2_{\textsf {HS}}}{2}\ge {\langle }\mathrm{d}u,\Delta \mathrm{d}u{\rangle }_{\textsf {HS}}+K|\mathrm{d}u|_{\textsf {HS}}^2 \end{aligned}
(1.4)

for maps from an $${\textsf {RCD}}(K,N)$$ space $$(\mathrm{X},\textsf {d} _\mathrm{X},{\mathfrak {m}}_\mathrm{X})$$ to a $$\mathrm{CAT}(0)$$ space $$(\mathrm{Y},\textsf {d} )$$. Notice that inequality (1.4) would immediately imply Lipschitz regularity of harmonic maps, by well-known elliptic regularity theory in the non-smooth setting.

The role of this manuscript, to be used in conjunction with , is to ensure that $$L^2(\mathrm{T}^*\mathrm{Y};u_*(|\mathrm{d}u|^2{\mathfrak {m}}_\mathrm{X}))$$ is a Hilbert module, so that the same holds for the tensor product $$L^2(\mathrm{T}^*\mathrm{X};{\mathfrak {m}}_\mathrm{X})\otimes \big (u^*L^2(\mathrm{T}^*\mathrm{Y};u_*(|\mathrm{d}u|^2{\mathfrak {m}}_\mathrm{X}))\big )$$ and thus the ‘pointwise Hilbert–Schmidt norm’ appearing in (1.4) makes sense. We refer to [23, 24] for more details on this.

(v)

$$\mathrm{CAT}(\kappa )$$ spaces are not necessarily separable (for instance, the $$\mathrm{CAT}(0)$$ space obtained by glueing uncountably many copies of [0, 1] at 0 is not separable), as opposed to finite-dimensional spaces with curvature bounded from below. For this reason separability is not an assumption in Theorem 1.1. Still, given that Sobolev spaces on metric measure spaces are typically studied in a separable environment, we first prove our main result for separable spaces and postpone the technical details needed to handle the general case until the final section.

Let us briefly describe the proof of Theorem 1.1. The basic intuition is given by (1.2) and the fact that the tangent cone of a local $$\mathrm{CAT}(\kappa )$$ space is a $$\mathrm{CAT}(0)$$ space. More precisely, we consider:

(1)

The space $$\mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ of derivations (with divergence), as introduced by the first author in [14, 15] (see Sect. 5). These are in duality with Sobolev functions.

(2)

The collection $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ of ‘$$L^2(\mu )$$ Borel sections of the bundle $$\mathrm{T}_G\mathrm{Y}$$ on $$\mathrm{Y}$$ whose fibre at x is the tangent cone $$\mathrm{T}_x\mathrm{Y}$$’ (see Sect. 3).

In Theorem 6.2 and Corollary 6.4, we construct an isometric embedding

\begin{aligned} {\mathscr {F}}: \mathrm{Der}^{2,2}(\mathrm{Y};\mu )\hookrightarrow L^2(\mathrm{T}_G\mathrm{Y};\mu ) \end{aligned}

which respects distances fibrewise. From this fact, the arguments behind (1.2) and the aforementioned duality between derivations and Sobolev functions readily imply our main result, Theorem 1.1.

To construct the embedding $${\mathscr {F}}$$, recall that a derivation $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ gives rise to a normal 1-current $$T_b$$ in the sense of Ambrosio–Kirchheim (Lemma 6.1). Using Paolini–Stepanov’s version [35, 36] of Smirnov’s superposition principle (see Theorem 4.9) we express the 1-current $$T_b$$ as a superposition $$\int [\![\gamma ]\!]\,\mathrm{d}\pi _{T_b}(\gamma )$$, where $$\pi _{T_b}$$ is a finite measure on the space of absolutely continuous curves and $$[\![\gamma ]\!]$$ is the current induced by $$\gamma$$.

Inspired by , we see that if $$\gamma$$ is an absolutely continuous curve then the right and left derivatives $${\dot{\gamma }}_t^+$$ and $${\dot{\gamma }}^-_t$$ exist as elements of $$\mathrm{T}_{\gamma _t}\mathrm{Y}$$, and satisfy $${\dot{\gamma }}_t^+\oplus {\dot{\gamma }}_t^-=0$$, for almost every $$t\in [0,1]$$ (Proposition 2.20, Remark 2.21 and Lemma 2.22).

Given the measures , obtained by disintegrating with respect to the evaluation map $$(\gamma ,t)\mapsto \gamma _t$$, we consider their push-forward by the ‘right-derivative’ map (cf. Proposition 3.7), thus obtaining measures $${{\mathfrak {n}}}_x$$ supported in $$\mathrm{T}_x\mathrm{Y}$$.

The Borel section $${\mathscr {F}}(b)$$ is defined to be, at almost every $$x\in \mathrm{Y}$$, the barycenter of $${{\mathfrak {n}}}_x$$. The barycenter lies in the tangent cone $$\mathrm{T}_x\mathrm{Y}$$. By a rigidity property of barycenters (Lemma 2.27), and convexity properties of tangent cones, the measure $${\mathfrak {n}}_x$$ is concentrated on a half-line for almost every $$x\in \mathrm{Y}$$.
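For intuition on this last step, recall that the barycenter of a finite measure $${\mathfrak {n}}$$ on a $$\mathrm{CAT}(0)$$ space minimizes $$y\mapsto \int \textsf {d} ^2(y,z)\,\mathrm{d}{\mathfrak {n}}(z)$$; in a Hilbert space it is simply the mean. A hedged toy illustration (Euclidean plane as model $$\mathrm{CAT}(0)$$ space; not from the paper):

```python
import random

random.seed(0)
# A toy "measure" n: a finite point cloud in the Euclidean plane (a CAT(0) space).
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]

def energy(y):
    """The barycenter functional y -> sum of squared distances to the cloud."""
    return sum((y[0] - px)**2 + (y[1] - py)**2 for px, py in pts)

# In a Hilbert space the barycenter is the mean of the measure ...
bary = (sum(px for px, _ in pts) / len(pts), sum(py for _, py in pts) / len(pts))

# ... and indeed no perturbation of it decreases the energy.
for _ in range(100):
    y = (bary[0] + random.uniform(-1, 1), bary[1] + random.uniform(-1, 1))
    assert energy(y) >= energy(bary)
```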

Theorem 1.2 below is an improved version of the embedding result (Theorem 6.2 and Corollary 6.4), and follows from it by Theorem 1.1 and Proposition 6.5. It states that the tangent module $$L^2(\mathrm{T}\mathrm{Y};\mu )$$, introduced by the second named author in  (see also ), admits an isometric embedding into $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ that is compatible with the fibrewise $$\mathrm{CAT}(0)$$-structure on the target side. We refer to [19, 21] for the theory of tangent modules, and to Sect. 2.2 for the notation (below Theorem 2.10).

### Theorem 1.2

Let $$\mathrm{Y}$$ be a complete and separable locally $$\mathrm{CAT}(\kappa )$$-space ($$\kappa \in {\mathbb {R}}$$) and $$\mu$$ a Borel measure on $$\mathrm{Y}$$ that is finite on bounded sets. Then there is a map $${\mathscr {F}}:L^2(\mathrm{T}\mathrm{Y};\mu )\hookrightarrow L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ such that for $$X,Y\in L^2(\mathrm{T}\mathrm{Y};\mu )$$

(1)

$${\mathscr {F}}(X+Y)={\mathscr {F}}(X)\oplus {\mathscr {F}}(Y)$$,

(2)

$$\textsf {d} _\cdot ({\mathscr {F}}(X),{\mathscr {F}}(Y))=|X-Y|$$, and

(3)

$$2(|{\mathscr {F}}(X)|_\cdot ^2+|{\mathscr {F}}(Y)|_\cdot ^2)=\textsf {d} _\cdot ^2({\mathscr {F}}(X),{\mathscr {F}}(Y))+|{\mathscr {F}}(X)\oplus {\mathscr {F}}(Y)|_\cdot ^2$$

pointwise $$\mu$$-almost everywhere.

Both main results, along with Proposition 6.5, are proven at the end of Sect. 6.

## $$\mathrm{CAT}(\kappa )$$-Spaces and Basic Calculus on Them

### Definition of $$\mathrm{CAT}(\kappa )$$-Spaces and Basic Properties

In this paper geodesics will always be assumed to be minimizing and with constant speed. If, for two given points x, y in a metric space $$(\mathrm{Y},\textsf {d} )$$, there is only one (up to reparametrization) geodesic connecting them, the one defined on [0, 1] will be denoted by $$\textsf {G} _x^y$$. Given a point $$x\in \mathrm{Y}$$, we denote by $$\mathrm {dist}_x:\mathrm{Y}\rightarrow {\mathbb {R}}$$ the function $$y\mapsto \textsf {d} (x,y)$$.

For $$\kappa \in {\mathbb {R}}$$ the model space $${\mathbb {M}}_\kappa$$ is the connected, simply connected, complete 2-dimensional manifold with constant curvature $$\kappa$$, and $$\textsf {d} _\kappa$$ is the distance induced by the metric tensor. Thus $$({\mathbb {M}}_\kappa ,\textsf {d} _\kappa )$$ is (a) the hyperbolic space $${\mathbb {H}}^2_\kappa$$ of constant sectional curvature $$\kappa$$, if $$\kappa <0$$, (b) $${\mathbb {R}}^2$$ with the usual Euclidean metric, if $$\kappa =0$$, and (c) the sphere $$S^2_\kappa$$ of constant sectional curvature $$\kappa$$, if $$\kappa >0$$.

We set $$D_\kappa :=\mathrm {diam}({\mathbb {M}}_\kappa )$$, i.e.

\begin{aligned} D_\kappa =\left\{ \begin{array}{ll} \infty &{}\quad \text { if }\kappa \le 0,\\ \frac{\pi }{\sqrt{\kappa }}&{}\quad \text { if }\kappa >0. \end{array} \right. \end{aligned}

We refer to [10, Chap. I.2] for a detailed study of the model spaces $${\mathbb {M}}_\kappa$$.

$$\mathrm{CAT}(\kappa )$$ spaces are geodesic spaces where geodesic triangles are ‘thinner’ than in $${\mathbb {M}}_\kappa$$: they offer a metric counterpart to the notion of ‘having sectional curvature bounded from above by $$\kappa$$’.

To define them we start by recalling that if $$a,b,c\in \mathrm{Y}$$ is a triple of points satisfying $$\textsf {d} (a,b)+\textsf {d} (b,c)+\textsf {d} (c,a)<2D_\kappa$$, then there are points, called comparison points, $${\bar{a}},{\bar{b}},{\bar{c}}\in {\mathbb {M}}_\kappa$$ such that

\begin{aligned} \textsf {d} _\kappa ({\bar{a}},{\bar{b}})=\textsf {d} (a,b),\quad \textsf {d} _\kappa ({\bar{b}}, {\bar{c}})=\textsf {d} (b,c),\quad \textsf {d} _\kappa ({\bar{c}},{\bar{a}})=\textsf {d} (c,a). \end{aligned}

A point $$d\in \mathrm{Y}$$ is said to be intermediate between $$b,c\in \mathrm{Y}$$ provided $$\textsf {d} (b,d)+\textsf {d} (d,c)=\textsf {d} (b,c)$$ (if $$\mathrm{Y}$$ is geodesic, as we shall always assume, this means that d lies on a geodesic joining b and c). A comparison point of d is a point $${\bar{d}}\in {\mathbb {M}}_\kappa$$, such that

\begin{aligned} \textsf {d} _\kappa ({\bar{d}},{\bar{b}})=\textsf {d} (d,b),\quad \textsf {d} _\kappa ({\bar{d}}, {\bar{c}})=\textsf {d} (d,c). \end{aligned}

### Definition 2.1

($$\mathrm{CAT}(\kappa )$$ spaces) A metric space $$(\mathrm{Y},\textsf {d} )$$ is called a $$\mathrm{CAT}(\kappa )$$-space if it is geodesic and satisfies the following triangle comparison principle: for any $$a,b,c\in \mathrm{Y}$$, satisfying $$\textsf {d} (a,b)+\textsf {d} (b,c)+\textsf {d} (c,a)<2D_\kappa$$, and any intermediate point d between b, c, there are comparison points $${\bar{a}},{\bar{b}},{\bar{c}},{\bar{d}}\in {\mathbb {M}}_\kappa$$ as above such that

\begin{aligned} \textsf {d} (a,d)\le \textsf {d} _\kappa ({\bar{a}},{\bar{d}}). \end{aligned}
(2.1)

A metric space $$(\mathrm{Y},\textsf {d} )$$ is said to be locally $$\mathrm{CAT}(\kappa )$$ (or of curvature $$\le \kappa$$) if every point in $$\mathrm{Y}$$ has a neighbourhood which is a $$\mathrm{CAT}(\kappa )$$-space with the inherited metric.

It is worth noting that balls of radius $$<D_\kappa /2$$ in the model space $${\mathbb {M}}_\kappa$$ are convex, cf. Definition 2.3. Hence the comparison property (2.1) grants that the same is true on $$\mathrm{CAT}(\kappa )$$ spaces (see [10, Proposition II.1.4.(3)] for the rigorous proof of this fact). It is then easy to see that, for the same reasons, $$(\mathrm{Y},\textsf {d} )$$ is locally $$\mathrm{CAT}(\kappa )$$ provided every point has a neighbourhood U where the comparison inequality (2.1) holds for every triple of points $$a,b,c\in U$$, where the geodesics connecting the points (and thus the intermediate points) are allowed to exit the neighbourhood U.

Let us fix the following notation: if $$(\mathrm{Y},\textsf {d} )$$ is a local $$\mathrm{CAT}(\kappa )$$ space, for every $$x\in \mathrm{Y}$$ we set

\begin{aligned} \textsf {r} _x:=\sup \big \{r\le D_\kappa /2\ :\ {\bar{B}}_r(x)\ \text {is a } \mathrm{CAT}(\kappa )\text { space}\big \}. \end{aligned}

Notice that in particular $$B_{\textsf {r} _x}(x)$$ is a $$\mathrm{CAT}(\kappa )$$ space. The definition trivially grants that $$\textsf {r} _y\ge \textsf {r} _x-\textsf {d} (x,y)$$; by symmetry $$|\textsf {r} _x-\textsf {r} _y|\le \textsf {d} (x,y)$$, so in particular $$x\mapsto \textsf {r} _x$$ is continuous.

We mention in passing that restricting attention to complete $$\mathrm{CAT}(\kappa )$$-spaces presents no loss of generality, since the completion of a $$\mathrm{CAT}(\kappa )$$-space is a $$\mathrm{CAT}(\kappa )$$-space; see [10, Corollary 3.11].

In a $$\mathrm{CAT}(\kappa )$$ space, points at distance $$<D_\kappa$$ are connected by a unique (up to parametrization) geodesic and these geodesics vary continuously with the endpoints. The following lemma is a quantitative version of this statement, and directly implies the uniqueness and continuous dependence of geodesics between points of distance $$<D_\kappa$$.

### Lemma 2.2

Let $$\kappa \in {\mathbb {R}}$$ and let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(\kappa )$$-space. For every $$\lambda <D_\kappa$$, there are constants $$C=C(\kappa ,\lambda )>0$$ and $$\varepsilon _0=\varepsilon _0(\kappa ,\lambda )>0$$ such that the following holds: if $$x,y\in \mathrm{Y}$$ satisfy $$\textsf {d} (x,y)\le \lambda$$, and m is the midpoint of x, y, we have, for any $$\varepsilon \in (0,\varepsilon _0)$$ and $$m'\in \mathrm{Y}$$, that

\begin{aligned} \textsf {d} (m,m')\le C\varepsilon ,\quad \text { whenever }\quad \textsf {d} ^2(x,m'),\textsf {d} ^2(y,m')\le \tfrac{1}{4}\textsf {d} ^2(x,y)+\varepsilon ^2. \end{aligned}

### Proof

By the definition of $$\mathrm{CAT}(\kappa )$$ space, using the triangle comparison property with the points $$x,y,m'$$, we see that it is sufficient to prove the claim when $$\mathrm{Y}$$ is the model space $${\mathbb {M}}_\kappa$$. Since $$\mathrm{CAT}(\kappa )$$ spaces are $$\mathrm{CAT}(\kappa ')$$ spaces for $$\kappa '\ge \kappa$$ (see [10, Part II, Chap. 1]), we can assume that $$\kappa >0$$. Thus we may assume $$\mathrm{Y}=S_\kappa ^2$$. In this case the conclusion follows by direct computations, one possible line of thought being the following.

Let $$\varepsilon _0$$ be such that

\begin{aligned} \frac{\kappa \varepsilon ^2_0}{\cos (\sqrt{\kappa }\lambda /2)}<1\quad \text { and }\quad \sqrt{\kappa (\varepsilon _0^2+(\lambda /2)^2)}<\pi /2. \end{aligned}
(2.2)

Let $$\varepsilon \in (0,\varepsilon _0)$$, and let x, y, m and $$m'$$ be as in the claim.

Set $$r_\varepsilon :=\sqrt{\textsf {d} (x,y)^2/4+\varepsilon ^2}$$ and consider the set

\begin{aligned} S:={\overline{B}}_{S_\kappa ^2}(x,r_\varepsilon )\cap {\overline{B}}_{S_\kappa ^2}(y,r_\varepsilon ) \end{aligned}

(note that $$m'\in S$$). The maximum

\begin{aligned} \max _{s\in S}\textsf {d} (s,m) \end{aligned}

is attained at a point $$s\in \partial S$$ where the geodesic segment [m, s] makes a right angle with the geodesic segment [x, y]. The spherical cosine law, applied to the triangle $$\Delta (x,m,s)$$ (resp. $$\Delta (y,m,s)$$), yields

\begin{aligned} \cos (\sqrt{\kappa }\,\textsf {d} (m,s))\cos \frac{\sqrt{\kappa }\textsf {d} (x,y)}{2}=\cos (\sqrt{\kappa }r_\varepsilon ). \end{aligned}
(2.3)

Denote $$a:=\textsf {d} (x,y)/2$$ and define

\begin{aligned} f(s):=\frac{\cos (\sqrt{\kappa (a^2+s^2)})}{\cos (\sqrt{\kappa }a)},\quad 0\le s\le \varepsilon _0. \end{aligned}

Note that

\begin{aligned} 1-f(\varepsilon )\le \int _0^\varepsilon |f'(s)|\mathrm{d}s \le \int _0^\varepsilon \frac{\kappa s\mathrm{d}s}{\cos (\sqrt{\kappa }a)}\le \frac{\kappa \varepsilon ^2}{\cos (\sqrt{\kappa }a)}. \end{aligned}

From this estimate, (2.3), and the fact that $$a\le \lambda /2$$, we have

\begin{aligned} \cos (\sqrt{\kappa }\textsf {d} (m,s))=f(\varepsilon )\ge 1-\frac{\kappa \varepsilon ^2}{\cos (\sqrt{\kappa }\lambda /2)}. \end{aligned}

This, the elementary estimate $$\displaystyle \arccos (1-t^2)\le 2t\ (0\le t<1)$$ and (2.2) then imply that

\begin{aligned} \sqrt{\kappa }\textsf {d} (m,s)\le \frac{2\sqrt{\kappa }\varepsilon }{\sqrt{\cos (\sqrt{\kappa }\lambda /2)}}. \end{aligned}

This completes the proof. $$\square$$
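The computation in the proof can be sanity-checked numerically. The following hedged sketch (illustrative values $$\kappa =1$$, $$\lambda =1$$, $$\varepsilon =10^{-3}$$ on the unit sphere; not from the paper) verifies the right-angle identity (2.3) and a bound of the form $$\textsf {d} (m,s)\le C\varepsilon$$ with $$C=2/\sqrt{\cos (\lambda /2)}$$:

```python
import math

# Unit sphere (kappa = 1); illustrative values d(x, y) = lam, eps small.
lam, eps = 1.0, 1e-3
a = lam / 2                              # a = d(x, m), m the midpoint of x, y
r_eps = math.sqrt(a**2 + eps**2)         # allowed distance from x (and y) to m'

# Right-angle configuration: s at distance delta from m, with [m, s]
# orthogonal to [x, y].  Spherical Pythagoras, cos d(x, s) = cos a * cos delta,
# is identity (2.3) with d(m, s) = delta; solve for delta with d(x, s) = r_eps.
delta = math.acos(math.cos(r_eps) / math.cos(a))

# Sanity check of the identity ...
assert abs(math.cos(delta) * math.cos(a) - math.cos(r_eps)) < 1e-12

# ... and of a bound of the form d(m, s) <= C * eps, C = 2 / sqrt(cos(lam / 2)).
assert delta <= 2 * eps / math.sqrt(math.cos(lam / 2))
```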

Being geodesic spaces, on $$\mathrm{CAT}(\kappa )$$ spaces it makes sense to speak about convex sets:

### Definition 2.3

(Convex sets and convex hull) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(\kappa )$$ space. Then a set $$C\subset \mathrm{Y}$$ is said to be convex provided for any $$x,y\in C$$ we have that every geodesic connecting them is entirely contained in C. The (closed) convex hull of a set $$C\subset \mathrm{Y}$$ is the smallest (closed) convex set containing C.

One might define a weaker form of convexity by requiring that for every $$x,y\in C$$ there exists a geodesic connecting them which is entirely contained in C. In $$\mathrm{CAT}(\kappa )$$ spaces this distinction is relevant only when $$\textsf {d} (x,y)\ge D_\kappa$$, as otherwise geodesics are unique. For the purposes of the current manuscript the distinction is irrelevant.

The following simple lemma will be useful later on:

### Lemma 2.4

(Separable convex hull) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(\kappa )$$ space and $$C\subset \mathrm{Y}$$ a separable subset which is contained in a closed ball B of radius $$<D_\kappa /2$$.

Then the closed convex hull $${\overline{C}}_\mathrm{conv}$$ of C is separable and contained in B.

### Proof

Define the sequence $$(C_n)$$ of subsets of $$\mathrm{Y}$$ recursively as follows. Set $$C_0:=C$$, then iteratively let $$C_{n+1}$$ be the union of the images of geodesics whose endpoints are in $$C_n$$. It is clear that the convex hull of C must contain $$\cup _nC_n$$ and thus $${\overline{C}}_\mathrm{conv}\supset \overline{\cup _nC_n}$$.

To conclude the proof it is therefore enough to show that $$\overline{\cup _nC_n}$$ is convex and separable. The convexity of $${\cup _nC_n}$$ is a straightforward consequence of the definition using induction. Since B is convex we see that $${\cup _nC_n}\subset B$$. Hence we have that $$\sup _{x,y\in {\cup _nC_n}}\textsf {d} (x,y)<D_\kappa$$. By Lemma 2.2, the geodesic connecting two points $$x,y\in \cup _nC_n$$ depends continuously on x and y. In particular, the separability of $$C_{n+1}$$ follows from that of $$C_{n}$$ (and the uniqueness of geodesics). Thus $$\overline{\cup _nC_n}$$ is separable. By the continuous dependence of the (unique) geodesics and the convexity of $${\cup _nC_n}$$ the convexity of $$\overline{\cup _nC_n}$$ follows. $$\square$$
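The recursion $$C_{n+1}\supset C_n$$ in the proof can be mimicked numerically. In the following hedged sketch (Euclidean plane as a model $$\mathrm{CAT}(0)$$ space, with midpoints standing in for whole geodesics; an illustration, not from the paper), three starting points already generate a set filling out their convex hull:

```python
from itertools import combinations

def midpoint(p, q):
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)

# C_0: the vertices of a triangle; C_{n+1}: C_n together with all midpoints
# of pairs of points of C_n.
C = {(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)}
for _ in range(3):
    C = C | {midpoint(p, q) for p, q in combinations(C, 2)}

# The centroid (1/3, 1/3) lies in the convex hull; after a few steps the
# generated countable set already comes close to it.
cx, cy = 1 / 3, 1 / 3
dist = min(((p[0] - cx)**2 + (p[1] - cy)**2) ** 0.5 for p in C)
assert dist < 0.1
```

Iterating further makes the generated set dense in the closed triangle, matching the separability claim of the lemma.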

We conclude the section with the following result, taken from [10, Part II, Lemma 3.20]:

### Lemma 2.5

Let $$(\mathrm{Y},\textsf {d} )$$ be a $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$. Then there exists a function C defined on a right neighbourhood of 0 such that $$\lim _{r\downarrow 0}C(r)=1$$ and

\begin{aligned} \frac{\textsf {d} \big ((\textsf {G} _x^y)_\varepsilon ,(\textsf {G} _x^z)_\varepsilon \big )}{\varepsilon } \le C(r)\,\textsf {d} (y,z)\quad \text {for every }\varepsilon \in (0,1)\quad \text {and}\quad y,z\in B_r(x) \end{aligned}
(2.4)

for all $$r<D_\kappa$$ sufficiently small.

### Tangent Cone

Here we define the tangent cone at a point on a $$\mathrm{CAT}(\kappa )$$ space and study its first properties. We refer the interested reader to the surveys [10,11,12] and the references therein for more details.

We start by describing a construction of the tangent cone which is valid in every geodesic space. Let $$\mathrm{Y}$$ be a geodesic space and $$x\in \mathrm{Y}$$. We denote by $$\textsf {Geo} _x\mathrm{Y}$$ the space of (constant speed) geodesics starting from x and defined on some right neighbourhood of 0, and we equip this space with the pseudo-distance $$\textsf {d} _x$$ defined as:

\begin{aligned} \textsf {d} _x(\gamma ,\eta ):=\varlimsup _{t\downarrow 0}\frac{\textsf {d} (\gamma _t,\eta _t)}{t}\quad \forall \gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}. \end{aligned}
(2.5)

Then $$\textsf {d} _x$$ naturally induces an equivalence relation on $$\textsf {Geo} _x\mathrm{Y}$$ by declaring $$\gamma \sim \eta$$ iff $$\textsf {d} _x(\gamma ,\eta )=0$$. The equivalence class of $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$ in $$\textsf {Geo} _x\mathrm{Y}/\sim$$ will be denoted by $$\gamma '_0$$. Clearly $$\textsf {d} _x$$ passes to the quotient and defines a distance—still denoted by $$\textsf {d} _x$$—on $$\textsf {Geo} _x\mathrm{Y}/\sim$$ .
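For orientation, here is a hedged numerical illustration (not from the paper) in the Euclidean plane, where the limsup in (2.5) can be computed exactly: for straight rays from the origin it equals the distance between the velocity vectors.

```python
import math

u, v = (1.0, 0.0), (0.6, 0.8)   # directions of two unit-speed rays from the origin

def gamma(t):
    return (t * u[0], t * u[1])

def eta(t):
    return (t * v[0], t * v[1])

def quotient(t):
    """The quotient d(gamma_t, eta_t) / t appearing in (2.5)."""
    gx, gy = gamma(t)
    ex, ey = eta(t)
    return math.hypot(gx - ex, gy - ey) / t

# For straight rays the quotient is independent of t, so the limsup in (2.5)
# equals the Euclidean distance between the two direction vectors.
target = math.hypot(u[0] - v[0], u[1] - v[1])
for t in (1.0, 1e-2, 1e-4, 1e-6):
    assert abs(quotient(t) - target) < 1e-9
```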

### Definition 2.6

(Tangent cone) Let $$\mathrm{Y}$$ be a geodesic space and $$x\in \mathrm{Y}$$. The tangent cone $$(\mathrm{T}_x\mathrm{Y},\textsf {d} _x)$$ is the completion of $$(\textsf {Geo} _x\mathrm{Y}/\sim ,\textsf {d} _x)$$. We call $$0\in \mathrm{T}_x\mathrm{Y}$$, or sometimes $$0_x\in \mathrm{T}_x\mathrm{Y}$$, the equivalence class of the constant geodesic in $$\textsf {Geo} _x\mathrm{Y}$$.

In a general geodesic space little can be said about the structure of tangent cones, but if $$\mathrm{Y}$$ is locally a $$\mathrm{CAT}(\kappa )$$ space then tangent cones have interesting geometric properties and can be used as basic tools to build a robust first-order calculus.

In order to understand the geometry of $$\mathrm{T}_x\mathrm{Y}$$ it is necessary to recall the notion of angle between geodesics. To do so, let us recall the definition of modified trigonometric functions

\begin{aligned} \mathrm{sn}^\kappa (x):=\left\{ \begin{array}{ll} \tfrac{1}{\sqrt{\kappa }}\sin (\sqrt{\kappa }x)&{}\text {if }\kappa>0\\ x&{}\text {if }\kappa =0\\ \tfrac{1}{\sqrt{-\kappa }}\sinh (\sqrt{-\kappa } x)&{}\text {if }\kappa<0 \end{array}\right. \qquad \qquad \mathrm{cn}^\kappa (x):=\left\{ \begin{array}{ll} \cos (\sqrt{\kappa }x)&{}\text {if }\kappa >0\\ 1&{}\text {if }\kappa =0\\ \cosh (\sqrt{-\kappa } x)&{}\text {if }\kappa <0 \end{array}\right. \end{aligned}

and that in the model space $${\mathbb {M}}_\kappa$$ the cosine law reads, for $$\kappa \ne 0$$, as

\begin{aligned} \cos (\alpha )=\frac{\mathrm{cn}^\kappa (a)-\mathrm{cn}^\kappa (b)\,\mathrm{cn}^\kappa (c)}{\kappa \, \mathrm{sn}^\kappa (b)\,\mathrm{sn}^\kappa (c)} \end{aligned}

whenever a, b, c are the lengths of the sides of a geodesic triangle and $$\alpha$$ is the angle opposite to a (in the limiting case $$\kappa \rightarrow 0$$ this reduces to the classical Euclidean cosine law).
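Indeed, a routine Taylor expansion (included here for the reader's convenience) shows how the Euclidean law is recovered: since $$\mathrm{cn}^\kappa (x)=1-\kappa x^2/2+O(\kappa ^2)$$ and $$\mathrm{sn}^\kappa (x)=x+O(\kappa )$$ as $$\kappa \rightarrow 0$$, we have

\begin{aligned} \frac{\mathrm{cn}^\kappa (a)-\mathrm{cn}^\kappa (b)\,\mathrm{cn}^\kappa (c)}{\kappa \,\mathrm{sn}^\kappa (b)\,\mathrm{sn}^\kappa (c)}=\frac{\kappa \,\frac{b^2+c^2-a^2}{2}+O(\kappa ^2)}{\kappa \,bc+O(\kappa ^2)}\ \rightarrow \ \frac{b^2+c^2-a^2}{2bc}\quad \text {as }\kappa \rightarrow 0. \end{aligned}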

Then given three points $$x,y_0,y_1$$ in a metric space with $$\textsf {d} (x,y_0)+\textsf {d} (x,y_1)+\textsf {d} (y_0,y_1)<2D_\kappa$$, we define the angle between $$y_0,y_1$$ seen from x as

\begin{aligned} {\overline{\angle }}^\kappa _x(y_0,y_1):=\arccos \bigg (\frac{\mathrm{cn}^\kappa (\textsf {d} (y_0,y_1))-\mathrm{cn}^\kappa (\textsf {d} (x,y_0))\,\mathrm{cn}^\kappa (\textsf {d} (x,y_1))}{\kappa \,\mathrm{sn}^\kappa (\textsf {d} (x,y_0))\,\mathrm{sn}^\kappa (\textsf {d} (x,y_1))}\bigg ). \end{aligned}
(2.6)

Notice that this is the angle in the model space $${\mathbb {M}}_\kappa$$ at $${\bar{x}}$$ of a comparison triangle $${\bar{\Delta }} ({\bar{x}},{\bar{y}}_0,{\bar{y}}_1)$$ and from this observation it is not hard to check that

\begin{aligned} {\overline{\angle }}^\kappa _x(y_0,y_2)\le {\overline{\angle }}^\kappa _x(y_0,y_1)+{\overline{\angle }}^\kappa _x(y_1,y_2) \end{aligned}
(2.7)

for any four points $$x,y_0,y_1,y_2$$ in a metric space.
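As a quick sanity check of (2.6), here is a hedged numerical sketch (illustrative side lengths, not taken from the paper) verifying that the comparison angle converges to the Euclidean angle as $$\kappa \rightarrow 0$$:

```python
import math

def sn(k, x):
    """Modified sine sn^k."""
    if k > 0:
        return math.sin(math.sqrt(k) * x) / math.sqrt(k)
    if k < 0:
        return math.sinh(math.sqrt(-k) * x) / math.sqrt(-k)
    return x

def cn(k, x):
    """Modified cosine cn^k."""
    if k > 0:
        return math.cos(math.sqrt(k) * x)
    if k < 0:
        return math.cosh(math.sqrt(-k) * x)
    return 1.0

def comparison_angle(k, d01, dx0, dx1):
    """Comparison angle (2.6); for k = 0 the Euclidean cosine law."""
    if k == 0:
        c = (dx0**2 + dx1**2 - d01**2) / (2 * dx0 * dx1)
    else:
        c = (cn(k, d01) - cn(k, dx0) * cn(k, dx1)) / (k * sn(k, dx0) * sn(k, dx1))
    return math.acos(max(-1.0, min(1.0, c)))

# Illustrative side lengths d(x, y0), d(x, y1), d(y0, y1) of a triangle.
dx0, dx1, d01 = 1.0, 1.2, 0.7
eucl = comparison_angle(0, d01, dx0, dx1)
for k in (1e-3, -1e-3, 1e-6, -1e-6):
    assert abs(comparison_angle(k, d01, dx0, dx1) - eucl) < 10 * abs(k)
```

This is the quantitative content of Lemma 2.7 below, specialized to these side lengths.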

A direct consequence of the definition of $$\mathrm{CAT}(\kappa )$$ space and of the above cosine law is that on a $$\mathrm{CAT}(\kappa )$$ space $$\mathrm{Y}$$, for $$x\in \mathrm{Y}$$ and $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$

\begin{aligned} \begin{aligned}&\text {the angle }{\overline{\angle }}^\kappa _x(\gamma _t,\eta _s)\text { is non-decreasing in both }t\text { and }s\\&\text {provided they vary in }\big \{(t,s)\,:\,\textsf {d} (x,\gamma _t),\textsf {d} (x,\eta _s)<D_\kappa \big \}. \end{aligned} \end{aligned}
(2.8)

Hence, if $$\mathrm{Y}$$ is a local $$\mathrm{CAT}(\kappa )$$ space, $$x\in \mathrm{Y}$$ and $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$ the joint limit

\begin{aligned} \angle ^\kappa _x(\gamma ,\eta ):=\lim _{t,s\downarrow 0}{\overline{\angle }}^\kappa _x (\gamma _t,\eta _s) \end{aligned}
(2.9)

exists; it is called the angle between the geodesics $$\gamma ,\eta$$.

The following technical result will be useful (for the proof see [1, Lemma 3.3.1] and the discussion thereafter).

### Lemma 2.7

(Independence of the angle on $$\kappa$$) Let $$\kappa _1,\kappa _2\in {\mathbb {R}}$$, $$\kappa _1\ge \kappa _2$$. Then there is a constant $$C=C(\kappa _1,\kappa _2)$$ such that the following holds: for any metric space $$\mathrm{Y}$$ and $$x,y_1,y_2\in \mathrm{Y}$$ with $$\textsf {d} (x,y_1),\textsf {d} (x,y_2),\textsf {d} (y_1,y_2)<D_{\kappa _1}$$ it holds that

\begin{aligned} \big |{\overline{\angle }}_x^{\kappa _1}(y_1,y_2)-{\overline{\angle }}_x^{\kappa _2} (y_1,y_2)\big |\le C\textsf {d} (x,y_1)\,\textsf {d} (x,y_2). \end{aligned}

In particular, the angle $$\angle _{x}^\kappa (\gamma ,\eta )$$ between geodesics $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$ does not depend on $$\kappa$$ and we shall drop the superscript from the notation. Comparing with the case $$\kappa =0$$ (i.e. choosing $$\kappa _1=0$$ or $$\kappa _2=0$$ according to the sign of $$\kappa$$) we see that, for any $$\kappa \in {\mathbb {R}}$$, we have

\begin{aligned} \cos ({\overline{\angle }}_x^\kappa (\gamma _t,\eta _s))=\frac{\textsf {d} ^2(\gamma _t,x) +\textsf {d} ^2(\eta _s,x)-\textsf {d} ^2(\gamma _t,\eta _s)}{2\textsf {d} (\gamma _t,x)\textsf {d} (\eta _s,x)}+o(ts). \end{aligned}
(2.10)

We drop the superscript from the notation of the comparison angle as well, with the understanding that $$\kappa$$ is fixed in each claim.

From (2.7) it is not hard to check that $$\angle _x$$ is a pseudo-distance on $$\textsf {Geo} _x\mathrm{Y}$$ and thus defines an equivalence relation $$\sim '$$ by declaring $$\gamma \sim '\eta$$ iff $$\angle _x(\gamma ,\eta )=0$$. It is worth noticing that the angle between two different reparametrizations of the same geodesic is 0.

We denote by $$\mathrm {dir}_x\mathrm{Y}$$ the quotient $$\textsf {Geo} _x\mathrm{Y}/\sim '$$ and, abusing a bit the notation, we keep denoting by $$\angle _x$$ and $$\gamma \in \mathrm {dir}_x\mathrm{Y}$$ the distance induced by $$\angle _x$$ and the equivalence class of $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$, respectively.

### Definition 2.8

(Space of directions) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$. The space of directions $$(\Sigma _x\mathrm{Y},\angle _x)$$ is the completion of $$(\mathrm {dir}_x\mathrm{Y},\angle _x)$$.

Let us now recall that given a generic metric space $$(\mathrm{X},\textsf {d} _\mathrm{X})$$, the (Euclidean) cone over it is the metric space $$(C(\mathrm{X}),\textsf {d} _{C(\mathrm{X})})$$ defined as follows (see also e.g.  for further details). As a set, $$C(\mathrm{X})$$ is equal to $$\big ([0,\infty )\times \mathrm{X}\big )/\sim$$, where $$(t,x)\sim (s,y)$$ iff $$t=s=0$$ or $$(t,x)=(s,y)$$. The distance is defined as

\begin{aligned} \textsf {d} _{C(\mathrm{X})}^2\big ((t,x),(s,y)\big ):=t^2+s^2-2ts\cos \big (\textsf {d} _\mathrm{X}(x,y)\wedge \pi \big ). \end{aligned}
(2.11)

On $$C(\mathrm{X})$$ there is a natural operation of ‘multiplication by a positive scalar’: the product $$\lambda z$$ of $$z=(t,x)$$ by $$\lambda \ge 0$$ is defined as $$(\lambda t,x)$$.
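As an illustration (a minimal sketch of ours, not part of the text), the cone distance (2.11) and the multiplication by a positive scalar can be implemented directly on pairs $$(t,x)$$ over any distance function on $$\mathrm{X}$$; note the behaviour of the apex $$(0,\cdot )$$ and the 1-homogeneity of the distance under scaling:

```python
import math

def cone_dist(d_X, p, q):
    # Euclidean cone distance (2.11): p = (t, x), q = (s, y)
    (t, x), (s, y) = p, q
    ang = min(d_X(x, y), math.pi)
    return math.sqrt(max(t*t + s*s - 2*t*s*math.cos(ang), 0.0))

def scale(lam, p):
    # multiplication by a scalar lam >= 0: lam * (t, x) = (lam*t, x)
    t, x = p
    return (lam * t, x)

# cone over a two-point space at angular distance pi/2
d_X = lambda x, y: 0.0 if x == y else math.pi / 2
p, q = (1.0, 'a'), (1.0, 'b')

# distance to the apex (s = 0) is just the radial coordinate
assert abs(cone_dist(d_X, p, (0.0, 'b')) - 1.0) < 1e-12
# at angle pi/2 the cone distance is the Euclidean hypotenuse
assert abs(cone_dist(d_X, p, q) - math.sqrt(2.0)) < 1e-12
# d(lam p, lam q) = lam d(p, q)
assert abs(cone_dist(d_X, scale(3.0, p), scale(3.0, q)) - 3.0*math.sqrt(2.0)) < 1e-12
```

All points $$(0,x)$$ are identified by $$\sim$$; consistently, the formula gives them mutual distance 0.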

We then have the following:

### Theorem 2.9

($$\mathrm{T}_x\mathrm{Y}$$ as a cone over the space of directions) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space. Fix a point $$x\in \mathrm{Y}$$. Then the $$\varlimsup$$ in (2.5) is a limit. Moreover, the map sending $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$ to $$(\mathrm {Lip}(\gamma ),\gamma )\in [0,\infty )\times \mathrm {dir}_x\mathrm{Y}$$ passes to the quotient and uniquely extends to a bijective isometry from $$\mathrm{T}_x\mathrm{Y}$$ to $$C(\Sigma _x\mathrm{Y})$$. Finally, the map $$B_{\textsf {r} _x}(x)\ni y\mapsto (\textsf {G} ^y_x)'_0\in \mathrm{T}_x\mathrm{Y}$$ is continuous. In particular, if $${\mathcal {D}}\subset B_{\textsf {r} _x}(x)$$ is dense in a neighbourhood of x, then $$\{\lambda (\textsf {G} ^y_x)'_0\,:\,\lambda \ge 0,\ y\in {\mathcal {D}}\}$$ is dense in $$\mathrm{T}_x\mathrm{Y}$$.

### Proof

For any $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$, by picking $$t=s$$ in (2.10) we see that

\begin{aligned} \frac{\textsf {d} ^2(\gamma _t,\eta _t)}{t^2}=\mathrm {Lip}(\gamma )^2+\mathrm {Lip}(\eta )^2-2 \mathrm {Lip}(\gamma )\mathrm {Lip}(\eta )\cos ({\overline{\angle }}_x(\gamma _t,\eta _t))+o(t^2). \end{aligned}

Since $$\lim _{t\downarrow 0}{\overline{\angle }}_x(\gamma _t,\eta _t)=\angle _x(\gamma ,\eta )$$ it follows that the limit $$\displaystyle \lim _{t\downarrow 0}\frac{\textsf {d} ^2(\gamma _t,\eta _t)}{t^2}$$ exists, and equals

\begin{aligned} \begin{aligned}\ \lim _{t\downarrow 0}\frac{\textsf {d} ^2(\gamma _t,\eta _t)}{t^2}&=\mathrm {Lip}(\gamma )^2 +\mathrm {Lip}(\eta )^2-2\mathrm {Lip}(\gamma )\mathrm {Lip}(\eta )\cos (\angle _x(\gamma ,\eta ))\\&=\textsf {d} ^2_{C(\Sigma _x\mathrm{Y})}\big ((\mathrm {Lip}(\gamma ),\gamma ),(\mathrm {Lip}(\eta ),\eta )\big ). \end{aligned} \end{aligned}

It follows that the map $$\gamma _0'\mapsto (\mathrm {Lip}(\gamma ),\gamma )$$ defines a bijective isometry $$\mathrm{T}_x\mathrm{Y}\rightarrow C(\Sigma _x\mathrm{Y})$$.

For the continuity of $$y\mapsto (\textsf {G} ^y_x)'_0$$, notice that from the monotonicity of the angle it follows that

\begin{aligned} \angle _x\big ((\textsf {G} ^{y}_x)'_0,(\textsf {G} ^z_x)'_0\big )\le {\overline{\angle }}^\kappa _x(y,z) \end{aligned}

and thus if $$z\rightarrow y$$ we have $$\angle _x\big ((\textsf {G} ^{y}_x)'_0,(\textsf {G} ^z_x)'_0\big )\rightarrow 0$$. Since trivially it also holds that $$\mathrm{Lip}(\textsf {G}_{x}^{z})=\textsf {d} (x,z)\rightarrow \textsf {d} (x,y)=\mathrm{Lip}(\textsf {G}_{x}^{y})$$, continuity follows.

For the last claim, notice that by the definition of tangent cone and of multiplication by a positive scalar we have that $$\{\lambda (\textsf {G} ^y_x)'_0\ :\ \lambda \ge 0,\ y\in B_r(x)\}$$ is dense in $$\mathrm{T}_x\mathrm{Y}$$ for any $$r\in (0,\textsf {r} _x)$$. Then the continuity just proved ensures that for any $$\lambda \ge 0$$ the set $$\{\lambda (\textsf {G} ^y_x)'_0\ :\ y\in {\mathcal {D}}\}$$ is dense in $$\{\lambda (\textsf {G} ^y_x)'_0\ :\ y\in B_r(x)\}$$, leading to the claim. $$\square$$

A key property of the tangent cone is the following statement, which is central for our subsequent results. For the proof we refer to [10, Chap. II, Theorem 3.19].

### Theorem 2.10

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$. Then the tangent cone $$(\mathrm{T}_x\mathrm{Y},\textsf {d} _x)$$ is a $$\mathrm{CAT}(0)$$-space.

The tangent cone $$\mathrm{T}_x\mathrm{Y}$$ is not only a $$\mathrm{CAT}(0)$$ space, but also comes with an additional structure which somehow resembles that of a Hilbert space. To make this more evident, let us introduce the following notation, valid for any $$v,w\in \mathrm{T}_x\mathrm{Y}$$ (see [12, 37]).

(a) Multiplication by a positive scalar. As for general cones, for $$\lambda \ge 0$$ and $$v=(t,\gamma )\in \mathrm{T}_x\mathrm{Y}\approx C(\Sigma _x\mathrm{Y})$$ we put $$\lambda v:=(\lambda t,\gamma )$$.

(b) Norm. $$|v|_x:=\textsf {d} _x(v,0)$$.

(c) Scalar product. $$\langle v,w\rangle _x:=\tfrac{1}{2}\big [|v|_x^2+|w|_x^2-\textsf {d} _x^2(v,w)\big ]$$.

(d) Sum. $$v\oplus w:=2 m_{v,w}$$, where $$m_{v,w}$$ is the midpoint of v, w (well-defined because $$\mathrm{T}_x\mathrm{Y}$$ is a $$\mathrm{CAT}(0)$$ space).
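To make these operations concrete, consider the Euclidean plane: it is a $$\mathrm{CAT}(0)$$ space whose tangent cone at every point is the plane itself, and the definitions above reduce to the usual Hilbert-space operations. The following sketch (ours, not from the text) verifies this numerically:

```python
import math

def norm(v):
    return math.hypot(v[0], v[1])

def dist(v, w):
    return math.hypot(v[0] - w[0], v[1] - w[1])

def inner(v, w):
    # scalar product as in (c): half of |v|^2 + |w|^2 - d^2(v, w)
    return 0.5 * (norm(v)**2 + norm(w)**2 - dist(v, w)**2)

def oplus(v, w):
    # sum as in (d): twice the midpoint of v and w
    m = ((v[0] + w[0]) / 2, (v[1] + w[1]) / 2)
    return (2 * m[0], 2 * m[1])

v, w = (3.0, 0.0), (1.0, 2.0)
assert abs(inner(v, w) - (v[0]*w[0] + v[1]*w[1])) < 1e-12   # usual dot product
assert oplus(v, w) == (4.0, 2.0)                            # usual vector sum
# parallelogram-type bound, with equality in the Hilbert case
assert dist(v, w)**2 + norm(oplus(v, w))**2 <= 2*(norm(v)**2 + norm(w)**2) + 1e-12
```

In a genuinely non-Hilbert tangent cone the identities degrade to the inequalities collected in the next proposition.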

The basic properties of these operations are collected in the following proposition:

### Proposition 2.11

(Basic calculus on the tangent cone) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$. Then the four operations defined above are continuous in their variables. The ‘sum’ and the ‘scalar product’ are also symmetric. Moreover:

\begin{aligned} |\lambda v|_x&=\lambda |v|_x, \end{aligned}
(2.12a)
\begin{aligned} \textsf {d} _x^2(v,w)&=|v|_x^2+|w|_x^2-2\langle v,w\rangle _x,\end{aligned}
(2.12b)
\begin{aligned} {\langle }\gamma '_0,\eta '_0{\rangle }_x&=|\gamma '_0|_x|\eta '_0|_x\cos (\angle _x(\gamma ,\eta )),\end{aligned}
(2.12c)
\begin{aligned} \langle \lambda v,w\rangle _x&= \langle v,\lambda w\rangle _x=\lambda \langle v,w\rangle _x, \end{aligned}
(2.12d)
\begin{aligned} |\langle v,w\rangle _x|&\le |v|_x|w|_x, \end{aligned}
(2.12e)
\begin{aligned} \langle v,w\rangle _x&= |v|_x|w|_x\quad \text { if and only if }\quad |w|_xv=|v|_xw, \end{aligned}
(2.12f)
\begin{aligned} \textsf {d} _x^2(v,w)+|v\oplus w|_x^2&\le 2(|v|_x^2+|w|_x^2), \end{aligned}
(2.12g)

for any $$v,w\in \mathrm{T}_x\mathrm{Y}$$, $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$ and $$\lambda \ge 0$$.

### Proof

The symmetry of the ‘sum’ and of the ‘scalar product’ is obvious, and so is the continuity of the ‘norm’ and hence of the ‘scalar product’. The continuity of $$(\lambda ,v)\mapsto \lambda v$$ is a direct consequence of the inequality

\begin{aligned} \textsf {d} _x(\lambda v,\lambda 'v')\le \textsf {d} _x(\lambda v,\lambda v')+\textsf {d} _x(\lambda v',\lambda 'v')=\lambda \textsf {d} _x(v,v')+|\lambda '-\lambda | |v'|_x, \end{aligned}

where the equality follows trivially from the definition of cone distance and Theorem 2.9. For the continuity of the ‘sum’ it is now sufficient to prove that the map $$(v,w)\mapsto m_{v,w}$$ is continuous. This follows from the bound

\begin{aligned} \textsf {d} _x(m_{v,w},m_{v',w'})\le \frac{1}{2}\big ( \textsf {d} _x(v,v')+\textsf {d} _x(w,w')\big ) \end{aligned}

which is valid in any $$\mathrm{CAT}(0)$$ space (see e.g. [8, Proposition 1.1.5 and Theorem 1.3.3]).

Notice that (2.12a), (2.12b) are direct consequences of the definitions. For (2.12c) we observe that by definition we have

\begin{aligned} {\langle }\gamma _0',\eta _0'{\rangle }_x=\tfrac{1}{2}\big (|\gamma '_0|_x^2+|\eta '_0|_x^2 -\textsf {d} _x^2(\gamma _0',\eta _0')\big ) \end{aligned}

and thus recalling (2.5) (and the fact that the $$\varlimsup$$ is actually a limit—see Theorem 2.9) we obtain

\begin{aligned}&{\langle }\gamma _0',\eta _0'{\rangle }_x=|\gamma '_0|_x|\eta '_0|_x\lim _{t\downarrow 0} \frac{\textsf {d} ^2(\gamma _t,x)+\textsf {d} ^2(\eta _t,x)-\textsf {d} ^2(\gamma _t,\eta _t)}{2\textsf {d} (\gamma _t,x) \textsf {d} (\eta _t,x)}\\&{\mathop {=}\limits ^{(2.10)}}|\gamma '_0|_x|\eta '_0|_x\cos (\angle _x(\gamma ,\eta )). \end{aligned}

For (2.12d) we note that from the definition (2.11) and Theorem 2.9 it is clear that $$\textsf {d} _x(\lambda v,\lambda w)=\lambda \textsf {d} _x(v,w)$$ for $$\lambda \ge 0$$. Hence we also have $${\langle }\lambda v,\lambda w{\rangle }_x=\lambda ^2{\langle }v,w{\rangle }_x$$ and thus, taking into account the symmetry of the scalar product, to conclude it is sufficient to prove that

\begin{aligned} {\langle }\lambda v, w{\rangle }_x\ge \lambda {\langle }v,w{\rangle }_x\quad \text {for }\ \lambda \in [0,1]. \end{aligned}
(2.13)

To see this, notice that by the 2-convexity (1.1) of squared distance functions in $$\mathrm{CAT}(0)$$-spaces we have, for $$\lambda \in [0,1]$$ and $$v,w\in \mathrm{T}_x\mathrm{Y}$$, that

\begin{aligned} \textsf {d} _x^2(\lambda v,w)\le (1-\lambda )|w|_x^2+\lambda \textsf {d} _x^2(v,w)-\lambda (1-\lambda )|v|_x^2. \end{aligned}

The estimate (2.13) follows from this and the definition of $${\langle }\cdot ,\cdot {\rangle }_x$$.

To prove (2.12e) let $$\gamma ,\eta \in \mathrm {dir}_x\mathrm{Y}$$ and $$t,s\ge 0$$ and observe that

\begin{aligned} \big |2\langle (t,\gamma ),(s,\eta )\rangle _x\big |&=\big | t^2+s^2-\textsf {d} _x^2\big ((t,\gamma ),(s,\eta )\big )\big |\\&=2ts|\cos \angle _x(\gamma ,\eta )| \le 2ts=2|(t,\gamma )|_x|(s,\eta )|_x. \end{aligned}

Since elements of the form $$(t,\gamma )$$ are dense in $$\mathrm{T}_x\mathrm{Y}$$, we just proved (2.12e).

The ‘if’ in (2.12f) comes from (2.12d); for the ‘only if’, suppose that $$\langle v,w\rangle _x=|v|_x|w|_x$$ and take $$\gamma _n,\eta _n\in \mathrm {dir}_x\mathrm{Y}$$ so that $$(|v|_x,\gamma _n)\rightarrow v$$ and $$(|w|_x,\eta _n) \rightarrow w$$ in $$\mathrm{T}_x\mathrm{Y}$$. It follows that

\begin{aligned} |v|_x|w|_x=\langle v,w\rangle _x=\lim _{n\rightarrow \infty } \langle (|v|_x,\gamma _n),(|w|_x,\eta _n)\rangle _x=|v|_x|w|_x\lim _{n\rightarrow \infty } \cos \angle _x(\gamma _n,\eta _n), \end{aligned}

i.e. $$\lim _{n\rightarrow \infty }\angle _x(\gamma _n,\eta _n)=0$$. This implies that

\begin{aligned}&\textsf {d} _x(|w|_xv,|v|_xw)\\&\quad =\lim _n \textsf {d} _x\big ((|v|_x|w|_x,\gamma _n) ,(|v|_x|w|_x,\eta _n)\big )=|v|_x|w|_x\lim _n \textsf {d} _x\big ((1,\gamma _n) ,(1,\eta _n)\big )=0, \end{aligned}

which was the claim.

Finally, (2.12g) is also a direct consequence of the 2-convexity of the squared distance from a point, which gives

\begin{aligned} \Big |\frac{1}{2}(v\oplus w)\Big |_x^2\le \frac{1}{2}\big (|v|_x^2+|w|_x^2\big )-\frac{1}{4}\textsf {d} _x^2(v,w). \end{aligned}

Taking into account the proved homogeneities, this is the claim. $$\square$$

It is worth underlining that, in general, $$\oplus$$ is not associative.

### Lemma 2.12

Let $$(\mathrm{Y},\textsf {d} )$$ be a $$\mathrm{CAT}(\kappa )$$ space. Fix a point $$x\in \mathrm{Y}$$. Then for any $$y,z\in B_{D_\kappa }(x)\setminus \{x\}$$ and $$\alpha ,\beta >0$$ it holds that

\begin{aligned} \big (\alpha (\textsf {G} _x^y)'_0\big )\oplus \big (\beta (\textsf {G} _x^z)'_0\big ) =\lim _{\varepsilon \downarrow 0}\frac{2(\textsf {G} _x^{m_\varepsilon })'_0}{\varepsilon }\in \mathrm{T}_x\mathrm{Y}, \end{aligned}
(2.14)

where $$m_\varepsilon$$ denotes the midpoint between $$(\textsf {G} _x^y)_{\varepsilon \alpha }$$ and $$(\textsf {G} _x^z)_{\varepsilon \beta }$$.

### Proof

Let us call $$p_\varepsilon :=(\textsf {G}_x^y)_{\varepsilon \alpha }$$ and $$q_\varepsilon :=(\textsf {G}_x^z)_{\varepsilon \beta }$$. For $$\varepsilon ,\delta >0$$ small it holds that $$(\textsf {G}_x^y)_{\delta \varepsilon \alpha }=(\textsf {G}_x^{p_\varepsilon })_\delta$$, whence by using (2.5) and Lemma 2.5 we obtain that

\begin{aligned} \begin{aligned} \textsf {d} _x\left( \alpha (\textsf {G}_x^y)'_0,\frac{(\textsf {G}_x^{m_\varepsilon })'_0}{\varepsilon }\right)&=\frac{1}{\varepsilon }\,\textsf {d} _x\big (\varepsilon \alpha (\textsf {G}_x^y)'_0, (\textsf {G}_x^{m_\varepsilon })'_0\big )=\frac{1}{\varepsilon }\lim _{\delta \downarrow 0} \frac{\textsf {d} \big ((\textsf {G}_x^y)_{\delta \varepsilon \alpha }, (\textsf {G}_x^{m_\varepsilon })_\delta \big )}{\delta }\\&=\frac{1}{\varepsilon }\lim _{\delta \downarrow 0} \frac{\textsf {d} \big ((\textsf {G}_x^{p_\varepsilon })_\delta ,(\textsf {G}_x^{m_\varepsilon })_\delta \big )}{\delta } \le \frac{1}{\varepsilon }\,\textsf {d} (p_\varepsilon ,m_\varepsilon )=\frac{1}{2\varepsilon }\,\textsf {d} (p_\varepsilon ,q_\varepsilon ). \end{aligned} \end{aligned}

Similarly, we have that $$\textsf {d} _x\left( \beta (\textsf {G}_x^z)'_0,\varepsilon ^{-1}(\textsf {G}_x^{m_\varepsilon })'_0\right) \le \textsf {d} (p_\varepsilon ,q_\varepsilon )/(2\varepsilon )$$. Choosing $$\varepsilon _n\downarrow 0$$ so that

\begin{aligned} \Big |\frac{\textsf {d} (p_{\varepsilon _n},q_{\varepsilon _n})}{\varepsilon _n}- \textsf {d} _x\big (\alpha (\textsf {G}_x^y)'_0,\beta (\textsf {G}_x^z)'_0\big )\Big |<\frac{2}{n} \quad \text { for every }n, \end{aligned}

we deduce that $$\varepsilon _n^{-1}(\textsf {G}_x^{m_{\varepsilon _n}})'_0$$ is a $$\frac{1}{n}$$-approximate midpoint between $$\alpha (\textsf {G}_x^y)'_0$$ and $$\beta (\textsf {G}_x^z)'_0$$. This yields (2.14) by Lemma 2.2, as required. $$\square$$

We close this section with the following important formula:

### Proposition 2.13

(First variation formula) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(\kappa )$$ space, $$x\in \mathrm{Y}$$ and $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$ with $$\eta$$ defined on [0, 1] and such that $$\textsf {d} (x,\eta _1)< D_\kappa$$. Then

\begin{aligned} {{\langle }\gamma '_0,\eta '_0{\rangle }_x}=-\mathrm {Lip}(\eta )\lim _{t\downarrow 0}\frac{\textsf {d} (\gamma _t,\eta _1)-\textsf {d} (\gamma _0,\eta _1)}{t}. \end{aligned}
(2.15)

### Proof

We know from (2.12c) and (2.10) that

\begin{aligned} {\langle }\gamma '_0,\eta '_0{\rangle }_x=\lim _{t,s\downarrow 0}\frac{\textsf {d} ^2(\gamma _t,x) +\textsf {d} ^2(\eta _s,x)-\textsf {d} ^2(\gamma _t,\eta _s)}{2ts} \end{aligned}

and by direct computation we see that

\begin{aligned} \begin{aligned} \lim _{t\downarrow 0}\frac{\textsf {d} ^2(\gamma _t,x)+\textsf {d} ^2(\eta _s,x)-\textsf {d} ^2 (\gamma _t,\eta _s)}{2ts}&=\lim _{t\downarrow 0}\frac{\textsf {d} ^2(\eta _s,x) -\textsf {d} ^2(\gamma _t,\eta _s)}{2ts}\\&=-\mathrm {Lip}(\eta )\lim _{t\downarrow 0}\frac{\textsf {d} (\gamma _t,\eta _s)-\textsf {d} (x,\eta _s)}{t}. \end{aligned} \end{aligned}

Since the triangle inequality gives $$\textsf {d} (\gamma _t,\eta _1)-\textsf {d} (x,\eta _1)\le \textsf {d} (\gamma _t,\eta _s)-\textsf {d} (x,\eta _s)$$, from the above we deduce

\begin{aligned} {\langle }\gamma '_0,\eta '_0{\rangle }_x\le -\mathrm {Lip}(\eta )\lim _{t\downarrow 0}\frac{\textsf {d} (\gamma _t,\eta _1)-\textsf {d} (\gamma _0,\eta _1)}{t}. \end{aligned}

Now notice that from (2.12c), Lemma 2.7, the assumption $$\textsf {d} (x,\eta _1)<D_\kappa$$ and the monotonicity property (2.8) we get

\begin{aligned} {\langle }\gamma '_0,\eta '_0{\rangle }_x\ge \mathrm {Lip}(\gamma )\mathrm {Lip}(\eta )\varlimsup _{t\downarrow 0} \frac{\mathrm{cn}^\kappa (\textsf {d} (\gamma _t,\eta _1))-\mathrm{cn}^\kappa (\textsf {d} (\gamma _t,x))\, \mathrm{cn}^\kappa (\textsf {d} (\eta _1,x))}{\kappa \,\mathrm{sn}^\kappa (\textsf {d} (\gamma _t,x))\, \mathrm{sn}^\kappa (\textsf {d} (\eta _1,x))}. \end{aligned}

Thus, using the expansions

\begin{aligned} \begin{aligned} \mathrm{cn}^\kappa (\textsf {d} (\gamma _t,x))&=1+O(t^2),\\ \mathrm{cn}^\kappa (\textsf {d} (\gamma _t,\eta _1))&=\mathrm{cn}^\kappa (\textsf {d} (x,\eta _1))-\kappa \, \mathrm{sn}^\kappa (\textsf {d} (x,\eta _1))\big (\textsf {d} (\gamma _t,\eta _1)-\textsf {d} (x,\eta _1)\big )+O(t^2),\\ \mathrm{sn}^\kappa (\textsf {d} (\gamma _t,x))&=t\mathrm {Lip}(\gamma )+O(t^2), \end{aligned} \end{aligned}

we get the inequality $$\ge$$ in (2.15) and the conclusion. $$\square$$
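As a sanity check (ours, not from the text): in the Euclidean plane with $$\gamma _t=tu$$ and $$\eta _t=tw$$, so that $$\eta _1=w$$ and $$\mathrm {Lip}(\eta )=|w|$$, the scalar product is the dot product and (2.15) reads $$u\cdot w=-|w|\,\tfrac{\mathrm{d}}{\mathrm{d}t}\big |_{t=0}|tu-w|$$. A finite-difference verification:

```python
import math

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

u, w = (1.0, 2.0), (3.0, -1.0)          # gamma_t = t*u, eta_t = t*w
lip_eta = math.hypot(*w)                # Lip(eta) = |w|, since eta_1 = w

# right derivative of t -> d(gamma_t, eta_1) at t = 0, by a small difference quotient
h = 1e-7
deriv = (d((h * u[0], h * u[1]), w) - d((0.0, 0.0), w)) / h

lhs = u[0] * w[0] + u[1] * w[1]         # <gamma'_0, eta'_0> is the dot product here
rhs = -lip_eta * deriv                  # right-hand side of (2.15)
assert abs(lhs - rhs) < 1e-5
```

The vectors $$u,w$$ and the step h are arbitrary choices; the agreement is up to the $$O(h)$$ error of the one-sided quotient.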

### Differential of Locally Semiconvex Lipschitz Functions

In this section we see that for Lipschitz and locally semiconvex functions there is a well-behaved notion of differential defined on the tangent cone of every point in the domain of the function itself. See [37, 38] for the lower curvature bound case, and  for more general classes of metric spaces.

We start by recalling the following notion:

### Definition 2.14

(Locally semiconvex function) Let $$\mathrm{Y}$$ be a geodesic metric space and $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$. We say that f is semiconvex if there exists $$K\in {\mathbb {R}}$$ so that the inequality

\begin{aligned} f(\gamma _t)\le (1-t)f(\gamma _0)+tf(\gamma _1)-\frac{K}{2}t(1-t)\textsf {d} ^2(\gamma _0,\gamma _1) \end{aligned}

holds for any geodesic $$\gamma :[0,1]\rightarrow \mathrm{Y}$$.

A function $$f:\Omega \rightarrow {\mathbb {R}}$$, with $$\Omega \subset \mathrm{Y}$$ an open connected set, is called locally semiconvex if every point $$x\in \Omega$$ has a neighbourhood U such that the inequality above holds for all geodesics $$\gamma :[0,1]\rightarrow \Omega$$ with endpoints in U.
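On $$\mathrm{Y}={\mathbb {R}}$$, where geodesics are affine segments, the defining inequality can be tested by sampling. A small sketch (ours; function and constants are arbitrary choices) showing that $$f(x)=-x^2$$ is K-semiconvex with $$K=-2$$ but is not convex:

```python
def is_K_semiconvex(f, K, a, b, steps=100):
    # sample the defining inequality along the geodesic gamma_t = (1-t)a + t b
    d2 = (b - a) ** 2
    for i in range(steps + 1):
        t = i / steps
        g = (1 - t) * a + t * b
        if f(g) > (1 - t) * f(a) + t * f(b) - 0.5 * K * t * (1 - t) * d2 + 1e-12:
            return False
    return True

f = lambda x: -x * x
assert is_K_semiconvex(f, -2.0, -1.0, 3.0)      # -x^2 is (-2)-semiconvex on R
assert not is_K_semiconvex(f, 0.0, -1.0, 3.0)   # ...but it is not convex (K = 0)
```

For $$f(x)=-x^2$$ and $$K=-2$$ the inequality holds with equality along every segment, which is why the small tolerance is needed.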

For locally semiconvex functions it is possible to define directional derivatives, which we do in the setting of $$\mathrm{CAT}(\kappa )$$ spaces:

### Definition 2.15

(Directional derivative) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space, $$x\in \mathrm{Y}$$, $$U\subset B_{\textsf {r} _x}(x)$$ a neighbourhood of x and $$f:U\rightarrow {\mathbb {R}}$$ locally semiconvex. The directional derivative of f at x is the map $$\sigma _x f:\textsf {Geo} _x\mathrm{Y}\rightarrow {\mathbb {R}}\cup \{-\infty \}$$ defined as

\begin{aligned} \sigma _x f(\gamma ):=\lim _{h\downarrow 0}\frac{f(\gamma _h)-f(\gamma _0)}{h}. \end{aligned}

Notice that the monotonicity of the incremental ratios of semiconvex functions (i.e. of $$h\mapsto \frac{f(\gamma _h)-f(\gamma _0)}{h}+\frac{|K|}{2}\mathrm {Lip}^2(\gamma )\,h$$) ensures that the limit above exists. Still, in general it is not clear whether $$\sigma _x f$$ passes to the quotient $$\textsf {Geo} _x\mathrm{Y}/\sim '$$ nor whether it is real-valued. In the next proposition we see that this is the case if we further assume that f is Lipschitz in a neighbourhood of x.

Recall that given $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$ the asymptotic Lipschitz constant $$\mathrm {lip}_af:\mathrm{Y}\rightarrow [0,+\infty ]$$ is defined as

\begin{aligned} \mathrm {lip}_af(x):=\lim _{r\downarrow 0}\mathrm {Lip}\big (f\big |_{B_r(x)}\big )=\varlimsup _{y,z\rightarrow x,\ y\ne z}\frac{|f(y)-f(z)|}{\textsf {d} (y,z)}. \end{aligned}

### Proposition 2.16

(Differentials of locally Lipschitz and semiconvex functions) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space, $$\Omega \subset \mathrm{Y}$$ open and $$f:\Omega \rightarrow {\mathbb {R}}$$ be locally semiconvex and Lipschitz.

Then for each $$x\in \Omega$$ there exists a unique continuous map $$\mathrm{d}_xf:\mathrm{T}_x\mathrm{Y}\rightarrow {\mathbb {R}}$$, called the differential of f at x, such that

\begin{aligned} \mathrm{d}_xf(\gamma '_0)=\sigma _xf(\gamma )\quad \forall \gamma \in \textsf {Geo} _x\mathrm{Y}. \end{aligned}
(2.16)

Moreover, $$\mathrm{d}_xf$$ is convex, $$\mathrm {lip}_a f(x)$$-Lipschitz and positively 1-homogeneous, i.e. $$\mathrm{d}_xf(\lambda v)=\lambda \mathrm{d}_xf(v)$$ for any $$v\in \mathrm{T}_x\mathrm{Y}$$ and $$\lambda \ge 0$$.

### Proof

Fix $$x\in \Omega$$ and let $$r>0$$ be such that $$B_r(x)\subset \Omega$$. Then for every $$\gamma ,\eta \in \textsf {Geo} _x\mathrm{Y}$$ we have $$\gamma _t,\eta _t\in B_r(x)$$ for $$t\ll 1$$ and thus

\begin{aligned} |\sigma _xf(\gamma )-\sigma _xf(\eta )|\le \varlimsup _{t\downarrow 0}\frac{|f(\gamma _t)-f(\eta _t)|}{t}\le \mathrm {Lip}\big (f\big |_{B_r(x)}\big )\lim _{t\downarrow 0}\frac{\textsf {d} (\gamma _t,\eta _t)}{t}=\mathrm {Lip}\big (f\big |_{B_r(x)}\big )\,\textsf {d} _x(\gamma '_0,\eta '_0). \end{aligned}

This shows that $$\sigma _x f$$ passes to the quotient and defines a $$\mathrm {Lip}(f|_{B_r(x)})$$-Lipschitz map on $$\textsf {Geo} _x\mathrm{Y}/\sim '$$. Existence and uniqueness of the continuous extension $$\mathrm{d}_x f$$ to the whole $$\mathrm{T}_x\mathrm{Y}$$ are then obvious and, letting $$r\downarrow 0$$, it is also clear that $$\mathrm{d}_xf$$ is $$\mathrm {lip}_af(x)$$-Lipschitz.

For the homogeneity observe that, for $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$ and $$\lambda \ge 0$$, the isometry given in Theorem 2.9 and the definition of multiplication by positive scalar ensure that $$\lambda \gamma '_0={\tilde{\gamma }}'_0$$ in $$C(\Sigma _x\mathrm{Y})\approx \mathrm{T}_x\mathrm{Y}$$, where $${\tilde{\gamma }}_t:=\gamma _{\lambda t}$$. Then (2.16) and the definition of directional derivative grant that $$\mathrm{d}_xf(\lambda \gamma '_0)=\lambda \mathrm{d}_xf(\gamma '_0)$$ for any $$\lambda \ge 0$$ and $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$. Since tangent vectors of the form $$\gamma '_0$$ are dense in $$\mathrm{T}_x\mathrm{Y}$$, the claim follows by the continuity of $$\mathrm{d}_xf$$ that we already proved.

It remains to prove that $$\mathrm{d}_xf$$ is convex and, thanks to the continuity just proven, it is sufficient to show that for any $$\gamma ,\eta \in \textsf {Geo} _xB_{\textsf {r} _x}(x)\simeq \textsf {Geo} _x\mathrm{Y}$$, letting m be the midpoint of $$\gamma '_0,\eta '_0\in \mathrm{T}_x\mathrm{Y}$$ it holds that

\begin{aligned} \mathrm{d}_x f(m)\le \frac{1}{2}\big (\mathrm{d}_x f(\gamma '_0)+\mathrm{d}_x f(\eta '_0)\big ). \end{aligned}
(2.17)

To this aim, let $$\varepsilon >0$$ and use the density of $$\textsf {Geo} _xB_{\textsf {r} _x}(x)/\sim '$$ in $$\mathrm{T}_x\mathrm{Y}$$ to find $$\rho \in \textsf {Geo} _xB_{\textsf {r} _x}(x)$$ such that $$\rho '_0$$ is an approximate midpoint of $$\gamma '_0,\eta '_0$$ in the sense that

\begin{aligned} \textsf {d} _x^2(\gamma '_0,\rho '_0),\textsf {d} _x^2(\eta '_0,\rho '_0)\le \tfrac{1}{4} \textsf {d} _x^2(\gamma '_0,\eta '_0)+\varepsilon ^2. \end{aligned}

By the very definition (2.5) of $$\textsf {d} _x$$ we see that there exists $$T>0$$ such that

\begin{aligned} \textsf {d} ^2(\gamma _t,\rho _t),\textsf {d} ^2(\eta _t,\rho _t)\le \tfrac{1}{4} \textsf {d} ^2(\gamma _t,\eta _t)+2\varepsilon ^2t^2\quad \forall t\in [0,T]. \end{aligned}

Up to taking T smaller, we can assume that $$\textsf {d} (\gamma _t,\eta _t)\le \frac{1}{2}D_\kappa$$, thus we are in a position to apply Lemma 2.2 (in $$\mathrm{T}_x\mathrm{Y}$$ and $$B_{\textsf {r} _x}(x)$$) to deduce that

\begin{aligned} \begin{aligned} \textsf {d} _x(\rho '_0,m)&\le C\varepsilon ,\\ \textsf {d} (\rho _t,m_t)&\le C\varepsilon t\quad \forall t\in [0,T], \end{aligned} \end{aligned}
(2.18)

for some $$C>0$$ independent of t, where $$m_t$$ is the midpoint of $$\gamma _t,\eta _t$$. Now let V be a neighbourhood of x where f is K-semiconvex and L-Lipschitz and notice that what was previously proved ensures that $$\mathrm{d}_x f$$ is L-Lipschitz as well. Then $$\gamma _t,\eta _t\in V$$ for $$t\ll 1$$ and the K-semiconvexity gives

\begin{aligned} f(m_t)&\le \frac{1}{2}\big (f(\gamma _t)+f(\eta _t)\big )-\frac{K}{8}\textsf {d} ^2(\gamma _t,\eta _t)\\&\le \frac{1}{2}\big (f(\gamma _t)+f(\eta _t)\big )+\frac{| K|}{8}t^2\big (\mathrm {Lip}^2(\gamma )+\mathrm {Lip}^2(\eta )\big ) \end{aligned}

and thus

\begin{aligned} \varlimsup _{t\downarrow 0}\frac{f(m_t)-f(x)}{t}&\le \frac{1}{2}\varlimsup _{t\downarrow 0}\frac{f(\gamma _t)-f(x)}{t}+\frac{1}{2}\varlimsup _{t\downarrow 0}\frac{f(\eta _t)-f(x)}{t}\\&=\frac{1}{2}\big (\mathrm{d}_xf(\gamma '_0)+\mathrm{d}_xf(\eta '_0)\big ). \end{aligned}
(2.19)

Hence taking into account (2.18) and the L-Lipschitz property of f and $$\mathrm{d}_xf$$ we get

\begin{aligned} \begin{aligned} \mathrm{d}_xf(m)&{\mathop {\le }\limits ^{(2.18)}}CL\varepsilon +\mathrm{d}_xf(\rho '_0)=CL\varepsilon +\lim _{t\downarrow 0}\frac{f(\rho _t)-f(x)}{t}\\&{\mathop {\le }\limits ^{(2.18)}}2CL\varepsilon +\varlimsup _{t\downarrow 0}\frac{f(m_t)-f(x)}{t}{\mathop {\le }\limits ^{(2.19)}}2CL\varepsilon +\frac{1}{2}\big (\mathrm{d}_xf(\gamma '_0)+\mathrm{d}_xf(\eta '_0)\big ). \end{aligned} \end{aligned}

The conclusion follows letting $$\varepsilon \downarrow 0$$. $$\square$$

In the model space $${\mathbb {M}}_\kappa$$, if $$\gamma$$ is a geodesic on [0, 1] and $$x\in {\mathbb {M}}_\kappa$$ is such that $$\textsf {d} (\gamma _0,\gamma _1)+\textsf {d} (\gamma _0,x)+\textsf {d} (\gamma _1,x)<2D_\kappa$$ we have that $$t\mapsto \textsf {d} (\gamma _t,x)$$ is semiconvex. Hence if $$\mathrm{Y}$$ is a local $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$, for any $$y\in B_{\textsf {r} _x}(x)$$ the function $$\mathrm {dist}_y$$ is semiconvex on $$B_{\textsf {r} _x}(x)$$.

We collect below the main properties of the differential:

### Proposition 2.17

(Differentials of distance functions) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(\kappa )$$ space and $$x\in \mathrm{Y}$$. Then:

(i) For $$y\in \mathrm{Y}$$ with $$\textsf {d} (x,y)<D_\kappa$$ we have

\begin{aligned} \mathrm{d}_x\mathrm {dist}_y(v)=\left\{ \begin{array}{ll} -\frac{1}{\mathrm {Lip}(\eta )}{{\langle }v,\eta '_0{\rangle }_x}&{}\quad \mathrm{if }\ y\ne x,\\ \,|v|_x&{}\quad \mathrm{if }\ y=x, \end{array}\right. \quad \forall v\in \mathrm{T}_x\mathrm{Y}, \end{aligned}
(2.20)

where $$\eta \in \textsf {Geo} _x\mathrm{Y}$$ is any geodesic passing through y.

(ii) For $${\mathcal {D}}\subset B_{D_\kappa }(x)$$ dense in a neighbourhood of x we have

\begin{aligned} |v|_x=\sup _{y\in {\mathcal {D}}}\big [-\mathrm{d}_x\mathrm {dist}_y(v)\big ]\quad \forall v\in \mathrm{T}_x\mathrm{Y}. \end{aligned}
(iii) Let $$v,w\in \mathrm{T}_x\mathrm{Y}$$ and $${\mathcal {D}}\subset B_{D_\kappa }(x)$$ be dense. Assume that

\begin{aligned} \mathrm{d}_x\mathrm {dist}_y(v)\le \mathrm{d}_x\mathrm {dist}_y(w) \quad \forall y \in {\mathcal {D}}. \end{aligned}

Then $$|w|^2_x \le {\langle }v,w{\rangle }_x$$ and in particular $$|w|_x \le |v|_x$$.

If moreover either $$|v|_x \le |w|_x$$ or $$x \in {\mathcal {D}}$$, then we also have $$v=w$$.

### Proof

(i) By the continuity of $$v\mapsto \mathrm{d}_x\mathrm {dist}_y(v)$$ and of the stated expression it is sufficient to check (2.20) for v of the form $$v=\gamma '_0$$ for arbitrary $$\gamma \in \textsf {Geo} _x\mathrm{Y}$$. Then keeping in mind the identity (2.16) and the definition of directional derivative we see that the case $$y=x$$ is obvious. For the case $$y\ne x$$ we notice that (2.12d) ensures that $$-\frac{1}{\mathrm {Lip}(\eta )}{{\langle }v,\eta '_0{\rangle }_x}$$ does not depend on the particular choice of $$\eta$$. We choose $$\eta$$ to be defined on [0, 1] and such that $$\eta _1=y$$ and conclude noticing that the formula is a restatement of the first variation formula in Proposition 2.13.

(ii) Inequality $$\ge$$ follows from point (i) and the ‘Cauchy-Schwarz inequality’ (2.12e). The opposite inequality is trivial if $$v=0$$. If not, we use the density result in Theorem 2.9 to find $$(y_n)\subset {\mathcal {D}}$$ such that, letting $$\gamma _n:[0,1]\rightarrow \mathrm{Y}$$ be the geodesic from x to $$y_n$$, we have $$\frac{1}{\textsf {d} (x,y_n)}\gamma _{n,0}'\rightarrow \frac{1}{|v|_x}v$$. By point (i) (and recalling the calculus rules in Proposition 2.11) we have that

\begin{aligned} -\mathrm{d}_x\mathrm {dist}_{y_n}(v)=\frac{1}{\textsf {d} (x,y_n)}{\langle }v,\gamma '_{n,0}{\rangle }_x\quad \rightarrow \quad \frac{1}{|v|_x} {\langle }v,v{\rangle }_x=|v|_x. \end{aligned}

(iii) If $$w=0$$ the first claim is obvious. Otherwise use the density result in Theorem 2.9 to find $$(y_n)\subset {\mathcal {D}}$$ such that, letting $$\gamma _n:[0,1]\rightarrow \mathrm{Y}$$ be the geodesic from x to $$y_n$$, we have $$\frac{1}{\textsf {d} (x,y_n)}\gamma _{n,0}'\rightarrow \frac{1}{|w|_x}w$$. Then point (i) and our assumption give $$-\frac{1}{\textsf {d} (x,y_n)}{\langle }v,\gamma '_{n,0}{\rangle }\le -\frac{1}{\textsf {d} (x,y_n)}{\langle }w,\gamma '_{n,0}{\rangle }$$ and passing to the limit (using the calculus rules in Proposition 2.11) we get the first claim.

For the second claim, notice that if $$x\in {\mathcal {D}}$$, picking $$y:=x$$ in our assumption and using again point (i) we deduce $$|v|_x\le |w|_x$$ (and thus $$|v|_x=|w|_x$$). Hence from what was previously proved we obtain $${\langle }v,w{\rangle }_x\ge |w|_x^2\ge |v|_x|w|_x$$, so that from (2.12f) we conclude $$|w|_xv=|v|_xw$$ and from the equality of norms we conclude $$v=w$$, as desired. $$\square$$

### Velocity of Absolutely Continuous Curves

Recall that a curve $$\gamma :[0,1]\rightarrow \mathrm{Y}$$ with values in a metric space is said to be absolutely continuous provided there is $$f\in L^1(0,1)$$ such that

\begin{aligned} \textsf {d} (\gamma _t,\gamma _s)\le \int _t^s f(r)\,\mathrm{d}r\quad \forall t,s\in [0,1],\ t<s. \end{aligned}
(2.21)

It is well-known that to any absolutely continuous curve we can associate a function $$|{\dot{\gamma }}|\in L^1(0,1)$$, called metric speed, which plays the role of the modulus of the derivative. The following proposition recalls the main properties of $$|{\dot{\gamma }}|$$; for the proof we refer to [3, Theorem 1.1.2] and its proof.

### Proposition 2.18

Let $$(\mathrm{Y},\textsf {d} )$$ be a separable metric space and $$\gamma :[0,1]\rightarrow \mathrm{Y}$$ be absolutely continuous. Then for a.e. $$t\in [0,1]$$ the limit $$|{\dot{\gamma }}_t|:=\lim _{h\rightarrow 0}\frac{\textsf {d} (\gamma _{t+h},\gamma _t)}{|h|}$$ exists. The function $$t\mapsto |{\dot{\gamma }}_t|$$ belongs to $$L^1(0,1)$$ and is the least function f, in the a.e. sense, for which (2.21) holds.

Moreover, for any $$(x_n)\subset \mathrm{Y}$$ dense, letting $$f_{n,t}:=\textsf {d} (\gamma _t,x_n)$$, the following holds: for a.e. $$t\in [0,1]$$ the function $$f_n$$ is differentiable at t for every $$n\in {\mathbb {N}}$$ and

\begin{aligned} |{\dot{\gamma }}_t|=\sup _{n\in {\mathbb {N}}}[-f_{n,t}']. \end{aligned}
(2.22)
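A numerical sketch of (2.22) (ours; the curve and the grid are arbitrary choices): for the unit-speed line $$\gamma _t=(t,0)$$ in the plane, replacing the dense sequence $$(x_n)$$ by a fine finite grid recovers $$|{\dot{\gamma }}_t|=1$$ as the supremum of the derivatives $$-f'_{n,t}$$:

```python
import math

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

gamma = lambda t: (t, 0.0)              # unit-speed line, |gamma'_t| = 1
t0, h = 0.5, 1e-6

# a finite grid standing in for a dense sequence (x_n)
grid = [(i * 0.05, j * 0.05) for i in range(-20, 41) for j in range(-20, 21)]

best = -math.inf
for x in grid:
    if d(gamma(t0), x) < 1e-9:
        continue                        # skip the point gamma_t itself
    # centered difference quotient for f'_{n}(t) with f_{n,t} = d(gamma_t, x_n)
    f_prime = (d(gamma(t0 + h), x) - d(gamma(t0 - h), x)) / (2 * h)
    best = max(best, -f_prime)

# sup_n [-f'_n] recovers the metric speed |gamma'_t| = 1
assert abs(best - 1.0) < 1e-6
```

The supremum is attained (up to round-off) at grid points lying on the forward tangent ray, for which $$t\mapsto \textsf {d} (\gamma _t,x_n)$$ decreases at unit rate.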

On a local $$\mathrm{CAT}(\kappa )$$ space more can be said: for a.e. time we have not only a ‘numerical’ value for the derivative, but also right and left derivatives as elements of the tangent cone. The key lemma needed for achieving such a result is the following (see also [33, Theorem 1.6]):

### Lemma 2.19

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$\gamma : [0,1] \rightarrow \mathrm{Y}$$ be an absolutely continuous curve. Then for almost every $$t\in [0,1]$$ we have that either $$|{\dot{\gamma }}_t|=0$$ or

\begin{aligned} \varlimsup _{\delta _2\downarrow 0}\varlimsup _{\delta _1\downarrow 0} {\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta _1},\gamma _{t+\delta _2})=0. \end{aligned}
(2.23)

### Proof

The statement is local in nature, thus, using the compactness of $$\gamma ([0,1])$$, its cover made of the balls $$B_{\textsf {r} _{\gamma _t}}(\gamma _t)$$ and Lemma 2.4, we can assume that $$\mathrm{Y}$$ is a separable $$\mathrm{CAT}(\kappa )$$ space of diameter $$<D_\kappa$$.

Let $$t\in [0,1]$$ be such that the metric derivative $$|{\dot{\gamma }}_t|$$ exists, is strictly positive and (2.22) holds for some fixed countable dense $$(x_n)\subset \mathrm{Y}$$. Then for every $$x\in \mathrm{Y}$$ we have

\begin{aligned} \begin{aligned} \varliminf _{\delta \downarrow 0}\cos \big ({\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta },x)\big )=\varliminf _{\delta \downarrow 0}\frac{\mathrm{cn}^\kappa (\textsf {d} (x,\gamma _{t+\delta }))-\mathrm{cn}^\kappa (\textsf {d} (\gamma _t,x))\mathrm{cn}^\kappa (\textsf {d} (\gamma _t,\gamma _{t+\delta }))}{\kappa \,\mathrm{sn}^\kappa (\textsf {d} (\gamma _t,\gamma _{t+\delta }))\mathrm{sn}^\kappa (\textsf {d} (\gamma _t,x))} \end{aligned} \end{aligned}

with obvious modifications for $$\kappa =0$$. Using the expansions

\begin{aligned} \begin{aligned} \mathrm{cn}^\kappa (\textsf {d} (\gamma _t,\gamma _{t+\delta }))&=1+o(\delta ),\\ \mathrm{cn}^\kappa (\textsf {d} (\gamma _{t+\delta },x))&=\mathrm{cn}^\kappa (\textsf {d} (\gamma _{ t},x))-\kappa \, \mathrm{sn}^\kappa (\textsf {d} (\gamma _{ t},x))\big (\textsf {d} (\gamma _{t+\delta },x)-\textsf {d} (\gamma _{t},x)\big )+o(\delta ),\\ \mathrm{sn}^\kappa (\textsf {d} (\gamma _{ t},\gamma _{t+\delta }))&=\delta |{\dot{\gamma }}_t|+o(\delta ), \end{aligned} \end{aligned}

we obtain

\begin{aligned} \begin{aligned} \varliminf _{\delta \downarrow 0}\cos \big ({\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta },x)\big )=-\varlimsup _{\delta \downarrow 0}\frac{\textsf {d} (\gamma _{t+\delta },x) -\textsf {d} (\gamma _{t},x)}{\delta |{\dot{\gamma }}_t|}. \end{aligned} \end{aligned}

Picking $$x=x_n$$ and recalling that by assumption $$s\mapsto f_{n,s}:=\mathrm {dist}_{x_n}(\gamma _s)$$ is differentiable at t, we see that $$\varliminf _{\delta \downarrow 0}\cos \big ({\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta },x_n)\big )=-\frac{f_{n,t}'}{|{\dot{\gamma }}_t|}$$. Hence, by the triangle inequality for angles (2.7), we obtain

\begin{aligned}&\varlimsup _{\delta _2\downarrow 0}\varlimsup _{\delta _1\downarrow 0} {\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta _1}, \gamma _{t+\delta _2})\\&\quad \le \varlimsup _{\delta _1\downarrow 0}{\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta _1},x_n)+ \varlimsup _{\delta _2\downarrow 0}{\overline{\angle }}_{\gamma _t}^\kappa (\gamma _{t+\delta _2},x_n)=2\arccos \Big (-\frac{f_{n,t}'}{|{\dot{\gamma }}_t|}\Big ) \quad \forall n\in {\mathbb {N}}. \end{aligned}

Taking the infimum in n and using (2.22) we conclude the proof. $$\square$$

We then have the following result:

### Proposition 2.20

(Right derivative of AC curves) Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$\gamma : [0,1] \rightarrow \mathrm{Y}$$ be an absolutely continuous curve. For any $$t,s\in [0,1]$$ write, for brevity, $$\textsf {G} _t^s$$ in place of $$\textsf {G} _{\gamma _t}^{\gamma _s}$$ whenever this is well-defined.

Then for every $$t\in [0,1]$$ for which $$|{\dot{\gamma }}_t|$$ exists and the conclusion of Lemma 2.19 holds (and thus in particular for a.e. t) we have that:

1. (i)

the limit of $$\frac{1}{s-t}(\textsf {G} ^s_t)'_0$$ as $$s\downarrow t$$ exists in $$\mathrm{T}_{\gamma _t}\mathrm{Y}$$; we denote it by $${\dot{\gamma }}^+_t$$, and

2. (ii)

for every locally Lipschitz and locally semiconvex function f defined on some neighbourhood of $$\gamma _t$$ it holds that

\begin{aligned} \lim _{h\downarrow 0}\frac{f(\gamma _{t+h})-f(\gamma _t)}{h}=\mathrm{d}_{\gamma _t}f({\dot{\gamma }}^+_t). \end{aligned}
(2.24)

### Proof

(i) We shall prove that $$s\mapsto \frac{1}{s-t}(\textsf {G} ^s_t)'_0\in \mathrm{T}_{\gamma _t}\mathrm{Y}$$ has a limit as $$s\downarrow t$$ for any t for which $$|{\dot{\gamma }}_t|$$ exists and the conclusions of Lemma 2.19 hold. Notice that $$| \frac{1}{s-t}(\textsf {G} ^s_t)'_0|_{\gamma _t}=\frac{\textsf {d} (\gamma _t,\gamma _s)}{|s-t|}\rightarrow |{\dot{\gamma }}_t|$$, thus if $$|{\dot{\gamma }}_t|=0$$ the conclusion follows. If $$|{\dot{\gamma }}_t|>0$$, by the convergence of norms that we just proved and recalling (2.11) and Theorem 2.9, to conclude it is sufficient to prove that $$\lim _{s_2\downarrow t}\lim _{s_1\downarrow t}\angle _{\gamma _t}((\textsf {G} _t^{s_2})'_0,(\textsf {G} _t^{s_1})'_0)=0$$. This is a direct consequence of (2.23) and the monotonicity property (2.8), which ensures that

\begin{aligned} \angle _{\gamma _t}((\textsf {G} _t^{s_2})'_0,(\textsf {G} _t^{s_1})'_0)\le {\overline{\angle }}_{\gamma _t}(\textsf {G} _t^{s_2}(1),\textsf {G} _t^{s_1}(1)) ={\overline{\angle }}^\kappa _{\gamma _t}(\gamma _{t+s_2},\gamma _{t+s_1}). \end{aligned}

(ii) If $$|{\dot{\gamma }}_t|=0$$ both sides of (2.24) are easily seen to be zero, so that the conclusion follows. Otherwise, for $$\delta _2\ge \delta _1>0$$ put for brevity $$\eta _{\delta _1,\delta _2}:=(\textsf {G} _{\gamma _t}^{\gamma _{t+\delta _2}})_{\delta _1/\delta _2}$$ and notice that the monotonicity (2.8) of angles gives

\begin{aligned} \cos \big ({\overline{\angle }}^\kappa _{\gamma _t}(\gamma _{t+\delta _1}, \eta _{\delta _1,\delta _2})\big )\ge \cos \big ({\overline{\angle }}^\kappa _{\gamma _t} (\gamma _{t+\delta _1},\gamma _{t+\delta _2})\big ). \end{aligned}

From (2.10) we have that

\begin{aligned} \cos \big ({\overline{\angle }}^\kappa _{\gamma _t}(\gamma _{t+\delta _1}, \eta _{\delta _1,\delta _2})\big )=\frac{\textsf {d} ^2(\gamma _t,\gamma _{t+\delta _1}) +\textsf {d} ^2(\gamma _t,\eta _{\delta _1,\delta _2})-\textsf {d} ^2(\gamma _{t+\delta _1}, \eta _{\delta _1,\delta _2})}{2\textsf {d} (\gamma _t,\gamma _{t+\delta _1}) \textsf {d} (\gamma _t,\eta _{\delta _1,\delta _2})}+O(\delta _1) \end{aligned}

and thus using the identity $$\textsf {d} (\gamma _t,\eta _{\delta _1,\delta _2})=\frac{\delta _1}{\delta _2} \textsf {d} (\gamma _t,\gamma _{t+\delta _2})$$ and passing to the limit recalling (2.23) we deduce that

\begin{aligned} \varlimsup _{\delta _2\downarrow 0}\varlimsup _{\delta _1\downarrow 0}\frac{\textsf {d} ^2 (\gamma _{t+\delta _1},\eta _{\delta _1,\delta _2})}{\delta _1^2}=0. \end{aligned}
(2.25)

Now let L be the Lipschitz constant of f in some neighbourhood of $$\gamma _t$$ and notice that for $$\delta _2>0$$ sufficiently small we have

\begin{aligned} \begin{aligned} \varlimsup _{\delta _1\downarrow 0}\Big |\mathrm{d}_{\gamma _t}f\big (\tfrac{1}{\delta _2} (\textsf {G} _t^{t+\delta _2})'_0\big )-\frac{f(\gamma _{t+\delta _1})-f(\gamma _t)}{\delta _1} \Big |&=\varlimsup _{\delta _1\downarrow 0}\Big |\frac{f(\eta _{\delta _1,\delta _2}) -f(\gamma _t)}{\delta _1}\\&\qquad -\frac{f(\gamma _{t+\delta _1})-f(\gamma _t)}{\delta _1}\Big |\\&\le L\varlimsup _{\delta _1\downarrow 0}\frac{\textsf {d} (\eta _{\delta _1,\delta _2},\gamma _{t+\delta _1})}{\delta _1}. \end{aligned} \end{aligned}

The conclusion follows by letting $$\delta _2\downarrow 0$$ and using (2.25). $$\square$$
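As an illustration of (2.24) (our own sketch, under the assumption that $$\mathrm{Y}$$ is the Euclidean plane, where the differential of $$f=\mathrm {dist}_p$$ away from p is the usual gradient and the right derivative of a smooth curve is its velocity), the chain rule can be verified numerically:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def curve(t):
    # the unit circle (our choice of absolutely continuous curve)
    return (math.cos(t), math.sin(t))

p = (2.0, 0.0)          # reference point, chosen off the curve
def f(x):
    # f = dist(. , p): convex and Lipschitz on the plane
    return dist(x, p)

t, h = 0.3, 1e-6
lhs = (f(curve(t + h)) - f(curve(t))) / h      # left-hand side of (2.24)

g = curve(t)
grad = ((g[0] - p[0]) / f(g), (g[1] - p[1]) / f(g))  # gradient of f at gamma_t
v = (-math.sin(t), math.cos(t))                      # right velocity of gamma
rhs = grad[0] * v[0] + grad[1] * v[1]                # d_{gamma_t} f (velocity)

assert abs(lhs - rhs) < 1e-4
```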

### Remark 2.21

The conclusions of Lemma 2.19 and Proposition 2.20 hold for left derivatives as well: for a.e. $$t\in \{ |{\dot{\gamma }}_t|>0 \}$$ we have $$\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t-\delta },\gamma _{t-\varepsilon })=0$$, and for every such t the limit of $$\frac{1}{t-s}(\textsf {G} ^s_t)'_0$$ as $$s\uparrow t$$ exists. We denote this limit by $${\dot{\gamma }}_t^-$$. $$\square$$

### Lemma 2.22

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space and $$\gamma :[0,1]\rightarrow \mathrm{Y}$$ be an absolutely continuous curve. Then, for almost every $$t\in [0,1]$$, the limits $${\dot{\gamma }}^+_t$$ and $${\dot{\gamma }}^-_t$$ are antipodal, i.e.

\begin{aligned} {\dot{\gamma }}_t^+\oplus {\dot{\gamma }}_t^-=0. \end{aligned}

### Proof

We prove that

\begin{aligned} \angle _{\gamma _t}({\dot{\gamma }}_t^+,{\dot{\gamma }}_t^-)=\lim _{\delta \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t-\delta })=\pi \end{aligned}
(2.26)

for almost every $$t\in [0,1]$$. The claim follows from this.

If $$|{\dot{\gamma }}_t|=0$$ then $${\dot{\gamma }}_t^+={\dot{\gamma }}_t^-=0$$ and the claim is clear.

By Lemma 2.19, Proposition 2.20 and Remark 2.21, almost every $$t\in \{ |{\dot{\gamma }}_t|>0 \}$$ satisfies conditions (i) and (ii) below.

(i) $$\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t+\varepsilon })=0$$ and $$\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t-\delta },\gamma _{t-\varepsilon })=0$$;

(ii) $$\lim _{\delta \downarrow 0}\frac{\textsf {d} (\gamma _{t+\delta },\gamma _{t})}{\delta }=\lim _{\delta \downarrow 0}\frac{\textsf {d} (\gamma _{t-\delta },\gamma _{t})}{\delta }=\lim _{\delta \downarrow 0}\frac{\textsf {d} (\gamma _{t-\delta },\gamma _{t+\delta })}{2\delta }=|{\dot{\gamma }}_t|$$ (including the existence of these limits).

We fix $$t\in [0,1]$$ satisfying (i) and (ii). Note that, by the monotonicity of angles (2.8), we have the estimate

\begin{aligned} \begin{aligned} \angle _{\gamma _t}({\dot{\gamma }}_t^+,{\dot{\gamma }}_t^-)=&\lim _{\delta \downarrow 0} \angle _{\gamma _t}((\textsf {G} _t^{t+\delta })_0',(\textsf {G} _t^{t-\delta })_0') =\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0} {\overline{\angle }}_{\gamma _t}((\textsf {G} _t^{t+\delta })_\varepsilon ,(\textsf {G} _t^{t-\delta })_\varepsilon )\\ \le&\lim _{\delta \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t-\delta }). \end{aligned} \end{aligned}

To prove the opposite inequality, we use the triangle inequality (2.7) to obtain

\begin{aligned} {\overline{\angle }}_{\gamma _t}((\textsf {G} _t^{t+\delta })_\varepsilon ,(\textsf {G} _t^{t-\delta })_\varepsilon )&\ge {\overline{\angle }}_{\gamma _t}((\textsf {G} _t^{t+\delta })_\varepsilon , \gamma _{t-\varepsilon \delta })-{\overline{\angle }}_{\gamma _t} (\gamma _{t-\varepsilon \delta },(\textsf {G} _t^{t-\delta })_\varepsilon )\\&\ge {\overline{\angle }}_{\gamma _t}(\gamma _{t+\varepsilon \delta }, \gamma _{t-\varepsilon \delta })-{\overline{\angle }}_{\gamma _t}(\gamma _{t+\varepsilon \delta }, (\textsf {G} _t^{t+\delta })_\varepsilon )-{\overline{\angle }}_{\gamma _t}(\gamma _{t-\varepsilon \delta }, (\textsf {G} _t^{t-\delta })_\varepsilon )\\&\ge {\overline{\angle }}_{\gamma _t}(\gamma _{t+\varepsilon \delta },\gamma _{t-\varepsilon \delta }) -{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t+\varepsilon \delta })- {\overline{\angle }}_{\gamma _t}(\gamma _{t-\delta },\gamma _{t-\varepsilon \delta }). \end{aligned}

Here the last estimate follows simply by the monotonicity of angles (2.8). By (i), it follows that

\begin{aligned} \angle _{\gamma _t}({\dot{\gamma }}_t^+,{\dot{\gamma }}_t^-)&=\lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\overline{\angle }}_{\gamma _t}((\textsf {G} _t^{t+\delta })_\varepsilon ,(\textsf {G} _t^{t-\delta })_\varepsilon )\ge \lim _{\delta \downarrow 0}\lim _{\varepsilon \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\varepsilon \delta },\gamma _{t-\varepsilon \delta })\\&=\lim _{\delta \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t-\delta }). \end{aligned}

It remains to show that $$\lim _{\delta \downarrow 0}{\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta },\gamma _{t-\delta })=\pi$$. By (ii) and (2.10), we have

\begin{aligned} \lim _{\delta \downarrow 0}\cos {\overline{\angle }}_{\gamma _t}(\gamma _{t+\delta }, \gamma _{t-\delta })&=\lim _{\delta \downarrow 0}\frac{\textsf {d} ^2(\gamma _{t+\delta }, \gamma _t)+\textsf {d} ^2(\gamma _{t-\delta },\gamma _t)-\textsf {d} ^2(\gamma _{t+\delta },\gamma _{t-\delta })}{2\textsf {d} (\gamma _{t+\delta },\gamma _t)\textsf {d} (\gamma _{t-\delta },\gamma _t)}\\&=\frac{|{\dot{\gamma }}_t|^2+|{\dot{\gamma }}_t|^2-(2|{\dot{\gamma }}_t|)^2}{2|{\dot{\gamma }}_t|^2}=-1, \end{aligned}

implying the claim and completing the proof. $$\square$$
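The limit in (2.26) can be checked numerically in the simplest CAT(0) space, the Euclidean plane, where the comparison angle is given by the law of cosines as in (2.10) with $$\kappa =0$$; the circle parametrization below is our own choice of example:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def curve(t):
    # the unit circle, traversed with unit speed
    return (math.cos(t), math.sin(t))

def comparison_angle(p, q, r):
    # Euclidean comparison angle at p between q and r (formula (2.10), kappa = 0)
    a, b, c = dist(p, q), dist(p, r), dist(q, r)
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

t = 0.4
for delta in (1e-1, 1e-2, 1e-3):
    ang = comparison_angle(curve(t), curve(t + delta), curve(t - delta))
    # for the circle one computes the comparison angle to be exactly pi - delta
    assert abs(ang - math.pi) < 2 * delta
```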

### Barycenters and Rigidity

In this section, we review the concept of ‘barycenter’ of a probability measure on a $$\mathrm{CAT}(0)$$ space. With the exception of the rigidity statement given by Proposition 2.27, the content comes from [41].

Fix a $$\mathrm{CAT}(0)$$ space $$\mathrm{Y}$$ and denote by $${\mathscr {P}}(\mathrm{Y})$$ the set of all Borel probability measures on $$\mathrm{Y}$$ having separable support, and by $${\mathscr {P}}_1(\mathrm{Y})\subset {\mathscr {P}}(\mathrm{Y})$$ the set of those with finite first moment, i.e. those $$\mu \in {\mathscr {P}}(\mathrm{Y})$$ such that for some, and thus all, $$y\in \mathrm{Y}$$ it holds that $$\int \textsf {d} (\cdot ,y)\,\mathrm{d}\mu <\infty$$.

For a proof of the following result we refer to [41, Proposition 4.3].

### Proposition 2.23

(Definition of barycenter) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(0)$$ space, $$\mu \in {\mathscr {P}}_1(\mathrm{Y})$$ and $$y\in \mathrm{Y}$$. Then

\begin{aligned} \mathrm{Y}\ni x\longmapsto \int \big [\textsf {d} ^2(\cdot ,x)-\textsf {d} ^2(\cdot ,y)\big ]\,\mathrm{d}\mu \in {\mathbb {R}}\end{aligned}

admits a unique minimizer. The minimizer does not depend on y; it is called the barycenter of $$\mu$$ and is denoted by $$\textsf {Bar} (\mu )\in \mathrm{Y}$$.
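A hedged sketch of Proposition 2.23 in the Euclidean plane (a CAT(0) space), where the barycenter of a uniform measure on finitely many points is the arithmetic mean; the sample points and the auxiliary point y are our own choices:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

pts = [(0.0, 0.0), (2.0, 0.0), (1.0, 3.0)]   # support of mu (uniform weights)
y_ref = (5.0, 5.0)   # the auxiliary point y: it shifts the functional by a constant

def functional(x):
    # the functional of Proposition 2.23 for mu = (1/3) sum of Diracs
    return sum(dist(p, x) ** 2 - dist(p, y_ref) ** 2 for p in pts) / len(pts)

bar = (sum(p[0] for p in pts) / 3, sum(p[1] for p in pts) / 3)   # the mean

# the mean beats a grid of competitors around it
for i in range(-10, 11):
    for j in range(-10, 11):
        x = (bar[0] + 0.3 * i, bar[1] + 0.3 * j)
        assert functional(bar) <= functional(x)
```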

The basic properties of barycenters that we shall need are collected in the following statement:

### Theorem 2.24

Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(0)$$ space. Then the following hold:

1. (i)

Variance inequality. For any $$\mu \in {\mathscr {P}}_1(\mathrm{Y})$$ and $$p\in \mathrm{Y}$$ it holds

\begin{aligned} \int \big [\textsf {d} ^2(\cdot ,p)-\textsf {d} ^2(\cdot ,\textsf {Bar} (\mu ))\big ]\,\mathrm{d}\mu \ge \textsf {d} ^2(p,\textsf {Bar} (\mu )). \end{aligned}
(2.27)
2. (ii)

Jensen’s inequality. Let $$\varphi :\mathrm{Y}\rightarrow [0,+\infty )$$ be convex and lower semicontinuous. Then for every $$\mu \in {\mathscr {P}}_1(\mathrm{Y})$$ we have

\begin{aligned} \varphi (\textsf {Bar} (\mu ))\le \int \varphi \,\mathrm{d}\mu . \end{aligned}
(2.28)

### Proof

The variance inequality (2.27) is proved in [41, Proposition 4.4], while Jensen’s inequality comes from [41, Theorem 6.2]. $$\square$$

Applying Jensen’s inequality (2.28) to the convex and Lipschitz function $$\varphi :=\textsf {d} (\cdot ,p)$$ we see that the inequality

\begin{aligned} \textsf {d} (\textsf {Bar} (\mu ),p)\le \int \textsf {d} (x,p)\,\mathrm{d}\mu (x) \end{aligned}

holds for any $$\mu \in {\mathscr {P}}_1(\mathrm{Y})$$ and $$p\in \mathrm{Y}$$. Our aim is now to study the equality case; to this end we first recall the notion of non-branching geodesics.
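The last inequality, and the equality case studied below, can be illustrated on the real line (a CAT(0) space), where the barycenter of a uniform measure is the mean; the sample points are our own choices:

```python
def bar(xs):
    # on the real line the barycenter of a uniform measure is the mean
    return sum(xs) / len(xs)

def avg_dist(xs, p):
    # int d(x, p) dmu(x) for mu uniform on xs
    return sum(abs(x - p) for x in xs) / len(xs)

p = 0.0
on_ray = [1.0, 2.0, 3.0]    # concentrated on the geodesic ray [0, +inf) from p
mixed = [-1.0, 2.0, 3.0]    # mass on both sides of p

# equality when mu sits on a ray from p, strict inequality otherwise
assert abs(abs(bar(on_ray) - p) - avg_dist(on_ray, p)) < 1e-12
assert abs(bar(mixed) - p) < avg_dist(mixed, p)
```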

### Definition 2.25

(Non-branching from p) We say that a geodesic space $$(\mathrm{X},\textsf {d} )$$ is non-branching from $$p\in \mathrm{X}$$ provided the following holds: if, for given points $$q,x_1,x_2\in \mathrm{X}$$ with $$q\ne p$$, we have that there are geodesics $$\gamma _1,\gamma _2$$ starting from p and passing through $$q,x_1$$ and $$q,x_2$$, respectively, then there is a geodesic $$\gamma$$ starting from p and passing through $$q,x_1,x_2$$.

Here ‘passing through’ $$q,x_i$$ implies nothing about the order in which these points are met. It is not hard to see that the above definition is equivalent to the more classical one, requiring for any $$t\in (0,1]$$ the injectivity of the evaluation map $$\gamma \mapsto \gamma _t$$ on the space of constant speed geodesics $$[0,1]\rightarrow \mathrm{X}$$ starting from p.

It is easy to verify that if $$q\ne p$$ and $$(x_i)\subset \mathrm{X}$$ are given points such that for every i there is a geodesic starting from p and passing through $$q,x_i$$, then there is a curve $$\gamma$$ starting from p and passing through q and all the $$x_i$$’s; such a curve is either a geodesic or a half-line, i.e. a map from $$[0,+\infty )$$ to $$\mathrm{X}$$ whose restriction to any compact interval is a geodesic.

The main example of a space that is non-branching from one of its points is the cone over a metric space; here the relevant point is the vertex 0.

### Lemma 2.26

(Tangent cones are non-branching from the origin) Let $$\mathrm{X}$$ be any metric space, and $$C(\mathrm{X})$$ the Euclidean cone over $$\mathrm{X}$$. Then $$C(\mathrm{X})$$ is non-branching from its origin 0. In particular, for a local $$\mathrm{CAT}(\kappa )$$ space $$\mathrm{Y}$$ we have that

\begin{aligned} \mathrm{T}_p \mathrm{Y}\quad \text {is non-branching from } 0\quad \text {for every}\quad p\in \mathrm{Y}. \end{aligned}
(2.29)

### Proof

By direct computation based on the definition of the cone distance we see that if $$\gamma$$ is a constant speed geodesic starting from the origin 0, it must hold that $$\gamma _t=t\gamma _1$$, where the ‘product’ of t and $$\gamma _1$$ is defined as before Proposition 2.11. Thus, for two given such curves $$\gamma ,\eta$$, the definition of the cone distance again gives $$\textsf {d} _{C(\mathrm{X})}(\gamma _t,\eta _t)=\textsf {d} _{C(\mathrm{X})}(t\gamma _1,t\eta _1)=t \textsf {d} _{C(\mathrm{X})}(\gamma _1,\eta _1)$$. Hence if $$\gamma _1\ne \eta _1$$ we also have $$\gamma _t\ne \eta _t$$ for every $$t\in (0,1]$$. This is sufficient to conclude. $$\square$$
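The scaling identity used in the proof can be checked directly from the Euclidean cone distance formula; the following sketch (our own, with an arbitrary base distance) verifies $$\textsf {d} _{C(\mathrm{X})}(t\gamma _1,t\eta _1)=t\,\textsf {d} _{C(\mathrm{X})}(\gamma _1,\eta _1)$$:

```python
import math

def cone_dist(s, t, d_base):
    # distance in the Euclidean cone C(X) between (s, x) and (t, y)
    # when the base distance is d_X(x, y) = d_base
    theta = min(d_base, math.pi)
    return math.sqrt(s * s + t * t - 2 * s * t * math.cos(theta))

d_base = 0.7                      # our arbitrary choice of base distance
d1 = cone_dist(1.0, 1.0, d_base)  # d(gamma_1, eta_1), both at radius 1

# geodesics from the vertex scale linearly: d(t.gamma_1, t.eta_1) = t d(gamma_1, eta_1)
for t in (0.25, 0.5, 0.9):
    assert abs(cone_dist(t, t, d_base) - t * d1) < 1e-12
```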

We now come to the rigidity statement:

### Proposition 2.27

(Rigidity) Let $$\mathrm{Y}$$ be a $$\mathrm{CAT}(0)$$ space and $$\mu \in {\mathscr {P}}_1(\mathrm{Y})$$. Assume that for some point p it holds that

\begin{aligned} \textsf {d} (\textsf {Bar} (\mu ),p) \ge \int \textsf {d} (x,p) \, \mathrm{d}\mu . \end{aligned}
(2.30)

Then

\begin{aligned} \textsf {d} (x,\textsf {Bar} (\mu ))=\big |\textsf {d} (x,p)-\textsf {d} (\textsf {Bar} (\mu ),p)\big |\quad \text { for }\mu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}
(2.31)

In particular, if $$\mathrm{Y}$$ is non-branching from p, then the measure $$\mu$$ is concentrated on the image of a curve $$\gamma$$ starting from p which is either a geodesic or a half-line.

### Proof

By the discussion following Definition 2.25, we see that it is sufficient to prove (2.31). To this aim, notice that the triangle inequality gives

\begin{aligned} \big |\textsf {d} (x,p)-\textsf {d} (x,\textsf {Bar} (\mu ))\big |^2-\textsf {d} ^2(\textsf {Bar} (\mu ),p)\le 0\quad \forall x\in \mathrm{Y}. \end{aligned}
(2.32)

On the other hand we have

\begin{aligned} \begin{aligned}&\int \big |\textsf {d} (x,p)-\textsf {d} (\textsf {Bar} (\mu ),p)\big |^2-\textsf {d} ^2(x,\textsf {Bar} (\mu ))\,\mathrm{d}\mu (x)\\&\quad =\int \textsf {d} ^2(x,p)+\textsf {d} ^2(\textsf {Bar} (\mu ),p)-2\textsf {d} (x,p)\textsf {d} (\textsf {Bar} (\mu ),p) -\textsf {d} ^2(x,\textsf {Bar} (\mu )) \,\mathrm{d}\mu (x)\\&\qquad \text {by}\,(2.27)\qquad \ge \int 2 \textsf {d} ^2(p,\textsf {Bar} (\mu ))-2\textsf {d} (x,p)\textsf {d} (\textsf {Bar} (\mu ),p)\,\mathrm{d}\mu (x)\\&\quad = 2\textsf {d} (p,\textsf {Bar} (\mu ))\Big (\textsf {d} (p,\textsf {Bar} (\mu ))-\int \textsf {d} (x,p)\,\mathrm{d}\mu (x)\Big )\\&\qquad \text {by}\,(2.30)\qquad \ge 0. \end{aligned} \end{aligned}

This inequality and (2.32) give (2.31) and the conclusion. $$\square$$

### Remark 2.28

It is easily seen that the non-branching assumption in the preceding proposition is needed. Indeed, consider the ‘tripod’, i.e. the $$\mathrm{CAT}(0)$$ space $$\mathrm{Y}$$ obtained as the Euclidean cone over the space $$\{a,b,c\}$$ equipped with the discrete metric. Then $$\mathrm{Y}$$ is not non-branching from a and, indeed, the conclusion of Proposition 2.27 fails for the measure $$\mu =\frac{1}{3}(\delta _a+\delta _b+\delta _c)$$, even though the identity (2.30) holds for $$\mu$$. Note that in this case $$\textsf {Bar} (\mu )=0$$. $$\square$$
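A hedged numerical check of the remark's claim $$\textsf {Bar} (\mu )=0$$, modelling the tripod as three unit segments glued at a common vertex and carrying the path metric (this concrete model is our own choice of representation):

```python
# a point of the tripod is (leg, r) with r in [0, 1]; r = 0 is the common
# vertex 0 on every leg, and the tips a, b, c sit at r = 1
def tripod_dist(p, q):
    (lp, rp), (lq, rq) = p, q
    # same leg: move along it; different legs: pass through the vertex
    return abs(rp - rq) if lp == lq else rp + rq

tips = [('a', 1.0), ('b', 1.0), ('c', 1.0)]

def cost(x):
    # the functional minimized by Bar(mu), mu uniform on the three tips
    return sum(tripod_dist(x, tip) ** 2 for tip in tips) / 3

vertex = ('a', 0.0)
# the vertex strictly beats every point in the interior of any leg: Bar(mu) = 0
for leg in 'abc':
    for k in range(1, 10):
        assert cost(vertex) < cost((leg, k / 10))
```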

## Geometric Tangent Bundle $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$

In this section, we fix a separable local $$\mathrm{CAT}(\kappa )$$ space $$\mathrm{Y}$$. Our first aim here is to give a measurable structure to the ‘geometric tangent bundle’ $$\mathrm{T}_G\mathrm{Y}$$, i.e. the collection of all tangent cones on $$\mathrm{Y}$$. Once this is done, we will endow $$\mathrm{Y}$$ with a non-negative and non-zero Radon measure $$\mu$$ and study the space of ‘$$L^2$$-sections’ of $$\mathrm{T}_G\mathrm{Y}$$, which we shall denote by $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$.

As a set, the geometric tangent bundle $$\mathrm{T}_G\mathrm{Y}$$ is defined as

\begin{aligned} \mathrm{T}_G\mathrm{Y}:=\big \{(x,v)\;\big |\;x\in \mathrm{Y},\,v\in \mathrm{T}_x\mathrm{Y}\big \}. \end{aligned}

We denote by $$\pi ^\mathrm{Y}:\mathrm{T}_G\mathrm{Y}\rightarrow \mathrm{Y}$$ the canonical projection defined by $$\pi ^\mathrm{Y}(x,v):=x$$ and call section of $$\mathrm{T}_G\mathrm{Y}$$ a map $$v:\mathrm{Y}\rightarrow \mathrm{T}_G\mathrm{Y}$$ such that $$\pi ^\mathrm{Y}(v(x))=x$$ for every $$x\in \mathrm{Y}$$.

We now endow $$\mathrm{T}_G\mathrm{Y}$$ with a $$\sigma$$-algebra $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$, defined as the smallest $$\sigma$$-algebra such that:

1. (i)

The projection map $$\pi ^\mathrm{Y}:\mathrm{T}_G\mathrm{Y}\rightarrow \mathrm{Y}$$ is measurable, $$\mathrm{Y}$$ being equipped with Borel sets.

2. (ii)

For every $$x\in \mathrm{Y}$$ and $$y\in B_{\textsf {r} _x}(x)$$ the map $$\mathrm{d}\,\mathrm {dist}_y:\,\mathrm{T}_G\mathrm{Y}\rightarrow {\mathbb {R}}$$, defined as

\begin{aligned} (\mathrm{d}\,\mathrm {dist}_y)(z,v):=\left\{ \begin{array}{ll} \mathrm{d}_z\mathrm {dist}_y(v)&\text { if }(z,v)\in (\pi ^\mathrm{Y})^{-1}\big (B_{\textsf {r} _x}(x)\big ),\\ 0&\text { otherwise,} \end{array}\right. \end{aligned}

is measurable.

It is clear that these define a $$\sigma$$-algebra $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$, to which we shall refer as the class of Borel subsets of $$\mathrm{T}_G\mathrm{Y}$$, hereafter speaking about Borel (rather than measurable) maps. This is a slight abuse of terminology, since we are not defining any topology on $$\mathrm{T}_G\mathrm{Y}$$. The abuse of terminology is justified by the fact that if $$\mathrm{Y}$$ is a smooth Riemannian manifold, then $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$ coincides with the $$\sigma$$-algebra of Borel subsets of the tangent bundle of $$\mathrm{Y}$$.

The following result gives a basic description of $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$:

### Proposition 3.1

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space which is also separable and $$(x_n)\subset \mathrm{Y}$$ a countable set of points such that $$\bigcup _nB_{\textsf {r} _{x_n}}(x_n)=\mathrm{Y}$$ (these exist by the Lindelöf property of $$\mathrm{Y}$$). For each n, let $$(x_{n,m})\subset B_{\textsf {r} _{x_n}}(x_n)$$ be countable and dense.

Then $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$ coincides with the smallest $$\sigma$$-algebra $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})$$ satisfying (i) above and

1. (ii')

For every $$n,m\in {\mathbb {N}}$$ the function $$\mathrm{d}\,\mathrm {dist}_{x_{n,m}}$$ is measurable.

Moreover, for any $$x\in \mathrm{Y}$$ the measurable structure induced on $$\mathrm{T}_x\mathrm{Y}$$ by $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$ coincides with the Borel structure of $$(\mathrm{T}_x\mathrm{Y},\textsf {d} _x)$$.

### Proof

It is clear that $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})\subset {\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$. To prove the other inclusion we start by observing that the continuity of $$x\mapsto \textsf {r} _x$$ ensures that $$B_{\textsf {r} _x}(x)\subset \bigcup _n B_{\textsf {r} _{x_n}}(x_n)$$ if $$x_n\rightarrow x$$, thus to conclude it is sufficient to show that for given $$n\in {\mathbb {N}}$$ and $$y\in B_{\textsf {r} _{x_n}}(x_n)$$ the map $$(\pi ^\mathrm{Y})^{-1}(B_{\textsf {r} _{x_{n}}}(x_n))\ni (x,v)\mapsto \mathrm{d}_x\mathrm {dist}_y(v)$$ is $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})$$-measurable. Keeping in mind point (i) of Proposition 2.17, this will be achieved if we prove that:

1. (a)

the map $$\mathrm{T}_y\mathrm{Y}\ni v\mapsto |v|_y$$ is measurable w.r.t. the $$\sigma$$-algebra induced by $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})$$,

2. (b)

$$(\pi ^\mathrm{Y})^{-1}(B_{\textsf {r} _{x_{n}}}(x_n))\ni (x,v)\mapsto {\langle }v,(\textsf {G} _x^y )'_0{\rangle }_x$$ is $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})$$-measurable.

Point (a) is a direct consequence of point (ii) of Proposition 2.17. For (b), we notice that by assumption the claim is true if $$y=x_{n,m}$$ for some n, m. Then the general case follows from the continuity of the scalar product established in Proposition 2.11 and the continuity of the map $$B_{\textsf {r} _{x_n}}(x_n)\ni y\mapsto (\textsf {G} _x^y)'_0$$ proved in Theorem 2.9.

For the second claim, denote by $${\mathcal {B}}(\mathrm{T}_x\mathrm{Y})$$ the collection of Borel sets in $$(\mathrm{T}_x\mathrm{Y},\textsf {d} _x)$$ and by $${\mathcal {A}}_x$$ the $$\sigma$$-algebra induced by $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$ on $$\mathrm{T}_x\mathrm{Y}$$. Then the continuity of the ‘norm’ and ‘scalar product’ on $$\mathrm{T}_x\mathrm{Y}$$ proved in Proposition 2.11 and the already recalled point (i) of Proposition 2.17 give the inclusion $${\mathcal {A}}_x\subset {\mathcal {B}}(\mathrm{T}_x\mathrm{Y})$$. For the opposite inclusion it is sufficient to prove that for any $$v\in \mathrm{T}_x\mathrm{Y}$$ the map $$\mathrm{T}_x\mathrm{Y}\ni w\mapsto \textsf {d} _x^2(v,w)$$ is $${\mathcal {A}}_x$$-measurable. Since in (a) above we have already proved that $$\mathrm{T}_x\mathrm{Y}\ni w\mapsto |w|_x$$ is $${\mathcal {A}}_x$$-measurable, by (2.12b) it is sufficient to prove that $$\mathrm{T}_x\mathrm{Y}\ni w\mapsto {\langle }w,v{\rangle }_x$$ is $${\mathcal {A}}_x$$-measurable as well. For v of the form $$(\textsf {G} _x^y)'_0$$ for some $$y\in B_{\textsf {r} _x}(x)$$ this can be proved as in (b) above. Then the general case follows by the positive homogeneity of the scalar product given in (2.12d) and the density result in Theorem 2.9. $$\square$$

### Corollary 3.2

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space which is also separable. Then $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$ is countably generated.

### Proof

By definition, the $$\sigma$$-algebra $${\mathcal {B}}'(\mathrm{T}_G\mathrm{Y})$$ defined in Proposition 3.1 is countably generated. Thus the same holds for $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$. $$\square$$

### Corollary 3.3

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space which is also separable. Let us denote by $${\textsf {Norm}} :\mathrm{T}_G\mathrm{Y}\rightarrow {\mathbb {R}}^+$$ the map sending $$(x,v)$$ to $$|v|_x$$. Then $${\textsf {Norm}}$$ is a Borel function.

### Proof

Given that $$x\mapsto \textsf {r} _x$$ is continuous, for any $$z\in \mathrm{Y}$$ we can find $$\lambda _z\in (0,\textsf {r} _z)$$ such that $$B_{\lambda _z}(z)\subset B_{\textsf {r} _x}(x)$$ whenever $$x\in B_{\lambda _z}(z)$$. By the Lindelöf property, to get the statement it is sufficient to prove that $$(\pi ^\mathrm{Y})^{-1}\big (B_{\lambda _z}(z)\big )\ni (x,v)\mapsto |v|_x$$ is Borel for every $$z\in \mathrm{Y}$$. Fix $$z\in \mathrm{Y}$$ and choose a dense sequence $$(y_n)\subset B_{\lambda _z}(z)$$. We know from point (ii) of Proposition 2.17 that $$|v|_x=-\inf _n\mathrm{d}_x\mathrm {dist}_{y_n}(v)$$ holds for every $$(x,v)\in (\pi ^\mathrm{Y})^{-1}\big (B_{\lambda _z}(z)\big )$$. Thus the required measurability follows from the definition of $${\mathcal {B}}(\mathrm{T}_G\mathrm{Y})$$, which grants that $$(x,v)\mapsto \mathrm{d}_x\mathrm {dist}_{y_n}(v)$$ is measurable on $$(\pi ^\mathrm{Y})^{-1}\big (B_{\lambda _z}(z)\big )$$ for every choice of $$n\in {\mathbb {N}}$$. $$\square$$

We shall say that a section $$v:\mathrm{Y}\rightarrow \mathrm{T}_G\mathrm{Y}$$ is simple provided there are $$(y_n)\subset \mathrm{Y}$$, $$(\alpha _n)\subset {\mathbb {R}}^+$$ and a Borel partition $$(E_n)$$ of $$\mathrm{Y}$$ such that for every $$n\in {\mathbb {N}}$$ and $$x\in E_n$$ we have $$y_n\in B_{\textsf {r} _x}(x)$$ and $$v(x)=\alpha _n(\textsf {G} _x^{y_n})'_0$$. We will use the notation $$v=\sum _n{\chi }_{E_n}\alpha _n(\textsf {G} _\cdot ^{y_n})'_0$$ for simple sections. Notice that, arguing as in the proof of Proposition 3.1, we see that simple sections are automatically Borel.

The following lemma will be useful:

### Lemma 3.4

Let $$\mathrm{Y}$$ be a separable local $$\mathrm{CAT}(\kappa )$$ space. Then for every Borel section v of $$\mathrm{T}_G\mathrm{Y}$$ and $$\varepsilon >0$$ there is a simple section $${\tilde{v}}$$ such that $$\textsf {d} _x\big (v(x),{\tilde{v}}(x)\big )<\varepsilon$$ for every $$x\in \mathrm{Y}$$.

### Proof

Using the Lindelöf property of $$\mathrm{Y}$$ and the covering made by $$B_{\textsf {r} _x/2}(x)$$ it is easy to see that we can reduce to the case in which $$\mathrm{Y}$$ is $$\mathrm{CAT}(\kappa )$$ and, for any $$x,y\in \mathrm{Y}$$, it holds that $$y\in B_{\textsf {r} _x}(x)$$. Assume this is the case and let $$(y_n)\subset \mathrm{Y}$$ be countable and dense. By Theorem 2.9, for every $$x\in \mathrm{Y}$$ the set $$\big \{r(\textsf {G} _x^{y_n})'_0\,:\,r\in {\mathbb {Q}}^+,\ n\in {\mathbb {N}}\big \}$$ is dense in $$\mathrm{T}_x\mathrm{Y}$$. Let $$i\mapsto (r_i,y_{n_i})$$ be an enumeration of the couples $$(r,y_n)$$ with $$r\in {\mathbb {Q}}^+$$ and $$n\in {\mathbb {N}}$$. Given a Borel section v, define

\begin{aligned} E_i:=\Big \{x\in \mathrm{Y}\;\Big |\;i\text { is the least index }j\text { such that }\textsf {d} _x\big (v(x),r_j(\textsf {G} _x^{y_{n_j}})'_0\big )<\varepsilon \Big \} \end{aligned}

and notice that, since the map $$x\mapsto \textsf {d} _x\big (v(x),r_i(\textsf {G} _x^{y_{n_i}})'_0\big )$$ is Borel for every i, the sets $$E_i$$ are Borel. The density result previously recalled ensures that $$\bigcup _iE_i=\mathrm{Y}$$. It follows that $${\tilde{v}}:=\sum _i{\chi }_{E_i}r_i(\textsf {G} _\cdot ^{y_{n_i}})'_0$$ fulfills the requirements. $$\square$$

### Corollary 3.5

Let $$\mathrm{Y}$$ be a local $$\mathrm{CAT}(\kappa )$$ space which is also separable. Let $$f:\,\mathrm{Y}\rightarrow {\mathbb {R}}$$ be a locally semiconvex, locally Lipschitz function and v a Borel section of $$\mathrm{T}_G\mathrm{Y}$$. Then $$\mathrm{Y}\ni x\mapsto \mathrm{d}_x f\big (v(x)\big )$$ is a Borel function.

### Proof

In light of Lemma 3.4, it is sufficient to prove the statement for simple sections. Let $$v=\sum _n{\chi }_{E_n}\alpha _n(\textsf {G}_\cdot ^{y_n})'_0$$ be simple and observe that, for every $$x\in \mathrm{Y}$$, one has that

\begin{aligned} \mathrm{d}_x f\big (v(x)\big )=\sum _n{\chi }_{E_n}(x)\,\alpha _n\,\mathrm{d}_x f\big ((\textsf {G}_x^{y_n})'_0\big ) =\sum _n{\chi }_{E_n}(x)\,\alpha _n\lim _{h\downarrow 0} \frac{f\big ((\textsf {G}_x^{y_n})_h\big )-f(x)}{h}. \end{aligned}

Since the function $$E_n\ni x\mapsto f\big ((\textsf {G}_x^{y_n})_h\big )-f(x)$$ is continuous for all $$h\in (0,1)$$ by Lemma 2.2, we conclude that $$\mathrm{Y}\ni x\mapsto \mathrm{d}_x f\big (v(x)\big )$$ is Borel, thus completing the proof of the statement. $$\square$$

The approximation result Lemma 3.4 also links the notion of Borel sections to Borel functions on $$\mathrm{Y}$$.

### Proposition 3.6

Let $$\mathrm{Y}$$ be a separable local $$\mathrm{CAT}(\kappa )$$ space. Let $$v,w$$ be Borel sections of $$\mathrm{T}_G\mathrm{Y}$$ and $$\lambda \ge 0$$. Then it holds that

\begin{aligned} \begin{aligned} \mathrm{Y}\ni x&\longmapsto |v|_x,\\ \mathrm{Y}\ni x&\longmapsto \textsf {d} _x\big (v(x),w(x)\big ),\\ \mathrm{Y}\ni x&\longmapsto \big \langle v(x),w(x)\big \rangle _x \end{aligned} \end{aligned}

are Borel functions. Moreover, $$\lambda v$$ and $$v\oplus w$$ are Borel sections of $$\mathrm{T}_G\mathrm{Y}$$.

### Proof

For the first part of the statement it is sufficient to prove that $$x\mapsto \textsf {d} _x\big (v(x),w(x)\big )$$ is Borel, by the definition of ‘norm’ and of ‘scalar product’. As in the proof of Lemma 3.4 above, we use the Lindelöf property of $$\mathrm{Y}$$ and the covering made of the balls $$B_{\textsf {r} _x/2}(x)$$, $$x\in \mathrm{Y}$$, to reduce to the case of a $$\mathrm{CAT}(\kappa )$$ space $$\mathrm{Y}$$ such that $$y\in B_{\textsf {r} _x}(x)$$ for every $$x,y\in \mathrm{Y}$$. By Lemma 3.4 it is sufficient to prove the claim for simple sections v, w. Let $$v=\sum _i{\chi }_{E_i}\alpha _i(\textsf {G} _\cdot ^{y_i})'_0$$ and $$w=\sum _j{\chi }_{F_j}\beta _j(\textsf {G} _\cdot ^{z_j})'_0$$ be simple, and notice that

\begin{aligned} \textsf {d} _x\big (v(x),w(x)\big )=\sum _{i,j}{\chi }_{E_i\cap F_j}(x)\, \textsf {d} _x\big (\alpha _i(\textsf {G} _x^{y_i})'_0,\beta _j(\textsf {G} _x^{z_j})'_0\big ). \end{aligned}

The Borel regularity of $$x\mapsto \textsf {d} _x\big (v(x),w(x)\big )$$ will follow if we show that $$x\mapsto \textsf {d} _x\big (\alpha (\textsf {G} _x^y)'_0,\beta (\textsf {G} _x^z)'_0\big )$$ is Borel for every $$y,z\in \mathrm{Y}$$ and $$\alpha ,\beta >0$$. To this aim notice that, since geodesics in $$\mathrm{Y}$$ are unique, they depend continuously (w.r.t. uniform convergence) on their endpoints (see also Lemma 2.2). Therefore, for every $$t\in (0,1)$$, we have that $$x\mapsto \textsf {d} \big ((\textsf {G} _x^y)_{\alpha t},(\textsf {G} _x^z)_{\beta t}\big )$$ is continuous and the conclusion follows recalling that, by (2.11) and Theorem 2.9, we have

\begin{aligned} \textsf {d} _x\big (\alpha (\textsf {G} _x^y)'_0,\beta (\textsf {G} _x^z)'_0\big )= \lim _{n\rightarrow \infty }\frac{\textsf {d} \big ((\textsf {G} _x^y)_{\alpha t_n},(\textsf {G} _x^z)_{\beta t_n}\big )}{t_n}, \end{aligned}

where $$(t_n)$$ is any sequence decreasing to 0.
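In the model case $$\mathrm{Y}={\mathbb {R}}^2$$ (a $$\mathrm{CAT}(0)$$ space) geodesics are segments, $$(\textsf {G} _x^y)_t=x+t(y-x)$$, and the tangent cone at x is $${\mathbb {R}}^2$$ itself, so the limit above can be checked directly. A minimal numerical sketch (all points and coefficients below are invented for illustration):

```python
import numpy as np

# Tangent-cone distance in the model case Y = R^2, where geodesics are
# segments (G_x^y)_t = x + t(y - x) and (G_x^y)'_0 is identified with y - x.
x = np.array([0.0, 0.0])
y = np.array([1.0, 0.0])
z = np.array([0.0, 2.0])
alpha, beta = 3.0, 0.5

def geo(p, q, t):
    """Point at parameter t on the (unique) geodesic from p to q."""
    return p + t * (q - p)

# Intrinsic distance between the rescaled initial velocities.
d_tangent = np.linalg.norm(alpha * (y - x) - beta * (z - x))

# The rescaled distance along the geodesics (here exact for every t > 0).
t = 1e-3
d_limit = np.linalg.norm(geo(x, y, alpha * t) - geo(x, z, beta * t)) / t

assert abs(d_tangent - d_limit) < 1e-9
```

In the Euclidean model the quotient is constant in t, so the limit along any sequence $$t_n\downarrow 0$$ is attained immediately; in a general $$\mathrm{CAT}(\kappa )$$ space the quotient is only monotone along geodesics, which is why the limit is needed.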

It is straightforward to see that $$\lambda v$$ is a Borel section of $$\mathrm{T}_G\mathrm{Y}$$: the function $$\mathrm{Y}\ni x\mapsto \mathrm{d}_x\mathrm {dist}_y\big (\lambda v(x)\big )= \lambda \,\mathrm{d}_x\mathrm {dist}_y\big (v(x)\big )$$ is Borel for every $$y\in \mathrm{Y}$$, whence $$\lambda v$$ is a Borel section.

We now aim to prove that $$v\oplus w$$ is a Borel section of $$\mathrm{T}_G\mathrm{Y}$$. By Lemma 3.4 it is enough to show that $$\mathrm{Y}\setminus \{y,z\}\ni x\mapsto \mathrm{d}_x\mathrm {dist}_p\big (\alpha (\textsf {G}_x^y)'_0, \beta (\textsf {G}_x^z)'_0\big )$$ is Borel for every $$p,y,z\in \mathrm{Y}$$ and $$\alpha ,\beta >0$$. By Lemma 2.12 and the properties of $$\mathrm{d}_x\mathrm {dist}_p$$ we have

\begin{aligned} \mathrm{d}_x\mathrm {dist}_p\big (\alpha (\textsf {G}_x^y)'_0,\beta (\textsf {G}_x^z)'_0\big )= & {} \lim _{\varepsilon \downarrow 0}\mathrm{d}_x\mathrm {dist}_p\big (2\varepsilon ^{-1}(\textsf {G}_x^{m_\varepsilon (x)})'_0\big )\\= & {} \lim _{\varepsilon \downarrow 0}\lim _{h\downarrow 0} \frac{\textsf {d} \big (p,(\textsf {G}_x^{m_\varepsilon (x)})_{2h/\varepsilon }\big )-\textsf {d} (p,x)}{h}, \end{aligned}

where $$m_\varepsilon (x)$$ stands for the midpoint between $$(\textsf {G}_x^y)_{\varepsilon \alpha }$$ and $$(\textsf {G}_x^z)_{\varepsilon \beta }$$. Given that the map sending $$x\in \mathrm{Y}$$ to $$(\textsf {G}_x^{m_\varepsilon (x)})_{2h/\varepsilon }\in \mathrm{Y}$$ is continuous (as one can see by repeatedly applying Lemma 2.2), we conclude that $$\mathrm{Y}\setminus \{y,z\}\ni x\mapsto \mathrm{d}_x\mathrm {dist}_p\big (\alpha (\textsf {G}_x^y)'_0, \beta (\textsf {G}_x^z)'_0\big )$$ is Borel, as required. This completes the proof of the statement. $$\square$$

We now consider the ‘right derivative’ map $$\textsf {RightDer}:\,C([0,1];\mathrm{Y})\times [0,1]\rightarrow \mathrm{T}_G\mathrm{Y}$$, given by

\begin{aligned} \textsf {RightDer}(\gamma ,t):= \left\{ \begin{array}{ll} \displaystyle \big (\gamma _t,\lim _{h\downarrow 0}h^{-1} (\textsf {G} _{\gamma _t}^{\gamma _{t+h}})'_0\big ),&{}\qquad \text { if the limit in }\mathrm{T}_{\gamma _t}\mathrm{Y}\text { exists,}\\ (\gamma _t,0),&{}\qquad \text { otherwise}. \end{array}\right. \end{aligned}
(3.1)

### Proposition 3.7

Let $$\mathrm{Y}$$ be a separable local $$\mathrm{CAT}(\kappa )$$ space. Then $${\textsf {RightDer}} :\,C([0,1];\mathrm{Y})\times [0,1]\rightarrow \mathrm{T}_G\mathrm{Y}$$ is a Borel map.

### Proof

Let us denote by $$\mathrm{e}:\,C([0,1];\mathrm{Y})\times [0,1]\rightarrow \mathrm{Y}$$ the evaluation map $$(\gamma ,t)\mapsto \gamma _t$$, which is clearly continuous. In order to show that $$\textsf {RightDer}$$ is Borel it suffices to prove that:

1. (i)

$$\pi ^\mathrm{Y}\circ \textsf {RightDer}$$ is Borel,

2. (ii)

$$\mathrm{d}\,\mathrm {dist}_y\circ \textsf {RightDer}$$ is Borel for every $$x\in \mathrm{Y}$$ and $$y\in B_{\textsf {r} _x}(x)$$.

Item (i) trivially follows from the observation that $$\mathrm{e}=\pi ^\mathrm{Y}\circ \textsf {RightDer}$$. To prove (ii), fix $$x\in \mathrm{Y}$$ and $$y\in B_{\textsf {r} _x}(x)$$. Let us define the sets $$D'$$, D and $$S_h$$ for $$h\in (0,1)$$ as follows:

\begin{aligned} \begin{aligned} D'&:=\big \{(\gamma ,t)\in C([0,1];\mathrm{Y})\times [0,1)\;\big |\; \gamma _t\in B_{\textsf {r} _x}(x)\big \}=\mathrm{e}^{-1}\big (B_{\textsf {r} _x}(x)\big ),\\ D&:=\big \{(\gamma ,t)\in D'\;\big |\;\lim _{h\downarrow 0} h^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h}})'_0\text { exists}\big \},\\ S_h&:=\big \{(\gamma ,t)\in D'\;\big |\;t+h\in [0,1),\, \gamma _{t+h}\in B_{\textsf {r} _{\gamma _t}}(\gamma _t)\big \}, \end{aligned} \end{aligned}

respectively. Since $$\mathrm e$$ and $$x\mapsto \textsf {r} _x$$ are continuous, we have that $$D'$$ and $$S_h$$ are open. Notice that

\begin{aligned} \begin{aligned} (\mathrm{d}\,\mathrm {dist}_y\circ \textsf {RightDer})(\gamma ,t)&= {\chi }_D(\gamma ,t)\,\lim _{h\downarrow 0}\mathrm{d}_{\gamma _t}\mathrm {dist}_y \big (h^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h}})'_0\big )\\&\overset{(2.16)}{=}{\chi }_D(\gamma ,t)\, \lim _{h\downarrow 0}\lim _{\varepsilon \downarrow 0}{\chi }_{S_h}(\gamma ,t) \frac{\textsf {d} \big (y,(\textsf {G}_{\gamma _t}^{\gamma _{t+h}})_\varepsilon \big )-\textsf {d} (y,\gamma _t)}{\varepsilon h}, \end{aligned} \end{aligned}

where the first equality stems from the continuity of $$\mathrm{T}_{\gamma _t}\mathrm{Y}\ni v\mapsto \mathrm{d}_{\gamma _t}\mathrm {dist}_y(v)$$. Given any $$h,\varepsilon \in (0,1)$$, the map $$(\gamma ,t)\mapsto {\chi }_{S_h}(\gamma ,t)\big [\textsf {d} \big (y, (\textsf {G}_{\gamma _t}^{\gamma _{t+h}})_\varepsilon \big )-\textsf {d} (y,\gamma _t)\big ]/(\varepsilon h)$$ is continuous on $$S_h$$ by Lemma 2.2. Thus, to obtain the measurability of the function $$\mathrm{d}\,\mathrm {dist}_y\circ \textsf {RightDer}$$, it remains to show that the set D is Borel. To this aim, let us set

\begin{aligned} A_{h_1,h_2,\varepsilon }:=\Big \{(\gamma ,t)\in S_{h_1}\cap S_{h_2}\;\Big |\; \textsf {d} _{\gamma _t}\big (h_1^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h_1}})'_0, h_2^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h_2}})'_0\big )<\varepsilon \Big \} \end{aligned}

for every $$h_1,h_2\in (0,1)$$ and $$\varepsilon >0$$. Given that for all $$(\gamma ,t)\in S_{h_1}\cap S_{h_2}$$ we can write

\begin{aligned} \textsf {d} _{\gamma _t}\big (h_1^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h_1}})'_0, h_2^{-1}(\textsf {G}_{\gamma _t}^{\gamma _{t+h_2}})'_0\big ) \overset{(2.5)}{=}\lim _{\delta \downarrow 0} \frac{\textsf {d} \big ((\textsf {G}_{\gamma _t}^{\gamma _{t+h_1}})_{\delta /h_1}, (\textsf {G}_{\gamma _t}^{\gamma _{t+h_2}})_{\delta /h_2}\big )}{\delta }, \end{aligned}

we can deduce (by Lemma 2.2) that each set $$A_{h_1,h_2,\varepsilon }$$ is Borel. Finally, observe that

\begin{aligned} D=\bigcap _{\varepsilon \in {\mathbb {Q}}^+}\bigcup _{\begin{array}{c} h\in {\mathbb {Q}}^+ \\ h<1 \end{array}} \bigcap _{\begin{array}{c} h_1,h_2\in {\mathbb {Q}}^+ \\ h_1,h_2<h \end{array}}A_{h_1,h_2,\varepsilon }, \end{aligned}

whence the set D is Borel. The statement follows. $$\square$$

We now fix a non-negative and non-zero Radon measure $$\mu$$ on $$\mathrm{Y}$$. We are interested in Borel sections of $$\mathrm{T}_G\mathrm{Y}$$ which are also in $$L^2(\mu )$$.

### Definition 3.8

(The space $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$) Let $$\mathrm{Y}$$ be a separable local $$\mathrm{CAT}(\kappa )$$ space and $$\mu$$ a non-negative non-zero Radon measure on $$\mathrm{Y}$$. The space $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ is defined as

\begin{aligned} L^2(\mathrm{T}_G\mathrm{Y};\mu ):=\Big \{v\text { Borel section of }\mathrm{T}_G\mathrm{Y}\ :\ \int |v|_x^2\,\mathrm{d}\mu (x)<\infty \Big \}/\sim , \end{aligned}

where $$v\sim w$$ if $$\{x:v(x)\ne w(x)\}$$ is $$\mu$$-negligible. We endow $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ with the distance

\begin{aligned} \textsf {d} _{\mu }(v,w):=\sqrt{\int \textsf {d} ^2_x\big (v(x),w(x)\big )\,\mathrm{d}\mu (x)}. \end{aligned}

Notice that, by Proposition 3.6, the integrals in Definition 3.8 are well-defined. With a (common) abuse of notation we do not distinguish between a Borel section v and its equivalence class up to $$\mu$$-a.e. equality.
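For intuition, when $$\mathrm{Y}={\mathbb {R}}^2$$ and $$\mu$$ is a finite sum of Dirac masses, a section is just a choice of one vector per atom and $$\textsf {d} _\mu$$ reduces to a weighted Euclidean $$L^2$$ distance. A toy sketch (weights and sections are invented for illustration):

```python
import numpy as np

# Toy model of (L^2(T_G Y; mu), d_mu): take Y = R^2, identify every tangent
# cone T_x Y with R^2, and let mu be a sum of weighted Dirac masses at five
# (implicit) sample points. A section is then one vector per atom.
rng = np.random.default_rng(0)
weights = np.array([0.5, 1.0, 2.0, 0.25, 1.25])  # mu-masses of the atoms

v = rng.standard_normal((5, 2))   # section v
w = rng.standard_normal((5, 2))   # section w
u = rng.standard_normal((5, 2))   # section u

def d_mu(a, b):
    """d_mu(a, b)^2 = int d_x(a(x), b(x))^2 dmu(x), here a finite weighted sum."""
    return float(np.sqrt(np.sum(weights * np.sum((a - b) ** 2, axis=1))))

assert d_mu(v, w) == d_mu(w, v)                       # symmetry
assert d_mu(v, w) <= d_mu(v, u) + d_mu(u, w) + 1e-12  # triangle inequality
```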

We conclude the section collecting some basic properties of $$\big (L^2(\mathrm{T}_G\mathrm{Y};\mu ),\textsf {d} _\mu \big )$$:

### Proposition 3.9

Let $$\mathrm{Y}$$ be a separable local $$\mathrm{CAT}(\kappa )$$ space and $$\mu$$ a non-negative, non-zero Radon measure on it. Then $$\big (L^2(\mathrm{T}_G\mathrm{Y};\mu ),\textsf {d} _\mu \big )$$ is a complete and separable $$\mathrm{CAT}(0)$$ space.

### Proof

The fact that $$\textsf {d} _\mu$$ is a distance on $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ is trivial, so we turn to the other properties.

Completeness. The argument is standard: as is well known, it is sufficient to prove that any $$(v_n)\subset L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ such that $$\sum _n\textsf {d} _\mu (v_n,v_{n+1})<\infty$$ is convergent. Then from the inequality

\begin{aligned} \Big \Vert \sum _n\textsf {d} _\cdot (v_n,v_{n+1})\Big \Vert _{L^2(\mu )}\le \sum _n\big \Vert \textsf {d} _\cdot (v_n,v_{n+1})\big \Vert _{L^2(\mu )}=\sum _n\textsf {d} _\mu (v_n,v_{n+1})<\infty \end{aligned}

we see that $$\sum _n\textsf {d} _\cdot (v_n,v_{n+1})\in L^2(\mu )$$ and in particular that $$\sum _n\textsf {d} _x(v_n(x),v_{n+1}(x))<\infty$$ for $$\mu$$-a.e. x. For any such x the sequence $$(v_n(x))$$ is Cauchy in $$\mathrm{T}_x\mathrm{Y}$$ and thus has a limit v(x). It is then clear that v is (the equivalence class up to $$\mu$$-a.e. equality of) a Borel section. Moreover, by Fatou’s lemma and the definition of $$\textsf {d} _\mu$$ we see that

\begin{aligned} \varlimsup _{n}\textsf {d} _\mu (v,v_n)\le \varlimsup _n\varliminf _m\textsf {d} _\mu (v_n,v_m)=0, \end{aligned}

having used again the assumption that $$(v_n)$$ is $$\textsf {d} _\mu$$-Cauchy. This proves that v is the $$\textsf {d} _\mu$$-limit of $$(v_n)$$ and, since this fact and the triangle inequality for $$\textsf {d} _\mu$$ also show that $$v\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$, the claim is proved.

Separability. Using the Lindelöf property of $$\mathrm{Y}$$ and the very definition of the distance $$\textsf {d} _\mu$$ we can reduce to the case in which $$\mathrm{Y}$$ is a separable $$\mathrm{CAT}(\kappa )$$ space with diameter $$<D_\kappa$$ and $$\mu$$ is a finite measure. Then, taking into account Lemma 3.4 above, it is easy to see that it is sufficient to find a countable set $${\mathcal {D}}\subset L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ whose closure contains all simple sections of the form $$v={\chi }_E(\textsf {G} _\cdot ^y)'_0$$ for generic $$E\subset \mathrm{Y}$$ Borel and $$y\in \mathrm{Y}$$. Let $${\mathcal {D}}_1\subset \mathrm{Y}$$ be countable and dense and let $${\mathcal {D}}_2\subset {\mathcal {B}}(\mathrm{Y})$$ be countable and such that for any $$E\subset \mathrm{Y}$$ Borel and $$\varepsilon >0$$ there is $$U\in {\mathcal {D}}_2$$ with $$\mu (U\Delta E)<\varepsilon$$ (for instance, by the regularity of the measure $$\mu$$, the family of all finite unions of open balls with center in $${\mathcal {D}}_1$$ and rational radius does the job).

We then define

\begin{aligned} {\mathcal {D}}:=\Big \{{\chi }_E(\textsf {G} _\cdot ^y)'_0\ :\ y\in {\mathcal {D}}_1,\ E\in {\mathcal {D}}_2 \Big \} \end{aligned}

and claim that this does the job. To see this, notice that the inequality

\begin{aligned} \textsf {d} _\mu ^2\big ({\chi }_E(\textsf {G} _\cdot ^y)'_0,{\chi }_{{\tilde{E}}}(\textsf {G} _\cdot ^y)'_0\big ) =\int _{E\Delta {\tilde{E}}}|(\textsf {G} _x^y)'_0|_x^2\,\mathrm{d}\mu (x)\le \mu (E\Delta {\tilde{E}})\,D_\kappa ^2 \end{aligned}

guarantees that the closure of $${\mathcal {D}}$$ contains all the sections of the form $${\chi }_E(\textsf {G} _\cdot ^y)'_0$$ for $$E\subset \mathrm{Y}$$ Borel and $$y\in {\mathcal {D}}_1$$. To conclude, recall the continuity of $$y\mapsto (\textsf {G} _x^y)'_0\in \mathrm{T}_x\mathrm{Y}$$ proved in Theorem 2.9 and notice that an application of the dominated convergence theorem gives that $${\chi }_E(\textsf {G} _\cdot ^{y_n})'_0\rightarrow {\chi }_E(\textsf {G} _\cdot ^y)'_0$$ in $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ whenever $$y_n\rightarrow y$$.

$$\mathrm{CAT}(0)$$ condition. By [41, Remark 2.2], it is sufficient to show that for any $$v,v'\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ there exists $$v''\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ such that

\begin{aligned} \textsf {d} _\mu ^2(w,v'')\le \frac{1}{2}\,\textsf {d} _\mu ^2(w,v)+ \frac{1}{2}\,\textsf {d} _\mu ^2(w,v')-\frac{1}{4}\,\textsf {d} _\mu ^2(v,v') \qquad \text { for every }w\in L^2(\mathrm{T}_G\mathrm{Y};\mu ). \end{aligned}

Let us define $$v''\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ as

\begin{aligned} v''_x:=\frac{1}{2}\,v_x\oplus v'_x\in \mathrm{T}_x\mathrm{Y}\qquad \text { for }\mu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}

(Note that $$v''_x$$ is the midpoint between $$v_x$$ and $$v'_x$$.) The fact that $$v''$$ is (the equivalence class of) a Borel section of $$\mathrm{T}_G\mathrm{Y}$$ follows by Proposition 3.6, while the integrability condition $$(x\mapsto |v''_x|_x)\in L^2(\mu )$$ is implied by inequality (2.12g). By [41, Corollary 2.5], we have

\begin{aligned} \textsf {d} _x^2(w_x,v''_x)\le \frac{1}{2}\,\textsf {d} _x^2(w_x,v_x) +\frac{1}{2}\,\textsf {d} _x^2(w_x,v'_x)-\frac{1}{4}\,\textsf {d} _x^2(v_x,v'_x) \qquad \text { for }\mu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}

By integrating with respect to $$\mu$$ we obtain the desired inequality. $$\square$$
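The pointwise inequality used in the last step holds with equality in a Hilbert space, where $$v''=(v+v')/2$$. A quick numerical check of this parallelogram identity (vectors are random, for illustration only):

```python
import numpy as np

# In a Hilbert space the CAT(0) midpoint inequality holds with equality:
# with v'' = (v + v')/2,
#   |w - v''|^2 = |w - v|^2 / 2 + |w - v'|^2 / 2 - |v - v'|^2 / 4.
rng = np.random.default_rng(1)
w, v, vp = rng.standard_normal((3, 4))  # three random vectors in R^4
vpp = 0.5 * (v + vp)                    # midpoint of v and v'

lhs = np.sum((w - vpp) ** 2)
rhs = (0.5 * np.sum((w - v) ** 2) + 0.5 * np.sum((w - vp) ** 2)
       - 0.25 * np.sum((v - vp) ** 2))
assert abs(lhs - rhs) < 1e-10
```

In a general $$\mathrm{CAT}(0)$$ space only the inequality survives, which is exactly what the integration argument above uses.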

## Normal 1-Currents and the Superposition Principle

In this section, we recall the notion of metric 1-current as introduced by Ambrosio–Kirchheim in  and Paolini–Stepanov’s metric version of Smirnov’s superposition principle. Throughout this section $$(\mathrm{Y},\textsf {d} )$$ is a complete and separable metric space. See also  and  for more on the topic.

We denote by $$\mathrm {LIP}(\mathrm{Y})$$ the space of real-valued Lipschitz functions on $$\mathrm{Y}$$, and by $$\mathrm {LIP}_{b}(\mathrm{Y})$$ the subspace of bounded Lipschitz functions.

### Definition 4.1

(Normal 1-currents) A (metric) 1-current of finite mass on $$\mathrm{Y}$$ is a bilinear functional

\begin{aligned} T:\,\mathrm {LIP}_{b}(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})\rightarrow {\mathbb {R}}\end{aligned}

satisfying the following conditions:

1. (a)

$$T(g,f)=0$$ if the function $$f \in \mathrm {LIP}(\mathrm{Y})$$ is constant on the support of $$g\in \mathrm {LIP}_{b}(\mathrm{Y})$$,

2. (b)

$$T(g,f_n)\rightarrow T(g,f)$$ whenever $$f_n\rightarrow f$$ pointwise and $$\sup _n\mathrm {Lip}(f_n)<\infty$$,

3. (c)

there exists a finite Borel measure $$\nu$$ on $$\mathrm{Y}$$ satisfying

\begin{aligned} \big |T(g,f)\big |\le \mathrm {Lip}(f) \int |g|\,\mathrm{d}\nu \qquad \forall g\in \mathrm {LIP}_{b}(\mathrm{Y}),\ f\in \mathrm {LIP}(\mathrm{Y}). \end{aligned}
(4.1)

A normal 1-current is a 1-current T of finite mass for which there exists a finite signed Borel measure $$\mu$$ (called the boundary of T and denoted by $$\partial T$$) such that

\begin{aligned} T(1,f)=\int f\,\mathrm{d}\mu \qquad \forall f\in \mathrm {LIP}_b(\mathrm{Y}). \end{aligned}

It is not hard to check that if T has finite mass, there is a minimal (in the sense of the partial ordering of measures) Borel measure for which (4.1) holds: it will be denoted by $$\Vert T\Vert$$ and called the mass measure of T. We set $${\mathbb {M}}(T):=\Vert T\Vert (\mathrm{Y})$$.

A prototypical example is the normal 1-current $$[\![\gamma ]\!]$$ induced by an absolutely continuous curve $$\gamma :[0,1]\rightarrow \mathrm{Y}$$ via the formula

\begin{aligned}{}[\![\gamma ]\!](g,f):=\int _0^1 g(\gamma _t) (f\circ \gamma )'_t\,\mathrm{d}t,\quad (g,f)\in \mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y}). \end{aligned}

Its mass measure is given by $$\Vert [\![\gamma ]\!]\Vert =\gamma _*\big (|{\dot{\gamma }}_t|\,\mathrm{d}t\big )$$, i.e. $$\int g\,\mathrm{d}\Vert [\![\gamma ]\!]\Vert =\int _0^1 g(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t$$ for every $$g\in \mathrm {LIP}_b(\mathrm{Y})$$, and its boundary is given by $$\partial [\![\gamma ]\!]=\delta _{\gamma _1}-\delta _{\gamma _0}$$.

Notice that the current $$[\![\gamma ]\!]$$ remains unchanged if we change the parametrization of $$\gamma$$. This makes it natural to consider the space of ‘curves up to reparametrization’ as follows (here we only consider non-decreasing reparametrizations).
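As a sanity check of this reparametrization invariance, and of the boundary identity $$[\![\gamma ]\!](1,f)=f(\gamma _1)-f(\gamma _0)$$, one can discretize $$[\![\gamma ]\!](g,f)$$ as a Riemann–Stieltjes sum. The curve and the functions f, g below are invented for illustration:

```python
import numpy as np

def current(gamma, g, f, n=200000):
    """Riemann-Stieltjes approximation of [[gamma]](g, f) = int g(gamma_t) d(f o gamma)_t."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = gamma(t)
    return float(np.sum(g(pts[:-1]) * np.diff(f(pts))))

gamma = lambda t: np.stack([np.cos(np.pi * t), np.sin(np.pi * t)], axis=-1)  # half circle
f = lambda p: p[..., 0] + 2.0 * p[..., 1]       # a Lipschitz function on R^2
g = lambda p: 1.0 + p[..., 1] ** 2              # a bounded Lipschitz function
one = lambda p: np.ones(p.shape[:-1])

# Boundary identity: [[gamma]](1, f) = f(gamma_1) - f(gamma_0).
assert abs(current(gamma, one, f) - (f(gamma(1.0)) - f(gamma(0.0)))) < 1e-9

# The current is unchanged under the non-decreasing reparametrization t -> t^2.
eta = lambda t: gamma(t ** 2)
assert abs(current(gamma, g, f) - current(eta, g, f)) < 1e-3
```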

### Reparametrizations of Curves

A reparametrization $$\alpha :[0,1]\rightarrow [0,1]$$ is a non-decreasing continuous surjection. If $$\gamma ,\eta \in C([0,1];\mathrm{Y})$$, we say that $$\eta$$ is a reparametrization of $$\gamma$$ if there is a reparametrization $$\alpha$$ satisfying $$\gamma \circ \alpha =\eta$$.

### Remark 4.2

Given $$\gamma \in C([0,1];\mathrm{Y})$$, there exists a curve $$\eta \in C([0,1];\mathrm{Y})$$ which is not constant on any open interval, and is a reparametrization of $$\gamma$$, cf. [16, Proposition 3.6].$$\square$$

Define an equivalence relation on $$C([0,1];\mathrm{Y})$$ by declaring $$\gamma \sim \eta$$ if there is a curve $$\varphi \in C([0,1];\mathrm{Y})$$ which is a reparametrization of both $$\gamma$$ and $$\eta$$. It is easy to see that this indeed defines an equivalence relation. Let $${\overline{\Gamma }}(\mathrm{Y}):=C([0,1],\mathrm{Y})/\sim$$ be the quotient space. We define a distance function on $${\overline{\Gamma }}(\mathrm{Y})$$ by

\begin{aligned} \textsf {d} _{{\overline{\Gamma }}}([\gamma ],[\eta ])=\inf \big \{\textsf {d} _\infty (\gamma \circ \alpha ,\eta \circ \beta ):\ \alpha ,\beta \text { reparametrizations}\big \},\quad [\gamma ],[\eta ]\in {\overline{\Gamma }}(\mathrm{Y}), \end{aligned}

where

\begin{aligned} \textsf {d} _\infty (\gamma ,\eta ):=\sup _{0\le t\le 1}\textsf {d} (\gamma _t,\eta _t). \end{aligned}

This is clearly symmetric, and satisfies the triangle inequality. Consequently it defines a pseudometric on $${\overline{\Gamma }}(\mathrm{Y})$$. It follows from Lemma 4.3 below that $$\textsf {d} _{{\overline{\Gamma }}}$$ defines a metric on $${\overline{\Gamma }}(\mathrm{Y})$$.

Note that, since a non-decreasing surjection $$[0,1]\rightarrow [0,1]$$ may be approximated uniformly by increasing homeomorphisms of [0, 1], it easily follows that $$\textsf {d} _{{\overline{\Gamma }}}([\gamma ],[\eta ])$$ has the representation

\begin{aligned} \textsf {d} _{{\overline{\Gamma }}}([\gamma ],[\eta ])=\inf \big \{ \textsf {d} _\infty (\gamma \circ \phi ,\eta ):\ \phi \text { increasing homeomorphism of }[0,1]\big \}. \end{aligned}

### Lemma 4.3

Let $$\gamma ,\eta \in C([0,1];\mathrm{Y})$$ be such that $$\textsf {d} _{{\overline{\Gamma }}}([\gamma ],[\eta ])=0$$. Then $$[\gamma ]=[\eta ]$$.

### Proof

By Remark 4.2 we may assume that $$\gamma$$ is not constant on any non-trivial interval. We will prove that there is a reparametrization $$\phi$$ such that $$\gamma \circ \phi =\eta$$.

Let $$\phi _n:[0,1]\rightarrow [0,1]$$ be a sequence of increasing homeomorphisms minimizing $$\textsf {d} _{{\overline{\Gamma }}}([\gamma ],[\eta ])$$:

\begin{aligned} \lim _{n\rightarrow \infty }\textsf {d} _\infty (\gamma \circ \phi _n,\eta )=0. \end{aligned}

Denote $$\psi _n=\phi _n^{-1}$$. For each $$n\in {\mathbb {N}}$$, $$\psi _n$$ is also an increasing homeomorphism. Thus, $$\phi _n$$ and $$\psi _n$$ are of bounded variation and their distributional derivatives $$\phi '_n$$ and $$\psi _n'$$ (which are positive measures on [0, 1]) satisfy

\begin{aligned} \int _0^1|\phi _n'|\,\mathrm{d}t=1=\int _0^1|\psi _n'|\,\mathrm{d}t \end{aligned}

for all $$n\in {\mathbb {N}}$$. By Helly’s selection principle (see ), there are subsequences (labeled here by the same indices) and functions $$\phi ,\psi :[0,1]\rightarrow [0,1]$$ of bounded variation so that $$\phi _n\rightarrow \phi$$ and $$\psi _n\rightarrow \psi$$ pointwise. Clearly, $$\phi$$ and $$\psi$$ are non-decreasing and satisfy $$\phi (0)=\psi (0)=0$$, $$\phi (1)=\psi (1)=1$$. Since $$\gamma$$ is continuous, and $$\phi _n\rightarrow \phi$$ pointwise, we have the estimate

\begin{aligned} \textsf {d} (\gamma _{\phi (t)},\eta _t)=\lim _{n\rightarrow \infty }\textsf {d} (\gamma _{\phi _n(t)},\eta _t) \le \lim _{n\rightarrow \infty }\textsf {d} _\infty (\gamma \circ \phi _n,\eta )=0 \end{aligned}

for all $$t\in [0,1]$$. Thus

\begin{aligned} \gamma \circ \phi =\eta . \end{aligned}

Similarly we obtain

\begin{aligned} \gamma =\eta \circ \psi . \end{aligned}

For any $$0\le a<b\le 1$$, the pointwise convergence $$\phi _n\rightarrow \phi$$ implies that

\begin{aligned} \bigcup _{k= 1}^\infty \bigcap _{n\ge k}\phi _n^{-1}[a,b]\subset \phi ^{-1}[a,b]. \end{aligned}

Moreover, we have

\begin{aligned} (\psi (a),\psi (b))\subset \bigcup _{k=1}^\infty \bigcap _{n\ge k}\phi _n^{-1}[a,b]. \end{aligned}

To see this, let $$x\in (\psi (a),\psi (b))$$. For all large enough $$n\in {\mathbb {N}}$$, we have $$\psi _n(a)<x<\psi _n(b)$$ or, equivalently, $$a<\phi _n(x)<b$$. Thus $$\phi (x)=\lim _{n\rightarrow \infty }\phi _n(x)\in [a,b]$$. The inclusions above imply

\begin{aligned} (\psi (a),\psi (b))\subset \phi ^{-1}[a,b]. \end{aligned}
(4.2)

It follows that $$\phi$$ is continuous. Indeed, if a non-decreasing function $$\phi :[0,1]\rightarrow [0,1]$$ has a point of discontinuity, its image must omit some non-trivial interval $$[a,b]\subset [0,1]$$, i.e. $$\phi ^{-1}[a,b]=\varnothing$$; by (4.2), this implies $$\psi (a)=\psi (b)$$. Since $$\gamma =\eta \circ \psi$$, we have

\begin{aligned} \textsf {d} (\gamma _a,\gamma _b)=\textsf {d} (\eta _{\psi (a)},\eta _{\psi (b)})=0, \end{aligned}

which contradicts the fact that $$\gamma$$ is not constant on any non-trivial interval. $$\square$$

### Remark 4.4

Since $$C([0,1];\mathrm{Y})$$ is complete and separable, we have that $${\overline{\Gamma }}(\mathrm{Y})$$ is complete and separable. $$\square$$

We will denote by $$\Gamma (\mathrm{Y})\subset {\overline{\Gamma }}(\mathrm{Y})$$ the image of $$AC([0,1];\mathrm{Y})\subset C([0,1];\mathrm{Y})$$ under the quotient map $$q:\,C([0,1];\mathrm{Y})\rightarrow {\overline{\Gamma }}(\mathrm{Y})$$ given by $$\gamma \mapsto [\gamma ]$$, i.e. $$\Gamma (\mathrm{Y})=q\big (AC([0,1];\mathrm{Y})\big )$$. Notice that since $$AC([0,1];\mathrm{Y})$$ is a Borel subset of $$C([0,1];\mathrm{Y})$$ (see for instance [5, Sect. 2.2]), we have that $$\Gamma (\mathrm{Y})$$ is a Suslin subset of $${\overline{\Gamma }}(\mathrm{Y})$$ and thus universally measurable.

Recall that a curve $$\gamma \in C([0,1];\mathrm{Y})$$ is called rectifiable if it has finite length:

\begin{aligned} \ell (\gamma ):=\sup \left\{ \sum _{i=1}^m\textsf {d} (\gamma _{t_i},\gamma _{t_{i-1}}) \right\} <\infty , \end{aligned}

where the supremum is taken over all partitions $$0=t_0<\cdots <t_m=1$$ of [0, 1]. Note that the length $$\ell (\gamma )$$ is independent of reparametrization and, for absolutely continuous curves, is given by

\begin{aligned} \ell (\gamma )=\int _0^1|{\dot{\gamma }}_t|\,\mathrm{d}t; \end{aligned}

see  for these statements, as well as the proposition below.
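Both facts can be observed numerically: the partition sums of chord lengths increase under refinement and converge to $$\int _0^1|{\dot{\gamma }}_t|\,\mathrm{d}t$$. A sketch for a quarter circle, whose length is $$\pi /2$$ (the curve is chosen purely for illustration):

```python
import numpy as np

# Chord-length sums over nested partitions increase to the length ell(gamma);
# here gamma is a quarter circle of length pi/2.
gamma = lambda t: np.stack([np.cos(0.5 * np.pi * t), np.sin(0.5 * np.pi * t)], axis=-1)

def polygonal_length(gamma, m):
    """Sum of d(gamma_{t_i}, gamma_{t_{i-1}}) over the uniform partition with m pieces."""
    pts = gamma(np.linspace(0.0, 1.0, m + 1))
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=-1)))

lengths = [polygonal_length(gamma, m) for m in (2, 8, 32, 1024)]
assert all(a <= b for a, b in zip(lengths, lengths[1:]))  # monotone under refinement
assert abs(lengths[-1] - np.pi / 2) < 1e-5                # converges to int_0^1 |gamma'_t| dt
```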

### Proposition 4.5

(Reparametrization with constant speed) [26, Theorem 3.2 and Corollary 3.8] Let $$\mathrm{Y}$$ be a complete and separable space and $$\gamma \in C([0,1];\mathrm{Y})$$ a non-constant rectifiable curve. Define $$\phi :[0,1]\rightarrow [0,1]$$ by $$\phi (s):=\min \big \{t\in [0,1]\,:\,\ell (\gamma |_{[0,t]})=s\,\ell (\gamma )\big \}$$. Then $${\bar{\gamma }}:=\gamma \circ \phi$$ is an $$\ell (\gamma )$$-Lipschitz curve and has constant metric speed $$\displaystyle |\dot{{\bar{\gamma }}}_t|=\ell (\gamma )$$ for almost every t. Moreover, $$\gamma ={\bar{\gamma }}\circ \lambda$$, where $$\lambda (t):=\ell (\gamma |_{[0,t]})/\ell (\gamma )$$, i.e. $$\gamma$$ and $${\bar{\gamma }}$$ define the same element of $${\overline{\Gamma }}(\mathrm{Y})$$.

If $$\gamma \sim \eta$$ are two absolutely continuous curves, then their reparametrizations with constant speed coincide.
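In a discrete setting, the constant speed reparametrization amounts to resampling a curve at equally spaced values of cumulative arc-length. A hedged sketch of this procedure (the curve and the helper `const_speed_resample` are invented for illustration):

```python
import numpy as np

def const_speed_resample(pts, m):
    """Resample a polyline pts (shape (k, d)) at m+1 equally spaced arc lengths."""
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=-1)
    s = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative chord length
    s_new = np.linspace(0.0, s[-1], m + 1)
    out = np.empty((m + 1, pts.shape[1]))
    for j in range(pts.shape[1]):
        out[:, j] = np.interp(s_new, s, pts[:, j])   # piecewise-linear resampling
    return out

# A wildly non-uniform parametrization of a parabolic arc in R^2.
t = np.linspace(0.0, 1.0, 5001) ** 4
pts = np.stack([t, t ** 2], axis=-1)

resampled = const_speed_resample(pts, 200)
steps = np.linalg.norm(np.diff(resampled, axis=0), axis=-1)
assert steps.max() / steps.min() < 1.001  # consecutive samples nearly equidistant
```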

We shall denote by $$\textsf {ConstSpRep}:\Gamma (\mathrm{Y})\rightarrow C([0,1],\mathrm{Y})$$ the map sending the equivalence class of $$\gamma \in AC([0,1];\mathrm{Y})$$ to the constant speed reparametrization of any element in the class. Proposition 4.5 implies that this map is well-defined. Also, we have:

### Proposition 4.6

Let $$\mathrm{Y}$$ be a complete separable space. Then $${\textsf {ConstSpRep}} :\Gamma (\mathrm{Y})\rightarrow C([0,1];\mathrm{Y})$$ is a Borel map.

### Proof

Throughout the proof we use the shorthand $${\bar{\theta }}:=\textsf {ConstSpRep}(\theta )$$. Introduce a new metric $$\textsf {d} _0$$ on $$\Gamma (\mathrm{Y})$$ by setting

\begin{aligned} \textsf {d} _0(\theta _1,\theta _2):=\max \big \{\textsf {d} _\Gamma (\theta _1,\theta _2), \big |\ell (\theta _1)-\ell (\theta _2)\big |\big \},\quad \theta _1,\theta _2\in \Gamma (\mathrm{Y}). \end{aligned}

Since the length functional $$\ell :\Gamma (\mathrm{Y})\rightarrow [0,\infty ]$$ is lower semicontinuous, and since

\begin{aligned} B_{\textsf {d} _0}(\theta ,r)=B_\Gamma (\theta ,r)\cap \ell ^{-1}(\ell (\theta )-r,\ell (\theta )+r), \end{aligned}

it follows that $$\textsf {d} _0$$-balls $$B_{\textsf {d} _0}(\theta ,r)$$ are Borel in $$(\Gamma (\mathrm{Y}),\textsf {d} _\Gamma )$$. Consequently the identity

\begin{aligned} I:(\Gamma (\mathrm{Y}),\textsf {d} _\Gamma )\rightarrow (\Gamma (\mathrm{Y}),\textsf {d} _0) \end{aligned}

is a Borel map. Define

\begin{aligned} h_0:(\Gamma (\mathrm{Y}),\textsf {d} _0)\rightarrow C([0,1];Y),\quad \theta \mapsto {\bar{\theta }} \end{aligned}

and note that $$\textsf {ConstSpRep}=h_0\circ I$$. Thus, it suffices to prove that $$h_0$$ is continuous. We thank Stefan Wenger for providing the elegant argument presented below.

Suppose $$\textsf {d} _0(\theta _n,\theta )\rightarrow 0$$ as $$n\rightarrow \infty$$. Then there are nondecreasing bijections $$\varphi _n:[0,1]\rightarrow [0,1]$$ such that

\begin{aligned} \textsf {d} _\infty (\theta _n\circ \varphi _n,{\bar{\theta }})\rightarrow 0\quad \text { as }n\rightarrow \infty . \end{aligned}

Denote $$\gamma _n:=\theta _n\circ \varphi _n$$. We have $$\displaystyle \lim _{n\rightarrow \infty }\ell (\gamma _n)= \ell (\theta )$$. Moreover, for any $$t\in [0,1]$$, we have

\begin{aligned} \lim _{n\rightarrow \infty }\Big (\ell \big (\gamma _n|_{[0,t]}\big ) +\ell \big (\gamma _n|_{[t,1]}\big )\Big )=\ell (\theta ) =\ell \big ({\bar{\theta }}|_{[0,t]}\big )+\ell \big ({\bar{\theta }}|_{[t,1]}\big ). \end{aligned}
(4.3)

Since, by the lower semicontinuity of the length functional under uniform convergence,

\begin{aligned} \varliminf _{n\rightarrow \infty }\ell \big (\gamma _n|_{[0,t]}\big )\ge \ell \big ({\bar{\theta }}|_{[0,t]}\big )\quad \text { and }\quad \varliminf _{n\rightarrow \infty }\ell \big (\gamma _n|_{[t,1]}\big )\ge \ell \big ({\bar{\theta }}|_{[t,1]}\big ), \end{aligned}
(4.4)

it follows from (4.3) that the inequalities in (4.4) are in fact equalities, and we may pass to a subsequence (not relabeled) so that, for a countable dense set $$D\subset [0,1]$$, we have

\begin{aligned} \lim _{n\rightarrow \infty }\ell \big (\gamma _n|_{[0,t]}\big )=\ell \big ({\bar{\theta }}|_{[0,t]}\big )\qquad \text { for every }t\in D. \end{aligned}
(4.5)

Set $$\ell _n(t):=\ell \big (\gamma _n|_{[0,t]}\big )/\ell (\gamma _n)$$, whence, by (4.5) and the fact that $${\bar{\theta }}$$ is constant speed parametrized (so that $$\ell ({\bar{\theta }}|_{[0,t]})=t\,\ell (\theta )$$), we have

\begin{aligned} \lim _{n\rightarrow \infty }\ell _n(t)=t\qquad \text { for every }t\in D. \end{aligned}
(4.6)

The sequence $$({\bar{\gamma }}_n)_n$$ of constant speed parametrizations of $$\gamma _n$$ is uniformly Lipschitz and thus, after passing to a subsequence, it has a uniform limit $$\beta :[0,1]\rightarrow \mathrm{Y}$$ which is a Lipschitz curve. Note that $${\bar{\theta }}_n={\bar{\gamma }}_n$$.

By the constant speed parametrization, we have

\begin{aligned} {\bar{\gamma }}_n\circ \ell _n=\gamma _n. \end{aligned}
(4.7)

For each $$t\in D$$ we have, by (4.6) and (4.7),

\begin{aligned} \beta (t)=\lim _{n\rightarrow \infty }{\bar{\gamma }}_n(\ell _n(t))=\lim _{n\rightarrow \infty } \gamma _n(t)={\bar{\theta }}(t). \end{aligned}

Since the equality holds on a dense set of points, we conclude that $$\beta ={\bar{\theta }}$$.

By repeating this argument for any subsequence of $$\theta _n$$ we have that, if $$\theta _n\rightarrow \theta$$ in $$\textsf {d} _0$$, then $${\bar{\theta }}_n\rightarrow {\bar{\theta }}$$. Thus $$h_0$$ is continuous, and this completes the proof of the claim. $$\square$$

### The Superposition Principle

We shall consider finite Borel measures $$\pi$$ on $${\overline{\Gamma }}(\mathrm{Y})$$ concentrated on $$\Gamma (\mathrm{Y})$$ and typically denote by $$[\gamma ]$$ their ‘integration variable’. In doing this, we always implicitly assume that $$\gamma$$ is absolutely continuous for $$\pi$$-a.e. $$[\gamma ]$$ (i.e. we select an element in $$[\gamma ]$$ which is absolutely continuous—see also Proposition 4.6 above).

### Lemma 4.7

For any $$(g,f)\in \mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})$$, the map $$A:\,C([0,1];\mathrm{Y})\rightarrow {\mathbb {R}}\cup \{+\infty \}$$ given by

\begin{aligned} A(\gamma ):=\left\{ \begin{array}{ll} [\![\gamma ]\!](g,f)&{}\qquad \text { if }\gamma \in AC([0,1];\mathrm{Y}),\\ +\infty &{}\qquad \text { otherwise}, \end{array}\right. \end{aligned}
(4.8)

is a Borel map.

### Proof

Since $$AC([0,1];\mathrm{Y})\subset C([0,1];\mathrm{Y})$$ is Borel, it suffices to show that the map $$AC([0,1];\mathrm{Y})\ni \gamma \mapsto [\![\gamma ]\!](g,f)$$ is Borel. Let $${\bar{\gamma }}$$ denote the constant speed parametrization given by Proposition 4.5. Let $$q:\,C([0,1];\mathrm{Y})\rightarrow {\overline{\Gamma }}(\mathrm{Y})$$ be the quotient map $$\gamma \mapsto [\gamma ]$$. Since $${\bar{\gamma }}=\textsf {ConstSpRep}(q(\gamma ))$$, Proposition 4.6 implies that $$AC([0,1];\mathrm{Y})\ni \gamma \mapsto {\bar{\gamma }}$$ is Borel. Consequently the map

\begin{aligned} I_n:AC([0,1];\mathrm{Y})\rightarrow {\mathbb {R}},\quad \gamma \mapsto \int _0^{1-1/n}g({\bar{\gamma }}_t)f({\bar{\gamma }}_{t+1/n})\,\mathrm{d}t \end{aligned}

is Borel for each $$n\in {\mathbb {N}}\cup \{\infty \}$$. To show that $$AC([0,1];\mathrm{Y})\ni \gamma \mapsto [\![\gamma ]\!](g,f)$$ is Borel it suffices to see that

\begin{aligned} A(\gamma )=\lim _{n\rightarrow \infty }n\big (I_n(\gamma )-I_\infty (\gamma )\big ) \qquad \text { for every }\gamma \in AC([0,1];\mathrm{Y}). \end{aligned}
(4.9)

For each $$\gamma \in AC([0,1];\mathrm{Y})$$ we have that $$f\circ {\bar{\gamma }}$$ is Lipschitz and thus, by the dominated convergence theorem,

\begin{aligned} A(\gamma )=[\![\gamma ]\!](g,f)&=[\![{\bar{\gamma }}]\!](g,f)= \int _0^1g({\bar{\gamma }}_t)\lim _{n\rightarrow \infty }n\big [f({\bar{\gamma }}_{t+1/n}) -f({\bar{\gamma }}_t)\big ]\,\mathrm{d}t\\&=\lim _{n\rightarrow \infty }n\left( \int _0^{1-1/n}g({\bar{\gamma }}_t) \big [f({\bar{\gamma }}_{t+1/n})-f({\bar{\gamma }}_t)\big ]\,\mathrm{d}t\right) \\&=\lim _{n\rightarrow \infty }n\big (I_n(\gamma )-I_\infty (\gamma )\big ), \end{aligned}

establishing (4.9). $$\square$$

By (4.8), and the fact that $$[\![\gamma ]\!]=[\![\eta ]\!]$$ if $$\gamma \sim \eta$$, we see that for any finite non-negative Borel measure $$\pi$$ on $${\overline{\Gamma }}(\mathrm{Y})$$ concentrated on $$\Gamma (\mathrm{Y})$$ the functional $$[\![\pi ]\!]:\mathrm {LIP}_{b}(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})\rightarrow {\mathbb {R}}$$ given by

\begin{aligned}{}[\![\pi ]\!](g,f):=\int [\![\gamma ]\!](g,f)\,\mathrm{d}\pi ([\gamma ])\qquad \forall (g,f)\in \mathrm {LIP}_{b}(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y}) \end{aligned}

is well-defined and a normal 1-current: for its mass we have the bound

\begin{aligned} \int g\,\mathrm{d}\Vert [\![\pi ]\!]\Vert \le \iint _0^1 g(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi ([\gamma ]) \end{aligned}
(4.10)

for every non-negative $$g\in \mathrm {LIP}_b(\mathrm{Y})$$; notice that $$\int _0^1 g(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t$$ is independent of the parametrization of $$\gamma$$ (see also Proposition 4.5 above). For its boundary we have

\begin{aligned} \int f\,\mathrm{d}\partial [\![\pi ]\!]=\int f\,\mathrm{d}\big ((\mathrm{e}_1)_*\pi -(\mathrm{e}_0)_*\pi \big ) = \int f(\gamma _1)-f(\gamma _0)\,\mathrm{d}\pi ([\gamma ])\quad \forall f\in \mathrm {LIP}_b(\mathrm{Y}) \end{aligned}

(notice that $$\gamma _0,\gamma _1$$ are independent of the parametrization of $$\gamma$$). Observe that picking $$g\equiv 1$$ in (4.10) we obtain

\begin{aligned} {\mathbb {M}}([\![\pi ]\!])\le \iint _0^1|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi ([\gamma ]). \end{aligned}
(4.11)
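When $$\pi$$ is supported on finitely many constant-speed curves, both $$[\![\pi ]\!]$$ and the right-hand side of (4.11) can be evaluated directly. A toy sketch with two weighted segments in $${\mathbb {R}}^2$$ (curves, weights and the test function are invented for illustration):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
curves = [lambda s: np.stack([s, 0.0 * s], axis=-1),       # unit segment, speed 1
          lambda s: np.stack([0.0 * s, 2.0 * s], axis=-1)]  # segment of length 2, speed 2
weights = [1.0, 0.5]  # pi = 1.0*delta_{gamma^1} + 0.5*delta_{gamma^2}

f = lambda p: p[..., 0] + p[..., 1]  # a Lipschitz test function

def T(f):
    """[[pi]](1, f) = sum_i w_i (f(gamma^i_1) - f(gamma^i_0)), via Stieltjes sums."""
    return sum(w * float(np.sum(np.diff(f(c(t))))) for w, c in zip(weights, curves))

def mass_bound():
    """Right-hand side of (4.11): iint_0^1 |gamma'_t| dt dpi, via chord sums."""
    return sum(w * float(np.sum(np.linalg.norm(np.diff(c(t), axis=0), axis=-1)))
               for w, c in zip(weights, curves))

# Boundary contributions: 1.0*(f(1,0) - f(0,0)) + 0.5*(f(0,2) - f(0,0)) = 1 + 1.
assert abs(T(f) - 2.0) < 1e-9
# Total mass: 1.0*1 + 0.5*2 = 2.
assert abs(mass_bound() - 2.0) < 1e-9
```

Here the two curves have disjoint interiors, so equality holds in (4.11); overlapping curves with cancelling orientations would make the inequality strict.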

The superposition principle states that every normal 1-current is of the form $$[\![\pi ]\!]$$ for some $$\pi$$ as above, and moreover $$\pi$$ can be chosen so that equality holds in (4.11). For the proof of the following result we refer to [36, Corollary 3.3]:

### Theorem 4.8

(Superposition principle) Let $$\mathrm{Y}$$ be a complete and separable space and T a normal 1-current. Then there is a finite non-negative Borel measure $$\pi$$ on $${\overline{\Gamma }}(\mathrm{Y})$$ concentrated on $$\Gamma (\mathrm{Y})$$ such that

\begin{aligned} \begin{aligned} T&=[\![\pi ]\!],\\ {\mathbb {M}}(T)&=\iint _0^1|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi ([\gamma ]). \end{aligned} \end{aligned}

For our applications, it will be more convenient to deal with measures on $$C([0,1];\mathrm{Y})$$ rather than on $${\overline{\Gamma }}(\mathrm{Y})$$. Using Lemma 4.7 and Proposition 4.6, we can reformulate Theorem 4.8 as follows:

### Theorem 4.9

(Superposition principle—equivalent formulation) Let $$\mathrm{Y}$$ be a complete and separable metric space and T a normal 1-current. Then there is a finite non-negative Borel measure $$\pi$$ on $$C([0,1];\mathrm{Y})$$ concentrated on the set of non-constant absolutely continuous curves with constant speed such that

\begin{aligned} \begin{aligned} T(g,f)&=\iint _0^1 g(\gamma _t)(f\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ),\\ \int g \,\mathrm{d}\Vert T\Vert&=\iint _0^1g(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ) \end{aligned} \end{aligned}
(4.12)

for any $$g\in \mathrm {LIP}_b(\mathrm{Y})$$ and $$f\in \mathrm {LIP}(\mathrm{Y})$$.

### Proof

By Theorem 4.8, there is a finite measure $${\overline{\eta }} \in {\mathscr {M}}(\Gamma (\mathrm{Y}))$$ for which

\begin{aligned} T(g,f)&=\int \!\!\!\int _0^1g(\theta _t)(f\circ \theta )'_t\,\mathrm{d}t\, \mathrm{d}{\overline{\eta }}(\theta )\quad \text { for all }(g,f)\in \mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y}),\\ {\mathbb {M}}(T)&=\int \ell (\theta )\,\mathrm{d}{\overline{\eta }}(\theta ). \end{aligned}

We define $$\pi \in {\mathscr {M}}(C([0,1];\mathrm{Y}))$$ as

\begin{aligned} \pi :=\textsf {ConstSpRep}_*{\overline{\eta }}. \end{aligned}

Since both

\begin{aligned} \ell (\theta )=\int _0^1|{\dot{\theta }}_t|\,\mathrm{d}t\quad \text { and }\quad \int _0^1g(\theta _t)(f\circ \theta )'_t\,\mathrm{d}t \end{aligned}

are independent of parametrization, we have the identities

\begin{aligned} T(g,f)&=\iint _0^1g(\gamma _t)(f\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ), \end{aligned}
(4.13)
\begin{aligned} {\mathbb {M}}(T)&=\iint _0^1|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ) \end{aligned}
(4.14)

for all $$(g,f)\in \mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})$$.
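The parametrization invariance invoked here can be checked by hand. A minimal sketch, under the assumption that $$s:[0,1]\rightarrow [0,1]$$ is a non-decreasing, surjective Lipschitz reparametrization and $$\theta =\gamma \circ s$$, so that the chain rule holds for a.e. t:

```latex
% Reparametrisation invariance of the two integrals above.
% Assumptions: s : [0,1] -> [0,1] Lipschitz, non-decreasing, surjective;
% theta = gamma o s; chain rule a.e.:
%   (f o theta)'_t = (f o gamma)'_{s(t)} s'(t),  |theta'_t| = |gamma'_{s(t)}| s'(t).
\begin{aligned}
\int_0^1 g(\theta_t)\,(f\circ\theta)'_t\,\mathrm{d}t
  &= \int_0^1 g\big(\gamma_{s(t)}\big)\,(f\circ\gamma)'_{s(t)}\,s'(t)\,\mathrm{d}t
   = \int_0^1 g(\gamma_r)\,(f\circ\gamma)'_r\,\mathrm{d}r,\\
\ell(\theta)
  &= \int_0^1 \big|\dot\theta_t\big|\,\mathrm{d}t
   = \int_0^1 \big|\dot\gamma_{s(t)}\big|\,s'(t)\,\mathrm{d}t
   = \int_0^1 \big|\dot\gamma_r\big|\,\mathrm{d}r
   = \ell(\gamma),
\end{aligned}
% where r = s(t) is the change of variables.
```

In particular, passing from $${\overline{\eta }}$$ to $$\pi =\textsf {ConstSpRep}_*{\overline{\eta }}$$ changes neither integral.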

It remains to prove the second identity in the claim. It suffices to prove it for $$g={\chi }_E$$ for Borel sets $$E\subset \mathrm{Y}$$. It follows from (4.13) that

\begin{aligned} \big |T(g,f)\big |\le \iint _0^1|g|(\gamma _t)\mathrm {lip}_af(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ), \end{aligned}

whence $$\Vert T\Vert \le \nu$$, where $$\nu$$ is defined by

\begin{aligned} \nu (E):=\iint _0^1{\chi }_E(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ),\quad E\subset \mathrm{Y}\text { Borel}. \end{aligned}

By the characterisation of mass (see [7, Proposition 2.7]) it follows that, for every $$\varepsilon >0$$, there are functions $$(g_\varepsilon ^1,f_\varepsilon ^1),\ldots , (g_\varepsilon ^m,f_\varepsilon ^m) \in \mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})$$ such that $$\sum _{j=1}^m|g_\varepsilon ^j|\le 1$$ and $$\mathrm {Lip}(f_\varepsilon ^j)\le 1$$, $$1\le j\le m$$, and for which

\begin{aligned} {\mathbb {M}}(T)-\varepsilon <\sum _{j=1}^mT(g_\varepsilon ^j,f_\varepsilon ^j). \end{aligned}

Using (4.13) and the identity $$1={\chi }_E(\gamma _t)+{\chi }_{\mathrm{Y}\setminus E}(\gamma _t)$$, we have

\begin{aligned}&\iint _0^1{\chi }_E(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )+\iint _0^1{\chi }_{\mathrm{Y}\setminus E}(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )-\varepsilon \\&\quad ={\mathbb {M}}(T) -\varepsilon <\sum _{j=1}^mT(g_\varepsilon ^j,f_\varepsilon ^j)\\&\quad =\sum _{j=1}^m\iint _0^1{\chi }_E(\gamma _t)g_\varepsilon ^j(\gamma _t)(f_\varepsilon ^j\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )\\&\qquad +\sum _{j=1}^m\iint _0^1{\chi }_{\mathrm{Y}\setminus E}(\gamma _t)g_\varepsilon ^j(\gamma _t)(f_\varepsilon ^j\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )\\&\quad \le \sum _{j=1}^m\iint _0^1{\chi }_E(\gamma _t)g_\varepsilon ^j(\gamma _t)(f_\varepsilon ^j\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )+\iint _0^1{\chi }_{\mathrm{Y}\setminus E}(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma ), \end{aligned}

which implies

\begin{aligned}&\iint _0^1{\chi }_E(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )-\varepsilon \\&\le \sum _{j=1}^m\iint _0^1{\chi }_E(\gamma _t)g_\varepsilon ^j(\gamma _t)(f_\varepsilon ^j\circ \gamma )'_t\,\mathrm{d}t\,\mathrm{d}\pi (\gamma )\le \Vert T\Vert (E) \end{aligned}

for every $$\varepsilon >0$$. It follows that $$\Vert T\Vert =\nu$$, and this completes the proof of the last identity in (4.12). $$\square$$

## Metric Measure Spaces

For our purposes, a metric measure space is a triple $$(\mathrm{Y},\textsf {d} ,\mu )$$ where $$(\mathrm{Y},\textsf {d} )$$ is a complete separable metric space and $$\mu$$ a Borel measure on $$\mathrm{Y}$$ that is finite on bounded sets.

### Derivations and the Space $$\mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$

We introduce derivations and their basic properties, based on the presentation in [14, 15]. This notion of derivation is inspired by a similar concept introduced by N. Weaver.

Let us denote by $$L^0(\mu )$$ the set of equivalence classes of $$\mu$$-measurable maps on $$\mathrm{Y}$$ (without any integrability assumptions).

### Definition 5.1

A derivation b on $$\mathrm{Y}$$ is a linear map $$b:\,\mathrm {LIP}_b(\mathrm{Y})\rightarrow L^0(\mu )$$ satisfying the following two conditions:

1. (1)

(Leibniz rule) $$b(fh)=fb(h)+hb(f)$$ $$\mu$$-a.e. for all $$f,h\in \mathrm {LIP}_b(\mathrm{Y})$$.

2. (2)

(Weak locality) There is $$g\in L^0(\mu )$$ such that $$\big |b(f)\big |\le g\,\mathrm {lip}_af$$ $$\mu$$-a.e. for all $$f\in \mathrm {LIP}_b(\mathrm{Y})$$.
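To fix ideas, here is the standard Euclidean model; this is an illustration and not taken from the text, with the vector field $$V$$ introduced purely for the example:

```latex
% Model example: Y = R^n with the Euclidean distance and mu = Lebesgue measure.
% For a bounded Borel vector field V : R^n -> R^n define
%     b_V(f) := < grad f , V >,
% where grad f exists a.e. by Rademacher's theorem.
% (1) Leibniz rule:
\begin{aligned}
b_V(fh)=\langle \nabla (fh),V\rangle
       =f\,\langle \nabla h,V\rangle +h\,\langle \nabla f,V\rangle
       =f\,b_V(h)+h\,b_V(f).
\end{aligned}
% (2) Weak locality, with g := |V|:
\begin{aligned}
|b_V(f)|\le |\nabla f|\,|V|\le \mathrm{lip}_a f\,|V|
\quad \text{a.e. in }\mathbb{R}^n .
\end{aligned}
```

Testing with 1-Lipschitz functions close to $$x\mapsto \langle x,e\rangle$$, $$|e|\le 1$$, one also sees that the choice $$g=|V|$$ in (2) is optimal a.e.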

We denote the set of derivations on $$\mathrm{Y}$$ by $$\mathrm{Der}(\mathrm{Y})$$. The space $$\mathrm{Der}(\mathrm{Y})$$ is a $$\mathrm {LIP}_b(\mathrm{Y})$$-module: given a Lipschitz function $$\varphi \in \mathrm {LIP}_b(\mathrm{Y})$$ and a derivation $$b\in \mathrm{Der}(\mathrm{Y})$$, the linear map

\begin{aligned} \varphi b:\,\mathrm {LIP}_b(\mathrm{Y})\rightarrow L^0(\mu ),\quad f\mapsto \varphi b(f) \end{aligned}

is again a derivation.

### Remark 5.2

By weak locality, we may extend a derivation $$b\in \mathrm{Der}(\mathrm{Y})$$ to act on $$\mathrm {LIP}(\mathrm{Y})$$. Indeed, given $$f\in \mathrm {LIP}(\mathrm{Y})$$ and an open ball $$B\subset \mathrm{Y}$$, we have

\begin{aligned} {\chi }_Bb(f)={\chi }_B b({\tilde{f}}) \end{aligned}

for any $${\tilde{f}}\in \mathrm {LIP}_b(\mathrm{Y})$$ for which $${\tilde{f}}=f$$ on B. Thus, for any $$f\in \mathrm {LIP}(\mathrm{Y})$$ (and some fixed $$x_0\in \mathrm{Y}$$), the function

\begin{aligned} b(f)=\lim _{n\rightarrow \infty }{\chi }_{B_n(x_0)}b\big ((1-\mathrm {dist}(\cdot ,B_n(x_0)))_+f\big ) \end{aligned}

is well-defined, and $$\mathrm {LIP}(\mathrm{Y})\ni f\mapsto b(f)$$ satisfies (1) and (2) above. $$\square$$

Given a derivation $$b\in \mathrm{Der}(\mathrm{Y})$$, we define

\begin{aligned} |b|:=\mathrm{ess\,sup}\big \{b(f)\;\big |\;f\in \mathrm {LIP}_b(\mathrm{Y}),\,\mathrm {Lip}(f)\le 1\big \}. \end{aligned}

### Lemma 5.3

Let $$b\in \mathrm{Der}(\mathrm{Y})$$. Then |b| satisfies (2) in Definition 5.1. Moreover, |b| is the least function satisfying (2) in Definition 5.1.

### Proof

Let $$f\in \mathrm {LIP}_b(\mathrm{Y})$$. For $$x\in \mathrm{Y}$$ and $$r>0$$, set $$L_r(x):=\mathrm {Lip}\big (f|_{B_r(x)}\big )$$. Consider the McShane extension $$g_r$$ of $$f|_{B_r(x)}$$, with Lipschitz constant $$L_r:=L_r(x)$$; in particular, $$g_r/L_r$$ is 1-Lipschitz and so we have

\begin{aligned} \big |b(g_r/L_r)\big | \le |b| \qquad \mu \text {-almost everywhere.} \end{aligned}

Since $$b(g_r)=b(f)$$ in $$B_r(x)$$, we deduce that $$\big |b(f/L_r)\big |\le |b|$$ holds $$\mu$$-almost everywhere on $$B_r(x)$$. Thus for each $$x\in \mathrm{Y}$$ and $$r>0$$ we have

\begin{aligned} \big |b(f)\big | (y)\le |b|(y) \cdot L_r(x)\le |b|(y) \cdot L_{2r}(y)\qquad \mu \text {-a.e. }y\in B_r(x). \end{aligned}

Using this reasoning for a countable dense set $$(x_n)\subset \mathrm{Y}$$, we deduce that for every $$r>0$$

\begin{aligned} \big |b(f)\big |(y)\le |b|(y) \cdot L_r(y) \qquad \mu \text {-a.e. }y\in \mathrm{Y}. \end{aligned}

The conclusion now follows by taking a sequence $$r_n\downarrow 0$$ and passing to the limit as $$n\rightarrow \infty$$. $$\square$$

A derivation $$b\in \mathrm{Der}(\mathrm{Y})$$ is said to have divergence if there exists a function $$h\in L^1_b(\mu )$$ (that is, h is integrable on bounded sets) so that

\begin{aligned} \int b(f)\,\mathrm{d}\mu =-\int f h\,\mathrm{d}\mu \quad \text { for all }f\in \mathrm {LIP}_b(\mathrm{Y}) \end{aligned}
(5.1)

(whenever this makes sense). If such a function h exists, it is unique and we denote it by $$\mathrm{div}(b)$$ or $$\mathrm{div}\,b$$. The set of $$b\in \mathrm{Der}(\mathrm{Y})$$ that have divergence is denoted by $$D(\mathrm{div})$$.
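In the Euclidean model $$b_V(f)=\langle \nabla f,V\rangle$$ on $$({\mathbb {R}}^n,{\mathcal {L}}^n)$$ (an illustration, with $$V$$ a hypothetical $$C^1$$ vector field with bounded support, not an object from the text), the divergence of (5.1) reduces to the classical one:

```latex
% Integration by parts: for every f in LIP_b(R^n),
\begin{aligned}
\int b_V(f)\,\mathrm{d}x
  =\int \langle \nabla f,V\rangle \,\mathrm{d}x
  =-\int f\,\operatorname{div}V\,\mathrm{d}x ,
\end{aligned}
% so (5.1) holds with h = div V, i.e. div(b_V) is the classical divergence.
% If in addition |V| and div V lie in L^2(R^n), then b_V is in Der^{2,2}.
```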

For $$1\le p\le \infty$$ we set

\begin{aligned} \mathrm{Der}^p_b(\mathrm{Y};\mu ):= & {} \big \{b\in \mathrm{Der}(\mathrm{Y})\,:\,|b|\in L^p_b(\mu )\big \}; \\ \qquad \mathrm{Der}^p(\mathrm{Y};\mu ):= & {} \big \{b\in \mathrm{Der}(\mathrm{Y})\,:\,|b|\in L^p(\mu )\big \} \end{aligned}

and, for $$1\le p,q<\infty$$,

\begin{aligned} \mathrm{Der}^{p,q}(\mathrm{Y};\mu )&:=\big \{b\in \mathrm{Der}^p(\mathrm{Y};\mu )\cap D(\mathrm{div})\,:\,\mathrm{div}\,b\in L^q(\mu )\big \}. \end{aligned}

### Lemma 5.4

Let $$b\in D(\mathrm{div})$$. Assume $$(f_n)$$ is a sequence in $$\mathrm {LIP}_b(\mathrm{Y})$$ converging to $$f\in \mathrm {LIP}_b(\mathrm{Y})$$ pointwise and with $$\sup _n\mathrm {Lip}(f_n)<\infty$$.

1. (1)

Then

\begin{aligned} \int \varphi b(f_n)\,\mathrm{d}\mu \rightarrow \int \varphi b(f)\,\mathrm{d}\mu \end{aligned}
(5.2)

for each $$\varphi \in \mathrm {LIP}_b(\mathrm{Y})$$.

2. (2)

If, in addition, $$b\in \mathrm{Der}^p_b(\mathrm{Y};\mu )$$ for some $$1<p<\infty$$, then the convergence (5.2) holds for all $$\varphi \in L^q(\mu )$$ with bounded support. Here q is the conjugate exponent of p, i.e. $$1/p+1/q=1$$.

### Proof

By linearity it suffices to prove the claims when $$f=0$$. The Leibniz rule implies

\begin{aligned} \varphi b(f_n)=b(\varphi f_n)-f_nb(\varphi ). \end{aligned}

Thus

\begin{aligned} \int \varphi b(f_n)\,\mathrm{d}\mu&=\int b(\varphi f_n)\,\mathrm{d}\mu -\int f_n b(\varphi )\,\mathrm{d}\mu =-\int f_n\big [\varphi \mathrm{div}\,b+b(\varphi )\big ]\,\mathrm{d}\mu . \end{aligned}

Since $$f_n\rightarrow 0$$ pointwise and $$\sup _n\mathrm {Lip}(f_n)<\infty$$ it follows—using the dominated convergence theorem—that $$\int \varphi b(f_n)\,\mathrm{d}\mu \rightarrow 0$$ for all $$\varphi \in \mathrm {LIP}_b(\mathrm{Y})$$. This proves (1).

Let $$\varphi \in L^q(\mu )$$ have bounded support $$B'\subset \mathrm{Y}$$, and consider the set $$B=\big \{x\,:\,\mathrm {dist}(B',x)\le 1\big \}$$. Take a sequence $$(\varphi _m)\subset \mathrm {LIP}_b(\mathrm{Y})$$ with supports in B such that

\begin{aligned} \lim _{m\rightarrow \infty }\int |\varphi _m-\varphi |^q\,\mathrm{d}\mu =0. \end{aligned}

Denote

\begin{aligned} L=\sup _n\mathrm {Lip}(f_n). \end{aligned}

Then, for each $$m,n\in {\mathbb {N}}$$ we may estimate

\begin{aligned} \left| \int \varphi b(f_n)\,\mathrm{d}\mu \right| \le&\left| \int \varphi _m b(f_n)\,\mathrm{d}\mu \right| +\left| \int (\varphi -\varphi _m) b(f_n)\,\mathrm{d}\mu \right| \\ \le&\left| \int \varphi _m b(f_n)\,\mathrm{d}\mu \right| +\left( \int |\varphi _m-\varphi |^q\,\mathrm{d}\mu \right) ^{1/q} \left( \int _B\big |b(f_n)\big |^p\,\mathrm{d}\mu \right) ^{1/p}\\ \le&\left| \int \varphi _m b(f_n)\,\mathrm{d}\mu \right| +L\left( \int |\varphi _m-\varphi |^q\,\mathrm{d}\mu \right) ^{1/q} \left( \int _B|b|^p\,\mathrm{d}\mu \right) ^{1/p} \end{aligned}

Taking first $$\varlimsup _{n\rightarrow \infty }$$ and then $$\varlimsup _{m\rightarrow \infty }$$ we obtain

\begin{aligned} \lim _{n\rightarrow \infty }\int \varphi b(f_n)\,\mathrm{d}\mu =0, \end{aligned}

thus proving (2). $$\square$$

In order to prove the next proposition, we recall the notion of strong locality, cf. [14, Lemma 7.13]: if $$b\in D(\mathrm{div})$$, then for every $$f,g\in \mathrm {LIP}_b(\mathrm{Y})$$ we have

\begin{aligned} b(f)=b(g)\quad \mu \text {-almost everywhere on }\{ f=g \} \end{aligned}

and, moreover, $$b(f)=b(g)$$ $$\mu$$-almost everywhere on every closed set $$C\subset \mathrm{Y}$$ on which f and g coincide.

### Proposition 5.5

Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a metric measure space, $$\Omega \subset \mathrm{Y}$$ an open set, and $$b \in \mathrm{Der}^1_b(\mathrm{Y};\mu )\cap D(\mathrm{div})$$. Let $${\mathcal {D}}=(x_n)\subset \mathrm{Y}$$ be countable and dense in $$\Omega$$. Define $$f_n(x):=\textsf {d} (x_n,x)$$. Then we have

\begin{aligned} \mathrm{ess\,sup}_n\big \{b(f_n)\big \}=|b|,\quad \mathrm{ess\,inf}_n\big \{b(f_n)\big \}=-|b| \end{aligned}

$$\mu$$-almost everywhere in $$\Omega$$.

### Proof

Denote $$h_+=\mathrm{ess\,sup}_nb(f_n)$$ and $$h_-=\mathrm{ess\,inf}_nb(f_n)$$. It suffices to prove that $$|b|\le h_+$$ and $$-|b|\ge h_-$$ $$\mu$$-almost everywhere on $$\Omega$$.

### Claim

Consider the countable set

\begin{aligned} {\mathscr {A}}=\big \{(\mathrm {dist}_{x_1}+q_1)\wedge \cdots \wedge (\mathrm {dist}_{x_k}+q_k) \;\big |\;x_1,\ldots ,x_k\in {\mathcal {D}},q_1,\ldots ,q_k\in {\mathbb {Q}},\,k\in {\mathbb {N}}\big \}. \end{aligned}

The set of restrictions $$\big \{g|_\Omega \,:\,g\in {\mathscr {A}}\big \}$$ is dense in $$\mathrm {LIP}_1(\Omega ):=\big \{f\in \mathrm {LIP}(\Omega )\,:\,\mathrm {Lip}(f)\le 1\big \}$$ in the topology of pointwise convergence.

### Proof of Claim

Let $$f\in \mathrm {LIP}_1(\mathrm{Y})$$. Since $${\mathcal {D}}$$ is dense in $$\Omega$$, it is easy to see that

\begin{aligned} f(x)=\inf \big \{g(x)\,:\,g\in {\mathscr {A}},\,g\ge f\big \} \end{aligned}

for every $$x\in \Omega$$. For each $$x_k\in {\mathcal {D}}$$, let $$(g_k^m)_m$$ be a sequence in $${\mathscr {A}}$$ satisfying

\begin{aligned} g_k^m(x_k)-f(x_k)<1/m \end{aligned}

for all $$m\in {\mathbb {N}}$$. Set

\begin{aligned} g_n=g_1^n\wedge \cdots \wedge g_n^n. \end{aligned}

Then $$(g_n)$$ is a sequence in $${\mathscr {A}}$$, and

\begin{aligned} \lim _{n\rightarrow \infty }g_n(x_k)= f(x_k) \end{aligned}

for every $$x_k\in {\mathcal {D}}$$. Since $$g_n$$ and f are 1-Lipschitz functions, it follows that

\begin{aligned} \lim _{n\rightarrow \infty }g_n(x)= f(x) \end{aligned}

for every $$x\in \Omega$$. $$\square$$

For any $$f\in \mathrm {LIP}_1(\mathrm{Y})$$, let $$(g_j)\subset {\mathscr {A}}$$ be a sequence such that $$g_j|_\Omega$$ converges to $$f|_\Omega$$ pointwise. By passing to a subsequence we may assume that $$g_j$$ converges pointwise to some 1-Lipschitz function $$f'$$ in $$\mathrm{Y}$$ (in this case $$f'=f$$ on $$\Omega$$). For each j write $$g_j$$ as

\begin{aligned} g_j=f_1^j\wedge \cdots \wedge f_{k_j}^j, \end{aligned}

with $$f_n^j=\mathrm {dist}_{x_n^j}+q_n^j$$. Define the sets $$B^j_n$$, $$n=1,2,\ldots$$ as $$B^j_n:=\{ g_j=f _n^j \}$$ if $$1\le n\le k_j$$ and $$B^j_n:=\varnothing$$ if $$n>k_j$$; also set

\begin{aligned} C_n^j:=B_n^j\setminus \bigcup _{m<n}B_m^j. \end{aligned}

Note that $$\{C_n^j\}_n$$ is a partition of $$\mathrm{Y}$$. By the strong locality of b we have

\begin{aligned} b(g_j)=b(f_n^j) \end{aligned}

$$\mu$$-almost everywhere on $$B_n^j$$. Thus the identity

\begin{aligned} b(g_j)=\sum _n\chi _{C_n^j}b(f_n^j) \end{aligned}

is valid $$\mu$$-almost everywhere. For any non-negative $$\eta \in \mathrm {LIP}_b(\mathrm{Y})$$ with bounded support, we then have

\begin{aligned} \int \eta b(g_j)\,\mathrm{d}\mu =\sum _n\int \eta {\chi }_{C_n^j}b(f_n^j)\,\mathrm{d}\mu \le \sum _n\int \eta {\chi }_{C_n^j}h_+\,\mathrm{d}\mu =\int \eta h_+\,\mathrm{d}\mu . \end{aligned}

It follows that $$b(g_j)\le h_+$$ $$\mu$$-a.e. and, by Lemma 5.4, that $$b(f')\le h_+$$ $$\mu$$-almost everywhere. Since $$f'=f$$ on $$\Omega$$, the locality of b implies that $$b(f)\le h_+$$ $$\mu$$-almost everywhere on $$\Omega$$. Since f is arbitrary, it follows that $$|b|\le h_+$$ $$\mu$$-almost everywhere on $$\Omega$$.

The inequality $$-|b|\ge h_-$$ ($$\mu$$-almost everywhere) on $$\Omega$$ is proven analogously, using the identity

\begin{aligned} -|b|=\mathrm{ess\,inf}_{\mathrm {Lip}(f)\le 1}b(f). \end{aligned}

$$\square$$

### Definition 5.6

Given $$p\ge 1$$, we define the norm $$\Vert \cdot \Vert _{p,p}$$ on $$\mathrm{Der}^{p,p}(\mathrm{Y};\mu )$$ as

\begin{aligned} \Vert b\Vert _{p,p}:=\left( \int |b|^p\,\mathrm{d}\mu +\int |\mathrm{div}\,b|^p\,\mathrm{d}\mu \right) ^{1/p}. \end{aligned}

The normed space $$\big (\mathrm{Der}^{p,p}(\mathrm{Y};\mu ),\Vert \cdot \Vert _{p,p}\big )$$ is a Banach space. We shall also use the norm

\begin{aligned} \Vert b\Vert _p:=\left( \int |b|^p\,\mathrm{d}\mu \right) ^{1/p}. \end{aligned}

### The Sobolev Space $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$

To define Sobolev spaces on metric measure spaces, we adopt the approach based on derivations with divergence.

### Definition 5.7

(Sobolev space) Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a metric measure space and $$p\in (1,\infty )$$. Let q be the conjugate exponent of p. A function $$f\in L^p(\mu )$$ belongs to the Sobolev space $$W^{1,p}(\mathrm{Y},\textsf {d} ,\mu )$$ provided there exists a $$\mathrm {LIP}_b(\mathrm{Y})$$-linear continuous map $$L_f:\,\mathrm{Der}^{q,q}(\mathrm{Y};\mu )\rightarrow L^1(\mu )$$ such that

\begin{aligned} \int L_f(b)\,\mathrm{d}\mu =-\int f\,\mathrm{div}\,b\,\mathrm{d}\mu \qquad \text { for every }b\in \mathrm{Der}^{q,q}(\mathrm{Y};\mu ). \end{aligned}
(5.3)

Whenever such a map $$L_f$$ exists, it is unique (cf. [14, Remark 7.1.5]).
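As a sanity check for Definition 5.7 (a sketch under stated assumptions, using the strong locality recalled before Proposition 5.5), every Lipschitz function with bounded support belonging to $$L^p(\mu )$$ is Sobolev:

```latex
% Sketch. Let f be Lipschitz with bounded support and f in L^p(mu).
% For b in Der^{q,q}(Y;mu) set  L_f(b) := b(f).
% - LIP_b-linearity: (phi b)(f) = phi b(f) by the module structure of Der(Y).
% - Integrability: strong locality gives b(f) = 0 mu-a.e. on {f = 0}, and
%     |b(f)| <= Lip(f) |b| chi_{supp f},
%   which is integrable since |b| is in L^q and mu is finite on bounded sets.
% - The compatibility condition (5.3) is exactly the definition (5.1) of div:
\begin{aligned}
\int L_f(b)\,\mathrm{d}\mu =\int b(f)\,\mathrm{d}\mu
  =-\int f\,\mathrm{div}\,b\,\mathrm{d}\mu .
\end{aligned}
% Hence f is in W^{1,p}(Y,d,mu) and, by weak locality and (5.4),
%     |Df| <= lip_a f    mu-a.e.
```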

### Theorem 5.8

(p-weak gradient) Let $$f\in W^{1,p}(\mathrm{Y},\textsf {d} ,\mu )$$. Then there is a function $$g_f\in L^p(\mu )$$ such that

\begin{aligned} \big |L_f(b)\big |\le g_f\,|b|\;\;\;\mu \text {-a.e.} \qquad \text { for every }b\in \mathrm{Der}^{q,q}(\mathrm{Y};\mu ). \end{aligned}
(5.4)

The least function $$g_f$$ (in the $$\mu$$-a.e. sense) that realises (5.4) is called the p-weak gradient of f and is denoted by |Df|.

For a proof of the previous result we refer to [14, Theorem 7.1.6]. We point out that the p-weak gradient |Df| might depend on p (this dependence is omitted in our notation). Thus, the p-weak gradient and the $$p'$$-weak gradient of a function in $$W^{1,p}(\mathrm{Y},\textsf {d} ,\mu )\cap W^{1,p'}(\mathrm{Y},\textsf {d} ,\mu )$$ can be different.

The space $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ equipped with the norm

\begin{aligned} {\Vert f\Vert }_{W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )}:= \left( \int |f|^2\,\mathrm{d}\mu +\int |Df|^2\,\mathrm{d}\mu \right) ^{1/2} \end{aligned}
(5.5)

is a Banach space. In general it is not a Hilbert space. There are alternative (equivalent) ways to define Sobolev spaces on metric measure spaces, namely the approaches that have been proposed in [5, 13, 40]; see also [9, 29] and the monographs [26, 27] for related discussions.

By combining [14, Theorem 7.2.5] with further results in the literature, one gets the following approximation theorem:

### Theorem 5.9

Let $$f\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ be given. Then there exists a sequence $$(f_n)\subset \mathrm {LIP}_b(\mathrm{Y})$$ such that $$f_n\rightarrow f$$ and $$\mathrm {lip}_af_n\rightarrow |Df|$$ in $$L^2(\mu )$$.

The following identity expresses a duality between $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ and $$\mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$.

### Proposition 5.10

Let $$f\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ be a given Sobolev function. Let us denote by $${\mathbb {B}}$$ the normed dual of $$\big (\mathrm{Der}^{2,2}(\mathrm{Y};\mu ),\Vert \cdot \Vert _2\big )$$. We define the element $${\mathscr {L}}_f\in {\mathbb {B}}$$ as $${\mathscr {L}}_f(b):=\int L_f(b)\,\mathrm{d}\mu$$ for every $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$. Then

\begin{aligned} {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}={\big \Vert |Df|\big \Vert }_{L^2(\mu )}. \end{aligned}
(5.6)

To prove Proposition 5.10, we use the following well-known lemma. Let $$\mathrm{LIP}_{bs}(\mathrm{Y})$$ be the space of all Lipschitz functions on $$\mathrm{Y}$$ with bounded support.

### Lemma 5.11

Let $$f\in L^\infty (\mu )$$ be given. Then there exists a sequence $$(f_n)_n\subseteq \mathrm{LIP}_{bs}(\mathrm{Y})$$ such that $$f_n\rightarrow f$$ pointwise $$\mu$$-a.e. and $${\Vert f_n\Vert }_{L^\infty (\mu )}\le {\Vert f\Vert }_{L^\infty (\mu )}$$ for every $$n\in {\mathbb {N}}$$.

### Proof of Proposition 5.10

Step 1. First of all, we claim that

\begin{aligned} {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}= \sup \Big \{\int \big |L_f(b)\big |\,\mathrm{d}\mu \;\Big | \;b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu ),\,\Vert b\Vert _2\le 1\Big \}. \end{aligned}
(5.7)

Call C the right-hand side of (5.7). Recall that by definition of dual norm we have

\begin{aligned} {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}= \sup \Big \{\int L_f(b)\,\mathrm{d}\mu \;\Big | \;b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu ),\,\Vert b\Vert _2\le 1\Big \}, \end{aligned}

whence trivially $${\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}\le C$$. To show the converse inequality, fix $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ with $$\Vert b\Vert _2\le 1$$. By Lemma 5.11, we can choose $$(g_n)_n\subseteq \mathrm{LIP}_{bs}(\mathrm{Y})$$ such that $$\sup _n|g_n|\le 1$$ and $$g_n\rightarrow \mathrm{sgn}\,L_f(b)$$ hold $$\mu$$-a.e. Hence by applying the dominated convergence theorem we get

\begin{aligned} \int L_f(g_n b)\,\mathrm{d}\mu =\int g_n\,L_f(b)\,\mathrm{d}\mu \longrightarrow \int \big (\mathrm{sgn}\,L_f(b)\big )L_f(b)\,\mathrm{d}\mu = \int \big |L_f(b)\big |\,\mathrm{d}\mu . \end{aligned}

Since $$g_n b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ and $$\Vert g_n b\Vert _2\le \Vert b\Vert _2\le 1$$ for all $$n\in {\mathbb {N}}$$, we have $$\int \big |L_f(b)\big |\,\mathrm{d}\mu \le {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}$$ and accordingly $$C\le {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}$$. This proves (5.7).

Step 2. It can be readily checked that

\begin{aligned} |Df|=\underset{b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )}{\mathrm{ess\,sup\,}} {\chi }_{\{|b|>0\}}\,\frac{L_f(b)}{|b|}\quad \text { in the }\mu \text {-a.e. sense.} \end{aligned}

This means that there exists a sequence $$(b_i)_i\subseteq \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ such that

\begin{aligned} |Df|=\sup _{i\in {\mathbb {N}}}\,{\chi }_{\{|b_i|>0\}}\frac{L_f(b_i)}{|b_i|} \quad \text { in the }\mu \text {-a.e. sense.} \end{aligned}

For any $$n\in {\mathbb {N}}$$, there are pairwise disjoint Borel subsets $$A^n_1,\ldots ,A^n_n$$ of $$\mathrm{Y}$$ such that $$|b_i|>0$$ $$\mu$$-a.e. on $$A^n_i$$ for all $$i\le n$$ and

\begin{aligned} \sup _{i\le n}\,{\chi }_{\{|b_i|>0\}}\frac{L_f(b_i)}{|b_i|}= \sum _{i=1}^n{\chi }_{A^n_i}\frac{L_f(b_i)}{|b_i|}\quad \text { in the }\mu \text {-a.e. sense}. \end{aligned}

By the monotone convergence theorem we have that $$\displaystyle \sum _{i=1}^n{\chi }_{A^n_i}\,L_f(b_i)/|b_i|\rightarrow |Df|$$ in $$L^2(\mu )$$ as $$n\rightarrow \infty$$. We may choose Borel subsets $$B^n_i\subseteq A^n_i$$ such that

\begin{aligned} \begin{aligned}&{\chi }_{B^n_i}/|b_i|\in L^\infty (\mu )\quad \text { for every }i\le n,\\&\sum _{i=1}^n\chi _{B^n_i}\overset{n\rightarrow \infty }{\longrightarrow }\chi _D \quad \mu \text {-a.e.},\\&\sum _{i=1}^n{\chi }_{B^n_i}\,\frac{L_f(b_i)}{|b_i|} {\mathop {\longrightarrow }\limits ^{n\rightarrow \infty }}|Df|\;\;\text { in }L^2(\mu ). \end{aligned} \end{aligned}
(5.8)

Now let $$n\in {\mathbb {N}}$$ be fixed. Lemma 5.11 grants for all $$i\le n$$ the existence of $$(g^i_k)_k\subseteq \mathrm{LIP}_{bs}(\mathrm{Y})$$ such that $$\sup _k{\Vert g^i_k\Vert }_{L^\infty (\mu )}<+\infty$$ and $$g^i_k\rightarrow {\chi }_{B^n_i}/|b_i|$$ $$\mu$$-a.e. in $$\mathrm{Y}$$. Then an application of the dominated convergence theorem yields

\begin{aligned} \begin{aligned} \sum _{i=1}^n g^i_k\,L_f(b_i)\overset{k\rightarrow \infty }{\longrightarrow }\sum _{i=1}^n{\chi }_{B^n_i}\frac{L_f(b_i)}{|b_i|}&\quad \text { in }L^2(\mu ),\\ \left| \sum _{i=1}^n g^i_k\,b_i\right| \overset{k\rightarrow \infty }{\longrightarrow }\sum _{i=1}^n\chi _{B^n_i}&\quad \text { in }L^1_{loc}(\mu ). \end{aligned} \end{aligned}

Hence, possibly after passing to a (not relabeled) subsequence, for sufficiently large $$k=k_n$$ the derivation

\begin{aligned} {\tilde{b}}_n:=\sum _{i=1}^n g^i_{k_n}\,b_i\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu ) \end{aligned}

satisfies

\begin{aligned} \begin{aligned} L_f({\tilde{b}}_n)&\rightarrow |Df| \;\;\;\text { in }L^2(\mu ),\\ |{\tilde{b}}_n|&\rightarrow {\chi }_D \;\;\;\mu \text {-a.e.}\quad \text { as }n\rightarrow \infty ,\\ {\chi }_B|{\tilde{b}}_n|&\le G \text { for some } G\in L^1(\mu ) \;\;\;\text { for each bounded }B\subset \mathrm{Y}. \end{aligned} \end{aligned}
(5.9)

Step 3. We can finally prove (5.6). For any $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ with $$\Vert b\Vert _2\le 1$$ it holds that

\begin{aligned} \int \big |L_f(b)\big |\,\mathrm{d}\mu \le \int |Df||b|\,\mathrm{d}\mu \le {\big \Vert |Df|\big \Vert }_{L^2(\mu )} \end{aligned}

by Hölder inequality, whence $${\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}} \le {\big \Vert |Df|\big \Vert }_{L^2(\mu )}$$ by (5.7). For the converse inequality, fix $$h\in \mathrm{LIP}_{bs}(\mathrm{Y})$$. By (5.9) we get

\begin{aligned} \int |h||Df|\,\mathrm{d}\mu= & {} \lim _{n\rightarrow \infty }\int |h|\big |L_f({\tilde{b}}_n)\big |\,\mathrm{d}\mu \nonumber \\= & {} \lim _{n\rightarrow \infty }\int \big |L_f(h{\tilde{b}}_n)\big |\,\mathrm{d}\mu \overset{(5.7)}{\le }\lim _{n\rightarrow \infty }{\Vert h{\tilde{b}}_n\Vert }_2 {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}\nonumber \\= & {} {\Vert h{\chi }_D\Vert }_{L^2(\mu )}{\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}} \le {\Vert h\Vert }_{L^2(\mu )}{\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}. \end{aligned}
(5.10)

Now choose any sequence $$(h_i)_i\subseteq \mathrm{LIP}_{bs}(\mathrm{Y})$$ such that $$h_i\rightarrow |Df|$$ pointwise $$\mu$$-a.e. and in $$L^2(\mu )$$. By writing (5.10) with $$h_i$$ in place of h and then letting $$i\rightarrow \infty$$, we conclude that $${\big \Vert |Df|\big \Vert }_{L^2(\mu )}\le {\Vert {\mathscr {L}}_f\Vert }_{{\mathbb {B}}}$$, as required. $$\square$$

## Proof of the Main Result in the Separable Case

In this section, we assume that $$(\mathrm{Y},\textsf {d} )$$ is a complete and separable local $$\mathrm{CAT}(\kappa )$$ space equipped with a Borel measure $$\mu$$ that is finite on bounded sets. As discussed in the introduction, the crucial step in the proof of Theorem 1.1 is the construction of an embedding of the ‘abstract analytical object’ $$\mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ into the ‘concrete and geometric bundle’ $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ that preserves distances on fibres. The construction of such an embedding is the aim of this section.

We start by recalling the following general fact (see also [39, Theorem 3.7] for a general module homomorphism between derivations and 1-currents; under this homomorphism the boundary operator corresponds to the divergence).

### Lemma 6.1

(From derivations to currents) Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a metric measure space. Fix any derivation $$b\in \mathrm{Der}^{1,1}(\mathrm{Y};\mu )$$. Then the functional $$T_b:\,\mathrm {LIP}_b(\mathrm{Y})\times \mathrm {LIP}(\mathrm{Y})\rightarrow {\mathbb {R}}$$ defined by

\begin{aligned} T_b(g,f):=\int g b(f)\,\mathrm{d}\mu \end{aligned}
(6.1)

is a normal 1-current and the mass measure $$\Vert T_b\Vert$$ satisfies

\begin{aligned} \Vert T_b\Vert =|b|\mu . \end{aligned}
(6.2)

See Remark 5.2 for extending derivations to act on $$\mathrm {LIP}(\mathrm{Y})$$.

### Proof

By Lemma 5.3 we get the estimate

\begin{aligned} \big |T_b(g,f)\big |\le \int |g||b|\mathrm {lip}_a f\,\mathrm{d}\mu \le \mathrm {Lip}(f)\int |g||b|\,\mathrm{d}\mu \end{aligned}
(6.3)

and thus taking into account Lemma 5.4 we see that $$T_b$$ is a finite mass 1-current, with

\begin{aligned} \Vert T_b\Vert \le |b|\mu . \end{aligned}
(6.4)

It is moreover normal, since

\begin{aligned} T_b(1,f)=\int b(f)\,\mathrm{d}\mu =-\int f \mathrm{div}(b)\,\mathrm{d}\mu ,\quad \forall f\in \mathrm {LIP}_b(\mathrm{Y}). \end{aligned}

We are left with proving (6.2). By (6.4), it suffices to show that $${\mathbb {M}}(T_b)=\int |b|\,\mathrm{d}\mu$$. Let $$(x_n)\subset \mathrm{Y}$$ be countable and dense, and let $$f_n$$ be the function $$x\mapsto \textsf {d} (x_n,x)$$, for each $$n\in {\mathbb {N}}$$. For $$\varepsilon >0$$ and $$n\in {\mathbb {N}}$$, set

\begin{aligned} A_n:=\big \{x\in \mathrm{Y}:\ b(f_n)(x)\ge |b|(x)-\varepsilon \big \}\quad \text { and } B_n:=A_n\setminus \bigcup _{m<n}A_m. \end{aligned}

Since, by Proposition 5.5, $$|b|=\sup _nb(f_n)$$ $$\mu$$-almost everywhere, we have that the sets $$A_n$$ cover $$\mathrm{Y}$$ up to a set of $$\mu$$-measure zero. Thus the collection $$(B_n)$$ is a countable Borel partition of $$\mathrm{Y}$$ up to a $$\mu$$-null set. Let $$B\subset \mathrm{Y}$$ be a ball, and estimate

\begin{aligned} \int _B|b|\,\mathrm{d}\mu&=\sum _n\int _{B_n\cap B}|b|\,\mathrm{d}\mu \le \sum _n\int _{B_n\cap B} \big (b(f_n)+\varepsilon \big )\,\mathrm{d}\mu =\varepsilon \mu (B)\\&\qquad +\sum _n\int _{B_n\cap B}b(f_n)\,\mathrm{d}\mu \\&=\varepsilon \mu (B)+\sum _nT_b({\chi }_{B_n\cap B},f_n). \end{aligned}

By the characterization of mass (cf. [7, Proposition 2.7]), we obtain

\begin{aligned} \int _B|b|\,\mathrm{d}\mu \le \varepsilon \mu (B)+\Vert T_b\Vert (B). \end{aligned}

Since $$\varepsilon >0$$ and B are arbitrary, the claim follows. $$\square$$
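In the Euclidean illustration $$b_V(f)=\langle \nabla f,V\rangle$$ on $$({\mathbb {R}}^n,{\mathcal {L}}^n)$$ (with $$V$$ a hypothetical compactly supported $$C^1$$ vector field, used only as an example), Lemma 6.1 recovers the classical 1-current carried by a vector field:

```latex
% The 1-current induced by b_V via (6.1):
\begin{aligned}
T_{b_V}(g,f)=\int g\,\langle \nabla f,V\rangle \,\mathrm{d}x ,
\end{aligned}
% with mass measure ||T_{b_V}|| = |V| dx, by (6.2), and boundary
\begin{aligned}
\partial T_{b_V}(f)=T_{b_V}(1,f)=-\int f\,\operatorname{div}V\,\mathrm{d}x ,
\end{aligned}
% i.e. the boundary measure is -div(V) dx, matching div(b_V) = div V.
```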

We now come to the construction of the embedding.

### Theorem 6.2

(Embedding of $$\mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ into $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$) Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a complete and separable local $$\mathrm{CAT}(\kappa )$$ space equipped with a Borel measure $$\mu$$ which is finite on bounded sets, and let $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$. Then there exists a unique $$v\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ such that for any $${\bar{x}}\in \mathrm{Y}$$ and $$y\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ it holds that

\begin{aligned} \mathrm{d}_x\mathrm {dist}_y(v(x))=b(\mathrm {dist}_y)(x)\qquad \mu \text {-a.e. }x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}}). \end{aligned}
(6.5)

Moreover, v satisfies

\begin{aligned} |v(x)|_x=|b|(x)\qquad \mu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}
(6.6)

### Proof

Borel regularity. Taking into account Proposition 2.17(i), we can rewrite (6.5) as

\begin{aligned} {\langle }v(x),(\textsf {G} _x^y)'_0{\rangle }_x=-\textsf {d} (x,y)\,b(\mathrm {dist}_y)(x)\qquad \mu \text {-a.e. }x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}}). \end{aligned}
(6.7)

Thus taking into account the continuity of $$y\mapsto (\textsf {G} _x^y)'_0$$, established in Theorem 2.9, and the weak continuity of $$y\mapsto b(\mathrm {dist}_y)$$, given by Lemma 5.4, we see that (6.5) holds for every $$y\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ if and only if it holds for a countable and dense set of $$y\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$. Since the continuity of $$x\mapsto \textsf {r} _x$$ grants that $$B_{\textsf {r} _x}(x)\subset \cup _n B_{\textsf {r} _{x_n}}(x_n)$$ if $$x_n\rightarrow x$$, using an argument based on the Lindelöf property of $$\mathrm{Y}$$, we can reduce the claim to checking (6.5) for a countable and dense set of $${\bar{x}}$$’s.

Now for given $${\bar{x}}$$, and $$y\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ running in these countable sets, fix a Borel representative $$f_{{\bar{x}},y}$$ of $$b(\mathrm {dist}_y)$$ on $$B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ and notice that if v satisfies (6.5) for any $$y,{\bar{x}}$$ in such countable sets, there is a Borel $$\mu$$-negligible set $${\mathcal {N}}\subset \mathrm{Y}$$ such that $$\mathrm{d}_x\mathrm {dist}_y(v(x))=f_{{\bar{x}},y}(x)$$ for every $$x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})\setminus {\mathcal {N}}$$. Thus redefining v on $${\mathcal {N}}$$ by setting it to 0 and recalling Proposition 3.1 we conclude that any v for which (6.5) holds for any $${\bar{x}}\in \mathrm{Y}$$ and $$y\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ is, up to modification in a negligible set, a Borel section of $$\mathrm{T}_G\mathrm{Y}$$.

Integrability. Propositions 5.5 and 2.17 ensure that any v for which (6.5) holds also satisfies (2.12a). This, together with the Borel measurability proved above, implies that any v satisfying (6.5) belongs to $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$.

Uniqueness. Let $$v_1,v_2\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ satisfy (6.5) so that, by what we already proved, we have that $$|v_1(x)|_x=|v_2(x)|_x$$ for $$\mu$$-a.e. x. By Proposition 2.17(iii), we conclude that $$v_1(x)=v_2(x)$$ for $$\mu$$-a.e. x.

Existence. Assume at first that $$b\in \mathrm{Der}^{1,1}(\mathrm{Y};\mu )$$ and let $$T_b$$ be defined as in Lemma 6.1, so that $$T_b$$ is a normal 1-current. By Theorem 4.9, we find a finite non-negative Borel measure $$\pi$$ on $$C([0,1];\mathrm{Y})$$ concentrated on curves with constant speed for which (4.12) holds with $$T=T_b$$. Notice that, by restricting $$\pi$$ to the complement of the set of constant curves (this does not affect the validity of (4.12)), we can assume that $$\pi$$ gives 0 mass to constant curves.

Let $$\mathrm{e}:\,C([0,1];\mathrm{Y})\times [0,1]\rightarrow \mathrm{Y}$$ be the evaluation map defined as $$\mathrm{e}(\gamma ,t):=\gamma _t$$ and put $${\hat{\pi }}:=\pi \otimes ({\mathcal {L}}^1|_{[0,1]})$$ and $$\nu :=\mathrm{e}_*{\hat{\pi }}$$. Since $$C([0,1];\mathrm{Y})\times [0,1]$$ and $$\mathrm{Y}$$ are Polish spaces, we may apply the disintegration theorem (see e.g. [2, Theorem 5.3.1] or [17, Chap. 45]) to $${\hat{\pi }}$$ and $$\mathrm{e}$$ to find a weakly measurable family $$\{{\hat{\pi }}_x\}_{x\in \mathrm{Y}}$$ of Borel probability measures on $$C([0,1];\mathrm{Y})\times [0,1]$$ such that

\begin{aligned} \mathrm{e}_*{\hat{\pi }}_x=\delta _x\qquad \text { for }\nu \text {-a.e. }x \end{aligned}
(6.8)

and

\begin{aligned} \int \Psi (\gamma ,t)\,\mathrm{d}{\hat{\pi }}(\gamma ,t)=\int \left( \int \Psi \,\mathrm{d}{\hat{\pi }}_x(\gamma ,t)\right) \,\mathrm{d}\nu (x) \end{aligned}
(6.9)

for any Borel real-valued map $$\Psi$$ for which either of these two integrals makes sense.

Recall that the map $$\textsf {RightDer}:\,C([0,1];\mathrm{Y})\times [0,1]\rightarrow \mathrm{T}_G\mathrm{Y}$$ defined in (3.1) is Borel (Proposition 3.7) and set $${{\mathfrak {n}}}_x:=\textsf {RightDer}_*{\hat{\pi }}_x$$. Notice that although, by definition, the measures $${{\mathfrak {n}}}_x$$ are measures on $$\mathrm{T}_G\mathrm{Y}$$, in fact for $$\nu$$-a.e. x we have that $${{\mathfrak {n}}}_x$$ is concentrated on $$\mathrm{T}_x\mathrm{Y}$$ and will therefore be considered, with a slight abuse of notation, as a measure on $$\mathrm{T}_x\mathrm{Y}$$. To see this, let $$\pi ^\mathrm{Y}:\,\mathrm{T}_G\mathrm{Y}\rightarrow \mathrm{Y}$$ be the canonical projection and notice that $$\mathrm{e}=\pi ^\mathrm{Y}\circ \textsf {RightDer}$$, thus (6.8) gives $$\pi ^\mathrm{Y}_*{{\mathfrak {n}}}_x=\delta _x$$ for $$\nu$$-a.e. x, which implies the claim.

Now observe that, for any $$g\in \mathrm {LIP}_b(\mathrm{Y})$$, we have

\begin{aligned} \int g|b|\,\mathrm{d}\mu {\mathop {=}\limits ^{(4.12),(6.2)}}\iint _0^1g(\gamma _t) |{\dot{\gamma }}_t|\,\mathrm{d}{\hat{\pi }}(\gamma ,t). \end{aligned}

By Proposition 2.20 and the definition of $$\textsf {Norm}:\mathrm{T}_G\mathrm{Y}\rightarrow {\mathbb {R}}^+$$ given in Corollary 3.3 (which also grants that this map is Borel, so that the integrals below are well-defined) we have

\begin{aligned} \iint _0^1g(\gamma _t)|{\dot{\gamma }}_t|\,\mathrm{d}{\hat{\pi }}(\gamma ,t)=\iint _0^1g( \mathrm{e}(\gamma ,t))\,\textsf {Norm}(\textsf {RightDer}(\gamma ,t))\,\mathrm{d}{\hat{\pi }}(\gamma ,t). \end{aligned}

Therefore we have

\begin{aligned} \begin{aligned} \int g|b|\,\mathrm{d}\mu&=\iint _0^1g( \mathrm{e}(\gamma ,t))\,\textsf {Norm}(\textsf {RightDer}(\gamma ,t))\,\mathrm{d}{\hat{\pi }}(\gamma ,t) \\ \text {(by}\, (6.9))\qquad&=\int g(x)\int \textsf {Norm}(\textsf {RightDer}(\gamma ,t))\,\mathrm{d}{\hat{\pi }}_x(\gamma ,t)\,\mathrm{d}\nu (x)\\ \text {(by definition of }{{\mathfrak {n}}}_x)\qquad&=\int g(x)\int \textsf {Norm}(y,v)\,\mathrm{d}{{\mathfrak {n}}}_x(y,v)\,\mathrm{d}\nu (x)\\&=\int g(x)\int |v|_x\,\mathrm{d}{{\mathfrak {n}}}_x(v)\,\mathrm{d}\nu (x). \end{aligned} \end{aligned}
(6.10)

As mentioned above, in the last step we made a slight abuse of notation by considering $${{\mathfrak {n}}}_x$$ as a measure on $$\mathrm{T}_x\mathrm{Y}$$. In particular, choosing $$g\equiv 1$$, we get

\begin{aligned} \iint |v|_x\,\mathrm{d}{{\mathfrak {n}}}_x(v)\,\mathrm{d}\nu (x)=\int |b|\,\mathrm{d}\mu <\infty , \end{aligned}

which implies that $${{\mathfrak {n}}}_x\in {\mathscr {P}}_1(\mathrm{T}_x\mathrm{Y})$$ for $$\nu$$-a.e. x. Let us define

\begin{aligned} \textsf {B}(x):=\textsf {Bar} ({{\mathfrak {n}}}_x)\in \mathrm{T}_x\mathrm{Y}\qquad \text { for }\nu \text {-a.e. }x. \end{aligned}

Now set, for brevity, $$\Phi (x):=\int |v|_x\,\mathrm{d}{{\mathfrak {n}}}_x(v)$$ and notice that the regularity granted by the disintegration theorem ensures that $$\Phi$$ is Borel. In addition, the fact that $$\pi$$ is concentrated on curves whose speed is constant and non-zero tells us that $$\textsf {Norm}(\textsf {RightDer}(\gamma ,t))>0$$ for $${\hat{\pi }}$$-a.e. $$(\gamma ,t)$$ and hence that $$\Phi >0$$ $$\nu$$-a.e.. Notice that (6.10) and the arbitrariness of g yield $$|b|\mu =\Phi \nu$$, so that the positivity of $$\Phi$$ implies $$\nu \ll \mu$$. Hence it holds that $$|b|\mu =\Phi \frac{\mathrm{d}\nu }{\mathrm{d}\mu }\mu$$, i.e.

\begin{aligned} |b|(x)=\frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x)\int |v|_x\,\mathrm{d}{{\mathfrak {n}}}_x(v)\qquad \mu \text {-a.e. }x. \end{aligned}
(6.11)
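For clarity, the absolute continuity $$\nu \ll \mu$$ used above can be checked directly: if $$E\subset \mathrm{Y}$$ is Borel with $$\mu (E)=0$$, then the identity $$|b|\mu =\Phi \nu$$ gives

\begin{aligned} \int _E\Phi \,\mathrm{d}\nu =\int _E|b|\,\mathrm{d}\mu =0, \end{aligned}

and since $$\Phi >0$$ holds $$\nu$$-a.e. this forces $$\nu (E)=0$$.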

Let $${\bar{x}}\in \mathrm{Y}$$ and $${\bar{y}}\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ and denote $$f:=\mathrm {dist}_{{\bar{y}}}$$. Then f is Lipschitz and semiconvex on $$B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$. For $$g\in \mathrm {LIP}_b(\mathrm{Y})$$ with support in $$B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ we then have, arguing as before to justify the computations (and writing $$\mathrm{d}f(x,v)$$ for $$\mathrm{d}_xf(v)$$):

\begin{aligned} \begin{aligned} \int gb(f)\,\mathrm{d}\mu&=\iint _0^1g( \mathrm{e}(\gamma ,t))\,\mathrm{d}f(\textsf {RightDer}(\gamma ,t))\,\mathrm{d}{\hat{\pi }}(\gamma ,t) \\ \text {(by}\, (6.9))\qquad&=\int g(x)\int \mathrm{d}f(\textsf {RightDer}(\gamma ,t))\,\mathrm{d}{\hat{\pi }}_x(\gamma ,t)\,\mathrm{d}\nu (x)\\ \text {(by definition of }{{\mathfrak {n}}}_x)\qquad&=\int g(x)\int \mathrm{d}f(y,v)\,\mathrm{d}{{\mathfrak {n}}}_x(y,v)\,\mathrm{d}\nu (x)\\&=\int g(x)\int \mathrm{d}_xf(v)\,\mathrm{d}{{\mathfrak {n}}}_x(v)\,\mathrm{d}\nu (x). \end{aligned} \end{aligned}

By the arbitrariness of g, it follows that

\begin{aligned} b(\mathrm {dist}_{{\bar{y}}})(x)=\frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x)\int \mathrm{d}_x\mathrm {dist}_{{\bar{y}}}(v)\,\mathrm{d}{{\mathfrak {n}}}_x(v)\qquad \mu \text {-a.e. }x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}}). \end{aligned}
(6.12)

By the Jensen inequality recalled in Sect. 2.5 and the convexity and continuity of $$\mathrm{d}_xf$$ (Proposition 2.16) this gives

\begin{aligned} b(\mathrm {dist}_{{\bar{y}}})(x)\ge \frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x)\,\mathrm{d}_x\mathrm {dist}_{{\bar{y}}}(\textsf {B}(x))\qquad \mu \text {-a.e. }x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}}). \end{aligned}
(6.13)

Now we let $${\bar{x}}$$ vary in a countable set so that the balls $$B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$ cover the whole $$\mathrm{Y}$$ (such a set exists by the Lindelöf property of $$\mathrm{Y}$$) and for each such $${\bar{x}}$$ we let $${\bar{y}}$$ vary in a countable dense subset of $$B_{\textsf {r} _{{\bar{x}}}}({\bar{x}})$$: taking the infimum in (6.13) among these $${\bar{x}},{\bar{y}}$$ and recalling Proposition 2.17(ii) and Proposition 5.5, we deduce

\begin{aligned} -|b|(x)\ge - \frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x)|\textsf {B}(x)|_x\qquad \mu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}

Hence taking into account (6.11) we obtain

\begin{aligned} |\textsf {B}(x)|_x\ge \int |v|_x\,\mathrm{d}{{\mathfrak {n}}}_x(v)\qquad \nu \text {-a.e. }x\in \mathrm{Y}. \end{aligned}

Since $$|v|_x=\textsf {d} _x(v,0)$$, by the rigidity statement in Proposition 2.27 we deduce that $${{\mathfrak {n}}}_x$$ is concentrated on a half-line starting from $$0\in \mathrm{T}_x\mathrm{Y}$$ for $$\nu$$-a.e. x. For any x for which this is true, it is easy to check (see also [41, Example 5.2]) that any positively 1-homogeneous function $$h:\mathrm{T}_x\mathrm{Y}\rightarrow {\mathbb {R}}$$ satisfies

\begin{aligned} \int h(v)\,\mathrm{d}{{\mathfrak {n}}}_x(v)=h(\textsf {B}(x)). \end{aligned}
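A sketch of the verification, using that on a half-line (isometric to $$[0,\infty )\subset {\mathbb {R}}$$, closed and convex in $$\mathrm{T}_x\mathrm{Y}$$) the barycenter reduces to the usual mean: write $${{\mathfrak {n}}}_x$$ as the image under $$t\mapsto tw$$, for some $$w\in \mathrm{T}_x\mathrm{Y}$$ with $$|w|_x=1$$, of a measure $$\tilde{{\mathfrak {n}}}_x$$ on $$[0,\infty )$$ (the notation $$\tilde{{\mathfrak {n}}}_x$$ is introduced only for this computation). Then $$\textsf {B} (x)={\bar{t}}w$$ with $${\bar{t}}:=\int _0^\infty t\,\mathrm{d}\tilde{{\mathfrak {n}}}_x(t)$$, and the positive 1-homogeneity of h yields

\begin{aligned} \int h(v)\,\mathrm{d}{{\mathfrak {n}}}_x(v)=\int _0^\infty h(tw)\,\mathrm{d}\tilde{{\mathfrak {n}}}_x(t)=h(w)\int _0^\infty t\,\mathrm{d}\tilde{{\mathfrak {n}}}_x(t)=h({\bar{t}}w)=h(\textsf {B} (x)). \end{aligned}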

Applying this identity to $$h:=\mathrm{d}_x\mathrm {dist}_{{\bar{y}}}$$, from (6.12) we get

\begin{aligned} b(\mathrm {dist}_{{\bar{y}}})(x)=\frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x) \, \mathrm{d}_x\mathrm {dist}_{{\bar{y}}}(\textsf {B}(x))=\mathrm{d}_x\mathrm {dist}_{{\bar{y}}}\left( \frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x) \textsf {B}(x)\right) \quad \mu \text {-a.e. }x\in B_{\textsf {r} _{{\bar{x}}}}({\bar{x}}), \end{aligned}

which by the arbitrariness of $${\bar{y}}$$ means that $$v(x):=\frac{\mathrm{d}\nu }{\mathrm{d}\mu }(x) \textsf {B}(x)$$ satisfies (6.5), and thus concludes the proof for $$b\in \mathrm{Der}^{1,1}(\mathrm{Y};\mu )$$.

For the case $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ we argue as follows. Fix $${\bar{x}}\in \mathrm{Y}$$ and let $$(\eta _n)$$ be a sequence of Lipschitz functions with bounded support such that $$\eta _n\equiv 1$$ on $$B_n({\bar{x}})$$. Then, by the Leibniz rule for the divergence (cf. [14, Lemma 7.1.2]), we see that $$b_n:=\eta _nb\in \mathrm{Der}^{1,1}(\mathrm{Y};\mu )$$. Thus there exists $$v_n\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ satisfying (6.5) for $$b_n$$. In particular, by (6.6), we have that

\begin{aligned} |v_n(x)|_x=|b_n|(x)=|b|(x)\qquad \mu \text {-a.e. }x\in B_n({\bar{x}}). \end{aligned}
(6.14)

From the weak locality of derivations it follows that $$v_n=v_m$$ on $$B_n({\bar{x}})$$ for every $$m\ge n$$, hence the Borel section v of $$\mathrm{T}_G\mathrm{Y}$$ given by

\begin{aligned} v(x):=v_n(x)\qquad \text { for }\mu \text {-a.e. }x\in B_n({\bar{x}}),\ \forall n\in {\mathbb {N}}\end{aligned}

is well-defined and, by (6.14) and the assumption $$|b|\in L^2(\mu )$$, belongs to $$L^2(\mathrm{T}_G\mathrm{Y};\mu )$$. Then again the weak locality of derivations ensures that v satisfies (6.5), thus concluding the proof. $$\square$$

### Definition 6.3

For $$b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$ we shall denote by $${\mathscr {F}}(b)$$ the section $$v\in L^2(\mathrm{T}_G\mathrm{Y};\mu )$$ given by Theorem 6.2. Thus we have a map

\begin{aligned} {\mathscr {F}}:\,\mathrm{Der}^{2,2}(\mathrm{Y};\mu )\rightarrow L^2(\mathrm{T}_G\mathrm{Y};\mu ),\quad b\mapsto {\mathscr {F}}(b). \end{aligned}

Then we have:

### Corollary 6.4

(‘Linearity’ of $${\mathscr {F}}$$) Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a complete and separable local $$\mathrm{CAT}(\kappa )$$ space equipped with a Borel measure $$\mu$$ finite on bounded sets, and let $$b_1,b_2\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu )$$. Then $$\mu$$-a.e. we have

\begin{aligned} \begin{aligned} {\mathscr {F}}(b_1+b_2)&={\mathscr {F}}(b_1)\oplus {\mathscr {F}}(b_2),\\ \textsf {d} _\cdot ({\mathscr {F}}(b_1),{\mathscr {F}}(b_2))&=|b_1-b_2|,\\ |{\mathscr {F}}(b_1+b_2)|_\cdot ^2+|{\mathscr {F}}(b_1-b_2)|_\cdot ^2&=2\big (|{\mathscr {F}}(b_1)|_\cdot ^2+|{\mathscr {F}}(b_2)|_\cdot ^2\big ). \end{aligned} \end{aligned}
(6.15)

### Proof

The statement is local in nature, thus up to using a countable cover of $$\mathrm{Y}$$ with balls of the form $$B_{\textsf {r} _x}(x)$$, we can assume that $$\mathrm{Y}$$ is a separable $$\mathrm{CAT}(\kappa )$$ space with diameter $$<D_\kappa$$.

Now let $$(y_n)\subset \mathrm{Y}$$ be countable and dense and put for brevity $$f_n:=\mathrm {dist}_{y_n}$$. For every $$n\in {\mathbb {N}}$$ we have

\begin{aligned} \begin{aligned} |(b_1-b_2)(f_n)|&=|b_1(f_n)-b_2(f_n)|{\mathop {=}\limits ^{(6.5)}}|\mathrm{d}_\cdot f_n({\mathscr {F}}(b_1))-\mathrm{d}_\cdot f_n({\mathscr {F}}(b_2))|\\&\le \textsf {d} _\cdot \big ({\mathscr {F}}(b_1),{\mathscr {F}}(b_2)\big ) \end{aligned} \end{aligned}

$$\mu$$-a.e., having used the fact that $$\mathrm{d}_xf_n$$ is 1-Lipschitz in the last step (Proposition 2.16). Passing to the supremum in n we obtain

\begin{aligned} |b_1-b_2|\le \textsf {d} _\cdot \big ({\mathscr {F}}(b_1),{\mathscr {F}}(b_2)\big )\qquad \mu \text {-a.e..} \end{aligned}
(6.16)

On the other hand, using the convexity and positive 1-homogeneity of $$\mathrm{d}_xf_n$$ (Proposition 2.16) we have

\begin{aligned} \begin{aligned} \mathrm{d}_x f_n\big ({\mathscr {F}}(b_1)(x)\oplus {\mathscr {F}}(b_2)(x)\big )&\le \mathrm{d}_x f_n\big ({\mathscr {F}}(b_1)(x)\big )+\mathrm{d}_x f_n\big ({\mathscr {F}}(b_2)(x)\big ) \\&=b_1(f_n)(x)+b_2(f_n)(x)\\&=(b_1+b_2)(f_n)(x)\\&=\mathrm{d}_x f_n\big ({\mathscr {F}}(b_1+b_2)(x)\big ) \end{aligned} \end{aligned}
(6.17)

for $$\mu$$-a.e. x. By Proposition 2.17(iii) and the arbitrariness of n this implies

\begin{aligned} \big |{\mathscr {F}}(b_1+b_2)\big |_\cdot \le \big |{\mathscr {F}}(b_1)\oplus {\mathscr {F}}(b_2)\big |_\cdot \qquad \mu \text {-a.e..} \end{aligned}
(6.18)

Therefore, $$\mu$$-a.e. we have

\begin{aligned} |b_1-b_2|^2+|b_1+b_2|^2&\le \textsf {d} ^2_\cdot \big ({\mathscr {F}}(b_1),{\mathscr {F}}(b_2)\big ) +\big |{\mathscr {F}}(b_1+b_2)\big |_\cdot ^2&\text {by } (6.16),(6.6)\\&\le \textsf {d} ^2_\cdot \big ({\mathscr {F}}(b_1),{\mathscr {F}}(b_2)\big )+ \big |{\mathscr {F}}(b_1)\oplus {\mathscr {F}}(b_2)\big |_\cdot ^2&\text {by } (6.18)\\&\le 2\,\big |{\mathscr {F}}(b_1)\big |_\cdot ^2+2\,\big |{\mathscr {F}}(b_2) \big |_\cdot ^2&\text {by (2.12g)}\\&=2\,|b_1|^2+2\,|b_2|^2&\text {by }(6.6). \end{aligned}

Writing this for $$b_1+b_2,b_1-b_2$$ in place of $$b_1,b_2$$ we see that all the inequalities that we used are in fact equalities.

In particular the last inequality is an equality, thus proving the last identity in (6.15). The equality in (6.16) is the second in (6.15). Finally, the equality in (6.18) and Proposition 2.17(iii) imply the first identity in (6.15). This completes the proof. $$\square$$

We can now easily prove our main result. We restrict ourselves to the separable setting for the moment, postponing to the next section the technical points needed to deal with non-separable spaces.

### Proof of Theorem 1.1 for separable spaces

By Proposition 5.10 we have

\begin{aligned} \big \Vert |Df|\big \Vert _{L^2(\mu )}=\Vert {\mathscr {L}}_f\Vert _{{\mathbb {B}}}=\sup \left\{ \int L_f(b)\,\mathrm{d}\mu \;:\;b\in \mathrm{Der}^{2,2}(\mathrm{Y};\mu ),\,\Vert b\Vert _2\le 1 \right\} . \end{aligned}

Since the space

\begin{aligned} {\mathbb {D}}:=\big (\mathrm{Der}^{2,2}(\mathrm{Y};\mu ),\Vert \cdot \Vert _2\big ) \end{aligned}
(6.19)

is (pre)Hilbert by Theorem 6.2 and Corollary 6.4 (in particular by the third identity in (6.15)), it follows that its dual is a Hilbert space (note that $${\mathscr {L}}_f\in {\mathbb {D}}^*={\mathbb {B}}$$ in the notation of Proposition 5.10). Thus

\begin{aligned} \begin{aligned} {\big \Vert |D(f+g)|\big \Vert }^2_{L^2(\mu )}+{\big \Vert |D(f-g)|\big \Vert }^2_{L^2(\mu )}&={\Vert {\mathscr {L}}_{f+g}\Vert }^2_{{\mathbb {B}}} +{\Vert {\mathscr {L}}_{f-g}\Vert }^2_{{\mathbb {B}}}\\&={\Vert {\mathscr {L}}_f+{\mathscr {L}}_g\Vert }^2_{{\mathbb {B}}} +{\Vert {\mathscr {L}}_f-{\mathscr {L}}_g\Vert }^2_{{\mathbb {B}}}\\&=2\,{\Vert {\mathscr {L}}_f\Vert }^2_{{\mathbb {B}}} +2\,{\Vert {\mathscr {L}}_g\Vert }^2_{{\mathbb {B}}}\\&=2\,{\big \Vert |Df|\big \Vert }^2_{L^2(\mu )} +2\,{\big \Vert |Dg|\big \Vert }^2_{L^2(\mu )}. \end{aligned} \end{aligned}

This completes the proof. $$\square$$
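Concerning the step asserting that the dual of the pre-Hilbert space $${\mathbb {D}}$$ is a Hilbert space, the standard argument runs as follows: $${\mathbb {D}}^*$$ is isometric to the dual of the Hilbert completion $${\overline{{\mathbb {D}}}}$$, so by the Riesz representation theorem any $$L_1,L_2\in {\mathbb {D}}^*$$ admit representatives $$z_1,z_2\in {\overline{{\mathbb {D}}}}$$ with $$\Vert L_i\Vert =\Vert z_i\Vert$$, and the parallelogram identity passes to the dual norm:

\begin{aligned} \Vert L_1+L_2\Vert ^2+\Vert L_1-L_2\Vert ^2=\Vert z_1+z_2\Vert ^2+\Vert z_1-z_2\Vert ^2=2\big (\Vert z_1\Vert ^2+\Vert z_2\Vert ^2\big )=2\big (\Vert L_1\Vert ^2+\Vert L_2\Vert ^2\big ). \end{aligned}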

In fact, as we shall see shortly, the completion of the space $${\mathbb {D}}$$ defined in (6.19) is isomorphic to the $$L^2$$-tangent module. This is the content of Proposition 6.5 below. We briefly introduce some additional machinery before stating the proposition.

Recall the space of $$L^2$$-derivations

\begin{aligned} \mathrm{Der}^2(\mathrm{Y};\mu )=\big \{ b\in \mathrm{Der}(\mathrm{Y};\mu )\,:\,|b|\in L^2(\mu )\big \} \end{aligned}

which, by [14, Sect. 7.1.1], is complete when equipped with the norm $$\Vert \cdot \Vert _2$$. Since $${\mathbb {D}}\subset \mathrm{Der}^2(\mathrm{Y};\mu )$$, the completion $${\overline{{\mathbb {D}}}}$$ of $${\mathbb {D}}$$ under $$\Vert \cdot \Vert _2$$ is a Banach space and satisfies $${\overline{{\mathbb {D}}}}\subset \mathrm{Der}^2(\mathrm{Y};\mu )$$. In particular, there is a pointwise norm $$|\cdot |:\,{\overline{{\mathbb {D}}}}\rightarrow L^2(\mu )$$ given by the norm of a derivation (see Lemma 5.3). Using the fact that $${\mathbb {D}}$$ is a $$\mathrm {LIP}_{bs}(\mathrm{Y})$$-module (cf. [14, Lemma 7.1.2]), Lemma 5.11 and the dominated convergence theorem, we see that $${\overline{{\mathbb {D}}}}$$ is an $$L^\infty (\mu )$$-module. Thus $$\big ({\overline{{\mathbb {D}}}},\Vert \cdot \Vert _2,|\cdot |\big )$$ is an $$L^2(\mu )$$-normed $$L^\infty (\mu )$$-module. We refer to  for the theory of normed $$L^\infty (\mu )$$-modules.

The estimate (5.4) implies that, given $$f\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$, the module-homomorphism $$L_f:\mathrm{Der}^{2,2}(\mathrm{Y};\mu )\rightarrow L^1(\mu )$$ extends to an $$L^\infty (\mu )$$-linear bounded map $$L_f:{\overline{{\mathbb {D}}}}\rightarrow L^1(\mu )$$ satisfying the bound

\begin{aligned} \big |L_f({\bar{b}})\big |\le |Df||{\bar{b}}|,\quad {\bar{b}}\in {\overline{{\mathbb {D}}}}. \end{aligned}
(6.20)

We briefly recall that the cotangent module $$L^2(\mathrm{T}^*\mathrm{Y};\mu )$$ (see ) is an $$L^2(\mu )$$-normed $$L^\infty (\mu )$$-module, equipped with an exterior derivative

\begin{aligned} \mathrm{d}:\,W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )\rightarrow L^2(\mathrm{T}^*\mathrm{Y};\mu ) \end{aligned}

whose image generates $$L^2(\mathrm{T}^*\mathrm{Y};\mu )$$ as a module. The tangent module $$L^2(\mathrm{T}\mathrm{Y};\mu )$$ is defined to be the module dual of $$L^2(\mathrm{T}^*\mathrm{Y};\mu )$$. A vector field $$X\in L^2(\mathrm{T}\mathrm{Y};\mu )$$ is said to have Sobolev divergence if there exists a function $$g\in L^2(\mu )$$ such that

\begin{aligned} \int fg\,\mathrm{d}\mu =-\int \mathrm{d}f(X)\,\mathrm{d}\mu ,\quad f\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu ). \end{aligned}

The function g, if it exists, is unique, and denoted by $$\mathrm {div}_SX$$. We denote by $$D(\mathrm {div}_S)$$ the vector space of elements of $$L^2(\mathrm{T}\mathrm{Y};\mu )$$ that have Sobolev divergence. See  for the details.

### Proposition 6.5

Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be an infinitesimally Hilbertian metric measure space. Then the map $$A:\,L^2(\mathrm{T}\mathrm{Y};\mu )\rightarrow \mathrm{Der}^2(\mathrm{Y};\mu )$$, given by $$A(X)(f):=X(\mathrm{d}f)$$, takes values in $${\overline{{\mathbb {D}}}}$$ and provides an isomorphism of modules between $$L^2(\mathrm{T}\mathrm{Y};\mu )$$ and $${\overline{{\mathbb {D}}}}$$.

### Proof

It is easy to see that, if $$X\in L^2(\mathrm{T}\mathrm{Y};\mu )$$ has divergence $$\mathrm {div}_SX\in L^2(\mu )$$, then A(X) has divergence in the sense of (5.1), and

\begin{aligned} \mathrm {div}_SX=\mathrm {div}A(X) \end{aligned}

$$\mu$$-almost everywhere. Since $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ is a Hilbert space, [19, Proposition 2.3.17] implies that $$L^2(\mathrm{T}\mathrm{Y};\mu )$$ is a Hilbert module. As a simple consequence of [19, Proposition 2.3.14 and (2.3.13)], the space $$D(\mathrm {div}_S)$$ is dense in $$L^2(\mathrm{T}\mathrm{Y};\mu )$$. Thus since we already noticed that $$A(D(\mathrm {div}_S))\subset {\mathbb {D}}$$, we also get that $$A(L^2(\mathrm{T}\mathrm{Y};\mu ))\subset {\overline{{\mathbb {D}}}}$$. We will prove that A is a module isomorphism $$L^2(\mathrm{T}\mathrm{Y};\mu )\rightarrow {\overline{{\mathbb {D}}}}$$.

For each $$f\in \mathrm {LIP}_b(\mathrm{Y})$$, $$g,h\in L^\infty (\mu )$$ and $$V,W\in L^2(\mathrm{T}\mathrm{Y};\mu )$$, we have

\begin{aligned} A(gV+hW)(f)&=(gV)(\mathrm{d}f)+(hW)(\mathrm{d}f)=gV(\mathrm{d}f)+hW(\mathrm{d}f)\\&=\big (gA(V)+hA(W)\big )(f), \end{aligned}

establishing that A is an $$L^\infty (\mu )$$-linear module homomorphism $$L^2(\mathrm{T}\mathrm{Y};\mu )\rightarrow {\overline{{\mathbb {D}}}}$$. Note that

\begin{aligned} \big |A(V)\big |=\mathrm{ess\,sup}\big \{V(\mathrm{d}f):\ f\in \mathrm {LIP}_b(\mathrm{Y}),\ \mathrm {Lip}(f)\le 1\big \}\le |V|_*, \end{aligned}

so that A is bounded. By definition, we have that

\begin{aligned} |V|_*=\mathrm{ess\,sup}\left\{ \sum _{i=1}^m{\chi }_{E_i}\big |V(\mathrm{d}f_i)\big |\,:\,\sum _{i=1}^m{\chi }_{E_i}|\mathrm{d}f_i|\le 1,\ f_1,\ldots ,f_m\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu ) \right\} . \end{aligned}

Since $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ is a Hilbert space, using Theorem 5.9 and Mazur’s lemma, it is easy to see that $$\mathrm {LIP}_{bs}(\mathrm{Y})$$ is dense in $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ (see also [21, Corollary 2.9]). From this and Theorem 5.9, it follows that

\begin{aligned} |V|_*=\mathrm{ess\,sup}\left\{ \sum _{i=1}^m{\chi }_{E_i}\big |V(\mathrm{d}f_i)\big |\,:\,\sum _{i=1}^m{\chi }_{E_i}\mathrm {lip}_af_i\le 1,\ f_1,\ldots ,f_m\in \mathrm {LIP}_{bs}(\mathrm{Y}) \right\} . \end{aligned}

Thus we have

\begin{aligned} |V|_*&=\mathrm{ess\,sup}\left\{ \sum _{i=1}^m{\chi }_{E_i}\big |V(\mathrm{d}f_i) \big |\,:\,\sum _{i=1}^m{\chi }_{E_i}\mathrm {lip}_af_i\le 1 \right\} \\&=\mathrm{ess\,sup}\left\{ \sum _{i=1}^m{\chi }_{E_i}\big |A(V)(f_i) \big |\,:\,\sum _{i=1}^m{\chi }_{E_i}\mathrm {lip}_af_i\le 1 \right\} \\&\le \mathrm{ess\,sup}\left\{ \sum _{i=1}^m{\chi }_{E_i}\big |A(V)\big |\mathrm {lip}_a(f_i) \,:\,\sum _{i=1}^m{\chi }_{E_i}\mathrm {lip}_af_i\le 1\right\} =\big |A(V)\big |. \end{aligned}

We have established that $$A:\,L^2(\mathrm{T}\mathrm{Y};\mu )\rightarrow {\overline{{\mathbb {D}}}}$$ is an $$L^\infty (\mu )$$-module homomorphism satisfying

\begin{aligned} \big |A(V)\big |=|V|_*\end{aligned}

pointwise $$\mu$$-almost everywhere, for every $$V\in L^2(\mathrm{T}\mathrm{Y};\mu )$$. To show it is an isometric module isomorphism, it suffices to prove that it is onto.

Let $${\bar{b}}\in {\overline{{\mathbb {D}}}}$$. Define the linear map

\begin{aligned} L:W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )\rightarrow L^1(\mu ),\quad f\mapsto L_f({\bar{b}}). \end{aligned}

By (6.20) and [19, Proposition 1.4.8], L extends to a vector field $$X\in L^2(\mathrm{T}\mathrm{Y};\mu )$$ satisfying $$X(\mathrm{d}f)=L(f)$$ for every $$f\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$. In particular, for $$f\in \mathrm {LIP}_{bs}(\mathrm{Y})$$, we have

\begin{aligned} A(X)(f)=X(\mathrm{d}f)=L(f)=L_f({\bar{b}})={\bar{b}}(f). \end{aligned}

This implies the surjectivity of A, and concludes the proof. $$\square$$

See  for more on preduals of the Sobolev spaces.

### Proof of Theorem 1.2

Let $$(\mathrm{Y},\textsf {d} )$$ be a complete and separable $$\mathrm{CAT}(\kappa )$$-space, and $$\mu$$ a Borel measure on $$\mathrm{Y}$$, which is finite on bounded sets. By the proof above of Theorem 1.1 in the separable case, we have that $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ is a Hilbert space. From Theorem 6.2 and Corollary 6.4 it follows that the space $${\overline{{\mathbb {D}}}}$$ admits an isometric embedding

\begin{aligned} {\mathscr {F}}':\,{\overline{{\mathbb {D}}}}\rightarrow L^2(\mathrm{T}_G\mathrm{Y};\mu ) \end{aligned}

satisfying (6.15). Thus the claim follows directly from Proposition 6.5 by precomposing $${\mathscr {F}}'$$ with the isometric module isomorphism $$A:L^2(\mathrm{T}\mathrm{Y};\mu )\rightarrow {\overline{{\mathbb {D}}}}$$. $$\square$$

## The Non-separable Case

In defining derivations and Sobolev functions we assumed, following , that the underlying metric space is separable. Yet, as noted in the introduction, from a purely geometric perspective it is quite unnatural to impose a separability condition when dealing with $$\mathrm{CAT}(\kappa )$$ spaces. In this section we discuss how to remove the condition of separability, the relevant result being Theorem 7.1. Let us remark that we shall continue to assume that the measure $$\mu$$ on $$\mathrm{Y}$$ has separable support, or equivalently that it is tight: the discussion here concerns the definition of Sobolev functions itself, in this setting.

One of the reasons for the success of the theory of Sobolev calculus on metric measure spaces is that there are many different definitions of Sobolev spaces in this setting which turn out to be equivalent. In trying to extend such an equivalence result to the non-separable setting one could either re-run all the arguments and check that they work in the more general framework (this is possible, and works, but is quite tedious) or argue as below.

Out of the several definitions of Sobolev functions, there are two ‘extremal’ ones introduced in : the one obtained by relaxation of the asymptotic Lipschitz constant (we shall denote the corresponding space and notion of minimal relaxed upper gradient by $$W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )$$ and $$|\mathrm{d}f|_\mathrm{rel}$$) and the one obtained by duality with test plans (we shall denote the corresponding space and notion of minimal weak upper gradient by $$W^{1,2}_\mathrm{tp}(\mathrm{Y},\textsf {d} ,\mu )$$ and $$|\mathrm{d}f|_\mathrm{tp}$$). These produce in some sense the ‘biggest’ and ‘smallest’ weak notion of upper gradient and it is easy to check from the definitions that

\begin{aligned}&W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )\subset W^{1,2}_\mathrm{tp}(\mathrm{Y},\textsf {d} ,\mu )\quad \text {with}\quad |\mathrm{d}f|_\mathrm{tp}\le |\mathrm{d}f|_\mathrm{rel}\nonumber \\&\quad \mu \text {-a.e.}\quad \forall f\in W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu ). \end{aligned}
(7.1)

One of the main results in  is the proof that the two spaces and the two notions of upper gradients coincide. This fact is used by the first author in  to prove that the notion of Sobolev space obtained by duality with derivations coincides with $$W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )= W^{1,2}_\mathrm{tp}(\mathrm{Y},\textsf {d} ,\mu )$$ and induces the same upper gradient.

We add the following ingredient to the discussion above:

### Theorem 7.1

Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a complete and separable metric space equipped with a positive Radon measure which is finite on bounded sets. Let $$\mathrm{Y}_1,\mathrm{Y}_2\subset \mathrm{Y}$$ be closed sets on which $$\mu$$ is concentrated. Set $$\textsf {d} _i:=\textsf {d} |_{\mathrm{Y}_i\times \mathrm{Y}_i}$$ and $$\mu _i:=\mu |_{\mathrm{Y}_i}$$, $$i=1,2$$, and notice that the identity on the support of $$\mu$$ induces an isomorphism $$\iota :L^2(\mathrm{Y}_1,\mu _1)\rightarrow L^2(\mathrm{Y}_2,\mu _2)$$. Then:

1. (i)

$$\iota$$ induces an isomorphism from $$W^{1,2}_\mathrm{rel}(\mathrm{Y}_1,\textsf {d} _1,\mu _1)$$ to $$W^{1,2}_\mathrm{rel}(\mathrm{Y}_2,\textsf {d} _2,\mu _2)$$ which respects $$|\mathrm{d}\cdot |_\mathrm{rel}$$,

2. (ii)

$$\iota$$ induces an isomorphism from $$W^{1,2}_\mathrm{tp}(\mathrm{Y}_1,\textsf {d} _1,\mu _1)$$ to $$W^{1,2}_\mathrm{tp}(\mathrm{Y}_2,\textsf {d} _2,\mu _2)$$ which respects $$|\mathrm{d}\cdot |_\mathrm{tp}$$.

### Proof

We can assume $$\mathrm{Y}_2=\mathrm{Y}$$.

(i) Given that for any $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$ we have $$\mathrm {lip}_a(f|_{\mathrm{Y}_1})\le \mathrm {lip}_af$$ on $$\mathrm{Y}_1$$, we see that $$W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )\subset W^{1,2}_\mathrm{rel}(\mathrm{Y}_1,\textsf {d} _1,\mu _1)$$ with $$|\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}_1}\le |\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}}$$ for any $$f\in W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )$$. To prove the other inclusion and inequality, by the definition of $$W^{1,2}_\mathrm{rel}(\mathrm{Y},\textsf {d} ,\mu )$$ it is sufficient to prove that for any Lipschitz function $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$ we have $$|\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}}\le \mathrm {lip}_a(f|_{\mathrm{Y}_1})$$ $$\mu$$-a.e.. Fix a Lipschitz function $$f:\mathrm{Y}\rightarrow {\mathbb {R}}$$ and $$\varepsilon >0$$. For any $$x\in \mathrm{Y}_1$$, let $$r>0$$ be such that $$\mathrm {Lip}(f|_{\mathrm{Y}_1\cap B_r(x)})\le \mathrm {lip}_a(f|_{\mathrm{Y}_1})(x)+\varepsilon$$. By the McShane extension lemma there is a Lipschitz function $$g:\mathrm{Y}\rightarrow {\mathbb {R}}$$ coinciding with f on $$\mathrm{Y}_1\cap B_r(x)$$ such that $$\mathrm {Lip}(g)=\mathrm {Lip}(f|_{\mathrm{Y}_1\cap B_r(x)})$$. By the locality property of relaxed upper gradients we see that

\begin{aligned} |\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}}=|\mathrm{d}g|_{\mathrm{rel},\mathrm{Y}}\qquad \mu \text {-a.e. on } \{f=g\}\supset \mathrm{Y}_1\cap B_r(x). \end{aligned}

Keeping in mind that $$|\mathrm{d}g|_{\mathrm{rel},\mathrm{Y}}\le \mathrm {Lip}(g)$$ and the construction we deduce that

\begin{aligned} |\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}}\le \mathrm {lip}_a(f|_{\mathrm{Y}_1})(x)+\varepsilon \qquad \mu \text {-a.e. on }\mathrm{Y}_1\cap B_r(x). \end{aligned}
(7.2)

Repeat this argument for every $$x\in \mathrm{Y}_1$$ and then use the Lindelöf property of $$\mathrm{Y}_1$$ to deduce that, as x varies in a countable dense set, the balls $$B_r(x)$$ as above cover the whole $$\mathrm{Y}_1$$. Then (7.2) gives $$|\mathrm{d}f|_{\mathrm{rel},\mathrm{Y}}\le \mathrm {lip}_a(f|_{\mathrm{Y}_1})+\varepsilon$$ $$\mu$$-a.e. and the conclusion follows by letting $$\varepsilon \downarrow 0$$.

(ii) It is sufficient to check that a test plan on $$\mathrm{Y}$$ is also a test plan on $$\mathrm{Y}_1$$ and vice versa. The ‘vice versa’ is obvious by the inclusion $$C([0,1];\mathrm{Y}_1)\subset C([0,1];\mathrm{Y})$$. For the other implication it is sufficient to show that any test plan $$\pi$$ on $$\mathrm{Y}$$ is concentrated on $$C([0,1];\mathrm{Y}_1)$$. To see this, let $$\mathrm{e}_t:\,C([0,1];\mathrm{Y})\rightarrow \mathrm{Y}$$ be defined by $$\mathrm{e}_t(\gamma ):=\gamma _t$$ and notice that for any dense set $$(t_n)\subset [0,1]$$ the inclusion

\begin{aligned} C([0,1];\mathrm{Y})\setminus C([0,1];\mathrm{Y}_1)=\bigcup _n\mathrm{e}_{t_n}^{-1}(\mathrm{Y}\setminus \mathrm{Y}_1) \end{aligned}

holds. Since $$(\mathrm{e}_t)_*\pi \ll \mu$$ and $$\mu$$ is concentrated on $$\mathrm{Y}_1$$, we have that $$\pi \big (\mathrm{e}_{t_n}^{-1}(\mathrm{Y}\setminus \mathrm{Y}_1)\big )=0$$ for every n. The claim follows. $$\square$$
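For completeness, the displayed set identity can be justified as follows: since $$\mathrm{Y}_1$$ is closed and curves are continuous, a curve $$\gamma$$ leaves $$\mathrm{Y}_1$$ at some time if and only if it does so at some time of the dense set $$(t_n)$$, that is

\begin{aligned} \gamma \notin C([0,1];\mathrm{Y}_1)\ \Longleftrightarrow \ \exists t\in [0,1]:\ \gamma _t\in \mathrm{Y}\setminus \mathrm{Y}_1\ \Longleftrightarrow \ \exists n:\ \gamma _{t_n}\in \mathrm{Y}\setminus \mathrm{Y}_1, \end{aligned}

the last equivalence because $$\mathrm{Y}\setminus \mathrm{Y}_1$$ is open, so $$\{t:\gamma _t\in \mathrm{Y}\setminus \mathrm{Y}_1\}$$ is open in [0, 1] and, if non-empty, intersects $$(t_n)$$.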

Thanks to this result we can now give the following definition:

### Definition 7.2

(Sobolev spaces on non-separable metric spaces) Let $$(\mathrm{Y},\textsf {d} ,\mu )$$ be a complete, not necessarily separable, metric space equipped with a non-negative and non-zero Radon measure $$\mu$$ giving finite mass to bounded sets.

Then the Sobolev space $$W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ (and the corresponding notion of upper gradient $$|\mathrm{d}f|$$) is defined as $$W^{1,2}(\mathrm{Y}_1,\textsf {d} _1,\mu _1)$$, where $$\mathrm{Y}_1$$ is any closed and separable subspace of $$\mathrm{Y}$$ on which $$\mu$$ is concentrated, while $$\textsf {d} _1:=\textsf {d} |_{\mathrm{Y}_1\times \mathrm{Y}_1}$$ and $$\mu _1:=\mu |_{\mathrm{Y}_1}$$.

The role of Theorem 7.1 is to prove that this definition is consistent with the case of separable spaces. Since most of the notions of Sobolev spaces in mm-spaces (including those of Cheeger [13, 40] and the first author ) are naturally ‘chained’ between $$W^{1,2}_\mathrm{rel}$$ and $$W^{1,2}_\mathrm{tp}$$, and since these latter spaces coincide as already remarked, Theorem 7.1 implies that all these notions remain unchanged when passing from $$\mathrm{Y}_1$$ to $$\mathrm{Y}_2$$, as in Theorem 7.1. This is why we do not specify the definition of Sobolev space we are referring to in Definition 7.2: they all agree.

With this said, the proof of our main Theorem 1.1 in the general case is a trivial consequence of the result established in the separable setting:

### Proof of Theorem 1.1 in the general non-separable setting

We need to prove that for any $$f,g\in W^{1,2}(\mathrm{Y},\textsf {d} ,\mu )$$ it holds that

\begin{aligned} |\mathrm{d}(f+g)|^2+|\mathrm{d}(f-g)|^2=2\left( |\mathrm{d}f|^2+|\mathrm{d}g|^2\right) \quad \mu \text {-a.e..} \end{aligned}
(7.3)

Notice that the measure $$\mu$$ is by assumption finite on bounded sets and Radon. Hence it is concentrated on a countable union Z of compact sets, which is separable. Fix $$x\in Z$$. We claim that there exists $$\Omega \subset \mathrm{Y}$$ with the following properties:

\begin{aligned}&\mu (\Omega )>0, \end{aligned}
(7.4)
\begin{aligned}&{\bar{\Omega }}\text { is a separable }\mathrm{CAT}(\kappa )\text { space,}\end{aligned}
(7.5)
\begin{aligned}&\Omega \text { contains a neighbourhood of }x\text { in }{\bar{Z}},\end{aligned}
(7.6)
\begin{aligned}&\Omega \text { is open in the space }{\bar{\Omega }}\cup {\bar{Z}}\text { and in such space has }\mu \text {-negligible boundary.} \end{aligned}
(7.7)

To construct such a set $$\Omega$$ we start by noticing that the map $$r\mapsto \mu (B_r(x))$$ is non-decreasing, hence continuous except at countably many points. Fix a continuity point $$r<\textsf {r} _x$$ for which $$\mu (B_r(x))>0$$. Since r is a continuity point, we have $$\mu (\partial B_r(x))=0$$. Let C be the closed convex hull of $$B_r(x)\cap {\bar{Z}}$$ and define $$\Omega$$ as the interior of C in $$C\cup {\bar{Z}}$$. (Notice that $$\Omega \subset {\bar{\Omega }}\subset C$$ and that, by convexity of the ball $$B_r(x)$$, $$C\cap {\bar{Z}}=B_r(x)\cap {\bar{Z}}$$.)
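For the reader's convenience we spell out the (standard) reason why continuity of $$r\mapsto \mu (B_r(x))$$ at r yields $$\mu (\partial B_r(x))=0$$: since $$\partial B_r(x)\subset {\bar{B}}_r(x){\setminus } B_r(x)$$ and $${\bar{B}}_r(x)\subset B_s(x)$$ for every $$s>r$$, we have

\begin{aligned} \mu (\partial B_r(x))\le \mu ({\bar{B}}_r(x))-\mu (B_r(x))\le \inf _{s>r}\mu (B_s(x))-\mu (B_r(x))=0. \end{aligned}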

Since $$\Omega$$ is the interior of a convex set, it follows that $$\Omega$$, and thus its closure $${\bar{\Omega }}$$, is a $$\mathrm{CAT}(\kappa )$$ space. The set $${\bar{\Omega }}$$ is separable by construction, being contained in the closed convex hull of the separable set $$B_r(x)\cap {\bar{Z}}$$. This establishes (7.5).

Note that $$B_r(x)\cap {\bar{Z}}$$ is open in $${\bar{Z}}$$. Moreover, $$B_r(x)\cap {\bar{Z}}\subset \Omega$$. To see this, let $$y\in B_r(x)\cap {\bar{Z}}$$ and let $$\varepsilon >0$$ be a radius for which $$B_\varepsilon (y)\subset B_r(x)$$. Then

\begin{aligned} B_\varepsilon (y)\cap (C\cup {\bar{Z}})=(B_\varepsilon (y)\cap C)\cup (B_\varepsilon (y)\cap {\bar{Z}})=B_\varepsilon (y)\cap C\subset C \end{aligned}

is a neighbourhood of y in $$C\cup {\bar{Z}}$$. Thus y is an interior point of C in $$C\cup {\bar{Z}}$$, i.e. $$y\in \Omega$$. This proves (7.6); moreover, since $$\mu$$ is concentrated on Z and $$\mu (B_r(x))>0$$, also (7.4) follows.

To show (7.7), note that since $$\Omega$$ is open in $$C\cup {\bar{Z}}$$, it is open in $${\bar{\Omega }}\cup {\bar{Z}}$$. It suffices to show that $$\mu (\partial _{C\cup {\bar{Z}}}\Omega )=0$$. This follows from the estimate

\begin{aligned} \mu (\partial _{C\cup {\bar{Z}}}\Omega )=&\mu (\partial _{C\cup {\bar{Z}}} \Omega \cap {\bar{Z}})=\mu (\partial _{{\bar{Z}}}\Omega )\le \mu (\partial _{{\bar{Z}}}C)\\ =&\mu (\partial _{{\bar{Z}}}(C\cap {\bar{Z}}))=\mu (\partial _{{\bar{Z}}}(B_r(x)\cap {\bar{Z}}))\le \mu (\partial B_r(x))=0. \end{aligned}

Thus we have constructed a set $$\Omega$$ with the desired properties.

By [6, Theorem 4.19(i)] applied with $$\mathrm{X}:={\bar{\Omega }}\cup {\bar{Z}}$$ we see that $$f|_\Omega \in W^{1,2}({\bar{\Omega }},\textsf {d} ,\mu |_\Omega )$$ with

\begin{aligned} |\mathrm{d}(f|_\Omega )|=|\mathrm{d}f| \quad \mu \text {-a.e. on }\Omega \end{aligned}
(7.8)

and the same holds for g. Since we know that Theorem 1.1 holds on separable $$\mathrm{CAT}(\kappa )$$ spaces we have (see, e.g., also [19, Proposition 2.3.17]) that

\begin{aligned} |\mathrm{d}(f|_\Omega +g|_\Omega )|^2+|\mathrm{d}(f|_\Omega -g|_\Omega )|^2=2\left( |\mathrm{d}(f|_\Omega )|^2+|\mathrm{d}(g|_\Omega )|^2\right) \quad \mu \text {-a.e. on }\Omega . \end{aligned}

Then the conclusion (7.3) comes from this identity, (7.8) and the Lindelöf property of Z. $$\square$$
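For completeness, we sketch the covering argument behind the final step, writing $$\Omega _x$$ for the set constructed above around the point $$x\in Z$$: by (7.6) each $$\Omega _x$$ contains a neighbourhood of x in $${\bar{Z}}$$, so by the Lindelöf property of the separable space $${\bar{Z}}$$ countably many of them, say $$(\Omega _{x_n})_{n\in {\mathbb {N}}}$$, cover Z. Since $$\mu$$ is concentrated on Z this gives

\begin{aligned} \mu \Big (\mathrm{Y}{\setminus } \bigcup _{n\in {\mathbb {N}}}\Omega _{x_n}\Big )=0, \end{aligned}

so that an identity valid $$\mu$$-a.e. on each $$\Omega _{x_n}$$, such as (7.3) localised to $$\Omega _{x_n}$$, holds $$\mu$$-a.e. on $$\mathrm{Y}$$.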