1 Introduction and main results

Preiss and Tišer [16] proved that a finite Borel measure on a separable Banach space is uniquely determined by its values on balls. Counterexamples in more general settings were given by Davies [6]. Federer [10] formulated generalized versions of Besicovitch’s Covering Lemma and Besicovitch’s Theorem for separable and directionally limited metric spaces. Building on this, Buet and Leonardi [5] recently demonstrated that a Borel measure on a separable and directionally limited metric space \((X,\mathtt {d})\) that is finite on bounded sets can be fully reconstructed from its values on closed balls, namely by means of Carathéodory’s metric construction. As a consequence, the map

$$\begin{aligned} {\mathbf {d}}_{{\mathcal {A}}}:{\mathcal {M}}(X)\times {\mathcal {M}}(X)\longrightarrow [0,\infty )\,, \quad (\mu ,\nu )\longmapsto \sup \limits _{A\in {\mathcal {A}}}\left| \mu (A)-\nu (A)\right| \,, \end{aligned}$$
(1)

where \({\mathcal {M}}(X)\) denotes the set of finite Borel measures on X, is a metric whenever \({\mathcal {A}}\) is a subfamily of the Borel algebra \({\mathcal {B}}(X)\) of X that contains all closed balls in X. The purpose of this work is to prove that \({\mathcal {M}}(X)\) is complete with respect to the metric \({\mathbf {d}}_{{\mathcal {A}}}\), namely by closely following and adapting the proof in [5].
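
To make (1) concrete, the following minimal Python sketch (an illustration only; the helper names `ball_mass` and `d_A` are ad hoc and not part of any library) evaluates the supremum in (1) over a finite family of closed balls for two discrete measures on the real line. A finite family of course only yields a lower bound for the supremum over all of \({\mathcal {A}}\).

```python
import numpy as np

def ball_mass(atoms, weights, center, radius):
    """Mass that the discrete measure sum_i w_i * delta_{x_i} assigns to the
    closed ball B_radius(center) = {y : |y - center| <= radius}."""
    atoms, weights = np.asarray(atoms, dtype=float), np.asarray(weights, dtype=float)
    return weights[np.abs(atoms - center) <= radius].sum()

def d_A(mu, nu, balls):
    """sup_{A in balls} |mu(A) - nu(A)| for discrete measures mu = (atoms, weights),
    nu = (atoms, weights) and a *finite* family of closed balls (center, radius)."""
    return max(abs(ball_mass(*mu, c, r) - ball_mass(*nu, c, r)) for c, r in balls)

# A uniform measure on {0, 1, 2, 3} versus a Dirac measure at 0.
mu = ([0.0, 1.0, 2.0, 3.0], [0.25, 0.25, 0.25, 0.25])
nu = ([0.0], [1.0])
balls = [(c, r) for c in np.linspace(-1, 4, 11) for r in (0.5, 1.0, 2.0)]
print(d_A(mu, nu, balls))   # 0.75, attained e.g. by the ball B_0.5(0)
```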

In 1999, Zelený [17] proved that the Borel algebra of a Euclidean space even coincides with the Dynkin system generated by the closed balls, i.e., with the smallest family of sets that contains the closed balls and is stable under complements and countable disjoint unions. In the same year, Jackson and Mauldin [12] proved this result for the space \({\mathbb {R}}^L\) furnished with an arbitrary norm. The statement is false for infinite-dimensional Hilbert spaces [13]. Earlier results for the cases \(L=2\) and \(L=3\) were obtained by Olejček [14, 15].

To recall the notion of directional limitedness of a metric space \((X,\mathtt {d})\) and to formulate our results, we denote an open/closed ball of radius \(r>0\) and center \(x\in X\) by

$$\begin{aligned} U_r(x)=\{y\in X: \mathtt {d}(x,y)< r\}\,,\qquad \qquad B_r(x)=\{y\in X: \mathtt {d}(x,y)\le r\}\,, \end{aligned}$$

where we use the convention \(B_{\infty }(x)=X\).

Definition 1

(cf. [5], Def. 2.5) Let \((X,\mathtt {d})\) be a metric space and \((\xi ,\eta ,\zeta )\in (0,\infty )\times (0,\frac{1}{3})\times {\mathbb {N}}\). The distance \(\mathtt {d}\) is called directionally \((\xi ,\eta ,\zeta )\)-limited at \(A\subset X\) if the following two conditions hold:

  1.

    For all \(a,b,c\in A\) with \(\mathtt {d}(a,b)\ge \mathtt {d}(a,c)>0\), there is some \(x\in X\) such that

    $$\begin{aligned} \mathtt {d}(a,x)=\mathtt {d}(a,c) \qquad \text {and}\qquad \mathtt {d}(b,x)+\mathtt {d}(a,c)=\mathtt {d}(a,b)\,. \end{aligned}$$
    (2)
  2.

    If \(a\in A\) and \(B\subset A\cap \left( U_{\xi }(a)\setminus \{a\}\right) \) are such that

    $$\begin{aligned} \frac{\mathtt {d}(x,c)}{\mathtt {d}(a,c)}\ge \eta \end{aligned}$$

    holds whenever \(b,c\in B\) with \(b\ne c\) and \(x\in X\) satisfy (2) (with this choice of a, b and c), one has \(\text {card}(B)\le \zeta \).

We call \((X,\mathtt {d})\) directionally limited if \(\mathtt {d}\) is directionally \((\xi ,\eta ,\zeta )\)-limited at X for some such \((\xi ,\eta ,\zeta )\).
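
For orientation, here is a minimal worked instance of condition 1 (not taken from [5], only for illustration): if X is a convex subset of a normed vector space, \(\mathtt {d}\) the induced metric and \(a,b,c\in X\) with \(\mathtt {d}(a,b)\ge \mathtt {d}(a,c)>0\), then the point on the segment from a to b at distance \(\mathtt {d}(a,c)\) from a satisfies (2),

$$\begin{aligned} x=a+\frac{\mathtt {d}(a,c)}{\Vert b-a\Vert }\,(b-a)\in X\,,\qquad \mathtt {d}(a,x)=\mathtt {d}(a,c)\,,\qquad \mathtt {d}(b,x)=\Vert b-a\Vert -\mathtt {d}(a,c)=\mathtt {d}(a,b)-\mathtt {d}(a,c)\,. \end{aligned}$$

In particular, \({\mathbb {R}}^d\) equipped with a norm metric is directionally limited (cf. [10]).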

We are now prepared to formulate our main result.

Theorem 1

We suppose that \((X,\mathtt {d})\) is a separable and directionally limited metric space and \({\mathcal {A}}\subset {\mathcal {B}}(X)\) is a family of Borel sets that contains all closed balls, i.e., \(B_r(x)\in {\mathcal {A}}\) holds for all \(x\in X\) and \(r\in (0,\infty ]\). Then the pair \(({\mathcal {M}}(X),{\mathbf {d}}_{{\mathcal {A}}})\) constitutes a complete metric space.

The main motivation to formulate Theorem 1 is an application to certain Markov processes induced by iterated random functions (see e.g. [4], Sect. 3.2 or [7]) on separable and directionally limited metric state spaces \((X,\mathtt {d})\). We assume henceforth that \((\Sigma ,{\mathscr {A}},{\mathbb {P}})\) is a probability space and that \(\{f_{\sigma }\}_{\sigma \in \Sigma }\) is a family of random functions \(f_{\sigma }:X\rightarrow X\), for which the map \({(\sigma ,x)\mapsto f_{\sigma }(x)}\) on \((\Sigma \times X,{\mathscr {A}}\otimes {\mathcal {B}}(X))\) into \((X,{\mathcal {B}}(X))\) is measurable. A sequence \(\omega \equiv (\sigma _n)_{n\in {\mathbb {N}}}\subset \Sigma \) of independent draws from \({\mathbb {P}}\) then induces a Markov process on X by

$$\begin{aligned} x_{\omega }({n+1})=f_{\sigma _{n+1}}(x_{\omega }(n))\,,\qquad n\in {\mathbb {N}}\,,\qquad x_{\omega }(0)\equiv x(0)\in X\,. \end{aligned}$$
(3)
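
As an aside, the recursion (3) is straightforward to simulate; the following minimal Python sketch uses the hypothetical placeholders `draw_sigma` (one independent draw from \({\mathbb {P}}\)) and `apply_f` (evaluation of \(f_{\sigma }\)), which have to be supplied by a concrete model.

```python
def simulate(x0, draw_sigma, apply_f, n_steps, rng):
    """Generate x_omega(0), ..., x_omega(n_steps) according to (3):
    x_omega(n+1) = f_{sigma_{n+1}}(x_omega(n)) with sigma_1, sigma_2, ... i.i.d. from P."""
    trajectory = [x0]
    for _ in range(n_steps):
        sigma = draw_sigma(rng)                             # independent draw from P
        trajectory.append(apply_f(sigma, trajectory[-1]))   # one step of (3)
    return trajectory
```

Example 1 below instantiates such a model with random similitudes on a convex set.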

The associated infinite product probability space is denoted by \((\Omega ,{\mathsf {A}},{\mathbf {P}})=\bigotimes _{n\in {\mathbb {N}}}(\Sigma ,{\mathscr {A}},{\mathbb {P}})\). We recall that \(\Omega =\Sigma ^{{\mathbb {N}}}\), \({\mathsf {A}}={\mathscr {A}}^{\otimes {\mathbb {N}}}\) is the \(\sigma \)-algebra generated by \(\left\{ \pi _m^{-1}(A): A\in {\mathscr {A}}, m\in {\mathbb {N}}\right\} \), where \(\pi _m:\Omega \rightarrow \Sigma ,\,(\sigma _n)_{n\in {\mathbb {N}}}\mapsto \sigma _m\), and \({\mathbf {P}}={\mathbb {P}}^{\otimes {\mathbb {N}}}\) is uniquely given as the probability measure on \((\Omega ,{\mathsf {A}})\) satisfying \({\mathbf {P}}\big (\bigcap _{n=1}^{N}\pi _n^{-1}(A_n)\big )=\prod _{n=1}^N{\mathbb {P}}(A_n)\) for all \((A_n)_{n\in {\mathbb {N}}}\subset {\mathscr {A}}\) and \(N\in {\mathbb {N}}\) (cf. e.g. [2], §9). The expectation w.r.t. \({\mathbb {P}}\) and \({\mathbf {P}}\) is denoted by \({\mathbb {E}}\) and \({\mathbf {E}}\), respectively.

Moreover, we write \(F^N_{\omega }:=f_{\sigma _N}\circ \dots \circ f_{\sigma _1}\) and denote the set of Borel probability measures on X by \({\mathcal {P}}(X)\). Further, we introduce the adjoint of the transition operator associated with \(f_{\sigma }\) by

$$\begin{aligned}&f_{\sigma }^*:{\mathcal {P}}(X)\longrightarrow {\mathcal {P}}(X)\,,\quad (f_{\sigma }^*(\mu ))(A)\\&\quad =\int _X\text {d}\mu (x)\text { }{\mathbb {P}}\left( \{\sigma \in \Sigma : f_{\sigma }(x)\in A\}\right) \,,\quad \, A\in {\mathcal {B}}(X)\,, \end{aligned}$$

and the pushforward under \(f_{\sigma }\) by

$$\begin{aligned} (f_{\sigma })_{\#}:{\mathcal {P}}(X)\longrightarrow {\mathcal {P}}(X)\,,\qquad ((f_{\sigma })_{\#}(\mu ))(A)=\mu (f_{\sigma }^{-1}(A))\,,\qquad \,\, A\in {\mathcal {B}}(X)\,. \end{aligned}$$

The mappings \(f_{\sigma }^*\) and \((f_{\sigma })_{\#}\) are linked to each other via the relation

$$\begin{aligned} (f_{\sigma }^*(\mu ))(A)={\mathbb {E}}_{\sigma }\,((f_{\sigma })_{\#}(\mu ))(A)\,, \end{aligned}$$

which follows from Fubini’s theorem. Iterating this relation with \(\sigma =\sigma _n\) for \(n=1\) to N yields

$$\begin{aligned} (F^{N,*}_{\omega }(\mu ))(A)={\mathbf {E}}_{\omega }\,((F_{\omega }^N)_{\#}(\mu ))(A)\,, \end{aligned}$$

where \(F_{\omega }^{N,*}:=f_{\sigma _N}^*\circ \dots \circ f_{\sigma _1}^*\) is the N-th iterate of \(f_{\sigma }^*\) and \((F_{\omega }^N)_{\#}\) is the pushforward under \(F_{\omega }^N\). Note that the distributions \(\mu _n\) of the \(x_{\omega }(n)\), given by \(\mu _n:{\mathcal {B}}(X)\rightarrow [0,1],\, A\mapsto {\mathbf {P}}(x_{\omega }(n)\in A)\) for all \(n\in {\mathbb {N}}\), obey \(\mu _n=F_{\omega }^{n,*}(\mu _0)\), where \(\mu _0=\delta _{x(0)}\) is the Dirac measure at the starting point x(0).
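
For completeness, the single-step relation can be verified directly: by the joint measurability of \((\sigma ,x)\mapsto f_{\sigma }(x)\) and the finiteness of \(\mu \) and \({\mathbb {P}}\), Fubini’s theorem yields

$$\begin{aligned} (f_{\sigma }^*(\mu ))(A)&=\int _X\text {d}\mu (x)\text { }{\mathbb {P}}\left( \{\sigma \in \Sigma : f_{\sigma }(x)\in A\}\right) =\int _X\text {d}\mu (x)\int _{\Sigma }\text {d}{\mathbb {P}}(\sigma )\text { }{\mathbf {1}}_A(f_{\sigma }(x))\\&=\int _{\Sigma }\text {d}{\mathbb {P}}(\sigma )\int _X\text {d}\mu (x)\text { }{\mathbf {1}}_A(f_{\sigma }(x))=\int _{\Sigma }\text {d}{\mathbb {P}}(\sigma )\text { }\mu \big (f_{\sigma }^{-1}(A)\big )={\mathbb {E}}_{\sigma }\,((f_{\sigma })_{\#}(\mu ))(A)\,, \end{aligned}$$

and the iterated relation follows by induction, using the independence of the coordinates \(\sigma _1,\dots ,\sigma _N\).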

Bhattacharya and Majumdar came up with the idea of endowing the collection of probability measures with a metric of the type (1) in order to apply Banach’s fixed-point theorem to adjoints of the transition operators (see [4], Sect. 3.5.2). In our concrete setting, this requires the family \({\mathcal {A}}\) and the random function \(f_{\sigma }\) to be such that \({\mathcal {P}}(X)\) is complete w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\) and some iterate \(F_{\omega }^{N,*}\) of the adjoint of the transition operator \(f_{\sigma }^*\) is a uniformly strict contraction, i.e.,

$$\begin{aligned} \alpha _N:=\sup \limits _{\mu ,\nu \in {\mathcal {P}}(X)\atop \mu \ne \nu }\frac{{\mathbf {d}}_{{\mathcal {A}}}(F_{\omega }^{N,*}(\mu ),F_{\omega }^{N,*}(\nu ))}{{\mathbf {d}}_{{\mathcal {A}}}(\mu ,\nu )}<1\,. \end{aligned}$$
(4)

Under these hypotheses, \(F^{N,*}_{\omega }\) has a unique fixed point \(\varrho \in {\mathcal {P}}(X)\) and all \(\mu \in {\mathcal {P}}(X)\) obey

$$\begin{aligned} {\mathbf {d}}_{{\mathcal {A}}}(F^{N,*}_{\omega }(\mu ),\varrho )\le \alpha _N\,{\mathbf {d}}_{{\mathcal {A}}}(\mu ,\varrho )\,. \end{aligned}$$

If \((X,\mathtt {d})\) and \({\mathcal {A}}\) are as in Theorem 1, we now know that \(({\mathcal {P}}(X),{\mathbf {d}}_{{\mathcal {A}}})\) is indeed complete because \({\mathcal {P}}(X)\) is clearly closed in \({\mathcal {M}}(X)\) w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\). To pave the way for the condition (4), Bhattacharya and Majumdar provided two rather accessible assumptions based on a splitting condition. One of these assumptions is trivially satisfied if \(f_{\sigma }\) is \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable in the sense that \(f_{\sigma }^{-1}({\mathcal {A}})\subset {\mathcal {A}}\). After simplifying the other assumption for connected \((X,\mathtt {d})\) and continuous \(f_{\sigma }\), we combine the result of Bhattacharya and Majumdar with Theorem 1 (for details, see Sect. 2).

This leads us to the following application:

Theorem 2

Let \((X,\mathtt {d})\) be a separable, directionally limited and connected metric space and let \({\mathcal {A}}\subset {\mathcal {B}}(X)\) be a family of Borel sets containing all closed balls, i.e., \(B_r(x)\in {\mathcal {A}}\) holds for all \(x\in X\) and \(r\in (0,\infty ]\). Further, let \(\{f_{\sigma }\}_{\sigma \in \Sigma }\) be a family of \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable continuous random functions \(f_{\sigma }:X\rightarrow X\) as above and suppose that for some \(N\in {\mathbb {N}}\) one has

$$\begin{aligned} \epsilon _N:=\inf \limits _{A\in {\mathcal {A}}}{\mathbf {P}}\left( F^N_{\omega }(X)\cap \partial A=\emptyset \right) >0\,. \end{aligned}$$
(5)

Then, the Markov process (3) has a unique invariant Borel probability measure \(\varrho \) satisfying \(f_{\sigma }^*(\varrho )=\varrho \) and the distribution \(\mu _n\) of the \(x_{\omega }(n)\) converges uniformly in x(0) to \(\varrho \) w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\). More precisely, one has for all x(0) and all \(n\in {\mathbb {N}}\) the inequality

$$\begin{aligned} {\mathbf {d}}_{{\mathcal {A}}}(\mu _n,\varrho )\le \big (1-\epsilon _N\big )^{\left\lfloor {n}/{N}\right\rfloor }\,. \end{aligned}$$
(6)

Remark 1

When it comes to applying Theorem 2, the choice of the family \({\mathcal {A}}\) is essential. On the one hand, the smaller \({\mathcal {A}}\) is chosen, the easier it is to fulfill condition (5). On the other hand, a small family \({\mathcal {A}}\) imposes stronger constraints on the admissible random functions \(f_{\sigma }\) due to the requirement of \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurability.

Remark 2

Let \((Y,\mathtt {d})\) be a metric space and \(g_{\sigma }:Y\rightarrow Y\) be a random bijective similitude, i.e.,

$$\begin{aligned} \mathtt {d}(g_{\sigma }(y_1),g_{\sigma }(y_2))=\rho _{\sigma }\,\mathtt {d}(y_1,y_2)\qquad \qquad \forall \text { }y_1,y_2\in Y \end{aligned}$$

holds for some random \(\rho _{\sigma }>0\). Moreover, let \(X\subset Y\) satisfy \(g_{\sigma }(X)\subset X\) and be such that \((X,\mathtt {d})\) is separable, directionally limited and connected. Then \(f_{\sigma }:X\rightarrow X,\, x\mapsto g_{\sigma }(x)\) is an \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable random similitude on X with \({\mathcal {A}}=\left\{ B_r(y)\cap X: (y,r)\in Y\times (0,\infty ]\right\} \) due to

$$\begin{aligned} f_{\sigma }^{-1}(B_r(y)\cap X)&=g_{\sigma }^{-1}(B_r(y)\cap X)\cap X=g_{\sigma }^{-1}(B_r(y))\cap g_{\sigma }^{-1}(X)\cap X\\ {}&=g_{\sigma }^{-1}(B_r(y))\cap X= B_{r\rho _{\sigma }^{-1}}(g_{\sigma }^{-1}(y))\cap X\qquad \qquad \\&\qquad \forall \text { }(y,r)\in Y\times (0,\infty ]\,, \end{aligned}$$

where the third step incorporates \(X\subset g_{\sigma }^{-1}(g_{\sigma }(X))\subset g_{\sigma }^{-1}(X)\) (see also Example 1 below).

Example 1

Let \(X\subset {\mathbb {R}}^d\) be bounded and convex, let \(\Sigma =\{1,\dots ,L\}\) and \(a_{1},\dots ,a_{L}\in X\). Further, we assume that \({\mathbb {P}}({\sigma }=i)>0\) holds for all \(i\in \Sigma \). The random bijective similitude

$$\begin{aligned} g_{\sigma }:{\mathbb {R}}^d\rightarrow {\mathbb {R}}^d\,,\quad y\mapsto a_{\sigma }+\frac{y-a_{\sigma }}{2} \end{aligned}$$

moves every point halfway towards the randomly chosen point \(a_{\sigma }\), i.e., it halves its distance to \(a_{\sigma }\). As X is convex and \(a_{\sigma }\in X\), the set X is invariant under \(g_{\sigma }\). Then, \(f_{\sigma }:X\rightarrow X,\, x\mapsto g_{\sigma }(x)\) is an \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable random similitude on X with \({\mathcal {A}}=\left\{ B_r(y)\cap X: (y,r)\in {\mathbb {R}}^d\times (0,\infty ]\right\} \) by Remark 2.

Now let \(L\ge d+2\) and assume that \(a_1,\dots ,a_{d+1}\) are in general position, i.e., no hyperplane of codimension 1 contains all of \(a_1,\dots ,a_{d+1}\); consequently, there is a unique \((d-1)\)-sphere containing all of them. Further, suppose that \(a_{d+2}\) is not contained in that unique sphere. Then, the condition (5) holds for some \(N\in {\mathbb {N}}\) (see appendix) so that Theorem 2 applies to the associated Markov process.
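
The following minimal Python sketch illustrates Example 1 numerically (the concrete parameter values are hypothetical choices and not part of the proof): with \(d=2\), \(L=4\), anchor points in the convex set \(X=[0,1]^2\) and uniform \({\mathbb {P}}\), the empirical law of \(x_{\omega }(n)\) becomes independent of the starting point x(0), in line with Theorem 2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Example 1 with d = 2, L = 4 and uniform P on Sigma = {1, ..., 4}; a_1, a_2, a_3
# are not collinear and a_4 does not lie on their circumscribed circle.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.7, 0.6]])

def run_chain(x0, n_steps, n_runs):
    """Return samples of x_omega(n_steps) for n_runs independent draws of omega,
    following (3) with g_sigma(x) = a_sigma + (x - a_sigma)/2."""
    x = np.tile(np.asarray(x0, dtype=float), (n_runs, 1))
    for _ in range(n_steps):
        a = anchors[rng.integers(len(anchors), size=n_runs)]  # sigma_n i.i.d. from P
        x = a + (x - a) / 2.0                                 # halve the distance to a_sigma
    return x

# The empirical distributions of x_omega(n) for two different starting points
# are practically indistinguishable after a moderate number of steps.
for x0 in ([0.0, 0.0], [1.0, 1.0]):
    samples = run_chain(x0, n_steps=40, n_runs=50_000)
    print(x0, samples.mean(axis=0).round(3))
```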

Example 2

Let \(X=\overline{{\mathbb {D}}}\) be the closed unit disc contained in the Riemann sphere \(Y=\overline{{\mathbb {C}}}\) and let

$$\begin{aligned} \Sigma= & {} \text {SU}_{\le }(1,1):=\left\{ T\in {\mathbb {C}}^{2\times 2}: T^*GT\le G,\,\,\,\text {det}(T)=1\right\} \subset \text {SL}(2,{\mathbb {C}}),\\ G= & {} \text {diag}(1,-1)\,, \end{aligned}$$

be the semigroup of sub-Lorentzian matrices. Then, the random Möbius transformation

$$\begin{aligned} g_{\sigma }: \overline{\mathbb {C}}\rightarrow \overline{\mathbb {C}}\,,\quad y\mapsto \left\{ \begin{array}{ll} \infty \,,&{} \qquad y=-dc^{-1}\,,\\ ac^{-1}\,,&{} \qquad y=\infty \,,\\ \frac{ay+b}{cy+d}\,,&{} \qquad \text {otherwise}\,, \end{array} \right. \qquad \text {where}\qquad \sigma =\begin{pmatrix}a&{}b\\ c&{}d\end{pmatrix}\,, \end{aligned}$$

leaves the subset \(X\subset Y\) invariant [8] and the preimage of a cline, i.e., either a straight line or a circle, under \(g_{\sigma }\) is again a cline [11]. By a suitable adaptation of Remark 2, it then follows that \(f_{\sigma }:X\rightarrow X,\, x\mapsto g_{\sigma }(x)\) is an \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable random Möbius transformation on X with

$$\begin{aligned} {\mathcal {A}}=\left\{ B_r(y)\cap X: (y,r)\in {\mathbb {C}}\times (0,\infty ]\right\} \cup \left\{ H_r(y)\cap X: (y,r)\in \left( {\mathbb {C}}\setminus \{0\}\right) \times {\mathbb {R}}\right\} \,, \end{aligned}$$

where the \(H_r(y):=\left\{ z\in {\mathbb {C}}: \mathfrak {Re}(z{\overline{y}})\ge r\right\} \) are the closed half-planes on \({\mathbb {C}}\).
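
As a small numerical sanity check for Example 2 (an illustration only; it is restricted to matrices with \(T^*GT=G\), i.e., to the subgroup \(\text {SU}(1,1)\subset \text {SU}_{\le }(1,1)\), sampled via the standard parametrization \(a=\cosh (t)e^{i\alpha }\), \(b=\sinh (t)e^{i\beta }\), \(c={\overline{b}}\), \(d={\overline{a}}\)), the following Python sketch verifies that the induced Möbius transformations map the closed unit disc into itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_su11(rng):
    """Sample T = [[a, b], [conj(b), conj(a)]] with |a|^2 - |b|^2 = 1, i.e. T in SU(1,1);
    such matrices satisfy T^* G T = G with G = diag(1, -1) and det(T) = 1."""
    t = rng.uniform(0.0, 2.0)
    a = np.cosh(t) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    b = np.sinh(t) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))
    return a, b, np.conj(b), np.conj(a)

def moebius(mat, y):
    """Generic branch of the Moebius transformation g_sigma from Example 2."""
    a, b, c, d = mat
    return (a * y + b) / (c * y + d)

# Random points of the closed unit disc X stay in X under g_sigma (up to round-off);
# the denominator cannot vanish on X since |conj(a)| > |conj(b) * y| for |y| <= 1.
z = rng.uniform(-1.0, 1.0, 5000) + 1j * rng.uniform(-1.0, 1.0, 5000)
pts = z[np.abs(z) <= 1.0]
for _ in range(5):
    assert np.all(np.abs(moebius(sample_su11(rng), pts)) <= 1.0 + 1e-12)
print("disc invariance verified for", pts.size, "points and 5 random maps")
```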

By proceeding in a similar manner as demonstrated in the appendix for Example 1, Barthel [1] analyzed the assumption (5) with this \({\mathcal {A}}\) for certain random Möbius transformations leaving the unit disc invariant, namely also with a view to the existence and uniqueness of the invariant Borel probability measure of the associated Markov process. Besides this, the analysis of the strong irreducibility and proximality of the support of the random matrix \(\sigma \) that induces the Markov process by Möbius transformation is another well-established tool to prove the existence and uniqueness of the corresponding invariant Borel probability measure [3].

2 Proof of Theorem 2

As stated above, Theorem 2 is essentially an application of a result of Bhattacharya and Majumdar, once the completeness of \(({\mathcal {M}}(X),{\mathbf {d}}_{{\mathcal {A}}})\) is established in Theorem 1. To this end, we recall their result, which provides hypotheses implying that \(F_{\omega }^{N,*}\) is a uniformly strict contraction, in a form adapted to the situation of our interest:

Lemma 1

(cf. [4], Sect. 3.5.2, Thm. 5.2) Let \((X,\mathtt {d})\) be a separable metric space and let \({\mathcal {A}}\subset {\mathcal {B}}(X)\) be a family of Borel sets. Further, let \(\{f_{\sigma }\}_{\sigma \in \Sigma }\) be a family of random functions \(f_{\sigma }:X\rightarrow X\) as above for which there exists some \(N\in {\mathbb {N}}\) such that the inequality

$$\begin{aligned} {\mathbf {d}}_{{\mathcal {A}}}\left( (F_{\omega }^N)_{\#}(\mu ),(F_{\omega }^N)_{\#}(\nu )\right) \le {\mathbf {d}}_{{\mathcal {A}}}(\mu ,\nu ) \end{aligned}$$
(7)

holds for all \(\omega \in \Omega \) and \(\mu ,\nu \in {\mathcal {P}}(X)\). If one also has

$$\begin{aligned} \inf _{A\in {\mathcal {A}}}{\mathbf {P}}\left( (F^N_{\omega })^{-1}(A)=X \text { }\vee \text { }(F^N_{\omega })^{-1}(A)=\emptyset \right) >0\,, \end{aligned}$$
(8)

then \(F_{\omega }^{N,*}\) is a uniformly strict contraction, i.e., (4) is satisfied.

Remark 3

The inequality (7) is clearly satisfied if \(f_{\sigma }\) is \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurable for all \(\sigma \in \Sigma \).

The following lemma yields a condition that is more accessible than the one appearing in (8):

Lemma 2

Assume that \((X,\mathtt {d})\) is connected, \(F:X\rightarrow X\) is continuous and \(A\in {\mathcal {B}}(X)\) is such that F(X) and \(\partial A\) are disjoint. Then one has either \(F^{-1}(A)=X\) or \(F^{-1}(A)=\emptyset \).

Proof

If \(F(X)\cap \partial A=\emptyset \), one has \(F(X)\cap A=F(X)\cap A^{\circ }\) and \(F(X)\cap (X\setminus A)=F(X)\cap (X\setminus {\overline{A}})\). Thus both \(F(X)\cap A\) and \(F(X)\cap (X\setminus A)\) are open in the subspace topology on F(X) and their union equals F(X). As X is connected and F is continuous, F(X) is connected so that either \(F(X)\cap A\) or \(F(X)\cap (X\setminus A)\) is empty. This implies either \(F^{-1}(A)=\emptyset \) or \(F^{-1}(A)=X\). \(\square \)

Theorem 1, Remark 3, Lemmas 1 and 2 and Banach’s fixed point theorem together imply Theorem 2; a brief sketch of how these ingredients combine is given below.
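
In more detail (a sketch only): by Lemma 2, applied to \(F=F^N_{\omega }\), the event in (5) is contained in the event in (8), so (8) holds with infimum at least \(\epsilon _N\), and Lemma 1 together with Remark 3 yields (4); the quantitative form of the underlying splitting argument in [4] moreover gives \(\alpha _N\le 1-\epsilon _N\). By Theorem 1, \(({\mathcal {P}}(X),{\mathbf {d}}_{{\mathcal {A}}})\) is complete, so Banach’s fixed point theorem provides a unique fixed point \(\varrho \) of \(F^{N,*}_{\omega }\); since \(f_{\sigma }^*\) commutes with \(F^{N,*}_{\omega }=(f_{\sigma }^*)^N\), uniqueness forces \(f_{\sigma }^*(\varrho )=\varrho \). Finally, each single step is non-expansive w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\) by the \({\mathcal {A}}\)-\({\mathcal {A}}\)-measurability and the relation between \(f_{\sigma }^*\) and \((f_{\sigma })_{\#}\) from Sect. 1, so that for \(\mu _0=\delta _{x(0)}\)

$$\begin{aligned} {\mathbf {d}}_{{\mathcal {A}}}(\mu _n,\varrho )={\mathbf {d}}_{{\mathcal {A}}}\big (F^{n,*}_{\omega }(\mu _0),F^{n,*}_{\omega }(\varrho )\big )\le \alpha _N^{\left\lfloor {n}/{N}\right\rfloor }\,{\mathbf {d}}_{{\mathcal {A}}}(\mu _0,\varrho )\le \big (1-\epsilon _N\big )^{\left\lfloor {n}/{N}\right\rfloor }\,, \end{aligned}$$

where the last step also uses \({\mathbf {d}}_{{\mathcal {A}}}(\mu _0,\varrho )\le 1\), which holds for any two Borel probability measures. This is precisely the bound (6).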

3 Proof of the completeness of \(\big ({\mathcal {M}}(X),{\mathbf {d}}_{{\mathcal {A}}}\big )\)

The collection of all closed balls with radius \(r\le \delta \), together with \(\emptyset \), is denoted by

$$\begin{aligned} {\mathcal {C}}_{\delta }=\left\{ B_r(x): (x,r)\in X\times (0,\delta ]\right\} \cup \{\emptyset \}\,, \end{aligned}$$

and the set of all closed balls, together with \(\emptyset \), is denoted by

$$\begin{aligned} {\mathcal {C}}=\left\{ B_r(x): (x,r)\in X\times (0,\infty ]\right\} \cup \{\emptyset \}\,. \end{aligned}$$

Lemma 3

The map \({\mathbf {d}}_{{\mathcal {A}}}\) is a metric.

Proof

The non-negativity, the symmetry and the triangle inequality are obvious. It remains to verify the identity of indiscernibles. Clearly, all \(\mu \in {\mathcal {M}}(X)\) satisfy \({\mathbf {d}}_{{\mathcal {A}}}(\mu ,\mu )=0\). Furthermore, if \({\mathbf {d}}_{{\mathcal {A}}}(\mu _1,\mu _2)=0\) holds for some \(\mu _1,\mu _2\in {\mathcal {M}}(X)\), then \(\mu _1\) and \(\mu _2\) coincide on all \({\mathcal {C}}\) so that the premeasures \(p_{1}\) and \(p_{2}\) defined by \(p_{j}:{\mathcal {C}}\rightarrow [0,\infty )\,, A\mapsto \mu _j(A)\) are equal. Therefore the Borel measures \(\mu ^{p_1}\) and \(\mu ^{p_2}\) given as the restrictions of the metric outer measures \(\mu ^{p_{1},*}\) and \(\mu ^{p_2,*}\) obtained by Carathéodory’s metric construction (see [5], Theorem 2.4) to the Borel algebra are equal, i.e., one has \(\mu ^{p_1}=\mu ^{p_2}\), where

$$\begin{aligned}&\mu ^{p_j}:{\mathcal {B}}(X)\longrightarrow [0,\infty )\,,\quad \\&E\longmapsto \sup \limits _{\delta >0}\inf \left\{ \sum \limits _{m\in {\mathbb {N}}}\mu _j(A_m): A_m\in {\mathcal {C}}_{\delta }\,, \text { }E\subset \bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} \,,\\&j=1,2\,. \end{aligned}$$

Further, as \((X,\mathtt {d})\) is separable and directionally limited and \(\mu _1\) and \(\mu _2\) are finite, one has \(\mu ^{p_1}=\mu _1\) and \(\mu ^{p_2}=\mu _2\) (see [5], Proposition 2.11). In conclusion, it holds that \(\mu _1=\mu ^{p_1}=\mu ^{p_2}=\mu _2\). \(\square \)

Lemma 4

The set \({\mathcal {M}}(X)\) is complete w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\).

Proof

Suppose that \(\{\mu _n\}_{n\in {\mathbb {N}}}\) is Cauchy w.r.t. \({\mathbf {d}}_{{\mathcal {A}}}\). Then, the sequence of restrictions of \(\mu _n\) to \({\mathcal {A}}\) is uniformly Cauchy, i.e., one has \(\sup _{A\in {\mathcal {A}}}\left| \mu _n(A)-\mu _m(A)\right| \rightarrow 0\) as \(n,m\rightarrow \infty \). Since \([0,\infty )\) is complete, it is uniformly convergent, i.e., the limits \(\lim _{n\rightarrow \infty }\mu _n(A)\) exist for all \(A\in {\mathcal {A}}\) and one even has \(\lim _{n\rightarrow \infty }\sup _{A\in {\mathcal {A}}}\left| \mu _n(A)-\lim _{m\rightarrow \infty }\mu _m(A)\right| =0\). Now the pointwise limits define a premeasure \(p:{\mathcal {C}}\rightarrow [0,\infty )\) by \(p(A)=\lim _{n\rightarrow \infty }\mu _n(A)\), which, in turn, allows for the construction of a Borel measure \(\mu \) given as the restriction of the metric outer measures obtained by Carathéodory’s metric construction (see [5], Theorem 2.4) to the Borel algebra, viz.,

$$\begin{aligned} \mu :{\mathcal {B}}(X)\rightarrow [0,\infty )\,,\quad E\longmapsto \sup \limits _{\delta >0}\inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta }\,, \text { }E\subset \bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} \,. \end{aligned}$$
(9)

First, we prove that the Borel measure \(\mu \) is finite, i.e., it lies in \({\mathcal {M}}(X)\), namely by following Step one of the proof of Proposition 2.11 in [5]. To start with, we note that there exist some \(\xi >0\), \(\eta \in (0,\frac{1}{3})\) and \(\zeta \in {\mathbb {N}}\) such that \((X,\mathtt {d})\) is directionally \((\xi ,\eta ,\zeta )\)-limited at X. Due to the Generalized Besicovitch Theorem (see [5], Theorem 2.9), for all \(\delta \in (0,\frac{\xi }{2})\) there exist \(2\zeta +1\) countable subfamilies \({\mathcal {G}}_1^{\delta },\dots ,{\mathcal {G}}_{2\zeta +1}^{\delta }\subset {\mathcal {C}}_{\delta }\), each consisting of mutually disjoint balls, obeying \(X=\bigcup _{j=1}^{2\zeta +1}\bigsqcup _{G\in {\mathcal {G}}_j^{\delta }}G\). Thus one has

$$\begin{aligned} \begin{aligned} \mu (X)&=\sup \limits _{\delta>0}\inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta }\,, \text { }X=\bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} \\&=\sup \limits _{\frac{\xi }{2}>\delta>0}\inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta }\,, \text { }X=\bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} \\&\le \sup \limits _{\frac{\xi }{2}>\delta >0}\sum \limits _{j=1}^{2\zeta +1}\sum \limits _{G\in {\mathcal {G}}_j^{\delta }}p(G)\,, \end{aligned} \end{aligned}$$
(10)

where the second step incorporates \({\mathcal {C}}_{\delta }\subset {\mathcal {C}}_{\delta ^{\prime }}\) for all \(\delta \le \delta ^{\prime }\). Now the mutual disjointness of the elements of the \({\mathcal {G}}_{j}^{\delta }\) and Fatou’s Lemma allow us to estimate, uniformly in \(\delta \),

$$\begin{aligned} \sum \limits _{G\in {\mathcal {G}}_j^{\delta }}p(G)&=\sum \limits _{G\in {\mathcal {G}}_j^{\delta }}\liminf \limits _{n\rightarrow \infty }\mu _n(G)\le \liminf \limits _{n\rightarrow \infty }\sum \limits _{G\in {\mathcal {G}}_j^{\delta }}\mu _n(G)\nonumber \\&\le \liminf \limits _{n\rightarrow \infty }\mu _n\left( X\right) =p\left( X\right) \,. \end{aligned}$$
(11)

Now (10) and (11) imply \(\mu (X)\le (2\zeta +1)\,p(X)<\infty \) so that one has indeed \(\mu \in {\mathcal {M}}(X)\).

Second, we prove \({\mathbf {d}}_{{\mathcal {A}}}(\mu _n,\mu )=\sup _{A\in {\mathcal {A}}}\left| \mu _n(A)-\mu (A)\right| \rightarrow 0\) as \(n\rightarrow \infty \). For this, it suffices to prove \(\lim _{n\rightarrow \infty }\mu _n(A)=\mu (A)\) for all \(A\in {\mathcal {A}}\), since \(\lim _{n\rightarrow \infty }\sup _{A\in {\mathcal {A}}}\left| \mu _n(A)-\lim _{m\rightarrow \infty }\mu _m(A)\right| =0\).

As for \(\lim _{n\rightarrow \infty }\mu _n(A)\le \mu (A)\) for all \(A\in {\mathcal {A}}\), we adapt the proof of Lemma 2.5 in [5] and show

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\mu _n(E)\le \mu (E)\qquad \forall \text { }E\in {\mathcal {B}}(X)\,. \end{aligned}$$
(12)

We fix \(\delta >0\). For any \(\epsilon >0\), there exist \(\left\{ F_m\right\} _{m\in {\mathbb {N}}}\subset {\mathcal {C}}_{\delta }\) with \(E\subset \bigcup _{m\in {\mathbb {N}}}F_m\) such that

$$\begin{aligned} \sum \limits _{m\in {\mathbb {N}}}p(F_m)\le \inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta }\,, \text { }E\subset \bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} +\epsilon \le \mu (E)+\epsilon \,. \end{aligned}$$
(13)

Next, we use the subadditivity of \(\mu _n\) and apply Fatou’s Lemma,

$$\begin{aligned} \limsup \limits _{n\rightarrow \infty }\mu _n(E)\le \limsup \limits _{n\rightarrow \infty }\sum \limits _{m\in {\mathbb {N}}}\mu _n(F_m)\le \sum \limits _{m\in {\mathbb {N}}} \limsup \limits _{n\rightarrow \infty }\mu _n(F_m)=\sum \limits _{m\in {\mathbb {N}}} p(F_m)\,. \end{aligned}$$
(14)

Now combining (13) with (14) yields \(\limsup _{n\rightarrow \infty }\mu _n(E)\le \mu (E)+\epsilon \). As \(\epsilon >0\) was arbitrary, this proves (12).

As for \(\lim _{n\rightarrow \infty }\mu _n(A)\ge \mu (A)\) for all \(A\in {\mathcal {A}}\), it suffices to prove \(p(X)\ge \mu (X)\), since

$$\begin{aligned} \lim \limits _{n\rightarrow \infty }\mu _n(A)&\ge \lim \limits _{n\rightarrow \infty }\mu _n(X)-\limsup \limits _{n\rightarrow \infty }\mu _n(X\setminus A)\\&\ge \lim \limits _{n\rightarrow \infty }\mu _n(X)-\mu (X\setminus A)\\&=\mu (A)+\lim \limits _{n\rightarrow \infty }\mu _n(X)-\mu (X)\\&=\mu (A)+p(X)-\mu (X) \end{aligned}$$

holds in view of (12). For this, we mimic Step two of the proof of Proposition 2.11 in [5]. Since \(\mu \) is finite, the Generalized Besicovitch Theorem (see [5], Theorem 2.10) implies that for all \(\delta >0\) there exists a family \(\{H_m^{\delta }\}_{m\in {\mathbb {N}}}\subset {\mathcal {C}}_{\delta }\) of mutually disjoint closed balls with radius not exceeding \(\delta \) that covers a set of full \({\mu }\)-measure, i.e.,

$$\begin{aligned} {\mu }(X)={\mu }\left( \bigsqcup _{m\in {\mathbb {N}}}H^{\delta }_m\right) \,. \end{aligned}$$

We fix a decreasing sequence \(\left\{ \delta _k\right\} _{k\in {\mathbb {N}}}\) with \(\delta _k\downarrow 0\) and define \(H=\bigcap _{k\in {\mathbb {N}}}\left( \bigsqcup _{m\in {\mathbb {N}}}H^{\delta _k}_m\right) \). Then H lies in \({\mathcal {B}}(X)\) and still carries full \(\mu \)-measure, i.e., \(\mu (H)=\mu (X)\). Now all \(k\in {\mathbb {N}}\) satisfy

$$\begin{aligned} \begin{aligned} p(X)&=\lim \limits _{n\rightarrow \infty }\mu _n(X)\ge \liminf \limits _{n\rightarrow \infty }\sum \limits _{m\in {\mathbb {N}}}\mu _n(H^{\delta _k}_m)\\ {}&\ge \sum \limits _{m\in {\mathbb {N}}}\liminf \limits _{n\rightarrow \infty }\mu _n(H^{\delta _k}_m)=\sum \limits _{m\in {\mathbb {N}}}p(H^{\delta _k}_m)\\&\ge \inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta _k}\,, \text { }H\subset \bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} \,, \end{aligned} \end{aligned}$$
(15)

where we used the mutual disjointness of the \(H_m^{\delta _k}\) and Fatou’s Lemma. Due to the inclusion \({\mathcal {C}}_{\delta }\subset {\mathcal {C}}_{\delta ^{\prime }}\) for all \(\delta \le \delta ^{\prime }\), the supremum in (9) is attained in the limit \(\delta \downarrow 0\), i.e., as \(k\rightarrow \infty \) for \(\delta =\delta _k\). Hence, the right side of (15) converges to \(\mu (H)\) in the limit \(k\rightarrow \infty \). Thus, (15) implies

$$\begin{aligned} p(X)\ge \lim \limits _{k\rightarrow \infty } \inf \left\{ \sum \limits _{m\in {\mathbb {N}}}p(A_m): A_m\in {\mathcal {C}}_{\delta _k}\,, \text { }H\subset \bigcup \limits _{m\in {\mathbb {N}}}A_m\right\} =\mu (H)\,. \end{aligned}$$

This proves \(p(X)\ge \mu (X)\) because \(\mu (H)=\mu (X)\). \(\square \)