1 Introduction

The approximation of probability measures by atomic or empirical ones based on their discrepancies is a well-examined problem in approximation and complexity theory [59, 62, 67] with a wide range of applications, e.g., in the derivation of quadrature rules and in the construction of designs. Recently, discrepancies were also used in image processing for dithering [46, 72, 77], i.e., for representing a gray-value image by a finite number of black dots, and in generative adversarial networks [28].

Besides discrepancies, Optimal Transport (OT) and in particular Wasserstein distances have emerged as powerful tools to compare probability measures in recent years, see [24, 81] and the references therein. In fact, so-called Sinkhorn divergences, which are computationally much easier to handle than OT, are known to interpolate between OT and discrepancies [30]. For the sample complexity of Sinkhorn divergences we refer to [37]. The rates for approximating probability measures by atomic or empirical ones with respect to Wasserstein distances depend on the dimension of the underlying spaces, see [21, 58]. In contrast, approximation rates based on discrepancies can be given independently of the dimension [67], i.e., they do not suffer from the curse of dimensionality. Additionally, computing discrepancies does not require solving a minimization problem, in contrast to OT and Sinkhorn divergences, for which this is a major computational burden. Moreover, discrepancies admit a simple description in the Fourier domain, so that fast Fourier transforms can be applied, leading to better scalability than the aforementioned methods.

Instead of point measures, we are interested in approximations with respect to measures supported on curves. More precisely, we consider push-forward measures of probability measures \(\omega \in {\mathcal P} ([0,1])\) by Lipschitz curves of bounded speed, with special focus on absolutely continuous measures \(\omega = \rho \lambda \) and the Lebesgue measure \(\omega = \lambda \). In this paper, we focus on approximation with respect to discrepancies. For related results on quadrature and approximation on manifolds, we refer to [31, 47, 64, 65] and the references therein. An approximation model based on the 2-Wasserstein distance was proposed in [61]; that work exploits completely different techniques than ours, both in the theoretical and in the numerical part. Finally, we want to point out a relation to principal curves, which are used in computer science and graphics for approximating distributions that are approximately supported on curves [49, 50, 55, 57]. For the interested reader, we further comment on this direction of research in Remark 3 and in the conclusions. Next, we motivate our framework by several potential applications:

  • In MRI sampling [11, 17], it is desirable to construct sampling curves with short sampling times (short curve) and high reconstruction quality. Unfortunately, these requirements usually contradict each other and finding a good trade-off is necessary. Experiments demonstrating the power of this novel approach on a real-world scanner are presented in [60].

  • For laser engraving [61] and 3D printing [20], we require nozzle trajectories based on our (continuous) input densities. Compared to the approach in [20], where points given by Lloyd’s method are connected as a solution of the TSP (traveling salesman problem), our method jointly selects the points and the corresponding curve. This avoids solving a TSP, which can be quite costly, although efficient approximations exist. Further, it is not obvious that the fixed initial point approximation is a good starting point for constructing a curve.

  • The model can be used for wire sculpture creation [2]. In view of this, our numerical experiment presented in Fig. 5 can be interpreted as a building plan for a wire sculpture of the Spock head, namely of a 2D surface. Clearly, the approach can also be used to create images similar to TSP Art [54], where images are created from points by solving the corresponding TSP.

  • In a more manifold related setting, the approach can be used for grand tour computation on \({\mathcal {G}}_{2,4}\) [5], see also our numerical experiment in Fig. 11. More technical details are provided in the corresponding section.

Our contribution is two-fold. On the theoretical side, we provide estimates of the approximation rates in terms of the maximal speed of the curve. First, we prove approximation rates for general probability measures on compact Ahlfors d-regular length spaces \({\mathbb {X}}\). These spaces include many compact sets in the Euclidean space \({\mathbb {R}}^d\), e.g., the unit ball or the unit cube, as well as d-dimensional compact Riemannian manifolds without boundary. The basic idea consists in combining the known convergence rates for approximation by atomic measures with cost estimates for the traveling salesman problem. As for point measures, the approximation rate \(L^{-d/(2d-2)} \le L^{-1/2}\) for general \(\omega \in {\mathcal P} ([0,1])\) and \(L^{-d/(3d-2)} \le L^{-1/3}\) for \(\omega = \lambda \) in terms of the maximal Lipschitz constant (speed) L of the curves does not crucially depend on the dimension of \({\mathbb {X}}\). In particular, the second estimate improves a result given in [18] for the torus.

If the measures fulfill additional smoothness properties, these estimates can be improved on compact, connected, d-dimensional Riemannian manifolds without boundary. Our results are formulated for absolutely continuous measures (with respect to the Riemannian measure) having densities in the Sobolev space \(H^s({\mathbb {X}})\), \(s> d/2\). In this setting, the optimal approximation rate becomes, roughly speaking, \(L^{-s/(d-1)}\). Our proofs rely on a general result of Brandolini et al. [13] on the quadrature error achievable by integration with respect to a measure that exactly integrates all eigenfunctions of the Laplace–Beltrami operator with eigenvalues smaller than a fixed number. Hence, we need to construct measures supported on curves that fulfill the above exactness criterion. More precisely, we construct such curves for the d-dimensional torus \({\mathbb {T}}^d\), the spheres \({\mathbb {S}}^d\), the rotation group \(\mathrm{SO}(3)\) and the Grassmannian \({\mathcal {G}}_{2,4}\).

On the numerical side, we are interested in finding (local) minimizers of discrepancies between a given continuous measure and those from the set of push-forward measures of the Lebesgue measure by bounded Lipschitz curves. This problem is tackled numerically on \({\mathbb {T}}^2\), \({\mathbb {T}}^3\), \({\mathbb {S}}^2\) as well as \(\mathrm{SO}(3)\) and \({\mathcal {G}}_{2,4}\) by switching to the Fourier domain. The minimizers are computed using the method of conjugate gradients (CG) on manifolds, which incorporates second order information in the form of multiplications with the Hessian. Thanks to the approach in the Fourier domain, the required gradients and the calculations involving the Hessian can be performed efficiently by fast Fourier transform techniques at arbitrary nodes on the respective manifolds. Note that in contrast to our approach, semi-continuous OT minimization relies on Laguerre tessellations [41], which are not available in the required form on the 2-sphere, \(\mathrm{SO}(3)\) or \({\mathcal {G}}_{2,4}\).

This paper is organized as follows: In Sect. 2 we give the necessary preliminaries on probability measures. In particular, we introduce the different sets of measures supported on Lipschitz curves that are used for the approximation. Note that measures supported on continuous curves of finite length can be equivalently characterized by push-forward measures of probability measures by Lipschitz curves. Section 3 provides the notation on reproducing kernel Hilbert spaces and discrepancies including their representation in the Fourier domain. Section 4 contains our estimates of the approximation rates for general given measures and different approximation spaces of measures supported on curves. Following the usual lines in approximation theory, we are then concerned with the approximation of absolutely continuous measures with density functions lying in Sobolev spaces. Our main results on the approximation rates of smoother measures are contained in Sect. 5, where we distinguish between the approximation with respect to the push-forward of general measures \(\omega \in {{\mathcal {P}}}[0,1]\), absolutely continuous measures and the Lebesgue measure on [0, 1]. In Sect. 6 we formulate our numerical minimization problem. Our numerical algorithms of choice are briefly described in Sect. 7. For a comprehensive description of the algorithms on the different manifolds, we refer to the respective papers. Section 8 contains numerical results demonstrating the practical feasibility of our findings. Conclusions are drawn in Sect. 9. Finally, Appendix A briefly introduces the different manifolds \({\mathbb {X}}\) used in our numerical examples together with the Fourier representation of probability measures on \({\mathbb {X}}\).

2 Probability Measures and Curves

In this section, the basic notation on measure spaces is provided, see [3, 32], with focus on probability measures supported on curves. At this point, let us assume that

\({\mathbb {X}}\) is a compact metric space endowed with a bounded non-negative Borel measure \(\sigma _{\mathbb {X}}\in {\mathcal {M}} ({\mathbb {X}})\) such that \(\text {supp}(\sigma _{\mathbb {X}})={\mathbb {X}}\). Further, we denote the metric by \({{\,\mathrm{dist}\,}}_{\mathbb {X}}\).

Additional requirements on \({\mathbb {X}}\) are added along the way and notations are explained below. By \({\mathcal {B}}({\mathbb {X}})\) we denote the Borel \(\sigma \)-algebra on \({\mathbb {X}}\) and by \({\mathcal {M}}({\mathbb {X}})\) the linear space of all finite signed Borel measures on \({\mathbb {X}}\), i.e., the space of all \(\mu :{\mathcal {B}}({\mathbb {X}}) \rightarrow {\mathbb {R}}\) satisfying \(\mu ({\mathbb {X}}) < \infty \) and for any sequence \((B_k)_{k \in {\mathbb {N}}} \subset {\mathcal {B}}({\mathbb {X}})\) of pairwise disjoint sets the relation \(\mu (\bigcup _{k=1}^\infty B_k) = \sum _{k=1}^\infty \mu (B_k)\). The support of a measure \(\mu \) is the closed set

$$\begin{aligned} \text {supp}(\mu ) :=\bigl \{ x \in {\mathbb {X}}: B \subset {\mathbb {X}}\text { open, }x \in B \implies \mu (B) >0\bigr \}. \end{aligned}$$

For \(\mu \in {\mathcal {M}}({\mathbb {X}})\) the total variation measure is defined by

$$\begin{aligned} |\mu |(B) :=\sup \biggl \{ \sum _{k=1}^\infty |\mu (B_k)|: \bigcup _{k=1}^\infty B_k = B, \, B_k \; \text{ pairwise } \text{ disjoint }\biggr \}. \end{aligned}$$

With the norm \(\Vert \mu \Vert _{{\mathcal {M}}} = |\mu |({\mathbb {X}})\) the space \({\mathcal {M}}({\mathbb {X}})\) becomes a Banach space. By \({{\mathcal {C}}}({\mathbb {X}})\) we denote the Banach space of continuous real-valued functions on \({\mathbb {X}}\) equipped with the norm \(\Vert \varphi \Vert _{{{\mathcal {C}}}({\mathbb {X}})} :=\max _{x \in {\mathbb {X}}} |\varphi (x)|\). The space \({\mathcal {M}}({\mathbb {X}})\) can be identified via Riesz’ theorem with the dual space of \({\mathcal C}({\mathbb {X}})\) and the weak-\(^*\) topology on \({\mathcal {M}}({\mathbb {X}})\) gives rise to the weak convergence of measures, i.e., a sequence \((\mu _k )_k \subset {\mathcal {M}}({\mathbb {X}})\) converges weakly to \(\mu \) and we write \(\mu _k \rightharpoonup \mu \), if

$$\begin{aligned} \lim _{k \rightarrow \infty } \int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu _k = \int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu \quad \text {for all } \varphi \in {{\mathcal {C}}}({\mathbb {X}}). \end{aligned}$$

For a non-negative, finite measure \(\mu \), let \(L^p({\mathbb {X}},\mu )\) be the Banach space (of equivalence classes) of complex-valued functions with norm

$$\begin{aligned} \Vert f\Vert _{L^p({\mathbb {X}},\mu )} = \left( \int _{\mathbb {X}}|f|^p \,\mathrm {d}\mu \right) ^\frac{1}{p} < \infty . \end{aligned}$$

By \({\mathcal {P}} ({\mathbb {X}})\) we denote the space of Borel probability measures on \({\mathbb {X}}\), i.e., non-negative Borel measures with \(\mu ({\mathbb {X}}) = 1\). This space is weakly compact, i.e., compact with respect to the topology of weak convergence. We are interested in the approximation of measures in \({\mathcal {P}} ({\mathbb {X}})\) by probability measures supported on points and curves in \({\mathbb {X}}\). To this end, we associate with \(x \in {\mathbb {X}}\) a probability measure \(\delta _x\) with values \(\delta _x(B) = 1\) if \(x \in B\) and \(\delta _x(B) = 0\) otherwise.

The atomic probability measures at N points are defined by

$$\begin{aligned} {\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}} ({\mathbb {X}})&:=\biggl \{\sum _{k=1}^N w_k \delta _{x_k}: (x_k)_{k=1}^N \in {\mathbb {X}}^N, \, (w_k)_{k=1}^N \in [0,1]^N,\, \sum _{k=1}^N w_k = 1 \biggr \}. \end{aligned}$$

In other words, \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}} ({\mathbb {X}})\) is the collection of probability measures whose support consists of at most N points. A further restriction to equal mass distribution leads to the empirical probability measures at N points denoted by

$$\begin{aligned} {\mathcal {P}}_N^{{{\,\mathrm{emp}\,}}} ({\mathbb {X}}):=\biggl \{ \frac{1}{N}\sum _{k=1}^N \delta _{x_k}: (x_k)_{k=1}^N \in {\mathbb {X}}^N\biggr \}. \end{aligned}$$

In this work, we are interested in the approximation by measures having their support on curves. Let \({\mathcal {C}}([a,b],{\mathbb {X}})\) denote the set of closed, continuous curves \(\gamma :[a,b]\rightarrow {\mathbb {X}}\). Although our presented experiments involve solely closed curves, some applications might require open curves. Hence, we want to point out that all of our approximation results still hold without this requirement: the upper bounds would not get worse, and we have not used closedness for the lower bounds on the approximation rates. The length of a curve \(\gamma \in {\mathcal {C}}([a,b],{\mathbb {X}}) \) is given by

$$\begin{aligned} \ell (\gamma ):=\sup _{\begin{array}{c} a\le t_0\le \ldots \le t_n\le b\\ n\in {\mathbb {N}} \end{array}} \sum _{k=1}^n {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (t_k),\gamma (t_{k-1})\bigr ). \end{aligned}$$

If \(\ell (\gamma )<\infty \), then \(\gamma \) is called rectifiable. By reparametrization, see [48, Thm. 3.2], the image of any rectifiable curve in \({\mathcal {C}}([a,b],{\mathbb {X}})\) can be realized as the image of a curve from the set of closed Lipschitz continuous curves

$$\begin{aligned} {{\,\mathrm{Lip}\,}}({\mathbb {X}}) {:=} \bigl \{\gamma \in {\mathcal {C}}([0,1],{\mathbb {X}}): \exists L {\in } {\mathbb {R}}\; \text {with} \; {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (s),\gamma (t)\bigr ){\le } L|s-t|\;\forall s,t\in [0,1]\bigr \}. \end{aligned}$$

The speed of a curve \(\gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) is defined a.e. by the metric derivative

$$\begin{aligned} |{\dot{\gamma }}|(t) :=\lim _{s\rightarrow t} \frac{{{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (s),\gamma (t)\bigr )}{|s-t|},\qquad t\in [0,1], \end{aligned}$$

cf. [4, Sec. 1.1]. The optimal Lipschitz constant \(L=L(\gamma )\) of a curve \(\gamma \) is given by \(L(\gamma ) = \bigl \Vert \, |{\dot{\gamma }}| \, \bigr \Vert _{L^\infty ([0,1])}\). For a constant speed curve it holds \(L(\gamma ) = \ell (\gamma )\).
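As a numerical illustration of these definitions (not part of the formal development), the length \(\ell (\gamma )\) and the optimal Lipschitz constant \(L(\gamma )\) of a smooth curve in the Euclidean plane can be approximated by inscribed polygons and difference quotients on a fine uniform partition; the helper names `curve_length` and `max_speed` are ours.

```python
import numpy as np

def curve_length(gamma, n=10_000):
    """Approximate ell(gamma) by the length of an inscribed polygon
    on a uniform partition of [0, 1] (replacing the sup over partitions)."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.asarray([gamma(ti) for ti in t])               # shape (n+1, dim)
    return np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

def max_speed(gamma, n=10_000):
    """Approximate L(gamma) = || |gamma'| ||_{L^inf} by difference quotients."""
    t = np.linspace(0.0, 1.0, n + 1)
    pts = np.asarray([gamma(ti) for ti in t])
    step = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return (step * n).max()

# A constant speed parametrization of the unit circle: L(gamma) = ell(gamma) = 2*pi.
circle = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
```

For the constant speed circle above, both quantities agree with \(2\pi \) up to discretization error, matching \(L(\gamma ) = \ell (\gamma )\).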

We aim to approximate measures in \({\mathcal {P}}({\mathbb {X}})\) from those of the subset

$$\begin{aligned} {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}}):=\bigl \{\nu \in {\mathcal {P}}({\mathbb {X}}) : \exists \gamma \in {\mathcal {C}}([a,b],{\mathbb {X}}),\; \text {supp}(\nu )\subset \gamma ([a,b]),\; \ell (\gamma )\le L\bigr \}. \end{aligned}$$
(1)

This space is quite large and in order to define further meaningful subsets, we derive an equivalent formulation in terms of push-forward measures. For \(\gamma \in {\mathcal {C}}([0,1],{\mathbb {X}})\), the push-forward \(\gamma {_*} \omega \in {{\mathcal {P}}}({\mathbb {X}})\) of a probability measure \(\omega \in {{\mathcal {P}}}([0,1])\) is defined by \(\gamma {_*} \omega (B):=\omega (\gamma ^{-1} (B))\) for \(B \in {\mathcal {B}}({\mathbb {X}})\). We directly observe \(\text {supp}(\gamma {_*} \omega )=\gamma (\text {supp}(\omega ))\). By the following lemma, \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) consists of the push-forward of measures in \({\mathcal {P}}([0,1])\) by constant speed curves.

Lemma 1

The space \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) in (1) is equivalently given by

$$\begin{aligned} {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})= \bigl \{ \gamma {_*} \omega : \gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}}) \text { has constant speed } L(\gamma )\le L ,\; \omega \in {\mathcal {P}}([0,1]) \bigr \}. \end{aligned}$$
(2)

Proof

Let \(\nu \in {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) as in (1). If \(\text {supp}(\nu )\) consists of a single point \(x \in {\mathbb {X}}\) only, then the constant curve \(\gamma \equiv x\) pushes forward an arbitrary \(\omega \in {\mathcal {P}}([0,1])\) to \(\delta _x = \nu \), which shows that \(\nu \) is contained in (2).

Suppose that \(\text {supp}(\nu )\) contains at least two distinct points and let \(\gamma \in {\mathcal {C}}([a,b],{\mathbb {X}})\) with \(\text {supp}(\nu )\subset \gamma ([a,b])\) and \(\ell (\gamma )<\infty \). According to [16, Prop. 2.5.9], there exists a continuous curve \({\tilde{\gamma }} \in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) with constant speed \(\ell (\gamma )\) and a continuous non-decreasing function \(\varphi :[a,b] \rightarrow [0,1]\) with \(\gamma = {\tilde{\gamma }} \circ \varphi \). Now, define \(f:{\tilde{\gamma }}([0,1])\rightarrow [0,1]\) by \(f(x) :=\min \{\tilde{\gamma }^{-1}(x)\}\). This function is measurable, since for every \(t \in [0,1]\) it holds that

$$\begin{aligned} \bigl \{x \in {\tilde{\gamma }}([0,1]): f(x) \le t\bigr \} = \bigl \{ x \in {\tilde{\gamma }}([0,1]): \min \{{\tilde{\gamma }}^{-1} (x) \} \le t \bigr \} = {\tilde{\gamma }}([0,t]) \end{aligned}$$

is compact. Due to \(\text {supp}(\nu )\subset {\tilde{\gamma }}([0,1])\), we can define \(\omega :=f{_*}\nu \in {\mathcal {P}}([0,1])\). By construction, \(\omega \) satisfies \(\tilde{\gamma }{_*} \omega (B)=\omega (\tilde{\gamma }^{-1}(B)) =\nu (f^{-1} \circ \tilde{\gamma }^{-1}(B))= \nu (B)\) for all \(B\in {\mathcal {B}}({\mathbb {X}})\). This concludes the proof. \(\square \)
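Numerically, a push-forward \(\gamma {_*} \omega \) is conveniently handled by sampling: drawing \(t \sim \omega \) on [0, 1] and evaluating \(\gamma (t)\) yields samples distributed according to \(\gamma {_*} \omega \). A minimal sketch (function names are ours; we take \(\omega = \lambda \) and a circle in \({\mathbb {R}}^2\) for concreteness):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_pushforward(gamma, sample_omega, n):
    """Draw n samples from gamma_* omega: sample t ~ omega on [0, 1],
    then map the samples through the curve gamma."""
    t = sample_omega(n)
    return np.asarray([gamma(ti) for ti in t])

# omega = lambda (Lebesgue measure on [0, 1]) pushed forward by a circle;
# the resulting measure gamma_* lambda is the uniform measure on the circle.
circle = lambda t: np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
pts = sample_pushforward(circle, lambda n: rng.uniform(0.0, 1.0, n), 500)
```

All generated points lie on the image of the curve, reflecting \(\text {supp}(\gamma {_*} \omega )=\gamma (\text {supp}(\omega ))\).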

The set \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) contains \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}} ({\mathbb {X}})\) if L is sufficiently large compared to N and \({\mathbb {X}}\) is sufficiently nice, cf. Sect. 4. It is reasonable to ask for more restrictive sets of approximation measures, e.g., when \(\omega \in {\mathcal {P}}([0,1])\) is assumed to be absolutely continuous. For the Lebesgue measure \(\lambda \) on [0, 1], we consider

$$\begin{aligned} {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_{L}({\mathbb {X}}) :=\bigl \{ \gamma {_*} \omega : \gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}}), \; L(\gamma ) \le L, \; \omega = \rho \lambda \in {\mathcal {P}}([0,1]), \, L(\rho ) \le L\bigr \}. \end{aligned}$$

In the literature [18, 61], the special case of push-forward of the Lebesgue measure \(\omega = \lambda \) on [0, 1] by Lipschitz curves in \({\mathbb {T}}^d\) was discussed and successfully used in certain applications [11, 17]. Therefore, we also consider approximations from

$$\begin{aligned} {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}}) :=\bigl \{ \gamma {_*}\lambda : \gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}}), \; L(\gamma ) \le L \bigr \}. \end{aligned}$$

It is obvious that our probability spaces related to curves are nested,

$$\begin{aligned} {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}}) \subset {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}}) \subset {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}}). \end{aligned}$$

Hence, one may expect that establishing good approximation rates is most difficult for \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\) and easiest for \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\).

3 Discrepancies and RKHS

The aim of this section is to introduce the way we quantify the distance (“discrepancy”) between two probability measures. To this end, choose a continuous, symmetric function \(K:{\mathbb {X}}\times {\mathbb {X}}\rightarrow {\mathbb {R}}\) that is positive definite, i.e., for any finite number \(n \in {\mathbb {N}}\) of points \(x_j\in {\mathbb {X}}\), \(j=1,\ldots ,n\), the relation

$$\begin{aligned} \sum _{i,j=1}^n a_i a_j K(x_i,x_j) \ge 0 \end{aligned}$$

is satisfied for all \(a_j\in {\mathbb {R}}\), \(j=1,\ldots ,n\). We know by Mercer’s theorem [23, 63, 76] that there exists an orthonormal basis \(\{\phi _k: k \in {\mathbb {N}}_0\}\) of \(L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) and non-negative coefficients \((\alpha _k)_{k \in {\mathbb {N}}_0} \in \ell _1\) such that K has the Fourier expansion

$$\begin{aligned} K(x,y) = \sum _{k=0}^\infty \alpha _k \phi _k(x)\overline{\phi _k(y)} \end{aligned}$$
(3)

with absolute and uniform convergence of the right-hand side. If \(\alpha _k > 0\) for some \(k \in {\mathbb {N}}_0\), the corresponding function \(\phi _k\) is continuous. Every function \(f\in L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) has a Fourier expansion

$$\begin{aligned} f = \sum _{k=0}^\infty {\hat{f}}_k \phi _k, \quad {\hat{f}}_k :=\int _{{\mathbb {X}}} f \overline{\phi _k} \,\mathrm {d}\sigma _{\mathbb {X}}. \end{aligned}$$

The kernel K gives rise to a reproducing kernel Hilbert space (RKHS). More precisely, the function space

$$\begin{aligned} H_{K} ({\mathbb {X}}) :=\Bigl \{f \in L^2({\mathbb {X}},\sigma _{\mathbb {X}}): \sum _{k=0}^\infty \alpha _k^{-1} |{\hat{f}}_k|^2 < \infty \Bigr \} \end{aligned}$$

equipped with the inner product and the corresponding norm

$$\begin{aligned} \langle f,g \rangle _{H_{K} ({\mathbb {X}})} = \sum _{k=0}^\infty \alpha _k^{-1} {\hat{f}}_k \overline{{\hat{g}}_k}, \qquad \Vert f\Vert _{H_{K} ({\mathbb {X}})} = \sqrt{\langle f,f \rangle _{H_{K} ({\mathbb {X}})}} \end{aligned}$$
(4)

forms a Hilbert space with reproducing kernel, i.e.,

$$\begin{aligned} K (x,\cdot ) \in H_{K} ({\mathbb {X}}) \qquad \qquad&\text{ for } \text{ all } \; x \in {\mathbb {X}}, \\ f(x) = \bigl \langle f, K (x,\cdot ) \bigr \rangle _{H_{K} ({\mathbb {X}})} \qquad&\text{ for } \text{ all } \; f \in H_{K} ({\mathbb {X}}), \; x \in {\mathbb {X}}. \end{aligned}$$

Note that \(f\in H_K({\mathbb {X}})\) implies \(\hat{f}_k=0\) if \(\alpha _k=0\), in which case we make the convention \(\alpha _k^{-1} {\hat{f}}_k=0\) in (4). The space \(H_{K} ({\mathbb {X}})\) is the closure of the linear span of \(\{ K (x_j,\cdot ): x_j \in {\mathbb {X}}\}\) with respect to the norm (4), and \(H_{K} ({\mathbb {X}})\) is continuously embedded in \({{\mathcal {C}}}({\mathbb {X}})\). In particular, the point evaluations in \(H_{K} ({\mathbb {X}})\) are continuous.
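The mechanics of the reproducing property can be checked directly for a truncated Mercer kernel: the Fourier coefficients of \(K(x,\cdot )\) are \(\alpha _j \phi _j(x)\), so the inner product (4) collapses to the Fourier series of f. A small sketch with a real trigonometric basis on \({\mathbb {T}}^1\) (the truncation level M and all coefficient choices are ours):

```python
import numpy as np

M = 8                                # truncation level of the Mercer expansion
def phi(j, x):
    """Real orthonormal basis of L^2(T^1): 1, sqrt(2)cos(2 pi k x),
    sqrt(2)sin(2 pi k x), enumerated by j = 0, 1, 2, ..."""
    if j == 0:
        return 1.0
    k = (j + 1) // 2
    return np.sqrt(2) * (np.cos if j % 2 == 1 else np.sin)(2 * np.pi * k * x)

alpha = np.array([1.0 / (1 + j) ** 2 for j in range(2 * M + 1)])   # kernel weights
f_hat = np.array([0.5, 0.3, 0.0, -0.2] + [0.0] * (2 * M - 3))      # some f in H_K

f = lambda x: sum(f_hat[j] * phi(j, x) for j in range(2 * M + 1))

# The Fourier coefficients of K(x0, .) are alpha_j * phi_j(x0); inserting them
# into the inner product (4) reproduces f(x0).
x0 = 0.37
inner = sum(f_hat[j] / alpha[j] * (alpha[j] * phi(j, x0)) for j in range(2 * M + 1))
```

Up to floating-point rounding, `inner` agrees with `f(x0)`, mirroring \(f(x) = \langle f, K(x,\cdot )\rangle _{H_K({\mathbb {X}})}\).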

The discrepancy \({\mathscr {D}}_K(\mu ,\nu )\) is defined as the norm of the linear functional \(T:H_K({\mathbb {X}}) \rightarrow {\mathbb {C}}\), \(\varphi \mapsto \int _{{\mathbb {X}}} \varphi \,\mathrm {d}(\mu - \nu )\), in the dual space of \(H_{K}({\mathbb {X}})\):

$$\begin{aligned} {\mathscr {D}}_K(\mu ,\nu ) = \max _{\Vert \varphi \Vert _{ H_{K} ({\mathbb {X}}) } \le 1} \Bigl |\int _{{\mathbb {X}}} \varphi \,\mathrm {d}(\mu - \nu ) \Bigr | , \end{aligned}$$
(5)

see [40, 67]. Note that this looks similar to the 1-Wasserstein distance, whose larger space of test functions consists of the Lipschitz continuous functions. Since

$$\begin{aligned} \int _{\mathbb {X}}\varphi \,\mathrm {d}\mu = \int _{\mathbb {X}}\bigl \langle \varphi , K(x,\cdot ) \bigr \rangle _{H_K({\mathbb {X}})} \,\mathrm {d}\mu (x) = \Bigl \langle \varphi , \int _{\mathbb {X}}K(x,\cdot ) \,\mathrm {d}\mu (x) \Bigr \rangle _{H_K({\mathbb {X}})}, \end{aligned}$$

we obtain by Riesz’s representation theorem

$$\begin{aligned} \max _{\Vert \varphi \Vert _{ H_{K} ({\mathbb {X}}) } \le 1} \int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu = \Bigl \Vert \int _{\mathbb {X}}K(x,\cdot ) \,\mathrm {d}\mu (x)\Bigr \Vert _{H_{K}({\mathbb {X}})}, \end{aligned}$$

which yields by Fubini’s theorem, (3), (4) and symmetry of K that

$$\begin{aligned} {\mathscr {D}}^2_K(\mu ,\nu ) =&\iint \limits _{{\mathbb {X}}\times {\mathbb {X}}} K \,\mathrm {d}\mu \,\mathrm {d}\mu - 2\iint \limits _{{\mathbb {X}}\times {\mathbb {X}}} K\,\mathrm {d}\mu \,\mathrm {d}\nu +\iint \limits _{{\mathbb {X}}\times {\mathbb {X}}} K\,\mathrm {d}\nu \,\mathrm {d}\nu \end{aligned}$$
(6)
$$\begin{aligned} =&\sum _{k=0}^\infty \alpha _k | {\hat{\mu }}_{k}-{\hat{\nu }}_{k} |^2, \end{aligned}$$
(7)

where the Fourier coefficients of \(\mu , \nu \in \mathcal P({\mathbb {X}})\) are well-defined for k with \(\alpha _k\ne 0\) by

$$\begin{aligned} {\hat{\mu }}_k:=\int _{{\mathbb {X}}}\overline{\phi _k} \,\mathrm {d}\mu ,\qquad {\hat{\nu }}_k:=\int _{{\mathbb {X}}}\overline{\phi _k} \,\mathrm {d}\nu . \end{aligned}$$
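In practice, formula (7) reduces the evaluation of \({\mathscr {D}}_K\) to a weighted \(\ell _2\) distance between Fourier coefficients. The following sketch is a toy example of ours on \({\mathbb {T}}^1\) with \(\phi _k(x) = \text {e}^{2 \pi \text {i} k x}\), truncation \(|k| \le 32\) and weights \(\alpha _k = (1+k^2)^{-2}\), comparing a Dirac measure with the uniform measure:

```python
import numpy as np

def discrepancy(alpha, mu_hat, nu_hat):
    """Discrepancy via the Fourier-domain formula (7):
    D_K^2(mu, nu) = sum_k alpha_k |mu_hat_k - nu_hat_k|^2."""
    diff = np.abs(np.asarray(mu_hat) - np.asarray(nu_hat)) ** 2
    return float(np.sqrt(np.sum(np.asarray(alpha) * diff)))

k = np.arange(-32, 33)
alpha = (1.0 + k.astype(float) ** 2) ** -2.0
x0 = 0.3
mu_hat = np.exp(-2j * np.pi * k * x0)      # Fourier coefficients of delta_{x0}
nu_hat = (k == 0).astype(complex)          # uniform measure: only k = 0 survives
d = discrepancy(alpha, mu_hat, nu_hat)
```

Since \(|{\hat{\mu }}_k| = 1\) for all k while \({\hat{\nu }}_k\) vanishes for \(k \ne 0\), the discrepancy here equals \(\bigl (\sum _{k\ne 0} \alpha _k\bigr )^{1/2}\), independently of \(x_0\).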

Remark 1

The Fourier coefficients \({\hat{\mu }}_{k}\) and \({\hat{\nu }}_{k}\) depend on both K and \(\sigma _{\mathbb {X}}\), but the identity (6) shows that \({\mathscr {D}}_K(\mu ,\nu )\) only depends on K. Thus, our approximation rates do not depend on the choice of \(\sigma _{\mathbb {X}}\). On the other hand, our numerical algorithms in Sect. 7 depend on \(\phi _k\) and hence on the choice of \(\sigma _{\mathbb {X}}\).

If \(\mu _n \rightharpoonup \mu \) and \(\nu _n \rightharpoonup \nu \) as \(n\rightarrow \infty \), then also \(\mu _n \otimes \nu _n \rightharpoonup \mu \otimes \nu \). Therefore, the continuity of K implies that \(\lim _{n \rightarrow \infty } {\mathscr {D}}_K(\mu _n,\nu _n) = {\mathscr {D}}_K(\mu ,\nu )\), so that \({\mathscr {D}}_K\) is continuous with respect to weak convergence in both arguments. Thus, for any weakly compact subset \(P\subset {\mathcal {P}}({\mathbb {X}})\), the infimum

$$\begin{aligned} \inf _{\nu \in P} {\mathscr {D}}_K(\mu ,\nu ) \end{aligned}$$

is actually a minimum. All of the subsets introduced in the previous section are weakly compact.

Lemma 2

The sets \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\), \({\mathcal {P}}_N^{{{\,\mathrm{emp}\,}}}({\mathbb {X}})\), \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\), \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})\), and \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\) are weakly compact.

Proof

It is well-known that \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\) and \({\mathcal {P}}_N^{{{\,\mathrm{emp}\,}}}({\mathbb {X}}) \) are weakly compact.

We show that \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) is weakly compact. In view of (2), let \((\gamma _k)_{k\in {\mathbb {N}}}\) be Lipschitz curves with constant speed \(L(\gamma _k)\le L\) and \((\omega _k)_{k\in {\mathbb {N}}} \subset {{\mathcal {P}}}([0,1])\). Since \({\mathcal P}([0,1])\) is weakly compact, we can extract a subsequence \((\omega _{k_j})_{j\in {\mathbb {N}}}\) with weak limit \({{\hat{\omega }}} \in {\mathcal P}([0,1])\). Now, we observe that \( {{\,\mathrm{dist}\,}}_{\mathbb {X}}( \gamma _{k_j} (s), \gamma _{k_j} (t)) \le L |s-t| \) for all \(j\in {\mathbb {N}}\). Since \({\mathbb {X}}\) is compact, the Arzelà–Ascoli theorem implies that there exists a subsequence of \((\gamma _{k_j})_{j\in {\mathbb {N}}}\) which converges uniformly to \({{\hat{\gamma }}}\in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) with \(L({\hat{\gamma }})\le L\). The uniform convergence of the curves together with the weak convergence of the measures implies \((\gamma _{k_j}){_*}\omega _{k_j} \rightharpoonup {\hat{\gamma }}{_*}{\hat{\omega }}\). Moreover, \({{\hat{\nu }}}:={\hat{\gamma }}{_*}{\hat{\omega }}\) fulfills \(\text {supp}({\hat{\nu }})\subset {\hat{\gamma }}([0,1])\), so that \({\hat{\nu }}\in {{\mathcal {P}}}_{L}^{{{\,\mathrm{curv}\,}}}({\mathbb {X}})\) by (1). Thus, \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) is weakly compact.

The proof for \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})\) and \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\) is analogous and hence omitted. \(\square \)

Remark 2

(Discrepancies and Convolution Kernels) Let \({\mathbb {X}}= {\mathbb {T}}^d :={\mathbb {R}}^d / {\mathbb {Z}}^d\) be the torus and \(h \in {\mathcal {C}}({\mathbb {T}}^d)\) be a function with Fourier series

$$\begin{aligned} h(x) = \sum _{k \in {\mathbb {Z}}^d} {\hat{h}}_k \text {e}^{2 \pi \text {i}\langle k,x\rangle }, \quad {\hat{h}}_k :=\int _{{\mathbb {T}}^d} h(x) \text {e}^{- 2 \pi \text {i}\langle k,x\rangle } \,\mathrm {d}\sigma _{{\mathbb {T}}^d}(x), \end{aligned}$$

which converges in \(L^2({\mathbb {T}}^d)\) so that \(\sum _k |{\hat{h}}_k|^2 < \infty \).

Assume that \({\hat{h}}_k \not = 0\) for all \(k \in {\mathbb {Z}}^d\). We consider the special Mercer kernel

$$\begin{aligned} K(x,y) :=\sum _{k \in {\mathbb {Z}}^d} |{\hat{h}}_k|^2 \text {e}^{2 \pi \text {i}\langle k,x-y\rangle } = \sum _{k \in {\mathbb {Z}}^d} |{\hat{h}}_k|^2 \cos \bigl (2 \pi \langle k,x-y\rangle \bigr ) \end{aligned}$$

with associated discrepancy \({\mathscr {D}}_h\) via (6), i.e., \(\phi _k(x) = \text {e}^{2 \pi \text {i}\langle k,x\rangle }\), \(\alpha _k = |{\hat{h}}_k|^2\), \(k \in {\mathbb {Z}}^d\) in (3). The convolution of h with \(\mu \in {{\mathcal {M}}}({\mathbb {T}}^d)\) is the function \(h * \mu \in C({\mathbb {T}}^d)\) defined by

$$\begin{aligned} (h * \mu ) (x) :=\int _{{\mathbb {T}}^d} h(x-y) \,\mathrm {d}\mu (y). \end{aligned}$$

By the convolution theorem for Fourier transforms it holds \(\widehat{(h * \mu )}_k = {\hat{h}}_k {{\hat{\mu }}}_k\), \(k \in {\mathbb {Z}}^d\), and we obtain by Parseval’s identity for \(\mu ,\nu \in \mathcal M({\mathbb {T}}^d)\) and (7) that

$$\begin{aligned} \Vert h*(\mu - \nu )\Vert _{L^2({\mathbb {T}}^d)}^2&= \bigl \Vert \bigl ( {\hat{h}}_k \, ({{\hat{\mu }}}_k - {{\hat{\nu }}}_k) \bigr )_{k \in \mathbb Z^d}\bigr \Vert _{\ell _2}^2 = \sum _{k \in {\mathbb {Z}}^d} |{\hat{h}}_k|^2 |\hat{\mu }_k - {{\hat{\nu }}}_k|^2 = {\mathscr {D}}_h^2(\mu ,\nu ). \end{aligned}$$

In image processing, metrics of this kind were considered in [18, 33, 77].
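On the torus, the identity above makes \({\mathscr {D}}_h\) cheap to evaluate for empirical measures: one truncates the Fourier series and compares coefficients. A hedged sketch on \({\mathbb {T}}^1\) (the kernel \({\hat{h}}_k = (1+k^2)^{-1}\), the truncation \(|k| \le 64\) and the function names are our choices):

```python
import numpy as np

def emp_fourier(points, k):
    """Fourier coefficients of the empirical measure (1/N) sum_j delta_{x_j} on T^1."""
    points = np.asarray(points)
    return np.exp(-2j * np.pi * np.outer(k, points)).mean(axis=1)

def torus_discrepancy(h_hat, k, x, y):
    """D_h(mu, nu) = || h * (mu - nu) ||_{L^2(T^1)} for empirical mu, nu,
    evaluated with the Fourier identity of Remark 2 (truncated to the given k)."""
    diff = emp_fourier(x, k) - emp_fourier(y, k)
    return float(np.sqrt(np.sum(np.abs(h_hat) ** 2 * np.abs(diff) ** 2)))

k = np.arange(-64, 65)
h_hat = 1.0 / (1.0 + k.astype(float) ** 2)    # a smooth convolution kernel h
x = np.array([0.1, 0.4, 0.7])
d0 = torus_discrepancy(h_hat, k, x, x)        # identical measures: zero
```

As expected from (7), the discrepancy vanishes for identical point sets, is symmetric and is positive for distinct ones.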

Remark 3

(Relations to Principal Curves) A related concept, sharing with our paper the common theme of “a curve which passes through the middle of a distribution”, is that of principal curves. The notion of principal curves has been developed in a statistical framework and was successfully applied in statistics and machine learning, see [38, 55, 57]. The idea is to generalize the concept of principal components with just one direction to so-called self-consistent (principal) curves. In the seminal paper [49], the authors showed that these principal curves \(\gamma \) are critical points of the energy functional

$$\begin{aligned} E(\gamma ,\mu ) = \int _{{\mathbb {X}}} \Vert x - \mathrm {proj}_\gamma (x) \Vert ^2_2 \mathrm d \mu (x), \end{aligned}$$
(8)

where \(\mu \) is a given probability measure on \({\mathbb {X}}\) and \(\mathrm {proj}_\gamma (x) = \mathrm {argmin}_{y \in \gamma } \Vert x - y\Vert _2\) is a projection of a point \(x \in {\mathbb {X}}\) onto \(\gamma \). This notion has also been generalized to Riemannian manifolds in [50], see also [57] for an application on the sphere. Further investigation of principal curves in the plane, cf. [27], showed that self-consistent curves are not (local) minimizers, but saddle points of (8). Moreover, the existence of such curves is established only for certain classes of measures, such as elliptical ones. By additionally constraining the length of the curves minimizing (8), these unfavorable effects were eliminated, cf. [55]. In comparison to the objective (8), the discrepancy (6) averages, for fixed \(x \in {\mathbb {X}}\), the distance encoded by K over all points of \(\gamma \), instead of averaging the squared minimal distances to \(\gamma \).
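For comparison with the discrepancy used in this paper, the principal curve energy (8) can be estimated by Monte Carlo once the curve is densely discretized; each sample is matched to its nearest discretization point. A minimal sketch (discretization level and names are ours):

```python
import numpy as np

def principal_curve_energy(curve_pts, samples):
    """Monte Carlo estimate of E(gamma, mu) in (8): average of the squared
    distance from each sample to its nearest point on the discretized curve."""
    d2 = ((samples[:, None, :] - curve_pts[None, :, :]) ** 2).sum(axis=-1)
    return float(d2.min(axis=1).mean())

# Discretize the unit circle; samples drawn from the uniform measure on the
# circle then have (numerically) vanishing energy w.r.t. the circle itself.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
circle_pts = np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=1)

rng = np.random.default_rng(1)
ang = rng.uniform(0.0, 1.0, 200)
samples = np.stack([np.cos(2 * np.pi * ang), np.sin(2 * np.pi * ang)], axis=1)
energy = principal_curve_energy(circle_pts, samples)
```

This makes the contrast explicit: (8) only sees the nearest curve point, whereas (6) averages the kernel over the whole curve.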

4 Approximation of General Probability Measures

Given \(\mu \in {\mathcal {P}}({\mathbb {X}})\), the estimates

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})} {\mathscr {D}}_K(\mu ,\nu ) \le \min _{\nu \in {\mathcal {P}}_N^{{{\,\mathrm{emp}\,}}}({\mathbb {X}})} {\mathscr {D}}_K(\mu ,\nu ) \lesssim N^{-\frac{1}{2}}, \end{aligned}$$
(9)

are well-known, cf. [43, Cor. 2.8]. Here, the constant hidden in \(\lesssim \) depends on \({\mathbb {X}}\) and K but is independent of \(\mu \) and \(N\in {\mathbb {N}}\). In this section, we are interested in approximation rates with respect to measures supported on curves.

Our approximation rates for \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\) are based on those for \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\) combined with estimates for the traveling salesman problem (TSP). Let \({{\,\mathrm{TSP}\,}}_{{\mathbb {X}}}(N)\) denote the worst-case cost of a minimal tour in a fully connected graph G of N arbitrary nodes represented by \(x_1,\ldots ,x_N\in {\mathbb {X}}\) and edges with cost \({{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_i,x_j)\), \(i,j=1,\ldots ,N\). Similarly, let \({{\,\mathrm{MST}\,}}_{{\mathbb {X}}}(N)\) denote the worst-case cost of a minimal spanning tree of G. To derive suitable estimates, we require that \({\mathbb {X}}\) is Ahlfors d-regular (sometimes also called Ahlfors–David d-regular), i.e., there exists \(0<d<\infty \) such that

$$\begin{aligned} \sigma _{\mathbb {X}}\bigl (B_r(x)\bigr ) \sim r^d, \quad \text {for all } x\in {\mathbb {X}},\quad 0<r\le \mathrm{diam}({\mathbb {X}}), \end{aligned}$$
(10)

where \(B_r(x)=\{y\in {\mathbb {X}}: {{\,\mathrm{dist}\,}}_{{\mathbb {X}}}(x,y)\le r\}\) and the constants in \(\sim \) do not depend on x or r. Note that d is not required to be an integer and turns out to be the Hausdorff dimension. For \({\mathbb {X}}\) being the unit cube the following lemma was proved in [75].

Lemma 3

If \({\mathbb {X}}\) is a compact Ahlfors d-regular metric space, then there is a constant \(0<C_{{{\,\mathrm{TSP}\,}}}<\infty \) depending on \({\mathbb {X}}\) such that

$$\begin{aligned} {{\,\mathrm{TSP}\,}}_{{\mathbb {X}}}(N) \le C_{{{\,\mathrm{TSP}\,}}} N^{1-\frac{1}{d}}. \end{aligned}$$

Proof

Using (10) and the same covering argument as in [74, Lem. 3.1], we see that for every choice \(x_1,\ldots ,x_N\in {\mathbb {X}}\), there exist \(i\ne j\) such that \({{\,\mathrm{dist}\,}}_{{\mathbb {X}}}(x_i,x_j)\lesssim N^{-1/d}\), where the constant depends on \({\mathbb {X}}\).

Let \(S = \{x_1,\ldots ,x_N\}\) be an arbitrary selection of N points from \({\mathbb {X}}\). First, we choose \(x_i\) and \(x_j\) with \({{\,\mathrm{dist}\,}}_{{\mathbb {X}}}(x_i,x_j)\le c N^{-1/d}\). Then, we form a minimal spanning tree T of \(S \setminus \{x_{i}\}\) and augment the tree by adding the edge between \(x_i\) and \(x_j\). This construction provides us with a spanning tree and hence we can estimate \({{\,\mathrm{MST}\,}}_{{\mathbb {X}}}(N) \le {{\,\mathrm{MST}\,}}_{{\mathbb {X}}}(N-1) + c N^{-1/d}\). Iterating the argument, we deduce

$$\begin{aligned} {{\,\mathrm{MST}\,}}_{{\mathbb {X}}}(N)\lesssim N^{1-\frac{1}{d}}, \end{aligned}$$

cf. [75]. Finally, the standard relation \({{\,\mathrm{TSP}\,}}_{\mathbb {X}}(N) \le 2 {{\,\mathrm{MST}\,}}_{\mathbb {X}}(N)\) for edge costs satisfying the triangle inequality concludes the proof. \(\square \)
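The relation \({{\,\mathrm{TSP}\,}}_{\mathbb {X}}(N) \le 2 {{\,\mathrm{MST}\,}}_{\mathbb {X}}(N)\) is constructive: a preorder walk of the minimum spanning tree yields a tour of at most twice its cost. The following sketch (our own illustration, for points in the Euclidean unit square) computes the tree with Prim's algorithm and the corresponding tour:

```python
import numpy as np

def mst_prim(points):
    # Prim's algorithm on the complete Euclidean graph; returns parent array
    n = len(points)
    dist = np.full(n, np.inf)
    parent = np.full(n, -1)
    in_tree = np.zeros(n, dtype=bool)
    dist[0] = 0.0
    for _ in range(n):
        i = int(np.argmin(np.where(in_tree, np.inf, dist)))
        in_tree[i] = True
        d = np.linalg.norm(points - points[i], axis=1)
        closer = (~in_tree) & (d < dist)
        parent[closer] = i
        dist[closer] = d[closer]
    return parent

def preorder_tour(points):
    # Preorder walk of the MST: the classic 2-approximation of a TSP tour,
    # valid whenever the edge costs satisfy the triangle inequality
    parent = mst_prim(points)
    children = [[] for _ in points]
    for v, p in enumerate(parent):
        if p >= 0:
            children[p].append(v)
    order, stack = [], [0]
    while stack:
        v = stack.pop()
        order.append(v)
        stack.extend(reversed(children[v]))
    return order
```

Shortcutting the doubled tree edges can only decrease the length by the triangle inequality, which is exactly the relation used in the proof above.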

To derive a curve in \({\mathbb {X}}\) from a minimal cost tour in the graph, we require the additional assumption that \({\mathbb {X}}\) is a length space, i.e., a metric space with

$$\begin{aligned} {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x,y)=\inf \bigl \{ \ell (\gamma ): \gamma \text { a continuous curve that connects }x\text { and }y\bigr \}, \end{aligned}$$

cf. [15, 16]. Thus, for the rest of this section, we are assuming that

\({\mathbb {X}}\) is a compact Ahlfors d-regular length space.

In this case, Lemma 3 yields the next proposition.

Proposition 1

For a compact, Ahlfors d-regular length space \({\mathbb {X}}\) it holds \({\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_N({\mathbb {X}})\subset {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_{C_{{{\,\mathrm{TSP}\,}}} N^{1-1/d}}({\mathbb {X}})\).

Proof

The Hopf-Rinow Theorem for metric measure spaces, see [15, Chap. I.3] and [16, Thm. 2.5.28], yields that every pair of points \(x,y\in {\mathbb {X}}\) can be connected by a geodesic, i.e., there is \(\gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) with constant speed and \(\ell (\gamma |_{[s,t]})={{\,\mathrm{dist}\,}}_{\mathbb {X}}(\gamma (s),\gamma (t))\) for all \(0\le s\le t\le 1\). Thus, for any pair \(x,y\in {\mathbb {X}}\), there is a constant speed curve \(\gamma _{x,y}\in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) of length \(\ell (\gamma _{x,y}) = {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x, y)\) with \(\gamma _{x,y}(0)=x\), \(\gamma _{x,y}(1) = y\), cf. [16, Rem. 2.5.29]. For \(\mu _N\in {\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_N({\mathbb {X}})\), let \(\{x_1,\ldots ,x_N\}=\text {supp}(\mu _N)\). The minimal cost tour in Lemma 3 leads to a curve \(\gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) of length at most \(C_{{{\,\mathrm{TSP}\,}}} N^{1-1/d}\), so that \(\mu _N=\gamma {_*}\omega \in {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_{C_{{{\,\mathrm{TSP}\,}}} N^{1-1/d}}({\mathbb {X}})\) for an appropriate measure \(\omega \in {\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_N([0,1])\). \(\square \)

By Proposition 1 we can transfer approximation rates from \({\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_N({\mathbb {X}})\) to \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\).

Theorem 1

For \(\mu \in {\mathcal {P}}({\mathbb {X}})\), it holds with a constant depending on \({\mathbb {X}}\) and K that

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{d}{2d-2}}. \end{aligned}$$

Proof

Choose \(\alpha = \frac{d-1}{d}\). For L large enough, set \(N :=\lfloor (L/C_{{{\,\mathrm{TSP}\,}}})^{\frac{1}{\alpha }} \rfloor \in {\mathbb {N}}\), so that we observe \({\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_{N}({\mathbb {X}})\subset {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_{L}({\mathbb {X}})\). According to (9), we obtain

$$\begin{aligned} \min _{\nu \in {{\mathcal {P}}}^{{{\,\mathrm{curv}\,}}}_{L} ({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu ) \le \min _{\nu \in {{\mathcal {P}}}^{{{\,\mathrm{atom}\,}}}_{N} ({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu ) \lesssim N^{-\frac{1}{2}} \lesssim L^{-\frac{1}{2\alpha }}. \end{aligned}$$

\(\square \)

Next, we derive approximation rates for \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}}) \) and \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\).

Theorem 2

For \(\mu \in {\mathcal {P}}({\mathbb {X}})\), we have with a constant depending on \({\mathbb {X}}\) and K that

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu )\le \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{d}{3d-2}}. \end{aligned}$$
(11)

Proof

Let \(\alpha = \frac{d-1}{d}\), \(d \ge 2\). For L large enough, set \(N :=\lfloor L^{\frac{2}{2\alpha + 1}} /\mathrm{diam}({\mathbb {X}}) \rfloor \in {\mathbb {N}}\). By (9), there is a set of points \(\{ x_1, \ldots , x_{N } \} \subset {\mathbb {X}}\) such that

$$\begin{aligned} {\mathscr {D}}_K(\mu ,\nu _N) \lesssim N^{-\frac{1}{2}} \lesssim \,L^{-\frac{1}{2\alpha +1}}, \qquad \nu _N :=\frac{1}{N}\sum _{j=1}^N \delta _{x_j}. \end{aligned}$$
(12)

Let these points be ordered as a solution of the corresponding \({{\,\mathrm{TSP}\,}}\). Set \(x_0:=x_N\) and \(\tau _i :={{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_i,x_{i+1})/L\), \(i=0, \ldots , N-1\). Note that

$$\begin{aligned} N \le L^{\frac{2}{2\alpha + 1}} /\mathrm{diam}({\mathbb {X}}) \le L/{{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_i,x_{i+1}), \end{aligned}$$

so that \(\tau _i \le N^{-1}\) for all \(i=0,\ldots , N -1\). We construct a closed curve \(\gamma _{\scriptscriptstyle {L}} :[0,1]\rightarrow {\mathbb {X}}\) that rests in each \(x_i\) for a while and then rushes from \(x_i\) to \(x_{i+1}\). As in the proof of Proposition 1, \({\mathbb {X}}\) being a compact length space enables us to choose \(\gamma _i\in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) with \(\gamma _i(0)=x_i\), \(\gamma _i(1) = x_{i+1}\) and \(L(\gamma _i) = {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_i, x_{i+1})\). For \(i=0,\ldots ,N-1\), we define

$$\begin{aligned} \gamma _{\scriptscriptstyle {L}}(t) :=\left\{ \begin{array}{ll} x_i&{} \,\,\mathrm { for} \; t\in \left[ \frac{i}{N},\frac{i+1}{N}-\tau _i \right) ,\\ \gamma _i\bigl (\frac{1}{\tau _i} \bigl (t-\tfrac{i+1}{N} + \tau _i \bigr ) \bigr ) &{} \,\,\mathrm { for} \; t\in \left[ \frac{i+1}{N}-\tau _i,\frac{i+1}{N}\right) . \end{array} \right. \end{aligned}$$

By construction, \(L(\gamma _{\scriptscriptstyle {L}})\) is bounded by \(\max _i {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_i,x_{i+1})\, \tau _i^{-1} \le L\). Defining the measure \(\nu :=(\gamma _{\scriptscriptstyle {L}}){_*}\lambda \in {{\mathcal {P}}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_{L} ({\mathbb {X}})\), the related discrepancy can be estimated by

$$\begin{aligned} {\mathscr {D}}_K(\mu ,\nu )&= \sup _{\Vert \varphi \Vert _{H_K({\mathbb {X}})}\le 1 } \Big |\int _{\mathbb {X}}\varphi \,\mathrm {d}\mu -\int _0^1 \varphi \circ \gamma _{\scriptscriptstyle {L}} \,\mathrm {d}\lambda \Big | \\&\le {\mathscr {D}}_K(\mu ,\nu _N) + \sup _{\Vert \varphi \Vert _{H_K({\mathbb {X}})}\le 1 } \sum _{i=0}^{N-1} \Big (\tau _i |\varphi (x_i)| + \Big |\int _{\frac{i+1}{N}-\tau _i}^{\frac{i+1}{N}} \varphi \circ \gamma _{\scriptscriptstyle {L}} \,\mathrm {d}\lambda \Big |\Big ). \end{aligned}$$

The relation (12) yields \({\mathscr {D}}_K(\mu ,\nu _N)\le CL^{-\frac{1}{2\alpha +1}}\) with some constant \(C>0\). Since for \(\varphi \in H_K({\mathbb {X}})\) it holds \(\Vert \varphi \Vert _{L^\infty ({\mathbb {X}})} \le C_K\Vert \varphi \Vert _{H_K({\mathbb {X}})}\) with \(C_K:=\sup _{x\in {\mathbb {X}}} \sqrt{K(x,x)}\), we finally obtain by Lemma 3

$$\begin{aligned} {\mathscr {D}}_K(\mu ,\nu )&\le C \, L^{-\frac{1}{2\alpha +1}} + 2 \, C_K\sum _{i=0}^{N-1} \tau _i \le C \, L^{-\frac{1}{2\alpha +1}}+ 2C_K \, C_{{{\,\mathrm{TSP}\,}}}\frac{N^\alpha }{L}\\&\le \bigl (C + 2 C_K \, C_{{{\,\mathrm{TSP}\,}}}/\mathrm{diam}({\mathbb {X}})^\alpha \bigr ) \, L^{-\frac{1}{2\alpha +1}}. \end{aligned}$$

\(\square \)
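The rest-and-rush construction above can be sketched numerically. The following Python snippet (our own illustration; straight segments in the plane stand in for geodesics, and all names are ours) builds \(\gamma _{L}\) with travel times \(\tau _i = {{\,\mathrm{dist}\,}}(x_i,x_{i+1})/L\) exactly as in the proof:

```python
import numpy as np

def rest_and_rush(points, L):
    # Curve gamma_L : [0,1] -> R^2 from the proof of Theorem 2: it rests at
    # each x_i and then moves to x_{i+1} at speed L (straight segments here)
    pts = np.asarray(points, float)
    N = len(pts)
    nxt = np.roll(pts, -1, axis=0)
    tau = np.linalg.norm(nxt - pts, axis=1) / L   # travel times tau_i
    assert np.all(tau <= 1.0 / N), "L too small for this construction"

    def gamma(t):
        i = min(int(t * N), N - 1)
        start_move = (i + 1) / N - tau[i]
        if t < start_move or tau[i] == 0.0:
            return pts[i]                          # rest phase
        s = (t - start_move) / tau[i]              # rush phase, rescaled
        return (1 - s) * pts[i] + s * nxt[i]

    return gamma
```

The push-forward of \(\lambda \) under this curve places mass \(1/N - \tau _i\) on each atom \(x_i\), which is why its discrepancy to \(\nu _N\) is controlled by \(\sum _i \tau _i\).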

Note that many compact sets in \({\mathbb {R}}^d\), such as the unit ball or the unit cube, are compact Ahlfors d-regular length spaces with respect to the Euclidean metric and the normalized Lebesgue measure. Moreover, many compact connected manifolds with or without boundary satisfy these conditions. All assumptions in this section are indeed satisfied for d-dimensional connected, compact Riemannian manifolds without boundary equipped with the Riemannian metric and the normalized Riemannian measure. The latter setting is studied in the subsequent section to refine our investigations on approximation rates.

Remark 4

For \({\mathbb {X}}= {\mathbb {T}}^d\) with \(d\in {\mathbb {N}}\), the estimate

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}}({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{1}{d}}. \end{aligned}$$
(13)

was derived in [18] provided that K satisfies an additional Lipschitz condition, where the constant in (13) depends on d and K. The rate coincides with our rate in (11) for \(d = 2\) and is worse for higher dimensions since \(\frac{d}{3d-2} > \frac{1}{d}\) for all \(d\ge 3\).

5 Approximation of Probability Measures Having Sobolev Densities

To study approximation rates in more detail, we follow the standard strategy in approximation theory and take additional smoothness properties into account. We shall therefore consider \(\mu \) with a density satisfying smoothness requirements. To define suitable smoothness spaces, we make additional structural assumptions on \({\mathbb {X}}\). Throughout the remaining part of this work, we suppose that

\({\mathbb {X}}\) is a d-dimensional connected, compact Riemannian manifold without boundary equipped with the Riemannian metric \({{\,\mathrm{dist}\,}}_{\mathbb {X}}\) and the normalized Riemannian measure \(\sigma _{\mathbb {X}}\).

In the first part of this section, we introduce the necessary background on Sobolev spaces and derive general lower bounds for the approximation rates. Then, we focus on upper bounds in the rest of the section. So far, we only have general upper bounds for \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\). In case of the smaller spaces \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})\) and \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\), we have to restrict to special manifolds \({\mathbb {X}}\) in order to obtain bounds. For a better overview, all theorems related to approximation rates are named accordingly.

5.1 Sobolev Spaces and Lower Bounds

In order to define a smoothness class of functions on \({\mathbb {X}}\), let \(-\varDelta \) denote the (negative) Laplace–Beltrami operator on \({\mathbb {X}}\). It is self-adjoint on \(L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) and has a sequence of non-negative, non-decreasing eigenvalues \((\lambda _k)_{k\in {\mathbb {N}}}\) (with multiplicities) with a corresponding complete orthonormal system of smooth eigenfunctions \(\{\phi _k: k\in {\mathbb {N}}\}\). Every function \(f \in L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) has a Fourier expansion

$$\begin{aligned} f = \sum _{k=0}^\infty {\hat{f}}(k) \phi _k, \qquad {\hat{f}}(k) :=\int _{{\mathbb {X}}} f \overline{ \phi _k}\,\mathrm {d}\sigma _{\mathbb {X}}. \end{aligned}$$

The Sobolev space \(H^s({\mathbb {X}})\), \(s > 0\), is the set of all functions \(f \in L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) with distributional derivative \((I-\varDelta )^{s/2} f \in L^2({\mathbb {X}},\sigma _{\mathbb {X}})\) and norm

$$\begin{aligned} \Vert f\Vert _{H^s({\mathbb {X}})} :=\Vert (I-\varDelta )^{s/2} f \Vert _{L^2({\mathbb {X}},\sigma _{{\mathbb {X}}})} = \Big ( \sum _{k=0}^\infty (1+ \lambda _k)^{s} |{\hat{f}}(k)|^2 \Big )^{\frac{1}{2}}. \end{aligned}$$

For \(s>d/2\), the space \(H^s({\mathbb {X}})\) is continuously embedded into the space of Hölder continuous functions of degree \(s - d/2\), and every function \(f \in H^s({\mathbb {X}})\) has a uniformly convergent Fourier series, see [70, Thm. 5.7]. Actually, \(H^s({\mathbb {X}})\), \(s>d/2\), is a RKHS with reproducing kernel

$$\begin{aligned} K(x,y) :=\sum _{k=0}^\infty (1+\lambda _k)^{-s} \phi _k(x) \overline{\phi _k(y)}. \end{aligned}$$

Hence, the discrepancy \({\mathscr {D}}_K(\mu ,\nu )\) satisfies (5) with \(H_K({\mathbb {X}})=H^s({\mathbb {X}})\). Clearly, each kernel of the above form with coefficients having the same decay as \((1+\lambda _k)^{-s}\) for \(k \rightarrow \infty \) gives rise to a RKHS that coincides with \(H^s({\mathbb {X}})\) with an equivalent norm. Appendix A contains more details of the above discussion for the torus \({\mathbb {T}}^d\), the sphere \({\mathbb {S}}^d\), the special orthogonal group \(\mathrm{SO}(3)\) and the Grassmannian \({\mathcal {G}}_{k,d}\).
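For the torus \({\mathbb {T}}^1\) with the exponentials \(e^{2\pi i k x}\) as eigenfunctions and eigenvalues \(\lambda _k = (2\pi k)^2\) (cf. Appendix A; we normalize \({\mathbb {T}}^1 = [0,1)\) here), the Sobolev norm can be evaluated from equispaced samples. A minimal sketch (our own illustration):

```python
import numpy as np

def sobolev_norm(f_values, s):
    # ||f||_{H^s(T^1)} from equispaced samples via the FFT; the eigenvalues
    # of -Delta on T^1 = [0,1) for e^{2 pi i k x} are lambda_k = (2 pi k)^2
    n = len(f_values)
    fhat = np.fft.fft(f_values) / n           # Fourier coefficients \hat f(k)
    k = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies
    lam = (2 * np.pi * k) ** 2
    return np.sqrt(np.sum((1 + lam) ** s * np.abs(fhat) ** 2))
```

For a trigonometric polynomial the samples determine the coefficients exactly, so the sum is the precise \(H^s\) norm; for general f it is the norm of a trigonometric interpolant.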

Now, we are in the position to establish lower bounds on the approximation rates. Again, we want to remark that our results still hold if we drop the requirement that the approximating curves are closed.

Theorem 3

(Lower bound) For \(s > d/2\) suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Assume that \(\mu \) is absolutely continuous with respect to \(\sigma _{\mathbb {X}}\) with a continuous density \(\rho \). Then, there are constants depending on \({\mathbb {X}}\), K, and \(\rho \) such that

$$\begin{aligned} N^{-\frac{s}{d}}&\lesssim \min _{\nu \in {\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})} {\mathscr {D}}_K(\mu ,\nu )\le \min _{\nu \in {\mathcal {P}}_N^{{{\,\mathrm{emp}\,}}}({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu ),\\ L^{-\frac{s}{d-1}}&\lesssim \min _{\nu \in \mathcal P_L^{{{\,\mathrm{curv}\,}}}({\mathbb {X}})} {\mathscr {D}}_K(\mu ,\nu ) \le \min _{\nu \in \mathcal P_L^{{{\,\mathrm{ a-curv}\,}}}({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu )\le \min _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}}({\mathbb {X}})} {\mathscr {D}}_K (\mu ,\nu ). \end{aligned}$$

Proof

The proof is based on the construction of a suitable fooling function to be used in (5) and follows [13, Thm. 2.16]. There exists a ball \(B\subset {\mathbb {X}}\) with \(\rho (x)\ge \epsilon = \epsilon (B,\rho )\) for all \(x \in B\) and \(\sigma _{\mathbb {X}}(B)>0\), which is chosen as the support of the constructed fooling functions. We shall verify that for every \(\nu \in {{\mathcal {P}}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\) there exists \(\varphi \in H^s({\mathbb {X}})\) such that \(\varphi \) vanishes on \(\text {supp}(\nu )\) but

$$\begin{aligned} \int _{B} \varphi \,\mathrm {d}\mu \gtrsim \Vert \varphi \Vert _{H^s({\mathbb {X}})} N^{-\frac{s}{d}}, \end{aligned}$$
(14)

where the constant depends on \({\mathbb {X}}\), K, and \(\rho \). For small enough \(\delta \) we can choose 2N disjoint balls in B with diameters \(\delta N^{-1/d}\), see also [39]. For \(\nu \in {{\mathcal {P}}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\), there are N of these balls that do not intersect with \(\text {supp}(\nu )\). By putting together bump functions supported on each of the N balls, we obtain a non-negative function \(\varphi \) supported in B that vanishes on \(\text {supp}(\nu )\) and satisfies (14), with a constant that depends on \(\epsilon \), cf. [13, Thm. 2.16]. This yields

$$\begin{aligned} \Bigl |\int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu - \int _{{\mathbb {X}}} \varphi \,\mathrm {d}\nu \Bigr | = \int _{B} \varphi \,\mathrm {d}\mu \gtrsim \Vert \varphi \Vert _{H^s({\mathbb {X}})} N^{-\frac{s}{d}}. \end{aligned}$$

The inequality for \({\mathcal {P}}_L^{{{\,\mathrm{curv}\,}}}({\mathbb {X}})\) is derived in a similar way. Given a continuous curve \(\gamma :[0,1]\rightarrow {\mathbb {X}}\) of length L, choose N such that \(L\le \delta N N^{-1/d}\). By taking half of the radius of the above balls, there are 2N pairwise disjoint balls of radius \(\frac{\delta }{2} N^{-1/d}\) contained in B with pairwise distances at least \(\delta N^{-1/d}\). Any curve of length \(\delta N N^{-1/d}\) intersects at most N of those balls. Hence, there are N balls of radius \(\frac{\delta }{2}N^{-1/d}\) that do not intersect \(\text {supp}(\gamma )\). As above, this yields a fooling function \(\varphi \) satisfying (14), which ends the proof. \(\square \)

5.2 Upper Bounds for \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\)

In this section, we derive upper bounds that match the lower bounds in Theorem 3 for \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\). Our analysis makes use of the following theorem, which was already proved for \({\mathbb {X}}= {\mathbb {S}}^d\) in [51].

Theorem 4

[13, Thm. 2.12] Assume that \(\nu _r \in {{\mathcal {P}}}({\mathbb {X}})\) provides an exact quadrature for all eigenfunctions \(\phi _k\) of the Laplace–Beltrami operator with eigenvalues \(\lambda _k \le r^2\), i.e.,

$$\begin{aligned} \int _{{\mathbb {X}}} \phi _k \,\mathrm {d}\sigma _{\mathbb {X}}= \int _{{\mathbb {X}}} \phi _k \,\mathrm {d}\nu _r. \end{aligned}$$
(15)

Then, for every function \(f \in H^s({\mathbb {X}})\), \(s > d/2\), it holds with a constant depending on \({\mathbb {X}}\) and s that

$$\begin{aligned} \Bigl | \int _{{\mathbb {X}}} f \,\mathrm {d}\sigma _{\mathbb {X}}- \int _{{\mathbb {X}}} f \,\mathrm {d}\nu _r \Bigr | \lesssim r^{-s} \Vert f\Vert _{H^s({\mathbb {X}})}. \end{aligned}$$

For our estimates it is important that the number of eigenfunctions of the Laplace–Beltrami operator on \({\mathbb {X}}\) belonging to eigenvalues with \(\lambda _k \le r^2\) is of order \(r^d\), see [19, Chap. 6.4] and [52, Thm. 17.5.3, Cor. 17.5.8]. This is known as Weyl’s estimates on the spectrum of an elliptic operator. For some special manifolds, the eigenfunctions are explicitly given in the appendix. In the following lemma, the result from Theorem 4 is rewritten in terms of discrepancies and generalized to absolutely continuous measures with densities \(\rho \in H^s({\mathbb {X}})\).

Lemma 4

For \(s>d/2\) suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms and that \(\nu _r\in {\mathcal {P}}({\mathbb {X}})\) satisfies (15). Let \(\mu \in {\mathcal {P}}({\mathbb {X}})\) be absolutely continuous with respect to \(\sigma _{\mathbb {X}}\) with density \(\rho \in H^s({\mathbb {X}})\). For sufficiently large r, the measures \({\tilde{\nu }}_r:=\frac{\rho }{\beta _r} \nu _r \in {\mathcal {P}}({\mathbb {X}})\) with \(\beta _r :=\int _{{\mathbb {X}}} \rho \,\mathrm {d}\nu _r\) are well defined and there is a constant depending on \({\mathbb {X}}\) and K with

$$\begin{aligned} {\mathscr {D}}_K\bigl (\mu ,{\tilde{\nu }}_r\bigr ) \lesssim \Vert \rho \Vert _{H^s({\mathbb {X}})}r^{-s}. \end{aligned}$$

Proof

Note that \(H^s({\mathbb {X}})\) is a Banach algebra with respect to addition and multiplication [22], in particular, for \(f,g \in H^s({\mathbb {X}})\) we have \(fg \in H^s({\mathbb {X}})\) with

$$\begin{aligned} \Vert fg\Vert _{H^s({\mathbb {X}})} \le \Vert f\Vert _{H^s({\mathbb {X}})} \, \Vert g\Vert _{H^s({\mathbb {X}})}. \end{aligned}$$
(16)

By Theorem 4, we obtain for all \(\varphi \in H^s({\mathbb {X}})\) that

$$\begin{aligned} \Big |\int _{{\mathbb {X}}} \varphi \rho \,\mathrm {d}\sigma _{\mathbb {X}}- \int _{\mathbb {X}}\varphi \rho \,\mathrm {d}\nu _r \Big | \lesssim r^{-s} \Vert \varphi \, \rho \Vert _{H^s({\mathbb {X}})} \lesssim r^{-s} \Vert \varphi \Vert _{H^s({\mathbb {X}})}\Vert \rho \Vert _{H^s({\mathbb {X}})}. \end{aligned}$$
(17)

In particular, this implies for \(\varphi \equiv 1\) that

$$\begin{aligned} \big |1 - \beta _r \big | \lesssim r^{-s} \Vert \rho \Vert _{H^s({\mathbb {X}})}. \end{aligned}$$
(18)

Then, application of the triangle inequality results in

$$\begin{aligned}&\Big |\int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu - \int _{\mathbb {X}}\varphi \,\mathrm {d}\tilde{\nu }_r\Big | \le \Big |\int _{{\mathbb {X}}} \varphi \,\mathrm {d}\mu - \int _{\mathbb {X}}\varphi \rho \,\mathrm {d}\nu _r\Big | + \Big |\int _{\mathbb {X}}\varphi \rho \tfrac{\beta _r -1}{\beta _r} \,\mathrm {d}\nu _r \Big |. \end{aligned}$$

According to (17), the first summand is bounded by a constant times \(r^{-s} \Vert \varphi \Vert _{H^s({\mathbb {X}})}\Vert \rho \Vert _{H^s({\mathbb {X}})}\). It remains to derive matching bounds on the second term. Hölder’s inequality yields

$$\begin{aligned} \Big |\int _{\mathbb {X}}\varphi \rho \tfrac{\beta _r -1}{\beta _r} \,\mathrm {d}\nu _r \Big | \lesssim \Vert \varphi \Vert _{L^\infty ({\mathbb {X}})} \left| \beta _r - 1\right| \lesssim \Vert \varphi \Vert _{H^s({\mathbb {X}})} r^{-s}\Vert \rho \Vert _{H^s({\mathbb {X}})}, \end{aligned}$$

where the last inequality is due to \(H^s({\mathbb {X}}) \hookrightarrow L^\infty ({\mathbb {X}})\) and (18). \(\square \)

Using the previous lemma, we derive optimal approximation rates for \({\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})\) and \({\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})\).

Theorem 5

(Upper bounds) For \(s > d/2\) suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Assume that \(\mu \) is absolutely continuous with respect to \(\sigma _{\mathbb {X}}\) with density \(\rho \in H^s({\mathbb {X}})\). Then, there are constants depending on \({\mathbb {X}}\) and K such that

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_N^{{{\,\mathrm{atom}\,}}}({\mathbb {X}})} {\mathscr {D}}_K(\mu ,\nu )&\lesssim \Vert \rho \Vert _{H^s({\mathbb {X}})} N^{-\frac{s}{d}}, \end{aligned}$$
(19)
$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K (\mu ,\nu )&\lesssim \Vert \rho \Vert _{H^s({\mathbb {X}})} L^{-\frac{s}{d-1}}. \end{aligned}$$
(20)

Proof

By [13, Lem. 2.11] and since the Laplace–Beltrami operator has \(N \sim r^d\) eigenfunctions belonging to eigenvalues \(\lambda _k \le r^2\), there exists a measure \(\nu _r\in {\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_{N}({\mathbb {X}})\) that satisfies (15). Hence, (15) is satisfied with \(r\sim N^{1/d}\), where the constants depend on \({\mathbb {X}}\) and K. Thus, Lemma 4 with \({\tilde{\nu }}_r\in {\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_N({\mathbb {X}})\) leads to (19).

The assumptions of Lemma 3 are satisfied, so that analogous arguments as in the proof of Theorem 1 yield \({\mathcal {P}}^{{{\,\mathrm{atom}\,}}}_{N}({\mathbb {X}})\subset {\mathcal {P}}^{{{\,\mathrm{curv}\,}}}_{L}({\mathbb {X}})\) with suitable \(N\sim L^{d/(d-1)}\). Hence, (19) implies (20).

\(\square \)

5.3 Upper Bounds for \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L(\pmb {{\mathbb {X}}})\) and special manifolds \(\pmb {{\mathbb {X}}}\)

To establish upper bounds for the smaller space \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})\), restriction to special manifolds is necessary. The basic idea consists in constructing a curve and a related measure \(\nu _r\) such that all eigenfunctions of the Laplace–Beltrami operator belonging to eigenvalues smaller than a certain value are exactly integrated by this measure, and then applying Lemma 4 to estimate the minimum of the discrepancies. We begin with the torus.

Theorem 6

(Torus) Let \({\mathbb {X}}= {\mathbb {T}}^d\) with \(d\in {\mathbb {N}}\), \(s>d/2\) and suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with Lipschitz continuous density \(\rho \in H^s({\mathbb {X}})\), there exists a constant depending on d, K, and \(\rho \) such that

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu ) \lesssim L^{-\frac{s}{d-1}}. \end{aligned}$$

Proof

1. First, we construct a closed curve \(\gamma _r\) such that the trigonometric polynomials from \({\Pi }_r({\mathbb {T}}^{d})\), see (33) in the appendix, are exactly integrated along this curve. Clearly, the polynomials in \({\Pi }_r(\mathbb T^{d-1})\) are exactly integrated at equispaced nodes \(x_{\varvec{k}} = \frac{{\varvec{k}}}{n}\), \({\varvec{k}}=(k_1,\ldots ,k_{d-1}) \in \mathbb N_0^{d-1}\), \(0 \le k_i \le n-1\), with weights \(1/n^{d-1}\), where \(n :=r+1\). Set \(z(\varvec{k}) :=k_1 + k_2 n + \ldots + k_{d-1} n^{d-2}\) and consider the curves

$$\begin{aligned}\gamma _{\varvec{k}} :I_{\varvec{k}} :=\bigl [ \tfrac{z(\varvec{k})}{n^{d-1}},\tfrac{z(\varvec{k})+1}{n^{d-1}}\bigr ] \rightarrow {\mathbb {T}}^d \quad \text {with}\quad \gamma _{\varvec{k}}(t) :=\begin{pmatrix} x_{\varvec{k}}\\ n^{d-1} t\end{pmatrix}. \end{aligned}$$

Then, each element in \({\Pi }^{d}_r\) is exactly integrated along the union of these curves, i.e., using \(I :=\{0,\ldots ,n-1\}^{d-1}\), we have

$$\begin{aligned} \int _{{\mathbb {T}}^d} p \,\mathrm {d}\sigma _{{\mathbb {T}}^d} = \sum _{\varvec{k} \in I} \int _{I_{\varvec{k}}} p \circ \gamma _{\varvec{k}} \,\mathrm {d}\lambda ,\quad p\in {\Pi }^{d}_r. \end{aligned}$$

The argument is repeated for every other coordinate direction, so that we end up with \(d n^{d-1}\) curves mapping from an interval of length \(\frac{1}{d n^{d-1}}\) to \({\mathbb {T}}^d\). The intersection points of these curves are considered as vertices of a graph, where each vertex has 2d edges. Consequently, there exists an Euler path \(\gamma _r:[0,1] \rightarrow {\mathbb {T}}^d\) through the vertices, built from all curves. It has constant speed \(d n^{d-1}\) and the polynomials \({\Pi }^{d}_r\) are exactly integrated along \(\gamma _r\), i.e.,

$$\begin{aligned} \int _{{\mathbb {T}}^d} p\,\mathrm {d}\sigma _{{\mathbb {T}}^d} = \int _{{\mathbb {T}}^d} p\,\mathrm {d}{\gamma _r}{_*}\lambda ,\quad p\in {\Pi }^{d}_r. \end{aligned}$$

2. Next, we apply Lemma 4 for \(\nu _r={\gamma _r}_{*}\lambda \). We observe \({\tilde{\nu }}_r ={\gamma _r}{_*}((\rho \circ \gamma _r)/{ \beta _r} \lambda )\) and deduce \(L(\rho \circ \gamma _r/\beta _r)\le L(\gamma _r)L(\rho )/{\beta _r}\lesssim r^{d-1}\sim L\) as \(\beta _r\sim 1\). Here, constants depend on d, K, and \(\rho \). \(\square \)
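Step 1 of the proof can be verified numerically for \(d=2\): since the Euler path has constant speed and spends equal time on each grid line, \({\gamma _r}{_*}\lambda \) is the uniform measure on the \(n\) horizontal and \(n\) vertical lines through the nodes j/n, and it integrates every \(e^{2\pi i(k_1x_1+k_2x_2)}\) with \(|k_1|,|k_2|\le r = n-1\) exactly. A sketch (our own illustration; the midpoint sums along each line are exact for these exponentials):

```python
import numpy as np

def grid_line_integral(k1, k2, n, m=512):
    # Integral of exp(2 pi i (k1 x + k2 y)) against the push-forward measure
    # of the Euler-path curve on T^2: the uniform measure on the n horizontal
    # and n vertical grid lines through the equispaced nodes j/n
    t = (np.arange(m) + 0.5) / m               # midpoint rule along one line
    nodes = np.arange(n) / n
    horiz = np.mean([np.mean(np.exp(2j * np.pi * (k1 * t + k2 * y)))
                     for y in nodes])
    vert = np.mean([np.mean(np.exp(2j * np.pi * (k1 * x + k2 * t)))
                    for x in nodes])
    return 0.5 * (horiz + vert)
```

The test below also probes \(k_2 = n\), where aliasing on the n nodes breaks exactness, showing that the degree bound \(r = n-1\) is sharp.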

Now, we provide approximation rates for \({\mathbb {X}}={\mathbb {S}}^d\).

Theorem 7

(Sphere) Let \({\mathbb {X}}= {\mathbb {S}}^d\) with \(d \ge 2\), \(s>d/2\) and suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, we have for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with Lipschitz continuous density \(\rho \in H^s({\mathbb {X}})\) that there is a constant depending on d, K, and \(\rho \) with

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{s}{d-1}}. \end{aligned}$$

Proof

1. First, we construct a constant speed curve \(\gamma _{r}:[0,1]\rightarrow {\mathbb {S}}^{d}\) and a probability measure \(\omega _r = \rho _r \lambda \) with Lipschitz continuous density \(\rho _{r}:[0,1] \rightarrow {\mathbb {R}}_{\ge 0}\) such that for all \(p \in {\Pi }_{r}({\mathbb {S}}^{d})\), it holds

$$\begin{aligned} \int _{{\mathbb {S}}^{d}} p \,\mathrm {d}\sigma _{{\mathbb {S}}^{d}} = \int _{0}^{1} p \circ \gamma _{r} \,\mathrm {d}\omega _{r}. \end{aligned}$$
(21)

Utilizing spherical coordinates

$$\begin{aligned} x_1 = \cos \theta _1, \,\, x_2 = \sin \theta _1 \cos \theta _2, \,\, \ldots \, , \,\, x_{d} = \prod _{j=1}^{d-1} \sin \theta _j \cos \phi ,\,\, x_{d+1} = \prod _{j=1}^{d-1} \sin \theta _j \sin \phi , \end{aligned}$$
(22)

where \(\theta _k \in [0,\pi ]\), \(k=1,\ldots ,d-1\), and \(\phi \in [0,2\pi )\), we obtain

$$\begin{aligned} \int _{{\mathbb {S}}^{d}} p \,\mathrm {d}\sigma _{{\mathbb {S}}^{d}} = \int _{0}^{\pi } c_{d} \sin (\theta _1)^{d-1} \int _{{\mathbb {S}}^{d-1}} p\bigl (\cos (\theta _1),\sin (\theta _1){\tilde{x}}\bigr ) \,\mathrm {d}\sigma _{{\mathbb {S}}^{d-1}}({\tilde{x}}) \,\mathrm {d}\theta _1, \end{aligned}$$
(23)

where \(c_{d} :=(\int _{0}^{\pi } \sin (\theta )^{d-1} \,\mathrm {d}\theta )^{-1} \). There exist nodes \({\tilde{x}}_{i} \in \mathbb S^{d-1}\) and positive weights \(a_{i}\), \(i=1,\dots , n \sim r^{d-1}\), with \(\sum _{i=1}^n a_i = 1\), such that for all \(p \in {\Pi }_{r}({\mathbb {S}}^{d-1})\) it holds

$$\begin{aligned} \int _{{\mathbb {S}}^{d-1}} p \,\mathrm {d}\sigma _{{\mathbb {S}}^{d-1}} = \sum _{i=1}^{n} a_{i} p({\tilde{x}}_{i}). \end{aligned}$$

To see this, substitute \(u_k = \cos \theta _k\), \(k=2,\ldots ,d-1\), apply Gaussian quadrature with \(\lceil (r+1)/2 \rceil \) nodes and corresponding weights to exactly integrate over \(u_k\), and use equispaced nodes with weights \(1/(2r+1)\) for the integration over \(\phi \) as, e.g., in [82]. Then, we define \(\gamma _{r}:[0,1]\rightarrow {\mathbb {S}}^{d}\) for \(t \in [(i-1)/n,i/n]\), \(i=1,\dots ,n\), by

$$\begin{aligned} \gamma _{r}(t) :=\gamma _{r,i}(2\pi n t) , \qquad \gamma _{r,i}(\alpha ) :=\bigl (\cos (\alpha ), \sin (\alpha ) {\tilde{x}}_{i}\bigr ), \quad \alpha \in [0,2\pi ]. \end{aligned}$$

Since \((1,0,\dots ,0) = \gamma _{r,i}(0) = \gamma _{r,i}(2\pi )\) for all \(i=1,\dots ,n\), the curve is closed. Furthermore, \(\gamma _{r}\) has constant speed since, for \(i=1,\dots ,n\),

$$\begin{aligned} | {\dot{\gamma }}_{r}|(t) = | {\dot{\gamma }}_{r,i}|(2 \pi n t) = 2\pi n \sim r^{d-1}. \end{aligned}$$

Next, the density \(\rho _{r}:[0,1]\rightarrow \mathbb {\mathbb {R}}\) is defined for \(t \in [(i-1)/n,i/n]\), \(i=1,\dots ,n\), by

$$\begin{aligned} \rho _{r}(t) :=\rho _{r,i}(2\pi n t) , \qquad \rho _{r,i}(\alpha ) :=a_{i} c_{d} \pi n|\sin (\alpha )|^{d-1}, \qquad \alpha \in [0,2\pi ]. \end{aligned}$$

We directly verify that \( \rho _{r}\) is Lipschitz continuous with \(L(\rho _r) \lesssim \max _{i} a_i n^2\). By [34], the quadrature weights fulfill \(a_i \lesssim \frac{1}{r^{d-1}}\) so that \(L(\rho _r) \lesssim n^2 r^{-(d-1) } \sim r^{d-1}\). By definition of the constant \(c_{d}\) and weights \(a_{i}\), we see that \(\rho _r\) is indeed a probability density

$$\begin{aligned} \int _{0}^{1} \rho _{r} \,\mathrm {d}\lambda&= \sum _{i=1}^{n} \int _{\frac{i-1}{n}}^{\frac{i}{n}} \rho _{r,i}(2\pi n t) \,\mathrm {d}t = \frac{1}{2\pi n} \sum _{i=1}^{n} \int _{0}^{2\pi } \rho _{r,i}(\alpha ) \,\mathrm {d}\alpha \\&= \frac{c_d}{2} \sum _{i=1}^{n} a_{i} \int _{0}^{2\pi } |\sin (\alpha )|^{d-1} \,\mathrm {d}\alpha = 1. \end{aligned}$$
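The normalization argument only uses \(\sum _{i} a_i = 1\) and the definition of \(c_d\). A quick numerical sanity check for \(d=3\) (with hypothetical equal weights \(a_i = 1/n\), since the actual quadrature weights are not constructed explicitly here):

```python
import numpy as np

# Sanity check that rho_r integrates to 1 for d = 3.  The weights a_i are
# hypothetical equal weights 1/n; any nonnegative weights summing to 1 work,
# since the normalization only uses sum_i a_i = 1 and the definition of c_d.
d, n = 3, 8
a = np.full(n, 1.0 / n)

M = 400000                                    # midpoint rule on [0, 1] resp. [0, pi]
theta = (np.arange(M) + 0.5) / M * np.pi
c_d = 1.0 / (np.pi * np.mean(np.sin(theta) ** (d - 1)))  # (int_0^pi sin^{d-1})^{-1}

t = (np.arange(M) + 0.5) / M
i = np.minimum((t * n).astype(int), n - 1)    # segment index: t in [(i-1)/n, i/n]
rho = a[i] * c_d * np.pi * n * np.abs(np.sin(2 * np.pi * n * t)) ** (d - 1)

print(c_d, np.mean(rho))                      # c_3 = 2/pi, and the integral is 1
```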

For \(p \in {\Pi }_{r}({\mathbb {S}}^{d})\), we obtain

$$\begin{aligned}&\int _{0}^{1} p \circ \gamma _{r}\, \rho _{r} \,\mathrm {d}\lambda \\ =&\sum _{i=1}^{n} \int _{\frac{i-1}{n}}^{\frac{i}{n}} p\bigl (\gamma _{r,i}(2 \pi n t)\bigr ) \rho _{r,i}(2\pi n t) \,\mathrm {d}t = \int _{0}^{2\pi } \frac{1}{2\pi n} \sum _{i=1}^{n} p\bigl (\gamma _{r,i}(\alpha )\bigr ) \rho _{r,i}(\alpha ) \,\mathrm {d}\alpha \\ =&\frac{c_{d}}{2}\int _{0}^{2\pi } |\sin (\alpha )|^{d-1} \sum _{i=1}^{n} a_{i} p\bigl (\cos (\alpha ),\sin (\alpha )\tilde{x}_{i}\bigr ) \,\mathrm {d}\alpha \\ =&\frac{c_{d}}{2}\int _{0}^{\pi } |\sin (\alpha )|^{d-1} \sum _{i=1}^{n} a_{i} \Big ( p\bigl (\cos (\alpha ),\sin (\alpha )\tilde{x}_{i}\bigr ) + p\bigl (-\cos (\alpha ),-\sin (\alpha ){\tilde{x}}_{i}\bigr ) \Big ) \,\mathrm {d}\alpha . \end{aligned}$$

Without loss of generality, \(p\) is chosen as a homogeneous polynomial of degree \(k \le r\), i.e., \(p(t x) = t^k p(x)\). Then,

$$\begin{aligned} \int _{0}^{1} p \circ \gamma _{r}\, \rho _{r} \,\mathrm {d}\lambda&= \frac{1+(-1)^{k}}{2} \int _{0}^{\pi } c_{d} |\sin (\alpha )|^{d-1} \sum _{i=1}^n a_{i} p\bigl (\cos (\alpha ),\sin (\alpha )\tilde{x}_{i}\bigr ) \,\mathrm {d}\alpha , \end{aligned}$$

and regarding that for fixed \(\alpha \in [0,2\pi ]\) the function \({\tilde{x}} \mapsto p(\cos (\alpha ), \sin (\alpha ) {\tilde{x}})\) is a polynomial of degree at most r on \({\mathbb {S}}^{d-1}\), we conclude

$$\begin{aligned} \int _{0}^{1} p \circ \gamma _{r}\, \rho _{r} \,\mathrm {d}\lambda =&\frac{1+(-1)^{k}}{2}\int _{0}^{\pi } c_{d} |\sin (\alpha )|^{d-1} \int _{{\mathbb {S}}^{d-1}} p\bigl (\cos (\alpha ),\sin (\alpha )\tilde{x}\bigr ) \,\mathrm {d}\sigma _{{\mathbb {S}}^{d-1}}({\tilde{x}}) \,\mathrm {d}\alpha . \end{aligned}$$

Now, the assertion (21) follows from (23) and from the fact that \(\int _{{\mathbb {S}}^{d}} p \,\mathrm {d}\sigma _{{\mathbb {S}}^{d}}=0\) if k is odd.
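For \(d=2\), the positive quadrature on \({\mathbb {S}}^{1}\) can be taken as \(n \ge r+1\) equispaced nodes with equal weights, and the exactness of the construction can then be checked numerically. The following sketch (illustrative choices of r and n, hypothetical weights \(a_i=1/n\)) compares curve integrals of monomials against the exact moments of \(\sigma _{{\mathbb {S}}^{2}}\):

```python
import numpy as np

# Concrete instance of the curve construction for d = 2 (the sphere S^2):
# equispaced nodes with equal weights on S^1 integrate trigonometric
# polynomials of degree <= r exactly once n >= r + 1.
r, n = 4, 8                                   # illustrative degree; n >= r + 1
beta = 2 * np.pi * np.arange(n) / n
nodes = np.stack([np.cos(beta), np.sin(beta)], axis=1)   # tilde x_i in S^1
a = np.full(n, 1.0 / n)                       # equal quadrature weights
c_d = 0.5                                     # c_2 = (int_0^pi sin)^{-1} = 1/2

M = 400000                                    # midpoint rule on [0, 1]
t = (np.arange(M) + 0.5) / M
i = np.minimum((t * n).astype(int), n - 1)    # segment index of t
alpha = 2 * np.pi * n * t
gamma = np.stack([np.cos(alpha),
                  np.sin(alpha) * nodes[i, 0],
                  np.sin(alpha) * nodes[i, 1]], axis=1)  # gamma_r(t) in S^2
rho = a[i] * c_d * np.pi * n * np.abs(np.sin(alpha))     # rho_r(t) for d = 2

def curve_integral(k):                        # int_0^1 (p o gamma_r) rho_r dlambda
    p = gamma[:, 0] ** k[0] * gamma[:, 1] ** k[1] * gamma[:, 2] ** k[2]
    return float(np.mean(p * rho))

def sphere_integral(k):                       # exact monomial moment of sigma_{S^2}
    if any(kk % 2 for kk in k):
        return 0.0
    df = lambda m: float(np.prod(np.arange(m, 0, -2))) if m > 0 else 1.0
    return df(k[0] - 1) * df(k[1] - 1) * df(k[2] - 1) / df(sum(k) + 1)

for k in [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (2, 2, 0), (3, 1, 0)]:
    print(k, curve_integral(k), sphere_integral(k))
```

Up to the midpoint-rule discretization of the t-integral, the two columns agree for all monomials of degree at most r.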

2. Next, we apply Lemma 4 for \(\nu _r={\gamma _r}_{*} \rho _r\lambda \), from which we obtain that \({\tilde{\nu }}_r={\gamma _r}{_*}((\rho \circ \gamma _r) \rho _r/{\beta _r}\lambda )\). As all \(\rho _r\) are uniformly bounded by construction and \(\rho \) is bounded due to continuity, we conclude using \(L(\rho _r) \lesssim r^{d-1}\) and \(L(\gamma _r) \sim r^{d-1}\) that

$$\begin{aligned}L(\rho \circ \gamma _r \, \rho _r/\beta _r) \le \bigl ( L(\rho \circ \gamma _r) \Vert \rho _r \Vert _\infty + L(\rho _r) \Vert \rho \Vert _\infty \bigr )/\beta _r\lesssim \bigl (L(\rho ) + \Vert \rho \Vert _\infty \bigr ) r^{d-1},\end{aligned}$$

which concludes the proof. \(\square \)

Finally, we derive approximation rates for \({\mathbb {X}}=\mathrm{SO}(3)\).

Corollary 1

(Special orthogonal group) Let \({\mathbb {X}}= \mathrm{SO}(3)\), \(s>3/2\) and suppose \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, we have for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with Lipschitz continuous density \(\rho \in H^s({\mathbb {X}})\) that

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{s}{2}}, \end{aligned}$$

where the constant depends on K and \(\rho \).

Proof

1. For fixed \(L\sim r^2\), we shall construct a curve \(\gamma _{r}:[0,1] \rightarrow \mathrm {SO(3)}\) with \(L(\gamma _r)\lesssim L\) and a probability measure \(\omega _r = \rho _r \lambda \) with density \(\rho _{r}:[0,1] \rightarrow {\mathbb {R}}_{\ge 0}\) and \(L(\rho _{r}) \lesssim L\), such that for all \(p \in {\Pi }_{r}(\mathrm {SO(3)})\)

$$\begin{aligned} \int _{\mathrm{SO}(3)} p \,\mathrm {d}\sigma _{\mathrm{SO}(3)} = \int _{\mathrm{SO}(3)} p \,\mathrm {d}{\gamma _r}{_*}(\rho _r\lambda ). \end{aligned}$$

We use the fact that the sphere \({\mathbb {S}}^{3}\) is a double covering of \(\mathrm {SO(3)}\). That is, there is a surjective two-to-one mapping \(a:{\mathbb {S}}^{3} \rightarrow \mathrm {SO(3)}\) satisfying \(a(x) = a(-x)\), \(x \in {\mathbb {S}}^{3}\). Moreover, we know that \(a:{\mathbb {S}}^{3} \rightarrow \mathrm {SO(3)}\) is a local isometry, see [42], i.e., it respects the Riemannian structures, implying the relations \(\sigma _{\mathrm {SO(3)}} = a_{*} \sigma _{{\mathbb {S}}^{3}}\) and

$$\begin{aligned} {{\,\mathrm{dist}\,}}_{\mathrm {SO(3)}}\bigl (a(x_{1}),a(x_{2})\bigr )&= \min \bigl ( {{\,\mathrm{dist}\,}}_{{\mathbb {S}}^{3}}(x_{1},x_{2}),{{\,\mathrm{dist}\,}}_{\mathbb S^{3}}(x_{1},-x_{2}) \bigr ). \end{aligned}$$

Composition with \(a\) maps \({\Pi }_{r}(\mathrm {SO(3)})\) into \({\Pi }_{2r}({\mathbb {S}}^3)\), i.e., \(p\in {\Pi }_{r}(\mathrm {SO(3)})\) implies \(p\circ a\in {\Pi }_{2r}({\mathbb {S}}^3)\). Now, let \({\tilde{\gamma }}_{r}:[0,1]\rightarrow {\mathbb {S}}^{3}\) and \({\tilde{\omega }}_{r}\) be given as in the first part of the proof of Theorem 7 for \(d=3\), i.e., \({{\tilde{\gamma }}_r}{_*}{\tilde{\omega }}_{r}\) satisfies (21) with \(L({\tilde{\gamma }}_r)\lesssim L\) and \({\tilde{\omega }}_r={\tilde{\rho }}_r\lambda \) with \(L({\tilde{\rho }}_r)\lesssim L\).

We now define a curve \(\gamma _r\) in \(\mathrm{SO}(3)\) by

$$\begin{aligned} \gamma _{r}:[0,1] \rightarrow \mathrm {SO(3)},\qquad \gamma _{r}(t) :=a \circ {\tilde{\gamma }}_{2r}(t), \end{aligned}$$

and let \(\omega _r :=\tilde{\omega }_{2r}\). For \(p\in {\Pi }_r(\mathrm{SO}(3))\), the push-forward measure \({\gamma _r}{_*} \omega _r\) leads to

$$\begin{aligned} \int _{\mathrm {SO(3)}} p \,\mathrm {d}\sigma _{\mathrm {SO(3)}}&= \int _{\mathrm {SO(3)}} p\,\mathrm {d}a{_*}\sigma _{{\mathbb {S}}^3} = \int _{{\mathbb {S}}^3} p \circ a\,\mathrm {d}\sigma _{{\mathbb {S}}^3}\\&= \int _{{\mathbb {S}}^3} p \circ a \,\mathrm {d}{{\tilde{\gamma }}_{2r}}{_*} \tilde{\omega }_{2r} = \int _{\mathrm{SO}(3)} p\,\mathrm {d}{\gamma _{r}}{_*} \omega _{r}. \end{aligned}$$

Hence, property (15) is satisfied for \({\gamma _r}{_*} \omega _r={\gamma _r}{_*} (\tilde{\rho }_{2r}\lambda )\).

2. The rest follows along the lines of step 2. in the proof of Theorem 7. \(\square \)

5.4 Upper Bounds for \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L(\pmb {{\mathbb {X}}})\) and special manifolds \(\pmb {{\mathbb {X}}}\)

To derive upper bounds for the smallest space \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\), we need the following specification of Lemma 4.

Lemma 5

For \(s>d/2\) suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Let \(\mu \in {\mathcal {P}}({\mathbb {X}})\) be absolutely continuous with respect to \(\sigma _{\mathbb {X}}\) with positive density \(\rho \in H^s({\mathbb {X}})\). Suppose that \(\nu _r :={\gamma _{r}}{_*} \lambda \) with \(\gamma _r\in {{\,\mathrm{Lip}\,}}({\mathbb {X}})\) satisfies (15) and let \( \beta _r :=\int _{\mathbb {X}}\rho \,\mathrm {d}\nu _r \). Then, for sufficiently large r,

$$\begin{aligned} g :[0,1]\rightarrow [0,1],\qquad g(t):=\frac{1}{\beta _r} \int _0^t \rho \circ \gamma _r \,\mathrm {d}\lambda \end{aligned}$$

is well-defined and invertible. Moreover, \(\tilde{\gamma }_r :=\gamma _r \circ g^{-1}\) satisfies \(L({\tilde{\gamma }}_r) \lesssim L( \gamma _r)\) and

$$\begin{aligned} {\mathscr {D}}_K(\mu ,{{\tilde{\gamma }}}_r{_*}\lambda ) \lesssim r^{-s}, \end{aligned}$$
(24)

where the constants depend on \({\mathbb {X}}\), K, and \(\rho \).

Proof

Since \(\rho \) is continuous, there is \(\epsilon >0\) with \(\rho \ge \epsilon \). To bound the Lipschitz constant \(L({\tilde{\gamma }}_r)\), we apply the mean value theorem together with the definition of g and the fact that \((g^{-1})'(s) = 1/g'(g^{-1}(s))\) to obtain

$$\begin{aligned} \bigl |\tilde{\gamma }_r(s)-\tilde{\gamma }_r(t)\bigr | \le L(\gamma _r)\bigl |g^{-1}(s)-g^{-1}(t)\bigr | \le L(\gamma _r) \, \frac{\beta _r}{\epsilon } \, |s-t|. \end{aligned}$$

Using (18), this can be further estimated for sufficiently large r as

$$\begin{aligned} \bigl |\tilde{\gamma }_r(s)-\tilde{\gamma }_r(t)\bigr |&\lesssim L(\gamma _r) \, \frac{1+\Vert \rho \Vert _{H^s({\mathbb {X}})}r^{-s} }{\epsilon } \, |s-t| \lesssim L(\gamma _r) \, \frac{2}{\epsilon } \, |s-t|. \end{aligned}$$

To derive (24), we aim to apply Lemma 4 with \(\nu _r={\gamma _r}{_*}\lambda \). We observe

$$\begin{aligned} {\tilde{\nu }}_r = \frac{\rho }{\beta _r} {\gamma _r}{_*}\lambda = {\gamma _r}{_*} \Bigl ( \frac{\rho \circ \gamma _r}{\beta _r} \lambda \Bigr ) = {\gamma _r}{_*} (g'\lambda ) =(\gamma _r\circ g^{-1}){_*} \lambda ={\tilde{\gamma }}_r{_*}\lambda , \end{aligned}$$

so that Lemma 4 indeed implies (24). \(\square \)

In comparison to Theorem 6, we now trade the Lipschitz condition on \(\rho \) for the positivity requirement, which enables us to cover \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\).
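The reparametrization \({\tilde{\gamma }}_r = \gamma _r \circ g^{-1}\) of Lemma 5 is inverse-CDF sampling along the curve. A minimal numerical sketch on \({\mathbb {T}}^{1}\), assuming the simplest possible curve \(\gamma (t) = t\) (so that \(\gamma {_*}\lambda = \sigma _{\mathbb {X}}\) and \(\beta _r = 1\)) and an illustrative density \(\rho \):

```python
import numpy as np

# Sketch of Lemma 5 on X = T^1 with the hypothetical curve gamma(t) = t:
# g is then the CDF of rho, and tilde gamma = g^{-1} pushes lambda forward
# to the measure rho dlambda (inverse-CDF reparametrization).
rho = lambda x: 1.0 + 0.5 * np.sin(2.0 * np.pi * x)  # positive density, int = 1

M = 200000                                   # midpoint grid on [0, 1]
x = (np.arange(M) + 0.5) / M
g = np.cumsum(rho(x)) / M                    # g(t) ~ (1/beta_r) int_0^t rho o gamma
g_inv = lambda s: np.interp(s, g, x)         # numerical inverse (g is increasing)

tilde_gamma = g_inv(x)                       # tilde gamma = gamma o g^{-1}

# Check int_0^1 f(tilde gamma(t)) dt = int_0^1 f rho dlambda for a test f.
f = lambda y: np.sin(2.0 * np.pi * y)
lhs = float(np.mean(f(tilde_gamma)))
rhs = float(np.mean(f(x) * rho(x)))          # = 1/4 for this f and rho
print(lhs, rhs)
```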

Theorem 8

(Torus) Let \({\mathbb {X}}= {\mathbb {T}}^d\) with \(d\in {\mathbb {N}}\), \(s>d/2\) and suppose that \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with positive density \(\rho \in H^s({\mathbb {X}})\), there is a constant depending on d, K, and \(\rho \) with

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu ) \le \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu ) \lesssim L^{-\frac{s}{d-1}}. \end{aligned}$$

Proof

The first part of the proof is identical to the proof of Theorem 6. However, instead of Lemma 4, we now apply Lemma 5 for \(\gamma _r\) and \(\rho _r \equiv 1\). Hence, \({\tilde{\gamma }}_r = \gamma _r \circ g_r^{-1}\) satisfies \(L({\tilde{\gamma }}_r)\le \frac{\beta _r}{\epsilon } d(2r +1)^{d-1} \lesssim r^{d-1}\), so that \({\tilde{\gamma }}_r{_*}\lambda \) satisfies (24) and is in \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\) with \(L \sim r^{d-1}\). \(\square \)

The construction on \({\mathbb {X}}={\mathbb {S}}^d\) for \({\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})\) in the proof of Theorem 7 is not compatible with \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\). Thus, the situation is different from the torus, where we have used the same underlying construction and only switched from Lemma 4 to Lemma 5. Now, we present a new construction for \({\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})\), which is tailored to \({\mathbb {X}}={\mathbb {S}}^2\). In this case, we can transfer the ideas of the torus, but with Gauss-Legendre quadrature points.

Theorem 9

(2-sphere) Let \({\mathbb {X}}= {\mathbb {S}}^2\), \(s>1\) and suppose \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, we have for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with positive density \(\rho \in H^s({\mathbb {X}})\) that there is a constant depending on K and \(\rho \) with

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu )\le \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-s}. \end{aligned}$$

Proof

1. We construct closed curves such that the spherical polynomials from \({\Pi }_r({\mathbb {S}}^2)\), see (35) in the appendix, are exactly integrated along this curve. It suffices to show this for the polynomials \(p(x) = x_1^{k_1} x_2^{k_2} x_3^{k_3} \in {\Pi }_r({\mathbb {S}}^2)\) with \(k_1+k_2+k_3 \le r\) restricted to \({\mathbb {S}}^2\). We select \(n = \lceil (r+1)/2 \rceil \) Gauss-Legendre quadrature points \(u_j = \cos (\theta _j)\in [-1,1]\) and corresponding weights \(2\omega _j\), \(j=1, \ldots ,n\). Note that \(\sum _{j=1}^n \omega _j = 1\). Using spherical coordinates \(x_1=\cos (\theta )\), \(x_2=\sin (\theta )\cos (\phi )\), and \(x_3=\sin (\theta )\sin (\phi )\) with \((\theta , \phi ) \in [0,\pi ] \times [0,2\pi ]\), we obtain

$$\begin{aligned} \int _{{\mathbb {S}}^2} p \,\mathrm {d}\sigma _{{\mathbb {S}}^2}&= \frac{1}{4\pi } \int _0^{2\pi } \cos (\phi )^{k_2} \sin (\phi )^{k_3} \int _0^{\pi } \cos (\theta )^{k_1} \sin (\theta )^{k_2+k_3} \sin (\theta ) \,\mathrm {d}\theta \,\mathrm {d}\phi \\&= \frac{1}{4\pi } \int _0^{2\pi } \cos (\phi )^{k_2} \sin (\phi )^{k_3} \int _{-1}^1 u^{k_1} (1-u^2)^{\frac{k_2+k_3}{2}} \,\mathrm {d}u \,\mathrm {d}\phi , \end{aligned}$$

see also [83]. If \(k_2+k_3\) is odd, then the integral over \(\phi \) becomes zero. If \(k_2+k_3\) is even, the inner integrand is a polynomial of degree \(\le r\). In both cases we get

$$\begin{aligned} \int _{{\mathbb {S}}^2} p \,\mathrm {d}\sigma _{{\mathbb {S}}^2}&= \frac{1}{2\pi } \sum _{j=1}^n \omega _j \int _0^{2\pi } p\bigl (\cos (\theta _j),\sin (\theta _j) \cos (\phi ), \sin (\theta _j)\sin (\phi )\bigr ) \,\mathrm {d}\phi . \end{aligned}$$

Substituting in each summand \(\phi = 2\pi t /\omega _j\), \(j=1,\ldots ,n\), yields

$$\begin{aligned} \int _{{\mathbb {S}}^2} p\,\mathrm {d}\sigma _{{\mathbb {S}}^2} = \sum _{j=1}^n \int _0^{\omega _j} p \circ \gamma _j \,\mathrm {d}\lambda , \end{aligned}$$

where \(\gamma _j:[0,\omega _j] \rightarrow {\mathbb {S}}^2\) is defined by

$$\begin{aligned} \gamma _j(t) :=\bigl (\cos (\theta _j),\sin (\theta _j)\cos (2\pi t/\omega _j),\sin (\theta _j)\sin (2\pi t/\omega _j) \bigr ), \end{aligned}$$

and has constant speed \(L(\gamma _j) = 2\pi \sin (\theta _j)/\omega _j\). The lower bound \(\omega _j \gtrsim \frac{1}{n}\sin (\theta _j)\), cf. [34], implies that \(L(\gamma _j)\lesssim n\). Defining a curve \(\tilde{\gamma }:[0,1]\rightarrow {\mathbb {S}}^2\) piecewise via

$$\begin{aligned} \tilde{\gamma }|_{[0,s_1]} = \gamma _1,\quad \tilde{\gamma }|_{[s_1,s_2]} = \gamma _2(\cdot -s_1), \quad \ldots \quad , \quad \tilde{\gamma }|_{[s_{n-1},1]} = \gamma _n(\cdot -s_{n-1}), \end{aligned}$$

where \(s_j :=\omega _1 + \ldots + \omega _j\), we obtain

$$\begin{aligned} \int _{{\mathbb {S}}^2} p\,\mathrm {d}\sigma _{{\mathbb {S}}^2} = \int _0^{1} p \,\mathrm {d}\tilde{\gamma }{_*}\lambda , \quad p \in {\Pi }_r({\mathbb {S}}^2). \end{aligned}$$

Further, the curve satisfies \(L(\tilde{\gamma })\lesssim r\).
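The latitude-circle quadrature behind this construction can be verified numerically. The sketch below (with an illustrative degree r) uses NumPy's Gauss-Legendre routine and compares against the exact monomial moments of \(\sigma _{{\mathbb {S}}^{2}}\):

```python
import numpy as np

# Check that Gauss-Legendre latitudes theta_j with weights 2*omega_j satisfy
#   int_{S^2} p dsigma = sum_j omega_j * (mean of p over the circle at theta_j)
# for all polynomials p of degree <= r restricted to S^2.
r = 6                                     # illustrative polynomial degree
n = (r + 2) // 2                          # n = ceil((r+1)/2) Gauss-Legendre points
u, w = np.polynomial.legendre.leggauss(n) # nodes u_j = cos(theta_j), weights on [-1,1]
omega = w / 2.0                           # weights 2*omega_j, so sum_j omega_j = 1
theta = np.arccos(u)

Mphi = 4 * (r + 1)                        # equispaced phi grid, exact for trig deg <= r
phi = 2 * np.pi * np.arange(Mphi) / Mphi

def circle_quadrature(k1, k2, k3):        # sum_j omega_j * circle mean of p
    ct, st = np.cos(theta)[:, None], np.sin(theta)[:, None]
    vals = ct ** k1 * (st * np.cos(phi)) ** k2 * (st * np.sin(phi)) ** k3
    return float(omega @ vals.mean(axis=1))

def sphere_integral(k1, k2, k3):          # exact monomial moment of sigma_{S^2}
    if k1 % 2 or k2 % 2 or k3 % 2:
        return 0.0
    df = lambda m: float(np.prod(np.arange(m, 0, -2))) if m > 0 else 1.0
    return df(k1 - 1) * df(k2 - 1) * df(k3 - 1) / df(k1 + k2 + k3 + 1)

for k in [(2, 0, 0), (0, 4, 0), (2, 2, 2), (1, 2, 0), (0, 0, 6)]:
    print(k, circle_quadrature(*k), sphere_integral(*k))
```

Here the equality is exact (up to floating point), since both the Gauss quadrature in u and the equispaced quadrature in \(\phi \) are exact for the involved degrees.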

As with the torus, we now “turn” the sphere (i.e., switch the position of \(\phi \)) so that we obtain circles along orthogonal directions. This large collection of circles is indeed connected. As with the torus, each intersection point has an incoming and an outgoing arc of every circle passing through it, so that the union corresponds to a graph in which each vertex has an even number of edges. Hence, there is an Euler path inducing our final curve \(\gamma _r:[0,1]\rightarrow {\mathbb {S}}^2\) with piecewise constant speed \(L(\gamma _r)\lesssim r\) satisfying

$$\begin{aligned} \int _{{\mathbb {S}}^2} p\,\mathrm {d}\sigma _{{\mathbb {S}}^2} = \int _0^{1} p \,\mathrm {d}({\gamma _r}{_*}\lambda ), \quad p \in {\Pi }_r({\mathbb {S}}^2). \end{aligned}$$

2. Let \(r \sim L\). Analogous to the end of the proof of Theorem 8, Lemma 5 now yields the assertion. \(\square \)
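The Euler-path argument used above to glue the circles into a single closed curve can be illustrated on a toy multigraph. A minimal version of Hierholzer's algorithm (names and the example graph are hypothetical) returns a closed walk traversing every edge exactly once whenever the graph is connected and all vertex degrees are even:

```python
# Hierholzer's algorithm: in a connected graph with all even degrees an Euler
# circuit exists; it traverses every circle arc exactly once and induces the
# final closed curve.
def euler_circuit(adj, start):
    """adj: dict vertex -> list of neighbours (each undirected edge listed at
    both endpoints); returns a closed Euler circuit starting at `start`."""
    adj = {v: list(ns) for v, ns in adj.items()}   # local, consumable copy
    stack, circuit = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:
            u = adj[v].pop()
            adj[u].remove(v)                       # remove the same undirected edge
            stack.append(u)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

# Two "circles" (triangles) sharing the vertex 0: every degree is even.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
tour = euler_circuit(adj, 0)
print(tour)                                        # closed walk over all 6 edges
```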

To get the approximation rate for \({\mathbb {X}}={\mathcal {G}}_{2,4}\), we make use of its double cover \({\mathbb {S}}^2\times {\mathbb {S}}^2\), cf. Remark 8.

Theorem 10

(Grassmannian) Let \({\mathbb {X}}= {\mathcal {G}}_{2,4}\), \(s>2\) and suppose \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\) holds with equivalent norms. Then, we have for any absolutely continuous measure \(\mu \in {{\mathcal {P}}}({\mathbb {X}})\) with positive density \(\rho \in H^s({\mathbb {X}})\) that there exists a constant depending on K and \(\rho \) with

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{ a-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K(\mu ,\nu )\le \min _{\nu \in {\mathcal {P}}^{{{\,\mathrm{{\lambda }-curv}\,}}}_L({\mathbb {X}})}{\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{s}{3}}. \end{aligned}$$

Proof

By Remark 8 in the appendix, we know that \({\mathcal {G}}_{2,4} \cong {\mathbb {S}}^2\times {\mathbb {S}}^2/ \{\pm 1\}\), so that it remains to prove the assertion for \({\mathbb {X}}= {\mathbb {S}}^2 \times {\mathbb {S}}^2\).

There exist pairwise distinct points \(\{x_1,\ldots ,x_N\}\subset {\mathbb {S}}^2\) such that \(\frac{1}{N}\sum _{j=1}^N \delta _{x_j}\) satisfies (15) on \({\mathbb {S}}^2\) with \(N\sim r^2\), cf. [9, 10]. On the other hand, let \(\tilde{\gamma }\) be the curve on \({\mathbb {S}}^2\) constructed in the proof of Theorem 9, so that \(\tilde{\gamma }{_*}\lambda \) satisfies (15) on \({\mathbb {S}}^2\) with \(\ell (\tilde{\gamma })\le L(\tilde{\gamma })\sim r\). Let us introduce the virtual point \(x_{N+1}:=x_1\).

The curve \(\tilde{\gamma }([0,1])\) contains a great circle. Thus, for each pair \(x_j\) and \(x_{j+1}\) there is \(O_j\in {{\,\mathrm{O}\,}}(3)\) such that \(x_j,x_{j+1}\in \varGamma _j:=O_j\tilde{\gamma }([0,1])\).

It turns out that the set on \({\mathbb {S}}^2 \times {\mathbb {S}}^2\) given by \( \bigcup _{j=1}^N (\{x_j\}\times \varGamma _j)\cup (\varGamma _j \times \{x_{j+1}\}) \) is connected. We now choose \(\gamma _j:=O_j\tilde{\gamma }\) and observe that the union of the trajectories of the curves

$$\begin{aligned} t\mapsto \bigl (x_j,\gamma _j(t)\bigr ),\qquad t\mapsto \bigl (\gamma _{j}(t),x_{j+1}\bigr ),\quad j=1,\ldots ,N, \end{aligned}$$

is connected. Combinatorial arguments involving Euler paths, cf. the proofs of Theorems 6 and 9, lead to a curve \(\gamma \) with \(\ell (\gamma )\le L(\gamma )\sim N L(\tilde{\gamma }) \sim r^3\), so that \(\gamma {_*} \lambda \) satisfies (15). The remaining part follows along the lines of the proof of Theorem 7. \(\square \)

Our approximation results can be extended to diffeomorphic manifolds, e.g., from \({\mathbb {S}}^2\) to ellipsoids, see also the 3D-torus example in Sect. 8. To this end, recall that we can describe the Sobolev space \(H^s({\mathbb {X}})\) using local charts, see [78, Sec. 7.2]. The exponential maps \(\exp _{x} :T_{x}{\mathbb {X}}\rightarrow {\mathbb {X}}\) give rise to local charts \((\mathring{B}_{x}(r_0), \exp _x^{-1})\), where \(\mathring{B}_{x}(r_0) :=\{y \in {\mathbb {X}}: {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x,y) < r_0\}\) denotes the geodesic balls around x with the injectivity radius \(r_0\). If \(\delta < r_0\) is chosen small enough, there exists a uniformly locally finite covering of \({\mathbb {X}}\) by a sequence of balls \((\mathring{B}_{x_j}(\delta ))_j\) with a corresponding smooth resolution of unity \((\psi _j)_j\) with \(\text {supp}(\psi _j) \subset \mathring{B}_{x_j}(\delta )\), see [78, Prop. 7.2.1]. Then, an equivalent Sobolev norm is given by

$$\begin{aligned} \Vert f\Vert _{H^s({\mathbb {X}})} :=\Big ( \sum _{j=1}^\infty \Vert (\psi _j f) \circ \exp _{x_j} \Vert ^2_{H^s({\mathbb {R}}^d)} \Big )^{\frac{1}{2}}, \end{aligned}$$
(25)

where \((\psi _j f) \circ \exp _{x_j}\) is extended to \({\mathbb {R}}^d\) by zero, see [78, Thm. 7.4.5]. Using Definition (25), we are able to pull over results from the Euclidean setting.

Proposition 2

Let \({\mathbb {X}}_1\), \({\mathbb {X}}_2\) be two d-dimensional connected, compact Riemannian manifolds without boundary, which are \(C^{s+1}\)-diffeomorphic with \(s>d/2\). Assume that \(H_K({\mathbb {X}}_2)=H^s({\mathbb {X}}_2)\) and that for every absolutely continuous measure \(\mu \) with positive density \(\rho \in H^s({\mathbb {X}}_2)\) it holds

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}} } {\mathscr {D}}_K(\mu ,\nu )\lesssim L^{-\frac{s}{d-1}}, \end{aligned}$$

where the constant depends on \({\mathbb {X}}_2\), K, and \(\rho \). Then, the same property holds for \({\mathbb {X}}_1\), where the constant additionally depends on the diffeomorphism.

Proof

Let \(f :{\mathbb {X}}_2 \rightarrow {\mathbb {X}}_1\) denote such a diffeomorphism and \(\rho \in H^s({\mathbb {X}}_1)\) the density of the measure \(\mu \) on \({\mathbb {X}}_1\). Any curve \({\tilde{\gamma }} :[0,1] \rightarrow {\mathbb {X}}_2\) gives rise to a curve \(\gamma :[0,1] \rightarrow {\mathbb {X}}_1\) via \(\gamma = f \circ {\tilde{\gamma }}\), which for every \(\varphi \in H^s({\mathbb {X}}_1)\) satisfies

$$\begin{aligned} \Big | \int _{{\mathbb {X}}_1} \varphi \rho \,\mathrm {d}\sigma _{{\mathbb {X}}_1} - \int _{0}^1 \varphi \circ \gamma \,\mathrm {d}\lambda \Big | = \Big | \int _{{\mathbb {X}}_2} (\varphi \rho )\circ f \vert \det (J_f) \vert \,\mathrm {d}\sigma _{{\mathbb {X}}_2} -\int _{0}^1 \varphi \circ f\circ {\tilde{\gamma }} \,\mathrm {d}\lambda \Big |, \end{aligned}$$

where \(J_f\) denotes the Jacobian of f. Now, note that \(\varphi \circ f, \rho \circ f \vert \det (J_f) \vert \in H^s({\mathbb {X}}_2)\), see (16) and [78, Thm. 4.3.2], which is lifted to manifolds using (25). Hence, we can define a measure \({\tilde{\mu }}\) on \({\mathbb {X}}_2\) through the probability density \(\rho \circ f \vert \det (J_f) \vert \). Choosing \({\tilde{\gamma }}_L\) as a curve realizing a minimizer of \(\inf _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}}} {\mathscr {D}}_K({\tilde{\mu }},\nu )\), we can apply the approximation result for \({\mathbb {X}}_2\) and estimate for \(\gamma _L = f \circ {\tilde{\gamma }}_L\) that

$$\begin{aligned} \Big | \int _{{\mathbb {X}}_1} \varphi \rho \,\mathrm {d}\sigma _{{\mathbb {X}}_1} - \int _{0}^1 \varphi \circ \gamma _L \,\mathrm {d}\lambda \Big | \lesssim L^{-\frac{s}{d-1}} \Vert \varphi \circ f\Vert _{H^s({\mathbb {X}}_2)} \lesssim L^{-\frac{s}{d-1}} \Vert \varphi \Vert _{H^s({\mathbb {X}}_1)}, \end{aligned}$$

where the second estimate follows from [78, Thm. 4.3.2]. Now, \(L(\gamma _L) \le L(f)L\) implies

$$\begin{aligned} \inf _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}}} {\mathscr {D}}_K (\mu ,\nu ) \lesssim L^{-\frac{s}{d-1}}. \end{aligned}$$

\(\square \)

Remark 5

Consider a probability measure \(\mu \) on \({\mathbb {X}}\) such that the dimension \(d_\mu \) of its support is smaller than the dimension d of \({\mathbb {X}}\). Then, \(\mu \) does not have any density with respect to \(\sigma _{\mathbb {X}}\). If \(\text {supp}(\mu )\) is itself a \(d_\mu \)-dimensional connected, compact Riemannian manifold \({\mathbb {Y}}\) without boundary, we switch from \({\mathbb {X}}\) to \({\mathbb {Y}}\). Sobolev trace theorems and reproducing kernel Hilbert space theory imply that the assumption \(H_K({\mathbb {X}})=H^s({\mathbb {X}})\) leads to \(H_{K'}({\mathbb {Y}})=H^{s'}({\mathbb {Y}})\), where \(K':=K|_{{\mathbb {Y}}\times {\mathbb {Y}}}\) is the restricted kernel and \(s'=s-(d-d_\mu )/2\), cf. [36]. If, for instance, \({\mathbb {Y}}\) is diffeomorphic to \({\mathbb {T}}^{d_\mu }\) (or \({\mathbb {S}}^{d_{\mu }}\) with \(d_\mu =2\)), and \(\mu \) has a positive density \(\rho \in H^{s'}({\mathbb {Y}})\) with respect to \(\sigma _{{\mathbb {Y}}}\), then Theorem 8 (or 9) and Proposition 2 eventually yield

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{{\lambda }-curv}\,}}} } {\mathscr {D}}_K(\mu ,\nu )\lesssim L^{-\frac{s'}{d_\mu -1}}. \end{aligned}$$

If \(\text {supp}(\mu )\) is a proper subset of \({\mathbb {Y}}\), we are able to analyze approximations with \({\mathcal {P}}_L^{{{\,\mathrm{ a-curv}\,}}}({\mathbb {Y}})\). First, we observe that the analogue of Proposition 2 also holds for \({\mathcal {P}}_L^{{{\,\mathrm{ a-curv}\,}}}({\mathbb {X}}_1), {\mathcal {P}}_L^{{{\,\mathrm{ a-curv}\,}}}({\mathbb {X}}_2)\) when the positivity assumption on \(\rho \) is replaced with the Lipschitz requirement as in Theorems 6 and 7. If, for instance, \({\mathbb {Y}}\) is diffeomorphic to \({\mathbb {T}}^{d_\mu }\) or \({\mathbb {S}}^{d_\mu }\) and \(\mu \) has a Lipschitz continuous density \(\rho \in H^{s'}({\mathbb {Y}})\) with respect to \(\sigma _{{\mathbb {Y}}}\), then Theorems 6 and 7, and Proposition 2 eventually yield

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_L^{{{\,\mathrm{ a-curv}\,}}} } {\mathscr {D}}_K(\mu ,\nu )\lesssim L^{-\frac{s'}{d_\mu -1}}. \end{aligned}$$

6 Discretization

In our numerical experiments, we are interested in determining minimizers of

$$\begin{aligned} \min _{\nu \in {\mathcal {P}}_{L}^{{{\,\mathrm{{\lambda }-curv}\,}}}({\mathbb {X}})} {\mathscr {D}}^2_K (\mu ,\nu ). \end{aligned}$$
(26)

Defining \( A_L :=\{\gamma \in {{\,\mathrm{Lip}\,}}({\mathbb {X}}): L(\gamma ) \le L\}\) and using the indicator function

$$\begin{aligned} \iota _{A_L} (\gamma ) :=\left\{ \begin{array}{ll} 0&{}\mathrm {if} \; \gamma \in A_L,\\ + \infty &{}\mathrm {otherwise}, \end{array} \right. \end{aligned}$$

we can rephrase problem (26) as a minimization problem over curves

$$\begin{aligned} \min _{\gamma \in {\mathcal {C}}([0,1],{\mathbb {X}})} {\mathcal {J}}_L(\gamma ), \end{aligned}$$

where \({\mathcal {J}}_L(\gamma ):={\mathscr {D}}^2_K(\mu ,\gamma {_*}\lambda ) + \iota _{A_L}(\gamma )\). As \({\mathbb {X}}\) is a connected Riemannian manifold, we can approximate curves in \(A_L\) by piecewise shortest geodesics with N parts, i.e., by curves from

$$\begin{aligned} A_{L,N} :=\left\{ \gamma \in A_L :\gamma |_{[(i-1)/N,i/N]}\text { is a shortest geodesic for } i = 1,\ldots , N\right\} . \end{aligned}$$

Next, we approximate the Lebesgue measure on [0, 1] by \(e_N :=\frac{1}{N} \sum _{i=1}^{N} \delta _{i/N}\) and consider the minimization problems

$$\begin{aligned} \min _{\gamma \in {\mathcal {C}}([0,1],{\mathbb {X}})} {\mathcal {J}}_{L,N}(\gamma ) , \end{aligned}$$
(27)

where \({\mathcal {J}}_{L,N}(\gamma ):={\mathscr {D}}^{2}_K (\mu , \gamma {_*}e_N) + \iota _{A_{L,N}}(\gamma )\). Since \(\mathrm {ess}\sup _{t \in [0,1]} |{\dot{\gamma }}| (t) = L(\gamma )\), the constraint \(L(\gamma ) \le L\) can be reformulated as \(\int _0^1 (|{\dot{\gamma }}| (t) - L)_+^2 \,\mathrm {d}t = 0\). Hence, using \(x_i = \gamma (i/N)\), \(i=1,\ldots ,N\), \(x_0 = x_N\) and regarding that \(|{\dot{\gamma }}| (t) = N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{i-1},x_{i})\) for \(t \in \left( \frac{i-1}{N},\frac{i}{N} \right) \), problem (27) is rewritten in the computationally more suitable form

$$\begin{aligned} \min _{(x_1,\ldots ,x_N)\in {\mathbb {X}}^N} {\mathscr {D}}^{2}_K\Big (\mu , \frac{1}{N} \sum _{i=1}^{N} \delta _{x_{i}}\Big ) \quad \text {s.t.} \quad \frac{1}{N}\sum _{i=1}^{N} \big (N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{i-1},x_{i}) - L \big )_+^{2} = 0. \end{aligned}$$
(28)

This discretization is motivated by the next proposition. To this end, recall that a sequence \((f_N)_{N\in {\mathbb {N}}}\) of functions \(f_N:{{\mathbb {X}}} \rightarrow (-\infty ,+\infty ]\) is said to \(\varGamma \)-converge to \(f :{{\mathbb {X}}} \rightarrow (-\infty ,+\infty ]\) if the following two conditions are fulfilled for each \(x \in {{\mathbb {X}}}\), see [12]:

  1. (i)

    \(f(x) \le \liminf _{N \rightarrow \infty } f_N(x_N)\) whenever \(x_N \rightarrow x\),

  2. (ii)

    there is a sequence \((y_N)_{N\in {\mathbb {N}}}\) with \(y_N \rightarrow x \) and \(\limsup _{N \rightarrow \infty } f_N(y_N) \le f(x)\).

The importance of \(\varGamma \)-convergence lies in the fact that every cluster point of minimizers of \((f_N)_{N\in {\mathbb {N}}}\) is a minimizer of f. Note that for non-compact manifolds \({\mathbb {X}}\) an additional equi-coercivity condition would be required.

Proposition 3

The sequence \(({\mathcal {J}}_{L,N})_{N\in {\mathbb {N}}}\) is \(\varGamma \)-convergent with limit \({\mathcal {J}}_L\).

Proof

1. First, we verify the \(\liminf \)-inequality. Let \((\gamma _N)_{N\in {\mathbb {N}}}\) be a sequence with \(\lim _{N\rightarrow \infty } \gamma _N = \gamma \), i.e., \(\sup _{t \in [0,1]} {{\,\mathrm{dist}\,}}_{\mathbb {X}}(\gamma (t),\gamma _N(t)) \rightarrow 0\). By excluding the trivial case \(\liminf _{N \rightarrow \infty } {\mathcal {J}}_{L,N}(\gamma _N) = \infty \) and restricting to a subsequence \((\gamma _{N_k})_{k \in {\mathbb {N}}}\), we may assume \(\gamma _{N_k} \in A_{L,N_k} \subset A_L\). Since \(A_L\) is closed, we directly infer \(\gamma \in A_L\). It holds \(e_N \rightharpoonup \lambda \), which is equivalent to the convergence of Riemann sums for \(f \in C[0,1]\), and hence also \({\gamma _N}_{*} e_N \rightharpoonup \gamma _{*}\lambda \). By the weak continuity of \({\mathscr {D}}^2_K\), we obtain

$$\begin{aligned} {\mathcal {J}}_L(\gamma ) ={\mathscr {D}}^2_K(\mu ,\gamma _{*}\lambda ) = \lim _{N \rightarrow \infty } {\mathscr {D}}^{2}_K (\mu , {\gamma _N}_{*}e_N)= \liminf _{N \rightarrow \infty } {\mathcal {J}}_{L,N}(\gamma _N). \end{aligned}$$
(29)

2. Next, we prove the \(\limsup \)-inequality, i.e., we are searching for a sequence \((\gamma _N)_{N\in {\mathbb {N}}}\) with \(\gamma _N \rightarrow \gamma \) and \(\limsup _{N \rightarrow \infty } {\mathcal {J}}_{L,N}(\gamma _N) \le \mathcal J_L(\gamma )\). First, we may exclude the trivial case \(\mathcal J_L(\gamma ) = \infty \). Then, \(\gamma _N\) is defined on every interval \([(i-1)/N,i/N]\), \(i=1,\ldots ,N\), as a shortest geodesic from \(\gamma ((i-1)/N)\) to \(\gamma (i/N)\). By construction we have \(\gamma _N \in A_{L,N}\). From \(\gamma ,\gamma _N \in A_L\) we conclude

$$\begin{aligned}&\sup _{t \in [0,1]} {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (t), \gamma _N(t) \bigr ) = \max _{i=1,\ldots ,N} \sup _{t \in [(i-1)/N,i/N]} {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (t), \gamma _N(t) \bigr )\\ \le&\max _{i=1,\ldots ,N} \sup _{t \in [(i-1)/N,i/N]} {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma (t), \gamma (i/N) \bigr ) + {{\,\mathrm{dist}\,}}_{\mathbb {X}}\bigl (\gamma _N(i/N), \gamma _N(t) \bigr )\le \frac{2L}{N}, \end{aligned}$$

implying \(\gamma _N \rightarrow \gamma \). Similarly as in (29), we infer \(\limsup _{N \rightarrow \infty } {\mathcal {J}}_{L,N}(\gamma _N) \le {\mathcal {J}}_L(\gamma )\). \(\square \)

In the numerical part, we use the penalized form of (28) and minimize

$$\begin{aligned} \min _{(x_1,\ldots ,x_N) \in {\mathbb {X}}^N} {\mathscr {D}}^{2} _K\Big (\mu , \frac{1}{N} \sum _{i=1}^{N} \delta _{x_{i}}\Big ) + \frac{\lambda }{N} \sum _{i=1}^{N} \big (N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{i-1},x_{i}) - L \big )_+^{2} ,\qquad \lambda >0. \end{aligned}$$
(30)

7 Numerical Algorithm

For a detailed overview on Riemannian optimization we refer to [69] and the books [1, 79]. In order to minimize (30), we take a closer look at the discrepancy term. By (6) and (7), the discrepancy can be represented as follows

$$\begin{aligned} {\mathscr {D}}_K^{2} \Big (\mu , \frac{1}{N} \sum _{i=1}^{N} \delta _{x_{i}}\Big ) =&\frac{1}{N^{2}}\sum _{i,j=1}^{N} K(x_{i},x_{j}) - \frac{2}{N}\sum _{i=1}^{N} \int _{{\mathbb {X}}} K(x_{i},x) \,\mathrm {d}\mu (x) + \iint \limits _{{\mathbb {X}} \times {\mathbb {X}}} K \,\mathrm {d}\mu \,\mathrm {d}\mu \\ =&\sum _{k=0}^{\infty } \alpha _{k} \Big | {{\hat{\mu }}}_{k} - \frac{1}{N} \sum _{i=1}^{N} \varphi _{k}(x_{i}) \Big |^{2}. \end{aligned}$$

Both formulas have pros and cons: the first allows for an exact evaluation only if the expressions \(\varPhi (x) :=\int _{{\mathbb {X}}} K(x,y) \,\mathrm {d}\mu (y)\) and \(\int _{{\mathbb {X}}} \varPhi \,\mathrm {d}\mu \) can be written in closed form. In this case, the complexity scales quadratically in the number of points N. The second formula allows for an exact evaluation only if the kernel has a finite expansion (3). In that case, the complexity scales linearly in N.
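The agreement of the two formulas can be checked on a toy example. The sketch below uses the torus \({\mathbb {T}}^1\) with \(\varphi _k(x) = \mathrm {e}^{2\pi \mathrm {i} k x}\), an illustrative finite coefficient sequence \(\alpha _k\), and the uniform measure \(\mu \), for which \(\varPhi \equiv \alpha _0\) and \(\iint K \,\mathrm {d}\mu \,\mathrm {d}\mu = \alpha _0\):

```python
import numpy as np

rng = np.random.default_rng(0)
ks = np.arange(-5, 6)                        # frequencies of the finite expansion
alpha = 1.0 / (1.0 + ks.astype(float) ** 2)  # illustrative positive coefficients
a0 = alpha[ks == 0][0]                       # alpha_0

N = 7
x = rng.random(N)                            # point positions on T^1 = [0, 1)

# First formula (O(N^2)): Gram sum - (2/N) sum_i Phi(x_i) + int int K dmu dmu,
# where Phi = alpha_0 and the double integral equals alpha_0 for uniform mu.
diff = x[:, None, None] - x[None, :, None]
K = (alpha * np.exp(2j * np.pi * ks * diff)).sum(axis=-1).real
first = K.sum() / N ** 2 - 2.0 * a0 + a0

# Second formula (O(N)): sum_k alpha_k |hat mu_k - (1/N) sum_i phi_k(x_i)|^2,
# with hat mu_k = delta_{k,0} for the uniform measure.
mu_hat = (ks == 0).astype(float)
emp = np.exp(2j * np.pi * ks[:, None] * x[None, :]).mean(axis=1)
second = float(alpha @ np.abs(mu_hat - emp) ** 2)

print(first, second)                         # the two values coincide
```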

Our approach is to use kernels fulfilling \(H_K({\mathbb {X}}) = H^s({\mathbb {X}})\), \(s > d/2\), and to approximate them by their truncated representation with respect to the eigenfunctions of the Laplace–Beltrami operator

$$\begin{aligned} K_r(x,y) :=\sum _{k\in {{\mathcal {I}}}_r} \alpha _{k} \varphi _{k}(x) \overline{\varphi _{k}(y)}, \quad {{\mathcal {I}}}_r :=\bigl \{k: \varphi _k \in {\Pi }_r({\mathbb {X}}) \bigr \}. \end{aligned}$$

Then, we finally aim to minimize

$$\begin{aligned} \min _{x \in {\mathbb {X}}^N} F(x) :=\sum _{k\in {{\mathcal {I}}}_r} \alpha _{k} \Big | {{\hat{\mu }}}_{k} - \frac{1}{N} \sum _{i=1}^{N} \varphi _{k}(x_{i}) \Big |^{2}+ \frac{\lambda }{N} \sum _{i=1}^{N} \big (N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{i-1},x_{i}) - L \big )_+^{2}, \end{aligned}$$
(31)

where \(\lambda >0\). Our algorithm of choice is the nonlinear conjugate gradient (CG) method with Armijo line search as outlined in Algorithm 1, with notation and implementation details described in the comments after Remark 6; see [25] for the Euclidean case. Note that in our comments the notation is independent of the special choice of \({\mathbb {X}}\). The proposed method uses “exact conjugacy”, i.e., second order derivative information provided by the Hessian. For the Armijo line search itself, the sophisticated initialization in Algorithm 2 is used, which also incorporates second order information via the Hessian. The main advantage of the CG method is its simplicity together with fast convergence at low computational cost. Indeed, Algorithm 1, with the line search of Algorithm 2 replaced by an exact one, converges under suitable assumptions superlinearly, more precisely dN-step quadratically, towards a local minimum, cf. [73, Thm. 5.3] and [43, Sec. 3.3.2, Thm. 3.27].
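For readers unfamiliar with the method, here is a minimal Euclidean stand-in (our simplification: flat space, Fletcher–Reeves conjugacy instead of the exact conjugacy of Algorithm 1, and plain Armijo backtracking instead of the initialization of Algorithm 2):

```python
import numpy as np

def cg_armijo(f, grad, x, iters=200, beta=0.5, c1=1e-4):
    """Nonlinear CG (Fletcher-Reeves) with Armijo backtracking on R^n."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if g @ g < 1e-24:                    # gradient numerically zero
            return x
        t = 1.0                              # Armijo backtracking
        while f(x + t * d) > f(x) + c1 * t * (g @ d):
            t *= beta
        x = x + t * d
        g_new = grad(x)
        b = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves coefficient
        d = -g_new + b * d
        if g_new @ d >= 0:                   # restart if not a descent dir
            d = -g_new
        g = g_new
    return x

# Toy quadratic test problem f(x) = x^T A x / 2 with minimizer x = 0.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x_star = cg_armijo(f, grad, np.array([1.0, -1.0]))
```

On the flat torus one would additionally wrap the iterates mod 1; on curved \({\mathbb {X}}\) the update x + t d is replaced by the geodesic \(\gamma _{x,d}(t)\) and d is parallel transported, as in Algorithm 1.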

Remark 6

The objective in (31) violates the smoothness requirements whenever \(x_{k-1}=x_{k}\) or \({{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k}) = L/N\). However, we observe numerically that local minimizers of (31) do not belong to this set of measure zero. This means in turn that, if a local minimizer has a positive definite Hessian, then there is a local neighborhood in which the CG method (with exact line search) achieves a superlinear convergence rate. We indeed observe this behavior in our numerical experiments.

Let us briefly comment on Algorithm 1 for \({\mathbb {X}}\in \{{\mathbb {T}}^2, {\mathbb {T}}^3, {\mathbb {S}}^2,\mathrm{SO}(3),{\mathcal {G}}_{2,4}\}\) which are considered in our numerical examples. For additional implementation details we refer to [43]. By \(\gamma _{x, d}\) we denote the geodesic with \(\gamma _{x, d}(0) = x\) and \({\dot{\gamma }}_{x, d}(0) = d\). Besides evaluating the geodesics \(\gamma _{x^{(k)}, d^{(k)}}(\tau ^{(k)})\) in the first iteration step, we have to compute the parallel transport of \(d^{(k)}\) along the geodesics in the second step. Furthermore, we need to compute the Riemannian gradient \(\nabla _{{\mathbb {X}}^N}F\) and products of the Hessian \(H_{{\mathbb {X}}^N} F\) with vectors d, which are approximated by the finite difference

$$\begin{aligned} \mathrm H_{{\mathbb {X}}^{N}}F(x) d \approx \tfrac{\Vert d\Vert }{h} \Bigl (\nabla _{{\mathbb {X}}^{N}}F\bigl (\gamma _{x,h d/\Vert d\Vert }\bigr ) - \nabla _{{\mathbb {X}}^{N}}F(x)\Bigr ), \qquad h:=10^{-8}. \end{aligned}$$

The computation of the gradient of the penalty term in (30) is done by applying the chain rule and noting that for \(x \mapsto {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x,y)\), we have \(\nabla _{\mathbb {X}}{{\,\mathrm{dist}\,}}_{\mathbb {X}}(x,y) = -\log _x y/{{\,\mathrm{dist}\,}}_{\mathbb {X}}(x,y)\), \(x\not = y\), with the logarithmic map \(\log \) on \({\mathbb {X}}\), while the distance is not differentiable for \(x=y\). Concerning the latter point, see Remark 5. The evaluation of the gradient of the penalty term at a point in \({\mathbb {X}}^N\) requires only \({\mathcal {O}}(N)\) arithmetic operations. The Riemannian gradient of the data term in (30) is computed analytically via the gradients of the eigenfunctions \(\varphi _k\) of the Laplace–Beltrami operator. The evaluation of the gradient of the whole data term at given points can then be done efficiently by fast Fourier transform (FFT) techniques at non-equispaced nodes using the NFFT software package of Potts et al. [56]. The overall complexity of the algorithm and references for the computational details for the above manifolds are given in Table 1.
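On the flat torus, where the geodesic \(\gamma _{x,v}(h)\) is just \(x + hv\) mod 1, the finite-difference approximation of the Hessian-vector product reads as follows (with a hypothetical smooth objective of our choosing standing in for F):

```python
import numpy as np

def grad_F(x):
    # gradient of the stand-in objective F(x) = sum_i sin(2*pi*x_i)^2 on T^n
    return 2.0 * np.pi * np.sin(4.0 * np.pi * x)

def hess_vec(x, d, h=1e-8):
    """Finite-difference approximation of H F(x) d on the flat torus."""
    nd = np.linalg.norm(d)
    step = (x + h * d / nd) % 1.0            # geodesic step exp_x(h d/|d|)
    return nd / h * (grad_F(step) - grad_F(x))

x = np.array([0.1, 0.3])
d = np.array([1.0, 2.0])
hv = hess_vec(x, d)
# The exact Hessian here is diagonal with entries 8*pi^2*cos(4*pi*x_i).
assert np.allclose(hv, 8.0 * np.pi**2 * np.cos(4.0 * np.pi * x) * d, rtol=1e-4)
```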

Table 1 References for implementation details of Algorithm 1 (left) and arithmetic complexity for the evaluations per iteration for the different manifolds (right)

8 Numerical Results

In this section, we underline our theoretical results by numerical examples. We start by studying the parameter choice in our numerical model. Then, we provide examples for the approximation of absolutely continuous measures with densities in \(H^s({\mathbb {X}})\), \(s > d/2\), by push-forward measures of the Lebesgue measure on [0, 1] by Lipschitz curves for the manifolds \({\mathbb {X}}\in \{{\mathbb {T}}^2, {\mathbb {T}}^3, {\mathbb {S}}^2,\mathrm{SO}(3),{\mathcal {G}}_{2,4}\}\). Supplementary material can be found on our webpage.

8.1 Parameter Choice

We would like to emphasize that the optimization problem (31) is highly nonlinear and the objective function has a large number of local minimizers, whose number appears to grow exponentially in N. In order to find reasonable (local) solutions of (26) for fixed L, we carefully adjust the parameters in problem (31), namely the number of points N, the polynomial degree r in the kernel truncation, and the penalty parameter \(\lambda \). In the following, we suppose that \(\mathrm {dim}(\text {supp}(\mu )) = d \ge 2\).

  1. (i)

Number of points N Clearly, N should not be too small compared to L. However, from a computational perspective it should also not be too large, since the optimization procedure is hampered by the vast number of local minimizers. From the asymptotics of the path lengths of the TSP in Lemma 3, we conclude that \(N \gtrsim \ell (\gamma )^{d/(d-1)}\) is a reasonable choice, where \(\ell (\gamma ) \le L\) is the length of the resulting curve \(\gamma \) going through the points.

  2. (ii)

Polynomial degree r Based on the proofs of the theorems in Sect. 5.4, it is reasonable to choose

    $$\begin{aligned} r \sim L^{\frac{1}{d-1}} \sim N^{\frac{1}{d}}. \end{aligned}$$
  3. (iii)

Penalty parameter \(\lambda \) If \(\lambda \) is too small, we cannot enforce that the points approximate a regular curve, i.e., \(L/N \gtrsim {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k})\). Conversely, if \(\lambda \) is too large, the optimization procedure is hampered by the rigid constraints. Hence, to find a reasonable choice for \(\lambda \) in dependence on L, we assume that the minimizers of (31) treat both terms proportionally, i.e., for \(N\rightarrow \infty \) both terms are of the same order. Therefore, our heuristic is to choose the parameter \(\lambda \) such that

    $$\begin{aligned} \min _{x_{1},\dots ,x_{N}} {\mathscr {D}}_K^{2} \Big (\mu , \frac{1}{N} \sum _{k=1}^{N} \delta _{x_{k}}\Big ) \sim N^{-\frac{2s}{d}} \sim \frac{\lambda }{N} \sum _{k=1}^{N} \big (N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k}) - L \big )_{+}^{2} . \end{aligned}$$

On the other hand, assuming that the length \(\ell (\gamma ) = \sum _{k=1}^{N}{{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k})\) of a minimizer \(\gamma \) satisfies \(\ell (\gamma ) \sim L \sim N^{(d-1)/d}\), so that \(N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k}) \sim L\), the penalty term behaves like

    $$\begin{aligned} \frac{\lambda }{N} \sum _{k=1}^{N} \big (N {{\,\mathrm{dist}\,}}_{\mathbb {X}}(x_{k-1},x_{k}) - L \big )_{+}^{2} \sim \lambda L^{2} \sim \lambda N^{\frac{2d-2}{d}}. \end{aligned}$$

    Hence, a reasonable choice is

    $$\begin{aligned} \lambda \sim L^{\frac{-2s-2(d-1)}{d-1}} \sim N^{\frac{-2s-2(d-1)}{d}}. \end{aligned}$$
    (32)
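The scalings i)-iii) can be bundled into a small helper (our own convenience function; all constant prefactors are set to one and must still be tuned per experiment):

```python
def parameter_choice(L, d, s):
    """Heuristic scalings of N, r and lambda in terms of L, cf. (32)."""
    N = L ** (d / (d - 1))                          # number of points, Lemma 3
    r = L ** (1.0 / (d - 1))                        # kernel truncation degree
    lam = L ** ((-2 * s - 2 * (d - 1)) / (d - 1))   # penalty weight, (32)
    return N, r, lam

# Torus example d = 2, s = 3/2: N ~ L^2, r ~ L and lambda ~ L^{-5}.
N, r, lam = parameter_choice(2.0, 2, 1.5)
```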

Remark 7

In view of Remark 5, the relations in i)-iii) become

$$\begin{aligned} N \sim L^{\frac{d_{\mu }}{d_{\mu }-1}}, \quad r \sim N^{\frac{1}{d_{\mu }}} \sim L^{\frac{1}{d_{\mu }-1}}, \quad \lambda \sim L^{\frac{-2s-3d_{\mu }+d+2}{d_{\mu }-1}} \sim N^{\frac{-2s-3d_{\mu }+d+2}{d_{\mu }}}. \end{aligned}$$

In the rest of this subsection, we aim to provide some numerical evidence for the parameter choice above. We restrict our attention to the torus \({\mathbb {X}}= {\mathbb {T}}^{2}\) and the kernel K given in (34) with \(d=2\) and \(s = 3/2\). We choose \(\mu \) as the Lebesgue measure on \({\mathbb {T}}^{2}\). From (32), we obtain \(\lambda \sim N^{-5/2} \sim L^{-5}\).

Influence of N and \(\lambda \) We fix \(L=4\) and a large polynomial degree \(r=128\) for truncating the kernel. For each \(\lambda _{i}=0.1\cdot 2^{-5i/2}\), \(i=1,\dots ,4\), we compute local minimizers with \(N_{j}=10 \cdot 2^{j}\), \(j=1,\dots ,4\). More precisely, keeping \(\lambda _{i}\) fixed, we start with \(N_{1}=20\) and successively refine the curves by inserting the midpoints of the line segments connecting consecutive points and applying a local minimization with this initialization. The results are depicted in Fig. 1. For fixed \(\lambda \) (fixed row) we clearly observe that the local minimizers converge towards a smooth curve for increasing N. Moreover, the diagonal images correspond to the choice \(\lambda =0.1 (N/10)^{-5/2}\), and good approximations of the curves already emerge to the right of the diagonal. This provides some evidence that the choice of the penalty parameter \(\lambda \) and the number of points N discussed above is reasonable. Indeed, for \(\lambda \rightarrow \infty \) we observe \(L(\gamma ) \rightarrow \ell (\gamma ) \rightarrow L = 4\).

Fig. 1
figure 1

Influence of N and \(\lambda \) on local minimizers of (31) for the Lebesgue measure on \({\mathbb {T}}^2\), \(L=4\) and \(r=128\). Results for increasing N (column-wise) and decreasing \(\lambda = 0.1\cdot 2^{-5i/2}\), \(i=1,\dots ,4\) (row-wise). Here, the curve length increases for decreasing \(\lambda \) or increasing N, until stagnation for sufficiently small \(\lambda \) or large N. For all minimizers the distance between consecutive points is around \(\ell (\gamma )/N\)

Influence of the polynomial degree r In Fig. 2 we illustrate the local minimizers of (31) for fixed Lipschitz parameters \(L_{i}=2^{i}\) and corresponding regularization weights \(\lambda _{i} = 0.2 \cdot L_{i}^{-5}\), \(i=1,\dots ,4\) (rows), in dependence on the polynomial degrees \(r_{j}=8\cdot 2^{j}\), \(j=1,\dots ,5\) (columns). According to the previous experiments, it seems reasonable to choose \(N = 20 L^{2}\). Note that the (numerical) choice of \(\lambda \) leads to curves with length \(\ell (\gamma ) \approx 2L\). In Fig. 2 we observe that for \(r=c L\) the corresponding local minimizers share common features. For instance, if \(c=4\) (i.e., \(r \approx \ell (\gamma )\)), the minimizers have mostly vertical and horizontal line segments. Furthermore, for fixed r it appears that the length of the curves increases linearly with L until L exceeds 2r, after which it remains unchanged. This observation can be explained by the fact that there are curves of bounded length cr which provide exact quadratures for degree r.

Fig. 2
figure 2

Influence of r on the local minimizers of (31) for the Lebesgue measure on \({\mathbb {T}}^2\). Column-wise we increase \(r=16,32,64,128,256\) and row-wise we increase \(L=2,4,8,16\), where \(\lambda = 0.2 L^{-5}\) and \(N=20 L^{2}\). Note that the degree r steers the resolution of the curves. It appears that the spacing of the curves is bounded by \(r^{-1}\)

8.2 Quasi-Optimal Curves on Special Manifolds

In this subsection, we give numerical examples for \({\mathbb {X}}\in \{{\mathbb {T}}^2,{\mathbb {T}}^3,{\mathbb {S}}^2, \mathrm{SO}(3), {\mathcal {G}}_{2,4} \}\). Since the objective function in (31) is highly non-convex, the main problem is to find nearly optimal curves \(\gamma _{L} \in \mathcal P_{L}^{{{\,\mathrm{{\lambda }-curv}\,}}}({\mathbb {X}})\) for increasing L. Our heuristic is as follows:

  1. (i)

We start with a curve \(\gamma _{L_{0}}:[0,1]\rightarrow {\mathbb {X}}\) of small length \(\ell (\gamma ) \approx L_{0}\) and solve the problem (31) for increasing \(L_{i} = c L_{i-1}\), \(c>1\), where we choose the parameters \(N_{i}\), \(\lambda _{i}\) and \(r_{i}\) in dependence on \(L_{i}\) as described in the previous subsection. In each step a local minimizer is computed using the CG method with 100 iterations. The obtained minimizer \(\gamma _{i}\), refined by inserting the midpoints of consecutive points, then serves as the initial guess in the next step.

  2. (ii)

In case the resulting curves \(\gamma _i\) have non-constant speed, each is refined by increasing \(\lambda _{i}\) and \(N_{i}\). Then, the resulting problem is solved with the CG method and \(\gamma _i\) as initialization. Details on the parameter choice are given in the respective examples.

The following examples show that this recipe indeed enables us to compute “quasi-optimal” curves, meaning that the obtained minimizers have optimal decay in the discrepancy.

2D-Torus \({\mathbb {T}}^2\) In this example we illustrate how well a gray-value image (considered as a probability density) can be approximated by an almost constant speed curve. The original image of size \( 170\times 170 \) is depicted in the bottom-right corner of Fig. 3. Its Fourier coefficients \(\hat{\mu }_{k_{1},k_{2}}\) are computed by a discrete Fourier transform (DFT) using the FFT algorithm and normalized appropriately. The kernel K is given by (34) with \(d=2\) and \(s = 3/2\).

We start with \(N_{0}=96\) points on a circle given by the formula

$$\begin{aligned} x_{0,k} = \Bigl ( \tfrac{1}{5} \cos (2\pi k/N_{0}), \tfrac{1}{5} \sin (2\pi k/N_{0})\Bigr ), \qquad k=0,\dots ,N_{0}. \end{aligned}$$
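The initial configuration can be reproduced in a few lines (our own sketch; the index \(k = N_0\) closes the polygon, whose length is close to the circumference \(2\pi /5\) of the circle):

```python
import numpy as np

N0 = 96
k = np.arange(N0 + 1)                        # k = 0, ..., N0 closes the curve
x0 = np.stack([np.cos(2 * np.pi * k / N0) / 5,
               np.sin(2 * np.pi * k / N0) / 5], axis=1)

ell = np.linalg.norm(np.diff(x0, axis=0), axis=1).sum()   # polygon length
assert abs(ell - 2 * np.pi / 5) < 1e-3       # close to the circumference
```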

Then, we apply our procedure for \(i=0,\dots ,11\) with parameters

$$\begin{aligned} L_{i} = 0.97\cdot 2^{\frac{i+5}{2}},\quad \lambda _{i} = 100\cdot L_{i}^{-5},\quad N_{i} = 96\cdot 2^{i} \sim L_{i}^{2}, \quad r_{i}= \lfloor 2^{\frac{i+11}{2}} \rfloor \sim L_{i}, \end{aligned}$$

chosen such that the length of the local minimizer \(\gamma _{i}\) satisfies \(\ell (\gamma _{i}) \approx 2^{(i+5)/2}\) and the maximal speed is close to \(L_{i}\).

To get curves \(\gamma _i\) of nearly constant speed, see ii), we increase \(\lambda _{i}\) by a factor of 100, \(N_{i}\) by a factor of 2 and set \(L_{i} :=2^{(i+5)/2}\). Then, we apply the CG method with at most 100 iterations and i restarts. The results are depicted in Fig. 3. Note that the complexity of the function evaluation in (31) scales roughly as \(N \sim L^{2}\). In Fig. 4 we observe that the decay-rate of the squared discrepancy \({\mathscr {D}}_K^{2}(\mu ,\nu )\) in dependence on the Lipschitz constant L indeed matches the theoretical findings of Theorem 8.

Fig. 3
figure 3

Local minimizers of (31) for the image at bottom right

Fig. 4
figure 4

Squared discrepancy between the measure \(\mu \) given by the image in Fig. 3 and the computed local minimizers (black dots) on \({\mathbb {T}}^{2}\) in log-scale. The blue line corresponds to the optimal decay-rate in Theorem 8

3D-Torus \({\mathbb {T}}^3\) The aim of this example is two-fold. First, it shows that the algorithm also works well in three dimensions. Second, it illustrates that we are able to approximate compact surfaces in three-dimensional space by curves. We construct a measure \(\mu \) supported around a two-dimensional surface by taking samples from Spock’s head and placing small Gaussian peaks at the sampling points, i.e., the density is given for \(x \in [-\tfrac{1}{2},\tfrac{1}{2}]^3\) by

$$\begin{aligned} \rho (x) :=c^{-1} \sum _{p \in S} \mathrm {e}^{-30000 \Vert p-x\Vert _2^2}, \qquad c :=\int _{[-\tfrac{1}{2},\tfrac{1}{2}]^3} \sum _{p \in S} \mathrm {e}^{-30000 \Vert p-x\Vert _2^2} \,\mathrm {d}x, \end{aligned}$$

where \(S\subset [-\tfrac{1}{2},\tfrac{1}{2}]^{3}\) is the discrete sampling set. From a numerical point of view, we have \(\mathrm {dim}(\text {supp}(\mu )) = 2\). The Fourier coefficients are again computed by a DFT, and the kernel K is given by (34) with \(d=3\) and \(s = 2\) so that \(H_{K} = H^{2}({\mathbb {T}}^{3})\).

We start with \(N_{0}=100\) points on a smooth curve given by the formula

$$\begin{aligned} x_{0,k} = \Bigl ( \tfrac{3}{10} \cos (2\pi k/N_{0}), \tfrac{3}{10} \sin (2\pi k/N_{0}), \tfrac{3}{10} \sin (4\pi k/N_{0})\Bigr ), \qquad k=0,\dots ,N_{0}. \end{aligned}$$

Then, we apply our procedure for \(i=0,\dots ,8\) with parameters, cf. Remark 7,

$$\begin{aligned} L_{i} = 2^{\frac{i+5}{2}},\quad \lambda _{i} = 10\cdot L_{i}^{-5},\quad N_{i} = 100 \cdot 2^{i} \sim L_{i}^{2}, \quad r_{i}= \lfloor 2^{\frac{i+5}{2}} \rfloor \sim L_{i}. \end{aligned}$$

To get curves \(\gamma _i\) of nearly constant speed, we increase \(\lambda _{i}\) by a factor of 100, \(N_{i}\) by a factor of 2 and set \(L_{i} :=2^{(i+6)/2}\). Then, we apply the CG method with at most 100 iterations and one restart to the previously found curve \(\gamma _{i}\). The results are illustrated in Fig. 5. Note that the complexity of the function evaluation in (31) scales roughly as \(N^{3/2} \sim L^{3}\). In Fig. 6 we depict the squared discrepancy \({\mathscr {D}}_K^{2}(\mu ,\nu )\) of the computed curves. For small Lipschitz constants, say \(L(\gamma ) \le 50\), we observe a decay of approximately \(L(\gamma )^{-3}\), which matches the optimal decay-rate for measures supported on surfaces as discussed in Remark 5.

Fig. 5
figure 5

Local minimizers of (31) for a measure \(\mu \) concentrated on a surface (head of Spock) in \({\mathbb {T}}^3\)

Fig. 6
figure 6

Squared discrepancy between the measure \(\mu \) given by the surface in Fig. 5 and the computed local minimizers (black dots) on \({\mathbb {T}}^{3}\) in log-scale. The blue line corresponds to the optimal decay-rate in Theorem 8

2-Sphere \({\mathbb {S}}^2\) Next, we approximate a gray-value image on the sphere \({\mathbb {S}}^{2}\) by an almost constant speed curve. The image represents the earth’s elevation data provided by MATLAB, given by samples \(\rho _{i,j}\), \( i=1,\dots ,180,\;j=1,\dots ,360\), on the grid

$$\begin{aligned} x_{i,j} :=\Bigl (\sin \bigl ( i \tfrac{\pi }{180}\bigr ) \sin \bigl (j \tfrac{\pi }{180}\bigr ), \sin \bigl ( i \tfrac{\pi }{180}\bigr ) \cos \bigl (j \tfrac{\pi }{180}\bigr ), \cos \bigl (i \tfrac{\pi }{180}\bigr )\Bigr ). \end{aligned}$$

The Fourier coefficients are computed by discretizing the Fourier integrals, i.e.,

$$\begin{aligned} {{\hat{\mu }}}_{k}^{m} :={\left\{ \begin{array}{ll} \frac{1}{180 \cdot 360} \sum _{i=1}^{180} \sum _{j=1}^{360} \rho _{i,j} \overline{Y_{k}^{m}(x_{i,j})} \sin \bigl (i \tfrac{\pi }{180}\bigr ),&{} 1\le k\le 2m+1, m \le 180,\\ 0, &{} \text {else}, \end{array}\right. } \end{aligned}$$

followed by a suitable normalization such that \({{\hat{\mu }}}_{0}^{0}=1\). The corresponding sums are efficiently computed by an adjoint non-equispaced fast spherical Fourier transform (NFSFT), see [68]. The kernel K is given by (36). Similar to the previous examples, we apply our procedure for \(i=0,\dots ,12\) with parameters

$$\begin{aligned} L_{i} = 9.7\cdot 2^{\frac{i}{2}}, \quad \lambda _{i} = 100\cdot L_{i}^{-5},\quad N_{i} = 100\cdot 2^{i} \sim L_{i}^{2}, \quad r_{i}= \lfloor L_{i} \rfloor \sim L_{i}. \end{aligned}$$

To get curves of nearly constant speed, we increase \(\lambda _{i}\) by a factor of 100, \(N_{i}\) by a factor of 2 and set \(L_{i} :=L_{0} 2^{i/2}\). Then, we apply the CG method with at most 100 iterations and one restart to the previously constructed curves \(\gamma _{i}\). The results for \(i=6,8,10,12\) are depicted in Fig. 7. Note that the complexity of the function evaluation in (31) scales roughly as \(N \sim L^{2}\). In Fig. 8 we observe that the decay-rate of the squared discrepancy \({\mathscr {D}}_K^{2}(\mu ,\nu )\) in dependence on the Lipschitz constant indeed matches the theoretical findings in Theorem 9.

Fig. 7
figure 7

Local minimizers of (31) for \(\mu \) given by the earth’s elevation data on the sphere \({\mathbb {S}}^{2}\)

Fig. 8
figure 8

Squared discrepancy between the measure \(\mu \) and the computed local minimizers (black dots) in log-scale. The blue line corresponds to the optimal decay-rate in Theorem 9

3D-Rotations \(\mathrm{SO}(3)\) There are several possibilities to parameterize the rotation group \(\mathrm{SO}(3)\). We use Euler angles, and an axis-angle representation for visualization. Euler angles \((\varphi _{1}, \theta , \varphi _{2}) \in [0,2\pi ) \times [0,\pi ] \times [0,2\pi )\) correspond to rotations \(\mathrm {Rot} ( \varphi _{1} , \theta , \varphi _{2} ) \in \mathrm{SO}(3)\) given by successive rotations around the axes \(e_3,e_2,e_3\) by the respective angles. Then, the Haar measure of \(\mathrm{SO}(3)\) is determined by

$$\begin{aligned} \,\mathrm {d}\mu _{\mathrm {SO(3)}}(\varphi _{1},\theta ,\varphi _{2}) = \tfrac{1}{8\pi ^{2}} \sin (\theta ) \,\mathrm {d}\varphi _{1} \,\mathrm {d}\theta \,\mathrm {d}\varphi _{2}. \end{aligned}$$

We are interested in the full three-dimensional doughnut

$$\begin{aligned} D = \bigl \{ \mathrm {Rot} ( \varphi _{1} , \theta , \varphi _{2} ) \;:\; 0 \le \theta \le \tfrac{\pi }{2},\; 0\le \varphi _{1},\varphi _{2} \le 2\pi \bigr \} \subset \mathrm {SO(3)}. \end{aligned}$$

Next, we want to approximate the Haar measure \(\mu = \mu _D\) restricted to D, i.e., with normalization we consider the measure defined for \(f \in C(\mathrm{SO}(3))\) by

$$\begin{aligned} \int _{\mathrm {SO(3)}} f \,\mathrm {d}\mu _{D} = \frac{1}{4\pi ^{2}}\int _{0}^{2\pi }\int _{0}^{\frac{\pi }{2}}\int _{0}^{2\pi } f (\varphi _{1},\theta ,\varphi _{2}) \sin (\theta ) \,\mathrm {d}\varphi _{1} \,\mathrm {d}\theta \,\mathrm {d}\varphi _{2}. \end{aligned}$$

The Fourier coefficients of \(\mu _D\) can be explicitly computed by

$$\begin{aligned} {{\hat{\mu }}}_{l,l'}^k = {\left\{ \begin{array}{ll} P_{k-1}(0) - P_{k+1}(0), &{} l,l' = 0,\; k\ge 0,\\ 0, &{} l,l' \ne 0, \end{array}\right. } \end{aligned}$$

where \(P_{k}\) are the Legendre polynomials. The kernel K is given by (37) with \(d=3\) and \(s=2\). For \(i=0,\dots ,8\) the parameters are chosen as

$$\begin{aligned} L_{i} = 0.93 \cdot 2^{\frac{2i + 12}{3}}, \quad \lambda _{i} = 10\cdot L_{i}^{-4},\quad N_{i} = 64 \cdot 2^{i} \sim L_{i}^{2}, \quad r_{i}= \lfloor 2^{\frac{i+9}{3}} \rfloor \sim L_{i}^{\frac{1}{2}}. \end{aligned}$$

Here, we use a CG method with 100 iterations and one restart. Step ii) turns out to be unnecessary. Note that the complexity for the function evaluations in (31) scales roughly as \(N \sim L^{3/2}\).
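The Fourier coefficients of \(\mu _D\) stated above follow, up to the normalization of the basis functions, from the classical identity \(\int _0^1 P_k(t)\,\mathrm {d}t = \bigl (P_{k-1}(0) - P_{k+1}(0)\bigr )/(2k+1)\); a quick numerical check of this identity:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Verify int_0^1 P_k(t) dt = (P_{k-1}(0) - P_{k+1}(0)) / (2k + 1).
for k in range(1, 12):
    Pk_int = Legendre.basis(k).integ()           # antiderivative of P_k
    lhs = Pk_int(1.0) - Pk_int(0.0)
    rhs = (Legendre.basis(k - 1)(0.0) - Legendre.basis(k + 1)(0.0)) / (2 * k + 1)
    assert np.isclose(lhs, rhs)
```

For even \(k \ge 2\) both sides vanish, so only odd degrees contribute.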

The constructed curves are illustrated in Fig. 9, where we utilized the following visualization: Every rotation \(R(\alpha ,r) \in \mathrm{SO}(3)\) is determined by a rotation axis \(r = (r_1,r_2,r_3) \in {\mathbb {S}}^2\) and a rotation angle \(\alpha \in [0,\pi ]\), i.e.,

$$\begin{aligned} R(\alpha ,r) x = r(r^\mathrm {T}x) + \cos (\alpha ) \left( (r \times x) \times r \right) + \sin (\alpha ) (r \times x). \end{aligned}$$

Setting \(q :=(\cos (\tfrac{\alpha }{2}), \sin (\tfrac{\alpha }{2}) r) \in {\mathbb {S}}^3\) with \(r \in {\mathbb {S}}^2\) and \(\alpha \in [0, 2\pi ]\), see (22), we observe that the same rotation is generated by \(-q = \bigl (\cos (\tfrac{2\pi - \alpha }{ 2}), \sin (\tfrac{2\pi - \alpha }{ 2}) (-r)\bigr ) \in {\mathbb {S}}^3\); in other words, \(\mathrm{SO}(3) \cong {\mathbb {S}}^3 / \{\pm 1\}\). Then, by applying the stereographic projection \(\pi (q) = (q_2,q_3,q_4)/(1+q_1)\), we map the upper hemisphere onto the three-dimensional unit ball. Note that the equator of \({\mathbb {S}}^3\) is mapped onto the sphere \({\mathbb {S}}^2\); hence, on the surface of the ball antipodal points have to be identified. In other words, the rotation \(R(\alpha , r)\) is plotted as the point

$$\begin{aligned} \pi (q) = \frac{\sin \bigl (\tfrac{\alpha }{2}\bigr )}{1 + \cos \bigl (\tfrac{\alpha }{2}\bigr )}r = \tan \bigl (\tfrac{\alpha }{4}\bigr ) r \in \mathbb R^3. \end{aligned}$$
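The axis-angle formula and the visualization map can be checked numerically (our own sketch; `rotate` implements the displayed Rodrigues-type formula, and `ball_point` the stereographic projection of the corresponding quaternion):

```python
import numpy as np

def rotate(alpha, r, x):
    # R(alpha, r) x = r(r.x) + cos(alpha)((r x x) x r) + sin(alpha)(r x x)
    rx = np.cross(r, x)
    return r * (r @ x) + np.cos(alpha) * np.cross(rx, r) + np.sin(alpha) * rx

def ball_point(alpha, r):
    # stereographic projection of q = (cos(a/2), sin(a/2) r) onto the ball
    return np.sin(alpha / 2) / (1 + np.cos(alpha / 2)) * r

alpha = 1.2
r = np.array([1.0, 2.0, 2.0]) / 3.0          # unit rotation axis
x = np.array([0.3, -0.5, 0.8])

y = rotate(alpha, r, x)
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))   # R is orthogonal
assert np.allclose(ball_point(alpha, r), np.tan(alpha / 4) * r)
assert np.linalg.norm(ball_point(alpha, r)) <= 1.0        # inside the ball
```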

In Fig. 10 we observe that the decay-rate of \({\mathscr {D}}_K^{2}(\mu ,\nu )\) in dependence on the Lipschitz constant L matches the theoretical findings in Corollary 1.

Fig. 9
figure 9

Local minimizers of (31) for the Haar measure \(\mu _{D}\) of three-dimensional doughnut D in the rotation group \(\mathrm {SO(3)}\) with a color scheme for better visibility of the 3D structure

Fig. 10
figure 10

Squared discrepancy between the measure \(\mu _D\) and the computed local minimizers (black dots) in log-scale. The blue line corresponds to the optimal decay-rate in Corollary 1

The 4-dimensional Grassmannian \({\mathcal {G}}_{2,4}\) Here, we aim to approximate the Haar measure of the Grassmannian \({\mathcal {G}}_{2,4}\) by a curve of almost constant speed. As this curve samples the space \({\mathcal {G}}_{2,4}\) quite evenly, it could be used for the grand tour, a technique to analyze high-dimensional data by their projections onto two-dimensional subspaces, cf. [5].

The kernel K is given by (38), and the Fourier coefficients of the Haar measure are \({{\hat{\mu }}}_{m,m'}^{k,k'} = \delta _{m,0}\delta _{m',0}\delta _{k,0}\delta _{k',0}\). For \(i=0,\dots ,8\) the parameters are chosen as

$$\begin{aligned} L_{i} = 0.91 \cdot 2^{\frac{3i+16}{4}}, \,\,\, \lambda _{i} = 100\cdot L_{i}^{-\frac{11}{3}}, \,\,\, N_{i} = 128 \cdot 2^{i} \sim L_{i}^{2},\,\,\, r_{i}= \lfloor 2^{\frac{3i+16}{12}} \rfloor +1 \sim L_{i}^{\frac{1}{3}}. \end{aligned}$$

Here, we use a CG method with 100 iterations and one restart. Our experiments suggest that step ii) is not necessary. Note that the complexity for the function evaluation in (31) scales roughly as \(N \sim L^{3/2}\).

The computed curves are illustrated in Fig. 11, where we use the following visualization. By Remark 8, there exists an isometric one-to-one mapping \(P :{\mathbb {S}}^2\times {\mathbb {S}}^2/ \{\pm 1\} \rightarrow {\mathcal {G}}_{2,4}\). Using this relation, we plot the point \(P(u,v) \in {\mathcal {G}}_{2,4}\) as two antipodal points \(z_{1}=u+v,\, z_{2}=-u-v \in {\mathbb {R}}^{3}\) together with the RGB color-coded vectors \(\pm u\). More precisely, \(R=(1\mp u_{1})/2\), \(G=(1\mp u_{2})/2\), \(B=(1\mp u_{3})/2\). This means that a curve \(\gamma (t) \in {\mathcal {G}}_{2,4}\) only intersects itself if the corresponding curve \(z(t) \in {\mathbb {R}}^3\) intersects itself with the same colors at the intersection point. In Fig. 12 we observe that the decay-rate of the squared discrepancy \(\mathscr {D}_K^{2}(\mu ,\nu )\) in dependence on the Lipschitz constant L indeed matches the theoretical findings in Theorem 10.

Fig. 11
figure 11

Local minimizers of (31) for the Haar measure of the Grassmannian \({\mathcal {G}}_{2,4}\)

Fig. 12
figure 12

The squared discrepancy between the Haar measure \(\mu \) and the computed local minimizers (black dots) in log-scale. Here, the blue line corresponds to the optimal decay-rate, cf. Theorem 10

9 Conclusions

In this paper, we provided approximation results for general probability measures on compact Ahlfors d-regular metric spaces \({\mathbb {X}}\) by

  1. (i)

    measures supported on continuous curves of finite length, which are actually push-forward measures of probability measures on [0, 1] by Lipschitz curves;

  2. (ii)

    push-forward measures of absolutely continuous probability measures on [0, 1] by Lipschitz curves;

  3. (iii)

    push-forward measures of the Lebesgue measure on [0, 1] by Lipschitz curves.

Our estimates rely on discrepancies between measures. In contrast to Wasserstein distances, these estimates do not suffer from the curse of dimensionality.

In approximation theory, a natural question is how the approximation rates improve as the “measures become smoother”. Therefore, we considered absolutely continuous probability measures with densities in Sobolev spaces, where we have to restrict ourselves to compact Riemannian manifolds \({\mathbb {X}}\). We proved lower estimates for all three approximation spaces i)-iii). Concerning upper estimates, we gave a result for the approximation space i). Unfortunately, we were not able to show similar results for the smaller approximation spaces ii) and iii). Nevertheless, for these cases, we could provide results for the d-dimensional torus, the d-sphere, the three-dimensional rotation group and the Grassmannian \({\mathcal {G}}_{2,4}\), which are all of interest in their own right. Numerical examples on these manifolds underline our theoretical findings.

Our results can be seen as a starting point for future research. Clearly, we want to obtain more general results also for the approximation spaces ii) and iii). We hope that our research leads to further practical applications. It would also be interesting to consider approximation spaces of measures supported on higher-dimensional submanifolds, e.g., surfaces.

Recently, results on principal component analysis (PCA) on manifolds were obtained. It may be interesting to see whether some of our approximation results can also be adapted to the setting of principal curves, cf. Remark 3. In contrast to [55, Thm. 1], which bounds the discretization error for fixed length, we were able to provide precise error bounds for the discrepancy in dependence on the Lipschitz constant L of \(\gamma \) and the smoothness of the density of \(\mu \).