Curve Based Approximation of Measures on Manifolds by Discrepancy Minimization

The approximation of probability measures on compact metric spaces and in particular on Riemannian manifolds by atomic or empirical ones is a classical task in approximation and complexity theory with a wide range of applications. Instead of point measures, we are concerned with the approximation by measures supported on Lipschitz curves. Special attention is paid to push-forward measures of Lebesgue measures on the unit interval by such curves. Using the discrepancy as distance between measures, we prove optimal approximation rates in terms of the curve's length and Lipschitz constant. Having established the theoretical convergence rates, we are interested in the numerical minimization of the discrepancy between a given probability measure and the set of push-forward measures of Lebesgue measures on the unit interval by Lipschitz curves. We present numerical examples for measures on the 2- and 3-dimensional torus, the 2-sphere, the rotation group on R^3 and the Grassmannian of all 2-dimensional linear subspaces of R^4. Our algorithm of choice is a conjugate gradient method on these manifolds, which incorporates second-order information. For efficient gradient and Hessian evaluations within the algorithm, we approximate the given measures by truncated Fourier series and use fast Fourier transform techniques on these manifolds.


Introduction
The approximation of probability measures by atomic or empirical ones based on their discrepancies is a well-examined problem in approximation and complexity theory [59,62,67] with a wide range of applications, e.g., in the derivation of quadrature rules and in the construction of designs. Recently, discrepancies were also used in image processing for dithering [46,72,77], i.e., for representing a gray-value image by a finite number of black dots, and in generative adversarial networks [28]. (Communicated by Alan Edelman. Extended author information is available on the last page of the article.)
Besides discrepancies, Optimal Transport (OT) and in particular Wasserstein distances have emerged as powerful tools to compare probability measures in recent years, see [24,81] and the references therein. In fact, so-called Sinkhorn divergences, which are computationally much easier to handle than OT, are known to interpolate between OT and discrepancies [30]. For the sample complexity of Sinkhorn divergences we refer to [37]. The rates for approximating probability measures by atomic or empirical ones with respect to Wasserstein distances depend on the dimension of the underlying spaces, see [21,58]. In contrast, approximation rates based on discrepancies can be given independently of the dimension [67], i.e., they do not suffer from the curse of dimensionality. Additionally, we should keep in mind that the computation of discrepancies does not involve a minimization problem, which is a major drawback of OT and Sinkhorn divergences. Moreover, discrepancies admit a simple description in Fourier domain and hence the use of fast Fourier transforms is possible, leading to better scalability than the aforementioned methods.
Instead of point measures, we are interested in approximations with respect to measures supported on curves. More precisely, we consider push-forward measures of probability measures ω ∈ P([0,1]) by Lipschitz curves of bounded speed, with special focus on absolutely continuous measures ω = ρλ and the Lebesgue measure ω = λ. In this paper, we focus on approximation with respect to discrepancies. For related results on quadrature and approximation on manifolds, we refer to [31,47,64,65] and the references therein. An approximation model based on the 2-Wasserstein distance was proposed in [61]. That work exploits completely different techniques than ours, both in the theoretical and the numerical part. Finally, we want to point out a relation to principal curves, which are used in computer science and graphics for approximating distributions that are approximately supported on curves [49,50,55,57]. For the interested reader, we further comment on this direction of research in Remark 3 and in the conclusions. Next, we want to motivate our framework by several potential applications:
- In MRI sampling [11,17], it is desirable to construct sampling curves with short sampling times (short curves) and high reconstruction quality. Unfortunately, these requirements usually contradict each other, and finding a good trade-off is necessary. Experiments demonstrating the power of this novel approach on a real-world scanner are presented in [60].
- For laser engraving [61] and 3D printing [20], we require nozzle trajectories based on our (continuous) input densities. Compared to the approach in [20], where points given by Lloyd's method are connected as a solution of the TSP (traveling salesman problem), our method jointly selects the points and the corresponding curve. This avoids the necessity of solving a TSP, which can be quite costly, although efficient approximations exist.
Further, it is not obvious that the fixed initial point approximation is a good starting point for constructing a curve.
- The model can be used for wire sculpture creation [2]. In view of this, our numerical experiment presented in Fig. 5 can be interpreted as a building plan for a wire sculpture of the Spock head, namely of a 2D surface. Clearly, the approach can also be used to create images similar to TSP Art [54], where images are created from points by solving the corresponding TSP.
- In a more manifold-related setting, the approach can be used for grand tour computation on G_{2,4} [5], see also our numerical experiment in Fig. 11. More technical details are provided in the corresponding section.
Our contribution is two-fold. On the theoretical side, we provide estimates of the approximation rates in terms of the maximal speed of the curve. First, we prove approximation rates for general probability measures on compact Ahlfors d-regular length spaces X. These spaces include many compact sets in the Euclidean space R^d, e.g., the unit ball or the unit cube, as well as d-dimensional compact Riemannian manifolds without boundary. The basic idea consists in combining the known convergence rates for approximation by atomic measures with cost estimates for the traveling salesman problem. As for point measures, the approximation rate L^{-d/(2d-2)} ≤ L^{-1/2} for general ω ∈ P([0,1]) and L^{-d/(3d-2)} ≤ L^{-1/3} for ω = λ in terms of the maximal Lipschitz constant (speed) L of the curves does not crucially depend on the dimension of X. In particular, the second estimate improves a result given in [18] for the torus.
If the measures fulfill additional smoothness properties, these estimates can be improved on compact, connected, d-dimensional Riemannian manifolds without boundary. Our results are formulated for absolutely continuous measures (with respect to the Riemannian measure) having densities in the Sobolev space H^s(X), s > d/2. In this setting, the optimal approximation rate becomes, roughly speaking, L^{-s/(d-1)}. Our proofs rely on a general result of Brandolini et al. [13] on the quadrature error achievable by integration with respect to a measure that exactly integrates all eigenfunctions of the Laplace-Beltrami operator with eigenvalues smaller than a fixed number. Hence, we need to construct measures supported on curves that fulfill the above exactness criterion. More precisely, we construct such curves for the d-dimensional torus T^d, the spheres S^d, the rotation group SO(3) and the Grassmannian G_{2,4}.
On the numerical side, we are interested in finding (local) minimizers of discrepancies between a given continuous measure and those from the set of push-forward measures of the Lebesgue measure by bounded Lipschitz curves. This problem is tackled numerically on T^2, T^3, S^2 as well as SO(3) and G_{2,4} by switching to the Fourier domain. The minimizers are computed using the method of conjugate gradients (CG) on manifolds, which incorporates second-order information in the form of a multiplication by the Hessian. Thanks to the approach in the Fourier domain, the required gradients and the calculations involving the Hessian can be performed efficiently by fast Fourier transform techniques at arbitrary nodes on the respective manifolds. Note that in contrast to our approach, semi-continuous OT minimization relies on Laguerre tessellations [41], which are not available in the required form on the 2-sphere, SO(3) or G_{2,4}. This paper is organized as follows: In Sect. 2 we give the necessary preliminaries on probability measures. In particular, we introduce the different sets of measures supported on Lipschitz curves that are used for the approximation. Note that measures supported on continuous curves of finite length can be equivalently characterized as push-forward measures of probability measures by Lipschitz curves. Section 3 provides the notation on reproducing kernel Hilbert spaces and discrepancies, including their representation in the Fourier domain. Section 4 contains our estimates of the approximation rates for general given measures and different approximation spaces of measures supported on curves. Following the usual lines in approximation theory, we are then concerned with the approximation of absolutely continuous measures with density functions lying in Sobolev spaces. Our main results on the approximation rates of smoother measures are contained in Sect. 5, where we distinguish between the approximation with respect to the push-forward of general measures ω ∈ P([0,1]), absolutely continuous measures and the Lebesgue measure on [0,1]. In Sect. 6 we formulate our numerical minimization problem. Our numerical algorithms of choice are briefly described in Sect. 7. For a comprehensive description of the algorithms on the different manifolds, we refer to the respective papers. Section 8 contains numerical results demonstrating the practical feasibility of our findings. Conclusions are drawn in Sect. 9. Finally, Appendix A briefly introduces the different manifolds X used in our numerical examples together with the Fourier representation of probability measures on X.

Probability Measures and Curves
In this section, the basic notation on measure spaces is provided, see [3,32], with focus on probability measures supported on curves. At this point, let us assume that X is a compact metric space endowed with a bounded non-negative Borel measure σ X ∈ M(X) such that supp(σ X ) = X. Further, we denote the metric by dist X .
Additional requirements on X are added along the way, and notation is explained below. By B(X) we denote the Borel σ-algebra on X and by M(X) the linear space of all finite signed Borel measures on X, i.e., the space of all μ : B(X) → R satisfying |μ(X)| < ∞ and, for any sequence (B_i)_{i∈N} ⊂ B(X) of pairwise disjoint sets, μ(⋃_{i∈N} B_i) = Σ_{i∈N} μ(B_i). For μ ∈ M(X), the total variation measure is defined by
|μ|(B) := sup { Σ_{i∈N} |μ(B_i)| : B = ⋃_{i∈N} B_i, B_i pairwise disjoint },
and the support of μ is the closed set
supp(μ) := { x ∈ X : |μ|(B) > 0 for every open set B containing x }.
With the norm ‖μ‖_M := |μ|(X), the space M(X) becomes a Banach space. By C(X) we denote the Banach space of continuous real-valued functions on X equipped with the norm ‖ϕ‖_{C(X)} := max_{x∈X} |ϕ(x)|. The space M(X) can be identified via Riesz' theorem with the dual space of C(X), and the weak-* topology on M(X) gives rise to the weak convergence of measures, i.e., a sequence (μ_k)_k ⊂ M(X) converges weakly to μ, written μ_k ⇀ μ, if
lim_{k→∞} ∫_X ϕ dμ_k = ∫_X ϕ dμ   for all ϕ ∈ C(X).
For a non-negative, finite measure μ, let L^p(X, μ), 1 ≤ p < ∞, be the Banach space (of equivalence classes) of complex-valued functions with norm
‖f‖_{L^p(X,μ)} := ( ∫_X |f|^p dμ )^{1/p} < ∞.
By P(X) we denote the space of Borel probability measures on X, i.e., non-negative Borel measures with μ(X) = 1. This space is weakly compact, i.e., compact with respect to the topology of weak convergence. We are interested in the approximation of measures in P(X) by probability measures supported on points and curves in X. To this end, we associate with x ∈ X the Dirac measure δ_x ∈ P(X), given by δ_x(B) = 1 if x ∈ B and δ_x(B) = 0 otherwise. The atomic probability measures at N points are defined by
P_N^atom(X) := { Σ_{j=1}^N w_j δ_{x_j} : w_j ≥ 0, Σ_{j=1}^N w_j = 1, x_j ∈ X }.
In other words, P_N^atom(X) is the collection of probability measures whose support consists of at most N points. Further restriction to equal mass distribution leads to the empirical probability measures at N points, denoted by
P_N^emp(X) := { (1/N) Σ_{j=1}^N δ_{x_j} : x_j ∈ X }.
In this work, we are interested in the approximation by measures having their support on curves. Let C([a,b], X) denote the set of closed, continuous curves γ : [a,b] → X, i.e., γ(a) = γ(b). Although our presented experiments involve solely closed curves, some applications might require open curves.
Hence, we want to point out that all of our approximation results still hold without this requirement. Upper bounds would not get worse, and we have not used the closedness for the lower bounds on the approximation rates. The length of a curve γ ∈ C([a,b], X) is given by
ℓ(γ) := sup { Σ_{i=1}^n dist_X(γ(t_{i-1}), γ(t_i)) : a = t_0 < t_1 < … < t_n = b, n ∈ N }.
If ℓ(γ) < ∞, then γ is called rectifiable. By reparametrization, see [48, Thm. 3.2], the image of any rectifiable curve in C([a,b], X) can be derived from the set of closed Lipschitz continuous curves
Lip(X) := { γ ∈ C([0,1], X) : γ Lipschitz continuous, γ(0) = γ(1) }.
The speed of a curve γ ∈ Lip(X) is defined a.e. by the metric derivative
|γ̇|(t) := lim_{h→0} dist_X(γ(t+h), γ(t)) / |h|.
The optimal Lipschitz constant L(γ) of a curve γ ∈ Lip(X) is given by L(γ) := ess sup_{t∈[0,1]} |γ̇|(t). For a curve of constant speed it holds L(γ) = ℓ(γ).
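These definitions can be illustrated numerically. The following sketch (plain NumPy; the constant-speed circle in R^2 is our own hypothetical test curve) approximates ℓ(γ) by polygonal lengths and L(γ) by maximal difference quotients:

```python
import numpy as np

def curve_length(gamma, m=100_000):
    # polygonal approximation of the length l(gamma) over [0, 1]:
    # the supremum over partitions is approached by a fine uniform partition
    t = np.linspace(0.0, 1.0, m + 1)
    p = gamma(t)
    return float(np.sum(np.linalg.norm(np.diff(p, axis=0), axis=1)))

def max_speed(gamma, m=100_000):
    # crude estimate of the optimal Lipschitz constant L(gamma)
    # via difference quotients approximating the metric derivative
    t = np.linspace(0.0, 1.0, m + 1)
    p = gamma(t)
    return float(np.max(np.linalg.norm(np.diff(p, axis=0), axis=1)) * m)

# hypothetical test curve: circle of radius 1/(2 pi) with constant speed 1,
# so l(gamma) = L(gamma) = 1
circle = lambda t: np.stack(
    [np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=-1
) / (2 * np.pi)
```

For this constant-speed curve the two quantities agree, matching the identity L(γ) = ℓ(γ) stated above.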
We aim to approximate measures in P(X) by measures from the subset
P_L^curv(X) := { ν ∈ P(X) : supp(ν) ⊆ γ([0,1]) for some γ ∈ Lip(X) with L(γ) ≤ L }.   (1)
This space is quite large, and in order to define further meaningful subsets, we derive an equivalent formulation in terms of push-forward measures.
Lemma 1 For L > 0, it holds
P_L^curv(X) = { γ_*ω : γ ∈ Lip(X), L(γ) ≤ L, ω ∈ P([0,1]) }.   (2)

Proof Let ν ∈ P_L^curv(X) as in (1). If supp(ν) consists of a single point x ∈ X only, then the constant curve γ ≡ x pushes forward an arbitrary δ_t, t ∈ [0,1], to ν, which shows that ν is contained in (2).
The set P_L^curv(X) contains P_N^atom(X) if L is sufficiently large compared to N and X is sufficiently nice, cf. Sect. 4. It is reasonable to ask for more restrictive sets of approximation measures, e.g., when ω ∈ P([0,1]) is assumed to be absolutely continuous. For absolutely continuous ω = ρλ with the Lebesgue measure λ on [0,1], we consider
P_L^a-curv(X) := { γ_*(ρλ) ∈ P(X) : γ ∈ Lip(X), L(γ) ≤ L, ρλ ∈ P([0,1]) }.
In the literature [18,61], the special case of push-forwards of the Lebesgue measure ω = λ on [0,1] by Lipschitz curves in T^d was discussed and successfully used in certain applications [11,17]. Therefore, we also consider approximations from
P_L^λ-curv(X) := { γ_*λ ∈ P(X) : γ ∈ Lip(X), L(γ) ≤ L }.
It is obvious that our probability spaces related to curves are nested,
P_L^λ-curv(X) ⊂ P_L^a-curv(X) ⊂ P_L^curv(X).
Hence, one may expect that establishing good approximation rates is most difficult for P_L^λ-curv(X) and easiest for P_L^curv(X).
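Push-forward measures γ_*λ are straightforward to sample: draw t uniformly on [0,1] and evaluate γ(t). A minimal sketch on the torus T^2, with a hypothetical (1,2)-winding closed curve of our own choosing:

```python
import numpy as np

def gamma(t):
    # hypothetical closed Lipschitz curve on T^2 = [0,1)^2: a (1,2)-winding
    # line with constant speed sqrt(5), hence L(gamma) = sqrt(5)
    return np.stack([t % 1.0, (2 * t) % 1.0], axis=-1)

def sample_pushforward(n, rng):
    # draws n samples from gamma_* lambda by pushing uniform t through gamma
    return gamma(rng.random(n))

rng = np.random.default_rng(0)
x = sample_pushforward(100_000, rng)
# Monte Carlo estimate of int f d(gamma_* lambda) = int_0^1 f(gamma(t)) dt;
# for f(x) = cos(2 pi x_1) the exact value of this integral is 0
est = np.cos(2 * np.pi * x[:, 0]).mean()
```

Integrals against γ_*λ thus reduce to one-dimensional integrals along the parameter interval, which is exactly the transformation rule used throughout the paper.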

Discrepancies and RKHS
The aim of this section is to introduce the way we quantify the distance ("discrepancy") between two probability measures. To this end, choose a continuous, symmetric function K : X × X → R that is positive definite, i.e., for any finite number n ∈ N of points x_j ∈ X, j = 1, …, n, the relation
Σ_{i=1}^n Σ_{j=1}^n a_i a_j K(x_i, x_j) ≥ 0
is satisfied for all a_j ∈ R, j = 1, …, n. We know by Mercer's theorem [23,63,76] that there exist an orthonormal basis {φ_k : k ∈ N} of L^2(X, σ_X) and non-negative coefficients (α_k)_{k∈N} ∈ ℓ^1 such that K has the Fourier expansion
K(x, y) = Σ_{k∈N} α_k φ_k(x) φ_k(y)   (3)
with absolute and uniform convergence of the right-hand side. If α_k > 0 for some k ∈ N, the corresponding function φ_k is continuous. Every function f ∈ L^2(X, σ_X) has a Fourier expansion
f = Σ_{k∈N} f̂_k φ_k,   f̂_k := ∫_X f φ_k dσ_X.
The kernel K gives rise to a reproducing kernel Hilbert space (RKHS). More precisely, the function space
H_K(X) := { f ∈ L^2(X, σ_X) : Σ_{k : α_k > 0} α_k^{-1} |f̂_k|^2 < ∞ }
equipped with the inner product and the corresponding norm
⟨f, g⟩_{H_K(X)} := Σ_{k : α_k > 0} α_k^{-1} f̂_k ĝ_k,   ‖f‖_{H_K(X)} := ⟨f, f⟩_{H_K(X)}^{1/2},   (4)
forms a Hilbert space with reproducing kernel, i.e., f(x) = ⟨f, K(x, ·)⟩_{H_K(X)} for all f ∈ H_K(X) and x ∈ X. Note that f ∈ H_K(X) implies f̂_k = 0 if α_k = 0, in which case we make the convention α_k^{-1} f̂_k = 0 in (4). The space H_K(X) is the closure of the linear span of {K(x, ·) : x ∈ X} with respect to the norm (4), and H_K(X) is continuously embedded in C(X). In particular, the point evaluations in H_K(X) are continuous.
The discrepancy D_K(μ, ν) is defined as the dual norm on H_K(X) of the linear functional f ↦ ∫_X f d(μ − ν), i.e.,
D_K(μ, ν) := sup_{‖f‖_{H_K(X)} ≤ 1} | ∫_X f dμ − ∫_X f dν |,   (5)
see [40,67]. Note that this looks similar to the 1-Wasserstein distance, where the space of test functions consists of Lipschitz continuous functions and is larger. Since ∫_X f dμ = ⟨f, ∫_X K(x, ·) dμ(x)⟩_{H_K(X)}, we obtain by Riesz's representation theorem
D_K(μ, ν) = ‖ ∫_X K(x, ·) d(μ − ν)(x) ‖_{H_K(X)},
which yields by Fubini's theorem, (3), (4) and the symmetry of K that
D_K^2(μ, ν) = ∫_X ∫_X K(x, y) d(μ − ν)(x) d(μ − ν)(y) = Σ_{k∈N} α_k |μ̂_k − ν̂_k|^2,   (6)
where the Fourier coefficients of μ, ν ∈ P(X) are well-defined for k with α_k ≠ 0 by
μ̂_k := ∫_X φ_k dμ.

Remark 1
The Fourier coefficients μ̂_k and ν̂_k depend on both K and σ_X, but the identity (6) shows that D_K(μ, ν) only depends on K. Thus, our approximation rates do not depend on the choice of σ_X. On the other hand, our numerical algorithms in Sect. 7 depend on φ_k and hence on the choice of σ_X.
If μ_n ⇀ μ and ν_n ⇀ ν as n → ∞, then also μ_n ⊗ ν_n ⇀ μ ⊗ ν. Therefore, the continuity of K implies that lim_{n→∞} D_K(μ_n, ν_n) = D_K(μ, ν), so that D_K is continuous with respect to weak convergence in both arguments. Thus, for any weakly compact subset P ⊂ P(X), the infimum inf_{ν∈P} D_K(μ, ν) is actually a minimum. All of the subsets introduced in the previous section are weakly compact.
The proof for P_L^a-curv(X) and P_L^λ-curv(X) is analogous and hence omitted.
Remark 2 Let X = T^d and assume that ĥ_k ≠ 0 for all k ∈ Z^d. We consider the special Mercer kernel
K(x, y) := Σ_{k∈Z^d} |ĥ_k|^2 e^{2πi k·(x−y)}.   (7)
By the convolution theorem for Fourier transforms it holds (h ∗ μ)^_k = ĥ_k μ̂_k, k ∈ Z^d, and we obtain by Parseval's identity for μ, ν ∈ M(T^d) and (7) that
D_K(μ, ν) = ‖h ∗ (μ − ν)‖_{L^2(T^d)}.
In image processing, metrics of this kind were considered in [18,33,77].
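On T^1, the Fourier-domain form (6) of the discrepancy is easy to evaluate directly. The following sketch uses Sobolev-type weights α_k = (1 + k^2)^{-s} of our own choosing, truncated at |k| ≤ 64, and compares the uniform measure λ with an empirical measure:

```python
import numpy as np

def discrepancy_sq(points, s=1.0, kmax=64):
    # truncated Fourier-domain evaluation of D_K^2(lambda, nu) on T^1:
    # lambda has coefficients delta_{k,0}, and the empirical measure nu
    # on `points` has coefficients mean_j exp(-2 pi i k x_j)
    k = np.arange(-kmax, kmax + 1)
    nu_hat = np.exp(-2j * np.pi * np.outer(k, points)).mean(axis=1)
    lam_hat = (k == 0).astype(float)
    alpha = (1.0 + k.astype(float) ** 2) ** (-s)  # assumed kernel weights
    return float(np.sum(alpha * np.abs(lam_hat - nu_hat) ** 2))

N = 16
equi = np.arange(N) / N                      # equispaced points
rand = np.random.default_rng(1).random(N)    # i.i.d. uniform points
```

Equispaced points annihilate all frequencies 0 < |k| < N, so only aliased frequencies k ∈ N·Z contribute; this is a one-dimensional instance of the exact-quadrature mechanism exploited later in the paper.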
Remark 3 (Relations to Principal Curves) A similar concept, sharing the common theme of "a curve which passes through the middle of a distribution" with the intention of our paper, is that of principal curves. The notion of principal curves has been developed in a statistical framework and was successfully applied in statistics and machine learning, see [38,55,57]. The idea is to generalize the concept of principal components with just one direction to so-called self-consistent (principal) curves. In the seminal paper [49], the authors showed that these principal curves γ are critical points of the energy functional
E(γ) := ∫_X ‖x − proj_γ(x)‖_2^2 dμ(x),   (8)
where μ is a given probability measure on X and proj_γ(x) := argmin_{y ∈ γ([0,1])} ‖x − y‖_2 is a projection of a point x ∈ X onto γ. This notion has also been generalized to Riemannian manifolds in [50], see also [57] for an application on the sphere. Further investigation of principal curves in the plane, cf. [27], showed that self-consistent curves are not (local) minimizers, but saddle points of (8). Moreover, the existence of such curves is established only for certain classes of measures, such as elliptical ones. By additionally constraining the length of curves minimizing (8), these unfavorable effects were eliminated, cf. [55]. In comparison to the objective (8), the discrepancy (6) averages, for fixed x ∈ X, the distance encoded by K to any point on γ, instead of averaging over the squared minimal distances to γ.

Approximation of General Probability Measures
Given μ ∈ P(X), the estimate
min_{ν ∈ P_N^atom(X)} D_K(μ, ν) ≲ N^{−1/2}   (9)
is well-known, cf. [43, Cor. 2.8].¹ Here, the constant hidden in ≲ depends on X and K but is independent of μ and N ∈ N. In this section, we are interested in approximation rates with respect to measures supported on curves.
¹ We use the symbols ≲ and ≳ to indicate that the corresponding inequalities hold up to a positive constant factor on the respective right-hand side. The notation ∼ means that both relations ≲ and ≳ hold. The dependence of the constants on other parameters shall either be explicitly stated or clear from the context.
Our approximation rates for P_L^curv(X) are based on those for P_N^atom(X) combined with estimates for the traveling salesman problem (TSP). Let TSP_X(N) denote the worst case minimal cost of a tour in a fully connected graph G of N arbitrary nodes represented by x_1, …, x_N ∈ X and edges with cost dist_X(x_i, x_j). Similarly, let MST_X(N) denote the worst case cost of the minimal spanning tree of G.
To derive suitable estimates, we require that X is Ahlfors d-regular (sometimes also called Ahlfors-David d-regular), i.e., there exists 0 < d < ∞ such that
σ_X(B_r(x)) ∼ r^d,   x ∈ X, 0 < r ≤ diam(X),   (10)
where B_r(x) := {y ∈ X : dist_X(x, y) ≤ r} and the constants in ∼ do not depend on x or r. Note that d is not required to be an integer and turns out to be the Hausdorff dimension. For X being the unit cube, the following lemma was proved in [75].

Lemma 3 If X is a compact Ahlfors d-regular metric space, then there is a constant c > 0 depending only on X such that TSP_X(N) ≤ c N^{(d−1)/d}.
Proof Using (10) and the same covering argument as in [74, Lem. 3.1], we see that for every choice of N + 1 points in X, at least two of them have distance at most c N^{−1/d}, where the constant depends on X. Let S = {x_1, …, x_N} be an arbitrary selection of N points from X. First, we choose x_i and x_j with dist_X(x_i, x_j) ≤ c N^{−1/d}. Then, we form a minimal spanning tree T of S \ {x_i} and augment the tree by adding the edge between x_i and x_j. This construction provides us with a spanning tree, and hence we can estimate MST_X(N) ≤ MST_X(N − 1) + c N^{−1/d}. Iterating the argument, we deduce MST_X(N) ≲ N^{(d−1)/d}, cf. [75]. Finally, the standard relation TSP_X(N) ≤ 2 MST_X(N) for edge costs satisfying the triangle inequality concludes the proof.
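The relation TSP_X(N) ≤ 2 MST_X(N) used at the end of the proof comes from walking the doubled spanning tree and shortcutting repeated vertices. A sketch in the Euclidean plane (Prim's algorithm plus a preorder walk; all function names are our own):

```python
import numpy as np

def mst_prim(d):
    # Prim's algorithm on a complete graph with symmetric distance matrix d;
    # returns the total tree weight and the adjacency lists of the tree
    n = len(d)
    visited = np.zeros(n, dtype=bool)
    visited[0] = True
    best = d[0].copy()            # cheapest known edge into each vertex
    parent = np.zeros(n, dtype=int)
    adj = [[] for _ in range(n)]
    total = 0.0
    for _ in range(n - 1):
        j = int(np.argmin(np.where(visited, np.inf, best)))
        visited[j] = True
        total += best[j]
        adj[parent[j]].append(j)
        adj[j].append(parent[j])
        closer = d[j] < best
        parent[closer] = j
        best = np.minimum(best, d[j])
    return total, adj

def doubled_mst_tour(d):
    # preorder walk of the MST; shortcutting the doubled tree yields a
    # closed tour of cost at most 2 * MST for metric edge costs
    total, adj = mst_prim(d)
    order, stack, seen = [], [0], set()
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            order.append(v)
            stack.extend(reversed(adj[v]))
    cost = sum(d[order[i], order[(i + 1) % len(order)]]
               for i in range(len(order)))
    return cost, total

pts = np.random.default_rng(2).random((40, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
tour_cost, mst_cost = doubled_mst_tour(dist)
```

The shortcutting step relies on the triangle inequality, exactly as in the proof above.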
To derive a curve in X from a minimal cost tour in the graph, we require the additional assumption that X is a length space, i.e., a metric space with
dist_X(x, y) = inf { ℓ(γ) : γ a continuous curve in X connecting x and y },
cf. [15,16]. Thus, for the rest of this section, we assume that X is a compact Ahlfors d-regular length space.
In this case, Lemma 3 yields the next proposition.

Proposition 1 For a compact Ahlfors d-regular length space X, there is a constant c > 0 depending only on X such that P_N^atom(X) ⊂ P_L^curv(X) whenever L ≥ c N^{(d−1)/d}.
The Hopf-Rinow Theorem for metric measure spaces, see [15, Chap. I.3] and [16, Thm. 2.5.28], yields that every pair of points x, y ∈ X can be connected by a geodesic, i.e., there is γ ∈ Lip(X) with constant speed and ℓ(γ|_{[s,t]}) = dist_X(γ(s), γ(t)) for all s ≤ t. The minimal cost tour in Lemma 3 thus leads to a closed curve γ ∈ Lip(X) with L(γ) ≲ N^{(d−1)/d} through any N given points, so that μ_N = γ_*ω ∈ P_L^curv(X) for an appropriate measure ω ∈ P_N^atom([0,1]). By Proposition 1 we can transfer approximation rates from P_N^atom(X) to P_L^curv(X).

Theorem 1 For μ ∈ P(X), it holds with a constant depending on X and K that
min_{ν ∈ P_L^curv(X)} D_K(μ, ν) ≲ L^{−d/(2d−2)}.

Proof For given L, Proposition 1 allows us to choose N ∼ L^{d/(d−1)} with P_N^atom(X) ⊂ P_L^curv(X). According to (9), we obtain
min_{ν ∈ P_L^curv(X)} D_K(μ, ν) ≤ min_{ν ∈ P_N^atom(X)} D_K(μ, ν) ≲ N^{−1/2} ∼ L^{−d/(2d−2)}.

Next, we derive approximation rates for P_L^a-curv(X) and P_L^λ-curv(X).

Theorem 2 For μ ∈ P(X), we have with a constant depending on X and K that
min_{ν ∈ P_L^λ-curv(X)} D_K(μ, ν) ≲ L^{−d/(3d−2)}.   (11)

Proof According to (9), choose ν = Σ_{i=1}^N w_i δ_{x_i} ∈ P_N^atom(X) with D_K(μ, ν) ≲ N^{−1/2}. Let these points be ordered as a solution of the corresponding TSP. Set x_0 := x_N. We construct a closed curve γ_L : [0,1] → X that rests in each x_i for a while and then rushes from x_i to x_{i+1}. As in the proof of Proposition 1, X being a compact length space enables us to choose connecting curves γ_i ∈ Lip(X) of constant speed with total length ≲ N^{(d−1)/d}. Assigning to each x_i a resting time proportional to w_i, the push-forward ν_L = (γ_L)_*λ differs from ν only by the mass spent traveling, and the related discrepancy can be estimated by
D_K(μ, ν_L) ≲ N^{−1/2} + N^{(d−1)/d} L^{−1}.   (12)
The relation (12) yields (11) by balancing both terms with N ∼ L^{2d/(3d−2)}. Note that many compact sets in R^d are compact Ahlfors d-regular length spaces with respect to the Euclidean metric and the normalized Lebesgue measure, such as the unit ball or the unit cube. Moreover, many compact connected manifolds with or without boundary satisfy these conditions. All assumptions in this section are indeed satisfied for d-dimensional connected, compact Riemannian manifolds without boundary equipped with the Riemannian metric and the normalized Riemannian measure. The latter setting is studied in the subsequent section to refine our investigations on approximation rates.
For X = T^d, the estimate
min_{ν ∈ P_L^λ-curv(T^d)} D_K(μ, ν) ≲ L^{−1/3}   (13)
was derived in [18], provided that K satisfies an additional Lipschitz condition; the constant in (13) depends on d and K. The rate coincides with our rate in (11) for d = 2 and is worse for higher dimensions, as d/(3d−2) > 1/3 for all d ≥ 3.

Approximation of Probability Measures Having Sobolev Densities
To study approximation rates in more detail, we follow the standard strategy in approximation theory and take additional smoothness properties into account. We shall therefore consider μ with a density satisfying smoothness requirements. To define suitable smoothness spaces, we make additional structural assumptions on X. Throughout the remaining part of this work, we suppose that X is a d-dimensional connected, compact Riemannian manifold without boundary equipped with the Riemannian metric dist X and the normalized Riemannian measure σ X .
In the first part of this section, we introduce the necessary background on Sobolev spaces and derive general lower bounds for the approximation rates. Then, we focus on upper bounds in the rest of the section. So far, we only have general upper bounds for P_L^curv(X). In case of the smaller spaces P_L^a-curv(X) and P_L^λ-curv(X), we have to restrict to special manifolds X in order to obtain bounds. For a better overview, all theorems related to approximation rates are named accordingly.

Sobolev Spaces and Lower Bounds
In order to define a smoothness class of functions on X, let −Δ denote the (negative) Laplace-Beltrami operator on X. It is self-adjoint on L^2(X, σ_X) and has a sequence of non-negative, non-decreasing eigenvalues (λ_k)_{k∈N} (with multiplicities) with a corresponding complete orthonormal system of smooth eigenfunctions {φ_k : k ∈ N}. Every function f ∈ L^2(X, σ_X) has a Fourier expansion f = Σ_{k∈N} f̂_k φ_k with f̂_k := ∫_X f φ_k dσ_X, and for s ≥ 0 the Sobolev space H^s(X) consists of all functions with finite norm
‖f‖_{H^s(X)} := ( Σ_{k∈N} (1 + λ_k)^s |f̂_k|^2 )^{1/2}.
For s > d/2, the space H^s(X) is continuously embedded into the space of Hölder continuous functions of degree s − d/2, and every function f ∈ H^s(X) has a uniformly convergent Fourier series, see [70, Thm. 5.7]. Actually, H^s(X), s > d/2, is a RKHS with reproducing kernel
K(x, y) = Σ_{k∈N} (1 + λ_k)^{−s} φ_k(x) φ_k(y).
Hence, the discrepancy D_K(μ, ν) satisfies (5) with H_K(X) = H^s(X). Clearly, each kernel of the above form with coefficients having the same decay as (1 + λ_k)^{−s} for k → ∞ gives rise to a RKHS that coincides with H^s(X) with an equivalent norm. Appendix A contains more details of the above discussion for the torus T^d, the sphere S^d, the special orthogonal group SO(3) and the Grassmannian G_{k,d}. Now, we are in the position to establish lower bounds on the approximation rates. Again, we want to remark that our results still hold if we drop the requirement that the approximating curves are closed.
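On the circle T^1, the eigenfunctions of −Δ are e^{2πikx} with eigenvalues λ_k = (2πk)^2, so the H^s norm can be evaluated with an FFT. A small sketch (grid size and test function are our own choices):

```python
import numpy as np

def sobolev_norm_sq(f_vals, s):
    # ||f||_{H^s(T^1)}^2 = sum_k (1 + lambda_k)^s |fhat_k|^2 with
    # lambda_k = (2 pi k)^2, computed from equispaced samples via FFT
    n = len(f_vals)
    fhat = np.fft.fft(f_vals) / n        # Fourier coefficients of f
    k = np.fft.fftfreq(n, d=1.0 / n)     # integer frequencies
    lam = (2.0 * np.pi * k) ** 2
    return float(np.sum((1.0 + lam) ** s * np.abs(fhat) ** 2))

x = np.arange(256) / 256
f = np.cos(2 * np.pi * x)
# f has coefficients fhat_{+1} = fhat_{-1} = 1/2,
# so ||f||_{H^s}^2 = (1 + 4 pi^2)^s / 2; here we take s = 1
val = sobolev_norm_sq(f, s=1.0)
```

Increasing s penalizes high frequencies more strongly, which is the mechanism behind the smoothness-dependent rates derived below.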
Theorem 3 (Lower bounds) For s > d/2, suppose that H_K(X) = H^s(X) holds with equivalent norms. Assume that μ is absolutely continuous with respect to σ_X with a continuous density ρ. Then, there are constants depending on X, K, and ρ such that
min_{ν ∈ P_N^atom(X)} D_K(μ, ν) ≳ N^{−s/d}   and   min_{ν ∈ P_L^curv(X)} D_K(μ, ν) ≳ L^{−s/(d−1)}.

Proof The proof is based on the construction of a suitable fooling function to be used in (5). Since ρ is continuous and μ is a probability measure, there are a ball B ⊂ X with σ_X(B) > 0 and a constant c > 0 such that ρ(x) ≥ c for all x ∈ B; the ball B is chosen as the support of the constructed fooling functions. We shall verify that for every ν ∈ P_N^atom(X) there is a non-negative function ϕ vanishing on supp(ν) with
∫_X ϕ dμ ≳ N^{−s/d} ‖ϕ‖_{H^s(X)},   (14)
where the constant depends on X, K, and ρ. For small enough δ we can choose 2N disjoint balls in B with diameters δ N^{−1/d}, see also [39]. For ν ∈ P_N^atom(X), there are N of these balls that do not intersect supp(ν). By putting together bump functions supported on each of these N balls, we obtain a non-negative function ϕ supported in B that vanishes on supp(ν) and satisfies (14), with a constant that depends on δ, cf. [13, Thm. 2.16]. This yields the first inequality. The inequality for P_L^curv(X) is derived in a similar way. Given a continuous curve γ : [0,1] → X of length L, choose N such that L ≤ δ N^{(d−1)/d}. By taking half of the radius of the above balls, there are 2N pairwise disjoint balls of radius (δ/2) N^{−1/d} contained in B with pairwise distances at least δ N^{−1/d}. Any curve of length at most δ N^{(d−1)/d} intersects at most N of those balls. Hence, there are N balls of radius (δ/2) N^{−1/d} that do not intersect supp(γ). As above, this yields a fooling function ϕ satisfying (14), which ends the proof.

Upper Bounds for P_L^curv(X)
In this section, we derive upper bounds that match the lower bounds in Theorem 3 for P_L^curv(X). Our analysis makes use of the following theorem, which was already proved for X = S^d in [51].
Theorem 4 [13, Thm. 2.12] Assume that ν_r ∈ P(X) provides an exact quadrature for all eigenfunctions φ_k of the Laplace-Beltrami operator with eigenvalues λ_k ≤ r^2, i.e.,
∫_X φ_k dν_r = ∫_X φ_k dσ_X   for all k with λ_k ≤ r^2.   (15)
Then, it holds for every function f ∈ H^s(X), s > d/2, with a constant depending on X and s that
| ∫_X f dσ_X − ∫_X f dν_r | ≲ r^{−s} ‖f‖_{H^s(X)}.
For our estimates it is important that the number of eigenfunctions of the Laplace-Beltrami operator on X belonging to eigenvalues λ_k ≤ r^2 is of order r^d, see [19, Chap. 6.4] and [52, Thm. 17.5.3, Cor. 17.5.8]. This is known as Weyl's estimate on the spectrum of an elliptic operator. For some special manifolds, the eigenfunctions are explicitly given in the appendix. In the following lemma, the result from Theorem 4 is rewritten in terms of discrepancies and generalized to absolutely continuous measures with densities ρ ∈ H^s(X).

Lemma 4 For s > d/2, suppose that H_K(X) = H^s(X) holds with equivalent norms and that ν_r ∈ P(X) satisfies (15). Let μ ∈ P(X) be absolutely continuous with respect to σ_X with density ρ ∈ H^s(X). For sufficiently large r, the measures ν̃_r := (ρ/β_r) ν_r ∈ P(X) with β_r := ∫_X ρ dν_r are well-defined, and there is a constant depending on X and K with
D_K(μ, ν̃_r) ≲ r^{−s} ‖ρ‖_{H^s(X)}.   (16)

Proof Note that H^s(X), s > d/2, is a Banach algebra with respect to addition and multiplication [22]; in particular,
‖ϕ ρ‖_{H^s(X)} ≲ ‖ϕ‖_{H^s(X)} ‖ρ‖_{H^s(X)}.
By Theorem 4, we obtain for all ϕ ∈ H^s(X) that
| ∫_X ϕ dμ − ∫_X ϕ ρ dν_r | = | ∫_X ϕ ρ dσ_X − ∫_X ϕ ρ dν_r | ≲ r^{−s} ‖ϕ‖_{H^s(X)} ‖ρ‖_{H^s(X)}.   (17)
In particular, this implies for ϕ ≡ 1 that
|1 − β_r| ≲ r^{−s} ‖ρ‖_{H^s(X)},   (18)
so that β_r > 0 for sufficiently large r and ν̃_r is well-defined. Then, application of the triangle inequality results in
| ∫_X ϕ dμ − ∫_X ϕ dν̃_r | ≤ | ∫_X ϕ dμ − ∫_X ϕ ρ dν_r | + |1 − 1/β_r| | ∫_X ϕ ρ dν_r |.
According to (17), the first summand is bounded by r^{−s} ‖ϕ‖_{H^s(X)} ‖ρ‖_{H^s(X)}. It remains to derive matching bounds on the second term. Hölder's inequality yields
|1 − 1/β_r| | ∫_X ϕ ρ dν_r | ≤ |1 − 1/β_r| ‖ϕ‖_{L^∞(X)} β_r = |β_r − 1| ‖ϕ‖_{L^∞(X)} ≲ r^{−s} ‖ϕ‖_{H^s(X)} ‖ρ‖_{H^s(X)},
where the last inequality is due to the embedding H^s(X) ↪ L^∞(X) and (18).
Using the previous lemma, we derive optimal approximation rates for P_N^atom(X) and P_L^curv(X).

Theorem 5 For s > d/2, suppose that H_K(X) = H^s(X) holds with equivalent norms and let μ ∈ P(X) be absolutely continuous with respect to σ_X with density ρ ∈ H^s(X). Then, there is a constant depending on X, K, and ρ with
min_{ν ∈ P_N^atom(X)} D_K(μ, ν) ≲ N^{−s/d}   (19)
and
min_{ν ∈ P_L^curv(X)} D_K(μ, ν) ≲ L^{−s/(d−1)}.   (20)

Proof By [13, Lem. 2.11] and since the Laplace-Beltrami operator has N ∼ r^d eigenfunctions belonging to eigenvalues λ_k ≤ r^2, there exists a measure ν_r ∈ P_N^atom(X) that satisfies (15). Hence, (15) is satisfied with r ∼ N^{1/d}, where the constants depend on X and K. Thus, Lemma 4 with ν̃_r ∈ P_N^atom(X) leads to (19). The assumptions of Lemma 3 are satisfied, so that analogous arguments as in the proof of Theorem 1 yield P_N^atom(X) ⊂ P_L^curv(X) for L ∼ N^{(d−1)/d}, which together with (19) implies (20).

Upper Bounds for P_L^a-curv(X) and Special Manifolds X

To establish upper bounds for the smaller space P_L^a-curv(X), a restriction to special manifolds is necessary. The basic idea consists in the construction of a curve and a related measure ν_r such that all eigenfunctions of the Laplace-Beltrami operator belonging to eigenvalues smaller than a certain value are exactly integrated by this measure, and then applying Lemma 4 for estimating the minimum of discrepancies. We begin with the torus.
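The key exactness fact used in the torus construction below, that equispaced circles integrate trigonometric polynomials exactly, can be checked numerically for d = 2. In this sketch (our own helper names), the integral of a monomial e^{2πi(k1 x1 + k2 x2)} over T^2 is compared with the average of line integrals over n horizontal circles {x2 = j/n}:

```python
import numpy as np

def avg_over_horizontal_circles(p, n, m=512):
    # average of the line integrals of p over the n circles {x2 = j/n};
    # each line integral uses the m-point trapezoidal rule, which is exact
    # for trigonometric polynomials of degree < m in x1
    t = np.arange(m) / m
    return np.mean([p(t, np.full(m, j / n)).mean() for j in range(n)])

def mono(k1, k2):
    # e^{2 pi i (k1 x1 + k2 x2)}; its integral over T^2 is 1 if
    # (k1, k2) = (0, 0) and 0 otherwise
    return lambda x1, x2: np.exp(2j * np.pi * (k1 * x1 + k2 * x2))

n = 5   # n equispaced circles are exact for all frequencies |k2| < n
```

Aliasing occurs exactly at k2 = n, where the circle average returns 1 instead of 0; this resolution limit matches the coupling between n and r in the proof.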
Theorem 6 (Torus) Let X = T^d with d ∈ N, s > d/2, and suppose that H_K(X) = H^s(X) holds with equivalent norms. Then, for any absolutely continuous measure μ ∈ P(X) with Lipschitz continuous density ρ ∈ H^s(X), there is a constant depending on d, K, and ρ with
min_{ν ∈ P_L^a-curv(X)} D_K(μ, ν) ≲ L^{−s/(d−1)}.

Proof 1. First, we construct a closed curve γ_r such that the trigonometric polynomials from Π_r(T^d), see (33) in the appendix, are exactly integrated along this curve. Clearly, the polynomials in Π_r(T^{d−1}) are exactly integrated at n^{d−1} equispaced nodes with n = 2r + 1. Then, each element in Π_r(T^d) is exactly integrated along the union of the n^{d−1} circles through these nodes in the remaining coordinate direction. The argument is repeated for every other coordinate direction, so that we end up with d n^{d−1} curves, each mapping an interval of length 1/(d n^{d−1}) to T^d. The intersection points of these curves are considered as vertices of a graph, where each vertex has 2d edges. Consequently, there exists an Euler path γ_r : [0,1] → T^d through the vertices built from all curves. It has constant speed d n^{d−1}, and the polynomials in Π_r(T^d) are exactly integrated along γ_r, i.e., ν_r := (γ_r)_*λ satisfies (15).
2. Next, we apply Lemma 4 for ν_r = (γ_r)_*λ. We observe ν̃_r = (γ_r)_*(((ρ ∘ γ_r)/β_r) λ) and deduce L((ρ ∘ γ_r)/β_r) ≤ L(γ_r) L(ρ)/β_r ≲ r^{d−1} ∼ L as β_r ∼ 1. Here, the constants depend on d, K, and ρ. Now, we provide approximation rates for X = S^d.
Finally, we derive approximation rates for X = SO(3), namely
min_{ν ∈ P_L^a-curv(SO(3))} D_K(μ, ν) ≲ L^{−s/2}
for any absolutely continuous measure μ ∈ P(SO(3)) with Lipschitz continuous density ρ ∈ H^s(SO(3)), s > 3/2, where the constant depends on K and ρ.
Proof 1. We use the fact that the sphere S^3 is a double covering of SO(3). That is, there is a surjective two-to-one mapping a : S^3 → SO(3). Moreover, we know that a : S^3 → SO(3) is a local isometry, see [42], i.e., it respects the Riemannian structures, implying the relation σ_{SO(3)} = a_*σ_{S^3}. It also maps Π_r(SO(3)) into Π_{2r}(S^3), i.e., p ∈ Π_r(SO(3)) implies p ∘ a ∈ Π_{2r}(S^3), so that
∫_{SO(3)} p d(a ∘ γ̃_r)_*(ρ̃_r λ) = ∫_{S^3} (p ∘ a) d(γ̃_r)_*(ρ̃_r λ).
Now, let γ̃_r : [0,1] → S^3 and ω̃_r be given as in the first part of the proof of Theorem 7 for d = 3, i.e., (γ̃_r)_*ω̃_r satisfies (21) with L(γ̃_r) ≲ L and ω̃_r = ρ̃_r λ with L(ρ̃_r) ≲ L.
2. The rest follows along the lines of Step 2 in the proof of Theorem 7.

Upper Bounds for P˘-curv_L(X) and Special Manifolds X

To derive upper bounds for the smallest space P˘-curv_L(X), we need the following specification of Lemma 4.

Lemma 5 For s > d/2 suppose that H K (X) = H s (X) holds with equivalent norms.
Let μ ∈ P(X) be absolutely continuous with respect to σ_X with positive density ρ ∈ H^s(X). Suppose that ν_r := γ_r*λ with γ_r ∈ Lip(X) satisfies (15), and let β_r := ∫_X ρ dν_r. Then, for sufficiently large r, the reparametrization g is well-defined and invertible. Moreover, γ̃_r := γ_r ∘ g^{−1} satisfies L(γ̃_r) ≲ L(γ_r) and (24), where the constants depend on X, K, and ρ.
Proof Since ρ is continuous, there is ε > 0 with ρ ≥ ε. To bound the Lipschitz constant L(γ̃_r), we apply the mean value theorem together with the definition of g and the fact that (g^{−1})'(s) = 1/g'(g^{−1}(s)). Using (18), the resulting bound can be further estimated for sufficiently large r. To derive (24), we aim to apply Lemma 4 with ν̃_r = γ̃_r*λ. We observe that ν̃_r has the required form, so that Lemma 4 indeed implies (24).
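The reparametrization γ̃_r = γ_r ∘ g^{−1} of Lemma 5 can be mimicked numerically. The sketch below is purely illustrative (curve, density, and discretization are hypothetical choices): it computes g(t) = (1/β)∫_0^t ρ(γ(u)) du on a grid and resamples the curve at g^{−1} of uniform parameters, so that uniform samples of the new curve distribute along the old one with density proportional to ρ.

```python
import numpy as np

def reparametrize_by_density(gamma, rho, m=2000):
    """Return samples of gamma(g^{-1}(s)) at uniform s, where
    g(t) = (1/beta) * int_0^t rho(gamma(u)) du  (cf. Lemma 5)."""
    t = np.linspace(0.0, 1.0, m)
    w = rho(gamma(t))                       # density along the curve
    G = np.concatenate(([0.0], np.cumsum(0.5 * (w[1:] + w[:-1]) * np.diff(t))))
    G /= G[-1]                              # normalize so that g(1) = 1
    s = np.linspace(0.0, 1.0, m)
    t_of_s = np.interp(s, G, t)             # invert the monotone map g
    return gamma(t_of_s)

# hypothetical example: a circle in R^2 with a nonuniform positive density
gamma = lambda t: np.stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)], axis=-1)
rho = lambda x: 1.0 + 0.9 * x[..., 0]       # positive, as required in Lemma 5
pts = reparametrize_by_density(gamma, rho)
```

Uniform parameters of the reparametrized curve then oversample the region where ρ is large, which is exactly the push-forward property used in the proof.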
In comparison with Theorem 6, we now trade the Lipschitz condition on ρ for the positivity requirement, which enables us to cover P˘-curv_L(X).

Theorem 8 (Torus) Let X = T d with d ∈ N, s > d/2 and suppose that H K (X) = H s (X) holds with equivalent norms. Then, for any absolutely continuous measure μ ∈ P(X) with positive density ρ ∈ H s (X), there is a constant depending on d, K , and ρ with min
Proof The first part of the proof is identical to the proof of Theorem 6. Instead of Lemma 4, however, we now apply Lemma 5 for γ_r and ρ_r ≡ 1. Hence, γ̃_r = γ_r ∘ g_r^{−1} satisfies L(γ̃_r) ≤ β_r d(2r + 1)^{d−1} ≲ r^{d−1}, so that γ̃_r*λ satisfies (24) and lies in P˘-curv_L(X).
The construction on X = S^d for P^{a-curv}_L(X) in the proof of Theorem 7 is not compatible with P˘-curv_L(X). Thus, the situation differs from the torus, where we used the same underlying construction and only switched from Lemma 4 to Lemma 5. We therefore present a new construction for P˘-curv_L(X), which is tailored to X = S^2. In this case, we can transfer the ideas from the torus, but with Gauss-Legendre quadrature points.
As with the torus, we now "turn" the sphere (or switch the position of φ) so that we obtain circles along orthogonal directions. This large collection of circles is indeed connected. As with the torus, each intersection point has an incoming and an outgoing arc of a circle, so that the construction corresponds to a graph in which each vertex again has an even number of edges. Hence, there is an Euler path inducing our final curve γ_r : [0, 1] → S^2 with piecewise constant speed L(γ_r) ≲ r satisfying (15).
2. Let r ∼ L. Analogously to the end of the proof of Theorem 8, Lemma 5 now yields the assertion.
To get the approximation rate for X = G 2,4 , we make use of its double covering X = S 2 × S 2 , cf. Remark 8.
Theorem 10 (Grassmannian) Let X = G_{2,4}, s > 2, and suppose that H_K(X) = H^s(X) holds with equivalent norms. Then, for any absolutely continuous measure μ ∈ P(X) with positive density ρ ∈ H^s(X), there exists a constant depending on K and ρ bounding the minimum over ν ∈ P^{a-curv}_L(X) accordingly.
Proof By Remark 8 in the appendix, we know that G_{2,4} ≅ S^2 × S^2/{±1}, so that it remains to prove the assertion for X = S^2 × S^2. There exist pairwise distinct points {x_1, …, x_N} ⊂ S^2 such that (1/N)∑_{j=1}^N δ_{x_j} satisfies (15) on S^2 with N ∼ r^2, cf. [9,10]. On the other hand, let γ̃ be the curve on S^2 constructed in the proof of Theorem 9, so that γ̃*λ satisfies (15) on S^2. It turns out that the set on S^2 × S^2 given by ⋃_{j=1}^N ({x_j} × Γ_j) ∪ (Γ_j × {x_{j+1}}) is connected. We choose γ_j := O_j γ̃ and know that the union of the trajectories Γ_j of these curves is connected. Combinatorial arguments involving Euler paths, see Theorems 6 and 9, lead to a curve γ with ℓ(γ) ≤ L(γ) ∼ N L(γ̃) ∼ r^3, so that γ*λ satisfies (15). The remaining part follows along the lines of the proof of Theorem 7.
Our approximation results can be extended to diffeomorphic manifolds, e.g., from S^2 to ellipsoids; see also the 3D-torus example in Sect. 8. To this end, recall that we can describe the Sobolev space H^s(X) using local charts, see [78, Sec. 7.2]. The exponential maps exp_x : T_x X → X give rise to local charts (B_x(r_0), exp_x^{−1}), where B_x(r_0) := {y ∈ X : dist_X(x, y) < r_0} denotes the geodesic ball around x and r_0 is the injectivity radius. If δ < r_0 is chosen small enough, there exists a uniformly locally finite covering of X by a sequence of balls (B_{x_j}(δ))_j with a corresponding smooth resolution of unity (ψ_j)_j with supp(ψ_j) ⊂ B_{x_j}(δ), see [78, Prop. 7.2.1]. Then, an equivalent Sobolev norm is obtained by summing the Euclidean Sobolev norms of the functions (ψ_j f) ∘ exp_{x_j}, each extended to R^d by zero, see [78, Thm. 7.4.5]. Using this characterization (25), we are able to transfer results from the Euclidean setting.
where the constant depends on X 2 , K , and ρ. Then, the same property holds for X 1 , where the constant additionally depends on the diffeomorphism.
Proof Let f : X_2 → X_1 denote such a diffeomorphism and ρ ∈ H^s(X_1) the density of the measure μ on X_1. Any curve γ̃ : [0, 1] → X_2 gives rise to a curve γ := f ∘ γ̃ on X_1. Denoting by J_f the Jacobian of f, note that φ∘f, (ρ∘f)|det(J_f)| ∈ H^s(X_2), see (16) and [78, Thm. 4.3.2], which is lifted to manifolds using (25). Hence, we can define a measure μ̃ on X_2 through the probability density (ρ∘f)|det(J_f)|. Choosing γ̃_L as a realization of some minimizer of inf_{ν∈P˘-curv_L} D(μ̃, ν), we can apply the approximation result for X_2 and estimate accordingly for γ_L = f ∘ γ̃_L, where the second estimate follows from [78, Thm. 4.3.2].

Remark 5
Consider a probability measure μ on X such that the dimension d_μ of its support is smaller than the dimension d of X. Then, μ does not have a density with respect to σ_X. If supp(μ) is itself a d_μ-dimensional connected, compact Riemannian manifold Y without boundary, we switch from X to Y. If, by Sobolev trace theorems and reproducing kernel Hilbert space theory, the corresponding norm equivalence holds on Y and μ has a positive density ρ ∈ H^s(Y) with respect to σ_Y, then Theorem 8 (or 9) and Proposition 2 eventually yield the analogous bound for the minimum over ν ∈ P˘-curv_L(Y). If supp(μ) is a proper subset of Y, we are able to analyze approximations with P^{a-curv}_L(Y). First, we observe that the analogue of Proposition 2 also holds for P^{a-curv}_L(X_1), P^{a-curv}_L(X_2) when the positivity assumption on ρ is replaced by the Lipschitz requirement as in Theorems 6 and 7. If, for instance, Y is diffeomorphic to T^{d_μ} or S^{d_μ} and μ has a Lipschitz continuous density ρ ∈ H^s(Y) with respect to σ_Y, then Theorems 6 and 7 together with Proposition 2 eventually yield the analogous bound for the minimum over ν ∈ P^{a-curv}_L(Y).

Discretization
In our numerical experiments, we are interested in determining minimizers of the discrepancy over ν ∈ P˘-curv_L(X), cf. (26). As X is a connected Riemannian manifold, we can approximate curves in A_L by piecewise shortest geodesics with N parts, where (27) is rewritten in the computationally more suitable form (28). This discretization is motivated by the next proposition. To this end, recall that a sequence (f_N)_{N∈N} of functions f_N : X → (−∞, +∞] is said to Γ-converge to f : X → (−∞, +∞] if the lim inf- and lim sup-inequalities below are fulfilled for each x ∈ X, see [12]. The importance of Γ-convergence lies in the fact that every cluster point of minimizers of (f_N)_{N∈N} is a minimizer of f. Note that for non-compact manifolds X an additional equi-coercivity condition would be required.

Proposition 3
The sequence (J L,N ) N ∈N is Γ -convergent with limit J L .
Proof 1. First, we verify the lim inf-inequality. Let (γ_N)_{N∈N} with lim_{N→∞} γ_N = γ, i.e., the sequence satisfies sup_{t∈[0,1]} dist_X(γ(t), γ_N(t)) → 0. By excluding the trivial case lim inf_{N→∞} J_{L,N}(γ_N) = ∞ and restricting to a subsequence (γ_{N_k})_{k∈N}, we may assume γ_{N_k} ∈ A_{L,N_k} ⊂ A_L. Since A_L is closed, we directly infer γ ∈ A_L. It holds e_N ⇀ λ, which is equivalent to the convergence of Riemann sums for f ∈ C[0, 1], and hence also γ_N*e_N ⇀ γ*λ. By the weak continuity of D²_K, we obtain the lim inf-inequality.
2. Next, we prove the lim sup-inequality, i.e., we construct a suitable recovery sequence.
In the numerical part, we use the penalized form of (28) and minimize the resulting objective (30).
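The analytic ingredient used in step 1 — the empirical measure e_N := (1/N)∑_k δ_{k/N} converges weakly to λ, equivalently Riemann sums converge for continuous f — is easy to check numerically. A small sketch with an arbitrary (hypothetical) test function:

```python
import math

def riemann_gap(f, exact, N):
    """|(1/N) * sum_k f(k/N) - int_0^1 f dt| for the equispaced nodes k/N,
    i.e. the weak-convergence error of e_N tested against f."""
    riemann = sum(f(k / N) for k in range(1, N + 1)) / N
    return abs(riemann - exact)

f = lambda t: math.sin(math.pi * t)
exact = 2.0 / math.pi  # int_0^1 sin(pi t) dt
gaps = [riemann_gap(f, exact, N) for N in (10, 100, 1000)]
assert gaps[0] > gaps[1] > gaps[2]  # Riemann sums converge, so e_N -> lambda weakly
```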

Numerical Algorithm
For a detailed overview of Riemannian optimization we refer to [69] and the books [1,79]. In order to minimize (30), we have a closer look at the discrepancy term. By (6) and (7), the discrepancy can be represented in two ways. Both formulas have pros and cons: the first allows for an exact evaluation only if the expressions Φ(x) := ∫_X K(x, y) dμ(y) and ∫_X Φ dμ can be written in closed form; in this case, the complexity scales quadratically in the number of points N. The second allows for an exact evaluation only if the kernel has a finite expansion (3); in that case, the complexity scales linearly in N. Our approach is to use kernels fulfilling H_K(X) = H^s(X), s > d/2, and to approximate them by their truncated expansion with respect to the eigenfunctions of the Laplace-Beltrami operator. Then, we finally aim to minimize the resulting objective (31).
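On the 1-torus, the second (linear-in-N) representation with a truncated kernel reads D²(μ, ν_N) ≈ ∑_{0<|k|≤r} a_k |μ̂(k) − ν̂_N(k)|² with ν̂_N(k) = (1/N)∑_j e^{−2πik x_j}. The following sketch is illustrative only; the coefficients a_k = |k|^{−2s} are a hypothetical stand-in mimicking H_K = H^s(T^1):

```python
import numpy as np

def sq_discrepancy_T1(x, mu_hat, r, s=1.5):
    """Truncated squared discrepancy on T^1 between the empirical measure
    of the points x and a measure with Fourier coefficients mu_hat(k).
    Hypothetical kernel coefficients a_k = |k|^(-2s)."""
    total = 0.0
    for k in range(-r, r + 1):
        if k == 0:
            continue  # both are probability measures: k = 0 coefficients agree
        nu_k = np.exp(-2j * np.pi * k * x).mean()  # linear cost in len(x) per mode
        total += abs(k) ** (-2 * s) * abs(mu_hat(k) - nu_k) ** 2
    return total

# uniform measure on T^1: mu_hat(k) = 0 for all k != 0
mu_hat = lambda k: 0.0
x_eq = np.arange(32) / 32.0  # equispaced points integrate all degrees < 32 exactly
x_rand = np.random.default_rng(0).random(32)
assert sq_discrepancy_T1(x_eq, mu_hat, r=31) < 1e-20
assert sq_discrepancy_T1(x_rand, mu_hat, r=31) > 1e-6
```

The contrast between equispaced and random nodes mirrors the quadrature constructions of Sect. 5.4.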

Algorithm 1 (CG Method with Restarts)
Parameters: maximal iterations k_max ∈ N. Input: a twice differentiable function F : X^N → R and iterates x^{(0)}, x^{(1)}, … ∈ X^N.
Algorithm 2 (Armijo Line Search)
Our algorithm of choice is the nonlinear conjugate gradient (CG) method with Armijo line search as outlined in Algorithm 1, with notation and implementation details described in the comments after Remark 6; see [25] for Euclidean spaces. Note that the notation in our comments is independent of the special choice of X. The proposed method uses "exact conjugacy" and the second-order derivative information provided by the Hessian. For the Armijo line search itself, the sophisticated initialization in Algorithm 2 is used, which also incorporates second-order information via the Hessian. The main advantage of the CG method is its simplicity together with fast convergence at low computational cost. Indeed, Algorithm 1, with Algorithm 2 replaced by an exact line search, converges under suitable assumptions superlinearly, more precisely dN-step quadratically, towards a local minimum.
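The structure of Algorithm 1 is easiest to see in a Euclidean surrogate, where the retraction is x + t d and parallel transport is trivial. The sketch below is not the authors' implementation: it is a minimal Fletcher–Reeves CG with periodic restarts and Armijo backtracking on a hypothetical test problem; on a manifold, x + t d would be replaced by a geodesic step and d would be transported between iterates.

```python
import numpy as np

def cg_armijo(F, gradF, x, k_max=200, restart=3, c=1e-4, tau=0.5):
    """Nonlinear CG (Fletcher-Reeves) with restarts and Armijo backtracking;
    Euclidean stand-in for the Riemannian Algorithm 1."""
    g = gradF(x)
    d = -g
    for k in range(1, k_max + 1):
        if np.linalg.norm(g) < 1e-10:
            break
        if g @ d >= 0:
            d = -g                       # safeguard: ensure a descent direction
        t = 1.0
        while F(x + t * d) > F(x) + c * t * (g @ d):  # Armijo condition
            t *= tau
        x = x + t * d
        g_new = gradF(x)
        if k % restart == 0:
            d = -g_new                   # periodic restart, as in Algorithm 1
        else:
            beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves update
            d = -g_new + beta * d
        g = g_new
    return x

# hypothetical quadratic test problem
A = np.diag([1.0, 2.0, 4.0])
F = lambda x: 0.5 * x @ A @ x
gradF = lambda x: A @ x
x_star = cg_armijo(F, gradF, np.array([1.0, 1.0, 1.0]))
assert np.linalg.norm(x_star) < 1e-6
```

The restart frequency and Armijo constants are illustrative choices, not the parameters used in the paper's experiments.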

Remark 6
The objective in (31) violates the smoothness requirements whenever x_{k−1} = x_k or dist_X(x_{k−1}, x_k) = L/N. However, we observe numerically that local minimizers of (31) do not belong to this set of measure zero. This means in turn that, if a local minimizer has a positive definite Hessian, then there is a local neighborhood in which the CG method (with exact line search) exhibits a superlinear convergence rate. We indeed observe this behavior in our numerical experiments.
Let us briefly comment on Algorithm 1 for X ∈ {T^2, T^3, S^2, SO(3), G_{2,4}}, which are considered in our numerical examples. For additional implementation details we refer to [43]. By γ_{x,d} we denote the geodesic with γ_{x,d}(0) = x and γ̇_{x,d}(0) = d. Besides evaluating the geodesics γ_{x^{(k)},d^{(k)}}(τ^{(k)}) in the first iteration step, we have to compute the parallel transport of d^{(k)} along the geodesics in the second step. Furthermore, we need the Riemannian gradient ∇_{X^N} F and products of the Hessian H_{X^N} F with vectors d, which are approximated by a finite difference of gradients. The gradient of the penalty term in (30) is computed by applying the chain rule and noting that, for x ↦ dist_X(x, y), we have ∇_X dist_X(x, y) = log_x y / dist_X(x, y) for x ≠ y with the logarithmic map log on X, while the distance is not differentiable for x = y; concerning the latter point, see Remark 6. The evaluation of the gradient of the penalty term at a point in X^N requires only O(N) arithmetic operations. The Riemannian gradient of the data term in (30) is computed analytically via the gradients of the eigenfunctions φ_k of the Laplace-Beltrami operator. Then, the gradient of the whole data term can be evaluated efficiently at given points by fast Fourier transform (FFT) techniques at non-equispaced nodes using the NFFT software package of Potts et al. [56]. The overall complexity of the algorithm and references for the computational details on the above manifolds are given in Table 1.
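The finite-difference approximation of the Hessian-vector products mentioned above can be sketched in the Euclidean case (on a manifold, x + h d becomes the geodesic γ_{x,d}(h) and the gradient is parallel-transported back); the test function here is hypothetical:

```python
import numpy as np

def hess_vec(gradF, x, d, h=1e-6):
    """Finite-difference approximation of the Hessian-vector product
    H F(x) d ~ (grad F(x + h d) - grad F(x)) / h  (Euclidean stand-in)."""
    return (gradF(x + h * d) - gradF(x)) / h

# hypothetical quadratic: grad F(x) = A x, so H F(x) d = A d exactly
A = np.array([[2.0, 1.0], [1.0, 3.0]])
gradF = lambda x: A @ x
x = np.array([0.3, -0.7])
d = np.array([1.0, 2.0])
assert np.allclose(hess_vec(gradF, x, d), A @ d, atol=1e-4)
```

One gradient evaluation per product keeps the cost of the second-order information comparable to a gradient step.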

Parameter Choice
We would like to emphasize that the optimization problem (31) is highly nonlinear and that the number of local minimizers of the objective function appears to increase exponentially in N. In order to find reasonable (local) solutions of (26) for fixed L, we carefully adjust the parameters of problem (31), namely the number of points N, the polynomial degree r in the kernel truncation, and the penalty parameter λ. In the following, we suppose that dim(supp(μ)) = d ≥ 2.
(i) Number of points N. Clearly, N should not be too small compared to L. From a computational perspective, however, it should also not be too large, since the optimization procedure is hampered by the vast number of local minimizers. From the asymptotics of the path lengths of the TSP in Lemma 3, we conclude that N ∼ L^{d/(d−1)} is a reasonable choice, where ℓ(γ) ≤ L is the length of the resulting curve γ going through the points.
(ii) Polynomial degree r. Based on the proofs of the theorems in Sect. 5.4, it is reasonable to couple the degree to the Lipschitz constant via L ∼ r^{d−1}.
(iii) Penalty parameter λ. If λ is too small, we cannot enforce that the points approximate a regular curve, i.e., L/N ≈ dist_X(x_{k−1}, x_k). Otherwise, if λ is too large, the optimization procedure is hampered by the rigid constraints. Hence, to find a reasonable choice of λ in dependence on L, we assume that the minimizers of (31) treat both terms proportionally, i.e., for N → ∞ both terms are of the same order. Therefore, our heuristic is to choose the parameter λ such that both terms balance, which leads to the choice (32).
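For X = T^2 (d = 2), the concrete values reported in the experiments below are N = 20 L², λ = 0.1 (N/10)^{−5/2} ∼ L^{−5}, and r coupled linearly to L. A small helper collecting these choices (the constants c_N and c_r are empirical values from the reported runs, not derived quantities):

```python
def parameters_T2(L, c_N=20.0, c_r=4.0):
    """Heuristic parameter choice for X = T^2 as used in the experiments:
    N ~ L^2 points, penalty lambda ~ N^(-5/2) ~ L^(-5), degree r ~ L.
    The constants c_N, c_r are empirical, taken from the reported runs."""
    N = int(round(c_N * L ** 2))
    lam = 0.1 * (N / 10.0) ** (-5.0 / 2.0)
    r = int(round(c_r * L))
    return N, lam, r

N, lam, r = parameters_T2(4.0)
assert N == 320
```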

Remark 7 In view of Remark 5, the relations in (i)-(iii) become the analogous ones with d replaced by dim(supp(μ)).
In the rest of this subsection, we aim to provide some numerical evidence for the parameter choice above. We restrict our attention to the torus X = T^2 and the kernel K given in (34) with d = 2 and s = 3/2, and choose μ as the Lebesgue measure on T^2. From (32), we should keep in mind λ ∼ N^{−5/2} ∼ L^{−5}.
Influence of N and λ. We fix L = 4 and a large polynomial degree r = 128 for truncating the kernel. For each λ_i = 0.1 · 2^{−5i/2}, i = 1, …, 4, we compute local minimizers with N_j = 10 · 2^j, j = 1, …, 4. More precisely, keeping λ_i fixed, we start with N_1 = 20 and successively refine the curves by inserting the midpoints of the line segments connecting consecutive points, applying a local minimization with this initialization. The results are depicted in Fig. 1. For fixed λ (fixed row) we clearly notice that the local minimizers converge towards a smooth curve for increasing N. Moreover, the diagonal images correspond to the choice λ = 0.1(N/10)^{−5/2}, where we can already observe good approximations of the curves emerging to the right of it. This provides some evidence that the choice of the penalty parameter λ and the number of points N discussed above is reasonable.
Influence of the polynomial degree r. In Fig. 2 we illustrate the local minimizers of (31) for fixed Lipschitz parameters L_i = 2^i and corresponding regularization weights λ_i, i = 1, …, 4 (rows), in dependence on the polynomial degrees r_j = 8 · 2^j, j = 1, …, 5 (columns). According to the previous experiments, it seems reasonable to choose N = 20L^2. Note that the (numerical) choice of λ leads to curves of length ℓ(γ) ≈ 2L. In Fig. 2 we observe that for r = cL the corresponding local minimizers have common features. For instance, if c = 4 (i.e., r ≈ ℓ(γ)), the minimizers have mostly vertical and horizontal line segments. Furthermore, for fixed r, the length of the curves appears to increase linearly with L until L exceeds 2r, from where on it remains unchanged. This observation can be explained by the fact that there are curves of bounded length cr which provide exact quadrature for degree r.
Since the objective function in (31) is highly non-convex, the main problem is to find nearly optimal curves γ_L for increasing L. Our heuristic is as follows: (i) We start with a curve γ_{L_0} : [0, 1] → X of small length ℓ(γ) ≈ L_0 and solve problem (31) for increasing L_i = cL_{i−1}, c > 1, where we choose the parameters N_i, λ_i, and r_i in dependence on L_i as described in the previous subsection. In each step, a local minimizer is computed using the CG method with 100 iterations. The obtained minimizer γ_i, refined by inserting the midpoints, then serves as the initial guess in the next step. (ii) In case the resulting curves γ_i have non-constant speed, each is refined by increasing λ_i and N_i; the resulting problem is then solved with the CG method and γ_i as initialization. Details on the parameter choice are given in the corresponding examples.
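The refinement step of this recipe inserts the midpoint of each segment of the closed polygon; on the torus, the midpoint must be taken along the shortest displacement modulo 1. A sketch for T^2 (illustrative; coordinates in [0,1)²):

```python
import numpy as np

def refine_midpoints_T2(x):
    """Insert the midpoint of each segment of a closed polygon on T^2 = [0,1)^2,
    measuring segments along the shortest displacement modulo 1."""
    x = np.asarray(x, dtype=float)
    step = (np.roll(x, -1, axis=0) - x + 0.5) % 1.0 - 0.5  # shortest displacement
    mid = (x + 0.5 * step) % 1.0
    out = np.empty((2 * len(x), 2))
    out[0::2] = x      # original vertices
    out[1::2] = mid    # inserted midpoints
    return out

square = np.array([[0.1, 0.1], [0.9, 0.1], [0.9, 0.9], [0.1, 0.9]])
refined = refine_midpoints_T2(square)
assert refined.shape == (8, 2)
```

Doubling the point count this way leaves the curve (and its length) unchanged while providing a finer initialization for the next optimization run.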
The following examples show that this recipe indeed enables us to compute "quasi-optimal" curves, meaning that the obtained minimizers attain the optimal decay of the discrepancy.
2d-Torus T^2. In this example we illustrate how well a gray-value image (considered as a probability density) can be approximated by an almost constant speed curve. The original image of size 170 × 170 is depicted in the bottom-right corner of Fig. 3. Its Fourier coefficients μ̂_{k_1,k_2} are computed by a discrete Fourier transform (DFT) using the FFT algorithm and normalized appropriately. The kernel K is given by (34) with d = 2 and s = 3/2.
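The normalization of the image coefficients can be sketched as follows: scaling the DFT so that μ̂_{0,0} = 1 makes the image a probability density on T^2 (a random stand-in image is used here for illustration):

```python
import numpy as np

def density_fourier_coefficients(img):
    """DFT coefficients of a nonnegative gray-value image viewed as a
    probability density on T^2, normalized so that mu_hat[0, 0] = 1."""
    img = np.asarray(img, dtype=float)
    mu_hat = np.fft.fft2(img)
    mu_hat = mu_hat / mu_hat[0, 0]  # normalization: total mass one
    # for a nonnegative density, all coefficients are bounded by the mass
    assert np.max(np.abs(mu_hat)) <= 1.0 + 1e-12
    return mu_hat

img = np.random.default_rng(1).random((170, 170))
mu_hat = density_fourier_coefficients(img)
assert abs(mu_hat[0, 0] - 1.0) < 1e-14
```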
We start with N_0 = 96 points on a circle given by x_{0,k} = (1/5 cos(2πk/N_0), 1/5 sin(2πk/N_0)). Then, we apply our procedure for i = 0, …, 11 with parameters chosen such that the length of the local minimizer γ_i satisfies ℓ(γ_i) ≈ 2^{(i+5)/2} and the maximal speed is close to L_i. To obtain nearly constant speed curves γ_i, see (ii), we increase λ_i by a factor of 100, N_i by a factor of 2, and set L_i := 2^{(i+5)/2}. Then, we apply the CG method with at most 100 iterations and i restarts. The results are depicted in Fig. 3. Note that the complexity of the function evaluation in (31) scales roughly as N ∼ L^2. In Fig. 4 we observe that the decay rate of the squared discrepancy D²_K(μ, ν) in dependence on the Lipschitz constant L indeed matches the theoretical findings of Theorem 8.
3D-Torus T^3. The aim of this example is two-fold. First, it shows that the algorithm also works well in three dimensions. Second, we are able to approximate a compact surface in three-dimensional space by a curve. We construct a measure μ supported around a two-dimensional surface by taking samples from Spock's head and placing small Gaussian peaks at the points of the discrete sampling set S ⊂ [−1/2, 1/2]^3. From a numerical point of view, dim(supp(μ)) = 2 holds. The Fourier coefficients are again computed by a DFT, and the kernel K is given by (34) with d = 3 and s = 2, so that H_K = H^2(T^3).
We start with N_0 = 100 points on a smooth curve given by x_{0,k} = (3/10 cos(2πk/N_0), 3/10 sin(2πk/N_0), …). Then, we apply our procedure for i = 0, …, 8 with parameters chosen according to Remark 7. To obtain nearly constant speed curves γ_i, we increase λ_i by a factor of 100, N_i by a factor of 2, and set L_i := 2^{(i+6)/2}. Then, we apply the CG method with at most 100 iterations and one restart to the previously found curve γ_i. The results are illustrated in Fig. 5. Note that the complexity of the function evaluation in (31) scales roughly as N^{3/2} ∼ L^3. In Fig. 6 we depict the squared discrepancy D²_K(μ, ν) of the computed curves. For small Lipschitz constants, say L(γ) ≤ 50, we observe a decrease of approximately L(γ)^{−3}, which matches the optimal decay rate for measures supported on surfaces as discussed in Remark 5.
2-Sphere S^2. Next, we approximate a gray-value image on the sphere S^2 by an almost constant speed curve. The image represents the earth's elevation data provided by MATLAB, given by samples ρ_{i,j}, i = 1, …, 180, j = 1, …, 360, on the grid x_{i,j} := (sin(iπ/180) sin(jπ/180), sin(iπ/180) cos(jπ/180), cos(iπ/180)). The Fourier coefficients are computed by discretizing the Fourier integrals, followed by a suitable normalization such that μ̂_0^0 = 1. The corresponding sums are computed efficiently by an adjoint non-equispaced fast spherical Fourier transform (NFSFT), see [68]. The kernel K is given by (36). Similarly to the previous examples, we apply our procedure for i = 0, …, 12. To obtain nearly constant speed curves, we increase λ_i by a factor of 100, N_i by a factor of 2, and set L_i := L_0 2^{i/2}. Then, we apply the CG method with at most 100 iterations and one restart to the previously constructed curves γ_i. The results for i = 6, 8, 10, 12 are depicted in Fig. 7. Note that the complexity of the function evaluation in (31) scales roughly as N ∼ L^2. In Fig. 8 we observe that the decay rate of the squared discrepancy D²_K(μ, ν) in dependence on the Lipschitz constant indeed matches the theoretical findings of Theorem 9.
Rotation Group SO(3). We are interested in the full three-dimensional doughnut D ⊂ SO(3) and want to approximate the Haar measure μ = μ_D restricted to D, i.e., with normalization we consider the measure defined for f ∈ C(SO(3)) by the corresponding integral. The Fourier coefficients of μ_D can be computed explicitly in terms of the Legendre polynomials P_k. The kernel K is given by (37). Here, we use a CG method with 100 iterations and one restart.
Step (ii) appears not to be necessary. Note that the complexity of the function evaluations in (31) scales roughly as N ∼ L^{3/2}. The constructed curves are illustrated in Fig. 9, where we utilize the following visualization: every rotation R(α, r) ∈ SO(3) is determined by a rotation axis r = (r_1, r_2, r_3) ∈ S^2 and a rotation angle α ∈ [0, π]. Setting q := (cos(α/2), sin(α/2) r) ∈ S^3 with r ∈ S^2 and α ∈ [0, 2π], see (22), we observe that the same rotation is generated by −q = (cos((2π−α)/2), sin((2π−α)/2)(−r)) ∈ S^3; in other words, SO(3) ≅ S^3/{±1}. Then, by applying the stereographic projection π(q) = (q_2, q_3, q_4)/(1+q_1), we map the upper hemisphere onto the three-dimensional unit ball. Note that the equator of S^3 is mapped onto the sphere S^2, hence antipodal points on the surface of the ball have to be identified. In other words, the rotation R(α, r) is plotted as the point π(q). In Fig. 10 we observe that the decay rate of D²_K(μ, ν) in dependence on the Lipschitz constant L matches the theoretical findings of Corollary 1.
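The plotting map just described — rotation to quaternion q = (cos(α/2), sin(α/2) r) ∈ S^3, then stereographic projection π(q) = (q_2, q_3, q_4)/(1 + q_1) — can be sketched directly:

```python
import math

def plot_point(alpha, r):
    """Visualization point of the rotation R(alpha, r) in SO(3):
    q = (cos(alpha/2), sin(alpha/2) * r) on S^3, followed by the
    stereographic projection pi(q) = (q2, q3, q4) / (1 + q1)."""
    c, s = math.cos(alpha / 2.0), math.sin(alpha / 2.0)
    q = (c, s * r[0], s * r[1], s * r[2])
    return tuple(qi / (1.0 + q[0]) for qi in q[1:])

# the identity rotation (alpha = 0) sits at the center of the ball
assert plot_point(0.0, (0.0, 0.0, 1.0)) == (0.0, 0.0, 0.0)
# a rotation by pi lands on the surface of the ball (antipodes identified there)
p = plot_point(math.pi, (1.0, 0.0, 0.0))
assert abs(sum(x * x for x in p) - 1.0) < 1e-12
```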
The 4-dimensional Grassmannian G_{2,4}. Here, we aim to approximate the Haar measure of the Grassmannian G_{2,4} by a curve of almost constant speed. As this curve samples the space G_{2,4} quite evenly, it could be used for the grand tour, a technique to analyze high-dimensional data by their projections onto two-dimensional subspaces, cf. [5]. The kernel K is given by (38). Here, we use a CG method with 100 iterations and one restart. Our experiments suggest that step (ii) is not necessary. Note that the complexity of the function evaluation in (31) scales roughly as N ∼ L^{3/2}. The computed curves are illustrated in Fig. 11, where we use the following visualization. By Remark 8, there exists an isometric one-to-one mapping P : S^2 × S^2/{±1} → G_{2,4}. Using this relation, we plot the point P(u, v) ∈ G_{2,4} as the two antipodal points z_1 = u + v, z_2 = −u − v ∈ R^3 together with the RGB color-coded vectors ±u, more precisely R = (1 ∓ u_1)/2, G = (1 ∓ u_2)/2, B = (1 ∓ u_3)/2. This means a curve γ(t) ∈ G_{2,4} only intersects itself if the corresponding curve z(t) ∈ R^3 intersects itself and has the same colors at the intersection point. In Fig. 12 we observe that the decay rate of the squared discrepancy D²_K(μ, ν) in dependence on the Lipschitz constant L indeed matches the theoretical findings of Theorem 10.
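The Grassmannian plotting map can be sketched as well; the assignment of the ∓ signs to the two antipodal points is an assumption of this illustration, as the text leaves the pairing implicit:

```python
def grassmann_plot(u, v):
    """Visualization of P(u, v) in G_{2,4} ~ (S^2 x S^2)/{+-1}: the two
    antipodal points z = +-(u + v) in R^3 with RGB colors (1 -+ u_i)/2."""
    z1 = tuple(ui + vi for ui, vi in zip(u, v))
    z2 = tuple(-c for c in z1)
    rgb1 = tuple((1.0 - ui) / 2.0 for ui in u)   # color attached to z1
    rgb2 = tuple((1.0 + ui) / 2.0 for ui in u)   # color attached to z2
    return (z1, rgb1), (z2, rgb2)

(z1, c1), (z2, c2) = grassmann_plot((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
assert z1 == (1.0, 0.0, 1.0) and z2 == (-1.0, 0.0, -1.0)
assert all(0.0 <= c <= 1.0 for c in c1 + c2)
```

Since u ∈ S^2 has entries in [−1, 1], the color components always lie in [0, 1], as required for RGB values.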

Conclusions
In this paper, we provided approximation results for general probability measures on compact Ahlfors d-regular metric spaces X by measures supported on Lipschitz curves. Our estimates rely on discrepancies between measures. In contrast to Wasserstein distances, these estimates do not reflect the curse of dimensionality.
In approximation theory, a natural question is how the approximation rates improve as the "measures become smoother". Therefore, we considered absolutely continuous probability measures with densities in Sobolev spaces, where we had to restrict ourselves to compact Riemannian manifolds X. We proved lower estimates for all three approximation spaces (i)-(iii). Concerning upper estimates, we gave a result for the approximation space (i). Unfortunately, we were not able to show similar results for the smaller approximation spaces (ii) and (iii). Nevertheless, for these cases, we could provide results for the d-dimensional torus, the d-sphere, the three-dimensional rotation group, and the Grassmannian G_{2,4}, which are all of interest in their own right. Numerical examples on these manifolds underline our theoretical findings.
Our results can be seen as a starting point for future research. Clearly, we want more general results also for the approximation spaces (ii) and (iii), and we hope that our research leads to further practical applications. It would also be interesting to consider approximation spaces of measures supported on higher-dimensional submanifolds, e.g., surfaces.
Recently, results on principal component analysis (PCA) on manifolds were obtained. It may be interesting to see whether some of our approximation results can also be adapted to the setting of principal curves, cf. Remark 3. In contrast to [55, Thm. 1], which bounds the discretization error for fixed length, we were able to provide precise error bounds for the discrepancy in dependence on the Lipschitz constant L of γ and the smoothness of the density of μ.
It has dimension ∑_{k=0}^r Z(d, k) = (d+2r)Γ(d+r)/(Γ(d+1)Γ(r+1)) ∼ r^d and coincides with the space of polynomials of total degree r in d variables restricted to the sphere. As kernel for H^s(S^2), s = 3/2, we use an expansion in the Legendre polynomials P_k; note that the coefficients decay like (k(k+1))^{−3/2}. For SO(3), the kernel is expressed in terms of ‖x − y‖_F and the Chebyshev polynomials U_k of the second kind. Example 4: X = G_{2,4}. For integers 1 ≤ s < r, the (s, r)-Grassmannian is the collection of all s-dimensional linear subspaces of R^r and carries the structure of a closed Riemannian manifold. By identifying a subspace with the orthogonal projector onto this subspace, the Grassmannian becomes G_{s,r} := {x ∈ R^{r×r} : x^⊤ = x, x^2 = x, rank(x) = s}.
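The identification of G_{s,r} with symmetric, idempotent, rank-s matrices can be checked directly: the orthogonal projector onto the span of orthonormal vectors satisfies all three defining conditions. A sketch for a 2-plane in R^4:

```python
import numpy as np

def projector(basis):
    """Orthogonal projector x = B B^T onto the span of the orthonormal
    columns of B; then x^T = x, x^2 = x and rank(x) = s."""
    B = np.asarray(basis, dtype=float)
    return B @ B.T

B = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])  # 2-plane in R^4
x = projector(B)
assert np.allclose(x, x.T)             # symmetric: x^T = x
assert np.allclose(x @ x, x)           # idempotent: x^2 = x
assert np.linalg.matrix_rank(x) == 2   # rank s = 2
```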