Ollivier Curvature of Random Geometric Graphs Converges to Ricci Curvature of Their Riemannian Manifolds

Curvature is a fundamental geometric characteristic of smooth spaces. In recent years, different notions of curvature have been developed for combinatorial discrete objects such as graphs. However, the connections between such discrete notions of curvature and their smooth counterparts remain poorly understood. In particular, it is not rigorously known whether any notion of graph curvature converges to any traditional notion of curvature of a smooth space. Here we prove that in proper settings the Ollivier-Ricci curvature of random geometric graphs in Riemannian manifolds converges to the Ricci curvature of the manifold. This is the first rigorous result linking the curvature of random graphs to the curvature of smooth spaces. Our results hold for different notions of graph distance, including the rescaled shortest-path distance, and for different graph densities. Here the scaling of the average degree, as a function of the graph size, can range from nearly logarithmic to nearly linear.


Introduction
Curvature is a fundamental concept in the study of geometric spaces. It is a local parameter whose behavior often controls global phenomena on the manifold. In particular, bounds on the Ricci curvature are known to imply an array of properties, including diameter bounds, control of the spectrum, and sub-Gaussian decay of the heat kernel. If the curvature of a space is upper-bounded by a negative value, then the space has a boundary at infinity and other universal characteristics of (coarsely) hyperbolic spaces. Unfortunately, most notions of curvature are applicable only to smooth continuous spaces, such as Riemannian and pseudo-Riemannian manifolds. While there exist some combinatorial notions of curvature [6,11], none has the same power as their smooth counterparts. We refer to [25] for a general overview of discrete curvatures. The focus of this paper is graph curvature.
In [27,28,29], Yann Ollivier introduced a definition of curvature for general metric spaces as a discretization of the well-known Ricci curvature. Since this definition is applicable to any metric space, it is applicable to graphs in particular. Even though relatively recent, it has already proven to be quite influential and fruitful. In the analysis of networks, Ollivier-Ricci curvature has been used, for example, to identify communities [36], analyze cancer cells [33], assess the fragility of financial networks [34] and the robustness of brain networks [10], and to embed networks for machine learning applications [12]. Ollivier-Ricci curvature has also been analyzed for several types of (random) graphs, including Erdős-Rényi random graphs [23]. Some general bounds for this curvature have also been established based on different graph properties [23,16,4]. These and other applications of Ollivier-Ricci curvature have also stimulated general interest in graph curvature, leading to the introduction and study of many other notions of graph curvature [37,24,17,8].
An interesting aspect of Ollivier-Ricci curvature (or any other notion of discrete curvature) is that it creates a bridge between geometry and discrete structures. For example, discrete curvatures play an important role in the field of manifold learning, where the discrete objects are data points lying on some manifold, and the task is to learn the properties of the manifold from the data [1].
A related task is that of graph embedding: given a graph, find its embedding in a smooth space such that graph distances between nodes are approximated by distances in the space. Curvature has proven to be important for finding the right space to embed the graph into [12].
In addition to these classical applications, geometry has also proven to be an important and powerful concept for designing latent-space models of random graphs whose properties (such as degree distributions, clustering, and distance distributions) closely resemble those of real-world networks [20,14,19,5]. These relations between geometry and network properties inevitably lead to the question of whether characteristics of latent geometries of networks can be inferred from discrete properties of the graphs that represent these networks. Since curvature is a fundamental characteristic of geometry, it is a natural first candidate for uncovering latent geometry in networks. Hence, a proper notion of graph curvature is needed, one that is known to converge to the true curvature of the geometric space underlying the graph, if it exists.
Quantum gravity is yet another area where convergence of graph curvature is of interest. Here one wants to find a discrete geometry that converges in the continuum limit to the geometry of physical spacetime. To this end, Ollivier-Ricci curvature and its variations have been extensively investigated recently [18,40,7].
Despite the interest in Ollivier-Ricci and related curvatures of discrete and combinatorial spaces, the fundamental question of convergence remains largely open. That is, does there exist a discrete notion of curvature that converges in some limit to a traditional notion of curvature of smooth spaces? In general, such convergence may be too much to wish for since, for instance, it is known that there cannot exist any discrete version of Gaussian curvature that would converge on every triangulation of every smooth space [41].
There are, however, some positive results in this direction. One is the convergence of an angle-defect-based notion of curvature on smooth triangulations of Riemannian manifolds [6]. Another one is a manifold learning method designed for consistent estimation of the Ricci curvature of a submanifold in Euclidean space based on a point cloud sprinkled uniformly onto the submanifold [1]. Perhaps the closest result to ours is the one in [3,2], where a discrete version of the d'Alembertian operator is defined for causal sets in 2- and 4-dimensional Lorentzian manifolds. This discrete d'Alembertian is then shown to converge to the traditional d'Alembertian in the continuum limit. To the best of our knowledge, there currently exist no general convergence results for truly combinatorial objects in general, and random graphs in Riemannian manifolds in particular.
In this paper we study the question of convergence of the Ollivier-Ricci curvature of graphs. We consider random geometric graphs whose nodes are a Poisson process in a Riemannian manifold and whose edges are formed only between nodes that lie within a given distance threshold from each other in the manifold. We show that as the size of such graphs tends to infinity, their Ollivier-Ricci curvature recovers the Ricci curvature of the underlying manifold. To the best of our knowledge, this is the first result that relates a discrete notion of curvature of graphs to the continuum version of curvature of their underlying geometry.
The remainder of the paper is structured as follows. In the next section, Section 2, we introduce the basic notations and definitions needed to present our main results. We present these results in Section 3; that section ends with some general comments and an outlook. We then provide a general overview of the proof strategy in the first half of Section 4. The second half of that section contains the proofs of the main results. The final Section 5 contains all the remaining details and the proofs of intermediate results that are skipped in Section 4.
2 Notations and definitions

Geometric graphs
Given a metric space (X, d), a countable node set X ⊆ X, and a connection radius ε > 0, we define G(X, ε) as the graph whose nodes are all the elements in X. An edge between x, y ∈ X exists if and only if d(x, y) ≤ ε. Since the nodes of G are points in the metric space, we will refer to them using x and y, instead of indices i and j, and write x ∈ G if x is a node of G.
We will also use G_xy to denote the indicator of an edge between x and y in G, and define N_x to be the neighborhood of node x, i.e. N_x = {y ∈ X : y ≠ x, d(x, y) ≤ ε}.
Note that N_x = X ∩ B(x; ε), where B(x; ε) denotes the closed ball around x ∈ X of radius ε with respect to the distance d, but excluding x.
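To make the deterministic construction concrete, here is a minimal sketch (ours, not from the paper; the function names `geometric_graph` and `neighborhood` are illustrative) that builds G(X, ε) and the neighborhoods N_x for a finite point set with a user-supplied metric:

```python
import numpy as np

def geometric_graph(points, eps, dist=None):
    """Adjacency matrix of G(X, eps): an edge whenever d(x, y) <= eps, x != y.

    `points` is an (n, d) array; `dist` defaults to the Euclidean metric.
    """
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    if dist is None:
        dist = lambda x, y: np.linalg.norm(x - y)
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1, n):
            if dist(pts[i], pts[j]) <= eps:
                A[i, j] = A[j, i] = True
    return A

def neighborhood(A, i):
    """N_x: indices of the neighbors of node i (excluding i itself)."""
    return np.flatnonzero(A[i])

# Four points on a line, eps = 1: only consecutive unit-spaced points are adjacent.
A = geometric_graph([[0.0], [1.0], [2.0], [3.5]], eps=1.0)
print(neighborhood(A, 1))  # -> [0 2]
```

The quadratic pairwise loop is only for transparency; for large point sets one would use a spatial index instead.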

Random geometric graphs
In this paper we consider graphs that are constructed by randomly placing points in the metric space X, according to a Poisson process. In order to analyze a notion of curvature on these graphs, we need to impose some additional structure on X. More precisely, we will consider Riemannian manifolds as the spaces on which graphs are constructed. We briefly recall the notions of Riemannian geometry needed for the setup and refer the reader to [15,30] for more details on the topic.
Formally, a Riemannian manifold is a pair (M, g) where M is a smooth manifold and, for each x ∈ M, g_x is an inner product on the tangent space T_x M at x, depending smoothly on x. This inner product induces a distance d_M on M. Since we are mainly interested in metric spaces, we denote a Riemannian manifold by the pair (M, d_M).
Throughout the remainder of this paper we work with Riemannian manifolds that are orientable, connected and complete. The first property ensures that there exists a globally defined volume form vol_M on M, so that we can perform integration on M. For any U ⊆ M we will write vol_M(U) = ∫_U dvol_M to denote the volume of U. The second property says that, as a topological space, M cannot be separated into a union of disjoint open sets. Finally, completeness means that for any two points x, y ∈ M there exists a shortest path (geodesic) in M connecting x and y, whose length is d_M(x, y). We also note that if M is connected and compact, then it is complete. With this setup we can define a random geometric graph on a Riemannian manifold in a way analogous to classic random geometric graphs in Euclidean space.

Remark 2.1 (Conditions on the manifold). From a technical perspective, we only need the manifold to be smooth. This is because we will be working on shrinking neighborhoods of some fixed point x* ∈ M. For a sufficiently small neighborhood U, we can always construct a volume form that is well defined on U and ensure that every two points in U are connected by a geodesic path. We could then fix a sufficiently small and compact neighborhood C of x* and consider a Poisson process on C with intensity measure (n/vol_M(C)) dvol_M.
The only difference with the global setup is that we would need to frame everything in terms of sufficiently small neighborhoods and deal with possible boundary issues in our proofs. In the end, since curvature is a local property, these issues would vanish. Still, framing all results in this local setting would add additional technical layers to the proofs. For convenience, we therefore choose to present everything in terms of global and nice requirements on the manifold.
We shall next introduce a notion of curvature on random geometric graphs. Since curvature is inherently a local property, it makes sense to define curvature on graphs as a property of an edge. For our analysis we will take a more general approach and consider curvature between two fixed nodes in the graph that are connected by a path. We then analyze its behavior as the size of the graph tends to infinity.
For any x ∈ M, we write G_n(x, ε) := G(X'_n, ε), with X'_n = {x} ∪ P_n; that is, G_n(x, ε) is a random geometric graph with x added to the node set. Similarly, for any pair of points (x, y) ∈ M we write G_n(x, y, ε) := G(X'_n, ε), with X'_n = {x, y} ∪ P_n. We refer to both G_n(x, ε) and G_n(x, y, ε) as rooted random graphs.

Ollivier-Ricci curvature on graphs
The definition of Ollivier-Ricci curvature uses the Wasserstein metric (transportation distance), which we shall introduce next. Recall that a coupling between two probability measures µ_1 and µ_2 is a joint probability measure µ whose marginals are µ_1 and µ_2.

Definition 2.2. Let µ_1 and µ_2 be probability measures on a metric space (X, d) and let Γ(µ_1, µ_2) denote the set of all couplings µ between µ_1 and µ_2. Then the Wasserstein metric (Kantorovich-Rubinstein distance of order one) is given by

W_1(µ_1, µ_2) = inf_{µ ∈ Γ(µ_1, µ_2)} ∫ d(x, y) dµ(x, y).

Let G be a graph. The definition of Ollivier-Ricci curvature on graphs relies on two ingredients: a metric d_G on G and a family m = (m_x)_{x∈G} of probability measures indexed by the nodes. We write W_1 for the Wasserstein metric with respect to the metric space (G, d_G). We then define for any pair of nodes x, y ∈ G the associated Ollivier curvature as

κ(x, y; G) = 1 − W_1(m_x, m_y)/d_G(x, y)

if x and y belong to the same connected component of G, and κ(x, y; G) = 0 otherwise.

Remark 2.2.
1. The concept of Ollivier-Ricci curvature is not restricted to graphs and can be defined on any metric space equipped with a family of probability measures. A specific example is that of Riemannian manifolds (M, d_M).
2. Note that a family {m_x}_{x∈G} of probability measures on G gives rise to a random walk on the graph, with transition probabilities given by P(x_{t+1} ∈ A | x_t = x) = m_x(A). So an Ollivier-triple consists of a graph, a metric and a random walk on the graph. However, since we will only use concepts related to the measures m_x, we refrain from using random walk terminology.
3. When d_G is the shortest-path metric on G and m corresponds to the uniform probability measures on the neighborhoods N_x, i.e. m_x(y) = G_xy/|N_x|, we are in the classic setting for Ollivier-Ricci curvature on graphs [16,26,31]. In this work, however, we shall use different combinations of metrics on graphs and probability measures to obtain our results. This is why we define Ollivier-Ricci curvature on graphs in a more general way.
4. The reason why we set κ(x, y; G) = 0 if the nodes are not in the same connected component is that we work with random graphs, and this convention ensures that κ(x, y; G) is a real-valued random variable.
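To illustrate Definition 2.2 and the curvature formula on a toy instance, the following sketch (our own, not the paper's) computes W_1 by solving the transport linear program with `scipy.optimize.linprog`, and evaluates κ on the triangle graph K_3 with the shortest-path metric and uniform measures on neighborhoods:

```python
import numpy as np
from scipy.optimize import linprog

def wasserstein1(mu1, mu2, D):
    """W1 between discrete measures mu1, mu2 with cost matrix D, solved as
    the transport LP: minimize sum_ij D[i,j] * mu[i,j] over couplings mu."""
    n, m = len(mu1), len(mu2)
    c = D.reshape(-1)                       # cost of the flattened coupling
    A_eq, b_eq = [], np.concatenate([mu1, mu2])
    for i in range(n):                      # row marginals must equal mu1
        row = np.zeros((n, m)); row[i, :] = 1
        A_eq.append(row.ravel())
    for j in range(m):                      # column marginals must equal mu2
        col = np.zeros((n, m)); col[:, j] = 1
        A_eq.append(col.ravel())
    return linprog(c, A_eq=np.array(A_eq), b_eq=b_eq, bounds=(0, None)).fun

# Triangle graph K_3: shortest-path distance, uniform measures on neighbors.
D = np.ones((3, 3)) - np.eye(3)    # d(x, y) = 1 for x != y
m_x = np.array([0.0, 0.5, 0.5])    # m_0: uniform on the neighbors {1, 2}
m_y = np.array([0.5, 0.0, 0.5])    # m_1: uniform on the neighbors {0, 2}
kappa = 1 - wasserstein1(m_x, m_y, D) / 1.0
print(kappa)  # -> 0.5
```

Only half a unit of mass (the mass sitting on node 1) needs to move, so W_1 = 1/2 and κ = 1/2; a dedicated optimal-transport solver would replace the dense LP for anything beyond toy sizes.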

Curvature in Riemannian manifolds
Our main results relate the standard Ricci curvature of a manifold to the Ollivier-Ricci curvature of the random geometric graph constructed on this manifold. For this, we briefly recall the definition of the Ricci curvature; see [15,30].
In general, the curvature of a geometric space is intended as a local measure of how "different" a region of the space is from flat Euclidean space. Notions of curvature in Riemannian geometry are governed by the Riemannian curvature tensor R. Given an N-dimensional Riemannian manifold (M, d_M), a point x ∈ M and two vectors v, w ∈ T_x M (the tangent space at x), the Riemannian curvature tensor with respect to v, w is a linear map R(v, w) : T_x M → T_x M, written as u ↦ R(v, w)u and defined in terms of the Levi-Civita connection on the tangent bundle. It quantifies the extent to which the manifold M deviates locally from flat Euclidean space.
In this paper we will use the notion of curvature called Ricci curvature. For two vectors v and w, the Ricci curvature Ric(v, w) is defined, in terms of the Riemannian tensor, as the trace of the linear map u ↦ R(u, v)w. Given a point x ∈ M and a unit vector v ∈ T_x M, we often refer to Ric(v, v) as the Ricci curvature of x with respect to v. This Ricci curvature is related to another notion of curvature, called sectional curvature, which is defined as

K(v, w) = ⟨R(v, w)w, v⟩ / (⟨v, v⟩⟨w, w⟩ − ⟨v, w⟩²),

where ⟨·, ·⟩ denotes the inner product on the tangent space. One can show that Ric(v, v) is obtained by summing the sectional curvatures K(v, e_i) over an orthonormal basis e_1, ..., e_{N−1} of the orthogonal complement of v in T_x M, i.e. it equals (N − 1) times the average sectional curvature over planes containing v.
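As a standard worked example (not taken from the paper): on the round sphere $S^N_r$ of radius $r$, every sectional curvature is the same, and the Ricci curvature follows by summing over an orthonormal basis of the orthogonal complement of $v$:

```latex
K(v, w) = \frac{1}{r^2}
\quad \text{for every 2-plane } \operatorname{span}(v, w) \subset T_x S^N_r,
\qquad
\operatorname{Ric}(v, v) = \sum_{i=1}^{N-1} K(v, e_i) = \frac{N-1}{r^2},
```

for any unit vector $v \in T_x S^N_r$ and orthonormal basis $e_1, \dots, e_{N-1}$ of the orthogonal complement of $v$. In particular, the unit sphere $S^2$ has $\operatorname{Ric}(v, v) = 1$.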
In the remainder of this paper we will work with the Ricci curvature of a point x with respect to some tangent vector v. We note that a detailed understanding of curvature on Riemannian manifolds is not needed to follow the results or the proofs.

Main results
Here we state our results regarding the convergence of the Ollivier-Ricci curvature of random geometric graphs on Riemannian manifolds. We note that if the manifold dimension is N = 1, then there is nothing to prove, so we always assume that N ≥ 2.
We mainly consider two different distances on the graphs, leading to two different but related results. Although we consider different distances on graphs, we shall always consider uniform measures on balls of a certain radius. We shall clearly distinguish between the connection radius of the graph and the radius used for the uniform measures. The former is the connectivity distance threshold: if the distance between a pair of nodes in the manifold is below this threshold, then these nodes are connected by an edge in the graph. The latter is the radius of the ball (either in the graph or in the manifold) over which the uniform probability measure is distributed.
Let G_n = G_n(ε_n) be a random geometric graph on M and d_G a distance on G_n. Then, for a node x ∈ G_n, we define the graph ball of radius λ around x as B_G(x; λ) = {y ∈ G_n : d_G(x, y) ≤ λ}. Note that B_G(x; λ) depends on the definition of the graph distance d_G. For our results we consider Ollivier-triples (G_n, d_G, m^G), where m^G_x denotes the uniform probability measure on the graph ball B_G(x; δ_n). We reiterate that if ε_n = δ_n and the graph metric d_G is the shortest-path distance, then we are in the classical setting of Ollivier-Ricci curvature on graphs as considered in the past literature [16,26,31].

Graphs with manifold weighted distance
Let G n = G n (x * , ε n ) be a random rooted graph on M. Then we define the manifold weighted graph distance d w G as the weighted shortest-path distance on G n where each edge (u, v) is assigned weight d M (u, v), corresponding to the distance between the nodes on the manifold.Similar to B G (x ; λ), we denote by B w G (x ; λ) the graph ball of radius λ with respect to d w G and let m G,w = (m G,w x ) x∈G denote the uniformly measures on the balls B w G (x ; δ n ).Finally, given a point x ∈ M and a vector v ∈ T x M, we say that another point y ∈ M is at distance δ in the direction of v, if d M (x, y) = δ and y lies on the geodesic starting at x in the direction of v.
Our first result shows that for certain combinations of the connection radius ε_n and measure radius δ_n, the Ollivier-Ricci curvature on G_n converges to the Ricci curvature.

Theorem 3.1. Let N ≥ 2, (M, d_M) be a smooth, orientable, connected and compact N-dimensional Riemannian manifold, x* ∈ M and v a unit tangent vector at x*. Furthermore, let

ε_n = log(n)^a n^{−α} and δ_n = log(n)^b n^{−β},

where the constants satisfy β ≤ α and α + 2β ≤ 1/N, with a ≤ b if α = β and min{a, a + 2b} > 2/N if α + 2β = 1/N. Let y*_n ∈ M be at distance δ_n in the direction of v and let G_n = G_n(x*, y*_n, ε_n) be rooted random graphs on M. Then for the Ollivier-triple G^w_n = (G_n, d^w_{G_n}, m^{G,w}), it holds that

lim_{n→∞} E[ | (2(N + 2)/δ_n²) κ(x*, y*_n; G^w_n) − Ric(v, v) | ] = 0.

Theorem 3.1 relates two different quantities. The first is the Ollivier-Ricci curvature in the graph between the node x* and another node y*_n that is at distance δ_n from x* in the direction of the vector v. The second is the Ricci curvature of the manifold at x* in the v-direction. The theorem says that if we properly rescale the former, it converges in expectation to the latter.

Remark 3.2.
1. Note that Theorem 3.1 states that δ_n^{−2} 2(N + 2) κ(x*, y*_n; G^w_n) converges in the L¹ sense to Ric(v, v). In particular, this implies the concentration result that for every η > 0,

lim_{n→∞} P( | δ_n^{−2} 2(N + 2) κ(x*, y*_n; G^w_n) − Ric(v, v) | > η ) = 0.

2. Since ε_n, δ_n → 0, both the connectivity and measure neighborhoods of x* become smaller as n grows. Indeed, curvature is a local property, so measuring it more accurately requires smaller regions.
3. While the connectivity neighborhood of x* is shrinking, the expected number of x*'s neighbors lying in it grows with n. To see this, note that for large enough n the volume of the ball B_M(x; ε_n) around x ∈ M can be approximated by that of the N-dimensional Euclidean ball, so that for any x ∈ M the expected degree of x scales as Θ(n ε_n^N) = Θ(n^{1−αN} log(n)^{aN}). The conditions of the theorem imply that α ≤ α + 2β ≤ 1/N, so that 1 − αN ≥ 0. This means that the average degree diverges faster than logarithmically if αN < 1. More generally, the conditions of Theorem 3.1 imply that the average degree always diverges faster than log(n)².
If we consider the classic setting where the connection and measure radii are the same, ε n = δ n , then the following result is a direct consequence of Theorem 3.1.
While the conditions in this corollary imply that the average degree in G_n(x*, y*_n, δ_n) diverges faster than n^{2/3}, Theorem 3.1 works for graphs whose average degree can be almost as small as log(n)². The crucial component for establishing curvature convergence in graphs with such a smaller average degree is to use different connection and measure radii and let the connection radius decrease at a faster rate than the measure radius, i.e. ε_n ≪ δ_n.

Remark 3.4 (Extreme cases for convergence of curvature). Corollary 3.3 covers one set of extreme cases for the combination of a, b, α and β from Theorem 3.1, where we take β to be as large as possible. This means that we compute the curvature using uniform probability measures on a set of nodes that is as small as possible. For the true extreme case, let ε > 0 be arbitrarily small and define β = (1 − ε)/(3N) and b = (2 + ε)/N. Then, to calculate the curvature, we need to compute the Wasserstein metric between uniform probability measures on neighborhoods that contain Θ(log(n)^{2+ε} n^{(2+ε)/3}) nodes. The consequence, however, is that our graphs have average degree diverging at the same rate: log(n)^{2+ε} n^{(2+ε)/3}. In order to get graphs whose average degree diverges as slowly as possible, we need to consider another extreme case. Again let ε > 0 be arbitrarily small. Now we choose the constants such that α + 2β = 1/N and min{a, a + 2b} = a > 2/N, so that the result from Theorem 3.1 holds. In this case, the average degree diverges polylogarithmically, which is almost logarithmic. However, we now need to compute the Wasserstein metric with respect to the uniform measure on a number of nodes that grows almost linearly in n. That is, in order to compute curvature on graphs with almost logarithmic average degree, we need to consider the uniform probability measure on almost the entire graph.

Graphs with hop count distance
In the previous section we considered the Ollivier-Ricci curvature of graphs on Riemannian manifolds, with graph edges weighted by manifold distances. These weights encode a lot of information about the metric structure of the manifold, so it may not be terribly surprising that we can recover the manifold curvature from the graph curvature using this information. The natural question is then whether it is possible to prove convergence of the Ollivier-Ricci curvature based on shortest-path distances d^s_G in unweighted graphs. It turns out that this can be done under slightly more restrictive conditions on the connection and measure radii.
For this we define, for any random geometric graph G_n(ε_n), the rescaled shortest-path distance d*_G := ε_n d^s_G, where d^s_G denotes the hop-count (shortest-path) distance on the unweighted graph. Similar to the previous setting, we let B*_G(x; δ_n) denote the balls of radius δ_n around x ∈ G_n with respect to the metric d*_G, and define the corresponding uniform measures m^{G,*} = (m^{G,*}_x)_{x∈G} on these balls.

Theorem 3.5. Let (M, d_M) be a smooth, orientable, connected and compact 2-dimensional Riemannian manifold, x* ∈ M and v a unit tangent vector at x*. Furthermore, let ε_n and δ_n be as in Theorem 3.1, where the constants now satisfy more restrictive conditions, including a < 3b if α = 3β. Let y*_n ∈ M be at distance δ_n in the direction of v and let G_n = G_n(x*, y*_n, ε_n) be rooted random graphs on M. Then for the Ollivier-triple G*_n = (G_n, d*_{G_n}, m^{G,*}), the same convergence as in Theorem 3.1 holds, with N = 2.

Remark 3.6.
1. Note that unlike Theorem 3.1, here we do not include any information on the distances between nodes on the manifold. We only need the connection radius.
2. Observe that the theorem allows us to select an α that is arbitrarily close to 1/2. In particular, since the average degree scales as Θ(n ε_n²), it can then grow slower than any polynomial. Hence, by selecting a small β, we obtain a discrete notion of curvature that converges on graphs with almost logarithmic average degree, without using any information on the manifold.
3. Theorem 3.5 currently only works for 2-dimensional manifolds. This is because the proof relies on results for the stretch (the ratio d_G/d_M) of random geometric graphs in 2-dimensional Euclidean space [9]. Our proof techniques, however, immediately allow the results to be extended to higher dimensions, once similar stretch results for these spaces are obtained.
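To illustrate the role of the stretch, here is a small simulation (our own; all parameters are arbitrary): rescaling the hop count by the connection radius gives a quantity that can never undershoot the Euclidean distance (each hop covers at most ε), and in a dense enough random geometric graph it does not overshoot it by much either:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(0)
n, eps = 1000, 0.1
pts = rng.random((n, 2))                  # random points in the unit square

diff = pts[:, None, :] - pts[None, :, :]
euclid = np.sqrt((diff ** 2).sum(-1))     # all pairwise Euclidean distances
adj = ((euclid <= eps) & (euclid > 0)).astype(float)

# Hop counts (unweighted shortest paths) from one endpoint of the farthest pair.
i, j = divmod(int(euclid.argmax()), n)
hops = shortest_path(adj, unweighted=True, directed=False, indices=i)

# Stretch of the rescaled hop distance eps * hops for that pair: >= 1 always.
stretch = eps * hops[j] / euclid[i, j]
print(round(float(stretch), 2))
```

With these parameters the average degree is around n·π·ε² ≈ 31, so the graph is connected with overwhelming probability and the printed stretch sits slightly above 1.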

Summary, comments, caveats, and outlook
In summary, we have proven that upon proper rescaling, the Ollivier-Ricci curvature of random geometric graphs on a Riemannian manifold converges to the Ricci curvature of the underlying manifold.
Our first result, Theorem 3.1, establishes convergence of the Ollivier-Ricci curvature for a wide range of connectivity and measure radii. In particular, it contains as a corollary the classical setting where both radii are the same, Corollary 3.3. The theorem does, however, require knowledge of the pairwise distances between connected nodes in the manifold.
Our second result, Theorem 3.5, relaxes this requirement and establishes the same convergence without any knowledge of distances in the manifold. This comes at the price of slightly more restrictive conditions on the possible connection and measure radii. Still, as for the first result, the convergence holds all the way down to graphs whose average degree grows very slowly (almost logarithmically).
To the best of our knowledge, these are the first rigorous results on the convergence of a discrete notion of curvature of random combinatorial objects to a traditional continuum notion of curvature of smooth space.
While the classical setting for Ollivier-Ricci graph curvature uses probability measures (random walks) on balls of the same radius as the graph connection radius, in this paper we allow the radii to be different. This is an important generalization. In particular, we find that in order for the curvature to converge on graphs with almost logarithmic average degree, we need the probability measure radius to be much larger than the connection radius. This is intuitively expected: in order to "feel" any curvature in graphs with such low density, we really need to consider large "mesoscopic" neighborhoods in them, since otherwise all we could see is local "microscopic" Euclidean flatness. It would be interesting to see how this more general approach generalizes known results for the classical setting of Ollivier-Ricci curvature of graph families that have been investigated in the past, such as trees or Erdős-Rényi random graphs [4,16].
In our recent numerical experiments [13], we have seen that in manifold-distance-weighted random geometric graphs, the Ollivier-Ricci curvature convergence holds even for graphs with constant average degree. Unfortunately, the proof techniques presented in this paper do not allow for a direct generalization to this setting. Therefore, other techniques are needed to (dis)confirm the convergence of the Ollivier-Ricci curvature of graphs with constant average degree. We note that one cannot expect Ollivier-Ricci curvature to converge in all possible graph sparsity settings. For example, the giant component definitely needs to exist for one to talk about any curvature convergence.
For the task of learning latent geometry in networks, our results can still be improved, particularly by removing the requirement to know the connection radius. When presented with a truly unweighted realization of a random geometric graph, this radius first needs to be estimated. It would thus be interesting to see if convergence still holds if we replace the true value of the connection radius with a consistent estimate of it, e.g. one based on the average degree. Here we expect the speed of curvature convergence (if any) to depend on the speed of estimator convergence in a possibly nontrivial way.
Finally, now that we have seen that the Ollivier-Ricci curvature of random combinatorial discretizations of smooth spaces converges to their Ricci curvature, it would be interesting to investigate whether such convergence also holds for other popular notions of discrete curvature. Forman-Ricci curvature [37] appears to be a good next candidate for such an investigation.

Proof overview
Our main results in Theorems 3.1 and 3.5 follow from a more general result on the convergence of the Ollivier-Ricci curvature in graphs whose edges carry weights assigned according to some scheme. For this general result it is not important what these weights or their assignment scheme are. What is important is that the graph distance d_G between a pair of nodes is a good approximation of the manifold distance d_M between the corresponding pair of points. To quantify how good this approximation is, we introduce the following definition.
Definition 4.1 (δ_n-good approximation). Let ξ_n be a sequence with ξ_n = o(δ_n). We say that a graph distance d_G is a δ_n-good approximation of d_M if there exists a Q > 3 such that, with sufficiently high probability (as n → ∞),

|d_G(x, y) − d_M(x, y)| ≤ ξ_n for all nodes x, y ∈ B_M(x*; Qδ_n).

Remark 4.1 (Asymptotic expressions). Most of our results deal with asymptotic relations, e.g. ξ_n = o(δ_n). Unless stated otherwise, these asymptotic relations are always understood as n → ∞.
Recall that B_G(x; δ) denotes the set of nodes in the graph that are at graph distance at most δ from x, and define

λ_n := (log(n)²/n)^{1/N}. (5)

This λ_n will play the role of an additional radius for extending the graph distance d_G to the manifold. In short, to define a distance between u, v ∈ M, we will connect u and v to all nodes of the graph within radius λ_n and then use the graph distance. The radius λ_n has been selected such that the expected number of nodes inside any ball B_M(x; λ_n) is of the order Θ(log(n)²). Hence, the probability of observing no node of the graph inside any such ball is O(e^{−log(n)²}) = o(n^{−1}), which is sufficiently small. More details on the use of λ_n can be found in Section 5.1. Our general result is then as follows.
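The arithmetic behind this choice of radius can be checked directly (a sketch; volume constants are ignored): a region of volume log(n)²/n receives on average log(n)² points from a Poisson process of intensity n, and is empty with probability exp(−log(n)²), which is o(1/n):

```python
import math

# A Poisson random variable with mean m is zero with probability exp(-m).
# With m = log(n)^2, this empty-ball probability decays faster than 1/n.
for n in [10**3, 10**6, 10**9]:
    mean = math.log(n) ** 2          # expected number of points in the ball
    p_empty = math.exp(-mean)        # probability the ball contains no point
    print(n, round(mean, 1), p_empty < 1 / n)
```

Running the loop prints `True` in the last column for each n, i.e. the empty-ball probability is already negligible compared to 1/n at these sizes.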
Theorem 4.2. Let N ≥ 2, (M, d_M) be a smooth, orientable, connected and compact N-dimensional Riemannian manifold, x* ∈ M and v a unit tangent vector at x*. Furthermore, let y*_n ∈ M be at distance δ_n in the direction of v, let G_n = G_n(x*, y*_n, δ_n) be rooted random graphs on M, and let d_G be a δ_n-good approximation of d_M. Then, if we consider the Ollivier-triple G_n = (G_n, d_{G_n}, m^G),

lim_{n→∞} E[ | (2(N + 2)/δ_n²) κ(x*, y*_n; G_n) − Ric(v, v) | ] = 0.

Once we have established this general result, our main results in Theorems 3.1 and 3.5 follow if we can show that the considered graph distances are δ_n-good approximations.
A key ingredient in the proof of Theorem 4.2 is the convergence result for the Ollivier-Ricci curvature of uniform measures on Riemannian manifolds, proved in the seminal paper on the topic [28]. At a high level, our proof approximates the Ollivier-Ricci curvature of probability measures on the graph by that of measures on the manifold. Having obtained such an approximation with the required accuracy, we then apply the convergence result from [28].
Since Ollivier-Ricci curvature is defined via the Wasserstein metric on probability measures, our analysis focuses on approximating the Wasserstein metric of discrete probability measures on the graph by the Wasserstein metric of uniform probability measures on the manifold. This is done in three steps: 1) extend the graph distance d_G to a distance d̂_M on the manifold such that the Wasserstein metric W_1 with respect to this new distance is a good approximation of the Wasserstein metric W_1 on the manifold; 2) show that the Wasserstein metric between the probability measure m^G_x on the graph and the discrete probability measure m^M_x on the nodes within the ball B_M(x; δ_n) is sufficiently small; and 3) show that the Wasserstein metric between the uniform measure on B_M(x; δ_n) and the discrete probability measure m^M_x is sufficiently small.

Remark 4.3. In all cases, sufficiently small means that the error terms are of smaller order than δ_n³. This is because the Wasserstein metric is first divided by δ_n to obtain the curvature, which is then divided by δ_n² to make it converge to the Ricci curvature.

We proceed with explaining all ingredients and the three steps in more detail. We reiterate that, unless stated otherwise, we assume that ε_n ≤ δ_n are two sequences converging to zero such that λ_n = o(ε_n) and λ_n = o(δ_n³).
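The flavor of step 3 can be illustrated in one dimension with `scipy.stats.wasserstein_distance` (our toy example, not part of the paper's proof): the W_1 distance between the empirical measure of uniform random points in an interval and the uniform measure itself shrinks as the number of points grows:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
delta = 0.1
# A fine grid serves as a discrete proxy for the uniform measure on [0, delta].
grid = np.linspace(0, delta, 10_000)

w1 = {}
for k in [10, 100, 10_000]:
    sample = rng.uniform(0, delta, size=k)   # "nodes" falling inside the ball
    w1[k] = wasserstein_distance(sample, grid)
    print(k, w1[k])
```

The printed distances decrease with k (roughly like delta/sqrt(k)), which is the 1-dimensional analogue of the discrete measure m^M_x approaching the uniform measure on the ball as the point process densifies.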

Ollivier curvature on Riemannian manifolds
Let (M, d_M) be a smooth, orientable, connected and compact N-dimensional Riemannian manifold. For x ∈ M and δ > 0, we write B_M(x; δ) ⊆ M to denote the closed ball of radius δ around x, and vol_M(B_M(x; δ)) to denote the volume of this ball. Now fix δ > 0 and consider the uniform measure on balls of radius δ. That is, for x ∈ M we take the probability measure µ^δ_x given by

µ^δ_x(A) = vol_M(A ∩ B_M(x; δ)) / vol_M(B_M(x; δ)). (6)

We will refer to µ^δ_x as the uniform δ-measure. The following result from [28] shows that for uniform δ-measures on a Riemannian manifold, the Ollivier curvature (properly rescaled) converges to the Ricci curvature as δ → 0.

Theorem 4.4 (Example 7 in [28]). Let (M, d_M) be a smooth complete N-dimensional Riemannian manifold, x ∈ M and v a unit tangent vector at x. Let δ > 0 and let y be the point at distance δ in the direction of v. Then, for the Ollivier-Ricci curvature κ associated with the uniform δ-measures given by (6),

κ(x, y) = (δ²/(2(N + 2))) Ric(v, v) + O(δ³).

Remark 4.5. The result in Theorem 4.4 clearly exhibits the local nature of curvature, as it holds in the limit where the distance d_M(x, y) = δ between the two points goes to zero.
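For instance (a routine specialization, not stated in [28] in this form), on the unit sphere $S^2$ we have $N = 2$ and $\operatorname{Ric}(v, v) = 1$ for every unit vector $v$, so Theorem 4.4 gives

```latex
\kappa(x, y) \;=\; \frac{\delta^2}{2(N+2)}\,\operatorname{Ric}(v, v) + O(\delta^3)
\;=\; \frac{\delta^2}{8} + O(\delta^3).
```

The curvature is positive: uniform balls around two nearby points on the sphere are slightly cheaper to transport onto each other than the distance between their centers suggests.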
Taking δ = δ_n, x = x* and y = y*_n in the above theorem, we see that the rescaled Ollivier-Ricci curvature associated with the uniform δ_n-measures converges to the Ricci curvature as n → ∞. The main strategy for proving Theorem 4.2 is to compare this "uniform" version of the curvature κ on the manifold to the discrete version on the graph.
More precisely, we need to prove (7). There are two complicating factors here. First, we have to deal with two Wasserstein metrics defined on two different spaces. Second, we have to compare discrete probability measures with continuous ones. We deal with the different Wasserstein metrics in the next section, and with the comparison of the different measures in Section 4.3 and Section 4.4.

Extending the graph distance to the manifold
In order to compare the two different Wasserstein metrics in (7), we extend the graph distance d_G to a distance d̂_M defined on a sufficiently large part of M. In particular, we will consider the ball B_M(x*; Qδ_n), with Q > 3 from Definition 4.1. The extension is such that, for measures supported on this ball, the Wasserstein metric associated with d_G can be replaced by the Wasserstein metric associated with d̂_M.
Recall the definition of λ_n from (5). Take G_n = G_n(x*, y*_n, δ_n) and let U ⊂ M be a countable set of points. We define the graph G_n(U), obtained from G_n by adding the points of U to the vertex set and connecting each u ∈ U to every node x ∈ G_n \ U for which d_M(x, u) ≤ λ_n/2. We then assign to each new edge (u, x) the weight d_M(x, u)(1 + ξ_n^2) + ξ_n^3, with ξ_n from Definition 4.1, and extend the graph distance to the manifold by defining d̂_M(u, v) to be the total weight of the shortest weighted path between u and v in G_n({u, v}).

It is important to note that this extended distance depends on the random graph G_n. Therefore, it could happen that two added points u, v ∈ U are not connected in G_n(U), i.e. there does not exist a path from u to v in the extended graph. This happens if there are no nodes in B_M(u; λ_n/2) or in B_M(v; λ_n/2), or if none of the node pairs (x, y) ∈ B_M(u; λ_n/2) × B_M(v; λ_n/2) are connected by a path in G_n. Therefore, to justify the definition of the extended manifold distance, we need to make sure that, with sufficiently high probability, these situations do not occur.

Lemma 4.6. Let G_n = G_n(x*, y*_n, δ_n) and let Q > 3 be the constant from Definition 4.1. Then there exists an event Ω_n such that on this event the following holds: The first property ensures that our extended distance is an actual distance. Moreover, by the second property, this extended distance is a good approximation of the true distance on the manifold. Finally, the last property ensures that the curvature κ between x* and y*_n is well-defined and not forced to be zero. The precise definition of Ω_n is not needed to understand the high-level arguments or the proofs of the main results; for now, we simply refer to Ω_n as the good event. Details on this event can be found in Section 5.1.
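The construction of d̂_M can be sketched in a few lines. The sketch below is illustrative only: it works on the flat unit square rather than a manifold, uses a deterministic grid of nodes in place of the Poisson sample (for reproducibility), weights graph edges by the true distance rather than by d_G, and the constants eps, lam, xi are ours.

```python
import math, heapq

eps, lam, xi = 0.15, 0.2, 0.05
nodes = [(i / 10 + 0.05, j / 10 + 0.05) for i in range(10) for j in range(10)]

def extended_distance(u, v):
    """Weighted shortest-path distance in G_n({u, v}): u and v are joined
    to every node within lam/2, with edge weight d(x,u)(1+xi^2)+xi^3."""
    pts = nodes + [u, v]
    n = len(pts)
    adj = {i: [] for i in range(n)}
    for i in range(len(nodes)):                       # graph edges at range eps
        for j in range(i + 1, len(nodes)):
            w = math.dist(pts[i], pts[j])
            if w <= eps:
                adj[i].append((j, w)); adj[j].append((i, w))
    for extra in (n - 2, n - 1):                      # the added points u, v
        for i in range(len(nodes)):
            dxu = math.dist(pts[extra], pts[i])
            if dxu <= lam / 2:
                w = dxu * (1 + xi ** 2) + xi ** 3
                adj[extra].append((i, w)); adj[i].append((extra, w))
    dist, heap = {n - 2: 0.0}, [(0.0, n - 2)]         # Dijkstra from u
    while heap:
        du, i = heapq.heappop(heap)
        if i == n - 1:
            return du
        for j, w in adj[i]:
            if du + w < dist.get(j, math.inf):
                dist[j] = du + w
                heapq.heappush(heap, (du + w, j))
    return math.inf                                   # u and v not connected

u, v = (0.1, 0.1), (0.9, 0.9)
print(extended_distance(u, v) >= math.dist(u, v))  # True: never shorter
```

Since every edge weight is at least the true distance between its endpoints, the extended distance never underestimates d_M, in line with property Ω2.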
Let Ŵ_1 denote the Wasserstein metric with respect to d̂_M, which is only well-defined on the good event Ω_n. Since the distance is determined by the graph G_n = G_n(x*, y*_n, δ_n), this Wasserstein metric is also a random object. The following proposition states that, on the event Ω_n, the difference between the Wasserstein metrics Ŵ_1 and W_1 is small. The proof is given in Section 5.1.

Proposition 4.7. Let G_n = G_n(x*, ε_n) and let µ_1, µ_2 be two probability measures on M with support contained in B_M(x*; Qδ_n). Then, on the good event, the difference between Ŵ_1(µ_1, µ_2) and W_1(µ_1, µ_2) is o(δ_n^3). Hence, since the uniform δ_n-measures µ^{δ_n}_{x*} and µ^{δ_n}_{y*_n} are probability measures on M with support contained in B_M(x*; Qδ_n), Proposition 4.7 implies that, on the good event, the corresponding approximation holds in expectation. This is helpful because both Wasserstein metrics in the resulting bound are now defined on the same space. Therefore, since Ŵ_1 is a distance, the reverse triangle inequality yields a bound that holds in expectation, conditioned on the good event, whose right-hand side no longer involves the extended distance. Hence, it now suffices to show that (8) holds for any x ∈ B_M(x*; δ_n).

Approximating probability measures on graph balls
Recall that B_M(x; δ_n) denotes the closed ball of radius δ_n around x ∈ M with respect to the manifold distance d_M. The first step in establishing (8) is to move from uniform measures on the graph balls B_G(x; δ_n) to uniform measures on the nodes of the graph that lie in the manifold balls B_M(x; δ_n). The reason for this is that y ∈ B_G(x; δ_n) does not necessarily imply that y ∈ B_M(x; δ_n), nor vice versa. This creates difficulties when comparing the measures m^G_x and µ^{δ_n}_x.
Let G_n = G_n(x*, ε_n) be rooted random graphs on M. We define the probability measure m^M_x on the nodes of G_n as the uniform measure on B_M(x; δ_n) ∩ G_n. Although the uniform measures m^G_{x*} and m^M_{x*} are not necessarily equal, the Wasserstein metric between them is sufficiently small.
Proposition 4.8. Denote by m^G_x the uniform measure on B_G(x; δ_n) and by m^M_x the uniform measure on B_M(x; δ_n) ∩ G_n. Then the expected Wasserstein distance between them is o(δ_n^3).

The proof of this result is based on some simple computations regarding Poisson random variables and can be found in Section 5.2. Proposition 4.8 allows us to replace (8) with (10). Note that the only dependence on the graph is now through the number of nodes placed inside the ball B_M(x; δ_n), which is completely determined by the Poisson process. All dependence on the actual structure of the graph has been removed. This allows us to compute the Wasserstein metric between m^M_x and µ^{δ_n}_x.

Coupling continuous and discrete probability measures on M
Recall that the Wasserstein metric W_1(µ_1, µ_2) is defined via an infimum over all possible joint distributions (couplings) of the measures µ_1 and µ_2. Hence, to show that (10) holds, we need to design a near-optimal coupling (transport plan) between m^M_x and µ^{δ_n}_x. The main idea is to view m^M_x as a discrete version of µ^{δ_n}_x. For now, let us assume that we are working in the N-dimensional Euclidean cube M = [0, 1]^N. Given a realization of the Poisson process, a transport plan between m^M_x and µ^{δ_n}_x should specify, for each measurable set A ⊆ B_M(x; δ_n), how much of the associated mass µ^{δ_n}_x(A) is transported to each point of the Poisson process. To make this close to optimal, we should distribute the mass over those points that are closest to A. This problem is related to that of finding a minimal matching between the points of a Poisson process and the points of a grid on [0, 1]^N; see [22, 39, 35]. Here, minimal means that the largest distance between a point of the Poisson process and its matched grid point is minimized.
The idea for the transport plan is as follows: 1. Place a grid on [0, 1] N .
2. Find a minimal matching between the Poisson process and the grid.
3. Given A ⊆ B_M(x ; δ_n), take all points of the Poisson process that are matched to grid points inside A and distribute the mass µ^{δ_n}_x(A) equally over those points.
Using known results for minimal matchings, it can then be shown that, under suitable conditions, the Wasserstein metric between m^M_x and µ^{δ_n}_x is o(δ_n^3). Finally, we need to extend these results from flat Euclidean space to the ball B_M(x; Qδ_n) in a general manifold M. For this we use that δ_n → 0 and that small neighborhoods of x ∈ M can be mapped diffeomorphically to the flat N-dimensional tangent space by the exponential map exp_x : T_xM → M. We then apply the matching results there and map back. Here we need to tread carefully, since the exponential map does not preserve distances. We thus fix a sufficiently small neighborhood U around the origin of the tangent space at x. Then, for some fixed 0 < ξ < 1 and large enough n, we have

B_N(0; (1 − ξ)δ_n) ⊆ exp_x^{-1}(B_M(x; δ_n)) ⊆ B_N(0; (1 + ξ)δ_n),

where B_N(0; δ) is the Euclidean ball of radius δ. This yields matching upper and lower bounds on the Wasserstein metric on M in terms of the Wasserstein metric on Euclidean space.
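Steps 1-3 of the transport plan can be sketched numerically. In the sketch below (ours, purely illustrative) we work on [0, 1]^2, use a fixed number of uniform points as a stand-in for the Poisson sample, and use a sum-cost (Hungarian) matching via scipy as a stand-in for the minimax matchings of [22, 39, 35]; the mean matched distance then upper-bounds W_1 between the two empirical pictures, since transporting each cell's mass 1/k^2 along the matching moves every unit of mass by at most its matched distance.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)

k = 10                                   # the grid is k x k, so k^2 cells
grid = (np.stack(np.meshgrid(np.arange(k), np.arange(k)), -1)
          .reshape(-1, 2) + 0.5) / k     # cell centres of the grid
pts = rng.uniform(size=(k * k, 2))       # stand-in for k^2 Poisson points

# Step 2: match grid points to process points (sum-cost, not minimax).
cost = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
rows, cols = linear_sum_assignment(cost)

# Step 3: moving each cell's mass to its matched point bounds W_1 by the
# mean matched distance.
w1_bound = cost[rows, cols].mean()
print(w1_bound < 0.5)
```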
All the details of this approach are provided in Section 5.3.In the end we obtain the following result.

Proof of the main results
We now have all the ingredients to prove the main results. We start with Theorem 4.2: we bound the expression inside the expectation by a sum of several terms and use the above results, together with the fact that d_G is a δ_n-good approximation, to show that each individual term goes to zero.
Proof of Theorem 4.2. First, we bound the term inside the expectation as follows. The last term is deterministic and goes to zero by Theorem 4.4. For the first term, note that when x* and y*_n are not connected, κ(x*, y*_n, G_n) = 0. Conditioned on the good event Ω_n, this does not happen, by property Ω3 in Lemma 4.6. By construction of the good event we have 1 − P(Ω_n) = o(δ_n^3), and thus the last term in the above bound goes to zero.
For the other term, we first note that, since d_G is a δ_n-good approximation, the absolute value of each curvature term can be bounded from above by 2.
The expression inside the conditional expectation can then be bounded as follows. Since d_G is a δ_n-good approximation, we can apply (4); it then follows that the second term in (11) goes to zero. For the first term, we have a bound which implies that this term also goes to zero. We are thus left with (12), for which we have to show that it is o(δ_n^3). We first replace W_1(µ^{δ_n}_{x*}, µ^{δ_n}_{y*_n}) using Proposition 4.7. To show that the first term in the resulting upper bound is also o(δ_n^3), we apply the reverse triangle inequality twice. Since the expectation is o(δ_n^3) by Proposition 4.9, we conclude the claimed convergence, which finishes the proof.

Now that we have the general result, Theorem 3.1 and Theorem 3.5 follow directly from Theorem 4.2, provided we can show that the graph distances considered there are δ_n-good approximations.
Throughout the remainder of this section we will assume that ε_n and δ_n take the parametrized forms introduced above, for some a, b ∈ R and 0 ≤ α, β ≤ 1. We shall also assume that ε_n ≤ δ_n. The following results show that, for appropriate choices of the constants a, b and α, β, both the weighted manifold distance and the rescaled hopcount distance are δ_n-good approximations. The proofs are given in Section 5.4 and Section 5.5, respectively.
Proposition 4.10. Suppose the constants in ε_n and δ_n satisfy the stated conditions.

Proposition 4.11. Suppose the constants in ε_n and δ_n satisfy the stated conditions, with a < 3b if α = 3β.

Observe that the conditions on the constants in Proposition 4.10 and Proposition 4.11 are exactly the same as in Theorem 3.1 and Theorem 3.5, respectively. Moreover, these conditions imply that λ_n = o(ε_n) and λ_n = o(δ_n^3), with λ_n as defined in (5), as we will now demonstrate. In Proposition 4.10 we have β > 0 and α + 2β ≤ 1/N. It then follows that α < 1/N, and it must be that α + 2β = 1/N; hence the conditions of Proposition 4.10 imply that 3b ≥ a + 2b > 2/N. The first inequality holds since β > 0 and α ≤ (1 − 3β)/2, while the second is due to the fact that 3β ≤ 3/9 = 1/3. We thus conclude that, under the conditions in both propositions, the radii satisfy the conditions of Theorem 4.2. Hence, Theorem 3.1 and Theorem 3.5 follow from it.

Proofs
Here we prove all the intermediate results used in the previous section to establish our main results. We start with the proofs of Lemma 4.6 and Proposition 4.7 in Section 5.1. In Section 5.2 we provide the details for Proposition 4.8, while the proof of Proposition 4.9 is given in Section 5.3. We end with Sections 5.4 and 5.5, where we prove Propositions 4.10 and 4.11, respectively, leading to the main results of this paper.
Recall the definitions of λ_n and of the good approximation property (4) from the previous section.

Extended graph distance
Our first goal is to prove Lemma 4.6. We start by showing that, for a sufficiently small radius r_n and any finite set of points U ⊂ M, the balls B_M(u; r_n), u ∈ U, each contain at least one node of the rooted graphs G_n = G_n(x*, y*_n, ε_n).
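The union-bound estimate behind this step is easy to sketch numerically. The computation below is our own illustration with hypothetical constants (N = 2, vol_M(M) = 1, and the scaling n r_n^N = log(n)^2 suggested by the definition of λ_n): each ball of radius r_n is empty with probability exp(−n vol(B(r_n))/vol(M)), which decays faster than any polynomial and therefore beats a union bound over |U| = O(n^c) balls.

```python
import math

def prob_some_ball_empty(n, c, N=2):
    """Union bound: (number of balls) * P(one ball is empty)."""
    r_n = (math.log(n) ** 2 / n) ** (1 / N)   # assumed scaling of the radius
    vol_ball = math.pi * r_n ** 2             # Euclidean ball volume for N = 2
    return n ** c * math.exp(-n * vol_ball)   # n^c balls, Poisson void probability

for n in (10**3, 10**5, 10**7):
    print(f"n={n:>8}: {prob_some_ball_empty(n, c=2):.3e}")
```

Even with n^2 balls the bound is already astronomically small for n = 1000 and keeps shrinking.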
Lemma 5.1. Let U ⊂ M be a finite set of points in M such that |U| = O(n^c) for some c > 0. Then, as n → ∞, with probability tending to one every ball B_M(u; r_n), u ∈ U, contains at least one node of G_n.
Proof. First note that for r_n small enough the ball B_M(u; r_n) can be mapped diffeomorphically onto the tangent space T_uM at u. In particular, for small enough r_n, the volume of B_M(u; r_n) is comparable to that of a Euclidean ball as n → ∞. Next, since the nodes of G_n are placed according to a Poisson process with intensity n/vol_M(M), the probability that B_M(u; r_n) contains no node is exponentially small in n vol_M(B_M(u; r_n)), and we conclude by applying the union bound over u ∈ U. To finish the proof we note that e^{−Θ(log(n)^2)} decays faster than any polynomial in n.

With this lemma we obtain the following corollary.
Corollary 5.2. There exists a collection {B_1, ..., B_m} of m = Θ(λ_n^{−N}) balls of radius λ_n/4 that cover M such that, if we denote by c_1, ..., c_m their centers and define the event C_n that every ball contains at least one node of G_n, then P(C_n) → 1.

Proof. The collection is constructed using the standard trick of taking a maximal set of disjoint balls of radius λ_n/8 in M.

The event C_n will play a crucial part in defining the good event Ω_n. Let D_n denote the event on which (4) holds. Then we define the good event as Ω_n := C_n ∩ D_n. On this event, (B_M(x*; Qδ_n), d̂_M) is a metric space for any constant Q > 0, and the extended distance d̂_M is a good approximation of the original distance d_M. Note that we do not need to consider the whole manifold, since curvature is a local property.

Lemma 5.3. Let Ω_n be the event defined in (14) and Q > 3 the constant from Definition 4.1. Then, on the event Ω_n, any two points u, v ∈ B_M(x*; Qδ_n) are connected in G_n(u, v), and d̂_M is a true distance on B_M(x*; Qδ_n).

Proof. We first prove the first statement. Take any u, v ∈ B_M(x*; Qδ_n) and let γ(u, v) denote the geodesic between u and v. This geodesic is covered by a subsequence B_{t_1}, ..., B_{t_k} of the cover of M, which we rank in order of appearance moving from u to v. Let c_{t_1}, ..., c_{t_k} denote the corresponding centers of these balls; see Figure 2. On the event C_n each ball contains a vertex x_{t_i} ∈ G_n; the edges (u, x_{t_1}) and (v, x_{t_k}) are present, and the distance between consecutive vertices x_{t_i}, x_{t_{i+1}} is bounded by four times the radius of the balls. It follows that, for large enough n, d_M(x_{t_i}, x_{t_{i+1}}) ≤ ε_n,

Figure 3: Abstract depiction of the weighted shortest path between u and v created by adding z and the path π_2, given in blue.
and thus, for n large enough, {x_{t_1}, ..., x_{t_k}} is a path in G_n. We conclude that u and v are connected in G_n(u, v).
Note that, because of this property, on the event Ω_n the extended manifold distance d̂_M is well-defined on M.
We are left to show that, on the event Ω_n, the extended manifold distance is a true distance. The only non-trivial part is the triangle inequality. Let u, v, z ∈ B_M(x*; Qδ_n) and consider the graphs G^(1) = G_n(u, v) and G^(2) = G_n(u, v, z). Observe that the triangle inequality can only be violated if z creates a short-cut, i.e. if the shortest weighted path between u and v in G^(1) is longer than that in G^(2). Suppose this is the case, and let π_1 = {u, ..., y_1, z, y_2, ..., v} denote this new weighted shortest path in G^(2). Since y_1 and y_2 are connected to z in G^(2), it follows that d_M(z, y_i) ≤ λ_n/2. However, by the triangle inequality for d_M, this implies that d_M(y_1, y_2) ≤ λ_n = o(ε_n), and hence, for sufficiently large n, the edge (y_1, y_2) is present in G_n and thus also in G^(1) and G^(2).
For simplicity, let us denote by |π| the total weight of a path π. Since d_G is a δ_n-good approximation, the corresponding weight bound holds on the event Ω_n. Applying the triangle inequality for d_M, we conclude that the total weight of the path π_2 is at most that of π_1, from which it follows that z cannot create a short-cut. Hence d̂_M satisfies the triangle inequality.
We are now ready to prove Lemma 4.6.
Proof of Lemma 4.6. Note that, for any two nodes u, v ∈ G_n with u, v ∈ B_M(x*; Qδ_n), Lemma 5.3 implies that u and v are connected by a path in G_n. Hence the only part of Lemma 4.6 left to prove is property Ω2. Take any u, v ∈ B_M(x*; 3δ_n). Then, on the event Ω_n, by definition of the extended distance d̂_M, there exist nodes x_u and x_v realizing the first and last edges of a shortest weighted path. Moreover, since Q > 3 and λ_n = o(δ_n^3), we may assume that x_u, x_v ∈ B_M(x*; Qδ_n) for sufficiently large n. Since the approximation (4) holds on the event Ω_n, combining this with (16) yields the claim.

Finally, we need to prove Proposition 4.7. Since, on the event Ω_n, the two distances are comparable, the proof follows immediately from the following elementary result on Wasserstein metrics. We note that the Wasserstein metric is achieved by some optimal coupling. Let µ* denote the optimal coupling for µ_1 and µ_2 with respect to d, i.e. W_1(µ_1, µ_2) = ∫ d(x, y) dµ*(x, y), and define the coupling for the comparable distance similarly. Comparing the two integrals yields the result.

Probability measures on graphs
In this section we give the proof of Proposition 4.8. Recall that m^G_x and m^M_x denote the uniform probability measures on the set of nodes in B_G(x; δ_n) and in B_M(x; δ_n), respectively. The goal is to show that the expected Wasserstein distance between them is o(δ_n^3). As mentioned, these two node sets are not necessarily contained in each other. Hence, to bound the Wasserstein metric, we work with slightly smaller and larger balls B^− and B^+ that sandwich both sets. We can then obtain an upper bound by comparing the Wasserstein metric between m^G_x, m^M_x and the uniform probability measure on B^+ ∩ G_n. This bound can be made o(δ_n^3) by carefully selecting the radii of B^− and B^+.
Before we give the details, we need the following general result concerning Poisson random variables.
Lemma 5.5. Let α_n, β_n → ∞ and let X_n, Y_n be two independent Poisson random variables with means α_n and β_n, respectively. Then the stated concentration bound holds.

Proof. First, let C > √2 be some large fixed constant. Then we have the standard concentration bound for Poisson random variables (cf. [32, Lemma 2.1]), and we define α_n^± accordingly; similar bounds hold for Y_n, with β_n^± defined similarly. We start by conditioning on X_n and bound each term separately, bounding the expectation inside each summation by further conditioning on Y_n. We can then bound I_n using that α_n^− ∼ α_n, and the result follows since we are free to select C > √2 large enough so that I_n is of smaller order.
We are now ready to prove Proposition 4.8.

Proof of Proposition 4.8. Let G_n = G_n(x*, ε_n) and let D_n be the event on which approximation (4) of Definition 4.1 holds. It is then enough to show that the first term is o(δ_n^3). Note that on the event D_n the graph ball is sandwiched by manifold balls, where B_n = V_n ∩ G_n. Denote by m_n the uniform probability measure on B_n. We will prove (17) by writing B_n^± := B_M(x; δ_n^±) ∩ G_n and denoting by m^±_x the uniform probability measure on B_n^±. To establish (17) we show the corresponding bounds for the smaller and larger balls; note that, by definition of δ_n^±, their volumes are comparable. To establish (18) we condition on the number of nodes in the balls. For the first term we construct a specific transport plan (coupling) between the measures m_n and m^+_x: we define a joint probability mass function m(u, v) and observe that m(u, v) is a coupling between m^G_x and m^+_x. The required bound then follows from Lemma 5.5.

Collecting relevant known results
The following is a summary of results on the Wasserstein metric between empirical and uniform measures on the N-dimensional cube. The case N = 2 was explicitly stated in [39]. Although the results for N ≥ 3 are known, they are not stated in the explicit form we need. For completeness we therefore include a proof here.
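Before the formal statements, the object being bounded is easy to visualize in one dimension, where W_1 between the empirical measure of n uniform points and the uniform measure is exactly the integral of |F_n(t) − t|. The sketch below is our own warm-up: the 1D rate is n^{−1/2}, while the rate n^{−1/N} discussed for N ≥ 3 is specific to higher dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def w1_empirical_vs_uniform(n):
    """Exact W_1 between the empirical measure of n uniform points on
    [0, 1] and the uniform measure, via the integral of |F_n(t) - t|."""
    x = np.sort(rng.uniform(size=n))
    knots = np.concatenate([[0.0], x, [1.0]])
    total = 0.0
    for i in range(n + 1):            # F_n equals i/n on [knots[i], knots[i+1])
        a, b, c = knots[i], knots[i + 1], i / n
        if c <= a:                    # integral of |t - c| over [a, b], piecewise
            total += (b * b - a * a) / 2 - c * (b - a)
        elif c >= b:
            total += c * (b - a) - (b * b - a * a) / 2
        else:
            total += ((c - a) ** 2 + (b - c) ** 2) / 2
    return total

for n in (100, 10_000):
    print(f"n={n}: W_1 = {w1_empirical_vs_uniform(n):.4f}")
```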
For N ≥ 3, let Y_1, Y_2, ... be independent uniformly distributed random variables on [0, 1]^N and define the minimal matching cost, where the infimum is taken over all permutations σ of {1, 2, ..., n}. It then follows from [38, Lemma 1] that the corresponding bound holds, where L_1 now denotes the set of Lipschitz continuous functions with constant 1 with respect to the Euclidean distance d_N.
Next, we recall the duality formula for the Wasserstein metric on the space X, which yields the desired identity. Finally, [38, Theorem 1] implies the required rate for N ≥ 3.

Uniform and discrete measures on the unit cube
We first extend Proposition 5.6 to the case where the points come from a Poisson process. We will actually prove a slightly more general version, which allows for intensities (1 + o(1))n.
Lemma 5.7. Consider the N-dimensional unit cube [0, 1]^N, with N ≥ 2, and consider a Poisson process P with intensity measure (1 + f_n)n dvol_N on [0, 1]^N, for some sequence 0 ≤ f_n → 0. Let m^N_P denote the empirical random measure with respect to P,
and let µ^N be the uniform measure on the cube. Then the same rate holds as n → ∞.

Proof. We establish the result by conditioning on the size |P|, which has a Poisson distribution with mean (1 + f_n)n. Conditioned on |P| = k, each point is uniformly distributed, and therefore Proposition 5.6 applies. Recall the Chernoff concentration result ([32, Lemma 1.2]) for a Poisson random variable Po(a) with mean a, and fix c > 0. We split the resulting sum into three parts I_1, I_2 and I_3 and use the upper bound (20). By the Chernoff bound, the contributions of I_1 and I_3 are negligible. The main contribution comes from I_2, for which we use (20) with k_n = (1 + f_n)n together with Stirling's approximation for n!. Since c > 0 was arbitrary, the claim follows.

For our analysis we next extend Corollary 5.8 to N-dimensional balls. For this we note that if m^N_x and µ^N_x denote, respectively, the empirical and uniform measure on the ball B_N(x; δ_n) ⊆ R^N, then W^N_1(m^N_x, µ^N_x) ≤ W^N_1(m^N, µ^N), where m^N and µ^N are, respectively, the empirical and uniform measure on a cube [0, 2δ_n]^N. It then follows from Corollary 5.8 that the same rate holds. We thus have the following result:

From the manifold to the tangent space and back
To prove Proposition 4.9 we have to extend Proposition 5.9 to the setting of Riemannian manifolds. For this we use that, for n large enough, the ball B_M(x; δ_n) can be mapped diffeomorphically by the exponential map to a slightly larger ball in the tangent space at x. Since the tangent space is diffeomorphic to R^N, we can use Proposition 5.9 to obtain the result. However, we have to be careful, since the exponential map does not preserve the metric.
Proof of Proposition 4.9. We denote by B_N(x; δ) the ball of radius δ around x ∈ R^N with respect to the Euclidean metric. Fix 0 < ξ < 1 and pick a small enough, but fixed, neighborhood U of the origin in T_xM such that 1) the exponential map restricted to U is a diffeomorphism, 2) there exists a constant C > 1 such that U ⊆ B_N(0; Cδ_n), and 3) for any two points y, z ∈ exp_x(U),

(1 − ξ) d_N(exp_x^{-1} y, exp_x^{-1} z) ≤ d_M(y, z) ≤ (1 + ξ) d_N(exp_x^{-1} y, exp_x^{-1} z).

Next we note that the probability measures m^M_x and µ^{δ_n}_x on B_M(x; δ_n) only depend on the restriction of the Poisson process to this ball. In particular, they only depend on the restriction P_U of the process to the fixed neighborhood U, which is again a Poisson process with intensity n dvol_M / vol_M(M). Since U ⊆ B_N(0; Cδ_n), it follows that on U, vol_M ∘ exp_x = (1 + O(δ_n^2)) vol_N. Therefore, by the Mapping Theorem for Poisson processes [21], exp_x^{-1}(P_U) is a Poisson process on exp_x^{-1}(U) with intensity function (1 + O(δ_n^2)) n dvol_N / vol_M(M). Slightly abusing notation, let m^N_x and µ^N_x denote, respectively, the empirical and uniform measure on B_N(0; δ_n/(1 − ξ)) with respect to the Poisson process exp_x^{-1}(P_U). Then, since δ_n/(1 − ξ) = Θ(δ_n), Proposition 5.9 implies that E[W^N_1(m^N_x, µ^N_x)] = o(δ_n^3). On the other hand, since exp_x is a diffeomorphism on U, we have E[W_1(m^M_x, µ^{δ_n}_x)] ≤ (1 + ξ) E[W^N_1(m^N_x, µ^N_x)], and hence we conclude that E[W_1(m^M_x, µ^{δ_n}_x)] = o(δ_n^3), which proves Proposition 4.9.
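The (1 ± ξ) distance comparison for the exponential map is easy to see numerically. The sketch below is ours, on the unit sphere S^2 as a concrete stand-in for the general manifold M; exp_x(v) = cos(|v|)x + sin(|v|)v/|v| and d_M(y, z) = arccos(⟨y, z⟩) are the standard sphere formulas.

```python
import numpy as np

def exp_map(x, v):
    """Exponential map on the unit sphere at base point x, v in T_xM."""
    nv = np.linalg.norm(v)
    return np.cos(nv) * x + np.sin(nv) * v / nv

def d_sphere(y, z):
    """Geodesic distance on the unit sphere."""
    return np.arccos(np.clip(np.dot(y, z), -1.0, 1.0))

x = np.array([0.0, 0.0, 1.0])            # base point; T_xM is the plane z = 0
v = np.array([0.05, 0.00, 0.0])          # two small tangent vectors
w = np.array([0.00, 0.04, 0.0])
euclid = np.linalg.norm(v - w)           # flat distance in the tangent space
manifold = d_sphere(exp_map(x, v), exp_map(x, w))
print(abs(manifold / euclid - 1) < 0.01)  # True: distortion shrinks with radius
```

For tangent vectors of norm of order δ the relative distortion is of order δ², so any fixed 0 < ξ < 1 eventually dominates it, as used in the proof.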
Proof. The proof closely follows the strategy of the proof of Lemma 5.3. Let C_n denote the event in Corollary 5.2. We will show that on this event the distance bound holds for all u, v ∈ U ∩ G_n. This implies that P(A_n, C_n) = 0, from which the result follows, since by Corollary 5.2, P(A_n) ≤ P(A_n, C_n) + (1 − P(C_n)) = o(δ_n^3). Take any two u, v ∈ U ∩ G_n and let γ(u, v) denote the geodesic between u and v. We partition this geodesic into pieces of equal length and let u =: u_0, u_1, ..., u_{k−1}, u_k := v denote the k + 1 endpoints of the intervals; see Figure 4. On the event C_n, each u_t belongs to some ball B_t of radius λ_n/4 which contains a vertex x_t ∈ G_n, where we can take x_0 = u and x_k = v. In particular, since d_M(u_t, x_t) ≤ λ_n/2, d_M(u_{t−1}, u_t) ≤ ε_n/3 and λ_n = o(ε_n), it follows that for large enough n,

d_M(x_t, x_{t+1}) ≤ d_M(u_t, x_t) + d_M(u_{t+1}, x_{t+1}) + d_M(u_t, u_{t+1}) ≤ λ_n + ε_n/3 ≤ ε_n,

so that {u, x_1, ..., x_k, v} is a path in G_n (see Figure 4). Moreover, d^w_G(x_t, x_{t+1}) ≤ d_M(u_t, u_{t+1}) + λ_n by the triangle inequality. To finish the proof we note that, by definition, d^w_G(u, v) ≥ d_M(u, v), and hence the claim follows.

Definition 2.1.
Let (M, d_M) be a smooth, orientable, connected and compact N-dimensional Riemannian manifold. Fix ε > 0 and consider a Poisson process P_n on M with intensity measure (n / vol_M(M)) dvol_M. Then we define the random geometric graph G_n(ε) := G(P_n, ε).
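The construction in Definition 2.1 can be sketched in a few lines. The sketch below is ours and purely illustrative: it uses the flat unit square as a stand-in for M (with vol_M(M) = 1), and the values of n and ε are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)

# Poisson process on [0, 1]^2 with intensity n: a Poisson(n) number of
# points, placed uniformly. Connect every pair at distance at most eps.
n, eps = 200, 0.12
k = int(rng.poisson(n))                        # number of points of P_n
pts = rng.uniform(size=(k, 2))
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
adj = (dists <= eps) & ~np.eye(k, dtype=bool)  # adjacency matrix of G_n(eps)
avg_deg = adj.sum() / k
print(f"{k} points, average degree {avg_deg:.2f}")
```

Ignoring boundary effects, the expected degree is of order n·π·ε², which is how the choice of ε governs the graph density in the results above.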

Definition 2.3.
An Ollivier triple G is a triple (G, d_G, m), where G is a graph, d_G a metric on G, and m = {m_x}_{x∈G} a family of probability measures on G, one for each node x ∈ G. Given an Ollivier triple G = (G, d_G, m), we write W^G_1 for the associated Wasserstein metric.

Figure 1:
Figure 1: Illustration of the extended graph distance d̂_M. Here u is connected to node x_1 and v to x_6, and the shortest geodesic-weighted path between x_1 and x_6 goes over 5 edges.

Figure 2:
Figure 2: Depiction of the covering of the geodesic between u and v by the balls B_{t_i}.
Denote their centers by c_1, ..., c_m. A simple volume comparison, together with the compactness of M, gives m = O(λ_n^{−N}). By construction, the balls B_i = B_M(c_i; λ_n/4) then cover M, and hence m = Θ(λ_n^{−N}) = Θ(log(n)^{−2} n) = O(n). The result then follows from Lemma 5.1.
Applying this twice, once with B_n = B_G(x; δ_n) and once with B_n = B_M(x; δ_n) ∩ G_n, yields the required result. Let us write B_n^± := B_M(x; δ_n^±) ∩ G_n and denote by m^±_x the uniform probability measure on B_n^±.

Proposition 5.9.
Let 0 ≤ f_n → 0, x ∈ R^N, and consider a Poisson process P with intensity measure (1 + f_n)n dvol_N on the N-dimensional ball B_N(x; δ_n). Let m^N_x denote the empirical measure with respect to P, and let µ^N_x be the uniform measure on B_N(x; δ_n). Then E[W^N_1(m^N_x, µ^N_x)] = o(δ_n^3).

Figure 4:
Figure 4: Depiction of the splitting of the geodesic between u and v into k equal segments.

To prove Proposition 4.10 we first show the following.

Lemma 5.10. Let Q > 3, U = B_M(x*; Qδ_n) and define the event A_n := {∃ u, v ∈ U ∩ G_n : |d^w_G(u, v) − d_M(u, v)| > d_M(u, v)}.

Let G_n be rooted random graphs on a 2-dimensional Riemannian manifold M and denote by d^s_G the shortest path distance. Then the ε_n-weighted graph distance d*_G is a δ_n-good approximation.