Introduction

In this paper, we give an entirely variational proof of the \(\epsilon \)-regularity result for optimal transportation with a general cost function c and Hölder continuous densities, as established by De Philippis and Figalli [7]. This provides a keystone to the line of research started in [13] and continued in [14]: In [13], the variational approach was introduced and \(\epsilon \)-regularity was established in the case of the Euclidean cost function \(c(x,y) = \frac{1}{2} |x-y|^2\), see [13, Theorem 1.2]. In [14], among other things, the argument was extended to rougher densities, which required a substitute for McCann’s displacement convexity; this generalization is crucial here.

One motivation for considering this more general setting is the study of optimal transportation on Riemannian manifolds with cost function given by \(\frac{1}{2}d^2(x,y)\), where d is the Riemannian distance. In this context, an \(\epsilon \)-regularity result is of particular interest because, even though c is a compact perturbation of the Euclidean case, there are, compared to the Euclidean setting, additional mechanisms creating singularities, such as curvature. Indeed, even under suitable convexity conditions on the support of the target density, the so-called Ma–Trudinger–Wang (MTW) condition on the cost function c, a strong structural assumption, is needed to obtain global smoothness of the optimal transport map, see [17] and [16]. Since in the most interesting case of the cost \(\frac{1}{2}d^2(x,y)\) the MTW condition is quite restrictive and does not have a simple interpretation in terms of geometric properties of the manifold, it is highly desirable to have a regularity theory that imposes no further conditions on the cost function c (or on the geometry of the support of the densities).

The outer loop of our argument is similar to that of [7]: a Campanato iteration on dyadically shrinking balls that relies on a one-step improvement lemma, which in turn relies on the closeness of the solution to the solution of a simpler problem that comes with a good interior regularity theory. The main differences are:

  • In [7], the simpler problem is the Monge–Ampère equation with constant right-hand side and Dirichlet boundary data coming from the convex potential; for us, the simpler problem is the Poisson equation with Neumann boundary data coming from the flux in the Eulerian formulation of optimal transportation [2].

  • In [7], the comparison relies on the maximum principle; in our case, it relies on the fact that the density/flux pair in the Eulerian formulation is a minimizer given its own boundary conditions.

  • In [7], the interior regularity theory appeals to the \(\epsilon \)-regularity theory for the Monge–Ampère equation [10], which itself relies on Caffarelli’s work [4]; in our case, it is just interior regularity of harmonic functions.

Loosely speaking, the Campanato iteration in [7] relies on freezing the coefficients, whereas here, it relies on linearizing the problem (next to freezing the coefficients). In the language of nonlinear elasticity, we tackle the geometric nonlinearity (which corresponds to the nonlinearity inherent to optimal transport) alongside the material nonlinearity (which corresponds to the cost function c). As a consequence, we achieve \({\mathscr {C}}^{2,\alpha }\)-regularity in a single Campanato iteration, whereas [7] proceeds in three rounds of iterations, namely first \({\mathscr {C}}^{1,1-}\), then \({\mathscr {C}}^{1,1}\), and finally \({\mathscr {C}}^{2,\alpha }\). Another consequence of this approach via linearization is that we directly arrive at an estimate with the same homogeneities as for a linear equation (meaning that the Hölder semi-norm of the second derivatives is estimated by the Hölder semi-norm of the densities, and not by a nonlinear function thereof). Likewise, we obtain the natural dependence on the Hölder semi-norm of the mixed derivative of the cost function. When it comes to this dependence on the cost function c, we observe a phenomenon similar to the one for boundary regularity: regarding how the regularity of the data and the regularity of the solution are related, optimal transportation seems better behaved than its linearization, as we shall now explain. Assuming unit densities for the sake of the discussion, the Euler–Lagrange equation can be expressed on the level of the optimal transport map T as the fully nonlinear (and x-dependent) elliptic system given by \(\det \nabla T=1\) and \({\mathrm {curl}}_x(\nabla _x c(x,T(x)))=0\). Since the latter can be rephrased by imposing that the matrix \(\nabla _{xy}c(x,T(x))\nabla T(x)\) be symmetric, the Hölder norm of \(\nabla T\) indeed lives on the same footing as the Hölder norm of the mixed derivative \(\nabla _{xy}c\).
The linearization around \(T(x)=x\), on the other hand, is given by the elliptic system \({\mathrm {div}}\,\delta T=0\) and \({\mathrm {curl}}_x(\nabla _x c(x,x)+\nabla _{xy}c(x,x)\delta T(x))=0\), which has divergence-form character. Here, Hölder control of \(\nabla _{xy}c(x,x)\) matches Hölder control of \(\delta T\) only, and not of its gradient.
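The equivalence invoked above, between curl-freeness of \(x\mapsto \nabla _x c(x,T(x))\) and symmetry of \(\nabla _{xy}c\,\nabla T\), follows from a one-line computation (a formal sketch, suppressing smoothness issues):

```latex
% differentiate x \mapsto \partial_{x_j} c(x,T(x)) in direction x_i:
\partial_i\big(\partial_{x_j} c(x,T(x))\big)
  = \partial_{x_i x_j} c(x,T(x))
  + \sum_k \partial_{y_k x_j} c(x,T(x))\,\partial_i T_k(x).
% The first term on the right-hand side is symmetric in (i,j); hence the
% left-hand side is a symmetric matrix, i.e. the curl vanishes, if and
% only if the matrix \nabla_{xy}c(x,T(x))\,\nabla T(x) is symmetric.
```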

Our approach is analogous to De Giorgi’s strategy for the \(\epsilon \)-regularity of minimal surfaces, foremost in the sense that it proceeds via harmonic approximation. In fact, our strategy is surprisingly similar to Schoen & Simon’s variant [21] of that regularity theory:

  • Both approaches rely on the fact that the configuration is minimal given its own boundary conditions (Dirichlet boundary conditions there [21, (43)], flux boundary conditions here [13, (3.22)]), see [21, p.428] and [13, Proof of Proposition 3.3, Step 4]; the Euler–Lagrange equation does not play a role in either approach.

  • Both approaches have to cope with a mismatch in description between the given configuration and the harmonic construction (non-graph vs. graph there, time-dependent flux vs. time-independent flux here), which leads to an error term that, luckily, is superlinear in the energy, see [21, (38)] and [13, Proof of Proposition 3.3, Step 4]. At a very high level, this superlinearity may be seen as coming from the next term in a Taylor expansion of the nonlinearity, which in the right topology can be captured via lower-dimensional isoperimetric principles, see [21, Lemma 3] and [13, Lemma 2.3]. (However, there is no direct analogue of the Lipschitz approximation here.)

  • Both approaches have to establish an approximate orthogonality that allows one to control the distance between the minimal configuration and the construction, measured in an energy norm, by the energy gap, see [21, p.426ff.] and [13, Proof of Proposition 3.3, Step 3] in the simple setting, or rather [14, Lemma 1.8] in our setting; this ultimately relies on some (strict) convexity, see [21, (4)].

  • In order to establish this approximate orthogonality, both approaches have to smooth out the boundary data (there by simple convolution, here in addition by a nonlinear approximation), see [21, (34), (40), (52)] and [14, Proposition 3.6, (3.46), (3.47)].

  • In view of this, both approaches have to choose a good radius for the cylinder (in the Eulerian space-time here) on which the construction is carried out, see [21, p.424] and [14, Section 3.1.3].

The advantages of a variational approach become particularly apparent in this paper, where we pass from the Euclidean cost function to a more general one: we may appeal to the concept of almost minimizers, which is well-established for minimal surfaces. In our case, this simple concept means that, on a given scale, we interpret the minimizer (always with respect to its own boundary conditions) of the problem with cost c as an approximate minimizer of the Euclidean problem. This allows us to directly appeal to the Euclidean harmonic approximation [14, Theorem 1.5]. Incidentally, even though we deal with Hölder continuous densities as in [13], and not with general measures as in [14], we could not appeal to the simpler [13, Proposition 3.3], since that result relies on the Euler–Lagrange equation in the form of McCann’s displacement convexity.

There are essentially two new challenges we face when passing from a Euclidean to a general cost function (next to the geometric ingredients also present in [7]):

  • The starting point for the variational approach is always an \(L^\infty /L^2\)-bound on the displacement, which does rely on the Euler–Lagrange equation in the weak form of monotonicity of the support of the optimal coupling, see [13, Lemma 3.1], [14, Lemma 2.9], and [18, Proposition 2.2]. In this paper, we establish the analogue of [13, Lemma 3.1] based on c-monotonicity, see Proposition 1.5. Loosely speaking, this relies on a qualitative argument down to some (c-dependent) scale \(R_0\), followed by an argument that constitutes a perturbation of the one in [13, Lemma 3.1] for the scales \(\le R_0\).

  • We now address a somewhat hidden, but quite important, additional difficulty that has to be overcome when passing from the Euclidean to a general cost functional: while the Kantorovich formulation of optimal transportation has the merit of being convex, it is so in a very degenerate way; the Benamou–Brenier formulation, on the contrary, uncovers an almost strict convexity. The variational approach to regularity capitalizes on this strict convexity. However, this Eulerian reformulation seems naturally available only in the Riemannian case, and its strict convexity seems apparent only in the Euclidean setting. This is one of the reasons to appeal to the concept of almost minimizers, since it allows us to pass from a general cost function to the Euclidean one. However, for configurations that are not exact minimizers of the Euclidean cost functional, the Lagrangian cost \({\int |y-x|^2\,{\mathrm {d}}\pi }\) and the cost \(\int \frac{1}{\rho }|j|^2\) of their Eulerian description (1.24) are in general different: the Eulerian cost is always dominated by the Lagrangian one, but the inequality is typically strict. Hence the prior work [13, 14, 18] on the variational approach used the Euler–Lagrange equation in a somewhat hidden way, namely in terms of the coincidence of the Eulerian and Lagrangian costs. Luckily, the discrepancy between the two functionals can be controlled for almost minimizers, see Lemma 1.12.
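The domination of the Eulerian by the Lagrangian cost mentioned in the last bullet point can be sketched as follows (a formal computation, suppressing measure-theoretic details): writing \(j_t = v_t\rho _t\), where \(v_t(z)\) is the conditional expectation under \(\pi \) of \(y-x\) given \((1-t)x+ty=z\), Jensen’s inequality yields

```latex
\int \frac{1}{\rho_t}\,|j_t|^2
  = \int |v_t|^2 \,\mathrm{d}\rho_t
  \le \int \mathbb{E}_\pi\!\left[\,|y-x|^2 \,\middle|\, (1-t)x+ty\,\right]
      \mathrm{d}\rho_t
  = \int |y-x|^2 \,\mathrm{d}\pi,
% with equality precisely when y-x is (pi-a.e.) a function of the
% interpolation point (1-t)x+ty; this holds for minimizers, but typically
% fails for general competitors.
```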

Main Results

Let \(X,Y \subset {\mathbb {R}}^d\) be compact. We assume that the cost function \(c: X\times Y \rightarrow {\mathbb {R}}\) satisfies:

  • C1   \(c \in {\mathscr {C}}^{2}(X\times Y)\).

  • C2   For any \(x\in X\), the map \(Y\ni y \mapsto -\nabla _x c(x,y) \in {\mathbb {R}}^d\) is one-to-one.

  • C3   For any \(y\in Y\), the map \(X\ni x \mapsto -\nabla _y c(x,y) \in {\mathbb {R}}^d\) is one-to-one.

  • C4   \(\det \nabla _{xy}c(x,y)\ne 0\) for all \((x,y)\in X\times Y\).

Let \(\rho _0,\rho _1:{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be two probability densities, with \({{\,\mathrm{Spt}\,}}\rho _0 \subseteq X\) and \({{\,\mathrm{Spt}\,}}\rho _1 \subseteq Y\). It is well-known that under (an even milder regularity assumption than) condition (C1), the optimal transportation problem

$$\begin{aligned} \inf _{\pi \in {\varPi }(\rho _0,\rho _1)} \int _{{\mathbb {R}}^d\times {\mathbb {R}}^d} c(x,y)\,{\mathrm {d}}\pi , \end{aligned}$$
(1.1)

where the infimum is taken over all couplings \(\pi \) between the measures \(\rho _0\,{\mathrm {d}}x\) and \(\rho _1 \,{\mathrm {d}}y\), admits a solution \(\pi \), which we call a c-optimal coupling.

For \(R>0\) we define the set

$$\begin{aligned} {\mathbb {B}}_R := \left( B_R \times {\mathbb {R}}^d\right) \cup \left( {\mathbb {R}}^d \times B_R\right) , \end{aligned}$$
(1.2)

which is quite natural in the context of optimal transportation, because it allows for a symmetric treatment of the transport problem: it is suitable to describe all the mass that gets transported out of \(B_R\), and all the mass that is transported into \(B_R\). For \(\alpha \in (0,1)\) we write

$$\begin{aligned}{}\left[ \nabla _{xy}c\right] _{\alpha ,R} := \sup _{(x,y)\ne (x',y')\in {\mathbb {B}}_R} \frac{|\nabla _{xy}c(x,y) - \nabla _{xy}c(x',y')|}{\left( |x-x'| + |y-y'|\right) ^{\alpha }} \end{aligned}$$
(1.3)

for the \({\mathscr {C}}^{0,\alpha }\)-semi-norm of the mixed derivative \(\nabla _{xy}c\) of the cost function in the cross \({\mathbb {B}}_R\), and denote by

$$\begin{aligned}{}[\rho ]_{\alpha ,R} := \sup _{x\ne x'\in B_R} \frac{|\rho (x) - \rho (x')|}{|x-x'|^{\alpha }} \end{aligned}$$

the \({\mathscr {C}}^{0,\alpha }\)-semi-norm of \(\rho \) in \(B_R\).
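For intuition only (nothing in the paper relies on this), the semi-norm \([\rho ]_{\alpha ,R}\) can be approximated by a supremum over a finite grid; the test function \(\rho (x)=|x|^{\alpha }\), the grid, and the code below are illustrative choices, not objects from the paper. Its one-dimensional semi-norm equals 1, attained against the point \(x'=0\):

```python
import itertools

def holder_seminorm(f, points, alpha):
    # discrete C^{0,alpha} semi-norm: sup over grid pairs of
    # |f(x) - f(x')| / |x - x'|^alpha
    return max(
        abs(f(x) - f(xp)) / abs(x - xp) ** alpha
        for x, xp in itertools.combinations(points, 2)
    )

alpha = 0.5
grid = [i / 100 for i in range(101)]  # grid on [0, 1], containing 0
# |x|^alpha is exactly alpha-Hoelder with semi-norm 1 (pairs (x, 0) saturate)
print(holder_seminorm(lambda x: abs(x) ** alpha, grid, alpha))  # -> 1.0
```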

Fixing \(\rho _0(0) =\rho _1(0) = 1\), we think of the densities as non-dimensional objects. This means that \(\pi \) has units of \((\text {length})^d\), so that the Euclidean transport energy has dimensionality \((\text {length})^{d+2}\), which explains the normalization by \(R^{-(d+2)}\) in assumption (1.4) and in the definition (1.9) of \({\mathscr {E}}_R\) below, making it a non-dimensional quantity. Similarly, the normalization \(\nabla _{xy}c(0,0) = -{\mathbb {I}}\) makes the second derivatives of the cost function non-dimensional.

The main result of this paper is the following \(\epsilon \)-regularity result:

Theorem 1.1

Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= -{\mathbb {I}}\). Assume further that 0 is in the interior of \(\, X \times Y\).

Let \(\pi \) be a c-optimal coupling from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi + R^{2\alpha }\left( [\rho _0]_{\alpha , 4R}^2 + [\rho _1]_{\alpha , 4R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , 4R}^2\right) \ll _c 1, \end{aligned}$$
(1.4)

there exists a function \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \((B_R \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathrm {graph}}\,T\), and the estimate

$$\begin{aligned} {[}\nabla T]_{\alpha , R}^2 \lesssim \frac{1}{R^{d+2+2\alpha }}\int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi + [\rho _0]_{\alpha , 4R}^2 + [\rho _1]_{\alpha , 4R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , 4R}^2 \end{aligned}$$
(1.5)

holds.

We stress that the implicit constant in (1.5) is independent of the cost c. The scale \(R_0\) below which our \(\epsilon \)-regularity result holds has to be such that \(B_{2R_0} \subseteq X \cap Y\) and such that the qualitative \(L^{\infty }/L^2\) bound (Lemma 2.1) holds. We note that the dependence of \(R_0\) on c and the implicit dependence on c in the smallness assumption (1.4) are only through the qualitative information (C1)–(C4), see Remark 1.7 and Lemma 2.1 for details. Note also that, without appealing to the well-known result that the solution of (1.1) is a deterministic coupling \(\pi =({{\,\mathrm{Id}\,}}\times T)_{\#} \rho _0\), this structural property of the optimal coupling is an outcome of our iteration.

Remark 1.2

Under the same assumptions as in Theorem 1.1, in particular only asking for the one-sided energy \(\frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi \) to be small in (1.4), we can also prove the existence of a function \(T^* \in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \(({\mathbb {R}}^d\times B_R) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq \{(T^*(y),y) \, : \, y \in B_R\}\), with the same estimate on the semi-norm of \(\nabla T^*\). This follows from the symmetric nature of the assumptions (C1)–(C4), of the normalization conditions on the densities and the cost, and of the smallness assumption (1.4). We refer the reader to Step 1 of the proof of Theorem 1.1 to see how (1.4) entails smallness of a symmetric version of the Euclidean transport energy, as defined in (1.9), at a smaller scale.

As in [7], Theorem 1.1 leads to a partial regularity result for a c-optimal transport map T, that is, a map such that the c-optimal coupling between \(\rho _0\) and \(\rho _1\) is of the form

$$\begin{aligned} \pi _T := ({{\,\mathrm{Id}\,}}\times T)_{\#} \rho _0. \end{aligned}$$

The existence of such a map, as well as its particular structure, namely the fact that it derives from a potential, is a classical result in optimal transportation under assumptions (C1)–(C2) on the cost. More precisely, there exists a c-convex function \(u: X \rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} T(x) = T_u(x) := {{\,\mathrm{c-exp}\,}}_x(\nabla u(x)), \end{aligned}$$

where the c-exponential map is well-defined in view of (C1) and (C2) via

$$\begin{aligned} {{\,\mathrm{c-exp}\,}}_x(p) = y \quad \Leftrightarrow \quad p = -\nabla _x c(x,y) \quad \text {for any } x\in X, y\in Y, p\in {\mathbb {R}}^d. \end{aligned}$$
(1.6)
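For orientation, in the special case of the quadratic cost \(c(x,y)=\frac{1}{2}|x-y|^2\) one has \(-\nabla _x c(x,y) = y-x\), so that (1.6) reduces to the familiar Euclidean picture:

```latex
\operatorname{c\text{-}exp}_x(p) = x + p,
\qquad\text{hence}\qquad
T_u(x) = x + \nabla u(x) = \nabla\Big(\tfrac12 |x|^2 + u(x)\Big),
% i.e. the optimal map is the gradient of the convex potential
% \tfrac12|x|^2 + u, recovering Brenier's theorem.
```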

Recall that a function \(u: X \rightarrow {\mathbb {R}}\) is c-convex if there exists a function \(\lambda : Y\rightarrow {\mathbb {R}}\cup \{-\infty \}\) such that

$$\begin{aligned} u(x) = \sup _{y\in Y}\left( \lambda (y) - c(x,y)\right) . \end{aligned}$$

Note that by assumption (C1) and the boundedness of Y, the function u is semi-convex, i.e., there exists a constant C such that \(u+ C |x|^2\) is convex. Hence, by Alexandrov’s Theorem (see, for instance, [8, Theorem 6.9] or [24, Theorem 14.25]), u is twice differentiable at a.e. \(x\in X\). For more details on c-convexity and its connection to optimal transport and Monge–Ampère equations we refer to [24, Chapter 5] and [9, Section 5.3].
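As a sanity check of the definition, for the bilinear cost \(c(x,y)=-x\cdot y\), which induces the same optimal couplings as the quadratic cost, c-convexity reduces to ordinary convexity:

```latex
u(x) = \sup_{y\in Y}\big(\lambda(y) - c(x,y)\big)
     = \sup_{y\in Y}\big(x\cdot y + \lambda(y)\big),
% a supremum of affine functions of x, hence convex; conversely, by
% Legendre duality every convex function is of this form with
% \lambda(y) = -u^*(y), where u^* denotes the Legendre transform of u.
```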

Before stating the partial regularity result, let us mention that our \(L^2\)-based assumption on the smallness of the Euclidean energy of the forward transport is not more restrictive than the \(L^{\infty }\)-based assumption in [7, Theorems 4.3 & 5.3] that the Kantorovich potential u be close to \(\frac{1}{2}|\cdot |^2\). However, the assumption on u is not invariant under transformations of c and u that preserve optimality, whereas the optimal transport map \(T_u\), and hence our assumption on the energy \(R^{-d-2} \int _{B_{4R}} |x-T_u(x)|^2\,\rho _0(x)\,{\mathrm {d}}x\), are unaffected. For that reason we additionally have to fix \(\nabla _{xx}c(0,0)=0\) and \(\nabla _x c(0,0) = 0\) in the following corollary, and ask for \([\nabla _{xx}c]_{\alpha , 4R}\) to be small. Hence, in this result we think of the cost c as being close to \(-x \cdot y\), which is not necessarily the case in Theorem 1.1.

Corollary 1.3

Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= -{\mathbb {I}}\), \(\nabla _{xx}c(0,0)=0\), and \(\nabla _x c(0,0) = 0\). Assume further that 0 is in the interior of \(\, X \times Y\).

Let \(T_u\) be the c-optimal transport map from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with

$$\begin{aligned} \frac{1}{R^{2}} \left\| u-\tfrac{1}{2}|\cdot |^2 \right\| _{{\mathscr {C}}^0(B_{8R})} + R^{\alpha }\left( [\rho _0]_{\alpha , 4R} + [\rho _1]_{\alpha , 4R} + [\nabla _{xy}c]_{\alpha , 4R} + \left[ \nabla _{xx}c\right] _{\alpha , 4R}\right) \ll _c 1, \end{aligned}$$
(1.7)

\(T_u\in {\mathscr {C}}^{1,\alpha }(B_R)\) with

$$\begin{aligned}{}[\nabla T_u]_{\alpha , R} \lesssim \frac{1}{R^{2+\alpha }} \left\| u-\tfrac{1}{2}|\cdot |^2 \right\| _{{\mathscr {C}}^0(B_{8R})} + [\rho _0]_{\alpha , 4R} + [\rho _1]_{\alpha , 4R} + [\nabla _{xy}c]_{\alpha , 4R} + \left[ \nabla _{xx}c\right] _{\alpha , 4R}. \end{aligned}$$
(1.8)

The partial regularity statement is then as follows:

Corollary 1.4

Let \(\rho _0,\rho _1:{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be two probability densities with the properties that \(X= {{\,\mathrm{Spt}\,}}\rho _0\) and \(Y= {{\,\mathrm{Spt}\,}}\rho _1\) are bounded with \(|\partial X| =|\partial Y|= 0\), that \(\rho _0\) and \(\rho _1\) are positive on their supports, and that \(\rho _0\in {\mathscr {C}}^{0,\alpha }(X)\), \(\rho _1\in {\mathscr {C}}^{0,\alpha }(Y)\). Assume that \(c \in {\mathscr {C}}^{2, \alpha }(X \times Y)\) and that (C2)–(C4) hold. Then there exist open sets \(X' \subseteq X\) and \(Y' \subseteq Y\) with \(|X \setminus X'| = |Y \setminus Y'| = 0\) such that the c-optimal transport map T between \(\rho _0\) and \(\rho _1\) is a \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism between \(X'\) and \(Y'\).

As recently pointed out in [12] for the quadratic case, the variational approach is flexible enough to also obtain \(\epsilon \)-regularity for optimal transport maps between merely continuous densities. The modifications presented in [12] can be combined with our results to prove an \(\epsilon \)-regularity result for the class of general cost functions considered above. This will be the content of a separate note [19].

Finally, Theorem 1.1 can also be applied to optimal transportation on a Riemannian manifold \({\mathscr {M}}\) with cost given by the square of the Riemannian distance function d: if \(\rho _0, \rho _1\in {\mathscr {C}}^{0,\alpha }({\mathscr {M}})\) are two probability densities, locally bounded away from zero and infinity on \({\mathscr {M}}\), then the optimal transport map \(T: {\mathscr {M}} \rightarrow {\mathscr {M}}\) sending \(\rho _0\) to \(\rho _1\) for the cost \(c = \frac{d^2}{2}\) is a \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism outside two closed sets \({\varSigma }_X, {\varSigma }_Y \subset {\mathscr {M}}\) of measure zero. See [7, Theorem 1.4] for details.

Strategy of the Proofs

In this section we sketch the proof of the \(\epsilon \)-regularity Theorem 1.1. As in [13, 14] one of the key steps is a harmonic approximation result, which can be obtained by an explicit construction and (approximate) orthogonality on an Eulerian level.

\(L^{\infty }\) Bound on the Displacement

A crucial ingredient to the variational approach is a local \(L^{\infty }/L^2\)-estimate on the level of the displacement. More precisely, given a scale R, it gives a pointwise estimate on the non-dimensionalized displacement \(\frac{y-x}{R}\) in terms of the (non-dimensionalized) Euclidean transport energy

$$\begin{aligned} {\mathscr {E}}_R(\pi ) := \frac{1}{R^{d+2}} \int _{{\mathbb {B}}_R} |x-y|^2\,{\mathrm {d}}\pi , \end{aligned}$$
(1.9)

which amounts to a squared \(L^2\)-average of the displacement. While this looks like an inner regularity estimate in the spirit of the main result, Theorem 1.1, it is not. In fact, it is rather an interpolation estimate with the c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \) providing an invisible second control next to the energy. This becomes most apparent in the simple context of [13, Lemma 3.1] where monotonicity morally amounts to a (one-sided) \(L^{\infty }\)-control of the gradient of the displacement. The interpolation character of the estimate still shines through in the fractional exponent \(\frac{2}{d+2}\in (0,1)\) on the \(L^2\)-norm.
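The scaling behind the exponent \(\frac{2}{d+2}\) can be seen in a caricature: if a one-sided gradient bound forces a displacement of size \(h\) at some point of \(B_R\) to persist, up to a constant factor, on a ball of radius comparable to \(h\), then

```latex
{\mathscr E}_R \;\gtrsim\; \frac{1}{R^{d+2}}\, h^2 \cdot h^d
  \;=\; \Big(\frac{h}{R}\Big)^{d+2},
\qquad\text{i.e.}\qquad
h \;\lesssim\; R\,{\mathscr E}_R^{\frac{1}{d+2}},
% which is exactly the scaling of the L^infty bound (1.17); the actual
% proof replaces the gradient bound by the c-monotonicity of Spt(pi).
```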

Following [14], we here allow for general measures \(\mu \) and \(\nu \); the natural local control of these data on the energy scale is given by

$$\begin{aligned} {\mathscr {D}}_R(\mu ,\nu )&:= \frac{1}{R^{d+2}} W^2_{B_{R}}(\mu , \kappa _{\mu }) + \frac{(\kappa _{\mu }-1)^2}{\kappa _{\mu }} + \frac{1}{R^{d+2}} W^2_{B_{R}}(\nu , \kappa _{\nu }) + \frac{(\kappa _{\nu }-1)^2}{\kappa _{\nu }} , \end{aligned}$$
(1.10)

which measures locally at scale \(R>0\) the distance from given measures \(\mu \) and \(\nu \) to the Lebesgue measure, where

$$\begin{aligned} \kappa _{\mu } = \frac{\mu (B_R)}{|B_R|} \quad \text {and} \quad W_{B_R}^2(\mu ,\kappa _\mu ) = W^2(\mu \lfloor _{B_R},\kappa _{\mu }\,{\mathrm {d}}x \lfloor _{B_R}) \end{aligned}$$
(1.11)

is the quadratic Wasserstein distance between \(\mu \lfloor _{B_R}\) and \(\kappa _{\mu }\,{\mathrm {d}}x \lfloor _{B_R}\). Notice that if \(\mu = \rho _0\,{\mathrm {d}}x\) and \(\nu = \rho _1\,{\mathrm {d}}y\) with Hölder continuous probability densities such that \(\frac{1}{2} \le \rho _j \le 2\) on \(B_R\), \(j=0, 1\), then

$$\begin{aligned} {\mathscr {D}}_R \lesssim R^{2\alpha } \left( [\rho _0]_{\alpha ,R}^2 + [\rho _1]_{\alpha ,R}^2\right) , \end{aligned}$$
(1.12)

see Lemma A.4 in the appendix.
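As a purely illustrative aside (this is not how \(W_{B_R}\) is handled in the paper), the quadratic Wasserstein distance appearing in (1.11) is explicit in one dimension: between two empirical measures with the same number of atoms, the optimal coupling is monotone, i.e. it matches sorted atoms:

```python
def w2_squared_1d(xs, ys):
    # squared quadratic Wasserstein distance between the empirical measures
    # (1/n) sum_i delta_{x_i} and (1/n) sum_i delta_{y_i} on the real line:
    # the optimal coupling is monotone, so it pairs sorted atoms
    assert len(xs) == len(ys)
    return sum((x - y) ** 2 for x, y in zip(sorted(xs), sorted(ys))) / len(xs)

# translating every atom by h costs exactly h^2
print(w2_squared_1d([0.0, 1.0, 2.0], [0.5, 1.5, 2.5]))  # -> 0.25
# a permutation of the same atoms costs nothing
print(w2_squared_1d([0.0, 1.0], [1.0, 0.0]))  # -> 0.0
```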

The new aspect compared to [14, Lemma 2.9] is the general cost function c. Not surprisingly, it turns out that the result still holds provided c is close to Euclidean, where closeness is measured in the non-dimensional \({\mathscr {C}}^2\)-norm. We stress the fact that this closeness is not required on the entire “cross” \({\mathbb {B}}_{5R}\), cf. (1.2), but only on the “finite cross”

$$\begin{aligned} {\mathbb {B}}_{5R, {\varLambda } R} := \left( B_{5R} \times B_{{\varLambda } R}\right) \cup \left( B_{{\varLambda } R} \times B_{5R}\right) . \end{aligned}$$
(1.13)

This is crucial, since only this smallness is guaranteed by the finiteness of the \({\mathscr {C}}^{2,\alpha }\)-norm, cf. (1.18) below. This sharpening is a consequence of the qualitative hypotheses (C1)–(C4).

Proposition 1.5

Assume that the cost function c satisfies (C1)–(C4), and let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling with c-monotone support.

For all \({\varLambda } < \infty \) and for all \(R>0\) such that

$$\begin{aligned} {\mathbb {B}}_{5R} \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathbb {B}}_{5R, {\varLambda } R}, \end{aligned}$$
(1.14)

and for which

$$\begin{aligned} {\mathscr {E}}_{6R} + {\mathscr {D}}_{6R}&\ll 1, \end{aligned}$$
(1.15)
$$\begin{aligned} \text {and} \quad \Vert \nabla _{xy}c + {\mathbb {I}}\,\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R, {\varLambda } R})}&\ll 1, \end{aligned}$$
(1.16)

we have that

$$\begin{aligned} (x,y) \in {\mathbb {B}}_{4R} \cap {{\,\mathrm{Spt}\,}}\pi \quad \Rightarrow \quad |x-y| \lesssim R \left( {\mathscr {E}}_{6R} + {\mathscr {D}}_{6R}\right) ^{\frac{1}{d+2}}. \end{aligned}$$
(1.17)

Remark 1.6

A close look at the proof of Proposition 1.5 actually tells us that if \({\mathscr {E}}_{6R}\) is replaced in (1.15) by the one-sided energy, that is, if we assume

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_{6R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi \ll 1, \end{aligned}$$

and if (1.14) is replaced by the one-sided inclusion

$$\begin{aligned} (B_{5R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathbb {B}}_{5R, {\varLambda } R}, \end{aligned}$$

then we still get a one-sided \(L^{\infty }\) bound in the form of

$$\begin{aligned} (x,y) \in (B_{4R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \quad \Rightarrow \quad |x-y| \lesssim R\left( \frac{1}{R^{d+2}} \int _{B_{6R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {D}}_{6R} \right) ^{\frac{1}{d+2}}. \end{aligned}$$

This observation will be useful in the proof of Theorem 1.1 to relate the one-sided energy in (1.4) to the full energy in Proposition 1.16.

Note that, due to assumption (1.14), Proposition 1.5 might appear rather useless: indeed, in order to obtain the \(L^{\infty }\) bound (1.17), one basically has to assume a (qualitative) \(L^{\infty }\) bound in the sense that there is a constant \({\varLambda }<\infty \) such that \(x\in B_{5R}\) implies \(y\in B_{{\varLambda } R}\). However, as we show in Lemma 2.1, due to the global assumptions (C1)–(C4) alone, there exist a scale \(R_0>0\) and a constant \({\varLambda }_0<\infty \) such that (1.14) holds with \({\varLambda } = {\varLambda }_0\). Moreover, in the Campanato iteration used to prove Theorem 1.1, which is based on suitable affine changes of coordinates, the qualitative \(L^{\infty }\) bound (1.14) is reproduced in each step of the iteration (with a constant \({\varLambda }\) that after the first step can be fixed throughout the iteration; e.g. \({\varLambda } = 27\) works).

Remark 1.7

There is an apparent mismatch between the domains involved in the closeness assumptions on c in Theorem 1.1 and in Proposition 1.5: we assume \(R^{2\alpha }[\nabla _{xy}c]_{\alpha , 6R}^2 \ll 1\) in Theorem 1.1 and \(\Vert \nabla _{xy}c+{\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R,{\varLambda } R})} \ll 1\) in Proposition 1.5. We are able to relate the two assumptions thanks to the qualitative \(L^{\infty }\) bound (1.14): if \(\nabla _{xy}c(0,0)=-{\mathbb {I}}\), we have for all \({\varLambda }<\infty \), using the inclusion \({\mathbb {B}}_{5R, {\varLambda } R} \subseteq {\mathbb {B}}_{6R}\),

$$\begin{aligned} \Vert \nabla _{xy}c + {\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R, {\varLambda } R})}&= \sup _{(x,y) \in {\mathbb {B}}_{5R, {\varLambda } R}} |\nabla _{xy} c(x,y) - \nabla _{xy} c(0,0)| \lesssim _{{\varLambda }} R^{\alpha } [\nabla _{xy}c]_{\alpha ,6R}. \end{aligned}$$
(1.18)

Thus, if \(R^{2\alpha } [\nabla _{xy}c]_{\alpha ,6R}^2\) is chosen small enough, then the assumption (1.16) in Proposition 1.5 is fulfilled.

Almost-Minimality with Respect to Euclidean Cost

One of the main new contributions of this work is showing that the concept of almost-minimality, which is well-established in the theory of minimal surfaces, can lead to important insights also in optimal transportation. The key observation is that if c is quantitatively close to Euclidean cost, then a minimizer of (1.1) is almost-minimizing for the quadratic cost.

One difficulty in applying the concept of almost-minimality is that we are dealing with local quantities, for which local minimality (being minimizing with respect to its own boundary conditions) is the right framework to adopt; the following lemma provides exactly this localization.

Lemma 1.8

Let \(\pi \in {\varPi }(\mu ,\nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \). Then for any Borel set \({\varOmega } \subseteq {\mathbb {R}}^d \times {\mathbb {R}}^d\) the coupling \(\pi _{{\varOmega }}:= \pi \lfloor _{{\varOmega }}\) is c-optimal given its own marginals, i.e. c-optimal between the measures \(\mu _{{\varOmega }}\) and \(\nu _{{\varOmega }}\) defined via

$$\begin{aligned} \mu _{{\varOmega }}(A) = \pi ( (A\times {\mathbb {R}}^d)\cap {\varOmega }), \quad \nu _{{\varOmega }}(A) = \pi ( ({\mathbb {R}}^d\times A)\cap {\varOmega }), \end{aligned}$$
(1.19)

for any Borel measurable \(A\subseteq {\mathbb {R}}^d\).

This lemma allows us to restrict any c-optimal coupling \(\pi \) to a “good” set, where particle trajectories are well-behaved in the sense that they satisfy an \(L^{\infty }\) bound. In particular, we have the following corollary:

Corollary 1.9

Let \(\pi \in {\varPi }(\mu ,\nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \) with the property that there exists \(M\le 1\) such that for all \((x,y) \in {{\,\mathrm{Spt}\,}}\pi \)

$$\begin{aligned} |x-y|\le M R. \end{aligned}$$

Then the coupling \(\pi _R := \pi \lfloor _{{\mathbb {B}}_R}\) is c-optimal between the measures \(\mu _R\) and \(\nu _R\) as defined in (1.19) (with \({\varOmega } = {\mathbb {B}}_R\)), and we have that \({{\,\mathrm{Spt}\,}}\mu _R, {{\,\mathrm{Spt}\,}}\nu _R \subseteq B_{2R}\) (in particular \({{\,\mathrm{Spt}\,}}\pi _R \subseteq B_{2R} \times B_{2R}\)), \(\mu _R = \mu \) and \(\nu _R = \nu \) on \(B_R\), and \(\mu _R\le \mu \), \(\nu _R\le \nu \).

One of the main observations now is that c-optimal couplings of the type considered in Corollary 1.9 are almost-minimizers of the Euclidean transport cost. The following assumptions (1.20) and (1.21) should be read as properties satisfied by the marginal measures \(\mu \) and \(\nu \) of the restriction of a c-optimal coupling to a finite cross on which the \(L^{\infty }\) bound (1.17) holds. Moreover, one of the marginals should be close to the Lebesgue measure in the sense that \(\mu (B_R)\lesssim R^d\).

Proposition 1.10

Let \(\mu \) and \(\nu \) be two measures such that

$$\begin{aligned}&{{\,\mathrm{Spt}\,}}\mu \subseteq B_R, \, {{\,\mathrm{Spt}\,}}\nu \subseteq B_R, \end{aligned}$$
(1.20)
$$\begin{aligned}&\mu (B_R) \le 2 |B_R|, \end{aligned}$$
(1.21)

for some \(R>0\). Let \(\pi \in {\varPi }(\mu , \nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \). Then \(\pi \) is almost-minimizing for the Euclidean cost, in the sense that for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\) we have that

$$\begin{aligned} \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}{\widetilde{\pi }} + R^{d+2} {\varDelta }_{R}, \end{aligned}$$
(1.22)

where

$$\begin{aligned} {\varDelta }_R&:= C \Vert \nabla _{xy} c + {\mathbb {I}}\Vert _{{\mathscr {C}}^{0}(B_R\times B_R)} \, {\mathscr {E}}_R(\pi )^{\frac{1}{2}} \end{aligned}$$
(1.23)

for some constant C depending only on d.

The above statement is most naturally formulated in terms of couplings, that is, in the Kantorovich framework. However, in the proof of the harmonic approximation result (see Theorem 1.13 below), (almost-)minimality is needed in the Eulerian picture, where the construction of a competitor is carried out.

The Eulerian Side of Optimal Transportation

Given a coupling \(\pi \in {\varPi }(\mu , \nu )\) between measures \(\mu \) and \(\nu \), we can define its Eulerian description, i.e. the density-flux pair \((\rho _t,j_t)\) associated to the coupling \(\pi \) by

$$\begin{aligned} \int \zeta \,{\mathrm {d}}\rho _t := \int \zeta ((1-t)x + ty) \,{\mathrm {d}}\pi , \quad \int \xi \cdot {\mathrm {d}}j_t := \int \xi ((1-t)x + ty)\cdot (y-x) \,{\mathrm {d}}\pi \end{aligned}$$
(1.24)

for \(t\in [0,1]\) and for all test functions \(\zeta \in {\mathscr {C}}_c^{0}({\mathbb {R}}^d \times [0,1])\) and fields \(\xi \in {\mathscr {C}}_c^{0}({\mathbb {R}}^d \times [0,1])^d\). It is easy to check that \((\rho _t,j_t)\) is a distributional solution of the continuity equation

$$\begin{aligned} \partial _t \rho _t + \nabla \cdot j_t = 0, \qquad \rho _0 = \mu ,\quad \rho _1 = \nu , \end{aligned}$$
(1.25)

that is, for any \(\zeta \in {\mathscr {C}}^1_c({\mathbb {R}}^d \times [0,1])\) there holds

$$\begin{aligned} \int \left( \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j \right) = \int \zeta _1\,{\mathrm {d}}\nu - \int \zeta _0\,{\mathrm {d}}\mu . \end{aligned}$$
(1.26)

For brevity, we will often write \((\rho ,j) := (\rho _t \,{\mathrm {d}}t, j_t \,{\mathrm {d}}t)\). Being divergence-free in \((t,x)\), the density-flux pair \((\rho , j)\) admits internal (and external) traces on \(\partial (B_R \times (0,1))\) for any \(R>0\), see [6] for details, i.e., there exists a measure \(f_R\) on \(\partial B_R \times (0,1)\) such that

$$\begin{aligned} \int _{B_R\times [0,1]} \left( \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j \right) = \int _{B_R} \zeta _1\,{\mathrm {d}}\nu - \int _{B_R} \zeta _0\,{\mathrm {d}}\mu + \int _{\partial B_R\times [0,1]} \zeta \,{\mathrm {d}}f_R. \end{aligned}$$
(1.27)

We also introduce the time-averaged measure \({\overline{f}}_R\) on \(\partial B_R\) defined via

$$\begin{aligned} \int _{\partial B_R} \zeta \,{\mathrm {d}}{\overline{f}}_R := \int _{\partial B_R\times [0,1]} \zeta \,{\mathrm {d}}f_R. \end{aligned}$$
(1.28)

Similarly, defining the measure \({\overline{j}} := \int _0^1\,{\mathrm {d}}j(\cdot , t)\), it is easy to see that

$$\begin{aligned} \nabla \cdot {\overline{j}} = \mu - \nu \end{aligned}$$

and that therefore \({\overline{j}}\) admits internal and external traces; for every \(R>0\) such that \(|{\overline{j}}|(\partial B_R) = \mu (\partial B_R) = \nu (\partial B_R) = 0\), the internal and external traces agree, and the internal trace coincides with \({\overline{f}}_R\).
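As a quick numerical sanity check, not part of the paper's argument, one can verify the identity \(\nabla \cdot {\overline{j}} = \mu - \nu \) in its weak form \(\int \nabla \zeta \cdot {\mathrm {d}}{\overline{j}} = \int \zeta \,{\mathrm {d}}\nu - \int \zeta \,{\mathrm {d}}\mu \) for an atomic coupling in one dimension, with \((\rho _t, j_t)\) built from (1.24); along each straight trajectory the t-integral telescopes to \(\zeta (y) - \zeta (x)\). The coupling, the weights, and the test function \(\zeta \) below are arbitrary choices.

```python
import math

# An atomic coupling pi = sum of weighted point masses m*delta_{(x,y)} in 1d.
pi = [(0.5, -1.0, 0.3), (0.3, 0.2, 1.1), (0.2, 0.7, -0.4)]

zeta = lambda x: math.sin(2 * x) + x ** 2        # smooth test function
dzeta = lambda x: 2 * math.cos(2 * x) + 2 * x    # its derivative

# Left-hand side: int grad(zeta) . d j_bar, with j_bar = int_0^1 j_t dt and
# (rho_t, j_t) defined from pi as in (1.24); trapezoidal quadrature in t.
N = 20000
lhs = 0.0
for m, x, y in pi:
    for k in range(N + 1):
        t = k / N
        w = (0.5 if k in (0, N) else 1.0) / N
        lhs += w * m * dzeta((1 - t) * x + t * y) * (y - x)

# Right-hand side: int zeta d(nu) - int zeta d(mu), mu and nu the marginals.
rhs = sum(m * (zeta(y) - zeta(x)) for m, x, y in pi)
print(abs(lhs - rhs))  # small quadrature error
```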

Note that we have the duality [20, Proposition 5.18]

$$\begin{aligned} \frac{1}{2}\int \frac{1}{\rho } |j|^2 = \sup _{\xi \in {\mathscr {C}}^{0}_{c}({\mathbb {R}}^d \times [0,1])^d} \int \xi \cdot {\mathrm {d}}j - \frac{|\xi |^2}{2}\,{\mathrm {d}}\rho , \end{aligned}$$
(1.29)

which immediately implies the subadditivity of \((\rho ,j) \mapsto \int \frac{1}{\rho } |j|^2\). A localized version of (1.29), in the form of

$$\begin{aligned} \frac{1}{2}\int _{B\times [0,1]} \frac{1}{\rho } |j|^2 = \sup _{\xi \in {\mathscr {C}}^{0}_{c}(B \times [0,1])^d} \int \xi \cdot {\mathrm {d}}j - \frac{|\xi |^2}{2}\,{\mathrm {d}}\rho , \end{aligned}$$
(1.30)

also holds for any open set \(B\subseteq {\mathbb {R}}^d\). From the inequality \(\xi \cdot (y-x) -\frac{1}{2} |\xi |^2 \le \frac{1}{2} |x-y|^2\), which is true for any \(\xi , x, y \in {\mathbb {R}}^d\), the duality formula (1.29) immediately implies that the Eulerian cost of the density-flux pair \((\rho , j)\) corresponding to a coupling \(\pi \) via (1.24) is always dominated by the Lagrangian cost of \(\pi \), i.e.

$$\begin{aligned} \frac{1}{2}\int \frac{1}{\rho } |j|^2 \le \frac{1}{2} \int |x-y|^2 \,{\mathrm {d}}\pi . \end{aligned}$$
(1.31)

We stress that this inequality is in general strict; see the example in Footnote 10.
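To illustrate how (1.31) follows from the duality (1.29), the following sketch (again not part of the argument; the coupling and the test field \(\xi \) are arbitrary choices) evaluates the dual functional for one admissible \(\xi \) and checks that it is dominated by the Lagrangian cost, in line with the pointwise inequality \(\xi \cdot (y-x) - \frac{1}{2}|\xi |^2 \le \frac{1}{2}|x-y|^2\).

```python
import math

# Weighted atoms m*delta_{(x,y)} forming a coupling pi in 1d.
pi = [(0.5, -1.0, 0.3), (0.3, 0.2, 1.1), (0.2, 0.7, -0.4)]
xi = lambda x, t: math.sin(3 * x) - 0.5 * t * x  # an arbitrary test vector field

# Dual functional int xi . dj - (1/2) int |xi|^2 drho via (1.24):
# rho_t and j_t are pushforwards of pi under X_t = (1-t)x + t*y.
N = 20000
dual = 0.0
for m, x, y in pi:
    for k in range(N + 1):
        t = k / N
        w = (0.5 if k in (0, N) else 1.0) / N
        X = (1 - t) * x + t * y
        dual += w * m * (xi(X, t) * (y - x) - 0.5 * xi(X, t) ** 2)

lagr = sum(0.5 * m * (x - y) ** 2 for m, x, y in pi)  # (1/2) int |x-y|^2 dpi
print(dual, lagr)
```

Since the pointwise inequality holds at every atom and time, the dual value stays below the Lagrangian cost for every choice of \(\xi \); taking the supremum over \(\xi \) gives exactly (1.31).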

In contrast to the case of quadratic cost \(c(x,y) = \frac{1}{2}|x-y|^2\), or, equivalently, \({\widetilde{c}}(x,y) = -x \cdot y\), for an optimal coupling \(\pi \) for the cost c, the density-flux pair \((\rho ,j)\) associated to \(\pi \) in the sense of (1.24) is in general not optimal for the Benamou–Brenier formulation [2] of optimal transportation, i.e.,

$$\begin{aligned} W^2(\mu , \nu ) = \inf \left\{ \frac{1}{2} \int \frac{1}{\rho } |j|^2: \partial _t \rho + \nabla \cdot j = 0, \rho _0 = \mu , \rho _1 = \nu \right\} , \end{aligned}$$
(1.32)

where the continuity equation and boundary conditions are understood in the weak sense (1.26), see [23, Chapter 8] for details.

Remark 1.11

For minimizers \(\pi \) of the Euclidean transport cost, i.e. \(\pi \in {\mathrm {argmin}}\, W^2(\mu ,\nu )\), we have equality in (1.31),

$$\begin{aligned} \frac{1}{2}\int \frac{1}{\rho } |j|^2 = \frac{1}{2} \int |x-y|^2 \,{\mathrm {d}}\pi . \end{aligned}$$
(1.33)

Indeed, if \((\rho ,j)\) is the Eulerian description of \(\pi \), then by (1.32)

$$\begin{aligned} \frac{1}{2} \int |x-y|^2 \,{\mathrm {d}}\pi = W^2(\mu , \nu ) \le \frac{1}{2} \int \frac{1}{\rho }|j|^2, \end{aligned}$$

which together with (1.31) implies (1.33).

As another consequence, while in the Euclidean case displacement convexity guarantees that the Eulerian density satisfies \(\rho \le 1\) (up to a small error), cf. [13, Lemma 4.2], in our case \(\rho \) is in general merely a measure. This complication is already present in [14], where it led to important new insights into dealing with marginals that are not absolutely continuous with respect to the Lebesgue measure in the Euclidean case, upon which we also build in this work.

The Eulerian version of the almost-minimality Proposition 1.10 can then be obtained via the following lemma:

Lemma 1.12

Let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling between the measures \(\mu \) and \(\nu \) with the property that there exists a constant \({\varDelta } <\infty \) such that

$$\begin{aligned} \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}{\widetilde{\pi }} + {\varDelta } \end{aligned}$$
(1.34)

for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\), and let \((\rho ,j)\) be its Eulerian description defined in (1.24). Then

$$\begin{aligned} \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}\pi \le \frac{1}{2} \int \frac{1}{\rho } |j|^2 + {\varDelta }, \end{aligned}$$

and

$$\begin{aligned} \frac{1}{2} \int \frac{1}{\rho } |j|^2 \le \frac{1}{2} \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + {\varDelta } \end{aligned}$$

for any pair of measures \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying

$$\begin{aligned} \int \left( \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} \, \right) = \int \zeta _1\,{\mathrm {d}}\nu - \int \zeta _0\,{\mathrm {d}}\mu \end{aligned}$$
(1.35)

for all \(\zeta \in {\mathscr {C}}^\infty _c({\mathbb {R}}^d\times [0,1])\).

The Harmonic Approximation Result

The main ingredient in the proof of Theorem 1.1 is the harmonic approximation result, which states that if a coupling between two measures supported on a ball (say of radius 7R for some \(R>0\)) satisfies the \(L^{\infty }\) bound of Proposition 1.5 globally on its support and is almost-minimizing with respect to the Euclidean cost, then the displacement \(y-x\) is quantitatively close to a harmonic gradient field \(\nabla {\varPhi }\). This is actually a combination of a harmonic approximation result in the Eulerian picture (Theorem 1.13) and Lemma 1.14, which allows us to transfer the Eulerian information back to the Lagrangian framework.

Theorem 1.13

(Harmonic approximation). Let \(R>0\) and \(\mu ,\nu \) be two measures with the property that

$$\begin{aligned} {{\,\mathrm{Spt}\,}}\mu \subseteq B_{7R}, \quad {{\,\mathrm{Spt}\,}}\nu \subseteq B_{7R}, \quad \text {and}\quad \mu (B_{7R})\le 2|B_{7R}|. \end{aligned}$$
(1.36)

Let further \(\pi \in {\varPi }(\mu ,\nu )\) be a coupling between the measures \(\mu \) and \(\nu \), such that:

  1. 1.

    \(\pi \) satisfies a global \(L^{\infty }\)-bound, that is, there exists a constant \(M\le 1\) such that

    $$\begin{aligned} |x-y| \le M R \quad \text {for any } (x,y) \in {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$
    (1.37)
  2. 2.

    If \((\rho ,j)\) is the Eulerian description of \(\pi \) as defined in (1.24), then there exists a constant \({\varDelta }_R<\infty \) such that

    $$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + R^{d+2} {\varDelta }_R \end{aligned}$$
    (1.38)

    for any Eulerian competitor, i.e. any pair of measures \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying

    $$\begin{aligned} \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} = \int _{B_R} \zeta _1 \,{\mathrm {d}}\nu - \int _{B_R} \zeta _0 \,{\mathrm {d}}\mu + \int _{\partial B_R \times [0,1]} \zeta \,{\mathrm {d}}f_R. \end{aligned}$$

Then for every \(0<\tau \ll 1\), there exist \(\epsilon _{\tau }>0\) and \(C,C_{\tau }<\infty \) such that, provided

$$\begin{aligned} {\mathscr {E}}_{6R}(\pi ) + {\mathscr {D}}_{6R}(\mu , \nu ) \le \epsilon _{\tau }, \end{aligned}$$
(1.39)

the following holds: There exists a radius \(R_* \in (3R, 4R)\) such that if \({\varPhi }\) is the solution, unique up to an additive constant, of

$$\begin{aligned} {\varDelta } {\varPhi } = \kappa _{\mu } - \kappa _{\nu } \quad \text {in } B_{R_*} \quad \text {and} \quad \nu \cdot \nabla {\varPhi } = \nu \cdot {\overline{j}} = {\overline{f}}_{R_*} \quad \text {on } \partial B_{R_*}, \end{aligned}$$
(1.40)

then (here \(\nu \) in (1.40) denotes the outer unit normal to \(\partial B_{R_*}\), not the measure \(\nu \))

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_{2R}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla {\varPhi }|^2 \le \left( \tau + \frac{C M}{\tau }\right) {\mathscr {E}}_{6R} + C_{\tau }{\mathscr {D}}_{6R} + {\varDelta }_R, \end{aligned}$$
(1.41)

and

$$\begin{aligned} \frac{1}{R^2} \sup _{B_{2R}} |\nabla {\varPhi }|^2 + \sup _{B_{2R}} |\nabla ^2{\varPhi }|^2 + R^2 \sup _{B_{2R}} |\nabla ^3{\varPhi }|^2 \lesssim {\mathscr {E}}_{6R} + {\mathscr {D}}_{6R}. \end{aligned}$$
(1.42)

From the Eulerian version of the harmonic approximation Theorem 1.13 we can also obtain a Lagrangian version via almost-minimality:

Lemma 1.14

Let \(R>0\) and let \(\pi \in {\varPi }(\mu ,\nu )\) be a coupling between the measures \(\mu \) and \(\nu \), such that

  1. 1.

    \(\pi \) satisfies a global \(L^{\infty }\)-bound, that is, there exists a constant \(M\le 1\) such that

    $$\begin{aligned} |x-y| \le M R \quad \text {for any } (x,y) \in {{\,\mathrm{Spt}\,}}\pi ; \end{aligned}$$
    (1.43)
  2. 2.

    if \((\rho ,j)\) is the Eulerian description of \(\pi \) as defined in (1.24), then there exists a constant \({\varDelta }_R<\infty \) such that

    $$\begin{aligned} \int |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{\rho } |j|^2 + R^{d+2} {\varDelta }_R. \end{aligned}$$
    (1.44)

Then for any smooth function \({\varPhi }\) there holds

(1.45)

One-Step Improvement and Campanato Iteration

With the harmonic approximation result at hand, we can derive a one-step improvement result, which roughly says that if the coupling \(\pi \) is quantitatively close to \(({{\,\mathrm{Id}\,}}\times {{\,\mathrm{Id}\,}})_{\#}\rho _0\) on some scale R, expressed in terms of the estimate

$$\begin{aligned} {\mathscr {E}}_R(\pi ) + R^{2\alpha }\left( [\rho _0]_{\alpha , R}^2 + [\rho _1]_{\alpha , R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , R}^2\right) \ll 1, \end{aligned}$$

and the fact that the (qualitative) \(L^{\infty }\) bound on the displacement (1.14) holds, then on a smaller scale \(\theta R\), after an affine change of coordinates, it is even closer to \(({{\,\mathrm{Id}\,}}\times {{\,\mathrm{Id}\,}})_{\#}\rho _0\). This is the basis of a Campanato iteration to obtain the existence of the optimal transport map T and its \({\mathscr {C}}^{1,\alpha }\) regularity.

We start with the affine change of coordinates and its properties:

Lemma 1.15

Let \(\pi \in {\varPi }(\mu ,\nu )\) be an optimal transport plan with respect to the cost function c between the measures \(\mu ({\mathrm {d}}x) = \rho _0(x)\,{\mathrm {d}}x\) and \(\nu ({\mathrm {d}}y) = \rho _1(y)\,{\mathrm {d}}y\).

Given a non-singular matrix \(B \in {\mathbb {R}}^{d\times d}\) and a vector \(b\in {\mathbb {R}}^d\), we perform the affine change of coordinates

$$\begin{aligned} \begin{pmatrix} {\widehat{x}} \\ {\widehat{y}} \end{pmatrix} = \begin{pmatrix} Bx \\ \gamma B^{-*} D^* (y-b) \end{pmatrix} =: Q(x,y), \end{aligned}$$
(1.46)

where \(D = -\nabla _{xy}c(0,b)\) and \(\gamma = \left( \frac{\rho _1(b)}{\rho _0(0)} \frac{|\det B|^2}{|\det D|}\right) ^{\frac{1}{d}}\). If we let

$$\begin{aligned} {\widehat{\rho }}_0({\widehat{x}})&= \frac{\rho _0(x)}{\rho _0(0)}, \quad {\widehat{\rho }}_1({\widehat{y}}) = \frac{\rho _1(y)}{\rho _1(b)}, \quad {\widehat{c}}({\widehat{x}},{\widehat{y}}) = \gamma c(x,y), \end{aligned}$$

so that in particular \({\widehat{\rho }}_0(0) = {\widehat{\rho }}_1(0) = 1\) and

$$\begin{aligned} \nabla _{{\widehat{x}}{\widehat{y}}} {\widehat{c}}({\widehat{x}},{\widehat{y}}) = BD^{-1}\nabla _{xy}c(x,y)B^{-1}, \end{aligned}$$
(1.47)

from which it follows that \(\nabla _{{\widehat{x}}{\widehat{y}}}{\widehat{c}}(0,0) = -{\mathbb {I}}\), then the coupling

$$\begin{aligned} {\widehat{\pi }} := \frac{|\det B|}{\rho _0(0)} Q_{\#}\pi \end{aligned}$$
(1.48)

is an optimal coupling between the measures \({\widehat{\mu }}({\mathrm {d}}{\widehat{x}}) = {\widehat{\rho }}_0({\widehat{x}})\,{\mathrm {d}}{\widehat{x}}\) and \({\widehat{\nu }}({\mathrm {d}}{\widehat{y}}) = {\widehat{\rho }}_1({\widehat{y}})\,{\mathrm {d}}{\widehat{y}}\) with respect to the cost function \({\widehat{c}}\).

In the change of variables we perform, the role of D is to ensure that we get a normalized cost, i.e. \(\nabla _{{\widehat{x}}{\widehat{y}}}{\widehat{c}}(0,0) = -{\mathbb {I}}\), while \(\gamma \) and \(\det B\) in (1.48) are needed for \({\widehat{\pi }}\) to define a transportation plan between the new densities. We refer the reader to Appendix 1 for a proof of this lemma.

Proposition 1.16

Assume that \(\rho _0(0) = \rho _1(0) = 1\) and \(\nabla _{xy}c(0,0) = -{\mathbb {I}}\), and let \(\pi \in {\varPi }(\rho _0, \rho _1)\) be c-optimal.

Then for all \(\beta \in (0, 1)\), there exist \(\theta \in (0,1)\) and \(C_{\beta } < \infty \) such that for all \({\varLambda }<\infty \) and \(R>0\) for which

(1.49)
$$\begin{aligned}&{\mathscr {E}}_{9R}(\pi ) + R^{2\alpha }\left( [\rho _0]_{\alpha , 9R}^2 + [\rho _1]_{\alpha , 9R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , 9R}^2\right) \ll _{{\varLambda }} 1, \end{aligned}$$
(1.50)

there exist a symmetric matrix \(B\in {\mathbb {R}}^{d\times d}\) and a vector \(b \in {\mathbb {R}}^d\) with

$$\begin{aligned} |B-{\mathbb {I}}|^2 + \frac{1}{R^2}|b|^2 \lesssim {\mathscr {E}}_{9R}(\pi ) + R^{2\alpha }\left( [\rho _0]_{\alpha , 9R}^2 + [\rho _1]_{\alpha , 9R}^2\right) , \end{aligned}$$
(1.51)

such that, performing the change of variables in Lemma 1.15, the coupling \({\widehat{\pi }}\) is \({\widehat{c}}\)-optimal between the measures with densities \({\widehat{\rho }}_0\) and \({\widehat{\rho }}_1\) and there holds

$$\begin{aligned} {\mathscr {E}}_{\theta R}({\widehat{\pi }}) \le \theta ^{2\beta } {\mathscr {E}}_{9R}(\pi ) + C_{\beta }R^{2\alpha }\left( [\rho _0]_{\alpha , 9R}^2 + [\rho _1]_{\alpha , 9R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , 9R}^2\right) . \end{aligned}$$
(1.52)

Moreover, we have the inclusion

(1.53)

Let us give a rough sketch of how the one-step improvement result can now be iterated: In a first step, the qualitative bound on the displacement is obtained from the global assumptions (C1)–(C4) on the cost function, see Lemma 2.1. This yields an initial scale \(R_0>0\) below which the cost function is close enough to the Euclidean cost function for (1.14) to hold. We may therefore apply Proposition 1.16, so that after an affine change of coordinates the energy inequality (1.52) holds, the transformed densities and cost function are again normalized at the origin, optimality is preserved, and the qualitative \(L^{\infty }\) bound (1.53) holds for the new coupling. We can therefore apply the one-step improvement Proposition 1.16 again, going to smaller and smaller scales. Together with Campanato’s characterization of Hölder regularity, this yields the claimed existence and \({\mathscr {C}}^{1,\alpha }\) regularity of T.
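The decay produced by the iteration can be mimicked by a scalar toy model (illustrative only; the values of \(\theta , \alpha , \beta , C\) below are made up): iterating a scalar analogue of (1.52), \(E_{k+1} = \theta ^{2\beta } E_k + C \theta ^{2\alpha k}\) with \(r_k = \theta ^k\) and \(R = 1\), and checking the Campanato-type bound \(E_k \le K\, \theta ^{2\alpha k}\) obtained by summing the geometric series, which requires \(\beta > \alpha \).

```python
theta, alpha, beta, C = 0.5, 0.5, 0.9, 1.0

E, rate = [1.0], theta ** (2 * alpha)
for k in range(60):
    # one-step improvement: contraction by theta^(2*beta) plus the Hoelder
    # error of the data at scale r_k = theta^k (with R = 1)
    E.append(theta ** (2 * beta) * E[-1] + C * theta ** (2 * alpha * k))

# geometric-series constant for the bound E_k <= K * theta^(2*alpha*k)
K = E[0] + C * theta ** (-2 * alpha) / (1 - theta ** (2 * (beta - alpha)))
print(all(E[k] <= K * rate ** k for k in range(len(E))))
```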

The details of the above parts of the proof of our main Theorem 1.1 are explained in the sections below, with a full proof of Theorem 1.1 in Section 7. The proof of Corollary 1.4 is essentially a combination of the ideas in [13] and [7], and is given for the convenience of the reader in Section 9.

We conclude the introduction with a comment on the extension of the results presented above to general almost-minimizers with respect to Euclidean cost in the following sense:

Definition 1.17

(Almost-minimality w.r.t. Euclidean cost (on all scales)). A coupling \(\pi \in {\varPi }(\mu ,\nu )\) is almost-minimal with respect to Euclidean cost if there exist \(R_0>0\) and a non-decreasing function \({\varDelta }_{\cdot }: (0,R_0] \rightarrow [0,\infty )\) such that for all \(r\le R_0\) and \((x_0,y_0)\) in the interior of \(X \times Y\) there holds

$$\begin{aligned} \int |y-x|^2\,{\mathrm {d}}\pi \le \int |y-x|^2\,{\mathrm {d}}{\widetilde{\pi }} + r^{d+2}{\varDelta }_r \end{aligned}$$
(1.54)

for all \({\widetilde{\pi }} \in {\varPi }(\mu ,\nu )\) such that \({{\,\mathrm{Spt}\,}}(\pi -{\widetilde{\pi }}) \subseteq (B_r(x_0)\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(y_0))\).

We will restrict our attention to almost-minimizers in the class of deterministic transport plans coming from a Monge map T, i.e. \(\pi =\pi _T = ({{\,\mathrm{Id}\,}}, T)_{\#} \mu \), and call a transport map almost-minimizing with respect to Euclidean cost if for all \(r\le R_0\) and \(x_0 \in {\mathrm {int}}X\) there holds

$$\begin{aligned} \int |T(x) - x|^2 \,\mu ({\mathrm {d}}x) \le \int |{\widetilde{T}}(x) - x|^2\,\mu ({\mathrm {d}}x) + r^{d+2} {\varDelta }_r \end{aligned}$$
(1.55)

for all \({\widetilde{T}}\) such that \({\widetilde{T}}_{\#}\mu =\nu \) and \({\mathrm {graph}} {\widetilde{T}} = {\mathrm {graph}} T\) outside \((B_r(x_0)\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(T(x_0)))\).

In this situation, we get the following generalization of Theorem 1.1, whose proof will be sketched in Section 8.

Theorem 1.18

Assume that \(\rho _0(0) = \rho _1(0) = 1\), and that 0 is in the interior of \(X\times Y\). Let \(T:X\rightarrow Y\) be an almost-minimizing transport map from \(\mu \) to \(\nu \) with rate function \({\varDelta }_r = C r^{2\alpha }\) for some \(C<\infty \). Assume further that T is invertible. There exists \(R_1>0\) such that for any \(R\le R_1\) with

$$\begin{aligned}&\frac{1}{R^{d+2}} \left( \int _{B_{4R}} |T(x) - x|^2\,\mu ({\mathrm {d}}x) + \int _{B_{4R}} |T^{-1}(y) - y|^2\,\nu ({\mathrm {d}}y) \right) \\&\quad + R^{2\alpha } \left( [\rho _0]_{\alpha , 4R}^2 + [\rho _1]_{\alpha ,4R}^2 \right) \ll 1, \end{aligned}$$

there holds \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\).

An \(L^{\infty }\) Bound on the Displacement

In this section we establish an \(L^{\infty }\) bound on the displacement for transference plans \(\pi \in {\varPi }(\mu , \nu )\) with c-monotone support, that is,

$$\begin{aligned} c(x, y) + c(x',y') \le c(x, y') + c(x',y) \quad \text {for all} \quad (x,y), (x',y') \in {{\,\mathrm{Spt}\,}}\pi , \end{aligned}$$
(2.1)

provided that the transport cost is small, the marginals \(\mu , \nu \) are close to the Lebesgue measure, and the cost function c is close to the Euclidean cost function. In Lemma 2.1 we combine the c-monotonicity (2.1) with the qualitative hypotheses (C1)–(C4) and a compactness argument to obtain a qualitative version of the \(L^\infty /L^2\)-bound, which merely expresses finite expansion. In Proposition 1.5, this qualitative \(L^{\infty }/L^2\) bound in the form of (1.14) is upgraded to the desired quantitative version under the scale-invariant smallness assumption (1.16); the latter is a consequence of the quantitative smallness hypothesis \(R^{2\alpha } [\nabla _{xy}c]_{\alpha ,R}^2 \ll 1\), as pointed out in Remark 1.7. In both steps, we need to ensure that there are sufficiently many points in \({{\,\mathrm{Spt}\,}}\pi \) close to the diagonal; this is the content of Lemma A.1, which does not rely on monotonicity.
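In the discrete setting, (2.1) for optimal plans reduces to the statement that swapping the targets of any two points of an optimal assignment cannot decrease the total cost. A brute-force sketch (illustration only; the smooth cost below is an arbitrary example, not one of the paper's assumed cost functions):

```python
import itertools
import math
import random

rng = random.Random(1)
n = 6
xs = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
ys = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]

def c(x, y):  # a smooth non-Euclidean example cost in 2d
    return 0.5 * sum((a - b) ** 2 for a, b in zip(x, y)) \
        + 0.1 * math.sin(x[0]) * math.cos(y[1])

# c-optimal assignment by exhaustive search over all permutations
opt = min(itertools.permutations(range(n)),
          key=lambda p: sum(c(xs[i], ys[p[i]]) for i in range(n)))

# check c-monotonicity (2.1) on the support {(x_i, y_{opt(i)})}
mono = all(c(xs[i], ys[opt[i]]) + c(xs[j], ys[opt[j]])
           <= c(xs[i], ys[opt[j]]) + c(xs[j], ys[opt[i]]) + 1e-12
           for i in range(n) for j in range(n))
print(mono)
```

If (2.1) failed for some pair, exchanging their targets would produce a strictly cheaper assignment, contradicting optimality; this exchange argument is the discrete shadow of the c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \).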

Proof of Proposition 1.5

Let \({\varLambda } < \infty \) and \(R>0\) be such that (1.14), (1.15) and (1.16) hold. We only prove the bound (1.17) for a pair \((x,y) \in (B_{4R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \), as the other case \((x,y) \in ({\mathbb {R}}^d \times B_{4R}) \cap {{\,\mathrm{Spt}\,}}\pi \) follows by symmetry.

  • Step 1 (Rescaling). Let \(({\widetilde{x}}, {\widetilde{y}}) = (S_R(x), S_R(y)) := (R^{-1}x, R^{-1}y)\) and set

    $$\begin{aligned} {\widetilde{\mu }}&:= {S_R}_{\#} \mu , \quad {\widetilde{\nu }} := {S_R}_{\#} \nu , \quad {\widetilde{\pi }} := R^{-d} (S_R \times S_R)_{\#} \pi \quad \text {and} \quad {\widetilde{c}}({\widetilde{x}}, {\widetilde{y}}) := R^{-2}c(R{\widetilde{x}}, R{\widetilde{y}}), \end{aligned}$$

    so that \({\widetilde{c}}\) still satisfies properties (C1)–(C4), and we have \({\widetilde{\pi }} \in {\varPi }({\widetilde{\mu }}, {\widetilde{\nu }})\), and \({{\,\mathrm{Spt}\,}}{\widetilde{\pi }}\) is \({\widetilde{c}}\)-monotone. We also have

    $$\begin{aligned} {\mathscr {E}}_6({\widetilde{\pi }}) = {\mathscr {E}}_{6R}(\pi ), \quad {\mathscr {D}}_6({\widetilde{\mu }}, {\widetilde{\nu }}) = {\mathscr {D}}_{6R}(\mu , \nu ),\\ \text {and} \quad \Vert \nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}} + {\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5, {\varLambda }})} = \Vert \nabla _{xy}c+{\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R, {\varLambda } R})}. \end{aligned}$$

    This allows us to only consider the case \(R=1\) in the following. We will abbreviate

    $$\begin{aligned} {\mathscr {E}}:= {\mathscr {E}}_6 \quad \text {and} \quad {\mathscr {D}}:= {\mathscr {D}}_6. \end{aligned}$$
  • Step 2 (Use of c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). Let \((x,y) \in (B_{4} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \). We first show that for all \((x',y') \in (B_{5} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \) we have

    $$\begin{aligned} (x-y) \cdot (x-x')&\le 3 |x-x'|^2 + |x'-y'|^2 + \delta |x-x'| \, |x-y|, \end{aligned}$$
    (2.2)

    where, recalling (1.13),

    $$\begin{aligned} \delta := \Vert \nabla _{xy}c + {\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5, {\varLambda }})}. \end{aligned}$$
    (2.3)

    Indeed, setting \(x_t := tx + (1-t)x'\) and \(y_s := s y + (1-s) y'\) for \(t,s \in [0,1]\), c-monotonicity (2.1) of \({{\,\mathrm{Spt}\,}}\pi \) implies that

    $$\begin{aligned} 0&\ge (c(x,y)-c(x',y))-(c(x,y')-c(x',y')) \nonumber \\&= \int _0^1 \int _0^1 (x-x') \cdot \nabla _{xy} c(x_t, y_s) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t \nonumber \\&= - (x-x')\cdot (y-y') + \int _0^1 \int _0^1 (x-x') \cdot \left( \nabla _{xy} c(x_t, y_s) + {\mathbb {I}}\right) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t. \end{aligned}$$
    (2.4)

    By assumption (1.14), we have \((x,y), (x',y') \in B_{5} \times B_{{\varLambda }}\) and thus \(\{(x_t, y_s)\}_{s,t\in [0,1]} \subseteq B_{5} \times B_{{\varLambda }} \subset {\mathbb {B}}_{5, {\varLambda }}\). Hence, we obtain from (2.3)

    $$\begin{aligned} \left| \int _0^1 \int _0^1 (x-x') \cdot \left( \nabla _{xy} c(x_t, y_s) + {\mathbb {I}}\right) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t \right| \le \delta |x-x'| \, |y-y'|, \end{aligned}$$

    so that (2.4) turns into

    $$\begin{aligned} 0 \ge - (x-x') \cdot (y-y') - \delta |x-x'| \, |y-y'|. \end{aligned}$$
    (2.5)

    Upon writing \(y-y' = (y-x) + (x-x') + (x'-y')\), it follows that

    $$\begin{aligned} 0&\ge - (x-x') \cdot (y-x) - |x-x'|^2 - (x-x') \cdot (x'-y') - \delta |x-x'| \, |y-x| \\&\quad - \delta |x-x'|^2 -\delta |x-x'| \, |x'-y'|, \end{aligned}$$

    from which we obtain the estimate

    $$\begin{aligned} (x-y) \cdot (x-x')&\le \delta |x-x'| \, |x-y| + (1+\delta ) |x-x'|^2 + (1+\delta ) |x-x'| \, |x'-y'| \\&\le \delta |x-x'| \, |x-y| + \frac{3}{2}(1+\delta ) |x-x'|^2 + \frac{1}{2} (1+\delta ) |x'-y'|^2. \end{aligned}$$

    Note that \(\delta \ll 1\) by assumption (1.16), hence (2.2) follows.

  • Step 3 (Proof of estimate (1.17)). Let \(0 < r \ll 1\), to be fixed later, and let \(e\in S^{d-1}\) be arbitrary. Let \(\eta \) be supported in \(B_{\frac{r}{2}}(x-re)\) and satisfy the bounds

    $$\begin{aligned} \sup |\eta | + r \sup |\nabla \eta | + r^2 \sup |\nabla ^2 \eta | \lesssim 1. \end{aligned}$$
    (2.6)

    We make the additional assumption that \(\eta \) is normalized in such a way that

    $$\begin{aligned} \int (x-x') \eta (x') \,{\mathrm {d}}x' = r^{d+1} e. \end{aligned}$$
    (2.7)

    Note that since \(x\in B_{4}\) and \(r\ll 1\), we have

    $$\begin{aligned} {{\,\mathrm{Spt}\,}}\eta \subseteq B_{\frac{r}{2}}(x-re) \subset B_{5}. \end{aligned}$$

    Integrating inequality (2.2) against the measure \(\eta (x') \, \pi ({\mathrm {d}}x' {\mathrm {d}}y')\), it follows that

    $$\begin{aligned} \begin{aligned} (x-y) \cdot \int (x-x') \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')&\le 3 \int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')\\&\quad + \int |x'-y'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')\\&\quad + \delta |x-y| \int |x-x'| \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y'). \end{aligned} \end{aligned}$$
    (2.8)

    Note that by (2.7) the integral on the left-hand-side of inequality (2.8) can be expressed as

    $$\begin{aligned} \int (x-x') \eta (x')\, \pi ({\mathrm {d}}x' {\mathrm {d}}y')&= \int (x-x') \eta (x') \mu ({\mathrm {d}}x') \\&= \kappa _{\mu } r^{d+1} e + \int (x-x') \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) . \end{aligned}$$

    To estimate the latter integral, we recall the following result from [14, Lemma 2.8]: for any \(\zeta \in {\mathscr {C}}^{\infty }(B_R)\),

    $$\begin{aligned} \left| \int _{B_{R}} \zeta \,({\mathrm {d}}\mu -\kappa _{\mu }\,{\mathrm {d}}x) \right|&\le \left( \kappa _{\mu } \int _{B_{R}} |\nabla \zeta |^2\,{\mathrm {d}}x \, W_{B_{R}}^2(\mu ,\kappa _{\mu }) \right) ^{\frac{1}{2}} + \frac{1}{2} \sup _{B_{R}} |\nabla ^2 \zeta | \, W_{B_{R}}^2(\mu ,\kappa _{\mu }). \end{aligned}$$
    (2.9)

    By this estimate with \(\zeta = (x-\cdot )\eta \) and using that \(\kappa _{\mu }\sim 1\) by assumption (1.15), we obtain with (2.6) that

    $$\begin{aligned} \left| \int (x-x') \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) \right|&\lesssim (r^d {\mathscr {D}})^{\frac{1}{2}} + \frac{1}{r} {\mathscr {D}}\lesssim \epsilon r^{d+1} + \frac{1}{\epsilon } \frac{1}{r} {\mathscr {D}}\end{aligned}$$

    for some \(0<\epsilon \ll 1\) to be fixed later. Hence,

    $$\begin{aligned} \begin{aligned} (x-y)&\cdot \int (x-x') \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \\&\ge \kappa _{\mu } r^{d+1} (x-y)\cdot e - C \left( \epsilon r^{d+1} + \frac{1}{\epsilon r} {\mathscr {D}}\right) |x-y|. \end{aligned} \end{aligned}$$
    (2.10)

    We now estimate each term on the right-hand-side of inequality (2.8) separately:

    1. (1)

      For the first term we estimate

      $$\begin{aligned}&\int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') = \int |x-x'|^2 \eta (x') \mu ({\mathrm {d}}x') \\&\quad \le \kappa _{\mu } \int |x-x'|^2 \eta (x') \,{\mathrm {d}}x' + \left| \int |x-x'|^2 \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) \right| . \end{aligned}$$

      Using again (2.6) and \(\kappa _{\mu } \sim 1\) for the first term on the right-hand side, estimate (2.9) with \(\zeta =|x-\cdot |^2 \eta \), and Young’s inequality for the second term we obtain

      $$\begin{aligned} \int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim r^{d+2} + {\mathscr {D}}. \end{aligned}$$
      (2.11)
    2. (2)

      For the second term on the right-hand-side of (2.8) we use that \({{\,\mathrm{Spt}\,}}\eta \subseteq B_{5}\) and (2.6), recalling also the definition (1.9) of \({\mathscr {E}}\), to estimate

      $$\begin{aligned} \int |x'-y'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim \int _{B_{5}\times {\mathbb {R}}^d} |x'-y'|^2 \,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim {\mathscr {E}}. \end{aligned}$$
      (2.12)
    3. (3)

      We may bound the integral in the third term on the right-hand-side of (2.8) as for (2.11) by using (2.6), \(\kappa _{\mu } \sim 1\) and estimate (2.9) with \(\zeta =|x-\cdot |\eta \) to get

      $$\begin{aligned} \int |x-x'| \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim r^{d+1} + \frac{1}{r} {\mathscr {D}}. \end{aligned}$$
      (2.13)

    Inserting the estimates (2.10), (2.11), (2.12), and (2.13) into inequality (2.8) yields

    $$\begin{aligned}&\kappa _{\mu } r^{d+1} (x-y) \cdot e \lesssim \left( \epsilon r^{d+1} + \frac{1}{\epsilon r} {\mathscr {D}}\right) |x-y| + r^{d+2} + {\mathscr {D}}+ {\mathscr {E}}+ \delta \left( r^{d+1} + \frac{1}{r} {\mathscr {D}}\right) |x-y|. \end{aligned}$$

    Since e is arbitrary and \(\kappa _{\mu } \sim 1\), this turns into

    $$\begin{aligned} |x-y|&\lesssim (\epsilon + \delta ) |x-y| + \left( \delta +\frac{1}{\epsilon }\right) \left( \frac{1}{r}\right) ^{d+2}{\mathscr {D}}\,|x-y| + r \nonumber \\&\qquad + \left( \frac{1}{r}\right) ^{d+1} ({\mathscr {E}}+ {\mathscr {D}}). \end{aligned}$$
    (2.14)

    We first choose \(\epsilon \) and the implicit constant in (1.16), which in view of (2.3) governs \(\delta \), so small that we may absorb the first term on the right-hand-side into the left-hand-side. We then choose r to be a large multiple of \(({\mathscr {E}}+{\mathscr {D}})^\frac{1}{d+2}\), so that also the second right-hand-side term in (2.14) can be absorbed. This choice of r is admissible in the sense of \(r\ll 1\) provided the implicit constant in (1.15) is small enough. This yields (1.17).

\(\square \)

The next lemma shows that, due to the global qualitative information (C1)–(C4) on the cost function c, there is a scale below which we can derive a qualitative bound on the displacement. Roughly speaking, below a small enough scale the cost essentially behaves like the Euclidean cost, with an error that is uniformly small by compactness of the set \(X\times Y\).

Lemma 2.1

Assume that the cost function c satisfies  (C1)–(C4) and let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling with c-monotone support.

There exist \({\varLambda }_0 <\infty \) and \(R_0>0\) such that for all \(R\le R_0\) for which

$$\begin{aligned} {\mathscr {E}}_{6R} + {\mathscr {D}}_{6R} \ll 1, \end{aligned}$$
(2.15)

we have the inclusions

$$\begin{aligned} (B_{5R}\times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{5R}\times B_{{\varLambda }_0 R} \quad \text {and}\quad ({\mathbb {R}}^d \times B_{5R}) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{{\varLambda }_0 R} \times B_{5R}. \end{aligned}$$

Proof

We only prove the inclusion \((B_{5R}\times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{5R}\times B_{{\varLambda }_0 R}\), the other inclusion \(({\mathbb {R}}^d \times B_{5R}) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{{\varLambda }_0 R} \times B_{5R}\) follows analogously since the assumptions are symmetric in x and y.

  • Step 1 (Use of c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). Let \(R>0\) be such that (2.15) holds, in the sense that we may use Lemma A.1(ii), and set

    $$\begin{aligned} {\widetilde{c}}(x,y) := c(x,y) - c(x,0) - c(0,y) + c(0,0). \end{aligned}$$
    (2.16)

    We claim that there exists a constant \(\lambda <\infty \), depending only on \(\Vert c\Vert _{{\mathscr {C}}^2(X\times Y)}\), such that

    $$\begin{aligned} \nabla _x {\widetilde{c}}(x,y) \in B_{\lambda R} \quad \text {for all} \quad (x,y) \in (B_{5R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$

    To show this, we use the c-monotonicity (2.1) of \({{\,\mathrm{Spt}\,}}\pi \). Notice that c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \) implies its \({\widetilde{c}}\)-monotonicity:

    $$\begin{aligned} {\widetilde{c}}(x,y) - {\widetilde{c}}(x',y) \le {\widetilde{c}}(x,y') - {\widetilde{c}}(x',y') \quad \text {for all} \quad (x,y),(x',y') \in {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$
    (2.17)

    With \(x_t := tx + (1-t)x'\) we can write

    $$\begin{aligned} {\widetilde{c}}(x,y) - {\widetilde{c}}(x',y)&= \int _0^1 \nabla _x {\widetilde{c}}(x_t, y)\,{\mathrm {d}}t \cdot (x-x') \\&= \nabla _x {\widetilde{c}}(x, y) \cdot (x-x') + \int _0^1 (\nabla _x {\widetilde{c}}(x_t, y) - \nabla _x {\widetilde{c}}(x, y)) \,{\mathrm {d}}t \cdot (x-x'), \end{aligned}$$

    and, using that \(\nabla _x{\widetilde{c}}(0,0) = 0\),

    $$\begin{aligned} {\widetilde{c}}(x,y') - {\widetilde{c}}(x',y')&= (\nabla _x {\widetilde{c}}(0, y') - \nabla _x {\widetilde{c}}(0,0)) \cdot (x-x') \\&\quad + \int _0^1 (\nabla _x {\widetilde{c}}(x_t, y') - \nabla _x {\widetilde{c}}(0, y')) \,{\mathrm {d}}t \cdot (x-x'). \end{aligned}$$

    Inserting these two identities into inequality (2.17) gives

    $$\begin{aligned} \nabla _x {\widetilde{c}}(x, y) \cdot (x-x')&\le \int _0^1 |\nabla _x {\widetilde{c}}(x_t, y) - \nabla _x {\widetilde{c}}(x, y)| \,{\mathrm {d}}t \, |x-x'| \\&\quad + |\nabla _x {\widetilde{c}}(0, y') - \nabla _x {\widetilde{c}}(0,0)| \, |x-x'| \\&\quad +\int _0^1 |\nabla _x {\widetilde{c}}(x_t, y') - \nabla _x {\widetilde{c}}(0, y')| \,{\mathrm {d}}t \, |x-x'|. \end{aligned}$$

    Using the boundedness of \(\Vert c\Vert _{{\mathscr {C}}^2(X\times Y)}\) we estimate this expression further by

    $$\begin{aligned} \nabla _x {\widetilde{c}}(x, y) \cdot (x-x')&\le \Vert \nabla _{xx} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} \left( \int _0^1 |x_t - x|\,{\mathrm {d}}t + \int _0^1 |x_t|\,{\mathrm {d}}t \right) |x-x'| \nonumber \\&\quad + \Vert \nabla _{xy} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} |y'| \, |x-x'| \nonumber \\&\lesssim \Vert c\Vert _{{\mathscr {C}}^2(X\times Y)} \left( |x'| + |x| + |y'| \right) |x-x'|, \end{aligned}$$
    (2.18)

    where in the last step we estimated the integrals and used that

    $$\begin{aligned} \Vert \nabla _{xx} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} \le 2 \Vert \nabla _{xx} c\Vert _{{\mathscr {C}}^0(X\times Y)}, \quad \Vert \nabla _{xy} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} = \Vert \nabla _{xy} c\Vert _{{\mathscr {C}}^0(X\times Y)} . \end{aligned}$$

    Now by Lemma A.1(ii), given \(x \in B_{5R}\), we have \((S_{R}(x,e) \times B_{7R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) for any direction \(e\in S^{d-1}\). Hence, letting \(e = \frac{\nabla _x {\widetilde{c}}(x,y)}{|\nabla _x {\widetilde{c}}(x,y)|}\), we can find a point \((x',y') \in (S_{R}(x,e) \times B_{7R})\cap {{\,\mathrm{Spt}\,}}\pi \). Since the opening angle of \(S_R(x,e)\) is \(\frac{\pi }{2}\), we have

    $$\begin{aligned} \nabla _x {\widetilde{c}}(x,y) \cdot (x-x') = |\nabla _{x}{\widetilde{c}}(x,y)| |x-x'| e \cdot \frac{x-x'}{|x-x'|} \gtrsim |\nabla _{x}{\widetilde{c}}(x,y)| |x-x'|. \end{aligned}$$

    It follows with (2.18) that there exists \(\lambda <\infty \) such that

    $$\begin{aligned} |\nabla _x {\widetilde{c}}(x,y)| \lesssim \Vert c\Vert _{{\mathscr {C}}^2(X\times Y)} \left( |x'| + |x| + |y'| \right) \le \lambda R. \end{aligned}$$
  • Step 2 (Use of global information on c). We claim that there exist \(R_0>0\) and \({\varLambda }_0<\infty \) such that for all \(R\le R_0\) and \(x\in X\), we have that

    $$\begin{aligned} B_{\lambda R} \subseteq -\nabla _x {\widetilde{c}}(x, B_{{\varLambda }_0 R}). \end{aligned}$$

    Indeed, by assumption (C2), for any \(x\in X\), the map \(-\nabla _x {\widetilde{c}}(x,\cdot ) = -\nabla _x c(x,\cdot ) + \nabla _x c(x, 0)\) is one-to-one on Y. Since \(\nabla _{xy}{\widetilde{c}}(x,y) = \nabla _{xy}c(x,y)\), it follows by (C4) that \(\det \nabla _{xy}{\widetilde{c}}(x,y) \ne 0\) for all \((x,y) \in X\times Y\). Hence, the map

    $$\begin{aligned} F_x : -\nabla _x {\widetilde{c}}(x,Y) \rightarrow Y, \quad v \mapsto \left[ - \nabla _x {\widetilde{c}}(x,\cdot ) \right] ^{-1}(v) \end{aligned}$$

    is well-defined and a \({\mathscr {C}}^1\)-diffeomorphism, so that in particular

    $$\begin{aligned} F_x(v) = F_x(0) + DF_x(0)v + o_x(|v|). \end{aligned}$$

    Using that \(-\nabla _x {\widetilde{c}}(x,0) = 0\), which translates into \(F_x(0) = 0\), this takes the form

    $$\begin{aligned} F_x(v) = DF_x(0)v + o_x(|v|). \end{aligned}$$

    By compactness of X, there thus exist a radius \(R_0>0\) and a constant \({\varLambda }_0<\infty \) such that

    $$\begin{aligned} \lambda |F_x(v)| \le {\varLambda }_0 |v| \quad \text {for all} \quad x\in X \quad \text {and} \quad |v| \le \lambda R_0, \end{aligned}$$

    which we may reformulate as

    $$\begin{aligned} F_x(B_{\lambda R}) \subseteq B_{{\varLambda }_0 R}, \quad \text {i.e.} \quad B_{\lambda R} \subseteq F_x^{-1}(B_{{\varLambda }_0 R}) = -\nabla _x {\widetilde{c}}(x,B_{{\varLambda }_0 R}) \end{aligned}$$

    for all \(R\le R_0\) and \(x\in X\).

  • Step 3 (Conclusion). If \((x,y) \in (B_{5R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \), then we claim that \(|y| \le {\varLambda }_0 R\) for \(R\le R_0\).

    Indeed, by Step 1 we have \(\nabla _x{\widetilde{c}}(x,y) \in B_{\lambda R}\). Since the ball \(B_{\lambda R}\) is symmetric, Step 2 gives \(B_{\lambda R} \subseteq \nabla _x {\widetilde{c}}(x,B_{{\varLambda }_0 R})\), so that injectivity of \(y \mapsto \nabla _x{\widetilde{c}}(x,y)\) implies that we must have \(y \in B_{{\varLambda }_0 R}\).

\(\square \)
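Three elementary properties of the normalized cost (2.16) carry the proof above: \({\widetilde{c}}\) vanishes on the coordinate axes (so \(\nabla _x{\widetilde{c}}(\cdot ,0)\equiv 0\)), its mixed second differences agree with those of c (which is why c-monotonicity implies \({\widetilde{c}}\)-monotonicity), and its second derivatives satisfy the two norm relations displayed in Step 1. All three can be sanity-checked by finite differences; the following sketch does so in one space dimension for an illustrative smooth cost (not a cost from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def c(x, y):
    """Illustrative smooth cost on R x R."""
    return 0.5 * (x - y) ** 2 + 0.1 * np.sin(x) * np.cos(2.0 * y)

def c_tilde(x, y):
    """Normalized cost (2.16): c~(x,y) = c(x,y) - c(x,0) - c(0,y) + c(0,0)."""
    return c(x, y) - c(x, 0.0) - c(0.0, y) + c(0.0, 0.0)

def cross_diff(f, x, xp, y, yp):
    """Mixed second difference f(x,y) - f(x',y) - f(x,y') + f(x',y')."""
    return f(x, y) - f(xp, y) - f(x, yp) + f(xp, yp)

def dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2.0 * h)

def dxx(f, x, y, h=1e-4):
    return (f(x + h, y) - 2.0 * f(x, y) + f(x - h, y)) / h**2

def dxy(f, x, y, h=1e-4):
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4.0 * h**2)

# c~ vanishes on the axes; in particular grad_x c~(., 0) = 0.
assert abs(c_tilde(0.7, 0.0)) < 1e-12 and abs(c_tilde(0.0, -1.3)) < 1e-12
assert abs(dx(c_tilde, 0.7, 0.0)) < 1e-8

# The separable terms cancel in cross differences, so c- and c~-monotonicity agree.
for _ in range(100):
    x, xp, y, yp = rng.normal(size=4)
    assert abs(cross_diff(c, x, xp, y, yp) - cross_diff(c_tilde, x, xp, y, yp)) < 1e-9

# Second-derivative relations behind the bounds on grad_xx c~ and grad_xy c~:
x0, y0 = 0.3, -0.2
assert abs(dxx(c_tilde, x0, y0) - (dxx(c, x0, y0) - dxx(c, x0, 0.0))) < 1e-4
assert abs(dxy(c_tilde, x0, y0) - dxy(c, x0, y0)) < 1e-4
```

The factor 2 in the bound on \(\Vert \nabla _{xx}{\widetilde{c}}\Vert \) is just the triangle inequality applied to \(\nabla _{xx}{\widetilde{c}}(x,y) = \nabla _{xx}c(x,y) - \nabla _{xx}c(x,0)\).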

Remark 2.2

Note that if (2.15) is replaced by smallness of the one-sided energy, i.e.,

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_{6R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {D}}_{6R} \ll 1, \end{aligned}$$

then Lemma A.1 still applies and we obtain the one-sided qualitative bound

$$\begin{aligned} (B_{5R}\times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathbb {B}}_{5R, {\varLambda }_0 R}. \end{aligned}$$

Almost-Minimality with Respect to Euclidean Cost

In this section we show that a minimizer of the optimal transport problem with cost function c is an approximate minimizer for the problem with Euclidean cost function. However, in order to make full use of the Euclidean harmonic approximation result from [14, Proposition 1.6] on the Eulerian side, we have to be careful in relating Lagrangian and Eulerian energies. This is where the concept of almost-minimality shows its strength, since it provides us with the missing bound of Lagrangian energy in terms of its Eulerian counterpart.

Proof of Proposition 1.10

First, let us observe that we may assume in the following that

$$\begin{aligned} \frac{1}{2} \int |x-y|^2 \ {\mathrm {d}}(\pi -{\widetilde{\pi }}) \ge 0, \end{aligned}$$
(3.1)

since otherwise there is nothing to show. By the support assumption (1.20) on \(\mu \) and \(\nu \), the couplings \(\pi \) and \({\widetilde{\pi }}\) satisfy

$$\begin{aligned} {{\,\mathrm{Spt}\,}}\pi \subseteq B_R \times B_R, \quad {{\,\mathrm{Spt}\,}}{\widetilde{\pi }} \subseteq B_R \times B_R. \end{aligned}$$
(3.2)

Together with (3.1) this implies that

$$\begin{aligned} {\mathscr {E}}_R({\widetilde{\pi }}) \le {\mathscr {E}}_R(\pi ). \end{aligned}$$
(3.3)

By the admissibility of \({\widetilde{\pi }}\), i.e. that \(\pi \) and \({\widetilde{\pi }}\) have the same marginals, we may write

$$\begin{aligned} \frac{1}{2} \int |x-y|^2 \, {\mathrm {d}}(\pi -{\widetilde{\pi }})&= - \int ( c(x,y)+x\cdot y ) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) + \int c(x,y) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) \nonumber \\&= - \int ({\widetilde{c}}(x,y)+x\cdot y ) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) + \int c(x,y) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}), \end{aligned}$$
(3.4)

where \({\widetilde{c}}\) is defined as in (2.16). Abbreviating

$$\begin{aligned} \zeta (x,y) := -({\widetilde{c}}(x,y) + x\cdot y), \end{aligned}$$

optimality of \(\pi \) with respect to the cost function c implies that

$$\begin{aligned} \frac{1}{2} \int |x-y|^2 \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) \le \int \zeta (x,y) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}). \end{aligned}$$
(3.5)

Using again the admissibility of \({\widetilde{\pi }}\), we may write

$$\begin{aligned} \int \zeta (x,y) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) = \int (\zeta (x,y)-\zeta (x,x)) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}). \end{aligned}$$
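The step above uses only that \(\pi \) and \({\widetilde{\pi }}\) have the same x-marginal, so that the function \((x,y)\mapsto \zeta (x,x)\), which depends on x alone, integrates to zero against \(\pi - {\widetilde{\pi }}\). A discrete sketch of this cancellation, with illustrative random couplings sharing their marginals:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 4
x_pts = rng.normal(size=(n, 2))          # atoms of the first marginal
y_pts = rng.normal(size=(n, 2))          # atoms of the second marginal
mu = rng.random(n); mu /= mu.sum()
nu = rng.random(n); nu /= nu.sum()

pi1 = np.outer(mu, nu)                   # product coupling of (mu, nu)
pi2 = pi1.copy()                         # a second coupling with the same marginals:
eps = 0.5 * pi1[:2, :2].min()            # perturb along a 2x2 "cycle"
pi2[0, 0] += eps; pi2[1, 1] += eps; pi2[0, 1] -= eps; pi2[1, 0] -= eps

assert np.allclose(pi1.sum(axis=1), pi2.sum(axis=1))   # same x-marginal
assert np.allclose(pi1.sum(axis=0), pi2.sum(axis=0))   # same y-marginal

def integrate(f, pi):
    return sum(pi[i, j] * f(x_pts[i], y_pts[j]) for i in range(n) for j in range(n))

zeta = lambda x, y: np.dot(x, y) + np.sin(x[0] - y[1])  # arbitrary test function

# int zeta d(pi - pi~) = int (zeta(x,y) - zeta(x,x)) d(pi - pi~):
lhs = integrate(zeta, pi1) - integrate(zeta, pi2)
rhs = (integrate(lambda x, y: zeta(x, y) - zeta(x, x), pi1)
       - integrate(lambda x, y: zeta(x, y) - zeta(x, x), pi2))
assert abs(lhs - rhs) < 1e-12
```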

Note that by the definition (2.16) of \({\widetilde{c}}\), the function \(\zeta \) satisfies

$$\begin{aligned} \nabla _x\zeta (x,0)&= 0 \quad \text {for all} \, x \quad \text {and} \quad \nabla _y\zeta (0,y) = 0 \quad \text {for all} \, y, \end{aligned}$$
(3.6)
$$\begin{aligned} \nabla _{xy}\zeta (x,y)&= -(\nabla _{xy}{\widetilde{c}}(x,y) + {\mathbb {I}}) = -(\nabla _{xy}c(x,y) + {\mathbb {I}}). \end{aligned}$$
(3.7)

Now, by (3.6),

$$\begin{aligned} \zeta (x,y) - \zeta (x,x)&= \int _0^1 \left( \nabla _y \zeta (x, sy+(1-s)x) - \nabla _y \zeta (0, sy+(1-s)x) \right) \cdot (y-x) \, {\mathrm {d}}s \\&= \int _0^1 \int _0^1 x \cdot \nabla _{xy} \zeta (tx, sy+(1-s)x) (y-x) \, {\mathrm {d}}t {\mathrm {d}}s, \end{aligned}$$

so that, using (3.7) and (3.2), it follows that

$$\begin{aligned} \left| \int \zeta (x,y) \, {\mathrm {d}}(\pi -{\widetilde{\pi }}) \right|&\lesssim \Vert \nabla _{xy} c + {\mathbb {I}}\Vert _{L^{\infty }(B_R\times B_R)} \int _{B_R \times B_R} |x| |y-x| \, {\mathrm {d}}(\pi +{\widetilde{\pi }}) \nonumber \\&\lesssim R \Vert \nabla _{xy} c + {\mathbb {I}}\Vert _{L^{\infty }(B_R\times B_R)} \int |y-x| \, {\mathrm {d}}(\pi +{\widetilde{\pi }}). \end{aligned}$$
(3.8)

By the estimate

$$\begin{aligned} (\pi +{\widetilde{\pi }})({\mathbb {R}}^d \times {\mathbb {R}}^d) = 2 \mu ({\mathbb {R}}^d) = 2 \mu (B_R) {\mathop {\lesssim }\limits ^{(1.21)}} R^d, \end{aligned}$$

and Hölder’s inequality, we get

$$\begin{aligned} \int |x-y| \, {\mathrm {d}}(\pi +{\widetilde{\pi }}) \lesssim R^{\frac{d}{2}} \left( \int |x-y|^2 \, {\mathrm {d}}(\pi +{\widetilde{\pi }})\right) ^{\frac{1}{2}} \overset{(3.3)}{\lesssim } R^{d+1} {\mathscr {E}}_R(\pi )^{\frac{1}{2}}. \end{aligned}$$

Combining this bound with (3.5) and (3.8) yields the claimed almost-minimality. \(\square \)
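The last display is the Cauchy–Schwarz inequality for the measure \(\pi + {\widetilde{\pi }}\), whose total mass is of order \(R^d\) by (1.21). In discrete form (with illustrative weights standing for the atoms of \(\pi + {\widetilde{\pi }}\)):

```python
import numpy as np

rng = np.random.default_rng(6)

w = rng.random(200)          # atom weights of pi + pi~ (total mass plays the role of R^d)
disp = rng.random(200)       # values of |x - y| at the atoms

lhs = (w * disp).sum()                                   # int |x-y| d(pi + pi~)
rhs = np.sqrt(w.sum()) * np.sqrt((w * disp**2).sum())    # mass^(1/2) * weighted L^2 norm
assert lhs <= rhs + 1e-12
```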

Eulerian Point of View

The purpose of this section is to translate almost-minimality from the Lagrangian setting, as encoded by Proposition 1.10, to the Eulerian setting, so that it may be plugged into the proof of the harmonic approximation result [14, Proposition 1.7]. This is achieved by Lemma 1.12 below, which relates a (Lagrangian) coupling \(\pi \), which we think of as an almost-minimizer, to its Eulerian description \((\rho ,j)\) (introduced in (1.24)).

The proof of Lemma 1.12 relies on the fact that the Eulerian cost is always dominated by the Lagrangian one, while the reverse inequality in general holds only for minimizers of the Euclidean transport cost. However, in the proof of Lemma 1.12 we can use the almost-minimality of \(\pi \), together with the equality of Eulerian and Lagrangian energies for minimizers of the Euclidean transport cost (Remark 1.11), to overcome this difficulty.
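The domination of the Eulerian by the Lagrangian cost is, at each point of space-time, the Cauchy–Schwarz inequality: merging the velocities of overlapping trajectories into a single mass-weighted flux can only decrease the kinetic energy. A pointwise discrete sketch (masses and velocities are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# At a fixed (t, x), several straight trajectories overlap,
# carrying masses m_k with velocities v_k.
m = rng.random(5)
v = rng.normal(size=(5, 2))

rho = m.sum()                         # local Eulerian density
j = (m[:, None] * v).sum(axis=0)      # local Eulerian flux (mass-weighted velocity)

eulerian = np.dot(j, j) / rho                  # |j|^2 / rho
lagrangian = (m * (v**2).sum(axis=1)).sum()    # sum_k m_k |v_k|^2
assert eulerian <= lagrangian + 1e-12          # Cauchy-Schwarz
```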

Proof of Lemma 1.12

Let \(\pi ^* \in {\mathrm {argmin}} \, W^2(\mu , \nu )\) be a minimizer of the Euclidean transport cost and let \((\rho ^*, j^*)\) be its Eulerian description. Then by (1.31) and (1.34) it follows that

$$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int |x-y|^2\,{\mathrm {d}}\pi \le \int |x-y|^2\,{\mathrm {d}}\pi ^* + {\varDelta }. \end{aligned}$$

Since \(\pi ^* \in {\mathrm {argmin}} \, W^2(\mu , \nu )\), we have that

$$\begin{aligned} \int |x-y|^2\,{\mathrm {d}}\pi ^* = \int \frac{1}{\rho ^*} |j^*|^2, \end{aligned}$$

see Remark 1.11, so that by minimality of \(\pi ^*\), which implies minimality of \((\rho ^*,j^*)\) in the Benamou–Brenier formulation (1.32) of \(W^2(\mu , \nu )\), we obtain

$$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{\rho ^*} |j^*|^2 + {\varDelta } \le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + {\varDelta } \end{aligned}$$

for any \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying (1.35), in particular also for \(({{\widetilde{\rho }}},{{\widetilde{j}}})=(\rho ,j)\). \(\square \)

The Eulerian version of almost-minimality also implies the following localized version, which will be needed for the harmonic approximation:

Corollary 4.1

Let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling between the measures \(\mu \) and \(\nu \) with the property that there exists a constant \({\varDelta } <\infty \) such that

$$\begin{aligned} \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{2} |x-y|^2\,{\mathrm {d}}{\widetilde{\pi }} + {\varDelta } \end{aligned}$$
(4.1)

for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\), and let \((\rho ,j)\) be its Eulerian description defined in (1.24). For any \(r > 0\) small enoughFootnote 25, let \(f_r\) be the inner trace of j on \(\partial B_r \times (0,1)\) in the sense of (1.27), i.e.

$$\begin{aligned} \int _{B_r \times [0,1]} \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j = \int _{B_r} \zeta _1 \,{\mathrm {d}}\nu - \int _{B_r} \zeta _0 \,{\mathrm {d}}\mu + \int _{\partial B_r \times [0,1]} \zeta \,{\mathrm {d}}f_r. \end{aligned}$$
(4.2)

Then for any density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfying

$$\begin{aligned} \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} = \int _{B_r} \zeta _1 \,{\mathrm {d}}\nu - \int _{B_r} \zeta _0 \,{\mathrm {d}}\mu + \int _{\partial B_r \times [0,1]} \zeta \,{\mathrm {d}}f_r \end{aligned}$$
(4.3)

there holds

$$\begin{aligned} \frac{1}{2}\int _{B_r\times [0,1]} \frac{1}{\rho } |j|^2 \le \frac{1}{2}\int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + {\varDelta }. \end{aligned}$$
(4.4)

Proof

Let \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfy (4.3). Then the density-flux pair \(({\widehat{\rho }}, {\widehat{j}}) := ({\widetilde{\rho }}, {\widetilde{j}}) + (\rho , j)\lfloor _{B_r^c\times [0,1]}\) obeys (1.35). Indeed,

$$ \begin{aligned} \int \partial _t \zeta \,{\mathrm {d}}{\widehat{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widehat{j}}&= \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} + \int _{B_r^c \times [0,1]} \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j \\&= \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} + \int \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j \\&\quad - \int _{B_r \times [0,1]} \partial _t \zeta \,{\mathrm {d}}\rho + \nabla \zeta \cdot {\mathrm {d}}j \\&{\mathop {=}\limits ^{(4.3) \& (1.26) \& (4.2)}} \int \zeta _1\,{\mathrm {d}}\nu - \int \zeta _0\,{\mathrm {d}}\mu . \end{aligned}$$

Hence, by Lemma 1.12 and subadditivity of the Eulerian cost it follows that

$$\begin{aligned} \frac{1}{2}\int \frac{1}{\rho } |j|^2 \le \frac{1}{2}\int \frac{1}{{\widehat{\rho }}} |{\widehat{j}}|^2 + {\varDelta } \le \frac{1}{2}\int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + \frac{1}{2}\int _{B_r^c\times [0,1]} \frac{1}{\rho } |j|^2 + {\varDelta }, \end{aligned}$$

which implies (4.4). \(\square \)

Lemma 1.12, in the form of the bound (1.44), allows us to relate the Eulerian and Lagrangian sides of the harmonic approximation result, which will be central in the application to the one-step improvement Proposition 1.16. The proof of this Lagrangian version is very similar to [14, Proof of Theorem 1.5]; however, we stress again that, since we are not dealing with minimizers of the Euclidean transport cost, one has to be careful when passing from Eulerian to Lagrangian energies.

Proof of Lemma 1.14

By scaling, it is enough to show that

$$\begin{aligned} \iint _0^1 |x-y + \nabla {\varPhi }(t x + (1-t)y)|^2 \,{\mathrm {d}}t {\mathrm {d}}\pi \le \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla {\varPhi }|^2 + {\varDelta }_1. \end{aligned}$$
(4.5)

By the \(L^{\infty }\)-bound, we know that if \((x,y) \in {{\,\mathrm{Spt}\,}}\pi \), then for any \(t\in [0,1]\) there holds \(t x + (1-t)y \in B_{2}\). Hence, we may write

$$\begin{aligned} \iint _0^1 |x-y + \nabla {\varPhi }(t x + (1-t)y)|^2 \,{\mathrm {d}}t {\mathrm {d}}\pi = \iint _0^1 |x-y + \nabla {\varPhi }(t x + (1-t)y)|^2 \, 1_{\{t x + (1-t)y \in B_{2}\}} \,{\mathrm {d}}t {\mathrm {d}}\pi . \end{aligned}$$
(4.6)

Multiplying out the square and using the definition of \((\rho ,j)\) from (1.24), we may writeFootnote 26

$$\begin{aligned}&\iint _0^1 |x-y + \nabla {\varPhi }(t x + (1-t)y)|^2 \, 1_{\{t x + (1-t)y \in B_{2}\}} \,{\mathrm {d}}t {\mathrm {d}}\pi \\&\quad = \iint _0^1 |x-y|^2 \, 1_{\{t x + (1-t)y \in B_{2}\}} \,{\mathrm {d}}t {\mathrm {d}}\pi + \iint _0^1 |\nabla {\varPhi }(t x + (1-t)y)|^2 \, 1_{\{t x + (1-t)y \in B_{2}\}}\,{\mathrm {d}}t {\mathrm {d}}\pi \\&\qquad + 2 \iint _0^1 (x-y)\cdot \nabla {\varPhi }(t x + (1-t)y) \, 1_{\{t x + (1-t)y \in B_{2}\}}\,{\mathrm {d}}t {\mathrm {d}}\pi \\&\quad =\int |x-y|^2 \,{\mathrm {d}}\pi - \iint _0^1 |x-y|^2 \, 1_{\{t x + (1-t)y \in B_{2}^c\}} \,{\mathrm {d}}t {\mathrm {d}}\pi + \int _{B_{2}\times [0,1]} |\nabla {\varPhi }|^2\,{\mathrm {d}}\rho \\&\qquad - 2 \int _{B_{2}\times [0,1]} \nabla {\varPhi }\cdot {\mathrm {d}}j. \end{aligned}$$

Now note that (1.30) implies a local counterpart of (1.31): for any open set \(B \subseteq {\mathbb {R}}^d\),

$$\begin{aligned} \int _{B \times [0,1]} \frac{1}{\rho } |j|^2 \le \iint _0^1 |x-y|^2 1_{\{tx + (1-t)y \in B\}} \,{\mathrm {d}}t {\mathrm {d}}\pi . \end{aligned}$$
(4.7)

Arguing with an open \(\epsilon \)-neighborhood of B and continuity of the right-hand side with respect to B, one may show that (4.7) holds for any closed set B, so that in particular

$$\begin{aligned} \int _{B_{2}^c \times [0,1]} \frac{1}{\rho }|j|^2 \le \iint _0^1 |x-y|^2 1_{\{tx + (1-t)y \in B_{2}^c\}} \,{\mathrm {d}}t {\mathrm {d}}\pi , \end{aligned}$$

which, together with (1.44), gives

$$\begin{aligned}&\int |x-y|^2 \,{\mathrm {d}}\pi - \iint _0^1 |x-y|^2 \, 1_{\{t x + (1-t)y \in B_{2}^c\}} \,{\mathrm {d}}t {\mathrm {d}}\pi \\&\quad \le \int \frac{1}{\rho } |j|^2 + {\varDelta }_1 - \int _{B_{2}^c\times [0,1]} \frac{1}{\rho } |j|^2 \\&\quad = \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j|^2 + {\varDelta }_1, \end{aligned}$$

so that (4.5) follows from the identity

$$\begin{aligned}&\int _{B_{2}\times [0,1]} \frac{1}{\rho } |j|^2 + \int _{B_{2}\times [0,1]} |\nabla {\varPhi }|^2\,{\mathrm {d}}\rho - 2 \int _{B_{2}\times [0,1]} \nabla {\varPhi }\cdot {\mathrm {d}}j\nonumber \\&\quad = \int _{B_{2}\times [0,1]} \left( \left| \frac{{\mathrm {d}}j}{{\mathrm {d}}\rho }\right| ^2 + |\nabla {\varPhi }|^2 - 2 \nabla {\varPhi }\cdot \frac{{\mathrm {d}}j}{{\mathrm {d}}\rho } \right) \,{\mathrm {d}}\rho = \int _{B_{2}\times [0,1]} \left| \frac{{\mathrm {d}}j}{{\mathrm {d}}\rho } - \nabla {\varPhi }\right| ^2 \,{\mathrm {d}}\rho \nonumber \\&\quad = \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla {\varPhi }|^2. \end{aligned}$$
(4.8)

\(\square \)
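The identity (4.8) used at the end of the proof is the pointwise completing-the-square identity \(\frac{1}{\rho }|j|^2 + \rho |g|^2 - 2 g j = \frac{1}{\rho }|j-\rho g|^2\), valid for any positive \(\rho \). A direct numerical check at random sample points, with g standing for the values of \(\nabla {\varPhi }\):

```python
import numpy as np

rng = np.random.default_rng(2)

rho = rng.random(1000) + 0.5          # positive density samples
j = rng.normal(size=1000)             # flux samples (one component, for simplicity)
g = rng.normal(size=1000)             # samples of the gradient field

lhs = (j**2 / rho + rho * g**2 - 2.0 * g * j).sum()
rhs = ((j - rho * g) ** 2 / rho).sum()
assert np.isclose(lhs, rhs)
```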

Harmonic Approximation

In this section we sketch the proof of the (Eulerian) harmonic approximation Theorem 1.13. As already noted in the introduction, the proof of Theorem 1.13 is done at the Eulerian level (as in [13, 14]) by constructing a suitable competitor.

Proof of Theorem 1.13

By scaling we may without loss of generality assume that \(R=1\). Let \((\rho ,j)\) be the Eulerian description of the coupling \(\pi \in {\varPi }(\mu , \nu )\). The proof of the Eulerian version of the harmonic approximation consists of the following four steps, at the heart of which is the construction of a suitable competitor (Step 3). Note that, since we want to make the dependence on the parameter M in the \(L^{\infty }\)-bound (1.37) precise, one actually has to look more closely at the proofs of the corresponding statements in [14], since in their presentation of the results the estimate \(M\lesssim ({\mathscr {E}}_6 + {\mathscr {D}}_6)^{\frac{1}{d+2}}\) is used.Footnote 27

Step 1 (Passage to a regularized problem). Choose a good radius \(R_*\in (3,4)\) for which the flux \({\overline{f}}\) is well-behaved on \(\partial B_{R_*}\). Since we are working with \(L^2\)-based quantities, obtaining \(L^2\) bounds on \(\nabla {\varPhi }\) would require an estimate of \({\overline{f}}\) in \(L^2\), or at least in the Sobolev trace space \(L^{\frac{2(d-1)}{d}}\). However, the boundary fluxes of j are merely measures, and the approximate orthogonality (see Step 2) requires a regularity theory up to the boundary. One therefore first passes to a solution \(\varphi \) (with \(\int _{B_{R_*}} \varphi \,{\mathrm {d}}x = 0\)) of the regularized problem

$$\begin{aligned} {\varDelta } \varphi = {\mathrm {const}} \quad \text {in } B_{R_*} \quad \text {and} \quad \nu _{R_*} \cdot \nabla \varphi = {\widehat{g}} \quad \text {on } \partial B_{R_*}, \end{aligned}$$
(5.1)

where \({\mathrm {const}}\) is the generic constant for which the equation has a solution, and \({\widehat{g}}\) is a regularization through rearrangement of \({\overline{f}}\) with good \(L^2\) bounds (see [14, Section 3.1.2] for details).
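The value of \({\mathrm {const}}\) is dictated by the divergence theorem: integrating (5.1) over \(B_{R_*}\) forces \({\mathrm {const}} = \frac{1}{|B_{R_*}|}\int _{\partial B_{R_*}}{\widehat{g}}\), and with this choice the Neumann problem is solvable. A one-dimensional sketch of the same compatibility condition (on an interval the boundary integral reduces to a difference of endpoint fluxes):

```python
import numpy as np

# Solve phi'' = a on (0, 1) with Neumann data phi'(0) = g0, phi'(1) = g1.
# Integrating once forces the compatibility condition a = g1 - g0.
g0, g1 = -0.3, 0.8
a = g1 - g0                       # the "generic constant" of the 1-d analogue

xgrid = np.linspace(0.0, 1.0, 1001)
phi_prime = g0 + a * xgrid        # phi' obtained by integrating phi'' = a
assert abs(phi_prime[0] - g0) < 1e-12
assert abs(phi_prime[-1] - g1) < 1e-12
```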

Using properties of the regularized flux \({\widehat{g}}\) and elliptic regularity, the error made by this approximation can be quantified as

$$\begin{aligned} \sup _{B_2} |\nabla ({\varPhi }-\varphi )|^2 + \sup _{B_2} |\nabla ^2({\varPhi }-\varphi )|^2 + \sup _{B_2} |\nabla ^3({\varPhi }-\varphi )|^2 \lesssim M^{\alpha } ({\mathscr {E}}_6 + {\mathscr {D}}_6)^{1+ \frac{\alpha }{2}}, \end{aligned}$$
(5.2)

for any \(\alpha \in (0,1)\), see [14, Proof of Proposition 1.7]. Note that by the definition of \(\rho \) in (1.24) and assumption (1.36) we have that

$$\begin{aligned} \int _{B_2\times [0,1]} {\mathrm {d}}{\rho } \le \int {\mathrm {d}}\rho = \int {\mathrm {d}}\pi = \mu ({\mathbb {R}}^d) = \mu (B_7) \lesssim 1, \end{aligned}$$

so that together with (5.2) we may estimate, using \(M\le 1\),

$$\begin{aligned} \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla {\varPhi }|^2&\lesssim \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla \varphi |^2 + ({\mathscr {E}}_6 + {\mathscr {D}}_6)^{1+\frac{\alpha }{2}} \end{aligned}$$

for any \(\alpha \in (0,1)\). Note that the error term is superlinear in \({\mathscr {E}}_6 + {\mathscr {D}}_6\).

Step 2 (Approximate orthogonality [14, Proof of Lemma 1.8]). For every \(0<\tau \ll 1\) there exist \(\epsilon _{\tau } > 0\), \(C_{\tau } < \infty \) such that if \({\mathscr {E}}_6 + {\mathscr {D}}_6 \le \epsilon _{\tau }\), then

$$\begin{aligned} \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla \varphi |^2 - \left( \int _{B_{R_*}\times [0,1]} \frac{1}{\rho } |j|^2 - \int _{B_{R_*}} |\nabla \varphi |^2 \right) \le \tau {\mathscr {E}}_6 + C_{\tau } {\mathscr {D}}_6. \end{aligned}$$
(5.3)

The proof of (5.3) essentially relies on the representation formula

$$\begin{aligned} \int _{B_{R_*}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla \varphi |^2&= \int _{B_{R_*}\times [0,1]} \frac{1}{\rho } |j|^2 - \int _{B_{R_*}} |\nabla \varphi |^2 + 2 \int _{B_{R_*}} \varphi \,{\mathrm {d}}(\mu -\nu ) \\&\quad + 2 \int _{\partial B_{R_*}} \varphi \,{\mathrm {d}}({\widehat{g}}-{\overline{f}}) + \int _{B_{R_*}\times [0,1]} |\nabla \varphi |^2 \,({\mathrm {d}}\rho - {\mathrm {d}}x). \end{aligned}$$

The three error terms in the second line of this equality are then bounded as follows. The bound on the first term uses that \(\mu \) and \(\nu \) are close in Wasserstein distance. The estimate of the second term relies on the fact that \({\widehat{g}}\) and \({\overline{f}}\) are closeFootnote 28; this bound relies on the choice of a good radius in Step 1 and on \(L^2\) estimates up to the boundary for \(\nabla \varphi \). The bound on the third error term uses elliptic regularity theory and a restriction result for the Wasserstein distance, which implies that \(W^2_{B_{R_*}}(\int _0^1\rho ,\kappa ) \lesssim {\mathscr {E}}_6 + {\mathscr {D}}_6\).Footnote 29 This estimate actually requires a further regularization of \({\widehat{g}}\) and, by relying on interior regularity estimates, explains why one has to pass from \(B_{R_*}\) to the slightly smaller ball \(B_{2}\) in the estimate (5.3). A close inspection of [14, Proof of Lemma 1.8] shows that the term involving M in these error estimates comes in product with a superlinear power of \({\mathscr {E}}_6 + {\mathscr {D}}_6\), as in Step 1, so that we may bound \(M\le 1\) and still obtain an arbitrarily small prefactor in (5.3) by choosing \({\mathscr {E}}_6 + {\mathscr {D}}_6\) small enough.

Step 3 (Construction of a competitor [14, Proof of Lemma 1.9]). For every \(0<\tau \ll 1\) there exist \(\epsilon _{\tau }>0\) and \(C,C_{\tau }<\infty \) such that if \({\mathscr {E}}_6+{\mathscr {D}}_6\le \epsilon _{\tau }\), then there exists a density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfying (4.3) for \(r=R_*\), and such that

$$\begin{aligned} \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 - \int _{B_{R_*}} |\nabla \varphi |^2&\le \left( \tau + \frac{CM}{\tau }\right) {\mathscr {E}}_6 + C_{\tau } {\mathscr {D}}_6. \end{aligned}$$
(5.4)

Step 4 (Almost-minimality on the Eulerian level). Since the density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfies (4.3) for \(r=R_*\), Corollary 4.1 implies that

$$\begin{aligned} \int _{B_{R_*}\times [0,1]} \frac{1}{\rho } |j|^2 - \int _{B_{R_*}} |\nabla \varphi |^2&\le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 - \int _{B_{R_*}} |\nabla \varphi |^2 + {\varDelta }. \end{aligned}$$
(5.5)

Combining the above steps, we have proved that for any \(0<\tau \ll 1\) there exist \(\epsilon _{\tau }>0\) and \(C, C_{\tau }<\infty \) such that if \({\varPhi }\) is the solution of (1.40), then

$$\begin{aligned} \int _{B_{2}\times [0,1]} \frac{1}{\rho } |j-\rho \nabla {\varPhi }|^2 \le \left( \tau + \frac{CM}{\tau }\right) {\mathscr {E}}_6 + C_{\tau } {\mathscr {D}}_6 + {\varDelta }. \end{aligned}$$
(5.6)

\(\square \)

One-Step Improvement

The following proposition is a one-step improvement result, which will be the basis of the Campanato iteration in Theorem 1.1. Note that the iteration is more complicated than in [13], because at each step we have to restrict the c-optimal coupling to the smaller cross where the \(L^{\infty }\)-bound holds in order to be able to apply the harmonic approximation result. As a consequence, we have to make sure that the qualitative bound (2.15) on the displacement (which is an important ingredient in obtaining a quantitative version of the \(L^{\infty }\)-bound (1.17)) is propagated in each step of the iteration.Footnote 30
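The mechanism behind such a Campanato iteration can be illustrated by a scalar toy model: if a one-step improvement \(E(\theta r) \le \theta ^{2\gamma } E(r) + C r^{2\beta }\) with \(\beta < \gamma \) is available on every dyadic scale \(r = \theta ^k\), then the excess decays like \(E(r)\lesssim r^{2\beta }\). The following sketch iterates this recursion with illustrative exponents (they are not the ones of the paper):

```python
theta, gamma, beta, C = 0.5, 1.0, 0.5, 1.0
E = 1.0
ratios = []
for k in range(60):
    ratios.append(E / theta ** (2.0 * beta * k))    # excess measured against r^{2 beta}
    E = theta ** (2.0 * gamma) * E + C * theta ** (2.0 * beta * k)  # one-step improvement
# The ratio E(theta^k) / theta^{2 beta k} stays bounded along the iteration:
assert max(ratios) < 10.0
```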

We start with the short proof of Lemma 1.8, which is the starting point of each iteration step.

Proof of Lemma 1.8

Let \(\pi \in {\varPi }(\mu , \nu )\) be a c-optimal coupling, and let \(\pi _{{\varOmega }}:=\pi \lfloor _{{\varOmega }}\) be its restriction to a subset \({\varOmega }\subseteq {\mathbb {R}}^d \times {\mathbb {R}}^d\). Then \(\pi _{{\varOmega }} \in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }})\) where the marginal measures \(\mu _{{\varOmega }}\), \(\nu _{{\varOmega }}\) are defined in (1.19).

Given any coupling \({\widetilde{\pi }} \in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }})\), we can define \({\widehat{\pi }}:= \pi - \pi _{{\varOmega }} + {\widetilde{\pi }}\). It is easy to see that \({\widehat{\pi }}\) is an admissible coupling between the measures \(\mu \) and \(\nu \); hence, by c-optimality of \(\pi \) and additivity of the cost functional with respect to the transference plan, we obtain

$$\begin{aligned} \int c(x,y) \,{\mathrm {d}}\pi \le \int c(x,y) \,{\mathrm {d}}{\widehat{\pi }} = \int c(x,y) \,{\mathrm {d}}\pi - \int c(x,y) \,{\mathrm {d}}\pi _{{\varOmega }} + \int c(x,y) \,{\mathrm {d}}{\widetilde{\pi }}, \end{aligned}$$

hence

$$\begin{aligned} \int c(x,y) \,{\mathrm {d}}\pi _{{\varOmega }} \le \int c(x,y) \,{\mathrm {d}}{\widetilde{\pi }} \quad \text {for any} \quad {\widetilde{\pi }}\in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }}), \end{aligned}$$

that is, \(\pi _{{\varOmega }}\) is a c-optimal coupling between \(\mu _{{\varOmega }}\) and \(\nu _{{\varOmega }}\). \(\square \)
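The gluing argument in the proof is linear algebra on transference plans and can be checked on discrete measures: replacing \(\pi \lfloor _{{\varOmega }}\) by any competitor with the same marginals leaves the marginals of the glued plan unchanged, and the cost splits additively. A small sketch with illustrative random plans:

```python
import numpy as np

rng = np.random.default_rng(4)

n = 5
pi = rng.random((n, n)); pi /= pi.sum()        # a discrete transference plan
cost = rng.random((n, n))                      # discrete cost matrix

omega = np.zeros((n, n), dtype=bool); omega[:3, :3] = True
pi_omega = np.where(omega, pi, 0.0)            # restriction pi|_Omega

# A competitor with the same marginals as pi|_Omega: the product coupling.
mu_om = pi_omega.sum(axis=1); nu_om = pi_omega.sum(axis=0)
pi_comp = np.outer(mu_om, nu_om) / pi_omega.sum()

pi_hat = pi - pi_omega + pi_comp               # glued plan
assert np.all(pi_hat >= 0)
assert np.allclose(pi_hat.sum(axis=1), pi.sum(axis=1))  # same first marginal
assert np.allclose(pi_hat.sum(axis=0), pi.sum(axis=0))  # same second marginal
# Additivity of the cost functional under the gluing:
assert np.isclose((cost * pi_hat).sum(),
                  (cost * pi).sum() - (cost * pi_omega).sum() + (cost * pi_comp).sum())
```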

As a direct application and as a further preparation for the one-step-improvement we present the short proof of Corollary 1.9.

Proof of Corollary 1.9

Let \(\pi \in {\varPi }(\mu ,\nu )\) be c-optimal. Then by Lemma 1.8 the restricted coupling \(\pi _R := \pi \lfloor _{(B_R\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_R)}\) is a c-optimal coupling between the measures \(\mu _R\) and \(\nu _R\) defined via

$$\begin{aligned} \mu _R(A) := \pi _R(A\times {\mathbb {R}}^d), \quad \nu _R(A) := \pi _R({\mathbb {R}}^d\times A) \end{aligned}$$

for any \(A\subseteq {\mathbb {R}}^d\) Borel. In particular, we have that

$$\begin{aligned} \mu _R(A) \le \pi (A\times {\mathbb {R}}^d) = \mu (A). \end{aligned}$$

If \(A\subseteq B_{2R}^c\), then

$$\begin{aligned} \mu _R(A) = \pi (((A\cap B_R)\times {\mathbb {R}}^d) \cup (A\times B_R)) = \pi (A\times B_R) = 0, \end{aligned}$$

since for any \((x,y)\in {{\,\mathrm{Spt}\,}}\pi _R\), by assumption we have that \(|x-y| \le MR \le R\), so \((A\times B_R) \cap {{\,\mathrm{Spt}\,}}\pi _R = \emptyset \). Hence, \({{\,\mathrm{Spt}\,}}\mu _R \subseteq B_{2R}\). Similarly, if \(A\subseteq B_R\), then

$$\begin{aligned} \mu _R(A) = \pi ((A \times {\mathbb {R}}^d) \cup (A\times B_R)) = \pi (A\times {\mathbb {R}}^d) = \mu (A), \end{aligned}$$

which implies that \(\mu _R = \mu \) on \(B_R\).

By symmetry, the same properties hold for \(\nu _R\). \(\square \)
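The two properties of \(\mu _R\) can be visualized in a one-dimensional discrete model of this construction: take a plan supported in the band \(\{|x-y|\le R\}\) (the \(L^{\infty }\)-bound) and restrict it to a cross; the restricted marginal then agrees with \(\mu \) on \(B_R\) and vanishes outside \(B_{2R}\). The grid and weights below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

R = 1.0
x = np.linspace(-4.0, 4.0, 33)      # common grid for both variables
n = x.size

# A plan concentrated in the band |x_i - y_j| <= R (the L^infty-bound).
band = np.abs(x[:, None] - x[None, :]) <= R
pi = np.where(band, rng.random((n, n)), 0.0); pi /= pi.sum()
mu = pi.sum(axis=1)

# Restriction to the cross (B_R x R) U (R x B_R).
cross = (np.abs(x)[:, None] <= R) | (np.abs(x)[None, :] <= R)
pi_R = np.where(cross, pi, 0.0)
mu_R = pi_R.sum(axis=1)

inside, outside = np.abs(x) <= R, np.abs(x) > 2.0 * R
assert np.allclose(mu_R[inside], mu[inside])   # mu_R = mu on B_R
assert np.all(mu_R[outside] == 0.0)            # Spt mu_R contained in B_2R
```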

We now give the proof of the one-step improvement Proposition 1.16, which is the workhorse of the Campanato iteration.

Proof of Proposition 1.16

Step 1 (Rescaling). Without loss of generality we may assume that \(R=1\). To simplify notation we will also use the shorthand

$$\begin{aligned} {\mathscr {E}}:= {\mathscr {E}}_9(\pi ), \quad&{\mathscr {D}}:= {\mathscr {D}}_9(\rho _0, \rho _1), \\ \text {and} \quad [\rho _0]_{\alpha } := [\rho _0]_{\alpha , 9}, \quad [\rho _1]_{\alpha }&:= [\rho _1]_{\alpha , 9}, \quad \left[ \nabla _{xy}c \right] _{\alpha } := \left[ \nabla _{xy}c \right] _{\alpha , 9}. \end{aligned}$$

Note that by the normalization \(\rho _0(0)=\rho _1(0)=1\) we have that \(\frac{1}{2}\le \rho _j \le 2\) for \(j=0,1\), provided that \([\rho _0]_{\alpha } + [\rho _1]_{\alpha }\) is small enough. In particular, Lemma A.4 then implies that

$$\begin{aligned} {\mathscr {D}}\lesssim [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2. \end{aligned}$$

To justify the reduction to \(R=1\), let \(({\widetilde{x}},{\widetilde{y}}) = S_R(x,y) := (R^{-1}x,R^{-1}y)\) and set

$$\begin{aligned} {\widetilde{\rho }}_0({\widetilde{x}}) = \rho _0(R{\widetilde{x}}), \quad {\widetilde{\rho }}_1({\widetilde{y}}) = \rho _1(R{\widetilde{y}}), \quad {\widetilde{c}}({\widetilde{x}},{\widetilde{y}}) = R^{-2} c(R{\widetilde{x}},R{\widetilde{y}}), \end{aligned}$$

and \({\widetilde{b}} = R^{-1} b\). We still have \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0) = 1\) and \(\nabla _{{\widetilde{x}}{\widetilde{y}}}{\widetilde{c}}(0,0) = -{\mathbb {I}}\), and \({\widetilde{\pi }} := R^{-d}(S_R)_{\#}\pi \) is the \({\widetilde{c}}\)-optimal coupling between \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\).

Step 2 (\(L^{\infty }\)-bound for \(\pi \)). We claim that the coupling \(\pi \) satisfies the \(L^{\infty }\)-bound

(6.1)

with \(M = C_{\infty } ({\mathscr {E}}+{\mathscr {D}})^{\frac{1}{d+2}}\) for some constant \(C_{\infty }<\infty \) depending only on d.

In view of the harmonic approximation Theorem 1.13 it is therefore natural to consider the restricted coupling provided by Lemma 1.8 and Corollary 1.9, which is a c-optimal coupling between its own marginals, denoted by \(\mu \) and \(\nu \).

Indeed, since \(\pi \) is c-optimal, its support is c-monotone. By assumptions (1.49), (1.50), and Remark 1.7 we may therefore estimate

$$\begin{aligned} \Vert \nabla _{xy}c+{\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{8,{\varLambda }})} \lesssim _{{\varLambda }} [\nabla _{xy}c]_{\alpha }\ll _{{\varLambda }} 1. \end{aligned}$$
(6.2)

Hence, appealing to Proposition 1.5 (suitably rescaled), we obtain the claimed \(L^{\infty }\)-bound (6.1). Notice that the dependence of the smallness assumption (1.50) on \({\varLambda }\) only enters through the \({\varLambda }\)-dependence of (6.2).

Step 3 (Properties of the marginals \(\mu \) and \(\nu \) of the restricted coupling). We claim that the marginals \(\mu \) and \(\nu \) of the restricted coupling satisfy

$$\begin{aligned}&{{\,\mathrm{Spt}\,}}\mu , {{\,\mathrm{Spt}\,}}\nu \subseteq B_7, \quad \text {and} \quad \mu (B_7) \le 2 |B_7|. \end{aligned}$$

Indeed, due to the \(L^{\infty }\)-bound (6.1) the marginal measures are supported in \(B_{7}\) if \({\mathscr {E}}+{\mathscr {D}}\ll 1\) (so that \(M\le 1\)). Furthermore, from Corollary 1.9 we have that \(\mu = \rho _0 \,{\mathrm {d}}x\) and \(\nu = \rho _1 \,{\mathrm {d}}y\) on \(B_6\), as well as \(\mu \le \rho _0\,{\mathrm {d}}x\), which implies that

$$\begin{aligned} \mu (B_7) \le \int _{B_7} \rho _0 \,{\mathrm {d}}x \le (1+[\rho _0]_{\alpha }) |B_7| \le 2 |B_7| \end{aligned}$$

since by assumption (1.50) we may assume that \([\rho _0]_{\alpha }\le 1\).

Step 4 (Almost-minimality of the restricted coupling and applicability of the harmonic approximation Theorem 1.13). We show next that the restricted coupling is an almost-minimizer of the Euclidean transport problem, in the sense that for any \({\widetilde{\pi }}\in {\varPi }(\mu ,\nu )\) there holds

(6.3)

with \({\varDelta } = C [\nabla _{xy}c]_{\alpha } {\mathscr {E}}^{\frac{1}{2}}\), and that satisfies the assumptions of Theorem 1.13.

Indeed, by Step 3 and the c-optimality of , we may apply Proposition 1.10 to obtain inequality (6.3) with

$$\begin{aligned} {\varDelta } = C \Vert \nabla _{xy} c + {\mathbb {I}}\Vert _{{\mathscr {C}}^0(B_7\times B_7)} {\mathscr {E}}_7^{\frac{1}{2}} \lesssim [\nabla _{xy}c]_{\alpha } {\mathscr {E}}^{\frac{1}{2}}, \end{aligned}$$
(6.4)

where we also used that \({\mathscr {E}}_7 \lesssim {\mathscr {E}}\).

Note that in view of Lemma 1.12 the Eulerian description of satisfies (1.38). Together with Step 2 and Step 3 this implies that the assumptions of Theorem 1.13 are fulfilled.

Step 5 (Definition of b and B and proof of estimate (1.51)). For given \(\tau > 0\), to be chosen later, let \(\epsilon _{\tau }\) and \(C_{\tau }\) be as in the harmonic approximation result (Theorem 1.13). We claim that, with the harmonic gradient field \(\nabla {\varPhi }\) defined in (1.40),

$$\begin{aligned} b:=\nabla {\varPhi }(0), \quad A:=\nabla ^2{\varPhi }(0), \quad \text {and} \quad B:= {\mathrm {e}}^{\frac{A}{2}}, \end{aligned}$$
(6.5)

satisfy the bounds (1.51).

Since \(\rho _0(0) = \rho _1(0) = 1\), we have \(\frac{1}{2} \le \rho _j \le 2\), \(j=0,1\), provided \([\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2\) is small enough, so that (1.12) holds. In particular, since \(\mu \) agrees with \(\rho _0 \,{\mathrm {d}}x\) on \(B_6\) and \(\nu \) agrees with \(\rho _1 \,{\mathrm {d}}y\) on \(B_6\), we may estimate

$$\begin{aligned} {\mathscr {D}}_6(\mu , \nu ) = {\mathscr {D}}_6(\rho _0, \rho _1) {\mathop {\lesssim }\limits ^{(1.12)}} [\rho _0]_{\alpha ,6}^2 + [\rho _1]_{\alpha ,6}^2 \lesssim [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2. \end{aligned}$$

Assume now that

$$\begin{aligned} {\mathscr {E}}_6(\pi ) + {\mathscr {D}}_6(\mu ,\nu ) \lesssim {\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \le \epsilon _{\tau }, \end{aligned}$$
(6.6)

so that Theorem 1.13 allows us to define the vector b and the matrices A and B as in (6.5). Note that since A is symmetric, so is the matrix B. By (1.42) and (1.12), we then have

$$\begin{aligned} |b|^2 \lesssim {\mathscr {E}}_6(\pi ) +{\mathscr {D}}_6(\mu , \nu ) \lesssim {\mathscr {E}}+[\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2, \end{aligned}$$

and the same estimate holds for \(|A|^2\), so that

$$\begin{aligned} |B-{\mathbb {I}}|^2 \lesssim {\mathscr {E}}+[\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2. \end{aligned}$$

This proves the estimate (1.51). Furthermore, recalling the definition of \({\varPhi }\) in (1.40), we bound

$$\begin{aligned} |1-\det B^2|^2&= \left| 1- {\mathrm {e}}^{{\mathrm {tr}}A}\right| ^2 = \left| 1- {\mathrm {e}}^{{\varDelta }{\varPhi }(0)}\right| ^2 = \left| 1-{\mathrm {e}}^{\kappa _{\mu }-\kappa _{\nu }}\right| ^2 \lesssim \left| \kappa _{\nu } - \kappa _{\mu } \right| ^2\nonumber \\&\overset{(1.10)}{\lesssim } {\mathscr {D}}_{R_*}(\mu , \nu ) \overset{(1.12)}{\lesssim } R_*^{2\alpha } \left( [\rho _0]_{\alpha , R_*}^2 + [\rho _1]_{\alpha , R_*}^2 \right) \nonumber \\&\lesssim [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2. \end{aligned}$$
(6.7)

In view of (1.51) we may assume that \(b \in B_9\), so thatFootnote 31

$$\begin{aligned} |\rho _1(b)^{\frac{1}{d}} - \rho _1(0)^{\frac{1}{d}}|^2 \lesssim |\rho _1(b) - \rho _1(0)|^2 \le |b|^{2\alpha } [\rho _1]_{\alpha }^2 \lesssim [\rho _1]_{\alpha }^2, \end{aligned}$$
(6.8)

and similarly for \(D = -\nabla _{xy}c(0,b)\),

$$\begin{aligned} |D-{\mathbb {I}}|^2&\lesssim \left[ \nabla _{xy}c\right] _{\alpha }^2, \end{aligned}$$
(6.9)

which implies that

$$\begin{aligned} |1-\det D|^2 \lesssim |D-{\mathbb {I}}|^2 \lesssim [\nabla _{xy}c]_{\alpha }^2. \end{aligned}$$
(6.10)

Now, noticing that \(\gamma = \rho _1(b)^{\frac{1}{d}} \left( \frac{|\det B|^2}{|\det D|}\right) ^{\frac{1}{d}}\) because \(\rho _0(0) = 1\), the estimates (6.7) and (6.8) (together with \(\rho _1(0)=1\)), and (6.10) imply

$$\begin{aligned} |1-\gamma |^2 \lesssim [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 + [\nabla _{xy}c]_{\alpha }^2. \end{aligned}$$
(6.11)
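The two elementary inequalities feeding into (6.11) can be checked numerically. The following sketch (not part of the proof; the sample matrices and the constant 4 are hypothetical choices) verifies that \(t\mapsto t^{\frac{1}{d}}\) is 1-Lipschitz on \([\frac{1}{2},2]\), which is behind (6.8), and that the determinant is Lipschitz near the identity, which is behind (6.10).

```python
# Numerical sanity check (not part of the proof) of the two elementary
# inequalities behind (6.8) and (6.10); sample values are hypothetical.

def det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def frob(E):
    """Frobenius norm."""
    return sum(x * x for row in E for x in row) ** 0.5

# (i) t -> t^(1/d) is 1-Lipschitz on [1/2, 2] for d >= 1, as used in (6.8).
d = 3
grid = [0.5 + 0.015 * k for k in range(101)]
assert all(abs(t ** (1.0 / d) - 1.0) <= abs(t - 1.0) for t in grid)

# (ii) |1 - det D| <= C |D - I| for D close to the identity, as in (6.10).
eps = 0.1
perturbations = [
    [[eps, 0, 0], [0, -eps, 0], [0, 0, eps]],
    [[0, eps, 0], [eps, 0, eps], [0, eps, 0]],
    [[eps, eps, 0], [0, eps, 0], [eps, 0, -eps]],
]
for E in perturbations:
    D = [[(1.0 if i == j else 0.0) + E[i][j] for j in range(3)] for i in range(3)]
    assert abs(1.0 - det3(D)) <= 4.0 * frob(E)
```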

Step 6 (Mapping properties of Q). We show that if \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\),Footnote 32 then

(6.12)

Indeed, we have

and from (1.51), (6.9), and (6.11) it follows that \(B^{-1}B_{\theta } \subseteq B_{2\theta }\) and \(\gamma ^{-1} D^{-*}BB_{\theta } + b \subseteq B_{2\theta }\) if \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\), which we assume to be true from now on.

Step 7 (Transformation of the displacement). We next show that for all \(({\widehat{x}},{\widehat{y}}) = Q(x,y) \in {\mathbb {R}}^d \times {\mathbb {R}}^d\) and \(t\in [0,1]\) there holds

$$\begin{aligned} B({\widehat{y}}-{\widehat{x}}) = y-x - \nabla {\varPhi }(t x + (1-t)y) + \delta , \end{aligned}$$
(6.13)

where the error \(\delta \) is controlled by

$$\begin{aligned} |\delta |&\lesssim \left( |\gamma -1| + |D-{\mathbb {I}}| \right) (|y| + |b|) + |\nabla ^2{\varPhi }(0)|^2 |x| + |\nabla ^2{\varPhi }(0)| |x-y| \nonumber \\&\quad + \sup _{s\in [0,1]} |\nabla ^3{\varPhi }(s(tx + (1-t)y))| \, |tx + (1-t)y|^2. \end{aligned}$$
(6.14)

Indeed, by (1.46) we have

$$\begin{aligned} B({\widehat{y}}-{\widehat{x}})&= \gamma D^* (y-b) - B^2 x = y-b-B^2 x + (\gamma D^* - {\mathbb {I}})(y-b), \end{aligned}$$

and the second term, which will turn out to be an error term, can be bounded by

$$\begin{aligned} |(\gamma D^* - {\mathbb {I}})(y-b)|&\le \left( |\gamma -1||D^*| + |D^*-{\mathbb {I}}|\right) |y-b| \nonumber \\&{\mathop {\lesssim }\limits ^{(6.9)}} \left( |\gamma -1| + |D-{\mathbb {I}}| \right) (|y| + |b|). \end{aligned}$$
(6.15)

We show next that \(B^2 x + b \approx x + \nabla {\varPhi }(tx + (1-t)y)\), with an error that can be controlled. This relies on the fact that

$$\begin{aligned} B^2 = {\mathrm {e}}^{A} = {\mathbb {I}}+ A + ({\mathrm {e}}^{A} - {\mathbb {I}}-A), \end{aligned}$$
(6.16)

where

$$\begin{aligned} |{\mathrm {e}}^{A} - {\mathbb {I}}-A| \lesssim |A|^2, \end{aligned}$$
(6.17)

and Taylor approximation. Indeed,

$$\begin{aligned}&|\nabla {\varPhi }(tx + (1-t)y) - (\nabla {\varPhi }(0) + \nabla ^2 {\varPhi }(0) (tx + (1-t)y))| \nonumber \\&\quad \lesssim \sup _{s\in [0,1]} |\nabla ^3{\varPhi }(s(tx + (1-t)y))| \, |tx + (1-t)y|^2 \end{aligned}$$
(6.18)

so that with (6.16), (6.17), and (6.5),

$$\begin{aligned}&|B^2 x + b - (x + \nabla {\varPhi }(tx + (1-t)y)) | \nonumber \\&\quad \lesssim |\nabla ^2{\varPhi }(0) x + \nabla {\varPhi }(0) - \nabla {\varPhi }(tx + (1-t)y)| + |\nabla ^2{\varPhi }(0)|^2 |x| \nonumber \\&\quad \le |\nabla ^2{\varPhi }(0) (tx + (1-t)y) + \nabla {\varPhi }(0) - \nabla {\varPhi }(tx + (1-t)y)| \nonumber \\&\qquad + |1-t| |\nabla ^2{\varPhi }(0)| |x-y| + |\nabla ^2{\varPhi }(0)|^2 |x| \nonumber \\&\quad {\mathop {\lesssim }\limits ^{(6.18)}} \sup _{s\in [0,1]} |\nabla ^3{\varPhi }(s(tx + (1-t)y))| \, |tx + (1-t)y|^2 \nonumber \\&\qquad + |\nabla ^2{\varPhi }(0)| |x-y| + |\nabla ^2{\varPhi }(0)|^2 |x|. \end{aligned}$$
(6.19)

Collecting all the error terms from (6.15) and (6.19) then yields (6.13).
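For symmetric A, the Taylor bound (6.17) reduces, after diagonalization, to the scalar estimate \(|e^a - 1 - a| \le (e-2)|a|^2 \le |a|^2\) for \(|a|\le 1\), applied to each eigenvalue. A minimal numerical check of this scalar bound (illustration only):

```python
import math

def exp_remainder(a: float) -> float:
    """|e^a - 1 - a|: second-order Taylor remainder of exp at 0."""
    return abs(math.exp(a) - 1.0 - a)

# For |a| <= 1: sum_{k>=2} |a|^k/k! <= (e - 2)|a|^2 <= |a|^2.  Applied to each
# eigenvalue of the symmetric matrix A, this gives (6.17) in the spectral norm.
grid = [k / 100.0 for k in range(-100, 101)]
worst_ratio = max(exp_remainder(a) / (a * a) for a in grid if a != 0.0)
assert worst_ratio <= 1.0   # the worst case is a = 1, with ratio e - 2
```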

Step 8 (Proof of the energy estimate (1.52)). In this final step we prove the energy estimate (1.52).

To this end, let us compute

(6.20)

For the first term on the right-hand side of (6.20), we use the harmonic approximation theorem in its Lagrangian version (Lemma 1.14), which is applicable by Step 4. In particular, we may estimate, assuming that \(\theta \le \frac{1}{2}\),

so that by (6.4), (1.12), and Young’s inequality,

(6.21)

To estimate the error term we use the following consequence of the \(L^{\infty }\)-bound (6.1): if \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^{d+2}\), which holds in view of (1.50), then

(6.22)

With this in mind, we will bound each term in (6.14) separately. Note that

and similarly . Moreover, by (6.22), we may estimate for any

$$\begin{aligned}&\sup _{s\in [0,1]} |\nabla ^3{\varPhi }(s(tx + (1-t)y))| \, |tx + (1-t)y|^2\\&\quad \lesssim \theta ^2 \sup _{B_{3\theta }} |\nabla ^3{\varPhi }| \, \lesssim \theta ^2 \sup _{B_2} |\nabla ^3{\varPhi }| \, . \end{aligned}$$

Together with the bound (1.51) and since we assumed that \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^2\), which implies that \(|b|^2\lesssim \theta ^2\), it follows that

Hence with the bounds (6.11), (6.9), and (1.42), together with \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^2\) and \(\theta \le 1\), we obtain

(6.23)

Inserting the estimates (6.21) and (6.23) into (6.20) yields the existence of a constant \(C=C(d,\alpha )\) such that

$$\begin{aligned} {\mathscr {E}}_{\theta }({\widehat{\pi }})&\le C\left( \tau \theta ^{-(d+2)} + \frac{M}{\tau \theta ^{d+2}} + \theta ^2 \right) {\mathscr {E}}+ C_{\tau ,\theta } \left( [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 + [\nabla _{xy}c]_{\alpha }^2 \right) . \end{aligned}$$

We now fix \(\theta =\theta (d,\alpha ,\beta )\) such that \(C \theta ^2 \le \frac{1}{3} \theta ^{2\beta }\), which is possible since \(\beta <1\), and then \(\tau \) small enough so that \(C \tau \theta ^{-(d+2)} \le \frac{1}{3} \theta ^{2\beta }\). This fixes \(\epsilon _{\tau }\) given by Theorem 1.13. Finally, we may choose \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2\) small enough so that

$$\begin{aligned} C\frac{M}{\tau \theta ^{d+2}} {\mathop {=}\limits ^{(6.1)}} C \frac{C_{\infty }({\mathscr {E}}+ {\mathscr {D}})^{\frac{1}{d+2}}}{\tau \theta ^{d+2}} \le \frac{1}{3}\theta ^{2\beta }, \end{aligned}$$

yielding (1.52).
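The order in which the parameters are fixed at the end of Step 8 (first \(\theta \), then \(\tau \), then the smallness threshold on \({\mathscr {E}}+{\mathscr {D}}\)) can be mimicked numerically. In the sketch below, all numerical values for \(C\), \(C_{\infty }\), d, and \(\beta \) are hypothetical stand-ins; the point is only that the three contributions can be made to sum below \(\theta ^{2\beta }\).

```python
# Illustration only: fix theta, then tau, then the smallness of E + D.
# C, C_inf, d, beta are hypothetical stand-ins for the constants of Step 8.
C, C_inf, d, beta = 10.0, 5.0, 2, 0.75

theta = 0.5
while C * theta ** 2 > theta ** (2 * beta) / 3.0:        # C theta^2 <= theta^(2 beta)/3
    theta /= 2.0

tau = 0.5
while C * tau * theta ** (-(d + 2)) > theta ** (2 * beta) / 3.0:
    tau /= 2.0

# Smallness of E + D enters through M = C_inf (E + D)^(1/(d+2)):
# we need C M / (tau theta^(d+2)) <= theta^(2 beta)/3.
M_max = tau * theta ** (d + 2) * theta ** (2 * beta) / (3.0 * C)
total = C * (tau * theta ** (-(d + 2)) + M_max / (tau * theta ** (d + 2)) + theta ** 2)
assert total <= theta ** (2 * beta) * (1.0 + 1e-12)
```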

Step 9 (Proof of (1.53)). Let and set \((x,y) := Q^{-1}({\widehat{x}}, {\widehat{y}})\), see Lemma 1.15 for the definition of Q. Then by  A.3, there holds \((x,y) \in {{\,\mathrm{Spt}\,}}\pi \). Similarly to Step 6, we have that , provided that \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\), so that

Using the \(L^{\infty }\) bound (1.17) (notice that \(\theta \le \frac{2}{3}\)) and the fact that we assumed \({\mathscr {E}}+ {\mathscr {D}}\ll \theta ^{d+2}\), we obtain

$$\begin{aligned} |x-y| \le M = C_{\infty } ({\mathscr {E}}+ {\mathscr {D}})^{\frac{1}{d+2}} \le \theta , \end{aligned}$$

so that \((x,y) \in {\mathbb {B}}_{\theta , 2\theta }\). It thus follows that \(({\widehat{x}}, {\widehat{y}}) \in Q({\mathbb {B}}_{\theta , 2\theta })\). Following the proof of Step 6, we have

$$\begin{aligned} Q({\mathbb {B}}_{\theta , 2\theta }) \subseteq {\mathbb {B}}_{2\theta , 3\theta }, \end{aligned}$$

and conclude that . \(\square \)

Proof of \(\epsilon \)-regularity

To lighten the notation in this subsection, let us set

$$\begin{aligned} {\mathscr {H}}_R(\rho _0, \rho _1, c)&:= R^{2\alpha } \left( [\rho _0]_{\alpha , R}^2 + [\rho _1]_{\alpha , R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , R}^2\right) . \end{aligned}$$

We are now in the position to give the proof of our main \(\epsilon \)-regularity theorem, which we restate for the reader’s convenience:

Theorem 1.1

Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= -{\mathbb {I}}\). Assume further that 0 is in the interior of \(\, X \times Y\).

Let \(\pi \) be a c-optimal coupling from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_{4R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {H}}_{4R}(\rho _0, \rho _1, c) \ll _c 1, \end{aligned}$$
(7.1)

there exists a function \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \({{\,\mathrm{Spt}\,}}\pi \cap (B_R \times {\mathbb {R}}^d) \subseteq {\mathrm {graph}}\,T\), and the estimate

$$\begin{aligned}{}[\nabla T]_{\alpha , R}^2 \lesssim \frac{1}{R^{2\alpha }} \left( \frac{1}{R^{d+2}} \int _{B_{4R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {H}}_{4R}(\rho _0, \rho _1, c)\right) \end{aligned}$$
(7.2)

holds.

Proof of Theorem 1.1

To simplify notation, we write \({\mathscr {E}}_R\) for \({\mathscr {E}}_R(\pi )\) and \({\mathscr {H}}_R\) for \({\mathscr {H}}_{R}(\rho _0, \rho _1, c)\). Note that since \(\rho _0(0)=\rho _1(0)=1\) and \(R^{\alpha }\left( [\rho _0]_{\alpha , 4R}+[\rho _1]_{\alpha , 4R}\right) \) is small by (7.1), we may assume throughout the proof that \(\frac{1}{2}\le \rho _0,\rho _1 \le 2\) on \(B_{4R}\).

Step 1 (Control of the full energy at scale 2R). We show that under assumption (7.1) we can bound

$$\begin{aligned} {\mathscr {E}}_{2R} \lesssim \frac{1}{R^{d+2}} \int _{B_{4R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi . \end{aligned}$$
(7.3)

Indeed, from Remarks 1.6 and 2.2, we know that (7.1) implies the \(L^{\infty }\) bound

$$\begin{aligned} \begin{aligned} (x,y) \in \,&(B_{3R}\times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \\&\Rightarrow \quad |x-y| \lesssim R \left( \frac{1}{R^{d+2}} \int _{B_{4R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {D}}_{4R}(\rho _0,\rho _1) \right) ^{\frac{1}{d+2}}. \end{aligned} \end{aligned}$$
(7.4)

Let us now prove that

$$\begin{aligned} ({\mathbb {R}}^d \times B_{2R}) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{4R} \times B_{2R}, \end{aligned}$$
(7.5)

from which we get

$$\begin{aligned} \int _{{\mathbb {R}}^d \times B_{2R}} |y-x|^2 \, {\mathrm {d}}\pi \le \int _{B_{4R}\times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi , \end{aligned}$$

thus yielding (7.3). To this end, assume there exists \((x,y) \in (B_{4R}^c \times B_{2R}) \cap {{\,\mathrm{Spt}\,}}\pi \). Let also \(x' \in [x,y] \cap \partial B_{\frac{5}{2}R}\) and \(y' \in {\mathbb {R}}^d\) such that \((x',y') \in {{\,\mathrm{Spt}\,}}\pi \), see Figure 1. Then by (7.4), (7.1) and (1.12), we have

$$\begin{aligned} |x'-y'| \ll R. \end{aligned}$$
(7.6)

From (2.5), recalling the definition (2.3) of \(\delta \) and the fact that \(\delta \ll 1\), we get, upon writing \(y-y'=y-x'+x'-y'\),

$$\begin{aligned} 0&\le (x-x') \cdot (y-x') + (x-x')\cdot (x'-y') \\&\quad + \delta |x-x'||y-x'| + \delta |x-x'||x'-y'| \\&\le (\delta -1)|x-x'||y-x'| + (1+\delta )|x-x'||x'-y'| \\&\le |x-x'| \left( -\frac{1}{2}\frac{R}{2} + (1+\delta ) |x'-y'|\right) , \end{aligned}$$

which together with (7.6) yields a contradiction, proving (7.5).

In the following, Steps 2–6 are devoted to proving that under the assumption

$$\begin{aligned} {\mathscr {E}}_{2R} + {\mathscr {H}}_{2R} \ll 1, \end{aligned}$$
(7.7)

the following Campanato estimate holds:

$$\begin{aligned}&\underset{0<r<\frac{R}{2}}{\sup } \ \underset{x_0 \in B_{R}}{\sup } \ \frac{1}{r^{d+2+2\alpha }} \ \underset{A,b}{\inf } \int _{(B_r(x_0)\cap B_R) \times {\mathbb {R}}^d} |y-(Ax+b)|^2 \, {\mathrm {d}}\pi \nonumber \\&\quad \lesssim \frac{1}{R^{2\alpha }} \left( {\mathscr {E}}_{2R}+{\mathscr {H}}_{2R} \right) . \end{aligned}$$
(7.8)
Fig. 1: The definition of \(x'\) and \(y'\) in the proof of (7.5)

Step 2 (Getting to a normalized setting). We claim that it is enough to prove that if

$$\begin{aligned} {\mathscr {E}}_R + {\mathscr {H}}_R \ll 1 \end{aligned}$$
(7.9)

then for all \(r \le \frac{R}{2}\),

$$\begin{aligned} \frac{1}{r^{d+2}} \ \underset{A,b}{\inf } \int _{B_{r}\times {\mathbb {R}}^d} |y-(Ax+b)|^2 \, {\mathrm {d}}\pi \lesssim \frac{r^{2\alpha }}{R^{2\alpha }} \left( {\mathscr {E}}_{R} + {\mathscr {H}}_{R} \right) . \end{aligned}$$
(7.10)

Let us first notice that \(B_R(x_0) \subseteq B_{2R}\) for any \(x_0 \in B_{R}\), so that

$$\begin{aligned} {\mathscr {E}}_{x_0, R} := \frac{1}{R^{d+2}} \int _{(B_R(x_0) \cap B_R) \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi&\lesssim {\mathscr {E}}_{2R}, \\ \text {and} \quad {\mathscr {H}}_{x_0,R} := R^{2\alpha } \left( [\rho _0]_{\alpha , B_R(x_0)}^2 + [\rho _1]_{\alpha ,R}^2 + \left[ \nabla _{xy}c\right] _{\alpha , (B_R(x_0) \cap B_R) \times {\mathbb {R}}^d}^2 \right)&\lesssim {\mathscr {H}}_{2R}. \end{aligned}$$

Therefore, in view of (7.7), it is sufficient to show for all \(x_0 \in B_{R}\) that, if

$$\begin{aligned} {\mathscr {E}}_{x_0,R} + {\mathscr {H}}_{x_0,R} \ll 1, \end{aligned}$$
(7.11)

then for all \(r \le \frac{R}{2}\),

$$\begin{aligned} \frac{1}{r^{d+2}} \ \underset{A,b}{\inf } \int _{(B_r(x_0) \cap B_R) \times {\mathbb {R}}^d} |y-(Ax+b)|^2 \, {\mathrm {d}}\pi \lesssim \frac{r^{2\alpha }}{R^{2\alpha }} \left( {\mathscr {E}}_{x_0,R} + {\mathscr {H}}_{x_0,R} \right) . \end{aligned}$$
(7.12)

Performing a change of coordinates similar to that of Lemma 1.15, namely letting \(M := -\nabla _{xy}c(x_0,0)\), \(\gamma := (\rho _0(x_0) \det M)^{\frac{1}{d}}\), and \(S(y) := \gamma M^{-1} y\), and defining

$$\begin{aligned} {\widetilde{\pi }}&:= \frac{1}{\rho _0(x_0)} ({{\,\mathrm{Id}\,}}\times \,S^{-1}) _{\#} \pi , \quad {\widetilde{\rho }}_0 := \frac{\rho _0}{\rho _0(x_0)},\\&\quad {\widetilde{\rho }}_1 := \rho _1 \circ S, \quad {\widetilde{c}}(x,y) := \gamma ^{-1} c(x, S(y)), \end{aligned}$$

we have that \({\widetilde{\rho }}_0(x_0) = {\widetilde{\rho }}_1(0) = 1\) and \(\nabla _{xy}{\widetilde{c}}(x_0,0) = -{\mathbb {I}}\). Furthermore, \({\widetilde{\pi }}\) is a \({\widetilde{c}}\)-optimal coupling between \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\), and \({\widetilde{\pi }}\), \({\widetilde{\rho }}_0\), \({\widetilde{\rho }}_1\), and \({\widetilde{c}}\) still satisfy (7.11). Without loss of generality, we may thus assume \(x_0=0\); then (7.11) turns into (7.9) and (7.12) turns into (7.10).

Step 3 (First step of the iteration). Recall that (1.12) holds, i.e.,

$$\begin{aligned} {\mathscr {D}}_{R} \lesssim R^{2\alpha }\left( [\rho _0]_{\alpha ,R}^2 + [\rho _1]_{\alpha ,R}^2 \right) . \end{aligned}$$

By (a rescaling of) Lemma 2.1, there exist \({\varLambda }_0 < \infty \) and \(R_0>0\) such that for all \(R\le R_0\) for which (7.9) holds, we have

(7.13)

Let \(\beta >0\). In view of (7.9) and (7.13), we may apply Proposition 1.16 to get the existence of a symmetric matrix \(B_1\) and a vector \(b_1\) such that, defining \(D_1 := -\nabla _{xy}c_1(0, b_1)\), \(\gamma _1 := (\rho _1(b_1) \det B_1^2D_1^{-1})^{\frac{1}{d}}\) and \(\rho _{0,1}\), \(\rho _{1,1}\), \(c_1\) and \(\pi _1\) as \({\widehat{\rho }}_0\), \({\widehat{\rho }}_1\), \({\widehat{c}}\) and \({\widehat{\pi }}\) in Lemma 1.15, we get

$$\begin{aligned} {\mathscr {E}}^1 := {\mathscr {E}}_{\theta R}(\pi _1) \le \theta ^{2\beta } {\mathscr {E}}_R + C_{\beta } {\mathscr {H}}_R, \end{aligned}$$
(7.14)

and that \(\pi _1\) is a \(c_1\)-optimal coupling between \(\rho _{0,1}\) and \(\rho _{1,1}\). From (1.53) we also have the inclusion

(7.15)

so that we may fix \({\varLambda }=27\) from now on (assuming w.l.o.g. that \({\varLambda }_0\ge 27\)).

Step 4 (Iterating Proposition 1.16). We now show that we can iterate Proposition 1.16 a finite number of times.

Notice that from the estimates (1.51), (6.9) and (6.11) we have the inclusion

$$\begin{aligned} \gamma _1^{-1} D_1^{-*} B_1 B_{\theta R} + b_1 \subseteq B_R. \end{aligned}$$
(7.16)

Hence, we can bound

$$\begin{aligned} {[}\rho _{1,1}]_{\alpha , \theta R}&=\! \rho _1(b_1)^{-1} \underset{y \ne y' \in B_{\theta R}}{\sup } \frac{|\rho _1( \gamma _1^{-1} D_1^{-*} B_1 y \!+\! b_1) \!-\! \rho _1(\gamma _1^{-1} D_1^{-*} B_1 y' \!+\! b_1)|}{|y-y'|^{\alpha }} \\&\overset{(6.8)}{\lesssim } |\gamma _1^{-1} D_1^{-*} B_1|^{\alpha } \underset{y \ne y' \in B_{\theta R}}{\sup } \frac{|\rho _1( \gamma _1^{-1} D_1^{-*} B_1 y + b_1) - \rho _1(\gamma _1^{-1} D_1^{-*} B_1 y' + b_1)|}{|(\gamma _1^{-1} D_1^{-*} B_1 y + b_1) - (\gamma _1^{-1} D_1^{-*} B_1 y' + b_1)|^{\alpha }} \\&{\mathop {\le }\limits ^{(7.16)}} \gamma _1^{-\alpha } |D_1^{-1}|^{\alpha } |B_1|^{\alpha } \underset{{\widetilde{y}} \ne {\widetilde{y}}' \in B_{R}}{\sup } \ \frac{|\rho _1({\widetilde{y}}) - \rho _1({\widetilde{y}}')|}{|{\widetilde{y}} - {\widetilde{y}}'|^{\alpha }}. \end{aligned}$$

Estimates (1.51), (6.8), (6.9), and (6.11) thus yield for \(i \in \{0,1\}\) (\(\rho _{0,1}\) is treated similarly)

$$\begin{aligned}{}[\rho _{i,1}]_{\alpha , \theta R}^2 \le \left( 1+C({\mathscr {E}}_R^{\frac{1}{2}} + {\mathscr {H}}_R^{\frac{1}{2}})\right) [\rho _i]_{\alpha , R}^2, \end{aligned}$$

where C is a constant depending only on d and \(\alpha \). Using (1.47), the same kind of computation gives

$$\begin{aligned} \left[ \nabla _{xy}c_1\right] _{\alpha , \theta R}^{2} \le \left( 1+C({\mathscr {E}}_R^{\frac{1}{2}} + {\mathscr {H}}_R^{\frac{1}{2}})\right) \left[ \nabla _{xy}c\right] _{\alpha , R}^{2}, \end{aligned}$$
(7.17)

so that

$$\begin{aligned} {\mathscr {H}}^1 := {\mathscr {H}}_{\theta R}(\rho _{0,1}, \rho _{1,1}, c_1) \le \theta ^{2\alpha } \left( 1+C({\mathscr {E}}_R^{\frac{1}{2}} + {\mathscr {H}}_R^{\frac{1}{2}})\right) {\mathscr {H}}_R. \end{aligned}$$
(7.18)

Therefore, (7.9), (7.14), (7.15), and (7.18) imply that we may iterate Proposition 1.16 \(K\) times, \(K\ge 2\), to find sequences of matrices \(B_k\) and \(D_k\), a sequence of vectors \(b_k\), and a sequence of real numbers \(\gamma _k\) such that, setting for \(1 \le k \le K\)

$$\begin{aligned} \rho _{0,k}(x)&:= \rho _{0,k-1}(B_k^{-1} x), \quad \rho _{1,k}(y) := \frac{\rho _{1,k-1}(\gamma _k^{-1} D_k^{-*} B_k y + b_k)}{\rho _{1,k-1}(b_k)}, \nonumber \\ Q_k(x,y)&:= (B_k x, \gamma _k B_k^{-1} D_k^* (y-b_k)), \quad \pi _k := \det B_k \, {Q_k}_{\#} \pi _{k-1}, \end{aligned}$$
(7.19)
$$\begin{aligned} \text {and} \quad c_k(x, y)&:= \gamma _k c_{k-1}(B_k^{-1} x,\gamma _k^{-1} D_k^{-*} B_k y + b_k), \end{aligned}$$
(7.20)

we recover \(\rho _{0,k}(0) = \rho _{1,k}(0) = 1\) and \(\nabla _{xy}c_k(0,0) = -{\mathbb {I}}\), \(\pi _k\) is a \(c_k\)-optimal coupling between \(\rho _{0,k}\) and \(\rho _{1,k}\), and from (1.53) we have

(7.21)

Moreover, defining

$$\begin{aligned} {\mathscr {E}}^k := {\mathscr {E}}_{\theta ^k R}(\pi _k) \quad \text {and} \quad {\mathscr {H}}^{k} := {\mathscr {H}}_{\theta ^{k} R}(\rho _{0,k}, \rho _{1,k}, c_{k}), \end{aligned}$$
(7.22)

we have

$$\begin{aligned} {\mathscr {E}}^k&\overset{(1.52)}{\le } \theta ^{2\beta } {\mathscr {E}}^{k-1} + C_{\beta } {\mathscr {H}}^{k-1}, \end{aligned}$$
(7.23)
$$\begin{aligned} |B_k-{\mathbb {I}}|^2 + \frac{1}{(\theta ^{k-1}R)^2} |b_k|^2&\overset{(1.51)}{\lesssim } {\mathscr {E}}^{k-1} + \theta ^{2(k-1)\alpha } R^{2\alpha }\nonumber \\&\qquad \left( [\rho _{0,k-1}]_{\alpha , \theta ^{k-1}R}^2 + [\rho _{1,k-1}]_{\alpha , \theta ^{k-1}R}^2 \right) , \end{aligned}$$
(7.24)
$$\begin{aligned} |D_k-{\mathbb {I}}|^2&\overset{(6.9)}{\lesssim } \theta ^{2(k-1)\alpha } R^{2\alpha } [\nabla _{xy}c_{k-1}]_{\alpha , \theta ^{k-1}R}^2, \end{aligned}$$
(7.25)
$$\begin{aligned} |\gamma _k-1|^2&\overset{(6.11)}{\lesssim } {\mathscr {H}}^{k-1}. \end{aligned}$$
(7.26)

Arguing as for (7.18), there exists a constant \(C_1 = C_1(d, \alpha ) < \infty \) such that

$$\begin{aligned} {\mathscr {H}}^k \le \theta ^{2\alpha } \left( 1+C_1\left( ({\mathscr {E}}^{k-1})^{\frac{1}{2}} + ({\mathscr {H}}^{k-1})^{\frac{1}{2}}\right) \right) {\mathscr {H}}^{k-1}. \end{aligned}$$
(7.27)

Step 5 (Smallness at any step of the iteration). From now on, we fix \(\beta > \alpha \). Let us show by induction that there exists a constant \(C_2 = C_2(d, \alpha , \beta )<\infty \) such that for all \(K \in {\mathbb {N}}\)

$$\begin{aligned} {\mathscr {E}}^K&\le C_2 \theta ^{2K\alpha } \left( {\mathscr {E}}_R + {\mathscr {H}}_R \right) , \end{aligned}$$
(7.28)
$$\begin{aligned} {\mathscr {H}}^K&\le \theta ^{2\alpha }(1+\theta ^{K\alpha }) {\mathscr {H}}^{K-1}. \end{aligned}$$
(7.29)

This will show, together with (7.21), that we can keep on iterating Proposition 1.16. As an outcome of this induction, the estimate

$$\begin{aligned} {\mathscr {H}}^K \lesssim C_3 \theta ^{2K\alpha } {\mathscr {H}}_R \end{aligned}$$
(7.30)

holds for all \(K\in {\mathbb {N}}\) with \(C_3 := \prod _{k=1}^{+\infty } (1+\theta ^{k\alpha }) < \infty \).
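The finiteness of \(C_3\) follows from \(\log (1+x)\le x\) and the geometric series \(\sum _{k\ge 1}\theta ^{k\alpha } = \theta ^{\alpha }/(1-\theta ^{\alpha })\). A quick numerical check, with hypothetical sample values of \(\theta \) and \(\alpha \):

```python
import math

theta, alpha = 0.5, 0.5   # hypothetical sample values

# Since log(1 + x) <= x, log C_3 <= sum_{k>=1} theta^(k alpha)
#                               = theta^alpha / (1 - theta^alpha) < infinity.
upper = math.exp(theta ** alpha / (1.0 - theta ** alpha))
partial = 1.0
for k in range(1, 200):
    partial *= 1.0 + theta ** (k * alpha)
assert 1.0 < partial <= upper
```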

We set

$$\begin{aligned} C_2 := \max \left\{ \theta ^{2(\beta -\alpha )}, \frac{C_3C_{\beta }\theta ^{-2\alpha }}{1-\theta ^{2(\beta -\alpha )}}\right\} . \end{aligned}$$
(7.31)

By (7.14) and (7.18), the estimates (7.28) and (7.29) hold for \(K=1\) provided \({\mathscr {E}}_R +{\mathscr {H}}_R\) is small enough, since \(C_2 \ge \max \{\theta ^{2(\beta -\alpha )}, C_{\beta }\theta ^{-2\alpha }\}\). Assume now that (7.28) and (7.29) hold for all \(1 \le k \le K-1\). By the induction hypothesis, we have

$$\begin{aligned} {\mathscr {E}}^{K-1} \le C_2 \theta ^{2(K-1)\alpha } \left( {\mathscr {E}}_R + {\mathscr {H}}_R \right) \quad \text {and} \quad {\mathscr {H}}^{K-1} \le C_3 \theta ^{2(K-1)\alpha } {\mathscr {H}}_R. \end{aligned}$$

Combining these two estimates with (7.27) for \(k=K\) and assuming that

$$\begin{aligned} C_1 \left( C_2^{\frac{1}{2}} ({\mathscr {E}}_R^{\frac{1}{2}}+{\mathscr {H}}_R^{\frac{1}{2}}) + C_3^{\frac{1}{2}} {\mathscr {H}}_R^{\frac{1}{2}} \right) \le \theta ^{\alpha }, \end{aligned}$$

which is possible provided \({\mathscr {E}}_R+{\mathscr {H}}_R\) is small enough, we get (7.29). Similarly, by (7.23) with \(k=K\) we obtain, using that \(C_2 \ge \frac{C_3C_{\beta }\theta ^{-2\alpha }}{1-\theta ^{2(\beta -\alpha )}}\) by (7.31), the bound

$$\begin{aligned} \theta ^{-2K\alpha } {\mathscr {E}}^K&\le \left( \theta ^{2(\beta -\alpha )} C_2 + C_{\beta }C_3\theta ^{-2\alpha } \right) ( {\mathscr {E}}_R + {\mathscr {H}}_R ) \le C_2 ( {\mathscr {E}}_R + {\mathscr {H}}_R ). \end{aligned}$$

This concludes the induction.
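The mechanism of this induction can be illustrated numerically: iterating the recursions (7.23) and (7.27) with hypothetical constants in place of \(C_{\beta }\) and \(C_1\), and with \(\beta > \alpha \) as in Step 5, exhibits the decay rate \(\theta ^{2K\alpha }\) for both quantities.

```python
# Hypothetical constants standing in for C_beta in (7.23) and C_1 in (7.27);
# only the decay mechanism matters, and beta > alpha is essential.
theta, alpha, beta = 0.5, 0.3, 0.6
C_beta, C1 = 2.0, 1.0

E0 = H0 = 1e-6           # E_R and H_R, assumed small as in (7.9)
E, H = E0, H0
for K in range(1, 40):
    E, H = (theta ** (2 * beta) * E + C_beta * H,
            theta ** (2 * alpha) * (1.0 + C1 * (E ** 0.5 + H ** 0.5)) * H)
    # the outcome of the induction: decay at rate theta^(2 K alpha),
    # with a generous numerical constant
    assert E <= 1e3 * theta ** (2 * K * alpha) * (E0 + H0)
    assert H <= 1e3 * theta ** (2 * K * alpha) * H0
```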

Step 6 (Campanato estimate). We can now prove the main estimate, that is, assuming (7.9), we show that (7.10) holds.

Let

$$\begin{aligned}&{\overline{\gamma }}_k := \gamma _k \ldots \gamma _1, \quad {\overline{B}}_k := B_k \ldots B_1, \quad {\overline{D}}_k := B_k^{-1} D_k^* \ldots B_1^{-1} D_1^*, \\&{\overline{b}}_k := \sum _{i=1}^k (\gamma _k B_k^{-1} D_k^*) \ldots (\gamma _i B_i^{-1} D_i^*)b_i, \end{aligned}$$

and

$$\begin{aligned} {\overline{Q}}_k(x,y)&:= ({\overline{B}}_k x, {\overline{\gamma }}_k {\overline{D}}_k y - {\overline{b}}_k). \end{aligned}$$
(7.32)

We see that, recalling (7.19) and noticing that \({\overline{Q}}_k = Q_k \circ \cdots \circ Q_1\),

$$\begin{aligned} \pi _k = \det {\overline{B}}_k {{\overline{Q}}_k}_{\#} \pi \quad \text {and} \quad \rho _{0,k}(x) = \rho _0({\overline{B}}_k^{-1}x). \end{aligned}$$
(7.33)
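That \({\overline{Q}}_k = Q_k \circ \cdots \circ Q_1\) indeed has the affine form (7.32) with the stated products can be verified in the one-dimensional case, where \(D^* = D\) and all factors commute; the recursive update of \({\overline{b}}_k\) used below is an equivalent rewriting of the sum defining it, and the sample parameters are arbitrary.

```python
# 1-d sanity check that composing the maps Q_k of (7.19) yields (7.32).
# Each tuple is (B_k, D_k, gamma_k, b_k); the values are arbitrary samples.
params = [(1.1, 0.9, 0.95, 0.2), (0.8, 1.2, 1.05, -0.1), (1.05, 1.0, 0.9, 0.05)]

def Q(p, xy):
    B, D, g, b = p
    x, y = xy
    return (B * x, g * (D / B) * (y - b))

x, y = 0.3, -0.7
z = (x, y)
for p in params:
    z = Q(p, z)

B_bar = g_bar = D_bar = 1.0
b_bar = 0.0
for (B, D, g, b) in params:             # k = 1, 2, 3
    B_bar *= B
    g_bar *= g
    D_bar *= D / B
    b_bar = g * (D / B) * (b_bar + b)   # b_bar_k = gamma_k B_k^{-1} D_k (b_bar_{k-1} + b_k)

assert abs(z[0] - B_bar * x) < 1e-12
assert abs(z[1] - (g_bar * D_bar * y - b_bar)) < 1e-12
```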

Moreover, from (7.24), (7.28), and (7.29), we have the estimate

$$\begin{aligned} |{\overline{B}}_k-{\mathbb {I}}|^2 \lesssim {\mathscr {E}}_R + {\mathscr {H}}_R \ll 1, \end{aligned}$$
(7.34)

so that

$$\begin{aligned} B_{\frac{1}{2}\theta ^kR} \times {\mathbb {R}}^d \subseteq {\overline{B}}_k^{-1}(B_{\theta ^kR}) \times {\mathbb {R}}^d = {\overline{Q}}_k^{-1}(B_{\theta ^kR} \times {\mathbb {R}}^d). \end{aligned}$$
(7.35)

Similarly, from (7.24), (7.25) and (7.26),

$$\begin{aligned} |{\overline{\gamma }}_k-1|^2 + |{\overline{D}}_k-{\mathbb {I}}|^2 \lesssim {\mathscr {E}}_R + {\mathscr {H}}_R \ll 1. \end{aligned}$$
(7.36)

Let us now compute

$$ \begin{aligned}&\frac{1}{\left( \frac{1}{2} \theta ^kR\right) ^{d+2}} \, \inf _{A,b} \int _{B_{\frac{1}{2} \theta ^kR}\times {\mathbb {R}}^d} |y-(Ax+b)|^2 \, {\mathrm {d}}\pi \\&\quad \overset{(7.35)}{\lesssim } \frac{1}{(\theta ^kR)^{d+2}} \int _{{\overline{Q}}_k^{-1}(B_{\theta ^kR}\times {\mathbb {R}}^d)} |y-{\overline{\gamma }}_k^{-1}{\overline{D}}_k^{-1}{\overline{B}}_k x - {\overline{\gamma }}_k^{-1}{\overline{D}}_k^{-1}{\overline{b}}_k|^2 \, {\mathrm {d}}\pi \\&\quad \overset{(7.33)}{=} \frac{(\det {\overline{B}}_k)^{-1}}{(\theta ^kR)^{d+2}} \int _{B_{\theta ^kR}\times {\mathbb {R}}^d} |{\overline{\gamma }}_k^{-1}{\overline{D}}_k^{-1} (y-x)|^2 \, {\mathrm {d}}\pi _k \\&\quad \overset{(7.34) \& (7.36)}{\lesssim } \frac{1}{(\theta ^kR)^{d+2}} \int _{B_{\theta ^kR}\times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi _k \overset{(7.22)}{=} {\mathscr {E}}^k. \end{aligned}$$

By (7.28), we obtain

$$\begin{aligned} \frac{1}{\left( \frac{1}{2} \theta ^kR\right) ^{d+2}} \, \underset{A,b}{\inf } \int _{B_{\frac{1}{2} \theta ^kR}\times {\mathbb {R}}^d} |y-(Ax+b)|^2 \, {\mathrm {d}}\pi \lesssim \theta ^{2k\alpha } \left( {\mathscr {E}}_R + {\mathscr {H}}_R \right) , \end{aligned}$$

from which (7.10) follows, concluding the proof of (7.8).

Step 7 (\({{\,\mathrm{Spt}\,}}\pi \) is contained in the graph of a function T within \(B_R \times {\mathbb {R}}^d\)). We claim that (7.8) implies the existence of a function \(T:B_R \rightarrow Y\) such that

$$\begin{aligned} (B_R \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathrm {graph}}\, T. \end{aligned}$$
(7.37)

In the following, we abbreviate

$$\begin{aligned} \llbracket \pi \rrbracket _{\alpha }^2&:= \sup _{0<r<\frac{R}{2}} \sup _{x_0 \in B_{R}} \frac{1}{r^{d+2+2\alpha }} \inf _{A,b} \int _{(B_r(x_0)\cap B_R)\times {\mathbb {R}}^d} |y-(Ax+b)|^2 \,{\mathrm {d}}\pi . \end{aligned}$$

To prove the claim, fix \(x_0 \in B_R\) and notice that (7.8) implies that for any \(r>0\) small enough, there holds

$$\begin{aligned}&\frac{1}{r^{d+2}} \inf _{A,b} \int _{B_r(x_0) \times {\mathbb {R}}^d} |y-(Ax+b)|^2 \,{\mathrm {d}}\pi \nonumber \\&\quad \lesssim \frac{r^{2\alpha }}{R^{2\alpha }} \left( \frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi +{\mathscr {H}}_{4R} \right) . \end{aligned}$$
(7.38)

Step 7.A. It is easy to see that the infimum in (7.38) is attained at some \(A_r = A_r(x_0)\) and \(b_r = b_r(x_0)\). Analogously to [5, Lemma 3.IV], one can show that there exist a matrix \(A_0 = A_0(x_0)\) and a vector \(b_0 = b_0(x_0)\) such that \(A_r \rightarrow A_0\) and \(b_r \rightarrow b_0\) as \(r\rightarrow 0\) (uniformly in \(x_0\)) with rates

$$\begin{aligned} |A_r - A_0| \lesssim \llbracket \pi \rrbracket _{\alpha } r^{\alpha } \quad \text {and} \quad |b_r - b_0| \lesssim \llbracket \pi \rrbracket _{\alpha } r^{\alpha +1}. \end{aligned}$$
(7.39)

We refer the reader to Appendix 1 for a proof of the convergences and (7.39).
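The rates (7.39) come from telescoping over dyadic radii: once consecutive minimizers differ by at most \(C 2^{-k\alpha }\), the sequence is Cauchy with the stated rate. A toy numerical version of this mechanism, with extremal increments (illustration only):

```python
# Toy dyadic telescoping behind (7.39): if consecutive minimizers satisfy
# |a_{k+1} - a_k| <= C 2^(-k alpha), the sequence is Cauchy with
# |a_k - a_inf| <= C 2^(-k alpha) / (1 - 2^(-alpha)).  Extremal increments.
C, alpha = 1.0, 0.5
a = [0.0]
for k in range(60):
    a.append(a[-1] + C * 2.0 ** (-k * alpha))
a_inf = a[-1]
rate_const = C / (1.0 - 2.0 ** (-alpha))
for k in range(50):
    assert abs(a[k] - a_inf) <= rate_const * 2.0 ** (-k * alpha)
```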

Step 7.B. We claim that

$$\begin{aligned} \frac{1}{r^d} \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A_0 x_0 + b_0)|^2\,{\mathrm {d}}\pi \rightarrow 0 \quad \text {as } r \searrow 0. \end{aligned}$$
(7.40)

Indeed, we can split

$$\begin{aligned} \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A_0 x_0 + b_0)|^2\,{\mathrm {d}}\pi&\lesssim \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A_r x + b_r)|^2\,{\mathrm {d}}\pi \nonumber \\&\quad + \int _{B_r(x_0)\times {\mathbb {R}}^d} |(A_r-A_0) x + (b_r- b_0)|^2\,{\mathrm {d}}\pi \nonumber \\&\quad + \int _{B_r(x_0)\times {\mathbb {R}}^d} |A_0 (x-x_0)|^2\,{\mathrm {d}}\pi . \end{aligned}$$
(7.41)

By definition of \(A_r, b_r\), we have

$$\begin{aligned} \frac{1}{r^d} \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A_r x + b_r)|^2\,{\mathrm {d}}\pi&= \frac{1}{r^d} \inf _{A,b} \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A x + b)|^2\,{\mathrm {d}}\pi \\&\lesssim \llbracket \pi \rrbracket _{\alpha }^2 \, r^{2\alpha +2}. \end{aligned}$$

Using (7.39), \(\rho _0\le 2\), and \(x_0\in B_R\), it follows that

$$\begin{aligned}&\frac{1}{r^d} \int _{B_r(x_0)\times {\mathbb {R}}^d} |(A_r-A_0) x + (b_r- b_0)|^2\,{\mathrm {d}}\pi \\&\quad = \frac{1}{r^d} \int _{B_r(x_0)} |(A_r-A_0) x + (b_r- b_0)|^2\,\rho _0(x){\mathrm {d}}x\\&\quad \lesssim |A_r-A_0|^2 R^2 + |b_r - b_0|^2 \\&\quad \lesssim \llbracket \pi \rrbracket _{\alpha }^2 \, r^{2\alpha } (R^2 + r^2). \end{aligned}$$

Finally, the last term in (7.41) is estimated by

$$\begin{aligned} \frac{1}{r^d} \int _{B_r(x_0)\times {\mathbb {R}}^d} |A_0 (x-x_0)|^2\,{\mathrm {d}}\pi&= \frac{1}{r^d} \int _{B_r(x_0)} |A_0 (x-x_0)|^2\,\rho _0(x){\mathrm {d}}x \lesssim r^2. \end{aligned}$$

Letting \(r\rightarrow 0\) in the above estimates proves the claim (7.40).

Step 7.C. By disintegration, there exists a family of measures \(\{\pi _x\}_{x\in X}\) on Y such that

$$\begin{aligned}&\frac{1}{r^d} \int _{B_r(x_0)\times {\mathbb {R}}^d} |y- (A_0 x_0 + b_0)|^2\,{\mathrm {d}}\pi \nonumber \\&\quad = \frac{1}{r^d} \int _{B_r(x_0)} \int |y- (A_0 x_0 + b_0)|^2\,\pi _x({\mathrm {d}}y) \, \rho _0(x){\mathrm {d}}x. \end{aligned}$$
(7.42)

Since the left-hand side of (7.42) tends to zero as \(r\rightarrow 0\) by Step 7.B, it follows that if \(x_0\) is a Lebesgue point, we must have

$$\begin{aligned} \int |y- (A_0 x_0 + b_0)|^2\,\pi _{x_0}({\mathrm {d}}y) = 0, \end{aligned}$$

hence \(\pi _{x_0} = \delta _{A_0 x_0 + b_0}\).

Step 7.D. For any Lebesgue point \(x_0 \in B_R\), define \(T(x_0):= A_0(x_0) x_0 + b_0(x_0)\). Then the previous Step 7.C shows that

$$\begin{aligned} \pi \big \lfloor _{B_R \times {\mathbb {R}}^d} = ({\mathrm {Id}} \times T)_{\#}\rho _0, \end{aligned}$$

that is (7.37).

Step 8. By boundedness of \(\rho _0\), (7.8) implies the bound

$$\begin{aligned}&\sup _{0<r<\frac{R}{2}} \sup _{x_0 \in B_{R}} \frac{1}{r^{d+2+2\alpha }} \inf _{A,b} \int _{B_r(x_0)\cap B_R} |T(x)-(Ax+b)|^2 \,{\mathrm {d}}x \\&\quad \lesssim \frac{1}{R^{2\alpha }} \left( \frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi +{\mathscr {H}}_{4R} \right) , \end{aligned}$$

which by means of Campanato’s theory [5] proves that \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) and that the Hölder seminorm of \(\nabla T\) satisfies (7.2). \(\square \)

Remark 7.1

The deterministic structure of the c-optimal coupling, that is, the existence of T such that \(\pi = ({\mathrm {Id}}\times T)_{\#}\rho _0\), is a classical result in optimal transportation. If we had used this result, the proof would have become shorter, as Step 7 would not have been needed.

Before we give the proof of Corollary 1.3, let us remark that one can show the following variant of our qualitative \(L^{\infty }\) bound on the displacement (Lemma 2.1):

Lemma 7.2

Assume that the cost function c satisfies (C1)–(C4) and that \(\nabla _x c(0,0)=0\). Let u be a c-convex function. There exist \({\varLambda }_0 < \infty \) and \(R_0'>0\) such that for all \(R\le R_0'\) for which \(\frac{1}{R^2} \left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})} \le 1\) we have

$$\begin{aligned} \mathop {{\mathrm{esssup}}}\limits _{x\in B_{4R}} \left| {{\,\mathrm{c-exp}\,}}_x(\nabla u(x)) \right| \le {\varLambda }_0 R. \end{aligned}$$
(7.43)

Proof

Since u is c-convex, it is differentiable a.e. For any \(x \in B_{4R}\) such that \(\nabla u(x)\) exists, let \(y = {{\,\mathrm{c-exp}\,}}_x(\nabla u(x))\), that is,

$$\begin{aligned} \nabla u(x) + \nabla _x c(x,y) = 0. \end{aligned}$$
(7.44)

Let \({\widetilde{c}}\) be defined as in (2.16). Then, using \(\nabla _x c(0,0) = 0\), we have

$$\begin{aligned} -\nabla _x{\widetilde{c}}(x,y)&= -\nabla _x c(x,y) + \nabla _x c(x,0) {\mathop {=}\limits ^{(7.44)}} \nabla u(x) + \nabla _x c(x,0) - \nabla _x c(0,0) \\&= \nabla u(x) -x + x + \int _{0}^1 \nabla _{xx} c(t x,0)\, x \,{\mathrm {d}}t, \end{aligned}$$

so that

$$\begin{aligned} |\nabla _x{\widetilde{c}}(x,y)| \le |\nabla u(x) -x| + |x| + \Vert \nabla _{xx} c\Vert _{{\mathscr {C}}^0(X\times Y)} |x|. \end{aligned}$$

Being c-convex, the function u, and therefore also the function \(x\mapsto u(x) - \frac{1}{2}|x|^2\), is semi-convex, which implies that

$$\begin{aligned} \mathop {{\mathrm{esssup}}}\limits _{x\in B_{4R}} |\nabla u(x) - x| \lesssim \frac{1}{R} \sup _{x\in B_{8R}} |u-\tfrac{1}{2}|x|^2|, \end{aligned}$$
(7.45)

see Lemma A.6 in the appendix. By the closeness assumption on u and (C1) we may therefore bound

$$\begin{aligned} |\nabla _x{\widetilde{c}}(x,y)| \le \lambda R. \end{aligned}$$
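In more detail: combining the previous estimate with (7.45), the assumption \(\frac{1}{R^2} \left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})} \le 1\), and \(|x|\le 4R\), we have

$$\begin{aligned} |\nabla _x{\widetilde{c}}(x,y)| \le |\nabla u(x) -x| + \left( 1 + \Vert \nabla _{xx} c\Vert _{{\mathscr {C}}^0(X\times Y)}\right) |x| \lesssim \frac{1}{R} \left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})} + R \lesssim R, \end{aligned}$$

which yields the stated bound \(\lambda R\) with a constant depending on c only through the assumptions (C1)–(C4).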

Steps 2 and 3 of the proof of Lemma 2.1 then imply that there exist \({\varLambda }_0<\infty \) and \(R_0'>0\) (depending on c only through assumptions (C1)–(C4)) such that for all \(R \le R_0'\) we have

$$\begin{aligned} |\nabla _x{\widetilde{c}}(x,y)| \le \lambda R \quad \Rightarrow \quad |y| \le {\varLambda }_0 R, \end{aligned}$$

that is, (7.43) holds. \(\square \)

Proof of Corollary 1.3

By Lemma 7.2 there exist \({\varLambda }_0<\infty \) and \(R_0'>0\), depending only on the qualitative assumptions (C1)–(C4) on c, such that for all \(R\le R_0'\) for which (1.7) holds, we have

$$\begin{aligned} \Vert T_u\Vert _{L^{\infty }(B_{4R})} \le {\varLambda }_0 R. \end{aligned}$$
(7.46)

We claim that

$$\begin{aligned} \frac{1}{R} \left\| x - T_u \right\| _{L^{\infty }(B_{4R})} \lesssim \frac{1}{R^2} \left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})}+ R^{\alpha } \left( [\nabla _{xy}c]_{\alpha ,4R} + [\nabla _{xx}c]_{\alpha ,4R}\right) , \end{aligned}$$
(7.47)

which immediately implies that

$$\begin{aligned}&\frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi = \frac{1}{R^{d+2}} \int _{B_{4R}} |x-T_u(x)|^2\,\rho _0(x){\mathrm {d}}x \nonumber \\&\quad \lesssim \frac{1}{R^4}\left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})}^2 + R^{2\alpha } \left( [\nabla _{xy}c]_{\alpha ,4R}^2 + [\nabla _{xx}c]_{\alpha ,4R}^2\right) \ll 1. \end{aligned}$$
(7.48)

In particular, it follows from Theorem 1.1 that there exists a potentially smaller scale \(R_0\le R_0'\) such that for all \(R\le R_0\) for which (1.7) holds, we have that \(T_u \in {\mathscr {C}}^{1,\alpha }(B_R)\) and \(\nabla T_u\) satisfies the bound (1.5). Applying (7.48) once more, we see that (1.8) holds.

To prove the claim (7.47), we appeal to semi-convexity of the c-convex function u (which implies semi-convexity of the function \(x\mapsto u(x) - \frac{1}{2}|x|^2\)), in particular Lemma A.6, to bound

$$\begin{aligned} \left\| x - T_u \right\| _{L^{\infty }(B_{4R})}&\le \Vert x-\nabla u\Vert _{L^{\infty }(B_{4R})} + \Vert \nabla u - T_u\Vert _{L^{\infty }(B_{4R})} \\&\lesssim \frac{1}{R} \Vert u-\tfrac{1}{2}|\cdot |^2\Vert _{{\mathscr {C}}^0(B_{8R})} + \Vert \nabla u - T_u\Vert _{L^{\infty }(B_{4R})}. \end{aligned}$$

It remains to estimate the latter term. To this end, notice that for a.e. \(x\in B_{4R}\) we have \(\nabla u(x) = -\nabla _x c(x, T_u(x))\), so that with the normalization assumption \(\nabla _x c(0,0) = 0\) we may bound

$$\begin{aligned} |\nabla u(x) - T_u(x)|&= |\nabla _x c(x, T_u(x)) + T_u(x)| \\&\le |\nabla _x c(x, T_u(x)) - \nabla _x c(x,0) + T_u(x)| + |\nabla _x c(x,0) - \nabla _x c(0,0)| \\&\le \int _0^1 |(\nabla _{xy}c (x,s T_u(x)) + {\mathbb {I}}) T_u(x)|\,{\mathrm {d}}s + \int _0^1 |\nabla _{xx}c(t x, 0) x|\,{\mathrm {d}}t. \end{aligned}$$

It now follows with (7.46), definition (1.3), and \(\nabla _{xx}c(0,0) = 0\), \(\nabla _{xy} c(0,0) = -{\mathbb {I}}\), that

$$\begin{aligned} |\nabla u(x) - T_u(x)|&\le [\nabla _{xy}c]_{\alpha ,4R} (|x|^{\alpha } + |T_u(x)|^{\alpha }) |T_u(x)| + [\nabla _{xx}c]_{\alpha ,4R} |x|^{\alpha +1} \nonumber \\&{\mathop {\le }\limits ^{(7.46)}} C_{{\varLambda }_0} R^{\alpha }\left( [\nabla _{xy}c]_{\alpha ,4R} + [\nabla _{xx}c]_{\alpha ,4R}\right) R. \end{aligned}$$
(7.49)

In view of (1.7), we may assume

$$\begin{aligned} C_{{\varLambda }_0} R^{\alpha }\left( [\nabla _{xy}c]_{\alpha ,4R} + [\nabla _{xx}c]_{\alpha ,4R}\right) \le 1, \end{aligned}$$

so that

$$\begin{aligned} |\nabla u(x) - T_u(x)| \le R. \end{aligned}$$
(7.50)

Using that, by Lemma A.6 and the smallness assumption (1.7), we have

$$\begin{aligned} |\nabla u(x) - x| \lesssim \frac{1}{R} \Vert u-\tfrac{1}{2}|\cdot |^2\Vert _{{\mathscr {C}}^0(B_{8R})} \lesssim R, \end{aligned}$$

and writing \(T_u(x) = (T_u(x)-\nabla u(x)) + (\nabla u(x) -x) +x\), the estimate (7.49) turns into

$$\begin{aligned}&|\nabla u(x) - T_u(x)| \\&\quad \lesssim [\nabla _{xy}c]_{\alpha ,4R}(R^{\alpha } + |\nabla u(x)-T_u(x)|^{\alpha })(|\nabla u(x)-T_u(x)|+R) + R^{1+\alpha }[\nabla _{xx}c]_{\alpha ,4R} \\&\overset{(7.50)}{\lesssim } R^{1+\alpha }\left( [\nabla _{xy}c]_{\alpha ,4R} + [\nabla _{xx}c]_{\alpha ,4R}\right) . \end{aligned}$$

This proves the claimed inequality (7.47). \(\square \)

\(\epsilon \)-Regularity for Almost-Minimizers

In this section we give a sketch of the proof of Theorem 1.18. One of the main differences compared to the situation of Theorem 1.1 is that our assumptions do not allow us to prove an \(L^{\infty }\) bound on the displacement (which followed from (c-)monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). However, almost-minimality (on all scales) allows us to obtain an \(L^p\) bound for arbitrarily large \(p<\infty \).

Proposition 8.1

Assume that \(\rho _0, \rho _1 \in {\mathscr {C}}^{0,\alpha }\) with \(\rho _0(0) = \rho _1(0) = 1\). Let T be an almost-minimizing transport map from \(\mu =\rho _0\,{\mathrm {d}}x\) to \(\nu =\rho _1\,{\mathrm {d}}y\) with \({\varDelta }_r \le 1\). Assume further that T is invertible. Then there exists a radius \(R_1 = R_1(\rho _0, \rho _1) >0\) such that for any \(6R\le R_1\),

$$\begin{aligned} {\mathscr {E}}_{6R}(\pi _T) + R^{\alpha } \left( [\rho _0]_{\alpha , 6R} + [\rho _1]_{\alpha , 6R} \right) \ll 1 \end{aligned}$$
(8.1)

implies that for any \(p < \infty \),

$$\begin{aligned}&\frac{1}{R} \left( \frac{1}{R^d} \int _{B_{2R}} |T(x)-x|^p \, \mu ({\mathrm {d}}x) + \frac{1}{R^d} \int _{B_{2R}} |T^{-1}(y)-y|^p \, \nu ({\mathrm {d}}y) \right) ^{\frac{1}{p}}\nonumber \\&\qquad \lesssim _p {\mathscr {E}}_{6R}(\pi _T)^{\frac{1}{d+2}}. \end{aligned}$$
(8.2)

The scale \(R_1\) below which the result holds depends on the global Hölder semi-norms \([\rho _0]_{\alpha }\) and \([\rho _1]_{\alpha }\) of the densities and on the requirement that \(B_{R_1} \subset {{\,\mathrm{Spt}\,}}\rho _0 \cap {{\,\mathrm{Spt}\,}}\rho _1\).

The proof of Proposition 8.1 is given in Appendix 1. Note that since \(T^{-1}\) is also almost-minimizing, the \(L^p\) bound for \(T^{-1}\) follows from applying Proposition D.1 to \(T^{-1}\). The \(L^p\) estimate (for arbitrarily large \(p<\infty \)) allows us to split the particle trajectories into two groups:

  • good trajectories that satisfy an \(L^{\infty }\) bound on the displacement, corresponding to starting points in the set

    $$\begin{aligned} {\mathscr {G}} = \{x\in B_{2R}\cup T^{-1}(B_{2R}): |T(x) - x| \le M R \}, \end{aligned}$$
    (8.3)

    where \(M := ({\mathscr {E}}_{6R}(\pi _T) + {\mathscr {D}}_{6R}(\mu , \nu ))^{\alpha }\) for some \(\alpha \in (0, \frac{1}{d+2})\), which we fix in what follows, and

  • bad trajectories that are too long, corresponding to

    $$\begin{aligned} {\mathscr {B}} = \{x\in B_{2R}\cup T^{-1}(B_{2R}): |T(x) - x| > M R \}. \end{aligned}$$
    (8.4)

Due to the \(L^p\) bound, the energy carried by bad trajectories is superlinearly small: By definition of \({\mathscr {E}}_{6R}(\pi _T)\) and M,

$$\begin{aligned} \frac{1}{R^d} \mu ({\mathscr {B}}) \lesssim \frac{1}{M^2} \frac{1}{R^{d+2}} \int _{B_{2R}\cup T^{-1}(B_{2R})} |T(x) - x|^2\,\mu ({\mathrm {d}}x) \lesssim {\mathscr {E}}_{6R}(\pi _T)^{1-2\alpha }, \end{aligned}$$
(8.5)

hence

$$\begin{aligned}&\frac{1}{R^{d+2}} \int _{{\mathscr {B}}} |T(x) - x|^2\,\mu ({\mathrm {d}}x)\\&\quad \le \left( \frac{\mu ({\mathscr {B}})}{R^d} \right) ^{1-\frac{2}{p}} \left( \frac{1}{R^{d+p}} \int _{B_{2R}\cup T^{-1}(B_{2R})} |T(x) - x|^{p}\, \mu ({\mathrm {d}}x) \right) ^{\frac{2}{p}} \\&\quad \lesssim {\mathscr {E}}_{6R}(\pi _T)^{\frac{2}{d+2} + (1-2\alpha )\left( 1- \frac{2}{p} \right) }, \end{aligned}$$

from which we see that, given \(\alpha \in (0,\frac{1}{d+2})\), we may choose p large enough so that the exponent is larger than 1. In particular, for any \(\tau >0\) we may bound

$$\begin{aligned} \frac{1}{R^{d+2}}\int _{{\mathscr {B}}} |T(x) - x|^2\,\mu ({\mathrm {d}}x) \le \tau {\mathscr {E}}_{6R}(\pi _T), \end{aligned}$$
(8.6)

provided \({\mathscr {E}}_{6R}(\pi _T)\ll 1\).
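The choice of p can be made explicit: the exponent \(\frac{2}{d+2} + (1-2\alpha )\left( 1-\frac{2}{p}\right) \) exceeds 1 if and only if

$$\begin{aligned} (1-2\alpha )\left( 1-\frac{2}{p}\right) > \frac{d}{d+2}, \quad \text {that is,} \quad p > \frac{2(d+2)(1-2\alpha )}{(d+2)(1-2\alpha )-d}, \end{aligned}$$

and the denominator is positive precisely because \(\alpha < \frac{1}{d+2}\) guarantees \((d+2)(1-2\alpha ) > d\).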

Once the bad trajectories have been removed, the good trajectories can be treated as before. More precisely, if we restrict the coupling \(\pi _T\) to the set \({\mathscr {G}}\times T({\mathscr {G}})\), then the resulting coupling is still deterministic and almost-minimizing with respect to quadratic cost (given its own boundary conditions). In particular, the global estimate

$$\begin{aligned} \int _{{\mathscr {G}}} |T(x) - x|^2\,\mu ({\mathrm {d}}x) \le \int _{{\mathscr {G}}} |{\widetilde{T}}(x) - x|^2\,\mu ({\mathrm {d}}x) + {\varDelta }_{2R} \end{aligned}$$

holds for all \({\widetilde{T}}\) with \({\widetilde{T}}_{\#}\mu \lfloor _{{\mathscr {G}}} = T_{\#}\mu \lfloor _{{\mathscr {G}}} = \nu \lfloor _{T({\mathscr {G}})}\). This allows for a passage from the Lagrangian to the Eulerian point of view as in Lemma 1.12. Moreover, the measures \(\mu \lfloor _{{\mathscr {G}}}\) and \(\nu \lfloor _{T({\mathscr {G}})}\), as well as the coupling \(\pi _{T}\lfloor _{{\mathscr {G}} \times T({\mathscr {G}})}\), satisfy the assumptions of the harmonic approximation Theorem 1.13. Hence, given \(0<\tau \ll 1\), there exist a threshold \(\epsilon _{\tau }>0\), constants \(C,C_{\tau }<\infty \), and a harmonic gradient field \(\nabla {\varPhi }\) (defined through (1.40)) satisfying (1.42), such that (1.41) holds for the Eulerian description \((\rho ,j)\) of \(\pi _{T}\lfloor _{{\mathscr {G}} \times T({\mathscr {G}})}\), provided \({\mathscr {E}}_{6R}(\pi _T) + R^{2\alpha } \left( [\rho _0]_{\alpha , 6R}^2 + [\rho _1]_{\alpha , 6R}^2\right) \le \epsilon _{\tau }\). The harmonic gradient field allows us to define the affine change of coordinates from Lemma 1.15 with \(B={\mathrm {e}}^{\frac{A}{2}}\), where \(A=\nabla ^2{\varPhi }(0)\) and \(b = \nabla {\varPhi }(0)\) satisfy (1.51) (and \(D = {\mathbb {I}}\)), and thus to obtain a new coupling \(\widehat{\pi _T}= \pi _{{\widehat{T}}}\) between the measures \({\widehat{\mu }}\) and \({\widehat{\nu }}\) from the (full) coupling \(\pi _{T}\).

We can now use the harmonic approximation result Theorem 1.13 together with the harmonic estimates (1.42) to bound

For the bad trajectories we use the estimate (8.6) together with the bound

to obtain, recalling that \(\alpha< \frac{1}{d+2} < \frac{1}{2}\),

(8.7)

In particular, for \(\theta \in (0,1)\) we can write

so that, using the identity \(\gamma B^{-*}(y-b) - Bx = \gamma B^{-*} (y-x-(Ax+b)) - \gamma B^{-*} (B^*B-{\mathbb {I}}-A) x + (\gamma -1) Bx\), the estimate (8.7) together with (1.42) and (1.51) implies that for any \(\beta \in (0,1)\) there exist \(0<\theta \ll 1\) and \(C_{\beta }<\infty \) such that

$$\begin{aligned} {\mathscr {E}}_{\theta R}(\pi _{{\widehat{T}}}) \le \theta ^{2\beta } {\mathscr {E}}_{6R}(\pi _T) + C_{\beta } \left( R^{2\alpha } [\rho _0]_{\alpha , 6R}^2 + R^{2\alpha } [\rho _1]_{\alpha , 6R}^2 + {\varDelta }_{2R} \right) , \end{aligned}$$

which implies a one-step-improvement result for the case of general almost-minimizers.

It remains to show that the transformed coupling \(\pi _{{\widehat{T}}}\) is still almost-minimal on all small scales in the sense of Definition 1.17. To this end, let \(r\le R_1\), \((\widehat{x_0}, \widehat{y_0}) \in {{\,\mathrm{Spt}\,}}\pi _{{\widehat{T}}}\), and \(\pi _{\widehat{T'}} \in {\varPi }({\widehat{\mu }},{\widehat{\nu }})\) with \({{\,\mathrm{Spt}\,}}(\pi _{{\widehat{T}}}-\pi _{\widehat{T'}}) \subset (B_r(\widehat{x_0})\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(\widehat{y_0}))\). Then, writing \(\widehat{T'}({\widehat{x}}) = \gamma B^{-*} (T'(B^{-1}{\widehat{x}}) - b)\), where \(T'_{\#}\mu = \nu \), one sees that

$$\begin{aligned} \int |{\widehat{y}} - {\widehat{x}}|^2\,{\mathrm {d}}(\pi _{{\widehat{T}}} - \pi _{\widehat{T'}})&= -2 \int ({\widehat{T}}({\widehat{x}}) - \widehat{T'}({\widehat{x}}))\cdot {\widehat{x}}\,{\widehat{\mu }}({\mathrm {d}}{\widehat{x}}) \\&= -2 |\det B| \int \gamma B^{-*}(T(x) - T'(x))\cdot Bx \,\mu ({\mathrm {d}}x) \\&= - 2 \gamma |\det B| \int (T(x) - T'(x))\cdot x \,\mu ({\mathrm {d}}x) \\&= \gamma |\det B| \int |y-x|^2\,{\mathrm {d}}(\pi _{T} - \pi _{T'}). \end{aligned}$$

Note that

$$\begin{aligned} {{\,\mathrm{Spt}\,}}(\pi _T - \pi _{T'}) = Q^{-1} {{\,\mathrm{Spt}\,}}(\pi _{{\widehat{T}}} - \pi _{\widehat{T'}})&\subset (B_{|B|r}(x_0) \times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_{\frac{|B|}{\gamma }r}(y_0)), \end{aligned}$$

where \(x_0 = B\widehat{x_0}\) and \(y_0 = \gamma ^{-1}B \widehat{y_0} + b\). Since \(\pi _T\) is almost-minimizing, it follows that

$$\begin{aligned} \int |{\widehat{y}} - {\widehat{x}}|^2\,{\mathrm {d}}(\pi _{{\widehat{T}}} - \pi _{\widehat{T'}})&\le \gamma |\det B| (\max \{1, \gamma ^{-1}\} |B| r)^{d+2} {\varDelta }_{\max \{1, \gamma ^{-1}\} |B| r}, \end{aligned}$$

hence \(\pi _{{\widehat{T}}}\) is almost-minimizing among deterministic couplings with rate

$$\begin{aligned} {\widehat{{\varDelta }}}_r = \gamma |\det B| (\max \{1, \gamma ^{-1}\} |B|)^{d+2} {\varDelta }_{\max \{1, \gamma ^{-1}\} |B| r}. \end{aligned}$$

Assuming that \({\varDelta }_r = C r^{2\alpha }\), the bounds on \(\gamma \) and B from (1.51) give

$$\begin{aligned} {\widehat{{\varDelta }}}_{r} \le \left( 1+ C ({\mathscr {E}}_{R_1}^{\frac{1}{2}} + R_1^{\alpha }[\rho _0]_{\alpha ,R_1} + R_1^{\alpha }[\rho _1]_{\alpha , R_1})\right) {\varDelta }_r, \end{aligned}$$
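Indeed, by the homogeneity \({\varDelta }_{cr} = c^{2\alpha } {\varDelta }_r\), the previous formula for \({\widehat{{\varDelta }}}_r\) becomes

$$\begin{aligned} {\widehat{{\varDelta }}}_r = \gamma |\det B| \left( \max \{1, \gamma ^{-1}\} |B|\right) ^{d+2+2\alpha } {\varDelta }_{r}, \end{aligned}$$

and by (1.51) the prefactor deviates from 1 by at most \(C ({\mathscr {E}}_{R_1}^{\frac{1}{2}} + R_1^{\alpha }[\rho _0]_{\alpha ,R_1} + R_1^{\alpha }[\rho _1]_{\alpha , R_1})\).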

in particular, the rate \({\varDelta }_r\) exhibits the same behaviour under the change of variables as the Hölder seminorm of \(\nabla _{xy}c\) in (7.17), which shows that the one-step-improvement can be iterated down to arbitrarily small scales, yielding the \(C^{1,\alpha }\)-regularity of T in a ball with radius given by a fraction of R. \(\square \)

Partial Regularity: Proof of Corollary 1.4

As a corollary of Theorem 1.1, we obtain a variational proof of the partial regularity for optimal transport maps proved in [7]. The changes of variables used to arrive at a normalized situation are exactly the same as in [7], and the argument to derive partial regularity from \(\epsilon \)-regularity follows [13].

Proof of Corollary 1.4

A classical result in optimal transport states that the optimal map T from \(\rho _0\) to \(\rho _1\) for the cost c and the optimal map \(T^*\) from \(\rho _1\) to \(\rho _0\) for the cost \(c^*(y,x) := c(x,y)\) are almost everywhere inverse to each other, and are of the form

$$\begin{aligned} T(x) = {{\,\mathrm{c-exp}\,}}_x(\nabla u(x)) \quad \text {and} \quad T^{-1}(y) := T^*(y) = {{\,\mathrm{c^{*}-exp}\,}}_y(\nabla u^c(y)), \end{aligned}$$

where u is a c-convex function and \(u^c\) is the c-conjugate of u. Both u and \(u^c\) are semi-convex, so that by Alexandrov's Theorem they are twice differentiable almost everywhere. Therefore, we can find two sets of full measure \(X_1 \subseteq X\) and \(Y_1 \subseteq Y\) such that for all \((x_0,y_0) \in X_1 \times Y_1\), u is twice differentiable at \(x_0\), \(u^c\) is twice differentiable at \(y_0\), and

$$\begin{aligned} T^{-1}(T(x_0)) = x_0 \quad \text {and} \quad T(T^{-1}(y_0)) = y_0. \end{aligned}$$
(9.1)

Now let

$$\begin{aligned} X' := X_1 \cap T^{-1}(Y_1) \quad \text {and} \quad Y' := Y_1 \cap T(X_1). \end{aligned}$$
(9.2)

Because \(\rho _0\) and \(\rho _1\) are bounded and bounded away from zero, T sends sets of measure 0 to sets of measure 0 so that \(|X \setminus X'| = |Y \setminus Y'| = 0\). The goal is now to prove that \(X'\) and \(Y'\) are open sets and that T is a \({\mathscr {C}}^{1, \alpha }\)-diffeomorphism between \(X'\) and \(Y'\).

Fix \(x_0 \in X'\); then by (9.2), \(y_0 := T(x_0) \in Y'\). Up to translation, we may assume that \(x_0 = y_0 = 0\). Define

$$\begin{aligned} {\overline{u}}(x)&:= u(x)-u(0)+c(x,0)-c(0,0), \nonumber \\ {\overline{c}}(x,y)&:= c(x,y)-c(x,0)-c(0,y)+c(0,0). \end{aligned}$$
(9.3)

Then \({\overline{u}}\) is a \({\overline{c}}\)-convex function and we have

$$\begin{aligned} {{\,\mathrm{{\overline{c}}-exp}\,}}_x(\nabla {\overline{u}}(x)) = {{\,\mathrm{c-exp}\,}}_x(\nabla u(x)), \end{aligned}$$
(9.4)

so that \(T(x) = {{\,\mathrm{{\overline{c}}-exp}\,}}_x(\nabla {\overline{u}}(x))\), from which we know that T is the \({\overline{c}}\)-optimal transport map from \(\rho _0\) to \(\rho _1\).
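The identity (9.4) follows directly from the definitions (9.3): by the defining relation (7.44) for the c-exponential, \(y = {{\,\mathrm{{\overline{c}}-exp}\,}}_x(p)\) is characterized by \(p + \nabla _x {\overline{c}}(x,y) = 0\), and

$$\begin{aligned} \nabla {\overline{u}}(x) + \nabla _x {\overline{c}}(x,y) = \big ( \nabla u(x) + \nabla _x c(x,0) \big ) + \big ( \nabla _x c(x,y) - \nabla _x c(x,0) \big ) = \nabla u(x) + \nabla _x c(x,y), \end{aligned}$$

so the characterizations of \({{\,\mathrm{{\overline{c}}-exp}\,}}_x(\nabla {\overline{u}}(x))\) and \({{\,\mathrm{c-exp}\,}}_x(\nabla u(x))\) coincide.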

By Alexandrov’s Theorem, there exists a symmetric matrix A such that

$$\begin{aligned} \nabla {\overline{u}}(x) = \nabla {\overline{u}}(0) + Ax + o(|x|), \end{aligned}$$

so that, using that \((p,x) \mapsto {{\,\mathrm{c-exp}\,}}_x(p)\) is \({\mathscr {C}}^1\) and setting \(M := -\nabla _{xy}{\overline{c}}(0,0) = -\nabla _{xy}c(0,0)\), which is nondegenerate by Assumption (C4), a simple computation yields

$$\begin{aligned} T(x) = M^{-1}Ax + o(|x|). \end{aligned}$$

Therefore, we have

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_R} |T(x)-M^{-1}Ax|^2 \rho _0(x) \, {\mathrm {d}}x \underset{R \rightarrow 0}{\longrightarrow } 0. \end{aligned}$$
(9.5)

The \({\overline{c}}\)-convexity of \({\overline{u}}\) and the fact that \({{\,\mathrm{{\overline{c}}-exp}\,}}_x(\nabla {\overline{u}}(x)) \in \partial _{{\overline{c}}} {\overline{u}}(x)\) imply that, see for instance [9, Section 5.3],

$$\begin{aligned} \nabla ^2 {\overline{u}}(x) + \nabla _{xx} {\overline{c}}(x, {{\,\mathrm{{\overline{c}}-exp}\,}}_x(\nabla {\overline{u}}(x))) \ge 0, \end{aligned}$$

so that, together with (9.3), (9.4) and the property \(T(0)=0\), the matrix \(A = \nabla ^2{\overline{u}}(0)\) is non-negative; since in addition \(T(x) = M^{-1}Ax + o(|x|)\) and \(T^{-1}\) is differentiable at \(0 = T(0)\), the matrix A is invertible, hence positive definite. We now make the change of variables \({\widetilde{x}} := A^{\frac{1}{2}}x\) and \({\widetilde{y}} := A^{-\frac{1}{2}}My\) so that

$$\begin{aligned} {\widetilde{T}}({\widetilde{x}})&:= A^{-\frac{1}{2}}MT(A^{-\frac{1}{2}}{\widetilde{x}}), \\ {\widetilde{c}}({\widetilde{x}}, {\widetilde{y}})&:= {\overline{c}}(A^{-\frac{1}{2}}{\widetilde{x}}, M^{-1}A^{\frac{1}{2}}{\widetilde{y}}). \end{aligned}$$

Defining

$$\begin{aligned} {\widetilde{\rho }}_0({\widetilde{x}}) := \det (A^{-\frac{1}{2}}) \rho _0(A^{-\frac{1}{2}}{\widetilde{x}}) \quad \text {and} \quad {\widetilde{\rho }}_1({\widetilde{y}}) := |\det (M^{-1}A^{\frac{1}{2}})| \rho _1(M^{-1}A^{\frac{1}{2}}{\widetilde{y}}), \end{aligned}$$

we get that \({\widetilde{T}}_{\#} {\widetilde{\rho }}_0 = {\widetilde{\rho }}_1\) \({\widetilde{c}}\)-optimally. This may be seen by noticing that

$$\begin{aligned} {\widetilde{T}}({\widetilde{x}}) = {{\,\mathrm{{\widetilde{c}}-exp}\,}}_{{\widetilde{x}}}(\nabla {\widetilde{u}}({\widetilde{x}})), \end{aligned}$$

where \({\widetilde{u}}({\widetilde{x}}) := {\overline{u}}(A^{-1/2}{\widetilde{x}})\) is a \({\widetilde{c}}\)-convex function. The cost \({\widetilde{c}}\) satisfies \(\nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}}(0,0) = -{\mathbb {I}}\) and by the Monge–Ampère equation

$$\begin{aligned} \big |\det \nabla {\widetilde{T}}({\widetilde{x}})\big | = \frac{{\widetilde{\rho }}_0({\widetilde{x}})}{{\widetilde{\rho }}_1({\widetilde{T}}({\widetilde{x}}))}, \end{aligned}$$

evaluated at \({\widetilde{x}} = 0\) (where \({\widetilde{T}}({\widetilde{x}}) = {\widetilde{x}} + o(|{\widetilde{x}}|)\), so that \(|\det \nabla {\widetilde{T}}(0)| = 1\)), we obtain \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0)\). Up to dividing \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\) by the same constant, we may assume that \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0) = 1\). Moreover, with this change of variables, (9.5) turns into

$$\begin{aligned} \frac{1}{R^{d+2}} \int _{B_R} |{\widetilde{T}}({\widetilde{x}})-{\widetilde{x}}|^2 {\widetilde{\rho }}_0({\widetilde{x}}) \, {\mathrm {d}}{\widetilde{x}} \underset{R \rightarrow 0}{\longrightarrow } 0. \end{aligned}$$
(9.6)
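The normalization \(\nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}}(0,0) = -{\mathbb {I}}\) used above can be verified by the chain rule: since \({\widetilde{c}}({\widetilde{x}}, {\widetilde{y}}) = {\overline{c}}(A^{-\frac{1}{2}}{\widetilde{x}}, M^{-1}A^{\frac{1}{2}}{\widetilde{y}})\),

$$\begin{aligned} \nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}}(0,0) = A^{-\frac{1}{2}}\, \nabla _{xy}{\overline{c}}(0,0)\, M^{-1}A^{\frac{1}{2}} = A^{-\frac{1}{2}} (-M) M^{-1} A^{\frac{1}{2}} = -{\mathbb {I}}, \end{aligned}$$

where we used \(\nabla _{xy}{\overline{c}}(0,0) = -M\) and the symmetry of \(A^{\pm \frac{1}{2}}\).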

Finally, \({\widetilde{c}}\) is still \({\mathscr {C}}^{2,\alpha }\) and satisfies Assumptions (C2)–(C4). Moreover, since \(\rho _0\) and \(\rho _1\) are \({\mathscr {C}}^{0,\alpha }\), bounded, and bounded away from zero, the densities \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\) are \({\mathscr {C}}^{0, \alpha }\), and we have

$$\begin{aligned} {\mathscr {H}}_R({\widetilde{\rho }}_0, {\widetilde{\rho }}_1, {\widetilde{c}}) = R^{2\alpha } \left( [{\widetilde{\rho }}_0]_{\alpha , R}^2 + [{\widetilde{\rho }}_1]_{\alpha , R}^2 + \left[ \nabla _{xy} {\widetilde{c}}\, \right] _{\alpha , R}^2 \right) \underset{R \rightarrow 0}{\longrightarrow } 0. \end{aligned}$$
(9.7)

Hence by (9.6) and (9.7), we may apply Theorem 1.1 to obtain that \({\widetilde{T}}\) is \({\mathscr {C}}^{1,\alpha }\) in a neighborhood of zero. By Remark 1.2, we also obtain that \({\widetilde{T}}^{-1}\) is \({\mathscr {C}}^{1,\alpha }\) in a neighborhood of zero.

Going back to the original map, this means that T is a \({\mathscr {C}}^{1,\alpha }\) diffeomorphism between a neighborhood U of \(x_0\) and the neighborhood T(U) of \(T(x_0)\). In particular, \(U \times T(U) \subseteq X' \times Y'\) so that \(X'\) and \(Y'\) are both open and by (9.1), T is a global \({\mathscr {C}}^{1,\alpha }\) diffeomorphism between \(X'\) and \(Y'\). \(\square \)