Abstract
We extend the variational approach to regularity for optimal transport maps initiated by Goldman and the first author to the case of general cost functions. Our main result is an \(\epsilon \)-regularity result for optimal transport maps between Hölder continuous densities slightly more quantitative than the result by De Philippis–Figalli. One of the new contributions is the use of almost-minimality: if the cost is quantitatively close to the Euclidean cost function, a minimizer for the optimal transport problem with general cost is an almost-minimizer for the one with quadratic cost. This further highlights the connection between our variational approach and De Giorgi’s strategy for \(\epsilon \)-regularity of minimal surfaces.
1 Introduction
In this paper, we give an entirely variational proof of the \(\epsilon \)-regularity result for optimal transportation with a general cost function c and Hölder continuous densities, as established by De Philippis and Figalli [7]. This provides a keystone to the line of research started in [13] and continued in [14]: In [13], the variational approach was introduced and \(\epsilon \)-regularity was established in the case of a Euclidean cost function \(c(x,y) = \frac{1}{2}|x-y|^2\), see [13, Theorem 1.2]. In [14], among other things, the argument was extended to rougher densities, which required a substitute for McCann’s displacement convexity; this generalization is crucial here.
One motivation for considering this more general setting is the study of optimal transportation on Riemannian manifolds with cost function given by \(\frac{1}{2}d^2(x,y)\), where d is the Riemannian distance. In this context, an \(\epsilon \)-regularity result is of particular interest because, compared to the Euclidean setting, even though c is a compact perturbation of the Euclidean case, there are other mechanisms creating singularities, like curvature. Indeed, under suitable convexity conditions on the support of the target density, the so-called Ma–Trudinger–Wang (MTW) condition on the cost function c, a strong structural assumption, is needed to obtain global smoothness of the optimal transport map, see [17] and [16]. Since in the most interesting case of cost \(\frac{1}{2}d^2(x,y)\), the MTW condition is quite restrictive^{Footnote 1}, and does not have a simple interpretation in terms of geometric properties of the manifold^{Footnote 2}, it is highly desirable to have a regularity theory without further conditions on the cost function c (and on the geometry of the support of the densities).
The outer loop of our argument is similar to that of [7]: a Campanato iteration on dyadically shrinking balls that relies on a one-step improvement lemma, which in turn rests on the closeness of the solution to that of a simpler problem with a high-level interior regularity theory. The main differences are:

In [7], the simpler problem is the Monge–Ampère equation with constant right-hand side and Dirichlet boundary data coming from the convex potential; for us, the simpler problem is the Poisson equation with Neumann boundary data coming from the flux in the Eulerian formulation of optimal transportation [2].

In [7], the comparison relies on the maximum principle; in our case, it relies on the fact that the density/flux pair in the Eulerian formulation is a minimizer^{Footnote 3} given its own boundary conditions.

In [7], the interior regularity theory appeals to the \(\epsilon \)regularity theory for the Monge–Ampère equation [10], which itself relies on Caffarelli’s work [4]; in our case, it is just inner regularity of harmonic functions.
Loosely speaking, the Campanato iteration in [7] relies on freezing the coefficients, whereas here, it relies on linearizing the problem (next to freezing the coefficients). In the language of nonlinear elasticity, we tackle the geometric nonlinearity (which corresponds to the nonlinearity inherent to optimal transport) alongside the material nonlinearity (which corresponds to the cost function c). As a consequence of this, we achieve \({\mathscr {C}}^{2,\alpha }\)-regularity in a single Campanato iteration, whereas [7] proceeds in three rounds of iterations, namely first \({\mathscr {C}}^{1,\beta }\), then \({\mathscr {C}}^{1,1}\), and finally \({\mathscr {C}}^{2,\alpha }\). Another consequence of this approach via linearization is that we instantly arrive at an estimate that has the same homogeneities as for a linear equation (meaning that the Hölder seminorm of the second derivatives is estimated by the Hölder seminorm of the densities and not a nonlinear function thereof). Likewise, we obtain the natural dependence on the Hölder seminorm of the mixed derivative of the cost function^{Footnote 4}. When it comes to this dependence on the cost function c, we observe a similar phenomenon as for boundary regularity: regarding how the regularity of the data and the regularity of the solution are related, optimal transportation seems better behaved than its linearization, as we shall explain now^{Footnote 5}: Assuming unit densities for the sake of the discussion, the Euler–Lagrange equation can be expressed on the level of the optimal transport map T as the fully nonlinear (and x-dependent) elliptic system given by \(\det \nabla T=1\) and \({\mathrm {curl}}_x(\nabla _x c(x,T(x)))=0\). Since the latter can be rephrased by imposing that the matrix \(\nabla _{xy}c(x,T(x))\nabla T(x)\) is symmetric, the Hölder norm of \(\nabla T\) indeed lives on the same footing as the Hölder norm of the mixed derivative \(\nabla _{xy}c\).
The linearization around \(T(x)=x\) on the other hand is given by the elliptic system \({\mathrm {div}}\,\delta T=0\) and \({\mathrm {curl}}_x(\nabla _x c(x,x)+\nabla _{xy}c(x,x)\delta T(x))=0\), which has divergence-form character^{Footnote 6}. Here Hölder control of \(\nabla _{xy}c(x,x)\) matches with Hölder control of only \(\delta T\), and not its gradient.
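For orientation, the expansion behind this linearized system can be sketched as follows (a formal computation around the identity map, not part of the original argument):

```latex
% Formal expansion T(x) = x + \epsilon\,\delta T(x) + o(\epsilon):
% the Jacobian constraint linearizes to a divergence condition,
\det\nabla T = \det\big(\mathbb{I} + \epsilon\,\nabla\delta T\big)
             = 1 + \epsilon\,\mathrm{div}\,\delta T + O(\epsilon^2),
\qquad\text{so}\qquad
\det\nabla T = 1 \;\Longrightarrow\; \mathrm{div}\,\delta T = 0;
% the curl-free condition linearizes by Taylor expansion in the y-slot,
\nabla_x c(x,T(x)) = \nabla_x c(x,x) + \epsilon\,\nabla_{xy}c(x,x)\,\delta T(x) + O(\epsilon^2),
\qquad\text{so}\qquad
\mathrm{curl}_x\big(\nabla_x c(x,x) + \nabla_{xy}c(x,x)\,\delta T(x)\big) = 0.
```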
Our approach is analogous to De Giorgi’s strategy for the \(\epsilon \)-regularity of minimal surfaces, foremost in the sense that it proceeds via harmonic approximation. In fact, our strategy is surprisingly similar to Schoen & Simon’s variant [21] of that regularity theory:

Both approaches rely on the fact that the configuration is minimal given its own boundary conditions (Dirichlet boundary conditions there [21, (43)], flux boundary conditions here [13, (3.22)]), see [21, p.428] and [13, Proof of Proposition 3.3, Step 4]; the Euler–Lagrange equation does not play a role in either approach.

Both approaches have to cope with a mismatch in description between the given configuration and the harmonic construction (non-graph vs. graph there, time-dependent flux vs. time-independent flux here) which leads to an error term that luckily is superlinear in the energy, see [21, (38)] and [13, Proof of Proposition 3.3, Step 4]. On a very high-level description, this superlinearity may be considered as coming from the next term in a Taylor expansion of the nonlinearity, which in the right topology can be seen via lower-dimensional isoperimetric principles, see [21, Lemma 3] and [13, Lemma 2.3]. (However, there is no direct analogue of the Lipschitz approximation here.)

Both approaches have to establish an approximate orthogonality that allows one to control the distance, in an energy norm, between the minimal configuration and the construction by the energy gap, see [21, p.426ff.] and [13, Proof of Proposition 3.3, Step 3] in the simple setting or rather [14, Lemma 1.8] in our setting; it thus ultimately relies on some (strict) convexity, see [21, (4)].

In order to establish this approximate orthogonality, both approaches have to smooth out the boundary data (there by simple convolution, here in addition by a nonlinear approximation), see [21, (34), (40), (52)] and [14, Proposition 3.6, (3.46), (3.47)].

In view of this, both approaches have to choose a good radius for the cylinder (in the Eulerian space-time here) on which the construction is carried out, see [21, p.424] and [14, Section 3.1.3].
The advantages of a variational approach become particularly apparent in this paper, when we pass from a Euclidean cost function to a more general one: We may appeal to the concept of almost minimizers, which is well-established for minimal surfaces.^{Footnote 7} In our case, this simple concept means that, on a given scale, we interpret the minimizer (always with respect to its own boundary conditions) of the problem with c as an approximate minimizer of the Euclidean problem. This allows us to directly appeal to the Euclidean harmonic approximation [14, Theorem 1.5]. Incidentally, even though we deal with Hölder continuous densities as in [13] and not general measures as in [14], we could not appeal to the simpler [13, Proposition 3.3], since the latter relies on the Euler–Lagrange equation in the form of McCann’s displacement convexity.
There are essentially two new challenges we face when passing from a Euclidean to a general cost function (in addition to the geometric ingredients also present in [7]):

Starting point for the variational approach is always an \(L^\infty /L^2\)-bound on the displacement, which does rely on the Euler–Lagrange equation in the weak form of monotonicity of the support of the optimal coupling, see [13, Lemma 3.1], [14, Lemma 2.9], and [18, Proposition 2.2]. In this paper, we establish the analogue of [13, Lemma 3.1] based on c-monotonicity, see Proposition 1.5. Loosely speaking, this relies on a qualitative argument down to some (c-dependent) scale \(R_0\), followed by an argument that constitutes a perturbation of the one in [13, Lemma 3.1] for the scales \(\le R_0\).

We now address a somewhat hidden, but quite important additional difficulty that has to be overcome when passing from a Euclidean to a general cost functional^{Footnote 8}: While the Kantorovich formulation of optimal transportation has the merit of being convex, it is so in a very degenerate way; the Benamou–Brenier formulation on the contrary uncovers an almost strict^{Footnote 9} convexity. The variational approach to regularity capitalizes on this strict convexity. However, this Eulerian reformulation seems naturally available only in the Riemannian case, and its strict convexity seems apparent only in the Euclidean setting. This is one of the reasons to appeal to the concept of almost minimizer, since it allows us to pass from a general cost function to the Euclidean one. However, for configurations that are not exact minimizers of the Euclidean cost functional, the Lagrangian cost \(\int |y-x|^2\,{\mathrm {d}}\pi \) and the cost \(\int \frac{1}{\rho }|j|^2\) of their Eulerian description (1.24) are in general different: While the Eulerian cost is always dominated by the Lagrangian one, this inequality is typically strict^{Footnote 10}. Hence the prior work [13, 14, 18] on the variational approach used the Euler–Lagrange equation in a somewhat hidden way, namely in terms of the coincidence of Eulerian and Lagrangian cost. Luckily, the discrepancy between the two functionals can be controlled for almost minimizers, see Lemma 1.12.
1.1 Main Results
Let \(X,Y \subset {\mathbb {R}}^d\) be compact. We assume that the cost function \(c: X\times Y \rightarrow {\mathbb {R}}\) satisfies:

C1 \(c \in {\mathscr {C}}^{2}(X\times Y)\).

C2 For any \(x\in X\), the map \(Y\ni y \mapsto \nabla _x c(x,y) \in {\mathbb {R}}^d\) is one-to-one.

C3 For any \(y\in Y\), the map \(X\ni x \mapsto \nabla _y c(x,y) \in {\mathbb {R}}^d\) is one-to-one.

C4 \(\det \nabla _{xy}c(x,y)\ne 0\) for all \((x,y)\in X\times Y\).
Let \(\rho _0,\rho _1:{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be two probability densities, with \({{\,\mathrm{Spt}\,}}\rho _0 \subseteq X\) and \({{\,\mathrm{Spt}\,}}\rho _1 \subseteq Y\). It is well-known that under (an even milder regularity assumption than) condition (C1), the optimal transportation problem
where the infimum is taken over all couplings \(\pi \) between the measures \(\rho _0\,{\mathrm {d}}x\) and \(\rho _1 \,{\mathrm {d}}y\), admits a solution \(\pi \), which we call a c-optimal coupling.
For \(R>0\) we define the set
which is quite natural in the context of optimal transportation, because it allows for a symmetric treatment of the transport problem: it is suitable to describe all the mass that gets transported out of \(B_R\), and all the mass that is transported into \(B_R\). For \(\alpha \in (0,1)\) we write
for the \({\mathscr {C}}^{0,\alpha }\)-seminorm of the mixed derivative \(\nabla _{xy}c\) of the cost function in the cross (1.2), and denote by
the \({\mathscr {C}}^{0,\alpha }\)-seminorm of \(\rho \) in \(B_R\).
Fixing \(\rho _0(0) =\rho _1(0) = 1\), we think of the densities as nondimensional objects. This means that mass has units of \((\text {length})^d\), so that the Euclidean transport energy has dimensionality \((\text {length})^{d+2}\), and explains the normalization by \(R^{-(d+2)}\) in assumption (1.4) and in the definition (1.9) of \({\mathscr {E}}_R\) below, making it a nondimensional quantity^{Footnote 11}. Similarly, the normalization \(\nabla _{xy}c(0,0) = -{\mathbb {I}}\) makes the second derivatives of the cost function nondimensional.
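The scaling behind this normalization can be recorded in one line (a routine check; \(\hat{\pi }\) denotes the rescaled coupling, renormalized to near-unit densities):

```latex
% Under the rescaling \hat{x} = x/R, \hat{y} = y/R, a coupling \pi with
% near-unit marginal densities carries mass \sim R^d, so its push-forward,
% renormalized by R^{-d}, again has near-unit densities \hat{\pi}, and
\int_{B_R\times\mathbb{R}^d} |x-y|^2 \,\mathrm{d}\pi
  \;=\; R^{d+2} \int_{B_1\times\mathbb{R}^d} |\hat{x}-\hat{y}|^2 \,\mathrm{d}\hat{\pi},
% which is why the factor R^{-(d+2)} renders the transport energy nondimensional.
```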
The main result of this paper is the following \(\epsilon \)regularity result:
Theorem 1.1
Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= -{\mathbb {I}}\). Assume further that 0 is in the interior of \(\, X \times Y\).
Let \(\pi \) be a coptimal coupling from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with^{Footnote 12}
there exists a function \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \((B_R \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq {\mathrm {graph}}\,T\), and the estimate
holds.
We stress that the implicit constant in (1.5) is independent of the cost c. The scale \(R_0\) below which our \(\epsilon \)-regularity result holds has to be such that \(B_{2R_0} \subseteq X \cap Y\) and such that the qualitative \(L^{\infty }/L^2\) bound (Lemma 2.1) holds. We note that the dependence of \(R_0\) on c and the implicit dependence on c in the smallness assumption (1.4) are only through the qualitative information (C1)–(C4), see Remark 1.7 and Lemma 2.1 for details. Note also that, without appealing to the well-known result that the solution of (1.1) is a deterministic coupling \(\pi =({{\,\mathrm{Id}\,}}\times T)_{\#} \rho _0\), this structural property of the optimal coupling is an outcome of our iteration.
Remark 1.2
Under the same assumptions as in Theorem 1.1, in particular only asking for the one-sided energy \(\frac{1}{R^{d+2}} \int _{B_{4R}\times {\mathbb {R}}^d} |x-y|^2\,{\mathrm {d}}\pi \) to be small in (1.4), we can also prove the existence of a function \(T^* \in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \(({\mathbb {R}}^d\times B_R) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq \{(T^*(y),y) \, : \, y \in B_R\}\), with the same estimate on the seminorm of \(\nabla T^*\). This follows from the symmetric nature of the assumptions (C1)–(C4), of the normalization conditions on the densities and the cost, and of the smallness assumption (1.4). We refer the reader to Step 1 of the proof of Theorem 1.1 to see how (1.4) entails smallness of a symmetric version of the Euclidean transport energy, as defined in (1.9), at a smaller scale.
As in [7], Theorem 1.1 leads to a partial regularity result for a c-optimal transport map T, that is, a map such that the c-optimal coupling between \(\rho _0\) and \(\rho _1\) is of the form
The existence of such a map, as well as its particular structure, namely the fact that it derives from a potential, is a classical result in optimal transportation under assumptions (C1)–(C2) on the cost. More precisely, there exists a c-convex function \(u: X \rightarrow {\mathbb {R}}\) such that
where the c-exponential map is well-defined in view of (C1) and (C2) via
Recall that a function \(u: X \rightarrow {\mathbb {R}}\) is c-convex if there exists a function \(\lambda : Y\rightarrow {\mathbb {R}}\cup \{\infty \}\) such that
Note that by assumption (C1) and the boundedness of Y, the function u is semiconvex, i.e., there exists a constant C such that \(u+ C|x|^2\) is convex. Hence, by Alexandrov’s Theorem (see, for instance, [8, Theorem 6.9], or [24, Theorem 14.25]), u is twice differentiable at a.e. \(x\in X\). For more details on c-convexity and its connection to optimal transport and Monge–Ampère equations we refer to [24, Chapter 5] and [9, Section 5.3].
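The semiconvexity mechanism is easy to observe numerically. The following one-dimensional sketch is our own illustration: the convention \(u(x) = \sup _y (-c(x,y)-\lambda (y))\), the perturbed cost, and the constant C are illustrative assumptions, not taken from the text.

```python
import numpy as np

# illustrative cost on X x Y = [-1,1]^2: quadratic plus a small smooth perturbation
def c(x, y):
    return 0.5 * (x - y) ** 2 + 0.1 * np.sin(x * y)

# on [-1,1]^2 we have |d^2c/dx^2| = |1 - 0.1*y^2*sin(x*y)| <= 1.1,
# so each x -> -c(x,y) + C|x|^2 is convex as soon as 2C >= 1.1; take C = 0.6
C = 0.6

ys = np.linspace(-1, 1, 201)
lam = np.random.default_rng(2).uniform(0, 1, ys.size)  # an arbitrary lambda on Y

xs = np.linspace(-1, 1, 401)
# c-convex function, with the convention u(x) = sup_y ( -c(x,y) - lambda(y) )
u = np.array([np.max(-c(xv, ys) - lam) for xv in xs])

# semiconvexity: u + C|x|^2 should be convex, i.e. have nonnegative
# discrete second differences on the grid (up to floating-point noise)
w = u + C * xs ** 2
second_diff = w[:-2] - 2 * w[1:-1] + w[2:]
print(second_diff.min())
assert second_diff.min() > -1e-10
```

The point of the design is that a supremum of convex functions is convex, so the semiconvexity of u is inherited from the uniform bound on \(\partial _{xx}c\), independently of how irregular \(\lambda \) is.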
Before stating the partial regularity result, let us mention that our \(L^2\)-based assumption on the smallness of the Euclidean energy of the forward transport is not more restrictive than the \(L^{\infty }\)-based assumption of closeness of the Kantorovich potential u to \(\frac{1}{2}|\cdot |^2\) in [7, Theorems 4.3 & 5.3]. However, the assumption on u is not invariant under transformations of c and u that preserve optimality, whereas the optimal transport map \(T_u\), and hence our assumption on the energy \(R^{-(d+2)} \int _{B_{4R}} |x-T_u(x)|^2\,\rho _0(x)\,{\mathrm {d}}x\), are unaffected. For that reason we additionally have to fix \(\nabla _{xx}c(0,0)=0\), and \(\nabla _x c(0,0) = 0\) in the following corollary^{Footnote 13}, and ask for \([\nabla _{xx}c]_{\alpha , 4R}\) to be small. Hence, in this result we think of the cost c as being close to \(-x \cdot y\), which is not necessarily the case in Theorem 1.1.
Corollary 1.3
Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= -{\mathbb {I}}\), \(\nabla _{xx}c(0,0)=0\), and \(\nabla _x c(0,0) = 0\). Assume further that 0 is in the interior of \(\, X \times Y\).
Let \(T_u\) be the c-optimal transport map from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with
\(T_u\in {\mathscr {C}}^{1,\alpha }(B_R)\) with
The partial regularity statement is then as follows:
Corollary 1.4
Let \(\rho _0,\rho _1:{\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be two probability densities with the properties that \(X= {{\,\mathrm{Spt}\,}}\rho _0\) and \(Y= {{\,\mathrm{Spt}\,}}\rho _1\) are bounded with^{Footnote 14}\(|\partial X| =|\partial Y|= 0\), \(\rho _0\), \(\rho _1\) are positive on their supports, and \(\rho _0\in {\mathscr {C}}^{0,\alpha }(X)\), \(\rho _1\in {\mathscr {C}}^{0,\alpha }(Y)\). Assume that \(c \in {\mathscr {C}}^{2, \alpha }(X \times Y)\) and that (C2)–(C4) hold. Then there exist open sets \(X' \subseteq X\) and \(Y' \subseteq Y\) with \(|X \setminus X'| = |Y \setminus Y'| = 0\) such that the c-optimal transport map T between \(\rho _0\) and \(\rho _1\) is a \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism between \(X'\) and \(Y'\).
As recently pointed out in [12] for the quadratic case, the variational approach is flexible enough to also obtain \(\epsilon \)-regularity for optimal transport maps between merely continuous densities. The modifications presented in [12] can be combined with our results to prove an \(\epsilon \)-regularity result for the class of general cost functions considered above. This will be the content of a separate note [19].
Finally, Theorem 1.1 can also be applied to optimal transportation on a Riemannian manifold \({\mathscr {M}}\) with cost given by the square of the Riemannian distance function d: if \(\rho _0, \rho _1\in {\mathscr {C}}^{0,\alpha }({\mathscr {M}})\) are two probability densities, locally bounded away from zero and infinity on \({\mathscr {M}}\), then the optimal transport map \(T: {\mathscr {M}} \rightarrow {\mathscr {M}}\) sending \(\rho _0\) to \(\rho _1\) for the cost \(c = \frac{d^2}{2}\) is a \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism outside two closed sets \({\varSigma }_X, {\varSigma }_Y \subset {\mathscr {M}}\) of measure zero. See [7, Theorem 1.4] for details.
1.2 Strategy of the Proofs
In this section we sketch the proof of the \(\epsilon \)-regularity Theorem 1.1. As in [13, 14] one of the key steps is a harmonic approximation result, which can be obtained by an explicit construction and (approximate) orthogonality on an Eulerian level.
1.2.1 \(L^{\infty }\) Bound on the Displacement
A crucial ingredient to the variational approach is a local \(L^{\infty }/L^2\)-estimate on the level of the displacement. More precisely, given a scale R, it gives a pointwise estimate on the nondimensionalized displacement \(\frac{y-x}{R}\) in terms of the (nondimensionalized) Euclidean transport energy
which amounts to a squared \(L^2\)-average of the displacement. While this looks like an inner regularity estimate in the spirit of the main result, Theorem 1.1, it is not. In fact, it is rather an interpolation estimate, with the c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \) providing an invisible second control next to the energy. This becomes most apparent in the simple context of [13, Lemma 3.1], where monotonicity morally amounts to a (one-sided) \(L^{\infty }\)-control of the gradient of the displacement. The interpolation character of the estimate still shines through in the fractional exponent \(\frac{2}{d+2}\in (0,1)\) on the \(L^2\)-norm.
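The origin of this exponent can be made plausible by a standard scaling heuristic (our paraphrase, not the actual proof), interpolating an \(L^{\infty }\) bound between the \(L^2\) norm and a (one-sided) Lipschitz bound on the displacement \(\delta \):

```latex
% if |\delta| attains the value M at some point and \delta is L-Lipschitz,
% then |\delta| \ge M/2 on a ball of radius M/(2L), so that
\Big(\frac{M}{2}\Big)^{2}\Big(\frac{M}{2L}\Big)^{d} \;\lesssim\; \int |\delta |^{2}\,\mathrm{d}x,
\qquad\text{i.e.}\qquad
M \;\lesssim\; L^{\frac{d}{d+2}}\,\Big(\int |\delta |^{2}\,\mathrm{d}x\Big)^{\frac{1}{d+2}},
% exhibiting the fractional exponent 2/(d+2) on the L^2-norm of the displacement
```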
Following [14], we here allow for general measures \(\mu \) and \(\nu \); the natural local control of these data on the energy scale is given by
which measures locally at scale \(R>0\) the distance from given measures \(\mu \) and \(\nu \) to the Lebesgue measure, where
is the quadratic Wasserstein distance between \(\mu \lfloor _{B_R}\) and \(\kappa _{\mu }\,{\mathrm {d}}x \lfloor _{B_R}\).^{Footnote 15} Notice that if \(\mu = \rho _0\,{\mathrm {d}}x\) and \(\nu = \rho _1\,{\mathrm {d}}y\) with Hölder continuous probability densities such that \(\frac{1}{2} \le \rho _j \le 2\) on \(B_R\), \(j=0, 1\), then^{Footnote 16}
see Lemma A.4 in the appendix.
The new aspect compared to [14, Lemma 2.9] is the general cost function c. Not surprisingly, it turns out that the result still holds provided c is close to Euclidean, where the closeness is measured in the nondimensional \({\mathscr {C}}^2\)-norm. We stress the fact that this closeness is not required on the entire “cross” (1.2), but only on the “finite cross”
This is crucial, since only this smallness is guaranteed by the finiteness of the \({\mathscr {C}}^{2,\alpha }\)-norm, cf. (1.18) below. This sharpening is a consequence of the qualitative hypotheses (C1)–(C4).
Proposition 1.5
Assume that the cost function c satisfies (C1)–(C4), and let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling with c-monotone support.
For all \({\varLambda } < \infty \) and for all \(R>0\) such that
and for which
we have that
Remark 1.6
A close look at the proof of Proposition 1.5 actually tells us that if \({\mathscr {E}}_{6R}\) is replaced in (1.15) by the one-sided energy, that is, if we assume
and if (1.14) is replaced by the one-sided inclusion
then we still get a one-sided \(L^{\infty }\) bound in the form of
This observation will be useful in the proof of Theorem 1.1 to relate the one-sided energy in (1.4) to the full energy in Proposition 1.16.
Note that due to assumption (1.14) Proposition 1.5 might appear rather useless: indeed, one basically has to assume a (qualitative) \(L^{\infty }\) bound in the sense that there is a constant \({\varLambda }<\infty \) such that if \(x\in B_{5R}\), then \(y\in B_{{\varLambda } R}\), in order to obtain the \(L^{\infty }\) bound (1.17). However, as we show in Lemma 2.1, due to the global assumptions (C1)–(C4) alone, there exists a scale \(R_0>0\) and a constant \({\varLambda }_0<\infty \) such that (1.14) holds. Moreover, in the Campanato iteration used to prove Theorem 1.1, which is based on suitable affine changes of coordinates, the qualitative \(L^{\infty }\) bound (1.14) is reproduced in each step of the iteration (with a constant \({\varLambda }\) that after the first step can be fixed throughout the iteration, e.g. \({\varLambda } = 27\) works).
Remark 1.7
There is an apparent mismatch with respect to the domains involved in the closeness assumptions on c in Theorem 1.1 and Proposition 1.5: we assume^{Footnote 17}\(R^{2\alpha }[\nabla _{xy}c]_{\alpha , 6R}^2 \ll 1\) in Theorem 1.1 and \(\Vert \nabla _{xy}c+{\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R,{\varLambda } R})} \ll 1\) in Proposition 1.5. We are able to relate the two assumptions due to the qualitative \(L^{\infty }\) bound (1.14): If \(\nabla _{xy}c(0,0)=-{\mathbb {I}}\), we have for all \({\varLambda }<\infty \), using the inclusion of the finite cross \({\mathbb {B}}_{5R,{\varLambda } R}\) in the corresponding cross,
Thus, if \(R^{2\alpha } [\nabla _{xy}c]_{\alpha ,6R}^2\) is chosen small enough, then the assumption (1.16) in Proposition 1.5 is fulfilled.
1.2.2 Almost-Minimality with Respect to Euclidean Cost
One of the main new contributions of this work is showing that the concept of almost-minimality, which is well-established in the theory of minimal surfaces, can lead to important insights also in optimal transportation. The key observation is that if c is quantitatively close to Euclidean cost, then a minimizer of (1.1) is almost-minimizing for the quadratic cost.
One difficulty in applying the concept of almost-minimality is that we are dealing with local quantities, for which local minimality (being minimizing with respect to its own boundary conditions) would be the right framework to adopt.
Lemma 1.8
Let \(\pi \in {\varPi }(\mu ,\nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \). Then for any Borel set \({\varOmega } \subseteq {\mathbb {R}}^d \times {\mathbb {R}}^d\) the coupling \(\pi _{{\varOmega }}:= \pi \lfloor _{{\varOmega }}\) is c-optimal given its own marginals, i.e. c-optimal between the measures \(\mu _{{\varOmega }}\) and \(\nu _{{\varOmega }}\) defined via
for any Borel measurable \(A\subseteq {\mathbb {R}}^d\).
This lemma allows us to restrict any c-optimal coupling \(\pi \) to a “good” set, where particle trajectories are well-behaved in the sense that they satisfy an \(L^{\infty }\) bound. In particular, we have the following corollary:
Corollary 1.9
Let \(\pi \in {\varPi }(\mu ,\nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \) with the property that there exists \(M\le 1\) such that for all
Then the coupling is c-optimal between the measures \(\mu _R\) and \(\nu _R\) as defined in (1.19) and we have that \({{\,\mathrm{Spt}\,}}\mu _R, {{\,\mathrm{Spt}\,}}\nu _R \subseteq B_{2R}\) (in particular \({{\,\mathrm{Spt}\,}}\pi _R \subseteq B_{2R} \times B_{2R}\)), \(\mu _R = \mu \) and \(\nu _R = \nu \) on \(B_R\), and \(\mu _R\le \mu \), \(\nu _R\le \nu \).
One of the main observations now is that c-optimal couplings of the type considered in Corollary 1.9 are almost-minimizers of the Euclidean transport cost. The following assumptions (1.20) and (1.21) should be read as properties satisfied by the marginal measures \(\mu \) and \(\nu \) of the restriction of a c-optimal coupling to a finite cross on which the \(L^{\infty }\) bound (1.17) holds. Moreover, one of the marginals should be close to the Lebesgue measure in the sense that \(\mu (B_R)\lesssim R^d\).
Proposition 1.10
Let \(\mu \) and \(\nu \) be two measures such that
for some \(R>0\). Let \(\pi \in {\varPi }(\mu , \nu )\) be a c-optimal coupling between the measures \(\mu \) and \(\nu \). Then \(\pi \) is almost-minimizing for the Euclidean cost, in the sense that for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\) we have that
where
for some constant C depending only on d.
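The elementary mechanism behind this almost-minimality can be checked on a discrete toy problem (our own illustration, with invented data; the error term used here is the crude bound \(2\eta \) for a cost uniformly \(\eta \)-close to the quadratic one, not the finer \({\varDelta }_R\) of the proposition):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.uniform(-1, 1, size=(n, 2))  # source atoms, mass 1/n each
Y = rng.uniform(-1, 1, size=(n, 2))  # target atoms

def quad(x, y):
    # Euclidean (quadratic) cost
    return 0.5 * np.sum((x - y) ** 2)

eta = 0.05
def cost(x, y):
    # a general cost, uniformly eta-close to the quadratic one
    return quad(x, y) + eta * np.sin(3 * x[0] * y[1])

def total(c, sigma):
    # transport cost of the permutation coupling x_i -> y_{sigma(i)}
    return sum(c(X[i], Y[s]) for i, s in enumerate(sigma)) / n

perms = list(itertools.permutations(range(n)))
sigma_c = min(perms, key=lambda s: total(cost, s))  # c-optimal coupling
best_quad = min(total(quad, s) for s in perms)      # quadratically optimal value

# chain: quad(sigma_c) <= cost(sigma_c) + eta <= cost(sigma) + eta <= quad(sigma) + 2*eta,
# so the c-optimal coupling is an almost-minimizer of the quadratic cost
gap = total(quad, sigma_c) - best_quad
print(gap, 2 * eta)
assert 0 <= gap <= 2 * eta + 1e-12
```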
The above statement is most naturally formulated in terms of couplings, that is, in the Kantorovich framework. However, in the way (almost-)minimality enters the proof of the harmonic approximation result (see Theorem 1.13 below), it is needed in the Eulerian picture, where the construction of a competitor is carried out.
1.2.3 The Eulerian Side of Optimal Transportation
Given a coupling \(\pi \in {\varPi }(\mu , \nu )\) between measures \(\mu \) and \(\nu \), we can define its Eulerian description, i.e. the density-flux pair \((\rho _t,j_t)\) associated to the coupling \(\pi \) by
for \(t\in [0,1]\) and for all test functions \(\zeta \in {\mathscr {C}}_c^{0}({\mathbb {R}}^d \times [0,1])\) and fields \(\xi \in {\mathscr {C}}_c^{0}({\mathbb {R}}^d \times [0,1])^d\). It is easy to check that \((\rho _t,j_t)\) is a distributional solution of the continuity equation
that is, for any \(\zeta \in {\mathscr {C}}^1_c({\mathbb {R}}^d \times [0,1])\) there holds
For brevity, we will often write \((\rho ,j) := (\rho _t \,{\mathrm {d}}t, j_t \,{\mathrm {d}}t)\). Being divergence-free in (t, x), the density-flux pair \((\rho , j)\) admits internal (and external) traces on \(\partial (B_R \times (0,1))\) for any \(R>0\), see [6] for details, i.e., there exists a measure \(f_R\) on \(\partial B_R \times (0,1)\) such that
We also introduce the timeaveraged measure \({\overline{f}}_R\) on \(\partial B_R\) defined via
Similarly, defining the measure \({\overline{j}} := \int _0^1\,{\mathrm {d}}j(\cdot , t)\), it is easy to see that
and that therefore \({\overline{j}}\) admits internal and external traces, which agree for all \(R>0\) with \(|{\overline{j}}|(\partial B_R) = \mu (\partial B_R) = \nu (\partial B_R) = 0\), and the internal trace agrees with \({\overline{f}}_R\).
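For a purely atomic coupling, the Eulerian description (1.24) and the weak continuity equation (1.26) can be verified directly; the following self-contained numerical check uses invented atoms, with \(\zeta \) an arbitrary smooth test function:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
x = rng.uniform(-1, 1, n)     # atoms x_i of the first marginal
y = rng.uniform(-1, 1, n)     # atoms y_i of the second marginal
m = np.full(n, 1.0 / n)       # masses of the coupling pi = sum_i m_i delta_{(x_i, y_i)}

# a smooth test function zeta(z, t) and its partial derivatives
zeta    = lambda z, t: np.sin(2 * z + t) * np.exp(-t)
dt_zeta = lambda z, t: (np.cos(2 * z + t) - np.sin(2 * z + t)) * np.exp(-t)
dz_zeta = lambda z, t: 2 * np.cos(2 * z + t) * np.exp(-t)

# Eulerian description: rho_t = sum_i m_i delta_{(1-t)x_i + t y_i},
# j_t = sum_i m_i (y_i - x_i) delta_{(1-t)x_i + t y_i}; test the weak
# continuity equation by midpoint quadrature in time
T = 4000
ts = (np.arange(T) + 0.5) / T
lhs = 0.0
for xi, yi, mi in zip(x, y, m):
    pos = (1 - ts) * xi + ts * yi
    lhs += mi * np.sum(dt_zeta(pos, ts) + (yi - xi) * dz_zeta(pos, ts)) / T

rhs = np.sum(m * zeta(y, 1.0)) - np.sum(m * zeta(x, 0.0))
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-5
```

The identity holds exactly along each straight particle trajectory by the chain rule; the quadrature only introduces an \(O(T^{-2})\) error.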
Note that we have the duality [20, Proposition 5.18]
which immediately implies the subadditivity of \((\rho ,j) \mapsto \int \frac{1}{\rho }|j|^2\). A localized version of (1.29), in the form of
also holds for any open set \(B\subseteq {\mathbb {R}}^d\). From the inequality \(\xi \cdot (y-x)-\frac{1}{2}|\xi |^2 \le \frac{1}{2}|x-y|^2\), which is true for any \(\xi , x, y \in {\mathbb {R}}^d\), the duality formula (1.29) immediately implies that the Eulerian cost of the density-flux pair \((\rho , j)\) corresponding to a coupling \(\pi \) via (1.24) is always dominated by the Lagrangian cost of \(\pi \), i.e.
We stress that this inequality is in general strict, see the example in Footnote 10.
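The discrepancy is easy to reproduce numerically. In the following sketch (our own example, not necessarily the one of Footnote 10), half of the mass on [0, 1] stays put while the other half is flipped; where the two populations overlap, their opposite velocities partially cancel in the flux j, so the Eulerian cost drops strictly below the Lagrangian one:

```python
import numpy as np

# coupling pi on [0,1]: half of the mass stays in place (y = x),
# the other half is flipped (y = 1 - x)
N, K, T = 4000, 200, 400                      # particles per population, space bins, time steps
x = (np.arange(N) + 0.5) / N
m = 1.0 / (2 * N)                             # mass per particle
X0 = np.concatenate([x, x])                   # initial positions of both populations
V = np.concatenate([np.zeros(N), 1 - 2 * x])  # displacements/velocities y - x

lagrangian = m * np.sum(V ** 2)               # int |y-x|^2 dpi  (= 1/6 here)

eulerian = 0.0
for t in (np.arange(T) + 0.5) / T:
    pos = X0 + t * V
    idx = np.clip((pos * K).astype(int), 0, K - 1)
    rho = np.bincount(idx, weights=np.full(2 * N, m), minlength=K)
    j = np.bincount(idx, weights=m * V, minlength=K)
    mask = rho > 0
    # Benamou-Brenier integrand int (1/rho)|j|^2 of the binned measures
    eulerian += np.sum(j[mask] ** 2 / rho[mask]) / T

print(eulerian, lagrangian)
assert 0 < eulerian < 0.95 * lagrangian
```

The strict gap survives refinement of the space-time grid; for an exactly optimal coupling for the quadratic cost the two costs would agree, in line with Remark 1.11.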
Contrary to the case of the quadratic cost \(c(x,y) = \frac{1}{2}|x-y|^2\), or, equivalently, \({\widetilde{c}}(x,y) = -x \cdot y\), given an optimal coupling \(\pi \) for the cost c, the density-flux pair \((\rho ,j)\) associated to \(\pi \) in the sense of (1.24) is not optimal for the Benamou–Brenier formulation [2] of optimal transportation, i.e.,
where the continuity equation and boundary conditions are understood in the weak sense (1.26), see [23, Chapter 8] for details.
Remark 1.11
For minimizers \(\pi \) of the Euclidean transport cost, i.e. \(\pi \in {\mathrm {argmin}}\, W^2(\mu ,\nu )\), we have equality in (1.31),
Indeed, if \((\rho ,j)\) is the Eulerian description of \(\pi \), then by (1.32)
which together with (1.31) implies (1.33).
As another consequence, while displacement convexity guarantees in the Euclidean case that the Eulerian density satisfies \(\rho \le 1\) (up to a small error), cf. [13, Lemma 4.2], in our case \(\rho \) is in general merely a measure. This complication is already present in [14] and led to important new insights in dealing with marginals that are not absolutely continuous with respect to the Lebesgue measure in the Euclidean case, upon which we are also building in this work.
The Eulerian version of the almost-minimality Proposition 1.10 can then be obtained via the following lemma:
Lemma 1.12
Let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling between the measures \(\mu \) and \(\nu \) with the property that there exists a constant \({\varDelta } <\infty \) such that
for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\), and let \((\rho ,j)\) be its Eulerian description defined in (1.24). Then
and
for any pair of measures \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying
for all \(\zeta \in {\mathscr {C}}^\infty _c({\mathbb {R}}^d\times [0,1])\).
1.2.4 The Harmonic Approximation Result
The main ingredient in the proof of Theorem 1.1 is the harmonic approximation result, which states that if a coupling between two measures supported on a ball (say of radius 7R for some \(R>0\)) satisfies the \(L^{\infty }\) bound of Proposition 1.5 globally on its support and is almost-minimizing with respect to the Euclidean cost, then the displacement \(y-x\) is quantitatively close to a harmonic gradient field \(\nabla {\varPhi }\). This is actually a combination of a harmonic approximation result in the Eulerian picture (Theorem 1.13) and Lemma 1.14, which allows us to transfer the Eulerian information back to the Lagrangian framework.
Theorem 1.13
(Harmonic approximation). Let \(R>0\) and \(\mu ,\nu \) be two measures with the property that
Let further \(\pi \in {\varPi }(\mu ,\nu )\) be a coupling between the measures \(\mu \) and \(\nu \), such that:

1.
\(\pi \) satisfies a global \(L^{\infty }\) bound, that is, there exists a constant \(M\le 1\) such that
$$\begin{aligned} |x-y| \le M R \quad \text {for any } (x,y) \in {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$(1.37) 
2.
If \((\rho ,j)\) is the Eulerian description of \(\pi \) as defined in (1.24), then there exists a constant \({\varDelta }_R<\infty \) such that
$$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + R^{d+2} {\varDelta }_R \end{aligned}$$(1.38)for any Eulerian competitor, i.e. any pair of measures \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying
$$\begin{aligned} \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} = \int _{B_R} \zeta _1 \,{\mathrm {d}}\nu - \int _{B_R} \zeta _0 \,{\mathrm {d}}\mu + \int _{\partial B_R \times [0,1]} \zeta \,{\mathrm {d}}f_R. \end{aligned}$$
Then for every \(0<\tau \ll 1\), there exist \(\epsilon _{\tau }>0\) and \(C,C_{\tau }<\infty \) such that, provided^{Footnote 18}
the following holds: There exists a radius \(R_* \in (3R, 4R)\) such that if \({\varPhi }\) is the solution, unique up to an additive constant, of^{Footnote 19}
then^{Footnote 20}
and
From the Eulerian version of the harmonic approximation Theorem 1.13 we can also obtain a Lagrangian version via almost-minimality:
Lemma 1.14
Let \(R>0\) and let \(\pi \in {\varPi }(\mu ,\nu )\) be a coupling between the measures \(\mu \) and \(\nu \), such that

1.
\(\pi \) satisfies a global \(L^{\infty }\) bound, that is, there exists a constant \(M\le 1\) such that
$$\begin{aligned} |x-y| \le M R \quad \text {for any } (x,y) \in {{\,\mathrm{Spt}\,}}\pi ; \end{aligned}$$(1.43) 
2.
if \((\rho ,j)\) is the Eulerian description of \(\pi \) as defined in (1.24), then there exists a constant \({\varDelta }_R<\infty \) such that
$$\begin{aligned} \int |x-y|^2\,{\mathrm {d}}\pi \le \int \frac{1}{\rho } |j|^2 + R^{d+2} {\varDelta }_R. \end{aligned}$$(1.44)
Then for any smooth function \({\varPhi }\) there holds
1.2.5 One-Step Improvement and Campanato Iteration
With the harmonic approximation result at hand, we can derive a one-step improvement result, which roughly says that if the coupling \(\pi \) is quantitatively close to \(({{\,\mathrm{Id}\,}}\times {{\,\mathrm{Id}\,}})_{\#}\rho _0\) on some scale R, expressed in terms of the estimate
and the fact that the (qualitative) \(L^{\infty }\) bound on the displacement (1.14) holds, then on a smaller scale \(\theta R\), after an affine change of coordinates, it is even closer to \(({{\,\mathrm{Id}\,}}\times {{\,\mathrm{Id}\,}})_{\#}\rho _0\). This is the basis of a Campanato iteration to obtain the existence of the optimal transport map T and its \({\mathscr {C}}^{1,\alpha }\) regularity.
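Schematically, and suppressing the affine changes of variables, the iteration can be recorded as follows (the excess \(E\) and the exponent bookkeeping are our shorthand, not the paper's notation):

```latex
% One-step improvement: if the normalized excess at scale R,
%   E(R) := \frac{1}{R^{d+2}} \int_{B_R \times \mathbb{R}^d} |x-y|^2 \,\mathrm{d}\pi
%           \;+\; \text{data terms},
% is small, then after an affine change of coordinates
\begin{aligned}
E(\theta R) \;\le\; \theta^{2\beta}\, E(R) + \text{error}(R) .
\end{aligned}
% Iterating over the scales R_k = \theta^k R_0 yields E(R_k) \lesssim \theta^{2\beta k},
% a Campanato-type decay, which by Campanato's characterization of Hölder
% spaces gives the existence of T and its \mathscr{C}^{1,\alpha} regularity
% near the point under consideration.
```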
We start with the affine change of coordinates and its properties:
Lemma 1.15
Let \(\pi \in {\varPi }(\mu ,\nu )\) be an optimal transport plan with respect to the cost function c between the measures \(\mu ({\mathrm {d}}x) = \rho _0(x)\,{\mathrm {d}}x\) and \(\nu ({\mathrm {d}}y) = \rho _1(y)\,{\mathrm {d}}y\).
Given a nonsingular matrix \(B \in {\mathbb {R}}^{d\times d}\) and a vector \(b\in {\mathbb {R}}^d\), we perform the affine change of coordinates^{Footnote 21}
where \(D = \nabla _{xy}c(0,b)\) and \(\gamma = \left( \frac{\rho _1(b)}{\rho _0(0)} \frac{\det B^2}{\det D}\right) ^{\frac{1}{d}}\).^{Footnote 22} If we let
so that in particular \({\widehat{\rho }}_0(0) = {\widehat{\rho }}_1(0) = 1\) and
from which it follows that \(\nabla _{{\widehat{x}}{\widehat{y}}}{\widehat{c}}(0,0) = -{\mathbb {I}}\), then the coupling
is an optimal coupling between the measures \({\widehat{\mu }}({\mathrm {d}}{\widehat{x}}) = {\widehat{\rho }}_0({\widehat{x}})\,{\mathrm {d}}{\widehat{x}}\) and \({\widehat{\nu }}({\mathrm {d}}{\widehat{y}}) = {\widehat{\rho }}_1({\widehat{y}})\,{\mathrm {d}}{\widehat{y}}\) with respect to the cost function \({\widehat{c}}\).
In the change of variables we perform, the role of D is to ensure that we get a normalized cost, i.e. \(\nabla _{{\widehat{x}}{\widehat{y}}}{\widehat{c}}(0,0) = -{\mathbb {I}}\), while \(\gamma \) and \(\det B\) in (1.48) are needed for \({\widehat{\pi }}\) to define a transportation plan between the new densities. We refer the reader to Appendix 1 for a proof of this lemma.
Proposition 1.16
Assume that \(\rho _0(0) = \rho _1(0) = 1\) and \(\nabla _{xy}c(0,0) = -{\mathbb {I}}\), and let \(\pi \in {\varPi }(\rho _0, \rho _1)\) be c-optimal.
Then for all \(\beta \in (0, 1)\), there exist \(\theta \in (0,1)\) and \(C_{\beta } < \infty \) such that for all \({\varLambda }<\infty \) and \(R>0\) for which
there exist a symmetric matrix \(B\in {\mathbb {R}}^{d\times d}\) and a vector \(b \in {\mathbb {R}}^d\) with
such that, performing the change of variables in Lemma 1.15, the coupling \({\widehat{\pi }}\) is \({\widehat{c}}\)-optimal between the measures with densities \({\widehat{\rho }}_0\) and \({\widehat{\rho }}_1\) and there holds
Moreover, we have the inclusion
Let us give a rough sketch of how the one-step improvement result can now be iterated: In a first step, the qualitative bound on the displacement is obtained from the global assumptions (C1)–(C4) on the cost function, see Lemma 2.1. This yields an initial scale \(R_0>0\) below which the cost function is close enough to the Euclidean cost function for (1.14) to hold. We may therefore apply Proposition 1.16, so that after an affine change of coordinates the energy inequality (1.52) holds, the transformed densities and cost function are again normalized at the origin, optimality is preserved, and the qualitative \(L^{\infty }\) bound (1.53) holds for the new coupling. We can therefore apply the one-step improvement Proposition 1.16 again, going to smaller and smaller scales. Together with Campanato’s characterization of Hölder regularity, this yields the claimed existence and \({\mathscr {C}}^{1,\alpha }\) regularity of T.
The details of the above parts of the proof of our main Theorem 1.1 are explained in the sections below, with a full proof of Theorem 1.1 in Section 7. The proof of Corollary 1.4 is essentially a combination of the ideas in [13] and [7], and is given for the convenience of the reader in Section 9.
We conclude the introduction with a comment on the extension of the results presented above to general almost-minimizers with respect to Euclidean cost in the following sense:
Definition 1.17
(Almost-minimality w.r.t. Euclidean cost (on all scales)). A coupling \(\pi \in {\varPi }(\mu ,\nu )\) is almost-minimal with respect to Euclidean cost if there exist \(R_0>0\) and \({\varDelta }_{\cdot }: (0,R_0] \rightarrow [0,\infty )\) nondecreasing such that for all \(r\le R_0\) and \((x_0,y_0)\) in the interior of \(X \times Y\) there holds
for all \({\widetilde{\pi }} \in {\varPi }(\mu ,\nu )\) such that \({{\,\mathrm{Spt}\,}}(\pi {\widetilde{\pi }}) \subseteq (B_r(x_0)\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(y_0))\).
We will restrict our attention to almost-minimizers in the class of deterministic transport plans coming from a Monge map T, i.e. \(\pi =\pi _T = ({{\,\mathrm{Id}\,}}, T)_{\#} \mu \), and call a transport map almost-minimizing with respect to Euclidean cost if for all \(r\le R_0\) and \(x_0 \in {\mathrm {int}}\, X\) there holds
for all \({\widetilde{T}}\) such that \({\widetilde{T}}_{\#}\mu =\nu \) and \({\mathrm {graph}} {\widetilde{T}} = {\mathrm {graph}} T\) outside \((B_r(x_0)\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(T(x_0)))\).
In this situation, we get the following generalization^{Footnote 23} of Theorem 1.1, whose proof will be sketched in Section 8.
Theorem 1.18
Assume that \(\rho _0(0) = \rho _1(0) = 1\), and that 0 is in the interior of \(X\times Y\). Let \(T:X\rightarrow Y\) be an almost-minimizing transport map from \(\mu \) to \(\nu \) with rate function \({\varDelta }_r = C r^{2\alpha }\) for some \(C<\infty \). Assume further that T is invertible. Then there exists \(R_1>0\) such that for any \(R\le R_1\) with
there holds \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\).
2 An \(L^{\infty }\) Bound on the Displacement
In this section we establish an \(L^{\infty }\) bound on the displacement for transference plans \(\pi \in {\varPi }(\mu , \nu )\) with c-monotone support, that is,
$$\begin{aligned} c(x,y) - c(x',y) \le c(x,y') - c(x',y') \quad \text {for all} \quad (x,y),(x',y') \in {{\,\mathrm{Spt}\,}}\pi , \end{aligned}$$(2.1)
provided that the transport cost is small, the marginals \(\mu , \nu \) are close to the Lebesgue measure, and the cost function c is close to the Euclidean cost function. In Lemma 2.1 we use the c-monotonicity (2.1) combined with the qualitative hypotheses (C1)–(C4) in conjunction with compactness to obtain a more qualitative version of the \(L^\infty /L^2\) bound, which just expresses finite expansion. In Proposition 1.5 this qualitative \(L^{\infty }/L^2\) bound in the form of (1.14) is upgraded to the desired quantitative version under the scale-invariant smallness assumption (1.16). The latter is a consequence of the quantitative smallness hypothesis \(R^{2\alpha } [\nabla _{xy}c]_{\alpha ,R}^2 \ll 1\), as we pointed out in Remark 1.7. In both steps, we need to ensure that there are sufficiently many points in \({{\,\mathrm{Spt}\,}}\pi \) close to the diagonal. This is formulated in Lemma A.1, which does not rely on monotonicity.
Proof of Proposition 1.5
Let \({\varLambda } < \infty \) and \(R>0\) be such that (1.14), (1.15) and (1.16) hold. We only prove the bound (1.17) for a couple \((x,y) \in (B_{4R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \), as the other case \((x,y) \in ({\mathbb {R}}^d \times B_{4R}) \cap {{\,\mathrm{Spt}\,}}\pi \) follows by symmetry.

Step 1 (Rescaling). Let \(({\widetilde{x}}, {\widetilde{y}}) = (S_R(x), S_R(y)) := (R^{-1}x, R^{-1}y)\) and set
$$\begin{aligned} {\widetilde{\mu }}&:= {S_R}_{\#} \mu , \quad {\widetilde{\nu }} := {S_R}_{\#} \nu , \quad {\widetilde{\pi }} := R^{-d} (S_R \times S_R)_{\#} \pi \quad \text {and} \quad {\widetilde{c}}({\widetilde{x}}, {\widetilde{y}}) := R^{-2}c(R{\widetilde{x}}, R{\widetilde{y}}), \end{aligned}$$so that \({\widetilde{c}}\) still satisfies properties (C1)–(C4), and we have \({\widetilde{\pi }} \in {\varPi }({\widetilde{\mu }}, {\widetilde{\nu }})\), and \({{\,\mathrm{Spt}\,}}{\widetilde{\pi }}\) is \({\widetilde{c}}\)-monotone. We also have
$$\begin{aligned} {\mathscr {E}}_6({\widetilde{\pi }}) = {\mathscr {E}}_{6R}(\pi ), \quad {\mathscr {D}}_6({\widetilde{\mu }}, {\widetilde{\nu }}) = {\mathscr {D}}_{6R}(\mu , \nu ),\\ \text {and} \quad \Vert \nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}} + {\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5, {\varLambda }})} = \Vert \nabla _{xy}c+{\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5R, {\varLambda } R})}. \end{aligned}$$This allows us to only consider the case \(R=1\) in the following. We will abbreviate
$$\begin{aligned} {\mathscr {E}}:= {\mathscr {E}}_6 \quad \text {and} \quad {\mathscr {D}}:= {\mathscr {D}}_6. \end{aligned}$$ 
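The rescaling identities in Step 1 can be checked numerically on a toy discrete coupling. The snippet below is a sketch under an assumed normalization of the transport energy, \({\mathscr {E}}_r(\pi ) = r^{-(d+2)}\int _{B_r\times {\mathbb {R}}^d}|x-y|^2\,{\mathrm {d}}\pi \) (our guess at the definition (1.9), which is not reproduced in this excerpt); under that convention the invariance \({\mathscr {E}}_6({\widetilde{\pi }}) = {\mathscr {E}}_{6R}(\pi )\) is exact.

```python
import numpy as np

# Toy check of Step 1: under the rescaling (x, y) -> (x/R, y/R) with
# pi~ := R^{-d} (S_R x S_R)_# pi, the normalized transport energy
# E_r(pi) := r^{-(d+2)} * int_{B_r x R^d} |x-y|^2 dpi   (assumed form of (1.9))
# satisfies E_6(pi~) = E_{6R}(pi).
rng = np.random.default_rng(0)
d, n, R = 2, 1000, 0.5
x = rng.uniform(-6 * R, 6 * R, (n, d))            # source points
y = x + 0.01 * R * rng.standard_normal((n, d))    # nearby target points
w = np.full(n, R**d / n)                          # weights of the discrete plan pi

def energy(x, y, w, r):
    """r^{-(d+2)} * sum of w_i |x_i - y_i|^2 over the points with |x_i| <= r."""
    mask = np.linalg.norm(x, axis=1) <= r
    return (w[mask] * np.sum((x[mask] - y[mask]) ** 2, axis=1)).sum() / r ** (d + 2)

E_orig = energy(x, y, w, 6 * R)                   # E_{6R}(pi)
E_resc = energy(x / R, y / R, w / R**d, 6.0)      # E_6(pi~) after pushing forward by S_R
assert np.isclose(E_orig, E_resc)
```

The same bookkeeping applies to the data term \({\mathscr {D}}\) and to \(\Vert \nabla _{xy}c+{\mathbb {I}}\Vert \) as stated in the display above, which is what justifies reducing to \(R=1\).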
Step 2 (Use of c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). Let \((x,y) \in (B_{4} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \). We first show that for all \((x',y') \in (B_{5} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \) we have
$$\begin{aligned} (x-y) \cdot (x-x')&\le 3 |x-x'|^2 + |x'-y'|^2 + \delta |x-x'| \, |x-y|, \end{aligned}$$(2.2)where, recalling (1.13),
$$\begin{aligned} \delta := \Vert \nabla _{xy}c + {\mathbb {I}}\Vert _{{\mathscr {C}}^0({\mathbb {B}}_{5, {\varLambda }})}. \end{aligned}$$(2.3)Indeed, setting \(x_t := tx + (1-t)x'\) and \(y_s := s y + (1-s) y'\) for \(t,s \in [0,1]\), c-monotonicity (2.1) of \({{\,\mathrm{Spt}\,}}\pi \) implies that
$$\begin{aligned} 0&\ge (c(x,y)-c(x',y))-(c(x,y')-c(x',y')) \nonumber \\&= \int _0^1 \int _0^1 (x-x') \cdot \nabla _{xy} c(x_t, y_s) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t \nonumber \\&= - (x-x')\cdot (y-y') + \int _0^1 \int _0^1 (x-x') \cdot \left( \nabla _{xy} c(x_t, y_s) + {\mathbb {I}}\right) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t. \end{aligned}$$(2.4)By assumption (1.14), we have \((x, y), (x',y') \in B_{5} \times B_{{\varLambda }}\) and thus \(\{(x_t, y_s)\}_{s,t\in [0,1]} \subseteq B_{5} \times B_{{\varLambda }} \subset {\mathbb {B}}_{5, {\varLambda }}\). Hence, we obtain from (2.3)
$$\begin{aligned} \left| \int _0^1 \int _0^1 (x-x') \cdot \left( \nabla _{xy} c(x_t, y_s) + {\mathbb {I}}\right) \, (y-y') \,{\mathrm {d}}s {\mathrm {d}}t \right| \le \delta |x-x'| \, |y-y'|, \end{aligned}$$so that (2.4) turns into
$$\begin{aligned} 0 \ge - (x-x') \cdot (y-y') - \delta |x-x'| \, |y-y'|. \end{aligned}$$(2.5)Upon writing \(y-y' = (y-x) + (x-x') + (x'-y')\), it follows that
$$\begin{aligned} 0&\ge - (x-x') \cdot (y-x) - |x-x'|^2 - (x-x') \cdot (x'-y') - \delta |x-x'| \, |y-x| \\&\quad - \delta |x-x'|^2 - \delta |x-x'| \, |x'-y'|, \end{aligned}$$from which we obtain the estimate
$$\begin{aligned} (x-y) \cdot (x-x')&\le \delta |x-x'| \, |x-y| + (1+\delta ) |x-x'|^2 + (1+\delta ) |x-x'| \, |x'-y'| \\&\le \delta |x-x'| \, |x-y| + \frac{3}{2}(1+\delta ) |x-x'|^2 + \frac{1}{2} (1+\delta ) |x'-y'|^2. \end{aligned}$$Note that \(\delta \ll 1\) by assumption (1.16), hence (2.2) follows.
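As a consistency check (not part of the original argument), it may help to record what Step 2 reduces to for the exactly Euclidean cost:

```latex
% For c(x,y) = \tfrac12 |x-y|^2 we have \nabla_{xy} c \equiv -\mathbb{I},
% hence \delta = 0 in (2.3), and (2.5) reduces to the classical monotonicity
% of the support of an optimal plan for quadratic cost:
\begin{aligned}
(x-x')\cdot(y-y') \;\ge\; 0
\qquad \text{for all } (x,y),\,(x',y') \in \operatorname{Spt}\pi,
\end{aligned}
% while (2.2) becomes (x-y)\cdot(x-x') \le 3|x-x'|^2 + |x'-y'|^2.
```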

Step 3 (Proof of estimate (1.17)). Let \(r \ll 1\) and \(e\in S^{d-1}\) be arbitrary, to be fixed later. Let \(\eta \) be supported in \(B_{\frac{r}{2}}(x-re)\) and satisfy the bounds
$$\begin{aligned} \sup |\eta | + r \sup |\nabla \eta | + r^2 \sup |\nabla ^2 \eta | \lesssim 1. \end{aligned}$$(2.6)We make the additional assumption that \(\eta \) is normalized in such a way that
$$\begin{aligned} \int (x-x') \eta (x') \,{\mathrm {d}}x' = r^{d+1} e. \end{aligned}$$(2.7)Note that since \(x\in B_{4}\) and \(r\ll 1\), we have
$$\begin{aligned} {{\,\mathrm{Spt}\,}}\eta \subseteq B_{\frac{r}{2}}(x-re) \subset B_{5}. \end{aligned}$$Integrating inequality (2.2) against the measure \(\eta (x') \, \pi ({\mathrm {d}}x' {\mathrm {d}}y')\), it follows that
$$\begin{aligned} \begin{aligned} (x-y) \cdot \int (x-x') \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')&\le 3 \int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')\\&\quad + \int |x'-y'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y')\\&\quad + \delta |x-y| \int |x-x'| \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y'). \end{aligned} \end{aligned}$$(2.8)Note that by (2.7) the integral on the left-hand side of inequality (2.8) can be expressed as
$$\begin{aligned} \int (x-x') \eta (x')\, \pi ({\mathrm {d}}x' {\mathrm {d}}y')&= \int (x-x') \eta (x') \mu ({\mathrm {d}}x') \\&= \kappa _{\mu } r^{d+1} e + \int (x-x') \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) . \end{aligned}$$To estimate the latter integral, we recall the following result from [14, Lemma 2.8]: for any \(\zeta \in {\mathscr {C}}^{\infty }(B_R)\),
$$\begin{aligned} \left| \int _{B_{R}} \zeta \,({\mathrm {d}}\mu - \kappa _{\mu }\,{\mathrm {d}}x) \right|&\le \left( \kappa _{\mu } \int _{B_{R}} |\nabla \zeta |^2\,{\mathrm {d}}x \, W_{B_{R}}^2(\mu ,\kappa _{\mu }) \right) ^{\frac{1}{2}} + \frac{1}{2} \sup _{B_{R}} |\nabla ^2 \zeta | \, W_{B_{R}}^2(\mu ,\kappa _{\mu }). \end{aligned}$$(2.9)By this estimate with \(\zeta = (x-\cdot )\eta \) and using that \(\kappa _{\mu }\sim 1\) by assumption (1.15), we obtain with (2.6) that
$$\begin{aligned} \left| \int (x-x') \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) \right|&\lesssim (r^d {\mathscr {D}})^{\frac{1}{2}} + \frac{1}{r} {\mathscr {D}}\lesssim \epsilon r^{d+1} + \frac{1}{\epsilon } \frac{1}{r} {\mathscr {D}}\end{aligned}$$for some \(0<\epsilon \ll 1\) to be fixed later. Hence,
$$\begin{aligned} \begin{aligned} (x-y)&\cdot \int (x-x') \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \\&\ge \kappa _{\mu } r^{d+1} (x-y)\cdot e - C \left( \epsilon r^{d+1} + \frac{1}{\epsilon r} {\mathscr {D}}\right) |x-y|. \end{aligned} \end{aligned}$$(2.10)We now estimate each term on the right-hand side of inequality (2.8) separately:

(1)
For the first term we estimate
$$\begin{aligned}&\int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') = \int |x-x'|^2 \eta (x') \mu ({\mathrm {d}}x') \\&\quad \le \kappa _{\mu } \int |x-x'|^2 \eta (x') \,{\mathrm {d}}x' + \left| \int |x-x'|^2 \eta (x') \left( \mu ({\mathrm {d}}x') - \kappa _{\mu }\,{\mathrm {d}}x'\right) \right| . \end{aligned}$$Using again (2.6) and \(\kappa _{\mu } \sim 1\) for the first term on the right-hand side, estimate (2.9) with \(\zeta =|x-\cdot |^2 \eta \), and Young’s inequality for the second term we obtain
$$\begin{aligned} \int |x-x'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim r^{d+2} + {\mathscr {D}}. \end{aligned}$$(2.11) 
(2)
For the second term on the right-hand side of (2.8) we use that \({{\,\mathrm{Spt}\,}}\eta \subseteq B_{5}\) and (2.6), recalling also the definition (1.9) of \({\mathscr {E}}\), to estimate
$$\begin{aligned} \int |x'-y'|^2 \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim \int _{B_{5}\times {\mathbb {R}}^d} |x'-y'|^2 \,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim {\mathscr {E}}. \end{aligned}$$(2.12) 
(3)
We may bound the integral in the third term on the right-hand side of (2.8) as for (2.11) by using (2.6), \(\kappa _{\mu } \sim 1\) and estimate (2.9) with^{Footnote 24}\(\zeta =|x-\cdot |\eta \) to get
$$\begin{aligned} \int |x-x'| \eta (x')\,\pi ({\mathrm {d}}x' {\mathrm {d}}y') \lesssim r^{d+1} + \frac{1}{r} {\mathscr {D}}. \end{aligned}$$(2.13)
Inserting the estimates (2.10), (2.11), (2.12), and (2.13) into inequality (2.8) yields
$$\begin{aligned}&\kappa _{\mu } r^{d+1} (x-y) \cdot e \lesssim \left( \epsilon r^{d+1} + \frac{1}{\epsilon r} {\mathscr {D}}\right) |x-y| + r^{d+2} + {\mathscr {D}}+ {\mathscr {E}}+ \delta \left( r^{d+1} + \frac{1}{r} {\mathscr {D}}\right) |x-y|. \end{aligned}$$Since e is arbitrary and \(\kappa _{\mu } \sim 1\), this turns into
$$\begin{aligned} |x-y|&\lesssim (\epsilon + \delta ) |x-y| + \left( \delta +\frac{1}{\epsilon }\right) \left( \frac{1}{r}\right) ^{d+2}{\mathscr {D}}\,|x-y| + r \nonumber \\&\qquad + \left( \frac{1}{r}\right) ^{d+1} ({\mathscr {E}}+ {\mathscr {D}}). \end{aligned}$$(2.14)We first choose \(\epsilon \) and the implicit constant in (1.16), which in view of (2.3) governs \(\delta \), so small that we may absorb the first term on the right-hand side into the left-hand side. We then choose r to be a large multiple of \(({\mathscr {E}}+{\mathscr {D}})^\frac{1}{d+2}\), so that also the second right-hand side term in (2.14) can be absorbed. This choice of r is admissible in the sense of \(r\ll 1\) provided the implicit constant in (1.15) is small enough. This yields (1.17).

\(\square \)
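To make the final absorption argument in Step 3 explicit, here is the arithmetic behind the choice of r (the constant K is our notation, not the paper's):

```latex
% Choose r = K (\mathscr{E}+\mathscr{D})^{1/(d+2)} with K large, so that
% (\mathscr{E}+\mathscr{D}) = (r/K)^{d+2}. The two r-dependent terms in (2.14) become
\begin{aligned}
\Big(\frac{1}{r}\Big)^{d+2}\mathscr{D}\,|x-y| \;\le\; K^{-(d+2)}\,|x-y|,
\qquad
\Big(\frac{1}{r}\Big)^{d+1}(\mathscr{E}+\mathscr{D}) \;=\; K^{-(d+2)}\, r .
\end{aligned}
% The first is absorbed into the left-hand side for K large, alongside the
% (\epsilon+\delta)|x-y| term, leaving |x-y| \lesssim r \sim (\mathscr{E}+\mathscr{D})^{1/(d+2)},
% which is the quantitative bound (1.17).
```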
The next lemma shows that due to the global qualitative information on the cost function c, that is, (C1)–(C4), there is a scale below which we can derive a qualitative bound on the displacement. It roughly says that there is a small enough scale below which the cost essentially behaves like the Euclidean cost, with an error that is uniformly small due to the compactness of the set \(X\times Y\).
Lemma 2.1
Assume that the cost function c satisfies (C1)–(C4) and let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling with c-monotone support.
There exist \({\varLambda }_0 <\infty \) and \(R_0>0\) such that for all \(R\le R_0\) for which
we have the inclusion
Proof
We only prove the inclusion \((B_{5R}\times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{5R}\times B_{{\varLambda }_0 R}\), the other inclusion \(({\mathbb {R}}^d \times B_{5R}) \cap {{\,\mathrm{Spt}\,}}\pi \subseteq B_{{\varLambda }_0 R} \times B_{5R}\) follows analogously since the assumptions are symmetric in x and y.

Step 1 (Use of c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). Let \(R>0\) be such that (2.15) holds, in the sense that we may use Lemma A.1(ii), and set
$$\begin{aligned} {\widetilde{c}}(x,y) := c(x,y) - c(x,0) - c(0,y) + c(0,0). \end{aligned}$$(2.16)We claim that there exists a constant \(\lambda <\infty \), depending only on \(\Vert c\Vert _{{\mathscr {C}}^2(X\times Y)}\), such that
$$\begin{aligned} \nabla _x {\widetilde{c}}(x,y) \in B_{\lambda R} \quad \text {for all} \quad (x,y) \in (B_{5R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$To show this, we use the c-monotonicity (2.1) of \({{\,\mathrm{Spt}\,}}\pi \). Notice that c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \) implies its \({\widetilde{c}}\)-monotonicity:
$$\begin{aligned} {\widetilde{c}}(x,y) - {\widetilde{c}}(x',y) \le {\widetilde{c}}(x,y') - {\widetilde{c}}(x',y') \quad \text {for all} \quad (x,y),(x',y') \in {{\,\mathrm{Spt}\,}}\pi . \end{aligned}$$(2.17)With \(x_t := tx + (1-t)x'\) we can write
$$\begin{aligned} {\widetilde{c}}(x,y) - {\widetilde{c}}(x',y)&= \int _0^1 \nabla _x {\widetilde{c}}(x_t, y)\,{\mathrm {d}}t \cdot (x-x') \\&= \nabla _x {\widetilde{c}}(x, y) \cdot (x-x') + \int _0^1 (\nabla _x {\widetilde{c}}(x_t, y) - \nabla _x {\widetilde{c}}(x, y)) \,{\mathrm {d}}t \cdot (x-x'), \end{aligned}$$and, using that \(\nabla _x{\widetilde{c}}(0,0) = 0\),
$$\begin{aligned} {\widetilde{c}}(x,y') - {\widetilde{c}}(x',y')&= (\nabla _x {\widetilde{c}}(0, y') - \nabla _x {\widetilde{c}}(0,0)) \cdot (x-x') \\&\quad + \int _0^1 (\nabla _x {\widetilde{c}}(x_t, y') - \nabla _x {\widetilde{c}}(0, y')) \,{\mathrm {d}}t \cdot (x-x'). \end{aligned}$$Inserting these two identities into inequality (2.17) gives
$$\begin{aligned} \nabla _x {\widetilde{c}}(x, y) \cdot (x-x')&\le \int _0^1 |\nabla _x {\widetilde{c}}(x_t, y) - \nabla _x {\widetilde{c}}(x, y)| \,{\mathrm {d}}t \, |x-x'| \\&\quad + |\nabla _x {\widetilde{c}}(0, y') - \nabla _x {\widetilde{c}}(0,0)| \, |x-x'| \\&\quad +\int _0^1 |\nabla _x {\widetilde{c}}(x_t, y') - \nabla _x {\widetilde{c}}(0, y')| \,{\mathrm {d}}t \, |x-x'|. \end{aligned}$$Using the boundedness of \(\Vert c\Vert _{{\mathscr {C}}^2(X\times Y)}\) we estimate this expression further by
$$\begin{aligned} \nabla _x {\widetilde{c}}(x, y) \cdot (x-x')&\le \Vert \nabla _{xx} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} \left( \int _0^1 |x_t - x|\,{\mathrm {d}}t + \int _0^1 |x_t|\,{\mathrm {d}}t \right) |x-x'| \nonumber \\&\quad + \Vert \nabla _{xy} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} |y'| \, |x-x'| \nonumber \\&\lesssim \Vert c\Vert _{{\mathscr {C}}^2(X\times Y)} \left( |x'| + |x| + |y'| \right) |x-x'|, \end{aligned}$$(2.18)where in the last step we estimated the integrals and used that
$$\begin{aligned} \Vert \nabla _{xx} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} \le 2 \Vert \nabla _{xx} c\Vert _{{\mathscr {C}}^0(X\times Y)}, \quad \Vert \nabla _{xy} {\widetilde{c}} \Vert _{{\mathscr {C}}^0(X\times Y)} = \Vert \nabla _{xy} c\Vert _{{\mathscr {C}}^0(X\times Y)} . \end{aligned}$$Now by Lemma A.1(ii), given \(x \in B_{5R}\), we have \((S_{R}(x,e) \times B_{7R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) for any direction \(e\in S^{d-1}\). Hence, letting \(e = \frac{\nabla _x {\widetilde{c}}(x,y)}{|\nabla _x {\widetilde{c}}(x,y)|}\), we can find a point \((x',y') \in (S_{R}(x,e) \times B_{7R})\cap {{\,\mathrm{Spt}\,}}\pi \). Since the opening angle of \(S_R(x,e)\) is \(\frac{\pi }{2}\), we have
$$\begin{aligned} \nabla _x {\widetilde{c}}(x,y) \cdot (x-x') = |\nabla _{x}{\widetilde{c}}(x,y)| \, |x-x'| \, e \cdot \frac{x-x'}{|x-x'|} \gtrsim |\nabla _{x}{\widetilde{c}}(x,y)| \, |x-x'|. \end{aligned}$$It follows with (2.18) that there exists \(\lambda <\infty \) such that
$$\begin{aligned} |\nabla _x {\widetilde{c}}(x,y)| \lesssim \Vert c\Vert _{{\mathscr {C}}^2(X\times Y)} \left( |x'| + |x| + |y'| \right) \le \lambda R. \end{aligned}$$ 
Step 2 (Use of global information on c). We claim that there exist \(R_0>0\) and \({\varLambda }_0<\infty \) such that for all \(R\le R_0\) and \(x\in X\), we have that
$$\begin{aligned} B_{\lambda R} \subseteq \nabla _x {\widetilde{c}}(x, B_{{\varLambda }_0 R}). \end{aligned}$$Indeed, by assumption (C2), for any \(x\in X\), the map \(\nabla _x {\widetilde{c}}(x,\cdot ) = \nabla _x c(x,\cdot ) - \nabla _x c(x, 0)\) is one-to-one on Y. Since \(\nabla _{xy}{\widetilde{c}}(x,y) = \nabla _{xy}c(x,y)\), it follows by (C4) that \(\det \nabla _{xy}{\widetilde{c}}(x,y) \ne 0\) for all \((x,y) \in X\times Y\). Hence, the map
$$\begin{aligned} F_x : \nabla _x {\widetilde{c}}(x,Y) \rightarrow Y, \quad v \mapsto \left[ \nabla _x {\widetilde{c}}(x,\cdot ) \right] ^{-1}(v) \end{aligned}$$is well-defined and a \({\mathscr {C}}^1\)-diffeomorphism, so that in particular
$$\begin{aligned} F_x(v) = F_x(0) + DF_x(0)v + o_x(|v|). \end{aligned}$$Using that \(\nabla _x {\widetilde{c}}(x,0) = 0\), which translates into \(F_x(0) = 0\), this takes the form
$$\begin{aligned} F_x(v) = DF_x(0)v + o_x(|v|). \end{aligned}$$By compactness of X, there thus exist a radius \(R_0>0\) and a constant \({\varLambda }_0<\infty \) such that
$$\begin{aligned} \lambda |F_x(v)| \le {\varLambda }_0 |v| \quad \text {for all} \quad x\in X \quad \text {and} \quad |v| \le \lambda R_0, \end{aligned}$$which we may reformulate as
$$\begin{aligned} F_x(B_{\lambda R}) \subseteq B_{{\varLambda }_0 R}, \quad \text {i.e.} \quad B_{\lambda R} \subseteq F_x^{-1}(B_{{\varLambda }_0 R}) = \nabla _x {\widetilde{c}}(x,B_{{\varLambda }_0 R}) \end{aligned}$$for all \(R\le R_0\) and \(x\in X\).

Step 3 (Conclusion). If \((x,y) \in (B_{5R} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \), then we claim that \(|y| \le {\varLambda }_0 R\) for \(R\le R_0\).
Indeed, by Step 1 we have \(\nabla _x{\widetilde{c}}(x,y) \in B_{\lambda R}\). Since \(B_{\lambda R} \subseteq \nabla _x {\widetilde{c}}(x,B_{{\varLambda }_0 R})\) by Step 2, injectivity of \(y \mapsto \nabla _x{\widetilde{c}}(x,y)\) implies that we must have \(y \in B_{{\varLambda }_0 R}\).
\(\square \)
Remark 2.2
Note that if (2.15) is replaced by smallness of the one-sided energy, i.e.,
then Lemma A.1 still applies and we obtain the one-sided qualitative bound
3 Almost-Minimality with Respect to Euclidean Cost
In this section we show that a minimizer of the optimal transport problem with cost function c is an approximate minimizer for the problem with Euclidean cost function. However, in order to make full use of the Euclidean harmonic approximation result from [14, Proposition 1.6] on the Eulerian side, we have to be careful in relating Lagrangian and Eulerian energies. This is where the concept of almost-minimality shows its strength, since it provides us with the missing bound on the Lagrangian energy in terms of its Eulerian counterpart.
Proof of Proposition 1.10
First, let us observe that we may assume in the following that
since otherwise there is nothing to show. By the support assumption (1.20) on \(\mu \) and \(\nu \), the couplings \(\pi \) and \({\widetilde{\pi }}\) satisfy
Together with (3.1) this implies that
By the admissibility of \({\widetilde{\pi }}\), i.e. that \(\pi \) and \({\widetilde{\pi }}\) have the same marginals, we may write
where \({\widetilde{c}}\) is defined as in (2.16). Abbreviating
optimality of \(\pi \) with respect to the cost function c implies that
Using again the admissibility of \({\widetilde{\pi }}\), we may write
Note that by the definition (2.16) of \({\widetilde{c}}\), the function \(\zeta \) satisfies
Now, by (3.6),
so that, using (3.7) and (3.2), it follows that
By the estimate
and Hölder’s inequality, we get
\(\square \)
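The computation underlying the proof above can be sketched as follows; this is our reconstruction from the definition (2.16) of \({\widetilde{c}}\), and the display is ours:

```latex
% Writing \tilde c via the fundamental theorem of calculus in both variables,
%   \tilde c(x,y) = c(x,y) - c(x,0) - c(0,y) + c(0,0)
%                 = \int_0^1\!\!\int_0^1 x \cdot \nabla_{xy} c(tx,sy)\, y \,\mathrm{d}s\,\mathrm{d}t,
% and noting that the same identity for the Euclidean cost produces -x\cdot y,
\begin{aligned}
\big| \tilde c(x,y) + x\cdot y \big|
  = \Big| \int_0^1\!\!\int_0^1 x \cdot \big( \nabla_{xy} c(tx,sy) + \mathbb{I} \big)\, y \,\mathrm{d}s\,\mathrm{d}t \Big|
  \le \| \nabla_{xy} c + \mathbb{I} \|_{\mathscr{C}^0} \, |x|\,|y| .
\end{aligned}
% Since \tfrac12|x-y|^2 differs from -x\cdot y only by the marginal terms
% \tfrac12|x|^2 + \tfrac12|y|^2, whose integrals agree for all admissible
% couplings, minimizing \tilde c is close to minimizing the Euclidean cost,
% with an error controlled by the right-hand side.
```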
4 Eulerian Point of View
The purpose of this section is to translate almost-minimality from the Lagrangian setting, as encoded by Proposition 1.10, to the Eulerian setting so that it may be plugged into the proof of the harmonic approximation result [14, Proposition 1.7]. This is the purpose of Lemma 1.12 below, which relates a (Lagrangian) coupling \(\pi \), which we think of as being an almost-minimizer, to its Eulerian description \((\rho ,j)\) (introduced in (1.24)).
The proof of Lemma 1.12 relies on the fact that the Eulerian cost is always dominated by the Lagrangian one, while the reverse inequality in general only holds for minimizers of the Euclidean transport cost. However, in the proof of Lemma 1.12 we can use almost-minimality of \(\pi \), together with the equality of Eulerian and Lagrangian energies for minimizers of the Euclidean transport cost (Remark 1.11), to overcome this nuisance.
Proof of Lemma 1.12
Let \(\pi ^* \in {\mathrm {argmin}} \, W^2(\mu , \nu )\) be a minimizer of the Euclidean transport cost and let \((\rho ^*, j^*)\) be its Eulerian description. Then by (1.31) and (1.34) it follows that
$$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int |x-y|^2 \,{\mathrm {d}}\pi \le \int |x-y|^2 \,{\mathrm {d}}\pi ^* + {\varDelta }. \end{aligned}$$
Since \(\pi ^* \in {\mathrm {argmin}} \, W^2(\mu , \nu )\), we have that
$$\begin{aligned} \int |x-y|^2 \,{\mathrm {d}}\pi ^* = \int \frac{1}{\rho ^*} |j^*|^2, \end{aligned}$$
see Remark 1.11, so that by minimality of \(\pi ^*\), which implies minimality of \((\rho ^*,j^*)\) in the Benamou–Brenier formulation (1.32) of \(W^2(\mu , \nu )\), we obtain
$$\begin{aligned} \int \frac{1}{\rho } |j|^2 \le \int |x-y|^2 \,{\mathrm {d}}\pi \le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + {\varDelta } \end{aligned}$$
for any \(({{\widetilde{\rho }}},{{\widetilde{j}}})\) satisfying (1.35), in particular also for \(({{\widetilde{\rho }}},{{\widetilde{j}}})=(\rho ,j)\). \(\square \)
The Eulerian version of almost-minimality also implies the following localized version, which will be needed for the harmonic approximation:
Corollary 4.1
Let \(\pi \in {\varPi }(\mu , \nu )\) be a coupling between the measures \(\mu \) and \(\nu \) with the property that there exists a constant \({\varDelta } <\infty \) such that
$$\begin{aligned} \int |x-y|^2 \,{\mathrm {d}}\pi \le \int |x-y|^2 \,{\mathrm {d}}{\widetilde{\pi }} + {\varDelta } \end{aligned}$$
for any \({\widetilde{\pi }}\in {\varPi }(\mu , \nu )\), and let \((\rho ,j)\) be its Eulerian description defined in (1.24). For any \(r > 0\) small enough^{Footnote 25}, let \(f_r\) be the inner trace of j on \(\partial B_r \times (0,1)\) in the sense of (1.27), i.e.
Then for any density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfying
$$\begin{aligned} \int \partial _t \zeta \,{\mathrm {d}}{\widetilde{\rho }} + \nabla \zeta \cdot {\mathrm {d}}{\widetilde{j}} = \int _{B_r} \zeta _1 \,{\mathrm {d}}\nu - \int _{B_r} \zeta _0 \,{\mathrm {d}}\mu + \int _{\partial B_r \times [0,1]} \zeta \,{\mathrm {d}}f_r \end{aligned}$$(4.3)
there holds
$$\begin{aligned} \int _{B_r\times [0,1]} \frac{1}{\rho } |j|^2 \le \int \frac{1}{{\widetilde{\rho }}} |{\widetilde{j}}|^2 + {\varDelta }. \end{aligned}$$(4.4)
Proof
Let \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfy (4.3). Then the density-flux pair \(({\widehat{\rho }}, {\widehat{j}}) := ({\widetilde{\rho }}, {\widetilde{j}}) + (\rho , j)\lfloor _{B_r^c\times [0,1]}\) obeys (1.35). Indeed,
Hence, by Lemma 1.12 and subadditivity of the Eulerian cost it follows that
which implies (4.4). \(\square \)
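Schematically, the gluing argument in the proof above reads as follows (our reconstruction of the chain of inequalities):

```latex
% Apply the Eulerian almost-minimality of Lemma 1.12 to the glued competitor
% (\hat\rho,\hat\jmath) = (\tilde\rho,\tilde\jmath) + (\rho,j)\lfloor_{B_r^c\times[0,1]}
% and use subadditivity of (\rho,j) \mapsto \int \frac{1}{\rho}|j|^2:
\begin{aligned}
\int_{B_r\times[0,1]} \frac{1}{\rho}|j|^2
  &= \int \frac{1}{\rho}|j|^2 - \int_{B_r^c\times[0,1]} \frac{1}{\rho}|j|^2 \\
  &\le \int \frac{1}{\hat\rho}|\hat\jmath|^2 + \Delta - \int_{B_r^c\times[0,1]} \frac{1}{\rho}|j|^2 \\
  &\le \int \frac{1}{\tilde\rho}|\tilde\jmath|^2 + \Delta,
\end{aligned}
% where the last step uses that the Eulerian cost of the glued pair is at most
% the sum of the costs of its two summands.
```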
Lemma 1.12, in the form of the bound (1.44), allows us to relate the Eulerian and Lagrangian sides of the harmonic approximation result, which will be central in the application to the one-step improvement Proposition 1.16. The proof of this Lagrangian version is very similar to [14, Proof of Theorem 1.5]; however, we stress again that since we are not dealing with minimizers of the Euclidean transport cost, one has to be careful when passing from Eulerian to Lagrangian energies.
Proof of Lemma 1.14
By scaling, it is enough to show that
By the \(L^{\infty }\) bound, we know that if \((x,y) \in (B_{1} \times {\mathbb {R}}^d) \cap {{\,\mathrm{Spt}\,}}\pi \), then for any \(t\in [0,1]\) there holds \(t x + (1-t)y \in B_{2}\). Hence, we may estimate
Multiplying out the square and using the definition of \((\rho ,j)\) from (1.24), we may write^{Footnote 26}
Now note that (1.30) implies a local counterpart of (1.31): for any open set \(B \subseteq {\mathbb {R}}^d\),
Arguing with an open \(\epsilon \)-neighborhood of B and continuity of the right-hand side with respect to B, one may show that (4.7) holds for any closed set B, so that in particular
which, together with (1.44), gives
so that (4.5) follows from the identity
\(\square \)
5 Harmonic Approximation
In this section we sketch the proof of the (Eulerian) harmonic approximation Theorem 1.13. As already noted in the introduction, the proof of Theorem 1.13 is done at the Eulerian level (as in [13, 14]) by constructing a suitable competitor.
Proof of Theorem 1.13
By scaling we may without loss of generality assume that \(R=1\). Let \((\rho ,j)\) be the Eulerian description of the coupling \(\pi \in {\varPi }(\mu , \nu )\). The proof of the Eulerian version of the harmonic approximation consists of the following four steps, at the heart of which is the construction of a suitable competitor (Step 3). Note that since we want to make the dependence on the parameter M in the \(L^{\infty }\) bound (1.37) precise, one actually has to look a bit more closely into the proofs of the corresponding statements in [14], since in their presentation of the results the estimate \(M\lesssim ({\mathscr {E}}_6 + {\mathscr {D}}_6)^{\frac{1}{d+2}}\) is used.^{Footnote 27}
Step 1 (Passage to a regularized problem). Choose a good radius \(R_*\in (3,4)\) for which the flux \({\overline{f}}\) is well-behaved on \(\partial B_{R_*}\). Actually, since we are working with \(L^2\)-based quantities, to be able to get \(L^2\) bounds on \(\nabla {\varPhi }\), we would have to be able to estimate \({\overline{f}}\) in \(L^2\), or at least in the Sobolev trace space \(L^{\frac{2(d-1)}{d}}\). However, since the boundary fluxes j are just measures and since for the approximate orthogonality (see Step 2) a regularity theory is required up to the boundary, one first has to pass to a solution \(\varphi \) (with \(\int _{B_{R_*}} \varphi \,{\mathrm {d}}x = 0\)) of a regularized problem
where \({\mathrm {const}}\) is the generic constant for which the equation has a solution, and \({\widehat{g}}\) is a regularization through rearrangement of \({\overline{f}}\) with good \(L^2\) bounds (see [14, Section 3.1.2] for details).
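Only as a guide for the reader (the precise construction is in [14, Section 3.1.2]), the regularized problem alluded to above can be thought of as a Poisson problem with Neumann data \({\widehat{g}}\):

```latex
$$\begin{aligned}
\Delta \varphi = {\mathrm {const}} \quad \text {in } B_{R_*}, \qquad
\nu \cdot \nabla \varphi = {\widehat{g}} \quad \text {on } \partial B_{R_*},
\qquad \text {with} \quad
{\mathrm {const}} = \frac{1}{|B_{R_*}|} \int _{\partial B_{R_*}} {\widehat{g}},
\end{aligned}$$
```

the value of \({\mathrm {const}}\) being forced by the divergence theorem, and the normalization \(\int _{B_{R_*}} \varphi \,{\mathrm {d}}x = 0\) fixing the additive constant.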
Using properties of the regularized flux \({\widehat{g}}\) and elliptic regularity, the error made by this approximation can be quantified as
for any \(\alpha \in (0,1)\), see [14, Proof of Proposition 1.7]. Note that by the definition of \(\rho \) in (1.24) and assumption (1.36) we have that
so that together with (5.2) we may estimate, using \(M\le 1\),
for any \(\alpha \in (0,1)\). Note that the error term is superlinear in \({\mathscr {E}}_6 + {\mathscr {D}}_6\).
Step 2 (Approximate orthogonality [14, Proof of Lemma 1.8]). For every \(0<\tau \ll 1\) there exist \(\epsilon _{\tau } > 0\), \(C_{\tau } < \infty \) such that if \({\mathscr {E}}_6 + {\mathscr {D}}_6 \le \epsilon _{\tau }\), then
The proof of (5.3) essentially relies on the representation formula
The three error terms in the second line of this equality are then bounded as follows. The bound on the first term uses that \(\mu \) and \(\nu \) are close in Wasserstein distance. The estimate on the second term relies on the fact that \({\widehat{g}}\) and \({\overline{f}}\) are close^{Footnote 28}; this in turn relies on the choice of a good radius in Step 1 and on \(L^2\) estimates up to the boundary for \(\nabla \varphi \). The bound on the third error term uses elliptic regularity theory and a restriction result for the Wasserstein distance, which implies that \(W^2_{B_{R_*}}(\int _0^1\rho ,\kappa ) \lesssim {\mathscr {E}}_6 + {\mathscr {D}}_6\).^{Footnote 29} This estimate actually requires a further regularization of \({\widehat{g}}\) and, since it relies on interior regularity estimates, explains why one has to pass from \(B_{R_*}\) to the slightly smaller ball \(B_{2}\) in the estimate (5.3). A close inspection of [14, Proof of Lemma 1.8] shows that the term involving M in these error estimates comes multiplied by a superlinear power of \({\mathscr {E}}_6 + {\mathscr {D}}_6\), as in Step 1, so that we may bound \(M\le 1\) and still obtain an arbitrarily small prefactor in (5.3) by choosing \({\mathscr {E}}_6 + {\mathscr {D}}_6\) small enough.
Step 3 (Construction of a competitor [14, Proof of Lemma 1.9]). For every \(0<\tau \ll 1\) there exist \(\epsilon _{\tau }>0\) and \(C,C_{\tau }<\infty \) such that if \({\mathscr {E}}_6+{\mathscr {D}}_6\le \epsilon _{\tau }\), then there exists a density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfying (4.3) for \(r=R_*\), and such that
Step 4 (Almost-minimality on the Eulerian level). Since the density-flux pair \(({\widetilde{\rho }}, {\widetilde{j}})\) satisfies (4.3) for \(r=R_*\), Corollary 4.1 implies that
Combining the above steps, we have proved that for any \(0<\tau \ll 1\) there exist \(\epsilon _{\tau }>0\) and \(C_{\tau }<\infty \) such that if \({\varPhi }\) is the solution of (1.40), then
\(\square \)
6 One-Step Improvement
The following proposition is a one-step improvement result, which will be the basis of a Campanato iteration in the proof of Theorem 1.1. Note that the iteration is more complicated than in [13], because at each step we have to restrict the c-optimal coupling to the smaller cross on which the \(L^{\infty }\)-bound holds in order to apply the harmonic approximation result. As a consequence, we have to make sure that the qualitative bound (2.15) on the displacement (which is an important ingredient in obtaining a quantitative version of the \(L^{\infty }\)-bound (1.17)) is propagated in each step of the iteration.^{Footnote 30}
We start with the short proof of Lemma 1.8, which is the starting point of each iteration step.
Proof of Lemma 1.8
Let \(\pi \in {\varPi }(\mu , \nu )\) be a c-optimal coupling, and let \(\pi _{{\varOmega }}:=\pi \lfloor _{{\varOmega }}\) be its restriction to a subset \({\varOmega }\subseteq {\mathbb {R}}^d \times {\mathbb {R}}^d\). Then \(\pi _{{\varOmega }} \in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }})\), where the marginal measures \(\mu _{{\varOmega }}\), \(\nu _{{\varOmega }}\) are defined in (1.19).
Given any coupling \({\widetilde{\pi }} \in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }})\), we can define \({\widehat{\pi }}:= \pi - \pi _{{\varOmega }} + {\widetilde{\pi }}\). It is easy to see that \({\widehat{\pi }}\) is an admissible coupling between the measures \(\mu \) and \(\nu \); hence, by c-optimality of \(\pi \) and additivity of the cost functional with respect to the transference plan, we obtain
hence
that is, \(\pi _{{\varOmega }}\) is a c-optimal coupling between \(\mu _{{\varOmega }}\) and \(\nu _{{\varOmega }}\). \(\square \)
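Written out (a sketch consistent with the definitions of \({\widehat{\pi }}\) and \(\pi _{{\varOmega }}\) above, the cost of a plan being \(\int c\,{\mathrm {d}}\pi \)), the comparison reads:

```latex
$$\begin{aligned}
\int c \,\mathrm {d}\pi \;\le \; \int c \,\mathrm {d}{\widehat{\pi }}
= \int c \,\mathrm {d}\pi - \int c \,\mathrm {d}\pi _{{\varOmega }} + \int c \,\mathrm {d}{\widetilde{\pi }},
\qquad \text {hence} \qquad
\int c \,\mathrm {d}\pi _{{\varOmega }} \le \int c \,\mathrm {d}{\widetilde{\pi }}
\quad \text {for all } {\widetilde{\pi }} \in {\varPi }(\mu _{{\varOmega }}, \nu _{{\varOmega }}).
\end{aligned}$$
```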
As a direct application, and as a further preparation for the one-step improvement, we present the short proof of Corollary 1.9.
Proof of Corollary 1.9
Let \(\pi \in {\varPi }(\mu ,\nu )\) be c-optimal. Then by Lemma 1.8 the restricted coupling \(\pi _R\) is a c-optimal coupling between the measures \(\mu _R\) and \(\nu _R\) defined via
for any \(A\subseteq {\mathbb {R}}^d\) Borel. In particular, we have that
If \(A\subseteq B_{2R}^c\), then
since for any \((x,y)\in {{\,\mathrm{Spt}\,}}\pi _R\), by assumption we have that \(|x-y| \le MR \le R\), so \((A\times B_R) \cap {{\,\mathrm{Spt}\,}}\pi = \emptyset \). Hence, \({{\,\mathrm{Spt}\,}}\mu _R \subseteq B_{2R}\). Similarly, if \(A\subseteq B_R\), then
which implies that \(\mu _R = \mu \) on \(B_R\).
By symmetry, the same properties hold for \(\nu _R\). \(\square \)
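Lemma 1.8 and Corollary 1.9 can also be checked numerically on a toy discrete problem (a minimal sketch, not from the paper: uniform measures on four points of the line, quadratic cost, optimal assignment found by brute force):

```python
# Restriction of an optimal coupling is optimal between its own marginals:
# toy check of Lemma 1.8 with uniform discrete measures and quadratic cost.
import itertools

def cost(x, y):
    # quadratic cost c(x, y) = |x - y|^2 / 2 in one dimension
    return 0.5 * (x - y) ** 2

x = [0.0, 1.0, 2.0, 5.0]   # support of mu
y = [0.1, 0.9, 2.2, 5.3]   # support of nu
n = len(x)

# brute-force c-optimal assignment (deterministic coupling between uniform measures)
best = min(itertools.permutations(range(n)),
           key=lambda s: sum(cost(x[i], y[s[i]]) for i in range(n)))

# restrict the coupling to Omega = [0, 3] x [0, 3]
kept = [i for i in range(n) if x[i] <= 3 and y[best[i]] <= 3]
restricted_cost = sum(cost(x[i], y[best[i]]) for i in kept)

# c-optimal cost between the marginals of the restricted coupling
xs, ys = [x[i] for i in kept], [y[best[i]] for i in kept]
opt = min(sum(cost(xs[i], ys[s[i]]) for i in range(len(xs)))
          for s in itertools.permutations(range(len(xs))))

assert abs(restricted_cost - opt) < 1e-12   # the restriction is optimal
```

Replacing the quadratic cost by any other cost function leaves the argument (and the check) unchanged, since only additivity of the cost under restriction is used.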
We now give the proof of the one-step improvement Proposition 1.16, which is the workhorse of the Campanato iteration.
Proof of Proposition 1.16
Step 1 (Rescaling). Without loss of generality we can assume that \(R=1\). Hence, to simplify notation we will also use the shorthand
Note that by the normalization \(\rho _0(0)=\rho _1(0)=1\) we have that \(\frac{1}{2}\le \rho _j \le 2\) for \(j=0,1\), provided that \([\rho _0]_{\alpha } + [\rho _1]_{\alpha }\) is small enough. In particular, Lemma A.4 then implies that
Indeed, let \(({\widetilde{x}},{\widetilde{y}}) = S_R(x,y) := (R^{-1}x,R^{-1}y)\) and set
and \({\widetilde{b}} = R^{-1} b\). We still have \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0) = 1\) and \(\nabla _{{\widetilde{x}}{\widetilde{y}}}{\widetilde{c}}(0,0) = {\mathbb {I}}\), and \({\widetilde{\pi }} := R^{-d}(S_R)_{\#}\pi \) is the \({\widetilde{c}}\)-optimal coupling between \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\).
Step 2 (\(L^{\infty }\)-bound for \(\pi \)). We claim that the coupling \(\pi \) satisfies the \(L^{\infty }\)-bound
with \(M = C_{\infty } ({\mathscr {E}}+{\mathscr {D}})^{\frac{1}{d+2}}\) for some constant \(C_{\infty }<\infty \) depending only on d.
In view of the harmonic approximation Theorem 1.13 it is therefore natural to consider the restricted coupling, which by Lemma 1.8 and Corollary 1.9 is a c-optimal coupling between its own marginals, denoted by \(\mu \) and \(\nu \).
Indeed, since \(\pi \) is coptimal, its support is cmonotone. By assumptions (1.49), (1.50), and Remark 1.7 we may therefore estimate
Hence, appealing to Proposition 1.5 (suitably rescaled), we obtain the claimed \(L^{\infty }\)-bound (6.1). Notice that the dependence of the smallness assumption (1.50) on \({\varLambda }\) only enters through the \({\varLambda }\)-dependence of (6.2).
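The exponent \(\frac{1}{d+2}\) in the \(L^{\infty }\)-bound has a simple dimensional heuristic (a sketch only; the actual proof is Proposition 1.5, suitably rescaled): if some point of the support has displacement of order MR, then monotonicity of the support forces a whole neighborhood of volume of order \((MR)^d\) to be displaced by order MR, so that

```latex
$$\begin{aligned}
{\mathscr {E}} \;\gtrsim \; \frac{1}{R^d} \, (MR)^d \, \Big (\frac{MR}{R}\Big )^2 = M^{d+2},
\qquad \text {that is,} \quad M \lesssim {\mathscr {E}}^{\frac{1}{d+2}}.
\end{aligned}$$
```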
Step 3 (Properties of the marginals \(\mu \) and \(\nu \) of the restricted coupling). We claim that the marginals \(\mu \) and \(\nu \) satisfy
Indeed, due to the \(L^{\infty }\)bound (6.1) the marginal measures are supported on \(B_{7}\) if \({\mathscr {E}}+{\mathscr {D}}\ll 1\) (such that \(M\le 1\)). Furthermore, from Corollary 1.9 we have that \(\mu = \rho _0 \,{\mathrm {d}}x\) and \(\nu = \rho _1 \,{\mathrm {d}}y\) on \(B_6\), as well as \(\mu \le \rho _0\,{\mathrm {d}}x\), which implies that
since by assumption (1.50) we may assume that \([\rho _0]_{\alpha }\le 1\).
Step 4 (Almost-minimality of the restricted coupling and applicability of the harmonic approximation Theorem 1.13). We show next that this coupling is an almost-minimizer of the Euclidean transport problem in the sense that for any \({\widetilde{\pi }}\in {\varPi }(\mu ,\nu )\) there holds
with \({\varDelta } = C [\nabla _{xy}c]_{\alpha } {\mathscr {E}}^{\frac{1}{2}}\), and that it satisfies the assumptions of Theorem 1.13.
Indeed, by Step 3 and c-optimality, we may apply Proposition 1.10 to obtain inequality (6.3) with
where we also used that \({\mathscr {E}}_7 \lesssim {\mathscr {E}}\).
Note that in view of Lemma 1.12 the Eulerian description of the restricted coupling satisfies (1.38). Together with Step 2 and Step 3 this implies that the assumptions of Theorem 1.13 are fulfilled.
Step 5 (Definition of b and B and proof of estimate (1.51)). For given \(\tau > 0\), to be chosen later, let \(\epsilon _{\tau }\) and \(C_{\tau }\) be as in the harmonic approximation result (Theorem 1.13). We claim that, with the harmonic gradient field \(\nabla {\varPhi }\) defined in (1.40),
satisfy the bounds (1.51).
Since \(\rho _0(0) = \rho _1(0) = 1\), we have \(\frac{1}{2} \le \rho _j \le 2\), \(j=0,1\), provided \([\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2\) is small enough, so that (1.12) holds. In particular, since \(\mu \) agrees with \(\rho _0 \,{\mathrm {d}}x\) on \(B_6\) and \(\nu \) agrees with \(\rho _1 \,{\mathrm {d}}y\) on \(B_6\), we may estimate
Assume now that
so that Theorem 1.13 allows us to define the vector b and the matrices A, B as in (6.5). Note that since A is symmetric, so is the matrix B. By (1.42) and (1.12), we then have
and the same estimate holds for \(|A|^2\), so that
This proves the estimate (1.51). Furthermore, recalling the definition of \({\varPhi }\) in (1.40), we bound
In view of (1.51) we may assume that \(b \in B_9\), so that^{Footnote 31}
and similarly for \(D = \nabla _{xy}c(0,b)\),
which implies that
Now, noticing that \(\gamma = \rho _1(b)^{\frac{1}{d}} \left( \frac{\det B^2}{\det D}\right) ^{\frac{1}{d}}\) because \(\rho _0(0) = 1\), (6.7), (6.8) together with \(\rho _1(0)=1\), and (6.10) imply
Step 6 (Mapping properties of Q). We show that if \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\),^{Footnote 32} then
Indeed, we have
and from (1.51), (6.9), and (6.11) it follows that \(B^{-1}B_{\theta } \subseteq B_{2\theta }\) and \(\gamma ^{-1} D^{-*}B^{*}B_{\theta } + b \subseteq B_{2\theta }\) if \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\), which we assume to be true from now on.
Step 7 (Transformation of the displacement). We next show that for all \(({\widehat{x}},{\widehat{y}}) = Q(x,y) \in {\mathbb {R}}^d \times {\mathbb {R}}^d\) and \(t\in [0,1]\) there holds
where the error \(\delta \) is controlled by
Indeed, by (1.46) we have
and the second term, which will turn out to be an error term, can be bounded by
We show next that \(B^2 x + b \approx x + \nabla {\varPhi }(tx + (1-t)y)\), with an error that can be controlled. This relies on the fact that
where
and Taylor approximation. Indeed,
so that with (6.16), (6.17), and (6.5),
Collecting all the error terms from (6.15) and (6.19) then yields (6.13).
Step 8 (Proof of the energy estimate (1.52)). In this final step we prove the energy estimate (1.52).
To this end, let us compute
For the first term on the righthand side of (6.20), we use the harmonic approximation theorem in its Lagrangian version (Lemma 1.14), which is applicable by Step 4. In particular, we may estimate, assuming that \(\theta \le \frac{1}{2}\),
so that by (6.4), (1.12), and Young’s inequality,
To estimate the error term we use the following consequence of the \(L^{\infty }\)bound (6.1): if \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^{d+2}\), which holds in view of (1.50), then
With this in mind, we will bound each term in (6.14) separately. Note that
and similarly . Moreover, by (6.22), we may estimate for any
Together with the bound (1.51), and since we assumed that \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^2\), which implies that \(|b|^2\lesssim \theta ^2\), it follows that
Hence with the bounds (6.11), (6.9), and (1.42), together with \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2 \ll \theta ^2\) and \(\theta \le 1\), we obtain
Inserting the estimates (6.21) and (6.23) into (6.20) yields the existence of a constant \(C=C(d,\alpha )\) such that
We now fix \(\theta =\theta (d,\alpha ,\beta )\) such that \(C \theta ^2 \le \frac{1}{3} \theta ^{2\beta }\), which is possible since \(\beta <1\), and then \(\tau \) small enough so that \(C \tau \theta ^{-(d+2)} \le \frac{1}{3} \theta ^{2\beta }\). This fixes \(\epsilon _{\tau }\) given by Theorem 1.13. Finally, we may choose \({\mathscr {E}}+ [\rho _0]_{\alpha }^2 + [\rho _1]_{\alpha }^2\) small enough so that
yielding (1.52).
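The order of the parameter choices in Step 8 (first \(\theta \), then \(\tau \)) can be made concrete with a small numerical check (the constants C, \(\beta \), d below are made up for illustration; only the inequalities matter):

```python
# Illustrative check of the parameter choices closing the one-step improvement:
# first fix theta with C * theta^2 <= theta^(2*beta) / 3 (possible since beta < 1),
# then tau with C * tau * theta^(-(d+2)) <= theta^(2*beta) / 3.
C, beta, d = 10.0, 0.8, 3   # made-up constants for illustration

# theta <= (1/(3C))^(1/(2(1-beta))) is equivalent to C*theta^2 <= theta^(2*beta)/3;
# the factor 0.9 keeps us strictly inside the admissible range
theta = min(0.5, 0.9 * (1.0 / (3.0 * C)) ** (1.0 / (2.0 * (1.0 - beta))))
assert C * theta ** 2 <= theta ** (2 * beta) / 3

# tau <= theta^(2*beta + d + 2)/(3C) gives the second smallness condition
tau = theta ** (2 * beta + d + 2) / (4 * C)
assert C * tau * theta ** (-(d + 2)) <= theta ** (2 * beta) / 3
```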
Step 9 (Proof of (1.53)). Let and set \((x,y) := Q^{1}({\widehat{x}}, {\widehat{y}})\), see Lemma 1.15 for the definition of Q. Then by A.3, there holds \((x,y) \in {{\,\mathrm{Spt}\,}}\pi \). Similarly to Step 6, we have that , provided that \({\mathscr {E}}+ {\mathscr {D}}+ [\nabla _{xy}c]_{\alpha }^2 \ll \theta ^2\), so that
Using the \(L^{\infty }\) bound (1.17) (notice that \(\theta \le \frac{2}{3}\)) and the fact that we assumed \({\mathscr {E}}+ {\mathscr {D}}\ll \theta ^{d+2}\), we obtain
so that \((x,y) \in {\mathbb {B}}_{\theta , 2\theta }\). It thus follows that \(({\widehat{x}}, {\widehat{y}}) \in Q({\mathbb {B}}_{\theta , 2\theta })\). Following the proof of Step 6, we have
and conclude that . \(\square \)
7 Proof of \(\epsilon \)-regularity
To lighten the notation in this section, let us set
We are now in the position to give the proof of our main \(\epsilon \)regularity theorem, which we restate for the reader’s convenience:
Theorem 1.1
Assume that (C1)–(C4) hold and that \(\rho _0(0) = \rho _1(0) =1\), as well as \(\nabla _{xy}c(0,0)= {\mathbb {I}}\). Assume further that 0 is in the interior of \(\, X \times Y\).
Let \(\pi \) be a c-optimal coupling from \(\rho _0\) to \(\rho _1\). There exists \(R_0= R_0(c)>0\) such that for all \(R\le R_0\) with
there exists a function \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) such that \({{\,\mathrm{Spt}\,}}\pi \cap (B_R \times {\mathbb {R}}^d) \subseteq {\mathrm {graph}}\,T\), and the estimate
holds.
Proof of Theorem 1.1
To simplify notation, we write \({\mathscr {E}}_R\) for \({\mathscr {E}}_R(\pi )\) and \({\mathscr {H}}_R\) for \({\mathscr {H}}_{R}(\rho _0, \rho _1, c)\). Note that since \(\rho _0(0)=\rho _1(0)=1\) and \(R^{\alpha }\left( [\rho _0]_{\alpha , 4R}+[\rho _1]_{\alpha , 4R}\right) \) is small by (7.1), we may assume throughout the proof that \(\frac{1}{2}\le \rho _0,\rho _1 \le 2\) on \(B_{4R}\).
Step 1 (Control of the full energy at scale 2R). We show that under assumption (7.1) we can bound
Indeed, from Remarks 1.6 and 2.2 , we know that (7.1) implies the \(L^{\infty }\) bound
Let us now prove that
from which we get
thus yielding (7.3). To this end, assume there exists \((x,y) \in (B_{4R}^c \times B_{2R}) \cap {{\,\mathrm{Spt}\,}}\pi \). Let also \(x' \in [x,y] \cap \partial B_{\frac{5}{2}R}\) and \(y' \in {\mathbb {R}}^d\) such that \((x',y') \in {{\,\mathrm{Spt}\,}}\pi \), see Figure 1. Then by (7.4), (7.1) and (1.12), we have
From (2.5), recalling the definition (2.3) of \(\delta \) and the fact that \(\delta \ll 1\), we get, upon writing \(y-y'=(y-x')+(x'-y')\),
which together with (7.6) yields a contradiction, proving (7.5).
In the following, Step 2 – Step 6 are devoted to proving that under the assumption
the following Campanato estimate holds:
Step 2 (Getting to a normalized setting). We claim that it is enough to prove that if
then for all \(r \le \frac{R}{2}\),
Let us first notice that for any \(x_0 \in B_{R}\), so that
Therefore, in view of (7.7), it is sufficient to show for all \(x_0 \in B_{R}\) that, if
then for all \(r \le \frac{R}{2}\),
Performing a similar change of coordinates as in Lemma 1.15, namely letting \(M := \nabla _{xy}c(x_0,0)\), \(\gamma := (\rho _0(x_0) \det M)^{\frac{1}{d}}\) and \(S(y) := \gamma M^{-1} y\) and defining
we have that \({\widetilde{\rho }}_0(x_0) = {\widetilde{\rho }}_1(0) = 1\) and \(\nabla _{xy}{\widetilde{c}}(x_0,0) = {\mathbb {I}}\). Furthermore, \({\widetilde{\pi }}\) is a \({\widetilde{c}}\)-optimal coupling between \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\), and \({\widetilde{\pi }}\), \({\widetilde{\rho }}_0\), \({\widetilde{\rho }}_1\) and \({\widetilde{c}}\) still satisfy (7.11). Without loss of generality, we may thus assume \(x_0=0\), and then (7.11) turns into (7.9) and (7.12) turns into (7.10).
Step 3 (First step of the iteration). Recall that (1.12) holds, i.e.,
By (a rescaling of) Lemma 2.1, there exist \({\varLambda }_0 < \infty \) and \(R_0>0\) such that for all \(R\le R_0\) for which (7.9) holds, we have
Let \(\beta >0\). In view of (7.9) and (7.13), we may apply Proposition 1.16 to get the existence of a symmetric matrix \(B_1\) and a vector \(b_1\) such that, defining \(D_1 := \nabla _{xy}c_1(0, b_1)\), \(\gamma _1 := (\rho _1(b_1) \det (B_1^2 D_1^{-1}))^{\frac{1}{d}}\) and \(\rho _{0,1}\), \(\rho _{1,1}\), \(c_1\) and \(\pi _1\) as \({\widehat{\rho }}_0\), \({\widehat{\rho }}_1\), \({\widehat{c}}\) and \({\widehat{\pi }}\) in Lemma 1.15, we get
and that \(\pi _1\) is a \(c_1\)optimal coupling between \(\rho _{0,1}\) and \(\rho _{1,1}\). From (1.53) we also have the inclusion
so that we may fix \({\varLambda }=27\) from now on (assuming w.l.o.g. that \({\varLambda }_0\ge 27\)).
Step 4 (Iterating Proposition 1.16). We now show that we can iterate Proposition 1.16 a finite number of times.
Notice that from the estimates (1.51), (6.9) and (6.11) we have the inclusion
Hence, we can bound
Estimates (1.51), (6.8), (6.9), and (6.11) thus yield for \(i \in \{0,1\}\) (\(\rho _{0,1}\) is treated similarly)
where C is a constant depending only on d and \(\alpha \). Using (1.47), the same kind of computation gives
so that
Therefore, (7.9), (7.14), (7.15), and (7.18) imply that we may iterate Proposition 1.16 K times, \(K\ge 2\), to find sequences of matrices \(B_k\) and \(D_k\), a sequence of vectors \(b_k\), and a sequence of real numbers \(\gamma _k\) such that, setting for \(1 \le k \le K\)
we recover \(\rho _{0,k}(0) = \rho _{1,k}(0) = 1\) and \(\nabla _{xy}c_k(0,0) = {\mathbb {I}}\), \(\pi _k\) is a \(c_k\)-optimal coupling between \(\rho _{0,k}\) and \(\rho _{1,k}\), and from (1.53) we have
Moreover, defining
we have
Arguing as for (7.18), there exists a constant \(C_1 = C_1(d, \alpha ) < \infty \) such that
Step 5 (Smallness at any step of the iteration). From now on, we fix \(\beta > \alpha \). Let us show by induction that there exists a constant \(C_2 = C_2(d, \alpha , \beta )<\infty \) such that for all \(K \in {\mathbb {N}}\)
This will show, together with (7.21), that we can keep on iterating Proposition 1.16. As an outcome of this induction, the estimate
holds for all \(K\in {\mathbb {N}}\) with \(C_3 := \prod _{k=1}^{+\infty } (1+\theta ^{k\alpha }) < \infty \).
We set
By (7.14) and (7.18), the estimates (7.28) and (7.29) hold for \(K=1\) provided \({\mathscr {E}}_R +{\mathscr {H}}_R\) is small enough, since \(C_2 \ge \max \{\theta ^{2(\beta -\alpha )}, C_{\beta }\theta ^{2\alpha }\}\). Assume now that (7.28) and (7.29) hold for all \(1 \le k \le K-1\). By the induction hypothesis, we have
Combining these two estimates with (7.27) for \(k=K\) and assuming that
which is possible provided \({\mathscr {E}}_R+{\mathscr {H}}_R\) is small enough, we get (7.29). Similarly, by (7.23) with \(k=K\) we obtain, using that \(C_2 \ge \frac{C_3C_{\beta }\theta ^{2\alpha }}{1-\theta ^{2(\beta -\alpha )}}\) by (7.31), the bound
This concludes the induction.
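The structure of the induction can be illustrated by a schematic numerical model (all constants below are invented; the point is only that a one-step decay factor \(\theta ^{2\beta }\) with \(\beta > \alpha \), fed by data decaying like \(\theta ^{2\alpha k}\), yields the overall Campanato rate \(\theta ^{2\alpha k}\)):

```python
# Schematic model of the Campanato iteration in Step 5: a one-step improvement
#   E_{k+1} <= theta^(2 beta) E_k + C H_k,
#   H_{k+1} <= theta^(2 alpha) (1 + theta^(k alpha)) H_k,
# with beta > alpha forces (E_k + H_k) <= C2 theta^(2 alpha k) (E_0 + H_0).
theta, alpha, beta, C = 0.3, 0.5, 0.8, 2.0   # invented constants
E, H = 1e-3, 1e-3
E0H0 = E + H
for k in range(1, 60):
    E = theta ** (2 * beta) * E + C * H
    H = theta ** (2 * alpha) * (1 + theta ** (k * alpha)) * H
    ratio = (E + H) / (theta ** (2 * alpha * k) * E0H0)
    assert ratio <= 50   # geometric decay at rate theta^(2 alpha k), up to a constant
```

The multiplicative factors \(1+\theta ^{k\alpha }\) accumulate into the convergent product \(C_3\) of the text, which is why the decay survives the iteration with a uniform constant.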
Step 6 (Campanato estimate). We can now prove the main estimate, that is, assuming (7.9), we show that (7.10) holds.
Let
and
We see that, recalling (7.19) and noticing that \({\overline{Q}}_k = Q_k \circ \cdots \circ Q_1\),
Moreover, from (7.24), (7.28), and (7.29), we have the estimate
so that
Similarly, from (7.24), (7.25) and (7.26),
Let us now compute
By (7.28), we obtain
from which (7.10) follows, concluding the proof of (7.8).
Step 7 (\({{\,\mathrm{Spt}\,}}\pi \) is contained in the graph of a function T within \(B_R \times {\mathbb {R}}^d\)). We claim that (7.8) implies the existence of a function \(T:B_R \rightarrow Y\) such that
In the following, we abbreviate
To prove the claim, fix \(x_0 \in B_R\) and notice that (7.8) implies that for any \(r>0\) small enough, there holds
Step 7.A. It is easy to see that the infimum in (7.38) is attained at some \(A_r = A_r(x_0)\) and \(b_r = b_r(x_0)\). Analogously to [5, Lemma 3.IV], one can show that there exist a matrix \(A_0 = A_0(x_0)\) and a vector \(b_0 = b_0(x_0)\) such that \(A_r \rightarrow A_0\) and \(b_r \rightarrow b_0\) as \(r\rightarrow 0\) (uniformly in \(x_0\)) with rates
We refer the reader to Appendix 1 for a proof of the convergences and (7.39).
Step 7.B. We claim that
Indeed, we can split
By definition of \(A_r, b_r\), we have
Using (7.39), \(\rho _0\le 2\), and \(x_0\in B_R\), it follows that
Finally, the last term in (7.41) is estimated by
Letting \(r\rightarrow 0\) in the above estimates proves the claim (7.40).
Step 7.C. By disintegration, there exists a family of measures \(\{\pi _x\}_{x\in X}\) on Y such that
Since the lefthand side of (7.42) tends to zero as \(r\rightarrow 0\) by Step 7.B, it follows that if \(x_0\) is a Lebesgue point, we must have
hence \(\pi _{x_0} = \delta _{A_0 x_0 + b_0}\).
Step 7.D. For any Lebesgue point \(x_0 \in B_R\), define \(T(x_0):= A_0(x_0) x_0 + b_0(x_0)\). Then the previous Step 7.C shows that
that is, (7.37).
Step 8 (Conclusion). By boundedness of \(\rho _0\), (7.8) implies the bound
which by means of Campanato’s theory [5] proves that \(T\in {\mathscr {C}}^{1,\alpha }(B_R)\) and that the Hölder seminorm of \(\nabla T\) satisfies (7.2). \(\square \)
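For orientation, the form of Campanato's criterion used here reads as follows (schematically; see [5] for the precise statement): if there exists \(C<\infty \) such that for every \(x_0 \in B_R\) and every sufficiently small \(r>0\)

```latex
$$\begin{aligned}
\inf _{A, b} \, \frac{1}{r^d} \int _{B_r(x_0)} |T(x) - (A x + b)|^2 \,\mathrm {d}x
\;\le \; C^2 \, r^{2(1+\alpha )},
\end{aligned}$$
```

where the infimum runs over matrices A and vectors b, then T agrees almost everywhere with a \({\mathscr {C}}^{1,\alpha }\) function on \(B_R\) whose gradient has Hölder seminorm \(\lesssim C\).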
Remark 7.1
The deterministic structure of the coptimal coupling, that is, the existence of T such that \(\pi = ({\mathrm {Id}}\times T)_{\#}\rho _0\), is a classical result in optimal transportation. If we had used this result, the proof would have become shorter, as Step 7 would not have been needed.
Before we give the proof of Corollary 1.3, let us remark that one can show the following variant of our qualitative \(L^{\infty }\) bound on the displacement (Lemma 2.1):
Lemma 7.2
Assume that the cost function c satisfies (C1)–(C4) and that \(\nabla _x c(0,0)=0\). Let u be a c-convex function. There exist \({\varLambda }_0 < \infty \) and \(R_0'>0\) such that for all \(R\le R_0'\) for which \(\frac{1}{R^2} \left\| u - \tfrac{1}{2}|\cdot |^2\right\| _{{\mathscr {C}}^0(B_{8R})} \le 1\) we have
Proof
Since u is c-convex, it is differentiable a.e. For any \(x \in B_{4R}\) such that \(\nabla u(x)\) exists, let \(y = {{\,\mathrm{cexp}\,}}_x(\nabla u(x))\), that is,
Let \({\widetilde{c}}\) be defined as in (2.16). Then, using \(\nabla _x c(0,0) = 0\), we have
so that
Being c-convex, the function u, and therefore also the function \(x\mapsto u(x) - \frac{1}{2}|x|^2\), is semiconvex, which implies that
see Lemma A.6 in the appendix. By the closeness assumption on u and (C1) we may therefore bound
Steps 2 and 3 of the proof of Lemma 2.1 then imply that there exist \({\varLambda }_0<\infty \) and \(R_0'>0\) (depending on c only through assumptions (C1)–(C4)) such that for all \(R \le R_0'\) we have
that is, (7.43) holds. \(\square \)
Proof of Corollary 1.3
By Lemma 7.2 there exist \({\varLambda }_0<\infty \) and \(R_0'>0\), depending only on the qualitative assumptions (C1)–(C4) on c such that for all \(R\le R_0'\) for which (1.7) holds, we have
We claim that
which immediately implies that
In particular, it follows by Theorem 1.1, that there exists a potentially smaller scale \(R_0\le R_0'\) such that for all \(R\le R_0\) for which (1.7) holds, we have that \(T_u \in {\mathscr {C}}^{1,\alpha }(B_R)\) and \(\nabla T_u\) satisfies the bound (1.5). Applying (7.48) once more, we see that (1.8) holds.
To prove the claim (7.47), we appeal to semiconvexity of the c-convex function u (which implies semiconvexity of the function \(x\mapsto u(x) - \frac{1}{2}|x|^2\)), in particular Lemma A.6, to bound
It remains to estimate the latter term. To this end, notice that for a.e. \(x\in B_{4R}\) we have \(\nabla u(x) = \nabla _x c(x, T_u(x))\), so that with the normalization assumption \(\nabla _x c(0,0) = 0\) we may bound
It now follows with (7.46), definition (1.3), and \(\nabla _{xx}c(0,0) = 0\), \(\nabla _{xy} c(0,0) = {\mathbb {I}}\), that
In view of (1.7), we may assume
so that
Using that by Lemma A.6 and the smallness assumption (1.7), we have
and writing \(T_u(x) = (T_u(x)-\nabla u(x)) + (\nabla u(x) - x) + x\), the estimate (7.49) turns into
This proves the claimed inequality (7.47). \(\square \)
8 \(\epsilon \)-Regularity for Almost-Minimizers
In this section we give a sketch of the proof of Theorem 1.18. One of the main differences compared to the situation of Theorem 1.1 is that our assumptions do not allow us to prove an \(L^{\infty }\) bound on the displacement (which previously followed from c-monotonicity of \({{\,\mathrm{Spt}\,}}\pi \)). However, almost-minimality (on all scales) allows us to obtain an \(L^p\) bound for arbitrarily large \(p<\infty \).
Proposition 8.1
Assume that \(\rho _0, \rho _1 \in C^{0,\alpha }\) with \(\rho _0(0) = \rho _1(0) = 1\). Let T be an almost-minimizing transport map from \(\mu =\rho _0\,{\mathrm {d}}x\) to \(\nu =\rho _1\,{\mathrm {d}}y\) with \({\varDelta }_r \le 1\). Assume further that T is invertible. Then there exists a radius \(R_1 = R_1(\rho _0, \rho _1) >0\) such that for any \(6R\le R_1\),
implies that for any \(p < \infty \),
The scale \(R_1\) below which the result holds depends on the global Hölder seminorms \([\rho _0]_{\alpha }\) and \([\rho _1]_{\alpha }\) of the densities and the condition \(B_{R_1} \subset {{\,\mathrm{Spt}\,}}\rho _0 \cap {{\,\mathrm{Spt}\,}}\rho _1\).
The proof of Proposition 8.1 is given in Appendix 1. Note that since \(T^{-1}\) is also almost-minimizing, the \(L^p\) bound for \(T^{-1}\) follows from applying Proposition D.1 to \(T^{-1}\). The \(L^p\) estimate (for arbitrarily large \(p<\infty \)) allows us to split the particle trajectories into two groups:

good trajectories that satisfy an \(L^{\infty }\) bound on the displacement, corresponding to starting points in the set
$$\begin{aligned} {\mathscr {G}} = \{x\in B_{2R}\cup T^{-1}(B_{2R}): |T(x) - x| \le M R \}, \end{aligned}$$
(8.3)
where \(M := ({\mathscr {E}}_{6R}(\pi _T) + {\mathscr {D}}_{6R}(\mu , \nu ))^{\alpha }\) for some \(\alpha \in (0, \frac{1}{d+2})\), which we fix in what follows, and

bad trajectories that are too long, corresponding to
$$\begin{aligned} {\mathscr {B}} = \{x\in B_{2R}\cup T^{-1}(B_{2R}): |T(x) - x| > M R \}. \end{aligned}$$
(8.4)
Due to the \(L^p\) bound, the energy carried by bad trajectories is superlinearly small: By definition of \({\mathscr {E}}_{6R}(\pi _T)\) and M,
hence
from which we see that, given \(\alpha \in (0,\frac{1}{d+2})\), we may choose p large enough so that the exponent is larger than 1. In particular, for any \(\tau >0\) we may bound
provided \({\mathscr {E}}_{6R}(\pi _T)\ll 1\).
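The superlinear smallness (8.6) can be traced through a Chebyshev-type computation (a sketch, assuming the \(L^p\) bound (8.2) takes the scale-invariant form \(\big (\frac{1}{R^d}\int |T(x)-x|^p \,\mathrm {d}\mu \big )^{1/p} \lesssim _p R\, ({\mathscr {E}}_{6R}(\pi _T)+{\mathscr {D}}_{6R}(\mu ,\nu ))^{\frac{1}{d+2}}\)): on \({\mathscr {B}}\) we have \(|T(x)-x| > MR\), so

```latex
$$\begin{aligned}
\frac{1}{R^d} \int _{{\mathscr {B}}} \frac{|T(x)-x|^2}{R^2} \,\mathrm {d}\mu
\;\le \; \frac{1}{(MR)^{p-2}} \, \frac{1}{R^{d+2}} \int |T(x)-x|^p \,\mathrm {d}\mu
\;\lesssim _p\; M^{-(p-2)} \big ({\mathscr {E}}_{6R}(\pi _T) + {\mathscr {D}}_{6R}(\mu ,\nu )\big )^{\frac{p}{d+2}},
\end{aligned}$$
```

and with \(M = ({\mathscr {E}}_{6R} + {\mathscr {D}}_{6R})^{\alpha }\) the resulting exponent is \(\frac{p}{d+2} - \alpha (p-2) = p\big (\frac{1}{d+2} - \alpha \big ) + 2\alpha \), which exceeds 1 for large p precisely because \(\alpha < \frac{1}{d+2}\).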
Once the bad trajectories have been removed, the good trajectories can be treated as before. More precisely, if we restrict the coupling \(\pi _T\) to the set \({\mathscr {G}}\times T({\mathscr {G}})\), then the resulting coupling is still deterministic and almost-minimizing with respect to quadratic cost (given its own boundary conditions). In particular, the global estimate
for all \({\widetilde{T}}\) with \({\widetilde{T}}_{\#}\mu \lfloor _{{\mathscr {G}}} = T_{\#}\mu \lfloor _{{\mathscr {G}}} = \nu \lfloor _{T({\mathscr {G}})}\) holds. This allows for a passage from the Lagrangian to the Eulerian point of view as in Lemma 1.12.^{Footnote 33} Moreover, the measures \(\mu \lfloor _{{\mathscr {G}}}\) and \(\nu \lfloor _{T({\mathscr {G}})}\), as well as the coupling \(\pi _{T}\lfloor _{{\mathscr {G}} \times T({\mathscr {G}})}\), satisfy the assumptions of the harmonic approximation Theorem 1.13.^{Footnote 34} Hence, given \(0<\tau \ll 1\), there exists a threshold \(\epsilon _{\tau }>0\), constants \(C,C_{\tau }<\infty \), and a harmonic gradient field \(\nabla {\varPhi }\) (defined through (1.40)) satisfying (1.42), such that (1.41) holds for the Eulerian description \((\rho ,j)\) of \(\pi _{T}\lfloor _{{\mathscr {G}} \times T({\mathscr {G}})}\), provided \({\mathscr {E}}_{6R}(\pi _T) + R^{2\alpha } \left( [\rho _0]_{\alpha , 6R}^2 + [\rho _1]_{\alpha , 6R}^2\right) \le \epsilon _{\tau }\). The harmonic gradient field allows us to define the affine change of coordinates from Lemma 1.15 with \(B={\mathrm {e}}^{\frac{A}{2}}\), where \(A=\nabla ^2{\varPhi }(0)\) and \(b = \nabla {\varPhi }(0)\) satisfy (1.51) (and \(D = {\mathbb {I}}\)), to obtain a new coupling \(\widehat{\pi _T}= \pi _{{\widehat{T}}}\) between the measures \({\widehat{\mu }}\) and \({\widehat{\nu }}\) from the (full) coupling \(\pi _{T}\).
We can now use the harmonic approximation result Theorem 1.13 together with the harmonic estimates (1.42) to bound
For the bad trajectories we use the estimate (8.6) together with the bound
to obtain, recalling that \(\alpha< \frac{1}{d+2} < \frac{1}{2}\),
In particular, for \(\theta \in (0,1)\) we can write
so that, using the identity \(\gamma B^{-*}(y-b) - Bx = \gamma B^{-*} (y-x-(Ax+b)) - \gamma B^{-*} (B^*B-{\mathbb {I}}-A) x + (\gamma -1) Bx\), the estimate (8.7) together with (1.42) and (1.51) implies that for any \(\beta \in (0,1)\) there exist \(0<\theta \ll 1\) and \(C_{\beta }<\infty \) such that
which implies a one-step improvement result for the case of general almost-minimizers.
It remains to show that the transformed coupling \(\pi _{{\widehat{T}}}\) is still almost-minimal on all small scales in the sense of Definition 1.17. To this end, let \(r\le R_1\), \((\widehat{x_0}, \widehat{y_0}) \in {{\,\mathrm{Spt}\,}}\pi _{{\widehat{T}}}\), and \(\pi _{\widehat{T'}} \in {\varPi }({\widehat{\mu }},{\widehat{\nu }})\) with \({{\,\mathrm{Spt}\,}}(\pi _{{\widehat{T}}}-\pi _{\widehat{T'}}) \subset (B_r(\widehat{x_0})\times {\mathbb {R}}^d) \cup ({\mathbb {R}}^d \times B_r(\widehat{y_0}))\). Then, writing \(\widehat{T'}({\widehat{x}}) = \gamma B^{-*} (T'(B^{-1}{\widehat{x}}) - b)\), where \(T'_{\#}\mu = \nu \), one sees that
Note that
where \(x_0 = B^{-1}\widehat{x_0}\) and \(y_0 = \gamma ^{-1}B^{*} \widehat{y_0} + b\). Since \(\pi _T\) is almost-minimizing, it follows that
hence \(\pi _{{\widehat{T}}}\) is almost-minimizing among deterministic couplings with rate
Assuming that \({\varDelta }_r = C r^{2\alpha }\), together with the bounds on \(\gamma \) and B from (1.51) this gives
in particular, the rate \({\varDelta }_r\) exhibits the same behaviour as the Hölder seminorm of \(\nabla _{xy}c\) in (7.17) and shows that the one-step improvement can be iterated down to arbitrarily small scales, yielding the \(C^{1,\alpha }\)-regularity of T in a ball with radius given by a fraction of R. \(\square \)
9 Partial Regularity: Proof of Corollary 1.4
As a corollary of Theorem 1.1, we obtain a variational proof of the partial regularity for optimal transport maps established in [7]. The changes of variables used to arrive at a normalized situation are exactly the same as in [7], and the argument to derive partial regularity from \(\epsilon \)-regularity follows [13].
Proof of Corollary 1.4
A classical result in optimal transport states that the optimal map T from \(\rho _0\) to \(\rho _1\) for the cost c and the optimal map \(T^*\) from \(\rho _1\) to \(\rho _0\) for the cost \(c^*(y,x) := c(x,y)\) are almost everywhere inverse to each other, and are of the form
where u is a c-convex function and \(u^c\) is the c-conjugate of u. The functions u and \(u^c\) are semiconvex, so that by Alexandrov's theorem they are twice differentiable almost everywhere. Therefore, we can find two sets of full measure \(X_1 \subseteq X\) and \(Y_1 \subseteq Y\) such that for all \((x_0,y_0) \in X_1 \times Y_1\), u is twice differentiable at \(x_0\), \(u^c\) is twice differentiable at \(y_0\), and
Now let
Because \(\rho _0\) and \(\rho _1\) are bounded and bounded away from zero, T sends sets of measure zero to sets of measure zero, so that \(|X \setminus X'| = |Y \setminus Y'| = 0\). The goal is now to prove that \(X'\) and \(Y'\) are open sets and that T is a \({\mathscr {C}}^{1, \alpha }\)-diffeomorphism between \(X'\) and \(Y'\).
Fix \(x_0 \in X'\); then by (9.2), \(y_0 := T(x_0) \in Y'\). Up to a translation, we may assume that \(x_0 = y_0 = 0\). Define
Then \({\overline{u}}\) is a \({\overline{c}}\)-convex function and we have
so that \(T(x) = {{\,\mathrm{{\overline{c}}exp}\,}}_x(\nabla {\overline{u}}(x))\), which shows that T is the \({\overline{c}}\)-optimal transport map from \(\rho _0\) to \(\rho _1\).
By Alexandrov's theorem, there exists a symmetric matrix A such that
so that, using that \((p,x) \mapsto {{\,\mathrm{cexp}\,}}_x(p)\) is \({\mathscr {C}}^1\), setting \(M := \nabla _{xy}{\overline{c}}(0,0) = \nabla _{xy}c(0,0)\), and noticing that M is nondegenerate by Assumption (C4), a simple computation yields
Therefore, we have
The \({\overline{c}}\)-convexity of \({\overline{u}}\) and the fact that \({{\,\mathrm{{\overline{c}}exp}\,}}_x(\nabla {\overline{u}}(x)) \in \partial _{{\overline{c}}} {\overline{u}}(x)\) imply (see for instance [9, Section 5.3]) that
so that, together with (9.3), (9.4) and the property \(T(0)=0\), the matrix \(A = \nabla ^2{\overline{u}}(0)\) is positive definite. We now make the change of variables \({\widetilde{x}} := A^{\frac{1}{2}}x\) and \({\widetilde{y}} := A^{-\frac{1}{2}}My\), so that
Defining
we get that \({\widetilde{T}}_{\#} {\widetilde{\rho }}_0 = {\widetilde{\rho }}_1\) \({\widetilde{c}}\)-optimally. This may be seen by noticing that
where \({\widetilde{u}}({\widetilde{x}}) := {\overline{u}}(A^{-1/2}{\widetilde{x}})\) is a \({\widetilde{c}}\)-convex function. The cost \({\widetilde{c}}\) satisfies \(\nabla _{{\widetilde{x}} {\widetilde{y}}}{\widetilde{c}}(0,0) = {\mathbb {I}}\) and by the Monge–Ampère equation
we obtain \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0)\). Up to dividing \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\) by an equal constant, we may assume that \({\widetilde{\rho }}_0(0) = {\widetilde{\rho }}_1(0) = 1\). Moreover, with this change of variables, (9.5) turns into
Finally, \({\widetilde{c}}\) is still \({\mathscr {C}}^{2,\alpha }\) and satisfies Assumptions (C2)–(C4). Since \(\rho _0\) and \(\rho _1\) are bounded and bounded away from zero, \({\widetilde{\rho }}_0\) and \({\widetilde{\rho }}_1\) are \({\mathscr {C}}^{0, \alpha }\), and we have
Hence by (9.6) and (9.7), we may apply Theorem 1.1 to obtain that \({\widetilde{T}}\) is \({\mathscr {C}}^{1,\alpha }\) in a neighborhood of zero. By Remark 1.2, we also obtain that \({\widetilde{T}}^{1}\) is \({\mathscr {C}}^{1,\alpha }\) in a neighborhood of zero.
Going back to the original map, this means that T is a \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism between a neighborhood U of \(x_0\) and the neighborhood T(U) of \(T(x_0)\). In particular, \(U \times T(U) \subseteq X' \times Y'\), so that \(X'\) and \(Y'\) are both open and, by (9.1), T is a global \({\mathscr {C}}^{1,\alpha }\)-diffeomorphism between \(X'\) and \(Y'\). \(\square \)
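The effect of the normalization above can be sanity-checked on the linearized level. The following sketch (illustrative, not code from the paper) assumes that the linearization of T at 0 is \(M^{-1}A\), with \(A = \nabla^2\overline{u}(0)\) symmetric positive definite and \(M = \nabla_{xy}c(0,0)\) nondegenerate; then the change of variables \({\widetilde{x}} = A^{1/2}x\), \({\widetilde{y}} = A^{-1/2}My\) turns it into the identity, since \(A^{-1/2}M(M^{-1}A)A^{-1/2} = {\mathbb {I}}\).

```python
# Sanity check of the normalization step (illustrative sketch; A and M are
# example matrices, and DT(0) = M^{-1} A is our modeling assumption here).

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv2(P):
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# A diagonal SPD (so A^{1/2} is explicit), M invertible (Assumption (C4)).
A      = [[4.0, 0.0], [0.0, 9.0]]
A_half = [[2.0, 0.0], [0.0, 3.0]]           # A^{1/2}
M      = [[1.0, 1.0], [0.0, 1.0]]

DT  = matmul(inv2(M), A)                    # linearization of T at 0: M^{-1} A
# linearization in the new coordinates: A^{-1/2} M (M^{-1} A) A^{-1/2}
DTt = matmul(inv2(A_half), matmul(M, matmul(DT, inv2(A_half))))

assert all(abs(DTt[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))
```

The computation collapses to \(A^{-1/2}AA^{-1/2} = {\mathbb {I}}\), which is exactly why the normalized map has identity derivative at the origin.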
Notes
It is violated whenever a Riemannian sectional curvature is negative at any point of the manifold, see [16].
See [15] for the concept of crosscurvature and its relation to the MTW condition.
In fact, an almost-minimizer.
The regularity of the cost function enters our result in a more nonlinear way, too: its qualitative regularity at the \({\mathscr {C}}^2\) level determines the energy and length scales below which the linearization regime kicks in.
We refer to [18] for a discussion of this phenomenon in the study of boundary regularity.
In two dimensions this can be seen by re-expressing the second equation in terms of the stream function of \(\delta T\).
In fact, we missed this point in the first posted version.
But not uniformly strict.
Here is an easy example: Let \(\pi \) be generated by the map T that maps \([-1,0]\) onto \([0,1]\) and \([0,1]\) onto \([-1,0]\), both by a shift by one, to the right and to the left, respectively. The Lagrangian cost is obviously given by 2. For the Eulerian cost we note that the Eulerian velocity v vanishes on the diamond \(|z|<\min \{t,1-t\}\), where the density assumes the value 2. Due to this “destructive interference”, the Eulerian cost is reduced by 1.
Notice that we are using a different convention here than in [14], since it is more natural to work with the nondimensional energy in our context.
An assumption of the form \(f\ll 1\) means that there exists \(\epsilon >0\), typically only depending on the dimension d and Hölder exponents, such that if \(f\le \epsilon \), then the conclusion holds. We write \(\ll _{{\varLambda }}\) to indicate that \(\epsilon \) also depends on the parameter \({\varLambda }\). The symbols \(\sim \), \(\gtrsim \) and \(\lesssim \) indicate estimates that hold up to a global constant C, which typically only depends on the dimension d and Hölder exponents. For instance, \(f\lesssim g\) means that there exists such a constant with \(f\le Cg\). \(f\sim g\) means that \(f\lesssim g\) and \(g\lesssim f\).
Assuming that \(T_u(0) = 0\), the assumption \(\nabla _x c(0,0) = 0\) fixes \(\nabla u(0) = \nabla _{x}c(0, T_u(0)) = 0\).
\(|\cdot |\) denotes the Lebesgue measure on \({\mathbb {R}}^d\).
We use the convention that \(W^2(\mu , \nu ) = \inf _{\pi \in {\varPi }(\mu ,\nu )} \int \frac{1}{2}|x-y|^2\,{\mathrm {d}}\pi \) in this paper.
Whenever there is no room for confusion, we will drop the dependence of \({\mathscr {D}}\) on the measures \(\mu \) and \(\nu \).
The assumption of Theorem 1.1 is scaled to 6R here to match the scale on which smallness of \({\mathscr {E}}\) and \({\mathscr {D}}\) is assumed in both statements.
Note that by assumption (1.36) the coupling \(\pi \) is supported on \(B_{7R}\times B_{7R}\), so that by means of the estimate \({\mathscr {E}}_{6R}(\pi ) \lesssim {\mathscr {E}}_{7R}(\pi ) = \int |x-y|^2\,{\mathrm {d}}\pi \) the smallness assumption on the Euclidean energy of \(\pi \) on scale 6R could be replaced by a smallness assumption on the global Euclidean energy of \(\pi \). However, since \({\mathscr {D}}\) behaves nicely under restriction only on average, the applicability of the harmonic approximation result derived in [14] becomes more apparent in the form of assumption (1.39).
We use the notation \(B^{-*} = (B^*)^{-1}\), where \(B^*\) is the transpose of B, and \(\nabla ^2 c(x, y) = \begin{pmatrix} \nabla _{xx}c(x,y) & \nabla _{yx}c(x,y) \\ \nabla _{xy} c(x,y) & \nabla _{yy}c(x,y) \end{pmatrix}\).
Note that D is nonsingular by assumption (C4).
A more quantitative version of this result is work in progress.
Notice that by the assumption that \(\eta \) is supported on \(B_{\frac{r}{2}}(x-re)\), the function \(x' \mapsto |x-x'| \eta (x')\) has no singularity at \(x'=x\).
r must be small enough so that \(B_r \subset {{\,\mathrm{Spt}\,}}\mu \cap {{\,\mathrm{Spt}\,}}\nu \).
Note that by an approximation argument, we may use \(\zeta := 1_{B_2} |\nabla {\varPhi }|^2 \) and \(\xi := 1_{B_2} \nabla {\varPhi }\) as test functions in (1.24).
Closeness here means closeness of \({\widehat{g}}_{\pm }\) and \({\overline{f}}_{\pm }\) (the positive and negative parts of the measures) with respect to the geodesic Wasserstein distance on \(\partial B_{R_*}\).
We note that for the case of quadratic cost and Hölder continuous densities treated in [13] a bound on this term is easy due to McCann’s displacement convexity, which implies that \(\rho \le 1\) up to a small error.
Alternatively, one could devise an argument based on the fact that the qualitative \(L^{\infty }\) bound only depends on the cost through its global properties (C1)–(C4) and that the set of cost functions considered in the iteration is relatively compact.
The first inequality follows from \(\rho _1\ge \frac{1}{2}\) and the Lipschitz continuity of the function \(t\mapsto t^{\frac{1}{d}}\) away from zero.
Note that we have not yet fixed \(\theta \).
Using that the optimal coupling between \(\mu \lfloor _{{\mathscr {G}}}\) and \(\nu \lfloor _{T({\mathscr {G}})}\) is deterministic, so that one can appeal to almost-minimality within the class of deterministic couplings.
See for instance [22, Theorem 3.16] for details.
We use the notation \(\fint \) for the average of a function on a set, with respect to the Lebesgue measure.
In the rest of Step 2, we drop the index \(x_0\) for the map \(S_{x_0}\). Now, S is the map such that \(S(x_0) = x_0\) and \(DS(x_0) = {\mathbb {I}}\).
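The “destructive interference” example in the notes above can be checked numerically. In the sketch below (illustrative, not from the paper), the Lagrangian cost \(\int |T(x)-x|^2\,{\mathrm {d}}\mu = 2\) (mass 2, unit displacement), while the Eulerian (Benamou–Brenier) cost is 1: at time t the right-movers occupy \([-1+t,t]\) and the left-movers \([-t,1-t]\); on the overlap the momenta cancel, so the cost density at time t is just the length \(2|1-2t|\) of the non-overlapping region, whose time integral is 1.

```python
# Numerical check of the Lagrangian-vs-Eulerian cost example (a sketch):
# T shifts [-1,0] right by 1 and [0,1] left by 1.

lagrangian = 2 * 1.0 ** 2          # two unit intervals, each displaced by 1

def single_species_length(t):
    # total length where only one species is present (rho = 1, |v| = 1 there);
    # on the overlap the velocity vanishes, so it contributes no cost
    return 2 * abs(1 - 2 * t)

N = 100_000                        # midpoint Riemann sum in time
eulerian = sum(single_species_length((k + 0.5) / N) for k in range(N)) / N

assert abs(lagrangian - 2.0) < 1e-12
assert abs(eulerian - 1.0) < 1e-6  # Eulerian cost is reduced by exactly 1
```

The exact values are 2 and \(\int_0^1 2|1-2t|\,{\mathrm {d}}t = 1\), matching the footnote's claim that the Eulerian cost is reduced by 1.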
References
Almgren Jr., F.J.: Existence and Regularity Almost Everywhere of Solutions to Elliptic Variational Problems with Constraints. Memoirs of the American Mathematical Society, vol. 4(165). American Mathematical Society, Providence, RI (1976). ISBN: 9780821818657
Benamou, J.D., Brenier, Y.: A numerical method for the optimal time-continuous mass transport problem and related problems. In: Monge Ampère Equation: Applications to Geometry and Optimization (Deerfield Beach, FL, 1997), Contemporary Mathematics, vol. 226, pp. 1–11. American Mathematical Society, Providence, RI (1999)
Bombieri, E.: Regularity theory for almost minimal currents. Arch. Ration Mech. Anal. 78, 99–130 (1982)
Caffarelli, L.A.: The regularity of mappings with a convex potential. J. Am. Math. Soc. 5(1), 99–104 (1992)
Campanato, S.: Proprietà di una famiglia di spazi funzionali. Annali della Scuola Normale Superiore di Pisa  Classe di Scienze (Serie 3) 18(1), 137–160 (1964)
Chen, G.Q., Frid, H.: Extended divergencemeasure fields and the Euler equations for gas dynamics. Commun. Math. Phys. 236, 251–280 (2003)
De Philippis, G., Figalli, A.: Partial regularity for optimal transport maps. Publications Mathématiques. Institut de Hautes Études Scientifiques 121, 81–112 (2015)
Evans, L.C., Gariepy, R.F.: Measure Theory and Fine Properties of Functions. Revised edition. Textbooks in Mathematics. CRC Press, Boca Raton, FL (2015). ISBN: 9781482242386
Figalli, A.: The Monge–Ampère Equation and Its Applications. Zurich Lectures in Advanced Mathematics. European Mathematical Society (EMS), Zürich (2017). ISBN 9783037191705
Figalli, A., Kim, Y.H.: Partial regularity of Brenier solutions of the Monge–Ampère equation. Discret. Contin. Dyn. Syst. (Ser. A) 28, 559–565 (2010)
Giusti, E.: Direct Methods in the Calculus of Variations. World Scientific Publishing Co., Inc., River Edge, NJ (2003). ISBN: 9812380434
Goldman, M.: An \(\varepsilon \)-regularity result for optimal transport maps between continuous densities. Atti della Accademia Nazionale dei Lincei. Rendiconti Lincei. Matematica e Applicazioni 31(4), 971–979 (2020)
Goldman, M., Otto, F.: A variational proof of partial regularity for optimal transportation maps. Annales Scientifiques de l’École Normale Supérieure. Quatrième Série 53(5), 1209–1233 (2020)
Goldman, M., Huesmann, M., Otto, F.: Quantitative linearization results for the Monge–Ampère equation. Commun. Pure Appl. Math. (2021). https://doi.org/10.1002/cpa.21994
Kim, Y.H., McCann, R.J.: Continuity, curvature, and the general covariance of optimal transportation. J. Eur. Math. Soc. 12(4), 1009–1040 (2010)
Loeper, G.: On the regularity of solutions of optimal transportation problems. Acta Math. 202(2), 241–283 (2009)
Ma, X.N., Trudinger, N.S., Wang, X.J.: Regularity of potential functions of the optimal transportation problem. Arch. Ration. Mech. Anal. 177(2), 151–183 (2005)
Miura, T., Otto, F.: Sharp boundary \(\epsilon \)-regularity of optimal transport maps. Adv. Math. 381 (2021), Article ID 107603
Prod’homme, M., Ried, T.: Variational approach to partial regularity of optimal transport maps: general cost functions and continuous densities (in Preparation)
Santambrogio, F.: Optimal Transport for Applied Mathematicians. Calculus of Variations, PDEs, and Modeling. Progress in Nonlinear Differential Equations and their Applications, vol. 87. Birkhäuser, Cham (2015). ISBN: 9783319208275
Schoen, R., Simon, L.: A new proof of the regularity theorem for rectifiable currents which minimize parametric elliptic functionals. Indiana Univ. Math. J. 31(3), 415–434 (1982)
Troianiello, G.M.: Elliptic Differential Equations and Obstacle Problems. The University Series in Mathematics. Plenum Press, New York (1987). ISBN 0306424487
Villani, C.: Topics in Optimal Transportation. Graduate Studies in Mathematics, vol. 58. American Mathematical Society, Providence, RI (2003). ISBN 9780821833124
Villani, C.: Optimal transport: old and new. Grundlehren der Mathematischen Wissenschaften, vol. 338. Springer, Berlin (2009). ISBN 9783540710493
Acknowledgements
We thank Jonas Hirsch for pointing out and explaining the reference [21] and other works related to minimal surfaces. We also thank Georgiana Chatzigeorgiou for carefully reading a previous version of this article and for valuable comments, which led to the identification of a more serious error that has been fixed in the current version. Finally, we thank the referee for very helpful comments, in particular for suggesting to include the more general statement on almost-minimizing transport maps.
MP thanks the Max Planck Institute for Mathematics in the Sciences and TR thanks the University of Toulouse for their kind hospitality. MP was partially supported by the Projects MESA (ANR18CE40006) and EFI (ANR17CE400030) of the French National Research Agency (ANR).
Funding
Open Access funding enabled and organized by Projekt DEAL.
Appendices
Appendix A: Some Technical Lemmata
A.1 Properties of the Support of Couplings
The following lemma is an important ingredient in the proofs of our \(L^{\infty }\) bounds on the displacement of couplings with c-monotone support, Proposition 1.5 and Lemma 2.1:
Lemma A.1
Let \(\pi \in {\varPi }(\mu , \nu )\) and assume that there exists \(R>0\) such that
Then

(i)
\((B_{\lambda R} \times B_{2\lambda R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) provided \(\lambda \gg \left( \frac{1}{R^{d+2}} \int _{B_{6R} \times {\mathbb {R}}^d} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {D}}_{6R}\right) ^{\frac{1}{d+2}}\).

(ii)
For any \(x \in B_{5R}\) and \(e \in S^{d-1}\) we have that \((S_{R}(x,e) \times B_{7R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \), where
$$\begin{aligned} S_{R}(x,e) := C(x,e) \cap (B_{R}(x)\setminus B_{\frac{R}{2}}(x)) \end{aligned}$$is the intersection of the annulus \(B_{R}(x)\setminus B_{\frac{R}{2}}(x)\) with the spherical cone C(x, e) of opening angle \(\frac{\pi }{2}\) with apex at x and axis along e.
Remark A.2
If (A.1) is replaced by \(\frac{1}{R^{d+2}} \int _{{\mathbb {R}}^d \times B_{6R}} |y-x|^2 \, {\mathrm {d}}\pi + {\mathscr {D}}_{6R} \ll 1\), then the symmetric results hold, namely \((B_{2\lambda R} \times B_{\lambda R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) and \((B_{7R} \times S_R(y,e)) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) for all \(y \in B_{5R}\) and \(e \in S^{d-1}\).
Proof
To lighten the notation in the proof, let us set
We start with the more delicate statement (ii). Let \(x \in B_{5R}\) and \(e \in S^{d-1}\), and note that
Since \(S_{R}(x,e) \subseteq B_{6R}\), we have
To estimate \(\mu (S_{R}(x,e))\) from below, let \(\eta \) be a smooth cutoff function equal to one on a ball of radius \(\frac{r}{2}\) and zero outside a concentric ball of radius r satisfying
and such that \({{\,\mathrm{Spt}\,}}\eta \subseteq S_{R}(x,e) \subseteq B_{6R}\), which is possible provided \(r \le \frac{R}{4}\). Then by (A.3)
We now use (2.9) with \(\zeta =\eta \) to get, by the definition (1.10) of \({\mathscr {D}}_{6R}\), and since \(\kappa _{\mu } \sim 1\) by (A.1), that
Hence,
We may now choose \(r = \frac{R}{4}\) so that
from which we conclude that \(\pi (S_{R}(x,e) \times B_{7R})\) is strictly positive if \({\mathscr {D}}_{6R}\) and \({\mathscr {E}}_{6R}^+\) are small enough.
In order to prove (i) we run a similar argument to obtain
Hence \((B_{\lambda R} \times B_{2\lambda R}) \cap {{\,\mathrm{Spt}\,}}\pi \ne \emptyset \) provided that \(\lambda \gg ({\mathscr {E}}_{6R}^+ +{\mathscr {D}}_{6R})^{\frac{1}{d+2}}\). \(\square \)
The next lemma, which is quite elementary, relates the support of a measure and the support of its push forward under an affine transformation:
Lemma A.3
Let \(\gamma \) be a measure on \({\mathbb {R}}^n\) and set \({\widetilde{\gamma }} := F_{\#} \gamma \), where \(F(x) := Ax+b\) with \(A \in {\mathbb {R}}^{n\times n}\) invertible and \(b\in {\mathbb {R}}^n\). Then
Proof
Let \({\widetilde{x}} \in {{\,\mathrm{Spt}\,}}{\widetilde{\gamma }}\). Then for all \(\epsilon > 0\)
Now
so that for all \(\epsilon ' >0\) we have \(\gamma (B_{\epsilon '}(F^{-1}({\widetilde{x}}))) > 0\). The other implication follows analogously. \(\square \)
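Lemma A.3 says precisely that \({{\,\mathrm{Spt}\,}}(F_{\#}\gamma ) = F({{\,\mathrm{Spt}\,}}\gamma )\) for invertible affine F. For a discrete measure this reduces to mapping the atoms, which the following sketch (our illustration, not from the paper) makes concrete via the ball criterion for the support:

```python
# Illustration of Lemma A.3 for a discrete measure: if gamma is a sum of Dirac
# masses and F(x) = Ax + b is invertible affine, then the support of F_# gamma
# is exactly the image of the atoms under F.
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def F(x):
    # F(x) = Ax + b with A = [[2, 1], [0, 1]] (invertible) and b = (1, -1)
    return (2 * x[0] + x[1] + 1, x[1] - 1)

atoms  = [(0.0, 0.0), (1.0, 2.0), (-1.0, 0.5)]   # Spt(gamma)
pushed = [F(a) for a in atoms]                   # atoms of F_# gamma

def in_support(atoms_, x, eps=1e-9):
    # x lies in the support iff every small ball around x carries mass;
    # for a discrete measure this means x is (within eps of) an atom
    return any(dist(a, x) < eps for a in atoms_)

assert all(in_support(pushed, F(a)) for a in atoms)   # F(Spt) subset Spt(F_# gamma)
assert all(any(dist(F(a), p) < 1e-9 for a in atoms) for p in pushed)
assert not in_support(pushed, (100.0, 100.0))          # points off the image fail
```

The general case in the lemma replaces the atom check by the positivity of \(\gamma (B_{\epsilon '}(F^{-1}({\widetilde{x}})))\), using that F and \(F^{-1}\) map small balls into small balls.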
A.2 Bound on \(\pmb {{\mathscr {D}}}_R\)
In this subsection we show how the quantity \({\mathscr {D}}_R(\rho _0, \rho _1)\) can be bounded in terms of the Hölder seminorms of the densities \(\rho _0, \rho _1\):
Lemma A.4
Let \(\rho _0,\rho _1\in {\mathscr {C}}^{0,\alpha }\), \(\alpha \in (0,1)\), be two probability densities with bounded support, and such that \(\frac{1}{2} \le \rho _j \le 2\), \(j=0, 1\), on their support. If \(\rho _0(0) = \rho _1(0) = 1\), then for all \(R>0\) such that \(B_R \subseteq {{\,\mathrm{Spt}\,}}\rho _j\), \(j=0,1\), we have
Proof
By the definition (1.11) of \(\kappa _j := \kappa _{\rho _j}\) and using \(\rho _j(0) = 1\), Jensen’s inequality implies
If \(R>0\) is such that \(B_R \subseteq {{\,\mathrm{Spt}\,}}\rho _j\), the assumption that \(\rho _j\) is bounded away from zero on its support implies that \(\kappa _j \gtrsim 1\). The claimed inequality (A.4) then follows with Lemma A.5. \(\square \)
Lemma A.5
Let \(\rho \in {\mathscr {C}}^{0,\alpha }\), \(\alpha \in (0,1)\), be a density with bounded support, and such that \(\frac{1}{2} \le \rho \le 2\) on its support. Then
Proof
By the assumptions on \(\rho \), the 2Wasserstein distance between \(\rho \,{\mathrm {d}}x \) and \(\kappa \, {\mathrm {d}}x\) restricted to \(B_R\) can be bounded by
where \({\varPsi }\) is the mean-zero solution of the Neumann problem
see [20, Theorem 5.34]. Global Schauder theory^{Footnote 35} for (A.5) implies that
so that
\(\square \)
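Schematically, the chain of estimates in the proof reads as follows (our reconstruction with constants suppressed; the precise outcome is (A.4) above). Since \(\Vert \rho - \kappa \Vert _{L^{\infty }(B_R)} \lesssim R^{\alpha }[\rho ]_{\alpha }\), Schauder theory for the Neumann problem gives \(\sup _{B_R}|\nabla {\varPsi }| \lesssim R^{1+\alpha }[\rho ]_{\alpha }\), and hence

```latex
% Sketch (our reconstruction, constants suppressed): with \varPsi the mean-zero
% solution of  -\Delta\varPsi = \rho - \kappa  in  B_R,  \partial_\nu\varPsi = 0
% on \partial B_R, the bound of [20, Theorem 5.34] and \rho \ge \tfrac12 give
W^2\!\left(\rho\,\mathrm{d}x\lfloor_{B_R},\,\kappa\,\mathrm{d}x\lfloor_{B_R}\right)
  \;\lesssim\; \int_{B_R} |\nabla \varPsi|^2\,\mathrm{d}x
  \;\lesssim\; R^d \Big(\sup_{B_R}|\nabla\varPsi|\Big)^{2}
  \;\lesssim\; R^{d+2}\,R^{2\alpha}\,[\rho]_{\alpha}^2 .
```

After the normalization by \(R^{d+2}\) in the definition of \({\mathscr {D}}_R\), this contributes a term of order \(R^{2\alpha }[\rho ]_{\alpha }^2\), consistent with Lemma A.4.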
A.3 Property of Semiconvex Functions
Lemma A.6
Let \(C\in {\mathbb {R}}\) and assume that \(v: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) is C-semiconvex, that is, \(x\mapsto v(x) + \frac{C}{2} |x|^2\) is convex. Then
Proof
Notice first that by semiconvexity, the gradient of v exists a.e. Convexity of \(x\mapsto v(x) + \frac{C}{2} |x|^2\) implies that for a.e. y and all x,
In particular, for a.e. \(y \in B_1\) and every \(x\in B_3\) lying in a cone of opening angle \(\frac{2\pi }{3}\) with apex at y and axis along \(\nabla v(y)\), so that \(\nabla v(y) \cdot (x-y) \ge \frac{1}{2} |\nabla v(y)| |x-y|\), we have
Optimizing in \(|x-y|\) then gives (A.6). \(\square \)
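The optimization in the last step can be spelled out (our sketch, with generic constants; the precise form of (A.6) is in the statement above). Writing \(r := |x-y| \in (0,2]\) and \(C_+ := \max \{C,0\}\),

```latex
% Our reconstruction of the optimization step (constants generic):
\frac12\,|\nabla v(y)|\,r
  \;\le\; v(x) - v(y) + \frac{C}{2}\,r^2
  \;\le\; 2\sup_{B_3}|v| + \frac{C_+}{2}\,r^2,
\qquad\text{i.e.}\qquad
|\nabla v(y)| \;\le\; \frac{4}{r}\sup_{B_3}|v| + C_+\,r .
```

Choosing \(r = \min \big \{2, \big (4\sup _{B_3}|v|/C_+\big )^{1/2}\big \}\) and using \(\sqrt{ab}\le \frac{a+b}{2}\) yields \(\sup _{B_1}|\nabla v| \lesssim \sup _{B_3}|v| + C_+\), which is the shape of the claimed bound.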
Appendix B: The Change of Coordinates Lemma 1.15
Proof of Lemma 1.15
First, we clearly have \({\widehat{\rho }}_0(0) = {\widehat{\rho }}_1(0) = 1\) and \(\nabla _{{\widehat{x}} {\widehat{y}}}{\widehat{c}}(0,0) = {\mathbb {I}}\). It is also easy to check that \({\widehat{\pi }} \in {\varPi }({\widehat{\rho }}_0, {\widehat{\rho }}_1)\). Let us now compute
Thus, the \({\widehat{c}}\)-optimality of \({\widehat{\pi }}\) is equivalent to the c-optimality of \(\pi \). Indeed, if \({\widehat{\pi }}\) were not optimal for the cost \({\widehat{c}}\), then one could find a coupling \({\widehat{\sigma }} \in {\varPi }({\widehat{\rho }}_0, {\widehat{\rho }}_1)\) such that \(\int {\widehat{c}}({\widehat{x}}, {\widehat{y}}) \, {\widehat{\sigma }}({\mathrm {d}}{\widehat{x}} {\mathrm {d}}{\widehat{y}}) < \int {\widehat{c}}({\widehat{x}}, {\widehat{y}}) \, {\widehat{\pi }}({\mathrm {d}}{\widehat{x}} {\mathrm {d}}{\widehat{y}})\).
Defining now \(\sigma := \rho _0(0) \det B^{-1} \, (Q^{-1})_{\#} {\widehat{\sigma }} \in {\varPi }(\rho _0, \rho _1)\), the previous computation would yield
a contradiction. \(\square \)
Remark B.1
It is also possible to prove the \({\widehat{c}}\)-optimality of \({\widehat{\pi }}\) by showing that \({{\,\mathrm{Spt}\,}}{\widehat{\pi }}\) is \({\widehat{c}}\)-cyclically monotone, which characterizes optimality (see for instance [20, Theorem 1.49]). This property readily follows from Lemma A.3 and the c-cyclical monotonicity of \({{\,\mathrm{Spt}\,}}\pi \).
Appendix C: Some Aspects of Campanato’s Theory
Lemma C.1
Let \(R>0\), \(\frac{1}{2}\le \rho _0\le 2\) on \(B_R\), and assume that the coupling \(\pi \in {\varPi }(\rho _0, \rho _1)\) satisfies (7.8) for \(\alpha \in (0,1)\). For \(r<\frac{R}{2}\) let \(A_r = A_r(x_0, \pi )\in {\mathbb {R}}^{d\times d}\) and \(b_r = b_r(x_0, \pi )\in {\mathbb {R}}^d\) be the (unique) minimizers of
Then there exist \(A_0\in {\mathbb {R}}^{d\times d}\) and \(b_0\in {\mathbb {R}}^d\) such that \(A_r \rightarrow A_0\) and \(b_r \rightarrow b_0\) uniformly in \(x_0\) and the estimates (7.39) hold.
Proof of Lemma C.1
We only give the proof for \(A_r\), as the one for \(b_r\) is analogous. Without loss of generality we may assume that r is small enough that \(B_r(x_0)\subset B_R\). For further reference in the proof, let us mention the following estimates, obtained by equivalence of the \(L^{\infty }\) and \(L^2\) norms on the space of polynomials of degree one: Let \(P(x) = Ax + b\) for some \(A\in {\mathbb {R}}^{d\times d}\) and \(b\in {\mathbb {R}}^d\); then for any \(x_0\in {\mathbb {R}}^d\) and \(r>0\) there holds
Step 1 Define
We claim that for any \(k\in {\mathbb {N}}_0\) there holds
Indeed, since \(\pi \in {\varPi }(\rho _0, \rho _1)\) and \(B_{r 2^{-k-1}}(x_0) \subset B_{r 2^{-k}}(x_0)\), we may estimate
so that by the definition of \(A_r\) and \(b_r\) as minimizers and the definition (7.8) of \(\llbracket \pi \rrbracket _{\alpha }\), the bound (C.3) easily follows.
Step 2 We claim that for any \(i\in {\mathbb {N}}\),
Indeed, writing the difference as a telescopic sum, we may apply (C.1) to the polynomial \(P_{r 2^{-k}} - P_{r 2^{-k-1}}\) to obtain
so using that \(\frac{1}{2}\le \rho _0 \le 2\) on \(B_{r 2^{-k-1}}(x_0)\), the claim follows with (C.3).
Step 3 We show next that the sequence \(\{A_{r2^{-i}}\}_{i\in {\mathbb {N}}}\) converges as \(i\rightarrow \infty \) to a limit \(A_0\) independent of r.
Indeed, for any \(i>j\) we may use (C.4) to estimate
since the series \(\sum _{k=0}^{\infty } 2^{-k\alpha }\) converges. Hence, the sequence \(\{A_{r2^{-i}}\}_{i\in {\mathbb {N}}}\) is Cauchy and there exists \(A_0\in {\mathbb {R}}^{d\times d}\) such that \(A_{r2^{-i}} \rightarrow A_0\) as \(i\rightarrow \infty \).
To see the independence of the limit of r, let \(0<r<\rho <\frac{R}{2}\) be small enough. Then applying (C.1) to the function \(P_{r 2^{-j}} - P_{\rho 2^{-j}}\) for \(j\in {\mathbb {N}}\) gives
which can be bounded similarly to Step 1 to yield
The claimed inequality (7.39) now follows easily by letting \(i\rightarrow \infty \) in (C.4). \(\square \)
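The convergence mechanism of Steps 2–3, geometrically decaying increments along dyadic scales forcing a limit with a quantitative rate, can be illustrated numerically (a toy sketch with scalars standing in for the matrices \(A_{r2^{-k}}\); \(\alpha \) and the constant are example values):

```python
# Toy illustration of the dyadic Cauchy argument: if the increments satisfy
# |A_{k+1} - A_k| <= c 2^{-k alpha}, then A_k converges to some A_0 with
# |A_i - A_0| <= c 2^{-i alpha} / (1 - 2^{-alpha})  (geometric-series tail).

alpha, c = 0.5, 1.0
K = 60

A = [0.0]
for k in range(K):
    A.append(A[-1] + c * 2 ** (-k * alpha))    # worst-case increment at scale k

A0 = c / (1 - 2 ** (-alpha))                   # limit of the geometric series

for i in range(K):
    tail = c * 2 ** (-i * alpha) / (1 - 2 ** (-alpha))
    assert abs(A[i] - A0) <= tail + 1e-9       # rate matches the series tail

assert abs(A[K] - A0) < 1e-8                   # convergence to the limit
```

The same tail bound, applied to the matrix increments controlled by (C.4), is what yields the rate in (7.39).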
Appendix D: An \(L^p\) Bound on the Displacement for Almost-Minimizing Transport Maps
In this section we give a proof of the interior \(L^p\) estimate, for arbitrary \(p<\infty \), on the displacement of almost-minimizing transport maps:
Proposition D.1
Assume that \(\mu \) has a \({\mathscr {C}}^{0,\alpha }\) density f satisfying \(f(0) = 1\). Let T be an almost-minimizing transport map from \(\mu \) to \(\nu \) in the sense of (1.55) with a rate function \({\varDelta }_r \le 1\). Then there exists \(R_1 > 0\) such that, if
then for any \(p < \infty \),
Proof
Step 1 (A smooth transport map from \(\mu \) to the Lebesgue measure). We fix an open ball \(B_0\) centered at 0 such that \(B_0 \subset {{\,\mathrm{Spt}\,}}f\). Let S be the Dacorogna–Moser transport map from \(\mu \lfloor _{B_0}\) to \(\frac{\mu (B_0)}{|B_0|} {\mathrm {d}}x \lfloor _{B_0}\), which is \({\mathscr {C}}^{1, \alpha }\) on \(B_0\), see for example [24, Chapter 1]. By an affine change of variables in the target space, we may define for any \(x_0 \in X\) a map \(S_{x_0}\) with the same regularity as S that pushes forward \(\mu \lfloor _{B_0}\) to \(f(x_0) \, {\mathrm {d}}x \lfloor _{{\widetilde{B}}_0}\), where \({\widetilde{B}}_0\) is the image of \(B_0\) under this affine transformation, such that
In particular, for all \(x \in B_0\),
where C depends on S and therefore on f through the global Hölder seminorm \([f]_{\alpha }\). Moreover, since \(S^{-1}\) has the same regularity as S, \(S_{x_0}^{-1}\) is \({\mathscr {C}}^{1,\alpha }\) and for all \(y \in B_R(x_0)\),
Taking \(R_1 > 0\) small enough so that, first, \(B_{R_1} \subset B_0 \subset {{\,\mathrm{Spt}\,}}f\) and, second, \(C R_1^{\alpha } \le 1\), we may therefore assume, writing \(S_{x_0}(x) - x_0 = S_{x_0}(x) - x + x - x_0\) and \(S_{x_0}^{-1}(y) - x_0 = S_{x_0}^{-1}(y) - y + y - x_0\), that for all \(x_0 \in B_{R_1}\),
We note that \(R_1\) depends on f only through \([f]_{\alpha }\). In view of (D.1) and the condition \(f(0) = 1\), we may assume that
Step 2 (Use of almost-minimality at scale R). Let us denote the displacement by \(u:=T-{{\,\mathrm{Id}\,}}\). We claim that if \(R_1\) is small enough, then for any ball \(B := B_{12R}(x_0) \subset B_{R_1}\) and any set \(A \subset B_R(x_0)\), we have^{Footnote 36}
To see this, we start from the following pointwise identity, which holds for any map \({\varPhi }\):
which is (a deformation of) a standard identity used to derive c-monotonicity of an optimal transport map. As usual in optimal transportation, such a monotonicity property follows from considering a competitor that swaps points in the target space. However, because of our assumption of almost-minimality, we cannot work with a pointwise argument but have to consider small sets A of positive measure. We will do this by applying (D.7) with a map \({\varPhi }\) that swaps some parts of the graph of T. Indeed, assuming \({\varPhi }\) is a \(\mu \)-preserving involution such that \({\varPhi } = {{\,\mathrm{Id}\,}}\) outside of a set of diameter of order R, we may integrate (D.7) with respect to \(\mu \), so that the identities
combined with the almost-minimality of T at a scale of order R, yield
To implement this, we give ourselves a direction \(e \in B_2 \setminus B_1\), swap the (images through S of the)^{Footnote 37} sets
with their translates along e or \(e\), and average over all directions. For this we first define a map \(\phi \) via
Notice that \(\phi \) is well-defined because, in view of (D.4), we have \(S(A) \subset B_{2R}(x_0)\). Furthermore, we see that \(\phi = {{\,\mathrm{Id}\,}}\) outside of \(B_{6R}(x_0)\), and that \(\phi \) swaps \(S(A_+)\) with its translate \(S(A_+) + 2Re\) and \(S(A_-)\) with its translate \(S(A_-) - 2Re\), so that \(\phi \) is an involution; in particular, \(\phi \) preserves the Lebesgue measure. Combining the second point with the inclusion \(B_{6R}(x_0) \subset S_{x_0}(B)\) from (D.4), we get that \({\varPhi } := S^{-1} \circ \phi \circ S\) is the identity outside of B. From the third point, we see that \({\varPhi }\) is also an involution and \({\varPhi }_{\#} \mu = \mu \); it follows that the map \({\widetilde{T}} := T \circ {\varPhi }\) satisfies \({\widetilde{T}}_{\#} \mu = \nu \) and \({\widetilde{T}} = T\) outside of B. Thus, (D.8) holds.
Now, recalling that \(u = T-{{\,\mathrm{Id}\,}}\),
so using the fact that \(\phi = {{\,\mathrm{Id}\,}}\) everywhere except on S(A) and \((S(A_+) + 2Re) \cup (S(A_-) - 2Re)\), we have two terms to estimate. From the identity
and (D.3), we first have
We then estimate, using (D.3) and (D.5),
so that combining (D.8), (D.10) and (D.11) gives
It remains to integrate (D.12) with respect to \(e \in B_2 \setminus B_1\), using that
and the equivalence
which gives
From (D.13), (D.14) and the inclusion
we obtain
Finally, the transport condition \(\det DS = \frac{f}{f(x_0)}\) ensures that \(|S(A)| \lesssim |A|\), so that (D.12) yields
Choosing \(R_1\) so that \(C R_1^{\alpha } \le \frac{1}{2}\) and using (D.5), we obtain (D.6). We note again that the choice of \(R_1\) depends on f only through \([f]_{\alpha }\).
Step 3 We claim that for any set \(A \subset B_{\frac{R_1}{2}}\) with \(\text {diam} \, A \le R \ll R_1\) and any \(\beta > 0\), we have
Step 3.A. Let us show that for all \(\beta > 0\) and for any two balls \(B_r(x_0) \subset B_R(x_0) \subset B_{R_1}\),
To this end, we momentarily introduce
Applying Step 2 to the sets \(A := B_{\rho }(x_0)\) and \(B := B_{M \rho }(x_0)\), we obtain, provided \(12\rho \le M\rho \le R\),
so that, provided \(r\le \rho \le \frac{R}{M}\),
Fixing a large enough \(M \sim 1\), this yields in the range \(r\le \rho \le \frac{R}{M}\),
In the remaining range \(\frac{R}{M} \le \rho \le R\), we have
so that (D.17) turns into
which is (D.16).
Step 3.B. We prove (D.15). Applying the Cauchy–Schwarz inequality to (D.16), we obtain
so that, optimizing in R by choosing \(R = \left( \int _{B_{R_1}} |u|^2 \right) ^{\frac{1}{d+2}} \ll R_1\), which is possible by (D.1), we get, provided \(B_r(x_0) \subset B_{R_1}\),
In combination with Step 2, this yields (D.15).
Step 4. (Conclusion). We now have all the ingredients to prove the estimate (D.2).
Given a threshold \(M < \infty \) and a ball \(B \subset B_{\frac{R_1}{2}}\) of radius \(R \ll R_1\), we apply Step 3 to \(A := \{|u|>M\} \cap B\), to the effect that
Provided \(M \ge \left( \int _{B_{R_1}} |u|^2 \right) ^{\frac{1}{d+2}}\), thanks to (D.1), there are radii \(R \ll R_1\) for which the statement on the left-hand side of (D.18) holds. Hence, by covering \(B_{\frac{R_1}{4}}\) by these balls, we obtain
Note that (D.19) is trivially satisfied for \(M \le \left( \int _{B_{R_1}} |u|^2 \right) ^{\frac{1}{d+2}}\), so that with \(p := 1+\frac{1}{\beta }\),
holds for all M. This amounts to an estimate on the weak \(L^p\) norm of u on \(B_{\frac{R_1}{4}}\). Because of (D.1), we trivially have
so by interpolation, we obtain (D.2). \(\square \)
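The closing interpolation can be made explicit via the layer-cake formula (our sketch, with constants suppressed): a weak-\(L^p\) bound \(M^p\,|\{|u|>M\}\cap B_{R_1/4}| \lesssim 1\) yields strong \(L^q\) integrability for every \(q<p\), since, splitting at any level \(M_0>0\),

```latex
% Layer-cake sketch of the final interpolation (our reconstruction):
\int_{B_{R_1/4}} |u|^q \,\mathrm{d}x
  = q \int_0^{\infty} M^{q-1}\, \big|\{|u|>M\}\cap B_{R_1/4}\big| \,\mathrm{d}M
  \le q \int_0^{M_0} M^{q-1}\, |B_{R_1/4}| \,\mathrm{d}M
    + q\,C \int_{M_0}^{\infty} M^{q-1-p} \,\mathrm{d}M
  < \infty ,
```

because \(q-p<0\); as \(p<\infty \) was arbitrary, every exponent \(q<\infty \) in (D.2) is admissible.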
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Otto, F., Prod’homme, M. & Ried, T. Variational Approach to Regularity of Optimal Transport Maps: General Cost Functions. Ann. PDE 7, 17 (2021). https://doi.org/10.1007/s40818021001061
Keywords
 Optimal transportation
 \(\epsilon \)-regularity
 Partial regularity
 General cost functions
 Almost-minimality