Stability and guaranteed error control of approximations to the Monge--Amp\`ere equation

This paper analyzes a regularization scheme for the Monge--Amp\`ere equation by uniformly elliptic Hamilton--Jacobi--Bellman equations. The main tools are stability estimates in the $L^\infty$ norm from the theory of viscosity solutions which are independent of the regularization parameter $\varepsilon$. They allow for the uniform convergence of the solution $u_\varepsilon$ to the regularized problem towards the Alexandrov solution $u$ to the Monge--Amp\`ere equation for any nonnegative $L^n$ right-hand side and continuous Dirichlet data. The main application is guaranteed a posteriori error bounds in the $L^\infty$ norm for continuously differentiable finite element approximations of $u$ or $u_\varepsilon$.


Introduction
Overview. Let $\Omega \subset \mathbb{R}^n$, $n \ge 2$, be a bounded and convex domain. Given a nonnegative function $0 \le f \in L^n(\Omega)$ and continuous Dirichlet data $g \in C(\partial\Omega)$, the Monge--Amp\`ere equation seeks the unique (convex) Alexandrov solution $u \in C(\overline{\Omega})$ to
$$\det D^2 u = (f/n)^n \ \text{in } \Omega \quad\text{and}\quad u = g \ \text{on } \partial\Omega. \tag{1.1}$$
If the Dirichlet data $g$ is non-homogeneous, i.e., $g \neq 0$, then we additionally assume that $\Omega$ is strictly convex. The re-scaling $(f/n)^n$ of the right-hand side is not essential, but turns out to be convenient for purposes of notation. By the Alexandrov solution $u$ to (1.1) we mean a convex function $u \in C(\overline{\Omega})$ with $u = g$ on $\partial\Omega$ and
$$\mathcal{L}^n(\partial u(\omega)) = \int_\omega (f/n)^n \,\mathrm{d}x \quad\text{for any Borel subset } \omega \subset \Omega.$$
The left-hand side denotes the Monge--Amp\`ere measure of $\omega$, i.e., the $n$-dimensional Lebesgue measure $\mathcal{L}^n$ of the set of all vectors in the subdifferential $\partial u(\omega) := \bigcup_{x \in \omega} \partial u(x)$, where $\partial u(x)$ is the usual subdifferential of $u$ at a point $x$. We remark that this solution concept admits more general right-hand sides, which are, however, not considered in this work. For further details, we refer to the monographs [13, 11]. It is known [1] that the Alexandrov solution to (1.1) exists and is unique. In addition, it was shown [4] that if $f \in C^{0,\alpha}(\Omega)$, $0 < \lambda \le f \le \Lambda$, and $g \in C^{1,\beta}(\partial\Omega)$ with positive constants $0 < \alpha, \beta < 1$ and $0 < \lambda \le \Lambda$, then $u \in C(\overline{\Omega}) \cap C^{2,\alpha}_{\mathrm{loc}}(\Omega)$. It is known [14, 10] that (1.1) can be equivalently formulated as a Hamilton--Jacobi--Bellman (HJB) equation, a property that has turned out useful for the numerical solution of (1.1) [10, 12]; one of the reasons being that the latter is elliptic on the whole space of symmetric matrices $\mathbb{S} \subset \mathbb{R}^{n \times n}$ and, therefore, the convexity constraint is automatically enforced by the HJB formulation. For nonnegative continuous right-hand sides $0 \le f \in C(\overline{\Omega})$, the Monge--Amp\`ere equation (1.1) is equivalent to
$$F_0(f; x, D^2 u) = 0 \ \text{in } \Omega \quad\text{and}\quad u = g \ \text{on } \partial\Omega$$
with $F_0(f; x, M) := \sup_{A \in S(0)} (-A : M + f \sqrt[n]{\det A})$ for any $x \in \Omega$ and $M \in \mathbb{R}^{n \times n}$. Here, $S(0) := \{A \in \mathbb{S} : A \ge 0 \text{ and } \operatorname{tr} A = 1\}$ denotes the set of positive semidefinite symmetric matrices $A$ with unit trace $\operatorname{tr} A = 1$. Since $F_0$ is only degenerate elliptic, the regularization scheme proposed in [12] replaces $S(0)$ by a compact subset $S(\varepsilon) := \{A \in S(0) : A \ge \varepsilon\} \subset S(0)$ of matrices with eigenvalues bounded from below by the regularization parameter $0 < \varepsilon \le 1/n$. The solution $u_\varepsilon$ to the regularized PDE solves
$$F_\varepsilon(f; x, D^2 u_\varepsilon) = 0 \ \text{in } \Omega \quad\text{and}\quad u_\varepsilon = g \ \text{on } \partial\Omega, \tag{1.2}$$
where, for any $x \in \Omega$ and $M \in \mathbb{R}^{n \times n}$, the function $F_\varepsilon$ is defined as
$$F_\varepsilon(f; x, M) := \sup_{A \in S(\varepsilon)} (-A : M + f \sqrt[n]{\det A}). \tag{1.3}$$
In two space dimensions $n = 2$, uniformly elliptic HJB equations satisfy the Cordes condition [15], and this allows for a variational setting for (1.2) with a unique strong solution $u_\varepsilon \in H^2(\Omega)$ in the sense that $F_\varepsilon(f; x, D^2 u_\varepsilon) = 0$ holds a.e.
in $\Omega$ [18, 19]. The paper [12] establishes uniform convergence of $u_\varepsilon$ towards the generalized solution $u$ to the Monge--Amp\`ere equation (1.1) as $\varepsilon \to 0$ under the assumptions that $g \in H^2(\Omega) \cap C^{1,\alpha}(\overline{\Omega})$ and that $0 \le f \in L^2(\Omega)$ can be approximated from below by a pointwise monotone sequence of positive continuous functions.
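For illustration, the equivalence encoded in the sup-formulation $F_0$ can be checked numerically in $n = 2$: for a positive definite diagonal $M$, $F_0(f; x, M)$ vanishes precisely when $\det M = (f/2)^2$. The following minimal Python sketch is illustrative only (the matrix entries and the grid are our own choices, not part of the analysis); it uses that $\det A = t(1-t)$ is rotation invariant and that the rearrangement inequality reduces the supremum over $S(0)$ to a scalar problem in the eigenvalue $t$ of $A$.

```python
import numpy as np

# Sanity check of F_0(f; M) = sup_{A in S(0)} ( -A:M + f*sqrt(det A) ) in n = 2
# for a diagonal M = diag(m1, m2) > 0: the sup vanishes iff det M = (f/2)^2.
m1, m2 = 1.0, 4.0                       # illustrative eigenvalues of M
f = 2.0 * np.sqrt(m1 * m2)              # chosen so that det M = (f/2)^2

t = np.linspace(0.0, 1.0, 100001)       # eigenvalues (t, 1-t) of A in S(0)
# Both pairings of the eigenvalues of A with those of M cover all rotations:
objectives = [
    -(m1 * t + m2 * (1.0 - t)) + f * np.sqrt(t * (1.0 - t)),
    -(m2 * t + m1 * (1.0 - t)) + f * np.sqrt(t * (1.0 - t)),
]
sup_F0 = max(float(np.max(o)) for o in objectives)
print(sup_F0)                            # vanishes up to grid resolution
```

The maximum is attained at $t$ with $m_1 t = m_2(1-t)$ by the AM--GM inequality, in accordance with the degenerate ellipticity of $F_0$.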
Contributions of this paper. The variational setting of (1.2) in two space dimensions leads to $H^2$ stability estimates that deteriorate like $\varepsilon^{-1} \to \infty$ as the regularization parameter $\varepsilon \to 0$ vanishes. This can be explained by the regularity of Alexandrov solutions to the Monge--Amp\`ere equation (1.1): in general, they do not belong to $H^2(\Omega)$ without additional assumptions on the domain $\Omega$ and the data $f, g$. Consequently, error estimates in the $H^2$ norm may not be of interest, and the focus is on error estimates in the $L^\infty$ norm.
The analysis departs from the following $L^\infty$ stability estimate that arises from the Alexandrov maximum principle: any two solutions $u_1, u_2$ to (1.2) with data $f_1, g_1$ and $f_2, g_2$, respectively, satisfy
$$\|u_1 - u_2\|_{L^\infty(\Omega)} \le \|g_1 - g_2\|_{L^\infty(\partial\Omega)} + C(n, \operatorname{diam}(\Omega))\, \|f_1 - f_2\|_{L^n(\Omega)}. \tag{1.4}$$
The constant $C(n, \operatorname{diam}(\Omega))$ exclusively depends on the dimension $n$ and the diameter $\operatorname{diam}(\Omega)$ of $\Omega$, but not on the ellipticity constant of (1.2) or on the regularization parameter $\varepsilon$. Consequently, this allows for control of the $L^\infty$ error even as $\varepsilon \to 0$. By density of continuous functions in $L^n(\Omega)$, this leads to the following two applications. First, this paper establishes, in extension of [12], uniform convergence of (generalized) viscosity solutions $u_\varepsilon$ of the regularized PDE (1.2) to the Alexandrov solution $u \in C(\overline{\Omega})$ of the Monge--Amp\`ere equation (1.1) under the (essentially) minimal assumptions $0 \le f \in L^n(\Omega)$ and $g \in C(\partial\Omega)$ on the data. Second, (1.4) provides guaranteed error control in the $L^\infty$ norm (even for inexact solves) for $H^2$-conforming FEM.
Outline. The principal tool we use for establishing our results is the celebrated Alexandrov maximum principle. It provides an upper bound for the $L^\infty$ norm of any convex function in terms of its Monge--Amp\`ere measure.
The remaining parts of this paper are organized as follows. Section 2 establishes $L^\infty$ stability estimates for viscosity solutions to the HJB equation (1.2) for all parameters $0 \le \varepsilon \le 1/n$ in any space dimension. Section 3 provides a proof of convergence of the regularization scheme. A posteriori error estimates for the discretization error in the $L^\infty$ norm for $H^2$-conforming FEM are presented in Section 4. The three numerical experiments in Section 5 conclude this paper.
Standard notation for function spaces applies throughout this paper. Let $C^k(\Omega)$ for $k \in \mathbb{N}$ denote the space of scalar-valued $k$-times continuously differentiable functions. Given a positive parameter $0 < \alpha \le 1$, the Hölder space $C^{k,\alpha}(\Omega)$ is the subspace of $C^k(\Omega)$ of functions whose partial derivatives of order $k$ are Hölder continuous with exponent $\alpha$. For any set $\omega \subset \mathbb{R}^n$, $\chi_\omega$ denotes the indicator function associated with $\omega$. For $A, B \in \mathbb{R}^{n \times n}$, the Euclidean scalar product is given by $A : B := \sum_{j,k=1}^n A_{jk} B_{jk}$. The notation $|\cdot|$ also denotes the absolute value of a scalar or the length of a vector. The relation $A \le B$ of symmetric matrices $A, B \in \mathbb{S}$ holds whenever $B - A$ is positive semidefinite.

Stability estimate
We first recall the concept of viscosity solutions to the HJB equation (1.2).

Definition 2.1 (viscosity solution)
The following result provides the first tool in the analysis of this section.

Lemma 2.2 (classical comparison principle). Given $0 \le \varepsilon \le 1/n$ and a continuous right-hand side $f \in C(\overline{\Omega})$, where we assume $f \ge 0$ if $\varepsilon = 0$, let $v^* \in C(\overline{\Omega})$ resp. $v_* \in C(\overline{\Omega})$ be a super- resp. subsolution to the PDE
$$F_\varepsilon(f; x, D^2 v) = 0 \quad\text{in } \Omega. \tag{2.1}$$
If $v_* \le v^*$ on $\partial\Omega$, then $v_* \le v^*$ in $\Omega$.

Proof. The proof applies the arguments from [7, Section 3] to the PDE (2.1) and can follow [10, Lemma 3.6] with straightforward modifications; further details are therefore omitted.
An extended version of Lemma 2.2 reads as follows.

Lemma 2.3 (comparison principle). Given any
Proof. Given any test function $\varphi \in C^2(\Omega)$ and $x \in \Omega$ such that $v^* - \varphi$ has a local minimum at $x$, the equation $F_{\varepsilon^*}(f^*; x, D^2 v^*) = 0$ in the sense of viscosity solutions implies that $v^*$ is also a supersolution with respect to the data $f_*, \varepsilon_*$; hence Lemma 2.2 applies and concludes the proof.

The comparison principle from Lemma 2.2 allows for the existence and uniqueness of viscosity solutions to (1.2) by Perron's method.

(b) (interior regularity for HJB) If $\varepsilon > 0$ and $f \in C^{0,\alpha}(\Omega)$ with $0 < \alpha < 1$, then $u \in C(\overline{\Omega}) \cap C^{2,\kappa}_{\mathrm{loc}}(\Omega)$ with a constant $0 < \kappa < 1$ that solely depends on $\alpha$ and $\varepsilon$.

(c) (interior regularity for Monge--Amp\`ere)

Proof. On the one hand, an elementary reasoning as in the proof of Lemma 2.3 proves the claim for the viscosity solution $v^*$ to the Poisson equation; then [13, Proposition 1.3.4] implies the assertion in (a). The interior regularity in (b) is a classical result from [5, 17]. For the Monge--Amp\`ere equation, the interior regularity in (c) holds under the assumption that the Alexandrov solution $u$ is strictly convex [11, Corollary 4.43]. Sufficient conditions for this are that $f > 0$ is bounded away from zero and $g \in C^{1,\beta}(\partial\Omega)$ is sufficiently smooth [11, Corollary 4.11].

Some comments are in order before we state a precise version of the $L^\infty$ stability estimate (1.4) from the introduction. In general, such estimates arise from the Alexandrov--Bakelman--Pucci maximum principle for the uniformly elliptic Pucci operator, cf. [3] and the references therein for further details. However, the constant therein may depend on the ellipticity constant of $F_\varepsilon$ and, therefore, on $\varepsilon$. In the case of the HJB equation (1.2) that approximates the Monge--Amp\`ere equation (1.1) as $\varepsilon \to 0$, the Alexandrov maximum principle is the key argument to avoid a dependency on $\varepsilon$. Recall the constant $c_n$ from Lemma 1.1.
Proof. The proof is divided into two steps.

Step 1: The first step establishes (2.3) under the additional assumption of $C^{2,\alpha}_{\mathrm{loc}}(\Omega)$ regularity for some positive parameter $\alpha$ that (possibly) depends on $\varepsilon$. In particular, $w_k \in C^2(\Omega)$ is a classical solution to the PDE (2.5). We define the continuous function $v$; its boundary values on $\partial\Omega$ by design and the comparison principle from Lemma 2.2 provide the first bound. On the one hand, the zero function is a supersolution; hence, the comparison principle from Lemma 2.2 shows $w_k \le 0$ in $\Omega$. On the other hand, Proposition 2.4(a) proves that the Alexandrov solution obeys the asserted bound. A passage of the right-hand side to the limit as $k \to \infty$ concludes this step.

Step 2: The second step establishes (2.3) without the additional assumptions from Step 1. For the functions $f^*$ and $f_*$, the application of Step 1 to the viscosity solutions $v^*, v_*$ of (2.9)--(2.10) with $f_* \le f^*$ and $v_* \le v^*$ on $\partial\Omega$, and the identity $\max\{a, b\} - \min\{a, b\} = |a - b|$ reveal the desired estimate. The combination of this with (2.11) concludes (2.3).
The stability estimate from Theorem 2.5 motivates a solution concept for the HJB equation (1.2) with L n right-hand sides.
Lemma 2.6 (generalized viscosity solution). Given $f \in L^n(\Omega)$, $g \in C(\partial\Omega)$ and $0 \le \varepsilon \le 1/n$, where we assume $f \ge 0$ if $\varepsilon = 0$, there exists a unique function $u \in C(\overline{\Omega})$ such that $u$ is the uniform limit of any sequence $(u_j)_{j\in\mathbb{N}}$ of viscosity solutions $u_j \in C(\overline{\Omega})$ to
$$F_\varepsilon(f_j; x, D^2 u_j) = 0 \ \text{in } \Omega \quad\text{and}\quad u_j = g_j \ \text{on } \partial\Omega \tag{2.12}$$
for right-hand sides $f_j \in C(\overline{\Omega})$ and Dirichlet data $g_j \in C(\overline{\Omega})$ with $\lim_{j\to\infty} \|f - f_j\|_{L^n(\Omega)} = 0$ and $\lim_{j\to\infty} \|g - g_j\|_{L^\infty(\partial\Omega)} = 0$. The function $u$ is called the generalized viscosity solution to (1.2). If $\varepsilon = 0$ and $f \ge 0$, then the generalized viscosity solution to (1.2) and the Alexandrov solution to (1.1) coincide.
Proof. For any indices $j, k \in \mathbb{N}$, the stability estimate (2.4) from Theorem 2.5 provides
$$\|u_j - u_k\|_{L^\infty(\Omega)} \le \|g_j - g_k\|_{L^\infty(\partial\Omega)} + C(n, \operatorname{diam}(\Omega))\, \|f_j - f_k\|_{L^n(\Omega)}.$$
Since $(f_j)_{j\in\mathbb{N}}$ (resp. $(g_j)_{j\in\mathbb{N}}$) is a Cauchy sequence in $L^n(\Omega)$ (resp. $C(\partial\Omega)$), this implies that $(u_j)_{j\in\mathbb{N}}$ is a Cauchy sequence in the Banach space $C(\overline{\Omega})$ endowed with the $L^\infty$ norm. Therefore, there exists $u \in C(\overline{\Omega})$ with $\lim_{j\to\infty} \|u - u_j\|_{L^\infty(\Omega)} = 0$. It remains to prove that $u$ is independent of the choice of the approximation sequences for $f$ and $g$. To this end, let $(\tilde f_j)_{j\in\mathbb{N}}$ be another sequence of continuous functions. Then the sequence $(\tilde u_j)_{j\in\mathbb{N}}$ of viscosity solutions $\tilde u_j \in C(\overline{\Omega})$ to (2.12) with $f_j$ replaced by $\tilde f_j$ converges uniformly to some $\tilde u \in C(\overline{\Omega})$. The stability estimate (2.4) from Theorem 2.5 shows, for any $j \in \mathbb{N}$, an estimate whose right-hand side vanishes in the limit and whose left-hand side converges to $\|u - \tilde u\|_{L^\infty(\Omega)}$ as $j \to \infty$, whence $u = \tilde u$ in $\Omega$. If $f \ge 0$, then there exists a sequence $(f_j)_{j\in\mathbb{N}}$ of nonnegative continuous functions $0 \le f_j \in C(\overline{\Omega})$ with $\lim_{j\to\infty} \|f - f_j\|_{L^n(\Omega)} = 0$ (e.g., from convolution with a nonnegative mollifier). Proposition 2.4(a) provides, for all $j \in \mathbb{N}$, that the viscosity solution $u_j$ to (2.12) with $\varepsilon = 0$ is the Alexandrov solution to $\det D^2 u_j = f_j$ in $\Omega$. Since $u_j$ converges uniformly to the generalized viscosity solution $u$ to (1.2), the stability of Alexandrov solutions [11, Corollary 2.12 and Proposition 2.16] concludes that $u$ is the Alexandrov solution to (1.1).
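The mollification step invoked in the proof (approximation of a nonnegative $f$ by nonnegative continuous functions via convolution with a nonnegative mollifier) can be illustrated discretely as follows; the Gaussian kernel and the step-function sample data are illustrative choices, not part of the analysis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Discrete analogue of mollification with a nonnegative kernel:
# convolving nonnegative sample data with a Gaussian keeps it nonnegative,
# and the smoothed functions approach the data as the width shrinks.
x = np.linspace(0.0, 1.0, 512)
f_demo = np.where(x < 0.5, 1.0, 0.0)       # nonnegative, discontinuous

errs = []
for sigma in [8.0, 4.0, 2.0, 1.0]:         # kernel width in grid points
    f_sigma = gaussian_filter(f_demo, sigma, mode="nearest")
    assert f_sigma.min() >= 0.0            # nonnegativity is preserved
    errs.append(float(np.abs(f_demo - f_sigma).mean()))
print(errs)                                 # decreasing as sigma shrinks
```

Note that for discontinuous data the convergence holds in integral norms such as $L^n$, not uniformly, which is exactly the mode of convergence required by Lemma 2.6.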
By approximation of the right-hand sides, the stability estimates from Theorem 2.5 also apply to generalized viscosity solutions to the HJB equation (1.2).
Corollary 2.7 (extended $L^\infty$ stability). Given any $0 \le \varepsilon \le 1/n$, $f_j \in L^n(\Omega)$, where we assume $f_j \ge 0$ if $\varepsilon = 0$, and $g_j \in C(\partial\Omega)$, the generalized viscosity solutions $v_1, v_2$ to (1.2) satisfy the stability estimates (2.3)--(2.4).

Proof. For any index $j \in \{1, 2\}$, there exists a sequence $(f_{j,k})_{k\in\mathbb{N}}$ of smooth functions approximating $f_j$. The left-hand side of the resulting estimate converges to $\|v_1 - v_2\|_{L^\infty(\Omega)}$ by the definition of generalized viscosity solutions in Lemma 2.6. Hence, a passage to the limit concludes the proof.
The convexity of the differential operator $F_\varepsilon$ in $\mathbb{S}$ leads to existence (and uniqueness) of strong solutions $u_\varepsilon \in C(\overline{\Omega}) \cap W^{2,n}_{\mathrm{loc}}(\Omega)$ to (1.2) for any $\varepsilon > 0$, $f \in L^n(\Omega)$, and $g \in C(\partial\Omega)$ [3]. It turns out that strong solutions are generalized viscosity solutions. For the purpose of this paper, we only provide a weaker result.

Theorem 2.9 (strong solution implies generalized viscosity solution). Let $0 < \varepsilon \le 1/n$, $f \in L^n(\Omega)$, and $g \in C(\partial\Omega)$ be given. Suppose that $u_\varepsilon \in W^{2,n}(\Omega)$ is a strong solution to (1.2) in the sense that (1.2) is satisfied a.e. in $\Omega$. Then this strong solution $u_\varepsilon$ is the unique generalized viscosity solution to (1.2).
The proof of Theorem 2.9 utilizes the following elementary result.

Lemma 2.10 (computation and stability of right-hand side). Let $\varepsilon > 0$ be given. For any $M \in \mathbb{S}$, there exists a unique $\xi(M)$ with $F_\varepsilon(\xi(M); x, M) = 0$.

Proof. Given a symmetric matrix $M \in \mathbb{S}$, define the continuous real-valued function $\Psi_M(\xi) := F_\varepsilon(\xi; x, M)$. Since $\Psi_M$ is strictly monotonically increasing with the limits $\lim_{\xi\to-\infty} \Psi_M = -\infty$ and $\lim_{\xi\to\infty} \Psi_M = +\infty$, there exists a unique root $\xi(M)$ such that $\Psi_M(\xi(M)) = 0$. For any $M, N \in \mathbb{S}$, the inequality (2.15) follows from the maximum over $S(\varepsilon)$. Exchanging the roles of $M$ and $N$ in (2.15) concludes the proof.

Proof of Theorem 2.9. Let $v_j \in C^2(\overline{\Omega})$ be a sequence of smooth functions that approximate $u_\varepsilon$ with $\lim_{j\to\infty} \|u_\varepsilon - v_j\|_{W^{2,n}(\Omega)} = 0$. Lemma 2.10 proves that there exists a (unique) function $f_j := \xi(D^2 v_j)$ with $F_\varepsilon(f_j; x, D^2 v_j) = 0$ in $\Omega$. We apply the stability from Lemma 2.10 twice. Notice from the Sobolev embedding that $v_j$ converges uniformly to $u_\varepsilon$ in $\overline{\Omega}$ as $j \to \infty$. In conclusion, $u_\varepsilon$ is the uniform limit of classical (and in particular, viscosity) solutions $v_j$ such that the corresponding right-hand sides and Dirichlet data converge in the correct norms, i.e., $\lim_{j\to\infty} \|f - f_j\|_{L^n(\Omega)} = 0$ and $\lim_{j\to\infty} \|g - v_j\|_{L^\infty(\partial\Omega)} = 0$. Lemma 2.6 proves that $u_\varepsilon$ is the unique (generalized) viscosity solution.
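For illustration, the root $\xi(M)$ from Lemma 2.10 can be computed numerically: since $\Psi_M$ is the maximum of the strictly increasing affine maps $\xi \mapsto -A:M + \xi\sqrt[n]{\det A}$ over $A \in S(\varepsilon)$, its unique root equals the minimum of the individual roots, $\xi(M) = \min_{A \in S(\varepsilon)} (A:M)/\sqrt[n]{\det A}$. The following brute-force sketch in $n = 2$ is illustrative only (the sampling of $S(\varepsilon)$ and the test matrices are our own choices).

```python
import numpy as np

def xi(M, eps, nt=401, ntheta=100):
    """Brute-force sketch of the root xi(M) from Lemma 2.10 in n = 2.

    Psi_M(xi) = max_{A in S(eps)} (-A:M + xi*sqrt(det A)) is a maximum of
    strictly increasing affine functions of xi, so its unique root is
        xi(M) = min_{A in S(eps)} (A:M) / sqrt(det A).
    S(eps) is sampled as A = R(theta) diag(t, 1-t) R(theta)^T for
    t in [eps, 1-eps]; det A = t(1-t) is rotation invariant.
    """
    best = np.inf
    for th in np.linspace(0.0, np.pi, ntheta, endpoint=False):
        c, s = np.cos(th), np.sin(th)
        R = np.array([[c, -s], [s, c]])
        for t in np.linspace(eps, 1.0 - eps, nt):
            A = R @ np.diag([t, 1.0 - t]) @ R.T
            best = min(best, float(np.tensordot(A, M)) / np.sqrt(t * (1.0 - t)))
    return best

# For M = I (so u(x) = |x|^2/2): A:I = tr A = 1 and det A is maximal at
# A = I/2, hence xi(I) = 2 whenever eps <= 1/2; this is consistent with
# det D^2 u = 1 = (f/2)^2, i.e. f = 2.
print(xi(np.eye(2), eps=0.1))   # close to 2
```

The grid search is only meant to expose the structure of $\Psi_M$; a practical implementation would exploit the closed-form optimality condition over $S(\varepsilon)$ instead.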

Convergence of the regularization
This section establishes the uniform convergence of the generalized viscosity solution $u_\varepsilon$ of the regularized HJB equation (1.2) to the Alexandrov solution $u$ of the Monge--Amp\`ere equation (1.1) for any nonnegative right-hand side $0 \le f \in L^n(\Omega)$. The proof is carried out in any space dimension $n$ and does not rely on the concept of strong solutions in two space dimensions from [18, 19]. It departs from a main result of [12].

Theorem 3.1 (convergence of regularization for smooth data). Let $f \in C^{0,\alpha}(\Omega)$, $0 < \lambda \le f \le \Lambda$, and $g \in C^{1,\beta}(\partial\Omega)$ with positive constants $0 < \alpha, \beta < 1$ and $0 < \lambda \le \Lambda$ be given. Let $u \in C(\overline{\Omega}) \cap C^{2,\alpha}_{\mathrm{loc}}(\Omega)$ be the unique classical solution to (1.1) from Proposition 2.4(c).

A posteriori error estimate
In this section, we prove an a posteriori error bound for a given approximation $v_h$ to the Alexandrov solution $u$ of the Monge--Amp\`ere equation. In what follows, we assume a given finite partition $\mathcal{T}$ of $\Omega$ into closed polytopes such that the interiors of any distinct $T, K \in \mathcal{T}$ are disjoint and the union over $\mathcal{T}$ equals $\overline{\Omega}$. Let $V_h \subset C^{1,1}(\Omega)$ be a subspace of functions that belong to $C^2(T)$ when restricted to any set $T \in \mathcal{T}$ of the partition. (Here, $C^2$ up to the boundary of $T$ means that there exists a sufficiently smooth extension of the function.) In practical examples, we think of $V_h$ as a space of $C^1$-regular finite element functions. Given any $v \in C(\overline{\Omega})$, its convex envelope is defined as
$$\Gamma v(x) := \sup\{a(x) : a \text{ is affine with } a \le v \text{ in } \Omega\}.$$

Theorem 4.1 (guaranteed error control for Monge--Amp\`ere). Given a nonnegative right-hand side $f \in L^n(\Omega)$ and $g \in C(\partial\Omega)$, let $u \in C(\overline{\Omega})$ be the Alexandrov solution to (1.1). Let $v_h \in V_h$ with its convex envelope $\Gamma v_h$ be given and define the a posteriori bound $\mathrm{RHS}_0$ as in (4.2).

The proof of Theorem 4.1 requires the following result on the Monge--Amp\`ere measure of the convex envelope $\Gamma v_h$.

Lemma 4.2 (MA measure of the convex envelope). The convex envelope $\Gamma v_h$ of any $v_h \in V_h$ satisfies $\det D^2 \Gamma v_h = f_h \,\mathrm{d}x$ in the sense of Monge--Amp\`ere measures with the nonnegative function $f_h := \chi_{C_{v_h}} \det D^2_{\mathrm{pw}} v_h$.

Proof. The supporting affine function with slope $p$ touches $v_h$ at $x$ from below. We deduce $p = \nabla v_h(x)$ from the differentiability of $v_h$. The claim then follows from the corresponding characterization of the subdifferential. This formula implies that $\chi_{C_{v_h}} \det D^2_{\mathrm{pw}} v_h \ge 0$ is a nonnegative function a.e. in $\Omega$.

Consequently, $\mu_{\Gamma v_h}$ is the asserted measure.

The application of the stability estimate (2.4) from Corollary 2.7 on the convex subset provides a first estimate. Since $\Gamma v_h$ may only be continuous in the domain, a second estimate follows. The combination of the two previously displayed formulas concludes the proof.
We note that, for certain examples, the convex envelope $\Gamma v_h$ of an approximation $v_h$ is continuous up to the boundary.

Proposition 4.3 (continuity at the boundary). Let $v \in C^{0,1}(\overline{\Omega})$ be Lipschitz continuous such that $v|_{\partial\Omega}$ can be extended to a Lipschitz-continuous convex function.

Proof. We first prove the assertion for the homogeneous boundary condition $v|_{\partial\Omega} = 0$. Given any point $x \in \Omega$, let $x' \in \partial\Omega$ denote a best approximation of $x$ onto the boundary $\partial\Omega$ so that $|x - x'| = \operatorname{dist}(x, \partial\Omega)$. Define the affine function $a$; then $w$ is a convex function with $w \le v$ in $\Omega$ and $w = v$ on $\partial\Omega$. Let $(x_j)_j \subset \Omega$ be a sequence of points converging to some point $x \in \partial\Omega$ on the boundary. For a given $\gamma > 0$, there exists, from the uniform continuity of $v - w$ in the compact set $\overline{\Omega}$, a $\delta > 0$ such that the desired bound holds for sufficiently large $j$. The claim follows in combination with the triangle inequality and the Lipschitz continuity of $v$.

The theory of this paper also allows for a posteriori error control for the regularized HJB equation (1.2). We state this for the sake of completeness as, in general, it is difficult to quantify the regularization error $\|u - u_\varepsilon\|_{L^\infty(\Omega)}$.

Theorem 4.4 (guaranteed $L^\infty$ error control for uniformly elliptic HJB). Given a positive parameter $0 < \varepsilon \le 1/n$ and a $C^1$-conforming finite element function $v_h \in V_h$, there exists a unique $f_h \in L^\infty(\Omega)$ such that (4.3) holds. The viscosity solution $u_\varepsilon$ to (1.2) with right-hand side $f \in L^n(\Omega)$ and Dirichlet data $g \in C(\partial\Omega)$ satisfies the asserted bounds for any convex subset $\Omega' \Subset \Omega$.

Proof. As in the proof of Theorem 2.9, Lemma 2.10 provides a (unique) piecewise continuous and essentially bounded function $f_h := \xi(D^2_{\mathrm{pw}} v_h) \in L^\infty(\Omega)$ with (4.3). Theorem 2.9 shows that $v_h$ is the generalized viscosity solution to (4.3). Therefore, the stability estimates from Corollary 2.7 can be applied to $u_\varepsilon$ and $v_h$. First, the application of (2.4) to the subdomain $\Omega'$ instead of $\Omega$ leads to a first estimate. Second, the local estimate (2.3) with $\omega := \Omega \setminus \Omega'$ implies a second estimate. The combination of the two previously displayed formulas concludes the proof.
We point out that in both theorems of this section, it is possible to apply the stability estimate (2.3) to further subsets of Ω to localize the error estimator.

Numerical examples
In this section, we apply the theory from Section 4 to numerical benchmarks on the (two-dimensional) unit square domain $\Omega := (0,1)^2$.

5.1. Implementation. Some remarks on the practical realization precede the numerical benchmarks of this section.

5.1.1. Setup. Given a rectangular partition $\mathcal{T}$ of the domain $\Omega$ with the set $\mathcal{E}$ of edges, we choose $V_h$ to be the Bogner--Fox--Schmit finite element space [6]. It is the space of global $C^{1,1}(\Omega)$ functions that are bicubic when restricted to any element $T \in \mathcal{T}$. We compute the discrete approximation in $V_h$ by approximating the regularized problem (1.3) with a Galerkin method. In the two-dimensional setting, this yields a strongly monotone problem with a unique discrete solution $u_{h,\varepsilon}$ [12]. Since $v_h := u_{h,\varepsilon}$ is a $C^{1,1}(\Omega)$ function, we can apply Theorem 4.1 to obtain error bounds for $\|u - \Gamma v_h\|_{L^\infty(\Omega)}$, which motivates an adaptive scheme as outlined below.

5.1.2. Evaluation of the upper bound of Theorem 4.1. We proceed as follows for the computation of the right-hand side $\mathrm{RHS}_0$ of (4.2).

Integration of f
for any subset $\omega \subset \Omega$ is computed via numerical integration. Given a set of Gauss points $\mathcal{N}_\ell$ associated to the degree of exact integration, this reads (5.1) with some positive weight function (cf. the proof of Theorem 4.1). While this condition can be checked explicitly, it leads to a global problem for each Gauss point, which may become rather expensive. Instead, (5.2) is verified at only a finite number of points, e.g., $z \in \mathcal{V}_\ell := \mathcal{N}_\ell \cup \mathcal{N}_b$, where $\mathcal{N}_b \subset \partial\Omega$ is a discrete subset of $\partial\Omega$. The points in $\mathcal{V}_\ell$ create a quasi-uniform refinement $\mathcal{T}_\ell$ of the partition $\mathcal{T}$ into triangles, and we assume that the mesh-size of $\mathcal{T}_\ell$ tends to zero as $\ell \to \infty$. Let $I_\ell v_h$ denote the nodal interpolation of $v_h$ with respect to the mesh $\mathcal{T}_\ell$. We replace the function $\chi_{C_{v_h}}$ in (5.1) by the indicator function $\chi_{C_\ell v_h}$ of the set determined by (5.2). In practice, the numerical integration formula for $\|f - f_h\|_{L^2(\omega)}$ reads as in (5.3). The convex envelope $\Gamma I_\ell v_h$ of $I_\ell v_h$ can be computed, for instance, by the quickhull algorithm [2]. Therefore, it is straightforward to compute (5.3). We note that if $x \in C_{v_h} \cap \mathcal{N}_\ell$, then (5.2) holds for any $z \in \mathcal{V}_\ell$. Since the convex envelope of the continuous piecewise affine function $I_\ell v_h$ only depends on the nodal values of $v_h$, this implies $x \in C_\ell v_h \cap \mathcal{N}_\ell$. However, the reverse is not true. Hence, (5.3) and (5.1) may not coincide. From the uniform convergence of $I_\ell v_h$ to $v_h$ as $\ell \to \infty$, we deduce that the set where the two indicator functions differ has Lebesgue measure vanishing in the limit as $\ell \to \infty$. In conclusion, the limits of (5.1) and (5.3) coincide.
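For illustration, the computation of the convex envelope of a piecewise affine function from its nodal values, delegated above to the quickhull algorithm [2], can be sketched via the lower convex hull of the lifted nodal values; scipy's `ConvexHull` wraps the Qhull library, a quickhull implementation. The grid and the test functions below are illustrative choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

def convex_envelope_values(pts, vals):
    """Values of the convex envelope of the piecewise affine interpolant
    at the nodes pts (N x 2): the envelope is the pointwise maximum of the
    affine functions given by the lower facets of the convex hull of the
    lifted points (x, y, v)."""
    hull = ConvexHull(np.column_stack([pts, vals]))
    env = np.full(len(vals), -np.inf)
    for a, b, c, d in hull.equations:    # facet plane a*x + b*y + c*v + d = 0
        if c < -1e-12:                   # lower facets have downward normals
            env = np.maximum(env, -(a * pts[:, 0] + b * pts[:, 1] + d) / c)
    return env

# Nodes of a uniform 5 x 5 grid on the unit square (illustrative)
xx, yy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
pts = np.column_stack([xx.ravel(), yy.ravel()])

convex_vals = (pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2
assert np.allclose(convex_envelope_values(pts, convex_vals), convex_vals)

concave_vals = -convex_vals              # envelope is the constant -0.5
assert np.allclose(convex_envelope_values(pts, concave_vals), -0.5)
```

The two assertions reflect the defining properties: a convex nodal function is reproduced, while a concave one is flattened to the largest convex minorant determined by its boundary values.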

Computation of µ
Choice of $\Omega'$. Let $\delta := \min_{E \in \mathcal{E}} h_E$ denote the minimal edge length of the mesh $\mathcal{T}$. For all integers $0 \le j < 1/(2\delta)$, define $\Omega_{j\delta} := \{x \in \Omega : \operatorname{dist}(x, \partial\Omega) \ge j\delta\}$. It seems canonical to choose $\Omega' := \Omega_{j\delta}$, where $j$ is the index that minimizes $\mathrm{RHS}_0$. However, this choice may lead to significant computational effort. From the interior regularity of Alexandrov solutions [4], we can expect that the error is concentrated on the boundary and so, the best $j$ will be close to one. Accordingly, the smallest $j \ge 0$ is chosen so that $\mathrm{RHS}_0$ with $\Omega' := \Omega_{(j+1)\delta}$ is larger than $\mathrm{RHS}_0$ with $\Omega' := \Omega_{j\delta}$.

5.1.3. Adaptive marking strategy. We define the refinement indicator $\mu(T)$ for any $T \in \mathcal{T}$, where the scaling in $\delta$ arises from (4.2) with $n = 2$. Let $\sigma := \mathrm{RHS}_0 - \mu$ denote the remaining contributions of $\mathrm{RHS}_0$. If $\sigma$ dominates, then we mark one fifth of all boundary edges $E \in \mathcal{E}$ with the largest contributions $\|g - u_{h,\varepsilon}\|_{L^\infty(E)}$. Otherwise, we mark a set $\mathcal{M}$ of rectangles with minimal cardinality such that the marked refinement indicators sum to at least one half of the total.

5.1.4. Displayed quantities. The convergence history plots display the errors $\|u - u_{h,\varepsilon}\|_{L^\infty(\Omega)}$, $\mathrm{LHS} := \|u - \Gamma u_{h,\varepsilon}\|_{L^\infty(\Omega)}$ as well as the error estimator $\mathrm{RHS}_0$ against the number of degrees of freedom ndof in a log-log plot. (We note that ndof scales like $h_{\max}^{-2}$ on uniformly refined meshes.) Whenever the solution $u$ is sufficiently smooth, the errors $\|u - u_{h,\varepsilon}\|_{H^1(\Omega)}$ and $\|u - u_{h,\varepsilon}\|_{H^2(\Omega)}$ are also displayed. Solid lines in the convergence history plots indicate adaptive mesh-refinements, while dashed lines are associated with uniform mesh-refinements. The experiments are carried out with the regularization parameter $\varepsilon = 10^{-3}$ in the first two experiments and $\varepsilon = 10^{-4}$ in the third experiment. For a numerical comparison of various $\varepsilon$, we refer to [12].
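For illustration, the bulk marking of the adaptive strategy (the smallest set of cells whose refinement indicators sum to at least one half of the total) can be sketched as follows; the indicator values `eta` are illustrative, not computed from the experiments.

```python
import numpy as np

def mark_minimal_bulk(eta, theta=0.5):
    """Smallest set of cells whose refinement indicators eta sum to at
    least theta times the total: sort descending and take the shortest
    prefix whose cumulative sum reaches the threshold."""
    order = np.argsort(eta)[::-1]           # cells by decreasing indicator
    csum = np.cumsum(eta[order])
    k = int(np.searchsorted(csum, theta * csum[-1])) + 1
    return order[:k]

eta = np.array([0.1, 4.0, 0.5, 3.0, 2.0, 0.4])
marked = mark_minimal_bulk(eta)             # indices of cells to refine
print(sorted(int(i) for i in marked))       # the two largest indicators suffice
```

Sorting the indicators in decreasing order guarantees minimal cardinality of the marked set, matching the criterion stated above.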
Due to the additional regularization provided by the convex envelope, $\|f - f_h\|_{L^2(\Omega)}$ is concentrated at the points $(1/2, 0)$ and $(1/2, 1)$, but becomes very small after some mesh-refining steps. We even observed in Figure 2 that $\mathrm{LHS} = \mathrm{RHS}_0$ on two meshes, i.e., $\|f - f_h\|_{L^2(\Omega)} = 0$. Then $\mathrm{RHS}_0$ is dominated by the boundary data approximation error and leads to mesh refinements at the boundary. This may result in significant changes in the Monge--Amp\`ere measure of $\Gamma u_{h,\varepsilon}$, because the convex envelope of the discrete function $u_{h,\varepsilon}$ depends heavily on its values on the boundary in this class of problems. The function $u$ belongs to $C^2(\Omega) \cap H^{2-\delta}(\Omega)$ for all $\delta > 0$, but neither to $H^2(\Omega)$ nor $C^2(\overline{\Omega})$. The convergence history is displayed in Figure 5. Notice from Proposition 4.3 that $\mathrm{RHS}_0$ consists solely of the error in the Monge--Amp\`ere measures.
In this example, $f$ exhibits strong oscillations at the four corners of the domain $\Omega$, and the adaptive algorithm seems to refine solely towards these corners, as displayed in Figure 6. While $\mathrm{RHS}_0$ converges on uniform meshes (although at a slow rate), there is only a marginal reduction of $\mathrm{RHS}_0$ for the adaptive computation. We conclude that the discrete approximation cannot properly resolve the infinitesimal oscillations of the Monge--Amp\`ere measure of $u$. This results in the stagnation of $\|u - u_{h,\varepsilon}\|_{L^\infty(\Omega)}$ and $\mathrm{LHS}$ at an early level in comparison to uniform mesh refinements. However, we also observed that the stagnation point depends on the maximal mesh-size. In fact, if we start from an initial uniform mesh with a small mesh-size $h_0$, significant improvements of $\mathrm{RHS}_0$ are obtained on the first levels of adaptive mesh refinement, as displayed in Figure 7. Undisplayed experiments show the same behaviour for $\|u - u_{h,\varepsilon}\|_{L^\infty(\Omega)}$. This leads us to believe that, in this example, a combination of uniform and adaptive mesh-refining strategies provides the best results.

Lemma 1.1 (Alexandrov maximum principle). There exists a constant $c_n$ solely depending on the dimension $n$ such that any convex function $v \in C(\overline{\Omega})$ with homogeneous boundary data $v|_{\partial\Omega} = 0$ over an open bounded convex domain $\Omega$ satisfies
$$|v(x)|^n \le c_n \operatorname{diam}(\Omega)^{n-1} \operatorname{dist}(x, \partial\Omega)\, \mathcal{L}^n(\partial v(\Omega)) \quad\text{for all } x \in \Omega.$$

Figure 3. Discrete solution on a uniform mesh with 4225 nodes.

Figure 6. Adaptive mesh with 1351 nodes for the third experiment.