On the numerical approximation of vectorial absolute minimisers in $L^\infty$

Let $\Omega \subseteq \mathbb R^n$ be an open set. We consider the supremal functional \[ \tag{1} \label{1} \ \ \ \ \ \ \mathrm{E}_\infty (u,\mathcal{O})\, :=\, \| \mathrm D u \|_{L^\infty( \mathcal{O} )}, \ \ \ \mathcal{O} \subseteq \Omega \text{ open}, \] applied to locally Lipschitz mappings $u : \mathbb R^n \supseteq \Omega \longrightarrow \mathbb R^N$, where $n,N\in \mathbb N$. This is the model functional of Calculus of Variations in $L^\infty$. The area is developing rapidly, but the vectorial case $N\geq 2$ is still poorly understood. Owing to the non-local nature of \eqref{1}, standard minimisers are not truly optimal. The notion of so-called absolute minimisers is the primary variational concept in this direction; however, absolute minimisers cannot be obtained by direct minimisation, and the question of their existence under prescribed boundary data is open when $n,N\geq 2$. Herein we present numerical experiments based on a new method recently proposed by the first author in the papers [33, 35].


Introduction
Let $n, N \in \mathbb N$ be fixed integers and $\Omega$ an open set in $\mathbb R^n$. In this paper we perform numerical experiments and make some theoretical observations regarding appropriately defined minimisers of the functional \[ \tag{1.1} \mathrm{E}_\infty (u,\mathcal{O})\, :=\, \| \mathrm D u \|_{L^\infty( \mathcal{O} )}, \quad \mathcal{O} \subseteq \Omega \text{ open}, \] applied to locally Lipschitz mappings $u : \Omega \longrightarrow \mathbb R^N$. The area of Calculus of Variations in $L^\infty$ has a relatively short history in Analysis, pioneered by G. Aronsson in the 1960s, see e.g. [4,5,6]. Nonetheless, since its inception it has attracted the interest of many mathematicians, due both to its theoretical importance (see e.g. [2,9,10,15,25,19,39] and the expository texts [7,14,28]) and to its relevance to various and diverse applications (see for instance [11,13,18,38,8,12,26]).
One of the initial difficulties in the study of (1.1) is that, quite surprisingly, standard (global) minimisers are no longer truly optimal objects, in the sense that "they can be minimised even further" and in general may not solve any kind of Euler-Lagrange equations. Namely, there are many simple examples, even in one space dimension, in which a minimiser does not minimise with respect to its own boundary conditions on subdomains; for integral functionals the latter property is of course automatic. In essence, this owes to the "non-local" nature of (1.1): although one can view (1.1) as the limit of the p-Dirichlet functional as p → ∞, the σ-additivity properties of the Lebesgue integral are lost in the extreme case p = ∞. This was already realised by Aronsson, who introduced the following concept.
Definition 1 (Absolute minimiser). A map $u \in W^{1,\infty}_{\mathrm{loc}}(\Omega;\mathbb R^N)$ is an absolute minimiser of (1.1) if \[ \mathrm{E}_\infty(u, \mathcal O) \,\leq\, \mathrm{E}_\infty(u + \phi, \mathcal O), \] for any open set $\mathcal O \subseteq \Omega$ and any variation $\phi \in W^{1,\infty}_0(\mathcal O;\mathbb R^N)$.

Although the area is developing rapidly, the vectorial case $N \geq 2$ is still poorly understood. In particular, to date neither the existence nor the uniqueness of absolute minimisers with prescribed boundary data on $\partial\Omega$ is known unless $\min\{n, N\} = 1$, namely either when $n = 1$ (curves in $\mathbb R^N$) or $N = 1$ (scalar functions on $\mathbb R^n$); this is true even in more general situations where one has a general Hamiltonian $H(\cdot, u, \mathrm Du)$ in place of $|\mathrm Du|$. The difficulty lies in the fact that, for the "localised" variational concept of Definition 1, the standard approach of direct minimisation ([17,22,23]) of the functional (1.1) in the affine space $W^{1,\infty}_{u_0}(\Omega;\mathbb R^N) := u_0 + W^{1,\infty}_0(\Omega;\mathbb R^N)$, for a fixed $u_0 \in W^{1,\infty}(\Omega;\mathbb R^N)$, does not in general yield absolute minimisers, but merely global minimisers which minimise on $\Omega$ only. Hence, in the absence of alternative methods, the main vectorial tool is to resort to $L^p$ approximations and then let $p \to \infty$. To this end, let $(u_p)_p$ be the family of $p$-minimisers in $W^{1,p}_{u_0}(\Omega;\mathbb R^N)$. The problem one encounters in attempting to pass to the limit as $p \to \infty$ is that the weak* sequential compactness $u_p \overset{\ast}{\rightharpoonup} u_\infty$ (along a subsequence), which one easily has, is not strong enough to guarantee that the limit is an absolute minimiser. This obstacle can be bypassed when $\min\{n, N\} = 1$, either because of the scalar nature of the competing functions and the availability of comparison methods, or because of the one-dimensionality of their domain of definition (see for instance [9,29,1]), although the mode of convergence itself cannot in general be strengthened. In particular, the functional (1.1) is only weakly* lower semi-continuous, not weakly* continuous.
Herein we present and perform numerical experiments based on a new method recently proposed by the first author in the joint papers [33,35], where various existence and uniqueness results have been obtained. This approach is motivated by the paper [19] and relates to developments in mass transportation (see e.g. [20,21,41]), which also underpin the approach followed in the recent paper [32], albeit in different guises, since the higher-order case treated therein is quite special. We refrain from giving any details now, and instead expound the main ideas of this method in some detail in Section 2.
The numerical method we employ is a finite element approximation based on earlier work of Barrett and Liu on numerical methods for elliptic systems [BL]. Therein the authors prove that, for a fixed exponent p, the method converges to the respective p-harmonic mapping under certain regularity assumptions on the solution. We stress that significant care must be taken with numerical computations using this approach, because the underlying nonlinear system is ill-conditioned. This owes to the nonlinearity of the problem, which grows exponentially with p. Work to overcome this issue includes, for example, that of Huang, Li and Liu [24], where preconditioners based on gradient descent algorithms are designed and shown to work well for p up to 1000.
Our mechanism for designing approximations of absolute minimisers is based on the approximation of the L ∞ minimisation problem by a respective L p one. We then utilise the approach described in [40,30,31], where it was shown that, by forming an appropriate limit, we are able to select candidates for numerical approximation along a "good" sequence of solutions, namely the p-harmonic mappings.
The purpose of this work is to demonstrate some key properties of vectorial absolute minimisers using an analytically justifiable numerical scheme which currently is the only technique available to give insight into the limiting vector-valued problem. We note that our goal is not to construct an efficient approximation method for vectorial absolute minimisers; indeed this indirect approximation of the limiting problem is not computationally efficient.
We conclude by noting that, although the concept of absolute minimisers is the primary contender among variational concepts, it is not the unique candidate in the vectorial case of N ≥ 2. This is connected to the fact that, when N ≥ 2, the associated Euler-Lagrange ∞-Laplace system admits smooth non-minimising solutions, which are characterised by two distinct variational concepts; see e.g. [3,27,34].
Conflict of interest. The authors declare that there is no conflict of interest.

Approximating absolute minimisers through p-concentration measures
In this section we present the recently proposed method to construct absolute minimisers. The ideas herein are taken from the papers in preparation [35,33]. For the clarity of this discussion, suppose that $u \in C^1(\Omega;\mathbb R^N)$. The central point of this approach is to bypass the difficulties caused by the lack of continuity of the essential supremum with respect to weak convergence, by attempting to write the supremum as an integral, but with respect to a different, rather unusual measure, namely \[ \tag{2.1} \mathrm{E}_\infty(u,\mathcal O) \,=\, \int_{\overline{\mathcal O}} |\mathrm D u| \,\mathrm d\sigma^{\mathcal O}, \] for a probability measure $\sigma^{\mathcal O}$ on $\overline{\mathcal O}$, and then choose the Dirac mass $\sigma^{\mathcal O} := \delta_{x_{\mathcal O}}$ at a maximum point $x_{\mathcal O}$ of $|\mathrm D u|$ on $\overline{\mathcal O}$. However, this condition by itself does not suffice for our purposes and has to be coupled with another condition. Firstly, in order to utilise duality arguments with ease, let us modify (2.1) to its $L^2$ variant \[ \tag{2.2} \|\mathrm D u\|^2_{L^\infty(\mathcal O)} \,=\, \int_{\overline{\mathcal O}} |\mathrm D u|^2 \,\mathrm d\sigma^{\mathcal O}, \] and suppose in addition that the matrix-valued Radon measure $\mathrm D u \, \sigma^{\mathcal O}$ is divergence-free on $\mathcal O$, namely that \[ \tag{2.3} \mathrm{div}\big( \mathrm D u \, \sigma^{\mathcal O} \big) \,=\, 0 \ \text{ on } \mathcal O. \] The PDE (2.3) is to be understood in a sense stronger than the usual distributional sense, due to the emergence of the boundary. In fact, (2.3) has to be interpreted as \[ \int_{\overline{\mathcal O}} \mathrm D u : \mathrm D \phi \,\mathrm d\sigma^{\mathcal O} \,=\, 0 \quad \text{for all } \phi \in C^1_0(\overline{\mathcal O};\mathbb R^N), \] namely in the space of those $C^1$ test maps $\phi : \mathcal O \longrightarrow \mathbb R^N$ which vanish on $\partial\mathcal O$ and extend continuously, together with their derivative, to $\overline{\mathcal O}$ (but whose derivative need not vanish on $\partial\mathcal O$).
Although the pair of conditions (2.2)-(2.3) may seem ad hoc and unmotivated, the payoff is that it is quite easy to see that if a $C^1$ map $u$ satisfies (2.2)-(2.3), then in fact $u$ is an absolute minimiser in the sense of Aronsson (Definition 1). Indeed, fix a test function $\phi \in C^1_0(\mathcal O;\mathbb R^N)$ and for simplicity suppose that $\partial\mathcal O$ has zero Lebesgue $n$-measure. Then, by (2.2), (2.3) and the Cauchy-Schwarz inequality, we have \[ \|\mathrm D u\|^2_{L^\infty(\mathcal O)} \,=\, \int_{\overline{\mathcal O}} \mathrm D u : \mathrm D(u+\phi) \,\mathrm d\sigma^{\mathcal O} \,\leq\, \Big( \int_{\overline{\mathcal O}} |\mathrm D u|^2 \,\mathrm d\sigma^{\mathcal O} \Big)^{\!1/2} \Big( \int_{\overline{\mathcal O}} |\mathrm D(u+\phi)|^2 \,\mathrm d\sigma^{\mathcal O} \Big)^{\!1/2} \leq\, \|\mathrm D u\|_{L^\infty(\mathcal O)} \, \|\mathrm D (u+\phi)\|_{L^\infty(\mathcal O)}. \] Hence, $u$ is indeed minimising on $\mathcal O$, at least among smooth variations. The general case then requires some approximation arguments, appropriately adapted to $L^\infty$. What is less clear is how one can actually prove the existence of such objects in $W^{1,\infty}_{u_0}(\Omega;\mathbb R^N)$ with given boundary conditions. The idea is to use $L^p$ approximations in the following way: for each $p > n$, consider the minimisation problem \[ \tag{2.4} \min \Big\{ \|\mathrm D v\|_{L^p(\Omega)} \,:\, v \in W^{1,p}_{u_0}(\Omega;\mathbb R^N) \Big\}. \] By standard variational arguments (see e.g. [17,22]), this problem has a solution $u_p$, which is a weak solution of the celebrated $p$-Laplace system \[ \Delta_p u \,:=\, \mathrm{div}\big( |\mathrm D u|^{p-2} \mathrm D u \big) \,=\, 0. \] In particular, this means that for any $\mathcal O \Subset \Omega$ and any $\phi \in W^{1,p}_0(\mathcal O;\mathbb R^N)$, $u_p$ satisfies \[ \tag{2.5} \int_{\mathcal O} |\mathrm D u_p|^{p-2}\, \mathrm D u_p : \mathrm D\phi \,\mathrm dx \,=\, 0. \] By defining the (absolutely continuous) probability measure \[ \tag{2.6} \sigma^{\mathcal O}_p \,:=\, \frac{ |\mathrm D u_p|^{p-2} \,\mathcal L^n \llcorner \mathcal O }{ \int_{\mathcal O} |\mathrm D u_p|^{p-2} \,\mathrm dx }, \] we may rewrite (2.5) as \[ \tag{2.7} \int_{\overline{\mathcal O}} \mathrm D u_p : \mathrm D\phi \,\mathrm d\sigma^{\mathcal O}_p \,=\, 0. \] By juxtaposing (2.7) with (2.3), one sees that the task is to devise appropriate analytic tools in order to pass, in an appropriate weak* sense, to a limit pair $(\mathrm D u_\infty, \sigma^{\mathcal O}_\infty)$ as $p \to \infty$. This is particularly challenging: on the one hand, mass might be lost towards the boundary $\partial\mathcal O$, so one has to consider measures on $\overline{\mathcal O}$ rather than on $\mathcal O$, even though by definition $\sigma^{\mathcal O}_p(\partial\mathcal O) = 0$. On the other hand, the main cause of additional ramifications is the possible "oscillations" of the pair of weakly* converging objects $(\mathrm D u_p, \sigma^{\mathcal O}_p)$, and one needs particular "compensated compactness" mechanisms to pass to the limit and show that the oscillations occur in such a way that they cancel each other. Despite being highly non-obvious, the above approach can indeed be made to work; this is done in [35,33].
It remains to show that, at least formally, one also obtains (2.2) in the limit as $p \to \infty$. To see this, note that for any $\alpha \in (0, 1)$, directly from the definition of $\sigma^{\mathcal O}_p$ we have an estimate of the form \[ \sigma^{\mathcal O}_p \Big( \big\{ |\mathrm D u_p| \leq \alpha \, \|\mathrm D u_p\|_{L^\infty(\mathcal O)} \big\} \Big) \longrightarrow 0, \quad \text{as } p \to \infty. \] Since one can check with simple arguments that \[ \|\mathrm D u_p\|_{L^p(\mathcal O)} \longrightarrow \|\mathrm D u_\infty\|_{L^\infty(\mathcal O)}, \quad \text{as } p \to \infty \] (with semi-continuity methods similar to e.g. those in [28, Ch. 8]), we see that, at least formally, \[ \tag{2.8} \int_{\overline{\mathcal O}} |\mathrm D u_\infty|^2 \,\mathrm d\sigma^{\mathcal O}_\infty \,=\, \|\mathrm D u_\infty\|^2_{L^\infty(\mathcal O)}. \] Equality (2.8) implies that the limit probability measure $\sigma^{\mathcal O}_\infty$ is supported in the arg-max set \[ \Big\{ x \in \overline{\mathcal O} \,:\, |\mathrm D u_\infty(x)| = \|\mathrm D u_\infty\|_{L^\infty(\mathcal O)} \Big\}. \] Hence, (2.2) indeed follows.
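The concentration mechanism behind (2.6)-(2.8) can be visualised numerically. The following self-contained sketch (our own illustration, not the paper's scheme; the model map, grid size and threshold α = 0.9 are our choices) discretises the normalised density $|\mathrm Du|^{p-2} / \int |\mathrm Du|^{p-2}$ for the explicit map $u(x,y) = x^2$ on the unit square, where $|\mathrm Du(x,y)| = 2x$ is maximised on the edge $\{x = 1\}$, and shows the σ_p-mass of a sublevel set of $|\mathrm Du|$ vanishing as p grows.

```python
# Illustration (not the paper's scheme): how the normalised density
# |Du|^(p-2) / ∫|Du|^(p-2) concentrates on the arg-max set of |Du| as
# p grows. Model map u(x, y) = x^2 on the unit square, so |Du| = 2x,
# which is maximised on the edge {x = 1}.
import numpy as np

n = 200
x, y = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n))
grad_norm = 2.0 * x                      # |Du| for u(x, y) = x^2

def sublevel_mass(p, alpha=0.9):
    """sigma_p-mass of the sublevel set {|Du| <= alpha * max |Du|}."""
    density = (grad_norm / grad_norm.max()) ** (p - 2)   # rescaled for stability
    density /= density.sum()                             # normalise to a probability
    return density[grad_norm <= alpha * grad_norm.max()].sum()

masses = [sublevel_mass(p) for p in (2, 4, 10, 20, 100)]
print(masses)   # decreasing towards 0: the mass escapes to the arg-max set
```

For p = 2 the measure is essentially uniform, while for p = 100 almost all mass sits near the maximising edge, mirroring the support statement after (2.8).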

Numerical Approximations of p-Concentration Measures
In this section we perform several numerical experiments in both the scalar and the vectorial case for various appropriately selected boundary conditions. These experiments demonstrate in a concrete fashion the plethora of possible behaviours of the limit measures σ O ∞ , depending on the shape of O and its position in relation to the level sets of the modulus of the gradient |Du ∞ |.
The method we use is a conforming finite element discretisation of the p-Laplace system, analysed in [BL] for fixed p. We will describe the discretisation and summarise extensive numerical experiments aimed at quantifying the behaviour of the limit measure $\sigma^{\mathcal O}_\infty$. We refer to [40,30,31] for analytic justification. We let $\mathcal T$ be an admissible triangulation of $\Omega$, namely, $\mathcal T$ is a finite collection of sets such that (1) $K \in \mathcal T$ implies $K$ is an open triangle, (2) for any $K, J \in \mathcal T$, the intersection $\overline K \cap \overline J$ is either empty, a common vertex, a common edge, or the whole of $\overline K$ and $\overline J$, and (3) $\bigcup_{K \in \mathcal T} \overline K = \overline\Omega$.
The shape-regularity constant of $\mathcal T$ is defined as the number \[ \tag{3.1} \mu(\mathcal T) \,:=\, \max_{K \in \mathcal T} \frac{h_K}{\rho_K}, \] where $\rho_K$ is the radius of the largest ball contained inside $K$ and $h_K$ is the diameter of $K$. An indexed family of triangulations $\{\mathcal T_n\}_n$ is called shape-regular if \[ \tag{3.2} \sup_n \mu(\mathcal T_n) \,<\, \infty. \] Further, we define $h : \Omega \to \mathbb R$ to be the piecewise constant meshsize function of $\mathcal T$, given by \[ \tag{3.3} h|_K \,:=\, h_K \quad \text{for each } K \in \mathcal T. \] A mesh is called quasiuniform when there exists a positive constant $C$ such that $\max_{x\in\Omega} h(x) \leq C \min_{x\in\Omega} h(x)$. In what follows we shall assume that all triangulations are shape-regular and quasiuniform.
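The shape quantity $h_K/\rho_K$ for a single triangle is easy to compute directly, using the standard identity inradius = 2·area/perimeter. The helper below is our own sketch (the function name and test triangles are ours, not from the paper):

```python
# A minimal sketch (our own helper, not from the paper): the shape
# quantity h_K / rho_K for a single triangle K, where h_K is the
# diameter (longest edge) and rho_K the inradius, computed via
# rho = 2 * area / perimeter.
import math

def shape_ratio(a, b, c):
    """h_K / rho_K for the triangle with vertices a, b, c (2-tuples)."""
    edges = [math.dist(a, b), math.dist(b, c), math.dist(c, a)]
    h = max(edges)
    # area via the cross product, then the inradius formula
    area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1]))
    rho = 2.0 * area / sum(edges)
    return h / rho

# An equilateral triangle attains the optimal ratio 2 * sqrt(3) ~ 3.46;
# nearly flat ("degenerate") triangles send the ratio to infinity.
print(shape_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))
print(shape_ratio((0, 0), (1, 0), (0.5, 0.05)))   # nearly flat: much larger
```

Shape regularity of a family of meshes is precisely a uniform bound on this ratio over all triangles and all meshes in the family.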
We let $\mathbb P^1(\mathcal T)$ denote the space of piecewise linear polynomials over the triangulation $\mathcal T$, i.e., \[ \tag{3.4} \mathbb P^1(\mathcal T) \,=\, \big\{ \phi \,:\, \phi|_K \in \mathbb P^1(K) \ \text{for each } K \in \mathcal T \big\}, \] and introduce the finite element space \[ \tag{3.5} \mathbb V \,:=\, \mathbb P^1(\mathcal T) \cap C^0(\Omega), \] the usual space of continuous piecewise linear polynomial functions.
3.1. Galerkin discretisation. We consider the Galerkin discretisation of the p-Laplace system: find $U \in [\mathbb V]^N$ with $U|_{\partial\Omega} = I_h g$, the piecewise Lagrange interpolant of the boundary datum $g$, such that \[ \tag{3.6} \int_\Omega |\mathrm D U|^{p-2} \, \mathrm D U : \mathrm D \Phi \,\mathrm dx \,=\, 0 \quad \text{for all } \Phi \in [\mathbb V]^N \text{ vanishing on } \partial\Omega. \] Further, for fixed $p$, let $\{U_p\}$ be the family of finite element approximations generated by solving (3.6) (indexed by $h$) and $u_p$ the weak solution of the p-Laplace system; then we have that \[ \tag{3.9} U_p \longrightarrow u_p \ \text{ in } C^0(\Omega; \mathbb R^N), \ \text{ as } h \to 0. \]

3.2. Numerical experiments. Our numerical approximations are obtained using Galerkin approximations of the p-Laplacian for sufficiently high values of p. We focus on studying the behaviour of solutions as p increases, which allows us to make various conjectures on the behaviour of their asymptotic limit as p → ∞. The computation of p-harmonic mappings is an extremely challenging problem in its own right. The nature of the nonlinearity in the problem results in the algebraic system which ultimately yields the finite element solution being extremely badly conditioned. One method of tackling this class of problems is to use preconditioners based on descent algorithms [24]. For extremely large p, say p ≥ 10000, this may be required; however, for our purposes we restrict our attention to p ∼ 100, which yields sufficient accuracy for the results we are illustrating.
We emphasise that even the case p ∼ 100 is computationally difficult to handle. The numerical approximation we use is based on a damped Newton solver. As is well known, Newton solvers require a sufficiently close initial guess in order to converge. A reasonable initial guess for the p-Laplacian is given by numerically approximating with the q-Laplacian for q < p sufficiently close to p. This leads to an iterative process in the generation of the initial guess, i.e., we solve the 2-Laplacian to obtain an initial guess for the 3-Laplacian, which in turn serves as an initial guess for the 4-Laplacian, and so on. In each of our experiments, a relative tolerance of 10^{-8} was achieved in fewer than 20 nonlinear iterations.
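The continuation strategy just described can be sketched in one space dimension. The following toy solver is our own illustration, not the paper's 2-D finite element code: it solves a regularised p-Poisson problem $-((u'^2+\varepsilon)^{(p-2)/2}u')' = 1$ on $(0,1)$ with zero boundary values using P1 elements and a damped Newton method, with the solution for each p serving as the initial guess for p+1; the mesh size, tolerances and regularisation $\varepsilon$ are our choices.

```python
# A 1-D toy version (our own sketch, not the paper's 2-D solver) of
# p-continuation with a damped Newton method for the regularised problem
#   -((u'^2 + eps)^((p-2)/2) u')' = 1 on (0,1),  u(0) = u(1) = 0.
import numpy as np

m = 64                       # number of elements
h = 1.0 / m
eps = 1e-10                  # regularisation of the degenerate flux

def residual(u, p):
    s = np.diff(np.concatenate(([0.0], u, [0.0]))) / h      # element gradients
    flux = (s * s + eps) ** ((p - 2) / 2) * s
    return flux[:-1] - flux[1:] - h                         # load f = 1

def jacobian(u, p):
    s = np.diff(np.concatenate(([0.0], u, [0.0]))) / h
    dflux = ((s * s + eps) ** ((p - 2) / 2)
             + (p - 2) * s * s * (s * s + eps) ** ((p - 4) / 2)) / h
    J = np.diag(dflux[:-1] + dflux[1:])
    J -= np.diag(dflux[1:-1], 1) + np.diag(dflux[1:-1], -1)
    return J

u = np.zeros(m - 1)
for p in range(2, 101):                  # continuation: p = 2, 3, ..., 100
    for _ in range(50):                  # damped Newton for the current p
        F = residual(u, p)
        if np.linalg.norm(F) < 1e-10:
            break
        step = np.linalg.solve(jacobian(u, p), F)
        t = 1.0
        while np.linalg.norm(residual(u - t * step, p)) > np.linalg.norm(F) and t > 1e-6:
            t /= 2.0                     # damping by backtracking
        u -= t * step

# As p -> infinity the solution approaches the distance function
# x -> min(x, 1 - x), whose maximum is 0.5.
print(u.max())
```

Without the continuation in p, Newton started from zero at p = 100 typically stagnates; with it, each step needs only a handful of iterations, consistent with the behaviour reported above.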
Using the methodology of p-approximation which we advocate here, the rates of convergence, both in p and in h, that one expects to achieve have been numerically demonstrated in the scalar case [40, Section 4]. It was observed that these rates depend on the regularity of the underlying ∞-harmonic function. In both the singular and the smooth case, as h is decreased an increasing value of p is required to achieve optimal approximation (in h). This suggests that a coupling p = C h^{-α} is necessary to achieve convergence, where α is determined by the regularity expected of u_∞. We found experimentally that the coupling p = h^{-1/2} worked well for the singular case (u_∞ ∈ C^{1,1/3}) and p = h^{-1} for the smooth case (u_∞ ∈ C^∞).
We mention that we do not have access to exact solutions in general. For the experiments in Sections 3.3-3.5 we only provide boundary data; in general the solutions to these problems are non-smooth and must be interpreted in an appropriate weak sense. For the scalar problem this is done through the viscosity solution framework [16].
We study the measure σ^O_p(T) for various domains O and increasing values of p, to enable conjectures to be made as to the structure of σ^O_∞ in some test cases. In particular, we examine the interplay between σ^O_∞ and the level sets of |Du_∞|. In Figures 1-4 we plot the domain O and σ^O_p(T) for p = 2, 4, 10, 20, 100. Specific conjectures regarding the behaviour of the measure σ^O are provided in the captions of each figure, since the particular behaviour depends on the shape of the domain and its position in relation to the level sets of the gradient; generally, however, it is a sum of lower-dimensional concentration measures.

Scalar case - Eikonal solution.
In this test we examine the scalar problem with prescribed boundary data given by the conic solution \[ \tag{3.13} u(x, y) = (x^2 + y^2)^{1/2}. \]
Notice that this function is eikonal, i.e., |Du| = 1 away from the origin. We study the measure σ^O_p(T) with O taken such that 0 ∉ O. Results are shown in Figure 5, where we plot the domain O and σ^O_p(T) for p = 2, 4, 10, 20, 100. Notice that, in contrast to the Aronsson case, the mass is not sent toward the boundary as p → ∞. In this case we conjecture that σ^O is always absolutely continuous with respect to the Lebesgue measure $\mathcal L^n \llcorner \mathcal O$.
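The eikonal property of (3.13) can be verified directly: its gradient is $(x, y)/|(x, y)|$, a unit vector away from the origin. A quick numerical sanity check (our own illustration; the sampling box is our choice):

```python
# A quick sanity check (ours, for illustration) that the conic function
# u(x, y) = (x^2 + y^2)^(1/2) is eikonal away from the origin, i.e.
# |Du| = 1: its gradient is (x, y) / |(x, y)|, a unit vector.
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.5, 2.0, size=(100, 2))      # sample points away from 0

def grad_u(x, y):
    r = np.hypot(x, y)
    return np.stack([x / r, y / r], axis=-1)

norms = np.linalg.norm(grad_u(pts[:, 0], pts[:, 1]), axis=-1)
print(norms.min(), norms.max())   # both equal to 1 up to rounding
```

Since |Du| is constant on the sampled region, the density in (2.6) is flat there, which is consistent with the conjectured absolute continuity of σ^O.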

Vectorial case - Eikonal solution.
In this test we examine the vectorial problem with prescribed boundary data given by \[ \tag{3.14} u(x, y) = \mathrm e^{\mathrm i x} - \mathrm e^{\mathrm i y}, \] where we use the notation $\mathrm e^{\mathrm i t} = (\cos t, \sin t)$. Notice that this function is eikonal. We study the measure σ^O_p(T) with O taken such that 0 ∉ O. Results are shown in Figure 6, where we plot the domain O and σ^O_p(T) for p = 2, 4, 10, 20, 100. Notice that mass is not sent towards the boundary as p → ∞. We conjecture that again σ^O is absolutely continuous with respect to the Lebesgue measure $\mathcal L^n \llcorner \mathcal O$.
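For (3.14) the eikonal property can again be checked by hand: writing u = (cos x - cos y, sin x - sin y), the Frobenius norm of Du satisfies |Du|² = sin²x + cos²x + sin²y + cos²y = 2 at every point. A small verification script (our own illustration; the sampling grid is our choice):

```python
# Another sanity check (ours): for u(x, y) = e^{ix} - e^{iy}, i.e.
# u = (cos x - cos y, sin x - sin y), the Frobenius norm of Du is the
# constant sqrt(2), so |Du| has a single level set -- the eikonal property.
import numpy as np

def frobenius_norm_Du(x, y):
    Du = np.array([[-np.sin(x),  np.sin(y)],
                   [ np.cos(x), -np.cos(y)]])
    return np.linalg.norm(Du)     # Frobenius norm of the 2x2 gradient

vals = [frobenius_norm_Du(x, y)
        for x in np.linspace(-3, 3, 7) for y in np.linspace(-3, 3, 7)]
print(min(vals), max(vals))       # both sqrt(2) up to rounding
```

Constant |Du| again makes the density (2.6) flat, in line with the conjectured absolute continuity of σ^O in this experiment.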
3.6. Vectorial case -Mixed boundary conditions. In this test we examine the vectorial problem with prescribed boundary data given by

3.7. Vectorial case - Orientation-preserving diffeomorphism. In this test we examine the vectorial problem with prescribed boundary data given by \[ \tag{3.16} u(x, y) = \mathrm e^{(\log |x|)\, S}\, x, \] where $S$ is the orthogonal, skew-symmetric matrix \[ S = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}. \] This is the explicit example given in [34, Lemma 3.1]. Results are shown in Figure 9, where we plot the domain O and σ^O_p(T) for p = 2, 4, 10, 20, 100. In this case we also conjecture that σ^O must be absolutely continuous with respect to the Lebesgue measure $\mathcal L^n \llcorner \mathcal O$. Given that absolute continuity appears in all the cases with eikonal data, this may well be a general fact, regardless of any particular additional structure.

Figure 6. A test to characterise σ^O_∞ for the boundary data given by (3.14). We show the domain O and the measure σ^O_p(T) for increasing values of p. In each of the subfigures B-F we plot 10 contours that are equally spaced between the minimum and maximum values of σ^O_p. Notice that as p increases these contours remain equally spaced, indicating that mass is not pushed to the boundary.

Figure 9. A test to characterise σ^O_∞ for the boundary data given by (3.16) with λ = 1/2. We show the domain O and the measure σ^O_p(T) for increasing values of p. In each of the subfigures B-F we plot 10 contours that are equally spaced between the minimum and maximum values of σ^O_p. Notice that as p increases these contours remain relatively evenly spaced. Again the conjecture is that σ^O is absolutely continuous with respect to $\mathcal L^n \llcorner \mathcal O$.