A Least-Squares Method for the Solution of the Non-smooth Prescribed Jacobian Equation

We consider a least-squares/relaxation finite element method for the numerical solution of the prescribed Jacobian equation. We look for its solution via a least-squares approach, and introduce a relaxation algorithm that decouples this least-squares problem into a sequence of local nonlinear problems and variational linear problems. We develop dedicated solvers for the algebraic problems based on Newton's method, and we solve the differential problems using mixed low-order finite elements. Various numerical experiments demonstrate the accuracy, efficiency, and robustness of the proposed method, compared, for instance, to augmented Lagrangian approaches.


Introduction
Numerical methods for fully nonlinear equations have recently started to receive a lot of attention, the most well-known equation in this category being the so-called Monge-Ampère equation. Various approaches have been proposed for the numerical solution of second-order fully nonlinear equations: the Monge-Ampère equation, see, e.g., [1][2][3][4][5][6], but also Pucci's equation [7][8][9] or the curvature equation [10,11].
First-order fully nonlinear equations have received slightly less attention: we can mention here the Eikonal equation [12,13] and the Hamilton-Jacobi equation [14], which have several applications in optics, wave propagation, materials science, differential geometry (geodesics), and even economics [15].
The problem of interest here is a particular equation involving the Jacobian of the unknown function, introduced by Dacorogna and Moser (1990) [16]. More precisely, we consider here a Dirichlet boundary value problem inspired by [16][17][18]: for given data f, we want to find a vector field u satisfying det ∇u = f in a bounded domain Ω ⊂ R^2 (together with appropriate Dirichlet boundary conditions). Special interest is paid to problems for which the data f is non-smooth. Several works in the literature have focused on this prescribed Jacobian equation, starting with the seminal article [16], which has been developed and extended in, e.g., [17,[19][20][21][22][23][24][25]. The existence of classical solutions relies on solutions in Hölder spaces [16]. Weak solutions to the prescribed Jacobian equation must be considered in the sense of Aleksandrov [26], as emphasized, e.g., in [27,28]. This problem is linked to the Monge-Ampère equation: if the vector function u is the gradient of a scalar function ϕ, then det ∇u becomes det D^2 ϕ. Another related problem in incompressible elasticity has been addressed in [29] (see also the references therein), where the incompressibility condition reads det(I_d + ∇u) = 1, I_d being the identity tensor and u the unknown displacement field.
Theoretical solutions, obtained via explicit constructions, exist in the literature for simple geometries, usually with identity boundary conditions. However, numerical methods for this particular problem are rather scarce. The goal of the present article is to design a numerical method for the finite element approximation of the prescribed Jacobian equation on arbitrary two-dimensional domains, including non-convex domains and non-smooth data.
A first methodology has been proposed in [30,31], using an Alternating Direction Method of Multipliers (ADMM) approach. The numerical algorithm has proved to be efficient, but requires a tedious fine-tuning of parameters.
Following previous works on the Monge-Ampère equation [1,32], we advocate here a variational approach to the solution of the prescribed Jacobian equation based on a least-squares formulation. In order to decouple the nonlinearities of the problem from the linear variational aspects, we use a relaxation algorithm. Low-order finite elements are used for the space discretization, while mathematical programming methods [33] allow us to solve the local constrained optimization problems. Equality constraints are taken into consideration via a Lagrangian approach.
In order to demonstrate the flexibility of the proposed method, we also consider the prescribed Jacobian inequality. This strongly underdetermined problem consists in finding a vector field u satisfying det ∇u ≥ f instead of det ∇u = f (together with appropriate Dirichlet boundary conditions). Theoretical results for the prescribed Jacobian inequality are given, e.g., in [34]; to the best of our knowledge, no numerical methods have been proposed to solve this type of inequality. We address it by, e.g., introducing interior-point methods [33].
The numerical validation is achieved first via the solution of test problems allowing a computational investigation of the convergence properties of our methodology. The treatment of more demanding test problems, associated with non-smooth data and/or non-convex domains, allows us to investigate the accuracy and robustness of the proposed methodology.

The Mathematical Problem
Let Ω be a bounded domain of R^2. We denote by ∂Ω its boundary. Let f : Ω → R and g : ∂Ω → R^2 be given, sufficiently smooth functions. The partial differential equation involving the Jacobian determinant that we want to solve reads as follows: find u : Ω → R^2 satisfying

det ∇u = f in Ω, u = g on ∂Ω. (1)

Generally, we assume that f ≥ 0. This problem has been originally investigated in [16] from a theoretical point of view, when the boundary condition is the identity function (i.e. g(x) = x for x ∈ ∂Ω). The corresponding problem is of the following type:

det ∇u = f in Ω, u(x) = x on ∂Ω. (2)

Problem (2) corresponds to finding a mapping u that preserves both the boundary data and some kind of volume (up to some stretching of the mapping). These problems have been addressed in [30,31] with a numerical approach based on augmented Lagrangian techniques. We provide here an alternative, more robust, numerical method, based on a least-squares approach.
Note that the solution to (2) is not necessarily unique, and the same remark holds for (1). Indeed, let us consider (2) with f = 1 on the unit disk centered at the origin; in this case, u(x) = x is an obvious solution. But, using polar coordinates (ρ, θ), one can see that v defined by v(ρ, θ) = (ρ cos(θ + 2kπρ^2), ρ sin(θ + 2kπρ^2))^T, k ∈ Z, is also a solution: it coincides with the identity on the boundary ρ = 1 and satisfies det ∇v = 1.
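This non-uniqueness can be checked numerically. The following sketch (assuming the twist map written above; the finite-difference checker is ours, not part of the paper's method) verifies that v is the identity on the unit circle and that its Jacobian determinant equals 1 at an interior point:

```python
import math

def v(x1, x2, k=1):
    """Twist map: v(rho, theta) = rho*(cos(theta + 2*k*pi*rho^2),
    sin(theta + 2*k*pi*rho^2)); identity on the unit circle."""
    rho = math.hypot(x1, x2)
    theta = math.atan2(x2, x1)
    phi = theta + 2.0 * k * math.pi * rho**2
    return rho * math.cos(phi), rho * math.sin(phi)

def jac_det(f, x1, x2, h=1e-6):
    """Jacobian determinant of a map R^2 -> R^2 via central differences."""
    a1, a2 = f(x1 + h, x2)
    b1, b2 = f(x1 - h, x2)
    c1, c2 = f(x1, x2 + h)
    d1, d2 = f(x1, x2 - h)
    return ((a1 - b1) * (c2 - d2) - (c1 - d1) * (a2 - b2)) / (4.0 * h * h)

# identity on the boundary rho = 1 ...
bx, by = v(math.cos(0.7), math.sin(0.7))
# ... and det(grad v) = 1 at an interior point
det = jac_det(v, 0.3, 0.4)
```

The same finite-difference determinant is reused in later illustrations; it is only a diagnostic, not part of the solver.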
The proof of the existence of a solution to (1) (via the divergence theorem) requires the data to be compatible with the geometrical domain, see [16]. When the boundary conditions are given by u(x) = x on ∂Ω, this compatibility condition reads as:

∫_Ω f dx = meas(Ω). (3)

The positivity of the right-hand side f is useful from an analytical point of view to prove existence results. Moreover, it makes problem (1) an elliptic problem, which is important for the solution methodology discussed in this article. However, it has been (slightly) loosened to accept locally negative data (see, e.g., [17]). From now on we assume that (3) holds.
Originally, the regularity of (classical) solutions in Hölder spaces was obtained in [16]. In order to derive such regularity results and obtain classical solutions in C^{k+1,α}(Ω̄), the regularity needed on the data is f, g ∈ C^{k,α}(Ω̄), together with f, g > 0 and the compatibility condition (3). Moreover, existence results require the boundary ∂Ω of the domain to be of class C^{k+3,α}.
On the other hand, weak solutions to the prescribed Jacobian equation must be considered in the sense of Aleksandrov [26-28, 35, 36]. The derivation and proofs are similar to those for solutions to the Monge-Ampère equation, see, e.g., [2]. As stated earlier, the relationship between the Monge-Ampère equation det D^2 ψ = f and the Jacobian equation det ∇u = f is explicit, by setting u = ∇ψ [22]. It is thus not surprising to find similar concepts of weak solutions.
The proposed method allows us to find a finite element approximation in a least-squares sense. As in [1,32], there is no evidence that the discrete solution converges to an Aleksandrov-type solution. However, experiments have exhibited strong numerical evidence of convergent behavior of the approximations when the discretization parameters tend to zero.

Remark 1 (Jacobian problem with inequalities)
In parallel, we consider the following partial differential inequality: find u : Ω → R^2 satisfying

det ∇u ≥ f in Ω, u = g on ∂Ω, (4)

with again, in particular, the case of the identity boundary condition (i.e. g(x) = x for x ∈ ∂Ω). This problem has been addressed in [34], where existence results are established under the condition ∫_Ω f dx ≤ meas(Ω). We will show in the sequel that the numerical techniques developed for (1) also apply naturally to (4), with small modifications.

A Numerical Algorithm
We advocate a numerical algorithm based on a least-squares formulation and a relaxation algorithm. The relaxation algorithm splits the minimization problem into a sequence of subproblems. The first subproblem consists of low-dimensional local nonlinear problems, their number being determined by the chosen mesh. The second subproblem is a linear variational problem, and it results in a fourth-order partial differential equation.

A Least-Squares Method
Let us define

Q_f := {q : Ω → R^{2×2}, det q = f a.e. in Ω}, V_g := {v ∈ (H^1(Ω))^2, v = g on ∂Ω}.

We assume that f and g are sufficiently smooth, so that Q_f and V_g are non-empty. The least-squares problem introduces an auxiliary variable q ∈ Q_f, and reads as follows: find {u, p} ∈ V_g × Q_f minimizing J(·, ·), where J is defined by

J(v, q) := (1/2) ∫_Ω |∇v − q|^2 dx.

Here |·| denotes the Frobenius norm |T| = (T : T)^{1/2}, with the inner product S : T = Σ_{i,j=1}^2 s_ij t_ij, where S, T are 2 × 2 matrices with elements s_ij, t_ij for i, j = 1, 2, respectively.
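To fix ideas, the distance J can be evaluated on a simple discrete stand-in. The sketch below uses centered finite differences on a uniform grid for ∇v (the paper instead uses the finite element discretization of Sect. 4; the grid, names, and sampling here are ours):

```python
import numpy as np

def ls_functional(u1, u2, p, h):
    """Discrete analogue of J(v, q) = 0.5 * int |grad v - q|_F^2:
    centered differences for grad v at interior grid nodes, q sampled there."""
    du1dx = (u1[1:-1, 2:] - u1[1:-1, :-2]) / (2.0 * h)
    du1dy = (u1[2:, 1:-1] - u1[:-2, 1:-1]) / (2.0 * h)
    du2dx = (u2[1:-1, 2:] - u2[1:-1, :-2]) / (2.0 * h)
    du2dy = (u2[2:, 1:-1] - u2[:-2, 1:-1]) / (2.0 * h)
    grad = np.stack([du1dx, du1dy, du2dx, du2dy])   # ordering (11, 12, 21, 22)
    return 0.5 * h**2 * np.sum((grad - p) ** 2)

n, h = 21, 1.0 / 20.0
Y, X = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u1, u2 = X, Y                                   # identity map v(x) = x
q_id = np.zeros((4, n - 2, n - 2))
q_id[0], q_id[3] = 1.0, 1.0                     # q = identity matrix
J0 = ls_functional(u1, u2, q_id, h)             # vanishes, since grad v = q
```

With f = 1, the pair (identity map, identity matrix) lies in V_g × Q_f, so J vanishes; any mismatch between ∇v and q makes J strictly positive.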
We propose a biharmonic regularization of the objective function. The added term is motivated by previous works involving first-order elliptic equations, see [12,31,37]. The modified objective function is defined as

J_ε(v, q) := J(v, q) + (ε/2) ∫_Ω |Δv|^2 dx, ε ≥ 0.

Numerical Solution of the Local Nonlinear Problems
We focus here on the solution of (12). Since u^n is known, the solution p^n is obtained by solving the minimization problem (15). Problem (15) can be solved point-wise, since it does not involve any derivative of the variable q; the solution can be obtained locally, for all x ∈ Ω, as (16). Following [29,31], we reduce the dimension of the problem with a proper change of variables. Let us introduce the vectors b = (b_11, b_22, b_12, b_21) and q = (q_11, q_22, q_12, q_21), and a suitable orthogonal 4 × 4 matrix S. By introducing the new variables y = Sq^T and a = Sb^T, (16) is equivalent to (17), with

F_f(x) = {y ∈ R^4 : y_1^2 − y_2^2 − y_3^2 + y_4^2 = 2 f(x)}.

In order to solve (17), we introduce the Lagrangian functional L. Let z denote the solution of (17), and λ the corresponding Lagrange multiplier. The first-order optimality conditions read as (19). After elimination, (19) can be solved by a (scalar) Newton method to find λ first, and then the primal variables. Moreover, we target a solution λ that is close to zero. Indeed, if λ = 0, then z_i = a_i, i = 1, ..., 4, the last equation of (19) reduces to the determinant constraint, and therefore u^{n−1} is the solution of (1) with q^{n−1} = ∇u^{n−1}. Numerical experiments will show that, in practice, the solution for λ is close to zero.
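The elimination step can be sketched as follows. Writing the stationarity conditions z_i − a_i + λ ∂c/∂z_i = 0 for the constraint c(z) = z_1^2 − z_2^2 − z_3^2 + z_4^2 = 2f, each z_i becomes a rational function of λ, and the constraint turns into a scalar equation solvable by Newton's method. The code below is our reconstruction under these assumptions (the paper's matrix S and exact eliminated system are not reproduced):

```python
def project_onto_constraint(a, f, tol=1e-12, maxit=50):
    """Closest point z to a on {z : z1^2 - z2^2 - z3^2 + z4^2 = 2*f},
    via a scalar Newton method on the Lagrange multiplier lam.
    Stationarity z - a + lam * grad c(z) = 0 gives
    z = (a1/(1+2*lam), a2/(1-2*lam), a3/(1-2*lam), a4/(1+2*lam)),
    and the constraint becomes the scalar equation phi(lam) = 0 below."""
    a1, a2, a3, a4 = a
    p = a1 * a1 + a4 * a4          # coefficients with a '+' sign in c
    m = a2 * a2 + a3 * a3          # coefficients with a '-' sign in c
    lam = 0.0                      # the targeted solution is near lam = 0
    for _ in range(maxit):
        phi = p / (1 + 2 * lam) ** 2 - m / (1 - 2 * lam) ** 2 - 2 * f
        if abs(phi) < tol:
            break
        dphi = -4 * p / (1 + 2 * lam) ** 3 - 4 * m / (1 - 2 * lam) ** 3
        lam -= phi / dphi
    z = (a1 / (1 + 2 * lam), a2 / (1 - 2 * lam),
         a3 / (1 - 2 * lam), a4 / (1 + 2 * lam))
    return z, lam
```

If a already satisfies the constraint, the method returns λ = 0 and z = a at the first iteration, consistent with the remark above.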
When considering the Jacobian inequality (4), the solution of the minimization problem reads, for all x ∈ Ω, as (20). In order to solve (20), we introduce a slack variable and rewrite the problem as (21). A logarithmic barrier function then allows us to eliminate the inequality constraint (see, e.g., [33]), leading to (22), where μ ≥ 0. The minimization problem (22) is similar to (16); its solution is implemented with the same approach, relying on the same change of variables, the Lagrangian functional, and the first-order optimality conditions, which are solved with a Newton method.
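A minimal sketch of the barrier idea, in the same y-variables: minimize (1/2)|y − a|^2 − μ log(c(y) − 2f) while driving μ to zero. The solver below is a deliberately simple gradient descent with a feasibility guard, standing in for the Newton solver used in the paper; all names are ours:

```python
def barrier_project(a, f, mu0=0.1, shrink=0.5, levels=20, iters=200, step=0.2):
    """Log-barrier sketch for the inequality constraint c(y) >= 2*f, with
    c(y) = y1^2 - y2^2 - y3^2 + y4^2: minimize
    0.5*|y - a|^2 - mu*log(c(y) - 2*f) for a decreasing sequence of mu,
    by gradient descent with a step-halving feasibility guard."""
    def c(y):
        return y[0] ** 2 - y[1] ** 2 - y[2] ** 2 + y[3] ** 2
    y = list(a)
    assert c(y) > 2 * f, "need a strictly feasible starting point"
    mu = mu0
    for _ in range(levels):
        for _ in range(iters):
            s = c(y) - 2 * f
            gc = [2 * y[0], -2 * y[1], -2 * y[2], 2 * y[3]]   # grad c
            g = [y[i] - a[i] - mu / s * gc[i] for i in range(4)]
            t = step
            while True:                   # halve the step until feasible
                ynew = [y[i] - t * g[i] for i in range(4)]
                if c(ynew) - 2 * f > 0:
                    break
                t *= 0.5
            y = ynew
        mu *= shrink
    return y
```

When the constraint is inactive at the unconstrained minimizer y = a, the barrier solution approaches a as μ decreases, mirroring the behavior described for (22).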

Numerical Solution of the Linear Variational Problems
We focus here on the solution of (13), which is equivalent to solving a minimization problem over V_g. We derive the first-order optimality conditions and obtain a fourth-order partial differential equation: find u^{n+1/2} ∈ V_g satisfying the associated variational equation.

Finite Element Approximation
Let h > 0 be a space discretization step and let {T_h}_h be a family of conformal triangulations of Ω (see [38, Appendix 1]). Let Q_h be the finite element space for the auxiliary variable, equipped with a discrete inner product and corresponding norm. Let Q_fh and Q̃_fh be the finite dimensional subsets approximating Q_f and Q̃_f, respectively. Let V_gh and V_0h be the finite dimensional subspaces of V_g and V_0 made of continuous, piecewise linear vector fields, where P_1 is the space of two-variable polynomials of degree at most 1, and g_h is a piecewise linear interpolant of g. We define a discrete inner product and corresponding norm for V_gh and V_0h as

(v, w)_{0h} = Σ_{T ∈ T_h} Σ_{i=1}^m W_i v(ζ_i) · w(ζ_i),

with W_i the weights and ζ_i the evaluation points of a Gauss quadrature rule, m denoting the number of points of the quadrature rule. The discrete formulation of the least-squares method to solve (9) reads as follows: find {u_h, p_h} ∈ V_gh × Q_fh minimizing the discrete least-squares functional.
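Such a discrete inner product can be sketched with one admissible degree-2 Gauss rule, with points at the edge midpoints (the paper does not specify which rule is used, so this particular choice is an assumption):

```python
import numpy as np

def edge_midpoint_rule(tri):
    """Degree-2 Gauss rule on a triangle: weights W_i = area/3 located at
    the three edge midpoints zeta_i."""
    v0, v1, v2 = (np.asarray(v, dtype=float) for v in tri)
    d1, d2 = v1 - v0, v2 - v0
    area = 0.5 * abs(d1[0] * d2[1] - d1[1] * d2[0])
    pts = [(v0 + v1) / 2, (v1 + v2) / 2, (v2 + v0) / 2]
    return [area / 3.0] * 3, pts

def discrete_inner(f, g, mesh):
    """(f, g)_h = sum over triangles T and points i of W_i f(zeta_i) g(zeta_i)."""
    return sum(w * f(*z) * g(*z)
               for tri in mesh
               for w, z in zip(*edge_midpoint_rule(tri)))

# unit square split into two triangles
mesh = [[(0, 0), (1, 0), (0, 1)], [(1, 0), (1, 1), (0, 1)]]
val = discrete_inner(lambda x, y: x, lambda x, y: y, mesh)  # int of x*y = 1/4
```

This rule integrates products of P_1 functions exactly, which is the natural requirement for the discrete norms used below.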

Remark 4 (Jacobian problem with inequalities)
Similarly, the discrete formulation of the least-squares method to solve (10) reads as follows: find {u_h, p_h} ∈ V_gh × Q̃_fh minimizing the discrete least-squares functional. The discrete formulation of the relaxation algorithm (11)-(14) described in Sect. 3.2 becomes the following: 1. We initialize the algorithm by finding u^1_h ∈ V_gh such that (25) holds, where f̃ = (1, 1)^T for all x ∈ Ω. Then, for n ≥ 1, 2. We solve the discrete local nonlinear problem (26). When considering the Jacobian problem with inequality constraints (4), we replace Q_fh by Q̃_fh in (26). The solution p^n_h of the discrete minimization problem (26) is obtained on each element T of T_h, in an identical manner as the solution of the continuous problem described in Sect. 3.3.
3. We solve the discrete linear variational problem. The first-order optimality conditions read: find u^{n+1/2}_h ∈ V_gh such that (27) holds for all v_h ∈ V_0h. By applying a mixed finite element method to (27), we solve an equivalent augmented system of equations.
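If the ε-regularization is dropped, the optimality condition of the variational step reduces to a Poisson problem per component, Δu = ∇·p with the prescribed boundary data. The sketch below (a finite-difference Jacobi iteration on a uniform grid, not the paper's mixed finite element solver) illustrates this step for p ≡ I, for which ∇·p = 0 and the solution with identity boundary data is the identity map:

```python
import numpy as np

def u_step_identity(n=20, sweeps=4000):
    """Solve lap u = div p componentwise with p = I (hence div p = 0) and
    u = x on the boundary, by Jacobi sweeps on a uniform (n+1) x (n+1) grid.
    The discrete solution is exactly the identity map, since linear
    functions are harmonic and reproduced by the 5-point stencil."""
    xs = np.linspace(0.0, 1.0, n + 1)
    X, Y = np.meshgrid(xs, xs, indexing="ij")
    u1, u2 = X.copy(), Y.copy()
    u1[1:-1, 1:-1] = 0.0            # keep boundary values, reset interior
    u2[1:-1, 1:-1] = 0.0
    for _ in range(sweeps):
        for u in (u1, u2):          # Jacobi update of the interior nodes
            u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1]
                                    + u[1:-1, 2:] + u[1:-1, :-2])
    return u1, u2, X, Y

u1, u2, X, Y = u_step_identity()
err = max(np.max(np.abs(u1 - X)), np.max(np.abs(u2 - Y)))
```

This toy step recovers the identity map to machine precision; the paper's actual solver additionally handles the biharmonic term through the mixed formulation.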

Numerical Experiments
We validate the convergence and accuracy of the proposed least-squares method with various numerical experiments. We consider several domains, namely the unit square Ω_q = (0, 1)^2, the unit disk Ω_d = {x ∈ R^2, ||x||_2 < 1}, the so-called 'pacman' domain, and the cracked unit disk. Figure 1 illustrates typical triangulations of these domains. For coarse meshes, we initialize the algorithm by solving (25) with given boundary data. For finer meshes, we initialize the algorithm with the numerical solution obtained on a coarser mesh. In all experiments, ε = h^2 (unless specified otherwise). The relaxation parameter is ω = 1 initially, and is then gradually increased to ω = 2. The stopping criterion for the relaxation algorithm is ||u^n_h − u^{n−1}_h||_{0h} < 10^{-8}. Newton's method for the local nonlinear problems stops when the difference between two successive iterates is smaller than 10^{-15}; usually, the number of Newton iterations does not exceed 5. The interior-point parameter μ is specified later.
The estimate of λ is denoted by λ̄, and is obtained by averaging its values λ_T over the triangles T ∈ T_h. It is accompanied by the standard deviation σ̄, which describes the variability over the mesh elements. These indicators are:

λ̄ = (1/N) Σ_{T ∈ T_h} λ_T, σ̄ = ( (1/N) Σ_{T ∈ T_h} (λ_T − λ̄)^2 )^{1/2},

where N is the number of triangles of the triangulation T_h.

Numerical Approximation of the Identity Map
The first experiment considers the identity map as the exact solution. The data are chosen as f = 1 and u(x) = x on ∂Ω. The problem can be written as: find u : Ω → R^2 such that

det ∇u = 1 in Ω, u(x) = x on ∂Ω. (28)

For the unit disk, we use structured meshes with mesh size h ∈ {0.0831, 0.0421, 0.0209, 0.0104}. As for the unit square, the errors of the approximations are again of order 10^{-10} in the L^2(Ω) norm, and of order 10^{-9} to 10^{-10} in the H^1(Ω) norm. Furthermore, ||∇u_h − p_h||_{0h} and λ̄ are of the order of 10^{-10} to 10^{-11}. In particular, for h = 0.025 and after 29 iterations, we obtain ||∇u_h − p_h||_{0h} = 2.06 · 10^{-10} and λ̄ = 9.77 · 10^{-11}. Figure 3 illustrates the two components of the numerical solution on the structured mesh of the unit disk.
Then, the algorithm is tested on the inequality problem (4). We consider problem (29), i.e., (4) with f = 0 and the identity boundary condition. Some information is missing in (29), since f = 0: the only relevant information about the solution comes from the boundary conditions. The algorithm converges to the exact solution u(x) = x, in a similar fashion as for (28); the solution obtained when solving (29) is indeed similar to the one obtained by solving (28).
In this test case, we choose μ = 0.1 and decrease it by a factor of √h at each iteration. By contrast with the Jacobian equality (28), the numerical errors in the L^2(Ω) and H^1(Ω) norms when solving the inequality (29) with the same mesh sizes are of the order of 10^{-2} and 10^{-3}, respectively. These errors depend on the choice of μ: if μ is initially chosen too large, ||∇u_h − p_h||_{0h} is also large; if μ is too small, we experience convergence problems. For h = 0.025, we obtain after 69 iterations: ||∇u_h − p_h||_{0h} = 1.04 · 10^{-2} and λ̄ = 2.08 · 10^{-3}.

Smooth Solution with Radial Right-Hand Side
Let Ω = {(x_1, x_2) ∈ R^2, x_1^2 + x_2^2 < 1} be the unit disk. We consider the following problem: find u : Ω → R^2 such that (30) holds, with data given by (31). One exact solution satisfies (32). The non-uniqueness of the solution of Eq. (30) makes this test case challenging; for instance, it may cause the algorithm to fluctuate between two different solutions. Later on, we show that our algorithm converges to a solution for different sets of parameters. The numerical solution of (30) is illustrated in Fig. 4 (top row). Note that both the right-hand side of this equation and ||u_h|| (bottom left) are radially symmetric; the solution u_h, however, does not exhibit the same symmetry pattern. Table 1 provides insights into the convergence of the relaxation algorithm towards the numerical solution on the structured mesh of the unit disk. The only difference with the results presented in the previous section is that the numerical solution converges in the L^2-norm with a nearly optimal rate of O(h^1.7) to O(h^2), and in the H^1 semi-norm with an optimal rate of O(h). Similar results are observed in Table 2, where an unstructured mesh is used: the numerical solution converges in the L^2-norm and H^1 semi-norm with rates of O(h^1.9) and O(h), respectively. This confirms that the behavior of the algorithm does not depend on the structure of the mesh.
In a second step, we consider the Jacobian inequality problem (4). Let Ω = {(x_1, x_2) ∈ R^2, x_1^2 + x_2^2 < 1} be the unit disk, together with f ≡ 0 and the same function g from (31). Numerical results show that the numerical approximation of the solution to the inequality problem is the same as that of the Jacobian equality problem (32). Table 3 shows results obtained with the structured mesh. The parameter μ for the solution of the local nonlinear problems is chosen as 0.1, and we decrease it by a factor of 0.001√h at each iteration of the interior point method. We observe that the numerical solution in this case converges in the L^2-norm and H^1 semi-norm with rates of O(h^1.9) (nearly optimal) and O(h) (optimal), respectively.

Table 1 Smooth solution with radial right-hand side test case: the unit disk with a structured mesh. Computational results include the mesh size h, the L^2 and H^1 error norms with the corresponding rates, the error ||∇u_h − p_h||_{L^2(Ω)}, the average value λ̄ and its standard deviation σ̄, and the number of iterations of the relaxation algorithm.

Table 3 Smooth solution of the Jacobian inequality problem with radial right-hand side test case: the unit disk with a structured mesh (f(x_1, x_2) = 0 and g as in (31)). Computational results include the same quantities as in Table 1.

The next numerical experiment introduces a singularity in the gradient of the solution. Let f(x_1, x_2) = 2√(x_1^2 + x_2^2) and g(x) = x in (1). Problem (1) therefore admits an exact solution in that case: a smooth radial function u whose gradient ∇u is not defined at the origin (0, 0). Figure 5 visualizes the numerical solution. We can notice that (i) both components of the solution are smooth (first row), (ii) the norm ||u_h||_{0h} is radially symmetric (bottom left), and (iii) the solution field is directed towards the origin (bottom right). Convergence properties of the relaxation algorithm on the structured mesh of the unit disk are presented in Table 4. We see that the numerical solution converges with a rate of O(h^1.1) to O(h^1.5) in the L^2-norm, and with an optimal rate of O(h) in the H^1 semi-norm. This test case is more computationally expensive, and the maximum allowed number of iterations may be reached. Similar results are reported in Table 5 when using unstructured meshes.

Table 4 Smooth solution with radial right-hand side test case: the unit disk with a structured mesh (f(x_1, x_2) = 2√(x_1^2 + x_2^2) and g(x_1, x_2) = (x_1, x_2)^T on ∂Ω). Computational results include the same quantities as in Table 1; Table 5 reports the corresponding results on an unstructured mesh.

Smooth Solution with Radial Right-Hand Side on Non-convex Domains
In this experiment we revisit the case presented in Sect. 5.2 on non-convex domains Ω. Generally speaking, the existence of a solution to a fully nonlinear equation in a non-convex domain is not guaranteed, see, e.g., [39]. Nevertheless, we treat here one case that admits an exact solution. We focus on the following problem: find u : Ω → R^2 satisfying (33), with the same exact solution as in Sect. 5.2. Tables 6 and 7 illustrate the results for both non-convex domains (the pacman domain and the cracked disk, both with unstructured meshes). We see that the numerical solutions converge in the L^2-norm with rates of O(h^1.7) to O(h^1.9) and O(h^1.8), respectively; convergence in the H^1 semi-norm is of (optimal) order O(h) for both domains. The number of iterations of the relaxation algorithm increases linearly as h decreases. Comparing these two tables, the performance of the algorithm is the same for the two non-convex domains; comparing with Table 1, the algorithm has the same level of performance on convex and non-convex domains.
The numerical solution of (33) for both domains is illustrated in the top row of Figs. 6 and 7, respectively. In particular, we observe that the difference between det ∇u_h and det p_h vanishes, showing convergence of the least-squares method.

Table 6 Smooth solution with radial right-hand side test case: the pacman domain with an unstructured mesh (data as in Sect. 5.2).

Nonsmooth Right-Hand Side with a Line Singularity
Let us now consider the unit disk, and a non-smooth right-hand side with a singularity (jump) along a line in Ω. Note that f satisfies the necessary condition ∫_Ω f dx = meas(Ω). On the boundary, we impose g(x) = x. To the best of our knowledge, this problem has no known exact solution. Table 8 shows results for ε = 0 and ε = h^2 on the structured meshes of the disk. We observe that the L^2 error between the numerical solution ∇u_h and the auxiliary variable p_h is of order O(h) for both values of ε, although it is smaller for ε = 0. This confirms that the accuracy of the method improves when h tends to zero. The same observations can be made for λ̄ and σ̄: λ̄ decreases when h tends to zero and, more importantly, so does σ̄; the variability of these values across all triangles thus tends to zero, meaning that the overall accuracy of the method increases. In addition, the relaxation algorithm reaches the maximal allowed number of iterations for ε = 0, while for ε = h^2 the number of iterations remains well controlled. This shows that there are cases where the ε-regularization helps the convergence of the algorithm.
The numerical approximation of the solution for ε = 0 is illustrated in Fig. 8. A close inspection shows that u_{1,h} (top left) is discontinuous across x_2 = 0, as expected, while u_{2,h} (top right) remains smooth. The numerical approximations of det ∇u_h and det p_h are displayed in the second row and are identical. The numerical approximation of the solution for ε = h^2 is illustrated in Fig. 9: both components look smooth, while the numerical approximations of det ∇u_h and det p_h in the second row look different. Figure 10 compares the two choices of ε along the cutting line x_2 = 0. On the top row, u_{1,h} (left) and ||u_h|| (right): a smoothing effect is observed at the discontinuity point when ε > 0. On the bottom row, we observe an overshoot of the approximation det ∇u_h (left), while the approximation det p_h (right) is independent of the choice of ε.

Nonsmooth Problem Involving a Dirac Delta Function
Let us consider the unit disk Ω, a Dirac delta function f(x_1, x_2) = πδ_(0,0) as right-hand side, and g(x_1, x_2) = (x_1, x_2)^T as boundary conditions. One exact solution of the problem is, in this case, u(x) = x/||x||_2. In order to apply our methodology, we approximate f by a smooth function f_η, where η > 0 is a small positive parameter, see [1,40]. When η tends to 0, the approximate function f_η converges to f. Note that ∫_Ω f_η dx tends to the measure of Ω when η tends to 0 (necessary condition). The modified problem then reads as (1) with right-hand side f_η, and admits an exact solution u_η; we can show that lim_{η→0} u_η = u.
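The regularization can be illustrated with one admissible choice (an assumption on our part: the paper's exact normalization of f_η may differ). Taking u_η(x) = x √((1+η^2)/(η^2+||x||^2)) as a regularized solution, its Jacobian determinant is f_η(x) = (1+η^2) η^2/(η^2+||x||^2)^2, u_η equals the identity on the unit circle, and u_η → x/||x|| as η → 0:

```python
import math

def u_eta(x1, x2, eta):
    """Assumed regularized solution: u_eta(x) = x*sqrt((1+eta^2)/(eta^2+|x|^2));
    identity on the unit circle, and u_eta -> x/|x| as eta -> 0."""
    s = math.sqrt((1.0 + eta**2) / (eta**2 + x1 * x1 + x2 * x2))
    return s * x1, s * x2

def f_eta(x1, x2, eta):
    """Matching right-hand side det(grad u_eta) for the assumed u_eta."""
    r2 = x1 * x1 + x2 * x2
    return (1.0 + eta**2) * eta**2 / (eta**2 + r2) ** 2

def jac_det(f, x1, x2, h=1e-6):
    """Jacobian determinant via central finite differences (diagnostic)."""
    a1, a2 = f(x1 + h, x2)
    b1, b2 = f(x1 - h, x2)
    c1, c2 = f(x1, x2 + h)
    d1, d2 = f(x1, x2 - h)
    return ((a1 - b1) * (c2 - d2) - (c1 - d1) * (a2 - b2)) / (4.0 * h * h)

eta = 0.25
det = jac_det(lambda s, t: u_eta(s, t, eta), 0.3, 0.4)
rhs = f_eta(0.3, 0.4, eta)      # det(grad u_eta) matches f_eta
bx, _ = u_eta(0.6, 0.8, eta)    # (0.6, 0.8) lies on the unit circle
```

The pair (u_η, f_η) above is self-consistent and reproduces the limiting behavior described in the text; it is offered only as an illustration of the regularization mechanism.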
We examine the approximation of the solution for various values of η, ε and h. Table 9 illustrates numerical approximations for various η and h, on a structured mesh of the unit disk. For the values of η considered (η ∈ {1/8, 1/16, 1/32, 1/64}), the error in the L^2-norm decreases with an order equal to or larger than O(h). The smaller the value of η, the less smooth the problem, and the smaller the convergence order of our algorithm. Indeed, small values of η are associated with solutions with large gradients, which would require a finer mesh in order for the algorithm to converge. The same convergence behavior is observed for ||∇u_h − p_h||_{L^2(Ω)}, and for the estimates λ̄ and σ̄. Note that the variability (symbolized by σ̄) decreases when h tends to zero. Note also that, for large values of η, the necessary condition ∫_Ω f_η dx = π is not satisfied, which impacts the convergence of the algorithm.

Table 9 Nonsmooth problem involving a Dirac delta function, with f(x) = (1+η^2)η^2/(η^2+||x||_2^2)^2 and g(x) = x √((1+η^2)/(η^2+||x||_2^2)). Convergence results for various η. Computational results include the mesh size h, the errors ||u − u_h||_{L^2(Ω)} and ||∇u_h − p_h||_{L^2(Ω)}, the average value λ̄ and its standard deviation σ̄, and the number of iterations of the relaxation algorithm. The case of the unit disk with a structured mesh.

Table 10 illustrates numerical results for a fixed mesh size h = 0.0209 and various η. We observe that, when η tends to zero, the solution presents a stronger singularity, and all indicators in the table increase, which is a sign that a finer mesh is needed.

Table 10 Nonsmooth problem involving a Dirac delta function. Convergence results for various η and a fixed mesh (h = 0.0209). Computational results include η, the errors ||u − u_h||_{L^2(Ω)} and ||∇u_h − p_h||_{L^2(Ω)}, the average value λ̄ and its standard deviation σ̄, and the number of iterations of the relaxation algorithm. The case of the unit disk with a structured mesh.
Table 12 shows, for two fixed values of η, convergence results when h and ε vary. We observe that, for ε ∈ {0, h^2}, the numerical solution converges in the L^2-norm with a rate of O(h) (or better). Moreover, ||∇u_h − p_h||_{L^2(Ω)} decreases with an order of O(h). Similar effects occur for λ̄ and σ̄. The relaxation algorithm reaches its maximal number of iterations when ε = 0. Figures 11 and 12 illustrate the numerical approximations obtained for η = 1/4 and η = 1/64, respectively (with ε = h^2). We see that the two components u_{1,h} and u_{2,h} of the solution are smooth for η sufficiently large (η ∈ {1/4, 1/8}), while the singularity at (0, 0) is visible for smaller η (η ∈ {1/16, 1/32, 1/64}). The numerical approximations of det ∇u_h and det p_h are illustrated in the second row of both figures, and show that, as η gets smaller, the singularity becomes more prominent. The largest values of these determinants are reported in Table 11. Figure 13 compares the results associated with the various values of η, with plots along the line x_2 = 0 (by symmetry). In the top row, the approximations of u_{1,h} (left) and ||u_h|| (right) are visualized. As η gets smaller, the tangent line to the graph of u_{1,h} at the singularity point (0, 0) becomes vertical, and the gradient becomes undefined. This singularity point appears clearly on the plot of ||u_h||. Figure 13 (second row) shows det ∇u_h (left) and det p_h (right).

Table 12 Nonsmooth problem involving a Dirac delta function. Convergence results for various ε (η = 1/32). Computational results include the mesh size h, the errors ||u − u_h||_{L^2(Ω)} and ||∇u_h − p_h||_{L^2(Ω)}, the average value λ̄ and its standard deviation σ̄, and the number of iterations of the relaxation algorithm. The case of the unit disk with a structured mesh.

Figure 14 illustrates the numerical approximation of the solution when η = 1/32, ε = 0 and h = 0.0209.
The two components u_{1,h} and u_{2,h} of the solution (first row) are non-smooth, and the singularity at the origin is visible; this point also appears in the plots of det ∇u_h and det p_h (second row). Figure 15 compares the solutions obtained with ε = 0 and ε = h^2 (η = 1/32, h = 0.0209), by plotting the solution along the line x_2 = 0. We observe that the curves obtained with ε = h^2 oscillate less and are smoother than those obtained with ε = 0.

Conclusions and Perspectives
A least-squares/relaxation finite element method has been advocated for the numerical solution of the prescribed Jacobian equation. The relaxation algorithm that decouples this least-squares problem into a sequence of local nonlinear problems and variational linear problems has shown optimal convergence orders for smooth problems. It has also proved to be robust in non-smooth cases, with nearly optimal convergence orders.
Generally speaking, the least-squares approach is as efficient as the augmented Lagrangian methodology introduced in [31], but does not require the fine-tuning of the augmentation parameters, a well-known drawback of ADMM algorithms.