A Finite Difference Method for the Variational p-Laplacian

We propose a new monotone finite difference discretization for the variational p-Laplace operator, Δ_p u = div(|∇u|^{p−2}∇u), and present a convergent numerical scheme for related Dirichlet problems. The resulting nonlinear system is solved using two different methods: one based on Newton-Raphson and one explicit method. Finally, we exhibit some numerical simulations supporting our theoretical results. To the best of our knowledge, this is the first monotone finite difference discretization of the variational p-Laplacian and also the first time that nonhomogeneous problems for this operator can be treated numerically with a finite difference scheme.


Introduction and Main Results
In the recent paper [10], we studied a new mean value formula (MVF) for the variational p-Laplace operator,

Δ_p u = div(|∇u|^{p−2}∇u). (1.1)

The MVF yields the asymptotic expansion

(1/(D_{d,p} r^p)) ⨍_{B_r} J_p(u(x + y) − u(x)) dy = Δ_p u(x) + o(1) as r → 0, (1.2)

where J_p(t) := |t|^{p−2}t. Here D_{d,p} := (d/(2(d + p))) ⨍_{∂B_1} |y_1|^p dσ(y), where y_1 is the first coordinate, dσ the surface measure on the sphere and B_r denotes the ball of radius r > 0 centered at 0.
The aim of this paper is to propose a new monotone finite difference discretization of the p-Laplacian based on the asymptotic expansion (1.2). As an application of our discretization we also propose a convergent numerical scheme associated to the nonhomogeneous Dirichlet problem

−Δ_p u = f in Ω, (1.3)
u = g on ∂Ω. (1.4)

The scheme results in a nonlinear system. We propose two methods to solve this system: (1) Newton-Raphson and (2) an explicit method, based on the convergence to a steady state of an evolution problem. We comment on the advantages of each one in Sect. 5. Finally, we exhibit some numerical tests of the accuracy and convergence of the scheme.
To the best of our knowledge, this is the first monotone finite difference discretization of the variational p-Laplacian available in the literature and therefore the first time that nonhomogeneous problems of the form (1.3)-(1.4) can be treated numerically via finite difference schemes. The monotonicity property (see Lemma 4.4) is crucial for the convergence of finite difference schemes in the context of viscosity solutions (see [4]). It is also worth mentioning that, in contrast to the finite difference schemes for the normalized (or game theoretical) p-Laplacian considered earlier (see Sect. 1.2), our scheme is well suited for Newton-Raphson solvers, which is an advantage when it comes to solving a nonlinear system effectively.

Main Results
In order to describe our main results we need to introduce some notation. Given a discretization parameter h > 0, consider the uniform grid defined by G_h := hZ^d = {y_α := hα : α ∈ Z^d}. Let r > 0 and consider the following discrete operator

Δ_p^h φ(x) := (h^d/(D_{d,p} ω_d r^{p+d})) Σ_{y_α ∈ B_r} J_p(φ(x + y_α) − φ(x)), (1.5)

where ω_d denotes the measure of the unit ball in R^d. Throughout the paper, we will assume a relation between h and r which we refer to as hypothesis (H). Our first result regards the consistency of the discretization (1.5).
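To make (1.5) concrete, note that in dimension d = 1 one has D_{1,p} = 1/(2(1 + p)) and ω_1 = 2, so the prefactor reduces to (1 + p)h/r^{p+1}. The following minimal Python sketch (our illustration, not the code used in the paper) evaluates the resulting sum and checks it against the classical one-dimensional formula Δ_p φ = (p − 1)|φ′|^{p−2}φ″ for the test function φ(x) = x²:

```python
def J(t, p):
    """J_p(t) = |t|^(p-2) t, with J_p(0) = 0."""
    return abs(t) ** (p - 2) * t if t != 0.0 else 0.0

def plap_h(phi, x, p, r, h):
    """Discrete variational p-Laplacian (1.5) in d = 1.

    Uses D_{1,p} = 1/(2(1+p)) and omega_1 = 2, so the prefactor
    h / (D_{1,p} * omega_1 * r^(p+1)) equals (1+p) * h / r^(p+1).
    """
    N = round(r / h)            # number of grid neighbours on each side
    s = sum(J(phi(x + a * h) - phi(x), p) for a in range(-N, N + 1) if a != 0)
    return (1 + p) * h / r ** (p + 1) * s

if __name__ == "__main__":
    p = 3.0
    phi = lambda x: x * x
    # Classical value at x = 1: (p-1)|phi'|^(p-2) phi'' = 2 * 2 * 2 = 8.
    for r in (0.1, 0.02):
        print(r, plap_h(phi, 1.0, p, r, r * r))
```

For r = 0.02 and h = r² the output is within a few percent of the exact value 8 at x = 1, and the overshoot decays as the number of grid points per stencil grows, in line with the requirement that h vanish faster than r.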
Theorem 1.1 Let p ∈ (1, ∞), x ∈ R^d and φ ∈ C²(B_R(x)) for some R > 0. Assume (H).

Our second result concerns the finite difference numerical scheme for (1.3)-(1.4) induced by the discretization (1.5). More precisely, fix r_0 > 0 and let ∂Ω_r := {x ∈ Ω^c : dist(x, ∂Ω) ≤ r}, Ω_r = Ω ∪ ∂Ω_r, and let G be a continuous extension of g from ∂Ω to ∂Ω_r for all r < r_0. Consider u_h : Ω_r → R satisfying (1.6)-(1.7). We have the following result.

We conjecture that the relation h = o(r^{3/2}) is sufficient also in the range p ∈ (1, 3). See Sect. 6.5 for numerical evidence supporting this.
We note that if we restrict (1.6)-(1.7) to the uniform grid G_h we obtain a fully discrete problem suited for numerical computations. More precisely, define the discrete sets Ω_h := Ω ∩ G_h, ∂Ω_r^h := ∂Ω_r ∩ G_h, B_r^h := B_r ∩ G_h and Ω_r^h := Ω_r ∩ G_h. Observe that Δ_p^h given in (1.5) can be interpreted as an operator Δ_p^h : ℓ^∞(G_h) → ℓ^∞(G_h), since given any x_β, y_α ∈ G_h we have x_β + y_α = (β + α)h = x_{β+α} ∈ G_h and then

Δ_p^h φ_β = (h^d/(D_{d,p} ω_d r^{p+d})) Σ_{y_α ∈ B_r^h} J_p(φ_{β+α} − φ_β),

with φ : G_h → R and φ_γ := φ(γh) whenever γh ∈ G_h. Finally note that if x_β ∈ Ω_h and y_α ∈ B_r^h we have that x_β + y_α = x_{β+α} ∈ Ω_r^h, so that (1.6)-(1.7) can be interpreted as

−Δ_p^h U_β = f_β, x_β ∈ Ω_h, (1.8)
U_β = G_β, x_β ∈ ∂Ω_r^h, (1.9)

with U : Ω_r^h → R, f_β := f(x_β) and G_β := G(x_β). In this way we have the following trivial consequence of Theorem 1.2.

Corollary 1.4 Assume the hypotheses of Theorem 1.2.
(a) Then there exists a unique pointwise solution U ∈ ℓ^∞(Ω_r^h) of (1.8)-(1.9) when r is small enough.

Related Results
For an overview of classical and modern results for the p-Laplacian, we refer the reader to the book [22]. For an overview of numerical methods for degenerate elliptic PDEs we refer the reader to Section 1.1 in [34].
We want to stress that the operator of interest in this paper is the variational p-Laplacian Δ_p. Once we have found a monotone discretization of Δ_p, it is straightforward to find monotone finite difference schemes also for p-Laplace equations involving gradient terms, or other Hamilton-Jacobi-type equations involving the p-Laplacian, which do not necessarily allow for a variational formulation. In particular, we could recover and treat equations involving the normalized p-Laplacian (see (1.10)).
On the other hand, finite difference methods for equations involving the p-Laplacian have been successfully developed using the normalized (or game theoretical) version of the p-Laplacian Δ_p^N. The ideas are based on the identity Δ_p u = |∇u|^{p−2}(Δu + (p − 2)Δ_∞^N u). This allows one to define (up to a normalizing factor) Δ_p^N u := Δu + (p − 2)Δ_∞^N u, where Δ_∞^N is the so-called normalized infinity Laplacian, which is given by the second order directional derivative in the direction of the gradient. One limitation of such methods is the fact that they are not well adapted to treat nonhomogeneous problems of the form −Δ_p u = f, unless p ≤ 2. Instead they allow for treating inhomogeneities of the form −Δ_p u = |∇u|^{p−2} f (this problem is equivalent to −Δ_p^N u = f), which our method could handle as well, at least if p ≥ 2 (since monotone approximations of |∇u| are well known). Of course, both problems are equivalent only if f ≡ 0.
Let us first comment on the literature related to finite difference methods for Δ_p^N. In [34], the author presents a monotone finite difference scheme for the normalized infinity Laplacian and the game theoretical (or normalized) p-Laplacian for p ≥ 2. In addition, a scheme for (1.3)-(1.4) with f ≡ 0 is presented, together with a semi-implicit solver. In [11], a strategy to prove the convergence of dynamic programming principles (including monotone finite difference schemes) for the normalized p-Laplacian is presented, as well as the strong uniqueness property for the p-Laplacian, which is crucial for the application of the convergence criteria of Barles and Souganidis in [4]. We also seize the opportunity to mention Section 6 in [7], where a finite difference method (based on the mean value properties of the normalized p-Laplacian) is proposed for a double-obstacle problem involving the p-Laplacian. We note that in the case 1 < p < 2 none of the above mentioned schemes is monotone, and as such, the numerical scheme in this paper is the first one treating this range, even in the homogeneous case f ≡ 0.
There are many other monotone approximations of Δ_p^N available in the literature. Strictly speaking, they are not numerical approximations, but the proofs of convergence follow similar strategies based on monotonicity and consistency. See [11] for a discussion on this topic. Such approximations were first presented in [29] (see also [20,30,31] for a probabilistic game theoretical approach). The basic idea of these approximations is to combine the classical mean value property (MVP) for the Laplacian with a MVP for the normalized infinity Laplacian motivated by Tug-of-War games [35]. The literature on this topic has become extensive in the last decade. In [2,23] the equivalence between being p-harmonic and satisfying a MVP is treated. See [16,19] for a MVP in the full range 1 < p < ∞ and [21] for the application of such approximations in the context of obstacle problems.
Regarding monotone approximations of the variational p-Laplacian, the literature is very recent and not so extensive. The MVP given by (1.2) was derived in [6,10]. In [10] it is shown to be a monotone approximation of Δ_p, and the authors are also able to prove convergence of the corresponding approximating problems to a viscosity solution.
It is noteworthy that the discretization presented in this paper is reminiscent of the definition of the variational p-Laplacian on graphs, see [1] and also [37]. In this direction, Corollary 1.4 can be interpreted as the convergence of the solution to a PDE defined on a graph associated to the grid. We refer to the recent paper [36] for a study of the eigenvalues of this operator and to [12] for its applications to image processing. Note that also the normalized p-Laplacian has been defined on graphs, see [28].
Finally, we seize the opportunity to mention that since the p-Laplacian is of divergence form, it is well suited for finite element based methods. We mention a few papers in this direction: [5,13,14,17,24-27]. We want to stress that finite element methods are not well suited for being treated with viscosity methods.

Organization of the Paper
In Sect. 2, we introduce some notation and prerequisites needed in the rest of the paper. Section 3 is devoted to the proof of consistency of the discretization previously introduced. In Sect. 4, we study the numerical scheme for the boundary value problems. This is followed by a discussion around solving the nonlinear systems of equations derived from our scheme, in Sect. 5. Finally, in Sect. 6, we perform some numerical experiments to support our theoretical results. We also have an appendix containing technical results and a discussion regarding the invertibility of the Jacobian used in one of the methods in Sect. 5.

Notations and Prerequisites
We adopt the following definition of viscosity solutions, which is the classical definition adjusted to the nonhomogeneous equation (see e.g. [15]).

Definition 2.1 (Solutions of the equation) Suppose that f ∈ C(Ω). We say that a lower (resp. upper) semicontinuous function u in Ω is a viscosity supersolution (resp. subsolution) of the equation in Ω if the following holds: whenever x_0 ∈ Ω and ϕ ∈ C²(B_R(x_0)) for some R > 0 are such that ϕ touches u from below (resp. above) at x_0, then condition (2.1) holds. A viscosity solution is a function u ∈ C(Ω) being both a viscosity supersolution and a viscosity subsolution.

Remark 2.1
We consider condition (2.1) to avoid problems with the definition of −Δ_p ϕ(x_0) when |∇ϕ(x_0)| = 0 and p ∈ (1, 2). However, when either p ≥ 2 or |∇ϕ(x_0)| ≠ 0, (2.1) can be replaced by the standard one, i.e., −Δ_p ϕ(x_0) ≥ f(x_0) (resp. ≤ f(x_0)).

A viscosity solution of the boundary value problem (1.3)-(1.4) attaining the boundary condition in a pointwise sense is naturally defined as follows.

Definition 2.2 (Solutions of the boundary value problem) Suppose that f ∈ C(Ω) and g ∈ C(∂Ω). We say that a lower (resp. upper) semicontinuous function u in Ω̄ is a viscosity supersolution (resp. subsolution) of (1.3)-(1.4) if (a) u is a viscosity supersolution (resp. subsolution) of −Δ_p u = f in Ω (as in Definition 2.1); and (b) u ≥ g (resp. u ≤ g) on ∂Ω. A viscosity solution of (1.3)-(1.4) is a function u ∈ C(Ω̄) being both a viscosity supersolution and a viscosity subsolution.

Remark 2.2
To prove the convergence result (Theorem 1.2(b)) we will make use of a generalized notion of viscosity solutions of a boundary value problem. We will introduce this notion just before using it. See Sect. 4.3.

Consistency of the Discretization: Proof of Theorem 1.1
In this section we prove the consistency of the discretization Δ_p^h for C²-functions, as stated in Theorem 1.1.
Proof of Theorem 1.1 Throughout this proof, C will denote a constant that may depend on p and the dimension d, but not on r or h.
The mean value property introduced in [10] involves the quantity M_r^p φ(x). By the triangle inequality and Theorem 2.1 in [10], it is therefore sufficient to show that Δ_p^h φ(x) approximates M_r^p φ(x) at the appropriate rate.

Step 1: Approximation of B_r by h-boxes. Define the family of h-boxes centred at the points y_α ∈ G_h, and the union of such boxes that approximates B_r; in this step we will prove (3.1). Observe that, regardless of the value of p, we always have h = o(r). On the other hand, by Taylor expansion, in the case p ≥ 2 Lemma A.1 yields the claim. In the case p < 2, we argue slightly differently: on page 8 in [10] the corresponding estimate is proved, and in a similar fashion one can prove its counterpart here. To show (3.1) it is therefore sufficient to show (3.2). Without loss of generality assume that ∇φ(x) = ce_1 with c ≠ 0. From (6.2) in [10], after integration (to pass from spheres to balls), we obtain (3.2).
Step 2: Discretization of Ã_r. We will show that the corresponding estimate holds for the discretized quantity. If p ≥ 2 we use a Taylor expansion of order two. Let ρ = φ(x + y_α) − φ(x) and η = ∇φ(x + y_α). Then the expansion can be expressed in terms of ρ and η. Therefore, by Lemma A.1, it will be enough to obtain an estimate of the form indicated above. For p = 2, this estimate is trivial. When p > 3 we use the second order Taylor expansion of J_p.
If p < 2 we use the fact that J_p is (p − 1)-Hölder continuous.

Step 3: Conclusion. Combining Step 1 and Step 2, we obtain the desired estimate, which concludes the proof.

Properties of the Numerical Scheme
In this section we will state and prove some properties of the numerical scheme (1.6)-(1.7).

Existence and Uniqueness
We will obtain the existence and uniqueness result given in Theorem 1.2(a).
First note that we can write Δ_p^h as an average against a discrete measure,

Δ_p^h φ(x) = (1/(D_{d,p} r^p)) ⨍_{B_r} J_p(φ(x + y) − φ(x)) dμ(y), with μ := h^d Σ_{y_α ∈ G_h} δ_{y_α},

where δ_z denotes the Dirac delta measure at z ∈ R^d. With this simple observation, all the results of Section 9.1 in [10] carry over word for word (replacing M_r^p by Δ_p^h and dy by dμ(y)). We state them for completeness. Our running assumptions in this section will be f ∈ C(Ω) and G ∈ C(∂Ω_r) (a continuous extension of g ∈ C(∂Ω)).
The comparison result below implies in particular the uniqueness of solutions of (1.6)-(1.7).
Then v ≤ w in Ω_r.
The existence of solutions is proved by a monotonicity argument. For this purpose, we need the following L^∞-bound. In order to prove existence we also need a two-step iteration process, for which we define an auxiliary operator L. We have the following result.
Proof The proof follows as the proof of Lemma 9.3 in [10].
We are finally ready to prove the existence.

Proof of Theorem 1.2(a)
The proof follows the proof of Proposition 9.4 in [10]. We spell out some details below.
The approach for existence is to construct a monotone increasing sequence converging to the solution. Let B be the barrier constructed in Proposition 4.2. Define u_h^0 in terms of B and the sequence u_h^k as the sequence of solutions of the corresponding iterated problems. One can prove that u_h^k exists for all k, is nondecreasing (by the monotonicity of L) and uniformly bounded (by Proposition 4.2). We can then define the pointwise limit u_h. Due to the pointwise convergence, u_h is a solution of (1.6). Clearly u_h = G on ∂Ω_r, so it is also a solution of (1.7). The uniqueness follows from Proposition 4.1.

Monotonicity and Consistency
In order to prove convergence of the numerical scheme, we will need certain monotonicity and consistency properties (we already obtained a uniform bound in Proposition 4.2). For a function φ : Ω_r → R define the scheme function S(r, h, x, t, φ). Note that (1.6)-(1.7) can then be equivalently formulated as S(r, h, x, u_h(x), u_h) = 0 in Ω_r. We have the following result.

Lemma 4.4 Assume (H).
(a) (Monotonicity) Let t ∈ R and ψ ≥ φ. Then S(r, h, x, t, ψ) ≤ S(r, h, x, t, φ).

Proof The proof follows as in Lemma 9.7 in [10]. For part (b), it is essential to use the fact that J_p is a Hölder continuous function, the basic properties of lim sup and lim inf, and the consistency of Δ_p^h given in Theorem 1.1.
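The monotonicity in part (a) boils down to J_p being nondecreasing: raising the values of the surrounding function, while keeping the value at the point itself fixed, can only raise Δ_p^h there. A small self-contained check of this in d = 1 (the grid, indices and test functions are our own choices for illustration):

```python
def J(t, p):
    """J_p(t) = |t|^(p-2) t, with J_p(0) = 0; nondecreasing for p > 1."""
    return abs(t) ** (p - 2) * t if t != 0.0 else 0.0

def plap_h(vals, i, p, r, h):
    """Discrete p-Laplacian (1.5) at grid index i of a value array (d = 1)."""
    N = round(r / h)
    s = sum(J(vals[i + a] - vals[i], p) for a in range(-N, N + 1) if a != 0)
    return (1 + p) * h / r ** (p + 1) * s

h, r, p, i = 0.1, 0.3, 2.5, 5
xs = [h * k for k in range(11)]
phi = [x ** 2 for x in xs]
psi = [x ** 2 + abs(x - xs[i]) for x in xs]   # psi >= phi, equality at index i
# raising the neighbours raises the operator value at the touching point
assert plap_h(psi, i, p, r, h) >= plap_h(phi, i, p, r, h)
```

Since the scheme S carries −Δ_p^h, the inequality for S in part (a) is reversed, matching the Barles-Souganidis sign convention.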

Convergence
We are now ready to prove the convergence stated in Theorem 1.2. The idea of the proof originates from [4]. The proof is almost the same as the proof of Theorem 2.5 ii) in [10]. We point out that it was necessary to adapt the proof in order to make it fit the definition of viscosity solutions in the case p ∈ (1, 2). Below, we spell out some details. First we need another definition of viscosity solutions of the boundary value problem and two auxiliary results that are taken from [10].

Definition 4.1 (Generalized viscosity solutions of the boundary value problem) Let f ∈ C(Ω) and g ∈ C(∂Ω). We say that a lower (resp. upper) semicontinuous function u in Ω̄ is a generalized viscosity supersolution (resp. subsolution) of (1.3)-(1.4) if, whenever x_0 ∈ Ω̄ and ϕ ∈ C²(B_R(x_0)) for some R > 0 are such that ϕ touches u from below (resp. above) at x_0, the corresponding one-sided limit conditions hold, where at points x_0 ∈ ∂Ω the boundary condition may alternatively hold.

Remark 4.5 As in Remark 2.1, we note that when either p ≥ 2 or |∇ϕ(x_0)| ≠ 0, the limits in the above definition can simply be replaced by −Δ_p ϕ(x_0) − f(x_0).
The following uniqueness result is Theorem 9.5 in [10]. We also need that a generalized viscosity solution is a (usual) viscosity solution in the case of a bounded C 2 domain. The proposition below is Proposition 9.6 in [10].

Proof of Theorem 1.2(b) Define the half-relaxed limits

ū(x) := lim sup_{h→0, y→x} u_h(y) and u̲(x) := lim inf_{h→0, y→x} u_h(y),

where h → 0 as in the hypotheses of Theorem 1.2. By definition u̲ ≤ ū in Ω. If we show that ū (resp. u̲) is a generalized viscosity subsolution (resp. supersolution) of (1.3), Theorem 4.6 implies ū ≤ u̲. Thus, u := ū = u̲ is a generalized viscosity solution of (1.3) and u_h → u uniformly in Ω. Proposition 4.7 then implies that u is a viscosity solution of (1.3).
We now sketch how to show that ū is a generalized viscosity subsolution. First note that ū is an upper semicontinuous function by definition, and it is also bounded since u_h is uniformly bounded by Proposition 4.2. Take x_0 and ϕ ∈ C²(B_R(x_0)) touching ū from above at x_0. We separate the proof into different cases depending on the value of the gradient of ϕ at x_0 and the range of p.
Case 1: Let p ≥ 2 or |∇ϕ(x_0)| ≠ 0. We claim that we can find a sequence (r_n, y_n) → (0, x_0) as n → ∞, with h_n → 0 as in the hypotheses of the theorem, such that (4.2) holds. This can be argued for as in the proof of Theorem 2.5 ii) in [10]. Choose now ξ_n := u_{h_n}(y_n) − ϕ(y_n). We have from (4.2) that ξ_n → 0. Using Lemma 4.4(a) we obtain

0 = S(r_n, h_n, y_n, u_{h_n}(y_n), u_{h_n}) = S(r_n, h_n, y_n, ϕ(y_n) + ξ_n, u_{h_n}) ≥ S(r_n, h_n, y_n, ϕ(y_n) + ξ_n, ϕ + ξ_n + e^{−1/r_n}) = S(r_n, h_n, y_n, ϕ(y_n) + ξ_n − e^{−1/r_n}, ϕ + ξ_n).
Note that e^{−1/r} = o(r^p). By Lemma 4.4(b), we have

0 ≥ lim inf_{r_n→0, y_n→x_0, ξ_n→0} S(r_n, h_n, y_n, ϕ(y_n) + ξ_n − e^{−1/r_n}, ϕ + ξ_n),

which shows that ū is a generalized viscosity subsolution at x_0 and finishes the proof in this case.
Case 2: Let p ∈ (1, 2) and |∇ϕ(x_0)| = 0, and suppose that ū is constant in some ball B_ρ(x_0) for ρ > 0 small enough. Choose φ(x) = ū(x_0) + |x − x_0|^{p/(p−1)+1}. Then we can argue as in Case 1 above, using the Hölder continuity of J_p. Together with Lemma A.3, this shows that ū is a classical subsolution at x_0 and thus also a viscosity subsolution.

Case 3: Let |∇ϕ(x_0)| = 0 and assume that ū is not constant in any ball B_ρ(x_0). Then we may argue as in the proof of Proposition 2.4 in [3] to prove that there is a sequence y_k → 0 such that the function ϕ_k(x) = ϕ(x + y_k) touches ū from above at x_k = x_0 + y_k and |∇ϕ_k(x_k)| ≠ 0 for all k. As in Case 1, this gives the desired inequality. This completes the proof.

Solution of the Nonlinear System
When we discretize the Dirichlet problem (1.3)-(1.4), we need to solve the nonlinear system (1.8)-(1.9). In contrast to the situation in [34], our system is not based on the mean value formula for the ∞-Laplacian, which is not differentiable. Instead, it is based on an implicit and differentiable mean value property. The system is therefore well suited for Newton-Raphson, which is one of the methods we have employed. The Newton-Raphson method is fast (as also mentioned by Oberman in [34]) and the number of iterations required to solve the system seems to be independent of its size, see Table 1. However, we have neither a proof of this nor a proof of the convergence of the method. Convergence would be guaranteed, for example, if the system were related to finding the minimum of a strongly convex function. This is not the case here, since the associated minimization functional is merely convex. Nevertheless, we can give sufficient conditions for the Jacobian matrix to be invertible. We have included this discussion in the one-dimensional setting in "Appendix 2". Since we cannot prove convergence for the Newton-Raphson method, we have also chosen to include an explicit method based on the convergence to a steady state of an evolution problem, for which we can guarantee convergence. The convergence of this method is conditioned by the CFL-type condition (CFL) in Sect. 5.2. See Table 1 for a more detailed comparison of the efficiency, in terms of speed, of the two methods. We describe the two methods in detail below.

Newton-Raphson
The method we have used is the standard one. Let F : R^k → R^k for some k ≥ 1. In order to solve the system F(z) = 0 we use the iteration

z^{n+1} = z^n − J_F(z^n)^{−1} F(z^n),

where J_F denotes the Jacobian matrix of the function F. In our particular case we have that k = #{G_h ∩ Ω_r}. Let us illustrate the form of F and J_F in the one-dimensional case. Let γ = min{β ∈ Z : x_β ∈ Ω_r} and z_i = U_{γ+i−1}.
The components F_i(z), i = 1, . . . , k, are given by the boundary identities (1.9) when x_{γ+i−1} ∈ ∂Ω_r and by the interior equations (1.8) otherwise. Let (J_F(z))_{i,j} = (J_F(z_1, . . . , z_k))_{i,j} denote the entry of the Jacobian matrix of F in the i-th row and j-th column. If i is such that x_{γ+i−1} ∈ ∂Ω_r then (J_F(z))_{i,j} = δ_{ij}, while otherwise the entries are the partial derivatives of the discrete operator in (1.8).
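The following self-contained Python sketch assembles F and J_F for the model problem p = 3, Ω = (−1, 1), f ≡ 1, g = G ≡ 0 in d = 1 and runs a damped Newton iteration. The grid parameters, the damping and the dense linear solver are our own choices for illustration, not the paper's implementation. For this model problem the exact solution is u(x) = ((p−1)/p)(1 − |x|^{p/(p−1)}), so u(0) = 2/3.

```python
def J(t, p):
    return abs(t) ** (p - 2) * t if t != 0.0 else 0.0

def dJ(t, p):
    return (p - 1) * abs(t) ** (p - 2)   # J_p'(t), finite for p >= 2

def solve_gauss(A, b):
    """Dense Gaussian elimination with partial pivoting (no external libraries)."""
    n = len(b)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        piv = max(range(c, n), key=lambda q: abs(M[q][c]))
        M[c], M[piv] = M[piv], M[c]
        for q in range(c + 1, n):
            if M[q][c] != 0.0:
                fac = M[q][c] / M[c][c]
                for cc in range(c, n + 1):
                    M[q][cc] -= fac * M[c][cc]
    x = [0.0] * n
    for q in range(n - 1, -1, -1):
        x[q] = (M[q][n] - sum(M[q][cc] * x[cc] for cc in range(q + 1, n))) / M[q][q]
    return x

def residual(U, inside, N, C, p, f):
    """F_i = -Delta_p^h U_i - f at interior nodes; F_i = U_i - 0 on the layer."""
    F = []
    for i in range(len(U)):
        if inside[i]:
            s = sum(J(U[i + a] - U[i], p) for a in range(-N, N + 1) if a != 0)
            F.append(-C * s - f)
        else:
            F.append(U[i])
    return F

def newton_solve(p=3.0, r=0.35, h=0.035, f=1.0, tol=1e-10, itmax=60):
    N = round(r / h)                        # stencil half-width in grid points
    m = round((1.0 + r) / h)                # grid covers Omega_r = (-1-r, 1+r)
    xs = [k * h for k in range(-m, m + 1)]
    n = len(xs)
    inside = [abs(x) < 1.0 for x in xs]
    C = (1 + p) * h / r ** (p + 1)          # prefactor of (1.5) in d = 1
    U = [max(0.0, 1.0 - abs(x)) for x in xs]    # starting guess (1 - |x|)_+
    res = float("inf")
    for _ in range(itmax):
        F = residual(U, inside, N, C, p, f)
        res = max(map(abs, F))
        if res < tol:
            break
        A = [[0.0] * n for _ in range(n)]
        for i in range(n):
            if not inside[i]:
                A[i][i] = 1.0               # boundary rows of the identity
                continue
            for a in range(-N, N + 1):
                if a != 0:
                    w = C * dJ(U[i + a] - U[i], p)
                    A[i][i + a] = -w
                    A[i][i] += w
        d = solve_gauss(A, F)
        step = 1.0
        while step > 1e-8:                  # simple damping for robustness
            Un = [U[i] - step * d[i] for i in range(n)]
            if max(map(abs, residual(Un, inside, N, C, p, f))) < res:
                break
            step *= 0.5
        U = Un
    return xs, U, res

if __name__ == "__main__":
    xs, U, res = newton_solve()
    print("residual:", res, " u_h(0):", U[len(xs) // 2])   # exact u(0) = 2/3
```

In our runs the residual is driven close to roundoff in a handful of iterations, consistent with the size-independent iteration counts reported in Table 1, while u_h(0) matches 2/3 up to the discretization error.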

Explicit Method
We consider {U^m}_{m∈N} to be the sequence of functions U^m : Ω_r^h → R defined by the explicit iteration

U_β^{m+1} = U_β^m + τ_m (Δ_p^h U_β^m + f_β), x_β ∈ Ω_h, (5.1)

where U^0 is some initial data, U^m = G on ∂Ω_r^h and {τ_m}_{m∈N} are positive discretization parameters. The idea here is that, as m → ∞, U^m converges to the solution U of (1.8)-(1.9). This convergence holds under a nonlinear counterpart of the CFL stability condition.
Actually, we also need to slightly modify (5.1) to ensure convergence; in the words of Oberman in [33], we need to ensure that our operator is proper. More precisely, given ε > 0, let {(U_ε)^m}_{m=1}^∞ be the solution of the ε-modified iteration (5.2), subject to the same initial and boundary conditions as in (5.1), and let U_ε be the solution of the corresponding stationary problem. It is standard to check, using the techniques of Sect. 4.1, that U_ε exists, is unique and uniformly bounded in r, h and ε. We have the following result.
Lemma 5.1 Let p ≥ 2 and {(U_ε)^m}_{m=1}^∞ be the solution of (5.2) with any bounded initial condition (U_ε)^0. Let also U be the solution of (1.8)-(1.9). Assume that (CFL) holds. Then (U_ε)^m converges to U as m → ∞ and ε → 0.

Proof Since U_ε is uniformly bounded on a discrete finite set, there exists a convergent subsequence U_{ε_j} converging to some V pointwise. It is also standard to show that V is indeed a solution of (1.8)-(1.9). By uniqueness, V = U and the full sequence U_ε converges, i.e., U_ε → U as ε → 0. On the other hand, by subtracting the equations for U_ε and (U_ε)^m we obtain an estimate involving K := h^d/(|B_r| D_{d,p} r^p), where ξ_{α,β} lies between (U_ε)^m_{β+α} − (U_ε)^m_β and (U_ε)_{β+α} − (U_ε)_β, so that |ξ_{α,β}| ≤ 2L_m and |J_p'(ξ_{α,β})| = (p − 1)|ξ_{α,β}|^{p−2} ≤ (p − 1)2^{p−2}L_m^{p−2} since p ≥ 2. Therefore, when r is small enough, we may use (CFL) and the fact that h ≤ r (since h = o(r) with r small enough). Since clearly J_p' ≥ 0, the result then follows using the triangle inequality.
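For comparison, the explicit method can be sketched in a few lines of Python. The time step below is a conservative CFL-type choice modeled on the quantities in Lemma 5.1 with the a priori bound L_m ≤ 1; the constants are our heuristic, not the paper's exact condition (CFL). The test problem is again p = 3, Ω = (−1, 1), f ≡ 1, G ≡ 0, whose exact solution satisfies u(0) = 2/3.

```python
def J(t, p):
    return abs(t) ** (p - 2) * t if t != 0.0 else 0.0

def explicit_solve(p=3.0, r=0.45, h=0.075, f=1.0, steps=20000, stop=1e-7):
    """Explicit iteration (5.1) for (1.8)-(1.9) on Omega = (-1,1), g = G = 0:
    U^{m+1}_i = U^m_i + tau * (Delta_p^h U^m_i + f)."""
    N = round(r / h)
    m = round((1.0 + r) / h)
    xs = [k * h for k in range(-m, m + 1)]
    inside = [abs(x) < 1.0 for x in xs]
    C = (1 + p) * h / r ** (p + 1)              # prefactor of (1.5) in d = 1
    # conservative CFL-type step: tau * C * (#neighbours) * J_p'(2L) <= 1/2
    tau = 0.5 / (C * 2 * N * (p - 1) * 2.0 ** (p - 2))
    U = [max(0.0, 1.0 - abs(x)) for x in xs]    # initial data (1 - |x|)_+
    change = float("inf")
    for _ in range(steps):
        Un = U[:]
        change = 0.0
        for i in range(len(xs)):
            if inside[i]:
                s = sum(J(U[i + a] - U[i], p) for a in range(-N, N + 1) if a != 0)
                Un[i] = U[i] + tau * (C * s + f)
                change = max(change, abs(Un[i] - U[i]))
        U = Un
        if change < stop:                       # numerical steady state
            break
    return xs, U, change / tau                  # change/tau = residual of (1.8)

if __name__ == "__main__":
    xs, U, res = explicit_solve()
    print("steady-state residual:", res, " u_h(0):", U[len(xs) // 2])
```

At the numerical steady state the residual of (1.8) (the last update divided by τ) is small, and u_h(0) approaches the exact value 2/3 up to the coarse discretization error; compared to Newton-Raphson, thousands of cheap iterations are needed, matching the behaviour reported in Table 1.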

Remark 5.2
The fact that U_ε is uniformly bounded, together with the bound ‖(U_ε)^m − U_ε‖_∞ ≤ 2L_0, ensures that L_m is uniformly bounded from above, so that {τ_m}_{m∈N} can be taken uniformly bounded from below.
In the case 1 < p < 2, we use a regularization of the singularity in Δ_p^h in order to make it a Lipschitz map. This can be done, for example, by modifying the nonlinearity with an extra approximation parameter δ > 0 and replacing J_p by a regularized version J_p^δ. The drawback of this type of regularization is that the condition (CFL) becomes more and more restrictive as δ → 0. This regularization is typically used when dealing with explicit schemes for fast diffusion equations (see for example [8,9]).
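A common concrete choice of such a regularization (our example; the paper's exact J_p^δ is not displayed here) is J_p^δ(t) = (t² + δ²)^{(p−2)/2} t, which is globally Lipschitz for 1 < p < 2 with Lipschitz constant of order δ^{p−2}, and converges pointwise to J_p as δ → 0:

```python
def J(t, p):
    return abs(t) ** (p - 2) * t if t != 0.0 else 0.0

def J_delta(t, p, delta):
    """Lipschitz regularization of J_p for 1 < p < 2; slope ~ delta**(p-2) at 0."""
    return (t * t + delta * delta) ** ((p - 2) / 2) * t

p, delta = 1.5, 1e-3
# finite slope at the origin (J_p itself has infinite slope there for p < 2)
assert abs(J_delta(1e-9, p, delta)) <= 1.01e-9 * delta ** (p - 2)
# pointwise convergence to J_p away from the origin
assert abs(J_delta(0.5, p, delta) - J(0.5, p)) < 1e-4
```

The blow-up of the Lipschitz constant as δ → 0 is exactly why the condition (CFL) becomes more restrictive for small δ.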

Comparison Between the Solvers
We now present a comparison of the above methods regarding the number of iterations and computational time.
We have solved the system (1.8)-(1.9) for p = 3, in dimension d = 1 with Ω = (−1, 1), f ≡ 1 and g ≡ 0. As starting value for the iteration we have chosen u_0(x) = (1 − |x|)_+. Finally, for the explicit solver we have chosen τ to satisfy (CFL). We have stopped the solvers when the difference between two consecutive iterations is less than 10^{−16}. In Table 1 we present the results for different values of r and the corresponding h satisfying (H) (in this case h = r^{3/2+0.1}). Here k denotes the size of the system, It-E and It-NR are the number of iterations needed by the explicit and the Newton-Raphson solver, respectively, and T-E and T-NR are the times (in seconds) spent to solve the system by the explicit and the Newton-Raphson solver, respectively. As the table shows, the Newton-Raphson solver is fast in the sense that the number of iterations does not seem to depend on the size of the system. This is a big advantage compared to the explicit solver, for which smaller values of r enforce smaller choices of τ, which increases the number of iterations required substantially.

Numerical Experiments
To perform numerical experiments we need two ingredients: the explicit value of the constant D_{d,p} and explicit solutions to compare with. It is standard to check that, in dimension d = 1, we have D_{1,p} = 1/(2(1 + p)).
In dimension d = 2 we have the following formulas for integer values of p, which can be obtained through integration by parts. Let p ∈ [2, ∞) and d = 2.

(a) (Even) If p = 2n for some n ∈ N then ⨍_{∂B_1} |y_1|^p dσ(y) = (p − 1)!!/p!!.
(b) (Odd) If p = 2n + 1 for some n ∈ N then ⨍_{∂B_1} |y_1|^p dσ(y) = (2/π)(p − 1)!!/p!!.

For general dimension and p, one can use the expression

⨍_{∂B_1} |y_1|^p dσ(y) = Γ((p + 1)/2) Γ(d/2) / (√π Γ((d + p)/2)).
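As a sanity check of these constants (under our reading of D_{d,p} = (d/(2(d + p))) ⨍_{∂B_1}|y_1|^p dσ), in d = 2 the surface average equals (1/2π)∫_0^{2π}|cos θ|^p dθ, and the double-factorial formulas above can be verified against direct quadrature:

```python
import math

def avg_quadrature(p, n=100000):
    """(1/2pi) * integral_0^{2pi} |cos t|^p dt via the midpoint rule."""
    return sum(abs(math.cos((k + 0.5) * 2 * math.pi / n)) ** p for k in range(n)) / n

def avg_closed_form(p):
    """Surface average of |y_1|^p on the unit circle for integer p >= 2:
    (p-1)!!/p!! if p is even,  (2/pi)*(p-1)!!/p!! if p is odd."""
    dfac = lambda m: math.prod(range(m, 0, -2)) if m > 0 else 1
    ratio = dfac(p - 1) / dfac(p)
    return ratio if p % 2 == 0 else 2 / math.pi * ratio

for p in (2, 3, 4, 5):
    assert abs(avg_quadrature(p) - avg_closed_form(p)) < 1e-6
# e.g. D_{2,p} = (2/(2(2+p))) * avg_closed_form(p) = avg_closed_form(p)/(2+p)
```

For instance, p = 2 gives the average 1/2 and p = 3 gives 4/(3π), in agreement with the even and odd formulas.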
As mentioned in the introduction, homogeneous problems can successfully be treated by means of the so-called normalized p-Laplacian, for which numerical schemes are well understood (see [32,34]). Therefore, we will focus on nonhomogeneous problems (f ≢ 0). We compare our numerically obtained solution with the explicit solution of (6.1). In dimension two we will also use that, for p = 4, the smooth function u(x, y) = xy solves (6.2) and that the less regular function u(x, y) = |x|^{4/3} solves (6.3).

Error Analysis in Dimension d = 1
Here we present the results of a numerical experiment in which our numerical scheme is used to solve problem (6.1) in dimension d = 1, implemented in MATLAB.
To solve the nonlinear system in (1.8)-(1.9) we use the explicit solver given by (5.2). The parameter τ_m has been chosen to satisfy (CFL), while ε is chosen small enough not to interfere with the errors in h and r. We have also taken G(x) = 0 for all x ∈ ∂Ω_r as extended boundary condition.
We have stopped the explicit solver when it has reached a numerical steady state. In this case we have chosen h = r²/4, which clearly satisfies the condition h = o(r^{3/2}). The results obtained are presented in Fig. 2 and Table 2, which contain the simulations for p = 3, p = 4 and p = 10.
It can be clearly seen that the error seems to behave linearly in r. This can be seen more clearly in Table 2, where we present the details of the results in Fig. 2. The observed convergence rate γ has been computed by comparing consecutive refinements r_j = 0.2/2^j, that is, γ = log_2(error_{j−1}/error_j).
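The rate computation of Table 2 is a one-liner; the helper below (our convenience function, not the paper's script) reproduces γ = log_2(error_{j−1}/error_j) for errors on the grids r_j = 0.2/2^j, and also covers refinement ratios other than 2, such as the factor-8 comparison used in Sect. 6.2:

```python
import math

def observed_rates(errors, ratio=2.0):
    """gamma_j = log_ratio(error_{j-1} / error_j) between successive refinements."""
    return [math.log(errors[j - 1] / errors[j], ratio) for j in range(1, len(errors))]

# first-order behaviour: halving r halves the error, so gamma ~ 1
print(observed_rates([0.08, 0.04, 0.02]))   # rates close to 1.0
```

With the Table 3 values 7.73e-2 and 7.21e-3 and ratio 8, the same helper returns the rate ≈ 1.14 quoted in Sect. 6.2.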

First Error Analysis in Dimension d = 2
We now perform numerical experiments in dimension d = 2 for problem (6.1). We have almost the same setup as in Sect. 6.1, except that we now take h = r^{3/2+0.1}, which clearly satisfies the condition h = o(r^{3/2}). Again, as in the computation in dimension d = 1, the error observed in Fig. 3 seems to decay at least linearly in r, despite the fact that we have taken the parameter h to decay more slowly than before. It seems that, as long as h = o(r^{3/2}), the choice of h does not interfere with the order of convergence in r.
In Table 3 we observe some instabilities in the order of convergence in the simulations for large choices of r and h. However, if we compute the order of convergence between the simulations with r = 2.00e-1 and r = 2.50e-2, the observed rate is γ = log_8(7.73e-2/7.21e-3) ≈ 1.14 > 1 for p = 3.

Second Error Analysis in Dimension d = 2
Here, we perform numerical simulations for problems (6.2) and (6.3). We intend to illustrate that the regularity of the solution should have an impact on the order of convergence of the scheme. Note that, while the solution of (6.2) is infinitely smooth, the solution of (6.3) is no more than C^{1,1/3}. This time, we have chosen h = r^{3/2+0.01}, which satisfies the condition h = o(r^{3/2}). We comment now on Fig. 4. Note that for problem (6.2), where the solution is smooth, the observed error again decays linearly in r. On the other hand, for problem (6.3), the error clearly decays in a sublinear way. In Table 4, we observe that the overall convergence rates are close to linear for the smooth solution of problem (6.2) and clearly sublinear for the non-smooth one of (6.3). Everything seems to indicate that the regularity of the solution has a significant effect on the order of convergence of the numerical scheme.

Improvement of the Error with an Adapted Boundary Condition
During the simulations presented in Sects. 6.1 and 6.2, we observed that the extension G ≡ 0 produced a certain instability in the solution close to the boundary. Due to this fact, the maximal error is attained near the boundary. In order to avoid this phenomenon, we have adapted the boundary condition to make the transition between the interior and the boundary smoother, and have taken an adapted extension G accordingly. In the results presented in Fig. 5, we clearly see that the maximum error of the solution with an adapted condition comes from the middle point, which is the point where the solution is the least regular, while without adaptation, the error comes from the instabilities created near the boundary.
Thus, the correction seems to give a smoother transition between the interior and the extended condition. It also seems to improve the error estimate (but not the order of convergence).
The correction suggested in this section clearly depends on the fact that we know the explicit form of the solution, which is of course not the case in general. The behaviour near the boundary strongly depends on the boundary condition g, and the problem is how to choose an extended boundary condition G such that the transition of the solution from Ω to ∂Ω_r is as smooth as possible. Unless some additional information regarding the behaviour of the solution near the boundary is available, we do not know how to do this in general. This seems to be a problem that must be handled in a problem-dependent way.

Solution of a Fully Nonhomogeneous Problem
Finally, we present some numerical simulations of a problem with nonhomogeneous right-hand side and nonhomogeneous boundary conditions. We present the numerical solutions corresponding to problem (1.3)-(1.4). The boundary condition has been extended to ∂Ω_r by G(x, y) = 1/2 + xy and we have chosen the numerical parameters r = 0.2 and h = r² = 0.04.
In Fig. 6, we present a level set representation of the solutions for p = 1.1, p = 1.5, p = 2, p = 4 and p = 20 (using the regularization described at the end of Sect. 5 when p < 2). Here, h = r 2 has been used, also when p < 2 (see Remark 1.3).
Appendix 1: Technical Results

Once this is proved, it only remains to prove that the first term in (A.1) goes to zero.
This shows (A.1) and concludes the proof.

Appendix 2: Invertibility of the Jacobian
In this part we discuss the invertibility of the Jacobian used in the Newton-Raphson method formulated in Sect. 5. We have the following general result.

Lemma B.1 Let A ∈ R^{n×n} satisfy conditions (a)-(d). Then A is invertible.
Proof We argue towards a contradiction. Assume there is a non-zero x = (x_1, . . . , x_n) ∈ R^n such that Ax = 0. We may also assume that ‖x‖_∞ = 1. We observe that if |x_i| = 1 for i = 1, or for i such that property (B.1) holds, then we arrive at a contradiction; in the last inequality, we used property (c) or (B.1). Assume now that |x_i| < 1 for i = 1, . . . , k − 1 and that |x_k| = 1. If k is such that (B.1) holds, then, arguing as in (B.3), we arrive at a contradiction. Therefore, k must be such that (B.2) holds, which means that there is ℓ < k such that A_{kℓ} ≠ 0. We then estimate, using that |x_ℓ| < 1 and that |x_j| ≤ 1 in the third inequality, and property (b) in the last inequality. This is a contradiction.
We now review the form of the Jacobian in the Newton-Raphson method (see Sect. 5) and give sufficient conditions for it to satisfy the assumptions of Lemma B.1. Recall that the Jacobian is given by the matrix A_{ij} as follows. If i is such that x_{γ+i−1} ∈ ∂Ω_r then A_{ij} = δ_{ij}, while if x_{γ+i−1} ∈ Ω then A_{ij} is given by the corresponding partial derivative of the discrete operator. Assume that B_r^h contains 2L + 1 elements. Then the first L rows are simply the corresponding rows of the identity matrix, (1, 0, . . . , 0), (0, 1, 0, . . . , 0), and so on, and similarly the last L rows are of the form (0, . . . , 0, 1, 0, . . . , 0), · · · , (0, . . . , 1, 0), (0, . . . , 0, 1).
We will now give conditions on the values z_1, . . . , z_k ensuring that the assumptions of Lemma B.1 are satisfied. By assumption, at least one of the terms of the sum is non-zero. It is therefore clear that assumption (a) is satisfied. Assumptions (b) and (c) are trivially satisfied. Regarding assumption (d), it is clear that (B.2) is always satisfied.