1 Introduction and Main Results

In the recent paper [10], we studied a new mean value formula (MVF) for the variational p-Laplace operator,

$$\begin{aligned} \Delta _p u = \text{ div }(|\nabla u|^{p-2}\nabla u). \end{aligned}$$
(1.1)

With the notation \(J_p(t):=|t|^{p-2}t\) for all \(p>1\), the MVF, valid for any \(C^2(\mathbb {R}^d)\) function, reads

$$\begin{aligned} \frac{1}{D_{d,p} r^p}\fint _{B_r} J_p(u(x+y)-u(x)) \,\mathrm {d}y =\Delta _p u(x) + o_r(1) \quad \text {as} \quad r\rightarrow 0^+. \end{aligned}$$
(1.2)

Here \(D_{d,p}:=\frac{d}{2(d+p)}\fint _{\partial B_1} |y_1|^{p} \,\mathrm {d}\sigma (y)\), where \(y_1\) is the first coordinate, \(\,\mathrm {d}\sigma \) the surface measure on the sphere and \(B_r\) denotes the ball of radius \(r>0\) centered at 0.
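As a quick sanity check on the constant (our own illustration, not part of the original paper), \(D_{d,p}\) can be evaluated numerically. In dimension \(d=2\) the spherical average becomes \(\frac{1}{2\pi }\int _0^{2\pi }|\cos \theta |^p\,\mathrm {d}\theta \); for \(p=2\) the average of \(\cos ^2\theta \) is \(1/2\), so \(D_{2,2}=\frac{2}{2(2+2)}\cdot \frac{1}{2}=\frac{1}{8}\).

```python
import math

def D_2d(p, n=20000):
    """Approximate D_{2,p} = (2 / (2(2+p))) * average of |cos(theta)|^p,
    using the rectangle rule on [0, 2*pi), which is very accurate for
    periodic integrands."""
    avg = sum(abs(math.cos(2 * math.pi * k / n)) ** p for k in range(n)) / n
    return 2.0 / (2.0 * (2.0 + p)) * avg

# Closed-form checks: the average of cos^2 is 1/2 and of cos^4 is 3/8, so
# D_{2,2} = (1/4)*(1/2) = 1/8 and D_{2,4} = (1/6)*(3/8) = 1/16.
```

The function name `D_2d` and the quadrature rule are our choices; any accurate method for the spherical average would do.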

The aim of this paper is to propose a new monotone finite difference discretization of the p-Laplacian based on the asymptotic expansion (1.2). As an application of our discretization, we also propose a convergent numerical scheme for the nonhomogeneous Dirichlet problem

$$\begin{aligned} -\Delta _p u(x) =f(x),&\quad x\in \Omega , \end{aligned}$$
(1.3)
$$\begin{aligned} u(x)=g(x),&\quad x\in \partial \Omega . \end{aligned}$$
(1.4)

The scheme results in a nonlinear system of equations. We propose two methods to solve this system: (1) a Newton-Raphson method, and (2) an explicit method based on the convergence to a steady state of an associated evolution problem. We comment on the advantages of each in Sect. 5. Finally, we exhibit some numerical tests of the accuracy and convergence of the scheme.

To the best of our knowledge, this is the first monotone finite difference discretization of the variational p-Laplacian available in the literature and therefore the first time that nonhomogeneous problems of the form (1.3)–(1.4) can be treated numerically via finite difference schemes. The monotonicity property (see Lemma 4.4) is crucial for the convergence of finite difference schemes in the context of viscosity solutions (see [4]). It is also worth mentioning that, in contrast to the finite difference schemes for the normalized (or game theoretical) p-Laplacian considered earlier (see Sect. 1.2), our scheme is well suited for Newton-Raphson solvers, which is an advantage when it comes to solving the resulting nonlinear system efficiently.

1.1 Main Results

In order to describe our main results we need to introduce some notation. Given a discretization parameter \(h>0\), consider the uniform grid defined by \(\mathcal {G}_h:= h \mathbb {Z}^d=\{y_\alpha :=h\alpha \, : \, \alpha \in \mathbb {Z}^d\}\). Let \(r>0\) and consider the following discrete operator

$$\begin{aligned} \Delta _{p}^h\phi (x):= \frac{h^d}{D_{d,p}\, \omega _d\, r^{p+d}} \sum _{y_\alpha \in B_r} J_p(\phi (x+y_\alpha )-\phi (x)), \end{aligned}$$
(1.5)

where \(\omega _d\) denotes the measure of the unit ball in \(\mathbb {R}^d\). Throughout the paper, we will assume the following relation between h and r:

$$\begin{aligned} h={\left\{ \begin{array}{ll} o(r^{\frac{p}{p-1}}), &{} \quad \text {if}\quad p \in (1,3)\setminus \{2\},\\ o(r), &{} \quad \text {if}\quad p=2,\\ o(r^{\frac{3}{2}}),&{} \quad \text {if}\quad p\in [3,\infty ).\\ \end{array}\right. } \end{aligned}$$
(H)
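To make the discretization concrete, here is a minimal sketch (ours, for illustration only) of (1.5) in dimension \(d=1\), where \(\omega _1=2\) and \(D_{1,p}=\frac{1}{2(1+p)}\), since the average of \(|y_1|^p\) over \(\partial B_1=\{-1,1\}\) is 1. For \(p=2\) and \(\phi (x)=x^2\) the operator should approach \(\Delta _2\phi =\phi ''=2\).

```python
def J(t, p):
    """J_p(t) = |t|^(p-2) t, with J_p(0) := 0."""
    return 0.0 if t == 0 else abs(t) ** (p - 2) * t

def discrete_p_laplacian_1d(phi, x, p, r, h):
    """The discretization (1.5) for d = 1: omega_1 = 2 and
    D_{1,p} = 1/(2(1+p)), since the average of |y_1|^p over {-1, 1} is 1."""
    D = 1.0 / (2.0 * (1.0 + p))
    total, alpha = 0.0, 1
    while alpha * h < r:                  # grid points y_alpha with |y_alpha| < r
        total += J(phi(x + alpha * h) - phi(x), p)
        total += J(phi(x - alpha * h) - phi(x), p)
        alpha += 1
    return h / (D * 2.0 * r ** (p + 1)) * total

# For p = 2 and phi(x) = x^2 this should approximate phi''(x) = 2.
approx = discrete_p_laplacian_1d(lambda x: x * x, 0.3, p=2, r=0.05, h=1e-4)
```

The parameters \(r=0.05\), \(h=10^{-4}\) satisfy \(h=o(r)\) in the spirit of (H) for \(p=2\), and the computed value is close to 2; the function names are ours.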

Our first result regards the consistency of the discretization (1.5).

Theorem 1.1

Let \(p\in (1,\infty )\), \(x\in \mathbb {R}^d\) and \(\phi \in C^2(B_R(x))\) for some \(R>0\). Assume (H). Then

$$\begin{aligned} \Delta _{p}^h\phi (x)=\Delta _p \phi (x)+ o_r(1) \quad \text {as} \quad r\rightarrow 0^+. \end{aligned}$$

Our second result concerns the finite difference numerical scheme for (1.3)–(1.4) induced by the discretization (1.5). More precisely, fix \(r_0>0\), let \(\partial \Omega _r:=\{x\in \Omega ^c \, :\, \text {dist}(x,\Omega )\le r \}\) and \(\Omega _r:=\Omega \cup \partial \Omega _r\), and let G be a continuous extension of g from \(\partial \Omega \) to \(\partial \Omega _r\) for all \(r<r_0\). Consider \(u_h:\Omega _r\rightarrow \mathbb {R}\) such that

$$\begin{aligned} -\Delta _{p}^hu_h(x) =f(x),&\quad x\in \Omega , \end{aligned}$$
(1.6)
$$\begin{aligned} u_h(x)=G(x),&\quad x\in \partial \Omega _r. \end{aligned}$$
(1.7)

We have the following result.

Theorem 1.2

Let \(p\in (1,\infty )\), \(\Omega \subset \mathbb {R}^d\) be a bounded, open and \(C^2\) domain, \(f\in C(\overline{\Omega })\) and \(g\in C(\partial \Omega )\). Assume (H).

  1. (a)

    Then there exists a unique pointwise solution \(u_h\in L^\infty (\Omega _r)\) of (1.6)–(1.7) when r is small enough.

  2. (b)

    If u is the unique viscosity solution of (1.3)–(1.4), then

$$\begin{aligned} \sup _{x\in \overline{\Omega }}\left| u_h(x)-u(x)\right| \rightarrow 0 \quad \text {as} \quad r\rightarrow 0^+. \end{aligned}$$

Remark 1.3

We conjecture that the relation \(h=o(r^{3/2})\) is sufficient also in the range \(p\in (1,3)\). See Sect. 6.5 for numerical evidence supporting this.

We note that if we restrict (1.6)–(1.7) to the uniform grid \(\mathcal {G}_h\) we obtain a fully discrete problem suited for numerical computations. More precisely, define the discrete sets

$$\begin{aligned} B_r^h:=B_r\cap \mathcal {G}_h, \quad \Omega ^h:= \Omega \cap \mathcal {G}_h, \quad \partial \Omega ^h:= \partial \Omega \cap \mathcal {G}_h\quad \text {and} \quad \Omega ^h_r:= \Omega _r\cap \mathcal {G}_h. \end{aligned}$$

Observe that \(\Delta _{p}^h\) given in (1.5) can be interpreted as an operator \(\Delta _{p}^h:\ell ^\infty (\mathcal {G}_h)\rightarrow \ell ^\infty (\mathcal {G}_h)\) since given any \(x_\beta , y_\alpha \in \mathcal {G}_h\) we have \(x_\beta +y_\alpha =(\beta +\alpha )h = x_{\beta +\alpha }\in \mathcal {G}_h\) and then

$$\begin{aligned} \Delta _{p}^h\phi _\beta := \frac{h^d}{D_{d,p}\, \omega _d\, r^{p+d}} \sum _{y_\alpha \in B_r} J_p(\phi _{\beta +\alpha }-\phi _\beta ) \quad \text {for} \quad x_\beta \in \Omega ^h \end{aligned}$$

with \(\phi : \mathcal {G}_h\rightarrow \mathbb {R}\) and \(\phi _{\gamma }:= \phi (\gamma h)\), whenever \(\gamma h \in \mathcal {G}_h\). Finally note that if \(x_\beta \in \Omega ^h\) and \(y_\alpha \in B_r^h\) we have that \(x_\beta +y_\alpha = x_{\beta +\alpha } \in \Omega _r^h\), so that (1.6)–(1.7) can be interpreted as

$$\begin{aligned} -\Delta _{p}^hU_\beta =f_\beta ,&\quad x_\beta \in \Omega ^h \end{aligned}$$
(1.8)
$$\begin{aligned} \ U_\beta =G_\beta ,&\quad x_\beta \in \partial \Omega _r^h, \end{aligned}$$
(1.9)

with \(U:\Omega _r^h\rightarrow \mathbb {R}\), \(f_\beta :=f(x_\beta )\) and \(G_\beta :=G(x_\beta )\). In this way we have the following trivial consequence of Theorem 1.2.
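As an illustration of the fully discrete problem (our own sketch, with hypothetical names, not the authors' implementation), the following assembles the residual \(F(U)\) of (1.8)–(1.9) in dimension \(d=1\) on \(\Omega =(0,1)\); a root of \(F\) is the discrete solution, which is exactly the form a Newton-Raphson solver expects. For \(f\equiv 0\) and affine boundary data, the affine grid function is an exact discrete solution, since the symmetric \(J_p\) terms cancel.

```python
def Jp(t, p):
    return 0.0 if t == 0 else abs(t) ** (p - 2) * t

def residual(U, p, h, r, f, G):
    """Residual of (1.8)-(1.9) for d = 1, Omega = (0,1): at interior indices
    F_beta = -Delta_p^h U_beta - f_beta; on the discrete boundary layer
    F_beta = U_beta - G_beta. U and F are dicts indexed by beta (x = beta*h),
    and D_{1,p} = 1/(2(1+p)), omega_1 = 2."""
    N = round(1.0 / h)                  # x_N = 1
    K = int(r / h - 1e-12)              # largest |alpha| with |alpha| h < r (strict)
    scale = h / ((1.0 / (2.0 * (1.0 + p))) * 2.0 * r ** (p + 1))
    F = {}
    for beta in range(-K, N + K + 1):
        if 0 < beta < N:
            s = sum(Jp(U[beta + a] - U[beta], p)
                    for a in range(-K, K + 1) if a != 0)
            F[beta] = -scale * s - f(beta * h)
        else:                           # boundary layer indices
            F[beta] = U[beta] - G(beta * h)
    return F

# The affine function U_beta = beta*h solves the system with f = 0, G(x) = x.
h, r, p = 0.05, 0.12, 3.0
U = {b: b * h for b in range(-2, 23)}
F = residual(U, p, h, r, lambda x: 0.0, lambda x: x)
```

Here the residual vanishes (up to rounding) at every index, confirming the cancellation.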

Corollary 1.4

Assume the hypotheses of Theorem 1.2.

  1. (a)

    Then there exists a unique pointwise solution \(U\in \ell ^\infty (\Omega _r^h)\) of (1.8)–(1.9) when r is small enough.

  2. (b)

    If u is the unique viscosity solution of (1.3)–(1.4), then

$$\begin{aligned} \max _{x_\beta \in \Omega ^h}\left| U_\beta -u(x_\beta )\right| \rightarrow 0 \quad \text {as} \quad r\rightarrow 0^+. \end{aligned}$$

1.2 Related Results

For an overview of classical and modern results for the p-Laplacian, we refer the reader to the book [22]. For an overview of numerical methods for degenerate elliptic PDEs we refer the reader to Section 1.1 in [34].

We want to stress that the operator of interest in this paper is the variational p-Laplacian, i.e.,

$$\begin{aligned} \Delta _p u = \text{ div }(|\nabla u|^{p-2}\nabla u). \end{aligned}$$

Once we have found a monotone discretization of \(\Delta _p\), it is straightforward to find monotone finite difference schemes also for p-Laplace equations involving gradient terms, such as

$$\begin{aligned} -\Delta _p u =|\nabla u|^{q_1}f(x)+|\nabla u|^{q_2}, \end{aligned}$$

or other Hamilton-Jacobi-type equations involving the p-Laplacian, which do not necessarily allow for a variational formulation. In particular, we could recover and treat equations involving the normalized p-Laplacian (see (1.10)).

On the other hand, finite difference methods for equations involving the p-Laplacian have been successfully developed using the normalized (or game theoretical) version of the p-Laplacian \(\Delta _p^N \). The ideas are based on the identity

$$\begin{aligned} \Delta _p u =|\nabla u|^{p-2}\Delta u +(p-2)|\nabla u|^{p-4}\Delta _\infty u. \end{aligned}$$

This allows one to define

$$\begin{aligned} \Delta _p^N u := |\nabla u|^{2-p}\Delta _p u = \Delta u +(p-2)\Delta _\infty ^N u, \end{aligned}$$
(1.10)

where \(\Delta _\infty ^N \) is the so-called normalized infinity Laplacian, which is given by the second order directional derivative in the direction of the gradient. One limitation of such methods is the fact that they are not well adapted to treat nonhomogeneous problems of the form \(-\Delta _p u=f\), unless \(p\le 2\). Instead they allow for treating inhomogeneities of the form \(-\Delta _p u=|\nabla u|^{p-2}f\) (this problem is equivalent to \(-\Delta _p^N u=f\)), which our method could handle as well, at least if \(p\ge 2\) (since monotone approximations of \(|\nabla u|\) are well known). Of course, both problems are equivalent only if \(f\equiv 0\).

Let us first comment on the literature related to finite difference methods for \(\Delta _p^N \). In [34], the author presents a monotone finite difference scheme for the normalized infinity Laplacian and the game theoretical (or normalized) p-Laplacian for \(p\ge 2\). In addition, a scheme for (1.3)–(1.4) with \(f\equiv 0\) is presented, together with a semi-implicit solver. In [11], a strategy to prove the convergence of dynamic programming principles (including monotone finite difference schemes) for the normalized p-Laplacian is presented, as well as the strong uniqueness property for the p-Laplacian, which is crucial for the application of the convergence criteria of Barles and Souganidis in [4]. We also seize the opportunity to mention Section 6 in [7], where a finite difference method (based on the mean value properties of the normalized p-Laplacian) is proposed for a double-obstacle problem involving the p-Laplacian. We note that in the case \(1<p<2\) none of the above-mentioned schemes is monotone, and as such, the numerical scheme in this paper is the first one treating this range, even in the homogeneous case \(f\equiv 0\).

There are many other monotone approximations of \(\Delta _p^N \) available in the literature. Strictly speaking, they are not numerical approximations, but the proofs of convergence follow similar strategies based on monotonicity and consistency. See [11] for a discussion on this topic. Such approximations were first presented in [29] (see also [20, 30, 31] for a probabilistic game theoretical approach). The basic idea of these approximations is to combine the classical mean value property (MVP) for the Laplacian with an MVP for the normalized infinity Laplacian motivated by Tug-of-War games [35]. The literature on this topic has become extensive in the last decade. In [2, 23] the equivalence between being p-harmonic and satisfying an MVP is treated. See [16, 19] for an MVP in the full range \(1<p<\infty \) and [21] for the application of such approximations in the context of obstacle problems.

Regarding monotone approximations of the variational p-Laplacian, the literature is very recent and not so extensive. The MVP given by (1.2) was derived in [6, 10]. In [10] it is shown to be a monotone approximation of \(\Delta _p\). The authors are also able to prove convergence of the corresponding approximating problems to a viscosity solution.

It is noteworthy that the discretization presented in this paper is reminiscent of the definition of the variational p-Laplacian on graphs, see [1] and also [37]. In this direction, Corollary 1.4 can be interpreted as the convergence of the solution to a PDE defined on a graph associated with the grid. We refer to the recent paper [36] for a study of the eigenvalues of this operator and to [12] for its applications to image processing. Note that also the normalized p-Laplacian has been defined on graphs, see [28].

Finally, we seize the opportunity to mention that since the p-Laplacian is of divergence form, it is well suited for finite element based methods. We mention a few papers in this direction: [5, 13, 14, 17, 24–27]. We want to stress that finite element methods are not well suited for treatment with viscosity methods.

1.3 Organization of the Paper

In Sect. 2, we introduce some notation and prerequisites needed in the rest of the paper. Section 3 is devoted to the proof of consistency of the discretization previously introduced. In Sect. 4, we study the numerical scheme for the boundary value problems. This is followed by a discussion around solving the nonlinear systems of equations derived from our scheme, in Sect. 5. Finally, in Sect. 6, we perform some numerical experiments to support our theoretical results. We also have an appendix containing technical results and a discussion regarding the invertibility of the Jacobian used in one of the methods in Sect. 5.

2 Notations and Prerequisites

We adopt the following definition of viscosity solutions, which is the classical definition adjusted to the nonhomogeneous equation (see e.g. [15]).

Definition 2.1

(Solutions of the equation) Suppose that \(f\in C(\Omega )\). We say that a lower (resp. upper) semicontinuous function u in \(\Omega \) is a viscosity supersolution (resp. subsolution) of the equation

$$-\Delta _pu=f $$

in \(\Omega \) if the following holds: whenever \(x_0 \in \Omega \) and \(\varphi \in C^2(B_R(x_0))\) for some \(R>0\) are such that \(|\nabla \varphi (x)|\ne 0\) for \(x\in B_R(x_0)\setminus \{x_0\}\),

$$\begin{aligned} \varphi (x_0) = u(x_0) \quad \text {and} \quad \varphi (x) \le u(x) \quad {(resp. \varphi (x)\ge u(x))} \quad \text {for all} \quad x \in B_R(x_0)\cap \Omega , \end{aligned}$$

then we have

$$\begin{aligned} \lim _{\rho \rightarrow 0}\sup _{B_\rho (x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)\right) \ge f(x_0) \quad {(resp. \lim _{\rho \rightarrow 0}\inf _{B_\rho (x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)\right) \le f(x_0))}. \end{aligned}$$
(2.1)

A viscosity solution is a function \(u\in C(\Omega )\) being both a viscosity supersolution and a viscosity subsolution.

Remark 2.1

We consider condition (2.1) to avoid problems with the definition of \(-\Delta _p \varphi (x_0)\) when \(|\nabla \varphi (x_0)|=0\) and \(p\in (1,2)\). However, when either \(p\ge 2\) or \(|\nabla \varphi (x_0)|\not =0\), (2.1) can be replaced by the standard one, i.e.,

$$\begin{aligned} -\Delta _p\varphi (x_0)\ge f(x_0) \quad {(resp. -\Delta _p\varphi (x_0)\le f(x_0))}. \end{aligned}$$
(2.2)

A viscosity solution of the boundary value problem (1.3)–(1.4) attaining the boundary condition in a pointwise sense is naturally defined as follows.

Definition 2.2

(Solutions of the boundary value problem) Suppose that \(f\in C(\overline{\Omega })\) and \(g\in C(\partial \Omega )\). We say that a lower (resp. upper) semicontinuous function u in \(\overline{\Omega }\) is a viscosity supersolution (resp. subsolution) of (1.3)–(1.4) if

  1. (a)

    u is a viscosity supersolution (resp. subsolution) of \(-\Delta _p u=f\) in \(\Omega \) (as in Definition 2.1);

  2. (b)

    \(u(x)\ge g(x)\) (resp. \(u(x)\le g(x)\)) for \(x\in \partial \Omega .\)

A viscosity solution of (1.3)–(1.4) is a function \(u\in C(\overline{\Omega })\) being both a viscosity supersolution and a viscosity subsolution.

Remark 2.2

To prove the convergence result (Theorem 1.2(b)) we will make use of a generalized notion of viscosity solutions of a boundary value problem. We will introduce this notion just before using it. See Sect. 4.3.

3 Consistency of the Discretization: Proof of Theorem 1.1

In this section we prove the consistency of the discretization \(\Delta _{p}^h\) for \(C^2\)-functions as presented in Theorem 1.1.

Proof of Theorem 1.1

Throughout this proof, C will denote a constant that may depend on p and the dimension d, but not on r or h.

The mean value property introduced in [10] involves the quantity

$$\begin{aligned} \mathcal {M}_r^p[\phi ](x)=\frac{1}{D_{d,p} r^p} \fint _{ B_r} J_p(\phi (x+y)-\phi (x)) \,\mathrm {d}y. \end{aligned}$$

By the triangle inequality and Theorem 2.1 in [10],

$$\begin{aligned} {\begin{matrix} \left| \Delta _{p}^h\phi (x)-\Delta _p \phi (x)\right| &{}\le \left| \Delta _{p}^h\phi (x)-\mathcal {M}_r^p[\phi ](x)\right| + \left| \mathcal {M}_r^p[\phi ](x)-\Delta _p \phi (x)\right| \\ &{}= \left| \Delta _{p}^h\phi (x)-\mathcal {M}_r^p[\phi ](x)\right| + o_r(1) \qquad \text {as} \qquad r\rightarrow 0^+. \end{matrix}} \end{aligned}$$

Therefore, it is sufficient to show that

$$\begin{aligned} |\Delta _{p}^h\phi (x)-\mathcal {M}_r^p[\phi ](x)|=o_r(1) \qquad \text {as} \qquad r\rightarrow 0^+. \end{aligned}$$

Step 1: Approximation of \(B_r\) by h-boxes. Define the following family of h-boxes centered at \(y_\alpha \in \mathcal {G}_h\),

$$\begin{aligned} R^h_\alpha := y_\alpha + \frac{h}{2}[-1,1)^d, \end{aligned}$$

and the union of boxes that approximates \(B_r\)

$$\begin{aligned} \tilde{B}_r:= \bigcup _{y_\alpha \in B_r^h} R^h_\alpha . \end{aligned}$$


See Fig. 1.

Fig. 1: The family of boxes and their union \(\tilde{B}_r\) that covers \(B_r\)

Consider

$$\begin{aligned} A_r:=\frac{1}{2}\int _{ B_r} \left( J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right) \,\mathrm {d}y, \end{aligned}$$

and

$$\begin{aligned} \tilde{A}_r:=\frac{1}{2}\int _{ \tilde{B}_r} \left( J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right) \,\mathrm {d}y. \end{aligned}$$

In this step we will prove that

$$\begin{aligned} |A_r-\tilde{A}_r|=o(r^{d+p}). \end{aligned}$$
(3.1)

Notice first that

$$\begin{aligned} {\begin{matrix} \left| A_r-\tilde{A}_r\right| = \frac{1}{2}&{}\left| \int _{ B_r\setminus \tilde{B_r}} \left( J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right) \,\mathrm {d}y \right. \\ &{}\qquad \left. - \int _{ \tilde{B}_r\setminus B_r} \left( J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right) \,\mathrm {d}y \right| \\ \le \quad &{} \frac{1}{2}\int _{(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)} \left| J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right| \,\mathrm {d}y. \end{matrix}} \end{aligned}$$

It is easy to verify that \(B_r\cup \tilde{B}_r\subset B_{r+\sqrt{d}h}\) and \(B_{r-\sqrt{d}h}\subset B_r\cap \tilde{B}_r\) so that

$$\begin{aligned} (B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)\subset B_{r+\sqrt{d}h}\setminus B_{r-\sqrt{d}h}. \end{aligned}$$

Observe that regardless of the value of p, we always have \(h=o(r)\). Therefore,

$$\begin{aligned} {\begin{matrix} |(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)|&{}\le |B_{r+\sqrt{d}h}\setminus B_{r-\sqrt{d}h}|= \omega _d \left( (r+\sqrt{d}h)^d- (r-\sqrt{d}h)^d\right) \\ &{}\le \omega _d d(r+\sqrt{d}h)^{d-1}2\sqrt{d}h\le C r^{d-1}h\\ &{}= o(r^{d}). \end{matrix}} \end{aligned}$$

On the other hand, by Taylor expansion

$$\begin{aligned} |\phi (x+y)-\phi (x)+ \phi (x-y)-\phi (x)|=\mathcal {O}(|y|^2). \end{aligned}$$

In the case \(p\ge 2\), Lemma A.1 implies

$$\begin{aligned} {\begin{matrix} |J_p&{}(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))|\\ &{}= |J_p(\phi (x+y)-\phi (x))-J_p(-\phi (x-y)+\phi (x))|\\ &{}\le C \max ( |\phi (x-y)-\phi (x)|,|\phi (x+y)-\phi (x)|)^{p-2}|\phi (x+y)\\ &{}\quad -\phi (x)+ \phi (x-y)-\phi (x)|\\ &{}=O(|y|^p). \end{matrix}} \end{aligned}$$

We can conclude

$$\begin{aligned} {\begin{matrix} |A_r-\tilde{A}_{r}| &{}\le \frac{\tilde{C}}{2} \int _{(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)} |y|^{p} \,\mathrm {d}y\le \frac{\tilde{C}}{2}(r+\sqrt{d}h)^{p} |(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)|\\ &{}= o(r^{p+d}). \end{matrix}} \end{aligned}$$

In the case \(p< 2\), we argue slightly differently. On page 8 in [10] it is proved that

$$\begin{aligned} \fint _{\partial B_r}|J_p(\phi (x+y)-\phi (x))-J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)| \,\mathrm {d}\sigma (y) =o(r^p). \end{aligned}$$

In a similar fashion, one can prove

$$\begin{aligned} {\begin{matrix} \int _{(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)}\Big |&{}J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\\ {} &{}-J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)-J_p(-y\cdot \nabla \phi (x)\\ &{}+\frac{1}{2}y^T D^2\phi (x) y)\Big | dy =o(r^{p+d}). \end{matrix}} \end{aligned}$$

To show (3.1) it is therefore sufficient to show that

$$\begin{aligned}&\int _{(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)}\left| J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)+J_p(-y\cdot \nabla \phi (x)\right. \nonumber \\&\left. +\frac{1}{2}y^T D^2\phi (x) y)\right| dy=o(r^{p+d}). \end{aligned}$$
(3.2)

Without loss of generality assume that \(\nabla \phi (x)=c\mathbf {e}_1\) with \(c\not =0\). Then

$$\begin{aligned} {\begin{matrix} J_p(y\cdot \nabla \phi (x) +\frac{1}{2}y^T D^2 \phi (x) y)&{}=J_p(c y \cdot \mathbf {e}_1 +\frac{1}{2} y^T D^2 \phi (x) y)=(c|y|)^{p-1} J_p(\hat{y} \cdot \mathbf {e}_1\\ &{}+\frac{1}{2} c^{-1}|y|\hat{y}^T D^2 \phi (x) \hat{y} ) \end{matrix}} \end{aligned}$$

where \(\hat{y}=y/|y|\). By Lemma A.2 with \(a=\hat{y} \cdot \mathbf {e}_1\) and \(b=\frac{1}{2} c^{-1}|y|\hat{y}^T D^2 \phi (x) \hat{y}\) we get

$$\begin{aligned} {\begin{matrix} (c|y|)^{p-1}&{} \Big |J_p(\hat{y} \cdot \mathbf {e}_1 +\frac{1}{2} c^{-1}|y|\hat{y}^T D^2 \phi (x) \hat{y} )- J_p(\hat{y} \cdot \mathbf {e}_1)\Big |\\ &{}\le C(c|y|)^{p-1}\left( |\hat{y} \cdot \mathbf {e}_1|+\frac{1}{2} c^{-1}|y||\hat{y}^T D^2 \phi (x) \hat{y}|\right) ^{p-2}\frac{1}{2} c^{-1}|y||\hat{y}^T D^2 \phi (x) \hat{y}|\\ &{}\le C|y|^p|\hat{y} \cdot \mathbf {e}_1|^{p-2}. \end{matrix}} \end{aligned}$$

Hence,

$$\begin{aligned} \left| J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)+J_p(-y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)\right| \le C|y|^p|\hat{y} \cdot \mathbf {e}_1|^{p-2}. \end{aligned}$$

From (6.2) in [10] it then follows that

$$\begin{aligned} {\begin{matrix} \int _{\partial B_r}\Big |&{} J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)+J_p(-y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)\Big | d\sigma (y) \\ &{}\le C r^p \int _{\partial B_r}|\hat{y} \cdot \mathbf {e}_1|^{p-2} \,\mathrm {d}\sigma (y)\le \tilde{C}r^{p+d-1}. \end{matrix}} \end{aligned}$$

After integration (to pass from spheres to balls) we obtain

$$\begin{aligned} {\begin{matrix} \int _{(B_r\cup \tilde{B}_r)\setminus (B_r\cap \tilde{B}_r)}&{}\left| J_p(y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)+J_p(-y\cdot \nabla \phi (x)+\frac{1}{2}y^T D^2\phi (x) y)\right| dy \\ &{}\le \int _{B_{r+\sqrt{d}h}\setminus B_{r-\sqrt{d}h}}\left| J_p(y\cdot \nabla \phi (x) +\frac{1}{2}y^T D^2\phi (x) y)+J_p(-y\cdot \nabla \phi (x)\right. \\ &{}\quad \left. +\frac{1}{2}y^T D^2\phi (x) y)\right| dy \\ &{} \le Chr^{p+d-1}=o(r^{d+p}). \end{matrix}} \end{aligned}$$

This is (3.2).

Step 2: Discretization of \(\tilde{A}_r\). Consider

$$\begin{aligned} \tilde{A}_r^h:=h^d\sum _{y_\alpha \in B_r}J_p(\phi (x+y_\alpha )-\phi (x)). \end{aligned}$$

We will show that

$$\begin{aligned} |\tilde{A}_r-\tilde{A}_r^h|=o(r^{d+p}). \end{aligned}$$
(3.3)

Observe that

$$\begin{aligned} \begin{aligned} \tilde{A}_r&=\frac{1}{2}\sum _{y_\alpha \in B_r}\int _{ R_\alpha ^h} \left( J_p(\phi (x+y)-\phi (x))+J_p(\phi (x-y)-\phi (x))\right) \,\mathrm {d}y\\&=\frac{1}{2}\sum _{y_\alpha \in B_r}\int _{ R_0^h} \left( J_p(\phi (x+y_\alpha +y)-\phi (x))+J_p(\phi (x+y_\alpha -y)-\phi (x))\right) \,\mathrm {d}y. \end{aligned} \end{aligned}$$

Since \(|R_0^h|=h^d\) we have

$$\begin{aligned} {\begin{matrix} |\tilde{A}_{r}&{}-\tilde{A}_r^h|\\ &{}= \frac{1}{2}\left| \sum _{y_\alpha \in B_r} \int _{R_0^h} \left( J_p(\phi (x+y_\alpha +y)-\phi (x))+J_p(\phi (x+y_\alpha -y)-\phi (x))\right. \right. \\ &{}\qquad \left. \left. - 2J_p(\phi (x+y_\alpha )-\phi (x)) \right) \,\mathrm {d}y \right| . \end{matrix}} \end{aligned}$$

If \(p\ge 2\) we use Taylor expansion of order two and obtain

$$\begin{aligned} \phi (x+y_\alpha \pm y)-\phi (x)=\phi (x+y_\alpha )-\phi (x)\pm \nabla \phi (x+y_\alpha )\cdot y + \mathcal {O}(|y|^2). \end{aligned}$$

Let \(\rho =\phi (x+y_\alpha )-\phi (x)\) and \(\eta =\nabla \phi (x+y_\alpha )\). Then this can be expressed as

$$\begin{aligned} \phi (x+y_\alpha +y)-\phi (x)=\rho +\eta \cdot y+\mathcal {O}(|y|^2). \end{aligned}$$

Therefore, by Lemma A.1

$$\begin{aligned} \begin{aligned} \Big |J_p&(\phi (x+y_\alpha +y)-\phi (x))-J_p(\rho +\eta \cdot y)\Big |\\&\le C\max (|\phi (x+y_\alpha +y)-\phi (x)|,|\rho +\eta \cdot y|)^{p-2}|y|^2\\&\le Cr^{p-2}o(r^2)\\&= o(r^{p}), \end{aligned} \end{aligned}$$

where we have used that \(y=\mathcal {O}(h)=o(r)\) and that \(\rho =\mathcal {O}(y_\alpha )=\mathcal {O}(r)\). It follows that it will be enough to obtain an estimate of the form

$$\begin{aligned} |J_p(\rho +\eta \cdot y)+J_p(\rho -\eta \cdot y)-2J_p(\rho )| = o(r^p). \end{aligned}$$
(3.4)

For \(p=2\), this estimate is trivial. When \(p >3\) we use the second order Taylor expansion of \(J_p\) to obtain

$$\begin{aligned} {\begin{matrix} |J_p(\rho +\eta \cdot y)- J_p(\rho )-(p-1)|\rho |^{p-2}\eta \cdot y| &{}\le C\max (|\rho |,|\rho +\eta \cdot y|)^{p-3}|\eta \cdot y|^{2}\\ &{}\le Cr^{p-3}o(r^{3}) =o(r^p), \end{matrix}} \end{aligned}$$
(3.5)

since \(\rho =\mathcal {O}(y_\alpha )=\mathcal {O}(r)\) and \(y=\mathcal {O}(h)=o(r^\frac{3}{2})\) when \(p>3\).

When \(p\in (2,3]\) we use the fact that the derivative of the function \(t\mapsto J_p(t)\) is \(({p-2})\)-Hölder continuous and obtain

$$\begin{aligned} {\begin{matrix} |J_p(\rho +\eta \cdot y)- J_p(\rho )-(p-1)|\rho |^{p-2}\eta \cdot y| &{}\le C|\eta \cdot y|^{p-1}\\ &{} =o(r^p), \end{matrix}} \end{aligned}$$
(3.6)

where we used that \(y=\mathcal {O}(h)=o(r^{p/(p-1)})\) when \(p\in (2,3]\). The estimate (3.4) then follows from (3.5) and (3.6) in the respective ranges of p, since the first order terms cancel when the estimates for \(y\) and \(-y\) are added.

If \(p<2\) we use the fact that \(J_p\) is \((p-1)\)-Hölder continuous. Thus,

$$\begin{aligned} \begin{aligned} \big |J_p(\phi (x+y_\alpha +y)-\phi (x))-J_p(\phi (x+y_\alpha )-\phi (x) )\big |&\le C|\phi (x+y_\alpha +y)-\phi (x+y_\alpha )|^{p-1} \\&\le C|y|^{p-1}=o(r^p), \end{aligned} \end{aligned}$$
(3.7)

where we used the assumption \(y=\mathcal {O}(h)=o(r^{p/(p-1)})\) when \(p<2\). Using (3.4) and (3.7) we get

$$\begin{aligned} |\tilde{A}_{r}-\tilde{A}_r^h|=o(r^p)\sum _{y_\alpha \in B_r} h^d=o(r^p)|\tilde{B}_r|\le o(r^p) |B_{r+\sqrt{d}h}|=o(r^{p+d}). \end{aligned}$$

Step 3: Conclusion. Combining Step 1 and Step 2, we obtain

$$\begin{aligned} {\begin{matrix} |\Delta _{p}^h\phi (x)-\mathcal {M}_r^p[\phi ](x)|&{}=\frac{1}{D_{d,p}r^p |B_r|}|\tilde{A}_r^h-A_r|\\ &{}\le \frac{C}{ r^{p+d}} \left( |\tilde{A}_r^h-\tilde{A}_r|+ |\tilde{A}_r-A_r|\right) = \frac{C}{ r^{p+d}}\, o(r^{p+d})= o_r(1). \end{matrix}} \end{aligned}$$

\(\square \)

4 Properties of the Numerical Scheme

In this section we will state and prove some properties of the numerical scheme (1.6)–(1.7).

4.1 Existence and Uniqueness

We will obtain the existence and uniqueness result given in Theorem 1.2(a).

First note that we can write

$$\begin{aligned} {\begin{matrix} \Delta _{p}^h\phi (x)&{}= \frac{h^d}{D_{d,p}\, \omega _d\, r^{p+d}} \sum _{y_\alpha \in B_r} J_p(\phi (x+y_\alpha )-\phi (x))\\ &{}=\frac{1}{D_{d,p}r^p} \fint _{B_r}J_p(\phi (x+y)-\phi (x))\,\mathrm {d}\mu (y) \end{matrix}} \end{aligned}$$

with \(\mu \) being the discrete measure given by

$$\begin{aligned} \,\mathrm {d}\mu (y):=h^d\sum _{y_\alpha \in B_r} \,\mathrm {d}\delta _{y_\alpha }(y), \end{aligned}$$

where \(\delta _{z}\) denotes the Dirac delta measure at \(z\in \mathbb {R}^d\). With this simple observation, all the results of Section 9.1 in [10] carry over here word by word (replacing \(\mathcal {M}^p_{r}\) by \(\Delta _{p}^h\) and \(\,\mathrm {d}y\) by \(\,\mathrm {d}\mu (y)\)). We state them for completeness. Our running assumptions in this section will be \(f\in C(\overline{\Omega })\) and \(G\in C(\partial \Omega _r)\) (a continuous extension of \(g\in C(\partial \Omega )\)).

The comparison result below implies in particular the uniqueness of solutions of (1.6)–(1.7).

Proposition 4.1

(Comparison) Let \(p\in (1,\infty )\), \(h,r>0\), and \(v,w\in L^\infty (\Omega _r)\) be such that

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta _{p}^hw (x)\ge f(x),&{} x\in \Omega ,\\ \qquad w(x)\ge G(x),&{} x\in \partial \Omega _r, \end{array}\right. } \qquad and \qquad {\left\{ \begin{array}{ll} -\Delta _{p}^hv (x)\le f(x),&{} x\in \Omega ,\\ \qquad v(x)\le G(x),&{} x\in \partial \Omega _r. \end{array}\right. } \end{aligned}$$

Then \(v\le w\) in \(\Omega _r\).

The existence of solutions is proved by a monotonicity argument. For this purpose, we need the following \(L^\infty \)-bound.

Proposition 4.2

(L\(^\infty \)-bound) Let \(p\in (1,\infty )\), let \(R>0\) and \(u_h\) be the solution (if any) of (1.6)–(1.7) corresponding to some \(r\le R\). Assume (H). Then

$$\begin{aligned} \Vert u_h\Vert _{\infty }\le A, \end{aligned}$$

for r small enough, with \(A>0\) depending on \(p, \Omega , f, g\) and R (but not on r and h).

Proof

See the proof of Proposition 9.2 in [10]. The proof is based on an explicit barrier for the p-Laplace equation, which by Theorem 1.1 gives a barrier for (1.6)–(1.7). \(\square \)

In order to prove the existence we also need a two-step iteration process. For that purpose we define

$$\begin{aligned} L[\psi ,\phi ](x):= \frac{1}{D_{d,p}r^p} \fint _{ B_r} J_p(\phi (x+y)-\psi (x) )\,\mathrm {d}\mu (y). \end{aligned}$$

We have the following result.

Lemma 4.3

Let \(r>0\) and \(\phi \in L^\infty (\Omega _r)\).

  1. (a)

    Then there exists a unique \(\psi \in L^\infty (\Omega )\) such that \(-L[\psi ,\phi ](x)=f(x)\) for all \(x\in \Omega \).

  2. (b)

    Let \(\psi _1\) and \(\psi _2\) be such that \( - L[\psi _1,\phi ](x) \le f(x)\) and \(- L[\psi _2,\phi ](x)\ge f(x)\) for all \(x\in \Omega \), then \(\psi _1\le \psi _2\) in \(\Omega \).

Proof

The proof follows as the proof of Lemma 9.3 in [10]. \(\square \)

We are finally ready to prove the existence.

Proof of Theorem 1.2(a)

The proof follows the proof of Proposition 9.4 in [10]. We spell out some details below.

The approach for existence is to construct a monotone increasing sequence converging to the solution. Let \(\mathcal {B}\) be the barrier constructed in Proposition 4.2. Define

$$\begin{aligned} u^0_h(x)= {\left\{ \begin{array}{ll} \displaystyle \inf _{ \partial \Omega _r} G -\mathcal {B}(x) &{}x\in \Omega ,\\ G(x) &{}x\in \partial \Omega _r, \end{array}\right. } \end{aligned}$$

and the sequence \(u^k_h\) as the sequence of solutions of

$$\begin{aligned} {\left\{ \begin{array}{ll} - L[u^k_h,u^{k-1}_h](x)= f(x)&{} x\in \Omega ,\\ u^k_h(x)= G(x)&{} x\in \partial \Omega _r. \end{array}\right. } \end{aligned}$$

One can prove that \(u^k_h\) exists for all k, is nondecreasing (by the monotonicity of L) and uniformly bounded (by Proposition 4.2). We can then define the pointwise limit

$$\begin{aligned} u_h(x):=\lim _{k\rightarrow \infty } u^k_h(x). \end{aligned}$$

Due to the pointwise convergence and the continuity of \(J_p\),

$$\begin{aligned} {\begin{matrix} -f(x)&=\lim _{k\rightarrow \infty } L[u^{k+1}_h, u^k_h](x)= L[\lim _{k\rightarrow \infty } u^{k+1}_h, \lim _{k\rightarrow \infty } u^k_h](x)=L[u_h, u_h](x)= \Delta _{p}^hu_h(x). \end{matrix}} \end{aligned}$$

Thus, \(u_h\) is a solution of (1.6). Clearly \(u_h=G\) on \(\partial \Omega _r\), so it also satisfies (1.7). The uniqueness follows from Proposition 4.1. \(\square \)
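To make the construction concrete, the following toy sketch (our own; the names, parameters, and the use of bisection for the scalar solves are illustrative choices, not the paper's solver) runs the monotone iteration on a 1D instance of (1.6)–(1.7): \(-\Delta _p u=0\) on \((0,1)\) with \(g(0)=0\), \(g(1)=1\) and the linear extension \(G(x)=x\). At each node, \(-L[u^k_h,u^{k-1}_h](x)=f(x)\) is a scalar equation whose left-hand side is strictly monotone in \(u^k_h(x)\), so bisection applies. In 1D every p-harmonic function is affine, and by symmetry of the stencil the discrete solution is the affine interpolant itself, so the iterates should approach \(u(x)=x\).

```python
def J(t, p):
    return 0.0 if t == 0 else abs(t) ** (p - 2) * t

def solve_node(neigh, rhs, p, iters=60):
    """Bisection for sum_j J_p(neigh[j] - psi) = rhs; the left-hand side
    is strictly decreasing in psi, so the root is unique."""
    lo = min(neigh) - abs(rhs) - 1.0
    hi = max(neigh) + abs(rhs) + 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(J(v - mid, p) for v in neigh) > rhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p, N, K = 3.0, 20, 2                 # h = 1/N; neighbors y_alpha, 0 < |alpha| <= K
h = 1.0 / N
u = {i: i * h if (i <= 0 or i >= N) else -10.0   # u^0: G on the layer, low inside
     for i in range(-K, N + K + 1)}

for _ in range(400):                 # monotone sweeps u^{k-1} -> u^k
    prev = dict(u)
    for i in range(1, N):            # interior nodes only; f = 0, so rhs = 0
        u[i] = solve_node([prev[i + a] for a in range(-K, K + 1) if a != 0], 0.0, p)

err = max(abs(u[i] - i * h) for i in range(1, N))
```

Starting from a constant well below the solution, the iterates increase monotonically, as in the proof above, and `err` becomes small after a few hundred sweeps.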

4.2 Monotonicity and Consistency

In order to prove convergence of the numerical scheme, we will need certain monotonicity and consistency properties (we already obtained a uniform bound in Proposition 4.2). For a function \(\phi :\Omega _r\rightarrow \mathbb {R}\) define

$$\begin{aligned} S(r,h,x,\phi (x),\phi ):={\left\{ \begin{array}{ll} \displaystyle - \frac{h^d}{D_{d,p}\, \omega _d\, r^{p+d}} \sum _{y_\alpha \in B_r} J_p(\phi (x+y_\alpha )-\phi (x))- f(x) &{}x\in \Omega ,\\ \phi (x)-G(x) &{}x\in \partial \Omega _r. \end{array}\right. } \end{aligned}$$

Note that (1.6)–(1.7) can be equivalently formulated as

$$\begin{aligned} S(r,h,x,u_h(x),u_h)=0 \quad x\in \Omega _r. \end{aligned}$$
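For concreteness, the interior part of S is just the discretized operator \(\Delta _{p}^h\) minus f, and it can be implemented directly from its definition. Below is a Python sketch (our own experiments used MATLAB); the uniform cubic-grid construction and all names are illustrative:

```python
import math
from itertools import product

def J(p, t):
    """J_p(t) = |t|^{p-2} t, with J_p(0) = 0 also when p < 2."""
    return abs(t) ** (p - 2) * t if t != 0 else 0.0

def D_const(d, p):
    # Closed form of D_{d,p} (see Sect. 6); normalization checked
    # against D_{1,p} = 1/(2(1+p)).
    return d / (4 * math.sqrt(math.pi)) * (p - 1) / (d + p) \
        * math.gamma(d / 2) * math.gamma((p - 1) / 2) / math.gamma((d + p) / 2)

def discrete_p_laplacian(u, x, p, r, h, d=1):
    """Monotone approximation Delta_p^h u(x) at an interior node x."""
    omega_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # volume of B_1
    N = int(r / h)
    total = 0.0
    for alpha in product(range(-N, N + 1), repeat=d):
        y = [a * h for a in alpha]
        if 0 < sum(c * c for c in y) <= r * r:  # grid points y_alpha in B_r
            total += J(p, u([xi + yi for xi, yi in zip(x, y)]) - u(x))
    return h ** d / (D_const(d, p) * omega_d * r ** (p + d)) * total
```

For \(p=2\) the operator reduces to a consistent (if nonstandard) approximation of the Laplacian, which gives a quick sanity check of the normalization.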

We have the following result.

Lemma 4.4

Assume (H).

  1. (a)

    (Monotonicity) Let \(t\in \mathbb {R}\) and \(\psi \ge \phi \). Then

    $$\begin{aligned} S(r,h,x,t,\psi )\le S(r,h,x,t,\phi ) \end{aligned}$$
  2. (b)

    (Consistency) For all \(x\in \overline{\Omega }\) and \(\phi \in C^2( B_R(x))\) for some \(R>0\) such that \(|\nabla \phi (x)|\ne 0\) we have that

    $$\begin{aligned} {\begin{matrix} \limsup _{r\rightarrow 0, z\rightarrow x, \xi \rightarrow 0} S(r,h, z&{}, \phi (z)+\xi +\eta _{r}, \phi +\xi )\\ &{}\le \left\{ \begin{array}{cccl} -\Delta _p\phi (x)-f(x)&{} \text {if} &{}x\in \Omega \\ \max \{-\Delta _p\phi (x)-f(x), \phi (x)-g(x)\}&{} \text {if} &{}x\in \partial {\Omega }, \end{array}\right. \end{matrix}} \end{aligned}$$

and

$$\begin{aligned} {\begin{matrix} \liminf _{r\rightarrow 0, z\rightarrow x, \xi \rightarrow 0} S(r,h, z&{}, \phi (z)+\xi -\eta _{r}, \phi +\xi )\\ &{}\ge \left\{ \begin{array}{cccl} -\Delta _p\phi (x)-f(x)&{} \text {if} &{}x\in \Omega \\ \min \{-\Delta _p\phi (x)-f(x), \phi (x)-g(x)\}&{} \text {if} &{}x\in \partial {\Omega }, \end{array}\right. \end{matrix}} \end{aligned}$$

where \(0\le \eta _{r}=o(r^p)\) as \(r\rightarrow 0^+\).

Proof

The proof follows as in Lemma 9.7 in [10]. For part (b) it is essential to use the fact that \(J_p\) is a Hölder continuous function, the basic properties of \(\limsup \) and \(\liminf \) and the consistency of \(\Delta _{p}^h\) given in Theorem 1.1. \(\square \)

4.3 Convergence

We are now ready to prove the convergence stated in Theorem 1.2. The idea of the proof originates from [4]. The proof is almost the same as the proof of Theorem 2.5 ii) in [10]. We point out that it was necessary to adapt the proof in order to fit the definition of viscosity solutions in the case \(p\in (1,2)\). Below, we spell out some details.

First we need another definition of viscosity solutions of the boundary value problem and two auxiliary results that are taken from [10].

Definition 4.1

(Generalized viscosity solutions of the boundary value problem) Let \(f\in C(\overline{\Omega })\) and \(g\in C(\partial \Omega )\). We say that a lower (resp. upper) semicontinuous function u in \(\overline{\Omega }\) is a generalized viscosity supersolution (resp. subsolution) of (1.3)–(1.4) in \(\overline{\Omega }\) if whenever \(x_0 \in \overline{\Omega }\) and \(\varphi \in C^2( B_R(x_0))\) for some \(R>0\) are such that \(|\nabla \varphi (x)|\ne 0\) for \(x\in B_R(x_0)\setminus \{x_0\}\),

$$\begin{aligned} \varphi (x_0) = u(x_0) \quad \text {and} \quad \varphi (x) \le u(x) \ {(resp. \varphi (x)\ge u(x))}\quad \text {for all} \quad x \in B_R(x_0)\cap \overline{\Omega }, \end{aligned}$$

then we have

$$\begin{aligned} {\begin{matrix} \lim _{\rho \rightarrow 0}\sup _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x_0)\right) &{}\ge 0 \quad if \quad x_0\in \Omega \\ \text {(resp. } \lim _{\rho \rightarrow 0}\inf _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x_0)\right) &{} \le 0\text {)}\\ \max \left\{ \lim _{\rho \rightarrow 0}\sup _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x_0)\right) , u(x_0)-g(x_0)\right\} &{}\ge 0 \quad if \quad x_0\in \partial \Omega \\ \Big (\text {resp. } \min \left\{ \lim _{\rho \rightarrow 0}\inf _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x_0)\right) , u(x_0)-g(x_0)\right\} &{}\le 0\Big ) \end{matrix}} \end{aligned}$$

Remark 4.5

As in Remark 2.1, we note that when either \(p\ge 2\) or \(|\nabla \varphi (x_0)|\not =0\), the limits in the above definition can simply be replaced by \((-\Delta _p\varphi (x_0)- f(x_0))\).

The following uniqueness result is Theorem 9.5 in [10].

Theorem 4.6

(Strong uniqueness property) Let \(\Omega \) be a bounded \(C^2\) domain. If u and v are a generalized viscosity subsolution and supersolution of (1.3)–(1.4), respectively, then \(u\le v\).

We also need that a generalized viscosity solution is a (usual) viscosity solution in the case of a bounded \(C^2\) domain. The proposition below is Proposition 9.6 in [10].

Proposition 4.7

Let \(\Omega \) be a bounded \(C^2\) domain. Then u is a viscosity subsolution (resp. supersolution) of (1.3)–(1.4) if and only if u is a generalized viscosity subsolution (resp. supersolution) of (1.3)–(1.4).

Proof of Theorem 1.2(b)

Define

$$\begin{aligned} \overline{u}(x)=\limsup _{r\rightarrow 0,y\rightarrow x} u_h(y), \qquad \underline{u}(x)=\liminf _{r\rightarrow 0, y\rightarrow x} u_h(y), \end{aligned}$$

where \(h\rightarrow 0\) as in the hypotheses of Theorem 1.2. By definition \(\underline{u}\le \overline{u}\) in \(\overline{\Omega }\). If we show that \(\overline{u}\) (resp. \(\underline{u}\)) is a generalized viscosity subsolution (resp. supersolution) of (1.3), Theorem 4.6 would imply \(\overline{u}\le \underline{u}\). Thus, \(u:=\overline{u}=\underline{u}\) is a generalized viscosity solution of (1.3) and \(u_h\rightarrow u\) uniformly in \(\overline{\Omega }\). Proposition 4.7 then would imply that u is a viscosity solution of (1.3).

We now sketch how to show that \(\overline{u}\) is a generalized viscosity subsolution. First note that \(\overline{u}\) is an upper semicontinuous function by definition, and it is also bounded since \(u_h\) is uniformly bounded by Proposition 4.2. Take \(x_0\in \overline{\Omega }\) and \(\varphi \in C^2(B_R(x_0))\) such that \(\overline{u}(x_0)=\varphi (x_0)\) and \(\overline{u}(x)<\varphi (x)\) if \(x\not =x_0\). We separate the proof into different cases depending on the value of the gradient of \(\varphi \) at \(x_0\) and the range of p.

Case 1: \(|\nabla \varphi (x_0)|\not =0\) or \(p\ge 2\). Then, for all \(x\in \overline{\Omega }\cap B_R(x_0)\setminus \{x_0\}\), we have that

$$\begin{aligned} \overline{u}(x)-\varphi (x)<0= \overline{u}(x_0)-\varphi (x_0). \end{aligned}$$
(4.1)

We claim that we can find a sequence \((r_n,y_n)\rightarrow (0,x_0)\) as \(n\rightarrow \infty \), with \(h_n\rightarrow 0\) as in the hypotheses of the theorem, such that

$$\begin{aligned} u_{h_n}(x)-\varphi (x) \le u_{h_n}(y_n)-\varphi (y_n)+ e^{-1/r_n} \quad for all \quad x\in \overline{\Omega }\cap B_R(x_0). \end{aligned}$$
(4.2)

This can be argued as in the proof of Theorem 2.5 ii) in [10].

Choose now \(\xi _n:=u_{h_n}(y_n)-\varphi (y_n)\). We have from (4.2) that,

$$\begin{aligned} u_{h_n}(x)\le \varphi (x) + \xi _n + e^{-1/r_n} \quad for all \quad x\in \overline{\Omega }\cap B_R(x_0). \end{aligned}$$

Using Lemma 4.4(a) (monotonicity), we obtain

$$\begin{aligned} {\begin{matrix} 0&{}=S(r_n,h_n, y_n, u_{h_n}(y_n),u_{h_n})\\ &{}=S(r_n,h_n, y_n, \varphi (y_n)+\xi _n,u_{h_n})\\ &{}\ge S(r_n,h_n, y_n, \varphi (y_n)+\xi _n,\varphi + \xi _n + e^{-1/r_n} )\\ &{}=S(r_n,h_n,y_n, \varphi (y_n)+\xi _n- e^{-1/r_n} ,\varphi + \xi _n ). \end{matrix}} \end{aligned}$$

Note that \(e^{-1/r}=o(r^p)\). By Lemma 4.4(b), we have

$$\begin{aligned} {\begin{matrix} 0&{}\ge \liminf _{r_n\rightarrow 0,\, y_n\rightarrow x_0,\, \xi _n\rightarrow 0}S(r_n,h_n,y_n, \varphi (y_n)+\xi _n- e^{-1/r_n} ,\varphi + \xi _n )\\ &{}\ge \liminf _{r\rightarrow 0,\, y\rightarrow x_0,\, \xi \rightarrow 0}S(r,h,y, \varphi (y)+\xi - e^{-1/r} ,\varphi + \xi )\\ &{}\ge \left\{ \begin{array}{cccl} -\Delta _p\varphi (x_0)-f(x_0)&{} \text { if } &{}x_0\in \Omega ,\\ \min \{-\Delta _p\varphi (x_0)-f(x_0), \overline{u}(x_0)-g(x_0)\}&{} \text { if } &{}x_0\in \partial {\Omega }, \end{array}\right. \end{matrix}} \end{aligned}$$

which shows that \(\overline{u}\) is a generalized viscosity subsolution and finishes the proof in this case.

Case 2: Let \(p\in (1,2)\) and \(|\nabla \varphi (x_0)|=0\), and assume that \(\overline{u}\) is constant in some ball \(B_\rho (x_0)\) for \(\rho >0\) small enough. Choose \(\phi (x)=\overline{u}(x_0)+|x-x_0|^{\frac{p}{p-1}+1}\). Then, we can argue as in Case 1 above that

$$\begin{aligned} 0 \ge \liminf _{r\rightarrow 0,\, y\rightarrow x_0,\, \xi \rightarrow 0}S(r,h,y, \phi (y)+\xi - e^{-1/r} ,\phi + \xi ), \end{aligned}$$

which implies

$$\begin{aligned} 0 \ge \liminf _{r\rightarrow 0,\, y\rightarrow x_0}S(r,h,y, \phi (y) ,\phi ), \end{aligned}$$

by the Hölder continuity of \(J_p\). Together with Lemma A.3 this shows that

$$\begin{aligned} -\Delta _p \overline{u}(x_0)=0\le f(x_0). \end{aligned}$$

Hence, \(\overline{u}\) is a classical subsolution at \(x_0\) and thus also a viscosity subsolution.

Case 3: Let \(|\nabla \varphi (x_0)|=0\) and assume that \(\overline{u}\) is not constant in any ball \(B_\rho (x_0)\). Then we may argue as in the proof of Proposition 2.4 in [3] to prove that there is a sequence \(y_k\rightarrow 0\) such that the function \(\varphi _k(x)=\varphi (x+y_k)\) touches \(\overline{u}\) from above at \(x_k=x_0+y_k\) and \(|\nabla \varphi _k(x_k)|\ne 0\) for all k. As in Case 1, this gives

$$\begin{aligned} {\begin{matrix} 0&{}\ge \left\{ \begin{array}{cccl} \displaystyle -\Delta _p\varphi (x_k)-f(x_k)&{} \text { if } &{}x_k\in \Omega ,\\ \displaystyle \min \{-\Delta _p\varphi (x_k)-f(x_k), \overline{u}(x_k)-g(x_k)\}&{} \text { if } &{}x_k\in \partial {\Omega }, \end{array}\right. \end{matrix}} \end{aligned}$$

for all k. Passing \(k\rightarrow \infty \), we obtain

$$\begin{aligned} {\begin{matrix} 0&{}\ge \limsup _{k\rightarrow \infty } \left\{ \begin{array}{cccl} \displaystyle (-\Delta _p\varphi (x_k)-f(x_k))&{} \text { if } &{}x_k\in \Omega ,\\ \displaystyle \min \{-\Delta _p\varphi (x_k)-f(x_k), \overline{u}(x_k)-g(x_k)\}&{} \text { if } &{}x_k\in \partial {\Omega },ad \end{array}\right. \\ &{}\ge \left\{ \begin{array}{cccl} \displaystyle \lim _{\rho \rightarrow 0}\inf _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x)\right) &{} \text { if } &{}x_0\in \Omega ,\\ \displaystyle \min \{ \lim _{\rho \rightarrow 0}\inf _{B_{\rho }(x_0)\setminus \{x_0\}}\left( -\Delta _p\varphi (x)-f(x)\right) , \overline{u}(x_0)-g(x_0)\}&{} \text { if } &{}x_0\in \partial {\Omega }, \end{array}\right. \end{matrix}} \end{aligned}$$

which is the desired inequality. This completes the proof. \(\square \)

5 Solution of the Nonlinear System

When we discretize the Dirichlet problem (1.3)–(1.4), we need to solve the nonlinear system (1.8)–(1.9). In contrast to the situation in [34], our system is not based on the mean value formula for the \(\infty \)-Laplacian, which is not differentiable. Instead, it is based on an implicit and differentiable mean value property. The system is therefore well suited for Newton-Raphson, which is one of the methods we have employed. The Newton-Raphson method is fast (as also mentioned by Oberman in [34]), and the number of iterations required to solve the system seems to be independent of its size, see Table 1. However, we have neither a proof of this nor a proof of the convergence of the method. Convergence would be guaranteed if, for example, the system were equivalent to finding the minimum of a strongly convex function. This is not the case here, since the associated minimization functional is merely convex. Nevertheless, we can give sufficient conditions for the Jacobian matrix to be invertible; we have included this discussion in the one-dimensional setting in “Appendix 1”. Since we cannot prove convergence of the Newton-Raphson method, we have also chosen to include an explicit method, based on the convergence to a steady state of an evolution problem, for which we can guarantee convergence. The convergence of this method is subject to the CFL-type condition (CFL) in Sect. 5.2. See Table 1 for a more detailed comparison of the speed of the two methods. We describe the two methods in detail below.

5.1 Newton-Raphson

The method we have used is the standard one. Let \(F:\mathbb {R}^k \rightarrow \mathbb {R}^k\) for some \(k\ge 1\). In order to solve the system

$$\begin{aligned} F (z) = 0, \end{aligned}$$

we use the iteration

$$\begin{aligned} z_{n+1}=z_n-( J_F(z_n))^{-1} F(z_n), \end{aligned}$$

where \(J_F\) denotes the Jacobian matrix of the function F. In our particular case we have \(k=\# \{\tilde{G}_h \cap \Omega _r\}\).

Let us illustrate the form of F and \(J_F\) in the one-dimensional case. Let \(\gamma =\min \{\beta \in \mathbb {Z}\ : \ x_\beta \in \Omega _r\}\), and \(z_i=U_{\gamma + i-1}\). Consider

$$\begin{aligned} F(z_1,\ldots ,z_k)=\left( \begin{matrix} F_1(z_1,\ldots ,z_k)\\ F_2(z_1,\ldots ,z_k)\\ \vdots \\ F_k(z_1,\ldots ,z_k) \end{matrix} \right) \end{aligned}$$

where \(F_i:\mathbb {R}^k\rightarrow \mathbb {R}\) for \(i=1,\ldots ,k\) are given by

$$\begin{aligned} F_i(z_1,\ldots ,z_k)= {\left\{ \begin{array}{ll} z_i-G_{\gamma +i-1}\quad &{}if \quad x_{\gamma +i-1}\in \partial \Omega _r,\\ \frac{h^d}{D_{d,p}\, \omega _d\, r^{p+d}} \displaystyle \sum _{x_\alpha \in B_r} J_p(z_{i+\alpha }-z_i) - f_{\gamma +i-1} \quad &{}if \quad x_{\gamma +i-1}\in \Omega . \end{array}\right. } \end{aligned}$$

Let \((J_F(z))_{i,j}=(J_F(z_1,\ldots ,z_k))_{i,j}\) denote the component of the Jacobian matrix of F corresponding to the i-th row and j-th column. If i is such that \(x_{\gamma +i-1}\in \partial \Omega _r\) then

$$\begin{aligned} (J_F(z))_{i,j}={\left\{ \begin{array}{ll} 1 \quad if \quad j=i\\ 0 \quad if \quad j\not =i \end{array}\right. } \end{aligned}$$

while if \(x_{\gamma +i-1}\in \Omega \) then

$$\begin{aligned} (J_F(z))_{i,j}\!=\!\frac{(p-1)h^d}{D_{d,p}\, \omega _d\, r^{p+d}}\!\times \!{\left\{ \begin{array}{ll} |z_{j}-z_i|^{p-2} \quad &{}if \quad j\not =i \quad and \quad x_{\gamma +j-1}-x_{\gamma +i-1}\!\in \! B_r,\\ \displaystyle -\sum _{x_\alpha \in B_r}|z_{i+\alpha }-z_i|^{p-2} \quad &{}if \quad j=i,\\ 0 &{}otherwise . \end{array}\right. } \end{aligned}$$
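To make the structure of F and \(J_F\) concrete, the following Python sketch assembles both for the model problem of Sect. 5.3 (\(d=1\), \(p=3\), \(\Omega =(-1,1)\), \(f\equiv 1\), \(G\equiv 0\), starting guess \((1-|x|)_+\)) and runs the plain Newton-Raphson iteration. Our experiments used MATLAB; all names are illustrative, and the dense Gaussian solver merely stands in for a library linear solve:

```python
import math

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= m * A[c][j]
            b[r] -= m * b[c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(A[r][j] * x[j] for j in range(r + 1, n))) / A[r][r]
    return x

def newton_p_laplace_1d(p=3.0, r=0.2, tol=1e-10, maxit=50):
    """Newton-Raphson for -Delta_p^h U = 1 on (-1,1), U = 0 on the strip."""
    h = r ** 1.6 / 4                        # h = o(r^{3/2}), as in Sect. 5.3
    D = 1.0 / (2 * (1 + p))                 # D_{1,p}
    K = h / (D * 2 * r ** (p + 1))          # omega_1 = 2
    M = int(round((1 + r) / h))
    xs = [i * h for i in range(-M, M + 1)]
    inner = [abs(x) < 1 for x in xs]
    n, W = len(xs), int(r / h)              # W: stencil half-width
    Jp = lambda t: abs(t) ** (p - 2) * t    # J_p (p >= 2 assumed)
    z = [max(0.0, 1 - abs(x)) for x in xs]  # starting guess (1-|x|)_+
    res = float("inf")
    for _ in range(maxit):
        F = [0.0] * n
        Jac = [[0.0] * n for _ in range(n)]
        for i in range(n):
            if not inner[i]:
                F[i] = z[i] - 0.0           # boundary rows: U - G with G = 0
                Jac[i][i] = 1.0
                continue
            F[i] = -K * sum(Jp(z[i + a] - z[i])
                            for a in range(-W, W + 1) if a != 0) - 1.0
            for a in range(-W, W + 1):
                if a == 0:
                    continue
                d = (p - 1) * abs(z[i + a] - z[i]) ** (p - 2)
                Jac[i][i + a] -= K * d      # off-diagonal entry
                Jac[i][i] += K * d          # diagonal entry
        res = max(abs(v) for v in F)
        if res < tol:
            break
        dz = gauss_solve(Jac, [-v for v in F])
        z = [zi + di for zi, di in zip(z, dz)]
    return xs, z, res
```

On this example the iteration reaches a residual below the tolerance in a handful of steps, in line with Table 1, and the computed solution is close to the explicit solution of (6.1).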

5.2 Explicit Method

We consider \(\{U^m\}_{m\in \mathbb {N}}\) to be the sequence of solutions \(U^m:\Omega ^h_r\rightarrow \mathbb {R}\) of

$$\begin{aligned} U^{m+1}_\beta = U^m_\beta +\tau _m \Delta _{p}^hU^m_\beta +\tau _m f_\beta , \quad x_\beta \in \Omega ^h \end{aligned}$$
(5.1)

where \(U^0\) is some initial data, \(U^m = G\) on \(\partial \Omega _{r}^h\) and \(\{\tau _m\}_{m\in \mathbb {N}}>0\) are certain discretization parameters. The idea here is that, as \(m\rightarrow \infty \), \(U^m\) converges to the solution U of (1.8)–(1.9). This convergence holds given a nonlinear counterpart of the CFL-stability condition. Actually, we also need to slightly modify (5.1) to ensure convergence; in the words of Oberman [33], we need to ensure that our operator is proper.

More precisely, given \(\varepsilon >0\), let \(\{(U_\varepsilon )^m\}_{m=1}^\infty \) be the solution of

$$\begin{aligned} (U_\varepsilon )^{m+1}_\beta = (U_\varepsilon )^m_\beta +\tau _m \Delta _{p}^h(U_\varepsilon )^m_\beta - \tau _m\varepsilon (U_\varepsilon )^m_\beta +\tau _m f_\beta , \quad x_\beta \in \Omega ^h \end{aligned}$$
(5.2)

subject to the same initial and boundary conditions as in (5.1). Let \(U_\varepsilon \) be the solution of

$$\begin{aligned}&-\Delta _{p}^h(U_\varepsilon )_\beta + \varepsilon (U_\varepsilon )_\beta = f_\beta ,&\quad x_\beta \in \Omega ^h, \end{aligned}$$
(5.3)
$$\begin{aligned}&\ (U_\varepsilon )_\beta =G_\beta ,&\quad x_\beta \in \partial \Omega _r^h. \end{aligned}$$
(5.4)

It is standard to check, using the techniques of Sect. 4.1, that \(U_\varepsilon \) exists, is unique, and is uniformly bounded in r, h and \(\varepsilon \). We have the following result.

Lemma 5.1

Let \(p\ge 2\) and \(\{U_\varepsilon ^m\}_{m=1}^\infty \) be the solution of (5.2) with any bounded initial condition \(U_\varepsilon ^0\). Let also U be the solution of (1.8)–(1.9). Assume that

$$\begin{aligned}&0<\tau _m\le \min \left\{ 1, \frac{r^p}{(p-1)2^{p-2}L_m^{p-2}}\frac{D_{d,p}}{(1+\sqrt{d})^d}(1-\varepsilon )\right\} \\&\quad with \quad L_m = \max ( \Vert U^m_\varepsilon \Vert _{\ell ^\infty }, \Vert U_\varepsilon \Vert _{\ell ^\infty } ). \end{aligned}$$
(CFL)

Then

$$\begin{aligned} \max _{x_\alpha \in \Omega }\left| (U_\varepsilon )^m_\alpha -U_\alpha \right| \le 2L_0 (1-\tau \varepsilon )^m + o_\varepsilon (1), \end{aligned}$$

where \(\tau =\inf _{m\in \mathbb {N}} \{\tau _m\}\).

Proof

Since \(U_\varepsilon \) is uniformly bounded on a finite discrete set, there exists a subsequence \(U_{\varepsilon _j}\) converging to some V pointwise. It is also standard to show that V is indeed a solution of (1.8)–(1.9). By uniqueness, \(V=U\) and the full sequence \(U_{\varepsilon }\) converges, i.e.,

$$\begin{aligned} \Vert U_{\varepsilon }-U\Vert _\infty =o_\varepsilon (1). \end{aligned}$$

On the other hand, by subtracting the equations for \(U_\varepsilon \) and \((U_\varepsilon )^m\) we get

$$\begin{aligned} {\begin{matrix} (U_\varepsilon )^{m+1}_\beta -(U_\varepsilon )_\beta =&{} ((U_\varepsilon )^{m}_\beta -(U_\varepsilon )_\beta )(1-\tau _m\varepsilon ) + \tau _m K\\ &{}\times \sum _{y_\alpha \in B_r}\left( J_p((U_\varepsilon )^m_{\beta +\alpha }-(U_\varepsilon )^m_{\beta })-J_p((U_\varepsilon )_{\beta +\alpha }-(U_\varepsilon )_{\beta }) \right) \\ =&{} ((U_\varepsilon )^{m}_\beta -(U_\varepsilon )_\beta )(1-\tau _m\varepsilon ) + \tau _m K\\ &{}\times \sum _{y_\alpha \in B_r} J'_p(\xi _{\alpha ,\beta })\left( ((U_\varepsilon )^m_{\beta +\alpha }-(U_\varepsilon )^m_{\beta }) -((U_\varepsilon )_{\beta +\alpha }-(U_\varepsilon )_{\beta }) \right) \\ =&{} ((U_\varepsilon )^{m}_\beta -(U_\varepsilon )_\beta )\left( 1-\tau _m\varepsilon - \tau _m K\sum _{y_\alpha \in B_r}J'_p(\xi _{\alpha ,\beta })\right) \\ &{}+ \tau _m K\sum _{y_\alpha \in B_r} J'_p(\xi _{\alpha ,\beta })\left( (U_\varepsilon )^m_{\beta +\alpha }-(U_\varepsilon )_{\beta +\alpha } \right) \end{matrix}} \end{aligned}$$

where \(K:=\frac{h^d}{|B_r|D_{d,p}r^p}\) and \(\xi _{\alpha ,\beta }\) lies between \((U_\varepsilon )^m_{\beta +\alpha }-(U_\varepsilon )^m_\beta \) and \((U_\varepsilon )_{\beta +\alpha }-(U_\varepsilon )_\beta \), so that \(|\xi _{\alpha ,\beta }|\le 2L_m\) and \(|J_p'(\xi _{\alpha ,\beta })|=(p-1)|\xi _{\alpha ,\beta }|^{p-2}\le (p-1)2^{p-2}L^{p-2}_m\) since \(p\ge 2\). Therefore, when r is small enough

$$\begin{aligned} {\begin{matrix} \tau _m K\sum _{y_\alpha \in B_r}J'_p(\xi _{\alpha ,\beta })\le (1-\varepsilon )\frac{1}{|B_r|}\frac{1}{(1+\sqrt{d})^d}\sum _{y_\alpha \in B_r} h^d\le (1-\varepsilon ) \frac{| B_{r+\sqrt{d}h}|}{|B_{r+\sqrt{d}r}|}\le (1-\varepsilon ), \end{matrix}} \end{aligned}$$

where we used (CFL) and that \(h\le r\) (since \(h=o(r)\) with r small enough). In this way,

$$\begin{aligned} 1-\tau _m\varepsilon - \tau _m K\sum _{y_\alpha \in B_r}J'_p(\xi _{\alpha ,\beta })\ge \varepsilon (1-\tau _m)\ge 0. \end{aligned}$$

Clearly \(J_p'\ge 0\), and then

$$\begin{aligned} {\begin{matrix} \Vert (U_\varepsilon )^{m+1}-U_\varepsilon \Vert _{\ell ^\infty } &{}\le \Vert (U_\varepsilon )^{m}-U_\varepsilon \Vert _{\ell ^\infty }\left( 1-\tau _m\varepsilon - \tau _m K\sum _{y_\alpha \in B_r}J'_p(\xi _{\alpha ,\beta })\right) \\ &{}\qquad + \tau _m K\sum _{y_\alpha \in B_r} J'_p(\xi _{\alpha ,\beta }) \Vert (U_\varepsilon )^{m}-U_\varepsilon \Vert _{\ell ^\infty }\\ &{}\le \Vert (U_\varepsilon )^{m}-U_\varepsilon \Vert _{\ell ^\infty }\left( 1-\tau \varepsilon \right) \\ &{}\le \Vert (U_\varepsilon )^{0}-U_\varepsilon \Vert _{\ell ^\infty }\left( 1-\tau \varepsilon \right) ^{m+1}\\ &{}\le 2L_0\left( 1-\tau \varepsilon \right) ^{m+1}. \end{matrix}} \end{aligned}$$

The result follows using the triangle inequality:

$$\begin{aligned} \left\| (U_\varepsilon )^m-U\right\| _{\ell ^\infty }\le \Vert (U_\varepsilon )^m-U_\varepsilon \Vert _{\ell ^\infty } +\Vert {U}_{\varepsilon }-U\Vert _{\ell ^\infty }\le 2L_0 (1-\tau \varepsilon )^m + o_\varepsilon (1). \end{aligned}$$

\(\square \)

Remark 5.2

The fact that \(U_\varepsilon \) is uniformly bounded, together with the bound \(\Vert U^{m}_\varepsilon -U_\varepsilon \Vert _\infty \le 2L_0\), ensures that \(L_m\) is uniformly bounded from above, so that \(\{\tau _m\}_{m\in \mathbb {N}}\) can be taken uniformly bounded from below.

In the case \(1<p<2\), we use a regularization of the singularity in \(\Delta _{p}^h\) in order to make it a Lipschitz map. This can be done, for example, by modifying the nonlinearity with an extra approximation parameter \(\delta >0\) and replacing \(J_p\) by \(J_p^\delta \) given by

$$\begin{aligned} J_p^\delta (t)=\left\{ {\begin{matrix} &{}J_p(t+\delta ) -J_p(\delta ) \quad \ \ \ if \quad t\ge 0,\\ &{}J_p(t-\delta ) -J_p(-\delta ) \quad if \quad t<0. \end{matrix}}\right. \end{aligned}$$

The drawback of this type of regularization is that the condition (CFL) becomes more and more restrictive as \(\delta \rightarrow 0\). This regularization is typically used when dealing with explicit schemes for fast diffusion equations (see for example [8, 9]).
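The explicit iteration (5.2) and the regularized nonlinearity \(J_p^\delta \) can be sketched in Python as follows (MATLAB was used for the actual experiments; the grid construction and the crude a priori bound \(L=1\) plugged into (CFL) are illustrative assumptions):

```python
import math

def jp(t, p, delta=0.0):
    """J_p, shifted by delta as in the regularization for 1 < p < 2."""
    J = lambda s: abs(s) ** (p - 2) * s if s != 0 else 0.0
    if delta == 0.0:
        return J(t)
    return J(t + delta) - J(delta) if t >= 0 else J(t - delta) - J(-delta)

def explicit_solver(p=3.0, r=0.4, h=0.1, eps=0.05, steps=3000):
    """Iterate (5.2) for -Delta_p^h U + eps U = 1 on (-1,1), U = 0 outside."""
    D = 1.0 / (2 * (1 + p))                  # D_{1,p}
    K = h / (D * 2 * r ** (p + 1))           # omega_1 = 2
    M = int(round((1 + r) / h))
    xs = [i * h for i in range(-M, M + 1)]
    inner = [abs(x) < 1 for x in xs]
    W = int(r / h)
    L = 1.0  # crude a priori bound for max(||U^m||, ||U_eps||) in (CFL)
    tau = min(1.0, r ** p / ((p - 1) * 2 ** (p - 2) * L ** (p - 2))
              * D / (1 + math.sqrt(1.0)) * (1 - eps))  # (1+sqrt(d))^d, d = 1
    U = [0.0] * len(xs)                      # initial data U^0 = 0
    diffs = []                               # sup-norm of the increments
    for _ in range(steps):
        V = U[:]
        for i in range(len(xs)):
            if inner[i]:
                lap = K * sum(jp(U[i + a] - U[i], p)
                              for a in range(-W, W + 1) if a != 0)
                V[i] = U[i] + tau * lap - tau * eps * U[i] + tau * 1.0
        diffs.append(max(abs(v - u) for v, u in zip(V, U)))
        U = V
    return xs, U, diffs
```

As predicted by Lemma 5.1, the iterates stay bounded and the increments \(\Vert U^{m+1}-U^m\Vert _{\ell ^\infty }\) contract, but the small admissible \(\tau \) makes the method much slower than Newton-Raphson.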

5.3 Comparison Between the Solvers

We now present a comparison of the above methods regarding the number of iterations and computational time.

We have solved the system (1.8)–(1.9) for \(p=3\), in dimension \(d=1\), with \(\Omega =(-1,1)\), \(f\equiv 1\) and \(g\equiv 0\). As starting value for the iteration we have chosen \(u_0(x)=(1-|x|)_+\). Finally, for the explicit solver we have chosen \(\tau \) to satisfy (CFL). We have stopped the solvers when the difference between two consecutive iterations is less than \(10^{-16}\).

In Table 1 we present the results for different values of r and its corresponding h satisfying (H) (in this case \(h=\frac{r^{3/2+0.1}}{4}\)).

Table 1 A comparison of the efficiency of our methods used to solve the nonlinear system for \(p=3\)

As the table shows, the Newton-Raphson solver is fast in the sense that the number of iterations does not seem to depend on the size of the system. This is a big advantage compared to the explicit solver, for which smaller values of r enforce smaller choices of \(\tau \), which increases the number of iterations required substantially.

6 Numerical Experiments

To perform numerical experiments we need two ingredients.

  1. (1)

    The explicit value of the constant \(D_{d,p}\).

  2. (2)

    Explicit solutions of (1.3)–(1.4) to test with.

It is standard to check that, in dimension \(d=1\), we have

$$\begin{aligned} D_{1,p}=\frac{1}{2(1+p)}. \end{aligned}$$

In dimension \(d=2\) we have the following formulas for integer values of p, which can be obtained through integration by parts. Let \(p\in [2,\infty )\) and \(d=2\).

  1. (a)

    (Even) If \(p=2n\) for some \(n\in \mathbb {N}\) then

    $$\begin{aligned} D_{2,p}= \frac{1}{2+p}\left( \prod _{i=1}^n \frac{2i-1}{2i}\right) . \end{aligned}$$
  2. (b)

    (Odd) If \(p=2n+1\) for some \(n\in \mathbb {N}\) then

    $$\begin{aligned} D_{2,p}= \frac{2}{\pi (2+p)}\left( \prod _{i=1}^n \frac{2i}{2i+1}\right) . \end{aligned}$$

For general dimension and p, one can find the following expression

$$\begin{aligned} D_{d,p} = \frac{d}{4\sqrt{\pi }}\cdot \frac{p-1}{d+p}\cdot \frac{\Gamma (\frac{d}{2})\Gamma (\frac{p-1}{2})}{\Gamma (\frac{d+p}{2})}. \end{aligned}$$
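The Gamma-function expression is convenient to implement, and its normalization can be cross-checked against the closed forms above; below is a short Python verification (the circle quadrature for \(d=2\) is our own sanity check, not part of the scheme):

```python
import math

def D_const(d, p):
    """D_{d,p} = d/(2(d+p)) times the average of |y_1|^p over the sphere."""
    return d / (4 * math.sqrt(math.pi)) * (p - 1) / (d + p) \
        * math.gamma(d / 2) * math.gamma((p - 1) / 2) / math.gamma((d + p) / 2)

def D_2d_quadrature(p, n=200000):
    """For d = 2: average of |cos(theta)|^p over the circle, divided by 2+p."""
    avg = sum(abs(math.cos(2 * math.pi * k / n)) ** p for k in range(n)) / n
    return avg / (2 + p)
```

For example, \(D_{1,3}=1/8\), \(D_{2,4}=1/16\) and \(D_{2,3}=4/(15\pi )\) are all reproduced, and the \(d=2\) quadrature agrees with the closed form.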

As mentioned in the introduction, homogeneous problems can successfully be treated by means of the so-called normalized p-Laplacian, for which numerical schemes are well understood (see [32, 34]). Therefore, we will focus on nonhomogeneous problems (\(f\not =0\)). We compare our numerically obtained solution with the explicit solution

$$\begin{aligned} u(x)= (1-|x|^{\frac{p}{p-1}}) \frac{p-1}{p}\frac{1}{d^{\frac{1}{p-1}}} . \end{aligned}$$

Note that u is a solution of

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta _p u(x) =1, &{} x\in B_1,\\ u(x)=0, &{} x\in \partial B_1. \end{array}\right. } \end{aligned}$$
(6.1)
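As a quick sanity check of this explicit solution, note that in dimension \(d=1\) one has \(\Delta _p u=(|u'|^{p-2}u')'\), which can be evaluated with nested central differences; the Python snippet below (step size arbitrary) confirms \(-\Delta _p u=1\) away from the origin:

```python
def u_exact(x, p, d=1):
    """Explicit solution of (6.1): (1 - |x|^{p/(p-1)}) (p-1)/p d^{-1/(p-1)}."""
    return (1 - abs(x) ** (p / (p - 1))) * (p - 1) / p * d ** (-1 / (p - 1))

def minus_p_laplacian_1d(u, x, p, dx=1e-5):
    """-(|u'|^{p-2} u')'(x) via nested central differences."""
    du = lambda s: (u(s + dx) - u(s - dx)) / (2 * dx)
    flux = lambda s: abs(du(s)) ** (p - 2) * du(s)
    return -(flux(x + dx) - flux(x - dx)) / (2 * dx)
```

Indeed, for this u the flux \(|u'|^{p-2}u'\) equals \(-x/d\) exactly, so the numerical check is essentially exact.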

In dimension two we will also use that for \(p=4\), the smooth function \(u(x,y)=xy\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta _p u(x,y) =-4xy, &{} (x,y)\in B_1,\\ u(x,y)=xy, &{} (x,y)\in \partial B_1, \end{array}\right. } \end{aligned}$$
(6.2)

and that the less regular function \(u(x,y)=|x|^{\frac{4}{3}}y\) solves

$$\begin{aligned} {\left\{ \begin{array}{ll} -\Delta _p u(x,y) =-\frac{64}{27}y^3-\frac{68}{9}|x|^2y, &{} (x,y)\in B_1,\\ u(x,y)=|x|^{\frac{4}{3}}y, &{} (x,y)\in \partial B_1. \end{array}\right. } \end{aligned}$$
(6.3)

6.1 Error Analysis in Dimension \(d=1\)

Here we present the results of a numerical experiment using our numerical scheme to solve problem (6.1) in dimension \(d=1\) using MATLAB.

To solve the nonlinear system (1.8)–(1.9) we use the explicit solver given by (5.2). The parameter \(\tau _m\) has been chosen to satisfy (CFL), while \(\varepsilon \) is chosen small enough not to interfere with the error in h and r. We have also taken \(G(x)=0\) for all \(x\in \partial \Omega _r\) as the extended boundary condition.

We have stopped the explicit solver when it has reached a numerical steady state, i.e.,

$$\begin{aligned} \max _{x_\alpha \in \Omega }|(U_\varepsilon )^{m+1}_\alpha -(U_\varepsilon )^{m}_\alpha |<10^{-16}. \end{aligned}$$

In this case we have chosen \(h=r^2/4\), which clearly satisfies the condition \(h=o(r^{3/2})\). The results obtained are presented in Fig. 2 and Table 2, which contain the simulations for \(p=3\), \(p=4\) and \(p=10\).

Fig. 2
figure 2

\(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) and an approximated convergence rate in r and h (in rose) in dimension \(d=1\) for problem (6.1)

It can be seen that the error seems to behave linearly in r. This is shown in more detail in Table 2, where we present the details of the results of Fig. 2.

Table 2 \(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) and observed convergence rates \(\gamma _r\) in r and \(\gamma _h\) in h in dimension \(d=1\) for problem (6.1)

The observed convergence rate \(\gamma \) has been computed using

$$\begin{aligned} error _j= k (r_j)^\gamma , \quad j=0,1,2,3,4, \end{aligned}$$

where \(r_j=0.2/2^j\). In this way,

$$\begin{aligned} \gamma = \log _2\left( \frac{error _{j-1}}{error _{j}}\right) . \end{aligned}$$
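The rate computation itself is a one-line helper; the same function with ratio 8 reproduces the overall rates \(\overline{\gamma }_r\) used in the two-dimensional experiments below (Python, illustrative):

```python
import math

def observed_rate(err_coarse, err_fine, ratio=2.0):
    """gamma such that err ~ k r^gamma, from two runs with r reduced by `ratio`."""
    return math.log(err_coarse / err_fine) / math.log(ratio)
```

For instance, halving r (ratio 2) while the error drops by a factor 4 gives an observed rate of exactly 2.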

6.2 First Error Analysis in Dimension \(d=2\)

We now perform numerical experiments in dimension \(d=2\) for problem (6.1). We have almost the same setup as in Sect. 6.1, except that we now take \(h=r^{\frac{3}{2}+0.1}\), which clearly satisfies the condition \(h=o(r^\frac{3}{2})\).

Fig. 3
figure 3

\(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) and an approximated convergence rate in r and h (in rose) in dimension \(d=2\) for problem (6.1)

Again, as in the computation in dimension \(d=1\), the error observed in Fig. 3 seems to decay at least linearly with r, despite the fact that we have taken the parameter h to decay slower than before. It seems that, as long as \(h=o(r^{3/2})\), the choice of h does not interfere with the order of convergence in r.

In Table 3 we observe some instabilities in the order of convergence in the simulations for large choices of r and h. However, if we compute the order of convergence between the simulations with \(r=2.00\)e-1 and \(r=2.50\)e-2, the observed rate is

$$\begin{aligned} \overline{\gamma }_r=\log _8\left( \frac{7.73e- 2}{7.21e- 3}\right)&= 1.14>1 \quad if \quad p=3, \\ \overline{\gamma }_r=\log _8\left( \frac{8.25e- 2}{8.61e- 3}\right)&= 1.09>1 \quad if \quad p=4,\\ \overline{\gamma }_r=\log _8\left( \frac{1.22e- 1}{1.26e- 2}\right)&= 1.09>1 \quad if \quad p=10, \end{aligned}$$

which is actually slightly better than linear in all the cases.

Table 3 \(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) and observed convergence rates \(\gamma _r\) in r and \(\gamma _h\) in h in dimension \(d=2\) for problem (6.1)

6.3 Second Error Analysis in Dimension \(d=2\)

Here, we perform numerical simulations for problems (6.2) and (6.3). We intend to illustrate that the regularity of the solution should have an impact on the order of convergence of the scheme. Note that, while the solution of (6.2) is infinitely smooth, the solution of (6.3) is no more than \(C^{1,1/3}\). This time, we have chosen \(h=r^{\frac{3}{2}+0.01}\), which satisfies the condition \(h=o(r^\frac{3}{2})\).

Fig. 4
figure 4

\(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) for problem (6.2) (red) and (6.3) (light blue) and respective approximated convergence rates (pink and dark blue) in dimension \(d=2\)

We comment now on Fig. 4. Note that for problem (6.2), where the solution is smooth, the observed error again decays linearly in r. On the other hand, for problem (6.3), the error clearly decays in a sublinear way.

Table 4 \(\ell ^\infty \)-absolute error \(\Vert (U_\varepsilon )_h-u\Vert _{\ell ^\infty }\) and observed convergence rates \(\gamma _r\) in r and \(\gamma _h\) in h in dimension \(d=2\) for problems (6.2) and (6.3)

In Table 4, we observe that for the smooth solution of problem (6.2), the overall convergence rates are

$$\begin{aligned} \overline{\gamma }_r=\log _8\left( \frac{2.45e-2}{2.32e-3}\right) =1.13, \quad \overline{\gamma }_h=\frac{\overline{\gamma }_r}{\frac{3}{2}+0.01}=0.75, \end{aligned}$$

while for the non-smooth one of (6.3),

$$\begin{aligned} \overline{\gamma }_r=\log _8\left( \frac{8.71e-2}{ 2.25e-2}\right) =0.65, \quad \overline{\gamma }_h=\frac{\overline{\gamma }_r}{\frac{3}{2}+0.01}=0.43. \end{aligned}$$

Everything seems to indicate that the regularity of the solution has a significant effect on the order of convergence of the numerical scheme.

6.4 Improvement of the Error with an Adapted Boundary Condition

During the simulations presented in Sects. 6.1 and 6.2, we observed that the extension \(G\equiv 0\) produced a certain instability in the solution close to the boundary. Due to this fact, the maximal error is attained near the boundary.

In order to avoid this phenomenon, we have adapted the boundary condition to make the transition between the interior and the boundary smoother. We have taken

$$\begin{aligned} G(x)=\frac{p-1}{p} (1-|x|^{\frac{p}{p-1}}) \quad for \quad x\in \partial \Omega _r. \end{aligned}$$
(6.4)

In the results presented in Fig. 5, we clearly see that the maximum error of the solution with the adapted condition comes from the middle point, which is the point where the solution is least regular, while without adaptation, the error comes from the instabilities created near the boundary.

Thus, the correction seems to give a smoother transition between the interior and the extended condition. It also seems to improve the error estimate (but not the order of convergence).

The correction suggested in this section clearly depends on the fact that we know the explicit form of the solution. This is of course not the case in general. The behaviour near the boundary strongly depends on the boundary condition g, and the problem is how to choose an extended boundary condition G such that the transition of the solution from \(\Omega \) to \(\partial \Omega _r\) is as smooth as possible. Unless some additional information regarding the behaviour of the solution near the boundary is available, we do not know how to do this in general. This seems to be a problem that must be handled in a problem-dependent way.

Fig. 5
figure 5

Dimension \(d=1\) and \(p=10\). Top figure: Error analysis for the rough boundary extension (\(G\equiv 0\)) and the adapted boundary extension (G given by (6.4)). Bottom figures: Representation of the numerical and real solutions for the two boundary extensions

Fig. 6
figure 6

Dimension \(d=2\). Numerical solution of the fully nonhomogeneous problem (1.3)–(1.4) with \(g(x,y)=\frac{1}{2}+xy\) and \(f(x,y)\equiv \) constant

6.5 Solution of a Fully Nonhomogeneous Problem

Finally, we present some numerical simulations of a problem with nonhomogeneous right hand side and nonhomogeneous boundary conditions. We present the numerical solutions corresponding to problem (1.3)–(1.4) in dimension \(d=2\), posed in \(\Omega =B_1(0)\) with \(f\equiv \) constant in \(\Omega \) and \(g(x,y)=\frac{1}{2}+xy\) on \(\partial \Omega \).

The boundary condition has been extended to \(\partial \Omega _r\) by \(G(x,y)=\frac{1}{2}+xy\) and we have chosen the numerical parameters \(r=0.2\) and \(h=r^2=0.04\).

In Fig. 6, we present a level set representation of the solutions for \(p=1.1\), \(p=1.5\), \(p=2\), \(p=4\) and \(p=20\) (using the regularization described at the end of Sect. 5 when \(p<2\)). Here, \(h=r^{2}\) has been used, also when \(p<2\) (see Remark 1.3).