1 Introduction

In this article we study diffusion processes governed by quasi-linear operators of \(p-\)Laplacian type with (possibly) a phase transition regime for solutions, i.e. the solution satisfies a PDE in each of the a priori unknown sets (of positivity and negativity, respectively)

$$\begin{aligned} -\Delta _p u + f(u_{-}, u_{+}) = 0 \quad \text{ in } \quad \Omega , \end{aligned}$$

for a suitable measurable function \(f:[0, \infty )\times [0, \infty ) \rightarrow {\mathbb {R}}\) with a discontinuity at the origin. These models have become mathematically relevant due to their connections with phenomena in applied sciences, as well as with several free boundary problems, such as obstacle-type problems, minimization problems with free boundaries and dead core problems, just to mention a few. The problem we are particularly interested in is given by

$$\begin{aligned} \left\{ \begin{array}{lllll} -\Delta _p u(x)+\lambda _0(x)\chi _{\{u>0\}}(x) &{} = &{} 0 &{} \text{ in } &{} \Omega \\ u(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega , \end{array} \right. \end{aligned}$$
(1.1)

where \(\Delta _p u = \mathrm {div}(|\nabla u|^{p-2}\nabla u)\) stands for the p-Laplace operator, \(\lambda _0>0\) is a function bounded away from zero and infinity, F is a continuous boundary datum and \(\Omega \subset {\mathbb {R}}^N\) is a bounded and regular domain. In this context, \(\partial \{u>0\} \cap \Omega \) is the free boundary of the problem.

It is worth mentioning that the unique weak solution (cf. [8, Theorem 1.1]) to (1.1) arises by minimizing the functional

$$\begin{aligned} J_p[v] = \int _{\Omega } \left( \frac{1}{p}|\nabla v (x)|^p + \lambda _0(x)v\chi _{\{v>0\}}(x)\right) \mathrm{d}x \end{aligned}$$
(1.2)

over the admissible set \({\mathbb {K}} = \left\{ v \in W^{1, p}(\Omega )\,:\, v=F \,\, \text{ on } \,\,\partial \Omega \right\} \). Variational problems like (1.2) are connected with several applications and have been widely studied in recent decades, see [1, 8, 13, 15, 19].
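
To build some intuition for (1.2), the functional can be minimized numerically. The following one-dimensional sketch is our own illustration (not part of the cited works): it discretizes \(J_p\) on a uniform grid of \((0,1)\) with constant \(\lambda _0\equiv 1\), \(p=4\) and boundary values 0 and 1, and runs plain subgradient descent; the function name and all parameters are illustrative.

```python
import math

def minimize_Jp(p=4.0, lam0=1.0, n=41, F0=0.0, F1=1.0, lr=1e-3, iters=4000):
    """Subgradient descent on a finite-difference discretization of
    J_p[v] = int_0^1 (1/p)|v'|^p + lam0 * max(v, 0) dx,
    with the boundary values v(0) = F0 and v(1) = F1 held fixed."""
    h = 1.0 / (n - 1)
    v = [F0 + (F1 - F0) * i * h for i in range(n)]  # linear initial guess

    def energy(w):
        e = sum(h * abs((w[i + 1] - w[i]) / h) ** p / p for i in range(n - 1))
        e += sum(h * lam0 * max(w[i], 0.0) for i in range(1, n - 1))
        return e

    for _ in range(iters):
        g = [0.0] * n
        for i in range(1, n - 1):
            sl = (v[i] - v[i - 1]) / h  # left slope
            sr = (v[i + 1] - v[i]) / h  # right slope
            # derivative of the |v'|^p terms with respect to v_i
            g[i] = math.copysign(abs(sl) ** (p - 1), sl) \
                 - math.copysign(abs(sr) ** (p - 1), sr)
            g[i] += h * lam0 * (1.0 if v[i] > 0 else 0.0)  # subgradient of max(v, 0)
        for i in range(1, n - 1):
            v[i] -= lr * g[i]
    return v, energy

v, energy = minimize_Jp()
```

Since \(v\mapsto \max (v,0)\) is only piecewise smooth, the positivity term contributes a subgradient; with a sufficiently small step size the discrete energy is expected to decrease below that of the linear initial guess.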

In our first result, we quantify how weak solutions detach from their free boundaries within their positivity set.

Theorem 1.1

(Strong Non-degeneracy) Let u be a bounded weak solution to (1.1), \(\Omega ^{\prime } \Subset \Omega \) and let \(x_0 \in \overline{\{u >0\}} \cap \Omega ^{\prime }\). Then, there exists a universal constant \(C_0 = C_0 (N, p, \inf _{\Omega } \lambda _0(x))\) such that for all \(0<r<\min \{1, \mathrm {dist}(\Omega ^{\prime }, \partial \Omega )\}\) there holds

$$\begin{aligned} \displaystyle \sup _{\partial B_r(x_0)} \,u(x) \ge C_0 r^{\frac{p}{p-1}}. \end{aligned}$$
(1.3)

We also deal with the analysis of the asymptotic behaviour as p diverges. Recently, motivated by game theory (“Tug-of-war” games), the following problem was studied in [12]:

$$\begin{aligned} \left\{ \begin{array}{lllll} \displaystyle \Delta _p \,u_p(x)&{} = &{} f(x) &{} \text{ in }&{} \Omega \\ u_p(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega \end{array} \right. \end{aligned}$$

with a forcing term \(f\ge 0\) and a continuous boundary data. In this context, \((u_p)_{p\ge 2}\) converges, up to a subsequence, to a limiting function \(u_{\infty }\), which fulfils the following problem in the viscosity sense

$$\begin{aligned} \left\{ \begin{array}{lllll} \min \left\{ \Delta _{\infty } \, u_{\infty }(x), |\nabla u_{\infty }(x)|- \chi _{\{f>0\}} (x) \right\} &{} = &{} 0 &{} \text{ in }&{} \Omega \\ u_{\infty }(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega , \end{array} \right. \end{aligned}$$
(1.4)

where \(\Delta _\infty u(x) {:}{=}\,\nabla u(x)^TD^2u(x)\nabla u(x)\) is the \(\infty -\)Laplace operator (cf. [2] for a survey). Such limit problems are known as problems with gradient constraint. Gradient constraint problems like

$$\begin{aligned} \min \{\Delta _\infty u(x), |\nabla u(x)|-h(x) \}=0, \end{aligned}$$
(1.5)

where \(h\ge 0\), appeared in [10]. By considering solutions to

$$\begin{aligned} F_{\varepsilon }[u] {:}{=}\,\min \{\Delta _\infty u, |\nabla u|-\varepsilon \}=0 \quad \text{(resp. its counterpart)} \quad F^{\varepsilon }[u] {:}{=}\,\max \{\Delta _\infty u, \varepsilon -|\nabla u|\}=0 \end{aligned}$$

Jensen provides a mechanism to obtain solutions of the infinity Laplace equation \(-\Delta _\infty u=0\) via an approximation procedure. In this context, he proved uniqueness for the infinity Laplace equation by first showing that it holds for the approximating equations and then sending \(\varepsilon \rightarrow 0\). A similar strategy was used for the anisotropic counterpart in [16], and a variant of (1.5) appears in the so-called \(\infty \)-eigenvalue problem, see, for example, [11].

We highlight that, in general, the uniqueness of solutions to (1.5) is straightforward if h is a continuous function that is strictly positive everywhere. Moreover, uniqueness is known to hold if \(h\equiv 0\), see [10]. Nevertheless, the case \(h\ge 0\) presents significant obstacles. This situation resembles the one for the infinity Poisson equation \(-\Delta _\infty u=h\), where uniqueness is known to hold if \(h>0\) or \(h\equiv 0\), while the case \(h\ge 0\) is an open problem. In this direction, [12, Theorem 4.1] proved uniqueness for (1.5) in the special case \(h=\chi _D\) under the mild topological condition \({{\overline{D}}}=\overline{D^{\circ }}\) on the set \(D \subset {\mathbb {R}}^N\). Furthermore, the authors exhibit counterexamples where uniqueness fails if such a topological condition is not satisfied, see [12, Section 4.1]. Finally, from a regularity viewpoint, [12] also establishes that viscosity solutions to (1.5) are Lipschitz continuous.

Hence, in our case a natural question arises: what is the expected behaviour of the family of solutions and their free boundaries as \(p\rightarrow \infty \)? This question is one of our motivations to study existence, uniqueness, regularity and further properties of solutions to gradient constraint models like (1.5).

In our next result, we establish existence and regularity of limit solutions. Throughout this limit procedure we will assume that the boundary datum F is a fixed Lipschitz function.

Theorem 1.2

(Limiting problem) Let \((u_p)_{p\ge 2}\) be the family of weak solutions to (1.1). Then, up to a subsequence, \(u_p \rightarrow u_{\infty }\) uniformly in \({\overline{\Omega }}\). Furthermore, such a limit fulfils in the viscosity sense

$$\begin{aligned} \left\{ \begin{array}{lllll} \max \left\{ -\Delta _{\infty } u_{\infty }, \,\, -|\nabla u_{\infty }| + \chi _{\{u_{\infty }>0\}}\right\} &{} = &{} 0 &{} \text{ in } &{} \Omega \cap \{u_{\infty } \ge 0\} \\ u_{\infty } &{} = &{} F &{} \text{ on } &{} \partial \Omega . \end{array} \right. \end{aligned}$$
(1.6)

Finally, \(u_{\infty }\) is a Lipschitz continuous function with

$$\begin{aligned}{}[u_{\infty }]_{\text{ Lip }({\overline{\Omega }})} \le C(N)\max \left\{ 1, [F]_{\text{ Lip }(\partial \Omega )}\right\} . \end{aligned}$$

Notice that (1.6) can be written as a fully nonlinear second order operator as follows:

$$\begin{aligned} \begin{array}{l} \mathrm {F}_\infty : {\mathbb {R}}\times {\mathbb {R}}^N \times \text{ Sym }(N) \longrightarrow {\mathbb {R}}\\ (s, \xi , X) \mapsto \max \left\{ -\xi ^T X \xi , -|\xi |+ \chi _{\{s>0\}}\right\} , \end{array} \end{aligned}$$

which is non-decreasing in s. Moreover, \(\mathrm {F}_\infty \) is a degenerate elliptic operator in the sense that

$$\begin{aligned} \mathrm {F}_\infty (s, \xi , X) \le \mathrm {F}_\infty (s, \xi , Y) \quad \text{ whenever } \quad Y\le X. \end{aligned}$$
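
Indeed, this is immediate to verify: if \(Y \le X\) in the sense of symmetric matrices, then \(\xi ^T(X-Y)\xi \ge 0\) for every \(\xi \in {\mathbb {R}}^N\), so that

$$\begin{aligned} -\xi ^T X \xi \le -\xi ^T Y \xi \quad \text{ and } \text{ hence } \quad \max \left\{ -\xi ^T X \xi , -|\xi |+ \chi _{\{s>0\}}\right\} \le \max \left\{ -\xi ^T Y \xi , -|\xi |+ \chi _{\{s>0\}}\right\} . \end{aligned}$$
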

Nevertheless, \(\mathrm {F}_\infty \) does not fall within the framework of [6, Theorem 3.3]. Hence, proving uniqueness of limit solutions becomes a non-trivial task. We overcome this difficulty by using ideas from [12, Section 4] and show that solutions to the limit problem are unique.

Theorem 1.3

(Uniqueness) There is a unique viscosity solution to (1.6). Moreover, a comparison principle holds, i.e. if \(F_1 \le F_2\) on \(\partial \Omega \), then the corresponding solutions \(u_{\infty }^1\) and \(u_{\infty }^2\) verify \(u_{\infty }^1 \le u_{\infty }^2\) in \(\Omega \).

Notice that since we have uniqueness for the limit problem, we have convergence of the whole family \((u_p)_{p \ge 2}\) as \(p\rightarrow \infty \) in Theorem 1.2 (and not only convergence along a subsequence).

Next, we turn our attention to the study of several geometric and analytical properties of limit solutions and their free boundaries. This analysis is motivated by the study of the asymptotic behaviour of several variational problems (see, for example, [7, 12, 21,22,23]). We have a sharp lower control on how limit solutions detach from their free boundaries.

Theorem 1.4

(Linear growth for limit solutions) Let \(u_{\infty }\) be a uniform limit of solutions \(u_p\) to (1.1) and \(\Omega ^{\prime } \Subset \Omega \). Then, for any \(x_0 \in \partial \{u_{\infty }>0\} \cap \Omega ^{\prime }\) and any \(0<r \ll 1\), the following estimate holds:

$$\begin{aligned} \displaystyle \sup _{B_r(x_0)} u_{\infty }(x) \ge r. \end{aligned}$$
(1.7)

Our main motivation for considering (1.6) comes from its connection to modern game theory. Recently, in [20] the authors introduced a two-player random turn game called “Tug-of-war” and showed that, as the “step size” converges to zero, the value functions of this game converge to the unique viscosity solution of the infinity Laplace equation \(-\Delta _\infty u=0\). We define and study a variant of the Tug-of-War game, which we call Pay or Leave Tug-of-War, inspired by the one in [12]. In our game, one of the players decides either to play the usual Tug-of-War or to pass the turn to the other player, who then decides to end the game immediately (and get 0 as final payoff) or to move and pay \({\varepsilon }\) (which is the step size). It is then shown that the value functions of this new game, namely \(u^{\varepsilon } \), fulfil a dynamic programming principle (DPP) given by

$$\begin{aligned} \begin{aligned} u^{\varepsilon }(x) = \min \Bigg \{ \frac{1}{2} \left( \sup _{y\in B_{\varepsilon }(x)} u^{\varepsilon }(y) + \inf _{y\in B_{\varepsilon }(x)} u^{\varepsilon }(y) \right) ; \max \Bigg \{0; \sup _{y\in B_{\varepsilon }(x)} u^{\varepsilon }(y) - {\varepsilon }\Bigg \}\Bigg \}. \end{aligned} \end{aligned}$$

Moreover, we show that the sequence \((u^{\varepsilon })_{\varepsilon >0}\) converges and the corresponding limit is a viscosity solution to (1.6). Therefore, besides its own interest, the game-theoretic scheme provides an alternative mechanism to prove the existence of a viscosity solution to (1.6).

Theorem 1.5

Let \(u^{\varepsilon }\) be the value functions of the game previously described. Then, it holds that

$$\begin{aligned} \begin{aligned} u^{{\varepsilon }}\rightarrow u \qquad \text{ uniformly } \text{ in }\quad {\overline{\Omega }}, \end{aligned} \end{aligned}$$

where u is the unique viscosity solution to Eq. (1.6).

It is important to mention that we have been able to obtain a game approximation for a free boundary problem that involves the set where the solution is positive, \(\{u>0\}\). This task involves the following difficulty: if one tries to play with a rule of the form “one player sells the turn when the expected payoff is positive”, then the value of the game will not be well defined, since this rule is an anticipating strategy. (The player needs to see the future in order to decide where he is going to play.) We overcome this difficulty by giving the other player the chance to stop the game (and obtain 0 as final payoff in this case) or buy the turn (when the first player gives this option). In this way we obtain a set of rules that is non-anticipating and gives a DPP that can be seen as a discretization of the limit PDE.
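
As a discretization of the DPP, the value functions can be computed by value iteration. The sketch below is our own illustration under simplifying assumptions: a uniform grid of \([0,1]\), step size \({\varepsilon }\) equal to the grid spacing, boundary data \(u(0)=0\), \(u(1)=1\), and the ball \(B_{\varepsilon }(x)\) replaced by the neighbouring grid points.

```python
def dpp_iterate(n=9, tol=1e-12, max_iter=20000):
    """Value iteration for the Pay or Leave Tug-of-War DPP
        u(x) = min{ (sup + inf)/2 ; max{0, sup - eps} }
    on a uniform grid of [0, 1], with eps equal to the grid spacing and
    the sup/inf over B_eps(x) taken over the neighbouring grid points."""
    eps = 1.0 / (n - 1)
    u = [0.0] * n
    u[-1] = 1.0  # boundary values u(0) = 0, u(1) = 1 are kept fixed
    for _ in range(max_iter):
        new = u[:]
        for i in range(1, n - 1):
            hi = max(u[i - 1], u[i], u[i + 1])  # sup over the discrete ball
            lo = min(u[i - 1], u[i], u[i + 1])  # inf over the discrete ball
            new[i] = min(0.5 * (hi + lo), max(0.0, hi - eps))
        if max(abs(a - b) for a, b in zip(new, u)) < tol:
            return new
        u = new
    return u

u = dpp_iterate()
```

For these particular boundary data the iteration converges to the linear profile, which is consistent with the unique solution of (1.6) in this one-dimensional setting.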

2 Preliminaries

Definition 2.1

(Weak solution) \(u \in W^{1, p}_{\text{ loc }}(\Omega )\) is a weak supersolution (resp. subsolution) to

$$\begin{aligned} -\Delta _p u = \Psi (x, u) \quad \text{ in } \Omega , \end{aligned}$$
(2.1)

if for all \(0\le \varphi \in C^1_0(\Omega )\) it holds

$$\begin{aligned} \displaystyle \int _{\Omega } |\nabla u|^{p-2}\nabla u\cdot \nabla \varphi (x)\,\mathrm{d}x \ge \int _{\Omega } \Psi (x, u)\varphi (x)\,\mathrm{d}x \quad \left( \text{ resp. }\,\, \le \int _{\Omega } \Psi (x, u)\varphi (x)\,\mathrm{d}x\right) . \end{aligned}$$

Finally, u is a weak solution to (2.1) when it is simultaneously a supersolution and a subsolution.

Since we are assuming that p is large, (1.1) is not singular at points where the gradient vanishes. Consequently, the mapping

$$\begin{aligned} x \mapsto \Delta _p \phi (x) = |\nabla \phi (x)|^{p-2}\Delta \phi (x) + (p-2)|\nabla \phi (x)|^{p-4}\Delta _{\infty } \phi (x) \end{aligned}$$

is well defined and continuous for all \(\phi \in C^2(\Omega )\).
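
As a sanity check of this identity (our own illustration, not part of the paper), one can compare a central-difference approximation of \(\mathrm {div}(|\nabla \phi |^{p-2}\nabla \phi )\) with the expanded right-hand side for a concrete polynomial, here with \(p=4\) and \(\phi (x, y)=x^3+2xy\) in two dimensions:

```python
# Check, for p = 4 and phi(x, y) = x^3 + 2xy, that
#   div(|grad phi|^{p-2} grad phi)
#     = |grad phi|^{p-2} Lap(phi) + (p-2) |grad phi|^{p-4} Lap_inf(phi)
# at the point (1, 1), using central differences for the divergence.

def grad(x, y):
    return (3 * x ** 2 + 2 * y, 2 * x)  # (phi_x, phi_y)

def vec_field(x, y):
    """|grad phi|^{p-2} grad phi with p = 4."""
    gx, gy = grad(x, y)
    a = gx * gx + gy * gy  # |grad phi|^2
    return (a * gx, a * gy)

def divergence(x, y, h=1e-5):
    v1p, _ = vec_field(x + h, y)
    v1m, _ = vec_field(x - h, y)
    _, v2p = vec_field(x, y + h)
    _, v2m = vec_field(x, y - h)
    return (v1p - v1m) / (2 * h) + (v2p - v2m) / (2 * h)

x, y = 1.0, 1.0
gx, gy = grad(x, y)  # (5, 2)
lap = 6 * x          # phi_xx + phi_yy = 6x
# Lap_inf(phi) = phi_x^2 phi_xx + 2 phi_x phi_y phi_xy + phi_y^2 phi_yy
lap_inf = gx ** 2 * (6 * x) + 2 * gx * gy * 2 + gy ** 2 * 0
expanded = (gx ** 2 + gy ** 2) * lap + 2 * lap_inf  # p - 2 = 2
```

Both quantities agree up to the finite-difference error, as expected from the expansion above.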

Taking into account that the limiting solutions need not be smooth and that the infinity Laplace operator is not in divergence form, we must use an appropriate notion of generalized solutions. Next, we introduce the notion of viscosity solution to (1.1). We refer to the survey [6] for the general theory of viscosity solutions.

Definition 2.2

(Viscosity solution) An upper (resp. lower) semi-continuous function \(u: \Omega \rightarrow {\mathbb {R}}\) is called a viscosity subsolution (resp. supersolution) to (1.1) if, whenever \(x_0 \in \Omega \) and \(\phi \in C^2(\Omega )\) are such that \(u-\phi \) has a strict local maximum (resp. minimum) at \(x_0\), then

$$\begin{aligned} -\Delta _p \phi (x_0) + \lambda _0(x_0)\chi _{\{\phi >0\}}(x_0)\le 0 \quad (\text{ resp. } \,\,\,\ge 0). \end{aligned}$$

Finally, \(u \in C(\Omega )\) is a viscosity solution to (1.1) if it is simultaneously a viscosity subsolution and a viscosity supersolution.

Now we state the definition of viscosity solution to (1.6). Notice that here we are using the sets \(\{u\ge 0\}\) and \(\{u> 0\}\) instead of the set that corresponds to the test function, \(\{\phi > 0\}\), as we did in the previous definition.

Definition 2.3

An upper semi-continuous (resp. lower semi-continuous) function \(u:\Omega \rightarrow {\mathbb {R}}\) is a viscosity subsolution (resp. supersolution) to (1.6) in \(\Omega \) if, whenever \( x_0 \in \Omega \) and \(\varphi \in C^2(\Omega )\) are such that \(u-\varphi \) has a strict local maximum (resp. minimum) at \( x_0\), then

$$\begin{aligned} \max \{-\Delta _\infty \varphi (x_0), \chi _{\{u> 0\}}(x_0)-\left| \nabla \varphi (x_0)\right| \}\le 0 \end{aligned}$$
(2.2)

respectively

$$\begin{aligned} \max \{-\Delta _\infty \varphi (x_0), \chi _{\{u\ge 0\}}(x_0)-\left| \nabla \varphi (x_0)\right| \}\ge 0. \end{aligned}$$
(2.3)

Finally, a continuous function \(u:\Omega \rightarrow {\mathbb {R}}\) is a viscosity solution to (1.6) in \(\Omega \) if it is both a viscosity subsolution and a viscosity supersolution.

Remark that since (2.2) does not depend on \(\varphi (x_0)\), we can assume that \(\varphi \) satisfies \(u(x_0) = \varphi (x_0)\) and \(u(x)<\varphi (x)\) for \(x \ne x_0\). Analogously, in (2.3) we can assume that \(u(x_0) = \varphi (x_0)\) and \(u(x)>\varphi (x)\) for \(x \ne x_0\). We also remark that (2.2) is equivalent to

$$\begin{aligned} -\Delta _{\infty } \varphi (x_0) \le 0 \quad \text{ and } \quad -|\nabla \varphi (x_0)|+\chi _{\{u>0\}}(x_0) \le 0; \end{aligned}$$

and that (2.3) is equivalent to

$$\begin{aligned} -\Delta _{\infty } \varphi (x_0) \ge 0 \quad \text{ or } \quad -|\nabla \varphi (x_0)|+\chi _{\{u\ge 0\}}(x_0) \ge 0. \end{aligned}$$

The following lemma gives a relation between weak and viscosity sub- and supersolutions to (1.1).

Lemma 2.4

A continuous weak subsolution (resp. supersolution) \(u \in W_{\text{ loc }}^{1,p}(\Omega )\) to (1.1) is a viscosity subsolution (resp. supersolution) to

$$\begin{aligned} -\left[ |\nabla u(x)|^{p-2} \Delta u(x) + (p-2)|\nabla u(x)|^{p-4}\Delta _{\infty } u (x) \right] = -\lambda _0(x)\chi _{\{u>0\}}(x) \quad \text{ in } \quad \Omega . \end{aligned}$$

Proof

Let us proceed with the case of supersolutions. Fix \(x_0 \in \Omega \) and \(\phi \in C^2(\Omega )\) such that \(\phi \) touches u from below, i.e. \(u(x_0) = \phi (x_0)\) and \(u(x)> \phi (x)\) for \(x \ne x_0\). Our goal is to show that

$$\begin{aligned} -\left[ |\nabla \phi (x_0)|^{p-2}\Delta \phi (x_0) + (p-2)|\nabla \phi (x_0)|^{p-4}\Delta _{\infty } \phi (x_0)\right] + \lambda _0(x_0)\chi _{\{\phi >0\}}(x_0) \ge 0. \end{aligned}$$

Let us suppose, for the sake of contradiction, that the inequality does not hold. Then, by continuity, there exists \(r>0\) small enough such that

$$\begin{aligned} -\left[ |\nabla \phi (x)|^{p-2}\Delta \phi (x) + (p-2)|\nabla \phi (x)|^{p-4}\Delta _{\infty } \phi (x)\right] +\lambda _0(x)\chi _{\{\phi >0\}}(x) < 0, \end{aligned}$$

provided that \(x \in B_r(x_0)\). Now, we consider

$$\begin{aligned} \Psi (x) {:}{=}\,\phi (x)+ \frac{1}{1000}\iota , \quad \text{ where } \quad \iota {:}{=}\,\inf _{\partial B_r(x_0)} (u(x)-\phi (x)). \end{aligned}$$

Notice that \(\Psi \) verifies \(\Psi < u\) on \(\partial B_r(x_0)\), \(\Psi (x_0)> u(x_0)\) and

$$\begin{aligned} -\Delta _p \Psi (x) + \lambda _0(x)\chi _{\{\phi >0\}}(x) <0. \end{aligned}$$
(2.4)

By extending by zero outside \(B_r(x_0)\), we may use \((\Psi -u)_{+}\) as a test function in (1.1). Moreover, since u is a weak supersolution, we obtain

$$\begin{aligned} \displaystyle \int _{\{\Psi>u\}} |\nabla u|^{p-2}\nabla u \cdot \nabla (\Psi -u) \mathrm{d}x \ge -\int _{\{\Psi>u\}} \lambda _0(x)\chi _{\{u>0\}}(x)(\Psi -u) \mathrm{d}x. \end{aligned}$$
(2.5)

On the other hand, multiplying (2.4) by \(\Psi - u\) and integrating by parts we get

$$\begin{aligned} \displaystyle \int _{\{\Psi>u\}} |\nabla \Psi |^{p-2}\nabla \Psi \cdot \nabla (\Psi -u) \mathrm{d}x < -\int _{\{\Psi>u\}} \lambda _0(x)\chi _{\{\phi >0\}}(x)(\Psi -u) \mathrm{d}x. \end{aligned}$$
(2.6)

Next, subtracting (2.5) from (2.6) we obtain

$$\begin{aligned}&\displaystyle \int \limits _{\{\Psi>u\}} \left( |\nabla \Psi |^{p-2}\nabla \Psi - |\nabla u|^{p-2}\nabla u\right) \cdot \nabla (\Psi -u) \mathrm{d}x\\&\quad< \int \limits _{\{\Psi>u\}} \lambda _0(x)\left( \chi _{\{\phi>0\}}(x)-\chi _{\{u>0\}}(x)\right) (\Psi -u)\mathrm{d}x \le 0. \end{aligned}$$

Observe that the last inequality holds since \(u \ge \phi \) in \(B_r(x_0)\) gives \(\chi _{\{\phi>0\}} \le \chi _{\{u>0\}}\), while \(\Psi -u>0\) on the set of integration. Finally, since the left-hand side is bounded from below by \(\displaystyle 2^{-p}\int _{\{\Psi >u\}} |\nabla \Psi - \nabla u|^p\mathrm{d}x \ge 0,\) the set \(\{\Psi >u\}\) must have zero measure, i.e. \(\Psi \le u\) in \(B_r(x_0)\). However, this contradicts the fact that \(\Psi (x_0)>u(x_0)\) and proves the result.

Similarly, one can prove that a continuous weak subsolution is a viscosity subsolution. \(\square \)

Theorem 2.5

(Morrey’s inequality) Let \(N<p\le \infty \). Then, for \(u \in W^{1, p}(\Omega )\), there exists a constant \(C(N, p)>0\) such that

$$\begin{aligned} \Vert u\Vert _{C^{0, 1-\frac{N}{p}}(\Omega )} \le C(N, p)\Vert \nabla u\Vert _{L^{p}(\Omega )}. \end{aligned}$$

We must highlight that the dependence of C on p does not deteriorate as \(p \rightarrow \infty \). In fact,

$$\begin{aligned} C(N, p) {:}{=}\,\frac{2c(N)}{|\partial B_1|^{\frac{1}{p}}}\left( \frac{p-1}{p-N}\right) ^{\frac{p-1}{p}}, \end{aligned}$$

where \(c(N)>0\) is a dimensional constant.
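
This can be checked numerically: the p-dependent factor in \(C(N, p)\) tends to 1 as \(p \rightarrow \infty \), so that \(C(N, p) \rightarrow 2c(N)\). A quick sketch (our own illustration):

```python
import math

def p_factor(N, p):
    """The p-dependent part of C(N, p):
    ((p - 1)/(p - N))^((p - 1)/p) / |boundary of B_1|^(1/p)."""
    surface = 2 * math.pi ** (N / 2) / math.gamma(N / 2)  # |S^{N-1}| in R^N
    return ((p - 1) / (p - N)) ** ((p - 1) / p) / surface ** (1 / p)

# For fixed N the factor increases toward 1 as p grows, so C(N, p) -> 2 c(N).
vals = [p_factor(3, p) for p in (10.0, 100.0, 10000.0, 1000000.0)]
```

In particular, the Hölder estimates obtained via Morrey's inequality do not degenerate along the limit procedure \(p \rightarrow \infty \).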

3 Non-degeneracy of solutions

This section is devoted to establishing a weak geometric property which plays a key role in the description of how solutions leave their free boundaries: the non-degeneracy of solutions.

Proof of Theorem 1.1

Due to the continuity of solutions, it is enough to prove such an estimate at points \(x_0 \in \{u>0\} \cap \Omega ^{\prime }\). Let us define the scaled function

$$\begin{aligned} u_r(x) {:}{=}\,\frac{u(x_0+rx)}{r^{\frac{p}{p-1}}} \end{aligned}$$

and the auxiliary barrier

$$\begin{aligned} \displaystyle \Psi (x) {:}{=}\,C_0 |x|^{\frac{p}{p-1}} \qquad \text{ with } \quad \displaystyle C_0 {:}{=}\,\frac{p-1}{p}\left( \frac{ \inf _{\Omega }\lambda _0(x)}{ N}\right) ^{\frac{1}{p-1}}. \end{aligned}$$

It is easy to check that

$$\begin{aligned} -\Delta _p \Psi + {\hat{\lambda }}_0\left( x \right) .\chi _{\{\Psi>0\}}(x) \ge 0 \ge -\Delta _p u_r + {\hat{\lambda }}_0\left( x \right) . \chi _{\{u_r>0\}}(x) \quad \text{ in } \quad B_1, \end{aligned}$$

in the weak sense, where \({\hat{\lambda }}_0(x) {:}{=}\,\lambda _0(x_0 + rx)\). Now, if \(u_r \le \Psi \) on the whole boundary of \(B_1\), then the comparison principle yields that

$$\begin{aligned} u_r \le \Psi \quad \text{ in } \quad B_1, \end{aligned}$$

which contradicts the fact that \(u_r(0)>0 = \Psi (0)\). Therefore, there exists a point \(y \in \partial B_1\) such that

$$\begin{aligned} u_r(y) > \Psi (y) = C_0. \end{aligned}$$

The proof finishes by scaling back \(u_r\). \(\square \)
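
For completeness, we record the elementary computation behind the barrier inequality used above. Writing \(\alpha {:}{=}\,\frac{p}{p-1}\), for \(\Psi (x) = C_0|x|^{\alpha }\) one has

$$\begin{aligned} \nabla \Psi (x) = C_0\alpha |x|^{\alpha -2}x \quad \text{ and } \quad |\nabla \Psi |^{p-2}\nabla \Psi = (C_0\alpha )^{p-1}|x|^{(\alpha -1)(p-2)+\alpha -2}\,x = (C_0\alpha )^{p-1}x, \end{aligned}$$

since \((\alpha -1)(p-2)+\alpha -2 = \frac{p-2}{p-1}+\frac{2-p}{p-1}=0\). Therefore,

$$\begin{aligned} \Delta _p \Psi = N(C_0\alpha )^{p-1} = N \cdot \frac{\inf _{\Omega }\lambda _0(x)}{N} = \inf _{\Omega }\lambda _0(x) \le {\hat{\lambda }}_0(x) \quad \text{ in } \quad B_1, \end{aligned}$$

because \(C_0 \alpha = \left( \frac{\inf _{\Omega }\lambda _0(x)}{N}\right) ^{\frac{1}{p-1}}\), which yields \(-\Delta _p \Psi + {\hat{\lambda }}_0(x)\chi _{\{\Psi>0\}}(x) \ge 0\) in \(B_1\).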

4 The limit problem

This section is devoted to proving Theorems 1.2 and 1.4, which concern the limit as \(p\rightarrow \infty \). First, we prove the existence of a uniform limit as \(p\rightarrow \infty \), as stated in Theorem 1.2. Recall that since the boundary datum F is assumed to be Lipschitz continuous, we can extend it to a Lipschitz function (still denoted by F) on the whole \(\Omega \).

Lemma 4.1

Assume \(\max \{2, N\}<p < \infty \) and let \(u_p \in W^{1, p}(\Omega )\) be a weak solution to (1.1). Then,

$$\begin{aligned} \Vert \nabla u_p\Vert _{L^p(\Omega )} \le C_1. \end{aligned}$$

Additionally, \(u_p \in C^{0, \alpha }(\Omega )\) with \(\alpha = 1- \frac{N}{p}\), together with the estimate

$$\begin{aligned} \frac{|u_p(x)-u_p(y)|}{|x-y|^{\alpha }} \le C_2, \end{aligned}$$

where \(C_1, C_2>0\) are constants depending on N, \( \Vert \lambda _0\Vert _{L^{\infty }(\Omega )}\), \(\Vert F\Vert _{L^{\infty }(\Omega )}\) and \(\Vert \nabla F\Vert _{L^{\infty }(\Omega )}\).

Proof

The unique weak solution \(u_p\in W^{1,p} (\Omega )\cap C({{\overline{\Omega }}})\) to \(\Delta _p u_p= \lambda _0 \chi _{\{u_p>0\}}\) with fixed Lipschitz continuous boundary values F can be characterized as the minimizer of the functional

$$\begin{aligned} J_p[u] = \int _{\Omega } \frac{|\nabla u|^p}{p} \, \mathrm{d}x + \int _{\{u>0\}} \lambda _0 u \, \mathrm{d}x \end{aligned}$$

in the set of functions \({\mathbb {K}} = \{ u \in W^{1,p} (\Omega ) \ : \ u =F \text{ on } \partial \Omega \}\). Using F as a competitor and the fact that \(\Vert u_p\Vert _{L^{\infty }(\Omega )}\le \Vert F\Vert _{L^{\infty }(\Omega )}\), we obtain

$$\begin{aligned} \begin{array}{lll} \displaystyle \int _{\Omega } |\nabla u_p|^p \, \mathrm{d}x &{} \le &{} \displaystyle \int _{\Omega } |\nabla F|^p \, \mathrm{d}x + p\int _{\{F>0\}} \lambda _0 F \, \mathrm{d}x - p\int _{\Omega } \lambda _0(u_p)_{+}\, \mathrm{d}x \\ &{} \le &{} \displaystyle C \Vert \nabla F\Vert ^{p}_{L^\infty (\Omega )} + Cp\Vert \lambda _0\Vert _{L^{\infty }(\Omega )}\Vert F\Vert _{L^{\infty }(\Omega )}. \end{array} \end{aligned}$$

Therefore,

$$\begin{aligned} \Vert \nabla u_p\Vert _{L^p(\Omega )} \le C_1. \end{aligned}$$

Next, for \(p>N\), Morrey's inequality (Theorem 2.5) gives

$$\begin{aligned} \frac{|u_p(x)-u_p(y)|}{|x-y|^{1- \frac{N}{p}}} \le C\Vert \nabla u_p\Vert _{L^p(\Omega )}. \end{aligned}$$

\(\square \)

Next, we show that any family of weak solutions to (1.1) is pre-compact, and therefore we obtain the existence of a uniform limit (as stated in Theorem 1.2).

Lemma 4.2

(Existence of limit solutions) Let \((u_p)_{p>2}\) be a sequence of weak solutions to (1.1). Then, there exists a subsequence \(p_j \rightarrow \infty \) and a limit function \(u_{\infty }\) such that

$$\begin{aligned} \displaystyle \lim _{p_j \rightarrow \infty } u_{p_j}(x) = u_{\infty }(x) \end{aligned}$$

uniformly in \(\Omega \). Moreover, \(u_{\infty }\) is Lipschitz continuous with

$$\begin{aligned}{}[u_{\infty }]_{\text{ Lip }({\overline{\Omega }})} \le \limsup _{p_j \rightarrow \infty } C(N, p_j, \Omega )\Vert \nabla u_{p_j}\Vert _{L^{p_j}(\Omega )} \le C(N)\max \left\{ 1, [F]_{\text{ Lip }(\partial \Omega )}\right\} . \end{aligned}$$

Proof

The existence of a uniform limit, \(u_{\infty }\), is a direct consequence of the estimates in Lemma 4.1 combined with the Arzelà–Ascoli compactness criterion. Finally, the last statement follows by passing to the limit in the Hölder estimates from Lemma 4.1. \(\square \)

Next, we will show that any uniform limit, \(u_\infty \), is a viscosity solution to the limit equation.

Proof of Theorem 1.2

Notice that from the uniform convergence, it holds that \(u_{\infty } = F\) on \(\partial \Omega \). Next, we prove that the limit function \(u_{\infty }\) is a viscosity solution to

$$\begin{aligned} \max \left\{ -\Delta _{\infty } u_{\infty }(x), -|\nabla u_{\infty }(x)|+ \chi _{\{u_{\infty }>0\}}(x)\right\} = 0 \quad \text{ in } \quad \Omega . \end{aligned}$$

First, let us prove that \(u_{\infty }\) is a viscosity supersolution. To this end, fix \(x_0 \in \{u_{\infty }>0\} \cap \Omega \) and let \(\phi \in C^2(\Omega )\) be a test function such that \(u_{\infty }(x_0) = \phi (x_0)\) and the inequality \(u_{\infty }(x) > \phi (x)\) holds for all \(x \ne x_0\). Notice that since we have \(x_0 \in \{u_{\infty }>0\} \cap \Omega \), it holds that \(\chi _{\{u_{\infty }\ge 0\}}(x_0) =\chi _{\{u_{\infty }> 0\}}(x_0) =1\).

We want to show that

$$\begin{aligned} - \Delta _{\infty } \phi (x_0) \ge 0 \quad \text{ or } \quad -|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }\ge 0\}}(x_0)= -|\nabla \phi (x_0)|+ 1 \ge 0. \end{aligned}$$

Notice that if \(-|\nabla \phi (x_0)|+1 \ge 0\) there is nothing to prove. Hence, we may assume that

$$\begin{aligned} -|\nabla \phi (x_0)|+1 <0. \end{aligned}$$
(4.1)

Since, up to a subsequence, \(u_p \rightarrow u_{\infty }\) uniformly, there exists a sequence of points \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local minimum at \(x_{p}\). Since \(u_{p}\) is a weak supersolution (and then a viscosity supersolution by Lemma 2.4) to (1.1), we get

$$\begin{aligned} -\left[ |\nabla \phi (x_{p})|^{p-2}\Delta \phi (x_{p}) + (p-2)|\nabla \phi (x_{p})|^{p-4}\Delta _{\infty } \phi (x_{p})\right] \ge -\lambda _0(x_p)\chi _{\{\phi \ge 0\}}(x_p). \end{aligned}$$

Now, dividing both sides by \((p-2)|\nabla \phi (x_{p})|^{p-4}\) (which is not zero for \(p\gg 1\) due to (4.1)) we get

$$\begin{aligned} - \Delta _{\infty } \phi (x_{p}) \ge \frac{|\nabla \phi (x_{p})|^2 \Delta \phi (x_{p})}{p-2} -\left( \frac{\root p-4 \of {\lambda _0(x_p)\chi _{\{\phi \ge 0\}}(x_p)}}{|\nabla \phi (x_{p})|}\right) ^{p-4}. \end{aligned}$$

Passing to the limit as \(p \rightarrow \infty \) in the above inequality (the first term on the right-hand side vanishes, while the second one tends to zero thanks to (4.1)), we conclude that

$$\begin{aligned} - \Delta _{\infty } \phi (x_0) \ge 0, \end{aligned}$$

which proves that \(u_{\infty }\) is a viscosity supersolution.

Now, let us show that \(u_{\infty }\) is a viscosity subsolution. To this end, fix \(x_0 \in \{u_{\infty }>0\} \cap \Omega \) and a test function \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)\) and the inequality \(u_{\infty }(x) < \phi (x)\) holds for \(x \ne x_0\). We want to prove that

$$\begin{aligned} - \Delta _{\infty } \phi (x_0) \le 0 \quad \text{ and } \quad -|\nabla \phi (x_0)|+\chi _{\{u_{\infty } > 0\}}(x_0) \le 0. \end{aligned}$$
(4.2)

One more time, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local maximum at \(x_{p}\) and since \(u_{p}\) is a weak subsolution (resp. viscosity subsolution) to (1.1), we have that

$$\begin{aligned} - \frac{|\nabla \phi (x_{p})|^2 \Delta \phi (x_{p})}{p-2} - \Delta _{\infty } \phi (x_{p}) \le -\left( \frac{\root p-4 \of {\lambda _0(x_p)\chi _{\{u_{\infty }\ge 0\}}(x_p)}}{|\nabla \phi (x_{p})|}\right) ^{p-4} \le 0. \end{aligned}$$

Thus, letting \(p \rightarrow \infty \) we obtain \(- \Delta _{\infty } \phi (x_0) \le 0\). Furthermore, if \(-|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }> 0\}}(x_0) > 0\), then the right-hand side diverges to \(-\infty \) as \(p \rightarrow \infty \), giving a contradiction. Therefore, (4.2) holds.

Next, let us establish the limit equation in the null set. To this end, fix \(x_0 \in \Omega \cap \{u_{\infty } = 0\}\) and \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)=0\) and \(u_{\infty }(x) > \phi (x)\) holds for \(x \ne x_0\). As before, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local minimum at \(x_{p}\). We consider two cases:

  1. Case 1:

    \(\phi (x_{p_k}) \le 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak supersolution (resp. viscosity supersolution) to (1.1), we obtain after passing to the limit as \(p_k \rightarrow \infty \) that \(-\Delta _{\infty } \phi (x_0) \ge 0\).

  2. Case 2:

    \(\phi (x_{p_k}) > 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak supersolution (resp. viscosity supersolution) to (1.1), we have that

    $$\begin{aligned} - \Delta _{p_k} \phi (x_{p_k}) \ge \lambda _0(x_{p_k}). \end{aligned}$$

    As in the first part of this proof, we obtain after passing to the limit as \(p_k \rightarrow \infty \) that

    $$\begin{aligned} -\Delta _{\infty } \phi (x_0) \ge 0 \quad \text{ or } \quad -|\nabla \phi (x_0)| + 1\ge 0. \end{aligned}$$

In both cases, we conclude that

$$\begin{aligned} \max \left\{ -\Delta _{\infty } \phi (x_0), -|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }\ge 0\}}(x_0)\right\} \ge 0, \end{aligned}$$

which assures that \(u_{\infty }\) is a viscosity supersolution to (1.6) in its null set.

Now, fix \(x_0 \in \Omega \cap \{u_{\infty } = 0\}\) and \(\phi \in C^2(\Omega )\) such that \(u_{\infty }(x_0) = \phi (x_0)=0\) and \(u_{\infty }(x) < \phi (x)\) holds for \(x \ne x_0\). One more time, there exists a sequence \(x_{p} \rightarrow x_0\) such that \(u_{p}-\phi \) has a local maximum at \(x_{p}\). As before, let us consider two possibilities:

  1. Case 1:

    \(\phi (x_{p_k}) \le 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak subsolution (resp. viscosity subsolution) to (1.1), we obtain \(-\Delta _{\infty } \phi (x_0) \le 0\). Moreover, we also have \(-|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }> 0\}}(x_0) = -|\nabla \phi (x_0)| \le 0\).

  2. Case 2:

    \(\phi (x_{p_k}) > 0\) for a subsequence \((p_k)_{k\ge 1}\). In this case, since \(u_{p_k}\) is a weak subsolution (resp. viscosity subsolution) to (1.1), we have that

    $$\begin{aligned} - \Delta _{p_k} \phi (x_{p_k}) \le \lambda _0(x_{p_k}). \end{aligned}$$

    Once again, we obtain after passing to the limit as \(p_k \rightarrow \infty \),

    $$\begin{aligned} -\Delta _{\infty } \phi (x_0) \le 0 \quad \text{ and } \quad -|\nabla \phi (x_0)|+ 1\le 0. \end{aligned}$$

Therefore, in any of the two cases, we conclude that

$$\begin{aligned} \max \left\{ -\Delta _{\infty } \phi (x_0), -|\nabla \phi (x_0)|+ \chi _{\{u_{\infty }> 0\}}(x_0)\right\} \le 0, \end{aligned}$$

which shows that \(u_{\infty }\) is a viscosity subsolution to (1.6) in its null set.

Finally, proving that \(u_{\infty }\) is \(\infty -\)harmonic in its negativity set is a standard task, and the reasoning is similar to the one employed in [21, Theorem 1], [22, page 384] and [23, Theorem 1.1]. We omit the details here. \(\square \)

Proof of Theorem 1.4

Any sequence of weak solutions \((u_p)_{p\ge 2}\) converges, up to a subsequence, to a limit, \(u_{\infty }\), uniformly in \(\Omega \). From Theorem 1.1 we have that

$$\begin{aligned} \displaystyle \sup _{B_r(x_0)} \,u_p(x) \ge C_{0} r^{\frac{p}{p-1}} \qquad \text{ with } \quad \displaystyle C_{0} {:}{=}\,\frac{p-1}{p}\left( \frac{ \inf _{\Omega }\lambda _0(x)}{ N}\right) ^{\frac{1}{p-1}}. \end{aligned}$$

As before, for \({\hat{x}} \in \overline{\{u_{\infty }>0\}} \cap \Omega ^{\prime }\) there exist \(x_p \rightarrow {\hat{x}}\) with \(x_p \in \overline{\{u_p>0\}} \cap \Omega ^{\prime }\). Hence, since \(C_{0} r^{\frac{p}{p-1}} \rightarrow r\) as \(p \rightarrow \infty \) for \(0<r<1\), we get

$$\begin{aligned} \displaystyle \sup _{B_r({\hat{x}})} u_{\infty }(x) = \lim _{p \rightarrow \infty } \sup _{B_r(x_p)} u_p(x) \ge r. \end{aligned}$$

\(\square \)

5 Uniqueness for the limit problem

Our main goal throughout this section is to show uniqueness of viscosity solutions to

$$\begin{aligned} \left\{ \begin{array}{lllll} \max \left\{ -\Delta _{\infty } \, u_{\infty }(x), \chi _{\{u_{\infty }>0\}}(x) -|\nabla u_{\infty }(x)| \right\} &{} = &{} 0 &{} \text{ in }&{} \Omega \\ u_{\infty }(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega . \end{array} \right. \end{aligned}$$
(5.1)

Recall that the existence of a solution \(u_\infty \) was obtained as the uniform limit (along subsequences) of solutions to the \(p-\)Laplacian problems (1.1); see Theorem 1.2 for more details. Next, we deliver the proof of Theorem 1.3, which is based on [12, Section 4]. For this reason, we only include the main details.

Proof of Theorem 1.3

To prove such a result we first construct a function v and then show that any possible viscosity solution to (5.1) coincides with v. To construct such a special v, we first consider h, the unique (see [10]) viscosity solution to

$$\begin{aligned} \left\{ \begin{array}{lllll} -\Delta _{\infty } \, h (x) &{} = &{} 0 &{} \text{ in }&{} \Omega \\ h(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega . \end{array} \right. \end{aligned}$$
(5.2)

Then, let z be the unique viscosity solution to

$$\begin{aligned} \left\{ \begin{array}{lllll} \max \left\{ -\Delta _{\infty } \, z (x), 1 -|\nabla z (x)| \right\} &{} = &{} 0 &{} \text{ in }&{} \Omega \\ z (x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega . \end{array} \right. \end{aligned}$$
(5.3)

Note that for this problem we have uniqueness, as well as the validity of a comparison principle; see [12, Theorem 4.5]. Hence, we have

$$\begin{aligned} z(x) \le u_\infty (x) \le h(x) \qquad \forall \,\, x \in \Omega . \end{aligned}$$

Moreover, from [12, Theorem 4.2], we have

$$\begin{aligned} z(x) = u_\infty (x) = h(x) \qquad \text{ in } \quad \{ x\in \Omega \, : \, |\nabla h(x)| \ge 1 \}. \end{aligned}$$

Now, we modify z in the set \(\{x\in \Omega \, : \, z (x) < 0 \}\) to obtain the function v as follows: Let w be the solution to

$$\begin{aligned} \left\{ \begin{array}{lllll} -\Delta _{\infty } \, w (x) &{} = &{} 0 &{} \text{ in }&{} \{x\in \Omega \, : \, z (x)< 0 \} \\ w(x) &{} = &{} z(x) &{} \text{ on } &{} \partial \{x\in \Omega \, : \, z (x) < 0 \}. \end{array} \right. \end{aligned}$$
(5.4)

and then we set

$$\begin{aligned} v(x) = \left\{ \begin{array}{ll} z(x) \qquad &{} \text{ for } \{x\in \Omega \, : \, z (x) \ge 0 \}, \\ w(x) \qquad &{} \text{ for } \{x\in \Omega \, : \, z (x) < 0 \}. \end{array} \right. \end{aligned}$$
(5.5)

Note that this function v is uniquely determined by the boundary datum F, since all the PDE problems involved have unique solutions. Moreover, since we have a comparison principle for each of these problems, we also have a comparison principle for v; that is, if \(F_1 \le F_2\) on \(\partial \Omega \), then the corresponding functions \(v_1\) and \(v_2\) satisfy

$$\begin{aligned} v_1 (x) \le v_2 (x) , \qquad \text{ in } \Omega . \end{aligned}$$

Now our aim is to show that

$$\begin{aligned} u_\infty = v, \qquad \text{ in } \Omega . \end{aligned}$$

Firstly, let us show that \(u_\infty =z=v\) in the set \( \{x\in \Omega \, : \, z (x) \ge 0 \}\). To this end, we observe that in the set \( \{ x\in \Omega \, : \, |\nabla h(x)| \ge 1 \}\) we have \( z(x) = u_\infty (x) = h(x) \). Hence, we only have to deal with \( \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \}\). Now, as in [12, Theorem 4.2], we argue by contradiction and suppose that there is \({\hat{x}} \in \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \}\) such that \(u_\infty ({{\hat{x}}})-z({{\hat{x}}})>0\). If \(u_\infty \) were smooth, we would have \(|\nabla u_\infty ({{\hat{x}}})|\ge 1\) by the second part of the equation, and from \(\Delta _\infty u_\infty \ge 0\) it would follow that \(t\mapsto |\nabla u_\infty (\gamma (t))|\) is non-decreasing along the curve \(\gamma \) for which \(\gamma (0)={{\hat{x}}}\) and \({\dot{\gamma }}(t)=\nabla u_\infty (\gamma (t))\). Using this information and the fact that \(|z(x)-z(y)|\le |x-y|\) in \(\{|\nabla h| < 1\}\), we could then follow \(\gamma \) up to the boundary to find a point y where \(u_\infty (y)>z(y)\); but this is a contradiction since \(u_\infty \) and z coincide on \(\partial \Omega \).

To overcome the lack of smoothness of \(u_\infty \) and to justify rigorously the steps outlined above, we use an approximation procedure based on the sup-convolution. Let \(\delta >0\) and

$$\begin{aligned} (u_\infty )_\delta (x)=\sup _{y\in \Omega } \left\{ u_\infty (y)-\frac{1}{2\delta } |x-y|^2\right\} \end{aligned}$$

be the standard sup-convolution of \(u_\infty \). Observe that since \(u_\infty \) is bounded in \(\Omega \), we in fact have

$$\begin{aligned} (u_\infty )_\delta (x)=\sup _{y\in B_{R(\delta )}(x)} \left\{ u_\infty (y)-\frac{1}{2\delta } |x-y|^2\right\} \end{aligned}$$

with \(R(\delta )=2\sqrt{\delta \Vert u_\infty \Vert _{L^\infty (\Omega )}}\). We assume that \(\delta >0\) is small. In what follows we will use the notation
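For intuition, the sup-convolution is easy to visualize numerically. The following sketch (with a hypothetical sampled function u on a one-dimensional grid, not taken from the paper) computes \((u)_\delta \) on the grid and illustrates that \((u)_\delta \ge u\) and that \((u)_\delta \rightarrow u\) as \(\delta \rightarrow 0\):

```python
import numpy as np

# Hypothetical bounded 1-Lipschitz function on Omega = (-1, 1), sampled on a grid.
x = np.linspace(-1.0, 1.0, 401)
u = -np.abs(x)

def sup_convolution(u, x, delta):
    # (u)_delta(x_i) = max_j ( u(x_j) - |x_i - x_j|^2 / (2*delta) )
    D = x[:, None] - x[None, :]
    return np.max(u[None, :] - D ** 2 / (2.0 * delta), axis=1)

u_d1 = sup_convolution(u, x, 0.1)    # larger delta: coarser approximation
u_d2 = sup_convolution(u, x, 0.01)   # smaller delta: closer to u
```

For this example the approximation error is of order \(\delta \), consistent with the localization radius \(R(\delta )\) above: the maximizing point always lies within distance \(R(\delta )\) of x.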

$$\begin{aligned} L(f, x) := \lim _{r \rightarrow +0} \mathrm {Lip}(f, B_r(x)) \end{aligned}$$

for the point-wise Lipschitz constant of a function f. Next, we observe that since \(u_\infty \) is a solution to (5.1), it follows that \(\Delta _\infty (u_\infty )_\delta \ge 0\) and \(|\nabla (u_\infty )_\delta |-\chi _{\{(u_\infty )_\delta > 0\}}\ge 0\). In particular, since \((u_\infty )_\delta \) is semi-convex, there exists \(x_0\) such that

$$\begin{aligned} (u_\infty )_\delta (x_0)-z(x_0) > \sup \limits _{x\in \partial \Omega } ((u_\infty )_\delta -z), \end{aligned}$$

and

$$\begin{aligned} |\nabla (u_\infty )_\delta (x_0)|=L((u_\infty )_\delta ,x_0)\ge 1. \end{aligned}$$

Now let \(r_0=\frac{1}{2}\text{ dist }(x_0,\partial \Omega )\) and let \(x_1\in \partial B_{r_0}(x_0)\) be a point such that

$$\begin{aligned} \max _{y\in {{\overline{B}}}_{r_0}(x_0)} (u_\infty )_\delta (y)=(u_\infty )_\delta (x_1). \end{aligned}$$

Since \(\Delta _\infty (u_\infty )_\delta \ge 0\), the increasing slope estimate, see [4], implies

$$\begin{aligned} 1\le L((u_\infty )_\delta ,x_0) \le L((u_\infty )_\delta ,x_1)\quad \text{ and }\quad (u_\infty )_\delta (x_1) \ge (u_\infty )_\delta (x_0)+ |x_0-x_1|. \end{aligned}$$

By defining \(r_1=\frac{1}{2}\mathrm {dist}(x_1,\partial \Omega )\), choosing \(x_2\in \partial B_{r_1}(x_1)\) so that

$$\begin{aligned} \max \limits _{y\in {{\overline{B}}}_{r_1}(x_1)} (u_\infty )_\delta (y)=(u_\infty )_\delta (x_2), \end{aligned}$$

and using the increasing slope estimate again yields

$$\begin{aligned} 1\le L((u_\infty )_\delta ,x_0) \le L((u_\infty )_\delta ,x_1)\le L((u_\infty )_\delta ,x_2) \end{aligned}$$

and

$$\begin{aligned} (u_\infty )_\delta (x_2) \ge (u_\infty )_\delta (x_1)+ |x_1-x_2| \ge (u_\infty )_\delta (x_0)+ |x_0-x_1|+ |x_1-x_2|. \end{aligned}$$

Repeating this construction, we obtain a sequence \((x_k)\) such that \(x_k\rightarrow a\in \partial \{x\in \Omega \, : \, z (x) \ge 0 \text{ and } |\nabla h(x)| < 1 \} \cap \partial \Omega \) as \(k\rightarrow \infty \) and

$$\begin{aligned} (u_\infty )_\delta (x_k)\ge (u_\infty )_\delta (x_0)+\sum _{j=0}^{k-1} |x_j-x_{j+1}| \quad \hbox { for}\ k=1,2,\ldots \end{aligned}$$

On the other hand, since \(|z(x)-z(y)|\le |x-y|\) whenever the line segment \([x,y]\) is contained in \(\{|\nabla h| \le 1\}\) (see [5]), we have

$$\begin{aligned} z(x_k) \le z(x_0)+\sum _{j=0}^{k-1} |x_j-x_{j+1}|. \end{aligned}$$

Thus, by continuity,

$$\begin{aligned} (u_\infty )_\delta (a)-z(a) =\lim _{k\rightarrow \infty } \big ((u_\infty )_\delta (x_k)-z(x_k)\big ) \ge (u_\infty )_\delta (x_0)-z(x_0) > \sup \limits _{x\in \partial \Omega } ((u_\infty )_\delta -z), \end{aligned}$$

which is clearly a contradiction. Therefore, we conclude that \(u_\infty =z=v\) in the set \( \{x\in \Omega \, : \, z (x) \ge 0 \}\).

To extend the equality \(u_\infty = v\) to the set \( \{x\in \Omega \, : \, z (x) < 0 \}\), we just observe that \(-\Delta _\infty v = 0\) there, and that \(-\Delta _\infty u_\infty =0\) as well: since \(u_\infty \le 0\) on the boundary of \( \{x\in \Omega \, : \, z (x) < 0 \}\) (recall that \(u_\infty = z\) where \(z \ge 0\)), we get \(u_\infty \le 0\) in the set \( \{x\in \Omega \, : \, z (x) < 0 \}\), and \(u_\infty \) is \(\infty -\)harmonic in its negativity set (notice that if \(u_\infty =0\) there, then trivially \(-\Delta _\infty u_\infty =0\)). Therefore, we conclude that

$$\begin{aligned} u_\infty = v \end{aligned}$$

in the whole \(\Omega \). \(\square \)

Remark 5.1

From the previous proof we have that the positivity sets of \(u_\infty \) and z coincide. The function z can be computed as follows (see [12, Section 2.2]): since h is everywhere differentiable, see [9], and \(|\nabla h(x)|\) equals the point-wise Lipschitz constant of h,

$$\begin{aligned} L(h, x) {:}{=}\,\lim _{r \rightarrow +0} \mathrm {Lip}(h, B_r(x)) \end{aligned}$$

for every \(x\in \Omega \), using that the map \(x\mapsto L(h,x)\) is upper semi-continuous, see, for example, [4], we have that the set

$$\begin{aligned} V {:}{=}\,\{x \in \Omega :|\nabla h(x)| < 1\} \end{aligned}$$

is an open subset of \(\Omega \). Now, define the “patched function” \(z :{{\overline{\Omega }}} \rightarrow {\mathbb {R}}\) by first setting

$$\begin{aligned} z=h\quad \text{ in } {{\overline{\Omega }}}\setminus V, \end{aligned}$$

and then, for each connected component U of V and \(x\in U\), we let

$$\begin{aligned} z(x) = \sup _{y \in \partial U} \left( h(y) - d_{U} (x, y) \right) , \end{aligned}$$

where \(d_{U} (x, y)\) stands for the (interior) distance between x and y in U.
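As a toy illustration of this patching formula, here is a one-dimensional sketch of our own, with a hypothetical smooth h satisfying \(|h'|<1\) on a component U, so that the interior distance reduces to \(d_U(x,y)=|x-y|\):

```python
import numpy as np

# Hypothetical smooth h with |h'| <= 1/2 < 1 on U = (0, pi);
# U plays the role of one connected component of V.
h = lambda t: 0.5 * np.sin(t)
a, b = 0.0, np.pi                      # boundary points of U

def z(t):
    # z(x) = sup_{y in dU} ( h(y) - d_U(x, y) ),  with d_U(x, y) = |x - y| in 1-D
    return max(h(a) - abs(t - a), h(b) - abs(t - b))

grid = np.linspace(a, b, 201)
z_vals = np.array([z(t) for t in grid])
h_vals = h(grid)
```

The patched function agrees with h on \(\partial U\), stays below h inside U, and is 1-Lipschitz, as expected from the distance formula.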

6 Games: pay or leave Tug-of-War

In this section, we consider a variant of the Tug-of-War games introduced in [20] and [12]. Let us describe the two-player zero-sum game that we call Pay or Leave Tug-of-War.

Let \(\Omega \) be a bounded open set and \({\varepsilon }>0\). A token is placed at \(x_0\in \Omega \). Player II, the player seeking to minimize the final payoff, can either pass the turn to Player I or decide to toss a fair coin and play Tug-of-War. In the latter case, the winner of the coin toss gets to move the token to any \(x_1\in B_{\varepsilon }(x_0)\). If Player II passes the turn to Player I, then he can either move the game token to any \(x_1\in B_{\varepsilon }(x_0)\) at the price \(-{\varepsilon }\) (added to the final payoff) or decide to end the game immediately with no payoff for either of the players. After the first round, the game continues from \(x_1\) according to the same rules.

This procedure yields a possibly infinite sequence of game states \(x_0,x_1,\ldots \) where every \(x_k\) is a random variable. If the game is not ended by the rules described above, the game ends when the token leaves \(\Omega \), and at this point the token will be in the boundary strip of width \({\varepsilon }\) given by

$$\begin{aligned} \Gamma _{\varepsilon }= \{x\in {\mathbb {R}}^n \setminus \Omega \,:\,\mathrm {dist}(x,\partial \Omega )< {\varepsilon }\}. \end{aligned}$$

We denote by \(x_\tau \in \Gamma _{\varepsilon }\) the first point in the sequence of game states that lies in \(\Gamma _{\varepsilon }\) so that \(\tau \) refers to the first time we hit \(\Gamma _{{\varepsilon }}\).

At this time the game ends with the terminal payoff given by \(F(x_\tau )\), where \(F:\Gamma _{\varepsilon }\rightarrow {\mathbb {R}}\) is a given continuous payoff function. Player I earns \(F(x_\tau )\), while Player II earns \(-F(x_\tau )\).
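To make the rules concrete, here is a minimal simulation of one play (our own sketch: \(\Omega = (0,1)\), \({\varepsilon } = 0.1\), \(F(x)=x\), and deliberately naive strategies in which Player II always chooses Tug-of-War and each player pulls toward a fixed boundary; none of these choices is prescribed by the paper):

```python
import random

eps = 0.1
F = lambda t: t                        # continuous payoff on the strip Gamma_eps

def play(x0, rng):
    x, payments = x0, 0.0
    while 0.0 < x < 1.0:               # token still inside Omega = (0, 1)
        # Player II opts for Tug-of-War: a fair coin decides who moves.
        if rng.random() < 0.5:
            x += 0.9 * eps             # Player I pulls toward the right boundary
        else:
            x -= 0.9 * eps             # Player II pulls toward the left boundary
    # x now lies in Gamma_eps (each move has length 0.9*eps < eps);
    # Player II never passed the turn, so no eps-payments accumulated.
    return F(x) - payments

rng = random.Random(0)
payoffs = [play(0.5, rng) for _ in range(200)]
```

Under these strategies the token performs a fair random walk, so each play ends almost surely, and the terminal position always lies in the strip \(\Gamma _{\varepsilon }\).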

A strategy \(S_\text {I}\) for Player I is a function defined on the partial histories that gives the next game position \(S_\text {I}{\left( x_0,x_1,\ldots ,x_k\right) }=x_{k+1}\in B_{\varepsilon }(x_k)\) if Player I gets to move the token. Similarly, Player II plays according to a strategy \(S_\text {II}\). In addition, we define a decision variable for Player II, which tells us when Player II decides to pass the turn,

$$\begin{aligned} \begin{aligned} \theta _{\text {II}}(x_0,\ldots ,x_k)= {\left\{ \begin{array}{ll} 1,&{}\text {Player II passes the turn,}\\ 0,&{} \text {otherwise}, \end{array}\right. } \end{aligned} \end{aligned}$$

and one for Player I, which tells us when Player I decides to end the game immediately,

$$\begin{aligned} \begin{aligned} \theta _{\text {I}}(x_0,\ldots ,x_k)= {\left\{ \begin{array}{ll} 1,&{}\text {Player I ends the game,}\\ 0,&{} \text {otherwise}. \end{array}\right. } \end{aligned} \end{aligned}$$

Given the sequence \(x_0,\ldots ,x_k\) with \(x_k\in \Omega \), the game will end immediately when

$$\begin{aligned} \theta _{\text {I}}(x_0,\ldots ,x_k)=\theta _{\text {II}}(x_0,\ldots ,x_k)=1. \end{aligned}$$

Otherwise, the one step transition probabilities will be

$$\begin{aligned} \pi _{S_\text {I},S_\text {II},\theta _{\text {I}},\theta _{\text {II}}}(x_0,\ldots ,x_k,{A})= & {} \displaystyle \big (1-\theta _{\text {II}}(x_0,\ldots ,x_k)\big )\frac{1}{2}\Big ( \delta _{S_\text {I}(x_0,\ldots ,x_k)}({A})+\delta _{S_\text {II}(x_0,\ldots ,x_k)}({A})\Big )\\&+\, \theta _{\text {II}}(x_0,\ldots ,x_k)(1-\theta _{\text {I}}(x_0,\ldots ,x_k))\delta _{S_\text {I}(x_0,\ldots ,x_k)}(A). \end{aligned}$$

By using Kolmogorov’s extension theorem and the one step transition probabilities, we can build a probability measure \({\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II},\theta _\text {I},\theta _\text {II}}\) on the game sequences. The expected payoff, when starting from \(x_0\) and using the strategies \(S_\text {I},S_\text {II},\theta _\text {I},\theta _\text {II}\), is

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\left[ F(x_\tau )-{\varepsilon }\sum _{k=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_k)(1-\theta _{\text {I}}(x_0,\ldots ,x_k))\right] \\&\quad =\int _{H^\infty } \Big (F(x_\tau )-{\varepsilon }\sum _{k=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_k)(1-\theta _{\text {I}}(x_0,\ldots ,x_k))\Big ) \, d{\mathbb {P}}^{x_0}_{S_\text {I},S_\text {II},\theta _\text {I},\theta _\text {II}}, \end{aligned} \end{aligned}$$
(6.1)

where \(F:\Gamma _{\varepsilon }\rightarrow {\mathbb {R}}\) is a given continuous function prescribing the terminal payoff extended as \(F\equiv 0\) in \(\Omega \).

The value of the game for Player I is given by

$$\begin{aligned} u_\text {I}(x_0)=\sup _{S_{\text {I}},\theta _\text {I}}\inf _{S_\text {II},\theta _\text {II}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\left[ F(x_\tau )-{\varepsilon }\sum _{i=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_i)(1-\theta _{\text {I}}(x_0,\ldots ,x_i))\right] \end{aligned}$$

while the value of the game for Player II is given by

$$\begin{aligned} u_\text {II}(x_0)=\inf _{S_\text {II},\theta _\text {II}}\sup _{S_{\text {I}},\theta _\text {I}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\left[ F(x_\tau )-{\varepsilon }\sum _{i=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_i)(1-\theta _{\text {I}}(x_0,\ldots ,x_i))\right] . \end{aligned}$$

Intuitively, the values \(u_\text {I}(x_0)\) and \(u_\text {II}(x_0)\) are the best expected outcomes each player can guarantee when the game starts at \(x_0\). Observe that if the game does not end almost surely, then the expectation (6.1) is undefined. In this case, we define \({\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\) to take value \(-\infty \) when evaluating \(u_\text {I}(x_0)\) and \(+\infty \) when evaluating \(u_\text {II}(x_0)\). If \(u_\text {I}= u_\text {II}\), we say that the game has a value.

6.1 The game value function and its dynamic programming principle

In this section, we prove that the game has a value, i.e. \(u {:}{=}\,u_\text {I}= u_\text {II}\), and that such a value function satisfies the dynamic programming principle (DPP) given by

$$\begin{aligned} \begin{aligned} u(x) = \min \Bigg \{ \frac{1}{2} \left( \sup _{y\in B_{\varepsilon }(x)} u(y) + \inf _{y\in B_{\varepsilon }(x)} u(y) \right) ; \max \Bigg \{0; \sup _{y\in B_{\varepsilon }(x)} u(y) - {\varepsilon }\Bigg \}\Bigg \} \end{aligned} \end{aligned}$$

for \(x\in \Omega \) and \(u(x)=F(x)\) for \(x\in \Gamma _{\varepsilon }\).

Let us see intuitively why this holds. At each step, with the token at a given \(x\in \Omega \), Player II chooses whether to play Tug-of-War or to pass the turn to Player I. In the first case, with probability \(\frac{1}{2}\), Player I gets to move and will try to maximize the expected outcome, and with probability \(\frac{1}{2}\), Player II gets to move and will try to minimize the expected outcome. In this case the expected payoff will be

$$\begin{aligned} \frac{1}{2} \sup _{y\in B_{\varepsilon }(x)}u(y) +\frac{1}{2} \inf _{y\in B_{\varepsilon }(x)} u(y). \end{aligned}$$

On the other hand, if Player II passes the turn to Player I, she will have two options: to end the game immediately obtaining 0 or to move trying to maximize the expected outcome by paying \({\varepsilon }\). Player I will prefer the option that gives the greater payoff, that is, the expected payoff is given by

$$\begin{aligned} \max \Bigg \{0;\sup _{y\in B_{\varepsilon }(x)} u(y) - {\varepsilon }\Bigg \}. \end{aligned}$$

Finally, Player II will decide between the two possible payoffs mentioned here, preferring the one with the minimum payoff.

To prove that the DPP holds for our game, we borrow some ideas from [3] and [12]. We choose a path that allows us to make the presentation self-contained.

We define \(\Omega _{\varepsilon }=\Omega \cup \Gamma _{\varepsilon }\) and a sequence of functions \(u_n:\Omega _{\varepsilon }\rightarrow {\mathbb {R}}\), constructed inductively: we let \(u_n=F\) on \(\Gamma _{\varepsilon }\) for every n,

$$\begin{aligned} u_0=\max _{\Gamma _{\varepsilon }}F \end{aligned}$$

on \(\Omega \) and

$$\begin{aligned} \begin{aligned} u_{n+1}(x) = \min \Bigg \{ \frac{1}{2} \left( \sup _{B_{\varepsilon }(x)} u_n + \inf _{B_{\varepsilon }(x)} u_n \right) ; \max \Bigg \{0; \sup _{B_{\varepsilon }(x)} u_n - {\varepsilon }\Bigg \}\Bigg \} \end{aligned} \end{aligned}$$

on \(\Omega \) for all \(n\in {\mathbb {N}}\).

Let us observe that \(u_0\ge u_1\) and, in addition, if \(u_{n-1}\ge u_n\), by the recursive definition we have \(u_n\ge u_{n+1}\). Then, by induction, we obtain that the sequence of functions is decreasing. By definition, the sequence is bounded below by \(\displaystyle \min \left\{ 0,\min _{\Gamma _{\varepsilon }}F\right\} \). Hence, \(u_n\) converges point-wise to a bounded Borel function u.
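The monotone scheme above is easy to run numerically. The following sketch (with a hypothetical one-dimensional domain \(\Omega = (0,1)\), \({\varepsilon } = 0.5\), grid step, and boundary datum of our own choosing) iterates the recursion and checks at each step that the sequence indeed decreases:

```python
import numpy as np

eps = 0.5
x = np.linspace(-eps, 1.0 + eps, 41)        # grid (step 0.05) on Omega_eps
interior = (x > 1e-9) & (x < 1.0 - 1e-9)    # grid points in Omega = (0, 1)
F = x.copy()                                # continuous datum on Gamma_eps

u = F.copy()
u[interior] = F[~interior].max()            # u_0 = max_{Gamma_eps} F on Omega

residual = np.inf
for _ in range(2000):
    u_new = u.copy()
    for i in np.where(interior)[0]:
        ball = np.abs(x - x[i]) < eps       # grid points in B_eps(x_i)
        s, m = u[ball].max(), u[ball].min()
        u_new[i] = min(0.5 * (s + m), max(0.0, s - eps))
    assert np.all(u_new <= u + 1e-12)       # the sequence u_n is decreasing
    residual = np.max(np.abs(u_new - u))
    u = u_new
```

After the loop, the iterates have stabilized and stay within the bounds \(\min \{0, \min _{\Gamma _{\varepsilon }}F\}\) and \(\max _{\Gamma _{\varepsilon }}F\) noted above.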

We want to prove that the limit u satisfies the dynamic programming principle. We can attempt to do that by passing to the limit in the recursive formula. Since \(u_n\) is a decreasing sequence that converges point-wise to u, we can show that

$$\begin{aligned} \inf _{B_{\varepsilon }(x)} u_n \rightarrow \inf _{B_{\varepsilon }(x)} u. \end{aligned}$$

However, this convergence is not immediate for the supremum. This is why, in order to be able to pass to the limit in the recursive formula, we want to show that the sequence converges uniformly. To this end, let us first prove an auxiliary lemma.
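To see why the supremum is problematic, consider a simple one-dimensional example (ours, for illustration only): on the ball \(B_1(0) \subset {\mathbb {R}}\), the decreasing sequence

$$\begin{aligned} u_n(y) = \left( \max \{y, 0\}\right) ^n \searrow u \equiv 0 \quad \text{ point-wise } \text{ in } B_1(0), \qquad \text{ while } \quad \sup _{B_1(0)} u_n = 1 \not \rightarrow 0 = \sup _{B_1(0)} u. \end{aligned}$$

Uniform convergence rules out exactly this behaviour.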

Lemma 6.1

Let \(x\in \Omega \), \(n\in {\mathbb {N}}\) and \(\delta >0\), and fix \(\lambda _1\) and \(\lambda _2\) such that

$$\begin{aligned} u_n(x)-u_{n+1}(x)\ge \lambda _1,\quad \Vert u_{n-1}-u_n\Vert _\infty \le \lambda _2. \end{aligned}$$

Then, there exists \(y\in B_{\varepsilon }(x)\) such that

$$\begin{aligned} \lambda _2-2\lambda _1+\delta +u_{n-1}(y) \ge \sup _{B_{\varepsilon }(x)}u_n. \end{aligned}$$

Proof

Given \(\lambda _1\le u_n(x)-u_{n+1}(x)\), by the recursive definition, we have

$$\begin{aligned} \lambda _1\le&\min \Bigg \{ \frac{1}{2} \Bigg (\sup _{ B_{\varepsilon }(x)} u_{n-1}+ \inf _{ B_{\varepsilon }(x)} u_{n-1} \Bigg ); \max \Bigg \{0; \sup _{B_{\varepsilon }(x)} u_{n-1}- {\varepsilon }\Bigg \}\Bigg \}\\&- \min \Bigg \{ \frac{1}{2} \Bigg (\sup _{B_{\varepsilon }(x)}u_n+ \inf _{B_{\varepsilon }(x)} u_n \Bigg ) ; \max \Bigg \{0; \sup _{B_{\varepsilon }(x)} u_n- {\varepsilon }\Bigg \}\Bigg \}. \end{aligned}$$

From the standard inequalities

$$\begin{aligned} \min \{a,b\}{-}\min \{c,d\}\le \max \{a{-}c,b{-}d\} \;\; \text{ and } \;\; \max \{a,b\}{-}\max \{c,d\}\le \max \{a{-}c,b{-}d\}, \end{aligned}$$

we get

$$\begin{aligned} \lambda _1{\le } \max \Bigg \{ \frac{1}{2} \Bigg (\sup _{B_{\varepsilon }(x)}u_{n-1} {+} \inf _{B_{\varepsilon }(x)} u_{n-1} \Bigg ) {-}\frac{1}{2} \Bigg (\sup _{B_{\varepsilon }(x)}u_n {+} \inf _{B_{\varepsilon }(x)} u_n \Bigg );0; \sup _{B_{\varepsilon }(x)}u_{n-1} {-}\sup _{B_{\varepsilon }(x)}u_n \Bigg \}. \end{aligned}$$

Since \(u_{n-1}\ge u_n\), the last term on the right-hand side is non-negative, so we can discard the 0 and obtain

$$\begin{aligned} \lambda _1\le \frac{1}{2} \Bigg ( \sup _{B_{\varepsilon }(x)}u_{n-1} -\sup _{B_{\varepsilon }(x)}u_n \Bigg ) +\frac{1}{2} \max \Bigg \{ \inf _{B_{\varepsilon }(x)} u_{n-1} - \inf _{B_{\varepsilon }(x)} u_n; \sup _{B_{\varepsilon }(x)}u_{n-1} -\sup _{B_{\varepsilon }(x)}u_n \Bigg \}. \end{aligned}$$

Bounding the differences between the suprema and between the infima by \(\Vert u_{n-1}-u_n\Vert _\infty \le \lambda _2\), we obtain

$$\begin{aligned} 2\lambda _1\le \Bigg (\sup _{B_{\varepsilon }(x)}u_{n-1} -\sup _{B_{\varepsilon }(x)}u_n \Bigg )+\lambda _2, \end{aligned}$$

that is,

$$\begin{aligned} 2\lambda _1-\lambda _2 +\sup _{B_{\varepsilon }(x)}u_n \le \sup _{B_{\varepsilon }(x)}u_{n-1}. \end{aligned}$$

Finally, we can choose \(y\in B_{\varepsilon }(x)\) such that

$$\begin{aligned} u_{n-1}(y)+\delta \ge \sup _{B_{\varepsilon }(x)}u_{n-1} \end{aligned}$$

which gives the desired inequality. \(\square \)

Proposition 6.2

The sequence \(u_n\) converges uniformly, and the limit u is a solution to the DPP.

Proof

We want to show that the convergence is uniform; suppose, for the sake of contradiction, that it is not. Observe that if \(\Vert u_n-u_{n+1}\Vert _\infty \rightarrow 0\), we can extract a uniformly Cauchy subsequence, and thus this subsequence converges uniformly to a limit u. This implies that \(u_n\) converges uniformly to u, because of the monotonicity. By the recursive definition we have \(0 \le \Vert u_n-u_{n+1}\Vert _\infty \le \Vert u_{n-1}-u_n \Vert _\infty \). Then, as we are assuming the convergence is not uniform, we have

$$\begin{aligned} \Vert u_n-u_{n+1}\Vert _\infty \rightarrow M \quad \text{ and } \quad \Vert u_n-u_{n+1}\Vert _\infty \ge M \end{aligned}$$

for some \(M>0\).

Given \(\delta >0\), let \(n_0\in {\mathbb {N}}\) be such that for all \(n\ge n_0\),

$$\begin{aligned} \Vert u_n-u_{n+1}\Vert _\infty \le M+\delta . \end{aligned}$$

We fix \(k\in {\mathbb {N}}\). Let \(x_0\in \Omega \) be such that

$$\begin{aligned} M-\delta <u_{n_0+k-1}(x_0)-u_{n_0+k}(x_0). \end{aligned}$$

Now we apply Lemma 6.1 with \(n=n_0+k-1\), \(\lambda _1=M-\delta \) and \(\lambda _2=M+\delta \), and we get

$$\begin{aligned} u_{n_0+k-1}(x_0),u_{n_0+k-1}(x_1)&\le \sup _{B_{\varepsilon }(x_0)}u_{n_0+k-1}\\&\le u_{n_0+k-2}(x_1)+\lambda _2-2\lambda _1+\delta \\&\le u_{n_0+k-2}(x_1)+4\delta -M \end{aligned}$$

for some \(x_1\in B_{\varepsilon }(x_0)\). If we repeat the argument for \(x_1\), but now with \(\lambda _1=M-4\delta \), we obtain

$$\begin{aligned} u_{n_0+k-2}(x_1),u_{n_0+k-2}(x_2)\le u_{n_0+k-3}(x_2)+(2\cdot 4+2)\delta -M = u_{n_0+k-3}(x_2)+10\delta -M. \end{aligned}$$

Inductively, we obtain a sequence \(x_l\), \(1\le l \le k-1\) such that

$$\begin{aligned} u_{n_0+k-l}(x_{l-1}),u_{n_0+k-l}(x_l)\le u_{n_0+k-l-1}(x_l)+(3\times 2^l-2)\delta -M. \end{aligned}$$

If we add the inequalities

$$\begin{aligned} u_{n_0+k-l}(x_{l-1})\le u_{n_0+k-l-1}(x_l)+(3\times 2^l-2)\delta -M \end{aligned}$$

for \(1\le l \le k-1\) and \(u_{n_0+k}(x_0)\le u_{n_0+k-1}(x_0)+\delta -M\), we get

$$\begin{aligned} u_{n_0+k}(x_0)-u_{n_0}(x_{k-1})\le (3\times 2^k-2k-3)\delta -kM, \end{aligned}$$

which is a contradiction: \(u_n\) is bounded, but we can make the RHS as negative as we want by first choosing k large and then \(\delta \) small (depending on k). \(\square \)

Now, we are ready to prove one of the main results of this section.

Theorem 6.3

(Dynamic Programming Principle) The game has a value \(u=u_\text {I}= u_\text {II}\), and it satisfies

$$\begin{aligned} \begin{aligned} u(x) = \min \Bigg \{ \frac{1}{2} \left( \sup _{y\in B_{\varepsilon }(x)} u(y) + \inf _{y\in B_{\varepsilon }(x)} u(y)\right) ; \max \Bigg \{0; \sup _{y\in B_{\varepsilon }(x)} u(y) - {\varepsilon }\Bigg \}\Bigg \} \end{aligned} \end{aligned}$$

for \(x\in \Omega \) and \(u(x)=F(x)\) in \(\Gamma _{\varepsilon }\).

Proof

By definition, \(u_\text {I}\le u_\text {II}\). We will show that \(u_\text {II}\le u\) and \(u\le u_\text {I}\) for the u constructed in Proposition 6.2. This, together with the fact that u satisfies the DPP, will complete the proof. For the first inequality we will use the constructed sequence of functions \(u_n\) as in [3]. For the second inequality we will use an argument similar to the one in [12].

We want to show that \(u_\text {II}\le u\). Given \(\eta >0\), let \(n>0\) be such that \(u_n(x_0)<u(x_0)+\frac{\eta }{2}\). We build a strategy (\(S^0_\text {II}, \theta ^0_\text {II}\)) for Player II: in the first n moves, given \(x_{k-1}\), she will choose to play Tug-of-War or to pass the turn depending on whether

$$\begin{aligned} \frac{1}{2} \left( \inf _{B_{\varepsilon }(x_{k-1})} u_{n-k}+\sup _{B_{\varepsilon }(x_{k-1})} u_{n-k}\right) \quad \text{ or }\quad \max \Bigg \{0;\sup _{B_{\varepsilon }(x_{k-1})} u_{n-k}-{\varepsilon }\Bigg \} \end{aligned}$$

is larger. When playing Tug-of-War, she will move to a point that almost minimizes \(u_{n-k}\), that is, she chooses \(x_k\in B_{\varepsilon }(x_{k-1})\) such that

$$\begin{aligned} u_{n-k}(x_k)<\inf _{B_{\varepsilon }(x_{k-1})}u_{n-k}+\frac{\eta }{2n}. \end{aligned}$$

After the first n moves, she will choose to play Tug-of-War following a strategy that ends the game almost surely (for example, pulling in a fixed direction).

We have

$$\begin{aligned} \begin{aligned}&{\mathbb {E}}_{S_\text {I}, S^0_\text {II}}^{x_0}\left[ u_{n-k}(x_k)+\frac{(n-k)\eta }{2n}\,\Big |\,x_0,\ldots ,x_{k-1}\right] \\&\quad \le \min \left\{ \frac{1}{2} \Bigg ( \inf _{B_{\varepsilon }(x_{k-1})} u_{n-k}{+}\sup _{B_{\varepsilon }(x_{k-1})} u_{n-k} {+} \frac{\eta }{n}\Bigg );\max \Bigg \{0;\sup _{B_{\varepsilon }(x_{k-1})} u_{n-k}{-}{\varepsilon }\Bigg \}\right\} {+}\frac{(n-k)\eta }{2n} \\&\quad \le u_{n-k+1}(x_{k-1})+\frac{(n-k+1)\eta }{2n}, \end{aligned} \end{aligned}$$

where we have estimated the strategy of Player I by \(\sup \) and used the construction for the \(u_k\)’s. Thus

$$\begin{aligned} M_k= \left\{ \begin{array}{ll} \displaystyle u_{n-k}(x_k)+\frac{(n-k)\eta }{2n} \qquad &{} \text{ for } 0\le k\le n, \\ \displaystyle \sup _{\Gamma _{\varepsilon }} F \qquad &{} \text{ for } k>n, \end{array} \right. \end{aligned}$$

is a supermartingale.

Now we have

$$\begin{aligned} \begin{aligned} u_\text {II}(x_0)&=\inf _{S_\text {II},\theta _\text {II}}\sup _{S_{\text {I}},\theta _\text {I}}\, {\mathbb {E}}_{S_{\text {I}},S_\text {II},\theta _\text {I},\theta _\text {II}}^{x_0}\left[ F(x_\tau )-{\varepsilon }\sum _{i=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_i)(1-\theta _{\text {I}}(x_0,\ldots ,x_i))\right] \\&\le \sup _{S_{\text {I}},\theta _\text {I}}\, {\mathbb {E}}_{S_{\text {I}},S^0_\text {II},\theta _\text {I},\theta ^0_\text {II}}^{x_0}\left[ F(x_\tau )-{\varepsilon }\sum _{i=0}^{\tau -1} \theta _{\text {II}}(x_0,\ldots ,x_i)(1-\theta _{\text {I}}(x_0,\ldots ,x_i))\right] \\&\le \sup _{S_{\text {I}},\theta _\text {I}} \liminf _{k\rightarrow \infty }{\mathbb {E}}_{S_{\text {I}},S^0_\text {II},\theta _\text {I},\theta ^0_\text {II}}^{x_0}[M_{\tau \wedge k}]\\&\le \sup _{S_{\text {I}},\theta _\text {I}} {\mathbb {E}}_{S_{\text {I}},S^0_\text {II},\theta _\text {I},\theta ^0_\text {II}}^{x_0}[M_0]=u_n(x_0)+\frac{\eta }{2}<u(x_0)+\eta , \end{aligned} \end{aligned}$$
(6.2)

where \(\tau \wedge k {:}{=}\,\min \{\tau ,k\}\), and we used the optional stopping theorem for \(M_{k}\). Since \(\eta \) is arbitrary, this proves the claim.

Now, we will show that \(u\le u_\text {I}\). We want to find a strategy (\(S^0_\text {I}, \theta ^0_\text {I}\)) for Player I that ensures a payoff close to u. He has to maximize the expected payoff and, at the same time, make sure that the game ends almost surely. This is done by using a backtracking strategy (cf. [20, Theorem 2.2] for more details).

To that end, we define

$$\begin{aligned} \delta (x)=\sup _{B_{\varepsilon }(x)} u -u(x). \end{aligned}$$

Fix \(\eta >0\) and a starting point \(x_0\in \Omega \), and set \(\delta _0 =\min \{\delta (x_0),{\varepsilon }\}/2\). We suppose for now that \(\delta _0>0\), and define

$$\begin{aligned} \begin{aligned} X_0=\Big \{x\in \Omega \,:\, \delta (x)> \delta _0\Big \}. \end{aligned} \end{aligned}$$

We consider a strategy \(S_I^0\) for Player I that distinguishes between the cases \(x_k\in X_0\) and \(x_k\notin X_0\). To that end, we define

$$\begin{aligned} m_k= {\left\{ \begin{array}{ll} u(x_k)-\eta 2^{-k} &{} \text{ if } x_k\in X_0\\ u(y_k)-\delta _0 d_k-\eta 2^{-k} &{} \text{ if } x_k \notin X_0 \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} M_k=m_k-{\varepsilon }\sum _{i=0}^{k-1} \theta _{\text {II}}(x_0,\ldots ,x_i)(1-\theta _{\text {I}}(x_0,\ldots ,x_i)) \end{aligned}$$

where \(y_k\) denotes the last game position in \(X_0\) up to time k, and \(d_k\) is the distance, measured in number of steps, from \(x_k\) to \(y_k\) along the graph spanned by the previous points \(y_k=x_{k-j},x_{k-j+1},\ldots ,x_k\) that were used to get from \(y_k\) to \(x_k\).

In what follows we define a strategy for Player I and prove that \(M_k\) is a submartingale. Observe that \(M_{k+1}-m_{k+1}=M_k-m_k\) or \(M_{k+1}-m_{k+1}=M_k-m_k-{\varepsilon }\), so to prove the desired submartingale property we will mostly make computations in terms of \(m_k\).

First, if \(x_k\in X_0\), then Player I chooses to step to a point \(x_{k+1}\) satisfying

$$\begin{aligned} u(x_{k+1})\ge \sup _{B_{{\varepsilon }}(x_k)} u-\eta _{k+1} 2^{-(k+1)}, \end{aligned}$$

where \(\eta _{k+1}\in (0,\eta ]\) is small enough to guarantee that \(x_{k+1}\in X_0\). Let us remark that

$$\begin{aligned} u(x)-\inf _{B_{\varepsilon }(x)}u \le \sup _{B_{\varepsilon }(x)}u-u(x)=\delta (x) \end{aligned}$$
(6.3)

and hence

$$\begin{aligned} \begin{array}{lll} \delta (x_k)-\eta _{k+1} 2^{-(k+1)} &{} \le &{} \displaystyle \sup _{B_{\varepsilon }(x_k)} u -u(x_k)-\eta _{k+1} 2^{-(k+1)} \le u(x_{k+1})-u(x_k) \\ &{} \le &{} \displaystyle u(x_{k+1})-\inf _{B_{\varepsilon }(x_{k+1})}u \le \delta (x_{k+1}). \end{array} \end{aligned}$$

Therefore, we can guarantee that \(x_{k+1}\in X_0\) by choosing \(\eta _{k+1}\) such that

$$\begin{aligned} \delta _0<\delta (x_k)-\eta _{k+1} 2^{-(k+1)}. \end{aligned}$$

Thus if \(x_k\in X_0\) and Player I gets to choose the next position, it holds that

$$\begin{aligned} \begin{aligned} m_{k+1}&\ge u(x_k)+\delta (x_k)-\eta _{k+1} 2^{-(k+1)}-\eta 2^{-(k+1)}\\&\ge u(x_k)+\delta (x_k)-\eta 2^{-k}\\&=m_k+\delta (x_k). \end{aligned} \end{aligned}$$

When Tug-of-War is played, if Player II wins the toss and moves from \(x_k\in X_0\) to \(x_{k+1}\in X_0\), it holds, in view of (6.3), that

$$\begin{aligned} \begin{aligned} m_{k+1}\ge u(x_k)-\delta (x_k)-\eta 2^{-(k+1)}>m_k-\delta (x_k). \end{aligned} \end{aligned}$$

If Player II wins the toss and she moves to a point \(x_{k+1}\notin X_0\) (whether \(x_k\in X_0\) or not), it holds that

$$\begin{aligned} \begin{aligned} m_{k+1}&= u(y_k)-d_{k+1} \delta _0-\eta 2^{-(k+1)}\\&\ge u(y_k)-d_{k} \delta _0-\delta _0-\eta 2^{-k}\\&=m_k-\delta _0 . \end{aligned} \end{aligned}$$
(6.4)

When Player II passes the turn to Player I, he can choose to end the game immediately or to move by paying \({\varepsilon }\). If \(\delta (x_k)\ge {\varepsilon }\), he will choose to play, and we get \(M_{k+1}\ge M_k+\delta (x_k)-{\varepsilon }\ge M_k\). If \({\varepsilon }>\delta (x_k)\), the DPP implies that \(0\ge u(x_k)\), and hence he can finish the game immediately, earning more than \(m_k\).

In the case \(x_k\notin X_0\), the strategy for Player I is to backtrack to \(y_k\), that is, if he wins the coin toss, he moves the token to one of the points \(x_{k-j},x_{k-j+1},\ldots ,x_{k-1}\) closer to \(y_k\) so that \(d_{k+1}= d_k-1\).

Thus if Player I wins and \(x_k\notin X_0\) (whether \(x_{k+1}\in X_0\) or not),

$$\begin{aligned} \begin{aligned} m_{k+1}\ge \delta _0+m_k. \end{aligned} \end{aligned}$$

When Tug-of-War is played, if Player II wins the coin toss and moves from \(x_k\notin X_0\) to \(x_{k+1}\in X_0\), then

$$\begin{aligned} \begin{aligned} m_{k+1}=u(x_{k+1})-\eta 2^{-(k+1)}\ge -\delta (x_k)+u(x_k)-\eta 2^{-k}\ge -\delta _0+m_k \end{aligned} \end{aligned}$$

where the first inequality is due to (6.3), and the second follows from the fact \(m_k=u(y_k)-d_k\delta _0-\eta 2^{-k}\le u(x_k)-\eta 2^{-k}\). The same was obtained in (6.4) when \(x_{k+1}\notin X_0\).

It remains to analyze what happens when Player II passes the turn to Player I in this case. Since \(\delta (x_k)\le {\varepsilon }/2<{\varepsilon }\), we have \(0\ge u(x_k)\) and as before he can finish the game immediately earning more than \(m_k\).

Taking into account all the different cases, we see that \(M_k\) is a submartingale. We can also see that, when the game ends, Player I ensures a payoff of at least \(M_k\). Let us observe that \(m_k\) is also a submartingale, and it is bounded. Since Player I can ensure that \(m_{k+1}\ge m_k+\delta _0\) whenever he gets to move the token, the game must terminate almost surely. Indeed, there are arbitrarily long sequences of moves made by Player I (if he does not end the game immediately): if Player II passes a turn, then Player I gets to move, and otherwise this is a consequence of the zero-one law.

We can now conclude the proof with an inequality analogous to that in (6.2).

Finally, let us remove the assumption that \(\delta (x_0)>0\). If \(\delta (x_0)=0\) for \(x_0\in \Omega \), when Tug-of-War is played, Player I adopts a strategy of pulling towards a boundary point until the game token reaches a point \(x_0'\) such that \(\delta (x_0')>0\) or \(x_0'\) is outside \(\Omega \). It holds that \(u(x_0)= u(x_0')\) by (6.3). If Player II passes the turn, Player I ends the game immediately, earning 0 (recall that \(\delta (x)=0\) implies \(0\ge u(x)\) because of the DPP). \(\square \)

6.2 Game value convergence

In this subsection we study the behaviour of the game values as \({\varepsilon }\rightarrow 0\). In the previous sections we analyzed the game for a fixed value of \({\varepsilon }\); here we consider the game value for different values of \({\varepsilon }\). For this purpose, we will write the game value as \(u^{\varepsilon }\), emphasizing its dependence on \({\varepsilon }\). We want to prove that

$$\begin{aligned} u^{\varepsilon }\rightarrow u \end{aligned}$$

uniformly on \(\overline{\Omega }\) as \({\varepsilon }\rightarrow 0\), and that u is a viscosity solution to

$$\begin{aligned} \left\{ \begin{array}{lllll} \max \{-\Delta _\infty u, \chi _{\{u>0\}}-\left| \nabla u\right| \} &{} = &{} 0 &{} \text{ in } &{} \Omega \\ u(x) &{} = &{} F(x) &{} \text{ on } &{} \partial \Omega , \end{array} \right. \end{aligned}$$
(6.5)

To this end, we would like to apply the following Arzelà-Ascoli type lemma. We refer the interested reader to [18, Lemma 4.2] for a proof.

Lemma 6.4

Let \(\{u^{\varepsilon }: {\overline{\Omega }} \rightarrow {\mathbb {R}},\ {\varepsilon }>0\}\) be a set of functions such that

  1.

    there exists \(C>0\) such that \(\left| u^{\varepsilon }(x)\right| <C\) for every \({\varepsilon }>0\) and every \(x \in \overline{\Omega }\),

  2.

    given \(\eta >0\) there are constants \(r_0\) and \({\varepsilon }_0\) such that for every \({\varepsilon }< {\varepsilon }_0\) and any \(x, y \in {\overline{\Omega }}\) with \(|x - y | < r_0 \) it holds

    $$\begin{aligned} |u^{\varepsilon }(x) - u^{\varepsilon }(y)| < \eta . \end{aligned}$$

Then, there exists a uniformly continuous function \(u: {\overline{\Omega }} \rightarrow {\mathbb {R}}\) and a subsequence still denoted by \(\{u^{\varepsilon }\}\) such that

$$\begin{aligned} \begin{aligned} u^{{\varepsilon }}\rightarrow u \qquad \text { uniformly in}\quad {\overline{\Omega }}, \end{aligned} \end{aligned}$$

as \({\varepsilon }\rightarrow 0\).
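Before applying the lemma to the game values, it may help to see its two hypotheses at work on a toy family (a hypothetical example, unrelated to the games above): \(u^{\varepsilon }(x)=\sqrt{x^2+{\varepsilon }^2}\) on \([-1,1]\) is uniformly bounded, each member is 1-Lipschitz (so the family is asymptotically uniformly continuous with constants independent of \({\varepsilon }\)), and \(u^{\varepsilon }\rightarrow |x|\) uniformly with sup-error exactly \({\varepsilon }\). A minimal numerical sketch:

```python
import math

def u_eps(x, eps):
    # Smoothed absolute value: a family satisfying both hypotheses of Lemma 6.4.
    return math.sqrt(x * x + eps * eps)

grid = [i / 1000.0 - 1.0 for i in range(2001)]  # uniform grid on [-1, 1]

for eps in (0.1, 0.01, 0.001):
    # Hypothesis 1: uniform bound |u_eps| <= C, here with C = sqrt(2).
    assert all(abs(u_eps(x, eps)) <= math.sqrt(2) for x in grid)
    # Hypothesis 2 (equicontinuity): each member is 1-Lipschitz.
    lip = max(abs(u_eps(grid[i + 1], eps) - u_eps(grid[i], eps)) /
              (grid[i + 1] - grid[i]) for i in range(len(grid) - 1))
    assert lip <= 1.0 + 1e-9
    # Uniform convergence to u(x) = |x| with rate eps.
    err = max(abs(u_eps(x, eps) - abs(x)) for x in grid)
    print(f"eps = {eps:6.3f}   sup-error = {err:.4f}")
    assert err <= eps + 1e-12
```

Here the full limit exists, so no subsequence extraction is needed; in general the lemma only yields a uniformly convergent subsequence.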

So our task now is to show that the family \(u^{\varepsilon }\) satisfies the hypotheses of the previous lemma. In the next lemma, we prove that the family is asymptotically uniformly continuous, that is, it satisfies condition 2 of Lemma 6.4. To do that we follow [12].

Lemma 6.5

The family \(u^{\varepsilon }\) is asymptotically uniformly continuous.

Proof

We prove the required oscillation estimate by arguing by contradiction. We define

$$\begin{aligned} A(x) {:}{=}\,\sup _{B_{\varepsilon }(x)} u^{\varepsilon }- \inf _{ B_{\varepsilon }(x)} u^{\varepsilon }. \end{aligned}$$

We claim that

$$\begin{aligned} A(x) \le 4 \max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }, \end{aligned}$$

for all \(x\in \Omega \). Aiming for a contradiction, suppose that there exists \(x_0\in \Omega \) such that

$$\begin{aligned} A(x_0) > 4 \max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }. \end{aligned}$$

In this case, we have that

$$\begin{aligned} \begin{aligned} u^{\varepsilon }(x_0)&= \min \Bigg \{ \frac{1}{2} \Bigg ( \sup _{B_{\varepsilon }(x_0)} u^{\varepsilon }+ \inf _{B_{\varepsilon }(x_0)} u^{\varepsilon }\Bigg ); \max \Bigg \{0; \sup _{B_{\varepsilon }(x_0)} u^{\varepsilon }- {\varepsilon }\Bigg \}\Bigg \}\\&= \frac{1}{2} \Bigg (\sup _{ B_{\varepsilon }(x_0)}u^{\varepsilon }+ \inf _{ B_{\varepsilon }(x_0)} u^{\varepsilon }\Bigg ). \end{aligned} \end{aligned}$$
(6.6)

The reason is that the alternative

$$\begin{aligned} \begin{aligned} \frac{1}{2} \Bigg ( \sup _{ B_{\varepsilon }(x_0)}u^{\varepsilon }+ \inf _{ B_{\varepsilon }(x_0)} u^{\varepsilon }\Bigg )&> \max \Bigg \{0; \sup _{ B_{\varepsilon }(x_0)} u^{\varepsilon }- {\varepsilon }\Bigg \}\\&\ge \sup _{ B_{\varepsilon }(x_0)} u^{\varepsilon }- {\varepsilon }\end{aligned} \end{aligned}$$

would imply

$$\begin{aligned} \begin{aligned} A(x_0) =\sup _{ B_{\varepsilon }(x_0)}u^{\varepsilon }- \inf _{ B_{\varepsilon }(x_0)}u^{\varepsilon }< 2{\varepsilon }, \end{aligned} \end{aligned}$$
(6.7)

which is a contradiction with \(A(x_0) > 4 \max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }\). It follows from (6.6) that

$$\begin{aligned} \sup _{ B_{\varepsilon }(x_0)}u^{\varepsilon }-u^{\varepsilon }(x_0) =u^{\varepsilon }(x_0) -\inf _{ B_{\varepsilon }(x_0)}u^{\varepsilon }=\frac{1}{2} A(x_0) . \end{aligned}$$

Let \(\eta >0\) and take \(x_1 \in B_{\varepsilon }(x_0)\) such that

$$\begin{aligned} u^{\varepsilon }(x_1) \ge \sup _{ B_{\varepsilon }(x_0)}u^{\varepsilon }- \frac{\eta }{2}. \end{aligned}$$

We obtain

$$\begin{aligned} u^{\varepsilon }(x_1) -u^{\varepsilon }(x_0) \ge \frac{1}{2} A(x_0)- \frac{\eta }{2} \ge 2\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \frac{\eta }{2}, \end{aligned}$$
(6.8)

and, since \(x_0\in B_{\varepsilon }(x_1)\), also

$$\begin{aligned} \sup _{ B_{\varepsilon }(x_1)}u^{\varepsilon }- \inf _{ B_{\varepsilon }(x_1)}u^{\varepsilon }\ge 2\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \frac{\eta }{2}. \end{aligned}$$

Arguing as before, (6.6) also holds at \(x_1\), since otherwise the above inequality would lead to a contradiction similar to (6.7) for small enough \(\eta \).

Thus, (6.8) and (6.6) imply

$$\begin{aligned} \sup _{ B_{\varepsilon }(x_1)}u^{\varepsilon }-u^{\varepsilon }(x_1) = u^{\varepsilon }(x_1)- \inf _{ B_{\varepsilon }(x_1)}u^{\varepsilon }\ge 2\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \frac{\eta }{2}, \end{aligned}$$

so that

$$\begin{aligned} A(x_1) = \sup _{ B_{\varepsilon }(x_1)}u^{\varepsilon }-u^{\varepsilon }(x_1) +u^{\varepsilon }(x_1) - \inf _{ B_{\varepsilon }(x_1)} u^{\varepsilon }\ge 4\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \eta . \end{aligned}$$

Iterating this procedure, we obtain \(x_i \in B_{\varepsilon }(x_{i-1})\) such that

$$\begin{aligned} u^{\varepsilon }(x_i) -u^{\varepsilon }(x_{i-1}) \ge 2\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \frac{\eta }{2^i} \end{aligned}$$
(6.9)

and

$$\begin{aligned} A(x_i) \ge 4\max \{ \mathrm{Lip}(F) ; 1 \} {\varepsilon }- \sum _{j=0}^{i-1} \frac{\eta }{2^j}. \end{aligned}$$
(6.10)

We can proceed with an analogous argument considering points where the infimum is nearly attained to obtain \(x_{-1}\), \(x_{-2}\),... such that \(x_{-i} \in B_{\varepsilon }(x_{-(i-1)})\), and (6.9) and (6.10) hold. Since \(u^{\varepsilon }\) is bounded, there must exist k and l such that \(x_k, x_{-l}\in \Gamma _{\varepsilon }\), and we have

$$\begin{aligned} \displaystyle \frac{|F (x_k) - F(x_{-l}) | }{| x_k - x_{-l}|} \displaystyle \ge \frac{\displaystyle \sum \limits _{j=-l+1}^k u^{\varepsilon }(x_{j}) -u^{\varepsilon }(x_{j-1}) }{{\varepsilon }(k+l)} \displaystyle \ge 2\max \{ \mathrm{Lip}(F) ; 1 \} - \frac{2\eta }{{\varepsilon }}, \end{aligned}$$

a contradiction. Therefore

$$\begin{aligned} A(x) \le 4 \max \{\mathrm{Lip}(F) ; 1 \} {\varepsilon }, \end{aligned}$$

for every \(x\in \Omega \). \(\square \)

Lemma 6.6

Let \(u^{\varepsilon }\) be a family of game values for a Lipschitz continuous boundary data F. Then, there exists a Lipschitz continuous function u such that, up to selecting a subsequence,

$$\begin{aligned} \begin{aligned} u^{\varepsilon }\rightarrow u \quad \text {uniformly in }\overline{\Omega }\end{aligned} \end{aligned}$$

as \({\varepsilon }\rightarrow 0\).

Proof

By always choosing to play Tug-of-War and moving with any strategy that ends the game almost surely (such as pulling in a fixed direction), Player II can ensure that the final payoff is at most \(\max _{\Gamma _{\varepsilon }}F\). Similarly, by ending the game immediately if given the option, and moving with any strategy that ends the game almost surely when Tug-of-War is played, Player I can ensure that the final payoff is at least \(\displaystyle \min \{0,\min _{\Gamma _{\varepsilon }}F\}\). We have

$$\begin{aligned} \displaystyle \min \{0,\min _{\Gamma _{\varepsilon }} F\}\le u^{\varepsilon }\le \max _{\Gamma _{\varepsilon }}F. \end{aligned}$$

This, together with Lemma 6.5, shows that the family \(u^{\varepsilon }\) satisfies the hypotheses of Lemma 6.4. \(\square \)

Theorem 6.7

The function u obtained as a limit in Lemma 6.6 is a viscosity solution to (6.5).

Proof

First, we observe that \(u=F\) on \(\partial \Omega \), since \(u^{\varepsilon }=F\) on \(\partial \Omega \) for every \({\varepsilon }>0\). Hence, we can focus our attention on showing that u satisfies the equation inside \(\Omega \) in the viscosity sense.

To this end, we obtain the following asymptotic expansions, as in [17]. Choose a point \(x\in \Omega \) and a \(C^2\)-function \(\psi \) defined in a neighbourhood of x. Note that, since \(\psi \) is continuous, we have

$$\begin{aligned} \min _{\overline{B}_{{\varepsilon }}(x)} \psi = \inf _{ B_{{\varepsilon }}(x)} \psi \quad \quad \text{ and } \quad \quad \max _{\overline{B}_{{\varepsilon }}(x)} \psi = \sup _{ B_{{\varepsilon }}(x)} \psi \end{aligned}$$

for all \(x\in \Omega \). Let \(x_1^{\varepsilon }\) and \(x_2^{\varepsilon }\) be a minimum point and a maximum point, respectively, for \(\psi \) in \(\overline{B}_{\varepsilon }(x)\). It follows from the Taylor expansions in [17] that

$$\begin{aligned}&\frac{1}{2}\left( \max _{y\in \overline{B}_{\varepsilon }(x)} \psi (y) + \min _{y\in \overline{B}_{\varepsilon }(x)} \psi (y)\right) -\psi (x)\nonumber \\&\quad \ge {\varepsilon }^2\Big \langle D^2\psi (x)\left( \frac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) , \left( \frac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) \Big \rangle +o({\varepsilon }^2). \end{aligned}$$
(6.11)

and

$$\begin{aligned}&\max _{y\in \overline{B}_{\varepsilon }(x)} \psi (y) -{\varepsilon }- \psi (x)\ge \left( D \psi (x)\cdot \tfrac{x_2^{\varepsilon }-x}{{\varepsilon }}-1\right) {\varepsilon }\nonumber \\&\quad +\,\frac{{\varepsilon }^2}{2} D^2\psi (x)\left( \tfrac{x_2^{{\varepsilon }}-x}{{\varepsilon }}\right) \cdot \left( \tfrac{x_2^{{\varepsilon }}-x}{{\varepsilon }}\right) +o({\varepsilon }^2). \end{aligned}$$
(6.12)

Suppose that \(u-\psi \) has a strict local minimum at x. We want to prove that

$$\begin{aligned} \max \{-\Delta _\infty \psi (x), \chi _{\{u\ge 0\}}(x)-\left| \nabla \psi (x)\right| \}\ge 0. \end{aligned}$$

If \(\nabla \psi (x)=0\), we have \(-\Delta _\infty \psi (x)=0\) and hence the inequality holds. Thus, we can assume \(\nabla \psi (x)\ne 0\). By the uniform convergence, there exists a sequence \(x_{{\varepsilon }}\) converging to x such that \(u^{{\varepsilon }} - \psi \) has an approximate minimum at \(x_{{\varepsilon }}\), that is, given \(\eta _{\varepsilon }>0\), there exists \(x_{{\varepsilon }}\) such that

$$\begin{aligned} u^{{\varepsilon }} (x) - \psi (x) \ge u^{{\varepsilon }} (x_{{\varepsilon }}) - \psi (x_{{\varepsilon }})-\eta _{\varepsilon }. \end{aligned}$$

Moreover, considering \({\tilde{\psi }}= \psi - (\psi (x_{{\varepsilon }}) - u^{{\varepsilon }} (x_{{\varepsilon }}))\), we can assume that \(\psi (x_{{\varepsilon }}) = u^{{\varepsilon }} (x_{{\varepsilon }})\).

If \(u(x)<0\), we have to show that

$$\begin{aligned} -\Delta _\infty \psi (x)\ge 0. \end{aligned}$$

Since u is continuous and \(u^{\varepsilon }\) converges uniformly, we can assume that \(u^{\varepsilon }(x_{\varepsilon })<0\). Thus, recalling that \(u^{\varepsilon }\) satisfies the DPP (Theorem 6.3), and observing that

$$\begin{aligned} \max \Bigg \{0; \sup _{ B_{\varepsilon }(x_{\varepsilon })} u^{\varepsilon }- {\varepsilon }\Bigg \}\ge 0, \end{aligned}$$

we conclude that

$$\begin{aligned} u^{\varepsilon }(x_{\varepsilon }) = \frac{1}{2} \Bigg ( \sup _{B_{\varepsilon }(x_{\varepsilon })}u^{\varepsilon }+ \inf _{B_{\varepsilon }(x_{\varepsilon })} u^{\varepsilon }\Bigg ). \end{aligned}$$

We obtain

$$\begin{aligned} \eta _{\varepsilon }\ge -\psi (x_{{\varepsilon }})+\frac{1}{2} \Bigg ( \max _{\overline{B}_{\varepsilon }(x_{\varepsilon })}\psi + \min _{\overline{B}_{\varepsilon }(x_{\varepsilon })} \psi \Bigg ). \end{aligned}$$

Thus, by (6.11) and choosing \(\eta _{\varepsilon }= o({\varepsilon }^2)\), we have

$$\begin{aligned} 0\ge {\varepsilon }^2\Big \langle D^2\psi (x)\left( \frac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) , \left( \frac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) \Big \rangle +o({\varepsilon }^2). \end{aligned}$$

Next, we observe that

$$\begin{aligned} \Big \langle D^2\psi (x_{{\varepsilon }})\left( \frac{x_1^{{{\varepsilon }}}-x_{{{\varepsilon }}}}{{{\varepsilon }}}\right) , \left( \frac{x_1^{{{\varepsilon }}}-x_{{{\varepsilon }}}}{{{\varepsilon }}}\right) \Big \rangle \rightarrow \Delta _{\infty }\psi (x) \end{aligned}$$

provided \(\nabla \psi (x)\ne 0\). Furthermore, such a limit is bounded below and above by the quantities \(\lambda _{\min } (D^2\psi (x))\) and \(\lambda _{\max } (D^2\psi (x))\). Therefore, by dividing by \({\varepsilon }^2\) and letting \({\varepsilon }\rightarrow 0\), we get the desired inequality.

If \(u(x)\ge 0\), we have to show that

$$\begin{aligned} \max \{-\Delta _\infty \psi (x), 1-\left| \nabla \psi (x)\right| \}\ge 0. \end{aligned}$$

As above, by (6.11) and (6.12), we obtain

$$\begin{aligned} \begin{aligned} 0&\ge \min \Bigg \{\frac{{\varepsilon }^2}{2} D^2\psi (x)\left( \tfrac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) \cdot \left( \tfrac{x_1^{{\varepsilon }}-x}{{\varepsilon }}\right) +o({\varepsilon }^2); \max \Bigg \{o({\varepsilon }^2)-\psi (x);\\&\qquad \qquad \qquad \left( D \psi (x)\cdot \tfrac{x_2^{\varepsilon }-x}{{\varepsilon }}-1\right) {\varepsilon }+\frac{{\varepsilon }^2}{2} D^2\psi (x)\left( \tfrac{x_2^{{\varepsilon }}-x}{{\varepsilon }}\right) \cdot \left( \tfrac{x_2^{{\varepsilon }}-x}{{\varepsilon }}\right) +o({\varepsilon }^2)\Bigg \}\Bigg \}. \end{aligned} \end{aligned}$$
(6.13)

and hence we conclude that

$$\begin{aligned} \Delta _\infty \psi (x)\le 0 \qquad \text{ or }\qquad |\nabla \psi (x)|-1\le 0 \end{aligned}$$

as desired.

We have shown that u is a supersolution to our equation. Similarly, one obtains the subsolution counterpart. Let us remark, as part of those computations, that when \(u^{\varepsilon }(x)>0\) the DPP implies

$$\begin{aligned} \max \Bigg \{0; \sup _{ B_{\varepsilon }(x)} u^{\varepsilon }(y) - {\varepsilon }\Bigg \}> 0 \end{aligned}$$

and hence

$$\begin{aligned} \sup _{ B_{\varepsilon }(x)} u^{\varepsilon }(y) - {\varepsilon }> 0. \end{aligned}$$

Then, in this case we have

$$\begin{aligned} u^{\varepsilon }(x) = \min \Bigg \{ \frac{1}{2} \left( \sup _{B_{\varepsilon }(x)}u^{\varepsilon }+ \inf _{B_{\varepsilon }(x)} u^{\varepsilon }\right) ; \sup _{ B_{\varepsilon }(x)} u^{\varepsilon }(y) - {\varepsilon }\Bigg \}. \end{aligned}$$

\(\square \)

We proved (see Theorem 1.3) that viscosity solutions to (6.5) are unique by using pure PDE methods. Therefore, we conclude that convergence as \({\varepsilon }\rightarrow 0\) of \(u^{\varepsilon }\) holds not only along subsequences. This ends the proof of Theorem 1.5.
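Although not needed for the proofs, the DPP of Theorem 6.3 can be iterated numerically. The sketch below is purely illustrative (grid step, iteration budget and tolerances are ad hoc choices, not taken from this paper): it runs a crude fixed-point iteration for the DPP on a one-dimensional grid with the data of Example 7.5 below, and then checks the bounds from the proof of Lemma 6.6, the oscillation estimate of Lemma 6.5, and the closeness of \(u^{\varepsilon }\) to the explicit viscosity solution.

```python
import numpy as np

# Fixed-point iteration for the DPP
#   u(x) = min{ (sup_B u + inf_B u)/2 ; max{0, sup_B u - eps} },  B = B_eps(x),
# discretised on a uniform grid over (-1, 4), with boundary data F(-1) = 1,
# F(4) = -1 extended constantly to the strip Gamma_eps.
eps = 0.1
h = eps / 5.0
x = np.arange(-1.0 - eps, 4.0 + eps + h / 2.0, h)
interior = (x > -1.0) & (x < 4.0)
F = np.where(x <= -1.0, 1.0, -1.0)      # only used outside the interior
w = int(round(eps / h))                  # ball radius in grid points

u = np.where(interior, 0.0, F)
for _ in range(20000):
    padded = np.pad(u, w, mode="edge")   # constant extension beyond the strip
    windows = np.stack([padded[s:s + u.size] for s in range(2 * w + 1)])
    sup, inf = windows.max(axis=0), windows.min(axis=0)
    new = np.where(interior,
                   np.minimum(0.5 * (sup + inf), np.maximum(0.0, sup - eps)),
                   F)
    residual = np.abs(new - u).max()
    u = new
    if residual < 1e-10:
        break

# Bounds from the proof of Lemma 6.6: min{0, min F} <= u_eps <= max F.
assert u.min() >= -1.0 - 1e-9 and u.max() <= 1.0 + 1e-9
# Lemma 6.5: A(x) <= 4 max{Lip(F), 1} eps = 4 eps for interior points.
padded = np.pad(u, w, mode="edge")
windows = np.stack([padded[s:s + u.size] for s in range(2 * w + 1)])
A = windows.max(axis=0) - windows.min(axis=0)
assert A[interior].max() <= 4.0 * eps + 1e-9
# Theorem 6.7 / Example 7.5: u_eps should be O(eps)-close to the limit.
u_exact = np.where(x <= 0.0, -x, -x / 4.0)
err = np.abs((u - u_exact)[interior]).max()
print("residual:", residual, "  max error vs explicit solution:", err)
assert err < 0.25
```

Whether plain iteration converges to the game value is not addressed by the arguments above; here it is simply observed that the iterates stabilise and that the resulting function satisfies the expected estimates.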

7 Further properties for limit solutions

Now, we present some relevant geometric and measure theoretic properties for limit solutions and their free boundaries.

Theorem 7.1

(Uniform positive density) Let \(u_{\infty }\) be a limit solution to (1.2) in \(B_1\) and \(x_0 \in \partial \{u_{\infty } > 0\} \cap B_{\frac{1}{2}}\) be a free boundary point. Then, for any \(0<\rho < \frac{1}{2}\),

$$\begin{aligned} {\mathcal {L}}^N(B_{\rho }(x_0) \cap \{u_{\infty }>0\})\ge \theta \rho ^N, \end{aligned}$$

for a universal constant \(\theta >0\).

Proof

Applying Theorem 1.4 with radius \(r=\rho \), there exists a point \({\hat{y}} \in \partial B_r(x_0) \cap \{u_{\infty }>0\}\) such that

$$\begin{aligned} u_{\infty }({\hat{y}})\ge r. \end{aligned}$$
(7.1)

Moreover, we claim that there exists \(\kappa >0\) small enough such that

$$\begin{aligned} B_{\kappa r}({\hat{y}}) \subset \{u_{\infty }>0\}. \end{aligned}$$
(7.2)

The constant \(\kappa \) is given by

$$\begin{aligned} \kappa {:}{=}\,\frac{1}{10[u_{\infty }]_{\text{ Lip }(\Omega )}}. \end{aligned}$$

Indeed, if this were not the case, there would exist a free boundary point \({\hat{z}} \in B_{\kappa r}({\hat{y}})\). Then, from (7.1) we obtain

$$\begin{aligned} r \le u_{\infty }({\hat{y}}) \le \sup _{B_{\kappa r}({\hat{z}})} u_{\infty }(x) \le [u_{\infty }]_{\text{ Lip }(\Omega )}(\kappa r) = \frac{1}{10}r, \end{aligned}$$

which is a contradiction. Therefore,

$$\begin{aligned} B_{\kappa r}({\hat{y}}) \cap B_r(x_0) \subset B_r(x_0) \cap \{u_{\infty }>0\}, \end{aligned}$$

and hence

$$\begin{aligned} {\mathcal {L}}^N(B_{\rho }(x_0) \cap \{u_{\infty }>0\})\ge {\mathcal {L}}^N(B_{\rho }(x_0) \cap B_{\kappa r}({\hat{y}}))\ge \theta r^N, \end{aligned}$$

which proves the result. \(\square \)

Definition 7.2

(\(\zeta \)-Porous set) A set \(S \subset {\mathbb {R}}^N\) is said to be porous with porosity constant \(0<\zeta \le 1\) if there exists \(R > 0\) such that for each \(x \in S\) and \(0< r < R\) there exists a point y such that \(B_{\zeta r}(y) \subset B_r(x) \setminus S\).

Theorem 7.3

(Porosity of limiting free boundary) Let \(u_{\infty }\) be a limit solution to (1.2) in \(\Omega \). There exists a constant \(0<\xi = \xi (N, \text{ Lip }[g]) \le 1\) such that

$$\begin{aligned} {\mathcal {H}}^{N-\xi }\left( \partial \{u_{\infty }>0\}\cap B_{\frac{1}{2}}\right) < \infty . \end{aligned}$$
(7.3)

Proof

Let \(R>0\) and \(x_0\in \Omega \) be such that \(\overline{B_{4R}(x_0)}\subset \Omega \). We will show that \(\partial \{u_{\infty } >0\} \cap B_R(x_0)\) is a \(\frac{\zeta }{2}\)-porous set for a universal constant \(0< \zeta \le 1\). To this end, let \(x\in \partial \{u_{\infty } >0\} \cap B_{R}(x_0)\). For each \(r\in (0, R)\) we have \(\overline{B_r(x)}\subset B_{2R}(x_0)\subset \Omega \). Now, let \(y\in \partial B_r(x)\) be such that \(u_{\infty }(y) = \sup \limits _{\partial B_r(x)} u_{\infty }\). From Theorem 1.4,

$$\begin{aligned} u_{\infty }(y)\ge r. \end{aligned}$$
(7.4)

On the other hand, near the free boundary, from Lipschitz regularity we have

$$\begin{aligned} u_{\infty }(y)\le [u_{\infty }]_{\text{ Lip }({\overline{\Omega }})} d(y), \end{aligned}$$
(7.5)

where \(d(y) {:}{=}\,\mathrm {dist}(y, \partial \{u_{\infty }>0\} \cap \overline{B_{2R}(x_0)})\). From (7.4) and (7.5) we get

$$\begin{aligned} d(y)\ge \zeta r \end{aligned}$$
(7.6)

for the constant \(\zeta {:}{=}\,\frac{1}{[u_{\infty }]_{\text{ Lip }({\overline{\Omega }})}+1} \in (0,1)\).

Now, let \({\hat{y}}\), on the segment joining x and y, be such that \(|y-{\hat{y}}|=\frac{\zeta r}{2}\). Then it holds that

$$\begin{aligned} B_{\frac{\zeta }{2}r}({\hat{y}})\subset B_{\zeta r}(y)\cap B_r(x), \end{aligned}$$
(7.7)

indeed, for each \(z\in B_{\frac{\zeta }{2}r}({\hat{y}})\)

$$\begin{aligned} |z-y|&\le |z-{\hat{y}}|+|y-{\hat{y}}|<\frac{\zeta r}{2}+\frac{\zeta r}{2}=\zeta r,\\ |z-x|&\le |z-{\hat{y}}|+\big (|x-y|-|{\hat{y}}-y|\big )\le \frac{\zeta r}{2}+\left( r-\frac{\zeta r}{2}\right) =r. \end{aligned}$$

Then, since by (7.6) \(B_{\zeta r}(y)\subset B_{d(y)}(y)\subset \{u_{\infty }>0\}\), we get \(B_{\zeta r}(y)\cap B_r(x)\subset \{u_{\infty }>0\}\), which together with (7.7) implies that

$$\begin{aligned} B_{\frac{\zeta }{2}r}({\hat{y}})\subset B_{\zeta r}(y)\cap B_r(x)\subset B_r(x)\setminus \partial \{u_{\infty }>0\}\subset B_r(x)\setminus \big (\partial \{u_{\infty }>0\} \cap B_{R}(x_0)\big ). \end{aligned}$$

Therefore, \(\partial \{u_{\infty }>0\} \cap B_{R}(x_0)\) is a \(\frac{\zeta }{2}\)-porous set. Finally, the \((N-\xi )\)-Hausdorff measure estimate in (7.3) follows from [14]. \(\square \)

In particular, Theorem 7.3 implies that the free boundary \(\partial \{u_{\infty }>0\}\) has Lebesgue measure zero.

In the last part of this paper we include two examples to see what kind of solutions to (1.6) one can expect.

Example 7.4

(Radial solutions) First of all, let us study the following boundary value problem:

$$\begin{aligned} \left\{ \begin{array}{lllll} -\Delta _p u &{}=&{} -\lambda _0 \chi _{\{u>0\}}(x) &{} \text{ in } &{} B_{R}(x_0) \\ u(x) &{}=&{} \kappa &{} \text{ on } &{} \partial B_{R}(x_0), \end{array} \right. \end{aligned}$$
(7.8)

where \(R, \lambda _0\) and \(\kappa \) are positive constants.

Observe that, by the uniqueness of solutions to the Dirichlet problem (7.8) and the rotational invariance of the \(p-\)Laplace operator, u must be a radially symmetric function. Hence, let us deal with the following one-dimensional ODE

$$\begin{aligned} -(|v^{\prime }(t)|^{p-2}v^{\prime }(t))^{\prime } = -\lambda _0 \chi _{\{v>0\}}(t) \quad \text{ in } \, (0, T), \qquad v(0)=0 \,\,\,\text{ and }\,\,\,v(T)=\kappa . \end{aligned}$$
(7.9)

It is straightforward to check that \(v(t)=\Theta (1, \lambda _0, p) t^{\,\frac{p}{p-1}}\) is a solution to (7.9), where

$$\begin{aligned} \Theta = \Theta (N, \lambda _0, p) {:}{=}\,\frac{p-1}{p}\left( \frac{ \inf _{\Omega }\lambda _0(x)}{ N}\right) ^{\frac{1}{p-1}} \quad \text{ and } \quad T {:}{=}\,\left( \frac{\kappa }{\Theta } \right) ^{\frac{p-1}{p}}. \end{aligned}$$
(7.10)

Now, in order to characterize the unique solution to (7.8), fix \(x_0 \in {\mathbb {R}}^N\). We assume the compatibility condition for the dead-core problem, namely \(R > T\). Thus, for \(r_0 = R-T\), the radially symmetric function given by

$$\begin{aligned} u(x){:}{=}\,\Theta \left[ |x-x_0|-R + \left( \frac{\kappa }{\Theta } \right) ^{\frac{p-1}{p}} \right] _+^{\frac{p}{p-1}} = \Theta \left( |x-x_0|-r_0 \right) _+^{\frac{p}{p-1}} \end{aligned}$$
(7.11)

fulfils (7.8) in the weak sense, where \(r_0 {:}{=}\,R - \left( \frac{\kappa }{\Theta } \right) ^{\frac{p-1}{p}}\). Moreover, the dead core is given by \(B_{r_0}(x_0)\).

It is also easy to see that the limit radial profile as \(p \rightarrow \infty \) becomes

$$\begin{aligned} u_{\infty }(x){:}{=}\,\left( |x-x_0|-r_0 \right) _+, \end{aligned}$$
(7.12)

which satisfies (1.6) in the viscosity sense with \(\Omega = B_{R}(x_0)\), the dead core given by \(B_{r_0}(x_0)\) for \(r_0 = R - \kappa \), and \(g\equiv \kappa \) on \(\partial B_{R}(x_0)\).
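As a numerical sanity check on this example (an illustration only; the parameter values \(p=3\), \(\lambda _0=2\), \(\kappa =1.5\) are arbitrary choices), one can verify by finite differences that \(v(t)=\Theta \, t^{p/(p-1)}\), with the constants of (7.10) for \(N=1\), satisfies \((|v'|^{p-2}v')'=\lambda _0\) on \(\{v>0\}\) together with the boundary conditions of (7.9), and that \(\Theta \rightarrow 1\) as \(p\rightarrow \infty \), which is what drives \(r_0 \rightarrow R-\kappa \) in the limit profile:

```python
import math

p, lam0, kappa = 3.0, 2.0, 1.5

# Constants of (7.10) with N = 1.
theta = (p - 1.0) / p * (lam0 / 1.0) ** (1.0 / (p - 1.0))
T = (kappa / theta) ** ((p - 1.0) / p)

def v(t):
    # One-dimensional dead-core profile solving (7.9).
    return theta * t ** (p / (p - 1.0))

# Boundary conditions of (7.9).
assert v(0.0) == 0.0
assert abs(v(T) - kappa) < 1e-9

hh = 1e-4
def flux(t):
    # w(t) = |v'(t)|^{p-2} v'(t), with v' >= 0, via a central difference.
    d = (v(t + hh) - v(t - hh)) / (2.0 * hh)
    return d ** (p - 1.0)

# Check (|v'|^{p-2} v')' = lam0 at a few points of {v > 0}.
for t in (0.5, 1.0, 1.5):
    lhs = (flux(t + hh) - flux(t - hh)) / (2.0 * hh)
    assert abs(lhs - lam0) < 1e-3

# Limit p -> infinity: Theta -> 1, hence (kappa/Theta)^{(p-1)/p} -> kappa.
for q in (10.0, 100.0, 1000.0):
    th = (q - 1.0) / q * lam0 ** (1.0 / (q - 1.0))
    assert abs(th - 1.0) < 10.0 / q
print("all radial checks passed")
```

The flux \(w(t)=|v'|^{p-2}v'\) is linear in t for this profile, so the discretisation error comes only from the inner difference quotient; the tolerances above leave a comfortable margin.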

Example 7.5

Finally, by considering the one-dimensional problem

$$\begin{aligned} \left\{ \begin{array}{lllll} \max \{- u^{\prime \prime }, \chi _{\{u>0\}}-|u^{\prime }|\} &{} = &{} 0 &{} \text{ in } &{} (-1,4) \\ u(-1) &{} = &{} 1&{} &{} \\ u(4) &{} = &{} -1, &{} &{} \end{array} \right. \end{aligned}$$

it is straightforward to verify that \(u(x) = \left\{ \begin{array}{lll} -x &{} \text{ if } &{} x\in (-1, 0] \\ -\frac{1}{4}x &{} \text{ if } &{} x\in [0, 4) \end{array} \right. \) is the unique viscosity solution to our gradient constraint problem.
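The verification can be spelled out: on \((-1,0)\) we have \(u>0\), \(-u''=0\) and \(\chi _{\{u>0\}}-|u'|=1-1=0\); on (0, 4) we have \(u<0\), so \(\chi _{\{u>0\}}-|u'|=-\frac{1}{4}<0\) while \(-u''=0\); in both cases the maximum is 0. At the kink \(x=0\) the slope jumps from \(-1\) to \(-\frac{1}{4}\) (a convex corner), so no smooth test function touches u from above there, and any test function touching from below has slope of modulus at most 1, so the constraint term takes care of the supersolution inequality. A finite-difference spot check away from the kink (tolerances are ad hoc):

```python
# Spot check of Example 7.5: max{-u'', chi_{u>0} - |u'|} = 0 at sample points
# away from x = 0, where the piecewise linear pieces are smooth.
def u(x):
    return -x if x <= 0.0 else -x / 4.0

h = 1e-3
for x in [-0.9, -0.5, -0.1, 0.5, 2.0, 3.9]:
    d1 = (u(x + h) - u(x - h)) / (2.0 * h)              # u'(x)
    d2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / (h * h)   # u''(x)
    chi = 1.0 if u(x) > 0.0 else 0.0
    val = max(-d2, chi - abs(d1))
    assert abs(val) < 1e-8, (x, val)

# Boundary data of the problem.
assert u(-1.0) == 1.0 and u(4.0) == -1.0
print("Example 7.5 verified at the sample points")
```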