Asymptotic Lipschitz regularity for tug-of-war games with varying probabilities

We prove an asymptotic Lipschitz estimate for value functions of tug-of-war games with varying probabilities defined in $\Omega\subset \mathbb R^n$. The proof is based on a game-theoretic idea: we estimate the value of a related game defined in $\Omega\times \Omega$ via couplings.


Motivation and Statement of the Main Result
Tug-of-war games have gained attention since the seminal papers of Peres, Schramm, Sheffield and Wilson [9,10]. They showed that these two-player zero-sum games are connected to homogeneous and inhomogeneous normalized PDEs in non-divergence form via a dynamic programming principle (DPP for short). Regularity properties of value functions of tug-of-war games were studied in [7,8] by using translation invariance and good symmetry properties, which are no longer available in the natural generalization where the probabilities depend on the location. In this space-dependent case, Luiro and Parviainen [6] showed asymptotic local Hölder regularity for value functions by developing a game-theoretic method in the spirit of couplings. Our aim is to improve this result by showing an asymptotic Lipschitz estimate.
The object of our study is the value function $u_\varepsilon : \Omega \to \mathbb R$ of the variant of the tug-of-war game that is explained in Section 1.2 below. The function $u_\varepsilon$ satisfies the DPP
$$u_\varepsilon(x) = \frac{1}{2}\sup_{|\nu|=\varepsilon}\Big\{\alpha(x)\,u_\varepsilon(x+\nu) + \beta(x)\fint_{B^\nu_\varepsilon(x)} u_\varepsilon \, d\mathcal L^{n-1}\Big\} + \frac{1}{2}\inf_{|\nu|=\varepsilon}\Big\{\alpha(x)\,u_\varepsilon(x+\nu) + \beta(x)\fint_{B^\nu_\varepsilon(x)} u_\varepsilon \, d\mathcal L^{n-1}\Big\} \quad (1.1)$$
for $x \in \Omega$, where $\Omega \subset \mathbb R^n$ is a bounded domain, $\varepsilon > 0$, $B^\nu_\varepsilon(x)$ denotes the $(n-1)$-dimensional ball of radius $\varepsilon > 0$ centered at $x \in \mathbb R^n$ and orthogonal to $\nu \neq 0$, $\mathcal L^{n-1}$ stands for the $(n-1)$-dimensional Lebesgue measure, and the barred integral denotes the average. The coefficients $\alpha : \Omega \to (0,1]$ and $\beta : \Omega \to [0,1)$ are continuous probability functions such that $\alpha(x) + \beta(x) = 1$ and $\alpha(x) \geq \alpha_{\min} > 0$.

Next suppose that, in particular, the functions $\alpha$ and $\beta$ take the form
$$\alpha(x) = \frac{p(x)-1}{p(x)+n}, \qquad \beta(x) = \frac{n+1}{p(x)+n},$$
where the function $p : \Omega \to (1,\infty]$ is continuous and bounded away from 1. Under these assumptions, Arroyo, Heino and Parviainen [1] showed that, for a given continuous boundary data and a suitable boundary cut-off function, $u_\varepsilon \to u$ uniformly as $\varepsilon \to 0$, where $u$ is a viscosity solution of the normalized $p(x)$-Laplace equation $-\Delta^N_{p(x)} u(x) = 0$. Here
$$\Delta^N_{p(x)} u := \Delta u + (p(x)-2)\,\Delta^N_\infty u, \qquad \Delta^N_\infty u := |\nabla u|^{-2}\,\langle D^2u\,\nabla u, \nabla u\rangle.$$
Moreover, by [1, Theorem 4.1], the function $u_\varepsilon$ is asymptotically Hölder continuous. In this paper we introduce a new game-theoretic strategy to show asymptotic local Lipschitz regularity for $u_\varepsilon$ under the assumption that the function $p(\cdot)$ is Hölder continuous. The main theorem is stated as follows.

Theorem 1.1 Assume that the function $\alpha : \Omega \to (0,1]$ is Hölder continuous with a Hölder exponent $s \in (0,1)$ and a Hölder constant $C_\alpha > 0$. Let $B_{2r}(x_0) \subset \Omega$ for some $r > 0$. Then, for a solution $u_\varepsilon$ of Eq. 1.1 it holds that
$$|u_\varepsilon(x) - u_\varepsilon(z)| \leq C\,\big(|x-z| + \varepsilon\big) \qquad \text{for every } x, z \in B_r(x_0),$$
for some constant $C > 0$ depending on $\alpha_{\min}$, $C_\alpha$, $s$, $n$, $r$ and $\sup_{B_{2r}}|u|$, but independent of $\varepsilon$.
Observe that, recalling [1, Theorem 6.2], by passing to a subsequence if necessary, $u_\varepsilon$ converges uniformly to a viscosity solution of $-\Delta^N_{p(x)} u(x) = 0$, and thus by Theorem 1.1 the limit is Lipschitz continuous. However, uniqueness of viscosity solutions to $-\Delta^N_{p(x)} u(x) = 0$ is an open question, and therefore the Lipschitz continuity of viscosity solutions cannot be deduced from this result. In particular, given a viscosity solution, it is not known whether there exists a sequence of functions satisfying the DPP (1.1) and converging uniformly to this solution. As for the regularity theory of the PDE itself, if $p(\cdot)$ is Lipschitz continuous, the $C^{1,\alpha}$ regularity of viscosity solutions to $-\Delta^N_{p(x)} u(x) = 0$ is obtained in [13]. We also refer the reader to [2] for an account of the regularity theory of normalized $p$-Laplace type equations.

Heuristic Idea of the Game and the Method of the Proof
Although the proofs in this paper are mainly written without game terminology, the intuition behind them comes from stochastic games, and this point of view helps in understanding the proofs below. The function $u_\varepsilon$ satisfying the DPP (1.1) in $\Omega$ with some continuous boundary data is the value function of the following game. There are two players, Player I trying to maximize the payoff and Player II trying to minimize it. First the token is placed at $x_0 \in \Omega$. Both players choose a vector of length $\varepsilon$. Let $\nu^+$ be the choice of Player I and $\nu^-$ the choice of Player II. Then they flip a fair coin. If Player I wins the toss, with probability $\alpha(x_0)$ the token moves to $x_0 + \nu^+$, and with probability $\beta(x_0)$ the token moves somewhere in the $(n-1)$-dimensional ball $B^{\nu^+}_\varepsilon(x_0)$ according to the uniform probability density. Similarly, if Player II wins the toss, with probability $\alpha(x_0)$ the token moves to $x_0 + \nu^-$, and with probability $\beta(x_0)$ it moves somewhere in $B^{\nu^-}_\varepsilon(x_0)$, again according to the uniform probability density. The game continues until the token hits $\mathbb R^n \setminus \Omega$ for the first time at, say, $x_\tau$, and then Player II pays Player I the amount given by the payoff function at $x_\tau$. Intuitively, summing up the probabilities at $x_0$ yields the DPP (1.1) at the point $x_0$. For a more detailed presentation of the game and its connection to the DPP (1.1), we refer to [1].
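To make the rules concrete, here is a minimal NumPy sketch of one round of the game (the function names and the sampling scheme are ours, not from the paper): the winner of a fair coin toss is determined first; then the token either takes the winner's step of length $\varepsilon$ (probability $\alpha(x)$) or jumps uniformly in the $(n-1)$-dimensional ball orthogonal to the winner's direction (probability $\beta(x) = 1 - \alpha(x)$).

```python
import numpy as np

def orthonormal_complement(nu):
    """Orthonormal basis (as columns) of the hyperplane orthogonal to nu."""
    n = nu.size
    A = np.column_stack([nu] + [np.eye(n)[:, i] for i in range(n)])
    Q, _ = np.linalg.qr(A)      # first column of Q spans nu
    return Q[:, 1:]

def game_step(x, nu_plus, nu_minus, alpha, eps, rng):
    """One round of the tug-of-war game with varying probabilities.
    nu_plus, nu_minus: the players' chosen unit directions; beta = 1 - alpha."""
    nu = nu_plus if rng.random() < 0.5 else nu_minus  # fair coin toss
    if rng.random() < alpha(x):                       # full step of length eps
        return x + eps * nu
    # otherwise: uniform point in the (n-1)-ball orthogonal to nu, centered at x
    n = x.size
    y = rng.standard_normal(n - 1)
    y *= rng.random() ** (1.0 / (n - 1)) / np.linalg.norm(y)
    return x + eps * orthonormal_complement(nu) @ y
```

Iterating `game_step` until the token leaves the domain, and averaging the payoff over many runs, gives a Monte Carlo approximation of the value function.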
To explain the starting point of the proof with simple notation, we consider for a moment a simpler DPP related to the limit case $\alpha(\cdot) \equiv 1$ and $\beta(\cdot) \equiv 0$, which was studied in [10] and is connected to infinity harmonic functions. To start with, observe that $u_\varepsilon(x) - u_\varepsilon(z) =: G(x,z)$ can be written as a solution of a certain natural DPP in $\mathbb R^{2n}$: for all $(x,z) \in \Omega \times \Omega$ it holds that
$$G(x,z) = \frac{1}{2}\sup_{(x',z') \in B_\varepsilon(x)\times B_\varepsilon(z)} G(x',z') + \frac{1}{2}\inf_{(x',z') \in B_\varepsilon(x)\times B_\varepsilon(z)} G(x',z'). \quad (1.3)$$
This resembles the original DPP for $u_\varepsilon$ in $\mathbb R^n$ but is for $G$ in $\mathbb R^{2n}$. In this way the question about the Lipschitz regularity of $u_\varepsilon$ is converted into a question about the absolute size of a solution of Eq. 1.3 in $\Omega \times \Omega \subset \mathbb R^{2n}$. Next we explain the idea of estimating $|G(x,z)|$ via a stochastic game in $\mathbb R^{2n}$. We utilize the observation that $G = 0$ in the diagonal set $T := \{(x,x) : x \in \Omega\}$. The rules of the game are as follows: two game tokens are placed in $\Omega$. Two players, we and the opponent, play the game so that at each turn, if the game tokens are at $x_k$ and $z_k$ respectively, they have an equal chance to win the turn. If a player wins the turn, he can move the game token at $x_k$ to any point in $B_\varepsilon(x_k)$ and the game token at $z_k$ to any point in $B_\varepsilon(z_k)$. The game stops if 1) the game tokens have the same position or 2) one of the game tokens is placed outside $\Omega$. The payoff is zero if the game ends due to the first condition and $2\sup_\Omega |u_\varepsilon|$ if the game ends due to the second condition. We try to minimize the payoff and the opponent tries to maximize it. In other words, we try to pull the game tokens to the same position before the opponent succeeds in moving one of the game tokens outside $\Omega$.
Heuristically speaking, the expected value of this game should be greater than or equal to $|G|$, since the boundary values used are clearly larger than $|G|$ at the boundary; here we take the comparison principle, and even the existence of the value of this game, for granted.
Thus it suffices to estimate the value of this game. For this we need a suitable strategy. Let us consider the following natural candidate as an example: what happens if, whenever we win the coin toss, we simply move the game tokens straight towards each other? If the game tokens are at $x$ and $z$, our moves are
$$x \mapsto x - \varepsilon\,\frac{x-z}{|x-z|}, \qquad z \mapsto z + \varepsilon\,\frac{x-z}{|x-z|}.$$
It turns out that this strategy does not work well enough. The reason is that the opponent can play against our moves but with a slight turn, choosing
$$x \mapsto x + \varepsilon\, T_\theta\,\frac{x-z}{|x-z|}, \qquad z \mapsto z - \varepsilon\, T_\theta\,\frac{x-z}{|x-z|},$$
where $T_\theta$ is a rotation matrix of a very small angle $\theta$ (for $\theta \approx \varepsilon^{3/4}$); then the distance to the boundary is expected to decrease much faster than the distance between the game tokens. Indeed, think of one step of length $\varepsilon$ with twist $\theta$. Then, in the direction $x - z$, the opponent's expected one-step loss is approximately $\frac12\varepsilon\theta^2 = \frac12\varepsilon^{5/2}$, whereas in the perpendicular direction his expected gain is $\varepsilon\theta = \varepsilon^{7/4}$, which is much larger for small $\varepsilon$.
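The two rates above can be checked numerically; this small script (ours) compares the opponent's per-step loss along $x - z$ with the perpendicular gain for a turn of the critical size $\theta \approx \varepsilon^{3/4}$.

```python
import math

def slight_turn_effect(eps, theta):
    """Per-step effect of a move of length eps twisted by angle theta
    away from the axis x - z."""
    loss_along_axis = eps * (1.0 - math.cos(theta))  # ~ eps * theta**2 / 2
    gain_perpendicular = eps * math.sin(theta)       # ~ eps * theta
    return loss_along_axis, gain_perpendicular

eps = 1e-4
theta = eps ** 0.75          # the critical twist theta ~ eps^(3/4)
loss, gain = slight_turn_effect(eps, theta)
# gain / loss ~ 2 / theta, so the perpendicular drift dominates for small eps
```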
To prevent the opponent from taking advantage of the slight-turn phenomenon, a more promising idea is to follow a threshold angle strategy: we set a threshold and define our strategy according to it. If the step of the opponent, nearly attaining the supremum $\sup_{B_\varepsilon(x)\times B_\varepsilon(z)} G$, makes an angle greater than the threshold with the direction $x - z$, then our strategy is to pull the tokens straight towards each other. On the other hand, if the angle is less than the threshold, then we pull against the step of the opponent. It turns out to be hard to evaluate the game value directly; instead, one should try to find an explicit super-value $f$ of the game, i.e., a function with $2\sup_{\partial\Omega}|u_\varepsilon| \leq f$ on the boundary of $\Omega \times \Omega$ and $|f(x,z)|$ comparable to $|x-z|^\delta$ for some $\delta > 0$. In [6], these ideas combined with a comparison argument guaranteed that the game value is less than or equal to $f$ inside $\Omega \times \Omega$, and yielded an asymptotic Hölder estimate for the function $u_\varepsilon$ satisfying (1.2).
To obtain an asymptotic Lipschitz estimate, on the other hand, we need a super-value with the stronger requirement that $|f(x,z)|$ be comparable to $|x-z|$ in $\Omega \times \Omega$. This idea is applied in our proof of Theorem 1.1. The change of the comparison function gives us substantially less room in choosing our strategy compared to the Hölder requirement. Hence, our threshold angle cannot be fixed; it needs to depend both on the distance of the points and on the Hölder exponent of the probability function $\alpha(\cdot)$. For the details of our strategy, see Section 3.
Usually, starting from a game in $\mathbb R^n$, one can derive several different games in $\mathbb R^{2n}$, and we need to choose the game that is suitable for our purposes. In the language of stochastics or optimal mass transport, we choose couplings of the probability measures on $\mathbb R^n$ in such a way that we get a probability measure on $\mathbb R^{2n}$ having the original measures as marginals.
It has turned out that the above approach is connected to the method of couplings dating back to the 1986 paper of Lindvall and Rogers [5]; see also, for example, [4,11,12] for recent applications to PDEs. In the theory of viscosity solutions, this is related to the doubling-of-variables procedure, and in particular to the Ishii-Lions regularity method introduced in [3]. A key point in the Ishii-Lions method is to utilize the celebrated theorem of sums at the maximum point of $u(x) - u(z) - f(x,z)$. Our proof does not rely on the theorem of sums. In addition, even though as a corollary our result also implies a similar result for the PDE, our main objective is to prove regularity for stochastic games with nonzero step size. For example, the small-turn phenomenon is not present in the PDE setting.

Outline of the Paper
In Section 2 we fix the notation, introduce our super-value $f$ and state the key Lemma 2.2 for this comparison function. In Section 3 we prove Theorem 1.1 in the case $|x-z| > \frac{N}{10}\varepsilon$, and in Section 4 in the case $|x-z| \leq \frac{N}{10}\varepsilon$. Finally, in Section 5 we consider a less technical alternative game in order to prove Theorem 1.1 in the restricted case $2 < p(x) \leq \infty$.

Notation
Given $\nu \neq 0$, let
$$B^\nu_\varepsilon(x) := B_\varepsilon(x) \cap \big(x + \{\nu\}^\perp\big) = \big\{\xi \in \mathbb R^n : |\xi - x| < \varepsilon \text{ and } \langle \nu, \xi - x\rangle = 0\big\}.$$
For $i = 1, 2, \ldots, n$, we denote by $e_i \in \mathbb R^n$ the column vector containing 1 in the $i$-th component and 0 in the rest. For simplicity, we denote
$$O(n) := \{P \in \mathbb R^{n\times n} : P^T P = I\},$$
where $P^T$ stands for the transpose of $P$. Given $\nu \in \mathbb R^n$ such that $|\nu| = 1$, we denote by $P_\nu \in O(n)$ an $n$-dimensional orthogonal matrix sending the vector $e_1$ to $\nu$, that is,
$$P_\nu e_1 = \nu. \quad (2.1)$$
Note that this is a matrix whose first column vector coincides with $\nu$, and it is not unique. Thus, due to the symmetries of the ball $B^{e_1}$, we can write
$$B^\nu_\varepsilon(x) = x + \varepsilon\, P_\nu B^{e_1}, \quad (2.2)$$
with no dependence on the particular choice of the matrix $P_\nu$ as long as Eq. 2.1 holds.
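A concrete way to produce such a matrix $P_\nu$ (one of many, as noted above) is a Householder reflection; this sketch is ours.

```python
import numpy as np

def P_nu(nu):
    """An orthogonal matrix P with P @ e1 = nu, for a unit vector nu,
    built from the Householder reflection swapping e1 and nu."""
    n = nu.size
    e1 = np.zeros(n)
    e1[0] = 1.0
    w = e1 - nu
    norm_w = np.linalg.norm(w)
    if norm_w < 1e-12:          # nu == e1: the identity already works
        return np.eye(n)
    w /= norm_w
    return np.eye(n) - 2.0 * np.outer(w, w)
```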
Remark. For the rest of the paper, we fix $\varepsilon > 0$ and denote $u := u_\varepsilon$ to simplify the notation.
Given a bounded set of real numbers $\{a_i\}_{i\in I}$, we will use the midrange notation
$$\operatorname*{mid}_{i\in I} a_i := \frac12\Big(\sup_{i\in I} a_i + \inf_{i\in I} a_i\Big)$$
for brevity. For the same reason we introduce the auxiliary function
$$A_\varepsilon u(x,\nu) := \alpha(x)\,u(x+\varepsilon\nu) + \beta(x)\fint_{B^\nu_\varepsilon(x)} u \, d\mathcal L^{n-1}, \quad (2.3)$$
where $|\nu| = 1$. Hence, Eq. 1.1 reads as
$$u(x) = \operatorname*{mid}_{|\nu|=1} A_\varepsilon u(x,\nu) \quad (2.4)$$
for all $x \in \Omega$. Fix any orthogonal matrix $P_\nu$ satisfying (2.1); then, performing the change of variables $\zeta = P_\nu\xi$ in the integral part of Eq. 2.3 and recalling (2.2), we get
$$A_\varepsilon u(x,\nu) = \alpha(x)\,u(x+\varepsilon\nu) + \beta(x)\fint_{B^{e_1}} u(x + \varepsilon P_\nu \zeta)\, d\mathcal L^{n-1}(\zeta). \quad (2.5)$$
Again, we remark that the choice of $P_\nu \in O(n)$ does not play any role in Eq. 2.5. However, the particular choice of the matrix $P_\nu$ will become important later for obtaining estimates.

Comparison Function
For the construction of a suitable comparison function in $\mathbb R^{2n}$, first we define an increasing function $\omega : [0,\infty) \to [0,\infty)$ having the desired regularity properties. To be more precise, let
$$\omega(t) := t - \omega_0\, t^\gamma \quad \text{for } t \in [0, \omega_1]. \quad (2.6)$$
For $t > \omega_1$, the precise formula is not relevant. Here $\gamma = 1 + s$, where $0 < s < 1$ is the Hölder exponent of the function $\alpha(\cdot)$, and $\omega_0 > 0$ is a constant depending on the function $\alpha(\cdot)$ to be fixed later (see Eqs. 3.4 and 3.18). Note that, defined in this way, $\omega$ is an increasing and strictly concave $C^2$-function in $(0, \omega_1]$. Moreover,
$$\frac12 \leq \omega'(t) \leq 1 \quad \text{for } t \in (0,\omega_1], \qquad \omega''(t) = -\omega_0\gamma(\gamma-1)\, t^{\gamma-2} < 0. \quad (2.7)$$
Next, we define the function $f_1 : \mathbb R^{2n} \to \mathbb R$ by
$$f_1(x,z) := C\,\omega(|x-z|) + M\,|x+z|^2,$$
where $C > 0$ is a constant depending on $C_\alpha$, $\alpha_{\min}$, $s$, $r$ and $\sup_{B_{2r}}|u|$ that will be fixed later (see Eqs. 3.3, 3.19, 3.30, 4.1 and 4.5). As we have remarked, the key term in the comparison function $f_1$ is $C\omega(|x-z|)$, while the role of the term $M|x+z|^2$ is just to guarantee that
$$f_1(x,z) \geq 2\sup_{B_{2r}}|u| \quad \text{for every } (x,z) \in (B_{2r}\times B_{2r}) \setminus (B_r \times B_r). \quad (2.8)$$
Indeed, if $|x-z| \leq r$ and, say, $x \in B_{2r}\setminus B_r$, then $|x+z| \geq 2|x| - |x-z| \geq r$. Therefore, choosing
$$M := \frac{2\sup_{B_{2r}}|u|}{r^2}, \quad (2.9)$$
Eq. 2.8 follows in this case. On the other hand, if $|x-z| > r$, since $r > \omega_1$, we can extend $\omega$ outside $[0,\omega_1]$ in such a way that $\omega$ is increasing and $C\omega(r) > 2\sup|u|$. Then $\omega(|x-z|) \geq \omega(r)$ and Eq. 2.8 follows.
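For illustration, the required properties of $\omega$ can be checked numerically on a model modulus of the form $t - \omega_0 t^\gamma$ (the constants below are illustrative choices of ours, not the ones fixed in the paper):

```python
import numpy as np

s, omega0 = 0.5, 0.3                 # illustrative values
gamma = 1.0 + s
# choose omega1 so that omega' >= 1/2 on (0, omega1]:
omega1 = (2.0 * omega0 * gamma) ** (-1.0 / (gamma - 1.0))

def omega(t):   return t - omega0 * t ** gamma
def domega(t):  return 1.0 - omega0 * gamma * t ** (gamma - 1.0)
def d2omega(t): return -omega0 * gamma * (gamma - 1.0) * t ** (gamma - 2.0)

t = np.linspace(1e-6, 0.99 * omega1, 1000)
assert np.all(np.diff(omega(t)) > 0)                  # increasing
assert np.all(d2omega(t) < 0)                         # strictly concave
assert np.all((domega(t) >= 0.5) & (domega(t) <= 1.0))
```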
Note that the concavity of $\omega$ turns out to be crucial when estimating the second-order terms in the Taylor expansion of $f_1$ in Sections 3.2 and 3.3. Moreover, the importance of the explicit formula (2.7) for $\omega''$ and the choice $\gamma = 1+s$ is made clear in the estimate (3.17). To get an idea, recall that the function $\alpha(\cdot)$ is Hölder continuous with exponent $s$, that is,
$$|\alpha(x) - \alpha(y)| \leq C_\alpha\, |x-y|^s \quad (2.10)$$
for every $x, y \in \Omega$ and some $C_\alpha > 0$. The coupling method leads us to estimate terms with coefficients of the type $\alpha(x) - \alpha(z)$, which are controlled by (2.10). However, due to the discrete nature of the DPP, functions satisfying (1.1) can present jumps at the $\varepsilon$-scale. For that reason, in order to control the small-scale jumps, we need to define an annular step function $f_2$, constant on each of a family of annuli $A_i$, $i \in \{0, 1, \ldots, N\}$, defined in terms of $|x-z|$ at scale $\varepsilon$, and nonincreasing in $i$ (2.11). Here $N$ is a large constant depending on $C$, $\omega_0$, $C_\alpha$ and $\alpha_{\min}$ (but not on $\varepsilon$), and will be chosen later (see Eqs. 3.9 and 3.29). Note that $f_2$ vanishes when $|x-z| > \frac{N}{10}\varepsilon$ and $\sup f_2 = \frac{C}{2N}\varepsilon$. Therefore, our comparison function $f : \mathbb R^{2n} \to \mathbb R$ is defined as $f := f_1 - f_2$. Due to the definition of $f_2$, we will use separate arguments along the proof of Theorem 1.1, distinguishing between the cases $f_2 = 0$ (Section 3) and $f_2 \neq 0$ (Section 4).

Statement of the Key Lemma for the Comparison Function
Since our comparison function is $f = f_1 - f_2$, where the terms in $f_1$ have been chosen such that Eq. 2.8 holds and $\sup f_2 = \frac{C}{2N}\varepsilon$, the inequality $u(x) - u(z) \leq f(x,z)$ holds for every $(x,z) \in (B_{2r}\times B_{2r})\setminus(B_r\times B_r)$ (2.12). Our aim is then to show that this inequality also holds in $B_r \times B_r$ for properly chosen constants $C$ and $N$, that is,
$$u(x) - u(z) \leq f(x,z) \quad \text{for every } (x,z) \in B_r \times B_r. \quad (2.13)$$
This will guarantee the local Lipschitz estimate of Theorem 1.1. We will argue by contradiction: if inequality (2.13) does not hold, then we can define a constant
$$K := \sup_{(x,z)\in B_r\times B_r}\big(u(x) - u(z) - f(x,z)\big) > 0. \quad (2.14)$$
In what follows, we may assume that $\alpha(x) \geq \alpha(z)$, because the other case follows from a symmetric argument. In order to obtain a contradiction, as a first step, in Lemma 2.1 we derive lower and upper estimates for the quantity $u(x) - u(z)$ by using the counter-assumption and the DPP (2.4). These estimates imply an inequality in terms of $f$; see the estimate (2.16) below. After that, in the key Lemma 2.2, we show precisely the opposite (strict) inequality for $f$, Eq. 2.20, mainly using the properties of the comparison function $f$, getting a contradiction and implying the desired Lipschitz estimate (2.13).

Lemma 2.1 Given a function $u$ satisfying (2.4), suppose that the counter-assumption (2.14) holds. Then, for any $\eta > 0$, there exist $x, z \in B_r$ such that the comparison function satisfies
$$u(x) - u(z) - f(x,z) > K - \eta \quad (2.15)$$
together with the estimate (2.16), with $P_{\nu_x}, P_{\nu_z} \in O(n)$ satisfying $P_{\nu_x} e_1 = \nu_x$ and $P_{\nu_z} e_1 = \nu_z$.
Proof By the counter-assumption (2.14), given $\eta > 0$, we can immediately choose $x, z \in B_r$ so that Eq. 2.15 holds. To estimate $u(x) - u(z)$ from above, by recalling the DPP (2.4) we have
$$u(x) - u(z) = \operatorname*{mid}_{|\nu_x|=1} A_\varepsilon u(x,\nu_x) - \operatorname*{mid}_{|\nu_z|=1} A_\varepsilon u(z,\nu_z),$$
where all the suprema and infima are considered over the unit sphere. Next we look at the difference between $A_\varepsilon u(x,\nu_x)$ and $A_\varepsilon u(z,\nu_z)$, using the definition (2.5) and adding and subtracting suitable terms, for any pair of vectors $|\nu_x| = |\nu_z| = 1$ and orthogonal matrices $P_{\nu_x}$ and $P_{\nu_z}$ satisfying $P_{\nu_x} e_1 = \nu_x$ and $P_{\nu_z} e_1 = \nu_z$ (2.18). By the definition of $K$ in Eq. 2.14 together with Eq. 2.12, the inequality
$$u(x') - u(z') \leq f(x',z') + K$$
holds for every $x', z' \in B_{2r}$. Applying this inequality to each of the terms and then using Eq. 2.19, we get an upper estimate. On the other hand, let $|\tilde\nu_x| = |\tilde\nu_z| = 1$ be near-optimal directions for the corresponding lower bound. Combining these estimates with Eq. 2.18, dividing by 2 and using the midrange notation, we obtain (2.16).

Lemma 2.2 (Key lemma) Under the counter-assumption (2.14), for properly chosen constants $C$ and $N$, the strict opposite inequality to (2.16) holds; see Eq. 2.20. The proof of this lemma, which is the core of the present paper, will be presented in Sections 3 and 4, where a distinction depending on the value of $|x-z|$ is made.

Proof of the Key Lemma 2.2. Case $|x-z| > \frac{N}{10}\varepsilon$
In this case, $f_2(x,z) = 0$ and
$$f(x,z) = f_1(x,z) = C\,\omega(|x-z|) + M\,|x+z|^2, \quad (3.1)$$
where $x, z \in B_r$ have been fixed in Lemma 2.1 satisfying (2.15) with some fixed $\eta > 0$. Next fix $|\nu_x| = |\nu_z| = 1$ to be near-optimal directions in (2.16). Then a corresponding estimate holds for any pair of vectors $|\tilde\nu_x| = |\tilde\nu_z| = 1$, and Lemma 2.2 will follow if we can find appropriate vectors $|\tilde\nu_x| = |\tilde\nu_z| = 1$ making the resulting expression strictly negative; this is the inequality (3.2) to be proven. This we will show by using Taylor's expansion. But before that, since the explicit formula for $\omega$ given in Eq. 2.6 only holds in the range $[0,\omega_1]$, we first need to choose $C$ large enough to ensure that $|x-z| \leq \omega_1$. From Eq. 2.15 we have, in particular, that $u(x) - u(z) - f(x,z) > 0$ and, in consequence,
$$C\,\omega(|x-z|) \leq f(x,z) < u(x) - u(z) \leq 2\sup_{B_{2r}}|u|.$$
Choosing
$$C \geq \frac{2\sup_{B_{2r}}|u|}{\omega(\omega_1)}, \quad (3.3)$$
we get $\omega(|x-z|) \leq \omega(\omega_1)$, and $|x-z| \leq \omega_1$ follows from the monotonicity of $\omega$. In addition, by imposing that $C$ be large enough, we also ensure that $|x-z| \leq 1$.

Taylor's Expansion for F and Game Intuition
First, we need to compute the second-order Taylor expansion of $f(x+h_x, z+h_z)$, where $h_x$ and $h_z$ denote column vectors in $\mathbb R^n$. For that purpose, we start by introducing the following notation, which will be useful in what follows: let
$$v := \frac{x-z}{|x-z|}$$
and denote by $V$ the vector space $V = \operatorname{span}\{v\}$. Given $h \in \mathbb R^n$, we denote by $h_V$ the projection of $h$ on the space $V$ and by $h_{V^\perp}$ the modulus of the projection on the $(n-1)$-dimensional space of vectors orthogonal to $v$. That is,
$$h_V := \langle h, v\rangle\, v, \qquad h_{V^\perp} := |h - h_V|. \quad (3.5)$$
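The decomposition just introduced is straightforward to implement; this helper (ours) also illustrates the Pythagorean relation $|h|^2 = |h_V|^2 + h_{V^\perp}^2$ used repeatedly below.

```python
import numpy as np

def split_projection(h, v):
    """Return (h_V, h_Vperp): the projection of h on span{v} (a vector, |v|=1)
    and the modulus of the projection on the orthogonal complement."""
    h_V = np.dot(h, v) * v
    h_Vperp = np.linalg.norm(h - h_V)
    return h_V, h_Vperp
```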
Proof We need to compute each term in the second-order Taylor expansion (3.7). For that reason, we will make use of the formulas for the gradient and the Hessian of $f_1$. Plugging these into the terms in Eq. 3.7 and replacing them in the second-order Taylor expansion, we obtain the expansion (3.8). Moreover, since $|x-z| \leq \omega_1$, by the explicit form of the function $\omega$, Eq. 2.6, for every $|h_x|, |h_z| \leq \varepsilon$, Taylor's theorem yields the corresponding expansion for $\omega$ whenever $|x-z| > 2\varepsilon$. Since $1 < \gamma < 2$, using the hypothesis $|x-z| > \frac{N}{10}\varepsilon$ and choosing a large enough natural number $N \geq 40$ depending on $C$ and $\omega_0$, we can estimate the error term. On the other hand, since $|x-z| \leq 1$ and $\gamma - 2 < 0$, the last two terms in Eq. 3.8 are bounded. Finally, recalling the notation introduced in Eq. 3.5, we obtain (3.6).

Now we utilize the expansion (3.6) to obtain an estimate for the function $F$ defined in Eq. 2.17.

Lemma 3.2 Let $x, z$ be as at the beginning of this section and $|\nu_x| = |\nu_z| = 1$. Then there is a pair of matrices $P_{\nu_x}$ and $P_{\nu_z}$ such that $P_{\nu_x} e_1 = \nu_x$, $P_{\nu_z} e_1 = \nu_z$, and the function $F$ defined in Eq. 2.17 satisfies the estimate (3.10).

Proof First, replacing $h_x = \varepsilon\tilde\nu_x$ and $h_z = \varepsilon\tilde\nu_z$ in Eq. 3.6, we get an expansion (3.11) for the $\alpha(z)$-term in Eq. 2.17; similarly for the $\beta(x)$-term (3.12). Integrating with respect to the $(n-1)$-dimensional Lebesgue measure on $B^{e_1}$, the first-order terms vanish, while for the second-order terms we use the concavity of $\omega$ to estimate $\omega'' \leq 0$. Moreover, we can choose $P_{\nu_x}$ and $P_{\nu_z}$ satisfying a suitable alignment condition. Finally, for the last term (3.13) in Eq. 2.17: due to symmetry, the first-order terms containing $\zeta$ cancel out after integration over $B^{e_1}$, while for the second-order terms we use the rough estimate $\omega'' \leq 0$.
For the remaining term, we develop $(\nu_x - P_{\nu_z}\zeta)^2_{V^\perp}$ using the notation (3.5). Note that, again by symmetry, the cross term vanishes after integration and, since $(P_{\nu_z}\zeta)^2_{V^\perp} \leq 1$ for any $|\zeta| \leq 1$, we get the desired bound. Then, replacing each of the terms (3.11), (3.12) and (3.13) in the formula (2.17) for $F$, we get (3.10) and finish the proof.

Now we are in a position to explain the game intuition behind our proof of Lemma 2.2. Recall that the crucial point in proving the key Lemma 2.2 is that, given the choices $\nu_x, \nu_z$ of our opponent, we need to find appropriate vectors $|\tilde\nu_x| = |\tilde\nu_z| = 1$ so that Eq. 3.2 holds. Before moving on to the details, we give the intuition for our strategy. We mentioned already in Section 1.2 that the strategy of always pulling the points directly closer to each other does not provide the desired result in general. Hence, our response will depend on the opponent's choice. If the opponent chooses to pull the points almost as far apart from each other as possible, our response is to pull directly in the opposite direction by choosing $\tilde\nu_x = -\nu_x$ and $\tilde\nu_z = -\nu_z$. Otherwise, we just pull the points directly towards each other by choosing $\tilde\nu_x = -v$ and $\tilde\nu_z = v$, where $v = \frac{x-z}{|x-z|}$. The way of making the distinction is to consider the projection $(\nu_x - \nu_z)_V$ and fix a threshold $\Theta = \Theta(x,z)$. As we will see in Eq. 3.17, the particularities of our comparison function $f_1$ make it necessary to require the function $\alpha(\cdot)$ to be Hölder continuous (with a Hölder exponent $s > 0$) and to choose the threshold depending both on the distance of the points and on the Hölder exponent of $\alpha$,
$$\Theta(x,z) = |x-z|^s \in (0,1].$$
Now we continue with the proof of the key Lemma 2.2.
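The case distinction just described can be written out as a small decision rule (a sketch in our notation, with $\Theta = |x-z|^s$ the threshold from the text):

```python
import numpy as np

def our_response(x, z, nu_x, nu_z, s):
    """Threshold strategy: if the opponent's pull is nearly antipodal along
    v = (x - z)/|x - z| (Case 1), pull straight against it; otherwise
    (Case 2) pull the two tokens directly towards each other."""
    v = (x - z) / np.linalg.norm(x - z)
    threshold = np.linalg.norm(x - z) ** s        # Theta = |x - z|**s
    proj_sq = np.dot(nu_x - nu_z, v) ** 2         # squared projection on V
    if proj_sq >= 4.0 - threshold:                # Case 1
        return -nu_x, -nu_z
    return -v, v                                  # Case 2
```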

Case 1: $(\nu_x - \nu_z)^2_V \geq 4 - \Theta$
This is the case where the opponent plays by pulling the points almost in opposite directions. In this case, as a response to the choices of our opponent, we select $\tilde\nu_x = -\nu_x$ and $\tilde\nu_z = -\nu_z$. Replacing these in the right-hand side of Eq. 3.2 and recalling the expansion (3.10), it turns out that the first-order terms cancel out and we arrive at the estimate (3.14). Recalling the properties of the function $\omega$, namely $\frac12 \leq \omega' \leq 1$ and $\omega'' \leq 0$, we can continue the estimate, where the inequality $(\nu_x - \nu_z)^2_V \geq 4 - \Theta \geq 3$ has been used together with $\alpha(z) \geq \alpha_{\min}$. Thus, we need to obtain estimates for $(\nu_x - \nu_z)^2_{V^\perp}$, $|\nu_x + \nu_z|^2$ and $(\nu_x)^2_{V^\perp}$. The first one follows directly from the hypothesis and the Pythagorean theorem, while for the second one we recall the parallelogram law,
$$|\nu_x + \nu_z|^2 = 4 - |\nu_x - \nu_z|^2. \quad (3.15)$$
Consequently, we obtain an estimate for $(\nu_x)^2_{V^\perp}$. Thus, recalling that $\beta(x) = 1 - \alpha(x)$ and $\alpha(x) \geq \alpha(z)$, and replacing this estimate in Eq. 3.14, we get (3.16). Then, by inserting (2.7) with $\gamma = 1+s$, using the precise choice of the threshold $\Theta = |x-z|^s$ and the Hölder estimate (2.10) for the function $\alpha(\cdot)$, we obtain (3.17). Then, fixing $\omega_0$ in (3.18) and replacing these in Eq. 3.16, we get an expression whose negativity is ensured by choosing $C$ large enough,
$$C > 2(4M+1), \quad (3.19)$$
where $M$ is the constant fixed in Eq. 2.9, and Eq. 3.2 is proven.
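The parallelogram law for unit vectors, $|\nu_x + \nu_z|^2 = 4 - |\nu_x - \nu_z|^2$, can be sanity-checked numerically (script ours):

```python
import numpy as np

rng = np.random.default_rng(0)
# For unit vectors a, b: |a+b|^2 + |a-b|^2 = 2|a|^2 + 2|b|^2 = 4.
for _ in range(100):
    a = rng.standard_normal(5); a /= np.linalg.norm(a)
    b = rng.standard_normal(5); b /= np.linalg.norm(b)
    assert abs(np.linalg.norm(a + b) ** 2 - (4.0 - np.linalg.norm(a - b) ** 2)) < 1e-12
```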

Case 2: $(\nu_x - \nu_z)^2_V < 4 - \Theta$
In this case, by Eq. 3.15, we have $|\nu_x + \nu_z|^2 > \Theta$ (3.20). As we noted before, this corresponds to the case where the opponent is not playing near-optimally. Then, as a response to her choices, we choose $\tilde\nu_x = -v$ and $\tilde\nu_z = v$. Replacing these in Eq. 3.10, estimating the $\omega''$-term directly by zero, and then using Eq. 3.10 in Eq. 3.2 together with the rough estimate $\omega'' \leq 0$, we arrive at the inequality (3.21). Let us estimate the first term in Eq. 3.21. For this we focus on the quantity $|x+z|$. Since $x$ and $z$ are points in $B_r$ satisfying (2.15), taking into account the explicit form (3.1) of the function $f$ in this section and the fact that $\omega$ is a positive function, we can rearrange terms and take the square root to obtain an upper bound for $|x+z|$. At this point, we recall a previous local regularity result from [1] stating that a function $u = u_\varepsilon$ satisfying (1.1) is asymptotically Hölder continuous for some exponent $\delta \in (0,1)$, that is,
$$|u(x) - u(z)| \leq C_u\big(|x-z|^\delta + \varepsilon^\delta\big)$$
for some constant $C_u > 0$ depending on $\alpha_{\min}$, $\alpha_{\max}$, $n$, $r$, $\sup_{B_{2r}}|u|$ and $\delta$. In particular, using this inequality, recalling that $|x-z| > \frac{N}{10}\varepsilon$, and choosing a large enough $N \in \mathbb N$ ($N \geq 10$), the first term in Eq. 3.21 is bounded. Then, replacing this bound and the Hölder regularity estimate for $\alpha$ in Eq. 3.21, we get an inequality whose terms in braces remain to be estimated. One special case occurs when $(\nu_x - \nu_z)^2_{V^\perp} \leq \Theta$: then the remaining terms can easily be estimated by using the hypothesis (3.20), and the desired result follows. However, we do not have any control on the size of this term in general and, for that reason, we need to introduce a new variable $\vartheta \geq 1$ comparing $(\nu_x - \nu_z)^2_{V^\perp}$ with the threshold $\Theta$. When $\vartheta > 1$, the relevant inequality holds by construction; by Eq. 3.20, it also holds when $\vartheta = 1$. Therefore, by Eqs. 3.24 and 3.25, we control the first term in brackets. For the second term in brackets, using the parallelogram law we get (3.27) and, since $\Theta = |x-z|^s$ and $\vartheta \geq 1$, the estimate (3.28) follows. Then, combining (3.26), (3.27) and (3.28), and since $\alpha(x) \geq \alpha(z)$, we obtain an estimate that, choosing $N \in \mathbb N$ large enough (3.29) and using $\alpha(z) \geq \alpha_{\min}$, simplifies further. Finally, recalling (3.20), $\omega' \geq \frac12$, $\Theta = |x-z|^s$ and $s = \gamma - 1 = \delta/2$, we obtain the final estimate. Choosing $C$ large enough (3.30), depending on $M$, $C_\alpha$, $\alpha_{\min}$ and $C_u$, we ensure that Eq. 3.2 holds.

Proof of the Key Lemma 2.2. Case $|x-z| \leq \frac{N}{10}\varepsilon$
In the previous section we proved Lemma 2.2 in the case $|x-z| > \frac{N}{10}\varepsilon$. The other case, $|x-z| \leq \frac{N}{10}\varepsilon$, is similar to [1]. In Section 2.2 we briefly commented that in this case we need the annular step function $f_2$. Recalling (3.6), and choosing $C$ large enough (4.1), we obtain a rough estimate (4.2) for $f_1$. Together with $f_2 \geq 0$, these estimates yield (4.3) and (4.4). Recalling the definition (2.11) of the annular step function, fix $i \in \{0, 1, 2, \ldots, N\}$ such that $(x,z) \in A_i$ and choose $|\tilde\nu_x| = |\tilde\nu_z| = 1$ such that $(x+\varepsilon\tilde\nu_x, z+\varepsilon\tilde\nu_z) \in A_{i-1}$. Then, for $C > 1$ large enough (4.5), we can estimate the drop of $f_2$ across consecutive annuli, where we use $f_2 \geq 0$ in the second inequality and $\alpha_{\min} > 0$ in the last one. Therefore, by $f = f_1 - f_2$ and Eq. 4.3, combining with Eq. 4.4 and choosing $C$ large enough, we get (3.2). This proves Lemma 2.2 in the case $|x-z| \leq \frac{N}{10}\varepsilon$.

An Alternative Game for the Case $p(x) > 2$

As we noted at the beginning of this work, the authors in [1] showed that the solutions $u_\varepsilon$ of the DPP (1.1) converge uniformly as $\varepsilon \to 0$ to a viscosity solution of the normalized $p(x)$-Laplace equation, where $p$ is a continuous function. In this section we consider a different DPP whose solutions are asymptotically related in the same way to the normalized $p(x)$-Laplace equation when $p(x) > 2$ for all $x \in \Omega$. Given a bounded domain $\Omega \subset \mathbb R^n$ and small enough $\varepsilon > 0$, let $u = u_\varepsilon : \Omega \to \mathbb R$ be a function satisfying the DPP
$$u(x) = \frac{\alpha(x)}{2}\Big(\sup_{B_\varepsilon(x)} u + \inf_{B_\varepsilon(x)} u\Big) + \beta(x)\fint_{B_\varepsilon(x)} u\, dy \quad (5.1)$$
for $x \in \Omega$, where $\alpha : \Omega \to (0,1]$ and $\beta : \Omega \to [0,1)$ are continuous probability functions depending on $p$ and defined as
$$\alpha(x) = \frac{p(x)-2}{p(x)+n}, \qquad \beta(x) = \frac{n+2}{p(x)+n}.$$
The DPP (5.1) is related to a slightly different tug-of-war game than the DPP (1.1). The main difference is that, in this case, the random noise can displace the token to any point in the $n$-dimensional ball $B_\varepsilon(x)$, instead of moving it to a random point in the orthogonal $(n-1)$-dimensional ball $B^\nu_\varepsilon(x)$, where $\nu$ is the direction chosen by the winner of the toss. That is, the possible random displacement of the token in a single step is not affected by the choices of the players. For more details, see [8], where this game is described for fixed $\alpha$ and $\beta$.
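A sketch of one round of this variant (ours, analogous to the earlier sketch for the game behind (1.1)): with probability $\alpha(x)$ the winner of a fair coin toss moves the token by his chosen vector, and with probability $\beta(x)$ the token jumps uniformly in the full ball $B_\varepsilon(x)$, regardless of the players' choices.

```python
import numpy as np

def ball_noise_step(x, nu_plus, nu_minus, alpha, eps, rng):
    """One round of the variant game; beta(x) = 1 - alpha(x)."""
    n = x.size
    if rng.random() < alpha(x):
        nu = nu_plus if rng.random() < 0.5 else nu_minus  # fair coin toss
        return x + eps * nu
    # with probability beta(x): uniform point in the n-dimensional ball B_eps(x)
    y = rng.standard_normal(n)
    y *= rng.random() ** (1.0 / n) / np.linalg.norm(y)
    return x + eps * y
```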
In a previous result (see [6, Section 5]) it was shown that, for a given bounded domain $\Omega \subset \mathbb R^n$ and $B_{2r}(x_0) \subset \Omega$, a solution $u = u_\varepsilon$ of Eq. 5.1 satisfies the asymptotic Hölder estimate
$$|u(x) - u(z)| \leq C\big(|x-z|^\delta + \varepsilon^\delta\big) \quad \text{for every } x, z \in B_r(x_0) \quad (5.2)$$
for some exponent $\delta \in (0,1)$. As in the case studied in the previous sections, provided that the function $p$ is Hölder continuous, that is,
$$|p(x) - p(y)| \leq C_p\,|x-y|^s$$
for every $x, y \in \Omega$ and some $C_p > 0$ and $s \in (0,1)$, the asymptotic estimate (5.2) can be shown with $\delta = 1$.

Theorem 5.1 Assume that $p$ is Hölder continuous as above and let $B_{2r}(x_0) \subset \Omega$. Then, for a solution $u$ of Eq. 5.1 it holds that
$$|u(x) - u(z)| \leq C\big(|x-z| + \varepsilon\big) \quad \text{for every } x, z \in B_r(x_0),$$
for some constant $C > 0$ depending on $p_{\min}$, $C_p$, $n$, $r$ and $\sup_{B_{2r}} u$.
We show that the asymptotic regularity result for solutions $u$ of Eq. 5.1 stated in the previous theorem can be directly derived from the arguments in Sections 3 and 4. Let us rewrite (5.1) using the midrange notation introduced at the beginning of this article. Since the $\beta(x)$-term of the DPP does not depend on any parameter, Eq. 5.1 can be written as
$$u(x) = \operatorname*{mid}_{|\nu|=1}\, \alpha(x)\, u(x+\varepsilon\nu) + \beta(x)\fint_{B} u(x+\varepsilon\zeta)\, d\zeta,$$
where $B = B_1(0)$ stands for the unit ball centered at the origin; this is the analogue of Eqs. 2.4 and 2.5, respectively. Thus, given $x, z \in B_r$ and $h_x, h_z \in B$, and assuming without loss of generality that $\alpha(x) \geq \alpha(z)$, we analogously obtain the corresponding estimates. Proceeding by contradiction in the same way as in Section 2 (see Lemma 2.1), we end up defining a function $F$ as follows:
$$F(x,z,h_x,h_z) := F(f,x,z,h_x,h_z,\varepsilon) := \alpha(z)\, f(x+\varepsilon h_x, z+\varepsilon h_z) + \cdots \quad (5.3)$$
for $h_x, h_z \in B$, and we show the following expansion (5.4) for $F$:
Integrating over $B$, the first-order term vanishes, which gives (5.5); similarly we obtain (5.6). Finally, for the last term in Eq. 5.3: due to symmetry, the first-order terms containing $\zeta$ cancel out after integration over $B$, while for the second-order terms we use the rough estimate $\omega'' \leq 0$. For the remaining term, we develop $(h_x - \zeta)^2_{V^\perp}$ using the notation (3.5). Note that, again by symmetry, the cross term vanishes after integration and, since $\zeta^2_{V^\perp} \leq 1$ for any $|\zeta| \leq 1$, we get (5.7). Then, replacing (5.5), (5.6) and (5.7) in Eq. 5.3, we get (5.4).
Note that Lemma 5.2 is the analogue of Lemma 3.2 in the case $2 < p(x) \leq \infty$. The next step is to show the key Lemma 2.2 for the function $F$ defined in Eq. 5.3. In fact, since $\beta(x)\,|h_x + h_z|^2 \geq 0$, the expansion (5.4) for $F$ is bounded from above by an expression which contains exactly the same terms as Eq. 3.10, its analogue in Section 3. Thus, proceeding exactly as in Section 3, we prove the key lemma in the case $|x-z| > \frac{N}{10}\varepsilon$. Finally, repeating the argument from Section 4, we show the key lemma in the case $|x-z| \leq \frac{N}{10}\varepsilon$, and thus we conclude the proof of Theorem 5.1. Observe that the above proof can be modified to yield stability when $p(x)$ is close to 2. To this end, one should use a mirror-point coupling for the noise term, as is done in [6] in the case of Hölder regularity. However, for consistency with the previous sections, we have made this expository choice here.