Game-theoretic approach to H\"{o}lder regularity for PDEs involving eigenvalues of the Hessian

We prove a local H\"{o}lder estimate with an exponent $0<\delta<\frac 12$ for solutions of the dynamic programming principle $$u^\varepsilon (x) =\sum_{j=1}^n \alpha_j\inf_{\dim(S)=j}\sup_{\substack{v\in S\\ |v|=1}}\frac{ u^\varepsilon (x + \varepsilon v) + u^\varepsilon (x - \varepsilon v)}{2}.$$ The proof is based on a new coupling idea from game theory. As an application, we get the same regularity estimate for viscosity solutions of the PDE $$\sum_{i=1}^n \alpha_i\lambda_i(D^2u)=0,$$ where $\lambda_1(D^2 u)\leq\cdots\leq \lambda_n(D^2 u)$ are the eigenvalues of the Hessian.

Main Theorem. Let u^ε be a function satisfying the DPP (1.1) in a bounded domain Ω. Then for any 0 < δ < 1/2 and x, z ∈ B_r with B_2r ⊂ Ω, there exists a constant C = C(δ, α_1, α_n) > 0 such that $$|u^\varepsilon(x) - u^\varepsilon(z)| \leq \frac{C}{r^\delta}\,\sup_{B_{2r}}|u^\varepsilon|\,\big(|x - z|^\delta + \varepsilon^\delta\big).$$ That the above theorem holds for any 0 < δ < 1/2 is explicitly obtained in the proof, see (3.14).
The DPP (1.1) has a connection to a certain PDE involving eigenvalues of the Hessian. Indeed, under certain regularity assumptions on the boundary of the domain, as ε → 0, solutions of (1.1) converge uniformly to the unique viscosity solution of the PDE $$\sum_{i=1}^n \alpha_i\lambda_i(D^2u)=0,$$ where λ_1(X) ≤ ⋯ ≤ λ_n(X) are the ordered eigenvalues of X ∈ S(n), the set of n × n real symmetric matrices.
As a consequence of our main result, we obtain the same Hölder estimate for any 0 < δ < 1/2 for viscosity solutions of this PDE. For the proof of the following corollary, see Section 2.3.
Corollary 1.1. Let u be the viscosity solution of (1.3) in a bounded domain Ω. Then for any 0 < δ < 1/2 and x, z ∈ B_r with B_2r ⊂ Ω, there exists a constant C = C(δ, α_1, α_n) > 0 such that $$|u(x) - u(z)| \leq \frac{C}{r^\delta}\,\sup_{B_{2r}}|u|\,|x - z|^\delta.$$ We remark that the equation $\sum_{i=1}^n \alpha_i\lambda_i = 0$ satisfies Pucci-type inequalities, and thus Hölder regularity follows directly from the general theory, even though the exponent is not explicitly given there (see the beginning of Section 4).
DPPs of type (1.1) model a competition between two players, and such games arise in different applications. For example, in the context of related tug-of-war games, they have been suggested in connection with the option pricing problem under market manipulation [NP17]. To be more precise, for given boundary data, the solution of the DPP (1.1) is also the value function of a two-player zero-sum stochastic game whose rules can be read off from the DPP. We describe the game in more detail in the next section, but informally: a token is placed at x ∈ Ω, and a number j ∈ {1, ..., n} is chosen at random, j being selected with probability α_j. The player aiming to minimize the value then chooses a j-dimensional subspace of R^n, and finally the player aiming to maximize the value chooses a unit vector from that subspace. The token is then moved an ε-step either in the direction of the vector or in the opposite direction, with equal probabilities. The game continues until the token exits Ω, at which point the player choosing subspaces pays the amount given by the boundary payoff function to the other player.
To the best of our knowledge, there are no prior works studying local regularity of DPPs or games related to fully nonlinear PDEs involving eigenvalues of the Hessian.
The game that we just described is connected to the PDE (1.3). This connection has been studied in detail by Blanc and Rossi [BR19b] for the PDE λ_j(D²u) = 0, where j ∈ {1, ..., n}. See also [BLM20] for a game associated to the dominative p-Laplacian and [HR20, BER20] for games associated to parabolic versions of these equations.
The rest of the paper is organized as follows. In the following subsection, we give a more detailed idea of our proof method. In Section 2 we give some preliminary definitions and results for viscosity solutions. In Section 3 we prove the main result for the special case α_1 = α_n = 1/2. In Section 4 we prove the main theorem.
1.2. Method of the proof. Although the ideas behind the proof of our main theorem stem from games, we do not use methods from stochastic game theory. Instead, our starting point is the coupling method introduced by Luiro and Parviainen [LP18] in the context of tug-of-war games. However, a direct application of their method does not seem to work in our case, so we need a new type of coupling, which is the main novelty of this work.
To give an idea of the proof, for simplicity we discuss the special case α_1 = α_n = 1/2, in which case the DPP (1.1) can be written as $$u^\varepsilon(x) = \frac12 \inf_{|v|=1}\frac{u^\varepsilon(x+\varepsilon v)+u^\varepsilon(x-\varepsilon v)}{2} + \frac12 \sup_{|v|=1}\frac{u^\varepsilon(x+\varepsilon v)+u^\varepsilon(x-\varepsilon v)}{2}.$$ The starting point of the coupling is to define a 2n-dimensional game related to the DPP. Notice that the function g : Ω × Ω → R given by g(x, z) = u^ε(x) − u^ε(z) can be written as a solution of a suitable DPP in R^2n. The potential of this DPP is to transform the question of the regularity of u^ε into the question of the absolute size of g. The heuristic idea is to introduce a suitable stochastic game in Ω × Ω, where we aim to move the two tokens to the diagonal set T = {(x, x) : x ∈ Ω}, where g = 0, before our opponent can move the tokens outside of the set Ω × Ω.
Following the idea of Luiro and Parviainen, we could consider a 2n-dimensional game where each player (each with probability 1/2) gets to choose v and w, and then the tokens move to (x, z) + ε(v, w) or to (x, z) − ε(v, w), each possibility with probability one half. If we set the boundary values of our game to be 0 on T and 2 sup u^ε in R^2n \ (Ω × Ω), and could prove that |g(x, z)| ≤ C|x − z|^δ, we would get the desired Hölder estimate for the function u^ε.
Unfortunately, these rules for the 2n-dimensional game do not seem suitable for obtaining regularity estimates in our case. The problem is that our opponent can force the tokens away from each other by choosing w = −v normal to x − z. Observe that if the new position of the tokens is given by (x', z') = (x, z) + ε(v, w), we get |x' − z'|² = |x − z|² + 4ε². The same holds for (x', z') = (x, z) + ε(−v, −w).
Observe that an analogous identity also holds for the coupling in which the tokens move to (x, z) ± ε(v, −w). Again, the rules that follow from this formula allow our opponent to force the tokens away from each other, in this case by choosing w = v normal to x − z.
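The two growth computations above can be checked numerically. The following is a minimal pure-Python sketch; the specific points and vectors are our own illustrative choices, not part of the construction.

```python
def dist2(x, z):
    """Squared distance |x - z|^2 between the two tokens."""
    return sum((a - b) ** 2 for a, b in zip(x, z))

def coupled_move(x, z, v, w, eps, sign):
    """New token positions (x, z) + sign * eps * (v, w)."""
    return ([a + sign * eps * b for a, b in zip(x, v)],
            [a + sign * eps * b for a, b in zip(z, w)])

x, z, eps = [1.0, 0.0], [-1.0, 0.0], 0.1   # x - z along the first axis
v = [0.0, 1.0]                              # unit vector normal to x - z

# coupling (v, w) with the adversarial choice w = -v:
# both outcomes give |x' - z'|^2 = |x - z|^2 + 4 eps^2
for sign in (+1, -1):
    xp, zp = coupled_move(x, z, v, [-c for c in v], eps, sign)
    assert abs(dist2(xp, zp) - (dist2(x, z) + 4 * eps ** 2)) < 1e-12

# coupling (v, -w) with the adversarial choice w = v: the actual step
# is again along (v, -v), so the distance grows by the same amount
for sign in (+1, -1):
    w = list(v)                             # w = v, normal to x - z
    xp, zp = coupled_move(x, z, v, [-c for c in w], eps, sign)
    assert abs(dist2(xp, zp) - (dist2(x, z) + 4 * eps ** 2)) < 1e-12
```

In both adversarial scenarios the squared distance increases by exactly 4ε² per round, which is the obstruction the new coupling is designed to remove.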
In conclusion, with the rules that we have described, we do not get a suitable coupling. Our new idea is to couple the moves in one way or the other depending on the choice of the vectors v and w. When the tokens are placed at x and z, given v and w we define two rules for moving the tokens: (i) the tokens move to (x, z) + ε(v, w) or to (x, z) − ε(v, w), each possibility with probability one half; (ii) the tokens move to (x, z) + ε(v, −w) or to (x, z) − ε(v, −w), each possibility with probability one half.
Let us define the 2n-dimensional game. We toss a coin and the winner of the toss chooses two unit vectors v and w. Set y = x − z. We also write $$v_{y^\perp} = v - \frac{\langle v, y\rangle}{\langle y, y\rangle}\,y \quad\text{and}\quad w_{y^\perp} = w - \frac{\langle w, y\rangle}{\langle y, y\rangle}\,y.$$ If |v_{y⊥}|² + |w_{y⊥}|² > 1 and ⟨v_{y⊥}, w_{y⊥}⟩ < 0, then the tokens move according to rule (ii); in every other case the tokens move according to rule (i). Defining F accordingly, as recorded in (1.4), we obtain the DPP for our 2n-dimensional game. Observe that in the case of Figure 1A, that is, when a player selects w = −v normal to x − z, the tokens move according to rule (ii). The coupled step is then ±ε(v, −w) = ±ε(v, v), so both tokens are translated by the same vector and the distance between the tokens is preserved. Now consider the case in Figure 1B, that is, v and w are normal to x − z and also to each other. If the new position of the tokens is given by (x', z') = (x, z) + ε(v, w), we get |x' − z'|² = |x − z|² + 2ε²; we get the same if (x', z') = (x, z) + ε(v, −w). Then the tokens are forced away from each other regardless of which rule we select. Still, observe that this growth is smaller than the one we were getting in the case of Figure 1A when applying rule (i), since in that case |x' − z'|² = |x − z|² + 4ε². We claim that our choice of when to apply (i) or (ii) reduces the players' ability to push the tokens away from each other, and this is the key of the matter.
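The rule-selection criterion — rule (ii) exactly when |v_{y⊥}|² + |w_{y⊥}|² > 1 and ⟨v_{y⊥}, w_{y⊥}⟩ < 0 — can be written as a small function; the sketch below is our own illustration and checks the two situations of Figure 1.

```python
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def perp_part(u, y):
    """Component of u orthogonal to y:  u - (<u,y>/<y,y>) y."""
    c = dot(u, y) / dot(y, y)
    return [ui - c * yi for ui, yi in zip(u, y)]

def coupling_rule(v, w, y):
    """Rule (ii), i.e. couple (v, -w), exactly when
    |v_perp|^2 + |w_perp|^2 > 1 and <v_perp, w_perp> < 0;
    otherwise rule (i), i.e. couple (v, w)."""
    vp, wp = perp_part(v, y), perp_part(w, y)
    if dot(vp, vp) + dot(wp, wp) > 1 and dot(vp, wp) < 0:
        return "ii"
    return "i"

y = [2.0, 0.0, 0.0]            # y = x - z
v = [0.0, 1.0, 0.0]            # unit vector normal to y

# Figure 1A: w = -v normal to y -> rule (ii); the coupled step (v, -w) = (v, v)
# translates both tokens by the same vector, preserving |x - z|
assert coupling_rule(v, [0.0, -1.0, 0.0], y) == "ii"

# Figure 1B: v and w normal to y and to each other -> rule (i),
# and the distance grows only by 2 eps^2 per step instead of 4 eps^2
assert coupling_rule(v, [0.0, 0.0, 1.0], y) == "i"
```

When v and w are parallel to y the perpendicular parts vanish, so rule (i) applies and the step is a pure stretch or contraction along y.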

Preliminaries
In this section, we include some preliminary results concerning solutions to the equation and the game.
If u is continuous and satisfies both of (a) and (b), we say that u is a viscosity solution to (1.3).
Next, we prove a comparison principle, and thus uniqueness, for our operator. It would follow from [BGRP21], but here we give a simple alternative proof for this particular operator.
Remark 2.2. If M is a Hermitian matrix, it is diagonalizable with real eigenvalues, and by the min-max theorem those eigenvalues satisfy $$\lambda_j(M) = \min_{\dim(S)=j}\max_{\substack{v\in S\\ |v|=1}} \langle Mv, v\rangle$$ for j = 1, ..., N, and we can use this identity for the Hessian matrix.
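The min-max identity can be verified numerically in a simple case. The sketch below, an illustration of ours for a symmetric 2×2 matrix, parametrizes one-dimensional subspaces by an angle and compares against the closed-form eigenvalues.

```python
import math

def quad_form(M, theta):
    """<M v, v> for the unit vector v = (cos theta, sin theta)."""
    c, s = math.cos(theta), math.sin(theta)
    return M[0][0] * c * c + 2 * M[0][1] * c * s + M[1][1] * s * s

def eigs_2x2(M):
    """Ordered eigenvalues of a symmetric 2x2 matrix, in closed form."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] ** 2
    d = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - d, tr / 2 + d

M = [[3.0, 1.0], [1.0, 2.0]]
qs = [quad_form(M, k * math.pi / 2000) for k in range(2000)]
l1, l2 = eigs_2x2(M)

# j = 1: min over 1-dim subspaces; the inner max over {v, -v} is the form itself
assert abs(min(qs) - l1) < 1e-4
# j = n = 2: the only 2-dim subspace is R^2, so only the max remains
assert abs(max(qs) - l2) < 1e-4
```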
Theorem 2.3. Let u_1 ∈ USC(Ω̄) be a viscosity subsolution of (1.3) and u_2 ∈ LSC(Ω̄) a viscosity supersolution of (1.3) with u_1 ≤ u_2 on ∂Ω. Then u_1 ≤ u_2 in Ω. Proof. First, we make the counter proposition sup_Ω (u_1 − u_2) > 0. Then observe that the equation can be written using the min-max characterization of the eigenvalues, and, doubling the variables in the standard way, let (x_ε, y_ε) be the maximum point on Ω̄ × Ω̄. The points x_ε, y_ε are not at the boundary of the bounded domain Ω for small enough δ by the standard theory. Then by the theorem of sums [CIL92] we obtain matrices X and Y with X ≤ Y. Furthermore, let η > 0, let S_j be a j-dimensional subspace of R^n, and let v_j ∈ S_j with |v_j| = 1 be chosen so that the min-max values for ⟨Xv, v⟩ are attained up to an error η.
Then we arrive at an estimate bounded by 2Cδ, which is a contradiction for small enough η > 0.
It might also be instructive to consider some special cases, for example the first eigenvalue equation λ_1(D²u) = 0. Observe that uniqueness immediately follows from the comparison principle for viscosity solutions of the boundary value problem with given continuous boundary values g : ∂Ω → R. Also observe that if the domain is strictly convex, a plane can act as a barrier. Then the solution obtained by Perron's method turns out to be continuous up to the boundary. A weaker condition for the existence of continuous solutions in smooth domains can be found in [HL09].

2.2. Games.
A game associated with the equation λ_j(D²u) = 0 was introduced in [BR19b]. Here we modify the game so that it is associated with equation (1.3). Next, we give the precise formulation of the game.
It is a two-player zero-sum game. Fix a domain Ω ⊂ R^N, ε > 0 and a final payoff function G : R^N \ Ω → R. The rules of the game are the following: the game starts with a token at an initial position x_0 ∈ Ω and develops in rounds. At the beginning of each round, j ∈ {1, ..., n} is chosen at random with P(j = i) = α_i for each i = 1, ..., n. With this given value, Player I chooses a subspace S of dimension j, and subsequently Player II chooses a unit vector v ∈ S. Then the position of the token is moved to x ± εv with equal probabilities. The game continues until the token leaves the domain, at a stopping time that we call τ, when Player I pays G(x_τ) to Player II. When the two players fix their strategies S_I and S_II, we can compute the expected outcome $\mathbb{E}^{x_0}_{S_I, S_{II}}[G(x_\tau)]$. Then the value of the game for any x_0 ∈ Ω is defined as $$u^\varepsilon(x_0) = \inf_{S_I}\sup_{S_{II}} \mathbb{E}^{x_0}_{S_I, S_{II}}[G(x_\tau)],$$ and it verifies the DPP (1.1) for x ∈ Ω, with u^ε(x) = G(x) for x ∉ Ω, see [BR19a]. Intuitively, the rules of the game can be seen from the DPP: when Player I, who aims to minimize the value, chooses a subspace, she knows that Player II aims to choose from that subspace a unit vector maximizing the average 'ε-step value'.
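To illustrate the rules (not the optimal strategies), one can simulate the game in the plane with simple heuristic strategies. Everything below — the domain, the payoff, and both strategies — is an illustrative choice of ours, not part of the paper's construction: the minimizer plays tangentially when j = 1, the maximizer plays radially when j = 2.

```python
import random, math

def play_round(x, eps, alpha1):
    """One round in the plane (n = 2): j = 1 with probability alpha1,
    j = 2 otherwise.  Heuristic strategies: the minimizer (choosing a
    1-dim subspace when j = 1) plays tangentially to x, the maximizer
    (j = 2) plays radially."""
    r = math.hypot(x[0], x[1])
    radial = (1.0, 0.0) if r == 0.0 else (x[0] / r, x[1] / r)
    if random.random() < alpha1:
        v = (-radial[1], radial[0])   # unit vector tangential at x
    else:
        v = radial                    # unit vector radial at x
    s = eps if random.random() < 0.5 else -eps
    return (x[0] + s * v[0], x[1] + s * v[1])

def play_game(x0, eps, alpha1, G):
    """Run the game until the token exits the unit disk; return the payoff."""
    x = x0
    while math.hypot(x[0], x[1]) < 1.0:
        x = play_round(x, eps, alpha1)
    return G(x)

random.seed(0)
G = lambda x: math.hypot(x[0], x[1])   # boundary payoff: |x| at the exit point
vals = [play_game((0.5, 0.0), 0.05, 0.5, G) for _ in range(500)]
est = sum(vals) / len(vals)
# each game exits through the annulus 1 <= |x| < 1 + eps
assert 1.0 <= est < 1.05
```

Since each step has length ε, the token always exits through the annulus of width ε around ∂Ω, which is the discontinuity scale mentioned later for solutions of ε-DPPs.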
2.3. Application to the PDE (1.3). We give a brief explanation of the relation between (1.1) and (1.3), and of how to prove Corollary 1.1 using our main theorem.
If we assume that the domain is strictly convex, we obtain that (2.2) u^ε → u uniformly as ε → 0, where u is the unique solution to (1.3). Observe that in [BR19b] a condition on the boundary is given for each j. This condition is used to prove that the game value is asymptotically continuous near the boundary; it is in this step that we use the strict convexity of the domain. Then the convergence is obtained following the usual path, see [PS08, MPR12, BR19a]. We use the asymptotic version of the Arzelà–Ascoli theorem to pass to the limit, and combining the DPP with the definition of viscosity solutions allows us to deduce that the limit is the unique viscosity solution. Observe that a weaker condition on the domain may be enough depending on the α_i, but we do not address this question here.
The connection between the DPP and the PDE can be intuitively seen by recalling the min-max characterization of the eigenvalues (Remark 2.2) and observing that, by a second-order Taylor expansion, $$\frac{u(x+\varepsilon v) + u(x-\varepsilon v)}{2} - u(x) \approx \frac{\varepsilon^2}{2}\,\langle D^2u(x)\,v, v\rangle.$$ Assume that the estimate (1.2) holds for every ε > 0. Since a ball is strictly convex, we obtain the convergence (2.2) there. Passing to the limit as ε → 0 in the main theorem, we obtain a Hölder estimate for the limit function u, which is what we stated as Corollary 1.1.

Regularity for a DPP related to ½λ_1 + ½λ_n
We first focus on the case α_1 = α_n = 1/2. Note that in this case α_2 = ⋯ = α_{n−1} = 0, since we assume $\sum_{j=1}^n \alpha_j = 1$. Then the DPP (1.1) simplifies to $$u^\varepsilon(x) = \frac12 \inf_{|v|=1}\frac{u^\varepsilon(x+\varepsilon v)+u^\varepsilon(x-\varepsilon v)}{2} + \frac12 \sup_{|v|=1}\frac{u^\varepsilon(x+\varepsilon v)+u^\varepsilon(x-\varepsilon v)}{2}. \tag{3.1}$$ The game starts with a token at an initial position x_0 ∈ Ω. At every round, a fair coin is tossed and the winner of the toss chooses a vector v ∈ R^n with |v| = 1. Then the position of the token is moved to either x_0 + εv or x_0 − εv, with equal probabilities. The game ends when the token leaves the domain, and we define the game value as before. The game value satisfies the DPP (3.1) for x ∈ Ω, with u^ε(x) = G(x) for x ∉ Ω. In this section, we obtain the regularity result for solutions to the DPP (3.1). As we have mentioned, we employ the method introduced in [LP18]. For the reader's convenience, we provide a full proof.
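As an illustration of why (3.1) corresponds to the operator ½λ_1 + ½λ_n (this check is ours, not part of the proof): for a quadratic u the ε-averages are exact, (u(x+εv)+u(x−εv))/2 = u(x) + (ε²/2)⟨D²u v, v⟩, so the saddle u(x) = x_1² − x_2², whose Hessian has eigenvalues −2 and 2, satisfies (3.1) exactly.

```python
import math

def u(x):
    # saddle quadratic: D^2 u = diag(2, -2), eigenvalues -2 and 2,
    # so (1/2) lambda_1 + (1/2) lambda_2 = 0
    return x[0] ** 2 - x[1] ** 2

def eps_average(x, v, eps):
    """(u(x + eps v) + u(x - eps v)) / 2."""
    return (u((x[0] + eps * v[0], x[1] + eps * v[1]))
            + u((x[0] - eps * v[0], x[1] - eps * v[1]))) / 2

def dpp_rhs(x, eps, ndirs=720):
    """Right-hand side of (3.1): half the inf plus half the sup of the
    eps-average over unit directions (discretized; a multiple of 4
    directions contains the exact extremal directions of this form)."""
    vals = [eps_average(x, (math.cos(2 * math.pi * k / ndirs),
                            math.sin(2 * math.pi * k / ndirs)), eps)
            for k in range(ndirs)]
    return 0.5 * min(vals) + 0.5 * max(vals)

x, eps = (0.3, -0.7), 0.1
assert abs(dpp_rhs(x, eps) - u(x)) < 1e-6
```

The infimum contributes u(x) − ε² and the supremum u(x) + ε², so their average returns u(x), mirroring the cancellation λ_1 + λ_2 = 0.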
Theorem 3.1. Let u^ε be a function satisfying the DPP (3.1) in a bounded domain Ω. Then for any 0 < δ < 1/2 and x, z ∈ B_r with B_2r ⊂ Ω, there exists a constant C = C(δ) > 0 such that $$|u^\varepsilon(x) - u^\varepsilon(z)| \leq \frac{C}{r^\delta}\,\sup_{B_{2r}}|u^\varepsilon|\,\big(|x - z|^\delta + \varepsilon^\delta\big).$$ Proof. By considering ũ(x) = u(rx) we can assume that r = 1. Also, without loss of generality, we may assume that sup_{B_2} |u^ε| ≤ 1 by a suitable renormalization.
To obtain the desired estimate for the function, we will use a comparison function f = f_1 + f_2, where C > 1 and δ ∈ (0, 1), and where the correction f_2 is defined piecewise on annuli indexed by i = 0, 1, ..., N, with N a sufficiently large number to be determined later.
The first term in f_1 will give us the desired regularity estimate, and the second term ensures that the estimate holds in (B_2 × B_2) \ (B_1 × B_1). It is typical for solutions of 'ε-DPPs' to be discontinuous at the ε-scale. That is why we need the correction function f_2, designed to handle the case where the distance |x − z| ≈ ε.
We first observe that it suffices to compare g(x, z) = u^ε(x) − u^ε(z) with the comparison function f in (B_1 × B_1) \ T; here we can assume, without loss of generality and after a suitable translation, that x = −z. We assume, for the sake of contradiction, that $$\sup_{(B_1\times B_1)\setminus T} (g - f) > 0. \tag{3.4}$$ Consider an arbitrarily small number η > 0. Then we can choose (x_1, z_1) ∈ (B_1 × B_1) \ T such that g(x_1, z_1) − f(x_1, z_1) is η-close to this supremum. Recall (1.4). From (3.6) we obtain the corresponding one-step inequality, since x_1 ± εv and z_1 ± εw are still in B_2; the same inequality holds in the other case by the same argument. Then, taking suprema and using (1.5), we deduce that a contradiction follows if we show (3.7). The case |x − z| ≈ ε follows from the fact that the steps are of size ε; we include the details later. For now we focus on the case |x − z| > Nε/10. In this case, since f_2 = 0, we need to prove (3.8). We will use a Taylor expansion for the function f_1, where V is the space spanned by x − z, (h_x − h_z)_V denotes the scalar projection onto V, i.e. ⟨h_x − h_z, x − z⟩/|x − z|, and (h_x − h_z)_{V^⊥} the projection onto the orthogonal complement (see also [LP18, AHP17]). By Taylor's theorem the error term is of lower order, and thus we obtain (3.12). Observe that, when considering the Taylor expansions, all the first-order terms cancel. Now we are ready to proceed to the core of the matter, that is, to prove (3.8).
Here is where the precise definition of when to apply rule (i) or rule (ii) plays the main role. We will estimate the infimum of F by considering $$\tilde v = \frac{x - z}{|x - z|} \quad\text{and}\quad \tilde w = -\frac{x - z}{|x - z|}.$$
Observe that in this case v_{y⊥} = w_{y⊥} = 0; hence, rule (i) applies, and (h_x − h_z)_{V^⊥} vanishes. From (3.9), evaluated at (ṽ, w̃), we obtain an upper bound for the infimum of F. To bound the supremum, we have to separate the cases depending on whether rule (i) or rule (ii) applies. The key point here is to bound (h_x − h_z)²_{V^⊥} by strictly less than (2ε)².
When rule (ii) is applied, we obtain this bound directly. If rule (i) is applied and ⟨v_{y⊥}, w_{y⊥}⟩ ≥ 0, the same calculation can be performed. It remains to check the case where rule (i) applies with |v_{y⊥}|² + |w_{y⊥}|² ≤ 1. Finally, we obtain an upper bound for the supremum, and it is then enough to show that the resulting quantity is negative. Observe that this is where we explicitly fix δ.
Recalling that |x − z| < 4, we obtain the required inequality, and we have proved the claim in the case |x − z| > Nε/10. Now we consider the case |x − z| ≤ Nε/10. We remark that the counterassumption (3.4) cannot occur when x = z, since g(x, x) = 0. Thus it is sufficient to show (3.7) when 0 < |x − z| ≤ Nε/10. We first observe that the one-step change of f_2 compensates the possible increase of f_1 at this scale. Therefore, for any unit vectors v and w and sufficiently large C, we can bound the supremum. Choosing the constant C large enough and combining the previous estimates, we obtain (3.7).

The general case
In this section, we consider the DPP (1.1) related to the PDE (1.3). We rewrite the equation and present the rules of the game in a slightly different way. We assume α = 2 min{α_1, α_n} > 0. We define β = 1 − α, β_i = α_i/β for i = 2, ..., n − 1, and $$\beta_1 = \frac{\alpha_1 - \alpha/2}{\beta}, \qquad \beta_n = \frac{\alpha_n - \alpha/2}{\beta},$$ so that the equation can be rewritten as $$\alpha\Big(\frac12\lambda_1 + \frac12\lambda_n\Big) + \beta\sum_{i=1}^n \beta_i\lambda_i = 0. \tag{4.1}$$ We remark that one can derive Hölder regularity for (4.1), since a viscosity solution to (4.1) satisfies Pucci-type inequalities, which are what is required to use [CC95, Proposition 4.10]. Now the game for the α_i can be presented in the following way: at every round, with probability α we play the game for ½λ_1 + ½λ_n, and with probability β we play the game according to the β_i. In this case we obtain the related DPP (4.2), which is equivalent to (1.1).
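The weight decomposition can be sanity-checked numerically. In the sketch below, the formulas β_1 = (α_1 − α/2)/β and β_n = (α_n − α/2)/β are the natural choice making the two sides agree, stated here as an assumption, and the sample weights and eigenvalues are illustrative.

```python
def decompose(alpha):
    """Split weights (alpha_1, ..., alpha_n) into the 1/2-1/2 part and
    the remainder: a = 2 * min(alpha_1, alpha_n), b = 1 - a, with
    beta_1 = (alpha_1 - a/2)/b and beta_n = (alpha_n - a/2)/b."""
    a = 2 * min(alpha[0], alpha[-1])
    b = 1 - a
    beta = [x / b for x in alpha]
    beta[0] = (alpha[0] - a / 2) / b
    beta[-1] = (alpha[-1] - a / 2) / b
    return a, b, beta

alpha = [0.1, 0.2, 0.3, 0.4]
a, b, beta = decompose(alpha)
assert abs(sum(beta) - 1) < 1e-12 and all(x >= 0 for x in beta)

# recombine: a * (1/2 lambda_1 + 1/2 lambda_n) + b * sum(beta_i lambda_i)
lam = [1.0, 2.0, 5.0, 7.0]      # sample ordered eigenvalues
lhs = sum(ai * li for ai, li in zip(alpha, lam))
rhs = a * (lam[0] + lam[-1]) / 2 + b * sum(bi * li for bi, li in zip(beta, lam))
assert abs(lhs - rhs) < 1e-12
```

The check confirms that the β_i form a probability vector whenever min{α_1, α_n} > 0, so the second stage of the game is well defined.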
In order to define the 2n-dimensional game related to (4.2), we first define a 2n-dimensional game related to λ_j. Fix j ∈ {1, ..., n}. We consider the game related to λ_j and write u_j for its value function. The function g_j(x, z) = u_j(x) − u_j(z) then satisfies a DPP whose one-step averages are of the form $$\frac{g_j(x + \varepsilon v, z + \varepsilon\tilde v) + g_j(x - \varepsilon v, z - \varepsilon\tilde v)}{2}.$$
We can read the rules of the 2n-dimensional game as follows: Player II selects a subspace S̃, Player I the subspace S, and then Player II a unit vector v ∈ S and Player I a unit vector ṽ ∈ S̃. Then the tokens move to (x, z) ± ε(v, ṽ), each with probability one half.
Combining the above observation for each g_j with the 2n-dimensional DPP for the game associated with ½λ_1 + ½λ_n, we obtain a DPP for the function g(x, z) = u^ε(x) − u^ε(z), where F is the function given by (1.4). Now we state and prove the Hölder regularity result for (1.1).
Theorem 4.1. Let u^ε be a function satisfying the DPP (1.1) in a bounded domain Ω. Then for any 0 < δ < 1/2 and x, z ∈ B_r with B_2r ⊂ Ω, there exists a constant C = C(δ, α_1, α_n) > 0 such that $$|u^\varepsilon(x) - u^\varepsilon(z)| \leq \frac{C}{r^\delta}\,\sup_{B_{2r}}|u^\varepsilon|\,\big(|x - z|^\delta + \varepsilon^\delta\big).$$ Proof. Recall the barrier function f from the proof of Theorem 3.1. By a similar argument as in the previous section, it is enough to show (4.3). We first consider the case |x − z| > Nε/10. For the terms involved in the game associated with ½λ_1 + ½λ_n we can recall the estimate (3.15). Meanwhile, for the game associated to λ_j, we observe that by taking S̃ = S and ṽ = v the one-step average becomes $$\frac{f(x + \varepsilon\tilde v, z + \varepsilon\tilde v) + f(x - \varepsilon\tilde v, z - \varepsilon\tilde v)}{2},$$ which can be estimated with the constant C from (3.15). Thus, if we take C large enough that −Cα + 4β < 0, we obtain (4.3).
Next we assume that |x − z| ≤ Nε/10. From (4.3), (4.4) and (4.5), it is enough to show (4.6). We use an argument similar to the one in the proof of Theorem 3.1, but we choose C sufficiently large that the supremum estimate in (3.16) holds. Then we get (4.6), which completes the proof.
The dominative p-Laplacian can be regarded as a special case of the operator $\sum_{i=1}^n \alpha_i\lambda_i$ considered so far. Observe that the equation D_p u = 0 is equivalent to equation (1.3) when $$\alpha_i = \frac{1}{n+p-2} \text{ for } i = 1, \ldots, n-1, \qquad \alpha_n = \frac{p-1}{n+p-2}.$$ Therefore, by plugging these values into (1.1) we obtain the corresponding DPP. Since min{α_1, α_n} > 0, our result, Theorem 4.2, covers the solutions to this DPP.
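These weights can be checked numerically. The sketch below assumes the standard form D_p u = Δu + (p − 2)λ_n(D²u) of the dominative p-Laplacian; the sample dimension, exponent, and eigenvalues are illustrative choices.

```python
def dominative_weights(n, p):
    """Weights alpha_i summing to 1 that make sum(alpha_i * lambda_i)
    proportional to D_p u = sum(lambda_i) + (p - 2) * lambda_n."""
    s = n + p - 2
    return [1 / s] * (n - 1) + [(p - 1) / s]

n, p = 3, 4.0
alpha = dominative_weights(n, p)
assert abs(sum(alpha) - 1) < 1e-12
assert min(alpha[0], alpha[-1]) > 0        # hypothesis of the general theorem

lam = [1.0, 2.0, 6.0]                      # sample ordered eigenvalues
Dp = sum(lam) + (p - 2) * lam[-1]          # dominative p-Laplacian value
assert abs(sum(a * l for a, l in zip(alpha, lam)) - Dp / (n + p - 2)) < 1e-12
```

Since the normalization factor n + p − 2 is positive for p > 1, the equations D_p u = 0 and the weighted eigenvalue equation have the same solutions.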
We also remark here that the operator D p is uniformly elliptic, and thus we can obtain C 1,δ -regularity for viscosity solutions of the equation D p u = 0 (see [CC95, Section 5.3]).
A different game associated to the dominative p-Laplacian was presented in [BLM20] (see also [HR20]). Their DPP involves a mean value over the ball B_ε and the parameter q = (n+2)/(n+p). This is a control problem. Let x_0 be the starting point. The player first chooses a unit vector v, and then the token is moved according to the following rules: x_1 is randomly selected in B_ε(x_0) with probability (n+2)/(n+p), and x_1 = x_0 ± εv each with probability (p−2)/(2(n+p)). This stochastic process is repeated until the token leaves Ω. The player tries to maximize the expected value of G(x_τ), and thus chooses the direction v for this purpose.
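As a quick check (our own) that these transition probabilities form a distribution, note that q plus the two ε-step atoms must sum to one:

```python
def blm_probabilities(n, p):
    """Transition probabilities in the [BLM20] dynamic programming
    principle: a random point in B_eps with probability q = (n+2)/(n+p),
    and each of x +/- eps*v with probability (p-2)/(2*(n+p))."""
    q = (n + 2) / (n + p)
    r = (p - 2) / (2 * (n + p))
    return q, r

for n in (2, 3, 5):
    for p in (2.0, 3.0, 10.0):
        q, r = blm_probabilities(n, p)
        assert abs(q + 2 * r - 1.0) < 1e-12   # a probability distribution
        assert r >= 0                          # requires p >= 2
```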
We can also obtain the following regularity result for this DPP, with a slightly worse upper bound for δ. As before, we again consider g(x, z) = u^ε(x) − u^ε(z).
Combining the above estimates, we obtain (4.8).
1.1. Main results. In this paper, we show local Hölder regularity for solutions of the following dynamic programming principle (DPP): $$u^\varepsilon (x) = \sum_{j=1}^n \alpha_j\inf_{\dim(S)=j}\sup_{\substack{v\in S\\ |v|=1}}\frac{u^\varepsilon(x + \varepsilon v) + u^\varepsilon(x - \varepsilon v)}{2}.$$

Figure 1.Two choices of vectors when the tokens are at (x, z).