Finite Element Methods for Isotropic Isaacs Equations with Viscosity and Strong Dirichlet Boundary Conditions

We study monotone P1 finite element methods on unstructured meshes for fully nonlinear, degenerately parabolic Isaacs equations with isotropic diffusions arising from stochastic game theory and optimal control, and show uniform convergence to the viscosity solution. Elliptic projections are used to manage singular behaviour at the boundary and to treat the violation, by the numerical operators, of the consistency conditions of the Barles-Souganidis framework. Boundary conditions may be imposed in the viscosity or in the strong sense, or in a combination thereof. The presented monotone numerical method has well-posed finite-dimensional systems, which can be solved efficiently with Howard's method.


Introduction
Hamilton-Jacobi-Isaacs equations, in short Isaacs equations, arise from stochastic game theory and optimal control, in particular from stochastic two-player zero-sum games [Sou99]. The solution of the Isaacs equation corresponds, under appropriate assumptions, to a value function of the game. The equations are of a structure in which inf sup is taken over a family of second-order linear degenerately elliptic differential operators L^{α,β}. Isaacs equations combine a number of features which are challenging from the numerical point of view. They are fully nonlinear in non-divergence form and the nonlinearity is in general nonconvex. Their spatial differential operators may only be degenerately elliptic and often do not exhibit the smoothness and structure properties needed to make linearisation a usable tool. Furthermore, the theory of well-posedness relies, in the setting of viscosity solutions, on comparison principles, which in turn make it natural to impose monotonicity requirements on the numerical methods. Boundary conditions, such as those of Dirichlet type, can be posed in multiple non-equivalent forms, and comparison principles may rely on a particular form to ensure that they hold.
In this work we present a finite element method to solve time-dependent Isaacs equations with isotropic, possibly degenerate diffusions. We show convergence to the viscosity solution on Lipschitz domains. Dirichlet boundary conditions are imposed in the viscosity or in the strong sense, or in a combination thereof. The presented analysis is a generalisation of [JS13], where Bellman equations with homogeneous, strong Dirichlet boundary conditions were approximated.
Compared to the literature on typical quasilinear differential equations, the scientific discourse on the numerical approximation of second-order Isaacs equations consists of a small number of papers. A two-scale finite element method for Isaacs equations was proposed in [SZ19], where Alexandrov-Bakelman-Pucci estimates are used to prove the convergence with rates of numerical approximations to solutions of uniformly elliptic Isaacs equations on convex domains. Such two-scale finite element methods are closely related to semi-Lagrangian schemes [LN18], for which convergence rates for Isaacs operators on unbounded domains were given in [DJ13]; see also the series of works [Jak04], [Jak06], [JK02], [JK05] as well as [MZB06] in this context. Finite difference methods for uniformly parabolic Isaacs equations on smooth bounded domains with Dirichlet boundary conditions were shown to converge with rates in [Kry15] and [Tur15]. Deterministic two-player zero-sum games lead to first-order Isaacs equations. For the numerical approximation of deterministic problems we refer to [BCJ19], [BHT11], [Fal06], [FK14] and the references therein. An important field of application is H∞ control, for which we highlight [BM98], [FRS10], [KKK20], [Sor99].
The main contributions of the paper are the following. Convergence: We present a finite element method which is guaranteed to converge to the viscosity solution of the Isaacs equation, including in the degenerate case. Domains only need to be Lipschitz regular.
Boundary conditions: Dirichlet boundary conditions are treated in the viscosity and strong sense, so that the numerical analysis can follow the requirements arising from the construction of comparison principles.
Barrier functions: Barrier functions are used to ensure that upper and lower semi-continuous envelopes of the numerical solutions locally satisfy the boundary conditions in the strong sense. The construction of barrier functions is characterised in the degenerate setting and on non-convex domains.
Stability: Boundary data is mapped onto the approximation space with elliptic projections in order to avoid instabilities, which may arise for instance in the presence of re-entrant corners of the domain.
This paper is organised as follows. In Section 2 we specify the class of problems under consideration, introduce a notion of viscosity solution and discuss the two types of Dirichlet boundary conditions. In Section 3 we define the numerical scheme. In Section 4 we establish monotonicity properties; in Section 5 we show the existence and uniqueness of the numerical solutions; in Section 6 we ensure stability; in Section 7 we verify that the upper and lower semi-continuous envelopes of numerical solutions are sub- and supersolutions, respectively. In Section 8 we return to the different notions of the boundary conditions. In particular, we introduce barrier functions which ensure that boundary conditions can be satisfied in the strong sense at least on a part of the boundary. With that we can state the convergence result in Section 9. In Section 10 we show the construction of barrier functions in uniformly parabolic as well as in degenerately parabolic cases. Finally, in Section 11 we conclude with numerical experiments verifying the convergence of the scheme and we show an application to a two-player stochastic game.

Isotropic Isaacs equations
We consider Isaacs equations on bounded Lipschitz domains Ω ⊂ R^d with d ≥ 2. Let A and B be compact metric spaces and (α, β) ∈ A × B. We introduce the linear operators L^{α,β} with data terms a^{(α,β)}, b^{(α,β)}, c^{(α,β)} and f^{(α,β)}, assumed to be continuous in (α, β, x) such that the families of functions {a^{(α,β)}}_{(α,β)∈A×B}, {b^{(α,β)}}_{(α,β)∈A×B}, {c^{(α,β)}}_{(α,β)∈A×B} and {f^{(α,β)}}_{(α,β)∈A×B} are equicontinuous. Moreover, a^{(α,β)}(x) ≥ 0 for any (α, β) ∈ A × B and x ∈ Ω. It follows that all L^{α,β} are degenerately elliptic. For any sufficiently smooth w we define the Hamiltonian, where the supremum and the infimum are applied pointwise. We wish to study Isaacs problems of the form (3). Even though only the values of g|∂Ω appear in (3) and v_T|∂Ω = g|∂Ω, it remains valuable to distinguish g and v_T: while v_T is only assumed to be uniformly bounded, we require that g can be written as the trace of a function from a more regular space. Typically there does not exist a smooth solution allowing a classical interpretation of (3). Our aim is to construct a finite element method which approximates the viscosity solution of the Isaacs problem (3). We now formalise what is meant by a viscosity solution throughout this paper. Firstly, let us reformulate (3) as F = 0, where F is the corresponding differential operator. Given a bounded function v : [0, T] × Ω̄ → R we define its upper and lower semi-continuous envelopes, respectively, as v*(t, x) := lim sup_{(s,y)→(t,x)} v(s, y) and v_*(t, x) := lim inf_{(s,y)→(t,x)} v(s, y). We analogously extend the definition of lower and upper semicontinuous envelopes to F.
Definition 2.2 below imposes Dirichlet boundary conditions in the viscosity sense on all of ∂Ω in the flavour of the original Barles-Souganidis proof, see also [BS91, JS18]. For boundary value problems arising from concrete applications, such viscosity boundary conditions may not be strong enough to establish a comparison principle. We therefore introduce a subset ω ⊂ ∂Ω on which boundary conditions hold in a stronger sense.
Definition 2.1. Let g : ∂Ω → R and v_T : Ω̄ → R be bounded functions. We say that a bounded function v : [0, T] × Ω̄ → R satisfies the boundary conditions at x ∈ ∂Ω strongly if, for all t ∈ [0, T], v_*(t, x) = g(x) and v*(t, x) = g(x).
Similarly, we say that v satisfies the final time conditions at x ∈ Ω strongly if the analogous identities hold at time T with v_T in place of g. Depending on the setting, ω = ∅, ω = ∂Ω or ∅ ⊊ ω ⊊ ∂Ω may be the most appropriate choice.
Definition 2.2. A bounded function v is a viscosity supersolution (respectively, subsolution) of (3) provided that F*(t, x, ψ) ≥ 0 (respectively, F_*(t, x, ψ) ≤ 0) whenever ψ ∈ C² is such that v_* − ψ attains at (t, x) ∈ [0, T) × Ω̄ a local minimum relative to [0, T) × Ω̄ (respectively, v* − ψ attains a local maximum), and additionally v_* (respectively, v*) satisfies the final time conditions and the boundary conditions on ω in the strong sense. A function which is simultaneously a viscosity super- and subsolution of (3) is called a viscosity solution.
The motivation for incorporating strong boundary conditions in the definition of viscosity solutions can be illustrated with a simple example.
Example 2.1. We investigate on Ω = (−1, 1) the Bellman equation (6) with homogeneous Dirichlet boundary conditions. Identifying the structure of a Fenchel conjugate in the β-dependent terms, we can rewrite the PDE as |v′(x)|² − (x² − x⁴)² = 0. Taking the square root, we may simplify the expression to the eikonal equation |v′(x)| = x² − x⁴ with a double-well potential on the right-hand side.
An elementary calculation shows that this function is twice continuously differentiable, solves (6) classically and satisfies the boundary conditions strongly.
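The Fenchel-conjugate identity behind this reduction can be checked numerically. The sketch below assumes, purely for illustration, that the β-dependent terms take the prototypical form 2βp − β² with p = v′(x); the pointwise supremum over β ∈ R of that expression equals p². It also confirms that the right-hand side x² − x⁴ is non-negative on [−1, 1], so taking the square root of the squared equation is legitimate.

```python
import numpy as np

# Hypothetical form of the beta-dependent terms: 2*beta*p - beta**2 with
# p = v'(x). The pointwise supremum over beta (attained at beta = p) is the
# Fenchel-conjugate value p**2, which turns the operator into the squared
# eikonal form |v'(x)|**2 - (x**2 - x**4)**2 = 0.
def sup_beta(p, betas):
    return np.max(2.0 * betas * p - betas**2)

betas = np.linspace(-2.0, 2.0, 400001)
ps = np.linspace(-1.0, 1.0, 9)
max_err = max(abs(sup_beta(p, betas) - p**2) for p in ps)

# The right-hand side of the eikonal equation is non-negative on [-1, 1].
xs = np.linspace(-1.0, 1.0, 1001)
rhs_nonneg = bool(np.all(xs**2 - xs**4 >= 0.0))
```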
For c ≤ 0 we define v_c. We initially assume that ω of Definition 2.2 is equal to the empty set and show that v_c is in that case a viscosity solution for all c ≤ 0. In particular, the viscosity solution is then not unique, as would be required by the construction in [BS91], and there cannot exist a corresponding comparison principle. We observe that (v_c)* = v on Ω̄, whereas (v_c)_* = v_c since v_c is lower semi-continuous. Thus it is clear that v_c is a viscosity subsolution of the problem.
We now prove that v_c is also a supersolution. We must show that for all ψ ∈ C² such that (v_c)_* − ψ has a local minimum at x we have, as in [CIL92, equation (7.10)], the inequalities (7a) and (7b). Inequality (7a) holds because v_c is a classical solution on Ω. Remembering that x² − x⁴ vanishes on ∂Ω and the simplification to the eikonal form, the boundary case (7b) simplifies accordingly. Similar examples can also be constructed with second-order equations arising from diffusive problems and in higher dimensions, e.g. see [JS18, Section 2]. While the loss of uniqueness of solutions directly interacts with the ability to formulate comparison principles, the flexibility of viscosity solutions to break continuity in the vicinity of the boundary can be helpful in some circumstances, e.g. when examining singularly perturbed diffusion-advection equations which exhibit boundary layers for small diffusion coefficients ε > 0 and may fully detach from the Dirichlet data as ε → 0.
We conclude this section by relating our definition of strong boundary conditions to that in [CIL92, Section 7.A]. There, an upper semi-continuous function v satisfies the boundary condition if (8) holds; similarly, a lower semi-continuous v satisfies it if (9) holds. In our case B(x, q, p, s) = s − g(x), and we refer to [CIL92] for the definition of the closed second-order jets J̄. Comparing (8) and (9) to Definitions 2.1 and 2.2, we first of all make an observation which is not directly related to the boundary conditions: we adopt in this paper the notion of bounded, but not necessarily semi-continuous, sub- and supersolutions, following [BP88] and [BS91]; see also [FS06, Chapter VII]. Hence, to bridge this difference, we can adapt the definition of [CIL92] to (10) and (11). Now if the closed second-order jets of v are non-empty, then (10) implies v*(x) ≤ g(x) and (11) implies v_*(x) ≥ g(x). Since v* ≥ v_* by construction, we arrive at v*(x) = v_*(x) = g(x), mirroring Definition 2.1.

The numerical method
We consider a sequence V_i, i ∈ N, of piecewise linear shape-regular finite element spaces. Let us denote the nodes of the finite element mesh by y_i^ℓ, where ℓ is the index over the set of nodes. The associated hat functions are denoted φ_i^ℓ, i.e. φ_i^ℓ ∈ V_i such that φ_i^ℓ(y_i^ℓ) = 1 while φ_i^ℓ(y_i^s) = 0 for all ℓ ≠ s. Set φ̂_i^ℓ := φ_i^ℓ / ‖φ_i^ℓ‖_{L¹(Ω)}. Therefore, the φ_i^ℓ are normalised in the L∞ norm whilst the φ̂_i^ℓ are normalised in the L¹ norm. Let V_i^0 ⊂ V_i be the subspace of functions which satisfy the homogeneous Dirichlet conditions on ∂Ω. Throughout the text we let the index ℓ range over the internal nodes first, in other words, y_i^ℓ ∈ Ω for ℓ ≤ N_i := dim V_i^0. In order to construct a finite element method which is both stable and consistent, care needs to be taken when imposing non-homogeneous boundary conditions. Due to the more delicate stability properties of Isaacs operators compared to Bellman operators, on non-convex domains a nodal interpolant is not suitable to map the boundary data onto the approximation space.
Given w ∈ C(H¹(Ω)), we consider therefore linear mappings P_i which map w into V_i such that, for all φ ∈ V_i^0, the Galerkin orthogonality ⟨∇(w − P_i w), ∇φ⟩ = 0 holds. (12)

Assumption 3.1. There are linear mappings P_i satisfying (12) and there is a constant C ≥ 0 such that, for every w ∈ C∞(R^d) and i ∈ N, ‖P_i w‖_{W^{1,∞}(Ω)} ≤ C ‖w‖_{W^{1,∞}(Ω)}.

It is shown in [DLSW12] that Assumption 3.1 holds if we choose P_i w such that it interpolates w on the boundary, (12) is satisfied, Ω is a bounded convex polyhedral domain in R^d, d ∈ {2, 3}, and the mesh satisfies a local quasi-uniformity condition.
To apply the result for non-convex domains Ω and general w ∈ C∞(R^d), we consider a convex polyhedral domain B containing Ω and assume there is a locally quasi-uniform mesh on B which coincides with the original mesh on Ω. Let η be a smooth cut-off function with relatively compact support in B such that η ≡ 1 on Ω. Then the classical elliptic projection on B, acting on ηw : B → R, has the required properties. Given this construction for P_i, it is natural to refer to it as an elliptic projection, see also [JS13, Section 4]. This construction provides on non-convex domains an approximation of the boundary data which permits the proof of stability of numerical solutions, see Theorem 6.1 below, as well as of consistency, see Lemma 3.1. In conclusion, let V_i^g ⊂ V_i be the subspace of functions which attain P_i g on the boundary.
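As a toy illustration of how such a projection acts, consider the one-dimensional case. The sketch below assumes that (12) is the Galerkin orthogonality of gradients, with the boundary values fixed by interpolation; all mesh parameters and the sample datum are hypothetical.

```python
import numpy as np

# 1D sketch of the projection P_i on [0, 1]: boundary values are fixed by
# interpolation and the interior values by the Galerkin orthogonality
# (d/dx (w - Pw), d/dx phi) = 0 for all interior hat functions phi.
n = 16                                   # number of elements (hypothetical mesh)
x = np.linspace(0.0, 1.0, n + 1)
h = x[1] - x[0]
w = np.sin(np.pi * x**2)                 # sample smooth datum

# Stiffness matrix of P1 elements on a uniform mesh.
A = (np.diag(2.0 * np.ones(n + 1)) - np.diag(np.ones(n), 1)
     - np.diag(np.ones(n), -1)) / h

# Because hat-function gradients are piecewise constant, the exact right-hand
# side (w', phi_j') telescopes to the difference stencil of the nodal values.
b = A @ w

Pw = w.copy()                            # boundary interpolation Pw = w at x = 0, 1
Pw[1:-1] = np.linalg.solve(A[1:-1, 1:-1],
                           b[1:-1] - A[1:-1, 0] * w[0] - A[1:-1, -1] * w[-1])
```

In one dimension the projection reproduces the nodal interpolant exactly; the distinction from interpolation only becomes substantial in higher dimensions, where the stability on non-convex domains is the point of the construction above.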
The mesh size, i.e. the largest diameter of an element, is denoted Δx_i. It is assumed that Δx_i → 0 as i → ∞. The uniform time step size is denoted h_i with the constraint that T/h_i ∈ N. It is assumed that h_i → 0 as i → ∞. Let s_i^k be the kth time step at the refinement level i. Then the set of time steps is S_i := {s_i^k : k = T/h_i, ..., 0}. The time derivative is approximated by d_i, for which we let the ℓth entry of d_i w(s_i^k, ·) be the difference quotient of w between s_i^{k+1} and s_i^k at the node y_i^ℓ. For the discretisation of L^{(α,β)} we allow a splitting into an explicit and an implicit part. For each pair (α, β) and for each i, we introduce the explicit operator E_i^{(α,β)} and the implicit operator I_i^{(α,β)}. For each i we also seek a discretisation f_i^{(α,β)} of f^{(α,β)}. We now make these conceptual statements precise.

Assumption 3.2. The splitting is consistent in the sense that E_i^{(α,β)} and I_i^{(α,β)} jointly approximate L^{(α,β)}. The zeroth-order coefficients of the explicit and of the implicit operators are non-negative and there exists γ ∈ R bounding them uniformly. The family of numerical operators depends equicontinuously on (α, β).

The splitting into explicit and implicit parts is used to define the numerical operators at the internal nodes, where ℓ ranges over ℓ ≤ N_i; for boundary nodes ℓ > N_i the corresponding rows of E_i^{(α,β)} and I_i^{(α,β)} are set so as to enforce the Dirichlet data. When restricted to the domain V_i, the numerical operators have matrix representations with respect to the nodal bases {φ_i^ℓ}_ℓ, which we also denote by E_i^{(α,β)} and I_i^{(α,β)}. Throughout the paper, we make use of the partial ordering of R^n: for x, y ∈ R^n, we write x ≥ y if and only if x_ℓ ≥ y_ℓ for all ℓ ∈ {1, ..., n}. For collections {x^α}_α ⊂ R^n and {y^β}_β ⊂ R^n, we define the operators sup_α and inf_β componentwise. We can now state the numerical scheme for finding an approximate solution to (3). We initialise the scheme with the elliptic projection v_i(T, ·) = P_i v_T. Then, in order to find the numerical solution v_i(s_i^k, ·) ∈ V_i^g, we proceed inductively over the remaining time steps k ∈ {T/h_i − 1, ..., 0} via (17). If all I_i^{(α,β)} vanish then (17) is an explicit scheme, otherwise it is implicit. Note that the terms d_i v_i representing the discretised time derivative vanish at the boundary nodes because g is time-independent.
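To fix the componentwise conventions, here is a minimal sketch (synthetic data, finite control sets standing in for A and B) of evaluating sup_α inf_β nodewise; note that different nodes may select different optimal control pairs.

```python
import numpy as np

# Toy data: for every control pair (a, b), a residual vector over the nodes.
rng = np.random.default_rng(0)
n_nodes, nA, nB = 5, 3, 4
residuals = rng.standard_normal((nA, nB, n_nodes))

# Componentwise inf over beta followed by componentwise sup over alpha.
H = residuals.min(axis=1).max(axis=0)          # shape: (n_nodes,)

# Each node may be extremised by a different control pair:
opt_a = residuals.min(axis=1).argmax(axis=0)   # optimal alpha per node
```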
To show the existence and uniqueness of numerical solutions, it is useful to provide an equivalent reformulation of the scheme. Let w : S_i → V_i. For each time step k and node index ℓ, let β_i^{k,ℓ}(w) ∈ B denote a minimiser of (18b) and, for given β, let α_i^{k,ℓ,β}(w) ∈ A denote a maximiser of (18a). Finally, we write α_i^{k,ℓ}(w) = α_i^{k,ℓ,β_i^{k,ℓ}(w)}(w), i.e. we drop the reference to β to imply that β is chosen optimally.
Let I_i^{k,w} and E_i^{k,w} be the matrices whose ℓth row at the kth time step is equal to that of I_i^{(α,β)} and E_i^{(α,β)}, respectively, with (α, β) = (α_i^{k,ℓ}(w), β_i^{k,ℓ}(w)); such an optimal control pair is not necessarily unique. The subsequent analysis is valid regardless of the choice of the control. We may reformulate the numerical scheme (17) with E_i^{k,w}, I_i^{k,w} and F_i^{k,w} as follows: initialise the scheme with the elliptic projection so that v_i(T, ·) = P_i v_T, then solve (19) inductively over the time steps.

Consistency properties of elliptic projections
We conclude the section with the consistency properties of the linear operators for fixed (α, β) ∈ A × B. The result extends [JS13] even in the Bellman setting by including non-homogeneous boundary conditions. To state consistency it is convenient to abbreviate the operator of the numerical scheme. Note that, despite the similar notation, the operator F_i^{k,w} has a quite different meaning in that it denotes the forcing term vector, as defined in the first part of this section.
Lemma 3.1. Suppose that s_i^{k(i)} is a time step and y_i^{ℓ(i)} a node of the i-th refinement. Then (21) and (22) hold.

Proof. We only prove (21) since the result for (22) follows analogously. For ease of notation, the dependence of k and ℓ on i is made implicit.
Step 1: Standard finite difference bounds ensure that the discrete time derivative d_i is consistent in the limit i → ∞. Step 2: If y_i^ℓ ∈ Ω then the proof is analogous to the Hamilton-Jacobi-Bellman case described in [JS13], once the control α ∈ A is replaced by the pair of controls (α, β) ∈ A × B. Both Assumptions 3.1 and 3.2 are used here.
Step 3: Now suppose that y_i^ℓ ∈ ∂Ω. Then the required limit as i → ∞ follows from Assumption 3.1. Step 4: Consider the sequence {(s_i^k, y_i^ℓ)}_i as specified in the statement of the theorem, in particular with y_i^ℓ ∈ Ω̄. We decompose {(s_i^k, y_i^ℓ)}_i into the subsequences of the (s_i^k, y_i^ℓ) where y_i^ℓ belongs to Ω or ∂Ω, respectively. Then the conclusions of Steps 1 to 3 above may be applied to the individual subsequences to arrive at (21).

Monotonicity
Monotonicity properties of the numerical scheme play a crucial role in ensuring convergence to the viscosity solution.
Definition 4.1. An operator F : V_i → R^{N_i} is said to satisfy the local monotonicity property (LMP) if for all v ∈ V_i such that v has a non-positive local minimum at an internal node y_i^ℓ, we have (Fv)_ℓ ≤ 0. The operator F satisfies the weak discrete maximum principle (wDMP) provided that for any v ∈ V_i, if (Fv)_ℓ ≥ 0 for all ℓ ∈ {1, ..., N_i}, then min_{Ω̄} v ≥ min{0, min_{∂Ω} v}. (25)

Suppose an operator F satisfies the LMP. If v ∈ V_i has a negative local minimum at an internal node y_i^ℓ, then ((F + εId)v)_ℓ < 0 for any positive ε. In particular, the contrapositive of (25) holds. Therefore F + εId satisfies the wDMP for any positive ε.
Assumption 4.1. For each (α, β) ∈ A × B, we assume that E_i^{(α,β)}, restricted to V_i, has non-positive off-diagonal entries. We assume that h_i is small enough so that h_i E_i^{(α,β)} − Id is monotone for every (α, β), i.e. so that all entries of all h_i E_i^{(α,β)} − Id are non-positive. For each (α, β) ∈ A × B, we suppose that I_i^{(α,β)} satisfies the LMP.
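A one-dimensional illustration of these requirements (with hypothetical coefficients, not the operators of Section 3): an upwinded discretisation of −a u″ + b u′ has non-positive off-diagonal entries, and a CFL-type restriction on h makes every entry of hE − Id non-positive.

```python
import numpy as np

# Illustrative 1D explicit operator: upwind discretisation of -a*u'' + b*u'
# with a, b >= 0 on a uniform mesh.
n, dx = 20, 1.0 / 20
a, b = 0.05, 1.0
main = a * 2.0 / dx**2 + b / dx      # diagonal entry
lower = -a / dx**2 - b / dx          # upwind: advection weight on the left neighbour
upper = -a / dx**2
E = (np.diag(main * np.ones(n)) + np.diag(lower * np.ones(n - 1), -1)
     + np.diag(upper * np.ones(n - 1), 1))

offdiag_ok = bool(np.all((E - np.diag(np.diag(E))) <= 0.0))

# CFL-type restriction: h small enough that all entries of h*E - Id are
# non-positive; the largest admissible step is 1/main here.
h = 1.0 / main
monotone = bool(np.all(h * E - np.eye(n) <= 0.0))
```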
We now turn to the matrices h_i I_i^{k,w} + Id and h_i E_i^{k,w} − Id, which will later be used in the proof of the well-posedness of the scheme (19).
Lemma 4.1. Consider a w : S_i → V_i.
1. The matrices h_i E_i^{k,w} − Id restricted to V_i^0 are monotone.
2. The mapping w ↦ −(h_i E_i^{k,w} − Id) w is monotone.
3. The matrices h_i I_i^{k,w} + Id restricted to V_i are strictly diagonally dominant M-matrices.
4. For fixed w, the operators v ↦ I_i^{k,w} v and v ↦ (h_i I_i^{k,w} + Id) v satisfy, respectively, the LMP and wDMP.
Proof. The proof for all but 3. is analogous to that of [JS13, Lemma 2.3], once the control α ∈ A is replaced by the pair of controls (α, β) ∈ A × B, using Assumption 4.1.
For 3., the strict diagonal dominance of h_i I_i^{k,w} + Id for rows with indices ℓ ≤ N_i is a direct consequence of the LMP, by the same argument as in the case of the restriction to V_i^0 discussed in [JS13]. For indices ℓ > N_i, the rows of the matrices h_i I_i^{k,w} + Id are equal to those of the identity matrix, which is strictly diagonally dominant by definition. The M-matrix property follows from [BP94, Chapter 6, Theorem 2.3, (M35)] after noticing that the h_i I_i^{k,w} + Id are Z-matrices.

Monotonicity through artificial diffusion
We show now that, using the method of artificial diffusion, one can ensure that E_i^{(α,β)} and I_i^{(α,β)} satisfy Assumption 4.1. Let T_i be the mesh corresponding to the finite element space V_i. Given a function g : Ω → R^d and an element K of T_i, we use an elementwise notation for g on K. We say that the meshes T_i are strictly acute if there exists ϑ ∈ (0, π/2) such that the corresponding angle condition holds for all i ∈ N. Consider a splitting of the diffusion coefficients into explicit and implicit parts, where all terms are in C(Ω̄) or C(Ω̄, R^d); notice the tilde in the notation of the diffusion coefficients. Choose non-negative artificial diffusion coefficients ν̃_i^{(α,β),ℓ} for the explicit and the implicit part such that (28) holds for all K that have y_i^ℓ as a vertex. The splittings of a^{(α,β)} can always be chosen so that Assumption 3.2 and (28) also hold. Moreover, it is shown in [JS13] that (28) implies Assumption 4.1.
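In one dimension the role of the artificial diffusion is easy to see. The following sketch (illustrative coefficients only) shows how central differencing of −a u″ + b u′ violates the sign condition on the off-diagonal entries when a is small relative to b·Δx/2, and how the smallest admissible artificial diffusion restores it; this is a 1D analogue of the mechanism behind condition (28).

```python
import numpy as np  # imported for consistency with the other sketches

# Central differencing of -a*u'' + b*u': the upper off-diagonal entry
# -a/dx**2 + b/(2*dx) turns positive when a < b*dx/2, breaking monotonicity.
dx, a, b = 0.1, 0.001, 1.0
upper_plain = -a / dx**2 + b / (2.0 * dx)        # positive: sign condition fails

# Smallest artificial diffusion restoring non-positive off-diagonal entries.
nu = max(0.0, b * dx / 2.0 - a)
upper_stab = -(a + nu) / dx**2 + b / (2.0 * dx)
lower_stab = -(a + nu) / dx**2 - b / (2.0 * dx)
```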

Wellposedness and a solution algorithm
In this section we address the existence and uniqueness of numerical solutions and propose a method to compute them.
Theorem 5.1. There exists a unique numerical solution v_i : S_i → V_i^g that solves (17).
Algorithm 1 (Howard's method)

The proof of Theorem 5.2 of [BMZ09], which we use here, relies on the exact solution of the inner Bellman equation (18a), which is reached by Howard's algorithm in the limit. Using these exact solutions of (18a) in an outer Howard loop yields a sequence whose limit is the solution of (17).
To ensure the completion of the algorithm in finite time we require a termination criterion for the inner and outer loop. In Algorithm 1 (Howard's method) the termination criterion is posed in terms of a tolerance η. The statement of the algorithm also refers, for each i and (α, β) ∈ A × B, to mappings Ψ^{(α,β)} and Φ^{(α,β)}: given just w, we let Φ^{(α,β)}(w) be the solution u of Ψ^{(α,β)}(u, w) = 0.
Suppose that ∑_ℓ η_ℓ < ∞ and that w_ℓ is the function returned by Algorithm 1 with η = η_ℓ, w = v_i(s_i^{k+1}, ·) and a fixed (α_0, β_0) ∈ A × B. Then Theorem 5.4 of [BMZ09] ensures that the w_ℓ exist and converge to the unique numerical solution v_i(s_i^k, ·) as ℓ → ∞.
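The nested outer/inner structure can be sketched for a synthetic finite-dimensional system in the spirit of [BMZ09]; this is an illustration, not the paper's Algorithm 1. All matrices, control-set sizes and the discount factor below are hypothetical, and each B[a,b] = I − γP[a,b] is a monotone M-matrix so that the frozen-policy linear solves are well posed. The goal is a vector u with min_b max_a (B[a,b]u − c[a,b]) = 0 componentwise.

```python
import numpy as np

rng = np.random.default_rng(7)
n, nA, nB, gamma = 6, 3, 3, 0.9
P = rng.random((nA, nB, n, n))
P /= P.sum(axis=3, keepdims=True)              # row-stochastic transition matrices
c = rng.standard_normal((nA, nB, n))
B_all = np.eye(n) - gamma * P                  # (nA, nB, n, n): monotone M-matrices

def all_residuals(u):
    # Residual (B[a,b] @ u - c[a,b]) at every node for every control pair.
    return np.einsum('abij,j->abi', B_all, u) - c

def solve_frozen(pol_a, pol_b):
    # Solve the linear system whose l-th row uses the frozen policies at node l.
    B = np.array([B_all[pol_a[l], pol_b[l], l] for l in range(n)])
    rhs = np.array([c[pol_a[l], pol_b[l], l] for l in range(n)])
    return np.linalg.solve(B, rhs)

pol_a = np.zeros(n, dtype=int)                 # policy of the maximising player
pol_b = np.zeros(n, dtype=int)                 # policy of the minimising player
u = solve_frozen(pol_a, pol_b)
for _ in range(200):                           # outer Howard loop over b
    for _ in range(200):                       # inner Howard loop: Bellman solve in a
        u = solve_frozen(pol_a, pol_b)
        r = all_residuals(u)
        new_a = np.array([r[:, pol_b[l], l].argmax() for l in range(n)])
        if np.array_equal(new_a, pol_a):
            break
        pol_a = new_a
    G = all_residuals(u).max(axis=0)           # (nB, n): best response of a for each b
    new_b = G.argmin(axis=0)                   # greedy update of the minimising player
    if np.array_equal(new_b, pol_b):
        break
    pol_b = new_b

# At convergence min_b max_a (B[a,b] @ u - c[a,b]) vanishes componentwise.
final_residual = np.abs(all_residuals(u).max(axis=0).min(axis=0)).max()
```

At termination the inner loop has solved the Bellman problem in the maximising control exactly for the frozen outer policy, mirroring the role of the exact solutions of (18a) in the outer Howard loop.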

Stability
For Hamilton-Jacobi-Bellman equations one can bound the value function by the solution of any linear evolution problem associated with a fixed control. In contrast, the value function of an Isaacs equation may lie in part above and in part below the solution of such a linear problem. This difference between the Bellman and Isaacs equations extends to the proof of stability of numerical solutions. We begin by adapting [JS13, Lemma 3.2].
Lemma 6.1. One has ‖(h_i I_i^w + Id)^{-1}‖_∞ ≤ 1 and ‖h_i E_i^w − Id‖_∞ ≤ 1 for all i ∈ N and w ∈ V_i, where the norms are the matrix ∞-norms of linear mappings from V_i into V_i and from V_i^0 into V_i^0, respectively.
Proof. Let v ≡ 1 with coefficient vector 1 ∈ R^{dim V_i}, the vector with all entries equal to 1. Since ∇v ≡ 0 (as v ≡ 1) we have for each 1 ≤ ℓ ≤ N_i the lower bound (29), where we have used the non-negativity of the c_i^{(α,β)} and defined (α_i^ℓ(w), β_i^ℓ(w)) analogously to (α_i^{k,ℓ}(w), β_i^{k,ℓ}(w)) in (18). Similarly, the corresponding entry equals 1 for each boundary node index ℓ > N_i. Combining the positive invertibility of h_i I_i^w + Id with (29), we conclude that ‖(h_i I_i^w + Id)^{-1}‖_∞ ≤ 1, as required. Turning our attention to the explicit operators, one has ‖h_i E_i^w − Id‖_∞ = ‖(h_i E_i^w − Id) 1‖_∞ because all entries of the matrix h_i E_i^w − Id are non-positive. For each 1 ≤ ℓ ≤ N_i the ℓth entry of (h_i E_i^w − Id) 1 is bounded by 1 in absolute value, which completes the proof.

Theorem 6.1. The numerical solutions v_i are bounded in the L∞ norm, uniformly in i.

Proof. We split the numerical solution into two parts, v_i = v_i^0 + g_i, where v_i^0(s_i^k, ·) ∈ V_i^0 and g_i := P_i g ∈ V_i^g. Then (19) becomes an equivalent evolution for v_i^0. Using (12), we find a bound on the corresponding data terms. Now an application of Lemma 6.1 and an inductive argument over the time steps complete the proof.
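The two matrix-norm bounds of Lemma 6.1 can be illustrated numerically with a toy 1D Laplacian standing in for the assembled operators (all sizes and step restrictions below are illustrative):

```python
import numpy as np

n = 10
dx = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2       # toy operator (Dirichlet Laplacian)

h = dx**2 / 2.0                                   # so that h*L - Id has non-positive entries
A_impl = h * L + np.eye(n)                        # implicit-side matrix
A_expl = h * L - np.eye(n)                        # explicit-side matrix

# Matrix infinity-norms (maximal absolute row sums).
inv_norm = np.abs(np.linalg.inv(A_impl)).sum(axis=1).max()
expl_norm = np.abs(A_expl).sum(axis=1).max()
```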

Sub-and supersolutions of the PDE and the viscosity BCs
Recall the definition of the upper and lower semi-continuous envelopes v* and v_* of a function v, and consider their numerical counterparts v̄ and v̲, defined as the limit superior and the limit inferior, respectively, of v_i(s, y) as i → ∞ and (s, y) → (t, x). Owing to Theorem 5.1 and Theorem 6.1, v̄ and v̲ attain finite values. By construction, v̄ is upper and v̲ lower semi-continuous and v̲ ≤ v̄. The proof of the next theorem closely follows that of the Bellman setting analysed in [JS13]. We outline briefly the main steps to clarify the changes due to the additional inf operation, the control set B and the more general boundary conditions. At this point we delay the proof that the envelopes satisfy boundary conditions in the strong sense; we express this by assuming momentarily that ω = ∅. The extension to ω ≠ ∅ follows in the next section.
Proof. We show that v̄ is a subsolution. That v̲ is a supersolution follows analogously, up to a minor asymmetry in the sign of γ|µ_i| in (33) below. Suppose that w ∈ C∞(R × R^d) is a test function such that v̄ − w has a strict local maximum at (s, y) ∈ (0, T) × Ω, with v̄(s, y) = w(s, y). Then, following the argument in [JS13], there exists a sequence (s_i^k, y_i^ℓ)_i such that (31) and (32) hold, where κ ∈ {k, k + 1}, y_i^λ ∈ supp φ_i^ℓ and µ_i = v_i(s_i^k, y_i^ℓ) − P_i w(s_i^k, y_i^ℓ). Notice that µ_i → 0 as i → ∞ because of (31). Similarly to [JS13], we conclude from the monotonicity of the scheme (17) that (33) holds. Recalling Lemma 3.1 and lim_i µ_i = 0, we take the limit i → ∞ in inequality (33). Hence v̄ is a viscosity subsolution.

Boundary and final time conditions
A direct consequence of Lemma 7.1 is that the envelopes of the numerical solution satisfy the Dirichlet boundary conditions in the viscosity sense on the entire ∂Ω. For consistency with the strong boundary conditions on ω we assume the existence of two families of barrier functions (ζ_{y,ε})_{ε>0} and (ξ_{y,ε})_{ε>0}, corresponding to the sets of super- and subsolutions, respectively.
In order to understand the connection between convergence of the envelopes to the boundary conditions and the barrier functions let us consider the following example.
In order to find the solution we consider a fully implicit numerical scheme with artificial diffusion. In this spirit, let v_υ denote the exact solution of −u_t − υΔu + ∇u − 1 = 0, interpreting υ as the artificial diffusion coefficient.
For a fixed υ we can choose Δx small enough such that the numerical solution ṽ_υ with artificial diffusion υ and uniform mesh with mesh size Δx satisfies ‖ṽ_υ − v_υ‖_{W^{1,∞}} < υ. We note that v_υ attains its maximum at x_max = υ log(υ e^{1/υ}) = 1 + υ log υ. In particular, it follows that x_max → 1 and v_υ(t, x_max) → 1 as υ → 0. We conclude that for a decreasing artificial diffusion coefficient the sequence of numerical solutions ṽ_υ has a lower semi-continuous envelope v with a discontinuity at 1. Hence there cannot exist a Lipschitz continuous barrier function as described in Assumption 8.1: as we decrease υ, v_υ and hence ṽ_υ exhibit an arbitrarily large gradient in the vicinity of the boundary. As a result, for any barrier function ξ there exists i for which the maximum of ṽ_υ − P_i ξ_{y,ε} does not lie on the boundary for all ε > 0. Note that we can construct both upper and lower barrier functions at x = 0, e.g. ζ_0 = x² and ξ_0 = (x + 1)².
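The claimed limits can be checked numerically for the stationary one-dimensional model −υu″ + u′ − 1 = 0 on (0, 1) with homogeneous Dirichlet conditions; the explicit solution below is a standard boundary-layer computation given as our own illustration, not quoted from the text.

```python
import numpy as np

def u(x, nu):
    # Explicit solution of -nu*u'' + u' - 1 = 0 with u(0) = u(1) = 0,
    # written to avoid overflow for small nu (boundary layer at x = 1).
    return x - (np.exp((x - 1.0) / nu) - np.exp(-1.0 / nu)) / (1.0 - np.exp(-1.0 / nu))

xs = np.linspace(0.0, 1.0, 200001)
xmax, umax = [], []
for nu in (0.1, 0.01, 0.001):
    vals = u(xs, nu)
    xmax.append(xs[vals.argmax()])
    umax.append(vals.max())
```

As the diffusion decreases, the maximiser moves towards the boundary point 1 and the maximum value approaches 1, while the limit of the boundary values remains 0: the envelope detaches from the Dirichlet data.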
Lemma 8.1. The envelopes v̄ and v̲ satisfy the boundary conditions on ω in the strong sense.

Proof. We focus on the case of v̄(t, x) as the other case follows analogously. Let y ∈ ω and consider a sequence (s_i^k, y_i^ℓ) → (t, y) as i → ∞. We estimate lim sup_{i→∞} v_i(s_i^k, y_i^ℓ) by comparison with P_i g and P_i ξ_{y,ε} on the boundary, where the individual steps hold (a) due to the Lipschitz continuity of g and ξ_{y,ε} in time, (b) due to the L∞ convergence of P_i g(t, ·) and P_i ξ_{y,ε}(t, ·) as i → ∞ and (c) due to the definition of r_{t,y,ε}. By Assumption 8.1 we have that lim_{ε→0} (ξ_{y,ε}(t, y) − ξ_{y,ε}(t, r_{t,y,ε})) ≤ 0 for any t ∈ [0, T], so we can conclude that lim sup_{i→∞} v_i(s_i^k, y_i^ℓ) ≤ g(t, y) as ε → 0. The proof concludes by completing a similar calculation for lim inf_{i→∞} v_i(s_i^k, y_i^ℓ). This gives us g(t, y) ≥ lim sup_{i→∞} v_i(s_i^k, y_i^ℓ) ≥ lim inf_{i→∞} v_i(s_i^k, y_i^ℓ) ≥ g(t, y), and the final result follows.
Finally, also the final time conditions are attained strongly.
Lemma 8.2. The sub- and supersolutions v̲ and v̄ satisfy the final time conditions strongly.

Proof. The proof is identical to the Hamilton-Jacobi-Bellman case described in [JS13] once the control α ∈ A is replaced by the pair of controls (α_i(v_i), β_i(v_i)) ∈ A × B, similarly to the proof of Lemma 6.1, the interpolation operator I_i is replaced with the elliptic projection P_i and the function space V_i^0 is replaced with V_i^g.

Uniform convergence
Summarising the previous two sections, we have the sub- and supersolution properties of the complete final boundary value problem.
Theorem 9.1. The function v̄ is a viscosity subsolution of (3) and v̲ is a viscosity supersolution of (3).
The last ingredient to establish the convergence of the numerical scheme is a crucial property of the final boundary value problem: a comparison principle. Especially when working with differential operators which degenerate on a part of the domain, or with classes of singularly perturbed operators, imposing strong Dirichlet conditions on a part of the boundary and viscosity boundary conditions on the remainder is appropriate to guarantee comparison and well-posedness. This is reflected in our analysis by the following assumption, in combination with the formulation of Definition 2.2.

Assumption 9.1. Let u be a viscosity subsolution and w be a viscosity supersolution. Then u ≤ w on Ω.
Theorem 9.2. One has v̄ = v̲ = v, where v is the unique viscosity solution of equation (3). Furthermore, the numerical solutions v_i converge to v uniformly as i → ∞.

Proof. The proof is essentially identical to the Hamilton-Jacobi-Bellman case described in [JS13].

Construction of barrier functions
In order to justify Assumption 8.1 we now present settings for the construction of barrier functions. First, we introduce a method for ensuring the existence of barrier functions in the simpler case of uniformly parabolic operators on a convex domain for fully implicit numerical schemes. After that we show the extension to general IMEX schemes and allow non-convex domains as well as degenerate operators.
We will focus on constructing lower barrier functions ξ , since the argument for upper barrier functions ζ is symmetric and follows by changing the direction of inequalities and exchanging sup and inf operators where required.

Barrier functions for uniformly parabolic equations on convex domains
We assume the existence of a function ξ satisfying (40a) and (40b). The Dirichlet boundary conditions of (40b) are imposed in the strong sense. Because g does not depend on time it is clear that ξ is constant in t. We shall therefore often write ξ(x) instead of ξ(t, x). The function ξ is used as barrier for all y ∈ ω and ε > 0: ξ_{y,ε} := ξ. We assume in this subsection that Ω is convex. Then, as outlined below Assumption 3.1, we can construct P_i so that P_i ξ interpolates ξ on the boundary. Furthermore, we require for the construction of this section that Δv_T ∈ L∞(Ω) and v_T = g on ∂Ω.
To show that a maximum of v_i(s_i^k, ·) − P_i ξ over {s_i^k} × Ω̄ is attained on the boundary for all s_i^k we use that the h_i I_i^{k,v_i} + Id are M-matrices. Hence it is enough to prove condition (41).

10.1.1 Fully implicit numerical scheme

For the sake of clarity, we begin with a fully implicit numerical scheme of the form (42), where α_i^{k,ℓ}(v_i) is a maximiser of (18a) and β_i^{k,ℓ}(v_i) is a minimiser of (18b) for w = v_i. Noting that ⟨−Δξ, φ̂_i^ℓ⟩ = ⟨∇P_i ξ, ∇φ̂_i^ℓ⟩ is positive due to (40), we find that (43) holds. Furthermore, there is an î ∈ N such that the required bound holds for all i ≥ î.
Assuming i ≥ î for the remainder of the section, and using the definition of M, recall that ξ(x) = ξ(s_i^k, x) = ξ(s_i^{k+1}, x). With the numerical scheme (42) we obtain the corresponding bound. Because both P_i ξ and v_i(s_i^k, ·) interpolate g on ∂Ω, it follows, owing to the M-matrix property of h_i I_i^{k,v_i} + Id, that this implies condition (41). Hence, by induction, (41) holds for all k as soon as (44) is shown at the final time s_i^{k+1} = T. From (40) we know that −λΔξ ≥ M. Then, using v_i(T, ·) = P_i v_T and λ > 0, the definition of M ensures that M/λ ≥ ‖Δv_T‖_{L∞(Ω)}. Using the M-matrix property of the discrete Laplacian formed with respect to the nodal basis {φ̂_i^ℓ}, we obtain (44) at the final time. At this point we have proved that the maximum of v_i(s_i^k, ·) − P_i ξ lies on the boundary, as required. It remains to show that for t ∈ [0, T] we can select a maximiser r_{t,y,ε} of g − ξ_{y,ε} over {t} × ∂Ω such that lim_{ε→0} r_{t,y,ε} = y and lim_{ε→0} (ξ_{y,ε}(t, y) − ξ_{y,ε}(t, r_{t,y,ε})) ≤ 0. Because ξ attains g on the whole boundary, r_{t,y,ε} := y is already such a choice.
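A standard quadratic candidate satisfying a bound of the form −λΔξ ≥ M can be checked directly; the function below is hypothetical and is given for illustration only (it ignores the boundary-data constraint of (40b)). On a domain contained in the ball B_R(x₀), the function ξ(x) = M/(2λd)·(R² − |x − x₀|²) satisfies −λΔξ = M exactly.

```python
import numpy as np

# Hypothetical quadratic barrier candidate on a domain inside B_R(x0):
# xi(x) = M/(2*lam*d) * (R**2 - |x - x0|**2), for which -lam*Laplacian(xi) = M.
# We verify this in d = 2 with second-order central differences.
lam, M, R, d = 0.5, 3.0, 2.0, 2
x0 = np.array([0.3, -0.1])

def xi(p):
    return M / (2.0 * lam * d) * (R**2 - np.sum((p - x0)**2))

eps = 1e-4
pts = [np.array([0.0, 0.0]), np.array([0.5, 0.7]), np.array([-1.0, 0.2])]
errs = []
for p in pts:
    lap = sum((xi(p + eps * e) - 2.0 * xi(p) + xi(p - eps * e)) / eps**2
              for e in np.eye(d))
    errs.append(abs(-lam * lap - M))
max_err = max(errs)
```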

IMEX numerical schemes
We now extend the above argument to general numerical schemes as defined in (19). Choosing γ_1, γ_2, λ and M as above, the inequality (43) generalises in the IMEX setting to an analogous bound. According to Lemma 4.1 we have −(h_i E^{k,v_i}_i − Id) ≥ 0. Therefore this bound implies (41). At this point the induction of the previous subsection can be adapted to show that maxima are attained on the boundary. Setting r_{t,y,ε} := y completes the construction.
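The sign condition −(h_i E − Id) ≥ 0 amounts to entrywise nonnegativity of the explicit update matrix Id − h E. The following minimal sketch derives the induced time-step restriction for a hypothetical finite-difference operator, used only to illustrate the CFL-type bound, not the paper's discretisation.

```python
import numpy as np

# Monotonicity of the explicit part of an IMEX step: Id - h*E must be
# entrywise nonnegative. For E with nonpositive off-diagonal entries this
# reduces to a bound on h in terms of the diagonal of E. The operator E
# below is a hypothetical 1D finite-difference Laplacian.

def max_monotone_timestep(E):
    off = E - np.diag(np.diag(E))
    assert np.all(off <= 0), "off-diagonal entries must be nonpositive"
    d = np.diag(E)
    pos = d[d > 0]
    return np.inf if pos.size == 0 else 1.0 / pos.max()

dx, n = 0.1, 5
E = (2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dx**2
h = max_monotone_timestep(E)   # dx**2 / 2 for this operator
```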

Barrier functions for degenerate equations and general domains
We now remove the two main assumptions of Section 10.1, namely the requirements that the differential operator is uniformly parabolic and that the domain is convex. We let ω consist of those y for which it is possible to ensure the existence of strict supersolutions in the following sense: for all ε > 0 one can find a ξ_{y,ε} satisfying (45) for all β ∈ C(Ω; A), as well as (46), where B_y(δ(ε)) is the ball centred at y with radius δ(ε) > 0, a positive parameter depending on ε. Observe that v_T|_{∂Ω} = g|_{∂Ω} and therefore (46) also provides control on the boundary, due to g and ξ_{y,ε} being time independent. It ensures that the maximum of g − ξ_{y,ε} is attained in the vicinity of y and is non-positive. We can view (45) as a generalisation of (40a) because the ξ of (40a) is a strict supersolution of the corresponding linear problems. In light of Assumptions 3.1 and 3.2, we may choose î such that (47) holds. Recall the definition of (α^{k,i}(w), β^{k,i}(w)) ∈ A × B in (18). For the sake of readability, in this section we write β in place of β^{k,i}(P_i ξ_{y,ε} − v_i) and α in place of α^{k,β,i}(P_i ξ_{y,ε} − v_i), i.e. α and β are the optimal choices when evaluating the numerical operator at P_i ξ_{y,ε} − v_i.
Then the numerical method, applied to P_i ξ_{y,ε} − v_i at time s^{k+1}_i with the control of the infimum frozen at P_i ξ_{y,ε} − v_i, returns a bound at the node y_i, from which we conclude with Lemma 4.1 that (48) holds. Because, unlike in the case of Subsection 10.1, the functions P_i ξ_{y,ε} − v_i(s^k_i, ·) do not vanish on the boundary, we need to extend this result to the boundary indices between N_i and dim V_i. Owing to (47) and (46b) we know that (48) also holds on the boundary for all time steps s^{k+1}_i. Furthermore, (46b) implies that (48) is satisfied on all of Ω at the final time s^{k+1}_i = T. In summary, we have the induction step that if (48) holds on Ω then P_i ξ_{y,ε} − v_i(s^k_i, ·) ≥ 0 on Ω, and the induction base needed to guarantee that the maximum of v_i(s^k_i, ·) − P_i ξ_{y,ε} over {s^k_i} × Ω is attained on the boundary for all s^k_i. It remains to show that for t ∈ [0, T] we can select a maximiser r_{t,y,ε} of g − ξ_{y,ε} over {t} × ∂Ω such that lim_{ε→0} r_{t,y,ε} = y and lim_{ε→0} (ξ_{y,ε}(t, y) − ξ_{y,ε}(t, r_{t,y,ε})) ≤ 0. The former holds because of (46a) and (46c), while the latter follows from (46b).
Similarly, we assume the existence of strict subsolutions in the following sense: for all y ∈ ω and ε > 0 one can find a ζ_{y,ε} satisfying the conditions analogous to (45) and (46), with the infimum in place of the supremum.

Numerical experiments
In this section we present two numerical experiments demonstrating the viability of the presented method. In the first experiment, we analyse convergence rates for a fully nonlinear second-order Isaacs problem with a known solution and confirm at least linear convergence in the L², L^∞ and H¹ norms. In the second experiment we compute the value function of a stochastic tag-chase game with asymmetric velocities and vanishing diffusion on a non-convex domain. The presented finite element method was implemented in Python with FEniCS [LMW+12] and is available from the public repository [Jar21] under the GNU Lesser General Public License.

Isaacs problem with exact solution
Let the spatial domain Ω ⊂ ℝ² be the equilateral triangle with vertices (±√3/2, −1/2) and (0, 1). We study the Isaacs problem (49). Equation (49) is indeed of Isaacs type because the Euclidean norm of the gradient may alternatively be written as |∇v| = sup_{|w|=1} w · ∇v. One can verify by direct calculation that the function given in (50) solves (49) exactly. The final-time and boundary data are also given by the right-hand side of (50). We split the scheme so that the advection term is treated explicitly. Note that, in order to ensure the monotonicity of the scheme, we may also assign part of the diffusion to the explicit operator. To improve the rate of convergence this was done locally, i.e. on a node-wise basis. In case the naturally occurring diffusion at a node is not sufficient, we introduce artificial diffusion. Overall, this approach leads to artificial diffusion near the origin, where the differential operator is degenerately elliptic, while for the majority of mesh nodes it is zero. We choose the largest time step guaranteeing the monotonicity of the method, which leads to O(h_i) = O(∆x_i). Any diffusion left over after ensuring the monotonicity of the explicit operators is treated implicitly.
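The node-wise artificial diffusion described above can be sketched as follows. The threshold |b|·∆x/2 is a standard upwind-type bound used here as an assumption for illustration, not the precise condition derived for the scheme; the nodal data are hypothetical.

```python
import numpy as np

# Node-wise artificial diffusion: where the natural diffusion a_i at a
# node falls below the upwind-type threshold |b_i|*dx/2 needed for a
# monotone explicit advection discretisation, the deficit is added as
# artificial diffusion; elsewhere nothing is added.

def artificial_diffusion(a, b, dx):
    needed = 0.5 * np.abs(b) * dx
    return np.maximum(0.0, needed - a)

# Hypothetical nodal data: diffusion degenerating at the first node.
a = np.array([0.0, 0.001, 0.05, 0.2])   # natural diffusion per node
b = np.array([1.0, 1.0, 1.0, 1.0])      # advection speed per node
nu = artificial_diffusion(a, b, dx=0.1)
# nu is only nonzero at the nodes where a_i < |b_i|*dx/2 = 0.05.
```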
The convergence of the scheme is shown in Figure 1, which plots the errors at t = 0 for different mesh sizes; we observe at least linear convergence in the L², L^∞ and H¹ norms.

Stochastic tag-chase game

Imagine two players moving in the plane ℝ². One player is the pursuer (denoted P) and tries to catch the other, the evader (denoted E). Both are allowed to choose their direction freely: α, β ∈ [−π, π], where α is the direction chosen by the evader and β the direction chosen by the pursuer. The pursuer moves with speed v^β_P while the evader moves with speed v^α_E; in particular, the respective speeds are functions of the angles. Additionally, some choices of direction for the pursuer and the evader may be subject to random noise behaving like a standard Brownian motion. Having specified the setting, we can formulate the dynamics of the two positions explicitly as a system of stochastic differential equations, where we assume that W^P = (W^{(1,P)}, W^{(2,P)}) and W^E = (W^{(1,E)}, W^{(2,E)}) are two ℝ²-valued, mutually independent standard Wiener processes. We reduce this 4-dimensional problem to a 2-dimensional one by allowing the origin of the coordinate system to move along with the pursuer. In this case the dynamics are driven by the relative drift and the combined diffusion σ^{(α,β)}(x_i) = √((σ^β_P(x^P_i))² + (σ^α_E(x^E_i))²), where Ŵ is an ℝ²-valued standard Wiener process satisfying σ^{(α,β)}(x_i) Ŵ_i(t) = σ^β_P(x^P_i) W_{i,P}(t) − σ^α_E(x^E_i) W_{i,E}(t), t ≥ 0, i = 1, 2. The pursuer catches the evader (and thus wins the game) if they manage to reduce their distance to the evader to some value r ∈ ℝ. The evader wins the game when they manage to increase the distance to the pursuer to some given R > r, or if they manage to avoid capture before some time T. Note that in this case the spatial domain of the problem becomes Ω := B(0, R) \ B(0, r).
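The combined diffusion coefficient can be checked numerically: for independent unit Wiener increments, the variance of σ_P W_P − σ_E W_E equals σ_P² + σ_E². The constant coefficients below are hypothetical (in the model they depend on the controls and the position).

```python
import numpy as np

# Variance identity behind the reduced dynamics: the difference of two
# independent scaled noises is again a scaled Wiener process with
# sigma = sqrt(sigma_P**2 + sigma_E**2). Hypothetical constant coefficients.
sigma_P, sigma_E = 0.8, 0.3
sigma = np.hypot(sigma_P, sigma_E)        # sqrt(0.64 + 0.09)

rng = np.random.default_rng(0)
dW_P = rng.standard_normal(200_000)       # unit-variance increments
dW_E = rng.standard_normal(200_000)
combined = sigma_P*dW_P - sigma_E*dW_E
# The sample variance of `combined` matches sigma**2 up to Monte Carlo error.
```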
Mathematically, the evader receives a pay-out of 1 whenever they win the game and 0 otherwise. We write J(α, β, x, t) for the expected pay-out to the evader, where α : [0, T] → A and β : [0, T] → B are functions of time such that α(t) and β(t) are the controls chosen by the evader and the pursuer at time t, respectively.
The value function is v(x, t) := inf_β sup_α J(α, β, x, t), the infimum and supremum being taken over the admissible controls of the pursuer and the evader, respectively.
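On a discretised control grid the inf-sup at a fixed (x, t) reduces to a finite inf-sup over a payoff table; J below is hypothetical data with J[i, j] = J(α_i, β_j).

```python
import numpy as np

# Discrete inf-sup: sup over the evader's controls (rows, index alpha),
# then inf over the pursuer's controls (columns, index beta).
# J is a hypothetical 2x2 payoff table.
J = np.array([[0.2, 0.9],
              [0.6, 0.4]])
value = J.max(axis=0).min()   # sup over alpha first, then inf over beta
```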
We shall assume that the pursuer is faster than the evader when moving in the horizontal direction. Moreover, we assume that the diffusion in the upper part of the domain scales with the vertical coordinate. The numerical approximation of the value function is displayed in Figure 2. Note the asymmetric nature of the graph, which is due to the different speeds of the pursuer and the evader in the horizontal direction, as well as the larger effect of the stochastic component of the equation in the upper part of the domain.
which holds trivially. Instead, setting ω = ∂Ω, we enforce (v_c)* = v_c = 0 on the boundary, which filters out all spurious solutions with c < 0, and uniqueness is regained.

satisfy Assumption 4.1, provided the meshes are strictly acute.

Figure 2: The value function at t = 0 for asymmetric velocities and the diffusion having a lower value in the lower part of Ω.