1 Introduction

This note deals with existence and uniqueness of viscosity solutions of systems of non-linear partial differential equations (PDE) with interconnected obstacles and Neumann boundary conditions. Let \(\Omega \subset {\mathbb {R}}^{n}\) be a domain, i.e. an open connected set, and let \(i\in \{1,2,\ldots ,m\}\) for some positive integer m. The problem considered can in general be stated as

$$\begin{aligned}&\min \left\{ \partial _{t}u_{i}\left( t,x\right) + {F}_i\left( t,x,u_i(t,x), Du_{i}, D^{2}u_{i} \right) , u_{i}(t,x) - {{\mathcal {M}}}_i u(t,x) \right\} = 0\\&\quad \quad \text {in} \ (0,T)\times \Omega , \\&u_{i}(0,x) = g_{i}\left( x\right) \quad \text {on} \ {\bar{\Omega }}, \\&B_i\left( t,x,u_i(t,x),Du_{i} \right) = 0\quad \text {on} \ (0,T)\times \partial \Omega . \end{aligned}$$
(BVP)

The solution is a vector-valued function \(u:=(u_1,u_2,\ldots ,u_m)\) where the components are interconnected through the obstacle \({{\mathcal {M}}}_i\) in the sense that component i lies above the obstacle

$$\begin{aligned} {{\mathcal {M}}}_iu(t,x) := \max _{i\ne j }\{u_j(t,x) -c_{ij}(t,x)\}, \end{aligned}$$

which itself depends on all other components. Here and in the following, \(Du_i \in {{\mathbb {R}}}^n\) and \(D^2u_i \in \mathcal {S}^n\) (\(\mathcal {S}^n\) being the set of symmetric \(n \times n\) matrices) denote the gradient and Hessian matrix of \(u_i\) w.r.t. x.
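For orientation, in the simplest case \(m=2\) the obstacles reduce to \({{\mathcal {M}}}_1u = u_2 - c_{12}\) and \({{\mathcal {M}}}_2u = u_1 - c_{21}\), so that the interior equations in (BVP) read

$$\begin{aligned}&\min \left\{ \partial _{t}u_{1} + {F}_1\left( t,x,u_1, Du_{1}, D^{2}u_{1}\right) , u_{1}(t,x) - \left( u_2(t,x)-c_{12}(t,x)\right) \right\} = 0, \\&\min \left\{ \partial _{t}u_{2} + {F}_2\left( t,x,u_2, Du_{2}, D^{2}u_{2}\right) , u_{2}(t,x) - \left( u_1(t,x)-c_{21}(t,x)\right) \right\} = 0, \end{aligned}$$

i.e., each component solves a PDE constrained from below by the other component minus the corresponding switching cost \(c_{ij}\).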

In the standard setting of a single PDE without obstacle, the renowned paper by Crandall–Ishii–Lions [7] proves existence and uniqueness of viscosity solutions of fully nonlinear degenerate elliptic PDEs with Neumann boundary conditions in smooth domains satisfying an exterior ball condition. Dupuis–Ishii [9, 10] consider more general domains, including domains with non-smooth boundaries and corners, and Barles [3] proved a comparison principle and existence of unique solutions to degenerate elliptic and parabolic boundary value problems with non-linear Neumann-type boundary conditions in bounded domains with \(W^{3,\infty }\)-boundary. Ishii and Sato [16] proved similar theorems for boundary value problems for some singular degenerate parabolic partial differential equations with non-linear oblique derivative boundary conditions in bounded \(C^1\)-domains. Further, in bounded domains with \(W^{3,\infty }\)-boundary, Bourgoing [4] considered singular degenerate parabolic equations and equations having \(L^1\)-dependence in time. Lundström–Önskog [23] recently extended parts of [9] by establishing existence and uniqueness of viscosity solutions to a general parabolic PDE in non-smooth, time-dependent domains.

For systems of variational inequalities with interconnected obstacles as in (BVP), the literature on bounded spatial domains \(\Omega \) with boundary conditions is, to the authors’ knowledge, scarce. Instead much focus has been directed to unbounded domains (essentially \(\Omega \equiv {\mathbb {R}}^n\)) and often linear PDEs. In this setting the literature is, on the contrary, rather rich, see, e.g., El-Asri–Hamadene [11], Djehiche–Hamadene–Popier [8], Hu–Tang [14], Biswas–Jakobsen–Karlsen [4], Hamadene–Morlais [13], Lundström–Nyström–Olofsson [19, 20], Lundström–Olofsson–Önskog [21], Fuhrman–Morlais [12], Reisinger–Zhang [24] and the many references listed therein. Extensions to more general operators have been studied in Klimsiak [18]. Recently, Boufoussi–Hamadene–Jakani [6] considered a similar setting to ours and proved existence and uniqueness of a viscosity solution to a system of PDEs with interconnected obstacles and Neumann boundary conditions. In contrast to the present paper, their method is based on a probabilistic formulation in terms of reflected backward stochastic differential equations.

One reason for this large interest is the close connection to stochastic optimization and so-called “optimal switching” problems. Under suitable assumptions, the solution to (BVP) represents the value function of an optimal switching problem in which, in regime i, the underlying stochastic process has the second order operator \(F_i\) as its infinitesimal generator. With this interpretation, the domain \(\Omega \) represents the constraints put on the underlying stochastic process and, in particular, \(\Omega \equiv {\mathbb {R}}^n\) means no constraints.

However, when studying applications of optimal switching theory, one can easily think of examples where constraints are not only natural, but also necessary for the model to be meaningful. For example, consider a hydropower plant with a dam in which the underlying (stochastic) process \(X_t\) represents the amount of water currently available in the dam. Then, naturally, the process can only take values between 0 and some \(X_{max}\), the latter being the capacity of the dam. (In case the power plant is of run-of-river type, i.e., has no dam, the problem can be reduced to \(\Omega ={\mathbb {R}}^n\) and an application of the theory in this setting was recently studied in Lundström–Olofsson–Önskog [22].) When the dam is full, any additional inflow of water must be spilled, keeping the amount of water in the dam constant. Such a situation may be modelled through a reflection of the underlying stochastic process \(X_t\). If this reflection is assumed to be in the normal direction, then one obtains a Neumann boundary condition, i.e.

$$\begin{aligned} B_i(t,x,u_i(t,x),Du_i) = \langle n(x), Du_i\rangle + f_i(x) = 0 \end{aligned}$$
(BC)

where n(x) is the normal of \(\partial \Omega \) at x. A more general reflection model puts us in the oblique derivative problem in which n(x) in (BC) should be replaced by \(\nu (x)\), a vector field satisfying \(\langle \nu (x), n(x)\rangle > 0\) on \(\partial \Omega \). The Dirichlet setting, which is studied in Barkhudaryan–Gomes–Shahgholian–Salehi [2], can be given a similar interpretation as above but then in the sense that the game ends when the process hits the boundary \(\partial \Omega \).
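To illustrate (BC) in the dam example above, assume, purely for illustration, that \(n=1\) and \(\Omega = (0, X_{max})\). Then \(n(X_{max})=1\), \(n(0)=-1\), and (BC) reads

$$\begin{aligned} \partial _x u_i(t,X_{max}) + f_i(X_{max}) = 0 \quad \hbox {and} \quad -\partial _x u_i(t,0) + f_i(0) = 0, \end{aligned}$$

i.e., a Neumann condition at the full and at the empty dam, respectively.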

Optimal switching problems with reflection are not as well studied as their non-reflected counterparts. However, given the close connection to real-world applications and the growing interest in optimal switching problems under constraints, see, e.g., Kharroubi [17], we expect this to change, and a thorough analysis of the related PDE-theory on bounded domains is therefore motivated. Moreover, although the study is motivated by applications, the results are of independent mathematical interest, enlarging the class of problems for which the viscosity theory is fully investigated.

2 Mathematical Problem Formulation, Assumptions and Main Results

2.1 Notation and Assumptions

We let \(\Omega \subset {{\mathbb {R}}}^n\) be an open connected bounded set with closure \({\bar{\Omega }}\) and boundary \(\partial \Omega := {\bar{\Omega }} \setminus \Omega \) and consider \(t \in [0,T]\). We denote by \(\mathcal C^{\alpha , \beta }\) the class of functions which are \(\alpha \) and \(\beta \) times continuously differentiable on \([0,T] \times {\bar{\Omega }}\) w.r.t. the first (time t) and second (space x) variable, respectively. If \(f(\cdot )\) only depends on the spatial variable we simply write \(f\in {\mathcal {C}}^\beta \). We assume that

$$\begin{aligned} \partial \Omega \in {\mathcal {C}}^{1} \ \hbox {and satisfies the exterior ball condition}, \end{aligned}$$
(D1)

i.e., that \(\partial \Omega \) is once continuously differentiable and that

$$\begin{aligned} \hbox {for every} \ {\hat{x}} \in \partial \Omega \ \hbox {there exist}\ x\ \hbox {and}\ r>0~ \hbox {s.t.}~ B(x,r) \cap \Omega = \emptyset \ \hbox {and}\ {\hat{x}} \in \partial B(x,r), \end{aligned}$$

where \(B(x,r):= \{y: |y-x|<r\}\). (A simple sufficient condition is that \(\partial \Omega \in {\mathcal {C}}^{1}\) with Lipschitz continuous derivative, i.e., \(\partial \Omega \in {\mathcal {C}}^{1,1}\), as such domains satisfy both the exterior and interior ball condition, see Aikawa–Kilpeläinen–Shanmugalingam–Zhong [1, Lemma 2.2].)
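For instance, the unit ball \(\Omega = B(0,1)\) satisfies (D1): given \({\hat{x}} \in \partial B(0,1)\) one may take \(x = 2{\hat{x}}\) and \(r = 1\), since then

$$\begin{aligned} B(2{\hat{x}},1) \cap B(0,1) = \emptyset \quad \hbox {and} \quad {\hat{x}} \in \partial B(2{\hat{x}},1). \end{aligned}$$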

We assume that \(F_i(t,x,r,p,X) : [0,T] \times {\bar{\Omega }} \times {{\mathbb {R}}}\times {{\mathbb {R}}}^n \times \mathcal {S}^n \rightarrow {{\mathbb {R}}}\) is a continuous function satisfying

$$\begin{aligned} r \mapsto F_i(t,x,r,p,X)- \lambda \, r \ \hbox {is non-decreasing for some}\ \lambda \in {{\mathbb {R}}}, \end{aligned}$$
(F1)
$$\begin{aligned}&\ |F_i(t,x,r,p,X)-F_i(t,x,r,q,Y)| \le \omega (|p-q| + ||X-Y||)\\&\quad \hbox {for some}\ \omega :[0,\infty ) \rightarrow [0,\infty ) \hbox { with}\ \omega (0+)=0 \end{aligned}$$
(F2)

and that

$$\begin{aligned} F_i(t,y,r,p,Y)-F_i(t,x,r,p,X) \le \omega \left( \frac{1}{\epsilon } |x-y|^2 + |x-y|(|p|+1)\right) \end{aligned}$$
(F3)

for every \(t \in [0,T)\) fixed and whenever

$$\begin{aligned} \left( \begin{array}{cc} X & 0 \\ 0 & -Y \end{array} \right) \le \frac{3}{\epsilon } \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) , \end{aligned}$$

where \(I\equiv I^n\) denotes the \(n \times n\) identity matrix. Note that these assumptions on \(F_i\) imply degenerate ellipticity.
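A standard example of an operator satisfying (F1)–(F3), included here only for illustration, is the (possibly degenerate) linear operator

$$\begin{aligned} F_i(t,x,r,p,X) = -\mathrm {tr}\left( \sigma _i(x)\sigma _i^{T}(x) X\right) - \langle b_i(x), p\rangle + \lambda r - h_i(t,x), \end{aligned}$$

where \(h_i\) is continuous and \(\sigma _i\) and \(b_i\) are bounded and Lipschitz continuous; in this case (F2) holds with a linear modulus and (F3) follows from the matrix inequality above by a classical computation using the Lipschitz continuity of \(\sigma _i\) and \(b_i\), see [7].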

Regarding the boundary condition \(B_i\), we assume that

$$\begin{aligned}&B_i(t,x,r,p):=\langle n(x),p\rangle + f_i(t,x,r), \\&\hbox {where}~ n(x)~ \hbox {is the exterior unit normal at}~ x ~\hbox {and}~ f_i ~\hbox {is continuous on}~ [0,T]\times \partial \Omega \times \mathbb {R},\\&\hbox {and non-decreasing in}~ r~ \hbox {for every}~ (t,x) \in [0,T] \times \partial \Omega . \end{aligned}$$
(BC1)

Last, we assume that the obstacle functions \(c_{ij}(t,x)\) are continuous on \([0,T]\times {\bar{\Omega }}\) and satisfy

$$\begin{aligned} (i)&\quad \ c_{ij}(t,x) \in {\mathcal {C}}^{1,2} ( [0,T]\times {\bar{\Omega }} ), \\ (ii)&\quad c_{ii}(t,x)=0\hbox { for each}\ i\in \{1,\dots ,m\}, \end{aligned}$$
(O1)

and that the so-called “no-loop” condition holds:

$$\begin{aligned} c_{i_1 i_2}(t,x) + c_{i_2 i_3}(t,x) + \cdots + c_{i_p i_1}(t,x) > 0 \end{aligned}$$
(O2)

for any sequence of indices \(\{i_1, i_2, \dots , i_p, i_1\}\) with \(i_q \ne i_{q+1}\) and any \((t,x) \in [0,T]\times {\bar{\Omega }}\).

We need to strengthen the assumption on the obstacle slightly to prove existence. In particular, we then also demand that

$$\begin{aligned} c_{ij}(t,x) + c_{jk}(t,x) \ge c_{ik}(t,x) \quad \hbox {for all}~ i,j,k \in \{1,2, \dots , m\} \end{aligned}$$
(O3)

which is an additional structural condition compared to (O2).

Finally, we require that the initial data \(g=(g_1,\dots ,g_m)\) is continuous on \({\bar{\Omega }}\) and compatible with the obstacle functions, in particular that

$$\begin{aligned} g_i(x) \ge g_j(x)- c_{ij}(0,x)\;\; \hbox {for any}~ i,j \in \{1,\dots ,m\}. \end{aligned}$$
(I1)
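A simple example of obstacle functions fulfilling (O1)–(O3) is that of constant and identical switching costs,

$$\begin{aligned} c_{ij}(t,x) \equiv c > 0 \ \hbox {for}\ i \ne j \quad \hbox {and} \quad c_{ii} \equiv 0, \end{aligned}$$

in which case any loop of p switches has total cost \(pc>0\) and \(c_{ij}+c_{jk} \ge c_{ik}\) holds for all i, j, k; assumption (I1) then reduces to the requirement \(g_i(x) \ge g_j(x) - c\) for \(i \ne j\).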

Remark 2.1

In the parabolic setting studied here, it is a standard task to show that one can assume \(\lambda >0\) in (F1) w.l.o.g. Indeed, if \(\lambda \le 0\), then for \({\bar{\lambda }} < \lambda \), u(t,x) is a solution to (BVP) if and only if \(e^{{\bar{\lambda }} t}u(t,x)\) is a solution to (BVP) with \(F_i\) and \(f_i\) replaced by

$$\begin{aligned} -\bar{\lambda }r+e^{\bar{\lambda }t}F_i(t,x,e^{-\bar{\lambda }t}r,e^{-\bar{\lambda }t}p,e^{-\bar{\lambda }t}X)\quad \text {and}\quad e^{\bar{\lambda }t}f_i(t,x,e^{- \bar{\lambda }t}r), \end{aligned}$$

and with \(c_{ij}\) similarly scaled. The function to the left in the above display is strictly increasing in r and hence satisfies (F1) with \(\lambda \) positive. A consequence of this is that we may as well assume that \(F_i\) is strictly increasing in r and that there exists \(\gamma >0\) s.t. for \(r>s\)

$$\begin{aligned} F_i(t,x,r,p,X)-F_i(t,x,s,p,X) \ge \gamma (r-s). \end{aligned}$$
(F1*)
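For the reader's convenience, the monotonicity claim in the remark can be verified directly: by (F1), for \(r>s\),

$$\begin{aligned}&\left( -\bar{\lambda }r+e^{\bar{\lambda }t}F_i(t,x,e^{-\bar{\lambda }t}r,e^{-\bar{\lambda }t}p,e^{-\bar{\lambda }t}X)\right) - \left( -\bar{\lambda }s+e^{\bar{\lambda }t}F_i(t,x,e^{-\bar{\lambda }t}s,e^{-\bar{\lambda }t}p,e^{-\bar{\lambda }t}X)\right) \\&\quad \ge -\bar{\lambda }(r-s) + e^{\bar{\lambda }t}\lambda \left( e^{-\bar{\lambda }t}r - e^{-\bar{\lambda }t}s\right) = (\lambda -\bar{\lambda })(r-s), \end{aligned}$$

so the transformed operator satisfies (F1*) with \(\gamma = \lambda - \bar{\lambda } > 0\).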

We are looking for solutions in the viscosity sense and will use the classical definitions of (parabolic) sub- and superjets, provided here for convenience.

Definition 2.2

(Definition (8.1) of [7]) For \(({\hat{t}}, {\hat{x}}) \in (0,T) \times {\mathcal {O}} \), the triplet \((a,p,X) \in {{\mathbb {R}}}\times {{\mathbb {R}}}^n \times \mathcal {S}^n\) lies in the parabolic superjet of u at \(({\hat{t}}, {\hat{x}})\), written \((a,p,X) \in {\mathcal {P}}^{2,+}_{{\mathcal {O}}} u({\hat{t}}, {\hat{x}})\), if

$$\begin{aligned} u(t,x)\le & {} u ({\hat{t}}, {\hat{x}}) + a (t-{\hat{t}}) + \langle p, x-{\hat{x}}\rangle + \frac{1}{2} \langle X(x-{\hat{x}}), x- {\hat{x}}\rangle \\&+ o(|t-{\hat{t}}| + |x- {\hat{x}}|^2) \end{aligned}$$

as \((0,T) \times {\mathcal {O}} \ni (t,x) \rightarrow ({\hat{t}}, {\hat{x}})\). Analogously, the parabolic subjet is defined by \(\mathcal P^{2,-}_{{\mathcal {O}}} u({\hat{t}}, {\hat{x}}) := -\mathcal P^{2,+}_{{\mathcal {O}}} -u({\hat{t}}, {\hat{x}})\).

The closure of \({\mathcal {P}}^{2,\cdot }_{{\mathcal {O}}} u ({\hat{t}}, {\hat{x}})\), denoted \(\bar{{\mathcal {P}}}^{2,\cdot }_{{\mathcal {O}}} u({\hat{t}}, {\hat{x}})\), is defined as

$$\begin{aligned} \bar{{\mathcal {P}}}^{2,\cdot }_{{\mathcal {O}}} u({\hat{t}}, {\hat{x}}) :=&\{ (a,p,X) \in {{\mathbb {R}}}\times {{\mathbb {R}}}^n \times \mathcal {S}^n : \exists (t_n,x_n,a_n,p_n,X_n)\in (0,T) \times {\mathcal {O}} \times {{\mathbb {R}}}\times {{\mathbb {R}}}^n \times \mathcal {S}^n \\&\quad \hbox {s.t. } (a_n, p_n, X_n) \in {\mathcal {P}}^{2,\cdot }_{\mathcal O}u(t_n,x_n) ~ \hbox {and}~ \\&\quad (t_n,x_n, u(t_n,x_n), a_n, p_n, X_n) \rightarrow ({\hat{t}}, {\hat{x}}, u({\hat{t}}, {\hat{x}}), a,p,X) \}. \end{aligned}$$

Definition 2.3

A function \(u=(u_1, \dots , u_m) \in USC([0,T]\times {\bar{\Omega }})\) is a viscosity subsolution to (BVP) if, for all \(i \in \{1,\dots ,m\}\),

$$\begin{aligned} (i)&\quad \ \min \left\{ a+ {F_i}(t,x,u_i(t,x), p,X), u_{i}(t,x) - {{\mathcal {M}}}_i u(t,x) \right\} \le 0, \\&\quad (t,x) \in (0,T) \times \Omega , (a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t,x), \\ (ii)&\quad \ \min \left\{ a + {F_i}(t,x,u_i(t,x), p, X ), u_{i}(t,x) - {{\mathcal {M}}}_i u(t,x) \right\} \\&\quad \wedge \left( \langle n(x), p \rangle + f_i(t,x,u_i(t,x))\right) \le 0 , \\&\quad \ (t,x) \in (0,T) \times \partial \Omega , (a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\bar{\Omega }}u_i(t,x), \\ (iii)&\quad \ u_i(0,x) \le g_i(x), \quad x\in {\bar{\Omega }}. \end{aligned}$$

Viscosity supersolutions are defined analogously with USC replaced by LSC, \({{\bar{\mathcal {P}}}^{2,+}_{\cdot }}\) replaced by \({{\bar{\mathcal {P}}}^{2,-}_{\cdot }}\), \(\wedge \) replaced by \(\vee \), and \(\le \) replaced by \(\ge \). A function \(u \in C([0,T]\times {\bar{\Omega }})\) is a viscosity solution to (BVP) if it is both a sub- and a supersolution.

2.2 Main Results

With the above preliminaries set, we are ready to state our main results, which concern comparison of sub- and supersolutions and existence and uniqueness of viscosity solutions to (BVP).

Theorem 2.4

Assume that (D1), (F1)–(F3), (BC1), and (O1)–(O2) hold. If u and v are viscosity sub- and supersolutions to (BVP), respectively, then \(u(t,x) \le v(t,x)\) for \((t,x) \in [0,T) \times {\bar{\Omega }}\).

Theorem 2.5

Assume that (D1), (F1)–(F3), (BC1), (O1)–(O3), and (I1) hold. Then there exists a unique viscosity solution to (BVP).

Our proofs rely on by now classical techniques in the theory of viscosity solutions, using doubling of variables and the maximum principle for semi-continuous functions for Theorem 2.4 and Perron’s method together with a construction of certain sub- and supersolutions for Theorem 2.5. From the proof of Theorem 2.4 we also get the following corollary, useful in its own right.

Corollary 2.6

Let u and v be viscosity sub- and supersolutions, respectively, to (BVP). If u and v satisfy Definition 2.3 (ii) on some open region \(G \subset (0,T) \times \partial \Omega \) and \(u \le v\) on \(\left( (0,T) \times \partial \Omega \right) \setminus G\), then \(u(t,x) \le v(t,x)\) for all \((t,x) \in [0,T) \times {\bar{\Omega }}\).

Remark 2.7

We have chosen to present our results and proofs in a rather simple setting, with the directional derivative taken in the normal direction and a smooth, stationary domain \(\Omega \). However, our proofs reveal that the additional arguments needed to treat systems of PDEs with interconnected obstacles (rather than a single PDE) are more or less decoupled from the standard arguments of Crandall–Ishii–Lions [7]. Hence, by combining these additional arguments with existence and uniqueness proofs for more general PDEs and domains, one should be able to prove existence and uniqueness for systems of PDEs with interconnected obstacles in similar generality. For tractability, we refrain from such generalizations here.

3 Proof of Theorem 2.4: The Comparison Principle

Throughout this section, we assume that the assumptions of Theorem 2.4 hold, i.e., that (D1), (F1)–(F3), (BC1), and (O1)–(O2) hold. Let \(u=(u_1, u_2, \dots , u_m)\) and \(v=(v_1, v_2, \dots , v_m)\) be sub- and supersolutions to (BVP), respectively. Our proof follows the classical outline of Crandall–Ishii–Lions [7] and consists of four major steps.

3.1 Step 1. An \(\epsilon \)-room on the Boundary

Lemma 3.1

(Lemma 7.6 of [7]) For any continuous \(\nu (x): \partial \Omega \rightarrow {{\mathbb {R}}}^n\) satisfying \(\langle n(x), \nu (x) \rangle > 0\) there exists a \(C^2({\bar{\Omega }})\) function \(\varphi (x)\) s.t.

$$\begin{aligned} \langle \nu (x), D\varphi (x) \rangle \ge 1 \hbox { for } x \in \partial \Omega \quad \hbox {and} \quad \varphi (x) \ge 0~ \hbox {on } {\bar{\Omega }}. \end{aligned}$$

Let

$$\begin{aligned} u_i^\eta (t,x) := u_i(t,x) - \eta \varphi (x) - C_\eta \quad \hbox {and} \quad v_i^\eta (t,x) := v_i(t,x) + \eta \varphi (x) + C_\eta \end{aligned}$$

where \(\varphi (x)\) is as given by the lemma above (with \(\nu (x) = n(x)\)) and \(C_\eta >0\) is a constant to be specified later. Note that \(u_i^\eta < u_i\) so that for any \((a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i^\eta (t,x)\) we have from (F1) and Remark 2.1 (i.e., from (F1*)) that

$$\begin{aligned}&\gamma (\eta \varphi (x) + C_\eta ) \le F_i(t,x,u_i(t,x), p, X) - F_i(t,x,u_i^\eta (t,x),p, X) \end{aligned}$$

for some \(\gamma >0\) and thus

$$\begin{aligned}&a+ F_i(t,x,u_i^\eta (t,x),p, X)\\&\quad \le a+ F_i(t,x,u_i(t,x), p, X) - \gamma \eta \varphi (x) - \gamma C_\eta \\&\quad \le a+ F_i(t,x, u_i(t,x), p + \eta D\varphi (x), X + \eta D^2\varphi (x))- \gamma \eta \varphi (x) - \gamma C_\eta + \omega (\eta M) \end{aligned}$$

where \(M= \sup _{{\bar{\Omega }}}\{|D\varphi (x)| +||D^2\varphi (x)||\}\) and where the last inequality comes from (F2). Since \((a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i^\eta (t,x)\) and \(u_i^\eta = u_i - \eta \varphi (x)- C_\eta \) we have

$$\begin{aligned} (a,p+\eta D\varphi (x), X + \eta D^2\varphi (x)) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u(t,x) \end{aligned}$$

and therefore choosing \(C_\eta = \frac{\omega (\eta M)}{\gamma }\) we get that

$$\begin{aligned} a+ F_i(t,x,u_i^\eta (t,x),p, X) \le -\gamma \eta \varphi (x) \le 0 \end{aligned}$$

whenever \((t,x) \in (0,T) \times \Omega \) and \( u_i(t,x) > {{\mathcal {M}}}_i u(t,x) \) (since u is a subsolution). However, \(- \eta \varphi (x) - C_\eta \) is independent of i and thus

$$\begin{aligned} u_i(t,x)> {{\mathcal {M}}}_i u(t,x)&\iff u^\eta _i(t,x) > {{\mathcal {M}}}_i u^\eta (t,x) \end{aligned}$$

and we can conclude that

$$\begin{aligned} \min \{a+ F_i(t,x,u_i^\eta (t,x),p, X), u^\eta _i(t,x) - {{\mathcal {M}}}_i u^\eta (t,x) \} \le 0 \end{aligned}$$

whenever \((a,p,X)\in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u^\eta _i(t,x)\), \((t,x) \in (0,T) \times \Omega \). On the other hand, if \(x \in \partial \Omega \), we have by (BC1)

$$\begin{aligned} B_i(t, x, u^\eta _i(t,x), p) =&B_i(t,x,u_i^\eta (t,x), p + \eta D \varphi (x)) - \eta \langle n(x) , D\varphi (x) \rangle \\ \le&B_i(t,x, u_i(t,x), p + \eta D\varphi (x)) - \eta \end{aligned}$$

and thus \(u^\eta \) is a subsolution to (BVP) with the boundary condition

$$\begin{aligned} B_i(t,x,r,p):= & {} \langle n(x),p\rangle + f_i(t,x,r) \le 0 \quad \hbox {replaced by} \\ {\check{B}}_i(t,x,r,p):= & {} B_i(t,x,r,p) + \eta \le 0. \end{aligned}$$

A similar calculation shows that \(v^\eta \) is a supersolution to (BVP) with boundary condition \({\hat{B}}_i(t,x,r,p): = B_i(t,x,r,p) - \eta \ge 0\). Consequently, it suffices to prove the comparison \(u^\eta \le v^\eta \) when \(u^\eta \) and \(v^\eta \) are sub- and supersolutions to (BVP) with boundary conditions \({\check{B}}_i(t,x,r,p)\) and \({\hat{B}}_i(t,x,r,p)\), respectively, since we retrieve our result in the limit as \(\eta \rightarrow 0\).

3.2 Step 2. Avoiding the Neumann Boundary Condition

We now construct a test function which allows us to discard the Neumann boundary condition in (BVP). We will need the following maximum principle whose proof, which is similar to the current one, is postponed to the end of this section.

Proposition 3.2

Let u and v be viscosity sub- and supersolutions, respectively, to

$$\begin{aligned}&\min \{{\partial _t u_i} + F_i(t,x,u_i,Du_i,D^2u_i) , u_i(t,x)-{{\mathcal {M}}}_iu(t,x) \}=0 \\&u_i(0,x) = g_i(x) \end{aligned}$$

on \([0,T) \times {\bar{\Omega }}\) in the sense of Definition 2.3 (i) and (iii). Then,

$$\begin{aligned} \sup _{[0,T) \times {\bar{\Omega }}} (u_i -v_i) \le \max _{k \in \{1,\dots , m\}} \sup _{((0,T) \times \partial \Omega ) \cup (\{0\} \times {\bar{\Omega }})}(u_k-v_k)^+. \end{aligned}$$

Let us now assume the opposite of what we seek to prove, i.e., that there exists a non-empty set \({{\mathcal {I}}}\subset \{1,\dots , m\}\) and \(({\hat{t}}, {\hat{x}}) \in [0,T] \times {\bar{\Omega }}\) such that

$$\begin{aligned} \max _{k \in \{1,\dots , m\}}\sup _{(0,T)\times \Omega } (u_k -v_k) = u_i({\hat{t}}, {\hat{x}}) - v_i({\hat{t}}, {\hat{x}}) = \delta >0 \end{aligned}$$
(3.1)

for any \(i\in {{\mathcal {I}}}\). Since \(u_i \le g_i \le v_i\) on \(\{0\} \times {\bar{\Omega }}\) by definition, we have \({\hat{t}} >0\) and by Proposition 3.2 we can then also assume \({\hat{x}} \in \partial \Omega \). Moreover, if we set

$$\begin{aligned} u^\theta (t,x): = u(t,x) - \frac{\theta }{T-t} \end{aligned}$$

for \(\theta >0\) arbitrary, we have \(u^\theta < u\),

$$\begin{aligned} (a, p, X ) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u^\theta (t,x) \iff (a +\frac{\theta }{(T-t)^2}, p, X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u(t,x), \end{aligned}$$

and

$$\begin{aligned} (i)&\, u_i(t,x)> {{\mathcal {M}}}_i u(t,x) \iff u^\theta _i(t,x) > {{\mathcal {M}}}_i u^\theta (t,x)\\&\quad (\hbox {since} \ u^\theta _i - u_i \ \hbox {is independent of} \ i), \\ (ii)&\, \langle n(x), p \rangle + f_i(t,x,u_i^\theta (t,x)) \le \langle n(x), p \rangle + f_i(t,x,u_i(t,x)) \quad ( \hbox {by}~ (BC1)) \\ (iii)&\, a + F_i(t,x,u_i^\theta (t,x),p, X) \le a+ F_i(t,x,u_i(t,x),p, X) \quad (\hbox {by}~ (F1)). \end{aligned}$$

It follows immediately that \(u^\theta \) is a subsolution to (BVP) with \(F_i\) replaced by

$$\begin{aligned} F_i^\theta (t,x,r,p,X)= F_i(t,x,r,p,X)+ \frac{\theta }{(T-t)^2} \end{aligned}$$

and that \(u^\theta \rightarrow -\infty \) as \(t \rightarrow T\). We may therefore also assume that \({\hat{t}} < T\), since if not, we can prove \(u^\theta \le v\) as follows and then retrieve our result in the limit as \(\theta \rightarrow 0\).
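For the reader's convenience, the interior part of this claim can be checked as follows: if \((a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u^\theta _i(t,x)\) and \(u^\theta _i(t,x) > {{\mathcal {M}}}_iu^\theta (t,x)\), then (i)–(iii) above and the subsolution property of u give

$$\begin{aligned} a + F_i^\theta (t,x,u^\theta _i(t,x),p,X)&= a + \frac{\theta }{(T-t)^2} + F_i(t,x,u^\theta _i(t,x),p,X) \\&\le \left( a + \frac{\theta }{(T-t)^2}\right) + F_i(t,x,u_i(t,x),p,X) \le 0, \end{aligned}$$

since \((a +\frac{\theta }{(T-t)^2}, p, X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t,x)\) and, by (i), \(u_i(t,x) > {{\mathcal {M}}}_iu(t,x)\).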

For \(\epsilon >0\) (tending to 0 in the final argument) and \(i \in {\mathcal {I}}\) arbitrary but fixed, let

$$\begin{aligned} \varphi _\epsilon (t,x,y)= & {} \frac{1}{2\epsilon } |x-y|^2 + |x-{\hat{x}}|^4 + |y-{\hat{x}}|^4\\&+ |t-{\hat{t}}|^2 - f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) \langle n({\hat{x}}), x-y \rangle \end{aligned}$$

and consider the function

$$\begin{aligned} \Phi _\epsilon (t,x,y)= u_i(t,x)-v_i(t,y)- \varphi _\epsilon (t,x,y) \end{aligned}$$

which is USC by construction. Let \((t_\epsilon , x_\epsilon , y_\epsilon )\) be the maximum point of \(\Phi _\epsilon \) on \([0,T) \times {\bar{\Omega }} \times {\bar{\Omega }}\). (This maximum point exists for \(\epsilon \) small since \(\Phi _\epsilon \) is USC, \({\hat{t}} <T\) and \({\bar{\Omega }}\) is compact.) Clearly, \(|x_\epsilon - y_\epsilon | \rightarrow 0\) as \(\epsilon \rightarrow 0\) since \((t_\epsilon , x_\epsilon , y_\epsilon )\) is a maximum point. Since

$$\begin{aligned} 2\Phi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ) \ge \Phi _\epsilon (t_\epsilon , x_\epsilon , x_\epsilon ) + \Phi _\epsilon (t_\epsilon , y_\epsilon , y_\epsilon ) \end{aligned}$$

we have

$$\begin{aligned}&\qquad \frac{1}{\epsilon }|x_\epsilon -y_\epsilon |^2 - 2 f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) \langle n({\hat{x}}), x_\epsilon -y_\epsilon \rangle \\&\qquad \le u_i(t_\epsilon , x_\epsilon )- u_i(t_\epsilon , y_\epsilon ) + v_i(t_\epsilon , x_\epsilon ) - v_i(t_\epsilon , y_\epsilon ) < \infty \end{aligned}$$

where the right-hand side is bounded from above, uniformly in \(\epsilon \), since \(u_i\) and \(-v_i\) are USC on the compact set \([0,T]\times {\bar{\Omega }}\). This gives that also \(\frac{1}{\epsilon }|x_\epsilon -y_\epsilon |^2 \rightarrow 0\) as \(\epsilon \rightarrow 0\) and \((t_\epsilon ,x_\epsilon ) \rightarrow ({\hat{t}}, {\hat{x}})\) since \(u_i(t,x) - v_i(t,x) - \varphi _\epsilon (t,x,x)\) has a maximum at \(({\hat{t}}, {\hat{x}})\). Moreover, from the upper- and lower semi-continuity of u and v we get

$$\begin{aligned} u_i(t_\epsilon , x_\epsilon ) \rightarrow u_i({\hat{t}}, {\hat{x}}) \quad \hbox {and} \quad v_i(t_\epsilon , y_\epsilon ) \rightarrow v_i({\hat{t}}, {\hat{x}}). \end{aligned}$$

In particular, from the definition of \(\varphi _\epsilon \) and \((t_\epsilon , x_\epsilon , y_\epsilon )\) we get

$$\begin{aligned} u_i ({\hat{t}}, {\hat{x}}) - v_i ({\hat{t}}, {\hat{x}})&= \Phi _\epsilon ({\hat{t}}, {\hat{x}},{\hat{x}}) \le \Phi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ) \nonumber \\&\le u_i (t_\epsilon , x_\epsilon ) - v_i (t_\epsilon , y_\epsilon ) + f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}}))\langle n({\hat{x}}), x_\epsilon -y_\epsilon \rangle . \end{aligned}$$
(3.2)

Note that \(u_i \in USC ([0,T]\times {\bar{\Omega }})\) so we have \(\limsup _{\epsilon \rightarrow 0} u_i(t_\epsilon , x_\epsilon ) \le u_i({\hat{t}}, {\hat{x}})\). Since \((t_\epsilon , x_\epsilon , y_\epsilon ) \rightarrow ({\hat{t}}, {\hat{x}}, {\hat{x}})\) as \(\epsilon \rightarrow 0\), this inequality cannot be strict; if it were, (3.2) shows that we would necessarily have \(\liminf _{\epsilon \rightarrow 0} v_i(t_\epsilon ,y_\epsilon )< v_i({\hat{t}}, {\hat{x}})\) as well, but this contradicts \(v_i \in LSC([0,T]\times {\bar{\Omega }})\). Hence \(u_i(t_\epsilon , x_\epsilon ) \rightarrow u_i({\hat{t}}, {\hat{x}})\), and an analogous argument shows \(v_i(t_\epsilon , y_\epsilon ) \rightarrow v_i({\hat{t}}, {\hat{x}})\).

We now invoke the exterior ball condition (D1), which implies the existence of \(r>0\) s.t.

$$\begin{aligned} \langle n(x_\epsilon ) , x_\epsilon - {\hat{x}}\rangle \ge -\frac{1}{2r}|{\hat{x}} - x_\epsilon |^2 \quad \hbox {for any}~ {\hat{x}} \in {\bar{\Omega }}~ \hbox {and}~ x_\epsilon \in \partial \Omega . \end{aligned}$$

Differentiating gives

$$\begin{aligned} D_x\varphi _ \epsilon (t,x,y)= \frac{1}{\epsilon } (x-y) + 4|x -{\hat{x}}|^2 (x-{\hat{x}}) -f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) n({\hat{x}}) \end{aligned}$$

and thus, if \(x_\epsilon \in \partial \Omega \) we have

$$\begin{aligned}&B_i(t_\epsilon , x_\epsilon , u_i(t_\epsilon , x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )) \\&= \langle n(x_\epsilon ), D_x\varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ) \rangle + f_i(t_\epsilon , x_\epsilon , u_i(t_\epsilon , x_\epsilon )) \\&=\Bigg \langle n(x_\epsilon ), \frac{1}{\epsilon } (x_\epsilon -y_\epsilon ) + 4 |x_\epsilon - {\hat{x}}|^2 (x_\epsilon -{\hat{x}}) -f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) n({\hat{x}})\Bigg \rangle \\&\quad + f_i(t_\epsilon , x_\epsilon , u_i(t_\epsilon , x_\epsilon )) \\&\ge -\frac{1}{2r \epsilon }|x_\epsilon -y_\epsilon |^2 + 4|x_\epsilon -{\hat{x}}|^2 \langle n(x_\epsilon ), x_\epsilon -{\hat{x}}\rangle \\&\quad - f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) \langle n(x_\epsilon ), n({\hat{x}}) \rangle + f_i(t_\epsilon , x_\epsilon , u_i(t_\epsilon , x_\epsilon )). \end{aligned}$$

Hence, as \(\epsilon \rightarrow 0\) we have

$$\begin{aligned} B_i(t_\epsilon , x_\epsilon , u_i(t_\epsilon , x_\epsilon ), D_x\varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )) \rightarrow D \end{aligned}$$
(3.3)

for some \(D\ge 0\). Similarly, for the supersolution v we get for \(y_\epsilon \in \partial \Omega \)

$$\begin{aligned}&B_i(t_\epsilon , y_\epsilon , v_i(t_\epsilon , y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ))\\&= \langle n(y_\epsilon ), -D_y\varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ) \rangle + f_i(t_\epsilon , y_\epsilon , v_i(t_\epsilon , y_\epsilon )) \\&=\left\langle n(y_\epsilon ), \frac{1}{\epsilon } (x_\epsilon -y_\epsilon ) - 4|y_\epsilon - {\hat{x}}|^2 (y_\epsilon -{\hat{x}}) - f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) n({\hat{x}})\right\rangle \\&\quad + f_i(t_\epsilon , y_\epsilon , v_i(t_\epsilon , y_\epsilon )) \\&\le \frac{1}{r\epsilon }|x_\epsilon - y_\epsilon |^2 -4 |y_\epsilon -{\hat{x}}|^2 \langle n(y_\epsilon ), y_\epsilon -{\hat{x}} \rangle \\&\quad -f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) \langle n(y_\epsilon ), n({\hat{x}}) \rangle + f_i(t_\epsilon , y_\epsilon , v_i(t_\epsilon , y_\epsilon )) \end{aligned}$$

and thus that, as \(\epsilon \rightarrow 0\),

$$\begin{aligned} B_i(t_\epsilon , y_\epsilon , v_i(t_\epsilon , y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )) \rightarrow {\tilde{D}} \end{aligned}$$
(3.4)

for some \({\tilde{D}} \le 0\) (since \(u_i > v_i\) at \(({\hat{t}}, {\hat{x}})\) and \(f_i\) is non-decreasing by assumption (BC1)). Hence, if

$$\begin{aligned} u_i({\hat{t}}, {\hat{x}}) -v_i({\hat{t}}, {\hat{x}})=\delta >0 \end{aligned}$$

is a positive maximum of \(u_i-v_i\) over \((0,T) \times {\bar{\Omega }}\) we must have

$$\begin{aligned}&\ \min \left\{ a + {F_i}\left( t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X \right) , u_{i}(t_\epsilon ,x_\epsilon ) - {{\mathcal {M}}}_i u(t_\epsilon ,x_\epsilon ) \right\} \le 0 \\&\quad \hbox {for}~ (a,D_x \varphi _\epsilon , X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t_\epsilon , x_\epsilon ), \quad \hbox {and} \\&\ \min \left\{ {\tilde{a}} + {F_i}\left( t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), Y \right) , v_{i}(t_\epsilon ,y_\epsilon ) - {{\mathcal {M}}}_i v(t_\epsilon ,y_\epsilon ) \right\} \ge 0\\&\quad \hbox {for}~ ({\tilde{a}},-D_y \varphi _\epsilon , Y) \in {\bar{\mathcal {P}}}^{2,-}_{\Omega }v_i(t_\epsilon , y_\epsilon ), \end{aligned}$$

provided \(\epsilon \) is small enough. Indeed, this holds by continuity of \(F_i\) and since, by Step 1, we can consider \(u_i\) and \(v_i\) to be sub- and supersolutions to (BVP) with boundary conditions \({\check{B}}_i : =B_i + \eta \le 0\) and \({\hat{B}}_i : =B_i -\eta \ge 0 \), respectively. Thus, since we have (3.3) and (3.4), the boundary conditions cannot be satisfied and therefore the equation must hold on the boundary (in the sub- /supersolution sense).

3.3 Step 3. Avoiding the Obstacle

We will now argue as in Ishii–Koike [15] to ensure that the subsolution (more precisely, at least one component of it) lies strictly above its obstacle at \(({\hat{t}}, {\hat{x}})\). To do this, recall the set \({{\mathcal {I}}}\) and assume that \(u_i({\hat{t}}, {\hat{x}}) \le {\mathcal {M}}_i u({\hat{t}}, {\hat{x}})\), i.e.,

$$\begin{aligned} u_i({\hat{t}}, {\hat{x}}) \le \max _{i \ne j}\{ {u_j({\hat{t}}, {\hat{x}}) -c_{ij}({\hat{t}}, {\hat{x}})}\}, \end{aligned}$$

whenever \(i \in {{\mathcal {I}}}\). This implies the existence of \(k \in \{1, \dots , i-1, i+1,\dots , m\}\) s.t.

$$\begin{aligned} u_i({\hat{t}}, {\hat{x}}) + c_{ik}({\hat{t}}, {\hat{x}}) \le u_k({\hat{t}}, {\hat{x}}). \end{aligned}$$

Moreover, since v is a supersolution we have

$$\begin{aligned} v_i({\hat{t}}, {\hat{x}}) \ge v_k({\hat{t}}, {\hat{x}}) -c_{ik}({\hat{t}}, {\hat{x}}). \end{aligned}$$

Combining the above two inequalities yields

$$\begin{aligned} u_i({\hat{t}}, {\hat{x}}) -v_i({\hat{t}}, {\hat{x}}) \le u_k({\hat{t}}, {\hat{x}}) -v_k ({\hat{t}}, {\hat{x}}) \end{aligned}$$

but since \(({\hat{t}}, {\hat{x}})\) is a maximum point of \(u_i-v_i\) and \(i \in {{\mathcal {I}}}\), this must in fact be an equality and \(k \in {{\mathcal {I}}}\) as well. Repeating this as many times as necessary (there are only finitely many indices), we find the existence of a sequence of indices \(\{i_1, i_2, \dots , i_p, i_1\}\), \(i_q \ne i_{q+1}\), such that

$$\begin{aligned} u_{i_1}({\hat{t}}, {\hat{x}}) + c_{i_1 i_2}({\hat{t}}, {\hat{x}}) + c_{i_2 i_3}({\hat{t}}, {\hat{x}}) + \ldots + c_{i_p i_1}({\hat{t}}, {\hat{x}}) \le u_{i_1}({\hat{t}}, {\hat{x}}) \end{aligned}$$

which implies

$$\begin{aligned} c_{i_1 i_2}({\hat{t}}, {\hat{x}}) + c_{i_2 i_3}({\hat{t}}, {\hat{x}}) + \ldots + c_{i_p i_1}({\hat{t}}, {\hat{x}}) \le 0, \end{aligned}$$

a contradiction to (O2). Thus, there exists at least one index \(i \in {{\mathcal {I}}}\) s.t.

$$\begin{aligned} u_i ({\hat{t}}, {\hat{x}}) > {{\mathcal {M}}}_i u({\hat{t}}, {\hat{x}}). \end{aligned}$$

Since \(u_i(t_\epsilon , x_\epsilon ) \rightarrow u_i(\hat{t}, \hat{x})\) and \(v_i(t_\epsilon , y_\epsilon )\rightarrow v_i(\hat{t}, \hat{x})\), and since \({{\mathcal {M}}}_i u\) is USC, we can then conclude from this and Step 2 that

$$\begin{aligned}&a + {F_i}\left( t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X \right) \le 0\nonumber \\&\quad \hbox {for}\, (a,D_x \varphi _\epsilon , X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t_\epsilon , x_\epsilon ), \quad \hbox {and} \nonumber \\&{\tilde{a}} + {F_i}\left( t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), Y \right) \ge 0 \nonumber \\&\quad \hbox {for}\ ({\tilde{a}},-D_y \varphi _\epsilon , Y) \in {\bar{\mathcal {P}}}^{2,-}_{\Omega }v_i(t_\epsilon , y_\epsilon ), \end{aligned}$$
(3.5)

for at least one \(i \in {{\mathcal {I}}}\) and \(\epsilon \) small enough.

3.4 Step 4. Reaching the Contradiction

We are now ready to reach our final contradiction. To do this, we will use the following lemma, the so called maximum principle for semi-continuous functions. Lemma 3.3 corresponds to Theorem 8.3 of [7] in a less general but for our purposes sufficient form.

Lemma 3.3

Suppose that \((t_\epsilon , x_\epsilon , y_\epsilon )\) is a maximum point of

$$\begin{aligned} u_i(t,x) -v_i(t,y) -\varphi _\epsilon (t,x,y) \end{aligned}$$

over \((0,T) \times {\bar{\Omega }} \times {\bar{\Omega }}\). Then, for each \(\theta >0\) there are \(X, Y \in \mathcal {S}^n\) such that

$$\begin{aligned} (i)&\,(a, D_x \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ), X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t_\epsilon , x_\epsilon ) \quad \hbox {and}\quad \\&\, ({\tilde{a}}, -D_y \varphi _\epsilon (t_\epsilon ,x _\epsilon , y_\epsilon ), Y) \in {\bar{\mathcal {P}}}^{2,-}_{\Omega }v_i(t_\epsilon , y_\epsilon ), \\ (ii)&\, \left( \begin{array}{cc} X & 0\\ 0 & -Y \end{array} \right) \le A+ \theta A^2, \\ (iii)&\,a- {\tilde{a}} = \frac{\partial }{\partial t} \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon ), \end{aligned}$$

where \(A :=D^2 \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )\) is the Hessian matrix of \(\varphi _\epsilon (t,x,y)\) w.r.t. (x, y).

We first note that

$$\begin{aligned} \frac{\partial }{\partial t} \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )&= 2 (t_\epsilon -{\hat{t}}),\\ D_x\varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )&= \frac{1}{\epsilon } (x_\epsilon - y_\epsilon ) + 4 |x_\epsilon -{\hat{x}}|^2(x_\epsilon -{\hat{x}}) - f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) n({\hat{x}}), \\ -D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon , y_\epsilon )&= \frac{1}{\epsilon }(x_\epsilon - y_\epsilon ) - 4 |y_\epsilon -{\hat{x}}|^2 (y_\epsilon -{\hat{x}}) - f_i({\hat{t}}, {\hat{x}}, u_i({\hat{t}}, {\hat{x}})) n({\hat{x}}), \\ A := D^2 \varphi _\epsilon (t_\epsilon ,x_\epsilon , y_\epsilon )&= \frac{1}{\epsilon } \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) + \mathcal {O}(|x_\epsilon - {\hat{x}}|^2 + |y_\epsilon - {\hat{x}}|^2) \end{aligned}$$

which gives

$$\begin{aligned} A^2 = \frac{2}{\epsilon ^2} \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) + \mathcal {O}\left( \frac{1}{\epsilon }(|x_\epsilon -{\hat{x}}|^2 +|y_\epsilon -{\hat{x}}|^2) + |x_\epsilon - {\hat{x}}|^4 + |y_\epsilon - {\hat{x}}|^4 \right) . \end{aligned}$$
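For the reader's convenience, we record the short computation behind the choice \(\theta = \epsilon \) made below. Writing J for the block matrix \(\left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) \), so that \(J^2 = 2J\), the two displays above give

$$\begin{aligned} A + \epsilon A^2 = \frac{1}{\epsilon } J + \frac{2}{\epsilon } J + \mathcal {O}\left( |x_\epsilon -{\hat{x}}|^2 + |y_\epsilon -{\hat{x}}|^2\right) = \frac{3}{\epsilon } J + \mathcal {O}\left( |x_\epsilon -{\hat{x}}|^2 + |y_\epsilon -{\hat{x}}|^2\right) , \end{aligned}$$

where the quartic error terms have been absorbed into the \(\mathcal {O}\)-term using that \(\Omega \) is bounded.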

Choosing \(\theta = \epsilon \) in Lemma 3.3 (ii) gives the existence of

$$\begin{aligned} (a, D_x \varphi _\epsilon (t_\epsilon ,x_\epsilon , y_\epsilon ), X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u_i(t_\epsilon , x_\epsilon ) \quad \hbox {and}~~ ({\tilde{a}}, -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), Y) \in {\bar{\mathcal {P}}}^{2,-}_{\Omega }v_i( t_\epsilon , y_\epsilon ), \end{aligned}$$

with

$$\begin{aligned} \left( \begin{array}{cc} X & 0\\ 0 & -Y \end{array} \right) \le \frac{3}{\epsilon } \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) + \mathcal {O}(|x_\epsilon - {\hat{x}}|^2 + |y_\epsilon - {\hat{x}}|^2). \end{aligned}$$

Note in particular that this implies that for any \(\xi >0\) fixed there exists \(\epsilon >0\) such that the above holds with

$$\begin{aligned} \left( \begin{array}{cc} X & 0\\ 0 & -Y \end{array} \right)&\le \frac{3}{\epsilon } \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) + \xi \left( \begin{array}{cc} I & 0\\ 0 & I \end{array} \right) \nonumber \\&\iff \left( \begin{array}{cc} X-\xi I & 0\\ 0 & -(Y+\xi I) \end{array} \right) \le \frac{3}{\epsilon } \left( \begin{array}{cc} I & -I \\ -I & I \end{array} \right) . \end{aligned}$$
(3.6)

Moreover, we have from (3.5) that

$$\begin{aligned}&a + {F_i}\left( t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X \right) \le 0 \quad \hbox {and} \\&{\tilde{a}} + {F_i}(t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), Y ) \ge 0 \end{aligned}$$

which implies

$$\begin{aligned} 2 (t_\epsilon -{\hat{t}})= & {} a-{\tilde{a}} \le {F_i}(t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon ,y_\epsilon ), Y ) \nonumber \\&- {F_i}(t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X ) \end{aligned}$$
(3.7)

by Lemma 3.3 (iii).

What remains is to show that (3.7) is inconsistent with (3.1), (3.6) and the assumptions (F1) - (F3). To see this, note that

$$\begin{aligned}&{F_i}(t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon ,y_\epsilon ), Y ) - {F_i}\left( t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), D_x \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X \right) \nonumber \\&\le {F_i}(t_\epsilon ,y_\epsilon ,v_i(t_\epsilon ,y_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon ,y_\epsilon ), Y ) - {F_i}(t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X ) \nonumber \\&\qquad + \omega (4(|{\hat{x}}-x_\epsilon |^3 + |{\hat{x}}- y_\epsilon |^3)) \nonumber \\&\le {F_i}(t_\epsilon ,y_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon ,y_\epsilon ), Y ) - {F_i}(t_\epsilon ,x_\epsilon ,u_i(t_\epsilon ,x_\epsilon ), -D_y \varphi _\epsilon (t_\epsilon , x_\epsilon ,y_\epsilon ), X ) \nonumber \\&\qquad + \omega (4(|{\hat{x}}-x_\epsilon |^3 + |{\hat{x}}- y_\epsilon |^3)) - \gamma (u_i(t_\epsilon , x_\epsilon ) - v_i(t_\epsilon ,y_\epsilon )) \end{aligned}$$
(3.8)

where we have used, in turn, (F2),

$$\begin{aligned}&|D_x \varphi _\epsilon (t_\epsilon ,x_\epsilon , y_\epsilon )- (-D_y \varphi _\epsilon (t_\epsilon ,x_\epsilon , y_\epsilon ))|\\&\quad = |4(|{\hat{x}}-x_\epsilon |^2(x_\epsilon -{\hat{x}})+ |{\hat{x}}- y_\epsilon |^2 (y_\epsilon - {\hat{x}}))| \le 4(|{\hat{x}}-x_\epsilon |^3 + |{\hat{x}}- y_\epsilon |^3), \end{aligned}$$

and (F1) (more precisely, (F1*)). Since (3.6) holds we get from (F3) that

$$\begin{aligned}&F_i(t_\epsilon ,y_\epsilon , r, p, Y+ \xi I) - F_i(t_\epsilon , x_\epsilon , r, p, X-\xi I)\\&\le \omega (\frac{1}{\epsilon } |x_\epsilon -y_\epsilon |^2 + |x_\epsilon -y_\epsilon |(|p|+1)) \end{aligned}$$

and from (F2) that

$$\begin{aligned}&F_i(t_\epsilon ,y_\epsilon , r, p, Y+ \xi I) - F_i(t_\epsilon , x_\epsilon , r, p, X-\xi I) \\&\quad \ge F_i(t_\epsilon ,y_\epsilon , r, p, Y) - F_i(t_\epsilon , x_\epsilon , r, p, X) - 2 \omega (\xi ) \end{aligned}$$

and by combining these displays we have

$$\begin{aligned}&F_i(t_\epsilon ,y_\epsilon , r, p, Y) - F_i(t_\epsilon , x_\epsilon , r, p, X) \nonumber \\&\quad \le \omega (\frac{1}{\epsilon } |x_\epsilon -y_\epsilon |^2+ |x_\epsilon -y_\epsilon |(|p|+1)) + 2 \omega (\xi ). \end{aligned}$$
(3.9)

Putting (3.7), (3.8), and (3.9) together gives

$$\begin{aligned} 2 (t_\epsilon -{\hat{t}}) \le&\, \omega (4(|{\hat{x}}-x_\epsilon |^3 + |{\hat{x}}- y_\epsilon |^3)) - \gamma (u_i(t_\epsilon , x_\epsilon ) - v_i(t_\epsilon ,y_\epsilon )) \\&+ 2 \omega (\xi ) + \omega (\frac{1}{\epsilon } |x_\epsilon -y_\epsilon |^2 + |x_\epsilon -y_\epsilon |(|-D_y \varphi _\epsilon (t_\epsilon , x_\epsilon , y_\epsilon )|+1)). \end{aligned}$$

Taking the limit as \(\epsilon \rightarrow 0\) and recalling \(\frac{1}{\epsilon }|x_\epsilon -y_\epsilon |^2 \rightarrow 0\) we arrive at

$$\begin{aligned} \gamma \delta \le 2 \omega (\xi ) \end{aligned}$$

which is a contradiction since \(\xi >0\) was arbitrary and \(\gamma \delta > 0\). \(\square \)

Proof

(Proof of Proposition 3.2) Assume first that \(u_i \le v_i\) on \((0,T) \times \partial \Omega \) for all \(i \in \{1,\dots , m\}\). As above, assume the existence of \({{\mathcal {I}}}\) and \(({\hat{t}}, {\hat{x}})\) such that

$$\begin{aligned} 0< \delta = u_i({\hat{t}}, {\hat{x}}) -v_i({\hat{t}}, {\hat{x}}) \end{aligned}$$

for all \(i \in {{\mathcal {I}}}\). Now, let

$$\begin{aligned} \varphi _\epsilon (t,x,y) = \frac{1}{2\epsilon } |x-y|^2 + |x-{\hat{x}}|^4 + |y-{\hat{x}}|^4 + |t-{\hat{t}}|^2 \end{aligned}$$

and construct \(\Phi _\epsilon (t,x,y) := u_i(t,x) -v_i (t,y) - \varphi _\epsilon (t,x,y)\). Calculations analogous to those in the first half of Step 2 above show

$$\begin{aligned} \frac{1}{\epsilon } |x_\epsilon -y_\epsilon |^2\rightarrow 0, \quad x_\epsilon \rightarrow {\hat{x}}, \quad u_i(t_\epsilon ,x_\epsilon ) \rightarrow u_i ({\hat{t}}, {\hat{x}}), \quad \hbox {and} \quad v_i(t_\epsilon , y_\epsilon ) \rightarrow v_i({\hat{t}}, {\hat{x}}) \end{aligned}$$

and we can thus simply repeat Steps 3-4 above to get the desired contradiction.

For the general case, let

$$\begin{aligned} K :=\max _{i \in \{1,\dots , m\}} \sup _{((0,T) \times \partial \Omega ) \cup (\{0\} \times {\bar{\Omega }})} (u_i-v_i)^+ \ge 0 \end{aligned}$$

and note that \({\tilde{u}}:=u-K \le u\) is a subsolution to (BVP) in the sense of Definition 2.3 (i) and (iii) since \({\tilde{u}}_i(0,\cdot ) \le u_i(0,\cdot ) \le g_i\), K is independent of i and

$$\begin{aligned} a+ F_i(t,x,{\tilde{u}}_i, p, X) \le a + F_i(t,x, u_i, p, X) \end{aligned}$$

by (F1). Moreover, \({\tilde{u}} \le v\) on \((0,T) \times \partial \Omega \) so we can apply the above result to conclude that

$$\begin{aligned} {\tilde{u}} = u- K \le v \iff u-v \le K \end{aligned}$$

and we are done. \(\square \)

Proof

(Proof of Corollary 2.6) If u is a viscosity subsolution, then so is \(u-K\) for all \(K>0\). It thus suffices to prove that if \(u\le v\) on \(\left( (0,T) \times \partial \Omega \right) \setminus G\), then \(u\le v\) in \([0,T) \times {\bar{\Omega }}\). If \(G=(0,T) \times \partial \Omega \), this implication and its proof are identical to Theorem 2.4, and if \(G= \emptyset \) they are identical to Proposition 3.2. If \(G \subset (0,T) \times \partial \Omega \) is a non-empty proper subset, we know by assumption that \(u\le v\) on \(((0,T) \times \partial \Omega ) \setminus G\) and so the maximum point \(({\hat{t}}, {\hat{x}})\) defined in (3.1) must belong to the set G where the boundary condition is satisfied. Hence, we can follow the proof of Theorem 2.4 to conclude that \(u\le v\) in \([0,T) \times {\bar{\Omega }}\). \(\square \)

4 Proof of Theorem 2.5: Perron’s Method and Barrier Construction

Throughout this section, we assume that the assumptions of Theorem 2.5 hold, i.e., that (D1), (F1)–(F3), (BC1), (O1)–(O3) and (I1) hold. Our proof of existence follows the machinery of Perron’s method. In particular, we have the following result.

Proposition 4.1

Assume that, for each \(i \in \{1, \dots , m\}\) and \({\hat{x}} \in {\bar{\Omega }}\) there exist families of continuous viscosity sub- and supersolutions, \(\{u^{i,{\hat{x}},\epsilon }\}_{\epsilon >0}\) and \(\{v^{i,{\hat{x}},\epsilon }\}_{\epsilon >0}\), to (BVP) such that

$$\begin{aligned} \sup _{\epsilon } u^{i,{\hat{x}},\epsilon }_i(0,{\hat{x}}) = g_i({\hat{x}}) = \inf _{\epsilon } v^{i,{\hat{x}},\epsilon }_i(0,{\hat{x}}). \end{aligned}$$

Then,

$$\begin{aligned} w(t,x) =\sup \{u(t,x): u \ \hbox {is a subsolution to (BVP)} \} \end{aligned}$$

is a viscosity solution to (BVP).

With this result given, what remains is to construct appropriate barriers, i.e., families of viscosity sub- and supersolutions taking on the correct initial data. More specifically, we prove the following.

Proposition 4.2

For any \({\hat{x}} \in {\bar{\Omega }}\) and \(\epsilon > 0\), there exist non-negative constants \(A, B, C\) and \(\kappa \) such that \(U^{{\hat{x}}, \epsilon } :=(U_1^{{\hat{x}}, \epsilon }, \ldots , U^{{\hat{x}}, \epsilon }_m)\) and \(V^{i, {\hat{x}}, \epsilon }:=(V_1^{i, {\hat{x}}, \epsilon }, \dots , V^{i, {\hat{x}}, \epsilon }_m)\),

$$\begin{aligned} U^{{\hat{x}},\epsilon }_j(t,x)&=g_j({\hat{x}}) - A(\varphi (x) - \varphi ({\hat{x}})) - B \exp (\kappa \varphi (x)) |x-{\hat{x}}|^2- \epsilon - Ct, \\ V^{i,{\hat{x}},\epsilon }_j(t,x)&= g_i({\hat{x}}) + A(\varphi (x) - \varphi ({\hat{x}})) +B \exp (\kappa \varphi (x)) |x-{\hat{x}}|^2 + \epsilon \\&\quad + Ct + c_{ij} (t,x), \end{aligned}$$

where \(\varphi (x)\) is given in Lemma 3.1 and \(i\in \{1,\dots , m\}\), are viscosity sub- and supersolutions to (BVP), respectively. Moreover,

$$\begin{aligned} \sup _{\epsilon } U^{{\hat{x}},\epsilon }_i(0,{\hat{x}}) = g_i({\hat{x}}) = \inf _{\epsilon } V^{i,{\hat{x}},\epsilon }_i(0,{\hat{x}}). \end{aligned}$$

Combining Propositions 4.1 and 4.2 above proves the existence part of Theorem 2.5. Uniqueness follows immediately from Theorem 2.4 and the definition of sub- and supersolutions. What needs to be done is thus to prove the above propositions. Being the non-standard one, we start with Proposition 4.2.

Proof

(Proof of Proposition 4.2) We prove only the supersolution property (the subsolution property is proven analogously but without the need to deal with the obstacle as in (4.2) below). We need to show that \((V_1^{i, {\hat{x}}, \epsilon }, \dots , V^{i, {\hat{x}}, \epsilon }_m)\) satisfies conditions (i)–(iii) of Definition 2.3. To ease notation, we suppress the superindices \(i, {\hat{x}}, \epsilon \) and write \(V_j\) in place of \(V^{i,{\hat{x}},\epsilon }_j\).

We begin with condition (iii). For any \(\epsilon >0\) and A given, we can ensure \(V_j(0,x) \ge g_j(x), \forall x \in {\bar{\Omega }}\) and all \(j \in \{1,\dots ,m\}\) by choosing B sufficiently large. Indeed, this is possible since \(\varphi , g\), and \(c_{ij}\) are continuous and

$$\begin{aligned} V_j(0,{\hat{x}}) = g_i({\hat{x}}) + \epsilon + c_{ij}(0,{\hat{x}}) \ge g_j({\hat{x}}), \end{aligned}$$

where the last inequality is due to Assumption (I1). Moreover, we can let \(\epsilon \rightarrow 0\) (by letting \(B \rightarrow \infty \) if necessary) and since \(c_{ii}(t,x)\equiv 0\) we thus have that

$$\begin{aligned} \inf _{\epsilon } V_i(0, {\hat{x}}) = g_i({\hat{x}}). \end{aligned}$$
(4.1)

Note that the choice of \(B=B(\epsilon ,A)\) can be made with \(C= \kappa = 0\); later increasing \(\kappa \) and/or C will only increase V further while keeping (4.1).

We next turn to condition (i) of Definition 2.3. Starting with the obstacle, we have

$$\begin{aligned}&V_j(t,x) - {\mathcal {M}}_j V(t,x) = V_j(t,x) - \max _{j \ne k} \{V_k (t,x) -c_{jk}(t,x) \} \nonumber \\ =&\,c_{ij}(t,x) - \max _{j \ne k} \{c_{ik}(t,x) - c_{jk}(t,x)\} = c_{ij}(t,x) - c_{i{\hat{k}}}(t,x) + c_{j{\hat{k}}}(t,x) \ge 0 \end{aligned}$$
(4.2)

where the last inequality is by assumption (O3). Concerning the second part we observe that, for some C large enough, it holds that

$$\begin{aligned} \partial _t V_j(t,x)+ F_j(t,x,V_j(t,x), D V_j(t,x), D^2 V_j(t,x)) \ge 0 \end{aligned}$$
(4.3)

for \((a,p,X) \in \mathcal {P}^{2,-}_{ \Omega }V_j(t,x)\), for all \((t,x) \in (0,T)\times \Omega \) and all \(j \in \{1,\dots ,m\}\). Indeed, this follows after noticing that \(V_j\) smooth and \(F_j\) continuous give a lower bound for \(F_j(t,x,V_j(t,x), D V_j(t,x), D^2 V_j(t,x))\) on the compact region \([0,T]\times {\bar{\Omega }}\). This lower bound can be made independent of C (and \(\epsilon \)) since \(F_j\) is non-decreasing in its third argument and \(V_j\) is non-decreasing in C (and \(\epsilon \)); it may however depend on A, B and the parameter \(\kappa \) to be chosen later. Since \(\partial _t V_j(t,x) = C + \partial _t c_{ij}(t,x)\) and the latter term is bounded on \([0,T]\times {\bar{\Omega }}\), we conclude that inequality (4.3) holds for large enough \(C=C(A,B,\kappa )\).

What remains to verify is condition (ii) of Definition 2.3. Note that, for \(x \in \partial \Omega \), satisfying (4.3) in the classical sense does not ensure that the corresponding inequality holds in the viscosity sense. We therefore instead focus on the Neumann condition and intend to prove that

$$\begin{aligned} \langle n(x), D V_j(t,x) \rangle + f_j(t, x,V_j(t,x)) \ge 0 \end{aligned}$$

for \((t,x) \in (0,T)\times \partial \Omega \) and for all \(j \in \{1,\dots ,m\}\). Recall that we have chosen B s.t. \(V_j(0,x) \ge g_j(x)\) in \({\bar{\Omega }}\). Since \(\partial _t c_{ij}(t,x)\) is bounded from below, we can also ensure that \( V_j(t,x) \ge g_j(x) \) holds for all \((t,x) \in [0,T]\times {\bar{\Omega }}\) by increasing C if necessary. By the monotonicity property of f (non-decreasing in r) we then have

$$\begin{aligned} f_j(t,x,V_j(t,x)) \ge f_j(t,x,g_j(x)). \end{aligned}$$

Let \({\tilde{A}}\) be such that

$$\begin{aligned} \min _{j \in \{1,\dots ,m\}} \inf _{(t,x) \in [0,T]\times \partial \Omega }f_j(t,x,g_j(x))> -{\tilde{A}}. \end{aligned}$$

The Neumann boundary condition thus follows if we can show that

$$\begin{aligned} \langle n(x), D V_j(t,x) \rangle \ge {\tilde{A}} \end{aligned}$$
(4.4)

for all \((t,x)\in (0,T)\times \partial \Omega \). Differentiating \(V_j(t,x)\) gives

$$\begin{aligned} D V_j(t,x) = A D \varphi (x) + B \exp (\kappa \varphi (x) ) \left( 2 (x-{\hat{x}}) + \kappa |x-{\hat{x}}|^2 D \varphi (x) \right) + D c_{ij}(t,x) \end{aligned}$$

and thus

$$\begin{aligned} \langle n(x), D V_j(t,x) \rangle =&\,\langle n(x), A D \varphi (x) \rangle + \langle n(x), D c_{ij}(t,x)\rangle \nonumber \\&+ B \exp (\kappa \varphi (x)) \langle n(x), 2 ( x-{\hat{x}}) + \kappa |x-{\hat{x}}|^2 D \varphi (x)\rangle \end{aligned}$$
(4.5)

where \(|\langle n(x), D c_{ij}\rangle |\) is bounded since \(c_{ij}\) is smooth. The exterior ball condition implies that

$$\begin{aligned} \langle n( x) , x -{\hat{x}}\rangle \ge - \frac{1}{2r}|{\hat{x}} - x|^2 \quad \hbox {for any} \ {\hat{x}} \in {\bar{\Omega }}\ \hbox {and}\ x \in \partial \Omega , \end{aligned}$$

where r depends on \(\Omega \). From this we see that, for \(\kappa \) large enough and depending only on \(\Omega \), the last term in (4.5) is non-negative since \(\langle n(x), D\varphi (x) \rangle \ge 1\) by Lemma 3.1.
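For the reader's convenience, the bound behind this claim reads as follows: by the exterior ball inequality above and \(\langle n(x), D\varphi (x) \rangle \ge 1\),

$$\begin{aligned} B \exp (\kappa \varphi (x)) \langle n(x), 2 ( x-{\hat{x}}) + \kappa |x-{\hat{x}}|^2 D \varphi (x)\rangle \ge B \exp (\kappa \varphi (x)) \left( \kappa - \frac{1}{r}\right) |x-{\hat{x}}|^2 \ge 0 \end{aligned}$$

as soon as \(\kappa \ge 1/r\).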

Using \(\langle n(x), D\varphi (x) \rangle \ge 1\) once more, it now only remains to pick A such that \(A - \max _{i,j \in \{1, \dots , m\}}\sup _{[0,T]\times {\bar{\Omega }}}|D c_{ij}(t,x)| \ge \tilde{A}\) in order to fulfill (4.4). Observe that this choice of A depends only on the data of the problem. With \(\kappa \) and A now fixed we conclude that for any \(\epsilon > 0\) we can choose \(B = B(\epsilon , A)\) and then \(C = C(A,B,\kappa )\) such that the above calculations hold.

Last, we note that the constructed barrier now satisfies the boundary condition in the classical sense. This suffices as Proposition 7.2 of [7] and the specific form of the function \(B_i\) (cf. (7.4) of [7]) then gives that the boundary condition also holds in the viscosity sense. \(\square \)

Proof

(Proof of Proposition 4.1) Note first that w is well defined and bounded by the assumption of existence of sub- and supersolutions. Let \(w_*:=(w_{*, 1}, \ldots , w_{*,m})\) and \(w^*:=(w^*_1, \ldots , w^*_m)\) denote the lower- and upper semi-continuous envelopes of \(w=(w_1, \dots , w_m)\), respectively, i.e., the largest LSC function that is dominated by w and the smallest USC function that dominates w, respectively. By definition and Theorem 2.4 we have, for any \(i, {\hat{x}}\) and \(\epsilon >0\) fixed,

$$\begin{aligned} w_*\le w^*, \qquad u^{i,{\hat{x}}, \epsilon } \le w^*\quad \hbox {and}\quad w_*\le v^{i,{\hat{x}}, \epsilon }. \end{aligned}$$

The essence of Perron’s method is to now prove that \(w_*\) is a supersolution and \(w^*\) a subsolution to (BVP), implying that also \(w^*\le w_*\) by Theorem 2.4 and thus that \(w = w_*=w^*\) is a solution to (BVP).

To establish the subsolution property, we need to show

$$\begin{aligned} (i)&\quad \ \min \left\{ a+ {F_i}(t,x,w^*_i(t,x), p,X), w^*_{i}\left( t,x\right) - {{\mathcal {M}}}_i w^*(t,x) \right\} \le 0, \\&\,\ (t,x) \in (0,T)\times \Omega , (a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }w^*_i(t,x), \\ (ii)&\quad \ \min \left\{ a + {F_i}(t,x,w^*_i(t,x), p, X ), w^*_{i}(t,x) - {{\mathcal {M}}}_i w^*(t,x) \right\} \\&\quad \wedge \left( \langle n(x), p \rangle + f_i(t,x,w_i^*(t,x))\right) \le 0 , \\&\quad \hbox {for}\ (t,x) \in (0,T)\times \partial \Omega , (a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\bar{\Omega }}w^*_i(t,x), \\ (iii)&\quad \ w^*_i(0,x) \le g_i(x), \quad x\in {\bar{\Omega }}. \end{aligned}$$

Statement (iii) follows immediately from the assumptions and Theorem 2.4. Indeed, \(w_i \le v^{i,{\hat{x}}, \epsilon }_i\) for any \(i, {\hat{x}},\) and \(\epsilon \) and thus \(w^*_i(0,{\hat{x}}) \le \inf _{\epsilon } (v_i^{i,{\hat{x}}, \epsilon })^*(0,{\hat{x}}) =g_i({\hat{x}})\) (where the last equality is by assumption).

Concerning statement (i), we first note that by the definition of \(w^*_{i}\) there exists a sequence

$$\begin{aligned} (t_n, x_n, u_i^n(t_n,x_n)) \rightarrow (t,x,w_i^*(t,x)) \end{aligned}$$
(4.6)

with each \(u^n\) being a subsolution of (BVP). Assume now that \((a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }w^*_i(t,x)\) for \((t,x) \in (0,T)\times \Omega \). Then, by the existence of the sequence (4.6) and the fact that \(w_i^*\) is USC we get from Proposition 4.3 of [7] that there exists a sequence

$$\begin{aligned}&(t_n,x_n, u^n_i, a_n, p_n, X_n) \rightarrow (t,x,w^*(t,x), a,p,X) \quad \hbox {with}\\&\quad (a_n,p_n,X_n) \in {\bar{\mathcal {P}}}^{2,+}_{\Omega }u^n_i(t_n,x_n). \end{aligned}$$

Since \((t,x) \in (0,T)\times \Omega \) we will have \((t_n, x_n) \in (0,T)\times \Omega \) as well for n large enough and by the subsolution property of \(u^n\) and the fact that \(F_i\) is continuous we get

$$\begin{aligned} a + F_i(t,x,w_i^*, p ,X) = \lim _{n \rightarrow \infty } \left( a_n + F_i(t_n,x_n,u^n_i(t_n,x_n),p_n,X_n) \right) \le 0 \end{aligned}$$

and thus \(w^*\) satisfies (i) above. If \((a,p,X) \in {\bar{\mathcal {P}}}^{2,+}_{\bar{\Omega }}w^*_i(t,x)\) for \((t,x) \in (0,T)\times \partial \Omega \), analogous reasoning gives the existence of a sequence such that

$$\begin{aligned} \left( a_n + F_i(t_n,x_n,u^n_i(t_n,x_n),p_n,X_n) \right) \wedge B_i(t_n,x_n, u_i^n(t_n,x_n), p_n) \le 0 \end{aligned}$$

for all n. Taking the limit as \(n \rightarrow \infty \), now using also that \(f_i\) is continuous, we conclude that also (ii) holds. In particular, \(w^*\) is then itself a subsolution, so \(w^*\le w\) by the definition of w and thus \(w^*=w\).

We now prove that \(w_*\) is a supersolution following a classical argument by contradiction. More precisely, we show that if \(w_*\) is not a supersolution, then there exists a subsolution strictly greater than \(w ^*\), contradicting the very definition of w.

Starting with the initial condition (iii) we have \(w_i (0,{\hat{x}}) \ge u_i^{i,{\hat{x}}, \epsilon } (0,{\hat{x}})\) for all \(\epsilon >0\) and thus \(w_{*,i}(0,{\hat{x}}) \ge \sup _\epsilon (u_i^{i,{\hat{x}}, \epsilon })_*(0,{\hat{x}})= \sup _\epsilon u_i^{i,{\hat{x}}, \epsilon }(0,{\hat{x}}) =g_i({\hat{x}})\). Assume now that \(w_*\) is not a supersolution by violating Definition 2.3 (i), i.e., that for some \(({\hat{t}},{\hat{x}}) \in (0,T)\times \Omega \) and \(i \in \{1,\dots , m\}\) we have

$$\begin{aligned}&\min \{a+ F_i({\hat{t}}, {\hat{x}}, w_{*,i}({\hat{t}}, {\hat{x}}), p,X), w_{*,i}({\hat{t}},{\hat{x}}) - {{\mathcal {M}}}_iw_{*}({\hat{t}},{\hat{x}})\}<0 \end{aligned}$$

for some \((a,p,X) \in {\bar{\mathcal {P}}}^{2,-}_{\Omega }w_{*,i}({\hat{t}}, {\hat{x}})\). For \(\delta >0\) and such \((a,p,X)\) fixed, construct the function

$$\begin{aligned} {\tilde{w}}(t,x) :=\,&w_{*,i}({\hat{t}}, {\hat{x}}) + \delta + a (t-{\hat{t}}) + \langle p,(x-{\hat{x}})\rangle \\&+ \frac{1}{2} \langle X(x-{\hat{x}}),(x-{\hat{x}})\rangle - \beta (|t-{\hat{t}}| + |x-{\hat{x}}|^2). \end{aligned}$$

Since \({{\mathcal {M}}}_i w_{*} \le {{\mathcal {M}}}_i w^*\) it follows from continuity that the function \({\tilde{w}}(t,x)\) is a viscosity subsolution to

$$\begin{aligned} \min \left\{ \partial _{t} {\tilde{w}}(t,x) + {F}_i(t,x,{\tilde{w}}(t,x), D{\tilde{w}}, D^{2} {\tilde{w}}), {\tilde{w}}(t,x)- {{\mathcal {M}}}_i w^*(t,x) \right\} =0 \end{aligned}$$
(4.7)

for \((t,x) \in Q_R := \{(t,x) : |t-{\hat{t}}| + |x-{\hat{x}}|^2 <R\}\) and \(\delta \), \(\beta \) and R sufficiently small. (If \(t \ne {\hat{t}}\), \({\tilde{w}}\) satisfies (4.7) in the classical sense. If \(t={\hat{t}}\), \({\bar{\mathcal {P}}}^{2,+}_{\Omega }{\tilde{w}}(t,x) = \{(a+\beta \eta , p, X) : \eta \in [-1,1]\}\) and the contribution from \(\beta \eta \) is harmless if \(\beta \) is small enough.) By definition of \({\bar{\mathcal {P}}}^{2,-}_{\Omega }w_{*,i}({\hat{t}}, {\hat{x}})\) we have

$$\begin{aligned} w_i^*(t,x)&\ge w_{*,i}(t,x) \ge w_{*,i}({\hat{t}}, {\hat{x}}) + a (t-{\hat{t}}) + \langle p, x-{\hat{x}}\rangle \\&\quad + \frac{1}{2} \langle X (x-{\hat{x}}), x-{\hat{x}} \rangle + o (|t-{\hat{t}}| + |x-{\hat{x}}|^2) \\&= {\tilde{w}}(t,x) - \delta + \beta (|t-{\hat{t}}| + |x-{\hat{x}}|^2) + o(|t-{\hat{t}}| + |x-{\hat{x}}|^2) \end{aligned}$$

and thus, if we let \(\delta = \frac{\beta R}{4}\) and consider \((t,x) \in Q_R \setminus Q_{\frac{R}{2}}\), we get

$$\begin{aligned} w_i^*(t,x)&\ge {\tilde{w}}(t,x) - \frac{\beta R}{4} + \beta (|t-{\hat{t}}| + |x-{\hat{x}}|^2) + o(|t-{\hat{t}}| + |x-{\hat{x}}|^2) \nonumber \\&\ge {\tilde{w}}(t,x) - \frac{\beta R}{4} + \frac{\beta R}{2} + o (R). \end{aligned}$$
(4.8)

Now let \({\check{u}} = \{{\check{u}}_1, \dots , {\check{u}}_m\}\) where

$$\begin{aligned}&{\check{u}}_i(t,x) ={\left\{ \begin{array}{ll} \max \{w^{*}_i(t,x), {\tilde{w}}(t,x)\} & \hbox { if}\ (t,x)\in Q_R\\ w^*_i(t,x) &\hbox {otherwise} \end{array}\right. } \qquad \hbox {and}\\&{\check{u}}_j (t,x)= w^*_j(t,x) ~\hbox {if}~ j\ne i. \end{aligned}$$

Note that by (4.8), there is no jump in \({\check{u}}_i\) at \(\partial Q_R\) if R is small enough. Since \({\check{u}}_i \ge w^*_i\) we have

$$\begin{aligned}{\check{u}}_j - {{\mathcal {M}}}_j {\check{u}} \le w^*_j - {{\mathcal {M}}}_j w^*\end{aligned}$$

for any \(j \ne i\). Recalling that \(w^*\) is a subsolution, the above shows that \({\check{u}}\) is a subsolution to (BVP) as well (after decreasing R even further if necessary to ensure \(Q_R \subset (0,T)\times \Omega \)). We now note that, since \((w^*_i)_*= w_{*,i}\), there exists by definition a sequence \((t_n, x_n, w^*_i(t_n, x_n)) \rightarrow ({\hat{t}}, {\hat{x}}, w_{*,i}({\hat{t}}, {\hat{x}}))\). If we follow this sequence we thus find that \({\check{u}}_i (t_n, x_n) = {\tilde{w}} (t_n, x_n) > w^*_i(t_n,x_n)\) for some point \((t_n,x_n)\) sufficiently close to \(({\hat{t}}, {\hat{x}})\) (since \({\tilde{w}}(t_n,x_n) \rightarrow w_{*,i}({\hat{t}}, {\hat{x}}) + \delta > w_{*,i}({\hat{t}}, {\hat{x}})\) while \(w^*_i(t_n,x_n) \rightarrow w_{*,i}({\hat{t}}, {\hat{x}})\)). Thus, we have constructed a subsolution which is strictly greater than w at some point, a contradiction.

What remains is to consider if \(w_*\) fails to be a supersolution by violating condition (ii), i.e., if there exists \(({\hat{t}}, {\hat{x}}) \in (0,T)\times \partial \Omega \) and \((a,p,X) \in {\bar{\mathcal {P}}}^{2,-}_{\bar{\Omega }}w_{*,i}({\hat{t}}, {\hat{x}})\) s.t.

$$\begin{aligned}&\min \{a+ F_i({\hat{t}}, {\hat{x}}, w_{*,i}({\hat{t}}, {\hat{x}}), p,X), w_{*,i}({\hat{t}},{\hat{x}}) \nonumber \\&\quad - {{\mathcal {M}}}_iw_{*}({\hat{t}},{\hat{x}})\} \vee B_i({\hat{t}}, {\hat{x}}, w_{*,i}({\hat{t}}, {\hat{x}}),p)<0. \end{aligned}$$
(4.9)

However, if (4.9) holds, continuity of \(f_i\) and the smoothness of \(\partial \Omega \) give that \(B_i(t, x, {\tilde{w}}(t,x), p) \le 0\) (in the classical sense and thus in the viscosity sense by Proposition 7.2 of [7]) for \((t,x) \in (0,T) \times \partial \Omega \) sufficiently close to \(({\hat{t}}, {\hat{x}})\). We then conclude as above that there exist \(\delta \), \(\beta \), and R s.t. \({\tilde{w}}\) satisfies the subsolution property (4.7) for \((t,x) \in (0,T)\times \Omega \) sufficiently close to \(({\hat{t}}, {\hat{x}})\) and thus that \({\check{u}}\) is a subsolution. Again, we have constructed a subsolution which is strictly greater than w at some point, contradicting the very definition of w. The proof is complete. \(\square \)