1 Introduction

1.1 Background and motivation

We are interested in a deterministic game-theoretic interpretation of curvature flow equations with dynamic boundary conditions. Throughout this paper, we assume that

  1. (A1)

    \(\Omega \) is a domain in \({{\mathbb {R}}}^2\) with boundary of uniformly \(C^2\) class.

We will later strengthen the regularity of \(\Omega \) to \(C^{2, 1}\) when necessary.

We consider

$$\begin{aligned} \mathrm {(CF)}\qquad \left\{ \begin{array}{lll} u_t-|\nabla u|\, {\text {div}}\left( {\nabla u\over |\nabla u|}\right) =0 &{}\quad \text {in } \Omega \times (0, \infty ), &{}\quad (1.1)\\ u_t+H(x, \nabla u)=0 &{}\quad \text {on } \partial \Omega \times (0, \infty ), &{}\quad (1.2)\\ u(\cdot , 0)=u_0 &{}\quad \text {in } {\overline{\Omega }}, &{}\quad (1.3) \end{array}\right. \end{aligned}$$

where \(u_0: {\overline{\Omega }}\rightarrow {{\mathbb {R}}}\) is assumed to be bounded and Lipschitz, and \(H: {\overline{\Omega }}\times {{\mathbb {R}}}^2\rightarrow {{\mathbb {R}}}\) is a given continuous Hamiltonian describing the dynamics on the boundary. A more precise form of H will be given in (1.5) later.

The equation (1.1) is a level set formulation of the motion of a planar curve by its curvature. In fact, when u is smooth, letting, for any \(t\ge 0\),

$$\begin{aligned} \Gamma _t=\{ x\in {{\mathbb {R}}}^2: u(x, t)=0\}, \end{aligned}$$

we see that on \(\Gamma _t\), \(u_t/|\nabla u|\) and \({\text {div}}(\nabla u/|\nabla u|)\) respectively denote the normal velocity and the curvature of the curve provided that \(\nabla u\ne 0\). In general, solutions of (1.1) may not belong to the \(C^2\) class, and one needs to apply the viscosity solution theory in order to overcome the singularity at \(\nabla u=0\) as well as the lack of regularity; see [9, 15, 20] for a detailed introduction to the viscosity approach. In Sect. 2.1 we briefly review the definition of viscosity solutions and well-posedness results for (CF).
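For instance (a standard example, not taken from this paper), the shrinking circle can be checked directly against the level set equation written in the normalized form \(u_t=|\nabla u|\,{\text {div}}(\nabla u/|\nabla u|)\), which is consistent with the operator F in (1.18):

```latex
% Shrinking circle: take
u(x,t) = |x|^2 - \bigl(R_0^2 - 2t\bigr), \qquad
\Gamma_t = \bigl\{\, x : |x| = \sqrt{R_0^2 - 2t}\,\bigr\}, \quad 0 \le t < R_0^2/2.
% Check: u_t = 2, \nabla u = 2x, |\nabla u| = 2|x|, and
% \operatorname{div}(x/|x|) = 1/|x| in the plane, so
% |\nabla u|\,\operatorname{div}(\nabla u / |\nabla u|) = 2|x| \cdot (1/|x|) = 2 = u_t.
% The radius r(t) = \sqrt{R_0^2 - 2t} satisfies \dot r = -1/r, i.e. the circle
% moves inward with normal speed equal to its curvature.
```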

Parabolic equations with dynamic boundary conditions have been studied in various contexts; see for instance [10, 12, 14, 16,17,18, 43, 44]. A viscosity solution approach is proposed in [4, 5] to handle dynamic boundary problems for a general class of fully nonlinear parabolic equations without singularity. Motivated by applications in superconductivity and interface evolution, [13] studies a class of dynamic boundary problems for Hamilton–Jacobi equations. We also refer to [1, 6] for results on the asymptotic behavior of Hamilton–Jacobi equations with dynamic boundary conditions. In addition, uniqueness and existence of viscosity solutions to degenerate dynamic boundary problems have recently been considered in [25].

As for singular parabolic problems like (CF), much less is known about well-posedness, except for the case when \(\Omega \) is a half space and the dynamic boundary condition is linear [22].

In this paper, we aim to construct a family of discrete deterministic two-person games whose value functions approximate the viscosity solution of (CF). We restrict ourselves to the two-dimensional case only for simplicity of our game rules; it is in fact possible to generalize our results to higher dimensions.

Such a deterministic game-based approach was proposed by Kohn and Serfaty in [28] for the mean curvature flow equation and in [29] for general parabolic and elliptic equations. We also refer to the stochastic tug-of-war games studied by Peres et al. [39, 40] for normalized p-Laplace equations with \(1<p\le \infty \); see related results on the so-called asymptotic mean value properties in [30, 36, 37]. The game approximations turn out to be useful in understanding various properties of the associated nonlinear PDEs, as shown in [3, 33,34,35, 38, 41] etc.

The games mentioned above are considered either in the whole space or in a domain with Dirichlet boundary conditions. Concerning Neumann type boundary problems, deterministic game interpretations for curvature flow equations and for more general parabolic equations are studied in [24] and in [11] respectively; see also [2, 8] for stochastic discrete games associated with the infinity Laplacian. The mean value property has recently been studied for Robin boundary problems in [31, 32].

In our recent work [26], we establish a deterministic game interpretation of more general fully nonlinear parabolic equations with dynamic boundary conditions and discuss applications to related problems on asymptotic behavior. However, the results and method in [26] do not directly apply to singular parabolic equations such as (1.1). In the present work, we thus attempt to resolve the singularity in (1.1) and further develop the game approach to dynamic boundary problems.

1.2 The PDE setting

Let us give a more precise description of our basic PDE setting, especially the boundary condition (1.2). Assume that A is a compact metric space. Let \(\nu (x)\) denote the unit outward normal to \(\partial \Omega \) at \(x\in \partial \Omega \). For later use, we take for every \(\lambda \ge 0\)

$$\begin{aligned} \Omega _\lambda =\{x\in \Omega : \mathrm{dist\,}(x, \partial \Omega )> \lambda \}. \end{aligned}$$
(1.4)

The Hamiltonian H in (1.2) is in the form of

$$\begin{aligned} H(x, p)=\max _{a\in A}\left\langle p, \nu (x) - f(x, a)\right\rangle , \end{aligned}$$
(1.5)

where \(\left\langle \cdot , \cdot \right\rangle \) denotes the inner product in \({{\mathbb {R}}}^2\) and \(f: {\overline{\Omega }}\times A\rightarrow {{\mathbb {R}}}^2\) is a bounded Lipschitz function satisfying several assumptions to be introduced later.

Inspired by [28], we aim to give a discrete game interpretation of this problem. Since the game dynamics in the interior of the domain has already been clarified in [28], we place our emphasis on how to generate the boundary condition (1.2). Games for the classical Neumann condition are studied in [24] for the curvature flow equation and in [11] for more general parabolic and elliptic equations.

We extend the definition of \(\nu \) into the interior of the domain near the boundary. More precisely, since \(\Omega \) is uniformly of class \(C^2\), the signed distance function

$$\begin{aligned} {\text {sd}}(x, \partial \Omega )=\mathrm{dist\,}(x , {{\mathbb {R}}}^2\setminus \Omega )-\mathrm{dist\,}(x, \Omega ) \ \quad (x\in {{\mathbb {R}}}^2) \end{aligned}$$

is known to be of class \(C^2\) near \(\partial \Omega \). We thus may let \(\nu =-\nabla {\text {sd}}(\cdot , \partial \Omega )\) near \(\partial \Omega \) and extend it to a Lipschitz function in \({\overline{\Omega }}\).
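For a concrete domain this extension can be checked numerically. The sketch below uses a toy example of our own choosing, the unit disk, where \({\text {sd}}(x, \partial \Omega )=1-|x|\) near the boundary, and recovers \(\nu =-\nabla {\text {sd}}=x/|x|\) by finite differences; the function names are ours.

```python
import numpy as np

def sd(x):
    """Signed distance to the unit circle, positive inside B_1 (toy domain)."""
    return 1.0 - np.linalg.norm(x)

def nu(x, h=1e-6):
    """Extended outward normal nu = -grad sd(., bdry Omega), by central differences."""
    e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    grad = np.array([(sd(x + h * e) - sd(x - h * e)) / (2.0 * h) for e in (e1, e2)])
    return -grad
```

For x = (0.3, 0.4) this returns approximately (0.6, 0.8) = x/|x|, the outward normal of the circle through x, as expected.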

We assume throughout this work that

  1. (A2)

    f is bounded and continuous in \({\overline{\Omega }}\times A\) with

    $$\begin{aligned} \sup _{(x, a)\in \partial \Omega \times A} |f(x, a)|< 1, \end{aligned}$$
    (1.6)

    and there exists \(L_f>0\) such that

    $$\begin{aligned} |f(x, a)-f(x', a)|\le L_f|x-x'| \end{aligned}$$

    for all \(x, x'\in {\overline{\Omega }}\) and \(a\in A\).

The assumption (A2) implies that there exists \(\rho >0\) such that

$$\begin{aligned} H\left( x, p+s\nu (x)\right) -H(x, p)\ge \rho s \quad \text {for all }s>0\hbox { and }x\in \partial \Omega . \end{aligned}$$
(1.7)

In fact, we may choose

$$\begin{aligned} \rho =1-\sup _{(x, a)\in \partial \Omega \times A} |f(x, a)|>0 \end{aligned}$$
(1.8)

to obtain (1.7). This amounts to saying that in (1.2) the classical Neumann part \(\left\langle \nabla u, \nu \right\rangle \) dominates the whole boundary condition uniformly for all \(a\in A\). These assumptions are not only important for our game interpretation but also for uniqueness of viscosity solutions; we refer to [4, 5] for a comparison principle for nonsingular equations that essentially requires this domination.

A typical example of dynamic boundary conditions in our mind is

$$\begin{aligned} u_t+\left\langle \nabla u, \nu \right\rangle +K|\nabla u|=0 \quad \hbox { on}\ \partial \Omega \times (0, \infty ) \end{aligned}$$
(1.9)

with any \(0\le K<1\), for which we take in (CF) \(A={\overline{B}}_1\) and \(f(x, a)= Ka\). Here \(B_r\) denotes the open disk centered at the origin with radius \(r>0\). In the sequel we also denote by \(B_r(x)\) the open disk centered at x with radius r.
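For this example the maximum in (1.5) is available in closed form: \(\max _{|a|\le 1}\left\langle p, \nu -Ka\right\rangle =\left\langle p, \nu \right\rangle +K|p|\), attained at \(a=-p/|p|\). A brief numerical confirmation (the function names and the sampling of a on the unit circle are our own illustration):

```python
import numpy as np

def H_sampled(p, nu, K, n=4000):
    """Approximate max over |a| <= 1 of <p, nu - K a> by sampling a on the
    unit circle (the maximizer a = -p/|p| lies on the boundary of the disk)."""
    th = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    aa = np.stack([np.cos(th), np.sin(th)], axis=1)  # candidate controls a
    return np.max(p @ nu - K * (aa @ p))

def H_closed_form(p, nu, K):
    """<p, nu> + K |p|, i.e. the boundary condition (1.9)."""
    return p @ nu + K * np.linalg.norm(p)
```

The two values agree up to the grid resolution of the sampling.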

The well-posedness for (CF) is in general a challenging problem due to the singularity at \(\nabla u=0\). A comparison theorem is established in [22] in the special case when \(\Omega \) is a half space and the dynamic boundary condition (1.2) is linear; see Theorem 2.2 for the precise statement, which we use with \(n=2\). Although our game setting below is quite general, outside this special case we need to assume the comparison principle in order to conclude the convergence of game values.

1.3 Main result

Let us below describe our two-person game for the dynamic boundary problem (CF).

We first fix a step size \(\varepsilon >0\). Since \(\Omega \) is a \(C^2\) domain, \(\Omega _\lambda \) given as in (1.4) is also of class \(C^2\) for \(\lambda >0\) small. We set, for any \(x\in {\overline{\Omega }}\),

$$\begin{aligned} \eta _\varepsilon (x)=\min \left\{ 1, {\mathrm{dist\,}(x, \partial \Omega )\over \sqrt{2}\varepsilon }\right\} \end{aligned}$$
(1.10)

and take

$$\begin{aligned} \zeta _\varepsilon =1-\eta _\varepsilon ^2 \quad \quad \text {in }{\overline{\Omega }}. \end{aligned}$$
(1.11)

The game starts from a fixed state \(y_0=x\in {\overline{\Omega }}\) with duration \(t\ge 0\). The total number of steps is \(N=[t/\varepsilon ^2]\). At the k-th step (\(k=1, 2, \ldots , N\)),

  • Player I chooses a unit vector \(v_k\in {{\mathbb {R}}}^2\) and \(a_k\in A\);

  • Player II sees the choice of Player I and then picks \(b_k=\pm 1\);

  • Once the choices of both players are determined, the game position is moved from \(y_{k-1}\) to

    $$\begin{aligned} y_k=y_{k-1}+\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1}) b_kv_k-\varepsilon ^2 \zeta _\varepsilon (y_{k-1})\left( \nu (y_{k-1})-f(y_{k-1}, a_{k})\right) , \end{aligned}$$

    where \(0\le \eta _\varepsilon ,\ \zeta _\varepsilon \le 1\) are the functions given in (1.10) and (1.11).
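In a concrete setting the update rule reads as follows. The sketch below takes \(\Omega \) to be the upper half-plane \(\{x_2>0\}\), so that \(\mathrm{dist\,}(x, \partial \Omega )=x_2\) and \(\nu =(0, -1)\), and uses \(f(x, a)=Ka\) as in the example (1.9); these concrete choices are ours, for illustration only.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
NU = np.array([0.0, -1.0])  # outward normal of the toy half-plane {x2 > 0}

def eta(y, eps):
    """eta_eps of (1.10): 0 on the boundary, 1 at distance >= sqrt(2) eps."""
    return min(1.0, y[1] / (SQRT2 * eps))

def zeta(y, eps):
    """zeta_eps of (1.11)."""
    return 1.0 - eta(y, eps) ** 2

def step(y, v, b, a, eps, K=0.5):
    """One game move y_{k-1} -> y_k; v a unit vector, b = +-1, |a| <= 1."""
    return (y + SQRT2 * eps * eta(y, eps) * b * v
            - eps ** 2 * zeta(y, eps) * (NU - K * np.asarray(a)))
```

From a boundary point the random-walk part is switched off (\(\eta _\varepsilon =0\)) and the position is pushed inward by \(\varepsilon ^2(\nu -f)\), which is the mechanism behind the invariance of \({\overline{\Omega }}\).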

Owing to the inclusion of the cutoff function \(\eta _\varepsilon \), we can easily verify that

$$\begin{aligned} x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) \in {\overline{\Omega }}\end{aligned}$$

for all \(x\in {\overline{\Omega }}\), \(a\in A\), \(b= \pm 1\), \(|v|=1\) and for \(\varepsilon >0\) sufficiently small. Indeed, by (A2), we get

$$\begin{aligned} \begin{aligned}&\mathrm {dist\,}\left( x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) , \partial \Omega \right) \\ {}&\ge \mathrm {dist\,}\left( x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv, \partial \Omega \right) +\varepsilon ^2\zeta _\varepsilon (x)\left\langle \nu (x),\nu (x)-f(x, a)\right\rangle +o(\varepsilon ^2)\\ {}&> \mathrm {dist\,}(x, \partial \Omega )-\sqrt{2}\varepsilon \eta _\varepsilon (x)\ge 0. \end{aligned} \end{aligned}$$

We can thus repeat the game rules until the final step.

The choices of both players, \(a_k\), \(v_k\) and \(b_k\) for \(k=1, 2, \ldots , N\), determine a sequence of game positions \(y_0(=x), y_1, y_2, \ldots , y_N\in {\overline{\Omega }}\). Suppose that Player I needs to pay Player II the amount \(u_0(y_N)\) when the game ends. Player I thus intends to minimize the cost \(u_0(y_N)\), while Player II intends to maximize it.

We then define the value function of the game by

$$\begin{aligned} u^\varepsilon (x, t)=\min _{v_1, a_1}\max _{b_1} \min _{v_2, a_2}\max _{b_2}\cdots \min _{v_N, a_N}\max _{b_N} u_0(y_N). \end{aligned}$$
(1.12)

It is clear, by (1.12), that

$$\begin{aligned} u^\varepsilon (x, t) =\min _{{\begin{array}{c} |v|=1 \\ a\in A \end{array}}}\max _{b=\pm 1} u^\varepsilon \big (x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) , t-\varepsilon ^2\big ), \qquad \end{aligned}$$
(1.13)

which is the so-called dynamic programming principle (DPP) of our game.
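For tiny horizons the value (1.12) can be evaluated by brute force through the DPP (1.13). The sketch below uses a toy setting of our own choosing (\(\Omega \) the upper half-plane, \(\nu =(0, -1)\), \(f(x, a)=Ka\) with a ranging over a few sample points of the unit disk) and discretizes the direction v, so it only illustrates the min-max structure, not a convergent scheme.

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
NU = np.array([0.0, -1.0])                       # outward normal (toy half-plane)
A_SAMPLES = [np.array(a) for a in
             [(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0), (0.0, 1.0), (0.0, -1.0)]]

def value(u0, x, n_steps, eps, K=0.5, n_dir=8):
    """Brute-force game value (1.12) via the DPP (1.13); v is discretized over
    n_dir directions in [0, pi) (the sign b = +-1 supplies the other half)."""
    dirs = [np.array([np.cos(t), np.sin(t)])
            for t in np.linspace(0.0, np.pi, n_dir, endpoint=False)]

    def rec(y, k):
        if k == n_steps:
            return u0(y)                         # terminal cost paid to Player II
        e = min(1.0, y[1] / (SQRT2 * eps))       # eta_eps(y), cf. (1.10)
        z = 1.0 - e ** 2                         # zeta_eps(y), cf. (1.11)
        best = np.inf
        for v in dirs:                           # Player I: direction v
            for a in A_SAMPLES:                  # Player I: control a
                move = lambda b: (y + SQRT2 * eps * e * b * v
                                  - eps ** 2 * z * (NU - K * a))
                worst = max(rec(move(1.0), k + 1), rec(move(-1.0), k + 1))
                best = min(best, worst)          # Player II then picks b = +-1
        return best

    return rec(np.asarray(x, dtype=float), 0)
```

For a linear datum \(u_0(y)=y_1\) and a starting point far from the boundary, Player I can pick v (nearly) perpendicular to \(\nabla u_0\), and the value stays (nearly) equal to \(u_0(x)\), matching the heuristics of Sect. 1.4.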

Since \(u_0\) is bounded in \({\overline{\Omega }}\), by definition we easily see that \(u^\varepsilon \) is also bounded in \({\overline{\Omega }}\times [0, \infty )\) uniformly in \(\varepsilon \). We thus can define the relaxed half limits \({\overline{u}}\) and \({\underline{u}}\) of the value function \(u^\varepsilon \) as below: for every \((x, t)\in {\overline{\Omega }}\times [0, \infty )\),

$$\begin{aligned} \begin{aligned} {\overline{u}}(x, t)&=\mathop {{\,\mathrm {limsup^*}\,}}_{\varepsilon \rightarrow 0} u^\varepsilon (x,t)\\ {}&=\lim _{\delta \rightarrow 0} \sup \{u^\varepsilon (y, s):\ y\in {\overline{\Omega }},\ s\ge 0,\ |y-x|+|s-t|+\varepsilon \le \delta \},\\ {\underline{u}}(x, t)&=\mathop {{\,\mathrm {liminf_*}\,}}_{\varepsilon \rightarrow 0} u^\varepsilon (x, t)\\ {}&=\lim _{\delta \rightarrow 0} \inf \{u^\varepsilon (y, s):\ y\in {\overline{\Omega }},\ s\ge 0,\ |y-x|+|s-t|+\varepsilon \le \delta \}. \end{aligned} \end{aligned}$$
(1.14)

Using the dynamic programming principle and a comparison principle, we show the following result.

Theorem 1.1

(Game approximation) Assume (A1) and (A2). Assume that \(u_0\) is bounded and Lipschitz continuous in \({\overline{\Omega }}\). For any \(\varepsilon >0\) small, let \(u^\varepsilon \) be the value function defined as in (1.12). Let \({\overline{u}}\) and \({\underline{u}}\) be the relaxed limits of \(u^\varepsilon \) defined by (1.14). Then \({\overline{u}}\) and \({\underline{u}}\) are respectively a subsolution and a supersolution of (1.1)–(1.2) with H given by (1.5) and satisfy

$$\begin{aligned} {\overline{u}}(\cdot , 0)= u_0= {\underline{u}}(\cdot , 0) \quad \text{ in } {\overline{\Omega }}.\end{aligned}$$
(1.15)

In addition, if the comparison principle for (CF) holds, then \(u^\varepsilon \rightarrow u\) locally uniformly in \({\overline{\Omega }}\times [0, \infty )\) as \(\varepsilon \rightarrow 0\), where u is the unique viscosity solution of (CF).

Remark 1.2

A so-called inverse game is also available to approximate the solutions of (CF); see [28] for the case of the Cauchy problem. More precisely, if we keep the game rules above but switch the goals of both players, then we can define the value function in this case to be

$$\begin{aligned} u^\varepsilon (x, t)=\max _{v_1, a_1}\min _{b_1} \max _{v_2, a_2}\min _{b_2}\cdots \max _{v_N, a_N}\min _{b_N} u_0(y_N). \end{aligned}$$
(1.16)

As in Theorem 1.1, one can again show that the relaxed half limits of \(u^\varepsilon \) are sub- and supersolutions of (1.1)–(1.2) respectively with

$$\begin{aligned} H(x, p)=\min _{a\in A}\left\langle p, \nu (x) - f(x, a)\right\rangle . \end{aligned}$$
(1.17)

The convergence of \(u^\varepsilon \) to the unique solution u is again an immediate consequence provided that the comparison principle holds.

1.4 Heuristics

In what follows, we give a heuristic proof of Theorem 1.1, deriving the equations (1.1) and (1.2) from the game setting. Let us assume that \(u^\varepsilon \) and u are smooth with \(\nabla u^\varepsilon \ne 0\), \(\nabla u\ne 0\), and that \(u^\varepsilon \rightarrow u\) as \(\varepsilon \rightarrow 0\) in a sufficiently strong sense.

Under these assumptions, we start with a Taylor expansion of the right hand side of (1.13):

$$\begin{aligned} \begin{aligned} 0&=\min _{v, a}\max _{b} \bigg \{ \sqrt{2}\varepsilon \eta _\varepsilon (x) b \left\langle \nabla u^\varepsilon (x, t),\ v \right\rangle +\varepsilon ^2 \eta _\varepsilon ^2(x) \left\langle \nabla ^2 u^\varepsilon (x, t)v, v\right\rangle \\&\quad -\varepsilon ^2 \zeta _\varepsilon (x)\left\langle \nabla u^\varepsilon (x, t), \left( \nu (x)-f(x, a)\right) \right\rangle \bigg \} -\varepsilon ^2 u^\varepsilon _t(x, t) +o(\varepsilon ^2). \end{aligned} \end{aligned}$$

It follows that

$$\begin{aligned} \begin{aligned} 0&=\min _{v} \bigg \{ \sqrt{2}\varepsilon \eta _\varepsilon (x) \big |\left\langle \nabla u^\varepsilon (x, t),\ v \right\rangle \big | +\varepsilon ^2 \eta _\varepsilon ^2(x) \left\langle \nabla ^2 u^\varepsilon (x, t)v, v\right\rangle \bigg \}\\&\quad -\varepsilon ^2 \zeta _\varepsilon (x)H(x, \nabla u^\varepsilon (x, t)) -\varepsilon ^2 u^\varepsilon _t(x,t) +o(\varepsilon ^2). \end{aligned} \end{aligned}$$

This implies that the minimizer v satisfies

$$\begin{aligned} v\approx {\nabla ^\perp u^\varepsilon (x, t)\over |\nabla u^\varepsilon (x, t)|}, \end{aligned}$$

where \(\nabla ^\perp u\) denotes \((-u_y, u_x)\) for any \(u\in C^1({{\mathbb {R}}}^2)\). Denoting

$$\begin{aligned} F(p, X)=-{\text {tr}}\left[ \left( I-{p\otimes p\over |p|^2}\right) X\right] \end{aligned}$$
(1.18)

for \((p, X)\in ({{\mathbb {R}}}^2\setminus \{0\})\times {{\mathbb {S}}}^2\), where \({{\mathbb {S}}}^n\) stands for the set of \(n\times n\) symmetric matrices, we have

$$\begin{aligned} 0=-\varepsilon ^2 \eta _\varepsilon ^2 (x)F\left( \nabla u^\varepsilon (x, t), \nabla ^2 u^\varepsilon (x, t)\right) -\varepsilon ^2 \zeta _\varepsilon (x)H(x, \nabla u^\varepsilon (x, t))-\varepsilon ^2 u^\varepsilon _t(x, t) +o(\varepsilon ^2).\nonumber \\ \end{aligned}$$
(1.19)

Here we applied the fact that for all \((p, X)\in ({{\mathbb {R}}}^2\setminus \{0\})\times {{\mathbb {S}}}^2\),

$$\begin{aligned} {\text {tr}}\left[ \left( I-{p\otimes p\over |p|^2}\right) X\right] =\left\langle X {p^\perp \over |p|}, {p^\perp \over |p|}\right\rangle , \end{aligned}$$

where \(p^\perp \) denotes a vector orthogonal to p with length equal to |p|.
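This identity is elementary to verify numerically; the helper below (our naming) returns the gap between the two sides for a given \(p\ne 0\) and symmetric X.

```python
import numpy as np

def trace_identity_gap(p, X):
    """|tr[(I - p p^T / |p|^2) X] - <X p_perp/|p|, p_perp/|p|>| for p != 0."""
    P = np.eye(2) - np.outer(p, p) / (p @ p)
    perp = np.array([-p[1], p[0]]) / np.linalg.norm(p)   # p_perp / |p|
    return abs(np.trace(P @ X) - perp @ (X @ perp))
```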

For any fixed \(x\in \Omega \), since \(\eta _\varepsilon (x)\rightarrow 1\) as \(\varepsilon \rightarrow 0\), dividing the equation (1.19) by \(\varepsilon ^2\) and passing to the limit, we get

$$\begin{aligned} u_t(x, t)+F\left( \nabla u(x, t), \nabla ^2 u(x, t)\right) =0, \end{aligned}$$

which is precisely (1.1).

On the other hand, if \(x\in \partial \Omega \), then \(\eta _\varepsilon (x)= 0\). Hence, in this case (1.19) yields the dynamic boundary condition (1.2), i.e.,

$$\begin{aligned} u_t(x, t)+H(x, \nabla u(x, t))=0. \end{aligned}$$

We remark that although the derivation of the boundary condition is quite straightforward in our formal argument above, the rigorous proof is more involved. As a matter of fact, since the real proof is essentially based on stability arguments, one has to consider the situation when u(x, t) is tested via test functions on \(u^\varepsilon \) at a sequence of approximating locations \(x_\varepsilon \in {\overline{\Omega }}\), rather than \(x_\varepsilon \in \partial \Omega \). Note that we do not have any control on the speed of the convergence \(x_\varepsilon \rightarrow x\), which means that the limit of \(\zeta _\varepsilon (x_\varepsilon )\), along a converging subsequence, can be any \(c\in [0, 1]\) rather than simply 0.

However, it turns out that this is not a problem at all, since we will eventually consider the dynamic boundary condition in the generalized (viscosity) sense. More precisely, when we intend to show, for example, that u is a subsolution at (x, t) with \(x\in \partial \Omega \), if there holds

$$\begin{aligned} 0\le & {} - \varepsilon ^2 \eta _\varepsilon ^2(x_\varepsilon )F(\nabla u^\varepsilon (x_\varepsilon , t_\varepsilon ), \nabla ^2 u^\varepsilon (x_\varepsilon , t_\varepsilon ))-\varepsilon ^2 \zeta _\varepsilon (x_\varepsilon )H(x_\varepsilon , \nabla u^\varepsilon (x_\varepsilon , t_\varepsilon ))\\&-\varepsilon ^2 u^\varepsilon _t(x_\varepsilon , t_\varepsilon ) +o(\varepsilon ^2) \end{aligned}$$

and \((x_\varepsilon , t_\varepsilon )\rightarrow (x, t)\) as \(\varepsilon \rightarrow 0\), then even if \(\zeta _\varepsilon (x_\varepsilon )\rightarrow c\) with \(0\le c\le 1\), by dividing both sides by \(\varepsilon ^2\) and letting \(\varepsilon \rightarrow 0\), we have, thanks to (1.11),

$$\begin{aligned} u_t(x, t) +\left( 1-c\right) F\left( \nabla u(x, t), \nabla ^2 u(x, t)\right) +cH(x, \nabla u(x, t)) \le 0, \end{aligned}$$

which is equivalent to

$$\begin{aligned} (1-c)\left( u_t(x, t)+ F\left( \nabla u(x, t), \nabla ^2 u(x, t)\right) \right) +c\left( u_t(x, t)+H(x, \nabla u(x, t))\right) \le 0.\end{aligned}$$

It is clear that either

$$\begin{aligned} u_t(x, t)+F\left( \nabla u(x, t), \nabla ^2 u(x, t)\right) \le 0 \end{aligned}$$

or

$$\begin{aligned} u_t(x, t)+H(x, \nabla u(x, t))\le 0 \end{aligned}$$

holds, which verifies that u satisfies the subsolution property on the boundary in the viscosity sense. The proof for the supersolution part is similar.

1.5 Applications

Our game approximation not only provides an existence result for (CF) but also has applications in understanding various properties of the evolution. In Sect. 4, following the method in [34], we discuss the preservation of convexity for the curvature flow with the dynamic boundary condition. The idea is to establish an approximate convexity estimate for the level sets of \(u^\varepsilon (\cdot , t)\) for all \(t>0\) and then pass to the limit as \(\varepsilon \rightarrow 0\). However, we need to overcome several difficulties due to the presence of the boundary condition.

Since \(\Omega \) is not assumed to be convex, in general one cannot directly consider the convexity of sub- or super-level sets of \(u^\varepsilon (\cdot , t)\) in \(\Omega \). We therefore introduce a different notion of set convexity, which we call convexity relative to \(\Omega \). Roughly speaking, a closed set is convex relative to \(\Omega \) if the portion of its boundary in \(\Omega \) is convex; see Definition 4.1. This turns out to be a proper notion for studying the convexity preserving property with boundary conditions.

Although it is well known that the mean curvature flow in the whole space preserves convexity [21, 27], this property fails to hold for general boundary value problems (cf. [34, Example 5.7]). In order to give an affirmative convexity result for the dynamic boundary problem, we thus impose an additional assumption on the initial value so as to ensure that the initial curve bends in a “correct” way. A precise statement is given in Theorem 4.2.

In Sect. 5, we use the game interpretation to understand the so-called fattening phenomenon for (CF). As shown in [15, 19], either for the Cauchy problem or for the Neumann problem of motion by curvature, there exist solutions whose zero level set is initially a curve but develops nonempty interior during the evolution. Such singular behavior was rigorously verified using the corresponding game-theoretic interpretation in [33]. In this work, we use our game to discuss the fattening or non-fattening phenomenon for the curvature flow with dynamic boundary conditions. We choose specific game strategies for both players to obtain uniform upper and lower bounds on \(u^\varepsilon \) for all \(\varepsilon >0\).

2 The game interpretation

In this section we give a rigorous and detailed proof of Theorem 1.1. For the reader’s convenience, we first recall the definition of viscosity solutions of (CF) and briefly review several known results on its well-posedness.

2.1 Viscosity solutions to dynamic boundary problems

Let \(F: ({{\mathbb {R}}}^2\setminus \{0\}) \times {{\mathbb {S}}}^2\rightarrow {{\mathbb {R}}}\) be as in (1.18). It is clear that F is continuous in \(({{\mathbb {R}}}^2\setminus \{0\}) \times {{\mathbb {S}}}^2\) and

$$\begin{aligned} F^*(0, 0)=F_*(0, 0)=0. \end{aligned}$$

Here \(F^*\) and \(F_*\) respectively denote the upper and lower semicontinuous envelopes of F.

Definition 2.1

A locally bounded upper semicontinuous (resp., lower semicontinuous) function u on \({\overline{\Omega }}\times (0, \infty )\) is said to be a subsolution (resp., supersolution) of (1.1)–(1.2) if, whenever there exist \((x_0, t_0)\in {\overline{\Omega }}\times (0, \infty )\) and a function \(\varphi \in C^\infty ({\overline{\Omega }}\times (0, \infty ))\) such that \(u-\varphi \) attains a strict maximum (resp., minimum) in \({\overline{\Omega }}\times (0, \infty )\) at \((x_0, t_0)\), the following inequalities hold:

  • If \(x_0\in \Omega \), then we have

    $$\begin{aligned} \begin{aligned}&\varphi _t(x_0, t_0)+F_*(\nabla \varphi (x_0, t_0), \nabla ^2 \varphi (x_0, t_0))\le 0\\ {}&\quad \left( \text{ resp., } \varphi _t(x_0, t_0)+F^*(\nabla \varphi (x_0, t_0), \nabla ^2 \varphi (x_0, t_0))\ge 0\right) .\end{aligned} \end{aligned}$$
  • If \(x_0\in \partial \Omega \), then we have

    $$\begin{aligned} \begin{aligned}&\varphi _t(x_0, t_0)+ \min \left\{ F_*\left( \nabla \varphi (x_0, t_0), \nabla ^2 \varphi (x_0, t_0)\right) , H\left( x_0, \nabla \varphi (x_0, t_0)\right) \right\} \le 0 \\&\left( \text {resp.,}\ \varphi _t(x_0, t_0)+ \max \left\{ F^*\left( \nabla \varphi (x_0, t_0), \nabla ^2 \varphi (x_0, t_0)\right) , H\left( x_0, \nabla \varphi (x_0, t_0)\right) \right\} \ge 0\right) .\end{aligned} \end{aligned}$$

A continuous function on \({\overline{\Omega }}\times (0, \infty )\) is called a solution of (1.1)–(1.2) if it is both a subsolution and a supersolution.

Moreover, a locally bounded upper semicontinuous (resp., lower semicontinuous) function u on \({\overline{\Omega }}\times [0, \infty )\) is said to be a subsolution (resp., supersolution) of (CF) if it is a subsolution (supersolution) of (1.1)–(1.2) and satisfies \(u(\cdot , 0)\le u_0\) (resp., \(u(\cdot , 0)\ge u_0\)) in \({\overline{\Omega }}\). A continuous function on \({\overline{\Omega }}\times [0, \infty )\) is called a solution of (CF) if it is both a subsolution and a supersolution of (CF).

It is certainly an important question whether the viscosity solutions defined above are unique. This turns out to be a challenging open question. Let us consider the case when the boundary condition is linear (with \(f\equiv 0\) in (1.5)), i.e.,

$$\begin{aligned} H(x, p)=\left\langle p, \nu (x) \right\rangle \quad \text {for all }x\in {\overline{\Omega }}, p\in {{\mathbb {R}}}^n. \end{aligned}$$
(2.1)

A comparison result in this case was recently established in [22] when \(\Omega \) is a half space.

Theorem 2.2

(Comparison theorem in a half space [22]) Suppose that \(\Omega \) is a half space, i.e.,

$$\begin{aligned} \Omega =\{(x_1, x_2, \ldots , x_n)\in {{\mathbb {R}}}^n: x_n>0\}. \end{aligned}$$
(2.2)

Let u and v be respectively a subsolution and a supersolution of (1.1)–(1.2) with H given by (2.1). Assume in addition that there exists \(M\in {{\mathbb {R}}}\) such that for any \(T>0\), \(u(\cdot , t)-M\) and \(v(\cdot , t)-M\) are compactly supported in \({\overline{\Omega }}\) for all \(0\le t\le T\). If \(u(\cdot , 0)\le v(\cdot , 0)\) on \({\overline{\Omega }}\), then \(u\le v\) in \({\overline{\Omega }}\times [0, \infty )\).

It is however an open question whether the comparison principle holds in a more general domain and for more general nonlinear dynamic boundary conditions.

2.2 A rigorous proof of sub- and supersolution properties

We next prove Theorem 1.1 rigorously. We first define a monotone operator \(S^\varepsilon : C({\overline{\Omega }})\rightarrow C({\overline{\Omega }})\) to be

$$\begin{aligned} S^\varepsilon [\psi ](x)= {} \min _{{{\begin{array}{c} |v|=1 \\ a\in A \end{array}}}}\max _{b=\pm 1} \psi \left( x+\sqrt{2}\varepsilon \eta _\varepsilon (x)bv-\varepsilon ^2\zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) \right) , \nonumber \\ \quad x\in {\overline{\Omega }}, \psi \in C({\overline{\Omega }}). \end{aligned}$$
(2.3)

It is clear that

  • \(S^\varepsilon [c]=c\) in \({\overline{\Omega }}\) for any constant \(c\in {{\mathbb {R}}}\);

  • \(S^\varepsilon [\psi +c]=S^\varepsilon [\psi ]+c\) in \({\overline{\Omega }}\) for any \(\psi \in C({\overline{\Omega }})\) and \(c\in {{\mathbb {R}}}\);

  • \(S^\varepsilon [\psi _1]\le S^\varepsilon [\psi _2]\) in \({\overline{\Omega }}\) provided that \(\psi _1\le \psi _2\) in \({\overline{\Omega }}\);

  • For any \(\psi \in C({\overline{\Omega }})\) and any nondecreasing function \(h\in C({{\mathbb {R}}})\), we have

    $$\begin{aligned} h\left( S^\varepsilon [\psi ]\right) =S^\varepsilon [h(\psi )] \quad \text {in }{\overline{\Omega }}. \end{aligned}$$
    (2.4)

The last property above can be viewed as a discrete version of the geometricity of the level-set mean curvature operator. It is known that the evolution of a particular level set described by (1.1) does not depend on the choice of \(u_0\) but only on the initial level set. Our game for the dynamic boundary problem keeps the same feature.
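The geometricity (2.4) survives any discretization of the controls, since a nondecreasing h commutes with both min and max. A sketch (toy half-plane domain \(\Omega =\{x_2>0\}\) with \(\nu =(0, -1)\), \(f\equiv 0\), and finitely many directions v; all of these concrete choices are ours):

```python
import numpy as np

SQRT2 = np.sqrt(2.0)
NU = np.array([0.0, -1.0])   # outward normal of the toy half-plane {x2 > 0}

def S(psi, x, eps, n_dir=8):
    """Discretized version of the operator S^eps in (2.3), with f = 0."""
    e = min(1.0, x[1] / (SQRT2 * eps))           # eta_eps, cf. (1.10)
    z = 1.0 - e ** 2                             # zeta_eps, cf. (1.11)
    vals = []
    for t in np.linspace(0.0, np.pi, n_dir, endpoint=False):
        v = np.array([np.cos(t), np.sin(t)])
        vals.append(max(psi(x + SQRT2 * eps * e * b * v - eps ** 2 * z * NU)
                        for b in (1.0, -1.0)))
    return min(vals)
```

One checks directly that \(S[c]=c\) for constants and that \(h(S[\psi ])=S[h(\psi )]\) for, e.g., \(h(r)=r^3\).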

In addition, the following property of \(S^\varepsilon \) holds.

Lemma 2.3

(Consistency) Assume (A1) and (A2). Let \(\eta _\varepsilon \) and \(\zeta _\varepsilon \) be given by (1.10) and (1.11). Let \(\psi \in C^2({\overline{\Omega }})\) and \(S^\varepsilon \) be the operator defined in (2.3). Fix \(x\in {\overline{\Omega }}\). Then, for \(\varepsilon >0\) sufficiently small,

$$\begin{aligned} S^\varepsilon [\psi ](x)-\psi (x)\le & {} -\varepsilon ^2 \eta _\varepsilon ^2(x) F_*(\nabla \psi (x), \nabla ^2 \psi (x))-\varepsilon ^2 \zeta _\varepsilon (x)H(x, \nabla \psi (x))+o(\varepsilon ^2), \nonumber \\ \end{aligned}$$
(2.5)
$$\begin{aligned} S^\varepsilon [\psi ](x)-\psi (x)\ge & {} -\varepsilon ^2 \eta _\varepsilon ^2(x) F^*(\nabla \psi (x), \nabla ^2 \psi (x)) - \varepsilon ^2\zeta _\varepsilon (x)H(x, \nabla \psi (x))+o(\varepsilon ^2),\nonumber \\ \end{aligned}$$
(2.6)

where F is given by (1.18).

In order to prove Lemma 2.3, we first present an elementary result for our later use.

Lemma 2.4

(Lemma 4.1 in [23]) Suppose that p is a unit vector in \({\mathbb {R}}^2\) and \(X\in {{\mathbb {S}}}^2\). Then there exists a constant \(C>0\), depending only on the norm of X, such that for any unit vector \(\xi \in {\mathbb {R}}^2\),

$$\begin{aligned} |\langle Xp^\perp , p^\perp \rangle -\langle X \xi , \xi \rangle |\le C|\langle \xi , p\rangle |. \end{aligned}$$
(2.7)
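A crude admissible constant can be made explicit: writing \(\xi =\left\langle \xi , p\right\rangle p+\left\langle \xi , p^\perp \right\rangle p^\perp \) and expanding \(\left\langle X\xi , \xi \right\rangle \) gives \(|\langle Xp^\perp , p^\perp \rangle -\langle X \xi , \xi \rangle |\le 4\Vert X\Vert |\langle \xi , p\rangle |\), so \(C=4\Vert X\Vert \) works (this particular constant is our choice; the lemma only asserts the existence of some C). A numerical sketch:

```python
import numpy as np

def lemma_2_4_sides(p, X, xi):
    """Returns (lhs, rhs) of (2.7) with the crude constant C = 4 ||X||_2,
    for unit vectors p, xi and symmetric X."""
    perp = np.array([-p[1], p[0]])               # p_perp, same length as p
    lhs = abs(perp @ (X @ perp) - xi @ (X @ xi))
    rhs = 4.0 * np.linalg.norm(X, 2) * abs(xi @ p)
    return lhs, rhs
```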

Proof of Lemma 2.3

Fix arbitrarily \(x\in {\overline{\Omega }}\). By Taylor expansion on (2.3), we have

$$\begin{aligned} \begin{aligned} S^\varepsilon [\psi ](x)&=\psi (x)+\min _{|v|=1} \bigg \{\sqrt{2}\varepsilon \eta _\varepsilon (x) | \left\langle \nabla \psi (x), v\right\rangle | + \varepsilon ^2 \eta _\varepsilon ^2(x) \left\langle \nabla ^2\psi (x) v, v\right\rangle \bigg \}\\&\quad -\varepsilon ^2\zeta _\varepsilon (x) H(x, \nabla \psi (x))+o(\varepsilon ^2). \end{aligned} \end{aligned}$$
(2.8)

Part 1. Let us first show (2.5). If \(\nabla \psi (x)\ne 0\), then by taking v perpendicular to \(\nabla \psi (x)\), we get

$$\begin{aligned} S^\varepsilon [\psi ](x){\le } \psi (x){+}\varepsilon ^2 \eta _\varepsilon ^2(x) \left\langle \nabla ^2 \psi (x) {\nabla ^\perp \psi (x)\over |\nabla \psi (x)|} , {\nabla ^\perp \psi (x)\over |\nabla \psi (x)|}\right\rangle -\varepsilon ^2\zeta _\varepsilon (x) H(x, \nabla \psi (x))+o(\varepsilon ^2), \end{aligned}$$

which immediately yields (2.5). If \(\nabla \psi (x)=0\), then (2.8) reduces to

$$\begin{aligned} S^\varepsilon [\psi ](x)=\psi (x)+\varepsilon ^2 \eta _\varepsilon ^2(x) \min _{|v|=1} \left\langle \nabla ^2 \psi (x)v, v \right\rangle -\varepsilon ^2\zeta _\varepsilon (x) H(x, \nabla \psi (x))+o(\varepsilon ^2), \end{aligned}$$
(2.9)

which also implies (2.5), since

$$\begin{aligned} \min _{|v|=1} \left\langle Xv, v\right\rangle \le \max _{|v|=1} \left\langle Xv, v\right\rangle =\max _{|v|=1}{\text {tr}}[(I -v\otimes v)X]=-F_*(0, X). \end{aligned}$$

We thus get (2.5) in either case. The modulus in the error term depends only on the continuity of \(\nabla ^2\psi \) near x.

Part 2. Let us now prove (2.6). Suppose that \(\nabla \psi (x)\ne 0\). Applying Lemma 2.4 with \(\xi =v\), \(p=\nabla \psi (x)/|\nabla \psi (x)|\) and \(X=\nabla ^2 \psi (x)\), we have

$$\begin{aligned} \left\langle \nabla ^2\psi (x)v, v\right\rangle -\left\langle \nabla ^2 \psi (x) {\nabla ^\perp \psi (x)\over |\nabla \psi (x)|} , {\nabla ^\perp \psi (x)\over |\nabla \psi (x)|}\right\rangle \ge -C{|\left\langle \nabla \psi (x), v\right\rangle |\over |\nabla \psi (x)|}, \end{aligned}$$

where \(C>0\) depends on \(|\nabla ^2 \psi (x)|\). Adopting this estimate in (2.8), we are led to

$$\begin{aligned} \begin{aligned} S^\varepsilon [\psi ](x)-\psi (x)&\ge \min _{|v|=1}\eta _\varepsilon (x) \left( \sqrt{2}\varepsilon -{C\varepsilon ^2\eta _\varepsilon (x)\over |\nabla \psi (x)|}\right) |\langle \nabla \psi (x), v\rangle |\\&\quad +\varepsilon ^2\eta _\varepsilon ^2(x)\left\langle \nabla ^2 \psi (x){\nabla ^\perp \psi (x)\over |\nabla \psi (x)|}, {\nabla ^\perp \psi (x)\over |\nabla \psi (x)|}\right\rangle \\&\quad -\varepsilon ^2\zeta _\varepsilon (x)H(x, \nabla \psi (x))+o(\varepsilon ^2). \end{aligned} \end{aligned}$$

This clearly implies (2.6) for \(\varepsilon >0\) small. When \(\nabla \psi (x)=0\), (2.6) is again an immediate consequence of (2.9). It is not difficult to see that the error term depends on the continuity of \(\nabla ^2\psi \) around x. \(\square \)

The estimate (2.6) in Lemma 2.3 requires the smallness of \(\varepsilon >0\), which depends on the local behavior of \(\nabla \psi \) and \(\nabla ^2\psi \) around x. Let us provide a rough but uniform estimate for more regular test functions in the following class:

$$\begin{aligned} {\mathcal {C}}^{2, 1}({\overline{\Omega }}):=\{\psi \in C^2({\overline{\Omega }}): \nabla \psi \in W^{2, \infty }({\overline{\Omega }})\}. \end{aligned}$$
(2.10)

Lemma 2.5

(Increment bound) Assume (A1) and (A2). Let \(\eta _\varepsilon \) and \(\zeta _\varepsilon \) be given by (1.10) and (1.11). Let \(\psi \in {\mathcal {C}}^{2, 1}({\overline{\Omega }})\) be as in (2.10) and let \(S^\varepsilon \) be the operator defined in (2.3). Then for any \(\varepsilon >0\),

$$\begin{aligned} |S^\varepsilon [\psi ](x)-\psi (x)|\le C\varepsilon ^2, \end{aligned}$$
(2.11)

where \(C>0\) depends on the uniform bounds of \(\nabla \psi \) and \(\nabla ^2 \psi \) in \({\overline{\Omega }}\).

Proof

As shown in Part 1 of the proof of Lemma 2.3, the error term in (2.5) is uniform for all \(x\in {\overline{\Omega }}\), since \(\nabla ^2\psi \) is Lipschitz in \({\overline{\Omega }}\). Then (2.5) immediately implies

$$\begin{aligned} S^\varepsilon [\psi ](x)\le \psi (x)+C_1\varepsilon ^2 \end{aligned}$$

for some \(C_1>0\) depending on the uniform bounds of \(F_*(\nabla \psi (x), \nabla ^2 \psi (x))\) and \(H(x, \nabla \psi (x))\) for all \(x\in {\overline{\Omega }}\).

We cannot utilize (2.6) directly to show the existence of \(C_2>0\) such that

$$\begin{aligned} S^\varepsilon [\psi ](x)\ge \psi (x)-C_2\varepsilon ^2 \end{aligned}$$

holds uniformly for all \(\varepsilon >0\) small. However, we can still apply (2.8) to get this estimate due to the boundedness of \(\nabla \psi \) and \(\nabla ^2 \psi \) together with the uniform continuity of \(\nabla ^2\psi \). \(\square \)

We next show that \({\overline{u}}\) and \({\underline{u}}\) are respectively a subsolution and a supersolution of (1.1)–(1.2).

Proposition 2.6

(Sub- and supersolution properties) Assume (A1) and (A2). Assume that \(u_0\) is bounded and Lipschitz continuous in \({\overline{\Omega }}\). Let \(u^\varepsilon \) be the value function defined in (1.12). Let \({\overline{u}}\) and \({\underline{u}}\) be defined as in (1.14). Then \({\overline{u}}\) and \({\underline{u}}\) are respectively a subsolution and a supersolution of (1.1)–(1.2).

Proof

Let us first show that \({\overline{u}}\) is a subsolution. Suppose that there exist \((x_0, t_0)\in {\overline{\Omega }}\times (0, \infty )\) and \(\phi \in C^\infty ({\overline{\Omega }}\times (0, \infty ))\) such that \({\overline{u}}-\phi \) attains a strict maximum on \({\overline{\Omega }}\times (0, \infty )\) at \((x_0, t_0)\). Then there exists \(r>0\) such that

$$\begin{aligned} {\overline{u}}(x_0, t_0)-\phi (x_0, t_0)>{\overline{u}}(x, t)-\phi (x, t) \end{aligned}$$

for all \((x, t)\in Q_r\) with \((x, t)\ne (x_0, t_0)\), where

$$\begin{aligned} Q_r=B_r(x_0, t_0)\cap \left( {\overline{\Omega }}\times (0, \infty )\right) . \end{aligned}$$

It follows that there exists \((x_\varepsilon , t_\varepsilon )\in {\overline{\Omega }}\times (0, \infty )\) such that \((x_\varepsilon , t_\varepsilon )\rightarrow (x_0, t_0)\) as \(\varepsilon \rightarrow 0\) and

$$\begin{aligned} u^\varepsilon (x_\varepsilon , t_\varepsilon )-\phi (x_\varepsilon , t_\varepsilon )\ge \sup _{Q_r} (u^\varepsilon -\phi )-\varepsilon ^3. \end{aligned}$$

By (1.13), we have

$$\begin{aligned} \phi (x_\varepsilon , t_\varepsilon )\le \min _{\begin{array}{c} |v|=1\\ a\in A \end{array}}\max _{b=\pm 1} \phi \left( x_\varepsilon +\sqrt{2}\varepsilon \eta _\varepsilon (x_\varepsilon ) bv-\varepsilon ^2\zeta _\varepsilon (x_\varepsilon )\nu (x_\varepsilon )+\varepsilon ^2 \zeta _\varepsilon (x_\varepsilon )f(x_\varepsilon , a), t_\varepsilon -\varepsilon ^2\right) +\varepsilon ^3. \end{aligned}$$

Applying Lemma 2.3 with \(\psi =\phi \left( \cdot , t_\varepsilon -\varepsilon ^2\right) \) and \(x=x_\varepsilon \), we get

$$\begin{aligned} \begin{aligned} \varepsilon ^2 \phi _t(x_\varepsilon , t_\varepsilon -\varepsilon ^2)\le&-\varepsilon ^2 \eta _\varepsilon ^2(x_\varepsilon ) F_*(\nabla \phi (x_\varepsilon , t_\varepsilon -\varepsilon ^2), \nabla ^2 \phi (x_\varepsilon , t_\varepsilon -\varepsilon ^2))\\ {}&\quad -\varepsilon ^2 \zeta _\varepsilon (x_\varepsilon )H(x_\varepsilon , \nabla \phi (x_\varepsilon , t_\varepsilon -\varepsilon ^2))+o(\varepsilon ^2). \end{aligned} \end{aligned}$$

Dividing the inequality above by \(\varepsilon ^2\), we are led to

$$\begin{aligned} \phi _t+\eta _\varepsilon ^2 F_*(\nabla \phi , \nabla ^2\phi )+ \zeta _\varepsilon H(x_\varepsilon , \nabla \phi )\le o(1)\quad \text {at }(x_\varepsilon , t_\varepsilon -\varepsilon ^2). \end{aligned}$$
(2.12)

We can easily deduce the viscosity inequality

$$\begin{aligned} \phi _t(x_0, t_0)+F_*(\nabla \phi (x_0, t_0), \nabla ^2 \phi (x_0, t_0))\le 0 \end{aligned}$$

by sending \(\varepsilon \rightarrow 0\) in (2.12) when \(x_0\in \Omega \). In the case \(x_0\in \partial \Omega \), there exists \(0\le c\le 1\) such that \(\zeta _\varepsilon (x_\varepsilon )\rightarrow c\) after taking a subsequence if necessary, and therefore

$$\begin{aligned} \phi _t(x_0, t_0)+(1-c)F_*(\nabla \phi (x_0, t_0), \nabla ^2\phi (x_0, t_0))+cH(x_0, \nabla \phi (x_0, t_0))\le 0, \end{aligned}$$

which implies that

$$\begin{aligned} \phi _t(x_0, t_0) +\min \left\{ F_*(\nabla \phi (x_0, t_0), \nabla ^2\phi (x_0, t_0)), H(x_0, \nabla \phi (x_0, t_0))\right\} \le 0. \end{aligned}$$

This completes the verification that \({\overline{u}}\) is a subsolution. The proof for \({\underline{u}}\) is symmetric and therefore omitted here. \(\square \)

Proposition 2.7

(Initial value) Assume (A1) and (A2). Assume that \(u_0\) is bounded and Lipschitz continuous in \({\overline{\Omega }}\). Let \(u^\varepsilon \) be the value function as in (1.12). Let \({\overline{u}}\) and \({\underline{u}}\) be defined as in (1.14). Then (1.15) holds.

Proof

Fix \(x_0\in {\overline{\Omega }}\). Since \(u_0\) is Lipschitz continuous in \({\overline{\Omega }}\), there exists \(L>0\) such that

$$\begin{aligned} u_0(x)\le u_0(x_0)+L|x-x_0|\le u_0(x_0)+L(|x-x_0|^2+\delta ^2)^{1\over 2} \end{aligned}$$
(2.13)

for any \(\delta >0\). Set, for any \(x\in {\overline{\Omega }}\),

$$\begin{aligned} \psi (x)=u_0(x_0)+L(|x-x_0|^2+\delta ^2)^{1\over 2}. \end{aligned}$$

Then it is clear that \(\psi \in {\mathcal {C}}^{2, 1}({\overline{\Omega }})\).

By Lemma 2.5 together with the monotonicity of \(S^\varepsilon \), we have

$$\begin{aligned} u^\varepsilon (\cdot , \varepsilon ^2)=S^\varepsilon [u_0]\le S^\varepsilon [\psi ]\le \psi +C\varepsilon ^2 \end{aligned}$$

in \({\overline{\Omega }}\) for some \(C>0\) independent of \(\varepsilon >0\). Repeating the estimate, we are led to

$$\begin{aligned} u^\varepsilon (x, t)\le \psi (x)+Ct \end{aligned}$$

for all \((x, t)\in {\overline{\Omega }}\times [0, \infty )\) when \(\varepsilon >0\) is small, which implies

$$\begin{aligned} {\overline{u}}(x_0, 0)\le \psi (x_0)=u_0(x_0)+L\delta . \end{aligned}$$

Letting \(\delta \rightarrow 0\), we end up with

$$\begin{aligned} {\overline{u}}(x_0, 0)\le u_0(x_0). \end{aligned}$$

We omit the proof for the part with \({\underline{u}}(x, 0)\), since it is symmetric. \(\square \)

Proof of Theorem 1.1

If the comparison principle holds, then combining the results in Proposition 2.6 and Proposition 2.7, we have \({\overline{u}}\le {\underline{u}}\) in \({\overline{\Omega }}\times [0, \infty )\). Since \({\overline{u}}\ge {\underline{u}}\) holds by definition, we obtain the locally uniform convergence of \(u^\varepsilon \) to the unique solution of (CF) in \({\overline{\Omega }}\times [0, \infty )\). \(\square \)

Our game approximation covers, in particular, the boundary condition (1.9).

Example 2.8

(Game with a boundary driving force, case 1) In particular, when

$$\begin{aligned} A={\overline{B}}_1,\quad f(x, a)=Ka\quad \text {with }0\le K<1 \text { for all }a\in A\text { and }x\in \partial \Omega , \end{aligned}$$
(2.14)

Theorem 1.1 gives an approximation for (1.1) with the boundary condition (1.9).
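If, as the computation in the proof of Lemma 4.3 suggests, H in (1.5) takes the form \(\max _{a\in A}\left\langle p, \nu (x)-f(x, a)\right\rangle \), then with the choice (2.14) the boundary Hamiltonian reduces to \(\left\langle p, \nu (x)\right\rangle +K|p|\), since the maximum over the closed unit disk is attained at \(a=-p/|p|\). The following sketch (an illustration only; the sample values of p, \(\nu \) and K are assumptions, not data from the paper) checks this reduction by Monte-Carlo sampling:

```python
import math
import random

def H_sampled(p, nu, K, n=20000):
    """Monte-Carlo evaluation of max over a in the closed unit disk of
    <p, nu - K a>, the boundary Hamiltonian obtained from (2.14); the
    exact maximum is attained at a = -p/|p|, giving <p, nu> + K |p|."""
    best = -math.inf
    for _ in range(n):
        # uniform sample of the closed unit disk
        r, th = math.sqrt(random.random()), random.uniform(0, 2 * math.pi)
        a = (r * math.cos(th), r * math.sin(th))
        best = max(best, p[0] * (nu[0] - K * a[0]) + p[1] * (nu[1] - K * a[1]))
    return best

random.seed(1)
p, nu, K = (0.6, -0.8), (0.0, -1.0), 0.5
exact = p[0] * nu[0] + p[1] * nu[1] + K * math.hypot(*p)
print(H_sampled(p, nu, K), exact)  # sampled max vs the exact value <p, nu> + K|p|
```

The sampled value never exceeds the exact one and approaches it as the number of samples grows.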

2.3 Game convergence in a half plane with linear boundary condition

As stated in our main theorems, a comparison principle is needed to conclude the convergence of the value functions. General uniqueness for the dynamic boundary problem (CF) is a challenging open problem. Below we consider the special case when the boundary condition is linear, as given by (2.1), and the domain \(\Omega \) is a half plane.

When (2.1) holds and \(\Omega \subset {{\mathbb {R}}}^2\) is a half plane, by applying the comparison principle in Theorem 2.2, we can obtain the following convergence result.

Corollary 2.9

(Game convergence in a half plane) Suppose that \(\Omega \subset {{\mathbb {R}}}^2\) is a half plane. Assume that \(u_0\) is Lipschitz continuous in \({\overline{\Omega }}\) and there exists \(M\in {{\mathbb {R}}}\) such that \(u_0-M\) is compactly supported in \({\overline{\Omega }}\). For any \(\varepsilon >0\) small, let \(u^\varepsilon \) be the value function defined as in (1.12) with \(f\equiv 0\). Then \(u^\varepsilon \rightarrow u\) locally uniformly in \({\overline{\Omega }}\times [0, \infty )\) as \(\varepsilon \rightarrow 0\), where u is the unique viscosity solution of (CF).

The assumption that \(u_0-M\) is compactly supported is used to show that \({\overline{u}}(\cdot , t)={\underline{u}}(\cdot , t)=M\) outside a compact set in \({\overline{\Omega }}\), which is required in Theorem 2.2.

Proposition 2.10

(Constant game values outside compact sets) Assume that (A1) and (A2) hold. Let \(u_0\) be Lipschitz in \({\overline{\Omega }}\) and \(u^\varepsilon \) be the value function defined as in (1.12) with \(f\equiv 0\). Assume that there exist \(M\in {{\mathbb {R}}}\) and a compact subset \({{\mathcal {K}}}\subset {\overline{\Omega }}\) such that

$$\begin{aligned} u_0 =M \quad \text { in }{\overline{\Omega }}\setminus {{\mathcal {K}}}. \end{aligned}$$

Then for any \(T>0\), there exists a compact set \({{\mathcal {K}}}_T\subset {\overline{\Omega }}\) such that

$$\begin{aligned} u^\varepsilon (x, t)=M\quad \text { for all }\varepsilon >0\text { small}, x\in {\overline{\Omega }}\setminus {{\mathcal {K}}}_T\text { and } 0\le t\le T. \end{aligned}$$

In particular, the relaxed limits \({\overline{u}}\) and \({\underline{u}}\) satisfy

$$\begin{aligned} {\overline{u}}(x, t)={\underline{u}}(x, t)=M \quad \text { for all }x\in {\overline{\Omega }}\setminus {{\mathcal {K}}}_T\text { and } 0\le t\le T. \end{aligned}$$
(2.15)

Proof

Let us fix \(x\in {\overline{\Omega }}\) such that \(\mathrm{dist\,}(x, {{\mathcal {K}}})>1\), which implies that \(B_1(x)\cap {\overline{\Omega }}\subset {\overline{\Omega }}\setminus {{\mathcal {K}}}\). We consider the game in Sect. 1.3 starting from x. Suppose that at the k-th step Player I chooses \(v_k\) such that \(\left\langle v_k, y_{k-1}-x\right\rangle =0\). Then no matter which \(b_k=\pm 1\) is picked by Player II, we have

$$\begin{aligned} |y_k-x|^2= \left| y_{k-1}-x-\varepsilon ^2\zeta _\varepsilon (y_{k-1})\nu (y_{k-1})\right| ^2+2\varepsilon ^2 \eta _\varepsilon ^2 (y_{k-1})+o(\varepsilon ^2), \end{aligned}$$

which implies that

$$\begin{aligned} |y_k-x|^2\le |y_{k-1}-x|^2+2\varepsilon ^2 \zeta _\varepsilon (y_{k-1})|y_{k-1}-x|+2\varepsilon ^2 \eta _\varepsilon ^2(y_{k-1})+o(\varepsilon ^2). \end{aligned}$$
(2.16)

It follows that for \(\varepsilon >0\) sufficiently small,

$$\begin{aligned} |y_k-x|^2\le |y_{k-1}-x|^2+3\varepsilon ^2, \quad \text {when } |y_{k-1}-x|\le 1. \end{aligned}$$
(2.17)

We thus have

$$\begin{aligned} |y_k-x|^2\le 3k\varepsilon ^2 \end{aligned}$$

provided that \(|y_j-x|\le 1\) for all \(j=1, 2, \ldots , k-1\). The estimate above amounts to saying that the game position \(y_k\) stays in \(B_1(x)\cap {\overline{\Omega }}\) for all k fulfilling \(3k\varepsilon ^2\le 1\). By definition of \(u^\varepsilon \) and the choice of x, we have

$$\begin{aligned} u^\varepsilon (x, t)\le M, \quad \text { if }t\le 1/3. \end{aligned}$$
(2.18)

On the other hand, Player II can take \(b_k=\pm 1\) satisfying

$$\begin{aligned} b_k\left\langle v_k, y_{k-1}-x-\varepsilon ^2\zeta _\varepsilon (y_{k-1})\nu (y_{k-1})\right\rangle \le 0 \end{aligned}$$

so that (2.16) and (2.17) hold for any choices of Player I. This time we have

$$\begin{aligned} u^\varepsilon (x, t)\ge M, \quad \hbox { if}\ t\le 1/3. \end{aligned}$$
(2.19)

Combining (2.18) and (2.19), we deduce that \(u^\varepsilon (x, t)=M\) for all \(t\le 1/3\) and \(x\in {\overline{\Omega }}\) satisfying \(\mathrm{dist\,}(x, {{\mathcal {K}}})>1\).

We may iterate the argument above to show that for any fixed \(T>0\) and any \(\varepsilon >0\) small, \(u^\varepsilon (x, t)=M\) for all \(t\le T\) and \(x\in {\overline{\Omega }}\setminus {{\mathcal {K}}}_T\), where \({{\mathcal {K}}}_T=\{x\in {\overline{\Omega }}: \mathrm{dist\,}(x, {{\mathcal {K}}})\le 3T\}\). The estimate (2.15) is an immediate consequence. \(\square \)
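The growth computation in the proof above can be illustrated by a toy simulation. The sketch below is a simplified interior version (taking \(\zeta _\varepsilon \equiv 0\), \(\eta _\varepsilon \equiv 1\) and \(f\equiv 0\), assumptions made only for illustration): when Player I always moves orthogonally to \(y_{k-1}-x\), the squared distance grows by exactly \(2\varepsilon ^2\) per step regardless of Player II's sign choice, matching the \(3k\varepsilon ^2\) bound.

```python
import math
import random

def simulate(eps=0.05, steps=None, x=(0.0, 0.0)):
    """Simplified interior game dynamics (zeta_eps = 0, eta_eps = 1, f = 0):
    Player I picks v_k orthogonal to y_{k-1} - x, so the step
    sqrt(2) * eps * b_k * v_k enlarges |y_k - x|^2 by exactly 2 * eps^2
    no matter which b_k = +-1 Player II chooses."""
    if steps is None:
        steps = int(1.0 / (3.0 * eps**2))  # number of steps with 3 k eps^2 <= 1
    y = list(x)
    for k in range(1, steps + 1):
        d = (y[0] - x[0], y[1] - x[1])
        r = math.hypot(*d)
        if r == 0.0:
            v = (1.0, 0.0)              # any unit vector works at the start
        else:
            v = (-d[1] / r, d[0] / r)   # unit vector orthogonal to y - x
        b = random.choice((-1.0, 1.0))  # Player II's choice is irrelevant here
        y[0] += math.sqrt(2.0) * eps * b * v[0]
        y[1] += math.sqrt(2.0) * eps * b * v[1]
        # the analogue of (2.17): |y_k - x|^2 grows by 2 eps^2 per step
        assert abs((y[0] - x[0])**2 + (y[1] - x[1])**2 - 2 * k * eps**2) < 1e-9
    return (y[0] - x[0])**2 + (y[1] - x[1])**2

print(simulate() <= 1.0)  # True: the position stays in B_1(x) while 3 k eps^2 <= 1
```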

3 General dynamic boundary conditions

Let us discuss several possible choices of H other than that in (1.5).

3.1 Concave boundary Hamiltonians

As already mentioned in Remark 1.2, one may also construct an inverse game for (1.1) with the dynamic boundary condition (1.17). Let us mention that there is another way to build games for (1.1) and (1.17). Instead of switching \(\min _{v, a}\max _{b}\) to \(\max _{v, a}\min _{b}\) as in the inverse game, we may keep the original order but move the set A to the control of Player II. More precisely, the game rules are as follows.

At the k-th step (\(k=1, 2, \ldots , N=[t/\varepsilon ^2]\)),

  • Player I chooses a unit vector \(v_k\in {{\mathbb {R}}}^2\);

  • Player II sees the choice of Player I and then picks \(b_k=\pm 1\) and \(a_k\in A\);

  • The game position is moved from \(y_{k-1}\) to

    $$\begin{aligned} y_k=y_{k-1}+\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1}) b_kv_k-\varepsilon ^2 \zeta _\varepsilon (y_{k-1})\left( \nu (y_{k-1})-f(y_{k-1}, a_k)\right) , \end{aligned}$$

    where \(0\le \eta _\varepsilon ,\ \zeta _\varepsilon \le 1\) are the functions given in (1.10) and (1.11).

The value function is defined to be

$$\begin{aligned} u^\varepsilon (x, t)=\min _{v_1}\max _{a_1, b_1} \min _{v_2}\max _{a_2, b_2}\cdots \min _{v_N}\max _{a_N, b_N} u_0(y_N) \end{aligned}$$
(3.1)

for any \((x, t)\in {\overline{\Omega }}\times [0, \infty )\), which clearly yields the following new DPP:

$$\begin{aligned} u^\varepsilon (x, t)=\min _{|v|=1}\max _{\begin{array}{c} a\in A\\ b=\pm 1 \end{array}} u^\varepsilon \left( x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) , t-\varepsilon ^2\right) .\nonumber \\ \end{aligned}$$
(3.2)
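The one-step operator in (3.2) can be sketched numerically as follows (a sketch only: the caller supplies \(\eta _\varepsilon \), \(\zeta _\varepsilon \), \(\nu \), f and a finite control set A as assumptions, and the unit circle of directions is discretized):

```python
import math

def dpp_step(u, x, eps, eta, zeta, nu, f, A, n_dirs=64):
    """One-step operator of the DPP (3.2): min over unit vectors v,
    max over b = +-1 and a in A of
    u(x + sqrt(2) * eps * eta(x) * b * v - eps^2 * zeta(x) * (nu(x) - f(x, a))).
    eta, zeta, nu, f and the finite set A are caller-supplied assumptions."""
    best = math.inf
    for i in range(n_dirs):
        th = math.pi * i / n_dirs  # half circle suffices: v and -v coincide after max over b
        v = (math.cos(th), math.sin(th))
        worst = -math.inf
        for b in (1.0, -1.0):
            for a in A:
                nx, fx = nu(x), f(x, a)
                y = (x[0] + math.sqrt(2) * eps * eta(x) * b * v[0]
                     - eps**2 * zeta(x) * (nx[0] - fx[0]),
                     x[1] + math.sqrt(2) * eps * eta(x) * b * v[1]
                     - eps**2 * zeta(x) * (nx[1] - fx[1]))
                worst = max(worst, u(y))
        best = min(best, worst)
    return best
```

Iterating this operator \(N=[t/\varepsilon ^2]\) times starting from \(u_0\) reproduces the value function (3.1) in principle, though without a grid-and-memoization scheme the cost of an exact evaluation grows exponentially in N.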

One can follow the proof in Sect. 2.2 to show the following.

Theorem 3.1

(Game approximation with concave dynamic boundary conditions) Assume (A1) and (A2). Assume that \(u_0\) is bounded and Lipschitz continuous in \({\overline{\Omega }}\). For any \(\varepsilon >0\) small, let \(u^\varepsilon \) be the value function defined as in (3.1). Let \({\overline{u}}\) and \({\underline{u}}\) be the relaxed limits of \(u^\varepsilon \) as in (1.14). Then \({\overline{u}}\) and \({\underline{u}}\) are respectively a subsolution and a supersolution of (1.1)–(1.2) with H given by (1.17). Moreover, \({\overline{u}}\) and \({\underline{u}}\) satisfy (1.15). In addition, if the comparison principle for (CF) holds, then \(u^\varepsilon \rightarrow u\) locally uniformly in \({\overline{\Omega }}\times [0, \infty )\) as \(\varepsilon \rightarrow 0\), where u is the unique viscosity solution of (CF) with (1.17).

Example 3.2

(Game with a boundary driving force, case 2) By applying Theorem 3.1 with (2.14), we obtain a game interpretation for the curvature flow with a boundary condition symmetric to (1.9), that is,

$$\begin{aligned} u_t+\left\langle \nabla u, \nu \right\rangle -K|\nabla u|=0 \quad \hbox { on}\ \partial \Omega \times (0, \infty ) \end{aligned}$$

with \(0\le K<1\).

3.2 General boundary conditions

It is possible to generalize the game interpretation for even more general nonlinear dynamic boundary conditions such as (1.2) with

$$\begin{aligned} H(x, p)=\max _{\alpha \in {{\mathcal {A}}}}\min _{\beta \in {{\mathcal {B}}}} \left\{ \left\langle p , \gamma _{\alpha \beta }(x) -f(x, \alpha , \beta )\right\rangle -g(x, \alpha , \beta )\right\} , \end{aligned}$$
(3.3)

where \({{\mathcal {A}}}, {{\mathcal {B}}}\) are compact metric spaces, \(\gamma _{\alpha \beta }\) denotes a general outward unit oblique normal depending on \(\alpha \in {{\mathcal {A}}}\) and \(\beta \in {{\mathcal {B}}}\), and \(f, g: {\overline{\Omega }}\times {{\mathcal {A}}}\times {{\mathcal {B}}}\rightarrow {{\mathbb {R}}}\) are assumed to be bounded and Lipschitz with respect to x uniformly for \(\alpha \in {{\mathcal {A}}}\) and \(\beta \in {{\mathcal {B}}}\). Concerning \(\gamma _{\alpha \beta }\), we assume that \(\gamma _{\alpha \beta }\) can be extended to a \(C^2\) function in a neighborhood of \(\partial \Omega \) and

$$\begin{aligned} \inf _{(x, \alpha , \beta )\in \partial \Omega \times {{\mathcal {A}}}\times {{\mathcal {B}}}}\left\langle \gamma _{\alpha \beta }(x), \nu (x)\right\rangle -\sup _{(x, \alpha , \beta ) \in \partial \Omega \times {{\mathcal {A}}}\times {{\mathcal {B}}}} |f(x, \alpha , \beta )|>0 \end{aligned}$$
(3.4)

such that (1.7) holds for some \(\rho >0\) in this general case as well. We remark that the condition (3.4) can actually be relaxed to

$$\begin{aligned} \inf _{(x, \alpha , \beta )\in \partial \Omega \times {{\mathcal {A}}}\times {{\mathcal {B}}}}\left\langle \gamma _{\alpha \beta }(x)-f(x, \alpha , \beta ), \nu (x)\right\rangle >0 \end{aligned}$$

without losing (1.7).

Based on the setting in Sect. 1.3, we can modify the rules as below. To be more precise, we start from \(x\in {\overline{\Omega }}\) and end the game after \(N=[t/\varepsilon ^2]\) steps. At the k-th step,

  • Player I chooses a unit vector \(v_k\in {{\mathbb {R}}}^2\) and \(\alpha _k\in {{\mathcal {A}}}\);

  • Player II sees the choice of Player I and then picks \(b_k=\pm 1\) and \(\beta _k\in {{\mathcal {B}}}\);

  • Once the choices of both players are determined, the game position is moved from \(y_{k-1}\) to

    $$\begin{aligned} y_k=y_{k-1}+\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1}) b_kv_k - \varepsilon ^2 \zeta _\varepsilon (y_{k-1}) (\gamma _{\alpha _k\beta _k}(y_{k-1})-f(y_{k-1}, \alpha _k, \beta _k)), \end{aligned}$$

    where \(0\le \eta _\varepsilon ,\ \zeta _\varepsilon \le 1\) are the functions given in (1.10) and (1.11);

  • Meanwhile, Player II receives a payment of the amount \(\varepsilon ^2\zeta _\varepsilon (y_{k-1}) g(y_{k-1}, \alpha _k, \beta _k)\) from Player I.

Suppose that Player I intends to minimize the total cost

$$\begin{aligned} J^\varepsilon (x, t)= u_0(y_N)+\sum _{k=1}^N \varepsilon ^2 \zeta _\varepsilon (y_{k-1}) g(y_{k-1}, \alpha _k, \beta _k) \end{aligned}$$

while Player II attempts to maximize it. Under the game rules above, we can define the value function

$$\begin{aligned} u^\varepsilon (x, t)=\min _{v_1, \alpha _1}\max _{b_1, \beta _1} \min _{v_2, \alpha _2}\max _{b_2, \beta _2}\cdots \min _{v_N, \alpha _N}\max _{b_N, \beta _N} J^\varepsilon (x, t), \end{aligned}$$
(3.5)

whose limit as \(\varepsilon \rightarrow 0\), if it exists, formally solves (1.1) and (1.2) with H given by (3.3). A formal derivation can again be easily given via the dynamic programming principle, which in this case reads

$$\begin{aligned} \begin{aligned} u^\varepsilon (x, t)=\min _{{\begin{array}{c} |v|=1\\ \alpha \in {{\mathcal {A}}} \end{array}}}\max _{{\begin{array}{c} b=\pm 1\\ \beta \in {{\mathcal {B}}} \end{array}}} \bigg \{u^\varepsilon \bigg (x+\sqrt{2}\varepsilon \eta _\varepsilon (x) bv&-\varepsilon ^2 \zeta _\varepsilon (x) (\gamma _{\alpha \beta }(x)-f(x, \alpha , \beta )), t-\varepsilon ^2\bigg )\\&+\varepsilon ^2 \zeta _\varepsilon (x)g(x, \alpha , \beta )\bigg \}.\end{aligned}\nonumber \\ \end{aligned}$$
(3.6)

Indeed, following the argument in Sect. 1.4, we easily see that the interior part of the Taylor expansion of (3.6) yields the same equation (1.1), while the game dynamics near the boundary are governed by the terms carrying \(\zeta _\varepsilon \), which give rise to

$$\begin{aligned} 0=\min _{\alpha \in {{\mathcal {A}}}}\max _{\beta \in {{\mathcal {B}}}} \varepsilon ^2\zeta _\varepsilon (x)\left\{ \left\langle \nabla u^\varepsilon (x, t), -\gamma _{\alpha \beta }(x)+f(x, \alpha , \beta )\right\rangle +g(x, \alpha , \beta )-u^\varepsilon _t(x, t)\right\} . \end{aligned}$$

Then (1.2) with (3.3) follows immediately.
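The game rules above can be sketched as a single play (again a sketch under assumptions: the strategy interfaces and all model functions \(\eta _\varepsilon \), \(\zeta _\varepsilon \), \(\gamma _{\alpha \beta }\), f, g below are supplied by the caller, not prescribed by the paper):

```python
import math

def play(x, t, eps, u0, eta, zeta, gamma, f, g, strat_I, strat_II):
    """One play of the general boundary game: returns the total cost
    J^eps(x, t) = u0(y_N) + sum_k eps^2 * zeta(y_{k-1}) * g(y_{k-1}, a_k, b_k).
    strat_I(y) -> (v, alpha) and strat_II(y, v, alpha) -> (b, beta) are
    caller-supplied strategies; eta, zeta, gamma, f, g are model functions."""
    N = round(t / eps**2)  # number of steps, [t / eps^2]
    y, cost = tuple(x), 0.0
    for _ in range(N):
        v, alpha = strat_I(y)
        b, beta = strat_II(y, v, alpha)
        ga, fv = gamma(y, alpha, beta), f(y, alpha, beta)
        cost += eps**2 * zeta(y) * g(y, alpha, beta)  # payment to Player II
        y = tuple(y[i] + math.sqrt(2) * eps * eta(y) * b * v[i]
                  - eps**2 * zeta(y) * (ga[i] - fv[i]) for i in range(2))
    return u0(y) + cost
```

The value function (3.5) is then the min-max of this cost over all strategy pairs; the sketch only runs one fixed pair.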

Theorem 3.3

(Game approximation for general dynamic boundary condition) Assume (A1). Assume that \(u_0\) is bounded and Lipschitz continuous in \({\overline{\Omega }}\). Assume that (3.4) holds. For any \(\varepsilon >0\) small, let \(u^\varepsilon \) be the value function associated to the game above, defined as in (3.5). Let \({\overline{u}}\) and \({\underline{u}}\) be the relaxed limits of \(u^\varepsilon \) as in (1.14). Then \({\overline{u}}\) and \({\underline{u}}\) are respectively a subsolution and a supersolution of (1.1)–(1.2) with H given by (3.3). Moreover, \({\overline{u}}\) and \({\underline{u}}\) satisfy (1.15). In addition, if the comparison principle holds, then \(u^\varepsilon \rightarrow u\) locally uniformly in \({\overline{\Omega }}\times [0, \infty )\) as \(\varepsilon \rightarrow 0\), where u is the unique viscosity solution of (CF) with (3.3).

The interior game dynamics, including the choice of space dimension, can be generalized to a larger class of curvature flows. We refer to [28] for more details.

4 Convexity preserving property

The convexity preserving property is an important feature of motion by curvature. In this section, we discuss this property for the associated dynamic boundary problem (CF) by using the game-theoretic approach of [34]. Although our method can be generalized to more general boundary conditions as described in Sect. 3.2, we focus on the case when H is given by (1.5). We first need to relax the notion of convexity of sets, since we do not assume \(\Omega \) to be convex.

Definition 4.1

(Relative convexity) A closed set \(E\subset {\overline{\Omega }}\) is said to be convex relative to \(\Omega \) if for any \(x, y\in E\cap \Omega \), we have \((x+y)/2\in E\) provided that \(\sigma x+(1-\sigma )y\in \Omega \) for all \(0\le \sigma \le 1\).

Theorem 4.2

(Convexity preserving) Suppose that (A1) and (A2) hold. Assume that \(u_0\) is Lipschitz and convex in \({\overline{\Omega }}\). Assume that \(u_0\) satisfies

$$\begin{aligned} \lim _{\delta \rightarrow 0} \mathop {\mathrm {ess\ sup}}_{x\in {\mathcal {N}}_\delta } H(x, \nabla u_0(x))\le 0,\end{aligned}$$
(4.1)

where, for any \(\delta >0\),

$$\begin{aligned} {{\mathcal {N}}}_\delta :=\{x \in {\overline{\Omega }}: \mathrm{dist\,}(x, \partial \Omega ) < \delta \}. \end{aligned}$$

Let \(u^\varepsilon \) be the value function defined as in (1.12) and \({\overline{u}}, {\underline{u}}\) be the relaxed limits as in (1.14). Then for any \(t\ge 0\) and any \(x, y\in \Omega \),

$$\begin{aligned} {\underline{u}}\left( {x+y\over 2}, t\right) \le \max \{{\overline{u}}(x, t),\ {\overline{u}}(y, t)\} \end{aligned}$$

provided that \(\sigma x+(1-\sigma )y\in \Omega \) for all \(0\le \sigma \le 1\). In particular, if the comparison principle for (CF) holds, then for each \(c\in {{\mathbb {R}}}\), the sub-level set

$$\begin{aligned} E_c^t:=\{x\in {\overline{\Omega }}: u(x, t)\le c\} \end{aligned}$$

of the unique solution u of (CF) is convex relative to \(\Omega \) for all \(t\ge 0\).

We prove this theorem by investigating the convexity of sub-level sets of the value function \(u^\varepsilon \) constructed in (1.12). Let us first show the monotonicity of \(t\mapsto u^\varepsilon (x, t)\) for any \(x\in {\overline{\Omega }}\) and \(\varepsilon >0\).

Lemma 4.3

(Monotonicity in time) Suppose that (A1) and (A2) hold. Assume that \(u_0\) is Lipschitz and convex in \({\overline{\Omega }}\) and satisfies (4.1). Let \(u^\varepsilon \) be the value function in (1.12). Then for all \(t\ge s\ge 0\),

$$\begin{aligned} u^\varepsilon (x, s)\le u^\varepsilon (x, t)+(t-s)\omega (\varepsilon ) \quad \text { for all } x\in {\overline{\Omega }}\text { and }\varepsilon >0, \end{aligned}$$
(4.2)

where \(\omega \) is a modulus of continuity.

Proof

Let us fix \(\varepsilon >0\). Since \(u_0\) is convex, for any \(x, y\in {\overline{\Omega }}\) and unit vector \(v \in {{\mathbb {R}}}^2\), we have

$$\begin{aligned} \max _{b=\pm 1}u_0\left( y+\sqrt{2}\varepsilon \eta _\varepsilon (x)bv\right) \ge {1\over 2}u_0(y+\sqrt{2}\varepsilon \eta _\varepsilon (x) v)+{1\over 2}u_0(y-\sqrt{2}\varepsilon \eta _\varepsilon (x) v)\ge u_0(y).\nonumber \\ \end{aligned}$$
(4.3)

Moreover, we claim that

$$\begin{aligned} \min _{a\in A} u_0\left( x-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) \right) \ge u_0(x)-\varepsilon ^2\omega _0(\varepsilon )\quad \text { for all } x\in {\overline{\Omega }}, \end{aligned}$$
(4.4)

where \(\omega _0\) is a modulus of continuity. To prove this claim, we use the compatibility condition (4.1); in fact, by (4.1), we obtain a modulus of continuity \(\omega _1\) such that

$$\begin{aligned} \mathop {\mathrm {ess\,sup\,}}\limits _{x\in {{\mathcal {N}}}_\delta } H(x, \nabla u_0(x))\le \omega _1(\delta )\end{aligned}$$

for any \(\delta >0\). Using the Lipschitz continuity and convexity of \(u_0\), we have, for almost every \(x\in {\overline{\Omega }}\),

$$\begin{aligned} \begin{aligned}&\min _{a\in A} \ u_{0}\left( x-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) \right) -u_0(x)\\&\quad \ge -\varepsilon ^2\zeta _\varepsilon (x) \max _{a\in A} \left\langle \nabla u_{0}(x), \nu (x)-f(x, a)\right\rangle \\&\quad = -\varepsilon ^2 \zeta _\varepsilon (x) H(x, \nabla u_{0}(x))\ge -\varepsilon ^2 \omega _1\left( \sqrt{2}\varepsilon \right) . \end{aligned} \end{aligned}$$

Due to the continuity of \(u_0(x)\), \(\zeta _\varepsilon (x)\), \(\nu (x)\) and \(f(x, a)\) in x, we obtain the estimate (4.4) with \(\omega _0(s)=\omega _1\left( \sqrt{2}s\right) \) for any \(s\ge 0\).

Combining (4.4) and (4.3) with \(y=x-\varepsilon ^2 \zeta _\varepsilon (x)\left( \nu (x)-f(x, a)\right) \), we are led to

$$\begin{aligned} u^\varepsilon (x, \varepsilon ^2)\ge u_0(x)-\varepsilon ^2\omega _0(\varepsilon )\quad \text { for all }x\in {\overline{\Omega }}. \end{aligned}$$

It follows from (1.13) that for all \(x\in {\overline{\Omega }}\) we have

$$\begin{aligned} u^\varepsilon (x, 2\varepsilon ^2)\ge u^\varepsilon (x, \varepsilon ^2)-\varepsilon ^2\omega _0(\varepsilon )\ge u_0(x)-2\varepsilon ^2 \omega _0(\varepsilon ). \end{aligned}$$

Iterating this estimate, we obtain

$$\begin{aligned} u^\varepsilon (x, \tau )\ge u_0(x)-\tau \omega _0(\varepsilon ) \quad \text { for all }x\in {\overline{\Omega }}\text { and }\tau \ge 0, \end{aligned}$$

which, by (1.13) again, yields

$$\begin{aligned} u^\varepsilon (x, t+\tau )\ge u^\varepsilon (x, t)-\tau \omega _0(\varepsilon ). \end{aligned}$$

We thus have completed the proof of (4.2). \(\square \)
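The one-step inequality (4.3) at the heart of the proof above can be checked numerically for a sample convex function (the specific \(u_0\) below is an assumption chosen for illustration only, not the paper's data):

```python
import math
import random

def max_over_b(u0, y, eps, eta, v):
    """Left-hand side of (4.3): Player II's best choice of the sign b."""
    step = (math.sqrt(2) * eps * eta * v[0], math.sqrt(2) * eps * eta * v[1])
    return max(u0((y[0] + step[0], y[1] + step[1])),
               u0((y[0] - step[0], y[1] - step[1])))

# A convex sample u0: a shifted distance function plus a linear term.
u0 = lambda y: math.hypot(y[0] - 0.3, y[1] + 0.1) + 0.5 * y[0]

random.seed(0)
for _ in range(1000):
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    th = random.uniform(0, 2 * math.pi)
    v = (math.cos(th), math.sin(th))
    # max over b dominates the average of the two moves, hence u0(y), by convexity
    assert max_over_b(u0, y, 0.05, random.uniform(0, 1), v) >= u0(y) - 1e-12
print("ok")
```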

Proof of Theorem 4.2

Fix \(t\ge 0\) arbitrarily. We take \(x, y\in \Omega \) such that \(\sigma x+(1-\sigma )y\in \Omega \) for all \(\sigma \in [0, 1]\). Set \(c:=\max \{ {\overline{u}}(x, t), \ {\overline{u}}(y, t)\}\). We aim to show that \({\underline{u}}({(x+y)/ 2}, t)\le c\). By definition of \({\overline{u}}\), for any \(\delta >0\), we can take \(\varepsilon >0\) small such that

$$\begin{aligned} \max \{u^{\varepsilon }(x', t),\ u^{\varepsilon }(y', t)\}\le c+\delta \quad \text {for all }x'\in B_{2\varepsilon }(x)\text { and }y'\in B_{2\varepsilon }(y). \end{aligned}$$

We may then take \(\varepsilon >0\) even smaller to get

$$\begin{aligned} \max \{u^{\varepsilon }(x', s),\ u^{\varepsilon }(y', s)\}\le c+2\delta \quad \text { for all }x'\in B_{2\varepsilon }(x)\text { and }y'\in B_{2\varepsilon }(y) \end{aligned}$$
(4.5)

for all \(s\le t\), thanks to Lemma 4.3. In particular, we have \(u_0(x), u_0(y)\le c+2\delta \), which implies that

$$\begin{aligned} u_0(\sigma x+(1-\sigma )y)\le c+2\delta \end{aligned}$$
(4.6)

by convexity of \(u_0\) in \({\overline{\Omega }}\).

The relation (4.5) means that for any \(s\le t\), there exist minimizing strategies of Player I such that, for the game starting from \(x'\) or \(y'\), the game outcome is bounded from above by \(c+2\delta \) regardless of the choices of Player II.

We next consider the game started from \((x+y)/2\). Player I may keep choosing \(v=(x-y)/|x-y|\). Taking \(\varepsilon >0\) smaller if necessary, we can keep the game position on the line segment between x and y before it enters \(B_{2\varepsilon }(x)\) or \(B_{2\varepsilon }(y)\). In fact, \(\zeta _\varepsilon =0\) holds during such moves when \(\varepsilon >0\) is small, since the line segment between x and y is contained in \(\Omega \).

After arriving at \(B_{2\varepsilon }(x)\) or \(B_{2\varepsilon }(y)\), Player I may switch to the minimizing strategy implied by (4.5) to guarantee an outcome below \(c+2\delta \).

Player II may certainly choose to let the game position wander away from the neighborhoods of x and y. In this case the final position \(y_N\) must still stay on the line segment between x and y and therefore by (4.6) the game outcome is again no more than \(c+2\delta \).

Since the estimate above holds for a fixed strategy of Player I, we get

$$\begin{aligned} u^{\varepsilon }\left( {x+y\over 2}, t\right) \le c+2\delta . \end{aligned}$$

We conclude the proof by passing to the limit as \(\varepsilon \rightarrow 0\) to get

$$\begin{aligned} {\underline{u}}\left( {x+y\over 2}, t\right) \le c+2\delta \end{aligned}$$

and then sending \(\delta \rightarrow 0\). \(\square \)

Remark 4.4

Our method above is similar to [34, Theorem 5.4], where the convexity preserving property is discussed for the Neumann problem. It is worth mentioning that a more precise estimate for \(u^\varepsilon \) such as

$$\begin{aligned} u^\varepsilon \left( {x+y\over 2}, t\right) \le \max \{u^\varepsilon (x, t),\ u^\varepsilon (y, t)\}+O(\varepsilon )\quad \text { for all }\varepsilon >0\text { small} \end{aligned}$$

can be obtained for the Cauchy problem by using the Lipschitz continuous dependence of game position on the initial position [34, Theorem 4.2]. We however do not know whether similar estimates hold for \(u^\varepsilon \) in the current case, since the game dynamics for boundary value problems are much more complicated.

Remark 4.5

The compatibility condition (4.1) on \(u_0\) roughly means that all level sets of solutions of (CF) keep moving forward near \(\partial \Omega \). It is not clear to us whether this condition is necessary to guarantee the convexity of evolving level sets for our dynamic boundary problem. For the convexity preserving property for curvature flows with the Neumann boundary condition, such a condition cannot be removed, as indicated in [34, Example 5.7].

5 Application to the fattening phenomenon

For the level set curvature flow equation, fattening of the zero level set is said to occur if the level set \(\{x\in {{\mathbb {R}}}^n: u(x, t)=0\}\) contains interior points for some \(t>0\) while \(\{x\in {{\mathbb {R}}}^n: u_0(x)=0\}\) does not. For more details on this issue, we refer the reader to [7, 15, 20, 42] for the Cauchy problem and [5, 19] for the Neumann boundary problem. A game-based interpretation of fattening for the Cauchy problem is presented in [33]. We next use the game-theoretic argument to show the occurrence of such behavior for the dynamic boundary problem (CF).

5.1 An example of instant fattening

We begin with an example where the fattening phenomenon takes place instantly. We choose a particular \(u_0\) whose zero level set is tangent to the boundary. It can be viewed as an adaptation to our dynamic boundary problem of the well-known example with so-called figure-eight type initial values [15].

Let \(\Omega \) be the half plane, i.e.,

$$\begin{aligned} \Omega =\{x=(x_1, x_2)\in {{\mathbb {R}}}^2: x_2>0\}. \end{aligned}$$
(5.1)

For convenience of notation later, let us also denote \(e_1=(1, 0)\) and \(e_2=(0, 1)\). The outward unit normal on \(\partial \Omega \) is then \(\nu =-e_2\).

Fix \(R>0\). Let \(z=(0, R)\) and \(E=\overline{B_R(z)}\). We take a triangular region Q contained in E and symmetric about the \(x_2\)-axis, i.e.,

$$\begin{aligned} Q:=\{(x_1, x_2)\in \Omega : L|x_1|\le x_2\le L\mu \}\subset E \end{aligned}$$
(5.2)

for some \(L>1\) and \(\mu >0\). For \(M>0\) large, we take

$$\begin{aligned} u_0(x)=\max \{\min \{{\text {sd}}(x, \partial E), \ M\}, \ -M\}\quad \text { for } x\in {\overline{\Omega }}. \end{aligned}$$
(5.3)

The truncation of the signed distance function of E at \(\pm M\) is not essential to the example; it only ensures the boundedness of \(u_0\). Under these conditions, we can show that the zero level set of the solution u to (CF) with the linear dynamic boundary condition (when H is given by (2.1)) develops interior points near the origin. A more precise description of our result is as follows.
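For concreteness, the initial value (5.3) with the circular set E above can be written down directly. The following Python sketch assumes the (standard, but here hypothetical) convention that \({\text {sd}}(\cdot , \partial E)\) is positive inside E and negative outside, and uses illustrative values of R and M:

```python
import math

def u0(x, z=(0.0, 1.0), R=1.0, M=10.0):
    # truncated signed distance to the circle bd(E) = bd(B_R(z)), as in (5.3);
    # convention (an assumption of this sketch): sd > 0 inside E, sd < 0 outside
    sd = R - math.hypot(x[0] - z[0], x[1] - z[1])
    return max(min(sd, M), -M)

print(u0((0.0, 0.0)))   # the origin lies on bd(E), so u0 vanishes there: 0.0
```

Note that the zero level set of \(u_0\) is exactly \(\partial E\), which is tangent to \(\partial \Omega \) at the origin.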

Theorem 5.1

(Fattening for curvature flow with dynamic boundary condition) Suppose that \(\Omega \) is the half plane given by (5.1). Let \(z=(0, R)\) and \(E=\overline{B_R(z)}\) for \(R>0\). Let Q be given by (5.2) with \(\mu >0\) and \(L>1\). Let \(u_0\) be defined as in (5.3) with \(M>0\) large and let u be the unique solution of (CF) with H given in (2.1). Then for any \(r>0\) small, there exist \(\tau _2> \tau _1>0\) and an open subset \({{\mathcal {O}}}\) of \(B_r\cap Q\) such that \(u(x, t)=0\) for all \(x\in {{\mathcal {O}}}\) and \(\tau _1\le t\le \tau _2\). Here \(\tau _1\rightarrow 0\) as \(r\rightarrow 0\) and \(\tau _2>0\) does not depend on r but only on \(\mu \) and L.

Proof

We consider the game described in Sect. 1.3 satisfying (2.14) with \(K=0\). Suppose that the game starts at \(x\in B_r\) for some \(0<r<R\). Let

$$\begin{aligned} S_\varepsilon ={{\mathbb {R}}}\times [0, \sqrt{2}\varepsilon ]. \end{aligned}$$
(5.4)

We divide the rest of our proof into several steps.

1. A strategy for Player I

Since Player I aims to minimize the final value \(u_0(y_N)\) with \(N=[t/\varepsilon ^2]\), he may take the following strategy at the k-th step for all \(k=1, 2, \ldots , N\) (Fig. 1):

  1. (1)

    if \(y_{k-1}\in S_\varepsilon \), then Player I chooses \(v_k=e_1\);

  2. (2)

    if \(y_{k-1}\in {\overline{\Omega }}\setminus S_\varepsilon \), then he takes a unit vector \(v_k\) tangent at \(y_{k-1}\) to the circle centered at z, i.e., \(v_k\) satisfies

    $$\begin{aligned} \left\langle v_k, y_{k-1}-z \right\rangle =0. \end{aligned}$$

Regardless of the choices of Player II, the strategy above yields the following two assertions about \(y_k\).

  • Once \(y_k\in S_\varepsilon \) for some \(k=1, 2, \ldots , N\), then, despite the effect of the normal vector \(\nu \) in the game, the trajectory never leaves the region \(S_\varepsilon \), and consequently \(y_N\in S_\varepsilon \).

  • If \(y_k\in {\overline{\Omega }}\setminus S_\varepsilon \) for all \(k=1, 2, \ldots , N\), then

    $$\begin{aligned} |y_N-z|^2= |x-z|^2+2\varepsilon ^2 N\ge (R-r)^2+2\varepsilon ^2 N. \end{aligned}$$
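The identity behind the second assertion can be checked directly: a tangential move of length \(\sqrt{2}\varepsilon \) increases the squared distance to z by exactly \(2\varepsilon ^2\), whichever sign \(b_k\) Player II picks. A minimal Python sketch, assuming interior moves only (so \(\eta _\varepsilon =1\) and \(\zeta _\varepsilon =0\)):

```python
import math

def tangent_step(y, z, eps, b):
    # unit vector v tangent at y to the circle centered at z, i.e. <v, y - z> = 0
    d = (y[0] - z[0], y[1] - z[1])
    r = math.hypot(*d)
    v = (-d[1] / r, d[0] / r)
    # interior game move y_k = y_{k-1} + sqrt(2) * eps * b * v
    return (y[0] + math.sqrt(2) * eps * b * v[0],
            y[1] + math.sqrt(2) * eps * b * v[1])

z, y, eps = (0.0, 1.0), (0.3, 0.4), 1e-3   # illustrative positions and step size
for b in (+1, -1):                          # Player II's choice does not matter
    yk = tangent_step(y, z, eps, b)
    gain = ((yk[0] - z[0])**2 + (yk[1] - z[1])**2
            - (y[0] - z[0])**2 - (y[1] - z[1])**2)
    assert abs(gain - 2 * eps**2) < 1e-12   # |y_k - z|^2 grows by exactly 2 eps^2
```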

By definition of \(u^\varepsilon \), it follows that either

$$\begin{aligned} u^\varepsilon (x, t)\le \sqrt{2}\varepsilon \end{aligned}$$

or

$$\begin{aligned} u^\varepsilon (x, t)\le R-\sqrt{(R-r)^2+2t}. \end{aligned}$$

In particular, we have

$$\begin{aligned} u^\varepsilon (x, t)\le \sqrt{2}\varepsilon \quad \text {for } x\in B_r \text { and } t\ge {1\over 2}R^2-{1\over 2}(R-r)^2. \end{aligned}$$
(5.5)
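The threshold in (5.5) is simple arithmetic: once \(t\ge {1\over 2}R^2-{1\over 2}(R-r)^2\), we get \((R-r)^2+2t\ge R^2\), so the second bound is nonpositive. A quick numerical check with illustrative values of R and r:

```python
import math

R, r = 1.0, 0.2                        # illustrative values with 0 < r < R
t0 = 0.5 * R**2 - 0.5 * (R - r)**2     # threshold time in (5.5)
for t in (t0, t0 + 0.1, t0 + 1.0):
    # for t >= t0, (R - r)^2 + 2t >= R^2, hence the bound is <= 0
    bound = R - math.sqrt((R - r)**2 + 2 * t)
    assert bound <= 1e-12
```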

2. A strategy for Player II

On the other hand, we may also consider a special strategy in favor of Player II. However, in this case we need to further restrict the game starting point x. We are interested in the first exit time

$$\begin{aligned} \tau _\varepsilon (x):=\min \left\{ k\varepsilon ^2: y_k\in {\overline{\Omega }}\setminus E_\varepsilon \right\} , \end{aligned}$$

where \(E_\varepsilon \) is given by

$$\begin{aligned} E_\varepsilon =\{x\in {\overline{\Omega }}: \mathrm{dist\,}(x, E)\le \sqrt{2}\varepsilon \}. \end{aligned}$$

Let \(L'=2L\) and

$$\begin{aligned} Q'=\left\{ (x_1, x_2)\in \Omega : L'|x_1|-L'\sqrt{2}\varepsilon \le x_2 \le {L\mu \over 2}\right\} . \end{aligned}$$

We next take an open ball \(B_{r_0}(z_0)\subset B_r\cap Q'\cap Q\), where \(z_0\) is on the \(x_2\)-axis and \(r_0<\min \{r, \mu /4\}\). The game starts at an arbitrary point \(x\in B_{r_0}(z_0)\).

Suppose that Player II adopts the following strategy at the k-th step for all \(k=1, 2, \ldots , N\) (Fig. 2): for any unit vector \(v_k\) chosen by Player I,

  1. (1)

    if \(y_{k-1}\in Q'\), then Player II chooses \(b_k=\pm 1\) such that

    $$\begin{aligned} b_k\left\langle v_k, \xi \right\rangle \le 0, \end{aligned}$$

    where the vector \(\xi \in {{\mathbb {R}}}^2\) is given by

    $$\begin{aligned} \xi ={\left\{ \begin{array}{ll} (L', -1) &{} \text {if }\left\langle y_{k-1}, e_1\right\rangle \ge 0,\\ (-L', -1) &{} \text {if }\left\langle y_{k-1}, e_1\right\rangle \le 0; \end{array}\right. } \end{aligned}$$
  2. (2)

    if \(y_{k-1}\in Q\setminus Q'\), then Player II takes \(b_k=\pm 1\) such that

    $$\begin{aligned} b_k\left\langle v_k, y_{k-1}-z' \right\rangle \le 0, \end{aligned}$$

    where \(z'=(0, L'\mu /4)\in \partial Q'\). Then

    $$\begin{aligned} y_k=y_{k-1}+\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1}) b_kv_k+\varepsilon ^2 \zeta _\varepsilon (y_{k-1})e_2 \end{aligned}$$

    satisfies

    $$\begin{aligned} \begin{aligned} |y_k-z'|^2&\le |y_{k-1}-z'|^2+2\varepsilon ^2\eta _\varepsilon ^2(y_{k-1})+2\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1}) b_k\left\langle v_k, y_{k-1}-z'\right\rangle \\&\quad +2\varepsilon ^2 \zeta _\varepsilon (y_{k-1}) \left\langle e_2, y_{k-1}-z'\right\rangle +\zeta _\varepsilon (y_{k-1}) o(\varepsilon ^2) \\&\le |y_{k-1}-z'|^2+2\varepsilon ^2 \end{aligned} \end{aligned}$$
    (5.6)

    when \(\varepsilon >0\) is small.

Adopting this strategy in the game, we may observe the following consequences:

  • If \(y_{k-1}\in Q'\), then the game position \(y_k\) remains in \(Q'\), unless the move crosses the top side of \(Q'\), in which case \(y_k\) enters the set

    $$\begin{aligned} {{\mathcal {N}}}=\left\{ y\in \Omega \setminus Q': |y-x|< \sqrt{2}\varepsilon \text { for some }x\in [-{\mu / 4}, {\mu /4}]\times \{L'\mu /4\}\right\} . \end{aligned}$$
  • If \(y_k\) exits \(Q'\) at some step k and never returns to \(Q'\) afterwards, then the first exit time \(\tau _\varepsilon (x)\) satisfies

    $$\begin{aligned} \begin{aligned} \tau _\varepsilon (x)&\ge {1\over 2}\mathrm{dist\,}^2(z', \partial Q)-{1\over 2}\left( {\mu \over 4}+\sqrt{2}\varepsilon \right) ^2\\&= {L^2\mu ^2\over 8(1+L^2)}-{\mu ^2\over 32}+O(\varepsilon )\ge {L^2\mu ^2\over 16(1+L^2)} \end{aligned} \end{aligned}$$

for any \(\varepsilon >0\) small. In fact, starting the game from any point in \({{\mathcal {N}}}\), Player II can use the concentric strategy (2) above to guarantee that the exit time \(\tau _\varepsilon \) from Q is bounded below by the right-hand side above, thanks to the estimate (5.6).
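The arithmetic in the last display can be verified numerically: \(\mathrm{dist\,}(z', \partial Q)=L\mu /(2\sqrt{1+L^2})\) is the distance from \(z'=(0, L\mu /2)\) to the line \(x_2=Lx_1\), and, dropping the \(O(\varepsilon )\) term, the final inequality reduces to \(L^2\ge 1\). A sketch over illustrative values of L and \(\mu \):

```python
import math

for L in (1.0, 1.5, 2.0, 10.0):        # the construction assumes L > 1
    for mu in (0.1, 1.0, 5.0):
        # dist(z', bd(Q)) = distance from z' = (0, L*mu/2) to the line x2 = L*x1
        dist = (L * mu / 2) / math.sqrt(1 + L**2)
        assert abs(0.5 * dist**2 - L**2 * mu**2 / (8 * (1 + L**2))) < 1e-12
        # exit-time lower bound with the O(eps) term dropped
        lhs = L**2 * mu**2 / (8 * (1 + L**2)) - mu**2 / 32
        rhs = L**2 * mu**2 / (16 * (1 + L**2))
        assert lhs >= rhs - 1e-12      # equivalent to L^2 >= 1
```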

It follows that

$$\begin{aligned} u^\varepsilon (x, t)\ge -\sqrt{2}\varepsilon \quad \text { for } x\in B_{r_0}(z_0) \text { and } t\le {L^2\mu ^2\over 16(1+L^2)}. \end{aligned}$$
(5.7)
Fig. 1 A strategy for Player I

Fig. 2 A strategy for Player II

3. Application of the game approximation

Hence, combining the estimates (5.5) and (5.7) and using Theorems 1.1 and 2.2 (with \(n=2\)), we may take \(r>0\) sufficiently small and let \(\varepsilon \rightarrow 0\) to deduce that \(u(x, t)=0\) for any \(x\in B_{r_0}(z_0) \subset B_r\) and any \(\tau _1\le t\le \tau _2\), where

$$\begin{aligned} \tau _1={1\over 2}R^2-{1\over 2}(R-r)^2,\quad \tau _2= {L^2\mu ^2\over 16(1+L^2)}. \end{aligned}$$

We thus can take \({{\mathcal {O}}}=B_{r_0}(z_0)\) to complete the proof. \(\square \)

Remark 5.2

As an immediate consequence of our game arguments above, we can show that \(u(0, t)=0\) for all \(0\le t\le \tau _2\) by letting \(r\rightarrow 0\). In other words, the zero level set of the solution u starts to fatten at the origin.

5.2 An example of nonfattening with nonlinear boundary conditions

We next consider the games for the curvature flow equation (1.1) with a nonlinear boundary condition (1.9). By means of the game interpretation in Theorem 1.1, we can still observe, at the discrete level, the effect of the driving force on the boundary.

We discuss the case with (1.9) by using the game introduced in Sect. 1.3 satisfying the condition (2.14); see Example 2.8. It turns out that in the presence of such a driving force term, the fattening phenomenon for \(u^\varepsilon \) may not occur instantly even if the initial level curve is tangent to the boundary.

Let \(\Omega \) be the half plane (5.1) again. Take two points

$$\begin{aligned} z_\pm =(\pm 2, K)\in \Omega , \end{aligned}$$
(5.8)

where \(0<K<1\) is the coefficient given in (1.9). Fix

$$\begin{aligned} R=|z_\pm |=\sqrt{4+K^2} \end{aligned}$$
(5.9)

and let

$$\begin{aligned} E=\overline{B_R(z_+)}\cap \overline{B_R(z_-)}. \end{aligned}$$
(5.10)

We keep the choice of \(u_0\) as in (5.3) and take the value function defined as in (1.12) in Sect. 1.3.

We first use the \(x_2\)-axis to divide \({\overline{\Omega }}\) into two regions:

$$\begin{aligned} S_+=\left\{ x\in {\overline{\Omega }}: \left\langle x, e_1\right\rangle >0\right\} , \quad S_-=\left\{ x\in {\overline{\Omega }}: \left\langle x, e_1\right\rangle <0\right\} . \end{aligned}$$
Fig. 3 A strategy for Player I controlling v and a

Starting the game from \(y_0=x\in {\overline{\Omega }}\), we construct the following strategy of Player I; see also Fig. 3. At the k-th step,

  1. (1)

    if \(y_{k-1}\in {\overline{S}}_+\), Player I chooses \(a_k=e_1\) and a unit vector \(v_k\) such that

    $$\begin{aligned} \left\langle v_k, y_{k-1}-z_-\right\rangle =0; \end{aligned}$$
  2. (2)

    if \(y_{k-1}\in S_-\), Player I chooses \(a_k=-e_1\) and a unit vector \(v_k\) such that

    $$\begin{aligned} \left\langle v_k, y_{k-1}-z_+\right\rangle =0. \end{aligned}$$

Denote

$$\begin{aligned} {\hat{y}}_k=y_{k-1}+\sqrt{2}\varepsilon \eta _\varepsilon (y_{k-1})b_k v_k \end{aligned}$$

with \(v_k\) determined above. It follows that under the strategy above for \(y_{k-1}\in {\overline{S}}_+\),

$$\begin{aligned} |{\hat{y}}_k-z_-|^2=|y_{k-1}-z_-|^2+2\varepsilon ^2 \eta _\varepsilon ^2(y_{k-1}) \end{aligned}$$

no matter which \(b_k\) is chosen by Player II. This implies that

$$\begin{aligned} \begin{aligned}&|y_k - z_-|^2=\left| {\hat{y}}_k-z_- -\varepsilon ^2 \zeta _\varepsilon (y_{k-1})(\nu (y_{k-1})-Ke_1) \right| ^2\\&\quad =|{\hat{y}}_k-z_-|^2+2\varepsilon ^2 \zeta _\varepsilon (y_{k-1}) \left\langle {\hat{y}}_k-z_-, Ke_1- \nu (y_{k-1}) \right\rangle +o(\varepsilon ^2)\\&\quad =|y_{k-1}-z_-|^2+2\varepsilon ^2 \eta _\varepsilon ^2(y_{k-1}) +2\varepsilon ^2 \zeta _\varepsilon (y_{k-1}) \left\langle {\hat{y}}_k-z_-, Ke_1- \nu (y_{k-1}) \right\rangle +o(\varepsilon ^2), \end{aligned}\nonumber \\ \end{aligned}$$
(5.11)

and similarly

$$\begin{aligned} \begin{aligned} |y_k-z_+|^2&=|y_{k-1}-z_+|^2+2\varepsilon ^2 \eta _\varepsilon ^2(y_{k-1}) \\&\quad +2\varepsilon ^2 \zeta _\varepsilon (y_{k-1}) \left\langle {\hat{y}}_k-z_+, -Ke_1- \nu (y_{k-1}) \right\rangle + o(\varepsilon ^2) \end{aligned} \end{aligned}$$
(5.12)

provided that \(y_{k-1}\in S_-\).

When \(y_{k-1}\in {\overline{S}}_+\cap S_\varepsilon \) with \(S_\varepsilon \) defined in (5.4), it is not difficult to see that

$$\begin{aligned} \begin{aligned}&\left\langle {\hat{y}}_k-z_-, Ke_1- \nu (y_{k-1}) \right\rangle \\&\quad = \left\langle y_{k-1}-z_-, Ke_1\right\rangle -\left\langle y_{k-1}-z_-, \nu (y_{k-1})\right\rangle +O(\varepsilon ) \ge 2K-K +O(\varepsilon )=K+O(\varepsilon ), \end{aligned} \end{aligned}$$

which yields

$$\begin{aligned} \zeta _\varepsilon (y_{k-1})\left\langle {\hat{y}}_k-z_-, Ke_1- \nu (y_{k-1}) \right\rangle \ge K\zeta _\varepsilon (y_{k-1})+O(\varepsilon ) \end{aligned}$$

for any \(y_{k-1}\in {\overline{S}}_+\). By (5.11), we are thus led to

$$\begin{aligned} \begin{aligned} |y_k-z_-|^2&\ge |y_{k-1}-z_-|^2+2\varepsilon ^2 \eta _\varepsilon ^2(y_{k-1})+2K \varepsilon ^2 \zeta _\varepsilon (y_{k-1})+o(\varepsilon ^2)\\&\ge |y_{k-1}-z_-|^2+2K\varepsilon ^2+o(\varepsilon ^2) \end{aligned} \end{aligned}$$

if \(y_{k-1} \in {\overline{S}}_+\). An analogous estimate from (5.12) gives

$$\begin{aligned} |y_k-z_+|^2\ge |y_{k-1}-z_+|^2+2K\varepsilon ^2 +o(\varepsilon ^2) \end{aligned}$$

if \(y_{k-1}\in S_-\).
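The key drift estimate used above, \(\left\langle y-z_-, Ke_1-\nu \right\rangle \ge K\) for \(y\in {\overline{S}}_+\), follows since \(\left\langle y-z_-, Ke_1\right\rangle =K(y_1+2)\ge 2K\) and \(-\left\langle y-z_-, \nu \right\rangle =y_2-K\ge -K\). A randomized numerical check of this inequality, with an illustrative value of K:

```python
import random

K = 0.5                               # illustrative coefficient, 0 < K < 1
z_minus = (-2.0, K)                   # z_- as in (5.8)
nu = (0.0, -1.0)                      # outward normal on the half-plane boundary
random.seed(0)
for _ in range(1000):
    # sample y in the closure of S_+ (first coordinate nonnegative)
    y = (random.uniform(0.0, 5.0), random.uniform(0.0, 5.0))
    d = (y[0] - z_minus[0], y[1] - z_minus[1])
    val = K * d[0] - (d[0] * nu[0] + d[1] * nu[1])   # <y - z_-, K e1 - nu>
    assert val >= K - 1e-12
```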

On the other hand, since

$$\begin{aligned} |y-z_-|\ge |y-z_+| \quad \text { if and only if }y\in {\overline{S}}_+\text { and} \\ |y-z_+|\ge |y-z_-| \quad \text { if and only if }y\in {\overline{S}}_-, \end{aligned}$$

using the estimates above, we deduce that

$$\begin{aligned} \max \left\{ |y_k-z_+|^2,\ |y_k-z_-|^2\right\} \ge \max \left\{ |y_{k-1}-z_+|^2,\ |y_{k-1}-z_-|^2\right\} +2K\varepsilon ^2+o(\varepsilon ^2). \end{aligned}$$

By iteration it follows that

$$\begin{aligned} \max \left\{ |y_N-z_+|^2,\ |y_N-z_-|^2\right\} \ge R_x^2+2KN\varepsilon ^2+o(\varepsilon ^2N) \end{aligned}$$

for \(\varepsilon >0\) small, where

$$\begin{aligned} R_x:=\max \{|x-z_+|, \ |x-z_-|\}. \end{aligned}$$
(5.13)

Hence, we conclude that

$$\begin{aligned} u^\varepsilon (x, t)\le u_0(y_N)\le R-\sqrt{R_x^2+2KN\varepsilon ^2+o(\varepsilon ^2N)} \end{aligned}$$
(5.14)

for all \(\varepsilon >0\) small, \(x\in {\overline{\Omega }}\) and \(t\ge 0\). It follows that

$$\begin{aligned} {\overline{u}}(x, t)=\, \mathop {\mathrm {limsup^*}}_{\varepsilon \rightarrow 0}u^\varepsilon (x, t)\le R-\sqrt{R_x^2+2Kt} \qquad \text{ for } \text{ all } (x, t)\in {\overline{\Omega }}\times [0, \infty ).\nonumber \\ \end{aligned}$$
(5.15)

In particular, we have

$$\begin{aligned} {\overline{u}}(0, t)\le R-\sqrt{R^2+2Kt}<0\qquad \text {for any } t>0. \end{aligned}$$
(5.16)
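A quick numerical check of (5.16), with R determined by K through (5.9) and illustrative values of K and t:

```python
import math

for K in (0.1, 0.5, 0.9):
    R = math.sqrt(4 + K**2)            # R as in (5.9)
    for t in (1e-6, 0.1, 1.0, 10.0):
        # (5.16): u_bar(0, t) <= R - sqrt(R^2 + 2Kt) < 0 for every t > 0
        assert R - math.sqrt(R**2 + 2 * K * t) < 0
```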

We thus have proven the following result.

Proposition 5.3

(Non-fattening at the origin for a nonlinear boundary problem) Suppose that \(\Omega \) is the half plane as in (5.1). For any \(0<K<1\), take \(z_\pm \) as in (5.8) and \(R>0\) as in (5.9). Assume that E is a closed subset of \({\overline{\Omega }}\) given by (5.10). Let \(u_0\) be the initial value as in (5.3) with \(M>0\) large. Let \(u^\varepsilon \) be the value function associated to the game in Sect. 1.3 with the condition (2.14); namely, \(u^\varepsilon \) is defined as in (1.12). Then \(u^\varepsilon (x, t)\) satisfies (5.14) with (5.13) for all \(\varepsilon >0\) small and \((x, t)\in {\overline{\Omega }}\times [0, \infty )\). Moreover, the estimates (5.15) and (5.16) for \({\overline{u}}\) hold.

As a consequence of Proposition 5.3 and Theorem 1.1, if the comparison principle holds for (1.1) with (1.9), then the game value \(u^\varepsilon \) converges to the unique solution u, whose zero level set does not develop interior points near the origin, even though the level set is initially tangent to the boundary at the origin.