A Comparison Principle for Singular Diffusion Equations with Spatially Inhomogeneous Driving Force for Graphs

We introduce notions of viscosity super- and subsolutions suitable for singular diffusion equations of non-divergence type with a general spatially inhomogeneous driving term. In particular, the viscosity super- and subsolutions support facets and allow possible facet bending. We prove a comparison principle by a modified doubling-of-variables technique. Finally, we present examples of viscosity solutions. Our results apply to a general crystalline curvature flow with a spatially inhomogeneous driving term for a graph-like curve.


Introduction
As a continuation of [17,21], this paper studies a degenerate nonlinear parabolic equation (in one space dimension) whose diffusion effect is very strong at particular slopes of the unknown function. We are particularly interested in equations whose driving force term is spatially inhomogeneous. A typical example which we have in mind is the quasilinear equation u_t = a(u_x)[(W′(u_x))_x + σ(t, x)] (1.1), where W is a given convex function on R which may not be of class C¹, so that its derivative W′ may have jump discontinuities. Here a is a given non-negative continuous function and σ is a given smooth function depending on x and also on t, while u_t and u_x denote the time and the space derivatives of u = u(t, x).
As explained in detail in [17], the equation can be viewed as an evolution law for the graph of u moved by an anisotropic mean curvature flow V = M(n)(κ_γ + σ) with a singular interfacial energy density γ, where κ_γ is a weighted curvature and M is a mobility; V denotes the normal velocity of the evolving curve in the direction of n. The quantity κ_γ formally equals (γ + γ″)κ with curvature κ, where γ = γ(θ) is the interfacial energy density as a function of the argument θ of n = (cos θ, sin θ).
Our eventual goal is to establish a theory of viscosity solutions for a class of equations including (1.1) as a particular example, so that we are able to construct a global-in-time solution, for instance for periodic initial data. In this paper we give a new notion of viscosity solutions for (1.1) and establish a comparison principle.
If σ in (1.1) is independent of x, the theory of viscosity solutions has already been established in [17,21]. Even in this simpler case the quantity (W′(u_x))_x turns out to be nonlocal, so the conventional viscosity theory does not work. For example, if W(p) = |p|, then W″(p) is two times the delta function, so that (1.1) becomes u_t = a(u_x)[2δ(u_x)u_xx + σ(t, x)] (1.2), which is, of course, not a classical partial differential equation. If u = u(t, x) has a flat part (called a facet) with zero slope, then it is expected to move with speed u_t = a(0)[2χ/L + σ], provided that the facet persists and does not break.
Here L is the length of the facet (a nonlocal quantity) and χ = ±1, 0 is a transition number of the facet depending upon the local behavior of u near the facet. For example, if u is 'concave' near the facet, then χ should be −1. When σ is spatially homogeneous, the hypothesis that a facet does not break is justified either by the viscosity theory developed in [17,20] or by the subdifferential theory [14] (in the case σ ≡ 0), in the sense that such a solution is an appropriate limit of solutions to strictly parabolic problems. When W is piecewise linear and σ is independent of x, (1.1) is analyzed in [1,38] for a very restrictive class of unknown functions, namely piecewise linear ones with slopes belonging to the jump discontinuities of W′. Their 'admissible' solution is actually a solution in the viscosity sense [17] and also in a variational sense [12,14]. If σ depends on the space variable, the hypothesis that no facet breaks is no longer true. For example, if we postulate this hypothesis, then the speed u_t of a facet with slope zero, when u is a solution of (1.2), is a(0)[2χ/L + ⨍ σ dx], where ⨍ denotes the average over the facet. As noticed in [19], if we assigned the speed in this way, the solution might not in general enjoy the comparison principle. This shows that such a 'solution' is not obtained as a limit of approximate problems satisfying the comparison principle. On the other hand, if |σ_x| is sufficiently small compared with the length of facets, such a solution is known to enjoy a comparison principle [3].
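For a concrete feel for this facet speed, the following sketch (all names and the choice of σ are ours, purely for illustration) evaluates the postulated speed a(0)[2χ/L + ⨍ σ dx] over a given facet; it is exactly this assignment that may destroy the comparison principle when σ varies in x.

```python
import numpy as np

def facet_speed(a0, chi, facet, sigma, n=100_001):
    """Postulated speed of a zero-slope facet F = (l, r):
       u_t = a(0) * (2*chi/L + average of sigma over F),
    valid only under the (generally false) hypothesis that the
    facet does not break.  All names here are illustrative."""
    l, r = facet
    L = r - l
    x = np.linspace(l, r, n)
    sigma_avg = sigma(x).mean()        # uniform-grid facet average of sigma
    return a0 * (2.0 * chi / L + sigma_avg)

# a 'concave' facet (chi = -1) of length 1 with sigma(x) = x:
speed = facet_speed(1.0, -1, (0.0, 1.0), lambda x: x)
```

When σ is constant the average reduces to σ itself and the classical speed a(0)[2χ/L + σ] is recovered.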
If a is a constant, say a ≡ 1, and σ is independent of t, (1.1) can be viewed as a subdifferential equation u_t ∈ −∂ϕ(u) (1.3), where ϕ is an energy functional; for simplicity, we assume here a periodic boundary condition, so that T = R/ωZ. As observed in [18] for (1.3), the general theory of subdifferential equations in the Hilbert space H = L²(T) provides not only the unique existence of the solution but also the value of the right derivative d⁺u/dt (of u as a function with values in H). The general theory further identifies this right derivative through ∂⁰ϕ, the canonical restriction (the element of minimal norm) of the closed convex set ∂ϕ(u(t)). In [18], it is observed that ∂⁰ϕ can be calculated by solving an obstacle problem. Let us review those observations. On a facet, this condition is expressed, when u_x ∈ P, through the quantity η⁰_x + σ in (1.4), where P is the set of jump discontinuities of W′ and u is assumed to be of class C² and P-faceted [17]. Here η⁰ minimizes (1.5) under a suitable boundary condition at the end points of the facet F, depending on whether u is 'convex' or 'concave' near F. This is a convex minimization problem, so a unique minimizer always exists. Moreover, if σ is independent of x, η⁰_x must be constant and η⁰_x + σ = χ/L + σ. If σ depends on x, η⁰_x + σ may not be constant over F, and this is one reason why the speed may not be constant on F when σ depends on x. The subdifferential equation (1.3) can be approximated by a smooth parabolic problem, so we expect the comparison principle to hold. Thus it is natural to guess that η⁰_x + σ gives a candidate for the value of (W′(u_x))_x + σ when W′ has jump discontinuities. Note that this quantity agrees with the minimal velocity profile proposed by [36], as observed in [18]. Unfortunately, the general equation (1.1) cannot be viewed as a subdifferential equation (1.3). However, we still use (1.4) to define (1.6). We establish a notion of viscosity solutions by assigning the value Λ^σ_W by (1.4) for test functions which we call admissible.
The class of test functions is the same as in [17], so a facet of a test function never vanishes or breaks. The idea of the proof of the comparison principle is similar to that of [17], except for a simplified handling of the end points of facets observed in [21], and the use of the continuity of Λ^σ_W(u) under translations of a faceted region, which is obvious when σ is constant. So we have to study an obstacle problem carefully in this paper. Let Λ(F)(x) be the quantity defined through the minimizer η⁰ of (1.5). In particular, we prove a stability result for Λ(F); moreover, the convergence is uniform with respect to F provided that F is bounded. This problem can be viewed as a stability problem for (1.5) with respect to perturbations of σ. Since our obstacle problem is convex, it is not difficult to prove these facts. We also need comparison results (a maximum principle) for Λ^σ_W to see that this quantity behaves like a curvature or a usual second derivative. It is often convenient to consider ξ = η + Z, with Z a primitive of σ, as a variable instead of η itself, so we shall use the variable ξ. We warn the reader that in Section 5 we will use a differently defined ξ.
To establish the comparison principle we argue by contradiction using the doubling-of-variables technique. Let u be a subsolution and v be a supersolution. We are interested in the maximizers of a functional built from u − v and a barrier B, where B grows like x² for large x and B is a (non-negative) faceted C² convex function with B(0) = 0. This choice of the test function B is different from [17] and it simplifies the argument. We use sup-convolutions with a faceted function to regularize the problem, as in [17]. The quantity Λ^σ_W behaves like a usual second derivative in the sense that it satisfies the maximum principle. At the final stage we have to compare Λ(F_μ) and Λ(F), which is trivial when σ is constant, because then this quantity is independent of μ.
Although this paper focuses on the comparison principle for (1.1), the method developed here, as observed in [21], is fundamental to establishing a level set method for V = M(n)(κ_γ + σ) when σ depends on x. For a standard level set method for smooth γ, see [10,13,16]. A stability result is also expected [20], but we do not intend to include any progress in this direction in the present paper. A general existence result through Perron's method is almost the same as the one in [17], though we do not state it explicitly. Instead, we give a couple of examples of solutions. An existence result based on Perron's method, using the comparison principle established in the present paper, was given in [25].
Recently, besides the examples in [18], several semi-explicit variational solutions have been constructed for (1.1) for special choices of M, σ and γ by solving a free boundary problem [27,29,30]. These variational solutions are expected to be viscosity solutions in our sense. In this paper we confirm this consistency at least for some typical examples.
Much less is known about the evolution of surfaces. In surface evolution problems a facet may not stay a facet even if σ ≡ 0; see for example [5–8].
After this paper was submitted, we were informed of a very recent work [9] by Chambolle and Novaga, where they established a local-in-time unique solution for a closed curve with spatially inhomogeneous σ .
A notion of a generalized solution is established and a comparison principle is proved in [4]; see also [3]. However, the existence of a solution is known only when the initial surface is convex, see [2]; note that their problem is formulated as V = γκ_γ, where the mobility parallels the interfacial energy.
The bibliographies of the review papers [15,22–24] include several articles dealing with anisotropic curvature flow equations with singular interfacial energy, or with singular diffusion equations. Here we mention only a few recent works related to this topic that are not included in the papers cited above. In particular, we have in mind the approach developed by Mucha and Rybka, which is based on an original definition of a composition of multivalued operators; see [32,34]. So far it is restricted to one dimension, but it allows one to study facet evolution for quite general data, as well as the regularity of solutions.
This paper is organized as follows. We first study an obstacle problem in Section 2. In Section 3, we establish a notion of viscosity solutions. In Section 4, we prove our main comparison theorem. In Section 5 we prove that the semi-explicit solutions of [29] are indeed solutions in our viscosity sense.

Variational Properties of Nonlocal Curvature with a Nonuniform Driving Force Term
We give a variational characterization of the quantity Λ^σ_W, formally defined by (2.1), by means of solving an obstacle problem. This characterization enables us to derive various properties needed to establish the theory of viscosity solutions for singular diffusion equations.

An Obstacle Problem
Let Z be a real-valued C² (or C^{1,1}) function defined on a bounded interval I = (a, b). For a given > 0, let K^Z_{χ_lχ_r} be the set of all ξ ∈ H¹(I) satisfying the obstacle constraint and the boundary condition. Here χ_l and χ_r take the values ±1. Let J^Z_{χ_lχ_r} be the functional on L²(I) defined accordingly. In this subsection we suppress the dependence on Z, since Z is fixed. By the definition of J_{χ_lχ_r}, it is easy to see that inf J_{χ_lχ_r} is the H¹-homogeneous distance from zero to the closed convex set K_{χ_lχ_r} in H¹. Thus J_{χ_lχ_r} admits a unique absolute minimizer, denoted ξ_{χ_lχ_r}. Evidently, ξ_{χ_lχ_r} ∈ H¹(I) ⊂ C^{1/2}(Ī) by the Sobolev embedding. In fact, it is C^{1,1}, as proved in [33, Chap. II, Theorem 7.1] (in [33] the regularity for the multidimensional obstacle problem is also discussed). In our one-dimensional case, as discussed below, it is easy to prove that ξ_{χ_lχ_r} is C^{1,1}, since the obstacle is C^{1,1} and the coincidence set is closed.
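Since the displayed constraint set and functional are not reproduced above, here is only a generic numerical sketch of the kind of convex obstacle problem involved (all concrete choices, names and obstacles are ours): minimizing the Dirichlet energy over grid functions lying above a lower obstacle with fixed boundary values, by projected gradient descent. The minimizer is the classical 'taut string', affine off the coincidence set, in line with the concave–convex characterization given below.

```python
import numpy as np

def obstacle_minimizer(lower, bc, n=41, iters=20_000):
    """Minimize the discrete Dirichlet energy sum((xi[i+1]-xi[i])**2)/(2h)
    on [0, 1] subject to xi >= lower(x) and fixed boundary values bc,
    by projected gradient descent (the constraint set is convex)."""
    h = 1.0 / (n - 1)
    x = np.linspace(0.0, 1.0, n)
    psi = lower(x)
    xi = np.maximum(np.linspace(bc[0], bc[1], n), psi)  # feasible start
    xi[0], xi[-1] = bc
    tau = h / 4.0                   # step <= 1 / Lipschitz const of gradient
    for _ in range(iters):
        grad = np.zeros_like(xi)
        grad[1:-1] = (2 * xi[1:-1] - xi[:-2] - xi[2:]) / h
        xi[1:-1] = np.maximum(xi[1:-1] - tau * grad[1:-1], psi[1:-1])
    return x, xi

# lower obstacle equal to 0.5 on [0.4, 0.6]; the minimizer is the taut
# string through (0,0), (0.4,0.5), (0.6,0.5), (1,0), affine off the contact set
x, xi = obstacle_minimizer(
    lambda x: np.where((x >= 0.4 - 1e-9) & (x <= 0.6 + 1e-9), 0.5, -1.0),
    (0.0, 0.0))
```

The computed solution touches the obstacle tangentially and is affine elsewhere, illustrating the C^{1,1} regularity discussed in the text.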
For ξ ∈ H 1 (I ) let D ± (ξ ) be the coincidence set defined by We say that D + is the upper coincidence set while D − is the lower coincidence set.

Definition 1.
We say that ξ ∈ K_{χ_lχ_r} satisfies the concave–convex condition if ξ is concave on each connected component of the complement of the upper coincidence set D⁺ and convex on each connected component of the complement of the lower coincidence set D⁻, that is, ξ″ ≤ 0 outside D⁺ and ξ″ ≥ 0 outside D⁻. In particular, ξ is C^{1,1} in I and ξ″ = 0 outside D⁻ ∪ D⁺.

Proposition 1 (A characterization of the minimizer).
The function ξ ∈ K_{χ_lχ_r} is the minimizer of J_{χ_lχ_r} if and only if ξ fulfills the concave–convex condition. In particular, ξ_{χ_lχ_r} is C^{1,1} in I. Proof. By the convexity of J_{χ_lχ_r} and the uniqueness of the minimizer, ξ ∈ K_{χ_lχ_r} is the absolute minimizer if and only if ξ is a local minimizer of J_{χ_lχ_r}, that is, ξ satisfies (2.2) and the boundary condition (2.3). These conditions are equivalent to the concave–convex condition. We refer to Schwartz [37] or Hörmander [31] for the equivalence of convexity in the distribution sense and strong convexity.
The remaining statement is a simple consequence of the concave-convexity condition.
As a trivial application we exhibit two cases where the minimizer can be written explicitly.

Comparison Principle
So far we have fixed the interval I in the definition of ξ_{χ_lχ_r}. We now study the dependence of ξ_{χ_lχ_r} upon I. To make it explicit, we write J_{χ_lχ_r,I} instead of J^Z_{χ_lχ_r} and ξ_{χ_lχ_r,I} instead of ξ^Z_{χ_lχ_r}. We set (2.5) Λ_{χ_lχ_r}(x, I) = ∂_x ξ_{χ_lχ_r,I}(x). It is easy to observe that this quantity agrees with η⁰_x + σ when Z equals a primitive of σ; it suffices to take ξ = η + Z. The reason we write Z is that the derivative of ξ^Z_{χ_lχ_r} depends on Z only through its derivative Z′. We suppress Z in the notation below. The comparison principle for this quantity can be proved by a comparison principle for parabolic equations via an approximation, as is done in Giga–Gurtin–Matias [28]. However, since the problem is one-dimensional, we give instead an elementary proof, based on the following simple observation.
Proof. It is very easy to see that ξ(x) ≥ ζ(x) for x ∈ [a, b]. We have to show that the function ξ − ζ is increasing. For this purpose we show that ξ − ζ can attain neither a local maximum nor a local minimum in (a, b). In fact, the absence of local maxima implies the impossibility of local minima; thus it suffices to show that no local maximum of ξ − ζ is possible. Let us suppose the contrary, that is, there exists a point x_0 ∈ (a, b) at which ξ − ζ attains a local maximum; then there is a positive δ such that (ξ − ζ)(x) ≤ (ξ − ζ)(x_0) on (x_0 − δ, x_0 + δ), where the inequality is strict for x = x_0 ± δ. We will consider several cases, depending on the lower coincidence set; they are referred to as (i), (ii) and (iii) below. We begin with the first case, which is illustrated in Fig. 1. Let us denote the slope of ζ by α. Since we assumed that ξ is a minimizer, we deduce from Proposition 1 that ξ ∈ C^{1,1}([a, b]). For sufficiently small η > 0, the line defined in (2.6) intersects the graph of ξ. We will see that (2.7) holds, which contradicts the minimality of ξ.
Indeed, among H¹-functions with given Dirichlet data, the affine function minimizes the Dirichlet energy. Hence (2.7) follows and, as a result, the Lemma holds in case (i). Let us consider case (ii). First of all, ξ′(x_0) = ζ′(x_0) =: α, and we define the line in a similar way as in (2.6).
Thus, for sufficiently small η > 0, the line intersects the graph of ξ. From now on we proceed as in case (i) to show the impossibility of a local maximum.
The last case, (iii), uses the same kind of argument, once we realize that the solution to the minimization problem with an obstacle meets the obstacle tangentially. This follows from the concave–convex condition in Proposition 1. The details are left to the interested reader. This finishes the proof of the Lemma.
We may now turn our attention to the proof of Theorem 1. Proof. It suffices to prove the assertions (a) and (b). We begin with the proof of (a). Since the argument is symmetric, it is sufficient to prove the first inequality. We may assume that I_1 and I_2 share an end point. By symmetry, it suffices to prove (2.8). When ξ(b) = ζ(b) there is nothing to prove, so we assume that ξ(b) > ζ(b). Then we may apply the Elementary Lemma to deduce that ξ ≥ ζ in I_2, that is, (2.8) holds.
We next prove (b). By symmetry it suffices to show one of the four inequalities; we shall prove (2.9). Let ζ = ξ_{−−,I_1} be the minimizer such that ζ′ = Λ_{−−}(x, I_1), and let ξ = ξ_{−+,I_2} be the minimizer such that ξ′ = Λ_{−+}(x, I_2). By the structure of the minimization problems, we see that ξ(a) = ζ(a) and ξ(b) ≥ ζ(b). We may directly apply the Elementary Lemma to deduce that (2.9) holds.

Stability of Curvature like Quantity
Our goal in this section is to show that the curvature-like quantity Λ_{χ_lχ_r}(x, I) defined by (2.5) is 'continuous' with respect to changes of the interval I. The stability of Λ for the convex obstacle problem with respect to Z is essentially known in the literature; see for example [35, p. 156, Chapter 5, Theorem 4.5 and Remark 4.6]. However, we give a proof for the reader's convenience, since our situation is slightly different.
We recall several stability properties of J_{χ_lχ_r}. Let {Z_k}_{k=1}^∞ be a sequence of real-valued C² (or C^{1,1}) functions on I = (a, b). In this subsection we fix χ_l and χ_r; as a result, we often suppress this dependence and simply write J for J^Z_{χ_lχ_r} and J_k for J^{Z_k}_{χ_lχ_r}.

Proposition 2 (Lower semicontinuity). Assume that Z_k converges uniformly to Z on Ī and that ξ_k converges to ξ weakly in L²(I) as k → ∞. Then J(ξ) ≤ liminf_{k→∞} J_k(ξ_k).
Proof. We may assume that ξ_k ∈ K_{Z_k}. Since ξ_k − Z_k converges to ξ − Z weakly in L²(I) and the sign conditions are preserved in the weak limit, we observe that ξ ∈ K_Z. The desired conclusion now follows from the lower semicontinuity of the H¹-seminorm with respect to weak L²-convergence.

Proposition 3 (Approximability).
Assume that Z_k converges to Z together with its first derivative, uniformly on Ī as k → ∞, that is, Z_k → Z in C¹(Ī). Then for each ξ ∈ L²(I) there is a sequence ξ_k → ξ in L²(I) with J_k(ξ_k) → J(ξ). Proof. We may assume that ξ ∈ K_Z, since otherwise ξ ∉ K_{Z_k} for sufficiently large k. We set ξ_k = ξ − Z + Z_k and observe that ξ_k is in K_{Z_k} by (2.3) and (2.4). Since ξ_k → ξ in H¹(I) (because Z_k → Z in C¹(Ī)), we conclude that J_k(ξ_k) → J(ξ). The two propositions above say that J_k converges to J in the sense of Mosco, that is, both the strong and the weak Γ-limits of J_k equal J. Thus we easily obtain the convergence of minimizers.
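For reference, the Mosco convergence asserted here amounts to exactly the following two conditions (a standard formulation, stated in the notation of this section):

```latex
\begin{align*}
&\text{(i) for every } \xi_k \rightharpoonup \xi \text{ weakly in } L^2(I):\quad
   \liminf_{k\to\infty} J_k(\xi_k) \ \ge\ J(\xi);\\
&\text{(ii) for every } \xi \in L^2(I) \text{ there exist } \xi_k \to \xi
   \text{ strongly in } L^2(I) \text{ with }
   \limsup_{k\to\infty} J_k(\xi_k) \ \le\ J(\xi).
\end{align*}
```

Condition (i) is Proposition 2 and condition (ii) is Proposition 3 (the recovery sequence).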

Proposition 4 (Convergence of minimizers).
Assume that Z_k → Z together with its first derivative, uniformly on Ī, that is, Z_k → Z in C¹(Ī). Let ξ^k_{χ_lχ_r} be the minimizer of J^k_{χ_lχ_r}. Then ξ^k_{χ_lχ_r} converges in L²(I) to ξ_{χ_lχ_r}, the minimizer of J_{χ_lχ_r}.
Proof. From [33, Theorem 7.1] we deduce a uniform bound; this implies that {min J_k}_{k=1}^∞ is bounded. Since H¹(I) is compactly embedded in L²(I), upon extracting a subsequence (not relabeled), ξ^k_{χ_lχ_r} converges to an element ζ ∈ L²(I) as k → ∞. By Proposition 2, we observe that J(ζ) ≤ liminf_{k→∞} J_k(ξ^k_{χ_lχ_r}). For any ξ ∈ L²(I), due to Proposition 3, there is always a sequence ξ_k → ξ in L²(I) with J_k(ξ_k) → J(ξ), while min J_k ≤ J_k(ξ_k). Therefore J(ζ) ≤ J(ξ), so ζ must be the unique minimizer of J. Thus ξ^k_{χ_lχ_r} converges to ξ_{χ_lχ_r} without taking a subsequence.
Theorem 2 (Continuity with respect to Z). Assume that Z_k → Z with its first derivative uniformly on Ī and that the second derivatives of Z_k are uniformly bounded. Proof. We may assume that Z_k → Z in C¹(Ī) by adding constants fixing the value at some point of I, for example Z_k((a+b)/2) = 0 and Z((a+b)/2) = 0. By Proposition 4 we observe that ξ^k_{χ_lχ_r} → ξ_{χ_lχ_r} in L²(I). By Proposition 1, our assumption on the bound of the second derivatives of Z_k upgrades this convergence. We are now in a position to state the continuity of Λ_{χ_lχ_r} with respect to I; this notion is explained below.
We have to clarify what continuity with respect to I means. For two bounded intervals I = (a, b) and J = (c, d) there is a unique affine map A: x ↦ y = αx + β (a dilation composed with a translation) with α > 0 such that A(I) = J. Assume that open intervals I_k converge to I as k → ∞, that is, the end points a_k, b_k of I_k = (a_k, b_k) tend to a and b, respectively. Let F be a mapping I ↦ F(I) ∈ C(Ī). We say that F is continuous with respect to I if F(I_k) ∘ A_k converges to F(I) in C(Ī) as k → ∞ for any I_k → I, where A_k is the affine map which maps I to I_k.
Proof. These assertions easily follow from Theorem 2, once we compare F(I_k) ∘ A_k and F(I), both defined on Ī; here A_k is the affine transformation mapping I to I_k, when I_k → I (in assertion (ii) this affine map is just a translation).
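The dilation-and-translation map A can be written down explicitly; the following one-liner (names ours, for illustration only) computes it, which is all the composition F(I_k) ∘ A_k requires.

```python
def affine_map(I, J):
    """The unique affine map A(x) = alpha*x + beta with alpha > 0 sending
    the interval I = (a, b) onto J = (c, d): matching the endpoints gives
    alpha = (d - c) / (b - a) and beta = c - alpha * a."""
    (a, b), (c, d) = I, J
    alpha = (d - c) / (b - a)
    return lambda x: alpha * x + (c - alpha * a)

A = affine_map((0.0, 2.0), (1.0, 5.0))   # maps (0, 2) onto (1, 5)
```

As the end points a_k → a and b_k → b, the corresponding α_k → 1 and β_k → 0, so A_k converges to the identity, which is what the continuity notion exploits.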

Nonlocal Curvature with a Nonuniform Driving Force Term
In order to define the nonlocal curvature Λ^σ_W(u), formally given by (2.1), we recall the basic assumptions on W from [17] and a class of functions u for which Λ^σ_W(u) is well-defined.
(W) Let W be a convex function on R with values in R. Assume that W is of class C² outside a closed discrete set P and that W″ is bounded on any compact set containing no points of P.
We shall always assume (W) in this paper. By definition, the set P is either finite or countable with no accumulation points in R. If P is nonempty, the p_j's and r_j's are arranged in strictly increasing sequences p_j < p_{j+1}, r_j < r_{j+1}, and m is a positive integer.
We recall a notion of a faceted function.
We introduce the left transition number χ_l = χ_l(f, x_0) and the right transition number χ_r = χ_r(f, x_0).

Definition 2. We assume that σ is a real-valued Lipschitz function on an open interval Ω, that Z is its primitive, and that (W) holds. We assume that f ∈ C(Ω) is p_i-faceted at x_0 ∈ Ω with p_i ∈ P. Then we define the nonlocal curvature Λ^σ_W by

Remark 1.
If σ is constant, so that Z is an affine function, the minimizer ξ_{χ_lχ_r} of J^Z_{χ_lχ_r} is always a straight line (cf. Corollary 1 for the case χ = 1 or −1). Thus it is easy to observe that our new quantity agrees with the weighted curvature. We conclude this section by rewriting the comparison principle and the continuity with respect to translation in terms of Λ^σ_W, which is well-defined for all x ∈ Ω provided that σ is locally Lipschitz. The next two results are immediate consequences of Theorems 1 and 3, respectively.

Theorem 5 (Continuity).
Let us suppose that the hypotheses of Theorem 4 concerning W and σ hold. We assume that f ∈ C(Ω) is p_i-faceted at x_0 − η and that g is p_i-faceted at x_0 − η, with p_i ∈ P. Assume moreover,

Definitions of Generalized Solutions
The goal of this section is to define generalized solutions (in the viscosity sense) for evolution equations of the form (3.1) when W is a singular interfacial energy. Such a notion was given for σ ≡ 0 in [17]. Our definition is a natural extension to the case when σ does not identically vanish. In this section we also give several equivalent definitions for later use.

Admissible Functions and Definitions
We first recall a natural class of test functions. Let us set Q = (0, T) × Ω, where Ω is an open interval and T > 0. Let A_P(Q) be the set of all admissible functions ψ on Q in the sense of [17], that is, ψ is of the form given there. For our equation we often assume: (FT) (Uniform continuity in curvature and time.) For each K the corresponding function is uniformly continuous. The third assumption is rather standard; when W ≡ 0 and σ is Lipschitz, Λ^σ_W(u) = σ. A typical example of (3.1) satisfying (F1), (F2), (FL) and (FT) is of the form (3.2); here a ∈ C(R) satisfies 0 ≤ a(p) ≤ C(|p| + 1) for all p ∈ R, with C ∈ C[0, T]. If a(p) = (1 + p²)^{1/2} and C ≡ 0, then (3.2) says that the normal velocity V of the graph of u equals the nonlocal curvature. The driving force term σ may depend on t. Here is an assumption we often use. (S) The function σ ∈ C([0, T] × Ω) is Lipschitz in space uniformly in time, that is, there is a constant L_T such that |σ(t, x) − σ(t, y)| ≤ L_T |x − y| for all t ∈ [0, T] and x, y ∈ Ω. We are now in a position to give a notion of a generalized solution in the viscosity sense.
Here, ψ(t) is the function on Ω defined by ψ(t) = ψ(t, ·), u^* is the upper-semicontinuous envelope of u defined for (t, x) ∈ Q̄, and u_* = −(−u)^*. A (viscosity) supersolution is defined by replacing u^* (< ∞) by the lower-semicontinuous envelope u_* (> −∞), max by min in (3.4), and the inequality (3.3) by the opposite one. If u is both a sub- and a supersolution, it is called a viscosity solution or a generalized solution. Hereafter we avoid using the word viscosity. A function ψ satisfying (3.4) is called a test function of u at (t̄, x̄).
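The envelopes u^* and u_* can be illustrated on a simple jump. The sketch below (the sampling radius and grid are our ad hoc choices) approximates u^*(x) = limsup_{y→x} u(y) by a maximum over a shrinking interval and recovers u_* through the identity u_* = −(−u)^*.

```python
def usc_envelope(u, x, r=1e-3, n=2001):
    """Numerical stand-in for the upper-semicontinuous envelope
    u*(x) = lim sup_{y -> x} u(y): sample u on a small interval
    around x and take the maximum (radius r is an ad hoc choice)."""
    ys = [x + r * (2.0 * i / (n - 1) - 1.0) for i in range(n)]
    return max(u(y) for y in ys)

step = lambda x: 1.0 if x >= 0.0 else 0.0   # a jump at the origin
u_star = usc_envelope(step, 0.0)            # upper envelope at the jump
u_lower = -usc_envelope(lambda y: -step(y), 0.0)   # u_* = -(-u)^*
```

At the jump the two envelopes pick out the larger and the smaller one-sided limits, respectively, which is exactly the role they play in Definition 3.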
The monotonicity condition (F2) and the convexity condition (W) show that the equation is at least degenerate parabolic. Thus, by comparison (Theorem 4), it is easy to see that ψ ∈ A_P(Q) is a subsolution in Q if (and only if) ψ satisfies

An Equivalent Definition
To show the comparison principle for sub-and supersolutions, it is convenient to recall equivalent definitions. One of them is regarded as an infinitesimal version. Such a definition is given in [17] when σ ≡ 0. It is simplified by [21]. We give a definition which is a natural extension of the one in [21,Theorem 4.3].
We first recall upper time derivatives on a faceted region. Let ϕ be a function on Q and (t̄, x̄) ∈ Q. Assume that ϕ(t̄, ·) ∈ C(Ω) is p-faceted at x̄ ∈ Ω with p ∈ P. We define T⁺_P ϕ(t̄, x̄) = {τ ∈ R | there are a modulus ω and three positive numbers δ, δ⁺, δ⁻ such that the corresponding inequality holds}, where Ñ_{−1} denotes a semineighborhood of R(ϕ(t̄, ·), x̄), defined in [17]; by a modulus ω we mean that ω : [0, ∞) → [0, ∞) is nondecreasing and continuous with ω(0) = 0. For the reader's convenience, we recall the definition of Ñ_{−1}. Let f ∈ C(Ω) be p-faceted at x_0 ∈ Ω with p ∈ P. The set Ñ_{−1} is then defined as in (Fig. 2), and the set Ñ_{+1} is defined analogously. An element of T⁺_P ϕ(t̄, x̄) is called an upper time derivative at (t̄, x̄). The set of lower time derivatives T⁻_P ϕ(t̄, x̄) is defined analogously. We next recall a class of functions (not necessarily admissible) for which the upper time derivative is well-defined on a faceted region. The following definition is the improved one from [21], not the original one in [17]. In [21], Q may be noncylindrical, but here we consider the simple case Q = (0, T) × Ω.

Definition 4.
Let ϕ : Q → R be an upper-semicontinuous function. For (t̄, x̄) ∈ Q assume that ϕ(t, ·) ∈ C(Ω) for t near t̄. We say that ϕ is an (infinitesimally) admissible superfunction at (t̄, x̄) in Q if one of the following three conditions holds.
We say that ϕ is an admissible subfunction at (t̄, x̄) in Q if ϕ is an admissible superfunction with P replaced by −P. We implicitly assume that R(ϕ(t̄, ·), x̄) does not touch the boundary of Ω. We are now in a position to give a definition of a subsolution in the infinitesimal sense. For (t̄, x̄), let ϕ be an admissible superfunction at (t̄, x̄) in Q such that ϕ is a test function of u at (t̄, x̄), that is, (3.4) holds. Then
The definition of a supersolution in the infinitesimal sense is obtained by replacing u^* (< ∞) by u_* (> −∞), max by min in (3.4), superfunction by subfunction, T⁺_P by T⁻_P, P⁺ by P⁻, and the inequalities in (i), (ii), (iii) by the opposite ones. It turns out that Definitions 3 and 5 are equivalent.

Theorem 6 (Equivalence). Assume (W ), (S), (F1), (F2). A real-valued function u on Q is a subsolution (resp. supersolution) of (3.1) in Q if and only if u is a subsolution (resp. supersolution) of (3.1) in Q in the infinitesimal sense.
The proof essentially parallels those of [17, Theorem 6.9] and [21, Theorem 4.3]. In the proof of the 'only if' part, (iii) follows from the zero-curvature lemma [21, Lemma 4.2] with a trivial modification. We give a modified version of this lemma for the reader's convenience. We do not repeat the tedious details of the proof of the 'only if' part. The proof of the 'if' part is easier and is written in the proof of [21, Theorem 4.3]; of course, trivial modifications are needed, for example Λ_W(ψ(t, ·), x̄) < 0 should be replaced by χ(ψ(t, ·), x̄) < 0.

Comparison Principle
We state our main comparison result for equation (3.1); the proof will be given in the remainder of this section. The basic strategy is to find suitable test functions of u and v in order to obtain a contradiction, after assuming that the conclusion u^* ≤ v_* fails. This basic strategy is the same as in [17]. However, the nonlocal curvature may depend on x even when x is in a faceted region, so one should be careful about this issue; this is a new aspect of the problem. On the other hand, since the infinitesimal versions of the definitions of sub- and supersolutions are simpler than those in [17], we need not avoid handling the case where functions attain a maximum at the end points of faceted regions. In fact, it is mentioned in [21] that this simplifies the proof of [17].

Doubling Variables
As usual, we double the variables. For z = (t, x), z′ = (s, y) ∈ Q̄, we take a barrier function B which is different from the one in [17]; moreover, all of its faceted regions have the same length. It is easy to find the derivative B′ of such a function by modifying y = x, so that B is obtained as its primitive. We consider its rescaled version B_ε(x) = εB(x/ε) for ε > 0. Clearly, B_ε ∈ C²_P(R) and satisfies the same properties as B. We consider 'barrier functions' of the diagonal z = z′ for positive parameters ε, δ, γ, γ′ (in [17], |x − y − ζ|²/ε² is used instead of B_ε(x − y), where ζ is an extra shift parameter introduced to avoid the situation when the point under consideration is an end point of a faceted region). We often write Φ(z, z′) and S(t, s), suppressing the dependence on the positive parameters. As usual, we shall analyze the maximizers of Φ.

Choice of Parameters
We shall choose ε, δ, γ, γ′ sufficiently small, as usual. The next statement on the behavior of the maximizers of Φ is rather standard in the doubling-of-variables procedure; see for example [16], [17, Proposition 7.1], [26].

Maximizers in a Faceted Region of Test Functions
We shall consider three cases depending on the location of the maximizers (ẑ, ẑ′) = (t̂, x̂, ŝ, ŷ) of Φ over Q̄ × Q̄.

Proposition 7 (No touching of faceted regions at the boundary). Assume the conditions of Case A.
For the proof we invoke Remark 2. The proof depends on the boundary condition (Proposition 5 (iii)) and parallels that of [17, Proposition 7.10].

Existence of Admissible Superfunctions
Unfortunately, the functions u_0 and v_0 may not be faceted at x̂ and ŷ. We have to regularize them by taking sup-convolutions with faceted functions. For ρ > 0, let ϑ be a suitable faceted function. We consider sup-convolutions of u_0 and −v_0 by ϑ. For α > 0, let u_0^α be the sup-convolution of u_0 in the x-direction. Based on these regularizations and the maximum principle for faceted sub- and supersolutions, the desired admissible super- and subfunctions are constructed. The proof is essentially the same as in [17]. Although it is highly nontrivial, we do not repeat the proof. Theorem 8. Assume the conditions of Case A and choose parameters ε_0, δ_0, γ_0, γ′_0 as in Remark 2. Let 0 < ε < ε_0, 0 < δ < δ_0, 0 < γ < γ_0 and 0 < γ′ < γ′_0. Then there exist an admissible superfunction U at (t̂, x̂) in Q and an admissible subfunction V at (ŝ, ŷ) in Q satisfying the following properties.

(i) U and V are test functions of u and v at (t̂, x̂) and (ŝ, ŷ), respectively. In fact,
The function u_0^α + p_0 x is essentially an admissible superfunction, so we are tempted to set U = u_0^α + p_0 x. However, the faceted region may contain a boundary point of Ω. Since
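The sup-convolution regularization used here can be illustrated generically. The sketch below uses a quadratic kernel instead of the faceted function ϑ (whose precise definition is not reproduced above), so it only shows the mechanism: the regularized function lies above the original and grows with the parameter α.

```python
import numpy as np

def sup_convolution(u, theta, xs, alpha):
    """Generic sketch of a sup-convolution in the x-direction:
        u^alpha(x) = max_y [ u(y) - theta(x - y) / alpha ].
    The faceted kernel theta of the paper is replaced here by an
    illustrative quadratic; only the mechanism is shown."""
    u_vals = u(xs)
    return np.array([np.max(u_vals - theta(x - xs) / alpha) for x in xs])

xs = np.linspace(-1.0, 1.0, 201)
u = lambda x: -np.abs(x)          # a sample profile with a kink at 0
theta = lambda r: r * r           # quadratic kernel, for illustration only
u_alpha = sup_convolution(u, theta, xs, 0.1)
```

Since choosing y = x in the maximum is always admissible, u^α ≥ u pointwise, and enlarging α only enlarges the maximand, so u^α is monotone in α; these are the two properties the regularization step relies on.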

Proof of Comparison Theorem
We are now in a position to prove Theorem 7. Suppose that the conclusion is false. We may assume that u and v satisfy the assumptions of Proposition 5, by considering u^* and v_* on Q̄. In particular, we may assume m_0 > 0. We fix ε_0, δ_0, γ_0, γ′_0 as in Remark 2 and assume that 0 < ε < ε_0, 0 < δ < δ_0, 0 < γ < γ_0 and 0 < γ′ < γ′_0. Since Q̄ is compact and u and −v are upper-semicontinuous, there is always a maximizer (ẑ, ẑ′) = (t̂, x̂, ŝ, ŷ) of Φ over Q̄ × Q̄, and it lies in Q × Q by the choice of parameters (Proposition 5 (iii) and Remark 2). We shall fix γ and γ′. We divide the situation into three cases.
In Case I we invoke Theorem 8. Since U is an admissible superfunction at (t̂, x̂) in Q and u is a subsolution, Definition 4 and Theorem 8 (i), (ii) give (4.1) and (4.2). By Theorem 8 (iv) we have (4.3), where χ^U_l and χ^U_r denote the transition numbers of U(t̂, ·) on I_U, and χ^V_l and χ^V_r denote the transition numbers of V(ŝ, ·) on I_V = R(V(ŝ, ·), ŷ). Since we have assumed that P is a finite set, there is K such that P ⊂ [−K, K]. Thus, by (FT) and (F2), the inequalities (4.1) and (4.3) yield (4.5) with some modulus ω_K. By definition, inequality (4.2) can be rewritten as (4.6). Subtracting (4.6) from (4.5) yields (4.7). By (FL), this shows that the positive quantity coming from the barrier is controlled by the difference of the nonlocal curvature terms. By Theorem 8 (iii), we know that I_U = I_V + x̂ − ŷ. Sending ε to zero, we observe that x̂ − ŷ → 0 by Proposition 5 (ii). By (S), σ_x(s, ·) is uniformly bounded. We now invoke the continuity results (Theorems 2 and 3 (ii)) to get (4.8) as ε → 0, where x (= y), t, s are subsequential limits of x̂, ŷ, t̂, ŝ as ε → 0, and I is a subsequential limit of I_U, which is the same as the limit of I_V. Note that U and V depend on ε, and so do I_U and I_V. However, the convergence is uniform with respect to the interval and σ, so we are able to obtain (4.8). Applying Theorem 2 and Theorem 3 (ii) again to (4.8), we let δ → 0 and observe that the right-hand sides of (4.8) converge to the same value. We now send ε → 0 and then δ → 0 in (4.7) to get (γ + γ′)/T² ≤ 0, which is a contradiction. Case II is rather standard [11,16,17]; the assumptions (FL) and (S) are useful in this step. Case III is essentially the same as Case I (or even easier), once one admits the zero-curvature lemma (Lemma 2).

Periodic Version
As noted in [17], a similar argument yields the comparison principle under spatially periodic boundary conditions. In fact, the argument is even simpler because there is no lateral boundary of Q = (0, T) × T, T = R/ωZ, ω > 0. For the reader's convenience, we state the comparison principle for the periodic boundary condition.

Theorem 9 (Comparison). Assume that the conditions (W), (S), (F1), (F2), (FL) and (FT) hold and, in addition, that the set P is finite. Let u and v be, respectively, a sub- and a supersolution of (3.1) in Q = (0, T) × T, T = R/ωZ with period ω. If u^* ≤ v_* at t = 0, then u^* ≤ v_* in Q.

Remark 3.
As usual, Theorems 7 and 9 can be extended to the case when F = F(u, t, p, X) depends explicitly also on u, provided that u ↦ F(u, t, p, X) + ku =: F̃ is nondecreasing for some k ≥ 0 and F̃ is continuous as a function of (u, t, p, X). Of course, the assumptions (FL) and (FT) should be uniform for all u with |u| ≤ K for a given K. If k = 0, the proof is the same except for a trivial modification in the way of comparing (4.5) and (4.6). If k > 0, we have to introduce a new variable ũ = u exp(−kt) and reduce the problem to the case k = 0. Note that, differently from the standard case [16], in which the singularity set P is empty, our singular set (the jump discontinuities) for ũ_x depends on time, which apparently yields an extra difficulty. However, we are able to circumvent this difficulty by using the old variables to calculate Λ and the slope, while using the new variables ũ and ṽ to find a maximizer of Φ.
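The reduction mentioned above can be carried out schematically as follows; we write the equation as u_t + F(u, t, u_x, u_xx) = 0 purely for illustration, and the display is a sketch rather than the precise form used in the paper:

```latex
% Substituting u = e^{kt}\tilde u, so that u_t = e^{kt}(\tilde u_t + k\tilde u),
% into u_t + F(u, t, u_x, u_{xx}) = 0 and multiplying by e^{-kt} gives
\tilde u_t
  + \underbrace{e^{-kt} F\bigl(e^{kt}\tilde u,\, t,\,
      e^{kt}\tilde u_x,\, e^{kt}\tilde u_{xx}\bigr) + k\tilde u}
    _{=:\ \widetilde F(\tilde u,\, t,\, \tilde u_x,\, \tilde u_{xx})}
  \;=\; 0,
% and \partial\widetilde F/\partial\tilde u = F_u + k \ge 0 exactly when
% u \mapsto F(u, t, p, X) + ku is nondecreasing.
```

Note also that ũ_x = e^{−kt} u_x, so the singular slopes of ũ lie in e^{−kt}P; this is the time dependence of the singular set referred to above.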

Examples of Solutions
In [27,29,30] we constructed variational solutions to (5.1) in settings of increasing generality, where β = M^{−1} is the kinetic coefficient. We considered graphs, possibly satisfying an additional boundary condition, and simple closed Lipschitz curves, which we called bent rectangles. We will show that the variational solutions to (5.1) for the evolution of graphs are viscosity solutions in the sense of the present paper. For the sake of illustrating the theory, we will not consider the general case of [27] but only the simple ones presented in [29]. To be precise, we deal here with a simplification of the case studied in [29], where we investigated graphs of functions defined over a finite interval J. We considered solutions having exactly three facets, two of which touched the boundary at a right angle. Here, we study a graph over R, with some restrictions on the data. We expect that the results of the present paper may be applied to closed curves, but we will not elaborate upon this.
The advantage of studying graphs in the parametric approach is that the set of parameters is independent of time. Thus, the main difficulty is interpreting (5.1) in a local coordinate system. We present the setting after [29].

Graphs over R
We consider the evolution of the graph Γ(t) = {(x, y) ∈ R² : y = d(t, x)}, where d(t, ·) : R → R_+. For the sake of simplicity we assume that the function d(t, ·) is admissible (in x) for all t ≥ 0. We shall say that a function d is admissible provided that: The last condition means that we consider a simple yet nontrivial case in which d has exactly one faceted region. We stress, however, that the facet (−l_0, l_0) may be strictly included in (−λ_0, λ_0). This results from solving the minimization problem with constraints; see Fig. 3 below.
We have to explain the definition of κ_γ. Formally, κ_γ is the surface divergence of ∇_ζ γ(n), where n is the outer normal to Γ and, for γ given by (5.3), ∇_ζ γ can be computed explicitly. In the present case n = (−d_x, 1)/√(1 + d_x²). Thus, we immediately obtain (5.4). This is exactly equation (1.1) with W(p_1) = γ_Λ |p_1| and a(p_1) = max{|p_1|, 1}; hence our theory applies.
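Since W(p_1) = γ_Λ |p_1| fails to be C¹ at p_1 = 0, its derivative must be understood through the subdifferential; the following is the standard computation for this W:

```latex
% Subdifferential of W(p_1) = \gamma_\Lambda |p_1|; the jump of W' at
% p_1 = 0 is filled by an interval of admissible slopes.
\partial W(p_1) \;=\;
\begin{cases}
  \{\gamma_\Lambda \operatorname{sgn} p_1\}, & p_1 \neq 0,\\[2pt]
  [-\gamma_\Lambda,\ \gamma_\Lambda], & p_1 = 0.
\end{cases}
```

On a facet, where u_x ≡ 0, the diffusion term therefore carries no pointwise information, and a selection from the subdifferential must be made; this is what the variational problem of the next subsection accomplishes.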
In [29] we interpreted (5.1) differently. Namely, we replaced the gradient ∇_ζ γ, which is defined only almost everywhere, by the subdifferential ∂_ζ γ, which is well defined for all p ∈ R², because γ is convex. However, we had to consider sections ξ of the subdifferential, that is, ξ(x) ∈ ∂_ζ γ(n(x)). Note that here we change notation as compared with the Introduction and Section 2: in the Introduction our present ξ was denoted by η. On the other hand, writing ξ(x) ∈ ∂_ζ γ(n(x)) is consistent with the papers that are the source of our examples.
In order to select ξ we introduce a functional E. The graph Γ(t) has infinite one-dimensional Hausdorff measure, but the condition div_S ξ ∈ L²(Γ) does not introduce additional unexpected restrictions, because outside of the facets we have ξ = ∇γ(n), where n = n_Λ, n_R and n_Λ = (1, 0), n_R = (0, 1). We call a couple (Γ, ξ) a variational solution to (5.1) provided that Γ is the graph of an admissible function d, as described above, and at each time instant t the vector field ξ(t, ·) : Γ → R² is a minimizer of E, that is, ξ solves (5.6). We can show that under natural conditions on σ, equation (5.1) takes a form which is suitable for the analysis.
We notice that if ξ is a solution to (5.6), then the boundary ±l_0 of the coincidence set need not coincide with the boundary ±λ_0 of the flat region postulated by the definition of an admissible function; thus l_0 ≤ λ_0. For the sake of notational simplicity, we shall write R_0 := d|_(−l_0, l_0). Once we settle the notation, we establish the following fact.

Proposition 8.
We assume that σ, σ_x ∈ C(R_+ × R) and σ satisfies the following conditions: Let us suppose that (Γ, ξ) is a variational solution to (5.1), where Γ = Γ(d) is the graph of d, such that at each time instant t ≥ 0 the function d(t, ·) has exactly one faceted region, (−l_0, l_0). Furthermore, for all t ≥ 0 the function d(t, ·) is piecewise C¹. Then:
(a) we have the formula (5.8) for ξ_1 for each time t ≥ 0; in addition, Ṙ_0 > 0;
(b) equation (5.1) (and hence (5.5)) takes the form (5.9).
In the absence of additional facets the argument is simpler than in [29] and [30], and it is omitted.
Let us warn the reader that we use the notion of 'faceted region' in the sense defined in the present paper. In [29] and [30] its meaning is different.
It turns out that l 0 (·) is a genuine free boundary. We obviously need information about its behavior. Without it, the above system is not closed.
Let us suppose that t ≥ 0. The necessary and sufficient condition for continuity of the function given below is the matching condition (5.10). In addition, since we have a faceted region, the coincidence set of the obstacle problem (5.6) is not empty. By definition, ±l_0 form its boundary, that is, l_0 ≤ λ_0. If d_x^+(l_0(t), t) = 0, then at such a point (5.11) holds, and we shall say that (Γ, ξ) satisfies the tangency condition at l_0. However, if d_x^+(l_0(t), t) > 0, then we just have a boundary condition at this point and (5.11) does not hold.
We have the following two existence results.

Remark 5.
Let us stress again that l_00 is defined as the boundary of the coincidence set, where ξ is a solution to the variational problem (5.6). We note that in general [−l_00, l_00] ⊂ [−λ_0, λ_0], and the inclusion may be strict.

Theorem 11.
Let us suppose that all the assumptions of Theorem 10 hold except (d); that is, the tangency condition (5.11) and the inequality sign in (e) are reversed, so that instead of (5.11) the opposite inequality is satisfied. Moreover, we assume that d_0 ∈ C^{1,1}([l_00, ∞)), the right derivative d_x^+(0, l_00) is positive, and σ ∈ C^{1,1}. Then there is a unique local-in-time solution to (5.9) such that at no time t > 0 does the tangency condition (5.11) hold. Subsequently, if ξ(t, ·) is defined as in Theorem 10 (iii), then (Γ(d(t, ·)), ξ(t, ·)) is a variational solution to (5.1).

Remark 6.
We note that l_0 is a genuine free boundary; its behavior is determined by σ. For instance, if σ is independent of time, σ = σ(x), then l_0(t) = l_00. The type of behavior of the interfacial curve is determined by Σ_0; this quantity is defined in [30, eq. (3.14)], and the properties of l_0 are presented in [30, Section 3.1]. Our situation is simpler because we deal with a single facet for a graph of an admissible function, but the main difference is that here we have an unbounded domain. For the sake of completeness, we offer a sketch of the proof in the Appendix.

Variational Solutions are Viscosity Solutions and They are Unique
Here, we shall see that our variational solutions over R can be regarded as viscosity solutions; hence, they are unique. The comparison principle has been shown for equations on a bounded domain, but our sub- and supersolutions are fully determined for large values of |x|, so a comparison principle for bounded |x| is sufficient. We explain this in Corollary 2, following Theorem 12.

Theorem 12.
Under the conditions specified above, the variational solutions constructed in Theorem 10 and in Theorem 11 are viscosity solutions in the sense of the present paper, as long as |d_x| ≤ 1.
Proof. Of course, equation (5.4), augmented with the initial condition, may be written as d_t − Λ^σ_W(d) = 0, where Λ^σ_W(d) = (d/dx) ζ_{χ_l, χ_r} is given by (2.5) and the signs of χ_r, χ_l depend upon the point we are considering. We will show that this is the case if (Γ(d), ξ) is a variational solution, where ξ is given by (5.8) in Proposition 8. The interval (−l_0, l_0) is the inverse image of a faceted region of Γ in the language of [29,30]; it is the faceted region in the sense of the present paper. If I is any interval containing (−l_0, l_0), then ξ̄ = ξ|_I is a solution to the minimization problem
min{E_I(ζ) : ζ ∈ D_I}. (5.12)
We write Γ_I(t) = {(x, y) ∈ Γ(t) : x ∈ I}. Indeed, if there existed ζ_I, a solution to (5.12), such that E_I(ζ_I) < E_I(ξ_I), then ξ would not be a solution to (5.6), which is not possible. Thus, if (−l_0, l_0) ⊂ I, then ±l_0 form the boundary of the coincidence set, where the solution ξ to (5.6) attains −γ_Λ; that is, on the coincidence set ζ_I(x) = G(x) + γ_Λ. Since the boundary conditions in K_Z^{++} are those of D_{[−l_0, l_0]}, we immediately conclude, by the previous considerations, that ζ defined by Z − ξ is the solution to (5.13).
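The minimality of ξ̄ = ξ|_I claimed above follows from the usual cut-and-paste argument. Assuming, as the proof implicitly uses, that the functional E splits additively over Γ_I and Γ \ Γ_I, the competitor is:

```latex
% If \zeta_I \in D_I satisfied E_I(\zeta_I) < E_I(\xi|_I), then the field
\tilde\xi \;=\;
\begin{cases}
  \zeta_I & \text{on } \Gamma_I,\\
  \xi & \text{on } \Gamma \setminus \Gamma_I,
\end{cases}
% would satisfy E(\tilde\xi) < E(\xi), contradicting the minimality of
% \xi required by the definition of a variational solution in (5.6).
```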
After these preparations, we may check that a variational solution is a viscosity solution. First, we shall see that d is a supersolution. For this purpose, we take a test function ϕ ∈ A_P(Q) such that d − ϕ attains a minimum at (x_0, t_0), where t_0 ∈ (0, T). We have to show that
ϕ_t − Λ^σ_W(ϕ) ≥ 0. (5.14)
Inequality (5.14) (and (5.16) below) is to be checked at each point. We have to consider two cases for the interfacial curves: (a) the free boundary l_0 is a tangency curve; (b) the free boundary l_0 is a matching curve and the tangency condition is violated.
We begin with (i). Since we assumed that d_0 ∈ C¹, we know (see Theorem 10 or Theorem 11) that the function d is differentiable at (x_0, t_0). Hence, for ϕ(x, t) = f(x) + g(t) with d − ϕ ≥ 0 in a neighborhood of (x_0, t_0), we have ϕ_t(x_0, t_0) = d_t(x_0, t_0). Due to Definition 2, we have Λ^σ_W(ϕ) = σ = Λ^σ_W(d). As a result, (5.14) follows, as desired. Now we look at (ii). The argument depends on the type of the interfacial curve l_0. Let us first assume that l_0 is a tangency curve.
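The computation behind case (i) can be summarized as follows; this is a schematic rendering which assumes, as in the proof, that d satisfies d_t − Λ^σ_W(d) = 0 classically away from the facet:

```latex
% At an interior minimum of d - \varphi, with both functions
% differentiable at (x_0, t_0), the first-order conditions give
\varphi_t(x_0, t_0) = d_t(x_0, t_0), \qquad
\varphi_x(x_0, t_0) = d_x(x_0, t_0),
% and since \Lambda^\sigma_W(\varphi) = \sigma = \Lambda^\sigma_W(d)
% away from the facet,
\varphi_t - \Lambda^\sigma_W(\varphi)
  \;=\; d_t - \Lambda^\sigma_W(d) \;=\; 0 \;\ge\; 0,
```

which is exactly (5.14).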
In the considered case, d is also differentiable at (x_0, t_0). If ϕ is a test function such that d − ϕ attains its minimum at (x_0, t_0), then ϕ_x(x_0, t_0) = d_x(x_0, t_0). Since f ∈ C²_P(Ω), we immediately see that I = R(f, x_0), the faceted region of ϕ at (x_0, t_0), must contain [−l_0, l_0]. Let us suppose that ξ_I is the solution to min{E_I(ω) : ω ∈ D_I}.
By the geometric interpretation of the obstacle problem (5.6) (see [29, Proposition 2.3]), the coincidence set is I \ (−l_0, l_0). This is the place where we use the fact that the tangency condition holds at x_0.
As a result of the above observation, we have Λ^σ_W(d) = Λ^σ_W(ϕ). Thus, by (5.9), inequality (5.14) follows, as desired. Let us note that this argument works well for (x_0, t_0) = (l_0(t_0), t_0) if the tangency condition holds, so (iii) holds in this case.
We continue our analysis of case (ii). We have to consider the situation when l 0 is a matching curve. We will have to compare Λ σ W (d) and Λ σ W (ϕ). One way is to invoke Theorem 4, but we think it is instructive to check it directly.
Let us suppose that I = [−a, b] is the faceted region of ϕ containing (x_0, t_0). We consider the minimization problem (5.13) defining ζ_I on that interval. Without loss of generality, we may restrict our attention to a subinterval [μ_0, μ_1] ⊂ [−a, b] such that dζ_I/dx is constant on [μ_0, μ_1]. Let us first consider the situation when μ_0 = −μ_1. We have to compare the velocities dζ_I/dx and dξ/dx on [−l_0, l_0]. Since the tangency condition is violated at l_0, there is a possibility of bigger faceted regions containing [−l_0, l_0]. Moreover, dζ_I/dx is the slope of the line connecting 0 and Z(μ_1) + γ_Λ, while dξ/dx is the slope of the line connecting 0 and Z(l_0) + γ_Λ. Since Z is strictly increasing, we deduce that dζ_I/dx < dξ/dx. The same observation applies when we want to compare the slopes of minimizers to (5.13) on [−a, b] and [−μ_1, μ_1] with a = μ_1 or b = μ_1 but [−a, b] ⊃ [−μ_1, μ_1]. Thus, we have (5.15).

(iii) In order to complete the discussion of the facet, we have to consider the case when the tangency condition is violated at the interfacial point. Let us suppose that this happens at x_0 = l_0 (the case x_0 = −l_0 is analogous). At this point d(t_0, ·) need not be differentiable with respect to x. Hence, if ϕ is a test function such that d − ϕ attains its minimum, then d_x^−(l_0(t_0), t_0) = 0 and d_x^+(l_0(t_0), t_0) ≥ 0. The point (l_0(t_0), t_0) belongs to the faceted region of d, hence it belongs to the faceted region of the test function ϕ. As a result, the above consideration of Λ^σ_W(ϕ) is valid, and the series of inequalities (5.15) is valid too.
We also have to check that d is a subsolution. For this purpose we take a test function ϕ ∈ A_P(Q) such that max(d − ϕ) = d(t_0, x_0) − ϕ(t_0, x_0).
We shall show that
ϕ_t − Λ^σ_W(ϕ) ≤ 0. (5.16)
We consider the same three cases. They are handled in an analogous way; we exploit the fact that d(t, ·) is a C¹ function on (−l_0, l_0) and on R \ [−l_0, l_0]. Case (i) is handled as before, because of the differentiability of d and ϕ at (x_0, t_0).

Corollary 2.
Let us suppose that the assumptions of Theorem 12 hold. The variational solutions constructed in Theorems 10 and 11 are unique, as long as |d_x| ≤ 1 and the initial condition d_0 is strictly increasing on [l_00, ∞).
Proof. Let us suppose that (Γ(d_i), ξ_i), i = 1, 2, are two variational solutions with initial datum Γ(d_0), where d_0 is admissible. We notice that it is sufficient to show that d_1 = d_2.
Let us set A = max_{t∈[0,T)} l_0(t) + 1. Due to (5.7), by formula (6.2), we conclude that d_x^i(t, x) ≠ 0 for all (t, x) ∈ (0, T) × (A, ∞). Since we solve an ODE for |x| > A, by inspection of equation (6.1) we immediately see that if v := d_1 is a supersolution and u := d_2 is a subsolution to (6.1), then v ≥ u. Subsequently, by interchanging the roles of d_1 and d_2, we conclude that d_1 = d_2 for (t, x) ∈ [0, T) × (R \ (−A, A)). As a result, an application of the Comparison Principle on (−A, A) yields that d_1 = d_2 for all (t, x) ∈ [0, T) × R.

Remark 7. We notice that the same kind of argument shows that Theorems 10 and 11 remain valid if σ = σ(x_1, x_2) satisfies an extension of condition (5.7) to functions of two variables. Moreover, by Remark 3, the Comparison Principle (Theorem 7) holds in this case, too.