Error estimates for finite difference schemes associated with Hamilton-Jacobi equations on a junction

This paper is concerned with monotone (time-explicit) finite difference schemes associated with first order Hamilton-Jacobi equations posed on a junction. They extend the schemes recently introduced by Costeseque, Lebacque and Monneau (2013) to general junction conditions. On the one hand, we prove the convergence of the numerical solution towards the viscosity solution of the Hamilton-Jacobi equation as the mesh size tends to zero for general junction conditions. On the other hand, we derive optimal error estimates of order $(\Delta x)^{1/2}$ in $L^{\infty}_{\mathrm{loc}}$ for junction conditions of optimal-control type, at least if the flux is "strictly limited".


Introduction
This paper is concerned with the numerical approximation of first order Hamilton-Jacobi equations posed on a junction, that is to say a network made of one node and a finite number of edges. The theory of weak (viscosity) solutions for such equations on such domains has reached maturity by now [20, 21, 1, 18, 17]. In particular, it is now understood that general junction conditions reduce to special ones of optimal-control type [17]. Roughly speaking, it is proved in [17] that imposing a junction condition ensuring the existence of a continuous weak (viscosity) solution and a comparison principle is equivalent to imposing a junction condition obtained by "limiting the flux" at the junction point. For the "minimal" flux-limited junction conditions, Costeseque, Lebacque and Monneau [10] introduced a monotone numerical scheme and proved its convergence. Their scheme can be naturally extended to general junction conditions; our first contribution is to introduce this extension and to prove its convergence. Our second and main result is an error estimate à la Crandall-Lions [11] in the case of flux-limited junction conditions. It is explained in [11] that the proof of the comparison principle between sub- and super-solutions of the continuous Hamilton-Jacobi equation can be adapted in order to derive error estimates between the numerical solution associated with monotone (stable and consistent) schemes and the continuous solution. In the Euclidean case, the comparison principle is proved thanks to the technique of doubling variables; it relies on the classical penalisation term $\varepsilon^{-1}|x-y|^2$. Such a penalisation procedure is known to fail in general if the equation is posed on a junction; it is explained in [17] that it has to be replaced with a vertex test function. In order to derive error estimates as in [11], it is important to study the regularity of the vertex test function. More precisely, we prove (Proposition 5.1) that it can be constructed in such a way that its gradient is locally Lipschitz continuous, at least if the flux is "strictly limited". Such a regularity result is of independent interest.

Hamilton-Jacobi equations posed on junctions
A junction is a network made of one node and a finite number of infinite edges. It can be viewed as the set of $N$ distinct copies ($N \ge 1$) of the half-line which are glued at the origin. For $\alpha = 1, \dots, N$, each branch $J_\alpha$ is assumed to be isometric to $[0, +\infty)$ and $J = \bigcup_{\alpha = 1, \dots, N} J_\alpha$ with $J_\alpha \cap J_\beta = \{0\}$ for $\alpha \neq \beta$, where the origin $0$ is called the junction point. For points $x, y \in J$, $d(x, y)$ denotes the geodesic distance on $J$, defined as $d(x,y) = |x - y|$ if $x, y$ belong to the same branch, and $d(x,y) = |x| + |y|$ if $x, y$ belong to different branches.
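The geodesic distance admits a direct implementation. In the sketch below, a point of $J$ is encoded as a pair (branch index, distance to the junction point); this encoding is an illustration choice of ours, not notation from the paper.

```python
# Sketch: geodesic distance on a junction with N branches.
# A point is encoded as (alpha, r): branch index and distance to the
# junction point; r = 0 is the junction point regardless of alpha.

def junction_distance(x, y):
    """Geodesic distance d(x, y) on the junction J."""
    (ax, rx), (ay, ry) = x, y
    if ax == ay or rx == 0.0 or ry == 0.0:
        return abs(rx - ry)     # same branch (or one point is the junction)
    return rx + ry              # shortest path goes through the junction point
```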
For a real-valued function $u$ defined on $J$, $\partial_\alpha u(x)$ denotes the (spatial) derivative of $u$ at $x \in J_\alpha \setminus \{0\}$ and the gradient of $u$ is defined as follows,
$$ u_x(x) := \partial_\alpha u(x) \quad \text{for } x \in J_\alpha \setminus \{0\}. $$
With such a notation in hand, we consider the following Hamilton-Jacobi equation posed on the junction $J$,
$$ \begin{cases} u_t + H_\alpha(u_x) = 0 & \text{in } (0,T) \times J_\alpha \setminus \{0\}, \\ u_t + F\big(\tfrac{\partial u}{\partial x_1}, \dots, \tfrac{\partial u}{\partial x_N}\big) = 0 & \text{in } (0,T) \times \{0\}, \end{cases} \tag{1.2} $$
submitted to the initial condition
$$ u(0, x) = u_0(x) \quad \text{for } x \in J, \tag{1.3} $$
where $u_0$ is globally Lipschitz in $J$. The second equation in (1.2) is referred to as the junction condition. We consider the important case of Hamiltonians $H_\alpha$ satisfying the following conditions:
$$ \begin{cases} \lim_{|p| \to +\infty} H_\alpha(p) = +\infty & \text{(coercivity)}, \\ \{H_\alpha \le \lambda\} \text{ is convex for all } \lambda \in \mathbb{R} & \text{(quasi-convexity)}. \end{cases} \tag{1.4} $$
In particular, there exists $p_0^\alpha \in \mathbb{R}$ such that $H_\alpha$ is non-increasing in $(-\infty, p_0^\alpha]$ and non-decreasing in $[p_0^\alpha, +\infty)$, and we set
$$ H_\alpha^-(p) := H_\alpha(\min(p, p_0^\alpha)), \qquad H_\alpha^+(p) := H_\alpha(\max(p, p_0^\alpha)), $$
where $H_\alpha^-$ is non-increasing and $H_\alpha^+$ is non-decreasing. We next introduce a one-parameter family of junction conditions: given a flux limiter $A \in \mathbb{R} \cup \{-\infty\}$, the $A$-limited flux junction function is defined for $p = (p_1, \dots, p_N)$ as
$$ F_A(p) := \max\Big( A,\ \max_{\alpha = 1, \dots, N} H_\alpha^-(p_\alpha) \Big), \tag{1.5} $$
where $H_\alpha^-$ is the non-increasing part of $H_\alpha$. We now consider the following important special case of (1.2),
$$ \begin{cases} u_t + H_\alpha(u_x) = 0 & \text{in } (0,T) \times J_\alpha \setminus \{0\}, \\ u_t + F_A\big(\tfrac{\partial u}{\partial x_1}, \dots, \tfrac{\partial u}{\partial x_N}\big) = 0 & \text{in } (0,T) \times \{0\}. \end{cases} \tag{1.6} $$
We point out that all the junction functions $F_A$ satisfy the assumption imposed on general junction functions below. As far as general junction conditions are concerned, we assume that the junction function $F \colon \mathbb{R}^N \to \mathbb{R}$ is continuous, non-increasing with respect to each of its variables, and coercive. (1.8)
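The decomposition of a quasi-convex Hamiltonian into its monotone parts and the flux-limited junction function $F_A$ can be illustrated in code. The quadratic Hamiltonian $H_\alpha(p) = p^2$ (minimum at $p_0^\alpha = 0$) below is an illustrative choice of ours, not one made in the paper.

```python
# Sketch of the A-limited flux junction function F_A for the model
# Hamiltonian H(p) = p**2 (quasi-convex, coercive, minimum at p0 = 0).

def H(p):
    # illustrative quadratic Hamiltonian
    return p * p

def H_minus(p):
    # non-increasing part: H frozen to the right of its minimum
    return H(min(p, 0.0))

def H_plus(p):
    # non-decreasing part: H frozen to the left of its minimum
    return H(max(p, 0.0))

def F_A(p, A):
    # A-limited flux junction function at p = (p_1, ..., p_N)
    return max(A, max(H_minus(q) for q in p))
```

For instance, a strongly negative gradient on one branch dominates the limiter, while for non-negative gradients the value is the flux limiter $A$ itself.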

Presentation of the scheme
The domain $(0, +\infty) \times J$ is discretized with respect to time and space. We choose a regular grid in order to simplify the presentation, but it is clear that more general meshes could be used here.
The space step is denoted by $\Delta x$ and the time step by $\Delta t$. If $\varepsilon$ denotes $(\Delta t, \Delta x)$, the mesh (or grid) $\mathcal{G}_\varepsilon$ is chosen as
$$ \mathcal{G}_\varepsilon := \{ n \Delta t : n \in \mathbb{N} \} \times J_\varepsilon, \qquad J_\varepsilon := \{ i \Delta x : i \in \mathbb{N} \} \subset J \ \text{(union over the branches)}. $$
It is convenient to write $x_i^\alpha$ for $i \Delta x \in J_\alpha$. A numerical approximation $u_\varepsilon$ of the solution $u$ of the Hamilton-Jacobi equation is defined on $\mathcal{G}_\varepsilon$; we write $U_i^{\alpha,n}$ for its value at $(n\Delta t, x_i^\alpha)$, which we want to be an approximation of $u(n\Delta t, x_i^\alpha)$ for $n \in \mathbb{N}$, $i \in \mathbb{N}$, where $\alpha$ stands for the index of the branch. We consider the following time-explicit scheme: for $n \ge 0$,
$$ \begin{cases} \dfrac{U_i^{\alpha,n+1} - U_i^{\alpha,n}}{\Delta t} + \max\big( H_\alpha^+(p_{i,-}^{\alpha,n}),\ H_\alpha^-(p_{i,+}^{\alpha,n}) \big) = 0, & i \ge 1,\ \alpha = 1, \dots, N, \\[1mm] \dfrac{U_0^{n+1} - U_0^{n}}{\Delta t} + F\big( p_{0,+}^{1,n}, \dots, p_{0,+}^{N,n} \big) = 0, & i = 0, \end{cases} \tag{1.9} $$
where $p_{i,\pm}^{\alpha,n}$ are the discrete (space) gradients defined by
$$ p_{i,+}^{\alpha,n} := \frac{U_{i+1}^{\alpha,n} - U_i^{\alpha,n}}{\Delta x}, \qquad p_{i,-}^{\alpha,n} := \frac{U_i^{\alpha,n} - U_{i-1}^{\alpha,n}}{\Delta x}, \tag{1.10} $$
with the initial condition
$$ U_i^{\alpha,0} = u_0(x_i^\alpha). \tag{1.11} $$
The following Courant-Friedrichs-Lewy (CFL) condition ensures that the explicit scheme is monotone,
$$ \frac{\Delta x}{\Delta t} \ \ge\ \max_{\substack{i \ge 0,\ \alpha = 1, \dots, N, \\ 0 \le n \le n_T}} \ \max\big( |H_\alpha'(p_{i,+}^{\alpha,n})|,\ |H_\alpha'(p_{i,-}^{\alpha,n})| \big), \tag{1.12} $$
where the integer $n_T$ is defined as $n_T = \lfloor T / \Delta t \rfloor$ for a given $T > 0$.
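One explicit time step of the scheme can be sketched as follows, again for the model quadratic Hamiltonians and the flux-limited junction function $F_A$. The array layout (one row per branch, column 0 for the junction point) and the untouched last node are implementation choices of this sketch, not prescriptions of the paper.

```python
import numpy as np

def scheme_step(U, dt, dx, A=0.0):
    """One explicit step of the monotone scheme on a junction.

    U: (N, M) array; U[a, i] approximates u(n*dt, i*dx) on branch a,
    with i = 0 the junction point (U[:, 0] shares a common value).
    Model Hamiltonians H_a(p) = p**2; junction function F = F_A.
    """
    Hm = lambda p: np.minimum(p, 0.0) ** 2     # non-increasing part H^-
    Hp = lambda p: np.maximum(p, 0.0) ** 2     # non-decreasing part H^+
    N, M = U.shape
    V = U.copy()
    for a in range(N):
        pm = (U[a, 1:M-1] - U[a, 0:M-2]) / dx  # discrete gradients p_{i,-}
        pp = (U[a, 2:M] - U[a, 1:M-1]) / dx    # discrete gradients p_{i,+}
        V[a, 1:M-1] = U[a, 1:M-1] - dt * np.maximum(Hp(pm), Hm(pp))
    p0 = (U[:, 1] - U[:, 0]) / dx              # junction gradients p_{0,+}
    V[:, 0] = U[:, 0] - dt * max(A, Hm(p0).max())  # F_A update at the junction
    return V    # last node i = M-1 is left untouched in this sketch
```

For the profile $u_0(x) = |x|$ (equal on both branches), interior values decrease by $\Delta t \cdot H^+(1) = \Delta t$ while the junction value is stationary, since $H^-(1) = 0$ and $A = 0$.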

Main results
As previously noticed in [10] in the special case $F = F_{A_0}$, it is not obvious that the CFL condition (1.12) can be satisfied; the reason is that, for $\alpha$, $i$ and $n$ given, the discrete gradients $p_{i,\pm}^{\alpha,n}$ themselves depend on $\Delta t$ and $\Delta x$ through the numerical scheme. For this reason we will consider the following more restrictive CFL condition, which can be checked from the initial data,
$$ \frac{\Delta x}{\Delta t} \ \ge\ \max_{\alpha = 1, \dots, N} \ \sup_{p \in [\underline{p}_\alpha, \overline{p}_\alpha]} |H_\alpha'(p)| \tag{1.13} $$
for some $\underline{p}_\alpha, \overline{p}_\alpha, p_\alpha^0 \in \mathbb{R}$ to be fixed. We can argue as in [10] and prove that $\underline{p}_\alpha, \overline{p}_\alpha, p_\alpha^0 \in \mathbb{R}$ can be chosen such that the CFL condition (1.12) is satisfied and, in turn, the scheme is monotone (Lemma 4.1 in Section 4). We will also see that it is stable (Lemma 4.4) and consistent (Lemma 4.5). It is thus known that it converges [11, 4].
Theorem 1.1 (Convergence for general junction conditions). Let $T > 0$ and $u_0$ be Lipschitz continuous. There exist $\underline{p}_\alpha, \overline{p}_\alpha, p_\alpha^0 \in \mathbb{R}$, $\alpha = 1, \dots, N$, depending only on the initial data, the Hamiltonians and the junction function $F$, such that, if $\varepsilon$ satisfies the CFL condition (1.13), then the numerical solution $u_\varepsilon$ defined by (1.9)-(1.11) converges locally uniformly, as $\varepsilon$ goes to zero, to the unique weak (relaxed viscosity) solution $u$ of (1.2)-(1.3); on any compact set $K \subset [0,T) \times J$,
$$ \limsup_{\varepsilon \to 0} \ \sup_{\mathcal{G}_\varepsilon \cap K} |u_\varepsilon - u| = 0. \tag{1.14} $$
The main result of this paper lies in getting error estimates in the case of flux-limited junction conditions.
Theorem 1.2 (Error estimates for flux-limited junction conditions). Let $u_0$ be Lipschitz continuous, let $u_\varepsilon$ be the solution of the associated numerical scheme (1.9)-(1.11) and let $u$ be the weak (viscosity) solution of (1.6)-(1.3) for some $A \in \mathbb{R}$. If the CFL condition (1.13) is satisfied, then there exists $C > 0$ such that the error estimate (1.15) holds.

Related results
Numerical schemes for Hamilton-Jacobi equations on networks. The discretization of weak (viscosity) solutions of Hamilton-Jacobi equations posed on networks has been studied in a few papers only. Apart from [10] mentioned above, we are only aware of two other works. A convergent semi-Lagrangian scheme is introduced in [6] for equations of eikonal type. In [15], an adapted Lax-Friedrichs scheme is used to solve a traffic model; it is worth mentioning that this discretization requires passing from the scalar conservation law to the associated Hamilton-Jacobi equation at each time step.
Link with monotone schemes for scalar conservation laws. We first follow [10] by emphasizing that the convergence result, Theorem 1.1, implies the convergence of a monotone scheme for scalar conservation laws (in the sense of distributions).
Away from the junction point, the discrete gradients $V_i^{\alpha,n} := p_{i,+}^{\alpha,n}$ satisfy a conservative monotone scheme associated with the conservation law $v_t + (H_\alpha(v))_x = 0$, submitted to the initial condition $V_i^{\alpha,0} = p_{i,+}^{\alpha,0}$. In view of Theorem 1.1, we thus can conclude that the discrete solution $v_\varepsilon$ constructed from $(V^n)$ converges towards $u_x$ in the sense of distributions, at least far from the junction point.
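This link can be checked numerically: away from the junction, one step of the Hamilton-Jacobi scheme for the quadratic model Hamiltonian induces, on the discrete gradients, a conservative update with numerical flux $g(a,b) = \max(H^+(a), H^-(b))$. The profile below is arbitrary and the quadratic Hamiltonian is an illustrative choice.

```python
import numpy as np

def g(a, b):
    # numerical flux of the induced conservative scheme:
    # g(a, b) = max(H_plus(a), H_minus(b)) for H(p) = p**2
    return max(max(a, 0.0) ** 2, min(b, 0.0) ** 2)

dx, dt = 0.1, 0.01
U = np.array([0.0, 0.05, 0.2, 0.3, 0.32, 0.4])   # arbitrary profile on one branch
Unew = U.copy()
for i in range(1, len(U) - 1):                   # one HJ step on interior nodes
    pm, pp = (U[i] - U[i-1]) / dx, (U[i+1] - U[i]) / dx
    Unew[i] = U[i] - dt * max(max(pm, 0.0) ** 2, min(pp, 0.0) ** 2)

V, Vnew = np.diff(U) / dx, np.diff(Unew) / dx    # discrete gradients before/after
i = 2                                            # an interior index
lhs = Vnew[i]                                    # gradient after the HJ step
rhs = V[i] - (dt / dx) * (g(V[i], V[i+1]) - g(V[i-1], V[i]))  # conservative update
```

The two quantities `lhs` and `rhs` coincide (up to round-off), which is the announced conservative structure.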
Scalar conservation laws with Dirichlet boundary conditions and constrained fluxes.
We would like next to explain why our result can be seen as the Hamilton-Jacobi counterpart of the error estimates obtained by Ohlberger and Vovelle [19] for scalar conservation laws submitted to Dirichlet boundary conditions. On the one hand, it is known since [3] that Dirichlet boundary conditions imposed on scalar conservation laws should be understood in a generalized sense. This can be seen by studying the parabolic regularization of the problem. A boundary layer analysis can be performed for systems if the solution of the conservation law is smooth; see for instance [14, 16]. Depending on whether the boundary is characteristic or not, the error is $\varepsilon^{1/2}$ or $\varepsilon$. In the scalar case, it is proved in [13] that the error between the solution of the regularized equation with a vanishing viscosity coefficient equal to $\varepsilon$ and the entropy solution of the conservation law (which is merely of bounded variation in space) can be estimated. In [19], the authors derive error estimates for finite volume schemes associated with such boundary value problems and prove that the error is of order $(\Delta x)^{1/6}$ (in $L^1_{t,x}$ norm). More recently, scalar conservation laws with flux constraints were studied [9, 8] and some finite volume schemes were built [2]. In [7], assuming that the flux is bell-shaped, that is to say that its opposite is quasi-convex, it is proved that the error between the finite volume scheme and the entropy solution is of order $(\Delta x)^{1/3}$, and that it can be improved to $(\Delta x)^{1/2}$ under an additional condition on the traces of the BV entropy solution. On the other hand, the derivative of a weak (viscosity) solution of a Hamilton-Jacobi equation posed on the real line is known to coincide with the entropy solution of the corresponding scalar conservation law. It is therefore reasonable to expect that the error between the viscosity solution of the Hamilton-Jacobi equation and its approximation is as good as the one obtained between the entropy solution of the scalar conservation law and its approximation. Moreover, it is explained in [18] that the junction conditions of optimal-control type are related to the BLN condition mentioned above; such a correspondence is recalled in Appendix A. It is therefore interesting to get an error estimate of order $(\Delta x)^{1/3}$ for the Hamilton-Jacobi problem.

Open problems
Let us first mention that it is not known whether the error estimate between the (entropy) solution of the scalar conservation law with Dirichlet boundary condition and the solution of the parabolic approximation [13], or the numerical scheme [19], is optimal. Similarly, we do not know if our error estimate is optimal or not. Deriving error estimates for general junction conditions seems difficult to us. The main difficulty is the singular geometry of the domain. The vertex test function, used in deriving the error estimates for flux-limited solutions, is designed to compare flux-limited solutions. Consequently, when applying the reasoning of Section 6, the discrete viscosity inequality cannot be combined with the continuous one. We expect that a layer develops between the continuous solution and the discrete scheme at the junction point.
Organization of the article. The remainder of the paper is organized as follows. In Section 2, we recall definitions and results from [17] about viscosity solutions for (1.2)-(1.3) and the so-called vertex test function. Section 3 is dedicated to the derivation of discrete gradient estimates for the numerical scheme. In Section 4, the convergence result, Theorem 1.1, is proved. In Section 5, it is proved that the vertex test function constructed in [17] can be chosen so that its gradient is locally Lipschitz continuous (at least if the flux is strictly limited). The final section, Section 6, is dedicated to the proof of the error estimates.

Viscosity solutions
We introduce the main definitions related to viscosity solutions for Hamilton-Jacobi equations that are used in what follows. For a more general introduction to viscosity solutions, the reader may refer to Barles [5] and to Crandall, Ishii and Lions [12]. In [17], the following assumption on $F$ is imposed, which is weaker than (1.8) above (no coercivity is needed in the theory developed in [17]):
$$ F \text{ is continuous and non-increasing with respect to each variable.} \tag{2.1} $$
Space of test functions. For a smooth real-valued function $u$ defined on $(0,T) \times J$, we denote by $u^\alpha$ the restriction of $u$ to $(0, T) \times J_\alpha$.
Then we define the natural space of test functions on the junction:
$$ C^1(J_T) := \{ u \in C(J_T) : u^\alpha \in C^1((0,T) \times J_\alpha) \text{ for all } \alpha = 1, \dots, N \}, \qquad J_T := (0,T) \times J. $$
Viscosity solutions. In order to define classical viscosity solutions, we recall the definition of the upper and lower semi-continuous envelopes $u^\star$ and $u_\star$ of a (locally bounded) function $u$ defined on $[0,T) \times J$:
$$ u^\star(t,x) := \limsup_{(s,y) \to (t,x)} u(s,y), \qquad u_\star(t,x) := \liminf_{(s,y) \to (t,x)} u(s,y). $$
It is convenient to introduce the following shorthand notation,
$$ H(x, p) := \begin{cases} H_\alpha(p) & \text{if } x \in J_\alpha \setminus \{0\}, \\ F(p) & \text{if } x = 0. \end{cases} \tag{2.2} $$
Definition 1 (Viscosity solution). Assume that the Hamiltonians satisfy (1.4) and that $F$ satisfies (2.1), and let $u : (0,T) \times J \to \mathbb{R}$.
(i)-(ii) We say that $u$ is a sub-solution (resp. super-solution) of (1.2) in $(0,T) \times J$ if for all test functions $\varphi \in C^1(J_T)$ such that $u^\star \le \varphi$ (resp. $u_\star \ge \varphi$) with equality at $(t_0, x_0)$ for some $t_0 > 0$, we have
$$ \varphi_t + H_\alpha(\partial_\alpha \varphi) \le 0 \ \text{at } (t_0, x_0) \text{ if } x_0 \in J_\alpha \setminus \{0\}, \qquad \varphi_t + F(\partial \varphi) \le 0 \ \text{at } (t_0, x_0) \text{ if } x_0 = 0 $$
(resp. with $\ge 0$). (iii) We say that $u$ is a (viscosity) solution if $u$ is both a sub-solution and a super-solution.
As explained in [17], it is difficult to construct viscosity solutions in the sense of Definition 1 because of the junction condition. It is possible in the case of the flux-limited junction conditions $F_A$. For general junction conditions, the Perron process generates a viscosity solution in the following relaxed sense [17].
(i) We say that $u$ is a relaxed sub-solution (resp. relaxed super-solution) of (1.2) in $(0,T) \times J$ if for all test functions $\varphi \in C^1(J_T)$ such that $u^\star \le \varphi$ (resp. $u_\star \ge \varphi$) with equality at $(t_0, x_0)$ for some $t_0 > 0$, we have the usual inequalities away from the junction point, while at $x_0 = 0$,
$$ \text{either} \quad \varphi_t + F(\partial \varphi) \le 0 \quad \text{or} \quad \varphi_t + H_\alpha(\partial_\alpha \varphi) \le 0 \ \text{for some } \alpha \qquad (\text{resp. with } \ge 0) \quad \text{at } (t_0, 0). $$
(ii) We say that $u$ is a relaxed (viscosity) solution of (1.2) if $u$ is both a relaxed sub-solution and a relaxed super-solution.
Theorem 2.1 (Comparison principle on a junction). Let $A \in \mathbb{R} \cup \{-\infty\}$. Assume that the Hamiltonians satisfy (1.4) and that the initial datum $u_0$ is uniformly continuous. Then for all sub-solutions $u$ and super-solutions $v$ of (1.6)-(1.3) satisfying, for some $T > 0$ and $C_T > 0$, the growth condition
$$ u(t,x) \le C_T (1 + d(0,x)), \qquad v(t,x) \ge -C_T (1 + d(0,x)) \qquad \text{for all } (t,x) \in [0,T) \times J, $$
we have $u \le v$ in $[0,T) \times J$.

Inverse functions of Hamiltonians and junction functions
In the proofs of the discrete gradient estimates, as well as in the construction of the vertex test function, "generalized" inverse functions of $H_\alpha^\pm$ are needed; they are defined as follows:
$$ (H_\alpha^+)^{-1}(\lambda) := \inf \{ p \ge p_0^\alpha : H_\alpha(p) \ge \lambda \}, \qquad (H_\alpha^-)^{-1}(\lambda) := \sup \{ p \le p_0^\alpha : H_\alpha(p) \ge \lambda \} \qquad \text{for } \lambda \ge A_\alpha, $$
with the additional convention that $(H_\alpha^\pm)^{-1}(+\infty) = \pm\infty$, where $A_\alpha := \min_{\mathbb{R}} H_\alpha$. In order to define a "generalized" inverse function of $F$, we remark that (1.8) implies that for all $K \in \mathbb{R}$, there exists $p(K) = (p_1(K), \dots, p_N(K))$ such that $F(p(K)) = K$. Remark that the functions $p_\alpha$ can be chosen non-increasing.
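Such generalized inverses are easy to approximate numerically on the monotone branches. The bisection sketch below assumes a continuous coercive $H$ with minimum at $p_0$; returning $p_0$ below the minimum value is a convention of this sketch, and the quadratic Hamiltonian in the test is only an illustrative case.

```python
def inv_H_plus(H, lam, p0=0.0, hi=1e6, tol=1e-10):
    """Generalized inverse of the non-decreasing part H^+ of H:
    the smallest p >= p0 with H(p) >= lam, found by bisection.
    Assumes H continuous, coercive, minimal at p0; returns p0
    when lam <= H(p0) (a convention of this sketch)."""
    if lam <= H(p0):
        return p0
    lo = p0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if H(mid) >= lam:       # keep the invariant H(hi) >= lam
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For $H(p) = p^2$ the exact inverse on the increasing branch is $\sqrt{\lambda}$, which the bisection recovers.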

Vertex test function
In this subsection, we recall what a vertex test function is. It is introduced in [17] in order to prove a comparison principle for (1.2). This function $G$ plays the role of $|x - y|^2$ in the classical "doubling variables" method [12]. Theorem 2.4 (Vertex test function - [17]). Let $A \in \mathbb{R} \cup \{-\infty\}$ and $\gamma > 0$. Assume that the Hamiltonians satisfy (1.4) and $p_0^\alpha = 0$, that is to say,
$$ \min_{\mathbb{R}} H_\alpha = H_\alpha(0). \tag{2.4} $$
Then there exists a function $G : J^2 \to \mathbb{R}$ enjoying the following properties.
(iv) (Compatibility condition on the gradients) For all $(x, y) \in J^2$, the gradients $G_x$ and $G_y$ satisfy the compatibility condition stated in [17]. (vi) (Gradient bounds) For all $K \ge 0$, there exists $C_K > 0$ such that the gradients of $G$ are bounded by $C_K$ for all $(x, y)$ with $d(x, y) \le K$.

Remark 1. Following [17], the vertex test function $G$ is obtained as a regularized version of $A + G^0$, where $G^0$ is defined for $(x, y) \in J_\alpha \times J_\beta$ through the so-called germ $\mathcal{G}(A)$.


Gradient estimates for the scheme

This section is devoted to the proofs of the discrete (time and space) gradient estimates. These estimates ensure the monotonicity of the scheme and, in turn, its convergence.
Theorem 3.1 (Discrete gradient and time derivative estimates). If $(U_i^{\alpha,n})$ is the numerical solution of (1.9)-(1.11) and if the CFL condition (1.13) is satisfied with $m^0$ finite, then the following two properties hold true for any $n \ge 0$.

(i) (Gradient estimate) There exist $\underline{p}_\alpha, \overline{p}_\alpha, p_\alpha^0$ (only depending on $H_\alpha$, $u_0$ and $F$) such that $p_{i,+}^{\alpha,n} \in [\underline{p}_\alpha, \overline{p}_\alpha]$ for all $i \ge 0$ and $\alpha = 1, \dots, N$.

(ii) (Time derivative estimate) The discrete time derivative, defined as
$$ W_i^{\alpha,n} := \frac{U_i^{\alpha,n+1} - U_i^{\alpha,n}}{\Delta t}, $$
satisfies $m^0 \le W_i^{\alpha,n} \le M^0$ for all $i \ge 0$ and $\alpha = 1, \dots, N$.

Remark 2. The quantities $\underline{p}_\alpha, \overline{p}_\alpha, p_\alpha^0$ are defined in (3.2) through the "generalized" inverse functions $\pi_\alpha^\pm$ and $p$ of $H_\alpha$ and $F$ respectively. In order to establish Theorem 3.1, we first prove two auxiliary results; before stating them, some notation should be introduced.

Discrete time derivative estimates
In order to state the first one, Proposition 3.2 below, we introduce some notation. For $\sigma \in \{+, -\}$, we set
$$ I_{i,\sigma}^{\alpha,n} := [\min(p_{i,\sigma}^{\alpha,n}, p_0^\alpha),\ \max(p_{i,\sigma}^{\alpha,n}, p_0^\alpha)] $$
with $p_{i,\sigma}^{\alpha,n}$ defined in (1.10), and
$$ D_{i,+}^{\alpha,n} := \sup_{p \in I_{i,+}^{\alpha,n}} |H_\alpha'(p)|. $$
The following proposition asserts that if the discrete (space) gradients enjoy suitable estimates, then the discrete time derivative is controlled.
Step 1: $(m^n)_n$ is non-decreasing. We want to show that $W_i^{\alpha,n+1} \ge m^n$ for $i \ge 0$ and $\alpha = 1, \dots, N$. Let $i \ge 0$ be fixed and let us distinguish two cases.
Case 1: $i \ge 1$. Let a branch $\alpha$ be fixed and let $\sigma \in \{+, -\}$ be such that the maximum in (3.9) is attained. We then estimate $W_i^{\alpha,n+1}$, using (3.6) and (3.8) in the last line; using (3.7), we thus get $W_i^{\alpha,n+1} \ge m^n$.

Case 2: $i = 0$. We recall that in this case $U_0^{\beta,n} = U_0^n$ for any $\beta = 1, \dots, N$. We compute $W_0^{n+1}$ in this case; using (3.7), we argue as in Case 1 and get $W_0^{n+1} \ge m^n$.

Step 2: $(M^n)_n$ is non-increasing. We want to show that $W_i^{\alpha,n+1} \le M^n$ for $i \ge 0$ and $\alpha = 1, \dots, N$. We argue as in Step 1 by distinguishing two cases.
Case 2: $i = 0$. Using (3.5), we can argue exactly as in Step 1. The proof is now complete.

Gradient estimates
The second result needed in the proof of Theorem 3.1 is the following one. It asserts that if the discrete time derivative is controlled from below, then a discrete gradient estimate holds true.
Proof. Let $n \ge 0$ be fixed and consider $(U_i^{\alpha,n})_{\alpha,i}$ with $\Delta x, \Delta t > 0$ given. We compute $(U_i^{\alpha,n+1})_{\alpha,i}$ using the scheme (1.9). Let us consider any $i \ge 0$ and $\alpha = 1, \dots, N$. If $i \ge 1$, the result follows from $\max\big( H_\alpha^+(p_{i,-}^{\alpha,n}), H_\alpha^-(p_{i,+}^{\alpha,n}) \big) = -W_i^{\alpha,n} \le -K$ and the monotonicity of $H_\alpha^\pm$. If $i = 0$, the result similarly follows from $F(p_{0,+}^{1,n}, \dots, p_{0,+}^{N,n}) = -W_0^n \le -K$ and the "generalized" inverse of $F$. This achieves the proof of Proposition 3.3.

Proof of gradient estimates
Proof of Theorem 3.1. The idea of the proof is to introduce new Hamiltonians $\tilde{H}_\alpha$ and a new junction function $\tilde{F}$ for which it is easier to derive gradient estimates but whose corresponding numerical scheme in fact coincides with the original one.
Step 1: Modification of the Hamiltonians and the junction function. Let the new Hamiltonians $\tilde{H}_\alpha$, $\alpha = 1, \dots, N$, be defined as in (3.11), where $\underline{p}_\alpha$ and $\overline{p}_\alpha$ are defined in (3.2). The modified Hamiltonians $\tilde{H}_\alpha$ satisfy (1.4). Let the new junction function $\tilde{F}$ satisfy (1.8) and (3.12). We then consider the new numerical scheme with the same initial condition, namely $\tilde{U}_i^{\alpha,0} = u_0(x_i^\alpha)$. In view of (3.11) and (3.12), the CFL condition (1.13) implies that the CFL condition (1.12) holds for the new scheme, for any $i \ge 0$, $n \ge 0$ and $\alpha = 1, \dots, N$.

Step 2: First gradient bounds. Let $n \ge 0$ be fixed. If $\tilde{m}^n$ and $\tilde{M}^n$ are finite, we have $\tilde{m}^n \le \tilde{W}_i^{\alpha,n}$ for any $i \ge 0$, $\alpha = 1, \dots, N$.
In particular, we get the first gradient bounds. In view of (3.13), Proposition 3.2 implies that
$$ \tilde{m}^n \le \tilde{m}^{n+1} \le \tilde{M}^{n+1} \le \tilde{M}^n \quad \text{for any } n \ge 0. \tag{3.14} $$
In particular, $\tilde{m}^{n+1}$ is also finite. Since $\tilde{m}^0 = m^0$ and $\tilde{M}^0 = M^0$ are finite, we conclude that $\tilde{m}^n$ and $\tilde{M}^n$ are finite for all $n \ge 0$.

Step 3: Time derivative and gradient estimates. Now we can repeat the same reasoning, applying Proposition 3.3 with $K = m^0$, and get (3.16). This implies that $\tilde{U}_i^{\alpha,n} = U_i^{\alpha,n}$ for all $i \ge 0$, $n \ge 0$, $\alpha = 1, \dots, N$. In view of (3.14), (3.15) and (3.16), the proof is now complete.

Convergence for general junction conditions
This section is devoted to the convergence of the scheme defined by (1.9)-(1.10). In order to do so, we first make precise how to choose $\underline{p}_\alpha$, $\overline{p}_\alpha$ and $p_\alpha^0$ in the CFL condition (1.13).

Monotonicity of the scheme
In order to prove the convergence of the numerical solution as the mesh size tends to zero, we first need to prove a monotonicity result. It is common to write the scheme defined by (1.9)-(1.10) in the compact form
$$ U^{n+1} = S_\varepsilon[U^n], $$
where the operator $S_\varepsilon$ is defined on the set of functions defined on $J_\varepsilon$. The scheme is monotone if $S_\varepsilon$ is non-decreasing: $U \le V$ implies $S_\varepsilon[U] \le S_\varepsilon[V]$. In our case, if $t = n\Delta t$ and $x = i\Delta x \in J_\alpha$, the operator splits into an interior operator $S^\alpha$ (for $i \ge 1$) and a junction operator $S^0$ (for $i = 0$). Checking the monotonicity of the scheme reduces to checking that $S^\alpha$ and $S^0$ are non-decreasing in all their variables. Lemma 4.1 (Monotonicity of the numerical scheme). Let $(U^n) := (U_i^{\alpha,n})_{\alpha,i}$ be the numerical solution of (1.9)-(1.11). Under the CFL condition (1.12), the scheme is monotone.
Proof.We distinguish two cases.
Case 1: $i \ge 1$. It is straightforward to check that, for any $\alpha = 1, \dots, N$, the function $S^\alpha$ is non-decreasing with respect to $U_{i-1}^{\alpha,n}$ and $U_{i+1}^{\alpha,n}$. Moreover, its derivative with respect to $U_i^{\alpha,n}$ is non-negative if the CFL condition (1.12) is satisfied.
Case 2: $i = 0$. Similarly, it is straightforward to check that $S^0$ is non-decreasing with respect to each $U_1^{\beta,n}$, and its derivative with respect to $U_0^n$ is non-negative due to the CFL condition. The proof is now complete.
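The monotonicity asserted in Lemma 4.1 can be checked empirically: under the CFL condition, raising any grid value cannot decrease any value of the updated solution. The sketch below uses the quadratic model Hamiltonian $H(p) = p^2$ and a random profile; the perturbation size and tolerance are arbitrary choices of this sketch.

```python
import numpy as np

def step(U, dt, dx, A=0.0):
    # one explicit step; model H(p) = p**2, junction function F_A
    Hm = lambda p: np.minimum(p, 0.0) ** 2
    Hp = lambda p: np.maximum(p, 0.0) ** 2
    V = U.copy()
    for a in range(U.shape[0]):
        pm = (U[a, 1:-1] - U[a, :-2]) / dx
        pp = (U[a, 2:] - U[a, 1:-1]) / dx
        V[a, 1:-1] = U[a, 1:-1] - dt * np.maximum(Hp(pm), Hm(pp))
    p0 = (U[:, 1] - U[:, 0]) / dx
    V[:, 0] = U[:, 0] - dt * max(A, Hm(p0).max())
    return V

rng = np.random.default_rng(0)
dx, dt = 0.1, 0.01                 # gradients stay O(1), so the CFL condition holds
U = rng.uniform(-0.05, 0.05, size=(2, 6))
U[:, 0] = U[0, 0]                  # common value at the junction point
base = step(U, dt, dx)

monotone = True
for a in range(U.shape[0]):
    for i in range(U.shape[1]):
        Up = U.copy()
        Up[a, i] += 1e-3           # raise a single grid value
        if i == 0:
            Up[:, 0] = Up[a, 0]    # keep the junction value common
        if np.any(step(Up, dt, dx) < base - 1e-12):
            monotone = False
```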
A direct consequence of the previous lemma is the following elementary but useful discrete comparison principle.
Lemma 4.2 (Discrete comparison principle). If the CFL condition (1.12) is satisfied and if $(V^n)$ is a super-scheme (resp. sub-scheme) with $V^0 \ge U^0$ (resp. $V^0 \le U^0$), then $V^n \ge U^n$ (resp. $V^n \le U^n$) for all $n \ge 0$.
We finally recall how to derive discrete viscosity inequalities for monotone schemes (Lemma 4.3).

Stability and Consistency of the scheme
We first derive a local L ∞ bound for the solution of the scheme.
Lemma 4.4 (Stability of the numerical scheme). Assume that the CFL condition (1.13) is satisfied and let $u_\varepsilon$ be the solution of the numerical scheme (1.9)-(1.11). There exists a constant $C_0 > 0$ such that for all $(t, x) \in \mathcal{G}_\varepsilon$, $|u_\varepsilon(t, x) - u_0(x)| \le C_0 t$. In particular, the scheme is (locally) stable.
Another condition needed for the convergence of the numerical scheme (1.9) towards the continuous solution of (1.6) is the consistency of the scheme (which is obvious in our case). In the statement below, we use the shorthand notation (2.2) introduced in the preliminary section.
Lemma 4.5 (Consistency of the numerical scheme). Under the assumptions (1.4) on the Hamiltonians, the finite difference scheme is consistent with the continuous problem (1.6); that is to say, for any smooth function $\varphi(t, x)$, the scheme applied to $\varphi$ converges to $\varphi_t + H(x, \varphi_x)$ as the mesh size tends to zero.
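Consistency can also be observed numerically: applied to a smooth function away from the junction, the scheme's increment approaches $\varphi_t + H_\alpha(\varphi_x)$ as the mesh is refined. The test function $\varphi(t,x) = \sin(x) - t$ and the quadratic Hamiltonian below are illustrative choices of this sketch.

```python
import math

def residual(dt, dx, t=0.3, x=1.0):
    # scheme increment on a smooth function minus the PDE value,
    # away from the junction; model H(p) = p**2, phi(t, x) = sin(x) - t
    phi = lambda t, x: math.sin(x) - t
    pm = (phi(t, x) - phi(t, x - dx)) / dx        # backward gradient
    pp = (phi(t, x + dx) - phi(t, x)) / dx        # forward gradient
    scheme = (phi(t + dt, x) - phi(t, x)) / dt \
             + max(max(pm, 0.0) ** 2, min(pp, 0.0) ** 2)
    exact = -1.0 + math.cos(x) ** 2               # phi_t + H(phi_x)
    return abs(scheme - exact)
```

Refining the mesh decreases the residual, which is a first-order effect in $\Delta x$ here.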

Convergence of the numerical scheme
In this subsection, we present a sketch of the proof of Theorem 1.1.
The comparison principle (see Theorem 2.1) then implies that $\overline{u} \le \underline{u}$; since the reverse inequality holds by construction, $\overline{u} = \underline{u} = u$, which achieves the proof.

C 1,1 estimates for the vertex test function
In this section, we study the Lipschitz regularity of the gradient of the vertex test function constructed in [17]. It turns out that its gradient is indeed Lipschitz if the flux limiter $A$ is not equal to $A_0$, the minimal flux limiter. Such a technical result will be used when deriving error estimates.
It is also of independent interest.
Proposition 5.1 ($C^{1,1}$ estimates for the vertex test function). Let $\gamma > 0$ and assume that the Hamiltonians satisfy (1.4) and (2.4). The vertex test function $G$ associated with the small parameter $\gamma$ and with the flux limiter $A_0 + \gamma$ obtained from Theorem 2.4 can be chosen $C^{1,1}(J_K^2)$ for any $K > 0$, where $J_K^2 = \{(x, y) \in J^2 : d(x, y) \le K\}$. Moreover, there exists a constant $C_K$, depending only on $K$ and the constants in (1.4), bounding the second derivatives of $G$ in $J_K^2$.
Proof. In the following, $A$ denotes $A_0 + \gamma$. We first get the desired estimate in the smooth convex case and then derive it in the general case.
Step 1: the smooth convex case. We first assume that the Hamiltonians satisfy (5.1), i.e. that they are smooth and convex. In this case, the vertex test function $G(x, y)$ is constructed in [17] in two different ways depending on whether $x, y$ are in the same branch or not. If they are, then $G$ is a suitable regularization of the one-dimensional test function when $x$ and $y$ are on the branch $J_\alpha$. This regularization implies an error $\gamma$ in the viscosity inequalities, and the second derivatives of $G$ in $J_\alpha^2$ can be bounded by $O(\gamma^{-1})$. For $x \in J_\alpha$ and $y \in J_\beta$ with $\alpha \neq \beta$, $G$ is defined in [17] by a supremum formula over $\lambda \ge A$; the supremum is reached for some $\lambda \ge A$ which depends on $x$ and $y$. In the region where $\lambda = A$, the function $G$ is linear and there is nothing to prove. In $\{\lambda > A\}$, the function $\lambda(x, y)$ is implicitly defined by an equation, and the gradient of $G$ is given in terms of $\lambda = \lambda(x, y)$. We thus can compute the second order derivatives of $G$. We distinguish cases.
Case 1: $\min H_\alpha > A_0$ and $\min H_\beta > A_0$. We conclude using the fact that the $H_\alpha''$ are bounded from below.

Case 2: $\min H_\alpha = A_0$ and $\min H_\beta > A_0$. Using a second order Taylor expansion for $H_\alpha$ and reasoning as in the previous case, one deduces the desired bound.

Case 3: $\min H_\alpha > A_0$ and $\min H_\beta = A_0$. Arguing as in the previous case, we get the desired bound.

Case 4: $\min H_\alpha = A_0$ and $\min H_\beta = A_0$. In this case we have $|G_{xy}| = O(1)$.

Step 2: the smooth case. We now weaken (5.1) into (5.4). In this case, it is explained in [17] that the smooth convex case can be used by considering $\hat{H}_\alpha = \beta \circ H_\alpha$ for some $C^2$ convex function $\beta$ such that $\beta(0) = 0$ and $\beta' \ge \delta$ for some $\delta > 0$. Indeed, the vertex test function studied in Step 1 and associated with the Hamiltonians $\hat{H}_\alpha$ satisfies $\hat{H}(y, -G_y(x, y)) \le \hat{H}(x, G_x(x, y)) + \gamma$, which implies the corresponding inequality for $H$ (where the shorthand notation $H(x, p)$ is that of (2.2)).

Step 3: the general case. If the Hamiltonians $H_\alpha$ merely satisfy (1.4), we can construct smooth Hamiltonians close to the $H_\alpha$. If we now consider the vertex test function $G$ constructed in the smooth case, associated with the small parameter $\gamma/3$ and $A = A_0 + \gamma$, we get the viscosity inequality for the original Hamiltonians. Hence $G$ is a vertex test function for the Hamiltonians $H_\alpha$ and it satisfies the desired gradient estimate. The proof is now complete.

Error estimates
To prove Theorem 1.2, we will need the following result, whose classical proof is given in the Appendix for the reader's convenience.
Lemma 6.1 (A priori control). Let $T > 0$, let $u_\varepsilon$ be a solution of the numerical scheme (1.9)-(1.11) and let $u$ be a super-solution of (1.2)-(1.3) satisfying a linear growth condition for some $C_T > 0$. Then there exists a constant $C = C(T) > 0$ such that for all $(t, x) \in \mathcal{G}_\varepsilon$, $t \le T$, and $(s, y) \in [0, T) \times J$, the difference $u_\varepsilon(t, x) - u(s, y)$ is controlled linearly in $d(x, y)$ and $|t - s|$. We now turn to the proof of the error estimates in the case of flux-limited junction conditions.
Proof of Theorem 1.2. Before deriving the error estimate, we remark as in [17] that we can assume without loss of generality that the Hamiltonians satisfy the additional condition (2.4). Indeed, if $u$ solves (1.6)-(1.3), then $\tilde{u}(t, x) := u(t, x) - p_0^\alpha x$ for $x \in J_\alpha$ solves the same problem with Hamiltonians $\tilde{H}_\alpha(p) := H_\alpha(p + p_0^\alpha)$, which satisfy (2.4). We next remark that the solution $\tilde{u}_\varepsilon$ of the associated scheme satisfies $\tilde{u}_\varepsilon(t, x) = u_\varepsilon(t, x) - p_0^\alpha x$ for $(t, x) \in \mathcal{G}_\varepsilon$.
In order to get (1.15), we only prove that $u_\varepsilon(t, x) - u(t, x) \le C_T (\Delta x)^{1/3}$ in $([0, T) \times J) \cap \mathcal{G}_\varepsilon$, since the proof of the reverse inequality is very similar.
The remainder of the proof proceeds in several steps.
Step 1: Penalization procedure. For $\sigma > 0$, $\eta > 0$, $\delta > 0$, let us define the penalized supremum $M_{\varepsilon,\delta}$ as in (6.2), where $C$ only depends on $g$ (in particular, it does not depend on $\varepsilon$). This estimate, together with the fact that $-G_y(x/\varepsilon, y/\varepsilon) - \delta d(y, 0)$ lies in the viscosity subdifferential of $u(t, \cdot)$ at $x$, implies that there exists $K > 0$, only depending on $\|\nabla u\|_\infty$ (see Theorem 2.3) and $g$, such that the point $(t, x, s, y)$ realizing the maximum remains in a bounded region. We want to prove, for $\sigma > \sigma^\star$ (to be determined), that the supremum in (6.2) is attained for $t = 0$ or $s = 0$. We assume that $t > 0$ and $s > 0$ and we prove that $\sigma \le \sigma^\star$.
Step 2: Viscosity inequalities. Since $t > 0$ and $s > 0$, we can use Lemma 4.3 and get the corresponding viscosity inequalities. If $x = 0$, then we see that $d^2$ (and consequently $\varphi$) is $C^{1,1}$ in $J^2$; moreover, $\varphi$ satisfies the required inequalities.
Theorem 2.3 (Existence and uniqueness on a junction). Assume that the Hamiltonians satisfy (1.4), that $F$ satisfies (2.1) and that the initial datum $u_0$ is Lipschitz continuous. Then there exists a unique relaxed viscosity solution $u$ of (1.2)-(1.3) such that
$$ |u(t, x) - u_0(x)| \le C t \quad \text{for all } (t, x) \in [0, T) \times J, $$
for some constant $C$ only depending on $H$ and $u_0$. Moreover, $u$ is Lipschitz continuous with respect to time and space; in particular, $\|\nabla u\|_\infty \le C$.