Abstract
We present an introduction to the theory of viscosity solutions of first-order partial differential equations and a review of the optimal control/dynamical approach to the large-time behavior of solutions of Hamilton–Jacobi equations with the Neumann boundary condition. The article also covers some basics of mathematical analysis related to the optimal control/dynamical approach, so as to make the topics easily accessible.
In memory of Riichi Iino, my former adviser at Waseda University.
References
L. Ambrosio, P. Tilli, Topics on Analysis in Metric Spaces. Oxford Lecture Series in Mathematics and Its Applications, vol. 25 (Oxford University Press, Oxford, 2004), viii + 133 pp
M. Bardi, I. Capuzzo-Dolcetta, Optimal Control and Viscosity Solutions of Hamilton-Jacobi-Bellman Equations. Systems & Control: Foundations & Applications (Birkhäuser Boston, Inc., Boston, 1997), xviii + 570 pp
G. Barles, P.E. Souganidis, On the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 31(4), 925–939 (2000)
G. Barles, An introduction to the theory of viscosity solutions for first-order Hamilton-Jacobi equations and applications, in Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications, ed. by P. Loreti, N.A. Tchou. Lecture Notes in Mathematics (Springer, Berlin/Heidelberg, 2013)
G. Barles, Discontinuous viscosity solutions of first-order Hamilton-Jacobi equations: a guided visit. Nonlinear Anal. 20(9), 1123–1134 (1993)
G. Barles, Solutions de Viscosité des Équations de Hamilton-Jacobi. Mathématiques & Applications (Berlin), vol. 17 (Springer, Paris, 1994), x + 194 pp
G. Barles, H. Ishii, H. Mitake, A new PDE approach to the large time asymptotics of solutions of Hamilton-Jacobi equations. Bull. Math. Sci. (to appear)
G. Barles, H. Ishii, H. Mitake, On the large time behavior of solutions of Hamilton-Jacobi equations associated with nonlinear boundary conditions. Arch. Ration. Mech. Anal. 204(2), 515–558 (2012)
G. Barles, H. Mitake, A PDE approach to large-time asymptotics for boundary-value problems for nonconvex Hamilton-Jacobi equations. Commun. Partial Differ. Equ. 37(1), 136–168 (2012)
G. Barles, J.-M. Roquejoffre, Ergodic type problems and large time behaviour of unbounded solutions of Hamilton-Jacobi equations. Commun. Partial Differ. Equ. 31(7–9), 1209–1225 (2006)
E.N. Barron, R. Jensen, Semicontinuous viscosity solutions for Hamilton-Jacobi equations with convex Hamiltonians. Commun. Partial Differ. Equ. 15(12), 1713–1742 (1990)
P. Bernard, Existence of \({C}^{1,1}\) critical sub-solutions of the Hamilton-Jacobi equation on compact manifolds. Ann. Sci. École Norm. Sup. (4) 40(3), 445–452 (2007)
P. Bernard, J.-M. Roquejoffre, Convergence to time-periodic solutions in time-periodic Hamilton-Jacobi equations on the circle. Commun. Partial Differ. Equ. 29(3–4), 457–469 (2004)
G. Buttazzo, M. Giaquinta, S. Hildebrandt, One-Dimensional Variational Problems. An Introduction. Oxford Lecture Series in Mathematics and Its Applications, vol. 15 (The Clarendon Press/Oxford University Press, New York, 1998), viii + 262 pp
L.A. Caffarelli, X. Cabré, Fully Nonlinear Elliptic Equations. American Mathematical Society Colloquium Publications, vol. 43 (American Mathematical Society, Providence, 1995), vi + 104 pp
F. Camilli, O. Ley, P. Loreti, V.D. Nguyen, Large time behavior of weakly coupled systems of first-order Hamilton-Jacobi equations. Nonlinear Differ. Equ. Appl. (NoDEA) 19(6), 719–749 (2012)
I. Capuzzo-Dolcetta, P.-L. Lions, Hamilton-Jacobi equations with state constraints. Trans. Am. Math. Soc. 318(2), 643–683 (1990)
M.G. Crandall, L.C. Evans, P.-L. Lions, Some properties of viscosity solutions of Hamilton-Jacobi equations. Trans. Am. Math. Soc. 282(2), 487–502 (1984)
M.G. Crandall, H. Ishii, P.-L. Lions, User’s guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. (N.S.) 27(1), 1–67 (1992)
M.G. Crandall, P.-L. Lions, Viscosity solutions of Hamilton-Jacobi equations. Trans. Am. Math. Soc. 277(1), 1–42 (1983)
A. Davini, A. Siconolfi, A generalized dynamical approach to the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 38(2), 478–502 (2006)
W. E, Aubry-Mather theory and periodic solutions of the forced Burgers equation. Commun. Pure Appl. Math. 52(7), 811–828 (1999)
L.C. Evans, On solving certain nonlinear partial differential equations by accretive operator methods. Isr. J. Math. 36(3–4), 225–247 (1980)
L.C. Evans, A survey of partial differential equations methods in weak KAM theory. Commun. Pure Appl. Math. 57(4) 445–480 (2004)
A. Fathi, Théorème KAM faible et théorie de Mather sur les systèmes lagrangiens. C. R. Acad. Sci. Paris Sér. I Math. 324(9), 1043–1046 (1997)
A. Fathi, Sur la convergence du semi-groupe de Lax-Oleinik. C. R. Acad. Sci. Paris Sér. I Math. 327(3), 267–270 (1998)
A. Fathi, Weak KAM theorem in Lagrangian dynamics. Cambridge University Press (to appear)
A. Fathi, A. Siconolfi, Existence of \({C}^{1}\) critical subsolutions of the Hamilton-Jacobi equation. Invent. Math. 155(2), 363–388 (2004)
W.H. Fleming, H. Mete Soner, Controlled Markov Processes and Viscosity Solutions, 2nd edn. Stochastic Modelling and Applied Probability, vol. 25 (Springer, New York, 2006), xviii + 429 pp
Y. Fujita, H. Ishii, P. Loreti, Asymptotic solutions of Hamilton-Jacobi equations in Euclidean n space. Indiana Univ. Math. J. 55(5), 1671–1700 (2006)
Y. Giga, Surface Evolution Equations. A Level Set Approach. Monographs in Mathematics, vol. 99 (Birkhäuser, Basel, 2006), xii + 264 pp
Y. Giga, Q. Liu, H. Mitake, Singular Neumann problems and large-time behavior of solutions of non-coercive Hamilton-Jacobi equations. Trans. Am. Math. Soc. (to appear)
Y. Giga, Q. Liu, H. Mitake, Large-time asymptotics for one-dimensional Dirichlet problems for Hamilton-Jacobi equations with noncoercive Hamiltonians. J. Differ. Equ. 252(2), 1263–1282 (2012)
N. Ichihara, H. Ishii, Asymptotic solutions of Hamilton-Jacobi equations with semi-periodic Hamiltonians. Commun. Partial Differ. Equ. 33(4–6), 784–807 (2008)
N. Ichihara, H. Ishii, The large-time behavior of solutions of Hamilton-Jacobi equations on the real line. Methods Appl. Anal. 15(2), 223–242 (2008)
N. Ichihara, H. Ishii, Long-time behavior of solutions of Hamilton-Jacobi equations with convex and coercive Hamiltonians. Arch. Ration. Mech. Anal. 194(2), 383–419 (2009)
H. Ishii, Asymptotic solutions for large time of Hamilton-Jacobi equations in Euclidean n space. Ann. Inst. Henri Poincaré Anal. Non Linéaire 25(2), 231–266 (2008)
H. Ishii, Long-time asymptotic solutions of convex Hamilton-Jacobi equations with Neumann type boundary conditions. Calc. Var. Partial Differ. Equ. 42(1–2), 189–209 (2011)
H. Ishii, Weak KAM aspects of convex Hamilton-Jacobi equations with Neumann type boundary conditions. J. Math. Pures Appl. (9) 95(1), 99–135 (2011)
H. Ishii, H. Mitake, Representation formulas for solutions of Hamilton-Jacobi equations with convex Hamiltonians. Indiana Univ. Math. J. 56(5), 2159–2183 (2007)
S. Koike, A Beginner’s Guide to the Theory of Viscosity Solutions. MSJ Memoirs, vol. 13 (Mathematical Society of Japan, Tokyo, 2004), viii + 123 pp
P.-L. Lions, Generalized Solutions of Hamilton-Jacobi Equations. Research Notes in Mathematics, vol. 69 (Pitman, Boston, 1982), iv + 317 pp
P.-L. Lions, Neumann type boundary conditions for Hamilton-Jacobi equations. Duke Math. J. 52(4), 793–820 (1985)
P.-L. Lions, A.-S. Sznitman, Stochastic differential equations with reflecting boundary conditions. Commun. Pure Appl. Math. 37(4), 511–537 (1984)
P.-L. Lions, N.S. Trudinger, Linear oblique derivative problems for the uniformly elliptic Hamilton-Jacobi-Bellman equation. Math. Z. 191(1), 1–15 (1986)
J.N. Mather, Variational construction of connecting orbits. Ann. Inst. Fourier (Grenoble) 43(5), 1349–1386 (1993)
J.N. Mather, Total disconnectedness of the quotient Aubry set in low dimensions. Dedicated to the memory of Jürgen K. Moser. Commun. Pure Appl. Math. 56(8), 1178–1183 (2003)
H. Mitake, Asymptotic solutions of Hamilton-Jacobi equations with state constraints. Appl. Math. Optim. 58(3), 393–410 (2008)
H. Mitake, The large-time behavior of solutions of the Cauchy-Dirichlet problem. Nonlinear Differ. Equ. Appl. (NoDEA) 15(3), 347–362 (2008)
H. Mitake, Large time behavior of solutions of Hamilton-Jacobi equations with periodic boundary data. Nonlinear Anal. 71(11), 5392–5405 (2009)
H. Mitake, H.V. Tran, Remarks on the large time behavior of viscosity solutions of quasi-monotone weakly coupled systems of Hamilton-Jacobi equations. Asymptot. Anal. 77(1–2), 43–70 (2012)
H. Mitake, H.V. Tran, A dynamical approach to the large-time behavior of solutions to weakly coupled systems of Hamilton-Jacobi equations. J. Math. Pures Appl. (to appear)
G. Namah, J.-M. Roquejoffre, Remarks on the long time behaviour of the solutions of Hamilton-Jacobi equations. Commun. Partial Differ. Equ. 24(5–6), 883–893 (1999)
J.-M. Roquejoffre, Convergence to steady states or periodic solutions in a class of Hamilton-Jacobi equations. J. Math. Pures Appl. (9) 80(1), 85–104 (2001)
H.M. Soner, Optimal control with state-space constraint, I. SIAM J. Control Optim. 24(3), 552–561 (1986)
M. Spivak, Calculus on Manifolds. A Modern Approach to Classical Theorems of Advanced Calculus (W.A. Benjamin, Inc., New York, 1965), xii + 144 pp
E.M. Stein, R. Shakarchi, Real Analysis. Measure Theory, Integration, and Hilbert Spaces. Princeton Lectures in Analysis, vol. III (Princeton University Press, Princeton, 2005), xx + 402 pp
E. Yokoyama, Y. Giga, P. Rybka, A microscopic time scale approximation to the behavior of the local slope on the faceted surface under a nonuniformity in supersaturation. Phys. D 237(22), 2845–2855 (2008)
Acknowledgements
Supported in part by JSPS KAKENHI (#20340019, #21340032, #21224001, #23340028, and #23244015).
Appendix
1.1 A.1 Local Maxima to Global Maxima
We recall a proposition from [56] concerning partitions of unity.
Proposition A.1.
Let \(\mathcal{O}\) be a collection of open subsets of \({\mathbb{R}}^{n}\) . Set \(W :=\bigcup _{U\in \mathcal{O}}U\) . Then there is a collection \(\mathcal{F}\) of \({C}^{\infty }\) functions in \({\mathbb{R}}^{n}\) having the following properties:
-
(i)
0 ≤ f(x) ≤ 1 for all x ∈ W and \(f \in \mathcal{F}\).
-
(ii)
For each x ∈ W there is a neighborhood V of x such that all but finitely many \(f \in \mathcal{F}\) vanish in V.
-
(iii)
\(\sum _{f\in \mathcal{F}}f(x) = 1\) for all x ∈ W.
-
(iv)
For each \(f \in \mathcal{F}\) there is a set \(U \in \mathcal{O}\) such that supp f ⊂ U.
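As an illustration of Proposition A.1 (this sketch is our addition, with function names of our own choosing), one can build a smooth partition of unity subordinate to two overlapping intervals in \(\mathbb{R}\) by normalizing standard bump functions, and check properties (i), (iii) and (iv) numerically on a sample grid:

```python
import numpy as np

def bump(x, a, b):
    """Smooth function positive exactly on (a, b) and zero elsewhere."""
    x = np.asarray(x, dtype=float)
    t = (2.0 * x - (a + b)) / (b - a)          # map (a, b) onto (-1, 1)
    out = np.zeros_like(t)
    inside = np.abs(t) < 1
    out[inside] = np.exp(-1.0 / (1.0 - t[inside] ** 2))
    return out

# Open cover O = {U1, U2} of W = (0, 3)
U1, U2 = (0.0, 2.0), (1.0, 3.0)
x = np.linspace(0.1, 2.9, 200)                  # sample points of W
g1, g2 = bump(x, *U1), bump(x, *U2)
s = g1 + g2                                     # positive on the sampled points
f1, f2 = g1 / s, g2 / s                         # normalize: a partition of unity

# (i): values in [0, 1]; (iii): the functions sum to 1; (iv): supp g_i in U_i
assert np.all((0.0 <= f1) & (f1 <= 1.0)) and np.all((0.0 <= f2) & (f2 <= 1.0))
assert np.allclose(f1 + f2, 1.0)
assert np.all(g1[x >= 2.0] == 0.0) and np.all(g2[x <= 1.0] == 0.0)
print("partition-of-unity checks passed")
```

Property (ii) (local finiteness) is automatic here since the family is finite; for an infinite cover one normalizes a locally finite family of such bumps in the same way.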
Proposition A.2.
Let Ω be any subset of \({\mathbb{R}}^{n}\), \(u \in \mathrm{USC}(\Omega, \mathbb{R})\) and \(\phi \in {C}^{1}(\Omega )\). Assume that u − ϕ attains a local maximum at y ∈ Ω. Then there is a function \(\psi \in {C}^{1}(\Omega )\) such that u − ψ attains a global maximum at y and ψ = ϕ in a neighborhood of y.
Proof.
As usual it is enough to prove the above proposition in the case when (u − ϕ)(y) = 0.
By the definition of the space \({C}^{1}(\Omega )\), there is an open neighborhood \(W_{0}\) of Ω such that ϕ is defined in \(W_{0}\) and \(\phi \in {C}^{1}(W_{0})\).
There is an open subset U y ⊂ W 0 of \({\mathbb{R}}^{n}\) containing y such that \(\max _{U_{y}\cap \Omega }(u\,-\,\phi ) = (u-\phi )(y)\). Since \(u \in \mathrm{USC}(\Omega, \mathbb{R})\), for each x ∈ Ω ∖ { y} we may choose an open subset U x of \({\mathbb{R}}^{n}\) so that x ∈ U x , y ∉ U x and \(\sup _{U_{x}\cap \Omega }u < \infty \). Set \(a_{x} =\sup _{U_{x}\cap \Omega }u\) for every x ∈ Ω ∖ { y}.
We set \(\mathcal{O} =\{ U_{z}\, :\, z \in \Omega \}\) and \(W =\bigcup _{U\in \mathcal{O}}U\). Note that W is an open neighborhood of Ω. By Proposition A.1, there exists a collection \(\mathcal{F}\) of functions \(f \in {C}^{\infty }({\mathbb{R}}^{n})\) satisfying the conditions (i)–(iv) of the proposition. According to the condition (iv), for each \(f \in \mathcal{F}\) there is a point z ∈ Ω such that \(\mathrm{supp}f \subset U_{z}\). For each \(f \in \mathcal{F}\) we fix such a point z ∈ Ω and define the mapping \(p\, :\, \mathcal{F}\rightarrow \Omega \) by p(f) = z. We set
\[ \psi (x) =\sum _{f\in \mathcal{F},\ p(f)=y}f(x)\,\phi (x) +\sum _{f\in \mathcal{F},\ p(f)\not =y}f(x)\,a_{p(f)}. \]
By the condition (ii), we see that \(\psi \in {C}^{1}(W)\). Fix any x ∈ Ω and \(f \in \mathcal{F}\), and observe that if f(x) > 0 and p(f) ≠ y, then we have \(x \in \mathrm{supp}f \subset U_{p(f)}\) and, therefore, \(a_{p(f)} =\sup _{U_{p(f)}\cap \Omega }u \geq u(x)\). Observe also that if f(x) > 0 and p(f) = y, then we have x ∈ U y and ϕ(x) ≥ u(x). Thus we see that for all x ∈ Ω,
\[ \psi (x) \geq u(x). \]
Thanks to the condition (ii), we may choose a neighborhood V ⊂ W of y and a finite subset \(\{f_{j}\}_{j=1}^{N}\) of \(\mathcal{F}\) so that
\[ \{f \in \mathcal{F}\, :\, f\not\equiv 0\ \text{in } V\} \subset \{ f_{1},\ldots,f_{N}\}. \]
If p(f j ) ≠ y for some j = 1, …, N, then \(U_{p(f_{j})} \cap \{ y\} = \varnothing \) and hence \(y\not\in \mathrm{supp}f_{j}\). Therefore, by replacing V by a smaller one we may assume that p(f j ) = y for all j = 1, …, N. Since f = 0 in V for all \(f \in \mathcal{F}\setminus \{ f_{1},\ldots,f_{N}\}\), we see that
\[ \psi =\sum _{j=1}^{N}f_{j}\,\phi = \phi \quad \text{in } V. \]
It is now easy to see that u − ψ has a global maximum at y. □
1.2 A.2 A Quick Review of Convex Analysis
We discuss here basic properties of convex functions on \({\mathbb{R}}^{n}\).
By definition, a subset C of \({\mathbb{R}}^{n}\) is convex if and only if
\[ \lambda x + (1-\lambda )y \in C \quad \text{for all } x,y \in C,\ \lambda \in [0,\,1]. \]
For a given function \(f\, :\, U \subset {\mathbb{R}}^{n} \rightarrow [-\infty,\,\infty ]\), its epigraph epi(f) is defined as
\[ \mathrm{epi}(f) =\{ (x,t) \in U \times \mathbb{R}\, :\, f(x) \leq t\}. \]
A function f : U → [ − ∞, ∞] is said to be convex if epi(f) is a convex subset of \({\mathbb{R}}^{n+1}\).
We are henceforth concerned with functions defined on \({\mathbb{R}}^{n}\). When we are given a function f on U with U being a proper subset of \({\mathbb{R}}^{n}\), we may think of f as a function defined on \({\mathbb{R}}^{n}\) having value ∞ on the set \({\mathbb{R}}^{n} \setminus U\).
It is easily checked that a function \(f\, :\, {\mathbb{R}}^{n} \rightarrow [-\infty,\,\infty ]\) is convex if and only if for all \(x,y \in {\mathbb{R}}^{n}\), \(t,s \in \mathbb{R}\) and λ ∈ [0, 1],
\[ f(x) \leq t,\ f(y) \leq s \ \Longrightarrow \ f(\lambda x + (1-\lambda )y) \leq \lambda t + (1-\lambda )s. \]
From this, we see that a function \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) is convex if and only if for all \(x,y \in {\mathbb{R}}^{n}\) and λ ∈ [0, 1],
\[ f(\lambda x + (1-\lambda )y) \leq \lambda f(x) + (1-\lambda )f(y). \]
Here we use the convention for extended real numbers, i.e., for any \(x \in \mathbb{R}\), − ∞ < x < ∞, x ± ∞ = ± ∞, x ⋅( ± ∞) = ± ∞ if x > 0, 0 ⋅( ± ∞) = 0, etc.
Any affine function f(x) = a ⋅x + b, where \(a \in {\mathbb{R}}^{n}\) and \(b \in \mathbb{R}\), is a convex function on \({\mathbb{R}}^{n}\). Moreover, if \(A \subset {\mathbb{R}}^{n}\) and \(B \subset \mathbb{R}\) are nonempty sets, then the function on \({\mathbb{R}}^{n}\) given by
\[ f(x) =\sup \,\{a \cdot x + b\, :\, a \in A,\ b \in B\} \]
is a convex function. Note that this function f, being a supremum of continuous functions, is lower semicontinuous on \({\mathbb{R}}^{n}\). We restrict our attention to those functions which take values only in ( − ∞, ∞].
Proposition B.1.
Let \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) be a convex function. Assume that p ∈ D − f(y) for some \(y,p \in {\mathbb{R}}^{n}\) . Then
\[ f(x) \geq f(y) + p \cdot (x - y) \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
Proof.
By the definition of D − f(y), we have
\[ f(x) \geq f(y) + p \cdot (x - y) + o(\vert x - y\vert ) \quad \text{as } x \rightarrow y. \]
Hence, fixing \(x \in {\mathbb{R}}^{n}\) and applying this along the points y + t(x − y), we get
\[ f(y + t(x - y)) \geq f(y) + t\,p \cdot (x - y) + o(t) \quad \text{as } t \rightarrow 0+. \]
Using the convexity of f, which gives \(f(y + t(x - y)) \leq (1 - t)f(y) + tf(x)\), we rearrange the above inequality and divide by t > 0, to get
\[ f(x) - f(y) \geq p \cdot (x - y) + \frac{o(t)}{t}. \]
Sending t → 0 + yields
\[ f(x) \geq f(y) + p \cdot (x - y). \]
□
Proposition B.2.
Let \(\mathcal{F}\) be a nonempty set of convex functions on \({\mathbb{R}}^{n}\) with values in (−∞, ∞]. Then \(\sup \mathcal{F}\) is a convex function on \({\mathbb{R}}^{n}\) having values in (−∞, ∞].
Proof.
It is clear that \((\sup \mathcal{F})(x) \in (-\infty,\,\infty ]\) for all \(x \in {\mathbb{R}}^{n}\). If \(f \in \mathcal{F}\), \(x,y \in {\mathbb{R}}^{n}\) and t ∈ [0, 1], then we have
\[ f(tx + (1 - t)y) \leq tf(x) + (1 - t)f(y) \leq t(\sup \mathcal{F})(x) + (1 - t)(\sup \mathcal{F})(y), \]
and hence
\[ (\sup \mathcal{F})(tx + (1 - t)y) \leq t(\sup \mathcal{F})(x) + (1 - t)(\sup \mathcal{F})(y), \]
which proves the convexity of \(\sup \mathcal{F}\). □
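Proposition B.2 can be checked numerically in a simple special case (this sketch is ours, not part of the original text): the upper envelope of finitely many affine functions satisfies the midpoint convexity inequality at every pair of sample points.

```python
import numpy as np

# A finite family F of affine (hence convex) functions f_i(x) = a_i x + b_i
a = np.array([-2.0, 0.5, 3.0])
b = np.array([1.0, 0.0, -4.0])
sup_F = lambda x: np.max(a * x + b)             # (sup F)(x) for scalar x

xs = np.linspace(-2.0, 2.0, 101)
for x in xs:
    for y in xs:
        mid = sup_F(0.5 * (x + y))
        # midpoint convexity: (sup F)((x+y)/2) <= ((sup F)(x) + (sup F)(y))/2
        assert mid <= 0.5 * sup_F(x) + 0.5 * sup_F(y) + 1e-12
print("midpoint convexity of sup F verified")
```

The small tolerance only absorbs floating-point rounding; the inequality itself is exact for the envelope of affine functions.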
We call a function \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) proper convex if the following three conditions hold:
-
(a)
f is convex on \({\mathbb{R}}^{n}\).
-
(b)
\(f \in \mathrm{LSC}({\mathbb{R}}^{n})\).
-
(c)
f(x) ≢ ∞.
Let \(f\, :\, {\mathbb{R}}^{n} \rightarrow [-\infty,\,\infty ]\). The conjugate convex function (or the Legendre–Fenchel transform) of f is the function \({f}^{\star }\, :\, {\mathbb{R}}^{n} \rightarrow [-\infty,\,\infty ]\) given by
\[ {f}^{\star }(x) =\sup _{y\in {\mathbb{R}}^{n}}\,(x \cdot y - f(y)). \]
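Two standard examples may help fix ideas (these are our addition, not part of the original text):

```latex
% Example 1: the square norm is self-conjugate. For f(x) = |x|^2/2
% the supremum is attained at y = p:
f(x) = \tfrac{1}{2}\vert x\vert^{2}
  \;\Longrightarrow\;
f^{\star}(p) = \sup_{y\in\mathbb{R}^{n}}\Big(p\cdot y - \tfrac{1}{2}\vert y\vert^{2}\Big)
             = \tfrac{1}{2}\vert p\vert^{2}.

% Example 2: the conjugate of an affine function is proper convex
% but finite at exactly one point:
f(x) = a\cdot x + b
  \;\Longrightarrow\;
f^{\star}(p) = \sup_{y\in\mathbb{R}^{n}}\big((p - a)\cdot y - b\big)
  = \begin{cases} -b & \text{if } p = a,\\ \infty & \text{otherwise.} \end{cases}
```

The second example shows in particular that \(f^{\star}\) may take the value ∞ on a large set while still being proper convex.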
Proposition B.3.
If f is a proper convex function, then so is f ⋆ .
Lemma B.1.
If f is a proper convex function on \({\mathbb{R}}^{n}\) , then D − f(y)≠∅ for some \(y \in {\mathbb{R}}^{n}\).
Proof.
We choose a point \(x_{0} \in {\mathbb{R}}^{n}\) so that \(f(x_{0}) \in \mathbb{R}\). Let \(k \in \mathbb{N}\), and define the function g k on \(\bar{B}_{1}(x_{0})\) by the formula g k (x) = f(x) + k | x − x 0 | 2. Since \(g_{k} \in \mathrm{LSC}(\overline{B}_{1}(x_{0}))\), and \(g_{k}(x_{0}) = f(x_{0}) \in \mathbb{R}\), the function g k has a finite minimum at a point \(x_{k} \in \overline{B}_{1}(x_{0})\). Note that if k is sufficiently large, then
\[ \vert x_{k} - x_{0}{\vert }^{2} \leq \frac{1}{k}\Big(f(x_{0}) -\min _{\overline{B}_{1}(x_{0})}f\Big) < 1. \]
Fix such a large k, and observe that x k ∈ B 1(x 0) and, therefore, \(-2k(x_{k} - x_{0}) \in {D}^{-}f(x_{k})\). □
Proof (Proposition B.3).
The function \(x\mapsto x \cdot y - f(y)\) is an affine function for any \(y \in {\mathbb{R}}^{n}\). By Proposition B.2, the function f ⋆ is convex on \({\mathbb{R}}^{n}\). Also, since the function \(x\mapsto x \cdot y - f(y)\) is continuous on \({\mathbb{R}}^{n}\) for any \(y \in {\mathbb{R}}^{n}\), as stated in Proposition 1.5, the function f ⋆ is lower semicontinuous on \({\mathbb{R}}^{n}\).
Since f is proper convex on \({\mathbb{R}}^{n}\), there is a point \(x_{0} \in {\mathbb{R}}^{n}\) such that \(f(x_{0}) \in \mathbb{R}\). Hence, we have
\[ {f}^{\star }(x) \geq x \cdot x_{0} - f(x_{0}) > -\infty \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
By Lemma B.1, there exist points \(y,p \in {\mathbb{R}}^{n}\) such that p ∈ D − f(y). By Proposition B.1, we have
\[ f(x) \geq f(y) + p \cdot (x - y) \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
That is,
\[ p \cdot y - f(y) \geq p \cdot x - f(x) \quad \text{for all } x \in {\mathbb{R}}^{n}, \]
which implies that \({f}^{\star }(p) = p \cdot y - f(y) \in \mathbb{R}\). Thus, we conclude that \({f}^{\star }\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\), f ⋆ is convex on \({\mathbb{R}}^{n}\), \({f}^{\star } \in \mathrm{LSC}({\mathbb{R}}^{n})\) and f ⋆ (x) ≢ ∞. □
The following duality (called convex duality or Legendre–Fenchel duality) holds.
Theorem B.1.
Let \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) be a proper convex function. Then
\[ {f}^{\star \star } = f, \quad \text{that is,}\quad f(x) =\sup _{\xi \in {\mathbb{R}}^{n}}\,(x \cdot \xi - {f}^{\star }(\xi ))\ \ \text{for all } x \in {\mathbb{R}}^{n}. \]
Proof.
By the definition of f ⋆ , we have
\[ {f}^{\star }(\xi ) \geq x \cdot \xi - f(x) \quad \text{for all } x,\xi \in {\mathbb{R}}^{n}, \]
which reads
\[ f(x) \geq x \cdot \xi - {f}^{\star }(\xi ) \quad \text{for all } x,\xi \in {\mathbb{R}}^{n}. \]
Hence,
\[ f(x) \geq {f}^{\star \star }(x) \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
Next, we show that
\[ {f}^{\star \star }(x) \geq f(x) \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
We fix any \(a \in {\mathbb{R}}^{n}\) and choose a point \(y \in {\mathbb{R}}^{n}\) so that \(f(y) \in \mathbb{R}\). We fix a number R > 0 so that | y − a | < R. Let \(k \in \mathbb{N}\), and consider the function \(g_{k} \in \mathrm{LSC}(\overline{B}_{R}(a))\) defined by g k (x) = f(x) + k | x − a | 2. Let \(x_{k} \in \overline{B}_{R}(a)\) be a minimum point of the function g k . Noting that if k is sufficiently large, then
we see that x k ∈ B R (a) for k sufficiently large. We henceforth assume that k is large enough so that x k ∈ B R (a). We have
\[ 0 \in {D}^{-}g_{k}(x_{k}) = 2k(x_{k} - a) + {D}^{-}f(x_{k}). \]
Accordingly, if we set ξ k = − 2k(x k − a), then we have \(\xi _{k} \in {D}^{-}f(x_{k})\). By Proposition B.1, we get
\[ f(x) \geq f(x_{k}) + \xi _{k} \cdot (x - x_{k}) \quad \text{for all } x \in {\mathbb{R}}^{n}, \]
or, equivalently,
\[ \xi _{k} \cdot x_{k} - f(x_{k}) \geq \xi _{k} \cdot x - f(x) \quad \text{for all } x \in {\mathbb{R}}^{n}. \]
Hence,
\[ {f}^{\star }(\xi _{k}) = \xi _{k} \cdot x_{k} - f(x_{k}). \]
Using this, we compute that
\[ {f}^{\star \star }(a) \geq a \cdot \xi _{k} - {f}^{\star }(\xi _{k}) = \xi _{k} \cdot (a - x_{k}) + f(x_{k}) = 2k\vert x_{k} - a{\vert }^{2} + f(x_{k}). \]
We divide our argument into the following cases, (a) and (b).
Case (a): \(\lim _{k\rightarrow \infty }k\vert x_{k} - a{\vert }^{2} = \infty \). In this case, if we set \(m =\min _{\bar{B}_{R}(a)}f\), then we have
\[ {f}^{\star \star }(a) \geq 2k\vert x_{k} - a{\vert }^{2} + m \rightarrow \infty \quad \text{as } k \rightarrow \infty, \]
and, therefore, f ⋆ ⋆ (a) ≥ f(a).
Case (b): \(\liminf _{k\rightarrow \infty }k\vert x_{k} - a{\vert }^{2} < \infty \). We may choose a subsequence \(\{x_{k_{j}}\}_{j\in \mathbb{N}}\) of {x k } so that \(\lim _{j\rightarrow \infty }x_{k_{j}} = a\). Then, by the lower semicontinuity of f, we have
\[ {f}^{\star \star }(a) \geq \liminf _{j\rightarrow \infty }\big(2k_{j}\vert x_{k_{j}} - a{\vert }^{2} + f(x_{k_{j}})\big) \geq \liminf _{j\rightarrow \infty }f(x_{k_{j}}) \geq f(a). \]
Thus, in both cases we have f ⋆ ⋆ (a) ≥ f(a), which completes the proof. □
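The convex duality \(f = {f}^{\star \star }\) can be verified numerically on a grid (our own illustration, with helper names of our choosing). The discrete Legendre–Fenchel transform below is exact at the grid points for \(f(x) = x^{2}/2\), because the defining supremum is attained on the grid itself.

```python
import numpy as np

grid = np.linspace(-5.0, 5.0, 101)              # common grid for x and p
f = 0.5 * grid ** 2                             # proper convex: f(x) = x^2 / 2

def legendre(values, grid):
    """Discrete Legendre-Fenchel transform: g(p) = max_x (p*x - values(x))."""
    # outer(grid, grid)[i, j] = p_i * x_j; maximize over x (axis 1)
    return np.max(np.outer(grid, grid) - values[None, :], axis=1)

f_star = legendre(f, grid)                      # equals p^2/2 at the grid points
f_star_star = legendre(f_star, grid)            # conjugate of the conjugate

assert np.allclose(f_star, 0.5 * grid ** 2)     # this f is self-conjugate
assert np.allclose(f_star_star, f)              # convex duality: f** = f
print("convex duality verified on the grid")
```

For a general proper convex f the discrete transform only approximates \({f}^{\star}\), with an error controlled by the grid spacing and the truncation of the supremum to a bounded window.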
Theorem B.2.
Let \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) be proper convex and \(x,\xi \in {\mathbb{R}}^{n}\). Then the following three conditions are mutually equivalent.
-
(i)
ξ ∈ D − f(x).
-
(ii)
\(x \in {D}^{-}{f}^{\star }(\xi )\).
-
(iii)
x ⋅ξ = f(x) + f ⋆ (ξ).
Proof.
Assume first that (i) holds. By Proposition B.1, we have
\[ f(y) \geq f(x) + \xi \cdot (y - x) \quad \text{for all } y \in {\mathbb{R}}^{n}, \]
which reads
\[ \xi \cdot x - f(x) \geq \xi \cdot y - f(y) \quad \text{for all } y \in {\mathbb{R}}^{n}. \]
Hence,
\[ {f}^{\star }(\xi ) = \xi \cdot x - f(x). \]
Thus, (iii) is valid.
Next, we assume that (iii) holds. Then the function \(y\mapsto \xi \cdot y - f(y)\) attains a maximum at x. Therefore, ξ ∈ D − f(x). That is, (i) is valid.
Now, by the convex duality (Theorem B.1), (iii) reads
\[ x \cdot \xi = {f}^{\star \star }(x) + {f}^{\star }(\xi ). \]
The equivalence between (i) and (iii), with f replaced by f ⋆ , is exactly the equivalence between (ii) and (iii). The proof is complete. □
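For a differentiable convex f one has \({D}^{-}f(x) = \{f'(x)\}\), so condition (iii), the equality case of the Fenchel–Young inequality, can be checked numerically (a sketch of our own, with an assumed sample function):

```python
import numpy as np

f = lambda x: x ** 4                            # smooth convex sample function
df = lambda x: 4.0 * x ** 3                     # D^- f(x) = {f'(x)} here

xs = np.linspace(-3.0, 3.0, 60001)              # fine grid for the supremum
x0 = 1.5
xi = df(x0)                                     # xi in D^- f(x0)

f_star_xi = np.max(xi * xs - f(xs))             # numerical conjugate at xi

# (iii): x0 * xi = f(x0) + f*(xi), up to the grid resolution
assert abs(x0 * xi - (f(x0) + f_star_xi)) < 1e-6
# Young's inequality x * xi <= f(x) + f*(xi) is strict off the subgradient:
x1 = 0.5
assert x1 * xi < f(x1) + f_star_xi
print("Fenchel-Young equality verified")
```

The supremum defining \({f}^{\star}(\xi)\) is attained at \(x_{0}\) (where \(\xi = f'(x_{0})\)), which is exactly the content of the equivalence (i) ⟺ (iii).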
Finally, we give a Lipschitz regularity estimate for convex functions.
Theorem B.3.
Let \(f\, :\, {\mathbb{R}}^{n} \rightarrow (-\infty,\,\infty ]\) be a convex function. Assume that there are constants M > 0 and R > 0 such that
\[ \vert f(x)\vert \leq M \quad \text{for all } x \in B_{3R}. \]
Then
\[ \vert f(x) - f(y)\vert \leq \frac{M}{R}\,\vert x - y\vert \quad \text{for all } x,y \in B_{R}. \]
Proof.
Let x, y ∈ B R and note that | x − y | < 2R. We may assume that x ≠ y. Setting ξ = (x − y) ∕ | x − y | , z = y + 2Rξ and λ = | x − y | ∕ (2R) ∈ (0, 1), and noting that z ∈ B 3R and
\[ x = (1 - \lambda )y + \lambda z, \]
we obtain, by the convexity of f,
\[ f(x) \leq (1 - \lambda )f(y) + \lambda f(z), \]
and consequently
\[ f(x) - f(y) \leq \lambda \,(f(z) - f(y)) \leq 2M\lambda = \frac{M}{R}\,\vert x - y\vert. \]
In view of the symmetry in x and y, we see that
\[ \vert f(x) - f(y)\vert \leq \frac{M}{R}\,\vert x - y\vert. \]
□
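The proof of Theorem B.3 yields the explicit Lipschitz constant M∕R on \(B_{R}\) when \(\vert f\vert \leq M\) on \(B_{3R}\), the standard local Lipschitz estimate for convex functions. A quick one-dimensional numerical sanity check (our own sketch, with an assumed sample function):

```python
import numpy as np

# Sample convex function (our choice): f(x) = e^x, with R = 1
f = np.exp
R = 1.0
M = np.exp(3.0 * R)                 # |f| <= M on B_{3R} = (-3R, 3R)

x = np.linspace(-R, R, 401)         # sample points of B_R
fx = f(x)
dx = np.abs(x[:, None] - x[None, :])
dfq = np.abs(fx[:, None] - fx[None, :])
mask = dx > 0                       # avoid the diagonal |x - y| = 0
# every difference quotient is bounded by M / R
assert np.all(dfq[mask] <= (M / R) * dx[mask])
print("Lipschitz bound M/R verified on B_R")
```

For this sample the true Lipschitz constant on \(B_{1}\) is e, comfortably below \(M/R = e^{3}\); the estimate is not sharp, but it is uniform over all convex functions with the same bound M.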
1.3 A.3 Global Lipschitz Regularity
We give here a proof of Lemmas 2.1 and 2.2.
Proof (Lemma 2.1).
We first show that there are a constant C > 0 and, for each \(z \in \overline{\Omega }\), a ball B r (z) centered at z such that, for each \(x,y \in B_{r}(z) \cap \overline{\Omega }\), there is a curve \(\eta \in \mathrm{AC}([0,T], {\mathbb{R}}^{n})\), with \(T \in \overline{\mathbb{R}}_{+}\), satisfying η(0) = x, η(T) = y, η(s) ∈ Ω for all s ∈ (0, T), \(\vert \dot{\eta }(s)\vert \leq 1\) for a.e. s ∈ (0, T) and T ≤ C | x − y | .
Let ρ be a defining function of Ω. We may assume that \(\|D\rho \|_{\infty,{\mathbb{R}}^{n}} \leq 1\) and | Dρ(x) | ≥ δ for all \(x \in {(\partial \Omega )}^{\delta } :=\{ y \in {\mathbb{R}}^{n}\, :\, \mathrm{dist}(y,\partial \Omega ) <\delta \}\) and some constant δ ∈ (0, 1).
Let z ∈ Ω. We can choose r > 0 so that B r (z) ⊂ Ω. Then, for each x, y ∈ B r (z), with x ≠ y, the line η(s) = x + s(y − x) ∕ | y − x | , with s ∈ [0, | x − y | ], connects two points x and y and lies inside Ω. Note as well that \(\dot{\eta }(s) = (y - x)/\vert y - x\vert \in \partial B_{1}\) for all s ∈ [0, | x − y | ].
Let \(z \in \partial \Omega \). Since | Dρ(z) | 2 ≥ δ 2, by continuity, we may choose r ∈ (0, δ 3 ∕ 4) so that Dρ(x) ⋅Dρ(z) ≥ δ 2 ∕ 2 for all \(x \in B_{{4\delta }^{-2}r}(z)\). Fix any \(x,y \in B_{r}(z) \cap \overline{\Omega }\). Consider the curve ξ(t) = x + t(y − x) − t(1 − t)6δ − 2 | x − y | Dρ(z), with t ∈ [0, 1], which connects the points x and y. Note that
and 4δ − 2 r < δ. Hence, we have \(\xi (t) \in B_{{4\delta }^{-2}r}(z) \cap {(\partial \Omega )}^{\delta }\) for all t ∈ [0, 1]. If t ∈ (0, 1 ∕ 2], then we have
for some θ ∈ (0, 1). Similarly, if t ∈ [1 ∕ 2, 1), we have
Hence, ξ(t) ∈ Ω for all t ∈ (0, 1). Note that
If x = y, then we just set η(s) = x = y for s = 0 and the curve \(\eta \,:\, [0,\,0] \rightarrow {\mathbb{R}}^{n}\) has the required properties. Now let x ≠ y. We set t(x, y) = (1 + 6δ − 2) | x − y | and η(s) = ξ(s ∕ t(x, y)) for s ∈ [0, t(x, y)]. Then the curve \(\eta : [0,\,t(x,y)] \rightarrow {\mathbb{R}}^{n}\) has the required properties with C = 1 + 6δ − 2.
Thus, by the compactness of \(\overline{\Omega }\), we may choose a constant C > 0 and a finite covering \(\{{B}^{i}\}_{i=1}^{N}\) of \(\overline{\Omega }\) consisting of open balls with the properties: for each \(x,y \in \hat{ B}_{i} \cap \overline{\Omega }\), where \(\hat{B}_{i}\) denotes the concentric open ball of B i with radius twice that of B i , there exists a curve \(\eta \in \mathrm{AC}([0,\,t(x,y)], {\mathbb{R}}^{n})\) such that η(s) ∈ Ω for all s ∈ (0, t(x, y)), \(\vert \dot{\eta }(s)\vert \leq 1\) for a.e. s ∈ [0, t(x, y)] and t(x, y) ≤ C | x − y | .
Let r i be the radius of the ball B i , and set \(r =\min _{1\leq i\leq N}r_{i}\) and \(R =\sum _{i=1}^{N}r_{i}\).
Let \(x,y \in \overline{\Omega }\). If | x − y | < r, then \(x,y \in \hat{ B}_{i}\) for some i and there is a curve \(\eta \in \mathrm{AC}([0,t(x,y)], {\mathbb{R}}^{n})\) such that η(s) ∈ Ω for all s ∈ (0, t(x, y)), \(\vert \dot{\eta }(s)\vert \leq 1\) for a.e. s ∈ [0, t(x, y)] and t(x, y) ≤ C | x − y | . Next, we assume that | x − y | ≥ r. By the connectedness of Ω, we infer that there is a sequence \(\{B_{i_{j}}\, :\, j = 1,\ldots,J\} \subset \{ B_{i}\, :\, i = 1,\ldots,N\}\) such that \(x \in B_{i_{1}}\), \(y \in B_{i_{J}}\), \(B_{i_{j}} \cap B_{i_{j+1}} \cap \Omega \not =\varnothing \) for all 1 ≤ j < J, and \(B_{i_{j}}\not =B_{i_{k}}\) if j ≠ k. It is clear that J ≤ N. If J = 1, then we may choose a curve η with the required properties as in the case where | x − y | < r. If J > 1, then we may choose a curve \(\eta \in \mathrm{AC}([0,\,t(x,y)],\, {\mathbb{R}}^{n})\) joining x and y as follows. First, we choose a sequence {x j : j = 1, …, J − 1} of points in Ω so that \(x_{j} \in B_{i_{j}} \cap B_{i_{j+1}} \cap \Omega \) for all 1 ≤ j < J. Next, setting x 0 = x, x J = y and t 0 = 0, since \(x_{j-1},x_{j} \in B_{i_{j}} \cap \overline{\Omega }\) for all 1 ≤ j ≤ J, we may select \(\eta _{j} \in \mathrm{AC}([t_{j-1},\,t_{j}],\, {\mathbb{R}}^{n})\), with 1 ≤ j ≤ J, inductively so that \(\eta _{j}(t_{j-1}) = x_{j-1}\), \(\eta _{j}(t_{j}) = x_{j}\), η j (s) ∈ Ω for all s ∈ (t j − 1, t j ) and \(t_{j} \leq t_{j-1} + C\vert x_{j} - x_{j-1}\vert \). Finally, we define \(\eta \in \mathrm{AC}([0,\,t(x,y)], {\mathbb{R}}^{n})\), with t(x, y) = t J , by setting η(s) = η j (s) for s ∈ [t j − 1, t j ] and 1 ≤ j ≤ J. Noting that
we see that the curve \(\eta \in \mathrm{AC}([0,\,t(x,y)],\, {\mathbb{R}}^{n})\) has all the required properties with C replaced by CRr − 1. □
Remark C.1.
(i) A standard argument, different from the one above, for proving local Lipschitz continuity near boundary points is to flatten the boundary by a local change of variables. (ii) One can easily modify the above proof to establish the same conclusion as Lemma 2.1 when Ω is merely a Lipschitz domain.
Proof (Lemma 2.2).
Let C > 0 be the constant from Lemma 2.1. We show that | u(x) − u(y) | ≤ CM | x − y | for all x, y ∈ Ω.
To show this, we fix any x, y ∈ Ω such that x ≠ y. By Lemma 2.1, there is a curve \(\eta \in \mathrm{AC}([0,\,t(x,y)],\, {\mathbb{R}}^{n})\) such that η(0) = x, η(t(x, y)) = y, t(x, y) ≤ C | x − y | , η(s) ∈ Ω for all s ∈ [0, t(x, y)] and \(\vert \dot{\eta }(s)\vert \leq 1\) for a.e. s ∈ [0, t(x, y)].
By the compactness of the image η([0, t(x, y)]) of the interval [0, t(x, y)] under η, we may choose a finite sequence \(\{B_{i}\}_{i=1}^{N}\) of open balls contained in Ω which covers η([0, t(x, y)]). Relabeling if necessary, we may assume that x ∈ B 1, y ∈ B N and \(B_{i} \cap B_{i+1}\not =\varnothing \) for all 1 ≤ i < N. We may choose a sequence \(0 = t_{0} < t_{1} < \cdots < t_{N} = t(x,y)\) of real numbers so that the line segment \([\eta (t_{i-1}),\,\eta (t_{i})]\) joining η(t i − 1) and η(t i ) lies in B i for any i = 1, …, N.
Thanks to Proposition 1.14, we have
Using this, we compute that
This completes the proof. □
1.4 A.4 Localized Versions of Lemma 4.2
Theorem D.1.
Let U, V be open subsets of \({\mathbb{R}}^{n}\) with the properties: \(\overline{V } \subset U\) and \(V \cap \Omega \not =\varnothing \) . Let \(u \in C(U \cap \overline{\Omega })\) be a viscosity solution of
Then, for each \(\varepsilon \in (0,\,1)\) , there exists a function \({u}^{\varepsilon } \in {C}^{1}(V \cap \overline{\Omega })\) such that
Proof.
We choose functions \(\zeta,\,\eta \in {C}^{1}({\mathbb{R}}^{n})\) so that 0 ≤ ζ(x) ≤ η(x) ≤ 1 for all \(x \in {\mathbb{R}}^{n}\), ζ(x) = 1 for all x ∈ V, η(x) = 1 for all \(x \in \mathrm{supp}\zeta\) and \(\mathrm{supp}\eta \subset U\).
We define the function \(v \in C(\overline{\Omega })\) by setting v(x) = η(x)u(x) for \(x \in U \cap \overline{\Omega }\) and v(x) = 0 otherwise. By the coercivity of H, u is locally Lipschitz continuous in \(U \cap \overline{\Omega }\), and hence, v is Lipschitz continuous in \(\overline{\Omega }\). Let L > 0 be a Lipschitz bound of v in \(\overline{\Omega }\). Then v is a viscosity solution of
where \(M := L\|\gamma \|_{\infty,\partial \Omega }\). In fact, we have a stronger assertion that for any \(x \in \overline{\Omega }\) and any p ∈ D + v(x),
To check this, let \(\phi \in {C}^{1}(\overline{\Omega })\) and assume that v − ϕ attains a maximum at \(x \in \overline{\Omega }\). Observe that if x ∈ Ω, then | Dϕ(x) | ≤ L and that if x ∈ ∂Ω, then
which yields
Thus, (132) is valid.
We set
It is clear that h ∈ C(∂Ω) and that G satisfies (A5)–(A7), with H replaced by G.
In view of the coercivity of H, we may assume by reselecting L if necessary that for all \((x,p) \in \overline{\Omega } \times {\mathbb{R}}^{n}\), if | p | > L, then H(x, p) > 0. We now show that v is a viscosity solution of
To do this, let \(\hat{x} \in \overline{\Omega }\) and \(\hat{p} \in {D}^{+}v(\hat{x})\). Consider the case where \(\zeta (\hat{x}) > 0\), which implies that \(\hat{x} \in U\). We have η(x) = 1 near the point \(\hat{x}\), which implies that \(\hat{p} \in {D}^{+}u(\hat{x})\). As u is a viscosity subsolution of (131), we have \(H(\hat{x},\hat{p}) \leq 0\) if \(\hat{x} \in \Omega \) and \(\min \{H(\hat{x},\hat{p}),\,\gamma (\hat{x}) \cdot \hat{ p} - h(\hat{x})\} \leq 0\) if \(\hat{x} \in \partial \Omega \). Assume in addition that \(\hat{x} \in \partial \Omega \). By (132), we have \(\gamma (\hat{x}) \cdot \hat{ p} \leq M\). If \(\vert \hat{p}\vert > L\), we have both
Hence, if \(\vert \hat{p}\vert > L\), then \(\gamma (\hat{x}) \cdot \hat{ p} \leq h(\hat{x})\). On the other hand, if \(\vert \hat{p}\vert \leq L\), we have two cases: in one case we have \(H(\hat{x},\hat{p}) \leq 0\) and hence, \(G(\hat{x},\hat{p}) \leq 0\). In the other case, we have \(\gamma (\hat{x}) \cdot \hat{ p} \leq g(\hat{x})\) and then \(\gamma (\hat{x}) \cdot \hat{ p} \leq h(\hat{x})\). These observations together show that
We next assume that \(\hat{x} \in \Omega \). In this case, we easily see that \(G(\hat{x},\hat{p}) \leq 0\).
Next, consider the case where \(\zeta (\hat{x}) = 0\), which implies that \(G(\hat{x},\hat{p}) = \vert \hat{p}\vert - L\) and \(h(\hat{x}) = M\). By (132), we immediately see that \(G(\hat{x},\hat{p}) \leq 0\) if \(\hat{x} \in \Omega \) and \(\min \{G(\hat{x},\hat{p}),\gamma (\hat{x}) \cdot \hat{ p} - h(\hat{x})\} \leq 0\) if \(\hat{x} \in \partial \Omega \). We thus conclude that v is a viscosity solution of (133).
We may invoke Theorem 4.2, to find a collection \(\{{v}^{\varepsilon }\}_{\varepsilon \in (0,1)} \subset {C}^{1}(\overline{\Omega })\) such that
But, this yields
The functions \({v}^{\varepsilon }\) have all the required properties. □
The above theorem has a version for Hamilton–Jacobi equations of evolution type.
Theorem D.2.
Let U, V be bounded open subsets of \({\mathbb{R}}^{n} \times \mathbb{R}_{+}\) with the properties: \(\overline{V } \subset U\) , \(\overline{U} \subset {\mathbb{R}}^{n} \times \mathbb{R}_{+}\) and \(V \cap Q\not =\varnothing \) . Let \(u \in \mathrm{Lip}(U \cap Q)\) be a viscosity solution of
Then, for each \(\varepsilon \in (0,\,1)\) , there exists a function \({u}^{\varepsilon } \in {C}^{1}(V \cap Q)\) such that
Proof.
Choose constants \(a,b \in \mathbb{R}_{+}\) so that \(U \subset {\mathbb{R}}^{n} \times (a,b)\) and let ρ be a defining function of Ω. We may assume that ρ is bounded in \({\mathbb{R}}^{n}\). We choose a function \(\zeta \in {C}^{1}(\mathbb{R})\) so that ζ(t) = 0 for all t ∈ [a, b], ζ′(t) > 0 for all t > b, ζ′(t) < 0 for all t < a and \(\min \{\zeta (a/2),\zeta (2b)\} >\|\rho \| _{\infty,\Omega }\).
We set
It is easily seen that
Let \((x,t) \in {\mathbb{R}}^{n+1}\) be such that \(\tilde{\rho }(x,t) = 0\). It is obvious that \((x,t) \in \overline{\Omega } \times [a/2,\,2b]\). If a ≤ t ≤ b, then ρ(x) = 0 and thus Dρ(x) ≠ 0. If either t > b or t < a, then | ζ′(t) | > 0. Hence, we have \(D\tilde{\rho }(x,t)\not =0\). Thus, \(\tilde{\rho }\) is a defining function of \(\tilde{\Omega }\).
Let M > 0 and define \(\tilde{\gamma }\in C(\partial \tilde{\Omega }, {\mathbb{R}}^{n+1})\) by
where we may assume that γ is defined and continuous in \(\overline{\Omega }\). We note that for any \((x,t) \in \partial \tilde{\Omega }\),
Note as well that (1 + Mρ(x)) + = 1 for all x ∈ ∂Ω and
Thus we can fix M > 0 so that for all \((x,t) \in \partial \tilde{\Omega }\),
Noting that for each x ∈ Ω, the x-section \(\{t \in \mathbb{R}\, :\, (x,t) \in \tilde{ \Omega }\}\) of \(\tilde{\Omega }\) is an open interval (or, line segment), we deduce that \(\tilde{\Omega }\) is a connected set. We may assume that g is defined and continuous in \(\overline{\Omega }\). We define \(\tilde{g} \in C(\partial \tilde{\Omega })\) by \(\tilde{g}(x,t) = g(x)\). Thus, assumptions (A1)–(A4) hold with n + 1, \(\tilde{\Omega }\), \(\tilde{\gamma }\) and \(\tilde{g}\) in place of n, Ω, γ and g.
Let L > 0 be a Lipschitz bound of the function u in \(U \cap Q\). Set
and note that \(\tilde{H} \in C(\overline{\tilde{\Omega }\,} \times {\mathbb{R}}^{n+1})\) satisfies (A5)–(A7), with Ω replaced by \(\tilde{\Omega }\).
We now claim that u is a viscosity solution of
Indeed, since \(U \cap \tilde{ \Omega } = U \cap Q\) and \(U \cap \partial \tilde{\Omega } = U \cap \partial Q\), if \((x,t) \in U \cap \overline{\tilde{\Omega }}\) and \((p,q) \in D^{+}u(x,t)\), then we get \(\vert q\vert \leq L\) by the cylindrical geometry of Q and, by the viscosity property of u,
We apply Theorem D.1 to find a collection \(\{{u}^{\varepsilon }\}_{\varepsilon \in (0,1)} \subset {C}^{1}(V \cap \overline{\tilde{\Omega }\,})\) such that
It is straightforward to see that the collection \(\{{u}^{\varepsilon }\}_{\varepsilon \in (0,1)} \subset {C}^{1}(V \cap Q)\) satisfies (134). □
A.5 A Proof of Lemma 5.4
This subsection is mostly devoted to the proof of Lemma 5.4, a version of the Dunford–Pettis theorem. We also give a proof of the weak-star compactness of bounded sequences in \({L}^{\infty }(J, {\mathbb{R}}^{m})\), where J = [a, b] is a finite interval in \(\mathbb{R}\).
Proof (Lemma 5.4).
We define the functions \(F_{j} \in C(J, {\mathbb{R}}^{m})\) by
By the uniform integrability of \(\{f_{j}\}\), the sequence \(\{F_{j}\}_{j\in \mathbb{N}}\) is uniformly bounded and equicontinuous in J. Hence, the Ascoli–Arzelà theorem ensures that it has a subsequence converging to a function F uniformly in J. We fix such a subsequence and denote it again by \(\{F_{j}\}\). Because of the uniform integrability assumption, the sequence \(\{F_{j}\}\) is equi-absolutely continuous in J. That is, for any \(\varepsilon > 0\) there exists δ > 0 such that
An immediate consequence of this is that \(F \in \mathrm{AC}(J, {\mathbb{R}}^{m})\). Hence, for some \(f \in {L}^{1}(J, {\mathbb{R}}^{m})\), we have
Next, let \(\phi \in C^{1}(J)\), and we show that
Integrating by parts, we observe that as j → ∞,
Hence, (135) is valid.
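The role of the uniform integrability hypothesis in the Ascoli–Arzelà step above can be seen numerically. The following sketch (an example of our choosing, not from the text) shows a sequence bounded in \(L^{1}\) but not uniformly integrable, whose primitives fail to be equicontinuous:

```python
import numpy as np

# Counterexample sketch: f_j = j * 1_{(0, 1/j)} on J = [0, 1] is
# bounded in L^1 (each norm is 1) but NOT uniformly integrable: all of
# its mass sits in (0, 1/j).  Its primitives F_j(t) = min(j*t, 1) are
# uniformly bounded, yet not equicontinuous near t = 0, so the
# Ascoli-Arzela argument above cannot be applied to this sequence.

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

def F(j, s):
    # primitive of f_j: the integral of j * 1_{(0, 1/j)} from 0 to s
    return np.minimum(j * s, 1.0)

# L^1 norms stay (approximately) equal to 1 ...
l1_norms = [float(np.sum(j * (t < 1.0 / j)) * dt) for j in (10, 100, 1000)]

# ... but F_j jumps by 1 over the shrinking interval (0, 1/j):
jumps = [float(F(j, 1.0 / j) - F(j, 0.0)) for j in (10, 100, 1000)]
print(l1_norms)   # each close to 1
print(jumps)      # each equal to 1, although 1/j -> 0
```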
Now, let \(\phi \in L^{\infty }(J)\). We regard the functions \(f_{j}\), f, ϕ as functions defined in \(\mathbb{R}\) by setting \(f_{j}(x) = f(x) = \phi (x) = 0\) for x < a or x > b. Let \(\{k_{\varepsilon }\}_{\varepsilon >0}\) be a collection of standard mollification kernels. We recall that
Fix any δ > 0. By the uniform integrability assumption, we have
Let α > 0 and set
By the Chebyshev inequality, we get
By the uniform integrability assumption, if α > 0 is sufficiently large, then
In what follows we fix α > 0 large enough so that (138) holds. We write \(f_{j} - f = g_{j} + b_{j}\), where \(g_{j} = (f_{j} - f)(1 -\mathbf{1}_{E_{j}})\) and \(b_{j} = (f_{j} - f)\mathbf{1}_{E_{j}}\). Then,
Observe that
and
Hence, in view of (135) and (136), we get \(\limsup _{j\rightarrow \infty }\vert I_{j}\vert \leq 2\delta \|\phi \|_{{L}^{\infty }(J)}.\) As δ > 0 is arbitrary, we get \(\lim _{j\rightarrow \infty }I_{j} = 0,\) which completes the proof. □
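The standard mollification kernels used in the proof above admit the classical bump-function realization. The numerical sketch below (function names are ours) builds \(k_{\varepsilon }(x) = {\varepsilon }^{-1}k(x/\varepsilon )\) and checks that each kernel has unit mass and support in \((-\varepsilon,\varepsilon )\):

```python
import numpy as np

# Classical mollification kernels: k is the smooth bump supported in
# (-1, 1), normalized to unit mass, and k_eps(x) = (1/eps) * k(x/eps)
# concentrates that mass in (-eps, eps) as eps -> 0.

def bump(x):
    # C^infty bump: exp(-1/(1 - x^2)) for |x| < 1, zero elsewhere
    out = np.zeros_like(x)
    inside = np.abs(x) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

# normalizing constant, computed by a Riemann sum
x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]
C = 1.0 / (np.sum(bump(x)) * dx)

def k_eps(x, eps):
    # rescaled kernel: unit mass, support in (-eps, eps)
    return (C / eps) * bump(x / eps)

for eps in (0.5, 0.1, 0.02):
    xs = np.linspace(-eps, eps, 200001)
    mass = float(np.sum(k_eps(xs, eps)) * (xs[1] - xs[0]))
    print(eps, round(mass, 6))   # each mass is approximately 1
```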
As a corollary of Lemma 5.4, we deduce the weak-star compactness of bounded sequences in \({L}^{\infty }(J, {\mathbb{R}}^{m})\):
Lemma E.1.
Let J = [a, b], with −∞ < a < b < ∞. Let \(\{f_{k}\}_{k\in \mathbb{N}}\) be a bounded sequence of functions in \({L}^{\infty }(J, {\mathbb{R}}^{m})\). Then \(\{f_{k}\}\) has a subsequence which converges weakly-star in \({L}^{\infty }(J, {\mathbb{R}}^{m})\).
Proof.
Set \(M =\sup _{k\in \mathbb{N}}\|f_{k}\|_{{L}^{\infty }(J)}\). Let E ⊂ J be a measurable set, and observe that
which shows that the sequence \(\{f_{k}\}\) is uniformly integrable in J. Thanks to Lemma 5.4, there exists a subsequence \(\{f_{k_{j}}\}_{j\in \mathbb{N}}\) of \(\{f_{k}\}\) which converges to a function f weakly in \(L^{1}(J, {\mathbb{R}}^{m})\).
Let \(i \in \mathbb{N}\) and set \(E_{i} =\{ t \in J : \vert f(t)\vert > M + 1/i\}\) and \(g_{i}(t) = \mathbf{1}_{E_{i}}(t)f(t)/\vert f(t)\vert \) for t ∈ J. Since \(g_{i} \in {L}^{\infty }(J, {\mathbb{R}}^{m})\), we get
Hence, using the Chebyshev inequality, we obtain
which ensures that \(\vert E_{i}\vert = 0\). Thus, we find that \(\vert f(t)\vert \leq M\) a.e. in J.
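The Chebyshev (Markov) inequality used in the proofs above can be checked numerically; the sketch below uses an integrable function of our choosing:

```python
import numpy as np

# Numerical check of the Chebyshev (Markov) inequality: for an
# integrable g on J = [0, 1] and alpha > 0,
#   |{t in J : |g(t)| > alpha}| <= (1/alpha) * integral_J |g(t)| dt.

t = np.linspace(0.0, 1.0, 1000001)
dt = t[1] - t[0]
g = 1.0 / np.sqrt(t + 1e-6)        # integrable, with a large spike at t = 0

l1 = float(np.sum(np.abs(g)) * dt)  # approximates the L^1 norm (about 2)
for alpha in (0.5, 1.0, 4.0):
    measure = float(np.sum(np.abs(g) > alpha) * dt)
    print(alpha, measure, l1 / alpha)   # measure never exceeds l1/alpha
```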
Now, fix any \(\phi \in {L}^{1}(J, {\mathbb{R}}^{m})\). We select a sequence \(\{\phi _{i}\}_{i\in \mathbb{N}} \subset {L}^{\infty }(J, {\mathbb{R}}^{m})\) so that, as i → ∞, ϕ i → ϕ in \({L}^{1}(J, {\mathbb{R}}^{m})\). For each \(i \in \mathbb{N}\), we have
On the other hand, we have
and
These together yield
□
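A concrete instance of Lemma E.1 (an example of our choosing, not from the text): the oscillating sequence \(f_{k}(t) =\sin (kt)\) is bounded in \(L^{\infty }\) and converges weakly-star to 0, as the Riemann–Lebesgue lemma predicts.

```python
import numpy as np

# Weak-star convergence sketch: f_k(t) = sin(k t) satisfies
# ||f_k||_{L^inf} <= 1 for all k, and for every phi in L^1(J) the
# pairing integral_J f_k(t) phi(t) dt tends to 0 (Riemann-Lebesgue),
# i.e. f_k -> 0 weakly-star in L^inf(J), although f_k converges
# neither pointwise nor in norm.

t = np.linspace(0.0, 1.0, 200001)
dt = t[1] - t[0]
phi = np.exp(-t) * (1.0 + t)      # a fixed integrable test function

pairings = [float(np.sum(np.sin(k * t) * phi) * dt) for k in (1, 10, 100, 1000)]
print(pairings)   # the pairings shrink toward 0 as k grows
```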
A.6 Rademacher’s Theorem
We give here a proof of Rademacher’s theorem.
Theorem F.1 (Rademacher).
Let \(B = B_{1} \subset {\mathbb{R}}^{n}\) and \(f \in \mathrm{Lip}(B)\) . Then f is differentiable almost everywhere in B.
To prove the above theorem, we mainly follow the proof given in [1].
Proof.
We first show that f has a distributional gradient \(Df \in L^{\infty }(B)\).
Let L > 0 be a Lipschitz bound of the function f. Let \(i \in \{ 1,2,\ldots,n\}\) and let \(e_{i}\) denote the unit vector in \({\mathbb{R}}^{n}\) with unity as the i-th entry. Fix any \(\phi \in C_{0}^{1}(B)\) and observe that
and
Thus, the map
extends uniquely to a bounded linear functional \(G_{i}\) on \(L^{2}(B)\). By the Riesz representation theorem, there is a function \(g_{i} \in L^{2}(B)\) such that
This shows that \(g = (g_{1},\ldots,g_{n})\) is the distributional gradient of f.
We plug the function \(\phi \in L^{2}(B)\) given by \(\phi (x) = (g_{i}(x)/\vert g_{i}(x)\vert )\mathbf{1}_{E_{k}}(x)\), where \(k \in \mathbb{N}\) and \(E_{k} =\{ x \in B : \vert g_{i}(x)\vert > L + 1/k\}\), into the inequality \(\vert G_{i}(\phi )\vert \leq L\|\phi \|_{{L}^{1}(B)}\), to obtain
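To make the distributional gradient concrete, here is a numerical sketch with an example of our choosing (the text's f is a general Lipschitz function):

```python
import numpy as np

# Distributional gradient sketch: the Lipschitz function f(x) = |x| on
# B = (-1, 1) has g(x) = sign(x) in L^inf, and the integration-by-parts
# identity
#   integral_B f(x) phi'(x) dx = - integral_B g(x) phi(x) dx
# holds for every phi in C_0^1(B), with |g| <= L = 1 a.e.

x = np.linspace(-1.0, 1.0, 2000001)
dx = x[1] - x[0]

f = np.abs(x)
g = np.sign(x)
phi = (1.0 - x ** 2) ** 2 * (x + 0.5)                           # vanishes at -1 and 1
dphi = (1.0 - x ** 2) * ((1.0 - x ** 2) - 4.0 * x * (x + 0.5))  # phi'

lhs = float(np.sum(f * dphi) * dx)
rhs = float(-np.sum(g * phi) * dx)
print(lhs, rhs)   # both approximately -1/3
```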
which yields
Hence, we get \(\vert E_{k}\vert = 0\) for all \(k \in \mathbb{N}\) and \(\vert \{x \in B : \vert g_{i}(x)\vert > L\}\vert = 0\). That is, \(g_{i} \in L^{\infty }(B)\) and \(\vert g_{i}(x)\vert \leq L\) a.e. in B.
The Lebesgue differentiation theorem (see [57]) states that for a.e. x ∈ B, we have \(g(x) \in {\mathbb{R}}^{n}\) and
Now, we fix such a point x ∈ B and show that f is differentiable at x. Fix an r > 0 so that B r (x) ⊂ B. For δ ∈ (0, r), consider the function \(h_{\delta } \in C(\overline{B})\) given by
We claim that
Note that \(h_{\delta }(0) = 0\) and \(h_{\delta }\) is Lipschitz continuous with Lipschitz bound L. By the Ascoli–Arzelà theorem, for any sequence \(\{\delta _{j}\}\subset (0,\,r)\) converging to zero, there exist a subsequence \(\{\delta _{j_{k}}\}_{k\in \mathbb{N}}\) of \(\{\delta _{j}\}\) and a function \(h_{0} \in C(\overline{B})\) such that
In order to prove (140), we only need to show that \(h_{0}(y) = g(x)\cdot y\) for all y ∈ B.
Since \(h_{\delta }(0) = 0\) for all δ ∈ (0, r), we have \(h_{0}(0) = 0\). We observe from (139) that
Using this, we compute that for all \(\phi \in C_{0}^{1}(B)\),
This guarantees that \(h_{0}(y) - g(x)\cdot y\) is constant in B, while \(h_{0}(0) = 0\). Thus, we see that \(h_{0}(y) = g(x)\cdot y\) for all y ∈ B, which proves (140).
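The Lebesgue differentiation theorem invoked above can be illustrated numerically; the example function below is ours:

```python
import numpy as np

# Lebesgue differentiation sketch: for integrable g and a.e. x, the
# averages (1/2r) * integral_{x-r}^{x+r} g(t) dt converge to g(x) as
# r -> 0.  We test this at x = 0.3 for the discontinuous
# g(t) = sign(t - 1/2), which is continuous near x = 0.3.

def avg(gfun, x, r, n=200001):
    # mean value of gfun over the interval (x - r, x + r)
    s = np.linspace(x - r, x + r, n)
    return float(np.mean(gfun(s)))

g = lambda t: np.sign(t - 0.5)
for r in (0.3, 0.1, 0.01):
    print(r, avg(g, 0.3, r))   # tends to g(0.3) = -1
```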
Finally, we note that (140) yields
□
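The rescaled difference quotients \(h_{\delta }\) at the heart of the proof can also be visualized numerically for a simple Lipschitz function of our choosing:

```python
import numpy as np

# Difference-quotient sketch: for the Lipschitz function f(x) = |x| and
# a point x0 != 0 where f is differentiable, the rescalings
#   h_delta(y) = (f(x0 + delta*y) - f(x0)) / delta
# converge uniformly on |y| <= 1 to the linear map y -> g(x0)*y, as in
# the proof of (140); here g(x0) = 1 since x0 > 0.

f = np.abs
x0 = 0.4
y = np.linspace(-1.0, 1.0, 4001)

for delta in (0.5, 0.1, 0.01):
    h = (f(x0 + delta * y) - f(x0)) / delta
    err = float(np.max(np.abs(h - y)))   # sup-distance to the limit y -> y
    print(delta, err)   # once delta < x0, the quotient is exactly linear
```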
© 2013 Springer-Verlag Berlin Heidelberg
Ishii, H. (2013). A Short Introduction to Viscosity Solutions and the Large Time Behavior of Solutions of Hamilton–Jacobi Equations. In: Hamilton-Jacobi Equations: Approximations, Numerical Analysis and Applications. Lecture Notes in Mathematics, vol. 2074. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-36433-4_3
Print ISBN: 978-3-642-36432-7. Online ISBN: 978-3-642-36433-4