Remarks on the Vanishing Viscosity Process of State-Constraint Hamilton–Jacobi Equations

Applied Mathematics & Optimization

Abstract

We investigate the convergence rate in the vanishing viscosity process of the solutions to the subquadratic state-constraint Hamilton–Jacobi equations. We give two different proofs of the fact that, for non-negative Lipschitz data that vanish on the boundary, the rate of convergence is \({\mathcal {O}}(\sqrt{\varepsilon })\) in the interior. Moreover, the one-sided rate can be improved to \({\mathcal {O}}(\varepsilon )\) for non-negative compactly supported data and \({\mathcal {O}}(\varepsilon ^{1/p})\) (where \(1<p<2\) is the exponent of the gradient term) for non-negative data \(f\in \mathrm {C}^2({\overline{\varOmega }})\) such that \(f = 0\) and \(Df = 0\) on the boundary. Our approach relies on deep understanding of the blow-up behavior near the boundary and semiconcavity of the solutions.

References

  1. Armstrong, S.N., Tran, H.V.: Viscosity solutions of general viscous Hamilton–Jacobi equations. Math. Ann. 361(3), 647–687 (2015)

  2. Bandle, C., Essen, M.: On the solutions of quasilinear elliptic problems with boundary blow-up. Symposia Matematica 35, 93–111 (1994)

  3. Bardi, M., Capuzzo-Dolcetta, I.: Optimal Control and Viscosity Solutions of Hamilton–Jacobi–Bellman Equations. Modern Birkhäuser Classics (1997)

  4. Barles, G.: A weak Bernstein method for fully nonlinear elliptic equations. Differ. Integral Equ. 4(2), 241–262 (1991)

  5. Barles, G., Da Lio, F.: On the generalized Dirichlet problem for viscous Hamilton–Jacobi equations. J. Mathématiques Pures et Appliquées 83(1), 53–75 (2004)

  6. Barles, G., Porretta, A., Tchamba, T.T.: On the large time behavior of solutions of the Dirichlet problem for subquadratic viscous Hamilton–Jacobi equations. J. Mathématiques Pures et Appliquées 94(5), 497–519 (2010)

  7. Bernstein, S.: Sur la généralisation du problème de Dirichlet. Math. Ann. 69(1), 82–136 (1910)

  8. Calder, J.: Lecture notes on viscosity solutions (2018)

  9. Capuzzo Dolcetta, I., Leoni, F., Porretta, A.: Hölder estimates for degenerate elliptic equations with coercive Hamiltonians. Trans. Am. Math. Soc. 362(9), 4511–4536 (2010)

  10. Capuzzo-Dolcetta, I., Lions, P.-L.: Hamilton–Jacobi equations with state constraints. Trans. Am. Math. Soc. 318(2), 643–683 (1990)

  11. Crandall, M.G., Lions, P.L.: Two approximations of solutions of Hamilton–Jacobi equations. Math. Comput. 43(167), 1–19 (1984)

  12. Evans, L.C.: Adjoint and compensated compactness methods for Hamilton–Jacobi PDE. Arch. Ration. Mech. Anal. 197(3), 1053–1088 (2010)

  13. Fabbri, G., Gozzi, F., Swiech, A.: Stochastic optimal control in infinite dimension: dynamic programming and HJB equations. Probab. Theory Stoch. Model. (2017)

  14. Fleming, W.H.: The convergence problem for differential games. J. Math. Anal. Appl. 3(1), 102–116 (1961)

  15. Fleming, W.H., Souganidis, P.E.: Asymptotic series and the method of vanishing viscosity. Indiana Univ. Math. J. 35(2), 425–447 (1986)

  16. Gilbarg, D., Trudinger, N.S.: Elliptic Partial Differential Equations of Second Order, 2nd edn. Springer, Berlin (2001)

  17. Ishii, H., Koike, S.: A new formulation of state constraint problems for first-order PDEs. SIAM J. Control Optim. 34(2), 554–571 (1996)

  18. Ishii, H., Loreti, P.: A class of stochastic optimal control problems with state constraint. Indiana Univ. Math. J. 51(5), 1167–1196 (2002)

  19. Ishii, H., Mitake, H., Tran, H.V.: The vanishing discount problem and viscosity Mather measures. Part 2: boundary value problems. J. Mathématiques Pures et Appliquées 108(3), 261–305 (2017)

  20. Kim, Y., Tran, H.V., Tu, S.N.: State-constraint static Hamilton–Jacobi equations in nested domains. SIAM J. Math. Anal. 52(5), 4161–4184 (2020)

  21. Lasry, J.M., Lions, P.-L.: Nonlinear elliptic equations with singular boundary conditions and stochastic control with state constraints. I. The model problem. Math. Ann. 283(4), 583–630 (1989)

  22. Leonori, T., Petitta, F.: Local estimates for parabolic equations with nonlinear gradient terms. Calc. Var. Partial Differ. Equ. 42(1), 153–187 (2011)

  23. Lions, P.L.: Quelques remarques sur les problèmes elliptiques quasilinéaires du second ordre. J. d’Analyse Mathématique 45(1), 234–254 (1985)

  24. Marcus, M., Véron, L.: Existence and uniqueness results for large solutions of general nonlinear elliptic equations. J. Evol. Equ. 3(4), 637–652 (2003)

  25. Mitake, H.: Asymptotic solutions of Hamilton–Jacobi equations with state constraints. Appl. Math. Optim. 58(3), 393–410 (2008)

  26. Moll, S., Petitta, F.: Large solutions for nonlinear parabolic equations without absorption terms. J. Funct. Anal. 262(4), 1566–1602 (2012)

  27. Porretta, A.: Local estimates and large solutions for some elliptic equations with absorption. Adv. Differ. Equ. 9(3–4), 329–351 (2004)

  28. Porretta, A., Véron, L.: Asymptotic behaviour of the gradient of large solutions to some nonlinear elliptic equations. Adv. Nonlinear Stud. 6(3), 351–378 (2006)

  29. Sardarli, M.: The ergodic Mean Field Game system for a type of state constraint condition. arXiv:2101.10564 (2021)

  30. Soner, H.: Optimal control with state-space constraint I. SIAM J. Control Optim. 24(3), 552–561 (1986)

  31. Tran, H.V.: Adjoint methods for static Hamilton–Jacobi equations. Calc. Var. Partial Differ. Equ. 41(3–4), 301–319 (2010)

  32. Tran, H.V.: Hamilton–Jacobi Equations: Theory and Applications. American Mathematical Society, Providence (2021)

  33. Tu, S.N.T.: Vanishing discount problem and the additive eigenvalues on changing domains. arXiv:2006.15800 (2021)

Acknowledgements

The authors would like to express their appreciation to Hung V. Tran for his invaluable guidance. The authors would also like to thank Dohyun Kwon for useful discussions.

Funding

The authors have not disclosed any funding.

Author information

Corresponding author

Correspondence to Yuxi Han.

Ethics declarations

Conflict of interest

The authors have not disclosed any competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The authors contributed equally to this work. The authors are supported in part by NSF Grant DMS-1664424 and NSF CAREER Grant DMS-1843320. The work of Son N. T. Tu is supported in part by the GSSC Fellowship, University of Wisconsin–Madison.

Appendix

A.1 Estimates on Solutions

We present here a proof of the gradient bound for the solution to (PDE\(_\varepsilon \)) using Bernstein’s method (see also [21, 23]). Another proof, using Bernstein’s method inside a doubling variables argument, is given in [1].

Proof of Theorem 4

Let \(\theta \in (0,1)\) be a parameter to be chosen later, and let \(\varphi \in \mathrm {C}_c^\infty (\varOmega )\) be such that \(0\le \varphi \le 1\), \(\mathrm {supp}\;\varphi \subset \varOmega \), \(\varphi = 1\) on \(\varOmega _\delta \), and

$$\begin{aligned} \left|\varDelta \varphi (x)\right|\le C\varphi ^\theta \qquad \text {and}\qquad \left|D \varphi (x)\right|^2 \le C\varphi ^{1+\theta },\quad \forall x\in \varOmega , \end{aligned}$$
(66)

where \(C = C(\delta ,\theta )\) is a constant depending on \(\delta ,\theta \).

Define \(w(x) := \left|Du^\varepsilon (x)\right|^2\) for \(x \in \varOmega \). The equation for w is given by

$$\begin{aligned} -\varepsilon \varDelta w + 2 p\left|D u^\varepsilon \right|^{p-2}D u^\varepsilon \cdot D w + 2 w - 2 D f\cdot D u^\varepsilon + 2 \varepsilon \left|D^2u^\varepsilon \right|^2 = 0 \qquad \text {in}\;\varOmega . \end{aligned}$$
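
The equation for w above is obtained by a standard Bochner-type computation: differentiate (PDE\(_\varepsilon \)) with respect to \(x_k\), multiply the resulting equation by \(2u^\varepsilon _{x_k}\), sum over k, and use the identities

$$\begin{aligned} D w = 2\left( D^2u^\varepsilon \right) Du^\varepsilon \qquad \text {and}\qquad \varDelta w = 2\left|D^2u^\varepsilon \right|^2 + 2\, Du^\varepsilon \cdot D\left( \varDelta u^\varepsilon \right) . \end{aligned}$$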

Then an equation for \((\varphi w)\) can be derived as follows.

$$\begin{aligned}&-\varepsilon \varDelta (\varphi w) + 2 p\left|D u^\varepsilon \right|^{p-2}D u^\varepsilon \cdot D (\varphi w) + 2 (\varphi w) + 2 \varepsilon \varphi \left|D^2u^\varepsilon \right|^2 + 2\varepsilon \frac{D \varphi }{\varphi }\cdot D (\varphi w) \\&\quad = \varphi (D f\cdot D u^\varepsilon ) + 2 p\left|D u^\varepsilon \right|^{p-2}(D u^\varepsilon \cdot D \varphi )w -\varepsilon w \varDelta \varphi + 2\varepsilon \frac{\left|D \varphi \right|^2}{\varphi }w \qquad \text {in}\;\mathrm {supp}\;\varphi . \end{aligned}$$

Assume that \(\varphi w\) achieves its maximum over \({\overline{\varOmega }}\) at \(x_0\in \varOmega \). We may further assume that \(x_0\in \mathrm {supp}\;\varphi \), since otherwise the maximum of \(\varphi w\) over \({\overline{\varOmega }}\) is zero and the desired bound is trivial. By the classical maximum principle,

$$\begin{aligned} -\varepsilon \varDelta (\varphi w)(x_0)\ge 0 \qquad \text {and}\qquad \left|D(\varphi w)(x_0)\right|= 0. \end{aligned}$$

Using this in the equation for \(\varphi w\) above, we obtain

$$\begin{aligned} \varepsilon \varphi \left|D^2u^\varepsilon \right|^2 \le \varphi (Df\cdot Du^\varepsilon )+ 2 p\left|Du^\varepsilon \right|^{p-1} \left|D\varphi \right|w + \varepsilon w \left|\varDelta \varphi \right|+ 2\varepsilon w\frac{\left|D\varphi \right|^2}{\varphi }, \end{aligned}$$

where all terms are evaluated at \(x_0\). From (66), we have

$$\begin{aligned} \varepsilon \varphi \left|D^2u^\varepsilon \right|^2 \le \varphi \left|Df\right|w^{\frac{1}{2}}+ 2 Cp w^{\frac{p-1}{2}+1} \varphi ^{\frac{1+\theta }{2}} + C\varepsilon w \varphi ^{\theta } + 2C\varepsilon w\varphi ^\theta . \end{aligned}$$
(67)

By the Cauchy–Schwarz inequality, \(n\left|D^2u^\varepsilon \right|^2\ge (\varDelta u^\varepsilon )^2\). Thus, if \(n\varepsilon < 1\), then

$$\begin{aligned} \begin{aligned} \varepsilon \left|D^2u^\varepsilon \right|^2&\ge \frac{(\varepsilon \varDelta u^\varepsilon )^2}{n\varepsilon } \ge (\varepsilon \varDelta u^\varepsilon )^2 = \left( u^\varepsilon + \left|Du^\varepsilon \right|^p - f\right) ^2\\&\ge \left|Du^\varepsilon \right|^{2p} - 2C\left|Du^\varepsilon \right|^p \ge \frac{\left|Du^\varepsilon \right|^{2p}}{2} - 2C, \end{aligned} \end{aligned}$$
(68)

where C depends on \(\max _{{\overline{\varOmega }}}f\) only. Using (68) in (67), we obtain that

$$\begin{aligned} \varphi \left( \frac{1}{2}w^p - 2C\right) \le \varphi \left|Df\right|w^{\frac{1}{2}}+ 2 Cp w^{\frac{p-1}{2}+1} \varphi ^{\frac{1+\theta }{2}} + 3C\varepsilon w \varphi ^{\theta }. \end{aligned}$$

Multiply both sides by \(\varphi ^{p-1}\) to deduce that

$$\begin{aligned} (\varphi w)^p \le 4C\varphi ^{p-1} + 2\Vert Df\Vert _{L^\infty }\varphi ^p w^{\frac{1}{2}} + 4Cp \varphi ^{\frac{2p+\theta - 1}{2}}w^{\frac{p+1}{2}} + 6C\varepsilon \varphi ^{p+\theta - 1}w. \end{aligned}$$

Choose \(\theta \) so that \(2p+\theta -1 \ge p+1\), i.e., \(p+\theta \ge 2\); since \(p>1\), such a choice of \(\theta \in (0,1)\) is always possible. Then we get

$$\begin{aligned} (\varphi w)^p \le C\left( 1+ (\varphi w)^\frac{1}{2} + (\varphi w)^\frac{p+1}{2} +(\varphi w)\right) . \end{aligned}$$
(69)

Viewing (69) as a polynomial inequality in \(z = (\varphi w)(x_0)\), and noting that the exponent p on the left-hand side is strictly larger than each of the exponents \(\frac{1}{2}\), \(\frac{p+1}{2}\), and 1 on the right-hand side (as \(p>1\)), we conclude that \((\varphi w)(x_0)\le C\), where C depends only on the coefficients of the right-hand side of (69). This gives the desired gradient bound, since \(w(x)=(\varphi w)(x) \le (\varphi w)(x_0)\) for \(x \in {\overline{\varOmega }}_\delta \subset \mathrm {supp}\;\varphi \). \(\square \)

A.2 Well-Posedness of (PDE\(_\varepsilon \))

Proof of Theorem 5

If \(p\in (1,2)\), we use the ansatz \( u(x) = C_\varepsilon d(x)^{-\alpha }\) to find a solution to (PDE\(_\varepsilon \)). Plug the ansatz into (PDE\(_\varepsilon \)) and compute

$$\begin{aligned} \begin{aligned} \left|Du (x)\right|^p&= \frac{(\alpha C_\varepsilon )^p }{d(x)^{p(\alpha +1)}}\left|D d(x)\right|^p,\\ \varepsilon \varDelta u(x)&= \frac{\varepsilon C_\varepsilon \alpha (\alpha +1)}{d(x)^{\alpha +2}}\left|D d(x)\right|^2 - \frac{\varepsilon C_\varepsilon \alpha }{d(x)^{\alpha +1}}\varDelta d(x). \end{aligned} \end{aligned}$$

Since \(\left|D d(x)\right|= 1\) for x near \(\partial \varOmega \), as \(x\rightarrow \partial \varOmega \), the explosive terms of the highest order are

$$\begin{aligned} C_\varepsilon ^p \alpha ^p d^{-(\alpha +1)p} -\varepsilon C_\varepsilon \alpha (\alpha +1)d^{-(\alpha +2)}. \end{aligned}$$

Set the above to be zero to obtain that

$$\begin{aligned} \displaystyle \alpha = \frac{2-p}{p-1} \qquad \text {and}\qquad C_\varepsilon = \left( \frac{1}{\alpha }(\alpha +1)^\frac{1}{p-1}\right) \varepsilon ^{\frac{1}{p-1}} = \frac{1}{\alpha }(\alpha +1)^{\alpha +1}\varepsilon ^{\alpha +1}. \end{aligned}$$
(70)
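
Indeed, equating the exponents of the two explosive terms gives \((\alpha +1)p = \alpha +2\), i.e., \(\alpha = \frac{2-p}{p-1}\), while equating their coefficients gives \((C_\varepsilon \alpha )^p = \varepsilon C_\varepsilon \alpha (\alpha +1)\), i.e.,

$$\begin{aligned} C_\varepsilon = \frac{1}{\alpha }\big ((\alpha +1)\varepsilon \big )^{\frac{1}{p-1}}. \end{aligned}$$

Since \(\alpha +1 = \frac{1}{p-1}\), this is exactly (70). In particular, writing \(C_\varepsilon = C_\alpha \varepsilon ^{\alpha +1}\) with \(C_\alpha = \frac{1}{\alpha }(\alpha +1)^{\alpha +1}\), the coefficient identity becomes \((C_\alpha \alpha )^p = C_\alpha \alpha (\alpha +1)\), which is used below.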

For \(0<\delta < \frac{1}{2}\delta _0\) and \(\eta \) small, define

$$\begin{aligned} \begin{aligned} {\overline{w}}_{\eta ,\delta }(x)&:= \frac{(C_\alpha +\eta )\varepsilon ^{\alpha +1}}{(d(x)-\delta )^\alpha } + M_\eta , \qquad x\in \varOmega _\delta ,\\ {\underline{w}}_{\eta ,\delta }(x)&:= \frac{(C_\alpha -\eta )\varepsilon ^{\alpha +1}}{(d(x)+\delta )^\alpha } - M_\eta , \qquad x\in \varOmega ^\delta , \end{aligned} \end{aligned}$$

where \(C_\alpha := \frac{1}{\alpha } (\alpha +1)^{\alpha +1} \) and \(M_\eta \) is a constant to be chosen. Next, we show that \({\overline{w}}_{\eta ,\delta }\) is a supersolution of (PDE\(_\varepsilon \)) in \(\varOmega _\delta \), while \({\underline{w}}_{\eta ,\delta }\) is a subsolution of (PDE\(_\varepsilon \)) in \(\varOmega ^\delta \). Compute

$$\begin{aligned} {\mathcal {L}}^\varepsilon \left[ {\overline{w}}_{\eta ,\delta }\right] (x) =&\frac{ (C_\alpha + \eta )\varepsilon ^{\alpha +1}}{(d(x)-\delta )^\alpha } + M_\eta + \frac{(C_\alpha +\eta )^p \alpha ^p\varepsilon ^{\alpha +2}}{(d(x)-\delta )^{\alpha +2}}\left|D d(x)\right|^p - f(x) \\&- \frac{(C_\alpha +\eta )\alpha (\alpha +1)\varepsilon ^{\alpha +2}}{(d(x)-\delta )^{\alpha +2}}\left|D d(x)\right|^2 + \frac{(C_\alpha +\eta )\alpha \varepsilon ^{\alpha +2}}{(d(x)-\delta )^{\alpha +1}}\varDelta d(x)\\ \ge&M_\eta - f(x) \\&+ \underbrace{\frac{\nu C_\alpha \alpha (\alpha +1)\varepsilon ^{\alpha +2}}{(d(x)-\delta )^{\alpha +2}}\left[ \nu ^{p-1}\left|Dd(x)\right|^{p} - \left|Dd(x)\right|^2 + \frac{(d(x)-\delta )\varDelta d(x)}{\alpha +1}\right] }_{I}, \end{aligned}$$

where we use \((C_\alpha \alpha )^p = C_\alpha \alpha (\alpha +1)\) and \(\nu = \frac{C_\alpha +\eta }{C_\alpha } \in (1,2)\) for small \(\eta \). Let

$$\begin{aligned} \delta _\eta : = \frac{\alpha +1}{K_2}\left[ \nu ^{p-1}-1\right] \end{aligned}$$

and note that \(\delta _\eta \rightarrow 0\) as \(\eta \rightarrow 0\). To get \({\mathcal {L}}^\varepsilon \left[ {\overline{w}}_{\eta ,\delta }\right] \ge 0\), there are two cases to consider, depending on the size of \(d(x)-\delta \).

  • If \(0< d(x)-\delta<\delta _\eta < \delta _0\) for \(\eta \) small and fixed, then \(\left|Dd(x)\right|= 1\), and thus \(I\ge 0\) (see the computation right after this list). Hence, \({\mathcal {L}}^\varepsilon \left[ {\overline{w}}_{\eta ,\delta }\right] \ge 0\) if we choose \(M_\eta \ge \max _{{\overline{\varOmega }}} f\).

  • If \(d(x)-\delta \ge \delta _\eta \), then

    $$\begin{aligned} I \le \left( \frac{1}{\delta _\eta }\right) ^{\alpha +2}\nu C_\alpha \alpha (\alpha +1)\left[ \nu ^{p-1}K_1^{p}+K_1^2+K_2K_0\right] \varepsilon ^{\alpha +2}. \end{aligned}$$

    Thus, we can choose \(M_\eta = \max _{{\overline{\varOmega }}} f + C\varepsilon ^{\alpha +2}\) for C large enough (depending on \(\eta \)) so that \({\mathcal {L}}^\varepsilon \left[ {\overline{w}}_{\eta ,\delta }\right] \ge 0\).
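
To see why \(I\ge 0\) in the first case (here we read the constants from the main text as \(K_2\) being an upper bound for \(\left|\varDelta d\right|\) on the region where d is smooth): since \(\left|Dd(x)\right|=1\) there, the bracket in I satisfies

$$\begin{aligned} \nu ^{p-1}\left|Dd(x)\right|^{p} - \left|Dd(x)\right|^2 + \frac{(d(x)-\delta )\varDelta d(x)}{\alpha +1} \ge \nu ^{p-1} - 1 - \frac{\delta _\eta K_2}{\alpha +1} = 0 \end{aligned}$$

by the choice of \(\delta _\eta \), while the factor multiplying the bracket in I is non-negative.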

Therefore, \({\overline{w}}_{\eta ,\delta }\) is a supersolution in \(\varOmega _\delta \).

Similarly, we have

$$\begin{aligned}&{\mathcal {L}}_\varepsilon \left[ {\underline{w}}_{\eta ,\delta }\right] (x) \\&\quad = \frac{ (C_\alpha - \eta )\varepsilon ^{\alpha +1}}{(d(x)+\delta )^\alpha } - M_\eta + \frac{(C_\alpha -\eta )^p \alpha ^p\varepsilon ^{\alpha +2}}{(d(x)+\delta )^{\alpha +2}}\left|D d(x)\right|^p - f(x) \\&\qquad - \frac{(C_\alpha -\eta )\alpha (\alpha +1)\varepsilon ^{\alpha +2}}{(d(x)+\delta )^{\alpha +2}}\left|D d(x)\right|^2 + \frac{(C_\alpha -\eta )\alpha \varepsilon ^{\alpha +2}}{(d(x)+\delta )^{\alpha +1}}\varDelta d(x)\\&\quad = - M_\eta - f(x) \\&\qquad +\underbrace{\frac{\nu C_\alpha \alpha (\alpha +1)\varepsilon ^{\alpha +2}}{(d(x)+\delta )^{\alpha +2}}\left[ \nu ^{p-1}\left|Dd(x)\right|^{p} - \left|Dd(x)\right|^2 + \frac{(d(x)+\delta )\varDelta d(x)}{\alpha +1}+\frac{(d(x)+\delta )^2}{\alpha (\alpha +1)\varepsilon }\right] }_{J}, \end{aligned}$$

where \(\nu = \frac{C_\alpha -\eta }{C_\alpha }\in (0,1)\) for small \(\eta \). Let

$$\begin{aligned} \delta _\eta := \left( 1-\nu ^{p-1}\right) \left( \frac{\alpha (\alpha +1)\varepsilon }{1+K_2\alpha \varepsilon }\right) \end{aligned}$$

and note that \(\delta _\eta \rightarrow 0\) as \(\eta \rightarrow 0\). To obtain \({\mathcal {L}}^\varepsilon \left[ {\underline{w}}_{\eta ,\delta }\right] \le 0\), there are two cases to consider, depending on the size of \(d(x)+\delta \).

  • If \(0<d(x)+\delta<\delta _\eta < \delta _0\) for \(\eta \) small and fixed, then \(\left|Dd(x)\right|= 1\), and thus \(J\le 0\). Hence, \({\mathcal {L}}^\varepsilon \left[ {\underline{w}}_{\eta ,\delta }\right] \le 0\) if we choose \(M_\eta \ge -\max _{\varOmega }f\).

  • If \(d(x)+\delta \ge \delta _\eta \), then

    $$\begin{aligned} \left|J\right|\le \left( \frac{1}{\delta _\eta }\right) ^{\alpha +2} \nu C_\alpha \alpha (\alpha +1)\left[ \nu ^{p-1}K_1^{p}+K_1^2 + \frac{(K_0+1)K_2}{\alpha +1} + \frac{(K_0+1)^2}{\alpha (\alpha +1)\varepsilon }\right] \varepsilon ^{\alpha +2} \end{aligned}$$

    Thus, we can choose \(M_\eta = -\max _{{\overline{\varOmega }}} f - C\varepsilon ^{\alpha +2}\) for C large enough (depending on \(\eta \)) so that \({\mathcal {L}}^\varepsilon \left[ {\underline{w}}_{\eta ,\delta }\right] \le 0\).

Therefore, \({\underline{w}}_{\eta ,\delta }\) is a subsolution in \(\varOmega ^\delta \).

For \(p=2\), we use the ansatz \(u(x) = -C_\varepsilon \log (d(x))\) instead. Similarly to the previous case, one finds \(C_\varepsilon = \varepsilon \), that is, \(u(x) = -\varepsilon \log (d(x))\); see the computation below. For \(0<\delta <\frac{1}{2}\delta _0\), define

$$\begin{aligned} \begin{aligned}&{\overline{w}}_{\eta ,\delta }(x) = -(1+\eta )\varepsilon \log (d(x)-\delta ) + M_\eta , \qquad x\in \varOmega _\delta ,\\&{\underline{w}}_{\eta ,\delta }(x) = -(1-\eta )\varepsilon \log (d(x)+\delta ) - M_\eta , \qquad x\in \varOmega ^\delta , \end{aligned} \end{aligned}$$

where \(M_\eta \) is to be chosen so that \({\overline{w}}_{\eta ,\delta }(x)\) is a supersolution in \(\varOmega _\delta \) and \({\underline{w}}_{\eta ,\delta }\) is a subsolution in \(\varOmega ^\delta \). The computations are omitted here, as they are similar to the previous case.
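
To see how the value \(C_\varepsilon = \varepsilon \) arises, plug the ansatz \(u(x) = -C_\varepsilon \log (d(x))\) into (PDE\(_\varepsilon \)) with \(p=2\):

$$\begin{aligned} \left|Du(x)\right|^2 = \frac{C_\varepsilon ^2}{d(x)^{2}}\left|Dd(x)\right|^2 \qquad \text {and}\qquad \varepsilon \varDelta u(x) = \frac{\varepsilon C_\varepsilon }{d(x)^{2}}\left|Dd(x)\right|^2 - \frac{\varepsilon C_\varepsilon }{d(x)}\varDelta d(x). \end{aligned}$$

Since \(\left|Dd(x)\right|=1\) for x near \(\partial \varOmega \), the explosive terms of order \(d(x)^{-2}\) cancel exactly when \(C_\varepsilon ^2 = \varepsilon C_\varepsilon \), that is, \(C_\varepsilon = \varepsilon \).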

We divide the rest of the proof into 3 steps. We first construct a minimal solution, then a maximal solution to (PDE\(_\varepsilon \)), and finally show that they are equal to conclude the existence and the uniqueness of the solution to (PDE\(_\varepsilon \)).

Step 1 There exists a minimal solution \({\underline{u}}\in \mathrm {C}^2(\varOmega )\) of (PDE\(_\varepsilon \)) such that \(v\ge {\underline{u}}\) for any other solution \(v\in \mathrm {C}^2(\varOmega )\) solving (PDE\(_\varepsilon \)).

Proof

Let \(w_{\eta ,\delta }\in \mathrm {C}^2(\varOmega )\) solve

$$\begin{aligned} {\left\{ \begin{array}{ll} {\mathcal {L}}^\varepsilon \left[ w_{\eta ,\delta }\right] = 0 &{}\qquad \text {in}\;\varOmega ,\\ w_{\eta ,\delta } = {\underline{w}}_{\eta ,\delta } &{}\text {on}\;\partial \varOmega . \end{array}\right. } \end{aligned}$$
(71)
  • Fix \(\eta >0\). As \(\delta \rightarrow 0^+\), the boundary value of \({\underline{w}}_{\eta ,\delta }\) blows up. Therefore, by the standard comparison principle for second-order elliptic equations with Dirichlet boundary conditions, \(\delta _1 \le \delta _2\) implies \(w_{\eta ,\delta _1}\ge w_{\eta ,\delta _2}\) on \({\overline{\varOmega }}\).

  • For \(\delta '>0\), since \({\underline{w}}_{\eta ,\delta '}\) is a subsolution in \({\overline{\varOmega }}\) with finite boundary values,

    $$\begin{aligned} 0<\delta \le \delta '\qquad \Longrightarrow \qquad {\underline{w}}_{\eta ,\delta '} \le w_{\eta ,\delta '}\le w_{\eta ,\delta } \qquad \text {on}\;{\overline{\varOmega }}. \end{aligned}$$
    (72)
  • Similarly, since \({\overline{w}}_{\eta ,\delta '}\) is a supersolution on \(\varOmega _{\delta '}\) with infinite value on the boundary \(\partial \varOmega _{\delta '}\), by the comparison principle,

    $$\begin{aligned} w_{\eta ,\delta } \le {\overline{w}}_{\eta , \delta '} \qquad \text {in}\;\varOmega _{\delta '} \qquad \Longrightarrow \qquad w_{\eta ,\delta } \le {\overline{w}}_{\eta ,0} \qquad \text {in}\;\varOmega . \end{aligned}$$
    (73)

From (72) and (73), we have

$$\begin{aligned} 0<\delta \le \delta '\qquad \Longrightarrow \qquad {\underline{w}}_{\eta ,\delta '} \le w_{\eta ,\delta '}\le w_{\eta ,\delta } \le {\overline{w}}_{\eta ,0} \qquad \text {in}\;\varOmega . \end{aligned}$$
(74)

Thus, \(\{w_{\eta ,\delta }\}_{\delta >0}\) is bounded in \(L^{\infty }_{\mathrm {loc}}(\varOmega )\) (and uniformly bounded from below). Using the local gradient estimate for \(w_{\eta ,\delta }\) solving (71), we deduce that \(\{w_{\eta ,\delta }\}_{\delta >0}\) is bounded in \(W^{1,\infty }_{\mathrm {loc}}(\varOmega )\). Since \(w_{\eta ,\delta }\) solves (71), we further have that \(\{w_{\eta ,\delta }\}_{\delta >0}\) is bounded in \(W^{2,r}_{\mathrm {loc}}(\varOmega )\) for all \(r<\infty \) by the Calderón–Zygmund estimates.

Local boundedness of \(\{w_{\eta ,\delta }\}_{\delta >0}\) in \(W^{2,r}_{\mathrm {loc}}(\varOmega )\) implies weak\(^*\) compactness, that is, there exists a function \(u\in W^{2,r}_{\mathrm {loc}}(\varOmega )\) such that (via subsequence and monotonicity)

$$\begin{aligned} w_{\eta ,\delta } \rightharpoonup u \qquad \text {weakly in}\;W^{2,r}_{\mathrm {loc}}(\varOmega ),\qquad \text {and}\qquad w_{\eta ,\delta } \rightarrow u \qquad \text {strongly in}\;W^{1,r}_{\mathrm {loc}}(\varOmega ). \end{aligned}$$

In particular, \(w_{\eta ,\delta }\rightarrow u\) in \(\mathrm {C}^1_{\mathrm {loc}}(\varOmega )\) thanks to Sobolev compact embedding. Let us rewrite the equation \({\mathcal {L}}^\varepsilon \left[ w_{\eta ,\delta }\right] = 0\) as \(\varepsilon \varDelta w_{\eta ,\delta }(x) = F[w_{\eta ,\delta }](x)\) for \(x \in U\subset \subset \varOmega \), where

$$\begin{aligned} F[w_{\eta ,\delta }](x) = w_{\eta ,\delta }(x) + H(x,Dw_{\eta ,\delta }(x)). \end{aligned}$$

Since \(w_{\eta ,\delta }\rightarrow u\) in \(\mathrm {C}^1(U)\) as \(\delta \rightarrow 0\), we have \(F[w_{\eta ,\delta }](x) \rightarrow F(x)\) uniformly in U as \(\delta \rightarrow 0\), where

$$\begin{aligned} F(x) = u(x) + H(x,Du(x)). \end{aligned}$$

In the limit, we obtain that u is a weak solution of \(\varepsilon \varDelta u = F\) in U. Since \(Du\) is locally Hölder continuous by the Sobolev embedding, so is F, and hence \(u\in \mathrm {C}^2(\varOmega )\) by elliptic regularity; by stability, u solves \({\mathcal {L}}^\varepsilon [u] = 0\) in \(\varOmega \). From (74), we also have

$$\begin{aligned} {\underline{w}}_{\eta ,0} \le u \le {\overline{w}}_{\eta ,0} \qquad \text {in}\;\varOmega . \end{aligned}$$

Moreover, \(u(x)\rightarrow \infty \) as \(\mathrm {dist}(x,\partial \varOmega )\rightarrow 0\), with the precise rate given in (9) or (10). Note that, by construction, u may depend on \(\eta \). Next, however, we show that u is independent of \(\eta \) by proving that u is the unique minimal solution of \({\mathcal {L}}^\varepsilon [u] = 0\) in \(\varOmega \) with \(u = +\infty \) on \(\partial \varOmega \).

Let \(v\in \mathrm {C}^{2}(\varOmega )\) be a solution to (PDE\(_\varepsilon \)). Fix \(\delta >0\). Since \(v(x)\rightarrow \infty \) as \(x\rightarrow \partial \varOmega \) while \(w_{\eta ,\delta }\) remains bounded on \(\partial \varOmega \), the comparison principle yields

$$\begin{aligned} v\ge w_{\eta ,\delta } \qquad \text {in} \; \varOmega . \end{aligned}$$

Letting \(\delta \rightarrow 0\), we deduce that \(v\ge u\) in \(\varOmega \). This shows that u is the minimal solution in \(\mathrm {C}^2(\varOmega )\), and thus u is independent of \(\eta \). \(\square \)

Step 2 There exists a maximal solution \({\overline{u}}\in \mathrm {C}^2(\varOmega )\) of (PDE\(_\varepsilon \)) such that \(v\le {\overline{u}}\) for any other solution \(v\in \mathrm {C}^2(\varOmega )\) solving (PDE\(_\varepsilon \)).

Proof

For each \(\delta >0\), let \(u_\delta \in \mathrm {C}^2(\varOmega _\delta )\) be the minimal solution to \({\mathcal {L}}^\varepsilon [u_\delta ] = 0\) in \(\varOmega _\delta \) with \(u_\delta = +\infty \) on \(\partial \varOmega _\delta \). By the comparison principle, for every \(\eta >0\), there holds

$$\begin{aligned} {\underline{w}}_{\eta ,\delta } \le u_\delta \le {\overline{w}}_{\eta ,\delta } \qquad \text {in}\;\varOmega _\delta , \end{aligned}$$

and

$$\begin{aligned} 0<\delta <\delta ' \qquad \Longrightarrow \qquad u_\delta \le u_{\delta '} \qquad \text {in}\;\varOmega _{\delta '}\,. \end{aligned}$$

The monotonicity, together with the local boundedness of \(\{u_\delta \}_{\delta >0}\) in \(W^{2,r}_{\mathrm {loc}}(\varOmega )\), implies that there exists \(u\in W^{2,r}_{\mathrm {loc}}(\varOmega )\) for all \(r<\infty \) such that \(u_\delta \rightarrow u\) strongly in \(\mathrm {C}^1_{\mathrm {loc}}(\varOmega )\). Using the equation \({\mathcal {L}}^\varepsilon [u_\delta ] = 0\) in \(\varOmega _\delta \) and elliptic regularity as in Step 1, we can further deduce that \(u\in \mathrm {C}^2(\varOmega )\) solves (PDE\(_\varepsilon \)) and

$$\begin{aligned} {\underline{w}}_{\eta ,0} \le u\le {\overline{w}}_{\eta ,0} \qquad \text {in}\;\varOmega \end{aligned}$$

for all \(\eta >0\). As \(u_\delta \) is independent of \(\eta \) by the previous argument in Step 1, it is clear that u is also independent of \(\eta \). Now we show that u is the maximal solution of (PDE\(_\varepsilon \)). Let \(v\in \mathrm {C}^2(\varOmega )\) solve (PDE\(_\varepsilon \)). Clearly \(v\le u_\delta \) on \(\varOmega _\delta \). Therefore, as \(\delta \rightarrow 0\), we have \(v\le u\). \(\square \)

In conclusion, we have found a minimal solution \({\underline{u}}\) and a maximal solution \({\overline{u}}\) in \(\mathrm {C}^2(\varOmega )\) such that

$$\begin{aligned} {\underline{w}}_{\eta ,0} \le {\underline{u}}\le {\overline{u}}\le {\overline{w}}_{\eta ,0} \qquad \text {in}\;\varOmega \end{aligned}$$
(75)

for any \(\eta >0\). This extra parameter \(\eta \) now enables us to show that \({\overline{u}} = {\underline{u}}\) in \(\varOmega \). The key ingredient here is the convexity of the operator in the gradient variable.

Step 3 We have \({\overline{u}}\equiv {\underline{u}}\) in \(\varOmega \). Therefore, the solution to (PDE\(_\varepsilon \)) in \(\mathrm {C}^2(\varOmega )\) is unique.

Proof

Let \(\theta \in (0,1)\). Define \(w_\theta = \theta {\overline{u}} + (1-\theta ) \inf _{\varOmega } f\). It can be verified that \(w_\theta \) is a subsolution to (PDE\(_\varepsilon \)); the short computation is given at the end of this step. Then one may argue that, by the comparison principle,

$$\begin{aligned} w_\theta = \theta {\overline{u}} + (1-\theta )\inf _{\varOmega } f\le {\underline{u}} \qquad \text {in}\;\varOmega , \end{aligned}$$

and conclude that \({\overline{u}} \le {\underline{u}}\) by letting \(\theta \rightarrow 1\). But we have to be careful here. As they are both explosive solutions, to use the comparison principle, we need to show that \(w_\theta \le {\underline{u}}\) in a neighborhood of \(\partial \varOmega \). From (75), we see that

$$\begin{aligned}&1\le \frac{{\overline{u}}(x)}{{\underline{u}}(x)} \le \frac{{\overline{w}}_{\eta ,0}(x)}{{\underline{w}}_{\eta ,0}(x)} = \frac{(C_\alpha +\eta )+ M_\eta d(x)^\alpha }{(C_\alpha -\eta )- M_\eta d(x)^\alpha },&1<p<2,\\&1 \le \frac{{\overline{u}}(x)}{{\underline{u}}(x)} \le \frac{{\overline{w}}_{\eta ,0}(x)}{{\underline{w}}_{\eta ,0}(x)} = \frac{-(1+\eta )\log (d(x)) + M_\eta }{-(1-\eta )\log (d(x))-M_\eta },&p=2, \end{aligned}$$

for \(x\in \varOmega \). Hence,

$$\begin{aligned}&1\le \lim _{d(x)\rightarrow 0} \left( \frac{{\overline{u}}(x)}{{\underline{u}}(x)}\right) \le \frac{C_\alpha +\eta }{C_\alpha -\eta },&1< p < 2,\\&1\le \lim _{d(x)\rightarrow 0} \left( \frac{{\overline{u}}(x)}{{\underline{u}}(x)}\right) \le \frac{-(1+\eta )}{-(1-\eta )},&p = 2. \end{aligned}$$

Since \(\eta >0\) is arbitrary, we obtain

$$\begin{aligned} \lim _{d(x)\rightarrow 0} \left( \frac{{\overline{u}}(x)}{{\underline{u}}(x)}\right) = 1. \end{aligned}$$

This means for any \(\varsigma \in (0,1)\), there exists \({\delta _1}(\varsigma )>0\) small such that

$$\begin{aligned} \frac{{\overline{u}}(x)}{{\underline{u}}(x)}\le (1+\varsigma )\Longrightarrow \left( \frac{1}{1+\varsigma }\right) {\overline{u}}(x) \le {\underline{u}}(x) \qquad \text {in}\; \varOmega \backslash \varOmega _{\delta _1}. \end{aligned}$$

For a fixed \(\theta \in (0,1)\), one can always choose \(\varsigma \) small enough so that \(\displaystyle (1+\varsigma )^{-1} \ge \frac{1+\theta }{2}\). Since \({\overline{u}}(x) \rightarrow +\infty \) as \(d(x) \rightarrow 0\), there exists \(\delta _2 > 0\) such that \({\overline{u}}(x) \ge 2 \inf _\varOmega f\) for all \(x \in \varOmega \setminus \varOmega _{\delta _2}\). Now we have

$$\begin{aligned} {\underline{u}}(x)\ge \left( \frac{1}{1+\varsigma }\right) {\overline{u}}(x) \ge \theta {\overline{u}}(x)+ \left( \frac{1-\theta }{2}\right) {\overline{u}}(x)\ge \theta {\overline{u}}(x) + (1-\theta )\left( \inf _\varOmega f\right) \end{aligned}$$

for all \(x \in \varOmega \setminus \varOmega _\delta \), where \(\delta : = \min \{\delta _1, \delta _2\}\). This implies that, for any fixed \(\theta \in (0,1)\), \(w_\theta \le {\underline{u}}\) in a neighborhood of \(\partial \varOmega \). Hence, by the comparison principle,

$$\begin{aligned} w_\theta = \theta {\overline{u}} + (1-\theta )\inf _{\varOmega } f\le {\underline{u}} \qquad \text {in}\;\varOmega , \end{aligned}$$

for any \(\theta \in (0,1)\). Then let \(\theta \rightarrow 1\) to get the conclusion.
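
For completeness, here is the computation behind the claim that \(w_\theta \) is a subsolution: writing \({\mathcal {L}}^\varepsilon [v] = v + \left|Dv\right|^p - \varepsilon \varDelta v - f\) as in the computations above, and using \({\mathcal {L}}^\varepsilon [{\overline{u}}] = 0\), we have

$$\begin{aligned} {\mathcal {L}}^\varepsilon [w_\theta ] = \theta {\overline{u}} + (1-\theta )\inf _{\varOmega } f + \theta ^p\left|D{\overline{u}}\right|^p - \varepsilon \theta \varDelta {\overline{u}} - f = \left( \theta ^p-\theta \right) \left|D{\overline{u}}\right|^p - (1-\theta )\left( f - \inf _{\varOmega } f\right) \le 0 \qquad \text {in}\;\varOmega , \end{aligned}$$

since \(\theta ^p\le \theta \) for \(\theta \in (0,1)\) and \(p>1\), and \(f\ge \inf _{\varOmega } f\).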

This finishes the proof of the well-posedness of (PDE\(_\varepsilon \)) for \(1<p\le 2\). \(\square \)

Proof of Lemma 6

The proof is a variation of Perron’s method (see [10]), and we proceed by contradiction. Suppose that there exist \(\varphi \in \mathrm {C}^1({\overline{\varOmega }})\) and \(x_0\in {\overline{\varOmega }}\) such that \(u(x_0) = \varphi (x_0)\), \(u-\varphi \) has a global strict minimum over \({\overline{\varOmega }}\) at \(x_0\), and

$$\begin{aligned} \varphi (x_0) + H(x_0,D\varphi (x_0)) < 0. \end{aligned}$$
(76)

Let \(\varphi ^\varepsilon (x) = \varphi (x) - \left|x-x_0\right|^2 + \varepsilon \) for \(x\in {\overline{\varOmega }}\). Let \(\delta > 0\). We see that for \(x\in \partial B(x_0,\delta )\cap {\overline{\varOmega }}\),

$$\begin{aligned} \varphi ^\varepsilon (x) = \varphi (x) - \delta ^2 +\varepsilon \le \varphi (x) - \varepsilon \end{aligned}$$

if \(2\varepsilon \le \delta ^2\). We observe that

$$\begin{aligned} \begin{aligned} \varphi ^\varepsilon (x) - \varphi (x_0)&= \varphi (x)-\varphi (x_0) + \varepsilon - \left|x-x_0\right|^2 \\ D\varphi ^\varepsilon (x) - D\varphi (x_0)&= D\varphi (x) - D\varphi (x_0) - 2(x-x_0) \end{aligned} \end{aligned}$$

for \(x\in B(x_0,\delta )\cap {\overline{\varOmega }}\). By the continuity of \(H(x,p)\) near \((x_0,D\varphi (x_0))\) and the fact that \(\varphi \in \mathrm {C}^1({\overline{\varOmega }})\), we can deduce from (76) that if \(\delta \) is small enough and \(0<2\varepsilon < \delta ^2\), then

$$\begin{aligned} \varphi ^\varepsilon (x)+H(x,D\varphi ^\varepsilon (x)) < 0 \qquad \text {for}\;x\in B(x_0,\delta )\cap {\overline{\varOmega }}. \end{aligned}$$
(77)

We have thus found \(\varphi ^\varepsilon \in \mathrm {C}^1({\overline{\varOmega }})\) such that \(\varphi ^\varepsilon (x_0)=u(x_0)+\varepsilon >u(x_0)\), \(\varphi ^\varepsilon \le \varphi -\varepsilon \le u-\varepsilon <u\) on \(\partial B(x_0,\delta )\cap {\overline{\varOmega }}\) (recall that \(u\ge \varphi \) on \({\overline{\varOmega }}\)), and such that (77) holds. Let

$$\begin{aligned} {\tilde{u}}(x) = {\left\{ \begin{array}{ll} \max \big \lbrace u(x),\varphi ^\varepsilon (x) \big \rbrace &{}x\in B(x_0,\delta )\cap {\overline{\varOmega }},\\ u(x)&{}x\notin B(x_0,\delta )\cap {\overline{\varOmega }}.\\ \end{array}\right. } \end{aligned}$$

We see that \({\tilde{u}}\in \mathrm {C}({\overline{\varOmega }})\) is a subsolution of (PDE\(_0\)) in \(\varOmega \) with \({\tilde{u}}(x_0) > u(x_0)\), which is a contradiction. Thus, u is a supersolution of (PDE\(_0\)) on \({\overline{\varOmega }}\). \(\square \)

A.3 Semiconcavity

We present a proof of the semiconcavity of the solution to a first-order Hamilton–Jacobi equation using the doubling variables method (see also [8]).

Theorem 19

Let \(H(x,p) = G(p)-f(x)\), where \(G:{\mathbb {R}}^n\rightarrow {\mathbb {R}}\) is convex with \(G\ge 0\) and \(G(0) = 0\), and \(f\in \mathrm {C}^2_c({\mathbb {R}}^n)\). Let \(u\in \mathrm {C}_c({\mathbb {R}}^n)\) be a viscosity solution to \(u+H(x,Du) = 0\) in \({\mathbb {R}}^n\). Then u is semiconcave, i.e., u is a viscosity solution of \(-D^2u \ge -c\;{\mathbb {I}}_n\) in \({\mathbb {R}}^n\), where

$$\begin{aligned} c = \max \big \lbrace D_{\xi \xi }f(x): \left|\xi \right|=1, x\in {\mathbb {R}}^n \big \rbrace \ge 0. \end{aligned}$$

Proof

Consider the auxiliary functional

$$\begin{aligned} \varPhi (x,y,z) = u(x)-2u(y)+u(z) - \frac{\alpha }{2}\left|x-2y+z\right|^2 - \frac{c}{2}\left|y-x\right|^2-\frac{c}{2}\left|y-z\right|^2 \end{aligned}$$

for \((x,y,z)\in {\mathbb {R}}^n\times {\mathbb {R}}^n\times {\mathbb {R}}^n\). By the a priori estimate, u is bounded and Lipschitz. Thus, we can assume \(\varPhi \) achieves its maximum over \({\mathbb {R}}^n\times {\mathbb {R}}^n\times {\mathbb {R}}^n\) at \((x_\alpha ,y_\alpha ,z_\alpha )\). The viscosity solution tests give us

$$\begin{aligned}&u(x_\alpha ) + G\big (p_\alpha +c(x_\alpha -y_\alpha )\big ) \le f(x_\alpha )\\&u(z_\alpha ) + G\big (p_\alpha +c(z_\alpha -y_\alpha )\big ) \le f(z_\alpha )\\&u(y_\alpha ) + G\left( p_\alpha +\frac{c}{2}(x_\alpha -y_\alpha ) + \frac{c}{2}(z_\alpha -y_\alpha )\right) \ge f(y_\alpha ), \end{aligned}$$

where \(p_\alpha = \alpha (x_\alpha -2y_\alpha +z_\alpha )\). By the convexity of G, we have

$$\begin{aligned} 2G\left( p_\alpha \!+\! \frac{c}{2}(x_\alpha -y_\alpha ) \!+\! \frac{c}{2}(z_\alpha -y_\alpha )\right) \!\le \! G\big (p_\alpha +c(x_\alpha -y_\alpha )\big ) \!+\! G\big (p_\alpha +c(z_\alpha -y_\alpha )\big ) \end{aligned}$$

Therefore,

$$\begin{aligned} u(x_\alpha ) - 2u(y_\alpha ) + u(z_\alpha ) \le f(x_\alpha ) - 2f(y_\alpha ) + f(z_\alpha ). \end{aligned}$$
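
For instance, the first of the three inequalities above is obtained by freezing \((y_\alpha ,z_\alpha )\): the map \(x\mapsto u(x)-\phi (x)\), with \(\phi (x) = \frac{\alpha }{2}\left|x-2y_\alpha +z_\alpha \right|^2 + \frac{c}{2}\left|y_\alpha -x\right|^2\) plus a constant, attains a maximum at \(x_\alpha \), so the subsolution property of u gives \(u(x_\alpha ) + G(D\phi (x_\alpha )) - f(x_\alpha )\le 0\), where

$$\begin{aligned} D\phi (x_\alpha ) = \alpha (x_\alpha -2y_\alpha +z_\alpha ) + c(x_\alpha -y_\alpha ) = p_\alpha + c(x_\alpha -y_\alpha ). \end{aligned}$$

The second inequality is obtained in the same way, and the third follows from the supersolution property of u at \(y_\alpha \). The displayed estimate then follows by adding the first two inequalities, subtracting twice the third, and discarding the non-negative combination of G-terms provided by the convexity inequality above.
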
  • \(\varPhi (x_\alpha ,y_\alpha ,z_\alpha )\ge \varPhi (0,0,0)\) gives us

    $$\begin{aligned} \frac{\alpha }{2}\left|x_\alpha -2y_\alpha +z_\alpha \right|^2 + \frac{c}{2}\left|y_\alpha -x_\alpha \right|^2+\frac{c}{2}\left|y_\alpha -z_\alpha \right|^2 \le C. \end{aligned}$$

    Thus, \(\left|x_\alpha -2y_\alpha +z_\alpha \right|\rightarrow 0\) and, up to a subsequence, \((x_\alpha -y_\alpha )\rightarrow h_0\) and \((y_\alpha -z_\alpha )\rightarrow h_0\) as \(\alpha \rightarrow \infty \) for some \(h_0 \in {\mathbb {R}}^n\).

  • \(\varPhi (x_\alpha ,y_\alpha ,z_\alpha )\ge \varPhi (y_\alpha +h_0,y_\alpha ,y_\alpha -h_0)\) gives us

    $$\begin{aligned} u(x_\alpha )-2u(y_\alpha ) + u(z_\alpha ) - \frac{\alpha }{2}\left|x_\alpha - 2y_\alpha +z_\alpha \right|^2 - \frac{c}{2}\left|x_\alpha -y_\alpha \right|^2 - \frac{c}{2}\left|y_\alpha -z_\alpha \right|^2 \\ \ge u(y_\alpha +h_0) - 2u(y_\alpha ) + u(y_\alpha -h_0) - c\left|h_0\right|^2. \end{aligned}$$

    Therefore, by the fact that u is Lipschitz, we have

    $$\begin{aligned} \begin{aligned} \frac{\alpha }{2}\left|x_\alpha - 2y_\alpha +z_\alpha \right|^2 \le&c\left( \frac{2\left|h_0\right|^2 - \left|x_\alpha -y_\alpha \right|^2 - \left|y_\alpha -z_\alpha \right|^2}{2}\right) \\&+ C\Big (\left|(x_\alpha -y_\alpha ) - h_0\right|+ \left|(z_\alpha - y_\alpha ) + h_0\right|\Big ) \rightarrow 0 \end{aligned} \end{aligned}$$

    as \(\alpha \rightarrow \infty \).

For any \(x, h\in {\mathbb {R}}^n\), we have \(\varPhi (x_\alpha ,y_\alpha ,z_\alpha ) \ge \varPhi (x+h,x,x-h)\), i.e.,

$$\begin{aligned}&u(x+h)-2u(x)+u(x-h)-c\left|h\right|^2\\&\quad \le f(x_\alpha )-2f(y_\alpha ) + f(z_\alpha ) \\&\qquad - \frac{\alpha }{2}\left|x_\alpha - 2y_\alpha +z_\alpha \right|^2-\frac{c}{2}\left|y_\alpha -x_\alpha \right|^2-\frac{c}{2}\left|y_\alpha -z_\alpha \right|^2. \end{aligned}$$

If \(\{y_\alpha \}\) is unbounded, then along a subsequence \(\left|y_\alpha \right|\rightarrow \infty \), and since \(f\in \mathrm {C}_c^2({\mathbb {R}}^n)\), we have \(f(y_\alpha )\rightarrow 0\). Because \(x_\alpha -y_\alpha \) and \(z_\alpha -y_\alpha \) remain bounded, \(\left|x_\alpha \right|, \left|z_\alpha \right|\rightarrow \infty \) as well, and thus \(f(x_\alpha )-2f(y_\alpha ) + f(z_\alpha )\rightarrow 0\) as \(\alpha \rightarrow \infty \). Therefore,

$$\begin{aligned} u(x+h)-2u(x)+u(x-h)-c\left|h\right|^2 \le 0. \end{aligned}$$

If \(\{y_\alpha \}\) is bounded, then, up to a subsequence, \(y_\alpha \rightarrow y_0\) for some \(y_0 \in {\mathbb {R}}^n\) as \(\alpha \rightarrow \infty \). Thus,

$$\begin{aligned}&u(x+h)-2u(x)+u(x-h)-c\left|h\right|^2\\&\quad \le f(y_0+h_0)- 2f(y_0) + f(y_0-h_0) -c\left|h_0\right|^2. \end{aligned}$$

Write \(\xi = h_0\). Then we have

$$\begin{aligned} {\left\{ \begin{array}{ll} f(y_0+h_0) - f(y_0) = \displaystyle \int _0^1 D_x f(y_0 + t\xi )\cdot \xi dt, \\ f(y_0) - f(y_0-h_0) = \displaystyle \int _0^1 D_x f(y_0 - \xi + t\xi )\cdot \xi dt. \end{array}\right. } \end{aligned}$$

Therefore,

$$\begin{aligned} \begin{aligned} f(y_0+h_0) - 2f(y_0) + f(y_0-h_0)&= \int _0^1 \Big (D_x f(y_0 + t\xi ) - D_x f(y_0 - \xi + t\xi ) \Big )\cdot \xi dt\\&= \int _0^1 \int _0^1 \xi ^{\mathsf {T}} D^2 f(y_0-\xi +t\xi +s \xi ) \xi \;dsdt. \end{aligned} \end{aligned}$$

This implies

$$\begin{aligned} f(y_0+h_0) - 2f(y_0) + f(y_0-h_0)\le c\left|h_0\right|^2. \end{aligned}$$

Hence,

$$\begin{aligned} u(x+h)-2u(x)+u(x-h)-c\left|h\right|^2 \le 0 \end{aligned}$$

and thus u is semiconcave. Indeed, if \(\varphi \) is smooth and \(u-\varphi \) has a local minimum at x, then for all small h we have \(\varphi (x+h)-2\varphi (x)+\varphi (x-h)\le u(x+h)-2u(x)+u(x-h)\le c\left|h\right|^2\); taking \(h = t\xi \) with \(\left|\xi \right|=1\) and letting \(t\rightarrow 0\) gives \(\xi ^{\mathsf {T}}D^2\varphi (x)\xi \le c\), i.e., \(-D^2\varphi (x)\ge -c\;{\mathbb {I}}\). \(\square \)

Cite this article

Han, Y., Tu, S.N.T. Remarks on the Vanishing Viscosity Process of State-Constraint Hamilton–Jacobi Equations. Appl Math Optim 86, 3 (2022). https://doi.org/10.1007/s00245-022-09874-z
