Abstract
Here, we study the large-time limit of viscosity solutions of the Cauchy problem for second-order Hamilton–Jacobi–Bellman equations with convex Hamiltonians in the torus. This large-time limit solves the corresponding stationary problem, sometimes called the ergodic problem. This problem, however, has multiple viscosity solutions and, thus, a key question is which of these solutions is selected by the limit. Here, we provide a representation for the viscosity solution to the Cauchy problem in terms of generalized holonomic measures. Then, we use this representation to characterize the large-time limit in terms of the initial data and generalized Mather measures. In addition, we establish various results on generalized Mather measures and duality theorems that are of independent interest.
Data availability
We have no data associated with this submission.
References
Amann, H., Crandall, M.G.: On some existence theorems for semi-linear elliptic equations. Indiana Univ. Math. J. 27(5), 779–790 (1978)
Armstrong, S.N., Tran, H.V.: Viscosity solutions of general viscous Hamilton-Jacobi equations. Math. Ann. 361(3–4), 647–687 (2015)
Barles, G., Souganidis, P.E.: On the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 31(4), 925–939 (2000)
Barles, G., Souganidis, P.E.: Space-time periodic solutions and long-time behavior of solutions to quasi-linear parabolic equations. SIAM J. Math. Anal. 32(6), 1311–1323 (2001)
Benamou, J.-D., Brenier, Y.: A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math. 84(3), 375–393 (2000)
Bernard, P., Buffoni, B.: Optimal mass transportation and Mather theory. J. Eur. Math. Soc. (JEMS) 9(1), 85–121 (2007)
Cagnetti, F., Gomes, D., Mitake, H., Tran, H.: A new method for large time behavior of degenerate viscous Hamilton-Jacobi equations with convex Hamiltonians. Ann. Inst. H. Poincaré Anal. Non Linéaire 32(1), 183–200 (2015)
Camilli, F., Capuzzo-Dolcetta, I., Gomes, D.: Error estimates for the approximation of the effective Hamiltonian. Appl. Math. Optim. 57(1), 30–57 (2008)
Cannarsa, P., Cardaliaguet, P.: Hölder estimates in space-time for viscosity solutions of Hamilton-Jacobi equations. Commun. Pure Appl. Math. 63(5), 590–629 (2010)
Capuzzo-Dolcetta, I., Leoni, F., Porretta, A.: Hölder estimates for degenerate elliptic equations with coercive Hamiltonians. Trans. Am. Math. Soc. 362(9), 4511–4536 (2010)
Cardaliaguet, P., Silvestre, L.: Hölder continuity to Hamilton-Jacobi equations with superquadratic growth in the gradient and unbounded right-hand side. Commun. Partial Differ. Equ. 37(9), 1668–1688 (2012)
Crandall, M.G., Kocan, M., Soravia, P., Swiech, A.: On the equivalence of various weak notions of solutions of elliptic PDEs with measurable ingredients. In: Progress in elliptic and parabolic partial differential equations (Capri, 1994), vol 350. Pitman Research Notes in Mathematics Series, pp. 136–162. Longman, Harlow (1996)
Davini, A., Siconolfi, A.: A generalized dynamical approach to the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 38(2), 478–502 (electronic) (2006)
Dweik, S., Ghoussoub, N., Kim, Y.-H., Palmer, A.Z.: Stochastic optimal transport with free end time. Preprint: arXiv:1909.04814 (2019)
Evans, L.C.: Adjoint and compensated compactness methods for Hamilton-Jacobi PDE. Arch. Ration. Mech. Anal. 197(3), 1053–1088 (2010)
Fathi, A.: Sur la convergence du semi-groupe de Lax-Oleinik. C. R. Acad. Sci. Paris Sér. I Math. 327, 267–270 (1998)
Fleming, W., Vermes, D.: Generalized solutions in the optimal control of diffusions. In: Stochastic differential systems, stochastic control theory and applications (Minneapolis, Minn., 1986), vol 10. IMA Volumes in Mathematics and Applications, pp. 119–127. Springer, New York (1988)
Fleming, W., Vermes, D.: Convex duality approach to the optimal control of diffusions. SIAM J. Control Optim. 27(5), 1136–1155 (1989)
Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice-Hall, Inc., Englewood Cliffs (1964)
Gomes, D.: A stochastic analogue of Aubry-Mather theory. Nonlinearity 15(3), 581–603 (2002)
Gomes, D.: Duality principles for fully nonlinear elliptic equations. In: Trends in partial differential equations of mathematical physics, vol 61. Progress in Nonlinear Differential Equations Applications, pp. 125–136. Birkhäuser, Basel (2005)
Gomes, D.: Generalized Mather problem and selection principles for viscosity solutions and Mather measures. Adv. Calc. Var. 1(3), 291–307 (2008)
Gomes, D., Valdinoci, E.: Duality theory, representation formulas and uniqueness results for viscosity solutions of Hamilton-Jacobi equations. In: Dynamics, games and science. II, vol. 2. Springer Proceedings in Mathematics, pp. 361–386. Springer, Heidelberg (2011)
Ichihara, N., Ishii, H.: Long-time behavior of solutions of Hamilton-Jacobi equations with convex and coercive Hamiltonians. Arch. Ration. Mech. Anal. 194(2), 383–419 (2009)
Ishii, H., Mitake, H., Tran, H.V.: The vanishing discount problem and viscosity Mather measures. Part 1: The problem on a torus. J. Math. Pures Appl. (9) 108(2), 125–149 (2017)
Lasry, J.-M., Lions, P.-L.: A remark on regularization in Hilbert spaces. Israel J. Math. 55(3), 257–266 (1986)
Lewis, R.M., Vinter, R.B.: Relaxation of optimal control problems to equivalent convex programs. J. Math. Anal. Appl. 74(2), 475–493 (1980)
Ley, O., Nguyen, V.D.: Large time behavior for some nonlinear degenerate parabolic equations. J. Math. Pures Appl. (9) 102(2), 293–314 (2014)
Mikami, T.: Marginal problem for semimartingales via duality. Gakuto Int. Ser. Math. Sci. Appl. 30, 133–152 (2008)
Mikami, T.: Two end points marginal problem by stochastic optimal transportation. SIAM J. Control Optim. 53(4), 2449–2461 (2015)
Mikami, T.: Stochastic optimal transport revisited. Preprint: arXiv:2003.11811v2 (2020)
Mikami, T., Thieullen, M.: Duality theorem for the stochastic optimal control problem. Stochastic Process. Appl. 116(12), 1815–1835 (2006)
Mitake, H., Tran, H.: Dynamical properties of Hamilton-Jacobi equations via the nonlinear adjoint method: large time behavior and discounted approximation. In: Dynamical and geometric aspects of Hamilton-Jacobi and linearized Monge-Ampère equations—VIASM 2016, vol. 2183. Lecture Notes in Mathematics, pp. 125–228. Springer, Cham (2017)
Mitake, H., Tran, H.: Selection problems for a discount degenerate viscous Hamilton-Jacobi equation. Adv. Math. 306, 684–703 (2017)
Mitake, H., Tran, H.: On uniqueness sets of additive eigenvalue problems and applications. Proc. Am. Math. Soc. 146(11), 4813–4822 (2018)
Otto, F., Villani, C.: Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality. J. Funct. Anal. 173(2), 361–400 (2000)
Tran, H.: Adjoint methods for static Hamilton-Jacobi equations. Calc. Var. Partial Differ. Equ. 41(3–4), 301–319 (2011)
Villani, C.: Topics in optimal transportation, vol. 58. Graduate Studies in Mathematics. American Mathematical Society, Providence (2003)
Additional information
Communicated by Giga.
The work of DG was partially supported by KAUST baseline funds and KAUST OSR-CRG2017-3452. The work of HM was partially supported by the JSPS grants: KAKENHI #19K03580, #19H00639, #17KK0093, #20H01816. The work of HT was partially supported by NSF grant DMS-1664424 and NSF CAREER grant DMS-1843320.
Appendices
Approximations of solutions to Problem 1
Let \(\theta \in C_c^\infty (\mathbb {R}^{n},[0,\infty ))\) and \(\rho \in C_c^\infty (\mathbb {R},[0,\infty ))\) be symmetric standard mollifiers; that is, \({\text {supp}}\theta \subset \overline{B}(0,1)\subset \mathbb {R}^{n}\), \({\text {supp}}\rho \subset \overline{B}(0,1)\subset \mathbb {R}\), \(\theta (x)=\theta (-x)\), \(\rho (s)=\rho (-s)\), and \(\Vert \theta \Vert _{L^{1}(\mathbb {R}^{n})}=\Vert \rho \Vert _{L^{1}(\mathbb {R})}=1\). For \(\alpha >0\), set \(\theta ^\alpha (x):=\alpha ^{-n} \theta (\alpha ^{-1}x)\) for \(x\in \mathbb {R}^{n}\), and \(\rho ^{\alpha }(t):=\alpha ^{-1} \rho (\alpha ^{-1}t)\) for \(t\in \mathbb {R}\). For \(w \in C(\mathbb {T}^n \times [0,\infty ))\), let \(w^{\alpha }\in C^\infty (\mathbb {T}^n\times [\alpha ,\infty ))\) be the space-time mollification of w given by (A.1).
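For concreteness, and under the convention that w is extended periodically in x, the mollification in (A.1) is presumably the standard space-time convolution
$$\begin{aligned} w^{\alpha }(x,t)=\big (\rho ^{\alpha }\theta ^{\alpha }*w\big )(x,t)=\int _{\mathbb {R}}\int _{\mathbb {R}^{n}}\rho ^{\alpha }(s)\,\theta ^{\alpha }(y)\,w(x-y,t-s)\,dy\,ds, \end{aligned}$$
which is well defined for \(t\geqslant \alpha \) because \({\text {supp}}\rho ^{\alpha }\subset [-\alpha ,\alpha ]\).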
Proposition A.1
Let w be a Lipschitz solution to (1.1). For \(0<\alpha <1\), let \(w^\alpha \) be as in (A.1). Then, there exists \(C>0\) depending on H, a, and the Lipschitz constant of w such that
We give only a brief outline of the proof. For a detailed proof, see [33, 34].
Proof outline
To obtain the inequality in (A.2), we seek to rewrite its left-hand side in terms of the convolution of the left-hand side in (1.1) with \( \rho ^\alpha \theta ^\alpha \). We will handle each of the terms separately. The first term, \(w^\alpha _t\), is trivial. For the last term, \(H(x,Dw^\alpha )\), we observe that Jensen’s inequality gives, for \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\),
Thus, the term \(H(x,Dw^{\alpha }(x,t))\) is controlled by the corresponding term in (1.1) convolved with \( \rho ^\alpha \theta ^\alpha \) and an error term bounded by \(C \alpha \).
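A sketch of this step, assuming only the convexity of \(H(x,\cdot )\) and the Lipschitz continuity of H in x on the compact set \(\{|p|\leqslant \Vert Dw\Vert _{L^{\infty }}\}\), is
$$\begin{aligned} H(x,Dw^{\alpha }(x,t))&\leqslant \int _{\mathbb {R}}\int _{\mathbb {R}^{n}}\rho ^{\alpha }(s)\theta ^{\alpha }(y)\,H(x,Dw(x-y,t-s))\,dy\,ds\\&\leqslant \int _{\mathbb {R}}\int _{\mathbb {R}^{n}}\rho ^{\alpha }(s)\theta ^{\alpha }(y)\,H(x-y,Dw(x-y,t-s))\,dy\,ds+C\alpha , \end{aligned}$$
where the first inequality is Jensen's inequality applied to \(p\mapsto H(x,p)\) and the probability measure \(\rho ^{\alpha }(s)\theta ^{\alpha }(y)\,dy\,ds\), and the second inequality uses \(|H(x,p)-H(x-y,p)|\leqslant C|y|\leqslant C\alpha \) on this compact set.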
The second term, \( a(x)\Delta w^\alpha \), is where the main difficulty of the estimate lies. Because w is Lipschitz, using equation (1.1), we have
in the viscosity sense.
Because of the simple structure of a, we see further that \(\Vert a\Delta w\Vert _{L^{\infty }(\mathbb {T}^n \times [0,\infty ))} \leqslant C\), and w is a subsolution to (1.1) and (A.3) in the distributional sense. We need to control the commutation term,
where
To complete the proof, we show that \(|R^\alpha (x,t)| \leqslant C \alpha ^{1/2}\) for all \((x,t) \in \mathbb {T}^n \times [0,\infty )\). We consider two cases: (i) \(\min _{\overline{B(x,\alpha )}}\, a\leqslant \alpha \), and (ii) \(\min _{\overline{B(x,\alpha )}}\, a>\alpha \).
In case (i), there exists \({\bar{x}} \in \overline{B(x,\alpha )}\) such that \(a({\bar{x}}) \leqslant \alpha \). Then, there exists a constant \(C>0\) such that,
See [7, Lemma 2.6] for example. For any \(z\in \overline{B(x,\alpha )}\),
Moreover, by using Taylor’s expansion,
We use the two above inequalities to control \(R^\alpha (x,t)\) as
Now, we consider case (ii); that is, \(\min _{\overline{B(x,\alpha )}}\, a>\alpha \). A direct computation shows that
Combining these estimates yields the conclusion. \(\square \)
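The elementary fact behind case (i) is, presumably, a Glaeser-type inequality: for any nonnegative \(a\in C^{2}(\mathbb {T}^n)\),
$$\begin{aligned} |Da(x)|^{2}\leqslant 2\,\Vert D^{2}a\Vert _{L^{\infty }(\mathbb {T}^n)}\,a(x)\qquad \text { for all } x\in \mathbb {T}^n, \end{aligned}$$
so that \(\sqrt{a}\) is Lipschitz continuous. Combined with \(a({\bar{x}})\leqslant \alpha \), this gives \(a(z)\leqslant C\alpha \) and \(|Da(z)|\leqslant C\alpha ^{1/2}\) for all \(z\in \overline{B(x,\alpha )}\), which is consistent with the \(C\alpha ^{1/2}\) bound on \(R^\alpha \) sought above.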
The general diffusion matrix case
Now, we consider the case of a general diffusion matrix A. Let \(\mathbb {S}^n\) be the space of \(n \times n\) real symmetric matrices. Let \(A:\mathbb {T}^n\rightarrow \mathbb {S}^n\) be a nonnegative definite diffusion matrix; that is, \( \xi ^T A(x)\xi \geqslant 0\) for all \(\xi \in {\mathbb {R}}^n\) and \(x\in \mathbb {T}^n\). Assume further that \(A\in C^2(\mathbb {T}^n, \mathbb {S}^n)\). Throughout this section, we suppose that Assumptions 1–3 hold, and we replace (1.1) in Problem 1 with the following general Hamilton–Jacobi–Bellman equation
where Du and \(D^2 u\), respectively, denote the spatial gradient and Hessian of u, and with initial data
We now extend the results in Theorems 1.1–1.2 to this setting. While the statements are similar, there are several technical points that must be addressed. The main difficulty is that the analog of the approximation result in Proposition A.1 is substantially more involved. This approximation result is examined in Proposition B.12 and requires the approximate equation to be uniformly parabolic. Thus, further approximation arguments are needed in various places, including in the definition of holonomic measures.
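For orientation, and consistently with the linearization used in the proof of Lemma B.5 below, equation (B.1) can be read schematically as the degenerate viscous Hamilton–Jacobi–Bellman equation
$$\begin{aligned} u_t-{\text {tr}}\big (A(x)D^{2}u\big )+H(x,Du)=0\qquad \text { in } \mathbb {T}^n\times (0,\infty ), \end{aligned}$$
supplemented with the same initial data as in Problem 1; the regularized problem (B.4) below then adds the uniformly elliptic term \(-\varepsilon \Delta u^{\varepsilon }\).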
1.1 Representation formulas for the general case of diffusion matrices
For \(\nu _0, \nu _1\in \mathcal {P}(\mathbb {T}^n)\), \(\eta >0\), let \(\mathcal {H}^{\eta }(\nu _0,\nu _1;t_0,t_1)\) be the set of all \(\gamma \in \mathcal {R}^+({\mathbb {T}}^n\times {\mathbb {R}}^n\times [t_0,t_1])\) satisfying
and
for all \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\). Moreover, we set
Fix any \(\nu _1 \in \mathcal {P}(\mathbb {T}^n)\). It is worth emphasizing that while we do not know that \(\mathcal {H}^\eta (\nu _0,\nu _1;t_0,t_1)\ne \emptyset \) for each \(\nu _0\in \mathcal {P}(\mathbb {T}^n)\), the set \(\mathcal {H}^\eta (\nu _1;t_0,t_1)\) is always non-empty as shown in the proof of Lemma B.5.
We define \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) as follows. We say that \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) if \(\gamma \in \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\), and there exist \(C>0\), a sequence \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), \(\{\nu _1^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n)\), and \(\{\gamma ^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\) such that
We also note that \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) is non-empty for any \(\nu _1\in \mathcal {P}(\mathbb {T}^n)\) as stated in Corollary B.6. For \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\), let \(\nu ^{\gamma }\) be the unique element in \(\mathcal {P}(\mathbb {T}^n)\) such that
for all \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\). Accordingly, \(\gamma \in {\widetilde{\mathcal {H}}}(\nu ^{\gamma },\nu _1;t_0,t_1)\).
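In this notation, the constraint defining \(\mathcal {H}^{\eta }(\nu _0,\nu _1;t_0,t_1)\) is a generalized holonomy condition; a sketch of it, modeled on the identity derived in the proof of Lemma B.5, reads
$$\begin{aligned}&\int _{\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]}\Big (\varphi _t+q\cdot D\varphi +{\text {tr}}\big (A(x)D^{2}\varphi \big )+\eta \Delta \varphi \Big )\,d\gamma \\&\quad =\int _{\mathbb {T}^n}\varphi (x,t_1)\,d\nu _1(x)-\int _{\mathbb {T}^n}\varphi (x,t_0)\,d\nu _0(x) \end{aligned}$$
for every test function \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\); the measures in \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) are then obtained as weak-\(*\) limits of such \(\gamma ^{\eta _j}\).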
We now use the measures in \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) to obtain a representation formula for solutions of (B.1), which can be viewed as a generalization of Theorem 1.1.
Theorem B.1
Let u solve (B.1). Suppose that Assumptions 1–3 hold. Then, for any \(\nu \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\), we have
This theorem is proved in the next subsection.
The ergodic problem here is
As previously (cf. Remark 1), we add a constant to H, if necessary, so that \(c=0\). As in the time-dependent case, we begin by defining the sets of approximate stationary generalized holonomic measures. For each \(\eta >0\), we denote by
In a similar manner to the time-dependent case, \({\widetilde{\mathcal {H}}}\) is the set of all \(\mu \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) for which there exist \(C>0\), \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), and \(\{\mu ^{\eta _j}\}_{j\in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) such that
We consider the variational problem
A generalized Mather measure is a solution of the minimization problem in (B.3) and \(\widetilde{\mathcal {M}}\) is the set of all generalized Mather measures. Moreover, \(\mathcal {M}\) is the set of all generalized projected Mather measures; that is, the projections to \(\mathbb {T}^n\) of generalized Mather measures.
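In other words, (B.3) can be read as the linear minimization of the Lagrangian action over generalized holonomic measures, schematically
$$\begin{aligned} \min \left\{ \int _{\mathbb {T}^n\times \mathbb {R}^n}L(x,q)\,d\mu (x,q)\ :\ \mu \in {\widetilde{\mathcal {H}}}\right\} , \end{aligned}$$
and Proposition B.2 below shows, in particular, that this value is 0 under the normalization \(c=0\).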
Proposition B.2
Suppose that Assumptions 1–3 hold. Assume that the ergodic constant c for (B.2) is 0. We have
This proposition is proved at the end of the paper.
Finally, we have the following representation result, which is a generalized version of Theorem 1.2.
Theorem B.3
Suppose that Assumptions 1–3 hold. Let u solve (B.1) and \(u_\infty \) be as in (1.4). Then, for any \(\nu \in \mathcal {M}\), we have
where for \(\nu _0, \nu _1 \in \mathcal {P}(\mathbb {T}^n)\), d is the generalized Mañé critical potential connecting \(\nu _0\) to \(\nu _1\) given by
The proof of the preceding theorem is similar to that of Theorem 1.2, so we omit it here.
1.2 Proof of Theorem B.1
We prove Theorem B.1 by establishing the two inequalities in Lemmas B.4 and B.7 below.
Lemma B.4
Suppose that Assumptions 1–3 hold. Let u solve (B.1). Then, for \(\nu _1 \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\),
Proof
Note that u is globally Lipschitz continuous on \(\mathbb {T}^n\times [0,\infty )\) (see [2, Proposition 3.5], [33, Proposition 4.15] for instance) under Assumptions 1–3.
For \(\alpha , \varepsilon , \delta >0\), let \(u^{\alpha ,\varepsilon ,\delta }\) be the function given by (B.9) in Sect. B.3 below, and set
We notice that \({\tilde{u}}\in C^2(\mathbb {T}^n\times [0,\infty ))\) and it is an approximate subsolution to (B.1), as stated in Proposition B.12.
Fix \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\). By the definition of \( {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\), there exist \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), \(\{\nu _1^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n)\), and \(\{\gamma ^{\eta _j}\}\subset \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [0,t])\) such that \(\gamma ^{\eta _j}\in \mathcal {H}^{\eta _j}(\nu _1^{\eta _j}; 0,t)\) for all \(j\in \mathbb {N}\), and
Then,
Because of the definition of the Legendre transform in (1.2),
Accordingly, we have
where \(\kappa (\alpha ,\eta ,\delta ,\varepsilon )\) is defined by (B.11) in Sect. B.3 below. By taking a subsequence if necessary, we have
for some \(\nu ^{\gamma }\in \mathcal {P}(\mathbb {T}^n)\) weakly in the sense of measures.
Notice that if we send \(\alpha \rightarrow 0\), \(j\rightarrow \infty \), and \(\varepsilon \rightarrow 0\) in this order, then we have
Thus, we obtain
Because \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\) is arbitrary, the statement follows. \(\square \)
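For the reader's convenience, the duality step used above is the Fenchel–Young inequality, which follows directly from the definition (1.2) of the Legendre transform,
$$\begin{aligned} p\cdot q\leqslant H(x,p)+L(x,q)\qquad \text { for all } x\in \mathbb {T}^n,\ p,q\in \mathbb {R}^n, \end{aligned}$$
applied with \(p=D{\tilde{u}}(x,t)\) and integrated against \(\gamma ^{\eta _j}\).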
To prove the opposite bound, we regularize (B.1) as in Problem 3. That is, for \(\varepsilon >0\), we find \(u^\varepsilon :{\mathbb {T}}^n\times [0,\infty )\rightarrow {\mathbb {R}}\) solving
If Assumptions 1–3 hold, (B.4) has a unique solution, \(u^\varepsilon \in C^2({\mathbb {T}}^n\times [0,+\infty ))\). Moreover, \(u^\varepsilon \) is Lipschitz continuous uniformly in \(\varepsilon \in (0,1)\). Further, by the standard viscosity solution theory, \(u^\varepsilon \rightarrow u\) locally uniformly, as \(\varepsilon \rightarrow 0\), where u solves (B.1).
Now, we use the nonlinear adjoint method (see [15, 37]) to construct measures that satisfy an approximate holonomy condition.
Lemma B.5
For \(\varepsilon >0\), let \(u^\varepsilon \) solve (B.4). For any \(\nu _1\in \mathcal {P}({\mathbb {T}}^n)\), there exist a probability measure \(\nu _0^{\varepsilon }\in \mathcal {P}({\mathbb {T}}^n)\) and \(\gamma ^\varepsilon \in \mathcal {H}^{\varepsilon }(\nu _0^{\varepsilon },\nu _1;t_0,t_1)\). Moreover, \(q=D_pH(x, Du^\varepsilon (x,t))\) \(\gamma ^\varepsilon \)-almost everywhere.
Proof
For \(\varphi \in C^2({\mathbb {T}}^n\times [t_0,t_1])\), the linearization of (B.4) around the solution, \(u^\varepsilon \), is
where we use Einstein’s summation convention. Accordingly, the corresponding adjoint equation is the Fokker–Planck equation
By standard properties of the Fokker–Planck equation,
Next, for each \(\varepsilon >0\) and \(t\in [t_0,t_1]\), let \(\beta ^{\varepsilon }_t \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) be the probability measure determined by
for all \(\psi \in C_c(\mathbb {T}^n \times \mathbb {R}^n)\). For \(t\in [t_0,t_1]\), let \(\gamma ^{\varepsilon }_t \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) be the pullback of \(\beta ^{\varepsilon }_t\) by the map \(\Phi (x,q)=(x,D_q L(x,q))\); that is,
for all \(\psi \in C_c(\mathbb {T}^n \times \mathbb {R}^n)\).
Define the measures \(\beta ^\varepsilon , \gamma ^\varepsilon \in \mathcal {R}(\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1])\) by
for any \(f\in C_c(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\).
Multiplying the first equation in (B.5) by an arbitrary function, \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\), and integrating on \(\mathbb {T}^n\), we gather
Next, integrating on \([t_0,t_1]\), we deduce the identity
where \(d\nu _0^{\varepsilon }:=\sigma ^\varepsilon (x,t_0)\,dx\), which implies \(\gamma ^\varepsilon \in \mathcal {H}^\varepsilon (\nu _0^{\varepsilon }, \nu _1;t_0,t_1)\). \(\square \)
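To fix ideas, a schematic form of the adjoint problem in (B.5), consistent with the computation above and with the nonlinear adjoint method of [15, 37], is the terminal-value Fokker–Planck equation
$$\begin{aligned} -\sigma ^{\varepsilon }_t-{\text {div}}\big (D_pH(x,Du^{\varepsilon })\,\sigma ^{\varepsilon }\big )-\partial ^{2}_{x_ix_j}\big ((A_{ij}(x)+\varepsilon \delta _{ij})\,\sigma ^{\varepsilon }\big )=0\quad \text { in } \mathbb {T}^n\times (t_0,t_1),\qquad \sigma ^{\varepsilon }(\cdot ,t_1)=\nu _1, \end{aligned}$$
with summation over repeated indices and the terminal condition understood in the sense of measures, so that \(\sigma ^{\varepsilon }(\cdot ,t)\geqslant 0\) and \(\int _{\mathbb {T}^n}\sigma ^{\varepsilon }(x,t)\,dx=1\) for every \(t\in [t_0,t_1]\). In this reading, \(\beta ^{\varepsilon }_t\) is the push-forward of \(\sigma ^{\varepsilon }(\cdot ,t)\,dx\) under the map \(x\mapsto (x,Du^{\varepsilon }(x,t))\).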
Corollary B.6
Under Assumptions 1–3, for all \(0<t_0<t_1\) and \(\nu _1\in \mathcal {P}(\mathbb {T}^n)\),
Proof
Let \(\nu _0^{\varepsilon }\in \mathcal {P}({\mathbb {T}}^n)\) and \(\gamma ^\varepsilon \in \mathcal {H}^\varepsilon (\nu _0^{\varepsilon },\nu _1;t_0,t_1)\) be measures given by Lemma B.5.
Because
there exists a sequence \(\{\varepsilon _j\}\rightarrow 0\) such that
weakly in the sense of measures on \(\mathbb {T}^n\) and \(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]\), respectively. Thus, \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _0,\nu _1;t_0,t_1)\), which implies the conclusion. \(\square \)
Finally, we use Lemma B.5 to establish the opposite inequality to the one in Lemma B.4.
Lemma B.7
For any \(\nu \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\), we have
Proof
For \(s\in [0,t]\), let \(\gamma ^\varepsilon _s\) be the measure constructed in the proof of Lemma B.5 for \(t_0=0\) and \(t_1=t\). By the properties of the Legendre transform
Therefore,
for all \(s\in [0,t]\). Moreover, integrating by parts and using the adjoint equation and (B.4), we obtain
Integrating on [0, t] yields
Taking subsequences \(\{\gamma ^{\varepsilon _j}\}\) and \(\{\nu _0^{\varepsilon _j}\}\) as in (B.6) yields
Because \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _0,\nu ;0,t)\), we obtain the inequality claimed in the statement. \(\square \)
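The property of the Legendre transform invoked at the beginning of this proof is, in view of the convexity of H and the definition (1.2), the equality case of the Fenchel–Young inequality,
$$\begin{aligned} L\big (x,D_pH(x,p)\big )=p\cdot D_pH(x,p)-H(x,p)\qquad \text { for all } x\in \mathbb {T}^n,\ p\in \mathbb {R}^n, \end{aligned}$$
applied with \(p=Du^{\varepsilon }(x,s)\), so that \(q=D_pH(x,Du^{\varepsilon }(x,s))\) \(\gamma ^{\varepsilon }_s\)-almost everywhere as in Lemma B.5.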
Proof of Theorem B.1
The statement follows directly by combining Lemma B.4 with Lemma B.7. \(\square \)
1.3 Approximation and proof of Proposition B.2
We first construct an approximation of viscosity solutions of (B.1) by \(C^2\)-subsolutions of an approximate equation. We begin by recalling the definition of the sup and inf-convolutions and some of their basic properties.
Let \(w:{\mathbb {T}}^n\times [0,T]\rightarrow {\mathbb {R}}\) be a continuous function. The sup-convolution, \(w^\varepsilon \), and inf-convolution, \(w_\varepsilon \), of w with respect to x for \(\varepsilon >0\) are defined by
$$\begin{aligned} w^{\varepsilon }(x,t):=\max _{y\in \mathbb {T}^n}\left( w(y,t)-\frac{|x-y|^{2}}{2\varepsilon }\right) \quad \text { and }\quad w_{\varepsilon }(x,t):=\min _{y\in \mathbb {T}^n}\left( w(y,t)+\frac{|x-y|^{2}}{2\varepsilon }\right) . \end{aligned}$$
Proposition B.8
Let \(w\in \mathrm{Lip\,}(\mathbb {T}^n\times [0,\infty ))\), and set \(L:=\Vert Dw\Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\). We have
$$\begin{aligned} w^{\varepsilon }(x,t)=\max _{y\in \overline{B}(x,2L\varepsilon )}\left( w(y,t)-\frac{|x-y|^{2}}{2\varepsilon }\right) \quad \text { and }\quad w_{\varepsilon }(x,t)=\min _{y\in \overline{B}(x,2L\varepsilon )}\left( w(y,t)+\frac{|x-y|^{2}}{2\varepsilon }\right) . \end{aligned}$$
Moreover, \(\Vert Dw^\varepsilon \Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\le 2L\) and \(\Vert Dw_\varepsilon \Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\le 2L\).
Proof
We only give a proof for \(w^{\varepsilon }\). Take \(x,y\in \mathbb {T}^n\) so that \(2L\varepsilon <|x-y|\). Since \(w(y,t)-w(x,t)\le L|x-y|<\frac{|x-y|^2}{2\varepsilon }\), we have \(w(y,t)-\frac{|x-y|^2}{2\varepsilon }<w(x,t)\), which implies the first claim.
To get the Lipschitz estimate for \(w^\varepsilon \), for a fixed \(x\in \mathbb {T}^n\), take \(z_x\in \overline{B}(x,2L\varepsilon )\) so that \(w^\varepsilon (x,t)=w(z_x,t)-\frac{|x-z_x|^2}{2\varepsilon }\). Then,
if \(|x-y|\le C\varepsilon \) for any \(C>0\), which implies the conclusion. \(\square \)
It is well known that the inf-sup convolution \((w^{\varepsilon +\delta })_\delta \) for \(\varepsilon , \delta >0\) gives a \(C^{1,1}\) approximation of w in x (see [26] for instance).
Proposition B.9
We have \((w^{\varepsilon +\delta })_\delta \in \mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\), where we denote by \(\mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\) the set of all functions which are Lipschitz in \(t\in [0,\infty )\) and \(C^{1,1}\) in \(x\in \mathbb {T}^n\). Moreover, for each \(t>0\),
Proof
It is clear that \((w^{\varepsilon +\delta })_\delta \) is \((1/2\delta )\)-semiconcave. Because the inf and sup convolutions satisfy the semigroup property, that is, \(w^{\varepsilon +\delta }=(w^{\varepsilon })^\delta \), \(w_{\varepsilon +\delta }=(w_{\varepsilon })_\delta \) for \(\varepsilon , \delta >0\), we have
\((w^{\varepsilon +\delta })_\delta \) is \((1/2\varepsilon )\)-semiconvex in light of [12, Proposition 4.5]. Therefore, \((w^{\varepsilon +\delta })_\delta \in \mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\), and (B.7) follows. \(\square \)
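In quantitative terms, the semiconvexity and semiconcavity properties above amount to a two-sided Hessian bound, presumably the content of (B.7):
$$\begin{aligned} -\frac{1}{\varepsilon }\,I_n\leqslant D^2 (w^{\varepsilon +\delta })_\delta (\cdot ,t)\leqslant \frac{1}{\delta }\,I_n\qquad \text { almost everywhere in } \mathbb {T}^n, \end{aligned}$$
where \(I_n\) denotes the identity matrix; this is consistent with the error bound \(C\alpha \max \{1/\varepsilon ,1/\delta \}\) in Lemma B.11(i).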
Lemma B.10
Let w be a Lipschitz subsolution to (B.1). Then, there exist a nondecreasing modulus of continuity \(\omega \in C([0,\infty ))\) with \(\omega (0)=0\) and a function \(\delta _0(\varepsilon ,\eta )\), defined for \(\varepsilon , \eta >0\), such that for all \(\varepsilon , \eta >0\), all \(\delta \in (0,\delta _0(\varepsilon ,\eta )]\), and every \((x,t) \in \mathbb {T}^n \times (0,\infty )\) at which \((w^{\varepsilon +\delta })_\delta \) is twice differentiable in x and differentiable in t, we have at (x, t) the following inequality
Proof
We first notice that by standard viscosity solution theory, \(w^\varepsilon \) satisfies
in the sense of viscosity solutions for some nondecreasing function \(\omega \in C([0,\infty ))\) with \(\omega (0)=0\). Furthermore, because \(w^{\varepsilon }+|x|^2/(2\varepsilon )\) is convex, we see that \(-\Delta w^{\varepsilon }\le n/\varepsilon \) in \(\mathbb {T}^n\times (0,\infty )\) in the sense of viscosity solutions. Thus,
in the sense of viscosity solutions.
Let \({\tilde{w}}=(w^{\varepsilon +\delta })_\delta \). Note that \({\tilde{w}}\geqslant w^\varepsilon \) on \(\mathbb {T}^n\). Now, let \((\hat{x},\hat{t})\in \mathbb {T}^n\times (0,\infty )\) be a point where \({\tilde{w}}\) is twice differentiable in x and differentiable in t. Select a function, \(\varphi \in C^2(\mathbb {T}^n\times (0,\infty ))\), such that \({\tilde{w}}-\varphi \) has a maximum at \((\hat{x},\hat{t})\). At this point either \({\tilde{w}}(\hat{x},\hat{t})=w^{\varepsilon }(\hat{x},\hat{t})\) or \({\tilde{w}}(\hat{x},\hat{t})>w^{\varepsilon }(\hat{x},\hat{t})\). In the first alternative, that is, when \({\tilde{w}}(\hat{x},\hat{t})=w^{\varepsilon }(\hat{x},\hat{t})\), \(w^\varepsilon -\varphi \) has a maximum at \((\hat{x}, \hat{t})\). Thus,
In the second alternative, that is, if \({\tilde{w}}(\hat{x},\hat{t})>w^\varepsilon (\hat{x},\hat{t})\), by [12, Proposition 4.4], \(1/\delta \) is one of the eigenvalues of \(D^2\tilde{w}(\hat{x},\hat{t})\). Moreover, by using (B.7), we get
Letting \(\Lambda :=\max _{x\in \mathbb {T}^n}\mathrm{eig}\,(A(x))\ge 0\), where \(\mathrm{eig}\,(A(x))\) are the eigenvalues of A(x), we similarly have
Set \(L:=\Vert w_t\Vert _{L^{\infty }(\mathbb {T}^n)}+\Vert Dw\Vert _{L^{\infty }(\mathbb {T}^n)}\), and \({\bar{C}}=\max _{x\in \mathbb {T}^n,\,|p|\leqslant 2L}H(x,p)\). Moreover, we set
Combining the preceding estimates, for all \(0<\delta <\delta _0=\delta _0(\varepsilon ,\eta )\), we get
by the choice of \(\delta _0\) in (B.8), which finishes the proof. \(\square \)
Next, we regularize \((w^{\varepsilon +\delta })_\delta \) further by using standard mollifiers to obtain a \(C^2\) subsolution. Let \(\theta \in C_c^\infty (\mathbb {R}^{n},[0,\infty ))\) and \(\rho \in C_c^\infty (\mathbb {R},[0,\infty ))\) be symmetric standard mollifiers; that is, \({\text {supp}}\theta \subset \overline{B}(0,1)\subset \mathbb {R}^{n}\), \({\text {supp}}\rho \subset \overline{B}(0,1)\subset \mathbb {R}\), \(\theta (x)=\theta (-x)\), \(\rho (s)=\rho (-s)\), and \(\Vert \theta \Vert _{L^{1}(\mathbb {R}^{n})}=\Vert \rho \Vert _{L^{1}(\mathbb {R})}=1\). For each \(\alpha >0\), set \(\theta ^\alpha (x):=\alpha ^{-n} \theta (\alpha ^{-1}x)\) for \(x\in \mathbb {R}^{n}\), and \(\rho ^{\alpha }(t):=\alpha ^{-1} \rho (\alpha ^{-1}t)\) for \(t\in \mathbb {R}\). We define the function \(w^{\alpha ,\varepsilon ,\delta }\in C^\infty (\mathbb {T}^n\times [\alpha ,\infty ))\) by
for all \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\).
Lemma B.11
Let \(w\in \mathrm{Lip\,}(\mathbb {T}^n\times (0,\infty ))\) and \(w^{\alpha ,\varepsilon ,\delta }\) be given by (B.9). Then, there exists a constant \(C>0\), independent of \(\alpha , \varepsilon , \delta \), such that for all \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\), we have
(i)
$$\begin{aligned}&\Big | {\text {tr}}(A(x)D^2w^{\alpha ,\varepsilon ,\delta }(x,t))\\&\quad -\int _0^\infty \int _{\mathbb {T}^n}\rho ^\alpha (s)\theta ^\alpha (y){\text {tr}}\left( A(x-y)D^2(w^{\varepsilon +\delta })_\delta (x-y,t-s)\right) \,dyds \Big |\\ \le \,&C\alpha \max \left\{ \frac{1}{\varepsilon }, \frac{1}{\delta }\right\} , \end{aligned}$$
(ii)
$$\begin{aligned}&H(x,Dw^{\alpha ,\varepsilon ,\delta }(x,t))\\&\quad -\int _0^\infty \int _{\mathbb {T}^n} \rho ^\alpha (s)\theta ^\alpha (y)H(x-y,D(w^{\varepsilon +\delta })_\delta (x-y,t-s))\,dyds\le C\alpha . \end{aligned}$$
Proof
For simplicity, in this proof we write \(w^\alpha \) for \(w^{\alpha ,\varepsilon ,\delta }\) and w for \((w^{\varepsilon +\delta })_\delta \). We begin by proving the first inequality. We have
By Proposition B.9, we have, for each \(t>0\),
which implies (i).
By the convexity of H, Jensen’s inequality implies
By Proposition B.8 and (A3),
for all \(y\in B(x,\alpha )\) and \(t-s>0\). Thus, we finish the proof. \(\square \)
Proposition B.12
Let w be a Lipschitz subsolution to (B.1). For \(\alpha , \varepsilon , \delta >0\), let \(w^{\alpha ,\varepsilon ,\delta }\in C^2(\mathbb {T}^n\times [\alpha ,\infty ))\) be the function defined by (B.9). For \(\eta >0\), let \(\delta _0=\delta _0(\varepsilon ,\eta )\) be the constant given by (B.8). Then, we have
for all \(\delta \in (0,\delta _0]\) and \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\). Here,
where \(\omega \in C([0,\infty ))\) is a nondecreasing function with \(\omega (0)=0\), and C is a positive constant.
Proof
Proposition B.12 is a straightforward consequence of Lemmas B.10 and B.11. \(\square \)
We finally apply this approximation procedure to give the characterization of the ergodic constant for (B.2) in terms of generalized Mather measures stated in Proposition B.2.
Proof of Proposition B.2
Take \(\mu \in {\widetilde{\mathcal {H}}}\). By the definition of \( {\widetilde{\mathcal {H}}}\), there exist \(\{\eta _j\}_{j\in \mathbb {N}}\rightarrow 0\) and \(\{\mu ^{\eta _j}\}_{j\in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) such that \(\mu ^{\eta _j}\in \mathcal {H}^{\eta _j}\), and
Let v be a Lipschitz viscosity solution to (B.2). For \(\alpha ,\varepsilon ,\delta >0\), denote by
Then, \(v^{\alpha ,\varepsilon ,\delta }\in C^2(\mathbb {T}^n)\). Let \(\tilde{v}:=v^{\alpha ,\varepsilon ,\delta }\), to simplify the notation. Because of Proposition B.12, we have
We let \(\alpha \rightarrow 0\), \(j\rightarrow \infty \), and \(\varepsilon \rightarrow 0\), in this order, to get
For \(\varepsilon >0\), let \((v^\varepsilon ,c^\varepsilon )\in C^2(\mathbb {T}^n)\times \mathbb {R}\) solve
The constant \(c^\varepsilon \) is unique. Moreover, thanks to Assumptions 1–3, \(v^\varepsilon \) is Lipschitz continuous uniformly in \(\varepsilon \in (0,1)\). Because the ergodic constant c for (1.5) was normalized to be 0, we see that \(c^\varepsilon \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Let \(\theta ^\varepsilon \) solve the associated adjoint equation
Next, we define a measure, \(\mu ^\varepsilon \in \mathcal {P}(\mathbb {T}^n\times \mathbb {R}^n)\), as follows
for all \(\psi \in C_c(\mathbb {T}^n\times \mathbb {R}^n)\).
Multiplying (B.12) by \(\theta ^\varepsilon \), integrating on \(\mathbb {T}^n\), and using integration by parts, we get
By a similar argument to the one in Lemma 2.2, we see that \(\mu ^\varepsilon \in \mathcal {H}^\varepsilon \). By extracting a subsequence if necessary, there exists a sequence \(\{\varepsilon _j\} \rightarrow 0\) such that
as \(j\rightarrow \infty \) for some \(\mu \in {\widetilde{\mathcal {H}}}\), and
This also shows that \(\mu \in \widetilde{\mathcal {M}}\), and finishes the proof. \(\square \)