
The large time profile for Hamilton–Jacobi–Bellman equations

Mathematische Annalen



Here, we study the large-time limit of viscosity solutions of the Cauchy problem for second-order Hamilton–Jacobi–Bellman equations with convex Hamiltonians in the torus. This large-time limit solves the corresponding stationary problem, sometimes called the ergodic problem. This problem, however, has multiple viscosity solutions and, thus, a key question is which of these solutions is selected by the limit. We provide a representation for the viscosity solution to the Cauchy problem in terms of generalized holonomic measures. Then, we use this representation to characterize the large-time limit in terms of the initial data and generalized Mather measures. In addition, we establish various results on generalized Mather measures and duality theorems that are of independent interest.


Data availability

We have no data associated with this submission.


  1. Amann, H., Crandall, M.G.: On some existence theorems for semi-linear elliptic equations. Indiana Univ. Math. J. 27(5), 779–790 (1978)


  2. Armstrong, S.N., Tran, H.V.: Viscosity solutions of general viscous Hamilton-Jacobi equations. Math. Ann. 361(3–4), 647–687 (2015)


  3. Barles, G., Souganidis, P.E.: On the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 31(4), 925–939 (2000)


  4. Barles, G., Souganidis, P.E.: Space-time periodic solutions and long-time behavior of solutions to quasi-linear parabolic equations. SIAM J. Math. Anal. 32(6), 1311–1323 (2001)


  5. Benamou, J.-D., Brenier, Y.: A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. Numer. Math. 84(3), 375–393 (2000)


  6. Bernard, P., Buffoni, B.: Optimal mass transportation and Mather theory. J. Eur. Math. Soc. (JEMS) 9(1), 85–121 (2007)


  7. Cagnetti, F., Gomes, D., Mitake, H., Tran, H.: A new method for large time behavior of degenerate viscous Hamilton-Jacobi equations with convex Hamiltonians. Ann. Inst. H. Poincaré Anal. Non Linéaire 32(1), 183–200 (2015)


  8. Camilli, F., Capuzzo-Dolcetta, I., Gomes, D.: Error estimates for the approximation of the effective Hamiltonian. Appl. Math. Optim. 57(1), 30–57 (2008)


  9. Cannarsa, P., Cardaliaguet, P.: Hölder estimates in space-time for viscosity solutions of Hamilton-Jacobi equations. Commun. Pure Appl. Math. 63(5), 590–629 (2010)


  10. Capuzzo-Dolcetta, I., Leoni, F., Porretta, A.: Hölder estimates for degenerate elliptic equations with coercive Hamiltonians. Trans. Am. Math. Soc. 362(9), 4511–4536 (2010)


  11. Cardaliaguet, P., Silvestre, L.: Hölder continuity to Hamilton-Jacobi equations with superquadratic growth in the gradient and unbounded right-hand side. Commun. Partial Differ. Equ. 37(9), 1668–1688 (2012)


  12. Crandall, M.G., Kocan, M., Soravia, P., Swiech, A.: On the equivalence of various weak notions of solutions of elliptic PDEs with measurable ingredients. In: Progress in elliptic and parabolic partial differential equations (Capri, 1994), vol 350. Pitman Research Notes Mathematics Series, pp. 136–162. Longman, Harlow (1996)

  13. Davini, A., Siconolfi, A.: A generalized dynamical approach to the large time behavior of solutions of Hamilton-Jacobi equations. SIAM J. Math. Anal. 38(2), 478–502 (electronic) (2006)

  14. Dweik, S., Ghoussoub, N., Kim, Y.-H., Palmer, A.Z.: Stochastic optimal transport with free end time. Preprint: arXiv:1909.04814 (2019)

  15. Evans, L.C.: Adjoint and compensated compactness methods for Hamilton-Jacobi PDE. Arch. Ration. Mech. Anal. 197(3), 1053–1088 (2010)


  16. Fathi, A.: Sur la convergence du semi-groupe de Lax-Oleinik. C. R. Acad. Sci. Paris Sér. I Math. 327, 267–270 (1998)


  17. Fleming, W., Vermes, D.: Generalized solutions in the optimal control of diffusions. In: Stochastic differential systems, stochastic control theory and applications (Minneapolis, Minn., 1986), vol 10. IMA Volumes in Mathematics and Applications, pp. 119–127. Springer, New York (1988)

  18. Fleming, W., Vermes, D.: Convex duality approach to the optimal control of diffusions. SIAM J. Control Optim. 27(5), 1136–1155 (1989)


  19. Friedman, A.: Partial Differential Equations of Parabolic Type. Prentice-Hall, Inc., Englewood Cliffs (1964)


  20. Gomes, D.: A stochastic analogue of Aubry-Mather theory. Nonlinearity 15(3), 581–603 (2002)


  21. Gomes, D.: Duality principles for fully nonlinear elliptic equations. In: Trends in partial differential equations of mathematical physics, vol 61. Progress in Nonlinear Differential Equations Applications, pp. 125–136. Birkhäuser, Basel (2005)

  22. Gomes, D.: Generalized Mather problem and selection principles for viscosity solutions and Mather measures. Adv. Calc. Var. 1(3), 291–307 (2008)


  23. Gomes, D., Valdinoci, E.: Duality theory, representation formulas and uniqueness results for viscosity solutions of Hamilton-Jacobi equations. In: Dynamics, games and science. II, vol. 2. Springer Proceedings in Mathematics, pp. 361–386. Springer, Heidelberg (2011)

  24. Ichihara, N., Ishii, H.: Long-time behavior of solutions of Hamilton-Jacobi equations with convex and coercive Hamiltonians. Arch. Ration. Mech. Anal. 194(2), 383–419 (2009)


  25. Ishii, H., Mitake, H., Tran, H.V.: The vanishing discount problem and viscosity Mather measures. Part 1: The problem on a torus. J. Math. Pures Appl. (9) 108(2), 125–149 (2017)


  26. Lasry, J.-M., Lions, P.-L.: A remark on regularization in Hilbert spaces. Israel J. Math. 55(3), 257–266 (1986)


  27. Lewis, R.M., Vinter, R.B.: Relaxation of optimal control problems to equivalent convex programs. J. Math. Anal. Appl. 74(2), 475–493 (1980)


  28. Ley, O., Nguyen, V.D.: Large time behavior for some nonlinear degenerate parabolic equations. J. Math. Pures Appl. (9) 102(2), 293–314 (2014)


  29. Mikami, T.: Marginal problem for semimartingales via duality. Gakuto Int. Ser. Math. Sci. Appl. 30, 133–152 (2008)


  30. Mikami, T.: Two end points marginal problem by stochastic optimal transportation. SIAM J. Control Optim. 53(4), 2449–2461 (2015)


  31. Mikami, T.: Stochastic optimal transport revisited. Preprint: arXiv:2003.11811v2 (2020)

  32. Mikami, T., Thieullen, M.: Duality theorem for the stochastic optimal control problem. Stochastic Process. Appl. 116(12), 1815–1835 (2006)


  33. Mitake, H., Tran, H.: Dynamical properties of Hamilton-Jacobi equations via the nonlinear adjoint method: large time behavior and discounted approximation. In: Dynamical and geometric aspects of Hamilton-Jacobi and linearized Monge-Ampère equations—VIASM 2016, vol. 2183 . Lecture Notes in Mathematics, pp. 125–228. Springer, Cham (2017)

  34. Mitake, H., Tran, H.: Selection problems for a discount degenerate viscous Hamilton-Jacobi equation. Adv. Math. 306, 684–703 (2017)


  35. Mitake, H., Tran, H.: On uniqueness sets of additive eigenvalue problems and applications. Proc. Am. Math. Soc. 146(11), 4813–4822 (2018)


  36. Otto, F., Villani, C.: Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality. J. Funct. Anal. 173(2), 361–400 (2000)


  37. Tran, H.: Adjoint methods for static Hamilton-Jacobi equations. Calc. Var. Partial Differ. Equ. 41(3–4), 301–319 (2011)


  38. Villani, C.: Topics in optimal transportation, vol. 58. Graduate Studies in Mathematics. American Mathematical Society, Providence (2003)



Acknowledgements

We would like to thank Hitoshi Ishii for his suggestions on the approximations of viscosity solutions and subsolutions in Appendix B. We are grateful to Toshio Mikami for the discussions on Theorem 1.1 and for giving us relevant references on the duality result in Theorem 1.4.

Author information



Corresponding author

Correspondence to Hiroyoshi Mitake.

Additional information

Communicated by Giga.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The work of DG was partially supported by KAUST baseline funds and KAUST OSR-CRG2017-3452. The work of HM was partially supported by the JSPS grants: KAKENHI #19K03580, #19H00639, #17KK0093, #20H01816. The work of HT was partially supported by NSF grant DMS-1664424 and NSF CAREER grant DMS-1843320.


Appendix A: Approximations of solutions to Problem 1

Let \(\theta \in C_c^\infty (\mathbb {R}^{n},[0,\infty ))\) and \(\rho \in C_c^\infty (\mathbb {R},[0,\infty ))\) be symmetric standard mollifiers; that is, \({\text {supp}}\theta \subset \overline{B}(0,1)\subset \mathbb {R}^{n}\), \({\text {supp}}\rho \subset \overline{B}(0,1)\subset \mathbb {R}\), \(\theta (x)=\theta (-x)\), \(\rho (s)=\rho (-s)\), and \(\Vert \theta \Vert _{L^{1}(\mathbb {R}^{n})}=\Vert \rho \Vert _{L^{1}(\mathbb {R})}=1\). For \(\alpha >0\), set \(\theta ^\alpha (x):=\alpha ^{-n} \theta (\alpha ^{-1}x)\) for \(x\in \mathbb {R}^{n}\), and \(\rho ^{\alpha }(t):=\alpha ^{-1} \rho (\alpha ^{-1}t)\) for \(t\in \mathbb {R}\). For \(w \in C(\mathbb {T}^n \times [0,\infty ))\), let \(w^{\alpha }\in C^\infty (\mathbb {T}^n\times [\alpha ,\infty ))\) be

$$\begin{aligned} w^{\alpha }(x,t) := \int _{0}^{\infty } \rho ^\alpha (s)\int _{{\mathbb {R}}^n} \theta ^\alpha (y)w(x-y,t-s)\,dy ds \quad \text { for } (x,t)\in \mathbb {T}^n\times [\alpha ,\infty ). \end{aligned}$$
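The space-time mollification above can be checked numerically. The following is a minimal one-dimensional sketch (time variable omitted): the grid size, the bump profile, and the test function are illustrative choices, not taken from the paper; the point is that the mollified function converges back to w as \(\alpha \rightarrow 0\).

```python
import numpy as np

# 1D sketch of the mollification w^alpha on the torus [0, 2*pi).
def bump(y):
    # standard mollifier profile with support in [-1, 1]
    out = np.zeros_like(y)
    inside = np.abs(y) < 1
    out[inside] = np.exp(-1.0 / (1.0 - y[inside] ** 2))
    return out

def mollify_periodic(w, x, alpha):
    # w^alpha(x) = int theta^alpha(y) w(x - y) dy via periodic convolution
    n, dx = len(x), x[1] - x[0]
    y = x - x[n // 2]                        # grid recentered at 0
    theta = bump(y / alpha) / alpha
    theta /= theta.sum() * dx                # enforce ||theta^alpha||_{L^1} = 1
    kernel = np.fft.ifftshift(theta)         # put the center at index 0
    return np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(kernel))) * dx

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
w = np.sin(x)
errors = [np.max(np.abs(mollify_periodic(w, x, a) - w)) for a in (0.5, 0.25, 0.125)]
print(errors)   # shrinks as alpha decreases
```

For the smooth w chosen here the error decays like \(\alpha ^2\); for a merely Lipschitz w one only gets an \(O(\alpha )\) rate, which is the regime relevant to Proposition A.1.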

Proposition A.1

Let w be a Lipschitz solution to (1.1). For \(0<\alpha <1\), let \(w^\alpha \) be as in (A.1). Then, there exists \(C>0\) depending on H, a, and the Lipschitz constant of w such that

$$\begin{aligned} w^\alpha _t - a(x)\Delta w^\alpha + H(x,Dw^\alpha ) \leqslant C \alpha ^{1/2} \quad \text { on } \mathbb {T}^n \times [\alpha , \infty ). \end{aligned}$$

We give only a brief outline of the proof. For a detailed proof, see [33, 34].

Proof outline

To obtain the inequality in (A.2), we rewrite its left-hand side in terms of the convolution of the left-hand side of (1.1) with \( \rho ^\alpha \theta ^\alpha \). We handle each term separately. The first term, \(w^\alpha _t\), is trivial. For the last term, \(H(x,Dw^\alpha )\), we observe that Jensen’s inequality gives, for \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\),

$$\begin{aligned} H(x,Dw^{\alpha }(x,t))&= H\left( x, \int _{0}^{\infty } \rho ^\alpha (s)\int _{\mathbb {T}^n} \theta ^\alpha (y)Dw(x-y,t-s)\,dy ds\right) \\&\leqslant \int _{0}^{\infty } \rho ^\alpha (s)\int _{\mathbb {T}^n} \theta ^\alpha (y)H(x,Dw(x-y,t-s))\,dy ds\\&\leqslant \int _{0}^{\infty } \rho ^\alpha (s)\int _{\mathbb {T}^n} \theta ^\alpha (y)H(x-y,Dw(x-y,t-s))\,dy ds + C \alpha . \end{aligned}$$

Thus, the term \(H(x,Dw^{\alpha }(x,t))\) is controlled by the corresponding term in (1.1) convolved with \( \rho ^\alpha \theta ^\alpha \) and an error term bounded by \(C \alpha \).
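The Jensen step above only uses convexity of \(p \mapsto H(x,p)\) and the fact that the mollifier weights form a probability measure. A quick numerical sanity check, with a hypothetical convex Hamiltonian and randomly sampled gradient values standing in for \(Dw(x-y,t-s)\):

```python
import numpy as np

# Jensen's inequality: H at the mollified gradient is at most the
# mollified value of H, for H convex in p.
rng = np.random.default_rng(0)

def H(p):
    return np.sum(p * p, axis=-1)        # convex in p (illustrative choice)

grads = rng.normal(size=(1000, 3))       # sampled values of Dw(x - y, t - s)
weights = rng.random(1000)
weights /= weights.sum()                 # discrete mollifier rho^alpha theta^alpha

lhs = H(weights @ grads)                 # H(x, Dw^alpha)
rhs = weights @ H(grads)                 # mollification of H(x, Dw)
print(lhs <= rhs)   # True, by Jensen's inequality
```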

The second term, \( a(x)\Delta w^\alpha \), is where the main difficulty of the estimate lies. Because w is Lipschitz, using equation (1.1), we have

$$\begin{aligned} -C \leqslant a(x) \Delta w \leqslant C \quad \text { on } \mathbb {T}^n \times [0,\infty ) \end{aligned}$$

in the viscosity sense.

Because of the simple structure of a, we see further that \(\Vert a\Delta w\Vert _{L^{\infty }(\mathbb {T}^n \times [0,\infty ))} \leqslant C\), and w is a subsolution to (1.1) and (A.3) in the distributional sense. We need to control the commutation term,

$$\begin{aligned}&\int _{0}^{\infty } \rho ^\alpha (s)\int _{\mathbb {T}^n} \theta ^\alpha (y)a(x-y)\Delta w(x-y,t-s)\,dy ds - a(x) \Delta w^{\alpha }(x,t)\\ =\,&\int _{0}^{\infty } \rho ^\alpha (s) \left( \int _{\mathbb {T}^n} \theta ^\alpha (y)(a(x-y)-a(x))\Delta w(x-y,t-s)\,dy \right) \, ds \\ =\,&\int _{0}^{\infty } \rho ^\alpha (s) R^\alpha (x,t-s)\,ds, \end{aligned}$$

where

$$\begin{aligned} R^\alpha (x,t-s) = \int _{\mathbb {T}^n} \theta ^\alpha (y)(a(x-y)-a(x))\Delta w(x-y,t-s)\,dy. \end{aligned}$$

To complete the proof, we show that \(|R^\alpha (x,t)| \leqslant C \alpha ^{1/2}\) for all \((x,t) \in \mathbb {T}^n \times [0,\infty )\). We consider two cases:

$$\begin{aligned} \text {(i) } \min _{y\in \overline{B(x,\alpha )}} a(y) \leqslant \alpha , \quad \text { or } \quad \text {(ii) } \min _{y\in \overline{B(x,\alpha )}} a(y) > \alpha . \end{aligned}$$

In case (i), there exists \({\bar{x}} \in \overline{B(x,\alpha )}\) such that \(a({\bar{x}}) \leqslant \alpha \). Then, there exists a constant \(C>0\) such that,

$$\begin{aligned} |Da({\bar{x}})| \leqslant C a({\bar{x}})^{1/2} \leqslant C \alpha ^{1/2}. \end{aligned}$$

See [7, Lemma 2.6] for example. For any \(z\in \overline{B(x,\alpha )}\),
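The cited bound is the classical estimate \(|Da| \leqslant \sqrt{2\Vert D^2 a\Vert _\infty }\, a^{1/2}\) for smooth nonnegative functions. A numerical illustration, with \(a(x)=\sin ^2 x\) as a hypothetical degenerate diffusion coefficient vanishing at \(x=0\) and \(x=\pi \):

```python
import numpy as np

# Check |Da| <= C a^{1/2} with C = sqrt(2 ||D^2 a||_inf) for a >= 0 smooth.
x = np.linspace(0.0, 2.0 * np.pi, 4001)
a = np.sin(x) ** 2               # a >= 0, C^2, degenerate at 0 and pi
da = np.sin(2.0 * x)             # a'(x)
d2a = 2.0 * np.cos(2.0 * x)      # a''(x)

C = np.sqrt(2.0 * np.max(np.abs(d2a)))   # here C = 2
ok = bool(np.all(np.abs(da) <= C * np.sqrt(a) + 1e-12))
print(ok)   # True: |sin 2x| = 2 |sin x| |cos x| <= 2 |sin x|
```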

$$\begin{aligned} |Da(z)| \leqslant |Da(z)-Da({\bar{x}})|+ |Da({\bar{x}})| \leqslant C \alpha + C \alpha ^{1/2} \leqslant C \alpha ^{1/2}. \end{aligned}$$

Moreover, by using Taylor’s expansion,

$$\begin{aligned}&|a(z)-a(x)| \leqslant |a(z)-a({\bar{x}})|+ |a(x)-a({\bar{x}})|\\ \leqslant \,&|Da({\bar{x}})| (|z-{\bar{x}}|+|x-{\bar{x}}|) + C(|z-{\bar{x}}|^2+|x-{\bar{x}}|^2) \leqslant C \alpha ^{3/2}+ C\alpha ^2 \leqslant C \alpha ^{3/2}. \end{aligned}$$

We use the two above inequalities to control \(R^\alpha (x,t)\) as

$$\begin{aligned}&|R^\alpha (x,t)| = \left| \int _{\mathbb {R}^n} (a(x-y)-a(x)) \Delta w(x-y,t) \theta ^\alpha (y)\,dy\right| \\ =\,&\left| \int _{{\mathbb {R}}^n} Dw(x-y,t)\cdot Da(x-y) \theta ^\alpha (y)\,dy \right. \\&\left. - \int _{{\mathbb {R}}^n} Dw(x-y,t)\cdot D\theta ^\alpha (y) (a(x-y)-a(x))\,dy\right| \\ \leqslant \,&C \int _{{\mathbb {R}}^n} \left( \alpha ^{1/2} \theta ^\alpha (y)+ \alpha ^{3/2} |D\theta ^\alpha (y)|\right) \,dy \leqslant C \alpha ^{1/2}. \end{aligned}$$

Now, we consider case (ii); that is, \(\min _{\overline{B(x,\alpha )}}\, a>\alpha \). A direct computation shows that

$$\begin{aligned}&|R^\alpha (x,t)|\leqslant \int _{{\mathbb {R}}^n} \left| (a(x-y)-a(x))\right| \cdot \left| \Delta w(x-y,t)\right| \theta ^\alpha (y)\,dy\\ \leqslant \,&C \int _{{\mathbb {R}}^n} \frac{|a(x-y)-a(x)|}{a(x-y)} \theta ^\alpha (y)\,dy \leqslant C \int _{{\mathbb {R}}^n} \frac{|Da(x-y)| \cdot |y|}{a(x-y)} \theta ^\alpha (y)\,dy+C\alpha \\ \leqslant \,&C \int _{{\mathbb {R}}^n} \frac{|y|}{a(x-y)^{1/2}} \theta ^\alpha (y)\,dy+C\alpha \leqslant C \int _{{\mathbb {R}}^n} \frac{|y|}{\alpha ^{1/2}} \theta ^\alpha (y)\,dy+C\alpha \leqslant C \alpha ^{1/2}. \end{aligned}$$

Combining these estimates yields the conclusion. \(\square \)

Appendix B: The general diffusion matrix case

Now, we consider the case of a general diffusion matrix A. Let \(\mathbb {S}^n\) be the space of \(n \times n\) real symmetric matrices. Let \(A:\mathbb {T}^n\rightarrow \mathbb {S}^n\) be a nonnegative definite diffusion matrix; that is, \( \xi ^T A(x)\xi \geqslant 0\) for all \(\xi \in {\mathbb {R}}^n\) and \(x\in \mathbb {T}^n\). Assume further that \(A\in C^2(\mathbb {T}^n, \mathbb {S}^n)\). We suppose that Assumptions 1–3 always hold in this section and replace (1.1) in Problem 1 by the following general Hamilton–Jacobi–Bellman equation

$$\begin{aligned} u_t -{\text {tr}}(A(x)D^2 u) + H(x,Du) = 0 \quad \text { in } \mathbb {T}^n \times (0,\infty ), \end{aligned}$$

where Du and \(D^2 u\), respectively, denote the spatial gradient and Hessian of u, and with initial data

$$\begin{aligned} u(x,0) = u_0(x) \quad \text { on } \mathbb {T}^n. \end{aligned}$$

We now extend the results in Theorems 1.1 and 1.2 to this setting. While the statements are similar, there are several technical points that must be addressed. The main difficulty is that the analog of the approximation result in Proposition A.1 is substantially more involved. This approximation result is examined in Proposition B.12 and requires the approximate equation to be uniformly parabolic. Thus, further approximation arguments are needed in various places, including in the definition of holonomic measures.

B.1 Representation formulas for the general case of diffusion matrices

For \(\nu _0, \nu _1\in \mathcal {P}(\mathbb {T}^n)\), \(\eta >0\), let \(\mathcal {H}^{\eta }(\nu _0,\nu _1;t_0,t_1)\) be the set of all \(\gamma \in \mathcal {R}^+({\mathbb {T}}^n\times {\mathbb {R}}^n\times [t_0,t_1])\) satisfying

$$\begin{aligned} \int _{\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1]} |q|^\zeta \,d\gamma (z,q,s)<\infty \end{aligned}$$

and

$$\begin{aligned}&\int _{\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1]} \left( \varphi _t(z,s)-\eta \Delta \varphi (z,s)-{\text {tr}}(A(z)D^2\varphi (z,s))+q\cdot D\varphi (z,s) \right) \,d\gamma (z,q,s) \\ =\,&\, \int _{\mathbb {T}^n}\varphi (z,t_1)\,d\nu _1(z) -\int _{\mathbb {T}^n}\varphi (z,t_0)\,d\nu _0(z) \end{aligned}$$

for all \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\). Moreover, we set

$$\begin{aligned} \mathcal {H}^\eta (\nu _1;t_0,t_1):=\bigcup _{\nu _0\in \mathcal {P}(\mathbb {T}^n)} \mathcal {H}^\eta (\nu _0,\nu _1;t_0,t_1). \end{aligned}$$

Fix any \(\nu _1 \in \mathcal {P}(\mathbb {T}^n)\). It is worth emphasizing that while we do not know that \(\mathcal {H}^\eta (\nu _0,\nu _1;t_0,t_1)\ne \emptyset \) for each \(\nu _0\in \mathcal {P}(\mathbb {T}^n)\), the set \(\mathcal {H}^\eta (\nu _1;t_0,t_1)\) is always non-empty as shown in the proof of Lemma B.5.

We define \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) as follows. We say that \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) if \(\gamma \in \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\), and there exist \(C>0\), a sequence \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), \(\{\nu _1^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n)\), and \(\{\gamma ^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\) such that

$$\begin{aligned}&\gamma ^{\eta _j}\in \mathcal {H}^{\eta _j}(\nu _1^{\eta _j};t_0,t_1), \ \text {and} \ \int _{\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1]} |q|^\zeta \,d\gamma ^{\eta _j}(z,q,s)<C \quad \text { for each } j\in \mathbb {N}, \\&\nu _1^{\eta _j}\rightharpoonup \nu _1 \quad \text { weakly in the sense of measures in} \ \mathcal {R}(\mathbb {T}^n),\\&\gamma ^{\eta _j}\rightharpoonup \gamma \quad \text { weakly in the sense of measures in} \ \mathcal {R}(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]) \ \text {as} \ j\rightarrow \infty . \end{aligned}$$

We also note that \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) is non-empty for any \(\nu _1\in \mathcal {P}(\mathbb {T}^n)\) as stated in Corollary B.6. For \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\), let \(\nu ^{\gamma }\) be the unique element in \(\mathcal {P}(\mathbb {T}^n)\) such that

$$\begin{aligned}&\int _{\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1]} \left( \varphi _t(z,s)-{\text {tr}}(A(z)D^2\varphi (z,s))+q\cdot D\varphi (z,s) \right) \,d\gamma (z,q,s) \\ =\,&\, \int _{\mathbb {T}^n}\varphi (z,t_1)\,d\nu _1(z) -\int _{\mathbb {T}^n}\varphi (z,t_0)\,d\nu ^\gamma (z) \end{aligned}$$

for all \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\). Accordingly, \(\gamma \in {\widetilde{\mathcal {H}}}(\nu ^{\gamma },\nu _1;t_0,t_1)\).

We now use the measures in \({\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\) to obtain a representation formula for solutions of (B.1), which can be viewed as a generalization of Theorem 1.1.

Theorem B.1

Let u solve (B.1). Suppose that Assumptions 1–3 hold. Then, for any \(\nu \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\), we have

$$\begin{aligned} \int _{\mathbb {T}^n}u(z,t)\,d\nu (z)= \inf _{\gamma \in {\widetilde{\mathcal {H}}}(\nu ;0,t)} \left[ \int _{\mathbb {T}^n\times \mathbb {R}^n\times [0,t]}L(z,q)\,d\gamma (z,q,s)+\int _{\mathbb {T}^n}u_0(z)\,d\nu ^\gamma (z) \right] . \end{aligned}$$

This theorem is proved in the next subsection.

The ergodic problem here is

$$\begin{aligned} -{\text {tr}}(A(x)D^2 v)+H(x,Dv)=c \quad \text { in } \mathbb {T}^n. \end{aligned}$$

As previously (cf. Remark 1), we add a constant to H, if necessary, so that \(c=0\). As in the time-dependent case, we begin by defining the sets of approximated stationary generalized holonomic measures. For each \(\eta >0\), we define

$$\begin{aligned}&\mathcal {H}^\eta :=\Big \{\mu \in \mathcal {P}(\mathbb {T}^n\times \mathbb {R}^n) \,:\, \int _{\mathbb {T}^n \times \mathbb {R}^n} |q|^\zeta \,d\mu (z,q)<\infty ,\\&\int _{\mathbb {T}^n\times \mathbb {R}^n}\left( -\eta \Delta \varphi (z)-{\text {tr}}(A(z)D^2\varphi (z))+q\cdot D\varphi (z)\right) \,d\mu (z,q)=0 \quad \text { for all} \ \varphi \in C^2(\mathbb {T}^n)\Big \} . \end{aligned}$$

In a similar manner to the time-dependent case, \({\widetilde{\mathcal {H}}}\) is the set of all \(\mu \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) for which there exist \(C>0\), \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), and \(\{\mu ^{\eta _j}\}_{j\in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) such that

$$\begin{aligned}&\mu ^{\eta _j}\in \mathcal {H}^{\eta _j}, \text { and } \int _{\mathbb {T}^n \times \mathbb {R}^n} |q|^\zeta \,d\mu ^{\eta _j}(z,q)< C \quad \text { for all } j \in \mathbb {N},\\&\mu ^{\eta _j}\rightharpoonup \mu \quad \text { weakly in the sense of measures in} \ \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n) \ \text {as} \ j\rightarrow \infty . \end{aligned}$$

We consider the variational problem

$$\begin{aligned} \inf _{\mu \in {\widetilde{\mathcal {H}}}} \int _{{\mathbb {T}}^n\times {\mathbb {R}}^n}L(x,q)\,d\mu (x,q). \end{aligned}$$

A generalized Mather measure is a solution of the minimization problem in (B.3) and \(\widetilde{\mathcal {M}}\) is the set of all generalized Mather measures. Moreover, \(\mathcal {M}\) is the set of all generalized projected Mather measures; that is, the projections to \(\mathbb {T}^n\) of generalized Mather measures.
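To make the stationary holonomy constraint concrete, here is a small numerical check in one dimension with \(A=0\): the measure \(\mu = (\text {uniform in } x) \otimes \delta _{q=0}\) satisfies the constraint against trigonometric test functions, since both integral terms average to zero over the torus. The grid, the value of \(\eta \), and the mechanical Lagrangian at the end are hypothetical illustrative choices.

```python
import numpy as np

# Discrete check of the stationary holonomy constraint on T^1 x R with A = 0:
# mu = (uniform in x) tensor (Dirac at q = 0); test functions cos(mx), sin(mx).
n, eta = 64, 0.05
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
w = np.full(n, 1.0 / n)      # uniform weights on the x-grid
q = np.zeros(n)              # velocity q = 0 at every node

for m in range(1, 6):
    tests = [(-m * np.sin(m * x), -m * m * np.cos(m * x)),   # phi = cos(mx): (Dphi, D^2 phi)
             (m * np.cos(m * x), -m * m * np.sin(m * x))]    # phi = sin(mx)
    for dphi, d2phi in tests:
        holonomy = w @ (-eta * d2phi + q * dphi)   # tr(A D^2 phi) term absent since A = 0
        assert abs(holonomy) < 1e-12

# action of mu for a hypothetical mechanical Lagrangian L(x, q) = q^2/2 - cos(x)
action = w @ (q ** 2 / 2.0 - np.cos(x))
print(abs(action) < 1e-12)   # True: -cos averages to zero over the torus
```

A generalized Mather measure is then a constrained minimizer of such actions over all admissible measures; the sketch above only verifies feasibility of one simple candidate, not optimality.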

Proposition B.2

Suppose that Assumptions 1–3 hold. Assume that the ergodic constant c for (B.2) is 0. We have

$$\begin{aligned} \inf _{\mu \in {\widetilde{\mathcal {H}}}} \int _{{\mathbb {T}}^n\times {\mathbb {R}}^n} L(x,q)\,d\mu (x,q)=0. \end{aligned}$$

This proposition is proved at the end of the paper.

Finally, we have the following representation result, which is a generalized version of Theorem 1.2.

Theorem B.3

Suppose that Assumptions 1–3 hold. Let u solve (B.1) and \(u_\infty \) be as in (1.4). Then, for any \(\nu \in \mathcal {M}\), we have

$$\begin{aligned} \int _{\mathbb {T}^n} u_\infty (z) \,d\nu (z) = \inf _{\nu _0\in \mathcal {P}(\mathbb {T}^n)} \left[ d(\nu _0,\nu ) + \int _{\mathbb {T}^n} u_0(z)\,d\nu _0(z) \right] , \end{aligned}$$

where for \(\nu _0, \nu _1 \in \mathcal {P}(\mathbb {T}^n)\), d is the generalized Mañé critical potential connecting \(\nu _0\) to \(\nu _1\) given by

$$\begin{aligned} d(\nu _0,\nu _1):= \inf _{\begin{array}{c} \gamma \in {\widetilde{\mathcal {H}}}(\nu _0,\nu _1;0,t)\\ t>0 \end{array}} \int _{\mathbb {T}^n \times \mathbb {R}^n \times [0,t]} L(x,q) \, d\gamma (x,q,s). \end{aligned}$$

The proof of the preceding theorem is similar to that of Theorem 1.2, so we omit it here.

B.2 Proof of Theorem B.1

We begin with the upper bound.

Lemma B.4

Suppose that Assumptions 1–3 hold. Let u solve (B.1). Then, for \(\nu _1 \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\),

$$\begin{aligned} \int _{{\mathbb {T}}^n}u(z,t)d\nu _1(z) \leqslant \inf _{\gamma \in {\widetilde{\mathcal {H}}}(\nu _1; 0,t)} \left[ \int _{\mathbb {T}^n \times \mathbb {R}^n\times [0,t]} L(z,q)\,d\gamma (z,q,s)+ \int _{\mathbb {T}^n}u(z,0)\,d\nu ^\gamma (z) \right] . \end{aligned}$$


Proof

Note that u is globally Lipschitz continuous on \(\mathbb {T}^n\times [0,\infty )\) (see [2, Proposition 3.5], [33, Proposition 4.15] for instance) under Assumptions 1–3.

For \(\alpha , \varepsilon , \delta >0\), let \(u^{\alpha ,\varepsilon ,\delta }\) be the function given by (B.9) in Sect. B.3 below, and set

$$\begin{aligned} \tilde{u}(x,t):=u^{\alpha ,\varepsilon ,\delta }(x,t+\alpha ) \quad \text { for all } (x,t)\in \mathbb {T}^n\times [0,\infty ). \end{aligned}$$

We notice that \({\tilde{u}}\in C^2(\mathbb {T}^n\times [0,\infty ))\) and it is an approximate subsolution to (B.1), as stated in Proposition B.12.

Fix \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\). By the definition of \( {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\), there exist \(\{\eta _j\}_{j\in \mathbb {N}} \rightarrow 0\), \(\{\nu _1^{\eta _j}\}_{j \in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n)\), and \(\{\gamma ^{\eta _j}\}\subset \mathcal {R}^+(\mathbb {T}^n\times \mathbb {R}^n\times [0,t])\) such that \(\gamma ^{\eta _j}\in \mathcal {H}^{\eta _j}(\nu _1^{\eta _j}; 0,t)\) for all \(j\in \mathbb {N}\), and

$$\begin{aligned}&\nu _1^{\eta _j}\rightharpoonup \nu _1 \quad \text { weakly in the sense of measures in} \ \mathcal {R}(\mathbb {T}^n), \\&\gamma ^{\eta _j}\rightharpoonup \gamma \quad \text { weakly in the sense of measures in} \ \mathcal {R}(\mathbb {T}^n\times \mathbb {R}^n\times [0,t]) \ \text {as} \ j\rightarrow \infty . \end{aligned}$$

Using \({\tilde{u}}\) as a test function in the definition of \(\mathcal {H}^{\eta _j}(\nu _1^{\eta _j};0,t)\), we obtain

$$\begin{aligned}&\int _{\mathbb {T}^n \times \mathbb {R}^n\times [0,t]} \left( {\tilde{u}}_t(z,s)-\eta _j \Delta \tilde{u}(z,s)-{\text {tr}}(A(z)D^2 {\tilde{u}}(z,s))+q\cdot D {\tilde{u}}(z,s) \right) \,d\gamma ^{\eta _j}(z,q,s) \\ =\,&\int _{{\mathbb {T}}^n}{\tilde{u}}(z,t)\,d\nu _1^{\eta _j} (z) -\int _{\mathbb {T}^n}{\tilde{u}}(z,0)\,d\nu ^{\gamma ^{\eta _j}}(z). \end{aligned}$$

Because of the definition of the Legendre transform in (1.2),

$$\begin{aligned} q\cdot D{\tilde{u}}(z,s)\leqslant L(z,q)+H(z, D{\tilde{u}}(z,s)). \end{aligned}$$
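This is the Fenchel–Young inequality for the Legendre pair (L, H). A quick numerical check in the model quadratic case \(H(z,p)=|p|^2/2\), whose Legendre transform is \(L(z,q)=|q|^2/2\) (an illustrative choice, not the paper's Hamiltonian):

```python
import numpy as np

# Fenchel-Young: q . p <= L(q) + H(p) for the quadratic pair L = |q|^2/2, H = |p|^2/2.
rng = np.random.default_rng(1)
p = rng.normal(size=(500, 2))
q = rng.normal(size=(500, 2))

lhs = np.sum(q * p, axis=1)
rhs = 0.5 * np.sum(q * q, axis=1) + 0.5 * np.sum(p * p, axis=1)
ok = bool(np.all(lhs <= rhs))
print(ok)   # True: |q|^2/2 + |p|^2/2 - q.p = |q - p|^2/2 >= 0
```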

Accordingly, we have

$$\begin{aligned}&\int _{\mathbb {T}^n}\tilde{u}(z,t)\,d\nu _1^{\eta _j}-\int _{\mathbb {T}^n}\tilde{u}(z,0)\,d\nu ^{\gamma ^{\eta _j}}(z)\\ =\,&\, \int _{\mathbb {T}^n\times \mathbb {R}^n\times [0,t]} \left( \tilde{u}_t-\eta _j \Delta \tilde{u}-{\text {tr}}(A(z)D^2 {\tilde{u}})+q\cdot D\tilde{u} \right) \,d\gamma ^{\eta _j}(z,q,s)\\ \le \,&\, \int _{\mathbb {T}^n\times \mathbb {R}^n\times [0,t]}\left( \tilde{u}_t-\eta _j \Delta \tilde{u}-{\text {tr}}(A(z)D^2 {\tilde{u}})+H(z,D\tilde{u})+L(z,q) \right) \,d\gamma ^{\eta _j}(z,q,s)\\ \le \,&\, \int _{\mathbb {T}^n\times \mathbb {R}^n\times [0,t]}L(z,q)\,d\gamma ^{\eta _j}(z,q,s) +\kappa (\alpha ,\eta _j,\delta ,\varepsilon )t, \end{aligned}$$

where \(\kappa (\alpha ,\eta ,\delta ,\varepsilon )\) is defined by (B.11) in Sect. B.3 below. By taking a subsequence if necessary, we have

$$\begin{aligned} \nu ^{\gamma ^{\eta _j}}\rightharpoonup \nu ^{\gamma } \qquad \text {as} \ j\rightarrow \infty \end{aligned}$$

for some \(\nu ^{\gamma }\in \mathcal {P}(\mathbb {T}^n)\) weakly in the sense of measures.

Notice that if we send \(\alpha \rightarrow 0\), \(j\rightarrow \infty \), and \(\varepsilon \rightarrow 0\) in this order, then we have

$$\begin{aligned} \kappa (\alpha ,\eta _j,\delta ,\varepsilon )\rightarrow 0. \end{aligned}$$

Thus, we obtain

$$\begin{aligned} \int _{{\mathbb {T}}^n}u(z,t)\,d\nu _1(z) \leqslant \int _{\mathbb {T}^n \times \mathbb {R}^n\times [0,t]} L(z,q)\,d\gamma (z,q,s)+ \int _{\mathbb {T}^n}u(z,0)\,d\nu ^\gamma (z). \end{aligned}$$

Because \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _1; 0,t)\) is arbitrary, the statement follows. \(\square \)

To prove the opposite bound, we regularize (B.1) as in Problem 3. That is, for \(\varepsilon >0\), we find \(u^\varepsilon :{\mathbb {T}}^n\times [0,\infty )\rightarrow {\mathbb {R}}\) solving

$$\begin{aligned} {\left\{ \begin{array}{ll} u^\varepsilon _t -{\text {tr}}(A(x)D^2 u^\varepsilon )+ H(x,Du^\varepsilon ) = \varepsilon \Delta u^\varepsilon \quad &{} \text{ in } \mathbb {T}^n \times (0,\infty ),\\ u^\varepsilon (x,0) = u_0(x) \quad &{}\text { on } \mathbb {T}^n. \end{array}\right. } \end{aligned}$$

If Assumptions 1–3 hold, (B.4) has a unique solution, \(u^\varepsilon \in C^2({\mathbb {T}}^n\times [0,+\infty ))\). Moreover, \(u^\varepsilon \) is Lipschitz continuous uniformly in \(\varepsilon \in (0,1)\). Further, by standard viscosity solution theory, \(u^\varepsilon \rightarrow u\) locally uniformly as \(\varepsilon \rightarrow 0\), where u solves (B.1).

Now, we use the nonlinear adjoint method, see [15, 37], to construct measures that satisfy an approximated holonomy condition.

Lemma B.5

For \(\varepsilon >0\), let \(u^\varepsilon \) solve (B.4). For any \(\nu _1\in \mathcal {P}({\mathbb {T}}^n)\), there exist a probability measure \(\nu _0^{\varepsilon }\in \mathcal {P}({\mathbb {T}}^n)\) and \(\gamma ^\varepsilon \in \mathcal {H}^{\varepsilon }(\nu _0^{\varepsilon },\nu _1;t_0,t_1)\). Moreover, \(q=D_pH(x, Du^\varepsilon (x,t))\) \(\gamma ^\varepsilon \)-almost everywhere.


Proof

For \(\varphi \in C^2({\mathbb {T}}^n\times [t_0,t_1])\), the linearization of (B.4) around the solution, \(u^\varepsilon \), is

$$\begin{aligned} \mathcal {L}^\varepsilon [\varphi ]= \varphi _t -a^{ij}\varphi _{x_ix_j}+ D_pH(x,Du^\varepsilon )\cdot D\varphi - \varepsilon \Delta \varphi , \end{aligned}$$

where \(a^{ij}\) are the entries of A and we use Einstein’s summation convention. Accordingly, the corresponding adjoint equation is the Fokker–Planck equation

$$\begin{aligned} {\left\{ \begin{array}{ll} -\sigma ^\varepsilon _t -(a^{ij}\sigma ^\varepsilon )_{x_ix_j} - \text {div}(D_pH(x,Du^{\varepsilon })\sigma ^{\varepsilon }) = \varepsilon \Delta \sigma ^\varepsilon \quad &{} \text{ in } \mathbb {T}^n \times (t_0,t_1),\\ \sigma ^\varepsilon (x,t_1) = \nu _1 \quad &{}\text { on } \mathbb {T}^n. \end{array}\right. } \end{aligned}$$

By standard properties of the Fokker–Planck equation,

$$\begin{aligned} \sigma ^{\varepsilon }>0 \ \text {on} \ \mathbb {T}^n\times [t_0,t_1), \quad \text { and } \quad \int _{\mathbb {T}^n} \sigma ^{\varepsilon }(x,t)\,dx=1 \ \text {for all} \ t\in [t_0,t_1). \end{aligned}$$
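These two properties have a discrete counterpart: a conservative finite-volume scheme for a one-dimensional periodic drift-diffusion equation of this type preserves total mass exactly, since the interface fluxes telescope. The drift (standing in for \(D_pH(x,Du^\varepsilon )\)), the viscosity, and the initial density below are hypothetical choices for illustration.

```python
import numpy as np

# Conservative finite-volume scheme for sigma_t = (b sigma)_x + eps sigma_xx
# on the 1D torus; summing the flux differences telescopes to zero, so the
# total mass is preserved at every step.
n, eps, dt, steps = 200, 0.1, 1e-4, 2000
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
dx = x[1] - x[0]
b = np.sin(x)                            # stand-in for the drift D_pH(x, Du^eps)
sigma = np.exp(np.cos(x))
sigma /= sigma.sum() * dx                # start from a probability density

for _ in range(steps):
    s_r, b_r = np.roll(sigma, -1), np.roll(b, -1)
    # flux through the interface i + 1/2: centered advection plus viscosity
    F = 0.25 * (b + b_r) * (sigma + s_r) + eps * (s_r - sigma) / dx
    sigma = sigma + dt * (F - np.roll(F, 1)) / dx

mass = sigma.sum() * dx
print(mass, sigma.min() > 0.0)   # mass stays 1 up to roundoff; density stays positive here
```

Positivity, unlike mass conservation, is not automatic for a centered scheme; it holds here because the diffusion dominates for these step sizes.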

Next, for each \(\varepsilon >0\) and \(t\in [t_0,t_1]\), let \(\beta ^{\varepsilon }_t \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) be the probability measure determined by

$$\begin{aligned} \int _{\mathbb {T}^n} \psi (x,Du^{\varepsilon }) \sigma ^{\varepsilon }(x,t)\,dx =\int _{\mathbb {T}^n \times \mathbb {R}^n} \psi (x,p)\,d\beta ^{\varepsilon }_t(x,p) \end{aligned}$$

for all \(\psi \in C_c(\mathbb {T}^n \times \mathbb {R}^n)\). For \(t\in [t_0,t_1]\), let \(\gamma ^{\varepsilon }_t \in \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) be the pullback of \(\beta ^{\varepsilon }_t\) by the map \(\Phi (x,q)=(x,D_q L(x,q))\); that is,

$$\begin{aligned} \int _{\mathbb {T}^n \times \mathbb {R}^n} \psi (x,p)\,d\beta ^{\varepsilon }_t(x,p)=\int _{\mathbb {T}^n \times \mathbb {R}^n} \psi (x,D_q L(x,q))\,d\gamma ^{\varepsilon }_t(x,q) \end{aligned}$$

for all \(\psi \in C_c(\mathbb {T}^n \times \mathbb {R}^n)\).

Define the measures \(\beta ^\varepsilon , \gamma ^\varepsilon \in \mathcal {R}(\mathbb {T}^n \times \mathbb {R}^n\times [t_0,t_1])\) by

$$\begin{aligned}&\int _{\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]}f\,d\beta ^\varepsilon = \int _{t_0}^{t_1}\int _{\mathbb {T}^n\times \mathbb {R}^n}f(\cdot ,t)\,d\beta ^\varepsilon _t\,dt, \\&\int _{\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]}f\,d\gamma ^\varepsilon = \int _{t_0}^{t_1}\int _{\mathbb {T}^n\times \mathbb {R}^n}f(\cdot ,t)\,d\gamma ^\varepsilon _t\,dt \end{aligned}$$

for any \(f\in C_c(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1])\).

Multiplying the first equation in (B.5) by an arbitrary function, \(\varphi \in C^2(\mathbb {T}^n\times [t_0,t_1])\), and integrating on \(\mathbb {T}^n\), we gather

$$\begin{aligned}&\varepsilon \int _{\mathbb {T}^n}\sigma ^\varepsilon \Delta \varphi \,dx = \, -\int _{\mathbb {T}^n}\varphi \sigma ^\varepsilon _t\,dx +\int _{\mathbb {T}^n}(-a^{ij}(x)\varphi _{x_ix_j}+D_pH(x,Du^\varepsilon )\cdot D\varphi )\sigma ^\varepsilon \,dx\\&=\, -\int _{\mathbb {T}^n}(\varphi \sigma ^\varepsilon )_t\,dx +\int _{\mathbb {T}^n}(\varphi _t-a^{ij}(x)\varphi _{x_ix_j}+D_pH(x,Du^\varepsilon )\cdot D\varphi ) \sigma ^\varepsilon \,dx. \end{aligned}$$

Next, integrating on \([t_0,t_1]\), we deduce the identity

$$\begin{aligned}&\varepsilon \int _{t_0}^{t_1}\int _{\mathbb {T}^n}\sigma ^\varepsilon \Delta \varphi \,dxdt =\varepsilon \int _{\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]}\Delta \varphi \,d\gamma ^{\varepsilon }(x,q,t)\\ =&\, -\int _{t_0}^{t_1}\int _{\mathbb {T}^n}(\varphi \sigma ^\varepsilon )_t\,dxdt \\&+\int _{t_0}^{t_1}\int _{\mathbb {T}^n}(\varphi _t-a^{ij}(x)\varphi _{x_ix_j}+D_pH(x,Du^\varepsilon )\cdot D\varphi ) \sigma ^\varepsilon \,dxdt\\ =&\, -\left[ \int _{\mathbb {T}^n}\varphi (\cdot ,t_1)\sigma ^\varepsilon (\cdot ,t_1)\,dx-\int _{\mathbb {T}^n}\varphi (\cdot ,t_0)\sigma ^\varepsilon (\cdot ,t_0)\,dx\right] \\&\quad \quad \quad \quad \quad \quad \quad \quad +\int _{t_0}^{t_1}\int _{\mathbb {T}^n\times \mathbb {R}^n}(\varphi _t-a^{ij}(x)\varphi _{x_ix_j}+q\cdot D\varphi ) \,d\gamma ^\varepsilon _t(x,q)\\ =&\, -\left[ \int _{\mathbb {T}^n}\varphi (\cdot ,t_1)\,d\nu _1-\int _{\mathbb {T}^n}\varphi (\cdot ,t_0)\,d\nu _0^\varepsilon \right] \\&\quad \quad \quad \quad \quad \quad \quad \quad +\int _{\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]}(\varphi _t-a^{ij}(x)\varphi _{x_ix_j}+q\cdot D\varphi )\, d\gamma ^\varepsilon (x,q,t), \end{aligned}$$

where \(d\nu _0^{\varepsilon }:=\sigma ^\varepsilon (x,t_0)\,dx\), which implies \(\gamma ^\varepsilon \in \mathcal {H}^\varepsilon (\nu _0^{\varepsilon }, \nu _1;t_0,t_1)\). \(\square \)

Corollary B.6

Under Assumptions 1–3, for all \(0<t_0<t_1\) and \(\nu _1\in \mathcal {P}(\mathbb {T}^n)\),

$$\begin{aligned} {\widetilde{\mathcal {H}}}(\nu _1;t_0,t_1)\ne \emptyset . \end{aligned}$$


Proof

Let \(\nu _0^{\varepsilon }\in \mathcal {P}({\mathbb {T}}^n)\) and \(\gamma ^\varepsilon \in \mathcal {H}^\varepsilon (\nu _0^{\varepsilon },\nu _1;t_0,t_1)\) be the measures given by Lemma B.5. Because


$$\begin{aligned} \Vert Du^\varepsilon (\cdot ,t)\Vert _{L^{\infty }(\mathbb {T}^n)}\le C\quad \text {for some} \ C>0, \ \text {for all} \ t\in [t_0,t_1], \end{aligned}$$

there exists a sequence \(\{\varepsilon _j\}\rightarrow 0\) such that

$$\begin{aligned} \nu _0^{\varepsilon _j}\rightharpoonup \nu _0\in \mathcal {P}(\mathbb {T}^n), \quad \gamma ^{\varepsilon _j}\rightharpoonup \gamma \in \mathcal {R}(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]) \quad \text {as} \ \ j\rightarrow \infty , \end{aligned}$$

weakly in the sense of measures on \(\mathbb {T}^n\) and \(\mathbb {T}^n\times \mathbb {R}^n\times [t_0,t_1]\), respectively. Thus, \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _0,\nu _1;t_0,t_1)\), which implies the conclusion. \(\square \)

Finally, we use Lemma B.5 to establish the opposite inequality to the one in Lemma B.4.

Lemma B.7

For any \(\nu \in \mathcal {P}(\mathbb {T}^n)\) and \(t>0\), we have

$$\begin{aligned} \int _{\mathbb {T}^n}u(z,t)\,d\nu (z) \ge \inf _{\gamma \in {\widetilde{\mathcal {H}}}(\nu ;0,t)} \left\{ \int _{\mathbb {T}^n\times \mathbb {R}^n\times [0,t]}L(z,q)\,d\gamma (z,q,s)+\int _{\mathbb {T}^n}u_0(z)\,d\nu ^\gamma (z) \right\} . \end{aligned}$$


Proof

For \(s\in [0,t]\), let \(\gamma ^\varepsilon _s\) be the measure constructed in the proof of Lemma B.5 for \(t_0=0\) and \(t_1=t\). By the properties of the Legendre transform,

$$\begin{aligned} L(z,q)=D_pH(z,D_qL(z,q))\cdot D_qL(z,q)-H(z,D_qL(z,q)). \end{aligned}$$
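For completeness, here is a sketch of where this identity comes from, assuming, as elsewhere in the paper, that \(H(z,\cdot)\) is strictly convex and superlinear, so that the Legendre transform is an involution.

```latex
% L(z,q) = \sup_p [\, p\cdot q - H(z,p) \,]; by strict convexity the supremum
% is attained at the unique p^* with q = D_pH(z,p^*), and differentiating L
% in q gives p^* = D_qL(z,q). Substituting,
L(z,q) = p^*\cdot q - H(z,p^*)
       = D_qL(z,q)\cdot D_pH\big(z,D_qL(z,q)\big) - H\big(z,D_qL(z,q)\big),
% where we used q = D_pH(z,D_qL(z,q)) in the last equality.
```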


Hence,

$$\begin{aligned}&\int _{\mathbb {T}^n \times \mathbb {R}^n} L(z,q)\,d\gamma ^{\varepsilon }_s(z,q)\\ =&\, \int _{\mathbb {T}^n \times \mathbb {R}^n} (D_pH(z,D_qL(z,q))\cdot D_qL(z,q)-H(z,D_qL(z,q)))\,d\gamma ^{\varepsilon }_s(z,q)\\ =&\, \int _{\mathbb {T}^n \times \mathbb {R}^n} (D_pH(z,p)\cdot p -H(z,p))\,d\beta ^{\varepsilon }_s(z,p)\\ =&\, \int _{\mathbb {T}^n} (D_pH(x,Du^{\varepsilon })\cdot Du^\varepsilon -H(x,Du^{\varepsilon }))\sigma ^{\varepsilon }(x,s)\,dx \end{aligned}$$

for all \(s\in [0,t]\). Moreover, integrating by parts and using the adjoint equation and (B.4), we obtain

$$\begin{aligned}&\int _{\mathbb {T}^n} (D_pH(x,Du^{\varepsilon })\cdot Du^{\varepsilon } -H(x,Du^{\varepsilon }))\sigma ^{\varepsilon }\,dx\\ =&\, \int _{\mathbb {T}^n} -{\text {div}}(D_pH(x,Du^{\varepsilon })\sigma ^{\varepsilon })u^{\varepsilon }-H(x,Du^{\varepsilon })\sigma ^{\varepsilon }\,dx\\ =&\, \int _{\mathbb {T}^n} (\sigma ^{\varepsilon }_t+\varepsilon \Delta \sigma ^\varepsilon +(a^{ij}\sigma ^\varepsilon )_{x_ix_j})u^{\varepsilon } +(u^\varepsilon _t-\varepsilon \Delta u^\varepsilon -a^{ij}u^\varepsilon _{x_ix_j})\sigma ^{\varepsilon }\,dx\\ =&\, \int _{\mathbb {T}^n} (u^\varepsilon \sigma ^{\varepsilon })_t\,dx. \end{aligned}$$

Integrating on [0, t] yields

$$\begin{aligned} \int _{\mathbb {T}^n \times \mathbb {R}^n\times [0,t]} L(x,q)\,d\gamma ^{\varepsilon }(x,q,s)&=\, \int _0^t\int _{\mathbb {T}^n \times \mathbb {R}^n} L(x,q)\,d\gamma ^{\varepsilon }_s(x,q)ds\\&=\, \int _{\mathbb {T}^n}u^\varepsilon (x,t)\,d\nu -\int _{\mathbb {T}^n} u_0(x)\,d\nu ^\varepsilon _0. \end{aligned}$$

Taking subsequences \(\{\gamma ^{\varepsilon _j}\}\) and \(\{\nu _0^{\varepsilon _j}\}\) as in (B.6) yields

$$\begin{aligned} \int _{\mathbb {T}^n}u(z,t)\,d\nu (z) =\int _{\mathbb {T}^n \times \mathbb {R}^n \times [0,t]} L(z,q)\,d\gamma (z,q,s)+ \int _{\mathbb {T}^n} u_0(z)\,d\nu _0(z). \end{aligned}$$

Because \(\gamma \in {\widetilde{\mathcal {H}}}(\nu _0,\nu ;0,t)\), we obtain the inequality claimed in the statement. \(\square \)

Proof of Theorem B.1

The statement follows directly by combining Lemma B.5 with Lemma B.7. \(\square \)

1.3 Approximation and proof of Proposition B.2

We first construct an approximation of viscosity solutions of (B.1) by \(C^2\)-subsolutions of an approximate equation. We begin by recalling the definitions of the sup- and inf-convolutions and some of their basic properties.

Let \(w:{\mathbb {T}}^n\times [0,T]\rightarrow {\mathbb {R}}\) be a continuous function. The sup-convolution, \(w^\varepsilon \), and inf-convolution, \(w_\varepsilon \), of w with respect to x for \(\varepsilon >0\) are defined by

$$\begin{aligned}&w^\varepsilon (x,t):=\sup _{y\in \mathbb {T}^n}\left[ w(y,t)-\frac{|x-y|^2}{2\varepsilon }\right] , \quad w_\varepsilon (x,t):=\inf _{y\in \mathbb {T}^n}\left[ w(y,t)+\frac{|x-y|^2}{2\varepsilon }\right] . \end{aligned}$$
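These regularizations enjoy well-known one-sided convexity properties, used repeatedly below. A standard computation, stated for the periodic lifting to \(\mathbb{R}^n\), is the following:

```latex
% Expanding |x-y|^2 = |x|^2 - 2x\cdot y + |y|^2 in the definition of w^\varepsilon,
w^\varepsilon(x,t) + \frac{|x|^2}{2\varepsilon}
  = \sup_{y}\Big[\, w(y,t) + \frac{x\cdot y}{\varepsilon}
      - \frac{|y|^2}{2\varepsilon} \Big],
% a supremum of functions affine in x, hence convex in x. Equivalently,
% D^2 w^\varepsilon \ge -\varepsilon^{-1} I in the viscosity sense, i.e.,
% w^\varepsilon is (1/2\varepsilon)-semiconvex. Symmetrically,
% w_\varepsilon(x,t) - |x|^2/(2\varepsilon) is concave in x, so
% D^2 w_\varepsilon \le \varepsilon^{-1} I, i.e., w_\varepsilon is
% (1/2\varepsilon)-semiconcave.
```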

Proposition B.8

Let \(w\in \mathrm{Lip\,}(\mathbb {T}^n\times [0,\infty ))\), and set \(L:=\Vert Dw\Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\). We have

$$\begin{aligned}&w^\varepsilon (x,t)=\max _{ |x-y|\le 2L\varepsilon }\left\{ w(y,t)-\frac{|x-y|^2}{2\varepsilon }\right\} , \\&\quad w_\varepsilon (x,t)=\min _{ |x-y|\le 2L\varepsilon }\left\{ w(y,t)+\frac{|x-y|^2}{2\varepsilon }\right\} . \end{aligned}$$

Moreover, \(\Vert Dw^\varepsilon \Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\le 2L\) and \(\Vert Dw_\varepsilon \Vert _{L^\infty (\mathbb {T}^n\times (0,\infty ))}\le 2L\).


Proof

We only give a proof for \(w^{\varepsilon }\). Take \(x,y\in \mathbb {T}^n\) so that \(2L\varepsilon <|x-y|\). Since \(w(y,t)-w(x,t)\le L|x-y|<\frac{|x-y|^2}{2\varepsilon }\), we have \(w(y,t)-\frac{|x-y|^2}{2\varepsilon }<w(x,t)\), which implies the first claim.

To get the Lipschitz estimate for \(w^\varepsilon \), for a fixed \(x\in \mathbb {T}^n\), take \(z_x\in \overline{B}(x,2L\varepsilon )\) so that \(w^\varepsilon (x,t)=w(z_x,t)-\frac{|x-z_x|^2}{2\varepsilon }\). Then,

$$\begin{aligned} w^\varepsilon (x,t)-w^\varepsilon (y,t)&\le \, \left( w(z_x,t)-\frac{|x-z_x|^2}{2\varepsilon }\right) -\left( w(z_x,t)-\frac{|y-z_x|^2}{2\varepsilon }\right) \\&=\, \frac{(y-x)\cdot (x+y-2z_x)}{2\varepsilon } \le \frac{|x-y|(|x-z_x|+|y-z_x|)}{2\varepsilon }\\&\le \, \frac{4L+C}{2}|x-y| \end{aligned}$$

whenever \(|x-y|\le C\varepsilon \). Since \(C>0\) is arbitrary, the local Lipschitz constant of \(w^\varepsilon \) is at most 2L, which implies the conclusion. \(\square \)

It is well known that the inf-sup convolution \((w^{\varepsilon +\delta })_\delta \) for \(\varepsilon , \delta >0\) gives a \(C^{1,1}\) approximation of w in x (see [26] for instance).

Proposition B.9

We have \((w^{\varepsilon +\delta })_\delta \in \mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\), where we denote by \(\mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\) the set of all functions that are Lipschitz in \(t\in [0,\infty )\) and \(C^{1,1}\) in \(x\in \mathbb {T}^n\). Moreover, for each \(t>0\),

$$\begin{aligned} -\frac{1}{\varepsilon } I\le D^2(w^{\varepsilon +\delta })_\delta (x,t) \le \frac{1}{\delta } I \quad \text {for a.e.} \ x\in \mathbb {T}^n. \end{aligned}$$


Proof

It is clear that \((w^{\varepsilon +\delta })_\delta \) is \((1/2\delta )\)-semiconcave. Because the inf and sup convolutions satisfy the semigroup property, that is, \(w^{\varepsilon +\delta }=(w^{\varepsilon })^\delta \), \(w_{\varepsilon +\delta }=(w_{\varepsilon })_\delta \) for \(\varepsilon , \delta >0\), we have

$$\begin{aligned} (w^{\varepsilon +\delta })_\delta =((w^\varepsilon )^\delta )_\delta , \end{aligned}$$

and hence \((w^{\varepsilon +\delta })_\delta \) is \((1/2\varepsilon )\)-semiconvex in light of [12, Proposition 4.5]. Therefore, \((w^{\varepsilon +\delta })_\delta \in \mathrm{Lip\,}([0,\infty ); C^{1,1}(\mathbb {T}^n))\), and (B.7) follows. \(\square \)
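A direct consequence of (B.7), used below in the proof of Lemma B.11, is the following pointwise Hessian bound:

```latex
% By (B.7), for each t the eigenvalues of D^2(w^{\varepsilon+\delta})_\delta(x,t)
% lie in [-1/\varepsilon, 1/\delta] for a.e. x, and therefore (up to a
% dimensional constant depending on the choice of matrix norm)
\big|D^2(w^{\varepsilon+\delta})_\delta(x,t)\big|
  \le \max\Big\{\frac{1}{\varepsilon},\frac{1}{\delta}\Big\}
  \quad\text{for a.e. } x\in\mathbb{T}^n.
```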

Lemma B.10

Let w be a Lipschitz subsolution to (B.1). Then, there exist a nondecreasing modulus of continuity \(\omega \in C([0,\infty ))\) with \(\omega (0)=0\) and a function \(\delta _0(\varepsilon ,\eta )>0\), defined for \(\varepsilon , \eta >0\), such that for all \(\varepsilon , \eta >0\), all \(\delta \in (0,\delta _0(\varepsilon ,\eta )]\), and every \((x,t) \in \mathbb {T}^n \times (0,\infty )\) at which \((w^{\varepsilon +\delta })_\delta \) is twice differentiable in x and differentiable in t, we have at (x,t) the following inequality

$$\begin{aligned} ((w^{\varepsilon +\delta })_\delta )_t -\eta \Delta (w^{\varepsilon +\delta })_\delta -{\text {tr}}(A(x)D^2(w^{\varepsilon +\delta })_\delta )+H(x,D(w^{\varepsilon +\delta })_\delta )\le \omega (\varepsilon )+\frac{n\eta }{\varepsilon }. \end{aligned}$$


Proof

We first notice that by standard viscosity solution theory, \(w^\varepsilon \) satisfies

$$\begin{aligned} w^\varepsilon _t-{\text {tr}}(A(x)D^2w^\varepsilon )+H(x,Dw^\varepsilon )\le \omega (\varepsilon ) \end{aligned}$$

in the sense of viscosity solutions for some nondecreasing function \(\omega \in C([0,\infty ))\) with \(\omega (0)=0\). Furthermore, because \(w^{\varepsilon }+|x|^2/(2\varepsilon )\) is convex, we see that \(-\Delta w^{\varepsilon }\le n/\varepsilon \) in \(\mathbb {T}^n\times (0,\infty )\) in the sense of viscosity solutions. Thus,

$$\begin{aligned} w^\varepsilon _t-\eta \Delta w^\varepsilon -{\text {tr}}(A(x)D^2w^\varepsilon )+H(x,Dw^\varepsilon )\le \omega (\varepsilon )+\frac{n\eta }{\varepsilon } \end{aligned}$$

in the sense of viscosity solutions.

Let \({\tilde{w}}=(w^{\varepsilon +\delta })_\delta \). Note that \({\tilde{w}}\geqslant w^\varepsilon \) on \(\mathbb {T}^n\). Now, let \((\hat{x},\hat{t})\in \mathbb {T}^n\times (0,\infty )\) be a point where \({\tilde{w}}\) is twice differentiable in x and differentiable in t. Select a function, \(\varphi \in C^2(\mathbb {T}^n\times (0,\infty ))\), such that \({\tilde{w}}-\varphi \) has a maximum at \((\hat{x},\hat{t})\). At this point either \({\tilde{w}}(\hat{x},\hat{t})=w^{\varepsilon }(\hat{x},\hat{t})\) or \({\tilde{w}}(\hat{x},\hat{t})>w^{\varepsilon }(\hat{x},\hat{t})\). In the first alternative, that is, when \({\tilde{w}}(\hat{x},\hat{t})=w^{\varepsilon }(\hat{x},\hat{t})\), \(w^\varepsilon -\varphi \) has a maximum at \((\hat{x}, \hat{t})\). Thus,

$$\begin{aligned}&\tilde{w}_t(\hat{x},\hat{t})-\eta \Delta \tilde{w}(\hat{x},\hat{t})-{\text {tr}}(A(\hat{x})D^2\tilde{w}(\hat{x},\hat{t}))+H(\hat{x},D\tilde{w}(\hat{x},\hat{t}))\\ \le&\, \varphi _t(\hat{x},\hat{t})-\eta \Delta \varphi (\hat{x},\hat{t})-{\text {tr}}(A(\hat{x})D^2\varphi (\hat{x},\hat{t}))+H(\hat{x},D\varphi (\hat{x},\hat{t})) \le \omega (\varepsilon )+\frac{n\eta }{\varepsilon }. \end{aligned}$$

In the second alternative, that is, if \({\tilde{w}}(\hat{x},\hat{t})>w^\varepsilon (\hat{x},\hat{t})\), by [12, Proposition 4.4], \(1/\delta \) is one of the eigenvalues of \(D^2\tilde{w}(\hat{x},\hat{t})\). Moreover, since (B.7) bounds the remaining \(n-1\) eigenvalues from below by \(-1/\varepsilon \), we get

$$\begin{aligned} \eta \Delta \tilde{w}(\hat{x},\hat{t})\ge \eta \left( \frac{1}{\delta }-\frac{n-1}{\varepsilon }\right) . \end{aligned}$$

Letting \(\Lambda :=\max _{x\in \mathbb {T}^n}\max \,\mathrm{eig}\,(A(x))\ge 0\), where \(\mathrm{eig}\,(A(x))\) denotes the set of eigenvalues of A(x), we similarly have

$$\begin{aligned} {\text {tr}}(A(\hat{x})D^2\tilde{w}(\hat{x},\hat{t}))\ge -\frac{(n-1)\Lambda }{\varepsilon }. \end{aligned}$$

Set \(L:=\Vert w_t\Vert _{L^{\infty }(\mathbb {T}^n\times (0,\infty ))}+\Vert Dw\Vert _{L^{\infty }(\mathbb {T}^n\times (0,\infty ))}\) and \({\bar{C}}:=\max _{x\in \mathbb {T}^n,\, |p|\leqslant 2L}H(x,p)\). Moreover, we set

$$\begin{aligned} \delta _0(\varepsilon ,\eta )=\frac{\varepsilon \eta }{(n-1)(\Lambda +\eta )+(L+{\bar{C}})\varepsilon }. \end{aligned}$$

Combining the preceding estimates, for all \(0<\delta <\delta _0=\delta _0(\varepsilon ,\eta )\), we get

$$\begin{aligned}&\tilde{w}_t(\hat{x},\hat{t})-\eta \Delta \tilde{w}(\hat{x},\hat{t})-{\text {tr}}(A(\hat{x})D^2\tilde{w}(\hat{x},\hat{t}))+H(\hat{x},D\tilde{w}(\hat{x},\hat{t}))\\ \le&\, L-\eta \left( \frac{1}{\delta }-\frac{n-1}{\varepsilon }\right) +\frac{(n-1)\Lambda }{\varepsilon } +{\bar{C}} = -\frac{\eta }{\delta }+\frac{(n-1)(\Lambda +\eta )+(L+{\bar{C}})\varepsilon }{\varepsilon } \leqslant 0, \end{aligned}$$

by the choice of \(\delta _0\) in (B.8), which finishes the proof. \(\square \)

Next, we regularize \((w^{\varepsilon +\delta })_\delta \) further by using standard mollifiers to obtain a \(C^2\) subsolution. Let \(\theta \in C_c^\infty (\mathbb {R}^{n},[0,\infty ))\) and \(\rho \in C_c^\infty (\mathbb {R},[0,\infty ))\) be symmetric standard mollifiers; that is, \({\text {supp}}\theta \subset \overline{B}(0,1)\subset \mathbb {R}^{n}\), \({\text {supp}}\rho \subset \overline{B}(0,1)\subset \mathbb {R}\), \(\theta (x)=\theta (-x)\), \(\rho (s)=\rho (-s)\), and \(\Vert \theta \Vert _{L^{1}(\mathbb {R}^{n})}=\Vert \rho \Vert _{L^{1}(\mathbb {R})}=1\). For each \(\alpha >0\), set \(\theta ^\alpha (x):=\alpha ^{-n} \theta (\alpha ^{-1}x)\) for \(x\in \mathbb {R}^{n}\), and \(\rho ^{\alpha }(t):=\alpha ^{-1} \rho (\alpha ^{-1}t)\) for \(t\in \mathbb {R}\). We define the function \(w^{\alpha ,\varepsilon ,\delta }\in C^\infty (\mathbb {T}^n\times [\alpha ,\infty ))\) by

$$\begin{aligned} w^{\alpha ,\varepsilon ,\delta }(x,t) := \int _{0}^{\infty } \rho ^\alpha (s)\int _{\mathbb {T}^n} \theta ^\alpha (y)(w^{\varepsilon +\delta })_\delta (x-y,t-s)\,dy ds \end{aligned}$$

for all \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\).

Lemma B.11

Let \(w\in \mathrm{Lip\,}(\mathbb {T}^n\times (0,\infty ))\) and \(w^{\alpha ,\varepsilon ,\delta }\) be given by (B.9). Then, there exists a constant \(C>0\), independent of \(\alpha , \varepsilon , \delta \), such that for all \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\), we have

  1. (i)
    $$\begin{aligned}&\Big | {\text {tr}}(A(x)D^2w^{\alpha ,\varepsilon ,\delta }(x,t))\\&\quad -\int _0^\infty \int _{\mathbb {T}^n}\rho ^\alpha (s)\theta ^\alpha (y){\text {tr}}\left( A(x-y)D^2(w^{\varepsilon +\delta })_\delta (x-y,t-s)\right) \,dyds \Big |\\ \le \,&C\alpha \max \left\{ \frac{1}{\varepsilon }, \frac{1}{\delta }\right\} , \end{aligned}$$
  2. (ii)
    $$\begin{aligned}&H(x,Dw^{\alpha ,\varepsilon ,\delta }(x,t))\\&\quad -\int _0^\infty \int _{\mathbb {T}^n} \rho ^\alpha (s)\theta ^\alpha (y)H(x-y,D(w^{\varepsilon +\delta })_\delta (x-y,t-s))\,dyds\le C\alpha . \end{aligned}$$


Proof

We denote, respectively, \(w^{\alpha ,\varepsilon ,\delta }\) and \((w^{\varepsilon +\delta })_\delta \) by \(w^\alpha \) and w for simplicity in the proof. We begin by proving the first inequality. We have

$$\begin{aligned}&{\text {tr}}(A(x)D^2w^\alpha (x,t)) =\, {\text {tr}}\left( A(x)\int _0^\infty \int _{\mathbb {T}^n}\rho ^\alpha (s)\theta ^\alpha (y)D^2w(x-y,t-s)\,dyds\right) \\ =&\, \int _0^\infty \int _{\mathbb {T}^n}\rho ^\alpha (s)\theta ^\alpha (y){\text {tr}}\left( A(x-y)D^2w(x-y,t-s)\right) \,dyds\\&+\int _0^\infty \int _{|y|\le \alpha }\rho ^\alpha (s)\theta ^\alpha (y){\text {tr}}\left( (A(x)-A(x-y))D^2w(x-y,t-s)\right) \,dyds. \end{aligned}$$

By Proposition B.9, we have, for each \(t>0\),

$$\begin{aligned} |D^2w(x,t)|\le C\max \left\{ \frac{1}{\varepsilon }, \frac{1}{\delta }\right\} \quad \text {for} \ a.e. \ x\in \mathbb {T}^n, \end{aligned}$$

which, together with the Lipschitz continuity of A, implies (i).

By the convexity of H, Jensen’s inequality implies

$$\begin{aligned} H(x,Dw^{\alpha }(x,t))&=\, H\left( x,\int _0^\infty \int _{\mathbb {T}^n} \rho ^\alpha (s)\theta ^\alpha (y)Dw(x-y,t-s)\,dyds\right) \\&\,\leqslant \int _0^\infty \int _{\mathbb {T}^n} \rho ^\alpha (s)\theta ^\alpha (y) H(x,Dw(x-y,t-s))\,dyds. \end{aligned}$$

By Proposition B.8 and (A3),

$$\begin{aligned} |H(x-y,Dw(x-y,t-s))-H(x,Dw(x-y,t-s))|\le C\alpha \quad \end{aligned}$$

for all \(y\in B(0,\alpha )\) and \(t-s>0\). This completes the proof. \(\square \)

Proposition B.12

Let w be a Lipschitz subsolution to (B.1). For \(\alpha , \varepsilon , \delta >0\), let \(w^{\alpha ,\varepsilon ,\delta }\in C^2(\mathbb {T}^n\times [\alpha ,\infty ))\) be the function defined by (B.9). For \(\eta >0\), let \(\delta _0=\delta _0(\varepsilon ,\eta )\) be the constant given by (B.8). Then, we have

$$\begin{aligned} (w^{\alpha ,\varepsilon ,\delta })_t-\eta \Delta w^{\alpha ,\varepsilon ,\delta } -{\text {tr}}(A(x)D^2w^{\alpha ,\varepsilon ,\delta })+H(x,Dw^{\alpha ,\varepsilon ,\delta }) \le \kappa (\alpha ,\eta , \delta , \varepsilon ) \end{aligned}$$

for all \(\delta \in (0,\delta _0]\) and \((x,t)\in \mathbb {T}^n\times [\alpha ,\infty )\). Here,

$$\begin{aligned} \kappa (\alpha ,\eta , \delta , \varepsilon ) :=\omega (\varepsilon )+\frac{n\eta }{\varepsilon }+C\max \left\{ \frac{1}{\varepsilon },\frac{1}{\delta }\right\} \alpha \end{aligned}$$

where \(\omega \in C([0,\infty ))\) is a nondecreasing function with \(\omega (0)=0\), and C is a positive constant.


Proof

Proposition B.12 follows directly from Lemmas B.10 and B.11. \(\square \)

We finally apply this approximation procedure to give the characterization of the ergodic constant for (B.2) in terms of generalized Mather measures stated in Proposition B.2.

Proof of Proposition B.2

Take \(\mu \in {\widetilde{\mathcal {H}}}\). By the definition of \( {\widetilde{\mathcal {H}}}\), there exist \(\{\eta _j\}_{j\in \mathbb {N}}\rightarrow 0\) and \(\{\mu ^{\eta _j}\}_{j\in \mathbb {N}}\subset \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n)\) such that \(\mu ^{\eta _j}\in \mathcal {H}^{\eta _j}\), and

$$\begin{aligned} \mu ^{\eta _j}\rightharpoonup \mu \quad \text { weakly in the sense of measures in} \ \mathcal {P}(\mathbb {T}^n \times \mathbb {R}^n). \end{aligned}$$

Let v be a Lipschitz viscosity solution to (B.2). For \(\alpha ,\varepsilon ,\delta >0\), define

$$\begin{aligned} v^{\alpha ,\varepsilon ,\delta }(x) := \int _{\mathbb {T}^n} \theta ^\alpha (y)(v^{\varepsilon +\delta })_\delta (x-y)\,dy. \end{aligned}$$

Then, \(v^{\alpha ,\varepsilon ,\delta }\in C^2(\mathbb {T}^n)\). Let \(\tilde{v}:=v^{\alpha ,\varepsilon ,\delta }\), to simplify the notation. Because of Proposition B.12, we have

$$\begin{aligned} \kappa (\alpha ,\eta ,\delta ,\varepsilon )&\,\ge \int _{\mathbb {T}^n\times \mathbb {R}^n} \left( -\eta _j\Delta \tilde{v}-{\text {tr}}(A(z)D^2\tilde{v})+H(z,D\tilde{v}) \right) \, d\mu ^{\eta _j}(z,q)\\&\,\ge \int _{\mathbb {T}^n\times \mathbb {R}^n} \left( -\eta _j\Delta \tilde{v}-{\text {tr}}(A(z)D^2\tilde{v})+q\cdot D\tilde{v}-L(z,q) \right) \, d\mu ^{\eta _j}(z,q)\\&\,= -\int _{\mathbb {T}^n\times \mathbb {R}^n}L(z,q)\, d\mu ^{\eta _j}(z,q). \end{aligned}$$

We let \(\alpha \rightarrow 0\), \(j\rightarrow \infty \), and \(\varepsilon \rightarrow 0\), in this order, to get

$$\begin{aligned} \int _{\mathbb {T}^n\times \mathbb {R}^n}L(z,q)\, d\mu (z,q)\ge 0 \quad \text { for all} \ \mu \in {\widetilde{\mathcal {H}}}. \end{aligned}$$

For \(\varepsilon >0\), let \((v^\varepsilon ,c^\varepsilon )\in C^2(\mathbb {T}^n)\times \mathbb {R}\) solve

$$\begin{aligned} -\varepsilon \Delta v^\varepsilon -{\text {tr}}(A(x)D^2v^\varepsilon )+H(x,Dv^\varepsilon )=c^\varepsilon \quad \text { in} \ \mathbb {T}^n. \end{aligned}$$
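In (B.12), the constant \(c^\varepsilon \) is uniquely determined; the following sketch, not in the original text, indicates the standard maximum-principle argument for the classical solutions considered here.

```latex
% If (v_1,c_1) and (v_2,c_2) both solve (B.12), evaluate the two equations
% at a maximum point x_0 of v_1 - v_2. There D v_1 = D v_2, so the H terms
% cancel, and D^2(v_1 - v_2)(x_0) \le 0, so, subtracting and using A \ge 0,
c_1 - c_2 = -\varepsilon\Delta(v_1 - v_2)(x_0)
            - \operatorname{tr}\!\big(A(x_0)D^2(v_1 - v_2)(x_0)\big) \ge 0.
% Exchanging the roles of v_1 and v_2 gives c_1 = c_2.
```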

The constant \(c^\varepsilon \) is unique. Besides, thanks to Assumptions 1–3, \(v^\varepsilon \) is Lipschitz continuous uniformly in \(\varepsilon \in (0,1)\). Because the ergodic constant c for (1.5) was normalized to be 0, we see that \(c^\varepsilon \rightarrow 0\) as \(\varepsilon \rightarrow 0\). Let \(\theta ^\varepsilon \) solve the associated adjoint equation

$$\begin{aligned} \left\{ \begin{array}{ll} &{}-\varepsilon \Delta \theta ^{\varepsilon }-(a^{ij}\theta ^\varepsilon )_{x_ix_j}-\text {div}(D_pH(x,Dv^\varepsilon )\theta ^\varepsilon )=0 \quad \text { in} \ \mathbb {T}^n, \\ &{} \displaystyle \int _{\mathbb {T}^n}\theta ^\varepsilon (x)\,dx=1. \end{array} \right. \end{aligned}$$

Next, we define a measure, \(\mu ^\varepsilon \in \mathcal {P}(\mathbb {T}^n\times \mathbb {R}^n)\), as follows

$$\begin{aligned} \int _{\mathbb {T}^n} \psi (x,Dv^{\varepsilon }) \theta ^{\varepsilon }(x)\,dx =\int _{\mathbb {T}^n \times \mathbb {R}^n} \psi (x,D_q L(x,q))\,d\mu ^{\varepsilon }(x,q) \end{aligned}$$

for all \(\psi \in C_c(\mathbb {T}^n\times \mathbb {R}^n)\).

Multiplying (B.12) by \(\theta ^\varepsilon \), integrating on \(\mathbb {T}^n\), and integrating by parts, we get

$$\begin{aligned} c^\varepsilon&=\, \int _{\mathbb {T}^n} \Big (-\varepsilon \Delta v^\varepsilon -{\text {tr}}(A(x)D^2v^\varepsilon )+H(x,Dv^\varepsilon )\Big )\theta ^\varepsilon \,dx\\&=\, \int _{\mathbb {T}^n} \Big (H(x,Dv^\varepsilon )-D_pH(x,Dv^\varepsilon )\cdot Dv^\varepsilon \Big )\theta ^\varepsilon \,dx =-\int _{\mathbb {T}^n\times \mathbb {R}^n}L(x,q)\,d\mu ^\varepsilon . \end{aligned}$$

By a similar argument to the one in Lemma 2.2, we see that \(\mu ^\varepsilon \in \mathcal {H}^\varepsilon \). By extracting a subsequence if necessary, there exists a sequence \(\{\varepsilon _j\} \rightarrow 0\) such that

$$\begin{aligned} \mu ^{\varepsilon _j}\rightharpoonup \mu \quad \text { weakly in the sense of measures in} \ \mathcal {P}(\mathbb {T}^n\times \mathbb {R}^n) \end{aligned}$$

as \(j\rightarrow \infty \) for some \(\mu \in {\widetilde{\mathcal {H}}}\), and

$$\begin{aligned} \int _{\mathbb {T}^n\times \mathbb {R}^n}L(x,q)\,d\mu =0. \end{aligned}$$

This also shows that \(\mu \in \widetilde{\mathcal {M}}\), and finishes the proof. \(\square \)

Cite this article

Gomes, D.A., Mitake, H. & Tran, H.V. The large time profile for Hamilton–Jacobi–Bellman equations. Math. Ann. 384, 1409–1459 (2022).
