
An Introduction to Fully Nonlinear Parabolic Equations

Chapter in: An Introduction to the Kähler-Ricci Flow

Part of the book series: Lecture Notes in Mathematics (LNM, volume 2086)

Abstract

These notes contain a short exposition of selected results about parabolic equations: Schauder estimates for linear parabolic equations with Hölder coefficients; some existence, uniqueness and regularity results for viscosity solutions of fully nonlinear parabolic equations (including degenerate ones); and the Harnack inequality for fully nonlinear uniformly parabolic equations.


References

  1. O. Alvarez, J.-M. Lasry, P.-L. Lions, Convex viscosity solutions and state constraints. J. Math. Pures Appl. (9) 76(3), 265–288 (1997)

  2. G. Barles, Solutions de viscosité des équations de Hamilton-Jacobi. Mathématiques & Applications (Berlin), vol. 17 (Springer, Paris, 1994), x + 194 pp.

  3. L.A. Caffarelli, X. Cabré, Fully Nonlinear Elliptic Equations. American Mathematical Society Colloquium Publications, vol. 43 (American Mathematical Society, Providence, 1995), vi + 104 pp.

  4. P. Cannarsa, C. Sinestrari, Semiconcave Functions, Hamilton-Jacobi Equations, and Optimal Control. Progress in Nonlinear Differential Equations and Their Applications, vol. 58 (Birkhäuser, Boston, 2004), xiv + 304 pp.

  5. M.G. Crandall, P.-L. Lions, Condition d'unicité pour les solutions généralisées des équations de Hamilton-Jacobi du premier ordre. C. R. Acad. Sci. Paris Sér. I Math. 292(3), 183–186 (1981)

  6. M.G. Crandall, H. Ishii, P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations. Bull. Am. Math. Soc. (N.S.) 27(1), 1–67 (1992)

  7. G.C. Dong, Nonlinear Partial Differential Equations of Second Order. Translated from the Chinese by Kai Seng Chou [Kaising Tso]. Translations of Mathematical Monographs, vol. 95 (American Mathematical Society, Providence, 1991), viii + 251 pp.

  8. L.C. Evans, R.F. Gariepy, Measure Theory and Fine Properties of Functions. Studies in Advanced Mathematics (CRC Press, Boca Raton, 1992), viii + 268 pp.

  9. D. Gilbarg, N.S. Trudinger, Elliptic Partial Differential Equations of Second Order, Reprint of the 1998 edn. Classics in Mathematics (Springer, Berlin, 2001), xiv + 517 pp.

  10. J.-B. Hiriart-Urruty, C. Lemaréchal, Fundamentals of Convex Analysis. Abridged version of Convex Analysis and Minimization Algorithms I and II (Springer, Berlin, 1993). Grundlehren Text Editions (Springer, Berlin, 2001), x + 259 pp.

  11. C. Imbert, Convexity of solutions and \(C^{1,1}\) estimates for fully nonlinear elliptic equations. J. Math. Pures Appl. (9) 85(6), 791–807 (2006)

  12. H. Ishii, Perron's method for Hamilton-Jacobi equations. Duke Math. J. 55(2), 369–384 (1987)

  13. H. Ishii, On uniqueness and existence of viscosity solutions of fully nonlinear second-order elliptic PDEs. Comm. Pure Appl. Math. 42(1), 15–45 (1989)

  14. H. Ishii, P.-L. Lions, Viscosity solutions of fully nonlinear second-order elliptic partial differential equations. J. Differ. Equat. 83(1), 26–78 (1990)

  15. R. Jensen, The maximum principle for viscosity solutions of fully nonlinear second order partial differential equations. Arch. Ration. Mech. Anal. 101(1), 1–27 (1988)

  16. N.V. Krylov, Sequences of convex functions, and estimates of the maximum of the solution of a parabolic equation. Sibirsk. Mat. Z. 17(2), 290–303, 478 (1976)

  17. N.V. Krylov, Nonlinear Elliptic and Parabolic Equations of the Second Order. Translated from the Russian by P.L. Buzytsky. Mathematics and Its Applications (Soviet Series), vol. 7 (D. Reidel Publishing Co., Dordrecht, 1987), xiv + 462 pp.

  18. N.V. Krylov, Lectures on Elliptic and Parabolic Equations in Hölder Spaces. Graduate Studies in Mathematics, vol. 12 (American Mathematical Society, Providence, 1996), xii + 164 pp.

  19. N.V. Krylov, Fully nonlinear second order elliptic equations: recent development. Dedicated to Ennio De Giorgi. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 25(3–4), 569–595 (1997)

  20. O.A. Ladyzenskaja, V.A. Solonnikov, N.N. Uralceva, Linear and Quasilinear Equations of Parabolic Type. Translated from the Russian by S. Smith. Translations of Mathematical Monographs, vol. 23 (American Mathematical Society, Providence, 1967), xi + 648 pp.

  21. G.M. Lieberman, Second Order Parabolic Differential Equations (World Scientific, River Edge, 1996)

  22. P.-L. Lions, Optimal control of diffusion processes and Hamilton-Jacobi-Bellman equations, II. Viscosity solutions and uniqueness. Comm. Partial Differ. Equat. 8(11), 1229–1276 (1983)

  23. M.V. Safonov, The classical solution of the elliptic Bellman equation (Russian). Dokl. Akad. Nauk SSSR 278(4), 810–813 (1984)

  24. K. Tso, On an Aleksandrov-Bakelman type maximum principle for second-order parabolic equations. Comm. Partial Differ. Equat. 10(5), 543–553 (1985)

  25. L. Wang, On the regularity theory of fully nonlinear parabolic equations, I. Comm. Pure Appl. Math. 45(1), 27–76 (1992)



Appendix: Technical Lemmas


A.1 Lebesgue's Differentiation Theorem

The purpose of this appendix is to prove a version of Lebesgue's differentiation theorem with parabolic cylinders. Recall that the usual version of the result says that if \(f \in L^{1}(\Omega, dt \otimes dx)\), where \(\Omega\) is a Borel set of \(\mathbb{R}^{d+1}\), then for a.e. \((t,x) \in \Omega\),

$$\lim_{j\rightarrow\infty} \fint_{G_{j}} \vert f - f(t,x)\vert = 0$$

as long as the sequence of sets \(G_{j}\) satisfies the regularity condition

$$G_{j} \subset B_{j}, \qquad \vert G_{j}\vert \geq c\,\vert B_{j}\vert,$$

where \(B_{j}\) is a sequence of balls \(B_{r_{j}}(t,x)\) with \(r_{j} \rightarrow 0\).

A sequence of parabolic cylinders \(Q_{r_{j}}(t,x)\) cannot satisfy the regularity condition because of the different scaling between space and time. Indeed, \(\vert Q_{r_{j}}(t,x)\vert = r_{j}^{d+2}\), which is an order of magnitude smaller than \(r_{j}^{d+1}\).
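To make the scaling mismatch explicit (keeping track of the dimensional constants this time), write \(\omega_{k}\) for the volume of the unit ball of \(\mathbb{R}^{k}\); then

$$\frac{\vert Q_{r_{j}}(t,x)\vert}{\vert B_{r_{j}}(t,x)\vert} = \frac{\omega_{d}\, r_{j}^{d+2}}{\omega_{d+1}\, r_{j}^{d+1}} = \frac{\omega_{d}}{\omega_{d+1}}\, r_{j} \longrightarrow 0 \quad \text{as } j \rightarrow \infty,$$

so no constant \(c > 0\) can work in the regularity condition with \(G_{j} = Q_{r_{j}}(t,x)\).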

Fortunately, the classical proof of Lebesgue's differentiation theorem can be repeated for parabolic cylinders, as is shown below.

Theorem 2.5.1 (Lebesgue’s differentiation theorem). 

Consider an integrable function \(f \in L^{1}(\Omega, dt \otimes dx)\) where \(\Omega\) is an open set of \(\mathbb{R}^{d+1}\). Then for a.e. \((t,x) \in \Omega\),

$$\lim_{r\rightarrow 0^{+}} \fint_{(t-r^{2},t)\times B_{r}(x)} \vert f - f(t,x)\vert = 0$$

where \(\fint_{O} g = \frac{1}{\vert O\vert}\int_{O} g\) for any Borel set \(O \subset \mathbb{R}^{d+1}\) and integrable function g.

In the proof, we will in fact use the following corollary.

Corollary 2.5.2 (Generalized Lebesgue’s differentiation theorem). 

Let \((G_{j})_{j}\) be a family of sets which is regular in the following sense: there exist a constant \(c > 0\) and \(r_{j} \rightarrow 0\) such that

$$G_{j} \subset (t - r_{j}^{2},t) \times B_{r_{j}}(x), \qquad \vert G_{j}\vert \geq c\, r_{j}^{d+2}.$$

Then, except for a set of measure zero which is independent of the choice of \(\{G_{j}\}\), we have

$$\lim_{j\rightarrow+\infty} \fint_{G_{j}} \vert f - f(t,x)\vert = 0.$$

Remark 2.5.3.

It is interesting to point out that if the parabolic cylinders were replaced by other families of sets not satisfying the regularity condition, the result of Lemma 2.5.5 may fail. For example, if we take

$$\tilde{M}f(t,x) = \sup_{(a,b)\times B_{r}(y) \ni (t,x)} \fint_{((a,b)\times B_{r}(y))\cap\Omega} \vert f\vert,$$

then Lemma 2.5.5 would fail for \(\tilde{M}f\).

Proof of Corollary 2.5.2.

We obtain Corollary 2.5.2 as an immediate consequence of Theorem 2.5.1 by noting that, since \(G_{j} \subset (t - r_{j}^{2},t) \times B_{r_{j}}(x)\),

$$\fint_{G_{j}}\vert f - f(t,x)\vert \leq \frac{r_{j}^{2}\vert B_{r_{j}}\vert}{\vert G_{j}\vert}\, \fint_{(t-r_{j}^{2},t)\times B_{r_{j}}(x)}\vert f - f(t,x)\vert.$$

Thus, the result holds at every point where the right hand side goes to zero, which is a set of full measure by Theorem 2.5.1, since the prefactor \(\frac{r_{j}^{2}\vert B_{r_{j}}\vert}{\vert G_{j}\vert}\) is bounded by a constant depending only on c and d. □
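Spelled out, the regularity assumption \(\vert G_{j}\vert \geq c\, r_{j}^{d+2}\) controls the prefactor:

$$\frac{r_{j}^{2}\vert B_{r_{j}}\vert}{\vert G_{j}\vert} \leq \frac{\omega_{d}\, r_{j}^{d+2}}{c\, r_{j}^{d+2}} = \frac{\omega_{d}}{c},$$

where \(\omega_{d}\) denotes the volume of the unit ball of \(\mathbb{R}^{d}\).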

In order to prove Theorem 2.5.1, we first need a version of Vitali’s covering lemma.

Lemma 2.5.4 (Vitali’s covering lemma). 

Consider a bounded collection of cubes \((Q_{\alpha})_{\alpha}\) of the form \(Q_{\alpha} = (t_{\alpha} - r_{\alpha}^{2},t_{\alpha}) \times B_{r_{\alpha}}(x_{\alpha})\) and a set A such that \(A \subset \cup_{\alpha}Q_{\alpha}\). Then there is a finite number of disjoint cubes \(Q_{1},\ldots,Q_{N}\) among them such that \(A \subset \cup_{j=1}^{N}5Q_{j}\), where \(5Q_{j} = (t_{j} - 25 r_{j}^{2},t_{j}) \times B_{5r_{j}}(x_{j})\).

Consider next the maximal function Mf associated with a function \(f \in L^{1}(\Omega, dt \otimes dx)\):

$$Mf(t,x) = \sup_{Q \ni (t,x)} \fint_{Q\cap\Omega}\vert f\vert$$

where the supremum is taken over cubes Q of the form \((s,y) + (-r^{2},0) \times B_{r}\).

Lemma 2.5.5 (The maximal inequality). 

Consider \(f \in L^{1}(\Omega, dt \otimes dx)\) with f positive, and let \(\lambda > 0\). Then

$$\vert\{Mf > \lambda\}\vert \leq \frac{C}{\lambda}\,\Vert f\Vert_{L^{1}}$$

for some constant C depending only on the dimension d.

Proof.

For all \((t,x) \in \{Mf > \lambda\}\), there exists a cube \(Q \ni (t,x)\) such that

$$\int_{Q\cap\Omega} f \geq \frac{\lambda}{2}\,\vert Q\cap\Omega\vert.$$

Hence, the set \(\{Mf > \lambda\}\) can be covered by such cubes Q. From Vitali's covering lemma, there exists a finite cover of \(\{Mf > \lambda\}\) by some of the dilated cubes \(5Q_{j}\):

$$\{Mf > \lambda\} \subset \cup_{j=1}^{N}5Q_{j}$$

with the \(Q_{j}\) disjoint and such that

$$\int_{Q_{j}\cap\Omega} f \geq \frac{\lambda}{2}\,\vert Q_{j}\cap\Omega\vert.$$

Hence

$$\int_{\Omega} f \;\geq\; \int_{\cup_{j}Q_{j}\cap\Omega} f \;=\; \sum_{j}\int_{Q_{j}\cap\Omega} f \;\geq\; \frac{\lambda}{2}\,\big\vert\cup_{j}Q_{j}\cap\Omega\big\vert \;=\; \frac{\lambda}{2}\cdot\frac{1}{5^{d+2}}\,\big\vert\cup_{j}5Q_{j}\cap\Omega\big\vert \;\geq\; \frac{\lambda}{C}\,\vert\{Mf > \lambda\}\vert$$

with \(C = 2 \times 5^{d+2}\). □
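For the record, the factor \(5^{d+2}\) simply reflects the parabolic scaling of the dilated cylinders: writing \(Q_{j} = (t_{j}-r_{j}^{2},t_{j})\times B_{r_{j}}(x_{j})\),

$$\vert 5Q_{j}\vert = 25\, r_{j}^{2} \cdot \vert B_{5r_{j}}\vert = 25\, r_{j}^{2}\cdot 5^{d}\,\vert B_{r_{j}}\vert = 5^{d+2}\,\vert Q_{j}\vert.$$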

We can now prove Lebesgue’s differentiation theorem (Theorem 2.5.1).

Proof of Theorem 2.5.1.

We can assume without loss of generality that the set \(\Omega\) is bounded. We first remark that the result is true if f is continuous. If f is not continuous, we consider a sequence \((f_{n})_{n}\) of continuous functions such that

$$\Vert f - f_{n}\Vert_{L^{1}} \leq \frac{C}{2^{n}}.$$

Moreover, up to a subsequence, we can also assume that for a.e. \((t,x) \in \Omega \),

$$\displaystyle{f_{n}(t,x) \rightarrow f(t,x)\quad \text{ as }n \rightarrow \infty.}$$

Thanks to the maximal inequality (Lemma 2.5.5), we have in particular

$$\displaystyle{\vert \{M(f - f_{n}) >\lambda \} \vert \leq \frac{C} {\lambda {2}^{n}}.}$$

By the Borel–Cantelli lemma, we conclude that for all \(\lambda > 0\) and a.e. \((t,x) \in \Omega\), there exists \(n_{\lambda} \in \mathbb{N}\) (depending on \((t,x)\)) such that for all \(n \geq n_{\lambda}\),

$$M(f - f_{n})(t,x) \leq \lambda.$$
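In more detail, the Borel–Cantelli step rests on the summability of the measures above:

$$\sum_{n}\vert\{M(f - f_{n}) > \lambda\}\vert \leq \sum_{n}\frac{C}{\lambda\, 2^{n}} < \infty,$$

so almost every \((t,x) \in \Omega\) belongs to at most finitely many of the sets \(\{M(f - f_{n}) > \lambda\}\).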

We conclude that for a.e. \((t,x) \in \Omega\) there exists a strictly increasing sequence \((n_{k})_{k}\) such that for all \(k \in \mathbb{N}\) and all \(r > 0\) with \(Q_{r}(t,x) \subset \Omega\),

$$\fint_{Q_{r}(t,x)}\vert f - f_{n_{k}}\vert \leq M(f - f_{n_{k}})(t,x) \leq \frac{1}{k}.$$

Moreover, since \(f_{n_{k}}\) is continuous and \(\Omega\) is bounded, there exists \(r_{k} > 0\) such that for all \(r \in (0,r_{k})\) we have

$$\fint_{Q_{r}(t,x)}\vert f_{n_{k}} - f_{n_{k}}(t,x)\vert \leq \frac{1}{k}.$$

Moreover, for a.e. \((t,x) \in \Omega \),

$$\displaystyle{\vert f_{n_{k}}(t,x) - f(t,x)\vert \rightarrow 0\quad \text{ as }k \rightarrow \infty.}$$

These three facts imply that for a.e. \((t,x) \in \Omega\) and all \(\varepsilon > 0\), there exists \(r_{\varepsilon} > 0\) such that for all \(r \in (0,r_{\varepsilon})\),

$$\fint_{Q_{r}(t,x)}\vert f - f(t,x)\vert \leq \varepsilon.$$
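Indeed, for such a point the three facts combine through the triangle inequality: for k large and r small enough,

$$\fint_{Q_{r}(t,x)}\vert f - f(t,x)\vert \leq \fint_{Q_{r}(t,x)}\vert f - f_{n_{k}}\vert + \fint_{Q_{r}(t,x)}\vert f_{n_{k}} - f_{n_{k}}(t,x)\vert + \vert f_{n_{k}}(t,x) - f(t,x)\vert \leq \frac{2}{k} + \vert f_{n_{k}}(t,x) - f(t,x)\vert,$$

and the last term tends to 0 as \(k \rightarrow \infty\).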

This completes the proof of the theorem. □

A.2 Jensen–Ishii's Lemma for N Functions

When proving Theorem 2.4.9 (more precisely, Lemma 2.4.6), we used the following generalization of Lemmas 2.3.23 and 2.3.30 whose proof can be found in [CIL92].

Lemma 2.5.6 (Jensen–Ishii’s Lemma III). 

Let \(U_{i}\), \(i = 1,\ldots,N\), be open sets of \(\mathbb{R}^{d}\) and let I be an open interval of \(\mathbb{R}\). Consider also lower semi-continuous functions \(u_{i}: I \times U_{i} \rightarrow \mathbb{R}\) such that for all \(v = u_{i}\), \(i = 1,\ldots,N\), and all \((t,x) \in I \times U_{i}\), there exists \(r > 0\) such that for all \(M > 0\) there exists \(C > 0\) with

$$\left.\begin{array}{r} (s,y) \in Q_{r}(t,x) \\ (\beta,q,Y) \in \mathcal{P}^{-}v(s,y) \\ \vert v(s,y)\vert + \vert q\vert + \vert Y\vert \leq M \end{array}\right\} \;\Rightarrow\; -\beta \leq C.$$

Let \(x = (x_{1},\ldots,x_{N})\) and \(x_{0} = (x_{1}^{0},\ldots,x_{N}^{0})\). Assume that \(\sum_{i=1}^{N}u_{i}(t,x_{i}) - \phi(t,x)\) reaches a local minimum at \((t_{0},x_{0}) \in I \times \Pi_{i}U_{i}\). If \(\alpha\) denotes \(\partial_{t}\phi(t_{0},x_{0})\), \(p_{i}\) denotes \(D_{x_{i}}\phi(t_{0},x_{0})\) and A denotes \(D^{2}\phi(t_{0},x_{0})\), then for any \(\beta > 0\) such that \(I + \beta A > 0\), there exist \((\alpha_{i},X_{i}) \in \mathbb{R} \times \mathbb{S}_{d}\), \(i = 1,\ldots,N\), such that for all \(i = 1,\ldots,N\),

$$(\alpha_{i},p_{i},X_{i}) \in \overline{\mathcal{P}}^{-}u_{i}(t_{0},x_{i}^{0}), \qquad \sum_{i=1}^{N}\alpha_{i} = \alpha$$

and

$$\frac{1}{\beta}\left(\begin{array}{cccc} I & 0 & \ldots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \ldots & 0 & I \end{array}\right) \;\geq\; \left(\begin{array}{cccc} X_{1} & 0 & \ldots & 0 \\ 0 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \ldots & 0 & X_{N} \end{array}\right) \;\geq\; A_{\beta}$$

where \(A_{\beta} = (I + \beta A)^{-1}A\).

Remark 2.5.7.

The condition on the functions \(u_{i}\) is satisfied as soon as the \(u_{i}\)'s are supersolutions of a parabolic equation. This condition ensures that some compactness holds true when using the doubling variable technique in the time variable. See [CIL92, Theorem 8.2, p. 50] for more details.
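As an illustration, consider the model equation \(\partial_{t}u + F(Du,D^{2}u) = 0\) with F continuous; the exact form of the equation treated in the main text may differ, and the sign convention assumed here is that a lower semi-continuous supersolution satisfies \(\beta + F(q,Y) \geq 0\) whenever \((\beta,q,Y) \in \mathcal{P}^{-}u(s,y)\). Under this convention,

$$-\beta \leq F(q,Y) \leq \sup\big\{F(q',Y') : \vert q'\vert + \vert Y'\vert \leq M\big\} =: C < \infty,$$

which is precisely the kind of bound required on the \(u_{i}\)'s in Lemma 2.5.6.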

A.3 Technical Lemmas for Monotone Envelopes

When proving the maximum principle (Theorem 2.4.9), we used the following two technical lemmas.

Lemma 2.5.8.

Consider a convex set \(\Omega\) of \(\mathbb{R}^{d}\) and a lower semi-continuous function \(v: [a,b] \times \bar{\Omega} \rightarrow \mathbb{R}\) which is non-increasing with respect to \(t \in (a,b)\) and convex with respect to \(x \in \Omega\). Assume that v is bounded from above and that for all \((\alpha,p,X) \in \mathcal{P}^{-}v(t,x)\), we have

$$-\alpha \leq C \quad\text{ and }\quad X \leq CI.$$

Then v is Lipschitz continuous with respect to \(t \in (a,b)\) and \(C^{1,1}\) with respect to \(x \in \Omega\).

Proof of Lemma 2.5.8.

We assume without loss of generality that \(\Omega\) is bounded. In this case, v is bounded from above and from below, hence bounded. Next, we also get that v is Lipschitz continuous with respect to x in \([a,b] \times F\) for every closed convex set \(F \subset \Omega\) such that \(d(F,\partial\Omega) > 0\).

Step 1.

We first prove that v is Lipschitz continuous with respect to t: for all \((t_{0},x_{0}) \in (a,b) \times \Omega\),

$$M = \sup_{s,t\in(a,b),\; x,y\in\Omega}\Big\{ v(t,x) - v(s,y) - L\vert t - s\vert - \frac{L}{4\varepsilon}\vert x - y\vert^{2} - L\varepsilon - L_{0}\vert x - x_{0}\vert^{2} - L_{0}(t - t_{0})^{2}\Big\} \leq 0$$

for L large enough, depending only on C and on the Lipschitz constant of v with respect to x around \((t_{0},x_{0})\), and for \(L_{0}\) large enough. We argue by contradiction, assuming that \(M > 0\). Consider \((\bar{s},\bar{t},\bar{x},\bar{y})\) where the maximum M is reached. Remark first that

$$L_{0}\vert\bar{y} - x_{0}\vert^{2} + L_{0}(\bar{s} - t_{0})^{2} + L\vert\bar{t} - \bar{s}\vert + \frac{L}{4\varepsilon}\vert\bar{x} - \bar{y}\vert^{2} + L\varepsilon \;\leq\; v(\bar{t},\bar{x}) - v(\bar{s},\bar{y}) \;\leq\; 2\vert v\vert_{0,[a,b]\times\bar{\Omega}}.$$

In particular, we can choose \(L_{0}\) and L large enough so that \((\bar{s},\bar{y}),(\bar{t},\bar{x}) \in (a,b) \times \Omega\). Remark next that \(\bar{t} \neq \bar{s}\). Indeed, if \(\bar{t} = \bar{s}\), then

$$0 < M \leq v(\bar{t},\bar{x}) - v(\bar{t},\bar{y}) - \frac{L}{4\varepsilon}\vert\bar{x} - \bar{y}\vert^{2} - L\varepsilon$$

and choosing L larger than the Lipschitz constant of v with respect to x yields a contradiction. Hence the function v is touched from below at \((\bar{s},\bar{y})\) by the test function

$$(s,y) \mapsto C_{0} - \frac{L}{4\varepsilon}\vert\bar{x} - y\vert^{2} - L\vert\bar{t} - s\vert$$

where \(C_{0}\) is a constant depending on \((\bar{t},\bar{x})\). In particular,

$$\big(L\,\mathrm{sign}(\bar{t} - \bar{s}),\; L(4\varepsilon)^{-1}(\bar{x} - \bar{y}),\; L(4\varepsilon)^{-1}I\big) \in \mathcal{P}^{-}v(\bar{s},\bar{y}).$$

We should thus have \(L \leq C\), so choosing \(L > C\) also yields the desired contradiction.

Step 2.

In order to prove that for all \(t \in (a,b)\), \(v(t,\cdot)\) is \(C^{1,1}\) with respect to x, it is enough to prove that \(X \leq CI\) for all \((p,X) \in D^{2,-}v(t,x)\) (see below). Indeed, this implies that \(v(t,\cdot) - \frac{C}{2}\vert\cdot\vert^{2}\) is concave [ALL97]. Since \(v(t,\cdot)\) is convex, this implies that it is \(C^{1,1}\) [CanSin04].
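To spell out the last implication: convexity of \(v(t,\cdot)\) gives the lower bound, and concavity of \(v(t,\cdot) - \frac{C}{2}\vert\cdot\vert^{2}\) gives the upper bound, in the two-sided second-difference estimate

$$0 \leq v(t,x+h) + v(t,x-h) - 2v(t,x) \leq C\vert h\vert^{2},$$

and a convex function satisfying such a bound has a gradient which is Lipschitz continuous with constant C; this is the characterization of \(C^{1,1}\) regularity used here.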

\((p,X) \in D^{2,-}v(t,x)\) means that there exists \(\psi \in C^{2}(\mathbb{R}^{d})\) such that \(p = D\psi(x)\), \(X = D^{2}\psi(x)\) and

$$\psi(y) - \psi(x) \leq v(t,y) - v(t,x)$$

for \(y \in B_{r}(x)\). We can further assume that the minimum of \(v(t,\cdot) - \psi\) is strict. We then consider the minimum of \(v(s,y) - \psi(y) + \varepsilon^{-1}(s-t)^{2}\) over \((s,y) \in (t-r,t+r) \times B_{r}(x)\). For \(\varepsilon\) small enough, this minimum is reached at an interior point \((t_{\varepsilon},x_{\varepsilon})\), and \((t_{\varepsilon},x_{\varepsilon}) \rightarrow (t,x)\) as \(\varepsilon \rightarrow 0\). Then

$$\big(-2\varepsilon^{-1}(t_{\varepsilon} - t),\; D\psi(x_{\varepsilon}),\; D^{2}\psi(x_{\varepsilon})\big) \in \mathcal{P}^{-}v(t_{\varepsilon},x_{\varepsilon}).$$

Hence \(D^{2}\psi(x_{\varepsilon}) \leq CI\). Letting \(\varepsilon \rightarrow 0\) yields \(X \leq CI\). This completes Step 2.

The proof of the lemma is now complete. □ 

Lemma 2.5.9.

Consider a convex set \(\Omega\) of \(\mathbb{R}^{d}\) and \(v: (a,b) \times \Omega \rightarrow \mathbb{R}\) which is non-increasing with respect to \(t \in (a,b)\) and convex with respect to \(x \in \Omega\). Then for all \((\alpha,p,X) \in \mathcal{P}^{-}v(t,x)\), there exists \((\alpha_{n},p_{n},X_{n})\) such that

$$(\alpha_{n},p_{n},X_{n}) \in \mathcal{P}^{-}v(t_{n},x_{n}), \qquad (t_{n},x_{n},\alpha_{n},p_{n}) \rightarrow (t,x,\alpha,p), \qquad X \leq X_{n} + o_{n}(1), \quad X_{n} \geq 0.$$

The proof of this lemma relies on Alexandroff's theorem in its classical form. A statement and a proof of this classical theorem can be found for instance in [EG92]. We will only use the following consequence of it.

Theorem 2.5.10.

Consider a convex set \(\Omega\) of \(\mathbb{R}^{d}\) and a function \(v: (a,b) \times \Omega \rightarrow \mathbb{R}\) which is convex with respect to \((t,x) \in (a,b) \times \Omega\). Then for almost every \((t,x) \in (a,b) \times \Omega\), there exists \((\alpha,p,X) \in \mathcal{P}^{-}v(t,x) \cap \mathcal{P}^{+}v(t,x)\), that is to say,

$$v(s,y) = v(t,x) + \alpha(s-t) + p\cdot(y-x) + \tfrac{1}{2}X(y-x)\cdot(y-x) + o\big(\vert s-t\vert + \vert y-x\vert^{2}\big). \qquad (2.62)$$

Jensen’s lemma is also needed (stated here in a “parabolic” version for the sake of clarity).

Lemma 2.5.11 (Jensen). 

Consider a convex set \(\Omega\) of \(\mathbb{R}^{d}\) and a function \(u: (a,b) \times \Omega \rightarrow \mathbb{R}\) such that there exists \((\tau_{0},C) \in \mathbb{R}^{2}\) for which \(u(t,x) + \tau_{0}t^{2} + C\vert x\vert^{2}\) is convex with respect to \((t,x) \in (a,b) \times \Omega\). If u reaches a strict local maximum at \((t_{0},x_{0})\), then for \(r > 0\) and \(\delta > 0\) small enough, the set

$$K = \big\{(t,x) \in (t_{0}-r,t_{0}+r) \times B_{r}(x_{0}) :\ \exists(\tau,p) \in (-\delta,\delta) \times B_{\delta} \text{ such that } (s,y) \mapsto u(s,y) - \tau s - p\cdot y \text{ reaches a local maximum at }(t,x)\big\}$$

has positive measure.

See [CIL92] for a proof. We can now turn to the proof of Lemma 2.5.9; it mimics the proof of [ALL97, Lemma 3], in which there is no time dependence.

Proof of Lemma 2.5.9.

Consider a test function \(\phi\) such that \(v - \phi\) reaches a local minimum at \((t,x)\) and

$$(\alpha,p,X) = (\partial_{t}\phi, D\phi, D^{2}\phi)(t,x).$$

Without loss of generality, we can assume that this minimum is strict; indeed, replace \(\phi\) with \(\phi(s,y) - \vert y - x\vert^{2} - (s-t)^{2}\) for instance. Then consider the function

$$v_{\varepsilon}(t,x) = \inf_{y\in\mathbb{R}^{d},\, s\geq 0}\left\{ v(s,y) + \frac{1}{\varepsilon}\vert y - x\vert^{2} + \frac{1}{\varepsilon}(s - t)^{2}\right\}.$$

One can check that \(v_{\varepsilon }\) is still convex with respect to x and non-increasing with respect to t and that

$$(t,x) \mapsto v_{\varepsilon}(t,x) - \frac{1}{\varepsilon}\vert x\vert^{2} - \frac{1}{\varepsilon}t^{2}$$

is concave with respect to (t, x). Moreover, \(v_{\varepsilon } \leq v\) and

$$\displaystyle{\lim _{\varepsilon \rightarrow 0}v_{\varepsilon }(t,x) = v(t,x).}$$
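The concavity assertion can be checked directly by expanding the squares in the definition of \(v_{\varepsilon}\):

$$v_{\varepsilon}(t,x) - \frac{1}{\varepsilon}\vert x\vert^{2} - \frac{1}{\varepsilon}t^{2} = \inf_{y\in\mathbb{R}^{d},\, s\geq 0}\left\{ v(s,y) + \frac{1}{\varepsilon}\vert y\vert^{2} + \frac{1}{\varepsilon}s^{2} - \frac{2}{\varepsilon}\,x\cdot y - \frac{2}{\varepsilon}\,ts\right\},$$

an infimum of functions that are affine in \((t,x)\), hence a concave function of \((t,x)\).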

Since \(v_{\varepsilon} \leq v\), \(v_{\varepsilon}(t,x) \rightarrow v(t,x)\) and the minimum of \(v - \phi\) at \((t,x)\) is strict, there exist \((t_{\varepsilon},x_{\varepsilon}) \rightarrow (t,x)\) as \(\varepsilon \rightarrow 0\) such that \(v_{\varepsilon} - \phi\) reaches a local minimum at \((t_{\varepsilon},x_{\varepsilon})\). Remarking that \(v_{\varepsilon} - \phi\) satisfies the assumptions of Jensen's lemma (Lemma 2.5.11 above), we combine it with Theorem 2.5.10 and conclude that we can find slopes \((\tau_{n},p_{n}) \rightarrow (0,0)\) and points \((t_{n},x_{n}) \rightarrow (t_{\varepsilon},x_{\varepsilon})\) as \(n \rightarrow \infty\) where \(v_{\varepsilon} - \phi\) satisfies (2.62) and \(v_{\varepsilon} - \phi - \tau_{n}s - p_{n}\cdot y\) reaches a local minimum at \((t_{n},x_{n})\). In other words,

$$\displaystyle{(\tau _{n} + \partial _{t}\phi (t_{n},x_{n}),p_{n} + D\phi (t_{n},x_{n}),{D}^{2}v_{\varepsilon }(t_{ n},x_{n})) \in {\mathcal{P}}^{-}v_{\varepsilon }(t_{ n},x_{n})}$$

with

$$\displaystyle{{D}^{2}v_{\varepsilon }(t_{ n},x_{n}) \geq 0}$$

and

$$\displaystyle{{D}^{2}\phi (t_{ n},x_{n}) \leq {D}^{2}v_{\varepsilon }(t_{ n},x_{n}).}$$

In order to conclude, we use the following classical result from viscosity solution theory (see [CIL92] for a proof):

Lemma 2.5.12.

Consider (s n ,y n ) such that

$$v_{\varepsilon}(t_{n},x_{n}) = v(s_{n},y_{n}) + \varepsilon^{-1}\vert y_{n} - x_{n}\vert^{2} + \varepsilon^{-1}(t_{n} - s_{n})^{2}.$$

Then

$$\vert y_{n} - x_{n}\vert^{2} + (t_{n} - s_{n})^{2} \leq \varepsilon\,\vert v^{+}\vert_{0,(a,b)\times\Omega}$$

and

$$\mathcal{P}^{-}v_{\varepsilon}(t_{n},x_{n}) \subset \mathcal{P}^{-}v(s_{n},y_{n}).$$

We used in the previous lemma that v is bounded from above since \(\Omega \) is bounded. Putting all the previous pieces of information together yields the desired result. □ 

A.4 An Elementary Iteration Lemma

The following lemma is classical, see for instance [GT01, Lemma 8.23].

Lemma 2.5.13.

Consider a non-decreasing function \(h: (0,1) \rightarrow \mathbb{R}^{+}\) such that for all \(\rho \in (0,1)\),

$$h(\gamma\rho) \leq \delta\, h(\rho) + C_{0}\,\rho^{\beta}$$

for some \(\delta,\gamma,\beta \in (0,1)\). Then for all \(\rho \in (0,1)\),

$$h(\rho) \leq C_{\alpha}\,\rho^{\alpha}$$

with \(\alpha = \frac{1}{2}\min\big(\frac{\ln\delta}{\ln\gamma},\beta\big) \in (0,1)\).

Proof.

Consider \(k \in \mathbb{N}\), \(k \geq 1\), and get by induction that for all \(\rho_{0},\rho_{1} \in (0,1)\) with \(\rho_{1} \leq \rho_{0}\),

$$h(\gamma^{k}\rho_{1}) \leq \delta^{k}h(\rho_{1}) + C_{0}\,\rho_{1}^{\beta}\sum_{j=0}^{k-1}\gamma^{\beta j}.$$

Then write

$$h(\gamma^{k}\rho_{1}) \;\leq\; \delta^{k}h(\rho_{0}) + C_{0}\,\frac{\rho_{1}^{\beta}}{1-\gamma^{\beta}} \;\leq\; (\gamma^{k})^{\tilde{\beta}}\,h(\rho_{0}) + C_{0}\,\frac{\rho_{1}^{\beta}}{1-\gamma^{\beta}} \;\leq\; (\gamma^{k})^{2\alpha}h(\rho_{0}) + C_{0}\,\frac{\rho_{1}^{2\alpha}}{1-\gamma^{\beta}}$$

where \(\tilde{\beta} = \frac{\ln\delta}{\ln\gamma}\). Now pick \(\rho \in [\gamma^{k+1}\rho_{1},\gamma^{k}\rho_{1})\), choose \(\rho_{1} = \sqrt{\rho_{0}\rho}\), and get from the previous inequality the desired result for \(\rho \in (0,\rho_{0})\). Choose next \(\rho_{0} = \frac{1}{2}\) and conclude for \(\rho \in (0,1)\). □
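For completeness, the last step can be spelled out: with \(\rho_{1} = \sqrt{\rho_{0}\rho}\) one has \(\rho/\rho_{1} = \sqrt{\rho/\rho_{0}}\) and \(\gamma^{k} \leq \rho/(\gamma\rho_{1})\), so the monotonicity of h and the previous chain give

$$h(\rho) \leq h(\gamma^{k}\rho_{1}) \leq (\gamma^{k})^{2\alpha}h(\rho_{0}) + C_{0}\,\frac{\rho_{1}^{2\alpha}}{1-\gamma^{\beta}} \leq \left(\gamma^{-2\alpha}\rho_{0}^{-\alpha}h(\rho_{0}) + \frac{C_{0}}{1-\gamma^{\beta}}\right)\rho^{\alpha},$$

which is the desired bound with \(C_{\alpha} = \gamma^{-2\alpha}\rho_{0}^{-\alpha}h(\rho_{0}) + C_{0}/(1-\gamma^{\beta})\) (and \(\rho_{0} = \frac{1}{2}\) in the end).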


Copyright information

© 2013 Springer International Publishing Switzerland


Cite this chapter

Imbert, C., Silvestre, L. (2013). An Introduction to Fully Nonlinear Parabolic Equations. In: Boucksom, S., Eyssidieux, P., Guedj, V. (eds) An Introduction to the Kähler-Ricci Flow. Lecture Notes in Mathematics, vol 2086. Springer, Cham. https://doi.org/10.1007/978-3-319-00819-6_2
