
Two-by-two upper triangular matrices and Morrey’s conjecture

  • Published in: Calculus of Variations and Partial Differential Equations

Abstract

It is shown that every homogeneous gradient Young measure supported on matrices of the form \(\begin{pmatrix} a_{1,1} & \cdots & a_{1,n-1} & a_{1,n} \\ 0 & \cdots & 0 & a_{2,n} \end{pmatrix}\) is a laminate. This is used to prove the same result on the 3-dimensional nonlinear submanifold of \(\mathbb {M}^{2 \times 2}\) defined by \(\det X = 0\) and \(X_{12} > 0\).


References

  1. Ambrosio, L., Fusco, N., Pallara, D.: Functions of Bounded Variation and Free Discontinuity Problems. The Clarendon Press, Oxford University Press, New York (2000)


  2. Chaudhuri, N., Müller, S.: Rank-one convexity implies quasi-convexity on certain hypersurfaces. Proc. R. Soc. Edinb. Sect. A 133, 1263–1272 (2003)


  3. Conti, S., Faraco, D., Maggi, F., Müller, S.: Rank-one convex functions on \(2 \times 2\) symmetric matrices and laminates on rank-three lines. Calc. Var. Partial Differ. Equ. 24, 479–493 (2005)


  4. Evans, L.C., Gariepy, R.F.: On the partial regularity of energy-minimizing, area-preserving maps. Calc. Var. Partial Differ. Equ. 9, 357–372 (1999)


  5. Faraco, D., Székelyhidi Jr., L.: Tartar’s conjecture and localization of the quasiconvex hull in \(\mathbb{R}^{2 \times 2}\). Acta Math. 200, 279–305 (2008)


  6. Kinderlehrer, D., Pedregal, P.: Characterization of Young measures generated by gradients. Arch. Ration. Mech. Anal. 115, 329–365 (1991)


  7. Kirchheim, B.: Rigidity and Geometry of Microstructures. Habilitation thesis, University of Leipzig (2003)

  8. Lee, J., Müller, P.F.X., Müller, S.: Compensated compactness, separately convex functions and interpolatory estimates between Riesz transforms and Haar projections. Commun. Partial Differ. Equ. 36, 547–601 (2011)


  9. Martin, R.J., Ghiba, I.-D., Neff, P.: Rank-one convexity implies polyconvexity for isotropic, objective and isochoric elastic energies in the two-dimensional case. Proc. R. Soc. Edinb. Sect. A 147, 571–597 (2017)


  10. Matoušek, J., Plecháč, P.: On functional separately convex hulls. Discrete Comput. Geom. 19, 105–130 (1998)


  11. Morrey Jr., C.B.: Quasi-convexity and the lower semicontinuity of variational integrals. Pac. J. Math. 2, 25–53 (1952)


  12. Müller, S.: Rank-one convexity implies quasiconvexity on diagonal matrices. Int. Math. Res. Not. 20, 1087–1095 (1999)


  13. Müller, S.: A sharp version of Zhang’s theorem on truncating sequences of gradients. Trans. Am. Math. Soc. 351, 4585–4597 (1999)


  14. Müller, S., Šverák, V.: Convex integration with constraints and applications to phase transition and partial differential equations. J. Eur. Math. Soc. 1, 393–422 (1999)


  15. Pedregal, P.: Parametrized Measures and Variational Principles. Birkhäuser Verlag, Basel (1997)


  16. Šverák, V.: Rank-one convexity does not imply quasiconvexity. Proc. R. Soc. Edinb. Sect. A 120, 185–189 (1992)


  17. Tartar, L.: Compensated compactness and applications to partial differential equations. In: Knops, R. (ed.) Nonlinear Analysis and Mechanics: Heriot–Watt Symposium, vol. 4, Res. Notes Math. 39, pp. 136–212. Pitman, Boston (1979)


Author information


Corresponding author

Correspondence to Terence L. J. Harris.

Additional information

Communicated by J. Ball.

Appendix: The diagonal case


This section contains one particular generalisation of Müller’s result from the \(2 \times 2\) diagonal matrices \(\mathbb {M}^{2 \times 2}_{{{\mathrm{diag}}}}\) to the subspace

$$\begin{aligned} \mathbb {M}^{2 \times n}_{{{\mathrm{diag}}}} := \left\{ \begin{pmatrix} a_{1,1} & \cdots & a_{1,n-1} & 0 \\ 0 & \cdots & 0 & a_{2,n} \end{pmatrix} \in \mathbb {M}^{2 \times n} \right\} , \quad n \ge 2. \end{aligned}$$

This is used to prove the result for \(\mathbb {M}^{2 \times n}_{{{\mathrm{tri}}}}\). The proof requires only minor modifications of the one in [12], but is included for convenience. As in [8], define the elements of the Haar system in \(L^2(\mathbb {R}^n)\) by

$$\begin{aligned} h_Q^{(\epsilon )}(x) = \prod _{j=1}^n h_{I_j}^{\epsilon _j}(x_j), \quad \text { for } x \in \mathbb {R}^n, \end{aligned}$$

where \(\epsilon \in \{0,1\}^n \setminus \{(0, \ldots , 0)\}\), \(Q= I_1 \times \cdots \times I_n\) is a dyadic cube in \(\mathbb {R}^n\), the \(I_j\) are dyadic intervals of equal length, and the convention \(0^0=0\) is assumed, so that the factor with \(\epsilon _j = 0\) is the indicator \(\chi _{I_j}\). A dyadic interval is always of the form \(\left[ k \cdot 2^{-j}, (k+1)\cdot 2^{-j} \right) \) with \(j,k \in \mathbb {Z}\). For a dyadic interval \(I=[a,b)\), \(h_I\) is defined by

$$\begin{aligned} h_I(x) = h_{[0,1)}\left( \frac{ x-a}{b-a} \right) \quad \text { for } x \in \mathbb {R}, \end{aligned}$$

where

$$\begin{aligned} h_{[0,1)} = \chi _{\left[ 0, \frac{1}{2}\right) } - \chi _{\left[ \frac{1}{2}, 1\right) }. \end{aligned}$$

For \(j \in \mathbb {Z}\) and \(k \in \mathbb {Z}^n\), the notation \(h^{(\epsilon )}_{j,k}=h^{(\epsilon )}_Q\) will be used, where

$$\begin{aligned} Q =Q_{j,k} = \bigg [ \frac{ k_1}{2^j},\frac{ k_1+ 1}{2^j} \bigg ) \times \cdots \times \bigg [ \frac{ k_n}{2^j}, \frac{ k_n+ 1}{2^j} \bigg ). \end{aligned}$$
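As an illustration (not part of the paper), the tensor-product Haar functions above can be sampled on a grid and their basic properties checked numerically: each \(h_Q^{(\epsilon )}\) with \(\epsilon \ne 0\) has mean zero, and distinct choices of \(\epsilon \) on the same cube are orthogonal in \(L^2\). The following sketch uses midpoint sampling on \([0,1)^2\); all function names are illustrative.

```python
import numpy as np

def haar_mother(x):
    """h_{[0,1)} = chi_[0, 1/2) - chi_[1/2, 1)."""
    return (np.where((0 <= x) & (x < 0.5), 1.0, 0.0)
            - np.where((0.5 <= x) & (x < 1.0), 1.0, 0.0))

def haar_factor(x, a, b, eps_j):
    """h_I(x)^{eps_j} for I = [a, b), with 0^0 = 0: eps_j = 0 gives chi_I."""
    h = haar_mother((x - a) / (b - a))
    return h if eps_j == 1 else np.abs(h)  # |h| = chi_I

def haar_cube(xs, Q, eps):
    """h_Q^{(eps)} on a meshgrid list xs; Q is a list of (a, b) intervals."""
    out = np.ones_like(xs[0])
    for x, (a, b), e in zip(xs, Q, eps):
        out = out * haar_factor(x, a, b, e)
    return out

# Sample on [0,1)^2 at cell midpoints.
m = 256
grid = (np.arange(m) + 0.5) / m
X1, X2 = np.meshgrid(grid, grid, indexing="ij")
Q = [(0.0, 1.0), (0.0, 1.0)]
eps_list = [(0, 1), (1, 0), (1, 1)]
fs = [haar_cube([X1, X2], Q, e) for e in eps_list]
dA = 1.0 / m**2

for f in fs:
    assert abs(f.sum() * dA) < 1e-12              # mean zero for eps != 0
for i in range(3):
    for j in range(i + 1, 3):
        assert abs((fs[i] * fs[j]).sum() * dA) < 1e-12  # mutual orthogonality
print("haar checks passed")
```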

The standard basis vectors in \(\mathbb {R}^n\) or \(\{0,1\}^n\) will be denoted by \(e_j\). The Riesz transform \(R_j\) on \(L^2(\mathbb {R}^n)\) is defined through multiplication on the Fourier side by \(-i \xi _j/|\xi |\). In [8, Theorem 2.1] and [12, Theorem 5] it was shown that if \(\epsilon \in \{0,1\}^n\) satisfies \(\epsilon _j =1\), then there is a constant C such that

$$\begin{aligned} \left\| P^{(\epsilon )}u \right\| _2 \le C \Vert u\Vert _2^{1/2} \Vert R_j u \Vert _2^{1/2} \quad \text { for all } u \in L^2(\mathbb {R}^n), \end{aligned}$$
(4)

where \(\epsilon \) is fixed and \(P^{(\epsilon )}\) is the projection onto the closed span of the set

$$\begin{aligned} \left\{ h_Q^{(\epsilon )} : Q \subseteq \mathbb {R}^n \text { is a dyadic cube } \right\} . \end{aligned}$$
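For illustration (not from the paper), the multiplier definition of \(R_j\) can be discretized with the FFT. Since the multiplier \(-i\xi _j/|\xi |\) has modulus at most 1, Parseval's theorem gives \(\Vert R_j u\Vert _2 \le \Vert u\Vert _2\), which the sketch below checks; the grid size, seed and function names are assumptions.

```python
import numpy as np

def riesz(u, j):
    """Discrete Riesz transform R_j: Fourier multiplier -i*xi_j/|xi|,
    set to 0 at xi = 0, applied on a periodic grid via the FFT."""
    n = u.ndim
    freqs = np.meshgrid(*[np.fft.fftfreq(s) for s in u.shape], indexing="ij")
    norm = np.sqrt(sum(f ** 2 for f in freqs))
    norm[(0,) * n] = 1.0            # avoid 0/0 at the zero frequency
    mult = -1j * freqs[j] / norm
    mult[(0,) * n] = 0.0            # multiplier vanishes at xi = 0
    return np.fft.ifftn(mult * np.fft.fftn(u))

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 64))
Ru = riesz(u, 0)

# |multiplier| <= 1, so by Parseval the L^2 norm cannot increase.
assert np.linalg.norm(Ru) <= np.linalg.norm(u) + 1e-10
print("riesz contraction verified")
```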

Lemma 1

If \(f: \mathbb {M}^{2 \times n} \rightarrow \mathbb {R}\) is rank-one convex with \(f(0)=0\), and if \(u_1, \ldots , u_{n-1}, v_n\) have finite expansions in the Haar system

$$\begin{aligned} u_i = \sum _{ \epsilon _n =0} \sum _{j=J}^K \sum _{k \in \mathbb {Z}^n} a_{j,k, i}^{(\epsilon )} h_{j,k}^{(\epsilon )} \quad \text { for } 1 \le i \le n-1, \text { and } \quad \quad v_n = \sum _{j=J}^K \sum _{k \in \mathbb {Z}^n} b_{j,k} h_{j,k}^{(e_n)}, \end{aligned}$$

so that \(a_{j,k, i}^{(\epsilon )}=b_{j,k}=0\) whenever |k| is sufficiently large, then

$$\begin{aligned} \int _{\mathbb {R}^n} f\begin{pmatrix} u_1 & \cdots & u_{n-1} & 0 \\ 0 & \cdots & 0 & v_n \end{pmatrix} \, dx \ge 0. \end{aligned}$$

Proof

The assumption that \(a_{j,k, i}^{(\epsilon )}=b_{j,k}=0\) for |k| sufficiently large means the integral converges absolutely. Let

$$\begin{aligned} \widetilde{u}_i = \sum _{ \epsilon _n = 0} \sum _{j=J}^{K-1} \sum _{k \in \mathbb {Z}^n} a_{j,k, i}^{(\epsilon )} h_{j,k}^{(\epsilon )} \quad \text { for } 1 \le i \le n-1, \text { and let } \quad \widetilde{v}_n = \sum _{j=J}^{K-1} \sum _{k \in \mathbb {Z}^n} b_{j,k} h_{j,k}^{(e_n)}. \end{aligned}$$

Then on \(Q_{K,k}\), for any \(k \in \mathbb {Z}^n\),

$$\begin{aligned} u_i' := u_i - \widetilde{u}_i =\sum _{ \epsilon _n = 0} a_{K,k, i}^{(\epsilon )} h_{K,k}^{(\epsilon )}, \quad v_n' := v_n -\widetilde{v}_n = b_{K,k}h_{K,k}^{(e_n)}, \end{aligned}$$

and

$$\begin{aligned}&\int _{Q_{K,k}} f\begin{pmatrix} u_1 & \cdots & u_{n-1} & 0 \\ 0 & \cdots & 0 & v_n \end{pmatrix} \, dx \\&\quad = \int _{Q_{K,k}} f\begin{pmatrix} \widetilde{u}_1+ u_1' & \cdots & \widetilde{u}_{n-1}+ u_{n-1}' & 0 \\ 0 & \cdots & 0 & \widetilde{v}_n + v_n' \end{pmatrix} \, dx_1 \, \cdots \, dx_n. \end{aligned}$$

The bottom row is constant in \(x_1, \ldots , x_{n-1}\) on \(Q_{K,k}\), so the integrand is convex in the variables of integration \(x_1, \ldots , x_{n-1}\). The terms \(\widetilde{u}_i\) and \(\widetilde{v}_n\) are constant on \(Q_{K,k}\), and the integral of \(u_i'\) with respect to \(x_1, \ldots , x_{n-1}\) over the \((n-1)\)-dimensional dyadic cube inside \(Q_{K,k}\) is zero (for any \(x_n\)). Hence applying Jensen’s inequality for convex functions gives

$$\begin{aligned}&\int _{Q_{K,k}} f\begin{pmatrix} u_1 & \cdots & u_{n-1} & 0 \\ 0 & \cdots & 0 & v_n \end{pmatrix} \, dx \\&\quad \ge \int _{Q_{K,k}} f\begin{pmatrix} \widetilde{u}_1 & \cdots & \widetilde{u}_{n-1} & 0 \\ 0 & \cdots & 0 & \widetilde{v}_n + v_n' \end{pmatrix} \, dx_1 \, \cdots \, dx_n. \end{aligned}$$

Applying Jensen’s inequality similarly to the integration in \(x_n\), and summing over all \(k \in \mathbb {Z}^n\) gives

$$\begin{aligned} \int _{\mathbb {R}^n} f\begin{pmatrix} u_1 & \cdots & u_{n-1} & 0 \\ 0 & \cdots & 0 & v_n \end{pmatrix} \, dx \ge \int _{\mathbb {R}^n} f\begin{pmatrix} \widetilde{u}_1 & \cdots & \widetilde{u}_{n-1} & 0 \\ 0 & \cdots & 0 & \widetilde{v}_n \end{pmatrix} \, dx. \end{aligned}$$

By induction this proves the lemma. \(\square \)
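The inductive step rests only on Jensen's inequality: if \(f\) is convex, \(\widetilde{u}\) is constant on a cell, and \(u'\) has mean zero there, then the cell average of \(f(\widetilde{u} + u')\) is at least \(f(\widetilde{u})\). A one-dimensional numerical sketch (illustrative only; the particular convex function and perturbation below are assumptions, not from the paper):

```python
import numpy as np

# A convex function with f(0) = 0.
f = lambda t: np.maximum(t, 0.0) ** 2

u_tilde = 0.3                                   # constant value on the cell
m = 1000
# A mean-zero perturbation sampled at cell midpoints of [0, 1).
u_prime = np.sin(2 * np.pi * (np.arange(m) + 0.5) / m)
assert abs(u_prime.mean()) < 1e-12              # mean zero on the cell

lhs = f(u_tilde + u_prime).mean()               # cell average of f(u_tilde + u')
rhs = f(u_tilde)                                # f of the cell average

# Jensen: the average of f dominates f of the average.
assert lhs >= rhs - 1e-12
print("jensen step verified")
```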

Proof of Theorem 2

Let \(\mu \) be a homogeneous gradient Young measure supported in \(\mathbb {M}^{2 \times n}_{{{\mathrm{diag}}}}\), and let \(f: \mathbb {M}^{2 \times n} \rightarrow \mathbb {R}\) be a rank-one convex function. It is required to show that

$$\begin{aligned} \int f \, d\mu \ge f( \overline{\mu }). \end{aligned}$$

Without loss of generality it may be assumed that \(\overline{\mu }=0\) and that \(f(0)=0\). After replacing f by an extension of f which is equal to f on \(({{\mathrm{supp}}}\mu )^{co}\), it can also be assumed that there is a constant C with

$$\begin{aligned} |f(X)| \le C(1+|X|^2) \quad \text { for all } X \in \mathbb {M}^{2 \times n}. \end{aligned}$$
(5)

Let \(\varOmega \subseteq \mathbb {R}^n\) be the open unit cube. By the characterisation of gradient Young measures [15, Theorem 8.16] there is a sequence \(\phi ^{(j)}=(\phi ^{(j)}_1,\phi ^{(j)}_2)\) in \(W^{1, \infty }(\varOmega , \mathbb {R}^2)\) whose gradients generate \(\mu \), which means that

$$\begin{aligned} \lim _{j \rightarrow \infty } \int _{\varOmega } \eta (x) g\left( \nabla \phi ^{(j)}(x) \right) \, dx = \int g \, d\mu \cdot \int _{\varOmega } \eta (x) \, dx, \end{aligned}$$
(6)

for any continuous g and for all \(\eta \in L^1(\varOmega )\). In particular \(\nabla \phi ^{(j)} \rightarrow 0\) weakly in \(L^2(\varOmega , \mathbb {M}^{2 \times n})\). By the sharp version of the Zhang truncation theorem (see [13, Corollary 3]) it may be assumed that

$$\begin{aligned} \left\| {{\mathrm{dist}}}\left( \nabla \phi ^{(j)}, \mathbb {M}^{2 \times n}_{{{\mathrm{diag}}}} \right) \right\| _{\infty } \rightarrow 0. \end{aligned}$$
(7)

As in Lemma 8.3 of [15], after multiplying the sequence by cutoff functions and diagonalising in such a way as not to affect (7), it can additionally be assumed that \(\phi ^{(j)} \in W_0^{1, \infty }(\varOmega , \mathbb {R}^2)\) (the choice \(p=\infty \) is not essential; any sufficiently large p would work).

Let \(P_1: L^2(\mathbb {R}^n) \rightarrow L^2(\mathbb {R}^n)\) be the projection onto the closed span of

$$\begin{aligned} \left\{ h_Q^{(\epsilon )} : Q \subseteq \mathbb {R}^n \text { is a dyadic cube and } \epsilon _n = 0 \right\} , \end{aligned}$$

and let \(P_2: L^2(\mathbb {R}^n) \rightarrow L^2(\mathbb {R}^n)\) be the projection onto the closed span of

$$\begin{aligned} \left\{ h_Q^{(\epsilon )} : Q \subseteq \mathbb {R}^n \text { is a dyadic cube and } \epsilon = e_n \right\} . \end{aligned}$$

Write \(w^{(j)} = \nabla \phi ^{(j)}\), so that by (7) and the fact that \(R_i \partial _j \phi = R_j \partial _i \phi \),

$$\begin{aligned} \left\| R_n w^{(j)}_{1,1} \right\| _2, \ldots , \left\| R_nw^{(j)}_{1,n-1}\right\| _2 \rightarrow 0, \quad \left\| R_1w^{(j)}_{2,n}\right\| _2, \ldots , \left\| R_{n-1}w^{(j)}_{2,n}\right\| _2 \rightarrow 0. \end{aligned}$$

Hence by (4) and orthogonality

$$\begin{aligned} \left\| w^{(j)}_{1,1}-P_1 w^{(j)}_{1,1} \right\| _2, \ldots ,\left\| w^{(j)}_{1,n-1}-P_1 w^{(j)}_{1,n-1}\right\| _2 \rightarrow 0, \quad \left\| w^{(j)}_{2,n}-P_2 w^{(j)}_{2,n} \right\| _2 \rightarrow 0. \end{aligned}$$
(8)
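The commutation identity \(R_i \partial _j \phi = R_j \partial _i \phi \) used above is immediate on the Fourier side: both compositions correspond to multiplication by the same symmetric multiplier \(2\pi \xi _i \xi _j / |\xi |\). A discrete FFT check on a periodic grid (illustrative only; all names and the grid size are assumptions):

```python
import numpy as np

def fourier_deriv(phi, j):
    """Partial derivative on a periodic grid: multiplier 2*pi*i*xi_j."""
    freqs = np.meshgrid(*[np.fft.fftfreq(s) for s in phi.shape], indexing="ij")
    return np.fft.ifftn(2j * np.pi * freqs[j] * np.fft.fftn(phi))

def riesz(u, j):
    """Riesz transform R_j: multiplier -i*xi_j/|xi|, set to 0 at xi = 0."""
    freqs = np.meshgrid(*[np.fft.fftfreq(s) for s in u.shape], indexing="ij")
    norm = np.sqrt(sum(f ** 2 for f in freqs))
    norm[(0,) * u.ndim] = 1.0
    mult = -1j * freqs[j] / norm
    mult[(0,) * u.ndim] = 0.0
    return np.fft.ifftn(mult * np.fft.fftn(u))

rng = np.random.default_rng(1)
phi = rng.standard_normal((32, 32))
lhs = riesz(fourier_deriv(phi, 1), 0)   # R_1 applied to d(phi)/dx_2
rhs = riesz(fourier_deriv(phi, 0), 1)   # R_2 applied to d(phi)/dx_1

# Both sides share the multiplier 2*pi*xi_1*xi_2/|xi|, so they agree.
assert np.allclose(lhs, rhs)
print("commutation identity verified")
```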

The function f is separately convex since it is rank-one convex. Hence by the quadratic growth of f in (5) (see Observation 2.3 in [10]), there exists a constant K such that

$$\begin{aligned} |f(X)-f(Y)| \le K(1+|X|+|Y|)|X-Y| \text { for all } X,Y \in \mathbb {M}^{2 \times n}. \end{aligned}$$
(9)

Hence applying (6) with \(\eta = \chi _{\varOmega }\) gives

$$\begin{aligned} \int f \, d\mu&= \lim _{j \rightarrow \infty } \int _{\varOmega } f\left( w^{(j)} \right) \, dx \nonumber \\&= \lim _{j \rightarrow \infty } \int _{\varOmega } f\begin{pmatrix} P_1 w^{(j)}_{1,1} & \cdots & P_1 w^{(j)}_{1,n-1} & 0 \\ 0 & \cdots & 0 & P_2 w^{(j)}_{2,n} \end{pmatrix} \, dx, \end{aligned}$$
(10)

by (7), (8), (9) and the Cauchy–Schwarz inequality. The functions \(w^{(j)}\) are supported in \(\overline{\varOmega }\) and satisfy \(\int _{\varOmega } w^{(j)} \, dx = 0\) by the definition of weak derivative. Hence the \(L^2(\mathbb {R}^n)\) inner product satisfies \(\left\langle w^{(j)}, h_Q^{(\epsilon )} \right\rangle = 0\) whenever Q is a dyadic cube not contained in \(\overline{\varOmega }\). This implies that \(P_1w^{(j)}\) and \(P_2w^{(j)}\) are supported in \(\overline{\varOmega }\). The integrand in (10) therefore vanishes outside \(\overline{\varOmega }\), and so

$$\begin{aligned} \int f\, d\mu = \lim _{j \rightarrow \infty } \int _{\mathbb {R}^n} f\begin{pmatrix} P_1 w^{(j)}_{1,1} & \cdots & P_1 w^{(j)}_{1,n-1} & 0 \\ 0 & \cdots & 0 & P_2 w^{(j)}_{2,n} \end{pmatrix} \, dx \ge 0 \end{aligned}$$

by (9) and Lemma 1. This finishes the proof. \(\square \)


About this article


Cite this article

Harris, T.L.J., Kirchheim, B. & Lin, CC. Two-by-two upper triangular matrices and Morrey’s conjecture. Calc. Var. 57, 73 (2018). https://doi.org/10.1007/s00526-018-1360-8
