Abstract
We show that, in order to decide whether a given probability measure is laminate, it is enough to verify Jensen’s inequality in the class of extremal non-negative rank-one convex integrands. We also identify a subclass of these extremal integrands, consisting of truncated minors, thus proving a conjecture made by Šverák (Arch Ration Mech Anal 119(4):293–300, 1992).
1 Introduction
Since its introduction in the seminal work of Morrey [32], quasiconvexity has played an important role not just in the Calculus of Variations [11, 13, 21, 40] but also in problems from other areas of mathematical analysis, for instance in the theory of compensated compactness [35, 46]. Nonetheless, this concept is still poorly understood and has been mostly studied in relation with polyconvexity and rank-one convexity, which are respectively stronger and weaker notions that are easier to deal with (we refer the reader to Sect. 2 for terminology and notation). An outstanding open problem in the area is Morrey’s problem, which is the problem of deciding whether rank-one convexity implies quasiconvexity, so that the two notions coincide. A fundamental example [44] of Šverák shows that this implication does not hold in dimensions \(3\times 2\) or higher and, more recently, Grabovsky [16] has found a different example in dimensions \(8\times 2\) which moreover is 2-homogeneous. The problem in dimensions \(2\times 2\), in particular, remains completely open, but in the last two decades evidence towards a positive answer in this case has been piling up [2, 14, 17, 24, 25, 33, 37, 38, 41].
A presumably easier (but still unsolved) version of Morrey’s problem in dimensions \(2\times 2\) is to decide whether rank-one convex integrands in the space of \(2\times 2\) symmetric matrices are quasiconvex. In this direction, Šverák introduced in [43] new quasiconvex integrands, which were later generalized in [15]. For any \(n\times n\) symmetric matrix A, these integrands are defined by \(F_k(A)\equiv |\det A|\) if \(\text {ind}(A)=k\) and \(F_k(A)\equiv 0\) otherwise,
for \(k=0,\dots , n\); we recall that the index of a matrix is the number of its negative eigenvalues. We also note that the integrand \(F_0\) is sometimes called \(\det ^{++}\) in the literature, since its support is the set of positive definite matrices. These integrands have played an important role in studying other problems related to the Calculus of Variations, for instance in building counterexamples to the regularity of elliptic systems [34] or in the computation of rank-one convex hulls of compact sets [45].
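As a concrete illustration, the index and the integrand \(F_0=\det ^{++}\) can be computed numerically. This is a minimal sketch (the helper names `index` and `det_pp` are ours); `det_pp` encodes the fact, stated above, that the support of \(F_0\) is the set of positive definite matrices:

```python
import numpy as np

def index(A):
    """Index of a symmetric matrix A: the number of its negative eigenvalues."""
    return int(np.sum(np.linalg.eigvalsh(A) < 0))

def det_pp(A):
    """F_0 = det^{++}: equals det(A) when A is positive definite, 0 otherwise."""
    d = float(np.linalg.det(A))
    return d if index(A) == 0 and d > 0 else 0.0

# diag(1, -2, 3) has exactly one negative eigenvalue, so its index is 1.
assert index(np.diag([1.0, -2.0, 3.0])) == 1
# det^{++} vanishes off the positive definite cone.
assert det_pp(np.diag([-1.0, 2.0])) == 0.0
```

The remaining integrands \(F_k\) would replace the test `index(A) == 0` by `index(A) == k`.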
In order to understand Šverák’s motivation for considering these integrands it is worth making a small excursion into classical convex analysis. Given a real vector space \({\mathbb {V}}\) and a convex set \(K\subset {\mathbb {V}}\), one can define the set of extreme points of K as the set of points which are not contained in any open line segment contained in K. In general the set of extreme points might be very small: this is what happens, for instance, when the set is a convex cone \(C\subset {\mathbb {V}}\), since in this situation all non-zero vectors are contained in a ray through zero. However, if we can find a convex base B for C, then we note that such a ray corresponds to a unique point in B. If this is an extreme point of B then we say that we have an extremal ray.
We are interested in the extremal rays of the cone C of rank-one convex integrands. This cone has the inconvenient feature that it is not line-free: there is a set of elements \(v\in {\mathbb {V}}\) such that, for any \(c\in C\) and any \(t\in {\mathbb {R}}\), the point \(c+tv\) is in C; this set is precisely \(C\cap (-C)\). In turn, it is quite clear that this is the set of rank-one affine integrands. A reasonable way of disposing of rank-one affine integrands is by demanding non-negativity from all integrands from C. This leads us to the definition of extremality considered by Šverák: we say that a non-negative rank-one convex integrand F is extremal if, whenever we have \(F=E_1+E_2\) for \(E_1, E_2\) non-negative rank-one convex integrands, then each \(E_i\) is a non-negative multiple of F. A weaker notion of extremality was introduced by Milton in [31] for the case of quadratic forms (see also [18]) but we shall not discuss it further here.
Let us now explain the relation between Šverák’s integrands and extremality. In [43] Šverák observed that the polyconvex integrand \(|\det |:{\mathbb {R}}^{n\times n} \rightarrow [0,\infty )\) is not extremal, since \(|\det |=\det ^++\det ^-\), where as usual \(\det ^\pm (A)\equiv \max (\pm \det A, 0),\)
which are also polyconvex. He also observed that \(\det ^\pm \) are not extremal in \({\mathbb {R}}^{n\times n}_{\text {sym}}\), since
However, he conjectured that \(\det ^\pm \) are extremal in \({\mathbb {R}}^{n\times n}\) and also that each \(F_k\) is extremal in \({\mathbb {R}}^{n\times n}_\text {sym}\). In this paper we give an affirmative answer to both conjectures:
Theorem 1.1
Given a minor \(M:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\), let \(M^\pm \) be its positive and negative parts. Then \(M^\pm \) are extremal non-negative rank-one convex integrands in \({\mathbb {R}}^{n\times n}\).
For \(k=0,\dots ,n\), the integrands \(F_k:{\mathbb {R}}^{n\times n}_{\text {sym}}\rightarrow {\mathbb {R}}\) are extremal non-negative rank-one convex integrands in \({\mathbb {R}}^{n\times n}_{\text {sym}}\).
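The truncated minors appearing in the first part of the theorem are easy to experiment with numerically. A minimal sketch (function names are ours) checking the pointwise identities \(|\det |=\det ^++\det ^-\) and \(\det =\det ^+-\det ^-\) on random matrices:

```python
import numpy as np

def det_plus(A):
    """Positive part of the determinant, det^+."""
    return max(float(np.linalg.det(A)), 0.0)

def det_minus(A):
    """Negative part of the determinant, det^-."""
    return max(-float(np.linalg.det(A)), 0.0)

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.standard_normal((3, 3))
    d = np.linalg.det(A)
    # |det A| = det^+(A) + det^-(A) and det A = det^+(A) - det^-(A)
    assert np.isclose(abs(d), det_plus(A) + det_minus(A))
    assert np.isclose(d, det_plus(A) - det_minus(A))
```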
As a main tool we use the fact that, on a connected open set, a rank-one affine integrand is an affine combination of minors; this is proved by localizing the arguments from Ball–Currie–Olver [5] concerning the classification of Null Lagrangians.
The importance of extreme points in convex analysis has to do with the Krein–Milman theorem, which states that the closed convex hull of the set of extreme points of a compact, convex subset K of a locally convex vector space is the whole set K; informally, the set of extreme points is a set of “minimal information” needed to recover K. However, Klee [26] showed that the Krein–Milman theorem is generically true for trivial reasons: if we fix an infinite-dimensional Banach space and consider the space of its compact, convex subsets (which we can equip with the Hausdorff distance so that it becomes a complete metric space), then for almost every compact convex set K the extreme points of K are dense in K. Here “almost every” means that the statement fails only on a meagre set. Despite this disconcerting result, the situation can be somewhat remedied with the help of Choquet theory, which roughly states that, under reasonable assumptions, an arbitrary point in K can be represented by a measure carried by the set of its extreme points. For precise statements and much more information concerning Choquet theory we refer the reader to the lecture notes [39] or to the monograph [28]; for Krein–Milman-type theorems for semi-convexity notions see [23, 27, 29].
Theorem 1.1 can be interpreted in light of Choquet theory. It is well-known (see, for instance, the lecture notes [33]) that Morrey’s problem is equivalent to a dual problem, which is the problem of deciding whether the class of homogeneous gradient Young measures and the class of laminates are the same. Fix a compactly supported Radon probability measure \(\nu \) on \({\mathbb {R}}^{n\times n}\); for simplicity we assume that \(\nu \) has support contained in the interior of the cube \(Q\equiv \prod _{i=1}^{n^2} [0,1]\subset {\mathbb {R}}^{n^2}\cong {\mathbb {R}}^{n\times n}.\) In order to decide whether \(\nu \) is a laminate, we can resort to an important theorem due to Pedregal [36], which states that \(\nu \) is a laminate if and only if Jensen’s inequality holds for any rank-one convex integrand \(f:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\): \(f\left( {\overline{\nu }}\right) \le \int _{{\mathbb {R}}^{n\times n}} f \, \text {d}\nu ,\) where \({\overline{\nu }}\equiv \int _{{\mathbb {R}}^{n\times n}} A \, \text {d}\nu (A)\) is the barycenter of \(\nu \).
Since the class of rank-one convex integrands is rather large, the problem of deciding whether a measure is a laminate is in general very hard. However, it follows from Choquet theory that one does not need to test \(\nu \) against all rank-one convex integrands, it being sufficient to test it in a strictly smaller class:
Theorem 1.2
Let \(A_1,\dots , A_{2^{n^2}}\) be the vertices of the cube Q. A Radon probability measure \(\nu \) supported on the interior of Q is a laminate if and only if \(g\left( {\overline{\nu }}\right) \le \int _{Q} g \, \text {d}\nu \)
for all integrands g which are extreme points of the convex set \(\left\{ g :Q\rightarrow [0,\infty ) \text { rank-one convex} : \sum _{i=1}^{2^{n^2}} g(A_i)=1\right\} .\)
We note that the summation condition is simply a normalization which corresponds to fixing a base of the cone of non-negative rank-one convex integrands on Q.
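To make Pedregal’s criterion concrete, Jensen’s inequality can be tested by hand on the simplest laminates. A sketch (our choice of a two-point prelaminate, whose supports differ by a rank-one matrix) using the rank-one convex integrand \(\det ^+\):

```python
import numpy as np

def det_plus(A):
    """The rank-one convex integrand det^+."""
    return max(float(np.linalg.det(A)), 0.0)

# Two-point prelaminate: the supports A and B differ by a rank-one matrix X.
A = np.eye(2)
X = np.outer([1.0, 0.0], [1.0, 0.0])   # rank-one direction
B = A + X
lam = 0.3
barycenter = lam * A + (1 - lam) * B   # barycenter of nu = lam*delta_A + (1-lam)*delta_B

# Jensen's inequality f(barycenter) <= integral of f d(nu) for f = det^+:
lhs = det_plus(barycenter)
rhs = lam * det_plus(A) + (1 - lam) * det_plus(B)
assert lhs <= rhs + 1e-9
```

Here equality actually holds, since \(\det \) is rank-one affine and \(\det ^+\) agrees with \(\det \) along this particular segment.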
Our interest in extremal integrands was ignited by the work [3] of Astala–Iwaniec–Prause–Saksman, where it was shown that an integrand known as Burkholder’s function is extreme in the class of homogeneous, isotropic, rank-one convex integrands; in fact, this integrand is also the least integrand in this class, in the sense that no other element of the class is below it at all points. The relevance of this fact is readily seen: from standard results about quasiconvex envelopes it follows immediately that the Burkholder function is either quasiconvex everywhere or quasiconvex nowhere. Burkholder’s function was found in the context of martingale theory by Burkholder [9, 10] and was later generalized to higher dimensions by Iwaniec [19]. This remarkable function is a bridge between Morrey’s problem and important problems in Geometric Function Theory [1, 20] and we refer the reader to the very interesting papers [2, 3, 19] and the references therein for more details in this direction.
Finally, we give a brief outline of the paper. Section 2 contains standard definitions and notation and briefly recalls some useful facts for the reader’s convenience. Section 3 comprises results concerning improved homogeneity properties of rank-one convex integrands which vanish on isotropic cones. Section 4 is devoted to the proof of Theorem 1.1. Section 5 elaborates on the relation between Choquet theory and Morrey’s problem and proves Theorem 1.2.
2 Preliminaries
In this section we will gather a few definitions and notation for the reader’s convenience. The material is standard and can be found for instance in the excellent references [12] and [33].
Consider an integrand \(E:{\mathbb {R}}^{m\times n}\rightarrow {\mathbb {R}}\). We say that E is polyconvex if E(A) is a convex function of the minors of A, see [6]. If E is locally integrable, we say that E is quasiconvex if there is some bounded open set \(\Omega \subset {\mathbb {R}}^n\) such that, for all \(A\in {\mathbb {R}}^{m\times n}\) and all \(\varphi \in C^{\infty }_0(\Omega ,{\mathbb {R}}^m)\), \(\int _\Omega E(A+\text {D}\varphi (x))\, \text {d}x \ge |\Omega |\, E(A).\)
This notion was introduced in [32] and generalized to higher-order derivatives by Meyers in [30]. The case of higher derivatives was also addressed in [5], where it was shown that quasiconvexity is not implied by the corresponding notion of rank-one convexity if \(m>2\) and \(n\ge 2\). The integrand E is rank-one convex if, for all matrices \(A, X\in {\mathbb {R}}^{m\times n}\) such that X has rank one, the function \(t\mapsto E(A+tX)\)
is convex. An equivalent definition of rank-one convexity is that E is rank-one convex if, whenever \((A_i, \lambda _i)_{i=1}^N\) satisfy the \((H_N)\) conditions (c.f. [12, Def. 5.14]), we have \(E\left( \sum _{i=1}^N \lambda _i A_i\right) \le \sum _{i=1}^N \lambda _i E(A_i).\)
By a slight abuse of terminology we will call a collection \((A_i, \lambda _i)_{i=1}^N\) which satisfies the \((H_N)\) conditions a prelaminate. It is also clear how to adapt the definition of rank-one convexity to the more general situation where E is defined on an open set \(\mathcal {O}\subset {\mathbb {R}}^{m\times n}\). Finally, \(E:{\mathbb {R}}^d\rightarrow {\mathbb {R}}\) is separately convex if, for all \(x\in {\mathbb {R}}^d\) and all \(i=1,\dots , d\), the function \(t\mapsto E(x+t e_i)\) is convex; we denote by \(e_1,\dots , e_d\) the standard basis of \({\mathbb {R}}^d\).
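Rank-one convexity can be probed numerically by sampling \(t\mapsto E(A+tX)\) on a uniform grid and checking midpoint convexity. A sketch (the grid-based check and function names are ours) using \(\det ^+\), which is rank-one convex because \(\det \) is rank-one affine:

```python
import numpy as np

def det_plus(A):
    """det^+ = max(det, 0), rank-one convex on 2x2 matrices."""
    return max(float(np.linalg.det(A)), 0.0)

def convex_along_rank_one(E, A, X, ts):
    """Midpoint-convexity check of t -> E(A + t X) on a uniform grid ts."""
    vals = [E(A + t * X) for t in ts]
    return all(vals[i] <= 0.5 * (vals[i - 1] + vals[i + 1]) + 1e-9
               for i in range(1, len(vals) - 1))

rng = np.random.default_rng(1)
ts = np.linspace(-2.0, 2.0, 41)
for _ in range(50):
    A = rng.standard_normal((2, 2))
    X = np.outer(rng.standard_normal(2), rng.standard_normal(2))  # rank <= 1
    assert convex_along_rank_one(det_plus, A, X, ts)
```

Such a check can of course only falsify rank-one convexity, never certify it.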
We recall that all real-valued rank-one convex integrands are locally Lipschitz continuous, a fact that we will often use implicitly. Moreover, if \((m,n)\ne (1,1)\), we have \(E \text { polyconvex} \Rightarrow E \text { quasiconvex} \Rightarrow E \text { rank-one convex},\)
while E quasiconvex \(\Leftarrow E\) rank-one convex fails if \(n\ge 2, m\ge 3\). The case \(n\ge m=2\) is the content of Morrey’s problem.
We will only consider square matrices, so \(n=m\). Besides polyconvexity, the integrands \(\det ^\pm :{\mathbb {R}}^{n\times n} \rightarrow [0,\infty )\) have two important properties: they are positively n-homogeneous (in fact, they are n-homogeneous if n is even) and isotropic. A generic integrand \(E:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\) is said to be p-homogeneous for a number \(p\ge 1\) if \(E(tA)=|t|^p E(A)\) for all \(t\in {\mathbb {R}}\) and all \(A\in {\mathbb {R}}^{n\times n};\)
it is positively p-homogeneous if the same holds only for \(t>0\). The integrand E is isotropic if it is invariant under the left– and right–\(\text {SO}(n)\) actions, that is, \(E(QAR)=E(A)\) for all \(Q, R\in \text {SO}(n).\)
This condition can be understood in a somewhat more concrete way with the help of singular values. The singular values \({{\widetilde{\sigma }}}_1(A)\ge \dots \ge {{\widetilde{\sigma }}}_n(A)\ge 0\) of a matrix A are the eigenvalues of the matrix \(\sqrt{AA^T}\). We shall consider the signed singular values \(\sigma _j(A)\), which are defined by \(\sigma _j(A)\equiv {{\widetilde{\sigma }}}_j(A)\) for \(j<n\) and \(\sigma _n(A)\equiv \text {sign}(\det A)\, {{\widetilde{\sigma }}}_n(A).\)
As we shall only deal with the signed singular values, the word signed will sometimes be omitted. The importance of singular values is largely due to the following standard result (c.f. [12, § 13]):
Theorem 2.1
Let \(A\in {\mathbb {R}}^{n\times n}\). There are matrices \(Q,R\in \text {SO}(n)\) and real numbers \(|\sigma _n|\le \sigma _{n-1}\le \dots \le \sigma _1\) such that \(A = Q\, \text {diag}(\sigma _1,\dots ,\sigma _n)\, R.\)
With the help of this theorem we can reinterpret isotropy as follows: consider the polar decomposition \(A=Q\sqrt{AA^T}\) for some \(Q\in \text {O}(n)\) and consider a diagonalization of the positive semi-definite matrix \(\sqrt{AA^T}\), so \(R\sqrt{AA^T}R^{-1}=\text {diag}(\widetilde{\sigma }_1,\dots ,{{\widetilde{\sigma }}}_n)\) for some \(R\in \text {SO}(n)\). Here all the \({{\widetilde{\sigma }}}_j\)’s are non-negative, but by flipping the sign of the last one if \(\det A<0\) we can take \(Q\in \text {SO}(n)\). Hence, if E is isotropic, \(E(A)=E(\text {diag}(\sigma _1,\dots ,\sigma _n)).\)
So isotropic integrands are functions of the signed singular values.
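As described above, the signed singular values can be computed from an ordinary singular value decomposition by flipping the sign of the smallest singular value when \(\det A<0\); a minimal sketch:

```python
import numpy as np

def signed_singular_values(A):
    """Signed singular values: sigma_1 >= ... >= sigma_{n-1} >= |sigma_n|,
    with the sign of sigma_n matching the sign of det A."""
    s = np.linalg.svd(A, compute_uv=False)   # ordinary singular values, decreasing
    if np.linalg.det(A) < 0:
        s[-1] = -s[-1]
    return s

# diag(2, -3) has determinant -6 < 0, so the smallest singular value is negated.
assert np.allclose(signed_singular_values(np.diag([2.0, -3.0])), [3.0, -2.0])
```

Note that the product of the signed singular values recovers \(\det A\), not merely \(|\det A|\).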
When \(n=2\) isotropy is particularly simple to handle, since there is a simple way of understanding the signed singular values of a matrix \(A\in {\mathbb {R}}^{2\times 2}\). For this, we recall the conformal–anticonformal decomposition: we can write \(A=A^++A^-\), where \(A^+\) and \(A^-\) are respectively the conformal and anticonformal parts of A.
This corresponds to the decomposition \({\mathbb {R}}^{2\times 2} = {\mathbb {R}}^{2\times 2}_{\text {conf}} \oplus {\mathbb {R}}^{2\times 2}_{\text {aconf}},\)
which is orthogonal with respect to the Euclidean inner product. Here \({\mathbb {R}}^{2\times 2}_{\text {conf}}\) is the space of conformal matrices while \({\mathbb {R}}^{2\times 2}_{\text {aconf}}\) corresponds to the anticonformal matrices; these are the matrices that are scalar multiples of orthogonal matrices and have respectively positive and negative determinant. This decomposition is particularly useful for us because the singular values of A satisfy the identities \(\sigma _1(A)=\frac{|A^+|+|A^-|}{\sqrt{2}}, \qquad \sigma _2(A)=\frac{|A^+|-|A^-|}{\sqrt{2}},\)
where \(|A |^2=\text {tr}(A^T A)\) denotes the usual Euclidean norm of a matrix. Hence, if \(n=2\) and E is isotropic, \(E(A)=E(|A^+|,|A^-|)\). In particular, the above formulae yield \(\det A = \sigma _1(A)\,\sigma _2(A) = \frac{|A^+|^2-|A^-|^2}{2}.\)
The above decomposition also allows one to identify \({\mathbb {R}}^{2\times 2}\cong {\mathbb {C}}^2\) by the linear isomorphism \(A=A^++A^-\mapsto (z,w)\), where \(z,w\in {\mathbb {C}}\) are the complex numbers representing the conformal part \(A^+\) and the anticonformal part \(A^-\) respectively.
The advantage of this identification is that we can say that an integrand \(E:{\mathbb {C}}^2\rightarrow {\mathbb {R}}\) is rank-one convex if and only if the function \(t\mapsto E(z+t\xi , w+t\zeta )\)
is convex for all \((z,w)\in {\mathbb {C}}^2\) and all \((\xi , \zeta )\in S^1\times S^1\).
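The identification \({\mathbb {R}}^{2\times 2}\cong {\mathbb {C}}^2\) and the resulting formulas for the determinant and the singular values can be checked numerically. The explicit coordinates below are one common normalization of the conformal–anticonformal splitting; since the precise formulas are not reproduced above, treat them as an assumption of this sketch:

```python
import numpy as np

def conf_coordinates(A):
    """One normalization of the identification of a real 2x2 matrix with
    (z, w) in C^2 via its conformal and anticonformal parts."""
    a, b = A[0]
    c, d = A[1]
    z = complex(a + d, c - b) / 2   # conformal part
    w = complex(a - d, b + c) / 2   # anticonformal part
    return z, w

rng = np.random.default_rng(2)
for _ in range(100):
    A = rng.standard_normal((2, 2))
    z, w = conf_coordinates(A)
    # det A = |z|^2 - |w|^2, and the largest singular value is |z| + |w|
    assert np.isclose(np.linalg.det(A), abs(z) ** 2 - abs(w) ** 2)
    s = np.linalg.svd(A, compute_uv=False)
    assert np.isclose(s[0], abs(z) + abs(w))
```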
The conformal–anticonformal decomposition of \({\mathbb {R}}^{2\times 2}\) is also related to an important rank-one convex integrand, commonly referred to as Burkholder’s function. This function can be defined in any real or complex Hilbert space with the norm \(\Vert \cdot \Vert \) by
where \(p^*-1 = \max (p-1, (p-1)^{-1})\); here and in the rest of the paper, when referring to \(B_p\), we implicitly assume that \(1<p<\infty \). This remarkable function is zig-zag convex, i.e.
see [10], and it is also p-homogeneous. If the Hilbert space where \(B_p\) is defined is \({\mathbb {C}}\), the zig-zag convexity of \(B_p\) implies that the Burkholder function \(B_p:{\mathbb {C}}\times {\mathbb {C}}\rightarrow {\mathbb {R}}\) is rank-one convex. Since we are interested in non-negative integrands, we will also deal with the integrand \(B_p^+\equiv \max (B_p,0)\), which is also rank-one convex. Moreover, \(B_2^+=\det ^+\), so \(B_p^+\) can be seen as a “\(\det ^+\)-type integrand”, in the sense that it is rank-one convex, isotropic and vanishes on some cone \(\left\{ (z,w)\in {\mathbb {C}}^2 : |w|=(p^*-1)|z|\right\} ,\)
but it is more general since it can be homogeneous with any degree of homogeneity strictly greater than one. We refer the reader to [19] for higher dimensional generalizations of \(B_p\) and to [2] for extremality results concerning this integrand.
3 Homogeneity properties of a class of rank-one convex integrands
In this section we discuss homogeneity properties of rank-one convex integrands which vanish on cones, both in two and in higher dimensions. We are interested in the family of isotropic cones of aperture \(a\ge 1\), \(\mathcal {C}_a\equiv \left\{ (z,w)\in {\mathbb {C}}^2 : |w|=a|z|\right\} ,\)
motivated by the fact that the Burkholder function \(B_p\) vanishes on \(\mathcal {C}_{p^*-1}\). When \(a=1\) we have \(\mathcal {C}_1=\{\det =0\}\), which of course can be defined in any dimension.
Lemma 3.1
Let \(E:{\mathbb {C}}\times {\mathbb {C}}\rightarrow {\mathbb {R}}\) be rank-one convex and assume that, for some \(a\ge 1\), E is non-positive on \(\mathcal {C}_a\). Define
and let \(A=(z,w)\) be such that \(k\equiv |w|/|z|\le 1\). Then, for \(t\ge 1\), \(E(A)\le h_a(t,k)\, E(tA).\)
Proof
Let us write, for real numbers \(x,y \in {\mathbb {R}}\), \((x,y)\equiv (x z/|z|, y w/|w|)\in {\mathbb {C}}\times {\mathbb {C}}\), so \(A=(|z|,k|z|)\). We fix \(t>1\) since when \(t=1\) there is nothing to prove. Let us define the auxiliary points
see Fig. 1. Simple calculations show that
and it is easy to verify that \(B_1-A_1\) and \(B_2-tA\) are rank-one directions. One also needs to verify that \(\lambda _1, \lambda _2 \in [0,1]\), which is a lengthy but elementary calculation using the fact that \(0\le k\le 1\le a\).
Observe that \(B_1, B_2\in \mathcal {C}_a\) and so \(E(B_1)=E(B_2)\le 0\). Therefore, from rank-one convexity, we have
and a simple calculation shows that \(\lambda _1 \lambda _2 = h_a(t,k)\). \(\square \)
We now specialise the lemma to two important situations, in which one can say more. Let us first assume that \(k=0\).
Proposition 3.2
Let \(E:{\mathbb {C}}\times {\mathbb {C}}\rightarrow {\mathbb {R}}\) be rank-one convex, positively p-homogeneous for some \(p\ge 1\) and not identically zero. If there is some \(a\ge 1\) such that \(E=0\) on \(\mathcal {C}_a\) then \(p\ge \frac{1}{a} + 1\).
We note that this inequality is sharp: indeed, the zero set of the Burkholder function \(B_p\) is \(\mathcal {C}_{p^*-1}\) and so, when \(1<p< 2\), we have \(p = \frac{1}{p^*-1}+1.\)
Thus we can reinterpret this proposition as saying that, for \(1<p< 2\), the Burkholder function has the least possible order of homogeneity of rank-one convex integrands which vanish on \(\mathcal {C}_{p^*-1}\).
Proof
In this proof we assume that \(a>1\), since the case \(a=1\) follows from Lemma 3.3 below. We claim that there is some \(z\in {\mathbb {C}}\) such that \(E(z,0)> 0\). Once this is shown, the conclusion follows easily: take \(k=0\) in Lemma 3.1 to find that \(E(A)\le F_a(t) E(A)\) where
and \(A=(z,0)\). Since \(E(A)>0\), we must have \(F_a(t)\ge 1\) for all \(t\ge 1\). Moreover, \(F_a(1)=1\) and an elementary computation reveals that
which is non-negative precisely when \(p\ge \frac{1}{a} + 1\).
To finish the proof it suffices to prove the claim. Take an arbitrary \(z\in S^1\) and take any rank-one line segment starting at (z, 0) and having the other end-point in \(\mathcal {C}_a\); such a line must intersect \(\mathcal {C}_1\), since \(a>1\), say at \(P_z\). Note that \(E(P_z)\ge 0\), since the function \(t\mapsto E(t P_z)= t^p E(P_z)\) is convex. We conclude that \(E(z,0)\ge 0\) with equality if and only if \(E(P_z)=0\), in which case E is identically zero along the entire rank-one line segment.
To prove the claim we want to show that if \(E(z,0)=0\) for all \(z \in {\mathbb {C}}\) then E is identically zero, so let us make this assumption. Then, from the previous discussion, we see that E is identically zero in the “outside” of \(\mathcal {C}_a\), i.e. in \(\mathcal {C}_a^+\equiv \left\{ (z,w)\in {\mathbb {C}}^2 : |w|\le a|z|\right\} .\)
Moreover, given any point P in the interior of \(\mathcal {C}_a\), there is a rank-one line segment through P with both endpoints, say \(P_1, P_2\), in \(\mathcal {C}_a^+\); this is the case because we assume \(a>1\). But E is zero in a neighbourhood of \(P_i\) and since it is convex along the rank-one line segment \([P_1,P_2]\) we conclude that it is also zero at P. \(\square \)
We remark that, in one dimension, the only homogeneous extreme convex integrands are linear (c.f. Proposition 5.3) while, from the results of Sect. 4, for \(n>1\) there are extremal rank-one convex integrands in \({\mathbb {R}}^{n\times n}\) which are positively k-homogeneous for any \(k\in \{1,\dots , n\}\). It would be interesting to know whether there are extremal homogeneous integrands with other degrees of homogeneity, or whether there is an upper bound for the order of homogeneity of such integrands.
If we set \(a=1\) in Lemma 3.1, so \(\mathcal {C}_1=\{\det =0\}\), we see that \(h_1(t,k)=t^{-2}\) and we find the estimate \(t^2 E(A)\le E(tA)\) for \(t\ge 1\). In fact, this holds in any dimension, and the proof is a simple variant of the proof of Lemma 3.1.
Lemma 3.3
Let \(E:{\mathbb {R}}^{n\times n} \rightarrow {\mathbb {R}}\) be a rank-one convex integrand which is non-positive on the cone \(\{\det =0\}\). Then for all \(A\in {\mathbb {R}}^{n\times n}\), E satisfies \(t^n E(A)\le E(tA)\) for \(t\ge 1\) and \(E(tA)\le t^n E(A)\) for \(0<t\le 1\).
Proof
Let us begin by observing that the second inequality follows from the first. Indeed, given a matrix \(A\in {\mathbb {R}}^{n\times n}\) and \(0<t<1\), let \(B\equiv t A\) and apply the first inequality to B to get \(\left( \frac{1}{t}\right) ^{n} E(B)\le E\left( \frac{1}{t} B\right) = E(A);\)
this can be done since \(1<\frac{1}{t} \). Hence we shall prove only the first inequality.
We begin by proving the statement in the case where A is diagonal, so there are real numbers \(\sigma _j\) such that \(A=\text {diag}(\sigma _1, \dots , \sigma _n)\). Let \(A_0\equiv A\) and define, for \(1\le j \le n\), \(A_j \equiv \text {diag}(t\sigma _1, \dots , t\sigma _j, \sigma _{j+1}, \dots , \sigma _n).\)
Hence, for any \(1\le j \le n\), \(A_{j-1} = \frac{1}{t} A_j + \left( 1-\frac{1}{t}\right) B_j, \qquad B_j\equiv \text {diag}(t\sigma _1, \dots , t\sigma _{j-1}, 0, \sigma _{j+1}, \dots , \sigma _n). \qquad (3.1)\)
This is a splitting of \(A_{j-1}\) since we are assuming that \(t> 1\) and also \(A_j - B_j = t\sigma _j\, e_j\otimes e_j,\)
which is a rank-one matrix. Iterating (3.1) we find \(A = \frac{1}{t^n} A_n + \left( 1-\frac{1}{t}\right) \sum _{j=1}^n \frac{1}{t^{j-1}} B_j \qquad (3.2)\)
and by construction this is a prelaminate. Thus rank-one convexity of E yields \(E(A)\le \frac{1}{t^n} E(A_n) + \left( 1-\frac{1}{t}\right) \sum _{j=1}^n \frac{1}{t^{j-1}} E(B_j)\le \frac{1}{t^n} E(tA),\)
since \(\det (B_j)=0\) for all \(1\le j\le n\), hence \(E(B_j)\le 0\), and also \(A_n= tA\) by definition.
In the case where A is not diagonal, we consider the singular value decomposition of Theorem 2.1, i.e. \(A=Q\Sigma R\) where \(Q,R\in \text {SO}(n)\) and \(\Sigma =\text {diag}(\sigma _1, \dots , \sigma _n)\). We see that (3.2) can be rewritten as \(\Sigma = \frac{1}{t^n} (t\Sigma ) + \left( 1-\frac{1}{t}\right) \sum _{j=1}^n \frac{1}{t^{j-1}} B_j\)
and so, multiplying this by Q and R, we get \(A = \frac{1}{t^n} (tA) + \left( 1-\frac{1}{t}\right) \sum _{j=1}^n \frac{1}{t^{j-1}} Q B_j R.\)
We still have \(\det (Q B_j R)=\det (B_j)=0\) and hence to finish the proof it suffices to show that this decomposition of A is still a prelaminate. For this, we use the following elementary fact:
The splittings used to obtain this prelaminate are rotated versions of (3.1), i.e. \(Q A_{j-1} R = \frac{1}{t}\, Q A_j R + \left( 1-\frac{1}{t}\right) Q B_j R,\) and this is still a legitimate splitting since \(Q A_j R - Q B_j R = Q\left( A_j - B_j\right) R\) still has rank one.
\(\square \)
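The prelaminate built in the proof of Lemma 3.3 can be reproduced numerically. The explicit matrices below (each \(B_j\) scales the first \(j-1\) diagonal entries by t and replaces the j-th entry by zero, with the final point \(tA\)) are our reconstruction of the construction sketched above:

```python
import numpy as np

def diagonal_prelaminate(sigma, t):
    """Split A = diag(sigma) as a prelaminate supported on t*A and on
    singular matrices B_1, ..., B_n (a sketch, t > 1)."""
    n = len(sigma)
    points, weights = [], []
    for j in range(1, n + 1):
        B = np.diag([t * s for s in sigma[:j - 1]] + [0.0] + list(sigma[j:]))
        points.append(B)
        weights.append((1 - 1 / t) / t ** (j - 1))
    points.append(t * np.diag(sigma))
    weights.append(1 / t ** n)
    return points, weights

sigma, t = [2.0, -1.0, 0.5], 3.0
points, weights = diagonal_prelaminate(sigma, t)
# The weights sum to one and the barycenter is A...
assert np.isclose(sum(weights), 1.0)
assert np.allclose(sum(w * P for P, w in zip(points, weights)), np.diag(sigma))
# ...and every B_j is singular, so E(B_j) <= 0 when E <= 0 on {det = 0}.
assert all(np.isclose(np.linalg.det(P), 0.0) for P in points[:-1])
```

Feeding these points and weights into the inequality defining rank-one convexity recovers the estimate \(t^n E(A)\le E(tA)\).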
As a simple consequence of the lemma, we find a rigidity result for decompositions of positively n-homogeneous integrands.
Proposition 3.4
Let \(E_1, E_2:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\) be rank-one convex integrands which are non-positive on \(\{\det =0\}\) and assume there is some positively n-homogeneous integrand F such that \(F=E_1+E_2\). Then each \(E_i\) is positively n-homogeneous.
Proof
Define the “homogenized” integrands \(E_i^h:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\) by \(E_i^h(A)\equiv |A|^n E_i\left( A/|A|\right) \) for \(A\ne 0\) and \(E_i^h(0)\equiv 0\),
so that the lemma yields \(E_i^h(A)\le E_i(A)\) for \(A\in U\equiv \{A\in {\mathbb {R}}^{n\times n}: |A|\ge 1\}.\)
Our claim is that \(E_i=E_i^h\). Since \(F= E_1+E_2\) it follows that \(E_1^h + E_2^h \le E_1 + E_2 = F \quad \text {in } U,\)
and we have equality on the sphere \(\{A\in {\mathbb {R}}^{n\times n}: |A|= 1\}\), where \(E_i=E_i^h\). As both sides of the inequality are positively n-homogeneous they must be equal in the whole set U and so \(E_i=E_i^h\) in U. An identical argument establishes equality in the complement of U. \(\square \)
Remark 3.5
The proofs of Lemma 3.3 and Proposition 3.4 are fairly robust. In particular, a similar statement holds if the integrands \(E_i\) are defined in \({\mathbb {R}}^{n\times n}_{\text {sym}}\) instead of \({\mathbb {R}}^{n\times n}\). Indeed, \({\mathbb {R}}^{n\times n}_{\text {sym}}\) is the set of (real) matrices that can be diagonalized by rotations. Thus, the prelaminate built in the proof of Lemma 3.3 has support in \({\mathbb {R}}^{n\times n}_{\text {sym}}\) if A is symmetric: for the nondiagonal case, one can take \(R=Q^{-1}\).
Returning to the case \(n=2\), it would be pleasant to have a result analogous to Proposition 3.4 for integrands vanishing on cones of aperture greater than one. However, this is not possible, since it would yield the extremality of the Burkholder function in the class of isotropic rank-one convex integrands. In order to see that this cannot be the case, we recall that in [42] Šverák introduced the rank-one convex function
which is related to the Burkholder function \(B_p\), when \(1<p<2\), by
see [4]. In particular, this shows that one cannot drop the homogeneity assumption from the results of [2].
4 Proof of extremality for truncated minors
This section is dedicated to the proof of Theorem 1.1. Although truncated minors are not linear along rank-one lines, they are piecewise linear along such lines. For this reason, it will be useful to have at our disposal the classification of rank-one affine integrands, which is due to Ball [6] in dimensions three or lower, Dacorogna [12] in higher dimensions and also Ball–Currie–Olver [5] in the case of higher order quasiconvexity. Given an open set \(\mathcal {O}\subset {\mathbb {R}}^{n\times n}\) and an integrand \(E:\mathcal {O}\rightarrow {\mathbb {R}}\) we say that E is rank-one affine if both E and \(-E\) are rank-one convex; such integrands are also often called Null Lagrangians or quasiaffine.
Theorem 4.1
Let \(\mathcal {O}\subset {\mathbb {R}}^{n\times n}\) be a connected open set and consider a rank-one affine integrand \(E:\mathcal {O}\rightarrow {\mathbb {R}}\). Then E(A) is an affine combination of the minors of A.
More precisely, let \({\mathbf {M}} (A)\) be the matrix consisting of the minors of A and let \(\tau (n)\equiv (2n)!/(n!)^2\) be its length. There is a constant \(c\in {\mathbb {R}}\) and a vector \(v\in {\mathbb {R}}^{\tau (n)}\) such that \(E(A) = c + v\cdot {\mathbf {M}}(A) \quad \text {for all } A\in \mathcal {O}.\)
This theorem is essentially a particular case of [5, Theorem 4.1], the only difference being that in that paper the authors deal only with integrands defined on the whole space. We briefly sketch how to adapt their proof to our case. The first result needed is the following:
Lemma 4.2
Let \(\mathcal {O}\subset {\mathbb {R}}^{n\times n}\) be open. A smooth integrand \(E:\mathcal {O}\rightarrow {\mathbb {R}}\) is rank-one affine if and only if, for any \(k\ge 2\), \(D^k E(A)[v_1\otimes w_1, \dots , v_k\otimes w_k] = 0\)
for all \(A\in \mathcal {O}\) and all \(v_i,w_i \in {\mathbb {R}}^n\) with \(w_1,\dots , w_k\) linearly dependent.
In particular, when \(\mathcal {O}\) is connected, any continuous rank-one affine integrand E is a polynomial of degree at most n.
We remark that our proof is very similar to the one in [6, Theorem 4.1].
Proof
We recall that a smooth integrand E is rank-one affine if and only if \(D^2 E(A)[v\otimes w, v\otimes w] = 0 \quad \text {for all } A\in \mathcal {O} \text { and } v,w\in {\mathbb {R}}^n, \qquad (4.1)\)
and so clearly one of the directions of the lemma holds. Hence let us assume that E is rank-one affine and fix some point \(A\in \mathcal {O}\). Define the 2k-tensor \(T:({\mathbb {R}}^n)^{2k}\rightarrow {\mathbb {R}}\) by \(T(v_1, \dots , v_k, w_1, \dots , w_k)\equiv D^k E(A)[v_1\otimes w_1, \dots , v_k\otimes w_k].\)
Moreover, since E is rank-one affine, T is alternating. This follows from the following claim: if \(w_j=w_l\) for some \(j\ne l\), then \(T(v_1, \dots , v_k, w_1, \dots , w_k) = 0.\)
To see why this claim is true, we note that it certainly holds when \(k=2\), since (4.1) implies that \(0 = D^2 E(A)[(v_1+v_2)\otimes w, (v_1+v_2)\otimes w] = 2\, D^2 E(A)[v_1\otimes w, v_2\otimes w],\)
since \(D^2 E(A)[v_1\otimes w, v_1 \otimes w]=0\) and the same with \(v_2\) in the place of \(v_1\). For a general \(k\ge 2\), we use implicit summation to see that
where \({\widehat{\cdot }}\) represents an omitted term. Now we can apply the \(k=2\) case to the term in square brackets to see that
as wished.
To prove the lemma under the assumption that E is smooth, let us take \(w_1, \dots , w_k\) linearly dependent, so we can suppose for simplicity that \(w_k=w_1+\dots + w_{k-1}\). Then \(T(v_1, \dots , v_k, w_1, \dots , w_{k-1}, w_k) = \sum _{j=1}^{k-1} T(v_1, \dots , v_k, w_1, \dots , w_{k-1}, w_j) = 0,\)
since T is linear and alternating. The last statement of the lemma follows by observing that the first part implies that \(D^{n+1}E(A)=0\) for all \(A\in \mathcal {O}\).
When E is merely continuous, let \(\rho \) be the standard mollifier and let \(\rho _\varepsilon (A)=\varepsilon ^{-n^2}\rho (A/\varepsilon )\) for \(\varepsilon >0\). Fix \(A\in \mathcal {O}\) and find an \(\varepsilon >0\) such that \(\text {dist}(A,\partial \mathcal {O})>\varepsilon \). Then \(E_\varepsilon \equiv \rho _\varepsilon *E\) is smooth and rank-one affine and hence \(D^k E_\varepsilon (A)[v_1\otimes w_1, \dots , v_k\otimes w_k] = 0\)
whenever \(w_1, \dots , w_k\) are linearly dependent. Since \(D^k E_\varepsilon \) converges to \(D^k E\) locally uniformly, the conclusion of the lemma follows. \(\square \)
Using the lemma, we see that in order to prove the theorem it suffices to consider rank-one affine integrands which are homogeneous polynomials, so let us take such an integrand E which is a homogeneous polynomial of some degree k. Given any \(A\in \mathcal {O}\), the total derivative \(D^k E(A)\) is a symmetric k-linear function \(D^k E(A):({\mathbb {R}}^{n\times n})^k\rightarrow {\mathbb {R}}\); we remark that this operator has as domain the whole matrix space and not just a subset of it. There is an isomorphism between the space of k-homogeneous rank-one affine integrands and the space of symmetric k-linear functions \(({\mathbb {R}}^{n\times n})^k\rightarrow {\mathbb {R}}\) and the proof in [5] is unchanged in our case.
We recall that a generic symmetric rank-one matrix is of the form \(c v\otimes v\) for some \(v\in {\mathbb {R}}^n\) with \(|v|=1\) and some \(c \in {\mathbb {R}}\). Hence, we have the following analogue of Lemma 4.2:
Lemma 4.3
Let \(\mathcal {O}\subset {\mathbb {R}}^{n\times n}_\text {sym}\) be open. A smooth integrand \(E:\mathcal {O}\rightarrow {\mathbb {R}}\) is rank-one affine if and only if, for any \(k\ge 2\),
for all \(A\in \mathcal {O}\) and all \(v_i,w_i \in {\mathbb {R}}^n\) with \(v_1,\dots , v_k\) linearly dependent.
From this we deduce, by the same arguments as in the situation above, the following result:
Theorem 4.4
Let \(\mathcal {O}\subset {\mathbb {R}}^{n\times n}_\text {sym}\) be a connected open set and consider a rank-one affine integrand \(E:\mathcal {O}\rightarrow {\mathbb {R}}\). Then there is a constant \(c\in {\mathbb {R}}\) and a vector \(v\in {\mathbb {R}}^{\tau (n)}\) such that \(E(A) = c + v\cdot {\mathbf {M}}(A) \quad \text {for all } A\in \mathcal {O}.\)
In order to apply Theorems 4.1 and 4.4 to the integrands we are interested in, we need to know that their supports are connected (in general, it is clear that any integrand with disconnected support cannot be extremal). Given a minor M, let \(\mathcal {O}_M\equiv \{M>0\}\). Moreover, each \(F_k\) has support in the set \(\mathcal {O}_k\equiv \left\{ A\in {\mathbb {R}}^{n\times n}_\text {sym} : \text {ind}(A)=k \text { and } \det A\ne 0\right\} .\)
Lemma 4.5
For any minor M the set \(\mathcal {O}_M\) is connected. Moreover, for \(k=0,\dots , n\), the sets \(\mathcal {O}_k\) are connected.
Proof
For the first part, let M be an \(s\times s\) minor. Let us make the identification \({\mathbb {R}}^{n\times n}\cong {\mathbb {R}}^{s\times s}\times {\mathbb {R}}^{n^2-s^2},\) where the first factor collects the entries of A appearing in M,
so that \(M(A)=\det (P_s(A))\), where \(P_s\) is the projection of \({\mathbb {R}}^{n\times n}\) onto \({\mathbb {R}}^{s\times s}\). Hence we see that, under this identification, \(\mathcal {O}_M = \left\{ B\in {\mathbb {R}}^{s\times s} : \det B>0\right\} \times {\mathbb {R}}^{n^2-s^2}.\)
Since both spaces are connected and the product of connected spaces is connected, \(\mathcal {O}_M\) is connected as well.
For the second part, note that the set \(\mathcal {O}_k\) is the set of matrices A for which there is some \(Q\in \text {SO}(n)\) and some diagonal matrix \(\Lambda =\text {diag}(a_1, \dots , a_k, b_1, \dots , b_{n-k})\), where \(a_i<0\) and \(b_j>0\), such that \(QAQ^T=\Lambda \). Clearly the set of \(\Lambda \)’s with this form can be connected to \(\text {diag}(-I_k, I_{n-k})\) by a path in \(\mathcal {O}_k\); here \(I_l\) is an \(l\times l\) identity matrix. Hence it suffices to prove that there is a continuous path in \(\mathcal {O}_k\) connecting \(A=Q\Lambda Q^T\) to \(\Lambda \). Such a path is given by \(A(t)=Q(t)A Q(t)^T\), where \(Q:[0,1]\rightarrow \text {SO}(n)\) is a continuous path with \(Q(0)=I, Q(1)=Q\). \(\square \)
We are finally ready to prove the extremality of truncated minors and of Šverák’s integrands.
Proof of Theorem 1.1
Let M be a minor and let \(E_1, E_2:{\mathbb {R}}^{n\times n}\rightarrow [0,\infty )\) be rank-one convex integrands such that \(M^+=E_1+E_2\). For concreteness, let us say
Each \(E_i\) is zero outside \(\mathcal {O}_M\) and, in this set, it is rank-one affine, so by Theorem 4.1 there are constants \(c_i\) and \(v_i \in {\mathbb {R}}^{\tau (n)}\) such that
and in fact, by continuity, this holds in the entire set \(\overline{\mathcal {O}_M}=\{M\ge 0\}\).
Clearly we must have \(c_i=0\). Let us write, for some vectors \(v_i^j\in {\mathbb {R}}^{{{n}\atopwithdelims (){j}}\times {{n}\atopwithdelims (){j}} }\),
where \({\mathbf {M}}_j(A)\) is the matrix of the j-th order minors of A (this is denoted by \(\text{ adj }_j(A)\) in [12]).
We observe that, given some s and some minor \(M'\) of order s, there is a matrix A such that \(M'\) is the only minor of order s that does not vanish at A. Indeed, if
then we can take a matrix A whose only non-zero entries are the entries \(a_{i'_\alpha j'_\alpha }\) for \(\alpha \in \{1,\dots , s\}\) and set these entries to one, so \(M'(A)=1\). Since all other entries of A are zero we see that all other minors of order s vanish at A. Note, moreover, that A has rank s.
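A quick numerical illustration of this observation (an illustrative sketch; the values of n, s and the chosen row and column sets are our own, not taken from the paper):

```python
# Construct A supported on one entry per chosen row/column pair, so that the
# minor M' with those rows and columns is the only non-vanishing minor of
# order s, and rank(A) = s.
import numpy as np
from itertools import combinations

n, s = 4, 2
rows, cols = (0, 2), (1, 3)   # the minor M' uses these rows and columns

A = np.zeros((n, n))
for r, c in zip(rows, cols):
    A[r, c] = 1.0             # the s x s submatrix of M' becomes the identity

# Enumerate every s x s minor of A and record which ones are non-zero.
nonzero = [(R, C)
           for R in combinations(range(n), s)
           for C in combinations(range(n), s)
           if abs(np.linalg.det(A[np.ix_(R, C)])) > 1e-12]

print(nonzero)                    # only ((0, 2), (1, 3)), i.e. M', survives
print(np.linalg.matrix_rank(A))   # 2, i.e. rank(A) = s
```

Any other choice of s rows and columns misses at least one of the non-zero entries, so the corresponding submatrix has a zero row and its determinant vanishes.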
The previous observation, applied with \(s=k\) and \(M=M'\), shows that for \(A\in \overline{\mathcal {O}_M}\) we have \(v_i^k\cdot {\mathbf {M}}_k(A)= \lambda _i M(A)\), where \(\lambda _i \in {\mathbb {R}}\) is the entry of \(v_i^k\) corresponding to M. We now prove that all the vectors \(v_i^j, j\ne k\), are zero.
Let \(j\le k\) be the lowest integer for which \(v_i^j\ne 0\) and suppose that \(j<k\). Given any minor \(M'\) of order j, say \(M'=e_\alpha \cdot {\mathbf {M}}_j\) for some \(\alpha \), there is an A with \(\text{ rank }(A)=j\) so that \(M'\) is the only minor of order j that does not vanish at A. Since A has rank j all of its \((j+1)\times (j+1)\) minors vanish and, in particular, \(M(A)=0\) and hence \(A\in \overline{\mathcal {O}_M}\). Since j is the lowest integer for which \(v_i^j\ne 0\) we have
Since \(\alpha \) was chosen arbitrarily, this is a contradiction and hence we have \(j=k\).
Let \(j\ge k\) be the highest integer for which \(v_i^j\ne 0\) and suppose that \(j>k\). Given any minor \(M'\) of order j, say \(M'=e_\alpha \cdot {\mathbf {M}}_j\) for some \(\alpha \), there is an A such that \(M'\) is the only minor of order j that does not vanish at A; moreover, by flipping the sign of the \(i_1\)-th row of A, if need be, we can assume that \(A \in \overline{\mathcal {O}_M}\). Since \(j>k\) is the highest integer for which \(v_i^j\ne 0\), by computing
and letting \(t\nearrow \infty \) we see that we must have \((v_1^j+v_2^j)_\alpha =0\) and also that the sign of \(E_i(t A)\) is, for large t, the sign of \((v_i^j)_\alpha M'(A)\); hence \((v_i^j)_\alpha =0\) for \(i=1,2\). Moreover, since \(\alpha \) was chosen arbitrarily we have \(v_i^j=0\) and we find a contradiction; thus \(j=k\).
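The scaling argument above rests on the fact that minors of order j are j-homogeneous, \({\mathbf {M}}_j(tA)=t^j{\mathbf {M}}_j(A)\), so \(E_i(tA)\) is a polynomial in t whose top-order coefficient controls its sign for large t. This homogeneity is easy to check numerically (an illustrative sketch with arbitrary test data):

```python
# Verify the j-homogeneity of minors: every j x j minor of tA equals t^j
# times the corresponding minor of A.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # a generic 3 x 3 test matrix
t = 2.5

def minors(B, j):
    """All j x j minors of the square matrix B, as a flat list."""
    n = B.shape[0]
    return [np.linalg.det(B[np.ix_(R, C)])
            for R in combinations(range(n), j)
            for C in combinations(range(n), j)]

for j in (1, 2, 3):
    lhs = np.array(minors(t * A, j))
    rhs = t**j * np.array(minors(A, j))
    assert np.allclose(lhs, rhs)   # M_j(tA) = t^j M_j(A)
print("homogeneity verified for j = 1, 2, 3")
```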
The two previous paragraphs show that, in \(\overline{\mathcal {O}_M}\),
and we already know that \(v_i^k\cdot {\mathbf {M}}_k(A)= \lambda _i M(A)\), so the proof that \(M^+\) is extremal is complete. The fact that \(M^-\) is extremal follows from the identity \(M^-=M^+(J\cdot )\), where \(J=(j_{\alpha \beta })\), \(j_{\alpha \beta }=\delta _{\alpha \beta }(1 -2\delta _{\alpha i_1}) ,\) so J changes the sign of the \(i_1\)-th row.
For the second part of the theorem take some \(k\in \{0,\dots , n\}\) and assume that there are rank-one convex integrands \(E_1, E_2:{\mathbb {R}}^{n\times n}_\text{ sym }\rightarrow [0,\infty )\) such that \(F_k=E_1+E_2\). The integrand \(F_k\) has support in \(\mathcal {O}_k\), which by Lemma 4.5 is connected, and in this set each \(E_i\) is rank-one affine, so Theorem 4.4 implies that there are \(c_i\in {\mathbb {R}}, v_i \in {\mathbb {R}}^{\tau (n)}\) such that
By continuity this in fact holds in \(\overline{\mathcal {O}_k}\). From Remark 3.5 we see that each \(E_i\) has to be positively n-homogeneous and therefore \(E_i= \alpha _i \det \) in \(\mathcal {O}_k\), where \(\alpha _i\) is the last entry of \(v_i\). Since \(E_i\ge 0\), after possibly changing the sign of \(\alpha _i\) we must have \(E_i=\alpha _i |\det |\) in \(\mathcal {O}_k\). Moreover, \(E_i=0\) outside \(\mathcal {O}_k\), and so indeed \(E_i=\alpha _i F_k\), as desired. \(\square \)
We note that, for the second part of the theorem, it is helpful to employ the homogeneity from Remark 3.5. Indeed, it is easy to see, and it follows in particular from the linear algebraic arguments in the proof of the full space case, that minors of a fixed order are linearly independent as functions on \({\mathbb {R}}^{n\times n}\). However, this is not the case if instead we think of them as functions defined over \({\mathbb {R}}^{n\times n}_\text{ sym }\), since there are non-trivial linear relations between minors. For instance, given a \(4\times 4\) matrix \(A=(a_{ij})\), we have that
This is a classical phenomenon and was already noted, for instance, in [5, pp. 155–156].
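A generic instance of such a relation can be checked numerically (an illustrative sketch; the particular pair of minors below is our own choice, not necessarily the one in the paper's display). For a symmetric matrix, the submatrix with rows R and columns C is the transpose of the one with rows C and columns R, so the two corresponding minors coincide as functions on \({\mathbb {R}}^{4\times 4}_\text{ sym }\) even though they are distinct as functions on \({\mathbb {R}}^{4\times 4}\):

```python
# Two distinct third-order minors of a 4 x 4 matrix that coincide whenever
# the matrix is symmetric, since then A[R, C] = A[C, R]^T.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                 # a generic symmetric 4 x 4 matrix

R, C = [0, 1, 2], [0, 1, 3]       # two different row/column selections
m1 = np.linalg.det(A[np.ix_(R, C)])   # minor with rows R, columns C
m2 = np.linalg.det(A[np.ix_(C, R)])   # minor with rows C, columns R

print(np.isclose(m1, m2))         # True on symmetric matrices
```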
5 Choquet theory and Morrey’s problem
In this section we shall see the implications of Choquet theory for Morrey’s problem. Let us introduce some notation: given a number \(d\in {\mathbb {N}}\), let \(Q_d=[0,1]^d\), denote by \(A_1, \dots , A_{2^d}\) its vertices and consider the cone
In a similar fashion we define the cone \(\mathcal {C}_d^{\text{ sc }}\) of non-negative separately convex functions on \(Q_d\) and when \(d=n\times m\) for some \(n,m\in {\mathbb {N}}\), we have the cones \(\mathcal {C}_d^{\text{ qc }}\) and \(\mathcal {C}_d^{\text{ rc }}\) of non-negative quasiconvex and rank-one convex integrands. These are closed convex cones in the locally convex vector space \({\mathbb {R}}^{Q_d}\) of real-valued functions on \(Q_d\); the topology on \({\mathbb {R}}^{Q_d}\) is the product topology, i.e. the topology of pointwise convergence. In particular, for any \(x\in Q_d\) the evaluation functionals \(\varepsilon _x:f\mapsto f(x)\) are continuous.
We claim that each of the above cones has a compact, convex base:
here we only take \(\Box \in \{\text{ qc }, \text{ rc }\}\) if \(d=n\times m\). Clearly each \(\mathcal {B}_d^\Box \) is a closed, convex base for \(\mathcal {C}_d^\Box \), so it suffices to see that \(\mathcal {B}^{\text{ sc }}_d\) is compact. For this, note that a separately convex function on \(Q_d\) attains its maximum at some \(A_i\) and, since all functions \(f\in \mathcal {B}^\text{ sc }_d\) are non-negative, we have \(f\le 1\) in \(Q_d\). This shows that \(\mathcal {B}_d^\text{ sc }\subset [0,1]^{Q_d}\), which is a compact set by Tychonoff’s Theorem, and our claim is proved.
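The key step in this compactness argument, namely that a separately convex function on \(Q_d\) attains its maximum at a vertex, is a consequence of convexity in each variable separately; a small numerical illustration (with an example integrand of our own choosing, not from the paper):

```python
# A separately convex function on the unit square (convex in each variable
# separately; this example happens to be jointly convex as well) attains its
# maximum at one of the four vertices.
import numpy as np
from itertools import product

f = lambda x, y: x**2 + y**2 - x * y

grid = np.linspace(0.0, 1.0, 101)
grid_max = max(f(x, y) for x in grid for y in grid)
vertex_max = max(f(x, y) for x, y in product([0.0, 1.0], repeat=2))

print(np.isclose(grid_max, vertex_max))  # True: the maximum sits at a vertex
```

Fixing one variable and maximizing the resulting convex function of the other pushes the maximizer to an edge, and repeating the argument on the edge pushes it to a vertex; the same induction works on \(Q_d\) for any d.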
The main tool of this section is the following powerful result:
Theorem 5.1
(Choquet) Let K be a metrizable, compact, convex subset of a locally convex vector space X. For each \(f\in K\) there is a regular probability measure \(\mu \) on K which is supported on the set \(\text{ Ext }(K)\) of extreme points of K and which represents the point f: for all \(\varphi \in X^*\),
For a proof see, for instance, [39, §3]. We note that in general—and this is also the case in our situation—the representing measure is not unique. In order to apply this theorem to \(\mathcal {B}_d^\Box \), we need to show that this set is metrizable and this can be done by using a simple result from point-set topology; for a proof see, for instance, [28, Lemma 10.45].
Lemma 5.2
Let K be a compact Hausdorff space. Then K is metrizable if and only if there is a countable family of continuous functions on K which separates points.
In our situation, it is easy to see that such a family exists: indeed, let \(x_n\in Q_d\) be a countable set of points which is dense in \(Q_d\) and consider the evaluation functionals \(\varepsilon _{x_n}:f\mapsto f(x_n)\) on \(\mathcal {B}_d^\Box \). These functionals are continuous on \(\mathcal {B}_d^\Box \) and separate points, since all elements of \(\mathcal {B}_d^\Box \) are continuous real-valued functions on \(Q_d\). Therefore the lemma implies that \(\mathcal {B}_d^\Box \) is metrizable, and hence Choquet’s theorem yields:
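The metrizability provided by Lemma 5.2 corresponds to an explicit metric: on a compact set, any continuous metric that separates points induces the topology, and the evaluation functionals at a countable dense set of points give one, namely \(d(f,g)=\sum _n 2^{-n}\min (1,|f(x_n)-g(x_n)|)\). A minimal sketch of this standard construction (the dense points and test functions are illustrative, taken in dimension one for simplicity):

```python
# Build the metric d(f, g) = sum_n 2^{-n} min(1, |f(x_n) - g(x_n)|) from a
# countable dense family of evaluation points, and check that it separates
# two continuous functions on [0, 1].
def dyadic_dense_points(depth):
    """Dyadic rationals in [0, 1]: a countable dense set (with repetitions)."""
    return [k / 2**j for j in range(1, depth + 1) for k in range(2**j + 1)]

def metric(f, g, points):
    return sum(2.0**(-n) * min(1.0, abs(f(x) - g(x)))
               for n, x in enumerate(points, start=1))

pts = dyadic_dense_points(6)
f = lambda x: x * x          # two continuous functions on [0, 1] ...
g = lambda x: x * x          # ... equal everywhere,
h = lambda x: x              # and one that differs away from the endpoints

print(metric(f, g, pts))      # 0.0: equal functions are at distance zero
print(metric(f, h, pts) > 0)  # True: the evaluations separate f and h
```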
Proposition 5.3
Fix \(\Box \in \{\text{ c },\text{ qc },\text{ rc },\text{ sc }\}\) and let \(\nu \) be a regular probability measure on \(Q_d\). The measure \(\nu \) satisfies \(f({\overline{\nu }} )\le \langle \nu ,f \rangle \) for all \(f\in \mathcal {C}_d^\Box \) if and only if \(g({\overline{\nu }} )\le \langle \nu ,g \rangle \) for all \(g\in \text{ Ext }(\mathcal {B}_d^\Box )\).
Proof
Assume that we have Jensen’s inequality for all \(g \in \text{ Ext }(\mathcal {B}_d^\Box )\), take any \(f\in \mathcal {B}_d^\Box \) and let \(\mu \) be the measure given by Choquet’s theorem. If we take \(\varphi =\varepsilon _{{\overline{\nu }}}\) in the theorem, we can apply Fubini’s theorem to see that
\( f({\overline{\nu }}) = \int _{\text{ Ext }(\mathcal {B}_d^\Box )} g({\overline{\nu }})\, \text {d}\mu (g) \le \int _{\text{ Ext }(\mathcal {B}_d^\Box )} \langle \nu , g \rangle \, \text {d}\mu (g) = \langle \nu , f \rangle . \)
Since any \(h\in \mathcal {C}_d^\Box \) can be written uniquely as \(h=\lambda f\) for some \(\lambda >0, f \in \mathcal {B}_d^\Box \), the conclusion follows. \(\square \)
Theorem 1.2 follows easily from Proposition 5.3. For the reader’s convenience, we restate the theorem here:
Theorem 5.4
Let \(d=n^2\) and take a Radon probability measure \(\nu \) supported in the interior of \(Q_d\). Then \(\nu \) is a laminate if and only if
\( g({\overline{\nu }} )\le \langle \nu ,g \rangle \)
for all integrands \(g\in \text{ Ext }(\mathcal {B}_d^\text{ rc })\).
Proof
From Pedregal’s Theorem, the measure \(\nu \) is a laminate if and only if Jensen’s inequality holds for every rank-one convex integrand \(f:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\):
\( f({\overline{\nu }}) \le \langle \nu , f \rangle . \)
Note that if this inequality holds for all non-negative rank-one convex integrands then it holds for any rank-one convex integrand: given any such f, one can consider the new integrand
which is non-negative and rank-one convex, hence by hypothesis \(f_k({\overline{\nu }})\le \langle \nu , f_k \rangle \). This in turn is equivalent to
Sending \(k\rightarrow \infty \) we find that \( f({\overline{\nu }}) \le \langle \nu , f\rangle \), as desired; note that f, being continuous, is bounded from below on \(Q_d\).
Therefore, from Proposition 5.3, the theorem follows once we show that any rank-one convex integrand \(g:Q_d\rightarrow [0,\infty )\) can be extended to a rank-one convex integrand \(f:{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\) with \(g=f\) in the support of \(\nu \). This is a standard result, see [42]. \(\square \)
We end this section with some cautionary comments concerning the previous results. In the one-dimensional case, where all of the above cones coincide, the extreme points are quite easy to identify; the oldest reference we found where this problem is discussed is [7], but see also [28, §14.1].
Proposition 5.5
The set of extreme points of \(\mathcal {B}_1\) is the set \(\{\varphi _y, \psi _y: y\in [0,1]\}\), where the functions \(\varphi _y, \psi _y\) are defined by
In higher dimensions the various cones are different. In the case of convex integrands, the set of extreme points of \(\mathcal {C}_d^\text{ c }\) for \(d>1\) is very different from the one-dimensional case: it is dense in the cone.
Theorem 5.6
Any finite continuous convex function on a convex domain \(U\subset {\mathbb {R}}^d\) can be approximated uniformly on convex compact subsets of U by extremal convex functions.
This result was proved by Johansen in [22] for \(d=2\) and then generalized to any \(d>1\) in [8]. In these papers the set of extremal convex functions is not fully identified, but it is shown that there is a sufficiently large class of extremal convex functions which approximate any given convex function well: these are certain polyhedral functions, i.e. functions of the form \(f=\max _{1\le i \le k} a_i\) for some affine functions \(a_1, \dots , a_k\). This disturbing situation, however, is not too unexpected given the result of Klee [26] already mentioned in the introduction. I do not know whether a similar statement holds for the cones \(\mathcal {C}_{n\times m}^\text{ qc }\) and \(\mathcal {C}_{n\times m}^\text{ rc }\).
Notes
We will refer to real-valued functions defined on a matrix space as integrands; this terminology is standard in the Calculus of Variations literature.
In fact, Šverák only conjectures extremality in the cone of quasiconvex integrands, so our results are in this sense slightly stronger than his conjecture.
References
Astala, K., Iwaniec, T., Martin, G.: Elliptic Partial Differential Equations and Quasiconformal Mappings in the Plane (PMS-48). Princeton University Press, Princeton (2009)
Astala, K., Iwaniec, T., Prause, I., Saksman, E.: Burkholder integrals, Morrey’s problem and quasiconformal mappings. J. Am. Math. Soc. 25(2), 507–531 (2012)
Astala, K., Iwaniec, T., Prause, I., Saksman, E.: A hunt for sharp \(L^p\)-estimates and rank-one convex variational integrals. Filomat 29(2), 245–261 (2015)
Baernstein, A., Montgomery-Smith, S.J.: Some conjectures about integral means of \(\partial f\) and \({\bar{\partial }} f\). Complex analysis and differential equations (Uppsala, 1997). Acta Univ. Ups. Skr. Uppsala Univ. C Organ. Hist. 64, 92–109 (1997)
Ball, J., Currie, J., Olver, P.: Null Lagrangians, weak continuity, and variational problems of arbitrary order. J. Funct. Anal. 41(2), 135–174 (1981)
Ball, J.M.: Convexity conditions and existence theorems in nonlinear elasticity. Arch. Ration. Mech. Anal. 63(4), 337–403 (1977)
Blaschke, W., Pick, G.: Distanzschätzungen im Funktionenraum II. Math. Ann. 77, 277–302 (1916)
Bronshtein, E.M.: Extremal convex functions. Sib. Math. J. 19(1), 6–12 (1978)
Burkholder, D.L.: Boundary value problems and sharp inequalities for martingale transforms. Ann. Probab. 12(3), 647–702 (1984)
Burkholder, D.L.: Sharp inequalities for martingales and stochastic integrals. Astérisque 157–158, 75–94 (1988)
Chen, C.Y., Kristensen, J.: On coercive variational integrals. Nonlinear Anal. Theory Methods Appl. 153, 213–229 (2017)
Dacorogna, B.: Direct Methods in the Calculus of Variations. Applied Mathematical Sciences, vol. 78. Springer, New York (2007)
Evans, L.C.: Quasiconvexity and partial regularity in the calculus of variations. Arch. Ration. Mech. Anal. 95(3), 227–252 (1986)
Faraco, D., Székelyhidi, L.: Tartar’s conjecture and localization of the quasiconvex hull in \( {\mathbb{R}}^{{2 \times 2}} \). Acta Math. 200(2), 279–305 (2008)
Faraco, D., Zhong, X.: Quasiconvex functions and Hessian equations. Arch. Ration. Mech. Anal. 168(3), 245–252 (2003)
Grabovsky, Y.: From microstructure-independent formulas for composite materials to rank-one convex, non-quasiconvex functions. Arch. Ration. Mech. Anal. 227(2), 607–636 (2018)
Harris, T.L.J., Kirchheim, B., Lin, C-c: Two-by-two upper triangular matrices and Morrey’s conjecture. Calc. Var. Partial Differ. Equ. 57(3), 73 (2018)
Harutyunyan, D., Milton, G.W.: Towards characterization of all \(3\times 3\) extremal quasiconvex quadratic forms. Commun. Pure Appl. Math. 70(11), 2164–2190 (2017)
Iwaniec, T.: Nonlinear Cauchy–Riemann operators in \({\mathbb{R}}^n\). Trans. Am. Math. Soc. 354(5), 1961–1995 (2002)
Iwaniec, T., Martin, G.: Geometric Function Theory and Non-linear Analysis. Clarendon Press, Oxford (2001)
Iwaniec, T., Sbordone, C.: Weak minima of variational integrals. J. Reine Angew. Math. (Crelles J.) 1994(454), 143–162 (1994)
Johansen, S.: The extremal convex functions. Math. Scand. 34(1), 61–68 (1974)
Kirchheim, B.: Geometry and rigidity of microstructures. Dissertation habilitation thesis, Universität Leipzig, (2001)
Kirchheim, B., Kristensen, J.: On rank one convex functions that are homogeneous of degree one. Arch. Ration. Mech. Anal. 221(1), 527–558 (2016)
Kirchheim, B., Székelyhidi, L.: On the gradient set of Lipschitz maps. J. Reine Angew. Math. (Crelles J.) 2008(625), 215–229 (2008)
Klee, V.: Some new results on smoothness and rotundity in normed linear spaces. Math. Ann. 139(1), 51–63 (1959)
Kružík, M.: Bauer’s maximum principle and hulls of sets. Calc. Var. Partial Differ. Equ. 11(3), 321–332 (2000)
Lukeš, J., Malý, J., Netuka, I., Spurný, J.: Integral Representation Theory: Applications to Convexity, Banach Spaces and Potential Theory. Walter de Gruyter, New York (2010)
Matoušek, J., Plecháč, P.: On functional separately convex hulls. Discrete Comput. Geom. 19(1), 105–130 (1998)
Meyers, N.G.: Quasi-convexity and lower semi-continuity of multiple variational integrals of any order. Trans. Am. Math. Soc. 119(1), 125 (1965)
Milton, G.W.: On characterizing the set of possible effective tensors of composites: the variational method and the translation method. Commun. Pure Appl. Math. 43(1), 63–125 (1990)
Morrey, C.B.: Quasi-convexity and lower semicontinuity of multiple integrals. Pac. J. Math. 2, 25–53 (1952)
Müller, S.: Variational models for microstructure and phase transitions. In: Calculus of Variations and Geometric Evolution Problems. Springer, Berlin, pp. 85–210 (1999)
Müller, S., Šverák, V.: Convex integration for Lipschitz mappings and counterexamples to regularity. Ann. Math. 157(3), 715–742 (2003)
Murat, F.: Compacité par Compensation. Ann. Sc. Norm. Super. Pisa Cl. Sci. 5(3), 489–507 (1978)
Pedregal, P.: Laminates and microstructure. Eur. J. Appl. Math. 4(02), 121–149 (1993)
Pedregal, P.: Some remarks on quasiconvexity and rank-one convexity. Proc. R. Soc. Edinb. Sect. A Math. 126(05), 1055–1065 (1996)
Pedregal, P., Šverák, V.: A note on quasiconvexity and rank-one convexity for \(2\times 2\) matrices. J. Convex Anal. 5, 107–118 (1998)
Phelps, R.R. (ed.): Lectures on Choquet’s Theorem, vol. 1757 of Lecture Notes in Mathematics. Springer, Berlin (2001)
Rindler, F.: Calculus of Variations. Universitext. Springer, Berlin (2018)
Sebestyén, G., Székelyhidi, L.: Laminates supported on cubes. J. Convex Anal. 24(4), 1217–1237 (2017)
Šverák, V.: Examples of rank-one convex functions. Proc. R. Soc. Edinb. Sect. A Math. 114(3–4), 237–242 (1990)
Šverák, V.: New examples of quasiconvex functions. Arch. Ration. Mech. Anal. 119(4), 293–300 (1992)
Šverák, V.: Rank-one convexity does not imply quasiconvexity. Proc. R. Soc. Edinb. Sect. A Math. 120(1–2), 185–189 (1992)
Székelyhidi, L.: Rank-one convex hulls in \({\mathbb{R}}^{2\times 2}\). Calc. Var. Partial Differ. Equ. 3, 253–281 (2005)
Tartar, L.: Compensated compactness and applications to partial differential equations. In: Nonlinear Analysis and Mechanics: Heriot-Watt Symposium, vol. 4 (1979)
Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council [EP/L015811/1]. I thank my supervisor Jan Kristensen for guidance throughout the process of obtaining these results as well as for carefully reading and making corrections to the drafts. I would also like to thank István Prause, Rita Teixeira da Costa and Lukas Koch for helpful discussions and comments.
Communicated by M. Struwe.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Guerra, A. Extremal rank-one convex integrands and a conjecture of Šverák. Calc. Var. 58, 201 (2019). https://doi.org/10.1007/s00526-019-1646-5