Extremal rank-one convex integrands and a conjecture of \v{S}ver\'ak

We show that in order to decide whether a given probability measure is a laminate it is enough to verify Jensen's inequality in the class of extremal non-negative rank-one convex integrands. We also identify a subclass of these extremal integrands, consisting of truncated minors, thus proving a conjecture made by Šverák in (Arch. Ration. Mech. Anal. 119:293-300, 1992).


Introduction
Since its introduction in the seminal work of Morrey [32], quasiconvexity has played an important role not just in the Calculus of Variations [11,13,21,40] but also in problems from other areas of mathematical analysis, for instance in the theory of compensated compactness [35,46]. Nonetheless, this concept is still poorly understood and has been mostly studied in relation with polyconvexity and rank-one convexity, which are respectively stronger and weaker notions that are easier to deal with (we refer the reader to Section 2 for terminology and notation). An outstanding open problem in the area is Morrey's problem, which is the problem of deciding whether rank-one convexity implies quasiconvexity, so that the two notions coincide. A fundamental example [44] of Šverák shows that this implication does not hold in dimensions 3 × 2 or higher and, more recently, Grabovsky [16] has found a different example in dimensions 8 × 2 which moreover is 2-homogeneous. The problem in dimensions 2 × 2, in particular, remains completely open, but in the last two decades evidence towards a positive answer in this case has been piling up [2,14,17,24,25,33,37,38,41].
A presumably easier (but still unsolved) version of Morrey's problem in dimensions 2 × 2 is to decide whether rank-one convex integrands on the space of 2 × 2 symmetric matrices are quasiconvex; here and in what follows we refer to real-valued functions defined on a matrix space as integrands, this terminology being standard in the Calculus of Variations literature. In this direction, Šverák introduced in [43] new quasiconvex integrands, which were later generalized in [15]. For any n × n symmetric matrix A, these integrands are defined by

F_k(A) ≡ |det A| if A has index k, and F_k(A) ≡ 0 otherwise,

for k = 0, . . . , n; we recall that the index of a matrix is the number of its negative eigenvalues. We also note that the integrand F_0 is sometimes called det^{++} in the literature, since its support is the set of positive definite matrices. These integrands have played an important role in the study of other problems related to the Calculus of Variations, for instance in building counterexamples to the regularity of elliptic systems [34] or in the computation of rank-one convex hulls of compact sets [45].
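As a concrete illustration, the integrands F_k are easy to evaluate numerically. The sketch below is ours, not from the paper: the helper names `index` and `F` are our own, and the index is computed by counting negative eigenvalues, as in the definition above.

```python
import numpy as np

def index(A, tol=1e-12):
    """Index of a symmetric matrix: the number of its negative eigenvalues."""
    return int(np.sum(np.linalg.eigvalsh(A) < -tol))

def F(k, A, tol=1e-12):
    """Sverak's integrand F_k on symmetric matrices:
    F_k(A) = |det A| if A has index k, and 0 otherwise."""
    A = np.asarray(A, dtype=float)
    return float(abs(np.linalg.det(A))) if index(A, tol) == k else 0.0

# F_0 is det^{++}: it is supported on the positive definite matrices.
print(F(0, np.eye(3)))            # 1.0
print(F(1, np.diag([-2., 3.])))   # 6.0
print(F(0, np.diag([-2., 3.])))   # 0.0
```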
In order to understand Šverák's motivation for considering these integrands it is worth making a small excursion into classical convex analysis. Given a real vector space V and a convex set K ⊂ V, one can define the set of extreme points of K as the set of points which are not contained in any open line segment contained in K. In general the set of extreme points might be very small: this is what happens, for instance, when the set is a convex cone C ⊂ V, since in this situation all non-zero vectors are contained in a ray through zero. However, if we can find a convex base B for C, then we note that such a ray corresponds to a unique point in B. If this is an extreme point of B then we say that we have an extremal ray.
We are interested in the extremal rays of the cone C of rank-one convex integrands. This cone has the inconvenient feature that it is not line-free: there is a set of elements v ∈ V such that, for any c ∈ C and any t ∈ R, the point c + tv is in C; this set is precisely C ∩ (−C). In turn, it is quite clear that this is the set of rank-one affine integrands. A reasonable way of disposing of rank-one affine integrands is to demand non-negativity of all integrands in C. This leads us to the definition of extremality considered by Šverák: we say that a non-negative rank-one convex integrand F is extremal if, whenever we have F = E_1 + E_2 with E_1, E_2 non-negative rank-one convex integrands, each E_i is a non-negative multiple of F. A weaker notion of extremality was introduced by Milton in [31] for the case of quadratic forms (see also [18]) but we shall not discuss it further here.
Let us now explain the relation between Šverák's integrands and extremality. In [43] Šverák observed that the polyconvex integrand |det| : R^{n×n} → [0, ∞) is not extremal, since |det| = det⁺ + det⁻, where as usual det^±(A) ≡ max(± det A, 0); these truncations are also polyconvex. He also observed that det^± are not extremal in R^{n×n}_sym, since there they decompose further as det⁺ = Σ_{k even} F_k and det⁻ = Σ_{k odd} F_k. However, he conjectured that det^± are extremal in R^{n×n} and also that each F_k is extremal in R^{n×n}_sym. In this paper we give an affirmative answer to both conjectures:

1.1 Theorem. Given a minor M : R^{n×n} → R, let M^± be its positive and negative parts. Then M^± are extremal non-negative rank-one convex integrands in R^{n×n}.
For k = 0, . . . , n, the integrands F_k : R^{n×n}_sym → R are extremal non-negative rank-one convex integrands in R^{n×n}_sym. (In fact, Šverák only conjectured extremality in the cone of quasiconvex integrands, so our results are in this sense slightly stronger than his conjecture.) As a main tool we use the fact that, on a connected open set, a rank-one affine integrand is an affine combination of minors; this is proved by localizing the arguments of Ball-Currie-Olver [5] concerning the classification of Null Lagrangians.
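The decomposition behind Šverák's observation can be spot-checked numerically. The following sketch is ours: for random symmetric matrices it verifies that det⁺ collects the even-index integrands F_k and det⁻ the odd-index ones (all helper names are our own).

```python
import numpy as np

rng = np.random.default_rng(0)

def F(k, A, tol=1e-9):
    # F_k(A) = |det A| if the symmetric matrix A has exactly k negative
    # eigenvalues, and 0 otherwise.
    return abs(np.linalg.det(A)) if np.sum(np.linalg.eigvalsh(A) < -tol) == k else 0.0

def det_plus(A):
    return max(np.linalg.det(A), 0.0)

def det_minus(A):
    return max(-np.linalg.det(A), 0.0)

n = 4
for _ in range(100):
    B = rng.standard_normal((n, n))
    A = (B + B.T) / 2                      # random symmetric matrix
    even = sum(F(k, A) for k in range(0, n + 1, 2))
    odd  = sum(F(k, A) for k in range(1, n + 1, 2))
    assert np.isclose(det_plus(A), even)   # det+ = sum over even indices
    assert np.isclose(det_minus(A), odd)   # det- = sum over odd indices
```

The sign of the determinant of an invertible symmetric matrix is (−1)^index, which is exactly what the two assertions exercise.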
The importance of extreme points in convex analysis has to do with the Krein-Milman theorem, which states that the closed convex hull of the set of extreme points of a compact, convex subset K of a locally convex vector space is the whole set K; informally, this means that the set of extreme points is a set of "minimal information" needed to recover K. However, Klee [26] showed that the Krein-Milman theorem is generically true for trivial reasons: if we fix an infinite-dimensional Banach space and consider the space of its compact, convex subsets (which we can equip with the Hausdorff distance so that it becomes a complete metric space), then for almost every compact convex set K the extreme points of K are dense in K. Here "almost every" means that the previous statement fails only on a meagre set. Despite this disconcerting result, the situation can be somewhat remedied with the help of Choquet theory, which roughly states that, under reasonable assumptions, an arbitrary point of K can be represented by a measure carried in the set of its extreme points. For precise statements and much more information concerning Choquet theory we refer the reader to the lecture notes [39] or to the monograph [28], and for Krein-Milman-type theorems for semi-convexity notions see [23,27,29]. Theorem 1.1 can be interpreted in light of Choquet theory. It is well-known (see, for instance, the lecture notes [33]) that Morrey's problem is equivalent to a dual problem: that of deciding whether the class of homogeneous gradient Young measures and the class of laminates are the same.
Fix a compactly supported Radon probability measure ν on R^{n×n}; for simplicity we assume that ν has support contained in the interior of the cube Q ≡ [0, 1]^{n×n}. In order to decide whether ν is a laminate, we can resort to an important theorem due to Pedregal [36], which states that ν is a laminate if and only if Jensen's inequality holds for any rank-one convex integrand f : R^{n×n} → R:

f(ν̄) ≤ ⟨ν, f⟩,

where ν̄ denotes the barycenter of ν. Since the class of rank-one convex integrands is rather large, the problem of deciding whether a measure is a laminate is in general very hard. However, it follows from Choquet theory that one does not need to test ν against all rank-one convex integrands, it being sufficient to test it against a strictly smaller class:

1.2 Theorem. Let A_1, . . . , A_{2^{n²}} be the vertices of the cube Q. A Radon probability measure ν supported on the interior of Q is a laminate if and only if g(ν̄) ≤ ⟨ν, g⟩ for all integrands g which are extreme points of the convex set

{ g : Q → [0, ∞) : g is rank-one convex and Σ_{i=1}^{2^{n²}} g(A_i) = 1 }.

We note that the summation condition is simply a normalization which corresponds to fixing a base of the cone of non-negative rank-one convex integrands on Q.
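To make Pedregal's criterion concrete, here is a small numerical sketch of ours; `jensen_gap` is a hypothetical helper measuring ⟨ν, f⟩ − f(ν̄) for a finitely supported measure. A two-atom measure along a rank-one direction passes the test, while the analogous construction along a rank-two direction fails it for the rank-one convex integrand −det, so that measure cannot be a laminate.

```python
import numpy as np

def jensen_gap(f, atoms, weights):
    """<nu, f> - f(nu_bar) for nu = sum_i weights[i] * delta_{atoms[i]};
    this is non-negative for every rank-one convex f iff nu is a laminate
    (Pedregal's theorem)."""
    bar = sum(w * A for w, A in zip(weights, atoms))          # barycenter
    return sum(w * f(A) for w, A in zip(weights, atoms)) - f(bar)

det = np.linalg.det

# A laminate: two atoms separated by a rank-one direction.
A = np.zeros((2, 2))
B = np.outer([1.0, 0.0], [0.0, 1.0])       # rank-one matrix
lam = [0.3, 0.7]
print(jensen_gap(det, [A, B], lam) >= -1e-12)                  # True
print(jensen_gap(lambda X: -det(X), [A, B], lam) >= -1e-12)    # True

# Not a laminate: atoms separated by a rank-two direction.
I = np.eye(2)
gap = jensen_gap(lambda X: -det(X), [I, -I], [0.5, 0.5])
print(gap)   # -1.0 < 0: Jensen fails, so the measure is not a laminate
```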
Our interest in extremal integrands was ignited by the work [3] of Astala-Iwaniec-Prause-Saksman, where it was shown that an integrand known as Burkholder's function is extreme in the class of homogeneous, isotropic, rank-one convex integrands; in fact, this integrand is also the least integrand in this class, in the sense that no other element of the class is below it at all points. The relevance of this fact is readily seen: from standard results about quasiconvex envelopes it follows immediately that the Burkholder function is either quasiconvex everywhere or quasiconvex nowhere. Burkholder's function was found in the context of martingale theory by Burkholder [9,10] and was later generalized to higher dimensions by Iwaniec [19]. This remarkable function is a bridge between Morrey's problem and important problems in Geometric Function Theory [1,20] and we refer the reader to the very interesting papers [2,3,19] and the references therein for more details in this direction.
Finally, we give a brief outline of the paper. Section 2 contains standard definitions, notation and briefly recalls some useful facts for the reader's convenience. Section 3 comprises results concerning improved homogeneity properties of rank-one convex integrands which vanish on isotropic cones. Section 4 is devoted to the proof of Theorem 1.1. Finally, Section 5 elaborates on the relation between Choquet theory and Morrey's problem and we prove Theorem 1.2.

Preliminaries
In this section we will gather a few definitions and notation for the reader's convenience. The material is standard and can be found for instance in the excellent references [12] and [33].
Consider an integrand E : R^{m×n} → R. We say that E is polyconvex if E(A) is a convex function of the minors of A, see [6]. If E is locally integrable, we say that E is quasiconvex if there is some bounded open set Ω ⊂ R^n such that, for all A ∈ R^{m×n} and all ϕ ∈ C^∞_0(Ω, R^m),

E(A) |Ω| ≤ ∫_Ω E(A + Dϕ(x)) dx.

This notion was introduced in [32] and generalized to higher-order derivatives by Meyers in [30]. The case of higher derivatives was also addressed in [5], where it was shown that quasiconvexity is not implied by the corresponding notion of rank-one convexity if m > 2 and n ≥ 2. The integrand E is rank-one convex if, for all matrices A, X ∈ R^{m×n} such that X has rank one, the function t ↦ E(A + tX) is convex. An equivalent definition is that E is rank-one convex if, whenever a collection (A_i, λ_i)_{i=1}^N satisfies the (H_N) conditions, we have E(Σ_{i=1}^N λ_i A_i) ≤ Σ_{i=1}^N λ_i E(A_i). By a slight abuse of terminology we will call a collection (A_i, λ_i)_{i=1}^N which satisfies the (H_N) conditions a prelaminate. It is also clear how to adapt the definition of rank-one convexity to the more general situation where E is defined on an open set O ⊂ R^{m×n}. Finally, E : R^d → R is separately convex if, for all x ∈ R^d and all i = 1, . . . , d, the function t ↦ E(x + t e_i) is convex; here e_1, . . . , e_d denotes the standard basis of R^d.
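For instance, the determinant is rank-one affine: t ↦ det(A + tX) is an affine polynomial whenever X has rank one. A quick finite-difference check (our own sketch, with a hypothetical helper `second_difference`):

```python
import numpy as np

rng = np.random.default_rng(3)

def second_difference(f, A, X, h=1e-3):
    """Discrete second derivative of t -> f(A + tX) at t = 0; it vanishes
    (up to rounding) when f is affine along the direction X."""
    return (f(A + h * X) - 2 * f(A) + f(A - h * X)) / h**2

det = np.linalg.det
for _ in range(20):
    A = rng.standard_normal((3, 3))
    v, w = rng.standard_normal(3), rng.standard_normal(3)
    X1 = np.outer(v, w)                        # rank-one direction
    assert abs(second_difference(det, A, X1)) < 1e-6   # det is rank-one affine

# Along a rank-two direction, det is genuinely nonlinear:
X2 = np.diag([1., 1., 0.])                     # rank-two matrix
print(second_difference(det, np.eye(3), X2))   # ~ 2, since det(I+tX2) = (1+t)^2
```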
We recall that all real-valued rank-one convex integrands are locally Lipschitz continuous, a fact that we will often use implicitly. Moreover, if (m, n) = (1, 1), all of the above notions reduce to ordinary convexity, while in general polyconvexity implies quasiconvexity and quasiconvexity implies rank-one convexity. The case n ≥ m = 2 is the content of Morrey's problem. We will only consider square matrices, so n = m.

Besides polyconvexity, the integrands det^± : R^{n×n} → [0, ∞) have two important properties: they are positively n-homogeneous (in fact, they are n-homogeneous if n is even) and isotropic. A generic integrand E : R^{n×n} → R is said to be p-homogeneous for a number p ≥ 1 if E(tA) = |t|^p E(A) for all A ∈ R^{n×n} and all t ∈ R; it is positively p-homogeneous if E(tA) = t^p E(A) holds only for t > 0. The integrand E is isotropic if it is invariant under the left- and right-SO(n) actions, that is, E(QAR) = E(A) for all A ∈ R^{n×n} and all Q, R ∈ SO(n). This condition can be understood in a somewhat more concrete way with the help of singular values. The singular values σ_1(A) ≥ · · · ≥ σ_n(A) ≥ 0 of a matrix A are the eigenvalues of the matrix √(AA^T). We shall consider the signed singular values, which coincide with the singular values except that the smallest one carries the sign of the determinant, i.e. σ_n(A) is replaced with sign(det A) σ_n(A). As we shall only deal with the signed singular values, the word signed will sometimes be omitted. The importance of singular values is largely due to the following standard result (cf. [12, §13]):

2.1 Theorem. Let A ∈ R^{n×n}. There are matrices Q, R ∈ SO(n) and real numbers |σ_n| ≤ σ_{n−1} ≤ · · · ≤ σ_1 such that A = Q diag(σ_1, . . . , σ_n) R.

With the help of this theorem we can reinterpret isotropy as follows: consider the polar decomposition A = Q √(A^T A) for some Q ∈ O(n) and consider a diagonalization of the positive-definite matrix, √(A^T A) = R^T diag(σ_1, . . . , σ_n) R with R ∈ SO(n). Here all the σ_j's are positive, but by flipping the sign of the last one if det A < 0 we can take Q ∈ SO(n). Hence, if E is isotropic, E(A) = E(diag(σ_1, . . . , σ_n)). So isotropic integrands are functions of the signed singular values.

When n = 2 isotropy is particularly simple to handle, since there is a simple way of understanding the signed singular values of a matrix A ∈ R^{2×2}. For this, we recall the conformal-anticonformal decomposition: writing A = (a_{ij}), we can write A = A_+ + A_- with

A_+ ≡ ( a  −b ; b  a ),   A_- ≡ ( c  d ; d  −c ),

where a = (a_{11} + a_{22})/2, b = (a_{21} − a_{12})/2, c = (a_{11} − a_{22})/2 and d = (a_{12} + a_{21})/2. This corresponds to the decomposition R^{2×2} = R^{2×2}_conf ⊕ R^{2×2}_aconf, which is orthogonal with respect to the Euclidean inner product. Here R^{2×2}_conf is the space of conformal matrices while R^{2×2}_aconf corresponds to the anticonformal matrices; these are the matrices that are scalar multiples of orthogonal matrices and have respectively positive and negative determinant. This decomposition is particularly useful for us because the singular values of A satisfy the identities

σ_1(A) = |A_+| + |A_-|,   σ_2(A) = |A_+| − |A_-|,

where |A_+| ≡ (a² + b²)^{1/2} and |A_-| ≡ (c² + d²)^{1/2}. In particular, the above formulae yield det A = |A_+|² − |A_-|². The above decomposition also allows one to identify R^{2×2} ≅ C² by the linear isomorphism A ↦ (z, w) ≡ (a + ib, c + id). The advantage of this identification is that we can say that an integrand E : C² → R is rank-one convex if and only if the function t ↦ E(z + tξ, w + tζ) is convex for all (z, w) ∈ C² and all (ξ, ζ) ∈ S¹ × S¹. The conformal-anticonformal decomposition of R^{2×2} is also related to an important rank-one convex integrand, commonly referred to as Burkholder's function. This function can be defined in any real or complex Hilbert space with norm |·| by

B_p(z, w) ≡ (|z| − (p* − 1)|w|) (|z| + |w|)^{p−1},

where p* − 1 = max(p − 1, (p − 1)^{−1}); here and in the rest of the paper, when referring to B_p, we implicitly assume that 1 < p < ∞. This remarkable function is zig-zag convex, i.e. t ↦ B_p(z + tξ, w + tζ) is convex whenever |ζ| ≤ |ξ|,
see [10], and it is also p-homogeneous. If the Hilbert space where B_p is defined is C, the zig-zag convexity of B_p implies that the Burkholder function B_p : C × C → R is rank-one convex. Since we are interested in non-negative integrands, we will also deal with the integrand B_p⁺ ≡ max(B_p, 0), which is also rank-one convex. Moreover, B_2⁺ = det⁺, so B_p⁺ can be seen as a "det⁺-type integrand", in the sense that it is rank-one convex, isotropic and vanishes on some cone; but it is more general, since it can be homogeneous with any degree of homogeneity strictly greater than one. We refer the reader to [19] for higher-dimensional generalizations of B_p and to [2] for extremality results concerning this integrand.
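The identities relating the conformal-anticonformal decomposition to the signed singular values can be tested numerically. In this sketch of ours, `conf_aconf` is our own helper and |A_±| is computed as the modulus of the complex number attached to each part:

```python
import numpy as np

rng = np.random.default_rng(1)

def conf_aconf(A):
    """Split a real 2x2 matrix into its conformal and anticonformal parts."""
    a = (A[0, 0] + A[1, 1]) / 2
    b = (A[1, 0] - A[0, 1]) / 2
    c = (A[0, 0] - A[1, 1]) / 2
    d = (A[0, 1] + A[1, 0]) / 2
    A_conf  = np.array([[a, -b], [b,  a]])
    A_aconf = np.array([[c,  d], [d, -c]])
    return A_conf, A_aconf

for _ in range(100):
    A = rng.standard_normal((2, 2))
    P, M = conf_aconf(A)
    assert np.allclose(P + M, A)
    zp = np.hypot(P[0, 0], P[1, 0])            # |A_+|
    zm = np.hypot(M[0, 0], M[0, 1])            # |A_-|
    s1, s2 = np.linalg.svd(A, compute_uv=False)
    s2 *= np.sign(np.linalg.det(A))            # signed singular value
    assert np.isclose(s1, zp + zm)             # sigma_1 = |A_+| + |A_-|
    assert np.isclose(s2, zp - zm)             # sigma_2 = |A_+| - |A_-|
    assert np.isclose(np.linalg.det(A), zp**2 - zm**2)
```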

Homogeneity properties of a class of rank-one convex integrands
In this section we discuss homogeneity properties of rank-one convex integrands which vanish on cones, both in two and in higher dimensions. We are interested in the family of isotropic cones of aperture a ≥ 1,

C_a ≡ {(z, w) ∈ C × C : |z| = a |w|},

motivated by the fact that the Burkholder function B_p vanishes on C_{p*−1}. When a = 1 we have C_1 = {det = 0}, which of course can be defined in any dimension.
3.1 Lemma. Let E : C × C → R be rank-one convex and assume that, for some a ≥ 1, E is non-positive on C_a. Let A = (z, w) be such that k ≡ |w|/|z| ≤ 1. Then, for t ≥ 1, E(A) ≤ h_a(t, k) E(tA), where h_a is an explicit positive function satisfying h_1(t, k) = t^{−2}.

Proof: Let us write, for real numbers x, y ∈ R, (x, y) ≡ (xz/|z|, yw/|w|) ∈ C × C, so A = (|z|, k|z|). We fix t > 1, since when t = 1 there is nothing to prove. Let us define auxiliary points A_1 and B_1, B_2 ∈ C_a; see Figure 1. Simple calculations show that A = λ_1 A_1 + (1 − λ_1) B_1 and A_1 = λ_2 tA + (1 − λ_2) B_2, and it is easy to verify that B_1 − A_1 and B_2 − tA are rank-one directions. One also needs to verify that λ_1, λ_2 ∈ [0, 1], which is a lengthy but elementary calculation using the fact that k ≤ 1 ≤ t. Observe that B_1, B_2 ∈ C_a and so E(B_1), E(B_2) ≤ 0. Therefore, from rank-one convexity, we have E(A) ≤ λ_1 E(A_1) ≤ λ_1 λ_2 E(tA), and a simple calculation shows that λ_1 λ_2 = h_a(t, k).
We now specialise the lemma to two important situations, in which one can say more. Let us first assume that k = 0.

3.2 Proposition. Let E : C × C → R be rank-one convex, positively p-homogeneous for some p ≥ 1 and not identically zero. If there is some a ≥ 1 such that E = 0 on C_a, then p ≥ 1/a + 1.

We note that this inequality is sharp: indeed, the zero set of the Burkholder function B_p is C_{p*−1} and so, when 1 < p < 2, we have 1/(p* − 1) + 1 = (p − 1) + 1 = p. Thus we can reinterpret this proposition as saying that, for 1 < p < 2, the Burkholder function has the least possible order of homogeneity among rank-one convex integrands which vanish on C_{p*−1}.
Proof: In this proof we assume that a > 1, since the case a = 1 follows from Lemma 3.3 below. We claim that there is some z ∈ C such that E(z, 0) > 0. Once this is shown, the conclusion follows easily: take k = 0 in Lemma 3.1 and A = (z, 0) to find that E(A) ≤ h_a(t, 0) E(tA) = t^p h_a(t, 0) E(A) ≡ F_a(t) E(A) for t ≥ 1. Since E(A) > 0, we must have F_a(t) ≥ 1 for all t ≥ 1. Moreover, F_a(1) = 1, so that F_a'(1) ≥ 0, and an elementary computation reveals that F_a'(1) is non-negative precisely when p ≥ 1/a + 1. To finish the proof it suffices to prove the claim. Take an arbitrary z ∈ S¹ and take any rank-one line segment starting at (z, 0) and having the other end-point in C_a; such a line must intersect C_1, since a > 1, say at P_z. Note that E(P_z) ≥ 0, since the function t ↦ E(tP_z) = t^p E(P_z) is convex. We conclude that E(z, 0) ≥ 0, with equality if and only if E(P_z) = 0, in which case E is identically zero along the entire rank-one line segment.
To prove the claim, we want to show that if E(z, 0) = 0 for all z ∈ C then E is identically zero, so let us make this assumption. Then, from the previous discussion, we see that E is identically zero on the "outside" of C_a, i.e. on C_a⁺ ≡ {(z, w) ∈ C × C : |z| ≥ a |w|}. Moreover, given any point P in the interior of C_a, there is a rank-one line segment through P with both endpoints, say P_1, P_2, in C_a⁺; this is the case because we assume a > 1. But E is zero in a neighbourhood of each P_i, and since it is convex along the rank-one line segment [P_1, P_2] we conclude that it is also zero at P.
We remark that, in one dimension, the only homogeneous extreme convex integrands are linear (c.f. Proposition 5.3) while, from the results of Section 4, for n > 1 there are extremal rank-one convex integrands in R n×n which are positively k-homogeneous for any k ∈ {1, . . . , n}. It would be interesting to know whether there are extremal homogeneous integrands with other degrees of homogeneity, or whether there is an upper bound for the order of homogeneity of such integrands.
If we set a = 1 in Lemma 3.1, so that C_1 = {det = 0}, we see that h_1(t, k) = t^{−2} and we find the estimate t² E(A) ≤ E(tA) for t ≥ 1. In fact, this holds in any dimension, and the proof is a simple variant of the proof of Lemma 3.1.

3.3 Lemma. Let E : R^{n×n} → R be a rank-one convex integrand which is non-positive on the cone {det = 0}. Then, for all A ∈ R^{n×n}, E satisfies

t^n E(A) ≤ E(tA) for t ≥ 1,   E(tA) ≤ t^n E(A) for 0 < t ≤ 1.

Proof: Let us begin by observing that the second inequality follows from the first. Indeed, given a matrix A ∈ R^{n×n} and 0 < t < 1, let B ≡ tA and apply the first inequality to B with the dilation factor 1/t to get (1/t)^n E(B) ≤ E(B/t) = E(A), that is, E(tA) ≤ t^n E(A); this can be done since 1 < 1/t. Hence we shall prove only the first inequality. We begin by proving the statement in the case where A is diagonal, so there are real numbers σ_j such that A = diag(σ_1, . . . , σ_n). Let A_0 ≡ A and define, for 1 ≤ j ≤ n,

A_j ≡ diag(tσ_1, . . . , tσ_j, σ_{j+1}, . . . , σ_n),
B_j ≡ diag(tσ_1, . . . , tσ_{j−1}, 0, σ_{j+1}, . . . , σ_n).
Hence, for any 1 ≤ j ≤ n,

A_{j−1} = (1 − 1/t) B_j + (1/t) A_j. (3.4)

This is a splitting of A_{j−1} since we are assuming that t > 1, and also A_j − B_j = diag(0, . . . , 0, tσ_j, 0, . . . , 0), which is a rank-one matrix. Iterating (3.4) we find

A = ∑_{j=1}^n (1 − 1/t) t^{−(j−1)} B_j + t^{−n} A_n, (3.5)

and by construction this is a prelaminate. Thus rank-one convexity of E yields

E(A) ≤ ∑_{j=1}^n (1 − 1/t) t^{−(j−1)} E(B_j) + t^{−n} E(A_n) ≤ t^{−n} E(tA),

since det(B_j) = 0 for all 1 ≤ j ≤ n, hence E(B_j) ≤ 0, and also A_n = tA by definition.
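The diagonal construction above is easy to verify directly. The following sketch is ours (`A_j` and `B_j` mirror the matrices defined in the proof); it checks that each step is a convex splitting along a rank-one direction, that each B_j lies on {det = 0}, and that the chain ends at A_n = tA.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 4, 2.5
sigma = rng.standard_normal(n)
A = np.diag(sigma)

def A_j(j):
    # diag(t*s_1, ..., t*s_j, s_{j+1}, ..., s_n)
    return np.diag(np.concatenate([t * sigma[:j], sigma[j:]]))

def B_j(j):
    # as A_j, but with a zero in slot j, so that det(B_j) = 0
    d = np.concatenate([t * sigma[:j], sigma[j:]])
    d[j - 1] = 0.0
    return np.diag(d)

for j in range(1, n + 1):
    split = (1 - 1/t) * B_j(j) + (1/t) * A_j(j)
    assert np.allclose(split, A_j(j - 1))                 # convex splitting (3.4)
    assert np.linalg.matrix_rank(A_j(j) - B_j(j)) == 1    # rank-one direction
    assert np.isclose(np.linalg.det(B_j(j)), 0.0)         # B_j on {det = 0}
assert np.allclose(A_j(0), A) and np.allclose(A_j(n), t * A)
```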
In the case where A is not diagonal, we consider the singular value decomposition of Theorem 2.1, i.e. A = QΣR where Q, R ∈ SO(n) and Σ = diag(σ_1, . . . , σ_n). Applying (3.5) to Σ, we see that it can be rewritten as

Σ = ∑_{j=1}^n (1 − 1/t) t^{−(j−1)} B_j + t^{−n} (tΣ),

and so, multiplying this by Q and R, we get

A = ∑_{j=1}^n (1 − 1/t) t^{−(j−1)} Q B_j R + t^{−n} (tA).

We still have det(Q B_j R) = det(B_j) = 0, and hence to finish the proof it suffices to show that this decomposition of A is still a prelaminate. For this, we use the following elementary fact: for all A ∈ R^{n×n} and M ∈ GL(n), rank(AM) = rank(MA) = rank(A).
The splittings used to obtain this prelaminate are rotated versions of (3.4), i.e.

Q A_{j−1} R = (1 − 1/t) Q B_j R + (1/t) Q A_j R,

and this is still a legitimate splitting since rank(Q A_j R − Q B_j R) = rank(Q (A_j − B_j) R) = rank(A_j − B_j) = 1. As a simple consequence of the lemma, we find a rigidity result for decompositions of positively n-homogeneous integrands.
3.7 Proposition. Let E 1 , E 2 : R n×n → R be rank-one convex integrands which are non-positive on {det = 0} and assume there is some positively n-homogeneous integrand F such that F = E 1 + E 2 . Then each E i is positively n-homogeneous.
Proof: Define the "homogenized" integrands E_i^h : R^{n×n} → R by E_i^h(A) ≡ |A|^n E_i(A/|A|) for A ≠ 0 and E_i^h(0) ≡ 0.

Our claim is that E_i ≥ E_i^h in U ≡ {A ∈ R^{n×n} : |A| ≥ 1}. Indeed, for A ∈ U we may write A = tA' with A' ≡ A/|A| and t ≡ |A| ≥ 1, so Lemma 3.3 yields E_i(A) = E_i(tA') ≥ t^n E_i(A') = E_i^h(A), and we have equality on the sphere {A ∈ R^{n×n} : |A| = 1}, where E_i = E_i^h. Summing over i and using that F is positively n-homogeneous, we find E_1^h + E_2^h = F = E_1 + E_2; combined with the inequalities E_i ≥ E_i^h, this forces E_i = E_i^h in the whole set U. An identical argument, using the second inequality of Lemma 3.3, establishes equality in the complement of U, and so each E_i = E_i^h is positively n-homogeneous.
3.8 Remark. The proofs of Lemma 3.3 and Proposition 3.7 are fairly robust. In particular, a similar statement holds if the integrands E i are defined in R n×n sym instead of R n×n . Indeed, R n×n sym is the set of (real) matrices that can be diagonalized by rotations. Thus, the prelaminate built in the proof of Lemma 3.3 has support in R n×n sym if A is symmetric: for the nondiagonal case, one can take R = Q −1 .
Returning to the case n = 2, it would be pleasant to have a result analogous to Proposition 3.7 for integrands vanishing on cones of aperture greater than one. However, this is not possible, since it would yield the extremality of the Burkholder function in the class of isotropic rank-one convex integrands. In order to see that this cannot be the case, we recall that Šverák introduced in [42] a rank-one convex function which, when 1 < p < 2, is closely related to the Burkholder function B_p; see [4]. In particular, this shows that one cannot drop the homogeneity assumption from the results of [2].

Proof of extremality for truncated minors
This section is dedicated to the proof of Theorem 1.1. Although truncated minors are not linear along rank-one lines, they are piecewise linear along such lines. For this reason, it will be useful to have at our disposal the classification of rank-one affine integrands, which is due to Ball [6] in dimensions three or lower, Dacorogna [12] in higher dimensions and also Ball-Currie-Olver [5] in the case of higher order quasiconvexity. Given an open set O ⊂ R n×n and an integrand E : O → R we say that E is rank-one affine if both E and −E are rank-one convex; such integrands are also often called Null Lagrangians or quasiaffine.

4.1 Theorem. Let O ⊂ R^{n×n} be a connected open set and consider a rank-one affine integrand E : O → R. Then E(A) is an affine combination of the minors of A.
More precisely, let M(A) be the matrix consisting of the minors of A and let τ(n) ≡ (2n)!/(n!)² be its length. There is a constant c ∈ R and a vector v ∈ R^{τ(n)} such that

E(A) = c + v · M(A) for all A ∈ O.

This theorem is essentially a particular case of [5, Theorem 4.1], the only difference being that there the authors deal only with integrands defined on the whole space. We briefly sketch how to adapt their proof to our case. The first result needed is the following:

4.2 Lemma. Let O ⊂ R^{n×n} be open. A smooth integrand E : O → R is rank-one affine if and only if, for any k ≥ 2,

D^k E(A)[v_1 ⊗ w_1, . . . , v_k ⊗ w_k] = 0

for all A ∈ O and all v_i, w_i ∈ R^n with w_1, . . . , w_k linearly dependent.
In particular, when O is connected, any continuous rank-one affine integrand E is a polynomial of degree at most n.
We remark that our proof is very similar to the one in [6,Theorem 4.1].
Proof: We recall that a smooth integrand E is rank-one affine if and only if

D² E(A)[v ⊗ w, v ⊗ w] = 0 for all A ∈ O and all v, w ∈ R^n, (4.3)

and so clearly one of the directions of the lemma holds. Hence let us assume that E is rank-one affine and fix some point A ∈ O. Define the 2k-tensor T : (R^n)^{2k} → R by

T[v_1, . . . , v_k, w_1, . . . , w_k] ≡ D^k E(A)[v_1 ⊗ w_1, . . . , v_k ⊗ w_k].

Moreover, since E is rank-one affine, T is alternating in the w-arguments. This follows from the following claim: if w_j = w_l for some j ≠ l, then T[v_1, . . . , v_k, w_1, . . . , w_k] = 0. To see why this claim is true, we note that it certainly holds when k = 2, since (4.3), applied to v_1 + v_2, to v_1 and to v_2 and then polarized, implies that D²E(A)[v_1 ⊗ w, v_2 ⊗ w] = 0. For a general k ≥ 2, note that D^{k−2}E(·)[v_1 ⊗ w_1, . . . , · , . . . , · , . . . , v_k ⊗ w_k], where · represents an omitted term (the slots j and l), is again a smooth rank-one affine integrand. Now we can apply the k = 2 case to this integrand to see that T[v_1, . . . , v_k, w_1, . . . , w_k] = 0, as wished.
To prove the lemma under the assumption that E is smooth, let us take w_1, . . . , w_k linearly dependent, so we can suppose for simplicity that w_k = w_1 + · · · + w_{k−1}. Then T[v_1, . . . , v_k, w_1, . . . , w_{k−1}, w_1 + · · · + w_{k−1}] = 0, since T is linear in each argument and alternating. The last statement of the lemma follows by observing that the first part implies that D^{n+1} E(A) = 0 for all A ∈ O, since any n + 1 vectors in R^n are linearly dependent.
When E is merely continuous, let ρ be the standard mollifier on R^{n×n} and let ρ_ε(A) ≡ ε^{−n²} ρ(A/ε) for ε > 0. Fix A ∈ O and find an ε > 0 such that dist(A, ∂O) > ε. Then E_ε ≡ ρ_ε ∗ E is smooth and rank-one affine near A, and hence

D^k E_ε(A)[v_1 ⊗ w_1, . . . , v_k ⊗ w_k] = 0

whenever w_1, . . . , w_k are linearly dependent. Since D^k E_ε converges to D^k E locally uniformly, the conclusion of the lemma follows.
Using the lemma, we see that in order to prove the theorem it suffices to consider rank-one affine integrands which are homogeneous polynomials, so let us take such an integrand E which is a homogeneous polynomial of some degree k. Given any A ∈ O, the total derivative D k E(A) is a symmetric k-linear function D k E(A) : (R n×n ) k → R; we remark that this operator has as domain the whole matrix space and not just a subset of it. There is an isomorphism between the space of k-homogeneous rank-one affine integrands and the space of symmetric k-linear functions (R n×n ) k → R and the proof in [5] is unchanged in our case.
We recall that a generic symmetric rank-one matrix is of the form cv ⊗ v for some v ∈ R^n with |v| = 1 and some c ∈ R. Hence, we have the following analogue of Lemma 4.2:

4.4 Lemma. Let O ⊂ R^{n×n}_sym be open. A smooth integrand E : O → R is rank-one affine if and only if, for any k ≥ 2,

D^k E(A)[v_1 ⊗ v_1, . . . , v_k ⊗ v_k] = 0

for all A ∈ O and all v_i ∈ R^n with v_1, . . . , v_k linearly dependent.
From this we deduce, by the same arguments as in the situation above, the following result:

4.5 Theorem. Let O ⊂ R^{n×n}_sym be a connected open set and consider a rank-one affine integrand E : O → R. Then there is a constant c ∈ R and a vector v ∈ R^{τ(n)} such that E(A) = c + v · M(A) for all A ∈ O.

In order to apply Theorems 4.1 and 4.5 to the integrands we are interested in, we need to know that their supports are connected (in general, it is clear that any integrand with disconnected support cannot be extremal). Given a minor M, let O_M ≡ {M > 0}. Moreover, each F_k has support in the set O_k ≡ {A ∈ R^{n×n}_sym : A has index k and is invertible}.

4.6 Lemma. The sets O_M and O_k are connected.

Proof: For the first part, let M be an s × s minor; after permuting rows and columns, we may make the identification R^{n×n} ≅ R^{s×s} × R^{n²−s²} so that M(A) = det(P_s(A)), where P_s is the projection of R^{n×n} onto R^{s×s}. Hence we see that, under this identification, O_M ≅ {B ∈ R^{s×s} : det B > 0} × R^{n²−s²}. Since both spaces are connected and the product of connected spaces is connected, O_M is connected as well.
For the second part, note that O_k is the set of matrices A for which there is some Q ∈ SO(n) and some diagonal matrix Λ = diag(a_1, . . . , a_k, b_1, . . . , b_{n−k}), where a_i < 0 and b_j > 0, such that QAQ^T = Λ. Clearly the set of Λ's with this form can be connected to diag(−I_k, I_{n−k}) by a path in O_k; here I_l is an l × l identity matrix. Hence it suffices to prove that there is a continuous path in O_k connecting A = Q^T Λ Q to Λ. Such a path is given by t ↦ Q_t^T Λ Q_t, where (Q_t)_{t∈[0,1]} is a continuous path in SO(n) with Q_0 = Q and Q_1 = I_n, which exists since SO(n) is path-connected; each Q_t^T Λ Q_t has index k, so the path stays in O_k.

We are finally ready to prove the extremality of truncated minors and of Šverák's integrands.

Proof: Let M be a minor of order k and suppose that M⁺ = E_1 + E_2, where E_1, E_2 are non-negative rank-one convex integrands. In the connected open set O_M each E_i is rank-one affine, so Theorem 4.1 gives constants c_i ∈ R and vectors v_i such that E_i = c_i + v_i · M(·) in O_M. Clearly we must have c_i = 0. Let us write, for some vectors v_i^j ∈ R^{(n choose j) × (n choose j)},

E_i(A) = ∑_{j=1}^n v_i^j · M^j(A) for A ∈ O_M,

where M^j(A) is the matrix of the j-th order minors of A (this is denoted by adj_j(A) in [12]). Our goal is to show that E_i = λ_i M in O_M, where λ_i ∈ R is the entry of v_i^k corresponding to M. We now prove that all the vectors v_i^j, j ≠ k, are zero.

Let j ≤ k be the lowest integer for which v_i^j ≠ 0 and suppose that j < k. Given any minor M' of order j, say M' = e_α · M^j for some α, there is an A with rank(A) = j so that M' is the only minor of order j that does not vanish at A. Since A has rank j, all of its (j + 1) × (j + 1) minors vanish and, in particular, M(A) = 0, so A lies in the closure of O_M. Since j is the lowest integer for which v_i^j ≠ 0, we have, by continuity, 0 ≤ E_i(A) = (v_i^j)_α M'(A); as the sign of M'(A) may be chosen freely, (v_i^j)_α = 0. Since α was chosen arbitrarily, this is a contradiction, and hence we have j = k.
Let j ≥ k be the highest integer for which v_i^j ≠ 0 and suppose that j > k. Given any minor M' of order j, say M' = e_α · M^j for some α, there is an A such that M' is the only minor of order j that does not vanish at A; moreover, by flipping the sign of the i_1-th row of A, if need be, we can assume that A ∈ O_M. Since j > k is the highest integer for which v_i^j ≠ 0, by computing E_1(tA) + E_2(tA) = M⁺(tA) = t^k M(A) and sending t ↗ ∞ we see that we must have (v_1^j + v_2^j)_α = 0, and also that the sign of E_i(tA) is, for large t, the sign of (v_i^j)_α M'(A); hence (v_i^j)_α = 0 for i = 1, 2. Moreover, since α was chosen arbitrarily we have v_i^j = 0 and we find a contradiction; thus j = k. The two previous paragraphs show that, in O_M, E_i(A) = v_i^k · M^k(A), and we already know that v_i^k · M^k(A) = λ_i M(A), so the proof that M⁺ is extremal is complete. The fact that M⁻ is extremal follows from the identity M⁻ = M⁺(J ·), where J = (j_{αβ}), j_{αβ} = δ_{αβ}(1 − 2δ_{α i_1}), so that J changes the sign of the i_1-th row.
For the second part of the theorem, take some k ∈ {0, . . . , n} and assume that there are non-negative rank-one convex integrands E_1, E_2 : R^{n×n}_sym → R with F_k = E_1 + E_2. The integrand F_k has support in O_k, which by Lemma 4.6 is connected, and in this set each E_i is rank-one affine, so Theorem 4.5 implies that there are c_i ∈ R, v_i ∈ R^{τ(n)} such that E_i = c_i + v_i · M(·) in O_k. By continuity this in fact holds in the closure of O_k. From Remark 3.8 we see that each E_i has to be positively n-homogeneous and therefore E_i = α_i det in O_k, where α_i is the last entry of v_i. Since E_i ≥ 0 we must have, by possibly changing the sign of α_i, E_i = α_i |det| in O_k. Moreover, E_i = 0 outside the closure of O_k, and so indeed E_i = α_i F_k, as wished.
We note that, for the second part of the theorem, it is helpful to employ the homogeneity from Remark 3.8. Indeed, it is easy to see, and it follows in particular from the linear algebraic arguments in the proof of the full-space case, that minors of a fixed order are linearly independent as functions on R^{n×n}. However, this is not the case if instead we think of them as functions defined on R^{n×n}_sym, since there are non-trivial linear relations between minors. For instance, given a symmetric 4 × 4 matrix A = (a_{ij}), three of its second-order minors satisfy −(a_{13}a_{24} − a_{14}a_{23}) + (a_{12}a_{34} − a_{14}a_{23}) − (a_{12}a_{34} − a_{13}a_{24}) = 0. This is a classical phenomenon and had already been noted, for instance, in [5, pages 155-156].
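The displayed relation between second-order minors of a symmetric 4 × 4 matrix can be confirmed numerically (a sketch of ours, with `minor` as an ad hoc helper):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                  # random symmetric 4x4 matrix

def minor(A, rows, cols):
    return np.linalg.det(A[np.ix_(rows, cols)])

# Three second-order minors that become linearly dependent on symmetric
# matrices (as functions on all of R^{4x4} they are independent):
m1 = minor(A, [0, 1], [2, 3])      # a13*a24 - a14*a23
m2 = minor(A, [0, 2], [1, 3])      # a12*a34 - a14*a32
m3 = minor(A, [0, 3], [1, 2])      # a12*a43 - a13*a42
assert abs(-m1 + m2 - m3) < 1e-12  # the relation from the text
```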

Choquet theory and Morrey's problem
In this section we shall see the implications of Choquet theory for Morrey's problem. Let us introduce some notation: given a number d ∈ N, let Q_d ≡ [0, 1]^d, denote by A_1, . . . , A_{2^d} its vertices and consider the cone C^c_d of non-negative convex functions on Q_d. In a similar fashion we define the cone C^sc_d of non-negative separately convex functions on Q_d and, when d = n × m for some n, m ∈ N, we have the cones C^qc_d and C^rc_d of non-negative quasiconvex and rank-one convex integrands. These are closed convex cones in the locally convex vector space R^{Q_d} of real-valued functions on Q_d; the topology on R^{Q_d} is the product topology, i.e. the topology of pointwise convergence. In particular, for any x ∈ Q_d the evaluation functionals ε_x : f ↦ f(x) are continuous.
We claim that each of the above cones has a compact, convex base: for • ∈ {c, sc, qc, rc} we may take

B^•_d ≡ { f ∈ C^•_d : ∑_{i=1}^{2^d} f(A_i) = 1 }.

The main tool of this section is the following powerful result:

5.1 Theorem (Choquet). Let K be a metrizable, compact, convex subset of a locally convex vector space X. For each f ∈ K there is a regular probability measure µ on K which is supported on the set Ext(K) of extreme points of K and which represents the point f: for all ϕ ∈ X*,

ϕ(f) = ∫_K ϕ(g) dµ(g).

For a proof see, for instance, [39, §3]. We note that in general (and this is also the case in our situation) the representing measure is not unique. In order to apply this theorem to B_d, we need to show that this set is metrizable, and this can be done by using a simple result from point-set topology:

5.2 Lemma. A compact subset of a locally convex vector space is metrizable provided there is a countable family of continuous functionals on it which separates points.

In our situation, it is easy to see that such a family exists: indeed, let (x_n) ⊂ Q_d be a countable set of points which is dense in Q_d and consider the evaluation functionals ε_{x_n} : f ↦ f(x_n) on B_d. These functionals are continuous on B_d and separate points, since all elements of B_d are continuous real-valued functions on Q_d. Therefore the lemma implies that B_d is metrizable, and hence Choquet's theorem yields:

5.3 Proposition. Fix • ∈ {c, qc, rc, sc} and let ν be a regular probability measure on Q_d. The measure ν satisfies f(ν̄) ≤ ⟨ν, f⟩ for all f ∈ C^•_d if and only if g(ν̄) ≤ ⟨ν, g⟩ for all g ∈ Ext(B^•_d).
Proof: Assume that we have Jensen's inequality for all g ∈ Ext(B_d), take any f ∈ B_d and let µ be the measure given by Choquet's theorem. Taking ϕ = ε_{ν̄} in the theorem, we get f(ν̄) = ∫ g(ν̄) dµ(g); similarly, taking ϕ = ε_x for x ∈ Q_d and applying Fubini's theorem, we see that ⟨ν, f⟩ = ∫ ⟨ν, g⟩ dµ(g). Hence

f(ν̄) = ∫ g(ν̄) dµ(g) ≤ ∫ ⟨ν, g⟩ dµ(g) = ⟨ν, f⟩.

Since any h ∈ C_d can be written uniquely as h = λf for some λ > 0 and f ∈ B_d, the conclusion follows.

Theorem 1.2 follows easily from Proposition 5.3. For the reader's convenience, we restate the theorem here:

5.4 Theorem. Let d = n² and take a Radon probability measure ν supported in the interior of Q_d. Then ν is a laminate if and only if g(ν̄) ≤ ⟨ν, g⟩ for all integrands g ∈ Ext(B^rc_d).
Proof: From Pedregal's theorem, the measure ν is a laminate if and only if Jensen's inequality f(ν̄) ≤ ⟨ν, f⟩ holds for every rank-one convex integrand f : R^{n×n} → R. Note that if this inequality holds for all non-negative rank-one convex integrands then it holds for any rank-one convex integrand: given any such f, one can consider, for k ∈ N, the new integrand f_k ≡ max(f, −k) + k, which is non-negative and rank-one convex, hence by hypothesis f_k(ν̄) ≤ ⟨ν, f_k⟩. This in turn is equivalent to max(f(ν̄), −k) ≤ ⟨ν, max(f, −k)⟩.
Sending k → ∞ we find that f(ν̄) ≤ ⟨ν, f⟩, as we wished; note that f, being continuous, is bounded from below on Q_d. Therefore, from Proposition 5.3, the theorem follows once we show that any rank-one convex integrand g : Q_d → [0, ∞) can be extended to a rank-one convex integrand f : R^{n×n} → R with g = f on the support of ν. This is a standard result, see [42].
We end this section with some cautionary comments concerning the previous results. In the one-dimensional case, where all the above cones coincide, the extreme points are quite easy to identify; the oldest reference we found where this problem is discussed is [7], but see also [28]. In higher dimensions the various cones are different. In the case of convex integrands, the set of extreme points of C^c_d for d > 1 is very different from the one-dimensional case, since it is dense in this cone. This result was proved by Johansen in [22] for d = 2 and then generalized to any d > 1 in [8]. In these papers the set of extremal convex functions is not fully identified, but it is shown that there is a sufficiently large class of extremal convex functions which approximate any given convex function well: these are certain polyhedral functions, i.e. functions of the form f = max_{1≤i≤k} a_i for some affine functions a_1, . . . , a_k. This disturbing situation, however, is not too unexpected given the result of Klee [26] already mentioned in the introduction. I do not know whether a similar statement holds for the cones C^qc_{n×m} and C^rc_{n×m}.