On the constancy theorem for anisotropic energies through differential inclusions

In this paper we study stationary graphs for functionals of geometric nature defined on currents or varifolds. The point of view we adopt is the one of differential inclusions, introduced in this context in the recent papers (De Lellis et al. in Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335; Tione in Minimal graphs and differential inclusions. Commun Part Differ Equ 7:1–33, 2021). In particular, given a polyconvex integrand f, we define a set of matrices $C_f$ that allows us to rewrite the stationarity condition for a graph with multiplicity as a differential inclusion. Then we prove that if f is assumed to be non-negative, then in $C_f$ there is no $T'_N$ configuration, thus recovering the main result of De Lellis et al. (Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335) as a corollary.
Finally, we show that if the hypothesis of non-negativity is dropped, one can not only find $T'_N$ configurations in $C_f$, but it is also possible to construct via convex integration a very degenerate stationary point with multiplicity.


Introduction
In this paper we continue the study, started in [5,26], of functionals arising from geometric variational problems from the point of view of differential inclusions. The energies we consider are of the form (1.1), defined on m-dimensional rectifiable currents (resp. varifolds) $T = \llbracket E, \vec{T}, \theta \rrbracket$ of $\Omega \times \mathbb{R}^n$, where $\Omega \subset \mathbb{R}^m$ is a convex and bounded open set, and the integrand is defined on the oriented (resp. non-oriented) Grassmannian space. In order to keep the technicalities at a minimum level, we defer all the definitions of these geometric objects to Section A. The main interest is the regularity of stationary points for energies as in (1.1) satisfying suitable ellipticity conditions. From the celebrated regularity theorem of Allard [2], it is known that an ε-regularity theorem holds for stationary points of the area functional, namely the case in which the integrand is ≡ 1. Since then, the question of extending this result to more general energies has been an important open problem in Geometric Measure Theory; see [1] for a result in this direction and [6,8,9] for more recent contributions. On the other hand, the situation is better understood for minimizers of energies of the form (1.1), where similar partial regularity theorems are known, see for instance [11, Ch. 5], [22].
In [5], the second author together with C. De Lellis, G. De Philippis and B. Kirchheim already approached this regularity problem through the viewpoint of differential inclusions. The theory of differential inclusions has a rich history; we refer the reader to [17] for an overview and to [18,19] for more recent results. Since this work is also based on that viewpoint, let us briefly explain what this means. The strategy of [5] consisted first in rewriting (1.1) on a special class of geometric objects, namely multiplicity one graphs of Lipschitz maps, and then in studying the differential inclusion associated to the system of PDEs arising from the stationarity condition. Namely, it can be shown, see [5, Sec. 6] or Sect. A.5, that to a $C^k$ integrand as the one appearing in (1.1) one can naturally associate a $C^k$ function $f : \mathbb{R}^{n\times m} \to \mathbb{R}$ with the property that (1.2) holds, where $T_u = \llbracket \Gamma_u, \xi_u, 1 \rrbracket$ is the current associated to the graph of u, i.e. if $v(x) \doteq (x, u(x))$ is the graph map, we have $T_u = v_\# \llbracket \Omega \rrbracket$. In particular, it is possible to prove, see [5, Prop. 6.8], that $T_u$ is stationary for the energy (1.1) if and only if u solves the equations (1.3) and (1.4), the latter being called inner (or domain) variations. The second step is to study (1.3) and (1.4) from the point of view of differential inclusions. This amounts to rewriting (1.3)-(1.4) equivalently as (1.5), for $A \in L^\infty(\Omega, \mathbb{R}^{n\times m})$, $B \in L^\infty(\Omega, \mathbb{R}^{m\times m})$ with div(A) = 0, div(B) = 0. This paper focuses on the same problem as [5], i.e. regularity of stationary points for geometric integrands, but with the addition of considering graphs with arbitrary positive multiplicity. This of course enlarges the class of competitors and might allow for more flexibility in the regularity of solutions. In particular, we consider polyconvex functions f, i.e.
$f(X) = g(X, \Phi(X))$, where $g \in C^1(\mathbb{R}^k)$ is a convex function and $\Phi : \mathbb{R}^{n\times m} \to \mathbb{R}^k$ is the vector containing all the minors (subdeterminants) of order larger than or equal to 2 of $X \in \mathbb{R}^{n\times m}$. In analogy with (1.3)-(1.4), we will be interested in the system of PDEs (1.6), for a Lipschitz map $u \in \operatorname{Lip}(\Omega, \mathbb{R}^n)$ and a Borel function $\beta \in L^\infty(\Omega, \mathbb{R}_+)$. The study of objects with multiplicity is rather natural in the context of stationary rectifiable varifolds or currents. When dealing with these objects, one is interested in showing a so-called constancy theorem, see [23, Theorem 8.4.1]. A constancy theorem in the sense of [23, Theorem 8.4.1] asserts that if a varifold of dimension m, stationary for the area, has support contained in a $C^2$ manifold of the same dimension, then the varifold must be given by a fixed multiple of the manifold, so that in particular the multiplicity must be constant. In [10], it was shown that, instead of $C^2$, even Lipschitz regularity of the manifold is sufficient to guarantee the validity of the constancy theorem. This is connected to the following algebraic fact. If a $C^2$ map u solves (1.3), then it necessarily solves also (1.4); hence the system (1.3)-(1.4) reduces to equation (1.3). Nonetheless, if $u \in C^2$ solves the second equation of (1.6) for a bounded multiplicity β, it is no longer true that u automatically solves the first. One would therefore like to show a priori that the multiplicity is constant, so that one is again in the situation given by (1.3)-(1.4). As for regularity theorems, no general constancy result is known at the moment for general functionals, except for the codimension one case, see [7]. As said, the tools we use are the same as those of [5]: namely, we rewrite (1.6) as (1.7), again for $A \in L^\infty(\Omega, \mathbb{R}^{n\times m})$, $B \in L^\infty(\Omega, \mathbb{R}^{m\times m})$ with div(A) = 0, div(B) = 0. Our result is twofold.
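For the reader's convenience, a plausible explicit form of the outer and inner variation equations and of the div-curl fields, consistent with [5, Prop. 6.8] (our reconstruction, to be checked against [5]), reads:

```latex
% Outer and inner variations for a multiplicity one graph:
\operatorname{div}\bigl(Df(Du)\bigr) = 0, \qquad
\operatorname{div}\bigl(Du^{T}\,Df(Du) - f(Du)\,\mathrm{id}\bigr) = 0.
% With a multiplicity \beta as in (1.6), one expects instead
\operatorname{div}\bigl(\beta\,Df(Du)\bigr) = 0, \qquad
\operatorname{div}\bigl(\beta\,\bigl(Du^{T}\,Df(Du) - f(Du)\,\mathrm{id}\bigr)\bigr) = 0,
% so that in (1.7) one may take the divergence-free fields
A = \beta\,Df(Du), \qquad
B = \beta\,\bigl(Du^{T}\,Df(Du) - f(Du)\,\mathrm{id}\bigr).
```

The second equation is the divergence of the energy-momentum tensor associated to f, which is the standard form of the inner variation condition.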
First, we will show that, if f is assumed to be non-negative, then the same result as [5, Theorem 1] holds, namely in $C_f$ there are no $T'_N$ configurations. Secondly, we show the optimality of this result by proving that, if we drop the hypothesis of non-negativity of f, one can not only embed a special family of matrices in $C_f$, but one can actually construct a stationary current for the energy given in (1.1) whose support lies on the graph of a Lipschitz and nowhere $C^1$ map. In order to formulate these results properly, we need some terminology concerning differential inclusions.
Differential inclusions are relations of the form $M(x) \in K \subset \mathbb{R}^{n\times m}$ a.e. in $\Omega$ (1.8), for $M \in L^\infty(\Omega, \mathbb{R}^{n\times m})$ satisfying $\mathcal{A}(M) = 0$ in the weak sense for some constant coefficients, linear differential operator $\mathcal{A}(\cdot)$. To every operator $\mathcal{A}(\cdot)$ one can associate a wave cone, denoted by $\Lambda_{\mathcal{A}}$, made of those directions A in which it is possible to have plane wave solutions, i.e. $A \in \Lambda_{\mathcal{A}}$ if and only if there exists $\xi \in \mathbb{R}^m \setminus \{0\}$ such that $\mathcal{A}(h(\langle x, \xi \rangle)A) = 0$, $\forall h \in C^1(\mathbb{R})$.
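For instance, for the curl operator a plane wave $M(x) = h(\langle x, \xi\rangle)A$ is curl-free for every profile h exactly when all rows of A are parallel to ξ, i.e. when A is a tensor product; this recovers the classical rank-one wave cone:

```latex
\Lambda_{\operatorname{curl}}
  \;=\; \bigl\{\, u \otimes \xi \;:\; u \in \mathbb{R}^{n},\ \xi \in \mathbb{R}^{m} \,\bigr\}
  \;=\; \bigl\{\, A \in \mathbb{R}^{n \times m} \;:\; \operatorname{rank}(A) \le 1 \,\bigr\}.
```

This is the standard computation underlying rank-one convexity in the theory of differential inclusions.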
In this work we will not need to consider various differential operators, as we will only work with the mixed div-curl operator introduced in (1.5). In that case, we denote the cone by $\Lambda_{dc}$ and we will introduce it in detail in Sect. 2.1. Due to the connection of the wave cone to the existence of oscillatory solutions of (1.8), a very first step to exclude wild solutions of (1.8) is to check that
$$K \cap \Lambda_{\mathcal{A}} = \emptyset. \qquad (1.9)$$
This is usually quite simple to verify, and indeed we will show in Proposition 3.1 that, if f is positive, then (1.9) holds with $\Lambda_{\mathcal{A}} = \Lambda_{dc}$ and K replaced by $C_f$. Property (1.9) is in general not sufficient to guarantee good regularity properties of solutions of (1.8). Indeed, in [20], S. Müller and V. Šverák constructed a striking counterexample to elliptic regularity for solutions of the differential inclusion (1.10), where the function $f \in C^\infty(\mathbb{R}^{2\times 2})$ is quasiconvex (for the definition of quasiconvex function, we refer the reader to [20]), and J is a matrix satisfying $J = -J^T$ and $J^2 = -\operatorname{id}$. In particular, they were able to show that there exists a Lipschitz and nowhere $C^1$ function $v : \Omega \subset \mathbb{R}^2 \to \mathbb{R}^4$ satisfying the differential inclusion (1.10). Their strategy was subsequently improved by L. Székelyhidi in [24], showing that f can be chosen polyconvex. In both cases, $K_f$ does not contain rank-one connections, i.e. $\operatorname{rank}(A - B) \ge 2$ for all distinct $A, B \in K_f$,
and this can be proved to be equivalent to (1.9) in the case $\mathcal{A} = \operatorname{curl}$. Their strategy was based on showing that other suitable families of matrices, the so-called $T_N$ configurations, could be embedded in $K_f$. In our situation, since we are dealing with mixed div-curl operators, we need to consider a slightly different version of $T_N$ configurations, which we have named $T'_N$ configurations in [5]. We postpone the definition of $T_N$ and $T'_N$ configurations to Sect. 2, but we are finally able to formally state our main positive result. This result, as [5, Theorem 1], shows that it is not possible to apply the convex integration methods of [20,24] to prove the existence of an irregular solution of the system (1.6). This theorem is stronger than [5, Theorem 1], in the sense that we are able to obtain [5, Theorem 1] as a corollary: if f is a strictly polyconvex function (not necessarily non-negative), then $K_f$ does not contain any set $\{A_1, \ldots, A_N\}$ which induces a $T'_N$ configuration.
Finally, in Sect. 4, we show the optimality of the hypothesis of non-negativity in the previous theorem by proving the following: Theorem There exists a smooth and elliptic integrand, defined on $\Lambda_2(\mathbb{R}^4)$, such that the associated energy admits a stationary point T whose (integer) multiplicities are not constant. Moreover, the rectifiable set supporting T is given by the graph of a Lipschitz map $u : \Omega \to \mathbb{R}^2$ that fails to be $C^1$ in any open subset $V \subset \Omega$.
The last theorem is obtained by embedding in the differential inclusion (1.7) what has been named in [13] a large $T_N$ configuration. Following the strategy of [24], we do not fix a polyconvex $f \in C^\infty(\mathbb{R}^{2\times 2})$ a priori, but rather construct it in such a way that $C_f$ already contains this special family of matrices. Once the polyconvex function f has been built, we prove an extension result for f to the Grassmannians, thus obtaining the integrand of the statement of the theorem. The extension results are quite simple and might be of independent interest. The construction of our counterexample cannot be carried out in the varifold setting. The reason is quite elementary: the integrand we would need to construct in the varifold case should be even, convex and positively 1-homogeneous, hence positive. We refer the reader to Remark 5.6 for more details. Moreover, let us point out that positivity of the integrand is a necessary assumption when studying existence of minima, but to the best of our knowledge there is no available example showing it to be a necessary assumption also when studying regularity properties of stationary points.
The paper is organized as follows. In Sect. 2, we recall the statements of our main results in the case of non-negative integrands f and we collect some crucial preliminary results of [5]. The proof of the main results in the positive case, i.e. Proposition 3.1, Theorem 3.3 and Corollary 3.4, will be given in Sect. 3. In Sect. 4, we provide a counterexample to regularity when dropping the hypothesis of positivity of the integrand. Some lemmas of Sect. 4, concerning the extension of polyconvex functions to the Grassmannian manifold, can be easily extended to general dimensions and codimensions. Therefore, we give the proof of these general versions in Sect. 5. Finally, the appendix contains a concise introduction to the tools of geometric measure theory used along the paper.

Positive case: absence of $T'_N$ configurations
In this section we collect some preliminary results proved in [5], that will be essential for the proofs of the next section.

Div-curl differential inclusions, wave cones and inclusion sets
In this subsection, we explain how to rephrase the system (1.3)-(1.4) as a differential inclusion. As recalled in the introduction, the Euler-Lagrange equations defining stationary points for energies $\mathbb{E}_f$ are the couple of equations (1.3), (1.4), which can be written in classical form. We are thus led to study the corresponding div-curl differential inclusion for a triple of maps $X, Y \in L^\infty(\Omega, \mathbb{R}^{n\times m})$ and $Z \in L^\infty(\Omega, \mathbb{R}^{m\times m})$, where $f \in C^1(\mathbb{R}^{n\times m})$ is a fixed function. Moreover, we also consider a more general system of PDEs, for $u \in \operatorname{Lip}(\Omega, \mathbb{R}^n)$ and a Borel map $\beta \in L^\infty(\Omega, (0, +\infty))$. This system is equivalent to the stationarity, in the sense of varifolds, of the varifold $V = \llbracket \Gamma_u, \beta \rrbracket$, where $\Gamma_u$ is the graph of u. This is discussed in Sect. A.5. The div-curl differential inclusion associated to this system is again posed for a triple of maps $X, Y \in L^\infty(\Omega, \mathbb{R}^{n\times m})$ and $Z \in L^\infty(\Omega, \mathbb{R}^{m\times m})$. This discussion proves the following: Proposition 2.1 The map u solves (2.3) if and only if there are matrix fields $Y \in L^\infty(\Omega, \mathbb{R}^{n\times m})$ and $Z \in L^\infty(\Omega, \mathbb{R}^{m\times m})$ such that $W = (Du, Y, Z)$ solves the div-curl differential inclusion (2.4)-(2.5).
Finally, we introduce here the wave-cone associated to the mixed div-curl operator that is relevant for us.

Definition 2.2 The cone $\Lambda_{dc} \subset \mathbb{R}^{(2n+m)\times m}$ consists of the matrices in block form
$$W = \begin{pmatrix} X \\ Y \\ Z \end{pmatrix}, \qquad X, Y \in \mathbb{R}^{n\times m},\ Z \in \mathbb{R}^{m\times m},$$
with the property that there is a direction $\xi \in S^{m-1}$ and a vector $u \in \mathbb{R}^n$ such that $X = u \otimes \xi$, $Y\xi = 0$ and $Z\xi = 0$.
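The membership condition of Definition 2.2 is a finite set of linear constraints once ξ and u are given, so it can be checked mechanically. The following sketch is ours (the helper name `in_lambda_dc` is not from the paper); it verifies the three block conditions numerically:

```python
import numpy as np

# Direct check of Definition 2.2: a block matrix (X, Y, Z) lies in the cone
# Lambda_dc iff, for some direction xi in S^{m-1} and some u in R^n,
#   X = u xi^T,   Y xi = 0,   Z xi = 0.
def in_lambda_dc(X, Y, Z, u, xi):
    xi = xi / np.linalg.norm(xi)            # normalize the candidate direction
    return (np.allclose(X, np.outer(u, xi))  # X is the tensor product u (x) xi
            and np.allclose(Y @ xi, 0)       # xi lies in the kernel of Y
            and np.allclose(Z @ xi, 0))      # xi lies in the kernel of Z

# Example with n = m = 2: xi = e1, u = (1, 2); Y and Z must annihilate e1.
xi = np.array([1.0, 0.0])
u = np.array([1.0, 2.0])
X = np.outer(u, xi)
Y = np.array([[0.0, 3.0], [0.0, -1.0]])      # first column zero => Y xi = 0
Z = np.array([[0.0, 5.0], [0.0, 2.0]])
print(in_lambda_dc(X, Y, Z, u, xi))          # True
```

Note that div-free plane waves $Y h(\langle x,\xi\rangle)$ correspond exactly to the kernel condition $Y\xi = 0$, which is why the divergence blocks are constrained this way.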

$T_N$ configurations and $T'_N$ configurations
We start by defining $T_N$ configurations for classical curl-type differential inclusions.
An ordered set of distinct matrices $\{X_1, \ldots, X_N\} \subset \mathbb{R}^{n\times m}$ is said to induce a $T_N$ configuration if there exist matrices $P, C_i \in \mathbb{R}^{n\times m}$ and real numbers $k_i > 1$ such that: (a) each $C_i$ belongs to the wave cone of $\operatorname{curl} X = 0$, namely $\operatorname{rank}(C_i) \le 1$ for each i; (b) $\sum_{i=1}^N C_i = 0$; (c) $X_1, \ldots, X_N$, P and $C_1, \ldots, C_N$ satisfy the following N linear conditions: $X_i = P + C_1 + \cdots + C_{i-1} + k_i C_i$, $i \in \{1, \ldots, N\}$. In the rest of the chapter we will use the words $T_N$ configuration also for the data $P, C_1, \ldots, C_N, k_1, \ldots, k_N$.
We will moreover say that the configuration is nondegenerate if rank(C i ) = 1 for every i.
As in [5], we give a slightly more general definition of T N configuration than the one usually given in the literature (cf. [20,24,25]), in that we drop the requirement that there are no rank-one connections between distinct X i and X j . We refer the reader to [5] for discussions concerning T N configurations.
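To make the definition concrete, here is a diagonal example in $\mathbb{R}^{2\times 2}$ with N = 4, checked numerically. The specific matrices and the common value $k_i = 2$ are our illustrative choices, not data from the paper:

```python
import numpy as np

# Hand-picked data: P = 0, k_i = 2, rank-one diagonal C_i summing to zero.
P = np.zeros((2, 2))
C = [np.diag([2.0, 0.0]), np.diag([0.0, 2.0]),
     np.diag([-2.0, 0.0]), np.diag([0.0, -2.0])]
k = [2.0, 2.0, 2.0, 2.0]

# (c): X_i = P + C_1 + ... + C_{i-1} + k_i C_i
X = [P + sum(C[:i], np.zeros((2, 2))) + k[i] * C[i] for i in range(4)]

# (a): each C_i lies in the wave cone of curl, i.e. rank(C_i) <= 1
assert all(np.linalg.matrix_rank(Ci) == 1 for Ci in C)   # nondegenerate
# (b): the C_i sum to zero, so the "staircase" closes up
assert np.allclose(sum(C), 0)
# No rank-one connections between the X_i: det(X_i - X_j) != 0 for i != j
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(np.linalg.det(X[i] - X[j])) > 1e-10

print([Xi.diagonal().tolist() for Xi in X])
# [[4.0, 0.0], [2.0, 4.0], [-2.0, 2.0], [0.0, -2.0]]
```

The four matrices are pairwise incompatible in the rank-one sense, yet each lies at distance $k_i C_i$ from the closed polygonal "staircase" through P, which is the mechanism convex integration exploits.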
Adapting this to the div-curl operator, we now introduce $T'_N$ configurations, originally defined in [5].
and the following properties hold: (a) each element $(C_i, D_i, E_i)$ belongs to the wave cone $\Lambda_{dc}$ of (2.1). We say that the $T'_N$ configuration is nondegenerate if $\operatorname{rank}(C_i) = 1$ for every i.
We collect here some simple consequences of the definition above.

Strategy
Before starting with the proof of the main result of this chapter, it is convenient to explain the strategy we intend to follow. In order to do so, let us consider here the case n = m = 2, N = 5. Suppose by contradiction that there exists a strictly polyconvex function f whose inclusion set $C_f$ contains matrices $A_1, \ldots, A_5$ inducing a $T'_5$ configuration. We will see below that we can assume without loss of generality that P = 0. The first part of the strategy follows the same lines as that of [5]. Indeed, we regard the relations $A_i \in C_f$, ∀i, where $C_f$ has been defined in (2.6), as two separate pieces of information, (2.9) and (2.10). Let us denote $c_i \doteq f(X_i)$. As in [5], we use (2.9) to obtain inequalities involving $X_i$, $Y_i$ and quantities involving f. These are deduced from the polyconvexity of f, analogously to [24, Lemma 3]. In particular, (2.9) is rewritten as (2.11), for $d_i \doteq \partial_{y_5} g(y_1, y_2, y_3, y_4, y_5)|_{(X_i, \det(X_i))}$. This is proved in Proposition 2.9. The final goal is to prove that these inequalities cannot be fulfilled at the same time. As in [5], we can simplify (2.11) using the structure result on $T_N$ configurations in $\mathbb{R}^{2\times 2}$ of [25, Proposition 1]. This asserts, in the specific case of the ongoing example, the existence of 5 vectors $(t^i_1, \ldots, t^i_5)$, $i \in \{1, \ldots, 5\}$, with positive components, such that (2.12) holds. If we use this result in (2.11), we can eliminate from the expression the variables $d_i$, thus obtaining the inequalities of Corollary 2.10. In [5], [25, Proposition 1] was extended to $T_N$ configurations in $\mathbb{R}^{n\times m}$, so that relations (2.12) remain true in every dimension and target dimension. This extension is recalled in Proposition 2.8. Despite being very useful, this last simplification cannot conclude the proof. Indeed, up to now we have exploited (2.9) and the fact that $\{X_1, \ldots, X_5\}$ induces a $T_5$ configuration; but, if $\beta_i = 1$, ∀i, this is the exact same situation as in [24]. Since from that paper we know the existence of $T_5$ configurations in $K_f$, we clearly cannot reach a contradiction at this point of the strategy. This is where the inner variations come into play. We rewrite (2.10) using the definition of $T'_5$ configuration and, after some manipulations, we find that certain coefficients must all vanish. For the index I such that $\beta_I = \min_i \beta_i$, and essentially using the positivity of the $c_j$, we then derive an inequality which is in contradiction with the negativity of $\nu_I$.

Preliminary results: $T_N$ configurations
To follow the strategy explained in Sect. 2.3, we need to recall the extension of [25, Proposition 1] proved in [5]. Here we will only recall the essential results without proof; we refer the interested reader to [5] for the details. First, it is possible to associate to a $T_N$ configuration of the form (2.7), i.e. $X_i = P + C_1 + \cdots + C_{i-1} + k_i C_i$, a defining vector $(\lambda, \mu) \in \mathbb{R}^{N+1}$, see [5, Definition 3.7], defined through the relations (2.13).
These relations can be inverted; in fact, one can express the $k_i$ in terms of $(\lambda, \mu)$, see (2.14). Since $k_i > 1$, $\forall i \in \{1, \ldots, N\}$, (2.13) implies that $\lambda_i > 0$, ∀i, and $\mu > 1$. As in [25, Proposition 1], we define N vectors $t^i$ of $\mathbb{R}^N$ with positive components as in (2.15), where the $\xi_i > 1$ are normalization constants chosen in such a way that the components of $t^i$ sum to 1. The importance of these vectors $t^i$ comes from [25, Proposition 1], where it is proved that, for a $T_N$ configuration of the form (2.7) in $\mathbb{R}^{2\times 2}$,
$$\sum_{j=1}^N t^i_j X_j = P + C_1 + \cdots + C_{i-1}. \qquad (2.16)$$
Moreover, the following relation holds for every i: (2.17). If we define the vectors $t^i$ as in (2.15), then the analogues of (2.16) and (2.17) hold; this lemma allows us to generalize (2.16) and (2.17), compare [5, Proposition 3.8]. To state this result, we need some notation concerning multi-indexes. We will use I for multi-indexes referring to ordered sets of rows of matrices and J for multi-indexes referring to ordered sets of columns. In our specific case, where we deal with matrices in $\mathbb{R}^{n\times m}$, we will thus have $I = (i_1, \ldots, i_r)$ with $1 \le i_1 < \cdots < i_r \le n$ and $J = (j_1, \ldots, j_s)$ with $1 \le j_1 < \cdots < j_s \le m$, and we will use the notation $|I| \doteq r$ and $|J| \doteq s$. In the sequel we will always have r = s.

Definition 2.7 We denote by $\mathcal{A}_r$ the set of pairs of multi-indexes $Z = (I, J)$ with $|I| = |J| = r$.
For a matrix $M = (m_{ij}) \in \mathbb{R}^{n\times m}$ and for $Z \in \mathcal{A}_r$ of the form $Z = (I, J)$, we denote by $M_Z$ the square $r \times r$ matrix obtained from M by considering just the elements $m_{ij}$ with $i \in I$, $j \in J$ (using the order induced by I and J).
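In code, extracting $M_Z$ is a plain fancy-indexing operation; a minimal sketch (the function name is ours, not the paper's):

```python
import numpy as np

# Extract the r x r submatrix M_Z of M for a multi-index pair Z = (I, J),
# keeping the order induced by I and J (0-based indices here).
def minor_submatrix(M, I, J):
    assert len(I) == len(J), "Z = (I, J) must select as many rows as columns"
    return M[np.ix_(I, J)]

M = np.arange(6).reshape(3, 2)          # a 3 x 2 matrix, so n = 3, m = 2
Z = ((0, 2), (0, 1))                    # rows {1, 3}, columns {1, 2} (0-based)
print(minor_submatrix(M, *Z))           # the 2 x 2 submatrix [[0 1], [4 5]]
```

The determinants $\det(M_Z)$ of these submatrices are exactly the minors collected in the vector $\Phi$ below.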
We are finally in position to state [5, Proposition 3.8].
Define the vectors $t^1, \ldots, t^N$ as in (2.15) and, for every $Z \in \mathcal{A}_r$ of order $1 \le r \le \min\{n, m\}$, define the corresponding minor $S(X) \doteq \det(X_Z)$. It is clear that the previous result extends (2.16) and (2.17) to all the minors.

Preliminary results: inclusion set associated to polyconvex functions
As in [5, Section 4], we write a necessary condition for a set of distinct matrices to belong to an inclusion set of the form considered above, for some strictly polyconvex function $f : \mathbb{R}^{n\times m} \to \mathbb{R}$. First, we introduce the following notation, which is the same as in [5]. Let $f : \mathbb{R}^{n\times m} \to \mathbb{R}$ be a strictly polyconvex function of the form $f(X) = g(\Phi(X))$, where $g \in C^1(\mathbb{R}^k)$ is strictly convex and $\Phi$ is the vector of all the subdeterminants of X, i.e.
for some fixed (but arbitrary) ordering of all the elements $Z \in \mathcal{A}_s$. Variables of $\mathbb{R}^k$, and hence partial derivatives in $\mathbb{R}^k$, are labeled using the ordering induced by $\Phi$. The first nm partial derivatives, corresponding in $\Phi(X)$ to X, are collected in an $n \times m$ matrix denoted by $D_X g$. The j-th partial derivative, $nm + 1 \le j \le k$, is instead denoted by $\partial_Z g$, where Z is the element of $\mathcal{A}_s$ corresponding to the j-th position of $\Phi$. Let us give an example in low dimension: if n = 3, m = 2, then k = 9, and we choose the ordering of $\Phi$ to be $\Phi(X) = (X, \det(X_{(12,12)}), \det(X_{(13,12)}), \det(X_{(23,12)}))$.
The partial derivatives with respect to the first 6 variables are collected in the $3 \times 2$ matrix $D_X g$. The partial derivatives with respect to the remaining variables are denoted by $\partial_{(12,12)} g$, $\partial_{(13,12)} g$ and $\partial_{(23,12)} g$, i.e. following the ordering induced by $\Phi$. Finally, for a matrix $A \in \mathbb{R}^{r\times r}$, we denote by cof(A) the matrix defined as $\operatorname{cof}(A)_{ij} \doteq (-1)^{i+j} \det(M_{ji}(A))$, where $M_{ji}(A)$ denotes the $(r-1) \times (r-1)$ submatrix of A obtained by eliminating from A the j-th row and the i-th column. In particular, the following relation holds: $\operatorname{cof}(A)\, A = A\, \operatorname{cof}(A) = \det(A)\, \operatorname{id}$. We are ready to state the following: Proposition 2.9 Let $f : \mathbb{R}^{n\times m} \to \mathbb{R}$ be a strictly polyconvex function of the form $f(X) = g(\Phi(X))$, where $g \in C^1$ is strictly convex and $\Phi$ is the vector of all the subdeterminants of X, i.e.
for some fixed (but arbitrary) ordering of all the elements $Z \in \mathcal{A}_s$. If the matrices belong to the inclusion set associated to f, then they fulfill the following inequalities for every $i \neq j$: (2.25). This result was proved in [5, Proposition 4.1]. We now introduce the corresponding set with multiplicity, for some β > 0.
Notice that this set is the projection of $C_f$ on the first $2n \times m$ coordinates. We immediately obtain from the previous proposition and the definition of $C_f$ that the inequalities (2.25) hold on it. The expressions in (2.25) can be simplified when the matrices $X_1, \ldots, X_N$ induce a $T_N$ configuration:

Corollary 2.10 Let f be a strictly polyconvex function and let
where the t i 's are given by (2.15).
This corresponds to [5, Corollary 4.3], and concludes the list of preliminary results needed for the results of this paper.

Positive case: proof of the main results
Before checking whether the inclusion set $C_f$ contains $T_N$ or $T'_N$ configurations, we need to exclude more basic building blocks for wild solutions, such as rank-one connections or, as in this case, $\Lambda_{dc}$-connections in $C_f$. It is rather easy to see, compare for instance [24], that if f is strictly polyconvex, then for $A, B \in K_f$ it is not possible to have a rank-one connection between A and B. Indeed, the same result holds even considering the set with multiplicity. To prove this, it is sufficient to combine two elementary observations with the strict polyconvexity of f. The first result of this section shows that this exclusion holds also for $C_f$, provided f is positive.
Proof Suppose by contradiction that such a pair exists. Now we can use the so-called Matrix Determinant Lemma 3.2 to see that the expressions found in (2.25), evaluated at this pair, yield the desired inequalities. In the previous lines we have used that C is of rank one with $Cv = 0$, $\forall v \perp \xi$, and we have rewritten (3.4) accordingly. From (3.6) we infer a sign condition; since both values of f are non-negative, we get a contradiction.
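The Matrix Determinant Lemma used in this proof can be sanity-checked numerically. The following snippet is our own verification sketch, not code from the paper; it checks both the invertible form and the cofactor form of the identity:

```python
import numpy as np

# Matrix Determinant Lemma: for invertible A and vectors u, v,
#   det(A + u v^T) = det(A) * (1 + v^T A^{-1} u).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 4 * np.eye(4)   # comfortably invertible
u = rng.standard_normal(4)
v = rng.standard_normal(4)

lhs = np.linalg.det(A + np.outer(u, v))
rhs = np.linalg.det(A) * (1 + v @ np.linalg.inv(A) @ u)
assert np.isclose(lhs, rhs)

# Equivalent cofactor form, valid for every A:
#   det(A + u v^T) = det(A) + v^T cof(A) u,
# where cof(A) = det(A) A^{-1} when A is invertible.
cof = np.linalg.det(A) * np.linalg.inv(A)
assert np.isclose(lhs, np.linalg.det(A) + v @ cof @ u)
print("matrix determinant lemma verified")
```

The cofactor form is the one used with the rank-one perturbations $C = u \otimes \xi$ appearing in the wave cone.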
Let us recall the Matrix Determinant Lemma used in the proof of the last proposition. Lemma 3.2 (Matrix Determinant Lemma) For every $A \in \mathbb{R}^{m \times m}$ and $u, v \in \mathbb{R}^m$, $\det(A + u v^T) = \det(A) + v^T \operatorname{cof}(A)\, u$; in particular, if A is invertible, $\det(A + u v^T) = \det(A)\,(1 + v^T A^{-1} u)$. Now that we have excluded $\Lambda_{dc}$-connections, we can ask ourselves the same question concerning $T'_N$ configurations. In particular, we want to prove the main theorem of this part of the paper, Theorem 3.3. At the end of the section we will also show Corollary 3.4. Let us fix the notation: we will always consider $T'_N$ configurations of the form (3.8), and we denote by $n_i \in S^{m-1}$ the vectors such that $C_i v = 0$ for every $v \perp n_i$.

Idea of the proof
Before proving the theorem, let us give an idea of the key steps of the proof. First of all, in Lemma 3.5, we will see that without loss of generality we can choose P = 0. As already explained in Sect. 2.3, we want to prove that the system of inequalities cannot be fulfilled at the same time; this gives a contradiction with Corollary 2.10. In particular, we show a suitable estimate for the index σ such that $\beta_\sigma = \min_j \beta_j$. To do so, we prove that certain quantities are equal to 0 for every i. Then, choosing σ as above and exploiting the positivity of $c_j$, ∀j, we obtain the estimate (3.12), which will then yield the required contradiction. In order to show the vanishing of these quantities, we express them through real coefficients $\alpha_j$, where μ > 1 is part of the defining vector of the $T'_N$ configuration $\{X_1, \ldots, X_N\}$, compare (2.13). We prove that a relation of the form (3.24) holds for numbers $\xi_j > 0$. This is achieved thanks to Lemma 3.6. Since $M_i$ is trace-free, as can be seen from the structure of $C_j$ and $D_j$, we will find N relations; these can be read as the equations for the kernel of a specific $N \times N$ matrix W.
Proving that W has trivial kernel will yield ξ j μ j = 0, ∀ j, and thus μ j = 0 since ξ j > 0. The proof of the invertibility of W is the content of the last Lemma 3.10.

Lemma 3.5 If f is a strictly polyconvex function such that
Clearly the newly defined family $\{B_1, \ldots, B_N\}$ still induces a $T'_N$ configuration, and it is straightforward that $B_i \in C_F$. Moreover, this does not affect positivity, in the sense that the values of F at the new configuration remain non-negative. In the next lemma, $t^i$ is the vector defined in (2.15).
Proof We need to compute the following sums: Let us start computing the sum for We rewrite it in the following way: 14) where we collected in the coefficients g i j the following quantities: On the other hand, again using (2.14), Using the equalities We just proved that Recall the definition of t i , namely By the previous computation (i = 1), it is convenient to rewrite (3.13) using (3.15) as (3.16) In the previous equation, we have used the equality that easily follows from Lemma 2.6. Once again, let us express the sum up to i − 1 in the following way: A combinatorial argument analogous to the one in the previous case gives We rewrite (3.16) as Now we substitute (3.18) in the definition (3.9) of Z i in order to compute E i : Multiply by n i the previous expression and recall that E i n i = 0 to find: (3.20) Now we need to compute Using this computation, (3.20) reads as: Exploiting the definition of t i j , we see that we can rewrite

Thus (3.21) becomes
Since $C_i v = 0$, $\forall v \perp n_i$, we have $C_i^T Y_i n_i = \langle C_i, Y_i \rangle n_i$, and we finally obtain the desired equalities. We are finally in a position to prove the main theorem.

Proof of Theorem 3.3
Assume by contradiction the existence of a $T'_N$ configuration induced by matrices $\{A_1, \ldots, A_N\}$ of the form (3.8) which belong to the inclusion set $C_f$ of some strictly polyconvex function $f \in C^1(\mathbb{R}^{n\times m})$ with $f(X_i) \ge 0$ for every i. We can assume, without loss of generality by Lemma 3.5, that P = 0. Using Lemma 3.6, we find the relations (3.23), where $v_{i,a} \in \operatorname{span}\{n_i, \ldots, n_{i+a-1}\}$. From now on, we fix $i \in \{1, \ldots, N\}$. To prove (3.24), we first rewrite the relevant expression and then use (3.23). To conclude the proof of (3.24), we only need to show (3.26). To do so, we compute $M_i - M_{i+a}$, starting from the case $1 \le i + a \le N$ and treating the remaining case separately. Now the crucial observation is that, due to the fact that $C_j v = 0$ for every $v \perp n_j$, the image of $C_j^T D_j$ is contained in the line $\operatorname{span}(n_j)$, for every $j \in \{1, \ldots, N\}$. Therefore, the previous computations prove (3.26) and hence (3.24). Now we introduce the sets $V_i$. We can extract a basis for $\operatorname{span}(V_i)$ in the following way. First, choose suitable indexes. We denote by $0_{c,d}$ the zero matrix with c rows and d columns, by T the $C \times (m - C)$ matrix of the coefficients of $M_i \gamma_j$ with respect to $\{n_s : s \in S_i\}$, and by * numbers we are not interested in computing explicitly. Finally, we have chosen an enumeration $s_1 < s_2 < \cdots < s_C$ of the elements of $S_i$. The triangular form of the matrix representing $M_i$ is exactly due to (3.24). Now, $\operatorname{tr}(M_i) = 0$, ∀i, since $C_i^T D_i$ is trace-free for every i. This implies that the matrix in (3.29) must be trace-free, hence (3.30) holds. We have thus reduced the problem to the following simple linear algebra statement: if W is the $N \times N$ matrix defined above, then $Wx = 0 \Rightarrow x = 0$; this is sufficient to reach a contradiction. This proof will be given in Lemma 3.10. Let $W_i$ be the i-th row of W. We notice that, for i = 1, 2, 3, $W_{i+1}$ differs from $W_i$ by exactly two elements, while $W_4$ does not differ from $W_1$ by only two elements.
It does, though, with $\mu W_1$. Hence we rewrite the system $Wx = 0$ equivalently. As in the previous example, for i = 1, 2, 3, $W_{i+1}$ differs from $W_i$ by exactly one element, while $W_4$ does the same with $\mu W_1$. Thus, as before, we rewrite the system $Wx = 0$ equivalently. In this case, $h(i) = i$, $\forall i \in \{1, \ldots, 4\}$. Clearly, also in this case, μ > 1 implies $x_i = 0$, ∀i.
Finally, let us show a less symmetric example: Example 3.9 Consider the case in which C = 3. First, let us comment on the fact that the matrix below is a possible matrix appearing in the proof of the previous theorem. Indeed, let us consider the first two rows. The fact that $W_{13} = 0$ means that $n_3 \in \operatorname{span}(n_1, n_2)$, since $3 \notin S_1$. On the other hand, Proposition 3.1 ensures that $n_3$ is not a multiple of $n_2$, hence $3 \in S_2$ and $W_{23} = 1 \neq 0$; a matrix with $W_{23} = 0$ would for instance have been non-admissible. Now, in order to prove $Wx = 0 \Rightarrow x = 0$, we work as in the previous examples, by noticing that for i = 1, 2, 3, $W_{i+1}$ differs from $W_i$ by at most two elements, while $W_4$ must be compared with $\mu W_1$. Thus we write $W_i - W_{i+1}$ and $W_4 - \mu W_1$. It is an elementary computation to show that $x_i = 0$, ∀i.
Even though the examples we have given are too simple to appreciate the usefulness of the function h such that $x_i = a_i x_{h(i)}$, this function will be crucial in the proof of the lemma. Proof Throughout the proof, we always consider a given vector $x \in \mathbb{R}^N$ such that $Wx = 0$. The proof, partially suggested by the previous examples, consists of the following steps. First, we show that the rows $W_i$ and $W_{i+1}$ of W (if i = N, we compare $W_N$ with $\mu W_1$) differ by at most two elements, one of which always involves $x_i$. This immediately yields the existence of a function $h : \{1, \ldots, N\} \to \{1, \ldots, N\}$ such that $x_i = a_i x_{h(i)}$. We will then use this and the crucial fact that μ > 1 to conclude that $x_i = 0$, ∀i. Let us make the following claims, and see from them how to conclude the proof of the present lemma. We will freely use the notation introduced at the end of the proof of Theorem 3.3.
Claim 1: Let $i \in \{1, \ldots, N\}$. Then $S_i$ differs from $S_{i+1}$ (if i = N, $S_{i+1} \doteq S_1$) by at most two elements, in the sense of (3.31). Let us show how the proof of the lemma follows from Claim 3, postponing the proofs of the claims. Fix $i \in \{1, \ldots, N\}$ and use (3.31) recursively to find
$$x_i = a_i\, a_{h(i)} \cdots a_{h^{(n-1)}(i)}\, x_{h^{(n)}(i)},$$
where $h^{(n)}$ denotes the function obtained by applying h to itself n times; we also use the notation $h^{(0)}$ for the identity function, $h^{(0)}(i) = i$, $\forall i \in \{1, \ldots, N\}$. By the properties of the $a_j$, we have a bound on each factor, $\forall r \in \{1, \ldots, n\}$. Fix $k \in \mathbb{N}$, and let $r \in \{k + 1, \ldots, k + N + 1\}$. Then $h^{(r)}(i) > h^{(r-1)}(i)$ can occur at most N times in this range, since otherwise we would find N + 1 distinct elements in the set $\{1, \ldots, N\}$, which is impossible. Now clearly this observation implies that for every fixed $l \in \mathbb{N}$ there exists $s \in \mathbb{N}$ such that $|x_i| \le \mu^{-l} |x_{h^{(s)}(i)}|$. This can only happen if $x_i = 0$. Since i is arbitrary, the conclusion follows.
At the same time, consider what happens in S_{i+1}: the span in the (i+1)-th case starts with one vector fewer than in the i-th case, simply because the collection of indices in S_{i+1} starts from n_{i+1}. Hence, since I is the first missing index in S_i, I is also the first possible missing index for S_{i+1}. Therefore, consider first the case n_I ∈ span(n_i, …, n_{I−1}) \ span(n_{i+1}, …, n_{I−1}).
This implies that I ∈ S_{i+1}. Moreover, we are now adding n_I to the set of vectors n_{i+1}, …, n_{I−1}, and since n_I ∈ span(n_i, …, n_{I−1}) \ span(n_{i+1}, …, n_{I−1}), the vector n_I contributes the component relative to n_i, in the sense that span(n_{i+1}, …, n_I) = span(n_i, …, n_{I−1}).
If instead n_I ∈ span(n_{i+1}, …, n_{I−1}), then we see that I ∉ S_{i+1}, and we can iterate this reasoning from there: we look for the next index I_1 such that I_1 ∉ S_i and again divide into the two cases above. Clearly, for the indices i + 1 ≤ j < I_1, we have j ∈ S_{i+1} and j ∈ S_i. Either this iteration enters case 1 of the previous subdivision for some element I ∉ S_i, or we conclude S_i = S_{i+1}. This concludes the proof of the claim.

Proof of claim
and from (3.34) and (3.35) we infer the corresponding identities. From these equations we see that (3.31) holds with the choice h(i) := I(i) when i is such that S_i \ S_{i+1} ≠ ∅, and h(i) := i otherwise.

Proof of Corollary 3.4
We end this section by showing that Theorem 3.3 implies Corollary 3.4. Assume by contradiction that there exists a family of matrices inducing a T_N configuration of the form (3.8). We show that then there exists another family contradicting Theorem 3.3. To accomplish this, it is sufficient to define a suitable auxiliary function, which is clearly strictly polyconvex since f is. Moreover, we define matrices B_i; in this way, the B_i still form a T_N configuration, and the required identities hold. This finishes the proof.

Sign-changing case: the counterexample
In this section, we construct a counterexample to regularity in the case in which the hypothesis of non-negativity on f is dropped. Let us explain the strategy, which follows the one of [24]. First of all, we consider an equivalent formulation of the differential inclusion of div-curl type considered in the previous sections, exploiting the identity (4.1), valid for a ∈ Lip(R^2, R^2). From now on, we will always use this reformulation of the problem. Let us also introduce the set C̃_f. In order to construct the counterexample, we want to find a set of non-rigid matrices; roughly, non-rigidity means that there exists a non-affine solution of the problem. Here f is of the form f(X) = εA(X) + g(X, det(X)) for some convex and smooth g : R^5 → R, where A is the area function. As in [24], f is not fixed from the beginning, but rather becomes another unknown of the problem. In particular, in order to find f, it is sufficient for the following condition to be fulfilled. If this condition is satisfied, then one has the desired relations. The construction of f is the content of Lemma 4.2. Moreover, we will be able to build f in such a way that suitable growth bounds hold for some large R > 0 and constants M, L > 0. The non-rigidity of A_1, …, A_5 stems from the fact that we choose {X_1, …, X_5} forming a large T_5-configuration, in the terminology of [13]. Therefore we introduce: Condition 2 For every i, [X_{σ_i(1)}, X_{σ_i(2)}, …, X_{σ_i(5)}] is a T_5 configuration, and moreover {C_{σ_1(i)}, C_{σ_2(i)}, C_{σ_3(i)}} are linearly independent for every i ∈ {1, …, 5}.
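For the reader's convenience, we recall the standard definition of a T_N configuration from the convex integration literature; the additional open condition that makes a configuration "large" in the sense of [13] is not restated here:

```latex
% X_1, \dots, X_N form a T_N configuration if there exist a matrix P,
% rank-one matrices C_1, \dots, C_N with \sum_{i=1}^N C_i = 0, and
% scalars k_i > 1 such that
X_i \;=\; P + C_1 + \dots + C_{i-1} + k_i\, C_i , \qquad i = 1, \dots, N .
% (One usually also requires that no two of the X_i are rank-one connected.)
```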
Once this condition is guaranteed, by [13, Theorem 1.2] we find a non-affine Lipschitz map u : Ω ⊂ R^2 → R^2 such that Du ∈ {X_1, …, X_5} almost everywhere in Ω. Furthermore, we can choose u with the property that, for any open subset V ⊂ Ω, Du attains each of these matrices on a set of positive measure. This is proved in Lemma 4.3.
In order to find Lipschitz maps w_1, w_2 : Ω → R^2 such that w = (w_1, w_2) satisfies Dw ∈ C̃_f a.e. in Ω, we simply consider w_1 = Au + B, w_2 = Cu + D, for suitable 2 × 2 matrices A, B, C, D.
We therefore arrive at our last condition.

Condition 3 Y i and Z i can be chosen of the form
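A sketch of the expected form of this condition, under the ansatz w_1 = Au + B, w_2 = Cu + D used above; the matrices A, B, C, D below are those of that ansatz, so this display is a reconstruction rather than a statement from the text:

```latex
Y_i \;=\; A\,X_i + B, \qquad Z_i \;=\; C\,X_i + D, \qquad i = 1, \dots, 5,
% with A, B, C, D suitable 2x2 matrices, as in the definition of w_1, w_2.
```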
In Sect. 4.1, we will give an explicit example of values such that Conditions 1, 2 and 3 are fulfilled.
Once this is achieved, we need to extend the energy E_f to an energy defined on integral currents of dimension 2 in R^4. Some of the results we present in this section for our specific case can easily be generalized to more general polyconvex integrands; therefore, we defer their proofs to Sect. 5.
In order to extend our polyconvex function f to a geometric functional, we first recall (4.3), i.e.
f(X) = εA(X) + g(X, det(X)), for g : R^5 → R convex and smooth, and introduce the convex function h : R^5 → R. We then consider the perspective function G of h. It is a standard result in convex analysis that G is convex on R^5 × R^+ as soon as h is convex on R^5, compare [4, Lemma 2]. Property (4.7) identifies the recession function of G, hence G can be extended to the hyperplane {y = 0}. In Lemma 4.4, we will prove that G(z, t) admits a finite, positively 1-homogeneous convex extension to the whole space R^6. We are finally able to define an integrand on the space Λ_2(R^4) of 2-vectors of R^4. For a more thorough introduction to k-vectors, see Sect. A.1.
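The perspective-function construction used here is standard; with h convex on R^5 as above, it reads:

```latex
% Perspective function of h, convex on \mathbb{R}^5 \times \mathbb{R}^+:
G(z, t) \;:=\; t\, h\!\left( \tfrac{z}{t} \right), \qquad t > 0 .
% Its limit as t \to 0^+ allows the extension to the hyperplane t = 0:
G(z, 0) \;:=\; \lim_{t \to 0^+} t\, h\!\left( \tfrac{z}{t} \right)
\;=\; \lim_{s \to +\infty} \frac{h(s z)}{s} \;=\; h^{*}(z),
% with h^{*} the recession function of h, as discussed in Sect. 5.
```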
Recall that a basis for Λ_2(R^4) is given by the six elements e_i ∧ e_j, 1 ≤ i < j ≤ 4, where e_1, e_2, e_3, e_4 is the canonical basis of R^4. Recall moreover that this vector space can be endowed with a scalar product that acts on simple vectors through the determinant of the matrix of the scalar products (u_i, v_j), where (u, v) denotes as usual the standard scalar product of R^4. The integrand Ψ is thus defined as, for τ ∈ Λ_2(R^4), Ψ(τ) := G((τ, e_3 ∧ e_2), (τ, e_4 ∧ e_2), (τ, e_1 ∧ e_3), (τ, e_1 ∧ e_4), (τ, e_3 ∧ e_4), (τ, e_1 ∧ e_2)). (4.10) Consequently, we define an energy on I_2(R^4) for T = ⟦E, T, θ⟧. For the notation concerning rectifiable currents and graphs, we refer the reader to Sect. A.4. The energy defined in this way satisfies Almgren's ellipticity condition (A.11), as we will prove in Lemma 4.5. Finally, in Lemma 4.6, we will prove that the current T_{u,θ} of (4.11) is stationary for this energy. The definition of stationarity for geometric functionals is recalled in Sect. A.5. In (4.11), Γ_u is the graph of u, ξ_u is its orientation, see (A.5), and θ(y) is a multiplicity, defined as θ(x, u(x)) = β_i if x ∈ Ω is such that Du(x) = X_i. This discussion constitutes the proof of the following:

Theorem 4.1 There exists a smooth and elliptic integrand
Ψ : Λ_2(R^4) → R such that the associated energy admits a stationary point T whose (integer) multiplicities are not constant. Moreover, the rectifiable set supporting T is given by the graph of a Lipschitz map u : Ω → R^2 that fails to be C^1 in any open subset V ⊂ Ω.

Lemma 4.2 There exists a smooth function f
of the form (4.3), with g : R^5 → R convex and smooth, such that (1) (4.6) is fulfilled. Proof We will roughly follow the strategy of [24, Lemma 3]. First, we construct the function g in several steps.
the set of admissible matrices. For ε > 0, consider for each i the perturbed values c^ε_i, where a(X, d) = √(1 + |X|^2 + d^2) and A(X) = a(X, det(X)), as defined in (4.4). Furthermore, we introduce the perturbed matrix Q^ε. Thanks to the strict inequality in (4.5), we can fix ε, σ > 0 such that Q^ε_{ij} ≤ −σ < 0 for all i, j. Let us define the linear functions l_j and the convex function g_1. Note that l_j(X_j, det(X_j)) = c^ε_j. Choosing a radially symmetric, non-negative smoothing kernel ρ_ε on R^5, with 0 < ε ≪ δ, we have that g_2 := ρ_ε * g_1 satisfies: (1) g_2 is smooth and convex; (2) g_2 = l_j in a neighbourhood of (X_j, det(X_j)) for all j ∈ {1, …, 5}.
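The intermediate convex function g_1 is, as usual in this kind of construction, the maximum of the affine functions l_j; a sketch, with notation as above:

```latex
g_1(X, d) \;:=\; \max_{1 \le j \le 5} l_j(X, d),
% convex as a maximum of affine functions. Since Q^\varepsilon_{ij} \le -\sigma < 0,
% near each point (X_j, \det(X_j)) the maximum is attained by l_j alone, so the
% mollification g_2 = \rho_\varepsilon * g_1 coincides with l_j there.
```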
We choose any R > 2 max_{1≤i≤5} (‖X_i‖ + |det(X_i)|) and any M > C. Now we may choose L > 0 accordingly; since M > C, the required inequality holds for all (X, d) ∉ B_{R_2}, for some R_2 > R. Now let us fix a smooth approximation of the max function, say m(a, b),
where φ_ε is a radially symmetric, non-negative smoothing kernel in R^2. Note that m(a, b) = max(a, b) outside a neighborhood of {a = b}. In particular, if we choose ε sufficiently small, we can ensure that g agrees with g_2 on B_{2R} by (4.13), and that it agrees with F(X, d) outside B_{2R_2} by (4.14). It remains to check that g(X, d) is still convex. First note that ∂_a m ≥ 0 and ∂_b m ≥ 0. Since m is convex and non-decreasing in each variable, the composition k(x) := m(u_1(x), u_2(x)) of m with two convex functions u_1, u_2
is convex. Thus we conclude that g is convex. Let us summarize the properties of g and of the related polyconvex integrand f_1(X) := g(X, det(X)): (1) g is a smooth, convex function. In particular, from the last conditions and (4.12) we conclude that h(X, d) has the required properties. For a compact C ⊂ R^{2×2}, the rank-one convex hull C^{rc} is defined through separation by rank-one convex functions, where f : R^{2×2} → R is said to be rank-one convex if its restriction to every rank-one line is convex. For an open set U ⊂ R^{2×2}, U^{rc} is defined as the union of K^{rc} over compact subsets K ⊂ U; in this way, if U is open, then U^{rc} is open as well. The existence of an in-approximation for K implies the existence of a non-affine map u such that Du ∈ {X_1, …, X_5}, hence (4.15). This is proved in [13, Theorem 1.1]. To show (4.16), there are two ways. First, one can use the same proof as in [20, Theorem 4.1] or [24, Proposition 2] to show that the essential oscillation of Du is positive on any open subset of Ω. Since there is rigidity for the four-gradient problem, see [3], this implies (4.16). Another way to show (4.16) is to use the Baire category approach to convex integration, as introduced by Kirchheim in [16]. In particular, in [16, Corollary 4.15] the following is proved: after defining the relevant open set U, fixing A ∈ U and setting P as there, the typical (in the sense of Baire) map u in the closure of P with respect to the ‖·‖_∞ norm has the property that Du ∈ K. Then, we can use [26, Lemma 7.4] to show that the typical map is actually non-affine on any open set; hence, again by the rigidity for the four-gradient problem, we conclude (4.16).

Lemma 4.4
Let G : R^5 × R_{≥0} → R be the convex function defined in (4.8). Then, there exists a positively 1-homogeneous, convex function G ∈ C^∞(R^6 \ {0}) ∩ Lip(R^6) extending it. Proof To prove the statement, it is sufficient to notice that the convexity of h and (4.9) tell us that h has property (P), see the beginning of Sect. 5, and therefore we can simply apply Proposition 5.2. The smoothness is a consequence of the smoothness of h, property (4.9) and Corollary 5.3.

Lemma 4.5 The energy associated with the integrand Ψ of (4.10) satisfies the uniform Almgren ellipticity condition (A.11).
Proof By construction, it is immediate to see that G_ε is still convex and positively 1-homogeneous. Define Ψ_ε as in (4.10), substituting G_ε for G. By the general Proposition 5.5, we see that Ψ_ε satisfies Almgren's condition, hence Ψ satisfies (A.11) with constant ε/2.

Lemma 4.6
The current T_{u,θ} = ⟦Γ_u, ξ_u, θ⟧ defined in (4.11) is stationary in Ω × R^2 for the energy associated with Ψ.
Proof A direct computation shows that f and Ψ fulfill the required compatibility relation, where W(X) = M^1(X) ∧ M^2(X) and the M^i(X) are the columns of the 4 × 2 matrix whose blocks are the identity and X. Once this is checked, the proof is entirely analogous to the one of [5, Proposition 6.8], and will be sketched in the appendix; see Proposition 5.8.

Extension of polyconvex functions
Let Φ : R^{n×m} → R^k be the usual map that, to a matrix X ∈ R^{n×m}, associates the vector of the subdeterminants of X. Consider a polyconvex function h : R^k → R of class C^1. The purpose of this section is to generalize the arguments of the previous section to arbitrary n, m, and hence to prove some of the lemmas of that section. Consider the following set of assumptions. If h fulfills (i)-(ii)-(iii), we will say it has property (P). If, in addition, h satisfies (iv), we will say that h fulfills property (PE).

Remark 5.1
Notice that (iii) is a consequence of (iv): indeed, if (iv) holds, we can write the corresponding inequality for z_1 = 0 and for any z_2 = z ∈ R^k. We denote by h* the recession function of h: h*(z) := lim_{t→+∞} h(tz)/t. It is not difficult to prove that the limit above always exists and is finite for a function h satisfying (P). To show it, one can use the fact that the function y ↦ y h(x/y), defined for y > 0, is convex for every fixed x ∈ R^k, see [4, Lemma 2].
As above, we define the perspective function G(x, y) := y h(x/y), for y > 0. We then consider the smallest convex extension of G to the whole of R^{k+1}; this is the function defined in (5.1). By the 1-homogeneity of G, we can write the extension as a supremum of linear functions. First, we prove:

Conversely, if there exists a function G that fulfills (1)-(2)-(3), then h fulfills (P).
Furthermore, we can prove the following characterization of G:

Corollary 5.3 Let h fulfill property (P)
, and let G be defined as in (5.1). Assume further that there exist λ′ ∈ R and R > 0 such that h(z) − (Dh(z), z) = λ′ for ‖z‖ ≥ R. Then λ = λ′ and, for t < 0, we have G(z, t) = h*(z) + λt, where λ is the quantity appearing in (iii).
Before starting the proof of the proposition, we need to recall some results concerning the notion of the subdifferential at x ∈ R^N of a convex function f : R^N → R.

Subdifferentials
The subdifferential of f at x, denoted by ∂f(x), is the collection of those vectors v ∈ R^N such that f(y) ≥ f(x) + (v, y − x) for every y ∈ R^N. We will use the following facts concerning the subdifferential. For a convex function with finite values, ∂f(x) ≠ ∅ at all x ∈ R^N, see [21, Theorem 23.4]. Conversely, if f : R^N → R is such that ∂f(x) ≠ ∅ for every x ∈ R^N, then f is convex. As can be seen from the definition, every v ∈ ∂f(x) provides an affine function touching f from below at x. This, together with the fact that if K is compact, then ∂f(K) := ⋃_{x∈K} ∂f(x) is compact, see [12, Lemma A.22], yields the fact that every convex function is locally Lipschitz. Moreover, if f is positively 1-homogeneous, a simple application of the definition of the subdifferential shows that ∂f(λx) = ∂f(x) for every λ > 0 and every x ∈ R^N. (5.3) In particular, combining (5.3) with the local Lipschitz property of convex functions, we infer that if f : R^N → R is convex and positively 1-homogeneous, then f must be globally Lipschitz. Furthermore, using the definition of the subdifferential and (5.3) for f convex and positively 1-homogeneous, it is easy to see that the following generalized Euler formula holds: f(x) = (v, x) for every v ∈ ∂f(x). Finally, we recall that the convex function f is differentiable at x if and only if ∂f(x) reduces to a singleton; see [12] and references therein. We can now start the proof of the proposition.
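The generalized Euler formula just mentioned can be derived by testing the subdifferential inequality with y = tx:

```latex
% For f convex, positively 1-homogeneous, and v \in \partial f(x):
f(tx) \;\ge\; f(x) + (v, tx - x)
\quad \Longleftrightarrow \quad
(t-1)\, f(x) \;\ge\; (t-1)\, (v, x).
% Taking t > 1 gives f(x) \ge (v, x); taking 0 < t < 1 gives f(x) \le (v, x).
% Hence
f(x) \;=\; (v, x) \qquad \text{for every } v \in \partial f(x).
```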

Proof of Proposition 5.2
First we assume that h has property (P). G is convex, since it is a supremum of linear functions; moreover, the convexity of h yields the convexity of G on R^k × (0, +∞). Having established that G is convex, the fact that G as in (5.1) extends G is a classical fact. This proves (1). Since G is positively 1-homogeneous, the extension is positively 1-homogeneous as well. Therefore (2) is checked, and we only need to prove (3). By (5.1), we see that in order to conclude we only need to show that, for fixed (z, t) ∈ R^{k+1}, (DG(x, y), (z, t)) ≤ L for all (x, y) ∈ R^k × (0, +∞), where L possibly depends on (z, t). Let us compute DG. Firstly, we have D_x G(x, y) = Dh(x/y). Now, exploiting the convexity of h, we can choose any v ∈ R^k with ‖v‖ = 1 and write h(x/y + sv) ≥ h(x/y) + s (Dh(x/y), v) for s > 0. Using the linear growth of h, i.e. (ii), we bound the left-hand side linearly in s; letting s → +∞, the previous expression yields a uniform bound on ‖Dh(x/y)‖, hence on D_x G. Thus, if we can show that ∂_y G(x, y) is uniformly bounded, then we conclude the proof. We compute explicitly, for every (x, y) ∈ R^k × (0, +∞), ∂_y G(x, y) = h(x/y) − (Dh(x/y), x/y). We are therefore left to study the boundedness (from below) of the function z ↦ h(z) − (Dh(z), z), but this is a consequence of (iii) of property (P). Finally, let us show the necessity of (P). If G is convex and extends G, then in particular h is convex. By the discussion of Sect. 5.1, we know that G is globally Lipschitz with some constant L > 0. Since h(z) = G(z, 1), we infer that h has linear growth, i.e. it enjoys property (ii). Finally, we need to show (iii).
Since G extends G in the upper half-space, we obtain the corresponding inequality there; by the definition of G, we then deduce the required lower bound, hence (iii).

Proof of Corollary 5.3
First, we show that λ = λ′. To see this, consider for any z ≠ 0 the auxiliary function g(t) := h(tz) − (Dh(tz), tz), for t > 0. Then g is non-increasing. Indeed, g(t) = ∂_y G(z, 1/t), and we can use that G is convex to deduce that t ↦ ∂_y G(z, t) is non-decreasing, hence that t ↦ g(t) is non-increasing. Now, for any t sufficiently large, by assumption (5.2), we have that g(t) = λ′. This shows that λ ≤ λ′. In particular, notice that h(0) = lim_{t→0^+} g(t) ≥ λ′. To show the equality between λ and λ′, consider now a sequence z_n ∈ R^k such that a_n := h(z_n) − (Dh(z_n), z_n) → λ as n → ∞. If z_n = 0 for infinitely many n, we can write λ = lim_{n→∞} a_n = h(0) ≥ λ′ and the proof is concluded. Otherwise, by the computation above, we have, for every t ≥ 1, a_n ≥ h(tz_n) − (Dh(tz_n), tz_n).
By choosing t in dependence of z_n, we can ensure through assumption (5.2) that h(tz_n) − (Dh(tz_n), tz_n) = λ′.
Therefore, λ = lim_n a_n ≥ λ′, and the proof of the first part of the Corollary is finished. Now we wish to show the characterization of G. Fix (z, t) ∈ R^k × (−∞, 0) and let (x, y) ∈ R^k × (0, +∞). Then, using the definition G(x, y) = y h(x/y), we can write (DG(x, y), (z, t)) = (Dh(x/y), z) + t (h(x/y) − (Dh(x/y), x/y)) ≤ h*(z) + tλ, since t < 0; that provides the desired bound. This finishes the proof of the convexity of G.
To conclude, we need to show the converse statement, i.e. that if G is even and convex, then h fulfills (PE). The fact that h fulfills (i)-(ii)-(iii) can be proved in a completely analogous way to Proposition 5.2. By Remark 5.1, one could also infer (iii) as a corollary of (iv). Finally, to see (iv), one can simply follow the chain of logical equivalences of the previous part of the proof. This proves that (PE) is also necessary for the existence of the even extension.

Consider an orthonormal basis of Λ_m(R^{m+n}), as done in (A.1). We define, for every τ ∈ Λ_m(R^{m+n}), Ψ(τ) := G((τ, E_2), …, (τ, E_{\binom{m+n}{m}}), (τ, E_1)), (5.16) and consequently the associated energy. Proof Let R, S ∈ R_m(R^{n+m}) with ∂R = ∂S, where spt S is contained in the vector subspace of R^{n+m} associated with a simple m-vector S_0 of R^{n+m}, and the orientation of S equals S_0 for ‖S‖-almost every point. Since ∂R = ∂S, we have that the corresponding vector-valued integrals of R and S coincide, compare [11, 5.1.2]. Note that this implies, by the linearity of φ, that the integrals of φ applied to the orientations coincide as well. Now we may use Jensen's inequality and the 1-homogeneity of G to deduce the desired comparison, where we use again in the last line that G is 1-homogeneous.
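The chain of (in)equalities in this proof can be sketched as follows; here φ denotes, as in the text, the linear coordinate map τ ↦ ((τ, E_2), …, (τ, E_1)), and we write 𝔼 for the energy and M for the mass (𝔼 is our notation, introduced only for this sketch):

```latex
% Since \partial R = \partial S and \varphi is linear,
\int \varphi(\vec R)\, d\|R\| \;=\; \int \varphi(\vec S)\, d\|S\| \;=\; \mathbf{M}(S)\, \varphi(S_0).
% Hence, by 1-homogeneity of G and Jensen's inequality,
\mathbb{E}(S) \;=\; \mathbf{M}(S)\, G(\varphi(S_0))
\;=\; G\Big( \int \varphi(\vec R)\, d\|R\| \Big)
\;\le\; \int G(\varphi(\vec R))\, d\|R\| \;=\; \mathbb{E}(R),
% which is precisely Almgren's (semi-)ellipticity for this energy.
```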

Remark 5.6
If G is even, then the associated energy is well defined on varifolds. Notice that in this case G is convex, even and 1-homogeneous; a simple computation in convex analysis shows that this forces G to be positive. This observation is what makes it impossible to extend an integrand f like the one constructed in Sect. 4 to an integrand defined on varifolds using the methods introduced here.

A.1. m-vectors

Consider multi-indices I of length m, I = (i_1, …, i_m), with 1 ≤ i_1 < … < i_m ≤ n + m. There are \binom{n+m}{m} of these, and each one defines a simple m-vector E_j = e_{i_1} ∧ … ∧ e_{i_m}, j ∈ {1, …, \binom{n+m}{m}}.
It is easy to check that E_1, …, E_{\binom{n+m}{m}} is a basis for Λ_m(R^{n+m}). We also set E_1 := e_1 ∧ … ∧ e_m, (A.1) while the ordering of the other indices is arbitrary (but fixed). The vector space Λ_m(R^{n+m}) can be endowed with a scalar product that is defined on simple vectors as (u_1 ∧ … ∧ u_m, v_1 ∧ … ∧ v_m) := det(X), where X ∈ R^{m×m} is defined by X_{ij} := (u_i, v_j). We define a norm on Λ_m(R^{n+m}) by setting ‖τ‖ := √((τ, τ)). Analogously, one introduces the space Λ^m(R^{n+m}) of m-covectors of R^{n+m} as the linear space generated by wedge products of m covectors of R^{n+m}. An element η ∈ Λ^m(R^{n+m}) acts by duality on elements of Λ_m(R^{n+m}) in the following way: let η = η_1 ∧ … ∧ η_m and τ = v_1 ∧ … ∧ v_m (the general case follows by linearity); then ⟨η, τ⟩ := det((η_i(v_j))_{ij}). With these definitions at hand, we can consider the space D^m(R^{n+m}) of smooth m-forms of R^{n+m} with compact support, namely D^m(R^{n+m}) := C^∞_c(R^{n+m}, Λ^m(R^{n+m})). We endow Λ^m(R^{n+m}) with a norm as well, hence we can consider on D^m(R^{n+m}) the norm ‖ω‖_∞ := sup_{x∈R^{n+m}} ‖ω(x)‖.
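As a concrete instance of this scalar product, a worked example with e_1, …, e_4 the canonical basis of R^4:

```latex
(e_1 \wedge e_2,\; e_1 \wedge e_2) \;=\;
\det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \;=\; 1,
\qquad
(e_1 \wedge e_2,\; e_1 \wedge e_3) \;=\;
\det \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \;=\; 0,
% so the simple vectors E_j of (A.1) form an orthonormal basis of
% \Lambda_m(\mathbb{R}^{n+m}).
```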

A.2. Planes and rectifiable sets
We denote by G(m, n + m) the space of unoriented m-planes of R^{n+m}. In [5], we used the identification of G(m, n + m) with the space of orthogonal projections onto m-planes, {P ∈ R^{(m+n)×(m+n)} : P = P^T, P^2 = P, rank(P) = tr(P) = m}.
It is not difficult to show that the space Λ^s_m(R^{n+m}) of simple m-vectors of unit norm can be identified with the space of oriented m-dimensional planes of R^{m+n}, see [14, Section 2.1]. It is thus natural to introduce the two-to-one map f : Λ^s_m(R^{n+m}) → G(m, n + m) (A.2) that takes v_1 ∧ … ∧ v_m ∈ Λ^s_m(R^{n+m}) to the projection onto the m-plane spanned by v_1, …, v_m. Notice that f is not injective, since f(τ) = f(−τ) for all τ ∈ Λ^s_m(R^{n+m}).
We recall that a set E ⊂ R^{m+n} is called rectifiable of dimension m if E = E_0 ∪ ⋃_{j∈N} F_j(E_j), where H^m(E_0) = 0, F_j ∈ C^1(R^m, R^{n+m}), and E_j ⊂ R^m is Borel. To such a set E it is possible to associate naturally a notion of approximate tangent plane, i.e. a map x ↦ T_x E ∈ G(m, n + m).
For the definition of T_x E, we refer the reader to [23, Section 3.1]. An orientation x ↦ T(x) of T_x E is a Borel map T ∈ L^∞(E, Λ^s_m(R^{n+m})) with ‖T(x)‖ = 1 for H^m-a.e. x ∈ E and T(x) ∈ f^{−1}(T_x E) for H^m-a.e. x ∈ E, where f is the map defined in (A.2).

A.3. Varifolds
An m-dimensional rectifiable varifold V is a measure on R^{n+m} × G(m, n + m) determined by an m-dimensional rectifiable set E and a function θ ∈ L^1(E; H^m ⌞ E). The varifold is called integer rectifiable if, in addition, θ takes values in N \ {0}. The varifold V defined as in (A.3) is denoted by V = ⟦E, θ⟧.
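In formulas, the action of the rectifiable varifold V = ⟦E, θ⟧ on a test function can be written as follows (a standard expression, consistent with the definition above):

```latex
V(\varphi) \;=\; \int_E \varphi\big(x,\, T_x E\big)\, \theta(x)\, d\mathcal{H}^m(x),
\qquad
\varphi \in C_c\big(\mathbb{R}^{n+m} \times G(m, n+m)\big),
% i.e. V = \theta\, \mathcal{H}^m \llcorner E \otimes \delta_{T_x E}.
```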

A.4. Currents
A rectifiable current of dimension m, denoted by T, is a linear functional over D^m(R^{n+m}) represented as T(ω) = ∫_E ⟨ω(x), T(x)⟩ θ(x) dH^m(x), where E is an m-rectifiable subset of R^{n+m}, T(x) is an orientation of T_x E, and θ ∈ L^1(E; H^m ⌞ E). Such a current is denoted by T = ⟦E, T, θ⟧. The mass of the current T is defined as M(T) := ∫_E |θ| dH^m, and we introduce the notion of boundary ∂T of a rectifiable current T as the (m − 1)-dimensional current ∂T(ω) := T(dω).
We restrict our attention to the space of integer rectifiable currents of dimension m, R m (R n+m ), defined as the space of m-dimensional rectifiable currents T with finite mass and for which θ has values in N \ {0}.
Given T = ⟦E, T, θ⟧ and an injective vector field X ∈ C^1(R^{n+m}, R^{n+m}), we define the pushforward of T as the current X_#(T) ∈ R_m(R^{n+m}); analogously, one defines the pushforward of a varifold V = ⟦E, θ⟧. Notice that to every current T ∈ R_m(R^{n+m}) one can associate an integer rectifiable varifold V_T in the obvious way, through the map f defined in (A.2). Let us explain how to give the graph of a Lipschitz map the structure of a current, hence also of a varifold. We essentially follow the theory developed in [14,15]. We also refer the reader to [5], where this discussion was carried out to give the graph the structure of a varifold. Let Ω ⊂ R^m be open and bounded and let u ∈ Lip(Ω, R^n). The graph of u is the m-rectifiable set Γ_u := {(x, u(x)) : x ∈ Ω}, whose approximate tangent plane at (x, u(x)) is spanned by the vectors (e_i, ∂_i u(x)), i = 1, …, m. The orientation we define on this plane is the natural one, ξ_u. Given a Borel function θ ∈ L^1(Γ_u; H^m ⌞ Γ_u), we define the current T_{u,θ} = ⟦Γ_u, ξ_u, θ⟧, and we set β(x) := θ(x, u(x)). Through the area formula, see for instance [5, Proposition 6.4], the mass of T_{u,θ} can be computed by integrating β against the area function A over Ω. Notice that in the case n = m = 2, A has the form (4.4). In particular, by the definition of the norm of an m-vector, the norm of the wedge product of the tangent vectors above equals A(Du(x)), where we have used the notation of (A.5).
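The area formula alluded to above gives, in the case n = m = 2 relevant for Sect. 4 (with β(x) = θ(x, u(x)) as in the text; the square root in A is our reading of the area function of (4.4)):

```latex
\mathbf{M}(T_{u,\theta}) \;=\; \int_{\Gamma_u} |\theta|\, d\mathcal{H}^2
\;=\; \int_\Omega |\beta(x)|\, \mathcal{A}(Du(x))\, dx,
\qquad
\mathcal{A}(X) \;=\; \sqrt{1 + |X|^2 + \det(X)^2},
% the Jacobian of the graph parametrization x \mapsto (x, u(x)).
```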

A.5. Geometric functionals
Given a smooth and 1-homogeneous function Ψ : Λ^s_m(R^{n+m}) → R, we can define the associated functional on rectifiable currents T = ⟦E, T, θ⟧, and similarly on varifolds V = ⟦E, θ⟧. In particular, any even integrand Ψ as above allows us to define a functional on varifolds too. The minimal hypotheses that one requires on an integrand to get lower semicontinuity of the energy, see [11, Section 5.1], are that Ψ has positive values and satisfies Almgren's ellipticity condition, i.e. that the energy of Q is less than or equal to the energy of T whenever ∂T = ∂Q, Q has support contained in an m-dimensional subspace of R^{n+m} whose orienting m-vector is τ = v_1 ∧ … ∧ v_m, and the orientation of Q is given by τ. We say that it satisfies a uniform Almgren ellipticity condition if there exists ε > 0 such that the corresponding strengthened inequality holds. We now give the definition of stationarity in the sense of currents (or varifolds). Fix an energy and let U ⊂ R^{m+n} be open. Given any function g ∈ C^∞_c(U, R^{n+m}), we define the flow X_ε(x)
:= γ_x(ε), where γ_x is the solution of the ODE γ′(t) = g(γ(t)), γ(0) = x. (A.12) We define the variation of T with respect to the vector field g ∈ C^1_c(U; R^{m+n}) as the derivative at ε = 0 of the energy of the pushforwards (X_ε)_# T. Finally, the current T is said to be stationary in U if [δT](g) = 0 for all g ∈ C^1_c(U; R^{m+n}). With obvious modifications, this definition holds for varifolds as well.
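In formulas, writing 𝔼 for the energy induced by the integrand (𝔼 is our notation for this sketch), the variation and the stationarity condition read:

```latex
[\delta T](g) \;:=\; \frac{d}{d\varepsilon}\bigg|_{\varepsilon = 0}
\mathbb{E}\big( (X_\varepsilon)_{\#} T \big),
\qquad
T \ \text{stationary in } U
\;\iff\;
[\delta T](g) = 0 \quad \forall\, g \in C^1_c(U; \mathbb{R}^{m+n}).
```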