Abstract
In this paper we study stationary graphs for functionals of geometric nature defined on currents or varifolds. The point of view we adopt is the one of differential inclusions, introduced in this context in the recent papers (De Lellis et al. in Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335; Tione in Minimal graphs and differential inclusions. Commun Part Differ Equ 7:1–33, 2021). In particular, given a polyconvex integrand f, we define a set of matrices \(C_f\) that allows us to rewrite the stationarity condition for a graph with multiplicity as a differential inclusion. Then we prove that if f is assumed to be nonnegative, then in \(C_f\) there is no \(T'_N\) configuration, thus recovering the main result of De Lellis et al. (Geometric measure theory and differential inclusions, 2019. arXiv:1910.00335) as a corollary. Finally, we show that if the hypothesis of nonnegativity is dropped, one can not only find \(T'_N\) configurations in \(C_f\), but it is also possible to construct via convex integration a very degenerate stationary point with multiplicity.
1 Introduction
In this paper we continue the study started in [5, 26] of functionals arising from geometric variational problems from the point of view of differential inclusions. The energies we consider are of the form
defined on m-dimensional rectifiable currents (resp. varifolds) \(T = \llbracket E,\vec T, \theta \rrbracket \) of \(\Omega \times {\mathbb {R}}^n\), where \(\Omega \subset {\mathbb {R}}^m\) is a convex and bounded open set, and the integrand \(\Psi \) is defined on the oriented (resp. nonoriented) Grassmannian space. In order to keep the technicalities at a minimum level, we defer all the definitions of these geometric objects to Section A. The main interest is the regularity of stationary points for energies as in (1.1) satisfying suitable ellipticity conditions. From the celebrated regularity theorem of Allard of [2], it is known that an \(\varepsilon \)-regularity theorem holds for stationary points of the area functional, namely the case in which \(\Psi \equiv 1\). Since then, the question of extending this result to more general energies has been an important open problem in Geometric Measure Theory; see [1] for a result in this direction and [6, 8, 9] for more recent contributions. On the other hand, the situation is better understood for minimizers of energies of the form (1.1), for which similar partial regularity theorems are known, see for instance [11, Ch. 5], [22].
In [5], the second author together with C. De Lellis, G. De Philippis and B. Kirchheim already approached this regularity problem through the viewpoint of differential inclusions. The theory of differential inclusions has a rich history; we refer the reader to [17] for an overview and to [18, 19] for more recent results. Since this work is also based on that viewpoint, let us briefly explain what this means. The strategy of [5] consisted first in rewriting (1.1) on a special class of geometric objects, namely multiplicity one graphs of Lipschitz maps, and then in studying the differential inclusion associated to the system of PDEs arising from the stationarity condition. Namely, it can be shown, see [5, Sec. 6] or Sect. A.5, that to a \(C^k\) integrand \(\Psi \) as the one appearing in (1.1) one can naturally associate a \(C^k\) function \(f: {\mathbb {R}}^{n\times m}\rightarrow {\mathbb {R}}\) with the property that
where \(T_u = \llbracket \Gamma _u,\vec \xi _u,1\rrbracket \) is the current associated to the graph of u, i.e. if \(v(x)\doteq (x,u(x))\) is the graph map, then \(T_u={v}_{\#} \llbracket \Omega \rrbracket \). In particular, it is possible to prove, see [5, Prop. 6.8], that \(T_u\) is stationary for the energy (1.1) if and only if u solves the following equations:
and
The Euler–Lagrange equation (1.3) corresponds to variations of the form
usually called outer variations, and (1.4) corresponds to variations of the form
called inner (or domain) variations. The second step is to study (1.3) and (1.4) from the point of view of differential inclusions. This amounts to rewriting (1.3)–(1.4) equivalently as
for \(A \in L^\infty (\Omega ,{\mathbb {R}}^{n\times m})\), \(B \in L^\infty (\Omega ,{\mathbb {R}}^m)\) with \({{\,\mathrm{div}\,}}(A) = 0\), \({{\,\mathrm{div}\,}}(B) = 0\).
This paper focuses on the same problem as [5], i.e. regularity of stationary points for geometric integrands, but with the addition of considering graphs with arbitrary positive multiplicity. This of course enlarges the class of competitors and might allow for more flexibility in the regularity of solutions. In particular, we consider polyconvex functions f, i.e.
where \(g \in C^1({\mathbb {R}}^k)\) is a convex function and \(\Phi :{\mathbb {R}}^{n\times m} \rightarrow {\mathbb {R}}^k\) is the vector containing all the minors (subdeterminants) of order larger than or equal to 2 of \(X \in {\mathbb {R}}^{n\times m}\). In analogy with (1.3)–(1.4), we will be interested in the following system of PDEs
for a Lipschitz map \(u \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^n)\) and a Borel function \(\beta \in L^\infty (\Omega ,{\mathbb {R}}^+)\). The study of objects with multiplicity is rather natural in the context of stationary rectifiable varifolds or currents. When dealing with these objects, one is interested in showing a so-called constancy theorem, see [23, Theorem 8.4.1]. A constancy theorem in the sense of [23, Theorem 8.4.1] asserts that if a stationary (for the area) varifold of dimension m has support contained in a \(C^2\) manifold of the same dimension, then the varifold must be given by a fixed multiple of the manifold, so that in particular the multiplicity must be constant. In [10], it was shown that instead of \(C^2\), even Lipschitz regularity of the manifold is sufficient to guarantee the validity of the Constancy Theorem. This is connected to the following algebraic fact. If a \(C^2\) map u solves (1.3), then it necessarily solves (1.4) as well, hence the system (1.3)–(1.4) reduces to equation (1.3). Nonetheless, if \(u \in C^2\) solves (1.6) for a bounded multiplicity \(\beta \), then it is no longer true that u automatically solves the first equation. One would therefore like to show a priori that the multiplicity is constant, so that one is again in the situation given by (1.3)–(1.4). As for regularity theorems, no general constancy result is known at the moment for general functionals, except for the codimension one case, see [7].
As said, the tools we use are the same as the ones of [5], namely we rewrite (1.6) as
again for \(A \in L^\infty (\Omega ,{\mathbb {R}}^{n\times m})\), \(B \in L^\infty (\Omega ,{\mathbb {R}}^m)\) with \({{\,\mathrm{div}\,}}(A) = 0\), \({{\,\mathrm{div}\,}}(B) = 0\). Our result is twofold. First, we will show that, if f is assumed to be nonnegative, then the same result as [5, Theorem 1] holds, namely in \(C_f\) there are no \(T'_N\) configurations. Secondly, we show the optimality of this result by proving that if we drop the hypothesis on the positivity of f, one can not only embed a special family of matrices in \(C_f\), but one can actually construct a stationary current for the energy given in (1.1) whose support lies on the graph of a Lipschitz and nowhere \(C^1\) map. In order to formulate these results properly, we need some terminology concerning differential inclusions.
Differential inclusions are relations of the form
for \(M \in L^\infty (\Omega ,{\mathbb {R}}^{n\times m})\) satisfying \({\mathscr {A}}(M) = 0\) in the weak sense for some constant-coefficient linear differential operator \({\mathscr {A}}(\cdot )\). To every operator \({\mathscr {A}}(\cdot )\), one can associate a wave cone, denoted by \(\Lambda _{\mathscr {A}}\), that is made of those directions A in which it is possible to have plane wave solutions, i.e. \(A \in {\Lambda _{{\mathscr {A}}}}\) if and only if there exists \(\xi \in {\mathbb {R}}^m\) such that
In this work, we will not need to consider various differential operators, as we will only work with the mixed div-curl operator introduced in (1.5). In that case, we denote the cone by \(\Lambda _{dc}\) and we will introduce it in detail in Sect. 2.1. Due to the connection of the wave cone to the existence of oscillatory solutions of (1.8), a very first step to exclude wild solutions of (1.8) is to check that
This is usually quite simple to verify, and indeed we will show in Proposition 3.1 that, if f is positive, then (1.9) holds with \(\Lambda _{{\mathscr {A}}} = \Lambda _{dc}\) and K replaced by \(C_f\). Property (1.9) is in general not sufficient to guarantee good regularity properties of solutions of (1.8). Indeed, in [20], S. Müller and V. Šverák constructed a striking counterexample to elliptic regularity for solutions of
where the function \(f \in C^\infty ({\mathbb {R}}^{2\times 2})\) is quasiconvex (for the definition of quasiconvex function, we refer the reader to [20]), and J is a matrix satisfying \(J = J^T\) and \(J^2 = {{\,\mathrm{id}\,}}\). In particular, they were able to show that there exists a Lipschitz and nowhere \(C^1\) function \(v: \Omega \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^4\) satisfying the differential inclusion (1.10). Their strategy was subsequently improved by L. Székelyhidi in [24] showing that f can be chosen polyconvex. In both cases, \(K'_f\) does not contain rank one connections, i.e.
and this can be proved to be equivalent to (1.9) in the case \({\mathscr {A}} = {{\,\mathrm{curl}\,}}\). Their strategy was based on showing that in \(K'_f\) other suitable families of matrices could be embedded, the so-called \(T_N\) configurations. In our situation, since we are dealing with mixed div-curl operators, we need to consider a slightly different version of \(T_N\) configurations, that we have named \(T'_N\) configurations in [5]. We postpone the definition of \(T_N\) and \(T'_N\) configurations to Sect. 2, but we are finally able to formally state our main positive result:
Theorem
If \(f\in C^1 ({\mathbb {R}}^{n\times m})\) is a strictly polyconvex function, then \(C_f\) does not contain any set \(\{A_1, \ldots , A_N\} \subset {\mathbb {R}}^{(2n + m)\times m}\) which induces a \(T_N'\) configuration, provided that \(f(X_1) \ge 0,\dots , f(X_N) \ge 0\), if
This result, as [5, Theorem 1], shows that it is not possible to apply the convex integration methods of [20, 24] to prove the existence of an irregular solution of the system (1.6). This theorem is stronger than [5, Theorem 1], in the sense that we are able to show [5, Theorem 1] as a corollary:
Corollary
If \(f\in C^1 ({\mathbb {R}}^{n\times m} )\) is a strictly polyconvex function (not necessarily nonnegative), then \(K_f\) does not contain any set \(\{A_1, \ldots , A_N\}\) which induces a \(T_N'\) configuration.
Finally, in Sect. 4, we show the optimality of the hypothesis of nonnegativity of the previous theorem by proving the following:
Theorem
There exists a smooth and elliptic integrand \(\Psi : \Lambda _2({\mathbb {R}}^4)\rightarrow {\mathbb {R}}\) such that the associated energy \(\Sigma \) admits a stationary point T whose (integer) multiplicities are not constant. Moreover, the rectifiable set supporting T is given by the graph of a Lipschitz map \(u: \Omega \rightarrow {\mathbb {R}}^2\) that fails to be \(C^1\) in any open subset \({\mathcal {V}} \subset \Omega \).
The last Theorem is obtained by embedding in the differential inclusion (1.7) what has been named in [13] a large \(T_N\) configuration. Following the strategy of [24], we do not a priori choose a polyconvex \(f \in C^\infty ({\mathbb {R}}^{2\times 2})\), but rather we construct it in such a way that \(C_f\) already contains this special family of matrices. Once the polyconvex function f has been built, we prove an extension result for f to the Grassmannians, thus obtaining the integrand \(\Psi \) of the statement of the Theorem. The extension results are quite simple and might be of independent interest. The construction of our counterexample cannot be carried out in the varifold setting. The reason is quite elementary, as the integrand \(\Psi \) we would need to construct in the varifold case should be even, convex and positively 1-homogeneous, hence positive. We refer the reader to Remark 5.6 for more details. Moreover, let us point out that positivity of the integrand is a necessary assumption when studying existence of minima, but to the best of our knowledge there is no available example showing it to be a necessary assumption also when studying regularity properties of stationary points.
The paper is organized as follows. In Sect. 2, we recall the statements of our main results in the case of nonnegative integrands f and we collect some crucial preliminary results of [5]. The proofs of the main results in the positive case, i.e. Proposition 3.1, Theorem 3.3 and Corollary 3.4, will be given in Sect. 3. In Sect. 4, we provide a counterexample to regularity when dropping the hypothesis of positivity of the integrand. Some lemmas of Sect. 4 concerning the extension of polyconvex functions to the Grassmannian manifold can be easily extended to general dimensions and codimensions. Therefore, we give the proof of these general versions in Sect. 5. Finally, the appendix contains a concise introduction to the tools of geometric measure theory used along the paper.
2 Positive case: absence of \(T_N\) configurations
In this section we collect some preliminary results proved in [5], that will be essential for the proofs of the next section.
2.1 Div-curl differential inclusions, wave cones and inclusion sets
In this subsection, we explain how to rephrase the system (1.3)–(1.4) as a differential inclusion. As recalled in the introduction, the Euler–Lagrange equations defining stationary points for energies \({\mathbb {E}}_f\) are the pair of equations (1.3), (1.4), which can be written in the classical form:
Thus we are led to study the following div-curl differential inclusion for a triple of maps \(X, Y\in L^\infty (\Omega , {\mathbb {R}}^{n\times m})\) and \(Z\in L^\infty (\Omega , {\mathbb {R}}^{m\times m})\):
where \(f\in C^1 ({\mathbb {R}}^{n\times m})\) is a fixed function.
Moreover, we also consider the following more general system of PDEs, for \(u \in {{\,\mathrm{Lip}\,}}(\Omega , {\mathbb {R}}^n)\) and a Borel map \(\beta \in L^\infty (\Omega ,(0,+\infty ))\):
This system is equivalent to the stationarity, in the sense of varifolds, of the varifold \(V = \llbracket \Gamma _u,\beta \rrbracket \), where \(\Gamma _u\) is the graph of u. This is discussed in Sect. A.5. The div-curl differential inclusion associated to this system is, again for a triple of maps \(X, Y\in L^\infty (\Omega , {\mathbb {R}}^{n\times m})\) and \(Z\in L^\infty (\Omega , {\mathbb {R}}^{m\times m})\):
where
This discussion proves the following
Lemma 2.1
Let \(f\in C^1 ({\mathbb {R}}^{n\times m})\). A map \(u \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^n)\) is a stationary point of the energy (1.2) if and only if there are matrix fields \(Y\in L^\infty (\Omega , {\mathbb {R}}^{n\times m})\) and \(Z\in L^\infty (\Omega , {\mathbb {R}}^{m\times m})\) such that \(W = (Du, Y,Z)\) solves the div-curl differential inclusion (2.1)–(2.2).
Moreover, the couple \((u,\beta ) \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^n)\times L^\infty (\Omega ,(0,+\infty ))\) solves (2.3) if and only if there are matrix fields \(Y\in L^\infty (\Omega , {\mathbb {R}}^{n\times m})\) and \(Z\in L^\infty (\Omega , {\mathbb {R}}^{m\times m})\) such that \(W = (Du, Y,Z)\) solves the div-curl differential inclusion (2.4)–(2.5).
Finally, we introduce here the wave cone associated to the mixed div-curl operator that is relevant for us.
Definition 2.2
The cone \(\Lambda _{dc}\subset {\mathbb {R}}^{(2n+m)\times m}\) consists of the matrices in block form
with the property that there is a direction \(\xi \in {\mathbb {S}}^{m-1}\) and a vector \(u\in {\mathbb {R}}^n\) such that \(X = u\otimes \xi \), \(Y \xi =0\) and \(Z\xi =0\).
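As a minimal numerical sketch (ours, not from the paper), one can build an element of \(\Lambda_{dc}\) directly from this definition by projecting arbitrary matrices onto the hyperplane orthogonal to a chosen direction \(\xi\):

```python
import numpy as np

# Sketch: construct one element (X, Y, Z) of the wave cone Lambda_dc
# of Definition 2.2, with m = 3, n = 2.  All choices here are illustrative.
rng = np.random.default_rng(0)
m, n = 3, 2

xi = rng.standard_normal(m)
xi /= np.linalg.norm(xi)                       # direction xi in S^{m-1}
u = rng.standard_normal(n)                     # arbitrary vector in R^n

X = np.outer(u, xi)                            # X = u (x) xi, so rank(X) <= 1

# Any Y, Z annihilating xi will do: project random matrices onto xi-perp.
P_perp = np.eye(m) - np.outer(xi, xi)          # orthogonal projector onto xi-perp
Y = rng.standard_normal((n, m)) @ P_perp       # hence Y xi = 0
Z = rng.standard_normal((m, m)) @ P_perp       # hence Z xi = 0

assert np.linalg.matrix_rank(X) <= 1
assert np.allclose(Y @ xi, 0) and np.allclose(Z @ xi, 0)

# Such a triple generates a plane wave (X, Y, Z) h(<x, xi>): each row of
# Y h(<x, xi>) has divergence h'(<x, xi>) * (row . xi) = 0, and similarly for Z,
# while X = u (x) xi is curl-free along the same profile.
```
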
2.2 \(T_N\) configurations and \(T'_N\) configurations
We start defining \(T_N\) configurations for classical curltype differential inclusions.
Definition 2.3
An ordered set of \(N\ge 2\) distinct matrices \(\{X_i\}_{i=1}^N \subset {\mathbb {R}}^{n\times m}\) is said to induce a \(T_N\) configuration if there exist matrices \(P, C_i \in {\mathbb {R}}^{n\times m}\) and real numbers \(k_i > 1\) such that:

(a)
each \(C_i\) belongs to the wave cone of \({{{\,\mathrm{curl}\,}}}\, X=0\), namely \({{\,\mathrm{rank}\,}}(C_i) \le 1\) for each i;

(b)
\(\sum _i C_i = 0\);

(c)
\(X_1, \ldots , X_N\), P and \(C_1, \ldots , C_N\) satisfy the following N linear conditions
$$\begin{aligned} \begin{aligned}&X_1 = P + k_1 C_1 ,\\&X_2 = P + C_1 + k_2C_2 ,\\&\quad \vdots \\&X_N = P + C_1 +\dots + k_NC_N\, . \end{aligned} \end{aligned}$$ (2.7)
In the rest of the paper we will use the word \(T_N\) configuration for the data
We will moreover say that the configuration is nondegenerate if \({{\,\mathrm{rank}\,}}(C_i)=1\) for every i.
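The cascade structure (2.7) is easy to instantiate. The following sketch (our own toy example, not taken from the paper) builds a \(T_4\) configuration in \({\mathbb R}^{2\times 2}\) from a choice of rank-one matrices \(C_i\) summing to zero and coefficients \(k_i > 1\), and checks conditions (a)–(c):

```python
import numpy as np

# Illustrative T_4 configuration in R^{2x2} built from Definition 2.3;
# the specific P, C_i, k_i below are our own choice.
P = np.zeros((2, 2))
e = np.eye(2)
C = [np.outer(e[0], e[0]), np.outer(e[1], e[1]),
     -np.outer(e[0], e[0]), -np.outer(e[1], e[1])]   # rank one, sum to zero
k = [2.0, 2.0, 2.0, 2.0]                             # k_i > 1

# Cascade (2.7): X_i = P + C_1 + ... + C_{i-1} + k_i C_i
X = []
for i in range(4):
    X.append(P + sum(C[:i], np.zeros((2, 2))) + k[i] * C[i])

assert all(np.linalg.matrix_rank(Ci) <= 1 for Ci in C)   # condition (a)
assert np.allclose(sum(C), 0)                            # condition (b)
for i in range(4):                                        # the X_i are distinct
    for j in range(i + 1, 4):
        assert not np.allclose(X[i], X[j])
```

Here the resulting matrices are \(X_1 = \mathrm{diag}(2,0)\), \(X_2 = \mathrm{diag}(1,2)\), \(X_3 = \mathrm{diag}(-1,1)\), \(X_4 = \mathrm{diag}(0,-1)\); since each \(C_i\) has rank exactly one, the configuration is nondegenerate in the sense above.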
As in [5], we give a slightly more general definition of \(T_N\) configuration than the one usually given in the literature (cf. [20, 24, 25]), in that we drop the requirement that there are no rank-one connections between distinct \(X_i\) and \(X_j\). We refer the reader to [5] for further discussion concerning \(T_N\) configurations.
Adapting this notion to the div-curl operator, we introduce \(T'_N\) configurations, which were first defined in [5].
Definition 2.4
A family \(\{A_1, \ldots , A_N\}\subset {\mathbb {R}}^{(2n+m)\times m}\) of \(N\ge 2\) distinct
induces a \(T_N'\) configuration if there are matrices \(P, Q, C_i, D_i \in {\mathbb {R}}^{n\times m}\), \(R, E_i\in {\mathbb {R}}^{m\times m}\) and coefficients \(k_i >1\) such that
and the following properties hold:

(a)
each element \((C_i, D_i, E_i)\) belongs to the wave cone \(\Lambda _{dc}\) of (2.1);

(b)
\(\sum _\ell C_\ell = 0\), \(\sum _\ell D_\ell =0 \) and \(\sum _\ell E_\ell = 0\).
We say that the \(T'_N\) configuration is nondegenerate if \({{\,\mathrm{rank}\,}}(C_i)=1\) for every i.
We collect here some simple consequences of the definition above.
Proposition 2.5
Assume \(A_1, \ldots , A_N\) induce a \(T_N'\) configuration with \(P,Q, R, C_i, D_i, E_i\) and \(k_i\) as in Definition 2.4. Then:

(i)
\(\{X_1, \ldots , X_N\}\) induce a \(T_N\) configuration of the form (2.7), if they are distinct; moreover the \(T_N'\) configuration is nondegenerate if and only if the \(T_N\) configuration induced by \(\{X_1, \ldots , X_N\}\) is nondegenerate;

(ii)
For each i there is an \(n_i\in {\mathbb {S}}^{m-1}\) and a \(u_i\in {\mathbb {R}}^n\) such that \(C_i = u_i\otimes n_i\), \(D_i n_i =0\) and \(E_i n_i =0\);

(iii)
\({{\,\mathrm{tr}\,}}C_i^T D_i = \langle C_i, D_i\rangle = 0\) for every i.
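Item (iii) follows from (ii) by the identity \(\langle u\otimes n, D\rangle = {\rm tr}\,(n\, u^T D) = u^T D n\), which vanishes when \(Dn = 0\). A quick numerical sanity check of this step (our own illustration):

```python
import numpy as np

# Check of Proposition 2.5(iii): if C = u (x) n and D n = 0, then
# <C, D> = tr(C^T D) = u^T D n = 0.
rng = np.random.default_rng(1)
n_dim, m = 2, 3

u = rng.standard_normal(n_dim)
nu = rng.standard_normal(m)
nu /= np.linalg.norm(nu)                       # unit vector n in S^{m-1}

C = np.outer(u, nu)                            # C = u (x) n
# force D n = 0 by projecting a random matrix onto the hyperplane n-perp:
D = rng.standard_normal((n_dim, m)) @ (np.eye(m) - np.outer(nu, nu))

assert np.allclose(D @ nu, 0)
assert abs(np.trace(C.T @ D)) < 1e-12          # <C, D> = 0
```
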
2.3 Strategy
Before starting with the proof of the main result of this part of the paper, it is convenient to explain the strategy we intend to follow. In order to do so, let us consider here the case \(n = m = 2\), \(N = 5\). Suppose by contradiction that there exist a strictly polyconvex function \(f : {\mathbb {R}}^{2\times 2 }\rightarrow {\mathbb {R}}\), \(f(X) = g(X,\det (X))\), and a \(T'_5\) configuration \(A_1,A_2,A_3,A_4,A_5\),
where \(X_i,Y_i,Z_i\) fulfill the relations of (2.8), i.e.
We will see below that we can assume without loss of generality that \(P=0\). The first part of the strategy follows the same lines as the one of [5]. Indeed, we think of the relations \(A_i \in C_f\), \(\forall i\), where \(C_f\) has been defined in (2.6), as two separate pieces of information:
and
Let us denote \(c_i\doteq f(X_i)\). As in [5], we use (2.9) to obtain inequalities involving \(X_i,Y_i\) and quantities depending on f. These are deduced from the polyconvexity of f, analogously to [24, Lemma 3]. In particular, (2.9) is rewritten as
for \(d_i \doteq \partial _{y_5}g(X_i,\det (X_i))\). This is proved in Proposition 2.9. The final goal is to prove that these inequalities cannot be fulfilled at the same time. As in [5], we can simplify (2.11) using the structure result on \(T_N\) configurations in \({\mathbb {R}}^{2\times 2}\) of [25, Proposition 1]. This asserts, in the specific case of the ongoing example, the existence of 5 vectors \((t_1^i,\dots , t_5^i)\), \(i \in \{1,\dots , 5\}\), with positive components, such that
If we use this result in (2.11), we can eliminate from the expression the variable \(d_i\), thus obtaining
compare Corollary 2.10. In [5], [25, Proposition 1] was extended to \(T_N\) configurations in \({\mathbb {R}}^{n\times m}\), so that relations (2.12) remain true in every dimension and target dimension. This extension is recalled in Proposition 2.8. Despite being very useful, this last simplification does not suffice to conclude the proof. Indeed, up to now we have exploited (2.9) and the fact that \(\{X_1,\dots , X_5\}\) induces a \(T_5\) configuration, but, if \(\beta _i = 1,\forall i\), this is exactly the situation of [24]. Since from that paper we know the existence of \(T_5\) configurations in \(K'_f\), clearly we cannot reach a contradiction at this point of the strategy. This is where the inner variations come into play. We rewrite (2.10) using the definition of \(T'_5\) configuration and, after some manipulations, we find that the numbers
must all be 0. For the index I such that \(\beta _I = \min _{i}\beta _i\), and essentially using the positivity of \(c_j\), we find that
which is in contradiction with the negativity of \(\nu _I\).
2.4 Preliminary results: \(T_N\) configurations
To follow the strategy explained in Sect. 2.3, we need to recall the extension of [25, Proposition 1] proved in [5]. Here we will only recall the essential results without proof; we refer the interested reader to [5] for the details. First, it is possible to associate to a \(T_N\) configuration of the form (2.7), i.e.
a defining vector \((\lambda ,\mu ) \in {\mathbb {R}}^{N + 1}\), see [5, Definition 3.7], defined as follows:
These relations can be inverted, in fact one can express
Since \(k_i > 1, \forall i \in \{1,\dots ,N\}\), (2.13) implies that \(\lambda _i> 0, \forall i, \mu > 1\) and also
As in [25, Proposition 1], we define N vectors of \({\mathbb {R}}^N\) with positive components
where \(\xi _i > 1\) are normalization constants chosen in such a way that \(\Vert t^{i}\Vert _{1} = 1\). For a vector \(v = (v_1,\dots , v_N) \in {\mathbb {R}}^N\),
The importance of these vectors \(t^i\) comes from [25, Proposition 1], where it is proved that, for a \(T_N\) configuration of the form (2.7) in \({\mathbb {R}}^{2\times 2}\),
Moreover, the following relation holds for every i:
We need to state the generalization of the previous relations for \(T_N\) configurations of any size. In [5, Lemma 3.10], the following general linear algebra result was proved:
Lemma 2.6
Assume the real numbers \(\mu >1\), \(\lambda _1, \ldots , \lambda _N >0\) and \(k_1, \ldots , k_N >1\) are linked by the formulas (2.13). Assume \(v, v_1, \ldots , v_N, w_1, \ldots , w_N\) are elements of a vector space satisfying the relations
If we define the vectors \(t^i\) as in (2.15), then
This lemma allows us to generalize (2.16) and (2.17), compare [5, Proposition 3.8]. To state this result, we need some notation concerning multi-indexes. We will use I for multi-indexes referring to ordered sets of rows of matrices and J for multi-indexes referring to ordered sets of columns. In our specific case, where we deal with matrices in \({\mathbb {R}}^{n\times m}\), we will thus have
and we will write \(\sharp I\doteq r\) and \(\sharp J\doteq s\). In the sequel we will always have \(r = s\).
Definition 2.7
We denote by \({\mathcal {A}}_r\) the set
For a matrix \(M = (m_{ij})\in {\mathbb {R}}^{n\times m}\) and for \(Z\in {\mathcal {A}}_r\) of the form \(Z = (I,J)\), we denote by \(M^Z\) the square \(r\times r\) matrix obtained from M by considering just the elements \(m_{ij}\) with \(i\in I\), \(j\in J\) (using the order induced by I and J).
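In code, extracting \(M^Z\) and the corresponding minor \(\det(M^Z)\) is a one-liner; the following sketch (ours) uses the \(n=3\), \(m=2\) example discussed later in this section:

```python
import numpy as np

# Sketch of Definition 2.7: for Z = (I, J) with ordered row/column
# multi-indexes of equal length, M^Z is the square submatrix of M and
# the associated minor is det(M^Z).
def minor(M, I, J):
    """det(M^Z) for Z = (I, J); I and J are ordered index tuples."""
    return np.linalg.det(M[np.ix_(I, J)])

M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])          # a 3x2 matrix, as in the n = 3, m = 2 example

# the three order-2 minors, all with column index J = (0, 1):
m12 = minor(M, (0, 1), (0, 1))      # 1*4 - 2*3 = -2
m13 = minor(M, (0, 2), (0, 1))      # 1*6 - 2*5 = -4
m23 = minor(M, (1, 2), (0, 1))      # 3*6 - 4*5 = -2

assert np.isclose(m12, -2) and np.isclose(m13, -4) and np.isclose(m23, -2)
```
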
We are finally in position to state [5, Proposition 3.8].
Proposition 2.8
Let \(\{X_1, \ldots , X_N\}\subset {\mathbb {R}}^{n\times m}\) induce a \(T_N\) configuration as in (2.7) with defining vector \((\lambda , \mu )\). Define the vectors \(t^1,\dots ,t^N\) as in (2.15) and for every \(Z\in {\mathcal {A}}_r\) of order \(1\le r \le \min \{n,m\}\) define the minor \({\mathcal {S}} : {\mathbb {R}}^{n\times m} \ni X \mapsto {\mathcal {S}} (X) \doteq \det (X^Z)\in {\mathbb {R}}\). Then
and \(A^\mu _Z \lambda = 0\).
It is clear that the previous result extends (2.16) and (2.17) to all the minors.
2.5 Preliminary results: inclusion set associated to polyconvex functions
As in [5, Section 4], we write a necessary condition for a set of distinct matrices \(A_i \in {\mathbb {R}}^{2n\times m}\)
to belong to a set of the form
for some strictly polyconvex function \(f:{\mathbb {R}}^{n\times m}\rightarrow {\mathbb {R}}\). First, let us introduce the following notation, which is the same as in [5]. Let \(f:{\mathbb {R}}^{n\times m}\rightarrow {\mathbb {R}}\) be a strictly polyconvex function of the form \(f(X) =g(\Phi (X))\), where \(g \in C^1({\mathbb {R}}^k)\) is strictly convex and \(\Phi \) is the vector of all the subdeterminants of X, i.e.
and
for some fixed (but arbitrary) ordering of all the elements \(Z\in {\mathcal {A}}_s\). Variables of \({\mathbb {R}}^k\), and hence partial derivatives in \({\mathbb {R}}^k\), are labeled using the ordering induced by \(\Phi \). The first nm partial derivatives, corresponding in \(\Phi (X)\) to X, are collected in a \(n\times m\) matrix denoted with \(D_Xg\). The jth partial derivative, \(mn + 1\le j \le k\), is instead denoted by \(\partial _Zg\), where Z is the element of \({\mathcal {A}}_s\) corresponding to the jth position of \(\Phi \). Let us make an example in low dimension: if \(n = 3,m = 2\), then \(k = 9\), and we choose the ordering of \(\Phi \) to be
In this case, \(y \in {\mathbb {R}}^k\) has coordinates
The partial derivatives with respect to the first 6 variables are collected in the \(3\times 2\) matrix:
The partial derivatives with respect to the remaining variables are denoted as \(\partial _{(12,12)}g\), \(\partial _{(13,12)}g\) and \(\partial _{(23,12)}g\), i.e. following the ordering induced by \(\Phi \). Finally, for a matrix \(A \in {\mathbb {R}}^{r\times r}\), we denote with \({{\,\mathrm{cof}\,}}(A)\) the matrix defined as
where \(M_{ji}(A)\) denotes the \((r-1)\times (r-1)\) submatrix of A obtained by eliminating from A the jth row and the ith column. In particular, the following relation holds
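With the row/column convention above (delete row j and column i), and assuming the usual sign factor \((-1)^{i+j}\), \({{\,\mathrm{cof}\,}}(A)\) coincides with the adjugate of A, so that \({{\,\mathrm{cof}\,}}(A)\,A = \det (A)\,{{\,\mathrm{id}\,}}\). A numerical check of this identity (our own sketch, under the stated sign assumption):

```python
import numpy as np

# Assuming cof(A)_{ij} = (-1)^{i+j} det(M_{ji}(A)) with M_{ji}(A) obtained
# by deleting row j and column i, cof(A) is the adjugate, and
# cof(A) A = det(A) Id.
def cof(A):
    r = A.shape[0]
    C = np.empty((r, r))
    for i in range(r):
        for j in range(r):
            sub = np.delete(np.delete(A, j, axis=0), i, axis=1)  # drop row j, col i
            C[i, j] = (-1) ** (i + j) * np.linalg.det(sub)
    return C

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

assert np.allclose(cof(A) @ A, np.linalg.det(A) * np.eye(3))
```
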
We are ready to state the following:
Proposition 2.9
Let \(f:{\mathbb {R}}^{n\times m}\rightarrow {\mathbb {R}}\) be a strictly polyconvex function of the form \(f(X) =g(\Phi (X))\), where \(g \in C^1\) is strictly convex and \(\Phi \) is the vector of all the subdeterminants of X, i.e.
and
for some fixed (but arbitrary) ordering of all the elements \(Z\in {\mathcal {A}}_s\). If \(A_i \in K'_f\) and \(A_i \ne A_j\) for \(i \ne j\), then \(X_i\), \(Y_i = D f(X_i)\) and \(c_i = f (X_i)\) fulfill the following inequalities for every \(i\ne j\):
where \(d^i_Z = \partial _Zg(\Phi (X_i))\).
This result was proved in [5, Proposition 4.1]. We now introduce the set
Notice that \(C'_f\) is the projection of \(C_f\) on the first \(2n\times m\) coordinates. We immediately obtain from the previous proposition and the definition of \(C'_f\) that
if and only if there exist numbers \(\beta _i > 0, \forall i\), such that
The expressions in (2.25) can be simplified when the matrices \(X_1, \ldots , X_N\) induce a \(T_N\) configuration:
Corollary 2.10
Let f be a strictly polyconvex function and let \(A_1, \ldots , A_N\) be distinct elements of \(K'_f\) with the additional property that \(\{X_1, \ldots , X_N\}\) induces a \(T_N\) configuration of the form (2.7) with defining vector \((\mu , \lambda )\). Then,
where the \(t^i\)’s are given by (2.15).
This corresponds to [5, Corollary 4.3], and concludes the list of preliminary results needed for the results of this paper.
3 Positive case: proof of the main results
Before checking whether the inclusion set \(C_f\) contains \(T_N\) or \(T'_N\) configurations, we need to exclude more basic building blocks for wild solutions, such as rank-one connections or, as in this case, \(\Lambda _{dc}\)-connections in \(C_f\). It is rather easy to see, compare for instance [24], that if f is strictly polyconvex, then for \(A,B \in K_f\) it is not possible to have
Indeed, the same result holds even considering \(K'_f\). To prove this, it is sufficient to observe that if \(X,Y \in {\mathbb {R}}^{n\times m}\) are rank-one connected, i.e. for some \(u \in {\mathbb {S}}^{m-1}\)
and
then
where \(\{u_1,\dots , u_m\}\) is an orthonormal basis of \({\mathbb {R}}^m\) with \(u_1 = u\). On the other hand, since f is strictly polyconvex, it is easy to see that
if \({{\,\mathrm{rank}\,}}(X-Y) = 1\). The first result of this section shows that the same rigidity holds for \(C_f\), provided f is positive.
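The algebraic fact underlying this computation is that every minor is affine along rank-one lines \(t \mapsto Y + t\,u\otimes v\), since all minors of \(u\otimes v\) of order at least 2 vanish; hence \(g(X,\det X)\) inherits (strict) convexity along rank-one directions from the (strict) convexity of g. A quick check of the affinity of \(\det\) in the \(2\times 2\) case (our own illustration):

```python
import numpy as np

# Along a rank-one line t -> Y + t u (x) v, det is affine in t, because
# det(Y + t B) = det(Y) + t * (mixed term) + t^2 det(B) and det(B) = 0
# for rank-one B.  We test affinity via a vanishing second difference.
rng = np.random.default_rng(2)
Y = rng.standard_normal((2, 2))
B = np.outer(rng.standard_normal(2), rng.standard_normal(2))  # rank one

d = lambda t: np.linalg.det(Y + t * B)
second_diff = d(0.0) - 2.0 * d(1.0) + d(2.0)  # zero for an affine function
assert abs(second_diff) < 1e-10
```
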
Proposition 3.1
Let f be strictly polyconvex. If
and \(f(X) \ge 0, f(X') \ge 0\), then
Proof
Suppose by contradiction that there exist
with \(c\doteq f(X) \ge 0, c'\doteq f(X') \ge 0\), and there is a vector \(\xi \in {\mathbb {R}}^m\) with \(\Vert \xi \Vert = 1\) such that for every \(v\perp \xi \),
Now we can use the so-called Matrix Determinant Lemma 3.2 to see that the expressions found in (2.25) evaluated at
yield the following inequalities:
Moreover, by assumption \((Z' - Z)\xi = 0\), i.e.
Thus, using \((Y' - Y)\xi = 0\),
that yields, since \(\Vert \xi \Vert = 1\),
In the previous lines we have used the fact that
and, since C is of rank one with \(Cv = 0, \forall v\perp \xi \),
Exploiting (3.5), we rewrite (3.3) as
and (3.4) as
From (3.6), we infer
and from (3.7)
Since \(c \ge 0\) and \(c' \ge 0\), we get a contradiction. \(\square \)
Let us recall the Matrix Determinant Lemma used in the proof of the last proposition:
Lemma 3.2
Let A, B be matrices in \({\mathbb {R}}^{m\times m}\), and let \({{\,\mathrm{rank}\,}}(B) \le 1\). Then,
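In the rank-one case \(B = u\otimes v\) and for invertible A, the lemma reduces to the familiar identity \(\det (A + uv^T) = \det (A)\,(1 + v^T A^{-1} u)\); since the displayed formula of Lemma 3.2 is stated for a general sign/transpose convention of \({{\,\mathrm{cof}\,}}\), we verify this standard form numerically (our own sketch):

```python
import numpy as np

# Numerical check of the matrix determinant lemma for rank-one B = u (x) v
# and invertible A: det(A + u v^T) = det(A) * (1 + v^T A^{-1} u).
rng = np.random.default_rng(3)
m = 4
A = rng.standard_normal((m, m)) + 4.0 * np.eye(m)   # comfortably invertible
u = rng.standard_normal(m)
v = rng.standard_normal(m)

lhs = np.linalg.det(A + np.outer(u, v))
rhs = np.linalg.det(A) * (1.0 + v @ np.linalg.inv(A) @ u)
assert np.isclose(lhs, rhs)
```
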
Now that we have excluded \(\Lambda _{dc}\)-connections, we can ask ourselves the same question concerning \(T'_N\) configurations. In particular, we want to prove the main theorem of this part of the paper:
Theorem 3.3
If \(f\in C^1 ({\mathbb {R}}^{n\times m})\) is a strictly polyconvex function, then \(C_f\) does not contain any set \(\{A_1, \ldots , A_N\} \subset {\mathbb {R}}^{(2n + m)\times m}\) which induces a \(T_N'\) configuration, provided that \(f(X_1) \ge 0,\dots , f(X_N) \ge 0\), if
At the end of the section we will show the following
Corollary 3.4
If \(f\in C^1 ({\mathbb {R}}^{n\times m} )\) is strictly polyconvex, then \(K_f\) does not contain any set \(\{A_1, \ldots , A_N\}\) which induces a \(T_N'\) configuration.
Let us fix the notation. We will always consider \(T'_N\) configurations of the following form:
with:
and we denote with \(n_i \in {\mathbb {S}}^{m-1}\) the vectors such that
3.1 Idea of the proof
Before proving the theorem, let us give an idea of the key steps of the proof. First of all, in Lemma 3.5, we will see that without loss of generality we can choose \(P = 0\). As already explained in Sect. 2.3, we want to prove that the system of inequalities
cannot be fulfilled at the same time. This gives a contradiction with Corollary 2.10. In particular, we show that for the index \(\sigma \) such that \(\beta _{\sigma } = \min _j\beta _j \),
To do so, we prove that the quantities
are equal to 0 for every i. Then, choosing \(\sigma \) as above and exploiting the positivity of \(c_j, \forall j\), we estimate
This will then yield the required contradiction. In order to show \(\mu _i = 0,\forall 1\le i \le N\), we consider N matrices \(M_i\) defined as
where \(\mu > 1\) is part of the defining vector of the \(T_N\) configuration \(\{X_1,\dots ,X_N\}\), compare (2.13), and \(\alpha _j\) are real numbers. We prove that for numbers \(\xi _j > 0\), a subset \({\mathcal {I}}_i \subset \{\xi _1\mu _1,\dots , \xi _N\mu _N\}\) is made of generalized eigenvalues of \(M_i\), see (3.24). This is achieved thanks to Lemma 3.6. Since \(M_i\) is trace-free, as can be seen from the structure of \(C_j\) and \(D_j\), we will find N relations of the form
These relations can be read as saying that a specific vector lies in the kernel of an \(N\times N\) matrix W. Proving that W has trivial kernel will yield \(\xi _j\mu _j = 0,\forall j\), and thus \(\mu _j = 0\), since \(\xi _j > 0\). The proof of the invertibility of W is the content of the final Lemma 3.10.
3.2 Proof of Theorem 3.3
Lemma 3.5
If f is a strictly polyconvex function such that \(A_i \in C_f\), \(\forall 1\le i \le N\) and \(f(X_i) \ge 0, \forall 1\le i\le N\), then there exists another strictly polyconvex function F such that the \(T_N'\) configuration \(B_i\) defined as
satisfies \(B_i \in C_F,\) for every \(1\le i \le N\) and moreover \(F(X_i - P) \ge 0, \forall i\).
Proof
Simply define the new polyconvex function \(F(X)\doteq f(X + P)\). Clearly the newly defined family \(\{B_1,\dots , B_N\}\) still induces a \(T'_N\) configuration, and it is straightforward that \(B_i \in C_F\). Moreover, this does not affect positivity, in the sense that \(F(X_i - P) = f(X_i - P + P) = f(X_i) \ge 0\). \(\square \)
Lemma 3.6
Suppose \(A_i \in C_f\), \(\forall i\), and \(P = 0\). Then, for every \(i \in \{1,\dots , N\}\):
where \(t^i\) is the vector defined in (2.15).
Proof
We need to compute the following sums:
Let us start computing the sum for \(i = 1\), \(\sum _j\lambda _jX_j^TY_j.\) First, notice that
since, by Lemma 2.6 or (2.21),
We rewrite it in the following way:
where we collected in the coefficients \(g_{ij}\) the following quantities:
Using (2.14), we have, if \(i \ne j\):
On the other hand, again using (2.14),
Using the equalities \(\sum _\ell C_\ell = 0 = \sum _\ell D_\ell \), we find \(\sum _{i,j}C_i^TD_j = 0\), and so \(\sum _{i\ne j}C_i^TD_j = -\sum _iC_i^TD_i\). Hence, (3.14) becomes
We just proved that
Recall the definition of \(t^i\), namely
By the previous computation (\(i = 1\)), it is convenient to rewrite (3.13) using (3.15) as
In the previous equation, we have used the equality
that easily follows from Lemma 2.6. Once again, let us express the sum up to \(i - 1\) in the following way:
A combinatorial argument analogous to the one in the previous case gives
Now
and so
Hence
We rewrite (3.16) as
Now we substitute (3.18) in the definition (3.9) of \(Z_i\) in order to compute \(E_i\):
Multiply the previous expression by \(n_i\) and recall that \(E_in_i = 0\) to find:
Now notice that, since \(D_in_i = 0\),
Thus (3.19) becomes
Now we need to compute
and
Using this computation, (3.20) reads as:
Exploiting the definition of \(t_j^i\), we see that we can rewrite
and
Thus (3.21) becomes
Since \(C_iv = 0, \forall v\perp n_i\), we have \(C_i^TY_in_i = \langle C_i,Y_i\rangle n_i\), and we finally obtain the desired equalities:
\(\square \)
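One algebraic step used in the proof above — that \(\sum _\ell C_\ell = 0 = \sum _\ell D_\ell \) forces \(\sum _{i\ne j}C_i^TD_j = -\sum _i C_i^TD_i\) — follows from bilinearity alone, since \(\sum _{i,j}C_i^TD_j = (\sum _iC_i)^T(\sum _jD_j) = 0\). A minimal numerical sanity check (the sizes below are illustrative, not tied to the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 5, 2  # illustrative sizes

# Build C_1,...,C_N and D_1,...,D_N with zero sum: the last one is minus the sum of the others.
C = list(rng.standard_normal((N - 1, m, m)))
C.append(-sum(C))
D = list(rng.standard_normal((N - 1, m, m)))
D.append(-sum(D))

off_diag = sum(C[i].T @ D[j] for i in range(N) for j in range(N) if i != j)
diag = sum(C[i].T @ D[i] for i in range(N))

# The full double sum vanishes, so the off-diagonal part equals minus the diagonal part.
assert np.allclose(off_diag, -diag)
```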
We are finally in a position to prove the main Theorem.
Proof of Theorem 3.3
Assume by contradiction the existence of a \(T_N'\) configuration induced by matrices \(\{A_1, \ldots , A_N\}\) of the form (3.8) which belong to the inclusion set \(C_f\) of some strictly polyconvex function \(f\in C^1 ({\mathbb {R}}^{n\times m})\), with \(f(X_i) \ge 0\) for every i. By Lemma 3.5, we can assume without loss of generality that
Using Lemma 3.6, we find
Define \(\alpha _{j}\doteq k_j(k_{j} - 1)\lambda _j > 0\), and
for \(i \in \{1,\dots , N\}\). Also set
Then, (3.22) can be rewritten as
We define \(n_{s}\doteq n_{s - N}\), for \(s \in \{N + 1,\dots , 2N\}\). As explained in Sect. 3.1, the idea is to show that a subset of the vectors \(n_j\) are generalized eigenvectors and a subset of \(\xi _j \mu _j\) are generalized eigenvalues of \(M_i\). In particular, for every \(i \in \{1,\dots ,N\}\), we want to show the following equalities:
where \(v_{i,a} \in {{\,\mathrm{span}\,}}\{n_i,\dots , n_{i + a  1}\}\). From now on, we fix \(i \in \{1,\dots , N\}\). To prove (3.24), first we rewrite
and then we use (3.23) to obtain
To conclude the proof of (3.24), we only need to show that
To do so, we compute \(M_i  M_{i + a}\). Let us start from the case \(1\le i + a \le N\):
On the other hand, if \(N + 1 \le i + a \le i + N - 1\), then
Now the crucial observation is that, due to the fact that \(C_jv = 0\) for every \(v\perp n_j\), the image of \(C_j^TD_j\) is contained in the line \({{\,\mathrm{span}\,}}(n_j)\), for every \(j \in \{1,\dots , N\}\). Therefore, the previous computations prove (3.26) and hence (3.24). Now we introduce
We can extract a basis for \({{\,\mathrm{span}\,}}(V_i)\) in the following way. First, choose indexes
Then, consider the basis \({\mathcal {B}}_i\doteq \{n_k: k \in {\overline{S}}_i\}\) for \({{\,\mathrm{span}\,}}(V_i)\). Since
we have \(\#{\overline{S}}_i = C \le \min \{m,N\}, \forall i\). Indexes in \({{\overline{S}}}_i\) lie in the set \(\{1,\dots , 2N\}\). For technical reasons, we also need to consider the modulo N counterpart of \({{\overline{S}}}_i\), that is
In \(S_i\), consider furthermore \(S_i'\doteq S_i \cap \{i,\dots , N\}\) and \(S_i''\doteq S_i\cap \{1,\dots ,i - 1\}\). If necessary, complete \({\mathcal {B}}_i\) to a basis of \({\mathbb {R}}^m\) with elements \(\gamma _j\) orthogonal to those of \({\mathcal {B}}_i\). Note that, since \({{\,\mathrm{Im}\,}}(C_i^TD_i) \subset {{\,\mathrm{span}\,}}(n_i)\), we have \({{\,\mathrm{Im}\,}}(M_i)\subset {{\,\mathrm{span}\,}}\{n_1,\dots , n_N\}\). Then, the matrix associated to \(M_i\) with respect to \({\mathcal {B}}_i\) is
We denoted with \({\mathbf {0}}_{c,d}\) the zero matrix with c rows and d columns, with \({\mathbf {T}}\) the \(C\times (m - C)\) matrix of the coefficients of \(M_i\gamma _j\) with respect to \(\{n_s:s\in {\overline{S}}_i\}\), and with \(\mathbf {*}\) numbers we are not interested in computing explicitly. Finally, we have chosen an enumeration \(s_1< s_2<\dots< s_\ell< \dots < s_C\) of the elements of \({\overline{S}}_i\), and we have defined
The triangular form of the matrix representing \(M_i\) is due precisely to (3.24). Now, \({{\,\mathrm{tr}\,}}(M_i) = 0, \forall i\), since \(C_i^TD_i\) is trace-free for every i. This implies that the matrix in (3.29) must be trace-free, hence:
We have thus reduced the problem to the following simple Linear Algebra statement: we wish to show that, if W is the \(N\times N\) matrix defined as
then, \(Wx = 0 \Rightarrow x = 0\). By (3.30), the vector \(x \in {\mathbb {R}}^{N}\) defined as \(x_j\doteq \xi _j\mu _j, \forall 1\le j \le N\), satisfies \(Wx = 0\); thus, if the statement is true, we get \(\xi _j \mu _j = 0, \forall 1\le j \le N\), and since \(\xi _j > 0\), also \(\mu _j = 0,\forall 1 \le j \le N\). By (3.12), this is sufficient to reach a contradiction. Therefore, we only need to show that \(Wx = 0 \Rightarrow x = 0\). This proof will be given in Lemma 3.10. \(\square \)
Before giving the proof of the final Lemma, let us give some examples of possible matrices W arising from the previous construction. For the sake of illustration, we take N as small as possible, i.e. \(N = 4\).
Example 3.7
Consider the case in which \(C = 2\). This corresponds, for instance, to the case \(m = 2\). Then, by Proposition 3.1 and (3.27), the only possible form of W is
Let \(W_i\) be the ith row of W. We notice that for \(i = 1,2,3\), \(W_{i + 1}\) differs from \(W_i\) by exactly two elements, while \(W_4\) differs from \(W_1\) by more than two elements; it does, however, differ from \(\mu W_1\) by exactly two. Hence we rewrite the system \(Wx = 0\) equivalently as \((W_i - W_{i + 1},x) = 0\), \((W_4 - \mu W_1,x) = 0\):
for some function \(h: \{1,\dots , 4\} \rightarrow \{1,\dots , 4\}\). Since \(\mu > 1\), this immediately implies \(x_i = 0, \forall i\).
Example 3.8
Consider the case in which \(C = 4\), corresponding to \(n_1,n_2,n_3,n_4\) linearly independent. Then,
As in the previous example, for \(i = 1,2,3\), \(W_{i + 1}\) differs from \(W_i\) by exactly one element, while \(W_4\) does the same with \(\mu W_1\). Thus, as before, we rewrite the system \(Wx = 0\) equivalently as \((W_i - W_{i + 1},x) = 0\), \((W_4 - \mu W_1,x) = 0\):
In this case, \(h(i) = i, \forall i \in \{1,\dots , 4\}\). Clearly, also in this case, \(\mu > 1\) implies \(x_i = 0, \forall i\).
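The kernel argument of these examples can be tested numerically on a toy matrix. The matrix below is purely illustrative (an assumption, not the actual W of the construction, whose entries depend on the configuration): it only reproduces the pattern just described, namely consecutive rows differing in a single entry, with \(W_4\) to be compared with \(\mu W_1\).

```python
import numpy as np

def toy_W(mu: float) -> np.ndarray:
    # Hypothetical 4x4 matrix mimicking the pattern of Example 3.8; not from the paper.
    return np.array([
        [1.0, 1.0, 1.0, 1.0],
        [mu,  1.0, 1.0, 1.0],
        [mu,  mu,  1.0, 1.0],
        [mu,  mu,  mu,  1.0],
    ])

W = toy_W(1.5)  # any mu > 1
# Row differences reproduce the equations (W_{i+1} - W_i, x) = 0 and (W_4 - mu W_1, x) = 0:
# each difference has a single nonzero coefficient (mu - 1 or 1 - mu), forcing x_i = 0.
assert np.linalg.matrix_rank(W) == 4           # trivial kernel when mu > 1
assert np.linalg.matrix_rank(toy_W(1.0)) == 1  # degenerate when mu = 1: all rows coincide
```

The same computation with \(\mu = 1\) shows why the hypothesis \(\mu > 1\) is essential: all the difference equations then trivialize.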
Finally, let us show a less symmetric example:
Example 3.9
Consider the case in which \(C = 3\). Then, a possible matrix is:
First, let us explain why this matrix can actually appear in the proof of the previous Theorem. Consider the first two lines:
The fact that \(W_{13} = 0\) means that \(n_3 \in {{\,\mathrm{span}\,}}(n_1,n_2)\), since \(3 \notin S_1\). On the other hand, Proposition 3.1 ensures that \(n_{3}\) is not a multiple of \(n_2\), hence \(3 \in S_2\), and \(W_{23} = 1 \ne 0.\) For this reason, the matrix
would, for instance, not have been admissible. Now, in order to prove \(Wx = 0 \Rightarrow x = 0\), we argue as in the previous examples, noticing that for \(i = 1,2,3\), \(W_{i + 1}\) differs from \(W_i\) by at most two elements, while \(W_4\) must be compared with \(\mu W_1\). Thus we write \(W_i - W_{i + 1}\), \(W_4 - \mu W_1\):
It is an elementary computation to show that \(x_i = 0, \forall i\).
Even though the examples above are too simple to show the usefulness of the function h such that \(x_i = a_ix_{h(i)}\), it will be crucial in the proof of the Lemma.
Lemma 3.10
Let W be the matrix defined in the proof of Theorem 3.3. Then, \({{\,\mathrm{Ker}\,}}(W) = \{0\}\).
Proof
Throughout the proof, we consider a fixed vector \(x \in {\mathbb {R}}^N\) such that \(Wx = 0\). The proof, partially suggested by the previous examples, consists of the following steps. First, we show that the rows \(W_i\) and \(W_{i + 1}\) of W (if \(i = N\), we compare \(W_N\) with \(\mu W_1\)) differ by at most two elements, one of which always corresponds to the variable \(x_i\). This immediately yields the existence of a function \(h: \{1,\dots , N\} \rightarrow \{1,\dots , N\}\) such that \(x_i = a_ix_{h(i)}\). We will then use this, together with the crucial fact that \(\mu > 1\), to conclude that \(x_i = 0, \forall i\). Let us state the following claims, and see how the proof of the present Lemma follows from them. We will freely use the notation introduced at the end of the proof of Theorem 3.3.
Claim 1: Let \(i \in \{1,\dots , N\}\). Then \({\overline{S}}_i\) differs from \({\overline{S}}_{i + 1}\) (if \(i = N\), \({\overline{S}}_{i + 1}= {\overline{S}}_1\)) by at most two elements, in the sense that
contains at most 2 elements. Moreover, if \({\overline{S}}_i \Delta {\overline{S}}_{i + 1} \ne \emptyset \), then \({\overline{S}}_i \Delta {\overline{S}}_{i + 1} = \{i,I(i)\}\), with \(i \in {\overline{S}}_{i}\setminus {\overline{S}}_{i +1}\), and \(I(i) \in {\overline{S}}_{i + 1}\setminus {\overline{S}}_i\).
Claim 2: Let \(i \in \{1,\dots , N - 1\}\). The pairs of rows \(W_i\), \(W_{i + 1}\) and \(\mu W_1\), \(W_N\) differ by at most two elements, in the sense that if \(W_i = (W_{i1},\dots ,W_{iN})\) and \(W_{i + 1} = (W_{(i+1)1},\dots ,W_{(i+1)N})\), then there are at most two indexes \(j_1,j_2\) such that \(W_{ij_1} - W_{(i + 1)j_1} \ne 0\) and \(W_{ij_2} - W_{(i + 1)j_2} \ne 0\) (and analogously for \(\mu W_1\) and \(W_N\)).
Finally, with this claim at hand, we are going to prove
Claim 3: There exists a function \(h: \{1,\dots , N\}\rightarrow \{1,\dots , N\}\) and numbers \(a_i, i \in \{1,\dots , N\}\), such that
with the property
Let us show how the proof of the Lemma follows from Claim 3, and postpone the proofs of the claims. Fix \(i \in \{1,\dots , N\}\) and use (3.31) recursively to find
where \(h^{(n)}\) denotes the function obtained by applying h to itself n times. We also use the notation \(h^{(0)}\) to denote the identity function: \(h^{(0)}(i) = i\), \(\forall i \in \{1,\dots ,N\}\). By the properties of \(a_{j}\), we have, \(\forall r \in \{1,\dots , n\}\),
Fix \(k \in {{\mathbb {N}}}\), and let \(r \in \{k + 1, \dots , k + N + 1\}\). Then \( h^{(r)}(i) > h^{(r - 1)}(i)\) can occur at most N times in this range, since otherwise we would find
and this is impossible since we would have \(N + 1\) distinct elements in the set \(\{1,\dots , N\}\). Now clearly this observation implies that for every fixed \(l \in {{\mathbb {N}}}\), there exists \(s \in {{\mathbb {N}}}\) such that
This can only happen if \(x_i = 0\). Since i is arbitrary, the conclusion follows. \(\square \)
Let us now turn to the proof of the claims.
Proof of claim 1:
To prove the claim, we need to use the definition of \({\overline{S}}_i\), given in (3.27). To build \({\overline{S}}_i\), we consider the ordered set \(\{n_i,n_{i + 1},\dots , n_{i + N - 1}\}\) and select from it a basis of \({{\,\mathrm{span}\,}}\{n_1,\dots , n_N\}\), starting from \(n_i\) and then, at each step \(1 \le k \le N - 1\), deciding whether to insert the vector \(n_{i + k}\) in our collection according to whether it is linearly independent of the previous ones. Recall also that \(S_i\) is the modulo N version of \({\overline{S}}_i\), see (3.28), and that we define \(n_j \doteq n_{j - N}\), for \(j \in \{N + 1,\dots , 2N\}\). Now fix \(i \in \{1,\dots , N\}\) and consider \(S_i\). If \(S_i = \{1,\dots , N\}\), then \(\#S_i = N\), thus \(S_j = \{1,\dots , N\}, \forall 1\le j \le N\), and the claim holds. Otherwise, let \(i + 1 < I = I(i) \le i + N - 1\) be the first element of \((\overline{S_i})^c\). There are two cases:

(1)
\(n_{I} \in {{\,\mathrm{span}\,}}(n_i,\dots , n_{I - 1}) \setminus {{\,\mathrm{span}\,}}(n_{i + 1},\dots , n_{I - 1})\);

(2)
\(n_{I} \in {{\,\mathrm{span}\,}}(n_{i + 1},\dots , n_{I - 1})\).
At the same time, consider what happens for \({\overline{S}}_{i + 1}\): the span in the \((i + 1)\)th case starts with one vector fewer than in the ith case, simply because the collection of indexes in \({\overline{S}}_{i + 1}\) starts from \(n_{i + 1}\). Hence, since I is the first missing index in \({\overline{S}}_i\), I is also the first possible missing index for \({\overline{S}}_{i + 1}\). Therefore, consider the first case
This implies that \(I \in {\overline{S}}_{i + 1}\). Moreover, we are now adding \(n_I\) to the set of vectors \(n_{i+1},\dots , n_{I - 1}\), and \(n_{I} \in {{\,\mathrm{span}\,}}(n_i,\dots , n_{I - 1})\setminus {{\,\mathrm{span}\,}}(n_{i + 1},\dots , n_{I - 1})\), hence \(n_I\) adds to the previous vectors the component relative to \(n_i\), in the sense that
This moreover implies that \(j \in {\overline{S}}_i \Leftrightarrow j \in {\overline{S}}_{i + 1}\), \(\forall I \le j < N + i - 1\). Since \(n_i \in {{\,\mathrm{span}\,}}(n_{i + 1},\dots ,n_I)\), \(i \notin {\overline{S}}_{i + 1}\). Thus \({\overline{S}}_i\) and \({\overline{S}}_{i + 1}\) differ by at most two elements, and we have \(i \in {\overline{S}}_{i}\setminus {\overline{S}}_{i + 1}\) and \(I = I(i) \in {\overline{S}}_{i + 1}\setminus {\overline{S}}_i\). This concludes the case
If instead \(n_{I} \in {{\,\mathrm{span}\,}}(n_{i + 1},\dots , n_{I - 1})\), then \(I \notin {\overline{S}}_{i + 1}\), and we can iterate this reasoning, looking for the next index \(I'\) such that \({I'}\notin {\overline{S}}_{i}\) and dividing again into the two cases above. Clearly, for the indexes \(i + 1 \le j < I'\), we have \(j \in {\overline{S}}_{i + 1}\) and \(j \in {\overline{S}}_i\). Either this iteration enters case (1) of the previous subdivision for some element \(I \notin {\overline{S}}_i\), or we conclude \({\overline{S}}_i = {\overline{S}}_{i + 1}\). This concludes the proof of the claim. \(\square \)
Proof of claim 2:
Note that the nonzero elements of \(W_i\) are found in positions corresponding to elements of \(S_i\). Now fix \(i \in \{1,\dots , N - 1\}\) and consider \(W_i\) and \(W_{i + 1}\). If \(S_i = S_{i + 1}\), then \(W_{ij} = 0 \Leftrightarrow W_{(i + 1)j} = 0\). Moreover, we introduce the modulo N counterpart of the number I(i) found in Claim 1, i.e. \(I'(i) = I(i)\) if \(I(i) \in \{1,\dots ,N\}\), and \(I'(i) = I(i) - N\) if \(I(i) \in \{N + 1,\dots , 2N\}\). Thus, using the definition of W, we deduce, if \(S_i = S_{i + 1}\),
and the claim holds in this case. Finally, if \(S_i \Delta S_{i +1} = \{i,I'(i)\}\), then:
This concludes the proof of the claim if \(i \in \{1,\dots , N - 1\}\). If \(i = N\), then we need to compare \(W_N\) with \(\mu W_1\), and we obtain two cases, in analogy with the previous situation:
and
\(\square \)
Proof of Claim 3:
Fix \(i \in \{1,\dots , N\}\). We consider the equations given by
If we consider \(i \in \{1,\dots ,N - 1\}\), we see from (3.32) and (3.33) that
and from (3.34) and (3.35) we infer
From these equations we see that (3.31) holds with the choice \(h(i)\doteq I'(i)\), when i is such that \(S_i\Delta S_{i + 1} \ne \emptyset \), and \(h(i)\doteq i\) otherwise. \(\square \)
3.3 Proof of Corollary 3.4
We end this section by showing that Theorem 3.3 implies Corollary 3.4. Assume by contradiction that there exists a family of matrices
inducing a \(T'_N\) configuration of the form (3.8). We show that then there exists another \(T'_N\) configuration \(\{B_1,\dots , B_N\}\) such that \(B_i \in K_F \subset C_F, \forall 1\le i \le N\) for some strictly polyconvex F with
if
This is a contradiction with Theorem 3.3. To accomplish this, it is sufficient to define \(F(X)\doteq f(X) - \min _{i}f(X_i)\). This function is clearly strictly polyconvex, since f is. Moreover, we define
In this way, the family \(\{B_1,\dots ,B_N\}\) still induces a \(T_N'\) configuration. Moreover, \(B_i \in K_F, \; \forall 1\le i \le N\). To see this, it is sufficient to notice that, since \(A_i \in K_f\),
and
This finishes the proof.
4 Sign-changing case: the counterexample
In this section, we construct a counterexample to regularity in the case in which the hypothesis of nonnegativity on f is dropped. Let us explain the strategy, which follows that of [24]. First of all, we consider the following equivalent formulation of the differential inclusion of div-curl type considered in the previous sections. Indeed, due to the fact that, for \(a \in {{\,\mathrm{Lip}\,}}({\mathbb {R}}^2,{\mathbb {R}}^2)\),
if
one easily sees that (2.3) holds if and only if
in the weak sense. Since \(\Omega \) is convex, the latter allows us to say that (2.3) holds for \(u \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^2)\) if and only if there exist \(w_1,w_2:\Omega \rightarrow {\mathbb {R}}^2\) such that
solves a.e. in \(\Omega \):
From now on, we will always use this reformulation of the problem. Let us also introduce
In order to construct the counterexample, we want to find a set of nonrigid matrices \(\{A_1,A_2,A_3,A_4,A_5\}\), \(A_i \in {\mathbb {R}}^{6\times 2}, \forall i\), satisfying
Roughly, nonrigidity means that there exists a nonaffine solution of the problem
see Lemma 4.3. The integrand f is of the form
for some convex and smooth \(g: {\mathbb {R}}^5 \rightarrow {\mathbb {R}}\) and
is the area function. As in [24], f is not fixed from the beginning, but rather becomes another unknown of the problem. In particular, in order to find f, it is sufficient for the following condition to be fulfilled:
Condition 1
There exist \(2\times 2\) matrices \(\{X_1,\dots ,X_5\}\), \(\{Y_1,\dots , Y_5\}\), real numbers \(c_1,\dots , c_5, d_1,\dots , d_5\) and positive integers \(\beta _1,\dots , \beta _5\) such that, for \(Q_{ij}\doteq \displaystyle c_i - c_j + d_i\det (X_i - X_j) + \frac{1}{\beta _i}\langle X_i - X_j ,Y_iJ\rangle \), one has
If this condition is satisfied, then one has
The construction of f is the content of Lemma 4.2. Moreover, we will be able to build f in such a way that for some large \(R > 0\),
and constants \(M,L > 0\). The nonrigidity of \(A_1,\dots ,A_5\) stems from the fact that we choose \(\{X_1,\dots ,X_5\}\) to form a large \(T_5\) configuration, in the terminology of [13]. We therefore introduce:
Condition 2
\(\{X_1,X_2,X_3,X_4, X_5\}\) form a large \(T_5\) configuration, i.e. there exist at least three permutations \(\sigma _1,\sigma _2,\sigma _3 : \{1,2,3,4,5\} \rightarrow \{1,2,3,4,5\}\) such that each ordered set \([X_{\sigma _i(1)},X_{\sigma _i(2)},\dots , X_{\sigma _i(5)}]\) is a \(T_5\) configuration and moreover \(\{C_{\sigma _1(i)},C_{\sigma _2(i)},C_{\sigma _3(i)}\}\) are linearly independent for every \(i \in \{1,\dots ,5\}\).
Once this condition is guaranteed, by [13, Theorem 1.2], we find a nonaffine Lipschitz map \(u:\Omega \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^2\) such that
almost everywhere in \(\Omega \). Furthermore, we can choose u with the property that, for any open subset \({\mathcal {V}}\subset \Omega \), Du attains each of these matrices on a subset of \({\mathcal {V}}\) of positive measure. This is proved in Lemma 4.3.
In order to find Lipschitz maps \(w_1,w_2: \Omega \rightarrow {\mathbb {R}}^2\) such that
satisfies
we simply consider \(w_1 = Au + B\), \(w_2 = Cu + D\), for suitable \(2\times 2\) matrices A, B, C, D. We therefore get our last
Condition 3
\(Y_i\) and \(Z_i\) can be chosen of the form
and \(Z_i = X_i^TY_i - \beta _ic_iJ\), where \(c_i = f(X_i)\).
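The quantities appearing in Conditions 1 and 3 can be written down directly. The sketch below uses random placeholder data rather than the explicit values of Sect. 4.1, and assumes that J denotes the standard 90-degree rotation matrix; the paper's sign convention for J may differ.

```python
import numpy as np

# Assumption for illustration: J is the 90-degree rotation; the paper's convention may differ.
J = np.array([[0.0, -1.0],
              [1.0, 0.0]])

def Q(i, j, X, Y, c, d, beta):
    """Condition 1: Q_ij = c_i - c_j + d_i det(X_i - X_j) + (1/beta_i) <X_i - X_j, Y_i J>."""
    E = X[i] - X[j]
    return c[i] - c[j] + d[i] * np.linalg.det(E) + np.sum(E * (Y[i] @ J)) / beta[i]

def Z(i, X, Y, c, beta):
    """Condition 3: Z_i = X_i^T Y_i - beta_i c_i J."""
    return X[i].T @ Y[i] - beta[i] * c[i] * J

# Random placeholder data (NOT the explicit values of Sect. 4.1).
rng = np.random.default_rng(1)
X = rng.standard_normal((5, 2, 2))
Y = rng.standard_normal((5, 2, 2))
c, d = rng.standard_normal(5), rng.standard_normal(5)
beta = np.array([1.0, 2.0, 3.0, 1.0, 2.0])

# Each term of Q_ii involves X_i - X_i or c_i - c_i, so Q_ii vanishes identically.
assert all(abs(Q(i, i, X, Y, c, d, beta)) < 1e-12 for i in range(5))
```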
In Sect. 4.1, we will give an explicit example of values for which Conditions 1, 2 and 3 are fulfilled.
Once this is achieved, we need to extend the energy \(\mathbb {E}_f\) to an energy defined on integral currents of dimension 2 in \({\mathbb {R}}^4\). Some of the results we present in this section for our specific case can easily be generalized to more general polyconvex integrands. Therefore, we defer their proofs to Sect. 5.
In order to extend our polyconvex function f to a geometric functional, we first recall (4.3), i.e.
for \(g: {\mathbb {R}}^{5} \rightarrow {\mathbb {R}}\) convex and smooth, and introduce the convex function \(h: {\mathbb {R}}^5 \rightarrow {\mathbb {R}}\):
We consider the perspective function of h:
It is a standard result in convex analysis that G is convex on \({\mathbb {R}}^5\times {\mathbb {R}}_+\) as soon as h is convex on \({\mathbb {R}}^5\), compare [4, Lemma 2]. Property (4.7) reads as
therefore we also find that the recession function of G is
Hence, G can be extended to the hyperplane \(t = 0\) as
In Lemma 4.4, we will prove that G(z, t) admits a finite, positively 1-homogeneous convex extension \({\mathcal {G}}\) to the whole space \({\mathbb {R}}^6\). We are finally able to define an integrand on the space of 2-vectors of \({\mathbb {R}}^4\), \(\Lambda _2({\mathbb {R}}^4)\). For a more thorough introduction to k-vectors, see Sect. A.1. Recall that
A basis for \(\Lambda _2({\mathbb {R}}^4)\) is given by the six elements \(e_i\wedge e_j, 1\le i < j\le 4\), where \(e_1,e_2,e_3,e_4\) is the canonical basis of \({\mathbb {R}}^4\). Recall moreover that this vector space can be endowed with a scalar product that acts on simple vectors as
where (u, v) denotes as usual the standard scalar product of \({\mathbb {R}}^4\). The integrand
is thus defined as, for \(\tau \in \Lambda _2({\mathbb {R}}^4)\),
Consequently, we define an energy on \({\mathcal {I}}_2({\mathbb {R}}^4)\) as
if \(T = \llbracket E, \vec T, \theta \rrbracket \). For the notation concerning rectifiable currents and graphs, we refer the reader to Sect. A.4. The energy defined in this way satisfies Almgren’s ellipticity condition (A.11), as we will prove in Lemma 4.5. Finally, in Lemma 4.6, we will prove that the current
is stationary for the energy \(\Sigma \). The definition of stationarity for geometric functionals is recalled in Sect. A.5. In (4.11), \(\Gamma _u\) is the graph of u, \(\vec \xi _u\) is its orientation, see (A.5), and \(\theta (y)\) is a multiplicity, defined as \(\theta (x,u(x)) = \beta _i\) if \(x \in \Omega \) is such that
This discussion constitutes the proof of the following:
Theorem 4.1
There exists a smooth and elliptic integrand \(\Psi : \Lambda _2({\mathbb {R}}^4)\rightarrow {\mathbb {R}}\) such that the associated energy \(\Sigma \) admits a stationary point T whose (integer) multiplicities are not constant. Moreover, the rectifiable set supporting T is given by the graph of a Lipschitz map \(u: \Omega \rightarrow {\mathbb {R}}^2\) that fails to be \(C^1\) in any open subset \({\mathcal {V}} \subset \Omega \).
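Concerning the scalar product on \(\Lambda _2({\mathbb {R}}^4)\) recalled above: on simple vectors it is given by the standard Gram-determinant formula \(\langle u_1\wedge u_2, v_1\wedge v_2\rangle = \det \big ((u_i,v_j)\big )_{i,j}\), a Cauchy–Binet identity. Assuming that convention, the following sketch expands wedges in the basis \(e_i\wedge e_j\) and checks the formula numerically:

```python
import numpy as np
from itertools import combinations

PAIRS = list(combinations(range(4), 2))  # the six basis indices (i, j), i < j

def wedge(u, v):
    """Coordinates of u ^ v in the basis e_i ^ e_j, 1 <= i < j <= 4 (the 2x2 minors)."""
    return np.array([u[i] * v[j] - u[j] * v[i] for i, j in PAIRS])

rng = np.random.default_rng(4)
u1, u2, v1, v2 = rng.standard_normal((4, 4))

lhs = wedge(u1, u2) @ wedge(v1, v2)            # scalar product in Lambda_2(R^4)
gram = np.array([[u1 @ v1, u1 @ v2],
                 [u2 @ v1, u2 @ v2]])
assert abs(lhs - np.linalg.det(gram)) < 1e-10  # Cauchy-Binet identity
assert len(wedge(u1, u2)) == 6                 # dim Lambda_2(R^4) = 6
```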
Lemma 4.2
There exists a smooth function \(f: {\mathbb {R}}^{2\times 2}\rightarrow {\mathbb {R}}\) of the form
with \(g:{\mathbb {R}}^5 \rightarrow {\mathbb {R}}\) convex and smooth, such that

(1)
(4.6) is fulfilled;

(2)
\(g(X) = M{\mathcal {A}}(X)  L\) for constants \(M,L > 0\), if \(\Vert X\Vert \ge R\).
Proof
We will roughly follow the strategy of [24, Lemma 3]. At first we construct the function g in several steps. Let \(\{(X_i, Y_i, Z_i, \beta _i)\}_{i=1}^5\) be the set of admissible matrices. For \(\varepsilon >0\), consider for each i the perturbed values
where \(a(X,d) = \sqrt{1+ \Vert X\Vert ^2 + d^2}\) and \({\mathcal {A}}(X)= a(X,\det (X))\), as defined in (4.4). Furthermore, we introduce the perturbed matrix
Thanks to the strict inequality in (4.5), we can fix \(\varepsilon , \sigma >0\) such that \(Q^\varepsilon _{ij}\le -\sigma <0\) for all i, j. Let us define the linear functions
and the convex function
Note that \(l_j(X_j,\det (X_j)) = c_j^\varepsilon \) and
Hence there is \(\delta >0\) such that \(l_i(X,d)< l_j(X,d)\) for all \((X,d) \in B_{\delta }(X_j,\det (X_j))\) and all \(i\ne j\), which implies that \(g_1 = l_j\) on \(B_{\delta }(X_j,\det (X_j))\). Choosing a radially symmetric, nonnegative smoothing kernel \(\rho _\varepsilon \) on \({\mathbb {R}}^5\), with \(0< \varepsilon \ll \delta \), we have that \(g_2\doteq \rho _\varepsilon \star g_1\) satisfies

(1)
\(g_2\) is smooth and convex

(2)
\(g_2 = l_j\) in a neighbourhood of \((X_j, \det (X_j))\) for all \(j \in \{1,\dots , 5\}\).

(3)
\(g_2(X,d) \le C \Vert (1,X,d)\Vert \) for all (X, d) for some \(C>0\).
We choose any \(R> 2 \max _{1\le i \le 5} \{\Vert X_i\Vert + \det (X_i) \}\), and any \(M>C\). Now we may choose \(L>0\) such that
Since \(M>C\) we have that
for all \((X,d) \notin B_{R_2}\), for some \(R_2 > R\). Now let us fix a smooth approximation of the \(\max \) function, say
where \(\phi _\varepsilon \) is a radially symmetric, nonnegative smoothing kernel in \({\mathbb {R}}^2\). Note that \(m(a,b) = \max (a,b)\) outside a neighborhood of \(\{ a = b\}\). In particular, if we choose \(\varepsilon \) sufficiently small, we can ensure that
agrees with \(g_2\) on \(B_{\frac{2R}{3}}\) by (4.13), and agrees with F(X, d) outside \(B_{2R_2}\) by (4.14). It remains to check that g(X, d) is still convex. First note that \(\partial _am \ge 0\) and \(\partial _bm\ge 0\), since \(\partial _a\max = {\mathbf {1}}_{\{a>b\}} \ge 0\) and \(\partial _b\max = {\mathbf {1}}_{\{b>a\}} \ge 0\). Now it is a direct computation on the Hessian to see that if \(f_1, f_2 \in C^2({\mathbb {R}}^N)\) are convex and \({\tilde{m}} \in C^2({\mathbb {R}}^2)\) is convex with \(\partial _a{\tilde{m}}(a,b), \partial _b{\tilde{m}}(a,b) \ge 0\), then the composition \(k(x)\doteq {\tilde{m}}(f_1(x), f_2(x))\) is convex. Thus we conclude that g is convex. Let us summarize the properties of g and of the related polyconvex integrand \(f_1(X)\doteq g(X, \det (X))\):

(1)
g is a smooth, convex function;

(2)
\(g=M\, a - L\) outside a ball \(B_{R_3}\);

(3)
\(g = g_2\) on a ball \(B_{R_0}\), which implies that \(f_1(X_i)=c_i^\varepsilon \) and \(\beta _i Df_1(X_i)J = Y_i^\varepsilon \) for all i.
In particular, from the last conditions and (4.12), we conclude that \(h(X,d)\doteq \varepsilon a(X,d) + g(X,d)\) is convex, and that \(f(X)\doteq \varepsilon {\mathcal {A}}(X) + f_1(X)\) is smooth, polyconvex and satisfies the desired properties; in particular, \(f(X_i)=c_i\), \(\beta _i Df(X_i)J= Y_i\) for all i, and \(f = (\varepsilon + M){\mathcal {A}} - L\) outside a ball centered at 0. \(\square \)
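The smoothing of the maximum used in the proof can be sketched numerically. The reduction below is an illustrative simplification (an assumption, not the proof's construction, which mollifies \(\max \) directly in \({\mathbb {R}}^2\)): since \(\max (a,b) = \frac{a+b}{2} + \frac{|a-b|}{2}\) and the linear part is unchanged by a symmetric kernel, it suffices to mollify \(t\mapsto |t|\) in one variable.

```python
import numpy as np

eps = 0.1
s = np.linspace(-eps, eps, 2001)[1:-1]        # open interval: avoids 0/0 at the endpoints
bump = np.exp(-1.0 / (1.0 - (s / eps) ** 2))  # smooth bump supported in (-eps, eps)
ds = s[1] - s[0]
bump /= bump.sum() * ds                       # normalize the kernel to integrate to 1

def smooth_abs(t: float) -> float:
    # Mollification of |.|; it equals |t| exactly for |t| >= eps, since there
    # |t - s| is affine in s on the (symmetric) support of the kernel.
    return float(np.sum(np.abs(t - s) * bump) * ds)

def m(a: float, b: float) -> float:
    return (a + b) / 2.0 + smooth_abs(a - b) / 2.0

assert abs(m(2.0, 0.5) - 2.0) < 1e-9  # m = max outside a neighborhood of {a = b}
assert m(1.0, 1.0) > 1.0              # strictly above max on the diagonal
# Midpoint convexity on a sample pair of points:
assert m(0.15, 0.3) <= 0.5 * (m(1.0, -0.3) + m(-0.7, 0.9)) + 1e-12
```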
Lemma 4.3
Given a large \(T_5\) configuration \(\{X_1,\dots , X_5\} \subset {{\,\mathrm{Sym}\,}}(2)\), where \({{\,\mathrm{Sym}\,}}(2)\) is the space of symmetric matrices of \({\mathbb {R}}^{2\times 2}\), there exists a map \(u \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^2)\) such that
and such that for every open \({\mathcal {V}}\subset \Omega \),
Proof
This statement is well-known, so we will only sketch its proof and give references for the relevant results. As shown in [13, Theorem 2.8], if \(K\doteq \{X_1,\dots , X_5\}\) forms a large \(T_5\) configuration, then there exists an in-approximation of K inside \({{\,\mathrm{Sym}\,}}(2)\). This means, compare [13, Definition 1.3], that there exists a sequence of sets \(\{U_k\}_{k \in {{\mathbb {N}}}}\), open relatively to \({{\,\mathrm{Sym}\,}}(2)\), such that

\(\sup _{X \in U_k}{{\,\mathrm{d}\,}}(X,K) \rightarrow 0\) as \(k \rightarrow \infty \);

\(U_k\subset U_{k + 1}^{rc}, \forall k \in {{\mathbb {N}}}\).
For a compact set \(C \subset {\mathbb {R}}^{2\times 2}\), the rank-one convex hull is defined as
where \(f: {\mathbb {R}}^{2\times 2} \rightarrow {\mathbb {R}}\) is said to be rank-one convex if
For an open set \(U \subset {\mathbb {R}}^{2\times 2}\),
In this way, if U is open, then \(U^{rc}\) is open as well. The existence of an in-approximation for K implies the existence of a nonaffine map u such that \(Du \in \{X_1,\dots , X_5\}\), hence (4.15). This is proved in [13, Theorem 1.1]. To show (4.16), there are two ways. One can either use the proof of [20, Theorem 4.1] or that of [24, Proposition 2] to show that the essential oscillation of Du is positive on any open subset of \(\Omega \); since there is rigidity for the four gradient problem, see [3], this implies (4.16). Alternatively, one can use the Baire category approach to convex integration introduced by Kirchheim in [16]. In particular, [16, Corollary 4.15] proves the following. Define
we fix \(A \in {\mathcal {U}}\), and we also set
then the typical (in the sense of Baire) map
has the property that \(Du \in K\). Then, we can use [26, Lemma 7.4] to show that the typical map is in fact nonaffine on any open set; hence, again by rigidity for the four gradient problem, we conclude (4.16). \(\square \)
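The rank-one convexity condition recalled above can be illustrated on a classical example (a standard fact, not specific to this paper): \(X \mapsto \det X\) on \(2\times 2\) matrices is rank-one affine, i.e. \(t \mapsto \det (A + tB)\) is affine whenever \({{\,\mathrm{rank}\,}}(B) \le 1\), because the quadratic coefficient \(\det B\) vanishes.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))
u, v = rng.standard_normal(2), rng.standard_normal(2)
B = np.outer(u, v)  # a rank-one direction

ts = np.linspace(-2.0, 2.0, 7)
vals = np.array([np.linalg.det(A + t * B) for t in ts])

# det(A + tB) = det A + t <cof A, B> + t^2 det B, and det B = 0 for rank-one B,
# so the sampled values are affine in t: their second differences vanish.
assert np.allclose(np.diff(vals, 2), 0.0)
assert abs(np.linalg.det(B)) < 1e-12
```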
Lemma 4.4
Let \(G: {\mathbb {R}}^{5}\times {\mathbb {R}}_{\ge 0}\rightarrow {\mathbb {R}}\) be the convex function defined in (4.8). Then, there exists a positively 1-homogeneous, convex function \({\mathcal {G}} \in C^\infty ({\mathbb {R}}^6\setminus \{0\})\cap {{\,\mathrm{Lip}\,}}({\mathbb {R}}^6)\) such that
if \(z \in {\mathbb {R}}^5,t \in {\mathbb {R}}_+\).
Proof
To prove the statement, it is sufficient to notice that the convexity of h and (4.9) tell us that h has property (P), see the beginning of Sect. 5, and therefore we can simply apply Proposition 5.2. The smoothness is a consequence of the smoothness of h, property (4.9) and Corollary 5.3. \(\square \)
Lemma 4.5
The energy \(\Sigma _\Psi \) satisfies the uniform Almgren ellipticity condition (A.11).
Proof
By construction, it is immediate to see that also
is still convex and positively 1-homogeneous. Define \(\Psi _\varepsilon \) as in (4.10), substituting \({\mathcal {G}}_\varepsilon \) for \({\mathcal {G}}\). By the general Proposition 5.5, we see that \(\Sigma _{\Psi _\varepsilon }\) satisfies Almgren's condition, hence \(\Sigma _\Psi \) satisfies (A.11) with constant \(\frac{\varepsilon }{2}\). \(\square \)
Lemma 4.6
The current \(T_{u,\theta } = \llbracket \Gamma _u,\vec \xi _u,\theta \rrbracket \) defined in (4.11) is stationary in \(\Omega \times {\mathbb {R}}^2\) for the energy \(\Sigma _\Psi \).
Proof
A direct computation shows that f and \(\Psi \) fulfill
where \(W(X) = M^1(X)\wedge M^2(X)\) and \(M^i\) are the columns of the matrix
Once this is checked, the proof is entirely analogous to the one of [5, Proposition 6.8], and will be sketched in the appendix, see Proposition 5.8. \(\square \)
4.1 Explicit values
The following values were found using Maple 2020. Define the following quantities:
The large \(T_5\) configuration is given by:
Define \(X_i, Y_i,Z_i \in {\mathbb {R}}^{2\times 2}\) through the relations
The matrices A, B, C, D appearing in Condition 3 are given by:
These values fulfill Conditions 1, 2, 3. In particular, the three permutations in the definition of large \(T_5\) configuration of Condition 2 are: [1, 2, 3, 5, 4], [1, 2, 4, 5, 3], [1, 2, 5, 3, 4].
5 Extension of polyconvex functions
Let \(\Phi : {\mathbb {R}}^{n\times m} \rightarrow {\mathbb {R}}^{k}\) be the usual map that associates to a matrix \(X \in {\mathbb {R}}^{n\times m}\) the vector of its subdeterminants. Consider a polyconvex function
\(h: {\mathbb {R}}^k \rightarrow {\mathbb {R}}\) being^{Footnote 1}\(C^1\). The purpose of this section is to generalize the arguments of the previous section to arbitrary n, m, and hence to prove some of the lemmas of that section. Consider the following set of assumptions

(i)
h is convex;

(ii)
h has linear growth, i.e. \(h(z) \le A\Vert z\Vert + B\), \(\forall z \in {\mathbb {R}}^k\), for some \(A,B\ge 0\);

(iii)
\(\lambda \doteq \inf \{h(z) - (Dh(z),z): z\in {\mathbb {R}}^k\} > -\infty \);

(iv)
\((Dh(z_2),z_2 - z_1) \le h(z_1) + h(z_2), \quad \forall z_1,z_2 \in {\mathbb {R}}^k\).
If h fulfills (i)–(ii)–(iii), we will say it has property (P). If, in addition, h satisfies (iv), we will say that h fulfills property (PE).
Remark 5.1
Notice that (iii) is a consequence of (iv): indeed, if (iv) holds, we can write, for \(z_1 = 0\) and for any \(z_2 = z \in {\mathbb {R}}^k\):
hence
which implies (iii).
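Spelled out, with (iv) read as \((Dh(z_2), z_2 - z_1) \le h(z_1) + h(z_2)\), the computation of the remark is:

```latex
% Remark 5.1: choose z_1 = 0, z_2 = z in (iv)
(Dh(z), z) \;=\; (Dh(z), z - 0) \;\le\; h(0) + h(z),
\qquad\text{hence}\qquad
h(z) - (Dh(z), z) \;\ge\; -h(0),
```

so that \(\lambda \ge -h(0) > -\infty \), which is exactly (iii).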
We denote with \(h^*\) the recession function of h:
It is not difficult to prove that the limit above always exists and is finite for a function h satisfying (P). To show it, one can use the fact that the function
defined for \(y > 0\) is convex for every fixed \(x \in {\mathbb {R}}^k\), see [4, Lemma 2].
As above, we define the perspective function
We consider the smallest convex extension of G to the whole \({\mathbb {R}}^{k + 1}\):
By the 1-homogeneity of G, we can write
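These objects are easy to experiment with numerically. The following sketch (our own illustration, not part of the paper) uses the one-dimensional area integrand \(h(z) = \sqrt{1+z^2}\), which is convex with linear growth; its perspective function is \(G(x,y) = \sqrt{x^2+y^2}\) and its recession function is \(h^*(z) = |z|\):

```python
import numpy as np

# Illustration (our choice of integrand): h(z) = sqrt(1 + z^2),
# convex with linear growth (A = B = 1 in condition (ii)).
h = lambda z: np.sqrt(1.0 + z**2)

# Perspective function G(x, y) = y * h(x / y), defined for y > 0.
G = lambda x, y: y * h(x / y)

# Recession function h^*(z) = lim_{t -> 0+} t * h(z / t); here h^*(z) = |z|.
for z in [-3.0, 0.5, 2.0]:
    approx = [t * h(z / t) for t in (1e-2, 1e-4, 1e-6)]
    assert abs(approx[-1] - abs(z)) < 1e-5

# Convexity of G on R x (0, +inf): midpoint inequality along a segment.
p, q = (1.0, 2.0), (-3.0, 0.5)
mid = G(0.5 * (p[0] + q[0]), 0.5 * (p[1] + q[1]))
assert mid <= 0.5 * G(*p) + 0.5 * G(*q) + 1e-12
```

Here the smallest convex 1-homogeneous extension and the even extension can be compared explicitly, since \(|t|\,h(z/t) = \sqrt{z^2+t^2}\) is convex on all of \({\mathbb {R}}^2\).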
First, we prove
Proposition 5.2
Let \({\mathcal {G}}\) be defined as in (5.1). Then, if h satisfies (P), \({\mathcal {G}}\)

(1)
is convex and extends G on \({\mathbb {R}}^k\times (0,+\infty )\);

(2)
is positively 1-homogeneous;

(3)
is finite everywhere.
Conversely, if there exists a function \({\mathcal {G}}\) that fulfills (1)–(2)–(3), then h fulfills (P).
Furthermore, we can prove the following characterization of \({\mathcal {G}}\):
Corollary 5.3
Let h fulfill property (P), and let \({\mathcal {G}}\) be defined as in (5.1). Assume further that there exist \(\lambda ' \in {\mathbb {R}}\) and \(R > 0\) such that
Then, \(\lambda ' = \lambda \) and for \(t < 0\), we have
where \(\lambda \) is the quantity appearing in (iii).
Before starting with the proof of the proposition, we need to recall some results concerning the notion of subdifferential at \(x \in {\mathbb {R}}^N\) of a convex function \(f: {\mathbb {R}}^{N} \rightarrow {\mathbb {R}}\).
5.1 Subdifferentials
The subdifferential of f at x, denoted with \(\partial f(x)\), is the collection of those vectors \(v \in {\mathbb {R}}^N\) such that
We will use the following facts concerning the subdifferential. For a convex function with finite values, \(\partial f(x) \ne \emptyset \) at all \(x \in {\mathbb {R}}^N\), see [21, Theorem 23.4]. Conversely, if \(f:{\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is such that \(\partial f(x) \ne \emptyset \) at every \(x \in {\mathbb {R}}^N\), then f is convex, since in that case
As can be seen from the definition of subdifferential,
This, together with the fact that if K is compact, then \(\partial f(K) \doteq \bigcup _{x \in K}\partial f(x)\) is compact, see [12, Lemma A.22], yields the fact that every convex function is locally Lipschitz. Moreover, if f is positively 1homogeneous, a simple application of the definition of subdifferential shows that
In particular, combining (5.3) with the local Lipschitz property of convex functions, we infer that if \(f: {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is convex and positively 1-homogeneous, f must be globally Lipschitz. Furthermore, using the definition of subdifferential and (5.3) for f convex and positively 1-homogeneous, it is easy to see that the following generalized Euler’s formula holds
Finally, we recall that at x, the convex function \(f: {\mathbb {R}}^N \rightarrow {\mathbb {R}}\) is differentiable if and only if
see [12, Lemmas A.20 and A.21] and references therein. We can now start the proof of the proposition.
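The generalized Euler formula \((Df(x),x) = f(x)\) for a convex, positively 1-homogeneous f can be probed numerically; the sketch below (our own, taking the Euclidean norm as f and finite-difference gradients) checks it at a point of differentiability:

```python
import numpy as np

# f = Euclidean norm: convex and positively 1-homogeneous (our example).
f = lambda x: np.linalg.norm(x)

def num_grad(f, x, eps=1e-6):
    # central finite differences (valid where f is differentiable, i.e. x != 0)
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

x = np.array([3.0, -4.0])
# generalized Euler formula: (Df(x), x) = f(x)
assert abs(np.dot(num_grad(f, x), x) - f(x)) < 1e-6
# positive 1-homogeneity: f(s x) = s f(x) for s > 0
assert abs(f(2.5 * x) - 2.5 * f(x)) < 1e-12
```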
5.2 Proof of Proposition 5.2
First we assume that h has property (P). \({\mathcal {G}}\) is convex, since it is a supremum of linear functions. Moreover, the convexity of h yields the convexity of G on \({\mathbb {R}}^k\times (0,+\infty )\). Having established that G is convex, the fact that \({\mathcal {G}}\) as in (5.1) extends G is classical. This proves (1). Since G is positively 1-homogeneous, \({\mathcal {G}}\) is positively 1-homogeneous as well. Therefore (2) is checked, and we only need to prove (3). By (5.1), we see that in order to conclude we only need to show that, for fixed \((z,t) \in {\mathbb {R}}^{k + 1}\),
where L possibly depends on (z, t). Let us compute DG. Firstly we have that
Now, exploiting the convexity of h, we can choose any \(v \in {\mathbb {R}}^k\) with \(\Vert v\Vert = 1\) and write
Using the linear growth of h, i.e. (ii), we bound:
Letting \(s \rightarrow + \infty \), the previous expression yields
Thus, if we can show that \(\partial _yG(x,y)\) is uniformly bounded, then we conclude the proof. We compute explicitly, for every \((x,y)\in {\mathbb {R}}^{k}\times (0,+\infty )\)
We are therefore left to study the boundedness (from below) of the function \(z\mapsto h(z) - (Dh(z),z)\), but this is a consequence of (iii) of property (P).
Finally, let us show the necessity of (P). If \({\mathcal {G}}\) is convex and extends G, then in particular
hence h is convex. By the discussion of Sect. 5.1, we know that \({\mathcal {G}}\) is globally Lipschitz with constant \(L > 0\). Since
we infer that h has linear growth, i.e. it enjoys property (ii). Finally, we need to show (iii). Since \({\mathcal {G}}\) extends G in the upper halfspace, we obtain
By the definition of G, we deduce
hence (iii).
5.3 Proof of Corollary 5.3
First we show that \(\lambda = \lambda '\). To see this, consider for any \(z \ne 0\) the auxiliary function \(g(t) \doteq h(tz) - (Dh(tz),tz)\), for \(t > 0\). Then, g is nonincreasing. Indeed,
and we can use that \({\mathcal {G}}\) is convex to deduce that \(t\mapsto \partial _y{\mathcal {G}}(z,t)\) is nondecreasing, hence that \(t\mapsto g(t)\) is nonincreasing. Now, for any t sufficiently large, by assumption (5.2), we have that
This shows that
In particular, notice that \(h(0) = \lim _{t \rightarrow 0^+}g(t) \ge \lambda '\). To show the equality between \(\lambda \) and \(\lambda '\), consider now a sequence \(z_n \in {\mathbb {R}}^k\) such that \(a_n \doteq h(z_n) - (Dh(z_n),z_n) \rightarrow \lambda \) as \(n \rightarrow \infty \). If \(z_n = 0\) for infinitely many n, we can write \(\lambda = \lim _{n \rightarrow \infty }a_n = h(0) \ge \lambda '\) and the proof is concluded. Otherwise, by the computation above, we have, for every \(t \ge 1\)
By choosing t in dependence of \(z_n\), we can ensure through assumption (5.2) that
Therefore,
and the proof of the first part of the Corollary is finished.
Now we wish to show the characterization of \({\mathcal {G}}\). Fix \((z,t) \in {\mathbb {R}}^k\times (-\infty ,0)\). Let \((x,y) \in {\mathbb {R}}^k\times (0,+\infty )\). Then, using the definition \(G(x,y) = yh\left( \frac{x}{y}\right) \)
By (iii), we get
hence, since \(t < 0\),
We now show that
Let \(a,b \in {\mathbb {R}}^k\), \(r > 0\). Then, using the convexity of h,
or
To conclude (5.7), we could use assumption (5.2) directly, but let us give a slightly more general argument, so as to reuse the same inequality below. By (5.5), we have that
is an equibounded family of vectors, hence up to subsequences it admits a limit \(\lim _{j \rightarrow \infty } Dh\left( \frac{b}{r_j}\right) = w \in {\mathbb {R}}^k\), where \(\lim _{j \rightarrow \infty }r_j = 0\). Hence,
Now we claim that \(w \in \partial h^*(b)\): indeed, using the convexity of h we can write
Multiplying by \(r_j\) and letting \(j \rightarrow \infty \), we find that \(w \in \partial h^*(b)\). By (5.4), (5.7) now follows from (5.9). Therefore, we can conclude that, for \(t < 0\),
To conclude the assertion, we consider for any \(t > 0\):
If we choose y sufficiently small (in dependence of z), once again using (5.2), we see that
the latter being true by the first part of the proof. This concludes the proof of the corollary.
5.4 Symmetric extension
Now we show the link between (PE) and a symmetric extension. Notice that imposing that h admits a 1-homogeneous and even extension such that \({\mathcal {G}}(z,1) = h(z)\) forces this extension to have the form
for \(t \ne 0\). If we require that \({\mathcal {G}}\) is convex too, then it is continuous, hence it becomes uniquely determined on \(\{(x,y): y = 0\}\) as \({\mathcal {G}}(z,0) = h^*(z)\). Therefore, instead of considering a general convex extension as in (5.1), we are going to work with the function \({\mathcal {G}}\) obtained in (5.10).
Proposition 5.4
h satisfies (PE) if and only if \({\mathcal {G}}: {\mathbb {R}}^{k + 1}\rightarrow {\mathbb {R}}\) defined as
is even and convex.
Proof
Assume that h satisfies (PE). First we prove that \({\mathcal {G}}\) is even. This amounts to showing that
To see this, we simply evaluate (iv) at \(z_1 = -\frac{z}{t}\) and \(z_2 = \frac{z}{t}\) for any \(z \in {\mathbb {R}}^k\), \(t > 0\), to find
We now use the same argument to prove (5.7) to see that for a sequence of positive numbers \(\{t_j\}_{j \in {{\mathbb {N}}}}\) with \(\lim _jt_j = 0\), \(\lim _{j \rightarrow \infty }Dh\left( \frac{z}{t_j}\right) = w \in \partial h^*(z)\). Therefore, multiplying by t in (5.13) and passing to the limit along this subsequence, we get
By (5.4), \((w,z) = h^*(z)\), and in this way we see that, using the last equation,
which implies (5.12).
Now we show that \({\mathcal {G}}\) is convex. We rely on the results of Sect. 5.1, and we aim to show that at every point \(p = (z,t) \in {\mathbb {R}}^{k + 1}\),
Let first \(t > 0\). Since \({\mathcal {G}}\) is differentiable at p, the only possible candidate for an element of the subdifferential is \(v\doteq D{\mathcal {G}}(p)\). Notice moreover that, by the 1-homogeneity of \({\mathcal {G}}\), \((D{\mathcal {G}}(p),p)={\mathcal {G}}(p)\). Thus we have, for any \(q = (x,y) \in {\mathbb {R}}^{k + 1}\):
If we establish \((D{\mathcal {G}}(p),q) \le {\mathcal {G}}(q)\) for any \(y \ne 0\), then we can use the pointwise convergence
to infer that the inequality holds also for \(y = 0\). We therefore compute, for any \(y \ne 0\):
Using (5.14), \(v = D{\mathcal {G}}(p)\) is a supporting hyperplane if and only if
Following the same argument as at the beginning of the proof of Proposition 5.2, since h is convex, \({\mathcal {G}}\) is convex on \({\mathbb {R}}^k\times (0,+\infty )\). Thus (5.15) is surely fulfilled if \(y > 0\). If \(y < 0\), (5.15) becomes
which can be rewritten as
The last condition is equivalent to (iv). Now we need to prove that an element of the subdifferential exists also at points \(p = (z,t)\) with \(t < 0 \). This is, however, a consequence of the evenness of \({\mathcal {G}}\) and of the proof above: indeed, the evenness of \({\mathcal {G}}\) yields
Therefore, for any \(q = (x,y) \in {\mathbb {R}}^{k + 1}\),
where we exploited the fact that \(D{\mathcal {G}}(p) \in \partial {\mathcal {G}}(p)\), as proved above. Finally, we need to produce an element in the subdifferential at points \(p = (z,0)\). To do so, we again use the fact that for any \(p' = (z,t)\) with \(t > 0\), \(q = (x,y) \in {\mathbb {R}}^{k + 1}\),
We only need to observe that \(\{D{\mathcal {G}}(z,t)\}_{t > 0}\) is an equibounded family of vectors. This allows us to choose a sequence \(t_j>0\) convergent to 0 such that \(\{D{\mathcal {G}}(z,t_j)\}_{j \in {{\mathbb {N}}}}\) converges to a vector \(w \in {\mathbb {R}}^{k + 1}\). Since \(\lim _{t\rightarrow 0^+}{\mathcal {G}}(z,t) = {\mathcal {G}}(z,0)\), we have
where \(p_j = (z,t_j), \forall j \in {{\mathbb {N}}}\). To show the equiboundedness of \(\{D{\mathcal {G}}(z,t)\}_{t>0}\), we observe that
that is equibounded in z and t by (5.5). Exactly as in the proof of Proposition 5.2, we use (iii) to say that
Hence we only need to provide a bound from above. To show it, we use the convexity of h to estimate
that provides the desired bound. This finishes the proof of the convexity of \({\mathcal {G}}\).
To conclude, we need to show the converse statement, i.e. that if \({\mathcal {G}}\) is even and convex, then h fulfills (PE). The fact that h fulfills (i)–(ii)–(iii) can be proved in a completely analogous way as in Proposition 5.2. By Remark 5.1, one could also infer (iii) as a corollary of (iv). Finally, to see (iv), one can simply follow the chain of logical equivalences of the previous part of the proof. This proves that (PE) is also necessary for the existence of the even extension. \(\square \)
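To make condition (iv) concrete, here is a small numerical experiment of our own in the scalar case \(k = 1\), reading (iv) as \((Dh(z_2), z_2 - z_1) \le h(z_1) + h(z_2)\): the area integrand \(h(z) = \sqrt{1+z^2}\) passes the test, consistently with the convexity of its even extension \({\mathcal {G}}(z,t) = \sqrt{z^2+t^2}\), while the convex \(h(z) = z\) fails it, consistently with the non-convexity of \({\mathcal {G}}(z,t) = z\,{{\,\mathrm{sign}\,}}(t)\).

```python
import numpy as np

# Brute-force probe of condition (iv) on a finite grid (k = 1):
#   (Dh(z2), z2 - z1) <= h(z1) + h(z2)  for all z1, z2.
grid = np.linspace(-10, 10, 201)

def satisfies_iv(h, Dh):
    # check (iv) at every pair of grid points, with a small tolerance
    return all(Dh(z2) * (z2 - z1) <= h(z1) + h(z2) + 1e-12
               for z1 in grid for z2 in grid)

# h(z) = sqrt(1 + z^2): (iv) holds, and its even 1-homogeneous extension
# G(z, t) = |t| h(z/t) = sqrt(z^2 + t^2) is indeed convex.
assert satisfies_iv(lambda z: np.sqrt(1 + z**2),
                    lambda z: z / np.sqrt(1 + z**2))

# h(z) = z: convex with linear growth, but (iv) fails (take z1 < 0), and
# indeed G(z, t) = |t| (z/t) = z * sign(t) is not convex.
assert not satisfies_iv(lambda z: z, lambda z: 1.0)
```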
5.5 Extension to geometric functionals
Consider an orthonormal basis of \(\Lambda _m({\mathbb {R}}^{m + n})\), denoted with \(E_1,\dots , E_{\left( {\begin{array}{c}m + n\\ m\end{array}}\right) }\), where
as done in (A.1). We define, for every \(\tau \in \Lambda _m({\mathbb {R}}^{m + n})\)
and consequently the energy
for \(T = \llbracket E, \vec T,\theta \rrbracket \in {\mathcal {R}}_m({\mathbb {R}}^{n + m})\). For convenience, let us denote
We have
Proposition 5.5
Let \({\mathcal {G}}\) be positively 1-homogeneous and convex, and define \(\Psi \) as in (5.16). Then, \(\Sigma _\Psi \) fulfills Almgren’s condition (A.10).
Proof
Let \(R,S \in {\mathcal {R}}_m({\mathbb {R}}^{n + m})\) with \(\partial R = \partial S \), suppose that \({{\,\mathrm{spt}\,}}S\) is contained in the m-dimensional subspace of \({\mathbb {R}}^{n + m}\) associated with a simple m-vector \(\vec {S}_0\), and that \(\vec {S}(z)=\vec {S}_0\) for \(\Vert S\Vert \)-almost all z. Since \(\partial R = \partial S\), we have that
compare [11, 5.1.2]. Note that, by the linearity of \(\phi \), this implies that
Now we may use Jensen’s inequality and the 1-homogeneity of \({\mathcal {G}}\) to deduce that
where we used again in the last line that \({\mathcal {G}}\) is 1-homogeneous. \(\square \)
Remark 5.6
If \({\mathcal {G}}\) is even, then \(\Sigma _\Psi \) is a well-defined energy on varifolds. Notice that in this case \({\mathcal {G}}\) is convex, even and 1-homogeneous. A simple computation in convex analysis shows that this forces \({\mathcal {G}}\) to be nonnegative. This observation is what makes it impossible to extend an integrand f such as the one constructed in Sect. 4 to an integrand defined on varifolds using the methods introduced here.
Notes
This hypothesis on the regularity of h is not necessary, and one could simply consider \(h \in {{\,\mathrm{Lip}\,}}({\mathbb {R}}^k)\). Indeed, all the results of this section would work with simple modifications in the Lipschitz case. Nonetheless, we prefer to assume \(C^1\) regularity in order to avoid further technicalities.
References
Allard, W.K.: An integrality theorem and a regularity theorem for surfaces whose first variation with respect to a parametric elliptic integrand is controlled. In: Geometric Measure Theory and the Calculus of Variations, Proc. Sympos. Pure Math., vol. 44. Amer. Math. Soc. (1986)
Allard, W.K.: On the first variation of a varifold. Ann. Math. (2) 95, 417–491 (1972)
Chlebík, M., Kirchheim, B.: Rigidity for the four gradient problem. J. Reine Angew. Math. 551, 1–9 (2002)
Dacorogna, B., Maréchal, P.: The role of perspective functions in convexity, polyconvexity, rankone convexity and separate convexity. J. Convex Anal. 15(2), 271–284 (2008)
De Lellis, C., De Philippis, G., Kirchheim, B., Tione, R.: Geometric measure theory and differential inclusions. To appear in Ann. Fac. Sci. Toulouse Math. arXiv:1910.00335 (2019)
De Philippis, G., De Rosa, A., Ghiraldin, F.: Rectifiability of Varifolds with Locally Bounded First Variation with Respect to Anisotropic Surface Energies. Commun. Pure Appl. Math. 71(6), 1123–1148 (2017)
De Philippis, G., De Rosa, A., Hirsch, J.: The area blow up set for bounded mean curvature submanifolds with respect to elliptic surface energy functionals. Discrete Contin. Dyn. Syst. 39(12), 7031–7056 (2019)
De Rosa, A., Kolasiñski, S.: Equivalence of the ellipticity conditions for geometric variational problems. Commun. Pure Appl. Math. 73(11), 2473–2515 (2020)
De Rosa, A., Tione, R.: Regularity for graphs with bounded anisotropic mean curvature, arXiv:2011.09922 (2020)
Duggan, J.P.: Regularity theorems for varifolds with mean curvature. Ph.D. Thesis (1986)
Federer, H.: Geometric Measure Theory. Springer, Berlin (1969)
Figalli, A.: The MongeAmpère Equation and Its Applications, Zurich Lectures in Advanced Mathematics, European Mathematical Society, (2017)
Förster, C., Székelyhidi, L., Jr.: T5-configurations and non-rigid sets of matrices. Calc. Var. Partial Differ. Equ. 57(1), 19 (2017)
Giaquinta, M., Modica, G., Soucek, J.: Cartesian Currents in the Calculus of Variations, vol. I. Springer, Berlin (1998)
Giaquinta, M., Modica, G., Soucek, J.: Cartesian Currents in the Calculus of Variations, vol. II. Springer, Berlin (1998)
Kirchheim, B.: Rigidity and Geometry of Microstructures. Habilitation thesis, University of Leipzig (2003)
Kirchheim, B., Müller, S., Šverák, V.: Studying nonlinear pde by geometry in matrix space. In: Hildebrandt, S., Karcher, H. (eds.) Geometric Analysis and Nonlinear Partial Differential Equations. Springer, Berlin, Heidelberg (2003). https://doi.org/10.1007/9783642556272_19
Lorent, A., Peng, G.: Null Lagrangian measures in subspaces, compensated compactness and conservation laws. Arch. Rational Mech. Anal. 234(2), 857–910 (2019)
Lorent, A., Peng, G.: On the rank-1 convex hull of a set arising from a hyperbolic system of Lagrangian elasticity. Calc. Var. Partial Differ. Equ. 59(5) (2020)
Müller, S., Šverák, V.: Convex integration for Lipschitz mappings and counterexamples to regularity. Ann. Math. Second Ser. 157(3), 715–742 (2003)
Rockafellar, R.T.: Convex Analysis, Princeton Landmarks in Mathematics and Physics. Princeton University Press (1970)
Schoen, R., Simon, L.: A new proof of the regularity theorem for rectifiable currents which minimize parametric elliptic functionals. Indiana Univ. Math. J. 31(3), 415–434 (1982)
Simon, L.: Lectures on Geometric Measure Theory. Australian National University (2008)
Székelyhidi Jr., L.: The regularity of critical points of polyconvex functionals. Arch. Ration. Mech. Anal. 172(1), 133–152 (2004)
Székelyhidi, L., Jr.: Rank-one convex hulls in \({\mathbb{R}}^{2\times 2}\). Calc. Var. Partial Differ. Equ. 28(4), 545–546 (2007)
Tione, R.: Minimal graphs and differential inclusions. Commun Part Differ. Equ. (2021). https://doi.org/10.1080/03605302.2020.1871367
Acknowledgements
The authors would like to thank Camillo De Lellis for his interest in the problem and some preliminary discussions. This work was developed while R. T. was finishing his PhD at the University of Zürich, and is now supported by the SNF Grant \(200021\_182565\). J. H. was partially supported by the German Science Foundation DFG in the context of the Priority Program SPP 2026 Geometry at Infinity.
Funding
Open Access funding provided by EPFL Lausanne.
Additional information
Communicated by J. M. Ball.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendix A. Currents, varifolds and geometric functionals
In this section we give the main definitions concerning currents and varifolds we have used throughout the paper. One can give more general definitions, namely flat, normal currents and general varifolds, see for instance [11, 23], but we limit ourselves to rectifiable currents and varifolds in order to keep the exposition as concise as possible.
1.1 A.1. Multilinear algebra
Let \(n \in {{\mathbb {N}}}\), \(m \ge 0\). We denote with \(\Lambda _{m}({\mathbb {R}}^{n + m})\) the space of m-vectors of \({\mathbb {R}}^{n + m}\), i.e. the vector space given by finite linear combinations of elements of the form
We also let \(\Lambda _{m}^s({\mathbb {R}}^{n + m})\subset \Lambda _{m}({\mathbb {R}}^{n + m})\) be the space of nonzero simple m-vectors, i.e. all elements \(\tau \in \Lambda _{m}({\mathbb {R}}^{n + m})\) such that
for \(v_1,\dots , v_m \in {\mathbb {R}}^{n + m}\). We define a canonical basis of \(\Lambda _{m}({\mathbb {R}}^{n + m})\) as follows. Let \(e_1,\dots , e_{n + m}\) be the vectors of the canonical basis of \({\mathbb {R}}^{n + m}\). Consider any multi-index of length m, \(I = (i_1,\dots ,i_m)\), with \(1\le i_1<\dots < i_m\le n+ m\). There are \(\left( {\begin{array}{c}n + m\\ m\end{array}}\right) \) such multi-indices, and each one defines a simple m-vector
It is easy to check that \(\left\{ E_1,\dots , E_{\left( {\begin{array}{c}n + m\\ m\end{array}}\right) }\right\} \) is a basis for \(\Lambda _m({\mathbb {R}}^{n + m})\). We also set
while the ordering of the other indexes is arbitrary (but fixed).
The vector space \(\Lambda _m({\mathbb {R}}^{n + m})\) can be endowed with a scalar product that is defined on simple vectors as
where \(X \in {\mathbb {R}}^{m\times m}\) is defined as \(X_{ij} = (v_i,w_j)\). We define a norm on \(\Lambda _m({\mathbb {R}}^{n + m})\) by setting \(\Vert \tau \Vert = \sqrt{(\tau ,\tau )}\). Analogously, one introduces the space of m-covectors of \({\mathbb {R}}^{n + m}\), \(\Lambda ^*_m({\mathbb {R}}^{n + m})\), as the linear space generated by wedge products of m covectors of \({\mathbb {R}}^{n + m}\). An element \(\eta \in \Lambda ^*_m({\mathbb {R}}^{n + m})\) acts by duality on elements of \(\Lambda _m({\mathbb {R}}^{n + m})\) in the following way. Let \(\eta = \eta ^1\wedge \dots \wedge \eta ^m\) and \(\tau = v_1\wedge \dots \wedge v_m\) (the general case follows by linearity). Then,
where \(Y \in {\mathbb {R}}^{m\times m}\) is the matrix defined as \(Y_{ij}\doteq \eta ^i(v_j)\), \(\forall 1 \le i,j \le m\).
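The Gram-determinant formula above can be tested numerically (an illustrative check of ours; for simple 2-vectors in \({\mathbb {R}}^4\) it is an instance of the Cauchy–Binet formula \(\det (VW^T) = \sum _I \det (V_I)\det (W_I)\)):

```python
import numpy as np
from itertools import combinations

# Check (v1 ∧ v2, w1 ∧ w2) = det X with X_ij = (v_i, w_j), by expanding both
# simple 2-vectors in the basis {e_i ∧ e_j : i < j} of Λ_2(R^4).
rng = np.random.default_rng(0)
V = rng.standard_normal((2, 4))   # rows: v1, v2 (arbitrary choices)
W = rng.standard_normal((2, 4))   # rows: w1, w2

def wedge_coeffs(M):
    # coefficients of row1 ∧ row2 with respect to the basis e_i ∧ e_j, i < j
    return np.array([np.linalg.det(M[:, list(I)])
                     for I in combinations(range(4), 2)])

lhs = wedge_coeffs(V) @ wedge_coeffs(W)   # scalar product of the 2-vectors
rhs = np.linalg.det(V @ W.T)              # det of the matrix (v_i, w_j)
assert abs(lhs - rhs) < 1e-10
```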
With these definitions at hand, we can consider the space \({\mathcal {D}}^m({\mathbb {R}}^{n + m})\) as the space of smooth m-forms of \({\mathbb {R}}^{n + m}\) with compact support, namely
We endow \(\Lambda _m^*({\mathbb {R}}^{n + m})\) with a norm given by
hence we can consider on \({\mathcal {D}}^m({\mathbb {R}}^{n + m})\) the norm
1.2 A.2. Planes and rectifiable sets
We denote with \({\mathbb {G}}(m,n+ m)\) the space of unoriented m-planes of \({\mathbb {R}}^{n + m}\). In [5], we used the identification of \({\mathbb {G}}(m, n + m)\) with the space of orthogonal projections onto m-planes
It is not difficult to show that \(\Lambda _m^s({\mathbb {R}}^{n + m})\) can be identified with the space of oriented m-dimensional planes of \({\mathbb {R}}^{m + n}\), see [14, Section 2.1]. It is thus natural to introduce the two-to-one map
that takes \(v_1\wedge \dots \wedge v_m \in \Lambda _m^s({\mathbb {R}}^{n + m})\) to the projection onto the m-plane spanned by \(v_1,\dots , v_m\). Notice that f is not injective, since \(f(\tau ) = f(-\tau )\), \(\forall \tau \in \Lambda ^s_m({\mathbb {R}}^{n + m})\).
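This two-to-one behavior can be checked directly (an illustrative computation of ours): the orthogonal projection determined by a simple 2-vector \(v_1\wedge v_2\) in \({\mathbb {R}}^4\) is unchanged when the orientation is reversed, e.g. by swapping \(v_1\) and \(v_2\):

```python
import numpy as np

# The projection onto span(v1, v2) depends only on the plane, not on the
# orientation: swapping v1 and v2 flips the simple 2-vector v1 ∧ v2 but
# yields the same projection matrix.
rng = np.random.default_rng(1)
v1, v2 = rng.standard_normal(4), rng.standard_normal(4)

def proj(*vs):
    A = np.column_stack(vs)                   # basis of the plane as columns
    return A @ np.linalg.solve(A.T @ A, A.T)  # orthogonal projection matrix

P, P_flipped = proj(v1, v2), proj(v2, v1)     # v2 ∧ v1 = -(v1 ∧ v2)
assert np.allclose(P, P_flipped)
assert np.allclose(P @ P, P) and np.allclose(P, P.T)  # idempotent, symmetric
```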
We recall that a set \(E \subset {\mathbb {R}}^{m + n}\) is called rectifiable of dimension m if
where \({\mathcal {H}}^m(E_0) = 0\), \(F_j \in C^1({\mathbb {R}}^m,{\mathbb {R}}^{n + m})\), and \(E_j \subset {\mathbb {R}}^{m + n}\) is Borel. To such a set E it is possible to associate naturally a notion of approximate tangent plane, i.e. a map
For the definition of \(T_xE\), we refer the reader to [23, Section 3.1]. An orientation \(x\mapsto \vec T(x)\) of \(T_xE\) is a Borel map \(\vec T \in L^\infty (E,\Lambda _m^s({\mathbb {R}}^{n + m}))\) with \(\Vert \vec T(x)\Vert = 1\) for \({\mathcal {H}}^m\llcorner E\) a.e. \(x \in {\mathbb {R}}^{n + m}\), and \(\vec T(x) \in f^{1}(T_xE)\), for \({\mathcal {H}}^m\llcorner E\) a.e. \(x \in {\mathbb {R}}^{n + m}\), where f is the map defined in (A.2).
1.3 A.3. Varifolds
An m-dimensional rectifiable varifold V is a measure on \({\mathbb {R}}^{n + m}\times {\mathbb {G}}(m, n + m)\) given by
where E is an m-dimensional rectifiable set and \(\theta \in L^1(E;{\mathcal {H}}^m\llcorner E)\). The varifold is called integer rectifiable if in addition \(\theta \) has values in \({\mathbb {N}}\setminus \{0\}\). The notation for the varifold V defined as in (A.3) is
1.4 A.4. Currents
A rectifiable current of dimension m, denoted by T, is a linear functional over \({\mathcal {D}}^m({\mathbb {R}}^{n + m})\) represented as:
where E is an m-rectifiable subset of \({\mathbb {R}}^{n + m}\), \(\vec T(x)\) is an orientation of \(T_xE\), and \(\theta \in L^1(E;{\mathcal {H}}^m\llcorner E)\). Such a current T is denoted as
The mass of the current T is defined as
and we introduce the notion of boundary \(\partial T\) of a rectifiable current T as the \((m-1)\)-dimensional current
We restrict our attention to the space of integer rectifiable currents of dimension m, \({\mathcal {R}}_m({\mathbb {R}}^{n + m})\), defined as the space of m-dimensional rectifiable currents T with finite mass and for which \(\theta \) has values in \({\mathbb {N}}\setminus \{0\}\).
Given \(T = \llbracket E,\vec T,\theta \rrbracket \) and an injective vector field \(X \in C^1({\mathbb {R}}^{n + m},{\mathbb {R}}^{n + m})\), we define the pushforward of T as the current \(X_{\#}(T) \in {\mathcal {R}}_m({\mathbb {R}}^{n + m})\) defined by
where
if \(\vec T(x) = v_1(x)\wedge \dots \wedge v_m(x)\). Analogously, for a varifold \(V =\llbracket E,\theta \rrbracket \),
Notice that to every current \(T \in {\mathcal {R}}_m({\mathbb {R}}^{n + m})\) one can associate an integer rectifiable varifold \(V_T\) in the obvious way
where f is the map defined in (A.2).
Let us explain how to endow the graph of a Lipschitz map with the structure of a current, hence also of a varifold. We essentially follow the theory developed in [14, 15]. We also refer the reader to [5], where this discussion was carried out to give the graph the structure of a varifold. Let \(\Omega \subset {\mathbb {R}}^m\) be open and bounded and let \(u \in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^n)\). Then, the graph of u defined as
is m-rectifiable. Furthermore, as proved in [14, Sec. 1.5, Th. 5], its approximate tangent plane is, at a.e. \(x_0 \in \Omega \), given by the orthogonal projection onto
where \(f_1,\dots ,f_m\) are the elements of the canonical basis of \({\mathbb {R}}^m\). Define \(v_i(x)\doteq (f_i,\partial _i u(x))^T\). The orientation we define on \(\pi (x_0)\) is the natural one:
Given a Borel function \(\theta \in L^1(\Gamma _u;{\mathcal {H}}^m\llcorner \Gamma _u)\), we define the current \(T_{u,\theta } = \llbracket \Gamma _u,\vec \xi _u,\theta \rrbracket \) and \(\beta (x) \doteq \theta (x,u(x))\). Through the area formula, see for instance [5, Proposition 6.4], we have
where
is the area function. Notice that in the case \(n = m = 2\), \({\mathcal {A}}\) has the form (4.4). In particular, by the definition of the norm of an m-vector, we notice that
where we have used the notation of (A.5).
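As an illustration (our own check, for \(n = m = 2\), assuming the standard area element \({\mathcal {A}}(X) = \sqrt{\det ({\mathrm{Id}} + X^TX)}\)): the last identity says that \({\mathcal {A}}(X) = \Vert M^1(X)\wedge M^2(X)\Vert \), where \(M(X)\) is the \(4\times 2\) matrix of (A.14) whose columns span the tangent plane to the graph. Numerically:

```python
import numpy as np
from itertools import combinations

# Check ||M^1(X) ∧ M^2(X)||^2 = det(Id + X^T X) for M(X) = (Id_2, X)^T.
rng = np.random.default_rng(2)
X = rng.standard_normal((2, 2))
M = np.vstack([np.eye(2), X])   # columns span the tangent plane to the graph

# squared norm of the wedge of the columns = sum of squared 2x2 minors of M
norm_wedge_sq = sum(np.linalg.det(M[list(I), :])**2
                    for I in combinations(range(4), 2))
assert abs(norm_wedge_sq - np.linalg.det(np.eye(2) + X.T @ X)) < 1e-10
```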
1.5 A.5. Geometric functionals
Given a smooth and 1-homogeneous function \(\Psi : \Lambda _m^s({\mathbb {R}}^{n + m}) \rightarrow {\mathbb {R}}\), we can define the functional on rectifiable currents \(T = \llbracket E, \vec T, \theta \rrbracket \)
as done in (1.1). On varifolds, given a smooth integrand \(F: {\mathbb {G}}(m,n+m)\rightarrow {\mathbb {R}}\), the counterpart of the previous energy has the form
if \(V = \llbracket E, \theta \rrbracket \). In particular, any even integrand \(\Psi :\Lambda _m^s({\mathbb {R}}^{n + m}) \rightarrow {\mathbb {R}}\) as above allows us to define a functional on varifolds too. The minimal hypotheses that one requires on an integrand \(\Psi \) to get lower semicontinuity of the energy \(\Sigma _\Psi \), see [11, Section 5.1], are that \(\Psi \) takes positive values and satisfies Almgren’s ellipticity condition, i.e.
whenever \(\partial T = \partial Q\), Q has support contained in an m-dimensional subspace of \({\mathbb {R}}^{n + m}\) whose orienting m-vector is \(\vec \tau = v_1\wedge \dots \wedge v_m\) and the orientation of Q is given by \(\vec \tau \). We say that \(\Psi \) satisfies a uniform Almgren ellipticity condition if there exists \(\varepsilon > 0\) such that
We give now the definition of stationarity in the sense of currents (or varifolds). Fix an energy \(\Sigma _\Psi \) and let \({\mathcal {U}} \subset {\mathbb {R}}^{m +n }\) be open. Given any function \(g \in C^\infty _c({\mathcal {U}},{\mathbb {R}}^{n + m})\), we define the flow \(X_\varepsilon (x) \doteq \gamma _x(\varepsilon )\), where \(\gamma _x\) is the solution of the ODE
We define the variation of T with respect to the vector field \(g \in C^1_c({\mathcal {U}}; {\mathbb {R}}^{m + n})\) as
Finally, the current T is said to be stationary in \({\mathcal {U}}\) if \([\delta _\Psi T] (g)=0, \forall g \in C^1_c({\mathcal {U}}; {\mathbb {R}}^{m + n})\). With obvious modifications, this definition holds for varifolds as well.
A simple computation shows the following characterization of the first variation of a geometric functional
Lemma 5.7
Let \(T = \llbracket E,\vec \tau ,\theta \rrbracket \) with \(\vec \tau = \tau _1\wedge \dots \wedge \tau _m\) and \(\Vert \vec \tau (x)\Vert = 1\). For any \(g \in C_c^1({\mathcal {U}},{\mathbb {R}}^{m + n})\),
1.6 A.6. Functionals on graphs
It was shown in [5, Section 6] that from a functional defined on varifolds, one can define a functional on graphs, simply using the area formula. To do so, we introduced the map \(h:{\mathbb {R}}^{n\times m} \rightarrow {\mathbb {R}}^{(n + m)\times (n + m)}\) defined as
where
or, more explicitly,
This represents the orthogonal projection on the plane
and it parametrizes one chart of \({\mathbb {G}}(m,n+m)\). If F is an integrand as in (A.9), we can define
where \({\mathcal {A}}\) is the area element defined in (A.6). The following holds
for every \(V_{u,\theta } = \llbracket \Gamma _u,\theta \rrbracket \), where \(\beta (x) = \theta (x,u(x))\). In [5, Proposition 6.6], we proved the previous equality in the case \(\theta \equiv 1\), but the case with multiplicity holds with the same proof. One can do the same for functionals defined on currents in the following way. Let \(\Psi \in C^\infty (\Lambda _m^s({\mathbb {R}}^{n + m}))\) and associate, to \(X \in {\mathbb {R}}^{n\times m}\), the simple vector
where \(M^i(X)\) denotes the i-th column of the matrix M(X) defined in (A.14). If we define
the area formula once again yields the equality
for every \(T_{u,\theta } = \llbracket \Gamma _u,\vec \xi _u,\theta \rrbracket \), where \(\beta (x) = \theta (x,u(x))\).
Finally, let us discuss the link between stationarity for geometric objects and stationarity in the graph sense. We refer the interested reader to [5, Proposition 6.8] for a more precise statement in the case of multiplicity 1 graphs.
Proposition 5.8
Let \(F: {\mathbb {G}}(m,n+m) \rightarrow {\mathbb {R}}\) or \(\Psi : \Lambda _m^s({\mathbb {R}}^{n + m})\rightarrow {\mathbb {R}}\) be given and define f through formula (A.16) or (A.18), respectively. Let \(\Omega \) be a Lipschitz, bounded, open subset of \({\mathbb {R}}^m\) and \(\beta \in L^1(\Omega ,{\mathbb {R}}^+)\). A map \(u\in {{\,\mathrm{Lip}\,}}(\Omega ,{\mathbb {R}}^n)\) satisfies
if and only if the rectifiable varifold \(V_{u,\theta } = \llbracket \Gamma _u,\theta \rrbracket \) or the rectifiable current \(T_{u,\theta } = \llbracket \Gamma _u,\vec \xi _u,\theta \rrbracket \), where \(\theta (x,y) = \beta (x), \forall (x,y) \in {\mathbb {R}}^{m + n}\), are stationary with respect to \(\Sigma '_F\) or \(\Sigma _\Psi \), respectively.
Proof
Since the proof is essentially the same as that of [5, Proposition 6.8], we only sketch it. In [5, Proposition 6.8], only the varifold case was considered, hence we consider here the case of functionals defined on currents.
Step 1: Reduction to special vector fields.
Define, for any \(g\in C_c^1(\Omega \times {\mathbb {R}}^n,{\mathbb {R}}^{m + n})\), \(g = (g_1,\dots ,g_{n + m})\), two fields \(g^1 \doteq (g_1,\dots ,g_m,0,\dots ,0)\) and \(g^2 \doteq (0,\dots ,0,g_{m + 1},\dots , g_{n + m})\), so that \(g = g^1 + g^2\). From now on, consider g fixed. From Lemma 5.7, we see that the first variation \([\delta _\Psi T]\) (see the notation introduced in (A.13)) enjoys the following properties:
and
(A.20) is trivial, while to show (A.21), simply notice that if
then
By exploiting the explicit form of the first variation written in Lemma 5.7, (A.21) follows at once. From (A.21) we conclude that it suffices to consider the first variation of the current \(T_u\) for vector fields g of the form
for \(G \in C_c^1(\Omega \times {\mathbb {R}}^n,{\mathbb {R}}^{n + m})\) and \(\chi \in C^\infty _c({\mathbb {R}}^n)\) with \(\chi (y) \equiv 1\) on \(B_{2M}(0) \subset {\mathbb {R}}^n\) and \(\chi (y)\equiv 0\) outside \(B_{3M + 1}(0)\), where \(M \doteq \max _{x \in \pi ({{\,\mathrm{spt}\,}}(G))}\Vert u(x)\Vert \). Here \(\pi : {\mathbb {R}}^{m + n} \rightarrow {\mathbb {R}}^m\) denotes the projection \(\pi (x,y) \doteq x\) for all \((x,y) \in {\mathbb {R}}^{m + n}\), \(x \in {\mathbb {R}}^m\), \(y\in {\mathbb {R}}^n\).
Step 2: Inner variations.
We let \(X_\varepsilon \) be the flow generated by \(g^1\), for g as in (A.23). It is easy to see that
where \(Z_\varepsilon \) is the flow generated by the field \(x\mapsto G^1(x,u(x))\). Using this information, one readily checks that
Through formula (A.19), we see that
By taking the derivative at \(\varepsilon = 0\) of the previous expression, we get
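For orientation only, since the display (A.24) is omitted in this excerpt: in the graph formulation, writing \(\Phi(x) \doteq G^1(x,u(x))\) (so that \(Z_\varepsilon\) is the flow of \(\Phi\)) and changing variables via the area formula, the inner-variation derivative takes, at least formally, the classical form below. The notation \(E(\varepsilon)\) is introduced here purely for this sketch and need not match the paper's conventions:

```latex
% Inner variation in graph form (formal sketch; \Phi(x) = G^1(x,u(x))).
% The change of variables x = Z_\varepsilon(z) gives dx = \det DZ_\varepsilon(z)\,dz
% and D(u \circ Z_\varepsilon^{-1})(x) = Du(z)\,DZ_\varepsilon(z)^{-1}.
E(\varepsilon)
  \doteq \int_\Omega f\big(Du(z)\,DZ_\varepsilon(z)^{-1}\big)\,
          \beta(z)\,\det DZ_\varepsilon(z)\,dz,
\qquad
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0} E(\varepsilon)
  = \int_\Omega \Big[ f(Du)\,\operatorname{div}\Phi
      - \big\langle Df(Du),\, Du\,D\Phi \big\rangle \Big]\,\beta\,dz .
```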
Step 3: Outer variations.
Similarly to the case above, consider the flow \(Y_\varepsilon \) generated by \(g^2\), for g as in (A.23). Then, one checks that
and hence
By (A.19), we write
whose derivative at \(\varepsilon = 0\) yields
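Similarly, and again only as an illustrative sketch of the omitted display (A.25): the flow of the vertical field deforms the graph of \(u\) into the graph of \(u_\varepsilon(x) = u(x) + \varepsilon\,\varphi(x) + o(\varepsilon)\), where \(\varphi(x) \doteq G^2(x,u(x))\) is notation introduced only here, so the outer-variation derivative reduces formally to the classical weak Euler-Lagrange expression:

```latex
% Outer variation in graph form (formal sketch; \varphi(x) = G^2(x,u(x))).
\frac{d}{d\varepsilon}\Big|_{\varepsilon=0}
  \int_\Omega f\big(Du(x) + \varepsilon\,D\varphi(x)\big)\,\beta(x)\,dx
  = \int_\Omega \big\langle Df(Du(x)),\, D\varphi(x) \big\rangle\,\beta(x)\,dx .
```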
Now the Proposition follows at once from (A.20), (A.21), (A.24) and (A.25). \(\square \)
Cite this article
Hirsch, J., Tione, R.: On the constancy theorem for anisotropic energies through differential inclusions. Calc. Var. 60, 86 (2021). https://doi.org/10.1007/s00526-021-01981-z
Mathematics Subject Classification
 35B65
 49Q15
 49Q20