A-Variational Principles

Though the A-quasiconvexity condition has been fully explored since its introduction, no explicit examples of associated variational principles have been considered except in the classical curl case. Our aim is to propose such a family of problems in the div-curl situation, and to explore the corresponding A-polyconvexity condition as the main structural assumption ensuring weak lower semicontinuity and existence results.


Introduction
Vector variational problems for an integral functional of the form

E(u) = ∫_Ω W(x, u(x), ∇u(x)) dx

are of paramount importance in hyper-elasticity, where the internal energy density has to comply with a number of important conditions. In particular, the structural properties of W with respect to its gradient variable U are central to the existence of equilibrium states under typical boundary conditions. Another fundamental motivation to examine this family of variational problems is the study of non-linear systems of PDEs. As a matter of fact, this variational approach is the main method to show existence of weak solutions for such systems beyond the convex case. See [7] for a discussion of these topics.
In these classic problems, the presence of the gradient variable U = ∇u stands as a major feature to be understood, to the point that it determines the kind of structural assumption to be demanded of the dependence of W on U. This leads to the quasiconvexity condition in the sense of Morrey [24], a property that the community is still struggling to understand. This condition is precisely equivalent to the weak lower semicontinuity of the functional E(u) above over the usual Sobolev classes of functions.
The main source of quasiconvex functions which are not convex is the class of polyconvex integrands. These can be defined and determined in a very natural way once the collection of weakly continuous integrands has been identified. In known existence results for minimizers of the functional E under the usual boundary conditions in hyper-elasticity, the polyconvexity of W with respect to the variable U is always assumed, though it is known that the class of quasiconvex integrands is strictly larger than that of polyconvex densities.
The weak lower semicontinuity property has been treated in a much more general framework in which a constant-rank, linear partial differential operator of the form

Av = Σ_{i=1}^N A^(i) ∂v/∂x_i,  A^(i) ∈ R^{d×m},  (1.1)

is involved. The concept of A-quasiconvexity is then suitably introduced [10], and shown to be necessary and sufficient [16] for the weak lower semicontinuity of a functional of the form

I(v) = ∫_Ω W(x, v(x)) dx  (1.2)

under the differential constraint Av = 0, possibly in addition to suitable boundary restrictions. The importance of such an extension cannot be overstated, as it expands the analytical framework dramatically. For the particular case A = curl, we fall back to the classical gradient case. This new theory is by now very well understood, covering the most fundamental developments: Young measures and A-free measures, relaxation, homogenization, regularity, dynamics, etc. (see [1,3,8,12-15,17,18,20,21,23,28], among others); and yet explicit variational problems under more general differential constraints of the kind Av = 0 have not been systematically pursued, probably due to a lack of such examples of a certain relevance in Analysis or in applications. In the same vein, the natural and straightforward concept of A-polyconvexity has, as far as we can tell, not been explicitly examined (except recently in [18], and in a different form in [4]), again possibly because of a lack of examples where such a concept could go beyond plain convexity and be used in a fundamental way to show existence of solutions for such variational problems.
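Two standard instances of an operator of the form (1.1), both used below (the displays are our recollection of the classical definitions, not a quotation):

```latex
\operatorname{div} v \;=\; \sum_{i=1}^{N} \frac{\partial v_i}{\partial x_i} \quad (m = N,\ d = 1),
\qquad
(\operatorname{curl} v)_{ijk} \;=\; \frac{\partial v_{ij}}{\partial x_k} - \frac{\partial v_{ik}}{\partial x_j},
```

so that, on simply connected domains, curl v = 0 holds exactly when v = ∇u for some potential u.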
Our goal in this contribution is two-fold. On the one hand, we would like to focus on a particular example of a differential operator of the above kind, distinct from the gradient situation (but not far from it), provide and motivate some examples of variational problems like (1.2) of a certain interest, and resort to A-polyconvexity as the main hypothesis allowing existence of minimizers. On the other hand, given that the representation of polyconvexity in terms of compositions of convex and weakly continuous functions involves some ambiguity, we have pushed some standard ideas and explicit calculations to gain more insight into this issue.
It is well-known that the representation of ψ(x) in terms of Ψ(x, y) is non-unique. There is, however, a canonical representative, which is the largest such convex function Ψ. This is in fact standard (see for instance [11]). Consider the function

φ_A(x, y) = ψ(x) if y = A(x),  φ_A(x, y) = +∞ otherwise,

and its convexification Cφ_A(x, y), where C stands for convexification in the usual sense. Moreover, if ψ(x) = φ(x, A(x)) with φ convex, then φ(x, y) ≤ Cφ_A(x, y).

The conclusion in this proposition lets us define a canonical representative for A-polyconvex functions.
Though the definition of the convex hull Cφ_A(x, y) is pretty clear, it may be worthwhile to go through explicit calculations to check its explicit form in some distinguished examples like the ones below. Even in very simple situations, the calculations turn out to be quite demanding, but they may help to gain insight into the nature of A-polyconvexity. The following are the situations examined here:
1. First scalar case. Take A(x) = x^3, and find the canonical representative of ψ(x) = x^2.
2. Second scalar case. Take again A(x) = x^3, and find the canonical representative of ψ(x) = x^4 + x^3 as an A-polyconvex function.
3. Separately convex case. Take A(x, y) = xy, and check whether ψ(x, y) = x^2 + y^2 is its own canonical representative.
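All of these computations manipulate the same object: by Carathéodory's theorem, the convexification of φ_A is an infimum over finite convex combinations of points on the graph of A, with at most k + 1 points when the lifted variable lives in R^k (hence three points in the scalar cases above). In our notation (a paraphrase of the standard definition, not a quotation):

```latex
C\varphi_A(x,y) \;=\; \inf\Big\{ \sum_{i} \alpha_i\, \psi(x_i) \;:\; \alpha_i \ge 0,\ \ \sum_{i}\alpha_i = 1,\ \ \sum_{i}\alpha_i\, x_i = x,\ \ \sum_{i}\alpha_i\, A(x_i) = y \Big\}.
```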

A-Polyconvexity
Suppose a constant-rank operator A as in (1.1) is given.
Definition 4. A continuous, non-linear function a(v) : R^m → R is weakly continuous under the differential constraint Av = 0 if, for a suitable exponent p ≥ 1, a(v_j) ⇀ a(v) in the sense of distributions whenever v_j ⇀ v in L^p(Ω, R^m) with Av_j = 0.

Assume the components of the vector function A(v) : R^m → R^d are independent, weakly continuous functions under Av = 0 according to Definition 4. Note how this time the vector function A is precisely coming from a (partial) list of weakly continuous functions for the operator A as indicated in the above definition. It is known that every component of A is a certain polynomial [25].

Definition 5. Let the constant-rank operator A as in (1.1) be given. An integrand W(v) is said to be A-polyconvex if it can be written in the form W(v) = Ψ(A(v)), where A(v) : R^m → R^d is a collection of weakly continuous functions for A according to Definition 4, and Ψ : R^d → R is a convex function in the usual sense.
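As a point of reference (the classical situation, not a new claim): for A = curl with v = ∇u and u : Ω ⊂ R^3 → R^3, the weakly continuous functions are precisely the minors of the gradient, and Definition 5 reduces to Ball's classical polyconvexity:

```latex
W(\nabla u) \;=\; \Psi\big(\nabla u,\ \operatorname{adj} \nabla u,\ \det \nabla u\big), \qquad \Psi \ \text{convex on } \mathbb{R}^{3\times 3}\times\mathbb{R}^{3\times 3}\times\mathbb{R}.
```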

Consider the variational problem:

Minimize in v(x):  I(v) = ∫_Ω W(x, v(x)) dx

under suitable boundary conditions, for feasible v ∈ L^p(Ω, R^m) or associated potentials, where Ω ⊂ R^N is a bounded open set with Lipschitz boundary and W : Ω × R^m → R is a Carathéodory function, in addition to complying with the previous differential constraint. We take for granted that the boundary conditions imposed in the problem determine a set of competing fields that is weakly closed, so that weak limits preserve such boundary conditions. We will denote by L this class of feasible fields. The following existence result is but a natural generalization of important existence results in hyperelasticity [2].
Theorem 6. Suppose that W(x, ·) is A-polyconvex in the sense of Definition 5 for a.e. x ∈ Ω, and that suitable coercivity and growth conditions hold. Then the above variational problem admits minimizers v ∈ L.
The main example corresponds to A = curl. This is the case of the classic Calculus of Variations and it is very well studied. Another one is the div case, when complementary principles are utilized [6]; this particular case has recently been taken up again [9]. We would like to focus on, and motivate, the div-curl case. Though variational principles under the div-curl constraint have been considered sporadically [19], they have not been pursued systematically, as far as we can tell.
To be specific, we will stick to the framework in which Ω ⊂ R^N, and competing pairs of fields (v, ∇u) belong to suitable Sobolev spaces under the differential constraint div v = 0 in Ω. The family of integrands we would like to consider is built from a convex function Ψ(t) defined for t > 0, extended by Ψ = +∞ for t ≤ 0. The densities a(x) and b(x), and the exponents p, q, r, may vary in suitable ranges, and small coercive perturbation terms are typically added to ensure coercivity. More specifically, the following lemma is elementary.
The proof of this lemma is elementary: simply note that the functions in (1.3) are convex under the conditions just given.
In addition to proving Theorem 6, we will justify the form of the integrands in (1.4), show an existence theorem for minimizers (a corollary of Theorem 6), and present an easy argument to better understand the role of the small perturbations added to ensure coerciveness. Needless to say, the div-curl Lemma [26,30] is central to establishing some non-standard results in this context. It yields the weak continuity of the inner product v · ∇u precisely under the differential constraint div v = 0.
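The phenomenon behind the div-curl Lemma (if v_j ⇀ v in L^2 with div v_j = 0 and ∇u_j ⇀ ∇u in L^2, then v_j · ∇u_j → v · ∇u in the sense of distributions) can be illustrated numerically. In the sketch below (our own toy illustration, not from the original text), v_n oscillates transversally to ∇u_n, so the product averages to the product of the weak limits, while the quadratic quantity |v_n|^2, which is not weakly continuous, keeps a nontrivial average:

```python
import numpy as np

# Oscillating fields on the torus [0, 2*pi)^2:
#   v_n(x, y) = (cos(n*y), 0)   -> divergence-free, v_n -> 0 weakly
#   u_n(x, y) = sin(n*x)/n      -> grad u_n = (cos(n*x), 0) -> 0 weakly
# The div-curl Lemma predicts that averages of v_n . grad u_n tend to 0
# (the product of the weak limits), while |v_n|^2 keeps average 1/2.
N = 1024
t = np.arange(N) * 2 * np.pi / N          # periodic grid, endpoint excluded
X, Y = np.meshgrid(t, t)

for n in (4, 16, 64):
    v1 = np.cos(n * Y)                    # first component of v_n (second is 0)
    du1 = np.cos(n * X)                   # first component of grad u_n
    print(n, (v1 * du1).mean(), (v1**2).mean())
# averages of v_n . grad u_n stay at 0, while averages of |v_n|^2 stay at 1/2
```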

Examples for A-Polyconvexity
We cover in this section the calculations involved in the explicit examples listed in Sect. 1.1.
Example 8. In this first scalar case, we take ψ, A : R → R with A(x) = x^3 and ψ(x) = x^2. We can treat separately the case where only one of the α_i is non-zero, because it is trivial to conclude that we get x^2 for points (x, y) with y = x^3, and +∞ otherwise. Now, if at least two of the α_i are non-zero, the above equality can be written as an infimum over genuine convex combinations. As a preliminary step, we study the case where exactly one of the α_i is zero. Without loss of generality, we take α_3 = 0, and so, for each (x, y), we have to minimize over two distinct points x_1 ≠ x_2 (otherwise we fall again into a trivial case). From here, and supposing e.g. x_1 < x < x_2, one is led to a system of optimality conditions which has no solutions in its domain. The conclusion is that x^2 is the infimum, that it is attained precisely at the points where y = x^3 (with x_1 = x_2 = x), and that it cannot be attained when y ≠ x^3.
To prove this, take the case where y > x^3 (the other being analogous). It is easy to observe that the infimum cannot be smaller than x^2 because, by Jensen's inequality, any admissible convex combination satisfies Σ α_i x_i^2 ≥ (Σ α_i x_i)^2 = x^2. We will prove that the infimum is exactly x^2 and that it is obtained in the limit x_1 → x (and x_2 → +∞). To do this, suppose that (x, y) is fixed (but arbitrary), with y > x^3, and take three sequences of real numbers x_1^n → x, x_2^n, and weights α^n complying with the two constraints

α^n x_1^n + (1 − α^n) x_2^n = x,  α^n (x_1^n)^3 + (1 − α^n) (x_2^n)^3 = y.  (2.1)

From the second equation in (2.1), (1 − α^n)(x_2^n)^3 = y − α^n (x_1^n)^3. One must take some care while computing this limit, avoiding possible indeterminacies. But as y − α^n (x_1^n)^3 → y − x^3 > 0 in a monotone way, we conclude that x_2^n → +∞ and

(1 − α^n)(x_2^n)^2 = (y − α^n (x_1^n)^3)/x_2^n → 0,

so the value of the combinations converges to x^2. To finish the computations, one has to consider the case with 3 points. But given the solution obtained in the previous step, and again because the function x^2 is convex, the best possible value is indeed x^2, and it cannot be lowered by taking convex combinations of 2 or more points. From here, the conclusion is that, in this case, Cφ_A(x, y) = x^2 for every (x, y); that is, ψ is its own canonical representative as an A-polyconvex function.
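The limiting construction can be checked numerically. The following sketch (our own illustration, not part of the original computation) solves the two constraints α x_1 + (1 − α) x_2 = x and α x_1^3 + (1 − α) x_2^3 = y for (x_1, α) by fixed-point iteration, with the far point x_2 = n pushed to infinity, and watches the value of the combination decrease towards x^2:

```python
# Numerical check: for y > x**3, two-point convex combinations on the
# graph t -> (t, t**3) approach the value x**2 but never go below it.
def two_point_value(x, y, n, iters=200):
    """Solve a*x1 + (1-a2)*n = x and (1-a2)*n**3 + a*x1**3 = y for (x1, a2)
    with the second point fixed at x2 = n, by fixed-point iteration."""
    x1 = x
    for _ in range(iters):
        a2 = (y - x1**3) / (n**3 - x1**3)   # weight on the far point x2 = n
        x1 = (x - a2 * n) / (1.0 - a2)      # update x1 from the first constraint
    return (1.0 - a2) * x1**2 + a2 * n**2   # value of the combination

x, y = 1.0, 2.0                             # a point strictly above the graph: y > x**3
vals = [two_point_value(x, y, n) for n in (10.0, 100.0, 1000.0)]
print(vals)  # decreasing, approaching x**2 = 1 from above
```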
We consider again a scalar function.
Example 9. In this second scalar case, we take ψ, A : R → R with A(x) = x^3 and ψ(x) = x^4 + x^3, in which we suppose that at least two of the α_i are non-zero, to avoid trivialities. We will start again by dealing with the situation where one of the α_i is zero. For each (x, y), we have to minimize over pairs of points x_1, x_2 (notice that, as before, if x_2 = x_1 = x then y = x^3 and so the minimum is x^4 + x^3). From here, and supposing without loss of generality that x_1 < x < x_2, one is led to a system of optimality conditions, with γ = (1 − λ) in the last system. As this is a cubic equation in γ, it always has at least one real solution. Substituting it into the expressions for x_1, x_2, and then into the function that we are minimizing, leads to the minimum. Unfortunately it is not possible to obtain a closed expression for the solution in this case, but at least we can still compute Cφ_A(x, y) for each (x, y). Nevertheless, some interesting conclusions can be drawn: for fixed γ, equation (2.2) defines the line that passes through the points (x_1, x_1^3) and (x_2, x_2^3). This means that the minimum for any (x, y) that belongs to this segment will always be attained at the extreme points (x_1, x_1^3) and (x_2, x_2^3) and, furthermore, Cφ_A(x, y) will be a linear function along these segments, which in particular implies that we cannot have Cφ_A(x, y) = x^4 + y if the minimum is attained in the case where one of the α_i is zero.
In order to obtain Cφ_A(x, y), we must analyze the situation when α_1 α_2 α_3 ≠ 0. We followed here a different technique: instead of solving the corresponding system of optimality conditions, we consider two sub-cases, each of which will be reduced to the situation of the previous case. The idea of the reduction came to us after a couple of hundred numerical computations, in which we always obtained optimal solutions with only two distinct points instead of three.
Consequently the minimum is attained in the case where one of the α_i is zero, so we cannot have Cφ_A(x, y) = x^4 + y, and therefore ψ is not its own canonical representative. Nevertheless, we can compute Cφ_A(x, y) at each (x, y) by the algorithm described before.
Next we proceed with the separately convex case. Although it is not associated with a constant-rank operator, it is a well-known important example, introduced by Tartar [31] as a model to study rank-one convexity.
Example 10. For ψ, A : R^2 → R with A(x, y) = xy and ψ(x, y) = φ(x, y, xy) = x^2 + y^2, we proceed similarly. Again, we suppose that at least two of the α_i are non-zero, to avoid trivialities. Once again, in order to simplify the minimization problem, we start by dealing with the situation where two of the α_i are zero; we take α_3 = α_4 = 0, without loss of generality. For each (x, y, z), we are left to minimize over two-point convex combinations on the graph of A.
The optimality conditions can be solved explicitly. The first solution is always well defined for z ≤ xy and the second for z ≥ xy (we recall that, in particular, we can take y in the interior of the segment whose extreme points are y_1 and y_2; otherwise we interchange the roles of y_2 and x_2). Notice that one need not take more than two non-zero α_i in the minimization problem, because the function obtained by computing the minimum is already convex and lies below φ_A(x, y, z). As in the previous case, ψ(x, y) is not its own canonical representative.

Proposition 11. The canonical representative Cφ_A(x, y, z) of ψ(x, y) = x^2 + y^2 as a separately convex function of the two variables x, y is the convex function computed in the example above.

We now proceed to the typical example of polyconvexity in the R^{2×2} case; see e.g. [11].
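As a cross-check on the two-point reduction, the following numerical experiment (our own; the candidate formula Cφ_A(x, y, z) = x^2 + y^2 + 2|z − xy| is our reconstruction from the structure of the example, not a quotation) scans two-point convex combinations on the graph z = xy and compares the minimum with the candidate, which is convex since it equals max{(x − y)^2 + 2z, (x + y)^2 − 2z}:

```python
import numpy as np

def two_point_min(x, y, z):
    """Minimum of a*(x1^2+y1^2) + (1-a)*(x2^2+y2^2) over convex combinations
    of two points (x1, y1, x1*y1), (x2, y2, x2*y2) averaging to (x, y, z).
    Given (a, x1), the remaining unknowns are fixed by the constraints."""
    best = np.inf
    for a in np.linspace(0.05, 0.95, 19):
        for x1 in np.linspace(-4.0, 4.0, 161):
            x2 = (x - a * x1) / (1.0 - a)
            den = a * (x1 - x2)
            if abs(den) < 1e-9:
                continue
            y1 = (z - x2 * y) / den          # enforces a*x1*y1 + (1-a)*x2*y2 = z
            y2 = (y - a * y1) / (1.0 - a)
            val = a * (x1**2 + y1**2) + (1.0 - a) * (x2**2 + y2**2)
            best = min(best, val)
    return best

def candidate(x, y, z):
    # Our reconstructed formula: convex, equal to x^2 + y^2 on the graph z = x*y.
    return max((x - y)**2 + 2 * z, (x + y)**2 - 2 * z)

for (x, y, z) in [(0.0, 0.0, -1.0), (1.0, 2.0, 3.0)]:
    print(two_point_min(x, y, z), candidate(x, y, z))
```

The grid search never falls below the candidate (consistent with its convexity) and hits it exactly at combinations such as (2, 3) and (0, 1) with equal weights for the point (1, 2, 3).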
Example 12. In this fully vector example, we take ψ, A : R^{2×2} → R with A(ξ) = det ξ. We have (in the last equality, we suppose as usual that at least two of the α_i are non-zero) an infimum over convex combinations of points (ξ_i, det ξ_i). Once again, in order to simplify the minimization problem, we start by dealing with the simplest situation, which here is the case where four of the α_i are zero; we take α_3 = α_4 = α_5 = α_6 = 0, without loss of generality. For each (ξ, t) ∈ R^5, we have to minimize over two-point combinations.
The optimality conditions involve multipliers λ = (λ_1, λ_2, λ_3, λ_4), where ⟨a, b⟩ stands for the inner product of the vectors a, b ∈ R^4 and adj ξ_i is the matrix (taken as a vector in R^4) of all the 1 × 1 minors of the matrix ξ_i. In order to drop some of the indexes we will write, when necessary, ξ = (ξ_11, ξ_12, ξ_21, ξ_22) = (x, y, z, w). The first solution is well defined for t ≥ det ξ and the second for t ≤ det ξ (notice that ξ_1, ξ_2 ≠ ξ, so, interchanging the variables for which we solve the system of optimality conditions, one can argue without loss of generality). Computing the minimum, one gets a function which, as in the separately convex case, is already convex and lies below φ_A(ξ, t); hence one need not take more than two non-zero α_i in the minimization problem. And so the answer to the question of whether ψ is its own canonical representative is again negative.
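Mirroring the separately convex computation, the two-point reduction is consistent with the following closed form (our reconstruction, stated under the assumption ψ(ξ) = |ξ|^2; each branch of the maximum is convex because |ξ|^2 ∓ 2 det ξ is a sum of squares):

```latex
C\varphi_{\mathbb{A}}(\xi, t) \;=\; \max\big\{\, |\xi|^2 - 2\det\xi + 2t,\ \ |\xi|^2 + 2\det\xi - 2t \,\big\}
\;=\; |\xi|^2 + 2\,\big|\, t - \det\xi \,\big|,
```

using |ξ|^2 − 2 det ξ = (ξ_11 − ξ_22)^2 + (ξ_12 + ξ_21)^2 and |ξ|^2 + 2 det ξ = (ξ_11 + ξ_22)^2 + (ξ_12 − ξ_21)^2.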
This holds for all subsequences, so it follows for the sequence itself. We now prove the (sequential) weak lower semicontinuity of I. From (3.1), as we are supposing that W(x, ·) is A-polyconvex in the sense of Definition 5, and due to the upper bound assumed on A, the compositions A(v_i) converge weakly for some q > 1. By Mazur's lemma, we can find a sequence of convex combinations of the A(v_i) converging strongly. Put m = lim inf_i I(v_i). Taking a subsequence, we can assume that I(v_i) converges to m; taking lim inf on both sides of (3.3), we obtain the desired lower bound. On the other hand, by (3.2), and because I is strongly lower semicontinuous, I(v) ≤ m. In particular, the boundary conditions are ensured by the weak continuity of the trace, and consequently v is a minimizer.
As indicated in the Introduction, Theorem 6 is a natural extension of important results in classical settings. Our motivation was to look for new, interesting explicit examples in non-standard situations. We have been inspired by some variational approaches to the classic Calderón problem [5] in inverse 3D-conductivity [22,27], in which one aims at v = γ∇u for a certain unknown conductivity coefficient γ(x). We intend to recover such an unknown coefficient γ by minimizing the functional corresponding to the integrand in (3.4), taking Ψ = +∞ whenever t ≤ 0. We plan to examine this approach more specifically in the future. An existence theorem, as a corollary of Theorem 6, can then be proved based on div-curl-polyconvexity. It can be very easily adapted to cover more general situations for integrands of the form (1.4).

Proof. Consider an arbitrary, but fixed, ε > 0. W_ε is A-polyconvex and satisfies the required growth condition. If (v_j, ∇u_j, v_j · ∇u_j) is a minimizing sequence then, possibly for a subsequence, coercivity entails weak convergence of the pairs (v_j, ∇u_j). From (3.6) and (3.7), remembering that div v_j = div v = 0, the div-curl Lemma [26,30] gives v_j · ∇u_j → v · ∇u in the sense of distributions; in fact, in L^2(Ω, R) because of (3.8). Though elementary to check, it is relevant to stress that the function in (3.4) is convex in the usual sense, and so we are entitled to apply Theorem 6 (or rather its proof) with w = (v, ∇u), A(w) = A(v, ∇u) = v · ∇u, p = 2, to conclude that (v, u) ∈ L^+ is a minimizer.
There is nothing to prevent us from proving a similar existence theorem for a family of integrands as in (1.3), based on Lemma 7.
It is finally interesting to stress that the effect of the small perturbation added in (3.5) to ensure coercivity is quite innocent, in the following sense.

Proposition 16. Let {(v_ε, u_ε)}_{ε>0} be a family where, for each ε > 0, (v_ε, u_ε) is a minimizer of I_ε, that is, I_ε(v_ε, u_ε) ≤ I_ε(v, u) for all (v, u) ∈ L^+. If we put I_0(v, u) = I_ε(v, u)|_{ε=0}, then {(v_ε, u_ε)}_{ε>0} is minimizing for I_0.
Proof. On the one hand, I_0(v_ε, u_ε) ≤ I_ε(v_ε, u_ε), since the perturbation term is non-negative; on the other hand, I_ε(v_ε, u_ε) ≤ I_ε(v, u) for every (v, u) ∈ L^+, by minimality. In particular, I_0(v_ε, u_ε) ≤ I_ε(v, u). Taking limits on both sides of this last inequality leads to lim inf_{ε→0} I_0(v_ε, u_ε) ≤ I_0(v, u) for all (v, u) ∈ L^+, from which it follows that {(v_ε, u_ε)}_{ε>0} is a minimizing sequence for I_0, due to the arbitrariness of (v, u) ∈ L^+.