The Maximization of the First Eigenvalue for a Two-Phase Material

We study the optimal arrangement of two conductive materials so as to maximize the first eigenvalue of the corresponding diffusion operator with Dirichlet conditions. The amount of the most conductive material is assumed to be limited. Since this type of problem has no solution in general, we work with a relaxed formulation. We show that the problem has good concavity properties which allow us to obtain some uniqueness results. By relating it to the minimization of the energy for a two-phase material studied in [4], we also obtain some smoothness results. As a consequence, we show that the unrelaxed problem never has a solution. The paper is completed with some numerical results corresponding to the convergence of the gradient algorithm.


Introduction
Juan Casado-Díaz (jcasadod@us.es), Dpto. de Ecuaciones Diferenciales y Análisis Numérico, Universidad de Sevilla, Sevilla, Spain

For two conductive materials (electric or thermal), represented by their diffusion constants 0 < α < β, we are interested in finding the mixture which maximizes the first eigenvalue of the corresponding diffusion operator with Dirichlet conditions. Namely, for a fixed bounded open set Ω ⊂ R^N, N ≥ 2, we look for a measurable subset ω ⊂ Ω maximizing

λ₁(ω) := min_{u ∈ H^1_0(Ω)\{0}} ( ∫_Ω (αχ_ω + βχ_{Ω\ω}) |∇u|² dx ) / ( ∫_Ω |u|² dx ).   (1.1)

With this formulation the solution is the trivial one given by ω = ∅: it consists in taking at every point the material with the highest diffusion coefficient. The problem becomes interesting when the amount of such material is limited, i.e. when we add the constraint

|ω| ≥ κ,   (1.2)

for some κ ∈ (0, |Ω|). From the point of view of applications, the maximization of the first eigenvalue is a criterion to determine the best two-phase conductive material (associated with Dirichlet conditions). To clarify this statement we can consider the parabolic problem

∂_t u − div((αχ_ω + βχ_{Ω\ω})∇u) = 0 in Ω × (0, ∞),  u = 0 on ∂Ω × (0, ∞),  u(·, 0) = u₀ in Ω.

Due to the diffusion process, the solution u (electric potential or temperature) tends to zero as t tends to infinity. Thus, we can consider that one material is a better diffuser than another if the solution converges to zero faster. At this point we recall the estimate (λ₁ being the first eigenvalue of the elliptic operator)

‖u(·, t)‖_{L²(Ω)} ≤ e^{−λ₁ t} ‖u₀‖_{L²(Ω)},  ∀ t > 0,

which is optimal in the sense that it is attained when u₀ is an associated eigenfunction. Therefore, one way of understanding that the material is a good conductor is that λ₁ is large. The opposite problem, corresponding to obtaining the worst conductive material (i.e. the best insulating one), has been considered in several papers such as [2,5,6,11,12] and [21] (see also [8] for the p-Laplacian operator). It consists in finding ω minimizing (1.1), when (1.2) is replaced by |ω| ≤ κ.
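To make the Rayleigh quotient (1.1) concrete, here is a minimal one-dimensional finite-difference sketch. It is purely our own illustration (the function names and the discretization are assumptions, not part of the paper): it computes λ₁ for a two-phase bar on (0, 1) and checks that replacing part of the good conductor β by α can only lower λ₁.

```python
import numpy as np

def first_eigenvalue(a_mid, h):
    """Smallest Dirichlet eigenvalue of u -> -(a u')' on (0,1).
    a_mid holds the conductivity at the n+1 cell midpoints; the
    finite-difference scheme uses the n interior nodes."""
    n = len(a_mid) - 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a_mid[i] + a_mid[i + 1]) / h**2
        if i + 1 < n:
            A[i, i + 1] = -a_mid[i + 1] / h**2
            A[i + 1, i] = -a_mid[i + 1] / h**2
    return np.linalg.eigvalsh(A)[0]   # eigenvalues come back sorted

n = 200
h = 1.0 / (n + 1)
alpha, beta = 1.0, 2.0

# all-beta bar vs a mixture placing alpha on the middle third
a_beta = np.full(n + 1, beta)
a_mix = np.full(n + 1, beta)
a_mix[n // 3 : 2 * n // 3] = alpha    # low conductor in the middle

lam_beta = first_eigenvalue(a_beta, h)
lam_mix = first_eigenvalue(a_mix, h)
```

With α = 1 and β = 2, the all-β bar gives λ₁ ≈ 2π², and by monotonicity of (1.1) in the coefficient any admissible mixture stays between απ² and βπ².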
Assuming Ω connected, with connected boundary, it has been proved in [6] (see [5,8] and [25] for related results) that the problem has a solution if and only if Ω is a ball.
The non-existence of solutions for optimal design problems is classical ([22,23]). Because of this, it is usual to work with a relaxed formulation. This can be achieved using homogenization theory ([1,24-26]), which describes the set of materials that can be obtained by mixing some elementary composites at a microscopic level. The resulting materials depend not only on the proportions of the composites but also on their arrangement. In the case of minimizing the first eigenvalue, the most elementary estimates in H-convergence (see e.g. [25], Proposition 9) prove that the relaxed formulation consists in replacing αχ_ω + βχ_{Ω\ω} by the harmonic mean of α and β with proportions θ and 1 − θ respectively, θ ∈ (0, 1), i.e. by

( θ/α + (1 − θ)/β )^{−1}.

This homogenized material is obtained by laminates of α and β in the direction of the flow. In the case of the maximization considered here, it consists in replacing αχ_ω + βχ_{Ω\ω} by the arithmetic mean, i.e. by

θα + (1 − θ)β.

It is also obtained as a laminate of α and β, but now in a direction orthogonal to the flow. Taking c = (β − α)/β, so that θα + (1 − θ)β = β(1 − cθ), we are then interested in the problem

max_{θ ∈ L^∞(Ω;[0,1]), ∫_Ω θ dx ≥ κ}  min_{u ∈ H^1_0(Ω)\{0}} ( ∫_Ω (1 − cθ)|∇u|² dx ) / ( ∫_Ω |u|² dx ).   (1.3)

We show that the eigenvalue functional is concave in L^∞(Ω;[0,1]) and that the minimization in u becomes a convex problem after a change of variables. This allows us to write the max-min problem as a min-max problem. If (θ̂, û) is a saddle point, we prove that û is unique (taking it positive, with unit norm in L²(Ω)), and that θ̂ belongs to a convex set of functions which satisfy

θ̂ = 1 a.e. in {|∇û| < μ̂},  θ̂ = 0 a.e. in {|∇û| > μ̂},   (1.4)

for a certain μ̂ > 0, which depends on û but not on θ̂. Thus, θ̂ is unique outside the set {|∇û| = μ̂}. The optimality conditions connect (1.3) with an energy minimization problem (1.5) for a two-phase material with a right-hand side f. For an arbitrary f ∈ H^{−1}(Ω), this is a problem which has been studied in [4]; see also [3] for a related problem where θ ∈ L^∞(Ω; (−∞, 0]).
Applying the results of [4] to our problem, we deduce that Ω ∈ C^{1,1} implies that the solutions (θ̂, û) of (1.3) satisfy û ∈ H²(Ω). As a consequence of this result we show that the unrelaxed problem (1.1) has no solution for any bounded open set Ω ∈ C^{1,1}, and therefore that the set {|∇û| = μ̂}, with μ̂ satisfying (1.4), always has positive measure. The connection between (1.3) and (1.5) is worth comparing with the results in [5] (see also [8] and [10]). In these papers it has been proved that the minimization of the first eigenvalue can also be formulated as (1.6). However, in our case we do not know whether an equivalence result of this type holds. For an arbitrary f ∈ H^{−1}(Ω), problem P_f has also been considered by several authors ([1,5,7,17,25]), especially for f = 1, where it applies for example to the optimal rearrangement of two materials in the cross section of a beam in order to minimize the torsion. Taking into account the concavity of the eigenvalue functional, we finish the paper by showing the convergence of the gradient method applied to (1.3). We also estimate the rate of convergence (see [9] for related results), and we provide some numerical examples corresponding to a circle and a square respectively.
Some results about the relaxation and the optimality conditions for the minimization or maximization of an arbitrary eigenvalue for a two-phase material have also been obtained in [14].

Theoretical Results for the Maximization of the Eigenvalue
For a bounded open set Ω ⊂ R^N, N ≥ 2, and constants 0 < α < β, 0 < κ < |Ω|, we look for a measurable set ω ⊂ Ω, with |ω| ≥ κ, which maximizes the first eigenvalue of the operator

u ∈ H^1_0(Ω) ↦ −div((αχ_ω + β(1 − χ_ω))∇u) ∈ H^{−1}(Ω),

i.e. we are interested in the optimal design problem

max_{ω ⊂ Ω, |ω| ≥ κ} λ₁(ω).   (2.1)

Since this type of problem has no solution in general ([22,23]), we work with a relaxed formulation. Using homogenization theory ([1,25,26]), it is well known that it is given by

max_{θ ∈ L^∞(Ω;[0,1]), ∫_Ω θ dx ≥ κ}  min_{u ∈ H^1_0(Ω)\{0}} ( ∫_Ω (αθ + β(1 − θ))|∇u|² dx ) / ( ∫_Ω |u|² dx ).   (2.2)

It consists in replacing the mixture αχ_ω + βχ_{ω^c} by a more general one, obtained as a laminate of both materials in a direction orthogonal to ∇u, with proportions θ and 1 − θ. Dividing by β and introducing

c = (β − α)/β ∈ (0, 1),   (2.3)

we can also write (2.2) as

max_{θ ∈ L^∞(Ω;[0,1]), ∫_Ω θ dx ≥ κ}  min_{u ∈ H^1_0(Ω)\{0}} ( ∫_Ω (1 − cθ)|∇u|² dx ) / ( ∫_Ω |u|² dx ).   (2.4)

Moreover, we can restrict ourselves to the set of functions u which are non-negative and have unit norm in L²(Ω). The following theorem shows the uniqueness of the optimal state function û and provides an equivalent formulation.

Theorem 2.1 The max-min problem (2.4) agrees with the corresponding min-max problem. Moreover, if θ̂ is a solution of the left-hand side problem and û is the solution of the state equation (2.6),
then û is the unique solution of the right-hand side problem, and θ̂ is a solution of (2.7). Related to the above result, we also have that θ̂ is a solution of (2.4) if and only if, defining û as the unique solution of (2.6), θ̂ satisfies (2.7).

Remark 2.4
If θ̂ is a solution of (2.4) and û the solution of (2.6), then û satisfies

−div((1 − cθ̂)∇û) = λ̂û in Ω,  û ∈ H^1_0(Ω),

with λ̂ the maximum value in (2.4). By Theorem 2.1, we also have that θ̂ is a solution of (2.7). This proves that (θ̂, û) is a saddle point of the problem, and then of the optimal design problem studied in [4] with f = λ̂û. Applying the smoothness results of that paper, we get Theorem 2.5 below.
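In this notation, the saddle-point optimality system can be sketched as follows. This is our reconstruction, consistent with the bang-bang condition (2.17); the exact displays of the paper may differ:

```latex
% State equation and normalization:
-\operatorname{div}\bigl((1-c\hat\theta)\nabla\hat u\bigr) = \hat\lambda\,\hat u
 \ \text{in }\Omega, \qquad
\hat u \in H^1_0(\Omega),\quad \hat u > 0,\quad \|\hat u\|_{L^2(\Omega)} = 1.
% Optimality of \hat\theta: for \hat u fixed, \hat\theta maximizes the linear map
\theta \;\mapsto\; \int_\Omega (1-c\theta)\,|\nabla\hat u|^2\,dx
 \quad\text{over } \theta \in L^\infty(\Omega;[0,1]),\ \int_\Omega \theta\,dx \ge \kappa,
% which forces a bang-bang structure off a level set of |\nabla\hat u|:
\hat\theta = 1 \ \text{a.e. in } \{|\nabla\hat u| < \hat\mu\},\qquad
\hat\theta = 0 \ \text{a.e. in } \{|\nabla\hat u| > \hat\mu\},
```

where μ̂ > 0 plays the role of a Lagrange multiplier for the volume constraint: the poor conductor is placed where the eigenfunction gradient is smallest.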

Remark 2.6
By the strong maximum principle, we know that û is strictly positive in Ω. Using (2.13) and û ∈ H²(Ω), we also have ∇û ≠ 0 a.e. in Ω. Taking into account (2.7), this implies that for every solution θ̂ of (2.4) there exists μ̂ > 0 such that

θ̂ = 1 a.e. in {|∇û| < μ̂},  θ̂ = 0 a.e. in {|∇û| > μ̂}.   (2.17)

Since μ̂ can be chosen depending only on û, and û is unique, we get that μ̂ can be chosen independently of θ̂, and then, by (2.17), that all the solutions of (2.4) take the same values outside the set {|∇û| = μ̂}. By (2.13), these solutions also satisfy the first-order linear PDE

c div(θ̂∇û) = Δû + λ̂û in Ω.   (2.18)

However, û ∈ H²(Ω) is not enough to conclude that (2.17), (2.18) and the uniqueness of û imply that θ̂ is unique. In any case, Theorem 2.2 proves that the set of solutions of (2.4) is a convex set.
For the problem of minimizing the first eigenvalue, it has been proved in [6] (see [5,7,8,25] for related results) that, assuming Ω ∈ C^{1,1} connected, with connected boundary, the unrelaxed problem has a solution if and only if Ω is a ball. In the case of the maximization of the first eigenvalue, the following theorem shows that an unrelaxed solution never exists. In particular, this shows that μ̂ in (2.17) is such that the set {|∇û| = μ̂} always has positive measure.

Theorem 2.7
If Ω is a C^{1,1} domain in R^N, then problem (2.1) has no solution.

Proof of the Theoretical Results
In this section we prove the results stated in the previous one concerning the properties of the solutions of problem (2.4).

Proof of Theorem 2.1 Taking into account that the minimization in u can be restricted to non-negative functions with unit norm in L²(Ω), we introduce the change of variables z = u². It transforms the problem into (3.2). Let us prove that, for every θ ∈ L^∞(Ω;[0,1]), the functional

z ≥ 0, √z ∈ H^1_0(Ω)  ↦  ∫_Ω (1 − cθ)|∇√z|² dx

is convex. Moreover, it is strictly convex on the set

{ z : z > 0 a.e. in Ω, √z ∈ H^1_0(Ω), ∫_Ω z dx = 1 }.   (3.4)

For this purpose, we take z₁, z₂ ≥ 0, with √z₁, √z₂ ∈ H^1_0(Ω), and r ∈ (0, 1). Writing ∇√(rz₁ + (1 − r)z₂) as a convex combination and using the convexity of the function ξ ∈ R^N ↦ |ξ|², we have

|∇√(rz₁ + (1 − r)z₂)|² ≤ r|∇√z₁|² + (1 − r)|∇√z₂|² a.e. in Ω.   (3.5)

This proves the convexity. In order to prove the strict convexity on the set defined by (3.4), we return to (3.5). Assuming that z₁, z₂ are strictly positive a.e. in Ω, and using that the function ξ ∈ R^N ↦ |ξ|² is strictly convex, we deduce that (3.5) is an equality if and only if ∇log z₁ = ∇log z₂ a.e. in Ω. This proves that log(z₁/z₂) is constant in Ω, and then that there exists a constant C > 0 such that z₁ = Cz₂ a.e. in Ω. Since the integrals of z₁ and z₂ take the same value, we must have C = 1. Thus the functional is strictly convex.
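The pointwise inequality used above can be written out explicitly. The following display is our sketch of the standard computation, with z_r := r z₁ + (1 − r) z₂:

```latex
% Since \nabla z_i = 2\sqrt{z_i}\,\nabla\sqrt{z_i}, we may decompose
\nabla\sqrt{z_r}
 = \frac{r\,\nabla z_1 + (1-r)\,\nabla z_2}{2\sqrt{z_r}}
 = w_1\,\xi_1 + w_2\,\xi_2,
\qquad
w_i \ \text{as below},\quad
\xi_i := \frac{\sqrt{z_r}}{\sqrt{z_i}}\,\nabla\sqrt{z_i},
\qquad
w_1 := \frac{r\,z_1}{z_r},\quad w_2 := \frac{(1-r)\,z_2}{z_r}.
% Since w_1 + w_2 = 1, the convexity of \xi \mapsto |\xi|^2 gives
\bigl|\nabla\sqrt{z_r}\bigr|^2
 \le w_1|\xi_1|^2 + w_2|\xi_2|^2
 = r\,\bigl|\nabla\sqrt{z_1}\bigr|^2 + (1-r)\,\bigl|\nabla\sqrt{z_2}\bigr|^2,
% with equality a.e. exactly when \xi_1 = \xi_2, i.e.
% \nabla\log z_1 = \nabla\log z_2.
```

Note that w₁|ξ₁|² = (r z₁/z_r)(z_r/z₁)|∇√z₁|² = r|∇√z₁|², which is where the weights cancel; the equality case is what yields z₁ = C z₂ in the strict-convexity argument.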
The convexity of the functional allows us to apply the von Neumann min-max theorem (see e.g. [27], Chapter 2.13) to (3.2). Moreover, if θ̂ is a solution of the left-hand side problem and ẑ is the solution of the corresponding inner minimization problem, then ẑ is a solution of the right-hand side problem. Returning to the variables (θ, u), this proves the thesis of Theorem 2.1, except for the uniqueness of û. For this purpose, we recall that, by the strong maximum principle, if û is a solution of (2.6) then it is positive in Ω, and therefore ẑ := û² is also positive. As a consequence, ẑ belongs to the set defined by (3.6). Since the functional is strictly convex on this set, we also have that the function

z ↦ max_θ ∫_Ω (1 − cθ)|∇√z|² dx

is strictly convex, as a maximum of strictly convex functions. Thus ẑ, and then û, is unique.

A Numerical Approximation
In this section, taking into account Theorem 2.2, we prove the convergence of the gradient method applied to (2.4), together with an estimate of the error. The corresponding algorithm reads as follows. We fix δ > 0.
• Take θ₀ ∈ L^∞(Ω;[0,1]) such that ∫_Ω θ₀ dx ≥ κ.
• For n ≥ 0, compute θ_{n+1} from θ_n by a gradient step of size at most δ over the admissible set.
Our main result is given by the following convergence theorem.

Theorem 4.2 The sequence {λ_n} converges to the maximum value λ̂ of (2.4) (estimate (4.6)), every weak-* limit of {θ_n} is a solution of (2.4), and the sequence {u_n} satisfies the error estimate (4.7), with û defined by Theorem 2.1 and C > 0 depending only on c and ‖u₀‖_{H^1_0(Ω)}. The first and second assertions of the theorem will follow from the following general convergence result for the gradient descent method. It applies to a convex (but not necessarily strictly convex) functional with bounded second derivative. It is strongly related to the classical convergence result for the gradient method with a fixed step. However, the rate of convergence is worse due to the lack of ellipticity.
Then, for every k₀ ∈ K and every δ ∈ (0, 1/M], the sequence {k_n} defined by (4.10), with k̄_n the solution of the auxiliary problem (4.11), converges in value: F(k_n) decreases to min_K F, with a rate whose constant C only depends on F(k₀) and M. Moreover, every k̂ ∈ K which is the weak limit of a subsequence of {k_n} is a minimum point for F.
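For orientation, the classical fixed-step argument behind results of this type can be sketched as follows. This is our reconstruction of the standard estimates, not the paper's exact displays:

```latex
% One-step descent: for k_{n+1} = k_n + \varepsilon(\bar k_n - k_n),
% \varepsilon \in [0,1], the bound \|F''\| \le M gives
F(k_{n+1}) \le F(k_n) + \varepsilon\,\langle F'(k_n),\,\bar k_n - k_n\rangle
              + \frac{M\varepsilon^2}{2}\,\|\bar k_n - k_n\|^2 .
% Convexity gives the comparison with a minimum point \hat k:
F(\hat k) \ge F(k_n) + \langle F'(k_n),\,\hat k - k_n\rangle .
% Combining the two and telescoping typically yields a sublinear rate
F(k_n) - \min_K F \le \frac{C}{n}, \qquad C = C\bigl(F(k_0),\,M\bigr),
```

slower than the linear rate available for elliptic (strongly convex) functionals, which is the degradation alluded to above.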

Remark 4.4
The assumption that F is Gâteaux differentiable in C can be relaxed to the following: for every k ∈ C, there exists F′(k) ∈ X′ satisfying the corresponding first-order inequality. We also observe that this property, combined with the convexity of F, implies that F is lower semicontinuous in K, and then that F has a minimum in K.

Proof of Theorem 4.3
Thanks to (4.8), for every ε ∈ [0, 1] we have the corresponding descent inequality. Defining k̄_n by (4.10), ε_n by (4.11), and δ_n accordingly, we then obtain the one-step estimate. Now, we take k̂ ∈ C a minimum point for F. Using the convexity of F and the definition (4.10) of k̄_n, we obtain the comparison estimate. On the other hand, the convexity and lower semicontinuity of F imply that F is sequentially lower semicontinuous for the weak topology. Therefore, for every subsequence of {k_n}, still denoted by {k_n}, which converges weakly to some k̂ ∈ K, we have F(k̂) ≤ lim inf_n F(k_n) = min_K F, and then k̂ is a minimum point of F in K.
In order to prove Theorem 4.2, we also need the following two lemmas.

Lemma 4.5

There exists γ > 0 such that λ₂(θ) − λ₁(θ) ≥ γ for every θ ∈ L^∞(Ω;[0,1]), where λ₁(θ), λ₂(θ) denote the first and second eigenvalues of −div((1 − cθ)∇·) with Dirichlet conditions.

Proof Reasoning by contradiction, we assume that there exists a sequence θ_n ∈ L^∞(Ω;[0,1]) such that λ₂(θ_n) − λ₁(θ_n) goes to zero. Extracting a subsequence if necessary (see e.g. [24,26]), we can assume that there exists A ∈ L^∞(Ω)^{N×N} symmetric, with

(1 − c)|ξ|² ≤ A(x)ξ·ξ ≤ |ξ|² for a.e. x ∈ Ω and all ξ ∈ R^N,

such that (1 − cθ_n)I converges to A in the sense of H-convergence in Ω ([1,24,26]). Denoting by λ₁, λ₂ the first and second eigenvalues of the operator −div(A∇·) with Dirichlet conditions, this implies λ₁(θ_n) → λ₁ and λ₂(θ_n) → λ₂. Since λ₂ − λ₁ > 0, this contradicts the fact that λ₂(θ_n) − λ₁(θ_n) tends to zero.

Lemma 4.6
Assume Ω is a bounded open set. Then there exist three constants R, M, γ > 0 such that, for every θ, ϑ ∈ L^∞(Ω;[0,1]), u defined by (2.12) and λ defined by (2.11) satisfy the estimates (4.17) and (4.18). Proof We define λ₁ and λ₂ as the first and second eigenvalues of −div((1 − cθ)∇·) with Dirichlet conditions, and u₁ as the unique positive eigenfunction with unit norm in L²(Ω) corresponding to λ₁. Since the component of u orthogonal to u₁ in L²(Ω) can be estimated through the spectral gap, we obtain the corresponding bound. Using then u₁ as a test function in (2.12), we have (4.20). Therefore (4.21) follows. Using here Lemma 4.5, the Cauchy-Schwarz inequality, and the Poincaré bound λ₁*‖v‖²_{L²(Ω)} ≤ ‖∇v‖²_{L²(Ω)}, with λ₁* the first eigenvalue of −Δ in Ω with Dirichlet conditions, we conclude the existence of R > 0 such that the first assertion in (4.18) holds. From this inequality, (4.17), (4.20), and (2.11), we easily conclude (4.18).

Proof of Theorem 4.2
Taking into account Lemma 4.6 and Remark 4.4, we have that (4.6) and the fact that every weak-* limit θ̂ solves (2.4) are an immediate consequence of Theorem 4.3 applied to our setting. It remains to prove (4.7). Given θ̂ a solution of (2.4), and using the definition (2.8), we have the decomposition of the error, where, thanks to θ̂ being a solution of (2.4), the corresponding term has a sign. Using then λ(θ̂) − λ(θ_n) = λ̂ − λ_n and the second inequality in (4.18), we get the desired estimate, with u the solution of (2.12) for θ = θ̂ and ϑ = θ_n. Now, we use that (4.22) and (4.18) imply the resulting bound, where C only depends on u₀ and c. This proves (4.7).

Numerical Examples
In this last section, we present some numerical experiments corresponding to the algorithm described in Section 4. Our first example refers to the case where Ω is a ball in R^N. In this case, as a simple application of the uniqueness of the optimal state function given by Theorem 2.1, we can easily show that the solutions are radial. The corresponding result is given in Proposition 5.1 below. In the case of the minimization of the first eigenvalue, a similar result has been obtained in [2] (see [11,12,21] for related results); there, the result is more delicate to prove than in the present paper due to the lack of uniqueness. We also recall that, for the minimization problem, the corresponding solutions θ̂ are unrelaxed, i.e. they are characteristic functions.

Proposition 5.1 Assume Ω is a ball centered at the origin. Then the optimal state function û is radial.

Proof Throughout the proof, we denote by B_R the ball of center 0 and radius R, and by |A|_{N−1} the (N−1)-dimensional Hausdorff measure of a subset A ⊂ R^N.
Assume θ̂ is a solution of (2.4) and û the solution of (2.6). Then, thanks to the symmetry properties of the unit ball, θ̃(x) := θ̂(Px) is also a solution of (2.4) for every orthogonal matrix P. Moreover, the solution of (2.6) relative to θ̃ is given by ũ(x) = û(Px). Since ũ = û by Theorem 2.1, we get û(Px) = û(x) for every orthogonal matrix P. This proves that û is a radial function.
In Figs. 1, 2 and 3, we represent the optimal proportion θ of the least conductive material α for the unit circle, c = 0.5 and κ = p|Ω|, with p = 0.2, p = 0.5 and p = 0.8 respectively. Since we know that the solutions are radial, we have applied the algorithm of Section 4 directly to the corresponding one-dimensional problem.
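A one-dimensional computation of this type can be sketched as follows. This is purely illustrative, not the paper's exact scheme: our own finite-difference discretization and projected gradient ascent with a fixed step, posed on an interval with two Dirichlet ends rather than on the radial problem with its r-weight; all names and parameter choices are assumptions.

```python
import numpy as np

def stiffness(theta_mid, c, h):
    """Finite-difference matrix of u -> -((1-c*theta) u')' on (0,1)
    with u(0)=u(1)=0; theta_mid holds theta at the n+1 cell midpoints."""
    a = 1.0 - c * theta_mid
    n = len(a) - 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (a[i] + a[i + 1]) / h**2
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -a[i + 1] / h**2
    return A

def project(theta, p):
    """Project onto {theta in [0,1] : mean(theta) = p}; the volume
    constraint is active since lambda_1 decreases in theta."""
    lo, hi = -1.0, 1.0
    for _ in range(60):                       # bisection on a uniform shift t
        t = 0.5 * (lo + hi)
        if np.clip(theta + t, 0.0, 1.0).mean() > p:
            hi = t
        else:
            lo = t
    return np.clip(theta + t, 0.0, 1.0)

c, p, n = 0.5, 0.5, 100
h = 1.0 / (n + 1)
theta = np.full(n + 1, p)                     # feasible uniform start
for _ in range(200):
    lam, U = np.linalg.eigh(stiffness(theta, c, h))
    u = U[:, 0] / (np.sqrt(h) * np.linalg.norm(U[:, 0]))   # unit L2 norm
    ue = np.concatenate(([0.0], u, [0.0]))                 # Dirichlet values
    grad = -c * ((ue[1:] - ue[:-1]) / h) ** 2 * h  # d lambda_1 / d theta_j
    theta = project(theta + 0.5 * grad, p)         # ascent step + projection

lam_opt = np.linalg.eigvalsh(stiffness(theta, c, h))[0]
lam_unif = np.linalg.eigvalsh(stiffness(np.full(n + 1, p), c, h))[0]
```

In this sketch the computed θ places the poorly conducting phase where |û′| is small, i.e. near the center of the interval, and the good conductor near the boundary, in agreement with the optimality condition (2.17); the optimized arrangement beats the uniform mixture θ ≡ p.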
In Figs. 4, 5 and 6 we represent the optimal proportion θ of the least conductive material for the unit square, c = 0.5 and κ = p|Ω|, with p as above. As expected, the solution has the symmetries of the square.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.