1 Introduction

For two conductive materials (electrical or thermal), represented by their diffusion constants \(0<\alpha <\beta \), we are interested in finding the mixture which maximizes the first eigenvalue of the corresponding diffusion operator with Dirichlet conditions. Namely, for a fixed bounded open set \(\Omega \subset {\mathbb {R}}^N\), \(N\ge 2\), we look for a measurable subset \(\omega \subset \Omega \) maximizing

$$\begin{aligned} \lambda _1(\omega ):=\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\not =0 \end{array}} {\int _\Omega (\alpha \chi _\omega +\beta \chi _{\Omega \setminus \omega })|\nabla u|^2dx\over \int _\Omega |u|^2dx}.\end{aligned}$$
(1.1)

With this formulation the solution is the trivial one given by \(\omega =\emptyset \). It consists in taking at every point the material with the highest diffusion coefficient. The problem becomes interesting when the amount of such material is limited, i.e. when we add the constraint

$$\begin{aligned} |\omega |\ge \kappa ,\end{aligned}$$
(1.2)

for some \(\kappa \in \big (0,|\Omega |\big )\). From the point of view of applications, the maximization of the first eigenvalue is a criterion to determine the best two-phase conductive material (associated with Dirichlet conditions). To clarify this statement we can consider the parabolic problem

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle \partial _t u-\mathrm{div}\big ((\alpha \chi _\omega +\beta \chi _{\Omega \setminus \omega })\nabla _xu\big ) =0 \ \hbox { in }(0,\infty )\times \Omega \\ \displaystyle u=0\ \hbox { on }(0,\infty )\times \partial \Omega ,\quad u_{|t=0}=u_0\ \hbox { in }\Omega .\end{array}\right. \end{aligned}$$

Due to the diffusion process, the solution u (electric potential or temperature) tends to zero as t tends to infinity. Thus, we can consider that one material is a better diffuser than another if the solution converges to zero faster. At this point we recall the estimate (with \(\lambda _1\) the first eigenvalue of the elliptic operator)

$$\begin{aligned} \Vert u(t,.)\Vert ^2_{L^2(\Omega )}\le e^{-2\lambda _1 t}\Vert u_0\Vert ^2_{L^2(\Omega )},\quad \forall \, t>0, \end{aligned}$$

which is optimal in the sense that it becomes an equality when \(u_0\) is an associated eigenfunction. Therefore, one way of understanding that the material is a good conductor is that \(\lambda _1\) is large.
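
As a quick numerical illustration of this estimate, the following sketch (a one-dimensional finite-difference discretization; the mesh, the piecewise coefficient and the sample times are arbitrary choices made here, not data from the paper) checks that the discrete heat semigroup decays at least at the rate \(e^{-\lambda _1 t}\) in the \(L^2\) norm.

```python
# A minimal 1D finite-difference check of the decay estimate above; the mesh,
# the piecewise coefficient and the sample times are arbitrary illustrative choices.
import numpy as np
from scipy.linalg import eigh, expm

n, h = 200, 1.0 / 201                 # interior nodes of (0,1), mesh size
alpha, beta = 1.0, 2.0
a = np.where(np.arange(n + 1) < (n + 1) // 2, alpha, beta)   # coefficient on the n+1 edges

# stiffness matrix of -d/dx(a du/dx) with homogeneous Dirichlet conditions
A = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
lam1 = eigh(A, eigvals_only=True)[0]  # first (smallest) eigenvalue

u0 = np.random.rand(n)                # arbitrary initial datum
for t in (0.1, 0.5, 1.0):
    u_t = expm(-t * A) @ u0           # discrete heat semigroup applied to u0
    assert np.linalg.norm(u_t) <= np.exp(-lam1 * t) * np.linalg.norm(u0)
```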

The opposite problem, which corresponds to obtaining the worst conductive material (i.e. the best insulating one), has been considered in several papers such as [2, 5, 6, 11, 12], and [21] (see also [8] for the p-Laplacian operator). It consists in finding \(\omega \) minimizing (1.1), when (1.2) is replaced by \(|\omega |\le \kappa .\) Assuming \(\Omega \) connected, with connected boundary, it has been proved in [6] (see [5, 8] and [25] for related results) that the problem has a solution if and only if \(\Omega \) is a ball.

The non-existence of solutions for optimal design problems is classical ([22, 23]). Because of that, it is usual to work with a relaxed formulation. This can be achieved using homogenization theory ([1, 24, 25, 26]), which describes the set of materials that can be obtained by mixing some elementary composites at a microscopic level. The resulting materials depend not only on the proportion of the composites but also on their arrangement. In the case of minimizing the first eigenvalue, the most elementary estimates in H-convergence (see e.g. [25], Proposition 9) prove that such a relaxed formulation consists in replacing \(\alpha \chi _\omega +\beta \chi _{\Omega \setminus \omega }\) by the harmonic mean of \(\alpha \) and \(\beta \) with proportions \(\theta \) and \(1-\theta \) respectively, \(\theta \in (0,1)\), i.e. by

$$\begin{aligned} \Big ({\theta \over \alpha }+{1-\theta \over \beta }\Big )^{-1}. \end{aligned}$$

This homogenized material is obtained by laminating \(\alpha \) and \(\beta \) in the direction of the flow. In the case of the maximization considered here, the relaxed formulation consists in replacing \(\alpha \chi _\omega +\beta \chi _{\Omega \setminus \omega }\) by the arithmetic mean, i.e. by

$$\begin{aligned} \alpha \theta +\beta (1-\theta ). \end{aligned}$$

It is also obtained as a laminate of \(\alpha \) and \(\beta \), but now in a direction orthogonal to the flow.
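
For concreteness, the two effective coefficients can be compared numerically; the values of \(\alpha \), \(\beta \) and \(\theta \) below are arbitrary and only serve to illustrate that the harmonic mean never exceeds the arithmetic one.

```python
# Effective coefficients of the two extreme laminates for sample values of
# alpha, beta, theta (arbitrary illustrative data).
alpha, beta, theta = 1.0, 4.0, 0.3
harmonic = 1.0 / (theta / alpha + (1.0 - theta) / beta)   # lamination along the flow
arithmetic = alpha * theta + beta * (1.0 - theta)         # lamination across the flow
print(harmonic, arithmetic)   # 2.105... <= 3.1: the arithmetic mean is the larger one
```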

Taking \(c=(\beta -\alpha )/\beta \), we are then interested in the problem

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\ge \kappa \end{array}}\Lambda (\theta )\quad \hbox {with}\quad \Lambda (\theta ):= \min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\not =0 \end{array}} {\int _\Omega (1-c\theta )|\nabla u|^2dx\over \int _\Omega |u|^2dx}.\end{aligned}$$
(1.3)

We show that \(\Lambda \) is concave in \(L^\infty (\Omega ;[0,1])\) and that the minimization in u becomes a convex problem after a change of variables. This allows us to write the max-min problem as a min-max problem. If \(({\hat{\theta }},{\hat{u}})\) is a saddle point we prove that \({\hat{u}}\) is unique (taking it positive, with unit norm in \(L^2(\Omega )\)), and \({\hat{\theta }}\) belongs to a convex set of functions which satisfy

$$\begin{aligned} \hat{\theta }=\left\{ \begin{array}{ll}\displaystyle 0&{}\hbox { if }|\nabla \hat{u}|> \hat{\mu }\\ 1 &{}\hbox { if }|\nabla \hat{u}|<\hat{\mu }, \end{array}\right. \qquad \int _\Omega \hat{\theta }\,dx=\kappa , \end{aligned}$$
(1.4)

for a certain \({\hat{\mu }}>0\), which depends on \({\hat{u}}\) but not on \({\hat{\theta }}\). Thus, \({\hat{\theta }}\) is unique outside the set \(\{|\nabla {\hat{u}}|={\hat{\mu }}\}.\)

The optimality conditions

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c{\hat{\theta }})\nabla {\hat{u}}\big )={\hat{\lambda }} {\hat{u}}\ \hbox { in }\Omega \\ \displaystyle \int _\Omega {\hat{\theta }}|\nabla {\hat{u}}|^2dx=\min _{\begin{array}{c} \vartheta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \vartheta dx\ge \kappa \end{array}}\int _\Omega \vartheta |\nabla {\hat{u}}|^2dx \\ \displaystyle {\hat{u}}\in H^1_0(\Omega ),\ {\hat{u}}> 0\ \hbox { in }\Omega ,\ \int _\Omega |{\hat{u}}|^2dx=1,\quad {\hat{\theta }}\in L^\infty (\Omega ;[0,1]),\ \int _\Omega {\hat{\theta }}\,dx=\kappa ,\end{array}\right. \end{aligned}$$

are necessary and sufficient. Because of that, it is simple to check that every solution \(({\hat{\theta }},{\hat{u}})\) of (1.3) is also a solution of

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\ge \kappa \end{array}}\min _{u\in H^1_0(\Omega )} \left\{ {1\over 2}\int _\Omega (1-c\theta )|\nabla u|^2dx-\int _\Omega fu\,dx\right\} ,\end{aligned}$$
(1.5)

with \(f={\hat{\lambda }} {\hat{u}}\). For an arbitrary \(f\in H^{-1}(\Omega )\), this is a problem which has been studied in [4]; see also [3] for a related problem where \(\theta \in L^\infty (\Omega ;(-\infty ,0])\). Applying the results in [4] to our problem, we deduce that \(\Omega \in C^{1,1}\) implies that the solutions \(({\hat{\theta }},{\hat{u}})\) of (1.3) satisfy

$$\begin{aligned} {\hat{u}}\in H^2(\Omega )\cap W^{1,\infty }(\Omega ),\qquad \nabla {\hat{\theta }}\cdot {\nabla } {\hat{u}}\in L^2(\Omega ). \end{aligned}$$

As a consequence of this result we show that the unrelaxed problem (1.1) has no solution for any bounded open set \(\Omega \in C^{1,1}\), and therefore that the set \(\big \{|\nabla {\hat{u}}|={\hat{\mu }}\big \}\), with \({\hat{\mu }}\) satisfying (1.4), always has positive measure.

The connection between (1.3) and (1.5) is worth comparing with the results in [5] (see also [8] and [10]). In these papers, it has been proved that the minimization of the first eigenvalue can also be formulated as

$$\begin{aligned} \min _{\begin{array}{c} f\in L^2(\Omega )\\ \Vert f\Vert _{L^2(\Omega )}\le 1 \end{array}}{\mathcal {P}}_f,\quad {\mathcal {P}}_f:=\min _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\le \kappa \end{array}}\min _{u\in H^1_0(\Omega )} \left\{ {1\over 2}\int _\Omega {|\nabla u|^2\over 1+d\theta } dx-\int _\Omega fu\,dx\right\} ,\quad d={\beta -\alpha \over \alpha }.\nonumber \\ \end{aligned}$$
(1.6)

However, in our case we do not know if an equivalence result of this type holds. For an arbitrary \(f\in H^{-1}(\Omega )\), problem \(\mathcal{P}_f\) has also been considered by several authors ([1, 5, 7, 17, 25]), especially for \(f=1\), where it applies for example to the optimal rearrangement of two materials in the cross section of a beam in order to minimize the torsion.

Taking into account the concavity of the functional \(\Lambda \), we finish the paper by showing the convergence of the gradient method applied to (1.3). We also estimate the rate of convergence (see [9] for related results), and we provide some numerical examples corresponding to \(\Omega \) a circle and a square, respectively.

Some results about the relaxation and the optimality conditions to the minimization or maximization of an arbitrary eigenvalue for a two-phase material have also been obtained in [14].

2 Theoretical Results for the Maximization of the Eigenvalue

For a bounded open set \(\Omega \subset {\mathbb {R}}^N\), \(N\ge 2\), and three positive constants \(0<\alpha <\beta \), \(0<\kappa <|\Omega |\), we look for a measurable set \(\omega \subset \Omega \), with \(|\omega |\ge \kappa \), which maximizes the first eigenvalue of the operator \(u\in H^1_0(\Omega )\mapsto -\mathrm{div}\big (\big (\alpha \chi _\omega +\beta (1-\chi _{\omega })\big )\nabla u\big )\in H^{-1}(\Omega )\), i.e. we are interested in the optimal design problem

$$\begin{aligned} \max _{\begin{array}{c} \omega \subset \Omega \\ |\omega |\ge \kappa \end{array}}\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\not =0 \end{array}}{\int _\Omega \big (\alpha \chi _\omega +\beta (1-\chi _{\omega })\big )|\nabla u|^2dx\over \int _\Omega |u|^2dx}.\end{aligned}$$
(2.1)

Since this type of problem has no solution in general ([22, 23]), we work with a relaxed formulation. Using homogenization theory ([1, 25, 26]), it is well known that it is given by

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\not =0 \end{array}}{\int _\Omega \big (\alpha \theta +\beta (1-\theta )\big )|\nabla u|^2dx\over \int _\Omega |u|^2dx}.\end{aligned}$$
(2.2)

It consists in replacing the mixture \(\alpha \chi _\omega +\beta \chi _{\omega ^c}\) by a more general one obtained as a laminate of both materials in a direction orthogonal to \(\nabla u\), with proportions \(\theta \) and \(1-\theta \). Dividing by \(\beta \) and introducing

$$\begin{aligned} c=1-{\alpha \over \beta }\in (0,1),\end{aligned}$$
(2.3)

we can also write (2.2) as

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\not =0 \end{array}}{\int _\Omega \big (1-c\theta \big )|\nabla u|^2dx\over \int _\Omega |u|^2dx}.\end{aligned}$$
(2.4)

Moreover, we can restrict ourselves to the set of functions u which are non-negative and have unit norm in \(L^2(\Omega )\). The following theorem shows the uniqueness of the optimal state function \({\hat{u}}\) and provides an equivalent formulation.

Theorem 2.1

We have

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\ge 0,\int _\Omega |u|^2dx=1 \end{array}}\int _\Omega (1-c\theta )|\nabla u|^2dx=\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\ge 0,\int _\Omega |u|^2dx=1 \end{array}}\max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\ge \kappa \end{array}}\int _\Omega (1-c\theta )|\nabla u|^2dx.\nonumber \\ \end{aligned}$$
(2.5)

Moreover, if \({\hat{\theta }}\) is a solution of the left-hand side problem and \({\hat{u}}\) is the solution of

$$\begin{aligned} \min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\ge 0,\int _\Omega |u|^2dx=1 \end{array}}\int _\Omega (1-c{\hat{\theta }})|\nabla u|^2dx,\end{aligned}$$
(2.6)

then \({\hat{u}}\) is the unique solution of the right-hand side problem and \({\hat{\theta }}\) is a solution of

$$\begin{aligned} \min _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \theta dx\ge \kappa \end{array}}\int _\Omega \theta |\nabla {\hat{u}}|^2dx.\end{aligned}$$
(2.7)

Related to the above result, we also have

Theorem 2.2

The function \(\Lambda :L^\infty (\Omega ;[0,1])\rightarrow {\mathbb {R}}\) defined by

$$\begin{aligned} \Lambda (\theta )=\hbox { first eigenvalue of }-\mathrm{div}\big ((1-c\theta )\nabla \big )\hbox { with Dirichlet conditions,}\end{aligned}$$
(2.8)

is concave. Further, for every \(\theta ,\vartheta \in L^\infty (\Omega ;[0,1])\), we have

$$\begin{aligned} \lambda ':={d\Lambda \over d\varepsilon }\big (\theta +\varepsilon (\vartheta -\theta )\big )_{|\varepsilon =0}=c\int _\Omega (\theta -\vartheta )|\nabla u|^2dx,\end{aligned}$$
(2.9)

with u the solution of

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta )\nabla u\big )=\Lambda (\theta ) u\ \hbox { in }\Omega \\ \displaystyle u=0\ \hbox { on }\partial \Omega ,\quad u>0\hbox { in }\Omega ,\quad \int _\Omega |u|^2dx=1,\end{array}\right. \end{aligned}$$
(2.10)

and

$$\begin{aligned} \lambda '':={d^2\Lambda \over d\varepsilon ^2}\big (\theta +\varepsilon (\vartheta -\theta )\big )_{|\varepsilon =0}=2\int _\Omega |u'|^2dx\left( \Lambda (\theta )-{\int _\Omega (1-c\theta )|\nabla u'|^2dx\over \int _\Omega |u'|^2dx}\right) \le 0,\end{aligned}$$
(2.11)

with \(u'\) the solution of

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta )\nabla u'-c(\vartheta -\theta )\nabla u\big )=\Lambda (\theta ) u'+\lambda 'u\ \hbox { in }\Omega \\ \displaystyle u'=0\ \hbox { on }\partial \Omega ,\quad \int _\Omega uu'\,dx=0.\end{array}\right. \end{aligned}$$
(2.12)

Corollary 2.3

A function \({\hat{\theta }}\in L^\infty (\Omega ;[0,1])\) is a solution of (2.4) if and only if, defining \({\hat{u}}\) as the unique solution of (2.6), \({\hat{\theta }}\) satisfies (2.7).

Remark 2.4

If \({\hat{\theta }}\) is a solution of (2.4) and \({\hat{u}}\) the solution of (2.6), then \({\hat{u}}\) satisfies

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c{\hat{\theta }})\nabla {\hat{u}}\big )={\hat{\lambda }} {\hat{u}}\ \hbox { in }\Omega \\ \displaystyle {\hat{u}}=0\ \hbox { on }\partial \Omega ,\end{array}\right. \qquad \end{aligned}$$
(2.13)

with \({\hat{\lambda }}\) the maximum value in (2.4). By Theorem 2.1, we also have that \({\hat{\theta }}\) is a solution of (2.7). This proves that \(({\hat{\theta }},{\hat{u}})\) is a saddle point of the problem

$$\begin{aligned} \begin{array}{c}\displaystyle \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{u\in H^1_0(\Omega )}\left\{ {1\over 2}\int _\Omega (1-c\theta )|\nabla u|^2dx-\hat{\lambda }\int _\Omega {\hat{u}}\, u\,dx\right\} \\ \displaystyle =\min _{u\in H^1_0(\Omega )}\max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\left\{ {1\over 2}\int _\Omega (1-c\theta )|\nabla u|^2dx-\hat{\lambda }\int _\Omega {\hat{u}}\, u\,dx\right\} ,\end{array}\end{aligned}$$
(2.14)

and then of the optimal design problem studied in [4] with \(f={\hat{\lambda }} {\hat{u}}\). Applying the smoothness results of that paper, we get Theorem 2.5 below.

A related problem to the one in [4] has been studied in [3], where the restriction \(\theta \in L^\infty (\Omega ;[0,1])\) is replaced by \(\theta \in L^\infty (\Omega ;(-\infty ,0])\). In this case, it is proved that the optimal state function u is still in \(W^{1,\infty }(\Omega )\).

Theorem 2.5

Assume \(\Omega \in C^{1,1}\). For \({\hat{\theta }}\) a solution of (2.4) and \({\hat{u}}\) the solution of (2.6), we have

$$\begin{aligned} {\hat{u}}\in H^2(\Omega )\cap W^{1,\infty }(\Omega ),\qquad \nabla {\hat{\theta }}\cdot \nabla {\hat{u}}\in L^2(\Omega ).\end{aligned}$$
(2.15)

If \({\hat{\theta }}\) is an unrelaxed solution of (2.4), i.e. \({\hat{\theta }}=\chi _{{\hat{\omega }}}\) for some measurable subset \({\hat{\omega }}\subset \Omega \), then

$$\begin{aligned} \nabla \hat{\theta }\cdot \nabla \hat{u}=0\ \hbox { in }\Omega ,\qquad -\Delta \hat{u}=\hat{\lambda }\Big (1 +{c\over 1-c}\chi _{\hat{\omega }}\Big )\hat{u}\ \hbox { in }\Omega . \end{aligned}$$
(2.16)

Remark 2.6

By the strong maximum principle, we know that \({\hat{u}}\) is strictly positive in \(\Omega \). Using (2.13) and \({\hat{u}}\in H^2(\Omega )\), we also have \(\nabla {\hat{u}}\not =0\) a.e. in \(\Omega \). Taking into account (2.7), this implies that for every solution \({\hat{\theta }}\) of (2.4), there exists \({\hat{\mu }}>0\) such that

$$\begin{aligned} {\hat{\theta }}=\left\{ \begin{array}{ll}\displaystyle 0&{}\hbox { if }|\nabla {\hat{u}}|> {\hat{\mu }} \\ \displaystyle 1 &{}\hbox { if }|\nabla {\hat{u}}|<{\hat{\mu }}, \end{array}\right. \qquad \int _\Omega {\hat{\theta }}dx=\kappa .\end{aligned}$$
(2.17)

Since \({\hat{\mu }}\) can be chosen as

$$\begin{aligned} {\hat{\mu }}=\sup \Big \{s\ge 0:\ \big |\big \{|\nabla {\hat{u}}|\ge s\big \}\big |\ge \kappa \Big \}, \end{aligned}$$

and \({\hat{u}}\) is unique, we get that \({\hat{\mu }}\) can be chosen independently of \({\hat{\theta }}\) and then by (2.17) that all the solutions of (2.4) take the same value on the set \(\{|\nabla {\hat{u}}|\not ={\hat{\mu }}\}\). By (2.13), these solutions also satisfy the first order linear PDE

$$\begin{aligned} c\nabla {\hat{\theta }}\cdot \nabla {\hat{u}}-(1-c{\hat{\theta }})\Delta {\hat{u}}={\hat{\lambda }} {\hat{u}}\ \hbox { in }\Omega .\end{aligned}$$
(2.18)

However, \({\hat{u}}\in H^2(\Omega )\) is not enough to conclude that (2.17), (2.18) and the uniqueness of \({\hat{u}}\) imply the uniqueness of \({\hat{\theta }}\). In any case, Theorem 2.2 proves that the set of solutions of (2.4) is a convex set.
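
On a discretization, the value \({\hat{\mu }}\) given by the above formula is simply a volume-weighted quantile of \(|\nabla {\hat{u}}|\). A minimal sketch, with made-up gradient values and cell volumes (illustrative assumptions, not data from the paper), could read as follows.

```python
# Sketch of mu_hat = sup{ s >= 0 : |{|grad u| >= s}| >= kappa } on a discrete
# gradient field; the field g and the cell volumes are fake illustrative data.
import numpy as np

def threshold(g, vol, kappa):
    """Largest s such that the cells with |grad u| >= s have total volume >= kappa."""
    order = np.argsort(g)[::-1]             # |grad u| in decreasing order
    cumvol = np.cumsum(vol[order])          # volume of {|grad u| >= g[order[k]]}
    k = np.searchsorted(cumvol, kappa)      # first index where that volume reaches kappa
    return g[order[min(k, len(g) - 1)]]

rng = np.random.default_rng(0)
g = rng.random(1000)                        # fake |grad u| values on 1000 cells
vol = np.full(1000, 1e-3)                   # uniform cell volumes, |Omega| = 1
mu = threshold(g, vol, kappa=0.25)
print(mu, vol[g >= mu].sum())               # the volume above mu is at least kappa
# theta_hat is then 1 where g < mu and 0 where g > mu, cf. (2.17)
```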

For the problem of minimizing the first eigenvalue, it has been proved in [6] (see [5, 7, 8, 25] for related results) that, assuming \(\Omega \in C^{1,1}\) connected, with connected boundary, the unrelaxed problem has a solution if and only if \(\Omega \) is a ball. In the case of the maximization of the first eigenvalue, the following theorem shows that an unrelaxed solution never exists. In particular, this shows that \({\hat{\mu }}\) in (2.17) is such that \(\big |\{|\nabla {\hat{u}}|={\hat{\mu }}\}\big |\) is always positive.

Theorem 2.7

If \(\Omega \) is a \(C^{1,1}\) domain in \({\mathbb {R}}^N\), then problem (2.1) has no solution.

3 Proof of the Theoretical Results

We prove in this section the results stated in the previous one concerning the properties of the solutions of problem (2.4).

Proof of Theorem 2.1

Taking into account that

$$\begin{aligned} \int _\Omega (1-c\theta )\big |\nabla |u|\big |^2dx= \int _\Omega (1-c\theta )|\nabla u|^2dx,\quad \forall \, u\in H^1_0(\Omega ), \end{aligned}$$

we deduce that \(\theta \) is a solution of (2.4) if and only if it is a solution of

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} u\in H^1_0(\Omega )\\ u\ge 0,\int _\Omega |u|^2dx=1 \end{array}}\int _\Omega \big (1-c\theta \big )|\nabla u|^2dx.\end{aligned}$$
(3.1)

Here we introduce the change of variables

$$\begin{aligned} z=u^2\iff u=\sqrt{z}. \end{aligned}$$

It transforms the problem into

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z\ge 0, \int _\Omega zdx=1 \end{array}} \int _{\{z>0\}}(1-c\theta ){|\nabla z|^2\over z}\,dx.\end{aligned}$$
(3.2)

Let us prove that for every \(\theta \in L^\infty (\Omega ;[0,1])\), the functional \(\Phi \) defined by

$$\begin{aligned} \Phi (z):=\int _{\{z>0\}}(1-c\theta ){|\nabla z|^2\over z}\,dx,\quad \forall \,z\ge 0,\ \sqrt{z}\in H^1_0(\Omega ),\end{aligned}$$
(3.3)

is convex. Moreover, it is strictly convex on the set

$$\begin{aligned} \left\{ \sqrt{z}\in H^1_0(\Omega ):\ z>0\ \hbox { a.e. in }\Omega ,\ \int _\Omega zdx=1\right\} .\end{aligned}$$
(3.4)

For this purpose, we take \(z_1,z_2\ge 0,\) with \(\sqrt{z_1},\sqrt{z_2}\in H^1_0(\Omega )\) and \(r\in (0,1)\). Then, using the convexity of the function \(\xi \in {\mathbb {R}}^N\mapsto |\xi |^2\), we have

$$\begin{aligned} \begin{array}{l}\displaystyle \int _{\{rz_1+(1-r)z_2>0\}}(1-c\theta ){|r\nabla z_1+(1-r)\nabla z_2|^2\over rz_1+(1-r)z_2}\,dx\\ \displaystyle = r\int _{\{z_1>0,z_2=0\}}(1-c\theta ){|\nabla z_1|^2\over z_1}\,dx+(1-r)\int _{\{z_1=0,z_2>0\}}(1-c\theta ){|\nabla z_2|^2\over z_2}\,dx\\ \displaystyle +\int _{\{z_1,z_2>0\}}(1-c\theta )(rz_1+(1-r)z_2)\left| {rz_1\over rz_1+(1-r)z_2}{\nabla z_1\over z_1}+ {(1-r)z_2\over rz_1+(1-r)z_2}{\nabla z_2\over z_2}\right| ^2dx\\ \displaystyle \le r\int _{\{z_1>0,z_2=0\}}(1-c\theta ){|\nabla z_1|^2\over z_1}\,dx+(1-r)\int _{\{z_1=0,z_2>0\}}(1-c\theta ){|\nabla z_2|^2\over z_2}\,dx\\ \displaystyle +\int _{\{z_1,z_2>0\}}(1-c\theta )\left( r{|\nabla z_1|^2\over z_1}+(1-r){|\nabla z_2|^2\over z_2}\right) dx \\ \displaystyle = r\int _{\{z_1>0\}}(1-c\theta ){|\nabla z_1|^2\over z_1}\,dx+(1-r)\int _{\{z_2>0\}}(1-c\theta ){|\nabla z_2|^2\over z_2}\,dx,\end{array}\end{aligned}$$
(3.5)

This proves the convexity of \(\Phi \). In order to prove the strict convexity on the set defined by (3.4), we return to (3.5). Assuming that \(z_1,z_2\) are strictly positive a.e. in \(\Omega \), and using that the function \(\xi \in {\mathbb {R}}^N\mapsto |\xi |^2\) is strictly convex, we deduce that (3.5) is an equality if and only if

$$\begin{aligned} {\nabla z_1\over z_1}={\nabla z_2\over z_2}\ \hbox { a.e. in }\Omega \iff \nabla \log z_1= \nabla \log z_2\ \hbox { a.e. in }\Omega , \end{aligned}$$

This proves that \(\log (z_1/z_2)\) is constant in \(\Omega \) and then that there exists a positive constant \(C>0\) such that \(z_1=Cz_2\) a.e. in \(\Omega \). Since \(z_1\) and \(z_2\) have the same integral, we must have \(C=1\). Thus \(\Phi \) is strictly convex.

The convexity of \(\Phi \) allows us to apply the von Neumann min-max theorem (see e.g. [27], chapter 2.13) to (3.2) to deduce

$$\begin{aligned} \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z\ge 0, \int _\Omega zdx=1 \end{array}} \int _{\{z>0\}}\big (1-c\theta ){|\nabla z|^2\over z}dx=\min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z\ge 0, \int _\Omega zdx=1 \end{array}}\max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\int _{\{z>0\}} \big (1-c\theta ){|\nabla z|^2\over z}dx,\nonumber \\\ \end{aligned}$$
(3.6)

Moreover, if \({\tilde{\theta }}\) is a solution of the left-hand side problem and \({\tilde{z}}\) is the solution of

$$\begin{aligned} \min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z\ge 0, \int _\Omega zdx=1 \end{array}} \int _{\{z>0\}}\big (1-c{\tilde{\theta }}){|\nabla z|^2\over z}dx, \end{aligned}$$

then \({\tilde{z}}\) is a solution of the right-hand side problem. Returning to the variables \((\theta ,u)\), this proves the assertions of Theorem 2.1, except the uniqueness of \({\hat{u}}\). For this purpose, we recall that by the strong maximum principle, if \({\hat{u}}\) is a solution of (2.6) then it is positive in \(\Omega \) and therefore \({\hat{z}}:={\hat{u}}^2\) is also positive. As a consequence we have

$$\begin{aligned} \min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z\ge 0, \int _\Omega zdx=1 \end{array}}\max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\int _{\{z>0\}} \big (1-c\theta ){|\nabla z|^2\over z}dx=\min _{\begin{array}{c} \sqrt{z}\in H^1_0(\Omega )\\ z> 0, \int _\Omega zdx=1 \end{array}}\max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\int _\Omega \big (1-c\theta ){|\nabla z|^2\over z}dx.\nonumber \\ \end{aligned}$$

Since \(\Phi \) is strictly convex on the set defined by (3.4), we also have that the function

$$\begin{aligned} \left\{ z>0,\ \sqrt{z}\in H^1_0(\Omega ), \ \int _\Omega z\,dx=1\right\} \mapsto \max _{\begin{array}{c} \theta \in L^\infty (\Omega ;[0,1])\\ \int \theta dx\ge \kappa \end{array}}\int _\Omega \big (1-c\theta ){|\nabla z|^2\over z}dx, \end{aligned}$$

is strictly convex as a maximum of strictly convex functions. Thus \({\hat{z}}\), and then \({\hat{u}}\), is unique. \(\square \)

Proof of Theorem 2.2

Assume \(\theta ,\vartheta \in L^\infty (\Omega ;[0,1])\). We take \(\delta >0\) such that

$$\begin{aligned} 1-c\theta -c\varepsilon (\vartheta -\theta )\ge (1-c)/2\ \hbox { a.e. in }\Omega ,\quad \forall \varepsilon \in (-\delta ,1+\delta ). \end{aligned}$$

Taking into account that the eigenvalue \(\Lambda (\theta +\varepsilon (\vartheta -\theta ))\) is simple, we can apply the implicit function theorem to the function \(F:(-\delta ,1+\delta )\times {\mathbb {R}}\times H^1_0(\Omega )\rightarrow H^{-1}(\Omega )\times {\mathbb {R}},\) defined by

$$\begin{aligned} F(\varepsilon ,\lambda , u)=\Big (-\mathrm{div}\big ((1-c\theta -c\varepsilon (\vartheta -\theta ))\nabla u\big )-\lambda u, \int _\Omega |u|^2dx-1\Big ), \end{aligned}$$

to deduce that for \(\lambda _\varepsilon :=\Lambda \big (\theta +\varepsilon (\vartheta -\theta )\big ),\) and \(u_\varepsilon \) the unique solution of

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta -c\varepsilon (\vartheta -\theta ))\nabla u_\varepsilon \big )=\lambda _\varepsilon u_\varepsilon \ \hbox { in }\Omega \\ \displaystyle u_\varepsilon =0\ \hbox { on }\partial \Omega ,\quad u_\varepsilon \ge 0\ \hbox { in }\Omega ,\quad \int _\Omega |u_\varepsilon |^2dx=1,\end{array}\right. \end{aligned}$$
(3.7)

we have that the function \(\varepsilon \in (-\delta ,1+\delta )\rightarrow (\lambda _\varepsilon ,u_\varepsilon )\in {\mathbb {R}}\times H^1_0(\Omega )\) is in \(C^\infty (-\delta ,1+\delta ).\) Differentiating (3.7) twice with respect to \(\varepsilon \) and taking \(\varepsilon =0\), we get

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta )\nabla u'-c(\vartheta -\theta )\nabla u\big )=\lambda ' u+\lambda u'\ \hbox { in }\Omega \\ \displaystyle u'=0\ \hbox { on }\partial \Omega ,\quad \int _\Omega uu'dx=0,\end{array}\right. \end{aligned}$$
(3.8)
$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta )\nabla u''-2c(\vartheta -\theta )\nabla u'\big )=\lambda ''u+2\lambda ' u'+\lambda u''\ \hbox { in }\Omega \\ \displaystyle u''=0\ \hbox { on }\partial \Omega ,\quad \int _\Omega (|u'|^2+uu'')\,dx=0,\end{array}\right. \end{aligned}$$
(3.9)

with

$$\begin{aligned} \lambda =\lambda _0,\ \ u=u_0,\ \ \lambda '={d\lambda _\varepsilon \over d\varepsilon }\Big |_{\varepsilon =0},\ \ \lambda ''={d^2\lambda _\varepsilon \over d\varepsilon ^2}\Big |_{\varepsilon =0},\ \ u'={du_\varepsilon \over d\varepsilon }\Big |_{\varepsilon =0},\ \ u''={d^2u_\varepsilon \over d\varepsilon ^2}\Big |_{\varepsilon =0}. \end{aligned}$$

Equation (3.8) gives (2.12). In order to prove (2.9), we use u as test function in (3.8). Since \(\Vert u\Vert _{L^2(\Omega )}=1,\) we get

$$\begin{aligned} \int _\Omega \big ((1-c\theta )\nabla u'\cdot \nabla u-c(\vartheta -\theta )|\nabla u|^2\big )dx=\lambda \int _\Omega uu'\,dx+\lambda ',\end{aligned}$$
(3.10)

Using now \(u'\) as test function in (3.7), with \(\varepsilon =0\), we also have

$$\begin{aligned} \int _\Omega (1-c\theta )\nabla u'\cdot \nabla u\,dx=\lambda \int _\Omega uu'\,dx.\end{aligned}$$
(3.11)

Replacing (3.11) in (3.10), we conclude (2.9).

It remains to prove (2.11). For this purpose, and reasoning as above, we use u as test function in (3.9), which taking into account \(\Vert u\Vert _{L^2(\Omega )}=1\), \(\int _\Omega uu'dx=0\), gives

$$\begin{aligned} \int _\Omega \big ((1-c\theta )\nabla u''\cdot \nabla u-2c(\vartheta -\theta )\nabla u'\cdot \nabla u\big )dx=\lambda ''+\lambda \int _\Omega u''u\,dx, \end{aligned}$$

which using \(u''\) as test function in (3.7), with \(\varepsilon =0\), simplifies to

$$\begin{aligned} -2c\int _\Omega (\vartheta -\theta )\nabla u'\cdot \nabla u\,dx=\lambda '',\end{aligned}$$
(3.12)

On the other hand, using \(u'\) as test function in (3.8), we have

$$\begin{aligned} \int _\Omega \big ((1-c\theta )|\nabla u'|^2-c(\vartheta -\theta )\nabla u\cdot \nabla u'\big )dx=\lambda \int _\Omega |u'|^2dx.\end{aligned}$$
(3.13)

By (3.12) and (3.13), we deduce (2.11) and then that \(\Lambda \) is concave. \(\square \)

Proof of Theorem 2.7

Reasoning by contradiction, we assume that there exists a solution \({\hat{\theta }}\) of (2.4) such that \({\hat{\theta }}=\chi _{{\hat{\omega }}}\), with \({\hat{\omega }}\) a measurable subset of \(\Omega \). By Theorem 2.5 we know that (2.16) holds, which, taking into account that \({\hat{\lambda }}(1+c/(1-c)\chi _{{\hat{\omega }}}){\hat{u}}\) belongs to \(L^\infty (\Omega )\), implies (see e.g. [16])

$$\begin{aligned} D^2{\hat{u}}\in BMO(\Omega )^{N\times N}.\end{aligned}$$
(3.14)

By the John-Nirenberg theorem, [19], there exists \(\tau >0\) such that

$$\begin{aligned} \max _{1\le i,j\le N}\int _{{\tilde{\Omega }}} e^{\tau |\partial ^2_{ij}{\hat{u}}|}dx<\infty .\end{aligned}$$
(3.15)

This allows us to use the imbedding theorems for Orlicz-Sobolev spaces, [15], which taking into account that

$$\begin{aligned} \int _{\rho ^{-N}}^\infty {\log t\over t^{1+{1\over N}}}dt=N^2\rho \big (1-\log \rho \big ), \end{aligned}$$

proves the existence of \(C>0\) such that

$$\begin{aligned} \big |\nabla {\hat{u}}(x)-\nabla {\hat{u}}(y)\big |\le C|x-y|\max \{-\log |x-y|,1\},\quad \forall \,x,y\in \Omega .\end{aligned}$$
(3.16)

Since

$$\begin{aligned} -\int _0^1{1\over \rho \ln \rho }\,d\rho =\infty , \end{aligned}$$

we then have (see e.g. [13], Theorem 2.5) that for every \(y\in \Omega \), there exists a unique maximal solution \(\varphi (.,y)\) of

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle \partial _t\varphi (t,y)=\nabla {\hat{u}}(\varphi (t,y))\\ \displaystyle \varphi (0,y)=y,\end{array}\right. \end{aligned}$$
(3.17)

which is defined in an open interval \(I(y)\subset {\mathbb {R}}.\) Taking into account that

$$\begin{aligned} \partial _t {\hat{u}}(\varphi (t,y))=\big |\nabla {\hat{u}}(\varphi (t,y))\big |^2\ge 0,\quad \forall \, y\in \Omega ,\ \forall \,t\in I(y), \end{aligned}$$

and that \({\hat{u}}\) is strictly positive in \(\Omega \) and vanishes on \(\partial \Omega \), we can apply LaSalle’s theorem, [20], to deduce

$$\begin{aligned} \sup I(y)=\infty ,\quad \lim _{t\rightarrow \infty }\mathrm{dist} \big (\varphi (t,y),\big \{\nabla {\hat{u}}=0\big \}\big )=0,\quad \forall y\in \Omega .\end{aligned}$$
(3.18)

Now, we observe that the equation \(\nabla \hat{\theta }\cdot \nabla \hat{u}=0\) in \(\Omega \), implies

$$\begin{aligned} {\hat{\theta }}(\varphi (t,y))\hbox { does not depend on }t,\ \hbox { a.e. }y\in \Omega .\end{aligned}$$
(3.19)

But this is in contradiction with (3.18), which, recalling that \({\hat{\theta }}=\chi _{{\hat{\omega }}}\) and that \(\nabla {\hat{u}}\) is continuous, implies

$$\begin{aligned} {\hat{\theta }}(\varphi (0,y))=0,\quad \lim _{t\rightarrow \infty }{\hat{\theta }}(\varphi (t,y))=1\ \hbox { a.e. }y\in \Omega \setminus {\hat{\omega }}, \end{aligned}$$

where by (1.4)

$$\begin{aligned} \big |\Omega \setminus {\hat{\omega }}\big |=|\Omega |-\kappa >0. \end{aligned}$$

\(\square \)

4 A Numerical Approximation

In this section, taking into account Theorem 2.2, we prove the convergence of the gradient method applied to (2.4), together with an estimate of the error. The corresponding algorithm reads as follows:

We fix \(\delta >0\).

  • Take \(\theta _0\in L^\infty (\Omega ;[0,1])\) such that

    $$\begin{aligned} \int _\Omega \theta _0dx=\kappa .\end{aligned}$$
    (4.1)
  • Assuming we have constructed \(\theta _n\in L^\infty (\Omega ;[0,1])\) such that

    $$\begin{aligned} \int _\Omega \theta _ndx=\kappa ,\end{aligned}$$
    (4.2)

    we define \((\lambda _n,u_n)\) as the unique solution of

    $$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -\mathrm{div}\big ((1-c\theta _n)\nabla u_n\big )=\lambda _n u_n\ \hbox { in }\Omega \\ \displaystyle u_n=0\ \hbox { on }\partial \Omega ,\quad u_n>0\ \hbox { in }\Omega ,\quad \int _\Omega |u_n|^2dx=1.\end{array}\right. \end{aligned}$$
    (4.3)
  • Take \(\vartheta _n\in L^\infty (\Omega ;[0,1])\) as a solution of

    $$\begin{aligned} \min _{\begin{array}{c} \vartheta \in L^\infty (\Omega ;[0,1])\\ \int _\Omega \vartheta dx\ge \kappa \end{array}}\int _\Omega \vartheta |\nabla u_n|^2dx.\end{aligned}$$
    (4.4)
  • For

    $$\begin{aligned} \varepsilon _n=\min \Big \{1,\delta \int _\Omega (\theta _n-\vartheta _n)|\nabla u_n|^2dx\Big \}, \end{aligned}$$

    we define

    $$\begin{aligned} \theta _{n+1}=(1-\varepsilon _n)\theta _n+\varepsilon _n\vartheta _n.\end{aligned}$$
    (4.5)

Remark 4.1

Problem (4.3) can be easily solved using the power method. The solutions of (4.4) are explicitly given by

$$\begin{aligned} \vartheta _n=\left\{ \begin{array}{ll}\displaystyle 0&{}\hbox { if }|\nabla u_n|> \mu _n\\ \displaystyle 1 &{}\hbox { if }|\nabla u_n|<\mu _n, \end{array}\right. \qquad \int _\Omega \vartheta _n\,dx=\kappa ,\qquad \mu _n>0. \end{aligned}$$
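
A self-contained one-dimensional sketch of the whole iteration may help to fix ideas. The discretization, the dense eigensolver (used here instead of the power method) and the values of c, \(\kappa \) and \(\delta \) are assumptions made only for this illustration; Theorem 4.2 below requires \(\delta \) to be small enough.

```python
# Illustrative 1D implementation of the algorithm above (uniform mesh on (0,1),
# arbitrary data; a dense eigensolver replaces the power method).
import numpy as np
from scipy.linalg import eigh

n, h, c, kappa, delta = 200, 1.0 / 201, 0.5, 0.3, 1.0

def eigenpair(theta):
    # step (4.3): first Dirichlet eigenpair of -((1 - c*theta) u')'
    a = 1.0 - c * theta
    A = (np.diag(a[:-1] + a[1:]) - np.diag(a[1:-1], 1) - np.diag(a[1:-1], -1)) / h**2
    lam, vec = eigh(A)
    return lam[0], np.abs(vec[:, 0]) / np.sqrt(h * np.sum(vec[:, 0] ** 2))

def best_response(u):
    # step (4.4): theta = 1 on the edges where |u'|^2 is smallest, up to the volume kappa
    grad2 = (np.diff(np.concatenate(([0.0], u, [0.0]))) / h) ** 2
    vartheta = np.zeros(n + 1)
    vartheta[np.argsort(grad2)[: int(np.ceil(kappa / h))]] = 1.0
    return vartheta, grad2

theta = np.full(n + 1, kappa)                        # theta_0 with integral kappa
for it in range(50):
    lam, u = eigenpair(theta)                        # (lambda_n, u_n)
    vartheta, grad2 = best_response(u)               # vartheta_n
    eps = min(1.0, delta * h * np.sum((theta - vartheta) * grad2))
    theta = (1.0 - eps) * theta + eps * vartheta     # step (4.5)
    print(it, lam, eps)   # lambda_n should approach the maximum (for delta small, cf. Theorem 4.2)
```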

Our main result is given by the following convergence theorem.

Theorem 4.2

There exists \(\delta _0>0\), such that for every \(\delta \in (0,\delta _0]\), and every \(\theta _0\in L^\infty (\Omega ;[0,1])\) which satisfies (4.1), the sequence \((\theta _n,u_n)\) defined by the previous algorithm satisfies:

  1.

    There exists \(C>0\), which only depends on c and \(\Vert u_0\Vert _{H^1_0(\Omega )}\), such that, defining \({\hat{\lambda }}\) as the maximum value of (2.4), we have

    $$\begin{aligned} 0\le {\hat{\lambda }}-\lambda _n\le {C\over \sqrt{n}}.\end{aligned}$$
    (4.6)
  2.

    Every \(\hat{\theta }\in L^\infty (\Omega ;[0,1])\) such that there exists a subsequence of \(\theta _n\) converging in \(L^\infty (\Omega )\) weak-\(*\) to \(\hat{\theta }\) is a solution of (2.4).

  3.

    The sequence \(u_n\) satisfies

    $$\begin{aligned} \int _\Omega |{\hat{u}}-u_n|^2dx\le C\,{\ln n\over \sqrt{n}}.\end{aligned}$$
    (4.7)

    with \({\hat{u}}\) defined by Theorem 2.1 and \(C>0\) depending only on c and \(\Vert u_0\Vert _{H^1_0(\Omega )}\).

The first and second assertions of the theorem will follow from the following general convergence result for the gradient descent method. It applies to a convex (but not necessarily strictly convex) functional with bounded second derivative. It is closely related to the classical convergence result of the gradient method with a fixed step. However, the rate of convergence is worse due to the lack of ellipticity.

Theorem 4.3

Assume that X is a reflexive space, \(K\subset X\) a bounded, closed, convex set and \(F:K\rightarrow {\mathbb {R}}\) a convex, Gâteaux differentiable function such that there exists \(M\ge 0\) satisfying (\(\langle .,.\rangle \) denotes the duality product)

$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0^+} {\langle F'(k_1+\varepsilon (k_2-k_1))-F'(k_1),k_2-k_1\rangle \over \varepsilon }\le M,\quad \forall \, k_1,k_2\in K.\end{aligned}$$
(4.8)

Then, for every \(k_0\in K\), and every \(\delta \in (0,1/M]\), the sequence \(\{k_n\}\) defined by

$$\begin{aligned} k_{n+1}=(1-\varepsilon _n)k_n+\varepsilon _n{\tilde{k}}_n\end{aligned}$$
(4.9)

with \({\tilde{k}}_n\) solution of

$$\begin{aligned} \langle F'(k_n),{\tilde{k}}_n\rangle =\min _{k\in K} \langle F'(k_n),k\rangle ,\end{aligned}$$
(4.10)

and

$$\begin{aligned} \varepsilon _n=\min \left\{ {\langle F'(k_n),k_n-{\tilde{k}}_n\rangle \over M},1\right\} ,\end{aligned}$$
(4.11)

satisfies

$$\begin{aligned} F(k_n)-\min _{k\in K}F(k)\le {C\over \sqrt{n}}.\end{aligned}$$
(4.12)

The constant C only depends on \(F(k_0)\) and M. Moreover, every \({\hat{k}}\in K\) such that there exists a subsequence of \(\{k_n\}\) which converges weakly to \({\hat{k}}\), is a minimum point for F.
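
To see the scheme (4.9)-(4.11) at work outside the eigenvalue setting, here is a toy instance (a convex quadratic on the box \([0,1]^d\), with made-up data and a value of M computed only for this example) where the linear sub-problem (4.10) is solved coordinate-wise and the quantity \(\sqrt{n}\,(F(k_n)-\min _K F)\) appearing in (4.12) is monitored.

```python
# A toy instance of the scheme (4.9)-(4.11) of Theorem 4.3: F is a convex
# quadratic on K = [0,1]^d; all data below are made up for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
d = 20
B = rng.standard_normal((d, d))
Q = B.T @ B / d                                    # positive definite Hessian
b = rng.standard_normal(d)
F = lambda k: 0.5 * k @ Q @ k - b @ k
dF = lambda k: Q @ k - b
M = np.linalg.eigvalsh(Q).max() * d                # constant in (4.8) on K (diameter^2 = d)

# accurate reference minimum over the box, only used to report the gap
ref = minimize(F, rng.random(d), jac=dF, bounds=[(0.0, 1.0)] * d, method="L-BFGS-B")

k = rng.random(d)
for n in range(1, 2001):
    g = dF(k)
    k_tilde = (g < 0).astype(float)                # solves (4.10): minimizes <F'(k_n), .> on K
    eps = min(g @ (k - k_tilde) / M, 1.0)          # step (4.11)
    k = (1 - eps) * k + eps * k_tilde              # update (4.9)
    if n % 500 == 0:
        print(n, (F(k) - ref.fun) * np.sqrt(n))    # remains bounded, as predicted by (4.12)
```

In this smooth toy example the monitored quantity in fact decreases, which is consistent with, and stronger than, the bound (4.12).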

Remark 4.4

The assumption that F is Gâteaux differentiable in K can be relaxed to the following: for every \(k\in K\), there exists \(F'(k)\in X'\) such that

$$\begin{aligned} \langle F'(k),{\tilde{k}}-k\rangle =\lim _{\varepsilon \rightarrow 0^+} {F(k+\varepsilon (\tilde{k}-k))-F(k)\over \varepsilon },\quad \forall \, k,{\tilde{k}}\in K.\end{aligned}$$
(4.13)

We also observe that this property combined with the convexity of F implies that F is lower semicontinuous in K and then that F has a minimum in K.

Proof of Theorem 4.3

Thanks to (4.8), for every \(\varepsilon \in [0,1]\) we have

$$\begin{aligned} F\big (k_n+\varepsilon ({\tilde{k}}_n-k_n)\big )\le F(k_n)+\varepsilon \langle F'(k_n),{\tilde{k}}_n-k_n\rangle +{M\over 2}\varepsilon ^2. \end{aligned}$$

Defining \({\tilde{k}}_n\) by (4.10), \(\varepsilon _n\) by (4.11), and \(\delta _n\) by

$$\begin{aligned} \delta _n=\langle F'(k_n),k_n-{\tilde{k}}_n\rangle ,\end{aligned}$$
(4.14)

we then have

$$\begin{aligned} F(k_{n+1})\le \left\{ \begin{array}{ll}\displaystyle F(k_n)-{\delta _n^2\over 2M} &{}\hbox { if }\delta _n\le M\\ \displaystyle F(k_n)+{M\over 2}-\delta _n &{}\hbox { if }\delta _n>M.\end{array}\right. \le F(k_n)-{\delta _n\over 2}\min \left\{ {\delta _n\over M},1\right\} .\nonumber \\ \end{aligned}$$
(4.15)

Now, we take \({\hat{k}}\in K\) a minimum point for F. Using the convexity of F and the definition (4.10) of \({\tilde{k}}_n\), we have

$$\begin{aligned} F({\hat{k}})\ge F(k_n)+\langle F'(k_n),{\hat{k}}-k_n\rangle \ge F(k_n)+\langle F'(k_n),{\tilde{k}}_n-k_n\rangle =F(k_n)-\delta _n. \end{aligned}$$

Taking \(e_n:=F(k_n)-F({\hat{k}})\ge 0\), we deduce from this inequality and (4.15)

$$\begin{aligned} e_n\le \left\{ \begin{array}{ll}\displaystyle \sqrt{2M(e_n-e_{n+1})} &{} \hbox { if }\delta _n\le M\\ \displaystyle 2(e_n-e_{n+1}) &{}\hbox { if }\delta _n>M.\end{array}\right. \end{aligned}$$
(4.16)

Lemma 1 in [18], then proves (4.12).

On the other hand, the convexity and lower semicontinuity of F imply that F is sequentially lower semicontinuous for the weak topology. Therefore, for every subsequence of \(k_n\), still denoted by \(k_n\), which converges weakly to some \({\tilde{k}}\in K\), we have

$$\begin{aligned} F({\tilde{k}})\le \liminf F(k_n)=\min _KF, \end{aligned}$$

and then \({\tilde{k}}\) is a minimum point of F in K. \(\square \)

In order to prove Theorem 4.2, we also need the following two lemmas.

Lemma 4.5

Let \(\Omega \) be a bounded open set. Then there exists \(\rho >0\) such that, defining \(\Lambda (\theta )\) by (2.8) and \(\Lambda _2(\theta )\) as the second eigenvalue of \(-\mathrm{div}((1-c\theta )\nabla )\) with Dirichlet conditions, we have

$$\begin{aligned} \Lambda _2(\theta )-\Lambda (\theta )\ge \rho ,\quad \forall \, \theta \in L^\infty (\Omega ;[0,1]).\end{aligned}$$
(4.17)

Proof

Reasoning by contradiction, we assume that there exists a sequence \(\theta _n\in L^\infty (\Omega ;[0,1])\) such that \(\Lambda _2(\theta _n)-\Lambda (\theta _n)\) tends to zero. Extracting a subsequence if necessary (see e.g. [24, 26]), we can assume that there exists \(A\in L^\infty (\Omega )^{N\times N}\) symmetric, with

$$\begin{aligned} (1-c)|\xi |^2\le A(x)\xi \cdot \xi \le |\xi |^2,\quad \forall \,\xi \in {\mathbb {R}}^N,\ \hbox { a.e. }x\in \Omega , \end{aligned}$$

such that \((1-c\theta _n)I\) converges to A in the sense of the H-convergence in \(\Omega \) ([1, 24, 26]). Denoting by \(\lambda _1\), \(\lambda _2\) the first and second eigenvalues of the operator \(-\mathrm{div}(A\nabla )\) with Dirichlet conditions, this implies

$$\begin{aligned} \Lambda (\theta _n)\rightarrow \lambda _1,\qquad \Lambda _2(\theta _n)\rightarrow \lambda _2. \end{aligned}$$

Since \(\lambda _2-\lambda _1>0\), this contradicts the fact that \(\Lambda _2(\theta _n)-\Lambda (\theta _n)\) tends to zero. \(\square \)

Lemma 4.6

Let \(\Omega \) be a bounded open set. Then there exist three constants \(R,M,\gamma >0\) such that, for every \(\theta ,\vartheta \in L^\infty (\Omega ;[0,1])\), the function \(u'\) defined by (2.12) and the value \(\lambda ''\) defined by (2.11) satisfy

$$\begin{aligned} \int _\Omega |\nabla u'|^2dx\le R,\qquad -M\le \lambda ''\le -\gamma \int _\Omega |u'|^2dx.\end{aligned}$$
(4.18)

Proof

We define \(\lambda _1\) and \(\lambda _2\) as the first and second eigenvalues of \(-\mathrm{div}((1-c\theta )\nabla )\) with Dirichlet conditions, and u as the unique positive eigenfunction with unit norm in \(L^2(\Omega )\) corresponding to \(\lambda _1\). Since \(u'\) is orthogonal to u in \(L^2(\Omega )\), we have

$$\begin{aligned} \int _\Omega (1-c\theta )|\nabla u'|^2dx\ge \lambda _2\int _\Omega |u'|^2dx,\end{aligned}$$
(4.19)

Using then \(u'\) as test function in (2.12), we have

$$\begin{aligned} (\lambda _2-\lambda _1)\int _\Omega |u'|^2dx\le \int _\Omega (1-c\theta )|\nabla u'|^2dx-\lambda _1\int _\Omega |u'|^2dx=c\int _\Omega (\vartheta -\theta )\nabla u\cdot \nabla u'dx.\nonumber \\ \end{aligned}$$
(4.20)

Therefore

$$\begin{aligned} \begin{array}{l}\displaystyle \int _\Omega (1-c\theta )|\nabla u'|^2dx\le c\int _\Omega (\vartheta -\theta )\nabla u\cdot \nabla u'dx+\lambda _1\int _\Omega |u'|^2dx\\ \displaystyle \le \left( 1+{\lambda _1\over \lambda _2-\lambda _1}\right) c\int _\Omega (\vartheta -\theta )\nabla u\cdot \nabla u'dx= {c\lambda _2\over \lambda _2-\lambda _1}\int _\Omega (\vartheta -\theta )\nabla u\cdot \nabla u'dx.\end{array}\end{aligned}$$
(4.21)

Using here Lemma 4.5, the Cauchy-Schwarz inequality and

$$\begin{aligned} \int _\Omega (1-c\theta )|\nabla u|^2dx=\lambda _1\le \lambda ^*_1, \end{aligned}$$

with \(\lambda ^*_1\) the first eigenvalue of \(-\Delta \) in \(\Omega \) with Dirichlet conditions, we conclude the existence of \(R>0\) such that the first assertion in (4.18) holds. From this inequality, (4.17), (4.20), and (2.11), we easily conclude the remaining assertions in (4.18). \(\square \)

Proof of Theorem 4.2

Taking into account Lemma 4.6 and Remark 4.4, we have that (4.6) and the second assertion of the theorem are an immediate consequence of Theorem 4.3 applied to \(K=L^\infty (\Omega ;[0,1])\subset L^2(\Omega )\) and \(F(\theta )=-\Lambda (\theta )\).

It remains to prove (4.7). Given \({\hat{\theta }}\) a solution of (2.4), and defining \(\Lambda \) by (2.8), we have

$$\begin{aligned} \Lambda (\theta _n)=\Lambda ({\hat{\theta }})+{d\Lambda \over d\varepsilon }\big ({\hat{\theta }}+\varepsilon (\theta _n-{\hat{\theta }})\big )_{|\varepsilon =0}+ \int _0^1(1-t){d^2\Lambda \over dt^2}\big ({\hat{\theta }}+t(\theta _n-{\hat{\theta }})\big )dt, \end{aligned}$$

where thanks to \({\hat{\theta }}\) solution of (2.4), we have

$$\begin{aligned} {d\Lambda \over d\varepsilon }\big ({\hat{\theta }}+\varepsilon (\theta _n-{\hat{\theta }})\big )_{|\varepsilon =0}\le 0. \end{aligned}$$

Using then \(\Lambda ({\hat{\theta }})-\Lambda (\theta _n)={\hat{\lambda }}-\lambda _n\) and the second inequality in (4.18), we get

$$\begin{aligned} {\hat{\lambda }}-\lambda _n\ge \gamma \int _0^1\int _\Omega (1-t)|u'_t|^2dxdt,\end{aligned}$$
(4.22)

with \(u'_t\) the solution of (2.12) corresponding to \(\theta ={\hat{\theta }}+t(\theta _n-{\hat{\theta }})\) and \(\vartheta -\theta =\theta _n-{\hat{\theta }}\).

Now, we use that (4.22) and (4.18) imply

$$\begin{aligned} \begin{array}{l}\displaystyle \Vert u_n-{\hat{u}}\Vert _{L^2(\Omega )}=\left\| \int _0^1 u'_t\,dt\right\| _{L^2(\Omega )}\le \left\| \int _0^{1-{1\over n}} u'_t\,dt\right\| _{L^2(\Omega )}+ \left\| \int _{1-{1\over n}}^1 u'_t\,dt\right\| _{L^2(\Omega )}\\ \displaystyle \le \left( \int _0^{1-{1\over n}}(1-t)\int _\Omega |u'_t|^2dxdt\right) ^{1\over 2}\left( \int _0^{1-{1\over n}}{dt\over 1-t}\right) ^{1\over 2}+ \int _{1-{1\over n}}^1 \Vert u'_t\Vert _{L^2(\Omega )}dt \le C{\sqrt{\log n}\over \root 4 \of {n}}+{C\over n}, \end{array}\end{aligned}$$

where C only depends on \(u_0\) and c. This proves (4.7). \(\square \)

Fig. 1: Optimal proportion in the circle, \(c=0.5\), \(\kappa =0.2|\Omega |\)

Fig. 2: Optimal proportion in the circle, \(c=0.5\), \(\kappa =0.5|\Omega |\)

Fig. 3: Optimal proportion in the circle, \(c=0.5\), \(\kappa =0.8|\Omega |\)

Fig. 4: Optimal proportion in the square, \(c=0.5\), \(\kappa =0.2|\Omega |\)

Fig. 5: Optimal proportion in the square, \(c=0.5\), \(\kappa =0.5|\Omega |\)

Fig. 6: Optimal proportion in the square, \(c=0.5\), \(\kappa =0.8|\Omega |\)

5 Numerical Examples

In this last section, we present some numerical experiments corresponding to the algorithm described in Section 4.

Our first example refers to the case where \(\Omega \) is a ball in \({\mathbb {R}}^N\). In this case, as a simple application of the uniqueness of the optimal state function given by Theorem 2.1, we can easily show that the solutions are radial. The corresponding result is given in Proposition 5.1 below. In the case of the minimization of the first eigenvalue, a similar result has been obtained in [2] (see [11, 12, 21] for related results). There, the result is more delicate to prove than in the present paper due to the lack of uniqueness. We also recall that for the minimization problem, the corresponding solutions \({\hat{\theta }}\) are unrelaxed, i.e. they are characteristic functions. In the maximization problem, Theorem 2.7 shows that this is not the case.

Proposition 5.1

Let \(\Omega \) be the unit ball in \({\mathbb {R}}^N\). Then, there exists a unique solution \(({\hat{\theta }},{\hat{u}})\) of (2.5). Moreover \({\hat{u}}\) is a positive decreasing radial function in \(W^{2,\infty }(\Omega )\) and \({\hat{\theta }}\) is a radial function in \(W^{1,\infty }(\Omega )\).

Proof

Along the proof, we denote by \(B_R\) the ball of center 0 and radius R, and by \(|A|_{N-1}\) the \((N-1)\)-dimensional Hausdorff measure of a subset \(A\subset {\mathbb {R}}^N\).

Let \({\hat{\theta }}\) be a solution of (2.4) and \({\hat{u}}\) the solution of (2.6). Then, thanks to the symmetry properties of the unit ball, we have that \({\tilde{\theta }}(x):={\hat{\theta }}(Px)\) is also a solution of (2.4) for every orthogonal matrix P. Moreover, the solution of (2.6) relative to \({\tilde{\theta }}\) is given by \({\tilde{u}}(x)={\hat{u}}(Px)\). Since \({\tilde{u}}={\hat{u}}\) by Theorem 2.1, we get \({\hat{u}}(Px)={\hat{u}}(x)\) for every orthogonal matrix P. This proves that \({\hat{u}}\) is a radial function.

Once we know that \({\hat{u}}\) is radial we get that for every \({\tilde{\theta }}\) solution of (2.4), the function \({\hat{\theta }}\) defined by

$$\begin{aligned} {\hat{\theta }}(x)={1\over \big |\partial B_{|x|}\big |_{N-1}}\int _{\partial B_{|x|}}{\tilde{\theta }} (y)\,ds(y),\ \hbox { a.e. } x\in B_1, \end{aligned}$$

is also a solution of (2.4), which is radial.

With a little abuse of notation, let us also denote by \({\hat{u}}\), \({\hat{\theta }}\) the real functions on (0, 1) satisfying

$$\begin{aligned} {\hat{u}}(x)={\hat{u}}(|x|),\quad {\hat{\theta }}(x)={\hat{\theta }}(|x|),\quad \ \hbox { a.e. }x\in \Omega . \end{aligned}$$

These functions satisfy

$$\begin{aligned} \left\{ \begin{array}{l}\displaystyle -{d\over dr}\Big (r^{N-1}(1-c{\hat{\theta }}){d{\hat{u}}\over dr}\Big )={\hat{\lambda }} \,r^{N-1}{\hat{u}}\ \hbox { in }(0,1),\\ \displaystyle {d{\hat{u}}\over dr}(0)={\hat{u}}(1)=0,\end{array}\right. \end{aligned}$$
(5.1)

with \({\hat{\lambda }}>0\) the value of the maximum in (2.4). From (2.17), we have \({\hat{\theta }}=1\) in \([0,\delta ]\) for some \(\delta >0\), and by Theorem 2.5, the function \({d{\hat{u}}\over dr}\) is continuous in [0, 1]. Then, (5.1) shows that \({\hat{\theta }}\) is a continuous function. Using also (2.17) and that (5.1) implies

$$\begin{aligned} {\hat{u}}\ \hbox { decreasing},\quad {\hat{\mu }}\, {d\over dr}\big (r^{N-1}(1-c{\hat{\theta }})\big )={\hat{\lambda }}\, r^{N-1}{\hat{u}}\ \hbox { in }\ \mathrm{int}\Big (\Big \{{d{\hat{u}}\over dr}=-{\hat{\mu }}\Big \}\Big ), \end{aligned}$$

we conclude that \({\hat{\theta }}\) is in \(W^{1,\infty }(0,1)\) and then by (5.1), the function \({\hat{u}}\) is in \(W^{2,\infty }(0,1)\).

To finish, it remains to prove the uniqueness of \({\hat{\theta }}\). Using the method of characteristics, this follows from (2.17), \({\hat{u}}\in W^{1,\infty }(0,1)\) and the fact that \({\hat{\theta }}\) solves

$$\begin{aligned} -\mathrm{div}\Big ((1-c{\hat{\theta }}){x\over |x|}{d{\hat{u}}\over dr}\Big )={\hat{\lambda }} {\hat{u}}\ \hbox { in }\Omega , \end{aligned}$$

for every solution \({\hat{\theta }}\) of (2.4). \(\square \)

In Figs. 1, 2 and 3, we represent the optimal proportion \(\theta \) of the worse conductive material for \(\Omega \) the unit circle, \(c=0.5\) and \(\kappa =p|\Omega |\), with \(p=0.2\), \(p=0.5\) and \(p=0.8\) respectively. Since we know that \({\hat{\theta }}\) is radial, we have applied the algorithm in Section 4 directly to the corresponding one-dimensional problem.
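
For completeness, here is a sketch of a radial discretization that can be used in this case: the weighted eigenproblem coming from (5.1), \(-(r^{N-1}(1-c\theta )u')'=\lambda \,r^{N-1}u\) on (0,1) with \(u'(0)=u(1)=0\), approximated by a cell-centred finite-difference scheme. The mesh and the data are illustrative assumptions made here, not the actual implementation used for the figures.

```python
# Sketch of the radial eigenproblem used for the disc (illustrative discretization).
import numpy as np
from scipy.linalg import eigh

N, n, c = 2, 400, 0.5                       # space dimension, number of cells, contrast
h = 1.0 / n
r_nodes = (np.arange(n) + 0.5) * h          # cell centres
r_faces = np.arange(n + 1) * h              # cell faces; r_faces[0] = 0 enforces u'(0) = 0

def radial_eigenpair(theta_faces):
    w = r_faces ** (N - 1) * (1.0 - c * theta_faces)    # face weights r^{N-1}(1 - c*theta)
    A = (np.diag(w[:-1] + w[1:]) - np.diag(w[1:-1], 1) - np.diag(w[1:-1], -1)) / h**2
    M = np.diag(r_nodes ** (N - 1))                     # weight r^{N-1} on the right-hand side
    lam, vec = eigh(A, M)                               # generalized eigenvalue problem
    return lam[0], np.abs(vec[:, 0])

lam0, u0 = radial_eigenpair(np.zeros(n + 1))
print(lam0)   # for theta = 0 and N = 2 this is close to j_{0,1}^2 ~ 5.78 (unit disc Laplacian)
```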

In Figs. 4, 5 and 6 we represent the optimal proportion \(\theta \) of the worse conductive material for \(\Omega \) the unit square, \(c=0.5\) and \(\kappa =p|\Omega |\), p as above. As expected, the solution has the symmetries of the square.