Maximax rearrangement optimization related to a homogeneous Dirichlet problem

In this paper we investigate a maximax optimization problem related to a homogeneous Dirichlet problem over two classes of rearrangements. We prove the existence of maximizers and establish a representation formula for them.

Recalling the variational formulation of $u_{f,h}$ (see the next section) confirms that the functional $J : L^2(\Omega) \times L^\infty(\Omega) \to \mathbb{R}$ is well defined. Our interest is in the following maximax optimization problem:
$$\sup_{(f,h) \in \mathcal{R}[f_0] \times \mathcal{R}[h_0]} J(f,h), \tag{2}$$
where $h_0$ and $f_0$ satisfy the conditions mentioned above pertaining to the two functions $h$ and $f$ in (1). Moreover, $\mathcal{R}[h_0]$ and $\mathcal{R}[f_0]$ denote the two rearrangement classes generated by $h_0$ and $f_0$, respectively; see the next section for the precise definition. The main result of this paper is Theorem 4.2, where we prove that (2) is solvable, contingent on $f_0$ and $h_0$ satisfying a technical condition. By solvability of (2) we mean the existence of a pair $(\hat f, \hat h) \in \mathcal{R}[f_0] \times \mathcal{R}[h_0]$ at which the supremum in (2) is attained.

Optimization problems in which the admissible set is a rearrangement class, or a subset of one, have become quite popular in recent years; see for example [5][6][7][8][9][10][11][12][13] and the references therein. The problem discussed here, however, is distinctive in that the admissible set is the Cartesian product of two rearrangement classes. The problem is physically interesting as well. Returning to the membrane problem described above, the optimization problem (2) asks for the best design of the membrane (the function $\hat h$) and the optimal load distribution (the function $\hat f$) which together maximize the quantity $J$.

Preliminaries
In this section we collect some well-known results.
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^N$. For any measurable $E \subset \Omega$ we denote the $N$-dimensional Lebesgue measure of $E$ by $\mathcal{L}^N(E)$. Let $f, g : \Omega \to \mathbb{R}$ be two measurable functions. We say $f$ and $g$ are rearrangements of each other provided
$$\mathcal{L}^N(\{x \in \Omega : f(x) \ge c\}) = \mathcal{L}^N(\{x \in \Omega : g(x) \ge c\}), \quad \text{for all } c \in \mathbb{R}.$$
It is well known that if $f \in L^p(\Omega)$, $1 \le p \le \infty$, and $g$ is a rearrangement of $f$, then $g \in L^p(\Omega)$ and in fact $\|f\|_p = \|g\|_p$, where $\|\cdot\|_p$ denotes the standard norm of $L^p(\Omega)$. The rearrangement class generated by $f$ is denoted $\mathcal{R}[f]$; it comprises all functions that are rearrangements of $f$. The reader may consult [2,3] for more results about rearrangements of functions.
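On a discretized domain the rearrangement relation is easy to test: two grid functions sampled on a uniform grid are rearrangements of each other precisely when their sorted value vectors coincide, and equality of $L^p$ norms then follows immediately. A minimal numerical sketch (all names are illustrative, not from the paper):

```python
import numpy as np

def are_rearrangements(f_vals, g_vals, tol=1e-12):
    """Discrete analogue of the rearrangement relation: on a uniform grid,
    the level-set measures of {f >= c} and {g >= c} agree for every c
    exactly when the sorted sample vectors coincide."""
    f_sorted = np.sort(np.asarray(f_vals))
    g_sorted = np.sort(np.asarray(g_vals))
    return f_sorted.shape == g_sorted.shape and bool(np.allclose(f_sorted, g_sorted, atol=tol))

rng = np.random.default_rng(0)
f = rng.normal(size=1000)
g = rng.permutation(f)          # a genuine rearrangement of f
print(are_rearrangements(f, g))                          # True
# Equal L^p norms follow; for instance p = 2:
print(np.isclose(np.linalg.norm(f), np.linalg.norm(g)))  # True
```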
We say $u \in H^1_0(\Omega)$ is a weak solution of (1) whenever
$$\int_\Omega \nabla u \cdot \nabla v \, dx + \int_\Omega h u v \, dx = \int_\Omega f v \, dx, \quad \text{for all } v \in H^1_0(\Omega). \tag{3}$$
It is well known that $u$, the unique weak solution of (1), satisfies the following variational problem:
$$\int_\Omega f u \, dx = \max_{v \in H^1_0(\Omega)} \left( 2\int_\Omega f v \, dx - \int_\Omega |\nabla v|^2 \, dx - \int_\Omega h v^2 \, dx \right). \tag{4}$$
We now collect some useful lemmas to be applied later.
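In one space dimension the weak formulation can be checked with finite differences: assembling the discrete Laplacian plus $\mathrm{diag}(h)$ and solving gives $u$, and the variational characterization of the weak solution, that $v = u$ maximizes $2\int_\Omega f v\,dx - \int_\Omega |\nabla v|^2\,dx - \int_\Omega h v^2\,dx$ with maximum value $\int_\Omega f u\,dx$, becomes visible numerically. A sketch under these assumptions (uniform grid on $(0,1)$, homogeneous Dirichlet conditions; all names are illustrative):

```python
import numpy as np

n, L = 200, 1.0                     # interior grid points on (0, 1)
dx = L / (n + 1)
x = np.linspace(dx, L - dx, n)

# Tridiagonal discrete Laplacian with homogeneous Dirichlet conditions
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2

h = 1.0 + np.sin(np.pi * x) ** 2    # non-negative coefficient h
f = np.exp(x)                       # load f

u = np.linalg.solve(A + np.diag(h), f)   # discrete weak solution u

def neg_twice_energy(v):
    """Discretization of 2*int(f v) - int(|v'|^2) - int(h v^2)."""
    return (2 * f @ v - v @ (A @ v) - h @ v**2) * dx

J = (f @ u) * dx                    # int f u dx
print(np.isclose(J, neg_twice_energy(u)))   # maximum value equals int f u dx
print(neg_twice_energy(0.5 * u) <= J)       # other test functions give less
```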

Lemma 2.1 ([3])
Let $1 \le p \le \infty$ and let $q$ be the conjugate exponent of $p$. Let $f_0 \in L^p(\Omega)$. Then $\overline{\mathcal{R}[f_0]}$, the closure of $\mathcal{R}[f_0]$ in the $L^q$-topology on $L^p(\Omega)$, is convex and sequentially compact in the $L^q$-topology.

Remark 2.2 The $L^q$-topology on $L^p(\Omega)$ is the weak topology for $1 \le p < \infty$ and the weak* topology for $p = \infty$.

Lemma 2.3 ([3])
Let $f_0 : \Omega \to \mathbb{R}$ and $g : \Omega \to \mathbb{R}$ be two measurable functions. If every level set of $g$ has measure zero, then there exists an increasing function $\varphi$ such that $\varphi \circ g$ is a rearrangement of $f_0$.
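The increasing function $\varphi$ of Lemma 2.3 has a transparent discrete construction: match sorted values, sending the $k$-th smallest value of $g$ to the $k$-th smallest value of $f_0$. The resulting map is increasing along the range of $g$, and $\varphi \circ g$ is a rearrangement of $f_0$. A sketch (all names are illustrative; distinct sample values of $g$ play the role of the measure-zero level sets):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
f0 = rng.exponential(size=n)        # target rearrangement class generator
g = rng.normal(size=n)              # distinct values: "level sets of measure zero"

# Build phi on the sampled values: send the k-th smallest value of g
# to the k-th smallest value of f0.
order = np.argsort(g)
phi_of_g = np.empty(n)
phi_of_g[order] = np.sort(f0)       # phi is increasing by construction

# phi∘g is a rearrangement of f0: the sorted samples coincide
print(np.allclose(np.sort(phi_of_g), np.sort(f0)))    # True
# and phi respects the ordering induced by g
print(bool(np.all(np.diff(phi_of_g[order]) >= 0)))    # True
```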
Lemma 2.5 ([3]) Let $f_0 \in L^p(\Omega)$ and let $\Psi : \overline{\mathcal{R}[f_0]} \to \mathbb{R}$ be convex.
(i) Suppose that $\Psi$ is sequentially continuous in the $L^q$-topology on $L^p(\Omega)$. Then $\Psi$ attains a maximum value relative to $\overline{\mathcal{R}[f_0]}$ at some $f^* \in \mathcal{R}[f_0]$.
(ii) Suppose, in addition, that $g$ is a member of the subgradient of $\Psi$ at $f^*$ and every level set of $g$ has measure zero. Then $f^* = \varphi \circ g$ almost everywhere in $\Omega$, for some increasing function $\varphi$.

The inverse of $-\Delta + hI$
Let $h$ be a non-negative function in $L^\infty(\Omega)$ and let $I$ stand for the identity operator. Let $K_h : L^2(\Omega) \to H^1_0(\Omega)$ denote the inverse of $-\Delta + hI$ with homogeneous Dirichlet boundary conditions on $\partial\Omega$. For $f \in L^2(\Omega)$, $K_h(f)$ is then the unique weak solution of problem (1) corresponding to $f$ and $h$. In what follows we prove some properties of $K_h$.
(P1) $K_h$ is a bounded linear operator.
Proof It is clear from (3).
Proof It is clear from (4).
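A discrete analogue makes (P1) transparent: replacing $-\Delta + hI$ by a tridiagonal matrix plus $\mathrm{diag}(h)$ on a one-dimensional grid, $K_h$ becomes the inverse of a symmetric positive definite matrix, hence a bounded linear (and, in addition, symmetric and positive) operator. A numerical sketch (all names are illustrative):

```python
import numpy as np

n = 100
dx = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2

rng = np.random.default_rng(1)
h = rng.uniform(0.0, 2.0, size=n)       # non-negative h
M = A + np.diag(h)                      # discrete analogue of -Laplacian + h I

def K_h(f):
    """Discrete K_h: solve (-u'' + h u) = f with Dirichlet conditions."""
    return np.linalg.solve(M, f)

f, g = rng.normal(size=n), rng.normal(size=n)
# (P1) linearity: K_h(a f + b g) = a K_h(f) + b K_h(g)
print(np.allclose(K_h(2 * f + 3 * g), 2 * K_h(f) + 3 * K_h(g)))   # True
# K_h is also symmetric and positive: <g, K_h f> = <f, K_h g>, <f, K_h f> >= 0
print(np.isclose(g @ K_h(f), f @ K_h(g)))                          # True
print(bool(f @ K_h(f) >= 0))                                       # True
```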

Maximax optimization
We begin with the following theorem.

Theorem 4.1 Let $h$ be a non-negative function in $L^\infty(\Omega)$. Then the optimization problem
$$\sup_{f \in \overline{\mathcal{R}[f_0]}} J(f,h)$$
is solvable; that is, there exists $f_h \in \mathcal{R}[f_0]$ at which the supremum is attained. Moreover, there exists an increasing function $\varphi$ such that $f_h = \varphi \circ K_h(f_h)$ almost everywhere in $\Omega$.
Proof First, we show that the functional $J(\cdot,h)$ is strictly convex. Let $t \in [0,1]$, $h \in L^\infty(\Omega)$ and $f, g \in L^2(\Omega)$. By the variational formulation (4), $J(\cdot,h)$ is the supremum of a family of functionals that are affine in their first argument; thus
$$J(tf + (1-t)g,\, h) \le t\,J(f,h) + (1-t)\,J(g,h),$$
so $J(\cdot,h)$ is convex. Suppose that for some $t \in (0,1)$ equality holds above. Since the supremum in (4) is attained uniquely at the corresponding weak solution, equality forces
$$K_h(f) = K_h(g).$$
Since $K_h$ is linear and invertible, we deduce $f = g$ almost everywhere in $\Omega$. Therefore $J(\cdot,h)$ is strictly convex. Now, by applying Lemma 2.5, we infer that there exists a maximizer $f_h \in \mathcal{R}[f_0]$ of $J(\cdot,h)$ relative to $\overline{\mathcal{R}[f_0]}$, and $f_h = \varphi \circ K_h(f_h)$ almost everywhere in $\Omega$, for some increasing function $\varphi$. Now, we are ready to prove the following theorem.
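Before doing so, note that the strict convexity used above can be observed in a discrete model. Assuming, as the representation $f_h = \varphi \circ K_h(f_h)$ suggests, that $J(f,h) = \int_\Omega f\,K_h(f)\,dx$, the discrete $J$ is a quadratic form with a symmetric positive definite matrix, and the convexity gap equals $t(1-t)\,J(f-g,h)$, which vanishes only for $f = g$. A sketch (all names are illustrative):

```python
import numpy as np

n = 80
dx = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / dx**2
h = np.linspace(0.0, 1.0, n)            # non-negative h
Minv = np.linalg.inv(A + np.diag(h))    # discrete K_h as a matrix

def J(f):
    """Discrete J(f, h) = int f * K_h(f) dx (a quadratic form)."""
    return (f @ (Minv @ f)) * dx

rng = np.random.default_rng(2)
f, g = rng.normal(size=n), rng.normal(size=n)
t = 0.3
gap = t * J(f) + (1 - t) * J(g) - J(t * f + (1 - t) * g)
# For a quadratic form the convexity gap is exactly t(1-t) * J(f - g),
# which is positive whenever f != g.
print(np.isclose(gap, t * (1 - t) * J(f - g)))   # True
print(bool(gap > 0))                             # True
```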

Theorem 4.2 Let $h_0$ be a non-negative function in $L^\infty(\Omega)$ and let $f_0 \in L^2(\Omega)$ satisfy, together with $h_0$, the technical condition mentioned in the introduction.
Then the following maximax optimization problem is solvable:
$$\sup_{(f,h) \in \mathcal{R}[f_0] \times \mathcal{R}[h_0]} J(f,h).$$
Proof By Theorem 4.1, it suffices to prove that the following optimization problem is solvable:
$$\sup_{h \in \overline{\mathcal{R}[h_0]}} L(h), \qquad L(h) := J(f_h, h).$$
Let $(h_i)$ be a maximizing sequence for $L$. By Lemma 2.1, passing to a subsequence we may assume $h_i \to \eta$ in the weak* topology of $L^\infty(\Omega)$, for some $\eta \in \overline{\mathcal{R}[h_0]}$, and $f_{h_i} \to \xi$ weakly in $L^2(\Omega)$, for some $\xi \in \overline{\mathcal{R}[f_0]}$. From Hölder's inequality, the continuous embedding of $H^1_0(\Omega)$ into $L^2(\Omega)$ and inequality (5), the sequence $v_i := K_{h_i}(f_{h_i})$ is bounded in $H^1_0(\Omega)$; passing to a further subsequence, $v_i \to v$ weakly in $H^1_0(\Omega)$ and strongly in $L^2(\Omega)$. From (3), we have
$$\int_\Omega \nabla v_i \cdot \nabla w \, dx + \int_\Omega h_i v_i w \, dx = \int_\Omega f_{h_i} w \, dx, \quad \text{for all } w \in H^1_0(\Omega).$$
Thus, letting $i \to \infty$, we deduce that
$$\int_\Omega \nabla v \cdot \nabla w \, dx + \int_\Omega \eta v w \, dx = \int_\Omega \xi w \, dx, \quad \text{for all } w \in H^1_0(\Omega).$$