Sharp-Interface Limit of a Multi-Phase Spectral Shape Optimization Problem for Elastic Structures

Abstract. We consider an optimization problem for the eigenvalues of a multi-material elastic structure that was previously introduced by Garcke et al. [Adv. Nonlinear Anal. 11 (2022), no. 1, 159–197]. There, the elastic structure is represented by a vector-valued phase-field variable, and a corresponding optimality system consisting of a state equation and a gradient inequality was derived. In the present paper, we pass to the sharp-interface limit in this optimality system by the technique of formally matched asymptotics. To this end, we derive suitable Lagrange multipliers in order to formulate the gradient inequality as a pointwise equality. Afterwards, we introduce inner and outer expansions, relate them by suitable matching conditions, and formally pass to the sharp-interface limit by comparing the leading order terms in the state equation and in the gradient equality. Furthermore, the relation between these formally derived first-order conditions and results of Allaire & Jouve [Comput. Methods Appl. Mech. Engrg. 194 (2005), pp. 3269–3290] obtained in the framework of classical shape calculus is discussed. Eventually, we provide numerical simulations for a variety of examples. In particular, we illustrate the sharp-interface limit and also consider a joint optimization problem of simultaneous compliance and eigenvalue optimization.


Introduction
The goal of structural shape and topology optimization is to find the optimal distribution of materials in a prescribed region, the so-called design domain. Here, in addition to pure shape optimization, also the topology of the structure is to be optimized. This includes the formation of holes (void regions) in the structure as well as the merging and splitting of connected material components. In many applications, certain properties of the materials (such as their elastic properties) as well as additional side conditions (e.g., volume constraints or support conditions) need to be taken into account within the optimization problem.
Besides the optimization of shape and topology, the optimization of eigenvalues is an important task in engineering science to make structures robust against vibrations. It has been observed that structures are less susceptible to vibrations if their principal eigenvalue is large, see [18, Section 2], [4] and also [45] for concrete examples and further references. Heuristically, this can be explained by the fact that larger principal eigenvalues are associated with higher temporal frequencies, which correspond to smaller wavelengths of the oscillations.
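This heuristic can be made concrete by the standard separation-of-variables ansatz for free elastic vibrations (a textbook sketch, not taken from [45]; here C denotes a stiffness tensor, ρ a density, and E the symmetrized gradient):

```latex
% Free vibrations of a linearly elastic body solve
%   \rho\,\partial_t^2 u = \operatorname{div}\bigl(\mathcal{C}E(u)\bigr).
% The ansatz u(x,t) = w(x)\cos(\omega t) yields the eigenvalue problem
-\operatorname{div}\bigl(\mathcal{C}E(w)\bigr) = \lambda\,\rho\,w,
\qquad \omega = \sqrt{\lambda},
% so a larger principal eigenvalue \lambda corresponds to a higher temporal
% frequency \omega and hence to oscillations of smaller wavelength.
```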
The traditional mathematical tool to handle shape optimization problems is the calculus of shape derivatives based on boundary variations (see, e.g., [5,6,35,59,66,67]). However, frequent remeshing leads to high computational costs, and this approach cannot deal with topological changes, see also [62] for a comprehensive discussion. In some situations, it is possible to handle topology changes by means of homogenization methods (see, e.g., [3]) or variants of this approach such as the SIMP method (see, e.g., [18,26]). A drawback of this method occurring in applications to spectral problems is the phenomenon of so-called localized eigenmodes (also often referred to as spurious eigenmodes), see [7,18,29,63]. In this context, localized eigenmodes are eigenfunctions which are supported only in the void regions and pollute the spectrum with low eigenvalues. Especially in recent times, the level-set method has become a popular approach for topology optimization problems. After the method was developed in [61], it has been used extensively in the literature (see, e.g., [7,10,30,55,60,62]). Although the level-set method is capable of dealing with topological changes, difficulties can arise if voids are to be created.
In this paper, we consider an optimization problem that was introduced in [45]. There, the authors employed a different method to optimize the shape and the topology as well as a finite selection of eigenvalues of an elastic structure, namely the so-called (multi-)phase-field approach. This method for shape and topology optimization was first developed in [26] and has subsequently been used frequently in the literature. We refer the reader to [11, 19, 20, 22, 27, 31, 32, 34, 36-38, 58, 64] for a selection of the various contributions.
In [45], an elastic structure consisting of N − 1 materials is described by a multi-phase-field variable. This is a vector-valued function φ : Ω → R^N whose components φ_1, ..., φ_{N−1} represent the volume fractions of the materials, and φ_N represents the void (i.e., the region where no material is present). In particular, the components of φ are restricted to attain their values only in the interval [0, 1]. In most parts of the design domain, the materials are expected to appear in their pure form, meaning that the corresponding component of the multi-phase-field φ attains the value one, whereas all other components are zero. These regions are separated by diffuse interfaces, which are thin layers between the pure phases whose thickness is proportional to a small parameter ε > 0. In particular, φ is expected to exhibit a continuous transition between the values zero and one at these diffuse interfaces. The main advantage of the phase-field approach in the context of shape and topology optimization is that topological changes (such as merging or splitting of material components or the creation of holes) during the optimization process can be handled without any problems. The optimization problem in [45] is formulated as a minimization problem for an objective functional which involves a selection of eigenvalues as well as a Ginzburg–Landau type penalization term for the phase-field. For this problem, the existence of at least one global minimizer was established and a first-order necessary optimality condition for local minimizers was derived. A detailed mathematical formulation of the optimization problem from [45] will be presented in Section 2.
The main goal of this paper is to derive the sharp-interface limit of the aforementioned optimization problem from [45]. This means that we want to send the parameter ε, which is related to the thickness of the diffuse interface, to zero. In this way, we can relate the diffuse-interface approach from [45] to the physically reasonable scenario of sharp interfaces. In particular, one of our key goals is to show that minimizers of the problem in the diffuse-interface framework converge to minimizers of a corresponding sharp-interface optimization problem.
Qualitatively, there are two ways to deal with this passage to the limit: the rigorous investigation of the Γ-limit of the involved cost functional, and the formal method of matched asymptotic expansions. For a rigorous discussion of the sharp-interface limit of diffuse-interface models describing elastic systems, we refer the reader to [8,21]. There, the void is modeled as a further material having low but non-degenerate stiffness, which is crucial for the analysis. Up to now, to the best of our knowledge, there is no rigorous Γ-limit analysis for spectral problems in the case of degenerating stiffness in the void regions.
As a first step towards this delicate problem, the sharp-interface Γ-limit for an optimization problem involving a selection of eigenvalues of the Dirichlet Laplacian was rigorously established in [44]. A relation between the minimization of the principal eigenvalue of the Dirichlet Laplacian on the phase-field level and the Faber–Krahn inequality on the sharp-interface level was discussed in [54].
In the present paper, we follow the formal method of matched asymptotic expansions. The basic strategy of this approach is as follows: We assume that the phase-field as well as the corresponding eigenvalues and eigenfunctions each possess an inner asymptotic expansion and an outer asymptotic expansion, both given by a power series with respect to the interface parameter ε. The inner expansions approximate the aforementioned quantities "close" to the diffuse interface where the phase transition takes place, whereas the outer expansions approximate these quantities in regions "far" away from the interface where only the pure phases are present.
Plugging the outer expansions into the eigenvalue equation on the diffuse-interface level, a comparison of the leading order terms leads to limit eigenvalue equations on the sharp-interface level. At this point, we will include a discussion about localized eigenmodes. As also mentioned above, in numerical simulations the formation of eigenmodes that are supported only in void areas and produce eigenvalues which pollute the low part of the spectrum (which we are interested in) is a major problem. We will see that our asymptotic approach is able to deal with such localized eigenmodes. More precisely, we will see that if such modes appear, then the corresponding eigenvalues will diverge to infinity as ε → 0. Thus, if ε > 0 is sufficiently small, localized eigenmodes do not affect the lower part of the spectrum that is considered in our optimization problem.
The inner expansions are used to describe the aforementioned quantities in tubular neighborhoods around the interfaces. The distinction between inner and outer regions is needed as we expect the phase-field to change its values rapidly in regions close to the interface. This is because the diffuse interface will become infinitesimally thin as ε → 0. Here, the main idea is to introduce a rescaled coordinate system which takes the ε-scaling of this region into account.
After studying these two forms of expansions separately, it is crucial to match both expansions in a so-called intermediate region. This means we compare both expansions by exploiting the two different coordinate systems they are formulated in. Plugging these relations into the optimality system and comparing the leading order terms, we obtain boundary conditions for the previously obtained limit eigenvalue equations. We observe that the boundary condition on the free boundary will essentially be of homogeneous Neumann type. Furthermore, we use the inner expansions to derive a limit equality from the strong formulation of the gradient inequality. The limit eigenvalue equations together with this gradient equality will then constitute the optimality system of a corresponding sharp-interface optimization problem. This will be justified from the viewpoint of classical shape calculus (see, e.g., [7]) by relating the limit of the gradient inequality to the shape derivative of the associated cost functional.
However, in order to apply the technique of formally matched asymptotics, we first need to reformulate the gradient inequality on the diffuse-interface level as a pointwise gradient equality by introducing suitable Lagrange multipliers. Under an additional regularity assumption on the involved eigenfunctions, this is achieved by employing a regularization technique following the ideas of [23], in which we eventually pass to the limit. A key benefit of this strategy is that it provides an explicit construction of the Lagrange multipliers arising from the constraints of our optimization problem. This specific knowledge about the Lagrange multipliers will turn out to be essential for the asymptotic analysis. As a byproduct, we also prove that the phase-field variable φ solving the original gradient inequality is actually H^2-regular under the aforementioned assumption of suitably regular eigenfunctions.
The present paper is structured as follows. In Section 2, we first introduce the theory that is necessary to formulate the diffuse-interface optimization problem along with its first-order necessary optimality condition. The derivation of the strong formulation of the gradient inequality will then be performed in Section 3. Using the outer expansions, we derive the state equations of the limit problem in Section 4.1. In order to construct inner expansions, we first analyze in Section 4.3 how the involved differential operators are reformulated in a suitable rescaled coordinate system. Suitable matching conditions connecting the outer expansions with the inner expansions are then derived in Section 5. In Section 6, we use the inner expansions to derive boundary conditions on the free boundary in the sharp-interface setting as well as the sharp-interface limit of the gradient inequality. Then, in Section 7, we comprehensively state the limit optimality system, and in Section 8, the first-order necessary optimality condition on the sharp-interface level is related to classical shape calculus. Eventually, in Section 9, we present several numerical solutions for concrete optimization problems on the diffuse-interface level. In this context, we also discuss suitable choices of the model parameters. In particular, we observe that our results using the phase-field approach compare very well to similar numerical results obtained in [7] by means of the level-set method combined with classical shape calculus on the sharp-interface level.

Formulation of the problem
In this section, we recall the framework introduced in [45] in order to formulate and understand the optimality system our analysis is based on. Therefore, we first introduce the key assumptions which shall hold throughout the paper, where e_i ∈ R^N denotes the i-th standard basis vector in R^N. Additionally, we assume ψ to be decomposed into

The phase-field variable
To describe the material distribution of (N − 1) different materials in the design domain Ω, we introduce the phase-field φ : Ω → R^N. Its components φ_i, i = 1, ..., N − 1, represent the materials, whereas φ_N represents the void. We expect φ to continuously change its values at the diffuse interface. From a physical point of view, this means that the considered materials can be mixed at the interfacial region. In order for the phase-field to behave in a physically reasonable way, we impose suitable constraints. First of all, we fix the total amount of each material by the mean value constraint with m_i ∈ (0, 1) and m ∈ Σ^N (cf. (2.1)). The constraint m ∈ Σ^N is a consequence of the physical assumption that the individual volume fractions φ_i need to sum up to 1 at each point in the domain. Furthermore, it is physically reasonable to assume that each volume fraction shall attain its values only in the interval [0, 1]. This property is incorporated by assuming that any φ belongs to the set of admissible phase-fields. Here, G is the Gibbs simplex that was introduced in (A2).
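For orientation, the sets involved here typically take the following form (a sketch; the precise definitions are given in (A2) and (2.1) of the paper):

```latex
G \;=\; \Bigl\{\,\xi \in \mathbb{R}^N \;:\; \xi_i \ge 0 \ \text{for } i = 1,\dots,N,\ \ \textstyle\sum_{i=1}^N \xi_i = 1 \,\Bigr\},
\qquad
\mathcal{G}^m \;=\; \Bigl\{\,\varphi \in H^1(\Omega;\mathbb{R}^N) \;:\; \varphi(x) \in G \ \text{a.e. in } \Omega,\ \ \fint_\Omega \varphi \,\mathrm{d}x = m \,\Bigr\}.
```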

The Ginzburg-Landau energy
In order to make our optimization problem well-posed, we need to include a regularizing term for the phase-field in the cost functional. For this purpose, we use the so-called Ginzburg–Landau energy for all φ ∈ H^1(Ω; R^N). Here, the parameter ε > 0 is related to the thickness of the diffuse interface and therefore, it is usually chosen very small. In the sharp-interface limit, we intend to (formally) send this parameter to zero. Due to assumption (A2), the potential ψ enforces the phase-field φ to attain its values only in the Gibbs simplex. However, as we already include the Gibbs simplex constraint in the set of admissible phase-fields, it suffices to merely consider the regular part ψ_0 of the potential ψ in the Ginzburg–Landau energy as long as φ ∈ G. This means that for all φ ∈ G.
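In its standard form (cf. [26]; the paper's exact scaling may differ by constant factors), the Ginzburg–Landau energy reads:

```latex
E^\varepsilon(\varphi) \;=\; \int_\Omega \frac{\varepsilon}{2}\,\lvert \nabla\varphi \rvert^2
  \;+\; \frac{1}{\varepsilon}\,\psi(\varphi)\,\mathrm{d}x,
\qquad
\psi \;=\; \psi_0 + I_G .
```

The gradient term penalizes interfacial area, while the potential term penalizes mixed states; for φ ∈ G the indicator part I_G vanishes, so only the regular part ψ_0 remains.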

The elasticity tensor and the density function
As we intend to consider an elastic structure, we next introduce the two tensors of linear elasticity, which will be used to formulate the state equation. The strain tensor of a vector-valued function is its symmetrized gradient. The elasticity tensor C : R^N → R^{d×d×d×d} is a fourth order tensor with the following properties.
(B3) C is coercive for any fixed ε > 0, i.e., there exists θ_ε > 0 such that the coercivity estimate holds for all φ ∈ R^N and all symmetric matrices B ∈ R^{d×d}. For two matrices A, B ∈ R^{d×d}, this product is defined as the Frobenius product. The component specific densities are modeled by a density function ρ : R^N → R with the following properties.
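The standard definitions behind this subsection can be summarized as follows (a sketch consistent with the notation used later):

```latex
E(w) \;=\; \tfrac{1}{2}\bigl(\nabla w + \nabla w^{\top}\bigr),
\qquad
A : B \;=\; \sum_{i,j=1}^{d} A_{ij}B_{ij},
\qquad
\mathcal{C}(\varphi)\,B : B \;\ge\; \theta_\varepsilon\,\lvert B \rvert^2 .
```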
As in [20], we want C and ρ to possess a decomposition that reflects the material specific elasticity and density of the N − 1 materials. Therefore, for φ ∈ G, we set, for any k, l ∈ N and any α, for all s ∈ (0, 1). (2.6) The positivity conditions are necessary to satisfy the assumptions (B3) and (C2). The global non-negativity of α_M, β_M will be essential in avoiding spurious eigenmodes, see Section 4.2.
This means, for i ∈ {1, ..., N − 1}, we choose component specific but constant elasticity tensors C_i ∈ R^{d×d×d×d} and densities ρ_i > 0. As the void obviously has neither a stiffness nor a density, we approximate the void components by some fixed elasticity tensor C_N ∈ R^{d×d×d×d} and density ρ_N > 0 that are multiplied by the small interface parameter ε that was introduced in Section 2.3 in the context of the Ginzburg–Landau energy. Of course, these constant prefactors need to be chosen such that the assumptions (B2), (B3) and (C2) are satisfied, see [45].
Even though an adequate scaling of the void components C_N and ρ_N with respect to ε, combined with an appropriate choice of interpolation functions α_M, α_V, β_M, β_V, will be crucial for the numerical simulations in order to avoid spurious eigenmodes (see also Section 4.2), we emphasize that our formal analysis works for any kind of decomposition as in (2.5) as long as the void components are scaled with ε^p for some p ∈ N. Thus, in terms of our analysis, we will work with the general decomposition in (2.5), but we will also justify in the framework of asymptotic expansions how a specific choice of k, l and α_M, α_V, β_M, β_V in (2.5) is capable of dealing with localized eigenmodes, see Section 4.2.
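The following minimal sketch illustrates the ε^p-scaling of the void components in a two-phase toy decomposition (the names `alpha_M`, `alpha_V` and the quadratic interpolation are illustrative choices, not the paper's exact functions):

```python
def stiffness(phi, C_material=1.0, C_void=1.0, eps=1e-2, p=1):
    """Scalar stand-in for the decomposition (2.5): phi = (phi_material,
    phi_void) lies on the Gibbs simplex; the void term is scaled by eps**p."""
    alpha_M = lambda s: s ** 2   # monotone material interpolation, alpha_M(1) = 1
    alpha_V = lambda s: s ** 2   # void interpolation, alpha_V(1) = 1
    phi_m, phi_v = phi
    return alpha_M(phi_m) * C_material + eps ** p * alpha_V(phi_v) * C_void

# pure material retains its full stiffness ...
print(stiffness((1.0, 0.0)))             # 1.0
# ... while the stiffness of pure void degenerates like eps**p as eps -> 0
print(stiffness((0.0, 1.0), eps=1e-2))   # 0.01
```

Sending eps to zero thus makes the void stiffness degenerate, which is exactly the regime in which localized eigenmodes are pushed out of the low spectrum.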
As in [20] and [45], we extend the definition (2.5) to the whole hyperplane Σ^N by introducing a cut-off function for a small parameter ω > 0. We define, where a_ω and b_ω are monotonically increasing C^{1,1} functions that are constructed in such a way that σ_ω is also a C^{1,1} function. Then we consider the extensions, where the projection appearing therein is the ℓ^2 projection of R^N onto the convex set Σ^N. Note that in order for (C2) to be satisfied, the demanded positivity of the interpolation functions in (2.7) is in general not enough, as we allow α_V, β_V to become negative outside the unit interval. However, the special choice for φ ∈ Σ^N, which will be used in the numerical simulations (see also (9.3)), satisfies (C2) due to the fact that the ℓ^1 and ℓ^2 norms are equivalent on R^N. In our analysis, we will stick to the general decomposition (2.5) for full generality. The tensor C is dealt with analogously.
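The ℓ^2 projection onto the hyperplane Σ^N = {ξ ∈ R^N : Σ_i ξ_i = 1} admits the closed form ξ ↦ ξ + N^{-1}(1 − Σ_i ξ_i)(1, ..., 1)^T, since (1, ..., 1) is the normal direction. A minimal sketch (the cut-off construction with a_ω, b_ω is omitted):

```python
def project_onto_sigma(xi):
    """l2 projection of xi in R^N onto the affine hyperplane
    Sigma_N = { v : sum_i v_i = 1 }: shift along the normal (1,...,1)."""
    shift = (1.0 - sum(xi)) / len(xi)
    return [x + shift for x in xi]

v = project_onto_sigma([0.5, 0.9, 0.2])   # component sum 1.6 -> shift -0.2
print([round(x, 12) for x in v])          # [0.3, 0.7, 0.0]
print(round(sum(v), 12))                  # 1.0
```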
To conclude this subsection, let us introduce some further notation. For φ ∈ L^∞(Ω; R^N), we define a weighted scalar product on L^2(Ω; R^d) as well as a corresponding weighted scalar product for the eigenvalue problem. In the following, we write L^2_φ(Ω; R^d) in order to emphasize the fact that we equip L^2(Ω; R^d) with the scalar product (•, •)_{ρ(φ)}.

The state equation
We now introduce the system of equations describing the elastic structure, which will be referred to as the state equation (SE^ε); its weak formulation is given by (2.11). In [45], using classical spectral theory, it was shown that for any φ ∈ L^∞(Ω; R^N), there exists a sequence of eigenvalues (with multiple eigenvalues being repeated according to their multiplicity) which can be ordered as in (2.12). This comprises all eigenvalues of (2.11). Moreover, the corresponding eigenfunctions can be chosen as an orthonormal basis of L^2_φ(Ω; R^d), meaning that (w_i^{ε,φ}, w_j^{ε,φ})_{ρ(φ)} = δ_{ij} for all i, j ∈ N. This property will be crucial when considering the formal asymptotics of the eigenfunctions. In the following, when we talk about eigenvalues and eigenfunctions, we will always refer to the pairs (λ_i^{ε,φ}, w_i^{ε,φ}) with i ∈ N, which have the aforementioned properties.
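In strong form, a state equation of this type reads (a sketch using the notation of this section; the boundary conditions on ∂Ω are omitted here):

```latex
-\operatorname{div}\bigl(\mathcal{C}(\varphi)\,E(w_i^{\varepsilon,\varphi})\bigr)
  \;=\; \lambda_i^{\varepsilon,\varphi}\,\rho(\varphi)\,w_i^{\varepsilon,\varphi}
  \quad \text{in } \Omega,
\qquad
\bigl(w_i^{\varepsilon,\varphi},\,w_j^{\varepsilon,\varphi}\bigr)_{\rho(\varphi)}
  \;=\; \int_\Omega \rho(\varphi)\, w_i^{\varepsilon,\varphi} \cdot w_j^{\varepsilon,\varphi}\,\mathrm{d}x
  \;=\; \delta_{ij}.
```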

The optimization problem and the gradient inequality
Finally, we are in a position to state the optimization problem (P^ε_l) for some l ∈ N, where n_1, ..., n_l ∈ N indicate a selection of eigenvalues. Here, γ > 0 is a fixed constant related to surface tension.
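Schematically, the optimization problem combines the spectral objective with the Ginzburg–Landau regularization (a sketch; Ψ denotes the function aggregating the selected eigenvalues):

```latex
(P^\varepsilon_l)\qquad
\min_{\varphi \in \mathcal{G}^m}\;
\Psi\bigl(\lambda_{n_1}^{\varepsilon,\varphi},\dots,\lambda_{n_l}^{\varepsilon,\varphi}\bigr)
\;+\; \gamma\,E^\varepsilon(\varphi),
```

subject to the state equation from the previous subsection.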
Remark 2.1. It is worth mentioning that we do not need any boundedness assumption on Ψ in order to prove the existence of a minimizer to (P^ε_l) in the same way as in [45, Theorem 6.1]. In analogy to [44, Lemma 3.7], one can show that there are constants C_{1,ε}, C_{2,ε} > 0, depending only on the choice of C^ε and ρ^ε, such that the eigenvalues of (2.11) are controlled from below and above in terms of the eigenvalues λ^M_k. Here, λ^M_k denotes the k-th eigenvalue of the problem (2.11) with C ≡ Id and ρ ≡ 1. Qualitatively speaking, λ^M_k denotes an eigenvalue in the situation where the whole design domain is occupied by one material.
In [45, Theorem 6.2], the following first-order necessary optimality condition was derived.

Theorem 2.2. Let φ ∈ G^m be a local minimizer of (P^ε_l), i.e., there exists δ > 0 such that the minimality holds for all competitors in a δ-neighborhood of φ. We further assume that the eigenvalues λ^{ε,φ}_{n_1}, ..., λ^{ε,φ}_{n_l} are simple. Then the gradient inequality (GI^ε) holds.

The upcoming sharp-interface analysis will be concerned with passing to the limit in the state equation (SE^ε) as well as in the gradient inequality (GI^ε).

Analysis of the gradient inequality
In this section, we will show, under a suitable regularity assumption on the eigenfunctions involved in (GI^ε), that there exists a solution of the above gradient inequality possessing even the regularity φ ∈ H^2(Ω; R^N). This will be carried out by applying a regularization process to the non-smooth potential ψ, which was employed in a similar fashion in [15,23,25,40]. Our approach mainly follows the ideas of [23].
We regularize the gradient inequality in order to deal with the indicator functional I_G contained in the definition of the potential ψ. This will yield a sequence of H^2-regular approximating phase-fields (φ_δ)_{δ>0} solving regularized equations and converging to the desired phase-field φ.
Another convenient aspect of this procedure is that it will generate Lagrange multipliers that will allow us to transform the gradient inequality into an equality.This strong formulation of (GI ε ) will be the starting point for our asymptotic analysis in Section 6.

Regularization of the potential ψ and rewriting the constraints
We notice that φ ∈ G^m needs to satisfy the constraint φ_i(x) ≥ 0 for almost every x ∈ Ω and i = 1, ..., N. To deal with this constraint, we regularize the potential appearing in the Ginzburg–Landau energy, which was initially given in the form ψ = ψ_0 + I_G.

Definition 3.1. For δ > 0, we define the regularized potential ψ_δ, where the penalization term ψ̂ is as in (3.2).

Remark 3.2. We see that the regularization now approximates the indicator functional I_{R^N_+} by the function (1/δ) ψ̂. For δ ↘ 0, exactly the negative parts of the components of φ are penalized.
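A common Moreau–Yosida-type choice consistent with Remark 3.2 is the following (a sketch; the paper's exact ψ̂ in (3.2) may differ in constants):

```latex
\psi_\delta(\xi) \;=\; \psi_0(\xi) \;+\; \frac{1}{\delta}\,\hat\psi(\xi),
\qquad
\hat\psi(\xi) \;=\; \frac{1}{2}\sum_{i=1}^{N} \bigl(\min\{\xi_i,\,0\}\bigr)^2,
```

so that (1/δ) ψ̂ converges pointwise to the indicator functional I_{R^N_+} as δ ↘ 0, penalizing exactly the negative parts of the components.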
To deal with the remaining constraints hidden in G^m, namely the integral constraint ⨍_Ω φ dx = m and the sum constraint Σ_{i=1}^N φ_i = 1 a.e. in Ω, we introduce linear orthogonal projections.

Definition 3.3. Let us define the linear orthogonal projections
To simplify the notation, we further define the composition P := P_{TΣ} ∘ P_∫ = P_∫ ∘ P_{TΣ}.
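A discrete stand-in for the two projections illustrates why the composition P is well defined in either order (the helper names are hypothetical; a "field" is a list of point values of φ):

```python
def P_Tsigma(field):
    """Project each point value onto the tangent space
    T Sigma = { v : sum_i v_i = 0 } of the hyperplane Sigma_N."""
    out = []
    for v in field:
        mean = sum(v) / len(v)
        out.append([vi - mean for vi in v])
    return out

def P_int(field):
    """Subtract the spatial average of each component (zero-mean constraint)."""
    npts, ncomp = len(field), len(field[0])
    avg = [sum(field[k][i] for k in range(npts)) / npts for i in range(ncomp)]
    return [[field[k][i] - avg[i] for i in range(ncomp)] for k in range(npts)]

field = [[0.2, 0.9], [0.4, 0.1], [0.6, 0.5]]
a = P_int(P_Tsigma(field))
b = P_Tsigma(P_int(field))
# the two projections commute: P = P_Tsigma o P_int = P_int o P_Tsigma
print(all(abs(a[k][i] - b[k][i]) < 1e-12 for k in range(3) for i in range(2)))  # True
```

Both orders subtract the pointwise componentwise mean and the componentwise spatial mean, which are commuting linear operations.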
Remark 3.4. Note that for the constraint φ(x) ∈ R^N_+, we cannot introduce a linear orthogonal projection, as there is no vector space corresponding to this constraint. Thus, the approximation of the indicator functional in Definition 3.1 is actually necessary.

Smoothness assumption and rewriting the gradient inequality
In order to obtain a suitable regularization of the gradient inequality, we need to find a way to test (GI^ε) with arbitrary functions in H^1(Ω; R^N) and not only with those respecting the constraints. For this reason, and to obtain higher regularity of the phase-field, we will need to assume higher regularity of the corresponding eigenfunctions.
We fix a parameter ε > 0 as well as a solution φ ∈ G^m of (GI^ε). For a cleaner presentation, we omit the superscript ε in the eigenvalues and eigenfunctions.
A priori, the term is well defined only for η ∈ L^∞(Ω; R^N), as the expression E(w_{n_r}) : E(w_{n_r}) merely belongs to L^1(Ω). However, in order to consider a suitable regularized problem associated to (GI^ε), we need this term to be an element of L^2(Ω). For this purpose, we require the regularity w_{n_r} ∈ W^{1,4}(Ω).
Therefore, we now make the following crucial regularity assumption which shall hold for the rest of this paper.
Remark 3.5. Note that there exists a regularity theory for the equations of linear and nonlinear elasticity, see, e.g., [52,65]. However, due to the fact that the coefficient C(φ) is merely essentially bounded, we could only prove the existence of an (in general arbitrarily small) parameter ι > 0 such that E(w_{n_r}) ∈ L^{2+ι}(Ω). (3.5) Note that there exist counterexamples going back to De Giorgi for linear systems of elliptic PDEs (see, e.g., [17, Section 4]), where A is bounded and coercive and B denotes the unit ball. In particular, in the physically relevant case d = 3, where W^{1,4}(Ω; R^d) ↪ C^0(Ω; R^d), the condition w_{n_r} ∈ W^{1,4}(Ω; R^d) seems to be a real assumption, as unbounded eigenfunctions might exist.
In the following, let (•, •) denote the classical scalar product on L^2(Ω; R^N). Recalling the above, we have, for η ∈ L^2(Ω; R^N), the stated identity. Note that the term in the last line is to be understood in the indicated sense. Thus, the projection of this term is well defined, and the L^2-regularity of this object is ensured by the assumptions (R) and (B1). For later purposes, we point out that a straightforward computation reveals the identity stated above. To have a more concise notation, we will write the indicated abbreviation for the elasticity term; analogously, we use the corresponding notation for the density term. To reformulate the gradient inequality (GI^ε), we further define the function f_φ. By means of assumption (R), we infer f_φ ∈ L^2(Ω; R^N). This is crucial for the subsequent analysis, especially for the absorption argument at the end of Lemma 3.10, and therefore, assumption (R) cannot be waived. As φ ∈ G^m is fixed, we write f = f_φ in the following. Using this notation, we obtain:

The regularized problem and its limit
Now that we have introduced the regularized potential and suitable orthogonal projections, and have made the necessary regularity assumption, we can formulate a regularized problem which will approximate our initially fixed solution φ ∈ G^m of (GI^ε) in order to provide the desired H^2-regularity of φ.
Using all the previously introduced notation, we are now in a position to state the so-called regularized problem.
Definition 3.7. Let G̃^m be the relaxed admissible set from (3.8). We say that φ_δ ∈ G̃^m is a solution to the regularized problem if it solves (RE). Before proving the existence of a solution to (RE), we recall some properties proven in [23] that will be important for the upcoming analysis.
Proposition 3.8. Let ψ̂ be as defined in (3.2). Then the following properties hold true.
(a) The weak derivative of ψ̂ fulfills the stated formula, where, for ξ ∈ R^N, the components ψ̂_i(ξ) are given accordingly. (b) Monotonicity: the stated inequality holds for all r, s ∈ R and i = 1, ..., N. (c) Convexity: ψ̂ is convex. Using these results, we now prove the well-posedness result for the regularized problem. In order to show H^2-regularity of the solution φ_δ, we need the following regularity assumption on the design domain, which shall hold for the rest of the paper.
(D) In addition to (A1), we assume that Ω has at least one of the following properties: (ii) Ω is convex.
The well-posedness result for the regularized problem (RE) reads as follows.

on ∂Ω, (P RE)
Proof. First of all, we want to show that there exists at most one solution to (RE). To this end, we assume that there are two solutions φ_{δ,1}, φ_{δ,2} ∈ G̃^m. Then, by subtracting the corresponding equations, we obtain an identity in which we can drop the projection P in the second term. Using the monotonicity property (3.10), we infer that the difference vanishes. This yields φ_{δ,1} = φ_{δ,2}, as these functions have identical mean value.
In order to prove the existence of a solution, we consider a suitable minimization problem. Therefore, we define the functional I_δ(ξ) for all ξ ∈ H^1(Ω; R^N). If we can now show that there exists a φ_δ ∈ G̃^m that solves the minimization problem (3.13), the existence result is proven, since then the Gâteaux derivative I′_δ(φ_δ) vanishes in the admissible directions. By applying the projections P_{TΣ} and P_∫ to any η ∈ H^1(Ω; R^N) and then switching them to the other component in the L^2 scalar product, it follows that solving (3.13) is equivalent to solving (RE).
Note that there is no need to project the gradient term. This is justified as follows. By construction, we have the stated identity; on the other hand, a direct computation shows that the entries in each column are identical. Computing the relevant term, we see that it vanishes by construction. As the gradient term is invariant under the addition of constants, we can also omit the projection P_∫.
It remains to show that there exists a minimizer of (3.13). By construction, ψ̂ ≥ 0. Furthermore, using Young's inequality, we find a constant C > 0 such that the functional is coercive. In particular, for a minimizing sequence (φ_{δ,k})_k, the norm ∥φ_{δ,k}∥_{H^1(Ω;R^N)} remains bounded and thus, there exists φ_δ ∈ H^1(Ω; R^N) such that the convergences hold along a non-relabeled subsequence. From these convergences, we deduce φ_δ ∈ G̃^m. Using the estimate ψ̂(φ_{δ,k}) ≤ |φ_{δ,k}|^2 a.e. in Ω along with the generalized majorized convergence theorem of Lebesgue (see [69, p. 1015]), and further employing the weak lower semicontinuity of norms, we conclude that φ_δ ∈ G̃^m is a minimizer. Now that we have shown the existence of a solution φ_δ ∈ H^1(Ω; R^N) to (RE), it remains to prove that it possesses the desired regularity H^2(Ω; R^N). Since φ_δ is a weak solution of (RE), it can be interpreted as a weak solution of an elliptic system with square-integrable right-hand side. In particular, using this regularity, we conclude from (RE) that φ_δ satisfies (P RE).
As we want to pass to the limit in the regularized equation, we need some uniform bounds to apply classical compactness results.
Proof. By the previous lemma, we know that φ_δ minimizes I_δ (see (3.12)) over G̃^m (see (3.8)). Thus, we have the corresponding inequality. If we now choose any ξ ∈ G^m ⊂ G̃^m, we know that it is additionally componentwise non-negative and therefore ψ̂(ξ) = 0 a.e. in Ω. In view of definition (3.12), this yields a bound with a constant C > 0 independent of δ. Recalling the absorption trick (3.16), we obtain a further estimate which will be needed at the end of the proof. Furthermore, using the definition of ψ̂ (see (3.2)), we deduce an estimate which directly leads to (3.20).
We notice that (1/δ) ψ̂′(φ_δ) is weakly differentiable (cf. [48, Lemma 7.6]) and belongs to H^1(Ω; R^N). In order to prove (3.21), we test (RE) with η = (1/δ) ψ̂′(φ_δ). Applying [48, Lemma 7.6] to ψ̂′, we further deduce the stated identity. Applying Hölder's inequality in (3.24) and an absorption argument, which crucially requires f ∈ L^2(Ω; R^N), we thus infer the desired bound. As we have now bounded both the right-hand side of (P RE) and φ_δ itself in L^2(Ω; R^N) uniformly in δ, we can conclude. In order to reformulate (P RE) by means of Lagrange multipliers that are expected to converge in the weak sense, we need to get rid of the projection in (3.21). This is done analogously to [23, Theorem 2.1] and therefore, we only present the statement of the result without a proof.

Lemma 3.11.
There exists a constant C > 0 such that the corresponding bound holds uniformly in δ. Now, we introduce suitable Lagrange multipliers and pass to the limit in the regularized equation.
Theorem 3.12. The initially chosen solution φ ∈ G^m of (GI^ε) possesses the regularity φ ∈ H^2(Ω; R^N). Furthermore, there are Lagrange multipliers Λ, µ ∈ L^2(Ω; R^N) and ϑ ∈ R^N such that the gradient equality holds with µ_i ≥ 0 and µ_i φ_i = 0 a.e. in Ω for all i ∈ {1, ..., N}. (3.27)

Proof. In this proof, we will again use the notation f = f_φ. From (3.19), we deduce the existence of a function φ̄ ∈ H^2(Ω; R^N) such that the convergences hold as δ → 0 along a non-relabeled subsequence. Recalling the definition of f in (3.6), we now define the Lagrange multipliers of the regularized problem. (3.30) The reason why we do not reformulate the projection term P_{TΣ} f by means of a Lagrange multiplier is that this is a term depending on x, which would produce terms of order O(1/ε^2) when we consider the inner expansions in Section 6, due to the involved derivative of eigenfunctions. Recalling Definition 3.3, we can write (RE) as (3.31). We point out that the Lagrange multipliers are constructed in such a way that the factor 1/ε corresponding to the scaling of the original potential ψ is still present. This will be important in the next sections for the sharp-interface asymptotics.
Passing to the limit in (3.31), we infer Thus, the regularity φ̄ ∈ H 2 (Ω; R N ) and integration by parts yield the equation If we can now show that φ̄ = φ for our initially fixed solution φ ∈ G m of (GI ε ), the proof is complete.
Let us consider the test function In view of (3.28), we know that ⟨ϑ, η⟩ = 0, because by construction ∫ Ω η dx = 0.
From the previous reasoning we know Furthermore, we know that µ, φ ≥ 0 componentwise and thus, each summand in the above equality has to be identically zero. This verifies (3.27).
In the following, we use the above knowledge to show that our asymptotic expansions will produce a state equation and a gradient equality in the sharp-interface limit.

Asymptotic expansions
As mentioned above, we will now perform the procedure of sharp-interface asymptotics. Therefore, we start by analyzing outer and inner expansions approximating the quantities involved in our problem. The outer expansions are used to approximate these quantities in regions far away from the interfacial layers. They will be used to derive the state equation in the sharp-interface limit. The inner expansions are used in regions close to the interfacial layers where the phase transition takes place. They will provide boundary conditions for the equations obtained in the sharp-interface limit. As these layers are expected to scale proportionally to ε, a rescaling is needed here. By comparing the leading order equations, we will obtain jump conditions at the phase interfaces within the design domain and a sharp-interface version of the gradient equality (GS ε ).

Outer expansions
As in [20], we first consider the asymptotic expansions in regions "far" away from the interface. Therefore, we assume expansions of the form for all x ∈ Ω. Furthermore, we demand for all x ∈ Ω that φ 0 (x) ∈ G, φ k (x) ∈ T Σ, ⨍ Ω φ 0 dx = m and ⨍ Ω φ k dx = 0 for k ≥ 1, in order to be compatible with the constraints on the phase-field formulated in Section 2.2. As we are concerned with a formal limit process, we assume all the appearing quantities to possess suitable regularity such that we can write the state equation (SE ε ) in its strong formulation.
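For orientation, the outer ansatz (4.1) is presumably of the standard power-series form used in formally matched asymptotics; a sketch in the notation of this section:

```latex
% Sketch of the outer expansions (4.1); the precise truncation and
% remainder handling follow the formal calculus of the paper.
\varphi^\varepsilon(x) = \sum_{k \ge 0} \varepsilon^k \varphi_k(x), \qquad
w^\varepsilon_{n_r}(x) = \sum_{k \ge 0} \varepsilon^k w_{k,n_r}(x), \qquad
\lambda^\varepsilon_{n_r} = \sum_{k \ge 0} \varepsilon^k \lambda_{k,n_r},
```

subject to the pointwise constraints φ 0 (x) ∈ G and φ k (x) ∈ T Σ for k ≥ 1 stated above.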
Using standard arguments relying on the Γ-convergence of the Ginzburg-Landau energy in [14], we can partition the domain as where N ⊂ Ω is a Lebesgue null set. In general, the sets Ω i are only sets of finite perimeter. This follows from the boundedness of the Ginzburg-Landau energy, the inequality [14, (3.1)] and [14, Proposition 2.2]. Nevertheless, for our asymptotic analysis we assume them to be sufficiently smooth.
With this knowledge, we are in a position to derive the limit state equation resulting from (SE ε ) in the framework of outer asymptotic expansions.
Claim 4.1. Recall the scaling of C and ρ in (2.5), i.e., for φ ∈ G. Then, for r ∈ {1, . . ., l}, we obtain that the pair (λ 0,nr , w 0,nr ) fulfills the eigenvalue equations in the material regions for i = 1, . . ., N − 1. Furthermore, the normalization condition (2.13) is transferred to the limit eigenfunction w 0,nr , meaning that In particular, the eigenfunction w 0,nr is non-trivial in Ω i for at least one index i ∈ {1, . . ., N − 1}. Thus, w 0,nr cannot be a localized eigenmode, as it is not supported only in the void region Ω N .
Remark 4.2. (a) Of course, the eigenvalue λ ε nr could degenerate in the limit, i.e., λ 0,nr = 0. This is no contradiction to the normalization (4.4), because w 0,nr could potentially be a non-trivial constant in each material region Ω i . If each material region Ω i shares a sufficiently nice part of its boundary with Γ D , one can use Korn's inequality (see, e.g., [33] or [69, Theorem 62.13]) to deduce that w 0,nr = 0 in each Ω i , which would then indeed contradict (4.5). The inner expansions will provide us with boundary conditions that allow us to refine this statement, see Section 7 (especially Remark 7.1).
(b) In the case λ 0,nr > 0, even though the limit eigenvalue equations (SE i 0 ) hold for all i ∈ {1, . . ., N −1}, the eigenfunction w 0,nr could potentially be non-trivial only in one particular material region Ω i but vanish in all other material regions Ω j with j ∈ {1, . . ., N − 1} \ {i}.This means that a non-trivial equation might hold only in one single material region.
Let us now verify Claim 4.1, assuming that outer expansions of the form (4.1) exist. For the sake of a cleaner presentation, we fix the index n r ∈ N and omit the subscript n r in the following. In the spirit of formal asymptotics, we consider the state equation (SE ε ), i.e., and the normalization condition resulting from (2.13). Then, we plug in the asymptotic expansions (4.1) and consider each resulting order in ε separately.
We deduce that (4.5) reads to order O(1) as which proves (4.4). As a consequence, w 0 has to be non-trivial in Ω i for at least one index i ∈ {1, . . ., N − 1}.
Eventually, we compare the contributions of order O(1) in the state equation. We obtain which reads for each phase The remaining boundary conditions on the outer boundary Γ follow directly by plugging the asymptotic expansion into (SE ε ). This completes the argumentation.

Intermezzo on spurious eigenmodes
As already mentioned in the introduction, we now want to analytically justify the model that will be chosen for the numerical computations in order to avoid spurious eigenmodes. As we have seen in the above reasoning, assuming outer expansions of the form (4.1) and the general decomposition of C and ρ as in (4.3), we recover the desired limit system. In this subsection, in order to discuss spurious eigenmodes, we consider (4.3) with the specific choices k = 1, l = 2, and in addition to (2.5) and (2.6), we choose the interpolation functions α M and β M such that In numerical simulations, the phenomenon of spurious eigenmodes is a serious issue, see [7, 18, 29, 63]. The problem is that if the model parameters are not chosen correctly, eigenmodes that are supported only in the void region can actually emerge. Of course, the associated eigenvalues are unphysical, as the void should not contribute to the resonance behaviour of the structure. Nevertheless, even though spurious eigenmodes might not be avoidable in numerical simulations, they do not pose any problem if their associated eigenvalues are large, since then they do not affect the part of the spectrum that is involved in the optimization problem (P ε l ). For this reason, as also observed in the aforementioned literature, the key idea is to choose the scaling and interpolation in (4.3) in such a way that spurious eigenmodes only produce large eigenvalues or, more precisely, eigenvalues λ ε with λ ε → ∞ as ε → 0. In particular, this means that by using an adequate interplay of scaling and interpolation, spurious eigenmodes will not enter the sharp-interface limit, as their eigenvalues leave the considered part of the spectrum.
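The effect of this scaling can be illustrated by a toy computation (not part of the original paper): consider a two-degree-of-freedom generalized eigenvalue problem Ku = λMu in which one degree of freedom mimics the material (O(1) stiffness and mass) and one decoupled degree of freedom mimics the void, with stiffness scaled like ε and mass like ε², as in (4.3) with k = 1, l = 2. The spurious eigenvalue then behaves like ε/ε² = 1/ε and leaves every bounded part of the spectrum as ε → 0. A minimal sketch:

```python
import numpy as np
from scipy.linalg import eigh

def eigenvalues(eps):
    """Eigenvalues of a toy generalized problem K u = lam M u with one
    'material' and one decoupled 'void' degree of freedom."""
    # Void entries mimic the interpolation (4.3) with k = 1, l = 2:
    # stiffness scaled like eps, mass scaled like eps^2.
    K = np.diag([4.0, eps * 1.0])
    M = np.diag([1.0, eps**2 * 1.0])
    return eigh(K, M, eigvals_only=True)

for eps in [1e-1, 1e-2, 1e-3]:
    lam = eigenvalues(eps)
    # the physical eigenvalue stays at 4.0, while the spurious one
    # equals 1/eps and escapes to infinity as eps -> 0
    print(eps, lam)
```

The matrices and numbers are of course purely illustrative; they only reproduce the scaling argument, not the elasticity problem itself.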
In order to allow for spurious eigenmodes in our asymptotic expansions, we have to include terms of negative order in ε.

Claim 4.3. Assume the following outer asymptotic expansions for an arbitrary m ∈ N. Let C and ρ be given as in (4.3) with k = 1, l = 2, i.e., for φ ∈ G. Then, for r ∈ {1, . . ., l}, we obtain w k,nr = 0 and λ k,nr = 0 for k < −1, and the pair (λ −1,nr , w −1,nr ) fulfills

Remark 4.4. The asymptotic analysis in the argumentation of this subsection is crucially based on the interplay of the non-negative interpolation functions α M and β M , see (4.7), and the different ε-scaling of the void components in (4.3). Note that these two features are also important for our numerical experiments in Section 9, where the quadratic interpolation of C and ρ as well as the relatively lower scaling in ε of the void contribution of ρ compared to the void contribution of C are crucial to obtain meaningful results. It has also already been observed in the literature that a relatively lower scaling of mass compared to stiffness is an appropriate choice to deal with localized eigenmodes, see [7, 29, 63].
We now argue why Claim 4.3 is true. Therefore, we consider the state equation (SE ε ) and the normalization (4.5). First of all, we note that plugging the asymptotic expansion of φ ε into (4.9) yields This implies that w −m = 0 in Ω \ Ω N , or in other words, w −m is localized in the void region. Now, we consider (4.5) to the order O(ε −2m+2 ). We have Here, we used that −2m + 2 < 0. As w −m is localized in the void, we infer Thus, due to the non-negativity of the first summand in (4.12), we deduce Using the crucial global non-negativity of β M as assumed in (4.7), we have ρ(φ 1 ) ≥ 0, see (4.9). Moreover, φ 0 = e N in Ω N and we thus deduce from (2.6) Hence, since ρ N is positive, we infer w −m = 0 in Ω. These steps can now be repeated until the critical order O(1) is reached, because up to this order the normalization equation (4.5) can be exploited in the same way. It remains to show that (λ −1 , w −1 ) solves the desired limit eigenvalue problem. Therefore, we consider the state equation (SE ε ) to order O(1). In Ω N this simplifies to Summing up this intermezzo, we have now seen that even if spurious eigenmodes are not excluded, the appropriate choice of the model parameters will force the associated eigenvalues to leave the spectrum in the limit ε → 0. Hence, the spurious modes do not affect our optimization problem, as they leave the considered part of the spectrum.

Inner expansions
In the interfacial regions, i.e., in layers separating two outer regions, we need to rescale our coordinate system in order to take into account that φ ε changes rapidly in directions perpendicular to the interface.
Therefore, for all i, j = 1, . . ., N , we write Γ = Γ ij to denote the sharp interface separating Ω i and Ω j . Moreover, let n Γij denote the unit normal vector field on Γ pointing from Ω i to Ω j . In the following, we omit these indices to provide a cleaner presentation.
We now introduce a suitable coordinate system that fits the geometry of the interface. The following discussion can be found, e.g., in [1], and thus we only give the key steps needed for our analysis. Let us choose a local parametrization γ of Γ defined on an open subset U of R d−1 . We further define ν := n Γ ◦ γ.
As we want to describe a whole neighborhood surrounding the local part of the interface γ(U ) ⊂ Γ, we introduce the signed distance function d relative to Ω i , which satisfies d(x) > 0 if x ∈ Ω j and d(x) < 0 if x ∈ Ω i . For more details concerning the signed distance function, we refer the reader to [48, Sec. 14.6]. By introducing the rescaled distance coordinate z(x) := 1/ε d(x) ∈ R, we define for fixed z ∈ R and sufficiently small ε > 0 the (d − 1)-dimensional submanifold which describes a translation of Γ in the direction ν. Here, for x belonging to a sufficiently thin tubular neighbourhood around γ(U ), s(x) is the unique point in U such that γ(s) ∈ Γ is the orthogonal projection of x onto Γ. The summand εzν(s(x)) shifts the point γ(s(x)) back onto x. Hence, a sufficiently thin tubular neighborhood around γ(U ) can be expressed by the coordinate system (s, z). Now we can express the transformation of differential operators with respect to the coordinate transformation x → (s(x), z(x)). Therefore, let us consider an arbitrary scalar function b(x) = b̂(s(x), z(x)).
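In formulas, the coordinate system just described can be summarized as follows (a sketch consistent with the definitions above; the smallness conditions on the tubular neighborhood are suppressed):

```latex
z(x) = \tfrac{1}{\varepsilon}\, d(x), \qquad
x = \gamma\big(s(x)\big) + \varepsilon\, z(x)\, \nu\big(s(x)\big), \qquad
\Gamma_{\varepsilon z} = \big\{\, \gamma(s) + \varepsilon z\, \nu(s) \;:\; s \in U \,\big\}.
```

Here the middle identity expresses that a point x in the tubular neighborhood is recovered from its projection γ(s(x)) by moving the signed distance d(x) = εz(x) along the normal.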

It holds
where ∇ Γεz stands for the surface gradient on Γ εz . Proceeding analogously, we deduce that the divergence of a vector-valued function j(x) = ĵ(s(x), z(x)) can be expressed as where ∇ Γεz • ĵ stands for the surface divergence on Γ εz . Furthermore, the full gradient of a vector-valued function is given by where ⊗ denotes the dyadic product, defined as a ⊗ b = (a i b j ) d i,j=1 for all a, b ∈ R d . Analogously, for a matrix-valued function we apply formula (4.18) to each component of the row-wise defined divergence ∇ x • A. We obtain For the Laplacian, we obtain the representation Here, W denotes the Weingarten map associated with Γ, which is given by see, e.g., [39, Appendix B]. Its non-trivial eigenvalues κ 1 , . . ., κ d−1 are the principal curvatures of Γ, and its spectral norm can be expressed as Furthermore, κ denotes the mean curvature, which is defined as the sum of the principal curvatures of Γ. Note that in view of (4.22), κ can be expressed as which will be important for later purposes.
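Schematically, and to the orders in ε that are used below, the transformation rules for the differential operators read (a sketch; curvature corrections of higher order in ε are omitted):

```latex
\nabla_x b = \tfrac{1}{\varepsilon}\, \partial_z \hat b\; \nu
  + \nabla_{\Gamma_{\varepsilon z}} \hat b, \qquad
\nabla_x \cdot j = \tfrac{1}{\varepsilon}\, \partial_z \hat\jmath \cdot \nu
  + \nabla_{\Gamma_{\varepsilon z}} \cdot \hat\jmath, \qquad
\Delta_x b = \tfrac{1}{\varepsilon^2}\, \partial_{zz} \hat b
  - \tfrac{1}{\varepsilon}\, \kappa\, \partial_z \hat b + \mathcal{O}(1).
```

The ε⁻¹ and ε⁻² prefactors are what generate the leading order equations compared in Section 6.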
To conclude this section, we introduce the inner expansions that we will work with in the next section. Therefore, we make the ansatz where we assume Φ 0 (s(x), z(x)) ∈ G and Φ k (s(x), z(x)) ∈ T Σ N for all k ≥ 1. In the next section, we will relate these inner expansions to the outer expansions that were introduced before.
Remark 4.5. Note that the eigenvalues λ ε nr do not depend locally on x ∈ Ω and thus, their inner expansion simply equals their outer expansion.

The matching conditions
So far, we have constructed outer expansions, which are supposed to hold inside the material regions Ω i for i = 1, . . ., N , as well as inner expansions, which are supposed to hold in a tubular neighborhood around the sharp interfaces Γ ij . Note that due to the construction in the previous section, the thickness of this tubular neighborhood is proportional to ε. In order to be compatible, both expansions must coincide in a suitable intermediate region, which is enforced by suitable matching conditions. This region is approximately given by all points x ∈ Ω with the property dist(x, Γ) ≤ ε θ for some fixed θ ∈ (0, 1). This means we stretch the tubular neighborhood the inner expansions were constructed on from a thickness proportional to ε to a thickness proportional to ε θ and relate both expansions in this region. These matching conditions will be expressed as limit conditions for the inner expansions when ε → 0 or, equivalently, z → ±∞, depending on which side we approach the interface from. This procedure is standard in the context of formally matched asymptotics, and we only state the matching conditions; for the computations, see [41].
Using the notation for the lowest order term, we have the matching condition (5.2) For the term of order O(ε), we have for all x = γ(s) ∈ Γ = Γ ij . Note that here the symbol ≈ means that the difference of the left-hand side and the right-hand side, as well as all its derivatives with respect to z, tends to zero as z → ±∞. In particular, (5.3) provides us with as z → −∞. (5.4) The analogous relations also hold true for the expansions of w ε nr . In the following, we will also see that the jump across the interfaces Γ ij is an important quantity. It is defined by for any x = γ(s) ∈ Γ = Γ ij . Now, we have made all the necessary computations to analyze the state equation and the gradient equality near the interfaces Γ ij . In particular, we are able to investigate their behavior as ε → 0.
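Writing φ 0 ± and φ 1 ± for the one-sided limits of the outer terms at x = γ(s) ∈ Γ (from Ω j and Ω i , respectively), the matching conditions presumably take the standard form of formally matched asymptotics; a sketch:

```latex
% Sketch of the matching conditions (5.2)--(5.4) in standard form.
\hat\Phi_0(s,z) \;\to\; \varphi_0^{\pm}(x)
  \quad \text{as } z \to \pm\infty, \qquad
\hat\Phi_1(s,z) \;\approx\; \varphi_1^{\pm}(x)
  + \big(\nabla \varphi_0^{\pm}(x)\, \nu\big)\, z
  \quad \text{as } z \to \pm\infty,
```

so that, in particular, differentiating the second relation with respect to z yields ∂ z Φ̂ 1 (s, z) → ∇φ 0 ± (x) ν as z → ±∞, which is the content of (5.4) for z → −∞.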

Comparison of the leading order terms
Now, we want to apply our knowledge about the inner and outer expansions to the optimality system consisting of (SE ε ) and (GS ε ). This means we apply the formulas for the differential operators discussed in Section 4.3 to the optimality system, compare the terms with the same order in ε, and apply the matching conditions. In this section, we will suppress the index n r to provide a clearer notation.

Comparison of the leading order terms in the state equation
We point out that our state equation (SE ε ) differs from the one in [20] only in terms of the right-hand side. In contrast to [20], where the right-hand side is just a given function f , our right-hand side is given by In particular, it depends on the phase-field φ as well as the corresponding eigenvalue λ ε,φ and its associated eigenfunction w ε,φ . Recall that the inner expansion of λ ε,φ equals its outer expansion (cf. Remark 4.5), as the eigenvalue does not depend locally on x ∈ Ω. As no derivatives of ρ, w ε,φ or φ are involved, we conclude that the inner expansion of (6.1) possesses only summands of non-negative orders in ε. As the discussion of the left-hand side of the state equation works exactly as in [20], we can proceed in a completely analogous manner. We will therefore only summarize the most important results.
For the functions W 0 and W 1 involved in the inner expansion of the eigenfunction, we deduce the following relations for all x = γ(s) ∈ Γ ij : Here and in the remainder of this paper, the expression "around Γ ij " means that the statement is valid in a sufficiently thin tubular neighborhood around Γ ij where our inner expansions hold. We thus arrive at the jump condition

[w 0 ] j i = 0 for all i, j = 1, . . ., N . (6.7)

However, we point out that the jump condition on an interface between a material region and a void region (i.e., i = N or j = N ) is negligible, as we do not have any information about the behavior of w 0 in the void. In other words, we will obtain a closed system of PDEs forming the state equation of the sharp-interface problem in Section 7 without needing this additional jump condition at the material-void boundary.
For the function C(Φ 0 (z)), where Φ 0 is the lowest order term of the inner expansion of the phase-field, we obtain: Here, the convergence (6.9) follows due to the additional factor ε in the void contribution of C(φ ε ) (see (4.3)).
Eventually, we obtain that holds on each Γ ij with i ≠ N , where

Comparison of the leading order terms in the gradient equality
Now, we want to analyze the gradient equality (GS ε ), which reads as Here, we recall that the Lagrange multipliers were constructed in Theorem 3.12 in such a way that their sum appearing in the gradient equality (6.11) is scaled by the factor 1/ε. We now assume the Lagrange multipliers to have the following inner asymptotic expansions: Furthermore, in order to deal with the nonlinear terms in (6.11) involving C ′ , ρ ′ , ψ ′ 0 and ∂ λn r Ψ, we perform a (componentwise) Taylor expansion around the leading order term Φ 0 to obtain the inner expansions We now take a closer look at the quantities C ′ (Φ 0 ) and ρ ′ (Φ 0 ). To this end, we recall the definition of ρ in (2.9), which reads as Thus, it is clear that ρ ′ = ρ̃ ′ + O(ε). Note that we can write the projection P Σ as for φ ∈ R N , where 1 = (1, . . ., 1) T ∈ R N . For the partial derivatives with respect to φ j with j ∈ {1, . . ., N }, we thus obtain and therefore, where δ ij denotes the Kronecker delta. Inserting Φ 0 (which belongs pointwise to G ⊂ Σ N , so that no projection is necessary) and recalling that σ ω is the identity on [0, 1] (cf. (2.8)), using (2.6) we arrive at Thus, considering the inner expansion of ρ ′ (φ ε ) to the lowest order O(1) gives Thus, it obviously holds ρ ′ (Φ 0 ) ∈ T Σ N . The function C ′ (Φ 0 ) can be expressed analogously.
Altogether, this allows us to drop the projection acting on the left-hand side of (6.11) when considering only the lowest order contributions. Now that we have considered all the quantities appearing in (6.11), we begin with our formal asymptotics. First of all, applying formula (4.19) to the lowest order contribution W 0 of the inner expansion of w ε , we find that Comparing the contributions of order O(ε −2 ) in (6.11), we use (6.15) to obtain This equation is obviously fulfilled, since ∂ z W 0 vanishes according to (6.3).
Let us now consider (6.11) to order O(ε −1 ). First of all, we infer from (6.3) and (6.15) that the left-hand side has no contribution of order O(ε −1 ). We thus have where we used formula (4.21) to compute the Laplacian. Multiplying (6.16) by ∂ z Φ 0 and integrating with respect to z from −∞ to ∞, we deduce Now, we consider each of the terms in (6.17) separately. First of all, we see where the last equality follows from the matching condition (5.2).
As Φ 0 ∈ G pointwise, we know that ∂ z Φ 0 ∈ T Σ N pointwise. Hence, we obtain For the last equality, we used the fact that ψ 0 vanishes at e i for i = 1, . . ., N , along with the matching condition (5.2). We have thus shown that the right-hand side of (6.17) vanishes.
Recall from (3.26) that Λ ε is identical in each component. It is therefore natural to assume that every term in the inner expansion of Λ ε also has this property. Thus, recalling that where Λ 0 denotes an arbitrary component of Λ 0 .
Recall from Theorem 3.12 that ϑ ε ∈ R N is constant. Thus, assuming that this property transfers to the inner expansion, i.e., ϑ 0 is independent of z, we infer by means of the matching condition (5.2) that Eventually, we want to justify that the remaining Lagrange multiplier fulfills Therefore, we recall (3.27), which tells us for i = 1, . . ., N that Using [48, Lemma 7.7], we infer for all i ∈ {1, . . ., N } that Using (4.17) and comparing the terms of order O(ε −1 ), we deduce for all i ∈ {1, . . ., N }. In particular, by multiplying with ν and integrating with respect to z from −∞ to ∞, we arrive at for all i ∈ {1, . . ., N }. This proves (6.22).
Combining (6.18)–(6.22), we conclude from (6.17) that ϑ 0 • (e j − e i ) = 0 for all i, j = 1, . . ., N , meaning that all components of ϑ 0 are equal. Since ϑ ε ∈ T Σ N in (3.28), we also assume ϑ 0 ∈ T Σ N . This implies that ϑ 0 = 0 and thus, (6.16) can be rewritten as Let now z ∈ R be arbitrary. Multiplying (6.25) by ∂ z Φ 0 and integrating with respect to z from −∞ to z, we obtain Here, the last equality holds because of (6.20) and (6.22). By the fundamental theorem of calculus, we thus have for all z ∈ R. We further know from the matching condition (5.2) that the left-hand side vanishes as z → ±∞. This entails and thus, we arrive at In order to obtain further information, we next show that (6.25) can be interpreted as the first-order optimality condition of a particular optimization problem that is similar to the minimization of the one-dimensional Ginzburg-Landau energy. Therefore, we first assume that the minimization problem with the boundary conditions θ(1) = e j and θ(−1) = e i (6.28) possesses a minimizer, which we call θ ij . This means that θ ij is a geodesic with respect to the degenerate metric induced by the potential ψ 0 that connects the values e i and e j . Now, proceeding as in [68, proof of formula (15)], this geodesic can be used to construct a minimizer Φ of (6.29). This means that Φ describes an optimal transition between the values e i and e j . As in [68, proof of formula (15)], we further see that Φ solves (6.25) and (6.27), where Λ 0 + µ 0 is the Lagrange multiplier for the Gibbs simplex constraint. Consequently, choosing Φ 0 = Φ, we have found a solution of (6.25) and (6.27). Moreover, [68, formula (15)] states that 2σ ij is exactly the value of the minimum sought in (6.29).
As the minimizer Φ 0 = Φ of (6.29) satisfies (6.27), we further conclude which will be important for later purposes.
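For orientation, the two identities at work here are, up to the chosen normalization, the classical equipartition of energy for the optimal profile and the resulting surface tension; a sketch, assuming that (6.27) is the equipartition identity and that, as stated above, 2σ ij is the minimal transition energy in (6.29):

```latex
\tfrac12\,|\partial_z \Phi_0(z)|^2 = \psi_0\big(\Phi_0(z)\big)
  \quad \text{for all } z \in \mathbb{R}, \qquad
2\sigma_{ij}
  = \int_{-\infty}^{\infty} \tfrac12\,|\partial_z \Phi_0|^2
    + \psi_0(\Phi_0) \,\mathrm{d}z
  = \int_{-\infty}^{\infty} |\partial_z \Phi_0|^2 \,\mathrm{d}z.
```

The second equality follows directly by inserting the equipartition identity into the transition energy.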
Finally, we consider (6.11) to the order O(1). Using (4.21) to reformulate the term γε∆φ, employing (6.3), and recalling that (6.14) holds analogously for C ′ (Φ 0 ), we conclude We now multiply this equation by ∂ z Φ 0 and integrate with respect to z from −∞ to ∞. Let us consider each term of the resulting equation separately. Analogously to (6.20) and (6.21), we obtain Considering the Lagrange multiplier µ, we recall (6.23). Due to (4.17), its contribution of leading order O(1) in inner coordinates is given by for all i ∈ {1, . . ., N }. Multiplying this identity by ν and integrating the resulting equation with respect to z, we infer Furthermore, applying integration by parts twice and using that, due to the matching condition (5.2), all derivatives of Φ 0 with respect to z tend to 0 as z → ±∞, we obtain As ∂ z Φ 0 attains its values only in T Σ N , we deduce due to the symmetry of the Hessian matrix. Moreover, recalling that W 0 is independent of z due to (6.3), a simple computation yields Furthermore, by the definition of the dyadic product, it holds Now we use (6.3) (which directly entails ∂ z ∇ Γ W 0 = 0) and (6.4) by means of the product rule and integration by parts.
To conclude this section, we note that according to [20, Section 5.3] or [28, Section 2.4], equation (6.25) induces a further solvability condition, namely an angle condition at triple junctions. To see this, let us assume that the regions Ω i , Ω j , Ω k (with i, j, k pairwise different) meet at a triple point m ijk in the two-dimensional case or along a triple curve m ijk in the three-dimensional case. Then the angle condition is expressed via the normals of the three meeting interfaces as follows This relation immediately implies the angle condition where θ ij denotes the angle between n Γ jk and n Γ ki . For the choice ψ 0 (φ) = 1/2 (1 − φ • φ), we deduce that the transition energies σ ij , σ jk , σ ki are always equal. This follows by exploiting the symmetry of this potential in (6.28). Thus, in this case, we know that triple junctions always occur at a 120° contact angle.
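As a quick numerical sanity check (not from the paper), one can verify that the force balance σ ij n Γij + σ jk n Γjk + σ ki n Γki = 0 with equal surface tensions is solved in two dimensions by three unit normals at mutual angles of 120°:

```python
import numpy as np

# Equal surface tensions (set to 1 without loss of generality) and a
# candidate configuration: three unit normals at mutual angles of 120 deg.
angles = [0.0, 2 * np.pi / 3, 4 * np.pi / 3]
normals = [np.array([np.cos(a), np.sin(a)]) for a in angles]

# Force balance sigma*(n1 + n2 + n3) = 0 is satisfied:
balance = normals[0] + normals[1] + normals[2]
print(np.linalg.norm(balance))  # numerically ~ 0

# and each pair of normals encloses an angle of 120 degrees:
for i in range(3):
    for j in range(i + 1, 3):
        c = np.clip(normals[i] @ normals[j], -1.0, 1.0)
        print(np.degrees(np.arccos(c)))  # ~ 120
```

This is of course only a consistency check of the stated configuration, not a derivation of the angle condition itself.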

The sharp-interface problem
Now we are in a position to state the complete problem that is obtained from (SE ε ) and (GI ε ) in the sharp-interface situation.

The sharp-interface limit of the state equation
Therefore, we recall that the domain Ω is partitioned into N regions Ω i for i = 1, . . ., N , representing the presence of the i-th material (i < N ) or void (i = N ) in its pure form. These regions are separated by interfaces Γ ij . Furthermore, we have chosen n Γij to be the unit normal vector field on Γ ij pointing from Ω i into the region Ω j . This means that x + δn Γij (x) ∈ Ω j and x − δn Γij (x) ∈ Ω i for all x ∈ Γ ij and all sufficiently small δ > 0.
To capture the behavior of a function v across the interface Γ ij , we defined its jump by for all x ∈ Γ ij , see (5.5).
Combining the equations (SE i 0 ) derived in Claim 4.1 and the jump conditions obtained in (6.5) and (6.10), we obtain the system for i, j = 1, . . ., N − 1 and r = 1, . . ., l, as the sharp-interface limit of the state equation (SE ε ).
Here, w 0,nr is normalized in the material regions, i.e.,

Σ_{i=1}^{N−1} ∫_{Ω i } ρ i |w 0,nr |² dx = 1. (7.1)

Furthermore, we infer from (6.10) that

[w 0,nr ] N i = 0 on Γ iN (7.2)

for all i ∈ {1, . . ., N − 1} and each r ∈ {1, . . ., l}. However, this condition does not provide any additional information, as we do not know how w 0,nr behaves in the void region. In particular, we see that by interpreting (SE ij r ) as one system of PDEs in the material region ⋃_{i=1}^{N−1} Ω i , the homogeneous Neumann boundary condition in the fourth line of (SE ij r ) is enough to obtain a closed system.
Combining the Neumann-type jump condition on Γ ij stated in the second line of (SE ij r ) with the normalization condition (7.1), we obtain the relation

∫_{Ω M } C M E(w 0,nr ) : E(w 0,nr ) dx = λ 0,nr , (7.3)

with C M := Σ_{i=1}^{N−1} 1 Ωi C i , where 1 Ωi denotes the characteristic function of Ω i . This means that the eigenvalue λ 0,nr in the sharp-interface setting is indeed solely determined by an eigenvalue equation on the material region Ω M and does not receive any contribution from the void region.

To verify (7.3), we test (SE ij r ) with w 0,nr and integrate by parts. This yields an identity on each Ω i for all i ∈ {1, . . ., N − 1}, where n Γi stands for the outer unit normal vector field of ∂Ω i . Noticing that the outer unit normal vector simply switches its sign on neighboring boundaries, we now use the second and the fourth line of (SE ij r ) to infer

Σ_{i=1}^{N−1} ∫_{∂Ω i } C i E(w 0,nr )n Γi • w 0,nr dΓ = 0.
By the linearity of the integral, this directly proves (7.3).
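The identity (7.3) mirrors a familiar linear-algebra fact: for a generalized eigenproblem Ku = λMu with an M-normalized eigenvector (uᵀMu = 1), testing the equation with the eigenvector itself gives uᵀKu = λ. A small illustration, with random symmetric positive definite matrices standing in for the elastic stiffness and mass forms:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n))
K = A @ A.T + n * np.eye(n)   # SPD stand-in for the elastic stiffness form
B = rng.standard_normal((n, n))
M = B @ B.T + n * np.eye(n)   # SPD stand-in for the mass form

# Generalized eigenproblem K v = lam M v; scipy normalizes the
# eigenvectors so that V.T @ M @ V = I (discrete analogue of (7.1)).
lams, V = eigh(K, M)

# Testing the equation with the eigenvector itself recovers the
# eigenvalue, the discrete analogue of (7.3): v^T K v = lam.
for lam, v in zip(lams, V.T):
    assert abs(v @ M @ v - 1.0) < 1e-8
    assert abs(v @ K @ v - lam) < 1e-8
```

The matrices here are arbitrary; the point is only the interplay of normalization and Rayleigh quotient that also underlies (7.3).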
Remark 7.1. As a refinement of Remark 4.2 (a), we now see that as long as at least one of the material regions Ω 1 , . . ., Ω N −1 shares a sufficiently nice part of its boundary with Γ D , we can apply Korn's inequality in order to deduce that all λ 0,nr are strictly positive. From a physical point of view, this is reasonable: if the material region Ω M of the structure is not attached to some fixed boundary, the shape can move freely within the design domain by translation without exhibiting any vibrations.

The sharp-interface limit of the first-order optimality condition
Now let us turn to the limit of the gradient inequality (GI ε ). For the sake of completeness, let us restate our final results from the previous section, i.e., (6.45) and (6.46). We have

[∂ λn r Ψ](λ 0,n1 , . . ., λ 0,n l ) • ( [CE(w 0,nr ) : E(w 0,nr )] j i − 2 [CE(w 0,nr )ν • ∇w 0,nr ν] j i ) (7.5)

on Γ ij for all i, j = 1, . . ., N − 1, and Here, σ ij is defined as in (6.28) and stands for the total energy of a transition across the interface Γ ij . The vector ϑ 1 ∈ R N denotes the O(ε)-contribution of the Lagrange multiplier resulting from the integral constraint ⨍ Ω φ ε dx = m that is hidden in the condition φ ε ∈ G m (cf. Theorem 3.12).
Recalling (6.47), we additionally have the triple junction condition at any junction m ijk with pairwise distinct i, j, k ∈ {1, . . ., N }.

The sharp-interface optimality system in the case of only one material
We now want to state above equations for the simplest case of only one single material (i.e., N = 2) as this is the scenario we further study in the subsequent sections.
In this case, we have Ω = Ω M ∪ Ω V , where Ω M and Ω V denote the material and the void parts of the domain, respectively. We denote the interface separating the two phases by Γ M V , its outer unit normal vector field by n Γ M V and its mean curvature by κ. Using the notation Γ M D := Γ D ∩ ∂Ω M and Γ M 0 := Γ 0 ∩ ∂Ω M , we obtain from (SE ij r ), (7.1) and (7.2) the state equation for r = 1, . . ., l, along with the first-order necessary optimality condition on Γ M V . This means that the functions w 0,nr are eigenfunctions to the eigenvalues λ 0,nr , which essentially solve the eigenvalue problem for the elasticity equation subject to a homogeneous Neumann boundary condition on the shape Ω M .
Remark 7.2. Note that, in general, one cannot predict the behavior of solutions to (SE M V r ). If Ω M is merely a set of finite perimeter that does not have a Lipschitz boundary, or if Γ M V ∩ Γ M D = ∅, the classical spectral theory (as applied in Section 2.5) does not provide us with an infinite sequence of positive eigenvalues. Nevertheless, as we want to consider a well-posed minimization problem and want to calculate shape derivatives associated with this problem, we assume that these issues do not occur. In particular, we always assume Ω M to be sufficiently smooth and ∂Ω M to have a suitably nice intersection with Γ M D such that an infinite sequence of positive eigenvalues actually exists (see also Remark 7.1).

Relating the first-order optimality condition to classical shape calculus
We now want to compare the above results, especially (7.7), to the results in [7], which were obtained using classical shape calculus. Our goal is to justify that the gradient equality (7.7) is indeed the first-order condition of a sharp-interface eigenvalue optimization problem, which is formally the limit of the diffuse-interface problem we started with. Therefore, we need to fit the notation of [7] to our setting.
As above, consider the situation N = 2, i.e., Ω = Ω M ∪ Ω V . Denote by P Ω (Ω M ) the perimeter of the shape Ω M within the design domain Ω, which is given by the Hausdorff measure H d−1 (∂Ω M ∩ Ω), provided that Ω M is non-empty and sufficiently smooth. Furthermore, we consider a prescribed mass m = |Ω M | < |Ω|. In order to be consistent with the notation used in the previous chapters, we choose m accordingly. This yields the sharp-interface structural optimization problem that we intend to approximate via our diffuse-interface approach.

Proof. We proceed analogously to [7, Theorem 2.5]. In the following, Ω ζ = (Id + ζ)(Ω M ) denotes the perturbation of Ω M associated with a sufficiently small deformation ζ. For the partial Fréchet derivatives of the Lagrangian with respect to v nr for r = 1, . . ., l at the point (Ω ζ , w n1 (Ω ζ ), . . ., w n l (Ω ζ )), we obtain This is simply due to the fact that the derivative of the Rayleigh quotient appears here, and this vanishes due to (SE M V r ).
On the other hand, recalling the definition of J in (P 0 l ), we obviously have as the eigenvalues can be expressed by the corresponding Rayleigh quotients. Note that due to the differentiability of eigenfunctions as discussed in Remark 8.2, we can now apply the chain rule. Thus, using (8.2), we infer that the shape derivative is given by Applying the formulas for shape derivatives in [7, Lemma 2.3], we deduce where κ M denotes the mean curvature of ∂Ω M . By the assumption θ • n ∂Ω M = 0 on ∂Ω M \ Γ M V , the boundary integrals vanish on ∂Ω M \ Γ M V , and we thus arrive at (8.1). Note that in [7], the mean curvature is defined as κ = ∇ ∂Ω M • n ∂Ω M , whereas (in accordance with (4.23)) our mean curvature is given by κ = −∇ ∂Ω M • n ∂Ω M . This explains the negative sign of our term involving κ M .
Remark 8.3. The preceding theorem shows that, using the approach of classical shape calculus and additionally taking the volume constraint |Ω_M| = m into account, we recover the gradient equality (7.7), since the volume constraint produces a Lagrange multiplier as in our previous analysis. This justifies our formal approach from the viewpoint of classical shape calculus, since (7.7) can be interpreted as the first-order necessary optimality condition of the shape optimization problem (P^0_l).
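The structural fact used in the proof above, namely that the Rayleigh quotient is stationary at an eigenfunction, so that only the explicit dependence on the design contributes to the derivative of an eigenvalue, can be illustrated in a finite-dimensional sketch. The matrices K(p) and M(p) below are hypothetical stand-ins for the assembled stiffness and mass operators, with the scalar p playing the role of the design variable; this is a Hellmann–Feynman-type toy computation, not the actual PDE setting.

```python
import numpy as np
from scipy.linalg import eigh

# Toy generalized eigenvalue problem K(p) w = lam M(p) w, where the scalar
# parameter p is a stand-in for the design (shape/phase-field) dependence.
def K(p):
    return np.array([[2.0 + p, -1.0], [-1.0, 2.0]])

def M(p):
    return np.array([[1.0 + 0.5 * p, 0.0], [0.0, 1.0]])

def lam1(p):
    return eigh(K(p), M(p))[0][0]  # smallest generalized eigenvalue

p = 0.3
vals, vecs = eigh(K(p), M(p))
lam, w = vals[0], vecs[:, 0]       # eigh returns M-normalized eigenvectors

# Stationarity of the Rayleigh quotient at the eigenvector w means only the
# explicit p-dependence survives: d(lam)/dp = w' K'(p) w - lam * w' M'(p) w.
dK = np.array([[1.0, 0.0], [0.0, 0.0]])   # derivative of K(p) w.r.t. p
dM = np.array([[0.5, 0.0], [0.0, 0.0]])   # derivative of M(p) w.r.t. p
analytic = w @ dK @ w - lam * (w @ dM @ w)

# Finite-difference check of the eigenvalue derivative.
h = 1e-6
fd = (lam1(p + h) - lam1(p - h)) / (2.0 * h)
assert abs(analytic - fd) < 1e-5
```

The finite-difference check confirms that the implicit dependence through the eigenvector contributes nothing, which is exactly why the Fréchet derivatives with respect to the eigenfunctions vanish in the proof.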

Numerical Examples
In the following, we present numerical results that illustrate the applicability of our approach to find optimal topologies. After a brief introduction of the numerical method, we investigate the dependence of solutions on the parameter ε in Section 9.1. To this end, we study a particular setting of an elastic beam that is known from the literature (cf. [7]). In Section 9.2, we consider a joint optimization of λ1 and λ2 for this beam setup, and in Section 9.3, we investigate an extended optimization problem in which we optimize the shape and topology of this beam not only with respect to its first eigenvalue but also with respect to its compliance.
As in Subsection 7.3 and Section 8, we restrict ourselves to the case of only two phases, i.e., material and void. In this situation, the vector-valued phase field φ = (φ1, φ2) can be represented by a scalar order parameter taking values in [−1, 1], and both the Ginzburg-Landau energy and the interpolated material quantities (such as the elasticity tensor and, analogously, the coefficient α) can be rewritten in terms of this scalar variable.
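A minimal sketch of this scalar reduction, assuming the common convention that the scalar order parameter is φ1 − φ2 with value +1 in the material and −1 in the void; the function names and the linear blend below are illustrative, not the paper's exact formulas.

```python
import numpy as np

# Reduce the vector phase field phi = (phi1, phi2) with phi1 + phi2 = 1
# to a scalar order parameter s = phi1 - phi2 in [-1, 1].
def to_scalar(phi1, phi2):
    assert np.allclose(phi1 + phi2, 1.0)
    return phi1 - phi2

# Interpolate a material coefficient (e.g. an elastic modulus, or analogously
# the coefficient alpha) between its void value (s = -1) and its material
# value (s = +1).  A linear blend is shown; other interpolations work alike.
def interpolate(s, c_void, c_material):
    return c_void + 0.5 * (1.0 + s) * (c_material - c_void)

s = to_scalar(np.array([1.0, 0.5, 0.0]), np.array([0.0, 0.5, 1.0]))
c = interpolate(s, c_void=1e-4, c_material=1.0)  # material, midpoint, void
```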
Numerical Solution Method. The numerical implementation is based on linear finite elements for all functions, provided by the finite element package FEniCS [9,57] together with the PETSc linear algebra backend [12,13]. For the eigenvalue problem, we use the package SLEPc [51]. The optimization problem is solved by the VMPT method proposed in [24].
In our case, it can be understood as an extension of the projected gradient method to the space H^1(Ω) ∩ L^∞(Ω). We refer to [24,43,44] for more details.

Table 1: Scaled Ginzburg-Landau energy γE_ε(φ) and principal eigenvalue λ1 of the optimal beam shape for decreasing values of ε. This indicates that the values E_ε(φ) and λ1 converge as ε decreases.
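The FEniCS/PETSc/SLEPc pipeline itself is not reproduced here; as a rough, self-contained stand-in for the eigenvalue solve, the following sketch computes the smallest eigenvalues of a 1D Dirichlet Laplacian with scipy's sparse shift-invert solver. The discretization and matrices are illustrative, not the paper's elasticity problem.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 1D Laplacian with homogeneous Dirichlet conditions on (0, 1), discretized
# by finite differences -- a stand-in for the assembled stiffness matrix K;
# the mass matrix M is taken as the identity.
n = 200
h = 1.0 / (n + 1)
K = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)).tocsc() / h**2
M = sp.identity(n, format="csc")

# Smallest eigenvalues via shift-invert around sigma = 0, the mode in which
# SLEPc-style solvers are typically run for such problems.
vals, vecs = eigsh(K, k=2, M=M, sigma=0.0)
vals = np.sort(vals)
# The exact eigenvalues are (k*pi)^2, k = 1, 2, ...; the discrete ones
# converge to them as n grows.
```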
Figure 1: The zero level lines of the beam for the ε → 0 test for all tested ε. The darker the line, the smaller ε. We observe that the interface seems to stabilize with decreasing values of ε and that it only mildly depends on ε.
Numerical investigation of the sharp-interface limit ε → 0

In this section, to illustrate the sharp-interface limit, we present numerical results for a sequence of decreasing values of ε.
We want to emphasize that this problem is expected to have many local minima and thus, the choice of initial function can significantly influence the shape and topology of the local minimizer found by our numerical method.
We now solve the optimization problem for a decreasing sequence of values of ε. In Table 1, we present the values of ε together with the corresponding value of the Ginzburg-Landau energy E_ε(φ) = ∫_Ω ε/2 |∇φ|^2 + 1/(2ε) (1 − φ^2) dx and the eigenvalue λ1. Recall here that the values of the Ginzburg-Landau energy converge to a weighted perimeter of the shape in the sharp-interface limit ε → 0. In Figure 1, we present the zero level lines of the (locally optimal) shapes we obtain for different values of ε. Here, we started with ε = 0.08 and used the local optimum as initial value for the subsequent simulations.
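The ε-independence behind this convergence can be checked in one dimension. Assuming the potential (1 − φ²)/2 from the energy above, the optimal interface profile is sinusoidal across a layer of width επ, and its energy equals π/2 for every ε; the quadrature below is a sketch of this computation, not part of the actual optimization.

```python
import numpy as np

# 1D Ginzburg-Landau energy with the potential (1 - phi^2)/2 used above,
# evaluated on the optimal transition profile between -1 and +1.
def gl_energy(eps, n=200_001):
    x = np.linspace(-1.0, 1.0, n)
    # Sinusoidal profile across a layer of width eps*pi, constant outside.
    phi = np.sin(np.clip(x / eps, -np.pi / 2.0, np.pi / 2.0))
    dphi = np.gradient(phi, x)
    density = 0.5 * eps * dphi**2 + (1.0 - phi**2) / (2.0 * eps)
    # Trapezoidal rule.
    return float(np.sum(0.5 * (density[1:] + density[:-1]) * np.diff(x)))

# The interfacial energy is pi/2 for every eps -- only the layer width
# shrinks, matching the convergence of E_eps to a perimeter functional.
for eps in (0.2, 0.1, 0.05):
    assert abs(gl_energy(eps) - np.pi / 2.0) < 1e-2
```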

Optimization of a beam
As a first test, we illustrate the influence of the regularization strength γ on the resulting structure. The parameter γ acts as a weight for the penalization of the length of the interface between void and material. Thus, a smaller value of γ is expected to lead to thinner structures containing more braces. Using the same setup as before, we solve the optimization problem for the cantilever beam again, but this time we fix ε = 0.02 and perform two simulations with γ ∈ {10^−4, 10^−5}. For the smaller γ, we also expect to reach a larger value of λ1, because less regularization is used.
In Figure 2, we present the structures found for these parameters: on the left the result for γ = 10^−4, on the right the one for γ = 10^−5. As expected, it is clearly visible that the structure obtained for the smaller value of γ is finer and contains more braces. Additionally, decreasing γ also leads to sharper corners.

Joint optimization of compliance and principal eigenvalue
In this subsection, we extend the problem by using a linear combination of compliance and the first eigenvalue as objective. For any given φ ∈ H^1(Ω) ∩ L^∞(Ω), the compliance problem is to find a displacement field u_c^φ ∈ H^1(Ω; R^d) solving the corresponding elastic state equation. This means that we are looking for a structure that simultaneously minimizes the compliance with respect to a given force g and maximizes the first eigenvalue λ1. The optimization problem (9.5) is actually a special case of the compliance and eigenvalue optimization problems studied in [45] (with the quantities therein chosen as N = 2, α = 1, β = 0, f ≡ 0, J_0 ≡ 0 and U_c = H^1(Ω), i.e., S_0 = S_1 = ∅). More details about the formulation of the problem (9.5) can be found in [45, Section 2.7]. For the optimality system of (9.5), which we are going to solve numerically, we refer to [45, Theorem 7.2].

Table 2: The first and second eigenvalue (λ1 and λ2) for the optimal topologies for the beam example with Ψ(λ1, λ2) = −λ1 − αλ2. As expected, for larger weights α we reach a lower value for λ1 and a larger value for λ2.
For the sharp-interface limit of compliance optimization problems without eigenvalue optimization (such as (9.5) with α set to zero), we refer to [20]. As our sharp-interface analysis for eigenvalue optimization problems without compliance optimization relies on the same expansions as in [20], both approaches can be combined to formally derive the sharp-interface limit of problem (9.5).
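Schematically, and in a finite-dimensional toy setting (the matrices and load below are hypothetical, and the sign and weighting convention is only one way to realize the linear combination described above), the joint objective couples a compliance solve with an eigenvalue solve:

```python
import numpy as np
from scipy.linalg import eigh, solve

# Toy stand-ins for the assembled stiffness matrix K, mass matrix M and
# load vector g of the discretized problem.
K = np.array([[4.0, -1.0], [-1.0, 3.0]])
M = np.eye(2)
g = np.array([0.0, -1.0])

def joint_objective(K, M, g, alpha):
    u = solve(K, g)              # compliance state equation K u = g
    compliance = g @ u           # compliance g . u (to be minimized)
    lam1 = eigh(K, M)[0][0]      # principal eigenvalue (to be maximized)
    return compliance - alpha * lam1

# Increasing the weight alpha shifts the emphasis from small compliance
# towards a large first eigenvalue.
J10 = joint_objective(K, M, g, alpha=10.0)
J100 = joint_objective(K, M, g, alpha=100.0)
```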
For our numerical computations, we use the same setup as in Section 9.2 for the beam example and fix γ = 10^−3. Moreover, the exterior force is g = (0, −1)^T and acts on Γ_g = {(2.0, y) | y ∈ (0.45, 0.55)}. Note that Γ_g belongs to the boundary of the domain Ω_ρ on which we assume a higher value of the density ρ.
In Figure 4, we show numerical results for this setting for different values of α. We observe that the structures become finer when we increase the influence of the principal eigenvalue. In Table 3, we present the corresponding values of the compliance and of λ1 for these shapes. As expected, we obtain a larger compliance when we increase the weight α of the principal eigenvalue; simultaneously, we also obtain larger values of the principal eigenvalue itself. It is worth mentioning that these results compare very well with the ones obtained in [7], where a level-set method was used to directly tackle the sharp-interface problem (see especially Fig. 2 and Fig. 5 in [7]).

Figure 4: Numerical results for the joint optimization of compliance and principal eigenvalue with weight α ∈ {10, 100, 500} (left to right). We observe that increasing the weight α of the first eigenvalue locally leads to a finer structure.

Table 3: Values of the compliance and the principal eigenvalue λ1 for the joint optimization of compliance and principal eigenvalue with weight α ∈ {10, 100, 200, 500}. We observe that increasing α leads, as expected, to larger values of the principal eigenvalue and larger values of the compliance.