An interface-enriched generalized finite element method for level set-based topology optimization

During design optimization, a smooth description of the geometry is important, especially for problems that are sensitive to the way interfaces are resolved, e.g., wave propagation or fluid-structure interaction. A level set description of the boundary, when combined with an enriched finite element formulation, offers a smoother description of the design than traditional density-based methods. However, existing enriched methods have drawbacks, including ill-conditioning and difficulties in prescribing essential boundary conditions. In this work, we introduce a new enriched topology optimization methodology that overcomes the aforementioned drawbacks; boundaries are resolved accurately by means of the Interface-enriched Generalized Finite Element Method (IGFEM), coupled to a level set function constructed by radial basis functions. The enriched method used in this new approach to topology optimization has the same level of accuracy in the analysis as the standard finite element method with matching meshes, but without the need for remeshing. We derive the analytical sensitivities and we discuss the behavior of the optimization process in detail. We establish that IGFEM-based level set topology optimization generates correct topologies for well-known compliance minimization problems.


Introduction
The use of enriched finite element methods in topology optimization approaches is not new; the eXtended/Generalized Finite Element Method (X/GFEM) [1,2,3,4,5], for example, has been explored in this context. However, the Interface-enriched Generalized Finite Element Method (IGFEM) has been shown to have many advantages over X/GFEM [6,7]. In this work we extend IGFEM for use in a level set-based topology optimization framework.
Topology optimization, first introduced by Bendsøe and Kikuchi [8], has been widely used to obtain designs that are optimized for a certain functionality, e.g., minimum compliance. In the commonly-used density-based methods, a continuous design variable that represents a material density is assigned to each element in the discretization. The design is pushed towards a black and white design by means of an interpolation function, e.g., the Solid Isotropic Material with Penalisation (SIMP) [9], that disfavors intermediate density values, also referred to as gray values. A filter is then required to prevent checkerboard-like density patterns and to impose a minimum feature size. However, the filter itself introduces gray values. Density-based topology optimization is straightforward to implement and widely available in both research codes and commercial software. However, because the topology is described by a density field on a (usually) structured mesh, material interfaces not only contain gray values but also suffer from pixelization, or staircasing: staggered boundaries that follow the finite element mesh. Although a postprocessing step can be performed to smooth the final design, the analysis during optimization is still based on gray density fields and a staircased representation. This may be detrimental to the accuracy of the approximate solution, especially in cases that are sensitive to the boundary description, such as flow problems [10]. Furthermore, because the location of the material boundary is not well defined, it is difficult to track the evolving boundary during optimization, for example to impose contact constraints.
The aforementioned drawbacks could be alleviated by the use of geometry-fitted discretization methods, which have been widely used in shape optimization [11]. In these methods, the location of the material interface is known throughout the optimization, and the analysis mesh is modified to completely eliminate the pixelization.

Figure 1: Mathematical representation of a topology optimization design domain Ω. Essential and natural boundary conditions are prescribed on the parts of the boundary denoted Γ u and Γ t , respectively. The material domain is referred to as Ω m , while the void region is denoted Ω v . The insert shows the discretization with a material interface, defined by the zero contour of the level set function φ, that is non-matching to the mesh. Original mesh nodes and enriched nodes are denoted with distinct symbols.

The design domain Ω is subjected to essential (Dirichlet) boundary conditions on Γ u , and to natural (Neumann) boundary conditions on Γ t , such that Γ = Γ u ∪ Γ t and Γ u ∩ Γ t = ∅. The material boundary, Γ m = ∂Ω m ∩ ∂Ω v \ Γ, is defined implicitly by the zero contour of a level set function, φ (x) = 0, where x is the spatial coordinate.
For any iteration in the elastostatic optimization procedure, the boundary value problem is solved with prescribed displacements ū : Γ u → R d , prescribed tractions t̄ : Γ t → R d , and body forces b i ≡ b| Ωi : Ω i → R d , where i = m, v. We denote the field u i as the restriction of u to domain Ω i , i.e., u i ≡ u| Ωi . Note that the field is solved on both the material domain and the void domain. However, following the techniques described in [7], it is also possible to completely remove the void regions from the analysis.
We define a vector-valued function space, denoted V 0 ≡ H 1 0 (Ω) d , where components of v ∈ V 0 belong to the first-order Sobolev space and satisfy homogeneous essential boundary conditions on Γ u . In this work we focus only on problems with homogeneous Dirichlet boundary conditions; for problems with non-homogeneous essential boundary conditions, the reader is referred to [35]. The weak form of the elastostatics boundary value problem can be written as: Find u ∈ V 0 such that

B (u, v) = L (v) ∀ v ∈ V 0 , (1)

where the bilinear and linear forms can be written as

B (u, v) = Σ_{i=m,v} ∫_{Ω i} σ i (u) : ε (v) dΩ

and

L (v) = Σ_{i=m,v} ∫_{Ω i} b i · v dΩ + ∫_{Γ t} t̄ · v dΓ,

respectively, where the stress tensor σ i ≡ σ| Ωi follows Hooke's law for linear elastic materials, σ i = C i ε i , and C i is the elasticity tensor. Small strain theory is used for the strain tensor, i.e., ε (u) = ½ (∇u + ∇uᵀ), for the elastostatic boundary value problem.
For the heat conduction problem, both the trial and the weight functions are taken from the space V 0 ≡ H 1 0 (Ω). For a prescribed temperature ū : Γ u → R, prescribed heat flux q̄ : Γ q → R, heat source f i : Ω i → R, and conductivity tensor κ i ≡ κ| Ωi : Ω i → R d×d , the bilinear and linear forms for each iteration in the heat compliance problem are written as

B (u, v) = Σ_{i=m,v} ∫_{Ω i} ∇v · κ i ∇u dΩ (4)

and

L (v) = Σ_{i=m,v} ∫_{Ω i} f i v dΩ + ∫_{Γ q} q̄ v dΓ.

Figure 2: Schematic representation of enrichment function ψ i corresponding to the enriched DOFs α i , where the enriched nodes are highlighted. This enrichment function is constructed from the standard Lagrange shape functions of the integration elements.
It is worth noting that interface conditions that enforce continuity of the field and its tractions (or fluxes) do not appear explicitly in Eq. (1) (or Eq. (4)), because they drop out due to the weight function being continuous along the interface.
The design domain is discretized without prior knowledge of the topology as Ω h = ∪ i e i , where e i is the ith finite element, resulting in a mesh that is non-matching to material boundaries. The level set function, whose zero contour defines the interface between void and material, is then evaluated on the same mesh. This is done for efficiency, as the mapping needs to be computed only once, and results in discrete nodal level set values. New enriched nodes are placed at the intersections between element edges and the zero contour of the level set. The locations of these enriched nodes, denoted x n , are found by linear interpolation between nodes x j and x k of the original mesh:

x n = x j + φ j / (φ j − φ k ) (x k − x j ). (6)

Here x j and x k have level set values of opposite sign. Cut elements are then subdivided into integration elements. Following a Bubnov-Galerkin procedure, the resulting finite-dimensional problem is solved by choosing trial and weight functions from the same enriched finite element space. For both the elastostatic and the heat conduction problem, the IGFEM approximation can then be written as

u h (x) = Σ i N i (x) U i + Σ_{i∈ι w} ψ i (x) α i ,

where the first term corresponds to the standard finite element approximation, with shape functions N i (x) and corresponding standard degrees of freedom U i , and the second term refers to the enrichment, characterized by enrichment functions ψ i (x) and associated enriched DOFs α i . The index set containing all enriched nodes is denoted ι w . Enrichment functions ψ i can be conveniently constructed from Lagrange shape functions of integration elements, as illustrated in Figure 2, while the underlying partition of unity shape functions are kept intact. Subsequently, the local stiffness matrix k e and force vector f e are obtained numerically; elements that are not intersected follow standard FEM procedures. For every integration element, k e and f e are defined as

k e = ∫ e B eᵀ D e B e dΩ and f e = ∫ e [N ψ]ᵀ b dΩ,

where D e is the constitutive matrix and [N ψ] denotes the vector in which the shape functions N and the enrichment functions ψ are stacked together.
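As a concrete illustration, the edge-intersection computation described above can be sketched as follows. This is a minimal NumPy sketch; the function name is illustrative and not part of the paper's implementation.

```python
import numpy as np

def edge_intersection(x_j, x_k, phi_j, phi_k):
    """Enriched-node location on an edge cut by the zero level set.

    Assumes phi_j and phi_k have opposite signs, so the linear
    interpolant (1 - t) * phi_j + t * phi_k has a root for t in (0, 1).
    """
    t = phi_j / (phi_j - phi_k)  # root of the linear interpolant
    return np.asarray(x_j) + t * (np.asarray(x_k) - np.asarray(x_j))

# Edge from (0, 0) to (1, 0) with level set values -1 and +3:
# the sign change sits a quarter of the way along the edge.
x_n = edge_intersection([0.0, 0.0], [1.0, 0.0], -1.0, 3.0)
```

Each cut edge yields one such enriched node; cut elements are then subdivided into integration elements using these points.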
The differential operator ∆ is defined as

∆ = [∂/∂x 0; 0 ∂/∂y; ∂/∂y ∂/∂x] and ∆ = [∂/∂x 0 0; 0 ∂/∂y 0; 0 0 ∂/∂z; ∂/∂y ∂/∂x 0; 0 ∂/∂z ∂/∂y; ∂/∂z 0 ∂/∂x]

for elastostatics in 2-D and 3-D, respectively, and as the gradient operator, ∆ = [∂/∂x ∂/∂y]ᵀ in 2-D, for heat conduction. In this work, we are concerned with linear triangular elements, for which a single integration point in standard and integration elements is sufficient. The discrete system of linear equations KU = F is finally obtained through standard procedures, where

K = A_e k e and F = A_e f e ,

and A denotes the standard finite element assembly operator.
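To make the single-point quadrature on linear triangles concrete, the following sketch computes the conduction stiffness matrix of one 3-node triangle. The helper is hypothetical (not taken from the paper); the elastostatic case follows the same pattern with the symmetric-gradient operator ∆.

```python
import numpy as np

def tri_conduction_ke(xe, kappa):
    """Stiffness of a 3-node triangle with isotropic conductivity kappa.

    For linear triangles the shape-function gradients are constant,
    so one integration point (weight * det J = element area) is exact.
    """
    # Gradients of the linear shape functions on the reference triangle.
    dN = np.array([[-1.0, -1.0], [1.0, 0.0], [0.0, 1.0]])
    J = xe.T @ dN                  # 2x2 Jacobian of the isoparametric map
    B = dN @ np.linalg.inv(J)      # physical shape-function gradients
    area = 0.5 * abs(np.linalg.det(J))
    return area * B @ (kappa * np.eye(2)) @ B.T

xe = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # unit right triangle
ke = tri_conduction_ke(xe, kappa=1.0)
```

The element matrices of intersected elements are built the same way on each integration sub-triangle, with the enrichment-function gradients appended as extra columns of B.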
For a more detailed description on IGFEM, the reader is referred to [6].

Radial basis functions
Although it is possible to directly use the level set values φ j at the original nodes of the finite element mesh as design variables, we choose compactly supported radial basis functions (RBFs) for the level set parametrization for a number of reasons [50]: i) RBFs give control over the complexity of the designs and, as such, act similarly to a filter in density-based topology optimization; ii) by decoupling the finite element analysis mesh from the RBF grid, the design space can be restricted without deteriorating the finite element approximation, which can be used to mitigate approximation errors due to overly coarse discretizations; and iii) by tuning the radius of support of the RBFs, we can ensure that the influence of each design variable extends over multiple elements, which allows the optimizer to move the boundary further and therefore converge faster, while using fewer design variables. Figure 3 illustrates a compactly supported RBF θ [49], described by

θ i (r i ) = (1 − r i )⁴ (4 r i + 1) for r i ≤ 1, and θ i (r i ) = 0 otherwise,

where the normalized radius r i is defined as

r i = ‖x − x i‖ / r s , (14)

and r s is the radius of support. In (14), ‖·‖ denotes the Euclidean norm, and x i the coordinate of the center of RBF θ i . The scalar-valued level set function φ (x) is found as a summation of every non-zero RBF θ i , scaled by its corresponding design variable s i :

φ (x) = Σ_{i∈ι s} θ i (x) s i , (15)

where ι s is the index set containing all design variables, and s ∈ D is a vector of design variables with length |ι s |, the cardinality of ι s . Finally, evaluating this function at the original nodes of the finite element mesh results in the level set vector φ = θ s, where θ is a matrix that needs to be computed only once.
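The level set construction from RBFs can be sketched as below. The sketch assumes the commonly used Wendland C2 function for θ (the exact function of reference [49] may differ), and all names are illustrative.

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 RBF; zero for normalized radius r >= 1."""
    return np.where(r < 1.0, (1.0 - r) ** 4 * (4.0 * r + 1.0), 0.0)

def rbf_matrix(nodes, centers, r_s):
    """Matrix theta with theta[j, i] = RBF i evaluated at mesh node j."""
    d = np.linalg.norm(nodes[:, None, :] - centers[None, :, :], axis=2)
    return wendland_c2(d / r_s)

# Nodal level set values then follow from the matrix-vector product phi = theta s.
nodes = np.array([[0.0, 0.0], [0.5, 0.0], [1.0, 0.0]])
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
theta = rbf_matrix(nodes, centers, r_s=1.0)
s = np.array([1.0, -1.0])
phi = theta @ s
```

Because θ depends only on the (fixed) mesh nodes and RBF centers, it is assembled once and reused in every optimization iteration.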

Optimization
The optimization problem is chosen as a minimization of the compliance C with respect to the design variables s that scale the radial basis functions. It needs to be emphasized that compliance minimization is merely a demonstrator problem, and the method is not limited to it. The minimization problem is subject to equilibrium and a volume constraint V c . Furthermore, the design variables are bounded between s min and s max . This problem can be written as

s = arg min_{s∈D} C (s) = Fᵀ U
subject to: K U = F ,
            V (s) ≤ V c ,
            s min ≤ s i ≤ s max .

The Method of Moving Asymptotes (MMA) [51] is employed to solve this optimization problem.

Sensitivity analysis
The compliance minimization problem is self-adjoint [52], resulting in the sensitivity of the compliance C with respect to the design variables s as

∂C/∂s = 2 Uᵀ ∂F/∂s − Uᵀ ∂K/∂s U.

Applying the chain rule, the sensitivity of the compliance C with respect to design variable s i can be written at the level of integration elements in terms of the nodal level set values φ j :

∂C/∂s i = Σ_{j∈ι i} ∂φ j /∂s i Σ_{e∈ι j} Σ_{n∈ι e} ( 2 U eᵀ ∂f e /∂x n − U eᵀ ∂k e /∂x n U e ) ∂x n /∂φ j . (19)

In (19), a summation is done over all the nodes in the index set ι i , which contains all the original mesh nodes that are in the support of the RBF corresponding to design variable s i . Then, a summation is done over ι j , which refers to the index set of all integration elements e in the support of original mesh node j, i.e., the region where the original shape function N j is nonzero. Lastly, a summation is done over the index set ι e , which contains all the enriched nodes n in integration element e. The location of these enriched nodes is denoted x n . Note that a number of terms can be identified in the sensitivity formulation: the derivatives of nodal level set values with respect to the design variables, ∂φ j /∂s i , the design velocities ∂x n /∂φ j , and the sensitivities of the element stiffness matrix and force vector with respect to the location of the nth enriched node, ∂k e /∂x n and ∂f e /∂x n , respectively. First, the sensitivity of the nodal level set values with respect to the design variables is computed simply by taking the derivative of (15) with respect to s:

∂φ j /∂s i = θ ji ,

i.e., the entries of the matrix θ. The design velocities ∂x n /∂φ j also remain straightforward, as they are computed by taking the derivative of Eq. (6):

∂x n /∂φ j = − φ k / (φ j − φ k )² (x k − x j ).

Note that the enriched nodes remain on the element edges of the finite element mesh, and thus the direction of the design velocity is known a priori.
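The design velocity term can be checked numerically. The sketch below differentiates the edge interpolation x n = x j + φ j /(φ j − φ k )(x k − x j ) with respect to φ j and compares the result against a central finite difference; the code is illustrative and not taken from the paper.

```python
import numpy as np

def x_n(x_j, x_k, phi_j, phi_k):
    """Enriched-node location from linear interpolation along the edge."""
    return np.asarray(x_j) + phi_j / (phi_j - phi_k) * (np.asarray(x_k) - np.asarray(x_j))

def design_velocity(x_j, x_k, phi_j, phi_k):
    """Analytic d x_n / d phi_j of the interpolation above."""
    return -phi_k / (phi_j - phi_k) ** 2 * (np.asarray(x_k) - np.asarray(x_j))

# Compare the analytic velocity with a central finite difference.
xj, xk, pj, pk = np.zeros(2), np.array([1.0, 0.0]), -1.0, 3.0
h = 1e-6
fd = (x_n(xj, xk, pj + h, pk) - x_n(xj, xk, pj - h, pk)) / (2 * h)
v = design_velocity(xj, xk, pj, pk)
```

As noted above, the velocity is parallel to the edge vector x k − x j , so only its magnitude changes during optimization.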
More involved is the sensitivity of the eth integration element stiffness matrix k e with respect to the location of enriched node n, which for a single integration point can be written as

∂k e /∂x n = w ( ∂j e /∂x n B eᵀ D e B e + j e ∂B eᵀ/∂x n D e B e + j e B eᵀ D e ∂B e /∂x n ), (22)

where w is the integration weight, B e = [∆N e J p⁻¹ ∆ψ e J e⁻¹], and J p and J e are the Jacobians of the parent and integration element, respectively. Recall that the material within each integration element remains constant, and therefore ∂D e /∂x n = 0. The first term in (22) contains the sensitivity of the Jacobian determinant and represents the effect of the changing integration element area; the second and third terms contain the sensitivity of the element B e matrix and represent the effect of the changing shape and enrichment functions. The latter is computed as

∂B e /∂x n = [0 ∆ψ e ∂J e⁻¹/∂x n ].

Figure 4: Problem description and initial design for the cantilever beam example in §3.1. The domain is clamped on the left and a downward force is applied in the middle of the right side.
Observe that only the enriched part of the formulation has an influence: for linear elements the background shape function derivatives are constant throughout the element (∂(∆N e )/∂x n = 0), the Jacobian of the parent element is not influenced by the enriched node location (∂J p /∂x n = 0), and the enrichment function derivatives are constant at the integration point of the integration element (∂(∆ψ e )/∂x n = 0). Appendix A describes how to compute the derivatives of the Jacobian inverse and determinant, ∂J e⁻¹/∂x n and ∂j e /∂x n , respectively, by straightforward differentiation.
Finally, the sensitivity of the design-dependent force vector f e is evaluated. Due to the IGFEM discretization, enriched nodes whose support is subjected to a line or body load contribute to the force vector, implying that the derivatives of the force vector are nonzero for cases with line loads or body forces. Similarly to the sensitivity of the element stiffness matrix, the sensitivity of the element force vector consists of two terms, one relating to the derivative of the Jacobian determinant, and another containing the function derivatives:

∂f e /∂x n = w ( ∂j e /∂x n [N e ψ e ]ᵀ b + j e [∂N e /∂x n 0]ᵀ b ).

Here, the element right-hand side f e is computed as a function of body sources, but line loads and tractions can be handled completely analogously. In the second term, only the original shape functions contribute, because the enrichment functions always have the same value at the Gauss point of an integration element. However, as the location of the Gauss point with respect to the background element changes, ∂N e /∂x n is nonzero, and can be evaluated through the chain rule as

∂N e /∂x n = (∂N e /∂ξ) ∂A (x)/∂x n ,

where A is the inverse isoparametric mapping that maps global coordinates x to the local coordinate system ξ of the parent element. Although the sensitivity analysis seems involved, every partial derivative is relatively straightforward and cheap to compute.

Numerical examples
The enriched method outlined above is demonstrated on a number of classical compliance optimization problems. The results generated by this approach are compared to those generated by open source optimization codes, and the influence of the design discretization is investigated. A 3-D compliance optimization case and a heat sink problem are also considered.
In this section, no units are specified; therefore, any consistent unit system can be assumed. For the MMA optimizer [51], the following settings are used unless otherwise specified:
• The lower and upper bounds on the design variables s i are given by s min,i = −1 and s max,i = 1 for i = 1, . . . , I;
• The move limit used by MMA is set to 0.01;
• A value of 10 is used for the coefficient that controls how aggressively the constraints are enforced.

The results generated by our approach are compared to those of three open source codes: i) a density-based (SIMP) code [53]; ii) a code for parametrized level set optimization using radial basis functions and density mapping, proposed by Wei et al. [54]; and iii) a code for discrete level set topology optimization with topological derivatives by Challis [55]. The optimization problem for this comparison is the widely-used cantilever beam problem, as illustrated in Figure 4. It consists of a 2L × L rectangular domain that is clamped on the left and subjected to a downward point load t̄ in the middle of the right side. We set L equal to 1, the volume constraint to 55% of the design domain volume, and use |t̄| = 1. The material domain Ω m is assigned a Young's modulus E 1 = 1, whereas the void domain Ω v has Young's modulus E 2 = 10⁻⁶. Both domains have a Poisson ratio ν 1 = ν 2 = 0.3. Note that it is also possible to give the void regions a stiffness of exactly zero by removing DOFs [7]. However, this would entail extra overhead, and to ensure a fair comparison with the other models, in this work we choose to use a small value for the void stiffness. Figure 4 shows the initial design that is used for the IGFEM-based optimization, which is the same as that used in the paper describing the 88-line code [54]. The other two codes do not require an initial design, as they are able to nucleate holes. The optimization problem is solved on meshes defined on rectangular grids of 21 × 11, 41 × 21, 61 × 31, 81 × 41, and 101 × 51 nodes. Our proposed method makes use of triangular meshes, whereas the other methods use quadrilateral meshes.
The RBF mesh used in the IGFEM-based solutions is the same as the analysis mesh, and a radius of influence of r s = √2 · a is used, where a is the distance between two RBFs. The results for each code are illustrated in Figure 5. For all methods, the design becomes more detailed when the mesh is refined. Furthermore, the topologies obtained by each method are roughly the same. The resulting designs are similar to those given by the code of Wei et al., especially for the finer meshes. Indeed, our proposed method yields results that have clearly defined (black and white) non-staircased boundaries. Figure 6a shows the convergence behavior of the different codes for the finest mesh. It is observed that our method leads to the lowest objective function value, which again is similar to that obtained by Wei et al.'s code, while initially converging faster in the volume fraction. Figure 6b shows the final compliance as a function of the number of DOFs. Initially, the different methods all find lower compliance values as the mesh is refined, but the method by Wei et al. and our method find slightly higher values for the finest mesh sizes. This may be explained by the optimizer converging to a local optimum. For each mesh size, the proposed method finds the lowest compliance value at the cost of adding some enriched DOFs.

MBB beam
The influence of the number of radial basis functions is investigated on the well-known MBB beam 1 , which is illustrated in Figure 7. The optimization problem consists of a 3L × L domain with symmetry conditions on the left. On the bottom-right corner, the domain is simply supported, and a downward force t̄ is applied on the top-left corner. As in the previous example, the volume constraint is set to 55% of the volume of the total design domain. The initial design is also indicated in the figure, and the same material properties as in the previous example are used. This optimization problem is solved on a triangular analysis mesh defined on a grid of size 151 × 51, using design space discretizations consisting of 61 × 21, 91 × 31, 121 × 41, and 151 × 51 radial basis functions, so that only for the finest design space discretization do both resolutions match, with an RBF assigned to every node in the analysis mesh. The support radius r s is changed together with the design grid so that the overlap of RBFs is the same in each case: r s = √2 · a, where a is again the distance between two RBFs. Figure 8 shows the optimized designs. As expected, the level of detail in the design can be controlled by the RBF discretization. However, it is noted that for the finest RBF mesh, artifacts appear on the design boundary. This behavior is further analysed in §4.2. In Figure 9a the convergence behavior of the different RBF meshes is shown. Although the coarsest RBF mesh shows some initial oscillations, the overall convergence behavior is similar in all cases. Moreover, as shown in Figure 9b, the compliance no longer improves significantly for the finest RBF discretization.

3-D cantilever beam
To show that the method is not restricted to 2-D, a 3-D cantilever beam example is also considered. The material properties are the same as those of the previous examples. The size of this cantilever beam is 2L × L × 0.5L, and a structured mesh of 40 × 20 × 10 × 6 tetrahedral elements (six tetrahedra per grid cell) is used to discretize the model. The design space is discretized using a grid of 21 × 11 × 6 RBFs, with r s = √2 · a. Figure 10 shows the initial design, along with the boundary conditions; the right surface is clamped, and a distributed line load with |t̄| = 0.2 per unit length is applied on the bottom-left edge. The move limit for MMA in this example is set to 0.001 to prevent the optimizer from moving the boundaries too fast, as only a small number of RBFs is used with a large r s compared to the analysis mesh. The objective function is again the structural compliance, and the volume constraint is set to 40% of the total design domain. Figure 11a displays the optimized design; the corresponding convergence plot is shown in Figure 11b, where it can be seen that the volume satisfies the constraint and the objective function converges smoothly.

Heat sink
Lastly, we consider a heat compliance minimization problem, illustrated in Figure 12. In this two-material problem, a highly conductive material (κ 1 = 1) is distributed within an L × L square domain of lower conductivity (κ 2 = 0.01). The bottom-right corner of the domain has a heat sink with ū = 0, whereas the domain edges are adiabatic boundaries, i.e., q̄ = 0. The entire domain is subjected to a uniform heat source f = 1. The problem is solved on a 41 × 41 node analysis mesh, using 31 × 31 RBFs with r s = √2 · a. As this problem considers a case with a body load, the load vector also contains enriched degrees of freedom that depend on the locations of the enriched nodes. Therefore, the right-hand side is design dependent, i.e., ∂F/∂s ≠ 0, even though the body load is constant throughout the entire domain.
The results of this optimization problem are shown in Figure 13. In the optimized design, narrow features can be distinguished that follow the edges of original elements in the background mesh. This is an effect caused by how the intersections are detected, and is investigated in more detail in §4.1. The convergence plot shows that, although there are initially some oscillations in both the objective and the constraint (also investigated further in §4.1), they converge in the end.

Figure 13: Results of the heat sink problem: (a) shows the optimized design of the heat sink problem. Observe that narrow features are created by placing the zero contour of the level set near the edges of the original mesh elements. In the convergence plot shown in (b), initially some small oscillations are observed in both the objective and volume convergence, which can be prevented by the use of a smaller move limit.

Figure 14: Illustration of the structure disconnecting due to the level set discretization. On the left, the zero contour of the level set, shown in red, defines the design shown in gray. The white arrows indicate the update of the level set in the next iteration. On the right, the next iteration is shown, where the narrowest part of the zero contour lies within a single element, and the nodal level set values φ j and φ k have the same sign. Therefore, the two intersections are not found, and the structure disconnects, as shown by the new gray design.

Oscillations: the levelset discretization
Oscillations in the objective function are visible in the convergence of the heat sink problem in Figure 13, and in the coarsest RBF mesh of the MBB beam in Figure 9. As these oscillations might point to inaccurate modeling or sensitivities, the phenomenon is discussed here in more detail.
Recall that intersections between the zero contour of the level set function and element edges are found using linear interpolation of nodal level set values. Because the level set function is discretized, no intersection can be found if two adjacent nodes have level set values of the same sign. This effect is illustrated in Figure 14. On the left, the zero contour of a level set function is shown in red, which defines the design shown in gray. The white arrows indicate the movement of the material boundary during the next design update. On the right, the updated level set contour is shown in red. As the two adjacent original nodes with values φ j and φ k now have the same sign, the two intersections between them cannot be found.
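The missed-intersection mechanism is easy to reproduce: with a sign-change test on the two nodal values only, an edge that is crossed twice by the zero contour is classified as uncut. This is a self-contained toy example, not the paper's code.

```python
def edge_is_cut(phi_j, phi_k):
    """Detect a cut edge from the signs of the two nodal level set values."""
    return phi_j * phi_k < 0.0

# Level set along one edge, parametrized by t in [0, 1]: a thin feature
# rises above zero inside the edge, but both end values are negative.
phi = lambda t: t * (1.0 - t) - 0.04

# True zero crossings exist near t = 0.042 and t = 0.958, yet the
# nodal sign test sees phi(0) = phi(1) = -0.04 and reports no cut.
missed = not edge_is_cut(phi(0.0), phi(1.0))
```

Evaluating the parametrized level set at additional points along the edge would reveal the interior sign change, at the cost of a more involved integration-element procedure.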
The sudden disconnection of the structure due to the level set discretization is a discontinuous event that cannot be captured by the sensitivity information. Therefore, as soon as such a discontinuous event occurs, the sensitivities and the modeling deviate, and oscillations may occur.
This problem can be alleviated by using a smaller move limit, as was done in the 3-D cantilever beam example. Another approach that could mitigate this issue is to evaluate the parametrized level set function on a finer grid, so that multiple intersections can be found on a single element edge. However, the procedure that creates integration elements would then also need to allow for these more complex intersections. It should be noted that neither of these measures completely eliminates the problem of discontinuous events; rather, they alleviate the problem by limiting its chance of occurrence.
A related observation can be made in the zigzagged features in the heat sink design of Figure 13. As illustrated in Figure 15, this pattern occurs when the optimizer tries to make a narrow diagonal feature in the opposite direction of the mesh diagonals. The red intersections cannot be detected, and therefore the structure is disconnected. As a result, the optimizer can only create diagonal narrow features by zigzagging them along element edges, as illustrated in Figure 15 on the right.

Zigzagging: approximation error
In the final designs of some of the numerical examples, zigzagging of the edges occurred where the zero contour of the levelset function is not perfectly smooth, as detailed in Figure 16. To investigate the cause of this artifact, the test problem of a clamped beam loaded axially shown in Figure 17 was investigated. The compliance was computed for a varying zigzagging angle α while keeping the material volume constant.
The results in Figure 18 show that the minimum compliance is not found at α = 0, as one would expect, but instead at a negative value of α. Furthermore, the compliance is not symmetric with respect to α = 0 due to the asymmetry of the analysis mesh. The cause of this zigzagging is an approximation error, as the mesh is too coarse to accurately describe the deformations and stresses in the structure, similarly to the effect described for nodal design variables in [57]. This effect can be resolved by reducing the design space with respect to the analysis mesh, for example with the use of RBFs, or by increasing the element order. Furthermore, as the non-smoothness is confined to a single layer of background elements, mesh refinement makes the issue less pronounced.

Figure 15: Illustration of the zigzagged pattern that appears in Figure 13. When a narrow diagonal line is desired in the opposite direction of the diagonal lines of the mesh, the problem illustrated in Figure 14 results in a disconnected line, as shown on the left. Instead, the optimizer will create narrow features along element edges, as illustrated on the right.

Figure 16: Detail of the zigzagging that might occur when the design space is not reduced with respect to the FE mesh.

Figure 17: Schematic for the zigzagging approximation error. A beam with zigzagging angle α is clamped on the left, while a concentrated axial load is applied on the right. The angle α is varied without changing the material volume, and the compliance is evaluated.

Figure 18: The compliance of the test case illustrated in Figure 17, as a function of the zigzagging angle α. The compliance for this coarse test case is non-symmetric with respect to α = 0.

Summary and Conclusions
In this work we introduced a new enriched topology optimization approach based on the Interface-enriched Generalized Finite Element Method (IGFEM). The technique yields non-pixelized black and white designs that do not require any postprocessing. We derived an analytic expression for the sensitivities, and showed that they can be computed with relatively low computational effort. Furthermore, the method was compared to a number of open source topology optimization codes, based on SIMP, the Ersatz approach, and discrete level sets. The influence of decoupling the design discretization from the analysis mesh was investigated using the classical MBB beam optimization problem. A 3-D cantilever beam and a heat sink problem were also demonstrated. The convergence behavior was provided for each numerical example. Numerical artifacts, such as approximation errors and discretization errors of the level set, as discussed in §4, can be mitigated by means of suitable move limits and radial basis functions, where the latter serve as a sort of filter because they control the design complexity.
A number of conclusions can be drawn from this work:
• The combination of IGFEM with level set topology optimization based on RBFs results in crisp boundaries in both the design representation and the modeling. Because the RBF mesh and the analysis mesh are completely decoupled, the resolution of the design and that of the modeling can be chosen independently, as is the case in any parametrized level set optimization. In addition, the radial basis functions help in reducing numerical artifacts, as they act like a black-and-white filter. Lastly, as the RBFs may extend over multiple elements, they allow the boundary to move further and the optimizer to converge faster;
• As only one intersection can be detected per element edge, due to the mapping of the level set to the original mesh nodes, features smaller than a single element might not be described correctly. As discussed in §4.1, this may lead to oscillations in the convergence. Using a finer grid for evaluating the level set, more intersections may be found, allowing for narrower features. However, this requires a more involved procedure for creating integration elements. Similarly, the method may be extended to quadrilateral elements, which also requires more involved integration element procedures; furthermore, for quadrilateral elements, higher-order enrichment functions are needed [58];
• Due to approximation errors, numerical artifacts may occur that can be exploited by the optimizer when the RBF mesh is too fine with respect to the analysis mesh. Another known issue in IGFEM and other enriched methods, which may also be exploited by the optimizer, is that the computation of stresses near material interfaces may yield inaccurate results [59,60];
• In this work, we chose to model the void along with the material domain for a number of reasons, including the ease of implementation and the ease of comparison with other methods.
However, we could have chosen to completely remove the void from the analysis [7], which would reduce computation times and eliminate the artificial stiffness in the void.
The benefits of using an enriched formulation are expected to be more pronounced for problems that rely heavily on an accurate boundary description, such as fluid-structure interaction and wave scattering; the optimization of the latter is the subject of an upcoming publication. Furthermore, with an enriched formulation, the technique has the potential to easily handle problems where boundary conditions have to be prescribed on evolving boundaries. In particular, and unlike other immersed methods, with IGFEM it is even possible to strongly prescribe non-homogeneous essential (Dirichlet) boundary conditions [35,7].

A Derivatives of the Jacobian inverse and determinant
In the sensitivity computation discussed in §2.3.1, the derivatives of the Jacobian inverse and determinant are required. According to Jacobi's formula [61], the derivative of the determinant of a matrix can be computed as the trace of the adjugate of the matrix multiplied by the derivative of the matrix. For the Jacobian determinant j e , the derivative can thus be computed as

∂j e /∂x n = tr ( adj (J e ) ∂J e /∂x n ). (26)

The sensitivity of the Jacobian inverse can be computed by realizing that J e J e⁻¹ = I, so that

∂J e⁻¹/∂x n = − J e⁻¹ ∂J e /∂x n J e⁻¹. (28)

For both (26) and (28), the sensitivity of the Jacobian itself is required; as the Jacobian of the integration element is computed as J e = x e ∆ψ e , its derivative follows as

∂J e /∂x n = ∂x e /∂x n ∆ψ e ,

where ∂x e /∂x n is simply a selection matrix consisting of zeros, except for ones at the coordinates of interest for enriched node n.
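Both identities can be verified numerically. The sketch below checks Jacobi's formula and the inverse-derivative rule against finite differences on a random 2 × 2 matrix; the code is illustrative only.

```python
import numpy as np

def det_derivative(J, dJ):
    """Jacobi's formula: d(det J) = tr(adj(J) dJ)."""
    adj = np.linalg.det(J) * np.linalg.inv(J)  # adjugate of an invertible matrix
    return np.trace(adj @ dJ)

def inv_derivative(J, dJ):
    """From J J^{-1} = I: d(J^{-1}) = -J^{-1} dJ J^{-1}."""
    Ji = np.linalg.inv(J)
    return -Ji @ dJ @ Ji

# Finite-difference check on a well-conditioned random Jacobian.
rng = np.random.default_rng(0)
J = np.eye(2) + 0.1 * rng.standard_normal((2, 2))
dJ = rng.standard_normal((2, 2))
h = 1e-7
fd_det = (np.linalg.det(J + h * dJ) - np.linalg.det(J)) / h
fd_inv = (np.linalg.inv(J + h * dJ) - np.linalg.inv(J)) / h
```

The adjugate is formed here via det(J) J⁻¹, which is valid for the invertible Jacobians encountered in valid (non-degenerate) integration elements.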