Fluid dynamic shape optimization using self-adapting nonlinear extension operators with multigrid preconditioners

In this article we propose a scalable shape optimization algorithm which is tailored for large scale problems and geometries represented by hierarchically refined meshes. Weak scalability and grid independent convergence are achieved via a combination of multigrid schemes for the simulation of the PDEs and quasi-Newton methods on the optimization side. For this purpose a self-adapting, nonlinear extension operator is proposed within the framework of the method of mappings. This operator is demonstrated to identify critical regions in the reference configuration where geometric singularities have to arise or vanish. Thereby the set of admissible transformations is adapted to the underlying shape optimization situation. The performance of the proposed method is demonstrated for the example of drag minimization of an obstacle within a stationary, incompressible Navier-Stokes flow.


Introduction
Shape optimization is a mathematical tool to obtain an optimal contour for an arbitrarily shaped obstacle subject to physical phenomena described by a partial differential equation (PDE). In the classical sense, this is achieved by evaluating sensitivities of a shape functional j(y, Ω) over a set of admissible shapes Ω ∈ G adm . The functional is constrained by one or several PDEs, among them a state equation E(y, Ω) = 0, which is fulfilled by the state variable y. In this article, we focus on incompressible flow described by the Navier-Stokes equations, where the objective is to minimize the energy dissipation around an obstacle in terms of its shape under additional geometrical constraints. Building on a long history reaching back several decades (see for instance [20,14,25]), the field of shape optimization with fluid dynamics applications is still very active today, following various approaches (e.g. [33,27,12,10]). For an overview of shape optimization constrained by PDEs we refer the reader to [37,1,5].
The iterative application of deformation fields to a finite element mesh can lead to distortions and loss of mesh quality, as studied in [6,9]. This becomes particularly disruptive for numerical algorithms if there are large deformations leading from the reference to the optimal configuration. Several approaches, especially in recent studies, have been proposed to prevent this. For instance, the use of penalized deformation gradients in interface-separated domains helps maintain mesh quality but might still lead to element overlaps when taken to the limit [35]. Other approaches rely on remeshing the domain, as for instance in [39]. More recent efforts in this area make use of preshape calculus to allow for the simultaneous optimization of both the shape and the mesh quality of the grid [22].
Although the variable of the optimization problem is only the contour of the shape, the surrounding space plays a crucial role since it forms the domain for the physical effects. Due to the Hadamard-Zolésio structure theorem (see for instance [37]), changes of the objective function under consideration can be traced back to deformations of the shape that are orthogonal to its boundary. This has been recognized as a source of decreasing mesh quality and is addressed by many authors, for instance by also allowing displacements tangential to the shape surface (cf. [23]). In contrast to the statement of the structure theorem, from a computational point of view it can be favorable to extend surface deformations into the entire surrounding domain instead of building a new discretization around the improved shape after each descent step, i.e. to avoid remeshing in each iteration. In recent works it has become popular to reuse the domain around the shape, which describes the domain of the PDE, for the representation of Sobolev gradients (e.g. [34]). Typically, elliptic equations are solved in this domain in order to represent the shape sensitivity as a gradient with respect to a Hilbert space defined therein [7,17,11]. The benefit of this approach is that the resulting deformation field not only serves as a deformation of the obstacle boundary, but can also be utilized as a domain deformation. Thus, the discretization for the next optimization step is obtained without additional computational cost.
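The elliptic-extension idea can be illustrated with a toy computation that is unrelated to the paper's actual solver stack: a scalar "surface deformation" prescribed on one boundary of a unit square is extended into the interior by solving a Laplace problem with a plain Jacobi iteration. The grid size and the sine-shaped boundary datum are arbitrary assumptions for illustration.

```python
import numpy as np

# Toy harmonic extension (not the paper's operator): solve -laplace(w) = 0
# on a unit square with a prescribed "deformation" on one boundary and
# zero on the others, mimicking w = 0 on the outer boundaries.
n = 33
w = np.zeros((n, n))
x = np.linspace(0.0, 1.0, n)
w[0, :] = np.sin(np.pi * x)      # deformation prescribed on the shape boundary

for _ in range(5000):            # plain Jacobi iteration as the inner solver
    w[1:-1, 1:-1] = 0.25 * (w[2:, 1:-1] + w[:-2, 1:-1]
                            + w[1:-1, 2:] + w[1:-1, :-2])

# the extension decays smoothly into the domain; for the continuous
# problem the center value is sinh(pi/2)/sinh(pi), roughly 0.199
center = float(w[n // 2, n // 2])
```

The resulting field is smooth in the interior while matching the boundary data exactly, which is the property exploited when the gradient representation is reused as a domain deformation.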
At this point, two different approaches can be distinguished. On the one hand, the computed gradient can be used directly as a deformation to the domain after each optimization step, which can be seen as changing the reference domain iteratively. On the other hand, in the so-called method of mappings [26], the reference domain is fixed and the shape optimization problem is interpreted via the theory of optimal control. This is implemented through the definition of a control variable around the surface of the shape to be optimized and its connection to the deformation field affecting the whole domain through a so-called extension operator, whose solution results in the optimal deformation field for both the target shape and its surrounding domain. For an application of this method to aerodynamic shape optimization see for instance [10].
We can contrast these two approaches by noting that either a set of admissible domains G adm or a set of admissible transformations F adm has to be defined. A link between the two can then be established in terms of a given reference configuration Ω.
The approach we propose here is based on the research carried out in [18]. Compared to iteratively updating the shape, it offers the possibility to demand properties of the deformation from the reference to the optimal shape. Moreover, it reformulates the optimization over a set of admissible transformations F ∈ F adm , which enables us to carry out the optimization procedure on the reference domain. Additionally, it has been documented that such extension operators are possible without the need for heuristic parameter tuning. In [29] an additional nonlinear term is introduced into the elliptic extension equation, allowing for large deformations while preserving mesh quality and preventing element self-intersections and degeneration. In shape optimization such large deformations occur, for instance, when the optimal shape of an obstacle surrounded by media requires the creation or removal of a singularity on the obstacle's boundary.
In the present work we focus on applying parallel solvers for the solution of PDEs on large distributed-memory systems. This stems from the fact that the discretizations lead to a very large number of degrees of freedom (DoFs), for which the application of the geometric multigrid method (see for instance [16]) guarantees mesh-independent convergence of the simulations. Its application within the context of parallel computing for the solution of PDEs is an area of ongoing research [32,15,2]. The feature of mesh-independent convergence is a necessary condition for weak scalability of the entire optimization algorithm, which is why in this article we apply the multigrid method as a preconditioner for the solution of the PDE constraints. This requires providing a sequence of hierarchically refined discretizations. However, the shape optimization problem is a fine grid problem, which means that the contour of the obstacle has to be representable within the entire grid hierarchy. This leads to undesired effects and conceptual challenges that have been addressed for instance in [28,36,31]. The finest grid in the hierarchy thus typically represents a high-resolution discretization of a polygonal shape, which, besides the aforementioned sources of mesh degeneracy, also introduces challenges to the shape optimization. This is due to the fact that it results in singularities (e.g. edges and corners) that are visible to the discretization. Particularly for fluid dynamics applications this is problematic. If the regularity of the considered domain transformations, i.e. of the descent steps of the optimization, is too high, it is not possible to achieve smooth optimal shapes. A related problem, even apart from the hierarchical grid structure, is that if one starts with a smooth reference shape, it is not possible to obtain singularities in the optimal configuration.

Figure 1: 2d holdall reference domain of the flow field with square obstacle.
In this work, we propose an approach that is able to identify these regions and adapt the set of admissible transformations F adm as part of the optimization problem. In [29] it is illustrated how adding a nonlinear convection term to the extension model that defines F adm affects forming singularities in optimal shapes. We thus study in this article how the non-linearity can be adjusted according to the shape of the reference domain.
The rest of this article is structured as follows: In section 2 the optimization problem is formulated and the underlying fluid experiment is outlined. Section 3 is devoted to the optimization algorithm and the computation of the reduced gradient via the adjoint method. Subsequently, in section 4 the performance of the proposed method is discussed by presenting numerical tests, and in section 5 weak scalability results are presented. The article closes with a conclusion in section 6.

Problem formulation and mathematical background
The model problem under consideration is sketched in figure 1 in a bounded holdall domain G := Ω ∪ Ω obs , where Ω is assumed to have a Lipschitz boundary. In Ω we consider a stationary, incompressible flow. The flow surrounds an obstacle Ω obs with variable boundary Γ obs but fixed volume and barycenter. Throughout this article, the original setting of the domain will be referred to as the reference configuration or domain.
In this section we present the theoretical background that culminates in the algorithm presented in section 3. The problem is first formulated in terms of classical shape optimization and then reformulated as an optimal control problem. Later on, it is pulled back to the reference domain through the method of mappings. The extension operator used in our approach is described through its weak formulation, which allows us to formulate the augmented Lagrange approach used later. Finally, the Lagrangian is used to obtain the sensitivities necessary for the descent direction and for the Hessian approximation. For an in-depth discussion of the underlying theory we refer the reader, as previously mentioned, to [29,3,18]; the use of the adjoint method in fluid dynamics can be reviewed in [14,20], and [19] can be consulted for a mathematical review of the theory used here.
Let X = L 2 (Γ obs ) × L 2 (Ω), 0 < η lb < η ub , b > 0, α > 0, θ > 0 and consider the optimization problem of minimizing j(y, F (Ω)) subject to the state equation e(y, F (Ω)) = 0 and the geometric constraints represented by g(w). S denotes a so-called extension operator, which links the boundary control variable u ∈ L 2 (Γ obs ) to a displacement field w : Ω → R d . Examples of possible choices of S are given and investigated in [18]; in particular, the resulting regularity of the domain transformation F is discussed therein. Here we enrich this operator with an additional control variable η ∈ L 2 (Ω), which plays the role of a nonlinearity switch. In the following we assume that S is defined such that w| Γin∪Γ wall ∪Γout = 0 a.e. In the experiment presented here, the geometric constraints require the barycenter and the volume of the obstacle to remain at the origin and constant, respectively. This excludes trivial solutions where the obstacle shrinks to a point or moves towards a position where the objective functional is minimized. Without loss of generality, we assume that the barycenter of the reference domain is 0 ∈ R d and that the volume is precisely retained. The condition (7) is approximated via a non-smooth penalty term, which results in an approximation via the objective function (12). In contrast to the PDE constraints (5) to (7), the geometric constraints (9) are finite dimensional (here of dimension d + 1, where d ∈ {2, 3}). Thus, the multipliers associated to these conditions are not variables in the finite element space but a (d + 1)-dimensional vector. This is incorporated into the optimization algorithm in form of an augmented Lagrange approach, leading to the augmented objective function (13), where τ > 0 is a penalty factor for the geometric constraints. The basic concept of the augmented Lagrange method is to optimize the objective (13).
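For a 2d polygonal obstacle, the d + 1 geometric quantities entering these constraints (area and barycenter) can be evaluated with the shoelace formula. The following is an illustrative sketch, not the paper's implementation; `area_and_barycenter` is a hypothetical helper.

```python
import numpy as np

def area_and_barycenter(pts):
    """Area and barycenter of a polygon given by (n, 2) vertices in
    counter-clockwise order, via the shoelace formula. These are the
    d + 1 = 3 quantities constrained in the 2d setting."""
    x, y = pts[:, 0], pts[:, 1]
    xn, yn = np.roll(x, -1), np.roll(y, -1)
    cross = x * yn - xn * y
    area = 0.5 * cross.sum()
    cx = ((x + xn) * cross).sum() / (6.0 * area)
    cy = ((y + yn) * cross).sum() / (6.0 * area)
    return area, np.array([cx, cy])

# unit square centred at the origin: area 1, barycenter at the origin,
# matching the assumption on the reference domain
square = np.array([[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]])
area, bary = area_and_barycenter(square)
```

The constraint residuals for a deformed obstacle would then be the deviation of `area` from its reference value and of `bary` from the origin.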
By contrast to a pure penalty method, the geometric constraints (9) are not entirely moved to the objective function; instead, the corresponding multipliers λ g are assumed to be approximately known and are iteratively updated. We consider the PDE constraint e(y, F (Ω)) to be the stationary, incompressible Navier-Stokes equations in terms of velocity and pressure (v, p). In the following we distinguish between PDE solutions defined on the reference domain Ω, denoted by (v, p), and on the transformed domain F (Ω), denoted by (ṽ, p̃). We thus consider the system (14) to (18), where for compatibility it is assumed that ∫ Γin v ∞ · n ds = 0 holds for the inflow velocity profile v ∞ . Notice that the boundaries Γ in , Γ out , Γ wall in (15) to (17) are unchanged since the displacement w is assumed to vanish there. This assumption reflects that the outer boundaries of the experiment domain are not a variable in the optimization problem. The variational formulation of the PDE (14) to (18), pulled back to the reference domain Ω, is given by (20) and (21), with trial and test functions chosen in the corresponding function spaces. In the experiment considered in this work the physical part of the objective function (12) is given by the energy dissipation in terms of the velocity v, thus y = (v, p); the dissipation functional can be pulled back to the reference domain Ω accordingly. The extension S(η, u, Ω) is defined to be the solution operator of the PDE

div(Dw + Dw ⊤ ) + η(w · ∇)w = 0 in Ω (25)
(Dw + Dw ⊤ )n = un on Γ obs (26)
w = 0 on Γ in ∪ Γ wall ∪ Γ out (27)

Consider the corresponding space W. The variational formulation of (25) to (27) then reads: Find w ∈ W such that (29) holds for all test functions in W. Finally, we can formulate the approximate optimization problem, which is then solved via the augmented Lagrange approach in section 3, as minimizing (30) subject to (14) to (18) and (29), as well as the geometric constraints (34), where the multipliers for conditions (34) are assumed to be known in each iteration.
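The role of the nonlinearity switch can be caricatured in one dimension (a drastic simplification assumed only for illustration, not the paper's operator): a convection-diffusion analogue w'' + (η/2) w w' = 0 on (0, 1) with Dirichlet data w(0) = u0, w(1) = 0, solved by a Picard iteration in which the convection coefficient is frozen at the previous iterate.

```python
import numpy as np

# 1d caricature of the nonlinear extension equation: central finite
# differences, convection coefficient frozen at the previous iterate.
m = 101
h = 1.0 / (m - 1)
u0, eta = 1.0, 5.0               # boundary "control" and nonlinearity switch
w = np.linspace(u0, 0.0, m)      # initial guess: linear decay

for _ in range(100):
    A = np.zeros((m, m))
    b = np.zeros(m)
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = u0                    # Dirichlet data plays the role of u
    for i in range(1, m - 1):
        conv = 0.5 * eta * w[i]  # frozen (Picard) convection coefficient
        A[i, i - 1] = 1.0 / h**2 - conv / (2.0 * h)
        A[i, i] = -2.0 / h**2
        A[i, i + 1] = 1.0 / h**2 + conv / (2.0 * h)
    w_new = np.linalg.solve(A, b)
    if np.max(np.abs(w_new - w)) < 1e-12:
        w = w_new
        break
    w = w_new
```

Increasing η makes the profile decay more sharply near the boundary, which is the 1d analogue of the deformation being concentrated where singularities form.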
In order to formulate a gradient-based descent algorithm, we have to compute sensitivities of the final objective function J aug in (13) with respect to the variables (u, η). This means differentiating the underlying chain of mappings and obtaining the sensitivities in reverse order. Access to the adjoint gradient formulation can be obtained via the corresponding Lagrangian (37), under the assumption that the barycenter of Ω is 0 ∈ R d . From the Lagrangian (37) the adjoint Navier-Stokes equations follow. The adjoint displacement equation is obtained by: Find λ w ∈ W such that the variational equation (40) holds for all δ λw ∈ W. In (40), R denotes the derivative of the Lagrangian (37) w.r.t. w; it is obtained after straightforward computations and omitted here for the sake of brevity. Finally, the reduced gradient is obtained as: Find (γ, κ) ∈ X such that (41) and (42) hold for all (δ u , δ η ) ∈ X. With the sensitivity equations (41) and (42) we are now prepared to apply a descent method.
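The adjoint logic of evaluating the reduced gradient "in reverse order" can be sketched on a toy linear-quadratic problem, not the paper's PDE system: state equation A y = B u, objective j = 0.5 y^T Q y. One state solve plus one adjoint solve yields the reduced gradient, verified here against finite differences. All matrices are random assumptions.

```python
import numpy as np

# Toy adjoint-based reduced gradient: j(u) = 0.5 * y^T Q y with A y = B u.
# Lagrangian stationarity gives A^T lam = -Q y and grad = -B^T lam.
rng = np.random.default_rng(0)
n, m = 6, 3
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned state operator
B = rng.standard_normal((n, m))
Q = np.eye(n)
u = rng.standard_normal(m)

def reduced_objective(u):
    y = np.linalg.solve(A, B @ u)
    return 0.5 * y @ (Q @ y)

# one state solve and one adjoint solve, regardless of the control dimension
y = np.linalg.solve(A, B @ u)
lam = np.linalg.solve(A.T, -(Q @ y))
grad = -B.T @ lam

# finite-difference check of the reduced gradient
eps = 1e-6
fd = np.array([(reduced_objective(u + eps * e) - reduced_objective(u - eps * e))
               / (2 * eps) for e in np.eye(m)])
```

The cost of the gradient is independent of the number of control parameters, which is what makes the adjoint approach attractive for the high-dimensional boundary control u.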

Optimization Algorithm
In section 2 we present the approximate optimization problem (30) to (34), which is solved via the augmented Lagrange approach shown in algorithm 1. An initial guess is given for the Lagrange multipliers λ g associated to the geometrical constraints. These are in turn iteratively updated in each optimization step, subject to the condition that the norm of the defect of the geometrical constraints is smaller than a prescribed tolerance ε g > 0.
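The interplay between the multiplier update and the penalty increase can be sketched on a toy problem (assumed purely for illustration, not the paper's setting): minimize ||x - c||^2 under the single "geometric" constraint sum(x) = 1, with the inner problem solved exactly since the augmented objective is quadratic. The branch logic (update λ once the defect drops below ε_g, otherwise increase τ) follows the description above in simplified form.

```python
import numpy as np

# Toy augmented Lagrange outer loop for min ||x - c||^2 s.t. sum(x) = 1.
c = np.array([1.0, 2.0, 3.0])
lam, tau = 0.0, 1.0
tau_inc, eps_g = 2.0, 1e-8

for _ in range(60):
    # inner solve: minimize ||x - c||^2 + lam*g + 0.5*tau*g^2 exactly;
    # stationarity 2(x - c) + (lam + tau*g) * ones = 0 gives a closed form
    mu = (lam + tau * (c.sum() - 1.0)) / (1.0 + 1.5 * tau)
    x = c - 0.5 * mu
    g = x.sum() - 1.0                # geometric constraint defect
    if abs(g) < eps_g:
        lam = lam + tau * g          # defect below tolerance: update multiplier
        break
    tau = tau_inc * tau              # otherwise increase the penalty factor
```

For this convex toy the iterates approach the exact solution x = c - 5/3 with multiplier 10/3; in the actual algorithm the inner solve is itself an iterative quasi-Newton optimization.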
Most of the computational time is consumed by solving the PDE systems presented in section 2. This is carried out by algorithm 3 in a block-wise manner, where the output consists of the new displacement field to update the transformation (5), as well as the reduced gradients ∇u k,ℓ , ∇η k,ℓ , which are further used to obtain the updates u k,ℓ+1 , η k,ℓ+1 for the current control and extension factor, respectively. A reference for the reduced gradient method can be found for instance in [19].

Algorithm 1 Augmented Lagrange Outer Optimization Algorithm
[Algorithm 1, fragment; parts of the listing were lost in extraction: if ‖g def ‖ 2 < ε g then 10: τ ← τ inc τ 11: else 12: ⟨update of the multipliers λ g ⟩ end if]

The objective function is non-differentiable due to the presence of the positive part mapping in R in (40), which is discussed in depth in [18]. Moreover, a discussion on quasi-Newton methods for semi-smooth objective functions can be found in [24].

[Algorithm 2 (box-constrained limited memory BFGS), fragment; parts of the listing were lost in extraction: determine χ η according to (43) and (·, ·) X according to (45); ... 16: return u + q 1 , P (η lb ,η ub ) (η + q 2 ) 17: end function]

The use of box-constraints for the extension factor η makes it necessary to implement the BFGS method similarly to what can be found in [4], from which algorithm 2 is partly inspired. For the box-constrained limited memory BFGS method, we introduce the indicator function χ η for the condition η lb ≤ η ≤ η ub as in (43), for some small σ > 0. The canonical inner product on X is modified to take the active box-constraints into account by introducing χ η into the second term, thereby reducing the integration to the region of inactive constraints; the resulting expression (45) defines the inner product appearing in lines 8, 11, 13 of algorithm 2.
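The box-constraint handling can be sketched in a discrete toy setting (hypothetical helpers, not the paper's code): an indicator masks components of η whose bounds are active within a small σ, the η-part of the inner product is restricted to the inactive set, and iterates are projected back into [η lb , η ub ].

```python
import numpy as np

# Discrete sketch of the box-constraint machinery for the extension factor.
eta_lb, eta_ub, sigma = 0.0, 1.0, 1e-3

def chi(eta):
    """Indicator of the inactive set: 0 near an active bound, 1 elsewhere."""
    active = (eta <= eta_lb + sigma) | (eta >= eta_ub - sigma)
    return np.where(active, 0.0, 1.0)

def inner_X(u1, eta1, u2, eta2, mask):
    # modified inner product on X: the eta-part counts only inactive entries
    return float(u1 @ u2 + (mask * eta1) @ eta2)

def project(eta):
    # projection P_[eta_lb, eta_ub] applied after each quasi-Newton step
    return np.clip(eta, eta_lb, eta_ub)

eta = np.array([0.0, 0.4, 0.9995, 0.5])       # first/third entries are active
grad_eta = np.array([1.0, -0.2, -0.3, 0.1])
mask = chi(eta)
step = project(eta - 0.5 * mask * grad_eta)   # masked, projected descent step
ip = inner_X(np.ones(2), eta, np.ones(2), eta, mask)
```

Masking the update and the inner product in this way prevents the quasi-Newton curvature pairs from being polluted by components that are pinned at their bounds.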
Conceptually, the optimization scheme presented consists of an outer and an inner iteration. The outer iteration, shown in algorithm 1, updates either λ g or the penalty factor τ by the increment factors λ inc and τ inc , respectively. In each cycle of the inner loop, a complete optimization is carried out using BFGS updates as seen in algorithm 2.

[Algorithm 3, fragment; parts of the listing were lost in extraction: solve the state equations (20) and (21); ... solve the sensitivity equations (41) and (42); 7: return (w, γ, κ) 8: end function]

Shape Optimization Applications
In this section we present shape optimization applications with the incompressible, stationary Navier-Stokes equations as state equation. The purpose of the featured case studies is to show the application of the algorithm presented in section 3, including the effect of the nonlinearity control variable η on the extension operator S. The obstacle shape deformations demonstrate the algorithm's capabilities in detecting, smoothing and creating domain singularities such as tips and edges. Aspects of the multigrid preconditioner's effects are discussed. Moreover, a grid independence study illustrates that the optimal shape is reached regardless of the number of refinements in the grid hierarchy. The latter result is a fundamental stepping stone towards the scalable parallel implementation of the proposed methodology presented in section 5.
The flow tunnel is depicted as the holdall domain in figure 1 for the 2d and 3d cases respectively, taking into account that in the 3d case the obstacle has a spherical shape. Thus, in 2d we have Ω obs = (−0.5, 0.5) 2 and in 3d Ω obs = {x ∈ R 3 : ‖x‖ 2 < 0.5}. The boundary conditions at the inflow boundary Γ in are prescribed in terms of δ, the diameter of the flow tunnel. The side length of the square obstacle is d = 1, whereas the radius of the sphere in the 3d case is r = 0.5. The simulations are performed using UG4 [38], which we extend through its C++ based plugin functionality. The code used for the studies presented here can be consulted at the online repository in [30]. The 2d and 3d grids are generated using the GMSH toolbox [13].

2d Results
In this section we present 2d simulations for a flow with viscosity ν = 0.03. All PDEs are discretized using a P 1 approximation, except for the Navier-Stokes equations and their adjoint, which are solved with a stable P 2 − P 1 finite element discretization. For this example, η has an initial value of 0.5, the box constraints are 0 ≤ η ≤ 1.0, and b = 0.001. The grid consists of 421,888 triangular elements, with 5 refinement levels. Figure 2 shows results for the optimization of a square obstacle subject to an incompressible, stationary flow. The reference configuration with the extension factor η and the optimal displacement field are shown together with the transformed domain and a closeup of the front tip where the element edges are depicted. Regarding the reference configuration, it can be seen that the extension factor approaches the imposed values of the box-constraints at two kinds of locations: the corners of the square and the sections where new singularities have to be created. Recalling the weak form of the extension operator (29), η controls the nonlinearity in each element. Given that the same initial value of η is set for all elements at the beginning of the simulation and that the θ-term in (12) penalizes the deviation from the average (η ub + η lb )/2, the different extension factor values, particularly close to the obstacle's surface Γ obs , show that equation (29) adapts depending on the current iterate for the displacement field w. This ensures that w promotes both the generation of new non-smooth points on the boundary and the smoothing of such points introduced by the choice of the reference domain, i.e. the four corners of the box inclusion Ω obs . This can be observed in figure 2, where high-valued displacements are present at the sections where tips and corners are generated or smoothed, which in turn allows large deformations F (Ω) without loss of convergence of the iterative solvers.
In section 2 we already mentioned that explicit mesh deformations are avoided. This comes from the fact that all optimization steps are solved on the reference domain through the method of mappings. Therefore we speak of obtaining an optimal deformation field F , which is used to transform the domain Ω → F (Ω) according to (5). This is used to obtain the optimal shape, as in figure 2. The transformed domain shows the smoothed corners and the generated front and back surface singularities, which are in accordance with the previously mentioned properties of η and w. Throughout the optimization process, however, the proposed algorithm does not require the nodal positions to be redefined, since the transformation of the reference grid is only performed as part of the post-processing and not of the optimization. The close-up corresponds to the front singularity with respect to the direction of flow. Figure 2 also shows that the elements around the generated tip exhibit no distortion and no significant loss of quality. This stems from both the effect of the nonlinear term in the extension operator S and the imposed upper bound b on the determinant of the deformation gradient det(DF ), given in (7). The latter condition is what preserves local injectivity, thus avoiding the loss of mesh quality. In figure 2 this is shown by the absence of collapsed or overlapping elements, even for the elements that clearly undergo large deformations, i.e. the ones that form the generated tips and the smoothed square corners. Moreover, in sections 4.3 and 5 this can be understood as a mesh-independent preservation of the geometrical and numerical convergence in terms of the final optimal shape achieved and the total iteration counts of the iterative solvers.
On the other hand, the extension equation adapts where the tips have to be created to reach an optimal value of the objective function. This is illustrated through the changing value of the nonlinearity switch η on each optimization step. Figure 3 shows the plot of η over the reference domain compared to the domain transformed by the displacement field w.
At the start of the simulation the extension equation is already adapted to find the corners of the reference configuration; this is visible as concentrated values of η. As the obstacle's initial singularities are removed, necessary new ones are created. This causes a concentration of the extension factor at the locations of the reference configuration where new geometrical singularities have to be created. Afterwards, the optimization scheme works towards smoothing the obstacle's surface, and η therefore no longer shows major concentrations across the grid, as can be seen in step 74 of figure 3.
The distribution of η across the grid has to be compared against the transformed grid. Given that the Lagrange multipliers are yet to converge, the initial steps can incur violations of the geometrical constraints. This can be seen in the highly deformed shapes at the initial steps of figure 3. However, as the algorithm performs the multipliers' update, as in algorithm 1 line 12, the geometrical constraints are fulfilled according to the prescribed tolerance ε g and the new singularities of the obstacle surface are formed. Moreover, since the singularities of the reference configuration are identified at the initial optimization steps, the necessary smoothing is carried out until the simulation converges or the maximum number of steps is reached. This can be seen by comparing the last 2 steps of figure 3.

3d Results
3d results for the optimization of a unit-diameter sphere are presented here. For these results, 4 levels of refinement are used with up to 12,296,192 tetrahedral elements, while the obstacle's surface consists of 54,784 triangular elements. The viscosity is set to ν = 0.1, with a discretization scheme as in section 4.1, except that, in contrast to the 2d case, P 1 − P 1 mixed elements are used for the Navier-Stokes equations and their adjoint. Regarding the extension equation, η has an initial value of 30 and the box constraints are 0 ≤ η ≤ 60.0. A pressure projection stabilization term is used for the mixed finite element approximation, as given in [8].
The discretized domain representing the 3d obstacle is shown in figure 4. Both the base level (bold blue lines) and the highest refinement level are presented. As mentioned in section 1, we investigate the application of the geometric multigrid method as a preconditioner in shape optimization. This implies that we strive to keep the base level as coarse as possible, as can be seen from the coarsely resolved sphere, with the idea of solving the coarsest problem with a direct method as quickly as possible. While this is ideal for the usage and convergence of the geometric multigrid method, it has some undesired effects. As can be seen, the refinements introduced by the creation of the hierarchical grid levels do not necessarily introduce a smoothing of the obstacle's surface. The refinements are limited to subdividing the triangular faces present on Γ obs , while the edges from the base grid remain.
The results after 61 steps are shown in figure 5. Non-smooth points, i.e. the two tips, are generated on the front and back of Γ obs with respect to the direction of flow. This is comparable to the optimal shape obtained for the 2d case in figure 2. The effects of the grid hierarchy can be seen as the remaining edges of the super-elements.

Grid Independence Study
In order for the proposed optimization scheme to be scalable in terms of time-to-solution to very high numbers of DoFs, it is necessary for the obtained obstacle shape to be independent of the initial level of refinement. In other words, besides the scalability of the finite element building blocks of the optimization algorithm, the overall convergence of the objective function has to be mesh independent. This can be understood as obtaining the same optimized shape after a given number of outer iterations in algorithm 1, with the necessary surface singularities appearing at approximately the same locations. Therefore, in this section we provide results for a comparative study between different levels of refinement.

Figure 6: Optimal displacement field w after 400 optimization steps applied to the reference shape for several levels of refinement, indicated by colored nodes.
The grid used in this section consists of 412 triangular elements and is refined up to 1,687,552 elements. The results shown range from 2 to 6 refinement levels. The simulations are set up with viscosity ν = 0.1. An equal number of 400 optimization steps is run for all grids to have an adequate comparison point. Figure 6 shows the superimposed contours of the obstacle for 2, 3, 4, 5 and 6 levels of refinement, indicated by the colored nodes. A magnification of the front tip is used to emphasize that all tips appear at the same location, with slight differences owing to the discretization error introduced by different element sizes. In addition to these results, figure 7 shows a side-by-side comparison of the tips for the aforementioned refinement levels. This indicates that the singularities on the obstacle surface generated by algorithm 1 are grid independent.
Moreover, in figure 8 the objective function plots for 2 different refinement levels are shown. With the same viscosity as in figures 6 and 7, the simulations are set to run for 1000 steps. The purpose is to demonstrate that the achieved minimal value is independent of the geometry. We thus choose this large number of optimization steps regardless of any tolerance ε outer . The value of (13) (in blue) is compared against the Euclidean norm of the Lagrange multipliers (in green) for each refinement, while the update of the multipliers is signaled by the dashed lines (dark blue). It is evident that before the convergence of the multipliers the optimization process is local, which is why differences between the two plots with respect to the objective function value are present. In algorithm 1, the condition for the update of the aforementioned Lagrange multipliers is related to the prescribed tolerance ε g . Given that the two geometries differ due to the level of refinement, the geometrical tolerance is not necessarily fulfilled in the same optimization steps, which in turn, as seen in algorithm 1, affects when the multipliers are updated. Nevertheless, as previously mentioned in this section, the objective function converges together with the norm of the multipliers, as seen by comparing the plots for both refinement cases presented in figure 8. It can also be seen that in most cases a significant update of the Lagrange multipliers is accompanied by a substantial jump, negative or positive, in the value of the objective function. This is signaled by the intersections between the changes in the norm level of the multipliers λ g , the jumps of the objective, and the marked update steps.

Algorithmic Scalability
In this section we present weak scalability results for the 2d case presented in section 4.1. These were carried out at HLRS on the HPE Hawk supercomputer. It consists of 5632 dual-socket nodes with the AMD EPYC 7742 processor; each node has a total of 128 cores and 256 GB of memory. The machine features a 16-node hypercube connection topology, so the core counts are chosen to maximize hypercube use without significantly reducing the bandwidth. The grid partitioning is based on ParMETIS [21]. Figure 9 shows accumulated wallclock times, speedup relative to 24 cores, and iteration counts for the first three optimization steps. These results are shown for the nonlinear extension operator (29), the state equations (14) to (18), and the adjoint displacement equation (40). A P 1 finite element discretization is used for the extension operator and its adjoint equation, while mixed P 1 − P 1 shape functions are used for the Navier-Stokes state equations. The nonlinear problems are solved using Newton's method together with a BiCGStab solver for the underlying linearizations. The linear solver is preconditioned with the geometric multigrid method, which uses a V-cycle, 3 pre- and post-smoothing steps, and an LU base solver gathered on a single core. The error reduction is set to an absolute tolerance of 10^-14 and a relative tolerance of 10^-8 for the nonlinear solver, and to 10^-12 and 10^-3 for the linear solvers. The purely linear problem has a relative and absolute reduction of 10^-16. A Jacobi smoother is used within the geometric multigrid for the extension equation and its derivative, whereas the Navier-Stokes solver features an ILU smoother (see for instance [40]). The results presented start at 24 cores, with a fourfold increase for each mesh refinement. The studies show scalability and speedup for up to 6144 cores and more than 27 million triangular elements.
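The solver configuration above can be illustrated by a minimal 1d geometric multigrid V-cycle for -u'' = f, assumed purely for illustration and much simpler than the parallel UG4 solver: damped Jacobi smoothing with 3 pre- and post-smoothing steps, full-weighting restriction, linear interpolation, and a direct solve on the coarsest level.

```python
import numpy as np

def jacobi(u, f, h, steps, omega=2.0 / 3.0):
    # damped Jacobi smoothing for the standard 3-point Laplacian
    for _ in range(steps):
        u[1:-1] += omega * 0.5 * (f[1:-1] * h * h
                                  + u[:-2] + u[2:] - 2.0 * u[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def v_cycle(u, f, h):
    n = u.size - 1
    if n <= 2:                       # coarsest grid: solve directly
        u[1] = f[1] * h * h / 2.0
        return u
    u = jacobi(u, f, h, 3)           # pre-smoothing
    r = residual(u, f, h)
    rc = np.zeros(n // 2 + 1)        # full-weighting restriction
    rc[1:-1] = 0.25 * (r[1:-2:2] + 2.0 * r[2:-1:2] + r[3::2])
    ec = v_cycle(np.zeros_like(rc), rc, 2.0 * h)
    e = np.zeros_like(u)             # linear interpolation of the correction
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    u += e
    return jacobi(u, f, h, 3)        # post-smoothing

n = 128
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)   # manufactured so u(x) = sin(pi x)
u = np.zeros(n + 1)
for _ in range(10):                  # a few V-cycles suffice
    u = v_cycle(u, f, h)
```

The per-cycle error reduction of such a scheme is bounded independently of n, which is the mesh-independent convergence property exploited for weak scalability.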
Given that mesh refinements are performed for each increase in core count, a different geometrical problem is solved each time; differences in iteration counts are therefore expected. However, even for a significant increase in the number of geometric elements, the iteration counts for the linear problems remain within moderate bounds. Moreover, it is important to point out that the total number of DoFs solved within the presented PDEs in figure 9 increases from about 783k to 189 million, while the total number of DoFs solved in one optimization step is close to 300 million.
Together with the grid independence study for the outer optimization routine we thus obtain weak scalability of the overall method.

Conclusion
In this article we presented an optimization methodology which relies on the self-adaptation of the extension operator within the method of mappings. The results show that large deformations with respect to the reference configuration are possible while preserving mesh quality. This has been studied in situations where singularities have to be smoothed out or newly generated during the shape optimization process. It has been demonstrated that these two effects are particularly important to address when hierarchical multigrid solvers are applied to fluid dynamics experiments. The method's scalability and grid independence have been illustrated with the results of sections 4.3 and 5. Grid independence is necessary for applications where a high level of refinement is needed, since it guarantees that the same optimal shape is obtained regardless of the number of elements. This becomes particularly important for weak scalability, where the grid is refined with each increase in core count. The results shown in figure 9, in combination with those of figures 6 and 7, establish a proof of concept for industrial applicability, where a high number of DoFs is expected. Overall, we have presented an algorithm towards scalable shape optimization for large scale problems with the potential to work reliably also in complex geometric situations.