## Abstract

We present a monolithic parallel Newton-multigrid solver for nonlinear nonstationary three-dimensional fluid-structure interactions in arbitrary Lagrangian Eulerian (ALE) formulation. We start with a finite element discretization of the coupled problem, based on a remapping of the Navier–Stokes equations onto a fixed reference framework. The strongly coupled fluid-structure interaction problem is discretized with finite elements in space and finite differences in time. The resulting nonlinear and linear systems of equations are large and exhibit very high condition numbers. We present a novel Newton approach that is based on two essential ideas: first, a condensation of the solid deformation by exploiting the discretized velocity-deformation relation \(d_t \mathbf {u}= \mathbf {v}\); second, a simplification of the Jacobian of the fluid-structure interaction system that neglects all derivatives with respect to the ALE deformation, an approximation that has been shown to have little impact. The resulting system of equations decouples into a joint momentum equation and two separate equations for the deformation fields in solid and fluid. Besides a reduction of the problem sizes, the approximation has a positive effect on the conditioning of the systems, such that multigrid solvers with simple smoothers like a parallel Vanka iteration can be applied. We demonstrate the efficiency of the resulting solver infrastructure on a well-studied 2d test-case and we also introduce a challenging 3d problem.

## 1 Introduction

Fluid-structure interactions appear in various problems, ranging from classical applications in engineering, like the design of ships, aircraft or wind turbines, to biomedical systems describing the blood flow in the heart or, more generally, problems involving the cardiovascular system. The typical challenge of fluid-structure interactions is two-fold. First, there is the special coupling character that stems from the coupling of a hyperbolic-type equation—the solid problem—with a parabolic-type equation—the Navier–Stokes equations. Second, the moving domain character brings along severe nonlinearities with a non-local character, as geometrical changes close to the moving fluid-solid interface can have a large impact on the overall solution.

Numerical approaches can usually be classified into *monolithic approaches*, where the coupled fluid-structure interaction system is taken as one entity, and *partitioned approaches*, where two separate problems—for fluid and solid—are formulated and the coupling between them is incorporated by an outer (iterative) algorithm. The second approach has the advantage that difficulties are isolated and that perfectly suited numerical schemes can be used for each of the subproblems. There are, however, application classes where partitioned approaches either fail or lack efficiency. The *added mass effect* [10] describes exactly this special stiffness connected to fluid-structure interactions. It is typical for problems with similar densities in fluid and solid—as in the interaction of blood and tissue or of water and the solid structure of a vessel. Here, monolithic approaches are considered to be favorable.

Monolithic approaches all give rise to strongly coupled, usually very large and nonlinear algebraic systems of equations. Although there has been substantial progress in designing efficient numerical schemes for tackling the nonlinear problems [16, 21, 23] (usually by Newton’s method) and the resulting linear systems [2, 11, 13, 19, 28, 32, 36], the computational effort is still immense and numerically accurate results for 3d problems are still rare.

In this contribution we present an approximated Newton scheme for solving nonstationary fluid-structure interactions in a strictly monolithic formulation. The idea is based on the observation that the Newton convergence rate does not significantly worsen if we neglect the derivatives with respect to the ALE deformation, see [33, Section 5.2.3]. Although convergence rates slightly suffer, overall computational times can be reduced due to the smaller effort of assembling the matrix. Here, we exploit the structure of the reduced Jacobian to achieve an exact splitting of the monolithic Jacobian into a coupled problem for the velocities of fluid and solid and a second step, where separate update problems are solved for solid and fluid deformation. Apart from the approximation of the Jacobian, no further splitting error is introduced. The benefit of this approach is twofold: First, instead of one large system with seven coupled unknowns (pressure, velocity field and deformation field in 3d) we solve one coupled system of four unknowns (pressure and velocities) and two separate problems involving the deformations of each domain. Second, separating a reduced velocity problem has a positive effect on the system matrices, such that efficient preconditioners and smoothers can be applied that are suitable for easy parallelization. Finally, we use the newly developed solver to introduce and test a new three-dimensional benchmark configuration that is based on the configurations described by Hron and Turek [23].

In the following section we give a brief presentation of the fluid-structure interaction problem in a variational Arbitrary Lagrangian Eulerian formulation. Section 3 shortly presents the discretization of the equations in space and time. As formulation and discretization are based on established techniques, these two sections are rather concise. The nonlinear and linear solution framework is described in Sect. 4, where we start with an approximation of the Jacobian that results in a natural partitioning of the linear systems, which in turn are approximated by parallel multigrid methods. Numerical test-cases demonstrate the efficiency and scalability in Sect. 5. Here, we also present a new and challenging 3d configuration for benchmarking fluid-structure interactions. We conclude in Sect. 6.

## 2 Governing Equations

Here, we present the monolithic formulation for fluid-structure interactions, coupling the incompressible Navier–Stokes equations and a hyperelastic solid, based on the St. Venant Kirchhoff material. For details we refer to [33].

On the *d*-dimensional domain, partitioned in reference configuration \(\varOmega = \mathcal {F}\cup \mathcal {I}\cup \mathcal {S}\), where \(\mathcal {F}\) is the fluid domain, \(\mathcal {S}\) the solid domain and \(\mathcal {I}\) the fluid-structure interface, we denote by \(\mathbf {v}\) the velocity field, split into fluid velocity \(\mathbf {v}_f:=\mathbf {v}|_{\mathcal {F}}\) and solid velocity \(\mathbf {v}_s:=\mathbf {v}|_{\mathcal {S}}\), and by \(\mathbf {u}\) the deformation field, again with \(\mathbf {u}_s:=\mathbf {u}|_{\mathcal {S}}\) and \(\mathbf {u}_f:=\mathbf {u}|_{\mathcal {F}}\). The boundary of the fluid domain \(\varGamma _f:=\partial \mathcal {F}\setminus \mathcal {I}\) is split into inflow boundary \(\varGamma _f^{in}\) and wall boundary \(\varGamma _f^{wall}\), where we usually assume Dirichlet conditions, \(\varGamma _f^D:=\varGamma _f^{in}\cup \varGamma _f^{wall}\), and a possible outflow boundary \(\varGamma _f^{out}\), where we enforce the do-nothing outflow condition [22]. The solid boundary \(\varGamma _s=\partial \mathcal {S}\setminus \mathcal {I}\) is split into Dirichlet part \(\varGamma _s^D\) and a Neumann part \(\varGamma _s^N\).

We formulate the coupled fluid-structure interaction problem in a strictly monolithic scheme by mapping the moving fluid domain onto the reference state via the ALE map \(T_f(t):\mathcal {F}\rightarrow \mathcal {F}(t)\), constructed by a fluid domain deformation \(T_f(t)={\text {id}} + \mathbf {u}_f(t)\). In the solid domain, this map \(T_s(t)={\text {id}}+\mathbf {u}_s(t)\) denotes the Lagrange-Euler mapping and as the deformation field \(\mathbf {u}\) will be defined globally on \(\varOmega \) we simply use the notation \(T(t)={\text {id}}+\mathbf {u}(t)\) with the deformation gradient \(\mathbf {F}:=\nabla T\) and its determinant \(J:={\text {det}}(\mathbf {F})\). We find the global (in fluid and solid domain) velocity and deformation fields \(\mathbf {v}\) and \(\mathbf {u}\) and the pressure *p* in the function spaces

as solution to

where the test functions are given in

By \(\rho _s^0\) we denote the solid’s density, by \(\mathbf {u}^D(t)\in H^1(\varOmega )^d\) and \(\mathbf {v}^D(t)\in H^1(\varOmega )^d\) extensions of the Dirichlet data into the domain. The Cauchy stress tensor of the Navier–Stokes equations in ALE coordinates is given by

with the kinematic viscosity \(\nu _f\) and the density \(\rho _f\). In the solid we consider the St. Venant Kirchhoff material with the Piola Kirchhoff tensor

and with the shear modulus \(\mu _s\) and the Lamé coefficient \(\lambda _s\). In (1) we construct the ALE extension \(\mathbf {u}_f=\mathbf {u}|_\mathcal {F}\) by a simple harmonic extension. A detailed discussion and further literature on the construction of this extension are found in [33, 40].

For shorter notation, we denote by \(U:=(\mathbf {v},\mathbf {u},p_f)\) the solution and by \(\varPhi :=(\xi ,\phi ,\psi _f,\psi _s)\) the test functions.

## 3 Discretization

We give a very brief presentation on the numerical approximation of System (1). In time, we use the theta time-stepping scheme, which includes the backward Euler method, the Crank-Nicolson scheme and variants like the fractional step theta method, see [37]. In space we use conforming finite elements.

### 3.1 Temporal Discretization

For discretization in time we split the temporal interval \(I=[0,T]\) into discrete time steps \(0=t_1<t_2<\cdots < t_N = T\) with the step size \(k:=t_n-t_{n-1}\). For simplicity we assume that the subdivision is uniform. By \(U_n\approx U(t_n)\) we denote the approximation at time \(t_n\). We choose the theta time-stepping method for temporal discretization with \(\theta \in [0,1]\). To simplify the presentation we introduce

Then, one time step \(t_{n-1}\mapsto t_n\) of the theta scheme is given as

with \({{\bar{J}}}_n = \nicefrac {1}{2}(J_{n-1}+J_n)\) and \({{\bar{\mathbf {F}}}}_n = \nicefrac {1}{2}(\mathbf {F}_{n-1} + \mathbf {F}_n)\). Note that the ALE extension equation \(A_{ALE}\), the divergence equation \(A_{div}\) and the pressure coupling \(A_p\) are completely implicit. A discussion of this scheme and results on its stability for fluid-structure interactions are found in [33, 35]. We consider \(\theta = \nicefrac {1}{2}+{{{\mathcal {O}}}}(k)\) to get second order convergence and good stability properties.
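As an illustration, the theta scheme can be sketched on a scalar model ODE. The following minimal Python example (all names and values are illustrative and not part of the solver) recovers the backward Euler method for \(\theta =1\) and the Crank-Nicolson scheme for \(\theta =\nicefrac {1}{2}\); each implicit step is resolved with a scalar Newton iteration, mirroring the nonlinear solves of the coupled problem:

```python
import math

# Sketch of the one-parameter theta scheme on the model ODE
#   d/dt u = f(u),  f(u) = lam * u   (lam, k, T are illustrative values)
# theta = 1 -> backward Euler, theta = 1/2 -> Crank-Nicolson.

def theta_step(u_prev, f, df, k, theta, newton_tol=1e-12):
    """One step u_{n-1} -> u_n of the theta scheme:
       u_n - u_{n-1} = k*(theta*f(u_n) + (1-theta)*f(u_{n-1})),
       solved for u_n by a scalar Newton iteration."""
    u = u_prev                      # initial guess from the last time step
    for _ in range(50):
        res = u - u_prev - k * (theta * f(u) + (1 - theta) * f(u_prev))
        if abs(res) < newton_tol:
            break
        u -= res / (1.0 - k * theta * df(u))
    return u

lam = -2.0
f = lambda u: lam * u
df = lambda u: lam
k, u = 0.01, 1.0
for n in range(100):                # integrate u' = -2u up to T = 1
    u = theta_step(u, f, df, k, theta=0.5)
err = abs(u - math.exp(-2.0))       # Crank-Nicolson: error is O(k^2)
```

For \(\theta =\nicefrac {1}{2}\) the error after 100 steps is of the order \(10^{-5}\), consistent with second-order convergence.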

The last equation in (3) gives a relation for the new deformation at time \(t_n\)

and we will use this representation to eliminate the unknown deformation \(\mathbf {u}_n\) and base the solid stresses purely on the last time step and the unknown velocity \(\mathbf {v}_n\), i.e. by expressing the deformation gradient as

Removing the solid deformation from the momentum equation will help to reduce the algebraic systems in Sect. 4. A similar technique within an Eulerian formulation and using a characteristics method is presented in [30, 31].
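The condensation step can be sketched in a few lines. The following Python fragment (with illustrative array values; `mu_s` and `lam_s` stand in for the material constants \(\mu _s,\lambda _s\)) assembles the deformation gradient and the St. Venant Kirchhoff stress purely from the velocity unknown and data of the previous time step, as in (5):

```python
import numpy as np

# Sketch of the condensation (5): with the theta-scheme relation
#   u_n = u_{n-1} + k*(1-theta)*v_{n-1} + k*theta*v_n
# the solid deformation gradient, and hence the St. Venant Kirchhoff
# stress, becomes a function of the unknown velocity v_n alone
# (plus known data from step n-1). All values are illustrative.

k, theta, d = 0.005, 0.5, 3
mu_s, lam_s = 0.5e6, 2.0e6          # shear modulus and Lame coefficient

rng = np.random.default_rng(1)
grad_u_prev = 0.01 * rng.standard_normal((d, d))   # known: grad u_{n-1}
grad_v_prev = 0.01 * rng.standard_normal((d, d))   # known: grad v_{n-1}
grad_v_new  = 0.01 * rng.standard_normal((d, d))   # unknown in the solve

def F_of_velocity(gv_new):
    """Deformation gradient F_n expressed purely via velocities, eq. (5)."""
    return (np.eye(d) + grad_u_prev
            + k * (1 - theta) * grad_v_prev + k * theta * gv_new)

def pk_stress(F):
    """Piola-Kirchhoff stress of the St. Venant Kirchhoff law."""
    E = 0.5 * (F.T @ F - np.eye(d))         # Green-Lagrange strain
    return 2 * mu_s * E + lam_s * np.trace(E) * np.eye(d)

F_n = F_of_velocity(grad_v_new)
J_n = np.linalg.det(F_n)                    # J stays positive for small steps
Sigma = pk_stress(F_n)                      # symmetric by construction
```

Linearizing `pk_stress(F_of_velocity(.))` with respect to `grad_v_new` gives exactly the velocity-only solid blocks of the reduced Jacobian discussed in Sect. 4.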

### 3.2 Finite Elements

In space, we discretize with conforming finite elements by choosing discrete function spaces \(X_h\) and \(Y_h\) for the solution \(U_h\in X_h\) and the test functions \(\varPhi _h\in Y_h\). We only consider finite element meshes that resolve the interface \(\mathcal {I}\) in the reference configuration, such that the ALE formulation will always exactly track the moving interface. In our setting, implemented in the finite element library Gascoigne 3D [5], we use quadratic finite elements for all unknowns and add stabilization terms based on local projections [4, 18, 29, 33] to satisfy the inf-sup condition. Where transport is dominant, additional stabilization terms of streamline upwind type [23, 34, 38] or of local projection type [14, 33] are added. As the remainder of this manuscript only considers the fully discrete setting, we refrain from indicating spatial or temporal discrete variables with the usual subscripts.

For each time step \(t_{n-1}\mapsto t_n\) we introduce the following short notation for the system of algebraic equations that is based on the splitting of the solution into unknowns acting in the fluid domain \((\mathbf {v}_f,\mathbf {u}_f)\), on the interface \((\mathbf {v}_i,\mathbf {u}_i)\) and those on the solid \((\mathbf {v}_{s},\mathbf {u}_{s})\). The pressure variable *p* acts in the fluid and on the interface.

\({{{\mathcal {D}}}}\) describes the divergence equation which acts in the fluid domain and on the interface, \({{{\mathcal {M}}}}\) the two momentum equations, acting in the fluid domain, on the interface and in the solid domain (which is indicated by a corresponding index), \({{{\mathcal {E}}}}\) describes the ALE extension in the fluid domain and \({{{\mathcal {U}}}}\) is the relation between solid velocity and solid deformation, acting on the interface degrees of freedom and in the solid. Note that \(\mathcal{M}^i\) and \({{{\mathcal {M}}}}^s\), the terms describing the momentum equations, do not directly depend on the solid deformation \(\mathbf {u}_{s}\), as we express the deformation gradient by the velocity, see (5).

## 4 Solution of the Algebraic Systems

In fluid-structure interactions the solid and fluid problems are coupled via interface conditions. Forces in normal direction along the interface have to be equal (dynamic coupling condition) and the fluid domain has to follow the solid motion (kinematic and geometric coupling conditions). If the solid motion is rather small and slow, the energy exchange happens mainly via the dynamic coupling condition. In such cases, explicit time-stepping schemes can be used for the mesh motion and the ALE transformation. We follow a different approach and use a fully implicit time-stepping scheme with an inexact Jacobian in the Newton algorithm: we neglect the derivatives with respect to the ALE deformation. Thereby, we have to solve in every Newton step a linear system of the same complexity as in the case of a partitioned time-stepping scheme.

In [33, chapter 5] we give a numerical study on different linearization techniques. It is found that the overall computational time can be reduced by neglecting the ALE derivatives in the Jacobian. Even for the fsi-3 benchmark problem of Hron and Turek [24] it is more efficient (in terms of overall computational time) to omit these derivatives at the cost of some additional Newton steps. Neglecting the ALE derivatives will be crucial for the reduction step described in the following section.

As we only change the Jacobian, we still apply a fully implicit time-stepping scheme and take advantage of its stability properties. Furthermore, the transport due to the mesh motion is well approximated. For small time step sizes we will still observe super-linear convergence, as with an exact Newton algorithm. In addition, the simplified structure of the matrix eases the development of preconditioners significantly, as we will see later.

### 4.1 Relation to Approaches in Literature

Many (perhaps most) works on solvers for fluid-structure interactions are based on partitioned schemes, where highly tuned schemes can be applied to the two subproblems and acceleration methods are developed for the coupling. For an overview on some methods we refer to contributions in [8, 9] and the literature cited therein. We focus on problems with a dominant added mass effect, where monolithic approaches are believed to be more efficient [21].

In the following we assume that the monolithic problem is approximated with a Newton scheme. It has been documented [2, 32] that the Jacobian is very ill-conditioned, with condition numbers exceeding those in fluid or solid mechanics by far. Furthermore, the systems are (in particular in 3d) so large that direct solvers are not applicable. In addition we found [32] that the condition numbers may be so large that direct solvers do not even converge well. All successful solution strategies will therefore feature some kind of partitioning, usually by means of a decoupled preconditioner within a GMRES iteration. In [28] an overview of state-of-the-art preconditioning techniques for iterative fluid-structure interaction solvers is given.

Multigrid solvers were first used to accelerate the solution of the subproblems within an iterative scheme. A fully monolithic geometric multigrid approach was presented in [23] for 2d fsi problems. Here, the multigrid smoother was based on a Vanka iteration. In [7] the authors analyzed a highly simplified model problem and showed that a partitioned iteration as smoother should result in ideal multigrid performance with improved convergence rates on deeper mesh hierarchies. An algebraic multigrid method with applications in 2d and 3d [19] was based on a Gauss-Seidel splitting in the smoother. In [32] we presented a fully geometric monolithic multigrid method with a smoother that is based on a partitioning into fluid and solid problems and a block decomposition of each equation. This approach has been extended to incompressible materials and also to direct-to-steady-state solutions [2].

Some of these contributions employ parallelism. Recently, a block-preconditioned parallel GMRES iteration was presented [25] and showed good performance on various 2d and 3d test cases. A Gauss-Seidel decoupling with highly efficient and massively parallel preconditioners based on the SIMPLE scheme for the fluid and multigrid for a linear elasticity problem is presented in [13].

### 4.2 Linearization and Splitting

Each time step of the fully discrete problem is solved by Newton's method. Evaluating the Jacobian is cumbersome due to the moving domain character of the fluid problem. First presentations of the derivatives of the fsi problem with respect to the mesh motion, based on the concept of shape derivatives, have been given by Fernandez and Moubachir [17]. Details in the spirit of our formulation in ALE coordinates are given in [33, Section 5.2.2]. Based on the notation (6), let \(U^{(0)}\) be an initial guess (usually taken from the last time step). We iterate for \(l=0,1,2,\dots \)

with a line search parameter \(\omega ^{(l)}>0\) and the Jacobian \({{{\mathcal {A}}}}'(U)\) evaluated at *U*. Each linear problem can be written as

where the right hand side vector \({\mathbf {B}}={{{\mathcal {B}}}}-\mathcal{A}(U^{(l)})\) is the Newton residual. The Jacobian shows the coupling structure of the nonlinear problem (6). The superscripts of \({{{\mathcal {M}}}}^f,{{{\mathcal {M}}}}^i,{{{\mathcal {M}}}}^s\) indicate whether a degree of freedom belongs to a Lagrange node in the fluid, on the interface or in the solid. The subscripts indicate the dependency on the unknown solution components, pressure, velocity and deformation, each in the different domains.

Three of the entries in bold letters, \(\varvec{\mathcal{M}^{s}_{\mathbf {u}_i},{{{\mathcal {M}}}}^s_{\mathbf {u}_s}}\) and \(\varvec{\mathcal{M}^i_{\mathbf {u}_s}}\) are zero. As the deformation gradient is expressed in terms of the velocity, see (5), the dependency of the solid equation on the solid’s deformation does not appear. The entry \(\varvec{{{{\mathcal {M}}}}^i_{\mathbf {u}_{i}}}\) belongs to test functions \(\phi \) that live on the interface. Thus, it contributes to both the solid equation and the fluid equation, e.g.

where only the solid part will vanish, compare (2). The remaining part belongs to the ALE map and these terms require the highest computational effort.

Corresponding terms are found in \({{{\mathcal {M}}}}^{f}_{\mathbf {u}_f},\mathcal{M}^{i}_{\mathbf {u}_f}, {{{\mathcal {M}}}}^{f}_{\mathbf {u}_i}\) and also in \({{{\mathcal {D}}}}_{\mathbf {u}_f}\) and \({{{\mathcal {D}}}}_{\mathbf {u}_i}\), which are all highlighted and marked in gray. We will set these matrix entries to zero and note once more that this is the only approximation within our Newton-multigrid scheme. Sorting the unknowns as \((p,\mathbf {v}_f,\mathbf {v}_i,\mathbf {v}_s,\mathbf {u}_f,\mathbf {u}_i,\mathbf {u}_s)\), the reduced system takes the following form and reveals a block structure

The dropped ALE derivatives (bold face zeros) are the most costly parts in matrix assembly. While skipping these terms does worsen Newton convergence rates, the overall computational time can still benefit. This has been shown in [33, Section 5.2.3] considering a benchmark problem with large deformation. This reduced linear system decomposes into three sub-steps. First, the coupled momentum equation, living in fluid and solid domain and acting on pressure and velocity

Second, the update equation for the deformation on the interface and within the solid domain

which, as a finite element discretization of the zero-order equation \(\mathbf {u}_n = \mathbf {u}_{n-1}+k (1-\theta )\mathbf {v}_{n-1}+k\theta \mathbf {v}_n\), only involves the mass matrix on both sides, such that this update can be performed by one vector addition. Finally it remains to solve the ALE extension equation

a simple elliptic equation, usually either the vector Laplacian or a linear elasticity problem, see [33, section 5.2.5]. The main effort lies in the momentum equations (10), which is still a coupled fluid-solid problem with saddle-point character due to the incompressibility.
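The three sub-steps can be illustrated on a small block system. The following Python sketch (with tiny dense stand-ins for the finite element matrices; all names and sizes are illustrative) verifies that solving momentum equation, deformation update and extension sequentially reproduces the solution of the monolithic block-triangular system:

```python
import numpy as np

# Sketch of the exact splitting of the reduced Jacobian. After dropping
# the ALE derivatives, the system for (x_mom, x_us, x_uf) is block
# lower-triangular:
#   [ A_mom   0     0   ] [x_mom]   [b_mom]
#   [ C_us    M     0   ] [x_us ] = [b_us ]   M: mass matrix, update (11)
#   [ C_uf    0     E   ] [x_uf ]   [b_uf ]   E: extension operator (12)
# so the three sub-steps introduce no additional splitting error.

rng = np.random.default_rng(2)
n = 8
A_mom = rng.standard_normal((n, n)) + n * np.eye(n)  # momentum block (10)
C_us, C_uf = rng.standard_normal((n, n)), rng.standard_normal((n, n))
M = np.diag(rng.uniform(1, 2, n))                    # (lumped) mass matrix
E = rng.standard_normal((n, n)) + n * np.eye(n)      # elliptic extension
b_mom, b_us, b_uf = rng.standard_normal((3, n))

# sub-step 1: coupled momentum equation for pressure and velocities
x_mom = np.linalg.solve(A_mom, b_mom)
# sub-step 2: solid deformation update; with mass matrices on both sides
# this reduces to (scaled) vector additions
x_us = np.linalg.solve(M, b_us - C_us @ x_mom)
# sub-step 3: ALE extension, a simple elliptic problem
x_uf = np.linalg.solve(E, b_uf - C_uf @ x_mom)

# check against the monolithic block system
Z = np.zeros((n, n))
A = np.block([[A_mom, Z, Z], [C_us, M, Z], [C_uf, Z, E]])
x = np.linalg.solve(A, np.concatenate([b_mom, b_us, b_uf]))
assert np.allclose(x, np.concatenate([x_mom, x_us, x_uf]))
```

In the solver the blocks are large sparse matrices and sub-step 1 is the only one that requires the multigrid iteration described below.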

Details on the derivatives appearing in (10) are given in [17, 41, 42] and in [33, Section 5.2.2] in the framework of this work. Note however that most of these terms, including all derivatives of the Navier–Stokes equations in direction of the fluid domain deformation \(\mathbf {u}_f\) are dropped.

### 4.3 Solution of the Linear Problems

The efficient solution of the linear systems arising in Newton approximations to nonlinear fluid-structure interaction problems is still an open problem. Lately, some progress has been made in the direction of multigrid preconditioners for the monolithic problem [2, 19, 32, 33]. In all these contributions it has proven to be essential to apply a partitioning into fluid problem and solid problem within the smoother.

We briefly present the linear algebra framework used in the software library *Gascoigne 3D* [5]. We use equal-order finite elements for all unknowns, namely pressure, velocity and deformation, such that we can locally block all degrees of freedom in each Lagrange point. The solution \(U_h\) is written as

By \(N_h\) we denote the number of degrees of freedom (for every unknown), by *d* the dimension. Likewise, the system matrix \(\mathbf {A}\) is a matrix with block structure, i.e. \(\mathbf {A}\in \mathbb {R}^{N_h(2d+1)\times N_h(2d+1)}\) with \(\mathbf {A}_{ij}\in \mathbb {R}^{(2d+1)\times (2d+1)}\). Considering the approximation scheme described in (10), (11) and (12), the coupled momentum equation has \(n_c^\mathcal{M}=d+1\) components and the extension problem consists of \(n_c^\mathcal{E}=d\) components. In general, the complete linear algebra module is acting on generic matrices and vectors with a block structure and local blocks of size \(n_c\times n_c\) and \(n_c\), respectively. The linear solver is designed by the following approach:

- (I)
As outer iteration we employ a GMRES method. Usually very few (\(<10\)) iterations are required such that restarting strategies are not used.

- (II)
The GMRES solver is preconditioned by a geometric multigrid method in V-cycle [3, 26]. The finite element mesh on each multigrid level resolves the fluid-solid interface.

- (III)
As smoother in the multigrid solver we use a Vanka type iteration which we will outline in some detail.
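The node-wise block structure underlying these components can be sketched as follows (Python, with illustrative sizes; the actual implementation in *Gascoigne 3D* uses sparse block structures):

```python
import numpy as np

# Sketch of the node-wise block storage used with equal-order elements:
# the matrix is an N x N array of dense (n_c x n_c) blocks, one block
# row/column per Lagrange node, and the vector an N x n_c array.
# Sizes are illustrative; d = 3 gives n_c = 2*d + 1 = 7 for the full
# system, n_c = d + 1 for the momentum block, n_c = d for the extension.

d = 3
n_c = 2 * d + 1                # pressure + velocity + deformation per node
N = 5                          # number of Lagrange nodes (tiny example)
rng = np.random.default_rng(3)
A = rng.standard_normal((N, N, n_c, n_c))   # block matrix A_ij
x = rng.standard_normal((N, n_c))           # block vector

y = np.einsum('ijab,jb->ia', A, x)          # block matrix-vector product

# equivalent flat product on the assembled (N*n_c) x (N*n_c) matrix
A_flat = A.transpose(0, 2, 1, 3).reshape(N * n_c, N * n_c)
assert np.allclose(A_flat @ x.reshape(-1), y.reshape(-1))
```

Blocking by Lagrange node keeps all unknowns of one node contiguous in memory, which benefits both the matrix-vector product and the patch-local solves of the smoother.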

The smoother for the velocity problem and the smoother for the ALE extension problem are of Vanka type. Let \({{{\mathcal {N}}}}_h\) be the set of degrees of freedom of the discretization on mesh level \(\varOmega _h\). By \({{{\mathcal {P}}}} = \{P_1,\dots ,P_{n_{{{\mathcal {P}}}}}\}\) with \(P_i\subset \mathcal{N}_h\) we denote a partitioning of the unknowns into local patches. In the most simple case, \(P_i\) includes all degrees of freedom in one element of the mesh. Larger patches, e.g. by combining 4 adjacent elements in 2d or 8 elements in 3d, are possible. By \(n_{{{\mathcal {P}}}}\) we denote the number of patches and by \(n_p\) the size of each patch, i.e. the number of degrees of freedom in the patch. For simplicity, we assume that all patches in \({{{\mathcal {P}}}}\) have the same size. By \({{{\mathcal {R}}}}_i:\mathbb {R}^{N_hn_c}\rightarrow \mathbb {R}^{n_p}\) we denote the restriction of a global vector to the degrees of freedom of one patch, by \({{{\mathcal {R}}}}^T_i\) the prolongation. Given a block vector \(\mathbf {x}\in \mathbb {R}^{N_hn_c}\) and a block matrix \(\mathbf {A}\in \mathbb {R}^{N_hn_c\times N_hn_c}\) we denote by

the restrictions to the degrees of freedom of one patch \(P_i\). We iterate

with a damping parameter \(\omega _V\approx 0.8\). This smoother can also be considered as a domain decomposition iteration with minimal overlap. Numerical tests have shown that this simple Jacobi coupling is more efficient than a corresponding Gauss-Seidel iteration.
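A minimal sketch of this damped Jacobi-type patch iteration, here applied to a 1d model matrix with patches of two unknowns (all sizes and the model problem are illustrative, not the fsi matrices):

```python
import numpy as np

# Sketch of the damped Jacobi-type Vanka smoother (13): for every patch
# P_i, restrict the global residual, solve with the local matrix
# A_i = R_i A R_i^T, and prolongate the correction. The residual is
# computed once per sweep (Jacobi coupling), so the patch loop is
# trivially parallel after coloring.

def vanka_sweep(A, b, x, patches, omega=0.8):
    res = b - A @ x                         # one global residual per sweep
    dx = np.zeros_like(x)
    for P in patches:                       # independent local solves
        A_i = A[np.ix_(P, P)]               # R_i A R_i^T
        dx[P] += np.linalg.solve(A_i, res[P])
    return x + omega * dx

n = 16
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1d model matrix
b = np.ones(n)
patches = [list(range(i, i + 2)) for i in range(0, n, 2)]

x = np.zeros(n)
r0 = np.linalg.norm(b - A @ x)
for _ in range(20):
    x = vanka_sweep(A, b, x, patches)
r_final = np.linalg.norm(b - A @ x)         # residual decreases
```

As a standalone iteration the sweep converges only slowly; within the multigrid cycle it is used purely as a smoother for the high-frequency error components.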

The local matrices \(\mathbf {A}_i\) are inverted exactly using the library *Eigen* [20]. They are of substantial size: for \(d=3\), the local matrices corresponding to the momentum equations (10) have dimension \(108\times 108\) if small patches are used, and \(500\times 500\) if the smoother is based on the larger patches.

### 4.4 Parallelization

Basic features of *Gascoigne 3D* [5] are parallelized based on *OpenMP* [27]. For the parallelization of the assembly of residuals and of the matrix, as well as of the application of the Vanka smoother (13), we use a coloring of the patches \({{{\mathcal {P}}}}\) such that no collisions appear. The usual memory bottleneck of finite element simulations will limit the parallel efficiency of the matrix-vector product and the Vanka smoother. We will present some data on the parallel performance in Sect. 5.5.4.

## 5 Numerical Results

### 5.1 Problem Configuration

Two different test-cases are considered to study the performance of the discretization and the solvers that have been presented in Sects. 3 and 4. First, we perform a numerical study based on the 2d fsi-3 benchmark problem that has been defined by Hron and Turek [24]. Second, we present a new 3d benchmark configuration that is based on the Hron & Turek problem.

#### 5.1.1 2d Configuration

As two-dimensional configuration we solve the nonstationary 2d fsi-3 benchmark problem that has been introduced by Hron and Turek [24] and since then has been revisited in many contributions [21, 34] or [33, chapter 7]. We present results for this well established benchmark problem in order to validate the discretization and to compare the performance of the solver with results published in literature. The material parameters are given in Table 1; they yield a Reynolds number of Re = 200 and a periodic flow pattern.

#### 5.1.2 3d Configuration

Figure 1 shows the geometric configuration of the 3d benchmark problem. The computational domain, with dimensions \(2.8{\hbox { m}}\times 0.41{\hbox { m}}\times 0.41{\hbox { m}}\), is hexahedral with a cylinder cut out of it

The midpoint of the cylinder is slightly non-symmetric to allow for a stable oscillatory flow at low Reynolds numbers. Attached to the cylinder is an elastic beam with approximate dimensions \(0.35\times 0.02\times 0.2\), given in its initial state at time \(t=0\) as

The reference fluid domain at time \(t=0\) is given by

*Boundary conditions*

The boundary of the domain is split into the *inflow boundary* \(\varGamma _f^{in}\) at \(x=0\), the *outflow boundary* \(\varGamma _f^{out}\) at \(x=2.8\), the *wall boundaries* at \(z=0\), \(z=0.41\), \(y=0\) and \(y=0.41\), and the *cylinder boundary* \(\varGamma ^{cyl}_f\) at \((x-0.5)^2+(y-0.2)^2=0.05^2\). On the inflow boundary \(\varGamma _f^{in}\) we prescribe a bi-parabolic profile

that satisfies \(|\varGamma _f^{in}|^{-1}\int _{\varGamma _f^{in}} \mathbf {v}^{in}\,\text {d}s ={{\bar{\mathbf {v}}}}\), where \({{\bar{\mathbf {v}}}}\) is the average velocity. For regularization we suggest introducing a transient start-up of the inflow

On the remaining boundaries \(\varGamma _f^{wall}\cup \varGamma _f^{cyl}\) the no-slip condition \(\mathbf {v}=0\) is prescribed. For the deformation \(\mathbf {u}\) (both the solid deformation and the ALE extension), a no-slip condition \(\mathbf {u}=0\) is prescribed on all boundaries. On the outer boundaries \(\varGamma _\text {wall}, \varGamma _\text {in}\) and \(\varGamma _\text {out}\) this condition can be relaxed to allow for larger mesh deformations, see [33, Section 5.3.5].

*Material Parameters* Material parameters similar to the 2d set are taken; the values are given in Table 1. These parameters give a Reynolds number of \(Re=175\) and a periodic flow pattern arises.

### 5.2 Quantities of Interest

For the 2d configuration, we present the displacement at the tip of the flag at the point \(A=(0.6, 0.2)\) in x- and y-direction. In the case of the 3d configuration we take the point \(B=(0.9,0.2,0.3)\) on the back face of the beam and present the displacement in x-, y- and z-direction. These values are evaluated at every time point. In addition we compute the drag and lift values around the beam and cylinder. To compute the drag \(\mathbf {f}\cdot \mathbf {e}_1\) and lift forces \(\mathbf {f}\cdot \mathbf {e}_2\) with \(\mathbf {e}_i = (\delta _{ij})_{j=1}^3\in \mathbb {R}^3\) and

we evaluate the residual representation

where \(\mathbf {1}_{\varGamma _{cyl}}\) is a finite element test function which is one along the cylinder \(\varGamma _{cyl}\) and zero elsewhere. Thereby we can compute the mean drag and lift values on every time interval \(I_n=[t_n,t_{n+1}]\) with very high precision. Details on the evaluation of such surface integrals for flow problems are given in [6] and, in the case of fluid-structure interactions, in [33, Section 6.6.2].
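The principle of the residual-based force evaluation can be demonstrated on a simple model problem. The following sketch (a 1d Poisson analogue, not the fsi implementation) tests the unconstrained residual with a discrete indicator function, the analogue of \(\mathbf {1}_{\varGamma _{cyl}}\), and recovers the exact boundary flux:

```python
import numpy as np

# Sketch of the residual-based evaluation of surface quantities: rather
# than integrating stresses over the boundary, the unconstrained
# residual is tested with a discrete function that is 1 on the boundary
# dofs and 0 elsewhere. Shown for the 1d model -u'' = 0 on (0,1) with
# u(0) = 0, u(1) = 1, whose exact outflow flux is u'(1) = 1.

n = 10                                   # number of P1 elements
h = 1.0 / n
K = (2 * np.eye(n + 1) - np.eye(n + 1, k=1) - np.eye(n + 1, k=-1)) / h
K[0, 0] = K[n, n] = 1.0 / h              # boundary nodes touch one element

u = np.zeros(n + 1)
u[n] = 1.0                               # Dirichlet value at x = 1
inner = np.arange(1, n)
u[inner] = np.linalg.solve(K[np.ix_(inner, inner)],
                           -K[np.ix_(inner, [0, n])] @ u[[0, n]])

indicator = np.zeros(n + 1)
indicator[n] = 1.0                       # 1 on the boundary node, 0 elsewhere
flux = indicator @ (K @ u)               # residual tested with the indicator
```

The evaluated flux equals the exact value 1 up to round-off, which mirrors the high precision of the residual representation for the drag and lift values.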

### 5.3 Approximative Newton Scheme (2d benchmark)

We start by investigating the effect of the approximation of the Jacobian in our reduced Newton scheme. The 2d fsi-3 benchmark problem by Hron and Turek is evaluated on the time interval \(I=[5,5.5]\), where the dynamics are fully developed and large deformations appear. A similar study with the same parameters and discretization has been performed in [33, chapter 5.2.3], however based on the full monolithic Jacobian and using a direct solver for the linear problems. The comparison with the results in [33] allows us to evaluate the effects of the presented inexact Jacobian on the Newton scheme. On this time interval the oscillations are fully developed, such that the geometric nonlinearities that come from the ALE mapping have to be taken into account.

We only update the Jacobian of (10), the momentum equation, if the nonlinear convergence rate, which is measured as

is above a given threshold \(\gamma _{nt}\). The Jacobian of (12), the mesh motion problem, is only assembled once in the first time step, as we use a linear elasticity law. As in [33], we investigate the behavior for the parameters

where \(\gamma _{nt}=0\) corresponds to the assembly of the approximated Jacobian in every Newton step. We solve the linear systems in every Newton step using a direct solver without any parallelization. The computations are performed on an Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz. For the time-stepping we use the suggested implicitly shifted Crank-Nicolson scheme with \(\theta =0.5+2k\) and the time step size \(k=0.005\)s. For the spatial discretization we choose equal-order biquadratic elements on a mesh with \(80\,960\) dofs (mesh level 4). The Newton algorithm is stopped once the relative residual has been reduced by eight orders of magnitude (\(\text {relative tol}=10^{-8}\)).
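The Jacobian-reuse strategy can be sketched as follows. The Python example below (model problem, tolerances and thresholds are illustrative) reassembles the Jacobian only when the measured convergence rate exceeds \(\gamma _{nt}\):

```python
import numpy as np

# Sketch of the Jacobian-reuse strategy: the (approximated) Jacobian is
# only reassembled when the measured nonlinear convergence rate exceeds
# the threshold gamma_nt; gamma_nt = 0 reassembles in every step.

def newton_lazy(F, dF, x, gamma_nt, tol=1e-8, maxit=50):
    J = dF(x)                       # assemble once up front
    assemblies = 1
    res = res0 = np.linalg.norm(F(x))
    for step in range(1, maxit + 1):
        x = x - np.linalg.solve(J, F(x))
        res_new = np.linalg.norm(F(x))
        rate = res_new / res        # nonlinear convergence rate
        if res_new < tol * res0:
            return x, step, assemblies
        if rate > gamma_nt:         # rate too slow -> fresh Jacobian
            J = dF(x)
            assemblies += 1
        res = res_new
    return x, maxit, assemblies

# small model system: x^2 + y^2 = 1, x = y^3 (illustrative)
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]**3])
dF = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -3*x[1]**2]])
x0 = np.array([0.7, 0.7])
x_full, n_full, a_full = newton_lazy(F, dF, x0.copy(), gamma_nt=0.0)
x_lazy, n_lazy, a_lazy = newton_lazy(F, dF, x0.copy(), gamma_nt=0.2)
```

The lazy variant typically needs a few more Newton steps but clearly fewer Jacobian assemblies, which is exactly the trade-off studied for \(\gamma _{nt}\) in Fig. 2 and Table 2.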

In Fig. 2 we show the results for each time step in the interval \(I=[5,5.5]\). The top row shows that the fewest Newton steps are required if \(\gamma _{nt}=0\) is used. This is expected, as \(\gamma _{nt}=0\) corresponds to the full Newton scheme that allows for quadratic convergence. While the effect is small for \(\gamma _{nt}=0.05\), the Newton iteration count strongly increases for \(\gamma _{nt}=0.5\), where up to 20 steps are required, compared to at most 5 steps for \(\gamma _{nt}=0\) and 6 steps for \(\gamma _{nt}=0.05\). In the middle plot of Fig. 2 we give the number of Jacobians that have to be assembled. For \(\gamma _{nt}=0\) these numbers obviously correspond to the number of Newton steps, as the Jacobian is newly assembled in each step. For \(\gamma _{nt}=0.05\) and \(\gamma _{nt}=0.5\) the required number of assemblies is strongly reduced. Finally, the lower plot shows the resulting computational time. Although \(\gamma _{nt}=0\) yields the best convergence rates, it requires the highest computational time. The choice \(\gamma _{nt}=0.05\) reduces the computational time by a factor of 2 while still giving very robust convergence. These results are in agreement with the study in [33]. With respect to computational time, Table 2 shows that \(\gamma _{nt}=0.2\) is most efficient, as it best balances the reduced time for assembling the Jacobian against the increased time due to additional Newton steps. These results also reveal the large computational time required for assembling the Jacobian and preparing the multigrid smoother.

Table 2, where we collect the accumulated numbers for the complete interval \(I=[5,5.5]\), shows that we need 460 Newton steps if we assemble the Jacobian in (10) in every Newton step. As we neglect the sensitivity information with respect to the mesh motion, we still have an inexact Newton scheme. Nevertheless, we need fewer Newton steps than with the exact Jacobian as used in [33], where 532 Newton steps were required for the same setting. This is in line with the numerical tests on the inexact Jacobian for the 2d fsi-3 benchmark in [33], where first numerical studies showed no disadvantages due to the inexact Jacobian. Nevertheless, the better convergence rate is surprising. The direct solver UMFPACK [12] has difficulties solving the exact Jacobian accurately enough, as reported in [32, 33], which could be the reason for the higher number of Newton steps there. The condition numbers of the matrices of the subproblems (10), (11) and (12) are much better than for the exact Jacobian, as already analyzed in [33]. To conclude, the modification of the Newton scheme has only minor influence on the Newton convergence rate, at least in the numerical study presented here.

### 5.4 Reference Values

All results presented in the following sections are initialized by a time-stepping scheme with \(k=0.004\)s computing a solution on the time interval \(I=[0,8]\) on all mesh levels indicated in Table 3. The corresponding solutions at time \(t=8\)s act as initial values for further computations on the interval \(I=[8,10]\) based on the time step sizes \(k=0.004\)s, \(k=0.002\)s and \(k=0.001\)s. To avoid inaccuracies in the reference values due to a rapid change of the numerical discretization parameters, we only present results on the interval \(I=[9,10]\). A similar approach with adaptive time-stepping schemes is demonstrated in [15] and shows accurate results.

#### 5.4.1 Reference Values for the 2d Configuration

We summarize the results in Table 4. Within the interval \(I=[9,10]\) we indicate for each functional \(j(\cdot )\) the average of minimum and maximum as well as the deviation, i.e.

\(j_{avg} = \frac{1}{2}\big (\max _{I} j + \min _{I} j\big ),\qquad j_{dev} = \frac{1}{2}\big (\max _{I} j - \min _{I} j\big ).\quad (16)\)
The values presented in the table for different time step sizes and spatial mesh sizes indicate convergence of the algorithm in space and show that on the coarse grids the spatial discretization error dominates the temporal discretization error. These results are in very good agreement with the values found in the literature [36].
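For illustration, the reporting convention (16), as we read it from the text, applied to a synthetic oscillatory signal:

```python
import numpy as np

# reporting convention of Table 4 (as we read (16)): mean of the extrema and
# half their distance over the evaluation interval, i.e. j ~ j_avg +/- j_dev
def avg_and_dev(j):
    j = np.asarray(j)
    return 0.5 * (j.max() + j.min()), 0.5 * (j.max() - j.min())

# synthetic 11 Hz signal on I = [9, 10] (illustrative data only)
t = np.linspace(9.0, 10.0, 501)
j_avg, j_dev = avg_and_dev(2.0 + 0.3 * np.sin(2.0 * np.pi * 11.0 * t))
print(f"{j_avg:.3f} +/- {j_dev:.3f}")
```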

#### 5.4.2 Reference Values for the 3d Configuration

In 3d, we evaluate the displacement of the elastic beam in the point *B* and also compute the drag and lift coefficients around the whole cylinder and the flag. In Fig. 3 we show the different functionals as functions of time over the interval \(I=[9,10]\). In addition, we summarize the average of maximal and minimal value as well as the deviation (see (16)) for different meshes and different time step sizes *k*. To draw conclusions on convergence or to establish reference values, the computations will have to be repeated on even finer meshes in the future.

### 5.5 Performance of the Linear Solver

To test the linear iterative solver presented in Sect. 4, we recomputed the solution on different mesh levels for the 2d and 3d benchmark configurations on the time interval \(I=[9,9.5]\) with time step size \(k=0.002\)s (250 steps). The beam oscillates in this time interval. Hence, due to the strong coupling, the solution of the Newton system is very challenging, and both the fluid and the solid elasticity problem have to be solved very accurately.

In every time step, the Newton algorithm terminates if the residual is reduced by eight orders of magnitude (\(\text {relative tol}=10^{-8}\)) or if the absolute residual falls below \(10^{-8}\). In every Newton step, the iterative solver for the linear problem (10) reduces the error by a factor of \(10^{-4}\). The parameter \(\gamma _{nt}=0.05\) is chosen as in Sect. 5.3 to decide whether the Jacobian of the momentum equation (10) is reassembled in the next Newton step. The mesh motion subproblem (12) is a linear elasticity problem and can hence be solved very efficiently with the geometric multigrid solver. Nevertheless, as it has to be solved after every Newton step, the solution of this linear system still contributes notably to the computational time. The matrix for the linear mesh motion problem (12) only has to be assembled once in the first step.

In the following, we will only present averaged values. By “mean time per Newton step” we denote the average time of each step, measured over all 250 time steps. Hence, this average value also includes the time to reassemble the Jacobian, whose assembly frequency depends on the nonlinear convergence rate, see Sect. 5.3. To make the values comparable with other solution approaches, we additionally present the mean time to assemble one Jacobian of the momentum equation (10). In the case of the direct solver, this includes the times for preparation and computation of the LU decomposition. In the case of the ILU and Vanka smoothers, the assembly times include the time to compute the ILU decomposition or the LU decompositions of the block matrices \(\mathbf {A}_i\).

#### 5.5.1 Dependency on the Vanka Patch Size (2d fsi-3)

Concerning the Vanka smoother, the question arises how large the patches \(P_i\) should be chosen to solve the linear system coming from the momentum equation (10) most efficiently. The simple structure of the Vanka solver makes it possible to use different patch sizes in the fluid and solid domains. To test different blocking strategies, we recorded the computational time for the 2d fsi-3 benchmark on the finest mesh level 6 and present the mean number of Newton steps and matrix assemblies per time step in Table 5. We either choose patches consisting of one element (\(n_p=3^2\cdot 3=27\)) or patches stretching over four adjacent elements (\(n_p=5^2\cdot 3=75\)). This yields local matrices of size \(\mathbf {A}_i\in \mathbb {R}^{27\times 27}\) or, for the larger patches, \(\mathbf {A}_i\in \mathbb {R}^{75\times 75}\).

We observe that the minimal number of GMRES steps to solve (10) in every Newton step is obtained with \(n_p=75\). If we use only the degrees of freedom of one element (\(n_p=27\)) as a block in the solid domain, the number of GMRES steps increases and the Newton convergence suffers. This effect cannot be observed if we use smaller patches only within the fluid domain but keep large patches in the solid. As computational times are reasonably small in 2d, we will always use larger patches of size \(n_p=75\) in the Vanka smoother for all 2d computations to follow.

In 3d, the same blocking strategy would correspond to combining 8 elements into one block, resulting in \(n_p = 5^3\cdot 4 = 500\) and matrices of size \(\mathbf {A}_i\in \mathbb {R}^{500\times 500}\). This strategy is prohibitively expensive, with rapidly increasing memory and time consumption for each block-LU. As the results in Table 5 show that small patches are sufficient in the fluid domain, we will combine large patches with \(n_p=500\) in the solid with smaller patches of size \(n_p = 3^3\cdot 4=108\) in the fluid domain for all 3d computations to follow.
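A minimal sketch of the Jacobi-type Vanka idea on a 1d model matrix (patch layout, damping and problem are illustrative choices; the paper's smoother acts on the 2d/3d momentum system):

```python
import numpy as np

def vanka_smooth(A, b, x, patches, sweeps=3, omega=0.5):
    """Additive (Jacobi-type) Vanka: invert each local block once, then add
    damped local corrections; omega <= 1/(max. patch overlap) keeps the
    iteration convergent for symmetric positive definite A."""
    blocks = [np.linalg.inv(A[np.ix_(p, p)]) for p in patches]  # "block-LU"
    for _ in range(sweeps):
        r = b - A @ x                    # one global residual per sweep (additive)
        dx = np.zeros_like(x)
        for p, Binv in zip(patches, blocks):
            dx[p] += Binv @ r[p]         # gather local residual, scatter correction
        x = x + omega * dx
    return x

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1d Laplacian stencil
rng = np.random.default_rng(0)
b = rng.standard_normal(n)
patches = [np.arange(i, min(i + 4, n)) for i in range(0, n, 2)]  # overlap of 2
x = vanka_smooth(A, b, np.zeros(n), patches)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))   # residual clearly reduced
```

Larger patches correspond to larger local blocks \(\mathbf {A}_i\) and stronger local coupling, at the price of more expensive block factorizations, which mirrors the trade-off studied in Table 5.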

#### 5.5.2 Geometric Multigrid Performance in 2d and 3d (Sequential Computations)

All computations have been carried out on an Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz. Single Core performance only is used in this section. In Fig. 4 we show the results for both 2d and 3d benchmark problems on sequences of meshes.

In the top row we present the memory consumption (in 3d, the finest mesh level exceeded the available memory). In particular the 3d results show the expected superiority of the iterative solvers compared to the direct linear solver UMFPACK [12], which scales non-optimally. The Vanka smoother requires slightly more memory, which comes from the overlap of degrees of freedom between the different blocks. The middle plot of Fig. 4 shows the resulting computational time. As in our previous study [32], the multigrid method is not able to beat the direct solver in 2d. The situation changes dramatically in 3d, where the direct solver shows a strongly non-optimal scaling, while the multigrid solver scales nearly linearly for both ILU and Vanka smoothing. Concerning the ILU smoother, this is an improvement over our previous study [32], where the multigrid solver acted on the purely monolithic system with an ILU built from local blocks coupling pressure, velocity and deformation; there, no convergence could be achieved on fine meshes. We note that the 3d benchmark problem considered in this paper is by far more challenging than the problem investigated in [32, 33], as it comprises very large deformations and hence strong nonlinearities in the solid and also in the ALE map. The bottom row shows the time for one assembly of the Jacobian, including the computational times for preparing the direct solver, the ILU smoother and the Vanka smoother. Here, the main discrepancies between the direct solver and the multigrid methods arise. Since we do not recompute the Jacobian in every Newton step (not even in every time step), it is not inconsistent that the assembly time is larger than the complete time per Newton step. In 2d the results appear slightly sub-optimal. This is due to the necessity of assembling the matrices along the complete multigrid hierarchy, yielding a scaling of order \(\mathcal{O}(n\log n)\).
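The near-linear scaling of the geometric multigrid solver rests on level-independent convergence rates. The following textbook-style V-cycle for a 1d Poisson problem (a generic sketch, not the paper's FSI multigrid) illustrates this: the iteration counts stay essentially constant while the problem size grows:

```python
import numpy as np

def vcycle(b, x, level, nu=2, omega=2.0 / 3.0):
    """One V-cycle for the 1d Poisson problem with homogeneous Dirichlet
    conditions: damped Jacobi smoothing, full-weighting restriction,
    linear prolongation, direct solve on the coarsest level."""
    n = b.size
    h2 = 1.0 / (n + 1) ** 2

    def residual(y):
        Ay = (2.0 * y - np.r_[y[1:], 0.0] - np.r_[0.0, y[:-1]]) / h2
        return b - Ay

    if level == 0 or n < 7:                       # coarsest level: direct solve
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h2
        return np.linalg.solve(A, b)

    for _ in range(nu):                           # pre-smoothing (damped Jacobi)
        x = x + omega * (h2 / 2.0) * residual(x)
    r = residual(x)
    rc = 0.25 * (r[:-2:2] + 2.0 * r[1:-1:2] + r[2::2])    # full weighting
    ec = vcycle(rc, np.zeros_like(rc), level - 1, nu, omega)
    e = np.zeros(n)                               # linear prolongation
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5 * ec[0], 0.5 * ec[-1]
    x = x + e
    for _ in range(nu):                           # post-smoothing
        x = x + omega * (h2 / 2.0) * residual(x)
    return x

rng = np.random.default_rng(1)
for L in (5, 7, 9):
    n = 2 ** L - 1
    b = rng.standard_normal(n)
    h2 = 1.0 / (n + 1) ** 2
    A = lambda y: (2.0 * y - np.r_[y[1:], 0.0] - np.r_[0.0, y[:-1]]) / h2
    x, it = np.zeros(n), 0
    while it < 100 and np.linalg.norm(b - A(x)) > 1e-8 * np.linalg.norm(b):
        x = vcycle(b, x, L)
        it += 1
    print(n, it)      # iteration counts remain (nearly) level-independent
```

Since every cycle costs \(\mathcal{O}(n)\) work and the number of cycles is bounded independently of the mesh level, the total effort scales linearly, which is the behavior observed for the FSI multigrid in Fig. 4.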

The average number of Newton steps and the average number of Jacobian assemblies over all time steps within the interval are gathered in Table 6. In addition, we present the mean number of GMRES steps to solve the linearized momentum equation, problem (10), once. The values show that the average number of matrix assemblies per time step can be below 1. This is due to the approximation of the Jacobian, which is reassembled only if the convergence rates deteriorate. Both multigrid approaches, Vanka and ILU, are very robust with regard to mesh refinement; the linear iteration counts rise only slightly.

Figure 5 shows the average number of GMRES steps required for both Vanka and for ILU smoothing in every time step. The values fluctuate due to the oscillatory motion of the beam.

According to Fig. 4, each Newton step on mesh level 6 of the 2d configuration takes 43.88s, and according to Table 6 an average of 5.1 Newton steps is required per time step. The mean computational time per time step is hence \(43.88 {\hbox {s}} \cdot 5.1\approx 223.8\hbox { s}\), of which an average of \(7.01{\hbox {s}}\cdot 1.23\approx 8.6{\hbox {s}}\) is used to construct the Jacobian. Most of the computational time is spent in the linear solver, which requires about \((223.8{\hbox {s}}-8.6{\hbox {s}} )/{5.1}\approx 42{\hbox {s}}\) per Newton step.

#### 5.5.3 Dependency on the Time Step Size

The approach for splitting and reducing the Jacobian is based on two steps: skipping the ALE derivatives in the fluid problem and inserting the discretized velocity-deformation relation (4) into the solid's stress tensor, see (5). Both steps rely on time-stepping: small time steps limit the domain motion from step to step, such that the influence of the ALE derivative and of the update in (4) is small. In Table 7 we investigate the robustness of the solution approach with respect to the time step size. Besides average iteration counts, we also indicate the total number of GMRES steps that would be required to advance one time unit (\(1\hbox {s}\)), i.e. the number of steps necessary for 1/*k* time steps. This value stands for the overall effort of the method; however, we do not relate this effort to the resulting error, and it does not include the times for assembling the Jacobian, which can be substantial. The values in Table 7 show an increase of Newton steps with larger time step sizes, in particular for \(k=0.01\hbox {s}\). This is expected due to the decreased impact of the mass matrix and since we drop the ALE derivatives. The number of GMRES iterations per Newton step is very stable up to \(k=0.004{\hbox {s}}\) and starts to increase at \(k=0.01{\hbox {s}}\). Nevertheless, in terms of the overall number of GMRES steps per time unit, larger time steps remain more efficient.

To put these results into the context of the 2d benchmark problem, we refer to [35], where a meta-analysis of the temporal discretization of various solution approaches to this benchmark is given. It is found that time steps \(k\ll 0.01{\hbox {s}}\) are used by nearly all contributors and that \(k\approx 0.01{\hbox {s}}\) must be considered too large in terms of the approximation error. A possible explanation for the need of such small time steps lies in high-frequency oscillations of the solid that have to be resolved. With this in mind, the solution approach presented here can be regarded as highly efficient and appropriate for nonstationary fluid-structure interactions.

For stationary problems, the coupling character of fluid-structure interactions changes substantially. Instead of a parabolic/hyperbolic-type setup, we then deal with a coupling of elliptic/elliptic character. The velocity-deformation equation \(\mathbf {v}_s=\partial _t\mathbf {u}_s\) is not present, as the solid's velocity, being zero, is not even part of the system. For a direct-to-steady-state solution, approaches with stronger coupling must be considered. We refer to [32] and in particular to [2], where a variant of the former approach is realized for stationary problems.

#### 5.5.4 Parallelization

The Vanka smoother (based on a Jacobi iteration) has the advantage that it can be easily parallelized. We introduce a cell-wise coloring of the Vanka patches. Colors are attributed by a simple ad hoc algorithm: we run over all patches; if a patch is not yet labeled, we assign it the current color and block all its neighbors that share a common degree of freedom for this color; then we continue with the next color. This algorithm is neither optimal in terms of the number of colors nor in terms of a balanced number of patches per color, but it is adequate for our purpose with a moderate number of cores. As different patch sizes for fluid and solid domain are used in 3d, fluid and solid patches are always assigned different colors, such that a good load balancing is possible. The finest mesh level in 3d is partitioned into 22 colors (13 within the fluid, 9 in the solid domain), see Fig. 6, whereby the number of patches per color ranges between 2 and 6 716 within the fluid domain and is constant at 80 patches per color within the solid. About 99% of the fluid patches belong to colors containing at least 500 patches, such that very little overhead must be expected due to suboptimal partitioning of the remaining colors (as long as a moderate number of threads is considered). Our algorithm yields solid colors with 80 patches each. While 80 is divisible by 16, it is not divisible by 32. Hence, the potential efficiency of functions depending on this coloring is reduced to about 0.8 (for 32 threads).
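The greedy idea can be sketched as follows (a first-fit variant of the coloring described above; patch sets and sizes are illustrative):

```python
# Greedy coloring of Vanka patches: patches are given as sets of global dof
# indices; two patches sharing a dof must not receive the same color, so that
# the Vanka updates within one color can run in parallel without races.

def color_patches(patches):
    colors = []        # colors[c] = list of patch indices with color c
    color_dofs = []    # union of dofs already placed in color c
    for i, p in enumerate(patches):
        for c in range(len(colors)):
            if not (p & color_dofs[c]):    # no shared dof: color c is free
                colors[c].append(i)
                color_dofs[c] |= p
                break
        else:                              # no existing color fits: open a new one
            colors.append([i])
            color_dofs.append(set(p))
    return colors

# overlapping 1d patches: each patch shares two dofs with its neighbor
patches = [set(range(i, i + 4)) for i in range(0, 20, 2)]
print(color_patches(patches))   # [[0, 2, 4, 6, 8], [1, 3, 5, 7, 9]]
```

For this overlapping chain of patches, two colors suffice and the patches within each color are pairwise disjoint, so their local solves can be distributed over the threads.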

Furthermore, we parallelized the matrix-vector product. Although in principle trivial to parallelize, we suffer from the usual memory bandwidth restrictions that limit the possible speedups of matrix-vector products. All parallelization is done in *OpenMP* [27]. We note that parallelization is not the focus of this work; only first steps have been undertaken and the implementation allows for further optimization.

Similar to Sect. 5.5.2, we recompute the 2d and 3d problems on the time interval \(I=[9,9.5]\) with step size \(k=0.002\)s using the finest refinement levels 6 (in 2d) and 3 (in 3d). The mean computational time per time step on an Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz is given in Fig. 7 in a strong scalability test. In 3d we observe that the parallelization of all ingredients scales rather well: doubling the number of cores reduces the computational time by a factor of 0.57, and with 32 threads we achieve a speedup of about 10 compared to single-core performance. The drop in efficiency from 16 to 32 threads (in 3d) is clearly visible in the assembly of the residual and the application of the Vanka smoother, two functions that strongly depend on the coloring of the patches.

In Table 8 we show how the distribution of the computational time over the different ingredients develops for an increasing number of threads. These results belong to the 3d benchmark problem on the finest mesh level 3. The numbers show that more than 80% of the time is spent in linear algebra routines like sparse matrix-vector products and the application of the Vanka smoother. These operations are mainly limited by the memory bandwidth. The very low contribution of only 5% for the matrix assembly could lead to the conclusion that a matrix-free implementation might be the proper choice. However, our implementation requires less than one matrix assembly per time step on average, whereas the multigrid smoother is applied many hundreds of times (about 5 Newton steps, roughly 10 GMRES steps each, several Vanka steps). A matrix-free implementation on such a low number of threads would hence strongly increase the overall time.

#### 5.5.5 Comparison to Results in the Literature

A direct comparison of the computational times to results published in the literature is difficult, since different numerical test-cases are considered. The complexity depends on the problem size (number of degrees of freedom) but also on the density of the system matrices (depending on the finite element space). Another important factor is the impact of the nonlinearity, which is influenced by the deformation of the solid and also by the time step size. The 3d test case that we discussed in [32] features less deformation and is simpler than the 3d benchmark problem introduced here. Further, computational times are often not presented in full detail. Besides the algebraic parts of the solver, assembly times have an important impact; in particular, the strategy for economical Jacobian reassembly has a significant influence. Finally, different hardware is used throughout the contributions.

To give a comparison of the various approaches we compute the ratio of “degrees of freedom per second for each time step” and indicate an accurate description of the setup used. Table 9 lists the results for 2d benchmark problems (mostly the fsi-3 problem) and different 3d problems. We only list contributions that can be regarded as fully monolithic and that handle nonlinear fluid-structure interactions including geometric nonlinearities by domain motion.

Due to the reasons mentioned above, these results have to be interpreted with care. They show however that the 2d benchmark problem is handled well by all approaches with little room for improvement. In 3d the direct solver clearly fails. Furthermore, multigrid approaches are superior on fine meshes with a little advantage for the reduced Newton-multigrid approach presented here.

## 6 Summary

We have introduced a Newton multigrid framework for monolithic fluid-structure interactions in ALE coordinates. The solver is based on two reduction techniques in the Jacobian: first, a condensation of the solid deformation by expressing the deformation gradient through the velocity only and, second, skipping the ALE derivatives within the Navier–Stokes equations. The second step leads to an approximated Newton method, but we could show (also in preliminary works) that the time-to-solution even benefits from this approximation, as the computational time for assembling the ALE derivatives is very high. The reduction has two positive effects: the large system of 7 unknowns (in 3d) decomposes into one fluid-solid problem in pressure and velocity with 4 unknowns and two partitioned systems with 3 unknowns each for solving the solid and fluid deformations. The second effect is the better conditioning of the coupled system, which allows for the use of very simple multigrid smoothers that are easy to parallelize. Also, while ILU smoothing applied to the monolithic system was not convergent in our previous contribution [32], it performed well for smoothing the global momentum equation. Combined with first steps of parallelization, and in comparison to our past approaches based on a monolithic solution of the complete pressure-velocity-deformation system and partitioned smoothers, we could significantly reduce the computational time.

As a basis for future benchmarking of 3d fluid-structure interactions, we presented an extension of the 2d benchmark problems by Hron and Turek [24] that is by far more challenging (due to larger deformations and a strong dynamic behavior) than the first test case introduced in our past work [32], which has also been considered in very similar studies in [2, 25]. It will still require further effort to establish reference values for this new 3d benchmark case.

Our work includes some first simple steps of parallelization, which have to be extended in future work. In particular, to overcome the memory bandwidth limitations that are common in such memory-intensive computations, distributed memory paradigms have to be incorporated [26]. Further, some benefit can be expected from GPU acceleration of the matrix-vector product and the Vanka smoother.

## References

1. Amestoy, P.R., Guermouche, A., L'Excellent, J.Y., Pralet, S.: Hybrid scheduling for the parallel solution of linear systems. Parallel Comput. **32**(2), 136–156 (2006)
2. Aulisa, E., Bna, S., Bornia, G.: A monolithic ALE Newton–Krylov solver with multigrid-Richardson–Schwarz preconditioning for incompressible fluid-structure interaction. Comput. Fluids **174**, 213–228 (2018)
3. Becker, R., Braack, M.: Multigrid techniques for finite elements on locally refined meshes. Numer. Linear Algebra Appl. **7**, 363–379 (2000). Special Issue
4. Becker, R., Braack, M.: A finite element pressure gradient stabilization for the Stokes equations based on local projections. Calcolo **38**(4), 173–199 (2001)
5. Becker, R., Braack, M., Meidner, D., Richter, T., Vexler, B.: The finite element toolkit Gascoigne. http://www.gascoigne.uni-hd.de
6. Braack, M., Richter, T.: Solutions of 3D Navier–Stokes benchmark problems with adaptive finite elements. Comput. Fluids **35**(4), 372–392 (2006)
7. Brummelen, E., Zee, K., Borst, R.: Space/time multigrid for a fluid-structure-interaction problem. Appl. Numer. Math. **58**(12), 1951–1971 (2008)
8. Bungartz, H.J., Schäfer, M. (eds.): Fluid-Structure Interaction. Modelling, Simulation, Optimisation. Lecture Notes in Computational Science and Engineering, vol. 53. Springer (2006)
9. Bungartz, H.J., Schäfer, M. (eds.): Fluid-Structure Interaction II. Modelling, Simulation, Optimisation. Lecture Notes in Computational Science and Engineering. Springer (2010)
10. Causin, P., Gerbeau, J.F., Nobile, F.: Added-mass effect in the design of partitioned algorithms for fluid-structure problems. Comput. Methods Appl. Mech. Eng. **194**, 4506–4527 (2005)
11. Crosetto, P., Deparis, S., Fourestey, G., Quarteroni, A.: Parallel algorithms for fluid-structure interaction problems in haemodynamics. SIAM J. Sci. Comput. **33**(4), 1598–1622 (2011). https://doi.org/10.1137/090772836
12. Davis, T.: UMFPACK, an unsymmetric-pattern multifrontal method. ACM Trans. Math. Softw. **30**(2), 196–199 (2004)
13. Deparis, S., Forti, D., Grandperrin, G., Quarteroni, A.: FaCSI: a block parallel preconditioner for fluid-structure interaction in hemodynamics. J. Comput. Phys. **327**, 700–718 (2016). https://doi.org/10.1016/j.jcp.2016.10.005
14. Failer, L.: Optimal control for time dependent nonlinear fluid-structure interaction. Ph.D. thesis, Technische Universität München (2017)
15. Failer, L., Wick, T.: Adaptive time-step control for nonlinear fluid-structure interaction. J. Comput. Phys. **366**, 448–477 (2018)
16. Fernández, M., Gerbeau, J.F.: Algorithms for fluid-structure interaction problems. In: Formaggia, L., Quarteroni, A., Veneziani, A. (eds.) Cardiovascular Mathematics: Modeling and Simulation of the Circulatory System, MS & A, vol. 1, pp. 307–346. Springer, Berlin (2009)
17. Fernández, M., Moubachir, M.: A Newton method using exact Jacobians for solving fluid-structure coupling. Comput. Struct. **83**, 127–142 (2005)
18. Frei, S.: Eulerian finite element methods for interface problems and fluid-structure interactions. Ph.D. thesis, Universität Heidelberg (2016). https://doi.org/10.11588/heidok.00021590
19. Gee, M., Küttler, U., Wall, W.: Truly monolithic algebraic multigrid for fluid-structure interaction. Int. J. Numer. Methods Eng. **85**, 987–1016 (2010)
20. Guennebaud, G., Jacob, B., et al.: Eigen v3. http://eigen.tuxfamily.org (2010)
21. Heil, M., Hazel, A., Boyle, J.: Solvers for large-displacement fluid-structure interaction problems: segregated vs monolithic approaches. Comput. Mech. **43**, 91–101 (2008)
22. Heywood, J., Rannacher, R., Turek, S.: Artificial boundaries and flux and pressure conditions for the incompressible Navier–Stokes equations. Int. J. Numer. Methods Fluids **22**, 325–352 (1996)
23. Hron, J., Turek, S.: A monolithic FEM/multigrid solver for an ALE formulation of fluid-structure interaction with applications in biomechanics. In: Bungartz, H.J., Schäfer, M. (eds.) Fluid-Structure Interaction: Modeling, Simulation, Optimization. Lecture Notes in Computational Science and Engineering, pp. 146–170. Springer, Berlin (2006)
24. Hron, J., Turek, S.: Proposal for numerical benchmarking of fluid-structure interaction between an elastic object and laminar incompressible flow. In: Bungartz, H.J., Schäfer, M. (eds.) Fluid-Structure Interaction: Modeling, Simulation, Optimization. Lecture Notes in Computational Science and Engineering, pp. 371–385. Springer, Berlin (2006)
25. Jodlbauer, D., Langer, U., Wick, T.: Parallel block-preconditioned monolithic solvers for fluid-structure interaction problems. Int. J. Numer. Methods Eng. **117**(6), 623–643 (2019)
26. Kimmritz, M., Richter, T.: Parallel multigrid method for finite element simulations of complex flow problems on locally refined meshes. Numer. Linear Algebra Appl. **18**(4), 615–636 (2010)
27. Klemm, M., Supinski, B. (eds.): OpenMP Application Programming Interface Specification Version 5.0. Independently published (2019)
28. Langer, U., Yang, H.: Recent development of robust monolithic fluid-structure interaction solvers. In: Fluid-Structure Interaction. Modeling, Adaptive Discretization and Solvers. Radon Series on Computational and Applied Mathematics, vol. 20, pp. 169–192. de Gruyter (2017)
29. Molnar, M.: Stabilisierte Finite Elemente für Strömungsprobleme auf bewegten Gebieten. Master's thesis, Universität Heidelberg (2015)
30. Pironneau, O.: An energy preserving monolithic Eulerian fluid-structure numerical scheme. Chin. Ann. Math. **39** (2018). https://doi.org/10.1007/s11401-018-1061-9
31. Pironneau, O.: An energy stable monolithic Eulerian fluid-structure numerical scheme with compressible materials (2019). arXiv:1607.08083
32. Richter, T.: A monolithic geometric multigrid solver for fluid-structure interactions in ALE formulation. Int. J. Numer. Methods Eng. **104**(5), 372–390 (2015)
33. Richter, T.: Fluid-structure Interactions. Models, Analysis and Finite Elements. Lecture Notes in Computational Science and Engineering, vol. 118. Springer, Berlin (2017)
34. Richter, T., Wick, T.: Finite elements for fluid-structure interaction in ALE and fully Eulerian coordinates. Comput. Methods Appl. Mech. Eng. **199**(41–44), 2633–2642 (2010)
35. Richter, T., Wick, T.: On time discretizations of fluid-structure interactions. In: Carraro, T., Geiger, M., Körkel, S., Rannacher, R. (eds.) Multiple Shooting and Time Domain Decomposition Methods. Contributions in Mathematical and Computational Science, vol. 9, pp. 377–400. Springer, Berlin (2015)
36. Turek, S., Hron, J., Madlik, M., Razzaq, M., Wobker, H., Acker, J.: Numerical simulation and benchmarking of a monolithic multigrid solver for fluid-structure interaction problems with application to hemodynamics. Technical report, Fakultät für Mathematik, TU Dortmund (2010). Ergebnisberichte des Instituts für Angewandte Mathematik, Nummer 403
37. Turek, S., Rivkind, L., Hron, J., Glowinski, R.: Numerical study of a modified time-stepping theta-scheme for incompressible flow simulations. J. Sci. Comput. **28**(2–3), 533–547 (2006)
38. Wall, W.: Fluid-structure interaction with stabilized finite elements. Ph.D. thesis, University of Stuttgart (1999). urn:nbn:de:bsz:93-opus-6234
39. Wick, T.: Personal communication. University of Hannover (September 2019)
40. Yigit, S., Schäfer, M., Heck, M.: Grid movement techniques and their influence on laminar fluid-structure interaction problems. J. Fluids Struct. **24**(6), 819–832 (2008)
41. Zee, K., Brummelen, E., Borst, R.: Goal-oriented error estimation and adaptivity for free-boundary problems: the domain-map linearization approach. SIAM J. Sci. Comput. **32**(2), 1074–1092 (2010)
42. Zee, K., Brummelen, E., Borst, R.: Goal-oriented error estimation and adaptivity for free-boundary problems: the shape-linearization approach. SIAM J. Sci. Comput. **32**(2), 1093–1118 (2010)

## Acknowledgements

Open Access funding provided by Projekt DEAL. Both authors acknowledge the financial support by the Federal Ministry of Education and Research of Germany, Grant Number 05M16NMA (TR) and 05M16WOA (LF). TR further acknowledges the support of the GRK 2297 MathCoRe, funded by the Deutsche Forschungsgemeinschaft, Grant Number 314838170 and LF acknowledges the support of the GRK 1754, funded by the Deutsche Forschungsgemeinschaft, Grant Number 188264188.



## Rights and permissions

**Open Access** This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

## About this article

### Cite this article

Failer, L., Richter, T. A Parallel Newton Multigrid Framework for Monolithic Fluid-Structure Interactions.
*J Sci Comput* **82**, 28 (2020). https://doi.org/10.1007/s10915-019-01113-y
