1 Introduction

Fluid–structure interactions appear in various applications, ranging from classical engineering problems such as aeroelasticity or naval design to medical applications, e.g. the flow of blood in the heart or in blood vessels. Recently, more and more of these applications have been studied in combination with optimal control, shape optimization, and parameter estimation. Especially in hemodynamical applications, patient-specific properties have to be incorporated into the models in order to gain a deeper understanding of the development of vascular diseases. For example, in Bertoglio et al. (2012, 2013, 2014), D’Elia et al. (2012), Lassila et al. (2013), Pant et al. (2014), Moireau et al. (2013), patient-specific boundary conditions and vessel material parameters are determined to simulate arterial blood flow. Similar approaches using gradient information have been proposed in D’Elia et al. (2012), Bertagna et al. (2014), Perego et al. (2011) to estimate the Young’s modulus of an artery.

As computed tomography (CT) and magnetic resonance imaging (MRI) evolve rapidly, very accurate measurements of the movement of the vessel wall are possible nowadays, and even averaged flow profiles in blood vessels can be provided, see Asner et al. (2016), Bertoglio et al. (2014), Lamata et al. (2014). To incorporate these data into vascular models, it is necessary to improve the available parameter estimation and optimal control algorithms for fluid–structure interaction applications, in particular since only few approaches in the literature take the sensitivity information of the full time-dependent nonlinear system into account. For example, in Degroote et al. (2013), Martin et al. (2005), adjoint equations are derived for one-dimensional fluid–structure interaction configurations, and in Richter and Wick (2013) for a stationary fluid–structure interaction problem. In contrast, the authors of Pant et al. (2014), Bertoglio et al. (2012, 2014), Moireau et al. (2013) use a sequential reduced Kalman filter. Perego et al. (2011) compute sensitivity information to estimate the wall stiffness. To reduce the computational time, they solve an optimal control problem at every time point. As the mesh motion is discretized via an explicit time-stepping scheme, no sensitivity information of the mesh motion equation has to be computed. Similar to the articles Bertoglio et al. (2012, 2014), Moireau et al. (2013), the estimated parameters are updated in every time step and the forward simulation only runs once.

In this paper we compute gradient information for the full time-dependent nonlinear system in 3D applications. Thereby, the optimization algorithm takes the intrinsic property of fluid–structure interaction, transport over time, into account. In addition, the approach presented here makes it possible to consider tracking-type functionals with observations at a single time point or on a specific time interval. Furthermore, a time-dependent parameter can be reconstructed. This would not be possible if we used a Kalman filter or solved an optimization problem in every time step, as in the literature cited above. The dual problem to compute sensitivities can be derived as in Failer (2017) or as in Failer and Wick (2018), where sensitivity information was used for a dual-weighted residual error estimator.

For various applications and a general overview of modeling and discretization techniques for fluid–structure interactions we refer to Bazilevs et al. (2013), Richter (2017). Mathematically, two challenges come together in fluid–structure interactions: First, fluid–structure interactions are free boundary value problems. The governing domains for the fluid, for which we consider the incompressible Navier–Stokes equations, and the solid, for which we consider hyperelastic materials like the St. Venant Kirchhoff model, move, and the motion is determined by the coupled dynamics, i.e. it is not known a priori. This geometric problem is treated by mapping onto a fixed domain, see for example Donea (1982), such that the movement of the boundary is incorporated into the equation and we can derive the dual problem on the fixed reference domain. Second, the two coupled problems are of different type: the parabolic Navier–Stokes equation and the hyperbolic solid problem. On the common and moving interface, both systems are coupled by different conditions. This coupling gives rise to stability problems that can call for small time steps or many subiterations. Most prominently, this problem shows itself in the so-called added-mass effect presented in Causin et al. (2005). The added-mass effect is of particular relevance in hemodynamical applications, which are the focus of this work, and according to Hron et al. (2010) it calls for monolithic formulations and strongly coupled discretizations and solution techniques. This property is transmitted to the dual problem, such that we have to derive strongly coupled solution techniques for the dual problem as well. To compute dual information, we extend the Newton solver proposed for time-dependent fluid–structure interactions in Failer and Richter (2020) to the dual problem. Thereby, iterative solvers, preconditioned with geometric multigrid, can solve the resulting linear problems in every state and dual time step robustly and efficiently.

In Sect. 2 we present the optimal control problem, which is discretized in space and time in Sect. 3. For the discretized system we derive optimality conditions. In Sect. 4 we discuss modifications to the Newton scheme first presented in Failer and Richter (2020) and extend the approach to the dual problem. Finally, in Sect. 5 we test the proposed algorithm numerically to analyze the behavior of the Newton scheme. In addition, we take a closer look at the convergence behavior of the iterative solvers.

2 Governing equations

Here, we present the optimal control problem of a tracking-type functional subject to fluid–structure interaction. We use a monolithic formulation for the fluid–structure interaction model coupling the incompressible Navier–Stokes equations and a hyperelastic solid, based on the St. Venant Kirchhoff material. For details we refer to Richter (2017). The optimization approach presented here can be directly extended to specific material laws used in hemodynamics. As control variable we choose, as an example, the mean pressure over time at the outflow boundary. In the following we restrict ourselves to the control space \(Q=L^2(I)\), but the optimization algorithm presented here can just as well be applied to determine material parameters (e.g. \(Q=\mathbb {R}^n\)) or spatially distributed parameters (e.g. \(Q=L^2(\Omega )\)) entering the fluid or solid problem.

On the d-dimensional domain, partitioned in reference configuration as \(\Omega = {\mathcal {F}}\cup {\mathcal {I}}\cup {\mathcal {S}}\), where \({\mathcal {F}}\) is the fluid domain, \({\mathcal {S}}\) the solid domain and \({\mathcal {I}}\) the fluid–structure interface, we denote by \({\mathbf {v}}\) the velocity field, split into fluid velocity \({\mathbf {v}}_f:={\mathbf {v}}|_{\mathcal {F}}\) and solid velocity \({\mathbf {v}}_s:={\mathbf {v}}|_{\mathcal {S}}\), and by \({\mathbf {u}}\) the deformation field, again with \({\mathbf {u}}_s:={\mathbf {u}}|_{\mathcal {S}}\) and \({\mathbf {u}}_f:={\mathbf {u}}|_{\mathcal {F}}\). The boundary of the fluid domain \(\Gamma _f:=\partial {\mathcal {F}}{\setminus }{\mathcal {I}}\) is split into the inflow boundary \(\Gamma _f^{in}\) and the wall boundary \(\Gamma _f^{wall}\), where we usually assume Dirichlet conditions, \(\Gamma _f^D:=\Gamma _f^{in}\cup \Gamma _f^{wall}\), a possible outflow boundary \(\Gamma _f^{out}\), where we enforce the do-nothing outflow condition presented in Heywood et al. (1992), and the control boundary \(\Gamma _q\). The solid boundary \(\Gamma _s=\partial {\mathcal {S}}{\setminus }{\mathcal {I}}\) is split into a Dirichlet part \(\Gamma _s^D\) and a Neumann part \(\Gamma _s^N\).

We formulate the coupled fluid–structure interaction problem in a strictly monolithic scheme by mapping the moving fluid domain onto the reference state via the ALE map \(T_f(t):{\mathcal {F}}\rightarrow {\mathcal {F}}(t)\), constructed by a fluid domain deformation \(T_f(t)={\text {id}} + {\mathbf {u}}_f(t)\). In the solid domain, this map \(T_s(t)={\text {id}}+{\mathbf {u}}_s(t)\) denotes the Lagrange-Euler mapping and as the deformation field \({\mathbf {u}}\) will be defined globally on \(\Omega\) we simply use the notation \(T(t)={\text {id}}+{\mathbf {u}}(t)\) with the deformation gradient \({\mathbf {F}}:=\nabla T\) and its determinant \(J:={\text {det}}({\mathbf {F}})\).

For given desired states \(\tilde{{\mathbf {v}}}(t) \in L^2({\mathcal {F}})\) or \(\tilde{{\mathbf {u}}}(t) \in L^2({\mathcal {S}})\), we find the global (in fluid and solid domain) velocity and deformation fields

$$\begin{aligned} {\mathbf {v}}(t)&\in {\mathbf {v}}^D(t)+H^1_0(\Omega ;\Gamma _f^D\cup \Gamma _s^D)^d \text { and } {\mathbf {u}}(t)\in {\mathbf {u}}^D(t)+H^1_0(\Omega ;(\partial {\mathcal {F}}{\setminus }{\mathcal {I}})\cup \Gamma _s^D)^d, \end{aligned}$$

the pressure \(p\in L^2({\mathcal {F}})\) and the control parameter \(q\in Q\) satisfying the initial condition \({\mathbf {v}}(0)={\mathbf {v}}_0\) and \({\mathbf {u}}(0)={\mathbf {u}}_0\), as solution to

$$\begin{aligned} \begin{aligned} \min _{q \in Q} J(q,{\mathbf {v}},{\mathbf {u}})= \frac{1}{2}\int _{I} \Vert {\mathbf {v}}-\tilde{{\mathbf {v}}} \Vert ^2_{\mathcal {F}} \text { d}t+ \frac{1}{2}\int _{I} \Vert {\mathbf {u}}-\tilde{{\mathbf {u}}} \Vert ^2_{\mathcal {S}}\text { d}t +\frac{\alpha }{2}\Vert q \Vert ^2_Q \end{aligned} \end{aligned}$$

and subject to

$$\begin{aligned} \begin{aligned}&\big ( J(\partial _t {\mathbf {v}}+ ({\mathbf {F}}^{-1}( {\mathbf {v}}-\partial _t {\mathbf {u}})\cdot \nabla ) {\mathbf {v}},\phi \big )_{\mathcal {F}} + ( J\varvec{\sigma }_f {\mathbf {F}}^{-T},\nabla \phi )_{\mathcal {F}} \\&\quad +(\rho _s^0\partial _t {\mathbf {v}},\phi )_{\mathcal {S}} +( {\mathbf {F}}\varvec{\Sigma }_s,\nabla \phi )_{\mathcal {S}} = (q,\phi )_{\Gamma _q} \\&(J{\mathbf {F}}^{-1}:\nabla {\mathbf {v}}^T,\xi )_{\mathcal {F}} = 0\\&(\partial _t {\mathbf {u}}- {\mathbf {v}},\psi _s)_{\mathcal {S}}=0\\&(\nabla {\mathbf {u}},\nabla \psi _f)_{\mathcal {F}}=0, \end{aligned} \end{aligned}$$

where the test functions are given in

$$\begin{aligned} \phi \in H^1_0(\Omega ;\Gamma _f^D\cup \Gamma _s^D)^d,\quad \xi \in L^2({\mathcal {F}}),\quad \psi _f\in H^1_0({\mathcal {F}})^d,\quad \psi _s\in L^2({\mathcal {S}})^d. \end{aligned}$$

By \(\rho _s^0\) we denote the solid’s density, by \({\mathbf {u}}^D(t)\in H^1(\Omega )^d\) and \({\mathbf {v}}^D(t)\in H^1(\Omega )^d\) extensions of the Dirichlet data into the domain. The Cauchy stress tensor of the Navier–Stokes equations in ALE coordinates is given by

$$\begin{aligned} \varvec{\sigma }_f({\mathbf {v}},p) = -p_f I + \rho _f\nu _f (\nabla {\mathbf {v}}{\mathbf {F}}^{-1} + {\mathbf {F}}^{-T}\nabla {\mathbf {v}}^T) \end{aligned}$$

with the kinematic viscosity \(\nu _f\) and the density \(\rho _f\). In the solid we consider the St. Venant Kirchhoff material with the second Piola Kirchhoff tensor \(\varvec{\Sigma }_s\) based on the Green Lagrange strain tensor \({\mathbf {E}}_s\)

$$\begin{aligned} \varvec{\Sigma }_s({\mathbf {u}}) = 2\mu _s {\mathbf {E}}_s + \lambda _s{\text {tr}}({\mathbf {E}}_s)I,\quad {\mathbf {E}}_s:=\frac{1}{2}({\mathbf {F}}^T{\mathbf {F}}-I) \end{aligned}$$

and with the shear modulus \(\mu _s\) and the Lamé coefficient \(\lambda _s\). In (2) we construct the ALE extension \({{\mathbf {u}}}_f = {{\mathbf {u}}}|_{\mathcal {F}}\) by a simple harmonic extension. A detailed discussion and further literature on the construction of this extension can be found in Yirgit et al. (2008), Richter (2017). For shorter notation, we denote by \(U:=({\mathbf {v}},{\mathbf {u}},p_f)\in X\) the solution variable, with X the corresponding ansatz space, and by \(\Phi :=(\phi ,\psi _f,\psi _s,\xi )\in Y\) the test functions, with Y the corresponding test space.
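To make the constitutive law concrete, the following NumPy sketch (our own illustration with hypothetical function names, not taken from the paper's implementation) evaluates \(\varvec{\Sigma }_s\) from a given displacement gradient:

```python
import numpy as np

def stvenant_kirchhoff_stress(grad_u, mu_s, lambda_s):
    """Second Piola Kirchhoff tensor Sigma_s = 2 mu_s E + lambda_s tr(E) I,
    with F = I + grad(u) and the Green Lagrange strain E = (F^T F - I)/2."""
    d = grad_u.shape[0]
    F = np.eye(d) + grad_u           # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(d))  # Green Lagrange strain
    return 2.0 * mu_s * E + lambda_s * np.trace(E) * np.eye(d)

# An undeformed state (grad(u) = 0) is stress-free:
Sigma = stvenant_kirchhoff_stress(np.zeros((3, 3)), mu_s=0.5e6, lambda_s=2.0e6)
```

Since \({\mathbf {E}}_s\) is quadratic in \(\nabla {\mathbf {u}}\), the stress depends nonlinearly on the displacement; this is one of the nonlinearities the Newton solver discussed later has to handle.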

For a control \(q\in L^2(I)\) and the tracking-type functional given here constrained by linear fluid–structure interaction, we were able to prove in Failer et al. (2016) existence of a unique solution and \(H^1(I)\) regularity of the optimal control. In addition, an optimality system could be rigorously derived. Due to the missing regularity results for the nonlinear control-to-state mapping considered here, no further theoretical conclusions are possible.

3 Discretization

In the following we give a description of the discretization of the fluid–structure interaction system (2) in space and in time. While many variants and different realizations exist, our choice of methods is based on the following principles:

  • Since the FSI system is a constraint in the optimization process, we base the discretization on Galerkin methods in space and time. This helps us to derive the discrete optimality system. As far as possible (up to quadrature error) we aim at permutability of discretization and optimization.

  • Aiming at three-dimensional problems, we consider methods of reasonable approximation error at feasible cost. In space we will use second order finite elements and in time a second order time stepping scheme. This approach is similar to Hron et al. (2010) or our previous work documented in Richter (2017).

  • Since the key component of the linear solver is a geometric multigrid method with Vanka-type blocking in the smoother, we choose equal-order finite elements for all unknowns (pressure, velocity and deformation), adding stabilization terms for the inf-sup condition. This setup allows for efficient linear algebra and local blocking of the unknowns, which is favorable for strong local couplings and takes care of all nonlinearities; see also Braack and Richter (2006) for a detailed description of the realization in the context of reactive flows.

  • The temporal dynamics of fluid–structure interactions is governed by the parabolic/hyperbolic character of the coupling. In particular, long-term simulations give rise to stability problems. The Crank–Nicolson scheme shows stability problems, such that variants will be considered, see Richter and Wick (2015).

3.1 Temporal discretization

In Richter and Wick (2015) and Richter (2017, Section 4.1) many aspects of the time discretization of monolithic fluid–structure interactions are discussed. It turns out that the standard Crank–Nicolson scheme is not sufficiently stable for long time simulations. Suitable variants are the fractional step theta method or shifted versions of the Crank–Nicolson scheme, which we refer to as theta time stepping methods. Applied to the ODE \(u'=f(t,u(t))\) they take the form

$$\begin{aligned} u_n-u_{n-1} = k_n \theta f(t_n,u_n) + k_n(1-\theta ) f(t_{n-1},u_{n-1}), \end{aligned}$$

if \(0=t_0<t_1<\cdots <t_N=T\) are the discrete time steps with step size \(k_n=t_n-t_{n-1}\). The choice \(\theta =\frac{1}{2}+\mathcal{O}(k)\) gives second order convergence and sufficient stability, see Richter and Wick (2015). Alternative approaches are the fractional step theta scheme, which consists of three substeps with specific choices for \(\theta\) and the step size, or the enrichment of the Crank–Nicolson scheme with occasional Euler steps, see Rannacher (1984).
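For the scalar linear test equation \(u'=\lambda u\) the theta step can be solved for \(u_n\) in closed form, which makes the convergence orders easy to check numerically. The following is a minimal sketch of our own, not the paper's solver:

```python
import math

def theta_step(u_prev, lam, k, theta):
    """One theta step for u' = lam*u:
    u_n - u_{n-1} = k*theta*lam*u_n + k*(1-theta)*lam*u_{n-1}, solved for u_n."""
    return u_prev * (1.0 + k * (1.0 - theta) * lam) / (1.0 - k * theta * lam)

def integrate(lam, T, N, theta):
    """Integrate u' = lam*u on [0, T] with N uniform theta steps, u(0) = 1."""
    u, k = 1.0, T / N
    for _ in range(N):
        u = theta_step(u, lam, k, theta)
    return u

# theta = 1/2 (Crank-Nicolson) converges at second order,
# theta = 1 (implicit Euler) only at first order:
err_cn = abs(integrate(-1.0, 1.0, 100, 0.5) - math.exp(-1.0))
err_ie = abs(integrate(-1.0, 1.0, 100, 1.0) - math.exp(-1.0))
```

With 100 steps the Crank–Nicolson error is several orders of magnitude below the implicit Euler error, consistent with the orders stated above.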

In the context of optimization problems we aim at permutability of optimization and discretization, such that Galerkin approaches are favored. In Meidner and Richter (2014, 2015) we have demonstrated an interpretation of the general theta scheme and the fractional step theta scheme as Galerkin methods with adapted function spaces: the solution is found in the space of continuous and piecewise (on \(I_n=(t_{n-1},t_n)\)) linear functions, the test space is a space of rotated constant functions with jumps at the discrete time steps \(t_n\), namely

$$\begin{aligned} \psi ^\theta \big |_{I_n}(t) = 1+ \frac{(6\theta -3)(2t-t_{n-1}-t_n)}{k_n}. \end{aligned}$$

The theta scheme is recovered exactly for linear problems and approximated by a suitable quadrature rule for nonlinear problems.

In case of fluid–structure interactions the domain motion term \((J{\mathbf {F}}^{-1}\partial _t{\mathbf {u}}\cdot \nabla {\mathbf {v}},\phi )\) takes a special role since it couples temporal and spatial differential operators. In Richter and Wick (2015) various discretizations are analyzed and all found to give results in close agreement.

Here, we consider the Galerkin variant of the theta scheme and we approximate all temporal integrals by the quadrature rule [see Meidner and Richter (2014)]

$$\begin{aligned} \int _{t_{n-1}}^{t_n} f(t)\psi ^\theta (t)\,\text {d}t =k_n\theta f(t_n) + k_n(1-\theta )f(t_{n-1}) + \mathcal{O}\big (k_n^2 \Vert f\Vert _{W^{2,1}([t_{n-1},t_n])}\big ). \end{aligned}$$

The resulting discrete scheme is—up to quadrature error—the standard theta time stepping scheme, which we use in our implementation for reasons of efficiency.
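The exactness of this quadrature in the linear case can be checked directly: for linear f the integrand \(f\,\psi ^\theta\) is quadratic, so Simpson's rule evaluates the left-hand side exactly and the theta weights are reproduced. The following self-contained sanity check is our own, not code from the paper:

```python
def psi_theta(t, t0, t1, theta):
    """Rotated test function psi^theta on I_n = (t_{n-1}, t_n)."""
    k = t1 - t0
    return 1.0 + (6.0 * theta - 3.0) * (2.0 * t - t0 - t1) / k

def simpson(g, t0, t1):
    """Simpson's rule: exact for polynomials up to degree 3,
    hence exact for (linear f) * (linear psi^theta)."""
    return (t1 - t0) / 6.0 * (g(t0) + 4.0 * g(0.5 * (t0 + t1)) + g(t1))

t0, t1, theta = 0.3, 0.55, 0.6   # a shifted Crank-Nicolson-type choice of theta
k = t1 - t0
f = lambda t: 2.0 - 3.0 * t      # an arbitrary linear function

lhs = simpson(lambda t: f(t) * psi_theta(t, t0, t1, theta), t0, t1)
rhs = k * theta * f(t1) + k * (1.0 - theta) * f(t0)
```

Here `lhs` and `rhs` agree up to round-off, confirming that the theta weights are the exact Galerkin quadrature for linear integrands.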

For the following we denote by \(U_n\approx U(t_n)\) the approximation at time \(t_n\). Further we introduce

$$\begin{aligned} \begin{aligned}&A_F(U,\phi ) := (J({\mathbf {F}}^{-1}{\mathbf {v}}\cdot \nabla ){\mathbf {v}},\phi )_{\mathcal {F}}+ \big (\rho _f\nu _f J(\nabla {\mathbf {v}}{\mathbf {F}}^{-1}+{\mathbf {F}}^{-T}\nabla {\mathbf {v}}^T){\mathbf {F}}^{-T},\nabla \phi \big )_{\mathcal {F}}\\&A_S(U,\phi ) := ({\mathbf {F}}\varvec{\Sigma }_s,\nabla \phi )_{\mathcal {S}} ,\quad A_{ALE}(U,\psi _f) := (\nabla {\mathbf {u}},\nabla \psi _f)_{\mathcal {F}}\\&A_{p}(U,\phi ) := (Jp{\mathbf {F}}^{-1},\nabla \phi )_{\mathcal {F}} ,\quad A_{div}(U,\xi ) := ( J{\mathbf {F}}^{-1}:\nabla {\mathbf {v}}^T,\xi )_{\mathcal {F}}\\&F_{TR}(U_n,U_{n-1},\phi ):=(( \bar{J}_n\bar{{\mathbf {F}}}^{-1} ({\mathbf {u}}_n-{\mathbf {u}}_{n-1})\cdot \nabla )\bar{{\mathbf {v}}}_n,\phi )_{\mathcal {F}}, \end{aligned} \end{aligned}$$

and the step \(t_{n-1}\mapsto t_n\) is given as

$$\begin{aligned} \begin{aligned}&\big (\bar{J}_n ({\mathbf {v}}_n-{\mathbf {v}}_{n-1}),\phi \big )_{\mathcal {F}} -F_{TR}(U_n,U_{n-1},\phi ) + k A_p(U_n,\phi ) +k\theta A_F(U_n,\phi ) \\&\qquad +\big (\rho ^0_s ({\mathbf {v}}_n-{\mathbf {v}}_{n-1}),\phi \big )_{\mathcal {S}}+k\theta A_S(U_n,\phi )\\&\quad = - k(1-\theta ) A_F(U_{n-1},\phi )- k(1-\theta ) A_S(U_{n-1},\phi ) +k(q_n,\phi )_{\Gamma _q} \\&k A_{div}(U_n,\xi ) =0\\&k A_{ALE}(U_n,\psi _f) =0\\&\big ({\mathbf {u}}_n,\psi _s\big )_{\mathcal {S}} -k\theta \big ({\mathbf {v}}_n,\psi _s\big )_{\mathcal {S}}= \big ({\mathbf {u}}_{n-1},\psi _s\big ) + k(1-\theta )\big ({\mathbf {v}}_{n-1},\psi _s\big )_{\mathcal {S}},\\ \end{aligned} \end{aligned}$$

with \(\bar{J}_n = {1}/{2}(J_{n-1}+J_n)\) and \(\bar{{\mathbf {F}}}_n = {1}/{2}({\mathbf {F}}_{n-1} + {\mathbf {F}}_n)\). The divergence equation \(A_{div}\) and the pressure coupling \(A_p\) are fully implicit, which can be considered as a post-processing step, see Meidner and Richter (2015).

If the optimality system is first derived and then discretized using the Petrov–Galerkin discretization, one observes that the control variable has to be in the theta-dependent test space of the adjoint variable. As this space is very difficult to interpret, the control variable \(q\in Q\) is approximated by piecewise constant functions \(q_n\) on every time interval in the following. An alternative interpretation is to actually use the theta-dependent test space for the adjoint variable but to approximate these integrals with the midpoint rule, giving

$$\begin{aligned} \int _{t_{n-1}}^{t_n} f(t)\psi ^\theta (t)\,\text {d}t =k_n f(t_{n-\frac{1}{2}}) + \mathcal{O}\left( k_n\big |2\theta -1\big | \Vert f\Vert _{W^{2,1}([t_{n-1},t_n])}\right) . \end{aligned}$$

Given the choice \(\theta ={1}/{2}+\mathcal{O}(k_n)\), this gives the correct second order convergence. Numerical studies comparing both approaches did not reveal any difference in the behavior of the optimization algorithm.

3.2 Finite elements

Spatial discretization of the primal and adjoint problem is carried out by means of quadratic finite elements in all variables on quadrilateral and hexahedral meshes. The interface \({\mathcal {I}}\) is resolved by the mesh such that no additional approximation error appears. To cope with the saddle point structure of the flow problem we use the local projection stabilization method (Becker and Braack 2001; Frei 2016; Molnar 2015; Richter 2017). In the context of optimization problems this scheme has the advantage that stabilization and optimization commute, see Braack (2009). Further details on this and comparable approaches are found in the literature (Hron et al. 2010; Richter and Wick 2010; Richter 2017).

The use of equal order finite elements in all variables has the advantage that one set of scalar test functions \(\{\phi _h^{(1)},\dots ,\phi _h^{(N)}\}\) can be chosen for all variables. The discrete solution \(U_h\) can then be written as

$$\begin{aligned} U_h(x) = \sum _{i=1}^N \mathbf {U}_i \phi _h^{(i)}(x) \end{aligned}$$

with coefficient vectors \(\mathbf {U}_i=(p_i,{\mathbf {v}}_i,{\mathbf {u}}_i)\in \mathbb {R}^{2d+1}\) and scalar test functions \(\phi _h^{(i)}\). Likewise, the resulting matrix entries \(A_{ij}=A'(U_h)(W_h^{(j)},\Phi _h^{(i)})\) are small but dense local matrices of size \((2d+1)\times (2d+1)\). All linear algebra routines act on these blocks; e.g., inversion of a matrix entry corresponds to the inversion of one of these blocks \(A_{ij}^{-1}\), which results in better cache efficiency and reduced effort for indirect indexing of matrix and vector entries. The effect of this approach is described in Braack and Richter (2006).
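Schematically, this block layout can be mimicked by a sparse structure whose entries are dense \((2d+1)\times (2d+1)\) arrays. The class and method names in the following sketch are hypothetical, chosen only to illustrate the idea, and do not reflect the authors' implementation:

```python
import numpy as np

d = 3            # spatial dimension
bs = 2 * d + 1   # block size per node: pressure + velocity + deformation

class BlockMatrix:
    """Sparse matrix whose entries A_ij are dense (2d+1) x (2d+1) blocks."""
    def __init__(self):
        self.blocks = {}  # (i, j) -> np.ndarray of shape (bs, bs)

    def add(self, i, j, block):
        # Accumulate element contributions into the (i, j) block.
        self.blocks[(i, j)] = self.blocks.get((i, j), np.zeros((bs, bs))) + block

    def invert_diag(self, i):
        # "Inversion of a matrix entry" = inverting one dense diagonal block,
        # as used e.g. in Vanka-type smoothers.
        return np.linalg.inv(self.blocks[(i, i)])

A = BlockMatrix()
rng = np.random.default_rng(1)
A.add(0, 0, np.eye(bs) + 0.1 * rng.standard_normal((bs, bs)))
Ainv = A.invert_diag(0)
```

Keeping the \((2d+1)\times (2d+1)\) coupling dense inside each block is what allows the smoother to treat all local couplings and nonlinearities at once.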

3.3 Optimality system and adjoint equation

As gradient-based algorithms for parameter estimation are not very common in the hemodynamics community, we briefly derive the Karush–Kuhn–Tucker system and show how gradient information can thereby be extracted. To derive the Karush–Kuhn–Tucker system, we define Lagrange multipliers \(Z_n=({\mathbf {z}}_n^p,{\mathbf {z}}_n^v,{\mathbf {z}}_n^{uf},{\mathbf {z}}_n^{us})\in Y_h\) in every time step \(n=0,\ldots ,N\) and obtain the discrete Lagrangian \(L:\left( \mathbb {R}^N,(X_h)^{N+1},(Y_h)^{N+1}\right) \longmapsto \mathbb {R}\):

$$\begin{aligned} \begin{aligned}&L((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N)\\&\quad :=\sum _{n=1}^{N-1} \Big \{\frac{1}{2} k \Vert {\mathbf {v}}_n-\tilde{{\mathbf {v}}}(t_n) \Vert ^2_{\mathcal {F}} +\frac{1}{2} k \Vert {\mathbf {u}}_n-\tilde{{\mathbf {u}}}(t_n) \Vert ^2_{\mathcal {S}}+ \frac{\alpha }{2} k q_n^2 \Big \}\\&\qquad +\frac{1}{4} k \Vert {\mathbf {v}}_0-\tilde{{\mathbf {v}}}(t_0) \Vert ^2_{\mathcal {F}} +\frac{1}{4} k \Vert {\mathbf {u}}_0-\tilde{{\mathbf {u}}}(t_0) \Vert ^2_{\mathcal {S}}\\&\qquad +\frac{1}{4} k \Vert {\mathbf {v}}_N-\tilde{{\mathbf {v}}}(t_N) \Vert ^2_{\mathcal {F}} +\frac{1}{4} k \Vert {\mathbf {u}}_N-\tilde{{\mathbf {u}}}(t_N) \Vert ^2_{\mathcal {S}}+\frac{\alpha }{2} k q_N^2\\&\qquad - \sum _{n=1}^{N} \Big \{ \big (\rho ^0_s ({\mathbf {v}}_n-{\mathbf {v}}_{n-1}),{\mathbf {z}}^v_n \big )_{\mathcal {S}}+k\theta A_S(U_n,{\mathbf {z}}^v_n)+ k(1-\theta ) A_S(U_{n-1},{\mathbf {z}}^v_n) \\&\qquad + \big ({\mathbf {u}}_n,{\mathbf {z}}^{us}_n\big )_{\mathcal {S}} -\big ({\mathbf {u}}_{n-1},{\mathbf {z}}^{us}_n\big )-k\theta \big ({\mathbf {v}}_n,{\mathbf {z}}^{us}_n\big )_{\mathcal {S}}-k(1-\theta )\big ({\mathbf {v}}_{n-1},{\mathbf {z}}^{us}_n\big )_{\mathcal {S}}\\&\qquad +(\bar{J}_n ({\mathbf {v}}_n-{\mathbf {v}}_{n-1}),{\mathbf {z}}^v_n\big )_{\mathcal {F}} -F_{TR}(U_n,U_{n-1},{\mathbf {z}}^v_n) + k A_p(U_n,{\mathbf {z}}^v_n)\\&\qquad +k\theta A_F(U_n,{\mathbf {z}}^v_n) + k(1-\theta ) A_F(U_{n-1},{\mathbf {z}}^v_n) - (q_n,{\mathbf {z}}^{v}_{n})_{\Gamma _q}\\&\qquad +k A_{div}(U_n,{\mathbf {z}}^p_n) +k A_{ALE}(U_n,{\mathbf {z}}^{uf}_n) \Big \}\\&\qquad +\big ({\mathbf {u}}(0)-{\mathbf {u}}_0,{\mathbf {z}}^{us}_0\big )_{\mathcal {S}} +\big ({\mathbf {v}}(0)-{\mathbf {v}}_0,{\mathbf {z}}^{v}_0\big )_{\mathcal {F}} +\big ({\mathbf {v}}(0)-{\mathbf {v}}_0,{\mathbf {z}}^{v}_0\big )_{\mathcal {S}} \end{aligned} \end{aligned}$$

If the triplet \(U_n=(p_n,{\mathbf {v}}_n,{\mathbf {u}}_n)\in X_h\) is the solution of the discrete fluid–structure interaction system of (4) in every time step \(n=0,\ldots ,N\) with the control parameter \((q_n)_{n=1}^N\) in the boundary condition, the useful identity

$$\begin{aligned} j((q_n)_{n=1}^N):=J((q_n)_{n=1}^N,(U_n(q_n))_{n=0}^N) =L((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N) \end{aligned}$$

is true for arbitrary values \((Z_n)\in Y_h\), \(n=0,\dots ,N\). If we denote by \((\delta U_n)_{n=0}^N=\frac{d}{dq}(U_n)_{n=0}^N ((\delta q)_{n=1}^N)\) the derivative of the state variable with respect to the control, we obtain via the Lagrange functional the representation

$$\begin{aligned}j'((q_n)_{n=1}^N)((\delta q)_{n=1}^N) &=L'_q((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N)((\delta q)_{n=1}^N)\\&\quad +L'_{U}((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N)(\delta U)_{n=0}^N) \end{aligned}$$

of the derivative of the reduced functional. If we choose the Lagrange multiplier \(Z_n\in Y_h\) for \(n=N,\ldots ,0\) such that the dual problem

$$\begin{aligned} \frac{d}{d U_n} L((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N)(\Phi )&=0 \quad \forall \Phi \in X_h \quad \text {for } n=N,\dots ,0 , \end{aligned}$$

is fulfilled, then we can evaluate the derivative of the reduced functional \(j((q_n)_{n=1}^N)\) in an arbitrary direction \((\delta q)_{n=1}^N\) by evaluating

$$\begin{aligned} j'((q_n)_{n=1}^N)((\delta q)_{n=1}^N)=L'_q((q_n)_{n=1}^N,(U_n)_{n=0}^N,(Z_n)_{n=0}^N)((\delta q)_{n=1}^N). \end{aligned}$$

This enables us to apply any gradient-based optimization algorithm. We will later use a limited memory version of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) update formula [see for example Geiger and Kanzow (2013)] to find a local minimum of the discretized optimization problem.
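The resulting forward-backward structure (state sweep, adjoint sweep, gradient assembly, outer descent loop) can be illustrated on a scalar linear-quadratic toy problem with state update \(u_n = u_{n-1} + k q_n\) and a tracking functional. This is our own drastic simplification of the FSI setting, and plain steepest descent stands in for the limited memory BFGS update of the paper:

```python
import numpy as np

N, T, alpha = 50, 1.0, 1e-3
k = T / N
u_target = np.sin(np.pi * np.linspace(k, T, N))  # desired trajectory

def reduced_functional_and_gradient(q):
    # Forward (state) sweep: u_n = u_{n-1} + k q_n, u_0 = 0.
    u = np.cumsum(k * q)
    J = 0.5 * k * np.sum((u - u_target) ** 2) + 0.5 * alpha * k * np.sum(q ** 2)
    # Backward (adjoint) sweep: z_n = z_{n+1} + k (u_n - u_target_n), z_{N+1} = 0.
    z = np.cumsum((k * (u - u_target))[::-1])[::-1]
    # Gradient of the reduced functional: j'(q)_n = k (alpha q_n + z_n).
    grad = k * (alpha * q + z)
    return J, grad

q = np.zeros(N)
J0, _ = reduced_functional_and_gradient(q)
for _ in range(500):                 # steepest descent with a fixed step size
    J, g = reduced_functional_and_gradient(q)
    q -= 50.0 * g
J_final, g_final = reduced_functional_and_gradient(q)
```

Each outer iteration costs one forward and one backward sweep, mirroring the state/dual solves of the full algorithm; the adjoint recursion here follows from differentiating the toy Lagrangian with respect to \(u_n\), exactly as in the derivation above.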

In every update step we first have to solve the state equation (4) for the solution \(U_n\), \(n=0,\ldots ,N\), and then the dual problem (7) for \(n=N,\ldots ,0\). Thereby, the dual problem for \(n=N-1,\dots ,1\) consists of three equations with the test function \(\Phi \in X_h\). Due to derivatives with respect to the velocity variable \(v_n\), we obtain:

$$\begin{aligned}&\big (\rho ^0_s \phi ,{\mathbf {z}}^v_n\big )_{\mathcal {S}}+k\theta \frac{d}{d v_n} A_S(U_n,{\mathbf {z}}^v_n)(\phi ) -k\theta \big (\phi ,{\mathbf {z}}^{us}_n\big )_{\mathcal {S}}+(\bar{J}_n \phi ,{\mathbf {z}}^v_n \big )_{\mathcal {F}} \nonumber \\&\qquad -\frac{d}{d v_n} F_{TR}(U_n,U_{n-1},{\mathbf {z}}^v_n)(\phi ) +k\theta \frac{d}{d v_n} A_F(U_n,{\mathbf {z}}^v_n)(\phi ) +k \frac{d}{d v_n} A_{div}(U_n,{\mathbf {z}}^p_n) (\phi )\nonumber \\&\quad =k ({\mathbf {v}}_n-\tilde{{\mathbf {v}}}(t_n),\phi )_{\mathcal {F}} + \big (\rho ^0_s \phi ,{\mathbf {z}}^v_{n+1} \big )_{\mathcal {S}}\nonumber \\&\qquad - k(1-\theta ) \frac{d}{d v_n}A_S(U_{n},{\mathbf {z}}^v_{n+1})(\phi ) -k(1-\theta )\big (\phi ,{\mathbf {z}}^{us}_{n+1}\big )_{\mathcal {S}}+(\bar{J}_{n+1} \phi ,{\mathbf {z}}^v_{n+1} \big )_{\mathcal {F}}\nonumber \\&\qquad +\frac{d}{d v_n} F_{TR}(U_{n+1},U_{n},{\mathbf {z}}^v_{n+1})(\phi ) - k(1-\theta ) \frac{d}{d v_n}A_F(U_{n},{\mathbf {z}}^v_{n+1})(\phi ). \end{aligned}$$

Due to derivatives of the Lagrangian with respect to the displacement \(u_n\), we obtain:

$$\begin{aligned}&k\theta \frac{d}{d u_n} A_S(U_n,{\mathbf {z}}^v_n)(\psi ) + \big (\psi ,{\mathbf {z}}^{us}_n\big )_{\mathcal {S}}\nonumber \\&\qquad +\left( \frac{d}{d u_n}(\bar{J}_n)(\psi ) ({\mathbf {v}}_n-{\mathbf {v}}_{n-1}) ,{\mathbf {z}}^v_n\right) _{\mathcal {F}} -\frac{d}{d u_n} F_{TR}(U_n,U_{n-1},{\mathbf {z}}^v_n)(\psi ) \nonumber \\&\qquad + k \frac{d}{d u_n} A_p(U_n,{\mathbf {z}}^v_n)(\psi ) +k\theta \frac{d}{d u_n} A_F(U_n,{\mathbf {z}}^v_n)(\psi )\nonumber \\&\qquad +k \frac{d}{d u_n} A_{div}(U_n,{\mathbf {z}}^p_n) (\psi ) +k \frac{d}{d u_n} A_{ALE}(U_n,{\mathbf {z}}^{uf}_n)(\psi )\nonumber \\&\quad = - k(1-\theta ) \frac{d}{d u_n}A_S(U_{n},{\mathbf {z}}^v_{n+1})(\psi ) +(\psi ,{\mathbf {z}}^{us}_{n+1})\nonumber \\&\quad -\left( \frac{d}{d u_n}(\bar{J}_{n+1})(\psi ) ({\mathbf {v}}_{n+1}-{\mathbf {v}}_{n}) ,{\mathbf {z}}^v_{n+1} \right) _{\mathcal {F}} +\frac{d}{d u_n} F_{TR}(U_{n+1},U_{n},{\mathbf {z}}^v_{n+1})(\psi ) \nonumber \\&\quad - k(1-\theta ) \frac{d}{d u_n}A_F(U_{n},{\mathbf {z}}^v_{n+1})(\psi ) +k ({\mathbf {u}}_n-\tilde{{\mathbf {u}}}(t_n),\psi )_{\mathcal {S}}. \end{aligned}$$

Finally due to derivatives of the Lagrangian with respect to the pressure variable \(p_n\), we obtain:

$$\begin{aligned} k \frac{d}{d p_n} A_p(U_n,{\mathbf {z}}^v_n)(\xi )=0 . \end{aligned}$$

The first and last steps of the discrete dual problem have a slightly different structure, but can be derived in a similar way. Since the monolithic formulation is a Petrov–Galerkin formulation with different trial and test spaces, the adjoint coupling conditions differ from the primal ones. In the primal problem the solid displacement field enters as a Dirichlet condition on the interface for the ALE extension problem. In the adjoint problem the shape derivatives of the adjoint ALE equation are coupled with the adjoint solid problem via a global test function, which corresponds to a Neumann condition. As \({\mathbf {z}}^{uf}\) fulfills zero Dirichlet conditions on the interface, this corresponds to a back coupling of the shape derivatives into the adjoint solid problem via residual terms. Similar to the primal problem, the adjoint velocity \({\mathbf {z}}^v\) has to match on the interface, and in addition an “adjoint dynamic” coupling condition is hidden in the test function \(\phi\). Therefore, block preconditioners suggested in the literature cannot be directly applied to the adjoint problem, but have to be adapted to the new structure.

3.4 Short notation for state and dual equation

Short notation of the state equation. Key to the efficiency of the multigrid approach demonstrated in Failer and Richter (2020) is a condensation of the deformation unknown \({\mathbf {u}}_n\) from the solid problem. The last equation in (4) gives a relation for the new deformation at time \(t_n\)

$$\begin{aligned} {\mathbf {u}}_n = {\mathbf {u}}_{n-1}+k\theta {\mathbf {v}}_n + k(1-\theta ){\mathbf {v}}_{n-1}\text { in }{\mathcal {S}} \end{aligned}$$

and we will use this representation to eliminate the unknown deformation and base the solid stresses purely on the last time step and the unknown velocity, i.e. by expressing the deformation gradient as

$$\begin{aligned} \begin{aligned} {\mathbf {F}}_n={\mathbf {F}}({\mathbf {u}}_{n}) \,&\widehat{=}\, {\mathbf {F}}({\mathbf {u}}_{n-1},{\mathbf {v}}_{n-1};{\mathbf {v}}_n) \\&=I+\nabla \big ({\mathbf {u}}_{n-1}+k\theta {\mathbf {v}}_n + k(1-\theta ){\mathbf {v}}_{n-1}\big )\text { in }{\mathcal {S}}. \end{aligned} \end{aligned}$$

Removing the solid deformation from the momentum equation will help to reduce the algebraic systems in Sect. 4. A similar technique within an Eulerian formulation and using a characteristics method is presented in Pironneau (2016, 2019).
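The effect of this condensation can be checked on a scalar analogue of the solid subsystem, \(\rho v' = -a u\), \(u' = v\) (a toy model of our own): eliminating \(u_n\) via (12) from the theta step yields a velocity-only update that reproduces the coupled \(2\times 2\) solve exactly.

```python
import numpy as np

rho, a, k, theta = 1.2, 4.0, 0.05, 0.55  # toy oscillator, shifted theta

def coupled_step(v, u):
    """Solve the 2x2 theta system for (v_n, u_n):
       rho (v_n - v) + k theta a u_n + k (1-theta) a u = 0
       u_n - u - k theta v_n - k (1-theta) v = 0"""
    A = np.array([[rho, k * theta * a],
                  [-k * theta, 1.0]])
    b = np.array([rho * v - k * (1.0 - theta) * a * u,
                  u + k * (1.0 - theta) * v])
    return np.linalg.solve(A, b)

def condensed_step(v, u):
    """Insert u_n = u + k theta v_n + k (1-theta) v into the momentum
    equation, solve for v_n alone, and recover u_n afterwards."""
    vn = (rho * v - k * a * u - k * k * theta * (1.0 - theta) * a * v) \
         / (rho + (k * theta) ** 2 * a)
    un = u + k * theta * vn + k * (1.0 - theta) * v
    return vn, un
```

Both steps produce identical trajectories up to round-off; in the full system the analogous substitution (13) is what removes \({\mathbf {u}}_{s}\) from the momentum equations \(\mathcal{M}^i\) and \(\mathcal{M}^s\).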

For each time step \(t_{n-1}\mapsto t_n\) we introduce the following short notation for the system of algebraic equations, based on the splitting of the solution into unknowns acting in the fluid domain \(({\mathbf {v}}_f,{\mathbf {u}}_f)\), on the interface \(({\mathbf {v}}_i,{\mathbf {u}}_i)\) and those in the solid \(({\mathbf {v}}_{s},{\mathbf {u}}_{s})\). The pressure variable p acts in the fluid and on the interface.

$$\begin{aligned} \underbrace{\left( \begin{array}{c} \mathcal{D}(p,{\mathbf {v}}_f,{\mathbf {u}}_f,{\mathbf {v}}_{i},{\mathbf {u}}_{i},{\mathbf {v}}_{s},{\mathbf {u}}_{s}) \\ \mathcal{M}^f(p,{\mathbf {v}}_f,{\mathbf {u}}_f,{\mathbf {v}}_{i},{\mathbf {u}}_i) \\ \mathcal{M}^i(p,{\mathbf {v}}_f,{\mathbf {u}}_f,{\mathbf {v}}_{i},{\mathbf {u}}_i,{\mathbf {v}}_s) \\ \mathcal{M}^s(p,{\mathbf {v}}_{i},{\mathbf {u}}_i,{\mathbf {v}}_s) \\ \mathcal{E}({\mathbf {u}}_f,{\mathbf {u}}_i)\\ \mathcal{U}^i({\mathbf {v}}_{i},{\mathbf {u}}_{i},{\mathbf {v}}_s,{\mathbf {u}}_s)\\ \mathcal{U}^s({\mathbf {v}}_{i},{\mathbf {u}}_{i},{\mathbf {v}}_s,{\mathbf {u}}_s) \end{array}\right) }_{=: \mathcal{A}(U)} = \underbrace{ \left( \begin{array}{c} \mathcal{B}_1 \\ \mathcal{B}_2\\ \mathcal{B}_3\\ \mathcal{B}_4\\ \mathcal{B}_5\\ \mathcal{B}_6 \\ \mathcal{B}_7 \end{array}\right) }_{=: \mathcal{B}} \end{aligned}$$

\(\mathcal{D}\) describes the divergence equation, which acts in the fluid domain and on the interface; \(\mathcal{M}\) denotes the momentum equations, acting in the fluid domain, on the interface and in the solid domain (indicated by the corresponding index); \(\mathcal{E}\) describes the ALE extension in the fluid domain; and \(\mathcal{U}\) is the relation between solid velocity and solid deformation, acting on the interface degrees of freedom and in the solid. Note that \(\mathcal{M}^i\) and \(\mathcal{M}^s\), the terms describing the momentum equations, do not directly depend on the solid deformation \({\mathbf {u}}_{s}\), as we base the deformation gradient on the velocity, see (13).

Short notation dual equation We aim to apply a similar reduction scheme to the adjoint problem. Here, there is no direct counterpart to (12). Instead, we first introduce the new variable \(\tilde{{\mathbf {z}}}^{us}_n\) such that

$$\begin{aligned} \big (\psi ,\tilde{{\mathbf {z}}}^{us}_n\big )_{\mathcal {S}}= \big (\psi , {\mathbf {z}}^{us}_n\big )_{\mathcal {S}}+k\theta \frac{d}{d u_n} A_S(U_n,{\mathbf {z}}^v_n)(\psi ). \end{aligned}$$

This allows us to substitute all terms in (9), (10) and (11) that depend on \({\mathbf {z}}^{us}_n\) by the new variable \(\tilde{{\mathbf {z}}}^{us}_n\), for example

$$\begin{aligned} -\theta k \big (\phi ,{\mathbf {z}}^{us}_n\big )_{\mathcal {S}}=- \theta k \big (\phi ,\tilde{{\mathbf {z}}}^{us}_n\big )_{\mathcal {S}}+(\theta k)^2\frac{d}{d u_n} A_S(U_n,{\mathbf {z}}^v_n)(\phi ) \end{aligned}$$

in (9). Now the adjoint terms \(\mathcal{M}_{{\mathbf {u}}_i}\) and \(\mathcal{M}_{{\mathbf {u}}_s}\), resulting from derivatives of the momentum equation with respect to the displacement variable, no longer depend on the adjoint velocity variable \({\mathbf {z}}^{v}\), which will later enable us to decouple the problem into three well-conditioned subproblems. Furthermore, the “adjoint dynamic” coupling conditions now correspond to equivalents of adjoint boundary forces on the interface, as in the state equation.

For each time step \(t_{n+1}\mapsto t_n\) we introduce again a short notation for the system of algebraic equations that is based on the splitting of the adjoint solution into unknowns acting in the fluid domain \(({\mathbf {z}}^v_f,{\mathbf {z}}^{uf}_f)\), on the interface \(({\mathbf {z}}^v_i, \tilde{{\mathbf {z}}}^{us}_i)\) and those on the solid \(({\mathbf {z}}^v_s, \tilde{{\mathbf {z}}}^{us}_s)\). The adjoint pressure variable \(z^p\) acts in the fluid and on the interface.

$$\begin{aligned} \underbrace{\begin{pmatrix} \mathcal{M}_{p} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_{i},{\mathbf {z}}^v_{s}) \\ \mathcal{D}_{{\mathbf {v}}_f} ({\mathbf {z}}^p)+ \mathcal{M}_{{\mathbf {v}}_f} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)\\ \mathcal{D}_{{\mathbf {u}}_f} ({\mathbf {z}}^p)+ \mathcal{M}_{{\mathbf {u}}_f} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)+\mathcal{E}_{{\mathbf {u}}_f}({\mathbf {z}}^{uf}_f)\\ \mathcal{D}_{{\mathbf {v}}_i} ({\mathbf {z}}^p)+ \mathcal{M}_{{\mathbf {v}}_i} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i,{\mathbf {z}}^v_s) +\mathcal{U}_{{\mathbf {v}}_i} ( \tilde{{\mathbf {z}}}^{us}_{i}, \tilde{{\mathbf {z}}}^{us}_{s}) \\ \mathcal{D}_{{\mathbf {u}}_i} ({\mathbf {z}}^p)+ \mathcal{M}_{{\mathbf {u}}_i} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)+\mathcal{E}_{{\mathbf {u}}_i}({\mathbf {z}}^{uf}_f)+\mathcal{U}_{{\mathbf {u}}_i} (\tilde{{\mathbf {z}}}^{us}_{i}, \tilde{{\mathbf {z}}}^{us}_{s})\\ \mathcal{M}_{{\mathbf {v}}_s} ({\mathbf {z}}^v_i,{\mathbf {z}}^v_s)+\mathcal{U}_{{\mathbf {v}}_s} ( \tilde{{\mathbf {z}}}^{us}_{i}, \tilde{{\mathbf {z}}}^{us}_{s}) \\ \mathcal{U}_{{\mathbf {u}}_s} ( \tilde{{\mathbf {z}}}^{us}_{i}, \tilde{{\mathbf {z}}}^{us}_{s})\\ \end{pmatrix}}_{=: {\mathcal{A}^{\text {Dual}}}(Z)} = \underbrace{\begin{pmatrix} \mathcal{B}_1^d \\ \mathcal{B}_2^d\\ \mathcal{B}_3^d\\ \mathcal{B}_4^d\\ \mathcal{B}_5^d\\ \mathcal{B}_6^d \\ \mathcal{B}_7^d \end{pmatrix}}_{=: \mathcal{B}^d} \end{aligned}$$

\(\mathcal{M}_{p}\) describes the adjoint divergence equation, which acts in the fluid domain and on the interface; \(\mathcal{M}_{{\mathbf {v}}}\) and \(\mathcal{M}_{{\mathbf {u}}}\) are the derivatives of the momentum equation with respect to the velocity and displacement variables, acting in the fluid domain, on the interface and in the solid domain (indicated by the corresponding index); \(\mathcal{E}_{{\mathbf {u}}}\) describes the adjoint ALE extension in the fluid domain; and \(\mathcal{U}_{{\mathbf {v}}}\) and \(\mathcal{U}_{{\mathbf {u}}}\) result from the relation between solid velocity and solid deformation, acting on the interface degrees of freedom and in the solid.

4 Solution of the algebraic systems

In Failer and Richter (2020), we have derived an efficient approximated Newton scheme for the forward fluid–structure interaction problem. We briefly outline the main steps and then focus on transferring these ideas to the dual equations. The general idea is described by the following two steps

  1.

    In the Jacobian, we omit the derivatives of the Navier–Stokes equations with respect to the fluid domain deformation. In Richter (2017, chapter 5) it is documented that this approximation slightly increases the iteration counts of the Newton scheme. On the other hand, the overall computational time is nevertheless reduced, since assembly times for the neglected terms are especially high. Since the Newton residual is not changed, the resulting nonlinear solver is of approximated Newton type.

  2.

    We use the discretization of the relation \(\partial _t {\mathbf {u}}={\mathbf {v}}\) between solid deformation and solid velocity, namely \({\mathbf {u}}^{n+1}={\mathbf {u}}^n + \theta k {\mathbf {v}}^{n+1}+(1-\theta )k{\mathbf {v}}^n\) to reformulate the solid’s deformation gradient based on the velocity instead of the deformation. This step has been explained in the previous section.
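On the algebraic level, the discretized relation above is a plain vector addition on the coefficient vectors. A minimal sketch with made-up data (four degrees of freedom; the values are not taken from the paper):

```python
import numpy as np

# u_n = u_{n-1} + k*(1-theta)*v_{n-1} + k*theta*v_n: one vector addition per time step
k, theta = 0.01, 0.5
u_prev = np.zeros(4)                     # solid deformation at t_{n-1}
v_prev = np.array([0.0, 1.0, 2.0, 3.0])  # solid velocity at t_{n-1}
v_new = np.array([4.0, 3.0, 2.0, 1.0])   # solid velocity at t_n
u_new = u_prev + k * (1.0 - theta) * v_prev + k * theta * v_new
```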

These two steps, the first an approximation and the second an equivalence transformation, allow us to reduce the number of couplings in the Jacobian in such a way that each linear step splits into three successive linear systems. The first describes the coupled momentum equation for fluid and solid velocity, the second realizes the solid’s velocity-deformation relation, and the third is the ALE extension. We finally note that the approximations only involve the Jacobian. The residual of the systems is not altered, such that we still solve the original problem and compute the exact discrete gradient.

Then, in Sect. 4.2 we describe the extension of this solution mechanism to the adjoint system. Two major differences occur: first, the adjoint system is linear, such that we realize the solver in the framework of a preconditioned Richardson iteration. The preconditioner takes the place of the approximated Jacobian. Second, the adjoint interface coupling conditions differ from the primal conditions as outlined in the last paragraph of Sect. 3.3. This will call for a modification of the condensation procedure introduced as second reduction step in the primal solver.

4.1 Solution of the primal problem

In each time step of the forward problem we must solve a nonlinear problem. We employ an approximated Newton scheme

$$\begin{aligned} \tilde{\mathcal{A}}'(U^{(l)}) W^{(l)} = \mathcal{B}-\mathcal{A}(U^{(l)}),\quad U^{(l+1)}:=U^{(l)}+\omega ^{(l)} \cdot W^{(l)}, \end{aligned}$$

where \(\omega ^{(l)}\) is a line search parameter and \(U^{(0)}\) an initial guess. By \(\mathcal{A}'(U)\) we denote the Jacobian, by \(\tilde{\mathcal{A}}'(U)\) an approximation. As outlined in Failer and Richter (2020), the Jacobian is modified in two essential steps: first, in the Navier–Stokes problem, we skip the derivatives with respect to the ALE discretization. These terms are computationally expensive, and they introduce the only couplings from the fluid problem to the deformation unknowns. In Richter (2017, chapter 5) it has been shown that while this approximation slightly worsens Newton’s convergence rate, the overall efficiency is nevertheless increased, as the number of additional Newton steps is very small in comparison to the savings in assembly time. Second, we employ the reduction step outlined in Sect. 3.4, which is a static condensation of the deformation unknowns from the solid’s momentum equation. Taken together, both steps completely remove all deformation couplings from the combined fluid–solid momentum equation, and the Jacobian takes the form

$$\begin{aligned} \left( \begin{array}{cccc|c|cc} 0 &{} \mathcal{D}_{{\mathbf {v}}_f} &{} \mathcal{D}_{{\mathbf {v}}_i} &{} 0 &{} 0 &{} 0&{}0\\ \mathcal{M}^f_p &{} \mathcal{M}^f_{{\mathbf {v}}_f} &{} \mathcal{M}^f_{{\mathbf {v}}_i} &{} 0 &{} 0 &{} 0&{}0 \\ \mathcal{M}^i_p &{} \mathcal{M}^i_{{\mathbf {v}}_f} &{} \mathcal{M}^i_{{\mathbf {v}}_i} &{} \mathcal{M}^i_{{\mathbf {v}}_s} &{} 0 &{} 0&{}0\\ \mathcal{M}^s_p &{} 0 &{} \mathcal{M}^s_{{\mathbf {v}}_i} &{} \mathcal{M}^s_{{\mathbf {v}}_s} &{} 0 &{} 0&{}0 \\ \hline 0 &{} 0 &{} 0 &{} 0 &{} \mathcal{E}^f_{{\mathbf {u}}_{f}} &{} \mathcal{E}^f_{{\mathbf {u}}_{i}}&{}0\\ \hline 0&{}0&{}\mathcal{U}^i_{{\mathbf {v}}_{i}}&{}\mathcal{U}^i_{{\mathbf {v}}_{s}}&{}0&{}\mathcal{U}^i_{{\mathbf {u}}_{i}}&{}\mathcal{U}^i_{{\mathbf {u}}_{s}}\\ 0&{}0&{}\mathcal{U}^s_{{\mathbf {v}}_{i}}&{}\mathcal{U}^s_{{\mathbf {v}}_{s}}&{}0&{}\mathcal{U}^s_{{\mathbf {u}}_{i}}&{}\mathcal{U}^s_{{\mathbf {u}}_{s}}\\ \end{array} \right) \left( \begin{array}{c} \delta \mathbf {p}\\ {\delta }{\mathbf {v}}_f \\ {\delta }{\mathbf {v}}_{i} \\ {\delta }{\mathbf {v}}_{s} \\ {\delta }{\mathbf {u}}_f \\ {\delta }{\mathbf {u}}_{i}\\ {\delta }{\mathbf {u}}_{s} \end{array} \right) = \left( \begin{array}{c} \mathbf {b}_1 \\ \mathbf {b}_2 \\ \mathbf {b}_3 \\ \mathbf {b}_4 \\ \mathbf {b}_5 \\ \mathbf {b}_6 \\ \mathbf {b}_7 \end{array} \right) . \end{aligned}$$

This reduced linear system decomposes into three sub-steps. First, the coupled momentum equation, living in fluid and solid domain and acting on pressure and velocity,

$$\begin{aligned} \left( \begin{array}{cccc} 0 &{} \quad \mathcal{D}_{{\mathbf {v}}_f} &{} \quad \mathcal{D}_{{\mathbf {v}}_i} &{} \quad 0\\ \mathcal{M}^f_p &{} \quad \mathcal{M}^f_{{\mathbf {v}}_f} &{} \quad \mathcal{M}^f_{{\mathbf {v}}_i} &{} \quad 0\\ \mathcal{M}^i_p &{} \quad \mathcal{M}^i_{{\mathbf {v}}_f} &{} \quad \mathcal{M}^i_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^i_{{\mathbf {v}}_s}\\ \mathcal{M}^s_p &{} \quad 0 &{} \quad \mathcal{M}^s_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^s_{{\mathbf {v}}_s}\\ \end{array} \right) \left( \begin{array}{c} \delta \mathbf {p}\\ {\delta }{\mathbf {v}}_f \\ {\delta }{\mathbf {v}}_{i} \\ {\delta }{\mathbf {v}}_{s} \\ \end{array} \right) = \left( \begin{array}{c} \mathbf {b}_1 \\ \mathbf {b}_2 \\ \mathbf {b}_3 \\ \mathbf {b}_4 \end{array} \right) , \end{aligned}$$

second, the update equation for the deformation on the interface and within the solid domain,

$$\begin{aligned} \left( \begin{array}{cc} \mathcal{U}^i_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^i_{{\mathbf {u}}_{s}}\\ \mathcal{U}^s_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^s_{{\mathbf {u}}_{s}}\\ \end{array} \right) \left( \begin{array}{c} {\delta }{\mathbf {u}}_{i}\\ {\delta }{\mathbf {u}}_{s} \end{array} \right) = \left( \begin{array}{c} \mathbf {b}_6 \\ \mathbf {b}_7 \end{array} \right) - \begin{pmatrix} \mathcal{U}^i_{{\mathbf {v}}_{i}}&{} \quad \mathcal{U}^i_{{\mathbf {v}}_{s}}\\ \mathcal{U}^s_{{\mathbf {v}}_{i}}&{} \quad \mathcal{U}^s_{{\mathbf {v}}_{s}} \end{pmatrix} \begin{pmatrix} {\delta }{\mathbf {v}}_i \\ {\delta }{\mathbf {v}}_s \end{pmatrix}, \end{aligned}$$

which is a finite element discretization of the zero-order equation \({\mathbf {u}}_n = {\mathbf {u}}_{n-1}+k (1-\theta ){\mathbf {v}}_{n-1}+k\theta {\mathbf {v}}_n\). This update can be performed by one algebraic vector addition. Finally, it remains to solve the ALE extension equation

$$\begin{aligned} \mathcal{E}^f_{{\mathbf {u}}_{f}}\delta {\mathbf {u}}_{f} = \mathbf {b}_5- \mathcal{E}^f_{{\mathbf {u}}_{i}}\delta {\mathbf {u}}_{i}, \end{aligned}$$

one simple equation, usually either a vector Laplacian or a linear elasticity problem, see Richter (2017, Section 5.2.5).
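The three successive solves of one approximated Newton update can be sketched as follows. All operators (`solve_momentum`, `U_v`, `U_u_inv`, `solve_ale`, `E_ui`) are hypothetical stand-ins for the assembled finite element blocks of the three subsystems above, not library calls:

```python
import numpy as np

def linear_step(b, solve_momentum, U_v, U_u_inv, solve_ale, E_ui):
    # 1. coupled momentum system: pressure and fluid/interface/solid velocities
    dp, dv_i, dv_s = solve_momentum(b["momentum"])
    # 2. deformation update: in practice one vector addition (mass-matrix inverse)
    du_is = U_u_inv(b["update"] - U_v @ np.concatenate([dv_i, dv_s]))
    du_i, du_s = np.split(du_is, 2)
    # 3. ALE extension with the interface deformation entering the right-hand side
    du_f = solve_ale(b["ale"] - E_ui @ du_i)
    return dp, dv_i, dv_s, du_i, du_s, du_f
```

The ordering matters: the momentum solve delivers the velocities needed in the deformation update, and the interface deformation then acts as Dirichlet data for the mesh extension.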

4.2 Dual

Due to the unsymmetric structure of the fluid–structure interaction model, the placement and coupling of the blocks in the transposed Jacobian \(A'(U)^T\) of the dual problem differ from those in the Jacobian of the primal problem. This is closely related to the adjoint coupling conditions, see Sect. 3.3. Hence, block preconditioners developed for the state problem cannot be applied in a black-box way to the linear systems arising in the dual problem, but have to be adjusted. Furthermore, the dual system is linear, such that the approximated Newton scheme must be replaced by a different concept. We start by stating the full system matrix of the dual problem

$$\begin{aligned} \underbrace{\left( \begin{array}{cccc|c|cc} 0&{} \quad \mathcal{M}^{f,T}_p &{} \quad \mathcal{M}^{i,T}_p &{} \quad \mathcal{M}^{s,T}_p &{} \quad 0&{} \quad 0&{} \quad 0\\ \mathcal{D}_{{\mathbf {v}}_f}^T &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_f} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_f} &{} \quad 0&{} \quad 0&{} \quad 0&{} \quad 0\\ \mathcal{D}_{{\mathbf {u}}_f}^T &{} \quad \mathcal{M}^{f,T}_{{\mathbf {u}}_f} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {u}}_f} &{} \quad 0&{} \quad \mathcal{E}^{f,T}_{{\mathbf {u}}_{f}} &{} \quad 0&{} \quad 0\\ \hline \mathcal{D}^T_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_i} &{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {v}}_{i}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_{i}}\\ \mathcal{D}_{{\mathbf {u}}_i}^T&{} \quad \mathcal{M}^{f,T}_{{\mathbf {u}}_i}&{} \quad \varvec{\mathcal{M}^{i,T}_{{\mathbf {u}}_i}}&{} \quad \varvec{ \mathcal{M}^{s,T}_{{\mathbf {u}}_i}}&{} \quad \mathcal{E}^{f,T}_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^{i,T}_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{i}}\\ \hline 0&{} \quad 0&{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_s} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_s}&{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {v}}_{s}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_{s}}\\ 0&{} \quad 0&{} \quad \varvec{\mathcal{M}^{i,T}_{{\mathbf {u}}_s}}&{} \quad \varvec{\mathcal{M}^{s,T}_{{\mathbf {u}}_s}}&{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {u}}_{s}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{s}}\\ \end{array} \right) }_{=A'_D} \left( \begin{array}{c} {\mathbf {z}}^p \\ {\mathbf {z}}^v_f \\ {\mathbf {z}}^v_i \\ {\mathbf {z}}^v_s \\ {\mathbf {z}}^{uf}_f\\ \tilde{{\mathbf {z}}}^{us}_i \\ \tilde{{\mathbf {z}}}^{us}_s \end{array} \right) = \left( \begin{array}{c} \mathcal{B}^d_1 \\ \mathcal{B}^d_2 \\ \mathcal{B}^d_3 \\ \mathcal{B}^d_4 \\ \mathcal{B}^d_5 \\ 
\mathcal{B}^d_6 \\ \mathcal{B}^d_7 \end{array} \right) , \end{aligned}$$

given as the transpose of the primal Jacobian, \(A'_D = A'(U)^T\), see Failer and Richter (2020).

For solving the dual problem we want to mimic the primal approach: first, approximate the system matrix by neglecting couplings; second, use the static condensation described in Sect. 3.4 in (15). As the problem is linear, a direct modification of the system matrix would alter the dual solution. Instead, we approximate the solution by a preconditioned Richardson iteration with an inexact matrix \(\tilde{A}'_D\approx A'(U)^T\) as preconditioner (applied approximately by a geometric multigrid solver)

$$\begin{aligned} Z^{(0)}=0,\quad \tilde{A}'_D W^{(l)}=\mathcal{B}^d-A'(U)^TZ^{(l-1)},\quad Z^{(l)}=Z^{(l-1)}+W^{(l)}, \end{aligned}$$

where \(Z^{(l)} = \{{\mathbf {z}}^p,{\mathbf {z}}^v_f,{\mathbf {z}}^v_i,{\mathbf {z}}^v_s,{\mathbf {z}}^{uf}_f,\tilde{{\mathbf {z}}}^{us}_i,\tilde{{\mathbf {z}}}^{us}_s\}\) and the update in every Richardson iteration is given by \(W^{(l)} = \{\delta {\mathbf {z}}^p,\delta {\mathbf {z}}^v_f,\delta {\mathbf {z}}^v_i,\delta {\mathbf {z}}^v_s,\delta {\mathbf {z}}^{uf}_f,\delta \tilde{{\mathbf {z}}}^{us}_i,\delta \tilde{{\mathbf {z}}}^{us}_s\}\). The residual is computed with the full Jacobian \(A'(U)^T\) (including the ALE derivatives), such that we still converge to the solution of the original adjoint problem. Since we never assemble the complete Jacobian \(A'(U)\) in the primal solver, the adjoint residual \(\mathcal{B}^d-A'(U)^TZ^{(l-1)}\) can be evaluated in a matrix-free setting.
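The structure of this iteration, exact residual combined with an inexact preconditioner, can be condensed into a few lines. The sketch below replaces the multigrid-based preconditioner by a crude diagonal (Jacobi) approximation on a small made-up matrix; it only illustrates that the iterate converges to the solution of the unmodified system:

```python
import numpy as np

def richardson(A_T, P_solve, b, n_iter=50):
    # residual with the unmodified matrix A'(U)^T, correction with the inexact
    # preconditioner: the limit is still the exact dual solution
    Z = np.zeros_like(b)
    for _ in range(n_iter):
        r = b - A_T @ Z
        Z = Z + P_solve(r)
    return Z

A_T = np.array([[4.0, 1.0],
                [1.0, 3.0]])
P_solve = lambda r: r / np.diag(A_T)   # crude stand-in for the multigrid preconditioner
b = np.array([1.0, 2.0])
Z = richardson(A_T, P_solve, b)
```

The better the preconditioner approximates \(A'(U)^T\), the fewer Richardson steps are needed; with the multigrid-based preconditioner of the paper only a few iterations are typically required.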

Then, similar to the approach for the primal system, we neglect the ALE terms (the entries set in bold in the matrix above). Finally, we reorder to reach the preconditioned iteration

$$\begin{aligned} \underbrace{\left( \begin{array}{cccc|c|cc} 0&{} \quad \mathcal{M}^{f,T}_p &{} \quad \mathcal{M}^{i,T}_p &{} \quad \mathcal{M}^{s,T}_p &{} \quad 0&{} \quad 0&{} \quad 0\\ \mathcal{D}_{{\mathbf {v}}_f}^T &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_f} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_f} &{} \quad 0&{} \quad 0&{} \quad 0&{} \quad 0\\ \mathcal{D}^T_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_i} &{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {v}}_{i}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_{i}}\\ 0&{} \quad 0&{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_s} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_s}&{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {v}}_{s}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_{s}}\\ \hline 0 &{} \quad 0 &{} \quad 0 &{} \quad 0&{} \quad \mathcal{E}^{f,T}_{{\mathbf {u}}_{f}} &{} \quad 0&{} \quad 0\\ \hline 0&{} \quad 0&{} \quad 0&{} \quad 0&{} \quad \mathcal{E}^{f,T}_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^{i,T}_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{i}}\\ 0&{} \quad 0&{} \quad 0&{} \quad 0&{} \quad 0&{} \quad \mathcal{U}^{i,T}_{{\mathbf {u}}_{s}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{s}}\\ \end{array} \right) }_{=\tilde{A}'_D} \left( \begin{array}{c} \delta {\mathbf {z}}^p \\ {\delta }{\mathbf {z}}^v_f \\ {\delta }{\mathbf {z}}^v_i \\ {\delta }{\mathbf {z}}^v_s \\ {\delta }{\mathbf {z}}^{uf}_f\\ {\delta }\tilde{{\mathbf {z}}}^{us}_i \\ {\delta }\tilde{{\mathbf {z}}}^{us}_s \end{array} \right) = \mathcal{B}^d-A'(U)^T Z^{(l-1)}, \end{aligned}$$

with the preconditioner \(\tilde{A}'_D\), which decomposes into three separate steps. First, the equation for the adjoint mesh deformation variable

$$\begin{aligned} \mathcal{E}^{f,T}_{{\mathbf {u}}_f} {\delta }{\mathbf {z}}^{uf}_f=\mathbf {b}^d_3. \end{aligned}$$

Usually a symmetric extension operator \(\mathcal{E}^f_{{\mathbf {u}}_f}\) can be chosen, which avoids re-assembly of this matrix and any additional preparations for the linear solver. See Richter (2017, Section 5.3.5) for different efficient options for extension operators. Second, the update for the adjoint solid deformation,

$$\begin{aligned} \left( \begin{array}{cc} \mathcal{U}^{i,T}_{{\mathbf {u}}_{i}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{i}}\\ \mathcal{U}^{i,T}_{{\mathbf {u}}_{s}}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {u}}_{s}}\\ \end{array} \right) \left( \begin{array}{c} {\delta }\tilde{{\mathbf {z}}}^{us}_i\\ {\delta }\tilde{{\mathbf {z}}}^{us}_s\\ \end{array} \right) = \left( \begin{array}{c} \mathbf {b}^d_5 -\mathcal{E}^{f,T}_{{\mathbf {u}}_i} {\delta }{\mathbf {z}}^{uf}_f \\ \mathbf {b}^d_7 \end{array} \right) , \end{aligned}$$

which only involves inversion of the mass matrix and finally the update for the adjoint velocity and adjoint pressure

$$\begin{aligned} \left( \begin{array}{cccc} 0&{} \quad \mathcal{M}^{f,T}_p &{} \quad \mathcal{M}^{i,T}_p &{} \quad \mathcal{M}^{s,T}_p\\ \mathcal{D}_{{\mathbf {v}}_f}^T &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_f} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_f}&{} \quad 0\\ \mathcal{D}^T_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{f,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_i} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_i}\\ 0&{} \quad 0&{} \quad \mathcal{M}^{i,T}_{{\mathbf {v}}_s} &{} \quad \mathcal{M}^{s,T}_{{\mathbf {v}}_s} \end{array} \right) \left( \begin{array}{c} \delta {\mathbf {z}}^p \\ {\delta }{\mathbf {z}}^v_f\\ {\delta }{\mathbf {z}}^v_i \\ {\delta }{\mathbf {z}}^v_s\\ \end{array} \right) = \left( \begin{array}{c} \mathbf {b}^d_1 \\ \mathbf {b}^d_2 \\ \mathbf {b}^d_4 \\ \mathbf {b}^d_6 \end{array} \right) - \left( \begin{array}{cc} 0&{} \quad 0\\ 0&{} \quad 0\\ \mathcal{U}^{i,T}_{{\mathbf {v}}_i}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_i}\\ \mathcal{U}^{i,T}_{{\mathbf {v}}_s}&{} \quad \mathcal{U}^{s,T}_{{\mathbf {v}}_s}\\ \end{array} \right) \left( \begin{array}{c} {\delta }\tilde{{\mathbf {z}}}^{us}_i\\ {\delta }\tilde{{\mathbf {z}}}^{us}_s \end{array} \right) . \end{aligned}$$

The numbering of the right-hand side \(\mathbf {b}^d_1,\dots ,\mathbf {b}^d_7\) is according to (23). As we do not modify the residual, the derivatives with respect to the ALE transformation \(\mathcal{M}_{{\mathbf {u}}_f} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)\) and \(\mathcal{M}_{{\mathbf {u}}_i} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)\) still enter into \(\mathbf {b}_3\) and \(\mathbf {b}_5\). Hence, the resulting problem in Eq. (26) corresponds to a linear elasticity problem on the fluid domain with an artificial forcing term on the right-hand side and zero Dirichlet data on the interface. In Eq. (27) the shape derivatives of the ALE transformation enter via the residual terms \(\mathcal{M}_{{\mathbf {u}}_i} ({\mathbf {z}}^v_f,{\mathbf {z}}^v_i)\) on the interface. These terms contain the adjoint geometric coupling condition. An explicit update by one vector addition, as for the corresponding primal equation, is not possible. The “adjoint kinematic” and “adjoint dynamic” coupling conditions are fully incorporated in (28), similar to the state equation, and are thereby fully resolved in every Richardson iteration.

4.3 Solution of the linear problems

In each step of the Newton iteration for the state equation and in each step of the Richardson iteration for the adjoint system, we must approximately solve three individual linear systems of equations. The mesh-update problems are usually of elliptic type, a vector Laplacian or a linear elasticity problem; here, standard geometric multigrid solvers are highly efficient. Problem (27) and its primal counterpart correspond to zero-order equations, for which multigrid solvers or the CG method converge with optimal efficiency. It remains to approximate the coupled momentum equations, given by (28) in the dual case. Here, the systems lack any desirable structure: the matrices are not symmetric, they feature a saddle-point structure, and they involve different scalings of the fluid and solid problems. We approximate these equations by a GMRES iteration that is preconditioned with a geometric multigrid solver. Within the multigrid iteration we employ a smoother of Vanka type, in which we invert local patches exactly. These patches correspond to all degrees of freedom of one element (within the fluid) and of a union of \(2^d\) elements (within the solid). For simplicity of implementation we use this highly robust solver also for the other two problems, despite their simpler character. For details we refer to Failer and Richter (2020).
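A Vanka-type smoother inverts small patches of degrees of freedom exactly and applies the local corrections one after the other. The following sketch uses made-up overlapping index sets on a small test matrix instead of the element patches of the actual solver, and iterates the sweep to convergence only to keep the example self-contained:

```python
import numpy as np

def vanka_sweep(A, x, b, patches):
    # one multiplicative sweep: for each patch, compute the local residual and
    # correct the patch unknowns by an exact local solve
    for idx in patches:
        r = b[idx] - A[idx, :] @ x
        x[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r)
    return x

# small made-up test matrix; the real patches couple velocity and pressure
# unknowns of one fluid element or 2^d solid elements
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
b = np.ones(4)
x = np.zeros(4)
patches = [np.array([0, 1]), np.array([1, 2]), np.array([2, 3])]
for _ in range(40):
    x = vanka_sweep(A, x, b, patches)
```

Used as a smoother inside multigrid, only a few such sweeps are performed per level; the exact local solves make the smoother robust with respect to the saddle-point structure.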

4.4 Algorithm

To give an overview of how the final optimization routine works, we summarize all intermediate steps in the following algorithm:

figure a

5 Numerical results

5.1 Problem configuration 2d

Fig. 1 Geometry for flow around cylinder with elastic beam. The blue region denotes the observation domain \(\Omega _{obs}\). (Color figure online)

We modify the well-known FSI benchmark from Hron and Turek (2006) by adding an additional boundary \(\Gamma _{q}\) as in Fig. 1. The material parameters are chosen as for the FSI 1 benchmark: the Lamé parameters in the solid are \(\lambda =2.0 \times 10^6\hbox { kg/ms}^{2}\) and \(\mu =0.5 \times 10^6\hbox { kg/ms}^{2}\), and the fluid viscosity is \(\nu _f={0.001}\,\hbox {m}^2/\hbox {s}\). The solid and fluid densities are \(\rho _s={1000.0}\,{\hbox {kg/m}^3}\) and \(\rho _f={1000.0}\,{\hbox {kg/m}^3}\). The inflow velocity is increased slowly during the time interval \(I=[0,2]\), as for the instationary FSI 2 and FSI 3 benchmarks.

In the first example, we would like to determine the pressure profile \(q(t)\in L^2(I)\) at the control boundary \(\Gamma _{q}\) on the time interval \(I=[0,6]\), leading to the displacement profile over time

$$\begin{aligned} \tilde{u}(t)= {\left\{ \begin{array}{ll} 0 &{}\quad t< {2}\,\text {s}\\ 0.01\cdot \sin (2\pi t) &{}\quad t \ge 2\text { s} \end{array}\right. } \end{aligned}$$

in y-direction in the area \(\Omega _{obs}=\{0.525\le x\le 0.6,0.19\le y\le 0.21 \}\) at the tip of the flag (see Fig. 1). To do so, we minimize the functional

$$\begin{aligned} \begin{aligned} \min _{q \in L^2(I)} J(q,{\mathbf {u}})= \frac{1}{2} \int _{0}^{6}\Vert {\mathbf {u}}_y-\tilde{u} \Vert ^2_{\Omega _{obs}}\text { d}t+ \frac{\alpha }{2}\int _{0}^{6} q(t)^2 \text { d}t \end{aligned} \end{aligned}$$

constrained by the fluid–structure interaction problem. We discretize the system as presented in Sect. 3, in time using a shifted Crank–Nicolson time stepping scheme with time-step size \(k=0.01\) and \(\theta =0.5+2k\), and in space using quadrilateral meshes with varying mesh size, see Table 1. The control variable is chosen to be piecewise constant on every time interval (\(\dim (Q)= 600\)). The Tikhonov regularization parameter is set to \(\alpha =1.0\times 10^{-17}\).
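For illustration, the discrete regularized functional can be evaluated as follows. This is a hedged sketch: `u_y` is a scalar stand-in for the y-displacement over \(\Omega _{obs}\), which in the real code comes from the FSI forward solve, and the rectangle-rule quadrature is an assumption of this example:

```python
import numpy as np

# discrete evaluation of the regularized tracking functional with a
# piecewise-constant control on N = 600 intervals (rectangle rule in time);
# here both control and state are simply zero placeholders
k, T, alpha = 0.01, 6.0, 1.0e-17
t = np.arange(1, int(round(T / k)) + 1) * k                      # t_1, ..., t_N
u_tilde = np.where(t < 2.0, 0.0, 0.01 * np.sin(2 * np.pi * t))   # desired profile
q = np.zeros_like(t)                                             # control values
u_y = np.zeros_like(t)                                           # placeholder state

# j(q) = 1/2 int |u_y - u_tilde|^2 dt + alpha/2 int q^2 dt
j = 0.5 * k * np.sum((u_y - u_tilde) ** 2) + 0.5 * alpha * k * np.sum(q ** 2)
```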

5.2 Problem configuration 3d

In the second example we consider a pressure wave in a straight cylinder, as presented in Gerbeau and Vidrascu (2003). The cylinder has length 5 cm and radius \({0.5}\,\text {cm}\) and is surrounded by an elastic structure of constant thickness \(h={0.1}\,\text {cm}\). The elastic structure is clamped at the inflow boundary; at the outflow boundary it is fixed in x-direction and free to move in y- and z-direction. At the inlet we prescribe a pressure step function \(p_{in}=1.33\times 10^4\hbox { g cm}^{-1}\hbox { s}^{-2}={10}\,\text {mmHg}\) for \(t\le {0.003}\,\text {s}\); afterwards we set the pressure to zero. The Lamé parameters in the solid are \(\lambda =1.73\times 10^6 \hbox { g/cms}^2\) and \(\mu =1.15\times 10^6\,\hbox {g/cms}^2\), and the fluid viscosity is \(\nu _f={0.03}\,{\hbox {cm}^2/\hbox {s}}\). The solid and fluid densities are \(\rho _s={1.2}\,{\hbox {g}/\hbox {cm}^3}\) and \(\rho _f=1.0\,\hbox {g/cm}^3\). The solution at \(t={0.006}\,\text {s}\) is shown in Fig. 2.

Fig. 2 Velocity field of the pressure wave at \(t={0.006}\,\text {s}\) on the deformed domain (amplified by a factor of 10)

If only a do-nothing condition with constant pressure is prescribed at the outflow boundary, pressure waves are reflected there. If the pressure along the outflow boundary is chosen appropriately, the pressure wave leaves the cylindrical domain without any reflection, such that the system comes to rest after some time. To determine the corresponding pressure profile \(q(t)\in L^2(I)\) on the time interval \(I=[0,0.04]\), we minimize the kinetic energy in the fluid domain for \(t>{0.03}\,\text {s}\). Hence we minimize the functional

$$\begin{aligned} \begin{aligned} \min _{q \in L^2(I)} J(q,{\mathbf {v}})= \frac{1}{2}\int _{0.03}^{0.04} \Vert {\mathbf {v}} \Vert ^2_{\mathcal {F}}\text { d}t+ \frac{\alpha }{2}\int _{0}^{0.04} q(t)^2 \text { d}t \end{aligned} \end{aligned}$$

constrained by the fluid–structure interaction problem. In time we use, as in the previous example, a shifted Crank–Nicolson time stepping scheme with time-step size \(k=0.0001\) and \(\theta =0.5+2k\), and hexahedral meshes with varying mesh size, see Table 1. Only at the time points \(t=0\) and \(t=0.003\) do we use, for four steps, a time-step size of \(k=0.00005\) with \(\theta =1.0\). Thereby, no artificial effects occur due to the jump in the pressure at the inflow boundary. The Tikhonov regularization parameter is set to \(\alpha =1.0\times 10^{-8}\).
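The resulting time-stepping schedule can be sketched as a small helper; the exact placement of the four damped substeps after each pressure jump is an assumption of this sketch:

```python
# shifted Crank-Nicolson schedule with damped backward-Euler substeps after the
# pressure jumps at t = 0 and t = 0.003 (values taken from the text)
k = 0.0001
theta_cn = 0.5 + 2 * k  # shifted Crank-Nicolson parameter

def step_parameters(t, jump_times=(0.0, 0.003), n_sub=4, k_sub=0.00005):
    # return (step size, theta) for the step starting at time t
    for tj in jump_times:
        if tj <= t < tj + n_sub * k_sub:
            return k_sub, 1.0  # backward Euler with halved step size
    return k, theta_cn
```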

Table 1 Spatial degrees of freedom for the 2d and 3d configurations on every refinement level. In 3d, the meshes on levels 4 and 5 are locally refined along the interface. In time we use a uniform partitioning. In 3d, we add 4 initial backward Euler steps with reduced step size for smoothing at times \(t=0\,\text {s}\) and \(t=0.003\,\text {s}\) each

5.3 Optimization algorithm

Given the gradient information computed via Formula (8), we apply a BFGS algorithm implemented in the optimization library RoDoBo (Becker et al. 2020b) to solve the optimization problem. We use a limited-memory version, as presented e.g. in Geiger and Kanzow (2013), such that there is no need to assemble and store the BFGS update matrix. To guarantee that the update matrix remains symmetric and positive definite, a Powell-Wolfe step size control should be used. As this step size criterion is in most cases very conservative, we only check for descent in the functional value. Control constraints could be added to the optimization algorithm via projection of the update in every optimization step; in this paper we only consider unconstrained examples. Fast convergence of the BFGS algorithm can only be expected close to the optimal solution. Hence, we take advantage of the mesh hierarchy used in the geometric multigrid algorithm and first solve the optimization problem on a coarse grid to obtain a good initial guess on finer meshes. As the computation time for 3d examples rises very quickly on finer meshes, this approach saves a lot of computation time. In the following, we reduce the norm of the gradient by a factor of \(10^{-1}\) in every optimization loop, then refine the mesh and restart the optimization with the control from the coarser mesh. To compute the gradient, we have to solve the state and dual problems for all time steps. The state solutions are stored on the hard disk and loaded during the computation of the dual problem when necessary. Thereby, only the current state and adjoint solutions have to be held in memory.
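The limited-memory BFGS direction is computed by the standard two-loop recursion. The following generic sketch (not the RoDoBo implementation) shows how a short history of stored pairs \((s_i,y_i)\) replaces the assembled update matrix:

```python
import numpy as np

def lbfgs_direction(g, s_list, y_list):
    # two-loop recursion: s_i = q_{i+1} - q_i, y_i = g_{i+1} - g_i;
    # the inverse-Hessian approximation is never formed explicitly
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q = q - a * y
    if s_list:  # scale with gamma = (s'y)/(y'y) as initial inverse Hessian
        s, y = s_list[-1], y_list[-1]
        q = q * ((s @ y) / (y @ y))
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):
        rho = 1.0 / (y @ s)
        b = rho * (y @ q)
        q = q + (a - b) * s
    return -q  # descent direction
```

With an empty history the recursion returns the negative gradient, so the first step is a plain gradient step.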

In every Newton step, a GMRES iterative solver preconditioned with a geometric multigrid solver provided by the FEM software library Gascoigne (Becker et al. 2020a) is used. All linear systems are solved up to a relative accuracy of \(10^{-4}\). In every time step, the initial Newton residual is reduced by a factor of \(10^{-6}\). We use the same tolerances for the state and dual problems. The matrices are only reassembled if the nonlinear convergence rate falls below \(\gamma =0.05\), as in Failer and Richter (2020). In every dual time step, the matrices are assembled at least once at the beginning of the Richardson iteration.

5.4 Numerical results 2d example

In Fig. 3 we plot the value of the regularized functional j(q) and the norm of the gradient \(\Vert j'(q) \Vert\) in every optimization step. The optimization algorithm is started with \(q_n=0\) for \(n=1,\ldots ,N\). The computed optimal control is given in Fig. 4. The functional value is reduced in every optimization step, and the gradient is reduced by a factor of \(10^{-3}\) after fewer than 40 optimization cycles (see Fig. 3). In addition, we plot the optimal solution for the 2d example on the finest mesh at the point B in the center of the observation domain \(\Omega _{obs}\) and compare it to the reference solution in Fig. 4. Only around the kink of the desired state can a mismatch between desired state and optimized solution be seen.

Fig. 3 2d example: functional value plotted over optimization steps (left), norm of the gradient plotted over optimization steps (right)

Fig. 4 2d example: optimal control \(q_{opt}\) plotted over time (left) and optimal solution and reference solution at the point B plotted over time (right)

To evaluate our approach of solving the optimization problem first on coarser grids and then refining systematically, we restarted the optimization algorithm directly on meshlevel 5. A behavior of functional and gradient similar to the previous approach can be observed in Fig. 3. To reduce the gradient to a tolerance of \(10^{-12}\), only 20 optimization loops are required. But since about \({14000}\,\mathrm{s}\) of computational time are needed to solve one cycle of the state and the adjoint system for all time steps on meshlevel 5, and less than \({4000}\,\mathrm{s}\) on the coarser meshlevel 4 (even less time on still coarser grids), systematic refinement of the mesh is much more efficient. The computations were performed on an Intel(R) Xeon(R) Gold 6150 CPU @ 2.70GHz with 18 threads. We parallelized the assembly of the matrix and the Vanka smoother as well as the matrix–vector multiplication via OpenMP, see Failer and Richter (2020).

5.5 Numerical results 3d example

For the 3d example we used the same optimization algorithm as in the 2d case. We can see in Fig. 5 that, using the optimized pressure profile at the outflow boundary, about \(98.9\%\) of the kinetic energy now leaves the cylinder for \(t>{0.03}\,\mathrm{s}\). The jumps in the gradient after every refinement step indicate that a more accurate computation on the coarse grid would not result in better starting values on the finer meshes. The norm of the gradient could be reduced from \(1.15\times 10^{-4}\) to \(3.68\times 10^{-7}\) in 23 optimization steps, whereby only 9 optimization cycles were necessary on the computationally costly fine grid on meshlevel 4.

Fig. 5

3d example: functional value plotted over optimization steps (left), norm of the gradient plotted over optimization steps (right)

To compare the controlled solution with the uncontrolled solution, we additionally computed the solution on a cylinder of length \({10}\,\mathrm{cm}\). As the reflection at the outflow boundary occurs later, the pressure and flow values in the center at \(x={5}\, \mathrm{cm}\) can be seen as reference values for optimal non-reflecting boundary conditions for \(t<{0.02}\,\mathrm{s}\). As we only control the pressure on the fluid domain, reflections at the solid boundary can still occur. Furthermore, we observe that the pressure is not constant along the virtual outflow boundary at \(x={5}\,\mathrm{cm}\) for the long cylinder. Thus, we cannot expect the reference solution to fully match the optimized solution. Nevertheless, we can see in Fig. 6 that pressure and outflow profiles of the controlled solution are very close to the reference values at the outflow boundary. In addition, the kinetic energy in the fluid domain shows a similar decay behavior. After the time point \(t={0.025}\,\mathrm{s}\) the kinetic energy in the left half of the long cylinder rises again due to the reflection of the pressure wave at the outflow boundary at \(x={10}\,\mathrm{cm}\). This explains the different behavior of pressure and outflow after \(t={0.025}\,\mathrm{s}\).

Fig. 6

Kinetic energy in the fluid domain \({\mathcal {F}}\) as well as outflow and mean pressure plotted over time at \(\Gamma _q\) for \(q=0\), \(q_{opt}\) and for a long cylinder

5.6 Test of the Newton scheme, of the Richardson iteration and of the iterative linear solver

Evaluating the performance of the quasi-Newton scheme or of the iterative linear solver is not straightforward. Due to the changing control in every optimization cycle and the nonlinear character of the problem, the condition numbers of the matrices occurring in the linear subproblems vary in every time step and optimization cycle. Hence, we first compute only one optimization step with \(q=0\) on various meshlevels to analyze the h-dependence of our solution algorithm. Thereby, we compare the mean number of Newton/Richardson iterations and GMRES steps per time step on every meshlevel. Afterwards, we compute mean values in every optimization loop to analyze how the performance varies during the optimization procedure.

Table 2 Average number of Newton/Richardson iterations, average number of matrix assemblies per time step and average number of GMRES steps for solving the three subproblems in the first optimization step. Top: 2d example state, (20) is the coupled momentum equation, (21) the coupling between solid velocity and deformation and (22) the fluid deformation extension. Bottom: 2d example dual, where (26) is the adjoint extension equation, (27) the adjoint solid velocity-deformation coupling and (28) the adjoint coupled momentum equation

In Tables 2 and 3 we present the mean number of Newton/Richardson iterations and matrix assemblies per time step for the 2d and 3d examples. In addition, we present the mean number of GMRES steps needed per Newton/Richardson iteration to solve the linear subproblems (22), (21) and (20) as well as (26), (27) and (28). We observe that the number of Newton/Richardson iterations per time step ranges between 3 and 4 for state and dual problem. The coupled momentum equation remains the most complex problem, requiring the highest number of steps. Equations (21) and (27) belong to the discretization of the velocity–deformation coupling \(d_t u=v\) within the solid domain. This corresponds to the inversion of the mass matrix, which explains the very low iteration counts. The results for the state equation are similar to the values in Failer and Richter (2020), where we could already observe for different examples that neglecting the ALE derivatives has only minor influence on the behavior of the Newton scheme. A more detailed analysis of the smoother in the geometric multigrid algorithm can be found in Failer and Richter (2020).

As the dual equation is linear with respect to the adjoint variable, we would have expected to need only one Richardson iteration per time step. However, since we neglected terms arising from the ALE transformation, we lose this optimal convergence and need about 3 Richardson iterations per time step. On the other hand, the matrices occurring in the cascade of subproblems in the dual system have the same condition numbers as the matrices in the state equation. Only by this approximation and splitting can iterative solvers be applied successfully to the linear problems. The numbers of GMRES steps for the dual subproblems in Tables 2 and 3 are rather small and close to the values for the state problem. As (28) and (20) are still fully coupled problems of fluid and elastic solid, most of the computational time is spent in solving these two subproblems. The fewest GMRES steps are needed to invert the mass matrix in (21) and (27). In all subproblems the number of GMRES steps increases only slightly under mesh refinement.
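The mechanism behind the roughly 3 Richardson iterations can be illustrated with a small defect-correction sketch in Python (our own illustration, not the paper's solver): the residual is evaluated with the full linear operator, while the correction uses a simplified operator with coupling terms dropped, analogous to neglecting the ALE derivative terms in the dual system. The iterate therefore still converges to the exact discrete solution, just not in a single step.

```python
import numpy as np

def richardson_defect_correction(A, A_approx_solve, b, tol=1e-6, max_iter=50):
    """Defect-correction (Richardson) iteration: residual with the exact
    operator A, correction with an approximate solver. Converges to the
    exact solution of A x = b as long as the approximation is close enough
    (contraction factor ||I - M^{-1} A|| < 1 for the approximate M)."""
    x = np.zeros_like(b)
    r = b - A @ x
    r0 = np.linalg.norm(r)
    for k in range(1, max_iter + 1):
        x = x + A_approx_solve(r)   # correction with the simplified matrix
        r = b - A @ x               # residual with the full matrix
        if np.linalg.norm(r) <= tol * r0:
            return x, k
    return x, max_iter
```

The smaller the neglected terms, the faster the contraction; in the hypothetical example below the approximate solver inverts only the diagonal of a weakly coupled matrix, and a handful of iterations suffices.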

In Fig. 7 we show the mean number of GMRES steps per Newton step in every time step and the number of Newton steps per time step. For the given examples the values vary only slightly over time. Therefore the mean values in Tables 2 and 3 represent the overall behavior very well.

To understand how the performance of the solution algorithm in the case of the 2d example varies during the optimization loop, we show in Fig. 8 the average number of Newton steps per time step and the average number of GMRES steps per Newton step for each optimization step. The computation was started with \(q=0\) on meshlevel 5. No further mesh refinement was applied.

The dependence of the presented quasi-Newton scheme on the time-step size for the state equation was already analyzed in Failer and Richter (2020). Therein, we observed an increasing Newton iteration count for larger time steps, in particular for very large time steps. A similar behavior can be expected for the dual problem. While the presented solution approach can be regarded as highly efficient and appropriate for optimal control of nonstationary fluid–structure interactions, stationary or quasi-stationary applications will call for alternative approaches like the geometric multigrid solver presented by Aulisa et al. (2018).

More details regarding the computational time and the savings due to parallelization can be found in Failer and Richter (2020). As we have to evaluate state and adjoint variables, as well as additional terms due to the linearization, in every dual step, the computational time to assemble the matrix and to compute the Newton residual is slightly larger for the dual equation than for the state equation.

Table 3 Average number of Newton/Richardson iterations, average number of matrix assemblies per time step and average number of GMRES steps for solving the three subproblems in the first optimization step. Top: 3d example state, (20) is the coupled momentum equation, (21) the coupling between solid velocity and deformation and (22) the fluid deformation extension. Bottom: 3d example dual, where (26) is the adjoint extension equation, (27) the adjoint solid velocity-deformation coupling and (28) the adjoint coupled momentum equation
Fig. 7

Performance in first optimization loop on mesh level 5 in 2d (left) and on mesh level 4 in 3d (right). Mean GMRES steps per linear solve of (20) and (28) plotted over time steps and number of Newton steps plotted over time steps

Fig. 8

Performance in every optimization loop on mesh level 5 in 2d. Left: mean GMRES steps per linear solve of (20) and (28) plotted over optimization steps. Right: mean Newton steps per time step plotted over optimization steps

6 Summary

We extended the Newton multigrid framework for monolithic fluid–structure interactions in ALE coordinates presented in Failer and Richter (2020) to the dual system of fluid–structure interaction problems. The solver is based on replacing the adjoint solid deformation by a new variable and on skipping the ALE derivatives within the adjoint Navier–Stokes equations. As we do not modify the residual, the state and dual solutions in each time step still converge to the exact discrete solution, and the adjoint coupling conditions incorporated in the monolithic formulation remain fulfilled. As we compute correct gradient information, we see fast convergence of our optimization algorithm. The coupled system is better conditioned (compared to monolithic Jacobians), which allows the use of very simple multigrid smoothers that are easy to parallelize. Only this makes gradient-based algorithms feasible and efficient for 3d fluid–structure interaction applications, where memory consumption prevents the use of direct solvers.

It would be straightforward to combine the presented algorithm with dual-weighted residual error estimators for mesh and time step refinement. Instead of global refinement of the mesh after every optimization loop the error estimators indicate where to refine the mesh locally. The sensitivity information from the optimization algorithm can be directly used to evaluate the error estimators.