Adaptive time-step control for a monolithic multirate scheme coupling the heat and wave equation

We consider the dynamics of a parabolic and a hyperbolic equation coupled on a common interface and develop time-stepping schemes that can use different time-step sizes for each of the subproblems. The problem is formulated in a strongly coupled (monolithic) space-time framework. Coupling two different step sizes monolithically gives rise to large algebraic systems of equations in which multiple states of the subproblems must be solved at once. To solve these algebraic systems efficiently, we borrow ideas from the partitioned regime and present two decoupling methods, namely a partitioned relaxation scheme and a shooting method. Furthermore, we develop an a posteriori error estimator that serves as the basis of an adaptive time-stepping procedure whose goal is to optimally balance the time-step sizes of the two subproblems. The error estimator is based on the dual weighted residual method and relies on the space-time Galerkin formulation of the coupled problem. As an example, we take a linear set-up with the heat equation coupled to the wave equation. In numerical test cases, we demonstrate the efficiency of the solution process, and we validate the accuracy of the a posteriori error estimator and its use for controlling the time-step sizes.


Introduction
In this work, we consider surface-coupled multiphysics problems inspired by fluid-structure interaction (FSI) [1]. We couple the heat equation with the wave equation through an interface, on which the typical FSI coupling conditions of Dirichlet-Neumann type act. Despite its simplicity, this setting reproduces a key feature of FSI: each of the subproblems exhibits different temporal dynamics. The solution of the heat equation, as a parabolic problem, has smoothing properties and can thus be characterized as a problem with slow temporal dynamics. The wave equation, on the other hand, is an example of a hyperbolic equation with highly oscillatory solutions. FSI problems are characterized by two specific difficulties: the coupling of an equation of parabolic type with one of hyperbolic type gives rise to regularity problems at the interface, and the added mass effect [2], which is present for problems coupling materials of similar density, calls for discretization and solution schemes that are strongly coupled. This is the monolithic approach to modeling FSI, in contrast to partitioned approaches, where each of the subproblems is treated and solved as a separate system. While the monolithic approach allows for a more rigorous mathematical setting and the use of large time steps, the partitioned approach allows using fully optimized separate techniques for both subproblems. Most realizations for FSI, such as the technique described here, have to be regarded as a blend of both philosophies: while the formulation and discretization are monolithic, ideas of partitioned approaches are borrowed for solving the algebraic problems.
Since the two problems feature distinct time scales, the use of multirate time-stepping schemes with step sizes adapted to fluid and solid suggests itself. For parabolic problems, the concept of multirate time-stepping was discussed in [3], [4] and [5]; in the hyperbolic setting, it was considered in [6], [7], [8] and [9]. In the context of fluid-structure interactions, such subcycling methods are used in aeroelasticity [10], where explicit time integration schemes are used for the flow problem and implicit schemes for the solid problem [11]. In the low Reynolds number regime, common in hemodynamics, the situation is different: here, implicit and strongly coupled schemes are required by the added mass effect. Hence, large time steps can be applied for the flow problem, but smaller time steps might be required within the solid. A study on benchmark problems in fluid dynamics (Schäfer, Turek '96 [12]) and FSI presented in [13] shows that FSI problems demand a much smaller step size, although the problem configuration and the resulting nonstationary dynamics are very similar, with oscillating solutions of nearly the same period [14].
We will derive a monolithic variational formulation for FSI-like problems that can handle different time-step sizes in the two subproblems. Implicit coupling of two problems with different step sizes gives rise to very large systems in which multiple states must be solved at once. In Section 3 we study two approaches for an efficient solution of these coupled systems: a simple partitioned relaxation scheme and a shooting-like approach.
Next, in Section 4 we present a posteriori error estimators based on the dual weighted residual method [15] for automatically identifying optimal step sizes for the two subproblems. Numerical studies on the efficiency of the time adaptation procedure are presented in Section 5.


Presentation of the model problem
In the domain Ω_f we pose the heat equation, parameterized by the diffusion parameter ν > 0, with an additional transport term controlled by β ∈ R². In the domain Ω_s we pose the wave equation, where √λ denotes the propagation speed and δ ≥ 0 a damping parameter. On the interface, we set both kinematic and dynamic coupling conditions: the former guarantees the continuity of displacement and velocity along the interface, the latter establishes the balance of normal stresses. The exact values of the parameters and the complete set of equations are stated below.
Figure 1: View of the domain split into "fluid" Ω_f and "solid" Ω_s along the common interface Γ.
We use the symbols ⃗n_f and ⃗n_s to distinguish between the outward normal vectors of the two space domains.
The external forces are set to be products of functions of space and time, g_f(⃗x, t) := g¹_f(⃗x) g²(t) and g_s(⃗x, t) := g¹_s(⃗x) g²(t), where g¹_f, g¹_s are the space components and g² is the time component. We consider two configurations of the right-hand side. In Configuration 1, the right-hand side is concentrated in Ω_f, with a space component consisting of an exponential function centered around (1/2, 1/2). In Configuration 2, we take a space component concentrated in Ω_s, an exponential function centered around (1/2, −1/2). For both cases, we choose the same time component g²(t) := 1_{[⌊t⌋, ⌊t⌋+1/10]}(t) for t ∈ I, illustrated in Figure 2.
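The on/off behavior of this time component, active during the first tenth of every unit time interval, can be sketched as a short routine (a sketch; the function name g2 mirrors the notation above and plain Python floats are assumed):

```python
import math

def g2(t: float) -> float:
    """Time component of the source term: the indicator function of
    [floor(t), floor(t) + 1/10], i.e. the force is switched on during
    the first tenth of every unit time interval."""
    return 1.0 if t <= math.floor(t) + 0.1 else 0.0
```

For example, g2 is 1 on [0, 0.1] and on [1, 1.1], and 0 in between.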
Since our example can be regarded as a simplified FSI problem, we adopt the corresponding nomenclature in the text: we refer to the domain Ω_f as the fluid domain and to the problem posed there as the fluid problem, and analogously we speak of the solid domain and the solid problem.

Continuous variational formulation
As the first step, let us introduce a family of Hilbert spaces which will later serve as trial and test spaces for our variational problems. Because we would like to incorporate the Dirichlet boundary conditions on Γ²_f and on Γ¹_s, Γ³_s into the solution spaces, we define, for Υ ⊂ ∂Ω, the corresponding subspaces. We take X_f := (X(H_f))², X_s := (X(H_s))² and X = X_f × X_s as space-time trial and test function spaces. Below we present notations for inner products and duality pairings, and, to shorten the notation, we introduce abbreviations. After these preliminaries, we are ready to construct the continuous variational formulation of the problem. We define operators describing the fluid and the solid problem. All Laplacian terms are integrated by parts and the dynamic coupling condition is added. The kinematic coupling condition is incorporated into the fluid problem, while the dynamic condition becomes part of the solid problem. The Dirichlet boundary conditions on the interface Γ are imposed weakly using Nitsche's method [16]; we arbitrarily set γ = 1000, while h denotes the mesh size. The compact version of the variational problem (Problem 1) then reads as stated, holding for all Φ_f ∈ X_f and Φ_s ∈ X_s.

Semi-discrete Petrov-Galerkin formulation
One of the main challenges emerging from the discretization of Problem 1 is the construction of a satisfactory time interval partitioning. Our main objectives include:

1. Handling the coupling conditions
For the time interval I = [0, T] we introduce a coarse time mesh 0 = t_0 < t_1 < ... < t_N = T which is shared by both subproblems. We will refer to this mesh as the macro time mesh.

2. Allowing for different (possibly non-uniform) time-step sizes in the two subproblems
For each of the subintervals I_n = (t_n−1, t_n] we create two distinct submeshes, one per subproblem; for the solid,
t_n−1 = t⁰_s,n < t¹_s,n < ... < t^Ln_s,n = t_n, k^l_s,n = t^l_s,n − t^l−1_s,n, I^l_s,n = (t^l−1_s,n, t^l_s,n],
and analogously for the fluid with M_n micro steps t^m_f,n. We will refer to these meshes as micro time meshes.
By P_r(I, H) we denote the space of polynomials of degree at most r on I with values in H. To shorten the notation, we set the corresponding product spaces. We do not assume that the inner points of the fluid and solid micro time meshes coincide, i.e. for every n = 1, ..., N, m = 1, ..., M_n − 1, l = 1, ..., L_n − 1 we may have t^m_f,n ≠ t^l_s,n. Because of this, a function defined on the fluid micro time mesh cannot be evaluated directly at the points of the solid micro time mesh, and vice versa. To resolve this, we introduce the nodal interpolation operators i^f_n : X_n → X^n_f × P_1(I_n, X^n_s) and i^s_n : X_n → P_1(I_n, X^n_f) × X^n_s. Since the operators B_f and B_s are linear, the resulting scheme is equivalent to the Crank-Nicolson scheme up to the numerical quadrature of F_f; see also [17, 18]. Taking trial functions piecewise linear in time, ⃗U_k ∈ X_k, and test functions piecewise constant in time,

we can construct operators on each of the macro time-steps.
The forms on the whole time interval I = [0, T] are then simply sums of these operators over the subintervals, plus the initial conditions. With that at hand, we can pose the semi-discrete variational problem (Problem 2), holding for all Φ_f,k ∈ Y_f,k and Φ_s,k ∈ Y_s,k.
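The action of the interpolation operators, which replace the other field on I_n by its linear-in-time interpolant built from the two macro-node values, can be sketched as follows (a minimal sketch for scalar nodal values; the helper name macro_linear is our own):

```python
import numpy as np

def macro_linear(u_start, u_end, t, t_start, t_end):
    """Evaluate the P1 (linear-in-time) interpolant on the macro interval
    I_n = (t_start, t_end], built only from the macro-node values
    u_start = u(t_{n-1}) and u_end = u(t_n), at arbitrary micro nodes t.
    This mimics the action of the operators i^f_n, i^s_n, which replace the
    other field by its linear interpolant on the whole of I_n."""
    theta = (t - t_start) / (t_end - t_start)
    return (1.0 - theta) * u_start + theta * u_end

# evaluate a fluid quantity at the solid micro nodes of I_n = (0, 0.1]
t_solid = np.linspace(0.0, 0.1, 3)
vals = macro_linear(1.0, 3.0, t_solid, 0.0, 0.1)
```

Since only the macro-node values enter, no information from the interior micro nodes of the other mesh is needed, which is exactly what makes non-matching micro meshes admissible.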

Decoupling methods
Even though Problem 2 is discretized in time, it is still coupled across the interface, which makes solving the subproblems independently impossible. To deal with this obstacle, we use an iterative approach on each of the subintervals I_n and introduce decoupling strategies. For a fixed time interval I_n, every iteration of a decoupling method consists of the following steps: 1. Using the solution of the solid subproblem from the previous iteration, ⃗U_s,k, we set the boundary conditions on the interface at the time t_n, solve the fluid problem and obtain the fluid solution. 2. Using this fluid solution, we set the interface conditions for the solid problem and solve it, obtaining an intermediate solid solution. 3. We apply a decoupling function to the intermediate solution to obtain the next iterate. The main challenge emerges from the transition between the intermediate solution and the next iterate. In the next subsections, we present two such techniques: the relaxation method in Section 3.1 and the shooting method in Section 3.2.
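The three steps above can be condensed into a generic driver (a sketch; solve_fluid, solve_solid and apply_update are hypothetical callbacks standing in for the two subproblem solves and the decoupling function, and the solid state is a plain list of nodal values):

```python
def decouple_interval(solve_fluid, solve_solid, apply_update,
                      u_s0, tol=1e-10, max_iter=50):
    """Generic decoupling iteration on one macro time interval I_n.
    Iterates fluid solve -> solid solve -> decoupling update until the
    l-infinity distance between consecutive solid iterates drops below
    tol, and returns the converged iterate and the iteration count."""
    u_s = list(u_s0)
    for i in range(max_iter):
        u_f = solve_fluid(u_s)                  # step 1: fluid subproblem
        u_s_tilde = solve_solid(u_f)            # step 2: solid subproblem
        u_s_new = apply_update(u_s, u_s_tilde)  # step 3: decoupling function
        if max(abs(a - b) for a, b in zip(u_s_new, u_s)) < tol:
            return u_s_new, i + 1
        u_s = list(u_s_new)
    return u_s, max_iter
```

As a toy check, a contractive "coupled solve" x ↦ x/2 + 1 drives the iteration to its fixed point 2.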
We clarify how the intermediate solution is computed by the definition of Problem 3.

Problem 3 For a given solid iterate ⃗U_s,k ∈ X^n_s,k, find the intermediate solid solution in X^n_s,k such that: The semi-discrete fluid operator (4) is coupled with the solid operator (5) only across the interface Γ. Additionally, the interpolation operator (3) constructs values over the whole time interval I_n based only on the values at the points t_n−1 and t_n.

Relaxation method
The first of the presented methods uses a simple relaxation operator and is an example of a fixed-point method. It consists of the iterated solution of each of the two subproblems, taking the interface values from the last iteration of the other problem. For reasons of stability, such an explicit partitioned iteration usually requires the introduction of a damping parameter. Here, we only consider fixed damping parameters.
Let ⃗U^(i)_s,k ∈ X^n_s,k be the solid solution of Problem 3. Then, for τ ∈ [0, 1], the relaxation function R : X^n_s,k → X^n_s,k is defined by damping the intermediate solid solution with the previous iterate. Assuming that we already know the value ⃗U_s,k(t_n−1), we pose the resulting iteration. The stopping criterion is based on checking how far the computed solution is from the fixed point: we evaluate the l∞ norm of the update and, once for some i_stop this norm is sufficiently small, we accept the corresponding iterate as the solution on I_n.
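A minimal sketch of such a damped fixed-point update, assuming the common convex-combination form of the relaxation function (the name relax and the illustrative value of τ are ours, not taken from the paper):

```python
def relax(u_old, u_tilde, tau=0.7):
    """Damped fixed-point (relaxation) update: a convex combination of the
    previous solid iterate u_old and the freshly computed intermediate
    solution u_tilde.  tau = 1 recovers the undamped partitioned
    iteration; smaller tau trades speed for stability."""
    return [(1.0 - tau) * a + tau * b for a, b in zip(u_old, u_tilde)]
```

In a driver loop, the update would be applied after each solid solve until the l∞ distance between consecutive iterates is sufficiently small.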

Shooting method
Here we present another iterative method, in which we define a root-finding problem on the interface. We use Newton's method with a matrix-free GMRES solver to approximate the action of the inverse of the Jacobian.
Let ⃗U^(i)_s,k ∈ X^n_s,k be the solid solution of Problem 3. Then the shooting function S : (X^n_s,k)² → (L²(Γ))² is defined as in (7). Our aim is to find the root of the function (7); to do so, we employ Newton's method. In each iteration of the Newton method, the greatest difficulty lies in computing and inverting the Jacobian of S. Instead of approximating all entries of the Jacobian matrix, we consider an approximation of the matrix-vector product only. Since the Jacobian matrix-vector product can be interpreted as a directional derivative, it can be approximated by a finite difference quotient. In principle, the vector ⃗d is not known beforehand, so this formula cannot be used to solve the system directly. However, it is possible to use this technique with iterative solvers that only require the computation of matrix-vector products. Because we did not want to assume much structure of the operator (8), we chose the matrix-free GMRES method. Such matrix-free Newton-Krylov methods are frequently used if the Jacobian is not available or too costly to evaluate [19]. Once ⃗d is computed, we update the Newton iterate. We stop iterating when the l∞ norm of S(⃗U^(i)_s,k) is sufficiently small and then accept the current iterate as the solution on I_n. We note that the method presented here is similar to the one presented in [20], where the authors also introduced a root-finding problem on the interface and solved it with a quasi-Newton method. The main difference lies in the approximation of the inverse of the Jacobian: instead of using a matrix-free linear solver, the Jacobian there is approximated by solving a least-squares problem.

Numerical comparison of the performance
In Figures 3 and 4 we compare the performance of both methods in terms of the number of micro time-steps, assuming micro time-steps of uniform size. We performed simulations in the case of no micro time-stepping (M_n = 1, L_n = 1), micro time-stepping in the fluid subdomain (M_n = 10, L_n = 1) and in the solid subdomain (M_n = 1, L_n = 10). Figure 3 shows results for the right-hand side according to Configuration 1; Figure 4 corresponds to Configuration 2. We investigated one macro time-step.

Goal oriented estimation
In Section 2 we formulated the semi-discrete problem enabling the use of different time-step sizes in the fluid and solid subdomains, and in Section 3 we presented methods designed to solve such problems efficiently. However, so far the choice of the step sizes was purely arbitrary. In this section, we present an easily localized error estimator which can be used as a criterion for the adaptive choice of the time-step sizes. For the construction of the error estimator, we use the dual weighted residual (DWR) method [15]. Given a differentiable functional J : X → R, our aim is to approximate J(⃗U) − J(⃗U_k), where ⃗U is the solution of Problem 1 and ⃗U_k is the solution of Problem 2. The goal functional J : X → R is split into two parts, J_f : X_f → R and J_s : X_s → R, which refer to the fluid and solid subdomains, respectively. The DWR method embeds the computation of the value of J into an optimal control framework: it is equivalent to solving a constrained optimization problem, which in turn corresponds to finding stationary points of a Lagrangian L : X × (X ⊕ Y_k) → R. We cannot take X × X as the domain of L because we operate in a nonconforming set-up, that is, Y_k ⊄ X. Because the form B describes a linear problem, finding stationary points of L is equivalent to solving the following problem: Problem 4 For a given ⃗U ∈ X being the solution of Problem 1, find ⃗Z ∈ X such that: The solution ⃗Z is called the adjoint solution. By J′_⃗U(Ξ) we denote the Gateaux derivative of J(⋅) at ⃗U in direction of the test function Ξ.

Continuous variational formulation
As the first step in decoupling Problem 4, we would like to split the form B into forms corresponding to the fluid and solid subproblems. However, we cannot fully reuse the forms (2a) and (2b) because of the interface terms: the forms have to be sorted with respect to the test functions. Thus, after defining abbreviations, we rewrite B in terms of forms sorted by test functions. We apply integration by parts in time, which reveals that the adjoint problem runs backward in time. This leads to the formulation of the continuous adjoint variational problem: Problem 5 For a given ⃗U ∈ X being the solution of Problem 1, find ⃗Z ∈ X such that: for all Ξ_f ∈ X_f and Ξ_s ∈ X_s.

Semi-discrete Petrov-Galerkin formulation
The semi-discrete formulation of the adjoint problem is similar to that of the primal problem. The main difference is that this time the trial functions are piecewise constant in time, ⃗Z_k ∈ Y_k, while the test functions are piecewise linear in time, Ξ_f ∈ X_f,k, Ξ_s ∈ X_s,k. After rearranging the terms according to the test functions on every interval I_n, we arrive at a scheme whose solid part contains, among others, the terms
(η_s,k(t_n−1), z_s,k(t_n−1) − z_s,k(t¹_s,n))_s + (ξ_s,k(t_n−1), y_s,k(t_n−1) − y_s,k(t¹_s,n))_s + (k¹_s,n / 2) ã_s(Ξ_s,k(t_n−1))(i^s_n ⃗Z_s,k(t¹_s,n)).
Note that the adjoint problem does not have a designated initial value at the final time T; instead, the starting value is implicitly defined by the variational formulation. The final schemes are constructed as sums over the macro time intervals I_n plus the terms at the final time T. With that at our disposal, we can formulate the semi-discrete adjoint variational problem: Problem 6 For a given ⃗U ∈ X being the solution of Problem 1, find ⃗Z_k ∈ Y_k such that: After formulating the problem in a semi-discrete manner, the decoupling methods from Section 3 can be applied.

A posteriori error estimate
We define the primal residual, split into parts corresponding to the fluid and solid subproblems. Similarly, we establish the adjoint residual resulting from the adjoint problem. Becker and Rannacher [15] introduced the a posteriori error representation based on these residuals. This identity can be used to derive an a posteriori error estimate. Two approximation steps are required: first, the third-order remainder is neglected; second, the weights, i.e. the approximation errors ⃗Z − Φ_k and ⃗U − Ξ_k, are replaced by interpolation errors ⃗Z − i_k⃗Z and ⃗U − i_k⃗U, which are in turn replaced by discrete reconstructions, since the exact solutions ⃗U, ⃗Z ∈ X are not available. See [21] and [22] for a discussion of different reconstruction schemes. Due to these approximation steps, the estimator is not exact and does not yield rigorous bounds. The estimator consists of a primal and an adjoint component, each of which is split again into a fluid and a solid counterpart. The primal estimators are derived from the primal residuals using ⃗U_k and ⃗Z_k, the solutions of Problems 2 and 6, respectively.
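Written out, the error representation takes the standard DWR form (a sketch in our notation; ρ and ρ* denote the primal and adjoint residuals, Φ_k and Ξ_k are arbitrary discrete functions, and R⁽³⁾_k is the third-order remainder mentioned above):

```latex
J(\vec{U}) - J(\vec{U}_k)
  = \tfrac{1}{2}\,\rho(\vec{U}_k)(\vec{Z} - \Phi_k)
  + \tfrac{1}{2}\,\rho^{*}(\vec{U}_k,\vec{Z}_k)(\vec{U} - \Xi_k)
  + \mathcal{R}^{(3)}_k .
```

Neglecting R⁽³⁾_k and replacing the weights by reconstructions yields the computable estimator.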

The adjoint reconstructions ⃗Z⁽¹⁾_f,k and ⃗Z⁽¹⁾_s,k approximating the exact solution are constructed from ⃗Z_k using linear extrapolation (see Figure 7, right). The adjoint estimators are based on the adjoint residuals. The primal reconstructions ⃗U⁽²⁾_f,k and ⃗U⁽²⁾_s,k are extracted from ⃗U_k using quadratic reconstruction. The reconstruction is performed on the micro time mesh level on local patches consisting of two neighboring micro time-steps (see Figure 7, left). In general, the patch structure does not have to coincide with the micro and macro time mesh structure: two micro time-steps belonging to the same local patch do not have to lie in the same macro time-step. Additionally, we demand that two micro time-steps from the same local patch have the same length.
We compute the effectivity of the error estimate as the ratio of the estimator to the true error, where J(⃗U_exact) is approximated by extrapolation in time.
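The extrapolated reference value and the effectivity can be computed as in this sketch (a Richardson-type extrapolation from three uniform refinements, as used for the tables below; the function names and the estimation of the convergence order p from the data are our own choices):

```python
import math

def extrapolate_reference(J_h, J_2h, J_4h):
    """Estimate the limit value J(U_exact) from functional values on the
    three finest uniform time meshes (step sizes h, 2h, 4h), assuming
    algebraic convergence J(h) ~ J* + C h^p with unknown order p."""
    p = math.log2(abs((J_4h - J_2h) / (J_2h - J_h)))  # estimated order
    return J_h + (J_h - J_2h) / (2.0**p - 1.0)        # geometric-series limit

def effectivity(sigma_k, J_ref, J_k):
    """Effectivity index: estimated error sigma_k divided by the
    (extrapolated) true error; values near 1 indicate a good estimator."""
    return sigma_k / (J_ref - J_k)
```

For a synthetic second-order sequence J(h) = 1 + h², the extrapolation recovers the exact limit 1.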

Adaptivity
The residuals (15) can easily be localized by restricting them to specific subintervals: θ^{n,m}_f,k := θ_f,k|_{I^m_f,n}, θ^{n,l}_s,k := θ_s,k|_{I^l_s,n}, ϑ^{n,m}_f,k := ϑ_f,k|_{I^m_f,n}, ϑ^{n,l}_s,k := ϑ_s,k|_{I^l_s,n}.
After defining the global numbers of subintervals M := Σ_{n=1}^N M_n and L := Σ_{n=1}^N L_n, we can compute an average for each of the components. This way we obtain the refinement criteria (18); for the solid subdomain, for instance,
Figure 8: An example of preserving the local patch structure during the marking procedure: if a time step is refined, the other time step belonging to the same patch will also be refined.
Figure 9: An example of the splitting mechanism for macro time-steps. On the left, the mesh before refinement: middle (in black) the macro nodes, top (in blue) the fluid nodes and bottom (in red) the solid nodes with subcycling. In the center sketch, the first macro interval is refined once within the fluid domain. Since one node is then shared between fluid and solid, we refine the macro mesh to resolve the subcycling. The final configuration is shown on the right.
θ^{n,l}_s,k ≥ σ̄_k or ϑ^{n,l}_s,k ≥ σ̄_k ⇒ refine I^l_s,n.
Taking into account the time interval partitioning structure, we arrive at the following algorithm: 1. Mark subintervals using the refinement criteria (18).
2. Adjust the local patch structure -in case only one subinterval from a specific patch is marked, mark the other one as well (see Figure 8).

3. Perform the time refinement.
4. Adjust the macro time-step structure - in case within one macro time-step there exist a fluid and a solid micro time node that coincide, split the macro time-step into two macro time-steps at this point (see Figure 9).
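Steps 1 and 2 of the algorithm can be sketched as follows (a sketch; we assume the averaged indicators serve as the marking threshold, as in (18), and that micro steps are paired consecutively into reconstruction patches):

```python
def mark_intervals(indicators, safety=1.0):
    """Averaged-threshold marking: an interval is marked for refinement
    when its local error indicator reaches the mean over all intervals,
    optionally scaled by a safety factor."""
    mean = sum(indicators) / len(indicators)
    return [abs(v) >= safety * mean for v in indicators]

def adjust_patches(marked):
    """Patch adjustment: micro steps are assumed paired as (2i, 2i+1);
    if one member of a pair is marked, mark the other as well so that
    the quadratic reconstruction patches stay intact."""
    out = list(marked)
    for i in range(0, len(out) - 1, 2):
        if out[i] or out[i + 1]:
            out[i] = out[i + 1] = True
    return out
```

Steps 3 and 4 then bisect the marked micro steps and split any macro step in which fluid and solid micro nodes come to coincide.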

Fluid subdomain functional
For the first example, we chose to test the derived error estimator on a goal functional concentrated in the fluid subproblem, where Ω̃_f = (2, 4) × (0, 1) is the right half of the fluid subdomain. For this example, we also took the right-hand side concentrated in the fluid subdomain, as presented in Configuration 1.
Since the functional is nonlinear, we use a 2-point Gaussian quadrature for the integration in time; with (16), the quadrature points follow accordingly. With that at hand, we can formulate the discretization of the functional, where the nodal interpolation is defined correspondingly. In Table 1 we show results of the a posteriori error estimator on a sequence of uniform time meshes. Here, we considered the case without any micro time-stepping, that is, the time-step sizes in the fluid and solid subdomains are uniformly equal, giving N time-steps in the fluid domain and N in the solid domain. Table 1 lists the partial residuals θ_f,k, θ_s,k, ϑ_f,k and ϑ_s,k, the overall estimate σ_k, the extrapolated errors J̃ − J(⃗U_k) and the effectivities eff_k. The values of the goal functional on the three finest meshes were used for extrapolation in time, giving the reference value J̃ = 6.029469 ⋅ 10^−5. Except for the coarsest mesh, the estimator is very accurate and the effectivities are close to 1. On finer meshes, the values of θ_f,k and ϑ_f,k are very close to each other, which is due to the linearity of the coupled problem [15]; a similar phenomenon occurs for θ_s,k and ϑ_s,k. The residuals are concentrated in the fluid subdomain, which suggests using smaller time-step sizes in this space domain. Table 2 collects results for another sequence of uniform time meshes. In this case, each macro time-step in the fluid domain is split into two micro time-steps of the same size, resulting in 2N time-steps in the fluid domain and N in the solid domain. The performance is still highly satisfactory.
The residuals remain mostly concentrated in the fluid subdomain.

N     θ_f,k        θ_s,k        ϑ_f,k        ϑ_s,k        σ_k          J̃ − J(⃗U_k)   eff_k
50    9.66⋅10^−9   4.99⋅10^−10  9.96⋅10^−9   5.01⋅10^−10  2.06⋅10^−8   2.17⋅10^−8    0.95
100   2.48⋅10^−9   1.37⋅10^−10  2.52⋅10^−9   1.39⋅10^−10  5.28⋅10^−9   5.45⋅10^−9    0.97
200   6.28⋅10^−10  2.99⋅10^−11  6.33⋅10^−10  3.01⋅10^−11  1.32⋅10^−9   1.43⋅10^−9    0.92
400   1.58⋅10^−10  9.44⋅10^−12  1.58⋅10^−10  9.56⋅10^−12  3.35⋅10^−10  3.58⋅10^−10   0.94

Table 2: Residuals and effectivities for the fluid subdomain functional in the case of uniform time-stepping with M_n = 2 and L_n = 1 for all n.

Additionally, comparing Tables 1 and 2, one can see that corresponding values of θ_f,k and ϑ_f,k coincide (the value for N = 800 in Table 1 and N = 400 in Table 2, etc.). Overall, introducing micro time-stepping improves the performance and reduces the extrapolated error J̃ − J(⃗U_k) more efficiently. In Table 3 we present findings in the case of adaptive time mesh refinement. We chose an initial configuration of uniform time-stepping without micro time-stepping for N = 50 and applied a sequence of adaptive refinements. On every level of refinement, the total number of time-steps is M + L. One can see that, since the error is concentrated in the fluid domain, only time-steps corresponding to this space domain were refined. Again, the effectivities are very good, and the extrapolated error J̃ − J(⃗U_k) is reduced even more efficiently.

Solid subdomain functional
For the sake of symmetry, for the second example we chose a functional which is concentrated on the solid subdomain and allows for a discretization according to (19). Similarly, Table 4 gathers results for a sequence of uniform meshes without any micro time-stepping (N + N micro time-steps).
The last three solutions are used for extrapolation in time, which gives J̃ = 3.458826 ⋅ 10^−4. For this example as well, the effectivity is very satisfactory. On the finest discretization, the effectivity slightly declines; this might come from the limited accuracy of the reference value. Once more, on finer meshes the fluid residuals θ_f,k, ϑ_f,k and the solid residuals θ_s,k, ϑ_s,k have similar values. This time, the residuals are concentrated in the solid subdomain and, in this case, the discrepancy is somewhat bigger. In Table 5 we display outcomes for a sequence of uniform meshes where each macro time-step in the solid subdomain is split into two micro time-steps, giving N + 2N time-steps. Introducing micro time-stepping does not have a negative impact on the effectivity and significantly saves computational effort. Corresponding values of θ_s,k and ϑ_s,k in Tables 4 and 5 are almost the same. The residuals remain mostly concentrated in the solid subdomain.
Following the fluid example, in Table 6 we show calculation results in the case of adaptive time mesh refinement. Here as well we took uniform time-stepping without micro time-stepping for N = 50 as the initial configuration; the total number of time-steps is M + L. Except for the last entry, only the time-steps corresponding to the solid domain were refined. On the finest mesh, the effectivity deteriorates; however, adaptive time-stepping is still the most effective in reducing the extrapolated error J̃ − J(⃗U_k). Finally, we show in Figure 10 a sequence of adaptive meshes that result from this adaptive refinement strategy. In the top row, we show the initial mesh with 50 macro steps and no further splitting in fluid and solid. For a better presentation, we only show a small subset of the temporal interval, [0.1, 0.4]. The middle plot shows the mesh after 2 steps of adaptive refinement, the bottom one after 4 steps. Each plot shows the macro mesh, the fluid mesh (above) and the solid mesh (below). As expected, this example leads to subcycling within the solid domain. For a finer approximation, the fluid problem also requires some local refinement. Whenever possible we avoid excessive subcycling by refining the macro mesh as described in Section 4.3.

Conclusion
In this paper, we have developed a multirate scheme and a temporal error estimator for a coupled problem inspired by fluid-structure interactions. The two subproblems, the heat equation and the wave equation, feature different temporal dynamics, such that balanced approximation properties and stability demands ask for different step sizes. We introduced a monolithic variational Galerkin formulation for the coupled problem and then used a partitioned framework for solving the algebraic systems. Having different time-step sizes for each of the subproblems couples multiple states in each time-step, which, solved at once, would require an enormous computational effort. To address this, we discussed two different decoupling methods: first, a simple relaxation scheme that alternates between the fluid and the solid problem, and second, a shooting-type method, in which we defined a root-finding problem on the interface and used a matrix-free Newton-Krylov method for quickly approximating its zero. Both methods were able to successfully decouple our specific example and showed good robustness with respect to different subcycling of the multirate scheme in the fluid or solid domain. However, the shooting method converged faster and required fewer evaluations of the variational formulation.
As the next step, we introduced a goal-oriented error estimator based on the dual weighted residual method to estimate errors with regard to functional evaluations. The monolithic space-time Galerkin formulation allowed us to split the residual errors into contributions from the fluid and the solid problem. Several numerical results for two different goal functionals show very good effectivity of the error estimator. Finally, we established the localization of the error estimator, which let us derive an adaptive refinement scheme choosing optimal, distinct time meshes for each problem.
In future work, it remains to extend the methodology to nonlinear problems, in particular, to fully coupled fluid-structure interactions.