Preconditioned smoothers for the full approximation scheme for the RANS equations

We consider multigrid methods for finite volume discretizations of the Reynolds Averaged Navier-Stokes (RANS) equations for both steady and unsteady flows. We analyze the effect of different smoothers based on pseudo time iterations, such as explicit and additive Runge-Kutta (ARK) methods. Furthermore, we derive the new class of additive W (AW) methods from Rosenbrock smoothers. This gives rise to two classes of preconditioned smoothers, preconditioned ARK and AW, which are implemented in exactly the same way but have different parameters and properties. The new derivation allows us to choose some of these based on results for time integration methods. As preconditioners, we consider SGS preconditioners based on flux vector splitting discretizations with a cutoff function for small eigenvalues. We compare these methods using a discrete Fourier analysis. Numerical results on pitching and plunging airfoils identify AW3 as the best smoother regarding overall efficiency. Specifically, for the NACA 64A010 airfoil, steady-state convergence rates as low as 0.85 were achieved, corresponding to a reduction of six orders of magnitude in approximately 25 pseudo-time iterations. Unsteady convergence rates as low as 0.77 were achieved, corresponding to a reduction of 11 orders of magnitude in approximately 70 pseudo-time iterations.


Introduction
We are interested in numerical methods for compressible wall bounded turbulent flows as they appear in many problems in industry. Therefore, both steady and unsteady flows will be considered. Numerically, these are characterized by strong nonlinearities and a large number of unknowns, due to the requirement of resolving the boundary layer. High fidelity approaches such as Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) are slowly getting within reach through improvements in high order discretization methods. Nevertheless, these approaches are, and will remain in the foreseeable future, far too costly to be standard tools in industry.
However, low fidelity turbulence modelling based on the Reynolds Averaged Navier-Stokes (RANS) equations, discretized using second order finite volume methods, is a good choice for many industrial problems where turbulence matters. For steady flows, this comes down to solving one nonlinear system. In the unsteady case, the time discretization has to be at least partially implicit, due to the extremely fine grids in the boundary layer, requiring the solution of one or more nonlinear systems per time step. The choice of numerical methods for these comes down to Jacobian-free Newton-Krylov (JFNK) methods with appropriate preconditioners or nonlinear multigrid methods (Full Approximation Scheme, FAS) with appropriate smoothers; see [3] for an overview.
In this article, we focus on improving the convergence rate of agglomeration multigrid methods, which are the standard in the aeronautical industry. For the type of problems considered here, two aspects have been identified that affect solver efficiency. Firstly, the flow is convection dominated. Secondly, the grid has high aspect ratio cells. It is important to note that the viscous terms in the RANS equations do not pose problems in themselves. Rather, they cause the boundary layer to appear, thus making high aspect ratio grids necessary. These aspects are shared with the Euler equations, meaning that solvers developed for one set of equations may also be effective for the other.
With regard to convection dominated flows, smoothers such as Jacobi or Gauß-Seidel do not perform well, in particular when the flow is aligned with the grid [23]. One idea has been to adjust the multigrid restriction and prolongation by using directional or semi coarsening that respects the flow direction [24]. This approach has the drawback of being significantly more complicated to implement than standard agglomeration. Thus, the alternative is to adjust the smoother. As it turns out, symmetric Gauß-Seidel (SGS) is an excellent smoother for the Euler equations even for grid aligned flow [7], simply because it takes into account the propagation of information in the flow direction and backwards.
However, when discretizing the Euler equations on high aspect ratio grids suitable for wall bounded viscous flows, this smoother does not perform well.
During the last ten years, the idea of preconditioned pseudo time iterations has garnered interest [25,17,20,18,19,21,14,5,6]. This goes back to the additive Runge-Kutta (ARK) smoothers originally introduced in [8] and independently in a multigrid setting in [11]. These exhibit slow convergence on their own, but combined with a preconditioner, the resulting methods work well for high Reynolds number, high aspect ratio RANS simulations.
The preconditioned RK method suggested in [25] was properly derived in [18]. There it is shown that this smoother arises from using a Rosenbrock method and then approximating the system matrix in the arising linear systems. This is in fact called a W method in the literature on ordinary differential equations. Consequently, we now introduce the class of preconditioned additive W methods and derive preconditioned additive explicit Runge-Kutta methods. This allows us to identify the roles the preconditioners have to play. As for the preconditioners themselves, it turns out that again SGS is a very good choice, as reported in [25,18].
The specific convergence rate attainable depends on the discretization, in particular the flux function. Here, we consider the Jameson-Schmidt-Turkel (JST) scheme in its latest version [15]. We perform a discrete Fourier analysis of the smoother for the linearized Euler equations on grids with variable aspect ratios. This is justified, since the core issues of convection and high aspect ratio grids are present in this problem.
A convenient truth is that if we have a fast steady-state solver then it can be used to build a fast unsteady solver via dual timestepping. However, there are subtle differences that affect convergence and stability. In particular, the eigenvalues of the amplification matrix are scaled and shifted in the unsteady case relative to the steady case. For a fuller discussion of these issues we refer to our earlier work [5,6] and to [2]. We compare the analytical behavior and numerical performance of iterative smoothers for steady and unsteady problems.
The article is structured as follows. We first present the governing equations and the discretization, then we describe multigrid methods and at length the smoothers considered. Then we present a Fourier analysis based on the Euler equations and finally numerical results for airfoil test cases.

Discretization
We consider the two dimensional compressible (U)RANS equations, where the vector of conservative variables is u = (ρ, ρv_1, ρv_2, ρE)^T and the convective and viscous fluxes in direction j are given by

f^c_j(u) = (ρv_j, ρv_1 v_j + δ_{1j} p, ρv_2 v_j + δ_{2j} p, ρH v_j)^T,
f^v_j(u) = (0, τ_{1j}, τ_{2j}, τ_{jk} v_k − q_j)^T,

where we used the Einstein summation convention.
Here, ρ is the density, v_i the velocity components and E the total energy per unit mass. The enthalpy is given by H = E + p/ρ, with p = (γ − 1)ρ(E − |v|²/2) being the pressure and γ = 1.4 the adiabatic index for an ideal gas. Furthermore, τ_{ij} is the sum of the viscous and Reynolds stress tensors, q_j the sum of the diffusive and turbulent heat fluxes, µ the dynamic viscosity, µ_t the turbulent viscosity and Pr, Pr_t the dynamic and turbulent Prandtl numbers.
As a turbulence model, we use the 0-equation Baldwin-Lomax model [1] for two reasons. Firstly, it performs well for the flows around airfoils that serve as our primary motivation. Secondly, an algebraic turbulence model does not lead to additional questions regarding implementation, as 1- or 2-equation models do. We believe that these difficulties have to be looked at systematically, but separately from this investigation.
The equations are discretized using a finite volume method on a structured mesh with the JST scheme as flux function. There are many variants of this method, see e.g. [15]. Here, we use the following one, for simplicity written as for a one dimensional problem. The flux f_R(ū) is the difference of the convective and the viscous fluxes, u ∈ R^m is the vector of all discrete unknowns and ū = (ρ, ρv, ρE) is the vector of conservative variables. The artificial viscosity terms are built from second and fourth differences, with ∆_j being the forward difference operator and the vector w being ū with the energy density in the last component replaced by the enthalpy density. The scalar coefficient functions ε^(2)_{j+1/2} and ε^(4)_{j+1/2} are given by

ε^(2)_{j+1/2} = s_{j+1/2} r_{j+1/2} and ε^(4)_{j+1/2} = max(0, r_{j+1/2}/32 − 2ε^(2)_{j+1/2}).

Here, the entropy sensor is s_{j+1/2} = min(0.25, max(s_j, s_{j+1})), with the cell value s_j based on second differences of the entropy. For the Euler equations, it is suggested to instead use a corresponding pressure sensor.
Furthermore, r_{j+1/2} is the scalar diffusion coefficient, which approximates the spectral radius and is chosen instead of a matrix valued diffusion as in other versions of this scheme. The specific choice of r_j is important with respect to stability and the convergence speed of the multigrid method. Here, we use the locally largest eigenvalue r_j = |v_{n,j}| + a_j as a basis, where a is the speed of sound. In the multidimensional case, this is further modified following [22], where r_i corresponds to the x direction and r_j to the y direction. Additionally, to obtain the velocity and temperature gradients needed for the viscous fluxes, we exploit that we have a cell centered method on a structured grid and use dual grids around vertices to avoid checkerboard effects [13, p. 40].
For boundary conditions, we use the no slip condition at fixed walls and far field conditions at outer boundaries. These are implemented using Riemann invariants [13, p. 38].
In time, we use BDF-2 with a fixed time step ∆t, resulting at time t_{n+1} in an equation system of the form

F(u) := Ω (3u − 4u^n + u^{n−1}) / (2∆t) + f(u) = 0.    (4)

Here, f(u) describes the spatial discretization, whereas Ω is a diagonal matrix with the volumes of the mesh cells as entries. For a steady state problem, we just have

F(u) := f(u) = 0.    (5)
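As an illustration, the unsteady residual from the BDF-2 dual time stepping above can be sketched as follows; this is a minimal sketch with our own function and variable names, not the actual solver code:

```python
import numpy as np

def bdf2_residual(u, u_prev, u_prev2, f, vols, dt):
    """Unsteady residual F(u) for one BDF-2 step (illustrative sketch).

    Solving F(u) = 0 advances the solution from (u_prev2, u_prev) to u:
        F(u) = Omega (3u - 4 u_prev + u_prev2) / (2 dt) + f(u),
    where Omega is the diagonal matrix of cell volumes (stored as the
    vector `vols`) and f is the spatial discretization.
    """
    return vols * (1.5 * u - 2.0 * u_prev + 0.5 * u_prev2) / dt + f(u)
```

For a steady problem, the same solver is simply applied to F(u) = f(u) directly.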

The full approximation scheme
As mentioned in the introduction, we use an agglomeration FAS to solve equations (4) and (5). To employ a multigrid method, we need a hierarchical sequence of grids, with the coarsest grid denoted by level l = 0. The coarse grids are obtained by agglomerating 4 neighboring cells into one. On the coarse grids, the problem is discretized using a first order version of the JST scheme that uses neither fourth order differences nor an entropy sensor. The iteration is performed as a W-cycle, where on the coarsest grid one smoothing step is performed, giving the pseudo code Function FAS-W-cycle(u_l, s_l, l). The restriction R_{l−1,l} is an agglomeration that weighs components by the volume of their cells and divides by the total volume. As for the prolongation P_{l,l−1}, it uses a bilinear weighting [12].
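The W-cycle just described can be sketched recursively as below. All names here are our own; the residual functions, the smoother and the transfer operators are passed in as parameters, and two recursive calls per sublevel produce the W shape:

```python
def fas_w_cycle(u, s, l, F, smooth, restrict, prolong):
    """Sketch of a FAS W-cycle (our own notation).

    F[l]          : residual function on level l
    smooth(u,s,l) : one smoothing step for F[l](u) = s
    restrict, prolong : grid transfer operators
    """
    u = smooth(u, s, l)                        # pre-smoothing
    if l > 0:
        r = s - F[l](u)                        # fine-grid residual
        u_c0 = restrict(u, l)                  # restricted solution
        s_c = F[l - 1](u_c0) + restrict(r, l)  # FAS coarse right-hand side
        u_c = u_c0
        for _ in range(2):                     # two recursive calls -> W-cycle
            u_c = fas_w_cycle(u_c, s_c, l - 1, F, smooth, restrict, prolong)
        u = u + prolong(u_c - u_c0, l - 1)     # coarse-grid correction
    return u
```

On the coarsest level (l = 0), only the single smoothing step is performed, as in the scheme above.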
On the finest level, the smoother is applied to equation (4) resp. (5). On sublevels, it is instead used to solve F_l(u_l) = s_l with s_l = F_l(R_{l,l+1} u_{l+1}) + R_{l,l+1} r_{l+1}.

Preconditioned smoothers
All smoothers we use have a pseudo time iteration as their basis. These are iterative methods for the nonlinear equation F(u) = 0 that are obtained by applying a time integration method to the initial value problem

du/dt* = −F(u), u(0) = u⁰.

For convenience, we have dropped the subscript l that denotes the multigrid level.

Preconditioned additive Runge-Kutta methods
We start by splitting F(u) into a convective and a diffusive part

F(u) = f_c(u) + f_v(u).    (7)

Hereby, f_c contains the physical convective fluxes, as well as the discretized time derivative and the multigrid source terms, whereas f_v contains both the artificial dissipation and the discretized second order terms of the Navier-Stokes equations. An additive explicit Runge-Kutta (AERK) method is then implemented in a low storage form in which the diffusive part is blended between stages using coefficients β_j; the form of the second to last line implies that β_1 = 1. Here, ∆t* is a local pseudo time step, meaning that it depends on the specific cell and the multigrid level. It is obtained by choosing c*, a CFL number in pseudo time, and then computing ∆t* based on the local mesh width ∆x_l^k. This implies larger time steps in coarser cells, in particular on coarser grids. As for the coefficients, 3-, 4- and 5-stage schemes have been designed to have good smoothing properties in a multigrid method for convection dominated model equations. The first three schemes were designed by Jameson using linear advection with a fourth order diffusion term. We denote these by ARKsJ, with s the number of stages. See [11] for ARK4J and ARK5J and [14] for ARK3J. The 5-stage schemes ARK51 and ARK52 are from [27]. The scheme ARK52 is employed in [25]. Coefficients for the 3- and 5-stage schemes can be found in table 1. All of these schemes are first order, except for the last one, which has order two and is therefore denoted ARK52. In the original publication, ARK51 and ARK52 are not additive. When using them within an additive method, we use the β coefficients from ARK5J. For current research into improving these coefficients we refer to [4,2].
Setting β j = 1 for all j gives an unsplit low storage explicit Runge-Kutta method that does not treat convection and diffusion differently. We refer to these schemes as ERK methods, e.g. ERK3J or ERK51.
To precondition this scheme, a preconditioner P⁻¹ ∈ R^{m×m} is applied to the equation system (4) or (6) by multiplying with it, resulting in the equation P⁻¹F(u) = 0. In a pseudo-time iteration for the new equation, all function evaluations have to be adjusted accordingly. In the above algorithm, this is realized by replacing the stage residuals with their preconditioned counterparts, i.e. multiplying them with P⁻¹. A good preconditioner should approximate the Jacobian ∂F/∂u of F well, while simultaneously being easy to apply.
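One sweep of such a preconditioned low-storage additive RK smoother can be sketched as follows. The blending of the dissipative part between stages is one common low-storage form consistent with β_1 = 1; the coefficients and the preconditioner application are placeholders, not the exact scheme of the paper:

```python
import numpy as np

def aerk_sweep(u, f_c, f_v, dt, alphas, betas, precond=lambda r: r):
    """One sweep of a low-storage additive explicit RK smoother (sketch).

    f_c, f_v : convective and dissipative residual parts
    precond  : application of P^{-1}; the identity gives plain AERK
    The dissipative part is blended between stages with the betas,
    consistent with beta_1 = 1.
    """
    u0 = u.copy()
    stage = u0
    fv_blend = np.zeros_like(u0)
    for a, b in zip(alphas, betas):
        fv_blend = b * f_v(stage) + (1.0 - b) * fv_blend
        stage = u0 - a * dt * precond(f_c(stage) + fv_blend)
    return stage
```

Setting all betas to 1 recovers the unsplit ERK variant mentioned above.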

Additive W-methods
An alternative way of deriving a preconditioned explicit method has been presented by Langer in [17]. He calls these methods preconditioned implicit smoothers and derives them from specific singly diagonally implicit RK (SDIRK) methods. SDIRK methods require the solution of a nonlinear system at each stage, which he solves approximately with one Newton step, simplified by always using the Jacobian from the first stage. This is known as a special choice of Rosenbrock method in the literature on differential equations [10, p. 102]. To arrive at a preconditioned method, Langer then replaces the system matrix with an approximation, for example originating from a preconditioner as known from linear algebra. In fact, this type of method is called a W-method in the IVP community [10, p. 114].
We now extend the framework from [17] to additive Runge-Kutta methods. For clarity we repeat the derivation, but start from the split equation (7). To this equation, we apply an additive SDIRK method with coefficients given in tables 2 and 3. Hereby, the vectors k are called stage derivatives and we have k = k_c + k_v according to the splitting (7). Thus, we have to solve s nonlinear equation systems for the stage derivatives k_i. To obtain an additive Rosenbrock method, these are solved approximately using one Newton step each with initial guess zero, so that a linear system with system matrix I + η∆t*J_i has to be solved at each stage. This type of scheme is employed by Swanson et al. [25], who provide a discrete Fourier analysis of the factor η.
As a final step, we approximate the system matrices I + η∆t*J_i by a matrix W. This gives us a new class of schemes, which we call additive W (AW) methods. In both additive Rosenbrock and additive W methods, equation (16) remains unchanged.
Finally, after some algebraic manipulations, this method can be rewritten in the same form as the low storage preconditioned ARK methods presented earlier: As for the explicit methods, one recovers an unsplit scheme for β i = 1 for all i and we refer to these methods as SDIRK, Rosenbrock and W methods.

Comparison
To get a better understanding of the different methods, it is illustrative to consider the linear case. Then, these methods are iterative schemes for solving a linear equation system (A + B)x = b and can be written as x^{k+1} = Mx^k + Nb. The matrix M is the iteration matrix, and for pseudo time iterations it is given by the stability function S of the time integration method. This is a polynomial P_s of degree s in ∆t*(A + B) for an s stage ERK method, and a bivariate polynomial P_s of degree s in ∆t*A and ∆t*B for an s stage AERK method. When preconditioning is added, this results in P_s(∆t*P⁻¹(A + B)) and P_s(∆t*P⁻¹A, ∆t*P⁻¹B), respectively.
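The claim that a pseudo-time RK sweep is a polynomial iterative method can be checked numerically. The following sketch (our own notation, unsplit case) builds the iteration matrix M from the same low-storage stage recursion used by the sweep, so that one sweep equals x ↦ Mx + (I − M)A⁻¹b:

```python
import numpy as np

def rk_sweep(x, A, b, dt, alphas):
    """One pseudo-time sweep of a low-storage s-stage RK method applied
    to x' = -(A x - b); the fixed point is the solution of A x = b."""
    x0 = x.copy()
    stage = x0
    for a in alphas:
        stage = x0 - a * dt * (A @ stage - b)
    return stage

def iteration_matrix(A, dt, alphas):
    """Iteration matrix M = P_s(dt A) of the sweep above, built from the
    stage recursion M_j = I - alpha_j dt A M_{j-1}, M_0 = I."""
    M = np.eye(A.shape[0])
    for a in alphas:
        M = np.eye(A.shape[0]) - a * dt * A @ M
    return M
```

The alphas here are arbitrary illustration values; preconditioning simply replaces A by P⁻¹A in both functions.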
For the implicit schemes, we obtain a rational function of the form Q s (I+η∆t * (A+B)) −1 P s (∆t * (A+B)) for an s stage SDIRK or Rosenbrock method. Here, Q s is a second polynomial of degree s. Finally, for an s-stage additive W method, we obtain a function of the form Q s (W) −1 P s (∆t * A, ∆t * B).
Due to the specific construction, the inverse can simply be moved from the left into the argument, which gives P_s(∆t*W⁻¹A, ∆t*W⁻¹B). Note that this is the same form as for the preconditioned AERK method, except for the preconditioner.
The additive W method and the preconditioned ARK method have three main differences. First, there is the role of P in the AERK method versus the matrix W: in the W method, W ≈ I + η∆t*J_i, whereas in the AERK scheme, P ≈ J_i. Second, the time step in the one case is that of an explicit ARK method, whereas in the other it is that of an implicit method. The latter, in its SDIRK or Rosenbrock form, is A-stable; however, approximating the Jacobian can cause the stability region to become finite. Finally, the W method has an additional parameter η that needs to be chosen. On the other hand, the large stability region makes the choice of ∆t* easy for the additive W method (very large), whereas it has to be a small value for the preconditioned ARK scheme.

SGS Preconditioner
The basis of our method is the preconditioner suggested by Swanson et al. in [25] and improved by Jameson in [16]. In effect, this is a choice of a W matrix in the framework just presented. We now repeat the derivation of their preconditioner in our notation to obtain an improved version.
The first step is to approximate the Jacobian by using a different, first order linearized discretization. It is based on a splitting A = A⁺ + A⁻ of the flux Jacobian, evaluated in the average of the values on both sides of the interface, thereby deviating from [25]. The split Jacobians correspond to positive and negative eigenvalues. Alternatively, these can be written in terms of the matrix of right eigenvectors R as A^± = RΛ^±R⁻¹, where Λ^± are diagonal matrices containing the positive and negative eigenvalues, respectively. As noted in [16], it is crucial to apply a cutoff function to the eigenvalues beforehand, to bound them away from zero. We use a parabolic function which kicks in when the modulus of an eigenvalue λ is smaller than or equal to a fraction ad of the speed of sound a, with free parameter d ∈ [0, 1]. With this, an upwind discretization is given in cell i by a sum over the faces to the neighboring cells. Here, e_ij is the edge between cells i and j, N(i) is the set of cells neighboring i and n_ij the unit normal vector from i to j.
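A parabolic cutoff of this kind can be sketched as below. This is our reading of a Harten-type fix with threshold ad; the exact formula used in [16] may differ in detail:

```python
import numpy as np

def limit_eigenvalue(lam, a, d):
    """Parabolic cutoff bounding eigenvalue moduli away from zero
    (illustrative sketch, not necessarily the exact formula of [16]).

    For |lam| <= a*d the modulus is replaced by a parabola that is
    continuous at |lam| = a*d and has minimum value a*d/2 at lam = 0."""
    delta = a * d
    lam = np.asarray(lam, dtype=float)
    mag = np.abs(lam)
    return np.where(mag <= delta, (lam**2 + delta**2) / (2.0 * delta), mag)
```

Larger d bounds the eigenvalues further from zero, which, as discussed later, trades convergence speed for stability.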
For the unsteady equation (4), the contribution of the discretized time derivative is included as well. The corresponding approximation of the Jacobian is then used to construct a preconditioner. Specifically, we consider the block SGS preconditioner

P = (D + L)D⁻¹(D + U),

where L, D and U are the block lower triangular, block diagonal and block upper triangular parts with 4 × 4 blocks. This preconditioner would look different if several SGS steps were performed; however, we did not find this to be beneficial. We now have two cases. In the AERK framework, L + D + U = J, both in the steady and in the unsteady case.
In the additive W framework, L + D + U = I + η∆t*J, again in both the steady and the unsteady case. Applying this preconditioner requires solving small 4 × 4 systems coming from the diagonal. We use Gaussian elimination for this. A fast implementation is obtained by first transforming to a certain set of symmetrizing variables, see [25].
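Applying P⁻¹ then amounts to one forward and one backward sweep. The following dense sketch (our own, ignoring the 4 × 4 block structure and the symmetrizing variables) shows the principle:

```python
import numpy as np

def sgs_apply(L, D, U, r):
    """Return x = P^{-1} r for P = (D + L) D^{-1} (D + U) via one forward
    and one backward sweep (dense sketch; the real code exploits the
    triangular structure and solves 4x4 diagonal blocks directly)."""
    w = np.linalg.solve(D + L, r)         # forward sweep: (D + L) w = r
    return np.linalg.solve(D + U, D @ w)  # backward sweep: (D + U) x = D w
```

In a production code, the two solves are simple triangular sweeps whose only per-cell work is the small diagonal-block solve.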

Discrete Fourier Analysis
We now perform a discrete Fourier analysis of the preconditioned ARK method for the two dimensional Euler equations using the JST scheme. For a description of this technique, also called local Fourier analysis (LFA) in the multigrid community, we refer to [26,9]. The rationale is that the core convergence problems of multigrid methods for viscous flow problems on high aspect ratio grids are the convective terms and the high aspect ratio grids themselves; the viscous terms are of comparatively minor importance. Here, we do not take into account the coarse grid correction. Thus, our aim is to obtain amplification and smoothing factors for the smoother. The latter is known to be representative of 2-grid convergence rates and is given by

µ = max |λ_HF|,    (31)

where the λ_HF denote the high frequency eigenvalues of the iteration matrix. Since eigenfunctions of first order hyperbolic differential operators involve e^{iφx}, the high frequencies are those with phase angles in [−π, −π/2] and [π/2, π]. We now consider a linearized version of the underlying equation with periodic boundary conditions on the domain Ω = [0, 1]²:

u_t + Au_x + Bu_y = 0,    (32)

with A = ∂f_1/∂u and B = ∂f_2/∂u being the Jacobians of the Euler fluxes in a fixed point û, to be set later.

JST scheme
We discretize (32) on a cartesian mesh with mesh width ∆x in x-direction and ∆y = AR·∆x in y-direction (AR = aspect ratio), resulting in an n_x × n_y mesh. A cell centered finite volume method with the JST flux is employed. We denote the shift operators in x and y direction by E_x and E_y, and cells are indexed in the canonical doubly lexicographical way for a cartesian mesh. In cell ij, the discretization can then be written in terms of operators H_c and H_v acting on u_ij, with an additional contribution from the time discretization in the unsteady case. For H_v, the starting point is that the pressure in conservative variables is

p = (γ − 1)(ρE − ((ρv_1)² + (ρv_2)²)/(2ρ)).

In the fraction, all potential shift operators cancel out. Thus, the second order differences in both directions can be expressed through shift operators, and a corresponding identity holds for the fourth order difference. Furthermore, applying the second or fourth order difference to ρH_j = ρE_j + p_j results in a corresponding combination of shift operators. For the coefficient functions ε^(2) and ε^(4) (see (1) and (2)), we first look at the shock sensor s_{j+1/2}, where we use the version for the Euler equations based on the pressure. For simplicity, we now assume that max(s_j, s_{j+1}) = s_j. For the spectral radius we note that in the speed of sound a_j = √(γp_j/ρ_j), possible shift operators cancel out as well, implying that it is constant over the mesh. Regarding the maxima, we have r_j = r_{j+1} =: r, and correspondingly for the y direction with r_i. Thus, ε^(2) = rs_{j+1/2} and ε^(4) = max(0, r/32 − 2ε^(2)).

Preconditioner
With regard to the SGS preconditioner P = (D + L)D⁻¹(D + U), the different discretization based on the flux splitting (20) with cutoff function (19) gives the corresponding operators (see (23)-(26)). We now get two different operators for the diagonal part for the steady and for the unsteady case. With these, the preconditioner (22) is formed. For the W methods, these matrices need to be adjusted slightly, compare (27)-(30).
We now make one simplification in the analysis: we assume the matrices to be evaluated with the value of the respective cell, and not the average as in the actual method.
As an example, the application of the 3-stage ARK scheme results in the following operator, where we write H̃_c := P⁻¹H_c, H̃_v := P⁻¹H_v and α̃_i = ∆t*α_i. For other smoothers, we have to use the other appropriate stability functions, as discussed in section 4.3.

Amplification and Smoothing factors
We are now interested in the amplification factor of the corresponding method for different values of ∆x and ∆y. Working with G directly would require assembling a large matrix in R^{4n_x n_y × 4n_x n_y}. Instead, we perform a discrete Fourier transform. In Fourier space, the transformed operator block diagonalizes, allowing us to work with the much smaller matrix Ĝ ∈ C^{4×4}. Thus, we replace u_ij by its discrete Fourier series and analyze û^{k+1}_{k_x,k_y} = Ĝ_{k_x,k_y} û^k_{k_x,k_y}. Applying a shift operator to one of the exponentials, we obtain

E_x e^{2πi(k_x x_i + k_y y_j)} = e^{2πi(k_x(x_i + 1/n_x) + k_y y_j)} = e^{2πik_x/n_x} e^{2πi(k_x x_i + k_y y_j)},

and similarly for E_y. Defining the phase angles Θ_x = 2πk_x/n_x and Θ_y = 2πk_y/n_y, the Fourier transformed shift operators are Ê_x = e^{iΘ_x}, Ê_y = e^{iΘ_y}, and we can replace the dependence on the wave numbers with a dependence on the phase angles.
To compute the spectral radius of G, we now just need to look at the maximum of the spectral radius of Ĝ_{Θ_x,Θ_y} = Ĝ_{k_x,k_y} over all phase angles Θ_x and Θ_y between −π and π. Furthermore, this allows us to compute the smoothing factor (31) as well, by instead taking the maximum over all phase angles with modulus between π/2 and π.
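The procedure can be illustrated on a scalar model. The following sketch (our own, using first order upwind advection with a forward Euler pseudo-time step rather than the JST/ARK configuration analyzed above) computes a smoothing factor by maximizing the symbol over the high-frequency phase angles:

```python
import numpy as np

def smoothing_factor(symbol, n=64):
    """Maximum modulus of the eigenvalues of the Fourier symbol over the
    high-frequency phase angles |theta| in [pi/2, pi], cf. (31)."""
    thetas = np.linspace(-np.pi, np.pi, n, endpoint=False)
    hf = thetas[np.abs(thetas) >= np.pi / 2 - 1e-12]
    return max(np.max(np.abs(np.linalg.eigvals(np.atleast_2d(symbol(t)))))
               for t in hf)

def upwind_symbol(c):
    """Symbol of one forward-Euler pseudo-time step for first order
    upwind advection u_t + u_x = 0 with pseudo-CFL number c:
    g(theta) = 1 - c (1 - e^{-i theta})."""
    return lambda theta: 1.0 - c * (1.0 - np.exp(-1j * theta))
```

For the full problem, symbol(theta_x, theta_y) would return the 4 × 4 matrix Ĝ assembled from the transformed operators, and the maximum is taken over both phase angles.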

Results
We evaluate the matrices in a fixed point û, determined by the flow conditions given below. We use an 8 × (8·AR) grid with different aspect ratios (AR), namely AR=1, AR=100 and AR=10000. To determine the physical time step, a CFL number c of 200 is chosen. All results were obtained using a python script, which can be accessed at http://www.maths.lu.se/philipp-birken/rksgs fourier.zip.

The explicit schemes
Results for the explicit schemes for different test cases are shown in table 4. As can be seen, these methods have very poor convergence rates, but are good smoothers on equidistant meshes. For non-equidistant meshes, this is not the case, which demonstrates the poor performance of these methods for viscous flow problems.

Preconditioned ARK
We now consider preconditioned ARK3J with SGS and exact preconditioning. The Mach number is set to 0.8 and the angle of attack to zero degrees, which is the most difficult of the test cases considered. Even so, it is possible to achieve convergence at all aspect ratios with a large physical CFL number c=200. With regard to stability, we show the maximal possible c* in table 5. We can see that this is dramatically improved compared to the unpreconditioned method, but it remains finite, as predicted by the theory. We furthermore notice that the choice of d in the cutoff function (19) is important. In particular, the smaller we choose d, meaning the smaller we allow eigenvalues to be, the less stable the method is. The maximal c* is approximately proportional to the aspect ratio and to d. The eigenvalues and contours of the smoothing factor for d=0.5 are illustrated in Figure 1 for aspect ratios 1 and 100, respectively. Clustering of the eigenvalues along the real axis is observed, indicating good convergence.
For each value of d considered, c* was optimised (c* opt) to minimise the smoothing factor (SM fct. opt). The results are shown in table 6. Optimal smoothing factors improve as d is increased. Preconditioning with the exact inverse affords better smoothing factors than SGS preconditioning.

Table 6: Amplification and smoothing factors of ARK3J, c=200, M = 0.8, α = 0°.

With SGS preconditioning, optimal smoothing factors at AR=1 are on the whole lower than those at large AR. Conversely, smoothing factors at AR=1 are equal to or higher than those at large AR when exact preconditioning is used. In general, the optimal c* is close to the maximal c*.

Additive W methods
Results for AW3 with SGS and exact preconditioning are shown in Tables 7 and 8. Again, the Mach number is set to 0.8, the physical CFL number c=200 and the angle of attack to zero degrees. We set η = 0.8. An A means that no bound on c* was observed. As we can see, as long as d is chosen sufficiently large, the methods are practically A-stable, as suggested by the theory. Surprisingly, for small d, stability is worse than for the preconditioned ARK methods. This is also illustrated in Figure 2 for Mach 0.5 and aspect ratios 1 and 100, respectively. As with ARK3J, the eigenvalues are clustered along the real axis.
A slightly more complex picture emerges when the optimal smoothing factor is considered. At AR=1, the AW3 scheme attains very low optimal smoothing factors of around 0.3, slightly higher than ARK3J. Using exact preconditioning in both schemes at AR=100 and 10000, AW3 and ARK3J obtain comparable smoothing factors. Regarding the optimal c*, it is generally lower than with ARK3J, except for d ≥ 0.5 and AR > 1.

Comparison of AW schemes and choice of η
One important question is the optimal choice of the additional parameter η in the W methods. Based on the AW3 results in Table 7, it was decided to focus on two values of d: d=0.1, where limited stability was observed, and d=0.5, where A-stability was observed. Only SGS preconditioning was used. For each W scheme and value of d, optimal values of c*, η, and amplification and smoothing factors were determined. These are presented in Table 9 for initial conditions Mach=0.8, α = 0° and in Table 10 for initial conditions Mach=0.8, α = 45°.
Looking just at Table 9, the optimal value of η is low, either 0.4 or 0.5 (with one case of 0.7), when d=0.5. When d=0.1, the optimal η depends on AR: for AR=1, optimal values of η are 0.5 or 0.6, and for AR=100 and 10000 the values are higher, mostly 0.8. Looking at Table 10, the optimal value of η is independent of d and the choice of scheme, but not of AR. The optimal value of η appears to be somewhat dependent on the initial conditions and other free parameters, but independent of the specific W scheme. Furthermore, the optimisation process demonstrated (not all results are shown, for brevity) that the W schemes are all stable within the range 0.5 ≲ η ≲ 0.9, but the maximal c* varies with η within this range. As shown in Table 8, fixing η = 0.8 across all tests results in a stable but sub-optimal scheme. Looking at the relative performance of the different W schemes in Tables 9 and 10, it is apparent that they all obtain similar optimal smoothing factors at similar c* values. Therefore, AW3 is the best scheme, as it uses only three stages.
The discrete Fourier analysis suggests that the preconditioned ARK3J and additive W schemes should theoretically achieve very good smoothing factors under challenging flow conditions and on high aspect ratio grids. Moreover, in the W schemes the eigenvalue limiting parameter d plays an important role: for d ≥ 0.5 and AR> 1 the allowable c * is unlimited, while for smaller d or AR=1 the optimal c * is finite and smaller than that found for preconditioned ARK3J.

Numerical results
We now proceed to tests on the RANS equations and use a FAS scheme as the iterative solver. We employ the Fortran code uflo103 to compute flows around pitching airfoils. All computations are run on Ubuntu 16.04 on a single core of an 8-core Intel i7-3770 CPU at 3.40GHz with 8 GB of memory.
C-type grids are employed, where the half of the cells that are closer to the boundary in y-direction get a special boundary layer scaling. To obtain initial conditions for the unsteady simulation, far field values are used from which a steady state is computed. The first unsteady time step does not use BDF-2, but implicit Euler as a startup for the multistep method. From then on, BDF-2 is employed. We look at the startup phase to evaluate the performance of steady state computations and at the second overall timestep, meaning the first BDF-2 step, to evaluate performance for the unsteady case.
As a first test case, we consider the flow around the NACA 64A010 pitching and plunging airfoil at a Mach number of 0.796. The grid is illustrated in Figure 3. For the pitching, we use a frequency of 0.202 and an amplitude of 1.01°; 36 time steps per cycle (pstep) are chosen. The Reynolds number is 10⁶ and the Prandtl number is 0.75. The grid is a C-mesh with 512 × 64 cells and a maximum aspect ratio of 6.31e6. As a second test case, we look at the pitching RAE airfoil, for which we again use a frequency of 0.202, an amplitude of 1.01° and pstep=36. This grid has 320 × 64 cells and a maximum aspect ratio of 8.22e6. The results of the Fourier analysis suggest that the most interesting schemes are SGS preconditioned ARK3J and the various AW schemes. A first thing to note is that due to nonlinear effects, the schemes need to be tweaked when moving from the linear to the nonlinear case. In particular, it is necessary to start with a reduced pseudo CFL number c*. We restrict it to 20 for the first two iterations.

Choice of parameters
In the AW methods, there are three interdependent parameters to choose: η, d and c*, the CFL number in pseudo time. We start by fixing d. Choosing d = 0 does not cause instability per se, but it leads to a stall in the iteration away from the solution. The convergence rates for d = 0.05, d = 0.1 and d = 0.5 were compared. Qualitatively, we observe the following behavior:
• Increasing d makes the schemes more stable, but slower to converge.
• This effect is stronger for the unsteady system.
• If η is too small, the iteration becomes unstable.
• Decreasing η within the stable region improves the convergence rate.
We thus suggest two different modes of operation:
1. The robust mode: choose d = 0.5, η = 0.5 and c* very large.
2. The fast mode: choose d = 0.05, η = 0.5 and c* = 100.
The robust mode trades some convergence rate for more robustness. The numerical experiments find somewhat different optimal values of η from those found in the discrete Fourier analysis. Possible reasons for the discrepancies include the linearisations used in the discrete Fourier analysis and the non-Cartesian meshes in the numerical experiments.
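The role of d as a cutoff for small eigenvalues can be illustrated with a Harten-type limiter: eigenvalues of the flux Jacobian are bounded away from zero by a fraction d of the spectral radius. This is a sketch of the standard form; the exact cutoff function used in the flux vector splitting preconditioner may differ.

```python
def limited_eigenvalue(lam, u, c, d):
    """Bound the modulus of a convective eigenvalue from below by a
    fraction d of the spectral radius |u| + c (illustrative Harten-type
    cutoff; d = 0 leaves small eigenvalues untouched)."""
    return max(abs(lam), d * (abs(u) + c))
```

For d = 0.5, an eigenvalue near the sonic point (|λ| ≈ 0) is replaced by half the spectral radius, which stabilizes the preconditioner at the cost of added dissipation; small d keeps the limiting weak, matching the fast mode above.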

Comparison of Schemes
The linear analysis suggests that preconditioned ARK3J is competitive with the preconditioned W methods in terms of smoothing power. However, its application requires choosing c* within a stability limit, whereas the W methods are A-stable for a certain range of d. To test the stability limit of the ARK schemes, we apply preconditioned ARK3J and ARK51 to the pitching NACA airfoil test case. The ARK3J method becomes unstable for c* > 1, whereas ARK51 can be run with c* = 3. However, both methods are completely uncompetitive, with convergence rates of 0.999. Hereafter we compare only the AW schemes. We first look at the NACA airfoil and compare AW3, AW51, AW52 and AW5J for the two modes of operation: d = 0.05 and c* = 100 versus d = 0.5 and c* = 10000. The relative residuals for the initial steady state computation and for the second unsteady time step are plotted in Figure 4 for d = 0.05 and c* = 100 and in Figure 5 for d = 0.5 and c* = 10000. The convergence rates and CPU times are summarized in Table 17. Faster convergence is obtained with d = 0.05 for all schemes except AW5J. The residual histories for the same tests, but for the RAE airfoil, can be seen in Figures 7 and 6. Convergence rates and CPU times are summarized in Table 18. The numbers after the scheme names are the values of c* used in the steady iterations. Again, faster convergence is obtained with d = 0.05 for all schemes except AW5J.
As an immediate conclusion, it can be seen that the different schemes have similar convergence rates. Thus, AW3 performs best in terms of CPU time, since it is a three-stage smoother, as opposed to the five-stage smoothers.
With the fast mode, we obtain a convergence rate for the unsteady case of 0.77 for the NACA profile and 0.8 for the RAE profile. However, for the RAE profile, we have to reduce c* from 100 for three of the four schemes to prevent instability. With the convergence rates obtained, 20 iterations are sufficient for most applications, which is a matter of seconds. In the robust mode, the convergence rate worsens slightly to 0.8 for the NACA profile and 0.84 for the RAE profile.
In the steady state case, there is a decline in the convergence rate after 20 to 30 iterations. This explains why the convergence rates are significantly worse here. In the first phase, a convergence rate of about 0.7 is obtained and the norm of the residual is decreased by about six orders of magnitude, which is completely sufficient for most applications.
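The relation between an asymptotic convergence rate ρ and the iteration count needed for a given residual reduction follows from ρ^n = 10^(-k). A minimal helper (illustrative, not part of the solver) makes this concrete:

```python
import math

def iterations_for_reduction(rate, orders):
    """Smallest n with rate**n <= 10**(-orders), for a convergence
    rate 0 < rate < 1 and a target reduction of `orders` orders of
    magnitude in the residual norm."""
    return math.ceil(orders * math.log(10.0) / -math.log(rate))
```

For example, at the first-phase rate of about 0.7, a six-order reduction takes roughly 39 iterations, consistent with the decline setting in only after the residual has already dropped far enough for practical purposes.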

Mesh Independence
To verify that the solvers' performance is mesh-independent, we run the pitching NACA 64A010 airfoil with AW3 and d = 0.5 on coarse (256 × 32), medium (384 × 48) and fine (512 × 64) meshes in robust mode. Table 19 shows the results. As can be seen, the convergence of the preconditioned W schemes is mesh-independent. With d set to 0.05, the simulations on the coarse mesh diverged, which is an example of the robust mode indeed being more robust.

Effect of flow angle
In the Fourier analysis it was found that grid-aligned flow could be problematic. We therefore choose angles of attack α of 0, 1, 2 and 4 degrees for the steady state computation and the second time step in an unsteady computation. Table 20 shows the convergence rates in fast mode (d = 0.05). Essentially, the convergence rate is unaffected by the angle of attack. However, for two cases, the iteration stalls after 30 and 65 iterations at relative residuals of 10^-4 and 10^-5, respectively.

Conclusions
We considered preconditioned pseudo time iterations for agglomeration multigrid schemes for the steady and unsteady RANS equations. As a discretization, the JST scheme was used as a flux function in a finite volume method. We derived preconditioned additive W methods, as well as preconditioned additive explicit RK methods. Both are implemented in exactly the same way, the difference being in how the preconditioner is chosen, as well as the pseudo time step size. In the explicit RK case, the preconditioner has to approximate the Jacobian J and the pseudo time iteration has a finite stability region. In the additive W case, the preconditioner has to approximate I + η∆t* J, whereby the pseudo time step size is possibly unbounded. However, we obtain an additional parameter η which currently must be chosen empirically. As a preconditioner, we choose a flux vector splitting with a cutoff of small eigenvalues controlled by the free parameter d.
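The structural difference between the two classes can be summarized schematically. The stage form below is an illustrative sketch with generic coefficients α_j, not the exact schemes of the paper; it shows only where the approximation requirement on the preconditioner P enters.

```latex
% Preconditioned additive explicit RK stage: P must approximate the
% Jacobian J, and the pseudo-time stability region is finite:
u^{(j)} = u^{(0)} - \alpha_j \,\Delta t^{*}\, P^{-1} F\bigl(u^{(j-1)}\bigr),
\qquad P \approx J .

% Additive W stage: the shifted operator is approximated instead,
% which allows a possibly unbounded pseudo-time step:
P\, \Delta u^{(j)} = -\alpha_j \,\Delta t^{*}\, F\bigl(u^{(j-1)}\bigr),
\qquad P \approx I + \eta\,\Delta t^{*} J .
```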
To compare the different methods, we used a discrete Fourier analysis of the linearized Euler equations. Numerical results show that AW3, AW51 and AW52 have similar convergence rates, meaning that AW3 performs best, since it uses two stages fewer. The free parameter η can be chosen with relative freedom within a stable range (0.5 ≤ η ≤ 0.9), although the optimal value is in some cases dependent on the initial conditions, d and the aspect ratio. Fixing η = 0.8 is an acceptable simplification in the cases tested. The most significant parameter affecting stability and convergence is the eigenvalue cutoff coefficient d in the numerical flux function. It was found that the W schemes were A-stable for d ≥ 0.5 and had stability limits lower than the preconditioned ARK schemes for d < 0.5. Thirdly, the pseudo CFL number c* was tuned for optimal performance. Different optimal values were obtained for different aspect ratios, but as long as c* was within the stability limit, good convergence was achieved. This is useful, since the aspect ratios in practical meshes vary considerably.
Simulations of pitching and plunging NACA 64A010 and RAE 2822 airfoils in high Reynolds number flow at Mach 0.796 were performed using the 2D URANS code uflo103. The preconditioned ARK schemes were completely uncompetitive, with convergence rates of around 0.999. The additive W schemes, on the other hand, achieved mesh-independent convergence rates of as low as 0.85 for the initial steady-state iteration and 0.77 for the unsteady iterations. Slightly different optimal values of η and c* were found, although the behaviour of the schemes was qualitatively similar to that predicted by the linear analysis. We emphasise two modes of operation for the W schemes: a fast mode with d = 0.05, η = 0.5 and c* = 100, and a robust mode with d = 0.5, η = 0.5 and c* = 10000. Unsteady convergence rates in the robust mode were higher than in the fast mode, but still competitive. Steady-state convergence rates for all tests stalled to varying degrees after around 20 iterations, but the residuals had already fallen by 6 orders of magnitude, more than sufficient for most practical applications.
In summary, the new additive W schemes achieve excellent performance as smoothers in the agglomeration multigrid method applied to 2D URANS simulations of high Reynolds number transonic flows. The stiffness associated with very high aspect ratio grids is counteracted by carefully tuned preconditioning. The underlying aim of this paper was to present a complete analysis of the reasons why such preconditioned iterative smoothers are effective, so that their high performance can be replicated. We encountered two parameters that resisted analysis and had to be tuned empirically: η and d. Nevertheless, this represents a considerable improvement over fully empirical smoother design. Future work will look at these parameters in more detail. In addition, the influence of boundary conditions on convergence speed should be investigated.