1 Introduction

One of the most important concerns when solving partial differential equations numerically is finding the optimal grid on which the solution is computed. Unfortunately, in most cases this is not an easy task that can be settled in advance without a deep understanding of the studied problem. That is why self-adapting algorithms such as adaptive mesh refinement (AMR) have received much attention in past decades and have become an important part of many packages for the numerical modelling of fluid dynamics, e.g. [9, 18]. The goal of AMR is to control the computational error during the simulation by placing higher-resolution grids where they are needed. This makes the numerical modelling more robust and makes it possible to increase the accuracy of numerical simulations at minimal computational cost. The drawback, however, is increased solver complexity, which can have negative effects on parallel code performance, in particular with respect to load balancing.

There are a number of different AMR schemes. In the context of the spectral element method (SEM) [16], in which the discretisation is based on a decomposition of the computational domain into a number of non-overlapping, high-order sub-domains called elements, we can distinguish three categories: mesh adaptation can mean adjusting the (local) size of an element (r-refinement), changing the polynomial order in a particular element (p-refinement), or splitting the element into smaller ones (h-refinement). In this work we concentrate on an h-refinement framework and its implementation in Nek5000 [8], a highly parallel and efficient SEM solver for the incompressible Navier–Stokes equations. In its established version, Nek5000 only supports conformal elements at constant polynomial order throughout the domain.

The present work was started within the EU project CRESTA, where a non-conforming solver for the advection–diffusion problem was developed and the basic AMR tasks were implemented using existing external libraries. As h-refinement affects the element connectivity, resulting in non-conforming meshes, a special grid manager is required to perform local refinement/coarsening and to build globally consistent meshes. For this task the p4est library [1] was chosen, as it is designed to manipulate domains composed of multiple, non-overlapping logical cubic sub-domains, which can be represented by a recursive tree structure. This library provides element connectivity information for the dual graph, which is later processed by ParMETIS [10] to produce a new element-to-processor mapping. The final step of grid refinement/coarsening and redistribution is performed within the non-conforming version of Nek5000, which utilises the so-called conforming-space/nonconforming-mesh approach based on the previous work of Fischer et al. [7, 11]. As the solver complexity grows, special care has been taken to develop efficient tools that can be used within the AMR framework. A more detailed description of them and the related scaling tests can be found in [17].

The goal of ExaFLOW is to extend the results of CRESTA to the full incompressible Navier–Stokes equations, focusing on the proper adaptation of the pressure preconditioners for nonconforming SEM. Defining a robust parallel preconditioning strategy has received much attention in past decades, as the linear sub-problem associated with the divergence-free constraint (the pressure-Poisson equation) can become very ill-conditioned. In the context of SEM, two approaches based on the additive overlapping Schwarz method [4, 6] and the hybrid Schwarz-multigrid method [5, 12] were proposed and implemented in Nek5000, leading to a significant reduction of pressure iterations.

In the present paper, we discuss the modifications necessary to adapt Nek5000 for the h-type AMR framework. The article is organised as follows. A short description of SEM and the pressure preconditioners is given in Sects. 2 and 3. The following Sects. 4 and 5 describe the algorithmic modifications and the parallel performance of the code. Finally, Sect. 6 provides conclusions and an outlook on future work.

2 SEM Discretisation of the Navier–Stokes Equations

We briefly review the discretisation of the incompressible Navier–Stokes equations to introduce notation and to point out the parts of the algorithm that require modification. A more in-depth derivation can be found in e.g. [4]. The temporal discretisation is based on a semi-implicit scheme in which the nonlinear term is treated explicitly and the remaining unsteady Stokes problem is solved implicitly. To avoid spurious pressure modes, our spatial discretisation is based on the \(\mathbb {P}_N - \mathbb {P}_{N-2}\) SEM, where the velocity and pressure spaces are spanned by Lagrangian interpolants on the Gauss–Lobatto–Legendre (GLL) and Gauss–Legendre (GL) quadrature points, respectively. Note that the basis for velocity is continuous across element interfaces, whereas the basis for pressure is not. Assuming \(f^n\) incorporates all nonlinear and source terms treated explicitly at time \(t^n\), the matrix form of the Stokes problem after applying the Uzawa decoupling reads:

$$\displaystyle \begin{aligned} \left[ \begin{array}{cc} \mathsf{H} \ \ & -\frac{\Delta t}{\beta_0} \mathsf{H} \mathsf{B}^{-1} \mathsf{D}^T \\ 0 \ \ & \mathsf{E} \end{array} \right] \left( \begin{array}{c} {\mathbf{u}}^n \\ \Delta p \end{array} \right) = \left( \begin{array}{c} \mathsf{B} {\mathbf{f}}^n + \mathsf{D}^Tp^{n-1} \\ g \end{array} \right), {} \end{aligned} $$
(1)

where

$$\displaystyle \begin{aligned} \mathsf{E} = \frac{\Delta t}{\beta_0}\mathsf{D}\mathsf{B}^{-1}\mathsf{D}^T {} \end{aligned} $$
(2)

is the Stokes Schur complement governing the pressure, \(\Delta p = p^n - p^{n-1}\) is the pressure update, and g is the inhomogeneity arising from Gaussian elimination. In these equations \(\mathsf {H} = - \frac {1}{Re} \mathsf {A} + \frac {\beta _0}{\Delta t} \mathsf {B}\) and D are the discrete Helmholtz and divergence operators, respectively. \(\beta_0\), A and B denote a coefficient from the time discretisation, a discrete Laplacian and a diagonal mass matrix associated with the velocity mesh, respectively. In applying the Uzawa decoupling we use the inverse mass matrix \(\mathsf{B}^{-1}\) as an approximation of the inverse Helmholtz operator \(\mathsf{H}^{-1}\), giving rise to a splitting error. Note that for this splitting method the diagonality of the mass matrix B is crucial to avoid a costly matrix inversion.

All operators H, A, B and E are symmetric positive definite (SPD), so the associated systems can be solved with a preconditioned conjugate gradient (PCG) method. Moreover, E has properties similar to a Poisson operator and is often referred to as the consistent Poisson operator. The systems involving H and E are solved iteratively, with E being the more challenging one; in the next section we present the preconditioning strategy for the pressure equation,

$$\displaystyle \begin{aligned} \mathsf{E}\Delta p = - \mathsf{D}\mathbf{u} \ . {} \end{aligned} $$
(3)
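In practice the consistent Poisson operator is applied matrix-free. The following sketch, under the assumption of a small illustrative dense divergence matrix D and a diagonal mass matrix (stand-ins, not Nek5000 data), shows the action of E from Eq. (2) and an unpreconditioned conjugate gradient solve of a system like Eq. (3):

```python
import numpy as np

# Illustrative stand-ins for the discrete operators (not Nek5000 data):
rng = np.random.default_rng(0)
n_u, n_p = 8, 4                        # velocity / pressure dof counts
D = rng.standard_normal((n_p, n_u))    # discrete divergence (full row rank)
b_diag = rng.uniform(1.0, 2.0, n_u)    # diagonal of the mass matrix B
dt, beta0 = 0.01, 1.0

def apply_E(q):
    """Matrix-free action of E = (dt/beta0) D B^{-1} D^T; E is never formed."""
    return (dt / beta0) * (D @ ((D.T @ q) / b_diag))

def cg(apply_A, rhs, tol=1e-12, maxit=200):
    """Plain conjugate gradients for an SPD operator given by its action."""
    x = np.zeros_like(rhs)
    r = rhs - apply_A(x)
    p = r.copy()
    rr = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

rhs = rng.standard_normal(n_p)
dp = cg(apply_E, rhs)   # pressure update for the toy system
```

Because B is diagonal, each application of E costs only two sparse matrix–vector products and one vector division, which is exactly why the lumping discussed later in this chapter matters.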

We close this section by briefly presenting the SEM operators. SEM introduces a globally unstructured and locally structured basis by tessellating the domain into K non-overlapping subdomains (deformed quadrilaterals), \(\Omega = \bigcup _{k=1}^K \Omega _k\), and representing functions in each subdomain in terms of tensor-product polynomials on a reference subdomain \(\hat \Omega = [-1,1]^d\). In this approach every function or operator is represented by its local counterparts, which in the case of functions takes the form of a sum over the subdomains

$$\displaystyle \begin{aligned} f(\mathbf{x}) = \sum_{k=1}^K \sum_{i} f_i^k h_i(\mathbf{r})\ . \end{aligned}$$

Here, \(f_i^k\) and \(h_i\) are the nodal values of the function in \(\Omega_k\) and the base functions in \(\hat \Omega \), respectively, with i representing the natural ordering of nodes in \(\hat \Omega \). Combining the coefficients \(f_i^k\) one can build global \({ \underline f}\) and local \({ \underline f}_L\) representations of the function. Each global degree of freedom occurs only once in the global representation, but in the local one it has multiple copies on the faces, edges and vertices shared between subdomains \(\Omega_k\). To enforce function continuity, the global-to-local mapping is defined as the matrix–vector product \({ \underline f}_L = \mathsf {Q} { \underline f}\), where Q is a binary operator duplicating the basis coefficients in adjoining subdomains. The action \(\mathsf {Q}^T { \underline f}_L\) sums the multiple local values contributing to each global degree of freedom. The assembled global stiffness matrix A takes the form

$$\displaystyle \begin{aligned} \left(\nabla f,\nabla g\right) = {\underline f}^T \mathsf{A} {\underline g} = {\underline f}^T \mathsf{Q}^T \mathsf{A}_L \mathsf{Q} {\underline g}, \end{aligned}$$

where the block-diagonal matrix \(\mathsf{A}_L\) is the unassembled stiffness matrix, with each diagonal block consisting of the local stiffness matrix \(\mathsf {A}^k_{ij}=\int \frac {dh_i}{dx}\frac {dh_j}{dx}dx\). In practice, the global stiffness matrix is never formed explicitly; the gather–scatter operator \(\mathsf{Q}\mathsf{Q}^T\) is used instead. This operator contains all information about element connectivity.
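The gather–scatter operation can be sketched with a toy local-to-global index map; the map below (two 1D linear elements sharing one node) is illustrative, not Nek5000's connectivity data:

```python
import numpy as np

# Local dof -> global dof map: two 1D elements of 2 nodes each,
# sharing global node 1 (an assumed toy mesh, not Nek5000 data).
glob_index = np.array([0, 1, 1, 2])

def gather_scatter(f_local):
    """Apply Q Q^T: sum duplicated local values, copy the sum back."""
    n_glob = glob_index.max() + 1
    f_glob = np.zeros(n_glob)
    np.add.at(f_glob, glob_index, f_local)   # Q^T: gather (sum duplicates)
    return f_glob[glob_index]                # Q:   scatter to local copies

f_local = np.array([1.0, 2.0, 3.0, 4.0])
print(gather_scatter(f_local))   # the shared node receives 2 + 3 = 5 on both copies
```

The unbuffered `np.add.at` is essential here: repeated indices must accumulate, mimicking the direct stiffness summation.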

3 Pressure Preconditioner

An efficient solution of Eq. (3) requires an SPD preconditioning matrix \(\mathsf{M}^{-1}\) which can be applied inexpensively and which reduces the condition number of \(\mathsf{M}^{-1}\mathsf{E}\). Preconditioners based on domain decomposition are a natural choice for SEM, as the data is structured within an element but otherwise unstructured.

An overlapping additive Schwarz preconditioner for Eq. (3) was developed in [4], based on a linear finite element discretisation of the Poisson operator. It combines solutions of local Poisson problems in overlapping subdomains, \(\mathsf {R}_k^T \hat {\mathsf {A}}_k^{-1} \mathsf {R}_k\), with a coarse-grid problem \(\mathsf {R}_0^T \hat {\mathsf {A}}_0^{-1} \mathsf {R}_0\), which involves only a few degrees of freedom but covers the entire domain:

$$\displaystyle \begin{aligned} \mathsf{M}^{-1} = \mathsf{R}_0^T \hat {\mathsf{A}}_0^{-1} \mathsf{R}_0 + \sum_k \mathsf{R}_k^T \hat {\mathsf{A}}_k^{-1} \mathsf{R}_k. \end{aligned}$$

For the local problems the restriction and prolongation operators, \(\mathsf{R}_k\) and \(\mathsf {R}_k^T\), are Boolean matrices that transfer data to and from the subdomain, and \(\hat {\mathsf {A}}_k\) is a local stiffness matrix which can be inverted with e.g. a fast diagonalisation method. Note that the action of \(\mathsf{R}_k\) and \(\mathsf {R}_k^T\) is similar to that of the gather–scatter operator \(\mathsf{Q}\mathsf{Q}^T\).

The coarse-grid problem corresponds to the Poisson problem solved on the element vertices only, with \(\mathsf {R}_0^T\) being the linear operator interpolating the coarse-grid solution onto the tensor-product array of GL points. Unlike in [4, 6], \(\hat {\mathsf {A}}_0\) is defined using local SEM-based Neumann operators, projecting the local stiffness matrices \(\mathsf{A}_k\), evaluated on the GLL quadrature points, onto the set of coarse base functions \(b_i\) representing the linear finite element basis on the GLL grid. The coarse base functions are defined in \(\hat \Omega \) as tensor products of one-dimensional linear functions. The local contribution to \(\hat {\mathsf {A}}_0\) is given by \(b_i^T \mathsf {A}_k b_j\), and the full \(\hat {\mathsf {A}}_0\) is finally assembled by the local-to-global mapping, summing the local contributions to each global degree of freedom. \(\hat {\mathsf {A}}_0\) is one of the few matrices formed explicitly in Nek5000.
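The additive combination of local and coarse solves can be sketched with small dense matrices. The 1D Poisson matrix, the overlapping index sets, and the coarse "vertex" set below are illustrative stand-ins, not the Nek5000 operators:

```python
import numpy as np

# Assumed toy problem: 1D Poisson matrix on 6 dofs (not a Nek5000 operator).
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subdomains = [np.arange(0, 4), np.arange(2, 6)]   # overlapping subdomains
coarse = np.array([0, 3, 5])                      # coarse "vertex" dofs

def apply_M_inv(r):
    """Additive Schwarz: M^{-1} r = R0^T A0^{-1} R0 r + sum_k Rk^T Ak^{-1} Rk r."""
    z = np.zeros_like(r)
    # Coarse-grid solve; R_0 is the Boolean restriction to `coarse`.
    A0 = A[np.ix_(coarse, coarse)]
    z[coarse] += np.linalg.solve(A0, r[coarse])
    # Local overlapping subdomain solves; R_k restricts to each index set.
    for idx in subdomains:
        Ak = A[np.ix_(idx, idx)]
        z[idx] += np.linalg.solve(Ak, r[idx])
    return z
```

The Boolean restrictions appear only as fancy indexing (`r[idx]`, `z[idx] += ...`), mirroring how R_k is never stored as a matrix; and since all terms are congruent to SPD inverses, the resulting preconditioner stays SPD, as required by PCG.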

On the other hand, the hybrid Schwarz-multigrid preconditioner is based on the multiplicative Schwarz method, which for the two-level scheme takes the form,

$$\displaystyle \begin{aligned} \mathsf{M}^{-1} = \mathsf{R}_0^T \hat {\mathsf{A}}_0^{-1} \mathsf{R}_0 \left[\sum_k \mathsf{R}_k^T \hat {\mathsf{A}}_k^{-1} \mathsf{R}_k\right], \end{aligned}$$

and leads to the following two-level multigrid scheme,

$$\displaystyle \begin{aligned} \begin{array}{rcl} (i) &\displaystyle u^1&\displaystyle = \sum_k \mathsf{R}_k^T \hat {\mathsf{A}}_k^{-1} \mathsf{R}_k g,\\ (ii) &\displaystyle r&\displaystyle = g - \mathsf{A}u^1,\\ (iii) &\displaystyle e&\displaystyle = \mathsf{R}_0^T \hat {\mathsf{A}}_0^{-1} \mathsf{R}_0 r,\\ (iv) &\displaystyle u&\displaystyle = u^1 + e, \end{array} \end{aligned} $$

where g, r, e and u are the right-hand side, the residual, the coarse-grid error and the solution of the equation \(\mathsf{A} u = g\), respectively. This method can be extended to a general multilevel solver performing a full V-cycle [5, 12]. Notice that by replacing step (ii) with r = g we obtain the additive Schwarz preconditioner.
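Steps (i)–(iv) above can be transcribed directly; as before, the matrix, subdomains and coarse space are assumed toy data, not the solver's operators:

```python
import numpy as np

# Assumed toy setup (not Nek5000 data): 1D Poisson matrix on 6 dofs.
n = 6
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
subdomains = [np.arange(0, 4), np.arange(2, 6)]   # overlapping subdomains
coarse = np.array([0, 3, 5])                      # coarse "vertex" dofs

def schwarz(r):
    """Additive Schwarz sweep: sum of exact solves on overlapping subdomains."""
    z = np.zeros_like(r)
    for idx in subdomains:
        z[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])
    return z

def coarse_correction(r):
    """Coarse-grid solve R0^T A0^{-1} R0 r on the vertex dofs."""
    e = np.zeros_like(r)
    e[coarse] = np.linalg.solve(A[np.ix_(coarse, coarse)], r[coarse])
    return e

def two_level(g):
    u1 = schwarz(g)            # (i)   Schwarz smoothing
    r = g - A @ u1             # (ii)  residual
    e = coarse_correction(r)   # (iii) coarse-grid error
    return u1 + e              # (iv)  corrected solution
```

Replacing step (ii) with `r = g` turns `two_level` into the additive preconditioner of the previous section, which makes the relationship between the two variants explicit.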

4 Adaptation for Non-conforming Meshes

The important advantage of SEM in the context of AMR is its spatial decomposition into elements, which can easily be split into smaller ones, and its use of the local representation of the operators, which decouples intra- and inter-element operations. As h-type AMR using the conforming-space/nonconforming-mesh approach leaves the approximation spaces unchanged, most of the tensor-product operations evaluated element-by-element are preserved, limiting the changes in the algorithm.

The inter-element operations are mostly performed by the gather–scatter operator \(\mathsf{Q}\mathsf{Q}^T\), which has to be redefined to include spectral interpolation at the non-conforming faces. Following [7] we consider a non-conforming face shared by one low-resolution element (parent) and two (in 3D four) high-resolution elements (children). We introduce a local parent-to-child interpolation operator \(\mathsf{J}^{cp}\), a spectral interpolation operator with entries

$$\displaystyle \begin{aligned} \left(\mathsf{J}^{cp}\right)_{ij}=h_j(\zeta_i^{cp}), \end{aligned}$$

where \(\zeta _i^{cp}\) represents the mapping of the GLL points from the child face to its parent. This operator is applied locally to give the desired nodal values on the child face, after Q copies data from the parent to the children. Building a block-diagonal matrix \(\mathsf{J}_L\) from the local matrices \(\mathsf{J}^{cp}\), one can redefine the scatter \(\mathsf{J}_L \mathsf{Q}\) and gather–scatter \(\mathsf {J}_L \mathsf {Q Q}^T \mathsf {J}_L^T\) operators, respectively. For more discussion see Fig. 6 and Sect. 4 in [7].
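A minimal sketch of assembling \(\mathsf{J}^{cp}\) by evaluating Lagrange cardinal functions at mapped points; the four face nodes below are illustrative stand-ins for the GLL points, and the mapping assumes the first child occupies the half [-1, 0] of the parent face:

```python
import numpy as np

def lagrange_matrix(x_src, x_dst):
    """Interpolation matrix with entries h_j(x_dst[i]), h_j cardinal on x_src."""
    n = len(x_src)
    J = np.ones((len(x_dst), n))
    for j in range(n):
        for m in range(n):
            if m != j:
                J[:, j] *= (x_dst - x_src[m]) / (x_src[j] - x_src[m])
    return J

nodes = np.array([-1.0, -0.5, 0.5, 1.0])   # assumed face nodes (GLL stand-ins)
zeta_cp = 0.5 * (nodes - 1.0)              # child reference [-1,1] -> parent [-1,0]
J_cp = lagrange_matrix(nodes, zeta_cp)     # (J^cp)_ij = h_j(zeta_i^cp)
```

Because the mapping is evaluated through the parent's polynomial basis, applying `J_cp` to nodal values of any polynomial up to the nodal degree reproduces it exactly on the child face, which is the sense in which the interpolation is spectral.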

The next crucial modification is the diagonalisation of the global mass matrix \(\mathsf{Q}^T \mathsf{B}_L \mathsf{Q}\) (\(\mathsf{B}_L\) is a block-diagonal matrix built of the local mass matrices), whose inverse is required in Eqs. (1) and (2). It is non-diagonal because the quadrature points of the elements along the non-conforming faces do not coincide. A diagonalisation procedure is given in [7]; it consists of building the global vector \(\tilde { \underline b}\)

$$\displaystyle \begin{aligned} \tilde {\underline b} := \mathsf{B} \hat {\underline e} = \mathsf{Q}^T \mathsf{J}_L^T \mathsf{B}_L \hat {\underline e_L}, \end{aligned}$$

and finally setting the lumped mass matrix \(\tilde {\mathsf {B}}_{ij}= \delta _{ij} \tilde { \underline b}_i\). \(\hat { \underline e}\) and \(\hat { \underline e}_L\) denote here the global and local vectors containing all ones.
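The lumping step can be sketched with a toy assembly; the local-to-global map and local masses are assumed stand-ins, and \(\mathsf{J}_L^T\) is taken as the identity (conforming interface) to keep the example minimal:

```python
import numpy as np

# Assumed toy data: two local dofs share one global dof; on a genuinely
# non-conforming face J_L^T would be applied to b_local before assembly.
glob_index = np.array([0, 1, 1, 2])          # local -> global dof map (Q)
B_L = np.diag([0.5, 0.5, 0.25, 0.25])        # local (diagonal) mass blocks
e_L = np.ones(4)                             # local vector of all ones

b_local = B_L @ e_L                          # B_L e_L  (J_L^T omitted here)
tilde_b = np.zeros(3)
np.add.at(tilde_b, glob_index, b_local)      # Q^T: assemble the global vector
tilde_B = np.diag(tilde_b)                   # lumped mass matrix tilde(B)
```

The lumped matrix keeps exactly the property the Uzawa splitting needs: its inverse is again diagonal, so the cheap application of \(\mathsf{B}^{-1}\) in Eqs. (1) and (2) is preserved on non-conforming meshes.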

The additive Schwarz preconditioner requires two significant modifications. The first is related to the assembly of the coarse-grid operator \(\hat {\mathsf {A}}_0\), which becomes more complex for non-conforming meshes. This is because a non-conforming mesh introduces hanging vertices located in the middle of faces or edges. These hanging vertices are not global degrees of freedom and cannot be included in \(\hat {\mathsf {A}}_0\). To remove them from consideration one has to modify the set of local coarse base functions \(b_i\), which thus depend on the shape of the refined region as well as on the position and orientation of the child face with respect to the parent one. Unlike in the conforming case, where every \(b_i\) can be represented by a tensor product of two or three linear functions, the non-conforming mesh requires 5 basic components in two dimensions and 21 in three dimensions to assemble all possible shapes of \(b_i\).

The last missing components are the restriction and prolongation operators, \(\mathsf{R}_k\) and \(\mathsf {R}_k^T\), for the local Poisson problems. Taking into account the similarity of these operators to \(\mathsf{Q}\mathsf{Q}^T\), and following the previous development, we use an operator similar to \(\mathsf {J}_L \mathsf {Q Q}^T \mathsf {J}_L^T\), replacing \(\mathsf{J}_L\) with the interpolation operator defined on the GL quadrature points. Although this choice seems optimal, as it preserves the properties of the preconditioner and \(\mathsf {J}_L^T\) is well defined, our numerical experiments showed a significant increase of pressure iterations in some cases. This was found to be caused by the noise introduced by \(\mathsf {J}_L^T\) in the Schwarz operator. To reduce this noise we replaced the transposed interpolation operator with the inverse one, obtaining a significant reduction of iterations. Unfortunately, such a preconditioner is no longer SPD, and PCG cannot be used as the iterative solver in this case. Another problem is the definition of \(\mathsf {J}_L^{-1}\), as \(\mathsf{J}^{cp}\) can be inverted only if it is square, thus excluding p-refinement strategies. To avoid this problem we define a child-to-parent interpolation operator \(\mathsf{J}^{pc}\) with the entries

$$\displaystyle \begin{aligned} \left(\mathsf{J}^{pc}\right)_{ij} = \left\{ \begin{array}{cl} h_j(\zeta_i^{pc}) &\mbox{ if }{\zeta_i^{p} \in \partial \Omega^p \cap \partial \Omega^c} \\ 0 &\mbox{ otherwise} \end{array} \right. \ , \end{aligned} $$

where \(\partial \Omega^p\) and \(\partial \Omega^c\) are the common faces of the parent and the child, \(\zeta _i^{p}\) is a parent GLL point on the face \(\partial \Omega^p\), and \(\zeta _i^{pc}\) represents the mapping of \(\zeta _i^{p}\) to the child face \(\partial \Omega^c\). This operator is applied locally to give the desired nodal values on the parent face, before \(\mathsf{Q}^T\) sums data from the children and the parent. Building a block-diagonal matrix \(\mathsf {J}_L^{-1}\) consisting of the local matrices \(\mathsf{J}^{pc}\), one can redefine the gather–scatter operator \(\mathsf {J}_L \mathsf {Q Q}^T \mathsf {J}_L^{-1}\) such that it is appropriate for the pressure preconditioner.
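A sketch of assembling \(\mathsf{J}^{pc}\) under the same illustrative assumptions as before (four stand-in face nodes; the first child covers the half [-1, 0] of the parent face): rows for parent nodes on the shared face interpolate from the child's basis, all other rows are zero.

```python
import numpy as np

def lagrange_matrix(x_src, x_dst):
    """Interpolation matrix with entries h_j(x_dst[i]), h_j cardinal on x_src."""
    n = len(x_src)
    J = np.ones((len(x_dst), n))
    for j in range(n):
        for m in range(n):
            if m != j:
                J[:, j] *= (x_dst - x_src[m]) / (x_src[j] - x_src[m])
    return J

nodes = np.array([-1.0, -0.5, 0.5, 1.0])   # assumed face nodes (GLL stand-ins)
on_child = nodes <= 0.0                    # parent nodes lying on this child's face
zeta_pc = 2.0 * nodes + 1.0                # parent [-1,0] -> child reference [-1,1]

# Zero rows for parent nodes outside the shared region; interpolating rows
# evaluate the child's cardinal functions at the mapped parent points.
J_pc = np.zeros((len(nodes), len(nodes)))
J_pc[on_child] = lagrange_matrix(nodes, zeta_pc[on_child])
```

The zeroed rows are what makes the operator well defined even though the child only covers part of the parent face: the remaining parent nodes receive their values from the other child's \(\mathsf{J}^{pc}\) during the \(\mathsf{Q}^T\) summation.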

In a similar way we modify the multiplicative Schwarz method, as it shares a number of features with the additive one. In this case we distinguish between Schwarz operators (acting at a single level) and restriction operators (connecting different levels), applying \(\mathsf {J}_L \mathsf {Q Q}^T \mathsf {J}_L^{-1}\) and \(\mathsf {J}_L \mathsf {Q Q}^T \mathsf {J}_L^T\) to them, respectively. Unlike the additive preconditioner, the hybrid one also requires the redefinition of the diagonal weight matrix that indicates the number of sub-domains sharing a given node and is used to account for the overlapping regions. Its value is important as it reduces the largest eigenvalue of the preconditioned operator \(\mathsf{M}^{-1}\mathsf{A}\) and defines the smoothing properties of the additive Schwarz step (see [4] and the references therein). In the conforming case its definition is straightforward; the non-conforming case is more involved, as hanging nodes are not real degrees of freedom. In the current implementation the information about node multiplicity on the non-conforming faces is hidden from the parent element, so the parent element sees only one neighbour instead of two (four in 3D). Although this choice gives a preconditioner that significantly reduces the number of pressure iterations, its performance for the studied cases is slightly worse than that of the additive Schwarz preconditioner. This can be caused by a non-optimal value of the weight matrix, or by the fact that the hybrid preconditioner is superior to the additive one mainly for high-aspect-ratio elements (which are not present in our adaptive simulations).

5 Parallel Performance

The parallel performance test is based on one of the ExaFLOW flagship calculations and consists of the turbulent flow around a NACA4412 wing section at a 5° angle of attack, at a Reynolds number \(Re_c = 200{,}000\) based on the inflow velocity U and the chord length c. It was previously studied in a series of well-resolved large-eddy simulations conducted with the conforming Nek5000 version and discussed in detail in [19]. This flow configuration was chosen to illustrate the significant benefit of using AMR, in particular in the far-field region of the computational domain, but in this article we only briefly discuss the strong scaling results. We omit a weak scaling test, as Nek5000 uses iterative solvers and the current example cannot provide meaningful data for it.

Fig. 1

(a) Volume visualisation of the part of the domain covered by refinement levels higher than one for the turbulent flow around a wing profile. The wing vicinity and the wake region are resolved, and colour indicates the different refinement levels. (b) Strong scaling of the non-conforming Nek5000 solver for the same case, performed on Beskow. The plot shows the time per time step as a function of the number of nodes. Each node consists of 32 cores

The initial coarse, conforming mesh consisted of 2190 elements with polynomial order N = 7 and was advanced for 7.2 convective time units c∕U to drive the refinement process using spectral error indicators [13, 14], allowing for 6 refinement levels. The resulting non-conforming grid consisted of 224,272 elements with \(76.37 \times 10^6\) degrees of freedom, resolving the wing surface and the wake, Fig. 1a. This final mesh was used to test the parallel performance of the non-conforming solver on the petascale Cray XC40 system Beskow at PDC (Stockholm). This system consists of 2060 nodes with 32 cores per node and has a peak performance of 2.438 PFlops. We compare our results with the scaling tests of the conforming Nek5000 presented by Offermans et al. [15]. The most relevant test in that article is the pipe flow at \(Re_\tau = 360\) (upper-right plot in their Fig. 5), as it is similar in size to the discussed wing case. We should mention here that our goal is not to improve the parallel performance of the conforming code, but rather to retain it despite the work imbalance introduced by the additional operator in the direct stiffness summation of the non-conforming solver.

To enable a comparison with the conforming solver, we focus on the time evolution loop only, excluding code initialisation, finalisation, mesh rebuilding within AMR and I/O operations. The result of the strong scaling test is presented in Fig. 1b, showing the time per time step as a function of the node count. This plot is almost identical to the reference one in [15]. Both show slight super-linear scaling between 32 and 256 nodes, despite the growing work imbalance in the non-conforming solver. We also reach the strong scaling limit at around 256 nodes, which for the conforming solver on Beskow was estimated to lie between 30,000 and 50,000 degrees of freedom per core [15]. This shows that the parallel performance of the non-conforming and conforming solvers is almost the same and demonstrates the efficiency of our implementation.

The maximum number of compute nodes used in the test was not set by the parallel properties of the non-conforming Nek5000, but by the quality of the domain partitioning provided by ParMETIS. Within ExaFLOW we developed a new grid partitioning scheme for Nek5000 (not discussed in this paper) that takes into account the core distribution among the nodes and consists of two steps: inter- and intra-node partitioning. Although this two-level partitioning scheme significantly improves the efficiency of the coarse-grid operations for XXT, especially during the setup phase, it relies on the quality of the inter-node partitioning. If the first step produces subdomains with disjoint graphs, the second step cannot be performed. We found that the probability of obtaining disjoint graphs increases with a decreasing number of elements per node, virtually prohibiting runs with fewer than 1000 elements per node. This limit, however, can differ between simulations. We note that in standard production use of the solver this limitation is not critical, as according to [15] it is usually close to the strong scaling limit of the conforming Nek5000.
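The disjoint-graph condition described above amounts to a connectivity check on the dual graph restricted to one node's elements. A minimal sketch, with an assumed adjacency structure (not the ParMETIS or p4est data format):

```python
from collections import deque

def is_connected(elements, neighbours):
    """True if `elements` induce a single connected component of the dual graph."""
    elements = set(elements)
    if not elements:
        return True
    seen = set()
    queue = deque([next(iter(elements))])
    while queue:                          # BFS restricted to this partition
        e = queue.popleft()
        if e in seen:
            continue
        seen.add(e)
        queue.extend(n for n in neighbours[e]
                     if n in elements and n not in seen)
    return seen == elements

# Assumed toy dual graph: a chain of 4 elements, 0-1-2-3.
dual = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_connected([0, 1, 2], dual))   # a valid inter-node subdomain
print(is_connected([0, 2, 3], dual))   # disjoint: element 0 is cut off
```

A check of this kind, run after the inter-node step, tells whether the intra-node partitioning can proceed or the first-level partition has to be rejected.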

6 Conclusions

Within the ExaFLOW project we developed a fully functional SEM-based h-type adaptive mesh refinement (AMR) solver for the incompressible Navier–Stokes equations. This allows much larger flow cases to be run at reduced cost, as the high-resolution grid is placed only in those regions where it is needed. At the same time the simulation quality is improved, as the computational error can be controlled during the run.

We have optimised the pressure preconditioners based on the additive overlapping Schwarz and hybrid Schwarz-multigrid methods for non-conforming meshes. To achieve this we modified the base functions for the assembly of the coarse-grid operator to remove hanging nodes, and redefined the direct stiffness summation operator to include spectral interpolation at the non-conforming faces and edges. We introduced two operators, \(\mathsf {J}_L \mathsf {QQ}^{T} \mathsf {J}_L^{T}\) and \(\mathsf {J}_L \mathsf {QQ}^{T} \mathsf {J}_L^{-1}\), for the different steps in the pressure calculation. The last crucial modification was the diagonalisation of the global mass matrix.

Using real flow cases we have shown our AMR implementation to be correct and efficient. An important success is that the parallel performance of the conforming and non-conforming solvers is very similar, despite the increased complexity of the latter.

In the future we plan to investigate other definitions of the weight matrix for the hybrid Schwarz-multigrid method, and to test different pressure preconditioners based on the restricted additive Schwarz method [2, 3]. We also plan to work on the quality of the graph partitioning, as the two-level partitioning cannot handle disjoint graphs within a node's subdomain.