1 Introduction

The purpose of this paper is to investigate and highlight the effect of various parameter choices and modelling aspects on the performance of a cumulant lattice Boltzmann model, using the decay of a three-dimensional Taylor–Green vortex as an example of a canonical transitional flow. Recent years have seen the advent of many new lattice Boltzmann models based on various modelling rationales. The promise behind the various models is to provide more efficient, more accurate or more stable results than previous lattice Boltzmann models. While derivation details between the many variants differ, the novelty as compared to the classical single and multiple relaxation time models is found in one or several of the following categories:

  • Advanced discretizations of the momentum space

  • Advanced equilibrium functions

  • Advanced basis transformations for collision

  • Adaptive relaxation rates

  • Optimized relaxation rates

  • Non-local collision terms

From this list we explicitly exclude the wide field of non-lattice-conforming discretizations such as finite difference lattice Boltzmann [1], discontinuous Galerkin lattice Boltzmann [2] and semi-Lagrangian lattice Boltzmann [3]. While all these approaches are interesting and potentially superior to the classical stream-on-lattice and collide-on-nodes approach, they have diverged so considerably from the original lattice Boltzmann model that they qualify as independent models to be discussed in a different context.

Advanced discretization of momentum space usually involves the utilization of many discrete speeds [4,5,6,7], or a lattice structure other than the Cartesian simple cubic lattice (e.g. body-centred cubic lattices [8, 9]). From a finite difference perspective it is most natural to try to improve a method by enlarging or rearranging its stencil. However, in the lattice Boltzmann method progress in this direction has always been very slow and unsatisfactory. Substantial advances would require velocity sets with non-rational ratios between the velocities, reminiscent of Gauss integration points [10]. The success of mono-speed lattice Boltzmann models can partly be attributed to the simple fact that a single velocity can always be scaled onto a conforming integer lattice. Therefore, three-point symmetric Gauss integration is possible on a Cartesian lattice, but this cannot be generalized to more than three points per direction.

Advanced equilibrium functions are improvements over the classical Taylor expanded Maxwellian used in classical single or multiple relaxation time methods [11, 12]. The simplest approach is to increase the order of the Taylor expansion from two to three or higher. Some entropic lattice Boltzmann models use non-polynomial equilibrium functions [13].

Advanced basis transformations are used in various new multiple relaxation time methods. The idea is always that the collision operator is not applied to the distributions directly but to some sort of moments such that different relaxation parameters can be attached to different observable quantities. While the original multiple relaxation time models used unweighted orthogonal raw moments [14,15,16], a wide range of other options have since been promoted, including central moments [17,18,19], Hermite moments [20] and cumulants [21]. Besides these complete transformations there exist methods separating hydrodynamically relevant information from so-called ghost modes using only a small number of categories (two or three). Hydrodynamic modes are then relaxed as in the single relaxation time method, while the ghost modes are treated separately, for example by regularization [22, 23] or by adaptive relaxation [24].

Besides several adopted standard turbulence models, the most prominent methods applying adaptive relaxation times are the entropic models. The original entropic models were single relaxation time models with the relaxation time chosen to fulfil a discrete H-theorem replacing physical entropy by information entropy [13, 25]. More recently, entropic models using two relaxation rates were promoted. They use a fixed relaxation rate for the hydrodynamic moments and a modulated one for the ghost moments [24]. Applying limiters to the relaxation of some high-order cumulants technically also falls into the category of adaptive relaxation rates [26].

Optimization of relaxation rates cannot be discussed separately from the type of basis transformation applied. The basic idea is always that the governing equation constrains only some of the available relaxation rates for the various non-conserved observable quantities. The remaining rates can then be chosen to fulfil other objectives. The concept of magic relations between hydrodynamic and non-hydrodynamic relaxation times in lattice Boltzmann originated in the 1990s [27]. When distinguishing only between the relaxation of odd and even moments, it had been observed that the error in the lattice Boltzmann method was a function of a certain combination of the two relaxation rates such that the error could be kept constant with respect to varying viscosities by keeping this particular function constant [28]. However, this by itself did not mean that the error was small. Xu and coworkers proposed a sensitivity analysis for the free relaxation parameters in the MRT model and optimized them with regard to minimal dispersion and dissipation errors [29]. Modena and co-workers used a different objective and optimized relaxation rates according to a \(k-1\)% rule with the aim of increasing dissipation at high wave numbers in order to improve stability [30]. The current authors used the objective of eliminating the leading-order error in diffusion in the asymptotic expansion of a cumulant lattice Boltzmann model and found a general analytical solution [26].

Non-local collision terms refer to all methods that invoke correction terms based on finite differences. A recent example is the hybrid recursive regularized lattice Boltzmann [31]. Finite differences can also be used to correct for the phase error in lattice Boltzmann [32].

The cumulant method aims at incorporating most of the progress discussed above with the exception of large stencils. Among the lattice Boltzmann models on standard (monospeed) lattices, the cumulant method is the only one for which fourth-order convergence of momentum diffusion and advection could be attained to date [26, 32]. The cumulant transform implicitly incorporates an extended equilibrium and a recursive regularization by design. Unlike regularized methods it retains the free relaxation rates for higher-order cumulants which can then be used for the elimination of the leading-order error terms [26] in momentum diffusion. The elimination of the leading error in advection requires a correction term based on finite differences [32]. The determination of both the diffusion and the advection correction used asymptotic expansion techniques in diffusive scaling. The diffusive scaling assumes a limit where the time step scales with the grid spacing squared. The diffusive scaling is the only scaling for which convergence of the lattice Boltzmann method to the Navier–Stokes equation can be formally shown without any assumptions on the Mach number [33]. This is not only true for the incompressible Navier–Stokes equation, but it is intrinsic to the fact that a static velocity set is used in connection with an explicit time-marching algorithm that allows information to travel only a finite distance per time step. Despite this theoretical obstacle it has become quite popular to apply the lattice Boltzmann method to aero-acoustic problems and to dismiss analysis based on diffusive scaling accordingly [20]. It is therefore of considerable practical interest to investigate the fourth-order convergent cumulant method in both diffusive and acoustic scaling, which is one of the main aims of the current study. Further, in order to highlight the respective contributions of the various measures taken in the development of the cumulant method we will apply the method in various configurations. Since our aim is to investigate the influence of various modelling options in different asymptotic scalings, we have to conduct a large number of simulations even for a single test case. In this paper we will therefore limit ourselves to the three-dimensional Taylor–Green vortex as a generic test for isotropic transitional flow. By limiting ourselves to this particular flow we are able to provide a dense sampling of the various modelling parameters.

2 Differences between the cumulant model and other lattice Boltzmann variants

The standard lattice Boltzmann model approximates the kinetic momentum distribution function f through a small set of discrete distributions \(f_{ijk}\) with finite velocities \((i,j,k)^T\Delta x/\Delta t\) and \(i,j,k\in \mathbb {Z}\) arranged on a regular lattice with grid spacing \(\Delta x\) such that each particle moves from one lattice node to another lattice node in a fixed time step \(\Delta t\). This arrangement requires all lattice nodes to be located on a regular grid. The movement of the discrete distributions is modified by the application of the collision operator \(\Omega _{ijk}\). The restrictions of the lattice can be relaxed or mitigated by various interpolation schemes, more sophisticated discretizations [1,2,3] or by allowing a large number of discrete velocities [6, 12]. These approaches, albeit interesting, are not the subject of the current paper. Instead we will focus on small regular lattices, where \(i,j,k\in \{-1,0,1\}\). While for these models the choice of the velocity set is rather restrictive, there is considerable freedom in the design of the collision operator, and a large number of proposals have been put forward in the last three decades. The very first lattice Boltzmann models were based on matrix collision operators [34]. They were quickly replaced by the BGK collision operator [35] which relaxes the distribution directly towards an equilibrium distribution using a single relaxation time. This approach is motivated by the classical BGK collision operator used in kinetic theory [36].

The current study focuses on the cumulant method [21, 26, 37,38,39,40,41,42,43,44,45]. In the cumulant method we transform the distribution into a set of statistically independent observable quantities. This is mathematically expressed by demanding that the joint probability distribution of all observable quantities is the product of their individual distribution functions. This requirement is met by the cumulant generating function being the logarithm of the moment generating function. Cumulants appear to be a very abstract mathematical construction. However, applied to the momentum distribution functions they turn out to be very intuitive quantities of which humans have been aware for a very long time. It is quite striking that the leading-order cumulants just turn out to be velocity and temperature without even requiring normalization to unit density. In fact, the most compact and concise definition of temperature is saying that it is the trace of the second-order cumulant tensor of the momentum distribution function.

In addition to the basis transformation, recent research has focused on the role of the equilibrium distribution [46, 47]. The cumulant method intrinsically uses the same number of monomials in the equilibrium function as the number of discrete velocities. It has been assumed for a long time that this maximizes stability and accuracy [48, 49].

Among the recent new lattice Boltzmann models the cumulant model stands out in a few aspects.

These include:

  • A parametrization resulting in fourth-order spatial accuracy of the viscous term has been identified [26].

  • Fourth-order spatial accuracy of the advection term can be obtained with a small correction involving finite differences [32].

  • Even without this correction, the viscosity is Galilean invariant, i.e. it is free of the cubic velocity error terms in the viscosity known to plague almost all competing lattice Boltzmann models [21].

  • The cumulant method does not coincide with the BGK model if all relaxation rates are identical [21, 26].

The fact that the cumulant method is not identical to the BGK method if used in single relaxation time (SRT) mode is surprising, as this is the case for all other multiple relaxation time bases we are aware of. In the lattice Boltzmann literature, BGK and SRT are usually used synonymously. Here we briefly discuss the reason why this is not the case for a cumulant basis.

The BGK collision operator \(\partial _t^c f\) for the continuous Boltzmann equation is stated as:

$$\begin{aligned} \partial _t f + \xi \cdot \nabla f=\partial _t^c f=-\frac{f-f^{\mathrm{eq}}}{\tau }, \end{aligned}$$
(1)

where f is the momentum distribution, \(f^{\mathrm{eq}}\) is the equilibrium distribution, \(\xi \) is the microscopic particle velocity vector, and \(\tau \) is the single relaxation time, the only adjustable parameter of this model. In the discrete form used later for the lattice Boltzmann equation the relaxation time has to be corrected [50], which has been interpreted in different ways, either as an integration along characteristics [51] or as a consequence of Strang splitting [52].

Instead of writing Eq. (1) in terms of the distribution function f we can write it in terms of the Fourier transformed distribution function \(F=\mathcal {F}\{f\}\):

$$\begin{aligned} \partial _t f + \xi \cdot \nabla f=\mathcal {F}^{-1}\left\{ \partial _t^c F\right\} =\mathcal {F}^{-1}\left\{ -\frac{F-F^{\mathrm{eq}}}{\tau }\right\} . \end{aligned}$$
(2)

Since the Fourier transform and Eq. (1) are linear, it is easy to see that the equation does not change its form. The Fourier transform of a distribution function is also called its moment-generating function. Various types of moments are obtained by taking derivatives of the moment-generating function in various combinations. In that way, the moment-generating function gives rise to the various different bases used for multiple relaxation time lattice Boltzmann models, ranging from raw moments through central moments to Hermite moments, including all different sorts of orthogonalization and regrouping. Due to the linearity of Eq. (1) and due to the fact that all moments are obtained by linear combinations of derivatives of the moment-generating function, it is easy to see that if all moments are relaxed with the same relaxation time, Eq. (2) and consequently also Eq. (1) are recovered. It is hence obvious that all moment methods recover the same BGK operator if they assign the same relaxation time to each moment and if they have the same equilibrium function. Cumulants behave differently. The cumulant generating function is the logarithm of the moment-generating function. Applying a single relaxation time to the cumulant generating function leads to:

$$\begin{aligned} \partial _t^c\log \{F\}=F^{-1}\partial _t^cF=-\frac{\log \{F\}-\log \{F^{\mathrm{eq}}\}}{\tau }. \end{aligned}$$
(3)

Solving for \(\partial _t^cF\) in Eq. (3) and putting it into the Boltzmann equation gives:

$$\begin{aligned} \partial _t f + \xi \cdot \nabla f=\mathcal {F}^{-1}\left\{ -F\frac{\log \{F\}-\log \{F^{\mathrm{eq}}\}}{\tau }\right\} . \end{aligned}$$
(4)

We will call Eq. (4) the cumulant SRT equation or KSRT (where we use ‘K’ for cumulants [20, 53] due to the fact that ‘C’ has been established for central moments in the lattice Boltzmann context and because cumulant is written with ‘K’ in many European languages). Further, we denote the BGK operator with the same equilibrium as the cumulant operator as BGK+ in order to distinguish it from the standard lattice BGK model which uses a Taylor expanded Maxwellian as equilibrium. The KSRT equation is no longer equivalent to the BGK or BGK+ equation even in the continuum. It has, however, never been investigated what consequences these differences have. As we argued before [26], the differences in the asymptotic expansion between the BGK+ and the KSRT method are extremely small. In diffusive scaling differences occur only beyond the order of the leading error such that the accuracy is not expected to be strongly affected. Yet an influence on stability is possible.

Very recently, Coreixas and co-workers published their analysis of the KSRT method in two dimensions and found little difference from other SRT collision models with regard to nonlinear stability [20]. Quite interestingly, the KSRT method had a smaller range of stability than raw and central moment methods when studied theoretically with a von Neumann analysis. However, this behaviour was no longer observed in actual simulations where the effective stability range appeared to be larger than implied by the linearized von Neumann analysis. It is of course of note here that the primary difference between cumulants and moments is their nonlinear behaviour. Thus, it is not surprising that von Neumann analysis gives an incomplete picture of the performance of the cumulant method. We also note that pronounced differences between moments and cumulants appear in three dimensions as we have repeatedly argued [21, 26]. Differences between cumulants and central moments are so small in two dimensions that we have abstained from presenting a two-dimensional cumulant method with the exception of a model used for solving the shallow water equation [54]. Finally we note that the cumulant method analysed by Coreixas et al. [20] differs from the cumulant method proposed by us due to their omission of the Galilean invariance correction. (See Eqs. (A.43)–(A.45) below.)

The cumulant lattice Boltzmann method unfolds its full potential only if combined with the parametrization for fourth-order accuracy. Multiple relaxation time models are frequently criticized for having unknown parameters which the user cannot easily deduce from the physics of the problem. In the case of the cumulant method optimal relaxation rates for third-order cumulants have been found [26]. It is correct that this leaves relaxation rates for cumulants of order four and higher undetermined. However, the influence of these higher-order cumulants is small compared to the third-order cumulants at least in the asymptotic expansion. The reason why it is difficult to predict the influence of higher-order relaxation rates is exactly because this influence is small.

2.1 Galilean invariance correction

Most lattice Boltzmann models using a D3Q27 velocity set or one of the other standard lattices suffer from a velocity-dependent viscosity [55]. This problem is common to all models which do not recover the full velocity moment tensor of third order and are equipped with an at least second-order expansion of the equilibrium function. This includes the standard BGK method [35], classical MRT models [16], regularized lattice Boltzmann models [22], recursively regularized lattice Boltzmann models [23], the classical entropic lattice Boltzmann model [25], the KBC model [24] and classical central moment models [17, 18]. The problem of velocity-dependent viscosity is not present in models with increased isotropy such as those using a body-centred cubic lattice [8]. Using the standard D3Q27 lattice, the problem can be removed by a modification of the relaxation rate [47] or the equilibrium moments [21]. This correction is usually applied in the cumulant method and has also been included in some central moment methods [19]. However, in their recent work Coreixas and co-workers analysed cumulant methods omitting the correction [20]. Consequently, their conclusion that only the equilibrium function affects the accuracy of the method does not apply to our cumulant method or to other methods incorporating the correction. In [21] we showed how the same correction can be incorporated into the BGK and the MRT method. Incorporating the same correction into entropic and regularized schemes would be trivial but has not been done to the best of our knowledge. Despite the Galilean invariance of the viscosity in the cumulant scheme, the advection operator is still discretized with only second-order accuracy. This is evident from the phase lag of a planar vortex in a superimposed constant velocity field (e.g. Fig. 6 in [21]). The reason for this error lies in the aliasing of fourth-order cumulants due to the finite number of discrete velocities [32]. This error is small when compared to other defects in classical lattice Boltzmann models. However, compared to the fourth-order accurate diffusion term of the cumulant method with optimal relaxation rates it becomes non-negligible. An effective way to overcome this error is to add a correction term to the equilibrium of the second-order cumulants [32]. This correction term requires a second derivative of the velocity. It would be very convenient to obtain this second derivative from the non-equilibrium odd-order cumulants (third and fifth order), which is theoretically possible but apparently unfeasible due to the poor conditioning of these cumulants. An obvious but more costly solution to obtain the missing derivatives is the application of finite differences. With this extension, fourth-order convergence of the advection can be obtained even in combination with fourth-order convergent diffusion. It has, however, been observed that the advection correction reduces stability [26]. In the current paper we use a slightly modified version of the correction which we found to be more robust.

The leading error in Galilean invariance (i.e. advection) in the lattice Boltzmann method on the D3Q27 lattice arises from the aliasing of the fourth-order cumulants \(c_{400}\), \(c_{040}\) and \(c_{004}\) which is found to be [32]:

$$\begin{aligned} c_{400}=c_{100}^2(6\theta -3)+c_{200}(1-3\theta )+\mathcal {O}(\epsilon ^3). \end{aligned}$$
(5)

Here \(\epsilon =\Delta x =\Delta t^{1/2}\) is the diffusive scaling parameter, \(\theta =1/3\) is the dimensionless temperature, and \(c_{100}=u\) is the velocity in the x-direction. The cumulant \(c_{400}\) is seen to have a spurious dependency on the velocity, which breaks Galilean invariance. This deviation from true Galilean invariance is rather small, and it is ignored in almost all lattice Boltzmann models as these models are typically only second-order accurate anyway. If we aim for a fourth-order convergence rate, this error can no longer be ignored. One way to remove it [32] is to add a correction term to the equilibrium of the second-order cumulant, i.e.:

$$\begin{aligned} c_{200}^{\mathrm{eq}}=\theta +\left( \left\{ -9\nu u^2\partial _x u\right\} +\left[ \left( 18\nu ^2-\frac{1}{6}\right) ((\partial _x u)^2+u\partial _{xx}u)\right] \right) . \end{aligned}$$
(6)

Here \(\nu \) denotes the kinematic viscosity. Note that we use cumulants in normalized form, which is why no density appears in (6). The term in \(\{\cdot \}\) in (6) compensates for the aliasing of the third-order cumulant \(c_{300}\) and removes the dependency of the viscosity on the velocity (see Appendix H in [21] and see also [47] where the viscosity but not the phase is corrected in a BGK context). The derivative \(\partial _x u\) can be obtained from the non-equilibrium second-order cumulants which makes this correction non-invasive. The terms in \([\cdot ]\) compensate the aliasing of \(c_{400}\) and remove the phase shift. This correction depends on a second derivative \(\partial _{xx}u\). Also, the viscosity-independent prefactor of 1/6 is rather large. Previously we used finite differences to determine \(\partial _{xx}u\) and computed \(\partial _x u\) from the non-equilibrium second-order cumulants. This resulted in a noticeable reduction of stability (see section 7 in [26]). One reason for this could be that \((\partial _x u)^2\) and \(u\partial _{xx}u\) are computed by different and possibly incompatible approaches. Both terms actually arise from the same aliasing of the fourth-order cumulant and can be rewritten as:

$$\begin{aligned} (\partial _x u)^2+u\partial _{xx}u=\frac{1}{2}\partial _{xx}u^2=\partial _{x}(u\partial _x u). \end{aligned}$$
(7)

The correction can hence be determined in different ways. For the current paper we decided to compute a central finite difference of \(u\partial _{x}u\), where \(\partial _x u\) is obtained from the non-equilibrium second-order cumulants. The complete algorithm including all corrections is given in “Appendix A”.
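
To make this concrete, the following minimal sketch evaluates the correction term of Eq. (7) as a central difference of \(g=u\partial_x u\). It is an illustration only, assuming a one-dimensional periodic grid: the function name is ours, and in the actual method \(\partial_x u\) is taken from the non-equilibrium second-order cumulants of the local node rather than from a finite difference as done here.

```python
import numpy as np

def advection_correction_term(u, dx=1.0):
    """Evaluate (d_x u)^2 + u d_xx u = d_x(u d_x u) from Eq. (7) as a
    central difference of g = u * d_x u on a periodic grid.
    Illustration only: in the method itself, d_x u comes from the
    non-equilibrium second-order cumulants of the local node."""
    dudx = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)  # stand-in for the cumulant-based derivative
    g = u * dudx
    return (np.roll(g, -1) - np.roll(g, 1)) / (2.0 * dx)

# sanity check: for u = sin(x) one has d_x(u d_x u) = cos(2x)
x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
u = np.sin(x)
err = np.max(np.abs(advection_correction_term(u, x[1] - x[0]) - np.cos(2.0 * x)))
print(f"max deviation from cos(2x): {err:.1e}")  # small, limited by the second-order stencil
```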

2.2 Conditioning

Fully explicit numerical methods can be sensitive to the build-up of round-off errors. The kinetic approach of the lattice Boltzmann method is particularly vulnerable due to the fact that what is governed by gradients of macroscopic variables in the Navier–Stokes equation is contained in the successive hierarchy of moments or cumulants of the velocity distribution. These cumulants replace the spatial dependencies in the Navier–Stokes equations by local dependencies on the memory of the flow. This memory composed of cumulants is distributed across all distributions, which has severe implications for the conditioning of the memory terms. Cumulants of succeeding order are asymptotically smaller than cumulants of lower order. Each distribution \(f_{ijk}\) is a composition of the cumulants of the various orders such that observable quantities of asymptotically different size share the same floating point variable. As a result, the higher-order cumulants can be poorly represented numerically, and the relevant bits they are represented by within a float or double variable may sensitively depend on the conditioning of the algebraic expressions used in a specific implementation. An additional problem is that cumulants are by no means easily computed. Obtaining them directly from the definition as the series expansion of the logarithm of the Fourier transform of the distribution function would be prohibitively expensive. An obvious way to compute cumulants is to apply the chain rule to the derivatives of the cumulant generating function to obtain expressions for cumulants as functions of the derivatives of the moment-generating function. In this way, cumulants can be computed from e.g. central moments. However, the relationships between cumulants and moments are functions of quickly growing combinatorial complexity, such that cumulants of successively higher order not only decrease in magnitude very quickly, but also require successively more floating point operations to be computed. This all leads to a rather poor numerical conditioning of the procedure. In fact, some early attempts at building lattice Boltzmann methods based on cumulants were only partially successful [56, 57]. The feasibility of implementing a cumulant lattice Boltzmann model depends on a series of measures to improve the conditioning of the cumulant transform such that the build-up of round-off errors is sufficiently suppressed. The three most important of these measures are the subtraction of the background density from the distributions, the utilization of asymptotically ordered pairwise sums in the transformations and the introduction of the Chimera transform for the computation of central moments.

The subtraction of the background density from the distribution simply removes the constant part of \(f_{ijk}\) such that the distribution is no longer a positive function but one that varies around zero. This simple adjustment leaves more significant digits for the higher cumulants in the floating point numbers.

Pairwise summation is a method to reduce round-off errors in long sums compared to naive summation without causing any computational overhead. In the transformation from distributions to moments and vice versa, many sums have to be calculated. One simple example is the computation of the velocity in the x-direction, which is naively done this way:

$$\begin{aligned} u=\rho ^{-1}\sum _{i=-1}^{1}\sum _{j=-1}^{1}\sum _{k=-1}^{1}if_{ijk}. \end{aligned}$$
(8)

This naive calculation is not ideal since the different \(f_{ijk}\) tend to be of different magnitude and they are summed in arbitrary order. In a naive summation of n summands the round-off errors grow like \(\mathcal {O}(n)\). This can be improved to \(\mathcal {O}(\log n)\) at the same computational cost by pairwise summation [58]. In addition, the a priori knowledge of the size of the different distributions can be used to group them such that the partial sums are added in order of increasing size. The computation of the velocity then reads:

$$\begin{aligned} u=\rho ^{-1}\Big (\big (\big ((f_{111}-f_{\bar{1}\bar{1}\bar{1}})+(f_{1\bar{1}1}-f_{\bar{1}1\bar{1}})\big )+\big ((f_{11\bar{1}}-f_{\bar{1}\bar{1}1})+(f_{1\bar{1}\bar{1}}-f_{\bar{1}11})\big )\big )\\ +\big (\big ((f_{101}-f_{\bar{1}0\bar{1}})+(f_{10\bar{1}}-f_{\bar{1}01})\big )+\big ((f_{110}-f_{\bar{1}\bar{1}0})+(f_{1\bar{1}0}-f_{\bar{1}10})\big )\big )+(f_{100}-f_{\bar{1}00})\Big ). \end{aligned}$$
(9)

Here the overbar in the index indicates a negative lattice direction. The most appropriate ordering of the different partial sums can be difficult to find.
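
The effect of the summation order can be demonstrated with a small stand-alone experiment (our own illustration, not taken from the method itself): the same single-precision values are summed naively and pairwise, and both results are compared against a double-precision reference.

```python
import numpy as np

def pairwise_sum(a):
    """Recursive pairwise summation; round-off grows like O(log n)
    rather than the O(n) of naive left-to-right accumulation."""
    if a.size <= 2:
        return a.sum(dtype=a.dtype)
    half = a.size // 2
    return pairwise_sum(a[:half]) + pairwise_sum(a[half:])

rng = np.random.default_rng(1)
a = rng.normal(size=2**20).astype(np.float32)
reference = a.astype(np.float64).sum()   # double-precision reference

naive = np.float32(0.0)
for value in a:                          # naive accumulation in float32
    naive += value

print(f"naive    |error| = {abs(float(naive) - reference):.3e}")
print(f"pairwise |error| = {abs(float(pairwise_sum(a)) - reference):.3e}")
```

(NumPy's own sum already applies pairwise summation internally for contiguous arrays, which is why the recursion is spelled out explicitly here.)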

While naive summation and pairwise summation have the same computational cost but different accuracy, the Chimera transform has the advantage that it decreases both computational cost and round-off errors. The purpose of the Chimera transform is to efficiently compute central moments from distributions and back. A naive way of computing central moments \(\kappa _{\alpha \beta \gamma }\) is by:

$$\begin{aligned} \kappa _{\alpha \beta \gamma }=\sum _{i=-1}^{1}\sum _{j=-1}^{1}\sum _{k=-1}^{1}(i-u/c)^\alpha (j-v/c)^\beta (k-w/c)^\gamma f_{ijk}, \end{aligned}$$
(10)

where c is the lattice speed and it is assumed that \(0^0:=1\). It is faster and more accurate to split up the three sums computing Chimeras (partly distributions and partly moments) first:

$$\begin{aligned} \kappa _{ij|\gamma }&=\sum _{k=-1}^{1}(k-w/c)^\gamma f_{ijk},\\ \kappa _{i|\beta \gamma }&=\sum _{j=-1}^{1}(j-v/c)^\beta \kappa _{ij|\gamma },\\ \kappa _{\alpha \beta \gamma }&=\sum _{i=-1}^{1}(i-u/c)^\alpha \kappa _{i|\beta \gamma }. \end{aligned}$$
(11)

This transformation not only reduces the cost of the computationally most expensive part of the collision operator, it also reduces round-off errors quite significantly. To understand this, we should first recall that the computation of moments from discrete distributions is a simplified discrete Fourier transform. The discrete Fourier transform can be substantially accelerated by splitting it into partial sums, leading to the fast Fourier transform (FFT). The fast Fourier transform is called this way because its most obvious advantage over a naive discrete Fourier transform is its reduced computational complexity. In addition, it also has the perhaps even more useful property of being more accurate than the naive transform [59]. In order to understand this, we have to recall that a naive discrete Fourier transform is computed by summing up a large number of values in direct succession, resulting in a cumulative round-off error proportional to \(\mathcal {O}(n)\), whereas the hierarchical summation of pairs of sums in the fast Fourier transform only leads to round-off errors growing with the logarithm of the number of summands. For the same reason the Chimera transform has smaller round-off errors than the naive moment transform, as will be demonstrated with a small numerical experiment. Figure 1 shows the accumulated absolute root-mean-square error due to successive applications of the naive and the Chimera central moment transform of the distribution from a randomly chosen lattice node. Successive forward and backward transformations without the application of the collision operator are applied, and the result is compared to the original distribution to compute the error. Note that only a single distribution function was chosen and the result could be slightly different in different cases. However, the discrepancy between the two methods is so drastic that the advantage of the Chimera transform over the naive transform becomes obvious. Both methods show a linear growth of the error with successive repetitions of the transformation such that even the Chimera transform is far from ideal and further improvements are desirable. However, compared to the naive transform the error is systematically three orders of magnitude smaller. More specifically, the loss of precision due to the naive transform as compared to the Chimera transform accounts for 11 bits in the mantissa, which is close to half of the 23 bits of the mantissa of an IEEE-754 single precision floating point number [60]. It is hence obvious why the Chimera transform plays such an important role in the practical implementation of the cumulant method. The reduction in computational cost is also considerable. The naive forward transform requires a total of 1482 binary additions and 1458 binary multiplications. The Chimera transform needs only 270 binary additions and 162 multiplications. A similar comparison applies to the back-transformation.
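
The two transforms can be contrasted in a few lines of code. The following sketch (our own minimal illustration, assuming \(c=1\); in double precision both routes agree to round-off, so the point here is the operation structure rather than the error itself) implements Eq. (10) as a full triple sum per moment and Eq. (11) as three successive one-dimensional transforms:

```python
import numpy as np
from itertools import product

V = np.array([-1.0, 0.0, 1.0])   # lattice directions per axis

def central_moments_naive(f, u, v, w):
    """Eq. (10): one full triple sum per moment (27 moments x 27 summands each)."""
    kappa = np.zeros((3, 3, 3))
    for a, b, g in product(range(3), repeat=3):
        for i, j, k in product(range(3), repeat=3):
            kappa[a, b, g] += (V[i] - u)**a * (V[j] - v)**b * (V[k] - w)**g * f[i, j, k]
    return kappa

def central_moments_chimera(f, u, v, w):
    """Eq. (11): the same moments via three successive one-dimensional sums."""
    k_ijg = np.zeros((3, 3, 3))    # kappa_{ij|gamma}
    for i, j, g in product(range(3), repeat=3):
        k_ijg[i, j, g] = sum((V[k] - w)**g * f[i, j, k] for k in range(3))
    k_ibg = np.zeros((3, 3, 3))    # kappa_{i|beta gamma}
    for i, b, g in product(range(3), repeat=3):
        k_ibg[i, b, g] = sum((V[j] - v)**b * k_ijg[i, j, g] for j in range(3))
    kappa = np.zeros((3, 3, 3))    # kappa_{alpha beta gamma}
    for a, b, g in product(range(3), repeat=3):
        kappa[a, b, g] = sum((V[i] - u)**a * k_ibg[i, b, g] for i in range(3))
    return kappa

rng = np.random.default_rng(0)
f = rng.random((3, 3, 3))
print(np.max(np.abs(central_moments_naive(f, 0.05, 0.02, -0.03)
                    - central_moments_chimera(f, 0.05, 0.02, -0.03))))  # ~1e-16
```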

Fig. 1

Comparison of the build-up of round-off errors for the two different ways of computing central moments. The same distribution was transformed to central moments by either Eq. (10) or Eq. (11). For the transformation back to distributions the inverse Chimera transform was applied in both cases. While both methods show the same growth rates with repeated application of the forward and backward transformation cycle, the ratio between the errors remains constant at a value of about 2400, which is equivalent to a relative loss of 11 binary digits due to the transformation alone. This plot is shown here to emphasize the importance of the Chimera transform for the successful implementation of a cumulant method

3 Turbulence modelling

When simulating turbulent flows, it is often unfeasible to resolve all hydrodynamically relevant length scales down to the Kolmogorov scale. The nonlinear advection term in the Navier–Stokes equation is responsible for the transfer of kinetic energy between different wave numbers. Only wave numbers that can be resolved by the grid are captured by the method. Turbulence models are applied to capture the effect of the unresolved scales. A popular way to do this is to add an eddy viscosity, a viscosity caused by the turbulence in the unresolved scales. Large eddy simulation (LES) models add the eddy viscosity to a transient simulation which is supposed to accurately capture the dynamics of the resolved (large) eddies. Eddy viscosity models can only dissipate energy, while in reality energy might also flow from the unresolved to the resolved eddies (the so-called back-scatter). However, this is often deemed sufficiently rare to be ignored.

The fundamental problem in building an eddy viscosity model is that the small eddies on which the model depends are not known. The model hence has to infer the unknown from the known, i.e. from the dynamics of the large eddies. How this should be done is far from clear. Apart from the technical issue of making an eddy viscosity model work there is a philosophical issue in eddy viscosity modelling which appears to dominate the direction of research. While most eddy viscosity models are described as “empirical”, it has to be noted that academic publications and educational material usually put a strong emphasis on the positivistic aspect of the theory behind the empirical models. The positivistic perspective is that new knowledge is to be inferred from secured and perceivable facts. Arguing that turbulent viscosity emerges from the mixing of small eddies is a positivistic deduction. In this positivistic perspective at least the structure of the eddy viscosity model requires a physical interpretation in order to be legitimate. Only the model constants can be obtained by induction from experimental data (i.e. empirically). Despite its popularity, the positivistic perspective on eddy viscosity models stands on very thin ice scientifically. Among its many fault lines is the hypothesis that the unknown sub-scale turbulence can be inferred from the known large-scale motion in the first place. A more practical problem is that the positivistic perspective requires the eddy viscosity model to be deduced from physics rather than from numerics. In reverse, a sub-grid scale eddy viscosity model together with its modelling constants would have to be independent of the underlying solution method, i.e. whether the flow is solved with control volume methods, finite element methods or lattice Boltzmann methods. This, however, is mathematically implausible for the following reasons: Most contemporary sub-grid scale models are based on filters with a filter width proportional to the grid spacing. Also, the turbulent viscosity used in many such models (e.g. the Smagorinsky [61] and the WALE model [62]) depends on the shear rate and/or the vorticity in a way that does not allow for scale separation between the influence of the sub-grid scale model and the truncation error of the method. To be more specific we take the Smagorinsky model [61] as an example. The turbulent viscosity \(\nu _t\) is given as:

$$\begin{aligned} \nu _t=(C\Delta )^2\left| \frac{1}{2}\left( \frac{\partial \bar{u}_i}{\partial x_j}+\frac{\partial \bar{u}_j}{\partial x_i}\right) \right| =\mathcal {O}(\Delta ^2). \end{aligned}$$
(12)

The bar indicates the filtered velocity component and C is assumed to be a constant. The filter width \(\Delta \) is usually taken proportional or even equal to the grid spacing \(\Delta \propto \Delta x\) which implies \(\mathcal {O}(\Delta ^2)=\mathcal {O}(\Delta x^2)\). From a numerical point of view it appears problematic to combine a turbulent viscosity of \(\mathcal {O}(\Delta x^2)\) with a second-order convergent Navier–Stokes solver as is unfortunately often done. The problem is that the truncation error of the solver is of the same size as the influence of the turbulent viscosity. Considering the turbulent viscosity in the Navier–Stokes equation as a physical quantity does not make sense since, from a numerical point of view, it has to be absorbed into the truncation error. From this it is also evident that (contrary to the positivistic perspective of physically determined model constants) different discretization methods require different eddy viscosity models and/or different model constants as the effective filter is always a blend between the imposed sub-grid model and the unavoidable truncation error. From this insight arises a different concept of sub-grid scale models, the so-called implicit large eddy simulation (ILES) [63, 64]. The idea behind ILES is that since there is no scale separation between the turbulence model and the truncation error, a well-behaved truncation error can be used to mimic a turbulence model. One advantage of an explicit sub-grid model over an implicit one is that the former offers some adjustable parameters through which the model can be calibrated to match experimental data. It is hence possible to do without the positivistic physical interpretation of a turbulence model and judge it only by its ability to match experimental data.
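
For concreteness, Eq. (12) can be sketched in a few lines (a minimal illustration under our own assumptions: periodic central differences for the gradients, \(\Delta=\Delta x\), the common norm \(|\bar S|=\sqrt{2\bar S_{ij}\bar S_{ij}}\), and the frequently quoted constant \(C\approx 0.17\), none of which are prescribed by the text above):

```python
import numpy as np

def smagorinsky_nu_t(u, v, w, dx=1.0, C=0.17):
    """Eq. (12): nu_t = (C*Delta)^2 |S| with Delta = dx and
    |S| = sqrt(2 S_ij S_ij); gradients via periodic central differences.
    Minimal sketch; C = 0.17 is a commonly quoted value, not the paper's."""
    def ddx(q, axis):
        return (np.roll(q, -1, axis=axis) - np.roll(q, 1, axis=axis)) / (2.0 * dx)
    vel = (u, v, w)
    g = np.empty((3, 3) + u.shape)          # velocity gradient tensor g_ij = du_i/dx_j
    for i in range(3):
        for j in range(3):
            g[i, j] = ddx(vel[i], j)
    S = 0.5 * (g + np.swapaxes(g, 0, 1))    # filtered strain rate tensor
    return (C * dx)**2 * np.sqrt(2.0 * np.einsum('ij...,ij...->...', S, S))
```

The sketch makes the scale argument of the preceding paragraph tangible: since the gradients are \(\mathcal{O}(1)\) and the prefactor is \((C\Delta x)^2\), the returned viscosity is \(\mathcal{O}(\Delta x^2)\), the same order as the truncation error of a second-order solver.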

In addition to the positivistic and the purely empirical perspectives there is a third perspective, which originally played a dominant role for lattice Boltzmann simulations. Before the advent of stable collision operators the lattice Boltzmann method would usually develop instabilities when applied to flows of large Reynolds number. This problem could be solved by adding a dissipative sub-grid scale model. From this perspective, the role of a sub-grid scale model is to keep the simulation stable in the absence of sufficient resolution.

3.1 WALE model

Despite the fact that modern lattice Boltzmann models based on central moments or cumulants require no turbulence model for stabilization at very high Reynolds numbers, and despite the fact that there is no scale separation between the influence of the truncation error and the sub-grid scale model, explicit LES models have been advertised as an improvement to lattice Boltzmann models. Here we refer explicitly to the Wall-Adapting Local Eddy-viscosity (WALE) model [62] which has recently become a popular addition to commercial lattice Boltzmann codes. Unlike the Smagorinsky model, the WALE model computes the turbulent viscosity not only from the shear rate but also from the vorticity. The turbulent viscosity reads:

$$\begin{aligned} \nu _t=(C_w\Delta )^2 \frac{(S_{ij}^dS_{ij}^d)^{3/2}}{(\bar{S}_{ij}\bar{S}_{ij})^{5/2}+(S_{ij}^dS_{ij}^d)^{5/4}+\varepsilon }. \end{aligned}$$
(13)

with \(\varepsilon \) being a small number avoiding division by zero and the other terms, with implied summation over repeated indices, defined as:

$$\begin{aligned} \bar{S}_{ij}&=\frac{1}{2}\left( \frac{\partial \bar{u}_i}{\partial x_j}+\frac{\partial \bar{u}_j}{\partial x_i}\right) ,\end{aligned}$$
(14)
$$\begin{aligned} S_{ij}^d&=\frac{1}{2}(\bar{g}_{ij}^2+\bar{g}_{ji}^2)-\frac{1}{3}\delta _{ij}\bar{g}_{kk}^2,\end{aligned}$$
(15)
$$\begin{aligned} \bar{g}_{ij}&=\frac{\partial \bar{u}_i}{\partial x_j}. \end{aligned}$$
(16)

The WALE model is supposed to overcome the over-damping of the Smagorinsky model in the vicinity of walls where the turbulent viscosity based on shear rate alone is unphysically high. This is traditionally mitigated by applying a damping function near the wall as introduced by van Driest [65]. However, this has limitations in complex geometries as it requires knowledge of the friction coefficient [62]. The idea behind the WALE model is hence to essentially turn the sub-grid model off at the wall.

In our implementation the vorticity and the shear rate are computed by simple central differences. The filter width is identical to the grid spacing (i.e. \(\Delta =\Delta x\)), and the constant is chosen to be \(C_w=0.5\) in accordance with recommendations for isotropic turbulence [66, 67].
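
Under the same assumptions as in the Smagorinsky sketch above (periodic central differences, \(\Delta=\Delta x\), \(C_w=0.5\); function name and field layout are ours), Eqs. (13)–(16) can be sketched as follows:

```python
import numpy as np

def wale_nu_t(u, v, w, dx=1.0, Cw=0.5, eps=1e-32):
    """Eqs. (13)-(16): WALE eddy viscosity with gradients from periodic
    central differences, Delta = dx and Cw = 0.5 as stated in the text.
    A self-contained illustration, not the reference implementation."""
    def ddx(q, axis):
        return (np.roll(q, -1, axis=axis) - np.roll(q, 1, axis=axis)) / (2.0 * dx)
    vel = (u, v, w)
    g = np.empty((3, 3) + u.shape)                 # Eq. (16): g_ij = du_i/dx_j
    for i in range(3):
        for j in range(3):
            g[i, j] = ddx(vel[i], j)
    g2 = np.einsum('ik...,kj...->ij...', g, g)     # (g^2)_ij = g_ik g_kj
    Sd = 0.5 * (g2 + np.swapaxes(g2, 0, 1))        # Eq. (15): symmetric part ...
    tr = np.einsum('ii...->...', g2)
    for i in range(3):
        Sd[i, i] -= tr / 3.0                       # ... minus the trace
    S = 0.5 * (g + np.swapaxes(g, 0, 1))           # Eq. (14): strain rate tensor
    SdSd = np.einsum('ij...,ij...->...', Sd, Sd)
    SS = np.einsum('ij...,ij...->...', S, S)
    return (Cw * dx)**2 * SdSd**1.5 / (SS**2.5 + SdSd**1.25 + eps)  # Eq. (13)
```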

3.2 Limiters

An unconventional alternative to explicit turbulence modelling is to enforce stability by applying limiters as done in flux-corrected schemes [68]. A similar approach can also be applied in the case of the lattice Boltzmann method.

In [26] it was observed that the cumulant method with optimized relaxation rates had a smaller range of stability than the regularized cumulant method. The same paper proposes a limiter that restores stability while preserving the order of convergence. The idea of the limiter is to adjust the relaxation rate \(\omega _{\star }\) assigned to a scaled cumulant \(C_{\star }\) by:

$$\begin{aligned} \omega _{\star }^{\prime }=\omega _{\star }+\frac{(1-\omega _{\star })|C_{\star }|}{\rho \lambda _{\star }+|C_{\star }|}. \end{aligned}$$
(17)

Here \(C_{\star }\) is any of the third-order cumulants specified in “Appendix A” [Eqs. (A.22), (A.23) and permutations thereof], and \(\omega _{\star }\) is any of their respective relaxation rates as used in Eqs. (A.63) to (A.65). The parameter \(\lambda _{\star }\) is a soft limit for the cumulant \(C_{\star }\). The limiter is ineffective as long as \(|C_{\star }|\ll \rho \lambda _{\star }\). As \(|C_{\star }|\) gets larger, \(\omega _{\star }\) approaches one. In [26] stability was restored by setting \(\lambda _{\star }=0.01\) for all third-order cumulants. This is an arbitrary value, but despite its heuristic motivation it leads to satisfactory results. For example, the limiter with this value was used to successfully simulate the drag crisis behind a sphere in [69]. The value of \(\lambda _{\star }\) is always a compromise between stability and accuracy.
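
In code the limiter of Eq. (17) is a one-liner. The sketch below (function name and the illustrative rate \(\omega_{\star}=1.8\) are ours) shows how it leaves small cumulants essentially untouched while pushing the rate towards one for large ones:

```python
def limited_rate(omega, C, rho=1.0, lam=0.01):
    """Eq. (17): soft limiter for the relaxation rate omega of a
    third-order cumulant C; lam = 0.01 is the heuristic value from [26]."""
    return omega + (1.0 - omega) * abs(C) / (rho * lam + abs(C))

print(limited_rate(1.8, C=1e-6))  # ~1.8: inactive for |C| << rho*lam
print(limited_rate(1.8, C=1.0))   # ~1.008: rate pushed towards one
```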

The limiter behaves similarly to a sub-grid scale model in the technical sense that it adds dissipation locally where instabilities could otherwise emerge. It is, however, not based on any positivistic physical considerations. Also, it has a strictly different asymptotic behaviour from an eddy viscosity model such as Smagorinsky or WALE. As explained in section 6 of [26] and confirmed in section 7 therein, the limiter is designed to add dissipation at the order of the leading error of the fourth-order convergent scheme, whereas the eddy viscosity approach adds dissipation at the leading order of the second-order convergent scheme. In signal processing terms, it can be said that the low-pass characteristic of the limiter is steeper than that of the eddy viscosity model.

4 Investigated models

During the evolution of the cumulant method many different variants of the model were presented. One goal of the current study is to investigate the differences between these models through the numerical benchmark of a decaying three-dimensional Taylor–Green vortex. Even though classical MRT models [14,15,16, 70] based on raw moments and central moment-based models [17, 18, 46, 71,72,73,74,75] including factorized central moment methods [76, 77] are part of the heritage of the cumulant method, we abstain from including them in the current analysis. This is mostly because the deficiencies of these models have already been widely discussed (see in particular our discussion in [21]). The models considered here are: the cumulant method with regularized ghost modes as published in 2015 [21], denoted by K15; the cumulant method with optimized relaxation rates for third-order cumulants published in 2017, denoted by K17; the optimized cumulant method with advection correction using three additional finite differences, denoted by KF3; the latter two methods equipped with a stabilizing limiter, denoted by K17L and KF3L; and the K17 method equipped with a WALE sub-grid model, denoted by K17W. It is also common to include a BGK variant of the lattice Boltzmann scheme in such discussions. The standard lattice BGK method is notorious for its limited range of stability and its susceptibility to grid scale oscillations, which makes it an easily beaten competitor. The standard BGK can be improved significantly by replacing the Taylor expanded equilibrium by the equilibrium derived from vanishing cumulants. Here we term this method BGK+. In [26] we showed that numerical errors for simple test simulations could be reduced by up to two orders of magnitude by using the modified equilibrium. In addition, numerical stability was substantially improved. In this paper we consider only the BGK+ method together with the cumulant method with a single relaxation time, denoted KSRT. Both methods use the same stencil, the same relaxation rate and the same equilibrium. The nomenclature for the models under consideration is listed in Table 1. Reference implementations for the cumulant method are available for download within the open-source framework VirtualFluids [78].

Table 1 Abbreviations for the different methods

4.1 Predicted errors in the cumulant lattice Boltzmann method

The leading error terms in the cumulant lattice Boltzmann method have been computed analytically by Taylor expansion [26] and through aliasing relations of the asymptotic expansion [32] for the diffusion and advection errors, respectively. The current paper is focused on numerical rather than analytical assessment of the methods. Here we merely evaluate the error terms by inserting the parameters of the respective cumulant methods and list them in Table 2. Without loss of generality all error terms are listed only for the Navier–Stokes equation governing the evolution of u (i.e. for the x-velocity). Error terms in the remaining directions can be obtained by exchanging the indices. The leading error terms comprise various spatial derivatives of velocity and their prefactors, which depend on the model parameters.

Table 2 Leading error terms of the cumulant methods according to the analysis in [26, 32]

Table 2 provides some insight into the expected behaviour of the different methods. While all errors in Table 2 are second order in diffusive scaling (i.e. \(\Delta t\propto \Delta x^2\)), they are of varying orders in acoustic scaling (\(\Delta t\propto \Delta x\)). The fact that the grid spacing \(\Delta x\) appears in the denominator indicates divergence of the method for \(\Delta x\rightarrow 0\) if \(\Delta t=const\), which is a manifestation of the diffusive CFL condition. While it is generally accepted for a numerical method that the grid spacing cannot be made arbitrarily small without making the time step small too, it would be desirable that at least the time step could be made arbitrarily small at fixed grid spacing. This, however, is not the case for the lattice Boltzmann method with fixed relaxation rates for non-hydrodynamic moments [79], of which the K15 method is an example. From the table the reason for this unfavourable behaviour becomes clear: some of the error terms of the K15 method have \(\Delta t\) in the denominator. In diffusive scaling this is compensated through the \(\Delta x^4\) in the numerator, but at fixed \(\Delta x\) the method will obviously diverge for vanishing \(\Delta t\). It is seen that the leading-order error in the KF3 method does not disappear entirely, as already discussed in [26]. The fourth-order convergent cumulant method is hence not strictly fourth-order convergent in an orthodox sense. However, the appearance of the viscosity taken to the cube renders the remaining error essentially negligible for simulations of turbulent flow where \(\nu \) is small.

5 Three-dimensional Taylor–Green vortex

The Taylor–Green vortex is a synthetic flow field that fulfils the time-dependent incompressible Navier–Stokes equations. In two dimensions the vortex is stable and keeps its shape, while dissipating energy. This is due to the fact that vorticity in two dimensions becomes a scalar which cannot be generated by the nonlinear term in the (two-dimensional) Navier–Stokes equation [80].

In three dimensions vortex stretching occurs and energy is transported from large scales to smaller scales, such that new small-scale vortices appear, while the base vortex decays. Since this is a cascading process, the flow transitions to turbulence. An exact initial condition can be specified, which allows comparisons of different solution methods.

The Taylor–Green vortex benchmark has been used in connection with many numerical methods, often considering the performance of the methods for under-resolved flows. Diosady and Murman [81] used the Taylor–Green vortex to measure the computational cost associated with obtaining a given level of accuracy for discontinuous-Galerkin finite element methods with different orders of the ansatz functions. They found that a sixteenth-order spatial discretization with only \(48^3\) elements had a much better accuracy than a second-order method with \(256^3\) elements and had a three orders of magnitude lower computational cost. Flad et al. [82] introduced an adaptive filtering technique for high-order discontinuous-Galerkin methods in order to reduce aliasing artefacts and used the under-resolved Taylor–Green vortex simulations to assess the performance of their method. Bull and Jameson used the Taylor–Green vortex to investigate the performance of a high-order flux reconstruction scheme and found that using high-order ansatz functions can lead to oscillations [83]. Piatkowski et al. used the Taylor–Green vortex test case to benchmark their discontinuous-Galerkin-based splitting method [84]. Wiart and Hillewaert used the Taylor–Green benchmark on the discontinuous-Galerkin solver Argo using different kinds of meshes [85]. Kulikov and Son used the Taylor–Green test for assessing the performance of the CABARET scheme for under-resolved simulations [86]. Lee et al. used the benchmark for their low-dissipation solver based on OpenFOAM [87]. The same benchmark was also used in several studies using different variants of the lattice Boltzmann scheme [88,89,90], the semi-Lagrangian off-lattice Boltzmann method [3], the lattice kinetic scheme [91] and GKS [92, 93].

The test case of a three-dimensional Taylor–Green vortex at fixed Reynolds number is chosen here for the following reasons:

  • The test case is well studied, and reference solutions for energy decay, enstrophy evolution and energy spectra are readily available from the literature. Since the test case is rather popular, it is also possible to compare our results with those obtained with other methods, including results not yet published.

  • The specific Reynolds number of 1600 is chosen as it is a case for which reference data are available not only for the growing but also for the decaying enstrophy. In addition, the Reynolds number of 1600 is the highest for which single relaxation time models provide a stable solution at feasible resolutions.

  • Studying a single test case allows a dense sampling of different simulation parameters.

For our investigations we follow the setup of Wang et al. [94] which was also used by Jacobs et al. [95]. The flow is solved in a periodic domain \(-\pi L< x_1 < \pi L\), \(-\pi L< x_2 < \pi L\) and \(-\pi L< x_3 < \pi L\), where L is a reference length. For incompressible flow the physical parameters are defined by the Reynolds number, which is chosen as

$$\begin{aligned} {\hbox {Re}} = \dfrac{\rho _0 U L}{\mu } = 1600 . \end{aligned}$$
(18)

The initial flow field is given by:

$$\begin{aligned} u_0&= U \sin (x / L) \cos (y / L) \cos (z / L), \\ u_1&= - U \cos (x / L) \sin (y / L) \cos (z / L), \\ u_2&= 0, \\ p&= p_0 + \dfrac{\rho _0 U^2}{16} \big ( \cos (2 x / L) + \cos (2 y / L) \big ) \big ( 2 + \cos (2 z / L) \big ). \end{aligned}$$
(19)

The initial pressure field is set implicitly via the density field. The evolution of the flow field is observed for \(20 t^*\), where \(t^*= L / U\) is the reference time. We define a base time step \(\Delta t_0\) for the simulations which corresponds to \(t^* = 250 \Delta t_0\) at a spatial resolution of \(64^3\). The simulations are run on three different grids with \(64\times 64\times 64\), \(128\times 128\times 128\) and \(256\times 256\times 256\) grid nodes, each of which is run with three different time steps. The Reynolds number is 1600 unless stated otherwise. The time step is coupled to the Mach number. The chosen time steps allow us to study the flow under both acoustic and diffusive scaling. The diffusive scaling (\(\Delta t\propto \Delta x^2\): constant relaxation rates, varying Mach numbers) is obtained by regarding the set \(\left\{ \{X=64,\Delta t =\Delta t_0\},\{X=128,\Delta t =\Delta t_0/4\},\{X=256,\Delta t =\Delta t_0/16\}\right\} \), whereas the sets \(\left\{ \{X=64,\Delta t =\alpha \Delta t_0\},\{X=128,\Delta t =\alpha \Delta t_0/2\},\{X=256,\Delta t =\alpha \Delta t_0/4\}\right\} \) with \(\alpha \in \{1,1/2,1/4\}\) correspond to acoustic scaling (\(\Delta t\propto \Delta x\): constant Mach number). The considered resolutions in space and time are also listed in Table 3 together with the velocity in lattice units and the approximate Mach number.
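
As a reference for the setup, the initial fields of Eq. (19) can be generated as follows (a plain NumPy sketch on an \(n^3\) grid; the function name and grid layout are ours, and in the lattice Boltzmann runs the pressure enters implicitly through the density, as noted above):

```python
import numpy as np

def taylor_green_init(n, L=1.0, U=1.0, rho0=1.0, p0=0.0):
    """Initial fields of Eq. (19) on an n^3 periodic grid
    covering [-pi*L, pi*L)^3."""
    s = np.linspace(-np.pi * L, np.pi * L, n, endpoint=False)
    x, y, z = np.meshgrid(s, s, s, indexing='ij')
    u0 =  U * np.sin(x / L) * np.cos(y / L) * np.cos(z / L)
    u1 = -U * np.cos(x / L) * np.sin(y / L) * np.cos(z / L)
    u2 = np.zeros_like(u0)
    p = p0 + rho0 * U**2 / 16.0 * (np.cos(2*x/L) + np.cos(2*y/L)) * (2.0 + np.cos(2*z/L))
    return u0, u1, u2, p

u0, u1, u2, p = taylor_green_init(64)
```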

Table 3 Spatial and temporal resolution of all considered simulations, together with the initial lattice velocity and the approximate Mach number of the initial condition

6 Numerical results

Here we use the three-dimensional Taylor–Green vortex to study the performance of the cumulant lattice Boltzmann model. The results from Wang et al. [94] are taken as a reference solution which we consider to be sufficiently accurate for the purpose of this study. We investigate the influence of various parameters and modelling options on the performance of the simulation.

To this end, we primarily observe the time evolution of integral kinetic energy and integral enstrophy. The time evolution of the integral kinetic energy characterizes the dissipation of the flow. It is calculated by

$$\begin{aligned} E_{kin} = \dfrac{1}{2 \rho _0 \Omega } \int \limits _\Omega \rho u_i u_i d\Omega , \end{aligned}$$
(20)

with \(\Omega \) being the volume of the computational domain, i.e. \(\Omega = (2 \pi L)^3\). The integral enstrophy \(\mathcal {E}\) (which is proportional to the integral energy dissipation rate \(\epsilon \)) is a measure of the dissipation of kinetic energy and can be computed in two different ways, which should yield identical results for incompressible flow [94]. First, the enstrophy is proportional to the time derivative of the kinetic energy [96], i.e.

$$\begin{aligned} \mathcal {E} = \dfrac{1}{2 \nu } \epsilon = -\dfrac{1}{2 \nu } \dfrac{\partial E_{kin}}{\partial t}. \end{aligned}$$
(21)

Second, the enstrophy can be computed locally as the square of the vorticity \(\omega = \nabla \times u\) [94, 95], such that the integral enstrophy is given by

$$\begin{aligned} \mathcal {E} = \dfrac{1}{2 \rho _0 \Omega } \int \limits _\Omega \rho \omega _i \omega _i d\Omega . \end{aligned}$$
(22)

The former relation is strongly coupled to the dynamic process of dissipation, whereas the latter is purely kinematic. Indeed, the latter is a measure of how strongly small-scale structures are represented in the flow field. For spatially discretized flow simulations, this is an important measure, since some numerical methods tend to smear out small-scale structures on the grid scale.

6.1 Integral kinetic energy decay and enstrophy decay

As a first test we investigate the decay of the integral kinetic energy and the enstrophy. Unless otherwise stated, we obtain the enstrophy by computing the vorticity with eighth-order accurate finite differences. Figure 2 shows the decay of kinetic energy for several lattice Boltzmann models. Figure 3 shows the corresponding enstrophy evolution.
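A compact post-processing sketch for Eqs. (20) and (22) is given below, assuming a periodic velocity field u of shape (3, N, N, N) in physical units. The paper does not list its stencil, so the standard eighth-order central-difference coefficients are used here as an assumption:

import numpy as np

def ddx(f, axis, dx):
    # Eighth-order central difference on a periodic grid (assumed stencil);
    # np.roll implements the periodic boundary naturally.
    c = (4/5, -1/5, 4/105, -1/280)          # coefficients for offsets 1..4
    d = np.zeros_like(f)
    for k, ck in enumerate(c, start=1):
        d += ck * (np.roll(f, -k, axis=axis) - np.roll(f, k, axis=axis))
    return d / dx

def kinetic_energy(rho, u, rho0):
    # Integral kinetic energy, Eq. (20); on a uniform grid the cell volume
    # cancels against the domain volume, leaving a plain mean.
    return 0.5 * np.mean(rho * np.sum(u * u, axis=0)) / rho0

def enstrophy(rho, u, rho0, dx):
    # Integral enstrophy, Eq. (22), from the curl of u.
    wx = ddx(u[2], 1, dx) - ddx(u[1], 2, dx)   # d u_z/d y - d u_y/d z
    wy = ddx(u[0], 2, dx) - ddx(u[2], 0, dx)   # d u_x/d z - d u_z/d x
    wz = ddx(u[1], 0, dx) - ddx(u[0], 1, dx)   # d u_y/d x - d u_x/d y
    return 0.5 * np.mean(rho * (wx * wx + wy * wy + wz * wz)) / rho0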

Fig. 2

Kinetic energy evolution for various LBM models

Fig. 3

Enstrophy evolution for various LBM models

It is observed from Figs. 2 and 3 that the scaling (diffusive or acoustic) is of minor relevance for both the energy and the enstrophy. At all resolutions the BGK+ simulations follow the decay of kinetic energy and the growth of enstrophy of the reference closely in the beginning. The coarse-resolution simulation becomes unstable after \(t^{*}\sim 10\) irrespective of the Mach number. As could be expected, the coarse grid simulations underestimate the peak enstrophy quite considerably: enstrophy is dominated by the smallest resolved scales, and the coarse simulation cannot resolve all scales. This under-prediction of enstrophy in under-resolved simulations is common to all methods, which makes enstrophy a measure of the minimum feature scale that can be resolved by a given method at a given grid resolution.

It is quite interesting to compare the BGK+ simulation to the KSRT simulation, which shares the same lattice, the same equilibrium and the same single relaxation time. The two cases with the higher resolutions show very similar results. However, the KSRT method remains stable also for the coarse simulation. This is interesting, as it could indicate that the stability of the cumulant method does not only originate from the equilibrium and the damping of higher-order modes but also persists for the single relaxation time version. However, one should be careful not to draw conclusions from a single observation.

We note here that Nathen et al. studied the same test case with a standard BGK model and 19 instead of 27 velocities [88]. They also observed instability for the BGK model at a spatial resolution of \(64^3\) lattice nodes. However, in their case the instability set in much earlier than in our case, indicating that the BGK+ method is more stable than the regular BGK method.

Next we test the cumulant lattice Boltzmann method in regularization mode, i.e. by setting all relaxation rates irrelevant for the shear viscosity to one. This is the set of relaxation rates used in [21], denoted here as K15 in Figs. 2 and 3. We observe that the deviation from the reference simulation is much larger for K15 than for the KSRT and BGK+ simulations. This is obviously due to a spurious damping, visible in the rather quick decrease in integral kinetic energy and a lower peak in enstrophy than observed for the single relaxation time methods. Unlike the single relaxation time methods, the regularized cumulant method shows a notable dependency on the Mach number. We also point out that the deviation from the reference increases with decreasing Mach number, which is an undesired effect as it implies that the method becomes less accurate if a smaller time step is used. The reason for this behaviour is clearly seen from the leading error term as given in Table 2. Several of the spurious fourth-order spatial derivatives of velocity have prefactors of the form \(\mathcal {O}(\Delta x^4/\Delta t)\), which are second order in diffusive scaling but divergent if the time step is reduced at fixed spatial resolution. This effect is well known to exist in lattice Boltzmann methods with multiple relaxation times and is not particular to cumulants. It was first described by Dellar [79] for a multiple relaxation time method using a raw moment basis and was further discussed in the context of cumulants in [26].

The Mach number dependence of the regularized cumulant method is easily overcome by the optimal relaxation rates derived in [26] and denoted as K17 in Figs. 2 and 3. This is also evident from the absence of any error terms with \(\Delta t\) in the denominator for the K17 method in Table 2. The optimal relaxation rates were obtained in [26] from a linearized asymptotic expansion. They increase the convergence order of the diffusive term in the Navier–Stokes equation provided that the Reynolds number is large enough that terms of order \(\nu ^3\) can be neglected. It is not self-evident that this should also increase the accuracy of the method in the under-resolved case, as a higher order of convergence might also be associated with spurious oscillations in the absence of sufficient dissipation. Here, however, we see that the optimized cumulant method deviates less from the reference solution across all studied resolutions than the KSRT method. This indicates that setting all rates to the same value is not the best choice and is also not necessary for eliminating the dependence on the Mach number. We also note that the deviations from the reference solution for the coarser simulations are quite similar to the ones observed for the KSRT method. However, at the highest resolution the correspondence between the solution obtained by the optimized cumulant method and the reference solution is significantly closer than that between the highest resolution KSRT simulation and the reference, as should indeed be expected from an increased order of convergence. When comparing the enstrophies resulting from the simulations based on the regularized cumulant method K15 to the ones of the optimized method K17, we note that the optimized method at resolution \(64\times 64\times 64\) shows a striking correspondence to the regularized cumulant method at resolution \(128\times 128\times 128\). The same can be said for the optimized method at resolution \(128\times 128\times 128\) and the regularized method at resolution \(256\times 256\times 256\). The results for the cumulant method with optimal parametrization and advection correction (KF3) are also shown. They look quite similar to those from the K17 method, indicating that the remaining error is not dominated by advection in the current setup.

In Figs. 2 and 3 we also show results for the cumulant method enhanced by the WALE sub-grid model, denoted as K17W. All relaxation rates were set according to the optimal set of relaxation rates (i.e. K17). The WALE model behaves very similarly to the regularized cumulant method in the early phase of the decay of integral kinetic energy and in the reduced peaks of the enstrophy. However, the WALE model is insensitive to changes in the Mach number, a property inherited from the K17 method. Here, the similarity with the results from the optimized cumulant method without sub-grid scale model (K17) at half the resolution is even more striking. Adding a WALE model to the optimized cumulant method appears to reduce the effective resolution by a factor of two. This does not come as a surprise, as it is the very purpose of the sub-grid model to filter out eddies at the filter scale, which here coincides with the grid spacing. Thus, a sub-grid scale model that reduces the effective resolution by a factor of about two does exactly what can be expected. Whether this is actually useful for efficient simulations is of course a different question.

In Fig. 4 the energy and enstrophy evolutions for the K17L and KF3L methods, which use a limiter for the third-order cumulants, are shown. The purpose of the limiter is to enhance stability, which is not necessary in the current setup. The results indicate some additional damping coming from the limiter, in particular at the coarser resolutions. It is noteworthy that the high-resolution case is only very weakly affected.

When comparing the models with limiters (K17L and KF3L) and the one using a turbulence model (K17W) with the unmodified models, it is important to note that the limiter and the turbulence model are parameterized, and the results would change if different parameter values were chosen. However, we note that the WALE model adjusts the viscosity at the order of the leading error of the second-order scheme, whereas the limiter acts at the leading error of the fourth-order scheme. It is therefore to be expected that the limiter has less effect at low wave numbers than the WALE model, as will be demonstrated below.

Fig. 4

Kinetic energy and enstrophy evolution for the lattice Boltzmann models with limiters for the third moments

So far we have computed the enstrophy using Eq. (22), but as discussed above, the enstrophy can also be computed from the dissipation rate according to Eq. (21). Here the dissipation rates are computed by central differencing of the kinetic energy over time. A time increment of twenty time steps is used for the finite differences. For convenience, the results are scaled to the enstrophy such that Figs. 5 and 6 are quantitatively comparable to Fig. 3. Unlike the enstrophy according to Eq. (22), the magnitude of the dissipation rate [Eq. (21)] is seen to be essentially insensitive to the resolution for all considered methods. The discrepancy between the enstrophy and the dissipation rate will in the following be used for determining an effective viscosity.
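A sketch of this evaluation, assuming a kinetic-energy time series E (a NumPy array sampled at every time step) and reading the twenty-step increment as the half-width of the central difference (the exact stencil is not spelled out above):

import numpy as np

def enstrophy_from_dissipation(E, dt, nu, h=20):
    # Eq. (21): scaled dissipation rate from the kinetic-energy history E;
    # h is the differencing half-width in time steps (our assumption).
    n = np.arange(h, len(E) - h)
    dEdt = (E[n + h] - E[n - h]) / (2.0 * h * dt)   # central difference
    return n * dt, -dEdt / (2.0 * nu)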

Fig. 5

Scaled dissipation rate evolution for the BGK+, KSRT, K15, K17W, K17 and KF3 models as in Fig. 3

Fig. 6

Scaled dissipation rate evolution for K17L and KF3L methods as in Fig. 4

6.2 Energy balance and effective viscosity

In order to gain quantitative insight into the accuracy with which our under-resolved simulations capture the actual physics of the problem, we compute the effective viscosity. The effective viscosity can be computed using the balance equation of kinetic energy [97, 98]. The dissipation of kinetic energy is balanced by the enstrophy as [96]:

$$\begin{aligned} \frac{d}{dt}E_{kin}=-2\nu _{eff} \mathcal {E}. \end{aligned}$$
(23)

The time evolution of the kinetic energy (Fig. 2) is better captured than the enstrophy (Fig. 3) which is biased towards the smallest resolved scales. Figures 7 and 8 show the effective viscosity of the lattice Boltzmann schemes over time. The time derivative of the kinetic energy was obtained by central differences using a time increment of 20 time steps. We observe for all cases that the effective viscosity approaches the nominal viscosity when the resolution is sufficiently large and that the performance of the K15 method deteriorates for smaller time steps at fixed spatial resolution. A distinctive feature only seen in the WALE simulation is that the effective viscosity for the lowest resolution deviates from the nominal value even in the beginning of the simulation. Comparing the results for K17 and KF3 with K17L and KF3L in Figs. 7 and 8 also provides an indication of the effect of the advection correction. While the methods show a similar behaviour at later times, it is seen that the models with advection correction capture the nominal viscosity much better in the initial phase of the simulation, in particular when a coarse grid is used.
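A minimal sketch of this evaluation (our own illustration), assuming time series E and Ens for the kinetic energy and the vorticity-based enstrophy of Eq. (22), both sampled at every time step:

import numpy as np

def effective_viscosity(E, Ens, dt, h=20):
    # Eq. (23): nu_eff = -(dE/dt) / (2 * enstrophy); h is the
    # differencing half-width in time steps (our assumption).
    n = np.arange(h, len(E) - h)
    dEdt = (E[n + h] - E[n - h]) / (2.0 * h * dt)
    return n * dt, -dEdt / (2.0 * Ens[n])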

Fig. 7

Effective viscosity evolution for various LBM models. The dashed line indicates the nominal viscosity

Fig. 8

Effective viscosity evolution for LBM models with limiter. The dashed line indicates the nominal viscosity

6.3 Energy spectra

In Figs. 10 and 11 we display the energy spectra of the investigated methods. We only show spectra at \(t=10t^{*}\), which is shortly after the enstrophy reaches its maximum. At a Reynolds number of 1600 the flow is not highly turbulent, such that the Kolmogorov \(E\sim k^{-5/3}\) behaviour is observed only in a limited range of wave numbers. Using the enstrophy of the reference simulation at \(t=10t^{*}\), which is \(\mathcal {E}\approx 9(t^{*})^{-2}\), we can estimate the Kolmogorov length \(\eta \) using the initial Reynolds number to substitute the viscosity:

$$\begin{aligned} \eta =\left( \frac{\nu ^3}{\epsilon }\right) ^\frac{1}{4}=\left( \frac{\nu ^2}{2\mathcal {E} }\right) ^\frac{1}{4}=\left( \frac{L^4}{2({\hbox {Re}}\cdot t^{*})^2\mathcal {E} }\right) ^\frac{1}{4}\approx 0.012L. \end{aligned}$$
(24)

As the domain measures \(2\pi L\) in each direction, the Kolmogorov length measures \(0.12\Delta x\), \(0.24 \Delta x\) and \(0.49 \Delta x\) for the \(64\times 64\times 64\), the \(128\times 128\times 128\) and the \(256\times 256\times 256\) resolutions, respectively. We hence see that none of the meshes reaches DNS quality, although they are also not under-resolved by more than an order of magnitude. Figure 10 shows the results for the methods K17, KF3, K17L, K17W and K15 computed with the MATLAB code provided by Felix Dietzsch [99]. The energy spectrum is computed by integrating over shells of equal wave number [100]. The results are in excellent agreement with those from Foti and Duraisamy [101], who show data at the same time instant as we do. The results from Foti and Duraisamy were digitized from a low-resolution figure in [101] and plotted here only for \(k \le 128/(2\pi L)\), while in the source they were plotted for \(k \le 128\sqrt{3}/(2\pi L)\). It appears to be common in the literature [101, 102] to plot the energy spectrum up to the highest theoretically available wave number \(2\sqrt{3}/\Delta x\). We however decided to plot results only up to \(2/\Delta x\) due to the reasoning illustrated in Fig. 9. Due to the sampling theorem for real-valued discrete data sets, relevant information is only contained up to \(k=2/\Delta x\) in each Cartesian direction of the Fourier transform. The highest wave number is therefore \(k_{\mathrm{max}}=2\sqrt{D}/\Delta x\) with \(D=3\) being the number of dimensions. However, a complete isotropic energy shell is only available up to \(k_{maxShell}=2/\Delta x\), as can be clearly seen in Fig. 9. Plotting results of a low-resolution simulation next to results of a high-resolution one beyond \(2/\Delta x_{lowResolution}\) is therefore misleading, as the low-resolution result would only be integrated over a partial energy shell, whereas the high-resolution result would be integrated over a full energy shell.
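As a quick numerical check of Eq. (24) and the quoted ratios (our own arithmetic, with \(L = 1\)):

import numpy as np

# Eq. (24) with L = 1: nu = L**2 / (Re * t*) substitutes the viscosity.
Re, t_star, ens = 1600.0, 1.0, 9.0            # ens ~ 9 / t***2 at t = 10 t*
eta = (1.0 / (2.0 * (Re * t_star)**2 * ens))**0.25
print(f"eta = {eta:.4f} L")                   # ~ 0.012 L
for X in (64, 128, 256):
    dx = 2.0 * np.pi / X                      # domain edge length 2*pi*L
    print(f"X={X:3d}  eta/dx = {eta / dx:.2f}")   # ~ 0.12, 0.24, 0.49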

Fig. 9

Sketch of the energy spectrum in Fourier space for two different resolutions. Only two dimensions are shown for clarity. Due to the sampling theorem, information is only contained in the lower quarter (octant in 3D) of the spectrum. The energy \(E(|\mathbf {k}|)\) is obtained by integrating over spherical shells. It is common in the literature to show results integrated up to \(k_{\mathrm{max}}=2\sqrt{D}/\Delta x\). However, we plot them only below \(2/\Delta x\) due to the fact that this is the highest wave number for which a complete energy shell is present. Plotting results of the low-resolution simulation with \(\Delta x_{64}\) alongside results of the high-resolution simulation with \(\Delta x_{128}\) at the depicted \(\mathbf {k}\) would be misleading since the two results would be shown for different portions of the energy shell and would hence not represent the same physics
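A sketch of the shell integration illustrated in Fig. 9 (an illustration under our own assumptions, not the MATLAB code of [99]): for a real, periodic velocity field u of shape (3, N, N, N), Fourier modes are binned by the rounded integer magnitude of their wave number index, and the spectrum is reported only up to index N/2, the largest radius for which the spherical shell is complete inside the sampled cube.

import numpy as np

def energy_spectrum(u):
    # Shell-integrated energy spectrum of a real periodic field u[3, N, N, N].
    # Returns shell indices 0..N//2, i.e. only wave numbers with complete
    # spherical shells (cf. Fig. 9).
    N = u.shape[-1]
    uk = np.fft.fftn(u, axes=(1, 2, 3)) / N**3        # normalized transform
    e = 0.5 * np.sum(np.abs(uk)**2, axis=0)           # energy per mode
    k = np.fft.fftfreq(N, d=1.0 / N)                  # integer wave indices
    kmag = np.sqrt(k[:, None, None]**2 + k[None, :, None]**2
                   + k[None, None, :]**2)
    shells = np.bincount(np.rint(kmag).astype(int).ravel(), weights=e.ravel())
    return np.arange(N // 2 + 1), shells[:N // 2 + 1]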

For the K17, KF3 and K17L methods at the highest resolution there is essentially no deviation from the spectral results up to a wave number of \(2\pi L k\approx 80\). The KF3 method, which has the highest theoretical accuracy among the tested methods, also shows the highest consistency between results of different resolutions, although the difference from the K17 method is not large. The limiter in the K17L method acts like an eddy-viscosity turbulence model by adding dissipation at higher wave numbers. This is similar to the explicit sub-grid scale model in the K17W method, which dissipates the higher wave numbers to a larger extent than the K17L method. A quite significant difference between the limiter (K17L) and the sub-grid scale model (K17W) lies in the steepness at which the spectra deviate from the reference. Even though both methods drop to comparable values at the highest resolved wave number, the K17L method is seen to drop much more steeply. This indicates that the limiter affects only the highest wave numbers, while the sub-grid scale model also affects intermediate wave numbers. This behaviour is to be expected, as the two models act on different asymptotic orders. The dissipation added by the limiter was designed [26] not to compromise the fourth-order convergence of the cumulant method, whereas the sub-grid model is designed to add an eddy viscosity proportional to \(\mathcal {O}( \Delta x^2)\). The dissipation added by the limiter is hence two asymptotic orders smaller (i.e. \(\mathcal {O}(\Delta x^4)\)) than the eddy viscosity in the K17W method. This naturally results in a much steeper fall-off irrespective of the constants used in the two methods (which are not related to each other).

The K15 method behaves differently from the other lattice Boltzmann kernels in two respects. First, while the methods K17, K17L, K17W, KSRT and BGK+ are not sensitive to the time step size, K15 appears to be very sensitive. Second, the method displays large dissipation at higher wave numbers, and the dissipation becomes stronger with shorter time steps. The steep drop in kinetic energy at higher wave numbers in the K15 method is similar to the one observed in [91] for the lattice kinetic scheme, which is also a regularized lattice Boltzmann method.

The BGK+ and KSRT methods also show good agreement with the spectral reference in the low wave number regime (see Fig. 11). In the high wave number regime, the disagreement with the reference is larger than for the K17 and KF3 methods. The BGK+ method at the lowest resolution is the only case where an over-prediction of the energy is observed, but this is due to the onset of instability leading to complete numerical divergence at later times. In general, the BGK+ and KSRT methods are characterized by low dissipation, which is why they match part of the high wave number domain of the spectrum better than the K17L and K17W methods. However, at the very highest wave numbers the mismatch grows more rapidly than for the K17L method.

Fig. 10

Energy spectra for the K17, KF3, K17L, K17W and K15 methods at time \(t=10t^{*}\). At Reynolds number 1600 the flow is not highly turbulent, and the \(k^{-5/3}\) decay is observed only in a limited range. Only the K15 method shows a strong dependency on the chosen time step

Fig. 11

Same as Fig. 10 but for the KSRT and BGK+ methods

6.3.1 Comparison to regularized and KBC lattice Boltzmann models

In a recent paper, Krämer et al. [89] used the Taylor–Green vortex test case for the investigation of a regularized lattice Boltzmann method (RLBM) derived from a pseudoentropy maximization and of the KBC method named after Karlin, Bösch and Chikatamarla [24]. Both methods apply regularization to non-hydrodynamic moments in order to improve the stability of the lattice Boltzmann scheme. We contrast this approach with the results of the K17 and K17L methods, which use optimized relaxation meant to improve accuracy rather than stability. The limiter in the K17L method also stabilizes the method at higher Reynolds numbers. We note that this is not necessary in the current case, but it is shown here for fairness, as the other two methods are explicitly designed for maximum stability. We use the original data from [89], which were computed on a grid with \(80^3\) nodes, and compare them to our results at the same resolution. The advection correction (KF3) is not used here since it requires finite differences and would render the comparison unfair. The methods compared here all share the same D3Q27 lattice without any add-ons. The energy spectra at \(t=10t^{*}\) are shown in Fig. 12. We note that the methods agree well for low wave numbers, but that both the RLBM and the KBC method deviate further from the reference at high wave numbers than the K17 and the K17L method. At the highest resolved wave number the limited K17L method matches up with the KBC method. However, this is due to the KBC method flattening out at high wave numbers.

Fig. 12

Comparison between the K17 and K17L methods and two other recent lattice Boltzmann methods: the regularized LBM (RLBM) and the KBC method. The data for the latter two methods were supplied by the authors of [89]. The RLBM and KBC methods were designed to maximize stability, while the K17 method maximizes accuracy. It is seen that the K17 and the K17L methods deviate considerably less from the reference solution than the other two methods in the high wave number regime

6.4 DNS results

So far, all shown results were obtained on under-resolved grids, and a small but noticeable residual deviation of the peak enstrophy could be seen even in the K17 and KF3 methods. To demonstrate that the cumulant method recovers the correct energy spectrum and the correct enstrophy evolution, we repeated the K17 and K17L simulations on a grid with \(512^3\) nodes. From Fig. 13 it is evident that, when run at DNS resolution, the reference spectra, the dissipation rate and the enstrophy are faithfully recovered. In particular, the limiter shows no effect when the method is applied at DNS resolution.

Fig. 13

K17 and K17L simulations at Re = 1600 at a resolution of \(512^3\) lattice nodes (DNS resolution). Both the energy spectrum (left) and the enstrophy and scaled dissipation rate (right) are indistinguishable from the reference

6.5 Vortical structures

To gain a qualitative overview we also display vortical structures by showing contours of the Q-criterion [103]. For better comparison we restrict ourselves here to the KSRT, K17, K17W and KF3 methods with the medium time step and group equal time instances together. At \(t=5t^{*}\) (Fig. 14) the methods agree well and the resolution has only a minor effect, as small vortices have not yet developed. The KSRT method exhibits some oscillations at the lower resolutions, while the other methods are essentially smooth. At \(t=10t^{*}\) (Fig. 15) a noticeable deviation of the flow patterns between the methods is apparent for the low-resolution cases. In particular, the KSRT method shows essentially noise. At later times and with developing turbulence, the differences between the results become more pronounced (see Figs. 16, 17).
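For reference, a minimal sketch of the Q-criterion evaluation, \(Q = \frac{1}{2}(\Vert \Omega \Vert ^2 - \Vert S \Vert ^2)\) [103], on a periodic grid; a simple second-order stencil is assumed here since only qualitative iso-surfaces are needed:

import numpy as np

def q_criterion(u, dx):
    # Q = (||Omega||^2 - ||S||^2) / 2 from the velocity-gradient tensor,
    # with a second-order periodic stencil (our assumption).
    def d(f, axis):
        return (np.roll(f, -1, axis=axis) - np.roll(f, 1, axis=axis)) / (2 * dx)
    g = np.empty((3, 3) + u.shape[1:])
    for i in range(3):
        for j in range(3):
            g[i, j] = d(u[i], j)                       # du_i / dx_j
    S = 0.5 * (g + np.swapaxes(g, 0, 1))               # strain-rate tensor
    W = 0.5 * (g - np.swapaxes(g, 0, 1))               # rotation-rate tensor
    return 0.5 * (np.sum(W * W, axis=(0, 1)) - np.sum(S * S, axis=(0, 1)))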

Fig. 14

Q-criterion at \(t=5t^{*}\). The colour indicates the magnitude of the velocity (color figure online)

Fig. 15

Q-criterion at \(t=10t^{*}\)

Fig. 16

Q-criterion at \(t=15t^{*}\)

Fig. 17

Q-criterion at \(t=20t^{*}\)

6.6 Reynolds number 16,000 and 160,000

The standard Taylor–Green test case discussed in the previous sections has a comparatively low Reynolds number. In this section we investigate the performance of the cumulant method at higher Reynolds numbers, i.e. 16,000 and 160,000. The usage of a limiter or a sub-grid model becomes necessary at such high Reynolds numbers for stability reasons. Figure 18 shows the energy spectrum for the two Reynolds numbers simulated with the K17L method. While no reference solution can be provided, we find good agreement with the ideal \(k^{-5/3}\) law. At the larger resolutions we observe a deviation from this ideal behaviour: there is a build-up of energy at high wave numbers preceded by a region of reduced energy level at intermediate scales. The same behaviour was observed for under-resolved DNS of an inviscid Taylor–Green vortex simulated with the discontinuous-Galerkin method [104, 105]. In the discontinuous-Galerkin method the choice of the flux computation has a strong effect on the shape of the energy spectrum at higher wave numbers. Lax–Friedrichs methods appear to show a larger bottleneck effect (dip in energy) than Roe’s scheme [104]. However, according to Flad and Gassner [106], neither choice provides the means to substantially increase the fidelity of the method in the sense of an implicit LES. Some kinetic energy preserving discontinuous-Galerkin schemes behave slightly differently at very high Reynolds numbers, showing only a build-up of energy at higher wave numbers without the dip in the intermediate range [82, 106]. Inviscid simulations such as those shown in [104, 105] correspond to an infinite Reynolds number, for which the ideal \(k^{-5/3}\) law should hold. At finite Reynolds numbers deviations from the \(k^{-5/3}\) law are to be expected, and a bottleneck effect is also seen in experiments [107]. However, it is difficult to say whether the appearance of the bottleneck effect in under-resolved simulations bears any physical meaning. Frisch et al. investigated the build-up of energy at high wave numbers when explicit hyperviscosity is introduced [108]. They explain the presence of the energy bottleneck as an effective increase in turbulent viscosity originating from the energy bump at even higher wave numbers. Since the small scales contain more energy than predicted by Kolmogorov theory, the intermediate scales are mixed more strongly than predicted by Kolmogorov’s theory such that they dissipate more quickly.

Figure 19 depicts the effective viscosity for the two simulations. It is seen that the two coarsest resolutions follow the nominal viscosity only in the beginning of the simulation. Later on, they show effective viscosities independent of the nominal viscosity. Only the highest-resolution simulation is clearly affected by the nominal viscosity.

Fig. 18

Energy spectra for Reynolds numbers 16,000 (left) and 160,000 (right) using the K17L method. Some deviation from the ideal \(k^{-5/3}\) law can be observed as a slight build-up of energy at higher wave numbers

Fig. 19

Effective viscosities for nominal Reynolds numbers 16,000 (left) and 160,000 (right) using the K17L method. The dashed line shows the value of the nominal viscosity

We repeat the simulation at the highest Reynolds number (160,000) with the KF3L and the K17W method and show the results in Figs. 20 and 21. Even though a better agreement between the different resolutions is observed for the KF3L method as compared to the K17W method, the deviation from the ideal \(k^{-5/3}\) law is still observed. Results from the K17W method with its explicit turbulence model agree better with Kolmogorov theory, even though a small bottleneck effect is still visible. Quite interestingly, it is observed that adding explicit turbulent viscosity in the K17W method enhances the energy in the intermediate range compared to not using an explicit turbulence model.

The energy bottleneck is obviously influenced by the insufficient resolution and the mismatch between the nominal and the effective viscosity. Still, it is instructive to briefly discuss the physical bottleneck effect. Falkovich [109] explains that the presence of molecular viscosity increases turbulence levels by inhibiting turbulent transfer. Donzis and Sreenivasan argue that the amplitude of the bottleneck effect decreases with increasing Reynolds number, such that the effect is not visible in observations of atmospheric flows, which have extremely high Reynolds numbers [110]. In laboratory experiments the effect is difficult to study, as it requires sufficiently high Reynolds numbers and is visible only at extremely high wave numbers. To overcome the experimental difficulties, Küchler et al. [107] employed a very sophisticated experimental setup using sulphur hexafluoride at pressures of up to 15 bar in order to obtain Taylor micro-scale Reynolds numbers of up to 1600 and Kolmogorov scales of ten microns. This provides sufficient measurement accuracy to investigate the bottleneck effect.

According to a theoretical investigation by Verma and Donzis [111], the bottleneck effect is suppressed if the inertial range spans more than four decades. This obviously also requires the resolution to span four decades; our finest grid supports only a spectrum spanning two decades. As our simulations do not reach DNS quality, the location and strength of the bottleneck is obviously influenced by the effective viscosity, such that we cannot claim that the observation of a bottleneck in Fig. 20 is physically correct. While the bottleneck effect is a physical reality for turbulent flows with inertial ranges larger than one decade but smaller than four decades, the strength and location of the bottleneck are most likely not accurately represented in our simulations due to the unphysical cut-off of the spectrum at resolutions below DNS quality.

Fig. 20

Energy spectra at Reynolds number 160,000 using the KF3L (left) and the K17W (right) method

Fig. 21

Effective viscosity for the KF3L (left) and K17W (right) methods at Reynolds number 160,000

7 Conclusion

In this paper we presented a comprehensive study of the performance of cumulant-based lattice Boltzmann schemes using the three-dimensional Taylor–Green vortex benchmark. Several other lattice Boltzmann model variants have also been considered for comparison. The improved BGK+ method was found to be less stable than the single relaxation time cumulant method, although the difference in stability is not large. A preliminary search for the highest attainable Reynolds number at resolution \(64\times 64\times 64\) gave Re = 1529 for the BGK+ and Re = 1980 for the KSRT method at the highest used Mach number. For the K17 and KF3 methods a Reynolds number of 3000 could be achieved. These values are preliminary in the sense that, according to our observations, simulations that became unstable at a low Reynolds number could stay stable at a higher Reynolds number. In the literature it is often assumed that there is a certain cut-off Reynolds number above which all simulations become unstable. While this assumption appears to be reasonable, we do not know of any proof for it and actually observed several counterexamples. One counterexample was also observed by Nathen et al. [88], where an MRT model was seen to decrease in stability when the resolution was increased. We therefore prefer to be very cautious with giving explicit limits for stability.

For the current setup, all but the BGK+ method were stable for all test cases. Limiters or a sub-grid scale model were hence not required for stability as long as the Reynolds number was low enough (i.e. \({\hbox {Re}}=1600\)). We note that both the sub-grid scale model and the limiter drastically improve the stability. However, stability was not the concern of the presented study, and stability should also not be the primary motivation for the application of a sub-grid scale model, at least not if the sub-grid scale model is motivated by physics-based considerations to incorporate the influence of unresolved scales on the larger scales.

The observation of this study is that the WALE model successfully filters out high wave numbers from the simulation. This results in an under-prediction of the high wave number part of the energy spectrum and a general under-prediction of the enstrophy. All of this is expected from a sub-grid scale model. For Reynolds number 1600, omitting the WALE model systematically improves the correspondence between the simulations and the reference results, even for the lowest resolution, which is clearly under-resolved. At this Reynolds number the application of a turbulence model appears to be neither necessary nor helpful for increasing accuracy or efficiency. We saw that applying a limiter to the third-order cumulants leads to a much steeper cut-off in the Kolmogorov spectrum than using the WALE model. This is also obvious from the different asymptotics: while the WALE model adds a turbulent viscosity of the order of \(\mathcal {O}(\Delta x^2)\) in diffusive scaling, the limiter was designed not to reduce the fourth-order accuracy of the K17 method and hence becomes significant only at much higher wave numbers than the turbulent viscosity of the WALE model. On top of this, the limiter is a local operation with virtually zero computational cost, while the WALE model depends on costly computations of the velocity gradients. However, at substantially higher Reynolds numbers the picture changes.
At a Reynolds number of 160,000 we observe that the K17L and KF3L methods show a deviation from the ideal Kolmogorov spectrum, with an energy bottleneck (a dip in energy) followed by an energy bump at sufficiently high wave numbers. To our knowledge this is the first time that such a behaviour has been observed in a lattice Boltzmann simulation.

We also confirmed previous results showing that regularization in the lattice Boltzmann context leads to a significant dependency on the time step, which is just the opposite of what is expected from a good numerical scheme: results generally get worse with shorter time steps. In contrast to the regularized K15 method, no dependence on the time step was observed for the optimized K17 and KF3 methods.

In conclusion, we observed that the cumulant lattice Boltzmann method simulated the Taylor–Green vortex successfully at moderate levels of under-resolution. These results are naturally limited to isotropic turbulence at moderate Reynolds numbers. The observation of an energy bottleneck at very high Reynolds numbers in the cumulant method is interesting and requires further investigation. We note that the energy bottleneck is a physical reality which has also been observed in recent experiments. However, since our simulations do not reach DNS quality for the Reynolds numbers in question, we do not claim that the observed bottleneck effect is physical. As the physical bottleneck effect is assumed to disappear at extremely large Reynolds numbers, the numerical bottleneck effect could also be regarded as an artefact that has to be mitigated by the use of an explicit turbulence model. In this sense, the question of whether adding a WALE model to the cumulant method is advantageous remains open and depends on interpretation and focus.