Lossy Data Compression Effects on Wall-bounded Turbulence: Bounds on Data Reduction
Abstract
Post-processing and storage of large data sets represent one of the main computational bottlenecks in computational fluid dynamics. We assume that the accuracy necessary for computation is higher than that needed for post-processing. Therefore, in the current work we assess thresholds for data reduction as required by the most common data-analysis tools used in the study of fluid-flow phenomena, specifically wall-bounded turbulence. These thresholds are imposed a priori by the user in the L_{2}-norm, and we assess a set of parameters to identify the minimum accuracy requirements. The method considered in the present work is the discrete Legendre transform (DLT), which we evaluate in the computation of turbulence statistics, in spectral analysis, and in the resilience of restarts for cases highly sensitive to the initial conditions. Maximum acceptable compression ratios of the original data are found to be around 97%, depending on the application purpose. The new method outperforms downsampling, as well as the previously explored data-truncation method based on the discrete Chebyshev transform (DCT).
Keywords
Lossy data compression · Data reduction · Turbulence statistics · Orthogonal polynomials · Resilience

1 Introduction
The field of computational fluid dynamics (CFD) is data intensive, particularly for high-fidelity simulations. Direct and large-eddy simulations (DNS and LES, respectively) of turbulent flows, which are framed in this high-fidelity regime, involve a wide range of flow scales leading to a large number of degrees of freedom. Besides the computational bottleneck brought by the large size of the problem, a less explored aspect in the context of flow simulations is data management. Large amounts of disk space and slow I/O (input/output) speeds impose limitations on large-scale simulations. Typically, the computational requirements for well-resolved flow structures are far higher than those needed for post-processing. Despite the wealth of strategies for truncating the data, such as using wavelets [1, 2, 3], reduced-order models [4], or orthogonal transforms [5], only a few studies [2, 3, 6, 7, 8, 9] have considered the impact of data truncation on the accuracy of the post-processed data from a physical point of view. Here we focus on one particular lossy data-compression strategy, which has been equipped with an a priori error estimator in the L_{2}-norm, and explore different levels of accuracy of the compressed data. The purpose is to identify bounds within which computational data can be reduced without affecting subsequent post-processing, i.e., the determination of relevant flow quantities such as turbulence statistics, power-spectral densities, or coherent structures. We also underline the superiority of a data-compression strategy over a common approach in engineering applications, namely downsampling the data set when it becomes computationally cumbersome.
In the present work we use the spectral-element code Nek5000 [10] to produce the data under analysis. The method considered here is an orthogonal-transform-based method inspired by the discrete Chebyshev transform (DCT), widely used in the image-compression community [11], as well as in CFD applications [12]. However, as we previously identified (see the work by Marin et al. [13]), it is preferable to employ a spectral transform tailored to the spectral discretization of the underlying code. Since the numerical code under consideration is based on a Gauss–Lobatto–Legendre (GLL) grid, we employ an orthogonal transform based on Legendre grids, to which we shall refer as the discrete Legendre transform (DLT). Similar ideas using orthogonal transforms can nonetheless be applied to grids that do not correspond to spectral discretizations, such as finite volumes or finite differences. The difficulty in these cases would pertain to adapting to curvilinear surfaces and achieving efficiency, both issues that are easily addressed in conjunction with variational formulations and finite-element-type discretizations. On equidistant grids it would also be more suitable to apply the discrete cosine transform, since the mapping to a spectral grid is not stable for very high polynomial orders. The current choice of a compression algorithm adapted to the mesh and spectral discretization is in any case convenient in the context of turbulence simulations, which are widely performed using spectral-element discretizations.
Some relevant techniques used to analyze turbulent flows include the computation of turbulence statistics, which are based on averages of flow realizations in time and in any homogeneous direction; the assessment of power-spectral density maps, based on Fourier analysis performed in the homogeneous directions; and the identification of coherent structures in the flow, which is based on the extraction of relevant flow features from the instantaneous flow fields. In the present study we consider these analysis techniques in a canonical flow case, namely the turbulent flow through a straight pipe [14], and also in a more complex geometry, i.e., the turbulent flow around a NACA4412 wing section [15]. Additionally, we consider the impact of data truncation on restarts from previous flow fields stored on disk, assessed in a highly sensitive case of transitional flow, namely the jet in crossflow [16].
The article is organized as follows: in Sections 2 and 3 a description of the lossy data compression under consideration, namely the DLT, is provided from a mathematical and an implementation point of view, respectively; the test cases used for the investigation are described in Section 4; an assessment of the impact of such compression on visualizations, turbulence statistics, spectral analysis, and simulation resilience in several test cases is presented in Section 5; finally, a summary of the conclusions of the present study is given in Section 6.
2 Mathematical Formulation of Data Compression Algorithm
Orthogonal transforms have been reported to yield high energy-compactness ratios [5]. This feature makes them very good candidates for data compression, especially on spectral grids, which are widely used in direct numerical simulations (DNS) of turbulent flows [17]. Additional benefits, such as high computational efficiency due to tensor-product implementations on hexahedral grids [18], further increase their appeal, since data compression should be designed to incur negligible computational cost and low accuracy loss.
The solution is defined via an expansion in orthogonal polynomials ϕ defined at the element level as \(u={\sum }_{i = 0}^{N} a_{i} \phi _{i}\), which for higher dimensions is easily extended via separability to \(\mathbf u={\sum }_{ijk= 0}^{N} a_{ijk} \phi _{ijk}\). It is therefore natural to carry out all theoretical considerations in one dimension; they carry over naturally to any higher dimension. The partial differential equation \(\mathcal L \mathbf u =\mathbf f\) is solved in weak form via a continuous Galerkin approach by minimizing the projection of the residual on a space of test functions v; details can be found in, e.g., [17]. The approach ensures C_{0} continuity between elements, i.e., the function values are the same, but not the derivatives. Here we concentrate on connecting the orthogonal transform to the polynomial expansion, and use an orthogonal transform tailored to the numerical scheme employed by the code. The specific implementation in the code Nek5000 [10] uses Legendre polynomials for the expansion of the solution, and hence we use the discrete Legendre transform (DLT) as outlined in [13].
Using Eqs. 1 and 4 we can construct the forward and backward transforms. Once the data are transformed into spectral space, the decay of the coefficients is much faster, and it is easy to perform a truncation such that it satisfies an a priori error threshold imposed by the user.
Moreover, the fact that we operate on curvilinear meshes which are mapped to a reference element leads to a rescaling by the Jacobian of the orthogonality relation T^{− 1} W T = I, as described in [13]. The truncation procedure relies on the fact that the L_{2}-norm in spectral space over one element is equal to the L_{2}-norm in real space, \(\|\mathbf u \|^{2}_{v_{e}} = \|\mathbf v \|^{2}_{v_{e}}\). Thus, imposing a threshold over the entire velocity field u, ∥u∥^{2} ≤ 𝜖, is equivalent to \(\|\mathbf v \|^{2}_{v_{e}}\leq \epsilon \).

1. Convert the data field to spectral space: v = T u.
2. Truncate in spectral space: v is truncated to \(\tilde {\mathbf {v}}\) by removing low-magnitude spectral coefficients.
3. Write the compressed data field \(\tilde {\mathbf {v}}\) to disk.
4. Recover the data field: \(\tilde {\mathbf {u}} = T^{-1} \tilde {\mathbf {v}}\).
It is important to note that the truncation algorithm is identical irrespective of the applied transform; throughout the remainder of this manuscript we shall refer to the DLT or DCT algorithms, although the only difference between them is the orthogonal transform applied. This will be further discussed in the next section.
3 Implementation of the Method
3.1 Description of the compression procedure
Figure 2 gives a general overview of the compression procedure used for this investigation, where both the DLT and the DCT methods can be used to convert the data to spectral space (and vice versa). As can be observed, the difference between the DLT and the DCT compression is the additional mapping onto a Chebyshev grid required by the latter. The user controls the level of compression by prescribing an error threshold as an input to the truncation. Once the truncation is performed, the resulting compression ratio is obtained. Therefore, the error threshold is an input, and the compression ratio is an output of the truncation process.
In contrast with downsampling, the truncation is defined exclusively in terms of mode amplitude, and it is performed up to a user-controlled error level (line 4). This compression method therefore truncates data in an adaptive way, by setting to zero the modes with the lowest energy contribution to the flow (line 6). These modes are first sorted by increasing energy content through the function sort (line 2). Finally, the error estimator makes it possible to control the error incurred through the truncation by comparing the L_{2}-norm of the error being applied with the prescribed error threshold (line 4). The whole algorithm was implemented with high efficiency in mind by employing tensor products, as described in [13].
In this work we define the overall compression ratio Cr as the arithmetic average of the compression ratios Cr_{el} over all the elements. The element compression ratio is defined as \(Cr_{el}=\frac {\text{lost data size}}{\text{original data size}}\), computed as the number of entries set to zero over the total number of grid points in the element. Thus, higher compression ratios imply smaller file sizes.
A very important conclusion inferred from Fig. 3 (right) is that the highest compression ratios can be achieved when the turbulence level is low: Cr values above 80% are observed within the viscous sublayer (y^{+} < 7, approximately). The increase in streamwise velocity fluctuations (see Section 5.2 for a detailed description of the turbulence statistics) significantly reduces the average compression ratio to below 60%, before it increases again up to around Cr = 70%. It is important to note that the particular trend observed in Fig. 3 (right) is also determined by the non-uniform grid distribution in the wall-normal direction, and by the fact that the value of Cr is unique for each spectral element. In any case, this result is consistent with the fact that it is much harder to compress a highly turbulent signal, spanning a wide range of scales, than a close-to-laminar one, which contains little information in addition to its mean component. Note also that the lowest instantaneous Cr of 10.5% is much lower than the average one, a fact that is connected with the highly varying instantaneous turbulence intensities in wall-bounded turbulent flows.
3.2 Comparison of DLT and downsampling
In the DLT, the truncation algorithm is based on the amplitude of the modes and not on the mode number, as is the case in the downsampling method. The DLT truncation eliminates data in an adaptive way by removing the weak modes with the least energetic contribution to the whole flow field. Since turbulent flows contain a wide range of scales, both in space and in time, it is possible that for a particular instantaneous flow realization certain higher-order modes contain more energy than some of the lower-order modes.
3.2.1 Description of the downsampling procedure
The results presented in Section 5 focus on the performance of the DLT, which is compared with that obtained by means of downsampling.
4 Summary of Test Cases
The second test case is the well-resolved LES of the turbulent flow around a NACA4412 wing section at a Reynolds number, based on inflow velocity and chord length, of Re_{c} = 100,000, computed by Vinuesa and Schlatter [20]. This case constitutes a moderately complex scenario for the study of wall-bounded turbulence due to the streamwise pressure gradients imposed on the boundary layers developing on the suction and pressure sides of the wing, the incipient separation around the trailing edge, and the development of a turbulent wake. The assessment of pressure-gradient and flow-history effects on turbulent boundary layers is a challenging problem which has received considerable attention in the wall-bounded-turbulence community in recent years [26]. In the case under consideration, an angle of attack of 5° is introduced, and the spanwise periodic direction has a length of 10% of the wing chord. The computational mesh, shown in Fig. 6 (right), consists of 28,000 spectral elements with N = 11, which amount to a total of 48.4 million grid points. Additional simulation details, including the implementation in Nek5000, can be found in [15, 27].
After defining the various test cases, the impact of data compression on the various analysis techniques is assessed below.
5 Data Analysis on Truncated Data Sets
In this section we investigate the impact of data compression on various analysis techniques commonly used to characterize wall-bounded turbulent flows. This analysis is performed on the test cases described in Section 4.
5.1 Flow visualization and vortex identification
5.2 Turbulence statistics
Furthermore, in the present work we consider the maximum admissible level of e to be 1%, a threshold which can be used to define a bound on the compression level before the resulting statistics are considered to be inaccurate. Note that the typical statistical uncertainty in the evaluation of turbulence quantities from DNS data is on the order of 1%, a fact that motivates this choice as an acceptable error level induced by the compression. Based on this threshold, the compression obtained by means of the DLT yields acceptable values for all the statistics up to Cr = 97.5%, which is consistent with what was observed in Fig. 10. Interestingly, if one restricts the analysis to the streamwise mean flow and velocity fluctuations, it would be possible to increase Cr up to almost 99% while keeping the value of e below 1%. Regarding the compression based on downsampling, a maximum compression ratio of about 90% can be applied with all the statistics within acceptable levels of error, whereas the value of Cr could be increased to 94.7% if only the mean flow and the Reynolds shear stress are considered. These results indicate that an additional 7.5% of compression (97.5% compared with 90%) can be applied, while keeping the error levels acceptable, when using the DLT instead of downsampling. Interestingly, these values are significantly larger than the ones obtained when using the DCT, which allows a maximum compression level of 67.7%.
In Fig. 13 (right) it can be observed that, for the DLT, ε remains approximately constant and equal to the uncompressed value of 2.5 × 10^{−3} and, after a slight reduction to ε ≃ 2 × 10^{−3}, increases strongly beyond Cr = 97.5%. In the case of downsampling, a mild reduction with Cr is observed, from ε = 2.5 × 10^{−3} to around 2 × 10^{−3}, before the increase at Cr = 87.5%. Despite the mild reduction in both the DLT and downsampling techniques, which might be associated with the fact that the initial uncompressed fields were not completely converged from the statistical point of view, these results indicate that the maximum acceptable Cr values are 97.5% and 87.5% for the DLT and downsampling, respectively. Note that these results are in excellent agreement with the ones discussed above in terms of the error indicator e.
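The acceptance check on a compressed statistic can be illustrated with a toy example. Here we assume a relative L_{2}-norm form for the error indicator e (the paper's precise definition is given earlier in the article, so this form is an assumption), and the profile and perturbation below are synthetic, serving only to show how the 1% threshold would be applied:

```python
import numpy as np

def stat_error(q_ref, q_cmp):
    """Relative L2 deviation of a statistic computed from compressed data
    with respect to the uncompressed reference (assumed form of e)."""
    return np.linalg.norm(q_cmp - q_ref) / np.linalg.norm(q_ref)

y = np.linspace(1e-3, 1.0, 200)            # wall-normal coordinate (toy)
u_mean_ref = y**(1.0 / 7.0)                # toy mean-velocity profile
# Synthetic compression-induced distortion of ~0.5% amplitude
u_mean_cmp = u_mean_ref * (1.0 + 5e-3 * np.sin(40.0 * y))
e = stat_error(u_mean_ref, u_mean_cmp)
acceptable = e < 0.01                       # the 1% threshold used in the text
```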
5.3 Spectral analysis
Although the turbulence statistics discussed above provide a good indication of the impact of the compression on the flow field, an analysis of the power-spectral density maps in the pipe yields even more detailed information. This is due to the fact that the power-spectral density distributions make it possible to assess, for each wall-normal position, the scales at which the energy resides, thus providing a more complete picture of the various contributions to the flow field. It is also important to note that, although in the context of the present work we computed turbulence statistics based on instantaneous fields in order to compare various compression levels, in practice those statistics would be computed directly during the simulation, as described in [37]. On the other hand, spectral analysis of the flow is typically performed on the instantaneous fields, and therefore it is important to determine the maximum level of compression possible without negatively impacting the spectra, since this analysis method would greatly benefit from the storage savings associated with compression.
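For reference, a premultiplied power-spectral density of the kind discussed here can be computed from a signal sampled along a homogeneous (periodic) direction with a standard FFT. The sketch below is generic (a synthetic signal and our own variable names, not the pipe-flow post-processing itself):

```python
import numpy as np

L = 8.0 * np.pi                        # length of the periodic direction
n = 512
x = np.linspace(0.0, L, n, endpoint=False)
rng = np.random.default_rng(1)
# Synthetic signal: an energetic large scale, a weaker small scale, and noise
u = np.sin(2.0 * x) + 0.3 * np.sin(16.0 * x) + 0.05 * rng.standard_normal(n)

uhat = np.fft.rfft(u - u.mean()) / n   # Fourier coefficients of the fluctuation
phi = 2.0 * np.abs(uhat)**2            # one-sided power-spectral density Phi_uu
k = 2.0 * np.pi * np.fft.rfftfreq(n, d=L / n)  # wavenumbers
premult = k * phi                      # premultiplied spectrum k * Phi_uu
```

Premultiplying by k compensates for the logarithmic wavelength axis used in spectral maps, so that the area under the curve remains proportional to the energy at each scale.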
By increasing Cr with the DLT, it can be noted that the resulting power-spectral density distribution is in excellent agreement with the uncompressed data up to Cr = 97.5%, and only the larger compression ratio of 98.8% reveals small differences in the spectrum. Regarding downsampling, the maximum compression ratio in good agreement with the uncompressed data is 87.5%, with significant deviations emerging at higher Cr. A very interesting observation can be made by comparing the DLT at Cr = 98.8% and downsampling at Cr = 94.7%: whereas in the former the contours start to deviate smoothly from the reference data, in the latter a significant impact on all the small scales (below wavelengths of around \(\lambda _{r\theta }^{+} \simeq 40\)) is observed. This is even more noticeable when considering Cr = 98.8% with downsampling, where all the contours below \(\lambda _{r\theta }^{+} \simeq 60\) become significantly affected. In this case, the near-wall spectral peak becomes completely distorted as well. This reflects the essence of the DLT and downsampling methods, as also illustrated in Fig. 5: whereas in the DLT method we keep the modes with the largest energy content, in downsampling all the modes beyond a certain threshold are set to zero, regardless of their energy contributions. This is the reason why, at progressively larger Cr values, a broader range of the small-scale spectral content becomes significantly distorted by the compression based on downsampling.
The premultiplied streamwise power-spectral density \(k_{x}{\Phi }_{uu}/u^{2}_{\tau }\) shown in Fig. 14 (middle) confirms the observations made for the azimuthal spectrum, namely the maximum acceptable Cr values of 97.5% and 87.5% for the DLT and downsampling, respectively. This spectrum also reveals the different effect of the DLT and downsampling on the smaller scales, with progressively stronger distortion of the shortest streamwise wavelengths close to the wall noticeable at higher Cr when employing downsampling. The near-wall spectral peak is located in this case at y^{+} ≃ 15 and \(\lambda _{x}^{+} \simeq 1,000\), in agreement with the pipe-flow DNS results by El Khoury et al. [42]. The premultiplied two-dimensional power-spectral density \(k_{r\theta }k_{x}{\Phi }_{uu}/u^{2}_{\tau }\) is shown in Fig. 14 (bottom) at y^{+} = 15. This spectral map combines the information from the two previous one-dimensional spectra, exhibiting a clear near-wall spectral peak at \(\lambda _{r \theta }^{+} \simeq 120\) and \(\lambda _{x}^{+} \simeq 1,000\), also in excellent agreement with the values reported by El Khoury et al. [42] for pipe flow up to Re_{τ} = 1,000. This spectral map also shows the effect of the downsampling-based compression on the smaller scales, consistent with the two previous one-dimensional spectra. These results indicate that the DLT significantly outperforms downsampling, since at very high compression ratios of up to 97.5% the small-scale energy content is well preserved.
5.4 Restart and resilience
Finally, we analyze the impact of data compression on the simulation restart using the jet in crossflow case. As already indicated in Section 4, this case has been chosen for its strong sensitivity to the initial conditions, which will allow us to clearly detect any impact coming from the additional noise added by the compression of the restart files.
This implies that in less sensitive cases, such as turbulent simulations, highly compressed files could potentially be used for restart purposes, which would lead to significant benefits in terms of checkpointing and resilience in large-scale simulations.
6 Conclusions
In the present work we propose a lossy data-compression algorithm, with which we have obtained high levels of acceptable compression for the turbulent flow through a straight pipe and the turbulent boundary layers developing around a wing section. In the case of the turbulent flow through a straight pipe, it was possible to truncate up to about 98% of the data for the computation of turbulence statistics, whereas with downsampling the maximum acceptable compression ratios were about 90%. The DLT-based truncation method also outperforms the DCT, which could compress up to a maximum of 70%. These results were also consistent with the spectral analysis, which in addition reflected that downsampling significantly affects the smallest scales in the power-spectral density distributions.
For visualization and vortex-identification purposes, the results were slightly more sensitive to the compression. It was possible to compress up to around 90% of the original data with the DLT while preserving the topology of the vortex clusters in the turbulent pipe flow. The maximum compression ratio is, however, case-dependent, and lower Cr values were obtained in the case of the turbulent flow around a NACA4412 wing section. It is also important to note that whereas the former data set was based on a DNS, the latter was obtained from a well-resolved LES, a fact that is related to the lower attainable compression ratios. Moreover, we have observed that strong data compression can be applied to the restart procedure in simulations of transition, which are typically highly sensitive to the initial conditions. Hence, the DLT method compresses the data substantially while preserving the most relevant flow physics, with control of the error incurred. The new DLT-based method [13], which removes data in an adaptive way depending on the flow properties in the domain, strongly outperformed downsampling as well as the previous DCT-based truncation method [19].
Further work could concern a quantitative analysis of the impact of the compression on the coherent structures in order to, among other aspects, derive a correlation between the number of structures and the compression ratio. Similarly, the shape and size of the identified structures could be assessed as a function of compression to give further quantitative bounds on the maximum compression. Regarding the implementation of the method, the actual reading and writing of the compressed data should be extended with a Huffman encoding [13] for a final user application in the simulation software (in our case Nek5000 [10]). It should be noted that, in principle, our approach is suitable for any element-wise discretization in which a spectral representation within an element can be formulated. For basis functions other than the Legendre polynomials, further investigation could be necessary to fully understand the effect of the additional mapping to, e.g., a Chebyshev grid on the compression performance, as observed with the DCT truncation. Finally, an interesting extension of the present work would be to assess, for a canonical flow case, the attainable Cr for a given error threshold as a function of the Reynolds number. By knowing the necessary data size in boundary layers at various Re, a model for the necessary information in more general cases could potentially be derived.
Acknowledgments
Financial support from the Stiftelsen för Strategisk Forskning (SSF) and the Swedish e-Science Research Centre (SeRC) via the SESSI project is acknowledged. RV and PS acknowledge the funding provided by the Knut and Alice Wallenberg Foundation and the Swedish Research Council (VR). The present work was also partially supported by the ExaFLOW H2020 project (project number 671571). All the data analysis in the present work was performed on resources provided by the Swedish National Infrastructure for Computing (SNIC). Parts of this work were also supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357, and the Exascale Computing Project (Contract No. 17-SC-20-SC).
Funding Information
Stiftelsen för Strategisk Forskning (SSF), Swedish eScience Research Centre (SeRC), Knut and Alice Wallenberg Foundation, and Swedish Research Council (VR).
Compliance with Ethical Standards
Conflict of interests
The authors declare that they have no conflict of interest.
References
1. Trott, A., Moorehead, R., McGinley, J.: Wavelets applied to lossless compression and progressive transmission of floating point data in 3-D curvilinear grids. In: Proc. Visualization '96 (1996)
2. Sakai, R., Sasaki, D., Nakahashi, K.: Parallel implementation of large-scale CFD data compression toward aeroacoustic analysis. Comput. Fluids 80, 116–127 (2013)
3. Lee, D., Beam, R.M., Warming, R.F.: Supercompact multiwavelets for flow field simulation. Comput. Fluids 30, 783–805 (2001)
4. Blanc, T.J.: Analysis and compression of large CFD data sets using proper orthogonal decomposition. Brigham Young University (2014)
5. Rao, K.R., Yip, P.: Discrete cosine transform: algorithms, advantages, applications. Academic Press Professional, San Diego (1990)
6. Lehmann, H., Werzner, E., Mendes, M.A.A., Trimis, D., Jung, B., Ray, S.: In situ data compression algorithm for detailed numerical simulation of liquid metal filtration through regularly structured porous media. Adv. Eng. Mater. 15, 1260–1269 (2013)
7. Laney, D., Langer, S., Weber, C., Lindstrom, P., Wegener, A.: Assessing the effects of data compression in simulations using physically motivated metrics. Sci. Program. 22, 141–155 (2014)
8. Baker, A.H., et al.: Evaluating lossy data compression on climate simulation data within a large ensemble. Geosci. Model Dev. 9, 4381–4403 (2016)
9. Li, S., Sane, S., Orf, L., Mininni, P., Clyne, J., Childs, H.: Spatiotemporal wavelet compression for visualization of scientific simulation data. In: Proc. 2017 IEEE International Conference on Cluster Computing (CLUSTER), Honolulu (2017)
10. Fischer, P.F., Lottes, J.W., Kerkemeier, S.G.: NEK5000: open source spectral element CFD solver. Available at: http://nek5000.mcs.anl.gov (2008)
11. Ahmed, N., Natarajan, T., Rao, K.R.: Discrete cosine transform. IEEE Trans. Comput. C-23, 90–93 (1974)
12. Schmalzl, J.: Using standard image compression algorithms to store data from computational fluid dynamics. Comput. Geosci. 29, 1021–1031 (2003)
13. Marin, O., Schanen, M., Fischer, P.F.: Large-scale lossy data compression based on an a priori error estimator in a spectral element code. ANL/MCS-P6024-0616 (2016)
14. El Khoury, G.K., Schlatter, P., Noorani, A., Fischer, P.F., Brethouwer, G., Johansson, A.V.: Direct numerical simulation of turbulent pipe flow at moderately high Reynolds numbers. Flow Turbul. Combust. 91, 475–495 (2013)
15. Hosseini, S.M., Vinuesa, R., Schlatter, P., Hanifi, A., Henningson, D.S.: Direct numerical simulation of the flow around a wing section at moderate Reynolds number. Int. J. Heat Fluid Flow 61, 117–128 (2016)
16. Peplinski, A., Schlatter, P., Henningson, D.S.: Global stability and optimal perturbation for a jet in crossflow. Eur. J. Mech. B/Fluids 49, 438–447 (2015)
17. Deville, M.O., Fischer, P.F., Mund, E.H.: High-order methods for incompressible fluid flow. Cambridge University Press, Cambridge (2002)
18. Orszag, S.A.: Spectral methods for problems in complex geometries. J. Comput. Phys. 37, 70–92 (1980). https://doi.org/10.1016/0021-9991(80)90005-4
19. Otero, E., Marin, O., Vinuesa, R., Schlatter, P., Siegel, A., Laure, E.: The effect of lossy data compression in computational fluid dynamics applications: resilience and data postprocessing. In: Proc. Direct and Large-Eddy Simulation 11 (DLES11), Pisa (2017)
20. Vinuesa, R., Schlatter, P.: Skin-friction control of the flow around a wing section through uniform blowing. In: Proc. European Drag Reduction and Flow Control Meeting (EDRFCM), Rome (2017)
21. Lozano-Durán, A., Jiménez, J.: Effect of the computational domain on direct simulations of turbulent channels up to Re_{τ} = 4200. Phys. Fluids 26, 011702 (2014)
22. Lee, M., Moser, R.D.: Direct numerical simulation of turbulent channel flow up to Re_{τ} ≈ 5200. J. Fluid Mech. 774, 395–415 (2015)
23. Schlatter, P., Örlü, R.: Assessment of direct numerical simulation data of turbulent boundary layers. J. Fluid Mech. 659, 116–126 (2010)
24. Sillero, J.A., Jiménez, J., Moser, R.D.: One-point statistics for turbulent wall-bounded flows at Reynolds numbers up to δ^{+} ≈ 2000. Phys. Fluids 25, 105102 (2013)
25. Vinuesa, R., Duncan, R.D., Nagib, H.M.: Alternative interpretation of the Superpipe data and motivation for CICLoPE: the effect of a decreasing viscous length scale. Eur. J. Mech. B/Fluids 58, 109–116 (2016)
26. Bobke, A., Vinuesa, R., Örlü, R., Schlatter, P.: History effects and near equilibrium in adverse-pressure-gradient turbulent boundary layers. J. Fluid Mech. 820, 667–692 (2017)
27. Vinuesa, R., Hosseini, S.M., Hanifi, A., Henningson, D.S., Schlatter, P.: Pressure-gradient turbulent boundary layers developing around a wing section. Flow Turbul. Combust. 99, 613–641 (2017)
28. Peplinski, A., Vinuesa, R., Offermans, N., Schlatter, P.: ExaFLOW use cases for Nek5000: incompressible jet in crossflow and flow around a NACA4412 wing section. H2020 FETHPC-1-2014 D3.1 (2016)
29. Jeong, J., Hussain, F.: On the identification of a vortex. J. Fluid Mech. 285, 69–94 (1995)
30. del Álamo, J.C., Jiménez, J., Zandonade, P., Moser, R.D.: Self-similar vortex clusters in the turbulent logarithmic region. J. Fluid Mech. 561, 329–358 (2006)
31. Vinuesa, R., Fick, L., Negi, P.S., Marin, O., Merzari, E., Schlatter, P.: Turbulence statistics in a spectral-element code: a toolbox for high-fidelity simulations. Technical report ANL/MCS-TM-367 (2017)
32. Örlü, R., Schlatter, P.: On the fluctuating wall-shear stress in zero-pressure-gradient turbulent boundary layer flows. Phys. Fluids 23, 021704 (2011)
33. Dony, R.D.: Karhunen–Loève transform. In: The transform and data compression handbook. CRC Press LLC, Boca Raton (2001)
34. Unser, M.: On the approximation of the discrete Karhunen–Loève transform for stationary processes. Signal Process. 7, 231–249 (1984)
35. Lazaridis, P., Bizopoulos, A., Tzekis, P., Zaharis, Z., Debarge, G., Gallion, P.: Comparative study of DCT and discrete Legendre transform for image compression. In: Proc. X International Conference on Society for Electronics, Telecommunications, Automatics and Informatics (ETAI), Ohrid (2011)
36. Pope, S.: Turbulent flows. Cambridge University Press, Cambridge (2000)
37. Vinuesa, R., Prus, C., Schlatter, P., Nagib, H.M.: Convergence of numerical simulations of turbulent wall-bounded flows and mean cross-flow structure of rectangular ducts. Meccanica 51, 3025–3042 (2016)
38. Kim, J., Moin, P., Moser, R.D.: Turbulence statistics in fully developed channel flow at low Reynolds number. J. Fluid Mech. 177, 133–166 (1987)
39. Moser, R.D., Kim, J., Mansour, N.N.: Direct numerical simulation of turbulent channel flow up to Re_{τ} = 590. Phys. Fluids 11, 943–945 (1999)
40. Kline, S.J., Reynolds, W.C., Schraub, F.A., Runstadler, P.W.: The structure of turbulent boundary layers. J. Fluid Mech. 30, 741–773 (1967)
41. Gupta, A.K., Laufer, J., Kaplan, R.E.: Spatial structure in the viscous sublayer. J. Fluid Mech. 50, 493–512 (1971)
42. El Khoury, G.K., Schlatter, P., Brethouwer, G., Johansson, A.V.: Turbulent pipe flow: statistics, Re-dependence, structures and similarities with channel and boundary layer flows. J. Phys.: Conf. Ser. 506, 012010 (2014)
Copyright information
Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.