Model Order Reduction with Neural Networks: Application to Laminar and Turbulent Flows

We investigate the capability of neural-network-based model order reduction, i.e., the autoencoder (AE), for fluid flows. As an example model, an AE which comprises convolutional neural networks and multi-layer perceptrons is considered in this study. The AE model is assessed with four canonical fluid flows, namely: (1) a two-dimensional cylinder wake, (2) its transient process, (3) the NOAA sea surface temperature, and (4) a cross-sectional field of turbulent channel flow, in terms of the number of latent modes, the choice of nonlinear activation function, and the number of weights contained in the AE model. We find that the AE models are sensitive to the choice of the aforementioned parameters depending on the target flows. Finally, we discuss extensions and perspectives of machine-learning-based order reduction for numerical and experimental studies in the fluid dynamics community.


Introduction
Thus far, modal analysis has played a significant role in understanding and investigating complex fluid flow phenomena. In particular, the combination with linear-theory-based data-driven and operator-driven tools, e.g., proper orthogonal decomposition (POD) [1], dynamic mode decomposition [2], and resolvent analysis [3], has enabled us to examine fluid flows in an interpretable manner [4,5]. We can now see their great abilities in both numerical and experimental studies [6,7,8,9,10,11,12]. In addition to these efforts aided by linear-theory-based methods, the use of neural networks (NNs) has recently been attracting increasing attention as a promising tool to extract the nonlinear dynamics of fluid flow phenomena [13,14,15,16]. In the present paper, we discuss the capability of an NN-based nonlinear model order reduction tool, i.e., the autoencoder, by considering canonical fluid flow examples with several assessments.
In recent years, machine learning methods have demonstrated their great potential in fluid dynamics, e.g., turbulence modeling [17,18,19,20,21,22,23,24,25] and spatio-temporal data estimation [26,27,28,29,30,31,32,33,34,35,36,37]. Reduced order modeling (ROM) is no exception; such efforts are referred to as machine-learning-based ROM (ML-ROM). An extreme learning machine based ROM was presented by San and Maulik [38]. They applied the method to quasi-stationary geophysical turbulence and showed its robustness over a POD-based ROM. Maulik et al. [39] applied a probabilistic neural network to obtain the temporal evolution of POD coefficients from the parameters of the initial condition, considering the shallow water equations and the NOAA sea surface temperature. They showed that the proposed method is able to predict the temporal dynamics while providing a confidence interval for its estimation. For more complex turbulence, Srinivasan et al. [40] demonstrated the capability of a long short-term memory (LSTM) for a nine-equation shear turbulence model. In this way, ML-ROM efforts combined with traditional ROM are ongoing.
Furthermore, the autoencoder (AE) also plays a notable role as an effective model order reduction tool for fluid dynamics. To the best of our knowledge, the first AE application to fluid dynamics was performed by Milano and Koumoutsakos [41]. They compared the capability of POD and a multi-layer perceptron (MLP)-based AE using the Burgers equation and a turbulent channel flow. They confirmed that the linear MLP is equivalent to POD [42] and demonstrated that the nonlinear MLP provides improved reconstruction and prediction capabilities for the velocity field at a small additional computational cost. More recently, convolutional neural network (CNN)-based AEs have become popular thanks to the concept of filter sharing in CNNs, which is suitable for handling high-dimensional fluid data sets [43]. Omata and Shirayama [44] used a CNN-AE and POD to reduce the dimension of a wake behind a NACA0012 airfoil. They investigated the difference in the temporal behavior in the low-dimensional latent space depending on the tool used for order reduction and demonstrated that the extracted latent variables properly represent the changes in the spatial structure of the unsteady flow field over time. They also demonstrated the capability of this method to compare flow fields constructed under different conditions, e.g., different Reynolds numbers and angles of attack. Hasegawa et al.
[45,46] proposed a CNN-LSTM based ROM to predict the temporal evolution of unsteady flows around a bluff body by following only the time series of the low-dimensional latent space. They considered flows around a bluff body whose shape is defined by a combination of trigonometric functions with random amplitudes and demonstrated that the trained ROM was able to reasonably predict flows around bodies of unseen shapes, which implies the generality of the low-dimensional features extracted in the latent space. The method has also been extended to the examination of Reynolds number dependence [47] and turbulent flows [48], whose results also unveiled some major challenges for CNN-AEs, e.g., their capability against flow topologies significantly different from those used for training and the difficulty in handling very high-dimensional data. From the perspective of understanding the latent modes obtained by a CNN-AE, Murata et al. [49] proposed a customized CNN-AE, referred to as a mode-decomposing CNN-AE, which can visualize the contribution of each machine-learning-based mode. They clearly demonstrated that, similar to the linear MLP mentioned above, the linear CNN-AE is equivalent to POD, that the nonlinear CNN-AE improves the reconstruction thanks to its nonlinear operations, and that a single CNN-AE mode represents multiple POD modes. Apart from these, a wide range of AE studies for fluid flow analyses can be found in [50,51,52,53,54]. As a result of such eagerness for the use of AEs, we arrive at the following open questions: 1. How sensitive is AE-based order reduction to its many tunable parameters, e.g., the number of latent modes, the activation function, and whether the flow is laminar or turbulent? 2. Where are the limitations of AE-based model order reduction for fluid flows? 3. Can we expect further improvements of AE-based model order reduction with well-designed methods?
These questions commonly appear in feature extraction using AEs, not only for fluid mechanics problems but also for any high-dimensional nonlinear dynamical system. To the best of the authors' knowledge, however, the choice of parameters often relies on trial and error, and a systematic study answering these questions still seems to be lacking.
Our aim in the present paper is to demonstrate example assessments of AE-based order reduction in fluid dynamics, so as to tackle the aforementioned questions. In particular, we consider the influence of (1) the number of latent modes, (2) the choice of activation function, and (3) the number of weights in an AE, with four data sets which cover a wide range of natures from laminar to turbulent flows. The findings of the present paper regarding neural-network-based model order reduction for fluid flows can be applied to a wide range of problems in science and engineering, since fluid flows can be regarded as a representative example of complex nonlinear dynamical systems. Moreover, because the demand for model order reduction techniques can be found in various scientific fields including robotics [55], aerospace engineering [56,57], and astronomy [58], we expect that the present paper can promote the unification of computer science and nonlinear dynamical problems from the application perspective. Our presentation is organized as follows. We introduce the present AE models with their fundamental theories in section 2. The information on the covered fluid flow data sets is provided in section 3. We then present in section 4 the assessments in terms of various parameters in the AE application to fluid flows. Finally, concluding remarks with an outlook on AEs and fluid flows are stated in section 5.
2 Methods and theories of autoencoder

Machine learning schemes for construction of autoencoder
In the present study, we use an autoencoder (AE) based on a combination of a convolutional neural network (CNN) and a multi-layer perceptron (MLP), as illustrated in figure 1. Our strategy for order reduction of fluid flows is that the CNN is first used to reduce the dimension to a reasonably small dimensional space, and the MLP is then employed to further map it into the latent (bottleneck) space, as explained in more detail later. In this section, let us introduce the internal procedures of each machine learning model with mathematical expressions and conceptual figures.
We first utilize a convolutional neural network (CNN) [59] for reducing the original dimension of fluid flows. Nowadays, CNNs have become essential tools in image recognition thanks to their efficient filtering operation. Recently, the use of CNNs has also been seen in the fluid dynamics community [60,61,62,63,64,65]. A CNN mainly consists of convolutional layers and pooling layers. As shown in figures 2(a) and (b), the convolutional layer extracts the spatial features of the input data by a filter operation,

c^(l)_{ijm} = φ( b_m + Σ_{k=1}^{K} Σ_{p=1}^{H} Σ_{q=1}^{H} c^(l−1)_{i+p−C, j+q−C, k} h_{pqkm} ),

where C = floor(H/2), c^(l−1)_{ijk} and c^(l)_{ijm} are the input and output data at layer l, and b_m is a bias. A filter is expressed as h_{pqkm} with the size of (H × H × K). In the present paper, the size H for the baseline model is set to 3, although it will be changed for the study on the dependence on the number of weights in section 4.3. The output from the filtering operation is passed through an activation function φ. In the pooling layer illustrated in figure 2(c), representative values, e.g., the maximum or average value, are extracted from arbitrary regions by pooling operations, such that the image is downsampled. It is widely known that the model acquires robustness against the input data through pooling operations thanks to the decrease in the spatial sensitivity of the model [66]. Depending on the user's task, an upsampling layer can also be utilized. The upsampling layer copies the values of the low-dimensional images into a high-dimensional space, as shown in figure 2(d). In the training process, the filters h of the CNN are optimized by minimizing a loss function E with back propagation [67], such that w = argmin_w [E(q_output, F(q_input; w))].
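As a concrete illustration, the filter and pooling operations above can be sketched in a few lines of NumPy. This is a didactic sketch, not the implementation used in this study; the zero padding, the ReLU default for φ, and all shapes are assumptions for illustration.

```python
import numpy as np

def conv2d(c, h, b, phi=lambda z: np.maximum(z, 0.0)):
    """Filter operation of a convolutional layer (zero-padded, stride 1).

    c : input of shape (ny, nx, K)      -- K input channels
    h : filters of shape (H, H, K, M)   -- M output channels
    b : biases of shape (M,)
    """
    H = h.shape[0]
    C = H // 2                              # C = floor(H/2), as in the text
    ny, nx, K = c.shape
    M = h.shape[3]
    cp = np.pad(c, ((C, C), (C, C), (0, 0)))   # zero padding keeps the size
    out = np.empty((ny, nx, M))
    for i in range(ny):
        for j in range(nx):
            patch = cp[i:i + H, j:j + H, :]                  # (H, H, K) window
            out[i, j, :] = np.tensordot(patch, h, axes=3) + b
    return phi(out)

def max_pool(c, s=2):
    """Max pooling over non-overlapping s x s regions (downsampling)."""
    ny, nx, m = c.shape
    return c[:ny - ny % s, :nx - nx % s, :].reshape(
        ny // s, s, nx // s, s, m).max(axis=(1, 3))
```

In practice such loops are replaced by vectorized library kernels; the sketch only makes the index pattern of the equation explicit.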
We then apply an MLP [68] for additional order reduction. The MLP has been utilized for a wide range of problems, e.g., classification, regression, and speech recognition [69]. We can also see applications of the MLP to various problems of fluid dynamics such as turbulence modeling [19,20,23], reduced order modeling [70], and field estimation [71]. The MLP can be regarded as an aggregate of perceptrons, its minimum units, as illustrated in figure 3. The input data q^(l−1) from the (l − 1)th layer are multiplied by a weight matrix W. These inputs construct a linear superposition Wq^(l−1) with biases b and are then passed through a nonlinear activation function φ,

q^(l)_i = φ( Σ_j W_ij q^(l−1)_j + b_i ).

As illustrated in figure 3, the perceptrons of the MLP are connected with each other in a fully-connected structure. As in the CNN, the weights of all connections W_ij are optimized in a minimization manner. By combining the two methods introduced above, we construct the AE as illustrated in figure 1. The AE has a lower-dimensional space, called the latent space, at its bottleneck, so that a representative feature of the high-dimensional data q can be extracted by setting the target data as both the input and the output. In other words, if we can obtain an output similar to the input data, the dimensional reduction onto the latent vector r has been successfully achieved. In turn, with over-compression of the original data q, of course, the decoded field cannot be remapped well. In sum, these relations can be formulated as

r = F_e(q),   q̂ = F_d(r),

where F_e and F_d are the encoder and decoder of the autoencoder, as shown in figure 1. Mathematically, in the training process for obtaining the autoencoder model F = F_d ∘ F_e, the weights w are optimized to minimize an error function E such that w = argmin_w [E(q, F(q; w))].
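The encoder-decoder relations r = F_e(q) and q̂ = F_d(r) can be made concrete with a toy single-dense-layer sketch. The layer sizes, the tanh choice, and the random weights are hypothetical; the actual model stacks CNN and MLP layers as in figure 1, with weights obtained by training.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.tanh              # activation; ReLU is the paper's baseline choice

n_g, n_r = 64, 2           # high-dimensional state size and latent (bottleneck) size

# Encoder F_e and decoder F_d, each as one dense layer for illustration
W,  b  = 0.1 * rng.standard_normal((n_r, n_g)), np.zeros(n_r)
Wt, bt = 0.1 * rng.standard_normal((n_g, n_r)), np.zeros(n_g)

def F_e(q):                # encoder: q in R^{n_g} -> r in R^{n_r}
    return phi(W @ q + b)

def F_d(r):                # decoder: r in R^{n_r} -> q_hat in R^{n_g}
    return Wt @ r + bt

q = rng.standard_normal(n_g)
r = F_e(q)                 # latent vector at the bottleneck
q_hat = F_d(r)             # training would minimize E(q, q_hat) over (W, b, Wt, bt)
```

The point of the sketch is only the shape flow: a 64-dimensional field is squeezed through a 2-dimensional bottleneck and expanded back.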
In the present study, we use the L2 norm error as the cost function E, and the Adam optimizer [67] is applied for obtaining the weights. Other parameters used for the construction of the present AE are summarized in table 2. Note that theoretical optimization methods [72,73,74] can be considered for hyperparameter decisions to further improve the AE capability, although they are not used in the present study. To avoid overfitting, we set an early stopping criterion [75]. With regard to the internal steps of the present AE, we first apply the CNN to reduce the spatial dimension of the flow fields by O(10^2). The MLP is then used to obtain the latent modes by feeding it the vector compressed by the CNN, as summarized in table 1, which shows an example of the AE structure for the cylinder wake problem with the number of latent variables n_r = 2. For a case where it is considered irrational to reduce the number of latent variables down to O(10^2), e.g., n_r = 3072 for the turbulent channel flow, the order reduction is conducted by means of the CNN only (without the MLP).

The role of activation functions in autoencoder
One of the key features in the mathematical procedure of the AE is the use of activation functions. This is exactly the reason why neural networks can account for nonlinearity in their estimations. Here, let us mathematically introduce the strength of nonlinear activation functions for model order reduction by comparing with proper orthogonal decomposition (POD), which is known to be equivalent to an AE with the linear activation function φ(z) = z. The details can be found in some previous studies [42,76,77]. We first visit POD-based order reduction for a high-dimensional data matrix q ∈ R^{nt×ng} with a reduced basis Γ ∈ R^{nr×ng}, where n_t, n_r and n_g denote the numbers of collected snapshots, latent modes and spatially discretized grid points, respectively. The optimization procedure of POD can be formulated in an L2 minimization manner,

Γ = argmin_Γ || (q − q̄) − (q − q̄) Γ^T Γ ||_2,    (4)

where q̄ is the mean value of the high-dimensional data matrix. Next, we consider an AE with the linear activation function φ(z) = z, as shown in figure 4(a). Analogous to equation 4, the optimization procedure can be written as

w = argmin_w || q − q̂ ||_2,   q̂ = W̃(Wq + b) + b̃,    (5)

with a reconstructed matrix q̂ and the weights w = [W, b, W̃, b̃] of the AE, which contain the weights of the encoder W and decoder W̃ and the biases of the encoder b and decoder b̃. Substituting q̂ into equation 5, the argmin formulation can be transformed into

{W, W̃} = argmin_{W,W̃} || q − q W^T W̃^T ||_2,    (6)

where the biases are absorbed into the mean subtraction. Noteworthy here is that equations 4 and 6 are equivalent to each other under the replacement of variables

Γ^T Γ ↔ W^T W̃^T,   q − q̄ ↔ q.    (7)

These transformations tell us that the AE with the linear activation is essentially the same order reduction tool as POD. In contrast, for an AE with nonlinear activation functions, as illustrated in figure 4(b), the nonlinear activation functions of the encoder φ_e and decoder φ_d are applied on top of the weights (reduced basis) of equation 4, such that

q̂ = φ_d( W̃ φ_e(Wq + b) + b̃ ).    (8)

Note that the weights w, including the biases, are optimized in the training procedure of the autoencoder. As discussed above, the choice of nonlinear activation function is crucial to outperform the
traditional linear method. In this study, we consider the use of various activation functions as summarized in figure 5. As shown, we cover a wide range of nonlinear functions which are well-used candidates, although these are not all of the functions used in the machine learning field. We refer the readers to Nwankpa et al. [78] for details of each nonlinear activation function. For the baseline model of each example, we use the Rectified Linear Unit (ReLU) [79], which is widely known as a good candidate in terms of weight updates. Note in passing that we use the same activation function at all layers in each model; namely, a case like ReLU for the 1st and 2nd layers and tanh for the 2nd and 3rd layers is not considered. In the present paper, we perform a three-fold cross validation [80], and the representative flow fields with ensemble error values are reported. For the readers' information, the training cost with the representative number of latent variables for each case, performed on an NVIDIA TESLA V100 graphics processing unit (GPU), is summarized in table 3.
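The POD/linear-AE equivalence discussed above can be checked numerically in a few lines: the reconstruction obtained by projecting onto the leading n_r POD modes coincides with the rank-n_r truncated SVD, which is the best any linear encoder-decoder pair can do. This is a sketch with synthetic mean-subtracted data, not the flow data sets of this study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_t, n_g, n_r = 200, 50, 4     # snapshots, grid points, latent modes (illustrative sizes)

# Synthetic correlated snapshot matrix, mean-subtracted as in eq. (4)
q = rng.standard_normal((n_t, n_g)) @ rng.standard_normal((n_g, n_g))
q = q - q.mean(axis=0)

# POD via SVD: rows of Gamma are the leading spatial modes
U, s, Vt = np.linalg.svd(q, full_matrices=False)
Gamma = Vt[:n_r]                          # reduced basis, shape (n_r, n_g)

# Encode (project onto the basis) then decode (lift back)
q_pod = (q @ Gamma.T) @ Gamma

# The optimal linear AE of rank n_r realizes exactly the truncated SVD
q_svd = U[:, :n_r] @ np.diag(s[:n_r]) @ Vt[:n_r]
```

Because the projection onto the leading right-singular vectors reproduces the truncated SVD term by term, `q_pod` and `q_svd` agree to machine precision, and the reconstruction error can only decrease as n_r grows.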
3 Fluid flow data sets

3.1 Two-dimensional cylinder wake at Re_D = 100

The first example is a two-dimensional cylinder wake at Re_D = 100. The data set is prepared using a two-dimensional direct numerical simulation (DNS) [81]. The governing equations are the continuity and incompressible Navier-Stokes equations, where u = [u, v] and p are the velocity vector and pressure, respectively. All quantities are non-dimensionalized with the fluid density, the free-stream velocity, and the cylinder diameter. The size of the computational domain is (L_x, L_y) = (25.6, 20.0), and the cylinder center is located at (x, y) = (9, 0). A Cartesian grid with a grid spacing of ∆x = ∆y = 0.025 is used for the numerical setup, and the number of grid points for the present DNS is (N_x, N_y) = (1024, 800). For the machine learning, only the flow field around the cylinder is extracted as the data set to be used, and we use the vorticity field ω as the data attribute.

3.2 Transient two-dimensional cylinder wake

We then consider a transient wake behind the cylinder, which is not periodic in time. Since almost the same computational procedure is applied as in the case of periodic shedding, let us only touch on the notable differences. The training data are obtained from the DNS similarly to the first example. Here, the domain is extended threefold in the x direction such that (N*_x, N*_y) = (1152, 192) with (L*_x, L*_y) = (28.8, 4.8). Also similarly to the first example, we use the vorticity field ω as the data attribute. For training the present AE model, we use 4000 snapshots whose sampling interval is finer than that in the periodic shedding case so as to capture the transient nature correctly.

NOAA sea surface temperature
The third example is the NOAA sea surface temperature data set [82], obtained from satellite and ship observations. The aim in using this data set is to examine the applicability of the AE to more practical situations, such as a case without known governing equations. We should note that this data set is driven by seasonal periodicity. The spatial resolution here is 360 × 180 based on a one-degree grid. We use 20 years of data, corresponding to 1040 snapshots, spanning from 1981 to 2001.

y − z sectional field of turbulent channel flow at Re τ = 180
To investigate the capability of the AE for chaotic turbulence, we lastly consider a cross-sectional (y − z) streamwise velocity field of a turbulent channel flow at Re_τ = 180 under a constant pressure gradient condition. The data set is prepared by the DNS code developed by Fukagata et al. [83]. The governing equations are the incompressible Navier-Stokes equations,

∇ · u = 0,   ∂u/∂t + ∇ · (uu) = −∇p + (1/Re_τ) ∇²u,

where u = [u, v, w]^T represents the velocity, with u, v and w in the streamwise (x), wall-normal (y) and spanwise (z) directions. Also, t is the time, p is the pressure, and Re_τ = u_τ δ/ν is the friction Reynolds number. The quantities are non-dimensionalized with the channel half-width δ and the friction velocity u_τ. The size of the computational domain and the number of grid points are (L_x, L_y, L_z) = (4πδ, 2δ, 2πδ) and (N_x, N_y, N_z) = (256, 96, 256), respectively. The grids in the x and z directions are uniform, and a non-uniform grid is applied in the y direction. The no-slip boundary condition is applied at the walls, and periodic boundary conditions are used in the x and z directions. We use the fluctuation component of the y − z sectional streamwise velocity u as the representative data attribute for the present AE.

Results
In this section, we examine the influence of the number of latent modes (section 4.1), the choice of activation function for hidden layers (section 4.2), and the number of weights contained in the autoencoder (section 4.3) with the data sets introduced above.

Number of latent modes
Let us first investigate the dependence of the reconstruction accuracy of the autoencoder (AE) on the number of latent variables n_r. The expected trend in this assessment for all data sets is that the reconstruction error increases as n_r is reduced, since the AE needs to discard information while extracting dominant features from high-dimensional flow fields into a limited-dimensional latent space.
The relationship between the number of latent variables n_r and the L2 error norm ε = ||q_Ref − q_ML||_2 / ||q_Ref||_2, where q_Ref and q_ML indicate the reference field and the field reconstructed by the AE, is summarized in figure 7. For the periodic shedding case, the structure of the whole field can be captured with only two modes, as presented in figure 7(a). However, the L2 error norm at n_r = 2 is approximately 0.4, a relatively large value despite the fact that the periodic shedding behind a cylinder at Re_D = 100 can be represented by only one scalar value [84]. The error value here is also larger than the L2 error norms reported in previous studies [49,85] which handle the cylinder wake at Re_D = 100 using AEs with n_r = 2. This is likely due to the choice of activation function; in fact, we find a significant difference depending on this choice, which will be explained in section 4.2. It is also striking that the L2 error at n_r = 10^1 − 10^2 somewhat increases. This indicates that the number of weights n_w contained in the AE also has a non-negligible effect on the mapping ability, since the number of weights varies with n_r in our AE setting, as can be seen in table 1, e.g., n_w(n_r = 128) > n_w(n_r = 256). This point will be investigated in section 4.3. We then apply the AE to the transient flow, as shown in figure 7(b). Since the flow in this transient case is also laminar, the whole trend of the wake can be represented with a small number of modes, e.g., 2 and 16 modes. The behavior of the AE with data having no modeled governing equation can be seen in figure 7(c). As shown, the entire trend of the field can be extracted with only two modes, since the data considered here are driven by seasonal periodicity, as mentioned above. The rise in the error at n_r = 128 is likely due to the influence of the number of weights, as also observed in the cylinder example. We can see a striking difference in the AE performance against the
previous three examples in figure 7(d). For the turbulent case, the required number of modes to represent the finer scales is visibly larger than in the other cases, since turbulence contains a much wider range of scales. The difficulty in compressing turbulence can also be seen in terms of the L2 error norms.
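The error measure used throughout these assessments is simple to state in code; a minimal sketch, assuming the fields are supplied as flattened or multi-dimensional arrays of matching shape:

```python
import numpy as np

def l2_error_norm(q_ref, q_ml):
    """Relative L2 error  eps = ||q_ref - q_ml||_2 / ||q_ref||_2  as used in figure 7.

    q_ref : reference (e.g., DNS) field
    q_ml  : field reconstructed by the autoencoder
    """
    return np.linalg.norm(q_ref - q_ml) / np.linalg.norm(q_ref)
```

A perfect reconstruction gives 0, while predicting a zero field against any nonzero reference gives exactly 1, so values near 1 indicate that the model captures essentially none of the field's energy.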
To further examine the dependence of the reconstruction ability of the AEs on the number of latent modes in terms of the scales in turbulence, we check the premultiplied spanwise turbulent kinetic energy spectrum k_z^+ E_uu^+(k_z^+) in figure 8. As shown, the error level decreases with increasing n_r, especially in the low-wavenumber range. This implies that the AE model preferentially extracts the information in the low-wavenumber region. Noteworthy here is that a low-error portion suddenly appears in the high-wavenumber region, i.e., the white lines at n_r = 288 and 1024 in the L2 error map of figure 8. These correspond to the crossing points of the E_uu^+(k_z^+) curves of the DNS and the AE, as can be seen in the lower left of figure 8. The under- or overestimation of the turbulent kinetic energy spectrum is caused by a combination of several factors, e.g., the underestimation of u due to the L2 fitting and the relationship between the squared velocity and the energy, u'^2 = ∫ E_uu^+(k_z^+) dk_z^+, although it is difficult to separate these contributions. Note that the aforementioned observation is also seen in Scherl et al. [86], who applied a robust principal component analysis to the channel flow.
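A one-dimensional spanwise spectrum of the kind compared in figure 8 can be sketched with NumPy's real FFT. This is an illustrative sketch, not the diagnostic code of this study; wall-unit ('+') scaling and spanwise averaging are left to the caller, and an even number of grid points is assumed.

```python
import numpy as np

def premultiplied_spectrum(u, dz):
    """Premultiplied spanwise spectrum k_z * E_uu(k_z) of a fluctuation signal u(z).

    Normalized so that sum(E) equals the variance of u (discrete Parseval identity).
    u  : zero-mean fluctuation sampled at n (even) equispaced points
    dz : grid spacing in z
    """
    n = u.size
    uh = np.fft.rfft(u)
    E = np.abs(uh) ** 2 / n ** 2
    E[1:-1] *= 2.0                              # fold in negative wavenumbers
    k = 2.0 * np.pi * np.fft.rfftfreq(n, d=dz)  # angular wavenumbers k_z
    return k, E, k * E
```

The Parseval normalization is what makes the integral relation quoted above, u'^2 = ∫ E_uu dk_z, hold in its discrete form, so the sum of the returned spectrum recovers the signal variance.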

Choice of activation function
Next, we examine the influence of the choice of activation function on the AE-based low dimensionalization. As presented in figure 5, we cover a wide range of functions: namely, linear mapping, sigmoid, hyperbolic tangent (tanh), rectified linear unit (ReLU, baseline), Leaky ReLU, ELU, Swish, and Softsign. Since the use of an activation function is an essential key to accounting for nonlinearities in the AE-based order reduction, as discussed in section 2.2, the investigation of this choice can be regarded as immensely crucial for the construction of the AE. For each fluid flow example, we consider the same AE construction and parameters as the models in the first assessment (except for the activation function), which achieved an L2 error norm of approximately 0.2. Note that an exception is the example of the sea surface temperature, since the highest reported L2 error over the covered numbers of latent modes is already lower than 0.2. Hence, the numbers of latent modes n_r for each example are 16 (cylinder wake), 16 (transient), 2 (sea surface temperature), and 1024 (turbulent channel flow), respectively. The dependence of the AE reconstruction on the activation functions is summarized in figure 9. As compared to the linear mapping φ(z) = z, the use of nonlinear activation functions improves the accuracy in almost all cases. However, as shown in figure 9(a), the field reconstructed with the sigmoid function φ(z) = (1 + e^{−z})^{−1} is worse than the others, including the linear mapping, showing a mode-0-like field. This is likely because of the widely known fact that the sigmoid function often suffers from the vanishing gradient problem, which prevents the weights of the neural network from being updated properly [78]. The same trend can also be found for the transient flow, as presented in figure 9(b). To overcome the aforementioned issue, the swish function φ(z) = z/(1 + e^{−z}), which has a form similar to the sigmoid, was proposed [87]. The swish function improves the accuracy of the AE-based order reduction for the example of periodic shedding, although not for the transient case. These observations indicate the importance of checking the ability of each
activation function so as to realize an efficient order reduction. The other notable trend here is that the influence of the selected activation function differs depending on the target flow. In our investigation, an appropriate choice of nonlinear activation function exhibits significant improvement for the first three problem settings, i.e., the periodic wake, the transient flow, and the sea surface temperature. In contrast, we do not see substantial differences among the nonlinear functions for the turbulence example. This implies that the range of scales contained in a flow also has a great impact on the mapping ability of the AE with nonlinear activation functions. Summarizing the above, a careful choice of activation function should be made to implement an AE with acceptable order reduction, since the influence and abilities of the activation functions highly depend on the target flows which users deal with.
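For reference, the candidate functions of figure 5 can be written compactly in NumPy. The Leaky ReLU slope and the ELU α are assumptions (the paper does not report them); see Nwankpa et al. [78] for the standard definitions.

```python
import numpy as np

# Candidate activation functions compared in figure 5
activations = {
    "linear":     lambda z: z,
    "sigmoid":    lambda z: 1.0 / (1.0 + np.exp(-z)),
    "tanh":       np.tanh,
    "relu":       lambda z: np.maximum(z, 0.0),                 # baseline in this study
    "leaky_relu": lambda z: np.where(z > 0, z, 0.2 * z),        # slope 0.2 assumed
    "elu":        lambda z: np.where(z > 0, z, np.expm1(z)),    # alpha = 1 assumed
    "swish":      lambda z: z / (1.0 + np.exp(-z)),
    "softsign":   lambda z: z / (1.0 + np.abs(z)),
}
```

Note that the sigmoid's derivative is bounded by 0.25 and decays to zero for large |z|, which is the mechanism behind the vanishing-gradient behavior observed in figure 9, whereas ReLU, Leaky ReLU, ELU, and swish keep gradients of order one on the positive side.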

Number of weights
As discussed in section 4.1, the number of weights contained in an AE may also have an influence on its mapping ability. Here, let us examine this point with the following two investigations: 1. by changing the number of parameters in the present AE (section 4.3.1); 2. by pruning weights in the present AE (section 4.3.2).

Change of parameters
As mentioned above, the present AE comprises a convolutional neural network (CNN) and multi-layer perceptrons (MLP). For the assessment here, to change the number of weights in the AE, we tune various parameters in both the CNN and the MLP, e.g., the number of hidden layers in both the CNN and the MLP, the number of units in the MLP, and the size and number of filters in the CNN. Note in passing that we focus here only on the number of weights, although the way the weights are distributed may also affect the AE's ability. For example, we do not consider the difference between two cases with n_w,1 = n_w,2, where one model with n_w,1 is obtained by tuning the number of layers while the other with n_w,2 is constructed by changing the number of units, although the performance of the two may differ because of the tuned parameters. The same AE construction and parameters as those used in the second assessment with the ReLU function are considered, which achieved an L2 error norm of approximately 0.2 in the first investigation. Similarly to the investigation of the activation functions, the numbers of latent modes n_r for each example are 16 (cylinder wake), 16 (transient), 2 (sea surface temperature), and 1024 (turbulent channel flow), respectively. The dependence of the mapping ability on n_w is investigated as follows: 1. the number of weights in the MLP n_w,MLP is fixed, and the number of weights in the CNN n_w,CNN is changed; the L2 error is assessed against the ratio to the original number of weights in the CNN, n_w,CNN/n_w,CNN:original. 2. The number of weights in the CNN n_w,CNN is fixed, and the number of weights in the MLP n_w,MLP is changed; the L2 error is assessed against the ratio n_w,MLP/n_w,MLP:original.
The number of weights n_w = n_w,CNN + n_w,MLP in the baseline models is summarized in table 4. Note again that the CNN first works to reduce the dimension of the flows by O(10^2), and the MLP is then utilized to map into the target dimension, as stated in section 2.1. Hence, the considered models for the cylinder wake, transient flow, and sea surface temperature have both the CNN and the MLP. On the other hand, the MLP is not applied to the model for the turbulent channel flow example, since n_r = 1024. For this reason, only the influence of the CNN weights is considered for the turbulent case.
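How the tuned parameters map to weight counts follows from two standard formulas: a convolutional layer stores one H × H filter per (input channel, output channel) pair plus a bias per output channel, while a fully-connected layer stores a full matrix plus biases. The example layer sizes below are hypothetical, chosen only to show why the MLP tends to dominate n_w.

```python
def conv_weights(H, K_in, K_out):
    """Weights in one conv layer: H*H filter entries per channel pair, plus biases."""
    return H * H * K_in * K_out + K_out

def dense_weights(n_in, n_out):
    """Weights in one fully-connected layer: full n_in x n_out matrix, plus biases."""
    return n_in * n_out + n_out

# A 3x3 convolution with 16 -> 16 channels is cheap regardless of field size ...
n_cnn = conv_weights(3, 16, 16)      # 2320 weights
# ... while a dense layer mapping an O(10^2) code to a modest width is not:
n_mlp = dense_weights(512, 256)      # 131328 weights
```

This is why the text warns that an AE containing an MLP needs extra care: conv-layer cost is independent of the field resolution, whereas dense-layer cost grows with the product of the layer widths.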
The dependence on the number of weights tuned by changing the parameters is summarized in figure 10. The expected trend here is that the error decreases with an increasing number of weights, which is seen for the turbulent channel flow example shown in figure 10(d). The same trend can also be observed for the parameter modification in the CNN for the examples of the periodic shedding and transient cylinder flows, as presented in figures 10(a) and (b). The error curve for the parameter modification in the MLP, however, shows the contrary trend in both cases: the error increases with a larger number of weights. This is likely because there are too many weights in the MLP, and this depth leads to difficulty in the weight updates, even when using the ReLU function. With regard to the models of the sea surface temperature, the error curves of both the CNN and the MLP show a valley-like behavior. This implies that the numbers of weights achieving the lowest errors are appropriate in terms of weight updates. For the channel flow example, the error converges around 0.15, as shown in figure 10(d), but we should note that this already requires almost n_w/n_w,original = 10. Such a large model suffers from a heavy computational burden in terms of both time and storage. Summarizing the above, although an increase in the number of weights basically leads to better AE performance, care should be taken in the choice of parameters contained in the AE, which directly affect the number of weights, depending on the user's environment. Especially when an AE includes an MLP, we have to be more careful, since the fully-connected structure of the MLP leads to a rapid increase in the number of weights.

Pruning operation
Through the investigations above, we found that an appropriate parameter choice enables the AE to improve its mapping ability. Although we may encounter the problem of weight updates as reported above, increasing the number of weights n_w is also essential to achieve better AE compression. However, even if an AE model can be constructed well with a large number of weights, the increase of n_w directly corresponds to a computational burden in terms of both time and storage in an a posteriori manner. Hence, our next interest is whether an AE model trained with a large number of weights can maintain its mapping ability under a pruning operation. Pruning is a method to cut connections in neural networks and has been known as a good candidate to reduce the computational burden while keeping the accuracy in both classification and regression tasks [88]. We here assess the possibility of using pruning for AEs with fluid flow data. As illustrated in figure 11, we prune by taking a threshold based on the maximum value of the weights per layer, |W|_th = µ|W|_max, where W and µ express the weights and a sparsity threshold coefficient, respectively. The original model without pruning corresponds to µ = 0. The pruning is performed for the CNN as well as the MLP. For this assessment, we consider the best models from our investigation of the activation functions (section 4.2), as summarized in table 5.
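The per-layer magnitude pruning just described, together with the sparsity factor γ used in figure 12, can be sketched as follows (an illustrative sketch; the actual pruning is applied to the trained CNN and MLP weights of table 5):

```python
import numpy as np

def prune_layer(W, mu):
    """Zero out weights below the per-layer threshold |W|_th = mu * |W|_max."""
    th = mu * np.abs(W).max()
    return np.where(np.abs(W) >= th, W, 0.0)

def sparsity_factor(pruned_layers, original_layers):
    """gamma = 1 - n_w / n_w,original, counting only surviving (non-zero) weights."""
    n_w = sum(np.count_nonzero(W) for W in pruned_layers)
    n_orig = sum(W.size for W in original_layers)
    return 1.0 - n_w / n_orig
```

With µ = 0 nothing is removed (γ = 0), and raising µ can only shrink the set of surviving weights, which is why µ and γ grow together in figure 12(a); how quickly γ grows reflects how concentrated the weight magnitudes are in each trained model.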
Let us present in figure 12 the results of the pruning operation for the AEs with fluid flows. To check the weight distribution of the AEs for each problem setting, the relationship between the sparsity threshold coefficient µ and the sparsity factor γ = 1 − n_w/n_w,original is studied in figure 12(a). In all cases, µ and γ have a roughly proportional relationship, as can be expected. Noteworthy here is that the sparsity of the model for the turbulent channel flow is lower than that of the others over the considered range of µ. This indicates a significant difference in the weight distributions of the AEs: in the turbulence example, the weights contributing to reconstruction are distributed more uniformly across the entire network. In contrast, for the laminar cases, i.e., the periodic shedding and transient flows, fewer weights carry high magnitudes and contribute to reconstruction than in the turbulence case. The curve of the sea surface temperature model shows intermediate behavior. It is also striking that this trend is analogous to the singular value spectra in figure 6. The relationship between the sparsity factor γ and the L2 error norm, together with representative flow fields, is shown in figures 12(b) and (c). As presented, a reasonable reconstruction can be maintained up to approximately 10 to 20% pruning. For reference, the number of singular modes achieving 80% cumulative energy, n_M,e=0.8, is also presented in figure 12(a). Users can choose the coefficient γ to attain a target error value depending on the target flow. In summary, efficient order reduction by AE can be achieved by considering the methods and parameters introduced throughout the present paper.
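The error measure used in these comparisons is the relative L2 norm between the reference field and the AE reconstruction (see the caption of figure 7). A minimal sketch of that metric, assuming the fields are stored as NumPy arrays of equal shape:

```python
import numpy as np

def l2_error(q_ref, q_ml):
    """Relative L2 error norm: eps = ||q_ref - q_ml||_2 / ||q_ref||_2."""
    return np.linalg.norm(q_ref - q_ml) / np.linalg.norm(q_ref)

# Toy check on a synthetic field: a uniform 10% amplitude error
# yields a relative error of about 0.1.
q = np.ones((8, 8))
print(l2_error(q, 0.9 * q))  # approximately 0.1
```

Because the norm is taken over the entire field and normalized by the reference, this measure is comparable across the four flow examples despite their different amplitudes and grid sizes.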

Concluding remarks
We presented an assessment of a neural network based model order reduction method, i.e., the autoencoder (AE), for fluid flows. The present AE, which comprises a convolutional neural network and multi-layer perceptrons, was applied to four data sets: the two-dimensional cylinder wake, its transient process, the NOAA sea surface temperature, and turbulent channel flow. The model was evaluated in terms of the various parameters involved in constructing an AE. Regarding the first investigation on the number of latent modes, it was clearly seen that the mapping ability of the AE highly depends on the target flow, i.e., its complexity. In addition, this first assessment highlighted the importance of investigating the choice of activation function and the effect of the number of weights contained in the AE. Motivated by this, the choice of activation function was then considered. We found that the activation functions at the hidden layers of the AE must be chosen carefully to achieve effective order reduction, because their influence highly depends on the target flow. Finally, the dependence of the reconstruction accuracy of the AE on the number of weights contained in the model was assessed. We showed that care should be taken when changing the number of parameters of AE models, because weight-update problems may arise with a very large number of weights. The use of pruning was also examined as one candidate for reducing the number of weights. Although it is known that the capability of neural networks may be improved by retraining pruned networks [89], the possibility of this strategy will be tackled in future work.
We should also discuss remaining issues and possible outlooks regarding the present form of the AE. One of them is the interpretability of the latent space, which may be regarded as a weakness of neural network-based model order reduction due to its black-box nature. Since the AE-based modes are not orthogonal to each other, it is still hard to understand the role of each latent variable in the reconstruction [44]. To tackle this issue, Murata et al. [49] proposed a customized CNN-AE which can visualize the individual modes obtained by the AE, considering a laminar cylinder wake and its transient process. However, they also reported that applications to flows requiring many spatial modes for their representation, e.g., turbulence, are still challenging with their formulation. This is because the structure of the proposed AE becomes more complicated as the complexity of the target problem increases, which eventually harms interpretability. The difficulty of applications to turbulence was also observed in this paper, as discussed above. More recently, Fukami et al. [85] have attempted to use a hierarchical AE to handle turbulent flows efficiently, although it still needs further improvement to handle turbulent flows in a sufficiently interpretable manner. While the aforementioned challenges are just examples, through such continuous efforts we hope to establish, in the near future, more efficient and elegant model order reduction with additional AE designs.
Furthermore, of particular interest is how we can control a high-dimensional, nonlinear dynamical system by using the capability of the AE demonstrated in the present study, which promotes efficient order reduction of data thanks to the nonlinear operations inside neural networks. From this viewpoint, it is widely known that capturing appropriate coordinates via such model order reduction techniques, including the AE considered here, is also important, since describing the dynamics in a proper coordinate system enables us to apply a broad variety of modern control theory in a computationally efficient manner while keeping interpretability and generalizability [90]. To that end, there are several studies that enforce linearity in the latent space of an AE [91,92], although their demonstrations are still limited to low-dimensional problems distanced from practical applications. We expect that the knowledge obtained through the present study, which addresses complex fluid flow problems, can be transferred to such linearity-enforced AE techniques towards more efficient control of nonlinear dynamical systems in the near future.

Fig. 1
Fig. 1 Autoencoder (AE) used in the present study. The present model is constructed by a combination of a multi-layer perceptron and a convolutional neural network.

Fig. 2
Fig. 2 Fundamental operations in a convolutional neural network (CNN). (a) Convolutional operation with a filter. (b) Computational procedure in the convolution layer. (c) Max and average pooling operations. (d) Upsampling operation.

Fig. 5
Fig. 5 Activation functions covered in the present study.

Fig. 6
Fig. 6 (a) Examples of flow fields used in the present study. (b) Normalized cumulative sum of the singular values.

Fig. 7
Fig. 7 The relationship between the number of latent variables n_r and the L2 error norm ε = ||q_Ref − q_ML||_2/||q_Ref||_2. (a) Two-dimensional cylinder wake. (b) Transient flow. (c) NOAA sea surface temperature. (d) y-z sectional streamwise velocity fluctuation of turbulent channel flow. Three-fold cross validation is performed, although not shown.

Fig. 8
Fig. 8 Premultiplied spanwise kinetic energy spectrum at n_r = 288 and 1024. The lower left shows E^+_uu(k^+_z) at y^+ = 13.2; in this plot, the gray line indicates the reference DNS and the other colored plots correspond to the AE models.

Fig. 9
Fig. 9 Dependence of the mapping ability on activation functions. (a) Two-dimensional cylinder wake. (b) Transient flow. (c) NOAA sea surface temperature. (d) y-z sectional streamwise velocity fluctuation of turbulent channel flow. Three-fold cross validation is performed, although not shown. Values above contours indicate the L2 error norm.

Fig. 10
Fig. 10 Dependence on the number of weights tuned by modification of parameters in the AE. (a) Two-dimensional cylinder wake. (b) Transient flow. (c) NOAA sea surface temperature. (d) y-z sectional streamwise velocity fluctuation of turbulent channel flow.

Fig. 12
Fig. 12 Pruning for AEs with fluid flows. (a) Relationship between the sparsity threshold coefficient µ and the sparsity factor γ = 1 − n_w/n_w,original. The number of singular modes which achieves 80% cumulative energy, n_M,e=0.8, is also shown. (b) Relationship between the sparsity factor γ and the L2 error norm ε. (c) Representative flow fields of each example.

Table 1
The present AE structure for the periodic shedding example with the number of latent variables n_r = 2.

Table 2
Hyperparameters used in the present AE model.

Table 3
Training time of AE in the present study.

Table 4
The number of weights in the baseline AEs for section 4.3.1.

Table 5
Models for the pruning study.