Abstract
This paper surveys machine-learning-based super-resolution reconstruction for vortical flows. Super resolution aims to reconstruct high-resolution flow fields from low-resolution data and is an approach generally used in image reconstruction. In addition to surveying a variety of recent super-resolution applications, we provide case studies of super-resolution analysis for an example of two-dimensional decaying isotropic turbulence. We demonstrate that physics-inspired model designs enable successful reconstruction of vortical flows from spatially limited measurements. We also discuss the challenges and outlooks of machine-learning-based super-resolution analysis for fluid flow applications. The insights gained from this study can be leveraged for super-resolution analysis of numerical and experimental flow data.
1 Introduction
Super resolution reconstructs spatially high-resolution field data \({{{\varvec{q}}}}_{\textrm{HR}}\) from its low-resolution counterpart \({{{\varvec{q}}}}_\textrm{LR}\) [1,2,3]. This problem has traditionally been tackled in computer vision with various techniques including interpolation [4,5,6,7], example-based internal learning [8,9,10,11], high-frequency transfer [12,13,14,15], neighbor embedding [16,17,18,19,20], and sparse coding [21,22,23,24,25]. Although these techniques are straightforward to implement, it is generally challenging to reconstruct high-wavenumber content. To address this difficulty, machine learning has been used for accurate super-resolution reconstruction of images [26,27,28]. Machine learning can find a nonlinear relationship between input and output data even under ill-posed conditions. This approach can be applied to a pair of low- and high-resolution images, recovering a finer level of detail from extremely coarse images [29].
Machine-learning-based techniques in general [30,31,32] have been considered for a range of applications in fluid mechanics including turbulence modeling [33,34,35,36,37], reduced-order modeling [38,39,40,41,42], data reconstruction [43,44,45,46], and flow control [47,48,49,50,51,52]. Super-resolution reconstruction with machine learning is no exception. The low barrier to accessing open-source codes in image science and implementing models also enables fluid mechanicians to apply these methods to fluid flow data by replacing the RGB components (red, green, and blue) with the velocity components \(\{u,v,w\}\).
While super resolution can be regarded as an image-based data recovery technique, it is also a general framework for a broad range of applications in fluid mechanics. For instance, a low-resolution fluid flow image can be interpreted as a set of sparse sensor measurements. In this sense, the inverse problem of global field reconstruction from local measurements is an extension of super-resolution analysis [64, 68, 69]. If we consider low-resolution fluid flow data as noisy experimental measurements, super-resolution analysis can also be extended to denoising problems [60, 70, 71]. Furthermore, large-eddy simulation (LES) can incorporate super-resolution reconstruction to reveal finer structures inside a low-resolution grid cell [63, 66].
This paper surveys the current status and the challenges of machine-learning-based super-resolution analysis for vortical flows. We first cover several machine-learning models and their applications to super resolution of fluid flows. We then offer case studies using supervised-learning-based super resolution for an example of two-dimensional decaying isotropic turbulence. We consider embedding physics into the model design to successfully reconstruct a high-resolution vortical flow from low-resolution data. We further discuss the challenges and outlooks of machine-learning-based super resolution in fluid flow applications. The present paper is organized as follows. We introduce machine-learning approaches for super-resolution reconstruction of vortical flows in Sect. 2. Applications of these machine-learning techniques are discussed in Sect. 3. We perform case studies in Sect. 4. Extensions of super-resolution analysis for fluid dynamics are discussed in Sect. 5. Concluding remarks with outlooks are provided in Sect. 6.
2 Approaches
A variety of machine-learning models have been proposed for the super-resolution reconstruction of vortical flows, as summarized in Table 1. Machine-learning-based approaches can find a nonlinear relationship between the low-resolution input and the corresponding high-resolution output from a large collection of data through training. In super-resolution analysis, the dimension of the input (low-resolution data) \({{{\varvec{q}}}}_{\textrm{LR}} \in {\mathbb {R}}^{m}\) is smaller than that of the high-resolution output \({{{\varvec{q}}}}_{\textrm{HR}} \in {\mathbb {R}}^{n}\) with \(m\ll n\),
$$\begin{aligned} {{{\varvec{q}}}}_{\textrm{HR}} = F({{{\varvec{q}}}}_{\textrm{LR}}), \end{aligned}$$
where F is the super-resolution model. Depending on the flow of interest and the size of the data, the machine-learning model should be carefully chosen. In Sect. 2.1, we introduce three types of machine-learning models that are widely used. We also discuss the use of physics-based loss functions in Sect. 2.2.
2.1 Machine-learning models
2.1.1 Fully connected network (multilayer perceptron)
The fully connected network, also called the multilayer perceptron [72], is the most basic neural network model. Nodes between layers are fully connected with each other, as illustrated in Fig. 1. The minimum unit of a fully connected network is called the perceptron. For each perceptron, the linear combination of the inputs from layer \((l-1)\), \(c_j^{(l-1)}\), is connected with the weights \({{{\varvec{w}}}}\), yielding the output at layer (l), \(c_i^{(l)}\),
$$\begin{aligned} c_i^{(l)} = \varphi \left( \sum _j w_{ij} c_j^{(l-1)} + b_i \right) , \end{aligned}$$
where \(\varphi \) is the activation function and b is the bias added at each layer. We can choose a nonlinear function for \(\varphi \), enabling the network to capture the nonlinear relationship between the input and the output.
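As a minimal sketch of this layer operation, the forward pass of a single perceptron layer can be written in a few lines of NumPy; the layer sizes, weight scaling, and tanh activation below are illustrative assumptions rather than settings from any of the surveyed models.

```python
import numpy as np

def dense_layer(c_prev, W, b, phi=np.tanh):
    """One fully connected layer: c^(l) = phi(W c^(l-1) + b).

    c_prev : (n_in,)        activations from layer (l-1)
    W      : (n_out, n_in)  weights
    b      : (n_out,)       bias added at this layer
    phi    : nonlinear activation function
    """
    return phi(W @ c_prev + b)

rng = np.random.default_rng(0)
c0 = rng.standard_normal(8)                    # e.g. a flattened coarse input
W1, b1 = 0.1 * rng.standard_normal((16, 8)), np.zeros(16)
c1 = dense_layer(c0, W1, b1)                   # activations at the next layer
```

Stacking several such layers, with the number of nodes growing toward the output, yields the decoder-type networks discussed below.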
A fully connected model can be used for supervised machine-learning-based super resolution. A training process for supervised machine-learning models is cast as an optimization problem to determine the weights \({{{\varvec{w}}}}\) inside the model F. The weights \({{{\varvec{w}}}}\) are optimized by minimizing the loss function \({{\mathcal {E}}}\) through backpropagation [73]. This optimization procedure is described as
$$\begin{aligned} {{{\varvec{w}}}} = \mathop {\textrm{argmin}}\limits _{{{{\varvec{w}}}}} \, {{\mathcal {E}}}. \end{aligned}$$
Since super-resolution reconstruction aims to obtain a high-resolution image \({{{\varvec{q}}}}_{\textrm{HR}}\) from the corresponding low-resolution data \({{{\varvec{q}}}}_{\textrm{LR}}\), the loss function (error) can be formulated as
$$\begin{aligned} {{\mathcal {E}}} = \Vert {{{\varvec{q}}}}_{\textrm{HR}} - F({{{\varvec{q}}}}_{\textrm{LR}}) \Vert _P, \end{aligned}$$
where P indicates the norm. While the \(L_2\) norm is widely used, we can instead consider other norms such as the \(L_1\) norm and logarithmic norm depending on the data characteristics. The \(L_1\) norm can be used to construct models that are not as sensitive to outliers in the data. The logarithmic norm is suitable for cases where underestimation should be avoided.
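These norm choices can be compared directly on a toy example; in this sketch the \(L_2\) and \(L_1\) forms are standard, while the logarithmic variant (here a simple log1p of the absolute error) is one possible form, assumed for illustration.

```python
import numpy as np

def reconstruction_loss(q_true, q_pred, norm="l2"):
    """Reconstruction error between the true and super-resolved fields."""
    e = q_true - q_pred
    if norm == "l2":                       # standard choice
        return np.mean(e**2)
    if norm == "l1":                       # less sensitive to outliers
        return np.mean(np.abs(e))
    if norm == "log":                      # one possible logarithmic form
        return np.mean(np.log1p(np.abs(e)))
    raise ValueError(norm)

q_true = np.array([0.0, 1.0, 2.0])
q_pred = np.array([0.0, 1.0, 5.0])         # one large outlier error
l2 = reconstruction_loss(q_true, q_pred, "l2")   # dominated by the outlier
l1 = reconstruction_loss(q_true, q_pred, "l1")   # grows only linearly
```

The single outlier inflates the \(L_2\) value far more than the \(L_1\) value, which is why the \(L_1\) norm is preferred for outlier-laden data.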
As mentioned above, the difference in data dimension between the input and the output in super-resolution analysis is substantial. Hence, models generally comprise a decoder-type structure [55, 74], meaning that the number of nodes gradually increases towards the output layer. This is especially the case for high-dimensional inverse problems such as super-resolution reconstruction of fluid flows. The number of nodes and their connections then increases drastically, leading to prohibitively expensive computational costs and failures of non-convex optimization, known as the curse of dimensionality [75]. Users should be mindful of the computational time and memory requirements of fully connected models.
2.1.2 Convolutional neural network
To address the computational burden associated with fully connected models, convolutional neural networks (CNNs) [76] have been widely utilized in super-resolution analysis of fluid flows. CNNs incorporate filter sharing, enabling the processing of large vortical flow data without encountering the curse of dimensionality [77].
A CNN is generally comprised of convolutional layers, pooling layers, and upsampling layers. The convolutional layer depicted in Fig. 2 captures the nonlinear relationship between input and output data by extracting spatial features of the supplied data through filtering operations. This operation is expressed as
$$\begin{aligned} c^{(l)}_{ijn} = \varphi \left( b_n + \sum _{m=1}^{M} \sum _{p=-G}^{G} \sum _{q=-G}^{G} c^{(l-1)}_{i+p,j+q,m} w^{(l)}_{pqmn} \right) , \end{aligned}$$
where \(G=\lfloor H/2\rfloor \), H is the width and height of the filter, M is the number of input channels, n is the output channel index, b is the bias, and \(\varphi \) is the activation function. As in fully connected models, a nonlinear function can be chosen for \(\varphi \) to account for nonlinearities in the machine-learning model.
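A direct (unoptimized) NumPy implementation of this filtering operation makes the index bookkeeping explicit; the field size, channel counts, filter width, and ReLU activation are illustrative assumptions.

```python
import numpy as np

def conv2d(c_prev, w, b, phi=lambda x: np.maximum(x, 0.0)):
    """Single convolutional layer following the equation above.

    c_prev : (Ny, Nx, M)   input feature maps (M channels)
    w      : (H, H, M, N)  filters of width/height H, N output channels
    b      : (N,)          bias per output channel
    Zero padding of G = H // 2 keeps the spatial size unchanged.
    """
    H, _, M, N = w.shape
    G = H // 2
    Ny, Nx, _ = c_prev.shape
    padded = np.pad(c_prev, ((G, G), (G, G), (0, 0)))
    out = np.empty((Ny, Nx, N))
    for i in range(Ny):
        for j in range(Nx):
            patch = padded[i:i + H, j:j + H, :]       # H x H x M window
            out[i, j, :] = np.tensordot(patch, w, axes=3) + b
    return phi(out)

rng = np.random.default_rng(0)
q_lr = rng.standard_normal((8, 8, 2))                 # e.g. (u, v) on a coarse grid
w = 0.1 * rng.standard_normal((3, 3, 2, 4))
feat = conv2d(q_lr, w, np.zeros(4))                   # 4 feature maps, same grid
```

Because the same small filter bank `w` slides over the whole field, the parameter count is independent of the grid size, which is precisely how CNNs sidestep the curse of dimensionality.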
In addition to the convolutional layer, a pooling layer also plays an important role in CNN-based analysis. The pooling layer downscales the data, reducing the data dimension. For regression tasks, it is useful for reducing spatial sensitivity, producing a CNN model that is robust against noisy inputs [78]. It is also possible to expand the data dimension through the upsampling layer, which copies each value onto an arbitrary region. This function is especially useful for aligning the data dimensions inside the network.
For super resolution, in which the dimension of the output \({\mathbb {R}}^{n}\) is larger than that of the input \({\mathbb {R}}^{m}\), there are several ways to treat the difference in dimension between the input and the output. For example, upsampling can be used inside a network to expand the dimension [79]. One can also apply a resize or interpolation function to the input data to align its size with that of the output [29, 54, 80]. This avoids the use of pooling or upsampling operations, reducing the complexity of the model.
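Nearest-neighbor upsampling, the simplest way to copy values onto a larger grid, can be sketched as follows; the 2 x 2 example field is purely illustrative.

```python
import numpy as np

def upsample_nearest(q_lr, factor=2):
    """Copy each value onto a factor x factor block to expand the dimension."""
    return np.repeat(np.repeat(q_lr, factor, axis=0), factor, axis=1)

q_lr = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
q_up = upsample_nearest(q_lr)    # 4 x 4 field; each value fills a 2 x 2 block
```

Placing such an operation inside the network (or resizing the input beforehand) aligns the coarse input with the fine output grid before the convolutional layers refine it.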
2.1.3 Generative adversarial network
In addition to supervised fully connected networks and convolutional networks, unsupervised learning with generative adversarial networks (GANs) [81] has also been proposed for super-resolution analysis of fluid flows [53, 56, 59, 62, 82]. GAN-based approaches are attractive for cases in which it is difficult to prepare paired input and output data. For example, the application of super resolution to LES corresponds to this scenario: a model trained with pairs of high-fidelity DNS data and subsampled low-resolution data may not directly support super-resolution reconstruction of LES data. Super resolution of PIV measurements with limited spatiotemporal resolution (without corresponding high-resolution solution images) also needs to be carefully considered.
A GAN is composed of two networks, namely a generator (G) and a discriminator (D). The generator produces a fake image, intended to resemble the solution, from random noise \({\varvec{n}}\). In contrast, the discriminator judges whether the generated (fake) image is likely to be a realistic image, returning a probability between 0 (fake) and 1 (real). The generator usually possesses a decoder-type structure to expand the data dimension from noise to images, while the discriminator is composed of an encoder-type network that reduces the data dimension from images to a probability. Throughout the training process, the weights inside the generator are updated to deceive the discriminator by generating images increasingly similar to the real data. Fake images produced by the generator eventually become high-quality images that cannot be distinguished from real images.
These processes can be mathematically expressed through the cost function V(D, G),
$$\begin{aligned} \min _G \max _D V(D,G) = {\mathbb {E}}_{{{{\varvec{d}}}} \sim p_{\textrm{data}}({{{\varvec{d}}}})} \left[ \log D({{{\varvec{d}}}}) \right] + {\mathbb {E}}_{{{\varvec{n}}}} \left[ \log (1 - D(G({{{\varvec{n}}}}))) \right] , \end{aligned}$$
where \({\varvec{d}}\) is a real data set and \(p_{\textrm{data}}\) is the probability distribution of the real data. The parameters in the generator G are trained in the direction in which \(D(G({{\varvec{n}}}))\) approaches 1. On the other hand, the weights in the discriminator D are updated so that \(D({{{\varvec{d}}}})\) returns a value close to 1 while, as the discriminator becomes wiser through training, \(D(G({{\varvec{n}}}))\) returns a value close to 0. In summary, the parameters inside the generator G are optimized by minimizing the cost function while those of the discriminator D are adjusted by maximizing it, which is referred to as competitive learning [83]. Once training ends, the trained generator can produce an output of indistinguishable quality compared to the real data. For super-resolution problems, we can use low-resolution data as the input to the generator G instead of random noise \({\varvec{n}}\), as illustrated in Fig. 3. A generator in super-resolution reconstruction provides a statistically plausible high-resolution output by learning the relationship between the low-resolution input data set and the high-resolution data set, which need not be paired.
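The roles of the two players in V(D, G) can be illustrated numerically; the discriminator probabilities below are made-up values, not outputs of a trained network.

```python
import numpy as np

def value_function(d_real, d_fake):
    """V(D, G) = E[log D(d)] + E[log(1 - D(G(n)))] for batches of
    discriminator outputs (probabilities in (0, 1))."""
    return np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))

# A confident discriminator: high D on real samples, low D on fakes.
v_strong_d = value_function(np.array([0.9, 0.95]), np.array([0.05, 0.1]))

# A generator that fools the discriminator pushes D(G(n)) toward 1,
# which drives V down: the generator minimizes, the discriminator maximizes.
v_fooled = value_function(np.array([0.9, 0.95]), np.array([0.8, 0.9]))
```

The fooled case yields a lower V than the confident-discriminator case, showing the opposing directions in which G and D adjust the same cost.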
2.2 Choice of loss function
Here, let us discuss the choice of the loss (cost) function for machine-learning-based super-resolution analysis. In the standard formulation, the cost function is defined by Eq. 4 or 6. However, super-resolved flow fields from direct applications of machine-learning models do not necessarily satisfy physical conditions, such as the conservation laws. To address this issue, loss functions that embed physics laws can be utilized [84, 85]. Together with the original data-based cost \({{{\mathcal {E}}}}_d\) from Eq. 4 or 6, the loss function \({{\mathcal {E}}}\) incorporating a physics-inspired loss term \({{{\mathcal {E}}}}_p\) for super-resolution analysis can take the form of
$$\begin{aligned} {{\mathcal {E}}} = {{{\mathcal {E}}}}_d + \beta {{{\mathcal {E}}}}_p, \end{aligned}$$
where \(\beta \) provides a scale between \({{{\mathcal {E}}}}_d\) and \({{{\mathcal {E}}}}_p\).
There are several approaches to introduce a physics-based loss term for fluid flows. For instance, if data for all the state variables are available, we can directly substitute a reconstructed high-resolution field \({{{\varvec{q}}}}_{\textrm{HR}}\) into the governing equation [85],
$$\begin{aligned} {{{\mathcal {E}}}}_p = \Vert {{{\mathcal {N}}}}({{{\varvec{q}}}}_{\textrm{HR}}) \Vert _P, \end{aligned}$$
where \({{{\mathcal {N}}}}\) is an operator derived from the governing equations. Minimizing a loss function incorporating only certain terms of the Navier–Stokes equations [38] can also be considered,
$$\begin{aligned} {{{\mathcal {E}}}}_p = \Vert {{{\mathcal {N}}}}_j({{{\varvec{q}}}}_{\textrm{HR}}) - {{{\mathcal {N}}}}_j({{{\varvec{q}}}}_{\textrm{Ref}}) \Vert _P, \end{aligned}$$
where \({{{\mathcal {N}}}}_j\) is a term in the governing equation and \({{\varvec{q}}}_{\textrm{Ref}}\) is reference data. These physics-based loss functions are known to help in reconstructing flows from a small amount of data [60] because the additional terms better constrain the solution space [86, 87]. This is similar in concept to semi-supervised learning, which combines a small amount of labeled data with a large amount of unlabeled data [88]. In the present paper, we demonstrate the effectiveness of training with a small data set for super-resolution reconstruction of turbulent vortices in Sect. 4. We should, however, note that such physics-inspired analysis can suffer from large numerical error if \({{{\varvec{q}}}}_{\textrm{HR}}\) contains error or noise. This approach should be used with caution since it assumes that \({{{\varvec{q}}}}_{\textrm{HR}}\) can be used to evaluate certain terms.
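As a concrete example of such a physics-based term, a divergence-free (continuity) residual for a two-dimensional incompressible field can be evaluated with finite differences; the periodic grid, central-difference scheme, and analytic test field are illustrative assumptions.

```python
import numpy as np

def divergence_loss(u, v, dx, dy):
    """Physics loss E_p = mean((du/dx + dv/dy)^2) for an incompressible
    2D field, using central differences on a periodic grid."""
    dudx = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / (2 * dx)
    dvdy = (np.roll(v, -1, axis=0) - np.roll(v, 1, axis=0)) / (2 * dy)
    return np.mean((dudx + dvdy) ** 2)

# A divergence-free test field: u = sin(x)cos(y), v = -cos(x)sin(y).
n = 64
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x)
dx = x[1] - x[0]
u, v = np.sin(X) * np.cos(Y), -np.cos(X) * np.sin(Y)
e_p = divergence_loss(u, v, dx, dx)        # ~0 for this solenoidal field
```

A super-resolved field that violates continuity produces a large `e_p`, so adding \(\beta {{{\mathcal {E}}}}_p\) to the data loss steers the model toward physically admissible reconstructions.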
3 Applications
In this section, we survey recent super-resolution applications for fluid flows through supervised (Sect. 3.1) and semi-supervised/unsupervised learning (Sect. 3.2).
3.1 Supervised learning
In machine-learning-based super-resolution reconstruction of fluid flows, supervised techniques are often used. Supervised learning requires pairs of input and output flow field data as training data. For super-resolution analysis, a high-resolution reference flow field and the corresponding low-resolution data need to be available for training models. To avoid the curse of dimensionality, CNN models are often used for image-based super resolution of fluid flows rather than fully connected models.
Fukami et al. [54, 89, 90] proposed a CNN-based super-resolution reconstruction for fluid flows in a supervised manner. The CNN-based model was applied to examples of a two-dimensional cylinder wake, two-dimensional isotropic turbulence, and three-dimensional turbulent channel flow. To capture multi-scale physics in turbulent vortical flows, they also proposed the hybrid downsampled skip-connection/multi-scale (DSC/MS) model based on the CNN. The model is composed of up/downsampling operations, the skip connection [91], and CNNs with various filter sizes. While the up/downsampling operations support robustness against rotation and translation of vortical structures, the skip connection provides stability of the learning process [91]. Moreover, the multi-scale CNN aims to capture a variety of length scales in turbulent flows. Especially for the turbulence examples, it was shown that the DSC/MS model is effective in accurately preserving the energy spectrum.
Following this study, supervised CNN-based super-resolution analysis has been actively studied for a range of flows. Obiols-Sales et al. [57] proposed a CNN-based super-resolution model called SURFNet and tested its performance on wakes around various NACA-type airfoils, ellipses, and cylinders. SURFNet includes transfer-learning-based augmentation [92]. The model is first trained using only low-resolution flow data, and the pretrained weights are then transferred for training with high-resolution data sets. Transfer learning over multiple levels of spatial resolution can improve the accuracy of super-resolution reconstruction [93], which is also related to multi-fidelity learning [94]. A U-Net-based model (illustrated in Fig. 4) can also reduce the training cost for super-resolution reconstruction of turbulent flows since the size of the fluid flow data is reduced through an autoencoder-type model structure [95].
Incorporating physical insights and domain knowledge into model construction further supports or enhances supervised-learning-based super-resolution reconstruction of vortical flows. For instance, accounting for the spatial length scales of flow structures in the models improves reconstruction [54]. Kong et al. [96] developed a multiple-path super-resolution CNN with several connections inside the model to capture variations of the spatial temperature distribution in a supersonic combustor. They reported that the proposed multiple-path CNN provides enhanced reconstruction of temperature fields compared to a regular CNN. Incorporating the time history of flow fields is also useful for super-resolving vortical flows in a supervised manner. Liu et al. [58] compared two types of supervised CNN-based models for super-resolution analysis: the static CNN (SCNN) and the multiple temporal paths CNN (MTPC). While the SCNN model uses instantaneous flow snapshots as the input, the MTPC model takes a time series of velocity fields as the input to read spatial and temporal information simultaneously. With examples of forced isotropic turbulence and turbulent channel flow, they found that the MTPC model improves the reconstruction of turbulence statistics such as kinetic energy spectra and the second and third invariants of the velocity gradient tensor.
Once supervised models are trained, machine-learning models can be used for data compression since we only need to save the input data to recover high-resolution flow fields. Matsuo et al. [97] proposed an adaptive super-resolution analysis. They focused on how the low-resolution field is prepared when training a supervised-learning-based model. While max- and average-pooling operations are generally used for preparing low-resolution data sets, they used the spatial standard deviation in arbitrary subdomains of a flow field to determine the local degree of downsampling. This accounts for the importance of flow structures when generating low-resolution data sets. They reported that supervised CNN models can reconstruct a high-resolution field of a three-dimensional square cylinder wake from adaptive low-resolution data, achieving approximately 0.05% data compression against the original data.
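The idea of choosing the local downsampling level from the spatial standard deviation can be sketched as follows; the block size, threshold, and two-level factor map are simplifying assumptions, not the exact scheme of Matsuo et al.

```python
import numpy as np

def block_std(q, k):
    """Standard deviation of each k x k subdomain of a 2D field."""
    n = q.shape[0] // k
    blocks = q[:n * k, :n * k].reshape(n, k, n, k).swapaxes(1, 2)
    return blocks.std(axis=(2, 3))

def local_pool_factor(q, k, threshold):
    """Aggressive pooling (factor k) where the local standard deviation
    is small (smooth flow), mild pooling (factor 2) where it is large
    (active vortical structures)."""
    return np.where(block_std(q, k) < threshold, k, 2)

rng = np.random.default_rng(0)
q = np.zeros((8, 8))
q[4:, 4:] = rng.standard_normal((4, 4))    # "vortical" activity in one corner
factors = local_pool_factor(q, k=4, threshold=0.1)
```

The quiescent subdomains are assigned the coarse factor and the active corner keeps a finer representation, so more storage is spent where the flow structures matter.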
Compressing fluid flow data in the time direction can also be considered. Fukami et al. [90] used the DSC/MS model to reconstruct high-resolution turbulent flows from coarse flow data in space and time, inspired by the concept of super-resolution analysis and inbetweening [98]. In their formulation, two spatially coarse flow fields at \(t=n\Delta t\) and \(t=(n+k)\Delta t\) are taken as the input of the first machine-learning model. Once the spatial-reconstruction model provides two super-resolved high-resolution flow fields, these outputs are fed into a second model that performs inbetweening, providing high-resolution snapshots between the beginning and end frames. By combining these two models, spatiotemporally high-resolution vortical flows can be obtained from only two coarse snapshots. It should be noted that linear interpolation in time cannot capture advective physics. They demonstrated the model's capability with turbulent channel flows and reported that the flow field can be quantitatively reconstructed, achieving 0.04% data compression. Arora and Shrivastava [99] have recently combined this super-resolution/inbetweening idea with physics-informed neural networks [85] to improve the reconstruction accuracy and demonstrated it with an example of a mixed-variable elastodynamics system.
Furthermore, supervised super-resolution reconstruction can be used to examine how machine learning extracts the relationship between small- and large-scale vortical structures. Kim and Lee [100] considered a CNN-based estimation of the high-resolution heat flux field in a turbulent channel flow from poorly resolved wall-shear stresses and pressure. They revealed that the CNN model focuses on the relationship between vortical structures and the pressure distribution in channel turbulence to estimate the local heat flux from the wall-shear stress. Morimoto et al. [101] have recently examined the effect of interpolation and extrapolation with respect to flow parameters in machine-learning-based super-resolution reconstruction. They considered two staggered cylinder wakes whose dynamics are characterized by the diameters of and the distance between the two cylinders. They found that a supervised CNN-based model can quantitatively reconstruct a vortical flow even for untrained parameter cases by preparing the flow field training data based on information from the lift coefficient spectrum.
Supervised super-resolution techniques have also been applied to larger-scale meteorological flows [102, 103]. Onishi et al. [102] proposed a CNN-based model for super-resolution analysis of temperature fields in urban environments. The proposed model provides a high-resolution temperature field at a reduced computational time compared to the corresponding high-fidelity simulation, suggesting the potential use of machine-learning models as surrogates for large-scale numerical simulations. To improve the model performance, Yasuda et al. [103] extended this model by incorporating skip connections [91] and channel attention [104]. While the skip connection [91] helps stabilize the learning process of deep CNNs, channel attention [104] can discover the crucial and irrelevant spatial regions for fluid flow regression. The model trained with temperature fields of one city (Tokyo) provides quantitative reconstruction for test temperature data of another city with a similar climate (Osaka). They also observed that including building-height information as a part of the input of the machine-learning model is important for successful temperature reconstruction.
In addition to the aforementioned studies with numerical data, applications to experiments have also been considered [105]. For such cases, the effects of noise in the input data must be carefully considered. Deng et al. [106] developed a machine-learning model to super-resolve PIV measurements. To train the CNN-based model, they used pairs of high-resolution experimental velocity data collected by PIV with the cross-correlation method and downsampled low-resolution data. The model was tested on turbulent flows around a single cylinder and two cylinders. For more complex turbulent flows, Wang et al. [107] proposed a super-resolution neural network for two-dimensional PIV (PIV2DSR) based on CNNs. After training the model with velocity fields of turbulent channel flow at Re\(_\tau = 1000\) obtained by direct numerical simulation (DNS), they assessed it with not only numerical channel flow data at a much higher Reynolds number of 5200 but also real experimental PIV data for a turbulent boundary layer at Re\(_\tau = 2200\).
For the preparation of training data in these experimental studies, cross-correlation methods [109] are generally used to obtain velocity fields from particle images. Instead of obtaining a velocity field from the correlation method, one may consider providing the particle images directly to a model to obtain a higher-resolution flow field. Cai et al. [110] used FlowNetS [111] to estimate velocity fields of a cylinder wake, a backward-facing step flow, and isotropic turbulence from synthetic particle images. They showed that a machine-learning model provides higher-resolution flow field data than conventional PIV. The proposed method was also tested with experimental particle images of a turbulent boundary layer. Reconstructed flows based on machine learning may capture phenomena that cannot be observed with conventional techniques. This FlowNetS-based method has recently been commercialized as AI-PIV [112]. The super-resolution approach with particle images has also been applied to the wake around bluff bodies to remove the influence of reflection and halation in PIV measurements [113].
Alternatively, a set of sparse sensor measurements can be considered as the input to machine-learning models instead of low-resolution flow data. For instance, Erichson et al. [55] used a fully connected model to reconstruct a global flow field from local sensors. The model was applied to a geophysical flow and forced isotropic turbulence. Their fully connected model is a shallow decoder, which nonlinearly compresses the sensor inputs to extract key features, after which the whole field is recovered from these latent representations, as illustrated in Fig. 5. By visualizing the weight distribution between the latent space representation and the whole field, the shallow decoder provides nonlinear modes that represent the contribution of each latent variable to super-resolution reconstruction, analogous to those captured by nonlinear autoencoders [108, 115,116,117,118,119,120].
As mentioned above, fully connected network-based reconstruction is prohibitively expensive for global flow field reconstruction due to the very large number of parameters in the network [121]. To address this issue, there are also efforts to estimate low-order representations, such as coefficients obtained through proper orthogonal decomposition (POD), from sparse sensor measurements [122,123,124]. For instance, Nair and Goza [67] proposed a fully connected model-based estimator of POD coefficients and applied it to a laminar wake around a flat plate. Their fully connected model takes vorticity sensors on the airfoil surface and outputs POD coefficients, as illustrated in Fig. 6. They considered wakes at two different angles of attack and reported that the neural-network model outperforms conventional linear techniques such as Gappy POD [125, 126] and linear stochastic estimation [127]. Similarly, Manohar et al. [128] have recently performed fully connected model- and POD-based sparse reconstruction for the wake interactions of two cylinders. Their model considers the time history of sensor measurements with long short-term memory (LSTM) [129], achieving more robustness against noisy inputs compared to a regular MLP model. These reduced-order strategies in machine-learning-based vortical flow reconstruction are summarized in Dubois et al. [114]. With flow examples of two- and three-dimensional cylinder wakes and a spatial mixing layer, they discuss the pros and cons of a variety of techniques such as POD [130,131,132], regular autoencoders [133], variational autoencoders [134], linear/nonlinear fully connected networks, support vector machines [135], gradient boosting [136], and library-based reconstruction [137, 138].
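A linear stand-in for this sensor-to-coefficient estimation illustrates the pipeline: extract POD modes by SVD, estimate the coefficients from a few sensor readings (here by least squares rather than a neural network), and reconstruct the global field. The synthetic snapshot data and sensor locations are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is a 1D "flow field" built from
# three smooth spatial modes (a stand-in for simulation data).
n_grid, n_snap, r = 200, 60, 3
x = np.linspace(0.0, 2.0 * np.pi, n_grid)
true_modes = np.stack([np.sin(x), np.sin(2 * x), np.cos(3 * x)], axis=1)
Q = true_modes @ rng.standard_normal((r, n_snap))      # (n_grid, n_snap)

# Leading POD modes from the SVD of the snapshot matrix.
U, s, _ = np.linalg.svd(Q, full_matrices=False)
Phi = U[:, :r]

# Sparse "sensors": a handful of grid points (hypothetical locations).
sensors = np.array([5, 40, 90, 150, 190])

# Estimate the POD coefficients from the sensor readings by least squares
# (a linear stand-in for the fully connected estimator), then reconstruct.
q_true = Q[:, 0]
a_hat, *_ = np.linalg.lstsq(Phi[sensors, :], q_true[sensors], rcond=None)
q_rec = Phi @ a_hat
err = np.linalg.norm(q_rec - q_true) / np.linalg.norm(q_true)
```

Because the field lives in the span of a few POD modes, a handful of well-placed sensors suffices; the neural-network estimators in the studies above replace the least-squares step with a learned nonlinear map, which pays off for noisy or nonlinear sensor-to-coefficient relationships.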
From the perspective of reducing the number of parameters inside machine-learning models, a combination of a fully connected model and CNNs has also been leveraged to overcome the limitations of fully connected networks. Morimoto et al. [139] considered a combination of a multilayer perceptron (MLP) and a CNN (called the MLP-CNN-based estimator) to estimate vortical flows around urban structures and temperature data (DayMET) across North America from sparse sensors. The sensor inputs are first given to the fully connected part of the model, which extracts features from the input sensors. The extracted feature vectors are then given to the convolutional layers. Compared to solely using fully connected layers, the computational cost can be significantly reduced while maintaining the reconstruction accuracy. A similar MLP-CNN model was also considered by Zhong et al. [140, 141] for a vortex-airfoil gust interaction problem. The model estimates a two-dimensional vorticity field from pressure sensor measurements on an airfoil surface. They reported that transfer learning [92, 142] can help in reducing the required amount of training data, while a recurrent neural network (long short-term memory, LSTM [129]) also improves the reconstruction performance for complex transient wake problems.
3.2 Semi-supervised and unsupervised learning
In addition to supervised-learning-based efforts, semi-supervised and unsupervised learning can be used in super-resolution analysis of fluid flows. Semi-supervised learning combines a small amount of labeled data with a large amount of unlabeled data, and can also be augmented with prior knowledge incorporated into the loss function. Gao et al. [60] proposed a semi-supervised CNN-based super-resolution analysis for fluid flows. Through the investigation of a two-dimensional laminar flow and a cardiovascular flow, they showed that constraints based on the conservation laws and boundary conditions enable successful super-resolution reconstruction without high-resolution labels. These physics-law-based augmentations, inspired by physics-informed neural networks (PINNs) [84, 85, 143], achieve accurate reconstruction while reducing the required amount of training data [65, 144].
There are also several studies on semi-supervised super resolution. Bode et al. [66] proposed the physics-informed enhanced super-resolution generative adversarial network (PIESRGAN) for applications to subgrid-scale modeling of LES. To incorporate a physics-based loss function, they used the following cost function \({{\mathcal {E}}}\) for training,
$$\begin{aligned} {{\mathcal {E}}} = {{{\mathcal {E}}}}_{\textrm{adv}} + \beta _{\textrm{reg}} {{{\mathcal {E}}}}_{\textrm{reg}} + \beta _{\textrm{grad}} {{{\mathcal {E}}}}_{\textrm{grad}} + \beta _{\textrm{cont}} {{{\mathcal {E}}}}_{\textrm{cont}}, \end{aligned}$$
where \(\beta _{\textrm{reg}}\), \(\beta _{\textrm{grad}}\), and \(\beta _{\textrm{cont}}\) are weighting coefficients for the different loss term contributions. The first loss term \({{{\mathcal {E}}}}_{\textrm{adv}}\) corresponds to the regular adversarial loss used in GAN-based models, introduced in Eq. 6 [145]. The second term \({{{\mathcal {E}}}}_{\textrm{reg}}\) is a regular supervised loss function, equivalent to Eq. 4. The PIESRGAN also includes the gradient loss \({{{\mathcal {E}}}}_{\textrm{grad}}\), defined as the \(L_2\) error norm of the gradient of the state variables [56]. Weighting the gradient of the flow field promotes a smooth and physically plausible reconstruction [146, 147]. They also considered \({{{\mathcal {E}}}}_{\textrm{cont}}\), the divergence-free error for incompressible flow. Similarly, a combination of a physics-based loss and a U-Net (Fig. 4) was proposed by Esmaeilzadeh et al. [148] as MeshfreeFlowNet and applied to the Rayleigh–Bénard instability problem. Owing to the U-Net-based augmentation, training MeshfreeFlowNet takes less than 4 min with 128 GPUs while achieving quantitative reconstruction. To improve the generalizability of MeshfreeFlowNet [148] for a wide variety of problems, Wang et al. [149] have recently proposed TransFlowNet, which weakens the constraint of initial and boundary conditions compared to MeshfreeFlowNet. TransFlowNet was tested with examples of the shallow water equations and Rayleigh–Bénard convection. The model provides better reconstruction than the original MeshfreeFlowNet, although instability of the training process is also observed due to the complexity of the model.
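The assembly of such a composite loss can be sketched as below, with a gradient-error term and placeholder weights; the weight values and toy fields are illustrative, not the published PIESRGAN settings, and the continuity term would follow the divergence residual discussed in Sect. 2.2.

```python
import numpy as np

def grad_loss(q_true, q_pred, dx):
    """L2 error of the spatial gradient (promotes sharp, smooth structures)."""
    gt = np.gradient(q_true, dx)
    gp = np.gradient(q_pred, dx)
    return sum(np.mean((a - b) ** 2) for a, b in zip(gt, gp))

def composite_loss(e_adv, e_reg, e_grad, e_cont,
                   b_reg=1.0, b_grad=0.1, b_cont=0.1):
    """E = E_adv + b_reg E_reg + b_grad E_grad + b_cont E_cont.
    The b_* weights here are placeholders, not the published values."""
    return e_adv + b_reg * e_reg + b_grad * e_grad + b_cont * e_cont

# Toy usage with a smooth 2D field (illustrative values only).
n = 32
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
q_true2 = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * x))
e_grad = grad_loss(q_true2, 0.9 * q_true2, dx)
total = composite_loss(e_adv=0.5, e_reg=0.01, e_grad=e_grad, e_cont=0.0)
```

Each term contributes a scalar, so the training signal is a single number whose balance is controlled entirely by the \(\beta \) weights, which is exactly why their tuning is delicate, as discussed next.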
While incorporating the aforementioned physics loss can promote a physically plausible super-resolution solution, we should be mindful that finding an appropriate balance between the weighting coefficients is challenging. We can consider the use of optimization for finding an optimal set of coefficients, although it is computationally expensive [151]. The influence of the balance between an adversarial error and a regular \(L_2\) reconstruction error for sparse flow reconstruction is discussed in detail by Zhang et al. [152] for an example of flow around building models. Moreover, achieving stable convergence during training is also difficult with such complex loss functions. To alleviate this issue, additional machine-learning techniques such as the skip connection [91] and batch normalization [153] can be leveraged. In fact, the aforementioned models such as PIESRGAN [56], MeshfreeFlowNet [148], and TransFlowNet [149] are composed of ResBlocks [91] (illustrated in Fig. 7a), which include both batch normalization and skip connections, for stable and successful learning.
In contrast to supervised and semi-supervised learning, unsupervised learning, which does not require labeled data sets, is also used for super-resolution analysis. Kim et al. [59] proposed a cycle generative adversarial network (cGAN)-based framework for unsupervised super-resolution reconstruction of turbulent flows. While a regular GAN is composed of one generator and one discriminator, as presented in Sect. 2.1.3, the cGAN possesses two generators (\(G_1\) and \(G_2\)) and two discriminators (\(D_1\) and \(D_2\)), as illustrated in Fig. 7b. One generator \(G_1\) attempts to reconstruct high-resolution data \({{{\varvec{q}}}}_{\textrm{HR}}\) from a low-resolution flow field \({{{\varvec{q}}}}_{\textrm{LR}}\), while the other generator \(G_2\) provides low-resolution fields from the high-resolution flow data generated through \(G_1\). The discriminators \(D_1\) and \(D_2\) are trained to distinguish the real data from the generated data, as depicted in Fig. 7b. This operation allows the cGAN model to learn common features between low- and high-resolution data, which need not be paired [150]. The proposed model can reconstruct a velocity field of turbulent channel flow from its low-resolution counterpart. They also demonstrated that the model trained with DNS data can be applied to LES data.
Following the study by Kim et al. [59], unsupervised GAN-based super resolution has recently been examined for a variety of flows. Wurster et al. [154] proposed a hierarchical GAN to perform super resolution of fluid flows. Analogous to SURFNet [57], the hierarchical GAN is first trained with low-resolution data sets. The model weights are then transferred to training with higher-resolution flow fields. Güemes et al. [62] combined GAN-based super-resolution reconstruction and state estimation [155,156,157,158] from wall sensor measurements of turbulent channel flow. They first perform super-resolution reconstruction for the wall-shear stresses and wall pressure. Another GAN model is then constructed to estimate wall-parallel velocity fields at several wall-normal locations from the super-resolved wall measurements. The GAN models are able to provide reasonable agreement with the reference simulation data up to \(y^+\approx 50\). Yousif et al. [159] extended a super-resolution GAN model by combining it with a multi-scale CNN [54] and applied it to a turbulent channel flow with large longitudinal ribs. The reconstructed flow fields are shown to retain the temporal correlations and high-order spatial statistics.
Moreover, the use of a CNN-based GAN for three-dimensional super-resolution analysis was examined by Xu et al. [160] for computed tomography (CT) of a turbulent jet combustor. With an example of turbulent atmospheric flow, Hassanaly et al. [161] have comprehensively compared various models for super-resolution reconstruction, including a super-resolution GAN [162], stochastic estimation, a deconvolution GAN [163], and a diversity-sensitive conditional GAN [164]. Although GAN-based models have issues with stability during the learning process, these models hold potential for high-wavenumber reconstruction of turbulent flows.
4 Case study: super-resolution reconstruction of turbulence
This section offers details of CNN-based super-resolution reconstruction for fluid flows through a case study. As an example, we consider two-dimensional decaying isotropic turbulence, which serves as a canonical turbulent flow. The flow field data to be studied are generated by a two-dimensional DNS [165], which numerically solves the two-dimensional vorticity transport equation,
$$\begin{aligned} \frac{\partial \omega }{\partial t} + {{{\varvec{u}}}}\cdot \nabla \omega = \nu \nabla ^2 \omega , \end{aligned}$$
where \({{{\varvec{u}}}}=(u,v)\) and \(\omega \) represent the velocity and vorticity fields, respectively. The computational domain is a biperiodic square with \(L_x=L_y=1\). The initial Reynolds numbers for the training/validation and test data sets are, respectively, set to Re\(_0\equiv u^*l_0^*/\nu =\{451,442\}\). Here, \(u^*\) is the characteristic velocity defined as the square root of the spatially averaged initial kinetic energy, \(l_0^*=[2{\overline{u^2}}(t_0)/{\overline{\omega ^2}}(t_0)]^{1/2}\) is the initial integral length, and \(\nu \) is the kinematic viscosity. The number of computational grid points used by the DNS is \(N_x=N_y=512\). For training the baseline networks, we use 1000 snapshots over the eddy turnover time range \(t\in [2,6]\) with a time interval of \(\Delta t=0.004\). We consider the vorticity field \(\omega \) as the variable of interest.
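For concreteness, the characteristic scales defined above can be computed from a velocity/vorticity snapshot as in the following sketch. This is an interpretation of the stated definitions, with \({\overline{u^2}}\) read as the spatial mean of \(|{{{\varvec{u}}}}|^2\); it is not the authors' code.

```python
import numpy as np

def initial_scales(u, v, omega, nu):
    """Characteristic velocity u* (square root of the spatially averaged
    kinetic energy), integral length l0* = [2*mean(|u|^2)/mean(omega^2)]^(1/2),
    and the resulting Reynolds number Re_0 = u* l0* / nu."""
    ke = 0.5 * np.mean(u**2 + v**2)     # spatially averaged kinetic energy
    u_star = np.sqrt(ke)
    l0 = np.sqrt(2.0 * np.mean(u**2 + v**2) / np.mean(omega**2))
    return u_star, l0, u_star * l0 / nu
```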
We note that our previous studies [54, 90] on machine-learning-based super-resolution reconstruction were performed with two-dimensional decaying turbulence but at lower Reynolds numbers (Re\(_0\approx 80\)) with a smaller number of grid points (\(N=128\)). The present case study examines how the model can be improved at a higher Reynolds number with regard to not only reconstruction accuracy but also the amount of training data required.
For the present study, we consider super-resolution reconstruction with a regular CNN and the hybrid downsampled skip-connection/multi-scale (DSC/MS) model [54]. The design of the DSC/MS model is illustrated in Fig. 8. The red portion, the downsampled skip-connection (DSC) model, is composed of up/downsampling operations and skip connections. The up/downsampling operations provide robustness against rotation and translation of the vortical structures. The skip connection plays a crucial role in hierarchically learning the relationship between the high-resolution output and the low-resolution input, while providing numerical stability during the learning process of the CNN [91]. The present model also incorporates the multi-scale (MS) model [166], corresponding to the blue portion of Fig. 8. This part of the model performs filtering operations across three different filter sizes, capturing a range of spatial length scales in vortical flows.
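A minimal sketch of the multi-scale filtering idea follows. In the actual MS model the filters are learned convolution kernels; here fixed periodic box filters of arbitrary sizes stand in for them, simply to show how one field yields stacked features at several length scales.

```python
import numpy as np

def box_filter(f, k):
    """Periodic k-by-k box filter built from rolled sums (k odd)."""
    r = k // 2
    out = np.zeros_like(f, dtype=float)
    for i in range(-r, r + 1):
        for j in range(-r, r + 1):
            out += np.roll(np.roll(f, i, axis=0), j, axis=1)
    return out / (k * k)

def multiscale_features(f, sizes=(3, 5, 9)):
    """Stack filtered copies of the field at several kernel sizes,
    mimicking the multi-scale branch that captures different
    spatial length scales of a vortical flow."""
    return np.stack([box_filter(f, k) for k in sizes], axis=-1)
```

The stacked channels would then be passed to subsequent convolutional layers, analogous to the blue branch in Fig. 8.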
To accurately reconstruct two-dimensional turbulent flow at higher Reynolds number, we provide additional internal skip connections between the DSC model and the MS model, as depicted by the green and orange boxes in Fig. 8. Each green box in the DSC model connects with each of the orange boxes in the MS model; hence, nine connections are present. With these interconnections, the interconnected DSC/MS model enables the intermediate inputs/outputs from both submodels to correlate with each other through the learning process. Since the range of spatial length scales broadens with the Reynolds number, the interconnections are expected to help the model learn the relationship between small and large vortical elements. For the activation function \(\varphi \), this study uses the ReLU function [167] to avoid vanishing gradients of the weights during the training process.
Furthermore, we consider a physics-based loss function to examine its effects on machine-learning-based super-resolution reconstruction of turbulent vortical flows. As discussed in Sect. 2.2, the use of a physics-inspired loss function may not only promote the physical validity of the reconstruction but also reduce the amount of necessary training data in a semi-supervised manner [60, 65, 148, 168]. Here, we use the nonlinear advection term and the linear viscous diffusion term in Eq. 11 for the physics-based loss function. The present cost function \({{{\mathcal {E}}}}\) is hence defined as
$$\begin{aligned} {{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega } + \beta _{\textrm{adv}}{{{\mathcal {E}}}}_{\textrm{adv}} + \beta _{\textrm{visc}}{{{\mathcal {E}}}}_{\textrm{visc}}, \end{aligned}$$
in which \(\omega _{\textrm{DNS}}\) and \(\omega _{\textrm{LR}}\), respectively, represent the reference (high-resolution) DNS field and the low-resolution input flow field. The coefficients \(\beta _{\textrm{adv}}\) and \(\beta _{\textrm{visc}}\) determine the balance of the terms in the loss function. The terms \((\cdot )_{\textrm{ML}}\) inside \({{{\mathcal {E}}}}_{\textrm{adv}}\) and \({{{\mathcal {E}}}}_{\textrm{visc}}\) are computed with the super-resolved vorticity field \(F(\omega _{\textrm{LR}})\).
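Such a physics-based loss can be sketched with finite differences on a periodic grid as below. This is an illustrative rendition, not the authors' code: the stencils are second-order central differences, and for simplicity the advection term of the ML field is formed with the reference DNS velocity rather than a velocity recovered from the super-resolved vorticity.

```python
import numpy as np

def laplacian(f, dx):
    """Second-order periodic five-point Laplacian."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0)
            + np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4 * f) / dx**2

def grad(f, dx):
    """Central-difference gradient (periodic): returns (df/dx, df/dy)."""
    fx = (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * dx)
    fy = (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * dx)
    return fx, fy

def physics_loss(omega_ml, omega_dns, u_dns, v_dns, nu, dx,
                 beta_adv=0.1, beta_visc=0.1):
    """E = E_omega + beta_adv*E_adv + beta_visc*E_visc, comparing the
    advection and viscous terms of the ML field against the DNS field."""
    e_w = np.mean((omega_dns - omega_ml) ** 2)
    wx_ml, wy_ml = grad(omega_ml, dx)
    wx_d, wy_d = grad(omega_dns, dx)
    adv_ml = u_dns * wx_ml + v_dns * wy_ml   # (u . grad omega)_ML
    adv_d = u_dns * wx_d + v_dns * wy_d      # (u . grad omega)_DNS
    e_adv = np.mean((adv_d - adv_ml) ** 2)
    e_visc = np.mean((nu * (laplacian(omega_dns, dx)
                            - laplacian(omega_ml, dx))) ** 2)
    return e_w + beta_adv * e_adv + beta_visc * e_visc
```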
In what follows, we assess six different machine-learning models:

1.
CNN-\(L_2\): a regular CNN model with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega }\),

2.
CNN-\(L_{\textrm{phys}}\): a regular CNN model with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega } + \beta _{\textrm{adv}}{{{\mathcal {E}}}}_{\textrm{adv}} + \beta _{\textrm{visc}}{{{\mathcal {E}}}}_{\textrm{visc}}\),

3.
DSC/MS-\(L_2\): the original DSC/MS model [54] with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega }\),

4.
DSC/MS-\(L_{\textrm{phys}}\): the original DSC/MS model with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega } + \beta _{\textrm{adv}}{{{\mathcal {E}}}}_{\textrm{adv}} + \beta _{\textrm{visc}}{{{\mathcal {E}}}}_{\textrm{visc}}\),

5.
IDSC/MS-\(L_2\): the interconnected DSC/MS model with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega }\),

6.
IDSC/MS-\(L_{\textrm{phys}}\): the interconnected DSC/MS model with \({{{\mathcal {E}}}} = {{{\mathcal {E}}}}_{\omega } + \beta _{\textrm{adv}}{{{\mathcal {E}}}}_{\textrm{adv}} + \beta _{\textrm{visc}}{{{\mathcal {E}}}}_{\textrm{visc}}\).
These six machine-learning models are tasked to reconstruct the high-resolution vortical flow field of size \(512^2\) from the corresponding low-resolution data of size \(16^2\), generated by average-pooling operations [54]. We set \(\beta _{\textrm{adv}} = \beta _{\textrm{visc}} = 0.1\) to balance the order of magnitude of each term in the loss function.
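The average-pooling downsampling used to generate the low-resolution inputs, together with the relative \(L_2\) error metric used in the assessments below, can be sketched as follows (illustrative numpy only; not the authors' code):

```python
import numpy as np

def average_pool(f, factor):
    """Downsample a square 2-D field by block-averaging (average pooling)."""
    n = f.shape[0]
    assert n % factor == 0, "field size must be divisible by the pooling factor"
    return f.reshape(n // factor, factor, n // factor, factor).mean(axis=(1, 3))

def l2_error(f_dns, f_ml):
    """Relative L2 norm error between reference and reconstructed fields."""
    return np.linalg.norm(f_dns - f_ml) / np.linalg.norm(f_dns)

# e.g., a 512x512 vorticity snapshot pooled down to the 16x16 input
omega_hr = np.random.default_rng(0).standard_normal((512, 512))
omega_lr = average_pool(omega_hr, 32)
```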
Let us consider the reconstructed vorticity fields from the machine-learning-based super-resolution approaches in Fig. 9. The large-scale vortices can be reconstructed with the regular CNN models. However, the reconstructed fields are pixelized around rotation- and shear-dominated structures, which was also observed with a regular CNN-based super-resolution reconstruction in our previous study [54]. The \(L_2\) norm error, \(\epsilon = \Vert f_{\textrm{DNS}}-f_{\textrm{ML}}\Vert _2/\Vert f_{\textrm{DNS}}\Vert _2\), is found to be larger than 0.5 with the regular CNNs. The DSC/MS model with the \(L_2\)-based optimization provides a better and clearer reconstruction of the large vortical structures, with an \(L_2\) norm error of 0.241. This indicates that embedding the physics-inspired DSC functions and the MS filters enables accurate reconstruction of vortical flows.
While the DSC/MS model achieves a qualitative reconstruction of the vortical structures, the finer scales of the shear layers that appear around large rotational elements cannot be recovered well. The reconstruction over these scales, which emerge in higher Reynolds number flows, can be improved by introducing either the physics-based loss function or the interconnections inside the DSC/MS model, as presented in Fig. 9. With DSC/MS-\(L_{\textrm{phys}}\), IDSC/MS-\(L_2\), and IDSC/MS-\(L_{\textrm{phys}}\), these shear layers are more accurately reconstructed compared to the reconstruction with the regular model, as highlighted by the red boxes in Fig. 9. Hence, both the physics-inspired optimization and model design greatly assist in the reconstruction of higher Reynolds number flows. Note that the difference between the interconnection-based model enhancement and the use of the physics-based loss function lies in their robustness against noisy low-resolution input, as will be discussed later.
Here, we examine each term in the physics-based loss function, namely the linear term \(\nabla ^2\omega \) and the nonlinear term \({{{\varvec{u}}}}\cdot \nabla \omega \), for the present super-resolution reconstruction, as shown in Fig. 10. These results are from the same case as the vorticity snapshots presented in Fig. 9. Examination of these terms is a strict test since higher-order derivatives can greatly amplify errors at high wavenumbers. Let us first focus on the estimated linear viscous diffusion term visualized in Fig. 10a. The regular CNN completely fails to estimate \(\nabla ^2\omega \), as evident from the pixelized vorticity reconstruction in Fig. 9. Using the DSC/MS model with the regular \(L_2\) optimization, the linear term field also exhibits erroneous profiles comprised of pairwise structures that are not observed in the reference field. These derivative-based assessments are again very sensitive and also affected by the reconstruction of surrounding local structures. As expected, the estimation is improved by including the physics-based term in the loss function (DSC/MS-\(L_{\textrm{phys}}\)). The accuracy of \(\nabla ^2\omega \) can be further enhanced by using the interconnected DSC/MS models, which recover fine-scale structures in the high-order derivative field. This indicates that, in addition to the physics-loss-based optimization, adding the interconnections inside the machine-learning model enables physically compatible super-resolution reconstruction of turbulent flows.
The estimation of the nonlinear term is shown in Fig. 10b. The overall trend in reconstruction is analogous to that for the linear term; hence, the interconnected DSC/MS models reconstruct the nonlinear term fields well compared to the reference field. The \(L_2\) errors for the nonlinear term with DSC/MS-\(L_{\textrm{phys}}\), IDSC/MS-\(L_2\), and IDSC/MS-\(L_{\textrm{phys}}\) are higher than those of the linear term. This suggests that estimating the nonlinear term is more difficult than estimating the linear term.
We also investigate the dependence of the reconstruction error on the number of training snapshots \(n_{\textrm{snapshot}}\). For all models, the error decreases as \(n_{\textrm{snapshot}}\) increases, as shown in Fig. 11. Both the interconnections and the physics-based loss enable a qualitative reconstruction with a reduced number of training snapshots. The observation that a physics-inspired optimization reduces the required amount of training data has also been reported in previous studies [60, 143]. The interconnected DSC/MS model reconstructs fine vortical structures even with only \(n_{\textrm{snapshot}} = 50\) (Fig. 11c), while the original DSC/MS model recovers only large-scale structures, as shown in Fig. 11a. This suggests that the present machine-learning model efficiently captures a nonlinear relationship between the under-resolved input and the high-resolution vortical flow from a small amount of training data by capitalizing on the interconnected skip connections.
The use of the physics-based loss function can also lead to robustness against noisy inputs. Here, let us examine the influence of noise on super-resolution reconstruction. We add Gaussian noise \({{{\varvec{n}}}}\) to the low-resolution input \({\omega }_{\textrm{LR}}\) and assess the reconstruction \(L_2\) error \(\epsilon = \Vert \omega _{\textrm{HR}} - F(\omega _{\textrm{LR}} + {{{\varvec{n}}}})\Vert _2/\Vert \omega _{\textrm{HR}}\Vert _2\), where the magnitude of the noise is given as \(\gamma = \Vert {{{\varvec{n}}}}\Vert /\Vert \omega \Vert \). Here, the models trained with 1000 snapshots are used. The relationship between the error and the noise magnitude is shown in Fig. 12. For all cases, the error increases with increasing magnitude \(\gamma \). The reconstructed flow fields generally reveal the large-scale vortices, while the finer scales are affected by the noisy input. Especially for \(\gamma >0.3\), the DSC/MS models with the physics-based loss function are observed to be more robust than the models trained with the simple \(L_2\) error optimization. Hence, it can be argued that the physics-based loss function helps in devising models that are robust against noisy measurements.
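Scaling Gaussian noise so that its magnitude ratio \(\gamma \) takes a prescribed value can be done as in this sketch (illustrative; the helper name is hypothetical):

```python
import numpy as np

def add_noise(omega, gamma, rng=None):
    """Add Gaussian noise n scaled so that ||n|| / ||omega|| == gamma."""
    rng = np.random.default_rng(rng)
    n = rng.standard_normal(omega.shape)
    n *= gamma * np.linalg.norm(omega) / np.linalg.norm(n)
    return omega + n
```

Sweeping \(\gamma \) with such a helper and evaluating the reconstruction error at each level reproduces the kind of robustness curve shown in Fig. 12.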
5 Extensions
In the above sections, we surveyed various machine-learning-based super-resolution approaches and their applications to vortical flows. Here, we discuss extensions of machine-learning-based super-resolution analysis beyond their basic applications.
5.1 Changing input variable setups
When a machine-learning model is trained, the size of the input and output variables, or more specifically the setup of the input and output variables, is fixed. If the setup is changed, the machine-learning model generally needs to be completely retrained, which is a heavy burden. This issue is in fact a limitation of many machine-learning models, and machine-learning-based super-resolution models are no exception. If the number of pixels or their locations differs from those used in the training process, the trained model cannot be used without retraining with the changed input variable size. Preprocessing different-size input data with interpolation may work, but care should be taken since such an approach generally loses information. Unstructured-grid and randomly sampled data also require some care since standard CNN-based models may not be appropriate.
There are several approaches to address these challenges. For instance, PointNet [170] is able to handle unorganized and sparse data in the form of a point cloud. Although PointNet was originally developed for image classification and segmentation tasks, Kashefi et al. [169] have recently applied it to fluid flows. In their formulation, sensor measurements on the grid can be directly treated, and a model can learn the relationship between the sensors and the outputs, as illustrated in Fig. 13a.
To handle spatially irregular sensor arrangements, a graph neural network (GNN) [121] can be considered. A GNN is able to perform a convolutional operation on unstructured mesh data, similar to that inside CNNs. Such GNN-based methods can be applied to machine-learning-based super-resolution reconstruction by modifying the setup of the data dimensions between input and output, as shown in Fig. 13b.
Coordinate transformation can also be considered so that regular machine-learning models can simply be used for vortical flows. PhyGeoNet [173] includes a coordinate transformation from an irregular domain to a structured mesh space for fluid flow regression, allowing us to convolve on the flow fields, as illustrated in Fig. 13c. Finding an appropriate coordinate transformation may be a challenge for complex flow field domain geometries.
We can also generalize super-resolution analysis by considering sensor measurements in the flow field as the input for machine-learning models to reconstruct the flow field. For a fixed number of sensors with their positions unchanged, regular machine-learning models developed in image science can often be directly used. However, when sensors go online or offline, changing the number of sensors, or move spatially over time, the machine-learning models cannot be applied without special care. The Voronoi-tessellation-based CNN [64] can handle an arbitrary number of moving sensors in a single model. In this formulation, sparse sensor measurements are projected onto grids generated from Voronoi tessellation, as illustrated in Fig. 13d. The flow field discretized with Voronoi tessellation is then used as an input for CNN-based super-resolution reconstruction. This approach provides robust real-time super-resolution analysis for vortical flows.
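A brute-force sketch of the Voronoi projection step follows: each grid point takes the value of its nearest sensor, which is exactly the discrete Voronoi tessellation. Sensor coordinates are assumed to lie in the unit square, and this is not the authors' implementation.

```python
import numpy as np

def voronoi_input(sensor_xy, sensor_vals, nx, ny):
    """Project sparse sensor values onto an nx-by-ny grid: every grid point
    is assigned the value of its nearest sensor (discrete Voronoi cells)."""
    gx, gy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)          # (nx*ny, 2)
    # Pairwise distances from every grid point to every sensor
    d = np.linalg.norm(pts[:, None, :] - sensor_xy[None, :, :], axis=2)
    nearest = np.argmin(d, axis=1)                            # closest sensor index
    return sensor_vals[nearest].reshape(ny, nx)
```

The resulting gridded field, together with a sensor-location mask, would then serve as the CNN input regardless of how many sensors are active.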
5.2 Super resolution for turbulent flow simulations
With the ability to recover fine-scale flow structures from coarse images of the flow field, it is natural to ask whether machine-learning-based super-resolution analysis can be incorporated into numerical simulations to improve turbulent flow simulations. From a broader perspective, this question translates to whether super-resolution analysis can be implemented in a simulation of multiscale physical phenomena to accurately reconstruct the subgrid-scale physics [63, 66].
For super-resolution analysis to reconstruct a physically accurate high-resolution flow field, it is generally necessary that the low-resolution input data be accurate on its own coarse grid. If the coarse flow field input is provided by some turbulent flow simulation (e.g., large-eddy simulation, detached eddy simulation, or Reynolds-averaged Navier–Stokes simulation [174]), it is important that the coarse flow be accurate to begin with. The super-resolved field would not be physically accurate if the low-resolution flow field (input) deviates from the true solution. Conservatively speaking, turbulent flow statistics may be predicted well with super resolution, but highly accurate reconstruction of each and every instantaneous flow field would likely be a major difficulty, if not impossible [175]. In other words, we should not expect that LES results (or those from other solvers with turbulence models) can be transformed to yield DNS results.
A worthy question to ask is whether super-resolution analysis can support the development of subgrid-scale models. This could be different from other turbulence modeling approaches that apply regression to directly determine the subgrid-scale models for turbulent flow simulations. Similar to an approximate deconvolution model [176], which considers the inverse mapping of spatial filters, super resolution could be used to augment subgrid-scale models. Furthermore, it remains to be seen whether super-resolution analysis can simultaneously nudge the low-resolution field and recover the subgrid-scale flow structures. Again, the success of such simultaneous corrections will likely require the low-resolution flow field to be fairly accurate on its own grid. Alternatively, GAN-based techniques may also provide interesting approaches to achieve super resolution for turbulence.
Ongoing research developments in super-resolution analysis of turbulent flows and data-driven turbulence models [33, 35] may address the issues identified here in the coming years. As super-resolution methods are extended and incorporated into turbulent flow analysis and simulations, it is important to ensure that the derived super-resolution method is generalizable over a range of Reynolds numbers and turbulent flow problems to confirm robust and reliable performance. This is critical if these techniques are to be implemented in general-use turbulent flow simulators.
5.3 Applications to realworld problems
Super-resolution analysis holds great potential for fluid dynamics, as discussed above. However, there still exist some challenges, especially toward applications to real-world problems. This section discusses the current challenges and possible future directions of machine-learning-based fluid flow super resolution.
One of the major challenges of machine-learning-based super-resolution reconstruction for fluid flows is the need for a certain amount of training data. While the unsupervised learning used for GANs and the semi-supervised learning assisted with physics-inspired loss functions introduced in the present survey can mitigate this issue, existing techniques still require learning the relationship between coarse data and high-resolution vortical flows from either unpaired or paired training data for successful reconstruction. Since the majority of real-world problems do not have access to ground truth, and only sparse and noisy measurements are available, one can consider the use of data assimilation [177,178,179] to improve super-resolution reconstruction by incorporating the latest observations with a short-range real-time forecast.
Yasuda and Onishi [180] have recently proposed four-dimensional super-resolution data assimilation and demonstrated its performance with a two-dimensional periodic channel flow. The proposed method considers the temporal evolution of a system from low-resolution simulations with the aid of an ocean model, while a trained machine-learning model is simultaneously used to perform data assimilation and super resolution. Since a huge amount of historical weather and climate reanalysis data is available, the unification of super resolution with data assimilation or preexisting models would be an interesting research direction.
In addition, most of the existing studies focus on designing a reconstruction model for a particular flow problem, variable, or data shape. From this aspect, it would be desirable to simultaneously leverage a variety of multimodal data such as pointwise measurements, image-based data, and online measurements such as LiDAR-type data. Prediction of unavailable parameters from such sparse and noisy, but available, measurements may also become an interesting direction for super-resolution studies of fluid dynamics.
6 Conclusions
We provided a survey on machine-learning-based super-resolution reconstruction of vortical flows. Several machine-learning approaches and the use of physics-based cost functions for super-resolution analysis were discussed. We further performed case studies of super-resolution reconstruction of turbulent flows with convolutional neural network (CNN)-based methods. We demonstrated that a super-resolution model with a physics-based loss function or physics-inspired neural network structures can reconstruct vortical flows even with limited training data and noisy inputs. We also discussed extensions and challenges of machine-learning-based super resolution for fluid flows from the aspects of changing input variable setups and applications to turbulent flow simulations.
The insights obtained through the present survey can be leveraged for a variety of machine-learning-based super-resolution models. For instance, the use of multi-scale filters inside CNNs can be generalized not only in supervised learning but also in unsupervised techniques [181]. Physics-informed loss functions can also be extended to various machine-learning models. Moreover, it may also be interesting to develop super-resolution models in wave space to incorporate certain spectral properties.
We note that the studies surveyed in the present paper are generally based on clean training data. Preparing high-quality input data is essential for successfully reconstructing turbulent flows. However, it is also necessary to assess the robustness and sensitivity of the models against noisy inputs [182]. This point will be important as machine-learning-based super-resolution analyses become utilized in industrial applications [183]. Together with the accuracy of the models, quantifying the uncertainties in machine-learning predictions is also required to assess their reliability and limitations. For these reasons, making computational and experimental fluid-flow databases [184,185,186] available is critically important to advance studies on data-driven analysis of vortical flows. We hope that this survey paper provides some guidance in advancing algorithms and applications of machine-learning-based super-resolution analysis for a variety of fundamental and industrial fluid flow problems.
Availability of data and materials
The data that support the findings of this study are available from the corresponding author upon reasonable request.
References
Irani, M., Peleg, S.: Improving resolution by image registration. CVGIP Gr. Models Image Process. 53(3), 231–239 (1991)
Salvador, J.: Example-based super resolution. Academic Press, Cambridge (2016)
Bannore, V.: Iterative-interpolation super-resolution image reconstruction: a computationally efficient technique, vol. 195. Springer, Berlin (2009)
Keys, R.: Cubic convolution interpolation for digital image processing. IEEE Trans. Acoust. Speech Signal Process. 29(6), 1153–1160 (1981)
Vandewalle, P., Süsstrunk, S., Vetterli, M.: A frequency domain approach to registration of aliased images with application to super-resolution. EURASIP J. Adv. Signal Process. 2006, 1–14 (2006)
Joshi, N., Szeliski, R., Kriegman, D.J.: PSF estimation using sharp edge prediction. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE, (2008)
Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. Proc. DARPA Image Underst. Workshop 81, 674–679 (1981)
Michaeli, T., Irani, M.: Nonparametric blind super-resolution. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 945–952. (2013)
Glasner, D., Bagon, S., Irani, M.: Super-resolution from a single image. In: IEEE 12th International Conference on Computer Vision, pp. 349–356. IEEE, (2009)
Zontak, M., Mosseri, I., Irani, M.: Separating signal from noise using patch recurrence across scales. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1195–1202, (2013)
Shahar, O., Faktor, A., Irani, M.: Space-time super-resolution from a single video. In: CVPR 2011, pp. 3353–3360. IEEE Computer Society, (2011)
Freedman, G., Fattal, R.: Image and video upscaling from local selfexamples. ACM Trans. Graph. 30(2), 1–11 (2011)
Yang, J., Lin, Z., Cohen, S.: Fast image super-resolution based on in-place example regression. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1059–1066, (2013)
Baker, S., Kanade, T.: Limits on super-resolution and how to break them. IEEE Trans. Pattern Anal. Mach. Intell. 24(9), 1167–1183 (2002)
Park, S.C., Park, M.K., Kang, M.G.: Super-resolution image reconstruction: a technical overview. IEEE Signal Process. Mag. 20(3), 21–36 (2003)
Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290(5500), 2323–2326 (2000)
Bevilacqua, M., Roumy, A., Guillemot, C., Morel, M.L.A.: Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In: British Machine Vision Conference (BMVC), (2012)
Chang, H., Yeung, D.Y., Xiong, Y.: Super-resolution through neighbor embedding. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. I–I. IEEE, (2004)
Freeman, W.T., Jones, T.R., Pasztor, E.C.: Example-based super-resolution. IEEE Comput. Graph. Appl. 22(2), 56–65 (2002)
Freeman, W.T., Pasztor, E.C., Carmichael, O.T.: Learning low-level vision. Int. J. Comput. Vis. 40(1), 25–47 (2000)
Lee, H., Battle, A., Raina, R., Ng, A.Y.: Efficient sparse coding algorithms. In: Proceedings of the 19th International Conference on Neural Information Processing Systems, pp. 801–808, (2006)
Yang, J., Wright, J., Huang, T., Ma, Y.: Image super-resolution as sparse representation of raw image patches. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8. IEEE, (2008)
Lu, X., Yuan, H., Yan, P., Yuan, Y., Li, X.: Geometry constrained sparse coding for single image super-resolution. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1648–1655. IEEE, (2012)
Yang, J., Wright, J., Huang, T.S., Ma, Y.: Image super-resolution via sparse representation. IEEE Trans. Image Process. 19(11), 2861–2873 (2010)
Zhang, K., Gao, X., Tao, D., Li, X.: Single image super-resolution with non-local means and steering kernel regression. IEEE Trans. Image Process. 21(11), 4544–4556 (2012)
Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision, pp. 184–199. Springer, (2014)
Dong, C., Loy, C.C., Tang, X.: Accelerating the super-resolution convolutional neural network. In: European Conference on Computer Vision, pp. 391–407. Springer, (2016)
Yang, W., Zhang, X., Tian, Y., Wang, W., Xue, J.H., Liao, Q.: Deep learning for single image super-resolution: a brief review. IEEE Trans. Multimed. 21(12), 3106–3121 (2019)
Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295–307 (2015)
Brunton, S.L., Noack, B.R., Koumoutsakos, P.: Machine learning for fluid mechanics. Annu. Rev. Fluid Mech. 52, 477–508 (2020)
Brunton, S.L., Hemati, M.S., Taira, K.: Special issue on machine learning and data-driven methods in fluid dynamics. Theor. Comput. Fluid Dyn. 34, 333–337 (2020)
Brenner, M.P., Eldredge, J.D., Freund, J.B.: Perspective on machine learning for advancing fluid mechanics. Phys. Rev. Fluids 4, 100501 (2019)
Duraisamy, K., Iaccarino, G., Xiao, H.: Turbulence modeling in the age of data. Annu. Rev. Fluid Mech. 51, 357–377 (2019)
Maulik, R., San, O., Jacob, J.D., Crick, C.: Subgrid scale model classification and blending through deep learning. J. Fluid Mech. 870, 784–812 (2019)
Ling, J., Kurzawski, A., Templeton, J.: Reynolds averaged turbulence modelling using deep neural networks with embedded invariance. J. Fluid Mech. 807, 155–166 (2016)
Novati, G., de Laroussilhe, H.L., Koumoutsakos, P.: Automating turbulence modelling by multi-agent reinforcement learning. Nat. Mach. Intell. 3(1), 87–96 (2021)
Bae, H.J., Koumoutsakos, P.: Scientific multi-agent reinforcement learning for wall-models of turbulent flows. Nat. Commun. 13(1), 1–9 (2022)
Lee, S., You, D.: Data-driven prediction of unsteady flow fields over a circular cylinder using deep learning. J. Fluid Mech. 879, 217–254 (2019)
Callaham, J.L., Rigas, G., Loiseau, J.C., Brunton, S.L.: An empirical mean-field model of symmetry-breaking in a turbulent wake. Sci. Adv. 8(19), eabm4786 (2022)
San, O., Maulik, R.: Neural network closures for nonlinear model order reduction. Adv. Comput. Math. 44(6), 1717–1750 (2018)
Fukami, K., Murata, T., Zhang, K., Fukagata, K.: Sparse identification of nonlinear dynamics with low-dimensionalized flow representations. J. Fluid Mech. 926, A10 (2021)
Srinivasan, P.A., Guastoni, L., Azizpour, H., Schlatter, P., Vinuesa, R.: Predictions of turbulent shear flows using deep neural networks. Phys. Rev. Fluids 4, 054603 (2019)
Scherl, I., Strom, B., Shang, J.K., Williams, O., Polagye, B.L., Brunton, S.L.: Robust principal component analysis for modal decomposition of corrupt fluid flows. Phys. Rev. Fluids 5, 054401 (2020)
Manohar, K., Brunton, B.W., Kutz, J.N., Brunton, S.L.: Data-driven sparse sensor placement for reconstruction: demonstrating the benefits of exploiting known patterns. IEEE Control Syst. Mag. 38(3), 63–86 (2018)
Fukami, K., Fukagata, K., Taira, K.: Assessment of supervised machine learning for fluid flows. Theor. Comput. Fluid Dyn. 34(4), 497–519 (2020)
Kim, H., Kim, J., Lee, C.: Interpretable deep learning for prediction of Prandtl number effect in turbulent heat transfer. J. Fluid Mech. 955, A14 (2023)
Rabault, J., Kuchta, M., Jensen, A., Réglade, U., Cerardi, N.: Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control. J. Fluid Mech. 865, 281–302 (2019)
Bieker, K., Peitz, S., Brunton, S.L., Kutz, J.N., Dellnitz, M.: Deep model predictive flow control with limited sensor data and online learning. Theor. Comput. Fluid Dyn. 34(4), 577–591 (2020)
Zhou, Y., Fan, D., Zhang, B., Li, R., Noack, B.R.: Artificial intelligence control of a turbulent jet. J. Fluid Mech. 897, A27 (2020)
Paris, R., Beneddine, S., Dandois, J.: Robust flow control and optimal sensor placement using deep reinforcement learning. J. Fluid Mech. 913, A25 (2021)
Park, J., Choi, H.: Machine-learning-based feedback control for drag reduction in a turbulent channel flow. J. Fluid Mech. 904, A24 (2020)
Ghraieb, H., Viquerat, J., Larcher, A., Meliga, P., Hachem, E.: Single-step deep reinforcement learning for open-loop control of laminar and turbulent flows. Phys. Rev. Fluids 6(5), 053902 (2021)
Xie, Y., Franz, E., Chu, M., Thuerey, N.: tempoGAN: a temporally coherent, volumetric GAN for super-resolution fluid flow. ACM Trans. Graph. 37(4), 1–15 (2018)
Fukami, K., Fukagata, K., Taira, K.: Super-resolution reconstruction of turbulent flows with machine learning. J. Fluid Mech. 870, 106–120 (2019)
Erichson, N.B., Mathelin, L., Yao, Z., Brunton, S.L., Mahoney, M.W., Kutz, J.N.: Shallow neural networks for fluid flow reconstruction with limited sensors. Proc. Roy. Soc. A 476(2238), 20200097 (2020)
Bode, M., Gauding, M., Kleinheinz, K., Pitsch, H.: Deep learning at scale for subgrid modeling in turbulent flows: regression and reconstruction. In: International Conference on High Performance Computing, pp. 541–560. Springer, (2019)
Obiols-Sales, O., Vishnu, A., Malaya, N.P., Chandramowlishwaran, A.: SURFNet: super-resolution of turbulent flows with transfer learning using small datasets. In: IEEE 30th International Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 331–344. IEEE, (2021)
Liu, B., Tang, J., Huang, H., Lu, X.Y.: Deep learning methods for super-resolution reconstruction of turbulent flows. Phys. Fluids 32(2), 025105 (2020)
Kim, H., Kim, J., Won, S., Lee, C.: Unsupervised deep learning for super-resolution reconstruction of turbulence. J. Fluid Mech. 910, A29 (2021)
Gao, H., Sun, L., Wang, J.X.: Super-resolution and denoising of fluid flow using physics-informed convolutional neural networks without high-resolution labels. Phys. Fluids 33(7), 073603 (2021)
Zhou, X.H., McClure, J.E., Chen, C., Xiao, H.: Neural network-based pore flow field prediction in porous media using super resolution. Phys. Rev. Fluids 7(7), 074302 (2022)
Güemes, A., Discetti, S., Ianiro, A., Sirmacek, B., Azizpour, H., Vinuesa, R.: From coarse wall measurements to turbulent velocity fields through deep learning. Phys. Fluids 33(7), 075121 (2021)
Pradhan, A., Duraisamy, K.: Variational multiscale super-resolution: a data-driven approach for reconstruction and predictive modeling of unresolved physics. arXiv:2101.09839, (2021)
Fukami, K., Maulik, R., Ramachandra, N., Fukagata, K., Taira, K.: Global field reconstruction from sparse sensors with Voronoi tessellation-assisted deep learning. Nat. Mach. Intell. 3(11), 945–951 (2021)
Yousif, M.Z., Yu, L., Lim, H.C.: High-fidelity reconstruction of turbulent flow from spatially limited data using enhanced super-resolution generative adversarial network. Phys. Fluids 33(12), 125119 (2021)
Bode, M., Gauding, M., Lian, Z., Denker, D., Davidovic, M., Kleinheinz, K., Jitsev, J., Pitsch, H.: Using physics-informed enhanced super-resolution generative adversarial networks for subfilter modeling in turbulent reactive flows. Proc. Combust. Inst. 38(2), 2617–2625 (2021)
Nair, N.J., Goza, A.: Leveraging reduced-order models for state estimation using deep learning. J. Fluid Mech. 897, R1 (2020)
Güemes, A., Vila, C.S., Discetti, S.: Super-resolution generative adversarial networks of randomly-seeded fields. Nat. Mach. Intell. 4, 1165–1173 (2022)
Sun, L., Wang, J.X.: Physics-constrained Bayesian neural network for fluid flow reconstruction with sparse and noisy data. Theor. Appl. Mech. Lett. 10(3), 161–169 (2020)
Fathi, M.F., Perez-Raya, I., Baghaie, A., Berg, P., Janiga, G., Arzani, A., D’Souza, R.M.: Super-resolution and denoising of 4D-flow MRI using physics-informed deep neural nets. Comput. Methods Programs Biomed. 197, 105729 (2020)
Vlasenko, A., Schnörr, C.: Super-resolution and denoising of 3D fluid flow estimates. In: Joint Pattern Recognition Symposium, pp. 482–491. Springer, (2009)
Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 322, 533–536 (1986)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv:1412.6980, (2014)
Williams, J., Zahn, O., Kutz, J.N.: Data-driven sensor placement with shallow decoder networks. arXiv:2202.05330, (2022)
Domingos, P.: A few useful things to know about machine learning. Commun. ACM 55(10), 78–87 (2012)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Morimoto, M., Fukami, K., Zhang, K., Nair, A.G., Fukagata, K.: Convolutional neural networks for fluid flow analysis: toward effective metamodeling and low dimensionalization. Theor. Comput. Fluid Dyn. 35(5), 633–658 (2021)
Nakamura, T., Fukagata, K.: Robust training approach of neural networks for fluid flow state estimations. Int. J. Heat Fluid Flow 96, 108997 (2022)
Wurster, S.W., Guo, H., Shen, H.W., Peterka, T., Xu, J.: Deep hierarchical super resolution for scientific data. IEEE Trans. Vis. Comput. Graph. (2022). https://doi.org/10.1109/TVCG.2022.3214420. (Early Access)
Romano, Y., Isidoro, J., Milanfar, P.: RAISR: rapid and accurate image super resolution. IEEE Trans. Comput. Imaging 3(1), 110–125 (2016)
Goodfellow, I., PougetAbadie, J., Mirza, M., Xu, B., WardeFarley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial networks. Commun. ACM 63(11), 139–144 (2020)
Maejima, S., Tanino, K., Kawai, S.: Unsupervised machine-learning-based subgrid scale modeling for coarse-grid LES. In Review, (2023)
Rumelhart, D.E., Zipser, D.: Feature discovery by competitive learning. Cogn. Sci. 9(1), 75–112 (1985)
Lagaris, I.E., Likas, A., Fotiadis, D.I.: Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 9(5), 987–1000 (1998)
Raissi, M., Perdikaris, P., Karniadakis, G.E.: Physics-informed neural networks: a deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019)
Karniadakis, G.E., Kevrekidis, I.G., Lu, L., Perdikaris, P., Wang, S., Yang, L.: Physics-informed machine learning. Nat. Rev. Phys. 3(6), 422–440 (2021)
Cai, S., Mao, Z., Wang, Z., Yin, M., Karniadakis, G.E.: Physics-informed neural networks (PINNs) for fluid mechanics: a review. Acta Mech. Sin. 37, 1–12 (2022)
Zhu, X., Goldberg, A.B.: Introduction to semi-supervised learning, vol. 3. Morgan & Claypool Publishers, San Rafael (2009)
Fukami, K., Fukagata, K., Taira, K.: Super-resolution analysis with machine learning for low-resolution flow data. In: 11th International Symposium on Turbulence and Shear Flow Phenomena (TSFP11), Southampton, UK, Number 208, (2019)
Fukami, K., Fukagata, K., Taira, K.: Machine-learning-based spatio-temporal super resolution reconstruction of turbulent flows. J. Fluid Mech. 909, A9 (2021)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, (2016)
Pan, S.J., Yang, Q.: A survey on transfer learning. IEEE Trans. Knowl. Data Eng. 22(10), 1345–1359 (2009)
Guastoni, L., Güemes, A., Ianiro, A., Discetti, S., Schlatter, P., Azizpour, H., Vinuesa, R.: Convolutional-network models to predict wall-bounded turbulence from wall quantities. J. Fluid Mech. 928, A27 (2021)
Liu, Y., Ponce, C., Brunton, S.L., Kutz, J.N.: Multiresolution convolutional autoencoders. J. Comput. Phys. 474, 111801 (2022)
Pant, P., Farimani, A.B.: Deep learning for efficient reconstruction of high-resolution turbulent DNS data. arXiv:2010.11348, (2020)
Kong, C., Chang, J.T., Li, Y.F., Chen, R.Y.: Deep learning methods for super-resolution reconstruction of temperature fields in a supersonic combustor. AIP Adv. 10(11), 115021 (2020)
Matsuo, M., Nakamura, T., Morimoto, M., Fukami, K., Fukagata, K.: Supervised convolutional network for three-dimensional fluid data reconstruction from sectional flow fields with adaptive super-resolution assistance. arXiv:2103.09020, (2021)
Li, Y., Roblek, D., Tagliasacchi, M.: From here to there: Video inbetweening using 3D convolutions. arXiv:1905.10240, (2019)
Shrivastava, A., Arora, R.: Spatiotemporal super-resolution of dynamical systems using physics-informed deep-learning. In: AAAI 2023: Workshop on AI to Accelerate Science and Engineering (AI2ASE), (2022)
Kim, J., Lee, C.: Prediction of turbulent heat transfer using convolutional neural networks. J. Fluid Mech. 882, A18 (2020)
Morimoto, M., Fukami, K., Zhang, K., Fukagata, K.: Generalization techniques of neural networks for fluid flow estimation. Neural Comput. Appl. 34(5), 3647–3669 (2022)
Onishi, R., Sugiyama, D., Matsuda, K.: Super-resolution simulation for real-time prediction of urban micrometeorology. SOLA 15, 178–182 (2019)
Yasuda, Y., Onishi, R., Hirokawa, Y., Kolomenskiy, D., Sugiyama, D.: Super-resolution of near-surface temperature utilizing physical quantities for real-time prediction of urban micrometeorology. Build. Environ. 209, 108597 (2022)
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7132–7141, (2018)
Discetti, S., Liu, Y.: Machine learning for flow field measurements: a perspective. Meas. Sci. Technol. 34, 021001 (2023)
Deng, Z., He, C., Liu, Y., Kim, K.C.: Super-resolution reconstruction of turbulent velocity fields using a generative adversarial network-based artificial intelligence framework. Phys. Fluids 31, 125111 (2019)
Wang, H., Yang, Z., Li, B., Wang, S.: Predicting the near-wall velocity of wall turbulence using a neural network for particle image velocimetry. Phys. Fluids 32(11), 115105 (2020)
Murata, T., Fukami, K., Fukagata, K.: Nonlinear mode decomposition with convolutional neural networks for fluid dynamics. J. Fluid Mech. 882, A13 (2020)
Adrian, R.J.: Twenty years of particle image velocimetry. Exp. Fluids 39(2), 159–169 (2005)
Cai, S., Zhou, S., Xu, C., Gao, Q.: Dense motion estimation of particle images via a convolutional neural network. Exp. Fluids 60, 60–73 (2019)
Dosovitskiy, A., Fischer, P., Springenberg, J.T., Riedmiller, M., Brox, T.: Discriminative unsupervised feature learning with exemplar convolutional neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 38, 1734 (2016)
Majewski, W., Wei, R., Kumar, V.: Developing particle image velocimetry software based on a deep neural network. J. Flow Vis. Image Process. 27(4), 359–376 (2020)
Morimoto, M., Fukami, K., Fukagata, K.: Experimental velocity data estimation for imperfect particle images using machine learning. Phys. Fluids 33(8), 087121 (2021)
Dubois, P., Gomez, T., Planckaert, L., Perret, L.: Machine learning for fluid flow reconstruction from limited measurements. J. Comput. Phys. 448, 110733 (2022)
Fukami, K., Taira, K.: Learning the nonlinear manifold of extreme aerodynamics. NeurIPS2022, (2022)
Fukami, K., Nakamura, T., Fukagata, K.: Convolutional neural network based hierarchical autoencoder for nonlinear mode decomposition of fluid field data. Phys. Fluids 32, 095110 (2020)
Eivazi, H., Le Clainche, S., Hoyas, S., Vinuesa, R.: Towards extraction of orthogonal and parsimonious nonlinear modes from turbulent flows. Expert Syst. Appl. 202, 117038 (2022)
Fukami, K., Hasegawa, K., Nakamura, T., Morimoto, M., Fukagata, K.: Model order reduction with neural networks: application to laminar and turbulent flows. SN Comput. Sci. 2, 467 (2021)
Linot, A.J., Graham, M.D.: Deep learning to discover and predict dynamics on an inertial manifold. Phys. Rev. E 101(6), 062209 (2020)
Fukami, K., Taira, K.: Grasping extreme aerodynamics on a low-dimensional manifold. arXiv:2305.08024, (2023)
Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C., Philip, S.Y.: A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 32(1), 4–24 (2020)
Carter, D.W., De Voogt, F., Soares, R., Ganapathisubramani, B.: Data-driven sparse reconstruction of flow over a stalled aerofoil using experimental data. Data Centric Eng. 2, e5 (2021)
Giannopoulos, A., Aider, J.L.: Data-driven order reduction and velocity field reconstruction using neural networks: the case of a turbulent boundary layer. Phys. Fluids 32(9), 095117 (2020)
Maulik, R., Fukami, K., Ramachandra, N., Fukagata, K., Taira, K.: Probabilistic neural networks for fluid flow surrogate modeling and data recovery. Phys. Rev. Fluids 5, 104401 (2020)
Everson, R., Sirovich, L.: Karhunen-Loève procedure for gappy data. J. Opt. Soc. Am. A 12(8), 1657–1664 (1995)
Bui-Thanh, T., Damodaran, M., Willcox, K.: Aerodynamic data reconstruction and inverse design using proper orthogonal decomposition. AIAA J. 42(8), 1505–1516 (2004)
Adrian, R.J., Moin, P.: Stochastic estimation of organized turbulent structure: homogeneous shear flow. J. Fluid Mech. 190, 531–559 (1988)
Manohar, K.H., Morton, C., Ziadé, P.: Sparse sensor-based cylinder flow estimation using artificial neural networks. Phys. Rev. Fluids 7(2), 024707 (2022)
Hochreiter, S., Schmidhuber, J.: Long short-term memory. Neural Comput. 9, 1735–1780 (1997)
Lumley, J.L.: The structure of inhomogeneous turbulent flows. In: Yaglom, A.M., Tatarski, V.I. (eds.) Atmospheric Turbulence and Radio Wave Propagation. Nauka, (1967)
Holmes, P., Lumley, J.L., Berkooz, G., Rowley, C.W.: Turbulence, Coherent Structures, Dynamical Systems and Symmetry, 2nd edn. Cambridge Univ. Press, Cambridge (2012)
Taira, K., Brunton, S.L., Dawson, S.T.M., Rowley, C.W., Colonius, T., McKeon, B.J., Schmidt, O.T., Gordeyev, S., Theofilis, V., Ukeiley, L.S.: Modal analysis of fluid flows: an overview. AIAA J. 55(12), 4013–4041 (2017)
Hinton, G.E., Salakhutdinov, R.R.: Reducing the dimensionality of data with neural networks. Science 313(5786), 504–507 (2006)
Rezende, D.J., Mohamed, S., Wierstra, D.: Stochastic backpropagation and approximate inference in deep generative models. In: International Conference on Machine learning, pp. 1278–1286. PMLR, (2014)
Smola, A.J., Schölkopf, B.: A tutorial on support vector regression. Stat. Comput. 14(3), 199–222 (2004)
Friedman, J.H.: Greedy function approximation: a gradient boosting machine. Ann. Stat. 29(5), 1189–1232 (2001)
Brunton, S.L., Proctor, J.L., Kutz, J.N.: Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. U.S.A. 113(15), 3932–3937 (2016)
Callaham, J.L., Maeda, K., Brunton, S.L.: Robust flow reconstruction from limited measurements via sparse representation. Phys. Rev. Fluids 4(10), 103907 (2019)
Morimoto, M., Fukami, K., Maulik, R., Vinuesa, R., Fukagata, K.: Assessments of epistemic uncertainty using Gaussian stochastic weight averaging for fluid-flow regression. Phys. D: Nonlinear Phenom. 440, 133454 (2022)
Zhong, Y., Fukami, K., An, B., Taira, K.: Machine-learning-based reconstruction of transient vortex-airfoil wake interaction. AIAA Paper, 2022–3244, (2022)
Zhong, Y., Fukami, K., An, B., Taira, K.: Sparse sensor reconstruction of vortex-impinged airfoil wake with machine learning. Theor. Comput. Fluid Dyn. (2023). https://doi.org/10.1007/s00162-023-00657-y
Lee, S., Yang, J., Forooghi, P., Stroh, A., Bagheri, S.: Predicting drag on rough surfaces by transfer learning of empirical correlations. J. Fluid Mech. 933, A18 (2022)
Raissi, M., Yazdani, A., Karniadakis, G.E.: Hidden fluid mechanics: learning velocity and pressure fields from flow visualizations. Science 367(6481), 1026–1030 (2020)
Yousif, M.Z., Yu, L., Hoyas, S., Vinuesa, R., Lim, H.C.: A deep-learning approach for reconstructing 3D turbulent flows from 2D observation data. arXiv:2208.05754, (2022)
Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Change Loy, C.: ESRGAN: enhanced super-resolution generative adversarial networks. In: Proceedings of the European Conference on Computer Vision (ECCV) Workshops, pp. 63–79 (2018)
Hasegawa, K., Fukami, K., Murata, T., Fukagata, K.: Machine-learning-based reduced-order modeling for unsteady flows around bluff bodies of various shapes. Theor. Comput. Fluid Dyn. 34(4), 367–388 (2020)
Hasegawa, K., Fukami, K., Murata, T., Fukagata, K.: CNN-LSTM based reduced order modeling of two-dimensional unsteady flows around a circular cylinder at different Reynolds numbers. Fluid Dyn. Res. 52(6), 065501 (2020)
Esmaeilzadeh, S., Azizzadenesheli, K., Kashinath, K., Mustafa, M., Tchelepi, H.A., Marcus, P., Prabhat, M., Anandkumar, A.: MeshfreeFlowNet: a physics-constrained deep continuous space-time super-resolution framework. In: SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pp. 1–15. IEEE, (2020)
Wang, X., Zhu, S., Guo, Y., Han, P., Wang, Y., Wei, Z., Jin, X.: TransFlowNet: a physics-constrained transformer framework for spatiotemporal super-resolution of flow simulations. J. Comput. Sci. 65, 101906 (2022)
Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2223–2232, (2017)
Psaros, A.F., Kawaguchi, K., Karniadakis, G.E.: Meta-learning PINN loss functions. J. Comput. Phys. 458, 111121 (2022)
Zhang, B., Ooka, R., Kikumoto, H., Hu, C., Tim, K.T.: Towards real-time prediction of velocity field around a building using generative adversarial networks based on the surface pressure from sparse sensor networks. J. Wind Eng. Ind. Aerodyn. 231, 105243 (2022)
Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning, pp. 448–456, (2015)
Wurster, S.W., Shen, H.W., Guo, H., Peterka, T., Raj, M., Xu, J.: Deep hierarchical super-resolution for scientific data reduction and visualization. arXiv:2107.00462, (2021)
Bewley, T.R., Moin, P., Temam, R.: DNS-based predictive control of turbulence: an optimal benchmark for feedback algorithms. J. Fluid Mech. 447, 179–225 (2001)
Chevalier, M., Hœpffner, J., Bewley, T.R., Henningson, D.S.: State estimation in wall-bounded flow systems. Part 2. Turbulent flows. J. Fluid Mech. 552, 167–187 (2006)
Colburn, C.H., Cessna, J.B., Bewley, T.R.: State estimation in wall-bounded flow systems. Part 3. The ensemble Kalman filter. J. Fluid Mech. 682, 289–303 (2011)
Suzuki, T., Hasegawa, Y.: Estimation of turbulent channel flow at \({Re}_{\tau } = 100\) based on the wall measurement using a simple sequential approach. J. Fluid Mech. 830, 760–796 (2017)
Yousif, M.Z., Yu, L., Lim, H.C.: Super-resolution reconstruction of turbulent flow fields at various Reynolds numbers based on generative adversarial networks. Phys. Fluids 34(1), 015130 (2022)
Xu, W., Luo, W., Wang, Y., You, Y.: Data-driven three-dimensional super-resolution imaging of a turbulent jet flame using a generative adversarial network. Appl. Opt. 59(19), 5729–5736 (2020)
Hassanaly, M., Glaws, A., Stengel, K., King, R.N.: Adversarial sampling of unknown and high-dimensional conditional distributions. J. Comput. Phys. 450, 110853 (2022)
Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., Shi, W.: Photo-realistic single image super-resolution using a generative adversarial network. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4681–4690, (2017)
Stengel, K., Glaws, A., Hettinger, D., King, R.N.: Adversarial super-resolution of climatological wind and solar data. Proc. Natl. Acad. Sci. U.S.A. 117(29), 16805–16815 (2020)
Yang, D., Hong, S., Jang, Y., Zhao, T., Lee, H.: Diversity-sensitive conditional generative adversarial networks. In: 7th International Conference on Learning Representations, ICLR 2019. International Conference on Learning Representations, ICLR, (2019)
Taira, K., Nair, A.G., Brunton, S.L.: Network structure of two-dimensional decaying isotropic turbulence. J. Fluid Mech. 795, R2 (2016)
Du, X., Qu, X., He, Y., Guo, D.: Single image super-resolution based on multi-scale competitive convolutional neural network. Sensors 18(3), 789 (2018)
Nair, V., Hinton, G.E.: Rectified linear units improve restricted Boltzmann machines. In: Proceedings of the 27th International Conference on Machine Learning, (2010)
Ren, P., Rao, C., Liu, Y., Ma, Z., Wang, Q., Wang, J.X., Sun, H.: Physics-informed deep super-resolution for spatiotemporal data. arXiv:2208.01462, (2022)
Kashefi, A., Rempe, D., Guibas, L.J.: A point-cloud deep learning framework for prediction of fluid flow fields on irregular geometries. Phys. Fluids 33(2), 027104 (2021)
Qi, C.R., Su, H., Mo, K., Guibas, L.J.: PointNet: Deep learning on point sets for 3D classification and segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 652–660, (2017)
Liu, Q., Zhu, W., Jia, X., Ma, F., Gao, Y.: Fluid simulation system based on graph neural network. arXiv:2202.12619, (2022)
Gruber, A., Gunzburger, M., Ju, L., Wang, Z.: A comparison of neural network architectures for data-driven reduced-order modeling. Comput. Methods Appl. Mech. Eng. 393, 114764 (2022)
Gao, H., Sun, L., Wang, J.X.: PhyGeoNet: Physics-informed geometry-adaptive convolutional neural networks for solving parameterized steady-state PDEs on irregular domain. J. Comput. Phys. 428, 110079 (2020)
Kajishima, T., Taira, K.: Computational Fluid Dynamics: Incompressible Turbulent Flows. Springer, Berlin (2017)
Kochkov, D., Smith, J.A., Alieva, A., Wang, Q., Brenner, M.P., Hoyer, S.: Machine learning-accelerated computational fluid dynamics. Proc. Natl. Acad. Sci. U.S.A. 118(21), e2101784118 (2021)
Stolz, S., Adams, N.A., Kleiser, L.: An approximate deconvolution model for large-eddy simulation with application to incompressible wall-bounded flows. Phys. Fluids 13(4), 997–1015 (2001)
Du, Y., Wang, M., Zaki, T.A.: State estimation in minimal turbulent channel flow: a comparative study of 4DVar and PINN. Int. J. Heat Fluid Flow 99, 109073 (2023)
Di Leoni, P.C., Mazzino, A., Biferale, L.: Synchronization to big data: nudging the Navier-Stokes equations for data assimilation of turbulent flows. Phys. Rev. X 10(1), 011023 (2020)
Di Leoni, P.C., Mazzino, A., Biferale, L.: Inferring flow parameters and turbulent configuration with physics-informed data assimilation and spectral nudging. Phys. Rev. Fluids 3(10), 104604 (2018)
Yasuda, Y., Onishi, R.: Spatio-temporal super-resolution data assimilation (SRDA) utilizing deep neural networks with domain generalization technique toward four-dimensional SRDA. arXiv:2212.03656, (2022)
Yousif, M.Z., Yu, L., Lim, H.C.: Physics-guided deep learning for generating turbulent inflow conditions. J. Fluid Mech. 936, A21 (2022)
Nakamura, T., Fukami, K., Fukagata, K.: Identifying key differences between linear stochastic estimation and neural networks for fluid flow regressions. Sci. Rep. 12, 3726 (2022)
Fukami, K., An, B., Nohmi, M., Obuchi, M., Taira, K.: Machine-learning-based reconstruction of turbulent vortices from sparse pressure sensors in a pump sump. J. Fluids Eng. 144(12), 121501 (2022)
Li, Y., Perlman, E., Wan, M., Yang, Y., Meneveau, C., Burns, R., Chen, S., Szalay, A., Eyink, G.: A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence. J. Turbul. 9, N31 (2008)
Wu, X., Moin, P.: A direct numerical simulation study on the mean velocity characteristics in turbulent pipe flow. J. Fluid Mech. 608, 81–112 (2008)
Towne, A., Dawson, S., Brès, G.A., Lozano-Durán, A., Saxton-Fox, T., Parthasarathy, A., Jones, A.R., Biler, H., Yeh, C.A., Patel, H.D., Taira, K.: A database for reduced-complexity modeling of fluid flows. AIAA J. (2023). https://doi.org/10.2514/1.J062203
Acknowledgements
K. Fukami acknowledges the support from the UCLA-Amazon Science Hub for Humanity and Artificial Intelligence. K. Fukagata acknowledges the support by JSPS KAKENHI (Grant numbers 18H03758 and 21H05007). K. Taira acknowledges the generous support from the US Air Force Office of Scientific Research (Grant FA9550-21-1-0178) and the US Department of Defense Vannevar Bush Faculty Fellowship (Grant N00014-22-1-2798).
Ethics declarations
Authors’ contributions
Ka. F, Ko. F, and KT designed research. Ka. F performed research and analyzed data. Ka. F and KT wrote the paper. Ko. F and KT supervised.
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Communicated by Rajat Mittal.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fukami, K., Fukagata, K. & Taira, K. Super-resolution analysis via machine learning: a survey for fluid flows. Theor. Comput. Fluid Dyn. 37, 421–444 (2023). https://doi.org/10.1007/s00162-023-00663-0