Toward learning Lattice Boltzmann collision operators

Abstract In this work, we explore the possibility of learning collision operators for the Lattice Boltzmann Method from data using a deep learning approach. We compare a hierarchy of neural network (NN) designs for the collision operator and evaluate the performance of the resulting LBM method in reproducing the time dynamics of several canonical flows. In the current study, as a first attempt to address the learning problem, the data were generated by a single relaxation time BGK operator. We demonstrate that a vanilla NN architecture has very limited accuracy. On the other hand, by embedding physical properties, such as conservation laws and symmetries, it is possible to dramatically increase the accuracy by several orders of magnitude and correctly reproduce the short- and long-time dynamics of standard fluid flows.


Introduction
The Lattice Boltzmann Method (LBM) is a computationally efficient method for the simulation of fluid flows in a wide range of regimes. LBM allows solving a set of macroscopic equations via the time evolution of a (minimal) discrete version of the continuum Boltzmann equation, following the stream and collide paradigm.
While its original formulation mostly targets isothermal weakly compressible fluid flows, over the years several algorithmic developments have allowed extending the method to support the simulation of a wide range of complex flows, such as multi-phase [1,2], turbulence [3], thermo-hydrodynamics [4,5], non-Newtonian flows [6,7], radiative transport [8], semi-classical fluids [9], relativistic flows [10], and many others [11]. Most of these algorithmic enhancements have targeted the modeling of the collision process and, as a result, a large variety of collision models have been proposed to extend the applicability and overcome the shortcomings of the standard LBM. Notable examples extending the single relaxation time Bhatnagar-Gross-Krook (BGK) collision operator [12] are given by the two relaxation times (TRT) [13] and multi-relaxation time (MRT) [14,15] operators, which can be combined with regularization procedures [16-18] and local viscous corrections ensuring the validity of the H-theorem after the velocity discretization [19,20]. More recent developments have taken into consideration the ellipsoidal statistical BGK [21] and the Shakhov model [22], which allow decoupling the thermal relaxation from the viscous one. They have also made it possible to compute equilibrium distributions numerically, in principle allowing an arbitrary number of moments of the Maxwell-Boltzmann distribution to be reproduced [23]. For a comprehensive review comparing collision models for LBM the interested reader is referred to [24].
In recent years, there has been an increased interest in the adoption of machine learning (ML) models, typically artificial neural networks (NN), to approximate various kernels/operators in the simulation of physical systems. Artificial neural networks form a class of nonlinear parametric models satisfying the universal approximation property [25]. This property, coupled with efficient computational tools for automatic differentiation and sensitivity analysis in forward and backward propagation, has in the last decade led to outstanding results in fields such as computer vision [26] and natural language processing [27].
However, until recently, the biggest achievements of ML in the scientific domain have been limited to approaches that are data-driven but agnostic to traditional scientific modeling of the underlying physics. Integrating modern ML with physical modeling is the major challenge of what we today call Physics-Informed Machine Learning (PIML) [28,29]. In fluid dynamics in particular, there has been significant PIML activity in recent years. Examples include embedding physical constraints, such as Galilean invariance and rotational invariance, into closure models [30,31], and PIML models infusing physical constraints into neural networks [32,33]. Other efforts on turbulence modeling are summarized in [34,35]. Beyond closure models, novel ML approaches have been used to learn turbulence dynamics [36], where a Convolutional Long Short Term Memory (ConvLSTM) neural network was developed to learn spatio-temporal turbulence dynamics; to study super-resolution, allowing the reconstruction of turbulence fields from under-resolved data [37]; to use Neural Ordinary Differential Equations (Neural ODEs) for turbulence forecasting [38]; and to measure [39], model and control flows [40].
Up to now, very few works have proposed applications of ML to LBM. Most of these have focused on accelerating the calculation of steady-state flows using convolutional neural networks [41-43], while Bedrunka et al. [44] employed a fully connected feedforward neural network to tune the parameters of a MRT collision operator.
Since LBM entails a mesoscopic representation, it employs substantially more degrees of freedom (i.e., the number of discrete particle distribution functions) than the macroscopic observables of interest. These extra degrees of freedom suggest the possibility of using ML to encode more information in the model in order, for example, to extend its applicability and accuracy, and to enhance its numerical stability.
In this work we take a first step in this direction, and consider the problem of learning a custom collision operator from reference data. The collision operator is represented by a NN that takes pre-collision populations as input and returns post-collision populations. As a proof of concept, we evaluate different neural network architectures to identify design choices that improve the performance of the learned collision operator. To make the performance evaluation more straightforward, we consider a large synthetic dataset containing pairs of pre- and post-collision populations that was itself generated by a collision operator, specifically the BGK collision operator. In theory, in the limit of infinite data and infinite training resources, it should be possible to recover the underlying operator. In practice, however, there will always be an error that (as we show later) depends significantly on the architecture of the NN. We show that constraining the NN to respect physical properties such as conservation laws and symmetries is key for accuracy. We evaluate the accuracy of the learned collision operator both on single-step (static) collisions and on multi-step (dynamic) collisions, interleaved with advection steps, for the simulation of standard benchmarks. The focus of this work is on exposing the main ingredients needed to accurately learn a collision operator from data; for the moment, no attention is paid to computational efficiency.
This article is structured as follows: in Section 2, we provide a brief description of the Lattice Boltzmann Method. In Section 3, we define a PIML approach for learning a collision operator from data, focusing in particular on the embedding of relevant physical properties. In Section 4, we report simulation results for two numerical benchmarks in which we have replaced the collision term of LBM simulations with a neural network. Here, we also compare the accuracy achieved by different neural network architectures. Concluding remarks and future directions are summarized in Section 5.

Lattice Boltzmann Method
In this section, we give a short introduction to the Lattice Boltzmann Method (LBM); the interested reader is referred to, e.g., Refs. [11,45] for a thorough introduction.
LBM simulates the evolution of macroscopic quantities (such as density and velocity) through a mesoscopic approach based on the synthetic dynamics of a set of discrete velocity distribution functions f_i(x, t), i = 0, . . ., q − 1, which we will refer to as lattice populations.
At each grid node x, the lattice populations are defined along the discrete components of the stencil {ξ_i}, i = 0, . . ., q − 1. It is customary to distinguish between different LBM schemes using the DdQq nomenclature, in which d refers to the number of spatial dimensions and q to the number of discrete components.
The time evolution of each lattice population is governed by the lattice Boltzmann equation which, in the absence of external forces, reads

f_i(x + ξ_i ∆t, t + ∆t) = f_i(x, t) + Ω_i(x, t),   (1)

where Ω is the collision operator. Among the various possible choices, in this work we consider the BGK [12] operator, which models collisions as a linear relaxation process of the distribution function towards its equilibrium:

Ω_i(x, t) = −(∆t/τ) ( f_i(x, t) − f_i^eq(x, t) ).   (2)

Here, τ is the relaxation time, ∆t the time step, and f_i^eq(x, t) is the discrete equilibrium distribution, for which we employ a second order Hermite expansion of the Maxwell-Boltzmann distribution:

f_i^eq = w_i ρ ( 1 + (ξ_i · u)/c_s² + (ξ_i · u)²/(2 c_s⁴) − (u · u)/(2 c_s²) ).   (3)

In lattice units, ∆t = 1, while the speed of sound in the lattice for the D2Q9 model is c_s = 1/√3. Finally, ρ and u indicate, respectively, the macroscopic density and velocity fields. These macroscopic observables can be computed in terms of the moments of the velocity distribution functions as

ρ = Σ_i f_i,   ρ u = Σ_i ξ_i f_i.   (4)

Following an asymptotic analysis, such as the Chapman-Enskog expansion [46], it can be shown that Eq. 1 delivers a second order approximation of the Navier-Stokes equations. In particular, the following relation between the relaxation time parameter τ and the kinematic viscosity ν of the fluid holds:

ν = c_s² ( τ − ∆t/2 ).   (5)

We conclude this section by sketching the LBM algorithm. Provided a suitable initialization of the particle distribution functions, each time iteration of the algorithm entails the following steps:
1. Perform the streaming step, moving each post-collision population f_i from x to x + ξ_i ∆t;
2. Compute the macroscopic fields using Eq. 4;
3. Calculate the equilibrium distribution function using Eq. 3;
4. Apply the collision operator.
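As a concrete illustration, a single BGK collision on one lattice node can be sketched as follows. The D2Q9 stencil ordering and the helper names are our own illustrative choices, not the paper's:

```python
import numpy as np

# D2Q9 stencil: discrete velocities and quadrature weights
# (a common ordering; the paper's index convention may differ).
XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1/3  # squared lattice speed of sound, c_s^2 = 1/3

def equilibrium(rho, u):
    """Second-order Hermite expansion of the Maxwell-Boltzmann distribution (Eq. 3)."""
    cu = XI @ u                      # xi_i . u for each direction i
    usq = u @ u
    return W * rho * (1 + cu/CS2 + cu**2/(2*CS2**2) - usq/(2*CS2))

def bgk_collide(f, tau=1.0, dt=1.0):
    """BGK relaxation towards equilibrium (Eq. 2) on a single lattice node."""
    rho = f.sum()
    u = (XI.T @ f) / rho             # momentum / density (Eq. 4)
    return f - (dt/tau) * (f - equilibrium(rho, u))
```

By construction, the equilibrium is a fixed point of `bgk_collide`, and mass and momentum of the input are preserved for any τ.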

Collision invariants and equivariances
The operator Ω carries physical properties of the Boltzmann collision, which can be phrased in terms of invariances and equivariances. Respecting these physical aspects will turn out to be central to the performance of the machine learning models discussed in the next sections. In particular, Ω satisfies the following:

P1 Scale equivariance. Scale factors λ > 0, remodulating all the pre-collision populations, are preserved, i.e.

Ω(λ f^pre) = λ Ω(f^pre).

In other terms, the collision is degree-1 homogeneous.

P2 Rotation and reflection equivariance. Generic two-dimensional collisions are equivariant with respect to the two-dimensional orthogonal group O(2). This translates into the independence of the collision on rotations and mirrorings of the spectator viewpoint. As we restrict to a D2Q9 lattice, this property reduces to preserving the 8th order dihedral symmetry group of the lattice, D_2n ⊂ O(2), n = 4. This group is generated by a 90 degree rotation and a mirroring with respect to a symmetry axis of the cell (e.g. the x axis). Naming these two operations, respectively, r and s, and identifying with I the identity operation, the 8 elements of D_8 are

D_8 = {I, r, r², r³, s, rs, r²s, r³s}.

Here, the n-th power indicates n subsequent applications of the same operator (i.e., r² is a 180 degree rotation). When applied to the populations, these operators effectively yield permutations of the population indices (cf. Fig. 2). Finally, in formulas, rotation and mirroring equivariance of collisions reads

Ω(σ f^pre) = σ Ω(f^pre),   for all σ ∈ D_8.   (10)

P3 Mass and momentum invariance. In the D2Q9 LBM model, mass and momentum are preserved exactly by the collision. This holds thanks to the underlying Gaussian quadrature used in the discretization of the velocity space [47,48]:

Σ_i Ω_i = 0,   Σ_i ξ_i Ω_i = 0.   (11)

Finally, we shall require positivity (P4) of the post-collision lattice populations (f_i^post > 0 for all i), since they represent discrete velocity distribution functions.
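The elements of D_8 act on the populations as index permutations, which can be built directly from the stencil. The sketch below constructs the two generators and verifies that they generate a group of exactly 8 elements; the stencil ordering is our own illustrative assumption:

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])

def perm_from_matrix(M):
    """Index permutation p with xi[p[i]] = M @ xi[i]: applying the
    orthogonal map M to the stencil relabels the 9 discrete directions."""
    out = np.empty(9, dtype=int)
    for i, xi in enumerate(XI):
        out[i] = np.flatnonzero((XI == M @ xi).all(axis=1))[0]
    return out

R90 = np.array([[0, -1], [1, 0]])    # generator r: 90 degree rotation
MIR = np.array([[1, 0], [0, -1]])    # generator s: mirror about the x axis

r = perm_from_matrix(R90)
s = perm_from_matrix(MIR)
identity = np.arange(9)

def compose(p, q):                   # (p o q)[i] = p[q[i]]
    return p[q]

# Generate D8 = {I, r, r^2, r^3, s, rs, r^2 s, r^3 s} by closure.
group = {tuple(identity)}
frontier = [identity]
while frontier:
    g = frontier.pop()
    for gen in (r, s):
        h = compose(gen, g)
        if tuple(h) not in group:
            group.add(tuple(h))
            frontier.append(h)
```

The generated set has 8 distinct permutations, with r⁴ = s² = I, matching the dihedral structure described above.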

Machine learning approach
In this section we describe a machine learning approach, hinged on a neural network, to approximate the collision operator. Such a neural network will act as a replacement of the right hand side of Eq. 1. Our learning problem aims at finding a neural network Ω_NN such that Ω_NN ≈ Ω, i.e., formally,

f^post = Ω_NN(f^pre),

where the input of the network, f_i^pre, is given by the pre-collision (post-streaming) lattice populations, and the network output targets the post-collision populations f_i^post.
In the remainder of the section we will define:
- The loss function whose minimization drives the NN training process; this will also formalize our desired approximation.
- The training and testing datasets.
- The network architecture, addressing the strategies that we considered to embed symmetries and conservation laws.
Loss function and training procedure. We train the neural network to minimize the Mean Squared Relative Error (MSRE) between the ground-truth post-collision populations, f_i^post, and the neural network approximations, f̂_i^post, accumulated across the populations:

MSRE = (1/N) Σ_n Σ_i ( (f̂_i^post − f_i^post) / f_i^post )²,

where the outer sum runs over the N training samples. Here, the use of a relative error metric is crucial in order to achieve good accuracy: since the lattice populations in general take values proportional to the corresponding lattice weights w_i, an absolute error metric would lead to the NN learning the rest population f_0 (typically the one taking the largest value) with higher accuracy at the expense of the others.
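As a minimal illustration, the MSRE loss described above can be written as follows (the function name and array layout are our own choices):

```python
import numpy as np

def msre(f_pred, f_true):
    """Mean squared relative error: squared relative errors summed over
    the 9 populations, averaged over the batch (first axis)."""
    rel = (f_pred - f_true) / f_true
    return np.mean(np.sum(rel**2, axis=-1))
```

Because every term is normalized by the target population, the rest population f_0 (the largest) no longer dominates the gradient signal, which is the motivation given in the text.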
From an implementation perspective, we consider a mini-batch stochastic gradient descent approach driven by the standard ADAM optimizer [49].
Training and testing datasets. In order to control the distribution of the macroscopic parameters appearing in the training set, we rely on synthetic data rather than actual simulation data. The training set consists of N pairs of 9-tuples (f^pre, f^post), where the pre-collision distributions are generated as

f_i^pre = f_i^eq + f_i^neq.

In the above, the equilibrium distribution f_i^eq is calculated using Eq. 3 from a set of randomly sampled macroscopic variables ρ, u. The non-equilibrium part f_i^neq is such that each population is randomly drawn from a Gaussian distribution, after which corrections are introduced to ensure no contributions to the lower order moments, i.e.

Σ_i f_i^neq = 0,   Σ_i ξ_i f_i^neq = 0.
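One way to realize this construction is to project the Gaussian noise onto the subspace carrying no mass or momentum. The sketch below does so with a least-squares projection; the sampling ranges, the noise amplitude, and all names are illustrative assumptions rather than the paper's exact recipe:

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1/3

def equilibrium(rho, u):
    """Second-order equilibrium (Eq. 3)."""
    cu = XI @ u
    return W * rho * (1 + cu/CS2 + cu**2/(2*CS2**2) - (u @ u)/(2*CS2))

# Rows: mass and the two momentum components as linear functionals of f.
M = np.vstack([np.ones(9), XI[:, 0], XI[:, 1]])

def sample_pre_collision(rng, sigma=1e-4):
    """One synthetic pre-collision 9-tuple: equilibrium at random (rho, u)
    plus Gaussian noise projected so it carries no mass or momentum."""
    rho = rng.uniform(0.9, 1.1)
    u = rng.uniform(-0.03, 0.03, size=2)
    g = rng.normal(0.0, sigma, size=9)
    # Remove the components of g contributing to the conserved moments.
    g -= M.T @ np.linalg.solve(M @ M.T, M @ g)
    return equilibrium(rho, u) + g
```

After the projection, the moments of the sampled tuple coincide with the moments of the equilibrium part by construction.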
Fig. 2: Sketch of a Neural Network architecture implementing the group averaging method. The core network (gray box on the left hand side) is evaluated 8 times on rotated/shifted versions of the input. The inverse transformation is applied to the 8 outputs, which are then averaged in order to produce the final prediction.

Table 1: Training hyper-parameters.
Batch size: 32
Number of epochs: 200
Initial learning rate: 10⁻³

Neural network architectures
We consider variations of a fully connected feed-forward neural network, henceforth referred to as NN Naive, which is composed of two hidden layers of 50 neurons each. We use ReLU activation functions and no biases in the linear layers. The Naive NN, as it concatenates bias-less linear layers and ReLU activations, all degree-1 homogeneous functions, is itself degree-1 homogeneous. Therefore it is hardwired to respect the scale equivariance P1. Yet, no other properties, such as conservation of mass and momentum or D_8 equivariance, are imposed, hence the denomination naive.
Before detailing the structure of these networks, we present a more general approach to satisfy P1, which we will use in all of the next three architectures. It hinges on considering pre- and post-collision populations normalized by the corresponding macroscopic density (invariant, P3). In formulas, we effectively consider and train a NN, Ω̃_NN, operating as

f̃^post = Ω̃_NN(f̃^pre),

where the normalized pre-collision populations are defined as

f̃_i^pre = f_i^pre / ρ.

The normalized post-collision populations are defined analogously.
Our final collision approximator, Ω_NN, prepends and appends rescaling operations as

Ω_NN(f^pre) = ρ Ω̃_NN(f^pre / ρ).

On this basis, we can enforce positivity, P4, by considering a softmax activation function at the final layer of the network (i.e., in place of a ReLU activation). Let y_0, . . ., y_8 be the 9 inputs of the final activation; then the softmax outputs read

f̃_i^post = exp(y_i) / Σ_j exp(y_j).

Note that this returns normalized populations by construction (cf. Eq. 18).
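The normalize-then-rescale wrapper, with a softmax head enforcing positivity and unit total mass of the normalized output, can be sketched as below. `core` stands in for the trained network body; names and structure are illustrative:

```python
import numpy as np

def softmax(y):
    z = np.exp(y - y.max())          # shifted for numerical stability
    return z / z.sum()

def collision_nn(f_pre, core):
    """Wrap a core map R^9 -> R^9 operating on density-normalized
    populations: divide by rho on the way in, multiply by rho on the
    way out (P1 and mass conservation), with a softmax at the output
    so the normalized post-collision populations are positive and
    sum to one (P4)."""
    rho = f_pre.sum()
    f_tilde = f_pre / rho
    return rho * softmax(core(f_tilde))
```

Whatever the core does, the output sums to ρ and is strictly positive; with `core = np.log`, the wrapper reduces to the identity on positive inputs, a convenient sanity check.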

D 8 equivariance: NN Sym
We establish a collision NN, Ω̄_NN, in which we enforce rotation and mirroring equivariance (cf. Eq. 10).
We achieve this by applying a D_8 group averaging operation to a generic collision Ω_NN. In formulas, Ω̄_NN operates as follows:

Ω̄_NN(f^pre) = (1/8) Σ_{σ ∈ D_8} σ⁻¹ Ω_NN(σ f^pre).   (21)

A proof that Eq. 21 satisfies P2 (Eq. 10) is provided in Appendix D. Note that this approach is general: given any symmetry group, the average in Eq. 21 generates an operator that is equivariant with respect to that group action. Note also that here we perform a convex combination of populations, hence ensuring positivity of the populations, with combined weight of unity, which ensures preservation of density (assuming the original operator Ω_NN had these properties). In Fig. 2, we report our implementation of Eq. 21. Both at training time and for predictions, the core network Ω_NN is evaluated 8 times on rotated/shifted versions of the input (σ f^pre). The outputs are then averaged after an application of the inverse rotation/shift (σ⁻¹).
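The group average of Eq. 21 can be sketched as follows, with the D_8 elements realized as index permutations of the populations; the stencil ordering and names are our own illustrative assumptions:

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])

def perm_from_matrix(M):
    """Permutation p with xi[p[i]] = M @ xi[i]."""
    return np.array([np.flatnonzero((XI == M @ xi).all(axis=1))[0]
                     for xi in XI])

# The 8 elements of D8: rotations by 0/90/180/270 degrees, each
# optionally composed with a mirror about the x axis.
R = np.array([[0, -1], [1, 0]])
S = np.array([[1, 0], [0, -1]])
mats, Mcur = [], np.eye(2, dtype=int)
for _ in range(4):
    mats += [Mcur, S @ Mcur]
    Mcur = R @ Mcur
D8 = [perm_from_matrix(M) for M in mats]

def group_average(core, f):
    """Eq. 21: evaluate the core collision on all 8 transformed inputs,
    transform each output back, and average the results."""
    out = np.zeros(9)
    for p in D8:
        inv = np.argsort(p)          # inverse permutation
        out += core(f[p])[inv]
    return out / len(D8)
```

By the closure of the group, the averaged operator is equivariant even when the core is an arbitrary, symmetry-breaking map, which is what the assertion below checks.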

Conservation of mass and momentum: NN Cons
A possible approach to ensure that Eq. 11 is satisfied is to algebraically correct the lattice populations output by the NN (see also Ref. [50] for an example where hard constraints on conservation laws are imposed on the full Boltzmann equation). The method is based on the observation that all the conserved quantities are linear combinations of the lattice populations. Let

F = (f_0, f_1, . . ., f_8)ᵀ

be the vector of the lattice populations, and let C be an invertible matrix (representing a change of basis) whose first three rows encode the conserved moments:

c_0 = (1, 1, . . ., 1),   c_1 = (ξ_{0,x}, . . ., ξ_{8,x}),   c_2 = (ξ_{0,y}, . . ., ξ_{8,y}).

The remaining vectors c_3, . . ., c_8 are linearly independent and complement c_0, c_1, c_2 to a basis of R⁹. The matrix C represents an invertible map R⁹ → R⁹ which can be used to express a change of basis:

b = C F.

Thus, the first three entries of b are the density and the momentum components. Conservation is then enforced by overwriting these three entries, computed from the NN output, with the ones computed from the pre-collision populations, and mapping back through C⁻¹.
A second example is provided in Appendix C. Note that this approach allows enforcing the conservation of mass and momentum at training time and yields no additional hyperparameters to be tuned.
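A minimal sketch of this algebraic correction follows. The paper corrects populations 2, 5 and 8; here the completion rows of C are chosen as coordinate vectors for simplicity (an illustrative choice, which instead redistributes the correction over populations 0, 1 and 2):

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])

# Change of basis b = C f whose first three entries are
# (rho, rho*u_x, rho*u_y); the remaining six rows are an arbitrary
# completion to a basis of R^9.
C = np.zeros((9, 9))
C[0] = 1.0                 # mass row
C[1] = XI[:, 0]            # x-momentum row
C[2] = XI[:, 1]            # y-momentum row
C[3:, 3:] = np.eye(6)      # completion (keeps C invertible)
assert np.linalg.matrix_rank(C) == 9

def enforce_conservation(f_nn, f_pre):
    """Overwrite the conserved moments of the NN output with the
    pre-collision ones, leaving the other 6 basis coordinates untouched."""
    b = C @ f_nn
    b[:3] = C[:3] @ f_pre   # restore rho, rho*u_x, rho*u_y
    return np.linalg.solve(C, b)
```

The corrected output matches the pre-collision mass and momentum to machine precision, regardless of the NN error, which is the "zero to machine precision" behavior reported for NN Cons.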
An alternative approach, commonly adopted in the literature [51-53], consists of introducing a soft constraint in the loss function in order to penalize mass and momentum mismatches. In formulas, this reads:

L = MSRE + α_1 (ρ̃ − ρ)² + α_2 |ũ − u|²,

where ρ̃ and ũ are the macroscopic quantities calculated from the lattice populations f̃_i^post output by the network, while α_1 and α_2 weight the relative importance of each constraint. Since we have observed that the imposition of hard constraints via algebraic reconstruction systematically outperforms the soft-constraint based approach, the latter will not be covered in our analysis in the coming sections. Nevertheless, a few numerical results are reported in Appendix B, where we highlight the shortcomings of this approach.
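For completeness, the penalized loss can be sketched as below, with the conserved targets taken from the pre-collision populations; the penalty form and weights are our reading of the text, not the paper's exact implementation:

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])

def soft_loss(f_pred, f_true, f_pre, a1=1.0, a2=1.0):
    """MSRE on one sample plus soft penalties on the mass and momentum
    mismatch between the predicted post-collision populations and the
    (conserved) pre-collision ones."""
    rel = (f_pred - f_true) / f_true
    msre = np.sum(rel**2)
    drho = f_pred.sum() - f_pre.sum()          # mass mismatch
    du = XI.T @ f_pred - XI.T @ f_pre          # momentum mismatch
    return msre + a1 * drho**2 + a2 * (du @ du)
```

Unlike the algebraic correction, the penalty only discourages violations, so the conservation error of the trained network is small but not zero, consistent with the shortcomings reported in Appendix B.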

Numerical results
In this section, we present the results of LBM simulations where the collision term is replaced by one of the four neural networks introduced in the previous section: NN Naive, NN Sym, NN Cons, NN Sym+Cons. For each NN architecture we trained 50 instances, adopting random weight initializations. We stop the training process at 200 epochs. See Table 1 for the full list of training hyper-parameters.
We first provide a static evaluation of the NN prediction error on the post-collision lattice populations. We also report on the physical properties of the learned collision operators. We then turn our analysis to the comparison of time dependent flows, considering two standard benchmarks: a Taylor-Green vortex decay, and a lid-driven cavity flow.

Static accuracy evaluation
We start by comparing the accuracy of the various NN architectures described in the previous section, taking into consideration the training error. In Fig. 3(a) we show the distribution of the absolute relative error on the post-collision populations committed by the NN on the test dataset (generated following the procedure described in Appendix A). The boxplots compare the accuracy of 50 different instances of each NN architecture in the prediction of the populations of index i. By comparing the median values, we observe that the NNs implementing symmetries slightly, although systematically, outperform the Naive NN. On the other hand, hardwiring conservation laws does not lead to an improvement in the prediction of the lattice populations. This is due to the specific choice of algebraically reconstructing the populations of index 2, 5 and 8 to restore the conservation of mass and momentum, and it can indeed be seen from the plot that the largest errors are associated to these three elements. A major improvement is achieved when combining conservation with rotation and mirroring equivariance (NN Sym+Cons). This case improves the accuracy in the prediction of the single lattice populations by between one and two orders of magnitude with respect to all the previous cases.
We now evaluate how well the different architectures comply with the physical properties of the collision operator. In Fig. 3(b) we evaluate the distribution of the error committed in the momentum conservation by the various NNs. We define

ε_1 = u_j^post − u_j^pre,   j ∈ {x, y},   (29)

with u_j^pre the momentum calculated on the pre-collision distribution functions, and u_j^post the momentum calculated from the distribution functions predicted by the NN; in the plot the case j = x is represented by the boxplots with uniform filling, and j = y by the boxplots with patterned filling.
The ε_1 error distribution for the Naive NN differs between the two spatial components, and is also asymmetric with respect to zero. We observe that the NN implementing the symmetries of the lattice (NN Sym) outperforms the Naive NN, in turn restoring the symmetry of the error distribution. By construction, the error for the NNs implementing conservation laws is systematically zero to machine precision.
Finally, in Fig. 3(c) we evaluate the distribution of the error metric

ε_2 = Σ_i | (σ⁻¹ Ω_NN(σ f^pre))_i − (Ω_NN(f^pre))_i |,   σ ∈ D_8,   (30)

which quantifies the violation of the D_8 equivariance. For D_8-equivariant collisions, i.e. those satisfying P2 (Eq. 10), the term within the absolute value is zero to machine precision. We evaluate ε_2 over the entire test dataset. We observe that the network implementing conservation laws (NN Cons) commits a larger error even when compared with the Naive NN. This is due to the fact that the algebraic reconstruction procedure used to implement the conservation laws leads to the error accumulating along some lattice directions. The error metric is systematically zero for all the NNs implementing the group-averaging technique.
In the coming sections we compare the performance of the different NN in the simulation of time-dependent fluid flows.

Benchmark I: Taylor-Green Vortex
We consider the time dynamics of a Taylor-Green vortex, a standard benchmark for the validation of fluid solvers. Starting from the initial condition

u_x(x, y, 0) = −u_0 cos(k x) sin(k y),   u_y(x, y, 0) = u_0 sin(k x) cos(k y),

with u_0 the initial value of |u|, it is simple to show that the flow decays exponentially, proportionally to exp(−2 ν k² t), where ν is the kinematic viscosity of the fluid (Eq. 5). This benchmark allows us to evaluate the time dynamics of a flow, covering different orders of magnitude in the values of the macroscopic parameters, and also to evaluate the preservation of symmetries by observing the structure of the vortices. We consider a 32 × 32 grid, with u_0 = 10⁻², τ = 1. In Fig. 4 we compare the time decay of the average absolute value of the velocity field from simulations making use of the different NNs, comparing against the analytic solution. Once again, for each type of NN we have evaluated the results from 50 different networks trained starting from a random choice of the initial weights. The plot highlights the variability in the results from the different NNs by means of boxplots. From the plot we can see that the Naive NN is able to correctly follow the flow decay for only 20-40 iterations, after which not only does the flow stop decaying, but we also observe an increase in the kinetic energy. By employing a NN satisfying the symmetries of the lattice it is possible to restore the decaying trend of the flow, although we observe a deviation from the correct decay rate. This can be attributed to the network not being able to preserve momentum. On the other hand, NNs enforcing the conservation laws are able to provide more accurate dynamics, with only small variability around the analytic solution, which can be further reduced by combining conservation and preservation of symmetries.
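The benchmark setup and its analytic decay can be sketched as below; the wavenumber normalization k = 2π/L is one common convention and is assumed here:

```python
import numpy as np

def taylor_green(L, u0):
    """Initial velocity field of the 2-D Taylor-Green vortex on an
    L x L periodic grid (one common normalization)."""
    k = 2 * np.pi / L
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
    ux = -u0 * np.cos(k * x) * np.sin(k * y)
    uy = u0 * np.sin(k * x) * np.cos(k * y)
    return ux, uy

def decay_factor(t, L, tau=1.0, cs2=1/3, dt=1.0):
    """Analytic amplitude decay exp(-2 nu k^2 t),
    with nu = cs^2 (tau - dt/2) from Eq. 5."""
    nu = cs2 * (tau - dt/2)
    k = 2 * np.pi / L
    return np.exp(-2 * nu * k**2 * t)
```

Comparing the measured average |u| of a simulation against `u0 * decay_factor(t, L)` is precisely the test of Fig. 4.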
The importance of embedding conservation laws and symmetries together in the NN is even more evident in Fig. 4, where the evolution statistics are shown for the four types of NN designs. Embedding symmetries or conservation properties shows an immediate and dramatic improvement over the naive NN in its ability to capture the decay rate of the average velocity field. Enforcing conservation properties is appreciably more important (for the purpose of this test) than enforcing symmetries. Yet, enforcing both symmetries and conservation properties produces the most accurate results, capturing the decay of the average velocity with minimal variability all the way to machine precision, which is a remarkable result, especially compared to the performance of a naive NN. Moreover, we should stress that a NN with a lower training error will not necessarily guarantee better results when employed in simulations; for example, NN Cons, which in Fig. 3(a) presents the largest training error, is among the best performing ones when looking at Fig. 4.
On a more qualitative basis, in Fig. 5 we provide snapshots of the velocity field at a later stage of the dynamics (after t = 1000 iterations), comparing the ground truth given by a plain LBM simulation against an example of the profile provided by each of the different NN implementations. The figure shows that, besides failing to reproduce the decay of the flow, the Naive NN is also not able to preserve the structure of the vortices. The NN with symmetries, on the other hand, nicely preserves the geometric structure, although the amplitude of the velocity is slightly off with respect to the reference LBM profile. The NN enforcing conservation laws correctly captures the decay rate on average (cf. Fig. 4); however, Fig. 5 clearly shows that the structure of the vortices is no longer symmetric. This can be attributed to the fact that the algebraic reconstruction is performed on 3 lattice populations, leading to a less balanced distribution of the error (cf. Fig. 3(c)). Only by combining conservation and symmetries in the NN is it possible to reproduce the velocity profile correctly.

Benchmark II: Lid driven cavity flow
As a second example, we consider the lid-driven cavity flow, a wall-bounded benchmark in a very simple geometry which nonetheless leads to non-trivial dynamics. Since there is no analytic solution for this flow, we will compare the results only against reference LBM simulations.
The setup consists of a top lid moving at a constant velocity (u_0), with no-slip boundary conditions at the bottom and side walls. We consider an L × L grid, with the relaxation time set to τ = 1, and report the results of simulations at two different Reynolds numbers, respectively Re = 10 and Re = 100. In the simulations the NN does not handle the evolution of the boundary nodes. Instead, we employ standard LBM approaches for implementing the boundary conditions. In particular, the bounce back rule is used to implement the no-slip condition. Here, the lattice populations that during the streaming step interact with a solid wall get reflected back to their original location with their velocity reversed:

f_ī(x, t + ∆t) = f_i(x, t),

where f_ī is the population of index ī such that ξ_ī = −ξ_i.
For the top wall we use a Dirichlet boundary condition,

f_ī(x, t + ∆t) = f_i(x, t) − 2 w_i ρ_w (ξ_i · u_w)/c_s²,

where ρ_w and u_w = (u_0, 0) are, respectively, the density and the velocity at the top wall.
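The two boundary rules can be sketched as index manipulations on the populations of a wall node. The moving-wall correction below is one standard formulation of bounce back with a momentum source; the stencil ordering and names are our own assumptions:

```python
import numpy as np

XI = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
               [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9]*4 + [1/36]*4)
CS2 = 1/3
# Opposite-direction index: xi[OPP[i]] = -xi[i].
OPP = np.array([np.flatnonzero((XI == -xi).all(axis=1))[0] for xi in XI])

def bounce_back(f):
    """No-slip wall: populations hitting the wall return with
    reversed velocity."""
    return f[OPP]

def moving_lid(f, rho_w, u_w):
    """Bounce back with a momentum correction for a wall moving at u_w:
    f_ibar = f_i - 2 w_i rho_w (xi_i . u_w) / cs^2."""
    corr = 2.0 * W * rho_w * (XI @ u_w) / CS2
    return (f - corr)[OPP]
```

With a zero wall velocity the moving-lid rule reduces to plain bounce back, and applying bounce back twice is the identity, as the assertions below verify.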
In Fig. 6 we show the steady state velocity profiles along the vertical (a) and horizontal (b) centerlines of the lid-driven cavity for Re = 10, comparing the results from a plain LBM simulation against results obtained employing NNs with different architectures.All simulations are performed on a square grid of side L = 64.
Once again we show data collected by simulating 50 different instances of each NN architecture, with the boxplots reporting the variability in the obtained results. We observe that in this case the results of the Naive NN are much closer to the reference data than in the previous benchmark. This can be attributed to the boundary conditions constraining the flow. Both NN Sym and NN Cons provide an improvement over the Naive NN; however, it is interesting to point out that the results provided by the latter present a much higher variability than that observed in the simulation of the Taylor-Green vortex. Indeed, the plot clearly shows that only NN Sym+Cons is able to correctly reproduce the results of the LBM simulation. We select this NN architecture to perform simulations at a higher Reynolds number. In Fig. 7 we show the results obtained at Re = 100, varying the grid size, and comparing with both a LBM simulation as well as with reference data from Ghia et al. [54]. The results from the simulation using the finest grid resolution (L = 256) are found to be in excellent agreement with the reference data. On the other hand, we see that for coarser grid sizes the NN struggles to correctly reproduce the velocity in the proximity of the moving plate (see Fig. 7(b)). We discuss the origin of this mismatch in the coming subsection.
In Fig. 8 we show a more qualitative comparison for the case Re = 100, presenting snapshots of the velocity field at steady state, comparing the results of a plain LBM simulation with the results produced by the different NN architectures. It is interesting to observe that each NN provides a correct prediction of the location of the main vortex, although not all of them reproduce the secondary vortex located at the bottom right corner. As expected from the analysis above, NN Sym+Cons provides results in excellent agreement with the LBM simulation.

Extrapolation
In Fig. 7(b) we have observed significant deviations in the numerical results produced by the NN Sym+Cons architecture in the proximity of the moving plate, in particular for coarse grids. Since in the simulations we keep the kinematic viscosity and the Reynolds number fixed, it follows from Eq. 33 that by increasing the grid resolution we also decrease the numerical value of the lid velocity u_0. For L = 64 the numerical value used at the top lid, u_0 ≈ 0.26, falls well outside the range of values shown to the network at training time. It is therefore interesting to investigate the extrapolation capabilities of the different NNs. In Fig. 9 we show the average MSRE over 50 instances of each NN architecture, calculated in the prediction of the equilibrium distribution f_i^eq(ρ = 1, u_x, u_y = 0) at varying values of u_x. The continuous lines show the performance of the NNs trained on a dataset where the macroscopic velocity takes values in the interval (−0.03, 0.03); likewise, the dotted lines show the results for NNs trained on values of the macroscopic velocity in the interval (−1/3, 1/3). Corresponding gray continuous (dotted) vertical lines are reported to identify the boundaries of the two training datasets. Here we can see that, when working in the range of values shown to the NN during training, NN Sym+Cons outperforms all the other network architectures. On the other hand, this NN commits the largest extrapolation error, i.e. it commits a larger error in predicting the equilibrium distribution outside of the range of values of the training set. While the reason for this behavior is currently unclear to us and will be the object of further analysis in future work, these results explain the discrepancies observed in Fig. 7, and in turn point to the need for extra care in the preparation of the training dataset.

Fig. 9: The plot shows the average MSRE, computed on 50 instances of each NN architecture, in the prediction of the equilibrium distribution f_i^eq(ρ = 1, u_x, u_y = 0) at varying values of u_x. The continuous lines refer to NNs trained on a dataset where the macroscopic velocity takes values in the interval (−0.03, 0.03), while the interval (−1/3, 1/3) has been used to train the NNs corresponding to the dotted lines. The gray continuous (dotted) vertical lines identify the boundaries of the two training datasets.

Conclusion
In this work we have presented a machine learning approach for learning a collision operator for the Lattice Boltzmann Method from data. As a proof of concept, we have developed a neural network capable of approximating the BGK collision operator to good accuracy. We have discussed in detail a few methods which allow enriching the structure of the neural network so as to enforce relevant physical properties of the collision operator. We have shown that only by embedding conservation laws and lattice symmetries in the neural network is it possible to correctly reproduce the time dynamics of a fluid flow. This work can be regarded as a first step towards the application of neural networks for extending the applicability of LBM to kinetic regimes not supported by the basic method. To give an example, in future extensions of the present work, we plan to evaluate the possibility of using our approach for learning collision operators from molecular dynamics and Monte Carlo simulations in regimes beyond the hydrodynamic limit.

where the parameters κ 1 , κ 2 , κ 3 are lattice dependent, and for the D2Q9 lattice are given by:

Appendix D: The group-averaged operator Ω̄ satisfies D8 equivariance

We prove here that the group-averaged operator defined in Eq. 21 respects property P2 (Eq. 10). The proof proposed here is in fact general and holds for any symmetry group; for this reason we denote the symmetry group with the generic symbol S.

By definition, it holds
Indeed, Sη = S or, equivalently, composing with η merely permutes the terms of the sum (by uniqueness of the inverse within a group). Thus, we can write

which concludes the proof.
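As an illustration of the construction (a sketch, not the authors' code), the group average can be realized concretely for D2Q9 by letting each element of D8 act as a permutation of the population indices; the index ordering and the specific rotation/mirror permutations below are assumptions:

```python
import numpy as np
from itertools import product

# Permutations of the D2Q9 population indices induced by a 90-degree
# rotation and by a mirror about the x-axis (index ordering: rest,
# axis-aligned, diagonal -- an assumed convention).
ROT = np.array([0, 2, 3, 4, 1, 6, 7, 8, 5])
MIR = np.array([0, 1, 4, 3, 2, 8, 7, 6, 5])

def d8_elements():
    """The 8 permutations of the dihedral group D8 acting on D2Q9 indices."""
    elems = []
    for mirror, k in product((False, True), range(4)):
        p = np.arange(9)
        for _ in range(k):
            p = p[ROT]            # compose k quarter-turn rotations
        if mirror:
            p = p[MIR]
        elems.append(p)
    return elems

def group_average(omega, f):
    """Group-averaged operator: (1/|S|) * sum over s of s^{-1} Omega(s f).
    Equivariant by construction, even when omega itself is not."""
    out = np.zeros_like(f)
    for p in d8_elements():
        inv = np.argsort(p)       # inverse permutation
        out += omega(f[p])[inv]
    return out / 8
```

Note that the averaged operator is equivariant for an arbitrary map `omega`: applying a group element to the input just relabels the terms of the sum, which is exactly the permutation argument used in the proof above.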

Fig. 1:
Fig. 1: Example of a 3×3 LBM grid (with a single grid point shown on the right-hand side) making use of the D2Q9 model, where the lattice populations can move along 9 possible directions.

See Appendix A for further details.

Fig. 3 :
Fig. 3: Static evaluation of the accuracy achieved by the four different NN architectures considered in this work. a) Comparison of the absolute relative error on the post-collision populations of index i (cf. Fig. 1). b) Error committed in the conservation of momentum, with the uniformly filled boxplots representing the error associated with u x , and the boxplots with patterned filling the error associated with u y (see Eq. 29 for the definition of the error metric). Note that the errors for NN Cons and NN Sym+Cons are zero to machine precision. c) Error committed in violating rotation and mirroring equivariance (see Eq. 30 for the definition of the error metric). Note that for NN Sym and NN Sym+Cons the error is zero down to machine precision.

Fig. 4 :
Fig. 4: Time evolution of the average absolute value of the velocity field in a Taylor-Green vortex, comparing the analytic solution (gray dotted line) against simulations making use of NNs with different architectures. The boxplots show the variability among 50 different instances of each NN architecture. The NN with built-in symmetries and conservation properties (red) is the most accurate, followed by the NN with only conservation properties (blue), followed by the NN with only symmetries (orange). The naive NN (green) is the least accurate.

Fig. 5 :
Fig. 5: Velocity profile from simulations of a Taylor-Green vortex decay, after 1000 time steps. The color map indicates the absolute value of the velocity vector, whereas white lines show the velocity streamlines. We compare the ground truth from an LBM simulation against the results provided by the different NN implementations.

Fig. 6 :
Fig. 6: Steady-state profiles for (a) u x along the vertical centerline, and (b) u y along the horizontal centerline of the domain of a lid-driven cavity flow at Re = 10. Simulations are performed on a square grid of side L = 64. We compare the results of an LBM simulation (black line) against the results obtained employing the four NN architectures considered in this work. The boxplots report the variability in the results from 50 instances of each NN architecture.

Fig. 7 :
Fig. 7: Steady-state profiles for (a) u x along the vertical centerline, and (b) u y along the horizontal centerline of the domain of a lid-driven cavity flow at Re = 100. Dotted lines represent results obtained using the NN Sym+Cons architecture for an increasing number of nodes along the grid side L. We compare the results against an LBM simulation (black line, L = 256), and reference data from Ref. [54] (orange dots, L = 129).

Fig. 8 :
Fig. 8: Steady-state profile of the velocity field for a lid-driven cavity flow at Re = 100, comparing the results of an LBM simulation against the results provided by different NN implementations. The color map shows the absolute value of the fluid velocity normalized by the lid velocity, whereas white lines show the velocity streamlines. Simulations are performed on a square grid of side L = 128.

Fig. 9 :
Fig. 9: Comparison of the accuracy of the different NN architectures within and outside the training dataset. The plot shows the average MSRE, computed on 50 instances of each NN architecture, in the prediction of the equilibrium distribution f eq i (ρ = 1, u x , u y = 0) at varying values of u x . The continuous lines refer to NNs trained on a dataset where the macroscopic velocity takes values in the interval (−0.03, 0.03), while the interval (−1/3, 1/3) has been used to train the NNs corresponding to the dotted lines. The gray continuous (dotted) vertical lines identify the boundaries of the two training datasets.

Fig. 10 :
Fig. 10: Time evolution of the average absolute value of the velocity field in a Taylor-Green vortex, comparing the analytic solution (gray dotted line) against simulations making use of NNs employing a soft constraint on momentum conservation, for a few selected values of the parameter α which weights the relative importance of the soft constraint (see Eq. 44). For each value of α we show results obtained with and without the group-averaging method used for embedding symmetries in the NN architecture. The boxplots show the variability among 20 different instances of each NN architecture.
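For reference, the α-weighted soft constraint described in the caption can be sketched as a penalized training loss; since Eq. 44 is not reproduced in this excerpt, the exact functional form below is an assumption, not the paper's expression:

```python
import numpy as np

# D2Q9 lattice velocities (the index ordering is an assumed convention)
C = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])

def soft_constraint_loss(f_pred, f_target, alpha):
    """MSE data term plus an alpha-weighted penalty on the violation of
    momentum conservation (a sketch of the idea behind Eq. 44; not the
    paper's exact expression)."""
    mse = np.mean((f_pred - f_target) ** 2)
    d_mom = C.T @ (f_pred - f_target)   # momentum mismatch along (x, y)
    return mse + alpha * np.sum(d_mom ** 2)
```

With α = 0 the constraint disappears and only the data term remains; increasing α trades data fidelity for smaller conservation errors, which is the behavior compared across the panels of Fig. 10.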

Table 1 :
List of hyper-parameters used in the training of the NNs presented in this work.