Review on Smoothed Particle Hydrodynamics: Methodology development and recent achievements

Since its inception, the fully Lagrangian meshless smoothed particle hydrodynamics (SPH) method has experienced tremendous methodological enhancement and has impacted a range of multi-physics applications in science and engineering. This paper presents a concise review of the latest developments and achievements of the SPH method, including (1) a brief review of the theory and fundamentals together with kernel corrections, (2) the Riemann-based SPH method with dissipation limiting and high-order data reconstruction using MUSCL, WENO and MOOD schemes, (3) particle neighbor search with particle sorting and efficient dual-criteria time stepping schemes, (4) the total Lagrangian formulation with stabilized, dynamic relaxation and hourglass control schemes, (5) fluid-structure interaction schemes with interface treatments and multi-resolution discretizations, and (6) novel applications of particle relaxation to mesh and particle generation. Last but not least, benchmark tests for validating computational accuracy, convergence, robustness and efficiency are also supplied accordingly.


Introduction
As a fully Lagrangian meshless method, in which a set of particles is introduced to discretize the continuum media and their interactions, determined by a Gaussian-like kernel function, approximate the mechanics, smoothed particle hydrodynamics (SPH) [1,2] has been demonstrated to be a promising alternative to mesh-based methods and has received significant interest in the past decades [3,4,5,6]. Thanks to its Lagrangian feature, the SPH method has shown peculiar advantages in handling free-surface flows [7,8] involving violent impact and breaking events [9], structural analysis with crack propagation and large deformation [4,10,11] and multi-physics problems [12] including fluid-structure interaction (FSI) [13,14,15], multi-phase flows [16,17,18], additive manufacturing [19,20] and cardiac modeling [21,22,23]; comprehensive reviews can be found in recent Refs. [9,14,24,25,26,27,28]. From the methodological point of view, tremendous efforts have been devoted to improving convergence, consistency and stability, the treatment of boundary conditions, adaptive discretization and the extension to multi-physics applications, as highlighted by Refs. [12,29,30]. This paper aims at providing a concise description of the state-of-the-art methodology development and achievements of the SPH method. In particular, attention is devoted to the Riemann-based SPH method with dissipation limiting and high-order data reconstruction, efficient particle-interaction configuration updating, stabilized schemes, dynamic relaxation and hourglass control for the total Lagrangian formulation, novel applications to mesh and particle generation, and FSI schemes. This paper is organized as follows. In Section 2, we briefly summarize the theory and fundamentals of the SPH method. In Section 3, the traditional and Riemann-based SPH discretizations of the fluid dynamics are presented, with special attention devoted to the Riemann solver with dissipation limiting, high-order reconstruction and the efficient update of the particle-interaction configuration. In Section 4, we present the total Lagrangian SPH formulation with stabilized schemes, dynamic relaxation and hourglass control. Section 5 reports the interface treatments and multi-resolution discretization for FSI. Section 6 focuses on novel applications of particle relaxation for high-quality particle and unstructured mesh generation. Finally, concluding remarks are given in Section 7.

Theory and fundamentals of SPH
In the SPH method, a set of Lagrangian particles, whose interactions are determined by a Gaussian-like kernel function, is introduced to discretize the continuum media. Then, the particle-average based discretization of a variable field f(r) can be defined as

f_i = ∫ f(r) W(r_i − r, h) dr,  (1)

where i is the particle index, f_i the discretized particle-average variable and r_i the particle position. For the compact-support kernel function W(r_i − r, h), h is the smoothing length determining a radially symmetric support domain with respect to r_i. As the mass of each particle m_i is known and invariant (indicating mass conservation), one has the particle volume V_i = m_i / ρ_i with ρ_i denoting the particle-average density. By introducing particle summation, Eq. (1) can be approximated as

f_i ≈ Σ_j V_j f_j W(r_i − r_j, h) = Σ_j V_j f_j W_ij,  (2)

where the summation is over all the neighboring particles j located in the support domain of particle i. Substituting the variable f(r) with the density, one gets the approximation of the particle-average density

ρ_i = Σ_j V_j ρ_j W_ij = Σ_j m_j W_ij,  (3)

which is an alternative to integrating the continuity equation for updating the density.
Similarly, the approximation of the spatial derivative of f(r) can be derived as

∇f_i ≈ Σ_j V_j f_j ∇_i W_ij.  (4)

Furthermore, Eq. (4) can be rewritten into a strong form

∇f_i ≈ −Σ_j V_j f_ij ∇_i W_ij,  (5)

where the inter-particle difference value f_ij = f_i − f_j, and ∇_i W_ij = e_ij (∂W_ij/∂r_ij) with r_ij and e_ij the distance and unit vector of the particle pair (i, j), respectively. The strong-form approximation of the derivative is used to determine the local structure of a field. On the other hand, with a slightly different modification, Eq. (4) can be rewritten into a weak form as

∇f_i ≈ 2 Σ_j V_j f̄_ij ∇_i W_ij,  (6)

where the inter-particle average value f̄_ij = (f_i + f_j)/2. The weak-form approximation of the derivative is used to compute the surface integration with respect to a variable for solving its conservation law. Thanks to its anti-symmetric property, i.e., ∇_i W_ij = −∇_j W_ji, the momentum conservation of the particle system is implied.
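As a minimal illustration of the particle summation of Eq. (2) and the strong-form gradient of Eq. (5), the following one-dimensional sketch uses a cubic-spline kernel on a uniform lattice (the kernel choice, spacing and smoothing length are illustrative and not taken from any specific SPH code):

```python
import numpy as np

def cubic_kernel_1d(r, h):
    # 1D cubic-spline kernel with support radius 2h and normalization 2/(3h)
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return 2.0 / (3.0 * h) * w

def cubic_kernel_1d_deriv(r, h):
    # dW/dr with respect to r = x_i - x_j, consistent with the kernel above
    q = np.abs(r) / h
    g = np.where(q < 1.0, -3.0*q + 2.25*q**2,
        np.where(q < 2.0, -0.75*(2.0 - q)**2, 0.0))
    return 2.0 / (3.0 * h) * np.sign(r) * g / h

dx = 0.05
x = np.arange(0.0, 1.0 + 1e-12, dx)   # uniform particle lattice
h = 1.5 * dx                          # smoothing length
V = np.full_like(x, dx)               # particle volumes V_j
f = 2.0 * x + 1.0                     # linear test field

rij = x[:, None] - x[None, :]         # r_i - r_j for every particle pair
W  = cubic_kernel_1d(rij, h)
dW = cubic_kernel_1d_deriv(rij, h)

# particle-average approximation, Eq. (2): f_i ~ sum_j V_j f_j W_ij
f_approx = (V * f * W).sum(axis=1)

# strong-form gradient, Eq. (5): grad f_i ~ -sum_j V_j (f_i - f_j) dW_ij
grad_f = -((V * (f[:, None] - f[None, :])) * dW).sum(axis=1)
```

In the interior of the lattice both approximations recover the linear field and its slope to within a fraction of a percent; near the truncated domain boundary the accuracy degrades, which is what motivates the kernel corrections reviewed next.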

Kernel correction
When a particle is close to the boundary or the particle distribution is irregular, the zeroth-order and first-order consistency of the particle approximations of Eqs. (2) and (4), respectively, are not satisfied. To remedy this issue, a number of correction techniques have been proposed in the literature [4,6,31,32,33]. Here, we briefly review the two most widely applied techniques, i.e., the kernel correction and the kernel gradient correction, which are also known as the Shepard filter and the renormalization formulation [4,34], respectively.

Kernel correction
Following Refs. [4,31,33,35,36], an improved partition of unity, i.e., zeroth-order consistency, can be achieved by interpolating with a corrected kernel W̃ as

f(r) = Σ_j V_j f_j W̃(r − r_j) = Σ_j V_j f_j α(r) W_j(r),  (7)

where the parameter α(r) is evaluated by enforcing that any constant distribution f̄ is exactly interpolated, that is,

Σ_j V_j f̄ α(r) W_j(r) = f̄.  (8)

Therefore, the following condition must be satisfied by the corrected kernel

Σ_j V_j α(r) W_j(r) = 1,  (9)

and this gives

α(r) = 1 / Σ_j V_j W_j(r).  (10)

The scalar parameter α(r) is also known as the Shepard filter [4,31] and provides a much improved partition of unity, in particular for particles near the domain boundary [35,36].
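A one-dimensional sketch of the Shepard correction, using a cubic-spline kernel (an illustrative choice): near a truncated boundary the plain interpolation of Eq. (2) under-estimates a constant field, while the corrected interpolation of Eq. (7) restores it exactly.

```python
import numpy as np

def cubic_kernel_1d(r, h):
    # 1D cubic-spline kernel, support radius 2h, normalization 2/(3h)
    q = np.abs(r) / h
    w = np.where(q < 1.0, 1.0 - 1.5*q**2 + 0.75*q**3,
        np.where(q < 2.0, 0.25*(2.0 - q)**3, 0.0))
    return 2.0 / (3.0 * h) * w

dx = 0.05
x = np.arange(0.0, 1.0 + 1e-12, dx)  # lattice truncated at x = 0 and x = 1
h = 1.5 * dx
V = np.full_like(x, dx)
f = np.ones_like(x)                  # constant field: exact interpolation expected

W = cubic_kernel_1d(x[:, None] - x[None, :], h)
raw = (V * f * W).sum(axis=1)        # plain interpolation, Eq. (2)
alpha = 1.0 / (V * W).sum(axis=1)    # Shepard factor, Eq. (10)
shepard = alpha * raw                # corrected interpolation, Eq. (7)
```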

Kernel gradient correction
To assess the consistency order of the particle approximation of Eq. (4), we can Taylor-expand f_j around r and substitute it into Eq. (4). An accurate approximation of ∇f(r) then requires that

Σ_j V_j ∇W(r − r_j, h) ≈ 0,  (13)

and

Σ_j V_j (r_j − r) ⊗ ∇W(r − r_j, h) ≈ I,  (14)

where I is the unit matrix. Eq. (13) is a gradient expression of the partition-of-unity requirement. To satisfy Eq. (14), a correction matrix B is introduced to modify the kernel gradient as

∇̃W_ij = B_i ∇_i W_ij,  (15)

and inserting it into Eq. (14) gives

B_i = ( Σ_j V_j (r_j − r_i) ⊗ ∇_i W_ij )^(−1).  (16)

The use of the kernel gradient correction can improve the gradient approximation, in particular for irregularly distributed particles [4,34,37]. However, extra computational effort is induced by the interpolation and matrix inversion for each particle at every time step. On the other hand, the kernel gradient correction is widely applied in the total Lagrangian formulation, as it only needs to be calculated once at the initial reference configuration [38,39,40], which will be presented in detail in Section 4.
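The correction matrix reduces to a scalar in one dimension, which makes its effect easy to verify: with the corrected kernel gradient, the strong-form gradient of a linear field becomes exact even where the kernel support is truncated. A sketch (setup values are illustrative):

```python
import numpy as np

def cubic_kernel_1d_deriv(r, h):
    # derivative of the 1D cubic-spline kernel (support 2h) w.r.t. r = x_i - x_j
    q = np.abs(r) / h
    g = np.where(q < 1.0, -3.0*q + 2.25*q**2,
        np.where(q < 2.0, -0.75*(2.0 - q)**2, 0.0))
    return 2.0 / (3.0 * h) * np.sign(r) * g / h

dx = 0.05
x = np.arange(0.0, 1.0 + 1e-12, dx)
h = 1.5 * dx
V = np.full_like(x, dx)
f = 2.0 * x + 1.0                       # linear field with exact slope 2

dW = cubic_kernel_1d_deriv(x[:, None] - x[None, :], h)

# 1D counterpart of Eq. (16): B_i = ( sum_j V_j (x_j - x_i) dW_ij )^-1
B = 1.0 / ((V * (x[None, :] - x[:, None])) * dW).sum(axis=1)

raw  = ((V * (f[None, :] - f[:, None])) * dW).sum(axis=1)  # uncorrected gradient
corr = B * raw                                             # corrected gradient
```

The uncorrected estimate degrades near the boundary, while the corrected one reproduces the slope to machine precision, which is the first-order consistency property enforced by Eq. (14).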

Governing equations
For inviscid flow, the conservation of mass and momentum in the Lagrangian frame can be written as

dρ/dt = −ρ ∇·v,
dv/dt = −(1/ρ) ∇p + g + f_s,  (18)

where v is the velocity, ρ the density, p the pressure, g the gravity, d/dt = ∂/∂t + v·∇ stands for the material derivative and f_s represents the force exerted on the fluid due to the presence of a solid, which can be a solid wall or a flexible structure.
To close the system of Eq. (18), the pressure can be calculated with an artificial isothermal equation of state (EoS) in the form

p = (ρ_0 c² / γ) [ (ρ/ρ_0)^γ − 1 ],  (19)

where γ = 1 or 7 is an empirically determined constant, ρ_0 the reference density and c the speed of sound [7,41]. Following the weakly-compressible assumption [42], an artificial sound speed of c = 10 U_max, with U_max denoting the maximum anticipated flow speed, is employed so that the density fluctuation is approximately 1%, implying a Mach number M ≈ 0.1.
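The EoS and sound-speed choice can be sketched as follows (the reference density and flow speed are illustrative):

```python
def eos_pressure(rho, rho0, c, gamma=7):
    # artificial isothermal-type EoS, Eq. (19)
    return (rho0 * c**2 / gamma) * ((rho / rho0)**gamma - 1.0)

rho0, U_max = 1000.0, 2.0
c = 10.0 * U_max                  # artificial sound speed, Mach ~ 0.1
# a 1% density fluctuation then yields a pressure of the order of the
# dynamic pressure rho0 * U_max^2, consistent with weak compressibility
p = eos_pressure(1.01 * rho0, rho0, c)
```

Note that for γ = 1 the EoS reduces to the linear relation p = c²(ρ − ρ_0).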

Traditional SPH method
With Eqs. (5) and (6), the discretization of Eq. (18) in the traditional SPH formulation reads [6,37,43,44]

dρ_i/dt = ρ_i Σ_j V_j v_ij · ∇_i W_ij,
dv_i/dt = −(2/ρ_i) Σ_j V_j p̄_ij ∇_i W_ij + g,  (20)

where v_ij = v_i − v_j is the relative velocity and p̄_ij = (p_i + p_j)/2. To dampen pressure oscillations and prevent instability in the particle motion, where single particles move in a rather chaotic way, Monaghan and Gingold [45] introduced a von Neumann-Richtmyer-type artificial viscosity term

Π_ij = −α c̄ μ_ij / ρ̄,  μ_ij = h v_ij·r_ij / (|r_ij|² + 0.01h²)  if v_ij·r_ij < 0, else Π_ij = 0,  (21)

where c̄ = (c_i + c_j)/2, ρ̄ = (ρ_i + ρ_j)/2 and α ≤ 1.0 is a tunable parameter. While a moderate artificial viscosity is able to stabilize the computation, it may lead to excessive dissipation which affects the physical flow characteristics [37,46]. Another weakness is that the tunable parameter α requires careful numerical calibration and its values are usually case dependent [8].
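A sketch of a compression-only artificial viscosity term of this type (a common Monaghan-Gingold variant; the exact constants in Eq. (21) may differ between implementations):

```python
import numpy as np

def artificial_viscosity(v_ij, r_ij, h, c_bar, rho_bar, alpha=0.02, eps=0.01):
    # Monaghan-Gingold-type term: active only for approaching particles
    # (v_ij . r_ij < 0), so expansion introduces no dissipation
    vr = float(np.dot(v_ij, r_ij))
    if vr >= 0.0:
        return 0.0
    mu = h * vr / (float(np.dot(r_ij, r_ij)) + eps * h**2)
    return -alpha * c_bar * mu / rho_bar   # positive, acts like an extra pressure
```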

Density reinitialization
Implementing density reinitialization, i.e., integrating the density by the continuity equation and periodically reinitializing it with a proper formulation, is an efficient way to address high-frequency density oscillations. The straightforward formulation reads [4,16,47]

ρ_i = Σ_j m_j W_ij / Σ_j V_j W_ij,

by using the Shepard filter [4] of Eq. (10), resulting in first-order accuracy. Colagrossi and Landrini [16] suggested to consider a moving-least-squares (MLS) kernel interpolation

ρ_i = Σ_j m_j W^MLS_ij,

where W^MLS_ij is the MLS kernel [16]. This formulation achieves second-order accuracy and shows good results, but is computationally rather expensive [48].
More recently, Zhang et al. [49] and Rezavand et al. [50] proposed new density reinitialization formulations for free-surface and internal flows, respectively, in which ρ* denotes the density before reinitialization and the superscript 0 represents the initial reference value. Note that the density reinitialization is applied every n time steps, for example n = 20 in Refs. [16,47], while Zhang et al. [49] apply it every advection time step, which consists of several acoustic time steps, for particle relaxation.

Diffusive term in the continuity equation
Another approach to reducing density oscillations is to introduce a diffusive term into the continuity equation of Eq. (20), resulting in a smooth pressure field and stable time integration. Inspired by the Riemann-based SPH method [37,51], Ferrari et al. [8] modified the original SPH discretization of the continuity equation by introducing a Rusanov diffusive term D_i. As highlighted by Ref. [8], the Rusanov diffusive term D_i is independent of any tunable parameter, and no artificial viscosity of Eq. (21) is required in the momentum equation. However, this scheme is not compatible with the hydrostatic solution, exhibiting unphysical free-surface motion and expansion in long-term simulations due to the inconsistency induced by the singularity of the density approximation at the surface [48,52]. Molteni and Colagrossi [53] pursued a similar idea of introducing a diffusive term, but it still suffers from incompatibility with the hydrostatic solution [54]. To decrease such artifacts, Antuono et al. [48] proposed an improved diffusive term, where δ is a non-dimensional parameter and F_ij = ∇ρ_i + ∇ρ_j is a renormalized correction term that prevents the singularity at the surface [55]. This corrected δ-term is compatible with the hydrostatic solution, but induces extra computational effort due to the correction term F_ij. Note that the parameter δ is not freely tunable and is usually set to 0.1 [55], and a small amount of artificial viscosity defined by Eq. (21) with α = 0.02 is still applied for numerical stability.

Riemann-based SPH method
As a variant of the SPH method, the Riemann-based SPH method solves a one-dimensional Riemann problem along each particle pair to determine its interaction. Compared with the traditional SPH method presented in Section 3.2, the Riemann-based SPH method introduces implicit numerical dissipation rather than using explicit artificial viscosity, and achieves the dissipation in a more accurate manner [37,56,57,58]. Monaghan [56] pointed out that the artificial viscosity [45] is analogous to the dissipative terms of the Riemann solver, which scale with the wave speed and the velocity jump between interacting particles, and showed that using the exact, or a well-approximated, Riemann solution can obtain more accurate results in capturing shock waves. The pioneering work on the Riemann-based SPH method can be traced back to Vila [37], where a Riemann-based ALE-SPH scheme was proposed by discretizing the Euler equations in conservative form and calculating the fluxes between particles with a Riemann solver. It is worth noting that a particle in the Riemann-based ALE-SPH scheme represents a volume of the discretized fluid medium and may move with the fluid velocity (Lagrangian description), remain still (Eulerian description) or move in any arbitrary way. Instead of following the ALE description, Parshikov et al. [59] and Parshikov and Medin [60] proposed a Riemann-based SPH formulation in a purely Lagrangian framework by using a first-order Riemann solution to describe the contact interaction between particles. To improve the accuracy of capturing strong shocks, Inutsuka [61] reformulated the Riemann-based SPH formulation with a second-order Riemann solution. Then, Cha and Whitworth [62] derived different versions of Riemann-based SPH schemes, performed a von Neumann stability analysis and concluded that Riemann-based SPH is stable for all wavelengths, while traditional SPH is unstable for certain wavelengths. Subsequently, enormous progress has been made toward accomplishing high-order data reconstruction [63,64,65,66], dissipation limiting [67,68] and solid boundary treatment [69,67], which will be reviewed in the following parts.
To derive the Riemann-based SPH discretization of Eq. (18), we rewrite its traditional SPH discretization as

dρ_i/dt = 2ρ_i Σ_j V_j (v_i − v̄_ij) · ∇_i W_ij,
dv_i/dt = −(2/ρ_i) Σ_j V_j p̄_ij ∇_i W_ij + g,

by introducing the inter-particle average velocity v̄_ij = (v_i + v_j)/2 and pressure p̄_ij = (p_i + p_j)/2. Then, the inter-particle average variables are replaced by the Riemann solution, i.e., U* and P*, resulting in

dρ_i/dt = 2ρ_i Σ_j V_j (v_i − v*) · ∇_i W_ij,
dv_i/dt = −(2/ρ_i) Σ_j V_j P* ∇_i W_ij + g,

where v* = U* e_ij + (v̄_ij − Ū e_ij) with Ū = v̄_ij · e_ij. In this case, the inter-particle average variables are replaced by solutions of the Riemann problem, implying that numerical dissipation, i.e., density regularization and numerical viscosity, is implicitly present.
In Riemann-based SPH, the solution of the Riemann problem is reduced to a one-dimensional problem constructed along the interaction line of each particle pair. The first step is then to construct the left (L) and right (R) states between each pair of interacting particles. Following the Godunov-type method, which applies a piecewise-constant assumption, i.e., first-order reconstruction, the L and R states are defined as

(ρ_L, U_L, P_L) = (ρ_i, v_i · e_ij, p_i),
(ρ_R, U_R, P_R) = (ρ_j, v_j · e_ij, p_j),

with e_ij the unit vector along the interaction line. In this case, the initial left and right Riemann states are on particles i and j, respectively, and the discontinuity is at the middle point r̄_ij = (r_i + r_j)/2, as shown in Figure 1.
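Constructing the first-order L and R states for a particle pair can be sketched as below (the sign convention for e_ij varies between papers; here it points from i to j, and all inputs are illustrative):

```python
import numpy as np

def riemann_states(rho_i, v_i, p_i, rho_j, v_j, p_j, r_i, r_j):
    # piecewise-constant (first-order) projection onto the interaction line
    e_ij = (r_j - r_i) / np.linalg.norm(r_j - r_i)
    left  = (rho_i, float(np.dot(v_i, e_ij)), p_i)   # state on particle i
    right = (rho_j, float(np.dot(v_j, e_ij)), p_j)   # state on particle j
    return left, right, e_ij
```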

Riemann solver with dissipation limiter
Since its inception, the Riemann-based SPH method has been applied to solve strong-shock problems [70,71,72], solid mechanics problems [59,60,73], interface instabilities [74,75] and magnetohydrodynamics (MHD) [76] problems. However, it is generally too dissipative to reliably reproduce free-surface flows involving violent events such as impact and breaking [34,77,78]. To cope with the excessive dissipation introduced by directly applying a Riemann solver, Zhang et al. [67] proposed a simple low-dissipation limiter for the classic linearized Riemann solver [79,80], ensuring no or decreased numerical dissipation for expansion or compression waves, respectively. This method is compatible with the hydrostatic solution, is able to resolve violent wave breaking and impact events accurately, produces very small damping of mechanical energy and smooth pressure fields, and predicts reasonable pressure peaks. This method was then extended by Rezavand et al. [18] to model multi-phase flow with high density ratios and by Zhang et al. [18] to FSI problems. Inspired by Ref. [67], Meng et al. [68] proposed a dissipation limiter for Roe's approximate Riemann solver [81,82] to develop a multi-phase SPH model for complex interfacial flows. Furthermore, Meng et al. [68] derived the equivalent relation between the intrinsic dissipation of the Riemann solver and the Reynolds number for accurate modeling of viscous flow.

Linearized Riemann solver
The linearized Riemann solver with low-dissipation limiter [67] reads

U* = Ū + (P_L − P_R)/(2 ρ̄ c̄),
P* = P̄ + (1/2) β ρ̄ (U_L − U_R),

where the limiter is defined as

β = min( η max(U_L − U_R, 0), c̄ ),

with Ū = (U_L + U_R)/2, P̄ = (P_L + P_R)/2, ρ̄ = (ρ_L + ρ_R)/2 and c̄ the inter-particle average sound speed. Here, the dissipation limiter β ensures that there is no dissipation when the fluid is under the action of an expansion wave, i.e., U_L < U_R, while the parameter η modulates the dissipation when the fluid is under the action of a compression wave, i.e., U_L ≥ U_R. In Ref. [67], the constant parameter η = 3 is suggested according to numerical experiments. Meng et al. [68] presented the relation between η and the parameter α in the artificial viscosity of Eq. (21). Having the equivalent relation between the artificial viscosity and the physical kinematic viscosity [59,84], Meng et al. [68] further derived the relation between η and the Reynolds number, where d is the dimension, Re the Reynolds number, and U_c and L_c the characteristic velocity and length, respectively. Figure 3 presents the validation of the linearized Riemann solver with dissipation limiter by studying the Taylor-Green vortex flow [67]. Without the dissipation limiter, the Riemann-based SPH is too dissipative to predict a reasonable kinetic-energy decay. With the limiter, a low-dissipation feature and better agreement with the analytical solution are achieved in comparison with the traditional SPH method with the artificial viscosity (α = 0.02) scheme. Also, the Riemann-based SPH achieves second-order convergence of the total kinetic energy with increasing particle resolution. The performance of the Riemann-based SPH with dissipation limiter for modeling free-surface flows exhibiting violent events such as impact and breaking is addressed in Figure 4, where dam-break and sloshing flows are investigated by comparison with experimental data. Both tests demonstrate that the Riemann-based SPH can accurately predict the violent motion of the free surface while capturing the impact pressure reasonably.
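A sketch of such a limited linearized solver, assuming (as described above) that the limiter β scales only the pressure dissipation term and vanishes for expansion waves; the precise form in Ref. [67] should be consulted before use:

```python
def linearized_riemann(rhoL, UL, pL, rhoR, UR, pR, c_bar, eta=3.0):
    # linearized (acoustic) Riemann solver with a low-dissipation limiter
    rho_bar = 0.5 * (rhoL + rhoR)
    U_star = 0.5 * (UL + UR) + 0.5 * (pL - pR) / (rho_bar * c_bar)
    # beta = 0 for expansion (UL < UR): no numerical dissipation is added
    beta = min(eta * max(UL - UR, 0.0), c_bar)
    P_star = 0.5 * (pL + pR) + 0.5 * beta * rho_bar * (UL - UR)
    return U_star, P_star
```

For an expansion wave the intermediate pressure reduces to the plain average, while for compression the dissipation grows with the velocity jump but is capped at the full acoustic value.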

HLLC Riemann solver
The HLLC Riemann solver proposed by Toro [83] has been applied in the Riemann-based SPH method for capturing strong shock waves [70,71] and energetic flows [58], although it exhibits excessive numerical dissipation under the piecewise-constant assumption [58]. Following Ref. [80], the wave-speed estimates for the left (L) and right (R) regions, s_L and s_R, are

s_L = U_L − c_L,  s_R = U_R + c_R,

and the intermediate wave speed s* is then calculated as

s* = [ P_R − P_L + ρ_L U_L (s_L − U_L) − ρ_R U_R (s_R − U_R) ] / [ ρ_L (s_L − U_L) − ρ_R (s_R − U_R) ].

Subsequently, the intermediate pressure can be obtained accordingly by

P* = P_L + ρ_L (s_L − U_L)(s* − U_L) = P_R + ρ_R (s_R − U_R)(s* − U_R).

Finally, the HLLC solution to the Riemann problem is expressed in terms of these intermediate states.
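The wave-speed estimates and intermediate state can be sketched as follows, using the simple direct estimates s_L = U_L − c_L and s_R = U_R + c_R (one of several options in Ref. [80]):

```python
def hllc_star_state(rhoL, UL, pL, cL, rhoR, UR, pR, cR):
    # direct wave-speed estimates; pressure-based estimates are an alternative
    sL = UL - cL
    sR = UR + cR
    # intermediate (contact) wave speed
    s_star = (pR - pL + rhoL * UL * (sL - UL) - rhoR * UR * (sR - UR)) \
             / (rhoL * (sL - UL) - rhoR * (sR - UR))
    # intermediate pressure (identical when evaluated from the R side)
    p_star = pL + rhoL * (sL - UL) * (s_star - UL)
    return sL, sR, s_star, p_star
```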

High-order reconstruction
Another key feature of the Riemann-based SPH is the possibility of implementing high-order data reconstruction [85,86,87,88,89,90], which is widely applied in Eulerian Godunov methods [89,90,91] to decrease the dissipation and improve the accuracy [37,34]. Vila [34] first introduced the MUSCL scheme for second-order data reconstruction into the Riemann-based ALE-SPH method. A similar approach was developed by Inutsuka [61] for reformulating a second-order Riemann-based SPH method. Since then, more attempts have been aimed at implementing different limiting functions for the MUSCL scheme, e.g., the van Leer limiter [76,92], the SuperBee limiter [78,58,93] and Barth-Jespersen-type limiters [94], to reduce the numerical dissipation and increase the spatial order of the Riemann-based SPH method. More recently, increasing attention has been drawn to implementing the WENO scheme [89,91] and its variants [95,96,97]. Zhang et al. [98] considered a fifth-order WENO reconstruction for computing one-dimensional problems; however, its multi-dimensional extension is not straightforward. The first WENO reconstruction for computing multi-dimensional problems was proposed by Avesani et al. [63], in which directionally-biased multi-dimensional candidate stencils with high-order moving-least-squares (MLS) reconstructions are combined with the WENO weighting strategy. Although this method achieves higher accuracy than those using linear reconstructions, it exhibits much lower computational efficiency due to a large number of multi-dimensional candidate-stencil evaluations. Nogueira et al. [99] proposed an SPH-MOOD-MLS method which uses an MLS-based approximation and an a posteriori multi-dimensional optimal order detection (MOOD) approach for numerical stability. This method shows considerable improvement for modeling compressible flows with shock and blast waves. Different from Ref. [63], Zhang et al. [64] proposed an efficient one-dimensional 4-point incremental-stencil WENO (IS-WENO) reconstruction [65] along the interaction line of each particle pair, where the calculation of the variables at the missing points is based on the SPH derivative approximation as in the MUSCL scheme, to increase accuracy by decreasing the numerical dissipation rather than by increasing the formal approximation order. This method preserves the capability of producing smooth and accurate pressure fields of the original method while also achieving very small numerical dissipation. Similar to Ref. [64], Meng et al. [100] developed a 5-point stencil WENO reconstruction [89,91] along the particle interaction line, where the missing variables are evaluated by first searching their nearest fluid particles and then adopting a first-order Taylor expansion. The tests showed that this method is robust and able to accurately capture shock waves. Benefiting from its low-dissipation property, it also performs well in resolving small-scale flow structures. Similar to Refs. [64,65], Meng et al. [100] implemented the TENO scheme [96] to capture shocks and small-scale structures in compressible flows, obtaining superior accuracy for some incompressible vortex flows and free-surface flows.

MUSCL reconstruction
The MUSCL scheme was developed by van Leer [85,86] to replace the piecewise-constant approximation of Godunov's scheme by reconstructing the left and right states with piecewise-linear approximations to calculate fluxes in Eulerian methods [80]. In the MUSCL scheme, the spatial derivatives of the field variables are used for data reconstruction, while their direct usage results in an unstable scheme due to spurious oscillations near high gradients [80]. To remedy the spurious oscillations, a slope limiter, or flux limiter, is applied to limit the approximated gradient near shocks or discontinuities. As in Eulerian methods, the reconstructed, limited left and right states are used as input to the Riemann solver in the Riemann-based SPH method [58,61], or to obtain the fluxes in the Riemann-based ALE-SPH [37,94]. With the piecewise-linear approximation, the left and right states of the Riemann problem are reconstructed from

Φ_L = Φ_i + (1/2) φ(λ_i) ∇Φ_i · (r_j − r_i),
Φ_R = Φ_j − (1/2) φ(λ_j) ∇Φ_j · (r_j − r_i),

where φ is the limiting function and λ the ratio of successive gradients. Here, ∇Φ_i and ∇Φ_j are the corresponding gradients calculated from the SPH approximation

∇Φ_i = −Σ_j V_j (Φ_i − Φ_j) ∇_i W_ij,  (42)

or with the kernel gradient correction defined in Eq. (16). Here, we briefly summarize several widely used slope limiters.
In the Minmod limiter [101], the limiting function is defined as

φ(λ) = max(0, min(1, λ)).

For the SuperBee limiter [101], the limiting function is given by

φ(λ) = max(0, min(2λ, 1), min(λ, 2)).

As for the van Leer limiter [102], the limiting function is defined as

φ(λ) = (λ + |λ|)/(1 + |λ|).

Based on the Barth-Jespersen-type limiter [87], Hopkins [94] proposed a limiting function in which β is a constant, Φ^max_ij,ngb and Φ^min_ij,ngb are the maximum and minimum values of Φ_j among all neighbor particles j of particle i, and Φ^max_ij,mid and Φ^min_ij,mid are the maximum and minimum values (over all pairs ij of the j neighbors of i) reconstructed on the 'i side' of the interface between particles i and j.
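The three one-argument limiting functions are standard and can be written directly (the Barth-Jespersen-type limiter is omitted here, as its neighbor-dependent bounds do not reduce to a one-argument function):

```python
def minmod(lam):
    # Minmod: most dissipative of the three, clips the slope to the smaller gradient
    return max(0.0, min(1.0, lam))

def superbee(lam):
    # SuperBee: least dissipative TVD limiter, sharpens discontinuities
    return max(0.0, min(2.0 * lam, 1.0), min(lam, 2.0))

def van_leer(lam):
    # van Leer: smooth limiter between the two
    return (lam + abs(lam)) / (1.0 + abs(lam))
```

All three return zero for λ ≤ 0 (opposing gradients, i.e., a local extremum) and unity at λ = 1, where the field is locally linear.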

WENO reconstruction
Different from the classical point-wise one-dimensional WENO reconstruction from structured mesh data [89,91,103], the reconstruction from scattered data, i.e., cell-average data on unstructured meshes or meshless particle data, is numerically very challenging, as it requires solving interpolation problems [104,105], in particular when the reconstruction order is high or when the scattered data are very unevenly distributed [106,107]. The most common procedure applied in unstructured-mesh methods is to construct a set of reconstruction stencils for each element by dividing its neighbor elements into different groups [108]. Concerning WENO data reconstruction in the Riemann-based SPH method, two types of stencil reconstruction, i.e., multi-dimensional [63] and one-dimensional [64] stencils, have been developed. Following Refs. [106,108], Avesani et al. [63] proposed a new class of MLS-ALE-SPH methods by first producing for each particle a set of high-order MLS reconstructions based on multi-dimensional stencils and then applying a nonlinear WENO technique to combine the reconstructions with each other. For each particle i, the reconstruction stencils are defined such that S_i,0 is the central stencil containing the neighboring particle union P_0 with distance less than h, and S_i,κ with κ ∈ [1, s] are the one-sided stencils consisting of the neighboring particle union P_κ with distance less than 2h located in the circular sector determined by the angle θ formed by the vector r_ij and the x-axis, as shown in Figure 5. Note that more one-sided stencils can be constructed by dividing the cut-off region of particle i into more circular sectors. Having defined the stencils, moving-least-squares interpolation is applied for each particle by assuming a reconstruction polynomial on each stencil, where N is the size of the polynomial basis (which depends on the polynomial degree and the space dimension), B^χ_i(r − r_i) are the associated basis functions, and C^χ_i are the (unknown) polynomial coefficients; more details can be found in Ref. [63]. With the constructed polynomials, the classical WENO scheme can be applied to obtain the final polynomial from the normalized nonlinear weights, with the constant parameter ε = 10^−6, and λ_0 = 1.0 and λ_κ = 10^−5 for the central and one-sided stencils, respectively. For the calculation of the smoothness indicator, Avesani et al. [63] proposed a dedicated formulation. The MLS-ALE-SPH method was further improved by Nogueira et al. [99] using the MOOD paradigm to improve accuracy and robustness, and by Avesani et al. [66] adopting the ADER approach (Arbitrary Derivative in space and time) to guarantee high-order space-time reconstruction. The MLS-ALE-SPH method and its improvements are able to capture discontinuities and to maintain accuracy and low numerical dissipation in smooth regions, while they are generally computationally expensive due to the particle search for all the stencils of each particle and the corresponding MLS interpolations.
To improve the computational efficiency of the WENO reconstruction, Zhang et al. [64] developed a one-dimensional stencil reconstruction along the interaction line of each particle pair, as shown in Figure 5. They first introduced an IS-WENO reconstruction, by which the full 4-point stencil shown in Figure 5 is constructed following the concept of Refs. [96,97]. To construct the 4-point stencil for each interacting particle pair, such as particles i and j, the values at the stencil points are calculated from the primitive values Φ_i and Φ_j, i.e., ρ, P and v·e_ij, at particles i and j, respectively. In this 4-point stencil, two visual particles, one on each side of the pair, are constructed along the interaction line r_ij, with their gradients calculated from the SPH approximation of Eq. (42). Similar to Ref. [64], Wang et al. [65] introduced a 5-point stencil by constructing more visual particles along the interaction line r_ij, whose values are calculated from Φ_c, the variable of particle c, which is the closest particle to the corresponding visual particle. Compared with the 4-point stencil construction, the construction proposed by Wang et al. [65] provides the possibility of implementing the classical fifth-order WENO scheme, while requiring extra computational effort for the nearest-particle search of each visual particle. This searching procedure increases the complexity of the neighbor search from O(N) to 5O(N) [65].
Compared with the multi-dimensional WENO reconstruction [63], the one-dimensional reconstruction [27,65] can no longer maintain the higher-order data reconstruction of Ref. [63]. However, this is reasonable, as the main objective of applying the WENO reconstruction is to increase accuracy by decreasing the numerical dissipation rather than by increasing the formal approximation order of the SPH method [63], which depends on many factors and is quite difficult to achieve in practice. It has been shown that a general SPH method applying a Gaussian-like kernel achieves only second-order convergence even when the integration error is sufficiently small [3,109]. Following the WENO reconstruction, the mid-point value, i.e., q_{1/2} as shown in Figure 6, is predicted by the nonlinear weighted average

q_{1/2} = Σ_k w_k q^(k)_{1/2},

where q^(k)_{1/2} and w_k, k = 1, 2, 3, are the reconstructed values from the candidate stencils and their nonlinear weights.
For the classic 5-point stencil WENO scheme, the reconstructed values, the renormalized nonlinear weights and the smoothness indicators follow the standard formulation [89,91]. As for the 4-point stencil IS-WENO scheme, the reconstructed values are defined as in Refs. [27,97], with the linear weights determined accordingly and τ_4 a global reference smoothness indicator [96]. Both the WENO and IS-WENO reconstructions can be further improved by the TENO scheme [96], in which the nonlinear weights are reformulated with optimal weights d_k chosen with respect to their dispersion and dissipation properties [95,96,97] and a sharp cut-off function δ determining the contribution of each sub-stencil. Following Fu et al. [96], the cut-off function is defined with a threshold C_T = 10^5, and the parameter χ_r is a normalized smoothness measure defined for each sub-stencil, where γ is a scale separation to distinguish discontinuities from smooth regions. The convergence rates of the different data reconstructions, i.e., the piecewise-constant reconstruction termed "Baseline", and the MUSCL and IS-WENO schemes, are presented in Figure 7, which gives the density error with increasing particle resolution for one-dimensional acoustic wave propagation [64]. Both the MUSCL and IS-WENO reconstructions achieve second-order convergence, which is the formal accuracy of a general SPH approximation with Gaussian-like smoothing kernels when the particle integration error is negligible [109]. As expected, the Baseline achieves only first-order convergence, and MUSCL exhibits considerably larger errors due to numerical dissipation.
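For reference, the classic 5-point WENO-JS point-value reconstruction at the mid-point can be sketched as below; this is the standard structured-grid form [89,91], whereas the particle-pair stencils described above feed it with SPH-reconstructed point values:

```python
def weno5_midpoint(q, eps=1e-6):
    # classic WENO-JS reconstruction of q at x_{i+1/2};
    # q = [q_{i-2}, q_{i-1}, q_i, q_{i+1}, q_{i+2}]
    qm2, qm1, q0, qp1, qp2 = q
    # candidate reconstructions from the three 3-point sub-stencils
    c0 = (2.0*qm2 - 7.0*qm1 + 11.0*q0) / 6.0
    c1 = (-qm1 + 5.0*q0 + 2.0*qp1) / 6.0
    c2 = (2.0*q0 + 5.0*qp1 - qp2) / 6.0
    # Jiang-Shu smoothness indicators
    b0 = 13.0/12.0*(qm2 - 2.0*qm1 + q0)**2 + 0.25*(qm2 - 4.0*qm1 + 3.0*q0)**2
    b1 = 13.0/12.0*(qm1 - 2.0*q0 + qp1)**2 + 0.25*(qm1 - qp1)**2
    b2 = 13.0/12.0*(q0 - 2.0*qp1 + qp2)**2 + 0.25*(3.0*q0 - 4.0*qp1 + qp2)**2
    # nonlinear weights built from the optimal linear weights (1/10, 6/10, 3/10)
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*c0 + a1*c1 + a2*c2) / (a0 + a1 + a2)
```

In smooth regions the nonlinear weights approach the linear ones and fifth-order accuracy is recovered; near a discontinuity the weight of the non-smooth sub-stencil collapses, suppressing spurious oscillations.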

MOOD scheme
The aforementioned MUSCL and WENO schemes provide an a priori limiting procedure, which is performed with data at time t^n to eliminate spurious numerical oscillations in the vicinity of discontinuities at time t^{n+1} [63]. Recently, Clain et al. [110] proposed an a posteriori limiting paradigm, multi-dimensional optimal order detection (MOOD), within the finite-volume Eulerian framework on unstructured meshes. The MOOD paradigm consists of detecting problematic situations after each time update of the solution and of reducing the local polynomial degree before recomputing the solution. Nogueira et al. [99] implemented the MOOD paradigm in the MLS-WENO-SPH method [63,66] to determine, a posteriori, the optimal order of the polynomial reconstruction of the MLS interpolation for each particle, providing the best compromise between accuracy and stability. Then, Antona et al. [111] extended this method to the simulation of weakly-compressible viscous flow.
Different from Refs.[99,111], where the a posteriori limiting procedure is performed at the MLS interpolation stage, we exploit herein the MOOD paradigm to determine the optimal data reconstruction of the left and right states in the Riemann-based SPH method.The key idea is to introduce a Data Reconstruction Degree (DRD) decrementing process to replace the counterpart based on the Particle Polynomial Degree (PPD) applied in the original MOOD paradigm [99,110].More precisely, the present MOOD paradigm consists of two ingredients, a DRD and a detector.The DRD indicates the optimal data reconstruction, i.e., the Baseline, MUSCL or WENO scheme, for the Riemann problem to obtain the candidate Riemann solution Φ*, i.e., U* and P*.The detector controls the admissibility of the resulting Riemann solution, and the particle DRD is decremented from the WENO scheme to the MUSCL scheme, and further to the Godunov scheme, when the detector is activated.The Riemann solution with the first-order, robust Godunov scheme, i.e., the reconstruction with the piecewise-constant assumption, is assumed to be always valid, as in the original MOOD paradigm [99,110].Figure 8 sketches in the top panel the Riemann-based SPH method with Godunov, MUSCL or WENO reconstruction, and displays in the bottom panel the present a posteriori MOOD procedure.Concerning the detector, the physical admissibility detection or the discrete maximum principle proposed in Ref. [99] can be applied.

Particle-interaction configuration
In particle-based methods, the pairwise interaction between neighboring particles is determined through a Gaussian-like kernel function with radially symmetric compact support.Therefore, implementing the particle-interaction configuration, i.e., searching for neighbor particles and computing the corresponding kernel weights and gradients, is a critical aspect of a high-performance particle-based solver.Concerning the search for neighbor particles, two different approaches, i.e., the cell-linked list (CLL) [112] and the Verlet list (VL) [113], are widely used in the particle-method community [114,115,116].Depending on the neighbor-search technique, the particle-interaction configuration is updated accordingly.For the CLL approach, the neighbor-search procedure must be performed at each numerical iteration, indicating that the particle-interaction configuration is also updated accordingly.As for the VL approach, a VL containing all potential neighboring particles is created and stored for each particle.Therefore, a VL may be used multiple times without executing a neighbor search, from which the particle-interaction configuration can be obtained [114,116].Notwithstanding the wide implementation of the CLL and VL approaches in particle-based methods, they may become insufficiently efficient when adaptive particle resolution with variable smoothing lengths is applied [117,118], where tree-based neighbor-search techniques can be applied to address this issue [119,120,121].With the CLL and VL approaches in hand, several schemes, namely data sorting with space-filling curves [114,122,123,124], dual-criteria time stepping [49] and multi-cell linked lists [125], have been developed to further improve the computational efficiency.

Cell-linked and Verlet lists
In the linked-list approaches, i.e., CLL and VL, the whole computational domain is partitioned into equally sized cells and each cell maintains a list consisting of references to all the particles located within it, as shown in Figure 9.
For the CLL approach, the cell size is equal to the support, or cut-off, radius of the kernel function, i.e., 2h, and the search for neighbor particles is restricted to the nearest neighboring cells, 9 cells in two dimensions as shown in Figure 9.After the neighbor-search operation, the kernel function values and interaction forces are calculated with respect to the particles belonging to these neighboring cells, only if they are found within the cut-off radius.As the CLL approach stores neither the neighboring-particle identities nor the corresponding kernel function values, the particle-interaction configuration must be updated multiple times during a single time step of the time integration, e.g. the particle-interaction configuration is updated twice when the kick-drift-kick time integration scheme is applied [67,126,127].
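The CLL search described above can be sketched in a few lines; this is a minimal two-dimensional illustration, not the implementation of any particular SPH code:

```python
import math
from collections import defaultdict

def build_cll(positions, cutoff):
    """Map each cell (edge length = cut-off radius) to the particles inside it."""
    cells = defaultdict(list)
    for i, (x, y) in enumerate(positions):
        cells[(int(x // cutoff), int(y // cutoff))].append(i)
    return cells

def cll_neighbors(i, positions, cells, cutoff):
    """Neighbors of particle i within the cut-off radius, checking only
    the 9 adjacent cells (two dimensions)."""
    cx, cy = (int(c // cutoff) for c in positions[i])
    found = set()
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get((cx + dx, cy + dy), ()):
                if j != i and math.dist(positions[i], positions[j]) <= cutoff:
                    found.add(j)
    return found
```

Since no neighbor identities are stored, `build_cll` and the subsequent search must be repeated whenever the particles move, which is the cost the VL approach tries to amortize.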
Different from the CLL approach, the VL approach increases the cell size to 2h + ∆h and, for each particle, creates and stores a Verlet list containing references to all potential neighboring particles, found by checking all particles within the adjacent cells [114], as shown in Figure 9. Without conducting a neighbor-particle search, a VL can be used multiple times, twice for a single time step with the kick-drift-kick scheme, with non-vanishing kernel function values [114].Dominguez et al. [114] conducted a comprehensive study of the CLL and VL approaches, and concluded that the VL approach has to use cells with a considerably larger size than the cut-off radius to increase the reuse of the Verlet lists.They demonstrated that, with a 50% increase of the cell size, only a slight performance gain of 8% is achieved when the Verlet lists are reused 13 times in 7 time steps.Winkler et al. [128] also found that, in general, this approach is not able to substitute for the CLL due to its poor performance.More recently, Fraga Filho et al. [129] evaluated the performance of the CLL and VL approaches, and concluded that the VL approach is an optimization proposal in which the neighbor list is not updated at each numerical iteration, through an appropriate choice of the cut-off radius ensuring no accuracy loss in the location of neighbor particles.

Particle sorting with space filling curve
Traditionally, particles are generated and stored following a given order, row or column order for lattice-distributed particles, and the corresponding memory allocation of particle data is unaltered during the simulation.Due to the fully Lagrangian feature of particle-based methods, this results in poor temporal and spatial data access and insufficient usage of the memory hierarchy [114,117].Therefore, sorting the particle data to change their memory locations for better data locality, which has positive effects on hardware caching [130], can improve the memory access and decrease the computational time, achieving scalability and efficiency for large-scale particle simulations.To that end, particle data arrays, including the particle index and other physical variables, are rearranged so that neighboring particles are close in computer memory.This can be realized by implementing a sorting algorithm with a proper space-filling curve (SFC), for example the Morton SFC [131] or the Hilbert SFC [132], which traverses a higher-dimensional space in a continuous fashion [133].Note that particle sorting does not change the particle-interaction configuration.Springel [117] implemented an efficient Hilbert SFC in a cosmological N-body/SPH code for domain decomposition and particle sorting within each processor, exhibiting an approximate speedup of 2 compared with random ordering.For pure SPH simulations, the CLL approach is introduced to avoid the naive O(n^2) neighbor search, and particle sorting can be conducted with respect to the mapped cell index [114,123].Dominguez et al. [114] showed that implementing particle sorting with the Morton SFC [131] can increase the computational performance by about 20% for weakly-compressible SPH simulations.Since then, particle sorting has been implemented in particle-based codes coupled with MPI [134], GPU acceleration [122,128,135] and shared-memory high-performance computing strategies [123].
Following Refs.[114,123], the cell index in the CLL is defined by an SFC function H(x_cell, y_cell, z_cell) = n_cell, which produces one-dimensional indices n_cell such that two cells i and j that are geometrically close are also ordinally close.As pointed out by Dominguez et al. [114], particle sorting provides a way to improve the data access pattern and the neighboring-particle search, while it increases the memory requirements.
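A minimal sketch of such an SFC function H is the two-dimensional Morton (Z-order) index, computed by interleaving the bits of the cell coordinates; the bit-spreading trick below is the standard one and is not specific to any SPH code:

```python
def _part1by1(n):
    """Spread the lower 16 bits of n so that a zero bit separates each bit."""
    n &= 0xFFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(x_cell, y_cell):
    """Morton (Z-order) index of a 2-D cell: interleave the coordinate bits,
    so geometrically close cells tend to be ordinally close."""
    return _part1by1(x_cell) | (_part1by1(y_cell) << 1)
```

Particle data arrays are then sorted by the Morton index of each particle's cell, so that particles in nearby cells end up contiguous in memory.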

Dual-criteria time stepping
Zhang et al. [49] proposed a dual-criteria time stepping scheme to optimize the computational efficiency of the WCSPH method by introducing two time-step criteria characterized by the particle advection and the acoustic speeds, respectively.In this scheme, the advection criterion determines the updating frequency of the particle-interaction configuration, i.e., the simplest VL approach with a cell size of 2h and the corresponding kernel weights and gradients, and the acoustic criterion controls the frequency of the pressure relaxation process, i.e., the time integration of the particle density, position and velocity due to the action of the pressure gradient.
The time-step size determined by the advection criterion, termed ∆t_ad, has the following form where CFL_ad = 0.25, |v|_max is the maximum particle advection speed in the flow and ν the kinematic viscosity.The time-step size according to the acoustic criterion, termed ∆t_ac, has the form where CFL_ac = 0.6.Therefore, the pressure relaxation process is carried out approximately k ≈ ∆t_ad/∆t_ac times during one advection step, for example about 4 to 5 times for inviscid flow [49].Also, as the particle-interaction configuration is not altered within one advection time step, a large value of CFL_ac = 0.6, typical for Eulerian methods, is allowable without introducing numerical instability.
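Assuming the standard forms of the two criteria given in Ref. [49], a numeric sketch makes the sub-stepping ratio concrete:

```python
def advection_dt(h, v_max, nu, cfl_ad=0.25):
    """Advection criterion (form as in Ref. [49]):
    dt_ad = CFL_ad * min(h / |v|_max, h^2 / nu)."""
    limits = []
    if v_max > 0.0:
        limits.append(h / v_max)
    if nu > 0.0:
        limits.append(h * h / nu)
    return cfl_ad * min(limits)

def acoustic_dt(h, c, v_max, cfl_ac=0.6):
    """Acoustic criterion: dt_ac = CFL_ac * h / (c + |v|_max)."""
    return cfl_ac * h / (c + v_max)
```

For an inviscid flow with an artificial sound speed c = 10 |v|_max, a common weakly-compressible choice, the ratio k = ∆t_ad/∆t_ac evaluates to about 4.6, consistent with the 4-to-5 pressure relaxation sub-steps quoted above.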
As reported in Ref. [49], the dual-criteria time stepping scheme can achieve a speedup of up to 2.80 with good robustness and accuracy, in comparison with the traditional counterpart where the CLL approach is applied.

Solid mechanics
In the SPH method, two types of formulations, namely the updated Lagrangian (UL) and total Lagrangian (TL) formulations, have generally been developed for solid dynamics.The UL formulation, where the current configuration is used as the reference, suffers from several shortcomings, e.g. the presence of tensile instability [136,137] exhibiting non-physical fracture, the appearance of zero-energy modes due to the rank deficiency inherent to the use of under-integrated particle integration [138], and the reduced order of convergence for derived variables [10].To address these problems, many modifications have been proposed, by correcting the kernel function [33,38,139], improving the interpolation integral [4,15,140,141,142] or introducing a transport-velocity formulation [137].Compared with the UL formulation, the TL formulation shows promising potential in the simulation of finite deformation thanks to its attractive advantages of being free from tensile instability and ensuring first-order consistency when computing the deformation gradient through the kernel gradient correction.Since its inception, it has been applied to problems of necking and fracture in thermomechanical deformations [143], fluid-structure interaction (FSI) [13,14,144,145] and biomechanics [22,146], among many others.In this paper, we focus on the TL formulation with highlights on the stabilized term, the steady-state solution and the hourglass control scheme.

Governing equations
The kinematics of finite deformations can be characterized by introducing a deformation map ϕ, where a material point r^0 is mapped from the initial reference configuration Ω^0 ⊂ R^d to the point r = ϕ(r^0, t) in the deformed configuration Ω = ϕ(Ω^0).Here, the superscript (•)^0 denotes quantities in the initial reference configuration.Accordingly, the deformation tensor F can be defined by its derivative with respect to the initial reference configuration as With the definition of the displacement u = r − r^0, the deformation tensor F can also be calculated through where I represents the unit matrix.In the total Lagrangian framework, the conservation of mass and linear momentum for solid mechanics can be expressed as where ρ is the density, J = det(F), and P = FS is the first Piola-Kirchhoff stress tensor, with S denoting the second Piola-Kirchhoff stress tensor.In particular, when the material is linear elastic and isotropic, the constitutive equation can simply be given by where λ and µ are the Lamé parameters [147], K = λ + (2µ/3) the bulk modulus and G = µ the shear modulus.
The relation between the two moduli reads with E denoting the Young's modulus and ν the Poisson's ratio.Note that the sound speed of the solid structure is defined as c_S = √(K/ρ).The Neo-Hookean material model can be defined in a general form with the introduction of the strain-energy density function Then, the second Piola-Kirchhoff stress S is derived as
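The modulus relations quoted above follow the standard isotropic linear-elasticity conversions, which can be checked numerically; this sketch assumes nothing beyond those textbook relations:

```python
import math

def lame_parameters(E, nu):
    """Lame parameters from Young's modulus E and Poisson's ratio nu."""
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))
    mu = E / (2.0 * (1.0 + nu))
    return lam, mu

def bulk_modulus(E, nu):
    """K = lambda + 2*mu/3, as in the constitutive relations above."""
    lam, mu = lame_parameters(E, nu)
    return lam + 2.0 * mu / 3.0

def sound_speed(E, nu, rho):
    """Structural sound speed c_S = sqrt(K / rho)."""
    return math.sqrt(bulk_modulus(E, nu) / rho)
```

For instance, with E = 1 and ν = 0.25 both Lamé parameters equal 0.4 and K = 2/3, so the artificial sound speed of a unit-density solid is √(2/3).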

Total Lagrangian formulation
In the TL formulation, the correction matrix of Eq. (17) for the kernel gradient correction is calculated from the initial reference configuration as [39] where denotes the gradient of the kernel function.Here, the subscripts a and b are introduced to denote solid particles.
It is worth noting that the correction matrix is only calculated once before the simulation, as it is evaluated at the initial reference configuration.Then, the discretization of the mass and momentum conservation equations, Eq. (73), yields where f^f denotes the force exerted on the solid particles due to the presence of the fluid particles, and P denotes the inter-particle averaged first Piola-Kirchhoff stress, defined by Note that the first Piola-Kirchhoff stress tensor is computed from the constitutive law with the deformation tensor F given by

Stabilized scheme
Without an appropriate stabilization technique, the original TL formulation may exhibit spurious fluctuations, especially in the vicinity of sharp spatial gradients.This deficiency can result in numerical instability and lead to wrongly predicted deformations for problems involving large strains.To rectify this deficiency, Lee et al. [148] proposed a Jameson-Schmidt-Turkel SPH (JST-SPH) method, which shows good performance in eliminating spurious pressure oscillations in the simulation of nearly incompressible solids.In the JST-SPH methodology, the nodally conservative JST stabilization is additively decomposed into a harmonic (2nd-order) and a biharmonic (4th-order) operator, which require excessive computational effort [149].In a more recent work, Lee et al. [150] further proposed a total Lagrangian upwind SPH (TLU-SPH) method by introducing a characteristic-based Riemann solver in conjunction with a linear reconstruction procedure to guarantee the consistency and conservation of the overall algorithm.This method also shows good performance in the simulation of nearly and truly incompressible explicit fast solid dynamics with large deformations.
More recently, Zhang et al. [151] proposed an efficient artificial damping method for the total Lagrangian formulation by introducing a Kelvin-Voigt (KV) type damper through appropriate damping terms in the constitutive equation.Besides, many stabilization strategies for the updated Lagrangian formulation, e.g. artificial viscous fluxes [3,4,152], conservative strain-smoothing regularization [10,153] and Riemann-based schemes [59], have also been proposed.The KV model involves a viscous damper and a purely elastic spring connected in parallel.Following the same idea, an elastic solid undergoing large strains can also be modeled with the mechanical components of springs and dashpots.Thus, the total stress σ^total can be decomposed into two parts, i.e., the elastic stress σ^S and the damper stress σ^D, as where the damper stress is defined by Here, dε(t)/dt denotes the strain rate and η the physical viscosity.Applying the KV model to the TL-SPH formulation, the second Piola-Kirchhoff stress S can be rewritten as where S^S is given by the constitutive equation of Eq. (74) or Eq. (76), and the damper S^D is defined as By introducing a von Neumann-Richtmyer type scaling factor with the speed of sound c, the artificial viscosity π in Eq. (86) is defined by where α = 0.5 is a constant parameter and h denotes the smoothing length.
Figure 10 shows validations of the total Lagrangian formulation with the KV-type damper for solid mechanics and its applications in biomechanics.Figure 10a presents the time histories of the velocity and displacement in the length direction at the right tip of an elastic cable, which experiences a wave propagation initialized by imposing a velocity v = 5 m/s along the length direction on its right quarter [154].For the original TL formulation without stabilized schemes, excessive oscillations and similar overshoots in the velocity and displacement profiles are exhibited.With both the JST [148] and KV-type [154] stabilized schemes, the correct velocity and displacement are predicted as expected, whereas small overshoots are observed with the JST scheme [148].Figure 10b reports the deformed configuration with the von Mises stress contour and the displacement of the free end for a three-dimensional bending rubber-like cantilever whose bottom face is clamped to the ground and whose body is allowed to bend freely under an imposed initial uniform velocity [40].Compared with numerical data in the literature obtained by a mesh-based method [155], the nonlinear deformation of the structure is accurately predicted by the TL formulation with the KV-type damper.The robustness and versatility of the TL formulation in biomedical applications is portrayed in Figure 10c, where a C-shaped stent is considered by imposing initial velocities at its top and bottom [154].To the best of the authors' knowledge, this is the first time that an SPH-based method has been successfully extended to the simulation of a realistic cardiovascular stent, which opens up interesting possibilities for modeling biomechanical applications.

Steady state solution
Driving a mechanical system to converge to a static equilibrium state plays a key role in static and dynamic analyses.For SPH-based simulations, where an explicit time integration scheme is mostly applied, rapidly achieving the static equilibrium state is a critical numerical challenge.Also, traditional implicit methods, solving the entire system with iterative solvers, are not applicable for large-scale applications, especially in nonlinear cases.Instead, dynamic relaxation, which was originally proposed for the finite element method (FEM) [156,157,158], is attractive because its explicit iterative algorithm is simpler and more efficient.
Generally, there are two groups of dynamic relaxation techniques, i.e., viscous dynamic relaxation (VDR) and kinetic dynamic relaxation (KDR) [159,160,161], with respect to the manner of applying damping.In VDR, an artificial viscous damping term is added into the equation of motion to reduce the number of iterations.While VDR is able to obtain the system equilibrium without loss of momentum conservation, it has the difficulty that the solution of nonlinear problems may be path dependent and vary with the damping ratio, as noted in Ref. [162].Also, the relaxation process is slow if an insufficient damping ratio is applied; on the other hand, excessive damping can not only lead to numerical stability issues for explicit damping schemes, but also hinder the system from achieving the correct final steady state, as the damped velocity can be very small.This inefficiency of the damping results in low efficiency in achieving the final solution, which is the main drawback of VDR [163].As an alternative approach, KDR was first proposed by Cundall [164] for static structure analysis.In KDR, the damping is introduced to a dynamic system by resetting all current velocities to zero whenever a kinetic energy peak is detected [158].After performing this relaxation procedure several times for successive local kinetic energy peaks, the final state is achieved and the static equilibrium solution obtained.The procedure is simple, requires no damping coefficient and has been applied to a wide range of engineering problems [163,165].However, it also exhibits two obvious drawbacks.One is that it violates momentum conservation; therefore, it cannot be applied to the equilibrium-state analysis of moving systems due to the variation of the entire mechanical energy.The other is that the additional process of kinetic-energy-peak detection brings extra computational effort [160,166].Concerning the development of dynamic relaxation in the SPH method, the first effort is credited to
Lin et al. [162], where a dynamic relaxation scheme is developed for shell-based SPH by exploiting the formulations of finite element analysis.However, the damping matrix as well as the damping ratio have to be chosen suitably, which is not an easy task.
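The KDR idea described above, reset velocities when a kinetic energy peak is detected, can be illustrated on a toy single-degree-of-freedom system; this is a sketch of Cundall's principle, not of any SPH implementation:

```python
def kdr_relax(x0, x_eq, k=1.0, m=1.0, dt=0.05, steps=2000):
    """Kinetic dynamic relaxation of a mass-spring system: integrate
    explicitly and reset the velocity to zero once the kinetic energy
    passes a local peak, so the amplitude shrinks after every reset."""
    x, v, ke_prev = x0, 0.0, 0.0
    for _ in range(steps):
        v += -k * (x - x_eq) / m * dt   # symplectic Euler: velocity first
        x += v * dt
        ke = 0.5 * m * v * v
        if ke < ke_prev:                # kinetic energy peak just passed
            v, ke = 0.0, 0.0
        ke_prev = ke
    return x
```

No damping coefficient is needed, but as noted above the velocity resets destroy momentum conservation, so the trick only suits static equilibrium analysis.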
Recently, Zhu et al. [167] proposed a VDR-type dynamic relaxation method for SPH by introducing efficient momentum-conservative damping.Specifically, an artificial viscous force is first introduced into the momentum conservation equation, which is rewritten as where f^ν denotes the added damping term.Following the TL formulation, this viscous damping term can be discretized as where η is the dynamic viscosity.Note that, along with the acoustic and body-force time-step criteria, the time-step size during the simulation would also be constrained by where D = {1, 2, 3} for one-, two- or three-dimensional cases, respectively.This limitation may lead to excessive computational effort, especially when a large damping ratio and a high resolution are applied.To relax this limitation, an operator splitting scheme [22,168] is first applied to decouple the momentum conservation equation Eq. (88) into the original momentum part and the damping part.Then, two operators S_m and S_d are introduced, for which a forward Euler scheme is adopted for the time integration.Subsequently, the first-order Lie-Trotter splitting scheme [169] is applied to approximate the solution from time t to t + ∆t by where the symbol • denotes the separation of the operators and indicates that S_d^(∆t) is applied after S_m^(∆t).As demonstrated in Refs.[170,171], a larger time-step size is allowed when suitable implicit formulations are constructed to solve the viscous term.
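The benefit of solving the split damping operator implicitly can be seen on the scalar model problem dv/dt = -λv; this minimal sketch is independent of the SPH discretization:

```python
def explicit_damping_step(v, lam, dt):
    """Forward-Euler step of dv/dt = -lam * v: stable only if lam * dt < 2."""
    return v * (1.0 - lam * dt)

def implicit_damping_step(v, lam, dt):
    """Backward-Euler step of dv/dt = -lam * v: |v| decays for any dt."""
    return v / (1.0 + lam * dt)
```

With λ∆t = 100 the explicit step amplifies the velocity while the implicit one still damps it, which is why the implicit treatment of the damping operator lifts the viscous time-step restriction.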
To avoid the large-scale matrix operations of traditional implicit formulations, the entire-domain-related damping step is sequentially split into particle-by-particle operators, e.g. by second-order Strang splitting [172], as where N_t denotes the total number of particles and D_a the split damping operator corresponding to particle a. Two efficient schemes, the particle-by-particle splitting scheme and the pairwise splitting scheme, are then proposed for the local damping operator [167].The new time-step velocity for the entire field can thus be obtained by carrying out the local split operator on all particles for half a time step and then performing the operators on these particles in reverse sequence for another half time step [173], as shown in Eq. (94).

Particle-by-particle splitting scheme
In an implicit formulation, the local damping term in Eq. (89) can be rewritten as where v_ab^{n+1} = v_ab^n + dv_a − dv_b, with dv_a and dv_b representing the incremental changes of velocity induced by the viscous acceleration for particle a and its neighboring particle b. After denoting and the implicit formulation Eq. (95) can be simplified to A gradient descent method [174] is then adopted to evaluate dv_a and dv_b.In Eq. (98), the gradient ∇E_a is computed with respect to the variables (dv_a, dv_1, ...), where k is known as the learning rate [174].By substituting Eqs. (99) and (100) into Eq. (98), the learning rate can be obtained, i.e.
According to Eqs. (99) and (100), the incremental change of velocity by viscous damping can thus be obtained.In order to ensure momentum conservation, the velocities of the neighboring particles are then modified by the above predicted incremental changes.In summary, the local update of velocities includes the following two steps.The first step calculates the incremental change of velocity by the gradient descent method, i.e., where the superscript p denotes the predicted value.The second step ensures momentum conservation, which yields As the velocities are updated implicitly, a much larger time-step size is allowed, and the following viscous criterion, which is about 100 times larger than that of the corresponding explicit method presented in Eq. (90), is adopted.For solids, the artificial dynamic viscosity η = ρν, as shown in Eq. (88), is defined by where E is the Young's modulus, L the characteristic length scale of the problem and β a parameter related to the body shape.Note that choosing different values for the parameter β may alter, though not much, the speed of reaching the final state.For fluids, the viscosity is defined by

Pairwise splitting scheme
The pairwise splitting scheme is inspired by the work of Ref. [170], where the particle velocity is updated implicitly and locally in a pairwise fashion.By adopting the second-order Strang splitting [172], the damping operator corresponding to each particle a, as given in Eq. (94), is further split based on its neighbors, i.e., where D_{a,b} denotes the interaction between particle a and its neighbor b. Specifically, the incremental changes of velocity for a specific particle pair induced by viscosity can be written in implicit form as Here, B_b is defined in Eq. (96), and it is obvious that this process conserves momentum.Then, dv_a and dv_b can be obtained straightforwardly by solving Eq. (108), which yields By sweeping over all neighboring particle pairs for half a time step, and then over these pairs in reverse sequence for another half time step, the incremental changes of velocity of particle a and all its neighbors can be obtained.Compared with the particle-by-particle splitting method, this scheme leads to larger errors in resolving the viscosity due to the further splitting in a pairwise fashion.However, it is unconditionally stable and thus more suitable for problems with high spatial resolution and a high damping ratio.
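The structure of such a pairwise implicit update can be sketched on a model pair interaction m_a dv_a/dt = -k (v_a - v_b), and symmetrically for b; the coefficients here are illustrative stand-ins for the kernel-dependent factor B_b of the paper, but the two defining properties, closed-form implicit solvability and exact momentum conservation, carry over:

```python
def pairwise_implicit_damping(va, vb, ma, mb, k, dt):
    """Backward-Euler update of one particle pair under mutual viscous
    damping: the relative velocity is damped in closed form, and the
    exchanged impulse is equal and opposite, so linear momentum is
    conserved exactly regardless of dt (unconditional stability)."""
    w = va - vb
    w_new = w / (1.0 + k * dt * (1.0 / ma + 1.0 / mb))  # implicit relative velocity
    impulse = k * dt * w_new                            # impulse acting on particle a
    return va - impulse / ma, vb + impulse / mb
```

Sweeping this update over all pairs, forward for half a step and in reverse for the other half, mimics the Strang-split traversal described above.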

Random-choice strategy
It is worth noting that the added viscous force hinders the system from achieving the correct steady state, especially when the damping ratio is large, which may lead to different solutions of nonlinear problems for different damping ratios [162].Thus, with the aforementioned viscous damping methods, a suitable damping ratio has to be selected [162,175] for the system to reach its final state faster.
To avoid this damping-ratio-related problem and relax the limitation on the choice of a large damping ratio, a random-choice strategy is presented, in which the viscosity term is imposed randomly rather than at every time step.To achieve this, the artificial dynamic viscosity η is modified as where φ is a random number uniformly distributed between 0 and 1, and α = 0.2 a parameter determining the probability.Therefore, the resistance on the displacement induced by the large artificial viscosity is released randomly, which eliminates the damping-ratio-related issue and accelerates the approach to the final state.Note that this strategy also saves considerable computational cost, since the computation of the damping is only carried out at a small fraction of the time steps.The performance of the VDR-type dynamic relaxation scheme is validated by considering an elastic block sliding along a smooth slope under gravity [167].Figure 11 presents the time histories of the distance between the block center and the slope, and of the displacements of the block center in the x- and y-directions.With the dynamic relaxation scheme, the steady state is quickly achieved with the oscillations suppressed.
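A minimal sketch of the random-choice modification reads as follows; the exact scaling used in the paper may differ, here the viscosity is simply switched on with probability α:

```python
import random

def random_choice_viscosity(eta, alpha=0.2, rng=random):
    """Random-choice strategy (sketch): apply the artificial viscosity only
    when a uniform random number phi falls below alpha, so on average the
    damping acts on a fraction alpha of the time steps and the resistance
    on the displacement is released on the remaining ones."""
    phi = rng.random()          # uniform in [0, 1)
    return eta if phi < alpha else 0.0
```

Because the damping term is evaluated in only about 20% of the steps, the strategy also trims the cost of the damping computation itself.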

Hourglass control scheme
In the FEM, hourglass modes represent zero-energy modes in the sense that an element deforms without an associated increase of the elastic energy when reduced-integration elements are employed.Insufficient integration points lead to rank deficiency of the FEM stiffness matrix, which further causes non-uniqueness of the solution of the governing equations.Similarly, the SPH method is also susceptible to hourglass modes, as all field variables and their derivatives are evaluated at the same positions [141,176].These modes cannot be detected and can develop over time [177] when the number of integration points is reduced, and the solution is then polluted with arbitrary amounts of strain energy and entirely dominated by these modes [139].
To address the hourglass modes in the SPH method, Swegle et al. [178] proposed to replace the strain measure by a non-local approximation based on a gradient approach.Beissel and Belytschko [179] treated these singular modes by adding to the potential energy functional a stabilization term which contains the square of the residual of the equilibrium equation.A more straightforward idea is to introduce additional integration points to calculate derivatives away from the particles, where the derivatives of the kernel function vanish [152,180].Two sets of points, velocity points and stress points, are applied to discretize the computational domain, one carrying the velocity and the other carrying the stress.While the velocity gradient and stress are computed on the stress points, the divergence of the stress is calculated using the stress points as neighbor particles and then sampled at the velocity points.The enhanced stability due to additional stress points has been confirmed [39,181,182,183,184].However, estimating the stresses induces extra computational effort, and how to place the stress points is still not fully addressed.
When the FEM uses one-point reduced integration, a mean deformation gradient is obtained, resulting in mean strain and stress over a single element.The SPH method similarly evaluates a mean strain and stress at the center of a particle through weighted averaging over the neighboring particles.Recently, Ganzenmüller [139] recognized this analogy between SPH collocation and the FEM using one-point reduced elements.Inspired by Flanagan and Belytschko [185], Ganzenmüller pointed out that the mean stress-strain description can only represent a fully linear velocity field, which implies that node or particle displacements should be exactly described by the deformation gradient, and node or particle displacements incompatible with the linear deformation field are identified as hourglass modes.The distance between particles a and b can be estimated by the deformation gradient F_a as When an hourglass mode exists, the estimated distance between the two particles is inconsistent with the actual distance.This difference defines the error vector A correction force proportional to the error vector is introduced, i.e., an artificial stiffness is added to counteract the hourglass modes.Then, the scalar δ_ab^a can be defined as δ_ab^a is the projection of the error vector onto the current particle distance vector.The hourglass correction force per unit volume is derived as The stiffness is linear in δ_ab^a and described using the Young's modulus E of the material.A normalized smoothing kernel is used to be consistent with the SPH collocation, and an explicit symmetrization via the arithmetic mean is applied, as the hourglass forces f_ab^a and f_ba^b are unsymmetric.Therefore, the smoothed and symmetric correction force between particles a and b can be expressed as where α is a dimensionless constant which determines the amplitude of the hourglass control.Finally, the total hourglass correction force on particle a over all neighboring particles is This hourglass control algorithm has been successfully
applied to geomaterials [186], FSI with GPU acceleration [187,188], etc.
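The detection step can be sketched per particle pair: the error vector is the difference between the separation predicted by the deformation gradient and the actual separation, and it vanishes for any homogeneous (fully linear) deformation, which is the defining property above. This two-dimensional sketch follows the idea, not the exact formulas, of the cited scheme:

```python
def hourglass_error(F_a, r0_ab, r_ab):
    """Error vector for hourglass detection (2-D): F_a applied to the
    reference separation r0_ab minus the current separation r_ab."""
    ex = F_a[0][0] * r0_ab[0] + F_a[0][1] * r0_ab[1] - r_ab[0]
    ey = F_a[1][0] * r0_ab[0] + F_a[1][1] * r0_ab[1] - r_ab[1]
    return ex, ey

def delta_ab(err, r_ab):
    """Projection of the error vector onto the current separation direction;
    the correction force is taken linear in this scalar."""
    norm = (r_ab[0] ** 2 + r_ab[1] ** 2) ** 0.5
    return (err[0] * r_ab[0] + err[1] * r_ab[1]) / norm
```

A uniform stretch (F = 2I with the separation doubled) yields a zero error vector, while a pair displacement incompatible with F produces a nonzero projection that the correction force then penalizes.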

Fluid-structure interaction
Fluid-structure interaction, where the structure represents either a movable rigid or a flexible body, is ubiquitous in natural phenomena, e.g. aerial animal flying, aquatic animal swimming and blood circulation, and also plays a crucial role in the design of many engineering systems, e.g. automobiles, aircraft, spacecraft, engines and energy harvesting devices.This phenomenon is characterized by the multiphysics coupling between the laws describing fluid dynamics and structural mechanics.Due to the intrinsic complexity of the interaction between a movable or flexible structure and a surrounding or internal fluid flow, the computational study of this type of multiphysics problem is highly challenging.
The conventional FSI algorithms are based on mesh-based methods, i.e., the finite difference method (FDM) [189], the FEM [190] and the finite volume method (FVM) [191], implementing either a monolithic or a partitioned approach. The monolithic approach treats the coupled problem as a whole with a proper combination of the subsystems, applying a single solver to simultaneously solve the governing equations of the fluid and solid dynamics in FSI. One typical example is the arbitrary Lagrangian-Eulerian (ALE) description of the FEM [192], where a moving mesh is introduced for the fluid discretization to address unacceptable mesh distortion near a structure undergoing large deformations. This approach encounters difficulties in treating the convective terms and the challenge of complex mesh regeneration, in particular when large structure deformation is involved [193]. The partitioned approach strives to solve each sub-problem separately, applying computational fluid and computational solid solvers for the fluid and solid, respectively, with communication of FSI interface data. A typical example is the immersed-boundary method (IBM) [194], which utilizes two overlapped Lagrangian and Eulerian meshes. In the IBM, the fluid equations are solved on the Eulerian mesh and the effects of the solid structure are taken into account by distributing the forces computed on the deformed Lagrangian mesh to the Eulerian counterpart using a proper kernel function. Compared with the monolithic one, the partitioned approach suffers from Lagrangian-Eulerian mismatches in the kinematics and in the distribution of solid structure forces due to the fairly weak coupling formulation.
As an alternative for tackling FSI problems, meshless methods, i.e., SPH [1,2,67], MPS [195] and DEM [196], provide a unified monolithic approach with pure Lagrangian discretization of both the fluid and solid equations. In recent years, meshless methods have attracted significant attention for FSI problems, owing to their peculiar advantages in handling material interfaces [18,137] and their capability of capturing violent events such as wave impact and breaking [64]. Promising results have been obtained by the weakly-compressible SPH method [13,14,27,64,188,197,198,199,200,201], the incompressible SPH method [144,202,203], the MPS method [204], SPH-DEM or MPS-DEM methods [205,206] and their combinations with the FEM using a partitioned approach [207,208,209,210,211].
In this part, we focus on recent developments of the unified SPH method for FSI problems; special attention is devoted to the FSI interface treatment and to multi-resolution discretization and time stepping schemes, which are key components of an accurate and efficient FSI algorithm. Concerning applications of FSI algorithms in engineering, comprehensive reviews can be found in Refs. [9,14,15,26,28,212].

Treatments of FSI interface
In unified SPH-FSI computations, the fluid-structure coupling is resolved by treating the surrounding movable or flexible structure as a moving solid boundary for the fluid, imposing a free- or no-slip boundary condition at the fluid-structure interface.
Concerning the treatment of movable or flexible boundaries, several methods have been proposed; they are generally categorized into three schemes, i.e., the boundary force particle, dummy particle and one-sided Riemann schemes. In the first scheme, the structure particles behave as moving boundary force particles [213] and a repulsive force is introduced to prevent particle penetration. This scheme was first proposed by Monaghan and Kajtar [213], where one layer of boundary particles provides a Lennard-Jones-type repulsive force on fluid particles, and was improved by Liu et al. [214] and further by Zhang et al. [215] by introducing a new numerical approximation scheme for estimating the field functions of solid particles. In the second scheme, the structure is represented by dummy particles whose velocity and pressure are interpolated from fluid particles [55,84] to solve the governing equations. Notwithstanding its wide application, this approach exhibits excessive computational effort, which is inherently expensive in three-dimensional simulations, due to the data interpolation. In the third scheme, a one-sided Riemann problem is constructed along the structure normal and solved to determine the FSI coupling [67]. This scheme has demonstrated its robustness, accuracy and efficiency [49,67,145].
Note that other schemes, for example ghost particles [5] and semi-analytical approaches [216,217,218], have been applied for handling the FSI interface in the coupling of particle-based methods and the FEM [207,210]; these are not included in this survey.

Boundary force particle
The boundary force particle scheme was first proposed by Monaghan and Kajtar [213] for approximating arbitrarily shaped boundaries. The movable boundary is modeled by one layer of boundary particles with a particle spacing a factor of 3 smaller than the fluid particle spacing, as shown in the left panel of Figure 12, and these particles interact with the fluid particles through forces depending on the separation of the particles and pre-determined parameters. The force f^s_i in Eq. (18) acting on a fluid particle i, due to the presence of the neighboring solid particle a, is then given as a pairwise repulsion [213,219] modulated by the compactly supported function

Γ(q) = (1/32)(1 + (5/2)q + 2q²)(1.5 − q)⁵ for q < 1.5, Γ(q) = 0 for q ≥ 1.5, with q = r_ia/h. (118)

Here, β = ν_max/(ρ_0 dp_0²) with ν = (1/8)αhc, and ∆ = dp_0/3 is the separation of the boundary particles.
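A minimal Python sketch of this compactly supported weighting follows; the coefficients are as reconstructed here and should be checked against Ref. [213] before use.

```python
def repulsive_weight(q):
    """Compactly supported repulsive weighting Gamma(q), q = r_ia / h.

    Nonzero only for q < 1.5, so the boundary force acts just in the
    immediate vicinity of the wall and vanishes smoothly at the cut-off.
    """
    if q >= 1.5:
        return 0.0
    return (1.0 + 2.5 * q + 2.0 * q * q) * (1.5 - q) ** 5 / 32.0
```

The function decays monotonically and goes to zero with five continuous derivatives at q = 1.5, which keeps the repulsion force smooth as fluid particles approach and leave the boundary layer.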
This solid boundary treatment is widely applied in SPH simulations for FSI interface treatment involving complex solid shapes in single- and multi-phase flows [213,220], and has also been extended to an SPH-FEM coupling scheme [207]. However, the artificial repulsive forces violate the kernel truncation in the immediate vicinity of the solid boundaries, as only a single layer of particles is used to mimic the boundaries. To address this issue, Liu et al. [214] developed a coupled dynamic solid boundary treatment, termed CD-SBT, by introducing ghost particles along with the repulsive force particles, as shown in the right panel of Figure 12. In the CD-SBT scheme, the repulsive particles are similar to those of Ref. [213], with a particle spacing identical to that of the fluid particles. Ghost particles are located outside the repulsive ones and are initially generated in a regular or irregular distribution [214], and an improved repulsive force is proposed. Both the repulsive particles and the ghost particles are dynamically evolved in the SPH approximation of the governing equations, and their density and velocity are interpolated from the fluid particles using a corrected kernel W_new obtained with the Shepard filter or the MLS method. It is straightforward to note that the repulsive force particles provide a penetration-preventing force, while their combination with ghost particles restores consistency by extending the full support domain of the fluid particles. The CD-SBT scheme has shown its accuracy and robustness in the simulation of multi-phase flows [221], free-surface flows interacting with movable rigid objects [222] and hydro-elastic FSI [198].

Dummy particle
In the dummy particle scheme, the solid is discretized with dummy particles in a layer of width r_c, the cut-off radius of the kernel function, along the interface; for a flexible structure, its particles act as dummy particles when interacting with fluid particles. As shown in the left panel of Figure 13, the fluid particles (in blue) near the wall interact with the dummy particles (in black) which lie within the support radius of the smoothing kernel. The force f^s acting on the fluid is decomposed into the pressure force f^s:p and the viscous force f^s:ν, in which the density-weighted inter-particle averaged pressure is defined as

p_ia = (ρ_a p_i + ρ_i p_a)/(ρ_i + ρ_a), (123)

and the dummy particle velocity v_a is extrapolated from the fluid phase as

v_a = 2 v^s_a − ṽ_a, ṽ_a = Σ_i v_i W_ai / Σ_i W_ai, (124)

to impose the no-slip boundary condition. Note that v^s_a in Eq. (124) represents the velocity of the movable solid or flexible structure. The dummy particle pressure p_a of Eq. (123) can be calculated as a summation over all contributions of the neighboring fluid particles by [55]

p_a = Σ_i p_i W_ai / Σ_i W_ai, (125)

or [84]

p_a = [Σ_i p_i W_ai + (g − a_a) · Σ_i ρ_i r_ai W_ai] / Σ_i W_ai. (126)

Here, a_a is the solid acceleration. With the interpolated pressure p_a, the dummy particle density ρ_a can be derived from the EoS of Eq. (19). Compared with the interpolation of Eq. (125), Eq. (126) is written in a more general formulation with a moving solid present, making it a suitable choice for FSI involving moving or flexible structures.
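A hedged Python sketch of these extrapolations is given below, assuming the generalized wall boundary condition of Ref. [55]; the array shapes and the sign convention r_af = x_a − x_f (wall minus fluid) are illustrative assumptions.

```python
import numpy as np

def dummy_particle_state(p_f, rho_f, v_f, r_af, W_af, v_solid, g, a_solid):
    """Extrapolate a dummy (wall) particle's pressure and velocity
    from its neighboring fluid particles (sketch of Eqs. (124)-(126)).

    p_f, rho_f : (n,) fluid pressures and densities
    v_f        : (n, d) fluid velocities
    r_af       : (n, d) vectors x_a - x_f from fluid neighbors to the wall particle
    W_af       : (n,) kernel weights
    v_solid    : (d,) prescribed wall velocity
    g, a_solid : (d,) gravity and wall acceleration
    """
    Wsum = W_af.sum()
    # pressure with the hydrostatic / solid-acceleration correction (Eq. 126)
    p_a = (np.dot(p_f, W_af)
           + (g - a_solid) @ (rho_f[:, None] * r_af * W_af[:, None]).sum(axis=0)) / Wsum
    # no-slip: reflect the kernel-averaged fluid velocity about the wall velocity
    v_tilde = (v_f * W_af[:, None]).sum(axis=0) / Wsum
    v_a = 2.0 * v_solid - v_tilde
    return p_a, v_a
```

For a wall particle sitting below still fluid, the correction term reproduces the expected hydrostatic pressure increase, which is exactly what makes Eq. (126) suitable for moving solids in a gravity field.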
With the dummy particles, kernel truncation for fluid particles close to the structure is avoided, ensuring approximation accuracy and consistency. The dummy particle scheme has been widely applied in the simulation of free-surface flows [84,18], multi-phase flows [18,50] and multi-phase FSI [203]. Notwithstanding its wide application, the dummy particle scheme results in excessive computational effort, which is inherently expensive in three-dimensional simulations, due to the introduced interpolation of physical variables.

One-sided Riemann-based scheme
Similar to the dummy particle scheme, the one-sided Riemann scheme also represents the movable or flexible structure with dummy particles, while a one-sided Riemann problem is constructed along the solid normal and solved to realize the FSI coupling [67], as shown in the right panel of Figure 13.
In the one-sided Riemann scheme [67], the pressure force f^s:p_i is rewritten in terms of p*, the solution of the one-sided Riemann problem. The left state (ρ_L, U_L, p_L) = (ρ_i, −v_i · n_a, p_i) is reconstructed from the fluid particle i, while the right state is given by U_R = 2U_w − U_L, with the wall velocity U_w = −v_a · n_a, and p_R = p_a. Here, n_a is the local normal vector pointing from solid to fluid, and the pressure p_a = p_i + ρ_i g · r_ia. With the dummy particle pressure, its density ρ_a is also calculated through the EoS presented in Eq. (19). Compared with the aforementioned dummy particle scheme, the present one is simpler and more efficient, owing to the fact that the one-sided Riemann problem is solved in a particle-by-particle fashion and no interpolation of states for the solid particles is required.
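A minimal sketch of this construction follows, assuming a simple acoustic (linearized) solver for p* and neglecting the hydrostatic correction; the dissipation-limited solver discussed in earlier sections would replace the last line.

```python
def one_sided_riemann_pressure(p_i, rho_i, v_i, v_a, n_a, c0):
    """Sketch of the one-sided Riemann construction at a wall particle a.

    The left state comes from the fluid particle i; the right state
    mirrors the wall velocity, U_R = 2*U_w - U_L, with U_w the wall
    velocity along -n_a. A linearized (acoustic) solver gives p*.
    """
    U_L = -sum(vi * ni for vi, ni in zip(v_i, n_a))   # fluid velocity along -n_a
    U_w = -sum(va * ni for va, ni in zip(v_a, n_a))   # wall velocity along -n_a
    U_R = 2.0 * U_w - U_L
    # acoustic approximation: interface pressure rises when fluid
    # approaches the wall (U_L > U_R) and drops when it recedes
    return p_i + 0.5 * rho_i * c0 * (U_L - U_R)
```

Since the solve is a few scalar operations per fluid-solid pair, this illustrates why the scheme avoids the interpolation cost of the dummy particle approach.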
For each solid particle, the normal vector in the reference configuration can be calculated by a kernel-gradient summation over the wall particles only [67,4]. For a static solid structure, the normal vector is not altered during the computation. For a flexible structure, the normal vector should be updated accordingly when deformation occurs. To avoid frequent evaluation of the summation in Eq. (129), the updated normal vector for a flexible structure can be obtained from the rotation n_a = Q N_a, where Q is the orthogonal part of the deformation tensor F and can be calculated by polar decomposition. Figure 14 presents the numerical investigation of wave interaction with an oscillating wave surge converter (OWSC) [223] with the SPHinXsys library [12], using the Riemann-based WCSPH method with the one-sided Riemann-based fluid-solid interface treatment. It is observed that smooth velocity fields are produced even when complex interactions between the wave and the flap are involved. Also, the wave-structure interaction and the wave loading are well predicted in comparison with the experimental data [224], numerical results obtained with the commercial software FLUENT [224] and SPH results in the literature [225,226].
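The rotation-based normal update can be sketched as follows, using an SVD-based polar decomposition; `updated_normal` is an illustrative helper, not an API of any of the cited libraries.

```python
import numpy as np

def updated_normal(F, N0):
    """Rotate the reference normal N0 by the rotation part Q of the
    deformation gradient F (polar decomposition F = Q U), as in Eq. (130).
    """
    # polar decomposition via SVD: F = W S Vt  =>  Q = W @ Vt
    Wm, _, Vt = np.linalg.svd(F)
    Q = Wm @ Vt
    n = Q @ np.asarray(N0, dtype=float)
    return n / np.linalg.norm(n)
```

Because only the rotation part of F is applied, pure stretching leaves the normal direction unchanged, and the costly kernel summation of Eq. (129) is needed only once in the reference configuration.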

Multi-resolution scheme
When applying a particle-based solver to FSI problems, a single spatial-temporal resolution is commonly employed: a uniform particle spacing discretizes the entire computational domain, and a single time step, the smallest of those required by the fluid and the solid structure, is used for the time integration [13]. In such a case, the single-resolution approach is computationally expensive and memory-consuming in applications where the structure requires a locally refined resolution or where the global computational domain is much larger than the critical sub-domain. Therefore, developing a multi-resolution scheme for particle-based FSI solvers is desirable from the computational efficiency point of view.
For particle-based simulations, different accurate, stable and consistent multi-resolution schemes have been developed for discretizing the fluid equations; they are generally classified into four classes: adaptive particle refinement (APR) with or without particle splitting/merging [117,227,228,229,230], non-spherical particle schemes [231,232], domain-decomposition based schemes [233,234] and hybrid schemes [235,236,237]. Notwithstanding this progress, multi-resolution schemes for particle-based FSI solvers have emerged only in very recent years. The pioneering work is credited to Khayyer et al. [204], where a multi-resolution scheme was developed for an MPS-FSI solver. Then, Sun et al. [203] extended the multi-resolution scheme developed by Barcarolo et al. [235] to the simulation of multi-phase hydroelastic FSI problems. More recently, Zhang et al. [23] proposed a multi-resolution SPH method for fluid-flexible structure interaction with special attention to enforcing momentum conservation and force matching at the fluid-structure interface.
It is worth noting that a consistent particle resolution is applied across the FSI interface in Ref. [203], where the APR scheme is applied in a predefined refinement area located in the fluid domain, with proper particle splitting/merging [235]. On the other hand, different particle resolutions are adopted across the FSI interface in Refs. [204,23,238] with proper handling of momentum conservation and force matching. In this work, we focus on the second approach, as the first one involves a splitting/merging scheme which is not the main objective of this survey.

Multi-resolution discretization
In the multi-resolution framework of Ref. [23], the fluid and solid equations are discretized at different spatial-temporal resolutions. In this case, the solid structure can be resolved at a higher spatial resolution, and the computational efficiency is enhanced when a lower resolution discretization of the fluid is sufficient. This strategy is suitable for applications where the structure is relatively thin, i.e. has a considerably smaller spatial scale than the fluid, or where the structure has a high Poisson ratio, which results in a smaller time step size than that required by the fluid.
With different spatial resolutions applied across the FSI interface, the interaction pressure force f^s:p_i and viscous force f^s:ν_i are rewritten with the kernel evaluated at h_f, the smoothing length used for the fluid, under the assumption h_f > h_s, ensuring that a fluid particle i is searched and tagged as a neighboring particle of every solid particle a located in its neighborhood. Having Eq. (131) in hand, the fluid forces exerted on the solid structure can be derived straightforwardly.
Multi-time stepping with position-based Verlet scheme

Following Refs. [22,145], a time-step criterion ∆t^S is given for the solid integration. With the advection criterion ∆t^F_ad of Eq. (69) and the acoustic criterion ∆t^F_ac of Eq. (70) in hand, three different time step sizes are introduced. Generally, ∆t^S < ∆t^F_ac, due to the fact that c^S > c^F. Rather than choosing ∆t^S as the single time step for both fluid and structure, one can carry out the structure time integration κ = [∆t^F_ac/∆t^S] + 1 times, where [·] denotes the integer-part operation, during one acoustic time step of the fluid integration. As different time steps are applied in the integration of the fluid and solid equations, the issue of force mismatch in the fluid-structure interaction may be encountered. That is, in the imaginary pressure and velocity calculation, the velocity and acceleration of the solid particles in Eq. (131) may take several different values, updated after each ∆t^S. Another issue is that momentum conservation in the fluid-structure coupling may be violated. To address the force-calculation mismatch, Zhang et al. [23] proposed to calculate the imaginary pressure p^d_a and velocity v^d_a with the single averaged velocity v̄_a and acceleration dv̄_a/dt of the solid particles during one fluid acoustic time step. Also, to address the momentum conservation issue, Zhang et al.
[23] developed a position-based Verlet scheme. Instead of starting with a half step for velocity followed by a full step for position and another half step for velocity, as in the velocity-based Verlet scheme [84], the position-based Verlet does the opposite: a half step for position, followed by a full step for velocity and another half step for position. Figure 15 depicts the velocity- and position-based Verlet schemes assuming κ = 4 for the integration of the fluid and solid equations. Note that since the position is updated twice and the velocity once, with the acceleration evaluated at the half step, a Taylor expansion shows that the position-based scheme has the same 2nd-order accuracy as the original one.
In the position-based Verlet scheme, as the velocity field is updated only once during the current fluid acoustic time step, the time marching of the momentum equations for fluid and solid is exactly consistent, with the velocity marching interval (∆t^F_ac)^n = Σ_{k=0}^{κ−1} (∆t^S)^k, as shown in Figure 15. Therefore, the position-based Verlet algorithm achieves strict momentum conservation in the fluid-structure coupling when multiple time steps are employed. In contrast, the velocity-based Verlet scheme does not guarantee momentum conservation, as 0.5[(∆t^F_ac)^{n−1} + (∆t^F_ac)^n] ≠ 0.5 Σ_{k=0}^{κ−1}[(∆t^S)^{k−1} + (∆t^S)^k], as also shown in Figure 15.
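As an illustration, a Python sketch of the sub-step counting and the drift-kick-drift update described above is given below; it follows the description rather than the verbatim implementation of Ref. [23].

```python
def num_solid_substeps(dt_fluid_ac, dt_solid):
    """Number of solid sub-steps per fluid acoustic step: kappa = [dt_F/dt_S] + 1."""
    return int(dt_fluid_ac // dt_solid) + 1

def position_verlet_step(x, v, dt, accel):
    """One position-based Verlet step: half drift, full kick, half drift."""
    x = x + 0.5 * dt * v          # first half step for position
    v = v + dt * accel(x)         # full step for velocity at the midpoint
    x = x + 0.5 * dt * v          # second half step for position
    return x, v
```

For a harmonic oscillator this update retains second-order accuracy and near-exact energy behavior over long runs, consistent with the remark that the position-based scheme matches the accuracy of the velocity-based one.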
Figure 16 reports the validation of the multi-resolution scheme implemented in the open-source library SPHinXsys [12,146] on two FSI benchmark tests, i.e., flow-induced vibration of a beam attached to a cylinder, and dam-break flow with an elastic gate [145]. For both tests, the deformation of the flexible structure induced by the fluid-structure interaction is accurately captured in comparison with experimental data [13] and numerical data in the literature [13,144,200,239,240]. Concerning computational efficiency, an approximate speedup on the order of 10² is achieved compared with the single-resolution simulation, as reported in Ref. [23].

Particle and mesh generation
Generating high-quality unstructured meshes or particle distributions is essential for mesh-based and particle-based methods in scientific computing [5,242]. However, the complex geometries involved in industrial applications pose a critical challenge for generating high-quality particle distributions or meshes for arbitrarily complex geometry.
For particle-based methods, there are generally two approaches to generate the initial particle distribution, i.e., (a) initiating particles on a lattice structure and (b) generating particles on a volume element mesh. The first approach, positioning particles on a cubic lattice, is widely used in the particle-based methods community. For example, Dominguez et al. [243] proposed a pre-processing tool for the DualSPHysics library where particles are generated on a lattice structure and a three-dimensional object is represented by the particle model after excluding the outside particles. In this approach the particles are equally distributed; however, a very fine spatial resolution is needed to correctly portray a complex geometry. The second approach, generating particles at the centers of tetra- or hexahedral volume elements, has been widely used in bird-strike applications [244,245] and is provided by state-of-the-art commercial pre-processing tools. This approach can accurately portray a complex surface, but suffers from non-uniform particle spacing and volume, which may reduce the interpolation accuracy. Recently, the weighted Voronoi tessellation (WVT) method has been applied for generating initial particle distributions by Diehl et al. [246] for SPH astrophysical simulation and by Siemann and Ritt [247] for SPH modeling of bird strike. Also, Vela et al.
[248] proposed an algorithm for constructing complex initial density distributions with low noise. Concerning high-quality mesh generation, various techniques have also been developed, e.g. advancing front/layer methods [249,250], which initiate meshing from the boundary toward the domain interior, refinement-based Delaunay triangulation [251,252], which inserts new Steiner points into a Delaunay mesh, centroidal Voronoi tessellations (CVT) [253,254,255,256] and particle-based methods [257,258,259,260,261], in which a relaxation strategy based on the physical analogy between a simplex mesh and a truss structure is applied. Among them, the particle-based mesh generation method has been widely studied owing to its efficiency and versatility.
More recently, Fu et al. [120] and Zhu et al. [262] presented a novel application of particle relaxation in the SPH methodology for generating high-quality unstructured meshes and body-fitted particle distributions, respectively, for arbitrarily complex geometries. Ji et al. [263,264] further improved particle relaxation for mesh generation by exploiting a multi-phase algorithm and introducing a feature boundary correction term. The general procedure consists of three steps. First, the geometry surface is represented by the zero level-set function obtained by parsing the corresponding computer-aided design (CAD) file [262]. Second, several steps of particle relaxation are conducted by solving a set of physically-motivated model equations in the SPH methodology [120]; in this step, proper surface bounding techniques are applied to achieve the body-fitted feature of the particle distribution. Third, the set of neighboring particles generates a locally valid Voronoi diagram in the interior of the domain by using the Delaunay triangulation method [120]. Note that the third step is only required for mesh generation.

Surface representation
By parsing a CAD file, the geometry surface can be constructed and represented by the zero level-set of the signed-distance function Γ = {(x, y, z) | φ(x, y, z, t) = 0}.
Then, the normal direction N = (n_x, n_y, n_z)^T of the surface can be evaluated from N = ∇φ/|∇φ|. To discretize the level-set function, a Cartesian background mesh is generated in the whole computational domain. The level-set value φ equals the distance from the cell center to the geometry surface; the negative phase with φ < 0 is defined where the cell center is inside the geometry, and the positive phase with φ > 0 otherwise.
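A sketch of the normal evaluation on the background mesh, using central differences via `numpy.gradient`, is shown below; it is illustrative only and assumes φ is already sampled at the cell centers.

```python
import numpy as np

def levelset_normal(phi, h):
    """Unit normals N = grad(phi)/|grad(phi)| on a Cartesian grid
    with spacing h, approximated by central differences."""
    grads = np.gradient(phi, h)            # one array per axis
    g = np.stack(grads, axis=-1)           # (..., d) gradient vectors
    norm = np.linalg.norm(g, axis=-1, keepdims=True)
    return g / np.where(norm > 0, norm, 1.0)   # guard against flat regions
```

For a signed-distance field the gradient magnitude is close to one, so the normalization mainly corrects discretization error near curved parts of the surface.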

Particle relaxation
Starting from a preconditioned lattice or random particle distribution generated inside the geometry, a physics-driven relaxation process is introduced to define the particle evolution. The relaxation is governed by a momentum conservation equation with advection velocity v and damping coefficient η = 0.2hρ|v|. In Fu et al. [120], p_ij = (1/2)(p_i + p_j) and the particle pressure is defined by an equation of state which incorporates a target density field ρ_t and a reference pressure p_0 [120]. In Zhu et al. [262], a constant pressure p_ij = p_0 is applied, since a homogeneous particle distribution can be obtained by applying the transport-velocity formulation [109,127,137] with a constant density and background pressure. The particle evolution is then driven by the acceleration a^n = F^n_p induced by the pressure force. Note that only the instantaneous acceleration is considered for the evolution, and the particle velocity is set to zero at the beginning of each time step to achieve a fully stationary state, following Refs. [120,127,137]. For numerical stability, the time-step size ∆t is constrained by the body force criterion

∆t ≤ 0.25 (h/|dv/dt|)^{1/2}. (139)
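A sketch of the stationary-relaxation update and the time-step criterion follows; the quadratic position update with zeroed velocity follows the description above, and the function names are illustrative.

```python
import math

def relaxation_dt(h, acc_max):
    """Body-force time-step criterion of Eq. (139): dt <= 0.25*sqrt(h/|dv/dt|)."""
    return 0.25 * math.sqrt(h / max(acc_max, 1e-12))

def relax_step(x, a, dt):
    """One relaxation step: velocity is reset to zero at the start of each
    step, so positions advance only by the instantaneous acceleration."""
    return [xi + 0.5 * dt * dt * ai for xi, ai in zip(x, a)]
```

Resetting the velocity each step removes inertia from the system, so the particles settle into a stationary, homogeneous configuration instead of oscillating around it.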

Surface bounding method
To achieve the body-fitted feature, a suitable boundary condition treatment is required. In Ref. [120], a dynamic ghost-particle method enforcing symmetry conditions at all domain boundaries is adopted. However, it is challenging to construct ghost particles for complex geometries. Zhu et al. [262] proposed a simple surface particle bounding method. Specifically, the level-set value φ_i and normal direction N_i of each particle are first interpolated from the background mesh using trilinear interpolation. Then, the particle position is updated according to

x_i ← x_i − (φ_i + (1/2)∆x) N_i  if φ_i > −(1/2)∆x,

where ∆x denotes the initial particle spacing. Figure 17 illustrates the surface particle bounding. When a particle locates outside the geometry, it is enforced back onto the surface along the normal direction, and a body-fitted particle distribution is achieved accordingly. Note that the surface particles are relocated at φ = −(1/2)∆x rather than φ = 0, as the material interface is assumed to be located at φ = 0.
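The bounding rule can be sketched as below; this is an illustrative helper in which `phi` and `N` are assumed to be already interpolated at the particle position from the background mesh.

```python
def bound_to_surface(x, phi, N, dx):
    """Project a particle back toward the surface when it drifts outward:
    relocate it so that it sits at phi = -dx/2 (sketch of the bounding rule).

    x   : particle position (list of floats)
    phi : interpolated level-set value at x (positive outside the geometry)
    N   : unit outward normal at x
    dx  : initial particle spacing
    """
    if phi > -0.5 * dx:
        shift = phi + 0.5 * dx          # distance to the phi = -dx/2 shell
        return [xi - shift * ni for xi, ni in zip(x, N)]
    return list(x)                      # interior particles are untouched
```

Particles well inside the geometry are left untouched, so the projection only acts on the thin shell near the surface where the body-fitted distribution is formed.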
Figure 18 portrays the generated unstructured meshes for the tyra and gear bodies, and the particle models for an anatomical heart and a propeller. The particle relaxation evidently generates high-quality, globally optimized, adaptive isotropic meshes and well-regularized particle distributions for three-dimensional bodies with high geometric complexity.

Conclusion
In this paper, we present a concise review of the SPH method covering methodology development and recent achievements, with highlights including numerical algorithms for fluid dynamics, solid mechanics and FSI, and novel applications in mesh and particle generation. Fundamentals and theory of the SPH method are first summarized. Recent developments of the Riemann-based SPH method are presented, with key aspects being the Riemann solver with dissipation limiter and high-order data reconstruction with MUSCL, WENO and MOOD schemes. Techniques for particle neighbor searching and efficient update of the particle configuration are recalled. Concerning the total Lagrangian formulation, stabilized schemes, steady-state solution and hourglass control algorithms are reported. For FSI coupling, treatments of the FSI interface and discretization schemes in the multi-resolution scenario are surveyed. Last but not least, recent novel SPH applications in mesh and particle generation are reviewed. Abundant validations and benchmark tests demonstrating the computational accuracy, convergence, efficiency and stability are also supplied in this survey.

Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Figure 1 :
Figure 1: Construction of Riemann problem along the interacting line of particles i and j.

Figure 2 :
Figure 2: Simplified Riemann fan with two intermediate states.

Figure 3 :
Figure 3: Taylor-Green vortex flow with Re = 100: Decay of the kinetic energy by using Riemann-based SPH method with the Linearized Riemann solver with and without the dissipation limiter, and the traditional SPH method with artificial viscosity (α = 0.02).

Figure 4 :
Figure 4: Numerical modeling of free-surface flows by using the Riemann-based SPH method with linearized Riemann solver and dissipation limiter: Snapshots of particle distribution with pressure contour (left panel) and the time history of the numerical and experimental pressure signals (right panel) for dam-break (a) and sloshing (b) flows.

Figure 5 :
Figure 5: Sketch of multi-phase particles interacting with solid particles along the normal vector through the one-side Riemann problem.

Figure 7 :
Figure7: Numerical study of one-dimensional acoustic wave: the convergence of the density error as a function of particle resolution by using the Baseline scheme, MUSCL with the Sweby limiter and IS-WENO reconstructions.

Figure 8 :
Figure 8: Sketch of the Riemann-based SPH (top panel) and the one with MOOD posteriori limiting paradigm (bottom panel).

Figure 9 :
Figure 9: Sketch of the different approaches used to create the neighbour particle list: The cell-linked list (left panel) and the Verlet list (right panel) approaches.

Figure 10 :
Figure 10: Total Lagrangian SPH formulation for solid mechanics: (a) Velocity (left panel) and displacement (right panel) profiles for wave propagation in an elastic cable with different stabilized schemes.(b) Deformed configurations with von Mises stress contour (left panel) and the displacement profile at the free end (right panel) for a bending rubber-like cantilever.(c) The deformed configuration with von Mises stress contour of a C-shaped stent.

Figure 11 :
Figure 11: Numerical investigation of the block sliding along slope.The configuration (upper panel) and the time histories of the distance between the block center and the slope (a) and displacement of the block center in x− and y− directions (b).

Figure 12 :
Figure 12: Sketch of the boundary force particle scheme of Ref. [213] (left panel) and the CD-SBT scheme with repulsive and ghost particles [214] (right panel).

Figure 13 :
Figure 13: Sketch of the dummy particle scheme (left panel) and the one-sided Riemann-based scheme (right panel) for the treatment of the FSI interface.

Figure 14 :
Figure 14: Wave interaction with an OWSC investigated by SPHinXsys library: (a) Free surface and flap motion with fluid particles colored by the velocity magnitude (upper panel).(b) Validations of the flap rotation (bottom left panel) and wave load on the flap (bottom right panel).
(a) Flow-induced vibration of an elastic beam attached to a cylinder. (b) Dam-break flow through an elastic gate.

Figure 16 :
Figure 16: Numerical investigation of FSI problems with the SPHinXsys library in the multi-resolution scenario.(a) Flow-induced vibration of an elastic beam attached to a cylinder: snapshots of the vorticity field and the deformed beam configuration with von Mises contour (left panel) and comparisons of the amplitude and frequency of the beam oscillation with data in the literature.(b) Dam-break flow through an elastic gate: snapshots compared against experimental frames[13] (upper panel), vertical (bottom left panel) and horizontal (bottom right panel) displacements of the free end of the plate and their comparisons with experimental[13] and numerical[144] data.

Figure 17 :
Figure 17: An illustration of surface particles bounding.

Figure 18 :
Figure 18: Mesh and particle generation for three-dimensional bodies with complex geometry.(a) Delaunay triangulation of the unstructured mesh for the tyra and gear bodies [264].(b) Particle distributions for an anatomical heart and a propeller [262].