in Maritime and Offshore Applications

The development of supercomputing technologies has enabled a shift towards high-fidelity simulations that are used to complement physical modelling. At the Technology Centre for Offshore and Marine, Singapore (TCOMS), such simulations are used for high-resolution investigations into particular aspects of fluid-structure interactions, in order to better understand and thereby predict the generation of important flow features and the complex hydrodynamic interactions between components onboard ships and floating structures. In addition, by building on the outputs of such simulations, data-driven models of actual physical systems are being developed, which in turn can be used as digital twins for real-time predictions of behaviour and responses under complex real-world environmental loads. In this paper, examples of these high-resolution investigations, as well as the development of digital twins, are described and discussed.


Introduction
The maritime and offshore industries are transforming to improve efficiency, safety and sustainability. With the advancement of sensing, computational and communication technologies, engineers are now able to gain better insights into how ships and offshore structures respond when subjected to environmental loads. This enables better risk management and reduces downtime. The ability to carry out full-order simulations of fluid-structure interactions with higher fidelity also enables us to attain deeper insights into the complex flow processes that may be present in scaled model tests. Concurrently, there is a push towards harnessing data in order to evolve digital twins of physical systems that can be used for the performance prediction of assets such as ships and offshore structures, as well as systems such as TCOMS' deepwater ocean basin (DOB) facility.
In this paper, preliminary efforts to develop high-fidelity numerical models using the Harmonic Polynomial Cell (HPC) method are described. The first example is the creation of a digital twin of TCOMS' deepwater ocean basin facility. The movement of the wave paddles, the generated wave components, as well as the fully-nonlinear interactions between those wave components, are modelled and simulated. High-fidelity simulations are carried out to investigate the potential onset of small-scale flow phenomena that may arise from the wave paddle mechanisms and their influence on the quality of the generated waves. The second example discusses the application of parallel computations to evaluating the hydrodynamics of autonomous vessels. The objective is to develop a digital twin of the vessel for testing and implementing remotely-operated and autonomous navigation control systems and algorithms. These two streams of work will subsequently be integrated into TCOMS' cyber-physical modelling framework and coupled with physical model tests of marine structures. This is expected to generate new insights into the behaviour of marine systems in real operating environments.

Digital Twin of a Large-Scale Wave Basin
Ships and offshore structures need to be designed for high sea states where the predominant forcing is due to wave loading. Mooring line failures arising from large-amplitude slow drift motions, under-deck slamming due to vanishing air gap, and wave overtopping are possible scenarios that may require investigation. Due to the large displaced volume of offshore floating structures, the wave loads are dominated by inertial effects, with viscous processes playing a secondary role. It is thus reasonable to use a physical wave basin facility to carry out scaled-down experiments based on Froude similarity to estimate the hydrodynamic loads and responses of these structures.
Within the context of linear potential flow theory, the boundary element method (BEM) in the frequency domain has been widely applied; it is the most efficient approach because, through the use of Green's identity, the unknowns are distributed only over the mean wetted hull surface. For fully-nonlinear free-surface flow problems, however, the computational time and memory required by a time-domain BEM grow rapidly with the number of unknowns because the coefficient matrix is full. [11] argue that a field solver based on the finite element method (FEM) is faster than the BEM for solving the wave-making problem because a sparse matrix is involved in the solution. The conventional BEM involves quadratic memory usage, O(N²), and requires O(N²) operations for an iterative solver or O(N³) operations if a direct method is used, where N is the number of unknowns. Large-scale storage and inefficient computation are thus considered the bottlenecks of the conventional BEM.
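The scaling argument above can be made concrete with a back-of-envelope calculation. The Python sketch below compares the memory footprint of a dense BEM-style coefficient matrix with that of a sparse matrix holding roughly 27 non-zeros per row (the bandwidth of the HPC method discussed later); the one-million-unknown problem size is a hypothetical example, not a figure from this paper.

```python
# Back-of-envelope comparison of memory footprints for a dense BEM matrix
# versus a sparse HPC-style matrix (~27 non-zeros per row). Illustrative
# only; the problem size below is a hypothetical example.

def dense_bytes(n):
    """Full n x n matrix of 8-byte floats, as in the conventional BEM."""
    return n * n * 8

def sparse_bytes(n, nnz_per_row=27):
    """Non-zero values only (8 bytes each), ignoring index storage."""
    return n * nnz_per_row * 8

n = 1_000_000  # one million unknowns (hypothetical)
print(f"dense : {dense_bytes(n) / 1e12:.1f} TB")   # 8.0 TB
print(f"sparse: {sparse_bytes(n) / 1e9:.3f} GB")   # 0.216 GB
```

The quadratic-versus-linear gap is what makes field methods such as the FEM or the HPC method attractive for large fully-nonlinear simulations.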
In order to enhance the investigation of the aforementioned hydrodynamic phenomena and facilitate a better understanding of the underlying flow physics, a high-fidelity numerical wave tank has been developed to augment the physical deepwater ocean basin (DOB) in TCOMS. The numerical wave tank has the same dimensions (60 m × 48 m × 12 m) as the DOB and is similarly equipped with numerical representations of hinged-flap wave paddles on two sides. This allows us to simulate wave generation, as well as wave propagation across the domain of the DOB. A potential-flow based Harmonic Polynomial Cell (HPC) method, a numerical solver with accuracy higher than third order, is used [10].
A three-dimensional Cartesian coordinate system Oxyz is defined with the Oxy plane coinciding with the undisturbed free surface and the Oz axis pointing positively upwards. The fluid domain is discretised into overlapping hexahedral cells, each with 26 grid points. The velocity potential within each cell is represented as a linear combination of N harmonic polynomials:

φ(X, Y, Z) = Σ_{j=1}^{N} b_j P_j(X, Y, Z),    (1)

where P_j(X, Y, Z), j = 1, 2, ..., N, are harmonic polynomials associated with the Legendre polynomials in a spherical coordinate system, and X, Y and Z are local coordinates relative to the stencil centre. Because the harmonic polynomials satisfy the Laplace equation identically, there is no need to impose the Laplace equation separately. By imposing Eq. (1) at the 26 stencil points, one obtains a linear equation system of the form

{φ} = [D]{b}.    (2)

Here, N is not necessarily equal to 26; if N < 26, a least-squares fit can be used. Inverting Eq. (2) gives the vector {b}:

b_i = Σ_{j=1}^{26} c_{i,j} φ_j,    (3)

where c_{i,j} are elements of the matrix [D]⁻¹. Substituting Eq. (3) into Eq. (1) gives

φ(X, Y, Z) = Σ_{j=1}^{26} ( Σ_{i=1}^{N} c_{i,j} P_i(X, Y, Z) ) φ_j.    (4)

Equation (4) indicates that the velocity potential at any point in the cell can be interpolated from the velocity potentials at the surrounding nodes of the cell. By setting X = 0, Y = 0 and Z = 0, one obtains the continuity equation at the stencil centre:

φ(0, 0, 0) = Σ_{j=1}^{26} ( Σ_{i=1}^{N} c_{i,j} P_i(0, 0, 0) ) φ_j.    (5)

On the solid boundaries, the Neumann-type boundary condition, which prescribes the normal derivative of the potential, is implemented by directly differentiating the harmonic polynomials:

∂φ/∂n = Σ_{j=1}^{26} ( Σ_{i=1}^{N} c_{i,j} ∂P_i/∂n ) φ_j.    (6)

The HPC method thus yields a sparse coefficient matrix with a maximum bandwidth of 27. On the free surface, the kinematic and dynamic free-surface boundary conditions are satisfied:

∂E/∂t + ∂φ/∂x ∂E/∂x + ∂φ/∂y ∂E/∂y = ∂φ/∂z   on z = E(x, y, t),    (7a)
∂φ/∂t + (1/2)|∇φ|² + gE = 0   on z = E(x, y, t),    (7b)

where E(x, y, t) represents the free-surface elevation and g is the gravitational acceleration.
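The local-cell construction of Eqs. (1)-(4) can be sketched in a few lines of Python. The example below fits a least-squares combination of the nine harmonic polynomials up to second order (N = 9 < 26, so the least-squares branch applies) to values of a known harmonic field at the 26 stencil points of a unit-cube cell, then interpolates the potential at the cell centre. The stencil geometry and test field are illustrative assumptions, not the actual DOB discretisation.

```python
import numpy as np

# Minimal sketch of the HPC local-cell interpolation (cf. Eqs. (1)-(4)):
# the velocity potential inside a cell is a least-squares fit of harmonic
# polynomials to the potential values at the 26 stencil points.  Only the
# nine harmonic polynomials up to second order are used for brevity, and
# the stencil and test field below are hypothetical.

def harmonic_polys(X, Y, Z):
    """First nine 3-D harmonic polynomials (each satisfies Laplace's eq.)."""
    return np.array([np.ones_like(X), X, Y, Z,
                     X*Y, X*Z, Y*Z, X**2 - Y**2, Y**2 - Z**2])

# The 26 stencil points of a unit-cube cell (all neighbours of the centre).
pts = np.array([(i, j, k) for i in (-1, 0, 1) for j in (-1, 0, 1)
                for k in (-1, 0, 1) if (i, j, k) != (0, 0, 0)], dtype=float)

phi_exact = lambda X, Y, Z: 2.0 + X - 3.0*Z + X*Y   # harmonic test field
phi_stencil = phi_exact(*pts.T)

D = harmonic_polys(*pts.T).T                         # 26 x 9 basis matrix
b, *_ = np.linalg.lstsq(D, phi_stencil, rcond=None)  # least-squares {b}

# Interpolate at the cell centre (the local origin), as in Eq. (4).
phi_centre = harmonic_polys(np.array(0.0), np.array(0.0), np.array(0.0)) @ b
print(round(float(phi_centre), 6))   # exact value of the test field is 2.0
```

Because the test field is itself a combination of the basis polynomials, the least-squares fit recovers it exactly; in the real solver the same interpolation supplies the continuity and boundary equations at every stencil.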
In the time-domain HPC method, the Dirichlet-type condition is satisfied by prescribing the velocity potential and elevation on the free surface. A semi-Lagrangian scheme is used to track the free surface. The explicit fourth-order Runge-Kutta scheme is used to integrate the boundary conditions, Eqs. (7a) and (7b), updating the potential and elevation of the free surface at each time step. To ensure stability, a Savitzky-Golay filter [9] is used to remove possible sawtooth waves.
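The two ingredients of this time-stepping scheme can be illustrated as follows: a generic explicit fourth-order Runge-Kutta step, applied here to a toy surrogate for the free-surface evolution equations, and a Savitzky-Golay smoothing pass of the kind used to suppress sawtooth waves. The right-hand side, grid and filter settings are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import savgol_filter

# Sketch of the two time-stepping ingredients described above: an explicit
# RK4 step (Eqs. (7a)-(7b) are replaced by a toy decay equation) and a
# Savitzky-Golay filter that removes node-to-node sawtooth noise.

def rk4_step(f, y, t, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt/2, y + dt/2 * k1)
    k3 = f(t + dt/2, y + dt/2 * k2)
    k4 = f(t + dt, y + dt * k3)
    return y + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

# Toy surrogate: dE/dt = -E, whose exact solution is E0 * exp(-t).
E = np.full(64, 1.0)
for n in range(100):
    E = rk4_step(lambda t, y: -y, E, n * 0.01, 0.01)
print(round(float(E[0]), 6))   # close to exp(-1) ≈ 0.367879

# Sawtooth removal: a 2nd-order Savitzky-Golay filter over 5-point windows
# damps a node-to-node zig-zag while leaving the long wave nearly intact.
x = np.linspace(0, 2*np.pi, 64)
eta = np.sin(x) + 0.05 * (-1.0)**np.arange(64)   # wave + sawtooth noise
eta_smooth = savgol_filter(eta, window_length=5, polyorder=2)
```

The filter attenuates the grid-frequency sawtooth strongly while the underlying wave, which is well resolved by the 5-point window, passes through almost unchanged.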
In contrast to industry-standard Computational Fluid Dynamics (CFD) solvers, such as Star-CCM+, Fluent and OpenFOAM, which solve the Navier-Stokes equations and the Poisson equation for pressure, the present HPC method has only one unknown (the velocity potential) per node, compared to four unknowns (three velocity components and the pressure) in the CFD solvers. The present HPC solver is therefore relatively efficient.
To give an example, Fig. 1 shows a snapshot of a wave field with a heading angle of 45° in a rectangular wave basin with dimensions of 30 m × 30 m × 4 m. The same numerical domain was used to investigate the physics of spurious waves generated by a row of wave paddles; for more details, see [5]. In this numerical example, there are roughly 1.6 × 10⁷ unknowns, and the number of non-zero elements in the sparse matrix reaches 4.24 × 10⁸. In order to capture as much of the important flow physics as possible, paddle movements are accounted for; this requires updating the meshes attached to the paddles at every time step. In the numerical implementation of the HPC method, the local coefficient matrices of dimension 26 × 26 were solved using the Linpack library, and the global sparse matrix was solved with the GMRES solver in the Portable, Extensible Toolkit for Scientific Computation (PETSc) library [1]. The computation was conducted on the National Supercomputing Centre (NSCC) platform with Intel(R) Xeon(R) E5-2690 v3 @ 2.60 GHz CPUs, using a total of 5 nodes at 24 cores per node. Each time step takes 23 s to run, an efficiency that is acceptable for the present application using the NSCC facilities. Nevertheless, the computational load for the numerical DOB is still non-trivial due to the large size of the domain being modelled. The method shows a great ability to resolve small-scale wave features, which is important for understanding the remaining uncertainties inherent in the simulations.
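The global solve can be sketched with SciPy's GMRES standing in for the PETSc solver used on NSCC. The diagonally dominant banded matrix below is a hypothetical stand-in for the HPC coefficient matrix; only the sparse storage and the iterative solve are the point of the example.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Sketch of the global sparse solve: the HPC discretisation yields a
# banded sparse system, handed here to SciPy's GMRES as a stand-in for
# the PETSc GMRES solver used on NSCC.  The tridiagonal, diagonally
# dominant matrix below is a hypothetical example, not the actual HPC
# coefficient matrix (which has bandwidth 27).

n = 2000
A = diags([-1.0, 4.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = gmres(A, b, atol=1e-8)       # info == 0 signals convergence
residual = float(np.linalg.norm(A @ x - b))
print(info, residual < 1e-2)
```

In the production solver the same pattern holds at far larger scale: the matrix stays sparse, so memory and per-iteration cost grow only linearly with the number of unknowns.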

Investigation of Gusset Effect in Between Wave Paddles
The DOB uses a dry-back wave paddle system, where gussets are used between the individual paddles to prevent water from entering the rear side of the paddles. Given that there will be a small amount of water trapped in the groove formed by the gusset, it is of interest to investigate whether there are any 'jetting' effects arising from the movement of the paddles as they generate waves, and whether there are higher harmonic wave components arising from the gap between paddles, which may affect the quality of the underlying generated waves.
CFD analysis is used to investigate this phenomenon and to quantify whether the ripples or 'jetting' effects could affect the shape of the underlying generated waves downstream of the wavemaker. Figure 2(a) shows the initial set-up of two wave paddles and the layout of a gusset between two adjacent paddles and the ripple strips. Figure 2(b) illustrates the shape and the different parts of the gusset. The gusset geometry is simplified for the CFD simulations: bolts, nuts, washers and other joining mechanisms are removed, and the geometry of the rubber bladder is modelled using a B-spline curve fitted to the manufacturer's requirements. The gap is intended to be as small as possible, and in the manufactured geometry there is an overlap in the ripple strip. During operation, however, the forces acting on the flexible ripple strips deform them and open up a gap.
In these simulations, the gap is modelled at a constant width of 5 mm. The wave paddles are modelled with mesh morphing using the B-Spline morphing method [4]. The paddle motion is generated using a first-order (linear) wave paddle signal.
The simulation solves the Reynolds-Averaged Navier-Stokes equations using the concept of a turbulent eddy viscosity prescribed by the Boussinesq approximation. This is achieved via the k-ω SST model, by solving the transport equations of the turbulent kinetic energy, k, and the specific dissipation rate, ω [7]. The air-water interface is captured using a Volume of Fluid (VoF) method, in which the distribution of the phases and the position of the interface are described by the fields of phase volume fraction. A high-resolution interface capturing (HRIC) scheme is used to model the convective transport of immiscible fluid components such as air and water [8].
The computational requirements to simulate the gusset effects are extensive, due to the following considerations:
- a very fine mesh is required in and around the gusset and gaps to resolve the local flow features;
- a fine mesh is required at the free surface to resolve the higher harmonic wave components;
- a moving mesh (rigid body motion) is required to simulate the paddle flapping motion; and
- high-fidelity spatial and temporal discretisation is needed.
For the cases presented in this paper, the computational mesh comprises 14.4 million cells. The simulation was run on National Supercomputing Centre (NSCC) compute nodes equipped with Intel(R) Xeon(R) E5-2690 v3 2.60 GHz CPUs, using a total of 5 nodes at 24 cores per node. This amounted to a total of 120 CPU cores running the simulation. Each time step of 0.001 s, with 5 inner iterations for the pressure-velocity coupling, took approximately 3.6 s of compute time, and the simulation was run for 30 s of simulated time.
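The wall-clock figures quoted above imply the following back-of-envelope cost, sketched in Python (all inputs are taken directly from the text):

```python
# Back-of-envelope check of the computational cost implied by the figures
# above: 30 s of simulated time at a 0.001 s time step, ~3.6 s of wall
# time per step, on 120 cores.

dt, t_sim, wall_per_step, cores = 0.001, 30.0, 3.6, 120

steps = round(t_sim / dt)
wall_hours = steps * wall_per_step / 3600
core_hours = wall_hours * cores
print(f"{steps} steps, {wall_hours:.1f} h wall time, {core_hours:.0f} core-hours")
```

That is, each 30 s run of the gusset simulation corresponds to roughly 30 h of wall time and 3600 core-hours on the NSCC nodes.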
Preliminary results from a simulation of a row of 6 paddles with 5 gussets in between the paddles are shown in Fig. 3. It is observed that the combined effect of the gusset and ripple strips produces scattering (ripples) of higher harmonic wave components, similar to Huygens' principle.
Closer examination of the higher harmonic wave components is conducted by looking at the simulated surface elevation (wave) time series at several locations along the centreline of the computational domain. Figure 4 shows the results at three locations (0.05 m, 0.1 m and 0.7 m downstream of the wave paddles). It can be observed that the higher harmonic wave components decay rapidly within 0.7 m of advection along the tank. Therefore, it can be concluded that the presence of such higher harmonic waves does not alter the form of the underlying primary waves downstream of the wave paddles. It is important to note that high-resolution computations are required to adequately resolve the fine flow details of interest; the computational power afforded by NSCC is therefore essential to achieving this outcome.

Digital Twin of Marine Vessel for Remote and Autonomous Navigation
The maritime industry is evolving quickly towards remotely controlled and autonomous vessels for more reliable and sustainable missions. This transformation is driven partly by the digitalisation trend, involving sensing, big data and deep learning analytics, and partly by the need to reduce the operating cost of manning a vessel. The maritime autonomous surface ship (MASS) technologies adopted to date typically rely on situational awareness and predictive analytics methods in relatively calm sea states. One of TCOMS' missions is to carry out further investigations and gain a deeper understanding of MASS hydrodynamic responses in challenging sea states, essentially the manoeuvrability of vessels under the disturbance of wind, waves and currents. This enables the development of hydrodynamic digital twins for smart vessels that provide accurate projections of their future states, thereby improving the effectiveness of steering actions, particularly for route planning and collision avoidance.
In order to capture the vessel's seakeeping and manoeuvring behaviour, and the non-linear interactions between hull, propellers, rudder and environmental loads, we adopted unsteady RANS-based CFD computations, applying the overset grid technique to solve for the fluid flow pressure distribution, the resulting forces, and hence the 6-DoF global motions of the vessel. The conservation equations of mass and momentum are discretised using the Finite Volume Method (FVM). A significant number of grid points (6.0 M-15.0 M) is often required to resolve the complex physical shape of the ship hull and its appendages, as well as the sharp interface between water and air, over a sufficiently large volume. Such simulations can only be carried out by parallel computations with hundreds of CPU cores under the MPI communication protocol, which are accessible through NSCC. The CFD computations in this section share the common assumption of an incompressible Newtonian fluid.
The conservation equations for mass, momentum and energy are used in their integral form as the mathematical basis. The fluid is regarded as a continuum, which assumes that the matter is continuously distributed in space. The concept of a continuum enables us to define velocity, pressure, temperature, density and other important quantities as continuous functions of space and time. When solving high Reynolds number problems, turbulence has to be considered; to fully resolve the turbulent flow physics with Direct Numerical Simulation (DNS), grid sizes close to the Kolmogorov scale (micrometres) would be necessary. This is unrealistic for ship hydrodynamics applications, where the domain size is typically hundreds of metres. Therefore, the Reynolds-averaged Navier-Stokes (RANS) approach is introduced to simplify the calculation of turbulence quantities. The generic transport equation of the RANS model can be written in the following integral form [3]:

d/dt ∫_V ρφ dV + ∫_S ρφ v · n dS = ∫_S Γ_φ ∇φ · n dS + ∫_S q_{φS} · n dS + ∫_V q_{φV} dV,    (8)

where φ stands for the transported variable, such as a velocity component, Γ_φ is the diffusion coefficient, and q_{φS} and q_{φV} stand for the surface exchange terms and volume sources, respectively. The momentum and energy equations can also be written in discrete form to facilitate the numerical solution. The closure of the transport equation is provided by the k-ω SST turbulence model [7]. The transported variable in Eq. 8 can be replaced by the turbulence kinetic energy k and the specific dissipation rate ω, with the corresponding terms listed in Table 1. Details of the closure coefficients and the auxiliary relations can be found in the above-mentioned literature.
The FVM is applied to obtain the numerical solution of the transport equations. The solution domain is discretised by an unstructured mesh composed of a finite number of contiguous control volumes (CVs), or cells. Each control volume is bounded by a number of cell faces, which together compose the CV surface, and the computational points are placed at the centre of each control volume. The discretisation of each particular term in Eq. 8 is summarised in Table 2.
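As a minimal illustration of the FVM treatment of Eq. 8, the Python sketch below discretises a purely diffusive, source-free 1-D transport equation into control volumes that exchange face fluxes. The grid, diffusion coefficient and time step are hypothetical illustrative values, far simpler than the unstructured 3-D meshes used in practice.

```python
import numpy as np

# Minimal 1-D finite-volume sketch of the generic transport equation
# (cf. Eq. 8) for a purely diffusive, source-free variable phi: the
# domain is split into control volumes, and diffusive face fluxes
# Gamma * dphi/dx are exchanged between neighbouring cells.  All values
# below are hypothetical illustrative choices.

n, L, gamma, dt = 50, 1.0, 1.0, 1e-5
dx = L / n
phi = np.sin(np.pi * (np.arange(n) + 0.5) * dx)   # initial cell-centre field

for _ in range(1000):
    flux = gamma * np.diff(phi) / dx        # diffusive flux at interior faces
    phi[1:-1] += dt / dx * np.diff(flux)    # net flux into each interior CV
    phi[0] = phi[-1] = 0.0                  # crude node-based Dirichlet BCs

# Diffusion gradually damps the sine mode (roughly as exp(-gamma*pi^2*t)).
print(round(float(phi.max()), 4))
```

The explicit update above is stable because dt·γ/dx² = 0.025 is well below the diffusive limit of 0.5; the production solvers instead use implicit schemes and the pressure-velocity coupling described below.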
It is noted that the pressure does not feature in the continuity equation of an incompressible fluid, which therefore cannot be used directly as an equation for pressure. A possible way around this problem is to solve the momentum and continuity equations simultaneously in a coupled manner. The strategy adopted in most of our unsteady computations to resolve the pressure-velocity coupling is based on the PIMPLE algorithm, a combination of the Pressure Implicit with Splitting of Operator (PISO) and Semi-Implicit Method for Pressure-Linked Equations (SIMPLE) algorithms [2]. The computation of the pressure-velocity coupling normally takes a significant amount of CPU time, especially for cases with large numbers of grid cells. The computations carried out under this part of the scope focus on the evaluation of the manoeuvring and seakeeping performance of the MASS. One of the most computationally intensive cases is the computation of free-running dynamic manoeuvres under self-propelled conditions. The simulation mesh normally consists of several layers and hierarchies of overset grids, to resolve the motions of the propellers or thrusters while capturing the ship dynamics. A typical case, as demonstrated in Fig. 5, consists of around 10.0 M grid cells. The time step for the computation has to be sufficiently small to resolve the rotational motion of the propeller blades. The pressure over the ship's hull and its appendages is integrated into a force and moment vector and fed into the 6-DoF motion equations of the vessel with respect to its centre of gravity. The solved ship motions are passed on to the overset region of the computational domain at each time step, followed by hole-cutting and flow interpolation between the reconstructed stencils. The MASS self-propulsion case in Fig. 5 utilised 600 CPU cores on NSCC and was computed for 120 h to produce a simulation of vessel motions lasting 60 s.
The simulation provides a full-order prediction of the vessel's states (motions and velocities) over time when it is undertaking prescribed steering actions. System identification techniques such as the Support Vector Machine [6] and the Extended Kalman Filter [12] are intended to be applied to the full-order CFD results to derive the mathematical model of the MASS. The computations carried out also enable us to gain a deeper understanding of the MASS' hydrodynamics under environmental disturbances. Figure 6 shows one of the cases we carried out to investigate the seakeeping behaviour of our MASS design, with the conceptual MASS advancing in regular waves. The focus here is to evaluate the second-order mean drift force in the surge direction, which is also known as the wave added resistance. Both model-scale and full-scale computations are carried out in order to minimise uncertainties due to scale effects. The numerical case consists of 15.0 M grid cells and requires 600 CPU cores running for 5-7 days for each scenario.
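As an illustration of the system-identification step, the Python sketch below fits a first-order Nomoto steering model, T r' + r = K δ, to simulated yaw-rate data by ordinary least squares. This is a deliberately simplified stand-in for the SVM/EKF identification mentioned above; the parameters K_true and T_true and the rudder signal are hypothetical.

```python
import numpy as np

# Sketch of system identification on (here, synthetic) vessel response
# data: fit a first-order Nomoto model  T * r' + r = K * delta  relating
# rudder angle delta to yaw rate r.  K_true, T_true and the rudder input
# are hypothetical values, not results from the MASS simulations.

K_true, T_true, dt = 0.2, 10.0, 0.1
t = np.arange(0, 200, dt)
delta = 0.3 * np.sin(0.05 * t)            # rudder angle input [rad]

r = np.zeros_like(t)                      # simulate the "true" yaw rate
for i in range(len(t) - 1):
    r[i+1] = r[i] + dt * (K_true * delta[i] - r[i]) / T_true

# Regress r' = (K/T) * delta - (1/T) * r to recover the two parameters.
r_dot = np.gradient(r, dt)
A = np.column_stack([delta, -r])
a, b = np.linalg.lstsq(A, r_dot, rcond=None)[0]   # a = K/T, b = 1/T
T_est, K_est = 1.0 / b, a / b
print(round(float(K_est), 3), round(float(T_est), 2))
```

With noise-free data the recovered K and T match the true values closely; applied to full-order CFD output, the same idea yields a fast low-order manoeuvring model suitable for real-time digital-twin prediction.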
Another aspect that is vital for the development of the MASS hydrodynamic digital twin is the mathematical model of the azimuth thrusters, which is represented as torque and thrust curves under a variety of inflow conditions. Figure 7 presents our attempts at quantifying the thruster performance through RANS-based CFD simulations. A typical CFD computation here consists of 4.0 to 5.5 M grid cells and normally runs on 120 CPU cores for approximately 24 h of wall time. More parametric studies will be carried out in the near future to capture the more complex flow physics involved in hull-thruster and thruster-thruster interactions. Studies on how to optimise the grid and domain discretisation while retaining the required accuracy of the solutions are ongoing, to ease the computational burden as much as possible.

Concluding Remarks
In this paper, we have provided examples of how high-fidelity simulations of fluid-structure interactions are used to investigate the complex wave generation and interactions in the TCOMS DOB, the small-scale 'jetting' effects and the possible generation of undesirable higher-harmonic waves. The former example can be considered an initial effort towards the creation of a digital twin of the wave basin facility, complemented by the deeper understanding of local flow phenomena provided by the latter. These efforts, together with other research and development work being undertaken at TCOMS, will pave the way towards the development of coupled numerical-physical modelling capabilities.
We have also described the digital twinning of the MASS through the use of full-order CFD simulations, in which the seakeeping and manoeuvring characteristics of the vessel, as well as the propulsive performance of the thrusters, are captured. This effort will be extended in the coming months to include parametric studies that account for the interactions between the two thrusters, as well as between the thrusters and the ship hull. Outputs from the parametric studies and the simulations of the full vessel model will subsequently be used to evolve data-driven models that will enable real-time predictions of how the MASS will behave and respond under various control inputs and environmental conditions. This is necessary for the development of autonomous navigation systems that are able to accurately steer the vessel along the planned route, under the influence of environmental loads and in tight operational scenarios.