Fine-scale reconstruction (VIC#) by implementing additional constraints and a coarse-grid approximation into VIC+

This study proposes a method that complements Vortex-in-Cell plus (VIC+) (Schneiders and Scarano, Exp Fluids 57:139, 2016), a data assimilation technique that reconstructs a dense flow field from sparse particle tracks. Here, the focus is on the treatment of boundary conditions. In the VIC+ method, the choice of boundary conditions significantly affects a large part of the inner domain through their role as Dirichlet boundary conditions of the Poisson equations. By nature, particle tracks exist only on one side of the boundaries, and often, due to experimental limitations, the track density is low close to the boundaries. This lack of data near the boundaries leads to a poor iterative update of the boundary conditions for VIC+. Overall, the VIC+ method tends to be sensitive to the specific choice of the initial conditions, both in the inner domain and on the boundaries. Without prior flow information, a large padded volume has been proposed to achieve stable and reliable convergence, at the cost of a large number of additional unknowns that need to be optimized. The present method pursues the following concepts to resolve the above issues: use of the smallest possible padding size, reconstruction starting with "all-zero" initial conditions, and progressive correction of the boundary conditions by considering the continuity law and the Navier–Stokes equation. These physical laws are incorporated as additional terms in the cost function, which so far contained only the disparity between the PTV measurement and the VIC+ reconstruction. Here, the Navier–Stokes equation allows an instantaneous pressure field to be optimized simultaneously with the velocity and acceleration fields. Moreover, the scale parameters in VIC+ are redefined to be computed directly from the PTV measurement instead of from the initial condition, and new scaling factors for the additional cost function terms are introduced.
A coarse-grid approximation is employed to both improve reconstruction stability and save computation time. Its low-resolution result serves as the initial condition for a subsequent finer-grid reconstruction, while the interrogation volume slightly shrinks. A numerical assessment is conducted using synthetic PTV data generated from the direct numerical simulation data of forced isotropic turbulence from the Johns Hopkins Turbulence Database. Improved reconstructions, especially near the volume boundary, are achieved while the virtues of VIC+ are preserved. As an experimental assessment, existing data from a time-resolved water jet is processed. Two reconstruction domains with different sizes are considered to compare the boundary of the smaller domain with the inside of the larger one. Visible enhancements near the boundary of the smaller domain are observed for this new approach in time-varying flow fields despite the limited input from PTV data.


Introduction
Particle image velocimetry (PIV) using multiple cameras is the most favorable technique for obtaining three-dimensional (3D) flow information in a non-intrusive manner, and nowadays its applications have expanded to measurements of time-resolved flow information. There are two fundamental ways of extracting flow information from temporally successive 3D particle fields: cross-correlation-based methods and particle tracking methods. Cross-correlation provides a spatially filtered flow field on a Cartesian grid, whereas particle tracking velocimetry (PTV) yields flow information at individual particle positions with a higher spatial resolution, although a relatively higher uncertainty is inevitable (Kähler et al. 2012a; 2012b). PTV is, therefore, preferred when the flow contains high shear, such as a turbulent boundary layer or complex vortical structures. The recently introduced Shake-the-Box technique (STB, Schanz et al. 2013) achieves accurate and ghostless 4D reconstruction of Lagrangian tracks from densely seeded flows using advanced iterative particle reconstruction (IPR, Wieneke 2012) and has demonstrated improved spatial resolution and usefulness for visualizing complex flow phenomena.
PIV results on a Cartesian grid readily provide information at an arbitrary point using simple interpolation models, whereas PTV data requires an intermediate processing step to obtain such information. Several methods have been proposed in the literature for converting PTV data onto a regular grid. The most widely used is the adaptive Gaussian windowing technique (AGW, Agüí and Jiménez 1987). AGW adaptively changes its kernel size to deal with imbalanced local particle concentration and thus provides a fully-filled flow field. However, its results suffer from a spatial filtering effect similar to that of cross-correlation. Reconstructing an unstructured grid using Voronoi tessellation and Delaunay triangulation (Neeteson and Rival 2015) enables discrete vector calculus operations for further analyses, such as pressure evaluation, without requiring information on a Cartesian grid. Unlike the above methods, which process PTV data arithmetically, data assimilation-based methods optimize a flow field on a Cartesian grid based on physical laws in a least-squares sense. Recently, two pioneering methods, 'FlowFit' (Gesemann 2015; Gesemann et al. 2016) and 'Vortex-in-Cell plus' (VIC+, Schneiders and Scarano 2016), have been introduced. They have two notable benefits, i.e., high spatial resolution and physically reasonable flow variables. A drawback is the considerable amount of computational resources required due to O(n log n) complexity, whereas the arithmetic methods have O(n) complexity; this is further compounded by the large number of unknowns arising from a denser grid scheme.
FlowFit and VIC+ share the same context but differ in their basis functions, their ways of considering the physical laws, and their pressure field evaluation. FlowFit optimizes a set of B-spline weights, and VIC+ optimizes radial basis function (RBF, Casa and Krueger 2013) weights. Once the basis functions are determined for each grid point in both methods, their linear combination provides continuous flow variables and derivatives on and between the grid points. Regarding how the governing equations are treated, FlowFit explicitly penalizes deviations from continuity by a corresponding term in the cost function, whereas VIC+ reconstructs velocity and acceleration fields through the Poisson equations, where continuity is inherently achieved (Christiansen 1973; Schneiders et al. 2014), by utilizing the vorticity transport equation based on the inviscid Navier–Stokes equation. For the pressure field evaluation, FlowFit optimizes both velocity and pressure fields, to which an acceleration field is subordinated. On the contrary, VIC+ evaluates the pressure field using the Navier–Stokes equation (van Oudheusden 2013) after its optimization is completed. Despite the different aspects stated above, subsequent research using either FlowFit or VIC+ has successfully achieved significantly improved spatial resolution of turbulent structures.
Since FlowFit and VIC+ have different procedures for reconstructing the flow field, one can expect that their computational residuals, which quantify biases in the continuity equation and the Navier–Stokes equation, have different distribution trends. Even though the recently conducted comparative assessment within the European NIOPLEX consortium (van Gent et al. 2017) has dealt with reconstruction quality and errors, the physical validity of the reconstruction itself has not been assessed yet. However, the residual distributions can be predicted based on how the disparity between measurement and reconstruction contributes to the optimization. Let us assume that the spatial resolution of the computational domain is high enough to resolve small-scale structures. Since FlowFit corrects velocity and pressure fields directly using the physical laws, the corresponding residuals might be evenly distributed over the computational domain. By contrast, VIC+ optimizes a complete vorticity field but velocity and acceleration only at the boundaries; complete velocity and acceleration fields are subsequently reconstructed. Therefore, the residuals tend to be quite large close to the boundary. Consequently, expanding the computational domain by padding in all boundary-normal directions is recommended (Schneiders et al. 2016).
There are two important practical preconditions for obtaining optimal VIC+ results. First, as mentioned above, the computational domain needs to be expanded considerably with padding, yielding a significant increase in the computational cost. Second, computation of the initial condition on the Cartesian grid is a laborious task because the initial condition needs to be 'smooth' enough to avoid slow convergence or trapping at a weak local minimum. The present study introduces and investigates four ideas to avoid those issues:
1. Smallest padding size for expanding the computational domain
2. Initial condition with zeros
3. Additional constraints based on physical laws
4. Coarse-grid approximation for smooth and fast convergence
The first and second ideas address the mentioned preconditions, and the others compensate for the resulting side effects. In order to avoid the possibility that the boundary condition is erroneously corrected and consequently contaminates the inner domain, the present study extends the penalization approach of FlowFit: additional constraints are derived from the continuity equation and the Navier–Stokes equation. The present method also minimizes the curl of the pressure gradient (McClure and Yarusevych 2017), which is not guaranteed to be zero in VIC+. Note that the pressure gradient in FlowFit is directly obtained from the pressure field and thus is curl-free. The coarse-grid approximation is a workflow similar to the iterative multigrid approach (Scarano and Riethmüller 2000), widely accepted and used in the PIV community. The VIC+ result from the coarse grid is used as an initial condition for a subsequent reconstruction on the finer grid. Meanwhile, the physical size of the computational domain slightly shrinks, and therefore an effect similar to padding over a large domain can be expected.
The new method is assessed using both DNS-based and experimental PTV data. The PTV data is synthetically generated from the direct numerical simulation data of forced isotropic turbulence from the Johns Hopkins Turbulence Database (JHTDB, Li et al. 2008). Here, the numerical assessment evaluates reconstruction quality, bias errors, and computational residuals with respect to reconstruction conditions such as the padding size, the noisy PTV input, and the availability of acceleration input. The experimental assessment considers STB tracks evaluated from a circular water jet experiment (Violato and Scarano 2011). Two reconstruction domains are considered, where the smaller one is a subset of the larger, so that two of its boundaries lie on inner grid points of the larger domain. Such different reconstruction domains allow a comparison of flow variables at the overlapping grid points and their time histories.

Description of method
The present method has its theoretical foundation in VIC+ (Schneiders and Scarano 2016).

Review of the VIC+ method
VIC+ optimizes a vector of variables, ξ, defined on a Cartesian grid. ξ is a set of radial basis function weights for a vorticity field, ω, a velocity boundary, u| ∂Ω , and an acceleration boundary, ∂u/∂t| ∂Ω , respectively, where β 1 and β 2 are scale parameters that make values in different physical units comparable during the iterative optimization. β 1 and β 2 are defined by the standard deviations of each element of ξ, as in Eqs. (2) and (3), where N Ω is the number of grid points on which ξ ω is defined and N ∂Ω is the number of boundary grid points on which ξ u|∂Ω and ξ ∂u/∂t|∂Ω are defined. ω, u| ∂Ω , and ∂u/∂t| ∂Ω are calculated from ξ at each grid point, x, on the Cartesian grid with spacing h, through Eqs. (4)–(6), where the radial basis function (RBF), ϕ(r), is defined in Eq. (7). Here the grid spacing, h, is recommended to be set by the particle concentration C, which implies that N Ω equals 64 (= 4³) times N PTV , where N PTV is the number of PTV realizations. ω, u| ∂Ω , and ∂u/∂t| ∂Ω yield a velocity field, u, and an acceleration field, ∂u/∂t, through the Poisson equations of Eqs. (9) and (10), where the vorticity time derivative, ∂ω/∂t, is obtained from the vorticity transport equation by invoking the Navier–Stokes equation under incompressible and inviscid conditions. A material acceleration field, Du/Dt, is then discretely computed. Disparities between the reconstruction and the PTV data are collected at every particle position, as defined by Eqs. (13) and (14). Here, tri-linear interpolation obtains the vectors at the fractional grid positions, i.e., u s and Du/Dt| s .
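As a toy illustration of the velocity-from-vorticity step, the Poisson relation of Eq. (9), ∇²u = −∇×ω, can be solved spectrally. The sketch below is a hypothetical setup, using periodic boundaries in place of the Dirichlet conditions of VIC+; it also shows that the resulting velocity field satisfies continuity by construction.

```python
import numpy as np

# Solve  laplacian(u) = -curl(omega)  on a periodic cube via FFT.
# Periodic boundaries replace the Dirichlet conditions of VIC+ here,
# purely to keep the illustration short.
N, L = 32, 2 * np.pi
k = np.fft.fftfreq(N, d=L / (2 * np.pi * N))  # integer wavenumbers for L = 2*pi
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0  # avoid division by zero for the mean mode

def curl_hat(fx, fy, fz):
    """Curl of a vector field, evaluated in Fourier space."""
    return (1j * KY * fz - 1j * KZ * fy,
            1j * KZ * fx - 1j * KX * fz,
            1j * KX * fy - 1j * KY * fx)

# A smooth divergence-free test vorticity: omega = curl(a) for a random field a
rng = np.random.default_rng(0)
a_hat = [np.fft.fftn(rng.standard_normal((N, N, N))) * np.exp(-K2) for _ in range(3)]
w_hat = curl_hat(*a_hat)

# Poisson solve mode-by-mode:  -K2 * u_hat = -(i k x omega_hat)
rhs_hat = curl_hat(*w_hat)
u_hat = [r / K2 for r in rhs_hat]

# The recovered velocity is divergence-free, i.e., continuity holds inherently
div_hat = 1j * KX * u_hat[0] + 1j * KY * u_hat[1] + 1j * KZ * u_hat[2]
print(np.max(np.abs(np.fft.ifftn(div_hat))))  # ~ machine precision
```

The divergence vanishes identically because k · (k × ω̂) = 0 for every mode, mirroring how the vortex-method formulation enforces continuity without an explicit penalty.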
The consideration of the different physical units yields the final cost function, where α is a weighting coefficient defined by the standard deviations from the PTV measurement. The VIC+ method generally deals with large systems, and thus the limited-memory Broyden-Fletcher-Goldfarb-Shanno method (L-BFGS, Liu and Nocedal 1989) is adopted to iteratively optimize such a large number of unknowns within a limited amount of memory. The gradient of J with respect to the optimization variable, ∂J/∂ξ, is evaluated by the adjoint approach, which provides a gradient with computational complexity similar to that of the cost function (Courant and Hilbert 1962; Lewis and Derber 1985). As a convergence criterion regarding the change in the cost function, 10⁻⁶ of the initial value of J is selected.
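A minimal sketch of this optimization loop, with a made-up linear sampling operator standing in for the VIC+ measurement model: the cost and its adjoint-computed gradient are handed to an L-BFGS routine, as in the method described above. The matrix A, the vector b, and all sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for VIC+: fit grid weights xi so that a linear "sampling"
# A @ xi matches sparse measurements b (the role of the PTV disparity term).
rng = np.random.default_rng(1)
n_grid, n_ptv = 200, 50
A = rng.standard_normal((n_ptv, n_grid))  # hypothetical sampling operator
xi_true = rng.standard_normal(n_grid)
b = A @ xi_true                           # synthetic "measurements"

def cost_and_grad(xi):
    r = A @ xi - b     # disparity at the "particle" locations
    J = 0.5 * r @ r    # quadratic cost, cf. the disparity terms of VIC+
    dJ = A.T @ r       # adjoint of the sampling operator = its transpose
    return J, dJ

# Start from an all-zero initial condition, as the present method advocates
res = minimize(cost_and_grad, np.zeros(n_grid), jac=True, method="L-BFGS-B",
               options={"maxiter": 500, "ftol": 1e-12})
print(res.fun)  # cost driven toward zero
```

Because the gradient is supplied analytically (via the adjoint), each L-BFGS iteration costs roughly one cost-function evaluation plus one adjoint sweep, which is the property the adjoint approach exploits at scale.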

Computation time complexity
As mentioned earlier, the padded volume is required to prevent an erroneous boundary from affecting the inner domain. Given that VIC+ already requires considerable computational resources, both in memory and in time, the padded volume further increases the required resources. For example, Schneiders et al. (2016) used a constant padding length of 6 mm at five boundary surfaces of a measurement volume of 19 × 4 × 41 mm³, and Schneiders and Scarano (2016) padded the reconstruction volume by 30% at all boundaries except into the wall. These lead to computational volumes that are 5.4 and 4.1 times the measurement volumes, respectively. The present study focuses on the computation time rather than the memory space because, unlike memory, time cannot simply be added. At each L-BFGS iteration, VIC+ solves 12 Poisson equations that account for most of the total computation time, i.e., Eqs. (9), (10), and the corresponding adjoint procedures. The FFT-based (Cooley and Tukey 1965) and multigrid-based (Trottenberg et al. 2001) solvers are widely used to solve a Poisson equation, and their computation time complexity is known to be linearithmic in the number of unknowns, i.e., O(n log n). Meanwhile, L-BFGS has a linear time complexity per iteration, O(mn), where m is suggested to be on the order of 5-40 (Liu and Nocedal 1989). Note that the number of L-BFGS iterations required for convergence depends on the respective measurement input and the dimension of the reconstruction, and therefore the present study focuses only on the time complexity of solving the Poisson equations. Assuming VIC+ reconstructs a cubic volume covering N³ grid points, padded by N p grid points at all boundaries, the number of unknowns of the Poisson equations is (N + 2N p )³. Figure 1 illustrates a part of the computational domain with the three different kinds of grid points, i.e., boundary points, padded points, and interrogated points.
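The growth of the padded problem follows directly from the (N + 2N p)³ unknown count and the O(n log n) solver complexity; the numbers below are illustrative only.

```python
import math

# Relative cost of an O(n log n) Poisson solve for a cubic volume of N^3
# interrogated points padded by Np points on every face, normalised by the
# unpadded case (Np = 0).
def relative_cost(N, Np):
    n0 = N**3
    n = (N + 2 * Np)**3
    return (n * math.log(n)) / (n0 * math.log(n0))

# Example: a 64^3 volume with no padding, the minimal padding Np = 2, and a
# larger fixed padding of 9 points per face (values chosen for illustration)
for Np in (0, 2, 9):
    print(Np, round(relative_cost(64, Np), 2))
```

Even the minimal padding of N p = 2 adds roughly 20% to the solve cost of a 64³ volume, while generous padding multiplies it severalfold, which motivates the smallest-padding strategy.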
Figure 2 demonstrates the time complexities of two padding strategies, i.e., a fixed number and a proportional fraction, normalized by the case with N p = 0. It shows that an extended volume with large padding leads to a significant increase in the time complexity. Therefore, the present method targets N p = 2, which is the minimum required to avoid direct adjoint generation on boundary grid points by the PTV measurement through the tri-cubic interpolation function. Note that the minimum N p would be 1 for tri-linear interpolation.

The present method
In addition to the smallest padding size, the other new features integrated into VIC+ are described as follows: the zero initial condition in Sect. 2.3.1, the additional constraints from the physical laws and the corresponding cost functions in Sect. 2.3.2, the additional adjoint procedures in Sect. 2.3.3, and the coarse-grid approximation in Sect. 2.3.4.

Zero initial condition
The reconstruction starts from an initial condition with zeros to avoid the possibility that a badly chosen initial condition either slows down convergence or even gets stuck in a weak solution. Even though AGW can provide a fully-filled initial condition, validating whether that initial condition is smooth enough to avoid trapping in a local optimization minimum is laborious and thus avoided here. Moreover, reconstructions from different initial conditions are not identical due to differences accumulated during the iterative L-BFGS optimization. They are probably similar to each other, and thus all of them can be regarded as correct. However, if there is a noticeable difference between the resulting reconstructions, it is not easy to decide which one should be chosen. On the other hand, adopting the zero initial condition ensures consistency in the VIC+ reconstruction. With Eqs. (2) and (3), it is impossible to define the scale parameters, β 1 and β 2 , because the initial condition is all zeros. Alternative definitions are therefore introduced as β 1# and β 2# , where the subscript # indicates that the present method redefines the parameters of VIC+. β 1# , which represents a magnitude ratio of vorticity to velocity, is defined with a factor of 2.0 introduced to emphasize the vorticity. For the magnitude ratio of acceleration to velocity, β 2# /β 1# , there are two candidates, from the PTV measurement and from finite differentiation, respectively, and their geometric mean is chosen. Consequently, β 2# , which represents a magnitude ratio of vorticity to acceleration, follows. β 1# and β 2# are set before the optimization and kept constant during the optimization. When the acceleration from the PTV measurement is not available, e.g., in a 2-pulse PTV measurement, β 2# /β 1# and β 2# are defined accordingly.
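The geometric-mean construction for β 2# /β 1# can be sketched as follows; note that the two candidate ratios below are placeholder formulas for illustration, not the paper's exact definitions, which are given by its equations.

```python
import numpy as np

# Placeholder candidates for the acceleration-to-velocity magnitude ratio:
# r_meas from the PTV acceleration samples, r_fd from a finite-difference
# (convective) scale. Both formulas are assumptions made for this sketch.
rng = np.random.default_rng(2)
u_ptv = rng.standard_normal((1000, 3))        # PTV velocity samples
a_ptv = 5.0 * rng.standard_normal((1000, 3))  # PTV material-acceleration samples
h = 0.1                                        # grid spacing

r_meas = np.std(a_ptv) / np.std(u_ptv)  # measured acceleration/velocity ratio
r_fd = np.std(u_ptv) / h                # convective scale u^2/h, divided by u
r_geo = np.sqrt(r_meas * r_fd)          # geometric mean of the two candidates
print(r_geo)
```

The geometric mean keeps the chosen ratio between the two candidates on a logarithmic scale, so neither a noisy measured ratio nor a crude finite-difference estimate dominates the scaling.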

Additional constraints
In VIC+, the boundary condition, consisting of u| ∂Ω and ∂u/∂t| ∂Ω , is iteratively corrected. According to Eq. (15), any boundary condition that leads to a minimum cost function can replace the old one, even if the new one is physically invalid, and consequently it can contaminate the inner domain. To avoid such an ill-posed boundary condition and strengthen the overall reconstruction reliability, the present method examines each grid point using the continuity equation, the Navier–Stokes equation, and vector calculus identities. The vorticity field, ω, is the most fundamental variable of VIC+ because the velocity field, u, and the acceleration field, ∂u/∂t, are calculated from it. Since VIC+ directly optimizes ω instead of using the curl of u, ω is not guaranteed to be divergence-free. A corresponding term for the cost function is defined by Eq. (23). Divergences of u and ∂u/∂t are not zero near the boundary because of the prescribed boundaries in the Poisson equations, Eqs. (9) and (10); corresponding cost function terms are defined through Eqs. (24) and (25). The instantaneous pressure field, p, is evaluated at each iteration by solving a Poisson equation, Eq. (26), while the viscosity term is neglected (van Oudheusden 2013). Here, Neumann boundary conditions are applied to all boundary points except a single point to avoid singularity, and the resulting pressure field is then adjusted with a constant shift to have an average value of zero. Considering the Navier–Stokes equation, the irrotational component of ∇p is used as an additional cost function term, Eq. (27). Besides, the solenoidal component of ∇p, widely used as an uncertainty measure (Lynch and Scarano 2014; Wang et al. 2016; McClure and Yarusevych 2017; among others), can be employed as Eq. (28). The final cost function of the present method is defined by Eq. (29). J u and J Du/Dt are obtained through Eqs. (13) and (14), but using tri-cubic interpolation.
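The divergence penalties can be sketched as sums of squared central-difference divergences over the grid; the field construction below and the simple discretization are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def divergence(f, h):
    """Divergence of f with shape (3, nx, ny, nz), central differences, spacing h."""
    return (np.gradient(f[0], h, axis=0)
            + np.gradient(f[1], h, axis=1)
            + np.gradient(f[2], h, axis=2))

def j_div(f, h):
    """Squared-divergence penalty, cf. the J_div-style constraint terms."""
    d = divergence(f, h)
    return np.sum(d**2)

h, n = 0.1, 16
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u_sol = np.stack([np.sin(Y), np.sin(Z), np.sin(X)])  # divergence-free field
u_bad = np.stack([X, Y, Z])                           # divergence = 3 everywhere

print(j_div(u_sol, h), j_div(u_bad, h))  # -> 0.0 36864.0
```

A divergence-free field contributes nothing to the penalty, while a field violating continuity is charged at every grid point (here 3² summed over 16³ points), which is how such terms discourage physically invalid boundary updates.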
α 1# is a newly defined scaling factor, where η is the ratio of the relative uncertainty in the velocity to that in the material acceleration, expressed with ε u,PTV and ε Du/Dt,PTV , the uncertainties of the PTV measurement, and their ratio γ = ε u,PTV /ε Du/Dt,PTV . When u PTV and Du/Dt| PTV are obtained by least-squares fitting using a second-order polynomial model over (2L + 1) time-steps, γ can be estimated as 0.289, 0.592, and 0.866 for L = 1, 2, and 3, respectively (see Appendix A). α 2# is the scaling factor for the time-derivative terms. f AC is the weighting factor for the additional constraints, where N grid is the number of grid points and the constant factor, λ AC , is chosen as 8 (see Appendix B), leading to f AC = 1/8 under the recommended grid spacing of Eq. (8). f DN is the denoising factor, and its default value is 1.0 (see Appendix B). Selecting f DN larger than 1.0 is expected to make the velocity field smoother by emphasizing the nature of spatial integration in evaluating p.
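The quoted γ values can be checked from the covariance matrix of the second-order polynomial least-squares fit; unit time step and unit position noise are assumed, since γ is a ratio and these factors cancel.

```python
import numpy as np

# Fit x(t) = a + b t + c t^2 to 2L+1 samples at t = -L..L with unit noise.
# Velocity uncertainty is the standard error of b, acceleration uncertainty
# that of 2c; gamma is their ratio.
def gamma(L):
    t = np.arange(-L, L + 1, dtype=float)
    T = np.vander(t, 3, increasing=True)  # columns: 1, t, t^2
    cov = np.linalg.inv(T.T @ T)          # fit covariance for unit noise
    eps_u = np.sqrt(cov[1, 1])            # std of b   -> velocity
    eps_a = np.sqrt(4.0 * cov[2, 2])      # std of 2c  -> acceleration
    return eps_u / eps_a

for L in (1, 2, 3):
    print(L, round(gamma(L), 3))  # -> 0.289, 0.592, 0.866
```

The computed ratios reproduce the values 0.289, 0.592, and 0.866 quoted above, confirming that longer tracks reduce the acceleration uncertainty relative to the velocity uncertainty.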

Adjoint procedures
After the cost function is calculated, the gradient with respect to the optimization variable, ∂J/∂ξ, is evaluated based on the adjoint procedures introduced by Giering and Kaminski (1998). This manuscript mainly addresses the adjoints regarding the additional constraints because the other adjoints and their transformations are identical to those in VIC+ (Schneiders and Scarano 2016). Please note that all sets of vectors, vector fields, and scalar fields in this section are vectorized, i.e., u ≡ vec(u), while all the adjoints are defined on the Cartesian grid. The adjoint variables are named with the prefix δ*, attached to the corresponding variables, following the naming convention of previous research.
The fractional vectors at each particle position in Eqs. (13) and (14) are interpolated from the vector fields on the Cartesian grid, where A is the tri-cubic interpolation matrix. The corresponding adjoints, δ*u PTV and δ*Du/Dt| PTV , are evaluated from the disparity between the reconstructed vector fields and the PTV measurement. The irrotational and solenoidal terms of ∇p can be rewritten in matrix-vector notation, where G is the gradient operator matrix and C is the curl operator matrix. Considering Eqs. (27) and (28), the material acceleration adjoint, δ*Du/Dt, is calculated accordingly. δ*Du/Dt is then transformed to δ*∂u/∂t| mat and δ*u mat , respectively, through the same procedures introduced by VIC+. The divergences of velocity, acceleration, and vorticity are defined with the divergence operator matrix, D. Analogous to Eq. (39), the adjoints from the divergences follow. The adjoint for the acceleration, δ*∂u/∂t, is then summed. As in VIC+, δ*∂u/∂t is transformed to δ*∂ω/∂t and, in turn, to δ*u acc and δ*ω acc . After that, the velocity adjoint, δ*u, is summed. Consequently, δ*u is transformed to the vorticity adjoint, δ*ω vel . Lastly, the vorticity adjoint, δ*ω, is summed. The boundary adjoints, δ*u| ∂Ω and δ*∂u/∂t| ∂Ω , are initialized by the same procedure as in VIC+. The boundary values of δ*u and δ*∂u/∂t are then added to δ*u| ∂Ω and δ*∂u/∂t| ∂Ω , respectively.
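Each adjoint above is the transpose of the corresponding linear operator (interpolation A, gradient G, curl C, divergence D); the defining identity ⟨Ax, y⟩ = ⟨x, Aᵀy⟩ can be checked numerically with any matrix standing in for these operators.

```python
import numpy as np

# A random matrix stands in for, e.g., the tri-cubic sampling matrix A.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 100))  # maps a 100-point grid field to 40 samples
x = rng.standard_normal(100)        # vectorised grid-based field
y = rng.standard_normal(40)         # e.g. a disparity at particle positions

# Adjoint identity: <A x, y> = <x, A^T y>
lhs = (A @ x) @ y
rhs = x @ (A.T @ y)
print(abs(lhs - rhs))  # ~ machine precision
```

This is why the adjoint sweep costs no more than the forward pass: each forward operator application is simply replayed as its transpose, in reverse order.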
The gradient with respect to the optimization variable, ∂J/∂ξ, is calculated from the obtained adjoints, δ*ω, δ*u| ∂Ω , and δ*∂u/∂t| ∂Ω . The RBF sampling of Eqs. (4)–(6) is redefined with the matrix for RBF operations, Ф. As the adjoint of the optimization variable can be regarded as its gradient, ∂J/∂ξ follows directly.

Coarse-grid approximation
Reconstruction on a coarse-grid provides an initial condition for a subsequent reconstruction on a finer grid. Before the coarse-grid approximation, it is required to generate coarse grids and determine their number by the following pre-procedure.
Step 0 Let the grid spacing h 0 = h for the final-grid, Ω 0 = Ω, and initialize that k = 0.
Step 6 The number of coarse grids, K, is k. Note that K > 0. Figure 3 illustrates an example of the generated coarse grids together with the final grid. The reconstruction starts from the coarsest grid, Ω K , and continues sequentially to the final grid, Ω. After each reconstruction on a coarse grid, Ω k where k > 0, tri-cubic interpolation initializes ω, u| ∂Ω , and ∂u/∂t| ∂Ω on Ω k−1 . Here, the interpolation fractions are fixed as either 0.25 or 0.75. The optimization variable for Ω k , ξ k , is then initialized accordingly, where Ф −1 is the inverse of the RBF matrix. Each element of ξ k is computed using the conjugate gradient method.
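A hypothetical sketch of the grid-hierarchy generation: the paper's exact pre-procedure (Steps 0-6) is not reproduced here, so the spacing-halving rule and the minimum-size stopping criterion below are assumptions made for illustration.

```python
# Build a coarse-to-fine hierarchy of grid sizes: each coarser level halves
# the point count per axis (i.e., doubles the spacing) until a minimum point
# count is reached. The stopping rule n_min is an assumption of this sketch.
def build_levels(n_final, n_min=8):
    levels, n = [n_final], n_final
    while n // 2 >= n_min:
        n //= 2
        levels.append(n)
    return levels[::-1]  # coarsest grid first, final grid last

print(build_levels(64))  # -> [8, 16, 32, 64]
```

Reconstruction would then proceed through this list from coarsest to finest, with each level's result interpolated onto the next as the initial condition, mirroring the workflow described above.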
At each iteration of the L-BFGS optimization, the present method solves 13 Poisson equations, i.e., one for p in addition to the 12 in VIC+. The algebraic multigrid method (Stüben 2001), which iteratively updates its solution until the residual falls below a prescribed threshold, ε PE , is adopted to solve the Poisson equations. Because the residuals of the Poisson equations are expected to be much smaller than those of the cost function, a larger threshold, 100 ε PE , is used for the coarse-grid schemes (Ω k where k > 0) and for the first three-quarters of the iterations on the final grid, Ω. Here, ε PE is chosen as the minimum of 10⁻⁸ and 10⁻⁸ σ u,PTV . The number of L-BFGS iterations is prescribed before the reconstruction, and the same number is applied to the coarse grids.

Numerical assessment
The numerical assessment evaluates the performance of the present method using synthetically generated data, so that a ground truth is available. It mainly addresses the effects of the additional constraints and the coarse-grid approximation, as other possible considerations, such as reconstruction reliability and the effect of seeding density, are already well addressed by Schneiders and Scarano (2016).

Synthetic track generation and data processing
"Forced Isotropic Turbulence" of the Johns Hopkins Turbulence Database (JHTDB, Li et al. 2008), computed by direct numerical simulation (DNS), is chosen as the reference data. As shown in Fig. 4a, 64 cubic sub-volumes are placed 128 h apart from their neighbors. Each sub-volume is discretized with the same grid spacing as the reference, h = π/512, and thus has 128³ grid points. In each sub-volume, 32,000 particles, according to Eq. (8), are randomly distributed (Fig. 4b). The reference flow fields (u, ∂u/∂t, and p) on the Cartesian grid and the 2nd-order Lagrangian trajectories at the particle positions (u PTV and Du/Dt| PTV ) are sampled at t 0 = 5.028 through the web-service interface hosted by JHTDB. Here, 4th-order Lagrange polynomial interpolation takes into account the Lagrangian information at fractional positions on the Cartesian grid by assuming an inviscid flow. Two sets of particle tracks, clean and noisy, are generated based on u PTV and Du/Dt| PTV . The particle tracks follow a second-order polynomial over three time-steps with a temporal interval of 0.005. For the clean case, u PTV and Du/Dt| PTV are identical to the DNS reference. η in Eq. (31) is then assumed to be 1.0 because floating-point truncation is the sole source of errors, e.g., ε u,PTV /σ u,PTV = ε Du/Dt,PTV /σ Du/Dt,PTV . For the noisy case, Gaussian white noise is added to the x-, y-, and z-vector elements with respective standard deviations of 5% of σ u,PTV for u PTV and 25% of σ Du/Dt,PTV for Du/Dt| PTV , which leads to η = 0.2. The resulting velocity and material acceleration vectors contain 8.66% and 43.3% random errors, respectively.
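The quoted vector-error magnitudes follow from adding independent per-component noise: the three components combine as a root-sum-square, so the vector error is √3 times the per-component fraction.

```python
import numpy as np

# Independent Gaussian noise with std = frac * sigma on each of the x-, y-,
# z-components gives a vector-magnitude random error of sqrt(3) * frac.
for frac in (0.05, 0.25):
    print(round(100 * np.sqrt(3) * frac, 2))  # -> 8.66, 43.3
```

This reproduces the 8.66% velocity and 43.3% acceleration error levels stated above for the 5% and 25% per-component noise settings.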
The generated particle tracks are processed with the present method and with a Gaussian windowing technique (GW) that uses a fixed window size. Here, a pressure field from the GW results is computed using Eq. (26). For the present method, four features are investigated: the padding size (N p ), the additional constraints (f AC ), the coarse-grid approximation (K), and the denoising factor (f DN ). At each grid size, 100 L-BFGS iterations are conducted, i.e., 100 (K + 1) iterations in total, ensuring that the change of the cost function between successive iterations at the last iteration is less than 5.0 × 10⁻⁵ of the initial value of J. Although this threshold can be a maximum of five times that of VIC+ (10⁻⁶), the changes from further iterations are negligible.
This numerical assessment evaluates three dimensionless variables: the reconstruction quality (Q), the standard deviation of bias errors (E), and the computational residual (R). Q and E describe the similarity and the difference, respectively, between the reconstruction and the DNS reference and are evaluated based on the cross-correlation and the standard deviations, as defined by Eqs. (54) and (55). Here v can be any of the field variables, i.e., u, ∂u/∂t, and p. R indicates the validity of the reconstruction based on the cost functions of Eqs. (23), (24), (25), (27), and (28). Here v is either u or ∂u/∂t according to the scale unit used in the cost function, J f(v) .
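Plausible forms of Q and E can be sketched as follows; the paper's exact Eqs. (54) and (55) are not reproduced here, so the normalizations below are assumptions consistent with the description above.

```python
import numpy as np

def Q(v, v_ref):
    """Normalised cross-correlation between reconstruction and reference."""
    return np.sum(v * v_ref) / np.sqrt(np.sum(v**2) * np.sum(v_ref**2))

def E(v, v_ref):
    """Std of the reconstruction error, normalised by the reference's std."""
    return np.std(v - v_ref) / np.std(v_ref)

rng = np.random.default_rng(4)
v_ref = rng.standard_normal(1000)
v = v_ref + 0.1 * rng.standard_normal(1000)  # mildly perturbed reconstruction
print(round(Q(v, v_ref), 3), round(E(v, v_ref), 3))
```

A perfect reconstruction gives Q = 1 and E = 0; the mild 10% perturbation above yields Q close to one and E close to 0.1, illustrating how the two metrics complement each other.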
The distance d represents the shortest distance from the interrogation boundary along the Cartesian grid. As illustrated in Fig. 5, d has positive values inside the interrogation domain and negative values otherwise. At the outermost grid points of the interrogation domain, d is equal to 0.5 h. It can be expected that regressed reconstruction is unavoidable in the vicinity of the interrogation boundary because of unavailable information beyond the interrogation boundary (d/h < 0) and the less populated PTV data (N PTV = N Ω /64 from Eq. (8)).

Padding size, N p
The effect of the padding size, N p , is investigated using the clean case in a single sub-volume, indicated by the red box in Fig. 4a. Figure 5 shows the residual distributions with respect to various N p , from 2 to 40, and d/h. Note that ω of GW is the curl of u, and thus its divergence is zero (Fig. 5a). On the other hand, because ω of the present method is directly updated during the L-BFGS optimization, which estimates a Hessian matrix of the solution from ∂J/∂ξ, its divergence is slightly biased from zero due to truncation; here, the adjoints are mainly driven within the interrogation domain, where Fig. 5a accordingly shows plateaus. The residuals of GW show steep increases close to d/h = 0 due to the small number of sampled particles (Fig. 5b-e). Meanwhile, the residuals for d larger than half the window size, i.e., d/h > 4, remain at similar levels because of the spatial filtering effect and because each grid point is not directly associated with its neighboring grid points. Compared to GW, the present method shows significant improvements in R(div(u)) and R(div(∂u/∂t)), which is likely achieved by the velocity-vorticity relation in the vortex method (Eqs. (9) and (10)), as discussed in the introduction. The present method also shows better pressure-related residuals than GW (Fig. 5d and e). Selecting a larger N p in the present method is always effective in reducing the residuals at d/h = 0, except for R(div(ω)), in accordance with VIC+. It is also shown that the additional constraints, f AC , and the coarse-grid approximation, K, are able to reduce the residuals further. With the smallest padding size, N p = 2, the residuals in Fig. 5a-e when f AC = 1/8 and K = 3 are reduced to 7.9, 4.2, 17, 8.8, and 7.5%, respectively, of those when f AC = K = 0. f AC achieves most of this improvement, while K contributes slightly.
R(div(u)) and R(div(∂u/∂t)) are largest at the computational boundaries and are continuously reduced as d/h increases, which implies that the prescribed computational boundaries cause the large residuals and that their influence gradually decreases away from them through the Poisson equations (Eqs. (9) and (10)). Unlike u and Du/Dt, ω and p only consider the relations between an interrogation point and its neighboring grid points, and therefore the corresponding residuals, R(div(ω)), R(irr(∇p)), and R(sol(∇p)), are evenly distributed for d/h > 4. Figure 6 compares the instantaneous fields from three N p options, i.e., N p = 2, 16, 40, with respect to different combinations of f AC and K. In Fig. 6a, the results in the interrogation domain, d/h > 0, show no significant visual difference, whereas the results in the padded domain have distinctive contours. In the padded volume, the difference caused by K is more significant than that caused by f AC . For example, the regions where u x > 1.05 (indicated as dark red in Fig. 6a) only appear in the padded domain when K = 3 and N p = 16 or 40. Figure 6b demonstrates the enhancement achieved by f AC . Note that the exponential contour levels exaggerate the smaller residuals. Even though the two velocity fields for K = 0 (first and third columns in Fig. 6a) look similar to each other, the velocity divergence dramatically increases when f AC = 0. Figure 7 plots the standard deviations of bias errors, E, obtained from two sets of grid points, i.e., the outermost interrogation grid points (d/h = 0.5) and the entire interrogation domain (d/h ≥ 0.5). Obviously, E(u) and E(∂u/∂t) from d/h = 0.5 are higher than those from d/h ≥ 0.5. Using f AC = 1/8 and K = 3 suppresses the increase of E(u) and E(∂u/∂t) for small values of N p , so that the corresponding errors at N p = 2 are similar to those at N p = 40 (Fig. 7a and b). On the contrary, E(p) increases with larger N p (Fig. 7c), which means that ∇p far from the interrogation domain prejudices the pressure reconstruction. This might be because VIC+ computes p last, without considering the padded volume, after reconstructing u and Du/Dt.
The computation times with and without the coarse-grid approximation are compared with respect to N_p in Fig. 8. When K = 0, the result shows good agreement with the theoretical estimation from Eq. (17). The larger threshold (100 ε_PE), introduced with the coarse-grid approximation, helps to reduce the computation time when N_p < 10. This advantage is lost when N_p > 14 because the coarse-grid generation procedure (Sect. 2.3.4) produces slightly enlarged domains due to its rounding-up effect.
Using the smallest padding size, N_p = 2, is the best option from a practical standpoint, i.e., in terms of the residuals, the reconstruction qualities, and the computation time. Therefore, the following numerical assessment, which further investigates the additional constraints and the coarse-grid approximation, is conducted with N_p = 2.

Statistical assessment of f_AC and K
The additional constraints (f_AC) and the coarse-grid approximation (K) are statistically investigated using all sub-volumes. The assessment covers both the clean and noisy cases and the availability of Du/Dt|PTV, the latter to simulate a reconstruction from a 2-pulse PTV measurement. For the noisy case, multiple denoising factors, f_DN = 1.0, 2.5, and 5.0, are applied, while a single factor of 1.0 is used for the clean case. The computational residuals (R), reconstruction qualities (Q), and standard deviations of bias errors (E) with respect to d/h are shown in Figs. 9 and 10. The results from the sampling method (GW) are provided only where available. Compared to GW, the present method inherits the remarkable virtues of VIC+, i.e., smaller R, higher Q (smaller E), and the ability to reconstruct time-derivative-related fields, i.e., ∂u/∂t and p, without Du/Dt|PTV. Figure 9 shows the same trend as Fig. 5, namely that using f_AC and K reduces all the computational residuals, except for R(div(ω)) when the cost function is evaluated without Du/Dt|PTV. Note that the magnitudes of R(div(ω)) are less than 0.04%, which is considerably smaller than the other residuals. When Du/Dt|PTV is not available, R(div(ω)) at the inner grid points (d/h > 4) is extremely small when f_AC = 0. This is because u_PTV is then the sole error source, on which ω depends only through Eq. (9). When f_AC = 1/8, the other flow variables affect R(div(ω)), and therefore all the flow variables depend on each other even though Du/Dt|PTV is not available. This implies that ω requires both u_PTV and Du/Dt|PTV to carry the full time-derivative information.
Fig. 10 Reconstruction qualities, Q, and standard deviations of bias errors, E, with respect to d/h. Red labels on the vertical axes indicate scales that differ from the others in the same row.
Although VIC+ and the present study can provide p without Du/Dt|PTV, its reliability, which relates to how much Du/Dt|PTV contributes to ω and thus to ∂u/∂t, is not yet quantified and remains for future study. When Du/Dt|PTV is not available, Q and E concerning ∂u/∂t and p are significantly improved by adopting f_AC and K (Fig. 10a2, 10a3, 10b2, and 10b3). Here, the fact that {f_AC = 0, K = 3} yields a greater improvement than {f_AC = 1/8, K = 0} supports the view that a large padding size is essential for 2-pulse PTV data (Schneiders et al. 2016).
In Figs. 9 and 10, the clean and noisy cases show similar trends, while the noisy case has larger R, lower Q, and larger E. In Fig. 10b, E(u) and E(∂u/∂t) appear to reach achievable minima related to the spatial resolution and the noise level of the PTV data. For the clean case, GW has roughly 10% E(u) and 50% E(∂u/∂t), whereas the present method has roughly 5% and 25%, respectively. These differences are mainly caused by the different length scales of the spatial filters, i.e., the Gaussian window and the RBF, which are larger than the smallest turbulent scale. For the noisy case, E(u) and E(∂u/∂t) of the present method increase to roughly 10% and 40%, respectively, whereas GW shows little increase because of its spatial filtering effect, which suppresses random noise at the cost of spatial resolution. Considering the preset noise levels, i.e., 8.66% and 43.3%, the present method is thus also affected by measurement noise. Interestingly, using a larger f_DN, e.g., 2.5 or 5.0, for the noisy case improves R(irr(∇p)) and R(sol(∇p)), and thus slightly improves Q(u) and E(u) while sacrificing the others. This implies that the factors introduced in this study interact with one another as intended.

Instantaneous flow structure
Figures 11 and 12 demonstrate the instantaneous flow structures using isosurfaces of λ_2 and pressure, respectively, on the single sub-volume indicated as the red box in Fig. 4a. Four sets are used as input data, i.e., the clean and noisy cases with and without Du/Dt|PTV. Three combinations of the parameters of the present method, i.e., {f_AC = K = 0}, {f_AC = 1/8, K = 3, f_DN = 1.0}, and {f_AC = 1/8, K = 3, f_DN = 5.0}, are tested (Figs. 11a-c and 12a-c). Note that f_DN = 5.0 is tested only for the noisy case. The corresponding results are compared to those from the sampling method (GW, Figs. 11d and 12d) and the DNS reference (Figs. 11e and 12e). Because GW yields the same velocity field regardless of the availability of Du/Dt|PTV and cannot reconstruct a pressure field without it, the corresponding results are omitted.
The vortical structures from the present method (Fig. 11a-c) are less populated than those of the DNS reference because of the slightly smaller magnitudes of λ_2 caused by the limited spatial resolution, h. As discussed in the previous section, the availability of Du/Dt|PTV does not result in a notable difference in the reconstructed velocity fields. When {f_AC = K = 0}, p reconstructed without Du/Dt|PTV is poor because the computation boundary issue is not resolved (right two columns of Fig. 12a). Otherwise, the present method shows good agreement with the DNS reference compared to GW. This shows that f_AC and K can serve as an alternative to a large N_p for resolving the computation boundary issue.
The noisy PTV input understandably yields noisy structures in both λ_2 and p, but with different shapes: scattered small structures in the isosurfaces of λ_2 and small fluctuations on the isosurfaces of p. This is because the structure scale of p is larger than that of λ_2, owing to the spatially integrated nature of p. The spatial filtering effect of GW results in underestimated magnitudes of λ_2 and smoothed isosurfaces of p. The vortical structures obtained with the exaggerated denoising factor, f_DN = 5.0, appear robust against the measurement noise (Fig. 11c); here, the relatively larger vortical structures are emphasized, and the corresponding pressure reconstruction is shown in Fig. 12c.
Figure 13 demonstrates how the flow structures become detailed as the grid is refined through the coarse-grid approximation, using the clean case. Note that different λ_2 values are selected for the isosurfaces because of the different grid spacings at each level, i.e., h_k = 2^k h. The number of computation grid points at each level is 22³, 38³, 69³, and 132³ for k = 3, 2, 1, and 0, respectively. The larger vortical structures on the coarser grids can be regarded as unresolved clusters of small structures; they are segmented into smaller structures and strengthened on the subsequent finer grids. The pressure structures also become detailed as the grid is refined, but their magnitude levels are retained from the coarse grids regardless of the grid spacing.
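The grid hierarchy above can be sketched as successive coarsening of the final grid, with each level doubling the spacing. The halving rule below is an illustrative simplification: the paper's procedure (Sect. 2.3.4) additionally enlarges the domains when rounding up, which is why its reported point counts (22³, 38³, 69³, 132³) are somewhat larger than this sketch produces.

```python
import math

def grid_levels(n_final, h, K):
    # Returns (k, h_k, n_k) tuples from the finest grid (k = 0) up to k = K,
    # where h_k = 2**k * h and n_k is the per-axis point count, obtained by
    # halving the interval count with rounding up (assumed simplification).
    levels = [(0, h, n_final)]
    n = n_final
    for k in range(1, K + 1):
        n = math.ceil((n - 1) / 2) + 1  # halve the number of grid intervals
        levels.append((k, 2 ** k * h, n))
    return levels
```

For a 132-point final grid with K = 3, this yields per-axis counts 132, 67, 34, and 18 at spacings h, 2h, 4h, and 8h.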

Coarse-grid approximation
The computation times concerning the coarse-grid approximation and the noise level of the input PTV data are compared in Fig. 14. Excluding file system input and output, the elapsed times are averaged over all 64 sub-volumes and then normalized by the clean case with K = 0. The proportions for the respective grids when K = 3 are shown as percentages of the total and are consistent regardless of the noise level of the PTV input. Note that f_AC = 1/8 is applied to all four cases, and the corresponding computation time is 0.4% of the total. The noisy input induces approximately 9% more computation time. The coarse-grid approximation enables slightly faster computation, consistent with the discussion of Fig. 8 in Sect. 3.2. Here, the larger threshold for the Poisson equations compensates for the computation-time overhead caused by the reconstructions on the coarse grids, Ω_k with k > 0, which is estimated as 15% of the computation time for the final grid, Ω.

Experimental assessment
The experimental assessment deals with an existing data set from a time-resolved tomographic PIV measurement of a circular jet in water (Violato and Scarano 2011). The experimental parameters are the same as in the reference, where the reader can find detailed information, except for the camera framerate and the measurement volume. The diameter of the nozzle exit, D, was 10 mm, and the exit velocity of the jet was controlled to be 0.5 m/s; the corresponding Reynolds number based on D, Re_D, is 5000. As tracers, neutrally buoyant polyamide particles of 56 µm diameter were seeded at a concentration of 0.037 ppp. The tomographic system, consisting of a double-pulse Nd:YLF laser and four CMOS cameras with 1024 × 1024 pixel resolution, recorded the image sequences at a framerate of 1.3 kHz. The cylindrical measurement volume was 50 mm (diameter) × 70 mm (height). The time-resolved particle tracks are evaluated using the STB technique. Particle positions, velocities, and material accelerations are obtained from the second-order polynomial trajectory model with a filter length of 9, i.e., L_x = L_u = L_Du/Dt = 4 (see Appendix A), and are then used as input data for the present method (left column in Fig. 15). Two interrogation domains, Ω_full and Ω_partial, sharing the same center point, are discretized with a grid spacing of h = 0.47 mm, obtained from Eq. (8). Ω_full has dimensions of 28 × 57 × 28 mm³, and Ω_partial has two-thirds of that height (38 mm). The corresponding numbers of grid points are 61 × 121 × 61 and 61 × 81 × 61, respectively.
Here, the top and bottom boundaries of Ω_partial are 20h away from those of Ω_full, and the other four boundaries of Ω_partial coincide with parts of the boundary of Ω_full. The flow fields on Ω_full are reconstructed using {f_AC = 1/8, K = 3}. For Ω_partial, two parameter combinations, i.e., {f_AC = 1/8, K = 3} and {f_AC = 0, K = 0}, are used. Particles outside the respective interrogation domains are not used in the reconstruction. A total of 157 frames (N_t), corresponding to 120 ms, are reconstructed. Figure 15 compares the instantaneous flow fields using isosurfaces of λ_2 and Du_y/Dt, respectively. All the reconstructed flow fields look reasonable, even though the computational boundaries, fixed as Dirichlet boundary conditions, are located only 3h away from the outermost grid points because of N_p = 2. Comparing the results from Ω_full and Ω_partial with the same computation parameters, there are slight differences near the top boundary of Ω_partial (at y_1 in the second and third columns of Fig. 15). The reconstructions with and without f_AC and K look similar to each other, except that the isosurface structures near the top and bottom boundaries of Ω_partial are slightly smaller for {f_AC = 0, K = 0} than for {f_AC = 1/8, K = 3} (third and fourth columns of Fig. 15): without the additional constraints and the coarse-grid approximation, the present method insufficiently recovers the boundary condition. This accords with the results in Fig. 10. To investigate further, the time histories of selected vector components at z = 0 on the bottom boundary of Ω_partial (y = y_0) are obtained: u_y in Fig. 16a, Du_y/Dt in Fig. 17a, and Du_x/Dt in Fig. 18a. Figures 16-18 also provide the contours on the xz-plane sampled at selected time steps that describe well the characteristic structure of the vortex ring: t = 74 ms for Figs. 16b and 18b, and t = 67 ms for Fig. 17b.
Because the vortex ring passing through y_0 is expected to have a clear ring shape, the reconstruction at y_0 on Ω_partial can be assessed by comparison with that on Ω_full. In Fig. 16a, the most variable component, u_y, shows similar time histories for all computation conditions but slightly more fluctuation along the time axis for Ω_partial. Defining r = (x² + z²)^(1/2), the maximum u_y is observed at r = 0.4D and the minimum at r = 0.8D (Fig. 16b). In Fig. 16b, despite the unused particles beyond the interrogation boundary, the reconstruction on Ω_partial with {f_AC = 1/8, K = 3} shows good agreement with that on Ω_full, whereas the reconstruction with {f_AC = 0, K = 0} shows an underestimated distribution. The same trend is emphasized in the material derivative, as shown in Figs. 17 and 18, where the results from Ω_partial are more underestimated and noisier than in Fig. 16. Here, using {f_AC = 1/8, K = 3} helps VIC+ recover the vectors on the outermost grid points close to the reconstruction on Ω_full, and thus to follow the flow trends both along the time axis and in the xz-plane.
Unlike at the bottom boundary of Ω_partial (y = y_0), the flow structure at the top boundary (y = y_1) is more complicated because of the vortex breakdown. To quantify the disparities caused by the PTV particles left unused due to the smaller interrogation domain (Ω_partial), the root-mean-square (RMS) deviations of u and Du/Dt from the result on Ω_full are evaluated at each grid point over the time series through Eqs. (57) and (58). Figure 19 compares the RMS deviations on the xz-planes at y = y_0, y_0 + 4h, y_1 - 4h, and y_1. The RMS deviations are largest at the boundaries (Fig. 19a and d) and, understandably, smaller on the 4h inner planes (Fig. 19b and c). Note that the grid points on the 4h inner planes do not suffer the direct penalty of the sparser particle tracks; their RMS deviations can thus be regarded as being affected by the outer grid points, including the computation boundary. Because the vortex breakdown generates a cluster of small and complex flow structures, whose degrees of freedom (DOF) exceed the number of tracer particles, the reconstruction at y = y_1 can be regarded as an under-determined problem. In this case, even using f_AC and K cannot resolve it, as shown in Fig. 19a. This implies that the ratio of the particle concentration, C, to the DOF strictly limits the maximum reconstruction performance, and therefore more tracer particles and a correspondingly smaller h are required. Figure 20 shows the xz-plane-averaged u_RMS and Du/Dt|_RMS with respect to y. The averaged RMS deviations at y_1 are about twice those at y_0 because of the vortex breakdown. The results from {f_AC = 1/8, K = 3} decrease faster with distance from the boundaries than those from {f_AC = K = 0} and remain significantly smaller in the inner domain.
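Since Eqs. (57) and (58) are not reproduced here, the following sketch assumes the plain per-grid-point RMS of the deviation over the time series; the paper may additionally normalize by a reference magnitude:

```python
import numpy as np

def rms_deviation(field_partial, field_full):
    # field_*: arrays of shape (n_t, nx, ny, nz) sampled at common grid points
    # of the two interrogation domains.  Returns the per-grid-point RMS of the
    # deviation over the time series (an assumed form of Eqs. (57)-(58)).
    return np.sqrt(np.mean((field_partial - field_full) ** 2, axis=0))
```

Averaging the result over each xz-plane then gives profiles with respect to y, as plotted in Fig. 20.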

Conclusion
The present study was conducted to enable efficient and convenient application of the VIC+ method. The reconstruction starts from all-zero fields to avoid the laborious task of estimating the initial condition and thus to ensure safe convergence. The scale parameters are redefined so that they are evaluated from the PTV measurement instead of the initial condition. To save computational resources by minimizing the padded computation volume, the additional constraints derived from physical laws, adopted from FlowFit, and the coarse-grid approximation are implemented. The additional constraints help the iterative optimization to progressively correct the boundary condition during convergence; their consequent adjoint procedure is integrated into that of VIC+. Since the coarse-grid approximation deals with a larger physical space than the final grid, it provides a pseudo-effect of a larger padding size, and the stratified thresholds in the Poisson solvers help to save computation time. An instantaneous pressure field is optimized simultaneously with the velocity and acceleration fields. The numerical assessment using the DNS data quantifies the effects of the padding size and the newly introduced ideas: using the additional constraints and the coarse-grid approximation shows improved results even when the smallest padding size is adopted. The present method retains the virtues of the recent data assimilation methods, i.e., VIC+ and FlowFit; moreover, it is confirmed that the ability to reconstruct an instantaneous pressure field without acceleration input data is inherited from VIC+. In the experimental assessment, which deals with a time-resolved PTV measurement and thus a low dynamic range, the present method shows well-reconstructed flow fields. The effects of the computation boundary and of the lack of particle information beyond the outermost grid points are analyzed using two computation domains of different dimensions.
The corresponding results indicate that the present method can recover the flow structures in the vicinity of the boundary from the limited PTV input information.

Uncertainty of polynomial trajectory model
The uncertainties of a PTV measurement can be expressed in terms of the uncertainty of the particle position. Let us assume that a second-order polynomial, symmetric in the discrete-time domain and hence defined over an odd number of time steps (2L + 1), is used as the trajectory model, and that the standard deviation of the particle position errors, ε_x, is uniform (Richter 1995). The least-squares fitting uncertainties can then be expressed with the third-order derivatives (Richter 1995; Jensen and Pedersen 2004; de Kat and van Oudheusden 2012; van Gent et al. 2018; among others) through Eqs. (59)-(61). When the third-order terms are small enough to be neglected, the ratio γ can be expressed only with L_u and L_Du/Dt.
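The fit uncertainties can also be evaluated numerically from the coefficient covariance of the least-squares problem, which provides a convenient cross-check of Eqs. (59)-(61). A minimal sketch (the function name and unit defaults for ε_x and Δt are illustrative):

```python
import numpy as np

def fit_uncertainty(L, dt=1.0, eps_x=1.0):
    # Second-order polynomial trajectory model x(t) = a0 + a1*t + a2*t**2,
    # fitted over the 2L+1 symmetric time steps t = -L*dt, ..., L*dt.
    # Coefficient covariance of the least-squares fit: eps_x**2 * (A^T A)^-1.
    t = np.arange(-L, L + 1) * dt
    A = np.vstack([np.ones_like(t), t, t ** 2]).T
    cov = eps_x ** 2 * np.linalg.inv(A.T @ A)
    eps_u = np.sqrt(cov[1, 1])        # velocity uncertainty (coefficient a1)
    eps_a = 2.0 * np.sqrt(cov[2, 2])  # acceleration uncertainty (x'' = 2*a2)
    return eps_u, eps_a
```

Because the odd and even polynomial coefficients decouple on a symmetric stencil, eps_u reproduces the classical slope-variance result ε_x √(3 / (L(L+1)(2L+1))) / Δt.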

Constant factors (λ_AC and f_DN) from the L-curve method
The constant factors λ_AC and f_DN balance the measurement-based and physical-law-based cost functions. Their appropriate selection is therefore essential to avoid under-corrected or over-smoothed results caused by factors that are too small or too large, respectively. Since they regularize the reconstructed flow field, they can be obtained with the L-curve approach (Hansen 1992; Hansen and O'Leary 1993), which was proposed for ill-posed problems. The L-curve plots the regularization-based residual on the y-axis against the measurement-based residual on the x-axis for a varying regularization parameter. Its L-shaped corner indicates where the residual that dominates the solution shifts from one to the other; hence, the corresponding regularization parameter can be accepted as a balanced choice. The present study proposes universal factors, obtained empirically from several PTV cases, to ensure a reasonable reconstruction at the first attempt. Two experimental cases conducted in air are additionally considered. The following table briefly summarizes the acquisition frequency, grid spacing, and nondimensionalized magnitudes of velocity, material acceleration, and vorticity. Figure 21 illustrates the L-curves, with the first row obtained from the synthetic PTV data and the second row from the experimental cases. Here, J_PTV, J_AC, and J_p are defined through Eqs. (63)-(65), of which the last reads J_p = J_irr(∇p) + J_sol(∇p). The first and second columns of Fig. 21 show two parametric curve sets, for λ_AC with f_DN = 1 and for f_DN with λ_AC = 8, respectively, and indicate that the choices λ_AC = 8 and f_DN = 1 are appropriate. The third column shows how an increased f_DN works for noisy PTV inputs by focusing only on the disparity in material acceleration, and justifies the choice of f_DN = 5 for the noisy case.
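The L-curve procedure itself is generic and can be illustrated with a small Tikhonov-regularized least-squares problem; the system, weights, and norms below are illustrative stand-ins, not the paper's cost functions:

```python
import numpy as np

def l_curve(A, b, lambdas):
    # For each regularization weight lam, solve
    #   min_x ||A x - b||^2 + lam * ||x||^2
    # via the normal equations, and record the two residual norms that form
    # the L-curve: (measurement residual, regularization residual).
    pts = []
    for lam in lambdas:
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        pts.append((np.linalg.norm(A @ x - b), np.linalg.norm(x)))
    return pts
```

Sweeping lam over several decades and plotting the recorded pairs on log-log axes traces the L shape; the corner marks the balanced choice of the regularization parameter, analogous to selecting λ_AC and f_DN in Fig. 21.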

Fig. 21 L-curves for λ_AC (a1, b1) and f_DN (a2, a3, b2, b3). Solid lines indicate the PTV data assessed in the present study, while dashed lines indicate the additionally considered cases.