1 Introduction

In recent years, three-dimensional flow measurement techniques have developed rapidly, particularly in the form of tomographic particle-image velocimetry (PIV) (Elsinga et al. 2006). Long before that, however, three-dimensional particle-tracking velocimetry (PTV) was already being used (Chang et al. 1984; Kasagi and Nishino 1991; Maas et al. 1993) to measure volumetric flows at considerably lower computational expense. Continued research and development have been conducted to improve three-dimensional PTV performance. In particular, the matching algorithms for two consecutive particle distributions, which pose a significant challenge due to the inherent ambiguities of three-dimensional reconstruction techniques, have been the focus of further development.

The most widespread approach is the probability relaxation algorithm proposed by Baek and Lee (1996), which is based on the assumption that neighboring particles exhibit similar displacements. Using this rigidity assumption, it iteratively adapts the probability of all possible displacements. Ohmi and Li (2000) introduced various improvements to this method. Most notably, an empirical relation for rigidity identification replaced the constant deviation threshold, which might not be suitable for all flow conditions, e.g., regions with high velocity gradients. An alternative approach to determine suitable parameters was proposed by Kim and Lee (2002), who used a regular PIV evaluation as a predictor to approximate suitable values locally for a subsequent PTV evaluation. A matching approach based on neuronal propagation was compared to the relaxation and nearest-neighbor algorithms by Pereira et al. (2006); the relaxation approach achieved a significantly better matching performance. Mikheev and Zubtsov (2008) proposed using the particle intensity to identify similar particles across exposures, providing additional information for the correct identification of particle correspondences. While this method can be applied in two-dimensional measurements, its suitability for volumetric flows has yet to be shown. Since the final intensity of reconstructed particles in a measurement volume is subject to multiple influences beyond the particles’ properties, e.g., overlapping particle images or intensity being assigned to ghost particles, a robust use of this technique might prove difficult.

To overcome matching ambiguities and reduce ghost particle levels, some techniques exploit the temporal coherence of real particles. This reduces the number of ghost particles, which typically do not follow coherent trajectories over multiple exposures. Additionally, a known trajectory can be used as a predictor for the next particle location to significantly lower matching ambiguities. One example of such an approach was investigated by Willneff (2002), who achieved a significantly improved matching rate. Kieft et al. (2002) employed a similar scheme, in which even a simple nearest-neighbor matching in conjunction with a trajectory-based predictor yielded satisfactory results. However, only a relatively small number of particles was used in their study. A more modern predictor method was introduced by Schanz et al. (2014), whose approach incorporates a time-marching track prediction and screen-space projection optimizations, which significantly improve the spatial accuracy of the particle distributions. Additionally, this method allows the evaluation of densely seeded volumes, even surpassing typical tomographic PIV (tPIV) parameters. However, time-series recordings cannot be obtained for all volumetric flow measurement tasks. Depending on the magnification, the velocities, or simply the availability of high-speed cameras, such trajectory-based methods might not be feasible. For such measurements, multi-pulse techniques were developed by Novara et al. (2016a, b, 2019). However, these techniques require a significantly more complex measurement setup including two lasers and twice the number of cameras, or they rely on two illuminations being recorded in one camera exposure. Therefore, double-exposure techniques remain the most common type of volumetric flow measurement.

Owing to tomographic particle-image velocimetry, three-dimensional flow measurements have become ubiquitous in experimental fluid mechanics. The area of application has been extended through faster evaluation techniques, e.g., more efficient processing algorithms (Atkinson and Soria 2009; Discetti and Astarita 2012), and motion-tracking enhancement methods have improved the quality of the results (Novara et al. 2010). Motion-tracking enhancement results in a more robust evaluation of densely seeded double-exposure recordings at significantly increased computational cost. Unlike particle-tracking velocimetry, tomographic PIV cross-correlates tomographically reconstructed volumetric intensity distributions, analogous to two-dimensional PIV. While this has proved more robust than current explicit matching techniques in PTV, the exact particle correspondences and locations are lost, and the cross-correlation result can be considered the mean displacement of each interrogation volume, including multiple real and potentially coherent ghost particles (Elsinga et al. 2011).

In this study, an iterative matching scheme, which improves current three-dimensional double exposure PTV techniques, is presented. It is based on the relaxation approach by Baek and Lee (1996) and exploits known techniques in PIV and PTV. Hence, it is suitable for densely seeded measurement volumes, while retaining the advantage of low computational cost compared to tomographic PIV. This method is described in Sect. 2. Particularly, its robustness against high ghost particle levels, which are expected from triangulated dense particle clouds, and the evaluation parameters describing the rigidity, e.g., the considered neighbor particles and the allowed displacement deviations, are demonstrated for synthetic data in Sect. 3. In Sect. 4, an integrated iterative evaluation scheme consisting of the particle position triangulation and matching is introduced and applied to synthetic images and real measurement data. The iterative evaluation scheme is analyzed based on the synthetic data in Sect. 4.1. The results from real recordings are compared to standard tomographic PIV results in Sect. 4.2. Finally, a summary and conclusions are given in Sect. 5.

2 Iterative particle matching

The evaluation scheme is based on the probability relaxation algorithm (Baek and Lee 1996) illustrated in Fig. 1. In a double exposure measurement setup, particle positions can be obtained via simple triangulation yielding one point cloud per exposure. For each position \(\mathbf {x}_i\) in the first point cloud, all \(N_{\mathrm{p},i}\) potential candidate points \(\mathbf {y}_j\) in the second exposure are determined such that the norm of the resulting displacement \(\mathbf {d}_{ij} = \mathbf {y}_j - \mathbf {x}_i\) remains within a predefined search range \(R_\mathrm{s}\). Additionally, all \(N_\mathrm{n}\) neighbor particles \(\mathbf {x}_k\) in the first exposure, i.e., within \(|\mathbf {x}_k-\mathbf {x}_i| \le R_\mathrm{n}\), are detected. Each potential match \(i \rightarrow j\) as well as the possibility of not finding a suitable match are initially considered equally likely and assigned the probabilities \(P_{ij}^{(0)}\) and \(P_{i}^{*(0)}\)

$$\begin{aligned} P_{ij}^{(0)} = P_{i}^{*(0)} = \frac{1}{N_{\mathrm{p},i} + 1}. \end{aligned}$$
(1)
Fig. 1 Schematic of the probability relaxation scheme (Baek and Lee 1996)

The probabilities of all candidate displacements \(P_{ij}\) are iteratively updated by

$$\begin{aligned} P_{ij}^{(m+1)} = P_{ij}^{(m)}\, \left( A + B \sum _{k=1}^{N_\mathrm{n}}{\sum _{l \in N_{p,k}}{Q_{ij,kl} P_{kl}}} + C F_{ij} \right) \end{aligned}$$
(2)

until convergence is reached. The term \(Q_{ij,kl}\) indicates whether the displacements \(\mathbf {d}_{ij}\) and \(\mathbf {d}_{kl}\) fulfill the rigidity condition, i.e., whether their deviation is smaller than \(R_\mathrm{q}\). The term \(F_{ij}\) increases a displacement’s updated probability if it is close to a predicted position \(\tilde{\mathbf {y}}_i\):

$$\begin{aligned} Q_{ij,kl} = {\left\{ \begin{array}{ll} 1 &{} \text {if } |\mathbf {d}_{ij} - \mathbf {d}_{kl}| \le R_\mathrm{q} \\ 0 &{} \text {otherwise} \end{array}\right. } \text {,} \end{aligned}$$
(3)
$$\begin{aligned} F_{ij} = {\left\{ \begin{array}{ll} 1 &{} \text {if } |\mathbf {x}_i + \mathbf {d}_{ij} - \tilde{\mathbf {y}}_i| \le R_\mathrm{q} \\ 0 &{} \text {otherwise} \end{array}\right. }. \end{aligned}$$
(4)

Typical values for the iteration parameters A, B, and C are 0.3, 3, and 1. Since this iteration scheme allows the sum of all probabilities to exceed unity, the candidate match and no-match probabilities are normalized after each iteration. In the originally proposed algorithm, this is the only mechanism by which the no-match probability \(P_{i}^{*}\) is updated. For a more detailed description of the matching process, the reader is referred to Baek and Lee (1996).
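To make the update loop concrete, a minimal Python sketch of one relaxation pass following Eqs. (1)–(4) is given below. The per-particle data layout, the fixed iteration count, and the use of a predicted displacement d_pred instead of the predicted position \(\tilde{\mathbf {y}}_i\) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def relaxation_pass(disp, neighbors, R_q, d_pred=None,
                    A=0.3, B=3.0, C=1.0, n_iter=50):
    """One relaxation matching pass (sketch of Eqs. 1-4).

    disp[i]      : (N_cand_i, 3) array of candidate displacements d_ij of particle i
    neighbors[i] : indices of the N_n nearest neighbors of particle i
    d_pred[i]    : predicted displacement of particle i (None disables the F term)
    Returns per-particle probability vectors; the last entry is the no-match
    probability P_i*, which only changes through the normalization step.
    """
    n = len(disp)
    P = [np.full(len(disp[i]) + 1, 1.0 / (len(disp[i]) + 1)) for i in range(n)]  # Eq. (1)
    for _ in range(n_iter):              # fixed iteration count instead of a
        P_new = [p.copy() for p in P]    # convergence criterion, for brevity
        for i in range(n):
            for jj, d_ij in enumerate(disp[i]):
                # rigidity term of Eq. (2): probability mass of similar neighbor
                # displacements according to Eq. (3)
                q_sum = 0.0
                for k in neighbors[i]:
                    ok = np.linalg.norm(disp[k] - d_ij, axis=1) <= R_q
                    q_sum += P[k][:-1][ok].sum()
                # predictor term of Eq. (4), written for a predicted displacement
                f = 0.0 if d_pred is None else float(
                    np.linalg.norm(d_ij - d_pred[i]) <= R_q)
                P_new[i][jj] = P[i][jj] * (A + B * q_sum + C * f)
            P_new[i] /= P_new[i].sum()   # normalize match and no-match probabilities
        P = P_new
    return P
```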

Particularly in densely seeded measurement volumes, a very high ghost level is expected. Tomographically reconstructed volumes retain differences in intensity, size, and shape between ghost and real particles (de Silva et al. 2013), allowing a relatively robust cross-correlation. With the computationally much less expensive triangulation, these distinguishing features are not available, such that the algorithmic complexity is shifted to the matching step.

One of the major difficulties of the relaxation matching approach is finding suitable values for the matching parameters \(R_\mathrm{s}\), \(R_\mathrm{n}\), and \(R_\mathrm{q}\). For the candidate search radius \(R_\mathrm{s}\), a rough and conservatively large estimate is generally available, since the user usually adjusts the maximum displacement through the pulse separation.

Determining a suitable neighbor radius \(R_\mathrm{n}\) is more complicated since it depends on multiple factors, e.g., seeding density, homogeneity, and ghost level. Additionally, this parameter severely influences the required computing time since it determines the number of neighbors and their candidate matches for which \(Q_{ij,kl}\) has to be evaluated. To avoid readjusting this parameter for each measurement situation, the approach is modified to use a fixed number of neighbors \(N_\mathrm{n}\). This results in a more consistent and predictable behavior, which is independent of the seeding density and possible inhomogeneity. While regions of low seeding density cannot resolve small flow structures, a smooth transition to more highly resolved regions is guaranteed, which prevents spurious matches in such regions from influencing their neighbors. When a sufficiently large neighbor count is selected, the probabilities can converge even in point clouds dominated by ghost particles.
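In practice, the candidate and neighbor lists are the result of standard range and k-nearest-neighbor queries. The snippet below shows how they could be built with a k-d tree, assuming scipy is available; the function name and data layout are illustrative and not taken from the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def build_search_lists(x, y, R_s, N_n=27):
    """Candidate points within R_s and the N_n nearest neighbors per particle.

    x, y : (N, 3) triangulated positions of the first and second exposure.
    """
    cand = cKDTree(y).query_ball_point(x, r=R_s)       # candidate indices j per particle i
    # query k = N_n + 1 neighbors because the closest point is the particle itself
    _, knn = cKDTree(x).query(x, k=N_n + 1)
    neighbors = [row[1:] for row in knn]
    disp = [y[c] - x[i] for i, c in enumerate(cand)]   # candidate displacements d_ij
    return cand, neighbors, disp
```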

Finally, the allowed deviation \(R_\mathrm{q}\) between neighboring displacements that still satisfy the rigidity condition is critical for good results. The ideal value varies greatly between measurements or even between locations within a single measurement. In a uniform flow, the accepted deviation has to account only for the spatial uncertainty of the particle positions, since the theoretical displacements are identical. In realistic flows, however, the deviations between neighboring true displacements strongly depend on the local flow gradient and the distance between neighboring particles. Hence, finding a generally accepted value is hardly possible, and a relatively large value has to be used to avoid discarding true particles. This impairs the convergence rate of the algorithm and increases the risk of detecting spurious displacements. Multiple approaches have been proposed to solve this problem. Ohmi and Li (2000) introduced an empirical linear relation in which the allowed deviation is scaled by the local absolute velocity. This yields a somewhat improved matching performance at the cost of splitting one evaluation parameter into a scaling factor and a constant. Additionally, the relation does not consider the reason for the locally varying optimal parameters. Kim and Lee (2002) used a predictor based on standard PIV to deduce the spatial distribution of the evaluation parameters. In the context of three-dimensional measurements, this seems hardly feasible since it would entail an entire tomographic reconstruction, which would negate any reduction in computational cost.

Volumetric double exposure particle distributions are classified by the tracking parameter \(\Phi \), which is the ratio of the mean distance between neighboring particles \(d_0\) to the maximum displacement between the exposures \(|\mathbf {d}|_{\text {max}}\)

$$\begin{aligned} \Phi = \frac{1}{|\mathbf {d}|_{\text {max}}}\underbrace{\left( \frac{3}{4 \pi \rho _0}\right) ^{\frac{1}{3}}}_{d_0}, \end{aligned}$$
(5)

with the volumetric number density of particles \(\rho _0\). The tracking parameter \(\Phi \) gives an indication of the complexity and matching potential of the measurement situation (Pereira et al. 2006). Since the modified matching approach is meant for densely seeded volumes, a high number of ghost particles is expected, resulting in an extremely low tracking parameter. Pereira et al. (2006) showed that at tracking parameters below 0.5, the number of detected matches of the relaxation algorithm deteriorates severely, although it remains superior to the tested alternative methods in terms of computational efficiency. Due to the ghost particle count in particular, a tracking parameter above this threshold cannot be guaranteed. Therefore, the matching algorithm is extended to a multi-pass scheme.
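As a small illustration of Eq. (5), the tracking parameter follows directly from the seeding density and the maximum displacement; the numbers in the usage line merely reproduce the order of magnitude used later in Sect. 3 and are not prescribed values.

```python
import math

def tracking_parameter(rho_0, d_max):
    """Tracking parameter Phi = d_0 / |d|_max according to Eq. (5)."""
    d_0 = (3.0 / (4.0 * math.pi * rho_0)) ** (1.0 / 3.0)   # mean particle distance
    return d_0 / d_max

# example: 10,000 particles in a 100 x 100 x 100 volume, maximum displacement of 10 units
phi = tracking_parameter(rho_0=10000 / 100**3, d_max=10.0)  # ~0.29
```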

Fig. 2 Multi-pass dewarping procedure

The underlying principle, schematically illustrated in Fig. 2, is similar to PIV multi-pass evaluations (Soria 1996). This allows PIV algorithms to decrease the interrogation window size independent of the total displacement after an initial evaluation pass has detected and compensated the large-scale motions. Similarly, the iterative particle matching scheme evaluates the original particle distributions \(\mathbf {x}_i^{(0)}\) and \(\mathbf {y}_j\) during its initial pass, which yields only a fraction of all real matches due to the low tracking parameter \(\Phi \). Based on the \(N_\mathrm{n}\) closest detected displacements \(\mathbf {d}_{kl}^{(0)}\) of the first pass, a displacement prediction \(\tilde{\mathbf {d}}_{ij}^{(0)}\) is determined locally for each particle position \(\mathbf {x}_i^{(0)}\) in the first exposure. If \(\mathbf {d}_{ij}^{(0)}\) is already available, it is included in the generation of the displacement predictor \(\tilde{\mathbf {d}}^{(0)}_{ij}\). The result is used to dewarp the particle positions \(\mathbf {x}_i^{(0)}\) according to

$$\begin{aligned} \mathbf {x}^{(1)}_i = \mathbf {x}^{(0)}_i + \tilde{\mathbf {d}}_{ij}^{(0)} \text {.} \end{aligned}$$
(6)

The matching procedure between the new distribution \(\mathbf {x}_i^{(n_\text {pass})}\) and \(\mathbf {y}_j\) can be repeated multiple times until a desired maximum dewarping pass number \(n_\text {pass}\) is reached. This scheme significantly improves the tracking parameter \(\Phi \) during each matching pass, since it massively reduces the effective maximum displacement \(|\mathbf {d}|_{\text {max}}\) for the subsequent matching pass, which leads to an increased displacement detection rate. Ideally, this eventually results in a uniform zero displacement field, which can easily be matched since \(\Phi \) approaches infinity. The resulting particle correspondences \(i \rightarrow j\) are used to determine the final displacements \(\mathbf {d}_{ij}\) based on the original particle positions \(\mathbf {x}_i^{(0)}\) and \(\mathbf {y}_j\). Additionally, the search range \(R_\mathrm{s}\) can be reduced between passes, which reduces the computational cost immensely. This procedure allows for a relatively small \(R_\mathrm{q}\), which significantly reduces the number of spurious matches. Since the local displacement gradients are hidden from the matching algorithm by the dewarping pass, no spatial variation of \(R_\mathrm{q}\) is required.
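The pass structure can be sketched as follows. For brevity, the local predictor is approximated here by an inverse-distance weighting of the closest detected matches instead of the overdetermined third-order polynomial fit described below, the unstructured median test is omitted, the matching routine is passed in as a callable with a hypothetical interface, and the search-range reduction factor is an arbitrary assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

def dewarp(x0, match_x, match_d, k=27):
    """Predict a smooth displacement field from the matches found so far and
    apply it to the original first-exposure positions (Eq. 6)."""
    k = min(k, len(match_x))
    dist, idx = cKDTree(match_x).query(x0, k=k)
    if k == 1:
        dist, idx = dist[:, None], idx[:, None]
    w = 1.0 / (dist + 1e-12)                      # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    d_pred = np.einsum('nk,nkc->nc', w, match_d[idx])
    return x0 + d_pred

def multipass_match(x0, y, match_fn, R_s, n_pass=5, shrink=0.5):
    """Multi-pass matching with dewarping (sketch of Fig. 2).

    match_fn(x, y, R_s) must return a dict {i: j} of correspondences, e.g., one
    relaxation pass as sketched above (hypothetical interface).
    """
    x, matches = x0.copy(), {}
    for _ in range(n_pass):
        matches.update(match_fn(x, y, R_s))       # match the dewarped distribution
        if matches:
            i_idx = np.fromiter(matches.keys(), dtype=int)
            j_idx = np.fromiter(matches.values(), dtype=int)
            x = dewarp(x0, x0[i_idx], y[j_idx] - x0[i_idx])
        R_s *= shrink                             # reduce the search range between passes
    # final displacements are based on the original, undewarped positions
    return {i: y[j] - x0[i] for i, j in matches.items()}
```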

It is of paramount importance to prevent spurious matches, which could lead to losing neighboring matches entirely. Two measures are taken to avoid this behavior and ensure convergence. First, an overdetermined third-order polynomial spatial interpolation is used for the dewarping step, resulting in a smooth dewarping displacement distribution, which dampens the effect of possibly invalid spurious matches. Second, an iterative unstructured median test is applied after each matching pass, similar to Mikheev and Zubtsov (2008). The potential loss of some valid displacements is preferable to a potentially spurious prediction, which could prevent convergence completely, since even a coarse and imprecise prediction and dewarping step improves the tracking parameter by reducing the remaining displacement.

The multi-pass matching procedure also resolves issues with the no-match probability. The original algorithm can only detect a non-matching particle if not a single candidate displacement has a neighbor which fulfills the rigidity condition; only then does the no-match probability increase. Particularly in densely seeded volumes with many candidate displacements per particle, this is highly unlikely, such that invalid matches become more likely. To address this issue, Ohmi and Li (2000) proposed a modified, stricter updating scheme, which increases the no-match probability if only few neighbors match a candidate displacement in comparison to the total number of neighboring displacements. However, this introduces more empirically determined input parameters, which require tuning for each application. With the displacement prediction and the decreasing search range of the proposed multi-pass matching, particles without a valid match in the second exposure simply do not detect any candidate displacement after a few passes.

Besides the matching procedure, the triangulation is incorporated in the evaluation procedure as shown in Fig. 3. To keep \(\Phi \) high and provide a well-conditioned particle distribution for the matching algorithm, the triangulation and the matching procedure are performed multiple times with the allowed triangulation error \(\epsilon \), as defined by Wieneke (2008), being increased after each iteration. At low \(\epsilon \) values, only few particles are detected and the ghost level stays low, resulting in a high tracking parameter \(\Phi \), even though not all real particles in one frame might have their corresponding match in the second frame and vice versa. The resulting displacement predictor virtually reduces the displacements such that \(\epsilon \) can be raised, which increases the particle density without decreasing the tracking parameter to inappropriate values. After a few iterations of the entire process, the displacement prediction becomes accurate enough that very dense particle distributions can be matched accurately. Typically, three to five such loops suffice before additional iterations yield only few additional matches.

Fig. 3 Three-dimensional PTV algorithm. Novelties are marked by bold lines

In a recent study by Fuchs et al. (2016), the chance of spurious matches is reduced by imposing an unambiguity condition on a per-particle basis. Particles whose constituting image peaks are associated with more than one particle in more than two views are discarded. This method reduces the number of ghost particles, which always share an image peak with a real particle, simplifying the matching problem immensely. However, particles which fulfill this unambiguity condition become increasingly rare at seeding densities typical of tomographic PIV measurements. When the ghost level increases significantly, it is likely that most image peaks are shared with a ghost particle and discarded. This is of particular significance when particle positions are triangulated instead of detected by a MART reconstruction, because triangulation typically results in higher ghost levels. Therefore, a somewhat less intrusive method is proposed in the context of the iterative triangulation. Following the initial triangulation and matching pass, image peaks which have already contributed to a detected displacement, i.e., for which at least one resulting particle has successfully been matched, are not considered in the subsequent triangulation pass. The particles of such matches are simply retained from the previous triangulation pass. In this one match per peak (OMPP) approach, a match between two particles serves as a form of validation, since only a small fraction of the coherent ghost particles also fulfills the rigidity condition. Unlike the method proposed by Fuchs et al. (2016), it does not discard recorded particles and image peaks entirely. Once a real displacement has been detected, other particles corresponding to the same peak in the particle image have an increased probability of being ghost particles. However, overlapping particle images can potentially lead to some particles being neglected if they are not matched in the same triangulation pass. This procedure significantly reduces the required computing time at the expense of some particle matches. Overall, this approach yields similar results in terms of valid particle detection rate, at the expense of a non-negligible ghost fraction, when compared to the method proposed by Fuchs et al. (2016), without requiring the time- and memory-intensive MART reconstruction. The robustness of the matching algorithm against high ghost rates is showcased in Sect. 3. Note that the iterative triangulation and matching procedure does function even when OMPP is disabled.
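A compact sketch of the resulting outer loop, including the OMPP bookkeeping, is given below. The triangulation and matching routines are passed in as callables since their details are described above; their interfaces, the tolerance schedule, and the peak bookkeeping by index sets are assumptions for illustration, and the displacement predictor carried over between loops is omitted.

```python
def iterative_ptv(images, triangulate, match, eps_schedule=(0.5, 1.0, 1.5)):
    """Outer triangulation/matching loop of Fig. 3 with OMPP (one match per peak).

    triangulate(images, eps, blocked) -> (x, y, peaks_x, peaks_y), where peaks_x[i]
    is the set of image-peak ids constituting particle i (peaks_y analogously) and
    peaks listed in `blocked` are ignored.
    match(x, y) -> dict {i: j} of correspondences, e.g., the multi-pass matcher.
    """
    blocked = set()              # image peaks already used by a matched particle
    displacements = []
    for eps in eps_schedule:     # allowed triangulation error, raised every loop
        x, y, peaks_x, peaks_y = triangulate(images, eps, blocked)
        for i, j in match(x, y).items():
            displacements.append((x[i], y[j] - x[i]))
            blocked |= set(peaks_x[i]) | set(peaks_y[j])   # OMPP: retire these peaks
    return displacements
```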

Fig. 4 Schematic of a single tetrahedral cell for the divergence minimization process

Finally, the displacements are post-processed by a finite volume divergence reduction approach similar to the method proposed by de Silva et al. (2013). However, instead of a Cartesian grid, an unstructured tetrahedral grid is computed via Delaunay triangulation of the scattered PTV results. Therefore, the local divergence cannot be determined by a simple finite difference stencil. An alternative approach is proposed by Gesemann et al. (2016), in which the scattered displacements are interpolated to a grid based on a B-spline interpolation scheme; a cost function determines the B-spline coefficients such that constraints such as zero divergence can be imposed. A sketch of a tetrahedral cell with its constituting displacements is shown in Fig. 4. Note that for the sake of simplicity, all displacements are denoted by a single index k instead of ij, since the constituting particle indices i and j are of minor importance in this context. Each cell yields one linear equation of the form

$$\begin{aligned} \frac{1}{V_\text {cell}} \sum _{f = 1}^4 \left[ {\mathbf {a}_f \cdot \underbrace{\left( \frac{1}{3}\sum _{k \in f}{\mathbf {d}_k }\right) }_{\mathbf {d}_f} } \right] = 0, \end{aligned}$$
(7)

where \(\mathbf {a}_f\) is the outward-facing normal vector of the face f whose length equals the face area, \(\mathbf {d}_k\) are the displacements at the corners of the current cell, and \(V_\text {cell}\) is the cell volume. The equations of all cells form the zero-divergence matrix D.

$$\begin{aligned} D \cdot \begin{pmatrix} \mathbf {d}^*_1, \mathbf {d}^*_2, \ldots , \mathbf {d}^*_{N_\text {d}} \end{pmatrix}^\text {T} = \mathbf {0}. \end{aligned}$$
(8)

The quantity \(N_\text {d}\) is the number of displacements. The displacements are marked by an asterisk to indicate that they are subject to the optimization and differ from the raw measurement data. To prevent the system from yielding the trivial solution, an additional equation is introduced per degree of freedom, which requires the measured displacements to remain constant. This can easily be achieved by an identity matrix E and a vector consisting of the raw displacements. This results in an overdetermined system, which is solved in a linear least-squares sense. To prevent the system from becoming too stiff or the data from being excessively smoothed, the system equations are weighted such that the residual caused by the divergence of the raw solution equals the theoretical residual which would result from modifying all displacements by their positional uncertainty. The divergence residual can readily be determined as

$$\begin{aligned} \epsilon _\text {div} = \left\Vert D \cdot \begin{pmatrix} \mathbf {d}_1, \mathbf {d}_2, \ldots , \mathbf {d}_{N_\text {d}} \end{pmatrix}^\text {T}\right\Vert _2 \text {.} \end{aligned}$$
(9)

The uncertainty residual is determined by the quadratic addition of the spatial residuals \(\epsilon _i\) and \(\epsilon _j\) of the contributing particle positions \(\mathbf {x}_i\) and \(\mathbf {y}_j\)

$$\begin{aligned} \epsilon _\text {uncert} = \sqrt{\sum _{k = 1}^{N_\text {d}} {\epsilon ^2_k}} = \sqrt{\sum _{k = 1}^{N_\text {d}} {\epsilon ^2_{ij}}} = \sqrt{\sum _{k = 1}^{N_\text {d}} {\left( \epsilon ^2_i + \epsilon ^2_j \right) }} . \end{aligned}$$
(10)

The spatial residuals \(\epsilon _i\) and \(\epsilon _j\) are obtained from the distances between the lines of sight from each camera and the particle position

$$\begin{aligned} \epsilon _i = \frac{1}{\sqrt{N_\mathrm{c}}}\min _{\mathbf {s}} \left\Vert \begin{pmatrix} \mathbf {t}_1 &{} \ldots &{} \ldots &{} 0\\ \vdots &{}\mathbf {t}_2 &{} &{} \vdots \\ \vdots &{} &{} \ddots &{} \vdots \\ 0&{} \ldots &{} \ldots &{}\mathbf {t}_{N_\text {c}} \end{pmatrix} \cdot \mathbf {s} + \begin{pmatrix} \mathbf {b}_1\\ \mathbf {b}_2\\ \vdots \\ \mathbf {b}_{N_\text {c}} \end{pmatrix} - \begin{pmatrix} \mathbf {x}_i\\ \mathbf {x}_i\\ \vdots \\ \mathbf {x}_i \end{pmatrix} \right\Vert _2 \text {,} \end{aligned}$$
(11)

where \(\mathbf {t}_\mathrm{c}\) are the directional vectors corresponding to a peak in camera c, \(\mathbf {b}_\mathrm{c}\) is the camera position, \(N_\mathrm{c}\) is the number of cameras, and \(\mathbf {s}\) is the vector of line-of-sight lengths, which is adjusted to minimize the residual. The residual \(\epsilon _j\) is evaluated analogously by replacing \(\mathbf {x}_i\) with \(\mathbf {y}_j\). The final system of equations reads

$$\begin{aligned} \begin{pmatrix} \epsilon _\text {uncert}/\epsilon _\text {div} \cdot D\\ \hline \\ E \end{pmatrix} \cdot \begin{pmatrix} \mathbf {d}^*_1\\ \mathbf {d}^*_2\\ \vdots \\ \mathbf {d}^*_{N_\text {d}} \end{pmatrix} = \begin{pmatrix} 0\\ \vdots \\ 0\\ \hline \\ \mathbf {d}_1\\ \mathbf {d}_2\\ \ldots \\ \mathbf {d}_{N_\text {d}} \end{pmatrix} . \end{aligned}$$
(12)

Its solution \(\begin{pmatrix} \mathbf {d}^*_1, \mathbf {d}^*_2, \ldots , \mathbf {d}^*_{N_\text {d}} \end{pmatrix}^\text {T}\) consists of the filtered displacements, which exhibit a significantly reduced divergence distribution, while remaining within the uncertainty range of the raw solution.
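In essence, Eqs. (7)–(12) amount to assembling one sparse zero-divergence row per tetrahedron, appending identity rows for the raw displacements, weighting by \(\epsilon _\text {uncert}/\epsilon _\text {div}\), and solving the stacked system in a least-squares sense. The sketch below illustrates one possible assembly with scipy, assuming the uncertainty residual of Eq. (10) has already been evaluated; it is not the authors' implementation.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix, identity, vstack
from scipy.sparse.linalg import lsqr

def divergence_filter(points, disp, eps_uncert):
    """Divergence-reduced displacements d* (sketch of Eqs. 7-12).

    points : (N, 3) particle positions, disp : (N, 3) raw displacements,
    eps_uncert : scalar uncertainty residual from Eq. (10).
    """
    cells = Delaunay(points).simplices                  # (N_cell, 4) vertex indices
    rows, cols, vals = [], [], []
    for row, cell in enumerate(cells):
        p = points[cell]
        vol = abs(np.linalg.det(p[1:] - p[0])) / 6.0    # tetrahedron volume
        if vol < 1e-12:
            continue                                    # skip degenerate cells
        for opp in range(4):                            # face opposite to vertex `opp`
            face = [v for v in range(4) if v != opp]
            q = p[face]
            a = 0.5 * np.cross(q[1] - q[0], q[2] - q[0])
            if np.dot(a, q[0] - p[opp]) < 0:            # orient the face normal outward
                a = -a
            for v in cell[face]:                        # d_f = mean of 3 corner values
                for c in range(3):
                    rows.append(row)
                    cols.append(3 * v + c)
                    vals.append(a[c] / (3.0 * vol))
    n_dof = 3 * points.shape[0]
    D = coo_matrix((vals, (rows, cols)), shape=(len(cells), n_dof)).tocsr()
    d_raw = disp.ravel()
    eps_div = np.linalg.norm(D @ d_raw)                 # Eq. (9)
    A = vstack([(eps_uncert / eps_div) * D,             # weighted zero-divergence rows
                identity(n_dof, format='csr')])         # keep d* close to the raw data
    b = np.concatenate([np.zeros(len(cells)), d_raw])
    return lsqr(A, b)[0].reshape(-1, 3)                 # filtered displacements d*
```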

On homogeneous isotropic Cartesian grids, the filter effect of this procedure is largely confined to the smallest resolvable length scale, since small scales can significantly affect gradient-based quantities, e.g., the divergence, with relatively small alterations of the raw data. This characteristic grid length scale is ubiquitous on such grids, which can result in a distinct peak in the energy spectrum without any physical cause. The scattered data obtained by the PTV scheme do not possess such a distinct length scale due to the variation of the tetrahedral cell sizes. Additionally, only displacements corresponding to actual particles are considered, such that no unnecessary degrees of freedom are introduced, as would be caused on homogeneous isotropic grids by simple interpolation that provides no additional information.

The matching performance of the multi-pass relaxation is investigated using synthetic particle distributions in Sect. 3. The entire iterative process including the triangulation as shown in Fig. 3 is demonstrated using synthetic test cases in Sect. 4.1 and real data from a standard tomographic PIV evaluation in Sect. 4.2.

3 Synthetic matching validation

In this section, the robustness and effectiveness of the proposed matching procedure are investigated independently of any triangulation. Therefore, a synthetic test data set based on a Burgers vortex flow is generated in a three-dimensional domain defined by a volume of \(100 \times 100 \times 100\) arbitrary units. The vortex parameters are identical to the test data set used in Pereira et al. (2006) except for an increased z-wise shear-rate-induced velocity, yielding a somewhat more distorted vortex shape as shown in Fig. 5. This initial test case was chosen to offer a comparison between single-iteration matching, as previously performed by Pereira et al. (2006), and multiple matching iterations. However, it does not necessarily represent real applications well due to its very smooth, low-gradient nature.

Fig. 5 Synthetic test flow for matching performance similar to Pereira et al. (2006). For the sake of visibility, only 2000 vectors are shown

The synthetic displacement field is used to generate two particle clouds consisting of 10,000 homogeneously distributed particles each such that the mean particle distance \(d_0\) is constant

$$\begin{aligned} d_0 = \left( \frac{3}{4\pi }\underbrace{\frac{V}{N_{\text {seed}}}}_{1 / \rho _0}\right) ^{\frac{1}{3}} \approx 2.879. \end{aligned}$$
(13)

The tracking parameter \(\Phi \) can be varied by increasing or decreasing the displacement between the particle position extractions. As discussed in Sect. 2, the maximum particle displacement usually is one of the target parameters during the design of the experiment such that its rough value is known. Therefore, the search range \(R_\mathrm{s}\) is set to \(R_\mathrm{s} = 1.2 \cdot |\mathbf {d}|_{\text {max}}\) in all the following matching procedures.

The neighbor count \(N_\mathrm{n}\) is set to 27 such that only a local region is considered for the rigidity condition, while providing a sufficient statistical basis for the algorithm to converge. With increasing matching pass count, the influence of this value decreases since the dewarped displacement field should be approximately uniform such that all neighbors fulfill the rigidity condition. During the initial passes, a relatively large number is desirable to find all potentially valid neighbors. Including a large number of neighbors should not be critical since the more remote displacements simply yield \(Q_{ij,kl} = 0\) in non-uniform velocity fields such that they do not alter the results of the probability update scheme. The somewhat increased likelihood of finding spuriously valid neighbors when increasing \(N_\mathrm{n}\) is shared among all displacement candidates. Therefore, valid displacements are still identifiable by the count of similar neighbor displacements, which exceeds the spurious valid neighbor rate. However, this diminishes the signal-to-noise ratio, possibly leading to a decreased convergence rate of the probability iteration scheme.

In addition to the tracking parameter \(\Phi \), realistic measurement conditions can vary immensely in ghost count. Therefore, the ghost rate

$$\begin{aligned} c_{\text {ghost}} = \frac{N_{\text {ghost}}}{N_{\text {seed}}} = \frac{N_{\text {ghost}}}{N_{\text {real}} + N_{\text {ghost}}}, \end{aligned}$$
(14)

can be adjusted independently of the tracking parameter. This requires keeping \(d_0\) constant, which is achieved by keeping the total number of particles \(N_{\text {seed}}\) constant as well. Therefore, the total number of particles is split between the number of particles which are part of both point clouds, \(N_{\text {real}}\), and the number of incoherent, randomly placed particles, \(N_{\text {ghost}}\).
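Generating such a test distribution is straightforward, as the following sketch shows. The displacement function is a placeholder passed in by the caller (the Burgers vortex of the reference is not reproduced here), and all values in the usage line are merely examples.

```python
import numpy as np

def make_point_clouds(n_seed, c_ghost, flow_displacement, size=100.0, rng=None):
    """Two synthetic point clouds with n_seed points each and ghost rate c_ghost.

    flow_displacement(x) must return the displacement field at positions x; it
    stands in for the Burgers vortex used in the text.
    """
    rng = np.random.default_rng(rng)
    n_ghost = int(round(c_ghost * n_seed))
    n_real = n_seed - n_ghost                     # particles present in both clouds
    x_real = rng.uniform(0.0, size, (n_real, 3))
    y_real = x_real + flow_displacement(x_real)   # displaced by the test flow
    # incoherent, randomly placed particles keep N_seed (and hence d_0) constant
    cloud_a = np.vstack([x_real, rng.uniform(0.0, size, (n_ghost, 3))])
    cloud_b = np.vstack([y_real, rng.uniform(0.0, size, (n_ghost, 3))])
    return cloud_a, cloud_b

# example: Phi is controlled by scaling the magnitude of the displacement function
clouds = make_point_clouds(10000, 0.75, lambda x: 0.1 * (x - 50.0), rng=0)
```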

Since the values of the parameters \(R_\mathrm{s}\) and \(N_\mathrm{n}\) can easily be selected appropriately, the rigidity parameter \(R_\mathrm{q}\) represents the most critical parameter of the matching scheme. Therefore, synthetic data sets are evaluated while varying the rigidity parameter \(R_\mathrm{q}\), the tracking parameter \(\Phi \), and the ghost rate \(c_{\text {ghost}}\). Additionally, the number of matching passes is varied such that all main user-defined parameters are tested. The conditions under which the results in Fig. 6 are obtained are quite similar to the most challenging conditions tested by Pereira et al. (2006) and serve as a reference for the proposed iterative procedure. Since the procedure is meant to work under more difficult conditions with a high level of ghost particles and even lower tracking parameters, Figs. 7, 8, and 9 present such results to determine a suitable value for \(R_\mathrm{q}\) and a suitable number of dewarping passes under such conditions. Finally, the matching procedure is rerun for multiple ghost rates and tracking parameters with the previously determined value for \(R_\mathrm{q}\) such that the impact of the properties of the particle distributions can be determined. These results are shown in Fig. 10.

Figure 6 shows the matching rate and the spurious match rate under ideal conditions, i.e., no ghost particles are present. Thus, each particle in one point cloud has a corresponding particle in the second point cloud. The tracking parameter \(\Phi \) is set to 0.3. Under similar conditions, i.e., \(\Phi = 0.35\) and \(c_{\text {ghost}}= 0.05\), Pereira et al. (2006) report \(\approx 80\%\) true matches and 12% spurious matches based on the total number of particles. The current method yields no spurious matches for any number of matching passes. Additionally, this result is independent of the selected rigidity parameter. The lack of ghost particles severely limits the number of potential spurious matches, while the guaranteed number of neighbors provides sufficiently detailed local statistics to eliminate the remaining mismatches. As a result, even one matching pass yields a wide range of acceptable \(R_\mathrm{q} \in [0.5, 2]\) which achieves a matching rate well above 90%. In passes two and three, this range is shifted towards lower values for \(R_\mathrm{q}\). Higher pass numbers result in nearly perfect detection rates across the entire range of \(R_\mathrm{q}\).

Fig. 6 Correct (left) and false particle matches (right), \(\Phi = 0.3\), \(c_{\text {ghost}}= 0\)

In Fig. 7, the ghost rate is significantly increased to \(c_{\text {ghost}}= 0.75\) while keeping the tracking parameter constant, i.e., \(\Phi = 0.3\). In this case, a narrow range of \(R_\mathrm{q} \in [1, 1.5]\) yields 80% of the possible matches after one pass. In this configuration, up to half as many false matches, i.e., 40% of the real particles, can occur. Generally, the number of spurious matches increases significantly with less strict rigidity parameters. For passes two and three, a similar trend as in Fig. 6 can be observed. The ideal range is shifted towards smaller values of \(R_\mathrm{q} \in [0.1, 0.8]\), in which spurious detections vanish. Larger values of \(R_\mathrm{q}\) do not produce favorable results. Nonetheless, the detection of false matches is significantly reduced for all values of \(R_\mathrm{q}\) after five passes, while the valid detection fraction reaches unity for \(R_\mathrm{q} < 0.5\) and remains comparable to the results after fewer passes for higher values of \(R_\mathrm{q}\). Generally, it is shown that a strict rigidity parameter, i.e., up to 5% of the maximum displacement \(|\mathbf {d}|_{\text {max}}\), and multiple matching passes can significantly improve the result quality by reducing the number of spurious matches to approximately zero while detecting nearly all true matches. In the study by Pereira et al. (2006), \(R_\mathrm{q}\) was set to 20% of \(R_\mathrm{s}\), which corresponds to 2.3 here. Note that this value yields the matching rate of 80% in the case without ghost points shown in Fig. 6, which is consistent with the quoted study.

Fig. 7 Correct (left) and false particle matches (right), \(\Phi = 0.3\), \(c_{\text {ghost}}= 0.75\)

The parameter study is repeated for a reduced tracking parameter of \(\Phi = 0.2\). The results shown in Figs. 8 and 9 are similar to the previous results. As expected, they show slightly reduced valid matching rates and increased spurious matching rates for the high ghost rate. Overall, it can be concluded that detecting a large fraction of real displacements in particle distributions with high ghost rates via single-pass matching requires \(R_\mathrm{q}\) to assume values in a range in which large numbers of spurious matches occur. After multiple passes, nearly all real matches can be detected while spurious matches are filtered out. Generally, multi-pass matching works best at small values of \(R_\mathrm{q}\), i.e., up to 5% of the search range \(R_\mathrm{s}\), which is significantly less than the 20% proposed by Pereira et al. (2006). Once the particle distributions are dewarped, no large \(R_\mathrm{q}\) is needed to find valid matches. Additionally, very low values of the rigidity parameter can be compensated by increasing the number of matching passes, as long as the initial pass provides a prediction which improves the tracking parameter \(\Phi \) for the next pass.

Fig. 8 Correct (left) and false particle matches (right), \(\Phi = 0.2\), \(c_{\text {ghost}}= 0\)

Fig. 9 Correct (left) and false particle matches (right), \(\Phi = 0.2\), \(c_{\text {ghost}}= 0.75\)

After a suitable range for \(R_\mathrm{q}\) has been determined, the feasibility of matching at very low tracking parameters \(\Phi \) and high ghost rates \(c_{\text {ghost}}\), in comparison to other values found in the literature, is tested. Such configurations are expected in high particle density situations ordinarily only suited for evaluation by tomographic PIV or methods relying on time-series exposures. Therefore, the test case evaluations are rerun with \(R_\mathrm{q} = 0.4\) for \(\Phi \in [0.15, 0.2, 0.3, 0.4, 0.5]\) and \(c_{\text {ghost}}\in [0, 0.25, 0.5, 0.75]\). The results are shown in Fig. 10. Generally, the results are consistent with the observations of the \(R_\mathrm{q}\) variation. At the given \(R_\mathrm{q}\), no spurious matches are detected unless the algorithm fails to match particles entirely or an intermediate pass result is considered. After five matching passes, the majority of test configurations can be matched with a success rate of nearly 100%. This is even true when the first pass yields only very few valid matches. However, the most critical cases with \(\Phi = 0.15\) and \(c_{\text {ghost}}\ge 0.5\) fail entirely. Note that no values are given for \(c_{\text {ghost}}= 0.75\) and \(\Phi = 0.15\) because the algorithm did not converge within a reasonable execution time. Apart from these two extreme cases, a clear tendency can be observed. Increasing the number of matching passes or the tracking parameter \(\Phi \) improves the matching performance, while an increased ghost rate \(c_{\text {ghost}}\) deteriorates it. As a result, the plots for the various ghost rates appear similar but shifted towards higher tracking parameters when the ghost rate is increased. These results show that even a large number of ghost particles does not prevent a correctly matched result at a moderate tracking parameter \(\Phi \). With multiple triangulation and matching loops, this parameter can be kept at reasonable values as discussed in Sect. 2.

Fig. 10 Correct (top) and false particle matches (bottom) for multiple very low tracking parameter distributions, \(R_\mathrm{q} = 0.4\), \(c_{\text {ghost}}= 0, \, 0.25, \, 0.5, \, 0.75\) (left to right)

Since one of the motivations for the multi-pass particle matching scheme is computational efficiency, the computing times of the matching evaluations from Fig. 10 are presented in Fig. 11. Figure 11 (left) shows the total execution time of each individual matching pass. Due to the setup of the investigation, the execution time strongly depends on the point cloud conditioning indicated by the tracking parameter \(\Phi \). In particular, adjusting the search range \(R_\mathrm{s}\) to the maximum displacement \(|\mathbf {d}|_{\text {max}}\) causes the computing times to increase significantly when \(\Phi \) is decreased. In effect, a low tracking parameter results in a large search range such that many potential displacement candidates are detected. When the particle distribution is homogeneous, the number of candidates scales with \(\Phi ^{-3}\). The number of neighboring candidates behaves analogously. Therefore, the number of necessary evaluations of the term \(Q_{ij,kl}\) even scales with \(\Phi ^{-6}\).

Furthermore, the analysis of the total pass execution times shows that the passes following the initial pass are significantly less computationally expensive. In some cases, the computing times of the second pass are two orders of magnitude lower. This is caused by the reduction of the search range in those passes as well as by the better statistical conditioning of the problem, which requires fewer iterations to converge. However, the latter effect is fairly small, as illustrated in Fig. 11 (right), which shows the execution time needed to update the probabilities until convergence relative to the total execution time of the matching pass. Generally, it can be concluded that this iterative probability update constitutes only a minor portion of the entire process, representing on average only 5% of the total computing time and never exceeding 15%. For a two-dimensional problem, Fuchs et al. (2017) claim to have achieved a reduction in computing time of approximately 70% in comparison to the iteration scheme by Ohmi and Li (2000) by employing a non-iterative approach. Considering the current results, this seems not to be a general result but one specific to their particular testing conditions, since the largest portion of the execution time is spent on tasks apart from the probability iteration, e.g., building a binary tree of the particle positions and performing searches, which all matching approaches must perform. These tasks appear to dominate the processing time compared to the probability iteration, even in the initial pass, in which no dewarping is performed.

Fig. 11 Total matching computing times (left) and fraction required for probability convergence (right), \(R_\mathrm{q} = 0.4\), \(c_{\text {ghost}}= 0.25\)

4 Channel flow test case

The entire evaluation procedure is validated by applying it to real measurement data and comparing the findings to tomographic PIV results. The raw data set and the tomographic PIV results of a wavy channel flow were taken from Rubbert et al. (2019). A sketch of the experiment is shown in Fig. 12. The measurement setup consists of four PCO Edge 5.5 cameras in a linear arrangement recording a 10-mm-thick light sheet in wall-normal orientation. In the streamwise direction, the field of view extends from the wave crest to the trough. In the wall-normal direction, a region from the centerline of the channel to approximately 2 mm above the wall is captured. The reconstruction volume is \(50 \times 45 \times 10\,\,\hbox {mm}^3\). The bulk Reynolds number is 3200, which corresponds to a velocity of 1 m/s. At this Reynolds number, flow reversal is observed in the slope and trough regions of the wave. The results obtained from a tomographic PIV evaluation including a motion-tracking enhancement pass (Novara et al. 2010) are used for comparison. For a more detailed description of the setup, the reader is referred to the original paper (Rubbert et al. 2019).

Fig. 12 Experimental setup for volumetric measurements of a wavy channel flow (Rubbert et al. 2019)

A sample image, pre-processed including an intensity normalization, is shown in Fig. 13. The pre-processing procedure was originally used for the tomographic PIV evaluation; no further processing specific to the PTV procedure was applied. Based on the result of the peak detection, the particle image density is approximately \(ppp = 0.04\).

Fig. 13 Measurement image originally used for tomography (left) and magnification (right), \(ppp = 0.04\). Note that the images are inverted for better visibility

Note that the tomographic PIV reference results are subject to measurement uncertainties as well. To assess the exact measurement error, fully synthetic data sets were generated based on a DNS of the wavy channel flow described in Rubbert et al. (2019) to serve as accurate reference data. To allow for comparisons with the recorded measurement data and results, the synthetic test cases use a similar camera setup, field of view, and illuminated volume. To obtain the first exposure of the test cases, a uniform random particle distribution was generated within the selected volume. The particle projections were mapped onto the virtual cameras according to the calibration. The particle positions in each image were subjected to Gaussian noise with a standard deviation of 0.15 px to simulate remaining calibration imperfections after a volume self-calibration. Based on the resulting positions in image space, synthetic images were generated using a Gaussian point-spread function with standard deviations between 0.7 px and 1.2 px. The image intensity of each particle was selected from a uniform distribution spanning a range of 300% from the darkest to the brightest particle. Finally, white noise with a magnitude of 30% of the darkest particle image was added to the images. The particle positions of the second exposure were obtained via Runge–Kutta 4–5 integration of the corresponding velocity field extracted from the DNS. A virtual pulse separation was chosen such that the synthetic test cases possess the same average displacement in pixels as the experimental recordings, i.e., roughly 25 px to 30 px. Then, the same imaging procedure was applied to the new particle positions. The reconstruction quality obtained by the proposed evaluation scheme is assessed in Sect. 4.1. Section 4.2 describes the results of the experimental recordings and compares them to standard tomographic PIV results.
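For illustration, rendering one such synthetic exposure essentially amounts to adding position noise, splatting Gaussian particle images, and adding sensor noise. The minimal sketch below assumes the particles have already been projected to image coordinates; the intensity spread and noise levels follow the values quoted above, while the image size and rendering window are arbitrary choices.

```python
import numpy as np

def render_exposure(xy, shape=(1024, 1024), sigma_range=(0.7, 1.2),
                    pos_noise=0.15, noise_level=0.3, rng=None):
    """Synthetic particle image from projected particle positions xy (in px)."""
    rng = np.random.default_rng(rng)
    img = np.zeros(shape)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    xy = xy + rng.normal(0.0, pos_noise, xy.shape)     # residual calibration error
    sigma = rng.uniform(*sigma_range, len(xy))         # point-spread function width
    peak = rng.uniform(1.0, 3.0, len(xy))              # 300% spread in particle intensity
    for (px, py), s, a in zip(xy, sigma, peak):
        # render only a small window around each particle for efficiency
        x0, x1 = int(max(px - 4 * s, 0)), int(min(px + 4 * s + 1, shape[1]))
        y0, y1 = int(max(py - 4 * s, 0)), int(min(py + 4 * s + 1, shape[0]))
        if x0 >= x1 or y0 >= y1:
            continue
        win_x, win_y = xx[y0:y1, x0:x1], yy[y0:y1, x0:x1]
        img[y0:y1, x0:x1] += a * np.exp(-((win_x - px) ** 2 + (win_y - py) ** 2)
                                        / (2.0 * s ** 2))
    img += rng.uniform(0.0, noise_level, shape)        # noise, 30% of the darkest peak
    return img
```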

Due to the similarity of the synthetic and recorded test cases, the same evaluation parameters can be used. Unlike for the data in Sect. 3, the particle positions have to be extracted from the image sets by a simple triangulation based on a Tsai camera model calibration (Tsai 1986). As described in Sect. 2, three loops of the triangulation and matching procedure with successively increasing values for the allowed triangulation error \(\epsilon \) were executed. Using the OMPP procedure, image peaks which resulted in a matching pair of particles were deactivated for the following iteration loop to reduce the complexity of the matching problem without discarding any particle images entirely. The parameters of the evaluation procedure are given in Table 1. The search range \(R_\mathrm{s} = 0.7\,\,\mathrm{mm}\) was chosen based on a very rough visual approximation of the particle displacement and calibration images. Due to the fast evaluation times, the rigidity parameter \(R_\mathrm{q} = 0.025\,\,\mathrm{mm}\) could be selected based on multiple test runs. Its final value is consistent with the findings in Sect. 3 since it is below 5% of the search range. To mitigate the measurement noise due to particle position uncertainties, the divergence reduction approach detailed in Sect. 2 is applied.

Table 1 Evaluation parameters of the iPTV procedure

4.1 Synthetic results

Table 2 Detection statistics of the synthetic test cases

Processing strategies with and without OMPP and DivPP were tested for multiple particle densities such that eight separate test cases were obtained. The results are summarized in Table 2. The particle detection rate strongly depends on the source density. At low source densities, more than 90% of all particles can be detected correctly. With increasing source density, this number drops to 67%. This trend was expected due to the increasing probability of overlapping particle images, which impairs the triangulation process. More complex particle reconstruction algorithms such as tomographic reconstruction (Elsinga et al. 2006) or iterative particle reconstruction (Wieneke 2012) achieve higher detection rates, however, at significantly increased computing times. Applying OMPP reduces the number of detections. At low source densities, this effect is negligible due to the low number of overlapping particle images which might result in a single detected peak. The test case with the highest particle density only yields approximately 75% of the particles previously detected without OMPP. However, the ghost rate \(c_{\text {ghost,p}}\) is reduced significantly, even at very low source densities, at which 15% of all particle detections are ghost particles. Using OMPP, this ghost rate is reduced to nearly zero. At high particle densities, the ghost rate is reduced from 80.5%, i.e., four ghost particles falsely detected for every real particle, to 58.5%, or 1.41 ghosts per valid detection, which is a significant improvement in the signal-to-noise ratio. This ghost rate can be handled by the proposed iterative matching method, and the reduction lowers the computational complexity immensely. Additionally, a reduction of the median distance to the closest real particle \(\epsilon _{\mathbf {x},\text {p}}\) is observed in the high source density case. This is caused by the reduction in available peaks during the later triangulation iterations, when the triangulation tolerance is high. Matched particles are more likely to be real particles, which require a triangulation tolerance exceeding the calibration error only when overlapping particle images obscure the individual particle position in the image. The number of ghost particles, on the other hand, scales with the triangulation tolerance to the fifth power in this setup, such that the majority of them is found when a relatively high triangulation tolerance and no OMPP are used.

In a fully random matching scenario, the fraction of detected matches should equal the probability of detecting both particles, which is the single-particle detection probability squared. However, fewer matches than expected are obtained at low particle densities. One possible explanation is that volume boundary effects, i.e., particles entering and leaving the observed space, play a larger role under such conditions. In the high particle density cases, the matching approach exceeds this estimate. Hence, particles are more likely to be visible and detected in the second exposure if they have been detected in the first exposure as well. This effect becomes more apparent when the detection rates already act as a selection of potentially more visible particles.

The accuracy of the matched particle displacements is quantified by \(\epsilon _{\mathbf {d},\text {real}}\), which represents the median deviation of the detected displacement from the reference displacement, i.e., the synthetically generated displacement based on the DNS. Note that all \(\epsilon _{\mathbf {d},\text {real}}\) are smaller than or equal to the median positional uncertainty of the particles \(\epsilon _{\mathbf {x},\text {p}}\), even though a fully random selection of particles should yield \(\sqrt{2}\epsilon _{\mathbf {x},\text {p}}\). Applying OMPP further improves these findings. This illustrates that OMPP and the iterative matching scheme pick the best candidate displacements; the inherent smoothness constraint discards overly deviating candidates. Nonetheless, every test case exhibits a number of ghost displacements. Such displacements either consist of coherent ghost particles or contain at least one constituting ghost particle. While this phenomenon is fairly limited in the low and moderate source density cases, the high seeding density cases result in a ghost displacement rate \(c_{\text {ghost,d}}\) of 0.375, which is reduced to 0.145 with OMPP. While ghost matches show an increased deviation \(\epsilon _{\mathbf {d},\text {ghost}}\) from the reference DNS data, they are still too similar to the actual flow field to be considered outliers. Interestingly, the divergence optimization has no effect when OMPP is disabled, as shown by cases E and G. However, when OMPP is enabled, the median deviation decreases significantly from case F to case H. Since the divergence optimization procedure is a physically motivated approach, even ghost matches are optimized such that they resemble the reference data very closely. The very different effectiveness of the divergence post-processing in cases G and H suggests a close relation between the number of degrees of freedom and the smoothness of the reference data. If the number of degrees of freedom is too large with respect to the complexity of the flow they represent, deviations from the reference are not eliminated; instead, the deviation field appears to be made divergence-free. A similar phenomenon might occur on the Cartesian grids of the original publication of this method (de Silva et al. 2013). To further improve the results, alternative methods such as FlowFit (Gesemann et al. 2016) and VIC+ (Schneiders and Scarano 2016) make use of acceleration data when available, such that further physically motivated constraints can be employed.

Fig. 14 Distribution of displacement errors with respect to the reference DNS data for selected highly seeded test cases, \(ppp = 0.064\)

In the following, only cases E, F, and H are discussed further since they represent the best solutions to the most challenging reconstruction and matching problem. Figure 14 shows the distribution of displacement deviations for these cases. The deviation distributions behave similarly to the median values. Ghost matches have a much wider deviation distribution than real matches in most cases. Using OMPP yields a slight reduction of the overall deviation. However, when OMPP and the divergence post-processing are combined, the distribution is compressed to very small deviations and ghost matches become almost indistinguishable from real matches. To localize potential failures of the multi-iterative PTV method, the relative displacement detection density as well as the systematic and random differences \(\Delta \mathbf {v}_\text {sys}\) and \(\Delta \mathbf {v}_\text {rms}\) between the results of case H and a velocity reference were calculated

$$\begin{aligned} \Delta \mathbf {v}_\text {sys} = \frac{1}{N_\text {sample}}\sum _{n=1}^{N_\text {sample}}{\left( \mathbf {v}_\text {PTV} - \mathbf {v}_\text {ref} \right) } \end{aligned}$$
(15)
$$\begin{aligned} \Delta \mathbf {v}_\text {rms} = \sqrt{\frac{1}{N_\text {sample}}\sum _{n=1}^{N_\text {sample}}{\left( \mathbf {v}_\text {PTV} - \mathbf {v}_\text {ref} \right) ^2}}. \end{aligned}$$
(16)
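Equations (15) and (16) correspond to a simple per-sample average and root mean square; a minimal numpy version, assuming co-located velocity samples, reads

```python
import numpy as np

def velocity_differences(v_ptv, v_ref):
    """Systematic and random differences per Eqs. (15) and (16).

    v_ptv, v_ref : (N_sample, 3) velocity samples at matching locations.
    """
    diff = v_ptv - v_ref
    dv_sys = diff.mean(axis=0)                     # Eq. (15)
    dv_rms = np.sqrt((diff ** 2).mean(axis=0))     # Eq. (16)
    return dv_sys, dv_rms
```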

The relative detection density is shown in Fig. 15. Note that the original particle distribution is uniform, which is reflected in the detected results, indicating that the proposed method is equally applicable under all investigated conditions. Local deviation statistics are shown in Fig. 16. While a slight bias and random deviation are found in the shear layer above the wave’s trough, the overall magnitude of these deviations is approximately 1%, or 0.25 px to 0.33 px.

Fig. 15 Relative displacement detection count averaged in the z-direction

Fig. 16 Systematic (top) and random (bottom) differences in streamwise (u), wall-normal (v), and spanwise (w) velocity components between synthetic PTV and DNS in the center plane

4.2 Experimental results

The result quality in real measurements is assessed by comparing the particle tracking and the tomographic PIV results in two planes within the measurement volume using 1050 snapshots. One plane was placed in the volume center, while an off-center plane is located at a distance of 3.5 mm from the center plane. A mirror plane on the opposite side of the center plane was investigated as well. Since the results in both off-center planes were similar, only the results of one off-center plane are presented for the sake of brevity.

For a comparison beyond a purely visual, qualitative inspection, the unstructured PTV results were interpolated by the multi-level B-spline interpolation scheme (Lee et al. 1997) onto the same Cartesian grid on which the tomographic PIV data were evaluated. As a first indication of the quality of the results, the unstructured PTV data, the PTV data interpolated onto the grid in the center plane, and the tomographic result are shown for a single snapshot in Fig. 17. Note that the interpolation algorithm returns values within the convex hull of all input data points. Therefore, a narrow region between the wave crest and the expansion slope exhibits a nearly uniform velocity distribution, which is derived from particles that are not immediate neighbors. In the tomographic results, only regions with a correlation coefficient above 0.1 are shown. Overall, the center plane results are quite similar. However, the very slow region right above the wave’s slope appears more pronounced in the tomographic PIV data. Since the proposed PTV approach also shows such a distribution further downstream, it is unlikely that the rough first triangulation and matching loop cannot resolve this region or wrongly predicts a different velocity distribution such that the subsequent loops cannot recover the correct displacement. Rather, a large portion of the affected region is located in the proximity of the concave wall, which the interpolation algorithm cannot reproduce. The sharp contour of the velocity distribution supports the assumption that an interpolation artifact is observed. However, due to the near-boundary location, it is possible that the polynomial dewarping interpolation is largely based on displacements located closer to the channel center, which might impact the accuracy and robustness of the local prediction. Since the proposed algorithm is able to resolve the near-wall region well in all synthetic test cases, the observed phenomenon is most likely caused by the limited visibility of the particles, which necessitated the image masking applied in this area. Therefore, the tomographic PIV data in this region appear likewise uncertain. Further downstream, the larger region with very small velocities does not exhibit this problem. With further implementation effort, potential near-wall detection failures could be counteracted by assigning individual search ranges and relaxed rigidity conditions for candidate displacements to each particle. Under the given conditions, i.e., when the displacements on which the local dewarping is based are not uniformly distributed around the current location, the iterative reduction of the search range \(R_\mathrm{s}\) could be omitted and the probability update parameter C defined in Sect. 2 could be reduced.
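The grid interpolation step itself is conceptually simple. The sketch below substitutes scipy's linear griddata interpolation for the multi-level B-spline scheme of Lee et al. (1997) used above, purely to illustrate the step and the convex-hull limitation just mentioned; grid spacing and variable names are arbitrary.

```python
import numpy as np
from scipy.interpolate import griddata

def to_cartesian_grid(points, vectors, grid_x, grid_y, grid_z):
    """Interpolate scattered PTV vectors onto a Cartesian comparison grid.

    Linear interpolation is used here instead of the multi-level B-spline scheme
    of the paper; points outside the convex hull of the data are returned as NaN.
    """
    X, Y, Z = np.meshgrid(grid_x, grid_y, grid_z, indexing='ij')
    comps = [griddata(points, vectors[:, c], (X, Y, Z), method='linear')
             for c in range(3)]
    return np.stack(comps, axis=-1)               # (nx, ny, nz, 3), NaN outside hull
```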

Fig. 17

Resulting distributions of the velocity in main flow direction in the center plane of the measurement volume. Scattered data from PTV showing only one out of 25 vectors (left), grid interpolated PTV data (center), and tomographic PIV reference data (right)

Figure 18 shows the differences between the PTV and the tomographic PIV results for the three velocity components in the center plane. In the largest portion of the flow, the systematic differences are essentially zero. However, in the near-wall region immediately downstream of the crest, the PTV method exhibits a systematic over-prediction of the streamwise component of up to 8% with respect to tomographic PIV. One possible reason, i.e., the interpolation in the boundary regions combined with the decreased search range and the probability weighting C of meeting the prediction, has been discussed above. However, a combination of uncertainties in the tomographic PIV reference and the convex hull limitation of the interpolation is more likely to be the underlying cause. The wall-normal and spanwise components show a much smaller systematic deviation from the tomographic results. The random velocity differences between both data sets are nearly zero for the entire flow field; somewhat higher levels are again observed only in the regions confined to the boundaries.
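The systematic and random differences shown in Figs. 18 and 19 correspond to the per-node mean and standard deviation of the snapshot-wise difference between both methods. A minimal sketch, assuming both data sets are available as component arrays of shape (n_snapshots, ny, nx) on the same in-plane grid (array names and shapes are assumptions):

```python
# Sketch of computing systematic (bias) and random (scatter) difference maps
# between two methods evaluated on the same grid; shapes and names are assumed.
import numpy as np

def difference_maps(u_ptv: np.ndarray, u_tpiv: np.ndarray):
    """u_ptv, u_tpiv: one velocity component, shape (n_snapshots, ny, nx)."""
    diff = u_ptv - u_tpiv                  # per-snapshot difference field
    systematic = np.nanmean(diff, axis=0)  # average difference per grid node
    random = np.nanstd(diff, axis=0)       # snapshot-to-snapshot scatter per node
    return systematic, random
```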

Fig. 18

Systematic (top) and random (bottom) differences in streamwise (u), wall-normal (v), and spanwise (w) velocity components between PTV and tomographic PIV in the center plane

The same analysis was conducted in the off-center plane, as shown in Fig. 19. The general distributions of the systematic and random differences are very similar to those observed in the center plane. However, the magnitude of the differences and the size of the affected regions are markedly reduced such that the final vector fields are hardly distinguishable.

Fig. 19

Systematic (top) and random (bottom) differences in streamwise (u), wall-normal (v), and spanwise (w) velocity components between PTV and tomographic PIV in the off-center plane

One of the main benefits of the proposed PTV evaluation method is its superior computational efficiency. Table 3 summarizes the computing times per sample on a workstation with two six-core Intel Xeon 2600v2 processors and on a laptop with a quad-core mobile Core i7-4770HQ. The iterative PTV evaluation vastly outperforms the tomographic PIV method, even without the OMPP deactivation of already matched peaks. Besides evaluating the measurement data much faster, the reduced computing time allows multiple parameter settings to be tested efficiently before the bulk of the data is processed, making the result quality more robust. Note that due to its significantly reduced memory requirement of approx. 2.5 GB, compared to more than 40 GB required by tomographic PIV, the PTV method is suitable for typical desktop computers and laptops.

Table 3 Execution times for the evaluation on two systems

Note that in terms of information density, the raw numbers appear to favor tomographic PIV. The current data set results in a velocity grid of \(156 \times 160 \times 26 = 648,960\) vectors. However, the velocity vectors are based on interrogation volumes with a 50% overlap such that only 81,120 vectors are independent. Only 62,933 of these vectors represent actual velocities since the Cartesian grid required for tomographic PIV overlaps with the wavy wall. For comparison, the iterative PTV method yielded on average 40,500 vectors per sample when the OMPP method is applied to reduce the complexity. When this feature is disabled, more than 100,000 displacements are detected on average. Based on the synthetic test cases, the total number of trackable particles is estimated at 119,000 for the given source density. As previously shown in Sect. 4.1, the number of additionally detected real displacements increases by approximately 60%, while the number of coherent ghost displacements rises significantly. However, the ghost rate of the real data is difficult to assess since the particle density and volume setup are similar but not identical to the synthetic cases. Generally, disabling the OMPP approach did not meaningfully change the resulting statistics. Since the use of OMPP was shown to significantly increase the effectiveness of the divergence optimization, reduce the computing time, and limit the occurrence of coherent ghost matches, it represents the more sensible approach in most cases.
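The independent-vector count quoted above follows directly from the grid dimensions and the 50% interrogation-volume overlap; the short check below uses the grid dimensions from the text, with the halving per direction being the only assumption.

```python
# Quick consistency check of the vector counts quoted above: with 50% overlap,
# only every second node per direction carries independent information.
nx, ny, nz = 156, 160, 26
total = nx * ny * nz                              # 648,960 grid vectors
independent = (nx // 2) * (ny // 2) * (nz // 2)   # 78 * 80 * 13 = 81,120
print(total, independent)
```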

5 Conclusion and outlook

A multi-iterative nested 3D-PTV evaluation scheme suitable for double exposure measurements at seeding densities previously unsuitable for PTV was proposed. Its robustness with respect to the particle density and ghost rate was demonstrated for synthetic test cases. A comparison with classical tomographic PIV data showed that the proposed method is a suitable replacement for tPIV. Locally confined deviations in the experimental data are likely due to reflections near the boundary. They can be addressed by future modifications of the matching iteration scheme or even by the simple use of a boundary condition, realized by introducing zero displacement on the wall.

The benefit of this technique is its reduced computational requirement, which allows more thorough testing and preparation of the evaluation settings, generally faster results from large amounts of raw data, and the use of standard desktop computers instead of well-equipped high-performance machines. The current performance can be considered a baseline for future development. Additionally, the unstructured particle displacements do not introduce any inherent filtering and allow more immediate access to the raw displacement data.

While the iterative particle tracking scheme yields good results for classic double exposure measurements, it may also be suitable for other use cases. One potential application is its use as a robust and computationally efficient initialization for the iterative particle reconstruction (IPR) technique proposed by Wieneke (2012), which requires computational resources similar to tomographic PIV to reconstruct accurate, virtually ghost-free particle distributions. A combination of both methods has the potential to increase the execution performance while maintaining the accuracy of IPR.