Main results of the 4th International PIV Challenge
Abstract
In the last decade, worldwide PIV development efforts have resulted in significant improvements in terms of accuracy, resolution, dynamic range and extension to higher dimensions. To assess the achievements and to guide future development efforts, an International PIV Challenge was performed in Lisbon (Portugal) on July 5, 2014. Twenty leading participants, including the major system providers, i.e., Dantec (Denmark), LaVision (Germany), MicroVec (China), PIVTEC (Germany), TSI (USA), have analyzed 5 cases. The cases and analysis explore challenges specific to 2D microscopic PIV (case A), 2D time-resolved PIV (case B), 3D tomographic PIV (cases C and D) and stereoscopic PIV (case E). During the event, 2D macroscopic PIV images (case F) were provided to all 80 attendees of the workshop in Lisbon, with the aim of assessing the impact of the user's experience on the evaluation result. This paper describes the cases, the specific algorithms and evaluation parameters applied by the participants, and reviews the main results. For future analysis and comparison, the full image database will be accessible at http://www.pivChallenge.org.
1 Introduction
Table 1 Overview of cases provided within the 4th International PIV Challenge
Case  Description  Provider  Image type  Number of sets 

A  Microscopic PIV  Kähler/Cierpka  Real  600 Single-exposed double-frame images 
B  Time-resolved PIV  Kähler/Hain  Real  1044 Single-exposure images 
C  Resolution/accuracy of tomo-PIV  Astarita/Discetti  Synthetic  1 Single-exposed double-frame image^{a} (4 views) 
D  Time-resolved tomo-PIV of complex flow  Astarita/Discetti  Synthetic  50 Single-exposure images (4 views) 
E  Stereoscopic PIV  Vlachos/La Foy  Real  1800 Single-exposure images (2 views) 
F  Standard PIV (interactive)  Sakakibara  Real  1 Single-exposed double-frame image 
In 2012, motivated by the PIV research effort within the EU project AFDAR (Advanced Flow Diagnostics for Aeronautical Research), it was decided to initiate another International PIV Challenge to assess the global development efforts that have strongly enhanced the accuracy, resolution and dynamic range of planar PIV and, for the first time, also to assess the capabilities of volumetric PIV techniques, whose significance and popularity have strongly increased in recent years.
The primary aim of the International PIV Challenge is to assess the current state of the art of PIV evaluation techniques and to guide future development efforts. This requires a broad range of cases, each addressing specific problems of general interest. Case A addresses challenges specific to 2D microscopic PIV \((\upmu \hbox {PIV})\) experiments, such as preprocessing, depth of correlation and strong flow gradient effects. Case B consists of a real 2D time-resolved image sequence, to assess the potential in terms of measurement accuracy resulting from the strong developments in the field of high-repetition-rate lasers, high-speed cameras and multiframe evaluation techniques. Case C consists of a synthetic 3D data set to study the accuracy and resolution of 3D–PIV reconstruction and evaluation methods and the effect of particle concentration. Case D is a synthetic time-resolved 3D image sequence of a turbulent flow to investigate the benefits of using the time history of particles for 3D–PIV. Case E is again an experimental data set measured with a stereoscopic PIV setup to assess the errors due to calibration, self-calibration and loss of pairs. Cases A–E were provided to all participants prior to the PIV Challenge. Case F, which consists of experimental images of particles whose displacement field is precisely predetermined, was made available to all the other 80 attendees of the workshop, with the target of evaluating the images within 2 h. The primary aim was to assess the impact of user experience on the evaluation result by comparing the results obtained by PIV users and PIV developers. An overview of the test cases is given in Table 1.
2 Organization
Table 2 Scientific steering committee
Name  Country  Institution 

Christian J. Kähler  Germany  Bundeswehr University Munich 
Tommaso Astarita  Italy  University of Naples Federico II 
Pavlos P. Vlachos  USA  Purdue University 
Jun Sakakibara  Japan  Meiji University 
Table 3 Scientific advisory committee
Name  Country  Institution 

Michel Stanislas  France  Ecole Centrale de Lille 
Koji Okamoto  Japan  University of Tokyo 
Ronald J. Adrian  USA  Arizona State University 
3 Review
Thanks to continuing developments in lasers, cameras and computing power, but also in evaluation algorithms, the capabilities of PIV will advance further. Thanks to the increasing user-friendliness of commercial software packages and hardware components provided by the leading PIV system providers, the popularity and spread of the technique will also continue to rise. However, it will be seen in this paper that the evaluation results strongly depend on the processing parameters selected by the user. This shows the strong relevance of the user's knowledge, experience, persistence and attitude. Therefore, continuing effort is essential to improve the understanding and evaluation procedures of the technique and to establish procedures to quantify the uncertainty of the measurements. We hope that the 4th International PIV Challenge and this paper will provide a valuable service to the community to further enhance the PIV technique.
4 List of Challenge Participants
Table 4 List of contributors and cases selected for processing
Acronym  Company/university  Country  Contact person  A  B  C  D  E 

ASU  Arizona State University  USA  John Charonko  \(\times\)  \(\times\)  
BUAA  Beijing University of Aeronautics and Astronautics  China  Qi Gao  \(\times\)  \(\times\)  
Dantec  Dantec Dynamics A/S  Denmark  Vincent Jaunet  \(\times\)  \(\times\)  \(\times\)  \(\times\)  
DLR  German Aerospace Center (DLR)  Germany  Christian Willert  \(\times\)  \(\times\)  \(\times\)  \(\times\)  \(\times\) 
INSEAN  CNRINSEAN  Italy  Massimo Miozzi  \(\times\)  \(\times\)  
IOT  Institute of Thermophysics SB RAS  Russia  Mikhail Tokarev  \(\times\)  \(\times\)  \(\times\)  \(\times\)  \(\times\) 
IPP  Institut Pprime  France  Laurent David  \(\times\)  \(\times\)  \(\times\)  
LANL  Los Alamos National Lab  USA  John Charonko  \(\times\)  \(\times\)  \(\times\)  \(\times\)  \(\times\) 
LaVision  LaVision GmbH  Germany  Dirk Michaelis  \(\times\)  \(\times\)  \(\times\)  \(\times\)  \(\times\) 
MicroVec  MicroVec Inc.  China  Wei Runjie  \(\times\)  \(\times\)  
MPI  Max Planck Institute for Dynamics and Self-Organization  Germany  Holger Nobach  \(\times\)  
ONERA  ONERA  France  Benjamin Leclaire  \(\times\)  
TCD  Trinity College Dublin  Ireland  Tim Persoons  \(\times\)  \(\times\)  
TSI  TSI Incorporated  USA  Dan Troolin  \(\times\)  \(\times\)  
TsU  Tsinghua University  China  Qiang Zhong  \(\times\)  
TUD  Technical University Delft  Netherlands  Kyle Lynch  \(\times\)  \(\times\)  \(\times\)  \(\times\)  
UniG  University of Göttingen  Germany  Martin Schewe  \(\times\)  
UniMe  University of Melbourne  Australia  Dougal Squire  \(\times\)  \(\times\)  
UniNa  University of Naples Federico II  Italy  Gennaro Cardone  \(\times\)  
URS  University of Rome La Sapienza  Italy  Monica Moroni  \(\times\) 
5 Case A
5.1 Case description and measurements
PIV is a well-established technique even for the analysis of flows in systems whose effective dimensions are below 1 mm. However, the analysis of flows in microsystems differs from macroscopic flow investigations in many ways (Meinhart et al. 1999). Due to the volume illumination, the thickness of the measurement plane is mainly determined by the aperture of the imaging lens, and therefore out-of-focus particle images are an inherent problem in microscopic PIV investigations. Consequently, digital image preprocessing algorithms are essential to enhance the spatial resolution by filtering out the unwanted signal caused by out-of-focus particles. Furthermore, the low particle image density achievable in microfluidic systems at large magnification, large particle image displacements and strong spatial gradients require sophisticated evaluation procedures, which are less common in macroscopic PIV applications. To bring the micro- and macro-PIV communities together, a demanding microscopic PIV case, measured in a microdispersion system, was considered in this PIV Challenge.
Microdispersion systems are very important in the field of process engineering, in particular food processing technology. These systems produce uniform droplets in the submicrometer range when driven at pressures around 100 bar (Kelemen et al. 2015a, b). The basic geometry consists of a straight microchannel (\(80 \,\upmu \hbox {m}\) in depth) with a sharp decrease in cross section and a sharp expansion thereafter, as illustrated in Fig. 3. Due to this sudden change in width (from 500 to \(80 \,\upmu \hbox {m}\)), the velocity changes significantly, by about 230 m/s, within the field of view. The strong convective acceleration stretches large primary droplets into filaments. Due to the normal and shear forces in the expansion zone, the filaments break up into tiny droplets which persist in the continuous phase of the emulsion (fat in homogenized milk, for instance). The flow includes cavitation at the constriction due to the large acceleration, strong in-plane and out-of-plane gradients (Scharnowski and Kähler 2016) in the boundary layers of the channel and the shear layers of the jet flow and, in the case of two-phase fluids, droplet-wall and droplet–droplet interaction. Thus, the flow characteristics are difficult to resolve due to the large range of spatial and velocity scales.
The time-averaged flow field is quite uniform and of low velocity prior to the inlet of the channel, see (1) in Fig. 3. Toward the inlet, the flow is highly accelerated (2), and a fast channel flow develops with very strong velocity gradients close to the walls (3). Cavitation occurs at different places, especially at the inlet of the small channel in regions of very high velocities (2). This can already be seen in the images. Taking an average over images 1–250, cavitation bubbles can be seen on both sides of the inlet, as shown in Fig. 4 (left). Later, these cavitation bubbles diminish, as can be seen by averaging images 300–600, see Fig. 4 (right). Due to this effect, strong velocity changes are expected in this area as the flow state alternates between two modes, namely one with and one without cavitation bubbles. It is evident that the velocity change will result in strong local Reynolds stresses on average, although both flow states are fully laminar. The Reynolds stresses are simply an effect of the change in flow state, which results in a mean flow measurement that does not correspond to any flow state actually present in the experiment.
5.2 Image quality for \(\upmu \hbox {PIV}\)
In contrast to macroscopic PIV, the image quality in microscopic imaging is often poor due to low light intensities, small particles and optical aberrations. As is typical for cameras with a CMOS architecture, the gain can also differ from pixel to pixel, which results in so-called hot and cold pixels (Hain et al. 2007). Hot pixels show a very large intensity, which results in a bright spot in the image, whereas cold pixels have a low or even zero intensity, which results in a dark spot.
These hot and cold pixels must be taken into account during the evaluation process to avoid bias errors. If not removed, they cause strong correlation peaks at zero velocity due to their high intensity. Usually, this effect can be corrected by subtracting a dark image already during the recording. However, sometimes this does not work sufficiently well, especially when cameras are new on the market. Therefore, it was decided to keep these hot pixels (with constantly high intensity values) and cold pixels (with constantly low gain and low or zero intensity) in the images and leave it to the participants to correct for them, instead of using the internal camera correction. Furthermore, the signal-to-noise ratio (SNR) is very low due to the use of small fluorescent particles and the large magnification. In the current case, the ratio of the mean signal to background intensity was below two. In some images, blobs of high intensity can also be seen that result from particle agglomerations.
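Since such defective pixels keep an extreme value over the whole sequence, they can be detected from their temporal behavior and repaired with a local median. The following is a minimal sketch of one possible correction, not the procedure of any particular participant; the function name and threshold values are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import median_filter

def repair_defective_pixels(frames, hot_thresh=250.0, cold_thresh=5.0):
    """Replace hot/cold pixels (constant extreme intensity over a stack
    of frames) by the local 3x3 median of each frame.
    `frames` is an (N, H, W) array; the thresholds are illustrative."""
    stack = np.asarray(frames, dtype=float)
    tmin = stack.min(axis=0)          # per-pixel minimum over time
    tmax = stack.max(axis=0)          # per-pixel maximum over time
    # hot: bright in every frame; cold: dark in every frame
    defective = (tmin >= hot_thresh) | (tmax <= cold_thresh)
    repaired = stack.copy()
    for i, frame in enumerate(stack):
        med = median_filter(frame, size=3)
        repaired[i][defective] = med[defective]
    return repaired, defective
```

Because a real particle image is bright only while a particle passes, the temporal minimum/maximum cleanly separates defective pixels from the flow signal.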
An additional problem was the time delay between successive frames, which was set to the minimum interframing time of 120 ns. Due to jitter in the timing, crosstalk between the frames can be observed in some images, i.e., particle images of both exposures appear in one frame.
The pairwise occurrence of particle images with the same spacing in a single frame indicates a double exposure of almost the same intensity. These images can be identified by the average intensity of the individual frames, which is, in the case of double exposure, significantly higher than the overall average. In total, only 6 out of the 600 images show significant crosstalk. Strategies to cope with this problem include image preprocessing or simply the identification and exclusion of these images. However, if not taken into account, high correlation values at zero velocity bias the measurements.
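Such double-exposed recordings can, for example, be flagged automatically from the frame-mean intensities. A minimal sketch, assuming a robust median/MAD outlier cut (the 3-sigma threshold and the function name are illustrative choices, not the criterion used by any participant):

```python
import numpy as np

def flag_crosstalk_frames(mean_intensities, n_sigma=3.0):
    """Flag recordings whose mean intensity is markedly above the
    typical level, as expected when both exposures leak into one frame.
    `mean_intensities` holds one value per recording."""
    vals = np.asarray(mean_intensities, dtype=float)
    med = np.median(vals)
    # robust scale estimate via the median absolute deviation
    mad = np.median(np.abs(vals - med)) * 1.4826
    return vals > med + n_sigma * mad
```

Flagged recordings can then either be excluded or preprocessed before the correlation analysis.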
In comparison with macroscopic PIV applications, the particle image density is typically lower due to the large magnification. In order to enhance the quality of the correlation peak and the resolution, ensemble correlation approaches are typically applied (Westerweel et al. 2004).
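The idea of the ensemble (sum-of-) correlation is to average the correlation planes of many image pairs before the peak search, so that the weak signal of sparsely seeded windows accumulates. A minimal sketch for a single interrogation position, returning the integer displacement only (names are illustrative, not from any participant's code):

```python
import numpy as np

def ensemble_correlation(windows_a, windows_b):
    """Sum the FFT cross-correlation planes of many interrogation
    window pairs, then locate the peak once. Inputs are (N, h, w)
    stacks of windows; returns the integer (dx, dy) in pixels."""
    acc = None
    for a, b in zip(windows_a, windows_b):
        A = np.fft.rfft2(a - a.mean())
        B = np.fft.rfft2(b - b.mean())
        corr = np.fft.irfft2(A.conj() * B, s=a.shape)
        acc = corr if acc is None else acc + corr
    acc = np.fft.fftshift(acc)        # zero displacement at the center
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    return peak[1] - acc.shape[1] // 2, peak[0] - acc.shape[0] // 2
```

Averaging the correlation planes rather than the images preserves the displacement information of each pair while suppressing random noise.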
Another inherent problem for microfluidic investigations is the so-called depth of correlation (DOC), as already mentioned at the beginning of the section. In standard PIV, a thin laser light sheet is generated to illuminate a plane in the flow, which is usually observed from a large distance. The depth of focus of the camera is typically much larger than the thickness of the laser light sheet; therefore, the depth of the measurement plane is determined by the thickness of the light sheet at a certain intensity threshold, which depends also on the size of the particles, the camera sensitivity and the lens settings. In microfluidic applications, due to the small dimensions of the microchannels, the creation and application of a thin laser light sheet is not possible. Volume illumination is used instead. As a consequence, the thickness of the measurement plane is defined by the depth of field of the microscope objective lens. In \(\upmu \hbox {PIV}\) experiments, the DOC is commonly defined as twice the distance from the object plane to the nearest plane in which a particle becomes sufficiently defocused so that it no longer contributes significantly to the cross-correlation analysis (Meinhart et al. 2000). In the case of vanishing out-of-plane gradients within the DOC, the measurements are not biased, but the correlation peak can be deformed due to non-homogeneous particle image sizes, which can reduce the precision of the peak detection. Unfortunately, in many cases the DOC is on the order of the channel dimension, and thus large out-of-plane gradients are present. Under certain assumptions, a theoretical value can be derived for the DOC based on knowledge of the magnification, particle diameter, numerical aperture, refractive index of the medium and wavelength of the light. The DOC can also be determined experimentally, and very often the experimental values are larger because of aberrations.
In the current case, the theoretical value is \(\hbox {DOC}_{{\mathrm {theory}}}=17.6\,\upmu \hbox {m}\), whereas the experimentally obtained value is \(\hbox {DOC}_{{\mathrm {exp}}}=31.5\,\upmu \hbox {m}\) (Rossi et al. 2012). Since the tiny channel has a square cross section of \(80 \times 80\,\upmu \hbox {m}^2\), this covers more than one-third of the channel depth, and significant bias errors can be expected. However, adequate image preprocessing or non-normalized correlations can minimize this effect (Rossi et al. 2012). Alternatively, 3D3C methods can be applied to fully avoid bias errors due to the volume illumination (Cierpka and Kähler 2012).
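A commonly used closed-form estimate for the theoretical DOC is the expression of Olsen and Adrian (2000) which, assuming diffraction-limited Gaussian particle images and a relative correlation-contribution threshold \(\varepsilon\) (typically \(\varepsilon = 0.01\)), reads

\[ \hbox {DOC}=2\left[ \frac{1-\sqrt{\varepsilon }}{\sqrt{\varepsilon }}\left( f_{\#}^{2}d_{p}^{2}+\frac{5.95\,(M+1)^{2}\lambda ^{2}f_{\#}^{4}}{M^{2}}\right) \right] ^{1/2},\]

where \(M\) is the magnification, \(d_p\) the particle diameter, \(\lambda\) the wavelength of the light and \(f_{\#}\) the effective f-number of the objective lens, which can be estimated from the numerical aperture and the refractive index of the immersion medium.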
In general, image preprocessing is very important for microscopic PIV to increase the signal-to-noise ratio and avoid large systematic errors due to the depth of correlation (Lindken et al. 2009). Consequently, most of the teams applied image preprocessing to enhance the quality of the images.
5.3 Participants and methods
In total, 23 institutions requested the experimental images. Since the images were very challenging, not all of them succeeded in submitting results in time. Results were submitted by 13 institutions. The acronyms used in the paper and details about the data submitted are listed in Table 5. For the data sets named evaluation 1 (eval1), the participants that applied spatial correlation, such as conventional window cross-correlation or sum of correlation by means of window correlation, were required to use a final interrogation window size of \(32 \times 32\) pixels. Multipass, window weighting, image deformation, etc. were allowed. The fixed interrogation window size allows for a comparison of the different algorithms without major bias due to spatial resolution effects (Keane and Adrian 1992; Kähler et al. 2012a, b). However, the experience and attitude of the user have a very pronounced effect on the evaluation result, and different window sizes may have been favored by different users to alter the smoothness of the results. Therefore, the teams were free to choose the interrogation window size in the case of evaluation 2 (eval2). The evaluation parameters are summarized in Table 5. Some special treatments are described in the following if they differ significantly from the standard routines.
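The common core shared by the eval1 window-correlation methods can be sketched as a single-pass FFT cross-correlation with a three-point Gaussian subpixel estimator. This is a deliberately minimal illustration, omitting the multipass, window-weighting and image-deformation refinements the participants applied; the function names are illustrative:

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Displacement of one interrogation window pair via FFT
    cross-correlation plus a three-point Gaussian subpixel fit.
    Returns (dx, dy) in pixels."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.fftshift(
        np.fft.irfft2(np.fft.rfft2(a).conj() * np.fft.rfft2(b), s=a.shape))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def gauss_fit(cm, c0, cp):
        # three-point Gaussian estimator; clamp to keep the log defined
        cm, c0, cp = (np.log(max(v, 1e-10)) for v in (cm, c0, cp))
        den = cm - 2.0 * c0 + cp
        return 0.5 * (cm - cp) / den if den != 0.0 else 0.0

    dx = ix - a.shape[1] // 2 + gauss_fit(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    dy = iy - a.shape[0] // 2 + gauss_fit(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    return dx, dy
```

In a full evaluation this kernel would be applied on a grid of overlapping windows, embedded in a multipass scheme with window deformation and outlier validation between the passes.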
Close to the walls, Dantec used a specially shaped wall window so that contributions from particles farther away could be excluded and the systematic velocity overestimation thus minimized. In addition, an N-sigma validation \((\sigma = 2.5)\) was used to detect and exclude outliers, which may appear in groups due to the large overlap.
A feature-tracking algorithm was applied by INSEAN (Miozzi 2004), which solves the optical flow equation in a local framework. The algorithm defines the best correlation measure as the minimum of the sum of squared differences (SSD) of the pixel intensities corresponding to the same interrogation window in two subsequent frames. After linearization, the SSD minimization problem is iteratively solved in a least-squares sense, adopting two different models: pure translational window motion in the first step and affine window deformation in the second. The deformation parameters are given directly by the algorithm solution (Miozzi 2005). The velocity for the individual images was considered only where the solutions of the linear system corresponding to the minimization problem exist. Subpixel image interpolation using a fifth-order B-spline basis (Unser et al. 1993; Astarita and Cardone 2005) was performed. In-plane loss of pairs was avoided by adopting a pyramidal image representation and subpixel image interpolation using a bicubic scheme.
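The translational first step of such an SSD minimization can be sketched as a Lucas-Kanade-type Gauss-Newton iteration. This is a simplified illustration under stated assumptions (no affine model, no pyramid, fixed gradients), not INSEAN's actual implementation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def lk_translation(win_a, win_b, n_iter=20):
    """Iterative translational SSD minimization for one window pair:
    linearize the residual around the current displacement estimate
    and solve the 2x2 normal equations. Returns (dx, dy) in pixels."""
    a = win_a.astype(float)
    b = win_b.astype(float)
    gy, gx = np.gradient(a)                       # gradients of frame 1
    A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    yy, xx = np.mgrid[0:a.shape[0], 0:a.shape[1]]
    d = np.zeros(2)                               # current (dx, dy)
    for _ in range(n_iter):
        # warp frame 2 back by the current estimate (spline interpolation)
        b_w = map_coordinates(b, [yy + d[1], xx + d[0]], order=3, mode='nearest')
        e = a - b_w                               # residual image
        rhs = np.array([np.sum(gx * e), np.sum(gy * e)])
        d += np.linalg.solve(A, rhs)              # Gauss-Newton update
    return d[0], d[1]
```

In the full algorithm this translational solution serves as the starting point for the affine window-deformation step, and a pyramidal image representation extends the convergence range to large displacements.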
The peak detection scheme of IOT was based on the maximum width of the normalized correlation function at a value of 0.5. The assessment of the mean velocity and the velocity fluctuations started from the center-of-mass histogram method, which was then used as an initial guess for an elliptic 2D Gaussian fit with a nonlinear Levenberg–Marquardt optimization over the upper part (0.5–1) of the correlation peak. The displacement fluctuations were obtained from the shape of the ensemble correlation function following the approach of Scharnowski et al. (2012). A global displacement validation (\(-20\le DX\le 50\) px, \(-30 \le DY\le 30\) px and 53 px for the vector length) was applied. Outliers after each iteration were replaced by a \(3\times 3\) moving average.
For peak finding, TCD used a \(3\times 3\) 2D Gaussian estimator (Nobach and Honkanen 2005) that ranks peaks according to their volume instead of their height. For intermediate steps, a \(3\times 3\) spatial median filter was used to detect outliers (Westerweel and Scarano 2005). These were replaced by lower-ranked correlation peaks or the local median of the neighbors. Two further median smoothing operations were applied.
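The normalized median test used here for outlier detection can be sketched as follows. The threshold of 2 and the noise level \(\epsilon = 0.1\) px are the typical values recommended by Westerweel and Scarano (2005); the function name is illustrative:

```python
import numpy as np

def normalized_median_flags(u, threshold=2.0, eps=0.1):
    """Normalized median test on a 2D field of one displacement
    component (3x3 neighborhoods, interior points only). A vector is
    flagged when its deviation from the neighborhood median, normalized
    by the median residual of the neighbors, exceeds the threshold."""
    H, W = u.shape
    flags = np.zeros((H, W), dtype=bool)
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            nb = np.delete(u[i-1:i+2, j-1:j+2].ravel(), 4)  # 8 neighbors
            med = np.median(nb)
            r_m = np.median(np.abs(nb - med))               # residual level
            flags[i, j] = abs(u[i, j] - med) / (r_m + eps) > threshold
    return flags
```

The normalization by the local residual level makes the criterion applicable across regions of very different shear, which is why it is so widely used as a "universal" outlier detector.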
An iterative multigrid continuous window deformation interrogation algorithm was applied by TU Delft to the individual images, with decreasing window size starting at \(128 \times 128\) pixels (Scarano and Riethmuller 2000), using bilinear interpolation for the velocity and \(8\times 8\) sinc interpolation for the pixel intensities (Astarita and Cardone 2005). Close to the walls, a weighting function was applied to the non-masked region.
Table 5 Evaluation parameters for case A
Team  Type of evaluation  Code description  Final IW size [px]  Window weighting  Final VS [px]  Fit  Preprocessing  Mask generation  Postprocessing 

Dantec  PIV eval1 mean/rms  Adaptive multipass, window deformation, universal outlier detection between passes, predictor from ensemble average  32 × 32  Wall windowing (only near the walls)  4  2D Gauss  Harmonic mean subtraction, reduction of crosstalk, min/max contrast enhancement  Algorithmic  Temporal N-sigma validation (σ = 5.0) 
DLR  PIV eval1/2, mean/rms  Multipass algorithm, coarse-to-fine resolution pyramid, starting at 256 × 256, using images deformed a priori with predictor field for eval2  32 × 32 (eval1), 24 × 24 (eval2)  None  4  3 × 3 2D Gauss  Median filtering, mean intensity subtraction, mild smoothing  Manual, from rms image  Normalized median filter (Westerweel and Scarano 2005), 3 × 3 neighborhood difference filter 
INSEAN  optical flow, eval1/2, mean/rms  Multipass pyramidal algorithm, minimization of sum of squared differences, tracking  32 × 32 (eval1), 24 × 24 (eval2)  Gauss  \(<\)2  Subtract mean img., subtract local mean, histogram stretching  Algorithmic  Multivariate filter in u’v’ space, 2 px std. dev. Gauss filter  
IOT  PIV eval1/2 mean/rms  Multipass algorithm, ensemble correlation (eval1), single–pixel correlation (eval2) (Karchevskiy et al. 2016)  32 × 32 (eval1), 2 × 2 (eval2)  None  2  Elliptic 2D Gauss  Subtract mean image, local median 3 × 3, subtract local median 7 × 7, lowpass Gaussian 5 × 5  Manual  Velocity range validation (eval1), moving average filter 10 × 10, kriging interpolation 
LANL  PIV eval1 mean, PTV eval2 mean/rms  Multipass deform algorithm, ensemble correlation, robust phase correlation for eval1 (Eckstein et al. 2008), particle ID by dynamic thresholding, multiparametric particle matching (size, intensity, position) for eval2  32 × 32  Gauss (Eckstein et al. 2009)  2 (eval1), varied (eval2)  1D 3 pt Gauss (eval1), 2D Gauss (eval2)  Subtract mean image separately on A/B, subtract attenuated BA/AB, threshold background level  Algorithmic (max intensity then morphological operations)  None 
LaVision  PIV eval1/2 mean/rms  Single-pass cross-correlation with predictor by ensemble correlation  32 × 32 (eval1), 16 × 16 (eval2)  8 (eval1), 4 (eval2)  Minimum subtraction, hot pixel correction, offset subtraction  Vector averaging for \(\pm 10\) pixel  
MicroVec  PIV eval1 mean/rms  Multipass algorithm  32 × 32  none  2  1D Gauss  3 × 3 Gauss filter  manual  Normalized median filter 
TCD  PIV eval1/2 mean/rms  Multipass FFT algorithm (Persoons and O’Donovan 2010), continuous window deformation, nonsquare interrogation windows  32 × 32 (eval1), 128 × 32 (eval2)  None  2 (eval1), 32 × 16 (eval2)  2D Gauss (special ranking)  None  Manual  3 × 3 Gauss filter 
TSI  PIV eval1 mean/rms, eval2 mean  Multipass, cascading window deformation, Rohaly-Hart (Rohaly et al. 2002), spot offset for eval1 (Wereley and Gui 2003), ensemble correlation for eval2  32 × 32  Gauss  8  2D Gauss  Mean subtraction, noise removal  Manual  Rohaly-Hart (Rohaly et al. 2002), 3 × 3 universal median filter, 3 × 3 Gauss filter 
TUD  PIV eval1/2 mean/rms  Iterative image deformation initialized using ensemble correlation predictor  32 × 32  Gauss, \(\alpha = 2.5\)  4  1D Gauss  Historical minimum subtraction, average normalization, spatial bandpass filtering, thresholding  Manual  Universal outlier detection (Westerweel and Scarano 2005), vector reallocation (Theunissen et al. 2008) 
UniG  PIV eval1 mean/rms  Direct crosscorrelation, multigrid/pass, mod. Whittaker deform., multipass peak finder (Schewe 2014), ensemble correlation  32 × 32  Triangular and bellshaped window weighting  2  2 × 1D Gauss  Only gray value clipping  Manual and algorithmic  3 × 3 medianbased peak prediction within the multipeak finder, but no direct vector postprocessing 
UniMe  PIV eval1/2 mean/rms  Multigrid algorithm, multiple pass spurious correction, iterative mean offset with adaptive thresholding, multiple correlation spurious correction  32 × 32 (eval1), 16 × 16 (eval2)  None  2  1D Gauss  Background subtraction, Gaussian smoothing, median filtering  Manual  3 × 3 median criterion validation with cubic interpolation (Westerweel and Scarano 2005), second-order Savitzky–Golay filter 
UniNa  PIV eval1/2 mean  Multipass algorithm (Astarita and Cardone 2005), ensemble correlation  33 × 33 (eval1), 11 × 11 (eval2)  Blackman window weighting  1  3 Points  Local minimum subtraction, bandpass median filter, threshold  None  Median filter (Westerweel and Scarano 2005), smoothing 2D wavelet (Ogden 1997) 
5.4 Mask generation
Table 6 Mask generation and width of the small channel
Team  Mask generation  Small channel width in px 

Dantec  Algorithmic, temporal maximum, spatial smoothing, thresholding  176 
DLR  Manual from rms image  182 
INSEAN  Algorithmic  174 
IOT  Manual  180 
LANL  Algorithmic, maximum image and morphological operations  190 
LaVision  (Manual)  162 
MicroVec  Manual  178 
TCD  Manual  184 
TSI  Manual  194 
TUD  Manual  178 
UniG  Manual and algorithmic  178 
UniMe  Manual  190 
UniNa  None  160 
5.5 Results
5.5.1 Evaluation 1
The mean flow field component in the x-direction is shown in the first column of Figs. 5 and 6. In many experiments, the real or ground truth displacement is not known; however, all physical phenomena of this experiment due to the flow, tracer particles, illumination, imaging, registration, discretization and quantization are realistic, which is not achievable using synthetic images. Furthermore, a ground truth is only helpful if only small deviations exist between the results, which is not the case here. Therefore, a statistical approach is well suited, which identifies results that are highly inconsistent with those of the other teams, that are unphysical in view of fluid mechanical considerations, or that are not explainable on technical grounds. In principle, all the teams could resolve the average characteristics of the flow. The fluid is accelerated in the first part of the chamber; later, a channel flow forms in the contraction, and a free jet, leaning toward the lower part of the images, develops behind the channel's outlet. For the mean displacement, the magnitudes of all participants are in coarse agreement. Obvious differences can be observed in terms of data smoothness (MicroVec, for instance), the acceleration of the flow toward the inlet (compare Dantec, LaVision and UniNa), the symmetry of the flow in the channel with respect to its center axis (TCD and LANL), and the contraction of the streamlines at the outlet of the channel (compare TSI and UniG with the others). The major differences can be found close to the walls, especially in the inlet region where the cavitation bubble forms in the first 250 images. Here the data are particularly noisy for INSEAN, LANL and TCD.
Dantec used a special wall treatment in the algorithm to estimate the near-wall flow field. For other teams (MicroVec, TCD, TSI, TUD, UniG and UniMe), it seems that the velocity was forced to decay to zero at the wall, which results in very strong differences for the gradients in the small channel. These effects can be seen in the displacement profiles for evaluation 2 in Fig. 10. Although it seems reasonable to take the near-wall flow physics into account, this procedure is very sensitive to the definition of the wall location, which is strongly user dependent, as shown above. Integrating the displacement in the x-direction shows differences of 28 % (of the mean over all participants) between the smallest and largest values for the volume flow rate in the small channel, which is remarkable. It is also obvious that image preprocessing is crucial to avoid wrong displacement estimates due to the hot pixels and crosstalk between frames, which result in an underestimation of the displacement. Image preprocessing was not performed by TCD, and regions of zero velocity (see the dark spots close to the inlet at the upper part) can be seen. Also in the case of MicroVec, where only smoothing was applied, artifacts from the hot pixels can clearly be seen. All the other participants used background removal by subtraction of the mean, median or minimum image and additional smoothing, which works reasonably well. Dantec, DLR, IOT, LANL, LaVision, TU Delft and UniNa used special image treatments to reduce the effect of the depth of correlation. The underestimation of the centerline displacement can be minimized by this treatment, as shown by the slightly higher mean displacements in the x-direction (indicated in the histograms in the third column of Figs. 5 and 6) in comparison with the teams that used only image smoothing.
The mean fluctuation fields for the x-direction are displayed in the second column of Figs. 5 and 6. As indicated in Table 5, some teams (LANL, UniG, UniNa) did not provide fluctuation fields. Compared with the mean displacement fields, much larger differences can be observed. The differences clearly illustrate the sensitivity of the velocity measurement to the evaluation approach and the need for reliable measures for uncertainty quantification. In general, the fluctuations in the x- and y-directions are of the same order of magnitude. Turbulence in microchannels is usually difficult to achieve, even if the Reynolds number based on the width of the small channel and the velocity of O(200) m/s is \(Re>\) 16,000 and thus above the critical Reynolds number. Therefore, it has to be kept in mind that this is not a fully developed channel flow. However, due to the intermittent cavitation at the inlet, which also affects the mean velocity in this area because of the blockage, fluctuations are expected in the inlet region of the channel. During the time when cavitation occurs, the velocity in that region is almost zero (although no tracer particles enter the cavitation), whereas it is about 40–50 pixels when no cavitation is present. Also close to the channel walls, strong fluctuations are expected because of the finite size of the particle images. This is because, at a certain wall distance, particles from below and above the measurement plane, which travel with strongly different velocities, contribute to the signal. Consequently, virtual fluctuations become visible, caused by seeding inhomogeneities. The same effects appear in the thin shear layers behind the outlet of the channel. This is confirmed by the results, where strong fluctuations can be found in the small channel and in the side regions of the jet. If the mean rms displacement value of the different measurements is divided by the mean displacement, fluctuation levels of 20–30 % are reached.
The majority of the teams found the largest rms values at the inlet region where the cavitation bubble forms and in the free jet region where the shear layers develop. The mean rms levels for \(DX_{{\mathrm {rms}}}\) are between 2.06 pixels (INSEAN) and 3.26 pixels (IOT). The values for TCD (5.05 pixels) are much higher than the values for the other teams and should therefore be taken with care.
The high rms levels in the rounded corners shown by IOT, TCD, TSI, TUD and UniMe, as well as the unphysically high rms levels at the walls of the small channel that were not measured by the other participants, are probably caused by data-processing artifacts close to the wall. In addition, the data of TCD show very large rms values close to all walls, which is caused by the wall treatment. Moreover, the mean displacement profiles of TCD show outliers, and the large rms values away from the walls are caused by these outliers.
LaVision applied a global histogram filter that only allows fluctuation levels within \(\pm 10\) pixels of the reference values; the rms values therefore show a mean of only 2.16 pixels in the x-direction. IOT obtained the fluctuations by evaluating the shape of the ensemble correlation function following the approach of Scharnowski et al. (2012), which often yields somewhat higher values than vector processing since no additional smoothing is applied. The same trend also holds for the rms displacement in the y-direction.
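A global histogram filter of this kind can be sketched as follows (a minimal illustration of the idea, not LaVision's implementation; the array names and sample values are hypothetical):

```python
import numpy as np

def histogram_filter(dx, dx_ref, max_dev=10.0):
    """Reject displacement samples deviating more than max_dev pixels
    from the reference value before computing rms statistics."""
    out = dx.astype(float).copy()
    out[np.abs(out - dx_ref) > max_dev] = np.nan
    return out

samples = np.array([7.5, 8.2, 40.0, -3.0, 8.9])   # hypothetical samples [px]
filtered = histogram_filter(samples, dx_ref=8.0)  # 40.0 and -3.0 rejected
rms = np.nanstd(filtered)                          # rms over surviving samples
```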
The last column of Figs. 5 and 6 shows the probability density functions (pdfs) of the displacement in the x- and y-directions. The velocity pdfs are in good agreement among the different teams, as small deviations cannot be resolved in pdf distributions. For DX, a very broad peak around \(\approx 2\) pixels results from the large recirculation region. A second broad peak at \(\approx\)8 pixels stems from the mean channel flow. The very high velocities in the small channel do not produce a single peak but an almost constant contribution in the range of large displacements up to 40 pixels. Very large peaks at zero velocity, caused by crosstalk or hot pixels, can be seen in the pdfs of INSEAN, MicroVec, UniNa and UniMe. Additional strong peaks at distinct displacements are visible for LANL and UniG in the low-velocity region. The data of DLR also show a small broader peak at \(\approx\)35 pixels, which was not found by the other teams. The pdfs of the remaining teams are quite smooth and show the largest differences in the gap between the first and the second broad peak.
The pdf of DY is centered around slightly negative displacement values, and again the agreement between the teams is very good. However, MicroVec and TSI show significantly larger values at zero displacement. The pdf of TCD is centered around zero.
5.5.2 Evaluation 2
For evaluation 2, the participants were free to choose an appropriate window size. All teams used the same preprocessing (if applied) as in evaluation 1. About one quarter of the teams (Dantec, MicroVec, TUD, UniG) chose \(32\times 32\) pixel windows and the same processing parameters as in evaluation 1; therefore, the same data are used for comparison.
Some teams only changed the window size: INSEAN, LaVision and UniMe lowered the final window size to \(16\times 16\) pixels. UniMe did not use the Savitzky–Golay filter, and UniNa set the final window size to \(11\times 11\) pixels. For TCD, the final window size was \(128\times 32\) pixels on a \(32\times 16\) pixel grid. The evaluation parameters, as far as they differ from evaluation 1, are described in the following and listed in Table 5.
The DLR team used a predictor field computed with the pyramid approach of evaluation 1, stopping at a sampling size of \(32\times 32\) pixels on a grid with \(8 \times 8\) pixel spacing. Each image pair was then processed individually by first applying full image deformation based on the predictor field. A pyramid scheme was then used, starting at an initial window size of \(64\times 64\) pixels, which limits the maximum displacement variations to about \(\pm 20\) pixels. The final window size was \(24\times 24\) pixels with \(8 \times 8\) pixel grid spacing, which was subsequently upsampled to the requested finer grid of \(2 \times 2\) pixels.
IOT employed an ensemble correlation with \(2\times 2\) pixel windows and a search area of \(128\times 64\) pixels. The correlation function was multiplied with that of four neighbors; the peaks were then preprocessed, and a Gaussian fit was applied to determine the peak position for the mean displacement and the rms displacements from the shape of the correlation peaks (Scharnowski et al. 2012).
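Subpixel peak positions from a Gaussian fit are commonly obtained with the standard three-point estimator (listed as "3-point 1D Gauss" by several teams in the tables); a minimal sketch of that estimator, not IOT's actual code:

```python
import numpy as np

def gauss_3point(c_m1, c_0, c_p1):
    """Subpixel peak offset from three correlation samples around the
    integer maximum, assuming a Gaussian peak shape (values must be > 0)."""
    num = np.log(c_m1) - np.log(c_p1)
    den = 2.0 * (np.log(c_m1) - 2.0 * np.log(c_0) + np.log(c_p1))
    return num / den

# a Gaussian peak centered at +0.3 px, sampled at offsets -1, 0, +1:
x0 = 0.3
c = np.exp(-((np.array([-1.0, 0.0, 1.0]) - x0) ** 2))
offset = gauss_3point(*c)   # recovers 0.3 exactly for a true Gaussian
```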
For evaluation 2, LANL used a PTV method with multiparametric tracking (Cardwell et al. 2010). Particle images were identified using a dynamic thresholding method, which allows the detection of dimmer particles in close proximity to brighter ones. Their positions were determined with subpixel accuracy using a least-squares Gaussian estimation (Brady et al. 2009). The multiparametric matching allows particles to be matched between image pairs using not only their position but also a weighted contribution of their size and intensity. To further increase the match probability, the particle search locations were preconditioned by the velocity field determined from an ensemble PIV correlation. The PTV data were then interpolated (search windows of \(32\times 32\) pixels) with inverse distance weighting onto a \(2\times 2\) pixel grid. When three or fewer vectors were present in the corresponding window, the window size was increased by 50 %.
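Inverse distance weighting of scattered PTV vectors onto a regular grid can be sketched as follows (a minimal illustration of the idea, not LANL's code; the adaptive window enlargement for sparse regions is omitted, and all names and values are hypothetical):

```python
import numpy as np

def idw(xp, yp, up, xg, yg, power=2.0):
    """Interpolate scattered samples up at positions (xp, yp) to grid
    points (xg, yg) by inverse-distance weighting."""
    d = np.hypot(xg[..., None] - xp, yg[..., None] - yp)
    w = 1.0 / np.maximum(d, 1e-9) ** power   # clamp d to avoid division by 0
    return (w * up).sum(axis=-1) / w.sum(axis=-1)

# hypothetical scattered vectors and a coarse 3x3 target grid
xp = np.array([0.0, 1.0, 2.0])
yp = np.array([0.0, 2.0, 1.0])
up = np.array([5.0, 5.0, 5.0])               # constant field
xg, yg = np.meshgrid(np.arange(3.0), np.arange(3.0))
ug = idw(xp, yp, up, xg, yg)                 # constant in, constant out
```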
The mean flow fields for the x-component are shown in the first column of Figs. 7 and 8. For TCD, the number of outliers was reduced because of the enlargement of the window in the x-direction. The mean displacement field of TCD for evaluation 2 is much smoother, although artifacts due to hot pixels can still be seen. The difference in TSI's results is that correlation averaging was used instead of individual correlations, which also tends to lower the velocity estimate by smoothing if normalized. IOT used ensemble correlation of \(2\times 2\) pixel windows from the beginning, instead of the iterative averaged correlation used for evaluation 1, which results in a noisier pattern. In general, the differences to evaluation 1 are not very pronounced in the mean fields. Regarding the velocity profiles, the differences among the participants are larger for evaluation 2 than for evaluation 1, where the processing parameters were fixed. This indicates that in principle all algorithms provide reasonable results, but that the choice of the parameters, which depends on the user's experience and intention, makes a great difference.
The rms flow fields are shown in the second column of Figs. 7 and 8. Note that TSI and UniNa did not provide fluctuation fields for evaluation 2. As expected, larger differences between the evaluation 1 and evaluation 2 results are visible here. For the DLR data, the rms levels are smaller and concentrate in regions close to the channel walls and in the shear layers of the evolving jet flow. For INSEAN, the mean rms levels are larger, and stronger signatures of regions with high rms values can be seen. The data of IOT show both very high and very low rms levels. The patterns appear very noisy, which is probably a result of the smaller window size and thus a noisier correlation peak. Since this correlation peak is analyzed with a Gaussian fit function, the fit results may show large fluctuations. The rms levels of TCD's results are of the same order as those of the other teams and much smoother than for evaluation 1, which is attributed to the larger window size. For UniMe, the rms values are above the levels of the other teams and even larger than for evaluation 1; part of this behavior is probably due to different filtering schemes. This detrimental effect can also be seen in the subpixel rms displacement (not shown). However, especially for INSEAN, the large value at zero subpixel displacement from evaluation 1 was significantly reduced.
In Fig. 10, the displacements in the x- and y-directions for evaluation 2 are shown along the line \(x = 900\) px, which lies in the channel but far away from the cavitation region. Significant differences can be seen near the walls, owing to the different wall treatments and the ability of the algorithms to resolve the gradients close to the wall. As discussed above, the width of the channel changes due to the different approaches to mask generation. If the volume flow rate is determined by integrating the displacement along the wall-normal direction, differences of up to 28 % between the participants occur. Although the differences on the center line are small in the x-direction in this representation, large differences in the y-direction can be seen on the right side of Fig. 10. Even the sign changes between teams, and the magnitude of the error can reach up to 7 px in some places.
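The sensitivity of the integrated flow rate to the masked channel width can be illustrated with a short numerical sketch (a synthetic parabolic profile on a unit-width channel, not the measured data):

```python
import numpy as np

def trap(u, y):
    # composite trapezoidal rule (kept explicit for clarity)
    return float(np.sum(0.5 * (u[1:] + u[:-1]) * np.diff(y)))

y = np.linspace(0.0, 1.0, 201)   # wall-normal coordinate, channel width 1
u = 4.0 * y * (1.0 - y)          # synthetic parabolic profile, u_max = 1

q_full = trap(u, y)              # flow rate per unit depth, analytically 2/3

# a mask clipping 10 % of the width on each side biases the integral low:
inside = (y > 0.1) & (y < 0.9)
q_masked = trap(u[inside], y[inside])
bias = 1.0 - q_masked / q_full   # relative underestimation of the flow rate
```

Even though the clipped near-wall region carries the lowest velocities, the example shows a flow-rate error of several percent, consistent with the strong influence of the masking reported above.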
5.6 Conclusion

In \(\upmu \hbox {PIV}\), fluorescent tracer particles are usually used so that the signal can be separated from the background with optical filters. One may thus assume that preprocessing is not needed to enhance the signal-to-noise ratio. However, the analysis of case A shows that image preprocessing is important to avoid incorrect displacement estimates caused by typical camera artifacts such as cold and hot pixels or crosstalk between camera frames. The latter appears if the laser pulse separation is comparable to the interframing time of the digital cameras. The influence of gain variations between pixels is usually reduced by the image intensity correction implemented in most camera software. If it cannot be fully balanced, the analysis shows that the different background removal methods applied by the teams work similarly well. However, it is evident from the results provided by MicroVec and TCD that smoothing is not effective at eliminating fixed-pattern noise.
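A common background removal step of the kind discussed is subtraction of the per-pixel temporal minimum image; a minimal sketch with a tiny synthetic stack (not any team's specific implementation):

```python
import numpy as np

def remove_static_background(stack):
    """Subtract the per-pixel temporal minimum of an image stack.
    Static background, hot pixels and fixed-pattern noise (constant in
    time) are removed, while moving particle images survive."""
    return stack - stack.min(axis=0, keepdims=True)

# tiny synthetic stack: 3 frames of 4x4 px with one hot pixel and one particle
stack = np.zeros((3, 4, 4))
stack[:, 1, 1] = 100.0   # hot pixel, present in every frame -> removed
stack[0, 2, 2] = 50.0    # particle image, present in one frame only -> kept

clean = remove_static_background(stack)
```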

It is well known that precise masking of solid boundaries prior to the evaluation is a very important task. The masking defines the size of the evaluation domain and thus the region where flow information can be measured, so precise masking is desirable to maximize the flow information. However, the present case shows that masking is associated with large uncertainties: the estimated width of the small straight channel varied by more than 20 % among the submissions. This unexpected result illustrates the strong influence of the user. To avoid this user-dependent uncertainty, universal digital masking techniques are desirable to address this serious problem in future measurements.

Another interesting effect could be observed by comparing the strongly varying velocity profiles across the small channel. Due to the spatial correlation analysis, the flow velocity close to solid boundaries is usually overestimated. However, since the spatial resolution effect was almost identical for all teams in evaluation 1, the variation can be attributed to the specific treatment of the near-wall flow. As most teams did not use special techniques, the variations are mainly caused by the masking procedure. In contrast, UniMe seems to resolve the boundary layer much better than the other teams. This was achieved by implementing the no-slip condition in the evaluation approach. Although this model-based approach shows the expected trend of the velocity close to the wall, it is evident that the results are strongly biased by the definition of the boundary location. Unfortunately, as already discussed, the definition of the boundary location is itself associated with large uncertainties, and so, therefore, is this model-based approach. It would therefore be more reliable to use evaluation techniques that enhance the spatial resolution without any model assumption, such as single-pixel PIV or PTV evaluation techniques (Kähler et al. 2012b). However, as the uncertainty of these techniques is not always better than that of spatial correlation approaches, it is recommended to evaluate the measurements in a zonal-like fashion, as done in numerical flow simulations, where RANS simulations are coupled with LES or even DNS to resolve the flow unsteadiness at relevant locations.
The zonal-like evaluation approach minimizes the global uncertainty of the flow field if the near-wall region is evaluated using PTV (to avoid uncertainties due to spatial filtering or zero-velocity assumptions at the wall), while at larger wall distances spatial correlation approaches are used (because the noise can be effectively suppressed using statistical methods).

Furthermore, the precise estimation of the mean velocity is very important for the calculation of higher-order moments. The strong variation of the rms values illustrates the importance of uncertainty quantification, because the differences visible in the results deviate much more than expected from general PIV uncertainty assumptions. Future work is required here to quantify the reliability of a PIV measurement.

Since the flow fields in microscopic devices show large in-plane and out-of-plane gradients and are inherently three-dimensional, it is beneficial to use techniques that allow the reconstruction of all three components of the velocity vector in a volume (Cierpka and Kähler 2012). Of special interest are techniques that allow depth coding in 2D images, such as confocal scanning microscopy (Park et al. 2004; Lima et al. 2013), holographic (Choi et al. 2012; Seo and Lee 2014) or light-field techniques (Levoy et al. 2006), defocusing methods (Tien et al. 2014; Barnkob et al. 2015) or the introduction of astigmatic aberrations (Cierpka et al. 2011; Liu et al. 2014).
6 Case B
6.1 Case description
The Kolmogorov length scale cannot be resolved with the experimental setup. This is typical for most PIV measurements, since the field of view is usually relatively large in order to observe the large-scale flow structures. However, the spatial resolution is on the order of the Taylor scale, which means that most of the vortices occurring in the turbulent spectrum can be resolved. The images of case B pose the following challenges:

Low signal strength and signal-to-noise ratio

Small particle image size due to the large pixel size of the camera

Large dynamic velocity range (DVR)

Large turbulent intensities

Strong out-of-plane motion due to 3D effects (turbulence, separation)

Laser light reflections at the walls.
6.2 Evaluation of case B
The participants had to perform two different evaluations. Evaluation 1 is a standard PIV evaluation, and evaluation 2 is more advanced.
6.2.1 Evaluation 1
For evaluation 1, the participants were required to use the images \(B[i-1].\hbox {tif}\) and \(B[i+1].\hbox {tif}\) to compute the displacement fields at time steps [i], with \(i=11 \ldots 1034\). The displacements obtained from this evaluation are divided by 2 in order to obtain the displacement from one time step to the next. The final interrogation window size was \(32 \times 32\,{\mathrm {px}}\) with \(93.75\,\%\) overlap (\(2\,{\mathrm {px}}\) vector spacing). Techniques such as multipass, window weighting, image deformation and masking were allowed.
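The frame pairing of evaluation 1 can be illustrated in one dimension with a simple FFT-based cross-correlation (a synthetic signal with a known integer shift, not the challenge images):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.random(256)                   # synthetic 1D "image" at step i-1
shift_per_step = 2                    # true displacement per time step [px]
g = np.roll(f, 2 * shift_per_step)    # step i+1: pattern moved over 2*dt

# circular cross-correlation via FFT; the peak gives the shift over 2*dt
corr = np.fft.ifft(np.fft.fft(g) * np.conj(np.fft.fft(f))).real
total_shift = int(np.argmax(corr))    # displacement over two time steps
dx = total_shift / 2.0                # halved, as prescribed for evaluation 1
```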
6.2.2 Evaluation 2
For evaluation 2, the teams were free to choose any images to obtain the displacement fields at time steps [i], with \(i=11 \ldots 1034\). Techniques such as multipass, window weighting, image deformation and masking were allowed, and the teams had to select an optimal interrogation window size based on their experience and judgment. However, the vector grid was specified with a spacing of \(2\,{\mathrm {px}}\) to allow for a comparison with evaluation 1. The participants were also encouraged to apply advanced multi-frame evaluation techniques, as first proposed in Hain and Kähler (2007). Within the third International PIV Challenge (Stanislas et al. 2008), it was already shown that multi-frame approaches can strongly reduce the uncertainty.
6.3 Participants and methods
Participants of case B and applied methods for evaluation 1
Team  Type of evaluation  Code description  Final IW size [px]  Window weighting  Final vector spacing before interpolation [px]  Fit  Preprocessing  Mask generation  Postprocessing 

Dantec  PIV  Multipass algorithm  \(32 \times 32\)  No  2  2D Gauss  Background subtraction + intensity normalization  Algorithmic  Temporal NSigma validation (Sigma = 5.0) 
DLR  PIV  dual frame, coarsetofine pyramid correlation (\(96 \times 94\) start)  \(32 \times 32\)  No  2  \(5 \times 5\) 2D Gauss  Subtract mean image (combined subtract and divide), intensity capping of 2.5 % pixels, pixel mirroring on mask edge  Manual, from maximum image  Normalized median filter, Zscore validation (4 sigma) 
INSEAN  OF  Multipass pyramidal algorithm, minimization of sum of squared differences, tracking  \(32 \times 32\)  Gaussian window weighting  <2  N/A  Subtract mean image + subtract local mean + histogram stretch  Algorithmic  Natural neighbor interpolation with Bézier–Bernstein patches 
IOT  PIV  Multipass multigrid algorithm  \(32 \times 32\)  Gaussian window weighting  2  3Point least square  Fourier high pass with the cutoff frequency 56 Hz  Manual  spatial \(7 \times 7\) and temporal adaptive median filters, Gaussian weighted interpolation, Fourier lowpass with the cutoff frequency 300 Hz 
IPP  PIV  Multipass algorithm  \(32 \times 32\)  Gaussian window weighting  2  3Point 1D Gauss  Subtract mean image, subtract local mean (normalized FFT)  Manual  \(5 \times 5\) Median outlier correction 
LANL  PIV  Multipass deform algorithm, robust phase correlation  \(32 \times 32\)  Gaussian window weighting, \(16 \times 16\) resolution  2  3Point 1D Gauss  Median subtraction, multiplied by mask  Mean intensity, threshold, morphological operations, with manual edits  Universal outlier detection, \(16 \times 16\) px (\(8 \times 8\) vec) Gaussian filter 
MicroVec  PIV  Multipass algorithm  \(32 \times 32\)  No  2  3Point 1D Gauss  \(3 \times 3\) Gauss filter  Manual  Normalized median filter 
MPI  PIV  Multipass algorithm, window subpixel shift and firstorder deformation, Whittaker image interpolation (2 frames distance)  \(32 \times 32\)  2Dtriangular window  2  2D Gauss, correction of particle image intensity variations  No  No  Universal outlier detection, second peak validation 
TCD  PIV  Multipass algorithm, continuous window deformation  \(32 \times 32\)  No  2  1D Gauss  Highpass spatial filter (5 pixel sliding background)  Manual  \(3 \times 3\) Gauss filter 
TSI  PIV  Multipass, cascading window deformation, Rohaly–Hart, spot offset  \(32 \times 32\)  Gaussian window weighting  8  2D Gauss  Subtract mean image, noise removal  Manual  Rohaly–Hart, \(3 \times 3\) universal median filter, \(3 \times 3\) Gauss filter 
TsU  PIV  Multipass, multigrid, image deformation algorithm  \(32 \times 32\)  No  2  3Point 1D Gauss  Replace nofluid regions with pregenerated particle image, median filter, local contraststretching transformation  Manual  \(3 \times 3\) Gauss filter 
TUD  PIV  Multipass algorithm with window deformation  \(32 \times 32\)  Gaussian window weighting  2  3Point 1D Gauss  intensity normalization with respect to timeaveraged temporal minimum subtraction and \(3 \times 3\) Gaussian smoothing  Manual  No 
UniMe  PIV  Multigrid algorithm with iterative window deformation, multiplication of correlation plane  \(32 \times 32\)  No  2  2D Gauss, 1D Gauss  Subtract mean image, histogram clipping, Gaussian smoothing, local image normalization  Manual  Validation median criterion, cubic interpolation 
Participants of case B and applied methods for evaluation 2
Team  Type of evaluation  Code description  Final IW size [px]  Window weighting  Final vector spacing before interpolation [px]  Fit  Preprocessing  Mask generation  Postprocessing 

Dantec  FTC (fluid trajectory correlation)  Secondorder polynomial fit to trajectory across 5 temporal neighbor images  \(32 \times 32\)  No  8  2D Gauss  Background subtraction + intensity normalization + \(3 \times 3\) Gaussian blur  Algorithmic  Bilinear interpolation to 2 pixel vector spacing 
DLR  PIV  Tripleframe, coarsetofine pyramid correlation (\(96 \times 94\) start), image separation \(\pm 2\)  \(32 \times 16\)  No  2  \(5 \times 5\) 2D Gauss  Subtract mean image (combined subtract and divide), intensity capping of 2.5 % pixels, pixel mirroring on mask edge  Manual, from maximum image  Normalized median filter, Zscore validation (4 sigma), Gaussian smoothing (sigma=node spacing), Gaussian smoothing in time (\(\pm 2\) fields) 
INSEAN  OF  Multipass pyramidal algorithm, minimization of sum of squared differences, tracking  \(28 \times 28\)  Gaussian window weighting  2 at first step; free to evolve (<2) with trajectory sway  N/A  Subtract mean image. Subtract local mean. Histogram stretch  Algorithmic  Temporal Savitzky–Golay filter on trajectories. Natural neighbor interpolation with Bézier–Bernstein patches 
IOT  PIV  Adaptive sampling Multipass algorithm with pyramid correlation over 5 image frames  Adaptive, square IW’s from 16 to 48 px  Gaussian window weighting  2  3Point least square  Fourier high pass with the cutoff frequency 56 Hz  Manual  Spatial \(7 \times 7\) and temporal adaptive median filters, Gaussian weighted interpolation, Fourier lowpass with the cutoff frequency 300 Hz 
IOTPTV  PTV  Relaxation PTV algorithm  3Point least square  Subtract mean image  Manual  Spatial \(7 \times 7\) and temporal adaptive median filters, gaussian weighted interpolation, fourier lowpass with the cutoff frequency 300 Hz  
IPP  PIV  Multipass algorithm, FTEE (fluid trajectory evaluation based on an ensembleaveraged crosscorrelation)  \(32 \times 32\)  Gaussian window weighting  2  3Point 1D Gauss  Subtract mean image, subtract local mean (normalized FFT)  Manual  \(5 \times 5\) Median outlier correction 
LANL  PIV  Multipass algorithm, fluid trajectory correlation  \(32 \times 32\)  Gaussian window weighting, \(16 \times 16\) resolution  2  3Point 1D Gauss  Median subtraction, multiplied by mask  Mean intensity, thresholded, morphological operations, with manual edits  Universal outlier detection in space and time 
LaVision  PIV  Multipass algorithm, pyramid correlation  \(24 \times 24\)  Gaussian window weighting  6  3Point 1D Gauss  Subtraction of average of each pixel over time  Manual  None 
MPI  PIV  Multipass algorithm, window subpixel shift and firstorder deformation, Whittaker image interpolation (4 frames distance)  \(32 \times 32\)  2Dtriangular window  \(2 \times 2\)  2D Gauss, correction of particle image intensity variations  No  No  Universal outlier detection, second peak validation 
TCD  PIV with HDR  Multipass algorithm, continuous window deformation  \(32 \times 32\)  No  4  1D Gauss  Highpass spatial filter (5 pixel sliding background)  Manual  \(3 \times 3\) Gauss filter 
TSI  PIV  Multipass, cascading window deformation, Rohaly–Hart, spot offset  \(32 \times 32\)  Gaussian window weighting  8  2D Gauss  Subtract mean image, noise removal  Manual  Rohaly–Hart, \(3 \times 3\) universal median filter, \(3 \times 3\) Gauss filter 
TsU  PIV  Multipass, multigrid, image deformation algorithm  \(32 \times 32\)  No  4  3Point 1D Gauss  Replace nofluid regions with pregenerated particle image, median filter, local contraststretching transformation  Manual  \(3 \times 3\) Gauss filter 
TUD  PIV  Multipass algorithm with window deformation, multiframe pyramid correlation  \(16 \times 16\)  Gaussian window weighting  4  3Point 1D Gauss  Intensity normalization with respect to timeaveraged temporal minimum subtraction and \(3 \times 3\) Gaussian smoothing  Manual  Secondorder polynomial regression in spacetime (\(5 \times 5\) spatial kernel, 9 samples in time) 
UniMe  PIV  Multigrid algorithm with window deformation, multiplication of correlation plane, multiframe correlation multiplication  \(16 \times 16\)  No  2  2D Gauss, 1D Gauss  Subtract mean image, histogram clipping, gaussian smoothing, local image normalization  Manual  Validation median criterion, cubic interpolation 
URS  PTVOF  Hybrid lagrangian particle tracking  \(11 \times 11\)  No  2  N/A  No  No  Adaptive Gaussian arithmetic average 
6.4 Results
In order to compare the evaluations of the teams, different plots are shown in the following. At first, instantaneous flow fields are presented. Afterward, PDFs of the displacements are shown in order to investigate the peaklocking effect and to see the differences between evaluation 1 and 2. Line plots of mean velocities and rms are shown in addition to see differences in the statistics from different results. A comparison of the temporal as well as spatial characteristics by means of Fourier transforms completes the analysis of case B.
Instantaneous flow fields, represented by the displacement component \(\hbox {d}x\) for evaluation 1, are shown in Fig. 14. All teams except INSEAN used cross-correlation evaluation; INSEAN instead applied an optical flow approach. The flow fields of Dantec, DLR, INSEAN, IOT, TsU and UniMe look quite similar. In comparison, the displacement fields of IPP, LANL, MicroVec and MPI are much noisier. Especially in the MicroVec evaluation, strong gradients are observed. TSI applied a filter which results in an unphysically strong smoothing of structures and gradients, which is also visible in the preceding plots.
It has to be kept in mind that, e.g., for evaluation 1 the displacements obtained from the correlation of two images with a temporal distance of \(2 \cdot \Delta t\) have been divided by 2 in order to get the virtual displacement from one time step to the next. Therefore, peak locking is observed in these figures for some teams at locations \(\Delta x = i \cdot 0.5\,{\mathrm {px}}\), with \(i \in {\mathbb {Z}}\). Concerning the peak locking, there is a strong difference between the participants: while some curves show nearly no peak locking, others show severe peak locking for evaluation 1 as well as evaluation 2. However, for many teams the peak locking is strongly reduced or even vanishes when the advanced evaluation technique is used, which results in an increased accuracy. The optical flow evaluation of INSEAN agrees well with the cross-correlation evaluations of, e.g., Dantec and DLR.
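A simple numerical diagnostic for this kind of peak locking is the distribution of the displacements modulo the locking interval; a minimal sketch with synthetic data (hypothetical samples, not one of the submitted fields):

```python
import numpy as np

def locking_fraction(dx, step=0.5, tol=0.05):
    """Fraction of samples lying within tol pixels of a multiple of step;
    for unlocked (uniform) fractional parts this approaches 2*tol/step = 0.2."""
    frac = np.mod(dx, step)
    dist = np.minimum(frac, step - frac)   # distance to nearest multiple
    return float(np.mean(dist < tol))

rng = np.random.default_rng(7)
unlocked = rng.uniform(0.0, 10.0, 100_000)   # no preferred fractional value
locked = np.round(unlocked / 0.5) * 0.5      # fully locked to 0.5 px multiples
```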
The rms of the displacements \(\hbox {d}x\) along the line \(x=100\,{\mathrm {px}}\) is shown in Figs. 20 and 21. Compared with the mean displacement fields, the scatter of the data is higher; however, many teams provided quite similar results. Looking at subplot 1 in Figs. 20 and 21, it can be seen that the scatter between the teams is smaller for evaluation 2 than for evaluation 1 (not considering the IOTPTV evaluation). The conclusion can be drawn that a proper evaluation by means of an advanced evaluation technique reduces the random error. However, not every advanced technique is able to do this, as can be seen when comparing subplots 3 in Figs. 20 and 21. In these cases, the scatter between the teams is larger for evaluation 2 than for evaluation 1, showing that the effect of the user is larger than the differences due to the specific evaluation techniques applied here.
For evaluation 1, the curves show a similar behavior, except for IOT: in the postprocessing, this team applied a temporal filter, which suppresses high-frequency fluctuations. Comparing the results of the other participants shows that the amplitude of \(\hbox {d}x\) does not significantly decrease even at high frequencies. This is expected, as these high-frequency fluctuations are caused by noise; from the physical point of view, such high frequencies should not occur, cf. the discussion of the Kolmogorov scales in Sect. 6.1 and the averaging effect of the final interrogation window size. For evaluation 2, the curves of some teams that applied sophisticated evaluation techniques show a decreasing amplitude with increasing frequency, which means that the measurement accuracy is increased. Typically, the high-frequency oscillations are damped by locally increasing the particle image displacement and thus reducing the relative measurement error.
For top-hat square interrogation windows, the locations of the roots in these plots are theoretically determined by the sinc function, which is the Fourier transform of the rectangular function. This leads to roots at frequencies \(f=\frac{n}{IW}\), with \(n \in \{ 1, 2, 3,\ldots \}\) and IW being the size of the interrogation window in pixels. For many teams, strongly decreased amplitudes are observed at these frequencies. However, this is not the case for all teams, for different reasons. One reason is the use of window weighting, which weakens this effect. A second reason is the postprocessing applied by the participants: depending on its kind, a filtering/smoothing effect is obtained in this step, which modifies the spatial frequency spectrum. A third reason may be that some participants did not perform the evaluation on a grid with \(2 \times 2 \,{\mathrm {px}}\) spacing but on a coarser grid, interpolating the data onto the fine grid afterward. However, this effect should be negligible, since at least the low spatial frequencies are not affected by this method.
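The root locations follow directly from the amplitude response of a moving average of length IW; a short numerical check (numpy's `sinc` is \(\sin(\pi x)/(\pi x)\)):

```python
import numpy as np

IW = 32                                # interrogation window size [px]

def tophat_response(f, iw=IW):
    """Amplitude response of a top-hat window of length iw at spatial
    frequency f (in 1/px): |sinc(iw * f)|, which vanishes at f = n/iw."""
    return np.abs(np.sinc(iw * f))

roots = np.array([1, 2, 3]) / IW       # predicted zeros f = n/IW
response_at_roots = tophat_response(roots)   # ~0 at each predicted root
dc_gain = tophat_response(np.array([0.0]))[0]  # 1 at zero frequency
```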
6.5 Conclusions

The discussion shows that the results from evaluation 1 are quite consistent, whereas the results from evaluation 2 show significant differences, in particular when the rms fields are considered. This implies that the results depend significantly on the user and his/her experience and intention. This holds true in particular for the parameters of the preprocessing, the evaluation and the postprocessing. It is thus dangerous to use PIV as a black box, because the application of the technique still requires substantial expert knowledge and experience.

The analysis also shows that the uncertainty of the technique can be greatly reduced by making use of multi-frame evaluation techniques, because they evaluate the temporal history of particle image trajectories. However, in order to benefit from the temporal analysis of the particle pattern, different aspects of a sophisticated evaluation method have to be considered. In the case of strongly oversampled image sequences, as provided here, a damping of the amplitudes at higher frequencies is acceptable from the physical point of view. However, caution is advised not to suppress physically relevant frequencies: a smooth field in space and time does not necessarily mean a high evaluation accuracy.

Another strong advantage of multi-frame evaluation techniques is the reduction of bias errors due to the peak-locking effect. The level of peak-locking reduction depends on how a displacement at a certain location is obtained by the evaluation method. Choosing an optimal temporal separation between correlated images at a certain location may increase the relative measurement accuracy and reduce peak locking. If, however, a displacement at a certain location is determined not from a single correlation but as a (weighted) average over many correlations with different temporal separations, the peak-locking effect can in principle be averaged out. Finally, the multi-frame analysis also makes it possible to compensate for acceleration and curvature effects, which is important for highly accurate measurements in strongly unsteady flows.
7 Case C
7.1 Abbreviations
BIMART  Block Iterative Multiplicative Algebraic Reconstruction Technique
CoSaMP  Compressed Sampling Matching Pursuit
FTC  Fluid Trajectory Correlation
FTEE  Fluid Trajectory Evaluation based on an Ensemble-averaged cross-correlation
MART  Multiplicative Algebraic Reconstruction Technique
MLOS  Multiplicative Line of Sight
MinLOS  Minimum Line of Sight
MTE  Motion Tracking Enhancement
ppp  Particles per pixel
PTV  Particle Tracking Velocimetry
PVR  Particle Volume Reconstruction
SEF  Spectral Energy Fraction
SFIT  Spatial Filtering Improved Tomographic PIV
SMART  Simultaneous Multiplicative Algebraic Reconstruction Technique
StB  Shake the Box
7.2 Case description
In the 3D scenario, the spatial resolution is notoriously difficult to predict, as it depends on several parameters. The particle concentration and the reliability of the algorithm for the reconstruction of the 3D particle distribution play a key role. Generally, a compromise is required between dense particle distributions and sufficiently accurate reconstruction. To date, this trade-off is left to the personal experience of the experimenter. The purpose of test case C is to assess the performance of the reconstruction and processing algorithms in terms of final spatial resolution and accuracy.
The test case is performed on synthetic images of a complex flow field. The optimization of the particle image density in terms of maximum spatial resolution with minimum loss of accuracy is the main challenge of this test case.
The particles are randomly located in a \(48 \times 64 \times 16\,{\mathrm {mm}}^3\) region. Particles are also generated outside of the measurement region (in a volume extended by 24 and 32 mm on both sides in the x and y directions, respectively) in order to reproduce the experimental conditions of a laser slab of 16 mm thickness illuminating the measurement region. A small displacement is imposed on the outer particles in order to avoid their trivial suppression by image subtraction. Projected images from 4 cameras (\(1280 \times 1600\) pixels at 16-bit discretization; pixel pitch 6.5 \(\upmu \hbox {m}\)) are provided with image densities ranging between 0.01 ppp (particles per pixel) and 0.15 ppp. Sample images for low, medium and large density are reported in Fig. 26. The cameras are angled at \({\approx}{\pm} 35^\circ\) in both directions (deviations from these values are due to accommodation of the Scheimpflug angles while keeping the imaged volume centered in the images). The magnification at the origin of the reference system is about 0.15 for all cameras, and the images are generated with \(f_{\#} = 11\), resulting in a particle image diameter of about 2.5 pixels.
The imposed displacement field combines four features:
1. 3D vortices, obtained as a combination of sines and cosines in the three directions, with wavelengths varying between 1.6 and 16 mm, corresponding to 32 and 320 voxels at 20 vox/mm. The amplitude is about 1 voxel and varies with the wavelength and the seeding density.
2. A periodic pseudo-jet with sinusoidal profile, obtained as a combination of short-wave sines (with wavelengths between 8 and 1 mm, i.e., 160 and 20 voxels at 20 vox/mm) in y and z and long-wave cosines along x. The maximum amplitude is about 2 voxels and varies with the wavelength and the seeding density.
3. Step variations of the velocity field, a well-established test to obtain the impulsive response of the processing algorithm. Here it is expected that the PIV processing mainly influences the slope of the filtered step-wise variation, while the quality of the reconstruction determines the upper and lower values of the step. A poor reconstruction will be more affected by ghost particles, thus resulting in smoother velocity gradients.
4. A marker included to identify the image density chosen by the participant. The marker consists of a region of constant displacement, which was easily identified by all participants.
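For illustration, a divergence-free sinusoidal test displacement of the kind described in item 1 can be generated as follows (a minimal sketch; the wavelengths, amplitudes and the exact sine/cosine combination of the challenge data are not reproduced, and the function name is illustrative):

```python
import numpy as np

def sinusoidal_displacement(shape, wavelength_vox, amplitude_vox):
    """Divergence-free 3D sinusoidal displacement field on a voxel grid.

    shape is (nz, ny, nx); the u/v combination is chosen so that the
    analytic divergence is exactly zero, as suitable for a vortex pattern.
    """
    k = 2.0 * np.pi / wavelength_vox
    z, y, x = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    u = amplitude_vox * np.sin(k * x) * np.cos(k * y)
    v = -amplitude_vox * np.cos(k * x) * np.sin(k * y)
    w = np.zeros(shape)
    return u, v, w

# A small grid suffices for illustration; the challenge volume is far larger.
u, v, w = sinusoidal_displacement((16, 32, 32), 16.0, 1.0)
```

The divergence-free construction mirrors the incompressibility of the imposed flow, which later sections exploit as an accuracy check.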
7.3 Algorithms
Algorithms used by the participants to test case C
Team  Image density (ppp)  Calibration  Reconstruction  PIV processing  Postprocessing 

ASU  0.050  Third-order polynomial mapping in x and y, and second order in z Soloff et al. (1997)  Image segmentation (Ding and Adrian 2014) at 20 vox/mm. Intensity value on images interpolated with cubic spline. Intensity distributions smoothed with Gaussian filter (\(3 \times 3 \times 3\), std 0.6)  Planar 2D multipass image deformation method on xz and xy planes (summing 20 planes in the y and z directions, respectively). Interrogation volume side 1.0 mm  Gaussian smoothing (\(9 \times 9 \times 9\), std 2.4)
BUAA  0.075  Third-order polynomial mapping in x and y, and first order in z  10 Intensity-enhanced MART (inverse diffusion equation) iterations at 20 vox/mm. Gaussian smoothing of the original images (\(3 \times 3\), std 0.5). Gaussian smoothing between the iterations (Discetti et al. 2013)  Iterative discrete window offset. Interrogation volume side 1.6 mm  Gaussian smoothing (\(5 \times 5 \times 5\), std 0.8). POD-based postprocessing to reduce noise
Dantec  0.075  Direct linear transformation  10 MART iterations after initialization with MinLOS (Maas et al. 2009) at 20 vox/mm. Gaussian blurring on \(3 \times 3 \times 3\) voxels, iterated 5 times  3D Least squares matching algorithms Westfeld et al. (2010). Interrogation volume side 1.65 mm  Median filter (\(3 \times 3 \times 3\)) 
DLR  0.075  Two plane model Wei and Ma (1991) with optical transfer function Schanz et al. (2013b)  15 ASMART iterations and volume smoothing (Discetti et al. 2013). 3D particles identification and StB. Search for matching particles in both frames. Iteration of these steps for three times using the residual images Schanz et al. (2013a), Schanz et al. (2016)  Artificial volume with 6 voxels diameter Gaussian blobs, generated at reconstructed particle positions. 3D crosscorrelation with volume deformation. Interrogation volume side 1.6 mm  Moving average smoothing (\(2 \times 2 \times 2\)) 
Hacker  0.075  Perfect  Perfect at 20 vox/mm  Iterative volume deformation with a top hat moving average approach (Astarita 2006). Interrogation volume side 1.6 mm  None 
IoT  0.100  Pinhole camera model Tsai (1987).  20 SMART iterations after initialization with MLOS followed by 5 MTE iterations each composed of 20 SMART (Novara et al. 2010) at 20 vox/mm. Volume extended by 4 mm on each side to minimize edge effects  Iterative volume deformation method. Interrogation volume side 1.4 mm  None 
IPP  0.050  Pinhole camera model Tsai (1987)  8 BIMART (Byrne 2009) iterations, block size 4, relaxation parameter 0.38 at 20 vox/mm. Volume extended by 2.4 mm on each side to minimize edge effects  Iterative volume deformation with Gaussian window \((\alpha = 2)\). Interrogation volume side 2.35 mm  None 
LANL  0.075  Direct linear transformation  Optimized MLOS with exponent 1 (La Foy and Vlachos 2011) at 20 vox/mm. Particles suppression in order to get ppp matching with the images. Background suppression (1 % of maximum)  Iterative volume deformation with Motion Tracking Blanking and Robust Phase Correlation (Eckstein et al. 2008). Interrogation volume side 1.0 mm. Gaussian windowing (effective size 0.8 mm)  Gaussian smoothing (std 2 voxels) 
LaVision  0.075  Pinhole camera model Tsai (1987)  14 SMART iterations after initialization with MLOS followed by 5 MTE iterations each composed of 14 SMART (Novara et al. 2010) at 40 vox/mm. Gaussian smoothing \(3 \times 3 \times 3\) between iterations (Discetti et al. 2013)  Iterative volume deformation with fast direct cc (Discetti and Astarita 2012). Interrogation volume side 2.4 mm. Gaussian windowing (effective size 1.2 mm)  None 
ONERA  0.075  Pinhole camera model Tsai (1987).  25 PVR+SMART iterations after initialization with MLOS (Champagnat et al. 2014) at 45 vox/mm. Decimation to reach 23 vox/mm.  FOLKI3D algorithm (Cheminet et al. 2014) based on least square optimization. Interrogation volume side 1.8 mm. Gaussian windowing (effective size 1.2 mm)  None 
ONERA_PTV  0.030  Pinhole camera model Tsai (1987)  Local maxima detection after initialization with MLOS, refinement with 15 PVRCoSaMP iterations (Cornic et al. 2013)  3D Particle tracking embedded within the reconstruction step  Natural neighbor interpolation on structured grid 
TUD  0.075  Third-order polynomial mapping in x and y, and second order in z Soloff et al. (1997)  5 MART iterations followed by 5 MTE iterations each composed of 5 MART (Novara et al. 2010) at 20 vox/mm. Gaussian smoothing \(5 \times 5 \times 5 \ (\alpha = 2)\) between the iterations (Discetti et al. 2013)  Iterative volume deformation with fast direct cc (Discetti and Astarita 2012) and Gaussian windowing. Interrogation volume side 1.6 mm  None
7.4 Reconstruction
In order to allow a fair comparison of the reconstruction algorithms, the participants were asked to submit a reconstructed volume at a reference image density of 0.075 ppp. From this point on, unless otherwise stated, the reference resolution is 20 vox/mm. The reconstruction was performed by the participants on a larger volume: the depth of the volume is increased from 16 to 24 mm (320 to 480 voxels at 20 vox/mm).
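The multiplicative update at the heart of the MART family can be illustrated with a deliberately minimal 2D analogue using two orthogonal line-of-sight "cameras" (a didactic toy, not any participant's code): each measured projection divides the current projection of the estimated intensity field, and the ratio multiplicatively corrects the voxels along the line of sight. The toy also shows how ghost particles arise from projection ambiguity.

```python
import numpy as np

def mart_2d(row_sums, col_sums, shape, n_iter=200, mu=1.0, eps=1e-12):
    """Toy MART on a 2D 'volume' seen by two orthogonal line-sum cameras."""
    E = np.ones(shape)
    for _ in range(n_iter):
        proj = E.sum(axis=1, keepdims=True)            # camera 1: row sums
        E *= (row_sums[:, None] / np.maximum(proj, eps)) ** mu
        proj = E.sum(axis=0, keepdims=True)            # camera 2: column sums
        E *= (col_sums[None, :] / np.maximum(proj, eps)) ** mu
    return E

# Two 'particles' on a 6x6 grid: the reconstruction matches both projections
# but also lights up the two ghost positions consistent with them.
truth = np.zeros((6, 6))
truth[1, 2] = truth[4, 5] = 1.0
E = mart_2d(truth.sum(axis=1), truth.sum(axis=0), truth.shape)
```

With only two views the true and ghost positions receive equal intensity, which is exactly the ambiguity that additional cameras, MTE-style use of both exposures and smoothing between iterations are meant to break.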
Particle positions in the reconstructed volumes are determined by identification of local maxima on \(3 \times 3 \times 3\) voxel kernels. A subvoxel position estimate is obtained by Gaussian interpolation on a 3-point stencil along each of the three spatial directions. A particle is identified as a “true particle” if located at a distance smaller than 2 voxels (i.e., 0.1 mm at 20 vox/mm) from a particle of the exact distribution.
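The subvoxel refinement corresponds to the standard three-point Gaussian estimator, applied independently along each spatial direction; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def gauss3_offset(I_minus, I_0, I_plus):
    """Subvoxel peak offset from three samples via a 3-point Gaussian fit.

    The estimate is exact when the intensity profile is Gaussian and all
    three samples are strictly positive.
    """
    lm, l0, lp = np.log(I_minus), np.log(I_0), np.log(I_plus)
    return (lm - lp) / (2.0 * (lm - 2.0 * l0 + lp))

# Synthetic Gaussian particle centered 0.3 voxels off the grid node
true_offset, sigma = 0.3, 1.0
I = [np.exp(-(x - true_offset) ** 2 / (2.0 * sigma ** 2)) for x in (-1, 0, 1)]
est = gauss3_offset(*I)
```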

The following metrics are used to assess the reconstruction quality:

- Number of true and ghost particles \((N_T, N_G)\);
- Quality factor Q, as defined in Elsinga et al. (2006b), i.e., the correlation factor between the true and the reconstructed intensity distributions (Eq. 5):
$$\begin{aligned} Q = \frac{\sum I_{ijk}I_{ijk}^{e}}{\sqrt{\sum ( I_{ijk} )^2 \sum ( I_{ijk}^{e} )^2}} \end{aligned}$$(5)
where \(I_{ijk}\) and \(I_{ijk}^{e}\) are the intensity values of the reconstructed and the exact distributions, respectively. Since each participant used its own grid (different voxel origin, resolution, etc.), in order to evaluate Q all the reconstructed volumes are interpolated on a common grid using third-order spline interpolation;
- Mean intensity of true and ghost particles \(\left( \left\langle I_T \right\rangle , \, \left\langle I_G \right\rangle \right)\);
- Weighted levels WL and power ratio PR, as defined in Eqs. 6 and 7:
$$\begin{aligned} \hbox {WL} = \frac{N_T \left\langle I_T \right\rangle }{N_G \left\langle I_G \right\rangle } \end{aligned}$$(6)
$$\begin{aligned} \hbox {PR} = \frac{N_T}{N_G} \left( \frac{\left\langle I_T \right\rangle }{\left\langle I_G \right\rangle }\right) ^2 \end{aligned}$$(7)
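These metrics translate directly into code; a minimal sketch of Eqs. 5–7 (function names are illustrative):

```python
import numpy as np

def quality_factor(I_rec, I_exact):
    """Correlation factor Q between reconstructed and exact intensities (Eq. 5)."""
    num = np.sum(I_rec * I_exact)
    den = np.sqrt(np.sum(I_rec ** 2) * np.sum(I_exact ** 2))
    return num / den

def weighted_levels(n_true, n_ghost, i_true, i_ghost):
    """Weighted levels WL (Eq. 6)."""
    return (n_true * i_true) / (n_ghost * i_ghost)

def power_ratio(n_true, n_ghost, i_true, i_ghost):
    """Power ratio PR (Eq. 7): as WL, with the intensity ratio squared."""
    return (n_true / n_ghost) * (i_true / i_ghost) ** 2

# Q is 1 for identical fields and is invariant to a global intensity scale.
rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))
```

The scale invariance of Q is the reason the metric remains comparable across participants with different intensity normalizations.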
The number of true and ghost particles is reported in Fig. 27 for the full volume and for the central part, obtained by cutting 300 voxels from the borders in the x and y directions. The participants are sorted by increasing percentage of ghost particles in the full volume. For all the participants except LaVision and IoT, the particles are identified directly on the provided volumes. Since the volumes submitted by LaVision and IoT were saturated at the top intensity level, it was not possible to perform the same operation on the original volumes; for this reason, their analysis is conducted on the interpolated volumes also used for the Q factor calculation. The values are expressed as a percentage of the total number of true particles in the exact distribution. In some of the provided reconstructions, a significant percentage of true particles is lost in the process (see, for instance, ASU and LANL). This can be attributed to different causes. ASU uses a direct method, which tends to lose particles when particle images overlap. LANL uses the same direct method, but in addition particles are removed to match the image density. This procedure can be risky, since the intensity distributions of true and ghost particles might not be statistically separated, and thus the weakest true particles might be erroneously removed.

Additionally, note that the highest values of Q are achieved by the participants using a Gaussian smoothing during the reconstruction process, as proposed by Discetti et al. (2013). While this smoothing is not expected to change significantly the number of ghost and true particles, it regularizes the solution and redistributes the intensity from the reconstruction artifacts to the true particles, thus improving the quality of the reconstruction.
The quality factor evaluated according to Eq. 5 is reported in Fig. 29. Participants using PTV-based methods (DLR) cannot be included. With the only exceptions of LaVision and ONERA, all the participants provided reconstructions with \(Q < 0.75\), which is commonly considered the minimum value for an acceptable reconstruction. It is worth stressing that Q is evaluated on the volumes at 0.075 ppp, while some of the participants decided to use a lower image density to extract the velocity fields (ASU, IPP); their effective quality factor might thus be larger. Again, algebraic methods suffer from border effects, while direct methods are relatively insensitive to this issue.
7.5 Displacement field

The following quantities are evaluated to assess spatial resolution and accuracy:

- 3D modulation transfer function (MTF) as a function of the discrete wavelengths \(\lambda _{vort}\) in the region of the 3D vortices;
- 2D MTF as a function of the wavelength \(\lambda _\mathrm{jet}\) in the region of the sinusoidal jet;
- cutoff wavelengths at which the MTFs drop below 0.8;
- total, random and systematic errors (indicated with \(\delta\), \(\sigma\) and \(\beta\), respectively);
- wavelengths at which the total error becomes larger than 0.25 voxels at 20 vox/mm;
- profile of the response to the step function: step amplitude and equivalent wavelength corresponding to the slope of the detected step variation.
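As a rough analytic reference for the cutoff criterion (not the participants' measured MTFs), the linear response of a plain top-hat moving-average window of size W is \(\sin(\pi W/\lambda)/(\pi W/\lambda)\); the wavelength at which it drops to 0.8 can be bracketed by bisection. A sketch, assuming this idealized window model:

```python
import numpy as np

def tophat_mtf(wavelength, window):
    """Amplitude response of a top-hat (moving-average) window of size `window`."""
    x = np.pi * window / np.asarray(wavelength, dtype=float)
    return np.sin(x) / x

def cutoff_wavelength(window, level=0.8):
    """Smallest wavelength at which the top-hat MTF still reaches `level`.

    The MTF increases monotonically with wavelength on the bracket below,
    so plain bisection is sufficient.
    """
    lo, hi = window, 20.0 * window
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if tophat_mtf(mid, window) < level:
            lo = mid
        else:
            hi = mid
    return hi

# For a 32-voxel window, the 0.8 cutoff falls near 2.78 window sizes.
lam_c = cutoff_wavelength(32.0)
```

Real algorithms with window overlap and deformation iterations respond better than this worst-case model, which is why the measured cutoffs differ between participants.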
7.5.1 3D vortices
Interrogation volume sizes used by the participants, expressed in voxels at 20 vox/mm
Team  W (vox) 

ASU  20 
BUAA  24 
Dantec  33 
DLR  32 
Hacker  32 
IoT  28 
IPP  47 
LANL  16 
LaVision  24 
ONERAPTV  – 
ONERA  24.4 
TUD  32 
7.5.2 Jet
7.5.3 Step function
The equivalent wavelength (extracted with the procedure of Fig. 37) and the random error \(\sigma\) around the maximum and minimum values of the step are presented in the form of scatter plots in Fig. 39. Both quantities are expressed in voxels at the reference resolution of 20 vox/mm. The best performances are achieved by the teams placed in the bottom-left corner of the scatter plots (DLR, LaVision, ONERA, ONERA-PTV). ONERA-PTV does not appear on the right scatter plot due to the large error induced by undetected outliers. Some participants' results are characterized by very different equivalent wavelengths on the two steps (for instance, IPP and TUD); the reason lies in overshooting, which is present on only one of the two components. For almost all the other participants, the equivalent wavelength obtained from the two steps is consistent.
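One plausible reduction of a smoothed step to an equivalent wavelength (an assumed definition; the exact procedure of Fig. 37 is not reproduced here) is to match the maximum slope of the measured profile to that of a sine with the same peak-to-peak amplitude:

```python
import numpy as np

def step_metrics(x, f, plateau_frac=0.2):
    """Step amplitude and equivalent wavelength of a smoothed step profile.

    A sine with peak-to-peak amplitude dA and wavelength lam has maximum
    slope pi*dA/lam, so matching slopes gives lam_eq = pi*dA/max_slope.
    """
    n = max(1, int(plateau_frac * len(x)))
    amplitude = f[-n:].mean() - f[:n].mean()     # plateau-to-plateau difference
    max_slope = np.abs(np.gradient(f, x)).max()
    return amplitude, np.pi * np.abs(amplitude) / max_slope

# tanh(x/w) has amplitude 2 and maximum slope 1/w, so lam_eq -> 2*pi*w.
w = 3.0
x = np.linspace(-60.0, 60.0, 4001)
amp, lam_eq = step_metrics(x, np.tanh(x / w))
```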
7.6 Conclusions of test case C

- Artificial additional operations (apart from triangulation/algebraic reconstruction) are beneficial when using both exposures, as in the MTE-based reconstruction algorithms or in the Shake-the-Box method. Methods based on removing particles via thresholding appear less effective, due to the overlap between the statistical intensity distributions of true and ghost particles.

- As a general guideline, since the quality of the reconstruction might have dramatic effects on the final outcome of the process, it is recommended not to push toward high particle image densities. Slightly reducing the particle image density below the typically used value of \(0.05\,{\mathrm {ppp}}\) can, at the present state of the art, be very rewarding in terms of reconstruction quality while only weakly reducing the spatial resolution. Nevertheless, careful analysis of the reconstructed distributions can still provide an advantage, since some participants were able to perform better than Hacker even without working on the exact volumes.
8 Case D
8.1 Case description
The measurement of the underlying turbulent cascade mechanism at sufficiently high Reynolds number is a challenge for both computational and experimental fluid mechanics in terms of the required spatial and dynamic velocity range. The establishment of accurate three-dimensional three-component anemometry techniques is of fundamental importance both to characterize the evolution and organization of turbulent coherent structures and to provide a benchmark for numerical codes. In test case D, a 3D PIV experiment is simulated by imposing the velocity field of a direct numerical simulation (DNS) of an isotropic incompressible turbulent flow. The velocity fields and particle trajectories are directly extracted from the turbulence database of the group of Prof. Meneveau at Johns Hopkins University (http://turbulence.pha.jhu.edu/); details on the database and how to use it are provided by Perlman et al. (2007) and Li et al. (2008).
Relevant turbulent scales in the DNS domain, in physical space and in voxel space

Quantity  DNS domain  Physical space (mm)  Voxel space

Taylor-scale Reynolds number  433
Integral length scale  1.376  44.85  897 vox (20 vox/mm)
Taylor length scale  0.118  3.85  77 vox (20 vox/mm)
Kolmogorov length scale  0.00287  0.09  1.9 vox (20 vox/mm)
8.2 Algorithms
Algorithms used by the participants to test case D
Team  Calibration  Reconstruction  PIV processing  Postprocessing 

ASU  Pinhole camera model Tsai (1987)  Image Segmentation (Ding and Adrian 2014) at 20 vox/mm. Intensity distributions smoothed with Gaussian filter (\(3 \times 3 \times 3\), std 0.6)  Planar 2D multipass image deformation method on xz and xy planes. Interrogation volume side 1.0 mm  Triplepulse PIV formula optimization (Ding et al. 2013), Circulation method by Landreth and Adrian (1990) 
BUAA  Third-order polynomial mapping in x and y, and first order in z  10 Intensity-enhanced MART (inverse diffusion equation) iterations at 20 vox/mm. Gaussian smoothing of the original images (\(3 \times 3\), std 0.5). Gaussian smoothing between the iterations (Discetti et al. 2013)  Volume deformation algorithm, 4 iterations (predictor with window shift). Interrogation volume side 1.6 mm  Gaussian smoothing (\(5 \times 5 \times 5\), std 0.6), second-order central difference scheme for vorticity
DLR  Two-plane model Wei and Ma (1991) with optical transfer function Schanz et al. (2013b)  3D particle identification and StB (Schanz et al. 2016). Search for matching particles in the first 4 frames. Trajectory initialization with tracking (tomo-PIV used as predictor). Optimized particle identification with positions predicted from trajectories and IPR (Wieneke 2013)  Particle tracking over the entire set  Second-order spline curve to fit data on a regular grid. Penalization of divergence and high spatial frequencies
Hacker  Perfect  Perfect  Iterative volume deformation with a top hat moving average approach (Astarita 2006). Interrogation volume side 1.6 mm  None 
IoT  Direct Linear Transformation  15 SMART iterations after initialization with MLOS followed by 5 MTE iterations, performed on 2 objects, each composed of 15 SMART (Novara et al. 2010) at 20 vox/mm. Volume extended by 6.4 mm on z to minimize edge effects  Iterative volume deformation. Interrogation volume side 1.6 mm  Fields averaged on 3 consecutive couples (pyramid scheme). Moving average filter \(3 \times 3 \times 3\) to identify and replace outliers on edges. Gradient correction technique to reduce the divergence 
IPP  Pinhole camera model Tsai (1987)  8 BIMART Byrne (2009) iterations, block size 4, relaxation parameter 0.38 at 20 vox/mm  FTEE Jeon et al. (2014). Interrogation volume side 2.0 mm Gaussian windowing \((\alpha = 2)\), secondorder polynomial trajectory using 9 images  None 
LANL  Direct linear transformation  Optimized MLOS with exponent 1 (La Foy and Vlachos 2011) at 20 vox/mm. Particle suppression in order to match the ppp of the images. Background suppression (1 % of maximum) and Gaussian smoothing  Iterative volume deformation with Motion Tracking Blanking and Robust Phase Correlation (Eckstein et al. 2008). Interrogation volume side 2.0 mm (effective side 1.0 mm), Gaussian windowing  Gaussian smoothing (std 1.25) and multiframe analysis with FTC; 4th-order noise-optimized compact Richardson scheme for vorticity
LaVision  Pinhole camera model Tsai (1987)  14 SMART iterations after initialization with MLOS followed by 5 MTE iterations each composed of 14 SMART (Novara et al. 2010). MTE performed on 7 objects at 20 vox/mm. Gaussian smoothing \(3 \times 3 \times 3\) between iterations (Discetti et al. 2013)  Iterative volume deformation with fast direct cc (Discetti and Astarita 2012). Interrogation volume side 1.6 mm. Gaussian windowing. Fluid Trajectory Correlation (Lynch and Scarano 2013), 11 steps, secondorder polynomial fit  Polynomial fit of second order to \(3 \times 3 \times 3\) vectors for the vorticity 
TUD  Third-order polynomial mapping in x and y, and second order in z Soloff et al. (1997)  5 MART iterations followed by 2 MTE iterations each composed of 5 MART (Novara et al. 2010). MTE performed on 3 objects at 20 vox/mm. Gaussian smoothing \(3 \times 3 \times 3\) between the iterations (Discetti et al. 2013)  Iterative volume deformation with fast direct cc (Discetti and Astarita 2012). Interrogation volume side 1.6 mm. Gaussian windowing. Fluid Trajectory Correlation (Lynch and Scarano 2013), 7 steps, second-order polynomial fit  Vorticity with second-order central differences
8.3 Reconstruction
8.4 Displacement field
8.4.1 Turbulent flow features
8.4.2 Temporal history
8.4.3 Turbulent spectra and spatial resolution
The results are reported in Fig. 51 in the form of scatter plots correlating \(F_Q\) with the quality of the reconstruction or the power ratio. In contrast to the SEF case, which was calculated mainly on the large and medium scales, there is a significant difference between the performance of the Shake-the-Box method (DLR) and the multiframe reconstruction-correlation with MTE and FTC (LaVision and TUD). Furthermore, the gap between BUAA, IoT and IPP is reduced; this is possibly an indication that, unless the full time information is used in the entire process, the differences in small-scale measurement among algebraic methods are not significant. From the scatter plots of Fig. 51, the same groups as in Fig. 50 can be highlighted. Similarly to the case of the SEF, there is a correlation with the quality of the reconstruction, but it is quite weak. BUAA, IoT and IPP provide very similar results in terms of \(F_Q\) with substantially different reconstruction performance. In order to achieve a perceptible improvement in performance, a significant jump in reconstruction quality/power ratio is required.
8.4.4 Measurement accuracy assessment
The scatter plot of Fig. 53 demonstrates that a correlation between the normalized divergence and the measurement error exists, but with some words of caution. While the teams that used standard two-frame or multiframe implementations of tomographic PIV based on iterative algebraic methods (BUAA, IPP, LaVision, TUD) are quite well aligned in the scatter plot (with the exception of IoT, which modified the divergence in postprocessing), the teams working with direct methods obtain results with a low level of divergence compared with the total error. Furthermore, ASU and LANL used similar processing algorithms and achieved very similar performance according to several of the adopted metrics, yet the standard deviations of their divergence are remarkably different. Therefore, the divergence criterion may have some utility in assessing the accuracy of algebraic methods in 3D PIV (under the condition, of course, of not being tricked), but its use on data obtained with direct methods is less advisable.
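The divergence statistic itself is straightforward to compute; a minimal sketch with central differences on a regular grid (the axis ordering is an assumption of this sketch), exploiting the fact that an incompressible flow should give near-zero divergence:

```python
import numpy as np

def divergence(u, v, w, spacing=1.0):
    """Divergence of a 3C velocity field on a regular grid indexed [ix, iy, iz]."""
    return (np.gradient(u, spacing, axis=0)
            + np.gradient(v, spacing, axis=1)
            + np.gradient(w, spacing, axis=2))

# A solenoidal field (u, v, w) = (y, z, x) has zero divergence everywhere,
# while the dilatational field (x, y, z) has divergence exactly 3.
x, y, z = np.meshgrid(*3 * [np.arange(8.0)], indexing="ij")
div_sol = divergence(y, z, x)
div_dil = divergence(x, y, z)
```

For measured data, the standard deviation of the divergence field is the quantity compared against the total error in Fig. 53.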
8.5 Conclusions

- In general, it is difficult to find a correlation between the power ratio, the quality factor and the different metrics introduced for the error (spectral energy fraction, correlation factor of the second invariant of the velocity gradient tensor, error, etc.). However, a broad correlation can be stated with how refined the full analysis algorithm is.

- The divergence test is interesting, provided it is not “tricked” by algorithms that artificially reduce the divergence of the measured fields. The relationship between measurement error and the standard deviation of the divergence is approximately linear within a range of relatively small total error (up to about 0.5 voxels). Optimizing the measured velocity fields using physical criteria nevertheless seems beneficial in some cases.

- The poor reconstruction performance of direct methods affects the resolution at the medium-to-large frequencies, but the main flow field features are still captured, even though the provided images were characterized by a relatively large particle image density. This is due to the limited velocity gradients along the depth direction. In such situations, direct methods can be very appealing for providing a predictor or a very fast preview of the reconstructed flow field, owing to their intrinsic simplicity.

- The exploitation of time coherence in both the reconstruction and the displacement estimation considerably improves the spatial resolution. As predicted, enforcing time coherence in both steps provides the best results, and intense cross-talk between the two steps of the process is very beneficial (as in the Shake-the-Box method). Furthermore, PTV-based methods with oriented particle search and refinement seem to outperform classical algebraic reconstruction plus cross-correlation methods, at least on noise-free synthetic images.
9 Case E
9.1 Case description and measurements
Case E aimed to evaluate the state of the art of planar stereoscopic PIV measurements using real experimental data. The experiment that generated the case E data was performed at Virginia Tech's Advanced Experimental Thermofluids Research Laboratory (AEThER Lab) and was based on time-resolved measurements of an impulsively started axisymmetric vortex ring flow. The acquired experimental images were purposefully subject to real-world sources of error that are representative of stereo experiments and difficult to model in simulated images. These error sources included focusing effects, diffraction and aberration artifacts, forward- and backward-scattering intensity variation, sensor noise, and calibration error.
Data were acquired simultaneously from five cameras in such a way that any combination of two out of the five cameras could yield a stereoscopic measurement data set. Hence, the experiment delivered a range of data sets with different levels of accuracy including a case for which an optimum stereoscopic configuration was approximated. In addition, the data set was processed as a threedimensional tomographic PIV measurement that was then used as a reference for evaluating the accuracy of the stereoPIV measurements. This comparison was based on the notion that a tomographic PIV system measures the threedimensional velocity field within a laser sheet more accurately than traditional stereoPIV measurements (Michaelis and Wieneke 2008; Wieneke and Taylor 2006; Elsinga et al. 2006a).
A high resolution, low error estimate of the velocity field had to be created from the tomographic data to compare with the participant stereoPIV velocity field submissions. This “ground truth” solution was used to complete the error analysis; however, since the data were collected from an experiment, the true velocity field within the physical fluid was unknown. Thus while the ground truth solution represents a more accurate measurement of the true field compared to what can be provided by stereoPIV, it is still subject to measurement errors.
The threedimensional intensity field was calculated using LaVision DaVis 8.1.6. The images were preprocessed using background subtraction and particle intensity normalization. A thirdorder polynomial camera calibration (Soloff et al. 1997) was fit to the calibration grid images and then refined using volumetric selfcalibration (Wieneke 2008). The threedimensional intensity field was then reconstructed using 10 MART iterations (Elsinga et al. 2006b). Following this, the intensity field was used to calculate the threedimensional velocity field and the position of the laser sheet.
Processing parameters used in each pass of the ground truth solution calculation used for the case E data set
Pass  Grid spacing  Window resolution  Window dimension  Correlation  Multiframe method  Validation  Smoothing 

1  32 × 32 × 16  64 × 64 × 32  128 × 128 × 64  RPC  Pyramid  UOD, threshold  Gaussian
2  24 × 24 × 16  48 × 48 × 32  96 × 96 × 64  RPC  Pyramid  UOD, threshold  Gaussian
3  12 × 12 × 8  48 × 48 × 32  96 × 96 × 64  RPC  Pyramid  UOD, threshold  Gaussian
The next step in calculating the ground truth velocity field was to determine the precise location of the laser sheet within the three-dimensional reconstruction. This is important for two reasons. First, the three-dimensional velocity field must be interpolated onto the two-dimensional coordinate grid to yield data comparable to the stereo data. Second, stereo-PIV measurement error is decreased by performing self-calibration; however, the self-calibration process maps the velocity field from the calibration grid coordinate system onto a different coordinate system based upon the laser sheet position. Thus, the coordinate systems used for measuring the velocity field in stereo-PIV and in tomographic PIV are different. Physically, this difference is due to the fact that the plane of the calibration grid and the plane of the laser sheet may not be parallel and may not intersect along a line passing through the calibration grid origin.

The transformation between these two coordinate systems corresponds to a rotation and a translation. This transformation changes the velocity components, but it changes neither the velocity magnitude nor any physical quantities derived from the velocity field, such as forces or stresses. The effect of this transformation is typically quite small; in this case, applying it changed the velocity vectors by a magnitude smaller than the expected PIV uncertainty. The transformation thus changed the ground truth solution only negligibly, but it was included for completeness.
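The component transformation under such a rotation can be sketched as follows (the angle and the field are illustrative, not the experimental values; magnitude preservation is the point):

```python
import numpy as np

def rotate_velocity(U, V, W, R):
    """Re-express the three velocity components in a rotated coordinate system.

    R is a 3x3 rotation matrix; the components change, the magnitude does not.
    """
    vel = np.stack([U, V, W], axis=-1)
    out = vel @ np.asarray(R).T
    return out[..., 0], out[..., 1], out[..., 2]

# Small rotation about z, e.g., a laser sheet slightly misaligned with the grid
t = np.deg2rad(5.0)
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
rng = np.random.default_rng(1)
U, V, W = rng.normal(size=(3, 121, 121))
U2, V2, W2 = rotate_velocity(U, V, W, R)
```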
The location of the laser sheet was calculated by summing the tomographically reconstructed intensity fields over all frames to produce a timeaveraged intensity field and then fitting a plane to the peak of the average intensity field. Once the precise location of the laser sheet within the reconstruction was determined, the velocity field was transformed onto the laser sheet coordinate system and interpolated onto the specified coordinates given to the participants using a cubic interpolation algorithm to yield the ground truth solution. The velocity components of the ground truth solution are denoted by \(U^{*}_{{\mathrm {true}}}\), \(V^{*}_{{\mathrm {true}}}\), and \(W^{*}_{{\mathrm {true}}}\).
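The plane fit to the time-averaged intensity peak can be sketched as a per-column peak search followed by a linear least-squares fit (a toy version with a synthetic tilted sheet; the grid sizes and the sheet equation are illustrative):

```python
import numpy as np

def fit_sheet_plane(intensity, x, y, z):
    """Fit a plane z = a*x + b*y + c through the per-(x, y) intensity peaks.

    intensity is indexed [ix, iy, iz]; the peak position along z is located
    for every (x, y) column and a least-squares plane is fitted to it.
    """
    iz = np.argmax(intensity, axis=2)            # peak index along z per column
    zp = z[iz]                                   # peak z coordinate per column
    X, Y = np.meshgrid(x, y, indexing="ij")
    A = np.column_stack([X.ravel(), Y.ravel(), np.ones(X.size)])
    coef, *_ = np.linalg.lstsq(A, zp.ravel(), rcond=None)
    return coef                                  # (a, b, c)

# Synthetic Gaussian sheet around the plane z = 0.05*x - 0.02*y + 1.0
x = y = np.arange(32.0)
z = np.linspace(-4.0, 8.0, 121)
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")
sheet = np.exp(-((Z - (0.05 * X - 0.02 * Y + 1.0)) ** 2))
a, b, c = fit_sheet_plane(sheet, x, y, z)
```

A subvoxel refinement of the per-column peak (as in the particle position estimation of case C) would further reduce the quantization of the fitted plane; the coarse argmax already recovers the tilt well.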
9.2 Evaluation of case E
A set of 100 images from Camera 1 and Camera 3 was provided to the participants for evaluation. The images were saved in an uncompressed 16-bit TIFF format with a resolution of 1024\(\times\)1024 pixels. Furthermore, calibration images were provided showing a grid translated to seven different Z-axis positions with a spacing of \(\Delta Z = 1 \, {\mathrm {mm}}\). The grid consisted of two levels spaced \(3 \, {\mathrm {mm}}\) apart containing a rectilinear pattern of \(3.2 \, {\mathrm {mm}}\) diameter dots. The dots were spaced \(15 \, {\mathrm {mm}}\) apart both horizontally and vertically.
The participants were required to submit a 2D-3C (two-dimensional/planar, three-component) velocity field measured at 121 \(\times\) 121 vector locations spaced \(\Delta X = \Delta Y = 0.45\, {\mathrm {mm}}\) apart. Vector fields from the central 50 image pairs were requested, allowing the participants to apply multiframe processing methods; however, the participants were free to choose any processing method to produce the velocity vectors. In addition, the participants were asked to provide the vorticity field evaluated on the same coordinate grid as the velocity field, as well as estimates of the vortex circulation at the 50 time steps. The vorticity and circulation results showed negligible variation between the participants and provided little additional insight; therefore, for the sake of brevity, a discussion of these results is omitted here.
9.3 Results
List of institutions that participated in the case E evaluation and their respective acronyms
Institution  Acronym 

Dantec Dynamics A/S  Dantec 
German Aerospace Center  DLR 
Institute of Thermophysics SB RAS  IOT 
LaVision GmbH  LaVision 
Los Alamos National Lab  LANL 
Hacker reconstruction cases and their respective acronyms
Stereo method  Cameras  Acronym 

Generalized (calibration)  1 and 3  HC13 
Generalized (calibration)  2 and 4  HC24 
Geometric  1 and 3  HG13 
Geometric  2 and 4  HG24 
List of processing parameters used by each team participating in case E
Institution  Dantec  DLR 

Calibration model  Third-order polynomial  First-order pinhole 
Image preprocessing  Normalized by local maximum  Sequence minimum subtraction, maximum intensity clipping 
Stereo method  Geometric  Geometric 
Multiframe method  None  Weighted sum of triple correlation displacements 
First pass window size  48 × 48  96 × 96 
Final pass window size  24 × 24  32 × 32 
Final pass mesh size  5 × 5  8 × 8 
Vorticity calculation  Fit second-order polynomial  Circulation method with 3 × 3 kernel 
Circulation calculation  Weighted sum of vorticity  Line integral 
Institution  IOT  LaVision 

Calibration model  Third-order polynomial  First-order pinhole 
Image preprocessing  Sequence minimum subtraction  Sliding background subtraction 
Stereo method  Generalized  Geometric 
Multiframe method  Weighted pyramid correlations  Weighted sum of correlations 
First pass window size  Camera 1: 40 × 40, Camera 3: 64 × 64  96 × 96 
Final pass window size  Camera 1: 20 × 20, Camera 3: 40 × 40  48 × 48 
Final pass mesh size  Camera 1: 5 × 5, Camera 3: 8 × 8  6 × 6 
Vorticity calculation  Least squares fit  Central difference 
Circulation calculation  Integration of vorticity  N/A 
Institution  LANL  Hacker 

Calibration model  Third-order polynomial  Third-order polynomial 
Image preprocessing  Subtraction of camera artifacts  Sequence minimum subtraction, intensity normalization 
Stereo method  Geometric  Geometric and generalized 
Multiframe method  Fluid trajectory correlation  Pyramid correlations 
First pass window size  32 × 32  64 × 64 
Final pass window size  16 × 16  32 × 32 
Final pass mesh size  4 × 4  4 × 4 
Vorticity calculation  Fourth-order compact noise-optimized Richardson  Fourth-order explicit noise-optimized Richardson 
Circulation calculation  Integration of vorticity  Integration of vorticity 
Using the ground truth solution obtained from the volumetric reconstruction, the absolute error of each measured velocity component was calculated as \(\Delta U^{*} = \left| U^{*} - U^{*}_{{\mathrm {true}}} \right|\), \(\Delta V^{*} = \left| V^{*} - V^{*}_{{\mathrm {true}}} \right|\) and \(\Delta W^{*} = \left| W^{*} - W^{*}_{{\mathrm {true}}} \right|\). The results are shown in Fig. 57a using bars to represent the mean absolute error and whiskers to show the corresponding standard deviation of the error. For all cases, the in-plane velocity errors were below 0.05 pixels, with DLR and IOT showing marginally lower errors than the other teams, approximately equal to the hacker cases. The other three teams (Dantec, LaVision and LANL) showed approximately equal in-plane errors, with only the Dantec V-component appearing higher. The corresponding standard deviations of the in-plane velocity components were comparable for all cases.
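The statistics of Fig. 57a can be sketched directly from these definitions; the measured and true fields below are hypothetical stand-ins on the requested 121 \(\times\) 121 grid.

```python
import numpy as np

# Sketch of the error statistics of Fig. 57a: bars show the mean absolute
# error per component, whiskers the standard deviation of the error.
# The fields are hypothetical, with ~0.03 px measurement noise.
rng = np.random.default_rng(2)
U_true = rng.uniform(-2.0, 2.0, (121, 121))                 # ground truth (px)
U_meas = U_true + rng.normal(0.0, 0.03, U_true.shape)       # noisy measurement

dU = np.abs(U_meas - U_true)            # per-vector absolute error
mean_abs_err = dU.mean()                # bar height
err_std = (U_meas - U_true).std()       # whisker length
```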
The out-of-plane W-component results exhibit more pronounced differences, with errors varying from approximately 0.05 pixels for the HC24 and HG24 cases to approximately 0.12 pixels for the Dantec and LANL cases. DLR and IOT produced the lowest errors, approximately equal to 0.07 pixels, while LaVision produced data with errors around 0.1 pixels. The amplification of the error of the out-of-plane component is anticipated and is captured by the error ratios \(e_r \left( U^{*}, W^{*} \right) = \mu \left( \Delta W^{*} \right) / \mu \left( \Delta U^{*} \right)\) and \(e_r \left( V^{*}, W^{*} \right) = \mu \left( \Delta W^{*} \right) / \mu \left( \Delta V^{*} \right)\). The error ratios of the participant data are shown in Fig. 57b and compared against the theoretically expected ratios based upon the geometric configuration of the cameras. The theoretical error ratio equals 1.2 for Cameras 2 and 4 and 2.9 for Cameras 1 and 3. Considering the processing parameters employed by each participant (Table 16) and the results of Fig. 57, it appears that the in-plane accuracy was affected only by the choice of the multiframe processing algorithm. The results also show that the choice of camera calibration model had relatively little impact on the error, since the data from DLR and IOT produced nearly identical errors although these teams used pinhole and polynomial calibration models, respectively. Moreover, the decision to use either the geometric or the generalized reconstruction algorithm did not have a noticeable impact on the results, since neither method produced significant differences in the measured errors in either the participant or the hacker data.
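The geometric origin of this amplification can be made concrete. For a symmetric stereo arrangement, the expected ratio of out-of-plane to in-plane error is \(1/\tan\theta\), with \(\theta\) the half-angle between the camera axes. The half-angles below are back-computed from the quoted ratios and are therefore assumptions, not values stated in the text:

```python
import math

# Expected out-of-plane/in-plane error ratio for a symmetric stereo setup.
# Half-angles are inferred from the quoted ratios (2.9 for cameras 1/3,
# 1.2 for cameras 2/4), not given in the source.
def error_ratio(half_angle_deg):
    return 1.0 / math.tan(math.radians(half_angle_deg))

theta_13 = math.degrees(math.atan(1 / 2.9))   # ~19 deg: shallow viewing angle
theta_24 = math.degrees(math.atan(1 / 1.2))   # ~40 deg: close to the ideal 45
```

The shallow viewing angle of Cameras 1 and 3 is thus what makes their out-of-plane component almost three times noisier than the in-plane components.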
9.4 Conclusions
The error analysis of the participant submissions and the hacker cases highlights several important considerations for stereo-PIV experiments. The most pronounced observation that emerged from the analysis is that collecting and processing high-quality planar 2C2D velocity field data will yield high-quality 3C2D data. This includes designing the experimental setup to collect PIV data under conditions as close to optimal as possible; for example, placing the cameras in an optimal stereo configuration was shown by the hacker cases HC24 and HG24 to produce lower out-of-plane errors. In addition, using the most advanced PIV algorithms available to process the two-dimensional data, in particular the multiframe methods, yielded lower in-plane and out-of-plane errors.
The error analysis also showed that the choice of camera calibration model and stereo reconstruction method has relatively little effect on the quality of the data, as long as the in-plane velocity error is already low. There was relatively little optical distortion in the data collected for the case E experiment; in more complicated experiments, however, using a higher-order camera calibration model such as the third-order polynomial model may still be beneficial. Furthermore, while camera self-calibration could not be directly addressed by case E, the results show that performing an accurate self-calibration is vital to producing high-quality stereo reconstructions, and it is therefore important to ensure that the collected calibration data are accurate. The results analyzed herein suggest that self-calibration may be the most critical element of stereo-PIV processing in the current state of the method.
10 Case F
10.1 Introduction
Images of particles with “known” displacement are important for estimating the error associated with PIV image analysis. While synthetic images are often used for this purpose (Kähler et al. 2012b), images of real particles are physically more accurate, provided the particle displacement field is precisely predetermined. For case F, images were obtained by recording particles in a liquid column rotating at a steady, uniform angular velocity. In this flow field, each Cartesian velocity component varies linearly with position, and the vorticity is uniform everywhere.
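The ground truth property exploited here is easy to verify numerically: for solid-body rotation \(u = -\omega y\), \(v = \omega x\), the vorticity \(\partial v/\partial x - \partial u/\partial y\) equals \(2\omega\) everywhere. A minimal check on a hypothetical grid:

```python
import numpy as np

# Solid-body rotation: u = -omega*y, v = omega*x; the vorticity
# dv/dx - du/dy is uniform and equal to 2*omega. Grid is hypothetical.
omega = 10 * np.pi / 9                       # rad/s (33 1/3 rpm)
x = np.linspace(-0.019, 0.019, 65)           # m, spanning a 38 mm field of view
X, Y = np.meshgrid(x, x)                     # 'xy' indexing: x varies along axis 1
u, v = -omega * Y, omega * X                 # solid-body rotation field

vorticity = np.gradient(v, x, axis=1) - np.gradient(u, x, axis=0)
```

Because the velocity field is linear in the coordinates, finite differences reproduce the vorticity exactly, which is what makes this flow a clean reference for PIV error estimation.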
10.2 Method
A plexiglass cylinder, 10 mm deep \(\times\) 65 mm in diameter, was filled with a water/glycerin solution containing silver-coated hollow-glass particles (SHGS10, \(10\,\upmu {\mathrm {m}}\), Dantec). The cylinder was then covered with a glass plate (#62606, BK7, 1/4” Edmund Scientific) to allow optical access without distortion of the image due to deformation of the liquid surface. The cylinder was placed on the turntable of an audio record player that spun at a constant speed of \(\omega = 33\,1/3\) rpm (\(= 10 \pi /9\) rad/s). After a few minutes, the flow inside the cylinder reached steady solid-body rotation. A laser beam produced by an Nd:YAG laser (EverGreen 70, 2 \(\times\) 70 mJ, 15 Hz, Quantel) was expanded by a cylindrical lens to form a light sheet approximately 4 mm thick. The light sheet illuminated a horizontal cross section of the fluid in the cylinder, and the particles were imaged on a CMOS camera (Fastcam SA3, 1024 \(\times\) 1024 pixels, Photron) through a lens (AF Nikkor 28–105 mm, Nikon). The focal length and f-number were set to 180 mm and 22, respectively, giving a field of view of 38 \(\times\) 38 mm. Trigger signals determining the timing of the camera exposure and the laser pulses were sent from a timing generator (Type 9618, 8 channels, Quantum Composers). While the exposure time of the camera was set to 8 ms, a frame-straddling scheme allowed the time interval between the two laser pulses to be set to \(\Delta t = 4\) ms. A total of 1000 image pairs were acquired in 100 s.
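A back-of-envelope check of these imaging parameters, under the stated rotation rate and magnification, shows that the particle displacement between the two pulses grows linearly with radius and stays at a few pixels even at the edge of the field of view:

```python
import numpy as np

# Displacement sanity check for the case F setup: in solid-body rotation the
# displacement between the two pulses is omega * r * dt, largest at the edge
# of the 38 x 38 mm field of view.
omega = 10 * np.pi / 9              # rad/s
dt = 4e-3                           # s, pulse separation
mm_per_px = 38.0 / 1024             # image magnification
r_edge = 19.0                       # mm, half of the field of view

disp_mm = omega * r_edge * dt       # tangential displacement at the edge
disp_px = disp_mm / mm_per_px       # roughly 7 px at the corners of the image
```

This order of magnitude is well matched to the 16 to 64 pixel interrogation windows commonly used for such recordings.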
While the flow inside the cylinder was expected to be stationary in the frame rotating with the turntable, a small amount of residual convection remained, which introduced an uncertainty in the displacement field of the particle images. This uncertainty was quantified as follows. The double-pulse laser was fired at \(t=0\) and \(t=\Delta t_{{\mathrm {rev}}}\), where \(\Delta t_{{\mathrm {rev}}} = 2 \pi / \omega\) is the period of one revolution of the cylinder, and the particle images were exposed on two independent image frames. The particle displacement between the images would be zero everywhere if convection were absent; the actual displacement, however, was \(\delta _{{\mathrm {rev}}} = 0.53 \pm 0.32\) pixels (mean \(\pm\) standard deviation). Thus, the displacement uncertainty in the particle images with the \(\Delta t = 4\) ms interval used for this study was \(\delta = \delta _{{\mathrm {rev}}} \Delta t / \Delta t_{{\mathrm {rev}}} = (1.2 \pm 0.7) \cdot 10^{-3}\) pixels.
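The scaling of this uncertainty is simple arithmetic: the residual displacement measured over one revolution is reduced by the factor \(\Delta t / \Delta t_{\mathrm{rev}}\):

```python
import math

# Check of the uncertainty arithmetic above: the residual-convection
# displacement measured over one revolution scales down by dt / dt_rev.
omega = 10 * math.pi / 9                 # rad/s
dt_rev = 2 * math.pi / omega             # s, one revolution (1.8 s)
dt = 4e-3                                # s, pulse separation used in case F

delta_rev_mean, delta_rev_std = 0.53, 0.32        # px, measured over dt_rev
delta_mean = delta_rev_mean * dt / dt_rev         # about 1.2e-3 px
delta_std = delta_rev_std * dt / dt_rev           # about 0.7e-3 px
```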
10.3 Evaluation of the image
Contributors of Case F and their evaluation parameters
Contributor  Image preprocessing  Interrogation scheme  Final interrogation window size  Iteration  Image interpolation scheme  Window shift direction  Correlation peak fitting  Image postprocessing 

F01  High-pass filter  FFT  \(32 \times 32\)  Multiple  B-spline 6th  Symmetric  Gaussian  None 
F02  None  FFT, DCC  \(24 \times 24\)  Multiple  Sinc \(8 \times 8\)  Symmetric  Gaussian  None 
F03  None  FFT  –  Multiple  Sinc \(8 \times 8\)  Symmetric  Gaussian  None 
F04  –  Robust phase correlation  \(48 \times 48\)  Multiple  Sinc \(8 \times 8\)  –  Gaussian  – 
F05  None  FFT  –  Multiple  B-spline  Symmetric  Gaussian  – 
F06  Histogram threshold: highest 10 % of values removed; zero-mean and std normalization  DCC  \(17 \times 17\)  Multiple  B-spline 3rd  Symmetric  Gradient  None 
F07  None  FFT  \(24 \times 24\)  Multiple  B-spline 3rd  Symmetric  Gaussian  – 
F08  None  DCC  –  Single  None  Asymmetric  Gaussian  None 
F09  None  DCC  \(32 \times 32\)  Multiple  Whittaker  Symmetric  Gaussian  – 
F10  None  Minimization of sum of squared errors  \(15 \times 15\)  Multiple  B-spline 5th  Symmetric  –  Interpolation by natural neighbors using Bézier–Bernstein patches 
F11  –  –  \(32 \times 32\)  –  –  –  –  – 
F12  –  FFT  –  Multiple  Biquadratic  Symmetric  Gaussian  – 
F13  –  –  \(16 \times 16\)  –  –  –  –  – 
F14  –  FFT  –  Multiple  Bicubic  Symmetric  Gaussian  – 
F15  None  Adaptive PIV  \(24 \times 24\)  Multiple  Bicubic  Symmetric  Gaussian  None 
F16  –  FFT  \(16 \times 16\)  Multiple  Bicubic  Symmetric  Gaussian  – 
F17  None  FFT  –  Multiple  Bilinear  –  Gaussian  Gaussian filter 
F18  None  FFT  \(16 \times 16\)  Multiple  Bicubic  Symmetric  Gaussian  None 
F19  –  FFT  \(16 \times 16\)  Multiple  None  Symmetric  Gaussian  – 
F20  –  DCC  –  Multiple  Modified Whittaker and splines  Asymmetric  Gaussian  None 
F21  Gaussian filter  FFT  \(16 \times 16\)  Multiple  –  Symmetric  Gaussian  None 
F22  Image enlarged to \(2048 \times 2048\)  DCC  \(8 \times 8\)  Multiple  Bilinear  Symmetric  Gaussian  Data reduced to half scale to match the original images 
F23  High-pass filter  LSM  \(17 \times 17\)  Multiple  –  –  –  – 
F24  –  DCC  \(32 \times 32\)  Single  None  –  Parabolic  None 
F25  None  Multipass cross-correlation  \(64 \times 64\)  –  –  –  –  Linear least squares filter 
F26  Particle intensity normalization filter  FFT  \(32 \times 32\)  Multiple  –  –  Gaussian  Smoothing 
F27  Histogram equalization  FFT  \(32 \times 32\)  Multiple  B-spline 5th  Symmetric  Gaussian  Low-pass filtering 
F28  None  FFT  \(32 \times 32\)  Multiple  B-spline 6th  Symmetric  Gaussian  Smoothing 
F29  High-pass filter; background subtraction; Gaussian filter to recover particles  FFT  \(32 \times 32\)  Multiple  –  –  Gaussian  Gaussian smoothing 
10.4 Results and discussion
11 Conclusion
The 4th International PIV Challenge attracted strong attention, as expressed by the high number of teams that were willing to evaluate the data and the large audience present at the workshop in Lisbon in 2014. Thanks to significant improvements, extensions and innovations since the last PIV Challenge, \(\upmu \hbox {PIV}\), time-resolved PIV, stereoscopic PIV and tomographic PIV are standard techniques for velocity measurements in transparent fluids. Today, displacement uncertainties as low as 0.05 pixel can be reached with a variety of evaluation techniques, provided the image quality and particle image density are appropriate and problems due to flow gradients and solid boundaries can be neglected. This implies that the quality of the equipment (laser, camera, lenses) is still an important factor for accurate PIV measurements. Surprisingly, the largest uncertainties are introduced by the user, who selects the evaluation parameters based on knowledge, experience and intuition. A solid understanding of the underlying principles of the technique, practical experience with the components involved and their alignment, and preliminary knowledge of the flow under investigation are essential to reach the low uncertainties mentioned above. Another general conclusion is that sophisticated PTV algorithms can outperform state-of-the-art PIV algorithms in terms of uncertainty. However, it is too early to predict whether this finding will initiate a transition from PIV toward PTV evaluation approaches in the long term or whether future evaluation strategies will combine both methods to minimize the uncertainty as much as possible.
Case A \((\upmu \hbox {PIV})\) illustrates that image preprocessing is very important and that its application can affect the results quite strongly. Therefore, care must be taken to achieve the desired quality of the results. Another remarkable finding is that the handling of boundaries is a predominant source of errors: the estimated width of the small channel varied by more than 20 % among the teams. Robust digital masking approaches are therefore needed to reduce the strong uncertainty induced by the human factor. Another important conclusion that can be drawn from the results is that care must be taken when integrating model-based approaches into the evaluation procedure. The results clearly show that making use of the no-slip condition at boundaries leads to the expected velocity decrease in the boundary layers even when the near-wall flow cannot be resolved. However, since the exact wall location is required to apply this condition, the aforementioned uncertainty in determining the location of the boundaries translates into large velocity uncertainties. For this reason, objective measures for the quantification of the PIV uncertainty need to be further developed. Instead of using model assumptions, it is also possible to reduce the uncertainty by using evaluation approaches with higher spatial resolution, such as single-pixel ensemble correlation or even particle tracking approaches.
Case B shows again that the different evaluation strategies lead to quite similar results if the basic design and operation rules of PIV are taken into account. However, when the evaluation parameters can be freely selected, as in evaluation 2, the human factor becomes essential and the results differ strongly. For instance, a PIV user who prefers smooth velocity fields will select different parameters than a user who prefers high spatial resolution. The use of multiframe evaluation techniques can greatly reduce the uncertainty, and with PTV algorithms the influence of the user decreases, because spatial resolution effects can be ignored. However, when multiframe evaluation techniques are used, other aspects need to be considered with care. In the case of strongly oversampled image sequences, as provided here, a damping of the amplitudes at higher frequencies is acceptable from the physical point of view. However, caution is advised not to suppress physically relevant frequencies: a field that is smooth in space and time does not necessarily imply a high evaluation accuracy.
Case C indicates that optimizing the particle image density for maximum spatial resolution with minimum loss of accuracy is the main challenge of 3D tomographic PIV. The results showed that the choice of an appropriate metric for the quality of the reconstruction is controversial, even for synthetic data. The commonly used criterion \(Q > 0.75\) is simple and straightforward, but it should be used with caution considering its high sensitivity to the particle image position and, more importantly, diameter. A particle/intensity-based approach is proposed for synthetic data, which seems more effective at capturing differences between the participants. In particular, the power intensity ratio provides a measure of the ’cross-correlation power’ of the particles, thus better highlighting the effects of ghost particles on the PIV analysis. This metric has the additional advantage of being simple and easily extended to PTV methods. Additional operations beyond plain triangulation/algebraic reconstruction are beneficial when both exposures are used, as in the MTE-based reconstruction algorithms or the shake-the-box method. Methods based on removing particles via thresholding appear less effective because the intensity distributions of true and ghost particles overlap. As a general guideline, it is recommended to avoid pushing toward high particle image densities.
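For reference, the quality factor \(Q\) discussed here is usually defined as the normalized cross-correlation between the reconstructed and the exact intensity distributions. A minimal sketch on hypothetical stand-in volumes:

```python
import numpy as np

# Sketch of the reconstruction quality factor Q: the normalized
# cross-correlation between reconstructed and exact intensity volumes
# (Q > 0.75 is the commonly used threshold). Volumes are hypothetical.
def quality_factor(E_rec, E_true):
    num = np.sum(E_rec * E_true)
    den = np.sqrt(np.sum(E_rec**2) * np.sum(E_true**2))
    return num / den

rng = np.random.default_rng(3)
E_true = rng.random((20, 20, 20))                  # exact particle intensity
E_good = E_true + 0.1 * rng.random(E_true.shape)   # faithful reconstruction
E_bad = 0.3 * E_true + rng.random(E_true.shape)    # dominated by ghost intensity

Q_good = quality_factor(E_good, E_true)
Q_bad = quality_factor(E_bad, E_true)
```

Note that even the ghost-dominated volume can score deceptively well, which illustrates the text's caution about relying on \(Q\) alone.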
Case D is a synthetic image sequence of a 3D tomographic PIV configuration portraying the velocity field of a direct numerical simulation of isotropic incompressible turbulence. The main challenge of this test case is to correctly evaluate the velocity gradient components in the presence of the many different length scales within the flow field. The results showed that it is difficult to find a correlation between the power ratio, the quality factor and the different error metrics introduced (spectral energy fraction, correlation factor of the second invariant of the velocity gradient tensor, error, etc.). However, there is a broad correlation with how refined the full algorithmic analysis is. Exploiting time coherence both in the reconstruction and in the displacement estimation considerably improves the spatial resolution, as observed in case B. As predicted, enforcing time coherence in both steps provides the best results, and a tight coupling between the two steps of the process is very beneficial (e.g., in the shake-the-box method). Furthermore, PTV-based methods with oriented particle search and refinement seem to overcome classical algebraic reconstruction and cross-correlation-based methods, at least on noise-free synthetic images.
Case E aimed to evaluate the state of the art of planar stereoscopic PIV measurements using real experimental data. The experimentally acquired images were purposefully subject to real-world sources of error that are representative of stereo experiments and difficult to model in simulated images. These error sources included focusing effects, diffraction and aberration artifacts, forward and backward scattering intensity variation, sensor noise and calibration errors. The most pronounced observation that emerged from the analysis is that collecting and processing high-quality planar 2C2D velocity field data will yield high-quality 3C2D data. The error analysis also showed that the choice of camera calibration model and stereo reconstruction method had relatively little effect on the quality of the data, as long as the in-plane velocity error was already low. Furthermore, the results show that performing an accurate self-calibration is vital to producing high-quality stereo reconstructions, and it is therefore important to ensure that the collected calibration data are accurate. The results analyzed herein suggest that self-calibration may be the most critical element of stereo-PIV processing in the current state of the method. However, it is important to bear in mind that self-calibration reduces the uncertainty of the velocity measurements at the cost of increasing the uncertainty of the vector location. Consequently, this approach may fail in the case of flows with strong gradients.
The results of case F showed clear differences in both bias and random errors among the contributors. Interpolation schemes such as sinc or B-spline tend to reduce the bias error, whereas bilinear and bicubic schemes produce a larger bias error. The advantage of the second-order accurate symmetric window-offset scheme over the asymmetric offset is evident in the mean velocity field. The random error was distributed within a range of 0.05–0.1 pixels in standard deviation for most contributors who did not apply smoothing during postprocessing.
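The bias/random split used in this comparison can be sketched directly; the displacements below are hypothetical, not contributor data.

```python
import numpy as np

# Sketch of the bias/random error decomposition used to compare the case F
# contributors: the bias is the mean of the signed displacement error, the
# random error its standard deviation. Displacements are hypothetical.
rng = np.random.default_rng(4)
d_true = rng.uniform(-7.0, 7.0, 10000)                     # px, ground truth
d_meas = d_true + 0.02 + rng.normal(0.0, 0.07, 10000)      # bias + noise

err = d_meas - d_true
bias_error = err.mean()          # systematic offset, ~0.02 px here
random_error = err.std()         # ~0.07 px, inside the quoted 0.05-0.1 range
```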
Acknowledgments
The authors would like to thank Springer Science+Business Media (www.springer.com), PCO AG (www.pco.de) and InnoLas Laser GmbH (www.innolaslaser.com) for their financial support, which made the workshop in Lisbon on July 5, 2014 possible. Without the participants of the 4th International PIV Challenge, this paper could not have been written, and we would like to thank all of them for their effort and the fruitful discussions.
References
 Adrian RJ (1997) Dynamic ranges of velocity and spatial resolution of particle image velocimetry. Meas Sci Technol 8:1393–1398CrossRefGoogle Scholar
 Adrian RJ, Westerweel J (2011) Particle image velocimetry. CambridgeGoogle Scholar
 Astarita T (2006) Analysis of interpolation schemes for image deformation methods in PIV: Effect of noise on the accuracy and spatial resolution. Exp Fluids 40:977–987CrossRefGoogle Scholar
 Astarita T (2007) Analysis of weighting windows for image deformation methods in PIV. Exp Fluids 43:859–872CrossRefGoogle Scholar
 Astarita T, Cardone G (2005) Analysis of interpolation schemes for image deformation methods in PIV. Exp Fluids 38:233–243CrossRefGoogle Scholar
 Barnkob R, Kähler CJ, Rossi M (2015) General defocusing particle tracking. Lab Chip 15:3556–3560CrossRefGoogle Scholar
 Brady MR, Raben SG, Vlachos PP (2009) Methods for digital particle image sizing (DPIS): Comparisons and improvements. Flow Meas Instrum 20:207–219CrossRefGoogle Scholar
 Byrne C (2009) Blockiterative algorithms. Int Trans Oper Res 16:427–463MathSciNetCrossRefzbMATHGoogle Scholar
 Cardwell ND, Vlachos PP, Thole KA (2010) A multiparametric particle pairing algorithm for particle tracking in single and multiphase flows. Meas Sci Technol 22:105406CrossRefGoogle Scholar
 Champagnat F, Cornic P, Cheminet A, Le Besnerais G, Plyer A (2014) Tomographic PIV: Particles versus blobs. Meas Sci Technol 25:084002Google Scholar
 Cheminet A, Leclaire B, Champagnat F, Plyer A, Yegavian R, Besnerais GL (2014) Accuracy assessment of a LucasKanade based correlation method for 3D PIV. In: 17th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 7–10Google Scholar
 Choi YS, Seo KW, Sohn MH, Lee SJ (2012) Advances in digital holographic microPTV for analyzing microscale flows. Opt Lasers Eng 50:39–45CrossRefGoogle Scholar
 Cierpka C, Kähler CJ (2012) Particle imaging techniques for volumetric threecomponent (3D3C) velocity measurements in microfluidics. J Viz 15:1–31Google Scholar
 Cierpka C, Rossi M, Segura R, Kähler CJ (2011) On the calibration of astigmatism particle tracking velocimetry for microflows. Meas Sci Technol 22:015401CrossRefGoogle Scholar
 Cierpka C, Lütke B, Kähler CJ (2013) Higher order multiframe particle tracking velocimetry. Exp Fluids 54:1533CrossRefGoogle Scholar
 Cornic P, Champagnat F, Cheminet A, Leclaire B, Besnerais GL (2013) Computationally efficient sparse algorithms for tomographic PIV reconstruction. In: 10th international symposium on particle image velocimetry—PIV13, Delft, The Netherlands, July 1–3Google Scholar
 Ding L, Adrian RJ (2014) Surface segmentation technique for tomographic PIV: Adaptive surface and iterative 2D interrogation. In: 17th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 7–10Google Scholar
 Ding L, Discetti S, Adrian RJ, Gogineni S (2013) Multiplepulse PIV: Numerical evaluation and experimental validation. In: 10th international symposium on particle image velocimetry—PIV13, Delft, The Netherlands, July 1–3Google Scholar
 Discetti S, Astarita T (2012) Fast 3D PIV with direct sparse crosscorrelations. Exp Fluids 55:1437–1451CrossRefGoogle Scholar
 Discetti S, Natale A, Astarita T (2013) Spatial filtering improved tomographic PIV. Exp Fluids 54:1505CrossRefGoogle Scholar
 Dudderar TD, Simpkins PG (1977) Laser speckle photography in a fluid medium. Nature 270:45–47CrossRefGoogle Scholar
 Eckstein A, Vlachos PP (2009) Digital particle image velocimetry (DPIV) robust phase correlation. Meas Sci Technol 20:055401Google Scholar
 Eckstein A, Charonko J, Vlachos PP (2008) Phase correlation processing for DPIV measurements. Exp Fluids 45:485–500CrossRefGoogle Scholar
 Eckstein A, Charonko J, Vlachos PP (2009) Assessment of advanced windowing techniques for digital particle image velocimetry (DPIV). Meas Sci Technol 20:075402CrossRefGoogle Scholar
 Elsinga GE, van Oudheusden BW, Scarano F (2006a) Experimental assessment of TomographicPIV accuracy. In: 13th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, June 26–29Google Scholar
 Elsinga GE, Scarano F, Wieneke B, van Oudheusden BW (2006b) Tomographic particle image velocimetry. Exp Fluids 41:933–947CrossRefGoogle Scholar
 Hain R, Kähler CJ (2007) Fundamentals of multiframe particle image velocimetry (PIV). Exp Fluids 42:575–587CrossRefGoogle Scholar
 Hain R, Kähler CJ, Tropea C (2007) Comparison of CCD, CMOS and intensified cameras. Exp Fluids 42:403–411CrossRefGoogle Scholar
 Jeon YJ, Chatellier L, David L (2014) Fluid trajectory evaluation based on an ensembleaveraged crosscorrelation in timeresolved PIV. Exp Fluids 55:1766CrossRefGoogle Scholar
 Kähler CJ, Scharnowski S, Cierpka C (2012a) On the resolution limit of digital particle image velocimetry. Exp Fluids 52:1629–1639CrossRefGoogle Scholar
 Kähler CJ, Scharnowski S, Cierpka C (2012b) On the uncertainty of digital PIV and PTV near walls. Exp Fluids 52:1641–1656CrossRefGoogle Scholar
 Kähler CJ, Scharnowski S, Cierpka C (2016) Highly resolved experimental results of the separated flow in a channel with streamwise periodic constrictions. J Fluid Mech 796:257–284CrossRefGoogle Scholar
 Karchevskiy MN, Tokarev MP, Yagodnitsyna AA, Kozinkin LA (2016) Correlation algorithm for computing the velocity fields in microchannel flows with high resolution. Thermophys Aeromech 22:745–754CrossRefGoogle Scholar
 Keane RD, Adrian RJ (1990) Optimization of particle image velocimeters. I. double pulsed systems. Meas Sci Technol 1:1202CrossRefGoogle Scholar
 Keane RD, Adrian RJ (1992) Theory of crosscorrelation analysis of PIV images. Appl Sci Res 49:191–215CrossRefGoogle Scholar
 Kelemen K, Crowther FE, Cierpka C, Hecht LL, Kähler CJ, Schuchmann HP (2015a) Investigations on the characterisation of laminar and transitional flow conditions after high pressure homogenisation orifices. Micro Nano 18:599–612Google Scholar
 Kelemen K, Gepperth S, Koch R, Bauer HJ, Schuchmann HP (2015b) On the visualization of droplet deformation and breakup during highpressure homogenization. Micro Nano 19:1139–1158CrossRefGoogle Scholar
 La Foy R, Vlachos PP (2011) Reprojection tomographic particle image velocimetry reconstruction. In: 9th international symposium on particle image velocimetry—PIV11, Kobe, Japan, July 1–3Google Scholar
 Landreth CC, Adrian RJ (1990) Measurement and refinement of velocity data using high image density analysis in particle image velocimetry. In: Adrian RJ et al (eds) Applications of laser anemometry to fluid mechanics. Springer, BerlinGoogle Scholar
 Levoy M, Ng R, Adams A, Footer M, Horowitz M (2006) Light field microscopy. ACM Trans Graph 25:924–934CrossRefGoogle Scholar
 Li Y, Perlman E, Wan M, Yang Y, Meneveau C, Burns R, Chen S, Szalay A, Eyink G (2008) A public turbulence database cluster and applications to study Lagrangian evolution of velocity increments in turbulence. J Turb 31:9zbMATHGoogle Scholar
 Lima R, Ishikawa T, Imai Y, Yamaguchi T (2013) Confocal microPIV/PTV measurements of the blood flow in microchannels. In: Micro and nano flow systems for bioanalysis. Springer, New York, pp 131–151Google Scholar
 Lindken R, Rossi M, Grosse S, Westerweel J (2009) MicroParticle Image Velocimetry \((\mu \text{ PIV })\): recent developments, applications, and guidelines. Lab Chip 9:2551–2567CrossRefGoogle Scholar
 Liu Z, Frijns AJH, Speetjens MFM, Steenhoven AA (2014) Particle focusing by AC electroosmosis with additional axial flow. Micro Nano 18:1115–1129CrossRefGoogle Scholar
 Lynch K, Scarano F (2013) A high-order time-accurate interrogation method for time-resolved PIV. Meas Sci Technol 24:035305
 Maas HG, Gruen A, Papantoniou D (1993) Particle tracking velocimetry in three-dimensional flows. Exp Fluids 15:133–146
 Maas HG, Westfeld P, Putze T, Boetkjaer N, Kitzhofer J, Brücker C (2009) Photogrammetric techniques in multi-camera tomographic PIV. In: 8th international symposium on particle image velocimetry—PIV09, Melbourne, Victoria, Australia, Aug 25–28
 Meinhart CD, Wereley ST, Santiago JG (1999) PIV measurements of a microchannel flow. Exp Fluids 27:414–419
 Meinhart CD, Wereley ST, Gray MHB (2000) Volume illumination for two-dimensional particle image velocimetry. Meas Sci Technol 11:809–814
 Michaelis D, Wieneke B (2008) Comparison between tomographic PIV and stereo PIV. In: 14th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 07–10
 Miozzi M (2004) Particle image velocimetry using feature tracking and Delaunay tessellation. In: 12th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 12–15
 Miozzi M (2005) Direct measurement of velocity gradients in digital images and vorticity evaluation. In: 6th international symposium on particle image velocimetry—PIV05, Pasadena, USA, Sept 21–23
 Nobach H, Honkanen M (2005) Two-dimensional Gaussian regression for subpixel displacement estimation in particle image velocimetry or particle position estimation in particle tracking velocimetry. Exp Fluids 38:511–515
 Novara M, Batenburg KJ, Scarano F (2010) Motion tracking-enhanced MART for tomographic PIV. Meas Sci Technol 21:035401
 Ogden RT (1997) Essential wavelets for statistical applications and data analysis. Birkhäuser, New York
 Park JS, Choi CK, Kihm K (2004) Optically sliced micro-PIV using confocal laser scanning microscopy (CLSM). Exp Fluids 37:105–119
 Perlman E, Burns R, Li Y, Meneveau C (2007) Data exploration of turbulence simulations using a database cluster. In: Supercomputing SC07, ACM, IEEE, Reno, USA, Nov 10–16
 Persoons T, O’Donovan TS (2010) High dynamic velocity range particle image velocimetry using multiple pulse separation imaging. Sensors 11:1–18
 Raffel M, Willert C, Wereley S, Kompenhans J (2007) Particle image velocimetry. Springer, Berlin
 Rapp C, Manhart M (2011) Flow over periodic hills: an experimental study. Exp Fluids 51:247–269
 Rohaly J, Frigerio F, Hart DP (2002) Reverse hierarchical PIV processing. Meas Sci Technol 13:984
 Rossi M, Segura R, Cierpka C, Kähler CJ (2012) On the effect of particle image intensity and image preprocessing on depth of correlation in micro-PIV. Exp Fluids 52:1063–1075
 Scarano F, Riethmuller ML (2000) Advances in iterative multigrid PIV image processing. Exp Fluids 29:S051–S060
 Schanz D, Gesemann S, Schröder A (2013a) Shake the box: a highly efficient and accurate tomographic particle tracking velocimetry (TOMO-PTV) method using prediction of particle positions. In: 10th international symposium on particle image velocimetry—PIV13, Delft, The Netherlands, July 1–3
 Schanz D, Gesemann S, Schröder A, Wieneke B, Novara M (2013b) Non-uniform optical transfer functions in particle imaging: calibration and application to tomographic reconstruction. Meas Sci Technol 24:024009
 Schanz D, Gesemann S, Schröder A (2016) Shake-The-Box: Lagrangian particle tracking at high particle image densities. Exp Fluids 57:70
 Scharnowski S, Kähler CJ (2016) Estimation and optimization of loss-of-pair uncertainties based on PIV correlation functions. Exp Fluids 57:23
 Scharnowski S, Hain R, Kähler CJ (2012) Reynolds stress estimation up to single-pixel resolution using PIV measurements. Exp Fluids 52:985–1002
 Schewe M (2014) Development and GPU based acceleration of PIV analysis software. Master’s thesis, University of Göttingen
 Schröder A, Schanz D, Michaelis D, Cierpka C, Scharnowski S, Kähler CJ (2015) Advances of PIV and 4D-PTV “Shake-The-Box” for turbulent flow analysis—the flow over periodic hills. Flow Turbul Combust 95:193–209
 Sciacchitano A, Scarano F, Wieneke B (2012) Multi-frame pyramid correlation for time-resolved PIV. Exp Fluids 53:1087–1105
 Seo KW, Lee SJ (2014) High-accuracy measurement of depth-displacement using a focus function and its cross-correlation in holographic PTV. Opt Express 22:15542–15553
 Soloff SM, Adrian RJ, Liu ZC (1997) Distortion compensation for generalized stereoscopic particle image velocimetry. Meas Sci Technol 8:1441–1454
 Stanislas M, Okamoto K, Kähler CJ (2003) Main results of the First International PIV Challenge. Meas Sci Technol 14:53–89
 Stanislas M, Okamoto K, Kähler CJ, Westerweel J (2005) Main results of the Second International PIV Challenge. Exp Fluids 39:170–191
 Stanislas M, Okamoto K, Kähler CJ, Westerweel J, Scarano F (2008) Main results of the Third International PIV Challenge. Exp Fluids 45:27–71
 Theunissen R, Scarano F, Riethmuller ML (2008) On improvement of PIV image interrogation near stationary interfaces. Exp Fluids 45:557–572
 Tien WH, Dabiri D, Hove JR (2014) Color-coded three-dimensional micro particle tracking velocimetry and application to micro backward-facing step flows. Exp Fluids 55:1–14
 Tsai RY (1987) A versatile camera calibration technique for high accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Rob Autom 4:323–344
 Unser M, Aldroubi A, Eden M (1993) B-spline signal processing: part II—efficient design and applications. IEEE Trans Signal Proc 41:834–848
 Wei GQ, Ma SD (1991) Two-plane camera calibration: a unified model. In: Conference on computer vision and pattern recognition, Hawaii, USA, June 3–9
 Wereley ST, Gui L (2003) A correlation-based central difference image correction (CDIC) method and application in a four-roll mill flow PIV measurement. Exp Fluids 34:42–51
 Wereley ST, Meinhart CD (2001) Second-order accurate particle image velocimetry. Exp Fluids 31:258–268
 Westerweel J (1997) Fundamentals of digital particle image velocimetry. Meas Sci Technol 8:1379–1392
 Westerweel J (2008) On velocity gradients in PIV interrogation. Exp Fluids 44:831–842
 Westerweel J, Scarano F (2005) Universal outlier detection for PIV data. Exp Fluids 39:1096–1100
 Westerweel J, Geelhoed PF, Lindken R (2004) Single-pixel resolution ensemble correlation for micro-PIV applications. Exp Fluids 37:375–384
 Westfeld P, Maas HG, Pust O, Kitzhofer J, Brücker C (2010) 3D least squares matching for volumetric velocimetry data processing. In: 15th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, July 5–8
 Wieneke B (2008) Volume self-calibration for 3D particle image velocimetry. Exp Fluids 45:549–556
 Wieneke B (2013) Iterative reconstruction of volumetric particle distribution. Meas Sci Technol 24:024008
 Wieneke B, Taylor S (2006) Fat sheet PIV with computation of full 3D-strain tensor using tomographic reconstruction. In: 13th international symposium on applications of laser techniques to fluid mechanics, Lisbon, Portugal, June 26–29
 Willert C (1997) Stereoscopic digital particle image velocimetry for application in wind tunnel flows. Meas Sci Technol 8:1465–1479
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.