Case description and measurements
PIV is a well-established technique even for the analysis of flows in systems whose effective dimensions are below 1 mm. However, the analysis of flows in microsystems differs from macroscopic flow investigations in many ways (Meinhart et al. 1999). Due to the volume illumination, the thickness of the measurement plane is mainly determined by the aperture of the imaging lens, and therefore out-of-focus particle images are an inherent problem in microscopic PIV investigations. Consequently, digital image preprocessing algorithms are essential to enhance the spatial resolution by filtering out the unwanted signal caused by out-of-focus particles. Furthermore, the low particle image density achievable in microfluidic systems at large magnification, large particle image displacements and strong spatial gradients require sophisticated evaluation procedures, which are less common in macroscopic PIV applications. To bring the micro- and macro-PIV communities together, a demanding microscopic PIV case, measured in a microdispersion system, was considered in this PIV Challenge.
Microdispersion systems are very important in the field of process engineering, in particular food processing technology. These systems produce uniform droplets in the sub-micrometer range if driven with pressure around 100 bar (Kelemen et al. 2015a, b). The basic geometry consists of a straight microchannel (\(80 \,\upmu \hbox {m}\) in depth) with a sharp decrease in cross section and a sharp expansion thereafter as illustrated in Fig. 3. Due to this sudden change in width (from 500 to \(80 \,\upmu \hbox {m}\)), the velocity changes significantly by about 230 m/s within the field of view. The strong convective acceleration stretches large primary droplets into filaments. Due to the normal and shear forces in the expansion zone, the filaments break up into tiny droplets which persist in the continuous phase of the emulsion (fat in homogenized milk for instance). The flow includes cavitation at the constriction due to the large acceleration, strong in-plane and out-of-plane gradients (Scharnowski and Kähler 2016) in the boundary layers of the channel and shear layers of the jet flow and, in case of two-phase fluids, droplet-wall and droplet–droplet interaction. Thus, the flow characteristics are difficult to resolve due to the large range of spatial and velocity scales.
To characterize the single-phase flow, a \(\upmu \hbox {PIV}\) experiment was performed at the Bundeswehr University Munich within the DFG (German Research Foundation) research group 856 (Microsystems for Particulate Life-Science-Products). The channel was driven by a pressure of 200 bar. Fluorescent polystyrene particles with a diameter of \(1 \,\upmu \hbox {m}\) were added to the flow and imaged by a Zeiss Axio Observer microscope with a 20\(\times\) magnification lens. To capture the flow, an sCMOS camera with an interframing time of 120 ns was used. The particles were illuminated using a Litron Nano-PIV double-pulse Nd:YAG laser with 4 ns pulse length. 600 double-frame images were recorded. The field of view covers approximately \(1400 \times 600\,\upmu {\text {m}}^2\) on \(2560 \times 1230\) pixels. The main challenges of this experiment are:
- Extremely large velocity gradients (Keane and Adrian 1990; Westerweel 2008)
- High dynamic velocity range (Adrian 1997)
- Depth of correlation
- Optical aberrations due to the thick window
- Low signal-to-noise ratio (SNR) and cross-talk
- Cavitation
The time-averaged flow field is quite uniform, with small velocities, prior to the inlet of the channel, see (1) in Fig. 3. Toward the inlet, the flow is highly accelerated (2) and a fast channel flow develops with very strong velocity gradients close to the walls (3). Cavitation occurs at different places, especially at the inlet of the small channel in regions of very high velocities (2). This can already be seen in the images. Taking an average over images 1–250, cavitation bubbles can be seen at both sides of the inlet as shown in Fig. 4 (left). Later, these cavitation bubbles diminish, as can be seen by averaging images 300–600, see Fig. 4 (right). Due to this effect, strong velocity changes are expected in this area as the flow alternates between two states, namely one with and one without cavitation bubbles. It is evident that this velocity change results in strong local Reynolds stresses on average, although both flow states are fully laminar. The Reynolds stresses are simply an effect of the change in flow state, which results in a measured mean flow that does not exist at any instant in the experiment.
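The apparent Reynolds stress produced by switching between two laminar states can be illustrated with a toy signal. The displacement values and the mode probability below are assumed for illustration only, loosely modeled on the intermittent cavitation at the inlet:

```python
import numpy as np

# Hypothetical illustration: a bimodal displacement signal switching between
# a cavitating state (u ~ 0 px) and a non-cavitating state (u ~ 45 px).
# Both states are steady, yet the ensemble statistics show large "fluctuations".
rng = np.random.default_rng(0)
n = 600
cavitating = rng.random(n) < 0.4          # assumed fraction of cavitating recordings
u = np.where(cavitating, 0.0, 45.0)       # displacement in pixels (assumed values)

u_mean = u.mean()
u_rms = u.std()

# For a two-state process the rms follows analytically:
#   u_rms = |u1 - u2| * sqrt(p * (1 - p)),  p = probability of state 1
p = cavitating.mean()
u_rms_analytic = 45.0 * np.sqrt(p * (1 - p))
```

The rms is large even though neither state fluctuates, which is exactly the effect described above.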
At the outlet of the small channel, a free jet with a very thin shear layer develops, indicated by (4). This jet tends to bend toward one wall because of the 2D geometry. The preferred position might change for subsequent experiments. However, the jet remains attached to one wall for the rest of the experiment and a recirculation region forms (5). The flow field therefore features a high dynamic spatial range (DSR) and a very high dynamic velocity range (DVR) between the fast channel flow and the almost stagnant and reverse flow regions. Since the aim of the current experiment was to quantify the whole flow field at once, the magnification was chosen such that, at the smallest interframing time of 120 ns, the resulting displacements can still be processed. Increasing the magnification further was not possible, because the displacements at the minimum interframing time of 120 ns would then have been too large and the velocity evaluation would not have been possible.
Image quality for \(\upmu \hbox {PIV}\)
In contrast to macroscopic PIV, the image quality in microscopic imaging is often poor due to low light intensities, small particles and optical aberrations. As is typical for cameras with a CMOS architecture, the gain may differ from pixel to pixel, which results in so-called hot and cold pixels (Hain et al. 2007). Hot pixels show a very large intensity, which results in a bright spot in the image, whereas cold pixels have a low or even zero intensity, which results in a dark spot.
These hot and cold pixels must be taken into account during the evaluation process to avoid bias errors. If not removed, they cause strong correlation peaks at zero velocity due to their high intensity. Usually, this effect can be corrected by subtracting a dark image already during the recording. However, sometimes this does not work sufficiently well, especially when cameras are new on the market. Therefore, it was decided to keep these hot pixels (with constantly high intensity values) and cold pixels (with constantly low gain and low or zero intensity value) in the images and leave it up to the participant to correct for them, instead of using the internal camera correction. Furthermore, the signal-to-noise ratio (SNR) is very low due to the use of small fluorescent particles as well as the large magnification. In the current case, the ratio of the mean signal to background intensity was below two. In some images, blobs of high intensity that result from particle agglomerations can also be seen.
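One possible treatment of hot and cold pixels exploits the temporal statistics of the full recording: defective pixels are nearly constant in time but far from the local intensity level. The following is a minimal sketch; the thresholds are illustrative assumptions, not values used by any of the teams:

```python
import numpy as np

def fix_defective_pixels(stack, hot_thresh=0.95, var_thresh=1e-3):
    """Detect and repair hot/cold pixels in an image stack (n_images, h, w).

    Pixels whose intensity is nearly constant over time (tiny temporal
    variance) and either near the maximum or near zero are treated as
    defective and replaced by the median of their 3x3 neighborhood.
    Thresholds are illustrative assumptions.
    """
    t_var = stack.var(axis=0)
    t_mean = stack.mean(axis=0)
    peak = stack.max()
    # constant pixels that are either saturated-bright or (near) zero
    defective = (t_var < var_thresh) & (
        (t_mean > hot_thresh * peak) | (t_mean < 0.05 * peak)
    )
    fixed = stack.copy()
    h, w = t_mean.shape
    ys, xs = np.nonzero(defective)
    for y, x in zip(ys, xs):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        for i in range(len(fixed)):
            fixed[i, y, x] = np.median(fixed[i, y0:y1, x0:x1])
    return fixed, defective
```

A dark-image subtraction, as mentioned above, removes the same artifacts at recording time; the temporal-statistics route is useful when only the raw recording is available.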
An additional problem was the time delay between successive frames which was set to the minimal interframing time of 120 ns. Due to jitter in the timing, cross-talk between the frames can be observed in some images, i.e., particle images of both exposures appear in one frame.
The pairwise occurrence of particle images with the same separation in a single frame indicates a double exposure with almost the same intensity. These images can be identified by the average intensity of the individual frames, which is, in the case of double exposure, significantly higher than the average. In total, only 6 out of the 600 images show significant cross-talk. Strategies to cope with this problem might include image preprocessing or simply the identification and exclusion of these images. However, if not taken into account, high correlation values at zero velocity bias the measurements.
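The identification of double-exposed recordings via the average frame intensity can be sketched as follows. The robust threshold used here is an assumption for illustration, not the criterion applied by any team:

```python
import numpy as np

def flag_crosstalk(frames_a, frames_b, n_sigma=3.0):
    """Flag double-exposed recordings via the mean frame intensity.

    A frame containing particle images from both laser pulses has a mean
    intensity well above the ensemble average. The median/MAD threshold
    below is an assumed, robust criterion for illustration.
    Returns a boolean array (True = suspected cross-talk).
    """
    mean_a = frames_a.mean(axis=(1, 2))
    mean_b = frames_b.mean(axis=(1, 2))
    total = mean_a + mean_b
    ref = np.median(total)                          # robust reference level
    mad = np.median(np.abs(total - ref)) * 1.4826   # robust sigma estimate
    return total > ref + n_sigma * mad
```

Flagged image pairs can then simply be excluded from the ensemble, which is one of the strategies mentioned above.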
In comparison with macroscopic PIV applications, the particle image density is typically lower due to the large magnification. To enhance the quality of the correlation peak and the resolution, ensemble correlation approaches are typically applied (Westerweel et al. 2004).
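The ensemble (averaged) correlation approach can be sketched as follows. Window extraction and the multi-pass logic of real evaluation codes are omitted; function and variable names are illustrative:

```python
import numpy as np

def ensemble_correlation(windows_a, windows_b):
    """Ensemble correlation for low seeding density, a minimal sketch.

    The correlation planes of corresponding interrogation windows from many
    image pairs are averaged before peak detection, so that weak individual
    peaks add up to a well-defined ensemble peak (cf. Westerweel et al. 2004).
    windows_a, windows_b: arrays of shape (n_pairs, win, win).
    """
    n, win, _ = windows_a.shape
    acc = np.zeros((win, win))
    for a, b in zip(windows_a, windows_b):
        a = a - a.mean()
        b = b - b.mean()
        # circular cross-correlation via FFT
        corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
        acc += np.fft.fftshift(corr)
    acc /= n
    peak = np.unravel_index(np.argmax(acc), acc.shape)
    # integer displacement relative to the window center
    dy = peak[0] - win // 2
    dx = peak[1] - win // 2
    return dx, dy, acc
```

A subpixel estimator applied to `acc` would complete the evaluation; only the integer peak is returned here.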
Another inherent problem for microfluidic investigations is the so-called depth of correlation (DOC) as already mentioned at the beginning of the section. In standard PIV, a thin laser light sheet is generated to illuminate a plane in the flow which is usually observed from large distances. The depth of focus of the camera is typically much larger than the thickness of the laser light sheet, and therefore, the depth of the measurement plane is determined by the thickness of the light sheet at a certain intensity threshold which depends also on the size of the particles, the camera sensitivity and the lens settings. In microfluidic applications, due to the small dimensions of the microchannels, the creation and application of a thin laser light sheet is not possible. Volume illumination is used instead. As a consequence of this, the thickness of the measurement plane is defined by the depth of field of the microscope objective lens. In \(\upmu \hbox {PIV}\) experiments, the DOC is commonly defined as twice the distance from the object plane to the nearest plane in which a particle becomes sufficiently defocused so that it no longer contributes significantly to the cross-correlation analysis (Meinhart et al. 2000). In the case of vanishing out-of-plane gradients within the DOC, the measurements are not biased, but the correlation peak can be deformed due to non-homogeneous particle image sizes, which can reduce the precision of the peak detection. Unfortunately in many cases, the DOC is on the order of the channel dimension and thus large out-of-plane gradients are present. Under certain assumptions, a theoretical value can be derived for the DOC based on knowledge of the magnification, particle diameter, numerical aperture, refractive index of the medium and wavelength of the light. The DOC can also be determined by an experiment, and very often the experimental values are larger since aberrations appear. 
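Under those assumptions, a frequently used estimate for the correlation depth goes back to Olsen and Adrian. The expression below is quoted from the general literature, not from this paper, and should be checked against the original source:

```latex
\mathrm{DOC} = 2\, z_{\mathrm{corr}}
  = 2 \left[ \frac{1-\sqrt{\varepsilon}}{\sqrt{\varepsilon}}
      \left( {f^{\#}}^{2} d_p^{2}
      + \frac{5.95\,(M+1)^{2}\,\lambda^{2}\,{f^{\#}}^{4}}{M^{2}} \right)
    \right]^{1/2}
```

with magnification \(M\), particle diameter \(d_p\), light wavelength \(\lambda\), the effective f-number \(f^{\#}\) of the objective (related to the numerical aperture and the refractive index of the immersion medium) and a relative intensity threshold \(\varepsilon\) (commonly \(\varepsilon \approx 0.01\)).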
In the current case, the theoretical value is \(\hbox {DOC}_{{\mathrm {theory}}}=17.6\,\upmu \hbox {m}\), whereas the experimentally obtained value gives \(\hbox {DOC}_{{\mathrm {exp}}}=31.5\,\upmu \hbox {m}\) (Rossi et al. 2012). Since the tiny channel had a square cross section of \(80 \times 80\,\upmu \hbox {m}^2\), this covers more than one-third of the channel depth and significant bias errors can be expected. However, adequate image preprocessing or non-normalized correlations can minimize this effect (Rossi et al. 2012). Alternatively, 3D3C methods can be applied to fully avoid bias errors due to the volume illumination (Cierpka and Kähler 2012).
In general, image preprocessing is very important for microscopic PIV to increase the signal-to-noise ratio and avoid large systematic errors due to the depth of correlation (Lindken et al. 2009). Consequently, most of the teams applied image preprocessing to enhance the quality of the images.
Participants and methods
In total, 23 institutions requested the experimental images. Since the images were very challenging, not all of them succeeded in submitting results in time. Results were submitted by 13 institutions. The acronyms used in the paper and details about the data submitted are listed in Table 5. For the data sets named evaluation 1 (eval1), the participants that applied spatial correlation such as conventional window cross-correlation or sum of correlation by means of window correlation were forced to use a final interrogation window size of \(32 \times 32\) pixels. Multi-pass, window weighting, image deformation, etc. were allowed. The fixed interrogation window size allows for a comparison of the different algorithms without major bias due to spatial resolution effects (Keane and Adrian 1992; Kähler et al. 2012a, b). However, the experience and attitude of the user has a very pronounced effect on the evaluation result, and different window sizes may have been favored by the different users to alter the smoothness of the results. Therefore, the teams were free to choose the interrogation window size in case of evaluation 2 (eval2). The evaluation parameters are summarized in Table 5. Some special treatments are described in the following if they differ significantly from the standard routines.
Close to the walls, Dantec used a specially shaped wall window so that the contribution of particles farther away can be excluded and thus the systematic velocity overestimation can be minimized. In addition, an N-sigma validation \((\sigma = 2.5)\) was used to detect and exclude outliers that may appear in groups due to the large overlap.
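The N-sigma validation can be sketched in a few lines. The global formulation below is an assumption for illustration; Dantec's exact implementation is not specified in the paper:

```python
import numpy as np

def n_sigma_validation(u, n_sigma=2.5):
    """N-sigma outlier detection on a displacement field, a minimal sketch.

    Vectors further than n_sigma standard deviations from the mean are
    flagged as outliers (sigma = 2.5 as reported for the Dantec evaluation;
    the global, whole-field formulation here is an assumption).
    Returns a boolean outlier mask of the same shape as u.
    """
    mean = np.nanmean(u)
    sigma = np.nanstd(u)
    return np.abs(u - mean) > n_sigma * sigma
```

A local variant would compute `mean` and `sigma` over a neighborhood instead of the whole field.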
A feature tracking algorithm was applied by INSEAN (Miozzi 2004), which solves the optical flow equation in a local framework. The algorithm defines the best correlation measure as the minimum of the sum of squared differences (SSD) of the pixel intensities of corresponding interrogation windows in two subsequent frames. After a linearization, the SSD minimization problem is iteratively solved in a least-squares sense, adopting a pure translational window-motion model in the first step and an affine window deformation in the second. The deformation parameters are given directly by the algorithm solution (Miozzi 2005). The velocity for the individual images was only considered where the solutions of the linear system corresponding to the minimization problem exist. Subpixel image interpolation using a fifth-order B-spline basis (Unser et al. 1993; Astarita and Cardone 2005) was performed. In-plane loss of pairs was avoided by adopting a pyramidal image representation and subpixel image interpolation using a bicubic scheme.
The peak detection scheme of IOT was based on the maximum width of the normalized correlation function at a value of 0.5. The assessment of the mean velocity and the velocity fluctuations started from the center-of-mass histogram method, which was then used as an initial guess for an elliptic 2D Gaussian fit with a nonlinear Levenberg–Marquardt optimization applied to the upper cap (0.5–1) of the correlation peak points. The displacement fluctuations were obtained from the shape of the ensemble correlation function following the approach of Scharnowski et al. (2012). A global displacement validation (\(-20\le DX\le 50\) px and \(-30 \le DY\le 30\) px and 53 px for the vector length) was applied. Outliers after each iteration were replaced by a \(3\times 3\) moving average.
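For reference, the classic three-point Gaussian estimator, which such iterative fits refine, can be written in a few lines along one axis. This is the standard textbook formula, shown for illustration; it is not the Levenberg–Marquardt fit used by IOT:

```python
import numpy as np

def gauss3_subpixel(c_minus, c_zero, c_plus):
    """Classic three-point Gaussian subpixel estimator (one axis).

    Fits a Gaussian through the correlation peak value and its two
    neighbors; returns the subpixel offset from the integer peak
    position. All three values must be positive.
    """
    ln_m, ln_0, ln_p = np.log(c_minus), np.log(c_zero), np.log(c_plus)
    return (ln_m - ln_p) / (2.0 * ln_m - 4.0 * ln_0 + 2.0 * ln_p)
```

For an exactly Gaussian peak the estimator is exact, since the logarithm of a Gaussian is a parabola.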
For peak finding, TCD used a \(3\times 3\) 2D Gaussian estimator (Nobach and Honkanen 2005) that ranks peaks according to their volume instead of their height. For intermediate steps, a \(3\times 3\) spatial median filter was used to determine outliers (Westerweel and Scarano 2005). They were replaced by lower-ranked correlation peaks or local median of the neighbors. Two further median smoothing operations were applied.
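The spatial median filter of Westerweel and Scarano (2005) is, in its normalized form, straightforward to sketch. The noise level and threshold below are the commonly cited defaults, not necessarily TCD's settings:

```python
import numpy as np

def normalized_median_test(u, eps=0.1, thresh=2.0):
    """Universal outlier detection (Westerweel & Scarano 2005), a sketch.

    Each interior vector is compared with the median of its 8 neighbors,
    normalized by the median of the residuals of those neighbors; eps
    accounts for the measurement noise level. Border vectors are skipped
    here for brevity. Returns a boolean outlier mask.
    """
    h, w = u.shape
    flags = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = np.concatenate([u[y - 1, x - 1:x + 2],
                                 u[y, [x - 1, x + 1]],
                                 u[y + 1, x - 1:x + 2]])
            med = np.median(nb)
            resid = np.median(np.abs(nb - med))
            flags[y, x] = np.abs(u[y, x] - med) / (resid + eps) > thresh
    return flags
```

Flagged vectors can then be replaced by lower-ranked correlation peaks or the local median, as described above for TCD.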
An iterative multi-grid continuous window deformation interrogation algorithm was applied by TU Delft to individual images with decreasing window size starting with \(128\) pixels (Scarano and Riethmuller 2000) with bilinear interpolation for the velocity and \(8\times 8\) sinc interpolation for pixel intensities (Astarita and Cardone 2005). Close to the walls, a weighting function was applied to the non-masked region.
The peak finder used by UniG separates in the first step the most significant peaks of the correlation function and evaluates their particular characteristics. The peak quality is rated based on several individually weighted criteria which are related to, e.g., peak shape and peak height relative to the local image contrast. In the next step, a second peak-rating run is performed with an additional criterion that takes into account information from the peaks in neighboring interrogation windows. This method is superior to a simple interpolation-based error filter, because its second peak rating replaces any ordinary outlier substitution (Schewe 2014).
Table 5 Evaluation parameter for case A
Mask generation
Among all submissions, the mask generation (either algorithmic or manual) is very different. The smallest distance between masked points for eval1 (i.e., the minimum channel width) or between points where the velocity reaches exactly zero (if no mask was provided) ranges between 160 and 194 pixels, a spread of more than 20 % of the channel width. The values for the different participants are given in Table 6. No trend can be seen concerning automated or manual mask generation; large as well as small channel widths can be found for both algorithmic and manual mask generation. This large scatter already illustrates the strong effect of the participant's attitude on the evaluation domain and thus on the results.
Table 6 Mask generation and width of the small channel
Results
Evaluation 1
Most of the participants used their own masks or excluded data outside of the channel by setting the flag to zero. However, the SNR at the left and right borders of the images decreases due to inhomogeneous illumination. Therefore, values for \(x \le 120\) px and \(x \ge 2400\) px were also excluded from the analysis.
The mean flow field component in x-direction is shown in Figs. 5 and 6 in the first column. In many experiments, the real or ground truth is not known; however, all physical phenomena of this experiment due to the flow, tracer particles, illumination, imaging, registration, discretization and quantization are realistic, which is not achievable using synthetic images. Furthermore, a ground truth is only helpful if only small deviations exist between the results, which is not the case here. Therefore, a statistical approach is well suited, which identifies results that are highly inconsistent with the results of the other teams, or which are unphysical taking fluid mechanical considerations into account, or which are not explainable on technical grounds. In principle, all the teams could resolve the average characteristics of the flow. The fluid is accelerated in the first part of the chamber; later, a channel flow forms in the contraction and a free jet, leaning toward the lower part of the images, develops behind the channel's outlet. For the mean displacement, the magnitude is in coarse agreement for all participants. Obvious differences can be observed in terms of data smoothness (MicroVec for instance), the acceleration of the flow toward the inlet (compare Dantec, LaVision and UniNa), the symmetry of the flow in the channel with respect to its center axis (TCD and LANL) and the contraction of the streamlines at the outlet of the channel (compare TSI and UniG with others). The major differences can be found close to the walls, especially in the inlet region where the cavitation bubble forms in the first 250 images. Here, the data of INSEAN, LANL and TCD are particularly noisy.
Dantec used a special wall treatment in the algorithm to estimate the near-wall flow field. For other teams (MicroVec, TCD, TSI, TUD, UniG and UniMe), it seems that the velocity was forced to decay to zero at the wall, which results in very strong differences for the gradients in the small channel. These effects can be seen in the displacement profiles for evaluation 2 in Fig. 10. Although it seems reasonable to take the near-wall flow physics into account, this procedure is very sensitive to the definition of the wall location, which is strongly user dependent as shown above. Integrating the displacement in x-direction shows differences of 28 % (of the mean for all participants) between the smallest and largest values for the volume flow rate in the small channel, which is remarkable. It is also obvious that image preprocessing is crucial to avoid wrong displacement estimates due to the hot pixels and cross-talk between frames, which result in an underestimation of the displacement. Image preprocessing was not performed by TCD, and regions of zero velocity (see dark spots close to the inlet at the upper part) can be seen. Also in the case of MicroVec, where only smoothing was applied, artifacts from the hot pixels can clearly be seen. All the other participants used a background removal by subtraction of the mean, median or minimum image and additional smoothing, which works reasonably well. Dantec, DLR, IOT, LANL, LaVision, TU Delft and UniNa used special image treatments to reduce the effect of the depth of correlation. The underestimation of the center line displacement can be minimized by this treatment, as shown by the slightly higher mean displacements in x-direction (indicated in the histograms in the third column in Figs. 5 and 6) in comparison with the teams which used only image smoothing.
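The volume-flow-rate comparison reduces to an integration of the displacement profile across the channel. A minimal sketch, including a demonstration of the sensitivity to the assumed wall position (the parabolic profile and all numbers are illustrative, not data from the paper):

```python
import numpy as np

def flow_rate_from_profile(y, dx_profile):
    """Trapezoidal integration of the streamwise displacement DX(y).

    Approximates the 2D volume flow rate per unit depth across the channel,
    in pixel units. The sensitivity to the assumed wall position can be
    probed by cropping the profile.
    """
    dy = np.diff(y)
    return float(np.sum(0.5 * (dx_profile[1:] + dx_profile[:-1]) * dy))

# Illustrative parabolic profile in a 160 px wide channel, peak 45 px
H, umax = 160.0, 45.0
y = np.linspace(0.0, H, 161)
u = umax * 4.0 * y * (H - y) / H ** 2

q_full = flow_rate_from_profile(y, u)
# assume the wall was (wrongly) placed 10 px further inside on each side
q_crop = flow_rate_from_profile(y[10:-10], u[10:-10])
```

Even a modest disagreement about the wall location changes the integrated flow rate, consistent with the 28 % spread reported above.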
The mean fluctuation flow fields for the x-direction are displayed in the second column of Figs. 5 and 6. As indicated in Table 5, some teams (LANL, UniG, UniNa) did not provide fluctuation fields. Compared with the mean displacement fields, much larger differences can be observed in the fluctuation fields. The differences clearly illustrate the sensitivity of the velocity measurement to the evaluation approach and the need for reliable measures for the uncertainty quantification. In general, the fluctuations in x- and y-direction are of the same order of magnitude. Turbulence in microchannels is usually difficult to achieve, even if the Reynolds number based on the width of the small channel and the velocity of O(200) m/s is \(Re>\) 16,000 and thus above the critical Reynolds number. Therefore, it has to be kept in mind that this is not a fully developed channel flow. However, due to the intermittent cavitation at the inlet, which also affects the mean velocity in this area because of the blockage, fluctuations are expected in the inlet region of the channel. During the time when cavitation occurs, the velocity in that region is almost zero (although no tracer particles enter the cavitation), whereas it is about 40–50 pixels in cases where no cavitation is present. Also close to the channel walls, strong fluctuations are expected because of the finite size of the particle images. This is obvious because at a certain wall distance particles from below and above, which travel with strongly different velocities, contribute to the signal. Consequently, virtual fluctuations become visible, caused by seeding inhomogeneities. The same effects appear in the thin shear layers behind the outlet of the channel. This is confirmed by the results, where strong fluctuations can be found in the small channel and in the side regions of the jet. If the mean rms displacement value for the different measurements is divided by the mean displacement, fluctuations of 20–30 % are reached.
The majority of the teams found the largest rms values at the inlet region where the cavitation bubble forms and in the free jet region where the shear layers develop. The mean rms levels for \(DX_{{\mathrm {rms}}}\) are between 2.06 pixels (INSEAN) and 3.26 pixels (IOT). The values for TCD (5.05 pixels) are much higher than the values for the other teams and should therefore be taken with care.
Artifacts of the data processing close to the wall probably lead to the high rms levels in the rounded corners shown by IOT, TCD, TSI, TUD and UniMe and to the unphysically high rms levels at the walls of the small channel, which were not measured by the other participants. In addition, the data of TCD show very large rms values close to all walls, which is caused by the wall treatment. Moreover, the mean displacement profiles of TCD show outliers, and the large rms values away from the walls are caused by these outliers.
LaVision applied a global histogram filter, allowing only fluctuation levels that do not exceed \(\pm 10\) pixels from the reference values; the rms values therefore show a mean of only 2.16 pixels for the x-direction. IOT obtained the fluctuations by evaluating the shape of the ensemble correlation function following the approach of Scharnowski et al. (2012), which often shows somewhat higher values than vector processing since no additional smoothing is applied. The same trends as just described also hold true for the rms displacement in y-direction.
The last column of Figs. 5 and 6 reveals the probability density functions (pdfs) for the displacement in x- and y-direction. The velocity pdfs are in good agreement among the different teams, as small deviations cannot be resolved using pdf distributions. For DX, a very broad peak around \(\approx -2\) pixels results from the large recirculation region. A second broad peak at \(\approx\)8 pixels stems from the mean channel flow. The very high velocities in the small channel do not result in a single peak, but show an almost constant contribution in the range of large displacements up to 40 pixels. Very large peaks of a single preferred velocity can be seen in the pdfs of INSEAN, MicroVec, UniNa and UniMe at zero velocity, which result from cross-talk or hot pixels. Additional strong peaks for distinct displacements are visible for LANL and UniG in the low-velocity region. The data of DLR also show a small broader peak at \(\approx\)35 pixels, which was not found by the other teams. The pdfs for the other teams are quite smooth and show the largest differences in the gap between the first and the second broad peak.
The pdf of DY is centered around slightly negative values for the displacement, and again the agreement between the teams is very good. However, MicroVec and TSI show significantly larger values at zero displacement. The pdf for TCD is centered around zero.
Evaluation 2
For evaluation 2, the participants were free to choose the appropriate window size. All teams used the same preprocessing (if applied) as in evaluation 1. About one quarter of the teams (Dantec, MicroVec, TUD, UniG) chose \(32\times 32\) pixel windows and the same processing parameters as in evaluation 1; therefore, the same data will be used for comparison.
Some teams only changed the window size: INSEAN, LaVision and UniMe lowered the final window size to \(16\times 16\) pixels. UniMe did not use the Savitzky–Golay filter, and UniNa set the final window size to \(11\times 11\) pixels. For TCD, the final window size was \(128\times 32\) pixels on a \(32\times 16\) pixel grid. The evaluation parameters, as far as they differ from evaluation 1, are described in the following and listed in Table 5.
The DLR team used a predictor field, computed using the pyramid approach of evaluation 1 stopping at a sampling size of \(32\times 32\) pixels on a grid with \(8 \times 8\) pixel spacing. Each image pair was then processed individually by first subjecting it to full image deformation based on the predictor field. Then a pyramid scheme was used, starting at an initial window size of \(64\times 64\) pixels, which limits the maximum displacement variations to about ±20 pixels. The final window size was \(24\times 24\) pixels at a grid distance of \(8 \times 8\) pixel spacing which was subsequently up-sampled to the requested finer grid of \(2 \times 2\) pixels.
IOT employed an ensemble correlation with \(2\times 2\) pixel windows. The search area was \(128\times 64\) pixels. The correlation function was multiplied by four neighbors and then the peaks were preprocessed and a Gaussian fit was applied to determine the peak position for the mean displacement and rms displacements from the shape of the correlation peaks (Scharnowski et al. 2012).
For evaluation 2, LANL used a PTV method with multi-parametric tracking (Cardwell et al. 2010). Particle images were identified using a dynamic thresholding method, which allows for the detection of dimmer particles in close proximity to brighter ones. Their position was determined using a least-squares Gaussian estimation with subpixel accuracy (Brady et al. 2009). The multi-parametric matching allows the particles to be matched using not only their position but also a weighted contribution of their size and intensity between image pairs. To further increase the match probability, the particle search locations were preconditioned by the velocity field determined using an ensemble PIV correlation. The PTV data were then interpolated (search windows of \(32\times 32\) pixels) with inverse distance weighting to a \(2\times 2\) pixel grid. When three or fewer vectors were present in the corresponding window, the window size was increased by 50 %.
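The gridding of scattered PTV vectors by inverse distance weighting might be sketched as follows. The search radius, the weighting exponent and the NaN fallback are assumptions; the window-enlargement fallback described above is omitted for brevity:

```python
import numpy as np

def idw_to_grid(pts, vals, grid_x, grid_y, radius=16.0, power=1.0):
    """Inverse-distance-weighted gridding of scattered PTV vectors, a sketch.

    pts: (n, 2) array of track positions (x, y); vals: (n,) displacements.
    For every grid node, tracks within 'radius' pixels contribute with
    weight 1/d**power. Nodes without neighbors get NaN (the paper instead
    enlarged the search window; that fallback is omitted here).
    """
    out = np.full((len(grid_y), len(grid_x)), np.nan)
    for j, gy in enumerate(grid_y):
        for i, gx in enumerate(grid_x):
            d = np.hypot(pts[:, 0] - gx, pts[:, 1] - gy)
            sel = d < radius
            if not sel.any():
                continue
            w = 1.0 / np.maximum(d[sel], 1e-6) ** power
            out[j, i] = np.sum(w * vals[sel]) / np.sum(w)
    return out
```

Low powers smooth strongly; higher powers reproduce the nearest tracks more faithfully, which is the usual trade-off of this scheme.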
LaVision subdivided the images with respect to the modes with and without the cavitation bubble present at the entrance. By this procedure, the large fluctuations due to the velocity change at the wall should be completely avoided but also the effect of the cavitation on the mean flow field due to the contraction. For comparison to the other teams, both data sets were averaged, weighted by the number of images that belong to them. TSI used correlation averaging instead of individual correlation planes.
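Recombining the two conditionally averaged fields reduces to a weighted mean; a trivial sketch, with field shapes and image counts chosen purely for illustration:

```python
import numpy as np

def mode_weighted_mean(mean_cav, n_cav, mean_free, n_free):
    """Combine per-mode mean fields, weighted by their number of images.

    Sketch of the conditional-averaging recombination described for
    LaVision: the recordings are split into a mode with and a mode without
    the cavitation bubble, averaged separately and recombined for
    comparison with the other teams.
    """
    return (n_cav * mean_cav + n_free * mean_free) / (n_cav + n_free)
```

The same weighting applies to any conditional statistic, as long as the mode assignment of each recording is unambiguous.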
The mean flow fields for the x-component are shown in Figs. 7 and 8 in the first column. For TCD, the number of outliers was reduced because of the enlargement of the window in x-direction. The mean displacement field for evaluation 2 of TCD looks much smoother now, although artifacts due to hot pixels can still be seen. The difference for TSI's results is that correlation averaging was used instead of individual correlations, which also tends to lower the velocity estimate by smoothing if normalized. IOT used ensemble correlation of \(2\times 2\) pixel windows from the beginning, instead of an iterative averaged correlation as done for evaluation 1. This results in a noisier pattern. In general, the differences to evaluation 1 are not very pronounced in the mean fields. Regarding the velocity profiles, the differences among the participants are larger for evaluation 2 than for evaluation 1, where the processing parameters were fixed. This indicates that in principle all algorithms provide reasonable results, but a great difference is made by the choice of the parameters, which depends on the user's experience and intention.
The rms flow fields are shown in Figs. 7 and 8 in the second column. Please note that TSI and UniNa did not provide fluctuation fields for evaluation 2. Here, larger differences between the evaluation 1 and evaluation 2 results are visible, as expected. For the DLR data, the levels of the rms values are smaller and concentrate in regions close to the channel walls and in the shear layers of the evolving jet flow. For INSEAN, the mean rms levels are larger, and stronger signatures of regions of high rms values can also be seen. For the data of IOT, very high and very low levels of rms values are visible. The patterns appear very noisy, which is probably a result of the smaller window size and thus a noisy correlation peak. Since this correlation peak is analyzed with a Gaussian fit function, the results of this function might show large fluctuations. The levels of the rms values for TCD's results are of the same order as found by the other teams; they are also much smoother than for evaluation 1, which is attributed to the larger window size. For UniMe, the rms values are above the other teams' levels and even larger than for evaluation 1. Part of this behavior is probably due to different filtering schemes. This detrimental effect can also be seen in the subpixel rms displacement (not shown). However, especially for INSEAN, the large value at zero subpixel displacement from evaluation 1 was significantly decreased.
To have a closer look at the results, the displacement along the centerline at \(y = 620\) px is shown for evaluation 2 in Fig. 9. Lines “A” and “B” correspond to the horizontal profiles shown in Figs. 10 and 11, respectively. The gray shaded region indicates the small channel length. The agreement in the acceleration region is quite good for all participants. On the center line, the differences in terms of the standard deviation for DX are 0.81 px at \(x=900\) px and 0.77 px at \(x=1300\) px, excluding the data of MicroVec. However, close to the small channel and in the inlet region, deviations occur due to the cavitation. On the center line, the data agree again fairly well within the small channel and in the free jet region. In this representation, differences can only be seen in the deceleration region, where even the direction of the flow among the participants differs. Interestingly, the data of LANL for evaluation 2 appear smoother, although particle tracking was applied instead of cross-correlation. However, the mean velocities are very similar, with slightly higher values for the PTV results, which might be attributed to the fact that usually only brighter in-focus particles are taken into consideration for the tracking (Cierpka et al. 2013).
In Fig. 10, the displacement in the x- and y-directions for evaluation 2 is shown for a line at \(x = 900\) px, which lies in the channel but far away from the cavitation region. Significant differences can be seen near the walls, which are due to the different wall treatments and the varying ability of the algorithms to resolve the gradients close to the wall. As discussed above, the width of the channel changes due to the different approaches in the mask generation. If the volume flow rate is determined by integrating the displacement along the wall-normal direction, differences of up to 28 % between the participants occur. Although the differences on the center line in the x-direction are small in this representation, large differences can be seen for the y-direction on the right side of Fig. 10. Even the sign changes between teams, and the magnitude of the error can reach up to 7 px in some places.
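How strongly the integrated flow rate reacts to the assumed wall position can be illustrated with a trapezoidal integration of a hypothetical parabolic profile (all numbers are illustrative and not case A data):

```python
import numpy as np

def trapezoid(f, x):
    """Trapezoidal integration of samples f over coordinates x."""
    return 0.5 * np.sum((f[1:] + f[:-1]) * np.diff(x))

# Hypothetical parabolic displacement profile across an 80 px wide channel
y = np.linspace(0.0, 80.0, 41)                # wall-normal coordinate (px)
dx = 12.0 * (1.0 - ((y - 40.0) / 40.0) ** 2)  # x-displacement (px), peak 12 px

q_full = trapezoid(dx, y)                     # flow rate per unit depth
q_masked = trapezoid(dx[2:-2], y[2:-2])       # mask shifted by 4 px per wall
deficit = 1.0 - q_masked / q_full             # relative flow-rate deficit
```

Even though the displacement near the wall is small, shifting the mask truncates the profile and changes the integral; with wall positions differing by more than 20 % between submissions, such deviations accumulate into the flow-rate differences reported above.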
Figure 11 displays the ability to resolve large gradients by showing the x-displacement after the small channel in the region of the jet. The high-velocity region in the jet was resolved by all participants. Again, in the middle of the channel, the results match very well. With increasing distance from the center line, the gradients show large differences not only in spatial location but also in magnitude. It seems that, for example, TSI and MicroVec used a considerably higher amount of smoothing, which results in a smaller region of high-momentum fluid in comparison with the other participants.
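The effect of strong smoothing on a shear layer can be sketched with a simple moving-average filter applied to a synthetic step profile (a schematic illustration only; the kernel sizes are arbitrary and do not correspond to any team's settings):

```python
import numpy as np

def box_smooth(u, k):
    """Moving-average smoothing of a 1D velocity profile with kernel size k."""
    return np.convolve(u, np.ones(k) / k, mode="same")

# Synthetic shear layer: low-momentum fluid next to a high-momentum jet core
u = np.concatenate([np.zeros(20), np.full(20, 10.0)])

u_mild = box_smooth(u, 3)    # mild smoothing
u_strong = box_smooth(u, 9)  # strong smoothing broadens the gradient region
```

The wider the kernel, the broader the apparent shear layer and the smaller the plateau of high-momentum fluid, consistent with the differences between teams seen in Fig. 11.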
Conclusion
The analysis of case A shows that, even after more than three decades of evaluation algorithm development, the interrogation of 2D2C \(\upmu \hbox {PIV}\) measurements is still challenging when strong spatial gradients, large dynamic velocity ranges, or solid boundaries are present. It is evident that considerable knowledge and experience are necessary to produce reliable and correct results.
-
In \(\upmu \hbox {PIV}\), fluorescent tracer particles are usually used to separate the signal from the background by means of optical filters. Thus, one might assume that preprocessing is not needed to enhance the signal-to-noise ratio. However, the analysis of case A shows that image preprocessing is important to avoid incorrect displacement estimations caused by typical camera artefacts such as cold and hot pixels or cross-talk between camera frames. The latter appears if the laser pulse separation is comparable to the interframing time of the digital cameras. The influence of gain variations between pixels is usually reduced by the image intensity correction implemented in most camera software. If it cannot be fully balanced, the analysis shows that the different background removal methods applied by the teams work similarly well. However, it is evident from the results provided by MicroVec and TCD that smoothing alone is not effective at eliminating fixed pattern noise.
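One common class of background removal in \(\upmu \hbox {PIV}\) subtracts the per-pixel temporal minimum over the image ensemble, which suppresses stationary artefacts such as hot pixels and fixed pattern noise while leaving the moving particle images intact (a generic sketch; the teams' actual preprocessing methods differed and are not reproduced here):

```python
import numpy as np

def remove_background(stack):
    """Subtract the per-pixel temporal minimum from an image stack.

    Stationary contributions (fixed pattern noise, hot pixels, reflections)
    appear in every frame and thus survive the minimum; moving particle
    images do not. stack: array of shape (n_images, height, width).
    """
    background = stack.min(axis=0)
    return stack - background
```

Unlike spatial smoothing, this removes a fixed pattern exactly instead of merely blurring it into the neighborhood.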
-
It is well known that the precise masking of solid boundaries prior to evaluating the measurements is a very important task. The masking determines the size of the evaluation domain and thus the region where flow information can be measured, so precise masking is desirable to maximize the flow information. However, the present case shows that masking is associated with large uncertainties: the estimated width of the small straight channel varied by more than 20 % among all submissions. This unexpected result illustrates the strong influence of the user. To avoid this user-dependent uncertainty, universal digital masking techniques are desirable to address this serious problem in future measurements.
-
Another interesting effect could be observed by comparing the strongly varying velocity profiles across the small channel. Due to the spatial correlation analysis, the flow velocity close to solid boundaries is usually overestimated. However, as the spatial resolution effect was almost identical for all teams in evaluation 1, the variation can be attributed to the particular treatment of the near-wall flow. Since most teams did not use special techniques, the variations are mainly caused by the masking procedure. In contrast, UniMe seems to resolve the boundary layer effect much better than the other teams. This was achieved by implementing the no-slip condition in the evaluation approach. Although this model-based approach shows the expected trend of the velocity close to the wall, it is evident that the results are strongly biased by the definition of the boundary location. Unfortunately, as already discussed, the boundary location is itself associated with large uncertainties, which this model-based approach inherits. Therefore, it would be more reliable to make use of evaluation techniques that enhance the spatial resolution without making any model assumption, such as single-pixel PIV or PTV evaluation techniques (Kähler et al. 2012b). However, as the uncertainty of these evaluation techniques is not always better than that of spatial correlation approaches, it is recommended to evaluate the measurements in a zonal-like fashion, as done in numerical flow simulations, where RANS simulations are coupled with LES or even DNS simulations to resolve the flow unsteadiness at relevant locations.
The zonal-like evaluation approach minimizes the global uncertainty of the flow field if the near-wall region is evaluated using PTV (to avoid uncertainties due to spatial filtering or zero velocity assumptions at the wall) while at larger wall distances spatial correlation approaches are used (because the noise can be effectively suppressed using statistical methods).
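The zonal idea can be sketched as a simple per-point selection of the estimator by wall distance (schematic only; the cutoff value and names are hypothetical, and a real implementation would additionally have to interpolate the scattered PTV data onto the correlation grid and blend smoothly across the zone boundary):

```python
import numpy as np

def zonal_velocity(wall_distance, u_ptv, u_corr, cutoff=8.0):
    """Select the PTV estimate close to the wall and the correlation-based
    estimate farther away (schematic zonal evaluation).

    wall_distance: distance to the nearest wall per grid point (same units
    as the hypothetical cutoff); u_ptv, u_corr: velocity estimates.
    """
    return np.where(wall_distance < cutoff, u_ptv, u_corr)
```

This keeps the unfiltered near-wall information from the tracking while retaining the statistically robust correlation result in the bulk.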
-
Furthermore, the precise estimation of the mean velocity is very important for the calculation of higher-order moments. The strong variation of the rms values illustrates that uncertainty quantification is very important, because the differences visible in the results are much larger than expected from general PIV uncertainty assumptions. Here, future work is required to quantify the reliability of a PIV measurement.
-
Since in microscopic devices the flow fields show large in-plane and out-of-plane gradients and are inherently three-dimensional, it is beneficial to use techniques that allow a reconstruction of all three components of the velocity vector in a volume (Cierpka and Kähler 2012). Of special interest are techniques that allow a depth coding in 2D images, such as confocal scanning microscopy (Park et al. 2004; Lima et al. 2013), holographic (Choi et al. 2012; Seo and Lee 2014) or light-field techniques (Levoy et al. 2006), defocusing methods (Tien et al. 2014; Barnkob et al. 2015), or the introduction of astigmatic aberrations (Cierpka et al. 2011; Liu et al. 2014).