Experiments in Fluids, Volume 36, Issue 2, pp 355–362

XPIV–Multi-plane stereoscopic particle image velocimetry


  • A. Liberzon
    • Multiphase Flow Laboratory, Faculty of Mechanical Engineering
    • Institute of Hydromechanics and Water Resources Management, ETH
  • R. Gurka
    • Multiphase Flow Laboratory, Faculty of Mechanical Engineering
    • Department of Mechanical Engineering, The Johns Hopkins University
  • G. Hetsroni
    • Multiphase Flow Laboratory, Faculty of Mechanical Engineering

DOI: 10.1007/s00348-003-0731-9

Cite this article as:
Liberzon, A., Gurka, R. & Hetsroni, G. Exp Fluids (2004) 36: 355. doi:10.1007/s00348-003-0731-9


We introduce a three-dimensional measurement technique (XPIV) based on a Particle Image Velocimetry (PIV) system. The technique provides three-dimensional, statistically significant velocity data. Its main principle lies in the combination of the defocus, stereoscopic, and multi-plane illumination concepts. Preliminary results for the turbulent boundary layer in a flume are presented. The quality of the velocity data is evaluated by using the velocity profiles and the relative turbulent intensity of the boundary layer. The analysis indicates that XPIV is a reliable experimental tool for three-dimensional fluid velocity measurements.

1 Introduction

Experimental investigation of turbulent flows requires techniques that allow three-dimensional measurements with high spatial and temporal resolution. Particle Image Velocimetry (PIV) is a state-of-the-art method in fluid dynamics research that provides high spatial resolution in a two-dimensional slice of the flow (Adrian 1991; Raffel et al. 1998), and appears to be an appropriate basis for three-dimensional velocity measurements. The temporal resolution of the technique is limited only by the currently available technology, namely the pulse repetition rates of the illumination sources (lasers) and the frame rates of the recording media (CCD cameras).

Several extensions of the classical PIV system have been proposed to obtain more than two-dimensional, two-component velocity information. The most common extension is the addition of a second CCD camera to acquire a stereoscopic view of the flow, and thus to obtain the out-of-plane component of the velocity on a plane (Raffel et al. 1998). Replication of the stereoscopic PIV (SPIV) system (i.e., two double lasers and four CCD cameras), combined with additional optics and sophisticated synchronization, as proposed by Kähler and Kompenhans (1999), extends PIV to two planes and has been successfully used to obtain three-component velocity fields with the temporal capability of stereoscopic PIV (e.g., Kähler et al. 2000).

Another improvement of PIV was proposed by Willert and Gharib (1992), who illuminated a flow volume and used the defocus principle to identify the three-dimensional locations of seeded particles. Recently this technique was successfully applied to measure trajectories of bubbles in a two-phase flow (Pereira et al. 2000).

An interesting three-dimensional system is holographic PIV (HPIV), which uses volume illumination and a complicated holographic recording procedure. Several configurations of in-line and off-axis illumination and recording were implemented in turbulent flow experiments by Zhang et al. (1997) and Barnhart et al. (1994), among others. An improved version of HPIV is light-in-flight HPIV (Hinrichs and Hinsch 1994; Böhmer et al. 1996), which provides higher precision by using light of limited coherence and by analyzing separate planes within the measured volume. The holographic concept is superior to the other methods since it can provide an instantaneous three-dimensional field with high spatial resolution, and HPIV will no doubt eventually serve as a tool to analyze turbulent flows. However, holography currently does not allow statistically significant amounts of data to be collected and is limited to relatively simple flow configurations.

Characterization of coherent motions in a turbulent boundary layer demands a statistically significant, three-dimensional description of the flow. Understanding the drawbacks and advantages of the available measurement systems led to the development of the multi-plane stereoscopic velocimetry technique (XPIV). The technique applies the principles of multi-sheet illumination, stereoscopic imaging, and particle image defocusing, and is implemented with a stereoscopic PIV system supplemented by additional optics and an image processing algorithm. Section 2 presents the experimental setup and optical configuration. The image processing algorithm is described in Sect. 3, followed by results in Sect. 4. Lastly, Sect. 5 contains a summary and concluding remarks.

2 Experimental setup

This work is part of an experimental research program on coherent structures in a turbulent boundary layer in a flume. The flume is made of glass and has dimensions of 4.9×0.32×0.1 m. The flow field in this flume was measured and characterized in the previous studies of Hetsroni et al. (1996) and Kaftori et al. (1994) by using hot-wire anemometry, LDV, and flow visualization techniques. A schematic diagram of the configuration is shown in Fig. 1. The Reynolds number, based on the water height, was 20,000. The velocity field was measured at a distance of 2.5 m from the entrance. The stereoscopic PIV system comprises a double Nd:YAG laser (170 mJ/pulse, 15 Hz) and two 1 K×1 K, 30 fps CCD cameras, mounted to satisfy the Scheimpflug condition. Hollow glass sphere particles with an average diameter of 11 µm were used for seeding. The calibration procedure and PIV cross-correlation analysis were performed by using Insight 3.2 (TSI Incorporated 1999a).
Fig. 1.

Experimental facility: 1) flume, 2) water reservoirs, 3) piping, 4) double Nd:YAG laser and its power supply unit, 5) optical table, 6) frame, 7) CCD 1 (left), 8) CCD 2 (right), 9) 45° mirror, 10) quarter-wave plate, 11) beam splitting unit, 12) cylindrical lens, 13) three parallel laser sheets. The coordinate system is as follows: x - streamwise, y - wall-normal, and z - spanwise directions

2.1 Optical arrangement

The optical arrangement is presented in Fig. 2. The laser beam, with linear vertical polarization, passes through the spherical lens (1) that focuses the laser sheet at the area of interest, and is then turned up, from its horizontal direction, toward the optical array by a 45° high-energy laser mirror (2). The beam splitting array comprises four components in the following order. The beam first passes through a zero-order quarter-wave plate (3) mounted on a rotation mount, which allows the angle of the linear polarization of the output beam to be changed. Next is a 1.25 cm high-energy polarizing cube beamsplitter (4), which transmits s-polarized light and reflects p-polarized light. Thus, if the laser beam at its entrance is totally s-polarized, the light is transmitted almost completely and the output of the array is only two parallel beams. In our case, we can control the partition of the laser beam energy between the first (lowest) plane and the two other sheets: with an appropriate angle of the quarter-wave plate we achieve a 1/3–2/3 energy split between the reflected and transmitted beams, respectively. The next component is a non-polarizing cube beamsplitter (5) of identical size (1.25 cm) with a fixed 50%–50% ratio between the transmitted and reflected beams. Next is a 1.25 cm right-angle prism (6), used as a mirror to ensure identical distances between the three beams. Finally, all three parallel beams pass through the cylindrical lens (7) to form three parallel laser sheets. We note that the polarization properties of the laser sheets are not important in our technique, only their intensity and alignment characteristics. All the optical components were mounted on the same optical board. The two beamsplitters and the prism were placed in a slot to maintain their co-alignment and were attached without gluing (the high-energy laser beam can damage the glue, resulting in beam aberrations).
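The equal-intensity output of the cascade follows from simple energy bookkeeping (a sketch, with $E_0$ denoting the beam energy entering the splitting array and losses neglected):

$$E_{\mathrm{refl}}=\tfrac{1}{3}E_0,\qquad E_{\mathrm{trans}}=\tfrac{2}{3}E_0,\qquad E_2=E_3=\tfrac{1}{2}\cdot\tfrac{2}{3}E_0=\tfrac{1}{3}E_0$$

so each of the three laser sheets carries one third of the input energy.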
The presented optical arrangement forms three parallel laser sheets of equal intensity with a known, physically defined distance of 1.25 cm between them (i.e., the size of the cube beamsplitters and the prism). The extension of this optical arrangement to produce four, five, or more planes is straightforward. Alternatively, the same splitting idea can be used with a more complicated optical arrangement of mirrors and thin-plate beamsplitters to achieve three or more parallel planes with variable, rather than fixed, distances between them. We chose the smallest optical components available from a commercial catalog (0.5”). The measurement technique could be further improved by using a CCD with a higher dynamic range (e.g., a 12 bit camera), quantum efficiency, and/or resolution (e.g., 2000×2000 pixels).
Fig. 2.

Schematic view of the optical array

2.2 Image recording and calibration

In the present work we illuminated the flow with three laser sheets oriented parallel to the flume bottom, with the lowest plane located at y+≈80 (Liberzon et al. 2001). Both CCD cameras were focused on the most distant plane during recording, and the camera field of view was 60×60 mm. As shown in the scheme (see Fig. 1), the CCD cameras were placed under the flume, so the farthest plane is the highest one, located at y=30.4 mm; the other two planes were located 17.7 and 5 mm from the bottom of the flume, respectively. The distance between two consecutive planes defines the resolution in the third dimension (i.e., y), equal to 12.7 mm. The PIV analysis was performed on two sets of data, using interrogation areas of (i) 128×128 pixels with 25% overlap and (ii) 64×64 pixels with 50% overlap, for which 30 and 100 image pairs were acquired and analyzed, respectively; a standard median filter was applied to remove erroneous vectors. The number of removed vectors was less than 10% of the whole vector field. The resolution in the streamwise and spanwise directions is defined by the grid spacing of the cross-correlation analysis, Δx=Δz=32 pixels=2.15 mm. The images were acquired, in the usual manner, by using a synchronizer and two frame grabbers, and were saved to the hard disk as uncompressed, 8 bit gray level TIFF files.

The calibration procedure was performed three times by acquiring three pairs of images of a two-plane calibration grid. Each pair was recorded for one of the laser sheet planes while both cameras were focused on the calibrated plane. The calibration images were analyzed using the PIVCalib software (TSI Incorporated 1999b), based on the image warping method, and three calibration files were formed, one per plane. Later, the PIV images for each plane, prepared according to our algorithm (presented below), were analyzed separately using the corresponding calibration files and cross-correlation PIV analysis.

3 Image processing algorithm

The PIV images were acquired with the illumination intensity and camera aperture set to obtain particle images from all three planes with optimal concentration and distribution. The particle images in the upper plane, on which the cameras were focused, are obviously brighter and smaller than the images of particles in the lowest and middle planes.

3.1 Pre-processing of images

Figure 3 (left) depicts a 256×256 pixel PIV image, about 1/4 of the acquired image. The figure contains particle images from different planes on a non-uniformly illuminated background. The first image processing operation is to enhance the PIV images by removing the non-uniform background illumination and adjusting the image contrast (Young et al. 1998). The background illumination was removed by a gray scale morphology operator, “top-hat”, using a circular structure element B with a radius of 12 pixels:
Fig. 3.

Original three plane PIV image (left), and enhanced three plane PIV image (right)

$$J = I - \left( {I \circ B} \right) = I - \left( {\left( {I \ominus B} \right) \oplus B} \right)$$

where:

$I$ is the operated (input) image.

$J$ is the resulting image.

$\circ$ is the gray scale ‘opening’ operator.

$\ominus$ is the erosion operator.

$\oplus$ is the dilation operator.
Gray scale morphology operations are usually faster than their linear filter analogs and were performed using the Image Processing Toolbox of Matlab (The MathWorks, Inc.). In addition, the image contrast is adjusted by stretching the gray level intensity histogram to the lowest and highest values (i.e., for 8 bit images, 0 and 255, respectively). The enhanced, preprocessed image is shown in Fig. 3 (right). The following section describes the image-processing algorithm used to identify particle images as objects in the PIV image and to classify them into one of three groups, according to the illumination plane.
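As an illustration, the top-hat enhancement can be sketched in Python with SciPy (a sketch, not the authors' Matlab implementation; the function and variable names are ours):

```python
import numpy as np
from scipy import ndimage

def enhance_piv_image(img, radius=12):
    """Top-hat background removal followed by contrast stretching.

    `img` is an 8 bit gray-level array; `radius` is the radius of the
    circular structure element B (12 pixels in the text).
    """
    # Circular (disk) structuring element B of the given radius
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = x**2 + y**2 <= radius**2

    # J = I - (I o B): the white top-hat removes the slowly varying background
    j = ndimage.white_tophat(img.astype(float), footprint=disk)

    # Stretch the gray-level histogram to the full 8 bit range [0, 255]
    lo, hi = j.min(), j.max()
    stretched = (j - lo) / max(hi - lo, 1e-12) * 255.0
    return stretched.astype(np.uint8)
```

Small, bright particle images survive the top-hat (they do not contain the disk), while the smooth background is subtracted away.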

3.2 Particle images in the plane of focus

Particle images originating in the plane of focus (“focused particles”) are obviously different from the particle images from planes that are not in focus, or “defocused particles”1. Focused particle images are small and bright, i.e., they consist of 3–5 pixel objects and include saturated pixels at the maximum image gray level (in our case, for 8 bit images, I=2^8−1=255). In addition to the saturated pixels, there are several neighboring pixels that belong to the same particle images, but their brightness (gray level intensity) is significantly lower. We found that an additional threshold on the gray level intensity introduces too much noise, so we decided to identify the particle images by using morphological image reconstruction (analogous to region growing or propagation algorithms). We define such objects as follows:
  1. At the first stage (zero iteration), the saturated pixels are selected:

$$I^{\left( 0 \right)}=\left\{ {x \in \left. I \right|I\left( x \right)=255} \right\}$$
  2. The image reconstruction algorithm uses the identified objects as a “marker” image and the enhanced PIV image as a “mask” image to define the real boundaries of the particle image Ifocus by an iterative conditional dilation procedure. The image at iteration n, I(n), is calculated as follows:
    $$I^{\left( n \right)}=\left\{ {I^{\left( {n - 1} \right)} \oplus dB} \right\} \cap I,\;\;\;I^{\left( n \right)} \ne I^{\left( {n - 1} \right)} $$
    and the iterations are repeated until there is no change between successive images. dB denotes a small structure element, such as a circular element of 1 pixel radius or a 3×3 pixel square element. Figure 4 schematically describes the reconstruction, or region growing, principle: the identified binary image propagates toward the original image but does not pass the object boundary.
  3. The focused particle image (i.e., object) has to be small; therefore, we can filter out objects larger than a size threshold, TA. This area-based filtering was performed using a gray level morphological opening, the fastest and most efficient filter in this case. The result of the method described above is presented in Fig. 5: a gray level image with the objects on a flat, zero-level background.
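The three steps above can be sketched as follows (a sketch with SciPy; the function name, the 3×3 square element for dB, and the value of the area threshold are our own choices):

```python
import numpy as np
from scipy import ndimage

def reconstruct_focused(img, saturation=255, max_size=25):
    """Identify focused-particle objects by morphological reconstruction.

    Step 1: saturated pixels seed the marker image. Step 2: iterative
    conditional (gray-scale) dilation grows each marker inside the mask
    image `img`. Step 3: objects larger than `max_size` pixels are
    discarded by area-based filtering.
    """
    img = np.asarray(img, dtype=np.uint8)
    marker = np.where(img >= saturation, img, 0)  # step 1: saturated pixels
    se = np.ones((3, 3), bool)                    # small structure element dB

    # Step 2: conditional dilation until the image no longer changes
    while True:
        grown = np.minimum(ndimage.grey_dilation(marker, footprint=se), img)
        if np.array_equal(grown, marker):
            break
        marker = grown

    # Step 3: keep only small (focused) objects
    objects = marker > 0
    labels, n = ndimage.label(objects)
    if n == 0:
        return np.zeros_like(img)
    sizes = ndimage.sum(objects, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes <= max_size))
    return np.where(keep, img, 0)
```

The pointwise minimum with the mask image is what prevents the marker from propagating past the object boundary, as illustrated in Fig. 4.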

Subtraction of the focused image from the multi-plane image is the next stage of the plane discrimination procedure. The objects defined as focused particles are subtracted from the original image, and the removed pixels are filled with locally smoothed values, using a “top-hat” operator with a circular structuring element of 3 pixel radius. An example of the “defocus image”, containing the two defocused planes, is shown in Fig. 6, together with the original image for comparison. Note that the defocus image on the right does not include the bright, small focused particles, and is a gray scale image without sharp discontinuities.
Fig. 4.

Schematic view of the reconstruction principle used in the region growing algorithm: the dashed line shows the one-dimensional signal, the dash-dotted line the identified saturated pixels, and the solid line the reconstructed object

Fig. 5.

Original image (left) and the image with particles in the focus plane (right)

Fig. 6.

Original (left) and defocus planes image (right)

3.3 Discrimination between two defocus planes

The discrimination between defocused particles in the two well-defined planes is based on segmentation by an object property (size), i.e., separation between small and large objects. The implementation of the segmentation algorithm consists of two main steps: (i) definition and identification of the objects, and (ii) classification (segmentation) of the objects into two clusters (groups) based on the size parameter. Originally, we expected that it would be possible to discriminate between defocused particles using additional parameters, such as intensity, gradient magnitude, etc. However, the experimental components (imaging and laser optics, etc.) and setup reduced the significance of these parameters. Thus, only the size (area) parameter was found to be a good discriminating characteristic of the particle images.

3.3.1 Object definition and identification

The particle image objects are defined and identified using a gradient-based segmentation procedure. The gradient was calculated by using both the morphological gradient method and Canny’s method (Canny 1986). Both methods provided robust and sharp results for the presented PIV images (Fig. 7), and could be used interchangeably. In addition, the gradient surfaces were handled as gray level images. This treatment yields well-defined, high-contrast objects on the background, instead of an image of object edges, by filling the high-gradient disks and enhancing the gradient maps. The enhancement and filling operations were implemented with gray level morphology procedures. Figure 8 shows the original and enhanced gradient images.
Fig. 7.

Defocus planes image (left) and gradient map as a gray level image (right)

Fig. 8.

Gradient image (left) and enhanced gradient map (right)

The enhanced gradient images were used to identify objects in the defocused planes by using the following procedure:
  1. The gradient image was thresholded by using a contrast thresholding procedure in order to select only objects with a strong gradient.
  2. The “broken edges” were connected by morphological closing with line structuring elements.
  3. The connected gradient borders were filled by a morphological smoothing operator.
  4. The image was segmented based on gray level intensity thresholding.

The result of this identification procedure is a binary image with 1’s at all locations identified as objects (“true” Boolean values) and 0’s elsewhere (“false” Boolean values). The resulting binary image (shown in Fig. 9) facilitates the use of fast binary mathematical morphology operations.
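A simplified sketch of this threshold–close–fill pipeline is given below. Here a Sobel gradient magnitude stands in for the morphological and Canny gradients used in the paper, and the threshold fraction is an assumed parameter:

```python
import numpy as np
from scipy import ndimage

def segment_defocused(img, grad_frac=0.25):
    """Gradient-based object identification (steps 1-4 above), returning
    a binary image of the defocused-particle objects."""
    img = np.asarray(img, dtype=float)
    gx = ndimage.sobel(img, axis=1)
    gy = ndimage.sobel(img, axis=0)
    grad = np.hypot(gx, gy)

    # 1. contrast thresholding: keep only the strong gradients
    edges = grad > grad_frac * grad.max()
    # 2. connect "broken edges" by morphological closing
    closed = ndimage.binary_closing(edges, structure=np.ones((3, 3), bool))
    # 3.-4. fill the connected gradient borders and return the binary map
    return ndimage.binary_fill_holes(closed)
```

The hole filling is what converts closed gradient contours into solid objects, so the output is a map of filled particle images rather than of their edges.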
Fig. 9.

Defocus particles image (left) and the identified objects in a binary image (right)

3.3.2 Classification of objects

Using as input the binary image calculated by the gradient-based segmentation method, we segment the identified objects into two clusters based on their size. The segmentation uses a fixed size threshold, which is determined by means of the granulometry technique. This technique is implemented by iterative morphological opening of the binary image with an ascending family of identical structuring elements, counting the number of pixels removed at each iteration:
$$S_B^n=\sum\limits_I {\left\{ {\left( {I \circ nB} \right) - \left( {I \circ \left( {n + 1} \right)B} \right)} \right\}} $$
where \(nB=\underbrace {B \oplus B \oplus \cdots \oplus B}_n\) and \(\sum\limits_I \) denotes summation over the image, so that \(S_B^n \) estimates the area of the objects removed at the nth iteration. Note that the objects in the real image are not perfect geometric shapes, and some object boundary pixels are removed during the iterations for small n. However, an object of the specific size nB is removed completely at that iteration and changes the \(S_B^n \) value significantly. Figure 10 presents the size distribution of the defocused image, estimated by \(S_B^n \) with a circular structure element. The plot of the derivative \(\partial S_B^n /\partial n\) indicates that the image contains two separable populations of objects, with sizes of 1B and 4B. The defocused images were then separated into the two planes by using two area-based filtering operations.
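A granulometry of this kind can be sketched as follows (a sketch; we approximate the n-fold dilation nB of a circular element by a disk of radius n, and the function name is ours):

```python
import numpy as np
from scipy import ndimage

def granulometry(bw, max_n=8):
    """Granulometric size distribution S_B^n of a binary image `bw`.

    areas[n] is the area of the opening with nB; S_B^n is the number of
    pixels removed between successive openings, as in the equation above.
    """
    def disk(r):
        # disk of radius r as a stand-in for the n-fold dilated element nB
        y, x = np.ogrid[-r:r + 1, -r:r + 1]
        return x**2 + y**2 <= r**2

    areas = [int(bw.sum())]  # opening with 0B is the image itself
    for n in range(1, max_n + 1):
        areas.append(int(ndimage.binary_opening(bw, structure=disk(n)).sum()))
    # S_B^n = area(I o nB) - area(I o (n+1)B), n = 0 .. max_n-1
    return -np.diff(areas)
```

A peak in the returned sequence (or in its discrete derivative) marks a population of objects of that characteristic size, which is how the two planes are separated in Fig. 10.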
Fig. 10.

Size distribution (granulometry) of the binary image (left) and its derivative (right)

4 Results and discussion

The three pairs of separated images were analyzed using cross-correlation PIV algorithms. Figure 11 presents an instantaneous three-dimensional velocity map of the turbulent boundary layer in the flume.
Fig. 11.

Instantaneous three-dimensional velocity field

Figure 12 shows streamwise velocity profiles, where each profile corresponds to a different spanwise location and represents the ensemble average of the velocity fields over all streamwise locations at that spanwise coordinate. The scatter among the velocity profiles is a direct result of presenting profiles from different spanwise locations simultaneously. In addition, Fig. 12 includes a box-plot of the streamwise velocity data from PIV measurements taken in each y plane separately, with the other planes physically blocked during acquisition. The agreement between the box-plot and the profiles validates the XPIV technique. It is worth noting that the deviation of the streamwise velocity increases toward the wall, as expected in a turbulent boundary layer. This is seen more clearly in the turbulence intensity results (normalized by the average streamwise velocity component, \( {u}'_{i} /\bar{U}_{1} \)), presented in Fig. 13 for the streamwise, wall-normal, and spanwise velocity components in the three planes. The turbulence intensity in the streamwise direction increases from the upper to the lower plane, consistent with the known strengthening of the velocity fluctuations close to the wall, and the figure is in good agreement with the classical data for turbulent boundary layers (Hinze 1975) and with previous measurements in a flume (Kaftori et al. 1994, 1998). It is noteworthy that the out-of-plane velocity component (i.e., u2), measured by means of the stereoscopic PIV approach, is inherently more erroneous than the in-plane velocity components (Raffel et al. 1998).
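The relative turbulence intensity used here is the RMS of the velocity fluctuations of a component, normalized by the mean streamwise velocity. A minimal sketch (the function name and the sample layout are our own assumptions):

```python
import numpy as np

def turbulence_intensity(u, u_mean_stream):
    """Relative turbulence intensity u'_i / U_1 for one velocity component.

    `u` holds samples of that component over the ensemble (and the
    statistically homogeneous directions) for one plane;
    `u_mean_stream` is the mean streamwise velocity of the plane.
    """
    fluct = u - u.mean()                      # velocity fluctuations u'
    return np.sqrt(np.mean(fluct**2)) / u_mean_stream
```

Applying this per component and per plane gives the three curves of Fig. 13.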
Fig. 12.

Streamwise velocity average profiles measured by using XPIV (-o) and box-plot of the PIV measurements in separate y planes (|-[]-|)

Fig. 13.

Turbulent intensity for streamwise \( {u}'_{1} /\bar{U}_{1} \) (■), wall-normal \( {u}'_{2} /\bar{U}_{1} \) (⋆), and spanwise \( {u}'_{3} /\bar{U}_{1} \) (●) velocity components for three planes

5 Concluding remarks

The quality of the velocity and velocity gradient data is demonstrated, qualitatively and quantitatively, by the velocity profiles and the relative turbulent intensity in the turbulent boundary layer. From the experimental results and analysis, we propose that the XPIV measurement system is a suitable tool for three-dimensional velocity measurements. The ability to perform three-dimensional measurements will provide new insight into turbulence research in general and into coherent structures in particular. The ability to evaluate experimentally the terms of the Navier-Stokes equations in their different forms (RANS, enstrophy, kinetic energy, etc.) on the one hand, and to characterize the flow patterns (vortices, bursting, etc.) on the other, will enable one to determine the role of coherent structures in turbulent flows. Further effort will be focused on refining the image-processing algorithm and improving the robustness of the technique.


1 Note that there are two kinds of defocused particles, illuminated by the two different laser sheets.



The authors wish to thank Dr. Miriam Zacksenhouse for her helpful review and comments on an earlier draft.

Copyright information

© Springer-Verlag 2004