Experiments in Fluids, Volume 44, Issue 3, pp 469–480

A novel method for three-dimensional three-component analysis of flows close to free water surfaces

Research Article

DOI: 10.1007/s00348-007-0453-5

Jehle, M. & Jähne, B., Exp Fluids (2008) 44: 469–480


An initial effort is made to establish a new technique for the measurement of three-dimensional three-component (3D3C) velocity fields close to free water surfaces. A fluid volume is illuminated by light-emitting diodes (LEDs) oriented perpendicularly to the surface. Small spherical particles added to the fluid serve as tracers. A monochromatic camera pointing at the water surface from above records image sequences. The distance of the spheres from the surface is coded by means of an added dye, which absorbs the light of the LEDs according to Beer–Lambert’s law. By applying LEDs with two different wavelengths, it is possible to use particles of variable size. The velocity vectors are obtained by an extension of the method of optical flow; the vertical velocity component is computed from the temporal brightness change. The setup is validated with a laminar falling film, which serves as a reference flow. Moreover, the method is applied to buoyant convective turbulence as an example of a non-stationary, inherently 3D flow.

1 Introduction

In order to investigate air–water gas exchange, a detailed knowledge of the flow field within and below the water-side viscous boundary layer is needed (Bannerjee and MacIntyre 2004; Jähne and Haußecker 1998). Therefore, important quantities, such as shear stresses, velocity profiles, dissipation rates, and path lines, have to be determined.

The measurement technique has to meet the following requirements:
  • The flow of interest is inherently three-dimensional: interesting features are (microscale) wave breaking (Banner and Phillips 1974), (micro-) Langmuir circulations (Melville et al. 1998) and turbulence. All of these are 3D phenomena. Classical measurement setups such as particle image velocimetry (PIV) (Raffel et al. 1998) use laser light sections and yield only a slice of the flow field (Melville et al. 1998; Okuda 1982; Peirson and Banner 2003); thus they do not reveal the three-dimensionality of the flow.

  • Ultimately we are interested in the flow inside and below the water-side viscous boundary layer at a wind-driven water surface undulated by wind waves. This layer has a thickness of O(1 mm), whereas the waves may have amplitudes of O(100 mm). Because of this discrepancy, it is hardly possible to observe the flow field statically from the side, which would be necessary when using laser light sections. Either a sophisticated wave-tracking mechanism is needed, or one has to look from above, perpendicular to the water surface. Some 3D techniques [like tomographic PIV (Elsinga et al. 2006) or 3D PTV (Maas et al. 1993)] cannot be applied in their standard forms, as the flow of interest is near the interface of two media with different refractive indices.

  • Waves and turbulence are not stationary in time. Because in this case the Lagrangian path lines differ from the Eulerian stream lines, particles have to be tracked through an image sequence. Our approach not only records time-resolved data, but also makes use of its spatio-temporal structure.

Section 2 of this paper is concerned with the basic principles of our measurement technique: both the reconstruction of the 3D position and of the three-component velocity of the tracer particles representing the flow will be addressed. Realizing these ideas required a new measurement setup and the implementation of dedicated algorithms, which are described in Sect. 3. First experimental results with three setups are given in Sect. 4.

2 Measurement of particle depth by absorption

The basic concept of our measurement technique is based on retrieving 3D information from 2D data—the intensity (gray value) being the source of the depth (i.e. the coordinate perpendicular to the image plane).

2.1 Monochromatic method

The precursor of our method was originally proposed by Debaene et al. (2005) in the context of biofluidmechanics. Li et al. (2006) have proposed a technique called “multilayer nano-particle image velocimetry”, which makes use of the exponential law in the same way as Debaene et al. (2005) but operates with evanescent-wave illumination of fluorescent colloidal tracers. Because of the fast exponential decay, however, this setup is not suitable for depth ranges that are of interest for the investigation of the flow in and close to viscous shear boundary layers.

We will summarize the basics of the precursor called monochromatic method in the following. The technique developed by the authors of this paper will be described in the next section.

The intention of Debaene et al. (2005) was to estimate the wall shear stress, which influences the properties of the fluid. For the calculation of the wall shear stress, 3D information about the flow field near the wall must be at hand; therefore a measurement technique capable of acquiring and processing 3D data had to be found.

As in other tracer-based flow measurement methods, small, reflective, suspended particles are added to the fluid. The tracer particles have to be spherical, and their size distribution has to be narrow, for reasons explained later. Unlike in particle image velocimetry, the fluid is illuminated volumetrically by light of a specific spectrum. A dye which absorbs light of a certain wavelength is added to the fluid. The particles are recorded by a monochromatic camera, which points perpendicularly to the surface.

The dye limits the penetration depth of the light into the flow according to Beer–Lambert’s law. The intensity Ip of the light approaching the particle is
$$ I_p(z)=I_0 \exp(-z/\tilde{z}_*), \tag{1} $$
where I0 is the light’s intensity before penetrating into the fluid, z is the distance of the particle’s surface from the water surface, and \(\tilde{z}_*\) is the penetration depth (Fig. 1). The light is reflected by the particle, and passes the distance z again, before approaching the surface with the intensity
$$ I(z)=I_p \exp(-z/\tilde{z}_*)=I_0 \exp(-2z/\tilde{z}_*)=I_0 \exp(-z/z_*), \tag{2} $$
where \(z_*=\tilde{z}_*/2\) was introduced for convenience. Within the illuminated layer the particles appear more or less bright, depending on their normal distance to the water surface: particles near the surface appear brighter, i.e. they have a higher gray value than particles farther away from the surface. The correlation between the recorded intensity I(z) of a particle and its distance to the surface, which is expressed in terms of the hypothetical intensity I0 of the particle at the surface and z*, can be assessed experimentally.
Fig. 1

Monochromatic method. A monochromatic beam of light penetrates the dyed fluid with the intensity I0, and hits the particle with intensity Ip after covering the distance z. After reflecting, it passes through the dye again, and hits the camera sensor with the intensity I. The intensity decrease can be calculated using Beer–Lambert’s law

Fig. 2

Bichromatic method. Two monochromatic beams of light penetrate the dyed fluid. Because their penetration depths differ, their intensities progress differently

The particle’s intensity I(z) is mapped to a gray value g(I(z)) by the imaging process. For simplicity we assume that the response curve of the camera is linear, so that we may write
$$ g(z)=g_0 \exp(-z/z_*), \tag{3} $$
which can be solved for the depth z as follows:
$$ z=z_*(\ln g_0 - \ln g). \tag{4} $$
In order to determine the depth z via Eq. (4) one has to know:
  • The gray value of the particle at the surface, g0 = g(z = 0). Therefore we require that the particles are exactly spherical and all of the same size; the latter requirement can be relaxed to a narrow size distribution.

  • The penetration depth z* of light of a specific wavelength into a certain medium, which can be retrieved by means of calibration.
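For illustration, the depth reconstruction of Eq. (4) amounts to a one-line computation once g0 and z* are known. The following Python sketch uses hypothetical values for both:

```python
import math

def depth_mono(g, g0, z_star):
    """Depth of a particle from its recorded gray value via Eq. (4):
    z = z_* (ln g0 - ln g), where g0 is the gray value at the surface
    and z_* the effective penetration depth."""
    if not 0 < g <= g0:
        raise ValueError("gray value must lie in (0, g0]")
    return z_star * (math.log(g0) - math.log(g))

# A particle imaged at half its surface gray value lies at z = z_* ln 2:
z = depth_mono(g=100.0, g0=200.0, z_star=0.4)   # same units as z_*, e.g. mm
```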

2.2 Bichromatic method

The greatest restriction of the monochromatic method is the tightness of the size distribution of the tracer particles. In order to use particles variable in size, we have to illuminate with light of two distinct wavelengths (i.e. two different penetration depths: z*1 and z*2). One can write down Beer–Lambert’s law for each wavelength:
$$ g_1(z)=g_{01} \exp(-z/z_{*1}) \quad \hbox{and} \quad g_2(z)=g_{02} \exp(-z/z_{*2}). \tag{5} $$
We solve this equation system for the depth of the particle:
$$ z(g_1,g_2)=\frac{z_{*1} z_{*2}}{z_{*1} - z_{*2}} \left(\ln\left( \frac{g_1}{g_2} \right) + \ln \left(\frac{g_{02}}{g_{01}}\right) \right). \tag{6} $$
Note that here the depth of the particle depends merely on the ratio of the surface intensities g01/g02, which is the same for all particles and can be calibrated.

Besides its applicability to systems with heterodisperse particles, the bichromatic method has a further benefit compared to the monochromatic method: the particles are allowed to be imaged as streaks. The particle gray values g1 and g2 may be multiplied by a common attenuation factor (depending on the exposure time), which cancels out when calculating the depth according to Eq. (6).
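As a minimal sketch (Python; the penetration depths are the falling-film values z*1 = 0.4 mm and z*2 = 0.25 mm quoted in Sect. 4, the gray values are hypothetical), Eq. (6) and the cancellation of a common attenuation factor look as follows:

```python
import math

def depth_bichromatic(g1, g2, z1, z2, ratio0=1.0):
    """Eq. (6): z = z1*z2/(z1 - z2) * (ln(g1/g2) + ln(g02/g01)),
    where ratio0 = g02/g01 is the calibrated surface gray-value ratio."""
    return z1 * z2 / (z1 - z2) * (math.log(g1 / g2) + math.log(ratio0))

z1, z2 = 0.4, 0.25           # penetration depths in mm
g1, g2 = 94.5, 60.2          # gray values of one particle at the two wavelengths

z = depth_bichromatic(g1, g2, z1, z2)
# The same particle imaged as a streak (both gray values attenuated by a
# common factor) yields the identical depth:
z_streak = depth_bichromatic(0.5 * g1, 0.5 * g2, z1, z2)
```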

Figure 3 illustrates the different coverage of the emission spectra of the light sources by the absorption spectrum of the dye.
Fig. 3

Measured Luxeon III Emitter spectra (royal blue and blue) together with the absorption spectrum of tartrazine acid yellow (Sigma–Aldrich, CAS No 1934-21-0). The relative absorbance of tartrazine corresponds to a penetration depth of roughly 5 mm (8 mm) at a wavelength of 455 nm (470 nm) assuming a dye-concentration of ctartrazine = 20 mg/l as in the convection-tank experiments in Sect. 3

2.3 Velocity estimation

In the presented technique, correlation of particle patterns (as in PIV) is not feasible for velocity estimation, because the image sequences typically consist of many layers of particles, each moving with its own speed. Besides that, particles may move in a direction orthogonal to the image plane, commonly referred to as “out-of-plane motion”. We make use of an extended optical-flow based approach in order to obtain the motion of individual particles.

In order to estimate a particle’s velocity, two cases are considered here. First we assume that the suspended particles move parallel to the surface, so that z does not change. The gray value then remains constant for all times, and its total temporal change is zero. Applying the chain rule yields the total temporal derivative of g:
$$ \frac{\hbox{d}g}{\hbox{d}t}=\frac{\partial g}{\partial x}\frac{\partial x}{\partial t} + \frac{\partial g}{\partial y} \frac{\partial y}{\partial t} + \frac{\partial g}{\partial t} = \frac{\partial g}{\partial x} u + \frac{\partial g}{\partial y} v + \frac{\partial g}{\partial t} = 0. \tag{7} $$
This equation is known as brightness change constraint equation, the basic equation of differential optical flow methods. In this case the optical flow represents the components of the particle’s velocity parallel to the surface: (u,v).

Optical flow-based techniques have established themselves in the computer vision community for more than twenty years, but were used in the context of fluid flow analysis only recently. For a concise overview we refer to Tropea et al. (2007) and the references therein.

Secondly, if the particles do not move parallel to the surface, i.e. if z is not constant, the gray values in the two-dimensional frame (looking down) will change with time according to:
$$ \frac{\hbox{d}g}{\hbox{d}t}=-g_0 \frac{1}{z_*} \frac{\partial z}{\partial t} \exp(-z/z_*)=-\frac{1}{z_*} \frac{\partial z}{\partial t}\, g=-\frac{w}{z_*}\, g, \tag{8} $$
where we substituted the expression for Beer–Lambert’s law (Eq. 3). We find that, identifying w/z* with a relaxation constant κ, the brightness change in this special case can be modeled by an exponential decay as given in Haussecker and Fleet (2001):
$$ g(t)=g_0\exp(-\kappa t) \quad \hbox{and} \quad \frac{\hbox{d}g}{\hbox{d}t}=-\kappa g. \tag{9} $$
Note that we expressed the temporal change of the z-coordinate by the out-of-plane velocity-component w. With Eq. (7), Eq. (8) becomes:
$$ u \frac{\partial g}{\partial x} + v \frac{\partial g}{\partial y} + w \frac{g}{z_*} + \frac{\partial g}{\partial t} = 0. \tag{10} $$
This can be written as a product of the data vector d, which contains the gray values and their spatial and temporal partial derivatives, and the parameter vector p, which contains the velocities we are interested in:
$$ {\mathbf{d}} \cdot {\mathbf{p}}^T = \left(\frac{\partial g}{\partial x}, \frac{\partial g}{\partial y}, \frac{g}{z_*}, \frac{\partial g}{\partial t} \right) \cdot \begin{pmatrix}u\\v\\w\\1\end{pmatrix} = 0. \tag{11} $$
We have obtained one equation for three unknowns: the three components of the particle’s velocity (u,v,w). In order to constrain the problem sufficiently, the corresponding equations for all points in a sufficiently large spatio-temporal neighbourhood are combined, so that one ends up with a generally over-determined equation system, which can be solved in a total-least-squares sense (Haußecker and Jähne 1997). Besides the parameter vector containing the sought velocities, our technique yields confidence measures characterising the structure of the local spatio-temporal neighbourhood.
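The structure of this estimation step can be sketched as follows (Python with NumPy; the gradient data are synthetic, and for brevity an ordinary rather than a total least-squares solver is used):

```python
import numpy as np

rng = np.random.default_rng(0)
z_star = 0.4                               # penetration depth (example value, mm)
u_true, v_true, w_true = 1.2, -0.5, 0.3    # velocities to be recovered

# Synthetic spatio-temporal neighbourhood of 50 points: spatial gradients and
# gray values, with the temporal derivative constructed to satisfy Eq. (10).
gx = rng.normal(size=50)
gy = rng.normal(size=50)
g = rng.uniform(50.0, 250.0, size=50)
gt = -(u_true * gx + v_true * gy + w_true * g / z_star)

# Stack one copy of Eq. (11) per point and solve the over-determined system
# [gx, gy, g/z_star] . (u, v, w) = -gt in a least-squares sense.
A = np.column_stack([gx, gy, g / z_star])
p, *_ = np.linalg.lstsq(A, -gt, rcond=None)
u, v, w = p
```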

2.4 Accuracy assessment

For simplicity we treat the monochromatic method, which can be considered a special case of the bichromatic method with z*1 ≫ z*2.

Error propagation of Eq. (4) leads to:
$$ \frac{\sigma_z}{z_*} = \frac{\sigma_g}{g}, \tag{12} $$
where σz is the error in depth and σg is the uncertainty (noise) of the recorded maximum gray value of the particle. We assume the relative uncertainty of the gray value σg/g to be about 5%. From Eq. (12) it can be inferred that the error in depth σz is 0.05 z*. In Tropea et al. (2007) it is pointed out that the inverse signal-to-noise ratio σg/g depends on the sensor type and on the irradiation of the sensor chip; generally σg/g varies between 1 and 10%. This uncertainty is composed of the various kinds of sensor noise (such as photon shot noise, electronic noise and dark-current noise) and of the problems caused by sampling small particles with pixels of limited size. For further discussion of the origin and constituents of the error σg, see Jehle (2006).
The range of depth in which reliable measurements can be achieved scales linearly with the penetration depth z*. According to Eq. (3), the imaged gray value of a particle at a depth of 3z* has dropped to 5% of its magnitude g0 at the surface:
$$ \frac{g(z=3z_*)}{g_0}=0.05\approx\frac{\sigma_g}{g}. \tag{13} $$
Beyond a depth of 3z* the fraction of noise in the signal becomes too high for a reliable measurement.
Another limiting factor is the focal depth δz of the optical system, which according to Tropea et al. (2007) is given by:
$$ \delta z=4 \left(1+\frac{1}{M_0} \right)^2 f_{\#}^2 \lambda, \tag{14} $$
where M0 is the image magnification, f# is the aperture number and λ is the wavelength. Choosing M0 = 1/5, f# = 4 and λ = 500 nm as a realistic example, the focal depth δz is approximately 1 mm. In our examination we assume that the irradiance is high enough to choose a sufficiently high aperture number, so that δz gets sufficiently large.
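Plugging the quoted example values into Eq. (14) reproduces this estimate:

```python
# Focal depth according to Eq. (14) with the example values from the text
M0 = 1 / 5        # image magnification
f_num = 4         # aperture number
lam = 500e-9      # wavelength (m)

delta_z = 4 * (1 + 1 / M0) ** 2 * f_num ** 2 * lam   # ~1.15e-3 m, i.e. ~1 mm
```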

The accuracy of our measurements in the considered depth range can be expressed by the number of laser light sections that would be needed to sample the volume using scanning PIV (Brücker 1995): assuming an inverse signal-to-noise ratio of 5%, according to Eq. (12) the error in depth amounts to 5% of the penetration depth z*. Thus about 20 different gray values are distinguishable in the range of z* (about 60 in the range of 3z*). These gray values can be assigned to corresponding laser light sections.
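The resolution budget above can be restated as a quick computation (a sketch of Eqs. 12 and 13):

```python
import math

sigma_rel = 0.05                    # assumed inverse signal-to-noise ratio
# Eq. (12): sigma_z = sigma_rel * z_star, so about 1/sigma_rel depth levels
# are distinguishable within one penetration depth z_star:
layers_per_zstar = 1 / sigma_rel    # about 20
layers_total = 3 / sigma_rel        # about 60 within the usable range of 3 z_star
# Eq. (13): at z = 3 z_star the signal has decayed to the noise floor:
residual = math.exp(-3)             # about 0.05 of the surface gray value g0
```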

3 Measurement setup and data analysis

This section gives a brief description of the hardware of the measurement technique and of the data analysis. A detailed treatment of each of the hardware components and of the image processing can be found in Jehle (2006). The different experimental setups will be addressed in Sect. 4.

3.1 Measurement setup

3.1.1 Particles as tracer

Like PIV or PTV, our method is based on determining the position and velocity of small particles added to the fluid. These have to (1) follow the fluid ideally (particle size, specific weight), (2) be as brightly visible as possible (reflectance, particle size) and (3) scatter light in such a way that Beer–Lambert’s law holds. A basic requirement for the latter is that we operate in the geometric scattering regime, which constrains the particle size in relation to the wavelength of the light; moreover, the particles have to be spherical.

We use hollow glass spheres (respectively silver-coated ceramic spheres) of diameter a = 30 (100) μm and specific weight 0.6 (1.1) g/cm3. The fall/rise velocity of particles of this size and density is negligible compared to the fluid’s velocities, so that they follow the fluid almost ideally, and their size and material properties lead to sufficient visibility. Because their normalized diameters
$$ q=\pi a /\lambda \tag{15} $$
are much greater than unity (assuming the wavelength λ of the incoming light to be 500 nm), and non-coherent light is used, scattering follows the laws of geometrical optics (van de Hulst 1981). Its propagation can be calculated using Snell’s law and Fresnel’s formulas in very good approximation.
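For the particle sizes used here, the normalized diameter is indeed far above unity; a quick check, assuming λ = 500 nm:

```python
import math

lam = 500e-9                          # wavelength of the incoming light (m)
q_glass = math.pi * 30e-6 / lam       # hollow glass spheres, a = 30 um
q_ceramic = math.pi * 100e-6 / lam    # ceramic spheres, a = 100 um
# q is roughly 188 and 628: both >> 1, so geometrical optics applies
```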

The a = 30 μm hollow glass spheres with mean density ρ = 0.6 g/cm3 are used in the falling film experiment (Sect. 4.2); the a = 100 μm silver-coated ceramic spheres with mean density ρ = 1.1 g/cm3 are used in the convection tank experiment (Sect. 4.3). Before the experiments we selected the spheres having roughly the same density as the fluid by means of sedimentation: particles with a density of 1 g/cm3 remain suspended in water, while heavier particles sink to the bottom and lighter ones rise to the surface.

3.1.2 Light emitting diodes (LEDs) as light sources

In contrast to conventional PIV, in our experiments we cannot illuminate using laser light sections, because our measurement method is based on observing particles in a volume, not just in a slice. LEDs have established themselves as reliable, efficient, bright and inexpensive light sources in recent years. Because the overall irradiation of the lighting setup is critical for high-contrast imaging of the tracer particles, we used standard high-power LEDs (Luxeon III Emitter). Each LED supplies an energy flux of 450 mW (royal blue: 455 nm), 480 mW (blue: 470 nm) or 165 mW (cyan: 520 nm). For our experiments, we arranged 20 royal blue and 20 blue LEDs (in the falling film case) and 5 royal blue and 5 blue LEDs (in the convection tank case) in compact illumination units with sufficient cooling. In both cases the LEDs are grouped in an annular shape, with the LEDs of the two wavelengths alternating. We chose this symmetrical setup because the light paths through the liquid must be the same for each of the two wavelengths.

Figure 3 shows the absorption spectrum of the tartrazine dye (yellow) together with the measured emission spectra of the Luxeon III Emitter LEDs. The spectrum of the royal blue LEDs has a greater overlap with the dye spectrum than that of the blue LEDs, which means that light from the royal blue LEDs has a shorter penetration depth than light from the blue LEDs. Exploiting this property, it is possible to reconstruct the depth of particles of variable size according to Sect. 2. Tartrazine dye (commonly used in the food industry) exhibits high solubility in water, no toxicity and low cost.

3.1.3 Imaging setup

Like any other conventional digital sequence imaging system, our setup consists of optical components, a digital camera connected to the computer hardware, and electronics responsible for synchronizing the individual processes. Figure 4 shows the arrangement of the various components used for the falling-film measurements: illumination, telecentric lens, aperture, camera optics and CCD high-speed camera. Because the telecentric mapping is a parallel projection, the imaged lateral dimensions of the measured objects do not depend on their distance from the camera sensor. For this reason there is no variation of the imaged particle sizes and displacement magnitudes with the z-coordinate; thus a simple 2D calibration is sufficient.
Fig. 4

Photograph of the imaging setup used for the falling film experiments

3.2 Data analysis

The major intention of preprocessing is to prepare the image sequences for the feature extraction step, i.e. to condition the images in such a way that the later analysis routines do not depend on where in the image they are applied, that only global thresholds have to be set, and that the number of thresholds is reduced to a minimum. Therefore the acquired images undergo a radiometric calibration, which compensates for potential nonlinearity of the CCD-chip response and for the inhomogeneity of the sensor array. Simultaneously the images are corrected for inhomogeneous illumination. To remove a large part of the background, which interferes with the subsequent image sequence analysis, a minimum image is subtracted.

Segmentation in our context means the separation of the objects of interest, the tracer particles, from the background and from each other. Segmentation is a necessary step towards the extraction of features like the centre of gravity or the brightest or mean gray value of a particle. For this we apply the region-growing algorithm described in detail in Hering (1996). It is based on searching for local maxima in the image and then subsequently adding adjacent pixels, using prior information on the shape of a typical particle (area, eccentricity) and on the image noise. For the brightness of a particle we use its brightest gray value. We experimented with using the mean gray value of a particle and with applying a Gaussian fit to the particles’ gray value distribution, but found no improvement in accuracy compared to the maximum gray value.

All three components of the particles’ velocity vectors are determined using the optical flow-based method described in Sect. 2.3.

In order to determine the third spatial dimension, the depth z of a particle, according to Eq. (6) the maximum gray values of one and the same particle, recorded at the two wavelengths, are needed. Because the LEDs are triggered alternately, the particle undergoes a displacement between the two recordings. Thus correspondences of the same particle have to be established between one image and the other. To minimize the search radius, the particle positions of the second image are transformed towards the particle positions of the first image, using the previously determined velocity vector field. A similar technique is described in Cowen and Monismith (1997), where PIV information is used to improve particle tracking.

The information about position (x,y,z) and velocity (u,v,w) of the particles results in an irregularly sampled three-dimensional three-component (3D3C) Eulerian velocity vector field. One can use interpolation schemes [for example the adaptive Gaussian windowing method (Agüí and Jiménez 1987)] to obtain a dense motion field from which derived quantities, like shear rates and vorticity, can be calculated. An alternative is the Lagrangian representation, which yields the path lines of the flow.
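The interpolation step can be sketched as follows (Python with NumPy). This is a simplified, fixed-width Gaussian windowing; the adaptive method of Agüí and Jiménez additionally adjusts the window width to the local seeding density:

```python
import numpy as np

def gaussian_window_interp(points, vectors, nodes, width):
    """Interpolate scattered 3D velocity vectors onto grid nodes as a
    Gaussian-weighted mean (fixed window width)."""
    out = np.empty((len(nodes), vectors.shape[1]))
    for i, node in enumerate(nodes):
        w = np.exp(-np.sum((points - node) ** 2, axis=1) / width ** 2)
        out[i] = w @ vectors / w.sum()
    return out

# Usage: a uniform flow is reproduced at every node.
pts = np.random.default_rng(1).uniform(0, 10, size=(200, 3))
vecs = np.tile([1.0, 0.0, -0.5], (200, 1))
nodes = np.array([[5.0, 5.0, 5.0], [2.0, 8.0, 1.0]])
field = gaussian_window_interp(pts, vecs, nodes, width=2.0)
```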

An overview of the data analysis is given by Fig. 5.
Fig. 5

Flowchart of the algorithm to calculate the Eulerian and Lagrangian velocity fields using the bichromatic method. In the preprocessing step the raw data is separated into two subsequences. To each of these subsequences the various procedures of preprocessing and segmentation are applied. By performing correspondence-analysis two gray values are assigned to one and the same physical particle; thus its depth can be calculated. Using one subsequence, the optical flow can be estimated. We have arrived at an unequally spaced 3D3C velocity vector field, which can be interpolated to a regularly spaced one. On the other hand, the particles can be connected to trajectories

4 Experiments

The new technique was verified with three experimental setups. In Sect. 4.1 the applicability of Beer–Lambert’s law to depth estimation is demonstrated. A laminar falling film (Sect. 4.2) provides a well-known velocity profile, so that the accuracy of the measurements can be tested. Convective turbulence (Sect. 4.3) is a good test case for a more complex flow.

4.1 Applicability of Beer–Lambert’s law

4.1.1 Idea

The following experiment demonstrates the validity of Beer–Lambert’s law, upon which the measurement technique is based. Using Eq. (6), the depth z of a particle can be retrieved, given its apparent intensities g1 and g2 recorded at two distinct wavelengths. By introducing the abbreviations η = ln(g1/g2), zred = z*1z*2/(z*1 − z*2) and V = ln(g02/g01), the former can be rewritten as
$$ z(\eta)=z_{\rm red}\, \eta + z_{\rm red} V, \tag{16} $$
which constitutes a linear dependency between z and η. We can check this by acquiring ηi at various depths zi and subsequently fitting a straight line to the data: the slope is zred and the intercept is zredV.

4.1.2 Experimental setup and data analysis

The particles were fixed in a 2 mm thick layer of agarose-gel. The gel with the particles was immersed into dyed fluid using a linear positioner (see Fig. 6, left). By moving the table the relative change of the width of the covering layer can be controlled. Using thin rods, water-displacement can be neglected, so that the movement of the table equals the relative change in depth.
Fig. 6

Calibration via linear positioner. Left Sketch of the experimental setup. Right Reprojection of the data. The estimated depth using the fitted values for zred and V is plotted against the true depth

This procedure is carried out automatically by centrally controlling the motion of the table and the image acquisition. Images were taken at 80 measurement points, separated by 125 μm, covering a total distance of 10 mm. Preprocessing and segmentation were carried out, and the correspondences between the particles acquired at 455 nm and those acquired at 470 nm were established. Using the maximum gray values of the N distinct particles at the two wavelengths, the mean of η, \(\langle\eta\rangle = \frac{1}{N}\sum_{i=1}^{N}\eta_i\), is calculated as a function of z. By applying an ordinary linear least-squares fit to the data, both zred and V can be extracted.
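The fit is an ordinary least-squares line through the (η, z) pairs; a minimal sketch (Python, with synthetic data generated from assumed values of zred and V):

```python
def fit_line(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
            sum((a - mx) ** 2 for a in x)
    return slope, my - slope * mx

# Synthetic calibration data for assumed z_red = 2/3 mm and V = 0.1:
z_red, V = 2 / 3, 0.1
z_vals = [0.125 * i for i in range(80)]          # 80 depths, 125 um apart (mm)
eta_vals = [z / z_red - V for z in z_vals]       # inverted Eq. (16)

slope, intercept = fit_line(eta_vals, z_vals)    # slope = z_red
V_fit = intercept / slope                        # intercept = z_red * V
```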

4.1.3 Results

Figure 6, right, shows the reprojection of the data zreproj(z) using the fit parameters according to Eq. (16). We see a large scatter in the individual data points, which are marked blue, but the mean values fit the line zreproj = z very well. Note that the reprojected data become less exact with increasing depth.

4.2 Measurements in a falling film

4.2.1 Idea

One of the most basic laminar flows achievable in the laboratory is the flow in a falling film on an inclined plane. Flow parameters like the thickness of the film can easily be varied by changing throughput, inclination angle and viscosity. Its simplicity and versatility qualify this flow as a physical reference against which our technique can be tested.

The stationary velocity profile (i.e. the dependency of the bed-parallel velocity u on the distance z from the film surface), which evolves in a laminar falling film, can be written as
$$ u(z)=\frac{g \sin \alpha}{2 \nu} (b^2-z^2), \tag{17} $$
where α is the angle to the horizontal, g is the acceleration of gravity and ν is the fluid’s viscosity. The thickness of the film b depends on the previous parameters and on the throughput Q and width of the film d as follows:
$$ b=\sqrt [3]{\frac{3 Q \nu}{d g \sin \alpha}}. \tag{18} $$
For an illustration of the parabolic velocity profile see Fig. 7, top.
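Equations (17) and (18) translate directly into code (Python). The throughput Q, inclination and film width below are hypothetical example values, and the fluid is assumed to be water:

```python
import math

def film_thickness(Q, nu, d, alpha, g=9.81):
    """Film thickness after Eq. (18): b = (3 Q nu / (d g sin(alpha)))^(1/3)."""
    return (3 * Q * nu / (d * g * math.sin(alpha))) ** (1 / 3)

def u_profile(z, b, nu, alpha, g=9.81):
    """Bed-parallel velocity at distance z from the film surface, Eq. (17)."""
    return g * math.sin(alpha) / (2 * nu) * (b ** 2 - z ** 2)

# Hypothetical example: water (nu = 1e-6 m^2/s), 5 deg inclination,
# d = 0.2 m film width, throughput Q = 2e-5 m^3/s.
nu, alpha, d, Q = 1e-6, math.radians(5), 0.2, 2e-5
b = film_thickness(Q, nu, d, alpha)          # sub-millimetre film thickness
u_surface = u_profile(0.0, b, nu, alpha)     # maximum velocity, at the surface
```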
Fig. 7

Falling film measurements. Top Illustration of the parabolic velocity profile and its depending parameters. Bottom Example of a measured velocity profile. The bed parallel velocity is plotted against the distance from the water surface. Here the penetration depths are z*1 = 0.4 mm and z*2 = 0.25 mm. Because the flow is stationary, the bed parallel velocities are ensemble averaged in z-windows of width 50 μm. Reliable depth estimates can be achieved up to ≈0.8 mm, which corresponds to about three times z*2

4.2.2 Experimental setup and data analysis

To test our measurement technique in a falling film, we constructed a tank of 2,300 mm length and 200 mm width, whose slope is continuously adjustable from 0 to 10°. The flow is driven solely by gravity. Image acquisition was done using a high-speed camera at a resolution of 512 × 512 pixels and a frame rate of 125 or 250 Hz. The light source used in this experiment contained 2 × 20 LEDs (royal blue and blue), consuming an overall power of about 40 W. We used hollow glass spheres as tracer and tartrazine dye as absorber.

Data analysis was performed as presented in Sect. 3.2. The maximum achievable displacements are limited by the temporal sampling theorem (or Nyquist criterion), which states that the motion between two images, i.e. the optical flow, should be less than half the smallest local spatial scale. This means that an upper limit for the measurable velocity of a tracer particle is given by its imaged spatial dimensions, because imaged particles contain no texture. An imaged pixel size of 28 μm/pixel was chosen. Assuming a slight pre-smoothing of the particle, this limit is about 4 pixels/frame, which corresponds to a maximum measurable velocity of about 14 mm/s when recording with a frame rate of 250 Hz. One can improve this limit by mounting the imaging setup on a linear positioner, which moves with about half of the expected maximum flow velocity relative to the fluid.

4.2.3 Results

Figure 7, bottom, shows an example of an obtained velocity profile. Note that, because the particles move from right to left, the maximum bed-parallel velocity is negative. Due to the relative motion of the positioner, the deeper particles move with positive speed. Because the flow is stationary, we are allowed to average the bed-parallel velocities in z-windows of width 50 μm. The results are fitted very well by the theoretically predicted parabola.

4.3 Measurements in a convection tank

4.3.1 Idea

The measurements in the falling film showed that our technique is capable of reproducing the exact flow fields for stationary laminar flows. In contrast to the laminar falling film, convective turbulence represents a flow which is intrinsically 3D and not stationary in time.

Buoyancy is realized by heating the fluid in a tank from below. According to Rayleigh’s theory (1916), buoyant convective flows become turbulent when the Rayleigh number (depending on the vertical dimension d of the fluid layer, on the temperature difference T1 − T2 between bottom and top surface, and on material properties of the fluid: kinematic viscosity ν, thermal diffusivity DH and thermal expansion coefficient α)
$$ \hbox{Ra}=\frac{\hbox{buoyancy\;force}}{\hbox{viscous\;force}}= \frac{g \alpha d^3 (T_1-T_2)}{D_H \nu}, \tag{19} $$
exceeds a certain critical value, which is about 54,000 in the case of a free upper surface and a rigid bottom wall. In our experiment, convection is additionally driven by evaporation causing latent heat transfer. Figure 8 shows a schematic sketch of the conditions in our experiments.
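As a rough plausibility check, Eq. (19) can be evaluated with textbook properties of pure water near room temperature (the experiment actually uses a water–glycerol mixture, so the figures are only indicative):

```python
# Rayleigh number after Eq. (19) for a 40 mm water layer and a 10 K
# top-to-bottom temperature difference (assumed pure-water properties)
g = 9.81          # m/s^2
alpha = 2.1e-4    # 1/K, thermal expansion coefficient
nu = 1.0e-6       # m^2/s, kinematic viscosity
D_H = 1.4e-7      # m^2/s, thermal diffusivity
d, dT = 0.040, 10.0

Ra = g * alpha * d ** 3 * dT / (D_H * nu)
# Ra is of order 1e7, far above the critical value of ~54,000 quoted above
```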
Fig. 8

Sketch of buoyant evaporative convection: the water in the insulated tank is heated from below. Evaporation at the upper surface causes a transfer of latent heat. The vapour is transported away by “dry air”

4.3.2 Experimental setup and data analysis

For the convection measurements a tank of dimensions 200 × 200 × 40 mm³ was constructed. Because the temperature differences are of the order of 10°C, our flows are in the highly turbulent range, even though we use a water–glycerol mixture (ratio about 1:1) as the fluid. Compared to a falling film, convection is a slow process, so we were able to apply a camera with very good sensor characteristics (noise, linearity), running at a frame rate of 30 Hz and a resolution of 640 × 480 pixels. Due to the much lower frame rate, the exposure times could be extended, so that 2 × 5 LEDs (royal blue and cyan) were sufficient. In our experiments the imaged pixel size was chosen as 55 μm/pixel, and the silver-coated ceramic spheres were applied as tracer. The concentration of the tartrazine dye (25 mg/l) was adjusted so that the observable volume measured 35 × 26 × 15 mm³. The water–glycerol mixture was heated constantly with a power of 20.8 W; the arising vapour was transported away by dry air, which flowed through the tank at a constant 5 l/min. Because the measurement depth is about 15 mm, we expect a temperature variation of only about 3°C, corresponding to a variation of the index of refraction of less than 0.5‰ (for water at a wavelength of 650 nm), which can be considered negligible.
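The dye-concentration adjustment above fixes the depth coding via Beer–Lambert's law. A minimal sketch of the inversion from imaged brightness to depth is given below; `eps_c`, the effective absorption coefficient (which would absorb the dye concentration and any round-trip path-length factor), and the intensity values are made-up illustrative numbers, not calibration data from the experiment:

```python
import math

def depth_from_intensity(I, I0, eps_c):
    """Invert Beer-Lambert's law I = I0 * exp(-eps_c * z) for depth z.

    I  : imaged particle brightness
    I0 : brightness of a particle directly at the surface
    eps_c : effective absorption coefficient [1/mm] (assumed known
            from a calibration; here a made-up value is used)
    Returns the depth z in mm.
    """
    return -math.log(I / I0) / eps_c

eps_c = 0.15                                       # assumed [1/mm]
z = depth_from_intensity(I=80.0, I0=200.0, eps_c=eps_c)
```

In the two-wavelength variant, the ratio of the two colour channels replaces the absolute brightness, which removes the dependence on the (size-dependent) surface brightness I0.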

Again, data analysis was performed as presented in Sect. 3.2.

4.3.3 Results

Figures 9 and 10 display some of the results obtained by applying our method to convective turbulence. The irregularly sampled velocity vectors were interpolated onto a regular grid using the adaptive Gaussian windowing method. 2D slices of the 3D field are shown; the component of the velocity field perpendicular to the plane is colour coded. Furthermore, the vertical profiles of all three components of the mean and rms velocities are presented. Figure 9 shows the situation four minutes after the heating was turned on. The seeding particles in the deeper layers move with a maximum speed of about 1 pixel/frame (≈1.5 mm/s). There is almost no motion in the upper layers. This behaviour is reflected in the vertical profiles. In Fig. 10, after 63 minutes of heating, the convective motion has become faster (up to 2 pixels/frame, i.e. 3 mm/s) and more chaotic. Looking at the interpolated vector fields, upward and downward moving cells can be identified. The fluctuations in horizontal velocity, urms and vrms, start near zero at the surface, reach a maximum at a depth of about 5–7 mm, and are then damped. The vertical motions tend to neutralize each other, i.e. wmean equals zero for all measurable depths, but wrms increases monotonically with depth. Looking at the vertical profiles of the rms velocities, we indeed find our results qualitatively similar to those of Bukhari and Siddiqui (2006).
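The gridding step can be sketched as below. This is a simplified, fixed-window version of Gaussian-windowed interpolation of scattered velocity samples; the adaptive variant used in the paper additionally adjusts the window width to the local sampling density, which is omitted here for brevity:

```python
import numpy as np

def gaussian_window_interp(points, values, grid, sigma):
    """Interpolate scattered samples onto target points by
    Gaussian-weighted averaging (fixed window width).

    points : (N, 2) sample positions
    values : (N,)  one velocity component at those positions
    grid   : (M, 2) target positions
    Returns (M,) interpolated values.
    """
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        d2 = np.sum((points - g) ** 2, axis=1)   # squared distances
        w = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian weights
        out[i] = np.sum(w * values) / np.sum(w)  # weighted average
    return out

# Example with made-up data: scattered samples of a linear field.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 10.0, (500, 2))
vals = 0.3 * pts[:, 0]                           # field varying in x
gx, gy = np.meshgrid(np.linspace(1, 9, 5), np.linspace(1, 9, 5))
grid = np.column_stack([gx.ravel(), gy.ravel()])
w_interp = gaussian_window_interp(pts, vals, grid, sigma=0.8)
```

Because the weights are normalized, a constant field is reproduced exactly; the adaptive window width trades this smoothing bias against noise in sparsely seeded regions.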
Fig. 9

Flow after four minutes of heating. The water and air temperatures are 22.98 and 25.15°C, respectively; the relative air humidity is 32%. Top Interpolated velocity vector fields starting from the deepest layer (distance from the surface z = 9.5 mm) moving upwards. The colour code ranges from −4.5 mm/s (blue) to 4.5 mm/s (red). Bottom Vertical profiles of the mean and rms velocities (red: u, blue: v, black: w). Here 1 pixel/frame ≈ 1.5 mm/s and 0.01 cm/frame ≈ 1.5 mm/s

Fig. 10

Flow after 63 min of heating. The water and air temperatures are 29.00 and 25.89°C, respectively; the relative air humidity is 81%. For further description, see Fig. 9

5 Conclusion

A novel image-based technique for 3D3C fluid flow measurement is presented, which is suited for the investigation of flows close to free surfaces. By coding the depth of tracer particles using a light-absorbing dye, it is possible to reconstruct their 3D position using a single camera pointing at the water surface from above. The three components of the particles’ velocities can be computed using an extended optical-flow based procedure. The velocity component perpendicular to the image plane is inferred from temporal brightness changes of the imaged particles.
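The principle summarized above can be condensed into a single constraint equation. The following is a hedged reconstruction from the description in this paper, not a verbatim quotation (g denotes the grey value, κ the effective absorption coefficient, and z the depth below the surface):

```latex
% Sketch, assuming Beer-Lambert depth coding g(z) = g_0 e^{-\kappa z}.
% Brightness changes along a particle path are then caused solely by
% vertical motion, extending the classical optical-flow brightness
% constancy constraint by a source term:
\frac{\mathrm{d}g}{\mathrm{d}t}
  = \frac{\partial g}{\partial t}
  + u\,\frac{\partial g}{\partial x}
  + v\,\frac{\partial g}{\partial y}
  = -\kappa\, g\, w ,
\qquad w = \frac{\mathrm{d}z}{\mathrm{d}t}
```

Once u and v are known from the horizontal optical flow, the vertical component w follows from the measured temporal brightness change of the particle.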

Using a linear positioner, the applicability of Beer–Lambert’s law to the present setup could be tested. We showed that the depth positions of tracer particles of variable size could be reconstructed with the expected accuracy.

The velocity profile in a laminar falling film served as a physical “ground truth” to test our technique. The new technique reproduces the predicted parabolic profile well. The measurement setup had two limitations: firstly, the high-speed camera used was of rather poor quality regarding the spatial homogeneity of its sensor; secondly, due to the relatively fast-moving flow, high frame rates and short exposure times were required, which ultimately resulted in image sequences with a poor signal-to-noise ratio.

The experiments in the convection tank differ in two ways from the measurements in the falling film: firstly, the motions of the flow are slower, so that a higher-quality camera and a higher resolution could be employed; secondly, the turbulent flow is intrinsically 3D and non-stationary in time. The drawback of this kind of flow is that no analytic solution is at hand. In this case we are restricted to qualitative evaluation and to comparison with measurements by other researchers.


Though Debaene et al. (2005) were interested in the flow field close to a rigid wall, the authors of this paper ultimately want to measure the velocity field close to a free surface. As long as this surface is not bent, the coordinate z represents the distance of the particle from the surface, measured orthogonally to the flat surface.



Acknowledgements

We gratefully acknowledge the support by the German Research Foundation (DFG, JA 395/11-2) within the priority program “Bildgebende Messverfahren für die Strömungsanalyse”.

Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. Laboratoire d’Etudes Aérodynamiques (UMR 6609-CNRS), Futuroscope Poitiers Cedex, France
  2. Interdisciplinary Center for Scientific Computing, Heidelberg University, Heidelberg, Germany
