Abstract
A wavelet-based optical flow method for high-resolution velocimetry from tracer particle images is presented. The optical flow estimation method (WOF-X) is designed to improve the processing of experimental images by implementing wavelet transforms with the lifting scheme and symmetric boundary conditions. This approach improves both speed and accuracy over existing wavelet-based methods. The method also exploits the properties of fluid flows, using the known behavior of turbulent energy spectra to semi-automatically tune a regularization parameter that previous optical flow algorithms have determined primarily empirically. As an initial evaluation of the WOF-X method, synthetic particle images from a 2D DNS of isotropic turbulence are processed and the results are compared to a typical correlation-based PIV algorithm and to previous optical flow methods. The WOF-X method produces a dense velocity estimate, yielding an order-of-magnitude increase in velocity vector resolution compared to traditional correlation-based PIV processing. Results also show an improvement in velocity estimation accuracy of more than a factor of two. The increased resolution and accuracy of the velocity field lead to significant improvements in the calculation of velocity-gradient-dependent properties such as vorticity. In addition to the DNS results, the WOF-X method is evaluated in a series of two-dimensional vortex flow simulations to determine optimal experimental design parameters. Recommendations for optimal tracer particle seeding density and inter-frame particle displacement are presented. The WOF-X method produces minimal error at larger particle displacements and lower relative error over a larger velocity dynamic range compared to correlation-based processing.
References
Adrian RJ (2005) Twenty years of particle image velocimetry. Exp Fluids 39:159
Alvarez L, Castano CA, García M, Krissian K, Mazorra L, Salgado A, Sánchez J (2009) A new energy-based method for 3D motion estimation of incompressible PIV flows. Comput Vis Image Underst 113:802
Atcheson B, Heidrich W, Ihrke I (2009) An evaluation of optical flow algorithms for background oriented schlieren imaging. Exp Fluids 46(3):467
Beylkin G (1992) On the representation of operators in bases of compactly supported wavelets. SIAM J Numer Anal 6(6):1716
Beyou S, Cuzol A, Gorthi SS, Mémin E (2013) Weighted ensemble transform Kalman filter for image assimilation. Tellus A 65(1):18803
Black MJ, Anandan P (1996) The robust estimation of multiple motions: parametric and piecewise-smooth flow fields. Comput Vis Image Underst 63(1):75
Cai S, Mémin E, Dérian P, Xu C (2018) Motion estimation under location uncertainty for turbulent fluid flows. Exp Fluids 59:8
Carlier J, Wieneke B (2005) Report 1 on production and diffusion of fluid mechanics images and data. Fluid project deliverable 1.2. European Project “Fluid image analysis and description” (FLUID). http://www.fluid.irisa.fr
Cassisa C, Simoens S, Prinet V, Shao L (2011) Subgrid scale formulation of optical flow for the study of turbulent flows. Exp Fluids 51(6):1739
Chen X, Zillé P, Shao L, Corpetti T (2015) Optical flow for incompressible turbulence motion estimation. Exp Fluids 56:8. https://doi.org/10.1007/s00348-014-1874-6
Corpetti T, Heitz D, Arroyo G, Mémin E, Santa-Cruz A (2006) Fluid experimental flow estimation based on an optical-flow scheme. Exp Fluids 40(1):80. https://doi.org/10.1007/s00348-005-0048-y
Corpetti T, Mémin E, Pérez P (2002) Dense estimation of fluid flows. IEEE Trans Pattern Anal Mach Intell 24(3):365–380
Daubechies I, Sweldens W (1998) Factoring wavelet transforms into lifting steps. J Fourier Anal Appl 4(3):247–269
Dérian P, Almar R (2017) Wavelet-based optical flow estimation of instant surface currents from shore-based and UAV videos. IEEE Trans Geosci Remote Sens 55(10):5790
Dérian P, Héas P, Herzet C, Mémin É (2012) Wavelet-based fluid motion estimation. In: Bruckstein AM, ter Haar Romeny BM, Bronstein AM, Bronstein MM (eds) Scale space and variational methods in computer vision. SSVM 2011. Lecture notes in computer science, vol 6667. Springer, Berlin, Heidelberg
Dérian P, Héas P, Herzet C, Mémin E (2013) Wavelets and optical flow motion estimation. Num Math 6:116
Dérian P, Mauzey CF, Mayor SD (2015) Wavelet-based optical flow for two-component wind field estimation from single aerosol lidar data. J Atmos Ocean Technol 32(10):1759
Hart DP (1999) Super-resolution PIV by recursive local-correlation. J Vis 10:1–10
Heitz D, Héas P, Mémin E, Carlier J (2008) Dynamic consistent correlation-variational approach for robust optical flow estimation. Exp Fluids 45:595
Heitz D, Mémin E, Schnörr C (2010) Variational fluid flow measurements from image sequences: synopsis and perspectives. Exp Fluids 48:369. https://doi.org/10.1007/s00348-009-0778-3
Horn BKP, Schunck BG (1981) Determining optical flow. Artif Intell 17:185–203
Héas P, Herzet C, Mémin E, Heitz D, Mininni PD (2013) Bayesian estimation of turbulent motion. IEEE Trans Pattern Anal Mach Intell 35(6):1343–1356
Kadri-Harouna S, Dérian P, Héas P, Mémin E (2013) Divergence-free wavelets and high order regularization. Int J Comput Vis 103:80–99. https://doi.org/10.1007/s11263-012-0595-7
Keane RD, Adrian RJ (1990) Optimization of particle image velocimeters. I. Double pulsed systems. Meas Sci Technol 1:1202. https://doi.org/10.1088/0957-0233/1/11/013
Kähler CJ, Scharnowski S, Cierpka C (2012) On the resolution limit of digital particle image velocimetry. Exp Fluids 52(6):1629
Liu T, Merat A, Makhmalbaf MHM, Fajardo C, Merati P (2015) Comparison between optical flow and cross-correlation methods for extraction of velocity fields from particle images. Exp Fluids 56:166
Liu T, Shen L (2008) Fluid flow and optical flow. J Fluid Mech 614:253. https://doi.org/10.1017/S0022112008003273
Mallat SG (1989) Multiresolution approximations and wavelet orthonormal bases of \(L^2(\mathbb{R})\). Trans Am Math Soc 315(1):69–87
Mallat SG (2009) A wavelet tour of signal processing. Elsevier, New York
Plyer A, Besnerais GL, Champagnat F (2016) Massively parallel Lucas Kanade optical flow for real-time video processing applications. J Real Time Image Process 11(4):713
Pope SB (2000) Turbulent flows. Cambridge University Press, Cambridge
Scarano F (2002) Iterative image deformation methods in PIV. Meas Sci Technol 13:R1
Schanz D, Gesemann S, Schröder A (2016) Shake-the-box: Lagrangian particle tracking at high particle image densities. Exp Fluids 57(5):70
Schmidt M (2005) minFunc: unconstrained differentiable multivariate optimization in matlab. https://www.cs.ubc.ca/~schmidtm/Software/minFunc.html
Stanislas M, Okamoto K, Kähler CJ (2003) Main results of the first international PIV challenge. Meas Sci Technol 14:R63
Stanislas M, Okamoto K, Kähler CJ, Scarano F (2008) Main results of the third international PIV challenge. Exp Fluids 45(1):27
Stanislas M, Okamoto K, Kähler CJ, Westerweel J (2005) Main results of the second international PIV challenge. Exp Fluids 39(2):170
Susset A, Most JM, Honoré D (2006) A novel architecture for a super-resolution PIV algorithm developed for the improvement of the resolution of large velocity gradient measurements. Exp Fluids 40(1):70
Takehara K, Adrian RJ, Etoh GT, Christensen KT (2000) A Kalman tracker for super-resolution PIV. Exp Fluids 29(1):S034–S041
Taubman D, Marcellin M (2002) JPEG2000: standard for interactive imaging. Proc IEEE 90:1336
Tokumaru PT, Dimotakis PE (1995) Image correlation velocimetry. Exp Fluids 19:1
Ullman S (1979) The interpretation of visual motion. MIT Press, Cambridge, MA
Westerweel J (1997) Fundamentals of digital particle image velocimetry. Meas Sci Technol 8:1379
Westerweel J, Elsinga GE, Adrian RJ (2013) Particle image velocimetry for complex turbulent flows. Ann Rev Fluid Mech 45:409
Wu Y, Kanade T, Li C, Cohn J (2000) Image registration using wavelet-based motion model. Int J Comput Vis 38(2):129
Yuan J, Schnörr C, Mémin E (2007) Discrete orthogonal decomposition and variational fluid flow estimation. J Math Imaging Vis 28:67
Zillé P, Corpetti T, Shao L, Chen X (2014) Observation model based on scale interactions for optical flow estimation. IEEE Trans Image Process 23(8):3281
Acknowledgements
This work was partially sponsored by the Air Force Office of Scientific Research under Grant FA9550-16-1-0366 (Chiping Li, Program Manager).
Appendices
Appendix 1: Wavelet transforms and signal decomposition
A wavelet is a mathematical function that can be used to divide a given function f(x) into different scale components. A wavelet transform is the representation of f(x) through a linear combination of a set of basis functions that are translations and dilations of a fast-decaying function known as the mother wavelet, \(\psi\), along with an associated scaling function, \(\varphi\). Specifically, a wavelet transform of f(x) first computes the set of inner products of f(x) with a wavelet atom \(\psi _{j,n}\) at scales j and positions n, resulting in a set of detail coefficients, \(d_j [n] = \langle f, \psi _{j,n} \rangle\). Wavelet atoms are defined by scaling and translating the mother wavelet \(\psi\) as \(\psi _{j,n} = \frac{1}{2^j} \psi \left( \frac{x-2^j n}{2^j} \right)\). Associated with the “high-pass” mother wavelet is a scaling function \(\varphi\) that gives a low-pass representation of the function f(x). Thus, at each scale j and position n, approximation coefficients are computed as \(a_j [n] = \langle f, \varphi _{j,n} \rangle\), where \(\varphi _{j,n} = \frac{1}{2^j} \varphi \left( \frac{x-2^j n}{2^j} \right)\). The wavelet transform at each scale j produces a set of detail coefficients \(d_j\) and approximation coefficients \(a_j\), which are collected in the multiscale coefficient set \(\varTheta\).
In signal processing, one has a discrete signal \(f \left[ x_k \right]\) with a length of \(2^F\). The wavelet transforms are applied at increasingly coarse scales, starting from the finest scale \(\left( j = F \right)\) down to a predetermined coarse scale \(j_0<F\). At each subsequent scale, the approximation coefficients are divided into coarser approximations and details; that is, the transform at each scale \(j-1\) operates on the coarse approximation from the next finest scale \(a_j\) to produce \(d_{j-1}\) and \(a_{j-1}\). Thus, a discrete wavelet transform (DWT) applied to \(f \left[ x_k \right]\) forms a multiscale representation of the signal, where the detail coefficients from all scales \(d_{j_0}, d_{j_0+1}, \ldots, d_{F-1}\) and the remaining coarse approximation \(a_{j_0}\) are stored as
$$\varTheta = \left[ a_{j_0},\, d_{j_0},\, d_{j_0+1},\, \ldots,\, d_{F-1} \right],$$
which also has a length of \(2^F\) like the original signal \(f \left[ x_k \right]\). The mathematical details of the lifting DWT used by WOF-X are given in Mallat (2009) and Daubechies and Sweldens (1998). If the wavelet and scaling functions form an orthonormal or, more generally, a biorthogonal basis, as is the case for the biorthogonal 9–7 wavelets used by WOF-X, then the wavelet transform can be inverted and \(f \left[ x_k \right]\) is recovered exactly. The output of a wavelet transform is therefore a set of wavelet coefficients that describes the input signal exactly in the wavelet basis.
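As an illustration only (not the paper's lifting implementation), the multilevel DWT with symmetric boundary handling can be sketched with the PyWavelets library, where `"bior4.4"` denotes the biorthogonal 9/7 (CDF 9/7) wavelet pair; the signal length 256 and decomposition depth of 4 are arbitrary choices:

```python
import numpy as np
import pywt  # PyWavelets

# Discrete signal of length 2^F (here F = 8).
rng = np.random.default_rng(0)
f = rng.standard_normal(256)

# Multilevel DWT: "bior4.4" is the biorthogonal 9/7 (CDF 9/7) pair,
# with symmetric boundary extension. coeffs is ordered
# [a_{j0}, d_{j0}, d_{j0+1}, ..., d_{F-1}] (coarsest to finest).
coeffs = pywt.wavedec(f, "bior4.4", mode="symmetric", level=4)

# Biorthogonal bases still invert exactly: the signal is recovered
# to floating-point precision.
f_rec = pywt.waverec(coeffs, "bior4.4", mode="symmetric")
```

PyWavelets implements the transform as a filter bank rather than via the lifting factorization used by WOF-X, but since lifting is a factorization of the same filter bank (Daubechies and Sweldens 1998), the resulting coefficients and the perfect-reconstruction property are equivalent up to boundary handling.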
1.1 Wavelet transforms of images
The above procedure can be applied to higher-dimensional signals (e.g., images) by applying the wavelet transform along each dimension isotropically at each scale, as demonstrated schematically in Fig. 15.
For the wavelet-based optical flow described in the current manuscript, the generic signal \(f \left[ x_{k_1}^1,x_{k_2}^2 \right]\) is replaced by the individual (two-dimensional) velocity components \(v_i \left( \underline{x} \right)\). Truncated transforms are formed by setting all detail coefficients, including the mixed coefficients, at scales finer than a specified scale L to zero. The truncated transform is then inverted to obtain a coarse approximation of the original signal, \(f_L \left[ x_{k_1}^1,x_{k_2}^2 \right]\).
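The truncation can be sketched with the same hypothetical PyWavelets setup: keep the coarse approximation, zero the detail subbands (including the mixed coefficients) at the finest scales, and invert. The image size, level count, and cutoff L = 2 are illustrative choices:

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(1)
img = rng.standard_normal((128, 128))  # stand-in for one velocity component

# Isotropic 2D DWT: each level yields the (horizontal, vertical,
# mixed/diagonal) detail subbands in addition to the approximation.
coeffs = pywt.wavedec2(img, "bior4.4", mode="symmetric", level=5)

# Zero every detail subband at the L finest scales; L = 2 is an
# illustrative cutoff.
L = 2
n_detail = len(coeffs) - 1  # detail levels, ordered coarsest to finest
truncated = [coeffs[0]]
for i, details in enumerate(coeffs[1:]):
    if i >= n_detail - L:
        details = tuple(np.zeros_like(d) for d in details)
    truncated.append(details)

# Inverting the truncated transform gives a coarse approximation
# of the original field.
img_coarse = pywt.waverec2(truncated, "bior4.4", mode="symmetric")
```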
1.2 Derivative regularization in the wavelet domain
This section outlines the construction of the regularization term \(J_\mathrm{{R}}\) in Eq. 6 and its gradient. The proof that the following definition of \(J_\mathrm{{R}}\) indeed penalizes the third derivative of the velocity field is described in Kadri-Harouna et al. (2013) and will not be given here. Kadri-Harouna et al. (2013) show that the regularization term \(J_\mathrm{{R}}\) for penalization of the third derivative of a velocity field \(\underline{v}\) is given by the following:
where \(\underline{\psi }\) is the wavelet transform of the velocity field \(\underline{v}\) and the superscript \(^V\) indicates the vectorization operation. \(\underline{\varLambda }\) is a matrix matching the dimensions of \(\underline{\psi }\) whose elements are \(4^{3j}\) (see Fig. 15) for every scale above \(j_0\) and 0 for every element corresponding to \(\underline{a}_{j_0}\). Its gradient with respect to \(\underline{\psi }\) is easily found:
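In the notation above, with elementwise weights \(4^{3j}\) on the detail coefficients and zeros on the entries corresponding to \(\underline{a}_{j_0}\), the regularization term and its gradient plausibly take the quadratic form below (a reconstruction following Kadri-Harouna et al. 2013, not the original display equations):

```latex
J_{\mathrm{R}}\left(\underline{v}\right)
  = \left(\underline{\psi}^{V}\right)^{\top}
    \left(\underline{\varLambda}^{V} \circ \underline{\psi}^{V}\right)
  = \sum_{n} \varLambda[n]\, \psi[n]^{2},
\qquad
\frac{\partial J_{\mathrm{R}}}{\partial \underline{\psi}}
  = 2\, \underline{\varLambda} \circ \underline{\psi},
```

where \(\circ\) denotes the elementwise (Hadamard) product.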
Appendix 2: Synthetic particle image generation
Synthetic particle images are generated in a manner similar to that described by Carlier and Wieneke (2005). Each particle i is centered at a location \(\left( x_0^i,y_0^i\right)\) in the image domain, with diameter \(d_i\) and a center brightness value \(c_i\). The \(d_i\) values are drawn from a normal distribution with mean \(\bar{d}\) and standard deviation \(\sigma _d\). The center brightness simulates out-of-plane displacement in a real-world experiment and is given by \(c_i = d_i^2 \exp \left( -b_i^2 \right)\), where \(b_i\) is normally distributed with zero mean and standard deviation \(\sigma _b\). The \(c_i\) values are then normalized such that the brightest particle in the domain has \(c_i = 1\). The intensity distribution of particle i is then given by the following:
A simulated digitized image can be created by considering the brightness at some pixel k. Its brightness is computed by summing the contribution of each particle and “digitizing” by integration over the pixel. The integration is done by convolving the intensity function of each particle with a rectangle function and the Dirac delta function as follows in Eq. 16:
Carrying out the convolution for pixel width \(\varDelta\), the resulting synthetic image is determined by the following:
Pixels whose calculated value \(I \left( x_k,y_k \right)\) exceeds one are synthetically saturated by setting them to one.
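The generation procedure can be sketched in Python. The Gaussian particle profile \(I_i(x,y) = c_i \exp \left[ -8\left( (x-x_0^i)^2 + (y-y_0^i)^2 \right) / d_i^2 \right]\) is an assumption (a common choice in synthetic PIV generators; the paper's Eq. 16 is not reproduced here), under which the pixel integration reduces to differences of error functions along each axis; all parameter defaults are illustrative:

```python
import numpy as np
from scipy.special import erf

def synthetic_image(n_pix=64, n_particles=200, d_mean=2.5, d_std=0.5,
                    sigma_b=1.0, delta=1.0, seed=0):
    """Sketch of the synthetic particle image generator described above.

    Assumes Gaussian particles,
        I_i(x, y) = c_i * exp(-8 * ((x - x0_i)**2 + (y - y0_i)**2) / d_i**2),
    so integrating over a pixel of width `delta` has a closed form in
    terms of the error function. All defaults are illustrative.
    """
    rng = np.random.default_rng(seed)
    x0 = rng.uniform(0.0, n_pix, n_particles)
    y0 = rng.uniform(0.0, n_pix, n_particles)
    d = np.clip(rng.normal(d_mean, d_std, n_particles), 0.5, None)

    # Center brightness c_i = d_i^2 exp(-b_i^2), normalized so the
    # brightest particle has c_i = 1.
    b = rng.normal(0.0, sigma_b, n_particles)
    c = d**2 * np.exp(-b**2)
    c /= c.max()

    # Integrate the separable Gaussian over each pixel with erf.
    xk = np.arange(n_pix) + 0.5          # pixel centers
    a = np.sqrt(8.0) / d[:, None]        # (N, 1)
    gx = (erf(a * (xk[None, :] + delta / 2 - x0[:, None]))
          - erf(a * (xk[None, :] - delta / 2 - x0[:, None])))
    gy = (erf(a * (xk[None, :] + delta / 2 - y0[:, None]))
          - erf(a * (xk[None, :] - delta / 2 - y0[:, None])))
    pref = (np.sqrt(np.pi) / (2.0 * a[:, 0]))**2   # per-particle prefactor

    # Sum particle contributions, then saturate pixels brighter than one.
    img = np.einsum("n,ny,nx->yx", c * pref, gy, gx)
    return np.minimum(img, 1.0)
```

Particle displacement between two frames would then be simulated by shifting the \((x_0^i, y_0^i)\) positions according to the prescribed velocity field before generating the second image.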
Cite this article
Schmidt, B.E., Sutton, J.A. High-resolution velocimetry from tracer particle fields using a wavelet-based optical flow method. Exp Fluids 60, 37 (2019). https://doi.org/10.1007/s00348-019-2685-6