Abstract
The calibration of a multi-camera system is a crucial step of volumetric flow measurements with photogrammetric methods. Conventional calibration methods are based on recording hardware targets, which are placed in the cameras' fields of view. Calibrating in confined spaces with those methods involves increased technical and mechanical effort. This work presents a calibration method without the use of a hardware target. Instead, crossing laser beams are introduced into the volume to create unique calibration points. The underlying algorithms discussed in this paper for detecting the laser beams are the Ransac algorithm, Template Matching (via cross correlation) and the Probabilistic Hough Transformation. The algorithms are tested with both experimental and synthetic data.
1 Introduction
The calibration of a multi-camera setup for volumetric flow measurement techniques, such as 3D-Particle Tracking Velocimetry (3D-PTV) (Maas et al. 1993), Tomographic Particle Image Velocimetry (Tomo-PIV) (Scarano 2012) or Shake-the-Box (STB) (Schanz et al. 2016), is an essential and crucial step at the beginning of every measurement campaign. In order to achieve accurate and high-quality reconstruction results, extensive effort should be made to perform a thorough and precise geometrical calibration. Although advanced calibration correction schemes have been proposed in the past (Wieneke 2008; Novara et al. 2019; Bruecker et al. 2020), they can only compensate for errors to a certain extent.
As a standard calibration procedure, the use of a calibration plate with well-defined geometrical properties is the most common approach. These targets have one (Albarelli et al. 2010; Bell et al. 2016; Chen et al. 2012; Huang et al. 2013; Lucchese and Mitra 2002; Weng et al. 1992; Pedersini et al. 1999a, b; Placht et al. 2014; Yang et al. 2018) or two planes (Maas et al. 1993) or other geometrically well-defined shapes (Heikkila 2000; Lavest et al. 1998). The markings on the target can be checkerboards (Albarelli et al. 2010; Bell et al. 2016; Chen et al. 2012; Lucchese and Mitra 2002; Weng et al. 1992; Placht et al. 2014; Yang et al. 2018), which are mostly used with single-plane targets, or circular markers (Maas et al. 1993; Datta et al. 2009; Heikkila 2000; Huang et al. 2013; Lavest et al. 1998; Pedersini et al. 1999a, b). Zhang (2000) investigated a combination of rectangular and special circular patterns, which allow distinct identification of the calibration points. Shen and Hornsey (2011) used non-planar targets for the calibration of visual sensor networks with multiple cameras. These methods were mostly applied in non-confined measurement volumes.
There are different calibration approaches for confined measurement domains or complicated experimental setups, where installation of a commercially available calibration target is not feasible. The first obvious approach is manufacturing a specially designed target if the measurement volume is accessible in some way. A two-plane target with circle markers of the appropriate size can be manufactured with sufficient accuracy using CNC machining processes or 3D printing. Another approach is an ex-situ calibration as done by Daher et al. (2019). They built a minimal model with the relevant optical elements of the measurement domain and placed the calibration target inside. The camera rig is then moved between the calibration model and the real model. Improving the partly inaccurate calibrations with volume self-calibration (Wieneke 2008) can lead to decent calibration quality.
Another approach for confined spaces is the dumbbell calibration (Lu and Payandeh 2007). Gülan et al. (2012) used this method in an experimental study with particle tracking velocimetry in a model of the ascending aorta. The calibration target consists of two points with a defined distance. It is freely moved around the measurement volume. A basic calibration is done with a two plane target behind the measurement domain. The dumbbell calibration then optimizes the previously performed basic calibration.
Wieneke (2005) developed a self-calibration algorithm for Stereo-PIV, for which he described a calibration procedure for confined spaces. The initial calibration is done in air. The calibration model is then fitted to the measurement domain by implementing a 3-media model (as proposed by Maas (1996)). Then, the calibration is optimized with the self-calibration algorithm.
The previously presented calibration methods can be applied in confined spaces. However, they all rely on hardware targets that have to be placed in the cameras' fields of view. In addition, some of the camera rigs have to be moved after calibration, which can lead to errors. The non-invasive calibration method presented in this paper is intended to provide a target-less alternative, which can be applied directly in the measurement volume without the need to build a calibration model or move the camera assembly.
Other non-intrusive calibration methods were investigated by Schosser et al. (2016) and Hishida and Sakakibara (2000). Schosser et al. (2016) performed tomographic PIV and 3D-PTV in a Tesla turbine rotor. They used the reflections on surfaces as calibration points. This made it possible to calibrate the camera system for a measurement domain that was just a few millimeters in height. Hishida and Sakakibara (2000) followed an approach similar to the one presented in this paper, but with three crossing laser beams and a manually selected crossing point. Therefore, the focus of this paper is on cross-detection algorithms for process automation and on the use of two crossing laser beams for a less complex assembly.
In Hardege et al. (2022), this lasercross calibration was already compared to a hardware target with two planes. There was good agreement between the calibration methods, and the calibrations were used for evaluating a 3D-PTV measurement. Going beyond the Ransac algorithm used by Hardege et al. (2022) for the lasercross detection, this work compares different methods for cross or line detection. The experimental setup is shown schematically, and different approaches for inserting the lasercross into the measurement domain are addressed. The creation of synthetic data with Blender is described, and three different cross-/line-detection algorithms are applied: Ransac (Fischler and Bolles 1981), Probabilistic Hough Transformation (Kälviäinen et al. 1995) and Template Matching (Lewis 1995). The synthetic and experimental data are used to compare the different algorithms.
2 Setups
2.1 Experimental setup
The experimental setup is a \(90^\circ\) pipe bend test rig. The calibration will be utilized for 3D-PTV measurements downstream of the \(90^\circ\) bend section. Schematics of the cross-sectional view and the top-down view of the measurement domain are shown in Fig. 1. The measurement domain consists of an octagonal body with acrylic glass as the outer shell (M1), which is filled with silicone (M2: Elastosil RT 601), and the pipe section with the flowing water-glycerol mixture in the middle (M3). The refractive index of the water-glycerol mixture matches that of the silicone; therefore, no optical distortion is introduced by the circular perimeter. For the 3D-PTV, the water-glycerol mixture is seeded with polyamide particles with a diameter of 50 µm. The particle density is \(0.03\) particles per pixel. These particles are also illuminated by the laser used for the calibration. The data are recorded with three Phantom VEO710L cameras and Zeiss Milvus 2/100 M optics.
The lasercross setup utilizes a collimated diode-pumped solid-state laser module with a typical emission wavelength of 532 nm (CW532-050; type: Nd:YVO4/KTP; 1.5 mm beam diameter; 0.8 mrad beam divergence; 50 mW radiated power; Roithner Laser Technik GmbH). The laser beam is split by a beamsplitter after being focused with a 600 mm lens; thus, the two beams are thinnest at the intersection point (about 300 µm diameter). The beams are redirected by surface mirrors to form a \(90^\circ\) cross. The assembly is fixed to a three-axis positioning table driven by stepper motors with an absolute positioning error of 1 µm (LINOS x.act LT series). Via this assembly, the lasercross is moved to predefined positions. In a prior paper (Hardege et al. 2022), the lasercross was manually translated with linear positioning tables, which resulted in non-reproducible calibration positions with rather low accuracy. By using the stepper-motor-driven linear stages for this study, the random positioning error is reduced.
A schematic of the beam path is shown in Fig. 1b. Because of the refraction, there is a constant scaling factor between the coordinates of the lasercross assembly (\(X_A\), \(Y_A\), \(Z_A\)) and the resulting world coordinates (\(X_W\), \(Y_W\), \(Z_W\)) used for the calibration. For \(\alpha~=~45^\circ\) the X-coordinate can be scaled by a simple trigonometric relation:
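One plausible form of such a relation, assuming the beams enter the interface at half-angle \(\alpha\) and continue at half-angle \(\gamma\) inside the medium (a reconstruction from the refraction geometry, not necessarily the published equation), is:

```latex
% Scaling of the assembly coordinate into the world coordinate,
% assuming refraction from half-angle \alpha to half-angle \gamma:
X_W \;=\; X_A\,\frac{\tan\alpha}{\tan\gamma}
    \;\stackrel{\alpha = 45^\circ}{=}\; \frac{X_A}{\tan\gamma}
```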
with \(\gamma\) being half the angle between the two crossing beams or the angle of refraction from material M1 to material M2.
2.2 Virtual setup
The virtual setup is used to generate synthetic data for examining the influence of the beam diameter on the calibration quality. With the virtual setup the positions of the lasercross and the beam quality are kept constant, while only the beam diameter is varied.
Blender (Footnote 1) is used to simulate a virtual three-camera setup with overlapping fields of view on the modeled lasercross, with the aim of rendering synthetic measurement images. Blender is an open-source, cross-platform 3D computer graphics software package with an extensive suite of tools for modeling, texturing, animation, compositing, and rendering.
The virtual setup in Blender consists of three camera objects and the lasercross, which is mimicked by two perpendicular cylinders. The focal length of the cameras is set to 100 mm, the dynamic range is 8-bit grey scale, the angle between the cameras is \(45^\circ\), and the resolution of the rendered images is 1920×1920 \(px^2\). The camera objects and the lasercross are oriented as shown in the experimental setup in Fig. 1a. Images rendered from this virtual setup can be seen in Fig. 2. The diameter of the cylinders is varied, and the associated diameter of the beam in pixels is measured from the upper camera.
With an animation setup and compositing nodes, the calibration images were automatically generated while keeping the positions of the lasercross constant and varying the beam diameter.
3 Models and algorithms: calibration and cross detection
The calibration process determines the camera focal length and the translation and orientation of the projection system with respect to the global coordinate system (intrinsic and extrinsic parameters).
This work uses a linear calibration model developed by Hall et al. (1982). A comparative study by Joshi et al. (2013) showed that the error level of the linear Hall calibration model is comparable to that of nonlinear models (namely Tsai (1987) and Soloff et al. (1997)) if there is no or low optical distortion in the captured images. The synthetic data used in this paper have no optical distortion, and the experimental data are recorded with Zeiss Milvus 2/100 mm Planar optics with a relative distortion \(<0.1\%\), the maximum occurring near the edges of the images (according to the datasheet). In conclusion, the Hall model is sufficiently accurate for the analysis performed.
The lasercross positions and their corresponding image coordinates are used to calculate the calibration matrices via direct linear transform (Förstner and Wrobel 2016).
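As a sketch of this step, the following minimal direct linear transform estimates a \(3\times 4\) projection matrix from point correspondences by solving a homogeneous least-squares problem. This is the generic textbook formulation, not the authors' code; function names are illustrative.

```python
import numpy as np

def dlt_calibrate(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 world-image point
    correspondences via the direct linear transform."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The right singular vector of the smallest singular value solves
    # the homogeneous system A p = 0 in the least-squares sense.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, world_pts):
    """Project world points through P and dehomogenize."""
    Xh = np.hstack([np.asarray(world_pts, dtype=float),
                    np.ones((len(world_pts), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]
```

With exact, noise-free correspondences the recovered matrix reproduces the image points up to numerical precision; with real detections, the residual is quantified by the reprojection error.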
This work utilizes and compares three different line-/cross-detection algorithms: the Ransac algorithm, the Probabilistic Hough Transformation and Template Matching. All algorithms were implemented in Python.
Ransac: The random sample consensus algorithm (Fischler and Bolles 1981) is used to fit a linear model to each of the laser beams. The Ransac algorithm operates on binary images. An overview of the detection steps is shown in Fig. 3, showing the raw image, the binarized image and the final line fit of the lasercross. The images are converted from gray scale to binary via Otsu's thresholding (Otsu 1979) (Python implementation by Bradski (2000)), which determines a global threshold for each image. After the first line is fitted, the pixels within its residual threshold are removed and the Ransac fit is repeated for the second line. The intersection point of the two lines is the calibration point.
Because the Ransac algorithm repeatedly selects random subsets of the data points (consensus sets), its results vary across runs; it is non-deterministic. The algorithm is therefore repeated multiple times, and outliers among the calculated lasercross positions (which may occur because of this non-deterministic outcome) are eliminated based on mean and standard deviation.
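The exact rejection rule is not spelled out; a simple per-coordinate mean-and-standard-deviation filter of the kind described could look as follows (the cut-off factor `k` is an illustrative assumption):

```python
import numpy as np

def reject_outliers(points, k=2.0):
    """Drop detected lasercross positions farther than k standard
    deviations from the mean, per coordinate (k is an assumed cut-off)."""
    pts = np.asarray(points, dtype=float)
    mean, std = pts.mean(axis=0), pts.std(axis=0)
    keep = np.all(np.abs(pts - mean) <= k * np.maximum(std, 1e-12), axis=1)
    return pts[keep]
```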
The Ransac algorithm is capable of fitting models to experimental data containing a significant percentage of errors, as stated by Fischler and Bolles (1981). In this work, such errors can be introduced by stray light or reflections inside the measurement volume (see Fig. 3). This work uses the Ransac implementation of the SciKit-Learn library (Pedregosa et al. 2011).
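The described two-line pipeline can be sketched with scikit-learn's `RANSACRegressor`. The slope-intercept parameterization below assumes neither beam is close to vertical in the image, the residual threshold is an illustrative choice rather than the authors' setting, and binarization (e.g., via Otsu's threshold) is assumed to have been done already.

```python
import numpy as np
from sklearn.linear_model import RANSACRegressor

def fit_two_lines_ransac(binary_img, residual_threshold=2.0, seed=0):
    """Fit two lines y = m*x + c to the bright pixels of a binarized
    lasercross image and return the intersection point (x, y)."""
    ys, xs = np.nonzero(binary_img)
    lines = []
    for _ in range(2):
        ransac = RANSACRegressor(residual_threshold=residual_threshold,
                                 random_state=seed)
        ransac.fit(xs.reshape(-1, 1), ys)
        lines.append((ransac.estimator_.coef_[0],
                      ransac.estimator_.intercept_))
        # Remove the inliers of the first fit before fitting the second.
        keep = ~ransac.inlier_mask_
        xs, ys = xs[keep], ys[keep]
    (m1, c1), (m2, c2) = lines
    xi = (c2 - c1) / (m1 - m2)  # intersection of the two fitted lines
    return xi, m1 * xi + c1
```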
Probabilistic Hough transformation: The Probabilistic Hough Transformation (PHT) fits multiple lines to each beam. By distinguishing between the angles of the lines, they can be assigned to the respective laser beam. The intersection points of every intersecting combination of two lines are calculated, and from these a mean intersection point is computed. Figure 4 shows the result of the PHT algorithm applied to Fig. 3a, with each colored line being a line detected by the PHT algorithm. The minimum line length passed to the algorithm must be set as high as possible, so that no reflection is incorrectly detected as a line.
An implementation of the Progressive Probabilistic Hough Transformation, developed by Galamhos et al. (1999) and provided by the SciKit-Image library (van der Walt et al. 2014), is used in this work. The main differences between the Standard Hough Transformation (SHT) and the PHT are that the SHT returns infinite lines while the PHT returns line segments, and that the PHT uses a random subset of the points and statistical methods to reduce computation time.
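A sketch of this procedure with scikit-image's `probabilistic_hough_line`: segments are split into the two beam families by slope sign (adequate for a roughly \(\pm 45^\circ\) cross), and the mean of all cross-family intersections is returned. Parameter values are illustrative, not the authors' settings.

```python
import numpy as np
from skimage.transform import probabilistic_hough_line

def _line_params(seg):
    # Represent the infinite line through a segment as a*x + b*y = c.
    (x0, y0), (x1, y1) = seg
    a, b = y1 - y0, x0 - x1
    return a, b, a * x0 + b * y0

def cross_from_pht(binary_img):
    """Detect line segments on each beam and return the mean
    intersection point (x, y) of all cross-family segment pairs."""
    segs = probabilistic_hough_line(binary_img, threshold=10,
                                    line_length=30, line_gap=3)
    # Split segments into the two beam families by slope sign.
    fam1 = [s for s in segs if (s[1][0] - s[0][0]) * (s[1][1] - s[0][1]) > 0]
    fam2 = [s for s in segs if (s[1][0] - s[0][0]) * (s[1][1] - s[0][1]) <= 0]
    pts = []
    for s1 in fam1:
        a1, b1, c1 = _line_params(s1)
        for s2 in fam2:
            a2, b2, c2 = _line_params(s2)
            det = a1 * b2 - a2 * b1
            if abs(det) > 1e-9:  # skip (near-)parallel pairs
                pts.append(((c1 * b2 - c2 * b1) / det,
                            (a1 * c2 - a2 * c1) / det))
    return np.mean(pts, axis=0) if pts else None
```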
Template matching: Template Matching (TM) (Lewis 1995) requires the user to mark a template of the cross in each view on one of the images of the dataset. In the first step, the user draws a rectangular area over the crossing section of one of the recorded calibration images for each camera separately. These marked areas are cropped to serve as the templates. The marked area is five times the beam diameter (in pixels; a value that proved suitable in practice) in width and height. In the second step, the user marks the center of the crossing point in each template; this is necessary because the center of the crossing beams is not necessarily the center of the template. These templates are then fitted to every image of each view via cross correlation. Figure 5 shows an exemplary correlation plane with the TM algorithm applied to Fig. 3a.
In detail, the algorithm uses the match_template implementation of the SciKit-Image-library (van der Walt et al. 2014). For subpixel accuracy, the peak of the correlation map is calculated via a 2D Gaussian fit implemented by the lmfit-library (Newville et al. 2016).
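A minimal version of this step with scikit-image's `match_template`: for brevity, the subpixel refinement below uses parabolic peak interpolation instead of the 2D Gaussian fit via lmfit used in the paper.

```python
import numpy as np
from skimage.feature import match_template

def locate_cross(image, template, template_center):
    """Find the lasercross via normalized cross correlation and refine
    the correlation peak to subpixel accuracy (parabolic interpolation
    stands in for the paper's 2D Gaussian fit)."""
    corr = match_template(image, template)
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def parabolic_offset(vm1, v0, vp1):
        # Vertex of the parabola through three neighboring samples.
        denom = vm1 - 2.0 * v0 + vp1
        return 0.0 if denom == 0 else 0.5 * (vm1 - vp1) / denom

    dy = parabolic_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = parabolic_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    cy, cx = template_center  # user-marked cross center in the template
    return ix + dx + cx, iy + dy + cy  # image coordinates (x, y)
```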
4 Results
This section presents the results for the synthetic data first, followed by those for the experimental data. The calibration quality in this section is quantified by the reprojection error (Hartley and Zisserman 2004), which is calculated per point as the Euclidean distance between the imaged point and its reprojected point. Reprojection errors quantifying a calibration as a whole are reported as arithmetic means with standard deviations.
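Given a \(3\times 4\) projection matrix and the point correspondences, these statistics can be computed as follows (a generic sketch, not the authors' code):

```python
import numpy as np

def reprojection_errors(P, world_pts, image_pts):
    """Per-point Euclidean distances between the measured image points
    and the reprojections of their world points through P (3x4);
    returns mean and standard deviation."""
    Xh = np.hstack([np.asarray(world_pts, dtype=float),
                    np.ones((len(world_pts), 1))])
    proj = (P @ Xh.T).T
    proj = proj[:, :2] / proj[:, 2:3]  # dehomogenize
    err = np.linalg.norm(proj - np.asarray(image_pts, dtype=float), axis=1)
    return err.mean(), err.std()
```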
The synthetic data comprise 60 calibration points in total and are used to examine the influence of the beam diameter on the calibration quality while the beam quality is at its optimum.
Figure 6 shows these results: the reprojection error over the beam diameter for the three cross-detection algorithms. The solid line denotes the mean of the reprojection error and the transparent band denotes the \(95\%\) confidence interval (drawn on only one side for clarity). With the synthetic data, the Template Matching algorithm shows the best results over the beam diameters for which it converges; for the smallest beam diameters, however, it was not possible to calculate the calibration matrices, because Template Matching returns inaccurate crossing points there.
The Ransac algorithm yields an equally good calibration for small beam diameters, but an increased reprojection error for increased beam diameters. In contrast to the deficits of these two detection algorithms, the Probabilistic Hough Transformation has a nearly constant reprojection error independent from the beam diameter. The vertical line in Fig. 6 denotes the mean beam diameter, which is present in the experimental data.
Table 1 summarizes the number of calibration points used for recording the experimental data. IDs 1.. have an increasing number of points, while IDs 2.. have an increasing number of xy planes (parallel plane to middle camera sensor) with a constant number of calibration points.
Figure 7 compares the mean reprojection errors of the cross-detection methods applied to the experimental and the synthetic data. For the experimental data, the mean was calculated over all measurements given in Table 1. From the synthetic data, the beam diameter equivalent to the experimental data is used (vertical line in Fig. 6). The transparent part denotes the \(95\%\) confidence interval of the reprojection error. The calibration with cross detection via Template Matching shows the best results when applied to the synthetic data. The same holds for the experimental data, although there the reprojection error is closer to that of the other cross-detection methods. Because of this difference between the experimental and synthetic results, it can be stated that Template Matching strongly depends on the quality of the beams. The beam quality of the experimental data is significantly poorer than that of the synthetic data due to the recording conditions, which can also vary greatly depending on the application (see Fig. 3). Because the cross detection on the experimental data via Probabilistic Hough Transformation and Ransac is in good agreement with the synthetic data, these methods are largely robust against the quality of the beam.
Figure 8 and Fig. 9 show the average reprojection errors (solid bars) and the \(95\%\) confidence intervals (transparent bars) for calibrations with the experimental data and the different cross-detection algorithms for the different measurement setups shown in Table 1.
In comparison, the calibration with Ransac cross detection has the highest reprojection errors followed by the calibration with Probabilistic Hough Transformation, followed by calibration with Template Matching.
Moreover, Template Matching has the lowest reprojection errors and, with regard to the differences in the means over the measurement IDs and their respective confidence intervals, is the cross-detection method least affected by the experimental data. Although it showed the biggest difference in calibration quality between synthetic and experimental data, it is least affected by the lasercross positions chosen for calibration.
5 Discussion
Prior work on the lasercross calibration by Hardege et al. (2022) primarily examined how well the lasercross calibration compares to calibration with a typical two-plane target. It also investigated how susceptible the calibration is to random positioning errors. It was shown that the calibration with the lasercross obtained similar results. The Ransac algorithm was used for the cross detection.
Based on these findings, in this new work, different lasercross-detection methods were tested. Furthermore, the influence of the shape of the calibration point cloud and the influence of the overall lasercross positions used for calibration was investigated.
The calibration with the Template Matching algorithm leads to the smallest reprojection errors on both synthetic and experimental data, compared with the calibrations utilizing the other cross-detection algorithms. However, it relies on user input for the template and has the highest computation time, which depends directly on the image resolution because of the cross correlation. Moreover, it appears to depend strongly on the beam quality, as the comparison between synthetic and experimental data shows.
The calibration with the Ransac algorithm obtained the highest reprojection errors, and its calibration quality strongly depends on the beam diameter. Regarding the results, the reprojection errors of the calibrations using the Ransac algorithm are impracticable for photogrammetric flow measurements. While the Ransac algorithm obtained good calibration quality in Hardege et al. (2022), this may be due to a difference in beam diameter.
The calibration with Probabilistic Hough Transformation obtained reprojection errors between the other two methods. It is mostly unaffected by beam quality and beam diameter. The absolute reprojection error with this cross-detection method is below 1px, which makes it suitable for photogrammetric measurements, especially when using optimizations like Volume Self-Calibration. In terms of computation time it is the fastest cross detection compared to the other methods presented.
In Fig. 8, a trend is visible where the reprojection error increases with an increasing number of calibration points; in other words, according to the reprojection error, the calibration quality decreases with an increasing number of calibration points. Furthermore, the reprojection error seems to converge to a certain value as the number of calibration points increases. This effect is especially noticeable with the TM and PHT cross-detection algorithms. It could be due to overfitting (Hawkins 2004), as known from the field of machine learning. Overfitting occurs when a model becomes too complex relative to the amount of training data available; this complexity often manifests as an excessive sensitivity to individual data points or noise. In addition, the calibration is calculated and tested with the same set of calibration points, without cross-validation (as is common practice in the field of camera calibration). Applied to the calibration, this means that the model appears to fit better with fewer calibration points, because the calibration model is oversensitive to the few points. The result of the calibration might then deviate more from reality but fit the calibration points well, i.e., it is oversensitive to the noise in the positions of the calibration points. With a higher number of calibration points, the model is no longer as susceptible to noise from the individual calibration points. Thus, the model may represent reality more accurately, but the error of the individual points, i.e., the mean of the reprojection error, is increased in comparison. This would also explain the convergence to a constant mean reprojection error, assuming that the error in the positions of the calibration points (or their detection) is not subject to any systematic error.
Similar effects of calibrations with the best reprojection error having the worst reconstruction error (Footnote 2) were, for example, reported by Koide and Menegatti (2019). It can therefore be concluded that the largest practicable number of calibration points should be used for the calibration.
Availability of data and materials
Data can be provided by the authors upon request.
Notes
Free and Open 3D Creation Software under GNU GPL; https://www.blender.org/.
The reconstruction error is used as a proxy for the model’s fit to reality.
References
Albarelli A, Rodolà E, Torsello A (2010) Robust camera calibration using inaccurate targets. In: British machine vision conference, BMVC 2010—proceedings. https://doi.org/10.5244/C.24.16
Bell T, Xu J, Zhang S (2016) Novel Method for out-of-focus camera calibration. Appl Opt 55(9):2346. https://doi.org/10.1364/ao.55.002346
Bradski G (2000) The OpenCV Library. Dr Dobb’s J Softw Tools
Bruecker C, Hess D, Watz B (2020) Volumetric calibration refinement of a multi-camera system based on tomographic reconstruction of particle images. Optics 1(1):114–135. https://doi.org/10.3390/OPT1010009
Chen J, Benzeroual K, Allison RS (2012) Calibration for high-definition camera rigs with marker chessboard. In: IEEE computer society conference on computer vision and pattern recognition workshops, pp 29–36. https://doi.org/10.1109/CVPRW.2012.6238905
Daher P, Lacour C, Lefebvre F, et al (2019) Tomographic PIV calibration procedure in confined optical engine geometry. In: 13th international symposium on particle image velocimetry
Datta A, Kim JS, Kanade T (2009) Accurate camera calibration using iterative refinement of control points. In: 2009 IEEE 12th international conference on computer vision workshops, ICCV Workshops 2009, pp 1201–1208. https://doi.org/10.1109/ICCVW.2009.5457474
Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun ACM 24(6):381–395
Förstner W, Wrobel BP (2016) Photogrammetric computer vision. Springer
Galamhos C, Matas J, Kittler J (1999) Progressive probabilistic Hough transform for line detection. In: Proceedings 1999 IEEE computer society conference on computer vision and pattern recognition (Cat. No PR00149), vol 1, pp 554–560. https://doi.org/10.1109/CVPR.1999.786993
Gülan U, Lüthi B, Holzner M et al (2012) Experimental study of aortic flow in the ascending aorta via particle tracking velocimetry. Exp Fluids 53(5):1469–1485. https://doi.org/10.1007/S00348-012-1371-8
Hall EL, Tio JB, McPherson CA et al (1982) Measuring curved surfaces for robot vision. Computer 15(12):42–54. https://doi.org/10.1109/MC.1982.1653915
Hardege R, Janke T, Burkert J, et al (2022) Non-invasive laser-cross calibration for volumetric flow measurements in closed channels. In: 20th international symposium on application of laser and imaging techniques to fluid mechanics
Hartley R, Zisserman A (2004) Multiple view geometry in computer vision. Cambridge University Press. https://doi.org/10.1017/cbo9780511811685
Hawkins DM (2004) The problem of overfitting. J Chem Inf Comput Sci 44(1):1–12. https://doi.org/10.1021/ci0342472
Heikkila J (2000) Geometric camera calibration using circular control points. IEEE Trans Pattern Anal Mach Intell 22(10):1066–1077
Hishida K, Sakakibara J (2000) Combined planar laser-induced fluorescence-particle image velocimetry technique for velocity and temperature fields. Exp Fluids. https://doi.org/10.1007/s003480070015
Huang L, Zhang Q, Asundi A (2013) Flexible camera calibration using not-measured imperfect target. Appl Opt 52(25):6278–6286. https://doi.org/10.1364/AO.52.006278
Joshi B, Ohmi K, Nose K (2013) Comparative study of camera calibration models for 3D particle tracking velocimetry. Int J Innov Comput Inf Control 9(5):1971–1986
Kälviäinen H, Hirvonen P, Xu L et al (1995) Probabilistic and non-probabilistic Hough transforms: overview and comparisons. Image Vis Comput 13(4):239–252. https://doi.org/10.1016/0262-8856(95)99713-B
Koide K, Menegatti E (2019) General hand-eye calibration based on reprojection error minimization. IEEE Robot Autom Lett 4(2):1021–1028. https://doi.org/10.1109/LRA.2019.2893612
Lavest JM, Viala M, Dhome M (1998) Do we really need an accurate calibration pattern to achieve a reliable camera calibration? Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol 1406, pp 158–174. https://doi.org/10.1007/BFb0055665
Lewis JP (1995) Fast template matching, Vision interface, Quebec City, pp 15–19
Lu Y, Payandeh S (2007) Dumbbell calibration for a multi-camera tracking system. In: Canadian conference on electrical and computer engineering, pp 1472–1475. https://doi.org/10.1109/CCECE.2007.396
Lucchese L, Mitra SK (2002) Using saddle points for subpixel feature detection in camera calibration targets. In: Asia-Pacific conference on circuits and systems, pp 191–195. https://doi.org/10.1109/APCCAS.2002.1115151
Maas HG (1996) Contributions of digital photogrammetry to 3-D PTV. In: Three-dimensional velocity and vorticity measuring and image analysis techniques. Lecture notes from the short course held in Zürich, pp 191–207. https://doi.org/10.1007/978-94-015-8727-3_9
Maas HG, Gruen A, Papantoniou D (1993) Particle tracking velocimetry in three-dimensional flows. Exp Fluids 15(2):133–146. https://doi.org/10.1007/BF00190953
Newville M, Stensitzki T, Allen DB, et al (2016) Lmfit: non-linear least-square minimization and curve-fitting for python. Astrophys Source Code Libr ascl–1606
Novara M, Schanz D, Geisler R, et al (2019) Instantaneous self-calibration of a 3D imaging system in industrial facility with strong vibrations. In: 13th international symposium on particle image velocimetry, pp 1133–1143
Otsu N (1979) A threshold selection method from gray-level histograms. IEEE Trans Syst Man Cybern 9(1):62–66
Pedersini F, Sarti A, Tubaro S (1999) Accurate and simple geometric calibration of multi-camera systems. Signal Process 77(3):309–334. https://doi.org/10.1016/S0165-1684(99)00042-0
Pedersini F, Sarti A, Tubaro S (1999) Multi-camera systems: calibration and applications. IEEE Signal Process Mag 16(3):55–65. https://doi.org/10.1109/79.768573
Pedregosa F, Varoquaux G, Gramfort A et al (2011) Scikit-learn: machine learning in Python. J Mach Learn Res 12:2825–2830
Placht S, Fürsattel P, Mengue EA, et al (2014) ROCHADE: robust checkerboard advanced detection for camera calibration. In: Computer Vision—ECCV 2014 lecture notes in computer science, vol 8692, pp 766–779. https://doi.org/10.1007/978-3-319-10593-2_50
Scarano F (2012) Tomographic PIV: principles and practice. Meas Sci Technol 24(1):012001. https://doi.org/10.1088/0957-0233/24/1/012001
Schanz D, Gesemann S, Schröder A (2016) Shake-the-box: Lagrangian particle tracking at high particle image densities. Exp Fluids 57(5):1–27. https://doi.org/10.1007/S00348-016-2157-1
Schosser C, Fuchs T, Hain R et al (2016) Non-intrusive calibration for three-dimensional particle imaging. Exp Fluids 57(5):1–5. https://doi.org/10.1007/s00348-016-2167-z
Shen E, Hornsey R (2011) Multi-camera network calibration with a non-planar target. IEEE Sens J 11(10):2356–2364. https://doi.org/10.1109/JSEN.2011.2123884
Soloff SM, Adrian RJ, Liu ZC (1997) Distortion compensation for generalized stereoscopic particle image velocimetry. Meas Sci Technol 8(12):1441–1454. https://doi.org/10.1088/0957-0233/8/12/008
Tsai RY (1987) A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE J Robot Autom 3(4):323–344. https://doi.org/10.1109/JRA.1987.1087109
van der Walt S, Schönberger JL, Nunez-Iglesias J et al (2014) scikit-image: image processing in Python. PeerJ 2:e453. https://doi.org/10.7717/peerj.453
Weng J, Cohen P, Herniou M (1992) Camera calibration with distortion models and accuracy evaluation. IEEE Trans Pattern Anal Mach Intell 14(10):965–980. https://doi.org/10.1109/34.159901
Wieneke B (2005) Stereo-PIV using self-calibration on particle images. Exp Fluids 39(2):267–280. https://doi.org/10.1007/s00348-005-0962-z
Wieneke B (2008) Volume self-calibration for 3D particle image velocimetry. Exp Fluids 45(4):549–556. https://doi.org/10.1007/s00348-008-0521-5
Yang T, Zhao Q, Wang X et al (2018) Sub-pixel chessboard corner localization for camera calibration and pose estimation. Appl Sci. https://doi.org/10.3390/app8112118
Zhang Z (2000) A flexible new technique for camera calibration. IEEE Trans Pattern Anal Mach Intell 22(11):1330–1334
Funding
Open Access funding enabled and organized by Projekt DEAL. This research was funded by the German Research Foundation (DFG), project numbers 387065607 and 459285942.
Author information
Contributions
RH was involved in the conceptualization, methodology, software, investigation, and writing the main manuscript. TR contributed to the conceptualization, software, and manuscript editing and reviewing. KB assisted in supervising and manuscript editing and reviewing. RS contributed to supervising and manuscript editing and reviewing.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
Not applicable.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Hardege, R., Rockstroh, T., Bauer, K. et al. Non-invasive calibration for volumetric flow measurements in confined spaces. Exp Fluids 64, 193 (2023). https://doi.org/10.1007/s00348-023-03729-1