1 Introduction

New studies in biophysics and fluid mechanics require quantitative imaging in large-scale field experiments. Such studies include the large-scale Lagrangian tracking of bats and bird flocks (Theriault et al. 2014; Attanasi et al. 2015), super-large-scale particle image velocimetry measurements using natural snowfall (Toloui et al. 2014) and recent advances in tomographic PIV (Jux et al. 2018).

Obtaining reliable two- and three-dimensional imaging data in these large-field experiments is challenging and requires a camera calibration that is accurate down to the smallest physical length scale of interest. Non-linear polynomial camera mappings (Soloff et al. 1997; Wieneke 2005, 2008) are often used in laboratory experiments (Kühn et al. 2010), but their application at length scales beyond that of the laboratory is, in practice, limited. First, the size of the calibration target is limited, such that it covers only a small portion of the measurement volume. Second, the conventional laboratory equipment that provides access to the measurement volume is typically absent, so the target cannot be positioned with the accuracy required by conventional calibration procedures.

In the present work, we combine the pinhole camera model (Tsai 1987) with the non-linear polynomial camera mappings used in experimental fluid mechanics (Soloff et al. 1997) to perform a multiple-camera calibration over a large-scale measurement volume inside the tank of the aquarium located in the Rotterdam zoo. Our method integrates the pinhole camera model with a non-linear camera mapping to correct for optical distortion across refractive interfaces (Belden 2013). Our approach uses the framework of projective geometry in computer vision (Hartley and Zisserman 2004) and applies advanced self-calibration techniques (Svoboda et al. 2005; Shen et al. 2008).

Here we apply the planar checkerboard calibration technique of Zhang (2000), see also Zhang (1998, 1999), Sturm and Maybank (1999), Menudet et al. (2008) and Bouguet (2015). This approach eliminates the need to accurately position the calibration target, as required in conventional calibration procedures. Instead, the checkerboard calibration target is moved to arbitrary and unknown positions and orientations, here with the help of a team of divers. Furthermore, by sequentially acquiring multiple calibration images while freely moving the calibration target, we achieve a camera calibration that spans length scales much larger than the calibration target itself. This approach yields an accurate calibration over a measurement volume with a characteristic length scale on the order of several tens of meters.

We process the camera calibration in steps (Heikkilä and Silvén 1997). First, we correct for optical distortions (Fryer and Brown 1986) by rectifying the curved lines of the checkerboard images (Prescott and McLean 1997; Devernay and Faugeras 2001). Second, we perform a calibration based on a single view for each camera following Zhang (2000). Finally, we combine the single views, find the positions and orientations of the cameras over multiple views (Geiger et al. 2012) and optimize the calibration for spatial accuracy and consistency between the different views.

The camera calibration yields accurate results. To assess its validity, we compare the estimated effective focal length against the true focal length of the lenses. We quantify the spatial accuracy of the camera calibration by computing the skewness of the optical rays associated with the multiple views. Our calibration allows us to use linear ray tracing (Hedrick 2008) to track and triangulate multiple fish swimming over the entire visual depth of the tank. The method is versatile and can be implemented in field experiments over large length scales and for measurement volumes that are challenging to access experimentally.

2 Camera setup and calibration procedure

We image inside the large tank of the aquarium in the Rotterdam zoo in a measurement volume of dimensions \(\sim 10 \times 25 \times 6 \; \mathrm {m}^3\) (Fig. 1). We use four 5.5 megapixel sCMOS cameras with wide-angle lenses of focal length \(f_{\mathrm {lens}}=24 \; \mathrm {mm}\) (Nikkor AF-\(24 \; \mathrm {mm}\)) that cause significant variations in magnification over the large depth-of-field of \(\mathrm {DOF} \approx 25 \; \mathrm {m}\) (Adrian and Westerweel 2011).

The camera setup is positioned behind an acrylic window of thickness \(\sim 50 \ \mathrm {cm}\). Optical distortions in the image plane are due to refraction at the water (\(n_{\mathrm {water}} = 1.363\))/acrylic (\(n_{\mathrm {acryl}}=1.51\)) and acrylic/air (\(n_{\mathrm {air}}=1.0003\)) interfaces (Sedlazeck and Koch 2012), where n is the refractive index. The optical access is limited to the acrylic window, which constrains the spacing between the cameras to \(\varDelta H \approx 1 \; \mathrm {m}\) in height and \(\varDelta W \approx 6 \; \mathrm {m}\) in width, and limits the relative angles between the cameras to between \(5{^\circ }\) and \(20{^\circ }\) (Fig. 1).

To calibrate the camera setup, we image a planar checkerboard calibration target of dimensions \(1.5 \times 1.8 \ \mathrm {m}^2\) with \(5 \times 6\) tiles (Geiger et al. 2012), each of area \(A_{\mathrm {tile}} = 30 \times 30 \; \mathrm {cm}^2\). The calibration target is moved within the aquarium by a team of divers, who swim with the checkerboard to arbitrary and unknown positions and orientations throughout the aquarium.

Fig. 1

The four-camera setup at the Rotterdam zoo. a Schematic representation of the large measurement volume of \(\sim 10 \times 25 \times 6 \; \mathrm {m}^3\) along the \(\mathrm {DOF} \approx 25 \; \mathrm {m}\), including the flat checkerboard of dimensions \(1.5 \times 1.8 \; \mathrm {m}^2\) that is positioned by a team of divers. b Optical access through the large window spanning \(2 \times 8 \; \mathrm {m}^2\) including the camera setup at relative spacing of \(\varDelta H \approx 1\; \mathrm {m}\) by \(\varDelta W \approx 6 \; \mathrm {m}\). c An example of calibration images acquired while positioning the checkerboard

2.1 Image processing

The different images of the calibration target are processed to identify the curves and the nodes corresponding to the gridlines and intersections between the tiles of the checkerboard (Fig. 2). In our application, a checkerboard calibration target is advantageous over a pattern of dots, because the image gradient obtained from a checkerboard determines the grid points more accurately and more robustly over the large depth-of-field of our experiment.

The calibration images are converted to the image gradient using a Savitzky–Golay image differentiation approach (Meer and Weiss 1992) to mark locations on the gridlines between the tiles. For each image, we then fit a set of polynomial curves \(\gamma _i(t)\) to the \(I=9\) gridlines between the tiles of the checkerboard using the local intensity values from the image gradient. These fitting curves are written as \(\gamma _i(t)=\sum _k {\mathbf {a}}^k_i t^{k-1}\), where \({\mathbf {a}}^k_i\) are two-dimensional vectors. Here the parameter t varies within the interval \(t \in [0 , 1]\) over the checkerboard image, such that \(\gamma _i(t=0)\) and \(\gamma _i(t=1)\) correspond to the beginning and end points of the gridline in the checkerboard image, see the lines in Fig. 2. Finally, we find the \(J=20\) intersections between all the gridlines as a set of nodes \({\mathbf {x}}_{j}\) in the image plane of each camera. Here \({\mathbf {x}}_{j}=[x_j \ y_j ]^T\) with \((\bullet )^T\) the vector transpose, where the numbering \(j=1 \cdots J\) is consistent between the different camera views (Geiger et al. 2012) (Fig. 2).
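To illustrate this step, a minimal Python sketch is given below. It assumes the pixel locations along each gridline have already been extracted from the gradient image; the second-order curve fit and the numerical node search are illustrative and do not reproduce the exact in-house implementation.

```python
import numpy as np

def fit_gridline(points, order=2):
    """Fit a parametric polynomial curve gamma_i(t) through pixel locations
    sampled along one gridline; `points` is an (M, 2) array of (x, y)."""
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(points, axis=0), axis=1))]
    t = d / d[-1]                              # normalized parameter t in [0, 1]
    return np.polyfit(t, points, order)        # coefficients, one column per coordinate

def node_from_curves(c_a, c_b, samples=201):
    """Coarse numerical intersection of two fitted gridline curves: the node x_j
    is taken at the closest approach of the two sampled curves."""
    t = np.linspace(0.0, 1.0, samples)
    pa = np.stack([np.polyval(c_a[:, 0], t), np.polyval(c_a[:, 1], t)], axis=1)
    pb = np.stack([np.polyval(c_b[:, 0], t), np.polyval(c_b[:, 1], t)], axis=1)
    dist = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=2)
    ia, ib = np.unravel_index(np.argmin(dist), dist.shape)
    return 0.5 * (pa[ia] + pb[ib])
```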

2.2 Distortion correction

Following the path of an optical ray for a single camera in Fig. 2, the otherwise linear path is refracted across the air/water interface (Belden 2013). This refraction causes optical distortions in the image plane of each camera and, therefore, we rectify the image plane by dewarping the optical distortion. The coordinates \({\mathbf {x}}=[x \; y]^T\) in the image plane are mapped to distortion-corrected image coordinates \(\hat{{\mathbf {x}}}=[{\hat{x}} \; {\hat{y}}]^T\) by determining a distortion mapping \(\hat{{\mathbf {x}}}=m({\mathbf {x}})\). This mapping ensures that collinear points in the object domain are projected as collinear points in the dewarped image plane and, therefore, supports linear ray tracing. For moderate optical distortions, a polynomial distortion map (Soloff et al. 1997) is sufficient. In this study, we write the distortion map as

$$\begin{aligned} \begin{aligned} \hat{{\mathbf {x}}}&= {\mathbf {x}} + \sum _k {\mathbf {c}}_k \phi ^k( {\mathbf {x}} ) = {\mathbf {x}} + {\mathbf {c}}_1 x^2 + {\mathbf {c}}_2 xy + {\mathbf {c}}_3 y^2 \\&\quad + {\mathbf {c}}_4 x^3 + {\mathbf {c}}_5 x^2y + {\mathbf {c}}_6 xy^2 + {\mathbf {c}}_7 y^3. \end{aligned} \end{aligned}$$
(1)
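As a concrete illustration, the distortion map of Eq. 1 can be evaluated as follows, given a set of coefficients \({\mathbf {c}}_k\); the sketch below is a minimal Python version, not the implementation used in this work.

```python
import numpy as np

def dewarp(x, c):
    """Apply the polynomial distortion map of Eq. 1.
    x : (N, 2) image coordinates; c : (7, 2) array of coefficient vectors c_k
    multiplying the quadratic and cubic monomials phi^k(x)."""
    xs, ys = x[:, 0], x[:, 1]
    phi = np.stack([xs**2, xs * ys, ys**2,
                    xs**3, xs**2 * ys, xs * ys**2, ys**3], axis=1)   # (N, 7)
    return x + phi @ c
```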

The distortion map is determined by rectifying the curved gridlines in the N original calibration images. We follow Devernay and Faugeras (2001) and minimize the percentage of deflection along the gridlines. We consider the nodes \({\mathbf {x}}_{j}^n\) along a particular gridline \(\gamma _i^n(t)\) and their images \(\hat{{\mathbf {x}}}_{j}^n\) and \({\hat{\gamma }}_i^n(t)\) in the dewarped image plane through the map \(m({\mathbf {x}})\). We fit a straight line \({\hat{\ell }}_i^n\) through the nodes \(\hat{{\mathbf {x}}}_{j}^n\) and compute the point-line distance \(d(\hat{{\mathbf {x}}}_{j}^n,{\hat{\ell }}_{i}^n)\). The parameters \({\mathbf {c}}_k\) defining the distortion map are then determined by solving a minimization problem over all gridlines in all calibration images:

$$\begin{aligned} \min _{{\mathbf {c}}_k} \sum _{i,n} \sum _{ \begin{array}{c}\mathrm {nodes}\ j \\ \mathrm {along}\ {\hat{\gamma }}_i^n \end{array} } \frac{d(\hat{{\mathbf {x}}}_{j}^n,{\hat{\ell }}_{i}^n)^2}{\left\| {\hat{\gamma }}_i^n(1)-{\hat{\gamma }}_i^n(0) \right\| ^2}. \end{aligned}$$
(2)

This minimization problem can be solved efficiently using a steepest descent algorithm and the numerical optimization techniques described in Boyd and Vandenberghe (2004). This approach directly extends to larger optical distortions requiring more elaborate distortion models (see Appendix A for the air/water interface and Supplementary-Fig. 8). An example of a dewarped image is shown in Fig. 2.
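The minimization of Eq. 2 can also be phrased compactly as a non-linear least-squares problem. The sketch below, which reuses the `dewarp` helper above, delegates the minimization to SciPy's `least_squares` rather than the steepest-descent iteration described in the text; it is meant to show the structure of the objective, not the exact implementation.

```python
import numpy as np
from scipy.optimize import least_squares

def deflection_residuals(c_flat, gridlines):
    """Residuals of Eq. 2: for each gridline (a (J, 2) array of node positions),
    the point-line distances of its dewarped nodes to the best-fit straight line,
    normalized by the end-to-end length of the dewarped gridline."""
    c = c_flat.reshape(7, 2)
    res = []
    for nodes in gridlines:
        xh = dewarp(nodes, c)
        d = xh - xh.mean(axis=0)
        u, _, _ = np.linalg.svd(d.T @ d)                        # principal direction of the nodes
        dist = np.abs(d[:, 0] * u[1, 0] - d[:, 1] * u[0, 0])    # point-line distance
        res.append(dist / np.linalg.norm(xh[-1] - xh[0]))
    return np.concatenate(res)

# gridlines: list of node arrays, one per gridline and calibration image
# sol = least_squares(deflection_residuals, np.zeros(14), args=(gridlines,))
```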

Fig. 2

Image processing and the geometry of the optical path. a Processed calibration image with the identified gridlines fitted as second-order polynomial curves (green lines) and the nodes at the gridline intersections (red squares). The gridlines are numbered from \(i=1\) to 9 and the nodes from \(j=1\) to 20. b The dewarped calibration image corresponding to a in which the gridlines are rectified for minor optical distortion using the mapping of Eq. 1. Supplementary-Fig. 8 provides an example of dewarping for severe optical distortion. c The geometry of the optical path where the optical rays are refracted across the air/water interface (neglecting the acrylic window for simplicity) and the positioning of the (virtual-)camera coordinate system \([{\tilde{x}} \; {\tilde{y}} \; k]^T\) with the origin at the camera center \({\mathbf {X}}^c\) in world coordinate system \([X \; Y \; Z]^T\). d The local coordinate system \([X^o \; Y^o]^T\) attached to the plane of the checkerboard. The indexing i corresponds to the gridlines and j to the locations of the nodes

2.3 Camera calibration and projective geometry

We consider a physical point in the object domain \({\mathbf {X}}\) of coordinates \({\mathbf {X}}=[X \; Y \; Z]^T\) in a world coordinate system and its projected image \(\hat{{\mathbf {x}}} = [{\hat{x}} \; {\hat{y}}]^T\) in the dewarped image plane of a single camera. The calibration is defined by the mapping function F, such that \(\hat{{\mathbf {x}}} = F({\mathbf {X}})\). Our method uses the framework of projective geometry to express the mapping function F and implicitly assumes a pinhole camera model. In the following, we outline the main notations used in projective geometry; a more complete introduction can be found in Hartley and Zisserman (2004).

We make use of augmented vectors to represent points in both the image plane and the object domain. The coordinates in the dewarped image plane \(\hat{{\mathbf {x}}}\) are augmented to the ray-tracing vector \(\tilde{{\mathbf {x}}}\) such that \(\tilde{{\mathbf {x}}} = [k{\hat{x}} \; k{\hat{y}} \; k]^T\), where k is a scaling parameter in the direction of the principal optical axis. The associated inverse function that projects \(\tilde{{\mathbf {x}}}\) back to \(\hat{{\mathbf {x}}}\) is defined as the projection \(p(\tilde{{\mathbf {x}}}) =[{\tilde{x}} \; {\tilde{y}}]^T/k=\hat{{\mathbf {x}}}\). Similarly, the world coordinates \({\mathbf {X}}\) are augmented to a homogeneous vector as \(\tilde{{\mathbf {X}}}=[X \; Y \; Z \; 1]^T\) (Hartley and Zisserman 2004). Using augmented vectors, a geometric transformation consisting of a rotation and a translation is written simply as a matrix multiplication \([R \ {\mathbf {t}}] \tilde{{\mathbf {X}}}\), where R is a rotation matrix and \({\mathbf {t}}\) is a translation vector.
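In code, the augmentation and the projection p are one-liners; the following minimal sketch fixes the conventions used in the remainder of this section (illustrative only).

```python
import numpy as np

def augment(X):
    """World point X = [X, Y, Z] -> homogeneous vector X~ = [X, Y, Z, 1]."""
    return np.append(np.asarray(X, dtype=float), 1.0)

def project(x_tilde):
    """Projection p(x~): divide by the scaling parameter k to recover x^."""
    return x_tilde[:2] / x_tilde[2]
```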

With these notations, the mapping function can be written in the following form, which is widely used in projective geometry (Hartley and Zisserman 2004):

$$\begin{aligned} \hat{{\mathbf {x}}} = F({\mathbf {X}}) = p( K [R \ {\mathbf {t}}] \tilde{{\mathbf {X}}}). \end{aligned}$$
(3)

Here K is the \(3 \times 3\) camera calibration matrix and \([R \ {\mathbf {t}}]\) is a \(3 \times 4\) matrix, with R the \(3 \times 3\) rotation matrix, and \({\mathbf {t}}\) the \(3 \times 1\) translation vector. The matrix K in Eq. 3 has the following form:

$$\begin{aligned} K=\begin{bmatrix} \alpha _x & \quad s& \quad p_x \\ 0& \quad \alpha _y& \quad p_y \\ 0& \quad 0& \quad 1 \end{bmatrix}, \end{aligned}$$
(4)

where \(\alpha _x= f r_x\) and \(\alpha _y=f r_y\) are scale factors, with f the focal length of the lens in \(\mathrm {mm}\) and \(r_x\) and \(r_y\) the pixel pitch of the sCMOS sensor in \((\mathrm {px}/ \mathrm {mm})\). The parameter s is the pixel skew, characterizing the angle between the x and the y pixel axes, and \([p_x \ p_y]^T\) are the coordinates of the principal point at the intersection between the optical axis and the dewarped image plane (Hartley and Zisserman 2004). The elements of K are often referred to as the intrinsic camera parameters, representing the characteristic properties of the camera itself (Hartley and Zisserman 2004), while \([R \ {\mathbf {t}}]\) are referred to as the extrinsic parameters, representing the position and orientation of the camera with respect to the world coordinate system (Fig. 2).
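Putting Eqs. 3 and 4 together, the full mapping F is a single matrix product followed by the projection p. A minimal sketch, with placeholder numbers in K that are not the calibrated values of this study:

```python
import numpy as np

def mapping_F(X, K, R, t):
    """Mapping function F of Eq. 3: x^ = p(K [R t] X~) for a world point X."""
    X_tilde = np.append(np.asarray(X, dtype=float), 1.0)   # homogeneous world point
    x_tilde = K @ np.column_stack([R, t]) @ X_tilde        # K [R t] X~
    return x_tilde[:2] / x_tilde[2]                        # projection p(.)

# Illustrative camera matrix K of Eq. 4 (placeholder values):
K = np.array([[4000.0,    0.0, 1280.0],     # alpha_x,  s,        p_x
              [   0.0, 4000.0, 1080.0],     # 0,        alpha_y,  p_y
              [   0.0,    0.0,    1.0]])
```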

Together, K, R and \({\mathbf {t}}\) define the mapping function F of Eq. 3 and have to be determined for each of the cameras separately. In the following, we use the superscript \(c=1 \cdots 4\) and the notations \(K^{\text {c}}\), \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) when we distinguish explicitly between the different cameras. We omit the superscript for clarity when no distinction between the cameras is needed; the details of the algorithms can be found in the Appendices.

2.4 Single-camera calibration

First, the camera matrix K is determined for each of the four separate cameras by calibrating a single camera using the method developed by Zhang (2000). We consider a local coordinate system \({\mathbf {X}}^o=[X^o \; Y^o \; Z^o]^T\) attached to the planar checkerboard in the object domain, where \(Z^o=0\) corresponds to the plane of the checkerboard. In this coordinate system, the J nodes at the intersections between the gridlines have known coordinates \({\mathbf {X}}^o_j=[X_j^o \ Y_j^o \ 0]^T\). The nodes \({\mathbf {X}}^o_j\) are mapped to their images \(\hat{{\mathbf {x}}}_j^n\) following the formalism used in Eq. 3. This transformation can be written as \(p( K [R^n \ {\mathbf {t}}^n] \tilde{{\mathbf {X}}}^o_j)\), where the rotation matrix \(R^n\) and translation vector \({\mathbf {t}}^n\) characterize the position of the checkerboard in the object domain. Geometrically, this corresponds to a rotation and translation of the checkerboard plane in the object domain, followed by a projection on the image plane. The camera calibration matrix K and the positioning of the checkerboard, characterized by \(R^n\) and \({\mathbf {t}}^n\), are determined following Zhang (2000) and are refined by minimizing the following functional:

$$\begin{aligned} \min _{K, R^n, {\mathbf {t}}^n} \sum _{j,n} \left\| \hat{{\mathbf {x}}}^n_j - p( K [R^n \ {\mathbf {t}}^n] \tilde{{\mathbf {X}}}_j^o ) \right\| ^2 . \end{aligned}$$
(5)
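A practical way to carry out this step is OpenCV's `calibrateCamera`, which implements Zhang's method. The sketch below assumes the dewarped node coordinates of the N checkerboard images for one camera are available; OpenCV's internal lens-distortion model is switched off because distortion has already been corrected in Sect. 2.2. The node layout (a 4 × 5 grid of intersections with 0.30 m pitch) follows the checkerboard described in Sect. 2; the sketch is illustrative, not the exact implementation used here.

```python
import numpy as np
import cv2

# Checkerboard nodes in the local coordinate system of the target (Z^o = 0):
# 5 x 6 tiles give J = 20 interior intersections spaced by the tile size 0.30 m.
grid = np.mgrid[0:5, 0:4].T.reshape(-1, 2) * 0.30
obj_nodes = np.hstack([grid, np.zeros((grid.shape[0], 1))]).astype(np.float32)

def calibrate_single_camera(image_nodes, image_size):
    """Zhang (2000) calibration of one camera from dewarped node coordinates.
    image_nodes : list of (20, 2) float32 arrays, one per calibration image.
    image_size  : (width, height) of the sensor in pixels."""
    obj = [obj_nodes] * len(image_nodes)
    flags = (cv2.CALIB_ZERO_TANGENT_DIST | cv2.CALIB_FIX_K1 |
             cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj, image_nodes, image_size, None, None, flags=flags)
    return K, rvecs, tvecs    # K of Eq. 4 and the checkerboard poses R^n, t^n
```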

2.5 Multiple-camera calibration

To complete the calibration over the multiple cameras, we determine the rotation \(R^{\text {c}}\) and the translation \({\mathbf {t}}^{\text {c}}\) representing the position of each of the cameras in the world coordinate system. \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) correspond to the extrinsic camera parameters described in Sect. 2.3, and a first estimate is deduced directly from the single-camera calibrations performed in the previous step. Selecting two different calibrated cameras, we use the Kabsch algorithm (Kabsch 1976) to estimate the relative position of the two cameras by comparing the positions of the checkerboards, \(R^{n,{\text {c}}}\) and \({\mathbf {t}}^{n,{\text {c}}}\), for these two views. Considering all camera pairs, we determine the relative positions between all the views and deduce a first estimate for \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\); see Appendix B for details. Next, for each of the N calibration images, we estimate the position \(R^n\) and \({\mathbf {t}}^n\) of the checkerboard in the world coordinate system by averaging the position estimates \(R^{n,{\text {c}}}\) and \({\mathbf {t}}^{n,{\text {c}}}\) obtained from the four separate single-camera calibrations.
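One way to realize the Kabsch step is to express the reconstructed checkerboard nodes of a shared calibration image in the frames of both cameras and fit the rigid transformation between the two point sets; the pairing and averaging over all views then follow Appendix B. A minimal sketch of the core algorithm (an illustration, not the exact implementation):

```python
import numpy as np

def kabsch(P, Q):
    """Kabsch (1976): rotation R and translation t that best map the point set
    P onto Q (both (M, 3) arrays), minimizing ||R P_m + t - Q_m||^2."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t
```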

In the last step, we compute the final values for \(K^{\text {c}}\), \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) by minimizing the reprojection errors from all cameras and calibration images. For this optimization step, we use as initial conditions the camera matrices \(K^{\text {c}}\) obtained in Sect. 2.4, the positions \(R^{\text {c}}\) and \({\mathbf {t}}^{\text {c}}\) obtained from the Kabsch algorithm, and the estimates of the checkerboard positions \(R^n\) and \({\mathbf {t}}^n\). We define the reprojection error \(\varepsilon ^{n,{\text {c}}}_j\) for each of the N calibration images and in each camera view as

$$\begin{aligned} \varepsilon ^{n,{\text {c}}}_j = \frac{1}{\sqrt{ \hat{{\mathcal {A}}}^{n,{\text {c}}}_j}} \left\| \hat{{\mathbf {x}}}^{n,{\text {c}}}_j - p\left( K^{\text {c}} [R^{\text {c}} \ {\mathbf {t}}^{\text {c}}] \begin{bmatrix} R^n&{\mathbf {t}}^n \\ {\mathbf {0}}^T&1 \end{bmatrix} \tilde{{\mathbf {X}}}_j^o \right) \right\| . \end{aligned}$$
(6)

The reprojection error \(\varepsilon ^{n,{\text {c}}}_j\) is normalized by \(\hat{{\mathcal {A}}}^{n,{\text {c}}}_j\), which is the area in pixels of a single tile on the checkerboard projection in the dewarped image plane (see Appendix C). Hence, \(\varepsilon ^{n,{\text {c}}}_j\) provides a measure of the error relative to the size of the checkerboard. This error is relatively independent of the location of the calibration target within the large depth of field and the positions of the cameras and, therefore, independent of the apparent size of the checkerboard in the image plane. The final parameters for the calibration function F for each view are obtained from the minimization of the following summation:

$$\begin{aligned} \min _{K^{\text {c}}, R^{\text {c}}, {\mathbf {t}}^{\text {c}}, R^n, {\mathbf {t}}^n} \sum _{{\text {c}}}\sum _{j,n} \left( \varepsilon ^{n,{\text {c}}}_j \right) ^2. \end{aligned}$$
(7)
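The minimization of Eq. 7 is a bundle-adjustment-type non-linear least-squares problem over the intrinsic parameters, the camera poses and the checkerboard poses. The sketch below shows one possible parametrization (rotation vectors for the rotations and a single normalizing tile area per view) using SciPy; it illustrates the structure of the objective, not the exact implementation used here.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(params, obs, obj_nodes, n_cams, n_boards):
    """Residuals of Eq. 7. Per camera, `params` packs five intrinsics
    (alpha_x, alpha_y, s, p_x, p_y) and a pose (rotation vector, translation),
    followed by one pose per checkerboard. obs[(c, n)] holds the dewarped
    nodes (J, 2) and a normalizing tile area for that view (Eq. 6)."""
    cam, brd = np.split(params, [n_cams * 11])
    res = []
    for c in range(n_cams):
        a_x, a_y, s, p_x, p_y = cam[c*11:c*11 + 5]
        K = np.array([[a_x, s, p_x], [0.0, a_y, p_y], [0.0, 0.0, 1.0]])
        Rc = Rotation.from_rotvec(cam[c*11 + 5:c*11 + 8]).as_matrix()
        tc = cam[c*11 + 8:c*11 + 11]
        for n in range(n_boards):
            if (c, n) not in obs:
                continue
            x_hat, tile_area = obs[(c, n)]
            Rn = Rotation.from_rotvec(brd[n*6:n*6 + 3]).as_matrix()
            tn = brd[n*6 + 3:n*6 + 6]
            Xw = obj_nodes @ Rn.T + tn               # checkerboard nodes in world frame
            xt = (Xw @ Rc.T + tc) @ K.T              # K [R^c t^c] applied to all nodes
            proj = xt[:, :2] / xt[:, 2:3]            # projection p(.)
            res.append(((proj - x_hat) / np.sqrt(tile_area)).ravel())
    return np.concatenate(res)

# refined = least_squares(reprojection_residuals, params0,
#                         args=(obs, obj_nodes, n_cams, n_boards))
```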

The resulting camera calibration is shown in Fig. 3.

Fig. 3

The resulting camera calibration. a The calibrated views, including physical scales on a reference plane at \(k=1 \ \mathrm {m}\) in the depth of field for each view. b Three-dimensional reconstruction of a random selection of checkerboards used for the calibration, with the reconstructed (virtual) cameras shown in yellow in front of the acrylic window of Fig. 1

3 Assessment of the calibration method

In the following, we evaluate the performance of the calibration. First, we report the extrinsic and intrinsic camera parameters in Table 1 and discuss their physical interpretation. Second, we assess the robustness and convergence of the method as a function of the number of calibration images used. Third, we study the spatial accuracy of the calibration and identify the sources of error.

3.1 Intrinsic and extrinsic camera parameters

First, we consider the numerical values of the extrinsic camera parameters obtained from a calibration, see Table 1. As discussed in Sect. 2.3, these parameters characterize the spatial position and orientation of each camera. The reconstructed positions of the cameras are in agreement with the experimental setup, with cameras 4 and 1 positioned above cameras 3 and 2, and cameras 4 and 3 positioned on the left-hand side while cameras 1 and 2 are located on the right-hand side (as shown in Fig. 1). Furthermore, the relative distances between cameras deduced from Table 1 are also in agreement with the experimental scene, with a horizontal distance between cameras of \(\varDelta W\approx 5.6 \; \mathrm {m}\) and a vertical distance of \(\varDelta H \approx 1.2 \; \mathrm {m}\). Likewise, the reconstructed camera orientations are consistent, with the cameras on the right-hand side oriented at a positive angle \(\alpha\), while the cameras on the left-hand side are oriented at a comparable negative angle \(\alpha\).

In addition to reconstructing the positions of the cameras, the calibration procedure reconstructs the intrinsic camera properties, which we compare to the specifications of the instrumentation. The coefficients of the camera calibration matrix K of Eq. 4 are provided for each camera in Table 1. We focus on the values of the focal length of the lenses and deduce an effective focal length \(f_{\mathrm {eff}}\) directly from the coefficients as

$$\begin{aligned} f_{\mathrm {eff}}=\sqrt{\frac{\alpha _x \alpha _y {\tilde{J}} {\tilde{n}}^2}{ r^2 }}, \end{aligned}$$
(8)

where r is the resolution of the camera sensor in \(\mathrm {px/mm}\), which is known from the camera specifications, \({\tilde{J}}\) represents a correction factor for the image expansion due to optical distortion (see Appendix D) and \({\tilde{n}}=n_{\mathrm {air}} / n_{\mathrm {water}}\) corrects for the magnification due to refraction at the air/water interface. Our calibration yields values for the effective focal length of the four cameras of \(f_{\mathrm {eff}} = 23.73 \pm 0.82 \; \mathrm {mm}\). This reconstruction of the focal length lies within 1–2 \(\%\) of the actual focal length \(f_{\mathrm {lens}} = 24 \; \mathrm {mm}\) of the lenses that were used. Hence, we find that both the extrinsic and intrinsic camera parameters deduced from our calibration procedure are in agreement with the dimensions and characteristics of our experimental setup.
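For reference, the conversion from the calibrated scale factors to the effective focal length of Eq. 8 is a one-line computation (a sketch; the values of r and \({\tilde{J}}\) are taken from the camera specifications and Appendix D, respectively):

```python
import numpy as np

def effective_focal_length(alpha_x, alpha_y, r, J_tilde,
                           n_air=1.0003, n_water=1.363):
    """Effective focal length of Eq. 8 in mm, from the calibrated scale factors
    alpha_x, alpha_y, the sensor resolution r (px/mm) and the
    distortion-expansion correction J_tilde of Appendix D."""
    n_tilde = n_air / n_water
    return np.sqrt(alpha_x * alpha_y * J_tilde * n_tilde**2 / r**2)
```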

Table 1 Numerical values of the calibration parameters

3.2 Convergence and robustness

The camera calibration is obtained by minimizing the sum of the squared reprojection errors \(\varepsilon _j^{n,{\text {c}}}\) over all four cameras, all N calibration images and all nodes on the checkerboard, see Eq. 7. The camera calibration converges to low values for \(\varepsilon _j^{n,{\text {c}}}\), see Table 1. Here, the average normalized reprojection error is on the order of \(\varepsilon _j^{n,{\text {c}}} \sim 2 \%\) of the size of a checkerboard tile, which corresponds to an error of less than \(1 \ \mathrm {cm}\).

The camera calibration requires a minimum of two non-coplanar checkerboard images (Zhang 2000). Increasing the number of calibration images increases the sampling of the measurement volume and, therefore, improves the reliability of the calibration. We further characterize the performance of the method as a function of the number of calibration images used by randomly selecting different subsets of checkerboard images.

We first consider the effective focal length \(f_{\mathrm {eff}}\) of Eq. 8, deduced from the matrix K, to characterize the quality of the reconstructed positions of the checkerboards and the cameras. In Fig. 4, we select different subsets of calibration images ranging from \(N=2\) to \(N=50\) images and compute \(f_{\mathrm {eff}}\) from the associated calibration. Figure 4a represents the ratio between \(f_{\mathrm {eff}}\) and \(f_{\mathrm {lens}}\) as a function of N. With a low number of \(N=2\) to 10 calibration images, the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) is far from one and the focal length is under- or over-estimated by up to \(50\mathrm {\%}\), indicating an unreliable calibration. Increasing the number of calibration images to \(N=15\) brings the estimated focal length \(f_{\mathrm {eff}}\) close to \(f_{\mathrm {lens}}\) and represents a clear improvement of the calibration. Increasing N beyond \(N=15\), the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) does not change significantly (Fig. 4a).

Second, we consider the reprojection errors \(\varepsilon ^{n,{\text {c}}}_j\) in Fig. 4b. For a low number of \(N=2\) to 10 calibration images, the average reprojection error is as high as \(\varepsilon ^{n,{\text {c}}}_j \sim 50\) to \(60 \ \%\). Increasing the number of calibration images from \(N=15\) to 50 further decreases the normalized reprojection error from \(\varepsilon ^{n,{\text {c}}}_j \sim 5\) to \(2 \ \%\) (Fig. 4b and inset). This shows that 15 calibration images are sufficient to achieve a valid calibration. Further increasing the number of calibration images improves the convergence of the camera calibration while the ratio \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\) remains close to 1.

Fig. 4

Quality assessment of the camera calibration procedure for different subsets of randomly selected calibration images. a The ratio of the estimated focal length to the true focal length of the lenses, \(f_{\mathrm {eff}} / f_{\mathrm {lens}}\), as a function of the number of calibration images N. Different symbols and colors indicate different selections of images. b Averaged reprojection errors \(\varepsilon ^{n,{\text {c}}}_j\) for each camera as a function of N. The error bars indicate the standard deviation of the error per camera

3.3 Spatial accuracy of the camera calibration

By inverting the camera matrix K, one can directly associate an optical ray in the object domain to a point in the dewarped image plane. For an ideal calibration, the four optical rays associated with the images of the same point on each of the four cameras should intersect at a unique location in the object domain. In practice, the four optical rays are skew lines and do not intersect at a single point. Here, we characterize the spatial accuracy of our camera calibration by estimating the skewness among the four optical rays.

For this, we use the nodes identified at gridline intersections on the N calibration images. We proceed by evaluating the four optical rays associated with each node of each calibration image. We then triangulate the location of each node by finding the point \({\mathbf {X}}_j^n\) in the object domain that minimizes the sum of the squared distances from the point to the four optical rays. We report for each node the skewness \(s_j^n\) as the average distance from the triangulated location \({\mathbf {X}}_j^n\) to the four optical rays (see Appendix E for more detail).
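The triangulated point and the skewness follow from a small linear system: summing, over the four rays, the projectors onto the planes normal to each ray gives the least-squares intersection in closed form. A minimal sketch, assuming the ray origins and unit directions have been obtained by back-projecting the nodes through the inverted camera matrices:

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares intersection of skew optical rays and its skewness.
    origins, directions : (C, 3) arrays; directions are unit vectors.
    Returns the point X minimizing the summed squared point-to-ray distances
    and the mean point-to-ray distance (the skewness s_j^n)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        P = np.eye(3) - np.outer(d, d)      # projector onto the plane normal to d
        A += P
        b += P @ o
    X = np.linalg.solve(A, b)
    dist = [np.linalg.norm((np.eye(3) - np.outer(d, d)) @ (X - o))
            for o, d in zip(origins, directions)]
    return X, float(np.mean(dist))
```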

The calibration images were acquired over the entire depth of the tank and are used to characterize the spatial accuracy by reporting the skewness as a function of position along the Z-axis of the world coordinate system (Fig. 5a). We find the skewness to be mostly uniform over the large depth of the measurement volume, \(Z=5-25 \ \mathrm {m}\). Our calibration yields a high spatial accuracy, characterized by an average skewness of less than \(1 \ \mathrm {cm}\). Only a slight increase in the skewness can be observed towards the back of the aquarium, although the average skewness still remains below \(1 \ \mathrm {cm}\). This small increase is due to the decrease in spatial resolution and the decrease in angles between the optical paths from the different views.

The variation in skewness over the height and width of the tank is represented in Fig. 5b, c. Figure 5b is a map of the skewness in an XY-plane at the front of the aquarium, while Fig. 5c provides a similar map at the back of the aquarium. Both are qualitatively similar and the skewness remains small over the depth of the tank. For reference, Fig. 5d, e shows the sampling density, which indicates the number of checkerboard images that were used at a location to compute the calibration. It is noteworthy that the center of the tank was better sampled than the sides and the bottom. We find that the skewness is relatively constant over a large part of the measurement volume but increases towards the edges of the tank, where the spatial accuracy is thus reduced. This is likely a result of the lower sampling away from the camera center, as well as unresolved optical distortions at the edges of the image plane.

Fig. 5

Spatial accuracy of the camera calibration. a The average back-projection error in \(\mathrm {cm}\) over the depth of the tank Z. b The back-projection error over the width and height of the measurement volume at the front of the tank from \(Z=4\) to \(Z=17 \ \mathrm {m}\). c The back-projection error at the back of the tank from \(Z=17\) to \(Z=24 \ \mathrm {m}\). d, e The sampling density of checkerboards associated with (b, c), respectively

4 Application to field experiments

We demonstrate the effectiveness of the calibration method by performing three-dimensional measurements in the large aquarium at the Rotterdam Zoo. We begin by focusing on an element that is easily identifiable in each camera view and show that we can accurately triangulate its position. We complete the validation by tracking the three-dimensional positions of fish of various sizes swimming freely over the depth of the tank.

Large predatory fish, such as sharks, swim through the entire aquarium. They provide a good target to evaluate the robustness of the camera calibration, as their sharply tipped fins provide easily recognizable and well-defined features. Figure 6 shows how accurately such features can be triangulated with our calibration method. We first identify the vertex of the right-hand side pectoral fin of a shark in each camera view. Similar to Sect. 3.3, we directly associate the vertices, identified in each of the four images, with four optical rays in the object domain. We triangulate the position of the vertex and find a skewness of only \(0.35 \ \mathrm {cm}\), which demonstrates the accuracy of the method. This small skewness is illustrated in Fig. 6, where, for each camera view, the optical rays associated with the other camera views are projected onto the image plane as epipolar lines (Hartley and Zisserman 2004). The epipolar lines intersect nearly perfectly at the identified vertex of the shark fin. The inset in Fig. 6 presents a close-up view from which one can infer the reprojection error from the slight skew between the optical rays. The associated reprojection error is only \(1.11 \ \mathrm {px}\).
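The epipolar lines in Fig. 6 can be generated directly from the calibration: the optical ray behind the identified point in one camera is sampled at several depths and projected into the other views. A minimal sketch under the pinhole model of Eq. 3 (illustrative, with all distortion already removed by the dewarping):

```python
import numpy as np

def epipolar_line_points(x_hat, K1, R1, t1, K2, R2, t2, depths):
    """Project the optical ray behind a dewarped point of camera 1 into the
    dewarped image plane of camera 2, yielding points on the epipolar line."""
    d_cam = np.linalg.inv(K1) @ np.array([x_hat[0], x_hat[1], 1.0])
    d_world = R1.T @ d_cam                    # ray direction in world coordinates
    c_world = -R1.T @ t1                      # camera-1 center in world coordinates
    line = []
    for k in depths:                          # sample the ray at several depths k
        X = c_world + k * d_world
        x_tilde = K2 @ (R2 @ X + t2)
        line.append(x_tilde[:2] / x_tilde[2])
    return np.array(line)
```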

Fig. 6

Triangulation of the vertex of a shark fin. For each camera view: the point corresponding to the vertex of the shark fin is identified with a marker, while the three lines correspond to the epipolar lines associated with the three markers of the remaining three camera views. The color coding is consistent across the multiple views, for example, the vertex on camera 1 is identified with a red marker and the epipolar lines associated with this point in cameras 2, 3, 4 are red. Likewise, the vertices in cameras 2, 3, 4 and the corresponding epipolar lines are represented in green, blue and magenta, respectively. The inset on camera 2 zooms in on the vertex of the fin and shows that the epipolar lines intersect at pixel accuracy

Further, we demonstrate that our calibration supports the simultaneous tracking of several fish over large distances by following a small group of six tuna (Fig. 7). By triangulation, we reconstruct the three-dimensional, time-resolved position of the group (Fig. 7b). Using an in-house automated tracking code, we track the six individual fish swimming away from the cameras over a large distance of \(7 \ \mathrm {m}\). Together with the triangulated shark fin, this experiment demonstrates the robustness and accuracy of the calibration and its potential use in large-scale field experiments. The calibration supports accurate triangulation over a large distance along the depth of field. This makes it of interest for further applications such as the tracking of particles (Schanz et al. 2015), birds (Attanasi et al. 2015), insects (van der Vaart et al. 2019), fish and other animals, and the study of fluid motion using tomographic methods (Elsinga et al. 2006) in large-scale field experiments.

Fig. 7

Tracking and triangulation of a group of tuna fish inside the measurement volume. a The group of tracked tuna fish in the image plane. b Top view of the tuna fish swimming over a distance of \(\sim 7 \ \mathrm {m}\) within the depth of the tank. c The three-dimensional reconstruction of the fish swimming in object space including the camera position of Fig. 3. In all visualizations, the color code of the tracks corresponds to the linear velocity in object space

5 Conclusion

Here, we have described and characterized a versatile, accurate and robust calibration method, which supports the three-dimensional tracking and triangulation of multiple fish. Our method is of particular interest to large-scale field experiments, where spatial access to the measurement volume is limited and laboratory equipment to precisely position the target cannot be installed. The method does not require a large calibration target to be moved with known displacements. Rather, we use a freely moving checkerboard calibration target, which is much smaller than the measurement volume itself. The calibration mapping uses the framework of projective geometry, which assumes linear ray tracing. It combines a pinhole camera model for the linear ray tracing with the non-linear camera mappings commonly used in experimental fluid mechanics to correct for optical distortion. All the algorithms necessary for the implementation of the calibration method are described here, with details provided in the Appendices.

The calibration method has been implemented to obtain an accurate and consistent multiple-view camera calibration in the large-scale aquarium of the Rotterdam Zoo. Here, the calibration target was positioned arbitrarily by a team of divers. We have characterized the spatial accuracy of the calibration to be less than \(2 \%\) of the size of a checkerboard tile, corresponding to less than \(1 \ \mathrm {cm}\) over the entire measurement volume that spans several tens of meters. When correcting for the difference in refractive index between air and seawater, we find that our method reconstructs both the camera positions and the intrinsic properties of the cameras, such as the focal length of the lenses. Selecting different subsets of calibration images in a quality assessment, we show that in our case 15 calibration images are sufficient to achieve a valid calibration. Increasing the number of checkerboard images and better sampling the measurement volume further improves the spatial accuracy. The resulting camera calibration allows accurate imaging and three-dimensional tracking over a large measurement volume, here with application to biological fluid mechanics and the tracking of fish.