1 Introduction

This article presents an oblique underwater laser triangulation sensor system, including its measurement principle and system layout as well as a detailed accuracy analysis. The presented sensor model consists of a laser line projector and a camera, both placed in the same housing, enabling a compact and flexible system design with a fixed base. The target application of this sensor is sub-millimetre-accurate measurement at close ranges of up to 25 cm. When mounted on an unmanned water vehicle (UWV), this triangulation sensor enables mapping of the surface of lake and river bottoms in shallow water zones. It is also suitable for measurements in hydromechanical laboratory channels. The system is composed of affordable components, enabling highly accurate underwater measurements at low cost. It furthermore enables multiple simultaneous measurements along the laser line with a high lateral resolution, and it closes the gap below the minimum depth of echo sounders, which is typically at least 20 cm. Besides the mapping of rivers and lakes, another application could be the inspection of immersed industrial or archaeological objects, e.g. from an unmanned underwater vehicle, where high accuracies are needed.

In the following sections, the measurement principle of the presented underwater triangulation system will be described. The calibration concept will be outlined and tested for a prototype of the sensor. A theoretical assessment of the error influences of various calibration parameters on the 3D measurement will be presented, and achievable accuracies of the prototype will be practically evaluated by measuring test objects inside a water tank.

2 Background and State of the Art

Laser lightsheet triangulation is a well-established optical measurement method, frequently used for instance for part inspection in industrial applications. When used in air, usually a red line laser and a camera are mounted convergently on a rigid base. The line laser emits a lightsheet that can be modelled as a plane. The object to be measured is placed underneath the triangulation sensor; the laser line is reflected on the object and projected into the camera image, where it can be detected. When the relative orientation between camera and laser diode is known, object coordinates along the line can be determined by spatial intersection of image rays with the laser plane. To optimize depth accuracy and to exploit the camera field of view, camera and laser are tilted towards each other, resulting in preferably near-orthogonal intersection angles. Adding movement to either the triangulation sensor or the object turns the triangulation sensor into an optical 3D measurement device.
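To illustrate the in-air measurement principle, the spatial intersection reduces to a simple ray-plane intersection. The following Python sketch is illustrative only (the function and variable names are ours, not from a specific implementation):

import numpy as np

def ray_plane_intersection(c, v, q, n):
    # Intersect an image ray (projection centre c, direction v) with the
    # laser plane given by a point q on the plane and its normal n
    lam = np.dot(q - c, n) / np.dot(v, n)   # ray parameter at the plane
    return c + lam * v                      # 3D object point on the line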

Applying laser triangulation techniques underwater requires some adaptations. A first major difference is the choice of a suitable laser source, necessitated by the increasing opaqueness of water for longer wavelengths (Hale and Querry 1973) and the resulting low water penetration depth for the red or near-infrared wavelengths of typical laser triangulation systems. This will usually lead to the choice of a green or blue laser source. Furthermore, both the laser diode and the camera need to be placed in watertight housings. This results in refraction influences on the laser plane and the image rays at the air-glass and glass-water interfaces. When laser and camera are placed in two separate housings, their interfaces can be arranged in a way that minimizes the refraction influence. For the laser, that is a planar interface placed orthogonally to the laser direction, which only leads to a decrease of the lightsheet opening angle. The camera can either be equipped with a spherical dome port minimizing all refraction effects, or placed behind a planar surface parallel to the image sensor. The planar surface results in radially symmetric distortion effects that can largely be compensated with conventional photogrammetric camera calibration methods (Shortis 2015).

For the design of a flexible modular underwater laser triangulation system, it may be desirable to have camera and laser source placed in one housing. In this case, camera and laser necessarily have to be arranged obliquely to the interface to obtain near-orthogonal intersection angles on the object surface. When the laser lightsheet hits the interface at an oblique angle, it is deformed, resulting in a curved projection profile. Strict geometric modelling for precise underwater 3D measurement therefore requires the consideration of multimedia photogrammetry (with the media air, glass and water), both for the projected laser lightsheet and for the camera. Moreover, several deteriorating effects of measurements through water have to be considered in system design and in the assessment of the accuracy potential.

While a wide range of industrial in-air laser triangulation systems are commercially available, only a few experimental studies of underwater laser triangulation systems have been presented in the literature. Several authors have presented applications of underwater laser triangulation, but mostly either neglected or simplified aspects of multimedia photogrammetry. Tetlow and Spours (1999) described a laser triangulation system with a rotating mirror that can be used from a remotely operated vehicle (ROV) to scan underwater work sites and acquire data for CAD models. They focus on aspects of image processing to segment underwater laser stripe images in turbid water, but do not address multimedia photogrammetry in their geometric modelling and only achieve accuracies of a few centimetres. A similar approach, based on a linear array CCD sensor and a scanning mirror deflecting a laser beam, is shown in Moore and Jaffe (2002). Herein, the refractive index is considered in the depth calculation, but the curvature of the projected line is avoided by an orthogonal system arrangement, and the basis between linear array sensor and scanned line is arranged across-track rather than in-track (as in conventional laser triangulation system design). Ekkel et al. (2015) present a system for the analysis of the quality of underwater welding. They avoid the geometric modelling of the projected lightsheet by imaging it with a stereo camera, whose measurements are processed using a simplified multimedia approach given by Ross (2014). Roman et al. (2010) used a ROV-mounted 532 nm sheet laser and a camera to create high-resolution bathymetric maps of underwater archaeological sites (without considering multimedia photogrammetry aspects) and also reported accuracies at centimetre level. Van der Lucht et al. (2018) developed a triangulation-based underwater laser scanning system consisting of a machine vision camera and a green line laser for the 3D acquisition of semi-submerged structures. They show a model to correct for refraction of the laser line and camera rays at the water–air boundary. They proceed from a pointwise model, but then treat the laser lightsheet as a plane for reasons of calculation stability. This restricts the laser plane to be orthogonal to the water interface. A similar model has been used by Klopfer et al. (2017) in their study on the potential of the Microsoft Kinect range imager for underwater measurements (with the expected result of a penetration depth limited to 30–40 cm due to the wavelength of the light source). Bleier et al. (2019) present a custom-built system consisting of two rotating line lasers and LED flashes installed on each side of a sensor bar of an underwater mining vehicle, also avoiding the necessity of strict multimedia geometry modelling. Matos et al. (2020) present a method for underwater laser triangulation and conduct an accuracy analysis in a depth range of 150 to 290 mm. They consider refraction for the image rays, but assume a planar laser lightsheet. Palomer et al. (2017) replace the planar lightsheet model by an elliptical cone to approximate the multimedia-geometry-induced lightsheet deformation in a system for underwater robot arm manipulation (Palomer et al. 2018). The approach requires a rather complex calibration strategy and still leaves residual errors on the order of 2 mm.

Several authors have also used underwater laser triangulation for environmental monitoring purposes. Gonzales et al. (2007) used a two-camera system, which makes modelling of the lightsheet dispensable, in combination with a simplified geometric model assuming the horizontal sensor axis parallel to the water surface, to observe sedimentation processes in a laboratory channel. Røy et al. (2002) developed a lightsheet technique to analyse the relation between surface roughness and three-dimensional diffusive fluxes of marine sediments at 100 µm resolution. In their case, geometric modelling is less relevant, as only the relative measure of local roughness is determined rather than absolute depth coordinates. Noss et al. (2018) developed a hand-held laser triangulation system with a green laser to measure riverbed topography for micro- and meso-habitat surveys in streams, but do not mention the handling of refraction effects.

Meanwhile, commercial underwater laser triangulation systems are also available, for instance the Voyis scanners from seatronics (2020). They provide scanners with different baselines for different depth ranges, with accuracies reaching the sub-millimetre range. The scanners use special underwater laser and camera optical heads to avoid the multimedia photogrammetric modelling mentioned above, which results in rather high system costs. The same holds for the Seavision system from Kraken Robotics (2021), which uses a tricolour laser system in combination with three underwater cameras to produce 3D point clouds with RGB colour attributes. Newton Labs (2020) also offers a range of underwater triangulation scanners with different base lengths. A review of active optical underwater 3D scanners, covering both the literature and commercially available products, is given in Castillón et al. (2019).

The majority of previous studies assume the laser lightsheet to remain planar underwater. This requires a specific setup where the laser plane is arranged orthogonal to the water interface. When the laser lightsheet hits the interface at an oblique angle, it is deformed. A rotation of the laser plane around one of the axes of the interface results in a curved lightsheet underwater and therefore in a curved projection line on a planar object surface (Figs. 1 top, 2 left). The curved lightsheet can be parameterised by an elliptical cone, which can then be intersected with the image observations, as shown by Palomer et al. (2018). When the laser plane is furthermore rotated around a second axis, the lightsheet deformation becomes more complex, e.g. S-shaped (Figs. 1 bottom, 2 right).

Fig. 1
figure 1

Deformation of the laser lightsheet at the air-glass and glass-water interfaces, rotated around the y-axis by 45° (top and bottom) and additionally around the x-axis by 30° (bottom)

Fig. 2
figure 2

Deformed laser line on a planar underwater surface, rotated around the y-axis by 45° (left and right) and additionally around the x-axis by 30° (right)

This paper uses an approach that considers all rotations of the laser and the camera relative to the interfaces. The measurement method is therefore suitable for a variety of underwater triangulation sensor systems in which the laser and the camera are placed in the same housing. Strict geometric multimedia modelling enables high-precision measurements regardless of the orientation of the components. The calibration procedure was already outlined in a more theoretical manner in Sardemann et al. (2021). For the present article, a system consisting of a green laser line projector and an industrial camera was set up to analyse the applicability of the method both in theory and in practical experiments. The sensor system was tested in static as well as in scanning mode, where it was moved linearly along a static object.

The remainder of this paper is structured as follows. First, the design of the evaluated sensor system is presented, followed by a description of the methods used for measurement and calibration. After a short analysis of the covered measurement volume, a theoretical accuracy analysis is presented. It shows the influence of the calibrated input parameters on the error of the measured depth and presents the achievable depth accuracy at three depths within the measurement volume. The estimated accuracy is then validated in four experiments using two test objects. The test objects are first measured with a stationary sensor using only one scanline. Then, they are scanned with a moving sensor system, using an additional camera and the 6-DOF method to determine the position and orientation of each scanline. The multiple scanlines are merged into a dense point cloud of the object. The article closes with a summary and an outlook.

3 System Design

Our underwater triangulation system consists of an industrial camera and a green line laser, both mounted in the same watertight glass housing. The applied camera and laser are a Mako G-419 (Allied Vision 2021) and a Flexpoint MVmicro (Laser Components 2020). Having both sensors in the same housing offers the advantage of a compact and flexible modular system design with a fixed base that can be used on unmanned surface or underwater vehicles, where at least the bottom interface of the housing is immersed in water. However, this constrains camera and laser to be oblique to the interface, making the modelling of the spatial intersection of image ray and laser lightsheet for the determination of 3D coordinates a challenging task (Sect. 4).

The goal of the presented system is to enable highly accurate measurements for close-range applications at distances of up to 25 cm. Thus, the base between camera and laser was chosen to be approx. 15 cm, with both sensors tilted towards the centre, enabling near-orthogonal intersection angles over the depth range. An exact calculation of the covered measurement volume can be found in Sect. 5. Figure 3 shows the setup, while Table 1 lists its components and parameters. The ground sampling distance (GSD) for the oblique camera setup depends on the measured depth and the image position; it is given for 100 mm depth at the centre of the image.

Fig. 3
figure 3

System setup

Table 1 System parameters

4 Measurement Method

To obtain 3D coordinates of water bottom points, the laser line has to be detected in the image. In theory, the laser brightness decreases perpendicular to the line direction in a Gaussian manner, so its peak could be found by fitting a Gaussian curve. In real environments, this profile is overlaid by a random speckle pattern. Therefore, least squares matching (LSM) with a template consisting of several line profiles is more reliable for determining the centre of the line (Sect. 4.1). The Gaussian intensity profile in line direction may be compensated using a Powell lens.

When the interior and exterior orientations (IOR and EOR) of the camera are known, a corresponding 3D image ray can be calculated for each image point along the line. The image ray can be intersected with sub-beams of the laser lightsheet when the EOR of the laser diode is also known. Herein, refraction at the air-glass and glass-water interfaces must be considered (Sect. 4.2).

The system and its components have to be calibrated. The parameters of interest are the orientation parameters of both the camera and the laser relative to each other and relative to the air-glass and glass-water interfaces (Sect. 4.3). The following paragraphs follow the concepts of Sardemann et al. (2021).

4.1 Line Measurement

As outlined before, the laser appears as a curved line in the image. Depending on the camera-to-laser setup, it will usually have a predominant direction either from top to bottom or from left to right in the image. Adapted to that setup, it makes sense to detect the centre of the line either row- or column-wise. From a precision point of view, it would be favourable to use a monochrome camera with a bandpass filter at the wavelength of the laser. To keep the setup low-cost, an RGB camera was used, considering only the green channel for line measurement. Using an RGB camera also offers the advantage of simultaneously generating an orthophoto of the water bottom as a by-product. Figure 4 shows the line in a measurement image, where the line runs from top to bottom and is therefore measured row-wise.

Fig. 4
figure 4

Laser line in measurement image (left) with row-wise (blue) and patch-based (red) detection for a detail (centre) and the complete line (right). The measurement image has been cropped on both sides

In theory, the intensity of the line decreases in a Gaussian manner perpendicular to the line on the object plane. This suggests conducting a Gaussian fit perpendicular to the line. In a setup where the predominant line direction is vertical, a row-wise fit can be applied to find a first approximation of the centre of the line for every image row. In an iterative process, this may also be used to determine the local direction of the line and to conduct a Gaussian fit perpendicular to it. The result of the row-wise fits is depicted as blue circles in Fig. 4 (centre). It can be observed that the line appearance is overlaid by a strong speckle pattern that influences the fitted peak of the Gaussian curve. Figure 5 shows three adjacent profiles from that line and the resulting Gaussian fits. The fitted peak varies by three pixels. The standard deviation of the fit-based peak was approximately 0.5 pixels in practical experiments.
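As an illustration, such a row-wise Gaussian peak fit can be sketched in Python/SciPy as follows (a minimal sketch; function names and initial values are our assumptions, not the authors' implementation):

import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, offset):
    # Gaussian intensity model of a laser line cross-profile
    return a * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2)) + offset

def fit_row_peak(profile):
    # Fit a Gaussian to one image row (green channel) and return the
    # sub-pixel column of the peak, or None if the fit diverges
    x = np.arange(profile.size, dtype=float)
    p0 = (profile.max() - profile.min(),   # amplitude
          float(np.argmax(profile)),       # approximate peak: brightest pixel
          2.0,                             # assumed initial line width [px]
          float(profile.min()))            # background offset
    try:
        popt, _ = curve_fit(gaussian, x, profile.astype(float), p0=p0)
    except RuntimeError:
        return None
    return popt[1]                         # mu: sub-pixel line centre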

Fig. 5
figure 5

Grey-value profiles of three adjacent rows in the green channel in the centre of Fig. 4 (centre) with fitted Gaussian curves (dashed). The circles depict their peaks (1064.44, 1065.42 and 1067.05 pixel). The vertical lines depict the LSM-based results (1066.77, 1066.95 and 1067.10 pixel). Adjacent profiles are indicated with different colours

Row-wise approaches like the Gaussian fit are strongly influenced by speckle noise, reducing the achievable accuracy. Therefore, a least squares matching (LSM) approach is used to stabilize the detection of the line centre. Herein, a synthetic patch consisting of multiple Gaussian rows is searched for in every row of the measurement image, with only the translation in x′ and a horizontal scale factor enabled out of the standard six affine parameters (Fig. 6). The red line in Fig. 4 and the vertical lines in Fig. 5 show the LSM-based result, using a 15 × 15 pixel patch. A standard deviation of 0.1 to 0.2 pixels was achieved in practical experiments, depending on depth and surface texture.
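A reduced LSM of this kind can be sketched as a small Gauss-Newton iteration (Python/SciPy; a simplified illustration under the assumption of a grey-value image and a synthetic Gaussian template, not the authors' implementation):

import numpy as np
from scipy.ndimage import map_coordinates

def lsm_line_centre(image, template, x0, y0, iterations=10):
    # Match a synthetic patch of stacked Gaussian rows around (x0, y0),
    # estimating only a horizontal shift t and a horizontal scale s
    h, w = template.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    xc = xs - w // 2                      # template-centred x coordinates
    t, s = float(x0), 1.0
    for _ in range(iterations):
        gx = s * xc + t                   # warped sampling grid (x)
        gy = ys + y0 - h // 2             # fixed sampling grid (y)
        patch = map_coordinates(image, [gy, gx], order=1)
        # image gradient in x at the sampling positions
        grad = (map_coordinates(image, [gy, gx + 0.5], order=1) -
                map_coordinates(image, [gy, gx - 0.5], order=1))
        r = (template - patch).ravel()    # grey-value residuals
        # Jacobian w.r.t. (t, s): d(gx)/dt = 1, d(gx)/ds = xc
        A = np.column_stack([grad.ravel(), (grad * xc).ravel()])
        dt, ds = np.linalg.lstsq(A, r, rcond=None)[0]
        t, s = t + dt, s + ds
        if abs(dt) < 1e-4:                # convergence in x'
            break
    return t                              # sub-pixel x' of the line centre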

Fig. 6
figure 6

LSM: patch (left) and matching result (right)

4.2 Determination of 3D coordinates

Each image point along the line can be used to determine a 3D object coordinate. To this end, its image ray needs to be traced through the air-glass and glass-water interfaces. The refracted vector has to be calculated with a 3D representation of Snell's law (a derivation can be found in Glassner (1989)):

$${\overrightarrow{v}}_{2}=\mathrm{snell}\left({\overrightarrow{v}}_{1},{n}_{1},{n}_{2},\overrightarrow{N}\right)= \frac{{n}_{1}}{{n}_{2}}{\overrightarrow{v}}_{1}+\left(\frac{{n}_{1}}{{n}_{2}}\left(-\overrightarrow{N}\cdot {\overrightarrow{v}}_{1}\right)-\sqrt{1+{\left(\frac{{n}_{1}}{{n}_{2}}\right)}^{2}\left({\left(-\overrightarrow{N}\cdot {\overrightarrow{v}}_{1}\right)}^{2}-1\right)}\right)\overrightarrow{N},$$
(1)

where \({\overrightarrow{v}}_{\{\mathrm{1,2}\}}\) are the unit ray vectors in media 1 and 2, \({n}_{\{\mathrm{1,2}\}}\) are the refractive indices of media 1 and 2, and \(\overrightarrow{N}\) is the normal of the interface between the two media.
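In code, Eq. 1 translates into a short function. The following Python sketch assumes normalized input vectors, with the interface normal pointing towards the incidence medium:

import numpy as np

def snell(v1, n1, n2, N):
    # Vector form of Snell's law (Eq. 1): refract the unit ray v1 passing
    # from medium 1 (index n1) into medium 2 (index n2) at an interface
    # with unit normal N pointing towards medium 1
    r = n1 / n2
    c1 = -np.dot(N, v1)                     # cosine of the incidence angle
    disc = 1.0 + r ** 2 * (c1 ** 2 - 1.0)   # radicand of Eq. 1
    if disc < 0.0:
        raise ValueError('total internal reflection')
    return r * v1 + (r * c1 - np.sqrt(disc)) * N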

First, an image ray \({\overrightarrow{v}}_{\mathrm{air}}^{\mathrm{image}}\) in air is calculated from the camera EOR and IOR and the observed image point. Then, Eq. 1 is applied at the air-glass interface, giving the direction of the refracted vector. Its position is determined by intersecting the original ray with the interface. The resulting ray is then refracted a second time at the glass-water interface, resulting in a 3D image vector \({\overrightarrow{v}}_{\mathrm{water}}^{\mathrm{image}}\) inside the water (red line in Fig. 7):

Fig. 7
figure 7

Determination of 3D coordinate by intersection of image point (red x in small image) with laser lightsheet

$${\overrightarrow{v}}_{\mathrm{glass}}^{\mathrm{image}}=\mathrm{snell}\left({\overrightarrow{v}}_{\mathrm{air}}^{\mathrm{image}},{n}_{\mathrm{air}},{n}_{\mathrm{glass}},{\overrightarrow{N}}_{\mathrm{glass}}\right),$$
(2)
$${\overrightarrow{v}}_{\mathrm{water}}^{\mathrm{image}}=\mathrm{snell}\left({\overrightarrow{v}}_{\mathrm{glass}}^{\mathrm{image}},{n}_{\mathrm{glass}},{n}_{\mathrm{water}},{\overrightarrow{N}}_{\mathrm{water}}\right).$$
(3)

The image ray needs to be intersected with the laser lightsheet to obtain a 3D coordinate. Since the deformed lightsheet cannot be parameterized as a whole, it is split into sub-beams that are treated as separate rays \({\overrightarrow{v}}_{\mathrm{air}}^{\mathrm{subbeam}}\) and can be calculated using the exterior orientation of the laser diode. The sub-beams are refracted individually using Eqs. 2 and 3. Each refracted image ray intersects only one specific sub-beam of the laser lightsheet. This sub-beam can be found recursively by splitting the lightsheet into two halves until the distance between the nearest points of the skew image and laser rays falls below a given threshold. The resulting sub-beam \({\overrightarrow{v}}_{\mathrm{water}}^{\mathrm{subbeam}}\) is depicted in magenta in Fig. 7. The measured object point is the intersection of sub-beam and image ray in water:

$${P}_{\mathrm{object}}={\overrightarrow{v}}_{\mathrm{water}}^{\mathrm{subbeam}}\cap {\overrightarrow{v}}_{\mathrm{water}}^{\mathrm{image}}.$$
(4)
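The ray tracing of Eqs. 2 and 3 and the search for the intersecting sub-beam can be sketched as follows (Python, reusing the snell function above; the interface representation and the interval search over the lightsheet opening angle are our illustrative choices, a practical variant of the recursive bisection described in the text):

import numpy as np

def trace_through_housing(v, p, interfaces):
    # Trace a ray (direction v, point p) through a sequence of planar
    # interfaces, each given as (point q, unit normal N, n1, n2);
    # for an image ray this applies Eqs. 2 and 3
    for q, N, n1, n2 in interfaces:
        lam = np.dot(q - p, N) / np.dot(v, N)   # intersect ray and plane
        p = p + lam * v
        v = snell(v / np.linalg.norm(v), n1, n2, N)
    return v, p

def skew_ray_midpoint(v1, p1, v2, p2):
    # Closest points of two skew rays: solve p1 + s*v1 + d*n = p2 + t*v2
    n = np.cross(v1, v2)
    s, t, d = np.linalg.solve(np.column_stack([v1, -v2, n]), p2 - p1)
    c1, c2 = p1 + s * v1, p2 + t * v2
    return 0.5 * (c1 + c2), np.linalg.norm(c1 - c2)

def intersect_lightsheet(v_img, p_img, subbeam, a_lo, a_hi, tol=1e-3):
    # Shrink the opening-angle interval [a_lo, a_hi] until the refracted
    # sub-beam returned by subbeam(angle) -> (v, p) passes the refracted
    # image ray within tol; the midpoint then realises Eq. 4
    while a_hi - a_lo > 1e-9:
        a1, a2 = a_lo + (a_hi - a_lo) / 3, a_hi - (a_hi - a_lo) / 3
        _, d1 = skew_ray_midpoint(*subbeam(a1), v_img, p_img)
        _, d2 = skew_ray_midpoint(*subbeam(a2), v_img, p_img)
        if d1 < d2:
            a_hi = a2
        else:
            a_lo = a1
    P, d = skew_ray_midpoint(*subbeam(0.5 * (a_lo + a_hi)), v_img, p_img)
    return P if d < tol else None           # P_object, or None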

4.2.1 Exterior orientation determination for scanning

As shown in the previous section, a profile along the curved line can be measured from a single image. To obtain a 3D point cloud, either the object or the triangulation sensor has to be moved. The presented triangulation sensor is developed with the objective of applying it on a UWV for scanning water bottom surfaces. Therefore, the sensor is designed to be moved, and multiple line scans can be merged into a point cloud based on the known or measured movement of the sensor relative to the object. This may be achieved by different approaches:

  • For outdoor applications on a UWV, an on-board GNSS and/or inertial measurement system would be a typical choice (Sardemann et al. 2018). However, with accuracies of approximately 0.1° (low-cost IMU orientation) and up to 2 cm (RTK position), this may deteriorate the achieved sub-mm accuracies of each line scan.

  • On an underwater vehicle, GNSS is not available. A common method here is to use acoustic signals for positioning. Maurelli et al. (2021) present a review of various active and passive underwater localisation techniques.

  • The images captured by the RGB camera of the triangulation sensor can be used to detect features on the water body bottom in consecutive images, which can then be used for a strip triangulation. The image data may furthermore be used to generate a colour orthophoto of the water bottom (Bodenmann et al. 2017). The laser line area in the image could either be excluded from that process, or a second image with different exposure settings and inactive laser could be captured at every position.

  • The geometry of the ground can also be utilized for a SLAM-based method. Massot-Campos et al. (2016) generate local 3D models from laser line triangulation and use those for the orientation determination of subsequent measurements in an iterative process.

  • In laboratory applications, the triangulation sensor can be operated on a fixed rail construction, with the position measured by linear encoders.

  • A more flexible, yet accurate solution for close range applications is a six-degrees-of-freedom (6-DOF) procedure as described below.

6-DOF has been chosen for the accuracy analysis experiments shown in Sect. 7. A calibrated, non-planar panel with markers is rigidly attached to the sensor system, and its object coordinates are determined relative to the triangulation sensor. An additional camera is placed statically on a tripod to record the moving panel. For each image of the static camera, a spatial resection is calculated, providing the EOR of the static camera in the coordinate system of the moving panel. Transforming the apparent camera motion parameters into panel motion yields the position and orientation of the panel for each shot. The known relative orientation between panel and triangulation sensor then gives the position of the triangulation sensor in a coordinate system centred in the static camera. A detailed description of the mathematical model and the accuracy potential of 6-DOF can be found in Luhmann (2009). Figure 8 shows the result for three example images.
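The underlying coordinate transformations can be illustrated with two small helpers (a sketch under the assumption that rotation matrices and the fixed panel-to-sensor transform are given; names are ours):

import numpy as np

def invert_pose(R, t):
    # If the resection yields the static camera pose (R, t) in the moving
    # panel system, the inverse is the panel pose in the camera system
    return R.T, -R.T @ t

def chain_pose(R_ab, t_ab, R_bc, t_bc):
    # Chain two poses: the panel pose in the camera system composed with
    # the calibrated sensor-in-panel transform gives the triangulation
    # sensor pose in the static camera system for each shot
    return R_ab @ R_bc, R_ab @ t_bc + t_ab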

Fig. 8
figure 8

6-DOF determination. The stationary camera is marked with a magenta circle. Panel (dots), triangulation camera (x) and laser (square) coordinates are given in 6-DOF camera coordinates for three images in red, green and blue (from left to right)

4.3 Calibration

For the determination of 3D coordinates using Eq. 4, it is necessary to know the IOR and EOR of the camera and the EOR of the laser relative to the interfaces. These parameters have to be determined in a suitable calibration procedure. The procedure used here follows the concept presented by Sardemann et al. (2021), which was inspired by Mulsow et al. (2006), who developed it for a laser-lightsheet-based water surface measurement technique. The basic idea is to make a number of sub-beams of the laser lightsheet distinguishable and to trace these rays. From the intersection of multiple sub-beams, the position and orientation of the laser diode can be determined. Sub-beams are distinguished using a calibration pattern that consists of two planar levels and is placed in the water underneath the triangulation sensor (Fig. 9 left). The top level is gridded, so that the laser line appears with small gaps where the light passes through to the bottom level, on which only small parts of the line appear. Consequently, two lines can be observed in the measurement image: one nearly continuous but interrupted, the other only fragmentary (Fig. 9 bottom right). The gaps in the top-level line (red) correspond to the sub-beams that produce the fragments on the bottom level (blue). The two lines can be measured with LSM (Sect. 4.1) and serve as observations for a bundle adjustment.

Fig. 9
figure 9

Calibration pattern (top), setup (bottom left) and image (bottom right) with observations on top (red) and bottom (blue) level

From the image observations, 3D coordinates can be calculated using approximate values for camera, laser and interface orientations (Sect. 4.2). The parameters are then adjusted iteratively until all sub-beams intersect at the laser diode centre with minimum distance. As a condition to the adjustment, the object coordinates of the two lines must fall on the two calibration levels. Depending on the number of observations, it is theoretically possible to calibrate camera, laser and interface EORs, as well as the refractive indices of the three media. However, due to correlations between parameters, only the laser EOR was calibrated in the adjustment, while the camera IOR and EOR parameters were determined separately before water and housing were added, using the markers on the calibration pattern and additional scale bars. The resulting calibrated setup is shown in Fig. 10.
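A strongly reduced sketch of this adjustment can be written with a standard least-squares solver (Python/SciPy; point_from_eor is an assumed wrapper around the routines of Sect. 4.2, and the full bundle adjustment additionally handles further parameters and constraints):

import numpy as np
from scipy.optimize import least_squares

def plane_residuals(eor, observations, point_from_eor):
    # For a candidate laser EOR (X, Y, Z, omega, phi, kappa), intersect
    # every observed image point with the lightsheet defined by this EOR
    # and return its off-plane distance to the calibration level it is
    # known to lie on (top or bottom)
    res = []
    for img_pt, z_level in observations:
        P = point_from_eor(eor, img_pt)     # 3D point via Sect. 4.2
        res.append(P[2] - z_level)          # condition: point on level
    return res

# eor0: approximate laser EOR from the system design; observations are
# the LSM line measurements tagged with their level height:
# sol = least_squares(plane_residuals, eor0, args=(obs, point_from_eor))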

Fig. 10
figure 10

Calibration result: Refracted image rays intersect with refracted sub-beams (dark green) on top level (red) and bottom level (blue)

5 Measurement Volume

Knowing the parameters of the calibrated system, the measurement volume can be calculated. For every pixel position of the measurement image, an unambiguous object coordinate can be calculated using the formulae of Sect. 4.2. Thus, a look-up table (LUT) can be computed, giving the X-, Y- and Z-coordinates for every pixel. Figure 11 shows the three components of that LUT. The image area on the right side of the 0 mm isoline in Z is only of theoretical interest, since it corresponds to an intersection of image ray and laser lightsheet above the glass surface. There are also image areas where the laser lightsheet is never seen (dark blue in Fig. 11); for these, no 3D coordinate can be determined. It can be observed that lines of equal depth appear as curved lines in the image (black isolines in Fig. 11 (Z) and coloured lines in Fig. 12).
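Such a LUT can be computed by brute force over all pixels (sketch; pixel_to_ray and intersect wrap the routines of Sect. 4.2, and marking invisible areas with NaN is our convention):

import numpy as np

def build_lut(width, height, pixel_to_ray, intersect):
    # Per-pixel X/Y/Z look-up table: intersect the refracted image ray of
    # every pixel with the refracted lightsheet (Sect. 4.2)
    lut = np.full((height, width, 3), np.nan)
    for row in range(height):
        for col in range(width):
            v_w, p_w = pixel_to_ray(col, row)   # refracted ray in water
            P = intersect(v_w, p_w)             # sub-beam intersection
            if P is not None:
                lut[row, col] = P               # NaN remains where the
    return lut                                  # lightsheet is not seen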

Fig. 11
figure 11

Look-up tables giving the corresponding X-, Y- and Z-coordinates for every pixel. In Z, isolines for the reference depths of 0, 50, 125 and 250 mm (from right to left) are shown

Fig. 12
figure 12

Measurement volume: The thin red line depicts the maximum measurement volume. The bold lines represent object coordinates for the depths of 50 mm (red), 125 mm (magenta) and 250 mm (blue)

By calculating the 3D object coordinates for the image borders, the maximum possible measurement volume (3D bounding box) can be determined. In the setup shown here, the maximum depth is 248 mm, and the width of the measurement volume is 227 mm in X and 81 mm in Y. Figure 12 shows the measurement volume and the position of the three reference depths, corresponding to the dashed and dotted lines in Fig. 11.

6 Statistical Accuracy Analysis

The accuracy of 3D coordinates measured with a calibrated underwater triangulation system is affected by various parameters (Sect. 4.2). These include the image measurement of the laser line, the IOR and EOR of the camera and the EOR of the laser diode. These parameters can only be determined with a certain accuracy and thus contribute to the error budget of the 3D measurement. The influence of each parameter on the X-, Y- and Z-coordinate depends on its precision, the system setup and the location of the measured laser line in the image. The overall error can be assessed by the law of error propagation. Herein, the influence of each parameter of Eq. 1 on the resulting 3D position can be estimated by calculating the derivative of the equation with respect to that parameter and scaling it with the parameter's standard deviation. The following paragraphs address the resulting standard deviation of 3D coordinates measured across the whole image using the previously calibrated triangulation system. Since the sensor will mostly be used for depth determination, the evaluations focus on the Z-coordinate. The errors are shown exemplarily for the three reference depths of 50 mm, 125 mm and 200 mm.

The following analyses only consider the accuracy in a profile scan obtained from a single image, neglecting the additional errors caused by system movement determination when merging multiple profiles.
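Numerically, this first-order propagation can be sketched as follows (Python; f is an assumed wrapper that maps the full parameter vector to a 3D point via the formulae of Sect. 4.2):

import numpy as np

def propagate_sigma(f, params, sigmas, eps=1e-6):
    # First-order variance propagation: central-difference derivative of
    # the 3D point with respect to each parameter, scaled by its standard
    # deviation and summed in quadrature (correlations neglected, as in
    # Sect. 6.6)
    p = np.asarray(params, dtype=float)
    var = np.zeros(3)
    for i, sigma in enumerate(sigmas):
        dp = np.zeros_like(p)
        dp[i] = eps
        dXdp = (np.asarray(f(p + dp)) - np.asarray(f(p - dp))) / (2 * eps)
        var += (dXdp * sigma) ** 2
    return np.sqrt(var)                     # std. deviations of X, Y, Z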

6.1 Influence of image measurement

In the presented setup, the predominant direction of the laser line in the image is vertical. The line centre is therefore measured row-wise. The LSM and Gaussian fitting methods lead to sub-pixel accuracies in x′ in the order of 0.1–0.2 pixels. Therefore, the influence of an assumed standard deviation of 0.15 pixels is tested. Figure 13a shows that the resulting error in Z increases with depth, with a maximum standard deviation of 0.027 mm in Z at 200 mm depth.

Fig. 13
figure 13

Variance propagation. Influence of various parameters on the depth standard deviation at the three reference depths of 50 mm (red), 125 mm (green) and 200 mm (blue)

6.2 Influence of Camera IOR

The camera used for triangulation was calibrated beforehand, and its IOR was considered stable for all measurements. The most prominent influence of the camera IOR on the triangulation measurement comes from the focal length, which was calibrated with a standard deviation of 0.004 mm. This results in a depth error of up to 0.06 mm over the whole measurement volume, depending on the depth (Fig. 13b).

6.3 Influence of Camera EOR

The camera position and orientation relative to the interface were determined in a bundle adjustment using a calibration pattern (Sect. 4.3). Standard deviations of about 0.05 mm in position and 0.02° in orientation were achieved herein. The glass interface is considered as the XY-plane of the coordinate system; therefore, only the Z-coordinate of the camera relative to this plane is considered, while X and Y have no influence and are set to zero. A standard deviation of 0.05 mm for the camera's Z-coordinate results in standard deviations of up to 0.08 mm for the depth determination (Fig. 13c). The orientation errors result in depth errors of up to 0.04 mm (ω and φ) and up to 0.015 mm (κ).

6.4 Influence of Laser EOR

The laser position and orientation were calibrated as shown in Sect. 4.3. The position was determined with standard deviations of 0.07 mm (X), 0.08 mm (Y) and 0.12 mm (Z). This results in depth standard deviations of up to 0.035 mm, 0.15 mm and 0.12 mm, respectively (Fig. 13d). The orientation was determined with standard deviations of 0.02° (ω), 0.06° (φ) and 0.04° (κ), resulting in depth errors of up to 0.12 mm (Fig. 13e).

6.5 Influence of Refractive Index

The refractive index of water \({n}_{\mathrm{water}}\) has been taken from the literature. Its value depends, among other factors, on the temperature of the water, which was not stable across all experiments. When a temperature variation of 10 °C is assumed, \({n}_{\mathrm{water}}\) has an uncertainty of approximately 0.001 (Schiebener et al. 1990). This results in a depth standard deviation of up to 0.17 mm (Fig. 13f).

6.6 Summarized Estimation of Accuracies

The previous paragraphs show that the measurement accuracy depends on various inputs and their standard deviations. All influences contribute to the total error budget with individual errors of up to 0.18 mm. A decrease of accuracy with increasing depth can be observed for all parameters. Unfortunately, the image measurement accuracy also decreases with depth, as the imaged line width decreases. Rotations of camera and laser around the Y- and Z-axes (φ and κ) have minimal effects close to the centre of the image, where the rotation axes are located. All error sources can be combined in quadrature into a total estimate of the error budget using the law of error propagation. Herein, correlations between the parameters are neglected, as the parameters have been determined in independent processes that do not provide the required covariance information. Assuming the previously mentioned standard deviations leads to the depth accuracies shown in Fig. 13g. Standard deviations of 0.15 to 0.35 mm can be expected over the whole measurement volume under ideal conditions; they increase with depth and decrease towards the image centre. The accuracy of the line measurement affects the depth measurement linearly and is therefore an important factor, especially for relative depth deviations. The determination of absolute 3D measurements is furthermore affected by the quality of the system calibration, so a proper balance between the two components should be achieved. The refractive index of water also has a significant influence on the measurement; in laboratory experiments, it could be determined with high precision in a multimedia bundle adjustment (Mulsow 2010). Instabilities in the mount of camera, laser and housing may lead to higher standard deviations.

7 Experiments and de Facto Achieved Accuracies

In addition to the theoretical accuracy analysis in the previous section, the previously calibrated underwater laser triangulation sensor prototype was used for practical testing in a water tank. Experiments were conducted in two setups. First, the sensor was used with a single shot, measuring along the laser line profile. This measurement can be compared to the theoretically assessed accuracies of Sect. 6. Secondly, the sensor system was moved along two test objects to investigate the accuracy in scanning mode. In those experiments, 6-DOF was used for the EOR determination (Sect. 4.2.1). This second setup is ultimately more relevant for the applicability on an unmanned water or underwater vehicle.

7.1 Reference Objects

Two objects were used for testing. The first is a very stable metal test object (Fig. 14, top) with dimensions of approximately 300 mm (length) × 65 mm (width) × 45 mm (height), containing three height levels. For the reference measurement, markers were attached to the object and measured in a close-range bundle adjustment, using calibrated scale bars, a DSLR camera and the Aicon 3D Studio software. Plane fits of the three levels give heights of 0.006 mm, 20.195 mm and 40.135 mm with RMS values of 0.070 mm, 0.028 mm and 0.004 mm (Fig. 14, bottom).

Fig. 14
figure 14

Test object. Image (top) and 3D point cloud with plane fits (bottom)

The second object is an aquarium decoration shipwreck model. It is made of plastic and has dimensions of approximately 260 mm (length) × 75 mm (width) × 80 mm (height). It was placed lying on one side (Fig. 15, left). The geometry of this object is more complex, including holes and rough surfaces. The reference measurement was conducted with a GOM ATOS Triple Scan triangulation scanner (GOM 2021). The resulting mesh has a resolution of approx. 0.1 mm and accuracies in the order of 0.02 mm (Fig. 15, right).

Fig. 15
figure 15

Wreck model. Image (left) and 3D mesh (right)

7.2 Single Profile Scan

The two test objects were placed in water underneath the sensor system and measured with a single image each, resulting in a single profile scan. Figure 17 shows the measurement image and the detected laser line. For a better comprehension of the situation, a brightened version of the measurement image is also included (Fig. 17a).

Figure 18 shows the calculated 3D coordinates and the differences to the reference heights. The test object with markers in particular produces outliers in the image measurement, which affect the object coordinates. After a simple threshold-based outlier exclusion, the majority of points (99%) are within ± 3σ (2.24 mm). A bi-modal behaviour can be observed in the histogram (Fig. 18d, left). This is mainly caused by the different surface colours of the test object: the white markers are overexposed, while the black areas appear very dark in the image. The overexposure furthermore spreads mainly in the direction of the laser, leading to an intensity-dependent image measurement. Figure 16 highlights that the measured line centre tends to the left in the white areas. This effect may be eliminated by using a more evenly coloured object (as is usually the case for water bottoms), by a correction method based on intensity or line width, or by using multiple images with different exposure settings.

Fig. 16
figure 16

Detail of image measurement of test object with white markers on black background

Fig. 17
figure 17

Measurement images of a single profile scan of test object with markers and wreck model

Fig. 18
figure 18

Results of a single profile scan of the test object with markers and the wreck model. Object points are outlier-filtered at ± 3σ

The wreck model has a surface texture with less contrast, leading to normally distributed image measurement errors and thus also normally distributed object coordinate distances to the reference mesh (Fig. 18d, right). It is also not affected by outliers and shows a standard deviation of 0.38 mm. However, the distances are strongly influenced by the registration of scanline and reference mesh, leading to a decentred histogram.

Figure 18c includes the estimated standard deviations following the calculations of Sect. 6, showing that the experimentally observed distances to the reference measurement agree well with the theoretically calculated standard deviations.

Table 2 summarises the results of the single scan mode.

Table 2 Results of single image scan

7.3 Scanning Mode

The two objects were also measured in a scanning procedure. For that purpose, the triangulation sensor was moved linearly in steps of approximately 2–5 mm. For each step, a measurement image of the laser line was recorded and the corresponding 3D scan profile was calculated. Furthermore, an orientation image was captured with an additional camera on a tripod for every step to calculate the EOR of each shot using the 6-DOF approach outlined in Sect. 4.2.1. The 6-DOF camera was placed approximately 1.5 m away from the panel and provided EOR values for the triangulation sensor with standard deviations of approximately 0.01° and 0.05 mm over all experiments. The measurement setup is depicted in Fig. 19.

Fig. 19
figure 19

Setup for scanning mode

The first test object was scanned with seven lines (Fig. 20a, left). Planes were fitted to the three height levels with an RMS of approximately 0.2 mm per plane. Figure 20c (left) and d (left) show the distances of the object points to the fitted planes. Large distances can be observed where the laser has hit a white marker. The histogram shows the same bi-modal behaviour as in Sect. 7.2, resulting from the different influences of the white and black surface colours of the test object. The standard deviation of all points is 0.3 mm. After an exclusion of outliers with distances greater than ± 3σ (0.9 mm), a standard deviation of 0.20 mm can be observed.

Fig. 20
figure 20

Scanning mode results of wreck model (70 scanlines) and test object with markers (seven scanlines). Outliers > 3σ were excluded beforehand

The wreck model was scanned from 70 scan positions. The point cloud resulting from the single scans referenced by 6-DOF is shown in Fig. 20a (right). An iterative closest point (ICP) algorithm was used to align the underwater triangulation point cloud with the reference mesh (Fig. 20b, right). The distances between triangulated point cloud and reference mesh have a standard deviation of 0.4 mm. Figure 20c, d (right) show the point-to-mesh distances after an exclusion of outliers greater than 3σ (1.2 mm). The outlier-free distances show a normal distribution with a standard deviation of 0.29 mm. This includes both the errors of the single scan triangulation and of the EOR determination of the moving sensor. Systematic errors occur at positions where the reference mesh shows holes that originate from the markers needed for the reference scan. Furthermore, it can be observed that some complete lines show higher distances, which might be caused by an inaccurate 6-DOF determination. The 6-DOF determination has a larger influence on the accuracy for the complex wreck model than for the plane-based test object, where only the measured depth is compared to the three planes, ignoring the lateral location. Table 3 summarises the results for the measurement of both test objects in scanning mode.

Table 3 Results for both test objects in scanning mode

8 Summary and Outlook

In this article, an oblique underwater laser lightsheet triangulation sensor concept was presented and evaluated. The system concept with camera and laser placed in one housing allows for a compact and flexible design, but requires the development of a dedicated geometric model for 3D coordinate determination and system calibration. The presented prototype triangulation sensor, consisting of an industrial camera and a green line laser with a 17.5 cm base, is designed to measure depths of up to 25 cm. Theoretical error analyses showed that accuracies of 0.2 to 0.4 mm can be achieved with such a sensor system, and practical experiments confirmed these theoretical estimates. When used in scanning mode, additional errors may occur from the positioning of each line scan. The achieved accuracy is suitable for various applications.

The sensor will be integrated into a multisensory UWV (unattended water vehicle, Sardemann et al. (2018)) to conduct in situ measurements of riverbeds in shallow areas. Future work will also consider an adaption of the LSM-based line measurement to varying water bottom brightness as well as an extension of the concept from a single laser line to multiple lines obtained with a diffractive element in the laser optics. Future work should also include investigations on the most suitable method for the determination of position and orientation of the sensor in outdoor experiments, since the 6DOF method that was used in this article is limited at larger distances or underwater.