Circular Laser/Camera-Based Attitude and Altitude Estimation: Minimal and Robust Solutions
Abstract
This paper proposes a basic structured light system for pose estimation. It consists of a circular laser pattern and a camera rigidly attached to the laser source. We develop a geometric model that allows the pose of the system to be estimated efficiently, at scale, relative to a reference plane onto which the pattern is projected. Three different robust estimation strategies, including two minimal solutions, are also presented within this geometric formulation. Synthetic and real experiments are performed for a complete evaluation, both quantitative and qualitative, across different scenarios and environments. We also show that the system can be embedded for UAV experiments.
Keywords
Conic · Structured light · Epipolar geometry · Robust pose estimation
1 Introduction
Pose estimation is an essential step in many applications such as 3D reconstruction [1] or motion control [2]. Many solutions based on a single image have been proposed in recent years. These systems use the image of a perceived object or surface in order to estimate the related rigid transformation [3].
When a monocular vision system and a known object are used, the problem is well known as PnP (Perspective-n-Point) [4, 5, 6, 7]. In this case, matching known 3D points with their projections in the image allows the pose to be deduced. For a calibrated stereovision sensor, the epipolar geometry and a direct triangulation between 2D matched points of the stereoscopic images make it possible both to reconstruct the scene at scale and to estimate the pose of the camera. When the stereovision system is not calibrated and no knowledge about the 3D structure of the scene is available, the epipolar geometry can still be estimated, in the form of the fundamental matrix, but the final 3D reconstruction is only projective [3]. Finally, if we consider a single calibrated camera in motion, the essential matrix between two acquired images can be estimated from matched 2D points, as can the pose, but only up to scale [8].
All the previous methods are classified as passive because they only exploit images acquired under existing lighting conditions and without controlling the camera motion. They require the scene to be textured in order to extract discriminative features that can be matched easily. If the scene is globally homogeneous, with very few remarkable features, these methods will mostly fail. In that case, the best way to handle the problem, without introducing assumptions about the material of the ground surface or about the lighting present in the scene, is to employ active sensors that use the deformation of a projected known pattern in order to estimate the pose. These methods are known as structured light [9], and one of the most popular sensors is undoubtedly the Kinect [10].
The projected pattern can be obtained from a projector or a laser, and different shapes and codings can be used [11]. Globally, patterns are based either on discernible points that have to be matched independently or on general shapes such as lines, grids and conics that have to be extracted in the acquired images. The Kinect sensor is widely used in mobile robotics but suffers from several downsides. First of all, its size and weight make it difficult to embed on a drone with a low payload. In addition, its field of view and its range of operation are limited: The field of view is around \(57^{\circ }\) and the sensor operates from 0.6 to 4 m. It consequently has a close-range blind spot that makes it unusable in a critical stage such as the landing of a drone. Moreover, since this type of sensor uses an infrared pattern, it is very sensitive to the material onto which the pattern is projected, and to the infrared light of the sun, which makes it unsuitable for outdoor applications.
In this paper, we propose a complete and simple laser–camera system for pose estimation based on a single image. The pattern consists of a simple laser circle, and no matching is required for the pose estimation. Using a circular pattern is very attractive because its projection onto a reference plane is a general conic, which has been shown to be a powerful mathematical tool for computational geometry [12]. Recently, in [13] the authors proposed a non-rigid system based on a conic laser pattern and an omnidirectional camera for an aim similar to ours. In their approach, rather than calibrating the complete laser–camera system, they propose to detect simultaneously the laser emitter and the projected pattern in the image in order to estimate the pose.
In [14], an algebraic solution for our system was developed, while a geometric approach was given in [15]. This paper is an extension of the latter, for which we propose several improvements. First, a complete dedicated calibration method is presented, giving improved results. Next, we propose a new robust algorithm that simultaneously estimates the conic and pose parameters and that is particularly efficient and accurate. Finally, we present extensive simulations and experimental results with ground truth measurements that allow comparison and quantitative evaluation of the approach in different environment settings.
The paper is organized as follows. The following section briefly describes notations and provides some basic material required in this paper. Section 3 describes our camera/laser setup and formulates the pose estimation problem. Section 4 gives a first solution to pose estimation. In Sect. 5, we then propose different robust approaches for the conic detection and the pose estimation. In Sect. 6, a new method to calibrate the system is presented. Finally, Sect. 7 presents the different simulation and experimental results, evaluations and comparisons, and Sect. 8 concludes the paper.
2 Basic Material and Notations
This section provides the mathematical material required in this paper. Concerning notation: Matrices and vectors are denoted by bold symbols, scalars by regular ones. Geometric entities (planes, points, conics, projection matrices, etc.) are by default represented by vectors/matrices of homogeneous coordinates. Equality up to a scale factor of such vectors/matrices is denoted by the symbol \(\sim \).
2.1 Representing Quadrics and Conics
2.2 Representing a Pair of Planes
2.3 Back-Projecting a Conic
3 Problem Formulation
We can immediately observe that, with this input, not all 6 degrees of freedom of the camera/laser system’s pose can be determined. As for the 3 translational degrees of freedom, a translation of the system parallel to the ground plane does not affect any of the inputs; in particular, the image conic \(\mathbf {c}\) stays fixed in this case. The same holds true for rotations about the plane’s normal. As a consequence, we may determine 3 degrees of freedom of the pose: the altitude above the plane and the attitude relative to the plane (two rotation angles, roll and pitch). Note that this is equivalent to determining the location of the ground plane relative to the camera/laser system. In the following sections, we thus describe methods to estimate the ground plane location.
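As a concrete illustration, the three recoverable degrees of freedom can be read off a ground plane expressed in the camera frame. The sketch below uses one possible angle convention (the text does not fix one, so the roll/pitch formulas here are an assumption for illustration):

```python
import numpy as np

def altitude_attitude(plane):
    """Recover altitude and attitude (roll, pitch) from a ground plane given in
    the camera frame in homogeneous coordinates (a, b, c, d), i.e.
    a*x + b*y + c*z + d = 0.  Assumed convention: the normal is normalized to
    unit length and oriented so the camera centre lies on its positive side."""
    n, d = np.asarray(plane[:3], float), float(plane[3])
    norm = np.linalg.norm(n)
    n, d = n / norm, d / norm
    if d < 0:                      # orient the normal toward the camera centre
        n, d = -n, -d
    altitude = d                   # distance from the camera centre to the plane
    roll = np.arctan2(n[1], np.hypot(n[0], n[2]))
    pitch = np.arctan2(n[0], n[2])
    return altitude, roll, pitch
```

Note that any plane parallel to this one that is shifted sideways, or rotated about its own normal, yields the same three numbers, which is exactly the unobservability discussed above.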
4 A Geometric Solution for Altitude and Attitude Estimation
4.1 Geometrical Study
In particular, we study the degenerate members of the above family of quadrics, i.e., quadrics with vanishing determinant: \(\det (\mathbf {Q})=0\). The term \(\det (\mathbf {Q})\) is in general a quartic polynomial in the parameter x. Among its up to four roots, we always have roots \(x=0\) and \(x \rightarrow \infty \), corresponding to the cones \(\mathbf {C}\) and \(\mathbf {D}\). As for the other two roots, they may be imaginary or real, depending on the cones \(\mathbf {C}\) and \(\mathbf {D}\) generating the family. In our setting, we know that these two cones intersect in at least one conic (the conic on the ground plane). In this case, it can be proved (see “Appendix A.2”) that the remaining two roots are real and identical to one another. Further, the degenerate quadric associated with that root is of rank 2 and hence represents a pair of planes. Finally (cf. “Appendix A.2”), one of the planes is nothing other than the ground plane, whereas the second plane of the pair separates the optical centers of the camera and of the laser, i.e., the two optical centers lie on opposite sides of it. This is illustrated in Fig. 2.
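This root structure can be located numerically. The sketch below is illustrative: it builds two cones through a common circle (hypothetical apex positions, with both centers on the same side of the plane, as in our setup), samples \(\det (\mathbf {C}+x\mathbf {D})\) to recover the quartic, and extracts the double root as a common root of the polynomial and its derivative:

```python
import numpy as np

def cone_through_conic(S0, pi, a):
    """Quadric cone with apex a through the conic {X : X^T S0 X = 0, pi^T X = 0}.
    All entities are homogeneous: 4-vectors and symmetric 4x4 matrices."""
    S0, pi, a = np.asarray(S0, float), np.asarray(pi, float), np.asarray(a, float)
    beta = pi @ a                          # apex must lie off the conic's plane
    gamma = -(a @ S0 @ a) / (2.0 * beta)
    m = (-(S0 @ a) - gamma * pi) / beta
    return S0 + np.outer(pi, m) + np.outer(m, pi)

# Ground circle of radius 1 centred at (0,0,2); S0 is a sphere through it.
S0 = np.array([[1., 0, 0, 0], [0, 1., 0, 0], [0, 0, 1., -2], [0, 0, -2, 3.]])
pi = np.array([0., 0, 1, -2])
C = cone_through_conic(S0, pi, [0., 0, 0, 1])        # camera cone, apex at origin
D = cone_through_conic(S0, pi, [0.3, -0.2, 0.5, 1])  # laser cone, apex nearby

# det(C + x D) has degree <= 4 in x, so 9 samples determine it exactly.
xs = np.linspace(-4.0, 4.0, 9)
p = np.polyfit(xs, [np.linalg.det(C + x * D) for x in xs], 4)

# The double root is a common root of p and p': among the real roots of p',
# keep the one at which |p| is smallest.
cand = np.roots(np.polyder(p))
cand = cand.real[np.abs(cand.imag) < 1e-6]
x_star = min(cand, key=lambda r: abs(np.polyval(p, r)))
Q = C + x_star * D        # degenerate member of the pencil: a pair of planes
```

The resulting `Q` has rank 2, and one of its two planes is the ground plane, as asserted above.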
4.2 Pose Estimation Method
The properties outlined in the previous section are used here to devise a pose estimation method for our scenario. Concretely, we wish to compute the ground plane’s location relative to the camera.
We still need to determine which one among these two planes is the ground plane. Obviously, the optical centers of camera and laser lie on the same side of the ground plane. From what is shown in “Appendix A.2”, the optical centers must lie on different sides of the second plane. It thus suffices to select the one plane among \(\mathbf {U}\) and \(\mathbf {V}\) relative to which the optical centers lie on the same side; this is the ground plane.
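In practice, the rank-2 quadric can be split into its two planes with an eigen-decomposition, and the side test just described then selects the ground plane. A sketch on a hypothetical plane pair (the plane coefficients and optical centers below are illustrative):

```python
import numpy as np

def split_plane_pair(Q):
    """Split a rank-2 symmetric quadric Q ~ U V^T + V U^T into its two planes.
    Q has one positive and one negative eigenvalue; the planes are recovered
    (up to scale) from the corresponding eigenvectors."""
    w, E = np.linalg.eigh(Q)
    u = np.sqrt(w.max()) * E[:, np.argmax(w)]
    v = np.sqrt(-w.min()) * E[:, np.argmin(w)]
    return u + v, u - v

def pick_ground_plane(P1, P2, cam, laser):
    """Return the plane with respect to which both optical centres lie on the
    same side (the ground plane); the other plane separates them."""
    if (P1 @ cam) * (P1 @ laser) > 0:
        return P1
    return P2

# Hypothetical example: ground plane z = 2 and a separating plane x = 0.1.
U = np.array([0., 0., 1., -2.])
V = np.array([1., 0., 0., -0.1])
Q = np.outer(U, V) + np.outer(V, U)
cam, laser = np.array([0., 0., 0., 1.]), np.array([0.3, -0.2, 0.5, 1.])
P1, P2 = split_plane_pair(Q)
ground = pick_ground_plane(P1, P2, cam, laser)
```

The side test is invariant to the scale and sign ambiguity of the recovered planes, since only the product of the two signed distances matters.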
5 Robust Estimations
The methodology presented in Sect. 4 supposes that the cone associated with the projector (cone \(\mathbf {D}\) in Fig. 1) is known without error. Strictly speaking, calibration errors exist, but no image processing is required to compute this cone. In contrast, the cone associated with the camera (cone \(\mathbf {C}\) in Fig. 1) is computed by first extracting an ellipse \(\mathbf {c}\) in the camera image. Note that our approach is valid for the case of \(\mathbf {c}\) being a general conic; however, in our practical setting, it is always an ellipse, so we stick to this in the following. A potential problem is that outliers may affect the estimation of the ellipse. For instance, such outliers can appear when the laser projection intercepts a ground plane partially occluded by objects. To still work in this case, one can resort to a RANSAC scheme [18] to compute the ellipse \(\mathbf {c}\). In this section, we propose three robust estimation methods: one based on a 5-point RANSAC to estimate the ellipse in the image plane, one based on a 3-point RANSAC to estimate the ellipse by taking the epipolar geometry into account, and one based on a 3-point RANSAC to directly estimate the ground plane (and consequently the altitude and attitude of our system), without estimating the ellipse.
5.1 The Plane-Pair 5-Point (PP5) Algorithm
The method for estimating altitude and attitude presented in Sect. 4 requires the computation of the ellipse \(\mathbf {c}\). In this section, we explain how to estimate it with all points, and then from 5 points using a RANSAC scheme. This robust estimation is called the Plane-Pair 5-Point (PP5) algorithm.
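A minimal sketch of such a 5-point conic fit inside a RANSAC loop follows; the iteration count and the normalised algebraic-distance threshold are illustrative values, not those of our implementation:

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic through >= 5 points: the conic vector is
    the null vector of the design matrix [x^2, xy, y^2, x, y, 1]."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.svd(A)[2][-1]
    return np.array([[a, b/2, d/2], [b/2, c, e/2], [d/2, e/2, f]])

def ransac_conic(pts, n_iter=500, thresh=1e-2, seed=0):
    """PP5-style robust fit (sketch): sample 5 points, fit a conic, count
    inliers by normalised algebraic distance, refit on the best inlier set."""
    rng = np.random.default_rng(seed)
    h = np.column_stack([pts, np.ones(len(pts))])
    best_in = np.zeros(len(pts), bool)
    for _ in range(n_iter):
        Cs = fit_conic(pts[rng.choice(len(pts), 5, replace=False)])
        Cs = Cs / np.linalg.norm(Cs)
        resid = np.abs(np.einsum('ni,ij,nj->n', h, Cs, h))
        inl = resid < thresh
        if inl.sum() > best_in.sum():
            best_in = inl
    return fit_conic(pts[best_in]), best_in
```

The algebraic distance is a crude inlier test compared with the geometric distance of [21], but it keeps the sketch short.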
5.2 The Plane-Pair 3-Point (PP3) Algorithm
Three points are not enough in general to compute an ellipse, but in our case we have additional information, not used so far: We know the epipolar geometry between the camera and the projector. This epipolar geometry provides additional constraints since the two cones (\(\mathbf {C}\) and \(\mathbf {D}\)) must be tangent to the same epipolar planes. Considering Fig. 3, for instance, both cones are tangent to the plane spanned by the two optical centers and the black lines on the cones. There is also a second epipolar plane that is tangent to both cones, behind them.
The 2D analogue is as follows: Consider the circle in the projector image plane. There are two epipolar lines, i.e., lines that contain the epipole and that are tangent to that circle. The two corresponding epipolar lines in the camera image must be tangent to the ellipse we are looking for in the camera image. This is the epipolar constraint for images of conics [19].
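This tangency condition is easy to test numerically: a line \(\mathbf {l}\) is tangent to a conic \(\mathbf {c}\) exactly when \(\mathbf {l}^{\top } \mathbf {c}^{*} \mathbf {l} = 0\), where \(\mathbf {c}^{*}\) is the dual (adjoint) conic. A small self-contained check on the unit circle:

```python
import numpy as np

def is_tangent(line, conic, tol=1e-9):
    """A line l is tangent to a full-rank conic C iff l^T C* l = 0, where C*
    is the dual conic (the inverse of C, up to scale).  Homogeneous 2D coords."""
    l = np.asarray(line, float)
    dual = np.linalg.inv(conic)
    val = l @ dual @ l
    return abs(val) < tol * np.linalg.norm(dual) * (l @ l)

C = np.diag([1., 1., -1.])                  # unit circle x^2 + y^2 = 1
assert is_tangent([1., 0., -1.], C)         # the line x = 1 touches the circle
assert not is_tangent([1., 0., -2.], C)     # the line x = 2 misses it
```

In the PP3 setting, this test would be applied to the two epipolar lines transferred into the camera image.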
5.3 A Minimal Solution: The Ground-Plane 3-Point (GP3) Algorithm
Fitting the ellipse from 3 points is feasible, as shown in Sect. 5.2, but not entirely straightforward. It turns out to be simpler to directly solve the problem we are interested in: the estimation of the ground plane. The intersection of the two cones in 3D gives, as shown in Figs. 2 and 3, two conics in 3D. One of them is the trace of the projected circle on the ground plane, and the support plane of that conic is hence the ground plane, expressed in the reference system in which the cones are represented (the camera frame in our case).
Let us now consider 3 points in the camera image that are assumed to lie on the ellipse \(\mathbf {c}\). We can back-project these 3 points to 3D, i.e., compute their lines of sight. We then intersect the laser cone \(\mathbf {D}\) with each of these lines, giving in general two intersection points each. There are thus \(2^3=8\) possible combinations of 3D points associated with our 3 image points, and one of them must correspond to points lying on the ground plane. Selecting this correct solution can be done by embedding this scheme into a RANSAC, as explained below.

- if \(\Delta <0\), there is no real solution and consequently no real intersection between the cone and the ray;
- if \(\Delta = 0\), there is only one real solution (\(\lambda = \frac{c_1}{c_2}\)), corresponding to a line tangent to the cone;
- if \(\Delta >0\), there are two intersections: \(\lambda =\frac{ c_1 \pm \sqrt{\Delta }}{c_2}\).
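The three discriminant cases translate directly into code. The sketch below intersects a ray with a cone given as a homogeneous \(4\times 4\) quadric, using the same \(c_0, c_1, c_2\) convention; the example cone used in the usage comment is hypothetical:

```python
import numpy as np

def ray_cone_intersections(origin, direction, Q):
    """Intersect the ray X(l) = origin + l * direction with the quadric cone Q
    (homogeneous 4x4 symmetric matrix).  Returns the real parameters l,
    following the three discriminant cases of the text."""
    Oh = np.append(np.asarray(origin, float), 1.0)    # homogeneous ray origin
    Dh = np.append(np.asarray(direction, float), 0.0) # direction: point at infinity
    c2 = Dh @ Q @ Dh
    c1 = -(Oh @ Q @ Dh)
    c0 = Oh @ Q @ Oh
    delta = c1 * c1 - c2 * c0
    if delta < 0:                 # no real intersection
        return []
    if np.isclose(delta, 0.0):    # tangent ray: double root
        return [c1 / c2]
    s = np.sqrt(delta)
    return [(c1 - s) / c2, (c1 + s) / c2]

# Example: the cone x^2 + y^2 = (z/2)^2 with apex at the origin.
cone = np.diag([1., 1., -0.25, 0.])
```

For GP3, this routine would be called once per back-projected image point against the laser cone \(\mathbf {D}\), yielding the up-to-two candidate 3D points per ray.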

- Lower computational cost than the general 5-point fitting method (many fewer RANSAC samples need to be considered, as shown in Sect. 5).
- Higher robustness, as shown in Sect. 7.
- The solution computed from 3 points satisfies all geometric constraints (in fact, the epipolar constraints); the intersection of the cones is therefore exact. In contrast, if one first estimates a general ellipse in the camera image and then intersects its back-projection cone with the projector cone, the problem is overconstrained and the cones do not intersect exactly. The numerical solution obtained with such a 5-point method may thus be worse than that of the 3-point method.
6 Calibration
Calibration is a necessary step to run our algorithms on real data. In our system, we have three elements to calibrate: the projector, the camera and the relative pose between the camera and the laser.
Regarding the projector, we suppose that the opening angle of the laser cone is known, since it is either given by the manufacturer or can easily be measured.
The camera is calibrated by a conventional method, using a checkerboard pattern [20].
The main problem thus lies in the estimation of the relative pose between the laser and the camera. A pose normally consists of three translation parameters and three rotation parameters. Since the laser cone is circular, a rotation about its axis is irrelevant in our application; hence, only two rotation parameters need to, and can, be determined.
Our method uses a planar surface with a known texture, e.g., a calibration pattern. In that case, the pose of the planar surface relative to the camera can be computed [7].
It is theoretically possible to perform the calibration from one image. Nevertheless, for best results, one should combine all available images, in a bundle adjustment fashion.
One way of doing this is as follows. We have to optimize the pose of the laser cone relative to the camera, and for this we need to define a cost function. One possibility is to sample points of the ground plane ellipses and to minimize the sum of squared distances between the sampled points and the ellipses generated by cutting the cone with the ground plane, where the cone is a function of the pose parameters to be optimized. Minimizing this sum of squared distances optimizes the cone parameters. Such a point-based cost function is more costly to optimize than, for instance, a cost function that compares ellipses as such (e.g., by comparing the symmetric \(3\times 3\) matrices representing them), but it is better suited to our goal.
The optimization of the proposed cost function can be done in several different ways; here we describe a solution analogous to one proposed for fitting conics to points in [21]. It requires optimizing, besides the cone parameters, one parameter per point that expresses the position of each point on the cone.
To ensure the convergence of the algorithm, the optimization is carried out in two steps: We first optimize only the \(\gamma _i\) and then re-estimate all the parameters (\(\alpha , \beta , \mathbf {v}, \gamma _i\)).
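A sketch of the point-on-cone residual with one parameter \(\gamma _i\) per point follows. The two-angle axis parametrisation, the plane representation and all names below are assumptions for illustration; the resulting residual vector would be fed to a Levenberg-Marquardt solver [22]:

```python
import numpy as np

def rot_xy(alpha, beta):
    """Two-angle rotation (about x, then y) orienting the cone axis; the roll
    about the axis itself is unobservable for a circular cone."""
    ca, sa, cb, sb = np.cos(alpha), np.sin(alpha), np.cos(beta), np.sin(beta)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    return Ry @ Rx

def residuals(params, pts, planes, theta):
    """Bundle-adjustment style cost (sketch): params = [alpha, beta, vx, vy,
    vz, gamma_1..gamma_n].  Each observed 3D point p_i has a parameter
    gamma_i selecting a generator of the cone (apex v, half-angle theta); the
    generator is cut with that image's known ground plane (n, d0), and the
    residual is the 3D offset between this model point and p_i."""
    alpha, beta, v = params[0], params[1], params[2:5]
    gammas = params[5:]
    R = rot_xy(alpha, beta)
    res = []
    for p, (n, d0), g in zip(pts, planes, gammas):
        dirn = R @ np.array([np.sin(theta) * np.cos(g),
                             np.sin(theta) * np.sin(g),
                             np.cos(theta)])
        t = -(n @ v + d0) / (n @ dirn)     # generator / ground-plane intersection
        res.extend(v + t * dirn - p)
    return np.array(res)
```

At the true parameters the residual vector is zero; perturbing a single \(\gamma _i\) or the apex makes it non-zero, which is what the two-step schedule above exploits.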
7 Experiments
To verify the validity of the proposed methods, we perform experiments using both simulated data and real images. The latter have been acquired with a camera–laser system and a motion capture system as ground truth for quantitative comparisons.
7.1 Synthetic Evaluation
In these first experiments, we generate a set of laser points on the ground floor, given the intrinsic parameters of the camera and of the laser as well as their relative pose. We then introduce different perturbations into the simulated data: image noise, outliers, and noise on the intrinsic and extrinsic parameters. The performance of the three proposed algorithms is evaluated by comparing the mean errors of the estimated altitude, roll and pitch angles over a thousand trials.
7.1.1 Evaluation Under Image Noise
In order to evaluate the robustness of the three algorithms in the presence of image noise, we added different levels of noise to the pixel coordinates of the image points lying on the image of the laser beam’s intersection with the ground plane. We then compare the mean errors of the altitude, roll and pitch estimates obtained from the three methods over a thousand trials. Results are shown in Fig. 7.
The GP3 algorithm gives the best results for the altitude estimation, while for the attitude estimation (roll and pitch) PP3 and GP3 have similar performances. We believe that the 5-point method is the most sensitive since it uses fewer constraints than the other two approaches.
7.1.2 Evaluation Under Varying Outlier Ratios
Table 1: Proportion of outliers from which the algorithms fail

                          PP5    PP3    GP3
Proportion of outliers    75%    86%    85%
7.1.3 Evaluation Under Varying Calibration Noise (Intrinsic Parameters)
7.1.4 Evaluation Under Varying Calibration Noise (Extrinsic Parameters)
In this case, we introduced noise on the length of the baseline between the camera and the laser. Results are given in Fig. 10. As illustrated in this figure, the baseline has a stronger influence on the altitude estimation than on the attitude. All the proposed algorithms seem to react in the same way for the altitude estimation. The PP3 and GP3 algorithms give better results for the attitude estimation than the PP5.
7.1.5 Evaluation Under Varying Ground Plane Noise
Complementary to the outliers treated previously, we also introduced noise into the coordinates of the ground plane points. The aim is to simulate what would happen with a non-uniform ground (presence of gravel or grass). Results are given in Fig. 11. As illustrated in this figure, a non-uniform plane has a strong influence on the altitude and attitude estimates. The PP3 and GP3 algorithms give the best results, in particular for the altitude estimation.
7.2 Experiments on Real Data with Vicon-Based Ground Truth
Table 2: Altitude, pitch and roll errors of the real experiment

                              PP5             PP3            GP3
Altitude error (mm)           11.28 ± 5.91    7.90 ± 4.51    7.52 ± 4.12
Pitch error (\(^{\circ }\))   1.19 ± 0.86     0.67 ± 0.39    0.66 ± 0.37
Roll error (\(^{\circ }\))    1.25 ± 0.89     0.78 ± 0.41    0.76 ± 0.36
The camera used in the experiments is a uEye color camera from IDS with an image resolution of \(1600\times 1200\) pixels and a frame rate of 60 fps. The color helps the laser segmentation in the image since the laser produces red light. The laser is a Z5M18BF635c34 from Z-Laser, which emits red light (635 nm) with a power of 5 mW. It is equipped with a circle optic with an opening angle of 34\(^{\circ }\).
For the evaluation of the accuracy of our algorithms, we used a handheld system as shown in Fig. 12. The camera and the laser are mounted on a trihedron to facilitate the positioning of the markers of the motion capture system.
Due to the low power of the laser and the dark color of the floor, the experiments are conducted in a dark environment, as in our previous works [14, 15]. The lights are nevertheless not totally turned off, since the camera has to observe a calibration pattern. The processing pipeline to detect the conic points in the image is simple. The color image is first converted from RGB space into HSV space. Then, a fixed threshold is applied only on the H-channel, since it carries the colorimetric information and we are looking for the red light of the laser. There is no additional processing; the outliers are removed directly by the three proposed algorithms.
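The thresholding step can be sketched as follows. This pure-NumPy version of the RGB-to-HSV conversion and red-hue test is illustrative: the threshold values (and the addition of saturation/value tests, needed to reject grey pixels whose hue is undefined) are assumptions, not the values used in the experiments:

```python
import numpy as np

def red_laser_mask(rgb, hue_tol=15.0, min_sat=0.4, min_val=0.3):
    """Keep pixels whose hue lies within hue_tol degrees of red (0 deg) and
    whose saturation/value are high enough.  rgb: HxWx3 uint8 image."""
    img = np.asarray(rgb, float) / 255.0
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(-1), img.min(-1)
    delta = mx - mn
    safe = np.where(delta == 0, 1.0, delta)        # avoid division by zero
    hue = np.zeros_like(mx)                        # hue undefined when delta == 0
    m_r = (delta > 0) & (mx == r)
    m_g = (delta > 0) & (mx == g) & ~m_r
    m_b = (delta > 0) & (mx == b) & ~m_r & ~m_g
    hue[m_r] = (60.0 * (g - b) / safe)[m_r] % 360.0
    hue[m_g] = (60.0 * (2.0 + (b - r) / safe))[m_g]
    hue[m_b] = (60.0 * (4.0 + (r - g) / safe))[m_b]
    sat = delta / np.where(mx == 0, 1.0, mx)
    dist_to_red = np.minimum(hue, 360.0 - hue)     # hue wraps around 0/360
    return (dist_to_red < hue_tol) & (sat > min_sat) & (mx > min_val)
```

The mask pixels would then be passed as candidate conic points to the three robust algorithms.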
A first dataset was acquired for the calibration of the system, as explained in Sect. 6. This dataset is composed of 16 images in which the laser projection and a calibration pattern are visible, as shown in Fig. 5. The relative pose of the laser with respect to the camera is initialized with a rough manual measurement. This first estimate is represented in Fig. 13a. An intermediate and the final estimate after convergence of the algorithm are shown in Fig. 13b, c, respectively. The average error after calibration is less than 1.6 mm per point.
A second dataset, composed of 106 images, was then acquired without the calibration pattern. The trajectory of this second dataset is represented in Fig. 14. The ground truth is given by the Vicon system. The results of our algorithms are given in Fig. 15 and in Table 2.
As we can see, the three algorithms provide a reliable estimate of the altitude and attitude of our system. The PP3 and GP3 algorithms perform similarly and provide better accuracy than the PP5 algorithm.
As previously shown in [15], our system can also be mounted on a UAV with a baseline similar to that of the handheld experiment. This experiment aimed to demonstrate the feasibility of a UAV positioning application, as shown in Fig. 16.
8 Conclusion
This paper proposes different approaches to estimate the altitude and attitude of a mobile system equipped with a circular laser and a camera. We propose a geometric formulation and three robust methods for estimating the pose from 5 or 3 points. The results of the synthetic and real experiments show that the two 3-point approaches are the most robust because they use additional constraints for solving the problem. A new calibration approach, based on a bundle adjustment with one parameter per point, is also proposed to estimate the relative pose between the camera and the laser. As future work, we could study whether the projection of the cone axis on the ground plane brings additional constraints, since this point is visible in the images, or what the advantage would be of using several concentric circles instead of a single one. Adding such geometric constraints could provide better accuracy, as demonstrated in [24].
References
1. Park, S., Subbarao, M.: Automatic 3D model reconstruction based on novel pose estimation and integration techniques. Image Vis. Comput. 22(8), 623–635 (2004)
2. Hong, Y., Lin, X., Zhuang, Y., Zhao, Y.: Real-time pose estimation and motion control for a quadrotor UAV. In: World Congress on Intelligent Control and Automation (WCICA), Shenyang, China, pp. 2370–2375 (2014)
3. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, Cambridge (2004)
4. Hesch, J., Roumeliotis, S.: A direct least-squares (DLS) method for PnP. In: International Conference on Computer Vision (ICCV), Barcelona, Spain, pp. 383–390 (2011)
5. Nister, D., Stewenius, H.: A minimal solution to the generalised 3-point pose problem. J. Math. Imaging Vis. 27(1), 67–79 (2007)
6. Bujnak, M., Kukelova, Z., Pajdla, T.: A general solution to the P4P problem for camera with unknown focal length. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2008)
7. Lepetit, V., Moreno-Noguer, F., Fua, P.: EPnP: an accurate O(n) solution to the PnP problem. Int. J. Comput. Vis. 81(2), 155–166 (2009)
8. Nister, D.: An efficient solution to the five-point relative pose problem. IEEE Trans. Pattern Anal. Mach. Intell. 26, 756–770 (2004)
9. Batlle, J., Mouaddib, E., Salvi, J.: Recent progress in coded structured light as a technique to solve the correspondence problem: a survey. Pattern Recogn. 31(7), 963–982 (1998)
10. McIlroy, P., Izadi, S., Fitzgibbon, A.: Kinectrack: 3D pose estimation using a projected dense dot pattern. IEEE Trans. Visual Comput. Graphics 20(6), 839–851 (2014)
11. Salvi, J., Fernandez, S., Pribanic, T., Llado, X.: A state of the art in structured light patterns for surface profilometry. Pattern Recogn. 43(8), 2666–2680 (2010)
12. Kim, J., Gurdjos, P., Kweon, I.: Euclidean structure from confocal conics: theory and application to camera calibration. Comput. Vis. Image Underst. 114(7), 803–812 (2010)
13. Paniagua, C., Puig, L., Guerrero, J.: Omnidirectional structured light in a flexible configuration. Sensors 13(10), 13903–13916 (2013)
14. Natraj, A., Demonceaux, C., Vasseur, P., Sturm, P.: Vision based attitude and altitude estimation for UAVs in dark environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), San Francisco, USA, pp. 4006–4011 (2011)
15. Natraj, A., Sturm, P., Demonceaux, C., Vasseur, P.: A geometrical approach for vision based attitude and altitude estimation for UAVs in dark environments. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Portugal, pp. 4565–4570 (2012)
16. Semple, J., Kneebone, G.: Algebraic Projective Geometry. Oxford University Press, Oxford (1952)
17. Hartenberg, R., Denavit, J.: A kinematic notation for lower pair mechanisms based on matrices. J. Appl. Mech. 77(2), 215–221 (1955)
18. Fischler, M., Bolles, R.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24, 381–395 (1981)
19. Kahl, F., Heyden, A.: Using conic correspondences in two images to estimate the epipolar geometry. In: Proceedings of the International Conference on Computer Vision, pp. 761–766 (1998)
20. Bouguet, J.: Visual methods for three-dimensional modeling. Ph.D. thesis, California Institute of Technology. http://www.vision.caltech.edu/bouguetj/ (May 1999)
21. Sturm, P., Gargallo, P.: Conic fitting using the geometric distance. In: Asian Conference on Computer Vision (ACCV), Tokyo, Japan, pp. 784–795 (2007)
22. Levenberg, K.: A method for the solution of certain problems in least squares. Q. Appl. Math. 2, 164–168 (1944)
23. Manecy, A., Marchand, N., Ruffier, F., Viollet, S.: X4-mag: a low-cost open-source micro-quadrotor and its Linux-based controller. Int. J. Micro Air Veh. 7(2), 89–110 (2015)
24. Kim, J., Gurdjos, P., Kweon, I.: Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 27(4), 637–642 (2005)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.