Abstract
Point cloud registration is a central problem in computer vision. However, ensuring global consistency of the results of pairwise registration remains a challenge when multiple point clouds must be transformed into a common coordinate system. This paper describes a global refinement algorithm that first estimates rotations and then estimates parallel translations. For global refinement of rotations, a closed-form matrix-based algorithm is used; for global refinement of parallel translations, a closed-form algorithm is used as well. The proposed algorithm is compared with other global refinement algorithms.
INTRODUCTION
To effectively solve the problems performed by mobile robots, it is necessary to construct a 3D model (map) of the space surrounding the robot. An accurate map allows a mobile robot to operate under complex conditions using only onboard sensors. Construction of maps of the surrounding environment is known as the problem of simultaneous localization and mapping (SLAM). The graph-based formulation of the SLAM problem was proposed by Lu and Milios in 1997 [1]. Known approaches to registration of several point clouds consist of a pairwise registration stage and a global refinement stage. Pairwise registration includes feature matching between pairs of point clouds and minimization of the sum of residuals over all such correspondences to estimate the transformation parameters that establish the relative arrangement of each pair of point clouds in the common coordinate system. Pairwise registration relies on standard methods of point cloud alignment; the problem of point cloud registration in three-dimensional space is a fundamental problem of computational geometry and computer vision.
In most cases, global refinement algorithms first find the parameters of pairwise transformations using methods such as [2–14] and then uniformly redistribute the errors using graph-based optimization [1, 15]. The graph-based SLAM problem involves a scan graph in which each scan corresponds to a vertex and each edge corresponds to a spatial connection between a pair of nodes. Globally consistent registration of several point clouds by graph optimization was described in [16]; for the uniform distribution of the error, least-squares optimization is used [1]. In [17], a branch-and-bound strategy is used to globally minimize the objective function. The approach proposed in [18] uses surfaces and Bayesian filters for point cloud alignment; its main disadvantage is high computational cost. Other approaches to global refinement are based on general graph optimization [19], bundle adjustment [20], low-rank sparse decomposition [21], and a kernel-based energy function [22].
In [23], an algorithm for global refinement of transformations for point clouds obtained by scanning an urban environment was described. That algorithm first performs global refinement of rotations using quaternions and then implements global refinement of parallel translations by exploiting the specifics of the urban environment. In the present work, for comparison in the computer simulations we use the quaternion-based global rotation refinement algorithm as it was presented in [23].
The global refinement algorithm described in this paper first estimates rotations and then estimates parallel translations. For global refinement of rotations, a closed-form matrix-based algorithm is used; for global refinement of parallel translations, a closed-form algorithm is used as well.
This paper is organized as follows. Section 1 presents the statement of the problem and describes algorithms for its solution. Section 2 presents the results of computer simulation. Section 3 contains the conclusions.
1 GLOBAL REFINEMENT OF RESULTS OF PAIRWISE POINT CLOUD REGISTRATION
Let C0, C1, ..., Cs be the initial set of point clouds and (Rij, Tij), i, j = 0, 1, ..., s, be the results of pairwise cloud registration, where Cj is the reference cloud, Ci is the objective cloud, Rij ∈ SO(3) is the rotation matrix, and Tij ∈ R3 is the parallel translation vector; i.e., (Rij, Tij) maps cloud Ci into the coordinate system of cloud Cj.
Let (Ri, Ti), i = 0, 1, ..., s, denote the transformation mapping cloud Ci to the coordinate system of cloud C0. Interpreting point clouds and transformations as vertices and edges, respectively, we obtain a graph, an example of which is shown in Fig. 1. Global refinement of pairwise transformations is based on the commutativity of cycles contained in the graph.
1.1. Global Refinement of Rotations
The condition of commutativity of cycles with respect to rotations means that the following conditions are satisfied:
\[ R_i = R_j R_{ij}, \quad (1) \]
where \(i, j = 0, 1, \ldots, s\). Let us associate with system of equations (1) the functional
\[ J{\kern 1pt} '(R) = \sum\limits_{i,j = 0}^s {{{\left\| {{{R}_{j}}{{R}_{{ij}}} - {{R}_{i}}} \right\|}}^{2}}, \]
where \(R = ({{R}_{0}},{{R}_{1}}, \ldots ,{{R}_{s}})\) and \(\left\| \cdot \right\|\) denotes the Frobenius norm.
Since the graph does not necessarily contain all possible edges, we replace functional \(J{\kern 1pt} '(R)\) by functional \(J(R)\):
\[ J(R) = \sum\limits_{i,j = 0}^s {{w}_{{ij}}}{{\left\| {{{R}_{j}}{{R}_{{ij}}} - {{R}_{i}}} \right\|}}^{2}, \]
where \({{w}_{{ij}}} = 1\) if the graph contains the edge \((i,j)\) and \({{w}_{{ij}}} = 0\) otherwise.
By the solution of system (1) we mean the solution of the following variational problem:
\[ {{R}_{*}} = \mathop {\arg \min }\limits_R J(R) \quad (4) \]
under the condition \({{R}_{i}} \in SO(3)\), \(i = 0, \ldots, s\). Let \(J({{R}_{k}})\), \(k = 0, 1, \ldots, s\), denote the functional containing all summands with occurrences of variable \({{R}_{k}}\) in \(J(R)\). We represent \(J({{R}_{k}})\), \(k = 0, 1, \ldots, s\), as a sum of functionals \({{J}_{1}}({{R}_{k}})\) and \({{J}_{2}}({{R}_{k}})\):
\[ J({{R}_{k}}) = {{J}_{1}}({{R}_{k}}) + {{J}_{2}}({{R}_{k}}), \qquad {{J}_{1}}({{R}_{k}}) = \sum\limits_{j = 0}^s {{w}_{{kj}}}{{\left\| {{{R}_{j}}{{R}_{{kj}}} - {{R}_{k}}} \right\|}}^{2}, \qquad {{J}_{2}}({{R}_{k}}) = \sum\limits_{i = 0}^s {{w}_{{ik}}}{{\left\| {{{R}_{k}}{{R}_{{ik}}} - {{R}_{i}}} \right\|}}^{2}. \]
Gradients \(\nabla {{J}_{1}}({{R}_{k}})\) and \(\nabla {{J}_{2}}({{R}_{k}})\) are specified by the formulas
\[ \nabla {{J}_{1}}({{R}_{k}}) = 2\sum\limits_{j = 0}^s {{w}_{{kj}}}\left( {{{R}_{k}} - {{R}_{j}}{{R}_{{kj}}}} \right), \qquad \nabla {{J}_{2}}({{R}_{k}}) = 2\sum\limits_{i = 0}^s {{w}_{{ik}}}\left( {{{R}_{k}}{{R}_{{ik}}} - {{R}_{i}}} \right)R_{{ik}}^{T} = 2\sum\limits_{i = 0}^s {{w}_{{ik}}}\left( {{{R}_{k}} - {{R}_{i}}R_{{ik}}^{T}} \right), \]
where we used \({{R}_{{ik}}}R_{{ik}}^{T} = I\).
Gradient \(\nabla J({{R}_{k}})\) takes the form
\[ \nabla J({{R}_{k}}) = 2\sum\limits_{j = 0}^s {{w}_{{kj}}}\left( {{{R}_{k}} - {{R}_{j}}{{R}_{{kj}}}} \right) + 2\sum\limits_{i = 0}^s {{w}_{{ik}}}\left( {{{R}_{k}} - {{R}_{i}}R_{{ik}}^{T}} \right). \]
Taking into account that R0 = I, we introduce the following notation:
\[ {{N}_{k}} = \sum\limits_{j = 0}^s {{w}_{{kj}}} + \sum\limits_{i = 0}^s {{w}_{{ik}}}, \qquad {{S}_{k}} = \sum\limits_{j = 0}^s {{w}_{{kj}}}{{R}_{j}}{{R}_{{kj}}} + \sum\limits_{i = 0}^s {{w}_{{ik}}}{{R}_{i}}R_{{ik}}^{T}. \]
Equality \(\nabla J({{R}_{k}})\) = 0 takes the form
\[ {{N}_{k}}{{R}_{k}} = {{S}_{k}}, \]
where k = 1, ..., s.
Vanishing of the gradient yields the following linear system of matrix equations:
\[ {{N}_{k}}{{R}_{k}} - \sum\limits_{j = 1}^s {{w}_{{kj}}}{{R}_{j}}{{R}_{{kj}}} - \sum\limits_{i = 1}^s {{w}_{{ik}}}{{R}_{i}}R_{{ik}}^{T} = {{w}_{{k0}}}{{R}_{{k0}}} + {{w}_{{0k}}}R_{{0k}}^{T}, \quad k = 1, \ldots, s. \quad (12) \]
Rewriting system of equations (12) in numerical form, we calculate the unconstrained (affine) solution of variational problem (4). We then find projections \({{R}_{{1*}}}\), ..., \({{R}_{{s*}}}\) of the obtained matrices R1, ..., Rs onto SO(3):
\[ {{R}_{{k*}}} = {{U}_{k}}V_{k}^{T}, \]
where \({{U}_{k}}\) and \({{V}_{k}}\) are elements of the SVD representation \({{R}_{k}} = {{U}_{k}}{{\Sigma }_{k}}V_{k}^{T}\) of matrix \({{R}_{k}}\).
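To make the rotation stage concrete, here is a small Python/NumPy sketch. It is not the paper's exact closed-form solver: instead of assembling and solving the linear matrix system (12) in one step, it performs coordinate-descent sweeps over the same stationarity conditions and applies the SVD-based projection onto SO(3) after each update; the function names and the edge-set representation are our own.

```python
import numpy as np

def project_to_so3(A):
    """Project a 3x3 matrix onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(A)
    R = U @ Vt
    if np.linalg.det(R) < 0:  # guard against reflections
        R = U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return R

def refine_rotations(s, edges, R_pair, n_sweeps=50):
    """Globally refine rotations over the scan graph.

    Each R_k is repeatedly set to the projected average of its neighbor
    terms R_j R_kj and R_i R_ik^T (the stationarity conditions of the
    rotation functional), with R_0 fixed to the identity.

    edges  : set of ordered pairs (i, j) present in the graph
    R_pair : dict mapping (i, j) -> pairwise rotation R_ij (cloud C_i
             into the frame of cloud C_j)
    """
    R = [np.eye(3) for _ in range(s + 1)]
    for _ in range(n_sweeps):
        for k in range(1, s + 1):
            acc, deg = np.zeros((3, 3)), 0
            for (i, j) in edges:
                if i == k:      # summand of J1: ||R_j R_kj - R_k||^2
                    acc += R[j] @ R_pair[(k, j)]
                    deg += 1
                elif j == k:    # summand of J2: ||R_k R_ik - R_i||^2
                    acc += R[i] @ R_pair[(i, k)].T
                    deg += 1
            if deg:
                R[k] = project_to_so3(acc / deg)
    return R
```

On a noise-free graph with consistent pairwise rotations, the sweeps recover the ground-truth global rotations up to the fixed anchor R0 = I.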
1.2. Global Refinement of Parallel Translations
The condition of commutativity for cycles of the graph shown in Fig. 1 defines the following system of equations:
\[ {{T}_{i}} = {{R}_{j}}{{T}_{{ij}}} + {{T}_{j}}, \quad i, j = 0, 1, \ldots, s, \quad (14) \]
where \({{R}_{j}}\) are the rotations obtained at the rotation refinement stage.
Let us associate with system of equations (14) the functional J(T):
\[ J(T) = \sum\limits_{i,j = 0}^s {{w}_{{ij}}}{{\left\| {{{T}_{j}} + {{R}_{j}}{{T}_{{ij}}} - {{T}_{i}}} \right\|}}^{2}, \]
where the weights \({{w}_{{ij}}}\) are the same edge indicators as in the rotation case.
By the solution of system (14) we mean the solution of the following variational problem:
\[ {{T}_{*}} = \mathop {\arg \min }\limits_T J(T), \quad (16) \]
where T = (T0, T1, ..., Ts). The gradient \(\nabla J(T)\) with respect to Tk, k = 0, 1, ..., s, is calculated as follows:
\[ {{\nabla }_{{{{T}_{k}}}}}J(T) = 2\sum\limits_{j = 0}^s {{w}_{{kj}}}\left( {{{T}_{k}} - {{T}_{j}} - {{R}_{j}}{{T}_{{kj}}}} \right) + 2\sum\limits_{i = 0}^s {{w}_{{ik}}}\left( {{{T}_{k}} - {{T}_{i}} + {{R}_{k}}{{T}_{{ik}}}} \right). \]
Vanishing of the gradient with respect to Tk, k = 0, 1, ..., s, yields the following equation:
\[ {{N}_{k}}{{T}_{k}} - \sum\limits_{j = 0}^s {{w}_{{kj}}}{{T}_{j}} - \sum\limits_{i = 0}^s {{w}_{{ik}}}{{T}_{i}} = \sum\limits_{j = 0}^s {{w}_{{kj}}}{{R}_{j}}{{T}_{{kj}}} - \sum\limits_{i = 0}^s {{w}_{{ik}}}{{R}_{k}}{{T}_{{ik}}}. \]
Let Bk, k = 0, 1, ..., s, denote the right-hand side of this equation:
\[ {{B}_{k}} = \sum\limits_{j = 0}^s {{w}_{{kj}}}{{R}_{j}}{{T}_{{kj}}} - \sum\limits_{i = 0}^s {{w}_{{ik}}}{{R}_{k}}{{T}_{{ik}}}. \]
Variational problem (16) is reduced to solving a system of linear equations in vectors:
\[ M\mathbf{T} = \mathbf{B}, \quad (20) \]
where \(\mathbf{T}\) and \(\mathbf{B}\) stack the vectors \({{T}_{0}}, \ldots, {{T}_{s}}\) and \({{B}_{0}}, \ldots, {{B}_{s}}\), the diagonal entries of \(M\) are \({{M}_{{kk}}} = {{N}_{k}}\), and the off-diagonal entries are \({{M}_{{kj}}} = -({{w}_{{kj}}} + {{w}_{{jk}}})\), \(j \ne k\).
Let M denote the matrix in Eq. (20). Then solving the variational problem is reduced to solving three systems of numerical equations:
\[ M{{\mathbf{T}}^{{(i)}}} = {{\mathbf{B}}^{{(i)}}}, \]
where i = 1, 2, 3 is the number of the vector component, and \({{\mathbf{T}}^{{(i)}}}\) and \({{\mathbf{B}}^{{(i)}}}\) collect the ith components of \({{T}_{0}}, \ldots, {{T}_{s}}\) and \({{B}_{0}}, \ldots, {{B}_{s}}\), respectively.
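The translation stage admits a compact numerical sketch under one simplification of ours: T0 is anchored at the origin (the paper keeps T0 as a variable and solves system (20) directly). The three component-wise systems share one coefficient matrix, so they are solved together by least squares; all function names are illustrative.

```python
import numpy as np

def refine_translations(s, edges, R_glob, T_pair):
    """Globally refine parallel translations given refined rotations.

    Each edge (i, j) contributes the residual T_i - T_j - R_j T_ij of the
    translation functional J(T). The three component-wise numerical
    systems share one coefficient matrix and are solved jointly.
    T_0 is anchored at the origin (a simplification of this sketch).
    """
    rows, rhs = [], []
    for (i, j) in edges:
        a = np.zeros(s)          # unknowns T_1, ..., T_s (T_0 = 0)
        if i > 0:
            a[i - 1] += 1.0
        if j > 0:
            a[j - 1] -= 1.0
        rows.append(a)
        rhs.append(R_glob[j] @ T_pair[(i, j)])
    A = np.array(rows)           # one shared coefficient matrix
    B = np.array(rhs)            # one right-hand side per vector component
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return [np.zeros(3)] + [X[k] for k in range(s)]
```

With consistent pairwise translations, the least-squares solution reproduces the ground-truth global translations exactly.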
2 COMPUTER SIMULATION
We denote the global refinement algorithm proposed in this paper as GR. Let us describe other algorithms under consideration.
For cloud Ck, k = 1, ..., s, we consider transformation Mk equal to the result of projection of matrix 1/2(R(k – 1)k + Rk(k – 1)) onto SO(3). Let Tk, k = 1, ..., s, denote the parallel translation vector equal to vector 1/2(T(k – 1)k + Tk(k – 1)). The transformation ((M1M2 ... Mk), (T1 + T2 + ... + Tk)) maps cloud Ck to the coordinate system of cloud C0. We denote this global refinement algorithm as GR_ICP.
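A literal transcription of this baseline in Python/NumPy (function names are ours; the per-step averaging of the forward and backward estimates and the plain summation of translations are exactly as stated above):

```python
import numpy as np

def project_to_so3(A):
    """Projection onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(A)
    if np.linalg.det(U @ Vt) < 0:  # guard against reflections
        return U @ np.diag([1.0, 1.0, -1.0]) @ Vt
    return U @ Vt

def gr_icp(R_pair, T_pair, s):
    """Baseline chaining: per-step rotation M_k is the SO(3) projection
    of (R_(k-1)k + R_k(k-1)) / 2, per-step translation is
    (T_(k-1)k + T_k(k-1)) / 2; cloud C_k is mapped to the frame of C_0
    by the product M_1 ... M_k and the plain sum T_1 + ... + T_k."""
    M_glob, T_glob = [np.eye(3)], [np.zeros(3)]
    for k in range(1, s + 1):
        M_k = project_to_so3(0.5 * (R_pair[(k - 1, k)] + R_pair[(k, k - 1)]))
        T_k = 0.5 * (T_pair[(k - 1, k)] + T_pair[(k, k - 1)])
        M_glob.append(M_glob[-1] @ M_k)
        T_glob.append(T_glob[-1] + T_k)
    return M_glob, T_glob
```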
In [23], global refinement algorithms were described separately for rotations and parallel translations; the rotation algorithm is based on quaternions. Let GR_Q denote the global refinement algorithm that uses the rotation refinement algorithm of [23] together with the parallel translation refinement algorithm described in this paper.
The computer experiments were carried out with point clouds from the San Francisco Apollo-SouthBay dataset [24]. Each cloud in the dataset contains approximately 100 000 points; we subsample each cloud to approximately 10 000 points. The clouds were obtained using a lidar mounted on a vehicle: the vehicle moved, and the sensor scanned the surrounding environment at a certain frequency, so the dataset consists of a sequence of point clouds. In our experiments, the point clouds are taken from the dataset with a step of 4; for example, point clouds nos. 1, 5, 9, ... are considered.
The dataset contains information about transformation Mk mapping each cloud Ck to a certain global coordinate system. Matrix Mk is a 4 × 4 matrix specifying a rigid transformation in homogeneous coordinates. The transformation mapping cloud Ci to the coordinate system of cloud Cj is specified by matrix Mji_true = (Mj)–1Mi.
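In code, extracting the ground-truth relative transform from the stored 4 × 4 poses is a one-liner (the function name is ours):

```python
import numpy as np

def relative_transform(M_j, M_i):
    """Ground-truth transform mapping cloud C_i into the frame of cloud C_j:
    M_ji_true = (M_j)^{-1} M_i, with both poses given as 4x4 homogeneous
    matrices from the dataset."""
    return np.linalg.inv(M_j) @ M_i
```

A useful sanity check is that composing the reference pose with the relative transform recovers the source pose: M_j @ M_ji_true == M_i.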
The experiments are organized as follows. Point cloud number k is fixed in the database, and the point clouds with numbers k, k + 4, k + 8, ..., k + 4 × 99 are considered. For all successive pairs of clouds (k + 4i, k + 4(i + 1)), i = 0, ..., 98, we find the transformation \(({{R}_{{k + 4i,k + 4(i + 1)}}},{{T}_{{k + 4i,k + 4(i + 1)}}})\) mapping cloud Ck + 4(i + 1) to the coordinate system of cloud Ck + 4i and the transformation \(({{R}_{{k + 4(i + 1),k + 4i}}},{{T}_{{k + 4(i + 1),k + 4i}}})\) mapping cloud Ck + 4i to the coordinate system of cloud Ck + 4(i + 1) using the point-to-point ICP algorithm. Before running ICP, we coarsely align the reference cloud by applying to it the transformation obtained for the previous pair of clouds. The transformations relating clouds whose numbers differ by more than 4 are calculated as superpositions of the intermediate transformations with respect to R and T, respectively.
In this paper, the following quality parameters of global refinement algorithms are used. Parameter last_R = \({{\left\| {{{R}_{{{\text{(first}}{\text{, last)}\_\text{true}}}}} - {{R}_{{{\text{(first}}{\text{, last)}\_\text{est}}}}}} \right\|}_{{{{L}_{2}}}}}\), where R(first, last)_true and R(first, last)_est are the true and estimated transformations mapping the last considered cloud to the coordinate system of the first cloud, shows the global error of 3D scene reconstruction. Parameter last_T is defined similarly. Parameters avg_R and avg_T show the average errors with respect to R and T, respectively, and parameters max_R and max_T show the maximum errors. Figure 2 and Tables 1 and 2 show the accuracy of the GR_ICP, GR, and GR_Q algorithms for a series of clouds with the origin in cloud no. 1.
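Assuming the L2 norm in last_R denotes the Frobenius norm of the matrix difference (our reading of the notation), the quality parameters reduce to:

```python
import numpy as np

def rotation_error(R_true, R_est):
    """last_R-style error: Frobenius norm of the difference of rotations."""
    return float(np.linalg.norm(R_true - R_est))

def translation_error(T_true, T_est):
    """last_T-style error: Euclidean norm of the difference of translations."""
    return float(np.linalg.norm(T_true - T_est))
```

Parameters avg_R/avg_T and max_R/max_T are then the mean and the maximum of these per-cloud errors over the series.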
Figure 3 and Tables 3 and 4 show the accuracy of the GR_ICP, GR, and GR_Q algorithms for a series of clouds with the origin in cloud no. 501.
Figure 4 and Tables 5 and 6 show the accuracy of the GR_ICP, GR, and GR_Q algorithms for a series of clouds with the origin in cloud no. 1001. Figure 5 and Tables 7 and 8 show the accuracy of the GR_ICP, GR, and GR_Q algorithms for a series of clouds with the origin in cloud no. 1501.
3 CONCLUSIONS
In this paper, we used the global refinement algorithm GR for constructing a 3D scene from a set of point clouds. The algorithm was compared with other possible methods of solving the problem of global refinement of transformations. The accuracy of the algorithms was estimated using quality criteria that correspond to visual perception of the reconstruction accuracy. Computer simulation demonstrates the efficiency of the proposed algorithm.
REFERENCES
1. F. Lu and E. Milios, “Globally consistent range scan alignment for environment mapping,” Autonomous Robots 4, 333–349 (1997).
2. P. Besl and N. McKay, “A method for registration of 3‑D shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14 (2), 239–256 (1992).
3. Y. Chen and G. Medioni, “Object modeling by registration of multiple range images,” Image Vision Comput. 10, 145–155 (1992).
4. A. Segal, D. Haehnel, and S. Thrun, “Generalized-ICP,” in Robotics: Science and Systems 5, 161–168 (2009).
5. J. Serafin and G. Grisetti, “Using extended measurements and scene merging for efficient and robust point cloud registration,” Robot. Auton. Syst. 92, 91–106 (2017).
6. A. Makovetskii, S. Voronin, V. Kober, and A. Voronin, “A regularized point cloud registration approach for orthogonal transformations,” J. Global Optim. (2020).
7. A. Makovetskii, S. Voronin, V. Kober, and D. Tihonkih, “Affine registration of point clouds based on point-to-plane approach,” Procedia Eng. 201, 322–330 (2017).
8. B. Horn, “Closed-form solution of absolute orientation using unit quaternions,” J. Opt. Soc. Am. A 4, 629–642 (1987).
9. B. Horn, H. Hilden, and S. Negahdaripour, “Closed-form solution of absolute orientation using orthonormal matrices,” J. Opt. Soc. Am. A 5, 1127–1135 (1988).
10. S. Umeyama, “Least-squares estimation of transformation parameters between two point patterns,” IEEE Trans. Pattern Anal. Mach. Intell. 13 (4), 376–380 (1991).
11. A. Makovetskii, S. Voronin, V. Kober, and A. Voronin, “A point-to-plane registration algorithm for orthogonal transformations,” Proc. SPIE 10752R (2018).
12. A. Makovetskii, S. Voronin, V. Kober, and A. Voronin, “A non-iterative method for approximation of the exact solution to the point-to-plane variational problem for orthogonal transformations,” Math. Methods Appl. Sci. 41 (18), 9218–9230 (2018).
13. A. Makovetskii, S. Voronin, V. Kober, and A. Voronin, “Point cloud registration based on multiparameter functional,” Mathematics 9, 2589 (2021).
14. A. Makovetskii, S. Voronin, V. Kober, and A. Voronin, “Coarse point cloud registration based on variational functional,” Mathematics 11, 35 (2023).
15. D. Borrmann, J. Elseberg, K. Lingemann, A. Nüchter, and J. Hertzberg, “Globally consistent 3D mapping with scan matching,” Robot. Auton. Syst. 56, 130–142 (2008).
16. P. W. Theiler, J. D. Wegner, and K. Schindler, “Globally consistent registration of terrestrial laser scans via graph optimization,” ISPRS J. Photogramm. Remote Sens. 109, 126–138 (2015).
17. J. Yang, H. Li, and Y. Jia, “Go-ICP: Solving 3D registration efficiently and globally optimally,” in Proc. 2013 IEEE Int. Conf. on Computer Vision, Sydney, NSW, Australia, Dec. 1–8, 2013 (IEEE, New York, 2013).
18. D. Huber and M. Hebert, “Fully automatic registration of multiple 3D data sets,” Image Vision Comput. 21, 637–650 (2003).
19. R. Kuemmerle, G. Grisetti, H. Strasdat, K. Konolige, and W. Burgard, “g2o: A general framework for graph optimization,” in Proc. 2011 IEEE Int. Conf. on Robotics and Automation, Shanghai, China, May 9–13, 2011 (IEEE, New York, 2011), pp. 3607–3613.
20. M. A. Lourakis and A. Argyros, “SBA: A software package for generic sparse bundle adjustment,” ACM Trans. Math. Softw. 36, 1–30 (2009).
21. S. Wang, H. Y. Sun, H. C. Guo, L. Du, and T. J. Liu, “Multi-view laser point cloud global registration for a single object,” Sensors 18, 3729 (2018).
22. S. McDonagh and R. Fisher, “Simultaneous registration of multi-view range images with adaptive kernel density estimation,” in Proc. IMA 14th Int. Conf. on Mathematics of Surfaces, Birmingham, UK, Sept. 11–13, 2013.
23. N. L. Pavan, D. R. dos Santos, and K. Khoshelham, “Global registration of terrestrial laser scanner point clouds using plane-to-plane correspondences,” Remote Sens. 12, 1127 (2020).
24. W. Lu, Y. Zhou, G. Wan, S. Hou, and S. Song, “L3‑Net: Towards learning based LiDAR localization for autonomous driving,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Long Beach, USA, June 15–20, 2019 (IEEE, New York, 2019), pp. 6389–6398.
Funding
This work was supported in part by the Russian Science Foundation, project no. 21-11-00095.
Ethics declarations
The authors of this work declare that they have no conflicts of interest.
Additional information
Translated by A. Nikol’skii
Makovetskii, A.Y., Kober, V.I., Voronin, S.M. et al. Global Refinement Algorithm for 3D Scene Reconstruction from a Sequence of Point Clouds. J. Commun. Technol. Electron. 68, 1499–1505 (2023). https://doi.org/10.1134/S1064226923120124