
1 Introduction

The localization of autonomous machines in a reference coordinate frame is a fundamental task and a prerequisite for downstream applications that reason about objects and tasks within this world frame, for example perception, planning, and control [1]. Accurate localization can be achieved either through state estimation or through simultaneous localization and mapping (SLAM). Typically, a Global Navigation Satellite System (GNSS) together with other sensors such as an Inertial Measurement Unit (IMU) and in-vehicle sensors (steering and velocity encoders) is employed for state estimation [2], while LiDARs and cameras are the primary sensors for SLAM [3]. Vehicle motion estimation through LiDAR odometry provides a robust solution, even though it can be prone to drift if not corrected with loop closures [6]. LiDAR odometry and mapping for sensor setups with a single LiDAR have been studied extensively in [4,5,6]. Lately, decreasing LiDAR cost has led to the emergence of multi-sensor setups, and works with multiple LiDARs are becoming available [8, 11]. The authors in [8] extend the LOAM [7] algorithm to accommodate multiple LiDARs; however, their method is designed for LiDARs with a common Field of View (FoV), required for online calibration, and for high-resolution sensors providing feature-rich point clouds. Similarly, in [11], a multi-LiDAR localization and mapping pipeline based on the KISS-ICP algorithm is proposed. The authors apply a separate scan registration for each LiDAR point cloud and create an offline map which is later used for online localization.

Moving from multiple sensors of the same type to setups with LiDARs from different manufacturers, or dedicated to different tasks such as short-range and long-range sensing, increases the difficulty of precise localization. Furthermore, the LiDARs can be asynchronous with different update rates, which poses a significant challenge for odometry estimation. In particular, when the sensors do not have an overlapping FoV, as in our experimental vehicle, the EasyMile [10], calibrating them to a common reference frame is challenging because no jointly observed information is available.

Fig. 1. Tightly coupled multi-LiDAR odometry for the odometry computation from two different LiDARs in the EasyMile setup

In this work, we propose a factor graph-based LiDAR odometry algorithm for an asynchronous multi-LiDAR setup. We employ an IMU as the base sensor to build the factor graph, similar to our previous work [2] and the architecture proposed in [3]. KISS-ICP [6] is used to compute the odometry for each LiDAR using a locally constructed LiDAR map. Instead of synchronizing the LiDAR point clouds or fusing them while neglecting their time asynchronicity, we treat each LiDAR separately and estimate the odometry at each IMU update. The odometry estimation framework can be turned into a state estimator by integrating GNSS measurements into the factor graph. The major contributions of the proposed framework are threefold:

  • We propose a novel and modular LiDAR odometry estimation framework for multiple asynchronous LiDARs with non-overlapping FoVs.

  • The architecture is devised so that a GNSS sensor can be added to perform state estimation.

  • We validate the proposed algorithm using a custom dataset collected with the EasyMile vehicle, equipped with four LiDARs, a GNSS sensor, and an IMU.

2 Problem Formulation

Given the use of multiple sensors providing diverse information, we estimate not only the vehicle position and orientation but also its velocity and the IMU biases. Let \({X}_t^G\) represent the state of the factor graph,

$$\begin{aligned} {X}_t^G = \begin{bmatrix} _IP^G, _IR^G, _IV^G, B^I\end{bmatrix}_t \end{aligned}$$
(1)

where \( _IP^G\), \( _IR^G\), and \( _IV^G\) are the IMU position, orientation, and velocity in the Global Reference Frame (G), and \(B^I\) consists of the estimated accelerometer and gyroscope biases. The subscript t denotes the time instance.
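To make the state representation concrete, the following minimal Python sketch (not from the paper's implementation; field names are illustrative assumptions) mirrors the components of \({X}_t^G\) in Eq. 1.

```python
# Minimal sketch of the factor graph state of Eq. (1); field names are assumptions.
from dataclasses import dataclass
import numpy as np

@dataclass
class FactorGraphState:
    t: float            # time instance
    p_IG: np.ndarray    # IMU position in G, shape (3,)
    R_IG: np.ndarray    # IMU orientation in G, rotation matrix (3, 3)
    v_IG: np.ndarray    # IMU velocity in G, shape (3,)
    b_acc: np.ndarray   # accelerometer bias, shape (3,)
    b_gyro: np.ndarray  # gyroscope bias, shape (3,)
```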

We differentiate two cases: without and with GNSS measurements. In the former, the algorithm acts as an odometry computer; in the latter, as a state estimator. The major focus of this manuscript is the odometry computer component, and we additionally demonstrate the state estimation capability.

3 Factor Graph Construction

The IMU sensor is used as the base sensor to build the factor graph, and multiple LiDAR odometry factors are added online as they become available. The overall architecture of the proposed algorithm is illustrated in Fig. 1. The modular nature of the algorithm enables the addition and removal of sensors based on their availability. Since this study focuses on the LiDAR odometry component, we concentrate on how multiple LiDARs can be easily integrated into the framework using the Robot Operating System (ROS). The architecture is based on IMU timestamps and therefore requires an IMU in the sensor setup; besides that, any sensor providing relative odometry or absolute positions can be added (a minimal subscription sketch follows the sensor list below). The EasyMile sensor setup consists of the following sensors:

  • a GNSS sensor, located on top of the vehicle

  • two 16-plane Velodyne LiDARs with \(180^\circ \) FoV facing the front and back of the EasyMile vehicle, providing point clouds at 10 Hz

  • two 4-plane SICK long-range LiDARs with \(180^\circ \) FoV facing the front and back of the EasyMile vehicle, providing point clouds at 10 Hz

  • an XSens IMU providing acceleration and angular velocity measurements at 100 Hz
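As referenced above, a minimal ROS 1 sketch of this modular integration is given below; the topic names and callback bodies are assumptions for illustration, not the paper's actual node.

```python
# Illustrative sketch (assumed topic names): the IMU callback creates graph values,
# while each LiDAR callback only attaches odometry factors when its scan arrives.
import rospy
from sensor_msgs.msg import Imu, PointCloud2

def imu_callback(msg: Imu):
    # Add a new value (state) to the factor graph at this IMU timestamp.
    pass

def lidar_callback(msg: PointCloud2, lidar_name: str):
    # Update the shared local map and add a LiDAR odometry factor
    # anchored to the closest IMU value.
    pass

rospy.init_node("mlio")
rospy.Subscriber("/imu/data", Imu, imu_callback)
for name in ["velodyne_front", "velodyne_rear", "sick_front", "sick_rear"]:
    rospy.Subscriber(f"/{name}/points", PointCloud2, lidar_callback, callback_args=name)
rospy.spin()
```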

3.1 Graph Initialization

Initially, while the vehicle is stationary, IMU measurements are stored in a buffer for a predefined time window. The measurements are then analyzed to compute the orientation of the IMU in the local frame, except for the yaw angle, which is initialized to zero. The IMU provides acceleration and angular velocity measurements in the IMU reference frame. Using the gravity component of the measured acceleration, the initial orientation of the IMU is computed with the following equations:

$$\begin{aligned} a_X &= -g\sin (\theta ) \\ a_Y &= g\sin (\psi )\cos (\theta ) \\ a_Z &= g\cos (\psi )\cos (\theta ) \end{aligned}$$
(2)

where \(a_X\), \(a_Y\), and \(a_Z\) are the mean values of the buffered accelerometer measurements while the vehicle is stationary, \(\theta \) and \(\psi \) are the IMU pitch and roll angles in the Local Reference Frame (L), respectively, and g is the acceleration due to gravity. We neglect the bias and noise contributions in the IMU measurements in this initialization phase. The initial roll and pitch angles are obtained by inverting Eq. 2.

Position and velocity values are initialized to zero. The roll and pitch angles are initialized using the step discussed above, while the yaw angle is set to zero. For the accelerometer biases, user input or prior knowledge is utilized, while the gyroscope bias is initialized as the mean of the angular velocities measured while the vehicle is stationary. Once the initial state is defined, the factor graph is constructed by adding values at the IMU timestamps and factors connecting the corresponding sensor timestamps.
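A minimal sketch of this initialization step, assuming a stationary vehicle and a gravity-aligned local frame, inverts Eq. 2 to recover the initial pitch and roll from the buffered accelerometer mean; the function name is illustrative.

```python
# Sketch: recover initial pitch (theta) and roll (psi) from the mean of the
# buffered accelerometer samples, per Eq. (2). Yaw is unobservable from gravity.
import numpy as np

def initial_attitude(acc_buffer: np.ndarray):
    """acc_buffer: (N, 3) stationary accelerometer samples [a_X, a_Y, a_Z]."""
    a_x, a_y, a_z = acc_buffer.mean(axis=0)
    pitch = np.arctan2(-a_x, np.hypot(a_y, a_z))   # theta
    roll = np.arctan2(a_y, a_z)                    # psi
    yaw = 0.0                                      # initialized to zero
    return roll, pitch, yaw
```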

Fig. 2. Tightly coupled multi-LiDAR odometry for the odometry computation from two different LiDARs in the EasyMile setup

3.2 IMU Factor

First, for each received IMU measurement, a value is added to the factor graph, following an approach similar to [3]. We apply the widely used pre-integrated IMU factor proposed in [9]. Adding a value to the graph with each measurement is suitable for IMUs with lower frequencies, up to about 100 Hz; for IMUs with higher frequencies, the measurements are stored in a buffer and values are added at a user-defined, lower rate. The raw IMU measurement at time instance t consists of the linear acceleration \( ^I\hat{a}_t\) and the angular velocity \( ^I\hat{\omega }_t\). The accelerometer and gyroscope models are defined as:

$$\begin{aligned} ^I\hat{\omega }_t &= {}^I\omega _t + {}^Ib_{\omega _t} + {}^I\eta _t \\ ^I\hat{a}_t &= {}_I^LR_t\left( {}^La_t - {}^Lg\right) + {}^Ib_{a_t} + {}^I\nu _t \end{aligned}$$
(3)

where \( ^I\omega _t\) and \( ^La_t\) are the true angular velocity of the IMU and its true linear acceleration in the L frame. \( ^Ib_{\omega _t}\) and \( ^Ib_{a_t}\) are the gyroscope and accelerometer biases, modeled as random walks. The noise terms, \( ^I\eta _t\) and \( ^I\nu _t\), follow zero-mean Gaussian distributions. \( _I^LR_t\) is the rotation matrix from the L frame to the IMU reference frame, and \( ^Lg\) is the gravity vector in the L frame. Readers are referred to [9] for the derivation of this factor.
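For illustration, the pre-integrated IMU factor of [9] can be assembled with the GTSAM Python API roughly as sketched below; the noise values, key indices, and measurement buffer are placeholders, not the parameters used in this work.

```python
# Hedged sketch of the pre-integrated IMU factor [9] between two graph values.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X, V, B

params = gtsam.PreintegrationParams.MakeSharedU(9.81)   # gravity along -Z
params.setAccelerometerCovariance(np.eye(3) * 1e-3)     # illustrative noise
params.setGyroscopeCovariance(np.eye(3) * 1e-4)
params.setIntegrationCovariance(np.eye(3) * 1e-7)

bias0 = gtsam.imuBias.ConstantBias()                    # from the initialization step
pim = gtsam.PreintegratedImuMeasurements(params, bias0)

# Placeholder buffer of (acc, gyro, dt) samples between consecutive values i and j.
imu_buffer = [(np.array([0.0, 0.0, 9.81]), np.zeros(3), 0.01)] * 10
for acc, gyro, dt in imu_buffer:
    pim.integrateMeasurement(acc, gyro, dt)

graph = gtsam.NonlinearFactorGraph()
i, j = 0, 1
graph.add(gtsam.ImuFactor(X(i), V(i), X(j), V(j), B(i), pim))
```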

3.3 LiDAR Odometry and Odometry Factors

We study two approaches to creating a LiDAR odometry factor for the factor graph. The first is a loosely coupled odometry computer based on the work of [6]. In this approach, the odometry computer directly subscribes to the point cloud topics published by the LiDAR sensors, and the computed odometry is used as a binary factor connecting two values in the factor graph corresponding to the point cloud timestamps. The relative transformation between the poses of two consecutive point clouds in the local map is used as the factor between the graph values closest to these timestamps. A loosely coupled and completely separate mapping approach for a multi-LiDAR setup creates unnecessary overhead if a separate map is built per LiDAR: when employing the ICP approach of [6] for multiple LiDARs, a different local map would need to be constructed, stored, and maintained for each sensor. Our previous work [2] employed this technique to compute LiDAR odometry for the two Velodyne LiDARs of the EasyMile sensor setup. In this work, the loosely coupled approach instead creates a single LiDAR map and localizes each newly obtained point cloud in the cumulative local map built from the point clouds of all LiDARs. The local map is updated using the downsampled point cloud from each LiDAR, following the techniques employed in [6].
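A hedged sketch of the resulting binary factor, assuming GTSAM Pose3 types for the scan poses, is given below; the noise values are illustrative.

```python
# Loosely coupled variant (sketch): the relative motion between two consecutive
# scan poses from the ICP-based odometry becomes a binary factor between the
# graph values closest to the two point cloud timestamps.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

def add_scan_to_scan_factor(graph, key_prev, key_curr, T_prev, T_curr):
    """T_prev, T_curr: gtsam.Pose3 scan poses in the local map frame."""
    T_rel = T_prev.between(T_curr)                       # relative transformation
    noise = gtsam.noiseModel.Diagonal.Sigmas(
        np.array([0.01, 0.01, 0.01, 0.05, 0.05, 0.05]))  # rot (rad), trans (m)
    graph.add(gtsam.BetweenFactorPose3(X(key_prev), X(key_curr), T_rel, noise))
```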

The second approach is a tightly coupled LiDAR odometry that integrates multiple LiDARs into a single local map and uses the optimized factor graph state to update this map. The complete architecture of the multi-LiDAR odometry applied in this work is illustrated in Fig. 2. Note that the point clouds from the different LiDARs are received asynchronously by the ROS network. Contrary to the common practice of fusing the point clouds from different LiDARs into a single point cloud before performing the odometry computation, our approach handles each LiDAR point cloud separately. Adding values to the factor graph at the high-frequency IMU measurements allows fusing the asynchronous LiDAR odometry with the IMU. As soon as the factor graph is initialized, the mapping module receives the point cloud from each LiDAR and adds the downsampled point cloud to the local map. The local map is stored with the IMU frame as the origin; hence, the static transformation from the IMU to each LiDAR is used to add the LiDAR point clouds to the map. Once the point cloud from each LiDAR has been received at least once, the ICP algorithm based on KISS-ICP [6] is run to compute the odometry of the LiDAR in the L frame. Contrary to [6], where a constant velocity model provides the initial guess of the ICP algorithm, we use the predicted state from the IMU pre-integration model closest to the point cloud timestamp. This is illustrated in Fig. 2.

Once the LiDAR is localized in the L frame, the computed transformation is added as a unary factor attached to the corresponding IMU value. A fixed-lag factor graph optimization is then performed to compute the estimated state at the LiDAR timestamp. This estimated transformation is used to update the local map, instead of the ICP-computed transformation as done in [6]. This procedure is repeated for the asynchronous point clouds arriving from the different LiDARs and is enabled by the design choice of using the IMU timestamps to create values in the factor graph.
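The following sketch outlines this tightly coupled step under stated assumptions: the paper uses fixed-lag smoothing, whereas iSAM2 is used here only to keep the example short, and the map interface and noise values are hypothetical.

```python
# Tightly coupled variant (sketch): the IMU-predicted pose seeds the ICP
# registration; the resulting pose in L is attached as a unary factor to the
# closest IMU value, and the optimized pose is fed back to the shared local map.
import numpy as np
import gtsam
from gtsam.symbol_shorthand import X

isam = gtsam.ISAM2()   # stand-in for the fixed-lag smoother used in the paper

def on_scan(key, scan_pose_icp: gtsam.Pose3, local_map):
    """scan_pose_icp: LiDAR pose in L from ICP; X(key) was already inserted
    by the IMU step with its predicted value."""
    graph = gtsam.NonlinearFactorGraph()
    noise = gtsam.noiseModel.Diagonal.Sigmas(
        np.array([0.02, 0.02, 0.02, 0.1, 0.1, 0.1]))      # rot (rad), trans (m)
    graph.add(gtsam.PriorFactorPose3(X(key), scan_pose_icp, noise))  # unary factor
    isam.update(graph, gtsam.Values())
    pose_opt = isam.calculateEstimate().atPose3(X(key))
    local_map.update(pose_opt)       # hypothetical map interface
    return pose_opt
```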

3.4 GNSS Unary Factor

The GNSS sensor provides the location of the vehicle in the G frame. Each measurement is transformed to the IMU reference point of the vehicle, using the known static transformation between the GNSS and the IMU, before being added to the factor graph as a unary factor:

$$\begin{aligned} ^{I}\hat{p}^{G}_t = \hat{p}^{G}_t + {}_{GN}R^{I}\, {}_{GN}p^{I} \end{aligned}$$
(4)

where \( ^{I}\hat{p}^{G}_t\) is the GNSS measurement transformed to the IMU location in the vehicle, expressed in the G frame, and \(\hat{p}^{G}_t\) is the raw GNSS measurement. \( _{GN}R^I\) and \( _{GN}p^I\) represent the rotation and translation components of the static transformation between the IMU and the GNSS sensor.
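A minimal sketch of this lever-arm correction, with assumed variable names, is given below.

```python
# Sketch of Eq. (4): translate the raw GNSS fix to the IMU reference point
# using the known static GNSS-IMU transformation (variable names are assumptions).
import numpy as np

def gnss_to_imu_point(p_gnss: np.ndarray, R_gnss_imu: np.ndarray,
                      p_lever: np.ndarray) -> np.ndarray:
    """p_gnss: GNSS position in G, shape (3,); R_gnss_imu, p_lever: static calibration."""
    return p_gnss + R_gnss_imu @ p_lever
```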

4 Results and Comparison

To validate the proposed algorithm, we use the EasyMile vehicle and its sensor setup. We compare the algorithm against the LiDAR-only odometry algorithm KISS-ICP [6] and the LiDAR-IMU fusion SLAM algorithm LIO-SAM [4]. The tests are performed on data collected around the Department of Mechanical Engineering, Politecnico di Milano, Bovisa Campus. We collect three distinct scenes to demonstrate the suitability of the proposed algorithm for the multi-LiDAR sensor setup.

Figures 3a, 3b and 4a show the qualitative results for the vehicle trajectories computed using the baseline algorithms and the proposed MLIO algorithm. For the baselines, the point clouds received from the different LiDARs are synchronized and then merged to provide a \(360^\circ \) FoV around the vehicle. The points of the estimated trajectory are first matched temporally with the GNSS measurements to find correspondences, and then the Umeyama alignment algorithm [12] is used to perform the alignment. Qualitative observation of the trajectories obtained from the proposed and baseline algorithms shows that the proposed MLIO algorithm provides the most consistent and accurate LiDAR odometry estimate at high frequency. In Fig. 4b, we also demonstrate the state estimation capability of the algorithm, which additionally takes the GNSS unary factor as input.
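For reference, a minimal rigid (scale-free) Umeyama/Kabsch alignment of matched trajectory and GNSS points can be sketched as follows; this is an illustrative helper, not the exact evaluation code used here.

```python
# Evaluation helper (sketch): rigid alignment [12] of estimated trajectory points
# to temporally matched GNSS positions before computing the errors.
import numpy as np

def umeyama_align(est: np.ndarray, ref: np.ndarray):
    """est, ref: (N, 3) matched points. Returns R, t such that ref ≈ R @ est + t."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    cov = (ref - mu_r).T @ (est - mu_e) / est.shape[0]
    U, _, Vt = np.linalg.svd(cov)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # handle reflection
    R = U @ S @ Vt
    t = mu_r - R @ mu_e
    return R, t
```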

This observation is further supported by the results shown in Table 1. The results in this table are obtained with input point clouds from the two Velodyne LiDARs only. For the baselines, the point clouds are first temporally synchronized using ROS message filters, merged, and then fed to the algorithm for LiDAR odometry estimation; for MLIO, such temporal synchronization is not necessary. The table lists results for two variants of the MLIO algorithm: \(MLIO-0\) for the loosely coupled variant and \(MLIO-1\) for the tightly coupled one. Both MLIO variants provide a similar level of accuracy on average. On average, they provide an approximately \(25\%\) improvement in positional accuracy while maintaining a similar level of yaw angle accuracy compared to LIO-SAM. They outperform KISS-ICP significantly, improving positional accuracy by \(40\%\) and yaw angle estimation by \(16\%\).

Fig. 3. Ground truth and estimated vehicle trajectories. The red and green circles indicate the start point and the GNSS failure point, while the arrow shows the vehicle motion direction. Bottom figure: ground truth and estimated side slip angle

Fig. 4. Odometry and state estimation results from the proposed and the baseline algorithms

We also conduct ablation studies on the LiDAR choice and on the possibility of combining LiDARs with different parameters and from different manufacturers in a single framework. The results of this study are reported in Table 2. \(KISS-S\) and \(MLIO-S\) refer to the frameworks taking input only from the front and rear SICK LiDARs, while \(KISS-SV\) and \(MLIO-SV\) take input from the front SICK and rear Velodyne LiDARs. We observe that MLIO outperforms KISS-ICP for both configurations in all scenes. Furthermore, we observe an interesting result: the performance of both KISS-ICP and MLIO degrades when using point clouds from LiDARs of different manufacturers, and this degradation is more pronounced for KISS-ICP. A likely reason is that the LiDARs are only approximately calibrated spatially due to their lack of a common FoV. The temporal and spatial miscalibrations introduce noise into the local map and into the point clouds fed to the algorithm. This effect is mitigated in MLIO by the use of the factor graph to optimize the poses; hence its performance does not degrade to the same degree as KISS-ICP. This also demonstrates the capability of the algorithm to integrate multiple LiDARs from different manufacturers.

Table 1. Positional Root Mean Square Error (RMSE), Absolute Positional Error (APE) at the trajectory end and Yaw RMSE for predicted odometry computed using the baselines KISS-ICP [6], LIO-SAM [4] and the proposed Multiple LiDAR and Inertial Odometry (MLIO)
Table 2. Positional RMSE, APE and Yaw RMSE values for predicted odometry computed using the baselines KISS-ICP [6] and the proposed Multiple LiDAR and Inertial Odometry (MLIO)

5 Conclusions

In this work, we propose MLIO, a framework that estimates LiDAR odometry using input from multiple asynchronous LiDARs and an IMU. The backend odometry for each LiDAR is computed using KISS-ICP, while a factor graph with IMU pre-integration factors is used to compute the optimized LiDAR odometry. We validate the algorithm on a dataset collected with the EasyMile vehicle equipped with the required sensors. We demonstrate that the proposed MLIO algorithm outperforms the considered baselines, the LiDAR-only KISS-ICP and the LiDAR-IMU based LIO-SAM, on this custom dataset.

The proposed algorithm could be extended to include online calibration of the LiDARs, which could significantly improve the overall odometry estimation result.