
SLAM Based on Multi-Sensor

Chapter in Introduction to Intelligent Robot System Design

Abstract

Simultaneous localization and mapping (SLAM) is the key technology for mobile robots to achieve full autonomy and multiple functions. Mobile robots mainly carry sensors such as cameras and LiDAR to obtain measurements of the environment. However, due to limited accuracy and robustness, as well as the complexity of the scene, SLAM using a single sensor suffers from degradation and fails to meet the requirements. Multi-sensor fusion can make up for the limitations and shortcomings of a single sensor and adapt to complex environmental changes. For example, the high-frequency output of an IMU can handle rapid motion, the feature tracking of a camera can overcome IMU drift, and a LiDAR point cloud can provide high-precision, long-range depth information. To build an information-rich environment map and achieve accurate positioning, the complementarity of multi-sensor data can be exploited to realize multi-sensor fusion SLAM, thereby improving the robustness and adaptability of the robot in unknown, dynamic, and complex environments.


Notes

  1. Tixiao Shan, Brendan Englot, Drew Meyers, Wei Wang, Carlo Ratti, Daniela Rus. LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping. Proceedings of the 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2020), pp. 5135–5142, 24–30 Oct 2020, Las Vegas, Nevada, USA.

  2. https://gtsam.org/tutorials/intro.html

  3. Wei Xu, Fu Zhang. FAST-LIO: A Fast, Robust LiDAR-Inertial Odometry Package by Tightly-Coupled Iterated Kalman Filter. IEEE Robotics and Automation Letters, 2021, Vol. 6(2), pp. 3317–3324; Wei Xu, Yixi Cai, Dongjiao He, Jiarong Lin, Fu Zhang. FAST-LIO2: Fast Direct LiDAR-Inertial Odometry. https://arxiv.org/abs/2107.06829v1




Appendices

Further Reading

Understand the characteristics of, and the differences between, loosely coupled and tightly coupled multi-sensor fusion, as well as the tightly coupled LIO-SAM and FAST-LIO methods.

A loosely coupled system offers flexibility in the combination of sensors, imposes weaker requirements on timestamp synchronization, and typically has a lower computational cost. Taking LiDAR-inertial odometry (LIO) as an example, loosely coupled LIO methods process the LiDAR and IMU measurements separately and fuse their results afterwards. A common loosely coupled procedure is to obtain a pose measurement by registering a new scan and then fuse that pose measurement with the IMU measurements. Separating scan registration from data fusion reduces the computational load, but it ignores the correlation between the system's other states (e.g., velocity) and the pose of the new scan. Moreover, in featureless environments, the scan registration can degenerate in certain directions and cause unreliable fusion in later stages.
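The loose-coupling procedure above can be sketched as a minimal pose-level fusion: scan registration yields one pose, IMU propagation yields another, and only those results are blended. The function name, the 2D pose layout, and the fixed blending weight are illustrative assumptions; a real system would fuse the two with an EKF using proper covariances.

```python
import numpy as np

def fuse_pose_loosely(imu_pose, scan_pose, w_scan=0.8):
    """Loosely coupled fusion sketch: blend an IMU-propagated 2D pose
    [x, y, yaw] with the pose obtained from scan registration. The
    fixed weight stands in for a covariance-weighted filter update."""
    imu_pose = np.asarray(imu_pose, dtype=float)
    scan_pose = np.asarray(scan_pose, dtype=float)
    fused = (1.0 - w_scan) * imu_pose + w_scan * scan_pose
    # Wrap the yaw angle back into (-pi, pi].
    fused[2] = np.arctan2(np.sin(fused[2]), np.cos(fused[2]))
    return fused

# IMU propagation has drifted; scan matching pulls the estimate back.
fused = fuse_pose_loosely([1.0, 0.0, 0.1], [0.9, 0.05, 0.0])
```

Note that each pose is treated as a finished measurement: the fusion step never sees the raw scan points, which is exactly why correlations with velocity and the degeneracy of the registration are invisible to it.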

Unlike loosely coupled methods, tightly coupled LiDAR-inertial odometry methods typically fuse the raw LiDAR feature points (instead of scan registration results) with the IMU data. Because the LiDAR point cloud features are placed into the feature vector, the dimension of the full system state vector becomes very high, which demands a large amount of computation. Tightly coupled LIO processes the raw LiDAR observations and raw IMU observations jointly, taking their internal relationship and mutual influence into account. That is, a tightly coupled system optimizes jointly over both inertial and LiDAR measurements, which yields higher accuracy in both mapping and tracking. The general idea is that IMU data is first used to remove motion distortion from the LiDAR observations; the LiDAR and IMU observations are then fed together into some form of state estimation model whose objective is to minimize the overall error of both. The final state quantities, such as position, velocity, orientation, bias, and gravity, are estimated. There are two main approaches to tightly coupled LIO: optimization based and filter based.
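The joint-state idea can be illustrated with a toy stacked residual: one vector combines the IMU propagation error and the LiDAR point-to-plane errors, so a single optimizer or filter update uses both sensors at once. The 18-dimensional state layout, the translate-only point projection, and all names below are hypothetical simplifications, not any particular system's formulation.

```python
import numpy as np

# Illustrative state layout: position(3), velocity(3), rotation vector(3),
# gyro bias(3), accel bias(3), gravity(3) -> 18 dimensions.
STATE_DIM = 18

def stacked_residual(state, imu_pred, lidar_points, plane_n, plane_d):
    """Tightly coupled sketch: stack the deviation from the IMU
    prediction together with signed point-to-plane distances of raw
    LiDAR points, forming one joint residual vector."""
    p = state[:3]
    r_imu = state - imu_pred           # deviation from IMU propagation
    pts_w = lidar_points + p           # toy translate-only projection
    r_lidar = pts_w @ plane_n + plane_d  # signed point-to-plane distances
    return np.concatenate([r_imu, r_lidar])

state = np.zeros(STATE_DIM)
imu_pred = np.zeros(STATE_DIM)
pts = np.array([[1.0, 0.0, 0.2], [2.0, 1.0, -0.1]])
n, d = np.array([0.0, 0.0, 1.0]), 0.0  # ground plane z = 0
res = stacked_residual(state, imu_pred, pts, n, d)  # 18 IMU + 2 LiDAR entries
```

The residual dimension grows with the number of feature points, which is the source of the high computational demand mentioned above.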

LIO-SAM is a tightly coupled LiDAR-inertial odometry system built on factor graph smoothing and mapping,Footnote 1 which adds IMU pre-integration and GPS information and removes the inter-frame matching module. The inputs are IMU data, point clouds, and GPS. LIO-SAM obtains the IMU pre-integration factor from the IMU measurements between adjacent keyframes. The LiDAR odometry factor is then obtained by matching a keyframe against the local map. When a new pose node is inserted into the factor graph, a GPS factor is attached to that node. Finally, the loop closure factor is obtained by matching a keyframe against candidate loop closure keyframes, and the overall map is optimized and constructed via factor graph optimization.

The imu_preintegration node in LIO-SAM performs pre-integration on the IMU data. The pre-integration result is passed to the scan-to-local-map module at the back end to remove distortion and provide initial registration values; the registration result is in turn returned to the imu_preintegration node. In this node, a sliding-window optimization problem of no more than N frames is constructed on top of the GTSAM (Georgia Tech Smoothing and Mapping) factor graph optimization library,Footnote 2 which specifically optimizes the current IMU bias. The new bias is used for IMU pre-integration at subsequent times. At the same time, the IMU pre-integration results also serve as IMU pre-integration factors, which are fed into the back-end factor graph optimization problem to participate in the optimization of the entire trajectory, as shown in Fig. 6.15.

Fig. 6.15
A schematic representation of LIO-SAM, involving the GPS factor, scan matching, loop closure factor, LiDAR odometry factor, IMU pre-integration factor, GPS measurement, and robot state nodes.

System structure of LIO-SAM
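As a rough illustration of what pre-integration between two keyframes computes, the sketch below accumulates bias-corrected IMU samples into relative rotation, velocity, and position deltas. This is a small-angle toy, not LIO-SAM's or GTSAM's actual implementation, which also propagates covariances and bias Jacobians on the manifold; the function and variable names are made up for illustration.

```python
import numpy as np

def preintegrate_imu(accels, gyros, dt, accel_bias, gyro_bias):
    """Minimal IMU pre-integration sketch (body frame, small-angle,
    no noise propagation): accumulate the relative rotation vector,
    velocity delta, and position delta between two keyframes. The
    summarized deltas can be reused when only the bias estimate
    changes elsewhere in the factor graph."""
    d_theta = np.zeros(3)   # accumulated rotation vector
    d_vel = np.zeros(3)     # accumulated velocity change
    d_pos = np.zeros(3)     # accumulated position change
    for a, w in zip(accels, gyros):
        a = a - accel_bias
        w = w - gyro_bias
        d_pos += d_vel * dt + 0.5 * a * dt**2  # integrate before updating velocity
        d_vel += a * dt
        d_theta += w * dt   # small-angle accumulation (toy)
    return d_theta, d_vel, d_pos

# Constant 1 m/s^2 forward acceleration for 0.1 s at 100 Hz:
accels = [np.array([1.0, 0.0, 0.0])] * 10
gyros = [np.zeros(3)] * 10
dth, dv, dp = preintegrate_imu(accels, gyros, 0.01, np.zeros(3), np.zeros(3))
```

Because the deltas are expressed relative to the previous keyframe, they do not need to be re-integrated from raw measurements every time the optimizer moves the keyframe poses, which is what makes sliding-window optimization over many frames affordable.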

FAST-LIO and FAST-LIO2 fuse LiDAR feature points with IMU data using a tightly coupled iterated extended Kalman filter, allowing robust navigation in fast-motion, noisy, or cluttered environments where degeneration occurs.Footnote 3 The LiDAR input is fed into the feature extraction module to obtain planar and edge features. The extracted features and IMU measurements are then fed into the state estimation module, which performs state estimation at 10–50 Hz. The estimated pose registers the feature points into the global frame and merges them with the feature point map built so far. The updated map is finally used to register further new points in the next step.
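The iterated measurement update at the heart of such a filter can be sketched on a toy problem: unlike a standard EKF, the measurement model is re-linearized at each new iterate until the estimate settles. The range measurement model and all names below are illustrative, not FAST-LIO's actual LiDAR residual model or its on-manifold formulation.

```python
import numpy as np

def iterated_ekf_update(x0, P, z, h, H_jac, R, n_iters=5):
    """Iterated EKF measurement update sketch: repeatedly re-linearize
    the nonlinear measurement model h at the current iterate, keeping
    the prior x0 fixed. A single pass (n_iters=1) reduces to the
    ordinary EKF update."""
    x = x0.copy()
    for _ in range(n_iters):
        H = H_jac(x)
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        # Iterated form: innovation evaluated at the iterate x, with a
        # correction term referencing the prior x0.
        x = x0 + K @ (z - h(x) - H @ (x0 - x))
    P_new = (np.eye(len(x0)) - K @ H) @ P
    return x, P_new

# Toy nonlinear measurement: range of a 2D position state.
h = lambda x: np.array([np.linalg.norm(x)])
H_jac = lambda x: (x / np.linalg.norm(x)).reshape(1, 2)
x0 = np.array([1.0, 1.0])
P = np.eye(2) * 0.5
z = np.array([2.0])  # measured range exceeds the prior's sqrt(2)
x_new, P_new = iterated_ekf_update(x0, P, z, h, H_jac, np.array([[0.01]]))
```

Re-linearizing at the updated estimate is what lets the filter tolerate large, fast motions between scans: the first linearization point may be far from the truth, but later iterations correct it.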

The framework overview of FAST-LIO is shown in Fig. 6.16. FAST-LIO is compatible with both rotating mechanical LiDAR and solid-state LiDAR. First, feature extraction is performed on the point cloud, and the motion distortion of the feature points is compensated using the IMU observations. Then, the LiDAR and IMU observations are placed in an iterated Kalman filter to estimate the position, velocity, orientation, bias, and gravity state quantities at the current moment, all expressed in the world coordinate system.

Fig. 6.16
A framework overview of FAST-LIO, involving LiDAR inputs, IMU inputs, the map, updated feature points, point accumulation, feature extraction, residual computation, and odometry.

Framework overview of FAST-LIO
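The distortion-compensation step can be illustrated with a constant-velocity, translation-only sketch: every point in a sweep was measured at a different time while the sensor moved, so each point is shifted into the sensor frame at the end of the sweep. Real pipelines apply the full IMU-propagated rotation and translation per point; the function below is a hypothetical simplification.

```python
import numpy as np

def undistort_scan(points, timestamps, t_end, velocity):
    """Motion-distortion compensation sketch: express every point of a
    LiDAR sweep in the sensor frame at sweep end, assuming the sensor
    translated at a constant velocity (in practice obtained from IMU
    propagation). Translation only; rotation is ignored here."""
    points = np.asarray(points, dtype=float)
    ts = np.asarray(timestamps, dtype=float)
    # Sensor motion remaining between each point's capture time and sweep end:
    remaining = (t_end - ts)[:, None] * np.asarray(velocity, dtype=float)[None, :]
    # A static world point appears shifted backward in the later sensor frame.
    return points - remaining

# Sensor moving at 1 m/s along x during a 0.1 s sweep; both returns hit
# the same static surface 5 m ahead, one at sweep start and one at the end.
pts = np.array([[5.0, 0.0, 0.0], [5.0, 0.0, 0.0]])
ts = np.array([0.0, 0.1])
out = undistort_scan(pts, ts, t_end=0.1, velocity=[1.0, 0.0, 0.0])
```

After compensation the two returns no longer disagree by the distance traveled during the sweep, which is what makes subsequent scan registration consistent.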

Exercises

  1. Calibrate an IMU sensor according to the IMU calibration method described.

  2. Analyze the process of LiDAR and IMU extrinsic parameter calibration.

  3. The Kalman filter is a Gaussian filter method, while the particle filter is a nonparametric filter method. What are the advantages, disadvantages, and application scenarios of these two methods?

  4. What are the current improvements to address the limitations of the Kalman filter method?

  5. Explain the difference between filter-based SLAM and graph optimization-based SLAM.

  6. What method does the Cartographer algorithm use to reduce computational resource consumption and ensure real-time performance and accuracy in loop closure detection?

  7. Refer to the method of environmental mapping on 2D data sets using the Cartographer algorithm and perform mapping on 3D data sets.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Peng, G., Lam, T.L., Hu, C., Yao, Y., Liu, J., Yang, F. (2023). SLAM Based on Multi-Sensor. In: Introduction to Intelligent Robot System Design. Springer, Singapore. https://doi.org/10.1007/978-981-99-1814-0_6
