
Enabling Robust SLAM for Mobile Robots with Sensor Fusion

A chapter in Autonomous Driving Perception

Abstract

Simultaneous Localization and Mapping (SLAM) is a fundamental problem in robotics. Over the past three decades, researchers have made significant progress on the probabilistic SLAM problem, presenting various theoretical frameworks, efficient solvers, and complete systems. As autonomous robots (e.g., self-driving cars, legged robots) continue to mature, SLAM systems have become increasingly popular in large-scale real-world applications. The evolution of SLAM has also often been propelled by the emergence of new sensors or sensor combinations. This chapter introduces the sensors commonly used on mobile robots, followed by a comprehensive review of several classic SLAM systems from a modern perspective. It also presents a real-world case study of constructing a multi-sensor system and a challenging SLAM dataset, offering a valuable tutorial for researchers developing their own research platforms. Overall, this chapter aims to provide readers with a complete guide to sensor fusion, from theory to practice.
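For context, the probabilistic SLAM problem referred to in the abstract is conventionally posed as estimating the joint posterior over the robot trajectory x_{0:t} and the map m, given sensor measurements z_{1:t} and control inputs u_{1:t}. Under the usual Markov and conditional-independence assumptions, it factors into a motion model and an observation model (a standard textbook formulation, not notation specific to this chapter):

$$
p(x_{0:t}, m \mid z_{1:t}, u_{1:t}) \;\propto\; p(x_0)\,\prod_{k=1}^{t} p(x_k \mid x_{k-1}, u_k)\, p(z_k \mid x_k, m)
$$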

This work was supported by the Guangdong Basic and Applied Basic Research Foundation under project 2021B1515120032, the Foshan-HKUST Project (no. FSUST20-SHCIRI06C), and the Project of the Hetao Shenzhen-Hong Kong Science and Technology Innovation Cooperation Zone (HZQB-KCZYB-2020083), awarded to Prof. Ming Liu.





Author information

Correspondence to Ming Liu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Jiao, J. et al. (2023). Enabling Robust SLAM for Mobile Robots with Sensor Fusion. In: Fan, R., Guo, S., Bocus, M.J. (eds) Autonomous Driving Perception. Advances in Computer Vision and Pattern Recognition. Springer, Singapore. https://doi.org/10.1007/978-981-99-4287-9_7


  • DOI: https://doi.org/10.1007/978-981-99-4287-9_7


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-4286-2

  • Online ISBN: 978-981-99-4287-9

  • eBook Packages: Computer Science, Computer Science (R0)
