A Survey on Global LiDAR Localization: Challenges, Advances and Open Problems

  • Published in: International Journal of Computer Vision

Abstract

Knowledge of its own pose is key for any mobile robot application; pose estimation is therefore among the core functionalities of mobile robots. Over the last two decades, LiDAR scanners have become the standard sensor for robot localization and mapping. This article provides an overview of recent progress and advancements in LiDAR-based global localization. We begin by formulating the problem and exploring the application scope. We then review the methodology, including recent advances in several topics such as maps, descriptor extraction, and cross-robot localization. The contents of the article are organized under three themes. The first theme concerns the combination of global place retrieval and local pose estimation. The second theme is upgrading single-shot measurements to sequential ones for sequential global localization. The third theme extends single-robot global localization to cross-robot localization in multi-robot systems. We conclude the survey with a discussion of open challenges and promising directions in global LiDAR localization. To the best of our knowledge, this is the first comprehensive survey on global LiDAR localization for mobile robots.
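The first theme, global place retrieval followed by local pose estimation, can be illustrated with a toy two-stage pipeline. The sketch below is illustrative only and not taken from any method surveyed here: `ring_descriptor`, `retrieve`, and `icp_2d` are hypothetical names, the descriptor is a simple range histogram (a crude stand-in for descriptors such as Scan Context), and the registration stage is plain 2D point-to-point ICP.

```python
import numpy as np

def ring_descriptor(scan, num_rings=8, max_range=10.0):
    """Toy rotation-invariant place descriptor: a normalized histogram of
    point ranges binned into concentric rings around the sensor."""
    ranges = np.linalg.norm(scan[:, :2], axis=1)
    hist, _ = np.histogram(ranges, bins=num_rings, range=(0.0, max_range))
    return hist / max(hist.sum(), 1)

def retrieve(query_scan, map_scans):
    """Stage 1, global place retrieval: return the index of the map scan
    whose descriptor is closest to the query's."""
    q = ring_descriptor(query_scan)
    dists = [np.linalg.norm(q - ring_descriptor(m)) for m in map_scans]
    return int(np.argmin(dists))

def icp_2d(src, dst, iters=30):
    """Stage 2, local pose estimation: 2D point-to-point ICP with
    brute-force nearest-neighbour correspondences. Returns (R, t) such
    that src @ R.T + t approximately aligns with dst."""
    R, t = np.eye(2), np.zeros(2)
    cur = src.copy()
    for _ in range(iters):
        # nearest neighbour in dst for every current source point
        nn = np.argmin(((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1), axis=1)
        matched = dst[nn]
        # closed-form rigid alignment of the matched pairs (Kabsch/SVD)
        mu_s, mu_d = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        dR = Vt.T @ S @ U.T
        dt = mu_d - dR @ mu_s
        cur = cur @ dR.T + dt
        R, t = dR @ R, dR @ t + dt  # accumulate the incremental transform
    return R, t
```

The query scan is matched against the whole map only through compact descriptors; the expensive metric alignment then runs against the single retrieved scan, which is exactly the division of labour between place recognition and pose estimation that the survey discusses.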

[Figures 1–9 omitted from this preview. Figure sources: Fig. 4 from LPDNet (Liu et al., 2019); Fig. 5 adapted from G3Reg (Qiao et al., 2023); Fig. 6 from DiSCO (Xu et al., 2021); Fig. 7 from SegMatch (Dubé et al., 2017; color figure online); all used with permission.]

References

  • Adolfsson, D., Castellano-Quero, M., Magnusson, M., Lilienthal, A. J., & Andreasson, H. (2022). Coral: Introspection for robust radar and lidar perception in diverse environments using differential entropy. Robotics and Autonomous Systems, 155, 104136.

  • Akai, N., Hirayama, T., & Murase, H. (2020). Hybrid localization using model- and learning-based methods: Fusion of Monte Carlo and e2e localizations via importance sampling. In Proceedings of the IEEE international conference on robotics and automation (pp. 6469–6475).

  • Alijani, F., Peltomäki, J., Puura, J., Huttunen, H., Kämäräinen, J.-K., & Rahtu, E. (2022). Long-term visual place recognition. In 2022 26th international conference on pattern recognition (ICPR) (pp. 3422–3428). IEEE.

  • Ankenbauer, J., Lusk, P. C., & How, J. P. (2023). Global localization in unstructured environments using semantic object maps built from various viewpoints. In 2023 IEEE/RSJ international conference on intelligent robots and systems (IROS).

  • Aoki, Y., Goforth, H., Srivatsan, R. A., & Lucey, S. (2019). Pointnetlk: Robust & efficient point cloud registration using pointnet. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7163–7172).

  • Arandjelović, R., Gronat, P., Torii, A., Pajdla, T., & Sivic, J. (2016). Netvlad: Cnn architecture for weakly supervised place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5297–5307).

  • Bai, X., Luo, Z., Zhou, L., Chen, H., Li, L., Hu, Z., Fu, H., & Tai, C.-L. (2021). Pointdsc: Robust point cloud registration using deep spatial consistency. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 15859–15869).

  • Bai, X., Luo, Z., Zhou, L., Fu, H., Quan, L., & Tai, C.-L. (2020). D3feat: Joint learning of dense detection and description of 3d local features. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6359–6367).

  • Barfoot, T. D. (2017). State estimation for robotics. Cambridge: Cambridge University Press.

  • Barnes, D., Gadd, M., Murcutt, P., Newman, P., & Posner, I. (2020). The oxford radar robotcar dataset: A radar extension to the oxford robotcar dataset. In Proceedings of international conference on robotics and automation (pp. 6433–6438).

  • Barron, J. T. (2019). A general and adaptive robust loss function. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4331–4339).

  • Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Gall, J., & Stachniss, C. (2021). Towards 3d lidar-based semantic scene understanding of 3d point cloud sequences: The semantickitti dataset. International Journal of Robotics Research, 40(8–9), 959–967.

  • Bennewitz, M., Stachniss, C., Behnke, S., & Burgard, W. (2009). Utilizing reflection properties of surfaces to improve mobile robot localization. In Proceedings of international conference on robotics and automation, (pp. 4287–4292).

  • Bernreiter, L., Khattak, S., Ott, L., Siegwart, R., Hutter, M., & Cadena, C. (2022). Collaborative robot mapping using spectral graph analysis. In 2022 international conference on robotics and automation (ICRA) (pp. 3662–3668). IEEE.

  • Bernreiter, L., Ott, L., Nieto, J., Siegwart, R., & Cadena, C. (2021). Spherical multi-modal place recognition for heterogeneous sensor systems. In Proceedings of International Conference on Robotics and Automation (pp. 1743–1750).

  • Bernreiter, L., Ott, L., Nieto, J., Siegwart, R., & Cadena, C. (2021). Phaser: A robust and correspondence-free global pointcloud registration. IEEE Robotics and Automation Letters, 6(2), 855–862.

  • Besl, P. J., & McKay, N. D. (1992). Method for registration of 3-d shapes. In Sensor fusion IV: Control paradigms and data structures (Vol. 1611, pp. 586–606). SPIE.

  • Pattabiraman, B., Patwary, M. M. A., Gebremedhin, A. H., Liao, W.-K., & Choudhary, A. (2015). Fast algorithms for the maximum clique problem on massive graphs with applications to overlapping community detection. Internet Mathematics, 11(4–5), 421–448.

  • Biber, P., & Straßer, W. (2003). The normal distributions transform: A new approach to laser scan matching. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (Vol. 3, pp. 2743–2748).

  • Boniardi, F., Caselitz, T., Kümmerle, R., & Burgard, W. (2017). Robust lidar-based localization in architectural floor plans. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 3318–3324).

  • Bosse, M., & Zlot, R. (2013). Place recognition using keypoint voting in large 3d lidar datasets. In Proceedings of international conference on robotics and automation (pp. 2677–2684).

  • Bosse, M., & Zlot, R. (2009). Keypoint design and evaluation for place recognition in 2d lidar maps. Robotics and Autonomous Systems, 57(12), 1211–1224.

  • Buehler, M., Iagnemma, K., & Singh, S. (2009). The DARPA urban challenge: Autonomous vehicles in city traffic (Vol. 56). New York: Springer.

  • Bülow, H., & Birk, A. (2018). Scale-free registrations in 3d: 7 degrees of freedom with Fourier Mellin soft transforms. International Journal of Computer Vision, 126(7), 731–750.

  • Burnett, K., Yoon, D. J., Wu, Y., Li, A. Z., Zhang, H., Lu, S., Qian, J., Tseng, W.-K., Lambert, A., Leung, K. Y. K., Schoellig, A. P., & Barfoot, T. D. (2023). Boreas: A multi-season autonomous driving dataset. The International Journal of Robotics Research, 42(1–2), 33–42.

  • Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., & Leonard, J. J. (2016). Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics, 32(6), 1309–1332.

  • Cao, S., Lu, X., & Shen, S. (2022). GVINS: Tightly coupled GNSS–visual–inertial fusion for smooth and consistent state estimation. IEEE Transactions on Robotics, 38, 2004–2021.

  • Carballo, A., Lambert, J., Monrroy, A., Wong, D., Narksri, P., Kitsukawa, Y., Takeuchi, E., Kato, S., & Takeda, K. (2020). Libre: The multiple 3d lidar dataset. In Proceedings of the IEEE intelligent vehicles symposium (pp. 1094–1101). IEEE.

  • Carlevaris-Bianco, N., Ushani, A. K., & Eustice, R. M. (2016). University of Michigan north campus long-term vision and lidar dataset. The International Journal of Robotics Research, 35(9), 1023–1035.

  • Carlone, L., Censi, A., & Dellaert, F. (2014). Selecting good measurements via l1 relaxation: A convex approach for robust estimation over graphs. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 2667–2674).

  • Cattaneo, D., Vaghi, M., Fontana, S., Ballardini, A. L., & Sorrenti, D. G. (2020). Global visual localization in lidar-maps through shared 2d-3d embedding space. In Proceedings of international conference on robotics and automation, (pp. 4365–4371).

  • Cattaneo, D., Vaghi, M., & Valada, A. (2022). Lcdnet: Deep loop closure detection and point cloud registration for lidar slam. IEEE Transactions on Robotics, 38, 2074–2093.

  • Chang, M.-F., Dong, W., Mangelson, J., Kaess, M., & Lucey, S. (2021). Map compressibility assessment for lidar registration. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 5560–5567).

  • Chang, Y., Ebadi, K., Denniston, C. E., Ginting, M. F., Rosinol, A., Reinke, A., Palieri, M., Shi, J., Chatterjee, A., Morrell, B., et al. (2022). Lamp 2.0: A robust multi-robot slam system for operation in challenging large-scale underground environments. IEEE Robotics and Automation Letters, 7(4), 9175–9182.

  • Chebrolu, N., Läbe, T., Vysotska, O., Behley, J., & Stachniss, C. (2021). Adaptive robust kernels for non-linear least squares problems. IEEE Robotics and Automation Letters, 6(2), 2240–2247.

  • Chen, X., Läbe, T., Milioto, A., Röhling, T., Vysotska, O., Haag, A., Behley, J., & Stachniss, C. (2020). Overlapnet: Loop closing for lidar-based slam. In Proceedings of robotics: Science and systems conference.

  • Chen, X., Läbe, T., Nardi, L., Behley, J., & Stachniss, C. (2020). Learning an overlap-based observation model for 3D LiDAR localization. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems.

  • Chen, X., Milioto, A., Palazzolo, E., Giguère, P., Behley, J., & Stachniss, C. (2019). SuMa++: Efficient LiDAR-based Semantic SLAM. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems.

  • Chen, X., Vizzo, I., Läbe, T., Behley, J., & Stachniss, C. (2021). Range image-based LiDAR localization for autonomous vehicles. In Proceedings of international conference on robotics and automation.

  • Chen, Z., Liao, Y., Du, H., Zhang, H., Xu, X., Lu, H., Xiong, R., & Wang, Y. (2023). Dpcn++: Differentiable phase correlation network for versatile pose registration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45, 14366–14384.

  • Chen, R., Yin, H., Jiao, Y., Dissanayake, G., Wang, Y., & Xiong, R. (2021). Deep samplable observation model for global localization and kidnapping. IEEE Robotics and Automation Letters, 6(2), 2296–2303.

  • Chizat, L., Peyré, G., Schmitzer, B., & Vialard, F.-X. (2018). Scaling algorithms for unbalanced optimal transport problems. Mathematics of Computation, 87(314), 2563–2609.

  • Cho, Y., Kim, G., Lee, S., & Ryu, J.-H. (2022). Openstreetmap-based lidar global localization in urban environment without a prior lidar map. IEEE Robotics and Automation Letters, 7(2), 4999–5006.

  • Choy, C., Dong, W., & Koltun, V. (2020). Deep global registration. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2514–2523).

  • Choy, C., Park, J., & Koltun, Vladlen (2019). Fully convolutional geometric features. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8958–8966).

  • Cohen, T. S., Geiger, M., Köhler, J., & Welling, M. (2018). Spherical cnns. In International conference on learning representations.

  • Cop, K. P., Borges, P. V. K., & Dubé, R. (2018). Delight: An efficient descriptor for global localisation using lidar intensities. In Proceedings of international conference on robotics and automation (pp. 3653–3660).

  • Cramariuc, A., Tschopp, F., Alatur, N., Benz, S., Falck, T., Brühlmeier, M., et al. (2021). Semsegmap–3d segment-based semantic localization. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1183–1190).

  • Cramariuc, A., Bernreiter, L., Tschopp, F., Fehr, M., Reijgwart, V., Nieto, J., Siegwart, R., & Cadena, C. (2022). maplab 2.0–A modular and multi-modal mapping framework. IEEE Robotics and Automation Letters, 8, 520–527.

  • Cui, Y., Chen, X., Zhang, Y., Dong, J., Wu, Q., & Zhu, F. (2022). Bow3d: Bag of words for real-time loop closing in 3d lidar slam. IEEE Robotics and Automation Letters, 8, 2828–2835.

  • Cui, J., & Chen, X. (2023). Ccl: Continual contrastive learning for lidar place recognition. IEEE Robotics and Automation Letters, 8, 4433–4440.

  • Cui, Y., Zhang, Y., Dong, J., Sun, H., & Zhu, F. (2022). Link3d: Linear keypoints representation for 3d lidar point cloud. arXiv preprint arXiv:2206.05927.

  • Cummins, M., & Newman, P. (2008). Fab-map: Probabilistic localization and mapping in the space of appearance. International Journal of Robotics Research, 27(6), 647–665.

  • Dellaert, F. (2012). Factor graphs and gtsam: A hands-on introduction. Technical report, Georgia Institute of Technology.

  • Dellaert, F., Fox, D., Burgard, W., & Thrun, S. (1999). Monte Carlo localization for mobile robots. In Proceedings of IEEE international conference on robotics and automation (Vol. 2, pp. 1322–1328).

  • Deng, H., Birdal, T., & Ilic, S. (2018). Ppfnet: Global context aware local features for robust 3d point matching. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 195–205).

  • Deng, J., Wu, Q., Chen, X., Xia, S., Sun, Z., Liu, G., Yu, W., & Pei, L. (2023). Nerf-loam: Neural implicit representation for large-scale incremental lidar odometry and mapping. In Proceedings of the IEEE international conference on computer vision.

  • Denniston, C. E., Chang, Y., Reinke, A., Ebadi, K., Sukhatme, G. S., Carlone, L., Morrell, B., & Agha-mohammadi, A. (2022). Loop closure prioritization for efficient and scalable multi-robot slam. IEEE Robotics and Automation Letters, 7(4), 9651–9658.

  • Di Giammarino, L., Aloise, I., Stachniss, C., & Grisetti, G. (2021). Visual place recognition using lidar intensity information. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 4382–4389).

  • Ding, X., Xu, X., Lu, S., Jiao, Y., Tan, M., Xiong, R., Deng, H., Li, M., & Wang, Y. (2022). Translation invariant global estimation of heading angle using sinogram of lidar point cloud. In Proceedings of international conference on robotics and automation, (pp. 2207–2214).

  • Du, J., Wang, R., & Cremers, D. (2020). Dh3d: Deep hierarchical 3d descriptors for robust large-scale 6dof relocalization. In Proceedings of the European conference on computer vision. Glasgow, UK.

  • Dubé, R., Cramariuc, A., Dugas, D., Nieto, J., Siegwart, R., & Cadena, C. (2018). Segmap: 3d segment mapping using data-driven descriptors. arXiv preprint arXiv:1804.09557.

  • Dubé, R., Dugas, D., Stumm, E., Nieto, J., Siegwart, R., & Cadena, C. (2017). Segmatch: Segment based place recognition in 3d point clouds. In Proceedings of international conference on robotics and automation (pp. 5266–5272).

  • Dube, R., Cramariuc, A., Dugas, D., Sommer, H., Dymczyk, M., Nieto, J., Siegwart, R., & Cadena, C. (2020). Segmap: Segment-based mapping and localization using data-driven descriptors. International Journal of Robotics Research, 39(2–3), 339–355.

  • Ebadi, K., Bernreiter, L., Biggie, H., Catt, G., Chang, Y., Chatterjee, A., et al. (2022). Present and future of slam in extreme underground environments. arXiv preprint arXiv:2208.01787.

  • Ebadi, K., Palieri, M., Wood, S., Padgett, C., & Agha-mohammadi, A. (2021). Dare-slam: Degeneracy-aware and resilient loop closing in perceptually-degraded environments. Journal of Intelligent & Robotic Systems, 102(1), 1–25.

  • Elhousni, M., & Huang, X. (2020). A survey on 3d lidar localization for autonomous vehicles. In Proceedings of IEEE intelligent vehicles symposium (pp. 1879–1884). IEEE.

  • Eppstein, D., Löffler, M., & Strash, D. (2010). Listing all maximal cliques in sparse graphs in near-optimal time. In International symposium on algorithms and computation (pp. 403–414). Springer.

  • Fan, Y., He, Y., & Tan, U.-X. (2020). Seed: A segmentation-based egocentric 3d point cloud descriptor for loop closure detection. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 5158–5163).

  • Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381–395.

  • Fox, D. (2001). Kld-sampling: Adaptive particle filters. Proceedings of Advances in Neural Information Processing Systems, 14, 713–720.

  • Freund, Y., & Schapire, R. E. (1997). A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1), 119–139.

  • Fujii, A., Tanaka, M., Yabushita, H., Mori, T., & Odashima, T. (2015). Detection of localization failure using logistic regression. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 4313–4318).

  • Gálvez-López, D., & Tardos, J. D. (2012). Bags of binary words for fast place recognition in image sequences. IEEE Transactions on Robotics, 28(5), 1188–1197.

  • Gao, H., Zhang, X., Yuan, J., Song, J., & Fang, Y. (2019). A novel global localization approach based on structural unit encoding and multiple hypothesis tracking. IEEE Transactions on Instrumentation and Measurement, 68(11), 4427–4442.

  • Garg, S., Fischer, T., & Milford, M. (2021). Where is your place, visual place recognition? arXiv preprint arXiv:2103.06443.

  • Geiger, A., Lenz, P., Stiller, C., & Urtasun, R. (2013). Vision meets robotics: The kitti dataset. The International Journal of Robotics Research, 32(11), 1231–1237.

  • Gong, Y., Sun, F., Yuan, J., Zhu, W., & Sun, Q. (2021). A two-level framework for place recognition with 3d lidar based on spatial relation graph. Pattern Recognition, 120, 108171.

  • Granström, K., Callmer, J., Ramos, F., & Nieto, J. (2009). Learning to detect loop closure from range data. In Proceedings of international conference on robotics and automation (pp. 15–22).

  • Granström, K., Schön, T. B., Nieto, J. I., & Ramos, F. T. (2011). Learning to close loops from range data. International Journal of Robotics Research, 30(14), 1728–1754.

  • Guivant, J. E., & Nebot, E. M. (2001). Optimization of the simultaneous localization and map-building algorithm for real-time implementation. IEEE Transactions on Robotics and Automation, 17(3), 242–257.

  • Guo, Y., Bennamoun, M., Sohel, F., Min, L., Wan, J., & Kwok, N. M. (2016). A comprehensive performance evaluation of 3d local feature descriptors. International Journal of Computer Vision, 116(1), 66–89.

  • Guo, J., Borges, P. V. K., Park, C., & Gawel, A. (2019). Local descriptor for robust place recognition using lidar intensity. IEEE Robotics and Automation Letters, 4(2), 1470–1477.

  • Hadsell, R., Chopra, S., & LeCun, Y. (2006). Dimensionality reduction by learning an invariant mapping. In 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR’06) (Vol. 2, pp. 1735–1742).

  • He, L., Wang, X., & Zhang, H. (2016). M2dp: A novel 3d point cloud descriptor and its application in loop closure detection. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 231–237).

  • Hendrikx, R. W. M., Bruyninckx, H. P. J., Elfring, J., & Van De Molengraft, M. J. G. (2022). Local-to-global hypotheses for robust robot localization. Frontiers in Robotics and AI, 171, 887261.

  • Hendrikx, R. W. M., Pauwels, P., Torta, E., Bruyninckx, H. P. J., & van de Molengraft, M. J. G. (2021). Connecting semantic building information models and robotics: An application to 2d lidar-based localization. In Proceedings of international conference on robotics and automation (pp. 11654–11660).

  • Herb, M., Weiherer, T., Navab, N., & Tombari, F. (2019). Crowd-sourced semantic edge mapping for autonomous vehicles. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 7047–7053).

  • Hess, W., Kohler, D., Rapp, H., & Andor, D. (2016). Real-time loop closure in 2d lidar slam. In Proceedings of international conference on robotics and automation (pp. 1271–1278).

  • He, J., Zhou, Y., Huang, L., Kong, Y., & Cheng, H. (2020). Ground and aerial collaborative mapping in urban environments. IEEE Robotics and Automation Letters, 6(1), 95–102.

  • Horn, B. K. P. (1987). Closed-form solution of absolute orientation using unit quaternions. JOSA A, 4(4), 629–642.

  • Huang, S., Gojcic, Z., Usvyatsov, M., Wieser, A., & Schindler, K. (2021). Predator: Registration of 3d point clouds with low overlap. In 2021 IEEE/CVF conference on computer vision and pattern recognition (CVPR) (pp. 4265–4274).

  • Huang, X., Mei, G., & Zhang, J. (2020). Feature-metric registration: A fast semi-supervised approach for robust point cloud registration without correspondences. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 11366–11374).

  • Huang, Y., Shan, T., Chen, F., & Englot, B. (2021). Disco-slam: Distributed scan context-enabled multi-robot lidar slam with two-stage global-local graph optimization. IEEE Robotics and Automation Letters, 7(2), 1150–1157.

  • Hui, L., Yang, H., Cheng, M., Xie, J., & Yang, J. (2021). Pyramid point cloud transformer for large-scale place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6098–6107).

  • Ito, S., Endres, F., Kuderer, M., Tipaldi, G. D., Stachniss, C., & Burgard, W. (2014). W-rgb-d: Floor-plan-based indoor global localization using a depth camera and wifi. In Proceedings of IEEE international conference on robotics and automation (pp. 417–422).

  • Jégou, H., Douze, M., Schmid, C., & Pérez, P. (2010). Aggregating local descriptors into a compact image representation. In 2010 IEEE computer society conference on computer vision and pattern recognition (pp. 3304–3311).

  • Jiang, B., & Shen, S. (2023). Contour context: Abstract structural distribution for 3d lidar loop detection and metric pose estimation. In 2023 IEEE international conference on robotics and automation (ICRA).

  • Jiang, P., Osteen, P., Wigness, M., & Saripalli, S. (2021). Rellis-3d dataset: Data, benchmarks and analysis. In Proceedings of international conference on robotics and automation (pp. 1110–1116).

  • Jiao, J., Wei, H., Hu, T., Hu, X., Zhu, Y., He, Z., Wu, et al. (2022). Fusionportable: A multi-sensor campus-scene dataset for evaluation of localization and mapping accuracy on diverse platforms. In 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 3851–3856). IEEE.

  • Johnson, J., Douze, M., & Jégou, H. (2019). Billion-scale similarity search with GPUs. IEEE Transactions on Big Data, 7(3), 535–547.

  • Jonschkowski, R., Rastogi, D., & Brock, O. (2018). Differentiable particle filters: End-to-end learning with algorithmic priors. arXiv preprint arXiv:1805.11122.

  • Jung, M., Yang, W., Lee, D., Gil, H., Kim, G., & Kim, A. (2023). Helipr: Heterogeneous lidar dataset for inter-lidar place recognition under spatial and temporal variations. arXiv preprint arXiv:2309.14590.

  • Kallasi, F., Rizzini, D. L., & Caselli, S. (2016). Fast keypoint features from laser scanner for robot localization and mapping. IEEE Robotics and Automation Letters, 1(1), 176–183.

  • Karkus, P., Cai, S., & Hsu, D. (2021). Differentiable slam-net: Learning particle slam for visual navigation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2815–2825).

  • Kendall, A., Grimes, M., & Cipolla, R. (2015). Posenet: A convolutional network for real-time 6-dof camera relocalization. In Proceedings of the IEEE international conference on computer vision (pp. 2938–2946).

  • Kim, G., & Kim, A. (2018). Scan context: Egocentric spatial descriptor for place recognition within 3d point cloud map. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 4802–4809).

  • Kim, G., Choi, S., & Kim, A. (2021). Scan context++: Structural place recognition robust to rotation and lateral variations in urban environments. IEEE Transactions on Robotics, 38, 1856–1874.

  • Kim, G., Park, Y. S., Cho, Y., Jeong, J., & Kim, A. (2020). Mulran: Multimodal range dataset for urban place recognition. In Proceedings of international conference on robotics and automation (pp. 6246–6253).

  • Kim, G., Park, B., & Kim, A. (2019). 1-day learning, 1-year localization: Long-term lidar localization using scan context image. IEEE Robotics and Automation Letters, 4(2), 1948–1955.

  • Knights, J., Moghadam, P., Ramezani, M., Sridharan, S., & Fookes, C. (2022). Incloud: Incremental learning for point cloud place recognition. In 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), (pp. 8559–8566). IEEE.

  • Knights, J., Vidanapathirana, K., Ramezani, M., Sridharan, S., Fookes, C., & Moghadam, P. (2023). Wild-places: A large-scale dataset for lidar place recognition in unstructured natural environments. In 2023 IEEE international conference on robotics and automation (ICRA) (pp. 11322–11328). IEEE.

  • Komorowski, J. (2021). Minkloc3d: Point cloud based large-scale place recognition. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 1790–1799).

  • Komorowski, J. (2022). Improving point cloud based place recognition with ranking-based loss and large batch training. In 2022 26th international conference on pattern recognition (ICPR) (pp. 3699–3705). IEEE.

  • Komorowski, J., Wysoczanska, M., & Trzcinski, T. (2021). Egonn: Egocentric neural network for point cloud based 6dof relocalization at the city scale. IEEE Robotics and Automation Letters, 7(2), 722–729.

  • Kong, X., Yang, X., Zhai, G., Zhao, X., Zeng, X., Wang, M., Liu, Y., Li, W., & Wen, F. (2020). Semantic graph based place recognition for 3d point clouds. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 8216–8223).

  • Kramer, A., Harlow, K., Williams, C., & Heckman, C. (2022). Coloradar: The direct 3d millimeter wave radar dataset. International Journal of Robotics Research, 41(4), 351–360.

  • Kuang, H., Chen, X., Guadagnino, T., Zimmerman, N., Behley, J., & Stachniss, C. (2023). Ir-mcl: Implicit representation-based online global localization. IEEE Robotics and Automation Letters, 8(3), 1627–1634.

  • Kümmerle, R., Grisetti, G., Strasdat, H., Konolige, K., & Burgard, W. (2011). g2o: A general framework for graph optimization. In Proceedings of IEEE international conference on robotics and automation (pp. 3607–3613).

  • Labussière, M., Laconte, J., & Pomerleau, F. (2020). Geometry preserving sampling method based on spectral decomposition for large-scale environments. Frontiers in Robotics and AI, 7, 572054.

  • Lai, H., Yin, P., & Scherer, S. (2022). Adafusion: Visual-lidar fusion with adaptive weights for place recognition. IEEE Robotics and Automation Letters, 38, 1856–1874.

  • Latif, Y., Cadena, C., & Neira, J. (2013). Robust loop closing over time for pose graph slam. International Journal of Robotics Research, 32(14), 1611–1626.

  • Lee, K., Lee, J., & Park, J. (2022). Learning to register unbalanced point pairs. arXiv preprint arXiv:2207.04221.

  • Lepetit, V., Moreno-Noguer, F., & Fua, P. (2009). Epnp: An accurate o(n) solution to the pnp problem. International Journal of Computer Vision, 81, 155–166.

  • Li, J., & Lee, G. H. (2019). Usip: Unsupervised stable interest point detection from 3d point clouds. In Proceedings of the IEEE conference on computer vision and pattern Recognition (pp. 361–370).

  • Li, L., Kong, X., Zhao, X., Huang, T., Li, W., Wen, F., Zhang, H., & Liu, Y. (2021). Ssc: Semantic scan context for large-scale place recognition. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 2092–2099).

  • Li, X., Pontes, J. K., & Lucey, S. (2021). Pointnetlk revisited. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 12763–12772).

  • Liao, Y., Xie, J., & Geiger, A. (2022). Kitti-360: A novel dataset and benchmarks for urban scene understanding in 2d and 3d. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(3), 3292–3310.

  • Li, Z., & Hoiem, D. (2017). Learning without forgetting. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(12), 2935–2947.

  • Li, L., Kong, X., Zhao, X., Huang, T., Li, W., Wen, F., Zhang, H., & Liu, Y. (2022). Rinet: Efficient 3d lidar-based place recognition using rotation invariant neural network. IEEE Robotics and Automation Letters, 7(2), 4321–4328.

  • Lim, H., Kim, B., Kim, D., Lee, E. M., & Myung, H. (2023). Quatro++: Robust global registration exploiting ground segmentation for loop closing in lidar slam. The International Journal of Robotics Research, 02783649231207654.

  • Lim, H., Yeon, S., Ryu, S., Lee, Y., Kim, Y., Yun, J., Jung, E., Lee, D., & Myung, H. (2022). A single correspondence is enough: Robust global registration to avoid degeneracy in urban environments. In 2022 international conference on robotics and automation (ICRA) (pp. 8010–8017). IEEE.

  • Lim, H., Hwang, S., & Myung, H. (2021). Erasor: Egocentric ratio of pseudo occupancy-based dynamic object removal for static 3d point cloud map building. IEEE Robotics and Automation Letters, 6(2), 2272–2279.


  • Lin, C. E., Song, J., Zhang, R., Zhu, M., & Ghaffari, M. (2022). Se(3)-equivariant point cloud-based place recognition. In 6th annual conference on robot learning.

  • Liu, Z., Suo, C., Zhou, S., Xu, F., Wei, H., Chen, W., Wang, H., Liang, X., & Liu, Y.H. (2019). Seqlpd: Sequence matching enhanced loop-closure detection based on large-scale point cloud description for self-driving vehicles. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1218–1223).

  • Liu, J., Wang, G., Liu, Z., Jiang, C., Pollefeys, M., & Wang, H. (2023). Regformer: An efficient projection-aware transformer network for large-scale point cloud registration. In 2023 International Conference on Computer Vision.

  • Liu, Z., Zhou, S., Suo, C., Yin, P., Chen, W., et al. (2019). Lpd-net: 3d point cloud learning for large-scale place recognition and environment analysis. In Proceedings of the IEEE international conference on computer vision (pp. 2831–2840). Seoul, Korea.

  • Liu, T., Liao, Q., Gan, L., Ma, F., Cheng, J., Xie, X., Wang, Z., Chen, Y., Zhu, Y., Zhang, S., et al. (2021). The role of the hercules autonomous vehicle during the covid-19 pandemic: An autonomous logistic vehicle for contactless goods transportation. IEEE Robotics and Automation Magazine, 28(1), 48–58.


  • Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the IEEE international conference on computer vision (Vol. 2, pp. 1150–1157).

  • Lowry, S., Sünderhauf, N., Newman, P., Leonard, J. J., Cox, D., Corke, P., & Milford, M. J. (2015). Visual place recognition: A survey. IEEE Transactions on Robotics, 32(1), 1–19.


  • Lu, S., Xu, X., Yin, H., Chen, Z., Xiong, R., & Wang, Y. (2022). One ring to rule them all: Radon sinogram for place recognition, orientation and translation estimation. In 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 2778–2785). IEEE.

  • Lu, W., Zhou, Y., Wan, G., Hou, S., & Song, S. (2019). L3-net: Towards learning based lidar localization for autonomous driving. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6389–6398).

  • Luo, L., Cao, S.-Y., Han, B., Shen, H.-L., & Li, J. (2021). Bvmatch: Lidar-based place recognition using bird’s-eye view images. IEEE Robotics and Automation Letters, 6(3), 6076–6083.


  • Lusk, P. C., Fathian, K., & How, J. P. (2021). Clipper: A graph-theoretic framework for robust data association. In Proceedings of international conference on robotics and automation (pp. 13828–13834).

  • Ma, J., Chen, X., Xu, J., & Xiong, G. (2022). Seqot: A spatial-temporal transformer network for place recognition using sequential lidar data. IEEE Transactions on Industrial Electronics, 70(8), 8225–8234.


  • Maddern, W., Pascoe, G., Linegar, C., & Newman, P. (2017). 1 year, 1000 km: The oxford Robotcar dataset. International Journal of Robotics Research, 36(1), 3–15.


  • Magnusson, M., Andreasson, H., Nuchter, A., & Lilienthal, A. J. (2009a). Appearance-based loop detection from 3d laser data using the normal distributions transform. In Proceedings of international conference on robotics and automation (pp. 23–28).

  • Magnusson, M., Andreasson, H., Nüchter, A., & Lilienthal, A. J. (2009b). Automatic appearance-based loop detection from three-dimensional laser data using the normal distributions transform. Journal of Field Robotics, 26(11–12), 892–914.


  • Mangelson, J. G., Dominic, D., Eustice, R. M., & Vasudevan, R. (2018). Pairwise consistent measurement set maximization for robust multi-robot map merging. In Proceedings of international conference on robotics and automation (pp. 2916–2923).

  • Matsuzaki, S., Koide, K., Oishi, S., Yokozuka, M., & Banno, A. (2023). Single-shot global localization via graph-theoretic correspondence matching. arXiv preprint arXiv:2306.03641.

  • Ma, J., Zhang, J., Xu, J., Ai, R., Gu, W., & Chen, X. (2022). Overlaptransformer: An efficient and yaw-angle-invariant transformer network for lidar-based place recognition. IEEE Robotics and Automation Letters, 7(3), 6958–6965.


  • McGann, D., Rogers, J. G., & Kaess, M. (2023). Robust incremental smoothing and mapping (RISAM). In 2023 IEEE international conference on robotics and automation (ICRA) (pp. 4157–4163). IEEE.

  • Merfels, C., & Stachniss, C. (2016). Pose fusion with chain pose graphs for automated driving. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 3116–3123).

  • Mildenhall, B., Srinivasan, P. P., Tancik, M., Barron, J. T., Ramamoorthi, R., & Ng, R. (2021). Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM, 65(1), 99–106.


  • Milford, M. J., & Wyeth, G. F. (2012). Seqslam: Visual route-based navigation for sunny summer days and stormy winter nights. In Proceedings of international conference on robotics and automation (pp. 1643–1649).

  • Milford, M., Shen, C., Lowry, S., Suenderhauf, N., Shirazi, S., Lin, G., et al. (2015). Sequence searching with deep-learnt depth for condition- and viewpoint-invariant route-based place recognition. In CVPR workshop (pp. 18–25).

  • Milioto, A., Vizzo, I., Behley, J., & Stachniss, C. (2019). Rangenet++: Fast and accurate lidar semantic segmentation. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 4213–4220).

  • Millane, A., Oleynikova, H., Nieto, J., Siegwart, R., & Cadena, C. (2019). Free-space features: Global localization in 2d laser slam using distance function maps. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1271–1277).

  • Montemerlo, M., Roy, N., & Thrun, S. (2003). Perspectives on standardization in mobile robot programming: The Carnegie Mellon navigation (carmen) toolkit. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (Vol. 3, pp. 2436–2441).

  • Naseer, T., Burgard, W., & Stachniss, C. (2018). Robust visual localization across seasons. IEEE Transactions on Robotics, 34(2), 289–302.


  • Nielsen, K., & Hendeby, G. (2022). Survey on 2d lidar feature extraction for underground mine usage. IEEE Transactions on Automation Science and Engineering, 20, 981–994.


  • Nobili, S., Tinchev, G., & Fallon, M. (2018). Predicting alignment risk to prevent localization failure. In Proceedings of international conference on robotics and automation (pp. 1003–1010).

  • Oertel, A., Cieslewski, T., & Scaramuzza, D. (2020). Augmenting visual place recognition with structural cues. IEEE Robotics and Automation Letters, 5(4), 5534–5541.


  • Olson, E. (2011). Apriltag: A robust and flexible visual fiducial system. In Proceedings of the IEEE international conference on robotics and automation (pp. 3400–3407).

  • Olson, E., Walter, M. R., Teller, S. J., & Leonard, J. J. (2005). Single-cluster spectral graph partitioning for robotics applications. In Proceedings of the robotics: Science and systems conference (pp. 265–272).

  • Olson, E., & Agarwal, P. (2013). Inference on networks of mixtures for robust robot mapping. The International Journal of Robotics Research, 32(7), 826–840.


  • Pan, Y., Xiao, P., He, Y., Shao, Z., & Li, Z. (2021). Mulls: Versatile lidar slam via multi-metric linear least square. In Proceedings of international conference on robotics and automation (pp. 11633–11640).

  • Pan, Y., Xu, X., Li, W., Cui, Y., Wang, Y., & Xiong, R. (2021). Coral: Colored structural representation for bi-modal place recognition. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 2084–2091).

  • Paul, R., & Newman, P. (2010). Fab-map 3d: Topological mapping with spatial and visual appearance. In Proceedings of international conference on robotics and automation (pp. 2649–2656).

  • Peltomäki, J., Alijani, F., Puura, J., Huttunen, H., Rahtu, E., & Kämäräinen, J.-K. (2021). Evaluation of long-term lidar place recognition. In 2021 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 4487–4492). IEEE.

  • Pepperell, E., Corke, P. I., & Milford, M. J. (2014). All-environment visual place recognition with smart. In Proceedings of IEEE International Conference on Robotics and Automation (pp. 1612–1618). IEEE.

  • Pitropov, M., Garcia, D. E., Rebello, J., Smart, M., Wang, C., Czarnecki, K., & Waslander, S. (2021). Canadian adverse driving conditions dataset. International Journal of Robotics Research, 40(4–5), 681–690.


  • Pomerleau, F., Colas, F., Siegwart, R., et al. (2015). A review of point cloud registration algorithms for mobile robotics. Foundations and Trends® in Robotics, 4(1), 1–104.


  • Pramatarov, G., De Martini, D., Gadd, M., & Newman, P. (2022). Boxgraph: Semantic place recognition and pose estimation from 3d lidar. In 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 7004–7011). IEEE.

  • Pretto, A., Aravecchia, S., Burgard, W., Chebrolu, N., Dornhege, C., Falck, T., Fleckenstein, F., Fontenla, A., Imperoli, M., Khanna, R., et al. (2020). Building an aerial-ground robotics system for precision farming: An adaptable solution. IEEE Robotics and Automation Magazine, 28(3), 29–49.


  • Qi, C. R., Su, H., Mo, K., & Guibas, L. J. (2017). Pointnet: Deep learning on point sets for 3d classification and segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 652–660).

  • Qiao, Z., Yu, Z., Jiang, B., Yin, H., & Shen, S. (2023). G3reg: Pyramid graph-based global registration using gaussian ellipsoid model. arXiv preprint arXiv:2308.11573.

  • Ramezani, M., Wang, Y., Camurri, M., Wisth, D., Mattamala, M., & Fallon, M. (2020). The newer college dataset: Handheld lidar, inertial and vision with ground truth. In 2020 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 4353–4360). IEEE.

  • Ratz, S., Dymczyk, M., Siegwart, R., & Dubé, R. (2020). Oneshot global localization: Instant lidar-visual pose estimation. In Proceedings of international conference on robotics and automation (pp. 5415–5421).

  • Röhling, T., Mack, J., & Schulz, D. (2015). A fast histogram-based similarity measure for detecting loop closures in 3-d lidar data. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 736–741).

  • Rosen, D. M., Doherty, K. J., Espinoza, A. T., & Leonard, J. J. (2021). Advances in inference and representation for simultaneous localization and mapping. Annual Review of Control, Robotics, and Autonomous Systems, 4, 215–242.


  • Rublee, E., Rabaud, V., Konolige, K., & Bradski, G. (2011). Orb: An efficient alternative to sift or surf. In 2011 International conference on computer vision (pp. 2564–2571).

  • Rusu, R. B., Blodow, N., & Beetz, M. (2009). Fast point feature histograms (fpfh) for 3d registration. In Proceedings of international conference on robotics and automation (pp. 3212–3217). Kobe, Japan.

  • Saarinen, J., Andreasson, H., Stoyanov, T., & Lilienthal, A. J. (2013). Normal distributions transform Monte-Carlo localization (NDT-MCL). In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 382–389).

  • Salti, S., Tombari, F., & Di Stefano, L. (2014). Shot: Unique signatures of histograms for surface and texture description. Computer Vision and Image Understanding, 125, 251–264.


  • Schaupp, L., Bürki, M., Dubé, R., Siegwart, R., & Cadena, C. (2019). Oreos: Oriented recognition of 3d point clouds in outdoor scenarios. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 3255–3261).

  • Segal, A., Haehnel, D., & Thrun, S. (2009). Generalized-icp. In Proceedings of the robotics: Science and systems conference (Vol. 2, p. 435). Seattle, WA, USA.

  • Shan, T., Englot, B., Duarte, F., Ratti, C. & Rus, D. (2021). Robust place recognition using an imaging lidar. In Proceedings of international conference on robotics and automation (pp. 5469–5475).

  • Shi, S., Guo, C., Jiang, L., Wang, Z., Shi, J., Wang, X., & Li, H. (2020). Pv-rcnn: Point-voxel feature set abstraction for 3d object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 10529–10538).

  • Shi, C., Chen, X., Huang, K., Xiao, J., Lu, H., & Stachniss, C. (2021). Keypoint matching for point cloud registration using multiplex dynamic graph attention networks. IEEE Robotics and Automation Letters, 6, 8221–8228.


  • Siegwart, R., Nourbakhsh, I. R., & Scaramuzza, D. (2011). Introduction to Autonomous Mobile Robots. Cambridge: MIT Press.


  • Siva, S., Nahman, Z., & Zhang, H. (2020). Voxel-based representation learning for place recognition based on 3d point clouds. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 8351–8357).

  • Arun, K. S., Huang, T. S., & Blostein, S. D. (1987). Least-squares fitting of two 3-d point sets. IEEE Transactions on Pattern Analysis and Machine Intelligence, 9(5), 698–700.


  • Stachniss, C., & Burgard, W. (2005). Mobile robot mapping and localization in non-static environments. In Proceedings of the AAAI conference on artificial intelligence (pp. 1324–1329).

  • Stachniss, C., Grisetti, G., & Burgard, W. (2005). Information gain-based exploration using rao-blackwellized particle filters. In Proceedings of the robotics: Science and systems conference (Vol. 2, pp. 65–72).

  • Stachniss, C., Leonard, J. J., & Thrun, S. (2016). Simultaneous localization and mapping. In Springer handbook of robotics (pp. 1153–1176). Springer.

  • Steder, B., Grisetti, G., & Burgard, W. (2010). Robust place recognition for 3d range data based on point features. In Proceedings of international conference on robotics and automation (pp. 1400–1405).

  • Steder, B., Rusu, R. B., Konolige, K., & Burgard, W. (2010). Narf: 3d range image features for object recognition. In IROS 2010 workshop: Defining and solving realistic perception problems in personal robotics (Vol. 44, p. 2).

  • Sun, L., Adolfsson, D., Magnusson, M., Andreasson, H., Posner, I., & Duckett, T. (2020). Localising faster: Efficient and precise lidar-based robot localisation in large-scale environments. In Proceedings of international conference on robotics and automation (pp. 4386–4392).

  • Sünderhauf, N., & Protzel, P. (2012). Switchable constraints for robust pose graph slam. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1879–1884).

  • Tang, T. Y., De Martini, D., & Newman, P. (2021). Get to the point: Learning lidar place recognition and metric localisation using overhead imagery. In Proceedings of the robotics: Science and systems conference.

  • Tang, L., Wang, Y., Ding, X., Yin, H., Xiong, R., & Huang, S. (2019). Topological local-metric framework for mobile robots navigation: A long term perspective. Autonomous Robots, 43(1), 197–211.


  • Thomas, H., Qi, C. R., Deschaud, J.-E., Marcotegui, B., Goulette, F., & Guibas, L. J. (2019). Kpconv: Flexible and deformable convolution for point clouds. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6411–6420).

  • Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge: MIT Press.


  • Tian, Y., Chang, Y., Arias, F. H., Nieto-Granda, C., How, J. P., & Carlone, L. (2022). Kimera-multi: Robust, distributed, dense metric-semantic slam for multi-robot systems. IEEE Transactions on Robotics, 38, 2022–2038.


  • Xu, T.-X., Guo, Y.-C., Li, Z., Yu, G., Lai, Y.-K., & Zhang, S.-H. (2023). Transloc3d: Point cloud based large-scale place recognition using adaptive receptive fields. Communications in Information and Systems, 23(1), 57–83.


  • Tinchev, G., Nobili, S., & Fallon, M. (2018). Seeing the wood for the trees: Reliable localization in urban and natural environments. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 8239–8246).

  • Tinchev, G., Penate-Sanchez, A., & Fallon, M. (2019). Learning to see the wood for the trees: Deep laser localization in urban and natural environments on a CPU. IEEE Robotics and Automation Letters, 4(2), 1327–1334.


  • Tinchev, G., Penate-Sanchez, A., & Fallon, M. (2021). Skd: Keypoint detection for point clouds using saliency estimation. IEEE Robotics and Automation Letters, 6(2), 3785–3792.


  • Tipaldi, G. D., & Arras, K. O. (2010). Flirt-interest regions for 2d range data. In Proceedings of international conference on robotics and automation (pp. 3616–3622).

  • Toft, C., Maddern, W., Torii, A., Hammarstrand, L., Stenborg, E., Safari, D., Okutomi, M., Pollefeys, M., Sivic, J., Pajdla, T., et al. (2020). Long-term visual localization revisited. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(4), 2074–2088.


  • Tolias, G., Avrithis, Y., & Jégou, H. (2013). To aggregate or not to aggregate: Selective match kernels for image search. In Proceedings of the IEEE international conference on computer vision (pp. 1401–1408).

  • Tombari, F., Salti, S., & Di Stefano, L. (2013). Performance evaluation of 3d keypoint detectors. International Journal of Computer Vision, 102(1), 198–220.


  • Usman, M., Khan, A. M., Ali, A., Yaqub, S., Zuhaib, K. M., Lee, J. Y., & Han, C.-S. (2019). An extensive approach to features detection and description for 2-d range data using active b-splines. IEEE Robotics and Automation Letters, 4(3), 2934–2941.


  • Uy, M. A., & Lee, G. H. (2018). Pointnetvlad: Deep point cloud based retrieval for large-scale place recognition. In Proceedings of IEEE conference on computer vision and pattern recognition (pp. 4470–4479).

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30, 5998–6008.


  • Vidanapathirana, K., Moghadam, P., Harwood, B., Zhao, M., Sridharan, S., & Fookes, C. (2021). Locus: Lidar-based place recognition using spatiotemporal higher-order pooling. In Proceedings of international conference on robotics and automation (pp. 5075–5081).

  • Vidanapathirana, K., Ramezani, M., Moghadam, P., Sridharan, S., & Fookes, C. (2022). Logg3d-net: Locally guided global descriptor learning for 3d place recognition. In Proceedings of international conference on robotics and automation (pp. 2215–2221).

  • Vizzo, I., Guadagnino, T., Mersch, B., Wiesmann, L., Behley, J., & Stachniss, C. (2023). Kiss-icp: In defense of point-to-point icp-simple, accurate, and robust registration if done the right way. IEEE Robotics and Automation Letters, 8(2), 1029–1036.


  • Vysotska, O., & Stachniss, C. (2019). Effective visual place recognition using multi-sequence maps. IEEE Robotics and Automation Letters, 4(2), 1730–1736.


  • Wang, Y., & Solomon, J. M. (2019). Deep closest point: Learning representations for point cloud registration. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3523–3532).

  • Wang, X., Marcotte, R. J., & Olson, E. (2019). Glfp: Global localization from a floor plan. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 1627–1632).

  • Wang, Y., Sun, Z., Xu, C.-Z., Sarma, S. E., Yang, J., & Kong, H. (2020). Lidar iris for loop-closure detection. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 5769–5775).


  • Wang, H., Wang, C., & Xie, L. (2020). Intensity scan context: Coding intensity and geometry relations for loop closure detection. In Proceedings of international conference on robotics and automation (pp. 2095–2101).

  • Wang, W., Wang, B., Zhao, P., Chen, C., Clark, R., Yang, B., Markham, A., & Trigoni, N. (2021). Pointloc: Deep pose regressor for lidar point cloud localization. IEEE Sensors Journal, 22(1), 959–968.


  • Wiesmann, L., Marcuzzi, R., Stachniss, C., & Behley, J. (2022). Retriever: Point cloud retrieval in compressed 3d maps. In Proceedings of international conference on robotics and automation (pp. 10925–10932).

  • Wiesmann, L., Milioto, A., Chen, X., Stachniss, C., & Behley, J. (2021). Deep Compression for Dense Point Cloud Maps. IEEE Robotics and Automation Letters, 6, 2060–2067.


  • Wiesmann, L., Nunes, L., Behley, J., & Stachniss, C. (2022). Kppr: Exploiting momentum contrast for point cloud-based place recognition. IEEE Robotics and Automation Letters, 8(2), 592–599.


  • Wilbers, D., Rumberg, L., & Stachniss, C. (2019). Approximating marginalization with sparse global priors for sliding window slam-graphs. In Proceedings of the IEEE international conference on robotics and automation (pp. 25–31).

  • Wolcott, R. W., & Eustice, R. M. (2015). Fast lidar localization using multiresolution Gaussian mixture maps. In Proceedings of international conference on robotics and automation (pp. 2814–2821).

  • Wurm, K. M., Hornung, A., Bennewitz, M., Stachniss, C., & Burgard, W. (2010). Octomap: A probabilistic, flexible, and compact 3d map representation for robotic systems. In ICRA 2010 workshop: Best practice in 3D perception and modeling for mobile manipulation (Vol. 2).

  • Xia, Y., Shi, L., Ding, Z., Henriques, J., & Cremers, D. (2023). Text2loc: 3d point cloud localization from natural language. arXiv preprint arXiv:2311.15977.

  • Xia, Y., Xu, Y., Li, S., Wang, R., Du, J., Cremers, D., & Stilla, U. (2021). Soe-net: A self-attention and orientation encoding network for point cloud based place recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 11348–11357).

  • Xie, Y., Zhang, Y., Chen, L., Cheng, H., Tu, W., Cao, D., & Li, Q. (2021). Rdc-slam: A real-time distributed cooperative slam system based on 3d lidar. IEEE Transactions on Intelligent Transportation Systems, 23, 14721–14730.


  • Xu, X., Lu, S., Wu, J., Lu, H., Zhu, Q., Liao, Y., Xiong, R., & Wang, Y. (2023). Ring++: Roto-translation-invariant gram for global localization on a sparse scan map. IEEE Transactions on Robotics, 39, 4616–4635.


  • Xuecheng, X., Yin, H., Chen, Z., Li, Y., Wang, Y., & Xiong, R. (2021). Disco: Differentiable scan context with orientation. IEEE Robotics and Automation Letters, 6(2), 2791–2798.


  • Xu, H., Zhang, Y., Zhou, B., Wang, L., Yao, X., Meng, G., & Shen, S. (2022). Omni-swarm: A decentralized omnidirectional visual-inertial-uwb state estimation system for aerial swarms. IEEE Transactions on Robotics, 38, 3374–3394.


  • Yan, F., Vysotska, O., & Stachniss, C. (2019). Global localization on openstreetmap using 4-bit semantic descriptors. In Proceedings of the 4th European conference on mobile robots (pp. 1–7).

  • Yang, J., Li, H., & Jia, Y. (2013). Go-icp: Solving 3d registration efficiently and globally optimally. In Proceedings of the IEEE international conference on computer vision (pp. 1457–1464). Sydney, NSW, Australia.

  • Yang, H., Antonante, P., Tzoumas, V., & Carlone, L. (2020). Graduated non-convexity for robust spatial perception: From non-minimal solvers to global outlier rejection. IEEE Robotics and Automation Letters, 5(2), 1127–1134.


  • Yang, H., Shi, J., & Carlone, L. (2021). Teaser: Fast and certifiable point cloud registration. IEEE Transactions on Robotics, 37(2), 314–333.


  • Yew, Z. J., & Lee, G. H. (2018). 3dfeat-net: Weakly supervised local 3d features for point cloud registration. In Proceedings of the European conference on computer vision (pp. 607–623).

  • Yew, Z. J., & Lee, G. H. (2022). Regtr: End-to-end point cloud correspondences with transformers. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6677–6686).

  • Yin, H., Ding, X., Tang, L., Wang, Y., & Xiong, R. (2017). Efficient 3d lidar based loop closing using deep neural network. In Proceedings of IEEE international conference on robotics and biomimetics (pp. 481–486).

  • Yin, H., Tang, L., Ding, X., Wang, Y., & Xiong, R. (2018). Locnet: Global localization in 3d point clouds for mobile vehicles. In Proceedings of the IEEE intelligent vehicles symposium (pp. 728–733).

  • Yin, H., Tang, L., Ding, X., Wang, Y., & Xiong, R. (2019). A failure detection method for 3d lidar based localization. In Proceedings of the Chinese automation congress (pp. 4559–4563).

  • Yin, P., Yuan, S., Cao, H., Ji, X., Zhang, S., & Xie, L. (2023). Segregator: Global point cloud registration with semantic and geometric cues. In 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).

  • Yin, P., Zhao, S., Cisneros, I., Abuduweili, A., Huang, G., Milford, M., et al. (2022). General place recognition survey: Towards the real-world autonomy age. arXiv preprint arXiv:2209.04497.

  • Yin, P., Zhao, S., Ge, R., Cisneros, I., Fu, R., Zhang, J., Choset, H., & Scherer, S. (2022). Alita: A large-scale incremental dataset for long-term autonomy. arXiv preprint arXiv:2205.10737.

  • Yin, H., Lin, Z., & Yeoh, J. K. W. (2023). Semantic localization on BIM-generated maps using a 3D LiDAR sensor. Automation in Construction, 146, 104641.


  • Yin, H., Wang, Y., Ding, X., Tang, L., Huang, S., & Xiong, R. (2019). 3d lidar-based global localization using Siamese neural network. IEEE Transactions on Intelligent Transportation Systems, 21(4), 1380–1392.


  • Yin, P., Wang, F., Egorov, A., Hou, J., Jia, Z., & Han, J. (2022). Fast sequence-matching enhanced viewpoint-invariant 3-d place recognition. IEEE Transactions on Industrial Electronics, 69(2), 2127–2135.


  • Yin, H., Wang, Y., Tang, L., Ding, X., Huang, S., & Xiong, R. (2020). 3d lidar map compression for efficient localization on resource constrained vehicles. IEEE Transactions on Intelligent Transportation Systems, 22(2), 837–852.


  • Yin, H., Wang, Y., Wu, J., & Xiong, R. (2022). Radar style transfer for metric robot localisation on lidar maps. CAAI Transactions on Intelligence Technology, 8, 139–148.


  • Yin, H., Xu, X., Wang, Y., & Xiong, R. (2021). Radar-to-lidar: Heterogeneous place recognition via joint learning. Frontiers in Robotics and AI, 8, 661199.


  • Yuan, W., Eckart, B., Kim, K., Jampani, V., Fox, D., & Kautz, J. (2020). Deepgmr: Learning latent gaussian mixture models for registration. In Proceedings of the IEEE conference on computer vision (pp. 733–750). Springer.

  • Yuan, C., Lin, J., Zou, Z., Hong, X., & Zhang, F. (2023). Std: Stable triangle descriptor for 3d place recognition. In 2023 IEEE international conference on robotics and automation (ICRA) (pp. 1897–1903). IEEE.

  • Yue, Y., Zhao, C., Wang, Y., Yang, Y., & Wang, D. (2022). Aerial-ground robots collaborative 3d mapping in gnss-denied environments. In Proceedings of international conference on robotics and automation (pp. 10041–10047).

  • Zeng, A., Song, S., Nießner, M., Fisher, M., Xiao, J., & Funkhouser, T. (2017). 3dmatch: Learning local geometric descriptors from rgb-d reconstructions. In Proceedings of the IEEE conference on computer vision and pattern recognition, (pp. 1802–1811).

  • Zhang, J., & Singh, S. (2014). Loam: Lidar odometry and mapping in real-time. In Proceedings of the robotics: Science and systems conference (Vol. 2, pp. 1–9). Berkeley, CA.

  • Zhang, W., & Xiao, C. (2019). Pcan: 3d attention map learning using contextual information for point cloud based retrieval. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 12436–12445).

  • Zhang, Z. (1997). Parameter estimation techniques: A tutorial with application to conic fitting. Image and Vision Computing, 15(1), 59–76.


  • Zhao, S., Zhang, H., Wang, P., Nogueira, L., & Scherer, S. (2021). Super odometry: Imu-centric lidar-visual-inertial estimator for challenging environments. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 8729–8736).

  • Zheng, K. (2021). Ros navigation tuning guide. In Robot operating system (ROS) (pp. 197–226). Springer.

  • Zhong, S., Qi, Y., Chen, Z., Wu, J., Chen, H., & Liu, M. (2022). Dcl-slam: A distributed collaborative lidar slam framework for a robotic swarm. arXiv preprint arXiv:2210.11978.

  • Zhou, R., He, L., Zhang, H., Lin, X., & Guan, Y. (2022). Ndd: A 3d point cloud descriptor based on normal distribution for loop closure detection. In 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 1328–1335). IEEE.

  • Zhou, Q.-Y., Park, J., & Koltun, V. (2016). Fast global registration. In Proceedings of the European conference on computer vision (pp. 766–782). Amsterdam, The Netherlands: Springer.

  • Zhou, Z., Zhao, C., Adolfsson, D., Su, S., Gao, Y., Duckett, T., & Sun, L. (2021). Ndt-transformer: Large-scale 3d point cloud localisation using the normal distribution transform representation. In Proceedings of international conference on robotics and automation (pp. 5654–5660).

  • Zhu, M., Ghaffari, M., & Peng, H. (2022). Correspondence-free point cloud registration with so(3)-equivariant implicit shape representations. In Conference on robot learning (pp. 1412–1422). PMLR.

  • Zhu, Y., Ma, Y., Chen, L., Liu, C., Ye, M., & Li, L. (2020). Gosmatch: Graph-of-semantics matching for detecting loop closures in 3d lidar data. In Proceedings of the IEEE/RSJ international conference on intelligent robots and systems (pp. 5151–5157).

  • Zimmerman, N., Wiesmann, L., Guadagnino, T., Läbe, T., Behley, J., & Stachniss, C. (2022). Robust onboard localization in changing environments exploiting text spotting. In 2022 IEEE/RSJ international conference on intelligent robots and systems (IROS) (pp. 917–924). IEEE.

  • Zimmerman, N., Guadagnino, T., Chen, X., Behley, J., & Stachniss, C. (2023). Long-term localization using semantic cues in floor plan maps. IEEE Robotics and Automation Letters, 8(1), 176–183.



Acknowledgements

We would like to thank Dr. Xiaqing Ding for her constructive suggestions.

Author information


Corresponding author

Correspondence to Yue Wang.

Additional information

Communicated by Kong Hui.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work was supported in part by the National Nature Science Foundation of China under Grant 62373322, in part by the HKUST-DJI Joint Innovation Laboratory, and in part by the Hong Kong Center for Construction Robotics (InnoHK center supported by Hong Kong ITC).

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Yin, H., Xu, X., Lu, S. et al. A Survey on Global LiDAR Localization: Challenges, Advances and Open Problems. Int J Comput Vis (2024). https://doi.org/10.1007/s11263-024-02019-5



Keywords

Navigation