Place recognition and navigation of outdoor mobile robots based on random Forest learning with a 3D LiDAR

  • Regular paper
  • Published:
Journal of Intelligent & Robotic Systems

Abstract

Place recognition and loop detection play an important role in outdoor simultaneous localization and mapping (SLAM). In this paper, we present a place recognition and navigation method for outdoor mobile robots based on the random forest algorithm with three-dimensional (3D) laser point cloud data. The 3D point cloud-based place recognition employs a global feature extractor composed of well-designed geometric and statistical features, requiring no preprocessing, for the effective training and construction of the random forest classifier. The environment point cloud and node map are then fed into the classifier for the place recognition task. The place recognition method is subsequently applied to loop detection for mobile robots: the odometry pose nodes are first sorted according to their location and distance and fed into the random forest classifier for loop discrimination, and loop verification based on the overlap rate of the two point clouds is then performed to identify true loops. The loop detection method is combined with our previously proposed S4-SLAM to form the new S4-SLAM2 algorithm. Using the node maps constructed by S4-SLAM2, global re-localization in a given map is performed by combining the place recognition method with point cloud registration. The proposed method was verified by extensive evaluations on the KITTI dataset as well as in real-world outdoor environments. Loop detection achieved a recall of 82% at 100% precision. The S4-SLAM2 system also exhibited high localization and mapping accuracy, with a localization output rate of 10 Hz and an average localization drift below 1%.
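
The abstract outlines a three-step pipeline: hand-crafted geometric and statistical features are extracted from each 3D LiDAR scan, a random forest classifier decides whether a scan matches a known place, and loop candidates are finally verified through the overlap rate of their point clouds. As a rough illustration of these steps only, the Python sketch below uses scikit-learn's random forest together with a simple eigenvalue- and height-based descriptor; the specific features, the neighbour radius, and the overlap threshold are placeholder assumptions, not the descriptor or parameters used in the paper.

    # Minimal sketch, not the authors' implementation: illustrative global features,
    # a random forest place classifier, and an overlap-rate loop check.
    import numpy as np
    from scipy.spatial import cKDTree
    from sklearn.ensemble import RandomForestClassifier

    def global_features(points):
        """Simple geometric/statistical descriptor of an N x 3 point cloud (assumed features)."""
        centered = points - points.mean(axis=0)
        eigvals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
        l1, l2, l3 = eigvals / (eigvals.sum() + 1e-12)            # normalized shape eigenvalues
        extent = points.max(axis=0) - points.min(axis=0)          # bounding-box extent
        return np.array([l1, l2, l3, *extent,
                         points[:, 2].mean(), points[:, 2].std(),    # height statistics
                         np.linalg.norm(centered, axis=1).mean()])   # mean range from centroid

    def train_place_classifier(scans, place_labels):
        """Fit a random forest mapping each scan's descriptor to its place label."""
        X = np.stack([global_features(s) for s in scans])
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X, place_labels)
        return clf

    def overlap_rate(cloud_a, cloud_b, radius=0.3):
        """Fraction of points in cloud_a with a neighbour in cloud_b within `radius`;
        both clouds are assumed already registered into a common frame."""
        dists, _ = cKDTree(cloud_b).query(cloud_a, k=1)
        return float(np.mean(dists < radius))

    def is_true_loop(clf, query_scan, candidate_scan, candidate_label, min_overlap=0.6):
        """Accept a loop candidate only if the classifier agrees on the place
        and the point-cloud overlap rate exceeds a (placeholder) threshold."""
        same_place = clf.predict(global_features(query_scan)[None, :])[0] == candidate_label
        return bool(same_place) and overlap_rate(query_scan, candidate_scan) >= min_overlap

In the paper, candidate pairs come from odometry pose nodes sorted by location and distance before they reach the classifier; the overlap-rate verification then acts as the final filter that keeps precision at 100% while retaining most true loops.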


Code or Data Availability

Code and data generated or used during the study are available from the corresponding author upon reasonable request.


Author information

Contributions

Bo Zhou: Supervision, Conceptualization, Methodology, Discussion, Writing - original draft.

Yi He: Methodology, Data collection, Validation, Experimentation, Writing - original draft.

Wenchao Huang: Discussion, Comparison experimentation, Writing - original draft.

Xiang Yu: Discussion, Experimentation, Writing - review & editing.

Fang Fang: Supervision, Discussion.

Xiaomao Li: Supervision, Discussion, Resources, Writing - review & editing.

Corresponding author

Correspondence to Xiaomao Li.

Ethics declarations

Conflict of Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Ethical Approval

Not applicable.

Consent to Publish

Not applicable.

Consent to Participate

Not applicable.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Zhou, B., He, Y., Huang, W. et al. Place recognition and navigation of outdoor mobile robots based on random Forest learning with a 3D LiDAR. J Intell Robot Syst 104, 72 (2022). https://doi.org/10.1007/s10846-021-01545-5

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s10846-021-01545-5

Keywords

Navigation