Similar but Different: A Survey of Ground Segmentation and Traversability Estimation for Terrestrial Robots

Invited Paper, International Journal of Control, Automation and Systems

Abstract

With the increasing demand for mobile robots and autonomous vehicles, several approaches to long-term robot navigation have been proposed. Among these techniques, ground segmentation and traversability estimation play important roles in perception and path planning, respectively. Although the two techniques appear similar, their objectives differ. Ground segmentation divides data into ground and non-ground elements; it is therefore used as a preprocessing stage that extracts objects of interest by rejecting ground points. In contrast, traversability estimation identifies and characterizes the areas in which a robot can move safely. Nevertheless, some researchers use these terms without a clear distinction, leading to confusion between the two concepts. Therefore, in this study, we survey the related literature and clearly distinguish ground and traversable regions with respect to four aspects: a) the maneuverability of the robot platform, b) the position of the robot in its surroundings, c) the subset relation of negative obstacles, and d) the subset relation of deformable objects.
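To make the contrast concrete, the sketch below implements the kind of preprocessing described above: a plain RANSAC plane fit that splits a LiDAR-style point cloud into ground and non-ground points. This is a minimal illustration under assumed settings, not a method from the survey; the function name, thresholds, and synthetic scene are all invented for demonstration.

```python
# Minimal illustrative sketch (not the survey's method): RANSAC plane fitting
# to split a 3D point cloud into "ground" and "non-ground" points.
# All names and thresholds here are assumptions for demonstration only.
import numpy as np

def segment_ground_ransac(points, dist_thresh=0.2, n_iters=200, seed=None):
    """Return (ground_mask, plane) where plane = (normal, d) with n.p + d = 0."""
    rng = np.random.default_rng(seed)
    best_mask, best_plane, best_count = None, None, -1
    for _ in range(n_iters):
        # Sample three distinct points and fit the plane through them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (collinear) sample, retry
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Count inliers within the distance threshold of the candidate plane.
        dist = np.abs(points @ normal + d)
        mask = dist < dist_thresh
        if mask.sum() > best_count:
            best_count, best_mask, best_plane = mask.sum(), mask, (normal, d)
    return best_mask, best_plane

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic scene: a flat, slightly noisy ground plane plus a box obstacle.
    ground = np.c_[rng.uniform(-10, 10, (1000, 2)), rng.normal(0, 0.05, 1000)]
    box = rng.uniform([1, 1, 0], [3, 3, 2], (300, 3))
    cloud = np.vstack([ground, box])

    mask, (normal, d) = segment_ground_ransac(cloud, seed=0)
    print(f"ground points: {mask.sum()}, non-ground points: {(~mask).sum()}")
    # Downstream perception modules keep cloud[~mask] (objects of interest)
    # and discard cloud[mask] (the ground).
```

Note that this ground/non-ground split says nothing about safety: a traversability estimator would instead score regions of the retained ground (e.g., by slope, roughness, or platform maneuverability) to decide where the robot can actually move, which is precisely the distinction this survey draws.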

Author information

Corresponding author

Correspondence to Hyun Myung.

Ethics declarations

The authors declare that there are no competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Additional information

Publisher’s Note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This project was supported by a grant from Hanwha Aerospace as part of the development of autonomous driving technology for unstructured environments. The students were supported by the BK21 FOUR program.

Hyungtae Lim received his B.S. degree in mechanical engineering, and his M.S. and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2018, 2020, and 2023, respectively. He is currently a postdoctoral researcher in electrical engineering at KAIST. His research interests include SLAM (simultaneous localization and mapping), 3D registration, 3D perception, and long-term map management.

Minho Oh received his B.S. degree in convergence engineering from the Daegu Gyeongbuk Institute of Science and Technology (DGIST), Daegu, Korea, in 2020 and an M.S. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2022. He is currently pursuing a Ph.D. degree in electrical engineering at KAIST. His research interests include SLAM (simultaneous localization and mapping), 3D vision, field robotics, and autonomous driving.

Seungjae Lee received his B.S. degree in electrical engineering and mechanical engineering, and an M.S. degree in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2021 and 2023, respectively. He is currently pursuing a Ph.D. degree in electrical engineering at KAIST. His research interests include SLAM (simultaneous localization and mapping), robotics, and autonomous driving.

Seunguk Ahn received his B.S. degree in electronic engineering from Kyungpook National University, Daegu, Korea, in 2010 and an M.S. degree through the Robotics Program at the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2012. He was a researcher with LG Electronics, Seoul, from 2012 to 2018. Since 2018, he has been a researcher with Hanwha Aerospace. His current research interests include autonomous driving, perception, and SLAM (simultaneous localization and mapping).

Hyun Myung received his B.S., M.S., and Ph.D. degrees in electrical engineering from the Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 1992, 1994, and 1998, respectively. He was a Senior Researcher with the Electronics and Telecommunications Research Institute, Daejeon, from 1998 to 2002; a CTO and the Director of the Digital Contents Research Laboratory, Emersys Corporation, Daejeon, from 2002 to 2003; and a Principal Researcher with the Samsung Advanced Institute of Technology, Yongin, Korea, from 2003 to 2008. Since 2008, he has been a Professor with the Department of Civil and Environmental Engineering, KAIST, where he served as the Chief of the KAIST Robotics Program. Since 2019, he has been a Professor with the School of Electrical Engineering. His current research interests include autonomous robot navigation, SLAM (simultaneous localization and mapping), SHM (structural health monitoring), machine learning, AI, and swarm robots.

About this article

Cite this article

Lim, H., Oh, M., Lee, S. et al. Similar but Different: A Survey of Ground Segmentation and Traversability Estimation for Terrestrial Robots. Int. J. Control Autom. Syst. 22, 347–359 (2024). https://doi.org/10.1007/s12555-023-0826-4

Keywords

Navigation