
A Framework for Drivable Area Detection Via Point Cloud Double Projection on Rough Roads

  • Short Paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

Drivable area detection is one of the essential functions of autonomous vehicles. However, owing to the complexity and diversity of unknown environments, it remains challenging, especially on rough roads. In this paper, we propose a systematic framework for drivable area detection that comprises ground segmentation and road labelling. For each scan, the point cloud is projected onto two different planes simultaneously, generating an elevation map and a range map. Unlike existing methods based on mathematical models, we accomplish ground segmentation using image processing techniques. Road points are then filtered out of the ground points and used to generate the road area with the assistance of the range map, while a two-step search method creates the reachable area from the elevation map. To make drivable area detection robust, Bayesian decision theory is introduced in the final step to fuse the road area and the reachable area. Because we explicitly avoid complex three-dimensional computation, our method offers high real-time capability from both empirical and theoretical perspectives, and experimental results show promising detection performance in various traffic situations.
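
As a concrete illustration of the double projection and the fusion step described above, the sketch below builds an elevation map and a range map from a single LiDAR scan and then fuses two per-cell drivability estimates. This is a minimal sketch only: the grid resolution, detection ranges, beam count, vertical field of view, and the log-odds fusion rule are all illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def double_projection(points, grid_res=0.2, x_range=(-40.0, 40.0),
                          y_range=(-40.0, 40.0), n_beams=64, n_cols=1800):
        # points: (N, 3) array of x, y, z coordinates in the vehicle frame.
        x, y, z = points[:, 0], points[:, 1], points[:, 2]

        # Elevation map: bird's-eye grid keeping the maximum height per cell.
        nx = int((x_range[1] - x_range[0]) / grid_res)
        ny = int((y_range[1] - y_range[0]) / grid_res)
        elev = np.full((nx, ny), -np.inf)
        ix = np.floor((x - x_range[0]) / grid_res).astype(int)
        iy = np.floor((y - y_range[0]) / grid_res).astype(int)
        ok = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
        np.maximum.at(elev, (ix[ok], iy[ok]), z[ok])

        # Range map: one row per laser beam, one column per azimuth bin,
        # keeping the nearest return per pixel.
        r = np.linalg.norm(points, axis=1)
        az = np.arctan2(y, x)
        el = np.arcsin(z / np.maximum(r, 1e-6))
        col = (((az + np.pi) / (2.0 * np.pi)) * n_cols).astype(int) % n_cols
        fov_lo, fov_hi = np.radians(-25.0), np.radians(3.0)  # assumed vertical FoV
        row = ((el - fov_lo) / (fov_hi - fov_lo) * (n_beams - 1)).astype(int)
        rng = np.full((n_beams, n_cols), np.inf)
        keep = (row >= 0) & (row < n_beams)
        np.minimum.at(rng, (row[keep], col[keep]), r[keep])
        return elev, rng

    def fuse_log_odds(p_road, p_reach, prior=0.5):
        # Per-cell log-odds fusion of the road-area and reachable-area
        # estimates under a conditional-independence assumption -- a common
        # occupancy-grid simplification; the paper's exact Bayesian decision
        # rule may differ.
        def logit(p):
            p = np.clip(p, 1e-6, 1.0 - 1e-6)
            return np.log(p / (1.0 - p))
        l = logit(p_road) + logit(p_reach) - logit(prior)
        return 1.0 / (1.0 + np.exp(-l))

The ground segmentation and two-step search themselves are not reproduced here; in practice the elevation and range maps would be thresholded and labelled with standard image-processing operations before the fusion step.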



Acknowledgements

This work was supported by the National Key Research and Development Program of China (2018YFD0700602, 2017YFD0700303, and 2016YFD0701401), the Youth Innovation Promotion Association of the Chinese Academy of Sciences (Grant No. 2017488), the Independent Research Project of the Research Institute of Robotics and Intelligent Manufacturing Innovation, Chinese Academy of Sciences (Grant No. C2018005), the Equipment Pre-research Program (Grant No. 301060603), and the Technological Innovation Project for New Energy and Intelligent Networked Automobile Industry of Anhui Province.

Availability of Data and Material

The authors declare that all data and materials support the claims in the manuscript and comply with field standards. The data involved in our research include a public dataset (SemanticKITTI) and a private dataset. The public dataset can be downloaded from the official SemanticKITTI website. Our private dataset is currently not available.

Code Availability

The custom code is currently not available.

Author information


Contributions

Conceptualization: Fengyu Xu, Zhiling Wang; Methodology: Fengyu Xu, Linglong Lin; Formal analysis and investigation: Fengyu Xu, Linglong Lin; Writing - original draft preparation: Fengyu Xu, Linglong Lin; Writing - review and editing: Fengyu Xu, Linglong Lin, Zhiling Wang; Funding acquisition: Huawei Liang, Zhiling Wang; Resources: Huawei Liang; Supervision: Huawei Liang, Zhiling Wang.

Corresponding authors

Correspondence to Zhiling Wang or Linglong Lin.

Ethics declarations

Ethics Approval

Not applicable.

Consent to Participate

Not applicable.

Consent for Publication

Not applicable.

Conflicts of Interest/Competing Interests

The authors have no conflicts of interest that are relevant to the content of this article.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

ESM 1

(DOCX 353 kb)


About this article


Cite this article

Xu, F., Liang, H., Wang, Z. et al. A Framework for Drivable Area Detection Via Point Cloud Double Projection on Rough Roads. J Intell Robot Syst 102, 45 (2021). https://doi.org/10.1007/s10846-021-01381-7


Keywords

Navigation