Abstract
Accurate perception with rapid response is fundamental for any autonomous vehicle to navigate safely. Light detection and ranging (LiDAR) sensors provide an accurate estimate of the surroundings in the form of 3D point clouds. Autonomous vehicles use LiDAR to detect obstacles in their surroundings and feed this information to control units responsible for collision avoidance and motion planning. In this work, we propose an obstacle estimation (i.e., detection and tracking) approach for autonomous vehicles or robots that carry a three-dimensional (3D) LiDAR and an inertial measurement unit and navigate dynamic environments. The success of u-depth and restricted v-depth maps, computed from depth images, for obstacle estimation in the existing literature motivates us to explore the same techniques with LiDAR point clouds. The proposed system therefore computes u-depth and restricted v-depth representations from point clouds captured by the 3D LiDAR and estimates long-range obstacles from these multiple depth representations. Obstacle estimation based on these representations removes the need for some of the computationally expensive modules (e.g., ground-plane segmentation and 3D clustering) used in existing obstacle detection approaches for 3D LiDAR point clouds. We track all static and dynamic obstacles for as long as they remain on the frontal side of the vehicle and may obstruct its movement. We evaluate the proposed system on multiple open data sets of ground and aerial vehicles, on self-captured simulated data sets, and on data captured in real time with ground robots. The proposed method is faster than state-of-the-art (SoA) methods, while its performance is comparable to the SoA in terms of dynamic obstacle detection and estimation of obstacle states.
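To illustrate the u-depth idea the abstract refers to, the following is a minimal sketch, not the paper's implementation: it assumes the LiDAR point cloud has already been projected into a dense depth (range) image, and the function name, bin count, and maximum depth are illustrative choices. Each column of the u-depth map is a histogram of the depth values in the corresponding image column, so a vertical obstacle shows up as a strong peak at its depth bin.

```python
import numpy as np

def u_depth_map(depth, num_bins=64, max_depth=30.0):
    """Column-wise depth histogram (u-depth map).

    depth     : (H, W) array of metric depths; 0 marks invalid pixels.
    Returns an (num_bins, W) array where entry (b, u) counts how many
    pixels in image column u fall into depth bin b.
    """
    h, w = depth.shape
    # Quantize each depth value into a bin index in [0, num_bins - 1].
    bins = np.clip((depth / max_depth * num_bins).astype(int), 0, num_bins - 1)
    u_map = np.zeros((num_bins, w), dtype=np.int32)
    valid = depth > 0
    for u in range(w):
        col = bins[valid[:, u], u]
        # Accumulate counts per depth bin for this column.
        np.add.at(u_map[:, u], col, 1)
    return u_map

# Synthetic example: a flat background at 20 m with a thin vertical
# obstacle at 5 m in image column 10.
depth = np.full((48, 64), 20.0)
depth[:, 10] = 5.0
m = u_depth_map(depth)
```

In `m`, column 10 has all 48 of its counts concentrated in the bin corresponding to 5 m, which is the kind of peak a detector can threshold directly, without ground-plane segmentation or 3D clustering. A restricted v-depth map is the row-wise analogue, typically computed only over rows belonging to detected obstacle columns.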
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
About this article
Cite this article
Saha, A., Dhara, B.C. 3D LiDAR-based obstacle detection and tracking for autonomous navigation in dynamic environments. Int J Intell Robot Appl 8, 39–60 (2024). https://doi.org/10.1007/s41315-023-00302-1