Efficient Online Segmentation for Sparse 3D Laser Scans

Original Article

Abstract

The ability to extract individual objects from a scene is key for a large number of autonomous navigation systems such as mobile robots or autonomous cars. Systems navigating in dynamic environments need to be aware of objects that may change or move. In most perception pipelines, a pre-segmentation of the current image or laser scan into individual objects is the first processing step before further analysis is performed. In this paper, we present an effective method that first removes the ground from the scan and then segments the 3D data, represented as a range image, into different objects. A key focus of our work is fast execution at several hundred Hertz. Our implementation has low computational demands and can therefore run online on most mobile systems. We explicitly avoid the computation of the 3D point cloud and operate directly on the 2.5D range image, which enables fast segmentation of each 3D scan. Our approach furthermore handles sparse 3D data well, which is important for scanners such as the Velodyne VLP-16. We implemented our approach in C++ and ROS, thoroughly tested it using different 3D scanners, and will release the source code of our implementation. Our method operates at frame rates substantially higher than those of the sensors while using only a single core of a mobile CPU and producing high-quality segmentation results.
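
The abstract states that segmentation runs directly on the 2.5D range image rather than on a reconstructed 3D point cloud, but it does not spell out the neighbor criterion. The following C++ sketch only illustrates the overall structure of such a pass: a breadth-first connected-component labeling over the range image grid. The 4-neighborhood, the horizontal wrap-around, the image dimensions, and the range-difference threshold kMaxRangeDiff are illustrative assumptions, not the authors' actual test.

```cpp
#include <cmath>
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// Sketch of connected-component labeling directly on a 2.5D range image.
// Two grid neighbors are put in the same segment if their ranges are
// similar; kMaxRangeDiff is a hypothetical stand-in for the paper's
// actual neighbor criterion.
struct RangeImage {
  int rows = 0, cols = 0;    // e.g. 16 x 870 for a Velodyne VLP-16
  std::vector<float> range;  // row-major ranges in meters, 0 = no return
  float at(int r, int c) const { return range[r * cols + c]; }
};

std::vector<uint16_t> LabelSegments(const RangeImage& img,
                                    float kMaxRangeDiff = 0.5f) {
  std::vector<uint16_t> label(img.range.size(), 0);
  uint16_t next_label = 1;
  const int dr[4] = {-1, 1, 0, 0};
  const int dc[4] = {0, 0, -1, 1};
  for (int r = 0; r < img.rows; ++r) {
    for (int c = 0; c < img.cols; ++c) {
      if (label[r * img.cols + c] != 0 || img.at(r, c) <= 0.f) continue;
      // Grow a new segment with a breadth-first search over the grid.
      std::queue<std::pair<int, int>> frontier;
      frontier.push({r, c});
      label[r * img.cols + c] = next_label;
      while (!frontier.empty()) {
        auto [cr, cc] = frontier.front();
        frontier.pop();
        for (int k = 0; k < 4; ++k) {
          int nr = cr + dr[k];
          // The image of a rotating scanner is cyclic in the column index.
          int nc = (cc + dc[k] + img.cols) % img.cols;
          if (nr < 0 || nr >= img.rows) continue;
          int idx = nr * img.cols + nc;
          if (label[idx] != 0 || img.range[idx] <= 0.f) continue;
          if (std::abs(img.range[idx] - img.at(cr, cc)) < kMaxRangeDiff) {
            label[idx] = next_label;
            frontier.push({nr, nc});
          }
        }
      }
      ++next_label;
    }
  }
  return label;
}
```

A single sweep like this touches each pixel a constant number of times, which is consistent with the claimed runtimes of several hundred Hertz on one CPU core; whatever neighbor criterion the paper actually uses would replace the plain range-difference check inside the loop.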

Keywords

Segmentation · 3D laser · Online · Range image · Sparse data · Point cloud

Summary

The fast and fully automatic interpretation of a scene plays a central role in the operation of autonomous cars and mobile robots and is required in nearly all dynamic environments. The first step of a typical perception system for scene interpretation is often the segmentation of the scene into its individual components. In this work, we present an efficient segmentation method for 3D laser scanners that runs at several hundred Hertz on commodity CPUs while delivering high-quality results. We achieve this fast processing by avoiding computations on 3D point clouds and instead operating directly on 2.5D range images. Besides the fast computation, this also allows us to handle low-resolution laser scans well. We implemented our approach in C++ and ROS and evaluated it on various datasets. The results show that our method processes the laser data considerably faster than typical laser scanners produce it, while at the same time delivering a high-quality segmentation of the scene.
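
Both abstracts state that the ground is removed from the scan before segmentation, again directly on the range image, but neither describes the test. The sketch below is a minimal, hypothetical column-wise variant: it assumes row 0 holds the lowest laser beam, that the beam elevation angles are known, and that ground shows up as a chain of returns whose connecting slope stays near horizontal. The threshold kMaxSlopeRad is an illustrative value, not one from the paper.

```cpp
#include <cmath>
#include <vector>

// Hypothetical ground-removal sketch operating column-wise on a range
// image. Row 0 is assumed to be the lowest beam; elevation_rad holds the
// known elevation angle of each beam; 0 in the image means "no return".
void RemoveGround(std::vector<float>& range, int rows, int cols,
                  const std::vector<float>& elevation_rad,
                  float kMaxSlopeRad = 0.15f) {  // ~8.6 deg, illustrative
  std::vector<bool> ground(rows);
  for (int c = 0; c < cols; ++c) {
    ground.assign(rows, false);
    ground[0] = range[c] > 0.f;  // assume the lowest return hits the ground
    for (int r = 1; r < rows; ++r) {
      float r0 = range[(r - 1) * cols + c];
      float r1 = range[r * cols + c];
      if (!ground[r - 1] || r0 <= 0.f || r1 <= 0.f) continue;
      // Project the two consecutive returns into this column's vertical
      // plane and extend the ground chain while the slope stays shallow.
      float dx = r1 * std::cos(elevation_rad[r]) -
                 r0 * std::cos(elevation_rad[r - 1]);
      float dz = r1 * std::sin(elevation_rad[r]) -
                 r0 * std::sin(elevation_rad[r - 1]);
      ground[r] = std::atan2(std::abs(dz), std::abs(dx)) < kMaxSlopeRad;
    }
    for (int r = 0; r < rows; ++r)
      if (ground[r]) range[r * cols + c] = 0.f;  // erase ground pixels
  }
}
```

Erasing ground pixels before the labeling pass keeps objects that touch the road surface from being merged into one large ground component.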

Copyright information

© Deutsche Gesellschaft für Photogrammetrie, Fernerkundung und Geoinformation (DGPF) e.V. 2017

Authors and Affiliations

  1. Institute of Geodesy and Geoinformation, University of Bonn, Bonn, Germany