3D Maps with Semantics
Understanding its environment is essential if a robotic system is to operate autonomously. This chapter proposes a complete technical cognitive system, consisting of a mobile robot, a 3D laser scanner, and a set of algorithms for semantic environment mapping. We define a semantic 3D map as follows:
A semantic 3D map for mobile robots is a metric map that contains, in addition to the geometric information of the 3D data points, assignments of these points to known structures or object classes.
Our approach uses 3D laser range and reflectance data acquired by an autonomous mobile robot to perceive 3D objects. Starting from an empty map, several 3D scans, acquired by the mobile robot Kurt3D in a stop-scan-go fashion, are merged into a global coordinate system by 6D SLAM as described in the previous parts of this book. The coarse structure of the resulting 3D scene is then interpreted by extracting and labeling planes, exploiting background knowledge stored in a semantic net. Afterwards, the 3D range and reflectance data are transformed into images by off-screen rendering. A cascade of classifiers, i.e., a linear decision tree, is used to detect and localize the objects. Finally, the semantic map is visualized using computer graphics. Figure 7.1 presents a system overview.
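To make the plane-extraction step concrete, the following is a minimal RANSAC-style sketch for finding the dominant plane in a point cloud. It is an illustration under generic assumptions, not the chapter's actual implementation; the function names, iteration count, and inlier tolerance are all illustrative.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through >= 3 points.
    Returns a unit normal n and offset d such that n . x = d on the plane."""
    centroid = points.mean(axis=0)
    # The singular vector for the smallest singular value is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

def ransac_plane(cloud, iters=200, tol=0.02, rng=None):
    """RANSAC: repeatedly fit a plane to 3 random points and keep the
    hypothesis with the most inliers (points within distance tol)."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(cloud), dtype=bool)
    for _ in range(iters):
        sample = cloud[rng.choice(len(cloud), 3, replace=False)]
        n, d = fit_plane(sample)
        inliers = np.abs(cloud @ n - d) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit the plane on all inliers for a stable final estimate.
    n, d = fit_plane(cloud[best_inliers])
    return n, d, best_inliers
```

Running this repeatedly, each time removing the inliers of the plane just found, yields the set of coarse planar patches (floor, walls, ceiling) that a semantic net can then label by their mutual orientation.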
Keywords: Point Cloud, Mobile Robot, Object Detection, Iterative Closest Point, Autonomous Mobile Robot