Range Sensing

Abstract

Range sensors are devices that capture the three-dimensional (3-D) structure of the world from the viewpoint of the sensor, usually by measuring the depth to the nearest surfaces. The measurements may be taken at a single point, across a scanning plane, or over a full image with a depth value at every pixel. The benefit of range data is that a robot can be relatively certain where the real-world surfaces are relative to the sensor, which allows it to more reliably find navigable routes, avoid obstacles, grasp objects, act on industrial parts, and so on.
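
As a concrete illustration of the last point, the sketch below back-projects a dense depth image into 3-D points expressed in the sensor frame using a standard pinhole camera model. It is a minimal example, not any particular sensor's driver; the intrinsics (fx, fy, cx, cy) and the synthetic depth map are made-up values.

    import numpy as np

    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project a depth image (metres along the optical axis) into an
        (N, 3) array of [X, Y, Z] points in the sensor frame; zero depth marks
        pixels with no return."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
        valid = depth > 0
        x = (u - cx) * depth / fx    # inverse of the projection u = fx*X/Z + cx
        y = (v - cy) * depth / fy    # inverse of the projection v = fy*Y/Z + cy
        return np.stack([x[valid], y[valid], depth[valid]], axis=-1)

    # Synthetic example: a flat wall 2 m in front of a 640x480 sensor with
    # made-up intrinsics (these numbers are assumptions, not from the chapter).
    points = depth_to_points(np.full((480, 640), 2.0),
                             fx=525.0, fy=525.0, cx=319.5, cy=239.5)
    print(points.shape)   # (307200, 3), i.e. one 3-D point per pixel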

This chapter introduces the main sensors for acquiring range data (Sect. 31.1 – stereo, laser triangulation, and ranging systems), the main representations for that data (point sets, triangulated surfaces, voxels), the main methods for extracting usable features from it (planes, lines, triangulated surfaces), how multiple observations of the scene, for example from a moving robot, can be registered (Sect. 31.3), and several indoor and outdoor robot applications where range data greatly simplifies the task (Sect. 31.4).
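
Registration of multiple range observations (Sect. 31.3) is commonly performed with the iterative closest point (ICP) algorithm: repeatedly match each point of one scan to its nearest neighbour in the other, solve in closed form for the rigid transform that best aligns the matches, and apply it. The following is a minimal NumPy/SciPy sketch of that idea under idealized assumptions (a reasonable initial alignment and substantial overlap); the function names are illustrative, not from the chapter, and practical systems add outlier rejection and alternative error metrics.

    import numpy as np
    from scipy.spatial import cKDTree   # assumption: SciPy is available

    def best_fit_transform(P, Q):
        """Closed-form least-squares rigid transform (R, t) mapping the points
        P (N x 3) onto their correspondences Q (N x 3), via SVD of the
        cross-covariance matrix."""
        cp, cq = P.mean(axis=0), Q.mean(axis=0)
        U, _, Vt = np.linalg.svd((P - cp).T @ (Q - cq))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:         # guard against a reflection solution
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        return R, cq - R @ cp

    def icp(source, target, iterations=20):
        """Minimal ICP loop: match each source point to its nearest target
        point, solve for the best rigid transform, apply it, and repeat."""
        tree = cKDTree(target)
        src = source.copy()
        R_total, t_total = np.eye(3), np.zeros(3)
        for _ in range(iterations):
            _, idx = tree.query(src)                     # nearest-neighbour matches
            R, t = best_fit_transform(src, target[idx])
            src = src @ R.T + t                          # move the source scan
            R_total, t_total = R @ R_total, R @ t_total + t
        return R_total, t_total        # maps the source scan into the target frame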

Abbreviations

1-D     one-dimensional
2-D     two-dimensional
2.5-D   two-and-a-half-dimensional
3-D     three-dimensional
6-D     six-dimensional
ASIC    application-specific integrated circuit
DARPA   Defense Advanced Research Projects Agency
DOF     degree of freedom
DSP     digital signal processor
EM      expectation maximization
FMCW    frequency-modulated continuous wave
FOV     field of view
FPGA    field-programmable gate array
GPS     global positioning system
GPU     graphics processing unit
ICP     iterative closest point
IMU     inertial measurement unit
IR      infrared
LADAR   laser radar
LED     light-emitting diode
LIDAR   light detection and ranging
LMS     laser measurement system
LOG     Laplacian of Gaussian
MLS     multilevel surface map
PCA     principal component analysis
PC      personal computer
PFH     point feature histogram
RANSAC  random sample consensus
SFM     structure from motion
SIFT    scale-invariant feature transform
SLAM    simultaneous localization and mapping
SNR     signal-to-noise ratio
SVD     singular value decomposition
TOF     time-of-flight


Copyright information

© Springer-Verlag Berlin Heidelberg 2016

Authors and Affiliations

  1. Google, Inc., Mountain View, USA
  2. Informatics VII – Robotics and Telematics, University of Würzburg, Würzburg, Germany
