Range Sensors

Reference work entry

Abstract

Range sensors are devices that capture the three-dimensional (3-D) structure of the world from the viewpoint of the sensor, usually measuring the depth to the nearest surfaces. These measurements could be at a single point, across a scanning plane, or a full image with depth measurements at every point. The benefit of this range data is that a robot can be reasonably certain where the real world is, relative to the sensor, thus allowing the robot to more reliably find navigable routes, avoid obstacles, grasp objects, act on industrial parts, etc.

This chapter introduces the main representations for range data (point sets, triangulated surfaces, voxels), the main methods for extracting usable features from the range data (planes, lines, triangulated surfaces), the main sensors for acquiring it (Sect. 22.1 – stereo and laser triangulation and ranging systems), how multiple observations of the scene, e.g., from a moving robot, can be registered (Sect. 22.2), and several indoor and outdoor robot applications where range data greatly simplifies the task (Sect. 22.3).
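Registration of multiple range observations (Sect. 22.2) ultimately reduces to estimating the rigid transform that aligns corresponding 3-D points. As an illustrative sketch only (assuming NumPy and already-known point correspondences; in the iterative closest-point algorithm the correspondences themselves are re-estimated on each iteration), the closed-form least-squares solution of Arun et al. [22.50] can be written as:

```python
import numpy as np

def align_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    via the SVD method of Arun et al. Assumes P and Q are (N, 3) arrays
    with one-to-one correspondences (row i of P matches row i of Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```

A full ICP loop would alternate this step with a nearest-neighbor search that reassigns correspondences, terminating when the alignment error stops decreasing.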

Keywords

Range image · Iterative closest point · Range sensor · Robotic application
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

Abbreviations

ASIC

application-specific integrated circuit

CP

closest point

DARPA

Defense Advanced Research Projects Agency

FPGA

field-programmable gate array

GPS

global positioning system

ICP

iterative closest-point algorithm

LADAR

laser radar or laser detection and ranging

LIDAR

light detection and ranging

MLS

multilevel surface map

RANSAC

random sample consensus

RGB

red, green, blue

SIFT

scale-invariant feature transformation

SLAM

simultaneous localization and mapping

References

  22.1. Videre Design LLC: www.videredesign.com, accessed Nov 12, 2007 (Videre Design, Menlo Park 2007)
  22.2. R. Hartley, A. Zisserman: Multiple View Geometry in Computer Vision (Cambridge Univ. Press, Cambridge 2000)
  22.3. S. Barnard, M. Fischler: Computational stereo, ACM Comput. Surv. 14(4), 553–572 (1982)
  22.4. K. Konolige: Small vision system: hardware and implementation, Proc. Int. Symp. Robot. Res. (Hayama 1997) pp. 111–116
  22.5. D. Scharstein, R. Szeliski, R. Zabih: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms, Int. J. Comput. Vis. 47(1/2/3), 7–42 (2002)
  22.6. D. Scharstein, R. Szeliski: Middlebury College Stereo Vision Research Page, vision.middlebury.edu/stereo, accessed Nov 12, 2007 (Middlebury College, Middlebury 2007)
  22.7. R. Zabih, J. Woodfill: Non-parametric local transforms for computing visual correspondence, Proc. Eur. Conf. on Computer Vision, Vol. 2 (Stockholm 1994) pp. 151–158
  22.8. O. Faugeras, B. Hotz, H. Mathieu, T. Viéville, Z. Zhang, P. Fua, E. Théron, L. Moll, G. Berry, J. Vuillemin, P. Bertin, C. Proy: Real time correlation based stereo: algorithm implementations and applications, Tech. Report RR-2013, INRIA (1993)
  22.9. M. Okutomi, T. Kanade: A multiple-baseline stereo, IEEE Trans. Patt. Anal. Mach. Intell. 15(4), 353–363 (1993)
  22.10. L. Matthies: Stereo vision for planetary rovers: stochastic modeling to near realtime implementation, Int. J. Comput. Vis. 8(1), 71–91 (1993)
  22.11. R. Bolles, J. Woodfill: Spatiotemporal consistency checking of passive range data, Proc. Int. Symp. on Robotics Research (Hidden Valley 1993)
  22.12. P. Fua: A parallel stereo algorithm that produces dense depth maps and preserves image features, Mach. Vis. Appl. 6(1), 35–49 (1993)
  22.13. H. Moravec: Visual mapping by a robot rover, Proc. Int. Joint Conf. on AI (IJCAI) (Tokyo 1979) pp. 598–600
  22.14. A. Adan, F. Molina, L. Morena: Disordered patterns projection for 3D motion recovering, Proc. Int. Conf. on 3D Data Processing, Visualization and Transmission (Thessaloniki 2004) pp. 262–269
  22.15. Point Grey Research Inc.: www.ptgrey.com, accessed Nov 12, 2007 (Point Grey Research, Vancouver 2007)
  22.16. C. Zach, A. Klaus, M. Hadwiger, K. Karner: Accurate dense stereo reconstruction using graphics hardware, Proc. EUROGRAPHICS (Granada 2003) pp. 227–234
  22.17. R. Yang, M. Pollefeys: Multi-resolution real-time stereo on commodity graphics hardware, Int. Conf. Computer Vision and Pattern Recognition, Vol. 1 (Madison 2003) pp. 211–217
  22.18. Focus Robotics Inc.: www.focusrobotics.com, accessed Nov 12, 2007 (Focus Robotics, Hudson 2007)
  22.19. TYZX Inc.: www.tyzx.com, accessed Nov 12, 2007 (TYZX, Menlo Park 2007)
  22.20. S.K. Nayar, Y. Nakagawa: Shape from focus, IEEE Trans. Patt. Anal. Mach. Intell. 16(8), 824–831 (1994)
  22.21. M. Pollefeys, R. Koch, L. Van Gool: Self-calibration and metric reconstruction in spite of varying and unknown intrinsic camera parameters, Int. J. Comput. Vis. 32(1), 7–25 (1999)
  22.22. A. Hertzmann, S.M. Seitz: Example-based photometric stereo: shape reconstruction with general, varying BRDFs, IEEE Trans. Patt. Anal. Mach. Intell. 27(8), 1254–1264 (2005)
  22.23. A. Lobay, D.A. Forsyth: Shape from texture without boundaries, Int. J. Comput. Vis. 67(1), 71–91 (2006)
  22.24. F. Blais: Review of 20 years of range sensor development, J. Electron. Imag. 13(1), 231–240 (2004)
  22.25. R. Baribeau, M. Rioux, G. Godin: Color reflectance modeling using a polychromatic laser range sensor, IEEE Trans. Patt. Anal. Mach. Intell. 14(2), 263–269 (1992)
  22.26. D. Anderson, H. Herman, A. Kelly: Experimental characterization of commercial flash ladar devices, Int. Conf. of Sensing and Technology (Palmerston North 2005) pp. 17–23
  22.27. R. Stettner, H. Bailey, S. Silverman: Three-dimensional flash ladar focal planes and time-dependent imaging, Technical Report (February 23, 2007): www.advancedscientificconcepts.com/images/Three Dimensional Flash Ladar Focal Planes-ISSSR Paper.pdf, accessed Nov 12, 2007 (Advanced Scientific Concepts, Santa Barbara 2007)
  22.28. J.J. LeMoigne, A.M. Waxman: Structured light patterns for robot mobility, Robot. Autom. 4, 541–548 (1988)
  22.29. R.B. Fisher, D.K. Naidu: A comparison of algorithms for subpixel peak detection. In: Image Technology, ed. by J. Sanz (Springer, Berlin, Heidelberg 1996)
  22.30. J.D. Foley, A. van Dam, S.K. Feiner, J.F. Hughes: Computer Graphics: Principles and Practice (Addison Wesley, Reading 1996)
  22.31. B. Curless, M. Levoy: A volumetric method for building complex models from range images, Proc. of Int. Conf. on Comput. Graph. and Inter. Tech. (SIGGRAPH) (New Orleans 1996) pp. 303–312
  22.32. A. Hoover, G. Jean-Baptiste, X. Jiang, P.J. Flynn, H. Bunke, D. Goldgof, K. Bowyer, D. Eggert, A. Fitzgibbon, R. Fisher: An experimental comparison of range segmentation algorithms, IEEE Trans. Patt. Anal. Mach. Intell. 18(7), 673–689 (1996)
  22.33. M.A. Fischler, R.C. Bolles: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography, Commun. ACM 24(6), 381–395 (1981)
  22.34. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, W. Stuetzle: Surface reconstruction from unorganized points, Comput. Graph. 26(2), 71–78 (1992)
  22.35. A. Hilton, A. Stoddart, J. Illingworth, T. Windeatt: Implicit surface-based geometric fusion, Comput. Vis. Image Under. 69(3), 273–291 (1998)
  22.36. H. Hoppe: New quadric metric for simplifying meshes with appearance attributes, IEEE Visualization 1999 Conference (San Francisco 1999) pp. 59–66
  22.37. W.J. Schroeder, J.A. Zarge, W.E. Lorensen: Decimation of triangle meshes, Proc. of Int. Conf. on Comput. Graph. and Inter. Tech. (SIGGRAPH) (Chicago 1992) pp. 65–70
  22.38. S. Thrun: A probabilistic online mapping algorithm for teams of mobile robots, Int. J. Robot. Res. 20(5), 335–363 (2001)
  22.39. J. Little, S. Se, D. Lowe: Vision based mobile robot localization and mapping using scale-invariant features, Proc. IEEE Int. Conf. on Robotics and Automation (Seoul 2001) pp. 2051–2058
  22.40. E. Grimson: Object Recognition by Computer: The Role of Geometric Constraints (MIT Press, London 1990)
  22.41. P.J. Besl, N.D. McKay: A method for registration of 3D shapes, IEEE Trans. Patt. Anal. Mach. Intell. 14(2), 239–256 (1992)
  22.42. G. Turk, M. Levoy: Zippered polygon meshes from range images, Proc. of Int. Conf. on Comput. Graph. and Inter. Tech. (SIGGRAPH) (Orlando 1994) pp. 311–318
  22.43. S. Thrun, W. Burgard, D. Fox: Probabilistic Robotics (MIT Press, Cambridge 2005)
  22.44. D. Haehnel, D. Schulz, W. Burgard: Mapping with mobile robots in populated environments, Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS), Vol. 1 (Lausanne 2002) pp. 496–501
  22.45. K. Konolige, K. Chou: Markov localization using correlation, Proc. Int. Joint Conf. on AI (IJCAI) (Stockholm 1999) pp. 1154–1159
  22.46. D. Haehnel, W. Burgard: Probabilistic matching for 3D scan registration, Proc. of the VDI-Conference Robotik 2002 (Robotik) (Ludwigsburg 2002)
  22.47. F. Lu, E. Milios: Globally consistent range scan alignment for environment mapping, Auton. Robot. 4, 333–349 (1997)
  22.48. K. Konolige: Large-scale map-making, Proceedings of the National Conference on AI (AAAI) (San Jose 2004) pp. 457–463
  22.49. A. Kelly, R. Unnikrishnan: Efficient construction of globally consistent ladar maps using pose network topology and nonlinear programming, Proc. Int. Symp. of Robotics Research (Siena 2003)
  22.50. K.S. Arun, T.S. Huang, S.D. Blostein: Least-squares fitting of two 3-D point sets, IEEE Trans. Patt. Anal. Mach. Intell. 9(5), 698–700 (1987)
  22.51. Z. Zhang: Parameter estimation techniques: a tutorial with application to conic fitting, Image Vis. Comput. 15, 59–76 (1997)
  22.52. P. Benko, G. Kos, T. Varady, L. Andor, R.R. Martin: Constrained fitting in reverse engineering, Comput. Aided Geom. Des. 19, 173–205 (2002)
  22.53. M. Levoy, K. Pulli, B. Curless, S. Rusinkiewicz, D. Koller, L. Pereira, M. Ginzton, S. Anderson, J. Davis, J. Ginsberg, J. Shade, D. Fulk: The Digital Michelangelo Project: 3D scanning of large statues, Proc. 27th Conf. on Computer Graphics and Interactive Techniques (SIGGRAPH) (New Orleans 2000) pp. 131–144
  22.54. I. Stamos, P. Allen: 3-D model construction using range and image data, Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Vol. 1 (Hilton Head Island 2000) pp. 531–536
  22.55. R. Triebel, P. Pfaff, W. Burgard: Multi-level surface maps for outdoor terrain mapping and loop closing, Proc. of the IEEE Int. Conf. on Intelligent Robots and Systems (IROS) (Beijing 2006)
  22.56. S. Thrun, W. Burgard, D. Fox: A real-time algorithm for mobile robot mapping with applications to multi-robot and 3D mapping, Proc. IEEE Int. Conf. on Robotics and Automation (San Francisco 2000) pp. 321–328
  22.57. Y. Liu, R. Emery, D. Chakrabarti, W. Burgard, S. Thrun: Using EM to learn 3D models of indoor environments with mobile robots, Proc. Int. Conf. on Machine Learning (Williamstown 2001) pp. 329–336
  22.58. M. Agrawal, K. Konolige, L. Iocchi: Real-time detection of independent motion using stereo, IEEE Workshop on Motion (Breckenridge 2005) pp. 207–214
  22.59. The DARPA Grand Challenge: www.darpa.mil/grandchallenge05, accessed Nov 12, 2007 (DARPA, Arlington 2005)
  22.60. S. Thrun, M. Montemerlo, H. Dahlkamp et al.: Stanley: the robot that won the DARPA Grand Challenge, J. Field Robot. 23(9), 661–670 (2006)
  22.61. C. Eveland, K. Konolige, R. Bolles: Background modeling for segmentation of video-rate stereo sequences, Proc. Int. Conf. on Computer Vision and Pattern Recognition (Santa Barbara 1998) pp. 266–271
  22.62. K. Konolige, M. Agrawal, R.C. Bolles, C. Cowan, M. Fischler, B. Gerkey: Outdoor mapping and navigation using stereo vision, Int. Symp. on Experimental Robotics (ISER) (Rio de Janeiro 2006)
  22.63. J. Lalonde, N. Vandapel, D. Huber, M. Hebert: Natural terrain classification using three-dimensional ladar data for ground robot mobility, J. Field Robot. 23(10), 839–861 (2006)
  22.64. J.-F. Lalonde, N. Vandapel, M. Hebert: Data structure for efficient processing in 3-D, Robotics: Science and Systems 1, Cambridge (2005)
  22.65. M. Happold, M. Ollis, N. Johnson: Enhancing supervised terrain classification with predictive unsupervised learning, Robotics: Science and Systems (Philadelphia 2006)
  22.66. R. Manduchi, A. Castano, A. Talukder, L. Matthies: Obstacle detection and terrain classification for autonomous off-road navigation, Auton. Robot. 18, 81–102 (2005)
  22.67. A. Kelly, A. Stentz, O. Amidi, M. Bode, D. Bradley, A. Diaz-Calderon, M. Happold, H. Herman, R. Mandelbaum, T. Pilarski, P. Rander, S. Thayer, N. Vallidis, R. Warner: Toward reliable off road autonomous vehicles operating in challenging environments, Int. J. Robot. Res. 25(5–6), 449–483 (2006)
  22.68. P. Bellutta, R. Manduchi, L. Matthies, K. Owens, A. Rankin: Terrain perception for Demo III, Proc. of the 2000 IEEE Intelligent Vehicles Conf. (Dearborn 2000) pp. 326–331
  22.69. KARTO: Software for robots on the move, www.kartorobotics.com, accessed Nov 12, 2007 (ISRI, Menlo Park 2007)
  22.70. The Stanford Artificial Intelligence Robot: www.cs.stanford.edu/group/stair, accessed Nov 12, 2007 (Stanford Univ., Stanford 2007)
  22.71. Perception for Humanoid Robots, www.ri.cmu.edu/projects/project_595.html, accessed Nov 12, 2007 (Carnegie Mellon Univ., Pittsburgh 2007)

Copyright information

© Springer-Verlag 2008

Authors and Affiliations

  1. School of Informatics, University of Edinburgh, Edinburgh, UK
  2. Artificial Intelligence Center, SRI International, Menlo Park, USA
