Abstract
Here we propose a real-time method for low-drift odometry and mapping using range measurements from a 3D laser scanner moving in 6-DOF. The problem is hard because the range measurements are received at different times, and errors in motion estimation (especially without an external reference such as GPS) cause mis-registration of the resulting point cloud. To date, coherent 3D maps have been built by offline batch methods, often using loop closure to correct for drift over time. Our method achieves both low drift in motion estimation and low computational complexity. The key idea that makes this level of performance possible is the division of the complex problem of simultaneous localization and mapping, which seeks to optimize a large number of variables simultaneously, into two algorithms. One algorithm performs odometry at high frequency but low fidelity to estimate the velocity of the laser scanner. Although not necessary, if an IMU is available it can provide a motion prior and mitigate gross, high-frequency motion. A second algorithm runs at an order of magnitude lower frequency for fine matching and registration of the point cloud. Combining the two algorithms allows maps to be created in real time. Our method has been evaluated in indoor and outdoor experiments as well as on the KITTI odometry benchmark. The results indicate that the proposed method can achieve accuracy comparable to state-of-the-art offline batch methods.
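The two-algorithm decomposition described above can be sketched as a two-rate loop: a fast, low-fidelity odometry update runs every lidar sweep, while a slower mapping step refines the pose and registers points into the map an order of magnitude less often. This is a minimal illustrative sketch, not the authors' implementation; the names `integrate_odometry` and `refine_with_map` and the constant `MAP_EVERY` are assumptions introduced here for clarity.

```python
MAP_EVERY = 10  # mapping runs at ~1/10 the odometry frequency


def integrate_odometry(pose, velocity, dt):
    # High-frequency, low-fidelity update: dead-reckon with the
    # estimated velocity of the laser scanner.
    return [p + v * dt for p, v in zip(pose, velocity)]


def refine_with_map(pose, map_points, sweep_points):
    # Low-frequency, fine registration. Here we only append points;
    # a real system would match sweep_points against map_points and
    # correct the pose before registering.
    map_points.extend(sweep_points)
    return pose


def run(sweeps, velocity, dt=0.1):
    # Process a sequence of lidar sweeps with the two-rate structure.
    pose, map_points = [0.0, 0.0, 0.0], []
    for i, sweep_points in enumerate(sweeps):
        pose = integrate_odometry(pose, velocity, dt)  # every sweep
        if (i + 1) % MAP_EVERY == 0:                   # every 10th sweep
            pose = refine_with_map(pose, map_points, sweep_points)
    return pose, map_points
```

The point of the split is that the expensive registration step never blocks the high-rate motion estimate, which is what allows real-time operation.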
Acknowledgments
This paper is based upon work supported by the National Science Foundation under Grant No. IIS-1328930. Special thanks to D. Huber, S. Scherer, M. Bergerman, M. Kaess, L. Yoder, and S. Maeta for their insightful input and invaluable help.
Cite this article
Zhang, J., Singh, S. Low-drift and real-time lidar odometry and mapping. Auton Robot 41, 401–416 (2017). https://doi.org/10.1007/s10514-016-9548-2