Abstract
In this paper, we propose a direct stereo visual odometry method that uses vertical lines to estimate consecutive camera poses. Because it relies on lines rather than point features, it is well suited for poorly textured indoor environments where point-based methods may fail. We introduce a fast line segment detector and matcher for vertical lines, which occur frequently in man-made environments. The camera pose is estimated by directly minimizing the photometric error of the patches around the detected lines. In cases where too few lines can be detected, point features are used as a fallback. As our algorithm runs in real time, it is well suited for robotics and augmented reality applications. In our experiments, we show that our algorithm outperforms state-of-the-art methods on poorly textured indoor scenes and delivers comparable results on well-textured outdoor scenes.
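The abstract outlines the core of the method: the camera pose is obtained by minimizing the photometric error of patches sampled around detected vertical lines. The following is a minimal sketch of such a direct photometric alignment step, assuming the 3D patch points along the lines have already been triangulated from the stereo pair. The function names (`se3_exp`, `photometric_residuals`, `estimate_pose`), the axis-angle pose parameterisation, and SciPy's robust least-squares solver are illustrative stand-ins, not the authors' implementation (the paper relies on a Levenberg-Marquardt/Ceres-style optimisation [26, 27, 31]).

```python
# Hedged sketch of direct photometric alignment of line patches.
# All names and the solver choice are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares


def se3_exp(xi):
    """Map a 6-vector (translation, axis-angle rotation) to a 4x4 rigid-body transform."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])
    if theta < 1e-8:
        R = np.eye(3) + W                      # near-identity rotation
    else:
        R = (np.eye(3) + np.sin(theta) / theta * W
             + (1 - np.cos(theta)) / theta ** 2 * W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, v
    return T


def project(K, X):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates."""
    x = (K @ X.T).T
    return x[:, :2] / x[:, 2:3]


def bilinear(img, uv):
    """Bilinearly interpolate image intensities at sub-pixel locations uv (Nx2)."""
    H, W = img.shape
    u0 = np.clip(np.floor(uv[:, 0]).astype(int), 0, W - 2)
    v0 = np.clip(np.floor(uv[:, 1]).astype(int), 0, H - 2)
    du, dv = uv[:, 0] - u0, uv[:, 1] - v0
    return ((1 - du) * (1 - dv) * img[v0, u0] + du * (1 - dv) * img[v0, u0 + 1]
            + (1 - du) * dv * img[v0 + 1, u0] + du * dv * img[v0 + 1, u0 + 1])


def photometric_residuals(xi, K, pts_ref, intens_ref, img_cur):
    """Reference patch intensity minus the intensity at the warped location in the current image."""
    T = se3_exp(xi)
    X = (T[:3, :3] @ pts_ref.T).T + T[:3, 3]   # transform 3D patch points into the current frame
    uv = project(K, X)                         # project into the current image
    return intens_ref - bilinear(img_cur, uv)


def estimate_pose(K, pts_ref, intens_ref, img_cur, xi0=np.zeros(6)):
    """Minimise the photometric error over the 6-DoF pose with a robust (Huber) loss."""
    res = least_squares(photometric_residuals, xi0, loss="huber",
                        args=(K, pts_ref, intens_ref, img_cur))
    return se3_exp(res.x)
```

In the full method, `pts_ref` and `intens_ref` would be sampled from patches along the vertical lines triangulated in the previous stereo frame; when too few lines are found, the same residual structure can be reused with patches around point features, matching the fallback described in the abstract.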
Notes
1. This paper is a revised and extended version of [3].
References
Geiger, A., Ziegler, J., Stiller, C.: StereoScan: dense 3D reconstruction in real-time. In: IEEE Intelligent Vehicles Symposium (2011)
Engel, J., Schöps, T., Cremers, D.: LSD-SLAM: large-scale direct monocular SLAM. In: Proceedings of European Conference on Computer Vision (2014)
Holzmann, T., Fraundorfer, F., Bischof, H.: Direct stereo visual odometry based on lines. In: 11th International Conference on Computer Vision Theory and Application (VISAPP) (2016)
Klein, G., Murray, D.: Parallel tracking and mapping for small AR workspaces. In: Proceedings of International Symposium on Mixed and Augmented Reality (2007)
Mei, C., Sibley, G., Cummins, M., Newman, P., Reid, I.: RSLAM: a system for large-scale mapping in constant-time using stereo. Int. J. Comput. Vis. 94, 198–214 (2010)
Weiss, S., Achtelik, M.W., Lynen, S., Achtelik, M.C., Kneip, L., Chli, M., Siegwart, R.: Monocular vision for long-term micro aerial vehicle state estimation: a compendium. J. Field Robot. 30, 803–831 (2013)
Elqursh, A., Elgammal, A.M.: Line-based relative pose estimation. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, pp. 3049–3056. IEEE Computer Society (2011)
Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: dense tracking and mapping in real-time. In: Proceedings of International Conference on Computer Vision, pp. 2320–2327 (2011)
Engel, J., Stückler, J., Cremers, D.: Large-scale direct SLAM with stereo cameras. In: International Conference on Intelligent Robots and Systems (2015)
Forster, C., Pizzoli, M., Scaramuzza, D.: SVO: fast semi-direct monocular visual odometry. In: International Conference on Robotics and Automation (2014)
Grompone, R., Jakubowicz, J., Morel, J.M., Randall, G.: LSD: a fast line segment detector with a false detection control. IEEE Trans. Pattern Anal. Mach. Intell. 32, 722–732 (2010)
Hofer, M., Maurer, M., Bischof, H.: Improving sparse 3D models for man-made environments using line-based 3D reconstruction. In: International Conference on 3D Vision (2014)
Cortinovis, A.: PIXHAWK - attitude and position estimation from vision and IMU measurements for quadrotor control. Technical report, Computer Vision and Geometry Lab, Swiss Federal Institute of Technology (ETH) Zurich (2010)
Rosten, E., Drummond, T.: Fusing points and lines for high performance tracking. In: Proceedings of International Conference on Computer Vision (2005)
Ma, Y., Soatto, S., Kosecka, J., Sastry, S.S.: An Invitation to 3-D Vision: From Images to Geometric Models. Springer, Heidelberg (2003)
Levenberg, K.: A method for the solution of certain non-linear problems in least squares. Q. Appl. Math. 2, 164–168 (1944)
Marquardt, D.: An algorithm for least-squares estimation of nonlinear parameters. SIAM J. Appl. Math. 11(2), 431–441 (1963)
Bonarini, A., Burgard, W., Fontana, G., Matteucci, M., Sorrenti, D.G., Tardos, J.D.: RAWSEEDS: robotics advancement through web-publishing of sensorial and elaborated extensive data sets. In: International Conference on Intelligent Robots and Systems (2006)
Ceriani, S., Fontana, G., Giusti, A., Marzorati, D., Matteucci, M., Migliore, D., Rizzi, D., Sorrenti, D.G., Taddei, P.: Rawseeds ground truth collection systems for indoor self-localization and mapping. Auton. Robots 27, 353–371 (2009)
Geiger, A., Lenz, P., Urtasun, R.: Are we ready for autonomous driving? The KITTI vision benchmark suite. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (2012)
Agarwal, S., Mierle, K., et al.: Ceres solver. http://ceres-solver.org
Furgale, P., Rehder, J., Siegwart, R.: Unified temporal and spatial calibration for multi-sensor systems. In: International Conference on Intelligent Robots and Systems (2013)
Sturm, J., Engelhard, N., Endres, F., Burgard, W., Cremers, D.: A benchmark for the evaluation of RGB-D SLAM systems. In: International Conference on Intelligent Robots and Systems, pp. 573–580 (2012)
Acknowledgements
This project has been supported by the Austrian Science Fund (FWF) in the project V-MAV (I-1537).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Holzmann, T., Fraundorfer, F., Bischof, H. (2017). A Detailed Description of Direct Stereo Visual Odometry Based on Lines. In: Braz, J., et al. Computer Vision, Imaging and Computer Graphics Theory and Applications. VISIGRAPP 2016. Communications in Computer and Information Science, vol 693. Springer, Cham. https://doi.org/10.1007/978-3-319-64870-5_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-64869-9
Online ISBN: 978-3-319-64870-5
eBook Packages: Computer Science (R0)