Fast and Accurate Structure and Motion Estimation

  • Johan Hedborg
  • Per-Erik Forssén
  • Michael Felsberg
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5875)


This paper describes a system for structure-and-motion estimation, aimed at real-time navigation and obstacle avoidance. We demonstrate a technique to increase the efficiency of the 5-point solution to the relative pose problem. This is achieved by a novel sampling scheme, in which we add a distance constraint on the sampled points inside the RANSAC loop, before computing the 5-point solution. Our setup uses the KLT tracker to establish point correspondences across time in live video. We also demonstrate how early outlier rejection in the tracker improves performance in scenes with many occlusions. This outlier rejection scheme is well suited to implementation on graphics hardware. We evaluate the proposed algorithms on real camera sequences, using fine-tuned bundle-adjusted data as ground truth. To strengthen our results, we also evaluate on sequences generated by state-of-the-art rendering software. On average we are able to halve the number of RANSAC iterations, and thereby double the speed.
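The sampling scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `five_point_solver`, the pixel threshold, and the retry limit are all assumptions. The idea is simply that a 5-point minimal sample is only passed to the pose solver if every pair of sampled image points is at least some minimum distance apart, since nearly coincident points give poorly conditioned solutions and wasted RANSAC iterations.

```python
import math
import random

def sample_with_min_distance(points, k=5, min_dist=20.0, max_tries=100):
    """Draw k distinct point indices whose pairwise image distances
    all exceed min_dist (in pixels). Returns None if no such sample
    is found within max_tries attempts."""
    n = len(points)
    for _ in range(max_tries):
        idx = random.sample(range(n), k)
        ok = all(
            math.dist(points[a], points[b]) >= min_dist
            for i, a in enumerate(idx)
            for b in idx[i + 1:]
        )
        if ok:
            return idx
    return None

# Inside the RANSAC loop, the 5-point solver (e.g. Nistér's method)
# would then be called only on samples that pass the distance test:
#
#     idx = sample_with_min_distance(pts_prev, k=5, min_dist=20.0)
#     if idx is None:
#         continue  # degenerate frame; skip this iteration
#     essentials = five_point_solver(pts_prev[idx], pts_curr[idx])  # hypothetical
```

Rejecting a clustered sample is far cheaper than solving the 5-point problem and scoring its hypotheses against all correspondences, which is consistent with the reported reduction in RANSAC iterations.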


Keywords: Ground Truth · Motion Estimation · Obstacle Avoidance · Forward Motion · Distance Constraint





Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

Johan Hedborg, Per-Erik Forssén, Michael Felsberg

Department of Electrical Engineering, Linköping University, Sweden
