Autonomous Vehicle Video Aided Navigation – Coupling INS and Video Approaches

  • Chris Baker
  • Chris Debrunner
  • Sean Gooding
  • William Hoff
  • William Severson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4292)

Abstract

As autonomous vehicle systems become more prevalent, their navigation capabilities become increasingly critical. Currently, most systems rely on a combined GPS/INS solution to compute vehicle pose, while some use a video-based approach. One problem with GPS/INS is the possible loss of GPS data, especially in urban environments; relying on INS alone in this case causes significant drift in the computed pose. The video-based approach is not always reliable either, because of its heavy dependence on image texture. Our approach to autonomous vehicle navigation exploits the strengths of both by coupling an outlier-robust video-based solution with INS when GPS is unavailable, allowing accurate computation of the system's current pose in these situations. In this paper we describe our system design and analyze its performance on simulated data over a range of noise levels.
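
The abstract names the two ingredients of the coupling: Kalman-filter-based INS propagation and an outlier-robust video measurement. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation: a 1-D extended-Kalman-style filter propagates position and velocity from a biased accelerometer (standing in for the INS), and a RANSAC-style consensus step rejects outlier feature tracks before the video-derived fix corrects the filter. The 1-D state, all noise levels, the function names, and the treatment of the video estimate as an absolute position fix are simplifying assumptions.

```python
# A minimal 1-D sketch of the INS/video coupling described in the abstract,
# assuming a simple Kalman filter and a RANSAC-style consensus step; it is
# NOT the authors' implementation. The INS is reduced to a biased
# accelerometer, and the video-based solution to per-feature measurements
# that vote on a position fix. All parameters below are illustrative.
import numpy as np

def ekf_predict(x, P, a_meas, dt, q_accel):
    """Propagate a [position, velocity] state from an INS acceleration reading."""
    F = np.array([[1.0, dt],
                  [0.0, 1.0]])          # constant-velocity transition
    B = np.array([0.5 * dt**2, dt])     # how acceleration enters the state
    x = F @ x + B * a_meas
    P = F @ P @ F.T + q_accel * np.outer(B, B)   # inflate covariance
    return x, P

def ekf_update(x, P, z, r_meas):
    """Correct the state with a video-derived position measurement z."""
    H = np.array([[1.0, 0.0]])          # we observe position only
    S = H @ P @ H.T + r_meas            # innovation covariance (1x1)
    K = P @ H.T / S                     # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

def ransac_position(tracks, n_iter=50, tol=0.05, seed=0):
    """Outlier-robust position estimate from per-feature measurements:
    draw a random hypothesis, count inliers, keep the best consensus mean."""
    rng = np.random.default_rng(seed)
    best_est, best_count = tracks[0], 0
    for _ in range(n_iter):
        hypothesis = rng.choice(tracks)
        inliers = tracks[np.abs(tracks - hypothesis) < tol]
        if len(inliers) > best_count:
            best_est, best_count = inliers.mean(), len(inliers)
    return best_est

# Toy run: with GPS "unavailable", INS-only dead reckoning drifts under a
# 0.2 m/s^2 accelerometer bias, while a video fix every second (20% of the
# feature tracks are gross outliers) keeps the fused estimate bounded.
rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2) * 0.01
true_pos, dt = 0.0, 0.1
for k in range(100):
    true_pos += 1.0 * dt                         # vehicle moves at 1 m/s
    a_meas = 0.2 + rng.normal(0.0, 0.1)          # true accel 0, bias 0.2
    x, P = ekf_predict(x, P, a_meas, dt, q_accel=0.01)
    if k % 10 == 9:                              # video measurement epoch
        tracks = np.r_[rng.normal(true_pos, 0.02, 40),   # inlier tracks
                       rng.uniform(-5.0, 5.0, 10)]       # outlier tracks
        x, P = ekf_update(x, P, ransac_position(tracks), r_meas=0.02**2)
print(f"true position {true_pos:.2f} m, fused estimate {x[0]:.2f} m")
```

Running the toy loop shows the intended behavior: the biased accelerometer alone drifts quadratically, while the periodic, outlier-filtered video corrections hold the fused position estimate near the truth.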

Keywords

Global Positioning System, Kalman Filter, Feature Point, Extended Kalman Filter, Inertial Navigation System

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Chris Baker (1)
  • Chris Debrunner (1)
  • Sean Gooding (2)
  • William Hoff (2)
  • William Severson (1)

  1. PercepTek, Inc., Littleton
  2. Colorado School of Mines, Golden
