Journal of Intelligent & Robotic Systems, Volume 61, Issue 1, pp 287–299

Fusion of IMU and Vision for Absolute Scale Estimation in Monocular SLAM

Authors

  • Gabriel Nützi
    • ETH Autonomous Systems Laboratory
  • Stephan Weiss
    • ETH Autonomous Systems Laboratory
  • Davide Scaramuzza
    • ETH Autonomous Systems Laboratory
  • Roland Siegwart
    • ETH Autonomous Systems Laboratory
DOI: 10.1007/s10846-010-9490-z

Cite this article as:
Nützi, G., Weiss, S., Scaramuzza, D. et al. J Intell Robot Syst (2011) 61: 287. doi:10.1007/s10846-010-9490-z
Abstract

The fusion of inertial and visual data is widely used to improve an object’s pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimating the unknown scale parameter in a monocular SLAM framework. Directly linked to the scale is the estimation of the object’s absolute velocity and position in 3D. The first approach is a spline-fitting task adapted from Jung and Taylor; the second is an extended Kalman filter. Both methods have been simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting scale estimation. We then embedded an online multi-rate extended Kalman filter, together with an inertial sensor, in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray. In this inertial/monocular SLAM framework, we demonstrate real-time, robust, and fast-converging scale estimation. Our approach depends neither on known patterns in the vision part nor on complex temporal synchronization between the visual and inertial sensors.
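To make the second approach concrete, the following is a minimal, self-contained sketch (our own illustration, not the authors' implementation) of the kind of multi-rate EKF the abstract describes, reduced to 1D for brevity. The state [p, v, a, λ] holds metric position, velocity, acceleration, and the unknown visual scale λ; the filter predicts at IMU rate, corrects at high rate with the IMU's metric acceleration, and corrects at low rate with the SLAM position measured up to scale. All rates, noise values, and the measurement convention z_vis = λ·p are assumptions chosen for illustration.

```python
import numpy as np

# Illustrative 1-D multi-rate EKF for visual scale estimation (a sketch
# under assumed conventions, not the paper's code).
# State x = [p, v, a, lam]: metric position, velocity, acceleration,
# and the unknown scale lam of the monocular SLAM map.

dt = 0.01                        # IMU period, assumed 100 Hz
Q = np.diag([0.0, 0.0, 1e-2, 1e-8])  # process noise (assumed values)

def predict(x, P):
    """Constant-acceleration prediction; lam is modeled as constant."""
    F = np.array([[1.0, dt,  0.5 * dt**2, 0.0],
                  [0.0, 1.0, dt,          0.0],
                  [0.0, 0.0, 1.0,         0.0],
                  [0.0, 0.0, 0.0,         1.0]])
    return F @ x, F @ P @ F.T + Q

def ekf_update(x, P, residual, H, R):
    """Standard EKF correction given residual z - h(x) and Jacobian H."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ residual
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

def imu_update(x, P, a_meas, r_imu=1e-1):
    """High-rate update: the IMU measures metric acceleration a."""
    H = np.array([[0.0, 0.0, 1.0, 0.0]])
    return ekf_update(x, P, np.array([a_meas - x[2]]), H,
                      np.array([[r_imu]]))

def vision_update(x, P, p_slam, r_vis=1e-3):
    """Low-rate update: monocular SLAM measures position up to scale,
    z = lam * p. The Jacobian couples p and lam; combined with the
    metric IMU accelerations, lam becomes observable once the
    trajectory is sufficiently excited."""
    p, lam = x[0], x[3]
    H = np.array([[lam, 0.0, 0.0, p]])
    return ekf_update(x, P, np.array([p_slam - lam * p]), H,
                      np.array([[r_vis]]))
```

In use, predict() and imu_update() run every IMU sample, while vision_update() runs only when a SLAM pose arrives, which is what makes the filter multi-rate; no tight temporal synchronization between the two sensor streams is required for this scheme, consistent with the abstract's claim. Note that the scale only converges while the camera undergoes non-zero acceleration, since a constant-velocity trajectory gives the IMU nothing metric to contribute.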

Keywords

IMU vision fusion · Absolute scale · Monocular SLAM · Kalman filter
Copyright information

© Springer Science+Business Media B.V. 2010