Fusing Depth and Video Using Rao-Blackwellized Particle Filter

  • Amit Agrawal
  • Rama Chellappa
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3776)

Abstract

We address the problem of fusing sparse, noisy depth data obtained from a range finder with features extracted from intensity images, in order to estimate ego-motion and refine the 3D structure of a scene using a Rao-Blackwellized particle filter. For scenes with low depth variability, the algorithm offers an alternate way of performing Structure from Motion (SfM), starting from a flat depth map. Instead of working with 3D depths, we formulate the problem using 2D image-domain parallax and show that, conditioned on the non-linear motion parameters, the parallax magnitudes with respect to the projection of the vanishing point form a linear subsystem independent of camera motion, whose distribution can be integrated out analytically. The structure is therefore obtained by estimating parallax with respect to the given depths using a Kalman filter, while only the ego-motion is estimated with a particle filter. As a result, the required number of particles is independent of the number of feature points, an improvement over previous algorithms. Experimental results on both synthetic and real data show the effectiveness of our approach.
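The marginalization described above can be illustrated with a minimal Rao-Blackwellized particle filter sketch. This is not the paper's SfM model: the toy system below (a scalar nonlinear state standing in for camera motion, a scalar linear state standing in for parallax magnitude, and the observation gain `sin(theta)`) is an assumption chosen purely for illustration. The key structural point matches the abstract: conditioned on each sampled nonlinear state, the linear state is handled analytically by a per-particle Kalman filter, so particles are needed only for the nonlinear part.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model (illustrative only, not the paper's formulation):
#   theta_k : nonlinear state (stands in for camera motion), random walk
#   x_k     : linear state (stands in for parallax magnitude)
#   y_k = sin(theta_k) * x_k + v_k,  v_k ~ N(0, R)
# Conditioned on theta_k the system is linear-Gaussian, so x_k is
# marginalized with a Kalman filter per particle (Rao-Blackwellization).

N = 200           # number of particles (independent of feature count)
Q_THETA = 0.05    # nonlinear-state process noise variance
Q_X = 0.01        # linear-state process noise variance
R = 0.1           # measurement noise variance

def rbpf(observations):
    theta = rng.normal(0.0, 1.0, N)   # particles for the nonlinear state
    mean = np.zeros(N)                # per-particle Kalman mean of x
    cov = np.ones(N)                  # per-particle Kalman variance of x
    estimates = []
    for y in observations:
        # 1. Propagate nonlinear particles and the Kalman prediction.
        theta = theta + rng.normal(0.0, np.sqrt(Q_THETA), N)
        cov = cov + Q_X               # x_{k+1} = x_k + w_k
        # 2. Per-particle Kalman update; the particle weight is the
        #    marginal likelihood of y under that particle's linear model.
        H = np.sin(theta)             # observation gain given theta
        S = H * cov * H + R           # innovation variance
        K = cov * H / S               # Kalman gain
        innov = y - H * mean
        mean = mean + K * innov
        cov = (1.0 - K * H) * cov
        logw = -0.5 * (np.log(2.0 * np.pi * S) + innov**2 / S)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append((float(np.sum(w * theta)),
                          float(np.sum(w * mean))))
        # 3. Resample to combat weight degeneracy.
        idx = rng.choice(N, size=N, p=w)
        theta, mean, cov = theta[idx], mean[idx], cov[idx]
    return estimates
```

Because the linear state never needs to be sampled, the particle count depends only on the dimension of the nonlinear (motion) state, which is the efficiency argument the abstract makes.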

Keywords

Feature Point, Camera Motion, Sequential Monte Carlo, World Coordinate System, Linear Subsystem


Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Amit Agrawal (1)
  • Rama Chellappa (1)

  1. University of Maryland, College Park, USA
