Detection of Independently Moving Objects in Non-planar Scenes via Multi-Frame Monocular Epipolar Constraint

  • Soumyabrata Dey
  • Vladimir Reilly
  • Imran Saleemi
  • Mubarak Shah
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7576)

Abstract

In this paper we present a novel approach for detecting independently moving foreground objects in non-planar scenes captured by a moving camera. We avoid the traditional assumptions that the stationary background of the scene is planar, that it can be approximated by one or more dominant planes, or that the camera is orthographic. Instead, we derive a multi-frame epipolar constraint for a monocular moving camera, defined by the evolving epipolar plane between the moving camera center and 3D scene points. This constraint is parameterized as a polynomial function of time; unlike repeated computation of inter-frame fundamental matrices, it requires the estimation of fewer unknowns and provides a more consistent separation between moving and static objects across different noise levels. The constraint allows us to segment moving objects in a general 3D scene where other approaches fail because their initial assumptions do not hold, and it provides a natural way of fusing temporal information across multiple frames. We use a combination of optical flow and particle advection to capture all motion in the video across a number of frames in the form of particle trajectories. We then apply the derived multi-frame epipolar constraint to these trajectories to determine which ones violate it, thereby segmenting out the independently moving objects. We show superior results on a number of moving-camera sequences observing non-planar scenes, where other methods fail.
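The sketch below is a minimal illustration of the kind of pipeline the abstract describes, not the authors' exact formulation: dense optical flow drives particle advection to build trajectories, and epipolar residuals accumulated over the trajectory window flag independently moving points. Frame-pair fundamental matrices with the Sampson error stand in for the paper's polynomial multi-frame constraint; the grid spacing, residual threshold, and helper names are illustrative assumptions.

```python
# Minimal sketch (assumed simplification, not the paper's method): particle advection
# via dense optical flow, plus accumulated epipolar residuals to flag moving points.
import cv2
import numpy as np


def advect_particles(frames, grid_step=10):
    """Seed particles on a grid in the first frame and advect them with dense
    optical flow, returning an array of shape (num_frames, num_particles, 2)."""
    h, w = frames[0].shape[:2]
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)
    traj = [pts.copy()]
    for prev, nxt in zip(frames[:-1], frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        xi = np.clip(pts[:, 0].astype(int), 0, w - 1)
        yi = np.clip(pts[:, 1].astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]            # move each particle along the flow
        traj.append(pts.copy())
    return np.stack(traj)                    # (T, N, 2)


def sampson_error(F, p1, p2):
    """Sampson distance of correspondences p1 <-> p2 (each Nx2) under F."""
    x1 = np.hstack([p1, np.ones((len(p1), 1))])
    x2 = np.hstack([p2, np.ones((len(p2), 1))])
    Fx1 = x1 @ F.T                           # epipolar lines in the second image
    Ftx2 = x2 @ F                            # epipolar lines in the first image
    num = np.sum(x2 * Fx1, axis=1) ** 2
    den = Fx1[:, 0] ** 2 + Fx1[:, 1] ** 2 + Ftx2[:, 0] ** 2 + Ftx2[:, 1] ** 2
    return num / np.maximum(den, 1e-9)


def flag_moving_particles(traj, residual_thresh=2.0):
    """Accumulate per-frame epipolar residuals along each trajectory and flag
    trajectories whose mean residual exceeds the (assumed) threshold."""
    T, N, _ = traj.shape
    residuals = np.zeros(N)
    for t in range(T - 1):
        p1, p2 = traj[t], traj[t + 1]
        F, _ = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
        if F is None:
            continue
        residuals += sampson_error(F[:3, :3], p1, p2)
    residuals /= max(T - 1, 1)
    return residuals > residual_thresh       # boolean mask over particles
```

In contrast to this per-frame baseline, the paper's constraint is parameterized as a single polynomial in time over the whole trajectory window, which is what reduces the number of unknowns relative to estimating a fundamental matrix for every frame pair.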

Keywords

Camera Motion, False Detection, Fundamental Matrix, Plane Object, Static Scene

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Soumyabrata Dey (1)
  • Vladimir Reilly (1)
  • Imran Saleemi (1)
  • Mubarak Shah (1)

  1. Computer Vision Lab, University of Central Florida, Orlando, USA