WarpCut – Fast Obstacle Segmentation in Monocular Video

  • Andreas Wedel
  • Thomas Schoenemann
  • Thomas Brox
  • Daniel Cremers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4713)

Abstract

Autonomous collision avoidance in vehicles requires an accurate separation of obstacles from the background, particularly near the focus of expansion. In this paper, we present a technique for fast segmentation of stationary obstacles from video recorded by a single camera that is installed in a moving vehicle. The input image is divided into three motion segments consisting of the ground plane, the background, and the obstacle. This constrained scenario allows for good initial estimates of the motion models, which are iteratively refined during segmentation. The horizon is known due to the camera setup. The remaining binary partitioning problem is solved by a graph cut on the motion-compensated difference images.
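
The pipeline outlined in the abstract, compensating the ground-plane and background motion and then solving a binary graph cut on the residual images, can be illustrated with a short Python example. This is only a minimal sketch of the general idea, not the authors' energy or implementation: it assumes grayscale frames, two already-estimated 3x3 homographies for the ground plane and the background, a known horizon row, and the OpenCV and PyMaxflow libraries; the threshold tau and smoothness weight lam are placeholder values.

```python
import numpy as np
import cv2        # assumed available, used only for homography warping
import maxflow    # PyMaxflow, assumed available, provides the s-t min-cut

def residual(curr, prev, H):
    """Absolute difference between the current grayscale frame and the
    previous frame warped by the homography H of one motion model."""
    h, w = curr.shape[:2]
    warped = cv2.warpPerspective(prev, H, (w, h))
    return np.abs(curr.astype(np.float32) - warped.astype(np.float32))

def segment_obstacle(curr, prev, H_ground, H_background,
                     horizon_row, tau=30.0, lam=10.0):
    """Binary obstacle / non-obstacle labeling of the current frame."""
    # Motion-compensated difference image: background model above the
    # known horizon row, ground-plane model below it.
    r = residual(curr, prev, H_background)
    r[horizon_row:, :] = residual(curr, prev, H_ground)[horizon_row:, :]

    # Placeholder data terms: pixels well explained by a compensated
    # motion model are cheap to label as non-obstacle, expensive otherwise.
    obstacle_cost = np.maximum(tau - r, 0.0)
    non_obstacle_cost = r

    # 4-connected graph cut with a constant smoothness weight.
    g = maxflow.Graph[float]()
    nodes = g.add_grid_nodes(curr.shape[:2])
    g.add_grid_edges(nodes, lam)
    g.add_grid_tedges(nodes, obstacle_cost, non_obstacle_cost)
    g.maxflow()
    return g.get_grid_segments(nodes)   # True where a pixel is labeled obstacle
```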

Obstacle segmentation in realistic scenes with a monocular camera setup has not been feasible up to now. Our experimental evaluation shows that the proposed approach leads to fast and accurate obstacle segmentation and distance estimation without prior knowledge about the size, shape or base point of obstacles.
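
Regarding the distance estimation mentioned above, a standard monocular relation under the flat-ground assumption maps the image row of an obstacle's ground contact point to a range: Z = f * h / (v_base - v_horizon) for a camera at height h with focal length f in pixels and image rows increasing downwards. The snippet below illustrates only this generic relation; the function name and the numbers in the example are illustrative and not taken from the paper.

```python
def ground_plane_distance(v_base, v_horizon, focal_px, cam_height_m):
    """Distance in meters to a ground contact point imaged at row v_base,
    given the horizon row v_horizon, the focal length in pixels and the
    camera height above the flat road."""
    if v_base <= v_horizon:
        raise ValueError("the base point must lie below the horizon")
    return focal_px * cam_height_m / (v_base - v_horizon)

# Illustrative numbers: f = 800 px, camera 1.2 m above the road,
# horizon at row 240, obstacle base at row 300 -> about 16 m ahead.
print(ground_plane_distance(300, 240, 800.0, 1.2))
```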

Keywords

IEEE Computer Society · Ground Plane · Camera Motion · Stationary Obstacle · Motion Segmentation
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.

References

  1. Birchfield, S., Tomasi, C.: Multiway cut for stereo and motion with slanted surfaces. In: International Conference on Computer Vision, pp. 489–495 (1999)
  2. Boykov, Y., Jolly, M.P.: Interactive graph cuts for optimal boundary & region segmentation of objects in N-D images. In: International Conference on Computer Vision, vol. 1, pp. 105–112 (2001)
  3. Boykov, Y., Kolmogorov, V.: An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. In: Energy Minimization Methods in Computer Vision and Pattern Recognition, pp. 359–374 (2001)
  4. Boykov, Y., Veksler, O., Zabih, R.: Fast approximate energy minimization via graph cuts. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(11), 1222–1239 (2001)
  5. Criminisi, A., Cross, G., Blake, A., Kolmogorov, V.: Bilayer segmentation of live video. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, pp. 53–60. IEEE Computer Society Press, Los Alamitos (2006)
  6. Greig, D., Porteous, B., Seheult, A.: Exact maximum a posteriori estimation for binary images. Journal of the Royal Statistical Society, Series B 51(2), 271–279 (1989)
  7. Hager, G., Belhumeur, P.: Efficient region tracking with parametric models of geometry and illumination. IEEE Transactions on Pattern Analysis and Machine Intelligence 20(10), 1025–1039 (1998)
  8. Ke, Q., Kanade, T.: Transforming camera geometry to a virtual downward-looking camera: Robust ego-motion estimation and ground-layer detection. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 390–397. IEEE Computer Society Press, Los Alamitos (2003)
  9. Kolmogorov, V., Criminisi, A., Blake, A., Cross, G., Rother, C.: Bi-layer segmentation of binocular stereo video. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 407–417. IEEE Computer Society Press, Los Alamitos (2005)
  10. Longuet-Higgins, H., Prazdny, K.: The interpretation of a moving retinal image. Proceedings of the Royal Society of London, Series B (Biological Sciences) 208, 385–397 (1980)
  11. Lucas, B.D., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, pp. 674–679 (1981)
  12. Schoenemann, T., Cremers, D.: Near real-time motion segmentation using graph cuts. In: Franke, K., Müller, K.-R., Nickolay, B., Schäfer, R. (eds.) Pattern Recognition. LNCS, vol. 4174, pp. 455–464. Springer, Heidelberg (2006)
  13. Sun, J., Zhang, W., Tang, X., Shum, H.-Y.: Background cut. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3952, pp. 628–641. Springer, Heidelberg (2006)
  14. Wedel, A., Franke, U., Klappstein, J., Brox, T., Cremers, D.: Realtime depth estimation and obstacle detection from monocular video. In: Franke, K., Müller, K.-R., Nickolay, B., Schäfer, R. (eds.) Pattern Recognition. LNCS, vol. 4174, pp. 475–484. Springer, Heidelberg (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Andreas Wedel (1, 2)
  • Thomas Schoenemann (1)
  • Thomas Brox (1)
  • Daniel Cremers (1)

  1. Computer Vision Group, University of Bonn
  2. DaimlerChrysler Research
