
Motion Segmentation by Tracking Edge Information over Multiple Frames

  • Paul Smith
  • Tom Drummond
  • Roberto Cipolla
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1843)

Abstract

This paper presents a new Bayesian framework for layered motion segmentation, dividing the frames of an image sequence into foreground and background layers by tracking edges. The first frame in the sequence is segmented into regions using image edges, which are tracked to estimate two affine motions. The probability of each edge fitting each motion is calculated using first-order statistics along the edge. The most likely region labelling is then resolved using these probabilities, together with a Markov Random Field prior. As part of this process one of the motions is also identified as the foreground motion.
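The core scoring step described above, in which each tracked edge is evaluated against the two candidate affine motions, can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the function names, the sample-point representation, the isotropic Gaussian noise model, and the equal priors over the two motions are all assumptions (the paper pools first-order statistics along each edge).

```python
import numpy as np

def affine_warp(points, A):
    """Apply a 2x3 affine motion A to Nx2 points (homogeneous form)."""
    homo = np.hstack([points, np.ones((len(points), 1))])
    return homo @ A.T

def edge_motion_posterior(edge_points, observed, A1, A2, sigma=1.0):
    """Posterior probability that an edge moved under motion 1 vs motion 2.

    edge_points: Nx2 sample points along the edge in the first frame
    observed:    Nx2 matched positions of those samples in the next frame
    A1, A2:      2x3 affine motion hypotheses (assumed already estimated)

    Residuals are pooled over the whole edge and scored under an
    isotropic Gaussian noise model; equal priors are assumed.
    """
    log_likelihoods = []
    for A in (A1, A2):
        residual = observed - affine_warp(edge_points, A)
        log_likelihoods.append(-np.sum(residual ** 2) / (2.0 * sigma ** 2))
    log_likelihoods = np.array(log_likelihoods)
    # Subtract the max before exponentiating for numerical stability.
    p = np.exp(log_likelihoods - log_likelihoods.max())
    return p / p.sum()  # [P(motion 1 | edge), P(motion 2 | edge)]
```

In a full system these per-edge posteriors would feed the region-labelling stage, where a Markov Random Field prior encourages neighbouring regions to share a label.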

Good results are obtained using only two frames for segmentation. However, it is also demonstrated that over multiple frames the probabilities may be accumulated to provide an even more accurate and robust segmentation. The final region labelling can be used, together with the two motion models, to produce a good segmentation of an extended sequence.
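The multi-frame accumulation mentioned above can be illustrated as fusing per-frame-pair posteriors for a region, assuming the observations from different frame pairs are conditionally independent given the label. The function name and two-label setup here are hypothetical; the sketch combines evidence by summing log-probabilities and renormalising.

```python
import numpy as np

def accumulate_region_posterior(per_frame_posteriors):
    """Fuse motion posteriors for one region across several frame pairs.

    per_frame_posteriors: sequence of length-2 arrays, each the posterior
    [P(foreground), P(background)] estimated from a single frame pair.

    Assuming conditional independence across frames, evidence is combined
    by summing log-probabilities; working in the log domain avoids
    underflow when many frames are accumulated.
    """
    log_p = np.sum(np.log(np.asarray(per_frame_posteriors)), axis=0)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()
```

Even when each individual frame pair is only weakly informative, the fused posterior sharpens as more frames are added, which is the intuition behind the more robust segmentation reported over extended sequences.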

Keywords

Motion Estimation; Markov Random Field; Markov Chain Model; Foreground Object; Motion Segmentation


Copyright information

© Springer-Verlag Berlin Heidelberg 2000

Authors and Affiliations

  • Paul Smith (1)
  • Tom Drummond (1)
  • Roberto Cipolla (1)

  1. Department of Engineering, University of Cambridge, Cambridge, UK
