Joint Random Sample Consensus and Multiple Motion Models for Robust Video Tracking

  • Petter Strandmark
  • Irene Y. H. Gu
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5575)

Abstract

We present a novel method for tracking multiple objects in video captured by a non-stationary camera. For low-quality video, RANSAC estimation fails when the number of good matches shrinks below the minimum required to estimate the motion model. This paper extends RANSAC in the following ways: (a) allowing multiple models of different complexity to be chosen at random; (b) introducing a conditional probability that measures the suitability of each candidate transformation, given the object locations in previous frames; (c) selecting the best transformation based on the number of consensus points, the conditional probability and the model complexity. Our experimental results show that the proposed estimation method handles low-quality video better and that it can track deformable objects through pose changes, occlusion, motion blur and overlap. We also show that using multiple models of increasing complexity is more effective than using RANSAC with the most complex model alone.
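The selection scheme sketched in the abstract, drawing model classes of different complexity at random and scoring each hypothesis by consensus size traded off against complexity, can be illustrated with a minimal Python sketch. This is not the authors' implementation: the model set (translation and affine), the additive complexity penalty, and the inlier threshold are all illustrative assumptions, and the conditional-probability term from previous object locations described in (b) is omitted here.

```python
import random
import numpy as np

def fit_translation(src, dst):
    # Translation needs only one point pair: t = dst - src (averaged).
    return np.eye(2), (dst - src).mean(axis=0)

def fit_affine(src, dst):
    # Least-squares affine fit: dst ~ src @ A.T + t, from >= 3 pairs.
    X = np.hstack([src, np.ones((len(src), 1))])
    sol, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return sol[:2].T, sol[2]

# (name, minimal sample size, fitter, complexity penalty) -- illustrative values.
MODELS = [
    ("translation", 1, fit_translation, 1.0),
    ("affine",      3, fit_affine,      3.0),
]

def ransac_multi_model(src, dst, iters=200, tol=2.0, seed=0):
    """Multi-model RANSAC sketch: each iteration picks a model class at
    random, fits it from a minimal sample, and scores the hypothesis by
    its consensus set minus a complexity penalty."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        name, k, fit, penalty = rng.choice(MODELS)
        idx = rng.sample(range(len(src)), k)
        A, t = fit(src[idx], dst[idx])
        residuals = np.linalg.norm(src @ A.T + t - dst, axis=1)
        inliers = int((residuals < tol).sum())
        score = inliers - penalty   # favour simpler models at equal support
        if best is None or score > best[0]:
            best = (score, name, A, t)
    return best[1], best[2], best[3]
```

With mostly translational point matches, the simpler model wins at equal consensus because it pays a smaller penalty, which is the effect the paper exploits when too few good matches remain to fit the complex model reliably.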

Keywords

Feature Point, Motion Estimation, Transformation Model, Motion Model, Previous Frame


Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Petter Strandmark 1, 2
  • Irene Y. H. Gu 1
  1. Dept. of Signals and Systems, Chalmers Univ. of Technology, Sweden
  2. Centre for Mathematical Sciences, Lund University, Sweden
