Sputnik Tracker: Having a Companion Improves Robustness of the Tracker

  • Lukáš Cerman
  • Jiří Matas
  • Václav Hlaváč
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5575)

Abstract

Tracked objects rarely move alone. They are often temporarily accompanied by other objects undergoing similar motion. We propose a novel tracking algorithm, the Sputnik Tracker, which identifies image regions that move coherently with the tracked object. This information stabilizes tracking in the presence of occlusions or fluctuations in the appearance of the tracked object, without the need to model its dynamics. In addition, the Sputnik Tracker is built on a novel template tracker that integrates foreground and background appearance cues. The time-varying shape of the target is estimated in each video frame together with the target position, and serves as a further cue when estimating the target position in the next frame.
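The core idea of identifying regions that move coherently with the target can be illustrated with a minimal sketch. The paper does not specify this procedure; the function below, its name, and the threshold parameter are assumptions for illustration only. It takes per-region frame-to-frame displacements (as would be obtained from, e.g., optical flow, not shown here) and keeps the regions whose motion is close to the target's:

```python
import math

def find_companions(target_motion, region_motions, max_diff=2.0):
    """Return indices of candidate regions whose frame-to-frame motion
    is close to the target's motion (Euclidean distance in pixels).

    target_motion:  (dx, dy) displacement of the tracked object
    region_motions: list of (dx, dy) displacements, one per region
    max_diff:       similarity threshold in pixels (hypothetical parameter)
    """
    tx, ty = target_motion
    companions = []
    for i, (dx, dy) in enumerate(region_motions):
        if math.hypot(dx - tx, dy - ty) <= max_diff:
            companions.append(i)
    return companions

# Example: the target moves 5 px to the right; regions 0 and 2 move
# similarly and are treated as temporary companions, region 1 is not.
motions = [(5.1, 0.2), (-3.0, 4.0), (4.8, -0.3)]
print(find_companions((5.0, 0.0), motions))  # → [0, 2]
```

During an occlusion of the target, such companion regions can continue to be tracked and used to predict the target position, which is the stabilizing effect the abstract describes.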

Keywords

Machine Intelligence, Object Tracking, Appearance Model, Robust Tracking, Video Summarization

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Lukáš Cerman (1)
  • Jiří Matas (1)
  • Václav Hlaváč (1)

  1. Center for Machine Perception, Faculty of Electrical Engineering, Czech Technical University, Prague 2, Czech Republic