Exploiting Contextual Motion Cues for Visual Object Tracking

  • Stefan Duffner
  • Christophe Garcia
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8926)

Abstract

In this paper, we propose an algorithm for on-line, real-time tracking of arbitrary objects in videos from unconstrained environments. The method is based on a particle filter framework using different visual features and motion prediction models. We effectively integrate a discriminative on-line learning classifier into the model and propose a new method to collect negative training examples for updating the classifier at each video frame. Instead of taking negative examples only from the surroundings of the object region, or from specific distracting objects, our algorithm samples the negatives from a contextual motion density function. We experimentally show that this type of learning improves the overall performance of the tracking algorithm. Finally, we present quantitative and qualitative results on four challenging public datasets that show the robustness of the tracking algorithm with respect to appearance and view changes, lighting variations, partial occlusions as well as object deformations.
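The core idea summarized above is to draw negative training examples for the on-line classifier from a contextual motion density rather than only from the object's immediate surroundings. As a rough illustration, the sketch below approximates such a density with simple frame differencing and samples negative locations proportionally to it, masking out the object region; the function name, the frame-differencing stand-in, and all parameters are hypothetical and are not taken from the paper.

```python
import numpy as np

def sample_negatives_from_motion(prev_frame, frame, obj_box, n_samples=10, rng=None):
    """Sample negative-example locations with probability proportional to a
    contextual motion density, here approximated by absolute frame
    differencing, while excluding the tracked object's own region.

    obj_box: (x, y, w, h) of the current object estimate (pixels).
    Returns an (n_samples, 2) array of (x, y) pixel coordinates.
    """
    rng = np.random.default_rng(rng)
    # Crude motion map: per-pixel absolute temporal difference.
    motion = np.abs(frame.astype(np.float64) - prev_frame.astype(np.float64))
    x, y, w, h = obj_box
    motion[y:y + h, x:x + w] = 0.0  # never sample negatives on the object itself
    total = motion.sum()
    if total == 0:  # static scene: fall back to uniform sampling outside the box
        motion[:] = 1.0
        motion[y:y + h, x:x + w] = 0.0
        total = motion.sum()
    p = (motion / total).ravel()
    idx = rng.choice(motion.size, size=n_samples, p=p)
    ys, xs = np.unravel_index(idx, motion.shape)
    return np.stack([xs, ys], axis=1)
```

In this toy version, negatives concentrate on moving background regions (potential distractors), which is the behaviour the abstract attributes to the contextual motion density; the paper's actual density would come from its motion estimation, not from raw frame differences.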

Keywords

Object tracking · Adaptive particle filter · Motion cues

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Université de Lyon, CNRS, INSA-Lyon, LIRIS, Lyon, France