
Machine Vision and Applications, Volume 26, Issue 2–3, pp 205–217

Visual tracking in complex scenes through pixel-wise tri-modeling

  • Kwang Moo Yi
  • Hawook Jeong
  • Beongju Lee
  • Jin Young Choi
Original Paper

Abstract

In this paper, we propose a pixel-wise visual tracking method based on a novel tri-model representation. The proposed tri-model is composed of three models, each of which learns one of the following online: the target object, the background, and other non-target moving objects. The proposed method performs tracking by simultaneously estimating the holistic position of the target object and the pixel-wise labels. By utilizing the information in the background and foreground models as well as the target model, our method obtains robust results even under background clutter and partial occlusions in complex scenes. Furthermore, our method produces pixel-wise results and uses them in the learning process to prevent drifting. The method is extensively tested against seven representative trackers, both quantitatively and qualitatively, with promising results.
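The paper itself details the tri-model estimation; as a rough illustration of the pixel-wise labeling idea the abstract and keywords describe, the sketch below assigns each pixel to one of three per-pixel Gaussian models (target, background, or non-target foreground) by maximum a posteriori likelihood, with a simple running-average online update. This is a minimal hedged sketch, not the authors' algorithm: the function names, grayscale input, per-pixel Gaussian parameterization, and learning rate are all illustrative assumptions.

```python
import numpy as np

def tri_model_label(frame, means, variances, priors):
    """Label each pixel as target (0), background (1), or non-target
    foreground (2) by maximum posterior under three per-pixel Gaussians.

    frame:     (H, W) grayscale intensities
    means:     (3, H, W) per-model, per-pixel Gaussian means
    variances: (3, H, W) per-model, per-pixel Gaussian variances
    priors:    (3,) prior probability of each model
    """
    # Gaussian log-likelihood of the observed intensity under each model
    diff = frame[None] - means                                   # (3, H, W)
    log_lik = -0.5 * (diff**2 / variances + np.log(2 * np.pi * variances))
    log_post = log_lik + np.log(priors)[:, None, None]
    return np.argmax(log_post, axis=0)                           # (H, W) labels

def update_model(mean, var, frame, labels, model_id, lr=0.05):
    """Online running-average update of one model's Gaussian parameters,
    applied only at pixels currently labeled as that model."""
    mask = labels == model_id
    mean[mask] += lr * (frame[mask] - mean[mask])
    var[mask] += lr * ((frame[mask] - mean[mask]) ** 2 - var[mask])
    return mean, var
```

In this simplified view, updating each model only where its own label won is what lets the background and clutter models absorb distracting pixels instead of contaminating the target model, which is the intuition behind using the labels in the learning step to prevent drift.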

Keywords

Visual tracking · Complex scenes · Gaussian model · Pixel-wise labeling · Occlusion

Notes

Acknowledgments

This research was sponsored by the SNU Brain Korea 21 Plus Information Technology program.

Supplementary material

Supplementary material 1 (mp4 56598 KB)


Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Kwang Moo Yi (1)
  • Hawook Jeong (2)
  • Beongju Lee (2)
  • Jin Young Choi (2)
  1. Computer Vision Lab, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
  2. Department of Electrical and Computer Engineering (College of Engineering), ASRI, Seoul National University, Seoul, Korea
