
Visual tracking in complex scenes through pixel-wise tri-modeling

  • Original Paper

Machine Vision and Applications

An Erratum to this article was published on 29 July 2015

Abstract

In this paper, we propose a pixel-wise visual tracking method based on a novel tri-model representation. The tri-model comprises three models, each learned online: one for the target object, one for the background, and one for other non-target moving objects. The proposed method performs tracking by simultaneously estimating the holistic position of the target object and its pixel-wise labels. By exploiting the information in the background and foreground models as well as the target model, our method remains robust to background clutter and partial occlusion in complex scenes. Furthermore, the method produces pixel-wise results and feeds them back into the learning process to prevent drift. The method is extensively tested against seven representative trackers, both quantitatively and qualitatively, showing promising results.
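As a rough illustration of the pixel-wise idea described above (not the paper's actual formulation), the sketch below maintains three colour-histogram likelihood models — target, background, and other moving foreground — assigns each pixel the label with the highest likelihood, and updates the histograms online from the resulting labelling. The bin count, learning rate, and all names here are assumptions made for illustration only.

```python
import numpy as np

# Illustrative tri-model pixel labelling: three colour histograms act as
# likelihood models for target (T), background (B) and other moving
# foreground (F). Constants are assumed values, not the paper's.
N_BINS = 16          # bins per colour channel (assumed)
LEARN_RATE = 0.05    # online update rate (assumed)


def quantize(pixels):
    """Map uint8 RGB pixels of shape (N, 3) to flat histogram-bin indices."""
    bins = (pixels.astype(np.int64) * N_BINS) // 256
    return bins[:, 0] * N_BINS * N_BINS + bins[:, 1] * N_BINS + bins[:, 2]


class TriModel:
    def __init__(self):
        # Uniform priors so unseen colours are equally likely under all models.
        size = N_BINS ** 3
        self.hist = {k: np.full(size, 1.0 / size) for k in "TBF"}

    def label(self, pixels):
        """Per-pixel maximum-likelihood label over the three models."""
        idx = quantize(pixels)
        scores = np.stack([self.hist[k][idx] for k in "TBF"])
        return np.array(list("TBF"))[scores.argmax(axis=0)]

    def update(self, pixels, labels):
        """Online histogram update from the current pixel labelling."""
        idx = quantize(pixels)
        for k in "TBF":
            sel = idx[labels == k]
            if sel.size == 0:
                continue  # no evidence for this model in the current frame
            new = np.bincount(sel, minlength=N_BINS ** 3).astype(float)
            new /= new.sum()
            self.hist[k] = (1 - LEARN_RATE) * self.hist[k] + LEARN_RATE * new
```

In this toy version, feeding the labels back into `update` is what the abstract calls using pixel-wise results in the learning process; the actual method additionally couples this with a holistic position estimate.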


Notes

  1. http://opencv.org/downloads.html.

  2. EC Funded CAVIAR project/IST 2001 37540, found at URL: http://homepages.inf.ed.ac.uk/rbf/CAVIAR/.



Acknowledgments

This research was sponsored by the SNU Brain Korea 21 Plus Information Technology program.

Author information


Corresponding author

Correspondence to Jin Young Choi.

Electronic supplementary material

Supplementary material 1 (mp4 56598 KB)


About this article


Cite this article

Yi, K.M., Jeong, H., Lee, B. et al. Visual tracking in complex scenes through pixel-wise tri-modeling. Machine Vision and Applications 26, 205–217 (2015). https://doi.org/10.1007/s00138-015-0658-1

