Machine Vision and Applications, Volume 28, Issue 3–4, pp 327–339

Multiscale salient region-based visual tracking

  • Sihua Yi
  • Wenyu Liu
Original Paper

Abstract

This paper proposes a novel visual model to detect the salient regions of a target in complex tracking scenarios. The main idea is to generate an overcomplete set of local image patches that describe the target's multiscale regions, and to select the most important and reliable ones. The importance of each patch is evaluated by its stability and discrimination in the local feature space, while its reliability is measured by the contrast between the target and its surrounding background in the global feature space. By combining importance and reliability, salient regions are selected from the patch set to represent the target. Experimental results on benchmark video sequences show that the proposed visual model effectively improves tracking performance.
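
To make the selection pipeline concrete, below is a minimal sketch of how an overcomplete multiscale patch set could be scored and pruned. This is not the paper's method: the function names, the scoring proxies (histogram entropy for importance, chi-square background contrast for reliability), and the parameters (the weight alpha, the top-k cutoff) are all illustrative assumptions standing in for the paper's actual definitions, which are not given on this page.

```python
# Illustrative sketch of multiscale salient-region selection.
# All scoring formulas and parameters below are assumptions, not the
# paper's definitions.
import numpy as np

def multiscale_patches(box, scales=(0.25, 0.5, 0.75), stride=0.125):
    """Enumerate an overcomplete set of (x, y, w, h) patches inside `box`.

    `box` is (x, y, w, h) in pixels; scales and stride are fractions
    of the box size, so patches of several sizes overlap densely.
    """
    x0, y0, W, H = box
    patches = []
    for s in scales:
        pw, ph = int(s * W), int(s * H)
        sx, sy = max(1, int(stride * W)), max(1, int(stride * H))
        for y in range(y0, y0 + H - ph + 1, sy):
            for x in range(x0, x0 + W - pw + 1, sx):
                patches.append((x, y, pw, ph))
    return patches

def gray_hist(img, p, bins=16):
    """Normalized grayscale histogram of patch `p` in image `img`."""
    x, y, w, h = p
    crop = img[y:y + h, x:x + w]
    hist, _ = np.histogram(crop, bins=bins, range=(0, 256))
    return hist / max(1, hist.sum())

def importance(img, p):
    """Discrimination proxy (assumed): histogram entropy of the patch;
    textured, distinctive patches score higher than flat ones."""
    h = gray_hist(img, p) + 1e-9
    return -np.sum(h * np.log(h))

def reliability(img, p, margin=8):
    """Contrast proxy (assumed): chi-square distance between the patch
    histogram and the histogram of a surrounding background ring."""
    x, y, w, h = p
    ring = (max(0, x - margin), max(0, y - margin),
            w + 2 * margin, h + 2 * margin)
    hp, hb = gray_hist(img, p), gray_hist(img, ring)
    return 0.5 * np.sum((hp - hb) ** 2 / (hp + hb + 1e-9))

def select_salient(img, box, k=8, alpha=0.5):
    """Score every patch by a convex combination of importance and
    reliability, then keep the top-k as the salient regions."""
    patches = multiscale_patches(box)
    scores = [alpha * importance(img, p) + (1 - alpha) * reliability(img, p)
              for p in patches]
    order = np.argsort(scores)[::-1]
    return [patches[i] for i in order[:k]]
```

In a tracker, a routine like select_salient(frame, box) would be re-run as the target box is updated, so that the retained regions adapt to appearance changes; the paper's own importance and reliability measures would replace the two proxies above.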

Keywords

Visual tracking · Visually salient region · Multiscale · Overcomplete

Acknowledgements

We thank the anonymous reviewers for their very useful comments and suggestions. This work was supported in part by the National Natural Science Foundation of China under Grant 61572207.

Copyright information

© Springer-Verlag Berlin Heidelberg 2017

Authors and Affiliations

  1. College of Electronics and Information Engineering, Huazhong University of Science and Technology, Wuhan, People's Republic of China
  2. School of Information and Safety Engineering, Zhongnan University of Economics and Law, Wuhan, People's Republic of China
