
Robust particle tracking via spatio-temporal context learning and multi-task joint local sparse representation

  • Xizhe Xue
  • Ying Li

Abstract

Particle filters have proven very successful for non-linear and non-Gaussian estimation problems and are extensively used in object tracking. However, high computational cost and the particle degeneracy problem limit their practical application. In this paper, we present a robust particle tracking approach based on spatio-temporal context learning and multi-task joint local sparse representation. The proposed tracker samples particles according to a confidence map constructed from the spatio-temporal context of the target. This sampling strategy alleviates sample impoverishment and particle degeneracy and better approximates the target state distribution, yielding robust tracking performance. To locate the target more accurately and to reduce sensitivity to occlusion, a local sparse appearance model is adopted to capture the local and structural information of the target. Finally, multi-task learning, in which the representations of the particles are learned jointly, is employed to further improve tracking performance and reduce overall computational complexity. Both qualitative and quantitative evaluations on challenging benchmark image sequences demonstrate that the proposed algorithm performs favorably against several state-of-the-art methods.
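To make the sampling strategy above concrete, the following is a minimal illustrative sketch, not the authors' implementation: a Gaussian bump stands in for the learned spatio-temporal context confidence map, and particles are drawn with probability proportional to that map, so samples concentrate where the target is likely and degeneracy is reduced. The function names and the Gaussian form of the map are assumptions for illustration only.

```python
import numpy as np

def confidence_map(shape, center, sigma=8.0):
    """Toy confidence map: high near the predicted target centre,
    decaying with distance (a stand-in for the learned
    spatio-temporal context model). Returns a normalized map."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (xs - center[0]) ** 2 + (ys - center[1]) ** 2
    m = np.exp(-d2 / (2.0 * sigma ** 2))
    return m / m.sum()

def sample_particles(conf, n_particles, rng):
    """Draw particle positions with probability proportional to the
    confidence map, concentrating samples in likely target regions."""
    h, w = conf.shape
    idx = rng.choice(h * w, size=n_particles, p=conf.ravel())
    # Row-major flat index: idx = y * w + x.
    return np.stack([idx % w, idx // w], axis=1)  # (x, y) pairs

rng = np.random.default_rng(0)
conf = confidence_map((64, 64), center=(40, 20))
particles = sample_particles(conf, 200, rng)
# Particles cluster around the predicted centre (40, 20).
mean_xy = particles.mean(axis=0)
```

In a full tracker each sampled particle would then be scored by the local sparse appearance model, with all particle representations learned jointly in the multi-task formulation; the sketch covers only the context-guided sampling step.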

Keywords

Visual tracking · Local sparse representation · Spatio-temporal context · Multi-task learning

Notes

Acknowledgments

This work was supported by the National Key Research and Development Program of China (2016YFB0502502), the National Natural Science Foundation of China (61871460, 61876152) and the Foundation Project for Advanced Research Field of China (614023804016HK03002). The authors would like to thank the editors and the anonymous referees for their constructive comments, which have been very helpful in revising this paper. We would also like to thank Prof. Jonathan C-W for his assistance with the English writing and Mr. Bin Lin for his thoughtful suggestions on the experiments.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. School of Computer Science and Engineering, Shaanxi Provincial Key Laboratory of Speech and Image Information Processing, Northwestern Polytechnical University, Xi’an, China
