
Multimedia Tools and Applications, Volume 77, Issue 23, pp 30969–30991

Robust object tracking using a sparse coadjutant observation model

  • Jianwei Zhao
  • Weidong Zhang
  • Feilong Cao

Abstract

This paper builds on a classical visual tracker, the discriminative sparse similarity (DSS) tracker, which obtains a DSS map through a Laplacian multi-task reverse sparse representation. We introduce a sparse generative model (SGM) into the DSS tracker to handle appearance variation. By combining the DSS map with the SGM, the proposed method can track the object effectively under occlusion and appearance variations. Extensive experiments on challenging videos from a tracking benchmark show that the proposed tracker performs favorably against several state-of-the-art trackers.
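
To make the idea of a sparse coadjutant observation model concrete, the following is a minimal sketch, assuming a simple multiplicative fusion of a discriminative score (from a DSS-style reverse sparse representation, in which the candidate patches serve as the dictionary for coding positive and negative templates) and a generative similarity score from an SGM-style appearance model. The function names, the use of scikit-learn's Lasso as the sparse solver, and the placeholder generative likelihood are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from sklearn.linear_model import Lasso

def reverse_sparse_codes(candidates, templates, alpha=0.01):
    """Code each template as a sparse combination of the candidate patches
    ("reverse" sparse representation: the candidates act as the dictionary).
    candidates: d x n_candidates, templates: d x n_templates (column features)."""
    lasso = Lasso(alpha=alpha, positive=True, fit_intercept=False, max_iter=2000)
    lasso.fit(candidates, templates)      # each template column ~ candidates @ coef_[j]
    return np.atleast_2d(lasso.coef_)     # shape: n_templates x n_candidates

def discriminative_score(c_pos, c_neg):
    """DSS-style score: total support a candidate receives from the positive
    templates minus the support it receives from the negative templates."""
    return c_pos.sum(axis=0) - c_neg.sum(axis=0)

def generative_score(candidates, appearance_model):
    """Placeholder SGM-style likelihood: cosine similarity between each
    candidate feature and a sparse-coded appearance histogram (assumed form)."""
    sims = candidates.T @ appearance_model
    norms = np.linalg.norm(candidates, axis=0) * np.linalg.norm(appearance_model)
    return sims / (norms + 1e-12)

def select_candidate(candidates, pos_templates, neg_templates, appearance_model):
    """Fuse both cues multiplicatively and return the index of the best candidate."""
    c_pos = reverse_sparse_codes(candidates, pos_templates)
    c_neg = reverse_sparse_codes(candidates, neg_templates)
    confidence = discriminative_score(c_pos, c_neg) * generative_score(candidates, appearance_model)
    return int(np.argmax(confidence))
```

In this sketch the discriminative cue separates the target from the background, while the generative cue measures how well a candidate matches the learned appearance; fusing the two is one common way such collaborative observation models are combined in sparse-representation trackers.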

Keywords

Object tracking · Sparse representation · Observation model · Discriminative score model · Generative model

Notes

Acknowledgments

This work was funded by the National Natural Science Foundation of China (61571410 and 61672477) and the Zhejiang Provincial Natural Science Foundation of China (LY18F020018).

Compliance with Ethical Standards

Conflict of interest

The authors declare that they have no conflict of interest.

Research involving Human Participants and/or Animals

This study did not involve human participants or animals.

Informed Consent

All authors of this paper have consented to the submission.


Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Information Sciences and Mathematics, China Jiliang University, Hangzhou, People’s Republic of China
