On Combining Compressed Sensing and Sparse Representations for Object Tracking

  • Hang Sun
  • Jing Li
  • Bo Du
  • Dacheng Tao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9916)

Abstract

Compressed sensing tracking algorithms exploit the target's background information but lack a feedback mechanism for verifying their results, whereas ℓ1 sparse tracking algorithms adapt to changes in the target's appearance at the cost of discarding that background information. To improve effectiveness and robustness against distractions such as occlusion and illumination variation, this paper proposes a tracking framework in which the ℓ1 sparse representation serves as the detector and the compressed sensing algorithm as the tracker, and establishes a complementary classifier model. A second-order model updating strategy is further proposed to preserve the most representative templates in the ℓ1 sparse representation. Experiments show that the proposed algorithm outperforms eight prevalent trackers, with precision plot scores of 77.15 %, 72.33 % and 81.13 % and success plot scores of 77.67 %, 74.01 % and 81.51 % for the overall, occlusion and illumination-variation cases, respectively.
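To make the complementary detector/tracker idea concrete, the sketch below outlines one possible per-frame decision loop consistent with the abstract: a compressed-sensing classifier ranks candidate windows, and an ℓ1 reconstruction residual over a template dictionary acts as the detector that vetoes low-confidence results. This is a minimal illustration, not the authors' implementation; the function names (cs_tracker_score, l1_residual, track_frame), the feature dimensions, the ISTA solver and the residual threshold are all assumptions introduced here.

```python
# Minimal sketch of a complementary CS-tracker / l1-detector loop.
# All names, thresholds, and the ISTA solver are illustrative assumptions,
# not the method described in the paper.
import numpy as np

def cs_tracker_score(candidate_feat, classifier_w):
    """Linear score of a compressed feature vector (placeholder classifier)."""
    return float(classifier_w @ candidate_feat)

def l1_residual(candidate_feat, templates, lam=0.01, iters=50):
    """Reconstruction residual of the candidate under an l1-regularized code
    over the template dictionary, using simple ISTA iterations as a stand-in."""
    T = templates                 # (d, k) dictionary of target templates
    x = np.zeros(T.shape[1])      # sparse code
    step = 1.0 / (np.linalg.norm(T, 2) ** 2 + 1e-8)
    for _ in range(iters):
        z = x - step * (T.T @ (T @ x - candidate_feat))
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return float(np.linalg.norm(candidate_feat - T @ x))

def track_frame(candidates, classifier_w, templates, resid_thresh=0.4):
    """Complementary decision: the CS classifier ranks candidates; the l1
    detector vetoes a poorly reconstructed winner and picks the candidate
    best explained by the template dictionary instead."""
    scores = [cs_tracker_score(c, classifier_w) for c in candidates]
    best = int(np.argmax(scores))
    if l1_residual(candidates[best], templates) > resid_thresh:
        best = int(np.argmin([l1_residual(c, templates) for c in candidates]))
    return best
```

In such a scheme the residual check plays the role of the feedback mechanism that pure compressed sensing tracking lacks, while the template dictionary would be refreshed over time (e.g., by the paper's second-order updating strategy) so that the detector keeps only the most representative templates.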

Keywords

Target tracking · Sparse representations · Compressed sensing · Classifier · Updating strategy


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  1. Computer School, Wuhan University, Wuhan, China
  2. Faculty of Engineering and Information Technology, University of Technology Sydney, Sydney, Australia