
Enhanced Laplacian Group Sparse Learning with Lifespan Outlier Rejection for Visual Tracking

  • Behzad Bozorgtabar
  • Roland Goecke
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9007)

Abstract

Sparsity-based learning methods have recently attracted much attention in robust visual tracking owing to their effectiveness and promising tracking results. In this paper, we introduce a new particle filter based tracking method that represents the target object sparsely using only a few adaptive dictionary templates. To improve the tracking results, we capture the underlying structure among the particle samples with the proposed similarity graph in a Laplacian group sparse framework. Furthermore, in our tracker, particles contribute to the tracking result with different probabilities according to their positions in a given frame relative to the current target object location. In addition, since the new target object can be well modelled by the most recent tracking results, we prefer particle samples that are highly associated with the preceding tracking results. We demonstrate that the proposed formulation can be solved efficiently using the Accelerated Proximal method in just a small number of iterations. The proposed approach has been evaluated extensively on 12 challenging video sequences. Experimental results demonstrate the merits of the proposed tracker compared with state-of-the-art methods.
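The optimisation described above can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: it assumes a standard Laplacian group sparse objective of the form min_C ||X − DC||²_F + γ·tr(C L Cᵀ) + λ·||C||₂,₁, where X stacks the particle observations, D is the adaptive template dictionary, and L is the graph Laplacian of an assumed particle similarity graph. The function names, parameter choices, and stopping rule are all illustrative. The smooth part is handled by gradient steps and the ℓ2,1 group penalty by row-wise soft-thresholding, accelerated in the FISTA style that the Accelerated Proximal method refers to.

```python
import numpy as np

def prox_l21(C, tau):
    """Proximal operator of tau * ||C||_{2,1}: shrink each row as a group."""
    row_norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return C * scale

def laplacian_group_sparse(D, X, L, lam=0.1, gamma=0.1, n_iter=100):
    """Accelerated proximal gradient (FISTA-style) for
       min_C ||X - D C||_F^2 + gamma * tr(C L C^T) + lam * ||C||_{2,1}.
    D: (d, n_atoms) dictionary, X: (d, n_particles) observations,
    L: (n_particles, n_particles) graph Laplacian over particles."""
    C = np.zeros((D.shape[1], X.shape[1]))
    Y, t = C.copy(), 1.0
    # Step size from a Lipschitz bound on the smooth part's gradient.
    lip = 2.0 * (np.linalg.norm(D, 2) ** 2 + gamma * np.linalg.norm(L, 2))
    step = 1.0 / lip
    for _ in range(n_iter):
        # Gradient of ||X - DY||_F^2 + gamma * tr(Y L Y^T)  (L symmetric).
        grad = 2.0 * D.T @ (D @ Y - X) + 2.0 * gamma * Y @ L
        C_next = prox_l21(Y - step * grad, step * lam)
        # Momentum update (the "acceleration" in the method's name).
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = C_next + ((t - 1.0) / t_next) * (C_next - C)
        C, t = C_next, t_next
    return C
```

Because the penalty shrinks whole rows of C, all particles share a common small set of active templates while the Laplacian term keeps similar particles' coefficients close, which is the structural coupling the abstract describes.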

Keywords

Target Object · Particle Filter · Sparse Representation · Current Frame · Visual Tracking

Supplementary material

Supplementary material (mp4 20,153 KB)

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Vision and Sensing, HCC Lab, ESTeM, University of Canberra, Canberra, Australia
  2. IHCC, RSCS, CECS, Australian National University, Canberra, Australia