
Robust visual tracking using information theoretical learning

  • Weifu Ding
  • Jiangshe Zhang

Abstract

This paper presents a novel online object tracking algorithm that uses sparse representation to learn effective appearance models within a particle filtering framework. Compared with the state-of-the-art ℓ1 sparse tracker, which simply assumes that image pixels are corrupted by independent Gaussian noise, our proposed method is based on information theoretical learning and is much less sensitive to corruption: it assigns small weights to occluded pixels and outliers. The most appealing aspect of this approach is that it yields robust estimates without the trivial templates adopted by the previous sparse tracker. At each iteration, a sparse representation of the target candidate is learned by solving a weighted linear least-squares problem with non-negativity constraints; to further improve tracking performance, the target templates are dynamically updated to capture appearance changes. In our template update mechanism, the similarity between the templates and the target candidates is measured by the earth mover's distance (EMD). Using the largest open benchmark for visual tracking, we empirically compare two ensemble methods constructed from six state-of-the-art trackers against the individual trackers. The proposed tracking algorithm runs in real time and, on challenging sequences, performs favorably against state-of-the-art algorithms in terms of efficiency, accuracy, and robustness.
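The weighted non-negative least-squares step described above can be sketched as an iteratively reweighted solver: large per-pixel residuals (occlusions, outliers) receive exponentially small weights through a Gaussian kernel, as in correntropy-based estimation. The snippet below is a minimal illustration under stated assumptions, not the paper's exact algorithm; the kernel width `sigma` and the fixed iteration count are choices made for the example.

```python
import numpy as np
from scipy.optimize import nnls

def correntropy_weighted_nnls(T, y, sigma=0.3, n_iters=8):
    """Code a candidate y over template columns of T with non-negative
    coefficients, down-weighting pixels with large residuals.

    T : (d, k) template matrix, y : (d,) observed candidate vector.
    Returns the coefficients c and the final per-pixel weights w.
    """
    w = np.ones(len(y))                        # start from uniform weights
    c = np.zeros(T.shape[1])
    for _ in range(n_iters):
        sw = np.sqrt(w)                        # reweight rows of the LS system
        c, _ = nnls(T * sw[:, None], y * sw)   # weighted NNLS step
        r = y - T @ c                          # per-pixel residuals
        w = np.exp(-r**2 / (2 * sigma**2))     # Gaussian-kernel weights
    return c, w
```

With a candidate whose pixels match the templates except for one occluded pixel, the solver recovers coefficients close to the clean solution while the corrupted pixel's weight collapses toward zero.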

Keywords

Robust visual object tracking · Information theoretical learning · Adaptive appearance model · Particle filtering · Occlusion and outliers

Mathematics Subject Classification (2010)

68T45 


Notes

Acknowledgments

This work was supported by the National Basic Research Program of China (973 Program) under Grant no. 2013CB329404, the Major Research Project of the National Natural Science Foundation of China under Grant no. 91230101, the National Natural Science Foundation of China under Grant nos. 61075006 and 11201367, the Key Project of the National Natural Science Foundation of China under Grant no. 11131006, the Research Fund for the Doctoral Program of Higher Education of China under Grant no. 20100201120048, and the Natural Science Fund of Ningxia, China under Grant no. NZ12209.


Copyright information

© Springer International Publishing Switzerland 2017

Authors and Affiliations

  1. School of Mathematics and Information, BeiFang University of Nationalities, Yinchuan, China
  2. School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, China
